Arm, the company behind many of today's processor architectures and other core technologies, has announced a new architecture called Armv8.1-M, targeted at mobile machine learning and the internet of things. The new architecture's potential is massive, though just how much of it is realized ultimately lies with the manufacturers who build products on it.
To put numbers to that potential, Arm claims that the smallest classes of devices using the upcoming architecture can see machine learning performance increase by up to 15 times, while signal processing can run up to 5 times faster. Part of the magic is Arm Helium technology, a new vector extension for Armv8.1-M that makes signal-processing and machine learning instructions simpler and more efficient. As a side effect, this also means that programs will be easier to optimize for Armv8.1-M devices.
For smaller devices, a separate piece of hardware called a digital signal processor (DSP) is normally needed to help the main processor along. Armv8.1-M eliminates the need for one, folding those functions into a single integrated core. This gives the design teams for these products more possibilities, and allows the products to be far more capable than previously thought possible.
One of the biggest benefits of this new system and all of its functional consolidation is ease of use. With all of the core IoT functions consolidated into a single core, developers need only a single, unified toolchain to work on everything a device based on Armv8.1-M can do.
Armv8.1-M and Helium models and toolchains are already available for developers and device manufacturers. Arm estimates that new devices with the architecture will arrive at some point within the next two years, which means it will be roughly two to three years before we begin seeing the technology and its fringe benefits hit consumer devices. While that seems like a long wait, there is a silver lining: a lot can happen in two years, and there's plenty of room for the IoT and integrated AI spaces to grow.
Machine learning on small devices, or even on consumer-facing devices, has always been somewhat of a challenge. The Qualcomm Snapdragon 835 was arguably the first mainstream chipset to bring the functionality to consumer smartphones, and things seem to have simply snowballed from there.
Onboard machine learning allows for all sorts of enhanced AI functionality, such as real-time translation, image recognition, and text transcription, all without having to connect to the cloud or use server farms for neural networking.
With all of this functionality contained on-device, the potential is already colossal, and with that functionality downsized to fit the IoT field, it goes double. With Armv8.1-M, even the smallest integrated devices will be able to achieve some semblance of onboard machine learning and AI smarts.
What this means for the average consumer is that just about every piece of their future smart home will be able to learn, doing its job better and offering the consumer more functionality and more refined usage as time goes by. This includes things like light bulbs, smart speakers, appliances, game consoles, and more.
The catch with all of this is that these advancements will only hit devices that run on an Arm architecture processor, or have an Arm-based subprocessor somewhere in the mix. This means that many common home computing devices will only feel the effects of the Armv8.1-M rollout by proxy. x86-based laptops and game consoles, such as the Lenovo ThinkPad X1 Carbon or the Microsoft Xbox One, are unlikely to see much change in functionality at first.
There are obvious privacy and security concerns with this new rollout, as there have always been in the IoT field. With such number-crunching AI power behind even the tiniest devices, a smart home that ends up compromised and turned into a botnet could, for example, suddenly become a neural network for a hacker to use as they please. These are natural risks that come with the territory, and only time will tell whether companies choose to work around them, ignore them, or meet them head-on in the product planning stages.