AI is Coming to You: The Neural Edge
In 2019 we are beginning to see browser-based artificial intelligence, powered by libraries such as TensorFlow.js, gain real traction. This is part of a larger movement that will continue as AI solutions become less dependent on the backend-heavy, distributed, cloud-based infrastructure originally developed for big data applications.
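To make the browser-based trend concrete, here is a minimal sketch of in-browser image classification using TensorFlow.js and the pre-trained MobileNet model, both loaded from a CDN (the `photo` element and `cat.jpg` image are hypothetical placeholders for illustration):

```html
<!-- Load TensorFlow.js and the pre-trained MobileNet model from a CDN -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/mobilenet"></script>

<!-- A placeholder image to classify -->
<img id="photo" src="cat.jpg" />

<script>
  // Classification runs entirely in the browser: no image data leaves the device.
  mobilenet.load()
    .then((model) => model.classify(document.getElementById('photo')))
    .then((predictions) => {
      // predictions is an array of { className, probability } objects
      console.log(predictions);
    });
</script>
```

All inference happens on the user's machine; no pixels are ever sent to a server.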
In addition to browser-based AI, we'll likely see an explosion of new applications for AI chips with a small physical and power footprint. Apple's Neural Engine and Google's Edge Tensor Processing Unit (TPU) are two popular examples. These chips are designed for rapid machine learning and deep learning inference on a mobile device or at the "edge". Apple reportedly uses the Neural Engine for Face ID and for detecting facial expressions for Animoji, while Google's dev board is reminiscent of a Raspberry Pi for AI. Google's first use case for the Edge TPU is a sensor-based system called SmartPark, which helps manage parking capacity in urban centers.
This technology is clearly still in its initial stages, and designers and engineers have yet to fully wrap their heads around the capabilities of the "neural edge". But the explosive growth of connected devices and the demand for privacy and confidentiality will provide fertile ground, especially where power, latency, and bandwidth are constrained.
So what's next? When people talk about "AI eating the world" they often imagine the physical world, forgetting that AI currently has a diet consisting mostly of software. Why? Because integration with software is easier: anything already in the digital realm is readily available to be ingested by an insatiable AI. For AI to interact with the physical world, it needs sensors, and it needs to be everywhere. The neural edge allows AI, for the first time, to scale efficiently into the physical world through cheap hardware with minimal power and connectivity requirements.
When most or all of the processing (e.g. image recognition) happens on a neural edge device, the need for a complex cloud-based backend shrinks dramatically. This could significantly accelerate deployment, making it easier to prototype, innovate, and scale out the final solution.
What will the future look like?
In the future, we can expect a growing number of AI-capable edge devices. The current retail environment is ripe for an upgrade: retailers need to provide a better shopping experience in physical stores if they want to compete with fast, convenient e-commerce providers.
A great example is Amazon's grab-and-go stores, which eliminate checkout lines (undeniably one of the worst parts of shopping). For this to work, the store needs to be equipped with sensors that recognize which products consumers are putting into their carts. The experience can be further enhanced by pointing shoppers to what they are looking for, inferring their interests from previous shopping trips.
For fashion and makeup shopping, this might mean consumers can try things on virtually using a digital "smart mirror", avoiding the queue for the change rooms (another of the worst parts of shopping for clothing) or the mess of trying on lipstick in front of a store mirror.
These are only a few examples where the neural edge has advantages over cloud-based solutions thanks to its small form factor, rapid response, and reduced privacy concerns: images can be processed on the device and discarded immediately, without ever being sent to the cloud.
Want to learn more?
Edge TPUs can be used for a growing number of industrial use cases, such as predictive maintenance, anomaly detection, machine vision, robotics, and voice recognition. They also apply across manufacturing, on-premises deployments, healthcare, retail, smart spaces, transportation, and more. Learn more about how we're approaching AI at Rangle here.