Machine learning may be most useful in tiny, embedded, offline processors


The tiny embedded processors in smart gadgets — including much of the Internet of Shit — can do a lot of sensing without exhausting their batteries, because sensing draws very little power.


But it's when battery-powered devices start communicating that their batteries drain. Communicating — running a screen or a radio — consumes vastly more power than merely sensing.

Enter machine-learning models: while training a model is computationally expensive and cumbersome, the trained models themselves are tiny and consulting them is computationally cheap. Pete Warden (previously) says that this makes remote, mostly offline gadgets the perfect match for machine learning: these gadgets can sense and process their environments without turning on a radio or screen, saving their power for useful things like alerting a human or running an actuator.
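
To make that concrete, here is a minimal sketch (not Warden's code; the sensor read, the radio call, the weights, and the threshold are all invented placeholders) of the pattern: keep sensing and scoring locally with a tiny quantized model, and only wake the radio when the score crosses a threshold. Consulting the model is a handful of 8-bit multiply-adds; firing up the radio is the part worth avoiding.

```c
/* Sketch of the pattern above: sense all the time, run a tiny quantized
 * model on the microcontroller, and only wake the radio (the expensive
 * part) when the model says something interesting happened.
 * The sensor read, the radio call, and the weights are placeholders. */
#include <stdint.h>
#include <stdio.h>

#define N_FEATURES 8

/* A few dozen bytes of 8-bit weights: the whole "model" fits in flash. */
static const int8_t  weights[N_FEATURES] = { 3, -7, 12, 5, -2, 9, -4, 6 };
static const int32_t bias      = -40;
static const int32_t threshold = 100;

/* Hypothetical stand-ins for real hardware drivers. */
static void read_sensor_window(int8_t features[N_FEATURES]) {
    for (int i = 0; i < N_FEATURES; i++) features[i] = 0;  /* fake samples */
}
static void radio_send_alert(int32_t score) {
    printf("waking radio: alert, score=%ld\n", (long)score);
}

/* Consulting the model: one pass of 8-bit multiply-accumulates. */
static int32_t score_window(const int8_t features[N_FEATURES]) {
    int32_t acc = bias;
    for (int i = 0; i < N_FEATURES; i++)
        acc += (int32_t)weights[i] * (int32_t)features[i];
    return acc;
}

int main(void) {
    int8_t features[N_FEATURES];
    read_sensor_window(features);            /* sensing: cheap, do it constantly   */
    int32_t score = score_window(features);  /* inference: a handful of mul-adds   */
    if (score > threshold)                   /* communication: costly, do it rarely */
        radio_send_alert(score);
    return 0;
}
```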


In the last few years it’s suddenly become possible to take noisy signals like images, audio, or accelerometers and extract meaning from them, by using neural networks. Because we can run these networks on microcontrollers, and sensors themselves use little power, it becomes possible to interpret much more of the sensor data we’re currently ignoring. For example, I want to see almost every device have a simple voice interface. By understanding a small vocabulary, and maybe using an image sensor to do gaze detection, we should be able to control almost anything in our environment without needing to reach it to press a button or use a phone app. I want to see a voice interface component that’s less than fifty cents that runs on a coin battery for a year, and I believe it’s very possible with the technology we have right now.

As another example, I’d love to have a tiny battery-powered image sensor that I could program to look out for things like particular crop pests or weeds, and send an alert when one was spotted. These could be scattered around fields and guide interventions like weeding or pesticides in a much more environmentally friendly way.

One of the industrial examples that stuck with me was a factory operator’s description of “Hans”. He’s a long-time engineer that every morning walks along the row of machines, places a hand on each of them, listens, and then tells the foreman which will have to be taken offline for servicing, all based on experience and intuition. Every factory has one, but many are starting to face retirement. If you could stick a battery-powered accelerometer and microphone to every machine (a “Cyber-Hans”) that would learn usual operation and signal if there was an anomaly, you might be able to catch issues before they became real problems.

Why the Future of Machine Learning is Tiny [Pete Warden]
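
Warden's "Cyber-Hans" is, at bottom, on-device anomaly detection: learn what usual operation looks like, then speak up only when something deviates. Here is one rough, hypothetical way that could work (a running mean and variance of a cheap vibration feature, with everything hardware-specific stubbed out); it is a sketch of the idea, not anything from Warden's post.

```c
/* A rough "Cyber-Hans" sketch: during a training period, learn the mean
 * and spread of a simple vibration feature for a healthy machine, then
 * flag any window that drifts too far from that baseline.
 * The sensor read is a placeholder; a real device would sleep between windows. */
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>
#include <math.h>

#define WINDOW 256

/* Stand-in for reading one window of accelerometer samples. */
static void read_accel_window(int16_t samples[WINDOW]) {
    for (int i = 0; i < WINDOW; i++) samples[i] = 0;  /* fake data */
}

/* One cheap feature per window: mean absolute vibration amplitude. */
static float window_feature(const int16_t samples[WINDOW]) {
    int32_t sum = 0;
    for (int i = 0; i < WINDOW; i++) sum += abs(samples[i]);
    return (float)sum / WINDOW;
}

/* Running mean and variance (Welford's update): learns "usual operation"
 * without ever storing the raw data. */
typedef struct { float mean, m2; uint32_t n; } baseline_t;

static void baseline_update(baseline_t *b, float x) {
    b->n++;
    float d = x - b->mean;
    b->mean += d / b->n;
    b->m2 += d * (x - b->mean);
}

static int is_anomaly(const baseline_t *b, float x, float k) {
    float sd = sqrtf(b->m2 / (b->n > 1 ? (float)(b->n - 1) : 1.0f));
    return fabsf(x - b->mean) > k * sd + 1e-6f;  /* small floor avoids false alarms on a flat baseline */
}

int main(void) {
    baseline_t b = {0.0f, 0.0f, 0};
    int16_t samples[WINDOW];

    /* Training: watch the machine while a human confirms it is running normally. */
    for (int w = 0; w < 1000; w++) {
        read_accel_window(samples);
        baseline_update(&b, window_feature(samples));
    }

    /* Monitoring: only signal when the feature drifts well outside the baseline. */
    read_accel_window(samples);
    if (is_anomaly(&b, window_feature(samples), 4.0f))
        printf("machine sounds wrong -- flag it for servicing\n");
    return 0;
}
```

The particular statistic doesn't matter much; the point is that both the learning and the checking fit in a few kilobytes of state on a coin-cell microcontroller, so the device only pays for communication when it actually has something to say.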


(via Four Short Links)

(Image: Cryteria, CC-BY)