There is a wide variety of innovative hardware designs driving machine learning to the edge. Key dimensions of these innovations include more efficient use of hardware to accelerate the math operations in CNNs, greater parallelism, and exotic analog techniques that combine computation and storage in memory.
In this talk, we present the design and demonstration of GrAI One, the only edge accelerator that leverages time sparsity in the input stream. GrAI One embodies our pragmatic approach to adding a new dimension of improvement derived from neuromorphics, called NeuronFlow, that is both scalable and practical. Along with GrAI Flow, which gives developers a familiar programming environment based on TensorFlow and Keras, we demonstrate the effectiveness of our architecture with a pair of live demos.
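To make the idea of time sparsity concrete, the sketch below shows one common way it can be exploited: in a typical video stream most pixels are unchanged from frame to frame, so an event-driven accelerator only needs to process the deltas rather than the full dense frame. This is a hypothetical NumPy illustration of the principle; the function `delta_events` and the 0.05 threshold are illustrative and do not describe GrAI One's actual pipeline.

```python
import numpy as np

def delta_events(prev_frame, curr_frame, threshold=0.05):
    """Return (indices, values) of pixels that changed by more than threshold.

    Models time sparsity: only the changed pixels generate work, while a
    conventional dense accelerator would reprocess every pixel every frame.
    """
    delta = curr_frame - prev_frame
    idx = np.flatnonzero(np.abs(delta) > threshold)
    return idx, delta.ravel()[idx]

# Two frames that differ in only a few pixels.
rng = np.random.default_rng(0)
frame0 = rng.random((8, 8)).astype(np.float32)
frame1 = frame0.copy()
frame1[2, 3] += 0.5   # a small moving object touches three pixels
frame1[2, 4] += 0.5
frame1[5, 1] -= 0.4

idx, vals = delta_events(frame0, frame1)
print(len(idx), frame0.size)  # 3 events instead of 64 dense pixels
```

In this toy example the event-driven path touches 3 values instead of 64, and the savings grow with resolution and with how static the scene is.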