The recent breakthroughs in artificial intelligence (AI) are moving processing from
the cloud to edge devices. This shift is enabled by innovations in AI
algorithms based on neural networks (NNs); however, it is challenging for edge hardware to meet all the
demands of AI functions, including data inference and image/voice recognition.
eMemory's analog memory solution addresses this challenge by significantly
reducing system operating power and enabling parallel computation.
eMemory has already developed a new analog memory IP on the 55nm HV platform,
optimized for multiply-accumulate (MAC) computation in Multi-Layer Perceptrons
(MLPs) for next-generation AI chips. eMemory’s analog memory solution
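The MAC operation mentioned above is the arithmetic core of every MLP layer. The sketch below is a hypothetical, purely digital model of that arithmetic (the function names and layer shapes are illustrative, not from eMemory's IP); in the analog implementation, each weight would be a stored cell conductance and the accumulation would occur in parallel on a shared line rather than in a loop.

```python
# Hypothetical sketch of the multiply-accumulate (MAC) operation at the
# core of an MLP layer. Analog in-memory computing performs this sum in
# parallel across memory cells; here we model the same math in software.

def mac(inputs, weights, bias=0.0):
    """One neuron: accumulate input*weight products plus a bias."""
    acc = bias
    for x, w in zip(inputs, weights):
        acc += x * w
    return acc

def mlp_layer(inputs, weight_matrix, biases):
    """One MLP layer: one MAC per output neuron."""
    return [mac(inputs, row, b) for row, b in zip(weight_matrix, biases)]

# Illustrative example: 3 inputs feeding 2 neurons.
x = [1.0, 2.0, 3.0]
W = [[0.1, 0.2, 0.3],   # neuron 0 weights
     [0.4, 0.5, 0.6]]   # neuron 1 weights
b = [0.0, 1.0]
print(mlp_layer(x, W, b))
```

In an analog memory array, the entire `mlp_layer` call collapses into a single parallel read, which is where the power and latency savings come from.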
improves the system implementation of mainstream CNN (Convolutional Neural
Network) architectures. The solution does so with high accuracy through an
analog in-memory computing approach, enhancing AI inference at the edge. As
current CNN models may require ever more synapses (weights) for processing, it is
difficult to sustain the required memory bandwidth. In contrast, our analog memory solution
stores synaptic weights in floating-gate-based NVM, offering significant
improvements in system latency. Compared with traditional SRAM-based
approaches, our solution delivers 10 to 100 times lower system power.
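The bandwidth pressure described above can be made concrete with a back-of-the-envelope count. The layer dimensions below are hypothetical examples, not figures from the source; the point is that a single convolutional layer reuses each weight thousands of times, so keeping weights resident in on-chip NVM removes a large volume of off-chip traffic.

```python
# Hypothetical weight/MAC count for one CNN convolutional layer,
# illustrating why weight storage location dominates bandwidth needs.

def conv_layer_cost(in_ch, out_ch, k, out_h, out_w):
    """Return (weight count, MAC count) for a k x k convolution."""
    weights = in_ch * out_ch * k * k      # synapses the layer must hold
    macs = weights * out_h * out_w        # MACs per single inference
    return weights, macs

# Illustrative shape: 3x3 kernel, 64 -> 128 channels, 56x56 output map.
w, m = conv_layer_cost(64, 128, 3, 56, 56)
print(w, m)   # ~74K weights driving ~231M MACs per inference
```

If those ~74K weights sit in floating-gate NVM inside the compute array, they are never re-fetched from external memory, which is the source of the latency and power advantage claimed over SRAM-based designs.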