
CHIMERA: Efficient DNN Inference and Training at the Edge with On-Chip Resistive RAM


Speaker: Kartik Prabhu, PhD Student, Stanford University
Date: July 1, 2021

In this talk, we will present CHIMERA, the first non-volatile deep neural network (DNN) chip for edge AI training and inference using foundry on-chip resistive RAM (RRAM) macros and no off-chip memory. CHIMERA achieves 0.92 TOPS peak performance and 2.2 TOPS/W. We scale inference to 6x larger DNNs by connecting 6 CHIMERAs at just 4% execution-time and 5% energy overhead, enabled by communication-sparse DNN mappings that exploit RRAM non-volatility through quick chip wakeup/shutdown. We demonstrate the first incremental edge AI training scheme, which overcomes RRAM write energy, speed, and endurance challenges. Our training achieves the same accuracy as traditional algorithms with up to 283x fewer RRAM weight update steps and 340x better energy-delay product. We thus demonstrate 10 years of 20 samples/minute incremental edge AI training on CHIMERA.
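To give a sense of scale for the 10-year claim, the following is a minimal back-of-the-envelope sketch (not from the talk) that multiplies out the stated 20 samples/minute rate and applies the stated up-to-283x reduction in RRAM weight-update steps. The 365-day year and the one-update-step-per-sample baseline are simplifying assumptions for illustration only.

```python
# Back-of-the-envelope arithmetic for 10 years of incremental training
# at 20 samples/minute, assuming 365-day years (an assumption) and one
# RRAM weight-update step per sample as the naive baseline (an assumption).

SAMPLES_PER_MINUTE = 20
YEARS = 10
MINUTES_PER_YEAR = 365 * 24 * 60

total_samples = SAMPLES_PER_MINUTE * MINUTES_PER_YEAR * YEARS

UPDATE_REDUCTION = 283  # "up to 283x fewer RRAM weight update steps"
reduced_updates = total_samples // UPDATE_REDUCTION

print(f"training samples over {YEARS} years: {total_samples:,}")
print(f"naive RRAM update steps:             {total_samples:,}")
print(f"with {UPDATE_REDUCTION}x reduction:              {reduced_updates:,}")
```

The roughly hundred-million-sample total illustrates why reducing RRAM writes matters: fewer update steps directly reduce the endurance and write-energy burden on the on-chip RRAM.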
