Machine learning (ML) has become ubiquitous, interwoven into applications essential to our daily lives, societal prosperity, and technological progress. However, the large data centers that serve ML workloads are struggling to keep pace with demand, and the surge in ML workloads has made data centers major contributors to annual energy consumption. This project’s goal is to develop a transformative technology that substantially improves the energy efficiency of ML systems, allowing for a corresponding reduction in their carbon emissions. Existing computing platforms are fundamentally limited by the memory wall and the power wall, and incremental technological improvements are proving inadequate to satisfy future demand. Transformative platform technologies, such as in-memory analog computing, offer a potential solution, but they face several practical challenges that must be overcome before becoming commercially viable. This project aims to address several of these challenges by developing a novel computer architecture and a corresponding framework for deploying ML workloads to it. The project aligns with established national priorities that seek to sustain economic leadership in artificial intelligence, computing, and nanotechnology by bringing emerging technologies and computing architectures into wide use, providing a practical alternative to today’s energy-intensive ML systems.