Friday, January 27, 2023 - 2:20 p.m.
Swearingen (2A27).

Talk Abstract: As the technology industry moves toward implementing machine learning tasks such as natural language processing and image classification on smaller edge computing devices, the demand for more efficient algorithm implementations and hardware accelerators has become a significant area of research. In recent years, several edge deep learning hardware accelerators have been released that specifically focus on reducing the power and area consumed by deep neural networks (DNNs). Meanwhile, spiking neural networks (SNNs) have been shown to achieve substantial power reductions even over these edge DNN accelerators when deployed on specialized neuromorphic event-based/asynchronous hardware. While neuromorphic hardware has demonstrated great potential for accelerating deep learning tasks at the edge, the current space of algorithms and hardware is limited and still at an early stage of development. Thus, many hybrid approaches have been proposed that aim to convert pre-trained DNNs into SNNs. In this talk, we provide a general guide to converting pre-trained DNNs into SNNs, along with techniques to improve the latency, power, and energy of converted SNNs deployed on neuromorphic hardware.
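
For readers unfamiliar with the conversion approach the abstract refers to, the sketch below illustrates one widely used rate-based recipe (in the spirit of Diehl et al., 2015): the weights of a pre-trained ReLU network are copied into a network of integrate-and-fire (IF) neurons, the hidden layer is normalized so analog activations map to firing rates, and the prediction is read out from accumulated output potentials. This is a generic illustration, not the specific method presented in the talk; the layer sizes, random weights, threshold, and timestep count are all assumptions made for demonstration.

```python
# Minimal sketch (NumPy only) of rate-based DNN-to-SNN conversion:
# copy ReLU-network weights into integrate-and-fire (IF) neurons,
# normalize so activations map to firing rates in [0, 1], and read
# the class out of accumulated output potentials. Sizes, seeds, and
# the timestep count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a "pre-trained" two-layer ReLU network (random weights
# here; in practice these would come from a trained model).
W1 = rng.normal(scale=0.1, size=(784, 128))
W2 = rng.normal(scale=0.1, size=(128, 10))

def dnn_forward(x):
    return np.maximum(x @ W1, 0.0) @ W2   # logits of the original DNN

def convert_and_run(x, T=200, v_th=1.0):
    # Data-based normalization: rescale the hidden layer so its largest
    # analog activation corresponds to a rate of ~1 spike per timestep.
    a1 = np.maximum(x @ W1, 0.0)
    W1n = W1 / max(a1.max(), 1e-9)

    v1 = np.zeros(W1.shape[1])   # hidden-layer membrane potentials
    v2 = np.zeros(W2.shape[1])   # output layer integrates without spiking

    for _ in range(T):
        v1 += x @ W1n                      # inject input as constant current
        s1 = (v1 >= v_th).astype(float)    # spike where threshold is crossed
        v1 -= s1 * v_th                    # "soft reset" by subtraction,
                                           # which reduces conversion error
        v2 += s1 @ W2                      # accumulate weighted spikes

    return v2  # proportional to the DNN logits as T grows

x = rng.random(784)
print("DNN prediction:", np.argmax(dnn_forward(x)))
print("SNN prediction:", np.argmax(convert_and_run(x)))
```

With enough timesteps, each hidden IF neuron's firing rate approaches its normalized ReLU activation, so the accumulated output potentials become proportional to the original logits; the number of timesteps then trades off latency and energy against conversion accuracy, which is the tension the talk addresses.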

Bio: Ramtin Zand is the director of the Intelligent Circuits, Architectures, and Systems (iCAS) Lab at the University of South Carolina, which collaborates with and is supported by several multinational companies, including Intel, AMD, and Juniper Networks, as well as local companies such as Van Robotics. He has authored 50+ articles and received recognitions from ACM and IEEE, including the best paper runner-up award at ACM GLSVLSI’18, the best paper award at IEEE ISVLSI’21, and a featured paper in IEEE Transactions on Emerging Topics in Computing. His research focuses on neuromorphic computing and real-time, energy-efficient AI.