ACM: Code-A-Thon Prep Meeting and Kickoff Announcement
DISSERTATION DEFENSE
Department of Computer Science and Engineering
University of South Carolina
Author: Tieming Geng
Advisor: Dr. Chin-Tser Huang
Date: October 25, 2023
Time: 12 pm
Place: Innovation Center, Room 2265 & Virtual
Meeting Link: Teams
Abstract:
Recently, cyber-physical systems (CPS) have gained significant traction in various engineering fields. One of the challenges for CPS is to develop lightweight, real-time computational models that enable in-situ evaluation and decision-making capabilities on mobile, decentralized platforms. This seminar presents multiple research efforts being pursued along this frontier at the Integrated Multiphysics & Systems Engineering Laboratory (iMSEL) at the University of South Carolina (USC). It starts with a fundamental introduction of key methodologies for lightweight, real-time computation in engineering, including reduced order modeling (ROM) and data-driven modeling. Then, the extension of the data-driven method by leveraging recent advances in deep learning will be discussed. Strategies to integrate real-time evaluation and decision-making on edge computing devices to enable field deployment of CPS will be presented. Several real-world applications of real-time computing that iMSEL has demonstrated to federal agencies, such as design automation, massive data analytics, anomaly detection, and system autonomy, will also be presented.
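For readers unfamiliar with reduced order modeling, the following is a minimal, illustrative sketch of one standard ROM technique, proper orthogonal decomposition (POD) with Galerkin projection; the snapshot data, reduced dimension, and operator here are hypothetical placeholders, not material from the talk.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is a full-order state at one time instant.
n_dof, n_snap = 500, 100
xgrid = np.linspace(0.0, 1.0, n_dof)
t = np.linspace(0.0, 1.0, n_snap)
X = (np.outer(np.sin(np.pi * xgrid), np.cos(2 * np.pi * t))
     + 0.5 * np.outer(np.sin(3 * np.pi * xgrid), np.sin(4 * np.pi * t)))

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(X, full_matrices=False)
r = 4                                  # reduced dimension (assumed)
Phi = U[:, :r]                         # n_dof x r basis

# Galerkin projection of a (placeholder) linear full-order operator onto the basis.
A = -np.eye(n_dof)                     # stand-in for a discretized PDE operator
A_r = Phi.T @ A @ Phi                  # r x r reduced operator, cheap to evaluate online

# A full-order state is approximated as Phi @ a, with reduced coordinates a = Phi.T @ x.
x = X[:, 0]
a = Phi.T @ x
err = np.linalg.norm(x - Phi @ a) / np.linalg.norm(x)
print(f"reduced dimension {r}, relative reconstruction error {err:.2e}")
```

The offline cost (the SVD) is paid once; online evaluation then works in the small r-dimensional space, which is what makes such models candidates for the edge-computing deployments described above.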
Bio:
Yi Wang is an associate professor of mechanical engineering at the University of South Carolina (USC). He completed his PhD at Carnegie Mellon University in 2005 and obtained his B.S. and M.S. from Shanghai Jiaotong University in China in 1998 and 2000, respectively. From 2005 to 2017, he held several positions of increasing responsibility at CFD Research Corporation (CFDRC) in Huntsville, Alabama. In 2017, he joined the University of South Carolina to start his academic career. His research interests focus on computational and data-enabled science and engineering (CDS&E), including reduced order modeling, large-scale and/or real-time data analytics, system-level simulation, computer vision, and cyber-physical systems and autonomy, with applications in aerospace, naval perception, unmanned systems, manufacturing, and biomedical devices. His research has been sponsored by several federal agencies, including the DoD, NIH, NASA, and DOT, as well as by industry. He has published over 150 papers in refereed journals and conference proceedings. He is also the recipient of the 2021 Research Breakthrough Star Award from USC.
DISSERTATION DEFENSE
Author: Bharat Joshi
Advisor: Dr. Ioannis Rekleitis
Date: October 11, 2023
Time: 3 pm - 5 pm
Place: Innovation Center, Room 2277 & Virtual
Abstract:
The ocean covers two-thirds of the Earth yet remains relatively unexplored compared to the landmass. Mapping underwater structures is essential for both archaeological and conservation purposes. This dissertation focuses on employing a robot team to map underwater structures using vision-based simultaneous localization and mapping (SLAM). The overarching goal of this research is to create a team of autonomous robots that map large underwater structures in a coordinated fashion. This requires each robot to maintain an accurate, robust estimate of its own pose and to know the relative poses of the other robots in the team. However, the GPS-denied and communication-constrained underwater environment, along with low visibility, poses several challenges for state estimation. This dissertation aims to diagnose the challenges of underwater vision-based state estimation algorithms and provide solutions to improve their robustness and accuracy. Moreover, robust state estimation combined with deep learning-based relative localization forms the backbone for cooperative mapping by a team of robots.
The performance of open-source state-of-the-art visual-inertial SLAM algorithms is compared in multiple underwater environments to understand the challenges of state estimation underwater. Extensive evaluation showed that consumer-level imaging sensors are ill-equipped to handle challenging underwater image formation, low intensity, and artificial lighting fluctuations. Thus, a GoPro action camera, which captures high-definition video along with synchronized IMU measurements embedded within a single MP4 file, is presented as a substitute. Along with enhanced images, fast sparse map deformation is performed for globally consistent mapping after loop closure. However, in some environments such as underwater caves, narrow passages and turbulent flows make loop closure difficult, resulting in yaw drift over long trajectories. Tightly coupled fusion of high-frequency magnetometer measurements into optimization-based visual-inertial odometry using IMU preintegration is performed, producing a significant reduction in yaw drift. Even with good-quality cameras, there are scenarios during underwater deployments where visual SLAM fails. Robust state estimation is proposed by switching between visual-inertial odometry and a model-based estimator to keep track of the Aqua2 Autonomous Underwater Vehicle (AUV) during underwater operations. Mapping large underwater structures requires cooperative mapping by a team of robots equipped with robust state estimation and capable of relative localization with respect to each other. A deep learning framework, trained only on synthetic images, is designed for real-time 6D pose estimation of an Aqua2 AUV with respect to the observing camera. This dissertation combines robust state estimation and accurate relative localization, which together contribute to mapping underwater structures using multiple AUVs.
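As a rough illustration of the switching idea described in the abstract (not the dissertation's actual implementation), the sketch below selects between a visual-inertial estimate and a model-based fallback using a hypothetical feature-tracking health check; the threshold, data structures, and function names are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pose:
    x: float
    y: float
    z: float
    yaw: float  # simplified 4-DoF pose for illustration

# Assumed health threshold: trust VIO only while it tracks enough features.
MIN_TRACKED_FEATURES = 30

def select_pose(vio_pose: Optional[Pose],
                vio_tracked_features: int,
                model_pose: Optional[Pose],
                last_good_pose: Pose) -> Pose:
    """Return the pose to publish, switching estimators when VIO degrades."""
    if vio_pose is not None and vio_tracked_features >= MIN_TRACKED_FEATURES:
        return vio_pose          # VIO healthy: use it directly
    if model_pose is not None:
        return model_pose        # fall back to the vehicle-dynamics estimate
    return last_good_pose        # last resort: hold the previous pose

# Example: VIO loses tracking, so the model-based estimate is used instead.
fallback = Pose(1.0, 2.0, -3.0, 0.1)
print(select_pose(None, 0, fallback, Pose(0.9, 1.9, -3.0, 0.1)))
```

A real system would also re-initialize the visual-inertial front end from the model-based estimate once tracking recovers; only the selection logic is shown here.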
Speakers:
Amir Yazdanbakhsh (Google DeepMind), Suvinay Subramanian (Google)
Abstract:
The rapid advancement of artificial intelligence (AI) has ushered in an era of unprecedented computational demands, necessitating continuous innovation in computing systems. In this talk, we will highlight how codesign has been a key paradigm in enabling innovative solutions and state-of-the-art performance in Google's AI computing systems, namely Tensor Processing Units (TPUs). We present several codesign case studies across different layers of the stack, spanning hardware, systems, software, and algorithms, all the way up to the datacenter. We discuss the judicious, yet opinionated bets made in TPU design choices, and how these choices have not only kept pace with the blistering rate of change but also enabled many of the breakthroughs in AI.
Bio:
Amir Yazdanbakhsh received his Ph.D. degree in computer science from the Georgia Institute of Technology. His Ph.D. work has been recognized by various awards, including the Microsoft PhD Fellowship and the Qualcomm Innovation Fellowship. Amir is currently a Research Scientist at Google DeepMind, where he is the co-founder and co-lead of the Machine Learning for Computer Architecture team. His work focuses on leveraging recent machine learning methods and advancements to innovate and design better hardware accelerators. He is also interested in designing large-scale distributed systems for training machine learning applications, and he led the development of a large-scale distributed reinforcement learning system that scales to a TPU Pod and efficiently manages thousands of actors to solve complex, real-world tasks. His team's work has been covered by media outlets including WIRED, ZDNet, Analytics Insight, and InfoQ. Amir was inducted into the ISCA Hall of Fame in 2023.
Suvinay Subramanian is a Staff Software Engineer at Google, where he works on the architecture and codesign of Google's ML supercomputers, Tensor Processing Units (TPUs). His work has directly shaped architecture and systems features in multiple generations of TPUs and enabled performant training and serving of Google's research and production AI workloads. Suvinay received a Ph.D. from MIT and a B.Tech from the Indian Institute of Technology Madras. He also co-hosts the Computer Architecture Podcast, which spotlights cutting-edge developments in computer architecture and systems.
Abstract:
Quantum computing presents many challenges for the programming language community. How can we program quantum algorithms in a way that ensures they behave correctly? In this talk, I will discuss how types can be used to enforce various properties of quantum programs. I will first talk about how linear types and dependent types can be useful for programming quantum circuits. I will then discuss my recent work on designing a type system that mediates the interaction between quantum circuit generation time and quantum circuit execution time. If time permits, I will sketch how to ensure reversibility and controllability of quantum circuits using types.
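To make the role of linear types concrete, here is a small hypothetical Python sketch (the talk itself concerns static type systems, not runtime checks) in which each qubit handle may be consumed at most once, mimicking the no-cloning discipline that a linear type system would enforce at compile time; all class and method names here are invented for illustration.

```python
class Qubit:
    """Qubit handle; may be consumed at most once (a runtime stand-in for a linear type)."""
    def __init__(self, wire: int):
        self.wire = wire
        self.consumed = False

class CircuitBuilder:
    """Records gates at circuit *generation* time; execution would happen later."""
    def __init__(self):
        self.ops = []
        self.n_wires = 0

    def fresh(self) -> Qubit:
        q = Qubit(self.n_wires)
        self.n_wires += 1
        return q

    def _consume(self, q: Qubit) -> Qubit:
        if q.consumed:
            raise TypeError(f"linearity violation: wire {q.wire} used twice")
        q.consumed = True
        return Qubit(q.wire)          # fresh handle for the gate's output

    def h(self, q: Qubit) -> Qubit:
        out = self._consume(q)
        self.ops.append(("H", q.wire))
        return out

    def cnot(self, ctrl: Qubit, tgt: Qubit):
        c, t = self._consume(ctrl), self._consume(tgt)
        self.ops.append(("CNOT", ctrl.wire, tgt.wire))
        return c, t

# Build a Bell-pair circuit; reusing a stale handle raises a TypeError.
cb = CircuitBuilder()
a, b = cb.fresh(), cb.fresh()
a2 = cb.h(a)
a3, b2 = cb.cnot(a2, b)
# cb.h(a)   # would raise: 'a' was already consumed by the first H gate
print(cb.ops)  # [('H', 0), ('CNOT', 0, 1)] -- the generated circuit, not yet executed
```

A static linear type system rejects such reuse at compile time rather than at circuit-construction time, which is the stronger guarantee the talk addresses.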
Bio:
Frank (Peng) Fu is an assistant professor in the Computer Science and Engineering Department at the University of South Carolina. Previously, he was a postdoctoral researcher at Dalhousie University in Canada. He obtained his Ph.D. degree from the University of Iowa. His research interests are in quantum programming languages, type theory, and their applications.
Location:
In-person
Innovation Center Building 1400
SUMMARY: This talk will present an overview of recent research at the UW FUNLab on the use of vehicular radar for advanced driver assistance systems (en route to a future vision of autonomous driving). Wideband (typically FMCW or chirp) radars are increasingly deployed onboard vehicles as key high-resolution sensors for environmental mapping or imaging and various safety features. The talk is organized into two parts, centered on the evolving role of radar ‘cognition’ in complex operating environments, addressing two important future challenges:

Abstract
Large Language Models (LLMs) have garnered significant attention from researchers, including clinicians, due to their ability to respond to a wide variety of human queries. Innovations like ChatGPT's groundbreaking reinforcement learning with human feedback and Google's domain-specific fine-tuning in Med-PaLM have introduced two potent information-providing platforms for general health inquiries. The 2023 Gartner Hype Cycle places such LLMs at the pinnacle, foreseeing translational impact in the next 2-3 years. This foresight is grounded in comprehensive assessments of recent studies that have illuminated the limitations of these LLMs.
The remarkable potential of these LLMs, when fortified with features like human-level explainability, consistency, reliability, and safety, holds the promise of making deployable systems usable and readily adaptable to various scenarios where human lives may be affected. The talk will introduce a suite of methodologies (methods+metrics) under the Knowledge-powered CREST Framework for LLMs. This practical approach harnesses declarative, procedural, and graph-based knowledge within a neurosymbolic framework to shed light on the challenges associated with LLMs.
Bio
Manas Gaur is an assistant professor in the Department of Computer Science and Electrical Engineering at the University of Maryland, Baltimore County (UMBC). At UMBC, he leads the Knowledge-infused AI and Inference (KAI2) lab. Before entering academia, he was the lead research scientist in Natural Language Processing (NLP) at the AI Center within Samsung Research America. He also held a visiting researcher role at the Alan Turing Institute. Dr. Gaur earned his Ph.D. under the guidance of Prof. Amit P. Sheth at the Artificial Intelligence Institute, University of South Carolina. Together, they played a pivotal role in the development of Knowledge-infused Learning, a paradigm that harmonizes seamlessly with NeuroSymbolic AI. He was recognized in the AAAI New Faculty Highlights for 2023 and is currently an advisor to Balm.ai, a mental health startup. More details are available at https://manasgaur.github.io/
Location:
In-person
Innovation Center Building 1400