Enhancing V2V Network Communication Reliability under Severe Weather

Tuesday, March 17, 2026 - 12:30 pm
Online/Room 2267, Storey Innovation Center

DISSERTATION DEFENSE

Author: Jian Liu

Advisor: Dr. Chin-Tser Huang

Date: March 17, 2026

Time: 12:30 PM

Place: Online/Room 2267, Storey Innovation Center

Abstract

In this dissertation, we address a key reliability challenge in connected vehicle networks: V2V links can degrade sharply under adverse weather, especially in 5G mmWave channels, where environmental attenuation can be severe in regions prone to dust and sandstorms. Because controlled field experiments in extreme weather are costly and difficult, this dissertation develops simulation-driven solutions that characterize weather-induced degradation. First, it introduces the first open-source NS-3 weather simulator for studying adverse weather impacts on 5G mmWave V2V communications, enabling systematic evaluation under diverse environmental conditions. Building on this capability, it investigates predictive models such as ARIMA, Prophet, LSTM, and GRU to forecast weather-related performance degradation, and uses these predictions to design a proactive channel-switching strategy that transitions from 5G mmWave to 4G LTE before major reliability loss occurs. Next, it advances beyond prediction-based control by developing a deep reinforcement learning (DRL) channel-switching approach that learns optimal switching decisions online using cumulative throughput as feedback, enabling vehicles to adapt autonomously to real-time environmental changes. Finally, it proposes a weather-aware, reinforcement learning–based open-loop power control method for decentralized sidelink V2V communication, in which each vehicle learns to adjust its transmit power using only locally measurable information together with the extra path loss caused by weather. In simulations spanning clear weather to severe rain, this approach achieves a higher packet reception ratio (PRR) than the baseline 3GPP strategy and existing open-loop power control methods.
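The proactive channel-switching idea above can be illustrated with a minimal sketch: forecast near-term link throughput and fall back to LTE before the mmWave link collapses. The moving-average predictor and the 50 Mbps reliability floor below are illustrative placeholders for the ARIMA/LSTM-style forecasters and tuned thresholds the dissertation actually studies.

```python
def forecast_next(throughput_history, window=5):
    """One-step moving-average forecast of link throughput (Mbps).
    A simple stand-in for the ARIMA/Prophet/LSTM/GRU predictors."""
    recent = throughput_history[-window:]
    return sum(recent) / len(recent)

def choose_channel(throughput_history, mmwave_floor=50.0):
    """Switch proactively from 5G mmWave to 4G LTE when the forecast
    falls below a reliability floor (hypothetical threshold)."""
    predicted = forecast_next(throughput_history)
    return "LTE" if predicted < mmwave_floor else "mmWave"

# Synthetic mmWave throughput trace degrading as a sandstorm intensifies
history = [120.0, 80.0, 50.0, 30.0, 20.0, 10.0]
print(choose_channel(history))  # forecast below the floor -> "LTE"
```

The same decision structure carries over to the DRL variant, where the learned policy replaces the fixed threshold.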

Machine Learning Toward Materials Discovery: From Crystal Mapping & OOD Property Prediction to Radiation Detection & Emerging Foundation Models

Wednesday, March 18, 2026 - 01:20 pm
Online/Room 2267, Storey Innovation Center

DISSERTATION DEFENSE

Author: Qinyang Li

Advisor: Dr. Jianjun Hu

Date: March 18, 2026

Time: 1:20 PM

Place: Online/Room 2267, Storey Innovation Center

Remote join (ZOOM):

Link: https://sc-edu.zoom.us/j/4997546955

 

Abstract

The discovery and optimization of advanced materials are central to addressing global challenges in energy, healthcare, and sustainability. This dissertation develops representation-aware and distribution-aware machine learning frameworks to improve robustness, generalization, and interpretability in materials informatics and radiation detection. The work spans crystal structure mapping, adversarial learning for out-of-distribution prediction, foundation-model-based property prediction, and deep neural classification of photon interactions. A global mapping framework is first introduced to analyze the inorganic materials space using compositional, structural, physical, and neural descriptors derived from the Materials Project database. By embedding materials into low-dimensional manifolds, the framework reveals clustering behavior and structure–property relationships, enabling systematic exploration of underrepresented material families.
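The low-dimensional embedding step can be sketched with plain PCA over a toy descriptor matrix; this is a linear stand-in for the manifold-learning embeddings the framework uses, and the four "materials" and three features below are invented for illustration.

```python
import numpy as np

def pca_embed(X, n_components=2):
    """Project material descriptor vectors onto their top principal
    components -- a linear proxy for the low-dimensional manifold
    embeddings used for crystal-space mapping."""
    Xc = X - X.mean(axis=0)                     # center the descriptors
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T             # low-dimensional coordinates

# Toy descriptor matrix: 4 hypothetical materials x 3 features,
# forming two clusters that PCA should separate along the first axis
X = np.array([[1.0, 2.0, 0.5],
              [1.1, 2.1, 0.4],
              [5.0, 0.2, 3.0],
              [5.1, 0.1, 3.1]])
coords = pca_embed(X)
print(coords.shape)  # (4, 2)
```

Clusters in such coordinates are what the global mapping framework inspects for structure–property relationships and underrepresented families.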

To address distributional fragility in materials property prediction, the Crystal Adversarial Learning (CAL) algorithm is developed. CAL synthesizes adversarial samples in high-uncertainty regions and incorporates stability-aware training objectives, improving generalization under covariate, prior, and relation shifts. Experimental results demonstrate enhanced robustness in data-scarce regimes typical of experimental materials research.
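The core move in adversarial synthesis of this kind, i.e. pushing samples into high-uncertainty regions, can be sketched as gradient ascent on ensemble disagreement. Everything below (the two toy models, the finite-difference gradient, the step sizes) is illustrative; CAL's actual objective and stability-aware constraints are more involved.

```python
import numpy as np

def ensemble_variance(x, models):
    """Predictive variance across an ensemble -- a simple uncertainty proxy."""
    preds = np.array([m(x) for m in models])
    return preds.var()

def synthesize_adversarial(x, models, step=0.1, n_steps=10, eps=1e-4):
    """Perturb a descriptor vector toward higher ensemble disagreement
    via finite-difference gradient ascent on the variance."""
    x = x.astype(float).copy()
    for _ in range(n_steps):
        base = ensemble_variance(x, models)
        grad = np.zeros_like(x)
        for i in range(x.size):
            x_eps = x.copy()
            x_eps[i] += eps
            grad[i] = (ensemble_variance(x_eps, models) - base) / eps
        x += step * grad  # ascend the uncertainty landscape
    return x

# Two toy "models" that disagree more as the first feature grows
models = [lambda x: 1.0 * x[0], lambda x: 2.0 * x[0]]
x_adv = synthesize_adversarial(np.array([1.0, 0.0]), models)
print(x_adv[0] > 1.0)  # True: sample pushed toward the high-uncertainty region
```

Training on such synthesized points is what gives the robustness to covariate, prior, and relation shift described above.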

 

The dissertation further investigates in-context foundation models for data-efficient property prediction. By integrating a pretrained tabular transformer with compositional descriptors and graph-derived structural embeddings, the proposed framework achieves competitive performance on the MatBench benchmark suite and on lattice thermal conductivity prediction without task-specific fine-tuning. Representation analyses indicate that foundation-model adaptation reorganizes latent feature spaces to better align with physical property gradients, particularly in small-to-medium data regimes.

 

Finally, a deep learning framework is applied to gamma-photon interaction classification in room-temperature semiconductor detectors. The proposed model distinguishes Compton scattering and photoelectric events from pulse waveforms with high accuracy and robustness across varying noise and energy conditions, demonstrating the transferability of representation-learning principles to signal-level scientific data.

 

Collectively, this work advances machine learning methodologies that integrate representation geometry, distributional robustness, and physical interpretability across heterogeneous scientific domains. The developed approaches provide scalable and interpretable tools for accelerating materials discovery and improving radiation detection systems.

From Experience to Reasoning: Offline RL Subroutines and LLM-Based Grounding for Sample-Efficient Reinforcement Learning

Thursday, March 19, 2026 - 11:40 am
Online/Room 2267, Storey Innovation Center

DISSERTATION DEFENSE

Author: Jianhai Su

Advisor: Dr. Qi Zhang

Date: March 19, 2026

Time: 11:40 AM - 1:40 PM (ET)

Place: Online/Room 2267, Storey Innovation Center

Remote join (Microsoft Teams):

Link: https://teams.microsoft.com/meet/22389270607188?p=XPFrAyxA5Qo0IIh3tV

Meeting ID: 223 892 706 071 88

Passcode: YC7bg7zH

 

 

Abstract

Improving the learning efficiency of reinforcement learning (RL) agents remains a fundamental challenge, particularly in environments characterized by sparse rewards, long horizons, or partial observability. This dissertation investigates how RL agents can learn more efficiently through two complementary forms of guidance: mechanisms derived purely from an agent’s own experience and mechanisms that leverage reasoning priors from pretrained large language models (LLMs).

 

On the experience-driven side, the first study develops a general framework for incorporating offline RL algorithms as subroutines within an online RL process. In this framework, an agent periodically repurposes its replay buffer as an offline dataset and applies offline optimization methods such as Implicit Q-Learning (IQL) or Calibrated Q-Learning (Cal-QL). Through systematic empirical analysis across diverse benchmark environments, this study characterizes when such experience-driven guidance improves policy quality under fixed interaction budgets and identifies several practical factors that influence its effectiveness.
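The interleaving described above can be written down as a short skeleton. The `env`, `agent`, and `offline_update` interfaces here are assumed for illustration; `offline_update` stands in for a batch of IQL or Cal-QL updates over the buffer.

```python
def online_with_offline_subroutine(env, agent, offline_update,
                                   total_steps=10_000, period=1_000):
    """Interleave ordinary online RL with periodic offline optimization
    that treats the replay buffer as a fixed offline dataset."""
    replay_buffer = []
    obs = env.reset()
    for step in range(1, total_steps + 1):
        action = agent.act(obs)
        next_obs, reward, done = env.step(action)
        replay_buffer.append((obs, action, reward, next_obs, done))
        agent.online_update(replay_buffer[-1])            # ordinary online step
        if step % period == 0:
            offline_update(agent, list(replay_buffer))    # buffer as offline dataset
        obs = env.reset() if done else next_obs
    return agent
```

The interaction budget (`total_steps`) is fixed; only the ratio of online to offline computation changes, which is the trade-off the study characterizes.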

 

On the LLM-based side, the dissertation presents two complementary grounding approaches. The second study investigates implicit grounding, where a Flamingo-style vision–language model with an embedded pretrained language model acts as the high-level policy in a hierarchical RL agent. The agent processes multimodal interaction histories and proposes subgoals for a library of pretrained low-level skills, grounding pretrained language priors through policy learning.

 

The third study introduces an explicit grounding framework in which reasoning traces produced by an external LLM are distilled into a latent reasoning module within a value-based RL agent. A potential function defined over this latent space is then learned from the agent’s trajectories and used for potential-based reward shaping. This dual-track framework combines reasoning transfer with interaction-driven learning to improve both learning efficiency and final policy performance.
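Potential-based shaping itself has a simple closed form, r' = r + γΦ(s') − Φ(s), which leaves the optimal policy unchanged provided terminal potentials are zero. A minimal sketch, with placeholder potential values standing in for the learned Φ over the latent reasoning space:

```python
def shaped_reward(reward, phi_s, phi_s_next, gamma=0.99, done=False):
    """Potential-based reward shaping: r' = r + gamma * Phi(s') - Phi(s).
    Terminal transitions use Phi(s') = 0 so policy invariance holds."""
    phi_next = 0.0 if done else phi_s_next
    return reward + gamma * phi_next - phi_s

# Illustrative values: a transition toward a higher-potential latent state
print(shaped_reward(1.0, phi_s=0.5, phi_s_next=0.8))  # ~1.292
```

The learned component in the dissertation's framework is Φ itself, fit from the agent's trajectories over the distilled latent space; the shaping rule stays exactly this.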

 

Together, these studies provide a structured investigation of how experience-driven learning and LLM-based grounding—both implicit and explicit—can guide reinforcement learning under realistic interaction constraints and offer practical insights for designing more sample-efficient RL agents.

Deep Learning Algorithms for Generative Materials Design and Composition Based Property Prediction

Monday, March 23, 2026 - 10:00 am
Online/Room 2265, Storey Innovation Center

DISSERTATION DEFENSE

Author: Rongzhi Dong

Advisor: Dr. Jianjun Hu

Date: March 23, 2026

Time: 10:00 AM

Place: Online/Room 2265, Storey Innovation Center

Remote join (ZOOM):

Link: https://sc-edu.zoom.us/j/4997546955

 

 

Abstract

The accelerated discovery of novel functional materials is critical for advancing transformative technologies in energy storage, electronics, and catalysis, yet current strategies remain fundamentally constrained by the limited size of existing materials databases and the difficulty of building predictive models that generalize to unseen compounds. This dissertation addresses these challenges through five interconnected deep learning and machine learning studies. First, a diffusion language model framework is proposed for the generative design of novel inorganic materials, with DFT validation confirming the thermodynamic stability of newly identified compounds. Second, generative modeling is extended to two-dimensional (2D) materials discovery, producing diverse and stable candidates that substantially expand the known structural landscape of this emerging materials class. Third, CondADiT, a composition-conditioned latent diffusion framework, is introduced for crystal structure prediction directly from chemical composition, achieving state-of-the-art performance on multiple benchmarks. Fourth, DeepXRD is presented as a deep learning framework for predicting X-ray diffraction spectra directly from composition, enabling scalable structural inference without costly simulations or experimental measurements. Fifth, domain adaptation techniques are systematically evaluated for materials property prediction under realistic distribution shifts, demonstrating significant improvements in out-of-distribution generalization. Together, these contributions establish a comprehensive data-driven framework that integrates generative modeling, structure learning, and domain-adaptive prediction to accelerate the discovery of stable, synthesizable, and functionally diverse materials.
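Composition-based prediction, as in the DeepXRD and CondADiT contributions above, starts from a chemical formula reduced to element fractions. A minimal sketch of that featurization step (a simplified parser that ignores nested parentheses, not the dissertation's actual pipeline):

```python
import re
from collections import Counter

def composition_vector(formula):
    """Parse a flat chemical formula into element fractions -- the
    composition-only input that composition-based models start from."""
    counts = Counter()
    for element, amount in re.findall(r"([A-Z][a-z]?)(\d*\.?\d*)", formula):
        counts[element] += float(amount) if amount else 1.0
    total = sum(counts.values())
    return {el: n / total for el, n in counts.items()}

print(composition_vector("Fe2O3"))  # {'Fe': 0.4, 'O': 0.6}
```

Such fraction dictionaries are typically expanded into fixed-length descriptor vectors (one slot per element or per statistic) before being fed to a model.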

 

Advancing Edge AI through Integrated Neuromorphic Algorithms and Hardware

Monday, March 23, 2026 - 01:00 pm
Online/Room 2277, Storey Innovation Center

DISSERTATION DEFENSE

Author: Peyton Chandarana

Advisor: Dr. Ramtin Zand

Date: March 23, 2026

Time: 1:00 PM

Place: Online/Room 2277, Storey Innovation Center

Remote join (MS Teams):

Link: Peyton Chandarana: Dissertation Defense | Microsoft Teams

Meeting ID: 263 606 597 033 12

Passcode: DD6ts7Mg


Abstract

The pursuit of energy-efficient intelligence for constrained and always-on sensing environments has positioned neuromorphic computing as a pivotal alternative to conventional von Neumann architectures through its adoption of asynchronous, event-based computing inspired by the biological brain. Beyond these constrained environments, neuromorphic design principles can also help alleviate the power and efficiency challenges posed by the rapidly growing AI industry. This dissertation presents research on the hardware-software co-design of spiking neural networks (SNNs), progressing from foundational signal encoding techniques to the deployment of complex, heterogeneous, and hybrid systems. We begin with the deployment of practical workloads, such as American Sign Language recognition, on Intel’s Loihi neuromorphic platform. Benchmarking against standard edge accelerators demonstrates that neuromorphic paradigms achieve significant gains in energy efficiency and power reduction, maximizing runtime on edge devices deployed as assistive technologies and reducing the overall energy footprint with minimal accuracy degradation. We then explore the integration of spiking and non-spiking domains to leverage the unique advantages of each, presenting an end-to-end co-design framework that uses SNNs for temporal feature extraction and artificial neural networks (ANNs) for high-precision classification. To facilitate this integration, we propose custom interface hardware, specifically an accumulator circuit, designed to synchronize asynchronous spike streams for synchronous edge processing.
These co-design principles provide a blueprint for the next generation of neuromorphic capabilities, highlighting areas for improvement and showing how co-design can be extended to build more capable and reliable autonomous systems while alleviating the problems posed by the immense scale and energy consumption of AI workloads.
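The accumulator's role can be sketched in software: bin asynchronous spike timestamps into fixed-rate counts that a synchronous ANN can consume. This is a behavioral model for illustration, not the proposed hardware circuit; the spike trains and window size below are invented.

```python
def accumulate_spikes(spike_trains, window):
    """Bin asynchronous spike timestamps into per-window counts,
    mimicking an accumulator that hands SNN output to a synchronous ANN.
    `spike_trains` maps neuron id -> spike times; `window` is the ANN's
    sampling period in the same time units."""
    horizon = max((t for ts in spike_trains.values() for t in ts), default=0.0)
    n_bins = int(horizon // window) + 1
    counts = {nid: [0] * n_bins for nid in spike_trains}
    for nid, times in spike_trains.items():
        for t in times:
            counts[nid][int(t // window)] += 1
    return counts  # fixed-rate count vectors, ready for ANN classification

trains = {0: [0.1, 0.4, 1.2], 1: [0.9, 1.1, 1.8]}
print(accumulate_spikes(trains, window=1.0))
# {0: [2, 1], 1: [1, 2]}
```

In the hybrid pipeline, the SNN's event stream feeds this accumulation stage and the resulting count vectors feed the high-precision ANN classifier.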