New Approaches on Source Coding for Quantum Stochastic Sources and Implementation of the Quantum Fanout Gate

Friday, October 17, 2025 - 03:00 pm
Room 2265 Innovation Building

DISSERTATION DEFENSE

Author : Rabins Wosti
Advisor: Dr. Stephen Fenner
Date: Oct 17th, 2025
Time: 3:00 pm
Place: Room 2265 Innovation Building

Abstract

The accurate computation of advanced quantum algorithms like Shor’s integer factorization, quantum phase estimation (QPE), and the quantum Fourier transform (QFT) requires quantum circuits of considerable size and depth. It is difficult to achieve reliable computation with deep quantum circuits due to the limited coherence times of current noisy quantum devices. The quantum fanout gate is known to be a powerful primitive for reducing the depth of many quantum circuits (Høyer and Špalek 2003; Gottesman and Chuang 1999). Shallow or constant-depth quantum circuits are desirable for both near-term and fault-tolerant quantum computation, as they reduce noise and allow faster execution of quantum algorithms, potentially skirting the effects of short coherence times. In this work, we present new approaches to implementing the quantum fanout gate. In particular, we show that by analog time evolution of quantum systems under two well-studied Hamiltonians, namely the quantum Ising and quantum Heisenberg models, we can implement the quantum fanout operator using only a constant number of additional layers of digital quantum gates.
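For reference, the objects named above have standard textbook forms (a sketch in common notation, which may differ from the dissertation's own): the fanout gate copies the control qubit's basis value onto n target qubits, and the Ising and Heisenberg models are two-body spin-coupling Hamiltonians,

    F_n : |x\rangle |y_1\rangle \cdots |y_n\rangle \;\mapsto\; |x\rangle |y_1 \oplus x\rangle \cdots |y_n \oplus x\rangle,
    H_{\mathrm{Ising}} = -J \sum_{\langle i,j \rangle} Z_i Z_j, \qquad
    H_{\mathrm{Heisenberg}} = -J \sum_{\langle i,j \rangle} \left( X_i X_j + Y_i Y_j + Z_i Z_j \right),

where X, Y, Z are Pauli operators, J is a coupling constant, and the sums run over coupled qubit pairs.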

Important foundations for quantum source coding were laid by Schumacher, who proved the quantum analog of Shannon’s noiseless coding theorem for an independent and identically distributed (i.i.d.) quantum source (Schumacher 1995). In this work, we present a lossless, variable-length block encoding scheme that encodes quantum information emitted by a completely general quantum stochastic source into the Fock space. In doing so, we extend the notion of uniquely decodable (or completely lossless) quantum codes for use in quantum block data compression. As our main result, for a fixed number ml of pure states emitted by a given quantum stochastic source, we derive the optimal lower bound on the average codeword length over a subset of uniquely decodable quantum codes called “special block codes”, which encode the pure states of m blocks, each of block size l. Additionally, we show that for quantum stationary sources in particular, the optimal lower bound on the average codeword length per symbol, computed over a subset of special block codes called “constrained special block codes”, equals the von Neumann entropy rate of the source in the limit of asymptotically long block size.
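For context, the key quantities are the standard ones (a sketch of common definitions, not the dissertation's exact notation): the von Neumann entropy of a density operator and the entropy rate of a stationary quantum source,

    S(\rho) = -\operatorname{Tr}(\rho \log \rho), \qquad
    s = \lim_{l \to \infty} \frac{1}{l}\, S\bigl(\rho^{(l)}\bigr),

where \rho^{(l)} is the joint state of a block of l emitted symbols. Schumacher's theorem gives S(\rho) qubits per symbol as the compression limit for an i.i.d. source; the result above states that this role is played by the entropy rate s for stationary sources in the asymptotic block-size limit.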

Computational Analogies in the Era of Large Language Models

Thursday, October 16, 2025 - 10:00 am
AI Institute Seminar room

DISSERTATION DEFENSE
 

Author : Amarakoon Mudiyanselage Thilini Ishanka Wijesiriwardene
Advisor: Dr. Amit Sheth
Date: Oct 16th, 2025
Time: 10:00 am
Place: AI Institute Seminar room
Join Zoom Meeting: https://sc-edu.zoom.us/j/86209851277?pwd=frrKtfwOKKs4EHZWkly1pfahO0z7n6…
Meeting ID: 862 0985 1277
Passcode: 309819


Abstract

 

Analogies are central to human cognition, enabling individuals to perceive deep similarities between superficially different situations. Effective analogy-making requires integrating knowledge about the external world with abstract reasoning and pattern recognition capabilities. While current language models (LMs), trained on massive textual corpora using autoregressive or masked objectives, achieve impressive performance across Natural Language Processing (NLP) tasks such as text generation, summarization, and classification, their capacity for analogical reasoning remains poorly understood. Three factors contribute to this gap: the inherent complexity of analogy-making, the scarcity of suitable evaluation data, and the absence of systematic frameworks for quantifying analogy complexity. This dissertation bridges this gap by advancing both the theoretical understanding of analogies in LMs and the practical tools needed to benchmark and improve their analogical capabilities.

 

This work makes six interconnected contributions to computational analogy research. First, we introduce a complexity-grounded taxonomy of analogies and develop evaluation methods that assess Large Language Models (LLMs) across this spectrum, revealing that knowledge-enhanced approaches are essential for proportional and long-text analogies. Second, we analyze student-generated analogies in Biochemistry, demonstrating how both hand-engineered features and LLM-generated embeddings contribute to distinguishing strong from weak analogies in educational contexts. Third, through linguistic probing techniques, we investigate the relationship between LLMs' syntactic-semantic encoding capabilities and their performance on sentence-level analogies. Fourth, we propose knowledge-enhanced methods specifically designed to address the challenging proportional analogies identified in our taxonomy. Fifth, we develop a generation pipeline for realistic long-text analogies addressing the limitations of existing overly-clean datasets, and benchmark state-of-the-art LLMs while exploring Graph Neural Network-based complementary evaluation methods. Sixth, recognizing that analogy research requires distinguishing between related phenomena of abstraction, we present a systematic taxonomy of abstraction levels, addressing the lack of consistent operational definitions in the Computer Science literature.

 

Together, these contributions establish a comprehensive framework for understanding, evaluating, and improving analogical reasoning in the era of large language models, with implications for both cognitive modeling and practical NLP applications.

Cross-layer Design and Optimization of Analog In-memory Computing Systems

Wednesday, September 24, 2025 - 02:00 pm
Online

DISSERTATION DEFENSE
 

Author : Md Hasibul Amin
Advisor: Dr. Ramtin Zand
Date: Sep 24th, 2025
Time: 2:00 pm
Place: Teams Meeting

Abstract

The computational demands of machine learning (ML) workloads have grown rapidly in recent years. Conventional von Neumann architectures cannot keep up with the high cost of data movement between the processor and memory, widely known as the memory wall problem. In-memory computing (IMC) has emerged as a promising solution, in which computation is performed inside memory devices such as SRAM, MRAM, and RRAM. Most commonly, the memory devices are arranged in a crossbar configuration, where the matrix-vector multiplication (MVM) operation is performed through the intrinsic parallelism of analog computation. Conventional IMC systems require power-hungry signal conversion blocks to connect analog crossbars to digital processing units, hindering efficient computation. In this dissertation, we propose In-Memory Analog Computing (IMAC) architectures that perform MVM and nonlinear vector (NLV) operations consecutively using analog functional units, eliminating the need for costly signal conversions. Despite its advantages, computing the whole DNN in the analog domain introduces critical usability and reliability challenges. This dissertation systematically investigates these challenges and presents a set of circuit-, system-, and architecture-level solutions to mitigate their impact. Furthermore, we develop a comprehensive simulation framework to enable cross-layer design and performance optimization of IMAC systems tailored to user-defined ML workloads. Our results demonstrate that IMAC can achieve significant energy and latency savings with negligible accuracy loss, making it a compelling direction for next-generation ML hardware acceleration.
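As a rough numerical illustration of the crossbar idea described above (a minimal sketch with made-up conductance and voltage values, not the dissertation's simulation framework): applying input voltages to the rows of a conductance crossbar yields column currents equal to a matrix-vector product via Ohm's and Kirchhoff's laws.

    import numpy as np

    # Hypothetical 3x4 crossbar: G[i, j] is the programmed conductance (siemens)
    # of the memory cell at row i, column j (network weights mapped to conductances).
    G = np.array([[1.0e-6, 2.0e-6, 0.5e-6, 1.5e-6],
                  [0.8e-6, 1.2e-6, 2.2e-6, 0.3e-6],
                  [1.7e-6, 0.4e-6, 1.1e-6, 2.5e-6]])

    # Input vector encoded as row voltages (volts).
    V = np.array([0.2, 0.5, 0.1])

    # Each column current sums V[i] * G[i, j] over rows (Kirchhoff's current law),
    # so the crossbar computes I = G^T @ V in a single analog step.
    I = G.T @ V
    print(I)  # column currents ~ analog matrix-vector product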

Application of Machine Learning for Vascular System Analysis

Wednesday, September 24, 2025 - 11:00 am
Rm 2267, Storey Innovation Building

DISSERTATION DEFENSE
Department of Computer Science and Engineering


Author : Alireza Bagheri Rajeoni
Advisor: Dr. Homayoun Valafar
Date: Sep 24th, 2025
Time: 11:00 am
Place: Rm 2267, Storey Innovation Building


Abstract

The analysis of vascular structures is critical for diagnosing, monitoring, and treating cardiovascular diseases such as aneurysms, stenosis, and vascular calcification. Traditional methods often rely on manual interpretation of imaging data, which is time-consuming, subjective, and not scalable. This work explores the application of machine learning techniques to automate and enhance vascular system analysis across multiple research efforts. Leveraging both supervised and unsupervised learning, the studies presented encompass tasks such as vessel segmentation, anomaly detection, boundary localization, calcium measurement, and volume estimation from computed tomography angiography (CTA) data. Emphasis is placed on overcoming challenges in data scarcity through the use of pre-trained models, transfer learning, and rule-based systems. Results demonstrate that machine learning, when carefully integrated with domain knowledge, can deliver accurate, interpretable, and scalable tools for vascular assessment. This compilation highlights the potential of AI-driven methods to support clinical decision-making and improve vascular diagnostics in real-world settings.
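As a hedged sketch of the transfer-learning idea mentioned above (the library calls are real, but the model choice, class count, and hyperparameters are illustrative assumptions, not the dissertation's actual pipeline): a segmentation network pre-trained on natural images can be fine-tuned on a small set of labeled CTA slices to mitigate data scarcity.

    import torch
    import torchvision

    # Start from a pre-trained segmentation model and fine-tune it on a small
    # vascular CTA dataset (binary mask: background vs. vessel).
    model = torchvision.models.segmentation.fcn_resnet50(weights="DEFAULT")
    model.classifier[4] = torch.nn.Conv2d(512, 2, kernel_size=1)  # new 2-class head

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch (real data would be CTA
    # slices and their manually annotated vessel masks).
    images = torch.randn(2, 3, 256, 256)
    masks = torch.randint(0, 2, (2, 256, 256))
    out = model(images)["out"]          # (2, 2, 256, 256) logits
    loss = loss_fn(out, masks)
    loss.backward()
    optimizer.step()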

Implicit Neural Representation for Image Reconstruction

Friday, September 12, 2025 - 01:00 pm
Rm 2277, Storey Innovation Building

DISSERTATION DEFENSE
 

Author : Canyu Zhang
Advisor: Dr. Song Wang
Date: Sep 12th, 2025
Time: 1:00 pm
Place: Rm 2277, Storey Innovation Building


Abstract

Image reconstruction aims to restore corrupted images and recover visual content that is missing or degraded. Such degradation may arise from factors such as low resolution, occluded or masked regions, and shadow interference. This problem has become an increasingly important research topic, as people encounter and rely on visual information in nearly every aspect of daily life. Neural network–based approaches have recently emerged as highly effective solutions for this task. In particular, convolutional neural networks and transformer-based architectures have demonstrated remarkable success in producing visually convincing reconstructions. However, these models remain subject to several constraints, with one of the most significant being their inability to generate outputs of arbitrary size. For instance, most existing approaches can only process inputs and produce outputs of fixed dimensions, which restricts their flexibility and limits their applicability in real-world scenarios.

 

Recently, implicit neural function–based methods have been proposed for image processing tasks. Implicit neural representation (INR) provides a powerful framework for mapping discrete data into continuous representations, enabling flexibility and generalization. INR-based approaches have demonstrated significant progress in tasks such as image super-resolution, generation, and semantic segmentation. A key advantage of these methods is their ability to achieve super-resolution for images of arbitrary sizes. Despite these advancements, current INR-based techniques face several challenges. First, they primarily focus on intact images, making them less effective in scenarios involving damaged or incomplete regions. Second, many methods emphasize the continuous representation of pixel color while neglecting the rich semantic information embedded within images. Finally, training these models often requires large numbers of paired datasets, which are particularly difficult to obtain for tasks such as image de-shadowing due to the scarcity of labeled data. In this dissertation, we propose three novel approaches to address these limitations.
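To make the INR idea above concrete (a minimal sketch with assumed layer sizes, not one of the dissertation's proposed models): an implicit representation is a small network mapping continuous (x, y) coordinates to pixel values, so the same trained function can be queried on a coordinate grid of any resolution.

    import torch
    import torch.nn as nn

    class ImplicitImage(nn.Module):
        """Map continuous (x, y) coordinates in [-1, 1]^2 to RGB values."""
        def __init__(self, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3), nn.Sigmoid(),
            )

        def forward(self, coords):        # coords: (N, 2)
            return self.net(coords)       # (N, 3) RGB values

    # Query the same representation at an arbitrary output resolution.
    model = ImplicitImage()
    H, W = 512, 512
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    rgb = model(coords).reshape(H, W, 3)  # continuous image sampled at 512x512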

To fully exploit the potential of implicit neural representations (INRs) for processing damaged images, we introduce a novel task, $SuperInpaint$, which aims to reconstruct missing regions in low-resolution images while generating complete outputs at arbitrary higher resolutions. To address the second limitation, we develop the Semantic-Aware Implicit Representation (SAIR) framework. This approach augments the implicit representation of each pixel by jointly encoding both appearance and semantic information, thereby strengthening the model’s ability to capture fine-grained details as well as broader contextual structures. Building on the proven success of INR in image reconstruction, we further extend its applicability to the image de-shadowing task. To this end, we propose a specialized method designed to remove shadows while faithfully preserving the underlying image content. A critical challenge with existing INR-based methods lies in their dependence on large training datasets, which are particularly scarce for de-shadowing due to the limited availability of labeled data. To overcome this issue, we adopt a pre-training strategy, where the model is initially trained on large-scale image inpainting datasets. This enables the network to acquire strong reconstruction priors, which are subsequently transferred and fine-tuned for the shadow removal task.

Our proposed methods present a thorough and effective response to the identified challenges, establishing a robust framework that can be applied across a diverse spectrum of image processing tasks. By systematically addressing the key limitations of existing approaches, we expand both the flexibility and scalability of implicit neural representations, thereby enabling them to manage complex scenarios with higher levels of accuracy, robustness, and efficiency.

Elevating Next-Generation Wireless Devices Towards Contactless Sensing for Healthcare Applications

Friday, July 18, 2025 - 10:00 am
Online

DISSERTATION DEFENSE


Author : Aakriti Adhikari
Advisor: Dr. Sanjib Sur
Date: July 18, 2025
Time: 10:00 am
Place: Room 2265 Innovation building
Teams Link : Join Teams Meeting

 

Abstract


There is increasing interest in technologies that can understand and perceive at-home human activities to provide personalized healthcare monitoring, aimed at early detection of disease markers and assisting physicians in making clinical decisions. Existing approaches, such as wearables, require users to wear sensors that can be cumbersome and cause discomfort. Vision-based solutions, such as optical cameras, infrared sensors, and LiDARs, can be used to design contactless at-home monitoring systems. However, these systems are limited by poor lighting and occlusion, and they are privacy-invasive. Fortunately, high-frequency millimeter-wave wireless devices provide an effective alternative for fine-grained health monitoring: millimeter-wave signals can penetrate certain obstacles, work under zero visibility, and have higher resolution than Wi-Fi. Further, major network providers are actively deploying millimeter-wave technology, a core component of next-generation wireless networks, in both large-scale networks and home routers, thereby paving the way for its widespread adoption in 5G and future devices. This opens up a new opportunity for at-home contactless sensing. But the eventual success of using millimeter-wave technology for sensing depends on system designs that address the unique challenges of millimeter-wave signals: specularity, variable reflectivity, and low resolution. These issues can lead to incomplete and noisy information about the human subject in the reflected signals, making it difficult to directly estimate human-related information. However, these reflected signals exhibit correlations with various human activities and carry distinct signatures, allowing data-driven learning models to infer information about humans from the reflected signals.


In this dissertation, we develop data-driven deep learning models to address the fundamental challenges of millimeter-wave sensing. We first design and evaluate deep learning models based on conditional Generative Adversarial Networks to estimate the posture of a person by generating high-resolution human silhouettes and predicting 3D locations of body joints. We then extend the sensing capabilities to enable contactless sleep monitoring, classifying sleeping states, and predicting sleep postures. Furthermore, we facilitate contactless lung function monitoring by combining wireless signal processing with deep learning, enabling a software-only solution for at-home spirometry tests. Finally, we demonstrate the clinical utility of millimeter-wave sensing through two real-world deployments: a contactless cardiac monitoring system for stroke patients that estimates heart rate and heart rate variability; and a bed event detection system deployed in hospitals for 24-hour monitoring of high-fall-risk patients, aiming to enable timely interventions and prevent inpatient falls. Together, these systems demonstrate the potential of millimeter-wave sensing to elevate next-generation wireless devices into scalable, privacy-preserving platforms for contactless health monitoring across both home and clinical settings.
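As a hedged illustration of the conditional-GAN formulation mentioned above (input shapes and layer choices are illustrative assumptions, not the dissertation's models): the generator maps a millimeter-wave reflection heatmap to a human silhouette, and the discriminator scores heatmap/silhouette pairs, pix2pix-style.

    import torch
    import torch.nn as nn

    # Assumed toy shapes: 1-channel 64x64 reflection heatmap -> 1-channel 64x64 silhouette.
    generator = nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
    )
    # The discriminator judges (heatmap, silhouette) pairs, hence 2 input channels.
    discriminator = nn.Sequential(
        nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Flatten(), nn.Linear(64 * 16 * 16, 1),
    )

    heatmap = torch.randn(8, 1, 64, 64)   # mmWave reflection heatmaps
    real_sil = torch.rand(8, 1, 64, 64)   # ground-truth silhouettes
    fake_sil = generator(heatmap)
    score_fake = discriminator(torch.cat([heatmap, fake_sil], dim=1))
    # Adversarial + reconstruction objective for the generator.
    g_loss = (nn.functional.binary_cross_entropy_with_logits(
                  score_fake, torch.ones_like(score_fake))
              + nn.functional.l1_loss(fake_sil, real_sil))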

Multi-Task Deep Learning Approach for Segmenting and Classifying Competitive Swimming Activities Using a Single IMU

Tuesday, July 15, 2025 - 03:00 pm
Room 2267, Innovation building

THESIS DEFENSE
 

Author: Mark Shperkin
Advisor: Dr. Homayoun Valafar
Date: July 15, 2025
Time: 03:00 pm
Place: Room 2267, Innovation building

Abstract

Competitive swimming performance analysis has traditionally relied on manual video review and multi-sensor systems, both of which are resource-intensive and impractical for everyday training use. This thesis investigates whether a single wrist-worn inertial measurement unit (IMU) can be used to automatically segment and classify swimming activities with high accuracy. We propose a multi-task deep learning pipeline based on the MTHARS (Multi-Task Human Activity Recognition and Segmentation) architecture introduced by Duan et al. to perform stroke classification, lap segmentation, stroke count estimation, and underwater kick count estimation. Data were collected from eleven collegiate-level swimmers wearing left-wrist–mounted IMUs, each performing five 100-yard sets per stroke (butterfly, backstroke, breaststroke, freestyle, and individual medley) in a 25-yard pool. This research investigates whether a single IMU can accurately classify and segment all competitive swim strokes and evaluate performance across key swimming activities. Moreover, this pipeline delivers reliable multi-metric analysis while significantly reducing the complexity and cost of sensor setups. This work contributes to the growing field of wearable-based athlete monitoring and has the potential to empower coaches and athletes with real-time, fine-grained performance feedback in competitive swimming using minimal hardware.
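As a hedged sketch of the multi-task idea described above (layer sizes, window length, and label counts are assumptions for illustration, not the MTHARS implementation): a shared 1-D convolutional encoder over an IMU window feeds separate heads for window-level stroke classification and per-timestep segmentation.

    import torch
    import torch.nn as nn

    class SwimMultiTask(nn.Module):
        """Shared encoder over a 6-axis IMU window with two task heads."""
        def __init__(self, n_strokes=5, n_seg_labels=3, channels=6):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv1d(channels, 64, 5, padding=2), nn.ReLU(),
                nn.Conv1d(64, 64, 5, padding=2), nn.ReLU(),
            )
            # Window-level stroke class (butterfly, back, breast, free, IM).
            self.cls_head = nn.Linear(64, n_strokes)
            # Per-timestep label (e.g. swim / turn / underwater kick).
            self.seg_head = nn.Conv1d(64, n_seg_labels, 1)

        def forward(self, x):              # x: (batch, 6, T)
            feats = self.encoder(x)        # (batch, 64, T)
            stroke_logits = self.cls_head(feats.mean(dim=-1))
            seg_logits = self.seg_head(feats)
            return stroke_logits, seg_logits

    model = SwimMultiTask()
    window = torch.randn(4, 6, 256)        # 4 windows of 256 IMU samples
    stroke_logits, seg_logits = model(window)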

 

Process Knowledge-Guided Neurosymbolic Learning and Reasoning

Tuesday, July 15, 2025 - 10:30 am
Online

DISSERTATION DEFENSE

Author : Kaushik Roy
Advisor: Dr. Amit Sheth
Date: July 15, 2025
Time: 10:30 am
Place: Rm 529 AI Institute
Zoom Link : Join Zoom Meeting

Abstract

Neural-network-driven artificial intelligence has achieved impressive predictive accuracy, yet its opaque, data-centric modus operandi still limits trustworthy deployment in safety-critical settings. A central barrier is the difficulty of aligning the continuous representations learned by networks with process knowledge: the formal, expert-crafted diagnostic or operational procedures that govern real-world decision making. This dissertation introduces Process Knowledge-Infused Learning and Reasoning (PK-iL), a neurosymbolic framework that injects task-specific process structures directly into end-to-end differentiable models. PK-iL marries symbolic constraints with gradient-based optimization, yielding predictors whose internal reasoning steps remain faithful to domain processes while retaining the adaptability and scale of modern deep learning.


The contributions are fourfold: (1) a formal representation for encoding process knowledge as differentiable constraints; (2) algorithms that integrate these constraints into training objectives and inference routines; (3) theoretical analysis showing how process alignment improves controllability and transparency without sacrificing expressivity; and (4) empirical validation in mental-health decision support, where psychiatric diagnostic criteria provide rigorous process ground truth. Across multiple datasets and baselines, PK-iL delivers higher diagnostic accuracy, markedly clearer explanation traces, and graceful handling of out-of-distribution cases, qualities essential for adoption as a human-AI “partner” in high-stakes workflows. These results demonstrate a viable path toward reliable, process-guided neurosymbolic AI.
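As a hedged sketch of contribution (2) above (the penalty form and all names here are illustrative assumptions, not the PK-iL implementation): one common way to make a process constraint differentiable is to add it to the task loss as a soft penalty, so gradient descent trades prediction error against violations of the expert-specified procedure.

    import torch

    def pkil_style_loss(logits, labels, ordering_scores, lam=1.0):
        """Task loss plus a soft penalty for violating an assumed process constraint.

        ordering_scores: model scores for two diagnostic steps that the expert
        process says must be assessed in order (step_a before step_b); the hinge
        term penalizes outputs that reverse that order.
        """
        task_loss = torch.nn.functional.cross_entropy(logits, labels)
        step_a, step_b = ordering_scores                 # each of shape (batch,)
        constraint_penalty = torch.relu(step_b - step_a).mean()
        return task_loss + lam * constraint_penalty

    logits = torch.randn(8, 4, requires_grad=True)
    labels = torch.randint(0, 4, (8,))
    scores = (torch.randn(8), torch.randn(8))
    loss = pkil_style_loss(logits, labels, scores)
    loss.backward()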

Real-Time Simulation of Power Electronic Converters Using Physics-Informed Neural Networks

Tuesday, July 8, 2025 - 02:00 pm
Online

Author: James Crews
Advisor: Dr. Jason Bakos
Date: July 8, 2025
Time: 02:00 pm
Place: Teams
Meeting Link

Abstract

Physics-informed neural networks (PINNs) are an emerging machine learning method for learning the behavior of physical systems described by governing differential equations. DC-DC power-electronic converters are used in a variety of industrial applications, such as motor drives and power supplies, where real-time simulation is critical for control and safety. This thesis investigates physics-informed machine learning as an approach to developing a real-time digital twin for DC-DC power converters. Traditional numerical integration methods are used to approximate the discretized behavior, and the results are compared with a trained PINN model. Modern ML frameworks (such as PyTorch and TensorFlow/Keras) are used to quickly compute exact derivatives of higher-order differential equations through automatic differentiation. The effects of fixed-point quantization on the neural network, explored using the high-level synthesis for machine learning (HLS4ML) framework, are detailed and compared with numerical integration methods, discussing the trade-offs in latency, hardware efficiency, and prediction accuracy over transient and steady-state converter operation.
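As a hedged sketch of the PINN formulation described above (the circuit model and coefficient values are simplified assumptions, not the thesis's converter model): the network maps time to a state variable, automatic differentiation supplies the time derivative, and the physics residual of a first-order equation is added to the initial-condition loss.

    import torch
    import torch.nn as nn

    # Toy physics: first-order response dv/dt = (V_target - v) / tau, a crude
    # stand-in for a converter's averaged output-voltage transient (assumed values).
    V_target, tau = 5.0, 1e-3

    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                        nn.Linear(32, 32), nn.Tanh(),
                        nn.Linear(32, 1))
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(1000):
        t = torch.rand(128, 1) * 5e-3            # collocation times in [0, 5 ms]
        t.requires_grad_(True)
        v = net(t)
        # Automatic differentiation gives dv/dt at the collocation points.
        dv_dt = torch.autograd.grad(v, t, grad_outputs=torch.ones_like(v),
                                    create_graph=True)[0]
        residual = dv_dt - (V_target - v) / tau   # physics residual of the ODE
        ic_loss = (net(torch.zeros(1, 1)) - 0.0) ** 2  # enforce v(0) = 0
        loss = residual.pow(2).mean() + ic_loss.mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()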