Author: Utkarshani Jaimini
Advisor: Dr. Amit Sheth
Date: June 19th, 2025
Time: 09:00 am
Place: AI Institute Seminar room
 

Abstract

Understanding and reasoning about cause and effect is innate to human cognition. In everyday life, humans continuously engage in causal reasoning and hypothetical retrospection to make decisions, plan actions, and interpret events. This cognitive ability allows us to ask questions such as: “What caused this situation?”, “What will happen if I take this action?”, or “What would have happened had I chosen differently?” This intuitive capacity to form mental models of the world, infer causal relationships, and reason about alternative scenarios, particularly counterfactuals, is central to our intelligence and adaptability.

In contrast, current machine learning (ML) and artificial intelligence (AI) systems, despite significant advances in learning from large-scale data and representing knowledge across time and space, lack a fundamental understanding of causality and counterfactual reasoning. This limitation poses challenges in high-stakes domains such as healthcare, autonomous systems, and manufacturing, where causal reasoning is indispensable for explanation, decision-making, and generalization. As argued by researchers such as Judea Pearl and Gary Marcus, endowing AI systems with causal reasoning capabilities is critical for building robust, generalizable, and human-aligned intelligence.
This dissertation proposes a novel framework: Causal Neuro-Symbolic (Causal NeSy) Artificial Intelligence, an integration of causal modeling with neuro-symbolic (NeSy) AI. The goal of Causal NeSy AI is to bridge the gap between statistical learning and causal reasoning, enabling machines to model, understand, and reason about the underlying causal structure of the world while leveraging the strengths of both neural and symbolic representations. At its core, the framework leverages causal Bayesian networks, encoded through a series of ontologies, to represent and propagate structured causal knowledge. By unifying structured causal symbolic knowledge with neural inference, the framework introduces a scalable and explainable causal reasoning pipeline grounded in knowledge graphs.

The proposed Causal NeSy framework has been validated on the CLEVRER-Humans benchmark dataset, which involves video-based event causality annotated by human experts, and in several real-world domains, including smart manufacturing and autonomous driving, areas that require high levels of robustness, interpretability, and causal understanding. Empirical results demonstrate that integrating causal modeling into NeSy architectures significantly enhances both performance and explainability, particularly in settings with limited data or complex counterfactual scenarios.

This dissertation advances the field of AI by proposing a unified framework that imbues NeSy systems with causal reasoning capabilities. By enabling machines to model, infer, and reason about causal structures, this work takes a crucial step toward building more human-aligned, trustworthy, and generalizable AI systems. It introduces scalable, explainable, and bias-aware methodologies for causal reasoning, moving AI closer to human-like understanding. The contributions pave the way for future intelligent systems capable of meaningful intervention, retrospective explanation, and counterfactual reasoning. The Causal NeSy AI paradigm opens promising avenues for future research at the intersection of causality, learning, and reasoning, a necessary convergence on the path to truly intelligent systems.
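For readers unfamiliar with the formalism the abstract refers to, the short Python sketch below hand-rolls a three-variable causal Bayesian network and contrasts an observational query with an interventional do-query, the distinction between "seeing" and "doing" that causal reasoning makes explicit. It is a minimal illustration only: the variables, probabilities, and function names are hypothetical and are not taken from the dissertation's ontologies or reasoning pipeline.

    # Illustrative only: a hand-rolled causal Bayesian network over
    # Rain -> Sprinkler, with Rain and Sprinkler both causing WetGrass.
    # All probabilities are invented for the example.
    from itertools import product

    # P(Rain = 1) and P(Rain = 0)
    P_rain = {1: 0.2, 0: 0.8}
    # P(Sprinkler = s | Rain = r), indexed as P_sprinkler[r][s]
    P_sprinkler = {1: {1: 0.01, 0: 0.99}, 0: {1: 0.40, 0: 0.60}}
    # P(WetGrass = 1 | Rain = r, Sprinkler = s), indexed as P_wet[(r, s)]
    P_wet = {(1, 1): 0.99, (1, 0): 0.80, (0, 1): 0.90, (0, 0): 0.05}

    def p_wet_observational():
        """Observational P(WetGrass = 1): sum over all parent settings."""
        return sum(P_rain[r] * P_sprinkler[r][s] * P_wet[(r, s)]
                   for r, s in product((0, 1), repeat=2))

    def p_wet_do_sprinkler(on):
        """Interventional P(WetGrass = 1 | do(Sprinkler = on)):
        the Rain -> Sprinkler edge is cut, so Sprinkler is fixed to `on`."""
        return sum(P_rain[r] * P_wet[(r, on)] for r in (0, 1))

    if __name__ == "__main__":
        print(f"P(WetGrass=1)                   = {p_wet_observational():.3f}")
        print(f"P(WetGrass=1 | do(Sprinkler=1)) = {p_wet_do_sprinkler(1):.3f}")

Running the sketch shows that the interventional quantity differs from the observational one, because the intervention severs the influence of Rain on Sprinkler; encoding such structured causal knowledge so that it can be queried and propagated is the kind of reasoning the Causal NeSy framework targets at scale.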