
DISSERTATION DEFENSE 

Author: Chrisogonas Odero Odhiambo

Advisor: Dr. Homayoun Valafar

Date: Oct 31, 2022

Time: 3:00 pm

Place: 2265 Innovation and Teams


Abstract 

Humans engage in a wide range of simple and complex activities. Human Activity Recognition (HAR) is typically framed as a classification problem in computer vision and pattern recognition, in which the goal is to recognize these activities from observed data. Recent technological advancements, the miniaturization of electronic devices, and the deployment of cheaper and faster data networks have given rise to environments augmented with contextual and real-time information, such as smart homes and smart cities. These context-aware environments, alongside smart wearable sensors, have opened the door to numerous opportunities for adding value and personalized services for citizens. Vision-based and sensor-based HAR find diverse applications in healthcare, surveillance, sports, event analysis, Human-Computer Interaction (HCI), rehabilitation engineering, and occupational science, among others, resulting in significantly improved human safety and quality of life.

Despite being an active research area for decades, HAR still faces challenges in terms of gesture complexity, computational cost on small devices, energy consumption, and data annotation limitations. In this research, we investigate methods to sufficiently characterize and recognize complex human activities, with the aim of improving recognition accuracy, reducing computational cost and energy consumption, and creating a research-grade sensor data repository to advance research and collaboration. This research examines the feasibility of detecting natural human gestures in common daily activities. Specifically, we utilize smartwatch accelerometer data together with structured local context attributes, and apply AI algorithms to detect the complex activities of medication-taking, smoking, and eating.
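
As an illustration only (the abstract does not specify the pipeline), a minimal sketch of window-based gesture classification from smartwatch accelerometer data might look like the following. The sampling rate, window length, features, classifier, and the synthetic data are all assumptions, not the dissertation's actual design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical parameters; the dissertation does not specify these values.
SAMPLE_RATE_HZ = 50            # assumed accelerometer sampling rate
WINDOW = SAMPLE_RATE_HZ * 3    # assumed 3-second sliding window

def extract_features(window):
    """Simple per-axis statistics for one (WINDOW, 3) accelerometer window."""
    return np.concatenate([
        window.mean(axis=0),                           # mean per axis
        window.std(axis=0),                            # variability per axis
        np.abs(np.diff(window, axis=0)).mean(axis=0),  # mean sample-to-sample change
    ])

def sliding_windows(stream, step=WINDOW // 2):
    """Yield 50%-overlapping windows from a (T, 3) accelerometer stream."""
    for start in range(0, len(stream) - WINDOW + 1, step):
        yield stream[start:start + WINDOW]

# Toy synthetic data standing in for annotated smartwatch recordings.
rng = np.random.default_rng(0)
stream = rng.normal(size=(SAMPLE_RATE_HZ * 30, 3))     # 30 s of readings
X = np.array([extract_features(w) for w in sliding_windows(stream)])
y = rng.choice(["medication", "smoking", "eating"], size=len(X))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```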

A major part of my work centers on modeling human activity and applying machine learning techniques to automatically detect specific activities using accelerometer data from smartwatches. Our work stands out as the first to model wearable-sensor-based human activity as a linguistic representation with grammar and syntax, deriving clear semantics for complex activities whose alphabet comprises atomic activities. We apply machine learning to learn and predict complex human activities. I demonstrate the use of one of our unified models to recognize two activities with a smartwatch: medication-taking and smoking.
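
The abstract does not give the grammar itself, so the following is only a hedged toy illustration of the idea: a complex activity expressed as a production over an alphabet of atomic gestures, with an exact-match parser standing in for the actual recognition machinery. The activity names, atomic gestures, and productions are invented for the example.

```python
# Illustrative only: a toy grammar over atomic activities. The real alphabet,
# productions, and semantics are defined in the dissertation, not here.
GRAMMAR = {
    # A complex activity is a sentence; atomic gestures are its words.
    "medication_taking": ["reach", "grasp_container", "open",
                          "hand_to_mouth", "swallow"],
    "smoking":           ["grasp", "hand_to_mouth", "inhale", "lower_hand"],
}

def parse(atomic_sequence):
    """Return the complex activities whose production exactly matches a
    recognized sequence of atomic activities (toy exact-match parser)."""
    return [name for name, production in GRAMMAR.items()
            if production == atomic_sequence]

print(parse(["grasp", "hand_to_mouth", "inhale", "lower_hand"]))  # ['smoking']
```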

Another major part of my work addresses the problem of HAR activity misalignment through edge-based computing at the point of data origination, leading to improved, rapid data annotation, albeit under the assumption that subjects faithfully demarcate gesture start and end points. Lastly, I propose a theoretical framework for implementing a library of shareable human activity models. The results of this work can be applied to build a rich portal of usable human activity models, easily installable on handheld mobile devices such as phones or smart wearables, to assist human agents in discerning activities of daily living. This is akin to a social-media platform for human gestures or capability models. The goal of such a framework is to put the power of HAR into the hands of everyday users and to democratize the service by enabling persons with special skills to share their skills or abilities through downloadable trained models.
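
As a rough sketch of the edge-annotation idea, assuming the wearer marks gesture boundaries on the device itself so that samples are labeled at origination rather than in a post-hoc pass: every class, method, and field name below is hypothetical and not the dissertation's API.

```python
import time
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class EdgeAnnotator:
    """Toy edge-side annotator: the subject demarcates gesture start/end on
    the watch, so each accelerometer sample carries its label immediately.
    All names here are hypothetical."""
    current_label: Optional[str] = None
    records: List[Tuple[float, tuple, Optional[str]]] = field(default_factory=list)

    def mark_start(self, label: str) -> None:
        # Subject taps "start" on the watch at the beginning of a gesture.
        self.current_label = label

    def mark_end(self) -> None:
        # Subject taps "end"; subsequent samples are unlabeled.
        self.current_label = None

    def on_sample(self, xyz: tuple) -> None:
        # Called once per accelerometer reading; label attached at origination.
        self.records.append((time.time(), xyz, self.current_label))

ann = EdgeAnnotator()
ann.mark_start("medication_taking")
ann.on_sample((0.1, -0.9, 0.2))
ann.mark_end()
ann.on_sample((0.0, -1.0, 0.0))
print(ann.records[0][2], ann.records[1][2])  # medication_taking None
```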