COLLOQUIUM
Department of Computer Science and Engineering
University of South Carolina

Affective Computing from Human Faces

Yan Tong
Visualization and Computer Vision Lab
GE Global Research Center

Date: Tuesday, September 14, 2010
Time: 3:30-4:30 pm (1530-1630)
Place: 300 Main B201

Abstract

Recognizing emotion, attitude, and affective state from the changes on a face is a natural human capability. Vision-based affective computing equips computers with the ability to recognize and interpret human emotions from captured videos or still images, with the goal of assisting humans in many aspects of life and improving their quality of life. Affective computing is an interdisciplinary area spanning computer vision, pattern recognition, and cognitive science, with broad applications in behavior analysis, human-computer interaction, and security. In this talk, Dr. Yan Tong will share her research on vision-based affective computing in two primary topics: face alignment, and spontaneous facial activity modeling and understanding.

Face alignment is the process of localizing prominent facial components (e.g., eyes and mouth) in an image and is the foundation for all face-related research, such as face recognition and facial expression recognition. Supervised face alignment is prevalent in real-time applications; however, to work well it often requires many training images, each of which must be labeled with a set of landmarks. In practice, the labeling is done manually, which is labor-intensive and error-prone. A semi-supervised least-squares congealing approach was developed to estimate a set of landmarks for a large image ensemble with minimal human intervention. The proposed method achieves much more accurate labeling results than the state of the practice.

Spontaneous facial activity modeling and understanding under natural conditions is a key task of vision-based affective computing.
In spite of recent advances in computer vision, spontaneous facial activity recognition remains very challenging. A unified facial activity model was developed to systematically model and learn the semantic and dynamic relationships among head movement, facial muscular movements, and the uncertainty of visual measurements. Facial activity recognition is then performed through probabilistic inference over the model. The proposed model yields significant improvements in spontaneous facial activity recognition compared to the state of the art.

Dr. Tong will also discuss her planned research activities for the next three to five years. Specifically, she is interested in developing theories of spontaneous facial expression modeling and recognition, as well as of human activity recognition. Their potential applications in security, surveillance, medical diagnosis, and human-computer interaction will also be discussed.

Biography

Yan Tong has been a research scientist in the Visualization and Computer Vision Lab of GE Global Research since January 2008. She received a Ph.D. degree in Electrical Engineering from Rensselaer Polytechnic Institute (RPI) in Troy, NY, in 2007, and received the Allen B. DuMont Prize in the same year. Her Ph.D. thesis research focused on spontaneous facial activity modeling and understanding through the integration of a probabilistic model and computer vision techniques. At GE Global Research, she is active in the areas of biometric fusion, face modeling, and face alignment. She is a recipient of the GE Bronze Patent Medallion (2009), the GE Level 3 Award (the second highest level, 2009), and the Biometrics Workshop Best Paper Honorable Mention Award (CVPR 2009). She has published over 20 journal and peer-reviewed conference papers and 4 book chapters. One of her PAMI papers has been cited 57 times; overall, her publications have been cited over 110 times. Dr. Tong also has 2 US patents pending and 4 disclosures.