Center for Human-Computer Innovations (CeHCI)
CeHCI investigates multimodal emotion modeling and empathic response modeling to build human-centered design systems. By multimodal, we mean recognizing a human’s emotion from facial expressions, speech, and movement (such as posture and gait). Because we extend a computing system from software to a physical space, novel approaches to providing empathic responses are required. To achieve this, we use emotion-based interactions with sensor-rich, ubiquitous computing and ambient intelligence.
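As an illustration of the multimodal approach (not a description of any deployed CeHCI system), the sketch below shows one common way to combine per-modality predictions: late fusion by weighted averaging of each modality’s emotion probability distribution. The emotion labels, weights, and probability values are placeholders.

```python
# A minimal late-fusion sketch (hypothetical, not CeHCI's actual pipeline):
# each modality-specific classifier emits a probability distribution over
# emotion labels, and the distributions are averaged into a final prediction.
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "neutral"]

def fuse_modalities(face_probs, speech_probs, movement_probs,
                    weights=(0.4, 0.3, 0.3)):
    """Weighted average of per-modality emotion distributions."""
    stacked = np.stack([face_probs, speech_probs, movement_probs])
    fused = np.average(stacked, axis=0, weights=weights)
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: dummy distributions standing in for real classifier outputs.
face = np.array([0.6, 0.1, 0.1, 0.1, 0.1])
speech = np.array([0.3, 0.2, 0.1, 0.2, 0.2])
movement = np.array([0.5, 0.1, 0.1, 0.2, 0.1])
label, dist = fuse_modalities(face, speech, movement)
print(label, dist)
```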
Research Team
- Norshuhani Zamin (Ph.D. Information Technology, Universiti Teknologi Petronas)
- Judith J. Azcarraga (Ph.D. Computer Science, De La Salle University)
- Rafael Cabredo (Ph.D. Information Science and Technology, Osaka University)
- Gregory G. Cu (Ph.D. Computer Science candidate, De La Salle University)
- Merlin Teodosia C. Suarez (Ph.D. Computer Science, De La Salle University)
- Jocelynn W. Cu (Ph.D. Computer Science candidate, De La Salle University)
- Fritz Kevin S. Flores (Ph.D. Computer Science candidate, De La Salle University)
- Ryan A. Ebardo (Doctor of Information Technology, De La Salle University)
Research Projects
Patunhay: A Software to Generate Motion Capture Animated 3D Models of Philippine Folk Dances for Digital Archiving
This research aims to create software that automatically generates motion-capture-animated 3D models to preserve Philippine folk dances. The motion capture data was gathered from volunteer dancers wearing motion capture smart suits. The software, built in Unity, automates the incorporation of the motion capture data into a 3D model. The generated animated 3D models are added to a digital archive of Philippine folk dances, which offers a more intuitive way of preserving the dances and gives users a complete picture of a folk dance’s spatial layout.
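At the core of driving a rigged 3D model with mocap data is forward kinematics: composing each joint’s captured rotation down the skeletal hierarchy to obtain world-space bone positions per frame. The Python sketch below illustrates this step on a toy three-bone skeleton; the actual software is built in Unity, and the bone layout and angles here are hypothetical.

```python
# A minimal forward-kinematics sketch of how per-frame mocap joint rotations
# can drive a rigged skeleton (hypothetical toy skeleton, not the Unity code).
import numpy as np

def rot_z(deg):
    """Rotation matrix about the z-axis (single-axis joints, for brevity)."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Skeleton as (parent_index, local_offset); -1 marks the root.
BONES = [(-1, np.zeros(3)),               # hips (root)
         (0, np.array([0.0, 0.5, 0.0])),  # spine
         (1, np.array([0.0, 0.5, 0.0]))]  # head

def pose_frame(joint_angles_deg):
    """Compose each joint's rotation down the hierarchy for world positions."""
    world_rot = [None] * len(BONES)
    world_pos = [None] * len(BONES)
    for i, (parent, offset) in enumerate(BONES):
        local = rot_z(joint_angles_deg[i])
        if parent < 0:
            world_rot[i], world_pos[i] = local, np.zeros(3)
        else:
            world_rot[i] = world_rot[parent] @ local
            world_pos[i] = world_pos[parent] + world_rot[parent] @ offset
    return world_pos

# One captured frame: hips bend 10 degrees, spine 20, head 5.
print(pose_frame([10.0, 20.0, 5.0]))
```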
NewsMead: A User-Centric Design of a News Personalization Application for Older Adults
The transition of newspapers from traditional print to online platforms has presented challenges for older adults, who may struggle with unfamiliar digital interfaces. However, as older adults increasingly engage with digital content, there is a growing need for assistive technologies that support their digital literacy and inclusion. This study addresses this need by developing a user-centric news personalization application tailored to the needs and preferences of older adults. The initial application prototype was developed through a literature review, wireframing, and prototyping. The results revealed that older adults do not actively consume news because of irrelevant content and because they perceive the activity as outside their lifestyle. When provided with a news application that accounts for these difficulties, they developed a renewed interest in staying informed and connected with current events.
Pa2nhay: A Markerless Approach to Preserving Philippine Folk Dances
Folk dances are a crucial component of a country’s intangible cultural heritage (ICH) for their ability to express a culture’s traditions, history, and mythology through dynamic movements and rhythmic patterns. These dances, however, are currently at risk of being forgotten due to neglect, the lack of a preservation method, or cumulative inconsistencies as they are passed down. One way of preserving these dances, and ICH in general, is to store them in digital archives. Previous works have explored motion capture technologies to preserve folk dances. This study aims to further the efforts to protect Philippine folk dances by adding to the existing digital archive and improving the application used to access it. Video footage of the dance performances will be recorded with the help of volunteers who are knowledgeable and skilled in dancing, and motion capture data will be extracted from the footage using a markerless motion capture system. The extracted data will then be applied to a 3D model to animate it, and the resulting animated 3D model will be added to the existing digital archive of Philippine folk dances. Alongside this, Patunhay, the software currently used to view these dances, will gain an improved viewing experience and teaching capabilities.
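Markerless capture typically means estimating body keypoints directly from video frames rather than from suit markers. As one possible realization (the study does not name a specific system), the sketch below uses Google’s MediaPipe Pose to extract 33 body landmarks per frame; the video filename is hypothetical.

```python
# A minimal markerless-capture sketch using MediaPipe Pose (an assumption;
# the study's actual markerless system is not specified).
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_keypoints(video_path):
    """Yield per-frame (x, y, z, visibility) pose landmarks from a video."""
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV reads frames as BGR.
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                yield [(lm.x, lm.y, lm.z, lm.visibility)
                       for lm in results.pose_landmarks.landmark]
    cap.release()

frames = list(extract_keypoints("dance_performance.mp4"))  # hypothetical file
print(f"captured {len(frames)} frames of 33 landmarks each")
```

The extracted keypoint trajectories can then be retargeted onto the rig of a 3D model, the same animation step used in the marker-based pipeline.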
Facial Emotion Recognition in Mukbang Viewers using Support Vector Machines
Mukbang is a social media phenomenon that has grown in popularity over the last decade. In mukbangs, a host or mukbanger eats large amounts of food in front of an online audience. With this growing popularity, interest in the psychology of mukbang viewers and their motivations for viewing has increased. Academic work on this subject consists mostly of empirical studies conducted through interviews and surveys; computational methods have the potential to extend this research, yet current research does not explore viewers’ emotions while watching mukbang. In this study, video samples were collected from ten Filipino university students tasked to watch four mukbang videos and answer the Positive and Negative Affect Schedule (PANAS) questionnaire, which measured the participants’ affect states before and after watching the videos. The data collected from participants’ emotional responses was compiled into a dataset called EFMV (Emotions of Filipino Mukbang Viewers). Using OpenFace 2.0, action units based on the Facial Action Coding System (FACS) were extracted from the video samples. The extracted action units were fed into a Support Vector Machine (SVM) trained on the micro-expression databases SAMM, MMEW, CASME II, and EFMV to detect three basic emotions: happiness, disgust, and surprise. The trained SVM achieved an accuracy of up to 80%, which is competitive with other FER models for micro-expressions. The PANAS questionnaires also showed a significant change in affect states before and after watching the videos.
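The AU-to-SVM step can be pictured as follows: each video frame contributes a vector of action-unit intensities, which the SVM maps to an emotion label. The scikit-learn sketch below assumes the AU intensities were exported to a CSV following OpenFace’s AUxx_r intensity-column convention; the filename and label column are hypothetical, and the hyperparameters are illustrative rather than the study’s.

```python
# A minimal sketch of the AU-to-SVM step, assuming OpenFace 2.0 exported
# per-frame action-unit intensities to CSV ("AUxx_r" columns follow OpenFace's
# output format; the file and the "emotion" label column are hypothetical).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("efmv_action_units.csv")  # hypothetical dataset file
au_cols = [c for c in df.columns if c.startswith("AU") and c.endswith("_r")]
X, y = df[au_cols], df["emotion"]  # labels: happiness / disgust / surprise

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```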
GetEmotion: A Data Collection Tool for Building an Affective Haptic Dataset
Interpersonal communication between humans is driven by emotions expressed through various channels, such as an individual’s voice, body language, facial expressions, and physiological signals. Touch is one such nonverbal channel, and it can contribute significantly to emotion recognition on one of the most common devices people own today: the smartphone. This study aims to develop a haptic touch recognition model using machine learning methods to interpret emotional states from haptic data. To achieve this, the research will comprehensively review existing literature on smartphone sensors, stimuli datasets, haptic touch, and emotion recognition. This study will also implement an Android application designed to record haptic touch activities and create a haptic touch dataset for training, testing, validating, and evaluating various machine learning models to determine the most effective approach for recognizing emotions through haptic touch. The insights gained from this research are expected to contribute to emotion recognition and human-computer interaction, enhancing the capability of digital devices to understand and respond to users’ emotional states.
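One way the logged touch data could feed a classifier is by summarizing each touch stroke into a feature vector. The sketch below assumes a hypothetical log schema (timestamp, x, y, pressure, and touch-area size per sample), roughly matching the fields Android’s MotionEvent exposes; the feature set is illustrative, not the study’s final design.

```python
# A minimal sketch of turning logged touch events into features for emotion
# classification (hypothetical schema: the Android app is assumed to log one
# row per touch sample with timestamp, x, y, pressure, and touch-area size).
import pandas as pd

def stroke_features(stroke: pd.DataFrame) -> dict:
    """Summarize one touch stroke into features a classifier can consume."""
    duration = stroke["timestamp"].max() - stroke["timestamp"].min()
    dx = stroke["x"].diff().fillna(0)
    dy = stroke["y"].diff().fillna(0)
    path_len = (dx ** 2 + dy ** 2).pow(0.5).sum()
    return {
        "duration_ms": duration,
        "path_length_px": path_len,
        "mean_pressure": stroke["pressure"].mean(),
        "mean_size": stroke["size"].mean(),
        "speed_px_per_ms": path_len / duration if duration else 0.0,
    }

# Example: a short synthetic stroke standing in for real logged data.
stroke = pd.DataFrame({
    "timestamp": [0, 16, 32, 48],
    "x": [100, 104, 110, 118],
    "y": [200, 198, 195, 190],
    "pressure": [0.42, 0.55, 0.60, 0.48],
    "size": [0.11, 0.12, 0.12, 0.11],
})
print(stroke_features(stroke))
```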
Proposed Projects
We are looking for candidates interested in working on the following projects:
- Analyzing the Correlation between Case Rates and Hospital Occupancy
- Analyzing Social Media Data to Predict Potential Disease Outbreaks and Understand Public Perception and Readiness
- Personalized Learning Pathways for Students with Special Needs: AI-Driven Adaptive Learning Systems
- Emotion Recognition Systems for Autism Intervention: Developing AI-Based Tools for Social Skills Training
- Predictive Analytics for Early Detection of Learning Disabilities: Using AI Algorithms on Educational Data
- AI-Based Behavior Management Systems for Students with Emotional and Behavioral Disorders