Center for Human-Computer Innovations (CeHCI)
CeHCI investigates multimodal emotion modeling and empathic response modeling to build human-centered design systems. By multimodal, we mean recognizing a human's emotion from facial expressions, speech, and movement (such as posture and gait). Because we extend a computing system from software into a physical space, novel approaches to providing empathic responses are required. To achieve this, we combine emotion-based interaction with sensor-rich ubiquitous computing and ambient intelligence.
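One common way to combine facial, speech, and movement cues is late fusion: each modality produces its own emotion scores, which are then merged into a single decision. The sketch below illustrates this idea with a simple weighted average; the emotion labels, weights, and function names are hypothetical and do not describe CeHCI's actual system.

```python
# Illustrative sketch of late fusion across modalities.
# Labels, weights, and inputs are hypothetical examples only.

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse_modalities(face, speech, movement, weights=(0.5, 0.3, 0.2)):
    """Merge per-modality emotion scores via a weighted average
    and return the highest-scoring emotion label."""
    fused = {}
    for emotion in EMOTIONS:
        fused[emotion] = (weights[0] * face.get(emotion, 0.0)
                          + weights[1] * speech.get(emotion, 0.0)
                          + weights[2] * movement.get(emotion, 0.0))
    return max(fused, key=fused.get)

# Example: each recognizer outputs a score distribution over emotions.
face = {"happy": 0.7, "neutral": 0.3}
speech = {"happy": 0.4, "sad": 0.2, "neutral": 0.4}
movement = {"neutral": 0.6, "happy": 0.4}
print(fuse_modalities(face, speech, movement))  # happy
```

Late fusion keeps the per-modality recognizers independent, so a noisy or missing sensor stream degrades the combined estimate gracefully rather than breaking the whole pipeline.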