Driving Interface Group
Research Overview
Our team aims to realize a safe and comfortable mobility society through three lines of work: technologies for monitoring drivers and driving environments using image processing and machine learning; the design of human-machine interfaces (HMIs) that account for driver workload; and the development of human-centered control interfaces.
Our research themes include understanding driving behavior through the estimation of glance, posture, and foot movements; predicting drivers' situational awareness, including their awareness of surrounding traffic; and detecting driver workload and drowsiness. For example, we have proposed systems that estimate situational awareness against standard glance-behavior models, track foot movements to evaluate the appropriateness of pedal operations, and detect symptoms of medical conditions through multimodal monitoring and voice interaction.
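To make the glance-based idea above concrete, here is a minimal, purely illustrative sketch (not the group's actual model): it scores situational awareness by comparing a driver's observed glance-time distribution over areas of interest (AOIs) with an assumed "standard" reference distribution. All AOI names and reference shares are hypothetical assumptions.

```python
# Hypothetical reference glance-time shares per AOI (illustrative only).
REFERENCE = {
    "road_ahead": 0.70,
    "left_mirror": 0.08,
    "right_mirror": 0.08,
    "rear_mirror": 0.07,
    "instrument_panel": 0.07,
}

def awareness_score(observed_seconds: dict) -> float:
    """Return a 0-1 score; 1 means the observed glance shares match the reference."""
    total = sum(observed_seconds.values())
    if total == 0:
        return 0.0
    # Total-variation distance between observed and reference distributions.
    tv = 0.5 * sum(
        abs(observed_seconds.get(aoi, 0.0) / total - share)
        for aoi, share in REFERENCE.items()
    )
    return 1.0 - tv

# A driver matching the reference pattern scores higher than one
# fixated on the road ahead who never checks the mirrors.
attentive = {"road_ahead": 35.0, "left_mirror": 4.0, "right_mirror": 4.0,
             "rear_mirror": 3.5, "instrument_panel": 3.5}
fixated = {"road_ahead": 50.0}
```

In practice, a real glance-behavior model would be learned from data and account for the driving context (e.g., lane changes demanding mirror checks); this toy distance-based score only shows the overall shape of the comparison.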
In addition, we are working on multimodal notifications that combine haptic, auditory, and visual cues, and on the design of multimodal HMIs for tactical-level driving in highly automated vehicles, with the aim of providing intuitive, flexible interfaces that support drivers' perception, decision-making, and actions.
Grounded in a deep understanding of human behavior and cognition, and through the integration of informatics and engineering, we continue to tackle diverse societal challenges and create innovations that shape the future.
