Gunes Apr 2026
Project suggestions from Prof Hatice Gunes

1. Detect the Face
The first step is to identify the face within a video frame. Researchers like Gunes often use standard detection techniques to isolate the facial area, ensuring that background noise does not interfere with the feature extraction.
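A minimal sketch of this step, assuming OpenCV and its bundled Haar cascade (the notes only call for "standard detection techniques", so the detector choice here is an assumption):

```python
import cv2

# Haar cascade shipped with OpenCV; any detector returning a face box would do.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_face(frame):
    """Return the (x, y, w, h) box of the largest detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep only the largest box so background clutter does not leak into
    # the later feature extraction.
    return max(faces, key=lambda box: box[2] * box[3])
```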
2. Compute Optical Flow
To capture movement over time, optical flow is calculated between two consecutive frames. This process determines the magnitude and direction of pixel movement within the detected facial region.
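A sketch of that computation, assuming Farnebäck dense optical flow from OpenCV (the notes do not say which optical-flow algorithm is used) and the face box from the previous step:

```python
import cv2

def face_flow(prev_frame, curr_frame, box):
    """Dense optical flow (per-pixel dx, dy) inside the detected face box."""
    x, y, w, h = box
    prev_roi = cv2.cvtColor(prev_frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    curr_roi = cv2.cvtColor(curr_frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    # Farnebäck flow between two consecutive frames; the numeric parameters
    # are common defaults rather than tuned values.
    flow = cv2.calcOpticalFlowFarneback(prev_roi, curr_roi, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Per-pixel magnitude and direction of the movement.
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return flow, magnitude, angle
```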
3. Extract the Angle Feature
For specific gestures like head nods or shakes, the angle of the motion is considered the distinguishing feature. This "angle feature" represents the trajectory of the head's movement in 2D space.
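One way such a feature could be computed, assuming the per-frame flow field is summarised by its mean vector (the notes do not spell out the exact reduction):

```python
import numpy as np

def angle_feature(flow):
    """Collapse a dense flow field into a 2D head-motion descriptor:
    the dominant direction of movement and its average strength."""
    dx = float(np.mean(flow[..., 0]))  # average horizontal motion
    dy = float(np.mean(flow[..., 1]))  # average vertical motion
    angle = np.degrees(np.arctan2(dy, dx)) % 360.0  # direction in [0, 360)
    magnitude = float(np.hypot(dx, dy))
    return angle, magnitude

# Rough intuition: a head shake gives angles near 0/180 degrees (horizontal
# motion), while a nod gives angles near 90/270 degrees (vertical motion).
```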
4. Normalize and Segment
Movement is segmented into temporal phases (e.g., neutral → onset → apex → offset). This ensures the machine can recognize when a gesture starts, peaks in intensity, and ends.
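A toy sketch of the segmentation, assuming a simple threshold on per-frame motion magnitude (the thresholds and the labelling rule are illustrative assumptions, not a method from the notes):

```python
def segment_phases(magnitudes, low=0.2, high=1.0):
    """Label each frame's motion magnitude with a temporal phase.
    Thresholds are illustrative; in practice they would be normalised
    per subject or learned from annotated data."""
    if not magnitudes:
        return []
    peak_index = magnitudes.index(max(magnitudes))
    labels = []
    for i, m in enumerate(magnitudes):
        if m < low:
            labels.append("neutral")   # little or no movement
        elif m >= high:
            labels.append("apex")      # gesture at peak intensity
        elif i < peak_index:
            labels.append("onset")     # movement ramping up towards the peak
        else:
            labels.append("offset")    # movement dying away after the peak
    return labels

print(segment_phases([0.1, 0.4, 1.3, 1.2, 0.5, 0.1]))
# ['neutral', 'onset', 'apex', 'apex', 'offset', 'neutral']
```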
5. Fuse Multi-modal Data
Gunes’s work often emphasizes multi-modal fusion, where facial features are combined with other modalities like body gestures or audio markers (e.g., MFCCs) to improve the accuracy of emotion recognition.
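A sketch of one possible feature-level fusion, assuming per-clip summary statistics for the visual stream and librosa for the MFCCs (the notes do not say whether fusion happens at the feature level or the decision level):

```python
import numpy as np
import librosa  # used here only to extract MFCCs from the audio track

def fused_feature(angles, magnitudes, audio, sr):
    """Feature-level fusion: head-motion statistics concatenated with
    MFCC statistics from the synchronised audio stream."""
    visual = np.array([
        np.mean(angles), np.std(angles),
        np.mean(magnitudes), np.std(magnitudes),
    ])
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)            # (13, n_frames)
    acoustic = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # (26,)
    return np.concatenate([visual, acoustic])  # one fused vector per clip
```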
✅ Result
The generated feature (such as a 2D head motion angle) allows a system to classify human affective states or non-verbal behaviors.
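To make the end-to-end idea concrete, a toy classification sketch on such features; the classifier (an RBF SVM) and the labels are placeholders rather than anything prescribed above:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy stand-in data: six clips, each reduced to (mean angle, mean magnitude).
# Real inputs would be the fused vectors built in the steps above.
X = np.array([[0, 2.0], [180, 1.8], [10, 2.2],    # mostly horizontal motion
              [90, 2.1], [270, 1.9], [85, 2.0]])  # mostly vertical motion
y = ["shake", "shake", "shake", "nod", "nod", "nod"]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([[95, 2.0]]))  # a near-vertical motion should map to "nod"
```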