FER-TBI: Facial Expression Recognition for Traumatic Brain Injured Patients
Published in the Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP 2018), 2018

Abstract
Facial expression recognition (FER) of Traumatic Brain Injured (TBI) patients in realistic, unconstrained settings poses unique challenges not present in standard FER systems. TBI patients exhibit restricted muscle movements, reduced facial expressivity, non-cooperative behaviour, impaired reasoning, and inappropriate responses — all of which severely degrade the performance of conventional FER methods trained on healthy subjects.
This paper investigates these challenges and proposes a CNN-LSTM architecture that exploits spatio-temporal information to classify and analyse the mood of TBI patients. A dedicated TBI database was collected from residents of a neurological rehabilitation centre under three scenarios — cognitive, physiotherapy, and social rehabilitation activities — to ensure ecological validity. The model captures subtle temporal variations in facial muscle movements that are hallmarks of emotion in this population.
Experimental evaluation on the collected TBI database demonstrates an accuracy of 87.97%, significantly outperforming existing FER approaches when applied to TBI subjects. This work establishes a foundational framework for affect-aware therapeutic robot interaction with cognitively and physically impaired patients.
Key Contributions
- First dedicated investigation of FER specifically for TBI patients in unconstrained rehabilitation settings
- Novel TBI-specific database collected across three rehabilitation activity types
- CNN-LSTM pipeline capturing spatio-temporal facial cues under challenging conditions
- Quantitative benchmarking against standard FER methods, demonstrating significant performance gaps when applied to impaired populations
Methodology
The pipeline comprises:
- Face Detection — robust localisation under partial occlusion and limited facial mobility
- Facial Landmark Extraction — geometric features robust to partial paralysis
- CNN Feature Learning — spatial appearance features from facial regions
- LSTM Temporal Modelling — sequence-level reasoning over facial expression dynamics
- Mood Classification — mapping expression sequences to affective states relevant to rehabilitation
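The staged pipeline above can be sketched as a toy, untrained CNN-LSTM: per-frame convolutional features are pooled into a feature vector, an LSTM cell integrates them over the clip, and a softmax head scores mood classes. This is a minimal NumPy illustration of the architecture's data flow; all dimensions, the kernel count, and the three mood classes are illustrative assumptions, not the paper's actual configuration or trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_feature(frame, kernel):
    """Valid 2-D convolution + ReLU + global average pooling -> one scalar."""
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0).mean()

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell update; gate pre-activations stacked as [i, f, o, g]."""
    z = W @ x + U @ h + b
    d = h.size
    i = 1 / (1 + np.exp(-z[0:d]))          # input gate
    f = 1 / (1 + np.exp(-z[d:2 * d]))      # forget gate
    o = 1 / (1 + np.exp(-z[2 * d:3 * d]))  # output gate
    g = np.tanh(z[3 * d:4 * d])            # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def classify_clip(frames, kernels, W, U, b, W_out):
    """CNN features per frame -> LSTM over time -> softmax mood scores."""
    h = np.zeros(U.shape[1])
    c = np.zeros_like(h)
    for frame in frames:
        x = np.array([conv_feature(frame, k) for k in kernels])
        h, c = lstm_step(x, h, c, W, U, b)
    logits = W_out @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy dimensions (assumptions): 4 conv kernels, hidden size 8, 3 mood classes.
n_feat, hidden, n_classes = 4, 8, 3
kernels = rng.standard_normal((n_feat, 3, 3))
W = rng.standard_normal((4 * hidden, n_feat)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
W_out = rng.standard_normal((n_classes, hidden))

frames = rng.standard_normal((10, 16, 16))  # stand-in for a 10-frame face clip
probs = classify_clip(frames, kernels, W, U, b, W_out)
print(probs)
```

In practice the convolutional stage would be a trained deep network over detected, landmark-aligned face crops, and the whole model would be learned end-to-end; the sketch only shows how spatial features feed the temporal model.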
Venue
Presented at the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP 2018), part of VISIGRAPP, Funchal, Madeira, Portugal.
