Pepper-FER: Teaching Pepper Robot to Recognize Emotions of Traumatic Brain Injured Patients Using Deep Neural Networks
Published in IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2019), 2019

Abstract
Social robots have significant potential as assistive tools in the rehabilitation of Traumatic Brain Injured (TBI) patients, particularly for facilitating emotion-aware interactions during physiotherapy and cognitive exercises. This paper presents an end-to-end pipeline for deploying deep neural network-based facial expression recognition (FER) directly on the Pepper social robot, enabling real-time, affect-aware engagement with TBI patients.
The pipeline extends the CNN-LSTM architecture developed in prior work, which achieved 87.97% accuracy on the challenging TBI database, and integrates it with Pepper’s perception stack. A direct comparison is performed against Pepper’s built-in FER module to quantify the improvement in recognition accuracy when using a domain-adapted model trained specifically on TBI patients versus a general-purpose commercial solution.
Results demonstrate that the domain-specific CNN-LSTM substantially outperforms the robot’s native FER system on TBI patients, highlighting the importance of population-specific training data and architecture choices in clinical Human-Robot Interaction scenarios.
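The comparison against Pepper’s native module amounts to scoring both systems on the same labelled test frames. A minimal sketch of such an evaluation is shown below; the function and variable names are illustrative assumptions, not code from the paper:

```python
from collections import defaultdict

# Hypothetical sketch: compare two FER systems on the same labelled test set.
# `predictions` and `ground_truth` map frame IDs to emotion labels.

def accuracy(predictions, ground_truth):
    """Overall fraction of frames whose predicted label matches the annotation."""
    correct = sum(1 for fid, label in ground_truth.items()
                  if predictions.get(fid) == label)
    return correct / len(ground_truth)

def per_class_accuracy(predictions, ground_truth):
    """Accuracy broken down by annotated emotion class."""
    totals, hits = defaultdict(int), defaultdict(int)
    for fid, label in ground_truth.items():
        totals[label] += 1
        if predictions.get(fid) == label:
            hits[label] += 1
    return {label: hits[label] / totals[label] for label in totals}
```

Reporting per-class accuracy alongside the overall figure matters in this setting, since TBI patients may express some emotions atypically and a single aggregate number can hide systematic failures on specific classes.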
Key Contributions
- Full deployment of a deep emotion recognition pipeline on the Pepper robot platform
- First systematic comparison of a custom TBI-adapted FER model against Pepper’s built-in commercial emotion recognition module
- Demonstrates concrete accuracy improvements from domain-specific training on TBI patients
- Provides a reproducible framework for affect-aware social robotics in neurorehabilitation
System Architecture
RGB Camera (Pepper) → Face Detection → CNN Feature Extraction
        ↓
LSTM Temporal Modelling
        ↓
Emotion Label → Robot Behaviour Adaptation
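The stages above compose into a single per-window recognition call. The following is a structural sketch only; the stage names and signatures are assumptions for illustration, and in the real system the feature-extraction and sequence-classification stages would wrap the trained CNN and LSTM:

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class Pipeline:
    """Structural sketch of the recognition pipeline (hypothetical names)."""
    detect_face: Callable        # camera frame -> cropped face, or None
    extract_features: Callable   # face crop -> per-frame feature vector
    classify_sequence: Callable  # list of feature vectors -> emotion label

    def run(self, frames: Sequence) -> str:
        """Run a window of camera frames through the full pipeline."""
        features: List = []
        for frame in frames:
            face = self.detect_face(frame)
            if face is not None:  # skip frames with no detected face
                features.append(self.extract_features(face))
        return self.classify_sequence(features)
```

Keeping the temporal model downstream of per-frame feature extraction means dropped or face-less frames simply shorten the sequence rather than corrupting it.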
The robot’s response layer uses recognised emotion labels to adapt verbal and gestural outputs, supporting therapist-guided interactions.
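A minimal sketch of such an emotion-to-behaviour mapping is given below. The behaviour entries and the neutral fallback are invented for illustration; the actual verbal and gestural responses would be defined by the therapist-guided protocol:

```python
# Hypothetical emotion-to-behaviour table for the response layer.
BEHAVIOURS = {
    "happy":   {"speech": "Great work, let's keep going!", "gesture": "nod"},
    "sad":     {"speech": "Would you like a short break?", "gesture": "lean_in"},
    "angry":   {"speech": "Let's slow down a little.",     "gesture": "open_palms"},
    "neutral": {"speech": "How are you feeling?",          "gesture": "idle"},
}

def adapt_behaviour(emotion_label: str) -> dict:
    """Map a recognised emotion label to a verbal and gestural response,
    falling back to the neutral behaviour for unrecognised labels."""
    return BEHAVIOURS.get(emotion_label, BEHAVIOURS["neutral"])
```

A safe fallback for unrecognised labels is important in a clinical setting, where an inappropriate robot reaction to a misclassified expression could disrupt the therapy session.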
Venue
Presented at the 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2019), New Delhi, India.
