Postdoctoral Researcher · DTU Compute, Denmark

Chaudhary Muhammad Aqdus Ilyas

Eye-Tracking · Cognitive Neuroscience · Affective Computing · HRI

I am a Postdoctoral Researcher at the Department of Applied Mathematics and Computer Science, Technical University of Denmark (DTU), working on the Reading the Reader project funded by the Novo Nordisk Foundation, in collaboration with Assoc. Prof. Per Bækgaard.

My research sits at the intersection of Cognitive Neuroscience, UX/UI Design, Human–Machine Interaction (HMI), Human–Robot Interaction (HRI), and Artificial Intelligence — using eye-tracking and psychophysics to understand how people read, perceive, and interact with information.

Muhammad Aqdus Ilyas
Copenhagen, Denmark

Research Focus

My research spans cognitive science, affective computing, multimodal signal processing, and human-centred AI — developing computational methods that place human perception, attention, and emotion at the centre of intelligent system design.

👁️

Eye-Tracking & Visual Attention

Investigating how visual attention mechanisms operate using advanced eye-tracking technology and computational models based on Bundesen's Theory of Visual Attention (TVA). Leveraging Tobii eye-trackers to capture gaze trajectories, fixation dynamics, and Task-Evoked Pupillary Responses (TEPR) — then linking these signals to higher-level cognitive processes such as reading fluency, perceptual load, and attentional capacity.

Gaze Analysis · TEPR · Tobii · Fixation · Visual Attention
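The TEPR analysis mentioned above can be illustrated with a minimal subtractive baseline-correction sketch. Everything here is illustrative, not the project's actual settings: the sampling rate, window lengths, and pupil traces are synthetic stand-ins.

```python
# Sketch: Task-Evoked Pupillary Response (TEPR) via per-trial subtractive
# baseline correction and grand averaging. All values are example choices.
import numpy as np

fs = 120                                      # Hz; example sampling rate
rng = np.random.default_rng(0)
n_trials = 40
t = np.arange(-0.5, 2.0, 1 / fs)              # 0.5 s baseline, 2 s post-stimulus

# Synthetic trials: tonic pupil level + slow stimulus-evoked dilation + noise.
dilation = 0.15 * np.clip(t, 0, None) * np.exp(-np.clip(t, 0, None))
trials = 3.0 + dilation + rng.normal(0, 0.03, (n_trials, t.size))

# Subtract each trial's pre-stimulus mean, then average across trials.
baseline = trials[:, t < 0].mean(axis=1, keepdims=True)
tepr = (trials - baseline).mean(axis=0)

print(f"peak evoked dilation {tepr.max():.3f} mm at {t[np.argmax(tepr)]:.2f} s")
```

Baseline correction removes slow tonic drift so the averaged trace isolates the stimulus-evoked component.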
🧮

Cognitive Modelling: TVA, Psychophysics & Bayesian Inference

Unifying Bundesen's TVA with Bayesian hierarchical modelling to quantify individual differences in visual processing. Per-participant estimation of attentional parameters (visual processing speed C, storage capacity K, threshold t₀) using MCMC in PyMC. Psychophysical experiments examine how typography, font legibility, and stimulus properties modulate perception — with posterior difference distributions, Growth Curve Analysis (GCA), and LOO-CV for robust inference.

TVA · PyMC/MCMC · Psychophysics · GCA · Hierarchical Bayes · LOO-CV
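A minimal sketch of the kind of model behind such parameter estimates, assuming the simplified single-item TVA accrual function P(correct | t) = 1 − exp(−C · max(t − t₀, 0)). For brevity it recovers C and t₀ by maximum likelihood from simulated data; the actual work replaces this point estimate with per-participant hierarchical MCMC in PyMC.

```python
# Sketch (assumed simplified model, not the project's pipeline): recover
# TVA-style processing speed C and threshold t0 from report accuracy at
# varying exposure durations, via maximum likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = rng.uniform(0.02, 0.30, size=2000)        # exposure durations (s)
C_true, t0_true = 25.0, 0.015                 # ground truth to recover
p = 1 - np.exp(-C_true * np.clip(t - t0_true, 0.0, None))
y = rng.binomial(1, p)                        # simulated correct/incorrect

def neg_log_lik(params):
    C, t0 = params
    q = 1 - np.exp(-C * np.clip(t - t0, 0.0, None))
    q = np.clip(q, 1e-9, 1 - 1e-9)            # guard against log(0)
    return -(y * np.log(q) + (1 - y) * np.log(1 - q)).sum()

fit = minimize(neg_log_lik, x0=[10.0, 0.01],
               bounds=[(1e-3, 200.0), (0.0, 0.1)])
C_hat, t0_hat = fit.x
print(f"C ≈ {C_hat:.1f}/s, t0 ≈ {t0_hat * 1000:.1f} ms")
```

The hierarchical Bayesian version shares statistical strength across participants and yields full posteriors rather than point estimates, which is what enables the posterior difference distributions described above.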
📡

Multimodal Signal Processing

Designing signal processing pipelines that fuse heterogeneous physiological streams — eye-tracking, pupillometry, EEG, and facial action units — into unified representations of cognitive and affective state. Research covers time-series alignment, artifact rejection, feature engineering, and the development of robust multimodal models for real-world sensing contexts including reading, workplace monitoring, and HRI. Central theme of the MSPARC workshop at IEEE ICASSP 2026.

Signal Fusion · EEG/Pupillometry · Feature Engineering · Time-Series · ICASSP 2026
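One alignment step of such a pipeline can be sketched as follows: resampling two streams with different clocks onto common timestamps, followed by simple artifact rejection. Stream names, rates, and thresholds are hypothetical.

```python
# Sketch: align a 60 Hz gaze stream with a 250 Hz EEG-derived feature on a
# common timeline, then reject blink artifacts. Hypothetical columns/rates.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

gaze = pd.DataFrame({"t": np.arange(0, 10, 1 / 60)})           # 60 Hz stream
gaze["pupil_mm"] = 3 + 0.2 * np.sin(gaze["t"]) + rng.normal(0, 0.02, len(gaze))
gaze.loc[100:102, "pupil_mm"] = 0.0                            # simulated blink

eeg = pd.DataFrame({"t": np.arange(0, 10, 1 / 250)})           # 250 Hz feature
eeg["alpha_power"] = rng.normal(1.0, 0.1, len(eeg))

# Nearest-neighbour alignment of the EEG feature onto gaze timestamps, with
# a tolerance so genuine gaps stay NaN instead of being silently filled.
fused = pd.merge_asof(gaze, eeg, on="t", direction="nearest", tolerance=0.01)

# Simple artifact rejection: drop blink samples (tracker reports pupil 0).
fused = fused[fused["pupil_mm"] > 0.5].reset_index(drop=True)
```

In practice this step sits between per-stream preprocessing and feature engineering; `merge_asof` is one convenient choice because it makes the alignment tolerance explicit.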
🤖

Affective Computing & Human–Robot Interaction

Building emotionally intelligent machines that can perceive, interpret, and respond to human affective states. Facial expression recognition (FER) with dimensional valence/arousal modelling, adapted for socially assistive robots supporting citizens with Traumatic Brain Injury (TBI). Includes work on spontaneous affect in naturalistic environments (EU Horizon 2020 WorkingAge project, Cambridge AFAR Lab), where systems must be robust to unconstrained lighting, occlusion, and real-world variability.

FER · Valence/Arousal · HRI · TBI · Affective AI · WorkingAge
🦾

Robotic Rehabilitation

Developing socially assistive and therapeutic robotic systems that support cognitive and physical rehabilitation for individuals with neurological conditions. PhD research focused on deploying facial emotion recognition within robot-assisted therapy for citizens with Traumatic Brain Injury (TBI) — enabling robots to perceive patient affective states and adapt therapeutic interactions in real time. Research intersects clinical HRI, affective sensing, and the ethical deployment of autonomous systems in healthcare and assistive technology environments.

Assistive Robotics · TBI Therapy · Social Robots · Neurological Rehab · Clinical HRI
🖥️

Human–Machine Interaction & Adaptive Interfaces

Studying how humans interact with intelligent systems — and using that understanding to build interfaces that adapt in real time to the user's cognitive and emotional state. Research spans gaze-contingent reading aids, attention-aware document rendering, and closed-loop interfaces that leverage eye-tracking and pupillometry to reduce cognitive load and personalise content presentation. Bridges HCI methodology with physiological computing and AI-driven adaptation.

Gaze-Contingent UI · Adaptive Systems · HCI · Cognitive Load · Personalisation
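A toy sketch of the closed-loop idea: estimate momentary load from a crude pupil-dilation proxy and adapt the presentation when load stays high. The threshold, step size, and the choice to adapt line spacing are purely illustrative, not an actual interface rule from this work.

```python
# Toy closed-loop adaptation rule (illustrative thresholds and parameters):
# widen line spacing when baseline-corrected pupil dilation exceeds a bound.

def adapt_spacing(pupil_mm, baseline_mm, spacing, high=0.3, step=0.1,
                  max_spacing=2.0):
    """Return updated line spacing given the current pupil sample."""
    load = pupil_mm - baseline_mm            # crude load proxy (mm dilation)
    if load > high and spacing < max_spacing:
        return round(min(spacing + step, max_spacing), 2)
    return spacing

spacing = 1.2
for sample in [3.1, 3.2, 3.6, 3.7, 3.5]:     # streaming pupil sizes (mm)
    spacing = adapt_spacing(sample, baseline_mm=3.0, spacing=spacing)
print(spacing)                               # prints 1.5
```

Real systems smooth the signal and rate-limit adaptations so the interface does not oscillate with every sample.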
🧬

Deep Learning & Computer Vision

Applying deep neural architectures to perception and cognition problems: convolutional networks and transformers for facial action unit detection, expression recognition, and affect estimation from video streams. Research emphasises domain generalisation, transfer learning across affective datasets, and the design of lightweight models suitable for real-time deployment in embedded and robotic systems. Also encompasses BERT-based sequence models applied to gaze reading data for language identification.

CNNs/Transformers · BERT · Transfer Learning · Action Units · Real-Time CV
🌱

Human-Centred AI

Advocating for AI systems designed with human needs, values, and diversity at their core. This thread runs through all my work — from building models that explain their inferences in terms of measurable cognitive parameters, to designing experiments with genuine ecological validity, to ensuring assistive technologies serve populations with neurological and cognitive differences. Engages with transparency, fairness, and the responsible deployment of AI in health, education, and workplace settings.

Explainable AI · Responsible AI · Accessibility · Fairness · AI Ethics
🌐

NLP & Gaze-Based Language Modelling

Combining natural language processing with gaze behavioural data to study reading and language processing across populations. BERT-based classifiers applied to fixation sequences for native language identification from reading patterns (NLIR-BERT), evaluated on the MECO-L2 multilingual corpus spanning 13 languages. Explores how gaze reflects lexical access, syntactic processing, and L2 reading proficiency — with Monte Carlo simulation for robust small-sample statistical inference.

BERT/NLP · NLIR · MECO-L2 · Multilingual · Reading Science
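The Monte Carlo evaluation idea can be sketched as follows: with a small sample, a single train/test split is noisy, so accuracy is averaged over many randomised splits. The data here are synthetic stand-ins for per-reader gaze features and native-language labels, and the classifier is a plain logistic regression rather than the BERT model.

```python
# Sketch: Monte Carlo evaluation over repeated randomised splits, as used
# for robust small-sample comparison. Synthetic stand-in data and labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_readers, n_features, n_langs = 90, 12, 3
y = rng.integers(0, n_langs, n_readers)                  # native-language class
X = rng.normal(0, 1, (n_readers, n_features)) + y[:, None] * 0.8

accs = []
for seed in range(20):                                   # 20 randomised splits
    Xtr, Xte, ytr, yte = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    accs.append(clf.score(Xte, yte))

print(f"accuracy = {np.mean(accs):.2f} ± {np.std(accs):.2f} over 20 splits")
```

Reporting the spread across splits, not just one number, is what makes small-sample comparisons between classifiers defensible.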

Selected Publications

Peer-reviewed contributions to eye-tracking, cognitive science, and human–computer interaction. View full list on Google Scholar →

ETRA '25

CtxRead: Context Preservation Through Eye Tracking — Adaptive Reading Application Design for an Optimal Reading Experience

Jensen, H.E., Ilyas, C.M.A., Tashk, A., Cooreman, B., Beier, S. & Bækgaard, P.

ACM Symposium on Eye Tracking Research and Applications (ETRA 2025), Tokyo, Japan · May 2025 · Article 78 · Open Access CC BY 4.0

Introduces context preservation — a gaze-based mechanism that helps readers resume reading after typographic adaptations. A 22-participant within-subjects experiment compared four intervention designs (Popup, Undo, Notification, Gradual). The Gradual intervention achieved the lowest Reading-Resume Time and highest participant satisfaction.

ETRA '25

RtR-AI: Reading the Reader's Mind through Eye Tracking — Can AI Generated Texts Match Human Authors?

Ilyas, C.M.A., Noor, S.-E., Tashk, A., Cooreman, B., Beier, S. & Bækgaard, P.

ACM Symposium on Eye Tracking Research and Applications (ETRA 2025), Tokyo, Japan · May 2025 · Article 113 · Open Access CC BY 4.0

First eye-tracking investigation comparing cognitive reading behaviour between LLM-generated and human-authored texts. Gaze data from 13 participants were analysed using the I2MC fixation-detection algorithm, revealing significant differences in fixation characteristics, pupil dilation, and reading speed between AI-generated and human-authored texts.

IEEE '25

NLIR-BERT: Native Language Identification from Gaze — Logistic Regression vs. BERT on MECO-L2

Tashk, A., Ilyas, C.M.A., Cooreman, B., Beier, S. & Bækgaard, P.

IEEE 7th International Conference on Cybernetics, Cognition and Machine Learning Applications (ICCCMLA 2025), 1–2 November 2025 · In Press

First BERT-based classifier for Native Language Identification from Reading (NLIR) using gaze features from the MECO-L2 Wave 1 corpus. Monte Carlo simulation with 20 randomised splits for robust small-sample evaluation. BERT produces linguistically more meaningful language clusters than logistic regression despite comparable accuracy.

arXiv '21

WorkAffect: Inferring User Facial Affect in Work-like Settings

Ilyas, C.M.A. et al.

arXiv preprint · WorkingAge / EU Horizon 2020 Project · November 2021 · arXiv:2111.11862

A system for continuous valence/arousal affect inference from facial cues in naturalistic work-like environments, developed within the EU H2020 WorkingAge project. Addresses challenges of subtle workplace expressions, variable illumination, and unobtrusive sensing — demonstrating that dimensional models outperform categorical classification for in-the-wild occupational affective computing.

Workshops & Tutorials

Organising and leading advanced technical workshops at international venues.

Organiser
IEEE ICASSP 2026

MSPARC — Multimodal Signal Processing and Attentional Research in Cognition

A full-day workshop at IEEE ICASSP 2026 bringing together researchers working at the intersection of multimodal signal processing, eye-tracking, pupillometry, and cognitive science. The workshop covers advanced methods for analysing attentional processes through physiological signals, with a focus on real-world applications in reading, learning, and human–computer interaction.

Multimodal Signal Processing · Eye-Tracking & Pupillometry · Attentional Research · Cognitive Science · Adaptive AI Interfaces
Workshop Website →

Experience & Education

A trajectory spanning four countries, combining basic cognitive research with applied AI and human-centred computing.

2023 – Present
Postdoctoral Researcher
DTU Compute, Technical University of Denmark · Denmark

Working on the Reading the Reader project (Novo Nordisk Foundation, grant NF22OC0076877) with Assoc. Prof. Per Bækgaard. Developing eye-tracking and Bayesian methods to understand typographic effects on reading and deploy adaptive, AI-driven reading interfaces. Collaborating with the Royal Danish Academy — Architecture, Design, Conservation.

2021 – 2023
Postdoctoral Researcher
AFAR Lab, University of Cambridge · UK

Contributed to the WorkingAge project (EU Horizon 2020, grant No. 826232), focusing on affective analysis of workers using computer vision, multimodal sensing, and deep learning to promote healthy habits and improved wellbeing in professional settings.

2017 – 2021
PhD in Computer Science
Aalborg University · Denmark

Doctoral research on Facial Emotion Recognition for Citizens with Traumatic Brain Injury for Therapeutic Robot Interaction, conducted jointly within the Visual Analysis & Perception (VAP) and Human–Robot Interaction (HRI) Labs.

2015 – 2016
MSc · Electrical & Electronics Engineering
Coventry University · Coventry, UK

Graduated with a Master's degree in Electrical & Electronics Engineering. Awarded the Global Leadership Certificate (GLC) for organising seminars, events, and talks under the Global Leadership Programme (GLP).

Blog Posts

Science communication pieces written during my postdoc at the AFAR Lab, University of Cambridge, published on the WorkingAge project blog (EU Horizon 2020).

Nov 2021 WorkingAge · CSCW'21 Workshop

Inferring User Facial Affect in Work-like Tasks and Environment

Reporting on work presented at the "Human-Machine Partnerships in the Future of Work" workshop at CSCW'21. Describes a facial affect analysis system for naturalistic work settings that models workers' emotional states using valence and arousal dimensions — showing that pre-trained models on naturalistic datasets generalise well to workplace affect.

2021 WorkingAge · Part II

RED-AI: How Costly Are the New Machine Learning Models? — Fast & Efficient Training Methods

The second part of the RED-AI series covers practical techniques for training large machine learning models more efficiently — addressing the growing carbon footprint and accessibility barriers raised in Part I, and offering concrete methods to make deep learning research more sustainable and open.

2021 WorkingAge · Part I

RED-AI: How Costly Are the New Machine Learning Models? (Part I)

Introduces the concept of "RED AI" — the trend of prioritising accuracy over computational efficiency. Examines how the compute cost of state-of-the-art models grew 300,000× between AlexNet (2012) and AlphaZero (2017), creating carbon footprint concerns and raising the entry bar for academic researchers.

Mar 2021 WorkingAge · ICPR 2020 / CARE Workshop

Artificial Emotional Intelligence for Wellbeing

Covers the CARE 2020 workshop ("Pattern Recognition for positive teChnology And eldeRly wEllbeing") at ICPR2020. Discusses computational models for automatically evaluating the emotional wellbeing of elderly people using non-verbal social signals — facial expressions, body language, and gestures — in ecological, unconstrained settings.

2020–2021 WorkingAge · Beyondwork2020 Conference

Beyondwork2020 — How Can Artificial Emotional Intelligence Contribute to the Future of Work?

Co-authored with Dr. Hatice Gunes (Cambridge). Reports on the European Beyondwork2020 conference (Bonn, Germany), exploring how cognitive training via virtual reality and affective adaptation can shape healthier, more supportive workplaces in the post-COVID era.

Project Supervision & Teaching

Supervising student projects at DTU Compute, Department of Applied Mathematics and Computer Science, within the Reading the Reader research group. Projects span eye-tracking, adaptive reading interfaces, typography, cognitive AI, and HCI — at BSc, MSc, and Diploma level.

36 Total Projects
5 Active Now
26 MSc Theses
3 BSc Projects
4 Special Courses
2 Diploma Projects
🟢 Active Projects — 2026
Adaptive Reading Systems: A Modular Software Architecture
MSc Thesis Feb 2026 – Jul 2026
Ongoing
Smart Reading Support: Adaptive Cognitive Anchoring from Eye-Tracking Signals
MSc Thesis Feb 2026 – Jul 2026
Ongoing
Reading the Struggle: Using Machine Learning to Detect Reading Difficulty in Near Real-Time
MSc Thesis Feb 2026 – Jul 2026
Ongoing
Designing a Usable Eye-Tracking System for Vertigo Patients: Improving Interaction in Extreme Physical and Cognitive Conditions
MSc Thesis Mar 2026 – Sep 2026
Ongoing
The Impact of Font Width on Readability and User Experience in Digital Text
MSc Thesis Sep 2025 – Mar 2026
Ongoing
MSc Theses — Completed (Kandidatspeciale)
Developing a Tool for Detecting Manipulated Face Images Using Eye-Tracking Data from Human Super-Recognizers
Jul 2023 – Jan 2024
Completed
Analysing Eye Movements for Patients with Acute Vertigo
Feb 2024 – Jul 2024
Completed
Unveiling Reading Patterns: Exploring the Impact of Writing Styles on Reading Behaviour through Eye Tracking
Jan 2024 – Aug 2024
Completed
Designing a Typography-Adaptive Reading Interface for Individuals with Central Vision Loss
Jan 2024 – Jun 2024
Completed
Utilizing Transformer-Based Encoding on Eye-Tracking Data to Enhance Reading for CVL Patients
Sep 2024 – Apr 2025
Completed
Intervening without Interrupting: A Study on the Optimal Level of Intervention in Reading Applications with Adaptive User Interfaces
Aug 2024 – Jan 2025
Completed
Integrating User-Centric Design Practices to Enhance the Understandability of Complex Clinical Data Visualizations
Aug 2024 – Jan 2025
Completed
Visual Comfort and Reading Performance: Investigating the Impact of Text Spacing Through Eye Tracking
Dec 2024 – May 2025
Completed
Prototyping and Evaluating Adaptive Reading Interfaces for Improved User Experience
Jan 2025 – Jun 2025
Completed
Understanding and Modeling Reading Styles Using Eye Tracking Data
Jan 2025 – Jun 2025
Completed
Investigating the Role of Typography in Enhancing Readability for Individuals with Reading Disabilities
Jan 2025 – Jul 2025
Completed
User-Centered Design for Adaptive Reading Interfaces: Improving Accessibility through Usability Testing
Feb 2025 – Jul 2025
Completed
Utilizing Deep Learning for Gaze-Based Native Language Identification
May 2025 – Aug 2025
Completed
Utilizing Deep Learning for Native Language Identification from Reading
Jun 2025 – Aug 2025
Completed
Object and Action Recognition for Social Robotics
May 2025 – Oct 2025
Completed
Gaze Tracking with Transformer Model-Based Methods for Accurate Detection of Vertigo Attacks
Jun 2025 – Nov 2025
Completed
Evaluating Bold Typeface Variations for Readers With and Without Reading Difficulties to Improve Reading Experience
Aug 2025 – Oct 2025
Completed
Towards Early Detection of Children's Neurodevelopmental Disorders and Visual Impairments Using Eye Tracking: User Journey Mapping and Protocol Development
Jun 2025 – Dec 2025
Completed
Eye Signatures: Native Language Identification from Reading with Deep Vision Models
Aug 2025 – Jan 2026
Completed
Flow-Aware Typography: Designing Emotionally Adaptive Reading Interfaces Using Visual Rhythm and Microfeedback Features
Jul 2025 – Jan 2026
Completed
Implementation and Effects of a Generative-AI Agent as an Intelligent Tutoring System for Neurodivergent Students
Sep 2025 – Feb 2026
Completed
Designing and Validating a Multiple-Camera Remote Eye-Tracker Using Embedded Hardware and Software
Nov 2025 – Feb 2026
Completed
BSc Projects — Completed (Bachelorprojekt)
Improving Reading amongst Dyslexics by Identifying Specific Issues Using Eye Tracking
Jan 2024 – Jun 2024
Completed
User Experience Design for Adaptive Reading Systems
Feb 2025 – Jun 2025
Completed
The Effect of Design Changes on Gaze and User Interaction in Digital Learning Interfaces
Feb 2025 – Jun 2025
Completed
Special Courses — Completed (Specialkursus)
Adaptive Eye Tracking For Enhanced Reading
Jan 2024 – May 2024
Completed
Quantifying Reading Progress through Eye Tracking and Voice Analysis
Jun 2025 – Aug 2025
Completed
Modeling Reading Styles Using Eye-Tracking and Machine Learning (ML)
Aug 2025 – Oct 2025
Completed
Imaging Eye Tracking Data for Deep Vision
Nov 2025 – Dec 2025
Completed
Diploma Engineering Projects — Completed (Diplomingeniørprojekt)
From Voice to Journal: Helping Physiotherapists Capture the Moment
Sep 2025 – Dec 2025
Completed
Comparative Analysis of Reading and Gaze Patterns in Dyslexic and Non-Dyslexic Children Using Eye-Tracking Technology
Sep 2025 – Jan 2026
Completed

Research Interests

😊

Affective Computing

Modelling emotional and behavioural states in occupational and real-world settings.

🤖

Human–Robot Interaction

Designing socially intelligent robots that perceive and respond to human affect.

🧬

Neuroinspired AI

Brain-inspired computation for perception, reading, and decision-making.

👀

Computer Vision

Facial expression recognition, action recognition, and multimodal sensing.

📖

Reading Science

Typography, legibility, adaptive interfaces, and eye-tracking for literacy support.

📊

Bayesian Modelling

Hierarchical models, MCMC inference, and probabilistic analysis of cognitive data.

Honors, Awards & Grants

Selected recognitions across research, leadership, and innovation.

Sep 2020
International Networking Innovation Fund, Denmark — €30,000
(Applied in collaboration)
Nov 2018
Research Stay Abroad Grant — DKK 25,000
Feb 2018
Otto Mønsted Conference Grant — DKK 10,000
2018–2020
Board Member, PhD Network Association (PAU), Aalborg University
Jan 2016
Global Leadership Certificate (GLC), Coventry University
Awarded for organising seminars, events & talks under the Global Leadership Programme
Aug 2015
Travel Grant for Industry Visits — Airbus & Volkswagen, Germany
Jun 2015
Research Assistantship, Coventry University, UK
Apr 2015
2nd Prize, NEC Birmingham Engineering & Design Competition, UK
Mar 2015
Cultural Mundi Representative, Coventry University, UK
2010–2011
"Talent Award" (×2), Study Aid Foundation for Excellence (SAFE)

Research Funding

Novo Nordisk Foundation
Reading the Reader (RtR)
Grant agreement NF22OC0076877

Developing AI-driven adaptive reading interfaces using eye-tracking and biometric sensors, in collaboration with the Royal Danish Academy and DTU Compute's Cognitive Systems Section.

European Union · Horizon 2020
WorkingAge
Grant agreement No. 826232

Affective analysis of workers to promote healthy habits, improve working conditions, and support well-being in professional and daily living contexts. Hosted at AFAR Lab, Cambridge.

Let's Talk

Open to research collaborations, supervision discussions, invited talks, and questions about my work.