Evaluating Open‐Source Solutions for Computerized Inference of Infant Facial Affect
Martin Lund Trinhammer, Ida Egmose, Marianne Thode Krogh, and 4 more authors
Infant affect is often expressed through facial expressions, making this modality a key source of insight into the child’s well‐being and social functioning. Computational inference of infant affect could critically assist both researchers and clinicians working with infant development and reduce the need for manual coding. While many studies have explored open‐source solutions in the adult domain, only the commercial Baby FaceReader 9 exists for the infant domain. To address this gap, we apply the recently proposed, open‐source infant‐native action unit (AU) detection library PyAFAR (Python‐based Automated Facial Action Recognition) to a sample of 71 four‐month‐old infants, whose facial expressions were manually annotated frame‐by‐frame for three minutes according to the Infant Facial Affect (IFA) coding scheme. Using these AUs as features, we classify facial affect into negative, neutral, and positive using XGBoost and Bayesian filtering, in both a multiclass and a binary setup. Our results show that AU estimates from PyAFAR, combined with an XGBoost classification model, can distinguish positive from neutral and positive from negative affect with AUC scores of 0.78 and 0.76, respectively. This performance is essentially on par with that reported in evaluation studies of the Baby FaceReader 9, when accounting for differences in study setup. Our work indicates that the area of infant facial affect is particularly well‐suited to supervised learning, given the availability of two distinct, commensurable measurement schemes that underpin the same phenomenon. Finally, we discuss how future iterations of PyAFAR may benefit from including AUs that capture more variability around infant forehead and mouth opening.
Open‐source models for infant face detection and action unit estimation enable affect estimation comparable to commercial tools. The two main measurement schemes used for annotating infant affect are highly commensurable, suggesting a fruitful avenue for imitation learning. Next iterations of infant action unit detection models may benefit from incorporating features specific to infant forehead activation, mouth opening, and mouth widening.
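The classification pipeline described in the abstract, using per-frame AU estimates as features for a gradient-boosted classifier evaluated by AUC, can be sketched as follows. This is a minimal illustration with synthetic data: the feature matrix, the number of AUs, and the label construction are all hypothetical stand-ins for PyAFAR output and IFA annotations, and scikit-learn's `GradientBoostingClassifier` is used here as a stand-in for XGBoost so the sketch runs without extra dependencies.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical shapes: 2000 video frames, 9 AU intensity estimates per frame
# (in the actual study these would come from PyAFAR, not random numbers).
n_frames, n_aus = 2000, 9
X = rng.random((n_frames, n_aus))

# Synthetic binary labels standing in for IFA codes
# (1 = positive affect, 0 = neutral), driven by two of the AUs plus noise.
y = (X[:, 0] + 0.5 * X[:, 1]
     + 0.2 * rng.standard_normal(n_frames) > 1.0).astype(int)

# Train/test split and a gradient-boosted classifier on the AU features.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Evaluate with AUC, the metric reported in the abstract.
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC: {auc:.2f}")
```

In the study itself, the per-frame class probabilities are additionally smoothed with Bayesian filtering before scoring, a step omitted from this sketch.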