Multimodal Interaction Modeling of Child Forensic Interviewing

Published in ACM International Conference on Multimodal Interaction, 2018

Recommended citation: Victor Ardulov, Madelyn Mendlen, Manoj Kumar, Neha Anand, Shanna Williams, Thomas Lyon, and Shrikanth Narayanan. 2018. Multimodal Interaction Modeling of Child Forensic Interviewing. In Proceedings of the 20th ACM International Conference on Multimodal Interaction (ICMI '18). ACM, New York, NY, USA, 179-185. DOI: https://doi.org/10.1145/3242969.3243006. ACM DL: https://dl.acm.org/citation.cfm?doid=3242969.3243006

Abstract: Constructing computational models of interactions during Forensic Interviews (FI) with children presents a unique challenge: the interview must elicit complete and accurate information disclosure while minimizing the emotional trauma experienced by the child. Leveraging multiple channels of observational signals, dynamical systems modeling is employed to track and identify patterns in the influence that interviewers' linguistic and paralinguistic behavior has on children's verbal recall productivity. Specifically, linear mixed effects modeling and dynamic mode decomposition allow for robust analysis of acoustic-prosodic features aligned with lexical features at turn-level utterances. By varying the window length, the model parameters evaluate both interviewer and child behaviors at different temporal resolutions, thus capturing both the rapport-building and disclosure phases of FI. Making use of a recently proposed definition of productivity, the dynamical systems modeling provides insight into the characteristics of the interaction that are most relevant to effectively eliciting narrative and task-relevant information from a child.
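
The windowed, turn-level dynamics analysis described in the abstract can be illustrated with a minimal dynamic mode decomposition (DMD) sketch. The snippet below is not the paper's implementation; the `dmd_modes` function, the feature-matrix shape, the rank, and the window length are illustrative assumptions showing how a linear operator fitted over sliding windows of turn-level acoustic-prosodic features yields modes whose eigenvalues summarize the interaction dynamics at a chosen temporal resolution.

```python
import numpy as np

def dmd_modes(X, Y, rank=2):
    """Exact DMD: fit a linear operator A with Y ~ A X and return its
    eigenvalues and modes.

    X, Y: (n_features, n_snapshots) matrices of consecutive turn-level
    feature vectors, where Y is X shifted forward by one turn.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    r = min(rank, len(s))
    U_r, s_r, V_r = U[:, :r], s[:r], Vh[:r].conj().T
    # Reduced-order approximation of the linear dynamics operator.
    A_tilde = U_r.conj().T @ Y @ V_r @ np.diag(1.0 / s_r)
    eigvals, W = np.linalg.eig(A_tilde)
    # DMD modes lifted back to the full feature space.
    modes = Y @ V_r @ np.diag(1.0 / s_r) @ W
    return eigvals, modes

# Hypothetical usage: `features` is a (n_features, n_turns) array of
# acoustic-prosodic measurements per conversational turn; sliding the
# window varies the temporal resolution of the fitted dynamics.
features = np.random.randn(4, 40)   # placeholder data, not from the paper
window = 10                         # turns per analysis window (assumed)
for start in range(0, features.shape[1] - window):
    X = features[:, start:start + window - 1]
    Y = features[:, start + 1:start + window]
    eigvals, modes = dmd_modes(X, Y, rank=2)
    # |eigvals| > 1 indicates growing modes within the window,
    # |eigvals| < 1 indicates decaying modes.
```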
