Can You Hear Me Now? Sanity Tests and Screening Difference in Machine Listening for Mental Health Care
In the United States, growing numbers of psychiatric and engineering professionals collaborate in attempts to build automated systems that conduct mental health screening based on the sounds of the voice alone. These “vocal biomarker” detection technologies propose to turn any utterance into clinically significant data, regardless of a speaker’s knowledge or interpretations of their psychological status. Dominant discourses surrounding these efforts frame the auditory superiority of artificial intelligence (AI) as key to unlocking a more efficient and equitable future for psychiatric medicine. They often describe AI as a “stethoscope” or “thermometer” for mental illness, implying a straightforwardly biological and quantitative measure detached from the sociocultural and political dimensions of the clinical encounter.
This talk explores the “sanity test”—a computer science term for assessing the desired functionality, i.e. “rationality,” of a model—as an alternative analogy for vocal biomarker systems that more aptly conveys the normative logics, social effects, and matrices of domination embedded within them. Drawing from ethnographic fieldwork with technologists and human test subjects whose sensory practices and voices shape how various vocal biomarker technologies will listen, I show that the boundary between “ill” and “well” bodies and subjects is in constant, contested flux throughout the design process.
Dr. Beth Semel is an incoming Assistant Professor of Anthropology at Princeton University. Her ethnographic research combines linguistic anthropology, science and technology studies, disability studies, and sound studies to explore the sociopolitical life of automated voice analysis, focusing on efforts to integrate these AI-enabled technologies into the U.S. mental health care system. She is currently a postdoctoral associate in Anthropology at the Massachusetts Institute of Technology, where she received her PhD in History, Anthropology, and Science, Technology and Society (HASTS). She is also the co-founder and associate director of the Language and Technology Lab.
On the lecture series: “Testing Infrastructures”
From QR codes used to verify COVID-19 vaccination status to cloud software used to train machine learning models, infrastructures of testing are proliferating. Whilst the infrastructures themselves come in different forms – from ‘off the shelf’ systems to tailor-made technologies – they all have a capacity to generate specific ‘test situations’ involving an array of different actors, from ‘ghost’ workers to python scripts. An increasing reliance on digital platforms, protocols, tools, and procedures has led to a redistribution of testing itself: not just where testing takes place and who performs the testing, but who has access to, and control over, mechanisms for testing, test protocols and, of course, test results. In this lecture series, we focus on the practices making up test infrastructures and explore different perspectives to make sense of the realities enacted by testing.
We invite our lecture guests to ask: how do testing infrastructures engender the construction of specific testing routines and practices? What kinds of affective experiences, reactions, and responses are generated through testing? Here we invite reflection on how testing infrastructures often fade into the background, pointing to a tapestry of maintenance and repair practices. Lastly, what are the ways in which we can evaluate the role of digital infrastructures more broadly? This includes the challenge of what novel test methods can be developed and actually ‘tested’ to gain a better understanding of how infrastructures work. Our exploration of test practices in this context is interwoven with the search for test media that bind actors together or create barriers; that enable cooperation or declare it impossible.
Possible questions include (but are not limited to):
- What are the implications of testing in different social situations and in what moments do they come to the fore?
- When and where are tests conducted—for whom and what, through whom and what, and by whom and what actors?
- What are digital practices for/of testing, and what types of data do testing infrastructures support?
- What other practices spawn from distributed testing? Consider practices of passing and obfuscation within nested situations of testing, or the outsourcing of ‘validation work’ as a construction that governs.
- What methodological strategies are there to make test procedures and their foundations transparent?
- Can different politics of testing be distinguished? If so, where and under what conditions?
- Can we demarcate between embodied testing and disembodied testing?
Guests are welcome to register via e-mail.