SFB 1187 ›Media of Cooperation‹ at the University of Siegen
Lecture Series: “Testing Infrastructures” – Noortje Marres (University of Warwick) & Philippe Sormani (Universität Siegen): “Testing ‘AI’: A Conversation”
Wednesday, 25 May 2022, 3:15 - 5:15 pm
Testing ‘AI’: Do we have a situation?
 
A conversation between Noortje Marres and Philippe Sormani
 
Proponents of the ‘new’ AI, in the shape of very large deep learning models, have claimed that these systems exhibit radically new capacities for judgement and decision-making (for a discussion, see Roberge and Castelle, 2021; see also Suchman, 2008). Tests and demos, such as AlphaGo’s victory at the Four Seasons Hotel in Seoul, South Korea (Sormani, 2018; Mair et al., 2021) and street trials of self-driving vehicles in the UK (Marres, 2020), have played a notable role in the propagation, as well as the problematisation, of such claims. In this conversation, Noortje Marres and Philippe Sormani discuss how social studies of technology are to approach and engage with these phenomena.
 
The conversation will be structured around the following questions: What can we learn from today’s real-world testing of “AI” regarding the distribution of capacities between artefacts, environment and context in compute-intensive practices (Quéré & Schoch, 1998)? Do the performance and evaluation of “machine intelligence” continue to demand the erasure of situations and the bracketing of social life? What does this tell us about possible tensions and alignments between different “definitions of the situation” assumed in social studies, engineering and computer science? (To riff on Star’s (1999) dictum: “one person’s situation may be another person’s barrier.”) Does it make sense for social studies of technology to rely on the observation of situations in the re-specification of machine intelligence?
 

On the lecture series: “Testing Infrastructures”

From QR codes used to verify COVID-19 vaccination status to cloud software used to train machine learning models, infrastructures of testing are proliferating. Whilst the infrastructures themselves come in different forms – from ‘off the shelf’ systems to tailor-made technologies – they all have a capacity to generate specific ‘test situations’ involving an array of different actors, from ‘ghost’ workers to Python scripts. An increasing reliance on digital platforms, protocols, tools, and procedures has led to a redistribution of testing itself: not just of where testing takes place and who performs it, but of who has access to, and control over, the mechanisms for testing, the test protocols and, of course, the test results. In this lecture series, we focus on the practices that make up testing infrastructures and explore different perspectives for making sense of the realities enacted by testing.

We invite our lecture guests to ask: how do testing infrastructures engender the construction of specific testing routines and practices? What kinds of affective experiences, reactions, and responses are generated through testing? Here we invite reflection on how testing infrastructures often fade into the background, pointing to a tapestry of maintenance and repair practices. Lastly, in what ways can we evaluate the role of digital infrastructures more broadly? This includes the challenge of what novel test methods can be developed and actually ‘tested’ to gain a better understanding of how infrastructures work. Our exploration of test practices in this context is interwoven with the search for test media that bind actors together or create barriers; that enable cooperation or declare it impossible.

Possible questions include (but are not limited to):

  • What are the implications of testing in different social situations, and at what moments do they come to the fore? 
  • When and where are tests conducted—for whom and what, through whom and what, and by whom and what actors?
  • What are digital practices for/of testing, and what types of data do testing infrastructures support?
  • What other practices spawn from distributed testing? Think of practices of passing and obfuscation within nested situations of testing, or of the outsourcing of ‘validation work’ as a construction that governs.
  • What methodological strategies are there to make test procedures and their foundations transparent?
  • Can different politics of testing be distinguished? If so, where and under what conditions?
  • Can we demarcate between embodied testing and disembodied testing?
 
References:
Mair, M., Brooker, P., Dutton, W., & Sormani, P. (2021). Just what are we doing when we’re describing AI? Harvey Sacks, the commentator machine, and the descriptive politics of the new artificial intelligence. Qualitative Research, 21(3), 341-359.
 
Marres, N. (2020). Coexistence or displacement: Do street trials of intelligent vehicles test society? The British Journal of Sociology, 71(3), 537-555.
 
Roberge, J., & Castelle, M. (2021). Toward an end-to-end sociology of 21st-century machine learning. In The Cultural Life of Machine Learning (pp. 1-29). Palgrave Macmillan, Cham.
 
Quéré, L., & Schoch, C. (1998). The still-neglected situation? Réseaux: The French Journal of Communication, 6(2), 223-253.
 
Sormani, P. (2018, June). Logic-in-Action? AlphaGo, Surprise Move 37 and Interaction Analysis. In Handbook of the 6th World Congress and School on Universal Logic (p. 378).
 
Star, S. L. (1999). The ethnography of infrastructure. American Behavioral Scientist, 43(3), 377-391.
 
Suchman, L. (2008). Feminist STS and the sciences of the artificial. The Handbook of Science and Technology Studies (3rd ed., pp. 139-164).
 

Venue

Online event