Report from the Winter School for Digital Methods

Auditing the Analyst: What Do LLMs See (and Miss)? 
 
by Hina Firdaus
 
A week-long Winter School for Digital Methods and Data Sprint at the University of Amsterdam proved to be an eye-opening experience. Around 200 researchers and practitioners from universities in Germany, the Netherlands, Norway, the United States, Canada, and other countries gathered to work collaboratively on nine research projects. One of these was “Auditing the Analyst: What Do LLMs See (and Miss)?”.
 
 
About “Auditing the Analyst”
 
“Auditing the Analyst” was facilitated by Prof. Dr. Bilel Benbouzid, Carlos Rosas, and Irène Girard. Our team consisted of 20 researchers from diverse disciplinary backgrounds and was evenly divided into two sub-teams. The project investigated the use of large language models (LLMs), such as Gemini, to analyze a corpus of 1,000 semi-structured interviews generated by Anthropic’s Claude acting as an AI interviewer. Using Grounded Theory as our methodological framework, we conducted a comparative analysis between a fully automated, machine-only analytical approach and a human–machine hybrid approach that incorporated human oversight and intervention. Beyond procedural outcomes, we critically examined how theory generation differed across these two modes of analysis.
 
Our key findings indicate that while LLMs function as highly scalable analytical tools, they are also costly and subject to rapid obsolescence, as increasingly capable models are introduced at a fast pace, opening both opportunities and challenges for future research. The LLM-led analyses tended to privilege broadly applicable, generalized narratives, often at the expense of interpretive depth, thereby creating an epistemic distance between researchers and the data. Moreover, human involvement in the hybrid approach frequently shifted researchers’ roles toward administrative tasks such as prompt design and output management, rather than deeper analytical engagement. Across both analytical tracks, we observed a shared pattern in how professional identity and legitimacy are negotiated in an AI-augmented research context: routine analytical labor is delegated to AI systems, while human expertise is redefined around validation, oversight, and boundary-setting. Overall, the project raises critical questions about the methodological validity of LLM-centric qualitative workflows and suggests that increased scalability through AI may come at the cost of interpretive richness and theoretical nuance.
 
 
About the DMI
 
The Digital Methods Winter School is part of the Digital Methods Initiative (DMI), Amsterdam, dedicated to developing techniques for Internet-related research and to the study of the natively digital. The Digital Methods Initiative also hosts the annual Digital Methods Summer Schools, which are intensive, full-time programs held in July.