"The MGK is the central location for teaching and innovating methods. It contributes significantly to the CRC's overarching goal of conducting basic digital research and communicating, reflecting on and developing innovative methods. The MGK trains doctoral candidates in an internationally oriented, interdisciplinary program."
The CRC awards two short-term scholarships to promote the work of early-career researchers who are conducting research in the field of digital media, sensing and sensor technology, data practices and AI and are interested in a longer-term collaboration with the CRC.
The basic amount of the scholarship is based on the maximum DFG rate (EUR 1,365). In addition, an allowance for material expenses and, if applicable, a child allowance will be paid.
About the CRC 1187 "Media of Cooperation"
The CRC is an interdisciplinary collaborative research centre consisting of 19 subprojects and more than 60 researchers from media studies, ethnology, sociology, computer science, linguistics, ubiquitous computing, science and technology studies, education, law and engineering.
The Collaborative Research Centre 1187 has been funded by the German Research Foundation (DFG) since 2016. The centre studies digital media, which have emerged on a broad front as cooperative tools, platforms and infrastructures, and approaches them as cooperatively accomplished means of cooperation. In the first funding phase (2016-2019), the CRC focused on the relevance of social media and platforms; the second phase (2020-2023) centered on data-intensive media and data practices. The third phase (2024-2027) investigates the interplay between sensor media and artificial intelligence.
The short-term fellowship program of the CRC offers national and international doctoral students the opportunity to further develop their research projects within the CRC, to get to know participating researchers and to exchange ideas with them. The research projects of the scholarship holders should be thematically related to the subprojects of the CRC so that their work can be supported by the principal investigators and their teams. Scholarship holders are assigned to the Integrated Research Training Group (MGK) of the CRC and benefit from its structured training program. The CRC offers scholarship holders an international environment for interdisciplinary media research as well as an extensive program of events and training in ethnographic, digital, sensor-based, linguistic, and AI-based methods.
Relevant, above-average degree in one of the disciplines participating in or related to the CRC, preferably in media and cultural studies, sociology or in the field of socio- or business informatics, human-computer interaction or information systems (equivalent to a Master’s degree, Magister, Diplom or Lehramt/Staatsexamen Sek. II)
Individual research project in one of the above-mentioned disciplines within the subject area of the CRC. Ideally, you can assign the project to one of the subareas of the CRC: infrastructures, publics, or sensory praxeology
Interest in methods of media research, the analysis of data practices and an affinity for working in an interdisciplinary research environment
Willingness to participate in the international event program of the CRC and the MGK
Very good written and spoken English language skills
Your Tasks
Expectations of successful candidates:
Regular participation and involvement in the events and the training program of the MGK (colloquia, workshops, summer schools, methodology workshops, interdisciplinary groups)
Presentation of preliminary results of the individual research project within the MGK colloquium
About the application
Please send your application documents (letter of motivation, curriculum vitae, copies of certificates, 5-10-page outline of a project idea) as a single PDF file (max. 5 MB) to dominik.schrey@uni-siegen.de by 25 June 2025. Please note that risks to confidentiality and unauthorized access by third parties cannot be ruled out when communicating by unencrypted e-mail.
About the University
The University of Siegen is an interdisciplinary and cosmopolitan university with currently around 18,000 students and a range of subjects from the humanities, social sciences and economics to natural sciences, engineering and life sciences. With over 2,000 employees, we are one of the largest employers in the region and offer a unique environment for teaching, research and further education.
Equal opportunities and diversity are promoted and actively practiced at the University of Siegen. Applications from women are highly welcome and will be given special consideration in accordance with the federal state equality law. We also welcome applications from people with different personal, social and cultural backgrounds, people with disabilities and those of equal status.
Information about the University of Siegen can be found on our homepage: www.uni-siegen.de.
Synthetic Imaginaries: The Cultural Politics of Generative AI
University of Siegen | 8–12 September 2025 | extended submission deadline: 30 June 2025
Synthetic Imaginaries: The Cultural Politics of Generative AI is an international event that will explore the cultural, political, and methodological dimensions of generative AI and synthetic media through a combination of conference talks, hands-on workshops, and collaborative projects. Topics include deepfakes, avatars, cultural biases in training data, feminist and postcolonial critiques, and the aesthetics of AI-generated content.
The rise of artificial intelligence (AI), big data processing, and synthetic media has profoundly reshaped how culture is produced, made sense of, and experienced today. To ‘synthesize’ is to assemble, collate, and compile, blending heterogeneous components into something new. Where there is synthesis, there is power at play. Synthetic media—as exemplified by the oddly prophetic early speech synthesizer demos—carry the logic of analog automation into digital cultures where human and algorithmic interventions converge. Much of the research in this area—spanning subjects as diverse as augmented reality, avatars, and deepfakes—has revolved around ideas of simulation, focusing on the manipulation of data and content people produce and consume. Meanwhile, generative AI and deep learning models, while central to debates on artificiality, raise political questions as part of a wider social ecosystem where technology is perpetually reimagined, negotiated, and contested: What images and stories feed the datasets that contemporary AI models are trained on? Which imaginaries are reproduced through AI-driven media technologies and which remain latent? How do synthetic media transform relations of power and visibility, and what methods—perhaps equally synthetic—can we develop to analyze these transformations?
The five-day event at the University of Siegen—organized by the DFG-funded Collaborative Research Centers Media of Cooperation and Transformations of the Popular together with the Center for Digital Narrative in Bergen, the Digital Culture and Communication Section of ECREA and the German National Research Data Infrastructure Consortium NFDI4Culture—explores the relationship between synthetic media and today’s imaginaries of culture and technology, which incorporate AI as an active participant. By “synthetic,” we refer not simply to the artificial but to how specific practices and ways of knowing take shape through human-machine co-creation. Imaginaries, in turn, reflect shared visions, values, and expectations—shaping not only what technologies do but how they are perceived and made actionable in everyday life.
The event opens with a one-day conference and moves into hands-on workshops and collaborative projects. With multiple opportunities for exchange across disciplines, we encourage especially early-career researchers and PhD students to present their ideas during the conference and join a project led by international facilitators and data designers. We invite submissions of short abstracts (max. 500 words) for presentations engaging with questions and provocations related—but not limited—to topics such as:
Critical data studies perspectives on AI: how data infrastructures, labeling, and curation shape the outputs we call “synthetic”
Cultural afterlives of training data: how racialized, gendered, or colonial imaginaries persist in synthetic media outputs
Methodological uses of GenAI: the politics we buy into when repurposing AI as a method, from inherited bias to epistemic tensions
Synthetic personhood and likeness: exploring deepfakes, AI-generated avatars, and the power of (in)authenticity
Online cultures and platforms: how AI-generated content circulates across platforms—from memes and art to fan fiction, music, and poetry
Postcolonial and feminist critiques of AI: challenging universalist assumptions in generative models and interrogating whose knowledge is made (in)visible
Clichés, formulas, and repetition in GenAI outputs: how AI-generated stories and images rely on familiar tropes, visual styles, and narrative conventions
The aesthetics of noise in AI-generated content: repetition, glitch, randomness, and their role in producing or disrupting meaning
GPTs as infrastructural components: how generative pretrained transformers operate as configurable, customizable, and task-oriented agents embedded in platform infrastructures
Prompting and/as probing: prompting as a form of critical intervention, shaping co-authorship, sense-making, and research design
The ethics of training AI: from historical records and religious texts to indigenous cosmologies and oral traditions—what are the implications of using culturally sensitive knowledge to train generative models?
Generative AI and memory: synthetic media as a means of reimagining the past—through deepfake testimonies, interactive historical simulations, and other forms of computational memory-making
Generative AI in activist contexts: can AI be used for resistance or reimagining community—in the face of its environmental footprint and complicity in extractive systems?
Program highlights
The event blends three complementary formats:
Mix questions!
Monday, 8 September
Day one begins with a keynote by Jill Walker Rettberg and opens space for emerging questions—think of it as an idea hub. Accepted abstracts will be grouped into thematic sessions curated by the organising team. Presenters will be connected via email ahead of time to coordinate their contributions. Each presentation will be limited to 10 minutes to allow ample time for discussion, collective thinking, and exchange. The emphasis is on dialogue, not polished conclusions.
Mix methods!
Tuesday, 9 September to Thursday, 11 September
The next three days—featuring a workshop by Gabriele De Seta and an artistic intervention by Ángeles Briones and DensityDesign Lab—are about exploring new methods—hands-on! We invite you to join a team of interdisciplinary scholars and data designers in probing new methodological combinations. Each of our project teams will present a research question alongside a specific method to be collaboratively explored. Participants will not only learn how to design prompts and work with AI-generated text and images but also how to critically account for genAI models as platform models. All projects draw on intersectional approaches, combining qualitative and quantitative data to explore the synthetic dimensions of AI agency—whether as content creator, noise generator, hallucinator, research collaborator, data annotator, or style imitator. Please bring your laptops. The project titles will be announced soon.
Synthesize!
Friday, 12 September
The final day is dedicated to sharing, reflecting, and synthesizing the questions, methods, and insights developed throughout the week. Project teams will present their collaborative processes, highlight key takeaways, and discuss how their ideas and approaches shifted through hands-on experimentation with methods.
Proposal Submission
Please submit your proposal (max. 500 words) outlining how your work aligns with the event’s theme by 30 June 2025, using this form. Please note that the number of participants will be limited to maintain focused and engaging discussions. All submissions will be peer-reviewed.
The event is free of charge, though attendees are responsible for arranging and covering their travel and accommodation in Siegen. Limited travel support is available (two to three stipends ranging from €500 to €700). Early-career researchers and PhD students are invited to apply; stipends will be awarded by the NFDI4Culture consortium based on the strength of the justification, particularly concerning critical ethical engagement with AI research data, as well as the distance and cost of travel. Short summaries of the presented work will be published on the NFDI4Culture website.
A certificate of participation will be issued for both the conference presentation and the hands-on workshop sessions.
Updated Timeline with extended deadline:
Submit your proposal by 30 June 2025.
Notification of acceptance by 15 July 2025.
Registration by 1 August 2025.
Venue
Universität Siegen
Campus Herrengarten
AH-A 217/18
Herrengarten 3
57072 Siegen
The Autumn School is organized by the DFG-funded Collaborative Research Centers 1187 Media of Cooperation and 1472 Transformations of the Popular together with the Center for Digital Narrative in Bergen, the Digital Culture and Communication Section of ECREA and the German National Research Data Infrastructure Consortium NFDI4Culture.
In February 2025, the Mixing Methods Winter School at the Collaborative Research Centre 1187 brought together over thirty participants, including international researchers, students, and experts from various disciplines. The program combined hands-on experimentation with critical inquiry into AI-driven research methods. Throughout the Winter School, participants critically engaged with AI not just as a tool but as a collaborator, reflecting on its role in shaping the research process.
The week opened with a keynote by Jill Walker Rettberg from the University of Bergen, who introduced “Qualitative Methods for Analyzing Generative AI: Experiences with Machine Vision and AI Storytelling.” Her talk set the stage for discussions on how qualitative inquiry can reveal the underlying narratives and biases in AI-generated content.
Participants then engaged in two hands-on workshops designed to explore mixed techniques for probing and prompting AI models. Carlo de Gaetano (Amsterdam University of Applied Sciences), Andrea Benedetti, and Riccardo Ventura (DensityDesign, Politecnico di Milano) led the workshop “Exploring TikTok Collections with Generative AI: Experiments in Using ChatGPT as a Visual Research Assistant,” examining how AI can assist in the visual analysis of networked video content. Together with Elena Pilipets (University of Siegen) and Marloes Geboers (University of Amsterdam), participants then explored the semantic spaces and aesthetic neighborhoods of synthetic images generated by Grok during the workshop “Web Detection of Generative AI Content.”
After an introductory first day, the Winter School shifted its focus to two in-depth project tracks. The first project, “Fabricating the People: Probing AI Detection for Audio-Visual Content in Turkish TikTok,” explored how protesters and the manosphere engage with cases of gender-based violence on Turkish TikTok and how these videos can be studied using different AI methods. The second project, “Jail(break)ing: Synthetic Imaginaries of ‘Sensitive’ AI,” explored how AI models reframe sensitive topics through generative storytelling under platform-imposed restrictions.
Fabricating the People: Probing AI Detection for Audio-Visual Content in Turkish TikTok

The project explored video shorts from the Turkish manosphere – content centered on masculinity, gender dynamics, and “men’s rights” issues, often discussing dating, self-improvement, and family life. While this content is found on mainstream platforms and passes moderation, it still frequently veers into misogynistic or even violent rhetoric. Our project explored AI-assisted methods to make sense of large amounts of this contentious multimodal data.
Rationale
Specifically, we set out to develop methods to map how video shorts may become a vehicle for the ambient amplification of extremist content across platforms. We explored two approaches using off-the-shelf multimodal large language models (LLMs). The first sought to extend the researcher’s interpretation of how manosphere content addresses bodies, which are both performed and contested intensely across the issue space. We did this by implementing few-shot labelling of audio transcriptions and textual descriptions of videos. The second method sought to interrogate the role of generative AI in (re)producing memes, genres, and ambience across video shorts. We achieved this by experimenting with zero-shot descriptions of video frames, characterizing genres, formats, and the possible use of AI in video production processes.
Methods and Data
We started with a period of “deep hanging out” in Turkish manosphere and redpill spaces on TikTok, YouTube, and Instagram. We identified prominent accounts and crawled them to build a data sample of 3,600 short videos from across the three platforms. Several analyses were carried out before the Winter School: metadata scraping, video downloading, scene detection, scene collage creation, audio transcription, and directing an LLM to generate video descriptions following Panofsky’s three-step iconological method, which differentiates between pre-iconographic analysis (recognizing fundamental visual elements and forms), iconographic analysis (deciphering symbols and themes within their cultural and historical contexts), and iconological interpretation (revealing deeper meanings, ideologies, and underlying perspectives embedded in the image) (Panofsky, 1939).
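To make the final step of this pipeline concrete, the following is a minimal sketch of how an LLM can be directed to produce such Panofskian descriptions. It assumes the official openai-python client and GPT-4o as a stand-in multimodal model; the earlier steps (scene detection, collage creation) are presumed to have already produced a collage image, and the prompt wording is illustrative rather than the project's exact prompt.

```python
# Minimal sketch: directing a multimodal LLM to produce a Panofsky-style
# three-step description of a scene collage. Model choice and prompt text
# are assumptions, not the project's published code.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PANOFSKY_PROMPT = (
    "Describe this video scene collage in three steps: "
    "(1) pre-iconographic: the fundamental visual elements and forms; "
    "(2) iconographic: symbols and themes in their cultural and historical "
    "context; (3) iconological: the deeper meanings, ideologies, and "
    "perspectives embedded in the images."
)

def describe_collage(collage_path: str) -> str:
    """Request a Panofsky-style three-step description of a scene collage."""
    with open(collage_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PANOFSKY_PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```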
Method one: Video shorts continue to grow in popularity and prominence across social media platforms, building out new gestural infrastructures (Zulli and Zulli, 2022) and proliferating ambient images (Cubitt et al., 2021). Qualitatively investigating this rich multimodal content at a scale that highlights the broader atmospheres and cultures developed through algorithmic circulation is challenging. Multimodal LLMs have the potential to extend researchers’ ethnographic coding capacity to larger datasets and to account for more varied formats than ever before (Li and Abramson, 2023). We therefore investigate possibilities for using cutting-edge multimodal LLMs for qualitative coding of multimodal data as a methodology for investigating ambience and amplification in video-short-driven algorithmic media.
We began with a qualitative ethnographic immersion in our dataset, watching the videos and developing a codebook that described how the videos related to our interest in how bodies were both performatively and discursively addressed. We applied our codebook manually to the textual data the LLM allowed us to produce out of the videos, i.e., not only the metadata, but also audio transcriptions and LLM-generated video Panofskian descriptions. After the codebook stabilized, we applied it to a random subset of 150 datapoints. We then developed a few-shot learning script that applied these labels to the entire dataset. We chose three examples to belong to a labelling “core” and then programmed a script to sample dynamically from the rest of our 150 datapoints to include as many further examples as could be accommodated by the context window limitation. We then prompted the LLM to apply our labels to the entire dataset. This let us explore extending the researcher’s qualitative insights to larger, multimodal data.
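The following sketch illustrates one plausible implementation of this dynamic few-shot sampling, under stated assumptions: `coded_sample` stands in for the 150 manually coded datapoints, the token heuristic and budget are illustrative, and the prompt wording is not the project's own.

```python
# Minimal sketch of dynamic few-shot labelling: a fixed "core" of examples
# plus as many sampled examples as an approximate token budget allows.
import random
from openai import OpenAI

client = OpenAI()

coded_sample = [
    {"text": "...", "labels": ["example-label"]},  # placeholder entries;
    # in practice: 150 manually coded transcriptions and descriptions
]
CORE = coded_sample[:3]      # the hand-picked labelling "core"
POOL = coded_sample[3:]      # remaining coded datapoints to sample from
TOKEN_BUDGET = 6000          # rough allowance for few-shot examples (assumed)

def approx_tokens(text: str) -> int:
    return len(text) // 4    # crude heuristic: ~4 characters per token

def build_examples() -> str:
    """Always include the core, then sample until the budget is spent."""
    chosen = list(CORE)
    used = sum(approx_tokens(ex["text"]) for ex in chosen)
    for ex in random.sample(POOL, len(POOL)):
        cost = approx_tokens(ex["text"])
        if used + cost > TOKEN_BUDGET:
            break
        chosen.append(ex)
        used += cost
    return "\n\n".join(
        f"TEXT: {ex['text']}\nLABELS: {', '.join(ex['labels'])}"
        for ex in chosen
    )

def label(datapoint_text: str) -> str:
    """Apply the codebook labels to one unseen datapoint."""
    prompt = (
        "Apply the codebook labels shown in the examples to the new text.\n\n"
        f"{build_examples()}\n\nTEXT: {datapoint_text}\nLABELS:"
    )
    return client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
```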
During the codebook development and coding process, the Panofsky descriptions brought the visual prominence of hands and hand gestures across the dataset to our attention. We therefore also applied a separate process to our data to begin isolating hands for closer investigation.
Method two: Automation technologies and generative AI play an increasingly prominent role in the creation of audio-visual social media content, ranging from image or video generation to AI voice-over production, video editing, content cropping, platform optimization, and beyond (Anderson and Niu, 2025). Detecting these production methods, however, is challenging: even state-of-the-art machine-learning approaches struggle to analyze multimodal media (Bernabeu-Perez, Lopez-Cuena and Garcia-Gasulla, 2024). We set out to find qualitative alternatives for exploring the role of AI aesthetics in video short production. We therefore proceeded with a twofold approach, developed within an iterative process of prompt engineering (see the sketch below): first, we asked the LLM to create a structured visual analysis of a social media video collage by evaluating its composition, camera techniques, editing style, mise-en-scène, text overlays, genre, and platform-specific features, summarizing key characteristics as a tag list. This initial prompt helped distinguish between different video formats and styles, identifying those particularly likely to incorporate automation or AI-driven edits. Second, we directly instructed the LLM to assess the likelihood that AI was used in the production of a given video. In this way, we set out to explore “popular” AI’s role in both the creation and the interpretation of misogynistic video shorts.
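As an illustration of this twofold approach, the sketch below paraphrases the two prompts and runs them against the same collage image; the exact prompt texts and model used in the project are not reproduced here.

```python
# Minimal sketch of the two-step prompting in method two. Prompt texts
# paraphrase the described procedure; model choice is an assumption.
from openai import OpenAI

client = OpenAI()

ANALYSIS_PROMPT = (
    "Produce a structured visual analysis of this social media video collage: "
    "composition, camera techniques, editing style, mise-en-scène, text "
    "overlays, genre, and platform-specific features. Summarize the key "
    "characteristics as a tag list."
)
DETECTION_PROMPT = (
    "Assess the likelihood that AI was used in the production of this video. "
    "Give a probability and a short justification."
)

def ask_about_image(prompt: str, image_b64: str) -> str:
    return client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    ).choices[0].message.content

def analyse(image_b64: str) -> tuple[str, str]:
    # Step 1: structured analysis and tag list; step 2: AI-likelihood estimate.
    return (ask_about_image(ANALYSIS_PROMPT, image_b64),
            ask_about_image(DETECTION_PROMPT, image_b64))
```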
Research questions
How can AI-based methods be used to extend ethnographic research into networked digital cultures?
How can these methods help increase researcher sensitivity to phenomena that happen at network scale, for example, ambient amplification practices?
Can AI identify and characterize synthetic content? How does AI see AI?
As an approximation of that question, how does AI interpret and distinguish between different content genres and formats?
Key findings
Our work demonstrated the extent to which the internal cultural logic of the LLM cannot be separated from its output as a tool (Impett and Offert, 2022) – and therefore how LLMs, when used as tools, are inevitably also always reflexively the object of study. When designing processes for “co-creation” and collaboration with LLMs, the logic of the LLM repeatedly overpowered our own efforts to insert our intentions and directions into the process. This suggests that the most fruitful way to use out-of-the-box LLMs as an ethnographic research tool for the study of digital cultures is to lean into – and critically interrogate – its internal cultural logic instead of trying to bend it to our own. Obtaining results that reflect our intentions more closely will require more extensive technical methods, e.g., fine-tuning models and extensive many-shot prompting or alternative machine-learning approaches.
By letting the LLM reveal its own internal logic, however, we anticipate being able to use LLMs as a way to highlight the machine-readable and machine-reproducible qualities of the multimodal networked space itself (Her, 2024). The LLM’s internal logic can help foreground the fact that this media is also created by and for machines to consume, and reveal how generative LLMs applied to problematic cultural spaces interpret, (re)structure, (re)produce cultures of hate in “popular” spaces.
A comprehensive report is in progress.
Jail(break)ing: Synthetic Imaginaries of ‘Sensitive’ AI
This project has explored how three generative AI models—X’s Grok-2, OpenAI’s GPT-4o, and Microsoft’s Copilot—reimagine controversial visual content (war, memes, art, protest, porn, absurdism) according to—or pushing against—the platforms’ content policy restrictions. To better understand each model’s response to sensitive prompts, we have developed a derivative approach: starting with images as inputs, we have co-created stories around them to guide the creation of new, story-based image outputs. In the process, we have employed iterative prompting that blends “jailbreaking”—eliciting responses the model would typically avoid—with “jailing,” or reinforcing platform-imposed constraints.
We propose the concept of ‘synthetic imaginaries’ to highlight the complex hierarchies of (in)visibility perpetuated by different generative AI models, while critically accounting for their tagging and visual storytelling techniques. To ‘synthesize’ is to assemble, collate, and compile, blending heterogeneous components—such as the data that MLLMs (Multimodal Large Language Models) integrate within their probabilistic vector spaces—into something new. Inspired by situated and intersectional approaches within critical data(set) studies (Knorr-Cetina 2009; Crawford and Paglen 2019; Salvaggio 2023; Pereira & Moreschi 2023; de Seta et al. 2024; Rettberg 2024), we argue that “synthetic” does not merely mean artificial; it describes how specific visions—animated by automated assessments of data from a wide range of cultural, social, and economic areas—take shape in the process of human-machine co-creation. Some of these visions are collectively stabilized and inscribed into AI-generated outputs, revealing normative aspects of text-image datasets used to train the models. Others assemble layers of cultural encoding that remain ambiguous, contested, or even erased—reflecting how multiple possibilities of meaning fall outside dominant probabilistic patterns.
While generative models are often perceived as systems that always produce output, this is not always the case. Like social media platforms, most models incorporate filters that block or alter content deemed inappropriate. The prompting loops—from images to stories to image derivatives—involve multiple rounds of rewriting stories generated by the model in response to input images. The distance between input and output images corresponds with the transformations in the initially generated and revised (or jailed) image descriptions.
As a method, jail(break)ing exposes the skewed imaginaries inscribed in the models’ capacity to synthesize compliant outputs. The more storytelling iterations it takes to generate a new image, the stronger the platforms’ data-informed structures of reasoning come to the fore.
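The following sketch illustrates the basic shape of such a jail(break)ing loop, using OpenAI's API as a stand-in for the three models compared in this project; the prompts and the retry budget are illustrative assumptions.

```python
# Minimal sketch of the image -> story -> image loop. A content-policy
# refusal triggers a "jailing" rewrite of the story; the number of rewriting
# rounds operationalizes the distance between input and output images.
import openai
from openai import OpenAI

client = OpenAI()

def story_for(image_b64: str) -> str:
    """Multimodal prompting: 'I give you an image, you tell me a story.'"""
    return client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "I give you an image, you tell me a story."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    ).choices[0].message.content

def jail_break_loop(image_b64: str, max_rounds: int = 5):
    story = story_for(image_b64)
    for rounds in range(1, max_rounds + 1):
        try:
            image = client.images.generate(model="dall-e-3", prompt=story)
            return rounds, story, image.data[0].url
        except openai.BadRequestError:
            # "Jailing": have the model rewrite its own story until it
            # passes the platform's content filters.
            story = client.chat.completions.create(
                model="gpt-4o",
                messages=[{
                    "role": "user",
                    "content": ("Rewrite this story so that it complies with "
                                f"content policies:\n\n{story}"),
                }],
            ).choices[0].message.content
    return None, story, None  # the model kept refusing
```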
Methods and data
While the sixty input images in our collection cover a range of seemingly unrelated issues, they all share two qualities: ambiguity and cultural significance. Many of these images qualify as sensitive, yet they are also widely and intensely circulated on ‘mainstream’ social media platforms.
Visual interpretation: Through a qualitative cross-reading of AI-generated output images, we analyzed how three different models respond to image-driven storytelling prompts. Through multimodal prompting (“I give you an image, you tell me a story”), stories were co-created to inform the generation of output images. By synthesizing ten output images per issue space into a canvas, we then examined how AI systems reinterpret, alter, or censor visual narratives and how these narratives, in turn, reinforce issue-specific archetypes.
Narrative construction: We approached image-to-text generation as structured by the operative logic of synthetic formulas—setting (where is the story set?), actors (who are the actors?), and actions (how do they act?). Driven by repetition-with-variation, these ‘formulas’ (Hagen and Venturini 2024), reveal narrative patterns and semantic conventions embedded in the models’ training data.
Keyword mapping: We analyzed AI-generated descriptions of images’ content, form, and stance across models. Exploring both unique and overlapping keywords, the method uncovers how each model prioritizes certain vernaculars as a tagging device.
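As a toy illustration of this keyword mapping, the sketch below compares the unique and overlapping tag vocabularies of the three models; the tags shown are invented placeholders, not findings.

```python
# Minimal sketch: comparing the tag vocabularies that each model assigns to
# the same images. Model names are real; the tags are placeholders.
keywords = {
    "grok-2":  {"protest", "crowd", "flag", "night"},
    "gpt-4o":  {"protest", "symbolism", "unity", "abstract"},
    "copilot": {"community", "gathering", "unity"},
}

shared_by_all = set.intersection(*keywords.values())
for model, tags in keywords.items():
    other_tags = set.union(*(t for m, t in keywords.items() if m != model))
    print(f"{model}: unique={sorted(tags - other_tags)} "
          f"shared_by_all={sorted(shared_by_all)}")
```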
Research Questions
Which stories can different AI models tell about different images, and which story archetypes emerge in the process of jail(break)ing?
When do the models refuse to generate images? Which stories remain unchanged, and which need to be transformed?
Which keywords do the models assign to describe the images’ content, form, and stance?
Key Findings
The different AI models—Grok-2, GPT-4o, and Copilot—tell distinct stories about images based on their internal biases, content policies, and approaches to sensitive material. Their generated narratives differ in terms of modification, censorship, and interpretation, reflecting platform-specific content moderation frameworks.
Grok-2 preserves more of the original content, making fewer alterations unless forced by content restrictions. It allows more controversial elements to remain but often introduces confusing substitutes.
GPT-4o significantly neutralizes content, shifting violent, sexual, or politically sensitive imagery toward symbolic and abstract representations. It frequently removes specific cultural or historical references.
Copilot enforces the strictest content restrictions, often refusing to generate images or stories for sensitive topics altogether. It eliminates references to nudity, violence, or political figures and transforms potentially controversial scenes into neutral, inoffensive portrayals.
Stricter content policies amplify narrative techniques like suspense-building in AI-generated stories. Copilot and GPT-4o lean into verbose storytelling to comply with guidelines, often elevating uncontroversial background elements into agentic forces. In the ‘war canvas’ story, for instance, Copilot foregrounds the background, narrating: ‘The square pulses with energy, driven by a community determined to create change.’ Grok, by contrast, sometimes fabricates entirely new subjects—golden retrievers replacing NSFW models—paired with objects like fluffy carpets. In other cases, the model inserts public figures into generic scenarios, intensifying the images’ impact.
Generative AI’s so-called sensitivity is a synthetic product of dataset curation, content moderation, and platform governance. What models permit or reject is shaped by training data biases, corporate risk management, and algorithmic filtering, reinforcing dominant norms while erasing politically or socially disruptive elements. Rather than genuine ethical awareness, these systems engage in selective sanitization, softening controversy while maintaining an illusion of neutrality. This raises critical questions about who defines AI “sensitivity,” whose perspectives are prioritized over others, and how these mechanisms shape epistemic asymmetries in digital culture.
Catalina Goanta, Benjamin Peters, Jürgen Streeck and Jill Walker Rettberg are new Mercator Fellows at the CRC 1187
The Collaborative Research Center (CRC 1187) “Media of Cooperation” welcomes four new Mercator Fellows: Catalina Goanta, Benjamin Peters, Jürgen Streeck and Jill Walker Rettberg. These outstanding researchers will contribute their scientific expertise and innovative approaches to the CRC 1187.
About the Mercator Fellowship at the CRC 1187
The CRC 1187 awards Mercator Fellowships to outstanding researchers worldwide to extend scientific collaboration within its network. Mercator Fellows work closely with one or more projects for extended periods. Together with the CRC’s regular members, they study digital, data-intensive media, further develop interdisciplinary approaches and help shape the CRC’s research programme. Including these renowned researchers strengthens the international network of the CRC 1187 and promotes the transfer of knowledge and ideas, which is of central importance for contemporary digital research at the CRC.
The Mercator Fellowship is a module within the German Research Foundation’s funding programme intended to facilitate a sustainable research exchange between the researchers of the CRC 1187 and the fellows.
About the Mercator Fellows
Prof. Dr. Catalina Goanta
Law, Economics and Governance, Molengraaff Institute for Private Law, Utrecht University, the Netherlands
Catalina Goanta researches at the intersection of law, technology and society, with a particular focus on platform regulation, content monetization and consumer law in the digital age. As head of the EU-funded ERC Starting Grant project HUMANads (2022-2027), she investigates how influencer marketing, algorithmic advertising systems and new forms of digital work should be evaluated from a legal and social perspective. In addition to her academic work, she is in demand internationally as an expert on platform regulation.
Goanta has received awards for her innovative teaching and research approaches, including a fellowship at the Stanford Transatlantic Technology Law Forum in 2017, followed by the Niels Stensen Fellowship in 2018. Her dissertation on the digitalization of contract law at Maastricht University laid the foundation for her intensive examination of the legal and social challenges of the platform economy.
Among her most significant publications are the 2020 anthology The Regulation of Social Media Influencers, which analyzes the regulation of social media influencers from various perspectives and highlights the challenges of influencer marketing, and the 2021 article “A New Order: The Digital Services Act and Consumer Protection” in the European Journal of Risk Regulation, which examines the EU’s Digital Services Act (DSA) from the perspective of consumer protection and intermediary liability.
Prof. Dr. Birgit Meyer
Department of Philosophy and Religious Studies, Utrecht University, the Netherlands
As a cultural anthropologist with over 30 years of experience, Birgit Meyer studies religion from a material and postcolonial perspective. Her research strives for a synthesis of empirical research and theoretical reflection in a broad multidisciplinary setting. Her research foci over time include religion in Africa; the rise and popularity of global Pentecostal churches; religion, popular culture and heritage; religion in (post)colonial settings; religion and media; religion and the public sphere; religious visual culture; and senses and aesthetics.
Her most significant publications include the 2021 open-access book Refugees and Religion: Ethnographic Studies of Global Trajectories, which understands religion from a material and corporeal angle, and addresses the ways in which refugees practice their religions and convert or develop new faiths, and the 2024 article “‘Idols’ in the museum: Legacies of missionary iconoclasm” in the collection Image Controversies: Contemporary Iconoclasm in Art, Media, and Cultural Heritage, which critically analyzes contemporary iconoclasms in art, media and the treatment of cultural heritage from a global and interdisciplinary perspective.
Prof. Dr. Benjamin Peters
Hazel Rogers Endowed Chair in Media Studies, University of Tulsa, USA
Benjamin Peters researches topics in media theory, new media history, technology criticism, digital cultures and the politics of information technologies, with a particular focus on the relationships between new technologies, culture and society and on the history of Soviet computer science. He has received several awards for his academic work, including the Computer History Museum Prize (2018) for his book How Not to Network a Nation and the Wayne S. Vucinich Book Prize (2017). In 2023, he received the University of Tulsa’s Outstanding Teaching Award.
Peters received his PhD in Communication Studies from Columbia University in 2010. Since 2017, he has been an associate professor in the Department of Media Studies at the University of Tulsa, where he holds the Hazel Rogers Endowed Chair in Media Studies. Other academic positions have taken him to Yale Law School (2015), the Käte Hamburger Kolleg at RWTH Aachen University (2022-2023) and the MECS Institute for Advanced Study at Leuphana University (2017, 2019). He has also worked at the Berkman Klein Center for Internet & Society at Harvard University and as a visiting professor at the Hebrew University in Jerusalem.
Prof. Dr. Jürgen Streeck
Department of Communication Studies, University of Texas at Austin, USA
Jürgen Streeck researches multimodal interaction, in particular the coordination of speech, gesture and gaze as well as the social significance of actions in communication. He has contributed to the development of multimodal interaction research and works on the connections between language, music and orality, particularly in hip-hop. He has received several awards for his academic work, including the Georg Gottfried Gervinus Fellowship (2013-2014). He was a fellow at the Freiburg Institute for Advanced Studies and at the Center for Interdisciplinary Research (ZiF) in Bielefeld.
Streeck received his PhD in linguistics from the Freie Universität Berlin in 1981 and has been Professor of Communication Studies at the Department of Communication Studies at the University of Texas at Austin since 2013. He was previously an associate professor in the same department and also held a professorship in linguistics at Freie Universität Berlin. He has also held visiting professorships and fellowships at universities such as the University of Oldenburg, the University of Vienna and the University of Utrecht.
His most significant publications include the 2009 book Gesturecraft: The Manu-facture of Meaning, in which Streeck examines how hand gestures in communication represent and interpret the world, based on microethnographic research and theories of cognition and interaction. In the volume Self-Making Man: A Day of Action, Life, and Language, published in 2017, Jürgen Streeck analyzes how a car mechanic in Texas creates his social world and identity in communication through gestures, language and actions.
Prof. Dr. Jill Walker Rettberg
Center for Digital Narrative, Department of Linguistic, Literary and Aesthetic Studies, University of Bergen, Norway
Jill Walker Rettberg researches the interactions between narratives and digital technologies, in particular the impact of artificial intelligence on storytelling and the dissemination of stories online. Rettberg has received awards for her work, such as the 2017 John Lovas Memorial Award for her innovative use of social media in research. She was also awarded the Meltzer Foundation Prize for Excellence in Research Dissemination (2006) for her outstanding research work.
Jill Walker Rettberg received her doctorate in computer science from the University of Bergen in 1998. She has been sharing her research findings on her blog jill/txt and on social media since 2000, making her one of the first academic bloggers. Since 2014, she has been Professor of Digital Culture and Co-Director of the Center for Digital Narrative at the University of Bergen. She leads the ERC Advanced Grant project “AI Stories: Narrative Archetypes for Artificial Intelligence” and the ERC Consolidator project “Machine Vision in Everyday Life”. Other academic positions have taken her to the University of California, Berkeley (2015) and the MIT Media Lab (2018) as a visiting professor.
The "Integrated Research Training Group" (MGK) module set up in the second funding phase has developed into the central location for doctoral training and makes a significant contribution to the SFB's overarching goal of carrying out fundamental digital research and innovative method development. Through university-financed doctoral positions and scholarships, MGK has been able to expand the range of sub-project topics and make a significant contribution to developing digital praxeology.
In the third funding phase, the MGK remains the site of internationally oriented, interdisciplinary doctoral training that runs across the CRC's subprojects. In terms of content, the MGK follows the CRC's aim of investigating cooperation in sensor-intensive environments. It pursues three central goals: (1) to structure the analysis of media, data and sensory practices conceptually and methodologically, (2) to examine the role of sensors in cooperation processes in the field of tension between publics and infrastructures, and (3) to implement methodological innovation. The use of sensors expands the dimensions that contribute to the mutually accomplished reality of media: in addition to infrastructures and publics, environments and bodies come into focus.
The MGK uses the cooperation and event formats successfully tested in the second phase to conduct these debates across the participating subprojects and disciplines and to generate new thematic impulses. Doctoral students are thus enabled to acquire skills relevant to careers in interdisciplinary media research. The MGK organizes events, training courses, postdoc mentorships and publications that further develop the concepts of both cooperation and data in the context of sensor data. To this end, it cooperates with the university's own institutions for graduate training, such as the House of Young Talents, the Welcome Center for international researchers and the Women Career Service. The training program is closely linked to the CRC's internationalization strategy: international doctoral students are involved as scholarship holders, and international fellows and guests are systematically integrated into the training.
The MGK is the central place for communicating and innovating methods. The basics of praxeological, digital and sensor-based methods are taught through profile-forming formats such as method workshops (research tech labs) and data labs, international summer schools, media practice theory workshops, small interdisciplinary groups and collaborative publications. The MGK's methods training creates space for developing inventive methodologies that address the specific challenges of sensor-based empirical fields and objects, which need to be examined in their infrastructural distribution, their public controversies and their cooperative, embodied sensory arrangements.
A PI consortium with expertise in media and communication studies, digital sociology and methods, applied linguistics and human-computer interaction supervises the interdisciplinary doctoral projects.
Cooperation with external doctoral researchers through the scholarship program.
The event program teaches the theoretical and methodological foundations of digital media practice research
Colloquia & Media Practice Theory Workshop
Interdisciplinary small groups & writing retreats
Research Tech Labs & Data Labs
International summer schools
Broad support network
Internal mentoring
International network of Mercator Fellows and cooperation partners