When ChatGPT was released to the public in November 2022, it changed the communication ecosystem. The chatbot, which provides original, human-like responses to user prompts based on extensive training data and human feedback, reached a million users within a week and 100 million users by January 2023 – arguably one of the fastest rollouts of any technology in history.
ChatGPT is only one example of the broader development of “generative AI” that produces novel outputs based on training data and, by now, translates text (like DeepL), creates imagery (like DALL-E, Midjourney or Stable Diffusion), generates textual responses (like ChatGPT), and more (Cao et al. 2023). And generative AI, in turn, is only one, albeit prominent, example of artificial intelligence, which has become relevant in fields like the economy, science, and healthcare and will also change the practice of science communication. For example, it can support science communication practitioners in generating content or identifying new ideas and trends, in translating and preparing scientific results and publications for different channels and audiences, and in enabling interactive exchanges with various user groups (De Angelis et al. 2023).
AI is also highly relevant for science communication research (Schäfer 2023), which is why the Annual Conference of the “Science Communication” Division of the German Communication Association (DGPuK) aims to connect researchers from the German-speaking countries with international colleagues, and bring together cutting-edge research assessing the role of AI in science communication. The conference will be a forum for science communication research on all facets of AI while also providing an open panel for non-AI-related research.
Submissions may focus on – but are by no means limited to – the following themes and perspectives:
Presentations can analyze AI as an object of science communication: They may focus, first, on the producers of AI-related communication, i.e., on the communication efforts and strategies of scholars, scientific organizations and institutions of higher education, but also of tech companies, regulators, NGOs, and other stakeholders (e.g., Richter et al. 2023). Second, they could focus on intermediaries of communication, such as journalists, social media influencers, or tech platforms, and their influence on communication about AI (Nishal & Diakopoulos 2023). Third, presentations may analyze public communication about generative AI in legacy media, social media, public imagery, fictional accounts etc. (Brause et al. 2023). Fourth, presentations could focus on consumption, i.e., on the perceptions, use and effects of AI-related communication – among citizens, but also among stakeholders, regulators, researchers, and others (Begenat & Kero 2023; Henestrosa et al. 2023; Starke & Lünich 2020).
We also welcome presentations analyzing AI as an agent of (science) communication. After all, AI differs from other objects of science communication because the technology itself has “increased agency” as a form of “communicative AI” (Guzman & Lewis 2020: 79; also Hepp et al. 2022). Presentations analyzing human-AI interactions would therefore be highly relevant for the conference (e.g., Dogruel & Dickel 2022). How people interact with (generative) AI and evaluate it, how the technology responds and adapts, and what the results of these interactions are on both sides are some of the most interesting research questions of the near future (Chen et al. 2023; Henestrosa & Kimmerle 2022). This includes a focus on reconstructing the – often opaque and proprietary (Buhmann & Fieseler 2021) – inner workings of communicative AI, its underlying values and likely biases (cf. Seaver 2019). We are also interested in studies assessing how and to what extent science communicators and journalists use AI in the creation or distribution of science-related content (cf. Wilczek & Haim 2023), and whether they do so responsibly and ethically (Henke 2023; Medvecky & Leach 2019).
AI influences societal communication in various ways, from generative AI acting as an agent of communication, through algorithmic curation on tech and social media platforms, to the use of AI for surveilling communication and for content moderation. Presentations on such uses of AI vis-à-vis science communication, and on the impact of AI technologies on the broader science communication ecosystem, are invited to the conference as well. They could assess whether AI tools lend themselves equally well to different topics, formats, or audiences of science communication, or whether they result in, or reproduce, biases. They could focus on AI’s influence on the diversity of, and balance of power between, different science communicators, journalists etc., and on job market implications in science communication practice. They could focus on potential AI-related changes in the content of public communication about science, e.g., how accurate AI-generated content is, how much misinformation or how many deepfakes it contains (Godulla et al. 2021), and whether it produces “wrongness at scale” (Ulken 2023). And they could focus on AI’s impact on users (Ho 2023), e.g., on whether it (dis)informs audiences better than humans do (Spitale et al. 2023) or whether it produces digital divides (Hargittai & Hsieh 2013) in terms of access to the technology (“first-level divides”) or in terms of the skills and literacy necessary to make optimal use of it (“second-level divides”).
AI will impact both the theoretical and conceptual foundations and the methodological repertoire of science communication research. Therefore, contributions that look at AI from a meta-perspective are encouraged as well. On the one hand, this concerns theoretical and conceptual perspectives: After all, “artificial intelligence (AI) and people’s interactions with it […] do not fit neatly into paradigms of communication theory that have long focused on human–human communication” (Guzman & Lewis 2020: 70). Conceptual work and theory-building are therefore needed (cf. Greussing et al. 2022), drawing on fields like Human-Machine Communication (Guzman 2018), social-constructivist approaches like Science and Technology Studies or the Social Construction of Technology, or critical-interventionist approaches like Value-Sensitive Design (Schäfer & Wessler 2020). On the other hand, contributions examining the impact of AI on the methodology and methods of science communication research are encouraged: AI will afford researchers new opportunities and can function as a research tool (Chen et al. 2023; Schmidt 2023), e.g., for conducting literature reviews and generating hypotheses, for data collection and annotation, and for coding, summarizing, and presenting findings (Stokel-Walker & van Noorden 2023).
The conference will also provide a forum for general science communication research, thus enabling current research unrelated to AI to be presented.
Submissions will be administered via ConfTool at https://www.conftool.net/aiscicomm24. The following deadlines apply to all submission types:
Selected papers from the conference will be invited for publication in a special issue of “JCOM – Journal of Science Communication” (https://jcom.sissa.it), edited by the conference organizers and forthcoming in 2025. Please note that all invited publications will still be peer reviewed, and that an invitation to submit a full article does not guarantee eventual publication in the special issue.