
Artificial Intelligence for Science Communication: Acceptance and Lay People Comprehension

Workgroup: Knowledge Construction Lab
Duration: 07/2020-03/2025
Funding: Sondertatbestand Data Science
Project description

How do we obtain scientific information? Whom do we get it from? What if artificial intelligence could present complicated topics and technical information to us in an easily understandable way? This research project investigates how laypeople perceive and evaluate intelligent language assistants that communicate scientific information. In particular, it explores how different textual representations of automated content affect the acceptance and reception of scientific knowledge.


Science and scientific information are essential components of a modern knowledge and media society. Decisions are often made based on scientific findings, which is why science communication plays an important role in shaping beliefs and actions. Scientific information is omnipresent: it reaches people via almost all media and is provided by a variety of stakeholders. It is therefore of high practical relevance to investigate new methods for successful science communication.

Developments in digitization and advances in artificial intelligence (AI) methods make it possible to analyze large amounts of scientific data. These data and findings can be processed in such a way that they become available to the public without major barriers. AI tools that summarize scientific information and present it in an easily understandable way could thus serve an essential function in science communication. Text production programs have been used successfully in journalism for nearly a decade now, as they can analyze large amounts of data, aggregate relevant information, and convert it into text without human intervention. Moreover, newer tools such as ChatGPT, combined with plug-ins specially developed for fact-checking and for linking to scientific databases (e.g., Wolfram, ScholarAI), can provide laypeople with supposedly scientific information much more directly.


However, this raises the question of how well laypersons can deal with this automatically produced content and what ideas they associate with it. Do they understand that the information presented is based on large amounts of individual data points? How do they perceive the automated generation of text? Do people accept and trust these methods and the resulting content? How do particular variations of the AI process influence experience and behavior?


Experimental studies examine how well laypeople understand information (allegedly) prepared by an AI and whether they accept this type of science communication. The resulting insights will help reveal the opportunities and limits of AI-based science communication and inform the optimal design of these new methods.

Publications

Lermann Henestrosa, A., & Kimmerle, J. (2024). The effects of assumed AI vs. human authorship on the perception of a GPT-generated text. Journalism and Media, 5, 1085-1097. https://dx.doi.org/10.3390/journalmedia5030069

Lermann Henestrosa, A., Greving, H., & Kimmerle, J. (2023). Automated journalism: The effects of AI authorship and evaluative information on the perception of a science journalism article. Computers in Human Behavior, 138, Article 107445. https://dx.doi.org/10.1016/j.chb.2022.107445