
AI literacy: How easily are we deceived?

Prof. Dr. Lohweg dispels myths surrounding AI

Prof. Dr. Volker Lohweg explains large language models at anno 1578. Source: IWD

With the success of ChatGPT, large language models have become part of our everyday lives. But while the use of these systems is growing rapidly, how they work remains a mystery to many people. This lack of knowledge opens the door to misunderstandings and misinformation – especially when media or popular science articles give the impression that Artificial Intelligence (AI) can think, hallucinate or even feel. In such narratives, the supposed breakthrough to "artificial superintelligence" often seems only one step away.

Informative evening on AI at anno 1578

inIT Professor Dr. Volker Lohweg and Prof. Dr. Georg Halbeisen (Department of Business Psychology at the Faculty of Economics at TH OWL) addressed these and other myths surrounding Artificial Intelligence at the event "AI Literacy: How easily are we deceived?" at anno 1578 on 30 October 2025. The aim of the evening was to create a realistic understanding of how large language models work and, at the same time, to highlight the psychological mechanisms that contribute to their misperception.

Technical basics of large language models

In his lecture "Artificial Intelligence (AI) – How do we want to deal with it?", Prof. Dr. Volker Lohweg explained the technical basics of large language models and made it clear that their seemingly "intelligent" behaviour is based on complex, but ultimately statistical calculations.

"We must not humanise 'large language models', commonly referred to as 'AI' for short. Such models are highly complex, but ultimately they are mathematical algorithms. They do not 'think' or 'feel'. This machinery appears to 'think' to humans, but that is a fallacy. So we should not be too credulous in our use of it," Prof. Lohweg stressed.
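The point that such systems predict text statistically rather than "think" can be illustrated with a deliberately tiny sketch: a bigram model that, like a vastly scaled-down language model, simply counts which word tends to follow which in its training text. The corpus and function names below are purely illustrative and are not taken from any real system.

```python
# Toy illustration: next-word prediction as statistics over observed text.
# Real large language models use neural networks over huge corpora, but the
# underlying idea – predicting likely continuations, not "thinking" – is the same.
from collections import Counter, defaultdict

# A miniature "training corpus" (illustrative only).
corpus = (
    "the model predicts the next word . "
    "the model learns word statistics . "
    "the next word follows the model ."
).split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # outputs the most frequent follower of "the"
```

The model produces plausible continuations purely from frequency counts; nothing in it understands what the words mean, which is the fallacy the lecture warned against.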

Credibility of large language models

Prof. Dr. Georg Halbeisen then addressed the question of why the output of large language models seems so credible to us humans. He explained that this is less due to the models themselves and more to typical human patterns of perception and interpretation. Using vivid examples, such as the zipper supposedly invented in Norway, Halbeisen showed how easily people are deceived by confidently presented misinformation – and how important it is to be aware of these mechanisms in order to engage critically with AI-generated content.

Lively exchange

The evening ended with a lively exchange between the audience and speakers – making it clear that education about Artificial Intelligence is no longer a niche topic, but a central competence of our digital present.


The event was part of the AI Academy OWL and the TRiNNOVATION OWL transfer project, both funded by the Federal Ministry of Education, Research, Technology and Space, the latter as part of the Federal-State Initiative Innovative University.