In: Proceedings of the 2025 International Conference on Information Technology for Social Good, Association for Computing Machinery

Preventing Accidental Sharing of Misinformation Using Large Language Models

Mirko Franco, Valentin Grimm, and Eelco Herder
Dec 2025
The proliferation of misinformation is one of the most pressing challenges in today’s digital landscape, due to its far-reaching implications for public health, economic stability, trust in governmental institutions, and societal cohesion. Despite efforts to regulate online platforms and limit the spread of misinformation, many individuals are left behind because of low digital literacy, limited education, and other contributing factors. In this context, we explore the use of Large Language Models (LLMs) to identify misinformation and evaluate the capabilities of GPT-4.1-mini as a representative example of these models. We then discuss how LLMs can help empower users to critically create and share information, thereby fostering more resilient online communities. We also present a set of possible interaction patterns for content creation and moderation.
@inproceedings{3258,
  author    = {Franco, Mirko and Grimm, Valentin and Herder, Eelco},
  title     = {Preventing Accidental Sharing of Misinformation Using Large Language Models},
  booktitle = {Proceedings of the 2025 International Conference on Information Technology for Social Good},
  year      = {2025},
  pages     = {244--252},
  address   = {Antwerp, Belgium},
  month     = {Dec},
  publisher = {Association for Computing Machinery},
}