DISINFORMATION AND ARTIFICIAL INTELLIGENCE: THE IMPACT OF HALLUCINATIONS IN THE USE OF CHATGPT IN THE ACADEMIC FIELD

Authors

  • Lívia Inglesis Barcellos
  • João Pedro Albino

Keywords:

Hallucinations, Artificial Intelligence, Academic Research, Misinformation

Abstract

Technological advances driven by digital platforms have increased the risk of information distortion, intensifying misinformation across communication environments. The recent incorporation of generative artificial intelligence into knowledge production introduces additional challenges by enabling the creation of factually inaccurate content, known as "hallucinations." This article examines occurrences of this phenomenon in AI systems, focusing on tests conducted with ChatGPT. The methodology combined a bibliographic review with controlled experimentation, using specific prompts to elicit academic information, citations, and references. The results revealed inconsistencies, factual errors, and nonexistent references, indicating significant risks to the integrity of scientific research. The study concludes that, although generative AI tools can support academic work, they require rigorous verification, semantic scrutiny, and awareness of their limitations to prevent the dissemination of inaccurate content and preserve the quality of scientific production.

DOI: https://doi.org/10.56238/sevened2025.038-069

Published

2025-12-17

How to Cite

Barcellos, L. I., & Albino, J. P. (2025). DISINFORMATION AND ARTIFICIAL INTELLIGENCE: THE IMPACT OF HALLUCINATIONS IN THE USE OF CHATGPT IN THE ACADEMIC FIELD. Seven Editora, 1151-1172. https://sevenpubl.com.br/editora/article/view/8770