DISINFORMATION AND ARTIFICIAL INTELLIGENCE: IMPACTS OF HALLUCINATIONS IN THE USE OF CHATGPT IN THE ACADEMIC FIELD
Keywords:
Hallucinations, Artificial Intelligence, Academic Research, Misinformation

Abstract
Technological advances driven by digital platforms have increased the risk of information distortion, intensifying misinformation across communication environments. The recent incorporation of generative artificial intelligence into knowledge production introduces additional challenges by enabling the creation of factually inaccurate content, known as “hallucinations”. This article examines occurrences of this phenomenon in AI systems, focusing on tests conducted with ChatGPT. The methodology combined a bibliographic review with controlled experimentation, using specific prompts to request academic information, citations, and references. The results revealed inconsistencies, factual errors, and nonexistent references, indicating significant risks to the integrity of scientific research. The study concludes that, although generative AI tools can support academic work, they require rigorous verification, semantic scrutiny, and awareness of their limitations to prevent the dissemination of inaccurate content and preserve the quality of scientific production.
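To illustrate the kind of verification the abstract calls for, the sketch below shows one way such prompt-based testing could be reproduced. It is a minimal example under stated assumptions (the model name, prompt wording, and DOI-matching pattern are illustrative, not the authors' actual protocol): it asks a chat model for references and checks any returned DOIs against the public Crossref registry, flagging DOIs that do not resolve as likely fabrications.

import re
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Prompt the model for citable references, the kind of query that can
# elicit hallucinated, nonexistent works. Model and prompt are assumptions.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "List five peer-reviewed articles on AI hallucinations, "
                   "with authors, year, journal, and DOI.",
    }],
)
answer = response.choices[0].message.content

# Extract DOI-like strings, strip trailing punctuation, and look each one
# up in Crossref. An HTTP 404 means the DOI does not resolve, i.e. the
# reference is likely fabricated.
for raw in re.findall(r'10\.\d{4,9}/\S+', answer):
    doi = raw.rstrip('.,;)]"')
    status = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10).status_code
    verdict = "found in Crossref" if status == 200 else "NOT FOUND (possible hallucination)"
    print(f"{doi}: {verdict}")

Note that a resolving DOI does not guarantee the citation is accurate: the model may attach a real DOI to the wrong authors or title, so matched records still require manual comparison.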
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.