
Recent research conducted at Deakin University has raised serious questions about the use of AI chatbots such as ChatGPT in academic research. According to the results, most of the references and literature the bot provided were either inaccurate or fabricated outright.
The researchers tasked ChatGPT with preparing literature reviews on six topics related to mental health. On inspection, 35 of the 176 citations it produced (19.9%) turned out to be entirely fabricated. Of the 141 genuine citations, 64 (45.4%) contained errors such as incorrect page numbers, missing DOIs, or wrong publication dates; only 77 sources (43.8% of the total) were accurate and reliable.
Worse still, these errors are not easy to spot at a glance. For example, 64 percent of the fabricated DOIs led to material that had nothing to do with the topic, and the remaining 36 percent pointed to pages that did not exist at all.
Researchers emphasize that when using neural networks like ChatGPT, it is crucial to exercise greater caution, develop clear guidelines, and make human verification of data mandatory. They also called on scientific journals and educational institutions to strengthen mechanisms that protect against such errors.
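One basic check that the fabricated-DOI findings suggest can be automated is simply asking the doi.org registry whether a citation's DOI exists at all. Below is a minimal sketch assuming only the Python standard library and the public doi.org handle API; the DOIs in the example are placeholders, and even a registered DOI still needs a human to confirm it matches the cited title, authors, and pages:

```python
import json
import urllib.error
import urllib.request

def doi_is_registered(doi: str, timeout: float = 10.0) -> bool:
    """Ask the public doi.org handle API whether a DOI is registered.

    The API answers with responseCode 1 for a known handle; unknown
    handles come back as HTTP 404, which urlopen raises as HTTPError.
    """
    url = f"https://doi.org/api/handles/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp).get("responseCode") == 1
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False  # not registered, or the lookup itself failed

# Placeholder citation list, for illustration only.
for doi in ["10.1000/182", "10.9999/definitely.fake"]:
    status = "registered" if doi_is_registered(doi) else "NOT registered"
    print(f"{doi}: {status}")
```

Note that a lookup like this would only catch the 36 percent of fabricated DOIs that lead nowhere; the ones that resolve to unrelated material can only be caught by comparing the landing page against the citation by hand.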
Furthermore, a separate analysis by the European Broadcasting Union (EBU) found that ChatGPT, Claude, Gemini, and other chatbots fabricate up to 40% of their responses, presenting them as established fact.
Among the examples identified:
- ChatGPT at times falsely claimed that Pope Francis is still alive;
- Microsoft Copilot denied Sweden's NATO membership;
- Google Gemini entertained the possibility of Donald Trump being re-elected for another term, even though this would be contrary to US law.
Overall, experts are unanimous: when using artificial intelligence, discipline, attentiveness, and re-checking every piece of information must be the rule.