
The big problem with artificial intelligence: why are "hallucinations" increasing?

In today's world of technology and innovation, artificial intelligence systems are developing rapidly and entering many spheres of life. AI is now used in almost every field: business, education, medicine, and everyday life. However, recent studies and expert assessments show that as artificial intelligence grows more complex, its problems and errors multiply.

In particular, many questions and critical opinions have arisen around OpenAI's latest models, especially its most advanced reasoning systems, o3 and o4-mini. According to The New York Times, OpenAI's newest and most powerful model, o3, makes significantly more mistakes than previous generations. This points to serious problems in how the AI "reasons" and draws conclusions.

Such errors are not unique to OpenAI products, but are also frequently observed in AI models of major technology companies such as Google, and the Chinese startup DeepSeek. Although their computing and mathematical capabilities have significantly improved, the number of mistakes in their responses is not decreasing, but rather increasing.

One of the most serious and frequently observed failures in the field of artificial intelligence is the problem of "hallucinations." Here, "hallucination" refers to information the AI fabricates without any source or basis. In other words, the system presents non-existent events or incorrect claims as if they were true.

According to Amr Awadallah, CEO of Vectara, such hallucinations, despite all efforts, remain a structural problem of artificial intelligence. This once again confirms the need for caution when working with AI systems.

A vivid example of the hallucination problem was a mistake made by the AI bot in Cursor's technical support service. The bot incorrectly told users that the tool could be used on only one computer, which caused major confusion. A later investigation showed that the company had imposed no such restriction and that the bot had invented the policy.

Such incidents are just one illustration of how prone AI is to hallucinating. According to independent and internal test results, the new o3 and o4-mini models hallucinated in 51% and 79% of responses to general-knowledge questions, respectively, significantly higher than the 44% recorded for the older o1 model.

More detailed testing showed that when answering questions about well-known people, the o3 model gave incorrect answers in one-third of cases, twice the rate of the o1 model. The o4-mini model fared even worse, with an error rate of 48%.

Experts note that hallucinations are also regularly observed in AI models developed by Google and the startup DeepSeek. Although these companies have made many efforts to reduce errors, hallucination rates have fallen by only 1–2% over the past year.

OpenAI and other developers acknowledge the need for additional scientific research to deeply investigate the causes of this issue. Amid the sharp increase in demand for AI tools, solving these problems has not only scientific importance, but also great practical and economic significance.

In general, as the capabilities of artificial intelligence expand, so does its responsibility. While developers, researchers, and companies strive to create newer and more advanced algorithms, the issues of accuracy and reliability of AI remain of primary importance to the public.

Solving such problems will be important in the future to further strengthen the role of AI in human life and to take the next steps in technological progress. That’s why Zamin.uz will continue to regularly cover news on this topic and promptly deliver updates to readers.