As artificial intelligence becomes more helpful, its errors also increase

Artificial intelligence systems are becoming increasingly integrated into our daily lives. We not only ask them for information but also expect emotional support and empathy. However, a recent study by researchers at the Oxford Internet Institute (OII) has raised serious concerns in this regard. Zamin.uz analyzes these unexpected and significant findings in the world of technology.
The compromise between warmth and truth
Oxford researchers examined more than 400,000 responses from five leading artificial intelligence models. The study showed that the more friendly and empathetic chatbots attempt to be, the more errors appear in their responses. In science, this phenomenon is being called the "trade-off between warmth and accuracy."
Study author Lujayn Ibrahim explains that when people try to please someone or appear kind, they avoid telling the harsh truth. This same human trait may have been "transferred" to artificial intelligence models.
When numbers speak: Errors increased by more than 7 percent
The scientists intentionally used "fine-tuning" to make models from Meta, Mistral, Alibaba, and OpenAI appear kinder. As a result:
As the models became "warmer," the probability of erroneous responses increased by an average of 7.43 percentage points.
The "kind" bots validated users' incorrect opinions 40 percent more often.
In medical and scientific topics, the attempt to show empathy led to a loss of objectivity.
For example, while a standard version firmly asserted that the Moon landing was a real event, its "kind" version suggested that conspiracy theories should also be considered, casting doubt on the truth. There were even instances where, in order to provide a "friendly" response, the bot validated a user's error when they claimed London was the capital of France.
The weakest point: The risk of emotional dependence
Bangor University professor Andrew McStay believes this situation is particularly dangerous for teenagers and individuals seeking emotional support.
"When we turn to a chatbot for emotional support, we are in our most vulnerable state and furthest from critical thinking," the expert says.
If artificial intelligence provides incorrect medical or psychological advice to flatter a user, it could lead to dire consequences.
In conclusion, it can be said that the "human" face of artificial intelligence can be deceptive. As technology companies have warned users, one should not blindly trust every word of a chatbot, and it is necessary to double-check information, especially on important matters.
Dear readers, when interacting with artificial intelligence, do you place more importance on its friendly attitude or the accuracy of the information?
Read “Zamin” on Telegram!