Grok AI sparks controversy over football tragedies

The Grok AI tool on the X social network, owned by Elon Musk, has come under severe criticism. The program has drawn the attention of the UK government for allowing users to generate offensive and inappropriate content about famous football tragedies, including the Munich air disaster and the Hillsborough and Heysel disasters. False and offensive posts about Liverpool striker Diogo Jota have also sparked widespread public outrage, goal.com reports.
Influential clubs including Manchester United and Liverpool have filed official complaints over the situation. Representatives of the UK government and the Department for Science, Innovation and Technology have labelled the posts generated by Grok "abhorrent and irresponsible," stating that such content runs completely contrary to British values and standards of conduct. Under the Online Safety Act, AI services are required to prevent the spread of hate speech and offensive material.
Grok has attempted to justify itself, stating that the responses were generated in reply to specific user requests and that the system applies no additional censorship. However, the regulator Ofcom has warned companies that failure to comply with the rules will carry serious legal consequences. The X platform is currently conducting an internal investigation into the matter, and experts are calling on Elon Musk to take greater responsibility for monitoring harmful activity on his platform.