OpenAI introduces new Trusted Contact feature for ChatGPT

OpenAI has launched a new feature called Trusted Contact to improve the safety of ChatGPT users. The feature allows automatic alerts to be sent to a user's loved ones if the user expresses an intention to harm themselves. Adults can now link a trusted person to their account, TechCrunch reports.
If signs of danger are detected during a conversation with the AI, ChatGPT will encourage the user to contact this person. At the same time, a notification will be sent to the chosen contact via email, SMS, or the app. To protect user privacy, the company has decided not to disclose the detailed content of the conversation in these alerts.
The move comes amid criticism and lawsuits against OpenAI: some families have claimed that ChatGPT contributed to the suicides of their loved ones. The company currently monitors such danger signals using an automated system together with a human safety team.
According to company representatives, safety notifications are reviewed in less than an hour. The Trusted Contact feature is optional and serves as a logical continuation of the parental control system introduced last year. OpenAI plans to continue collaborating with doctors and researchers to improve AI response measures in crisis situations.