According to OpenAI CEO Sam Altman, conversations with artificial intelligence are not legally protected in the same way as those with doctors, lawyers, or therapists. More and more users are turning to ChatGPT for psychological help and personal advice, but such interactions may be revealed to authorities or courts if legally required.
Altman emphasized that while professionals are bound by confidentiality laws, AI platforms are not. He sees this as a serious concern and argues that there should be legal safeguards for AI-human conversations.
If a user discusses personal issues with ChatGPT and later becomes involved in legal proceedings, OpenAI could be legally compelled to disclose the content. Altman believes this is problematic and stresses that users should feel secure sharing sensitive information with AI.
He also noted that, unlike end-to-end encrypted messengers such as WhatsApp or Signal, OpenAI can access conversations and use them for model improvement and abuse detection. This raises further questions about data security.
Under OpenAI's policy, deleted ChatGPT chats are removed from its systems within 30 days unless retention is required by law or for safety reasons. During that period, however, the data may still be accessed.
Experts highlight the growing need for international legal frameworks to protect personal information shared through AI platforms.
The Zamin.uz editorial team actively monitors the most important news in technology and security and will continue to keep you informed.