Anthropic: Claude models can now stop harmful conversations

According to TechCrunch, Anthropic announced that some of its Claude models can now independently end harmful or abusive conversations.
The company says the feature is intended to improve safety in online communication and protect users from psychological harm.
Experts note that this step could raise the level of AI accountability in human interaction and help foster a healthier digital environment.