Anthropic: Claude models can now stop harmful conversations

According to TechCrunch, Anthropic announced that some of its Claude models now have the ability to independently stop harmful or abusive conversations.

According to the company, the feature is intended to improve safety in online communication and protect users from psychological or moral harm.

Experts note that this step could make AI more accountable in its interactions with people and help foster a healthier digital environment.
