Anthropic has introduced a new feature in its Claude Opus 4 and 4.1 models that allows the AI to choose to end certain conversations.
According to the company, this only happens in particularly serious or concerning situations. For example, Claude may stop engaging if a user repeatedly tries to steer the chatbot toward child sexual abuse, terrorism, or other “harmful or abusive” interactions.
The feature was added not just because such topics are controversial, but because it gives the AI a way to exit once repeated attempts at redirection have failed and productive dialogue is no longer possible.
If a conversation ends, the user cannot continue that thread but can start a new chat or edit previous messages.
The initiative is part of Anthropic’s research into model welfare, which explores whether and how AI models should be protected from distressing interactions.