Anthropic has introduced a new feature in its Claude Opus 4 and 4.1 models that allows the AI to choose to end certain conversations.
According to the company, this only happens in particularly serious or concerning situations. For example, Claude may choose to stop engaging with you if you repeatedly attempt to get the AI chatbot to discuss child sexual abuse, terrorism, or other “harmful or abusive” interactions.
The feature was added not only because such topics are controversial, but because it gives the model a way to exit once repeated attempts at redirection have failed and productive dialogue is no longer possible.
If a conversation ends, the user cannot continue that thread but can start a new chat or edit previous messages.
The initiative is part of Anthropic’s research on AI well-being, which explores how AI can be protected from stressful interactions.