Anthropic has introduced a new feature in its Claude Opus 4 and 4.1 models that allows the AI to choose to end certain conversations.
According to the company, this only happens in particularly serious or concerning situations. For example, Claude may stop engaging with you if you repeatedly try to push the chatbot toward child sexual abuse, terrorism, or other “harmful or abusive” interactions.
The feature was added not just because such topics are controversial, but because it gives the AI a way out once multiple attempts at redirection have failed and productive dialogue is no longer possible.
If a conversation ends, the user cannot continue that thread but can start a new chat or edit previous messages.
The initiative is part of Anthropic’s research into AI welfare, which explores whether and how AI models should be protected from distressing interactions.