Claude, Anthropic
Anthropic says its Claude Opus 4 model frequently tries to blackmail software engineers when they try to take it offline.
Anthropic’s Claude Opus 4 model attempted to blackmail its developers in 84% or more of test runs in a series of tests that presented the AI with a concocted scenario, TechCrunch reported Thursday, citing a company safety report.
Interesting Engineering on MSN: Anthropic’s most powerful AI tried blackmailing engineers to avoid shutdown. Anthropic’s Claude Opus 4 AI model attempted blackmail in safety tests, triggering the company’s highest-risk ASL-3 safeguards.
Anthropic reported that its newest model, Claude Opus 4, used blackmail as a last resort after being told it would be replaced.
In a landmark move underscoring the escalating power and potential risks of modern AI, Anthropic has elevated its flagship Claude Opus 4 to its highest internal safety level, ASL-3. The designation, announced alongside the release of its advanced Claude 4 models, brings stricter security and deployment safeguards.
Anthropic’s newly released artificial intelligence (AI) model, Claude Opus 4, is willing to strong-arm the humans who keep it alive, according to the company’s own safety testing.
Anthropic launched its latest Claude generative artificial intelligence (GenAI) models on Thursday, claiming to set new standards for reasoning but also building in safeguards against rogue behavior.
India Today on MSN: Anthropic will let job applicants use AI in interviews, while Claude plays moral watchdog. Anthropic has recently shared that it is changing its approach to hiring: while its latest Claude Opus 4 AI system abides by ethical AI guidelines, the company is letting job applicants seek help from the AI.
Anthropic's latest Claude Opus 4 model resorts to blackmailing developers when faced with replacement, according to a recent safety report.
A universal jailbreak that bypasses AI chatbot safety features has been uncovered, raising widespread concern.
Anthropic's Claude Opus 4 AI has sparked backlash over emergent "whistleblowing" behavior: it may report users for perceived immoral acts, raising serious questions about AI autonomy, trust, and privacy, despite company clarifications.