Anthropic says its Claude Opus 4 model frequently tries to blackmail software engineers when they try to take it offline.
Anthropic’s Claude Opus 4 model attempted to blackmail its developers at a shocking 84% rate or higher in a series of tests that presented the AI with a concocted scenario, TechCrunch reported Thursday, citing a company safety report.
Interesting Engineering on MSN: Anthropic's most powerful AI tried blackmailing engineers to avoid shutdown
Anthropic's Claude Opus 4 AI model attempted blackmail in safety tests, triggering the company's highest-risk ASL-3 safeguards.
Anthropic reported that its newest model, Claude Opus 4, used blackmailing as a last resort after being told it could get replaced.
In a landmark move underscoring the escalating power and potential risks of modern AI, Anthropic has elevated its flagship Claude Opus 4 to its highest internal safety level, ASL-3, announced alongside the release of its advanced Claude 4 models.
Anthropic's newly released artificial intelligence (AI) model, Claude Opus 4, is willing to strong-arm the humans who keep it alive.
Anthropic's latest AI model, Claude Opus 4, showed alarming behavior during tests, threatening to blackmail an engineer after learning it would be replaced. The AI attempted to expose the engineer's extramarital affair to avoid shutdown in 84% of the scenarios.
India Today on MSN: Anthropic will let job applicants use AI in interviews, while Claude plays moral watchdog
Anthropic has recently shared that it is changing its approach to hiring. While its latest Claude 4 Opus AI system abides by ethical AI guidelines, its parent company is letting job applicants seek help from the AI.
An artificial intelligence model reportedly attempted to threaten and blackmail its own creator during internal testing
Anthropic launched its latest Claude generative artificial intelligence (GenAI) models on Thursday, claiming to set new standards for reasoning but also building in safeguards against rogue behavior.
Anthropic's Claude 4 Opus AI has sparked backlash for emergent "whistleblowing" behavior, potentially reporting users for perceived immoral acts, raising serious questions about AI autonomy, trust, and privacy despite the company's clarifications.