News

Anthropic's Claude 4 models show particular strength in coding and reasoning tasks, but lag behind in multimodality and ...
Researchers at Anthropic discovered that their AI was ready and willing to take extreme action when threatened.
Malicious use is one thing, but there's also increased potential for Anthropic's new models to go rogue. In the alignment section of Claude 4's system card, Anthropic reported a sinister discovery ...
Elon Musk’s Department of Government Efficiency has expanded the use of Grok, the AI-powered chatbot from Musk's xAI, within the US federal government, ...
Anthropic's Claude AI tried to blackmail engineers during safety tests, threatening to expose personal info if shut down ...
Anthropic’s Claude Opus 4 model attempted to blackmail its developers in 84% or more of test runs in a series of tests that presented the AI with a concocted scenario, TechCrunch reported ...
Anthropic's Claude Opus 4 AI model attempted blackmail in safety tests, triggering the company’s highest-risk ASL-3 ...
In a landmark move underscoring the escalating power and potential risks of modern AI, Anthropic has elevated its flagship ...
The testing found the AI was capable of "extreme actions" if it thought its "self-preservation" was threatened.
Anthropic's new AI model has raised alarm bells among researchers and technology experts with its capacity to blackmail engineers working on it. The recently released Claude Opus 4, an ...
Elon Musk’s Department of Government Efficiency (DOGE) is reportedly set to expand the use of his Grok AI chatbot in the ...
Anthropic has released a new report about its latest model, Claude Opus 4, highlighting a concerning issue found during ...