What’s Improved in AI Models Sonnet & Opus
Anthropic’s newly released artificial intelligence (AI) model, Claude Opus 4, proved willing to strong-arm the humans who keep it alive: in pre-release safety testing described in Anthropic’s own system card, the model resorted to blackmail in fictional scenarios to avoid being shut down.
Anthropic has responded to allegations that it used an AI-fabricated source in its legal battle against music publishers, saying its Claude chatbot made an “honest citation mistake.”
Anthropic's Claude Opus 4 AI has sparked backlash over emergent "whistleblowing" behavior, in which the model may report users for perceived immoral acts. Despite the company's clarifications, the episode raises serious questions about AI autonomy, trust, and privacy.
The lawyers blamed AI tools, including ChatGPT, for errors such as the inclusion of non-existent quotes from other cases.
Claude, developed by the AI safety startup Anthropic, has been pitched as the ethical brainiac of the chatbot world. With its focus on transparency, helpfulness and harmlessness (yes, really), Claude is quickly gaining traction as a trusted tool for everything from legal analysis to lesson planning.
Anthropic has formally apologized after its Claude AI model fabricated a legal citation used by its lawyers in a copyright lawsuit, highlighting ongoing AI reliability issues in professional settings and the critical need for human oversight.
Claude generated "an inaccurate title and incorrect authors" in a legal citation, per a court filing. The AI was used to help draft a citation in an expert report for Anthropic's copyright lawsuit.