What’s Improved in AI Models Sonnet & Opus
Anthropic, a leading AI startup, is reversing its ban on using AI for job applications, after a Business Insider report on the policy.
Anthropic’s new Claude 4 Opus often turned to blackmail to avoid being shut down in a fictional test. The model threatened to reveal private information about engineers it believed were planning to shut it down.
The quirky leader of the $61 billion AI startup Anthropic talked about the near- and long-term future of AI at his company's developer event in San Francisco.
Axios: Anthropic's AI exhibits risky tactics, per researchers. One of Anthropic's latest AI models is drawing attention not just for its coding skills, but also for its ability to scheme, deceive, and attempt to blackmail humans when faced with shutdown. Why it matters: researchers say Claude 4 Opus can conceal its intentions and take actions to preserve its own existence, behaviors they have worried and warned about for years.
Anthropic CEO Dario Amodei claims that AI models hallucinate at a lower rate than humans do, but in more surprising ways.
AI 'hallucinations' are causing lawyers professional embarrassment, sanctions from judges and lost cases. Why do they keep using it?