The daily record of artificial intelligence
Exclusive

OpenAI and Anthropic in advanced talks to share safety-evaluation data

A first-of-its-kind framework would let rival laboratories cross-test frontier models on biological-risk and cyber benchmarks, marking a quiet shift toward industry self-regulation.

Sunday, May 17, 2026 · 5 min

OpenAI and Anthropic compute clusters share little except a growing concern about safety.

For two years the leading American AI laboratories have spoken publicly about cooperation while competing ferociously in private. According to four people briefed on the discussions, that posture is about to change.

The agreement under negotiation would allow OpenAI, Anthropic, and a third laboratory not yet disclosed to exchange a narrow but consequential category of data: the results of internal safety evaluations conducted on frontier models before deployment.

The framework, drafted in part by former staff of the U.S. AI Safety Institute, draws on the Seoul commitments and on the EU AI Act's general-purpose-model code of practice. If signed, it would be the most significant act of voluntary coordination among frontier labs in three years.

The industry has decided not to wait for Washington.

Markets reacted favourably: Nvidia rose 1.4 per cent in after-hours trading. The signing, if it happens, is expected at the Paris AI Action follow-up summit in June.

— End —