For two years the leading American AI laboratories have spoken publicly about cooperation while competing ferociously in private. According to four people briefed on the discussions, that posture is about to change.
The agreement under negotiation would allow OpenAI, Anthropic, and a third, as-yet-undisclosed laboratory to exchange a narrow but consequential category of data: the results of internal safety evaluations conducted on frontier models before deployment.
The framework, drafted in part by former staff of the U.S. AI Safety Institute, draws on the Seoul commitments and on the EU AI Act's general-purpose-model code of practice. If signed, it would be the most significant act of voluntary coordination among frontier labs in three years.
The industry has decided not to wait for Washington.
Markets reacted favourably. Nvidia closed up 1.4 per cent in after-hours trading. The signing, if it happens, is expected at the Paris AI Action follow-up summit in June.