AI Companies Commit to Safe Development of the Technology


The leading AI companies have agreed to a new round of commitments to ensure the safe development of the technology. Anthropic and Google, among others, signed on to the declaration, which was announced at the AI Seoul Summit in 2024.

The signatories of the so-called Frontier AI Safety Commitments comprise 16 technology companies from across North America, Europe, Asia, and the Middle East, including Meta, Mistral AI, Microsoft, OpenAI, and Samsung. The UK government described the move as an expansion of the Bletchley Declaration announced at the AI Safety Summit it hosted last year.

As part of the new agreement, the companies are expected to publish safety frameworks detailing how they will assess the risks of their advanced AI models being misused by bad actors, and how they will measure and mitigate those threats. The UK authorities added that the companies will take feedback from trusted actors, including their home governments, as appropriate when defining these risk thresholds and safety standards, which will be published ahead of the AI Action Summit in France next year.

UK Prime Minister Rishi Sunak said the new commitments set a precedent for global standards on AI safety that will unlock the benefits of this transformative technology, adding that it is a world first to have so many leading companies agree to the same commitments on AI safety. Tom Lue, general counsel and head of governance at Google DeepMind, said the agreement will help establish important best practices on frontier AI safety among leading developers.