The UK and the US Establish Global AI Cybersecurity Guidelines


The UK and the US cybersecurity authorities have collaborated on a set of global safety guidelines for the development of AI. The measures are endorsed by more than a dozen international agencies and are aimed at curbing threats linked to the technology.

The UK’s National Cyber Security Centre and the US Cybersecurity and Infrastructure Security Agency stated that the protocol was led by the UK and is the first of its kind to be agreed at a global level. In total, 18 countries have endorsed the guidelines, which were developed in cooperation with 21 international ministries.

At its core, the AI safety protocol is intended to help developers create systems that are secure by design, assessing the end-to-end security of AI from the development stage through deployment and updates. The government said this will help developers ensure that cyber security is both an essential precondition of AI system safety and integral to the development process from the outset and throughout.

The guidelines are split into four key areas that address security at the design, development, deployment, and operation and maintenance stages. The UK’s cyber arm said it would prioritize transparency and accountability to secure AI infrastructure and, in turn, make the tools safer for customers.

“When the pace of development is high, as is the case with AI, security can often be a secondary consideration. Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system,” the body said in a statement.