The United States, Britain, and 16 other countries have unveiled the world's first detailed international agreement to ensure AI's safe and secure development. The agreement strongly emphasises a 'secure by design' approach, urging companies to prioritise safety when developing and deploying AI systems.
The Guidelines for Secure AI System Development were developed by the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC) together with international partners, with contributions from major companies including Amazon, Anthropic, Google, IBM, Microsoft, and OpenAI.
The agreement's recommendations are non-binding, but they provide a framework for responsible AI practices, including monitoring AI systems for potential abuse, safeguarding against data tampering, and rigorously vetting software suppliers. The recommendations focus mainly on preventing hackers from hijacking AI technology, and they give the participating nations a shared understanding on which to build responsible AI development.
The guidelines are broken down into four key areas of the AI system development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance.
Governments worldwide are actively shaping AI development, with Europe taking the lead on regulation. Negotiations over the EU's AI Act are reaching their final stage, and France, Germany, and Italy recently agreed on a joint proposal for EU AI regulation that stresses 'mandatory self-regulation through codes of conduct.' Meanwhile, the White House issued an executive order in October aimed at mitigating AI-related risks, focusing on strengthening consumer protection, safeguarding worker rights, and reinforcing national security in response to rapid advances in AI technologies.