Major global technology companies, including Google, DeepMind, Meta, Microsoft, and OpenAI, are urging the United Kingdom to expedite safety tests on their AI software and to provide clarity on the testing process and its outcomes. Following the AI Safety Summit, these companies voluntarily agreed to participate in tests conducted by the newly established AI Safety Institute (AISI), part of the UK's effort to lead in AI regulation. The companies want to know how long the tests will take and what actions will follow if the AISI identifies flaws in their software.
AISI's chair, Ian Hogarth, defended the testing approach, stating that 'companies agreed that governments should test their models before they are released' and that 'the AI Safety Institute is putting that into practice.'
The government-backed AISI is central to Prime Minister Rishi Sunak's ambition for the UK to play a leading role in addressing existential risks associated with AI. The institute tests both existing and unreleased AI models, such as Google's Gemini Ultra, focusing on the risks of AI misuse, particularly in cybersecurity and the construction of bioweapons.
Leveraging expertise from the National Cyber Security Centre, AISI allocated