The UK should set up a system for logging misuse and malfunctions in AI so that ministers are kept informed of alarming incidents, according to a report by the Centre for Long-Term Resilience (CLTR). The think tank, which focuses on responses to unforeseen crises, urges the next government to create a central hub for recording AI-related episodes across the country, modelled on the Air Accidents Investigation Branch.
CLTR notes that news outlets have recorded 10,000 AI 'safety incidents' since 2014, logged in a database compiled by the Organisation for Economic Co-operation and Development (OECD). The harms involved range from physical injury to economic, reputational, and psychological damage. Examples include a deepfake of the Labour leader, Keir Starmer, and Google's Gemini model ahistorically depicting World War II German soldiers. The report's author, Tommy Shaffer Shane, stresses that incident reporting has been transformative in aviation and medicine but is largely absent from AI regulation.
The think tank recommends that the UK government adopt a robust incident reporting regime, modelled on the safety practices of industries such as aviation and medicine, to manage AI risks effectively; without a dedicated AI regulator, it warns, many incidents could otherwise go unnoticed. Labour has pledged to introduce binding regulation for companies developing the most powerful AI models, and CLTR argues that an incident reporting regime would help the government anticipate AI-related problems and respond to them quickly.
Additionally, CLTR advises creating a pilot AI incident database, which could collect episodes from existing bodies such as the Air Accidents Investigation Branch and the Information Commissioner's Office. The think tank also calls on UK regulators to identify gaps in AI incident reporting and to build on the Algorithmic Transparency Recording Standard already in place. An effective incident reporting system, it argues, would help the Department for Science, Innovation and Technology (DSIT) stay informed about novel AI-related harms and address them proactively.