The Biden-Harris administration announced key AI measures, including risk assessments in vital industries, that met the requirements set for ninety days after Biden's AI executive order.
Three months after President Biden's highly anticipated executive order, the White House published a fact sheet outlining its key actions on AI. On 30 October 2023, the US President issued an executive order on 'Safe, Secure, and Trustworthy AI' to promote a coordinated, federal government-wide approach to the safe and responsible development of AI.
The executive order directed comprehensive action to 'strengthen AI safety and security, protect Americans' privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more.'
The fact sheet states that these updates mark 'substantial progress' toward the president's mandate to 'protect Americans from AI's risks while seizing its enormous promise.'
The updates include using the Defense Production Act to require AI developers to report 'vital information,' including AI safety test results, to the Department of Commerce.
The fact sheet also cited as an accomplishment a proposed draft rule that would require US-based cloud computing firms to disclose whether they provide computing resources used to train foreign AI models. Additionally, nine federal agencies completed risk assessments covering AI's use in every critical infrastructure sector.
The fact sheet also emphasized efforts over the past three months to 'attract and train' AI talent. Examples include a partnership between Nvidia and the National Science Foundation (NSF) to advance AI, the launch of the 'AI Talent Surge' to encourage the hiring of AI professionals across the federal government, and the creation of an AI task force at the Department of Health and Human Services to help develop policies for AI innovation in healthcare.