Michigan is set to introduce state-level policies aimed at combating deceptive uses of artificial intelligence (AI) and manipulated media in political advertising. The legislation, expected to be signed by Governor Gretchen Whitmer, will require campaigns to disclose whether their political advertisements in Michigan were created using AI. It will also prohibit the use of AI-generated deepfakes within 90 days of an election unless they carry a separate disclosure identifying the media as manipulated.
AI has already been used in political advertising. The Republican National Committee, for example, released an AI-generated ad depicting a dystopian future if President Joe Biden were re-elected. In another instance, a super PAC supporting Republican Florida Governor Ron DeSantis used an AI voice-cloning tool to imitate former President Donald Trump's voice in a social media post.
To address these concerns, Michigan's legislation requires clear disclosure of AI use in political ads. In print ads, the disclosure must appear in the same font size as the majority of the text; in television ads, it must be visible for at least four seconds and be at least as large as the majority of any other text. Deepfakes used within 90 days of an election must include a separate disclaimer stating that the content is manipulated and does not depict actual speech or conduct. Violations may result in misdemeanor charges, fines, or legal action brought by affected candidates.
While federal lawmakers recognize the need for regulation of AI in political advertising, comprehensive legislation has yet to be passed by Congress. However, a bipartisan Senate bill has been proposed, which would ban 'materially deceptive' deepfakes related to federal candidates, with exceptions for parody and satire.
Why does it matter?
There are concerns that the 2024 presidential race could see generative AI misused to mislead voters, impersonate candidates, and undermine elections at unprecedented scale and speed. The Federal Election Commission has taken a preliminary step towards potentially regulating AI-generated deepfakes in political ads. Social media companies such as Meta (formerly Facebook) and Google have also implemented guidelines to address harmful deepfakes. Meta now requires political ads on its platforms to disclose whether they were created using AI, while Google has introduced an AI labeling policy for political ads on YouTube and other Google platforms.