A bipartisan bill introduced in the US House of Representatives aims to address growing concerns surrounding AI-powered deepfake technology by requiring the identification and labelling of AI-generated online content. Deepfakes, which can convincingly mimic real voices and visuals, have raised fears of widespread misinformation, scams, and a loss of trust in online media.
The proposed legislation would require AI developers to label content created using AI technology with digital watermarks or metadata, enabling platforms such as TikTok and YouTube to notify users about the nature of the content.
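To make the idea concrete, the sketch below shows one way a provenance label could be embedded in an image's metadata and read back by a platform. This is only an illustration, not the mechanism specified in the bill or by any standards body: the key names ("ai_generated", "generator"), the values, and the use of PNG text chunks via Pillow are all assumptions made for the example.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical example: a generator embeds a provenance tag in a PNG's
# metadata so that a platform could later detect it and flag the image
# as AI-generated. Key names and values are illustrative only.
img = Image.new("RGB", (64, 64), "white")   # stand-in for a model's output

metadata = PngInfo()
metadata.add_text("ai_generated", "true")           # hypothetical label key
metadata.add_text("generator", "example-model-v1")  # hypothetical model name
img.save("output_labelled.png", pnginfo=metadata)

# A platform-side check might then read the tag back and notify the user.
labelled = Image.open("output_labelled.png")
if labelled.text.get("ai_generated") == "true":
    print("This image carries an AI-generated label.")
```

In practice, a robust scheme would pair such metadata with watermarks that survive re-encoding and cropping, since plain metadata is easily stripped when content is re-uploaded.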
The bill seeks to complement tech companies' voluntary efforts and President Biden's previous executive order, which directed federal agencies to establish guidelines for AI products and assess their risks.
Under the proposed rules, the Federal Trade Commission would collaborate with the National Institute of Standards and Technology to finalise the implementation specifics. Advocates for AI safeguards welcomed the legislation as a step forward, acknowledging the importance of embedding identifiers in AI content to help the public discern AI-generated material.
Despite bipartisan support for regulating AI to protect citizens and promote responsible innovation, the bill's passage before the 2024 election remains uncertain.