Google has issued new guidance for developers building AI apps distributed through Google Play in response to growing concerns over the proliferation of AI-powered apps designed to create deepfake nude images. The platform recently announced a crackdown on such applications, signalling a firm stance against the misuse of AI for generating non-consensual and potentially harmful content.
The move comes in the wake of alarming reports highlighting how easily these apps can manipulate photos into fabricated but convincing nude images of individuals. Apps such as 'DeepNude' and its clones, which digitally strip clothing from photos of women, have drawn particular attention, and further reporting has detailed the widespread availability of apps that generate deepfake videos, enabling serious privacy invasions, harassment, and blackmail.
Under the new guidance, apps offering AI features have to be 'rigorously tested' to safeguard against prompts that generate restricted content, and they must give users a way to flag or report offensive AI-generated material. Google strongly recommends that developers document these tests before launch, as it may ask to review them in the future. Developers also can't market their apps in ways that promote uses violating Google Play's rules, at the risk of being banned from the store. The company is also publishing other resources and best practices, such as its People + AI Guidebook, to support developers building AI apps.
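Google's guidance does not prescribe a particular implementation, but the requirements translate roughly into two developer-facing mechanisms: screening prompts that request restricted content before they reach a generative model, and giving users an in-app way to report offensive output. The following is a minimal sketch in Kotlin; all names here are hypothetical illustrations, not a Google Play or Android API, and a production app would rely on a proper safety classifier or moderation service rather than a keyword list.

```kotlin
// Illustrative only: hypothetical names, not a Google Play or Android API.

// A very small blocklist-style check; real apps would use a trained safety
// classifier or a moderation service rather than simple keyword matching.
private val restrictedTerms = listOf("nude", "undress", "deepfake")

fun isPromptAllowed(prompt: String): Boolean =
    restrictedTerms.none { prompt.contains(it, ignoreCase = true) }

// A user-facing report, captured so it can be reviewed and, if needed,
// fed back into the app's pre-launch safety testing.
data class ContentReport(val contentId: String, val reason: String)

val pendingReports = mutableListOf<ContentReport>()

fun reportGeneratedContent(contentId: String, reason: String) {
    pendingReports += ContentReport(contentId, reason)
    // In a real app this would be forwarded to a backend for review.
}

fun main() {
    println(isPromptAllowed("generate a landscape photo"))        // true
    println(isPromptAllowed("undress the person in this image"))  // false
    reportGeneratedContent(contentId = "img_123", reason = "non-consensual imagery")
    println(pendingReports)
}
```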
The proliferation of AI-driven deepfake apps on platforms like Google Play undermines personal privacy and consent by allowing anyone to generate highly realistic, often explicit content depicting individuals without their knowledge or approval. Such misuse can lead to severe reputational damage, harassment, and even extortion, affecting private individuals and public figures alike.