Meta's integration of AI across its platforms, including Facebook, Instagram, and WhatsApp, has drawn fresh scrutiny after Wired reported a proliferation of explicit ads for AI 'girlfriends' across those same services. The investigation found tens of thousands of such ads in violation of Meta's adult content advertising policy, which prohibits nudity, sexually suggestive content, and the promotion of sexual services. Despite the policy, the ads continue to circulate, drawing criticism from sex workers, educators, and LGBTQ individuals who feel unfairly targeted by Meta's content policies.
For years, users have criticised Meta for what they perceive as discriminatory enforcement of its community guidelines. LGBTQ and sex educator accounts have reported being shadowbanned on Instagram, while WhatsApp has banned accounts associated with sex work. Meta's advertising approval process has also come under scrutiny, with reports of gender-biased rejections of ads for products such as sex toys and period care items. Yet explicit AI 'girlfriend' ads have evaded these same enforcement mechanisms, exposing a gap in the company's content moderation efforts.
When approached for comment, Meta acknowledged the presence of these ads and said it was committed to removing them promptly. A Meta spokesperson emphasised the company's ongoing efforts to improve its systems for detecting and removing ads that violate its policies. Even so, Wired found that thousands of the ads remained active days after the initial inquiry.