Google-owned US cybersecurity firm Mandiant has reported a growing trend of AI being used in manipulative online information campaigns. Since 2019, Mandiant has observed AI-generated content, such as fake profile pictures, being deployed in politically driven influence campaigns by groups aligned with the governments of Russia, China, Iran, and other countries.
This comes as generative AI tools such as ChatGPT surge in popularity, making it easy to create convincing fake videos, images, text, and code. Yet while AI use in influence campaigns has grown, its role in other forms of digital intrusion remains limited.
Mandiant's researchers assess that AI still plays only a minor role in threat activity from states such as Russia, Iran, China, and North Korea, and that in practice it has not yet displaced attackers' conventional tools. Even so, they expect AI's involvement in such operations to expand in the future.
Why does it matter?
As AI becomes more sophisticated at mimicking real people and generating convincing content, it becomes harder for users to distinguish authentic information from fabricated material. With AI-driven tools, malicious actors can sway public sentiment and influence electoral outcomes. The European Commission has urged companies deploying generative AI tools such as ChatGPT and Bard to clearly label AI-generated content as a safeguard against the spread of fake news. Countering AI-driven disinformation will require a multidimensional approach combining technological safeguards, policy frameworks, and public awareness.