The increasing influence of AI, particularly generative AI such as OpenAI's ChatGPT, in shaping elections has raised concerns about the integrity of democracy. With over 70 countries scheduled to hold regional or national elections by the end of 2024, the stakes are high for more than 2 billion people worldwide.
Yet the focus should not rest solely on the content AI generates, but on how people receive, process, and make sense of AI-mediated information on tech platforms. Disinformation spread through social media platforms such as TikTok, Facebook, and Twitter has already become a defining feature of elections worldwide, with vulnerable AI systems easily manipulated to spread propaganda and quash dissent.
The role of AI in amplifying hate-filled backlash and microtargeting specific populations has also come under scrutiny. While the tech industry, led by figures such as OpenAI CEO Sam Altman, calls for regulation and risk mitigation of AI, it is equally important to confront the harms AI already perpetuates and the exploitative practices of tech corporations in developing countries. The industry's extractive and destructive practices in these regions, together with the toll on the mental health and well-being of the data workers who train AI algorithms, underscore the need for a more comprehensive approach to the risks and injustices associated with AI.
Understanding the political and economic conditions from which AI emerges is crucial: its power is centralized, its economics are extractive, and its growth is reckless. Advocates and observers of democracy, particularly in developing countries, must not lose sight of these existing harms, and must insist that principles of fairness, accountability, and inclusivity guide AI development.