Canada's top cybersecurity official, Sami Khoury, has disclosed that cybercriminals are exploiting AI for hacking and misinformation campaigns: using it to create malicious software, craft convincing phishing emails, and spread disinformation online. Although he offered no specific evidence, the disclosure underscores the urgent concern that AI technology is already being wielded by rogue actors. While Khoury acknowledges that the use of AI to draft malicious code is still in its early stages, he warns that AI models are evolving so quickly that it is difficult to monitor and curb their malicious potential before it reaches ordinary users.
Reports from various cybersecurity watchdogs have previously warned about the risks posed by AI, particularly large language models (LLMs) such as OpenAI's ChatGPT, which can generate authentic-sounding dialogue and documents. Suspected real-world instances of AI-generated malicious content have already been observed, including a persuasive email, apparently drafted with the help of an LLM, that sought to induce a cash transfer. The fast-evolving nature of AI models makes it difficult for experts to identify potential malicious applications before they are unleashed.