ChatGPT and more: What AI chatbots mean for the future of cybersecurity

Feb 14, 2023 | Hi-network.com

From relatively simple tasks, such as composing emails, to more complex jobs, including writing essays or compiling code, ChatGPT -- the AI-driven natural language processing tool from OpenAI -- has been generating huge interest since its launch.

It is by no means perfect, of course -- it's known to make mistakes when it misinterprets the information it's learning from -- but many see it, and other AI tools, as the future of how we'll use the internet. 

In Depth: These experts are racing to protect AI from hackers. Time is running out

OpenAI's terms of service for ChatGPT specifically ban the generation of malware, including ransomware, keyloggers, viruses, or "other software intended to impose some level of harm". The terms also ban attempts to create spam, as well as use cases aimed at cybercrime. 

But as with any innovative online technology, there are already people who are experimenting with how they could exploit ChatGPT for murkier ends.  

Following its launch, it wasn't long before cyber criminals were posting threads on underground forums about how ChatGPT could be used to facilitate malicious cyber activity, such as writing phishing emails or helping to compile malware.  

And there are concerns that crooks will attempt to use ChatGPT and other AI tools, such as Google Bard, as part of their efforts. While these AI tools won't revolutionize cyberattacks, they could still help cyber criminals -- even inadvertently -- to conduct malicious campaigns more efficiently.

"I don't think, at least in the short term, that ChatGPT will create completely new types of attacks. The focus will be to make their day-to-day operations more cost-efficient," says Sergey Shykevich, threat intelligence group manager at Check Point, a cybersecurity company. 

Also: What is ChatGPT and why does it matter? Here's everything you need to know

Phishing attacks are the most common component of malicious hacking and fraud campaigns. Whether attackers are sending emails to distribute malware or phishing links, or trying to convince a victim to transfer money, email is the key tool in the initial approach. 

That reliance on email means gangs need a steady stream of clear and usable content. In many cases -- especially with phishing -- the aim of the attacker is to persuade a human to do something, such as to transfer money. Fortunately, many of these phishing attempts are easy to spot as spam right now. But an efficient automated copywriter could make those emails more compelling.

Cybercrime is a global industry, with criminals in all manner of countries sending phishing emails to potential targets around the world. That means language can be a barrier, especially for the more sophisticated spear-phishing campaigns that rely on victims believing they're speaking to a trusted contact -- and someone is unlikely to believe they're speaking to a colleague if the emails are full of uncharacteristic spelling and grammar errors or strange punctuation.  

Also: The scary future of the internet: How the tech of tomorrow will pose even bigger cybersecurity threats

But if AI is exploited correctly, a chatbot could be used to write text for emails in whatever language the attacker wants.

"The big barrier for Russian cyber criminals is language -- English," says Shykevich. "They now hire graduates of English studies in Russian colleges to write for phishing emails and to be in call centres -- and they have to pay money for this." 

He continues: "Something like ChatGPT can save them a lot of money on the creation of a variety of different phishing messages. It can just improve their life. I think that's the path they will look for." 


In theory, there are protections in place that are designed to prevent abuse. For example, ChatGPT requires users to register an email address and also requires a phone number to verify registration. 

And while ChatGPT will refuse to write phishing emails, it's possible to ask it to produce email templates for other kinds of message that cyber attackers commonly exploit. That might include messages claiming that an annual bonus is on offer, that an important software update must be downloaded and installed, or that an attached document needs to be looked at as a matter of urgency.

Also: Email is our greatest productivity tool. That's why phishing is so dangerous to everyone

"Crafting an email to convince someone to click on a link to obtain something like a conference invite -- it's pretty good, and if you're a non-native English speaker this looks really good," says Adam Meyers, senior vice president of intelligence at Crowdstrike, a cybersecurity and threat intelligence provider.  

"You can have it create a nicely formulated, grammatically correct invite that you wouldn't necessarily be able to do if you were not a native English speaker." 

But abusing these tools isn't exclusive to email; criminals could use them to help write copy for any text-based online platform. For attackers running scams, or even advanced cyber-threat groups attempting to conduct espionage campaigns, this could be a useful tool -- especially for creating fake social profiles to reel people in. 

"If you want to generate plausible business speak nonsense for LinkedIn to make it look like you're a real businessperson trying to make connections, ChatGPT is great for that," says Kelly Shortridge, a cybersecurity expert and senior principal product technologist at cloud-computing provider Fastly. 

Various hacking groups attempt to exploit LinkedIn and other social media platforms as tools for conducting cyber-espionage campaigns. But creating fake but legitimate-looking online profiles -- and filling them with posts and messages -- is a time-consuming process.

Shortridge thinks that attackers could use AI tools such as ChatGPT to write convincing content, with the added benefit that it's less labour-intensive than doing the work manually.  

"A lot of those kinds of social-engineering campaigns require a lot of effort because you have to set up those profiles," she says, arguing that AI tools could lower the barrier to entry considerably.

"I'm sure that ChatGPT could write very convincing-sounding thought leadership posts," she says. 

The nature of technological innovation means that, whenever something new emerges, there will always be those who try to exploit it for malicious purposes. And even with the most innovative attempts to prevent abuse, the sneaky nature of cyber criminals and fraudsters means they're likely to find ways of circumventing protections. 

"There's no way to completely eliminate abuse to zero. It's never happened with any system," says Shykevich, who hopes that highlighting potential cybersecurity issues will mean there's more discussion around how to prevent AI chatbots from being exploited for errant purposes. 

"It's a great technology -- but, as always with new technology, there are risks and it's important to discuss them to be aware of them. And I think the more we discuss, the more likely it is OpenAI and similar companies will invest more in reducing abuse," he suggests.  

There's also an upside for cybersecurity in AI chatbots such as ChatGPT. They are particularly good at parsing and understanding code, so there's potential to use them to help defenders understand malware. And because they can also write code, it's possible that, by assisting developers with their projects, these tools could help to produce better, more secure code more quickly, which is good for everyone. 
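
To make that defensive use concrete, here's a minimal sketch (not something the article's sources describe, just an illustration) of how an analyst might ask a model to explain an unfamiliar, possibly malicious script via the OpenAI Python client. The model name and the placeholder snippet are assumptions, not recommendations.

```python
# Hypothetical sketch: asking a chat model to explain a suspicious script.
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY
# environment variable; the model name and snippet are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

suspicious_snippet = "<paste the contents of the suspicious script here>"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are assisting a security analyst. Explain what the "
                "following code appears to do and flag anything that looks "
                "malicious. Do not execute anything."
            ),
        },
        {"role": "user", "content": suspicious_snippet},
    ],
)

print(response.choices[0].message.content)
```

The output is a starting point for triage rather than a verdict; an analyst would still verify the behaviour in a sandbox.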

As Forrester principal analyst Jeff Pollard wrote recently, ChatGPT could provide a massive reduction in the amount of time taken to produce security incident reports. 

"Turning those around faster means more time doing the other stuff -- testing, assessing, investigating, and responding, all of which helps security teams scale," he notes, adding that a bot could suggest next-recommended actions based on available data. 

"If security orchestration, automation, and response is set up correctly to accelerate the retrieval of artifacts, this could accelerate detection and response and help [security operations center] analysts make better decisions," he says. 

So, chatbots might make life harder for some in cybersecurity, but there might be silver linings, too.

We contacted OpenAI for comment, but didn't receive a response. However, when we asked ChatGPT what rules it has in place to prevent it being abused for phishing, we got the following text.

"It's important to note that while AI language models like ChatGPT can generate text that is similar to phishing emails, they cannot perform malicious actions on their own and require the intent and actions of a user to cause harm. As such, it is important for users to exercise caution and good judgement when using AI technology, and to be vigilant in protecting against phishing and other malicious activities."

MORE ON CYBERSECURITY

  • Russian hackers are trying to break into ChatGPT, says Check Point
  • Cybersecurity, cloud and coding: Why these three skills will lead demand in 2023
  • The metaverse is coming, and the security threats have already arrived
  • These are the cybersecurity threats of tomorrow that you should be thinking about today
  • The next big security threat is staring us in the face. Tackling it is going to be tough

Tags: technology, security

Copyright © 2014-2024 Hi-Network.com | HAILIAN TECHNOLOGY CO., LIMITED | All Rights Reserved.