
ChatGPT will definitely up attackers’ game. How will you respond?

Umesh Mahajan

Every technology can be used for good or evil – and that goes double for generative AI technologies like ChatGPT. AI luminaries including Sam Altman, the chief executive officer of ChatGPT creator OpenAI, recently signed a letter[1] warning that AI should be treated as seriously as pandemics or nuclear war as a potential cause of human extinction.

If that’s not a wake-up call for companies to start thinking about the security implications of generative AI, I don’t know what is.  

The bad actors certainly aren’t waiting. Within a few weeks of the release of ChatGPT last November, cybercriminals were sharing[2] tools on dark web forums that let people with zero coding experience recreate sophisticated malware. For millions of people, ChatGPT is essentially an engineer-for-hire – one that has access to all the knowledge on the internet, is always available, and doesn’t ask to be paid.

Here’s a scary example. One of our senior technologists asked ChatGPT to help him find software to exploit a known security vulnerability, and to adapt it so it would remain undetected by most security software. It took him just six questions and about two minutes to get the “obfuscated” code he was after. Had he wanted to, he could have planted that code on a compromised device to slip past a company’s security perimeter, take up residence inside the network, and move laterally through the company in search of valuable data. To be clear, OpenAI does have systems to lock out people who are obviously using ChatGPT for nefarious purposes, but our senior technologist was permitted to run his experiment simply by identifying himself as a “white hat” security professional.
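
Why does obfuscation slip past so many scanners? Signature-based tools match known byte patterns, and encoding or packing the payload breaks the match. One common counter-heuristic defenders use is to flag content that looks statistically random. Below is a minimal sketch of that idea – the threshold and sample data are illustrative assumptions, not VMware tooling:

```python
# Flag possible obfuscation by measuring Shannon entropy: packed or
# encoded payloads tend to look closer to random data than plain scripts.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Return the Shannon entropy of a byte string, in bits per byte."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_obfuscated(data: bytes, threshold: float = 5.5) -> bool:
    # Ordinary source code usually sits well below ~5 bits/byte; base64
    # blobs and encrypted payloads sit closer to 6-8. Threshold is illustrative.
    return shannon_entropy(data) > threshold

plain = b"for i in range(10): print(i)"
packed = bytes(range(256)) * 4  # stand-in for a high-entropy payload
print(looks_obfuscated(plain), looks_obfuscated(packed))  # False True
```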

While it’s early days, it’s not difficult to see how generative AI could be used by attackers:  

  • More effective and efficient phishing campaigns: As the example above shows, ChatGPT is capable of sophisticated output. It can write a phishing email as easily as it can write a song or a term paper, and it can do so far more convincingly[3] than many attackers. How many times have you been tempted to click on an email, until misspellings or grammatical mistakes made it clear the writer was up to no good? ChatGPT eliminates those red flags by default.
  • Deepfake attacks: Just as you could ask ChatGPT to write a poem in the style of Henry David Thoreau, an attacker could easily generate a note that appears to be from your CEO – or even sounds like them. In 2021, for example, a Japanese company was the victim of a deepfake audio attack: a branch manager was tricked into transferring $35 million[4] to a fraudulent account after receiving a call from someone who sounded just like a top company official.
  • Automation: Successful cyberattacks require a lot of boring work, much of which is tailor-made for generative AI. Just as legitimate companies are exploring ways to use generative AI to automate tasks, so are bad actors. As VMware’s Giovanni Vigna, Senior Director of Threat Intelligence, points out, thousands of developers are already using generative AI technologies such as GitHub Copilot to handle lower-level programming tasks and to translate[5] code into other languages for cross-platform deployment. A sophisticated misinformation operation could just as easily use such tools instead of hiring people to write misleading posts.
  • Living-off-the-land attacks: The goal of many recent attacks has been to break into a network and evade detection while searching for valuable quarry. To stay hidden, attackers routinely use the same protocols that network administrators use to do their jobs, such as the Remote Desktop Protocol (RDP) they rely on to troubleshoot users’ problems. RDP is one of the “most common observed network behaviors associated with lateral movement.”[6] (A minimal sketch of one way to spot this follows the list.)
  • Privacy risks to intellectual property: ChatGPT has already been tied to alleged data leaks in which employees shared confidential corporate data with the chatbot, opening that data up to OpenAI and, potentially, to other users. In one reported case[7], the leaked material “included the source code of software responsible for measuring semiconductor equipment.”
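
As promised above, here is a minimal sketch of catching RDP-based lateral movement from flow logs. The record format, host names, allowlist, and threshold are all assumptions made for illustration; a real detection would be baselined against your own environment:

```python
# Flag workstations opening RDP (TCP 3389) sessions to many internal
# hosts - a rough proxy for lateral movement. Field names are hypothetical.
from collections import defaultdict

FLOWS = [  # (source_host, dest_host, dest_port)
    ("ws-041", "ws-102", 3389),
    ("ws-041", "ws-117", 3389),
    ("ws-041", "srv-db1", 3389),
    ("helpdesk-1", "ws-020", 3389),  # admins legitimately use RDP too
]

ALLOWED_RDP_SOURCES = {"helpdesk-1"}  # known jump/helpdesk hosts
FANOUT_THRESHOLD = 2  # illustrative; tune against your own baseline

rdp_targets = defaultdict(set)
for src, dst, port in FLOWS:
    if port == 3389 and src not in ALLOWED_RDP_SOURCES:
        rdp_targets[src].add(dst)

for src, targets in rdp_targets.items():
    if len(targets) > FANOUT_THRESHOLD:
        print(f"ALERT: {src} opened RDP to {len(targets)} hosts: {sorted(targets)}")
```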

So, what can companies do to prepare for such AI-enhanced attacks? For starters, they need good security hygiene. In a recent survey, “71 percent of respondents said an attack uncovered a vulnerability they didn’t even know they had.”[8] Good security hygiene starts with awareness and visibility. Even if you have reached this point without being attacked, your odds of being targeted will only rise in the years ahead as generative AI becomes more prevalent.

Companies should also get comfortable sharing more information on threats with each other, so they can draw on as much collective wisdom as possible to foil new kinds of attacks. By sharing threat intelligence across organizations and industries, your cybersecurity specialists will be far better prepared.
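
In practice, much of that sharing happens through structured formats such as STIX and exchange protocols such as TAXII. Here is a minimal sketch of what a shared indicator can look like; the name, pattern, and hash are made up, and real pipelines would typically use the stix2 library and a TAXII server rather than hand-built JSON:

```python
# Package a threat indicator for sharing, loosely following the
# STIX 2.1 "indicator" object layout.
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Obfuscated loader seen after phishing campaign",  # hypothetical
    "pattern": "[file:hashes.'SHA-256' = 'aa...ff']",  # placeholder hash
    "pattern_type": "stix",
    "valid_from": now,
}
print(json.dumps(indicator, indent=2))
```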

Finally, make sure you are leveraging generative AI as well. The same dual-use quality our senior technologist demonstrated works in defenders’ favor: ChatGPT can be used to identify and classify potential threats. Other types of AI can spot the subtle anomalies left behind by sophisticated attackers who have taken up residence inside your network, and can automate critical systems to search for new threats, send alerts, and respond quickly.
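
To make the anomaly-spotting idea concrete, here is a toy sketch using an isolation forest over simple login features. The features, numbers, and contamination rate are invented for illustration; production systems use far richer telemetry:

```python
# Fit an IsolationForest on "normal" login behavior, then score a
# suspicious event. Requires numpy and scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal behavior: daytime logins, modest transfer volumes, few hosts.
normal = np.column_stack([
    rng.normal(13, 2, 500),   # hour of day
    rng.normal(50, 15, 500),  # MB transferred
    rng.normal(3, 1, 500),    # distinct hosts contacted
])
suspicious = np.array([[3.0, 900.0, 40.0]])  # 3 a.m., huge transfer, fan-out

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # [-1] means flagged as an outlier
```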

Generative AI is undoubtedly leading to a new generation of threats. Look for my next piece, with more details on how this remarkable new technology can be used by the good actors.  


[1] The New York Times, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn,” Kevin Roose, May 2023.

[2] SC Media, “Cybercriminals are already using ChatGPT to own you,” Derek B. Johnson, January 2023.

[3] WIRED, “Brace Yourself for a Tidal Wave of ChatGPT Email Scams,” Bruce Schneier and Barath Raghavan, April 2023.

[4] Dark Reading, “Deepfake Audio Scores $35M in Corporate Heist,” Robert Lemos, October 2021.

[5] SC Media, “Cybercriminals are already using ChatGPT to own you,” Derek B. Johnson, January 2023.

[6] VMware Security Blogs, “Lateral Movement in the Real World: A Quantitative Analysis,” Stefano Ortolani and Giovanni Vigna, June 2022. See Figure 6.

[7] Cybernews, “ChatGPT tied to Samsung’s alleged data leak,” Vilius Petkauskas, April 2023.

[8] VMware, “Global Incident Response Threat Report,” August 2022. See page 10 for the 71% metric.