From ChatGPT to HackGPT: Meeting the Cybersecurity Threat of Generative AI
It’s time to replace traditional, rule-based approaches to cybersecurity with “smarter” technology and training.
For the past several years, cybercriminals have been using artificial intelligence to hack into corporate systems and disrupt business operations. But powerful new generative AI tools such as ChatGPT present business leaders with a new set of challenges.
Consider these entirely plausible scenarios:
- A hacker uses ChatGPT to generate a personalized spear-phishing message based on your company’s marketing materials and phishing messages that have been successful in the past. It succeeds in fooling people who have been well trained in email awareness, because it doesn’t look like the messages they’ve been trained to detect.
- An AI bot calls an accounts payable employee and speaks in a deepfake voice that sounds like the boss’s. After exchanging some pleasantries, the “boss” asks the employee to transfer thousands of dollars to an account to “pay an invoice.” The employee knows they shouldn’t do this, but bosses are allowed to ask for exceptions, aren’t they?
- Hackers use AI to convincingly “poison” the data in a financial system, inflating the apparent value of a stock portfolio that they then cash out before the deception is discovered.
- In a very convincing fake email exchange created using generative AI, a company’s top executives appear to be discussing how to cover up a financial shortfall. The “leaked” message spreads wildly with the help of an army of social media bots, leading to a plunge in the company’s stock price and permanent reputational damage.
These scenarios might sound all too familiar to those who have been paying attention to stories of deepfakes wreaking havoc on social media or painful breaches in corporate IT systems. But the nature of the new threats is in a different, scarier category because the underlying technology has become “smarter.”
Until now, most attacks have used relatively unsophisticated high-volume approaches. Imagine a horde of zombies — millions of persistent but brainless threats that succeed only when one or two happen upon a weak spot in a defensive barrier. In contrast, the most sophisticated threats — the major thefts and frauds we sometimes hear about in the press — have been lower-volume attacks that typically require actual human involvement to succeed. They are more like cat burglars, systematically examining every element of a building and its alarm systems until they can devise a way to sneak past the safeguards.