Almost Human: The Threat Of AI-Powered Phishing Attacks

Artificial Intelligence (AI) is undoubtedly a hot topic and has been hailed as a game-changer in many fields, including cybersecurity. There is much buzz about it, from the good to the bad and everything in between. Even Elon Musk and other tech leaders have advocated that AI development be curbed, or at least slowed. While AI holds untold scintillating and amazing implications for society, plenty of bad and strange things could also happen. This is something we discussed in detail when the Metaverse was all the rage, but all of those technological scenarios pale in comparison to what happens when the plainest, simplest of threats winds up in the wrong hands.

Think Like a Hacker

As with any technological advancement, with AI there is always the potential for malicious misuse. To understand the impact of AI on cybersecurity, we need to first think like a hacker. Hackers like to use tools and techniques that are simple, easy, effective, and cheap. AI is all those things, especially when applied in fundamental ways. Thus, we can use our knowledge of the hacker mindset to get ahead of potential threats.

Aside from nation-state-sponsored groups and the most sophisticated hacking syndicates, the commotion over cybercriminals using AI in advanced technological ways misses the bigger, more threatening point. AI is being used to mimic humans in order to fool humans. AI is targeting YOU, and it can do so when you:

  • Click on a believable email
  • Pick up your phone or respond to SMS
  • Respond in chat
  • Visit a believable website
  • Answer a suspicious phone call

Just as AI is making everyday tasks easier, it’s making attacks easier for cybercriminals. They’re using the technology to write believable phishing emails with correct spelling and grammar, and to incorporate data collected about the target company, its executives, and other public information. AI also powers rapid, intelligent responses to messages, and it can quickly create payload-laden websites or documents that look legitimate to an end user. AI is even used to respond in real time with a deepfaked voice, cloned from recordings of real voices harvested through unsolicited spam calls.

Just the Beginning

Many of the hacks on the rise today are driven by AI, but in a low-tech way. AI tools are now openly available to everyday people, but they have been in use in dark corners of the internet for a while, often in surprisingly simple and frightening ways. The surging success rates of phishing campaigns, man-in-the-middle (MITM) attacks, and ransomware will prove to be related to the arrival of AI and the surge in its adoption.

The use of AI in phishing attacks also has implications for the broader cybersecurity landscape. As cybercriminals continue to develop and refine their AI-powered phishing techniques, it could lead to an “arms race” between cybercriminals and cybersecurity professionals. This could, in turn, drive increased demand for AI-powered cybersecurity solutions that may be both costly and complex to implement.

Cybersecurity Response

To protect against AI-powered phishing attacks, individuals and businesses can take several steps including:

  • Educating about the risks of phishing attacks and how to identify them
  • Implementing strong authentication protocols, such as multi-factor authentication
  • Using anti-phishing tools to detect and prevent phishing attacks
  • Implementing AI-powered cybersecurity solutions to detect and prevent AI-powered phishing attacks
  • Partnering with a reputable Managed Security Services Provider (MSSP) who has the breadth, reach, and technology to counter these attacks
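The anti-phishing tooling mentioned above often starts with simple heuristics before any AI is involved. As a minimal illustrative sketch (the allow-list, keyword set, and function name here are hypothetical, not a real product's API), a message can be checked against a few common phishing indicators:

```python
import re
from urllib.parse import urlparse

# Hypothetical allow-list of domains the organization actually uses.
TRUSTED_DOMAINS = {"example.com", "mail.example.com"}

# Words that frequently signal manufactured urgency in phishing lures.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "password"}

def phishing_indicators(sender: str, subject: str, urls: list) -> list:
    """Return human-readable warnings for a suspicious message."""
    warnings = []

    # 1. Sender domain not on the allow-list (catches lookalike domains).
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        warnings.append(f"sender domain '{domain}' is not on the allow-list")

    # 2. Urgency keywords in the subject line.
    hits = URGENCY_WORDS & set(re.findall(r"[a-z]+", subject.lower()))
    if hits:
        warnings.append(f"urgency keywords in subject: {sorted(hits)}")

    # 3. Links that point outside trusted domains.
    for url in urls:
        host = (urlparse(url).hostname or "").lower()
        if host not in TRUSTED_DOMAINS:
            warnings.append(f"link points off-domain: {host}")

    return warnings
```

Real anti-phishing products layer far more signals on top of this (SPF/DKIM/DMARC results, reputation feeds, ML classifiers), but even trivial checks like these catch the lookalike-domain tricks that AI-generated lures rely on.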

AI is becoming ubiquitous in homes, cars, TVs, and even space. The unfolding future of AI and sentient technologies is an exciting topic that has long captured the imagination. However, the dark side of AI looms when it is turned against people. This is the beginning of an arms race, although there is no AI that can be plugged into people (yet). Users, beware.

This article was originally published in Forbes. Please follow me on LinkedIn.