How Much Should AI Scare Cybersecurity?

A wolf yawning

In a previous post we looked at how much Pythia Cyber recommends that you trust AI to help you. In this post, we look at how worried Pythia Cyber thinks you should be that AI will hurt you.

It is tempting to shrug off the threat of AI-powered cyber attacks because that threat seems overblown. Certainly the threat is overexposed: it seems to get more coverage in the media and social media than it could possibly deserve. It is equally tempting to let the threat of AI-powered cyber attacks derail your short-term plans and rewrite your priorities. Neither of these extreme reactions is appropriate.

At the risk of repeating ourselves, Cybersecurity should be a form of Risk Management. This means following the usual methodology: determine what might happen, assess how painful each of these eventualities would be, assess how likely each would be, and then, based on your best guesses about likelihood and severity, allocate your limited resources for maximum effectiveness within your budget.

Boring, yes. Likely to attract clicks and attention, no. Yet another reason not to get your business advice from social media or mass media.
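To make that methodology concrete, here is a minimal sketch of the ranking exercise we have in mind. It is illustrative only: the threat categories, likelihoods, and impact figures are assumptions made up for the example, not recommendations.

```python
# A minimal risk-ranking sketch. The threats, likelihoods, and impact
# figures are illustrative placeholders -- substitute estimates from
# your own incident history and business knowledge.

risks = [
    # (threat, annual likelihood 0-1, estimated impact in dollars)
    ("Spear phishing",          0.90, 250_000),
    ("Social engineering",      0.50, 400_000),
    ("Exploited vulnerability", 0.30, 750_000),
    ("Automated malware",       0.20, 150_000),
    ("AI poisoning",            0.05, 500_000),
]

# Expected annual loss = likelihood x impact; spend your limited
# budget on the biggest expected losses first.
for threat, likelihood, impact in sorted(
        risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{threat:<25} expected loss: ${likelihood * impact:,.0f}")
```

A spreadsheet does the job just as well; the point is writing your guesses down so you can argue about them and revise them.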

How Can AI Power Cyber Attacks?

  1. Making phishing emails harder to spot (spear phishing).
  2. Supporting social engineering attacks.
  3. Automating the hunt for vulnerabilities.
  4. Automating coding of malicious apps.
  5. Automating the poisoning of AIs you use to help you.

All of these attacks are perversions of attempts to use AI productively.

Do you like having a little button in your word processor or email client that offers to have AI help you rewrite your memo or email? Sadly, the bad guys like having that same facility to help them generate phishing emails that seem more natural.

Do you like having the ability to "fix" your photos or create amusing images of your Mom as an elf or your child as a fairy? Sadly, the bad guys like having that facility to help them generate images that give you the impression that they know your parents.

Do you like having ChatGPT summarize piles of boring documents for you? Sadly, the bad guys like using that same ability to comb through technical documents and bug reports looking for vulnerabilities in software.

Do you like having your programming team's productivity boosted by AI writing the boring parts of apps? Sadly, the bad guys like this too, for the same reason.

Do you like having an AI-powered agent that you trained on your internal documents so that simple questions from your clients and customers can be answered quickly and cheaply? Sadly, the bad guys like to feed your AI poisoned documents to either embarrass you or hurt your reputation.

How Painful Would These Attacks Be?

Only you know for sure. But history suggests that getting phished can be very painful indeed. Getting hit by social engineering can also be very painful. Both of these attacks rely on human behavior to work, which is a vulnerability many Cybersecurity professionals struggle to minimize.

Getting hacked via a system vulnerability is also potentially quite painful. It is very hard to say how big a deal a deluge of automated malware would be: probably not that painful at the moment (late 2025) but very likely to get more and more painful over the next 12-18 months.

The automated poisoning of your AI is the hardest to assess for painfulness because too few of us depend on homegrown AIs at this point, and those who do aren't telling when they get hit.

How Likely Are These Attacks?

Phishing is so likely that I would advise expecting it. Social engineering attacks are also very likely for some sectors (finance, for example), but not at all likely for others (some B2B). You already know whether or not your business is likely to have this problem. If you do, AI is making it worse even as you read this.

System vulnerabilities are something you should constantly guard against anyway, as is malware. AI is not a game-changer here, but rather a multiplier. Think of sugary snacks and dental hygiene: you need dental hygiene regardless, and sugary snacks only increase an already great need.

If you depend on a homegrown AI, I really hope that you are already treating AI poisoning as a certainty, because if you are not, you are carrying a huge liability.

Plan Appropriately

You can't ignore the fact that AI will make cyber attacks more effective, more frequent, or both. You can't run around like a headless chicken either. What you can do is review your Cybersecurity Risk Profile and update it with an eye to what AI will make worse. Ideally, you already have such a profile and merely need to adjust it. If not, starting with what AI might do does not make sense. Start with what is already happening and how you are responding, and then figure out what adjustments you will make and why.
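If your profile looks anything like the earlier sketch, the AI adjustment can be as simple as revisiting each likelihood. The multipliers below are hypothetical placeholders, not our estimates; yours should come from your own sector and exposure.

```python
# A sketch of updating an existing risk profile for AI, reusing the
# (threat, likelihood, impact) tuples from the earlier sketch. The
# multipliers are hypothetical; base yours on your own assessment.

AI_MULTIPLIER = {
    "Spear phishing":          1.5,  # AI-written lures read more naturally
    "Social engineering":      1.4,  # faked images, cloned voices
    "Exploited vulnerability": 1.2,  # automated vulnerability hunting
    "Automated malware":       1.3,  # AI-assisted malware authoring
    "AI poisoning":            2.0,  # only matters if you run a homegrown AI
}

def adjust_for_ai(risks):
    """Return the profile with AI-adjusted likelihoods, capped at 1.0."""
    return [
        (threat, min(1.0, likelihood * AI_MULTIPLIER.get(threat, 1.0)), impact)
        for threat, likelihood, impact in risks
    ]
```

The point is not the arithmetic but the discipline: adjust the profile you already have rather than starting over from the headlines.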

The good news is that AI isn't magic and the right response isn't panic. The bad news is that things are going to get a bit worse before they get better.
