Is The Hottest Thing In AI Part Of Your Cybersecurity Process?


Some of the things in artificial intelligence (AI) that make people ooh and aah include using AI to create fake videos, write research papers, fly fighter jets, perform surgery, act as 'digital twins,' and otherwise do things we thought only skilled, sentient beings could do. That's amazing because, after all, AI is math: it can't prefer some things over others, and it has no capacity to want anything -- it just does what it is built to do.

These creative uses of agentic AI have made an impression on criminal gangs, which now use AI agents to engage in deception and attack cybersecurity systems.

At its most basic, a deception-based attack uses AI to mimic the characteristics of a known, trusted individual and seek access to the systems that person is authorized to use. That's what this post is about.

Now, of course, we've always had deception. That's the whole point of the term "Trojan horse." That's how Lucifer escapes hell in the epic poem Paradise Lost. That's what inflatable decoy tanks are all about, and in fact military history is replete with lessons in battlefield deception.

But it's all taking a new turn with AI-based cyberattacks. Welcome to the hottest thing in AI: deception detection, denial, and counter-AI.

The plot line goes something like this. Your AI model or cybersecurity process is looking for something that could be a threat, such as anomalous system behavior or problematic employee behavior. The cybercriminal's AI model anticipates this search-and-alert function and acts on that anticipation, because it knows what your model's parameters are. Its tactics can range from camouflage (blending in, much like a snake's camouflage or a mimicked voice, or masking patches, so-called "patch attacks") to hiding (the Trojan horse) to creating false information (the inflatable tanks) to injecting bad information into your model ("data poisoning") to attacking your model itself ("algorithmic poisoning").
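
To make "data poisoning" a little more concrete, here is a minimal, hypothetical Python sketch: it flips the labels on a small fraction of training examples before a model is trained, which is one of the simplest forms of poisoning. The dataset, the 15% fraction, and the model choice are illustrative assumptions, not a description of any real-world attack.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical example: label flipping, one simple form of data poisoning.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, fraction=0.15, seed=0):
    """Flip the labels of a random fraction of training examples."""
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # binary labels: 0 <-> 1
    return poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poison_labels(y_train))

print("accuracy with clean training data:   ", clean_model.score(X_test, y_test))
print("accuracy with poisoned training data:", poisoned_model.score(X_test, y_test))
```

The point of the sketch is simply that an attacker who can touch your training data never has to touch your model: the model faithfully learns whatever the corrupted data teach it.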

And where we have deception, we have deception detection, counter-AI, and denial. These functions, whose job is to counter attempts at deception, will soon become standard cybersecurity countermeasures. At a very high level, they work like putting a filter on your camera's lens, or a macro in your database or spreadsheet: they screen what reaches the model. Because this area is evolving quickly and is highly technical, we'll leave it here with the advisory that the AI-based cybersecurity arms race is going to spiral. Work on generative adversarial networks ("GANs") is one example.
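
As a rough illustration of that "filter on the lens" idea, here is a minimal, hypothetical Python sketch: an anomaly detector is trained on known-good inputs and screens new inputs before they ever reach the downstream model. The synthetic data, the detector, and the thresholds are all assumptions for illustration; real deception-detection services are far more sophisticated.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical "filter on the lens": screen incoming feature vectors against
# a profile of known-good traffic before the main model ever sees them.
rng = np.random.default_rng(0)
known_good = rng.normal(loc=0.0, scale=1.0, size=(5000, 10))  # historical, trusted inputs
incoming = np.vstack([
    rng.normal(0.0, 1.0, size=(95, 10)),  # ordinary-looking inputs
    rng.normal(6.0, 1.0, size=(5, 10)),   # out-of-profile inputs
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(known_good)
verdicts = detector.predict(incoming)  # +1 = looks normal, -1 = anomalous

suspicious = np.where(verdicts == -1)[0]
print(f"flagged {len(suspicious)} of {len(incoming)} inputs for review:", suspicious)
```

Anything the filter flags gets held for review instead of being fed to the model, which is the same logic a lens filter applies to light before it hits the sensor.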

The AI wunder-geeks at Carnegie Mellon University have made available a lovely document on counter-AI, and it's good reading. They suggest there are three key phases in counter-AI (the following three points are quoted at length from their white paper):

1. Model the target AI system. To attack an AI system, the attacker must either have access to the target AI system within its operational context or be able to gather sufficient information about the target to build a proxy (an approximation of the target that is likely to share the same vulnerabilities as the target during its operations). 

2. Train the counter on the model. In typical ML [machine learning], the data are held constant as the model learns. In AML [adversarial machine learning], this is reversed: the attacker “trains the data” as the model is held constant. Controlling the input in this way allows the attacker to identify how best to poison a model trained on a given set of data (learn attack), drive the target system to a desired state (do attack), or reveal information about the training data or model (reveal attack). 

3. Test the counter. In typical ML, a trained model is evaluated against several hold-out data sets in several contexts to provide evidence that the model is generalizable. Similarly in AML, a trained counter is evaluated against several hold-out models in several contexts to determine its efficacy as a learn, do, or reveal attack within the operational context of the target AI system.
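
To ground those phases, here is a minimal, hypothetical Python sketch of the second one: a stand-in logistic regression plays the role of the attacker's proxy (phase 1), and the attacker then holds that model constant and "trains the data," nudging one input along the model's own gradient until its prediction flips (a simple "do attack"). Every dataset, model, and step size below is an illustrative assumption, not code from the CMU white paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Phase 1 (stand-in): the attacker's proxy of the target system.
X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
proxy = LogisticRegression(max_iter=1000).fit(X, y)

# Phase 2: hold the model constant and perturb the data. For a linear model the
# gradient of the decision score w.r.t. the input is just the weight vector, so we
# step against (or with) its sign until the prediction crosses the decision boundary.
w = proxy.coef_[0]

def perturb(x, epsilon=0.05, steps=200):
    """Nudge one input until the proxy's prediction flips (a 'do attack')."""
    x_adv = x.copy()
    original = proxy.predict([x])[0]
    for _ in range(steps):
        if proxy.predict([x_adv])[0] != original:
            break
        direction = -np.sign(w) if original == 1 else np.sign(w)
        x_adv = x_adv + epsilon * direction
    return x_adv

x0 = X[0]
x_adv = perturb(x0)
print("original prediction:   ", proxy.predict([x0])[0])
print("adversarial prediction:", proxy.predict([x_adv])[0])
print("size of perturbation:  ", np.linalg.norm(x_adv - x0).round(3))
```

Phase 3 in the CMU framing would then test this perturbation strategy against several hold-out models to see whether it still works outside the proxy it was tuned on.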

What can you do?

First, you're in luck: deception-based AI is expensive, at least for now. Unlike other AI processes, which are readily available and scalable, effective deception techniques take a lot of effort and talented programmers to create. Thus deception-based attacks on cybersecurity systems are not yet mainstream.

Second, as a CISO or IT professional you need to improve your game. That means money and time. We're now way beyond annual certificate-oriented training. 

Finally, it's tempting -- and maybe not a bad idea -- to engage an AI-based deception detection service. But remember that there are many steps along the path to effective cybersecurity, even in the age of adversarial AI. Spending the most money won't guarantee you the most effective cybersecurity. You need a plan and a process; the AI part of your defense could be one piece of that process among several.

Ask us how we can prepare your company to be ready for deception detection and access denial.


