AI-Powered Threats: Finding Vulnerabilities
Recently we posted about what, broadly speaking, AI can do to hurt you. This is the second of two posts that get a little more specific: this one is about AI-powered network vulnerability detection. Here is a link to the first post.
Networks are no longer solely devoted to being a data transport layer: these days, networks have to balance functionality with security.
This means that a large part of what networks are designed to do is prevent unauthorized access. How do you confirm that your network is secure? Well, ahem, the best way is to attack your network. Want to know how good your lock is? Ask an expert lock picker to pick it. Not very elegant, is it? But there is really no other way. No one is interested in the theoretical protection a lock provides: we only care about the actual protection a lock provides.
It is the same with engineering. Mechanical engineers determine the safe load for a component by subjecting that component to stress until that component fails. Network engineers determine how secure a network is by attacking it.
What's the difference between a criminal and a locksmith? Intent, mostly. The tools of the two trades are often indistinguishable. So it is with a network engineer versus a hacker.
Just as there are lock picking kits which we hope are mostly used by locksmiths providing legitimate service, so are there network-attacking kits which we hope are mostly used by network engineers to find vulnerabilities. When network engineers find vulnerabilities, the intention is to remove those vulnerabilities. When hackers find vulnerabilities, the intention is to exploit those vulnerabilities. The tools work just as well no matter what the intent.
In order to test a network there are well-known recipes and techniques. "White hat" expert humans make good money probing for vulnerabilities because the network owner has paid them to. "Black hat" expert humans make evil money probing for vulnerabilities because some random criminal has paid them to.
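One of the best-known of these recipes is the port scan: probe a host to see which network services are listening and therefore reachable. As a hedged illustration (not any particular tool's implementation), here is a minimal TCP connect scan sketched in Python; the function name `scan_ports` is our own, and real tools such as nmap add many refinements on top of this basic technique.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`.

    This is the simplest probe -- a full TCP connect. A port that
    accepts the connection is running some service, which is exactly
    the kind of exposure both white hats and black hats look for.
    """
    open_ports = []
    for port in ports:
        try:
            # create_connection completes the TCP handshake, so an
            # open port is confirmed rather than merely inferred.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable -- skip it
    return open_ports
```

Run against a host you own, this reports which ports answered; run by an attacker, the same output is a target list. That symmetry is the whole point: the probe is identical, only the intent differs.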
AI can be taught these recipes and techniques and then do the job faster and more reliably, if less creatively. Ideally, these AIs would all be white hat, making cybersecurity professionals better at their jobs; but we don't live in an ideal world. Want an idea of how well AI can do this job? Check out this article on a real-life white hat AI from Stanford.
Don't freak out about what AI is doing to the cybersecurity threat environment. Don't shrug off what AI is doing to the cybersecurity threat environment. Do use the tools to make your security better because that is what the bad guys are doing.
Need help reorienting your cybersecurity posture? Contact us. We can help.