Evil Social Engineering Can Be Defeated And There Is A Science For That
A traditional cybersecurity attack targets people who have authorized access to information technology and manipulates them into granting access to people who should not have it. This is called "social engineering," and historically it has worked well as a means for criminal activity or for inappropriate use of the host organization's computing resources. Social engineering works because the attacker has found a way to manipulate the target into doing something, to the attacker's benefit, that the target should not do.
It does not have to be that easy.
Ultimately we don't know how effective social engineering is, because attackers are not going to run A/B experiments. What we do know is that social engineering persists because the barrier to entry is low -- much like CBD & vape shops, there are more attacks every day -- the cost is almost nothing, and the upside when it works is tremendous.
(Sophisticated state-sponsored actors that hack tech companies or federal government agencies operate at a different level, almost universally for the purpose of stealing secrets; these attacks typically exploit poor or weak cybersecurity practices rather than social engineering.)
(More bad news -- attacks by artificial intelligence agents are getting better and better because the AI learns what attack tactics work. Upside of this bad news: more people are losing coding jobs, and maybe some of these job-losers are hackers...cue the applause...)
Directing people not to do things -- don't click on links, and so on -- is transparently ineffective. This is "don't-training." In fact it's a stretch to call it "training" at all, because all that's done is to tell employees, "Hey, don't click on suspicious links."
Training people to do something, rather than not to do something, is a better route.
Let's stipulate immediately that there will always be people who don't do what they were trained to do. We'll also agree that there is a place for don't-training as part of a training regimen -- that is, alongside do-training. The problem is that don't-training by itself is ineffective.
Large-scale empirical studies of training outcomes, known as meta-analyses, have shown that training based on a needs analysis is effective at a meaningful level. There are a few catches. First, as one meta-analysis by Winfred Arthur and colleagues put it (p. 242), "Trained and learned skills will not be demonstrated as job-related behaviors or performance if incumbents do not have the opportunity to perform them." In other words, an annual cybersecurity refresher lecture is maybe better than nothing, but behavioral refresher training is better still. This can happen; think of fire drills, where the entire office needs to exit the building and assemble outside (be honest: would you think just telling people once a year what to do in case of a fire would be as effective as emptying the whole office?).

Second, another meta-analysis, this one by Christina Lacerenza and colleagues, noted (p. 1704) that there should be as much training as possible, it should be based on a training needs analysis, and attendance should be mandatory.
What would you train people to do in cybersecurity training? Results by Arthur and colleagues, as well as by Lacerenza and colleagues, strongly indicate that there should first be a training needs analysis to determine what employees and leaders need to do to create a cybersecure environment. Should they forward a suspicious email to a sysadmin, ask for assistance when in doubt, or forgo internet access while using company equipment? What should they do -- and what will management want them to do -- if they believe they have messed up: quit, blame someone else, ignore it? What if an employee wants to work at a coffee shop or anywhere with "free" wi-fi, or wants to use a macro-enabled spreadsheet they got from an external source? Suppose you implement multifactor authentication using hardware tokens, for example -- what should an employee do who forgot to bring the token to work?
Oh, and how about using AI at work -- do you explain what data poisoning is and ask people to use good judgment? What about developing performance reviews using AI? Analyzing quarterly reports using AI?
Well-developed training is effective, as the meta-analyses show, when it is based on defeating realistic cybersecurity incidents. Telling people not to do things is not training, and moreover it can go spectacularly wrong. Ask us how we can help you identify the right training interventions for your cybersecurity risk management goals.