Cybersecurity Tyrants
After a career in which cybersecurity was almost exclusively considered from a technological or procedural standpoint, I yearned to address the elephant in the room: human behavior. The best technology or procedures are not enough to counterbalance bad behavior. So I reached out to the best behavioral scientist I know, and Pythia Cyber was born.
As part of our mission to highlight the role of behavior in cybersecurity, I present a series of three posts about how your organization's culture can shape your cybersecurity. Specifically, how your organization's attitudes toward cybersecurity hamper or help your cybersecurity program.
The first post describes the cybersecurity janitor model. The second post (this one) describes the cybersecurity tyrant model. The third post describes the cybersecurity partner model. This is the Goldilocks narrative: one is too loose, one is too tight, and one is just right.
In my long career I have seen the tail wag the dog: I have seen organizations whose cybersecurity is buttoned up so tightly that cybersecurity is a real impediment to getting work done. But when I dug into the history of these situations I rarely found a power-mad lunatic at the helm; usually I found someone who had suffered through the cybersecurity janitor model and was overcorrecting. Often the janitor model had recently failed in some spectacular way and the pendulum had swung wildly in the other direction: suddenly the cybersecurity team was in charge of everything after having been in charge of nothing. The combination of power corrupting and the bitter memory of being ignored can lead you to some pretty grim places.
Sometimes overly restrictive cybersecurity programs are not simply about ego or control. When you have suffered through watching the rest of your organization do whatever it likes while expecting you to clean up afterward, the opposite approach seems very appealing: clamp down on IT services like a vise and then demand that all exceptions to the "do nothing" rule be justified.
When I say "doing whatever they like," I mean installing unapproved software on work devices, bringing personal devices into work and sometimes doing work on them, demanding that software be installed on organizational servers and desktops without regard to cybersecurity, and even using work laptops on public wifi. Is using your laptop for work at Starbucks a good idea? No, it is not.
(As an aside, I must ask: why do so many of you want to do this? And why do so many of you refuse to use VPNs?)
There is another culture that breeds overly aggressive cybersecurity: the "risks are real but benefits are imaginary" crowd. These people have cousins in finance who believe that costs are real but benefits are imaginary. Both crowds end up defaulting to a "do nothing" philosophy. Don't provide that IT service because it is *certainly* risky and only *possibly* useful. After all, our ancient ancestors managed to survive without being able to use the guest wifi in the restroom to check their email while using the toilet, so we can survive too.
However you come to it, the tail wagging the dog is bad because it is untenable and sooner or later there will be a revolution (or counter-revolution). And in the post-revolution euphoria the chances that a reasonable balance will be struck are not great. Thus I see some organizations oscillate between no IT services and all IT services, between great cybersecurity and terrible cybersecurity.
Cybersecurity is a team sport. Everyone is on the team, like it or not. If we are tasked with saving you from yourself, then we likely won't choose the option of reasonable risk assessment. If you have the power, we will be janitors. If we have the power, we will be tyrants. If we are partners in managing risk, then we have a shot at balancing risks and rewards, costs and benefits, at being productive and safe, as we will see in the next post.
Striking the right balance means that you trust that you have the right people in the right jobs doing the right things. People, placement, procedure. This is hard. We can help. Ask us how.