Reality Based Consulting: Behavior & Cyber Security

[Image: Sigmund Freud]

Brendan Hemingway writes:

I am the leader of our IT practice because I have spent decades in many parts of the applied technology world: application software development, embedded systems, databases, IT and cyber security. In those decades I have observed a tendency in my field to focus on what we can control (technology) and shrug at what we cannot control (human behavior).

Specifically, complex applications often require a complex security model, with user roles, permissions, and restrictions. The trouble is that reality rarely conforms to the model. If we are lucky, the use cases that do not conform are edge cases, and we can shrug them away with "how often does that happen?" or "don't do that."
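To make that mismatch concrete, here is a minimal sketch of the kind of role-and-permission model I mean. The roles, the permission names, and the is_allowed helper are all hypothetical, invented only for illustration and not taken from any real product:

```python
# A minimal sketch of a role-based permission model. The roles, permission
# names, and helper function are hypothetical, invented for illustration;
# they are not taken from any real EHR product.
ROLE_PERMISSIONS = {
    "physician":    {"view_ehr", "order_labs", "order_meds"},
    "nurse":        {"view_ehr", "record_vitals"},
    "nursing_aide": {"record_vitals"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the role's permission set includes the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# The model answers cleanly for the cases it anticipated...
assert is_allowed("physician", "order_labs")
assert not is_allowed("nursing_aide", "view_ehr")
# ...but it has no way to express "an aide looking up results on behalf of
# a physician whose hands are full", which is exactly the case reality produces.
```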

But we are not always lucky: sometimes those non-conforming cases are common, or those edge cases are important despite being rare. Then we have a problem: people are great at getting around restrictions, and many of these workarounds are as unsafe as they are useful.

To pick a specific example that won't get me into trouble: timeouts on public information terminals. In other words, the policy that if an information terminal (something on which you can call up information) is not behind access controls, even valid sessions time out, and time out rapidly. This makes all the sense in the world: you might log in, get called away, and forget to log out.
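As a rough illustration of that policy, here is a minimal sketch. The timeout values and the publicly_accessible flag are assumptions made up for this example, not the configuration of any real hospital system:

```python
import time

# A minimal sketch of the timeout policy described above. The timeout values
# and the publicly_accessible flag are assumptions chosen for illustration,
# not the actual configuration of any real system.
PUBLIC_TERMINAL_TIMEOUT = 60         # seconds of inactivity before logout
CONTROLLED_TERMINAL_TIMEOUT = 900    # terminals behind physical access controls

def session_expired(last_activity: float, publicly_accessible: bool) -> bool:
    """Expire even a valid session quickly if the terminal is publicly accessible."""
    limit = PUBLIC_TERMINAL_TIMEOUT if publicly_accessible else CONTROLLED_TERMINAL_TIMEOUT
    return (time.time() - last_activity) > limit
```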

However, in the Emergency Department, this was a comically poor fit for the actual use case: a doctor with their hands full who needs lab results or patient history isn't going to keep going back to the COW (Computer on Wheels), and the COW might go anywhere, so it had to be considered a publicly accessible terminal. So maybe have nurses run the COW? But nurses also have better things to do. How about Physician Assistants? They were rarely in the ED and also had other things to do. How about Nursing Aides? Well, now we are far enough down the hierarchy of insurance and certification that management was not comfortable opening up access to the Electronic Health Record. And forget about placing orders for tests or drugs.

So what happened? Doctors gave their IDs to whoever was around and had that person use the COW however the doctor told them to. And as a software developer, I shrugged: it was too big a mountain to climb. But through Cyber Security eyes, this particular behavior was unsafe in itself and tended to make the environment less safe generally. It is an example of the behavioral aspect of Cyber Security.

I am not saying that there was a solution for this complex socio-regulatory mess, but I am saying that I felt we could do something more than nothing!

Ted Hayes replies:

If only there were a solution to this socio-regulatory mess! Let’s reflect on what happened here:

  1. People are adaptable and will find a way to get their work done by working around your rules.

  2. No one in this scenario thought of themselves as increasing cyber security risk; in fact, quite the opposite: they saw their behavior as a net plus, reducing health risk even if it perhaps increased cyber security risk.

  3. Leadership oversight was missing or tolerant.

How would PythiaCyber approach this?

As the leader of PythiaCyber’s Behavioral Science practice, I’ll start by noting that no one can anticipate every permutation of the risks entailed by human-computer teaming. Systems serve people and not vice-versa. That’s why we have edge cases.

In short, there are two complementary approaches: reduce the number of situations that are truly “edge” cases by anticipating them (in lean manufacturing this is akin to a “Six Sigma” approach), and incorporate user input to raise awareness of the trade-offs involved.

A medical practitioner would not re-use hypodermic needles just because it was more convenient; so what are the risks raised by swapping credentials? It’s very likely that the staff (let alone leadership) in this department never gave much thought to what is called “systems thinking”: why things were being done in a certain way, and what the costs and benefits of doing things that way would be.

An investor or board might review this situation differently. They might ask which metrics are driving behavior. It is a bedrock fact in psychology that people will continue to do what they get reinforced (a.k.a. paid) to do. So if we pay for metrics that run contrary to cyber security risk management, we will increase cyber security risk. In-office output metrics are easy to understand, but it’s necessary to understand that cyber security risks matter too: loss of PII or PHI, or a ransomware attack, is a real potential scenario.

Cyber security risks are not binary: 0% risk or 100% risk. The goal PythiaCyber would recommend is to ensure that leadership has considered user input and is willing to tolerate increased cyber security risk to achieve better patient care and shorter wait times – but only up to a point. The effective leader, working with users and cyber security staff, will identify trade-offs and security measures so that (a) better training or better technology can reduce the number of cases that are truly “edgy” and (b) processes can be evaluated relative to what’s tolerable.
