The Other Half Of The Cybersecurity Talent Problem


Eric Cole recently published a piece called The CyberTalent Lie that is worth reading. His argument is that the persistent industry narrative about a cybersecurity talent shortage has become a cover story for avoidable strategic failures in how organizations recruit, develop, evaluate, and retain security professionals.

He's right. Organizations have systematically eliminated entry-level positions, leaned on certification filters that exclude strong candidates, treated compensation as the lever for retention when exit data points to culture and mission, and excluded security leaders from the strategic conversations where their authority would matter most. His five-step rebuild -- audit credential inflation, restore early-career pathways, conduct honest exit analysis, elevate strategic positioning, protect the learning budget -- is the right prescription for the problem he diagnoses.

The problem Eric describes is the 'supply-side' talent problem for the people who do security for a living. His framing presumes that cybersecurity strategy follows from problems identified at the security team level.

The second talent problem operates at a different layer of the organization: the layer where even a talented security team's work must translate into security outcomes -- the behavioral risk surface across the rest of the workforce.

A well-resourced, well-led, well-retained security team protects an organization whose other employees -- the 95 percent of the workforce who do not work in security -- make daily decisions about how to use IT systems, how to handle sensitive information, whether to follow policy or work around it, whether to report a suspicious email or stay quiet about a mistake, whether to use the sanctioned file-sharing tool or the faster personal one. Aggregated across thousands of people, millions of interactions, and the logistics chain beyond them, those decisions determine actual organizational risk to a degree that no security team's quality alone can offset.

The pattern shows up consistently in major incidents we highlight in the Litany of the Hacked. The compromise vector is rarely "the security team was understaffed" or "the SOC analysts weren't well-trained." Instead it's something like:

  • A logistics integration was procured under a business-side carve-out and never came into security's scope.
  • An employee shared credentials with a contractor to keep a project moving.
  • A manager modeled workaround behavior that the team adopted as norm.
  • A phishing attempt succeeded because the workforce had no muscle memory for reporting suspicious messages quickly.

There is also a different kind of seam worth naming between the cyber-doers and the cyber-overseers: the gap between the technical workforce that sees the operational reality and the executives and board members who make decisions about resourcing, risk acceptance, and strategy:

  • A SOC analyst sees an alert pattern that suggests a coordinated reconnaissance campaign and escalates it through three layers of management. By the time it reaches the executive level it has been compressed into a status update that strips out the context that would have made the right decision obvious. 
  • A CISO knows the team is operating beyond its capacity but cannot translate that operational reality into a board-level resourcing argument that competes with other investment priorities.
  • An incident debrief makes operational sense to the security team but never produces the strategic implications the executive team needs to act on. 

These are failures of translation. The downward language (technical, operational, threat-specific) and the upward language (risk, governance, capital allocation, strategy) require different vocabularies, and most organizations have not invested in the people or the practices that bridge them.

None of these failures are the security team's fault, and none of them get solved by a better security team alone.

This is what Pythia Cyber's approach to behavioral cybersecurity addresses, and it's the layer Eric's analysis, as sharp as it is, doesn't reach. His talent strategy could be executed flawlessly across an industry and the behavioral risk surface in the broader workforce would be largely unchanged.

What does the second talent problem look like in practice? 

In our work with organizations, we use a framework we call BARC: Behavioral Analysis of Risks to Cybersecurity. BARC captures five categories of behavioral conditions that determine security outcomes regardless of security team quality:

Policy attitudes and perceived workability. Whether the workforce experiences security policies as reasonable, supportive of their work, and well-designed by people who understand the actual job. When this perception is low, workaround behavior rises predictably, regardless of how good the security team is.

Workaround prevalence and team norms. Whether work groups have developed informal practices that bypass security controls, e.g. sharing credentials, using unsanctioned tools, proceeding without security approval. These norms spread laterally through teams faster than they can be addressed by training.

Reporting behavior and security psychological safety. Whether employees report mistakes, suspicious activity, and observed policy violations, or stay quiet because reporting is punishing or pointless. A workforce that doesn't report turns the security team's incident response capability into a slow and partial instrument. (Psychological safety in this context refers specifically to the perception by employees that they will not be penalized for reporting incidents.)

Manager security modeling. Whether managers visibly follow the policies they expect their teams to follow, and whether they understand the policies well enough to do so. Managers who model workarounds give their teams permission to do the same.

Role clarity and organizational climate. Whether employees have a clear understanding of what's expected of them at work -- including what's expected on security. Recent meta-analytic work on workplace stressors finds role ambiguity to be the single most consequential predictor of counterproductive work behavior across hundreds of studies. Security compliance is downstream of this. If an organization's employees aren't clear about what security means for them, they will find it stressful and are likely to ignore it.

These five conditions become visible only when they're measured directly. That is the work behavioral cybersecurity does, and it is work most organizations have never systematically attempted.

Solving the very real supply-side talent problem in cybersecurity is necessary but not sufficient. The most resilient organizations work both layers simultaneously. They build security teams the way Eric describes and we endorse: investing in early-career talent, removing artificial credential filters, treating security as strategic, and retaining people through culture rather than compensation arms races. And they measure and address the behavioral conditions in the broader workforce that determine whether the security team's work translates into actual security outcomes. Either approach alone leaves real risk on the table; organizations that work both layers will outperform those that work only the first.

Ask us how measuring the behavioral risk surface in your organization -- the layer where most cybersecurity programs succeed or fail -- can extend your cybersecurity talent strategy.

(image credit: Acediscovery, CC BY 4.0 <https://creativecommons.org/licenses/by/4.0>, via Wikimedia Commons)
