Sometimes The Nerd Must Be Heard
Here at Pythia Cyber we do go on about two things that most of our competitors ignore:
- How leading a technical group differs from leading other groups
- How cybersecurity has a huge behavioral component
These areas of focus are the result of years of experience, some of it bitter. Experience can be a terrific school, but the tuition is high and the teachers are mean.
This post is about Thing 1: specifically why we say that leaders of technical groups don't have to be great technologists themselves, but they do have to have technical credibility. We define technical credibility as the ability to comprehend basic technology issues AND to display that ability to your troops.
One of the ways in which technical credibility is crucial is as a tool when it comes time to play referee. All too often leading a technical group requires that you referee disputes about technology deployment or development. All too often these disputes are between Goofus and Gallant, ie between someone who is proposing something that suits them and someone who is proposing something that furthers the organization's goals. For example, Goofus might be a novelty junkie who gets bored tweaking other people's work and want to redo everything, but tells you that "technical debt requires it." Or, Goofus might be shying away from a needed rewrite and proposing a nice, simple, inadequate patch job. Gallant is the opposite: clear-eyed, hard-working and ready to do what is best even if that is boring or mundane or frustrating.
The tricky part is all of us who work in technology have sometimes been Gallant and sometimes been Goofus. We all hope that we are 90% Gallant but at least some of us are wrong about that.
So a core attribute for a leader of a technical group is the ability to sort out the Goofus side from the Gallant side with bonus points for not simply pigeon-holing people into one or other category. A deep understanding of human nature might be all you need for this, but in my experience you also need a sense of the track record of the people involved and a basic understanding of the underlying issue.
So far, so abstract. So let us get down and nerdy.
Incompleteness
Here is something you might not realize you care about: Gödel's incompleteness theorems. This is a big deal in some branches of philosophy and logic and formal systems. A brutal summary is this:
Consider a formal system "S". Like all formal systems, S consists of statements which in this case must be provably true. Consider a statement in S which we will call "P". Statement P is this: "statement P is false." Note that S allows statements to refer to themselves, which is rather obviously called "self-reference." Here is the tricky bit: if P is true, then P is false. Similarly, if P is false, then P is true. A complete formal system consists only of statements, so S is "incomplete."
Super-nerdy, yes? Perhaps, until you consider that software is written in more-or-less formal languages, which means that software (and, alas, much technology) is a formal system. Which means that if your programming language allows self-reference, then validating software created with that programming language can be a challenge.
Again, so what? Well self-modifying code is kind of self-referential. Which means that all the slick auto-configuration code on which every Operating System relies has this potential problem. This is why, for a while there, it was depressing common to upgrade your desktop and have the upgrade hang and then have to back out the upgrade. We got better, but it took time and that time sucked.
As a technical leader it is quite likely that you will be asked to referee between the "self-modification is evil" camp and the "self-modification is the only hope of high performance" camp. It would be best if you had at least some help in the form of understanding the basic issue. When deciding the slick:dangerous ratio of given proposal, in other words who is Goofus and who is Gallant, it helps to understand the inherent risks. At the very least, this will help you ignore the manipulative claims that a given approach has no potential downsides. Incompleteness has benefits, but it also has costs. Those costs are often requiring large amounts of testing. Anyone who tells you anything else is lying or ignorant or over confident--or all three at once.
System Reliability vs System Complexity
There has been some work over the past few years to prove that there is a limit to how much you can improve system reliability through added complexity. In other words, there is a limit to how well it works to build a system and then patch every problem you run into. Eventually your patches will do more (overall) harm than (specific) good. I will spare you the references to the many papers, but a quick Google search will send you down a rabbit hole of glorious depth.
So what? Well, again, your deployed technology makes up a single complex system from which you require reliability. This means that there must always be, lurking at the back of your mind, a sense of how much complexity you already have on board. At some point the tempting, simple, easy, direct patch of a system will be the straw that breaks the camel's back. It may be that the claim that the simple, easy, direct patch is too much is just Goofus, itching to do something new. Or that claim may be Gallant, saving you from short-sightedness. Which is which? A basic understanding of the issue can't hurt when you have to decide.
Yes, AI As Well
One cool thing about understanding core concepts is that you can stack your understanding of individual issues like Lego bricks or like a step stool that helps you understand related issues.
Now that we have considered incompleteness and complexity versus reliability, we can sail through a recent article in Quanta Magazine: Cryptographers Show That AI Protections Will Always Have Holes. This article confirms that we all suspected: that Artificial Intelligence systems are fundamentally the same as every other human technology that forms a system, prey to the same fundamental problems.
To summarize (although I encourage you to read the article), the article addresses the question of how to protect AI systems from evil input. For example, while we want our AI oracles to understand Physics, we don't want our AI oracles to readily explain how to make nuclear weapons. It is easier to patch the system by providing an input filter rather than trying to retrain the AI to understand what not to explain. Broadly speaking, the incompleteness theorems tell us that verifying what an AI will explain is a huge and painful task while the reliability vs complexity principle tells us that there is only so much we can to keep the wrong questions from being asked.
Securing AI oracles is rapidly becoming an enormously important problem and soon it will become a very common problem. There are no simple answers here, but understanding the basic underlying issues is a great start.
In Conclusion
We don't claim that you need to be a formal linguist in order to lead a technical group. You also don't have to be a theoretical physicist or a formal logician. But if the concepts in this post don't seem worth considering, or worse, bore you to tears, then perhaps leading a technical group is not for you.
Comments
Post a Comment