Posts

AI-Powered Threats: Spear Phishing

Recently we posted about what, broadly speaking, AI can do to hurt you. This is the first of two posts that get a little more specific: this one is about AI-powered spear phishing. Here is a link to the second post. When I first encountered email in late 1980, it was a text-only affair. A simple transfer protocol (which quickly became...more complicated), a simple text file format (which quickly became...more complicated) and a simple app to view or create simple messages (which slowly became so complicated as to terrify the meek and appall the brave). Way back, Unix types had the mighty talk app, which split the screen and allowed us to type messages on our dumb ASCII terminals and see the responses displayed on the other half of the screen, in real time! It was miraculous. But it required coordination: you both had to be logged in at the same time, and back then users usually were logged in to get work done, not to chat, so this was terrific occas...

Cut Out The Stupid Stuff

A very useful, if overused, software design concept is "Keep It Simple, Stupid" or KISS. As is probably obvious, the idea is to avoid complexity where possible. This idea is necessary because it is so very tempting to solve every aspect of a problem individually, ending up with a solution that is as large as it can possibly be. The more lines of code you have, the more opportunities for bugs and inefficiencies and conflicts, all of which are things you don't want. However, features are external and visible to the people who pay you, while poor structure is internal, visible only indirectly, as when software is too slow, takes too long to modify, or tends to blow up. This means that external features tend to carry more weight than internal elegant simplicity, and that software can end up too complicated and bloated. Which leads to managers having posters, t-shirts, mugs and knickknacks with K.I.S.S. emblazoned on them. In that spirit, Pythia Cyber has our own pithy bum...

The Cybersecurity Goldfish Bowl

One distinct aspect of technological leadership, especially cybersecurity leadership, is the requirement of transparency. In order to do your job as a member of the management team, you have to keep informing your peers of every significant screw-up, every exploited oversight, and every exploited vulnerability in every product you use, which sometimes means reporting other people's mistakes as well. If you are thin-skinned in this regard, you are going to struggle to succeed as a cybersecurity leader. If you are uncomfortable telling people about mistakes or oversights, then this job is not for you. If you hate taking responsibility for other people's mistakes, then this job is really not for you. Isn't every senior management position like this? Yes and no. Yes, in that taking responsibility for mistakes and being transparent is a big part of just about any senior position. No, in that most other jobs don't have armies of people dedicated to breaking your hard work and don...

What's The Shape Of Your Career?

Lately we've been discussing a new question: What's the shape of your career? Most people think of the shape of their career as being something like a fused set of triangles, much like this: The blue part is what you do as an individual contributor. The yellow part is what you do as a leader. Duration of career is along the X-axis. This is reasonable as far as it goes: we all start as a 'doer', even if we later start somewhere new as a leader. Our conversations are swinging around to thinking of your career as having more of the shape of an amphora, as shown in the lead photo (all pictures of amphorae are from Wikimedia Commons; the blue/yellow rectangle was drawn by Gemini). An amphora is very narrow at the bottom, then broadens out, and only then becomes narrow again. Think of a cybersecurity career: it starts with skills, then moves to project, program, and team leadership, and then maybe a few people become higher-level or enterprise-level leaders. Here is what that looks like relative to a career:...

The Balance

Our friend Tomas Chamorro-Premuzic is back at it: over-use v. under-use of leadership talents. But this time he challenges his audience to find the balance, warning that neither pole is good. For example, if you woke me up at 3am and asked whether being un-self-aware was bad, I'd say, sure: bad. But if you woke me up at 3am and asked whether being un-self-aware was sometimes good, I'd say...well...as Tomas says: "In other words, even the qualities we most admire become dysfunctional when taken too far, and even the traits we distrust can be valuable in moderation. Human behavior functions the same way: Most psychological strengths aren’t inherently good or bad; they’re dose-dependent." At this point you're either annoyed that psychologists can't get this answer down to good or bad, or you're thinking you know better. Wrong & wrong. Tomas notes this: self-awareness predicts higher job performance, enhances leadership performance, and increases the qu...

You Need To Have The Talk

Very recently Brendan put up two posts about AI and cybersecurity. This part caught our wandering eye: You can't ignore the fact that AI will make cyber attacks either better or more frequent or both. You can't run around like a headless chicken, either. You can review your Cybersecurity Risk Profile and update it with an eye to what AI will make worse. Ideally, you already have such a profile and merely need to adjust it. If not, starting with what AI might do does not make sense. Start with what is already happening and how you are responding, and then figure out what adjustments you will make and why. The good news is that AI isn't magic and your response isn't panic. The bad news is that things are going to get a bit worse before they get better. It's time to have the talk with your senior leaders. You know this is coming: the talk about AI and cybersecurity. It will feel awkward. It will be like sign language in a language you don't speak. You're an expert tech...

How Much Should AI Scare Cybersecurity?

In a previous post we looked at how much Pythia Cyber recommends that you trust AI to help you. In this post, we look at how much Pythia Cyber thinks you should be worried that AI will hurt you. It is tempting to shrug off the threat of AI-powered cyber attacks because that threat seems overblown. Certainly that threat is overexposed: it seems to get more coverage in the media and social media than it could possibly deserve. It is also tempting to let the threat of AI-powered cyber attacks derail your short-term plans and rewrite your priorities. Neither of these extreme reactions is appropriate. At the risk of repeating ourselves, Cybersecurity should be a form of Risk Management. This means following the usual methodology: determine what might happen, assess how painful each of these eventualities would be, assess how likely each would be, and then, based on your best guess about likelihood and severity, allocate your limited resources for maximum effectiveness within your bud...
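
To make that methodology a little more concrete, here is a minimal sketch in Python of risk-based prioritization under a fixed budget. The threat names, the 1-to-5 scoring scale, the costs, and the simple likelihood-times-severity score are our own illustrative assumptions for this sketch, not figures or a formula from the post.

```python
# Minimal sketch of the risk-management loop described above:
# enumerate what might happen, score severity and likelihood,
# then spend a limited budget on the highest-scoring items first.
# All threats, scores, and costs here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    likelihood: int       # 1 (rare) .. 5 (near certain) -- assumed scale
    severity: int         # 1 (annoyance) .. 5 (existential) -- assumed scale
    mitigation_cost: int  # rough cost to mitigate, in arbitrary budget units

    @property
    def score(self) -> int:
        # Simple expected-pain proxy: likelihood times severity.
        return self.likelihood * self.severity


def allocate(risks: list[Risk], budget: int) -> list[Risk]:
    """Fund mitigations in descending risk-score order until the budget runs out."""
    funded = []
    for risk in sorted(risks, key=lambda r: r.score, reverse=True):
        if risk.mitigation_cost <= budget:
            funded.append(risk)
            budget -= risk.mitigation_cost
    return funded


if __name__ == "__main__":
    risks = [
        Risk("AI-assisted spear phishing", likelihood=4, severity=4, mitigation_cost=3),
        Risk("Ransomware via unpatched VPN", likelihood=3, severity=5, mitigation_cost=5),
        Risk("Lost unencrypted laptop", likelihood=2, severity=3, mitigation_cost=1),
    ]
    for r in allocate(risks, budget=6):
        print(f"Fund: {r.name} (score {r.score}, cost {r.mitigation_cost})")
```

The arithmetic is not the point; the discipline is. Every eventuality gets a likelihood, a severity, and a cost before any money moves, and the budget constraint forces the prioritization rather than panic doing it for you.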