If you’re following the cybersecurity buzz and – at least occasionally – take the time to think for a minute, it’s hard to avoid getting the feeling that everything is broken. And it is. But the show must go on. Here’s how it works.
It’s not unique to the security discipline; it’s better known as ‘calculated risk’. Except most of the time it isn’t calculated – it’s (no one wants to admit this) unknown, random, unpredictable. Definitely not calculated. We don’t like to hear this because it shakes the foundations of our lives and our society, which are built on the assumption of predictability.
Interestingly though, this assumption has worked for hundreds of years. Worked well, even. Unpredictable things happened: accidents, floods, earthquakes, fires, other acts of nature. They were bad, they were expensive, they cost lives, but they were also part of life. Unpredictable but not unexpected.
In recent years this picture has changed dramatically. Unpredictable has become equivalent to unexpected. And I’m not talking about climate change, which exacerbated but didn’t initiate the change. We seem to have more random accidents, more outages, more catastrophes, more problems. Technology that was supposed to make things simpler instead generates complexity, dependencies and fragility. But as tempting as it is, let’s not take the easy way out, blame technology and move on. It doesn’t taste good, but let’s taste it anyway: the bigger problem seems to be that we stopped thinking, listening, even caring. Fast became more important than good, perception more important than reliability.
Case in point: even in my part of the world – Northern Europe – nature seems to have been acting up lately. And it has. Floods and mudslides, avalanches, landslides and fires have moved from rarities to elements of the daily news beat. Casualties, destruction, suffering, loss. What’s interesting, though, is that many of them were predictable. Like the minor shakes before an earthquake. Or like remembering that this particular area had several floods (or landslides, or some other serious ‘acts of nature’) 150 years ago. We could have been prepared, but we weren’t. The big and small catastrophes happen, more often than not, because we ignored the short-term signals and the knowledge of the past.
We knew (or should have known) that it was unwise to allow certain (very attractive) areas to be developed for housing, industry, whatever, because we knew about the risks. Too close to the river or the ocean, too close to that steep hillside. The warnings were ignored. Then came the flood, the landslide, the bushfire – or, in some cases, dangerous gases, because someone forgot the area was a garbage dump 50 years ago.
These are not accidents, not calculated risks – more like uncalculated stupidity, or worse: ignorance and bad management. And blaming politicians doesn’t help. We know they don’t have a clue, but their advisers should have. What happened to the sane voices, the warnings?
Back to cybersecurity; here’s the point: accepting the ‘everything is broken’ statement as a fact changes everything, mindset in particular. Because it keeps everyone on their toes. Don’t think you are protected so you can relax for a second, because you can’t – and you know it. Someone somewhere is targeting you right now, and they know something you likely don’t. Some MFA scheme was broken, some piece of security equipment was penetrated, some huge data leak just happened. (Check out this article from ImmuniWeb for some extra ‘inspiration’.)
The ransomware wave we’ve seen in the past few years is only partly explained by more activity from the bad guys. The other, just as important reason is complacency. Not that security departments and vendors are incompetent, but the attitude – the mindset – is more often than not wrong. For example, most security vendors, in particular the big ones, want their customers to believe that they have everything you need: that they can be your sole partner, one that will not only make you more secure but also cost less, provide broader coverage and offer more comprehensive tools.
It sounds great, but this kind of thinking is broken – and most professionals know it. Among many other problems, going with one vendor exposes a customer to the dangers of monocultures – a well-known threat to those who’ve been around the block a few times. In short, a single bug in a key piece of software, hardware or procedure may disable (or expose) your entire infrastructure.
It doesn’t matter whether we’re talking about ventilation systems, communication systems, energy or security: it’s a calculated – or, more often, ignored – risk. Which is one of the reasons we’re seeing so many technical glitches with big consequences these days. (By the way, the term ‘monoculture’ comes from agriculture – look it up to find some really educational examples of how this works.)
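To see why monocultures aggregate risk the way they do, here’s a minimal back-of-the-envelope sketch – my own illustration with made-up numbers, not anything from the paper below. It assumes each platform independently suffers one critical, exploitable bug per year with probability p, and that one such bug compromises every node running that platform:

```python
# Toy model: monoculture vs. diversified fleet (illustrative numbers only).
# Assumption: each platform independently suffers a critical, exploitable
# bug with probability p_bug per year, and that bug compromises every
# node running the platform.

def p_total_compromise(num_platforms: int, p_bug: float) -> float:
    """Probability the ENTIRE fleet is compromised at once.

    Monoculture (1 platform): a single bug is enough.
    k independent platforms: all k must be hit in the same window,
    so the probability drops to p_bug ** k.
    """
    return p_bug ** num_platforms

p = 0.10  # assumed yearly probability of a critical bug per platform

for k in (1, 2, 3):
    print(f"{k} platform(s): total-compromise risk = {p_total_compromise(k, p):.2%}")

# 1 platform(s): total-compromise risk = 10.00%
# 2 platform(s): total-compromise risk = 1.00%
# 3 platform(s): total-compromise risk = 0.10%
```

The flip side is that diversification makes it more likely that some part of the fleet is compromised at any given time (1 − (1 − p)^k grows with k); the trade is swapping a total, cascading failure for smaller, survivable ones. That is essentially the ‘risk diversification’ argument in the paper below.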
What’s scary is that this is nothing new. Not the monoculture threat, not the general ‘things are broken’ scenario. 20 years ago, a handful of the world’s most renowned security experts, among them Bruce Schneier and Daniel Geer, wrote a paper to call attention to the problem. Titled CyberInsecurity: The Cost of Monopoly, it summarises the problem like this:
- Our society’s infrastructure can no longer function without computers and networks.
- The sum of the world’s networked computers is a rapidly increasing force multiplier.
- A monoculture of networked computers is a convenient and susceptible reservoir of platforms from which to launch attacks; these attacks can and do cascade.
- This susceptibility cannot be mitigated without addressing the issue of that monoculture.
- Risk diversification is a primary defense against aggregated risk when that risk cannot otherwise be addressed; monocultures create aggregated risk like nothing else.
- The growth in risk is chiefly amongst unsophisticated users and is accelerating.
- Uncorrected market failures can create and perpetuate societal threat; the existence of societal threat may indicate the need for corrective intervention.
Very interesting and educational reading – and very compact, just a few pages long. It’s sort of unbelievable, but it’s also exactly like the examples above: 20 years later the situation is worse, not better. Much worse. How come?
I could go on with a deep dive into problems and catastrophes that could have been avoided if we’d handled this and many other problems differently, but even more important right now is ‘why we are here’. We – our world – seem to have prevailed, even flourished, in spite of the lax attitudes on many fronts. How bad can it be?
Here’s the thing: doing the ‘right’ thing is always expensive – much more expensive than the next best and next-next best thing. And we got away with it most of the time, so it must be OK, right?
This is where the risk actually becomes calculated: we’re taking our chances, hoping it’s good enough. And we always have an excuse for not pushing harder on the choices our professional conscience tells us to make. Like ‘we couldn’t get the money’ or ‘we weren’t that badly exposed’ or ‘the experts claimed it was good enough’. Which is fine until the mudslide or flood – or ransomware – is a fact.
As we head towards 2023, the world is very different from what it was two years ago. Much more so than anyone could have anticipated. It’s more fragile, hostile and unpredictable, which means we need to think, plan and act differently. Use past knowledge and experience – not to tell us what to do, but to indicate what not to do. In cybersecurity and in many other contexts and disciplines. If we don’t, we’re not professionals.
“It worked yesterday/last year” is not reassuring – more like scary: “You mean you haven’t updated/changed/replaced it?” – depending on context. Get real. It’s a hostile world and everybody’s exposed. You have been warned.
See also
• Surprise: Your Cyber Security Sucks
• “I got yor password”