Zero trust is an interesting concept. ‘Don’t trust anyone – ever’ seems so simple and so enticing now that the world is falling apart because we decided to trust the untrustworthy. We created huge vulnerabilities, and now they’re haunting us. Can zero trust work outside the narrow technical settings in which it has already proven itself?
Think about it: The world is in disarray today because we decided – many years ago – to trust Russia. And China. And other previously unreliable ‘parties’. They became our friends. Implicit trust – easy sharing – easy trade. The new balance did benefit everyone for quite some time. Then new leaders with different ideas came to power, and suddenly it became a problem. Trust became vulnerability – and we’re in the middle of it right now: The world is scrambling for the exact opposite of trust, for self-reliance, for autonomy. In a hurry. It’s like turning a supertanker. It takes time. In the meantime, we’re vulnerable.
A huge subject that affects all aspects of life – from you and your family vs. the neighbors to geopolitics. Not to mention technology, where zero trust has been an important concept for a decade.
I plan to revisit the applicability of zero trust in several contexts in the coming months. Can a country, a city, a family or a person apply and live with zero trust? The immediate answer is no, but there are nuances, and there may be room for creativity. Can we adapt the interesting and generally positive experiences from the network/cybersecurity environment to entirely different settings?
What about software: If you’re in the tech sector, you’re probably overloaded by vulnerability reports daily – or at least weekly. Just like last year. And the year before, even 10 or 20 years ago. There seems to be NO improvement whatsoever – more like the opposite: It’s getting worse. How come?
Seriously – in any other discipline such a standstill would have been unacceptable. Correction: Was unacceptable. But not anymore. Because software is ‘infecting’ just about every product out there and brings with it – yes, you said it – vulnerabilities. If it weren’t for universal connectivity, the Internet and automatic update mechanisms, this bad spiral would have stalled a long time ago.
Instead we’re getting used to it, barely noticing the ‘has been updated’ messages that pop up here and there daily. Is this sustainable? Is it time to take a step back and think differently? Obviously we’ve been curing symptoms; it’s time to attack the root cause – but what is it? Is it bad engineering, sloppy developers, bad tools, outdated regimes – or just another law of nature? Do we need zero trust, software-style? Is it possible? A chain of thought with some surprising twists.
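To make ‘zero trust, software-style’ a little more concrete: the core idea from the network world is that no request is trusted because of where it comes from – every single call must prove itself. Here is a minimal, hypothetical sketch in Python (the key, the `sign`/`handle` names and the HMAC scheme are all illustrative assumptions, not a real product’s API):

```python
import hmac
import hashlib

# Hypothetical shared secret for the sketch; a real zero-trust system
# would issue per-caller credentials from an identity provider.
SECRET_KEY = b"demo-key-not-for-production"

def sign(request_body: bytes) -> str:
    """The caller attaches a signature to every single request."""
    return hmac.new(SECRET_KEY, request_body, hashlib.sha256).hexdigest()

def handle(request_body: bytes, signature: str) -> str:
    """Zero-trust style: verify every request, regardless of origin.

    Note what is missing: there is no 'trusted internal network' branch.
    A request from localhost is checked exactly like one from the Internet.
    """
    expected = sign(request_body)
    if not hmac.compare_digest(expected, signature):
        return "denied"
    return "ok"
```

The point of the sketch is the absence of shortcuts: trust is never implicit, it is re-established on every interaction – which is exactly the discipline the rest of this post asks whether we can live with.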
The thing about zero trust is that it seems to deliver complete control: No dependencies, no implicit vulnerabilities, no loose ends. Which in some settings – like autonomous cars – seem like absolute requirements. It must work regardless – or fail safely. Dependable, predictable, safe – in all possible situations: Heavy rain, snowstorms, dirty roads, heavy traffic, high speed, low speed, populated areas etc. No network? OK. No GPS? No problem. Resilience.
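The ‘No network? OK. No GPS? No problem.’ requirement above is essentially a fallback chain: try the best source, distrust its availability, and always end on a fail-safe default. A minimal sketch, assuming hypothetical positioning sources (the names and the dead-reckoning fallback are illustrative, not any vendor’s design):

```python
def locate(sources):
    """Try each positioning source in order; degrade gracefully.

    `sources` is a list of (name, callable) pairs. A source may raise
    (hardware gone) or return None (no fix) -- either way we fall through
    to the next one rather than trusting any single dependency.
    """
    for name, source in sources:
        try:
            fix = source()
        except Exception:
            continue  # source unavailable -> try the next one
        if fix is not None:
            return name, fix
    # Fail safely: an always-available (if crude) last resort.
    return "dead-reckoning", (0.0, 0.0)

# Illustrative usage: GPS raises, network returns no fix, odometry works.
def gps():
    raise RuntimeError("no signal")

def network():
    return None

def odometry():
    return (59.91, 10.75)
```

Resilience here comes from treating every external dependency as optional – the zero-trust stance applied to availability rather than identity.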
Obvious requirements, right? In theory, yes. In practice, no. Actually, we’re already trying hard to get around the restrictions all these ‘requirements’ imply, looking for shortcuts. Case in point: Some of the largest car manufacturers are selling the idea of the hybrid driver, and we – the customers – are buying it. Seriously! ‘Hybrid autonomy’ and ‘hybrid drivers’ are supposed to alleviate our scepticism toward robots (i.e. autonomy). It’s like ‘yes, we’re back in the driver’s seat, I knew it, we’re the best’. The ultimate unreliability (and vulnerability) is back in the equation and we’re rejoicing? I discussed this contradiction in terms a while back – in Hybrid Drivers? Gimme a Brake … It just doesn’t make sense. Put more bluntly, it’s dangerous.
And there is more. Our relentless fight to avoid humiliation and keep robots out of the driver’s seat goes on and on. Like ‘fake news’ (actually just lies and statistics) as recently discussed in the post Autonomous Lifesavers. Humans – you and me – simply don’t like the idea that autonomous drivers are more reliable or better than us. Apparently we’d rather continue to kill each other than hand over the wheel to a machine. Not that today’s autonomous technology is perfect. It isn’t, and it may fail – but the likelihood is much lower than that of you or me failing.
Does this fight make sense to you? I didn’t think so. We actually have to trust the machine, not because it’s perfect, but because it’s better than the alternative. Which is an interesting conclusion given where we started: Trust is not an option, it’s a balance. Which also means that some vulnerability is unavoidable. Where does that leave ‘zero trust’? You’ll be surprised. Stay tuned.