Want to get back to normal? Be ‘normal’? Don’t. It’s dangerous. It’s about to become lethal.
Inspiration
It’s becoming tiresome, isn’t it? Every week AI is seemingly conquering new territory, doing more things, becoming more capable and more useful – or more threatening, depending on your point of view. High noise factor, low value, because it’s mostly speculation. What about taking the opposite angle: what AI cannot do – would that be more useful?
Is that what you want? Actually, what you want may not matter, you’ll get it anyway. It may come as a surprise, but AI has been predicting the future for a long time – and very successfully. To the extent that we’ve been totally addicted for ages. Oh, we haven’t…
It’s almost like running out of gas, except everyone’s surprised: Oops, this thing runs on gas? Where can we get more and who pays? Of course chatbots don’t run on gas, but they do run on data and the data-pipes are about to close. How can this happen and can this possibly be…
Digital trust hasn’t delivered. In fact, digital trust, blockchains, even Zero Trust – which is a concept, not a product – turned out to be less than trustworthy. The reason? We couldn’t get humans out of the equation. What we got was more complexity, less understanding and centralization instead of the expected democratization – more power…
Tech luminaries (!) want us to pause AI/chatbot development for 6 months. Many others are jumping on the bandwagon. The message is this: ‘We need to get our bearings straight(ened).’ Which may make sense, but seriously – what are they on?
GPT-4, ChatGPT’s bigger brother, is just out and has already created a new market: one-click lawsuits. Yes, you read that correctly. How clever. Just what the world needed.
Can you blame people (or politicians) for choices they make when they have no clue? Of course you can. That’s how they – and all of us – learn. And having no clue is no excuse for bad business decisions. Nevertheless, most managers and leaders get away with it. Again and again. No wonder we’re in trouble…
Apparently, chatbots suddenly hit a wall. They went from ‘wunderkinder’ to laughingstock in a few short weeks. Bard and ChatGPT (or the enhanced Bing, which suddenly became ‘de-enhanced’) got undressed. No intelligence was found, just a lot of data – and a language model, a very big one. Which explains the output – and (pardon my French) the BS.
How would you know if it’s lying? Most of us wouldn’t. Is that a problem? Only if you (blindly) believe what it’s throwing at you.