Unless you’ve been living under a rock, you’ve been hit by the AI hype wave. If you’re more than averagely interested, you’ve wondered when the downturn will come. It always does – it’s as predictable as gravity, and as the Gartner Hype Cycle. What happens then?
Tesla’s Autopilot has been involved in more than 700 accidents over the past few years. That’s bad, but how bad? That depends on the metric. How do the accidents it caused compare to the accidents it prevented? Are autopilots good or bad? The answer: we don’t know.
It’s becoming tiresome, isn’t it? Every week AI is seemingly conquering new territory, doing more things, becoming more capable and more useful – or more threatening, depending on your point of view. High noise, low value, because it’s mostly speculation. What about taking the opposite angle: what AI cannot do – would that be more useful?
Is that what you want? Actually, what you want may not matter; you’ll get it anyway. It may come as a surprise, but AI has been predicting the future for a long time – and very successfully. So successfully that we’ve been totally addicted for ages. Oh, we haven’t…
Tech luminaries (!) want us to pause AI/ChatBot development for 6 months. Many others are jumping on the bandwagon. The message is this: ‘We need to get our bearings straight(ened).’ Which may make sense, but seriously – what are they on?
GPT-4, ChatGPT’s bigger brother, is just out and has already created a new market: one-click lawsuits. Yes, you read that correctly. How clever. Just what the world needed.
Can you blame people (or politicians) for choices they make when they have no clue? Of course you can. That’s how they – and all of us – learn. And having no clue is no excuse for bad business decisions. Nevertheless, most managers and leaders get away with it. Again and again. No wonder we’re in trouble…
Apparently, chatbots suddenly hit a wall. Went from ‘wunderkinder’ to laughingstock in a few short weeks. Bard and ChatGPT (or the enhanced Bing which suddenly became ‘de-enhanced’) got undressed. No intelligence was found, just a lot of data – and a language model, a very big one. Which explains the output – and (pardon my French) the BS.
How would you know if it’s lying? Most of us wouldn’t. Is that a problem? Only if you (blindly) believe what it’s throwing at you.
They are in the news – every day and all over. Chatbots are doing homework, writing novels and poems, taking exams, solving mysteries, fooling people, sometimes fooling themselves. It’s incredible – but everyone is worried. It doesn’t make sense. Shouldn’t we be celebrating?