Wow! The ‘year of AI’ has just started and AI is already hallucinating. I’m not joking. Experts and pundits alike proclaim that AI systems (mostly LLMs) are having hallucinations – which (to them) is a very positive thing: It means the bots are becoming creative. Really? It looks more like a bunch of bugs …
If the question surprises you, you’re in good company. Most of us moved email (and a lot of other stuff) to the cloud 10+ years ago, assuming – and expecting – to be done worrying. Why does the question continue to pop up? Should we be worried?
Digital privacy is a challenge. GDPR and its siblings make it worse. We need a reset, and then some serious effort to understand the problem – something the GDPR creators never took the time to do. Possibly the most expensive – and detrimental – ‘blind-leading-the-blind’ exercise of all time.
So the Tesla Autopilot has been involved in more than 700 accidents over the past few years. That’s bad, but how bad? That depends on the metric. How do the accidents it caused compare to the accidents it prevented? Are autopilots good or bad? The answer: We don’t know.
It’s becoming tiresome, isn’t it? Every week AI is seemingly conquering new territory, doing more things, becoming more capable and more useful – or threatening, depending on your point of view. High noise factor, low value, because it’s mostly speculation. What about taking the opposite angle: What AI cannot do – would that be more useful?
Is that what you want? Actually, what you want may not matter, you’ll get it anyway. It may come as a surprise, but AI has been predicting the future for a long time – and very successfully. To the extent that we’ve been totally addicted for ages. Oh, we haven’t…
It’s almost like running out of gas, except everyone’s surprised: Oops, this thing runs on gas? Where can we get more and who pays? Of course chatbots don’t run on gas, but they do run on data and the data-pipes are about to close. How can this happen and can this possibly be…
Digital trust hasn’t delivered. In fact, digital trust, blockchains, even Zero Trust – which is a concept, not a product – turned out to be less than trustworthy. The reason? We couldn’t get humans out of the equation. What we got was more complexity, less understanding and centralization instead of the expected democratization – more power…
Tech luminaries (!) want us to pause AI/ChatBot development for 6 months. Many others are jumping on the bandwagon. The message is this: ‘We need to get our bearings straight(ened).’ Which may make sense, but seriously – what are they on?
GPT-4, ChatGPT’s bigger brother, is just out and has already created a new market: one-click lawsuits. Yes, you read that correctly. How clever. Just what the world needed.