It’s weird. You’ve been learning new stuff all your life. And changed accordingly – maybe ‘evolved’ is a better term. As adults most of us have embraced learning, even occasionally bragged about it – as in ‘lifelong learning, that’s me’ etc. Then – suddenly, it’s bad. “Reeducation? No thank you – I’m good.” Why?
You’ve probably noticed. Suddenly the news is all about ChatGPT & co. getting dumber – less likely to deliver correct results. What’s going on?
Unless you’ve been living under a rock, you’ve been hit by the AI hype wave. If you’re more interested than average, you’ve wondered when the downturn comes. It always does – it’s as predictable as gravity, and the Gartner Hype Cycle. What happens then?
Most of us have experienced that if something seems too good to be true, it usually is. We’re most likely seeing only part of the picture, the rest being either hidden or ignored – or both. Unfortunately, this seems to be the case for most of our so-called sustainable energy sources. Looking closer, they turn out to be not so sustainable after all. Quite possibly the opposite.
Ok, so AI will not give you more time (see part I). And AI could be this huge threat to mankind etc. – according to a growing number of experts. Sounds serious, but it’s still kind of distant, isn’t it? So let’s bring it closer to home: Is AI a real…
If you don’t know what it is, that’s OK. It’s a recent invention – not the concept but the name. A ‘prompt engineer’ is an expert in ‘bot relations’. In ‘how to interact with chatbots’. It’s weird, isn’t it? I thought the point of chatbots was that we – anyone – can talk to them. I must have been mistaken …
Think about it: We – presumably intelligent human beings – have collectively put the world on a path towards extinction and don’t seem to be able to do much about it. But there is still hope: Our new AI-‘friends’ possess a different kind of intelligence, quite possibly our key to survival, our lifeboat so to speak. But we don’t want help. In fact, now we’re trying to sink the lifeboat. Do we still call ourselves ‘intelligent’?
So the Tesla Autopilot has been involved in more than 700 accidents over the past few years. That’s bad, but how bad? That depends on the metric. How do accidents caused compare to accidents prevented? Are autopilots good or bad? The answer: We don’t know.
Want to get back to normal? Be ‘normal’? Don’t. It’s dangerous. It’s about to become lethal.
It’s becoming tiresome, isn’t it? Every week AI is seemingly conquering new territory, doing more things, becoming more capable and more useful – or more threatening, depending on your point of view. High noise, low value, because it’s mostly speculation. What about taking the opposite angle: What AI cannot do – would that be more useful?