Think about it: We – presumably intelligent human beings – have collectively put the world on a path towards extinction and don’t seem able to do much about it. But there is still hope: Our new AI ‘friends’ possess a different kind of intelligence, quite possibly our key to survival – our lifeboat, so to speak. But we don’t want help. In fact, now we’re trying to sink the lifeboat. Do we still call ourselves ‘intelligent’?
So Tesla’s Autopilot has been involved in more than 700 accidents over the past few years. That’s bad, but how bad? That depends on the metric. How do accidents caused compare to accidents prevented? Are autopilots good or bad? The answer: We don’t know.
Want to get back to normal? Be ‘normal’? Don’t. It’s dangerous. It’s about to become lethal.
It’s becoming tiresome, isn’t it? Every week AI is seemingly conquering new territory, doing more things, becoming more capable and more useful – or more threatening, depending on your point of view. High noise factor, low value, because it’s mostly speculation. What about taking the opposite angle: What AI *cannot* do – would that be more useful?