Think about it: We – presumably intelligent human beings – have collectively put the world on a path towards extinction and don’t seem to be able to do much about it. But there is still hope: Our new AI ‘friends’ possess a different kind of intelligence, quite possibly our key to survival, our lifeboat so to speak. But we don’t want help. In fact, now we’re trying to sink the lifeboat. Do we still call ourselves ‘intelligent’?
It’s interesting. Suddenly tech leaders, luminaries, politicians, academics (and a lot of people like me) are worried about the consequences of AI. Calls to action left and right to fast-track regulation, endless analyses of the (disastrous) consequences of not doing anything.
As renowned writer and speaker Scott Galloway pointed out in his newsletter No Mercy/No Malice the other day:
“It’s notable today that many of the outspoken prophets of AI doom are the same people who birthed AI. Specifically, taking up all the oxygen in the AI conversation with dystopian visions of sentient AI eliminating humanity isn’t helpful. It only serves the interests of the developers of nonsentient AI, in several ways. At the simplest level, it gets attention. If you are a semifamous computer engineer who aspires to be more famous, nothing beats telling every reporter in earshot: ‘I’ve invented something that could destroy the world.’”
At the next level, argues Galloway, the ‘concern’ from the incumbents in the field is geared towards locking out competition by putting themselves in the driver’s seat, ‘helping’ lawmakers make the rules. “… an attempt to commit infanticide on emerging competition.”
While this more or less meaningless (actually dangerous) play is on, the world is falling apart, becoming increasingly uninhabitable: floods, drought, fires, landslides, rising sea levels, poisoned oceans, land and food etc. A seemingly endless list of catastrophes not waiting to happen, but happening – now. Meanwhile, we seem preoccupied with the consequences of smart machines outsmarting us. It doesn’t take much, really.
Instead of being embraced and discussed as an opportunity, the AI explosion is being turned into a timely distraction from something most of us prefer to ignore – as evidenced by the numbers I referred to in a previous post:
“A Yale study on climate change communication found that 70 percent of Americans are worried about climate change. More than 30 percent of them are deeply worried. And yet, only 9 percent are talking about it.”
That’s not all. A different (but also very recent) Yale study hits even closer to home in this context: it found that 4 in 10 CEOs believe AI could destroy mankind within 5 to 10 years. Wow! That’s worrisome, but where’s the action? Even more to the point, what happened to (human) intelligence? When did our (re)action to imminent danger become ‘wait and see’?
Are we so busy (distracted) worrying about AI that we forget to secure (or at least attempt to secure) our own survival? It sure looks that way – which doesn’t mean we should ignore the dangers and potential of AI, but we need to keep a realistic perspective.
A recent article in Forbes Magazine delivered a pertinent reminder of our lack of perspective. In The 15 Biggest Risks of Artificial Intelligence, author Bernard Marr discusses each of 15 AI-related challenges (which he calls ‘threats’). All good – actually a quite interesting discussion. What he fails to mention is that the exact same list applies to just about any big technological breakthrough in history. Even more interesting, most of them apply to people and companies too – now and since forever. Lack of transparency? Check. Misinformation and manipulation? Check. Bias and discrimination? Check. Privacy concerns? Check. Ethical dilemmas? Check. Security risks? Check. Etc.
Which means that we’ve been here before. Not that our historical handling of the big challenges deserves applause or should become a model, but experience? Yes. Ability – or rather, opportunity – to prepare? Yes. Further, some of these risks or challenges are natural and unavoidable parts of progress, of evolution. Remember radio? Dangerous, would kill jobs, a security risk, open to abuse and misinformation etc. – back in the day. TV likewise. Did we handle them? Sort of. Did we survive? We did, even prospered. Did they kill jobs? Sure, and created new ones – plus changed the world. Cars? Computers? Microprocessors? I won’t even start …
Is AI any different? It may be argued that the power and potential are different – by orders of magnitude – but change is relative. It’s not obvious that the relative impact of AI will be all that different from that of the steam engine, the elevator, the automobile, electricity, the Internet or microprocessors. In fact, looked at in perspective, they seem very similar: the noise, the optimism, the calls for regulation, the scaremongering … They changed the world. [By the way, the revolutionary power of the elevator is generally undervalued and ignored. Think about it – before the elevator, the tallest building in the world was about six floors.]
Of course there is the speed of change, and this time there is ‘intelligence’ involved (is there really? We’ve seen smartness, lots of it, but no intelligence or sentience yet …) – both valid points. But maybe the most significant difference is us and the world. We are different, the world is very different. We – people in general – used to be in the driver’s seat: interested, participating, caring, building, including. These days we seem more than content to Snapchat from the back seat, letting someone (or something) else do the driving.
So maybe we need AI, either as a wake-up call or as a tool to compensate for what we’ve lost. What we don’t need is a dozen smart business people coercing us into giving them the wheel – the power to control our digital future. That would be a truly unintelligent choice.