Artificial Stupidity


You’ve probably noticed. Suddenly the newsbeat is about ChatGPT & co. becoming more stupid – less likely to deliver correct results. What’s going on?

It’s interesting. The developers are working hard to make the bots more ‘human’: easier to communicate with, sensitive to detectable emotions and emotional states, better at guessing what the user (really) means from whatever context is available. The efforts may have succeeded, but the results haven’t improved. The new bot generation appears to fail 98% of the time instead of delivering correct results 98% of the time. Again, what’s going on?

Here’s the takeaway so far: developer focus has shifted from results to appearances. Try that on a human and measure the results – exactly the same, of course. You don’t send your children to university to focus on dress codes and looks, but to learn and understand.

To put it simply, the new model is wrong. We need to get humans out of, not into, the (technical) equation. Our new tools should not be like us, but useful to us, right? Which means opening up, seeing and exploring what’s possible – which was the idea to begin with, wasn’t it? Creating new tools to expand our reach, our capabilities.

By forcing AI into a human model, we create conflict instead of progress. Of course smart machines can outdo humans in many ways – and will do so fast if we choose to go that way. Or we can create the most incredible tools/helpers/services (un)imaginable. Create the future instead of maintaining history. But we cannot do both – and I’m not going to argue why. You’ve probably figured it out already.

That said, looking forward instead of backward is an increasing challenge. We – humans – weren’t built to handle changes this big and fast. As author Kevin Kelly points out in the introductory chapter of his renowned book The Inevitable:

We are morphing so fast that our ability to invent new things outpaces the rate at which we can civilize them.

This is why we keep trying to force new inventions into old models. We make progress ever faster, get scared, and try to pull back. But the genie is already out of the lamp. And – as the saying goes – we really need to focus on what to wish for instead of wasting time trying to get it back (into the lamp).

The thing is – we need to get better at understanding and managing change. Even more to the point – our own role in change, slow and fast. And instead of yet another long list of things you should or should not do, here’s a good starting point: read the book I just mentioned.

The Inevitable is from 2017, which may seem old, but it isn’t. I pulled it from my bookcase the other day, started reading – and found it even more relevant today than six years ago. Not least because the author ‘attacks’ our attitudes, our mindset. Think about this one, for example:

Endless Newbie is the new default for everyone, no matter your age or experience.

This hurts, but let’s leave that feeling behind and consider our current situation: a year ago we didn’t have chatbots like these; now they’re all over the place and we’re trying to deal with them, to adjust. And unless we realize that we’ve never been here before, we’re going to make all the mistakes in the book and then some. Artificial stupidity.

If reading The Inevitable feels enlightening, here’s your next step: Max Tegmark’s Life 3.0. Also from 2017, and likewise even more relevant today than six years ago. Some of the examples may seem a bit outdated, but they still work – and underscore an important point: six years is a (very) long time these days.

Like Kevin Kelly, Max Tegmark indirectly attacks our stodgy mindset – as in the chapter called Intelligence Explosion, where he points out:

If you roll your eyes when people talk of gun-toting Terminator-style robots taking over, you’re spot on: This is a really unrealistic and silly scenario. These Hollywood robots aren’t that much smarter than us and they don’t even succeed. In my opinion, the danger with the Terminator story isn’t that it will happen, but that it distracts us from the real risks and opportunities presented by AI.

Max Tegmark in Life 3.0

Both books are thought-provoking and build understanding. Forget TL;DR – without the longreads, you’ll never understand the world and never understand change. Your mind needs food – stupidity comes from ‘starved minds’. It shouldn’t be you…
