
The Power of Unintelligence

Photo © Pawel Skokowski/Adobe Stock

You’ll recognize the feeling. You’ve had this itch, this foreboding, maybe an important idea or the like, for a long time. Then you’re suddenly reading about it. Big relief. The ‘I’m not alone’ relief.

I’ve had this kind of feeling about AI for years. I’ve not been quiet about it, and it doesn’t go away. Not the itch, not the grand misrepresentation and subsequent misunderstandings, not the feeling that we’re on a wrong, possibly dangerous path.

Let’s set the record straight: Despite the name, AI is completely void of intelligence. Period. An ant is much more capable than any AI we’ve seen to date. Notice I used ‘capable’ instead of ‘intelligent’, because I don’t know if ants are intelligent. Where would I find a metric to gauge that, and what would the tests be? And what kind of intelligence would we be looking for? Experts recognize at least 8 distinct categories of intelligence; some count 12.

It’s all about words and definitions. If you redefine intelligence sufficiently, your shoes may be intelligent. But what does that make you and me? We do call ourselves intelligent, don’t we?

I’m not going to bore you with technicalities, but it’s time we land this AI hype. Quit being delusional about the wonders and threats this ‘intelligent’ technology can deliver. Accept that we can make technology smart: we can create software engines capable of extracting knowledge from data and using that knowledge to draw (propose) conclusions in certain (narrow) disciplines. Face recognition doesn’t require intelligence; it requires smartness, sensors and data – which I discussed in a recent post on mindset3.org. Finding trends in huge amounts of data doesn’t require intelligence. Driving a car in a predictable environment doesn’t require intelligence (but it does require a lot of sensors and a lot of computational power).

I was elated when I found the article Artificial Intelligence: The Revolution Hasn’t Happened Yet on Medium.com 4 years ago. Finally, someone a lot closer to the technology and the research than me, with a conclusion that made sense – and a discussion that made it all clear.

Among a number of very quotable paragraphs, Michael Jordan writes:

Artificial Intelligence (AI) is the mantra of the current era. The phrase is intoned by technologists, academicians, journalists and venture capitalists alike. As with many phrases that cross over from technical academic fields into general circulation, there is significant misunderstanding accompanying the use of the phrase. But this is not the classical case of the public not understanding the scientists — here the scientists are often as befuddled as the public. The idea that our era is somehow seeing the emergence of an intelligence in silicon that rivals our own entertains all of us — enthralling us and frightening us in equal measure. And, unfortunately, it distracts us.

This hasn’t changed, and isn’t likely to change any time soon, regardless of what the pundits may say. But fortunately, the general acceptance that AI is completely void of intelligence is spreading – as indicated by Michael Jordan’s post collecting 160 comments and 51,000 claps.

What triggered this issue again this month was a Wired article named This Researcher Says AI Is Neither Artificial nor Intelligent. Kate Crawford is a researcher at USC and Microsoft, and recently wrote a book to ‘clear the air’ about AI. The Atlas of AI approaches the technology from a different angle, attempting to make us – the public – understand the mechanisms, which in turn leads to an understanding of what level of intelligence (or lack thereof) we may expect from technology – any technology.

The Wired article is particularly focused on her position on ‘emotion recognition’, which has been promoted by several big-tech companies recently. She says …

Emotion detection represents the fantasy that technology will finally answer questions that we have about human nature that are not technical questions at all. This idea that’s so contested in the field of psychology made the jump into machine learning because it is a simple theory that fits the tools. Recording people’s faces and correlating that to simple, predefined, emotional states works with machine learning—if you drop culture and context and that you might change the way you look and feel hundreds of times a day.

She continues …

That also becomes a feedback loop: Because we have emotion detection tools, people say we want to apply it in schools and courtrooms and to catch potential shoplifters. Recently companies are using the pandemic as a pretext to use emotion recognition on kids in schools. This takes us back to the phrenological past, this belief that you detect character and personality from the face and the skull shape.

Very interesting and very encouraging. It’s not that the technology frequently called AI is bad or useless or fake or anything. It’s great – and it’s getting better all the time. It’s just that so many people – a big part of the market – have inflated expectations that invite incorrect use and potentially catastrophic results.

It may be less impressive, but it’s more useful: Let’s call it smart technology. That’s what it is. And like any tool, any technology, it may be dangerous. Then again, even a hammer is dangerous – in the wrong hands at the wrong time.
