The AI Crash

Illustration © Feodora/Adobe Stock

My previous post, AI’s notorious lack of intelligence, raised quite a few eyebrows – judging from the (inspiring) feedback. The distance between hype and reality is huge, and the misunderstandings and disappointments plentiful, while big money seems to be pouring in from salivating investors. What are we missing?

It turns out – again – that appearances are deceiving, this time on both the technical and financial sides. AI funding is no longer increasing but falling dramatically – as reported by The Information recently – in particular in the medical sector.

The downturn was predictable. We’ve seen these cycles since the idea – and the term – AI was coined 60 years ago. Ideas – hype – funding – more hype – more funding, then disappointment and falling interest. Over and over again, with each new cycle usually triggered by some new idea or technology.

It’s tempting to ask why we never learn. It’s also easy to forget that the research, the experiments and many of the products are useful, sometimes even revolutionary – although in narrower markets, at smaller scale and along different axes than anticipated. It’s not a lack of results that causes the downturns, it’s wrong or overinflated expectations.

Such expectations are fueled by the type of media coverage delivered by the NYT Magazine recently: AI Is Mastering Language. Should We Trust What It Says? A doubly misleading headline, followed by a discussion/analysis that seems to follow a specific vendor’s bullet points without asking the critical and mostly obvious questions.

Fortunately, shortly after publication, the article received a strong and well-written rebuttal – on Medium.com, written by UoW Linguistics Professor Emily M. Bender: On NYT Magazine on AI: Resist the Urge to be Impressed. Long, thorough and to the point – very enjoyable reading if you’re interested in the whole picture, not just the enthusiastic hype from the pundits.

The bizarre thing is this: If the word ‘intelligence’ were eliminated, we would be so much better off, almost as in ‘home free’: We have all this smart technology using tons of data in ever smarter ways, in many cases doing things that humans cannot possibly do. All good, except that it’s getting a bad reputation because we chose a misleading name that implies the wrong metric: We measure intelligence instead of smartness or usefulness. And since the intelligence is zero, the score always sucks.

Professor Bender also discusses the use of words like ‘training’ and ‘learning’ (as in machine learning) in addition to ‘intelligence’, and undresses the rather simplistic mechanisms under the hood, pointing out that all three words are totally out of place in this context. Algorithms can be very complicated and quite smart, but never intelligent.

If you’re more than casually interested, check out Bender’s article. It’s interesting, sobering – and important. We really need to understand this. Not the technology, but the difference between intelligence and anything any machine can deliver. Only then can we use it wisely – and detect and avoid harmful use.

As the Greek general and historian Thucydides pointed out some 2,400 years ago: Knowledge without understanding is useless.
