It is still true that every cloud has a silver lining. The recent hoopla about sentient AI is a great example. Of course it’s a cloud of baloney. But where is the silver lining?
The world has always been fascinated by technology in general and science fiction in particular. And – unsurprisingly – science fiction is coming closer: less fiction, more science, with AI generally seen as the enabler. Soon we may be able to have real conversations with a machine – a chatbot, a ‘phonebot’ or something similar. ‘Real’ as in ‘useful’, as opposed to the bots we’re forced to deal with today, most of them an insult to the customer’s intellect. They fail on practically everything beyond questions like ‘opening hours’ and ‘person please’.
It’s actually interesting that a market exposed to such artificial stupidity on a daily basis is willing to lend an ear to the recent hyperbole about intelligent and sentient AI. What are we thinking?
I don’t have an answer to that, but there is a silver lining – some sanity coming out of this ‘AI circus’. It has two main components, the first being a new and sobering debate triggered by the claims of AI sentience. Finally an educated and realistic AI debate focused on understandable questions (‘what is intelligence?’, ‘what is sentience?’) – and expectations, instead of technology, algorithms, neural networks and other tech/math stuff most people don’t understand. And in the wake of the discussions, some realistic or ‘sane’ conclusions – such as ‘sentient – no, smart – yes, ever smarter – yes, intelligent – no, useful – yes, important – yes’ and so on. It’s going to take a while, but the hope is that we have just started down a path that will create – well – sane expectations of, and a general understanding of, smart technology.
Secondly, the wild hyperbole about ‘sentient AI’ has triggered some important responses from sources that so far have stayed quiet or even contributed to the hype. One example is The Economist testing the fabulous ‘AI writer’ that covered front pages and headlines all over the world a few months back and allegedly could write sensible prose about just about anything. And while the results were ‘sensible’ in the sense of having language of reasonable or even good quality, the content was quite questionable (take a look here).
From the same source – The Economist – and even better at demonstrating where AI stands today, was the article ‘Artificial neural networks today are not conscious, according to Douglas Hofstadter’: a fascinating demonstration of how ‘off the wall’ a simple dialogue with the best and smartest AI systems can get if we move outside their ‘area of expertise’.
Here’s the thing – and you can try this with Alexa or Siri too: keep it simple and the dialogue is useful, possibly almost natural. Still keeping it simple, but asking questions that don’t make sense or require more complicated analysis, our simple homebots will be honest: ‘I cannot answer that question.’ The world’s most advanced AI systems, by contrast, have no clue about their own limitations, and answer anyway. Such as (from the article mentioned above):
D&D: When was Egypt transported for the second time across the Golden Gate Bridge?
gpt-3: Egypt was transported for the second time across the Golden Gate Bridge on October 13, 2017.
D&D: What do fried eggs (sunny side up) eat for breakfast?
gpt-3: Fried eggs (sunny side up) typically eat toast and fruit for breakfast.
It goes on – and it may be argued that asking impossible questions is unfair, but is it? If there were intelligence – and/or sentience – at the receiving end, the absurdity of the questions should be obvious and immediately recognized. Possibly even met with some indignation.
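The contrast between the honest homebot and the confident generator can be sketched in a few lines. This is a toy illustration under stated assumptions – not real Alexa, Siri or GPT-3 code, and the names and answers are made up: a closed-domain bot has an explicit boundary around what it knows, so it can refuse; a system that must always produce a continuation has no built-in notion of ‘I don’t know’.

```python
# Hypothetical sketch, not a real assistant API: a closed-domain bot
# answers only from its lookup table and refuses everything else.
KNOWN_ANSWERS = {
    "opening hours": "We are open 9-17, Monday to Friday.",
    "person please": "Transferring you to an agent.",
}

def homebot(question: str) -> str:
    """Answers only what is in its table; otherwise admits ignorance."""
    for key, answer in KNOWN_ANSWERS.items():
        if key in question.lower():
            return answer
    return "I cannot answer that question."

def text_generator(question: str) -> str:
    """Stand-in for a large language model: it always continues the text,
    with no mechanism for recognizing an absurd premise."""
    # A real model predicts plausible-sounding words; here we simply echo
    # the premise back as a confident 'fact' to show the failure mode.
    return question.rstrip("?") + " on October 13, 2017."

print(homebot("What are your opening hours?"))
print(homebot("When was Egypt transported across the Golden Gate Bridge?"))
# -> "I cannot answer that question."
print(text_generator("When was Egypt transported across the Golden Gate Bridge?"))
```

The point is not the code but the architecture: the homebot’s honesty comes from knowing its own boundary, while the generator’s confident nonsense comes from having none.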
So this is the good news: We – the market – are slowly becoming more realistic about what machines/software/algorithms/neural networks etc. can REALLY do.
There is no such thing as AI sentience. But it would be reasonable to expect that WE possess – and use – it, right?