How would you know if it’s lying? Most of us wouldn’t. Is that a problem? Only if you (blindly) believe what it’s throwing at you.
Which most people seem to do. At least if we’re to believe the media buzz, overflowing with ‘I asked ChatGPT about this and here’s what I got’ stories. From dating profiles to programming projects. Some fascinating because the results are really good, surprising, even impressive. Some scary because the user seems really naive – with an attitude almost like ‘ChatGPT said it, so it must be true’.
It isn’t – and that’s a big problem. Not so much with ChatGPT, which is doing exactly what it was programmed to do, but with us. Our expectations are totally off. We really need to cool down our relationship (more like blind enthusiasm?) with these bots, get an understanding of what they can and cannot do, and then reset our expectations. Maybe simply ‘make thinking great again’ (see also Chatbots aren’t the Problem, We are…).
When you get into a car, you don’t expect it to fly. You don’t have to be an academic to implicitly know the basic laws of nature. Not so with ChatGPT: Most of us have no idea what the limitations are. Actually, it’s programmed to deceive. You start off with a few simple questions, get really impressed, continue with more, get even more impressed and you’re hooked. By now you are ready to believe just about everything it tells you.
That said, ChatGPT is guilty too. It is deceiving us because it was made that way, and this needs to be fixed. The ‘pretend to know everything’ and ‘answer every question even if you can’t’ attitude needs to go. ChatGPT and Bard and their brethren are fine, but only if the questions are ‘in scope’ so to speak. In other words, we need to know their scope, including the scope of the data the machine has been fed and – preferably – what ‘colour’ (or ‘balance’) the data have towards issues like politics, ethnicity, gender, and much more.
Consider this: If ChatGPT is fed a lot of articles about the greatness of solar panels vs. other energy sources, it will not only present a skewed picture, but possibly even magnify the ‘imbalance’.
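How can an imbalance get magnified rather than just reflected? A deliberately oversimplified sketch (in Python, with made-up numbers – nothing like how ChatGPT is actually trained) shows the mechanism: if a model always picks the most likely answer from a 70/30 corpus, the minority view vanishes from its output entirely.

```python
from collections import Counter

# Toy corpus: 70% of the "articles" favour solar, 30% favour other
# sources. (Hypothetical numbers, purely for illustration.)
corpus = ["solar is best"] * 70 + ["nuclear is best"] * 30

# "Training": count how often each opinion appears.
counts = Counter(corpus)

def most_likely_answer():
    # Greedy decoding: always return the single most frequent option.
    return counts.most_common(1)[0][0]

# The 70/30 imbalance in the data becomes 100/0 in the output:
print(most_likely_answer())  # -> 'solar is best', every single time
```

Real chatbots sample rather than always taking the top choice, but the tilt works the same way: a lean in the data easily becomes a heavier lean in the answers.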
Is that bad? It is, but it doesn’t make ChatGPT and Bard etc. useless. Our sources of information have always been like that. Whether encyclopedias, research reports or political manifestos, they’ve rarely been balanced. Not before, not today. Which hasn’t stopped us from using them to our great advantage.
What’s different this time is our attitude. The chatbots appear like new and vastly improved sources (they are), they pretend to ‘know it all’ (they obviously don’t) and to be dependable (they aren’t, though they were built to appear so). We tend to trust them because we want to – and we’re eager to get results. Which means our usual (more or less conscious) filters are gone. THAT is dangerous.
The truth is, ChatGPT, Bard & co. have no clue about anything. How could they? They cannot think, reason or ponder; they just pretend. And they do so in devious ways – to make us believe they’re actually smart. Impressive, yes, but thinking or being intelligent? No way.
Google’s Vint Cerf, one of the Internet’s ‘founding fathers’, said in a recent interview with CNet.com:
… Cerf said he was surprised to learn that ChatGPT could fabricate bogus information from a factual foundation. “I asked it, ‘Write me a biography of Vint Cerf.’ It got a bunch of things wrong,” Cerf said.
“It knows how to string a sentence together that’s grammatically likely to be correct,” but it has no true knowledge of what it’s saying, Cerf said. “We are a long way away from the self-awareness we want.”
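That ‘grammatically likely’ mechanism is easy to demonstrate. The sketch below is a toy bigram model – vastly simpler than what powers ChatGPT, but the same principle in miniature: it strings words together based purely on which word tends to follow which, with no notion of whether the result is true.

```python
import random
from collections import defaultdict

# Toy training text: the model learns word-to-word transitions only.
text = ("vint cerf is a father of the internet . "
        "vint cerf is a pioneer of the web . "
        "the internet is a network of networks .").split()

# "Train" a bigram model: for each word, record the words that followed it.
follows = defaultdict(list)
for a, b in zip(text, text[1:]):
    follows[a].append(b)

def generate(word, n=8):
    # Chain together likely-looking successors; fluency emerges from
    # the statistics, but no fact-checking is involved anywhere.
    out = [word]
    for _ in range(n):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("vint"))  # fluent-looking, possibly false, e.g.
                         # "vint cerf is a network of networks ."
```

Every sentence it produces is statistically plausible; none of it is checked against reality. Scale that idea up by a few billion parameters and you have Cerf’s ‘grammatically likely to be correct’ in a nutshell.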
On Cerf’s cue I asked ChatGPT to write a bio of my late father. There is no one else by that name in the world, so there would be no confusion. And sure enough, I got an impressive and nice-looking summary – and not a single fact was correct. Place and date of birth, career, achievements, etc. – not a single match. Which underscores Cerf’s point: The output may look good, but it’s not reliable and may be completely false! The problem isn’t that ChatGPT knows nothing about my father, who was by no means famous. The problem is that it pretends to know. The ‘bots’ are being overly creative in their use of (sometimes lacking) data.
Of course it can be fixed – or at least improved. In fact, the chatbots and their data sets are being improved continuously, which is a major point: We’re all participating in a huge experiment in which half-finished products have been opened up for general use (also known as ‘beta testing’). In itself not all that unusual in the world of software and digital services – more like the norm.
But this time it’s different, because the test group is literally everybody and we know so little about the tool itself and its capabilities. To most it appears like a capable but elusive Oracle-of-Delphi service, a universal know-it-all. Little or nothing is known about its limitations, its datasets, its priorities, its reliability, etc. When does it produce reasonable stuff and when is the output just garbage? The truth is, we don’t know. Even worse, no one knows…
In fact, the chatbots and their relatives surprise even their creators, and not always in positive ways. Which means the word ‘experimental’ is key: Not the real deal – yet. Have low expectations and no implicit trust until there is a track record (which in some disciplines there already is). Maybe a ‘confidence level’ attached to the output would make sense? Then again, what would prevent it from lying about that too?
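One way such a confidence level could be derived, rather than self-reported: some model APIs expose per-token log-probabilities (often called ‘logprobs’), and a crude score could simply be how probable the model found its own words. The numbers below are invented for illustration; this is a sketch of the idea, not any vendor’s actual feature.

```python
import math

# Hypothetical per-token log-probabilities for one generated answer.
# Some APIs return values like these; the figures here are made up.
token_logprobs = [-0.05, -0.20, -1.90, -0.10, -2.40, -0.15]

# Geometric mean of the per-token probabilities, used as a naive
# 'confidence level' for the whole answer.
avg_prob = math.exp(sum(token_logprobs) / len(token_logprobs))

print(f"confidence ~ {avg_prob:.0%}")  # -> 'confidence ~ 45%'
```

Note the catch: a score like this only measures how sure the model is of its own wording, not whether the content is true – my father’s fictional biography was presumably generated with great ‘confidence’.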
We’re moving ahead fast – but let’s not move blindly. That will hurt, badly…