There is so much to say about AI. And so much being said about AI. Still, no one seems to (completely) understand it. Not even the (AI) experts, who admit to scratching their heads, sometimes publicly. The ‘honest’ or ‘good’ experts, that is. The not-so-good ones don’t admit anything. Of course. But here’s the point: How can we put to (good or bad) use, take advantage of, something we don’t understand?
Come to think of it, and I’m sure you have had the same reflection more than once recently: How can we have such a ball, a frenzy almost, and not understand what it’s all about? Look at the world (and this is nothing to be proud of): Billions of dollars spent on development – monthly. Thousands, more like tens of thousands, of brilliant brains at work. What do they do? Ask them and you get a shipload of tribe language, unintelligible to most of us, and maybe to them too. But it gets them off the hook – for now.
What we see – from the outside – is unclear goals, minimal revenue, enormous datacenters, dubious results and formidable amounts of precious energy spent – every second – in a world that is severely short on that particular resource. The energy wasted on cryptomining suddenly looks like a small fish in a big pond.
Seriously, this doesn’t add up. While the world is drowning or burning in climate disasters, we’re spending all these critical resources on something we don’t really understand? Of course, I do hear the counterpoints. ‘Look at all the achievements.’ Indeed, look at all the achievements. Where they are and how they come about: Typically pointed systems with clear, attainable goals, specific purposes with reasonable budgets and business plans. They protect borders, datacenters, infrastructure, banks and companies (security), they find needles in (data) haystacks every minute and save lives in medical and many other settings (smart search & analysis). They contribute to smarter designs, new products, improved services, better medicine and a lot of other things.
This is good, right? Of course. And let’s for this discussion call this group ‘pointed AI’. It’s a small part of the entire field, which is dominated – at least as far as publicity goes – by what’s often called ‘general’ or ‘generative’ AI – two very different branches that accidentally melted together because we (the public) and the media read too fast (or too sloppily) and mix them up all the time. By the way, is ‘General AI’ the same as ‘Artificial General Intelligence’? Don’t expect the experts to agree on the definition, so let’s stick with the general (pun intended) impression: This is the type of AI that tries to mimic the human brain – thinking, reasoning, even feeling etc. A bad (impossible for any number of reasons) idea to begin with – and definitely not what we need. What we (the world) really need is something vastly better than the human brain – in areas where the human brain comes up short. Not hard to find at all. Common sense really.
Still, with this bad human-brain-copy idea as a starting point, we now have all these dubious tools that pretend to help us, sometimes replace us. They often deliver impressive stuff that, when examined closely, is either bad or complete garbage. Or – in some cases – quite good, and seemingly useful. Like the writing tools cleaning up (‘improving’) our quickly whipped up emails or documents to make them look professional – by some more or less defined metric. But are these ‘improvements’ really useful beyond the immediate ‘save time’ illusion? Or are they creating a distorted reality that allows us even more laziness or sloppiness? Hiding incompetence or even delivering completely different messages without us realizing it, because we neither took the time to write it all out nor to proofread the result?
Instead of improving productivity and/or saving time, this seems like a fast lane to more bullshit jobs and misunderstandings – the former because we end up needing human eyes to verify that all this artificial stuff makes sense.
My favourite area (close to home) is programming, where AI tools have made big inroads over the past couple of years. Some of these tools fit into the successful group described above – pointed, with specific goals that help good software development groups become better or deliver better. Like quality control and bug hunting: not perfect, but useful. Meanwhile, ‘general AI tools’ (aka artificial programmers) fail miserably after having passed cursory inspection from half-interested managers.
The thing is, as pointed out by a renowned software guru recently, that however generative or general the AI is, it doesn’t think and it has no clue what it’s doing. That is the most important fact to always keep in mind when dealing with, planning, discussing, or using AI. Forget it and we open the door to disasters. Remember it – and really useful tools can be built.
Let me repeat this: AI systems have no clue about what they’re doing. They match up lots of accumulated and very smartly curated and organized data, use enormous computing power to process the data in real time and come up with something that seems reasonable based on the request (these days called a ‘prompt’). In fact, unless the prompt – the description of the problem to be solved, in a language understood by the AI machine – is very carefully created and honed by someone with deep understanding of both the problem at hand and the AI system, the output will be questionable at best, good-looking garbage at worst.
Think Excel and 20, maybe 30 years back. Accounting departments got their hands on a tool that helped them immensely. Could do wonders – really fast. Easy to use even: A table and some formulas, everything visible and immediate results – what could possibly go wrong? Then came macros, and disasters followed as accountants started to program these macros. They had no clue about programming and were incapable of detecting the meaninglessness of the results, or possibly too sloppy to check. Companies went belly up – or lost billions, or artificially (and unintentionally) inflated their own sales or assets etc. You probably remember, and by the way, it still happens.
AI in the field of programming is potentially even more dangerous, because software is running the world, from our pacemakers to our energy grid and energy production, from e-bikes to satellites. Do we want ‘AI bots’ to create that software? Definitely not. The thing is – and it seemingly cannot be repeated often enough: Like other AI systems, AI programmers don’t think, they don’t understand. They can do small stuff fine, they can replace low-level programmers doing simple low-level stuff, but they don’t understand the big picture, and – when set to check your code – they have no clue about how you were thinking, about intent. So code created by a smart programmer may be completely incomprehensible to the AI assistant.
Remember that however smart, the bots don’t understand your code, they analyze based on zillions of ‘learned’ samples from history. There is no creativity, and there cannot be. They can create predictable and clean APIs (sorry for the tribe language, bear with me), connect to databases, set up/create tables and joins and much more of the boring ‘scaffolding stuff’, but will the result work in a real life setting with 1000s of hits per second? Or per minute? That requires understanding intent – and the big picture. They don’t. Not the big picture and not anything else.
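To make the ‘scaffolding works, intent doesn’t’ point concrete, here is a minimal, hypothetical sketch (the tables and function names are my own invention, not from any real AI tool). Both functions return the same correct answer, and both would pass a quick review. But the first issues one extra query per row – the classic N+1 pattern – which is invisible in a demo and a bottleneck at thousands of hits per second. Spotting that requires understanding the intent and the load the code will face, not just whether the output looks right:

```python
import sqlite3

# Hypothetical demo schema: customers and their orders, in-memory.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 99.5), (11, 2, 12.0), (12, 1, 7.25);
""")

def orders_naive(conn):
    """Looks clean, gives the right answer -- but runs one extra
    query per order row (N+1). Fine for 3 rows, not for 3 million."""
    result = []
    for oid, cid, total in conn.execute(
            "SELECT id, customer_id, total FROM orders"):
        name = conn.execute(
            "SELECT name FROM customers WHERE id = ?", (cid,)
        ).fetchone()[0]
        result.append((oid, name, total))
    return result

def orders_joined(conn):
    """The same answer in a single query -- the version that
    understanding the real-world load would demand."""
    return conn.execute("""
        SELECT o.id, c.name, o.total
        FROM orders o JOIN customers c ON c.id = o.customer_id
        ORDER BY o.id
    """).fetchall()

# Identical results; only one of them survives real traffic.
assert orders_naive(conn) == orders_joined(conn)
```

The point is not that a bot can never emit the joined version – it often will – but that it cannot tell you *which* one the situation calls for, because that depends on intent and context it does not have.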
That’s my point: It’s all about understanding – and accepting the lack of such in any machine. Understanding cannot be programmed. It’s a human trait. If we pretend otherwise, we’ll lose against those who see the limitations and take advantage of the (real) opportunities.
Here’s what we – you and I – should understand about AI: It’s a tool, likely the most sophisticated tool we’ve ever seen. Just like a hammer, a car, a computer (see AI isn’t Magic, it’s Just a Computer…) – it’s worthless until combined with a human with the skills to use it.
If we approach AI with that attitude, we’re on the right track. Right now most of the world isn’t. That’s unfortunate.