AI: And Now You Are a ‘Prompt Engineer’!


If you don’t know what it is, it’s OK. It’s a recent invention – not the concept but the name. A ‘prompt engineer’ is an expert in ‘bot relations’. In ‘how to interact with chatbots’. It’s weird, isn’t it? I thought the point with chatbots was that we – anyone – could talk to them. I must have been mistaken …

If you ask a so-called expert (in AI) about this, you get the classic (and meaningless) answer: “It’s complicated.” Not true. If we are to believe what the very same experts said yesterday, it’s easy. Anyone can use ChatGPT etc. In fact everyone seems to be using ChatGPT. And most are getting interesting results – sometimes impressive, sometimes hilarious, sometimes just plain wrong – but always interesting. And the plain wrong ones usually go unnoticed because we – the users – are so impressed with the language of the answer that its correctness is implicitly assumed (see ChatGPT is Lying, Now What?).

Yet ‘prompt engineering’ is a thing. Apparently the new AI wunderkinds aren’t so smart after all. Specialists are needed to make them understand what we want. A journalist in a local business magazine recently pointed out that with the chatbots, we can no longer write questions in plain language the way we’ve done for ages when googling. We need careful phrasing plus trial and error.

Of course it may be me, but this journalist (and many others) seem confused. Or maybe we’re just using the Internet tools differently. In any case, to me the beauty of chatbots is that they take plain language – we can ‘talk’ to them and get answers in plain language, plus pictures, code and other content where appropriate. Google and Bing etc., by contrast, took words and phrases and responded with links (preceded by ads and other garbage, but that’s a different story).

Admittedly I sometimes give Google a full sentence – such as ‘how do I prepare my iMac for sale’ – but that’s my fingers talking, knowing that the extraneous words (‘my’, ‘do’, ‘I’ and ‘for’ in this case) would be ignored. Writing the full sentence is actually faster than ‘self-filtering’.
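That ‘self-filtering’ can be sketched in a few lines – a toy illustration of how a keyword-based search engine drops the extraneous words, not a description of how Google actually works, and the stopword list here is hypothetical:

```python
# Toy sketch: strip "extraneous" words from a full-sentence query,
# the way a keyword-based search engine might. Hypothetical stopword list.
STOPWORDS = {"how", "do", "i", "my", "for", "the", "a", "an", "to"}

def self_filter(query: str) -> list[str]:
    """Keep only the words a keyword search would actually use."""
    return [w for w in query.split() if w.lower() not in STOPWORDS]

print(self_filter("how do I prepare my iMac for sale"))
# → ['prepare', 'iMac', 'sale']
```

The point being: typing the whole sentence and letting the engine do the filtering beats doing it in your head.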

To me, chatbots are in a totally different league. We talk, they talk back. But apparently not. According to the promoters of this ‘new’ profession, we need experts, aka prompt engineers, to ensure correct – or ‘good enough’ – results. Understandable for specialized bots trained on specialized, ‘narrow’ datasets for fields like medicine, security, programming, research etc. Specialized tools for special purposes.

But the general chatbots, those aspiring to be our new googles and bings, maybe even general business tools with some specialization for our area of business, should be capable of understanding you and me without help, right?

Actually, that’s not an unreasonable expectation. What is unreasonable is to require prompt engineers to help us use the tools. Which is my point: By (over)focusing on (the need for) prompt engineering, we’re creating an artificial distance between the users and the new tools.

Here’s the thing: We’re all natural prompt engineers. Some are better than others, and some have it as part of their profession. Example: What do a great investigator, journalist, lawyer, manager, salesperson, pollster etc. have in common? They are good at formulating questions that elicit the desired/optimal/… answer. Does that make them (‘us’ if you like) prompt engineers? Of course.

We’re born prompt engineers, so don’t for a second believe that this is something new. But even more importantly: Introducing the idea that our new generation of general AI tools requires special skills removes the attractiveness of the tools and actually sells their capabilities short. They’re better than that. As a matter of fact, in this context they’re almost human: They respond differently to the same question put in different ways.

Is that good? That’s a different discussion altogether, but making the chatbots (appear) human was the goal, right? If not, the sales pitch really needs to change …

Come to think of it, if these allegedly incredibly smart Large Language Models cannot deal with plain language, someone needs to do some serious rethinking – or redesign.
