Brussels celebrates. AI is finally reined in. The EU is ready for the future – and setting a model for the world. Except they forgot to define AI … and a whole lot of other things. In short, they missed the target.
Yet another bureaucratic animal has been created, with its own office, staff, laws and regulations. It will be either completely useless or dangerous; we can only hope for the former. More (useless) bullshit jobs – that’s how monstrous bureaucracies work and evolve (check out the posts Why GDPR is a Bigger Problem than Privacy and EU Won the Battle, Apple Won the War). Those, at least, hurt nothing except the budget.
Unfortunately, ‘dangerous’ is the more likely consequence – and obviously the worse one. Since the lawmakers forgot the definitions, the new regulations can be applied to just about anything involving technology, new or old, that some politician or bureaucrat deems threatening. Yes, you read that correctly: threatening – and it gets worse.
Writer/editor Vaclav Vincalek (Recurring Patterns) summarized it all with great accuracy (and significant irony) on Medium.com recently (The EU AI Act. An Orgy in Bureaucracy, strongly recommended), where he observes:
As an example, the Act lists as unacceptable [the] use of ‘Cognitive behavioural manipulation of people or specific vulnerable groups’. Does it mean that in the future any TV commercial promoting beer will be banned? Am I being manipulated or am I an alcoholic of my own accord? And I thought that we already had plenty of regulations in place for this??
If you think this sounds like (dangerous) confusion, it gets worse:
While the Act is using the generic term AI, it provided real examples of the what and why the AI has to be regulated for. The Act mentions the famous ChatGPT, which ‘will not be classified as high-risk, but will have to comply with transparency requirements and EU copyright law’, unlike the ‘..advanced AI model GPT-4, would have to undergo thorough evaluations and any serious incidents would have to be reported to the European Commission…’. Why? What is a ‘serious incident’ or criteria for ‘thorough evaluation’? Who knows! The Act calls for AI transparency not for the Act transparency.
If this seems vaguely familiar, it should be. Like GDPR. Like regulating phone chargers (!) (Universal chargers: The Good, the Bad, the Ugly). Like ‘opening up’ Apple’s App Store (EU Won the Battle, Apple Won the War) – and many more.
The root cause is always the same – which is Vincalek’s ‘recurring pattern’: “It is always easy to ban things which you don’t understand.”