Consider this scenario: a huge machine the size of several football fields, producing products (or services) vital to the world. In the process it guzzles more energy than a steel plant and ‘eats’ data by the shipload (think supertankers), delivered via pipes as thick as the cables carrying the Golden Gate Bridge. And not least, it has a billion knobs and dials to steer and operate the monster. This is the machinery behind the ‘largest digital brains’ in the world.
They are not experimental. On the contrary, they’re very real and we’re becoming increasingly dependent on their operation and the results they deliver every day. What do they do, except create big headlines every time a journalist discovers something he or she finds incredible or unbelievable, such as writing prose, faking sentience or detecting cancer in X-ray pictures? They make the impossible possible. Such as testing new vaccines on a virtual world population before real trials. Or running manned missions to the moon and back while the spacecraft is still under construction. Or, on a much smaller scale (all car manufacturers are doing this), testing new cars, engines, designs and safety measures long before real-life crash tests. Or doing science calculations and experiments on a previously impossible level, changing our perception of the world and the laws of physics in the process.
All extremely important: accelerating development, saving lives, reducing carbon footprints and more, according to the experts and the machine-owners. And this is just the beginning.
Is it true? Can these ‘brains’ (correction: large machine learning systems) really do that? Indeed they can, and much more; they’re doing it every day. ‘Much more’ because the experts are still learning to ‘tame the monsters’, to understand their potential and to tune them. The scale of some of the models these machines ‘run’ is truly beyond comprehension, with, say, 200 billion elements and maybe a billion tuneable parameters. The largest models are actually two orders of magnitude larger than that, but the numbers get so big they’re incomprehensible to most of us.
Obviously, tuning such models is a science of its own, requiring extremely sophisticated tools and expertise. Think about it: how do you find the optimal combination of settings when you have a billion knobs and dials? Certainly not manually, not even if you reduce the number to 1,000.
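The short answer, in very broad strokes, is that nobody turns the knobs by hand at all: an optimisation algorithm such as gradient descent nudges every knob a tiny bit at a time, in whichever direction reduces the model’s error. The Python sketch below is purely illustrative – a toy with two knobs (called w and b here, names of my choosing) instead of a billion, and nothing to do with any vendor’s actual code – but the principle is what scales.

```python
# A toy illustration of gradient descent: "turning all the knobs at once".
# Two knobs (w and b) instead of a billion; the idea is the same.
import random

# Toy data generated from y = 3x + 2, plus a little noise.
data = [(x, 3.0 * x + 2.0 + random.uniform(-0.1, 0.1)) for x in range(20)]

w, b = 0.0, 0.0        # the two "knobs" (weight and bias), initially untuned
learning_rate = 0.005  # how far each knob is nudged per step

for step in range(5000):
    # Gradients: how much the average error changes if each knob is nudged.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Nudge every knob slightly in the direction that reduces the error.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}  (the 'true' settings were 3 and 2)")
```

In a real system the same kind of loop runs over billions of parameters and shiploads of data, which is exactly where the football-field-sized hardware comes in.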
The complexity, however, is not my point. Resources are. Is there anyone, governments or companies, with sufficient resources to develop and run these monsters? Obviously the answer is yes, since we already have them. And no, it’s not the government. It’s Google, Facebook, Microsoft, Amazon … and you will probably recognize some of the names from the news: GPT-3, DALL-E, GLaM, Galactica, Lex. These are not the ‘brains’, but rather the models ‘implanted’ in and run by the brains: software from outfits such as OpenAI, backed by and/or developed by these large companies and run on their already huge infrastructures.
How big are they? Interestingly, there is no easy answer to that, because we don’t have a metric. Traditional ones, like the number of processors, GPUs, amount of RAM, operations per second, bandwidth, neural nodes, neural interconnections and so on, don’t work because they vary all the time. So the winner, if there is one, is not found in the numbers but in the results.
This is the point where the scenario becomes really interesting to the rest of us: leaving the technical domain and entering the marketing, or ‘market’, domain. Make no mistake, the race for the best and most powerful digital brain may look like a lot of research & development and a lot of expense, and it is just that. But at the end of the day it’s business. Big business, if not necessarily yet. The companies are hoarding data, creating APIs and tools, tuning their models and rolling out new services by the week: a seemingly very healthy competitive situation which has already benefited the world immensely and will continue to do so.

But there is a snag. For every new customer signing up with one of the big ones to use its services, the chosen vendor gets to hoard even more data, which in turn will be used to improve its models even further. That’s the way machine learning works. Beneficial for the customer and the vendor, but not necessarily for the market. If resources of this magnitude (funding, processing power, data, models, …) are required to enter the market, it’s effectively locked up already. Which cannot possibly be good. All the smaller players are either going to die or get sucked in by the big ones. As a friend ‘close to the matter’ pointed out in a conversation about the issue the other day:
“So bottom line, a ton of ML companies will die trying, and in the end the only guys running these massive models will be Google/FB/MSFT, by training them on real data, not needing inference.”
Most of us don’t know the difference between ‘inference’ and ‘training’ in ML, but we still get his point: for all practical purposes we have an oligopoly, a few big ones owning the market. Arguably, we need the results they can deliver right now, and there is no lack of competition between them. But in the longer run, all experience tells us this is not good. Do we need government regulation? We do, but chances are the technology and the market change faster than politicians can act, so they end up regulating history instead of managing the future. As usual.
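For readers who do want the distinction my friend is drawing: training is the expensive phase, where a model’s parameters are adjusted against huge amounts of data; inference is the comparatively cheap phase, where the finished model is simply asked for answers. A minimal, purely illustrative sketch, using scikit-learn’s tiny LogisticRegression as a stand-in for a ‘digital brain’ (which it most certainly is not):

```python
# Illustrative only: the training/inference split, on a deliberately tiny model.
from sklearn.linear_model import LogisticRegression

# Training: the expensive part. The model sees labelled data and its internal
# parameters are adjusted to fit it. For the giants, this is where the hoarded
# data and the football-field-sized hardware come in.
X_train = [[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]]
y_train = [0, 0, 0, 1, 1, 1]                # two classes: "small" vs "large"
model = LogisticRegression().fit(X_train, y_train)

# Inference: the cheap part. The trained model is simply asked for answers;
# no parameters change, and far less hardware is needed.
print(model.predict([[2.5], [11.5]]))       # expected output: [0 1]
```

Either way his point stands: whoever can afford the training ends up owning the models the rest of us depend on.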
At some point, however, an outsider is going to outsmart the giants, and we will have an entirely new situation, including a new definition of ‘digital brainpower’. Evolution is taking another turn. Of course it will take time, but remember that these days, three years is a very long time.
Exciting times indeed. And rest assured, however big and capable and impressive and important these digital brains may be, there is no intelligence in there. None. Our monopoly on intelligence is (still) safe.