PC or console? Yes, I’m talking about gaming preferences…and if you answered PC, then we all owe you a big thank you. Today’s episode is all about the geopolitics of gaming (specifically, the advancements it’s driven in computing capabilities).
If the terms ping or lag mean anything to you, then you have likely experienced the frustration that has plagued gamers for ages. That very frustration is what helped advance processing power and high-performance chips (GPUs, aka graphics processing units) at a time when most other people’s computing needs were already satiated. Since gamers needed top-tier graphics and a very responsive system, GPUs were developed to handle many processes simultaneously. And guess what those chips are also pretty damn good at? Running AI models.
Without those gamers pushing the boundaries and driving technological progress in this sphere, we would be at a loss for how to handle the AI buildout, which requires processing massive amounts of data simultaneously. So, a tip of the hat and a raise of the glass to all the nerds out there.
Here at Zeihan on Geopolitics, our chosen charity partner is MedShare. They provide emergency medical services to communities in need, with a very heavy emphasis on locations facing acute crises. MedShare operates right in the thick of it, so we can be sure that every cent of our donation not only goes directly to where help is needed most but also serves as a force multiplier for a system already in place.
For those who would like to donate directly to MedShare or to learn more about their efforts, you can click this link.
Transcript
Hey, everybody. Peter Zeihan here coming to you from snowy Colorado, where we just got our first nine inches, and there’s another 13 inches on the way. Boy howdy. Today, we are going to take an entry from the Patreon page’s Ask Peter Forum. The question is about the geopolitics of video games, which I know, I know, I know some of you are like, “What?” But this has actually become one of the most important economic sectors in the world over the last five years.
I’m not sure whether or not it’s going to continue, but let me kind of lay it out for you. For the period of roughly 2010 to 2021—roughly that window—we had everything we needed for computing power. I mean, yeah, yeah, yeah, you’d upgrade your laptop every 2 or 3 years to get the newest chip.
But we had digitized most things that could be digitized. We’d moved into logistics and communication and information, and all the low-hanging fruit had already been computerized. The question was, “Why do you need ever faster processors and ever more memory if you really don’t have a need for them?” And yeah, yeah, we got Starlink coming up and running, so satellite communications can be a thing. We wanted to build a smart grid. You know, these are all reasonable things, but you only need so good of a chip for that.
As chips got better and better and better and better and better, the number of people who were willing to pay for them got lower and lower and lower and lower and lower. Then the gamers came in, because they were a source of solid demand. They always wanted the fastest possible chips with the best graphics processing capacity so they could join larger and larger multiplayer games and never have drag or lag. It got to the point that they basically kicked off people who didn’t have good enough hardware because they would slow down the experience for everybody.
The chip that is at the heart of that, where you had the largest drag and so the highest demand among the gamers for improvement, is something called a GPU—a graphics processing unit.
And they are definitely the most advanced chips in the world today. But a bunch of gamers sitting at home are not exactly what you would call the bellwether of global economic patterns, even in technology. So there was only so much money that could go behind this sort of effort. And then we developed this little thing called large language models and artificial intelligence.
It turns out that the function of the GPU, which is designed to run multiple processes at the same time so that graphics don’t lag, is exactly what you need to run an efficient large language model. And if you put 10,000 or 20,000 of these things running at the same time in the same place, all of a sudden, AI applications become a very real thing.
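To make that concrete, here is a minimal sketch in Python, with NumPy standing in for the GPU and the matrix sizes chosen arbitrarily for illustration: the core operation inside a large language model layer is a big matrix multiply, and that multiply splits cleanly into many independent pieces that parallel hardware can chew through at once.

```python
# Minimal illustration (not from the video): a language model layer boils down
# to a matrix multiply, which decomposes into many independent dot products.
# NumPy stands in for the GPU here; the real gain comes from spreading this
# same work across thousands of GPU cores at the same time.
import numpy as np

rng = np.random.default_rng(0)
activations = rng.standard_normal((64, 1024))   # a small batch of token vectors
weights = rng.standard_normal((1024, 1024))     # one layer's weight matrix

# Serial view: compute one output row at a time, like a single slow worker.
serial = np.stack([row @ weights for row in activations])

# Parallel view: the same arithmetic expressed as one matrix multiply,
# which parallel hardware spreads across its cores simultaneously.
parallel = activations @ weights

assert np.allclose(serial, parallel)  # identical results, very different speed
```

Scale that little batch up to every token in every request hitting a model at once, and a chip built to run thousands of these small multiplications in parallel is exactly the right tool for the job.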
We would not have AI applications if not for those people who sit at home in the basement and play role-playing games all day. So thanks to the geeks and the nerds and the dorks because it wouldn’t have happened without you. The question is: what happens now? You see, GPUs, because they were designed by dorks for dorks, have some very dorky restrictions.
Normally, you only have one GPU in a gaming rig, and you have several fans blowing on it because when it runs in parallel, it’s going to generate a lot more heat and use a lot more energy than any other chip within your rig. Well, you put 10,000 of those in the same room, and everything will catch on fire.
So the primary source of electricity demand for data centers isn’t so much running the chips themselves. It’s running the cooling systems to keep these banks of GPUs from burning the whole place down.
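As a rough sketch of what that looks like in practice, assuming a cluster of the size mentioned above and a per-GPU wattage in the ballpark of today’s high-end data-center accelerators (the wattage is an illustrative assumption, not a figure from the video):

```python
# Back-of-envelope arithmetic for the heat problem described above.
# The per-GPU wattage is an assumption for illustration only.
gpus_in_cluster = 10_000      # cluster size mentioned in the transcript
watts_per_gpu = 700           # assumed draw for a high-end data-center GPU

heat_to_remove_mw = gpus_in_cluster * watts_per_gpu / 1_000_000
print(f"Roughly {heat_to_remove_mw:.0f} MW of heat to remove, around the clock")

# Nearly every watt a chip draws comes back out as heat, so the cooling plant
# has to shed megawatts continuously, which is why cooling is such a large
# share of a data center's electricity bill.
```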
Now for artificial intelligence, it’s not that the GPUs are perfect—they’re just the best hardware we have. There are a number of companies, including Nvidia, of course, that are now generating designs for an AI-specific sort of chip.
Instead of a GPU, which is like the size of a postage stamp, you would have something with multiple nodes on the chip. So basically, it’s the size of a dinner plate or even bigger so that you can run billions, trillions—lots of processes simultaneously.
Because the chip is going to be bigger and designed specifically for AI, cooling technologies will be built in. It won’t be as much of a power suck per computation—or at least that’s the theory. The problem is the timing. Assuming for the moment that the first designs are perfect (they never are), we don’t get our first prototype until the end of calendar year 2025. It will then be 18 to 24 months before the first fab facility can be retrofitted to run and build these new chips, and we get our first batch.
Now we’re talking about the end of 2027. And if all of that goes off without a hitch (it won’t), we’re not talking about having enough to outfit sufficient server farms to feel the difference until probably 2029 or 2030.
So the gamers have taken it this far. The question is whether the rest of us can take it the rest of the way in an industry with a supply chain that, let’s just say, has some complications.
So gamers, salute to you. We wouldn’t be in this pickle without you, but we also wouldn’t be able to imagine the future without you.