Consoles: what they are, were, and could be
Update: It’s been brought to my attention that I left out discussion of the contentious PS5 SSD technology. I should have mentioned it. Believe it or not, my intention here wasn’t to start a flame war.
The demo looks cool, but I don’t have much of an opinion on it except to say that consoles can generally offer a more optimized platform for developers than general PC hardware. And since that platform is standardized, developers can micro-optimize over the lifetime of the console. So maybe the PS5 SSD tech is as revolutionary as people say, and I’m missing that point here. We’ll see!
My first reaction to the PS5 announcement was “Demon’s Souls! Eeeeeee.”
But later, looking at the specs for the two new consoles, I came back around to the position I’ve held for the last few years: “who wants to play games on a locked-down PC?”
See, the thing is, these new consoles’ guts are simply midrange PC guts. They’re built from AMD hardware, “customized” for the customers (i.e. Microsoft and Sony), but not significantly different from each other, or from what you could put in a PC yourself. And since AMD sells these complete, basically-the-same systems to Sony and Microsoft, you also end up with AMD graphics. Whatever you think of Zen, chances are you wouldn’t buy one of AMD’s GPUs when building your own PC. Most people would buy nVidia.
Since it’s just a PC, in the end, all you’re paying for is access to whatever exclusive games will be available for that console. There is no other reason to buy these anymore. And in exchange for those games, there are downsides:
- Fewer games.
- A worse gaming experience, designed for controllers instead of keyboard and mouse.
- A locked-down ecosystem that makes it tougher for indie games.
- A subscription (Xbox Live or PS+) just to play online ($60 a year or so, but still).
- A $500 device that can’t be used for much beyond games (and maybe Netflix).
- More e-waste, since the only reason for these machines to exist at all is to play exclusive game titles.
So just… don’t… buy them.
Build a gaming PC! You’ll get more out of it: it’s upgradeable, it’s not locked down, and you’ll be able to play games that aren’t possible on a console. And forget about those exclusives; they’re not worth it anymore given these downsides.
Now that I’ve said nobody should buy these consoles (and clearly you will all listen to me, so that’s settled), I want to take a moment to say what would change my mind.
A return to the yesteryear of consoles
Back in the day, a new console represented novel technology. Prior to Xbox 360 and PS3, that’s what it was like. For example, the PS2 had its Graphics Synthesizer and the Sega Saturn had its obtuse dual-processor design (in 1994!). The N64 was built in collaboration with SGI!
But the PS3/X360 generation turned that on its head, and it’s why consoles are PC-lite today.
My recollection of the PS3 story (I can’t find a reference for this right now, so take it as lore): Sony originally wanted the PS3 to be built around two Cell processors, each with a few dozen SPUs instead of 8. They believed the PPEs would be sufficient for CPU-type work (simulation, memory management) and the SPUs would do the vector work (rendering), without a dedicated GPU. However, they couldn’t make this work. Either they couldn’t get enough cores onto the Cell, or it wasn’t fast enough, or the market had changed (Cg/HLSL had started to become a thing). So instead they went to nVidia for a graphics chip and cut back to just one Cell processor.
The Xbox 360 also used a PowerPC chip, but it was a more traditional design with 3 cores and 6 hardware threads. Although it wasn’t x86, it was easier to program. The Cell/PS3 was hard to program. The Gran Turismo folks called it “a nightmare.” Gabe Newell called it “a waste of everybody’s time.” The PS3 engineers I knew were constantly trying to come up with new ideas for how to get the most out of it. And later, when the PS4 was being designed, some of those engineers went to Sony and strongly voiced the opinion that they’d like more standard x86-type hardware, which is what Sony delivered.
Yet the Cell was actually a model for what we see today in other parts of computing: systems designed around heterogeneous workloads. We’re up against the limits of Moore’s law and can’t get much more performance out of x86 or ARM alone, so we have to turn to specialized cores such as TPUs or the Neural Engine. And those cores can knock their specialized work out of the park. Software like TensorFlow and Core ML makes it much easier to use them, and that’s one area where Sony failed badly with the Cell.
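For a sense of what that tooling buys you, here’s a minimal sketch (assuming TensorFlow 2.x; the shapes are arbitrary and just for illustration): the framework, not the programmer, decides which core the work lands on.

```python
# Minimal sketch, assuming TensorFlow 2.x is installed; shapes are arbitrary.
import tensorflow as tf

# The runtime enumerates whatever compute devices it found (CPU, GPU, TPU).
print(tf.config.list_physical_devices())

a = tf.random.uniform((1024, 1024))
b = tf.random.uniform((1024, 1024))

# No device-specific code: the matmul is dispatched to an accelerator if one
# is available, and falls back to the CPU otherwise.
c = tf.matmul(a, b)
print(c.device)  # e.g. ".../device:GPU:0" when a GPU is present
```

That’s the whole program. Getting comparable work onto the Cell’s SPUs meant hand-feeding each one’s small local store over DMA, which is a big part of why developers called it a nightmare.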
Many have been saying lately that the Cell was ahead of its time. And maybe it was. After all, Roadrunner, the (previous) #1 supercomputer in the world and the first petaflop supercomputer, ran on Cell.
At the time, I disliked the Cell and agreed with the sentiment that it was a waste of time. But today it seems philosophically aligned with what we need in the future. Intel released a new chip that reminds me of it: Lakefield. Maybe someday consoles will return to novel technology like this, making them meaningful purchases over a generic PC, but this time with better tools, so it’s easier on game developers than earlier attempts like the Cell were.