Great, here’s the entitled journalist telling me that the $2,000 graphics card won CES 2025. I’ve seen plenty of strong opinions about Nvidia’s CES announcements online, but even ignoring the bloated price of the new RTX 5090, Nvidia won this year’s show. And it kind of won by default. Between Intel’s barebones announcements and an overstuffed AMD presentation that ignored what might be AMD’s most important GPU launch ever, it’s not surprising that Team Green came out ahead.
But that’s despite the insane price of the RTX 5090, not because of it.
Nvidia introduced a new range of graphics cards and the impressive multi-frame generation of DLSS 4, but its announcements this year were much more significant than that. It all comes down to the ways Nvidia is leveraging AI to make PC games better, and the fruits of that labor may not pay off immediately.
There are developer-facing tools like Neural Materials and Neural Texture Compression, both of which Nvidia briefly touched on during its CES 2025 keynote. For me, however, the standout is neural shaders. They certainly aren’t as exciting as a new graphics card, at least on the surface, but neural shaders have huge implications for the future of PC games. Even without the RTX 5090, that announcement alone is significant enough for Nvidia to steal this year’s show.

Neural shaders aren’t just a buzzword, though I’d forgive you for thinking so given the force-feeding of AI we’ve all experienced over the past couple of years. First, let’s start with the shader. If you aren’t familiar, shaders are essentially the programs that run on your GPU. Decades ago, shaders were fixed-function; they could only do one thing. In the early 2000s, Nvidia introduced programmable shaders with far greater capabilities. Now, we’re starting on neural shaders.
In short, neural shaders allow developers to add small neural networks to shader code. Then, when you’re playing a game, those neural networks are deployed on the Tensor cores of your graphics card. That unlocks a boatload of computing horsepower that, up to this point, had fairly minimal applications in PC games; the Tensor cores were really only fired up for DLSS.
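To make that concrete, here is a minimal, purely illustrative sketch in Python of what “a small neural network inside a shader” means structurally: ordinary shading math plus a tiny MLP evaluated per pixel. This is not Nvidia’s actual tooling or API, and real neural shaders run this math on the GPU’s Tensor cores rather than on the CPU; every name and weight below is made up for illustration.

```python
# Conceptual sketch only: a "neural shader" here means ordinary shading math
# plus a tiny neural network evaluated per pixel. Real neural shaders run this
# on the GPU's Tensor cores inside the shader itself; NumPy on the CPU is used
# purely to show the structure. All names and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer MLP: 8 shading features in, 16 hidden units, 3 RGB values out.
W1, b1 = rng.standard_normal((16, 8)) * 0.1, np.zeros(16)
W2, b2 = rng.standard_normal((3, 16)) * 0.1, np.zeros(3)

def tiny_mlp(features):
    """The 'neural' part of the shader: a couple of small matrix multiplies."""
    hidden = np.maximum(W1 @ features + b1, 0.0)   # ReLU hidden layer
    return W2 @ hidden + b2                        # learned RGB correction

def neural_shader(uv, normal, view_dir, light_dir):
    """Classic diffuse shading plus a learned correction from the MLP."""
    base = max(np.dot(normal, light_dir), 0.0) * np.array([0.8, 0.6, 0.5])
    features = np.concatenate([uv, normal, view_dir[:2], [base.mean()]])
    return np.clip(base + tiny_mlp(features), 0.0, 1.0)

# "Render" a single pixel.
light = np.array([0.3, 0.3, 0.9])
color = neural_shader(
    uv=np.array([0.25, 0.75]),
    normal=np.array([0.0, 0.0, 1.0]),
    view_dir=np.array([0.0, 0.0, -1.0]),
    light_dir=light / np.linalg.norm(light),
)
print(color)
```

The point isn’t the specific math; it’s that the network is tiny and lives inside the per-pixel work the GPU is already doing, which is why Tensor cores are such a natural fit for it.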
Nvidia has announced three uses for neural shaders so far: the aforementioned Neural Materials and Neural Texture Compression, and Neural Radiance Cache. I’ll start with the last one because it’s the most interesting. Neural Radiance Cache essentially allows AI to infer what an infinite number of light bounces in a scene would look like. Right now, real-time path tracing can only handle so many light bounces; after a certain point, it becomes too demanding. Neural Radiance Cache not only unlocks more realistic lighting with far more bounces but also improves performance, according to Nvidia. That’s because it only needs to trace one or two light bounces, and the rest are inferred from the neural network.
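Here’s a rough sketch of that idea as Nvidia describes it: trace a couple of bounces the normal way, then let a small network stand in for everything after that. The scene, feature layout, and untrained “cache” below are stand-ins of my own, not Nvidia’s implementation.

```python
# Conceptual sketch of the idea behind a neural radiance cache: trace only one
# or two bounces explicitly, then ask a small network to predict the radiance
# the remaining bounces would have contributed. The scene, features, and
# untrained "cache" network here are stand-ins, not Nvidia's implementation.
import numpy as np

rng = np.random.default_rng(1)
W_CACHE = rng.standard_normal((3, 6)) * 0.05   # fixed weights for the sketch

def reflect(direction, normal):
    return direction - 2.0 * np.dot(direction, normal) * normal

def trace(ray_origin, ray_dir):
    """Placeholder intersection: returns (hit_position, surface_normal, albedo)."""
    hit_pos = ray_origin + ray_dir                 # pretend every ray hits at t = 1
    return hit_pos, np.array([0.0, 1.0, 0.0]), np.array([0.7, 0.7, 0.7])

def radiance_cache(position, direction):
    """Stand-in for the trained cache: (position, direction) -> predicted radiance."""
    return np.maximum(W_CACHE @ np.concatenate([position, direction]), 0.0)

def shade(ray_origin, ray_dir, traced_bounces=2):
    """Short explicit path, then a single cache lookup instead of many more bounces."""
    throughput, radiance = np.ones(3), np.zeros(3)
    for _ in range(traced_bounces):
        hit_pos, normal, albedo = trace(ray_origin, ray_dir)
        radiance += throughput * 0.1               # placeholder direct lighting
        throughput *= albedo
        ray_origin, ray_dir = hit_pos, reflect(ray_dir, normal)
    # Terminate the path early: the cache stands in for all remaining bounces.
    return radiance + throughput * radiance_cache(ray_origin, ray_dir)

print(shade(np.zeros(3), np.array([0.0, -1.0, 0.0])))
```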
Similarly, Neural Materials compresses the dense shader code that would normally be reserved for offline rendering, allowing what Nvidia calls “film-quality” assets to be rendered in real time. Neural Texture Compression applies AI to texture compression, which Nvidia says saves 7x the memory of traditional block-based compression without any loss in quality.
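For a sense of scale, here’s the back-of-the-envelope math on that 7x figure, using an illustrative 4K material with five texture layers and standard 16-bytes-per-block BC7 compression as the baseline. The texture size and layer count are my numbers, not Nvidia’s; only the 7x ratio comes from Nvidia’s claim.

```python
# Back-of-the-envelope math for the claimed 7x saving. Block compression (BC7)
# stores a fixed 16 bytes per 4x4 block, i.e. 1 byte per texel; the texture
# size and layer count below are my own illustrative numbers, not Nvidia's.
TEXELS = 4096 * 4096                  # one 4K texture
BYTES_PER_TEXEL_BC7 = 16 / (4 * 4)    # 16-byte blocks, each covering 4x4 texels
LAYERS = 5                            # e.g. albedo, normal, roughness, metallic, AO

block_compressed_mb = TEXELS * BYTES_PER_TEXEL_BC7 * LAYERS / 2**20
neural_mb = block_compressed_mb / 7   # applying Nvidia's quoted 7x figure

print(f"block-compressed material: {block_compressed_mb:.0f} MB")  # 80 MB
print(f"neural compression at 7x:  {neural_mb:.1f} MB")            # ~11.4 MB
```

Multiply that across the hundreds of materials in a modern game and you can see why VRAM-strapped GPUs would care.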

That’s just three applications of neural networks being deployed in PC games, and there are already big implications for how well games can run and how good they can look. It’s important to remember that this is the starting line, too. AMD, Intel, and Nvidia all have AI hardware on their GPUs now, and I suspect there will be a lot of development around what kinds of neural networks can go into a shader in the future.
Maybe there are cloth or physics simulations, normally run on the CPU, that could instead be run through a neural network on Tensor cores. Or maybe you could expand the complexity of meshes by inferring triangles the GPU doesn’t have to account for. There are the visible applications of AI, such as in non-playable characters, but neural shaders open up a world of invisible AI that makes rendering more efficient, and therefore more powerful.
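To be clear, that’s my speculation, not anything Nvidia has announced. But as a sketch of the shape such a thing might take, a learned simulation step is just a pile of dense matrix math, which is exactly the workload Tensor cores are built for.

```python
# Purely speculative sketch of the kind of thing I mean: a learned step
# function standing in for a cloth or physics solver. Nothing here is an
# announced Nvidia feature; the "network" is untrained and illustrative.
import numpy as np

rng = np.random.default_rng(2)
N_PARTICLES = 64
STATE_DIM = N_PARTICLES * 6            # position + velocity per particle

# One linear layer mapping the current state to the next state.
W_step = np.eye(STATE_DIM) + rng.standard_normal((STATE_DIM, STATE_DIM)) * 1e-3

def neural_physics_step(state):
    """One simulation step as dense matrix math, the kind of workload that
    maps naturally onto Tensor cores instead of the CPU."""
    return W_step @ state

state = rng.standard_normal(STATE_DIM)
for _ in range(3):                     # advance the "cloth" a few frames
    state = neural_physics_step(state)
print(state[:6])                       # first particle's position and velocity
```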
It’s easy to get lost in the sauce at CES. If you believed every executive keynote, you’d walk away with literally thousands of “ground-breaking” innovations that barely manage to move a patch of dirt. Neural shaders don’t fit into that category. There are already three very practical applications of neural shaders that Nvidia is introducing, and people much smarter than me will likely dream up hundreds more.
I should be clear, though: that won’t happen overnight. We’re only scratching the surface of what neural shaders could be capable of, and even then, it will likely be several years and graphics card generations down the road before their impact is felt. But looking at the landscape of announcements from AMD, Nvidia, and Intel, only one company introduced something truly worthy of that “ground-breaking” title, and that’s Nvidia.