
NVIDIA DLSS 5: “Neural Slop” Instead of Photorealism — Why Has the Technology Faced Criticism?

NVIDIA has announced DLSS 5, promising a revolution in graphics through neural networks and photorealism. However, the gaming community has met this with a wave of criticism, labeling the technology “neural slop.” In this article, we will examine whether DLSS 5 is a true breakthrough or merely a striking yet impractical demonstration. We will analyze the presented demos, the opinions of developers and gamers, and the technical aspects to assess the real value of NVIDIA’s marketing claims.

In a recent presentation, NVIDIA once again underscored its leadership in applying neural networks to 3D rendering. DLSS 5, unlike previous versions that primarily boosted frame rates, now uses AI to generate “photorealistic” lighting and surfaces. The company positions this as a true “breakthrough” in graphics, reflected in the presentation title: “NVIDIA DLSS 5 Delivers AI-Powered Breakthrough In Visual Fidelity For Games.” NVIDIA considers this the most significant advancement in computer graphics since the debut of ray tracing (RTX) in 2018.

Breakthrough or Just Hype?

Despite ambitious claims, the RTX technology itself, while promising, has not become a true breakthrough for the gaming industry, even more than seven years after its initial demonstration. Several factors hinder the widespread adoption of ray tracing: the high cost of high-performance RTX-capable graphics cards, insufficient implementation by competitors (especially AMD), and a limited number of AAA titles that genuinely demonstrate a significant graphical leap. Therefore, despite striking wording and impressive videos, NVIDIA’s bold statements should be approached with a fair degree of skepticism.

It’s also important to recall that in January 2025, NVIDIA unveiled an updated version of its ACE technology, initially announced in 2023. This iteration promised to enable developers to create fully autonomous in-game characters driven by generative AI models responsible for their perception, cognition, actions, and rendering. We were shown an AI ally in PUBG (PUBG Ally), playable characters in inZOI, and the world’s first AI boss in MIR5. However, more than a year after that presentation, widespread adoption of NVIDIA ACE in games has yet to materialize. Are you already playing with AI allies or battling AI bosses? Most likely, no.

While these technologies are indeed promising, their effective utilization requires specific skills from developers. The problem is exacerbated by the mistaken belief among some managers that AI can completely replace human developers.

Furthermore, amid the current hardware shortage (graphics cards and RAM) caused by the AI boom and the race for computing power in data centers, the mention of “artificial intelligence” often provokes annoyance rather than interest among many gamers.

Add to this the growing discontent over the use of AI in game development—from asset generation to dialogue and voiceovers—and you have fertile ground for skepticism toward any “neuro-innovations.” In such an atmosphere, even genuinely useful technologies risk being drowned out by a wave of justified mistrust and, at times, unfounded negativity.

So, let’s delve deeper: what is this technology, how does it work, and is it truly a breakthrough?

How the Technology Works

For those following AI advancements, NVIDIA’s latest development will not come as a surprise. As noted in a previous article about NVIDIA ACE last year:

“…AI might, in the future, be able to make the virtual world so convincing that the line between reality and simulation becomes almost indistinguishable. Or vice versa. Imagine a hypothetical DLSS X that not only improves resolution on the fly but also renders the image exactly as you want to see it. Want Doom rendered with realism? Here you go.”

In essence, DLSS 5 is designed to do precisely this.

Simply put, the neural network enhances the “realism” of an image in real time by correcting lighting and textures. The process is similar to running a photograph through an AI tool that removes defects and adjusts the lighting. In theory, the algorithm could apply any filter, for instance stylizing an image to look like anime by redrawing the original in the desired style. For now, however, NVIDIA has demonstrated only filters aimed at realism.
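If we strip away the marketing, the pipeline is conceptually simple: the game produces its usual frame, and the neural network then rewrites that frame as a post-process before it is presented. The sketch below is a minimal illustration of where such a pass sits in the frame loop; every name in it is invented for the example, and a trivial contrast curve stands in for the actual network, whose internals NVIDIA has not published.

```python
# Conceptual sketch of a DLSS 5-style per-frame "realism filter" pass.
# All names are hypothetical; a simple contrast curve stands in for the
# neural network purely to show where the pass sits in the frame loop.
import numpy as np

def render_base_frame(height: int, width: int) -> np.ndarray:
    """Stand-in for the game's rasterized output: an RGB frame in [0, 1]."""
    return np.random.default_rng(0).random((height, width, 3)).astype(np.float32)

def neural_realism_filter(frame: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """Stand-in for the AI pass: here just a contrast adjustment, where the
    real model would re-synthesize lighting, skin, hair, and fabric response."""
    return np.clip((frame - 0.5) * (1.0 + strength) + 0.5, 0.0, 1.0)

# Per-frame loop: base render first, AI "filter" second, then present.
frame = render_base_frame(1080, 1920)
final = neural_realism_filter(frame, strength=0.5)
print(final.shape, float(final.min()), float(final.max()))
```

The point of the sketch is only the ordering: the game’s renderer still does all the original work, and the “photorealism” is layered on top of an image that already exists.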

NVIDIA’s official website describes the technology’s operation as follows:

“The AI model is trained end-to-end to understand complex scene semantics like characters, hair, cloth, and translucent skin, as well as lighting conditions such as frontal, side, or ambient, all based on analyzing a single frame. DLSS 5 then leverages its deep knowledge to create visually accurate images that account for intricate elements like subsurface scattering on skin, subtle sheen on fabric, and the interaction of light with hair material, all while preserving the structure and semantics of the original scene.”

Interestingly, DOOM was not among the games chosen for the demonstration. Showcasing a realistic “gore” shooter with blood and detailed guts, even if they belong to monsters, was likely deemed unsuitable for the presentation. This is despite Bethesda, DOOM’s publisher, being among NVIDIA’s partners, alongside CAPCOM, Hotta Studio, NetEase, NCSOFT, S-GAME, Tencent, Ubisoft, and Warner Bros. Games. The demonstration videos featured examples from Resident Evil Requiem, Hogwarts Legacy, Starfield, and other titles, illustrating the activation of DLSS 5 and how the neural network transforms the image.

Gamers’ reactions were immediate and, predictably, polarized. Responses ranged from enthusiastic exclamations like: “Wow, this is really impressive!” to skeptical remarks such as: “Did you see the video? It’s the uncanny valley, not beauty!” and “It definitely looks like neural slop.” The most popular comment under Digital Foundry’s video aptly stated: “I thought it was an April Fool’s joke, but it’s still March.”

Analysis and Details

Whether this will be regarded as “neural slop” or the dawn of a new graphical era is for players to decide, but several technical aspects warrant attention.

Firstly, the demonstrations primarily featured static scenes. Characters stood still, with only slight hair movement. Dynamic actions were largely absent. The sole truly dynamic moment in EA Sports FC, where a footballer runs towards the camera to celebrate a goal, was shown with DLSS 5 disabled. The celebration itself, with the player already on the grass in a close-up, was demonstrated with the technology activated. This suggests that DLSS 5 might not perform as smoothly in highly dynamic scenarios as NVIDIA would hope.

Secondly, the situation with lighting and contrast proved to be ambiguous. In Digital Foundry’s review, activating DLSS 5 in Starfield significantly transformed the image, making the lighting noticeably higher in contrast. Does this add realism? The answer is mixed. Digital Foundry’s experts, while impressed by the neural network’s ability to “fantastically improve detail” and convert “flat” graphics into something resembling ray tracing, still acknowledged the technology’s imperfections. In processed images, effects that depend on scene geometry appear to “break.” They emphasized that the neural network “doesn’t fully understand all light characteristics” and still relies heavily on the game’s base render.

In the real world, on a bright sunny day, the contrast between lit and shadowed areas is indeed high. However, the human eye functions differently from a camera. When looking at bright areas, the pupil contracts, and we lose detail in shadowed regions; conversely, when looking into shadows, the pupil dilates, revealing details in the dark but losing them in the light.

In the presented demos, we observe the opposite effect: the original image’s contrast was artificially boosted, and details vanished precisely where the human eye would expect them to be preserved. The neural network attempts to mimic “natural” contrast but does so blindly, without understanding the player’s gaze focus. As a result, bright areas can be overexposed, while dark ones “fall into” indistinguishable blackness.

The human brain perceives such lighting as unnatural. In real life, detail loss occurs selectively, depending on where attention is directed. Here, the algorithm “cuts” details across the entire frame at once. This is what produces the “plastic” look: the image appears overly processed yet less informative.

This problem is fundamentally unsolvable without player gaze tracking, which would allow contrast and detail to adapt to the focal point, or without deep adaptation of the base render. Currently, the technology attempts to optimize the entire screen at once. Thus, the “uncanny valley” effect in graphics is a natural outcome of the conflict between algorithmic “imitation” and biological perception. The attempt to make the image too “perfect” makes it subconsciously alien.
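A toy numerical example makes the difference tangible. In the sketch below (the values and functions are purely illustrative and have nothing to do with NVIDIA’s actual tone mapping), a blind frame-wide contrast curve collapses two distinct shadow details into pure black and two distinct highlights into pure white, whereas an exposure that adapts to where the viewer is looking, the way the eye does, keeps both pairs distinguishable.

```python
# Illustrative sketch (not NVIDIA's algorithm): a global contrast boost loses
# detail exactly where an adapting eye would keep it.
import numpy as np

shadow_pixels = np.array([0.04, 0.08])      # two distinct dark details
highlight_pixels = np.array([0.90, 0.96])   # two distinct bright details

def global_contrast(x, strength=1.8):
    """Frame-wide S-curve: pushes darks down and brights up everywhere at once."""
    return np.clip((x - 0.5) * strength + 0.5, 0.0, 1.0)

def gaze_adaptive_exposure(x, gaze_luminance):
    """Eye-like behaviour: exposure rescales to whatever the viewer is looking at."""
    return np.clip(x / (2.0 * gaze_luminance), 0.0, 1.0)

print(global_contrast(shadow_pixels))                  # [0. 0.]  -> shadow detail gone
print(global_contrast(highlight_pixels))               # [1. 1.]  -> highlight detail gone
print(gaze_adaptive_exposure(shadow_pixels, 0.06))     # darks stay separable when looked at
print(gaze_adaptive_exposure(highlight_pixels, 0.93))  # brights stay separable when looked at
```

The numbers are made up, but the mechanism is the one described above: without knowing where the player is looking, any single frame-wide curve has to sacrifice detail somewhere, and the neural filter currently makes that choice for the whole screen at once.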

Thirdly, changes in character faces are noticeable. On one hand, they do become more realistic. On the other, the perception of these changes is mixed. While the transformations in Starfield seem like an improvement (partly due to Bethesda’s outdated engine and peculiar character design), the results in Resident Evil Requiem are not always successful—the neural network’s interpretation appears quite idiosyncratic.

The “uncanny valley” effect rears its head again. Creating high-quality motion capture, especially for facial animation, is extremely expensive in games, even when the engine supports it. And what about engines like Bethesda’s? Developers rarely have the budget for Hollywood-level mocap and facial animation.

In games with stylized graphics, “clunky” animation is perceived as natural—it’s an accustomed gaming convention. But when a character visually closely resembles a human, yet their movements and speech are “robotic,” a sense of discomfort arises. DLSS 5, by enhancing the photorealism of skin textures, only amplifies this dissonance: the more realistic the outer shell, the more noticeable the unnaturalness of the movements.

This problem, too, is fundamentally unsolvable with current mocap technologies. As Albert Zhiltsov, lead developer of “War of the Worlds: Siberia,” noted:

“Theatrical art, unlike cinema, exaggerates emotions—because from the eleventh row, facial expressions are poorly visible. What we call ‘overacting’ is actually a necessity; otherwise, the action simply won’t reach the audience. While this is a problem in cinema (and film directors probably ‘scold’ theater actors for it), for mocap, it’s the opposite—it’s great. Because you’re only giving your body. You have sensors. So, if you’re ‘hit,’ the movement must be a full sweep, without cinematic subtleties. Everything needs to be slightly exaggerated. The same applies to facial expressions.”

It turns out that approaches effective for stylized rendering do not work for realism. Exaggerated facial expressions, necessary for game expressiveness, destroy the illusion of life in a photorealistic image. True realism requires cinematic technologies and acting subtlety that are currently unavailable to mainstream game development. DLSS 5 merely enhances the visual aspect but does not change the essence of animation, thereby driving us deeper into the “uncanny valley.”

Furthermore, changing lighting alone does not make the environment more realistic. It’s good if designers and artists have thoroughly developed the world’s content and the neural networks have something rich to work with. But often, environments are intentionally simplified, and primary rendering resources are focused on central characters. DLSS 5 improves their appearance, but the world around them remains conventional. This only heightens the visual dissonance: a hyperrealistic hero against a “cardboard” world looks even more unnatural.

Is It All Bad?

For NVIDIA, as a technology leader, demonstrating such an innovation is an undeniable success and an opportunity to reaffirm its leading role in AI. However, for game companies, the situation is not as clear-cut.

For instance, Bethesda reacted on X to Digital Foundry’s video showcasing Starfield with DLSS 5 (notably, the video was filmed with a camera pointed at a monitor, not recorded directly from a PC). Bethesda representatives stated:

“Thank you for your interest and analysis of the new DLSS 5-based lighting. This is a very early demonstration, and our teams will continue to work on refining the lighting and final effects to achieve the visual style that we believe best suits each game. The final decision will remain with our artists, and for players, this feature will be entirely optional.”

One must agree that there is significantly less enthusiasm here than in NVIDIA’s press releases. It seems the players’ reactions did not align with the company’s expectations.

Nevertheless, the technology itself is not “bad.” It is indeed an innovative solution that, in the future, could lead to desired photorealism or allow players to apply any graphical “filters” to games.

There are at least two possible scenarios for using this technology.

For new games, rendering and lighting will need to be adjusted so that the image is prepared for subsequent neural network processing. This is additional work, and there’s no guarantee that AI will deliver high quality with minimal effort; most likely, extensive further fine-tuning will be required.

For older projects, the technology could theoretically be useful. The neural network is indeed capable of improving image quality. However, there are nuances here too. Animations, including facial ones, in old games are even more unnatural compared to modern ones, and lighting is configured differently. Perhaps the neural network should be tuned not for “realism,” but for general image enhancement while preserving the original art style.

It’s worth recalling that NVIDIA already offered NVIDIA RTX Remix for older projects. Yet, there hasn’t been a flood of retro game remixes with improved graphics. Although these are different technologies, the “magic button” principle for graphical enhancement is similar. And if the more specialized RTX Remix didn’t cause a revolution, why should we expect one from DLSS 5?

No, DLSS 5 is not a “magic button.” Like any other complex technology, DLSS 5 needs to be mastered. “Out of the box,” it will not work as many developers would hope. But in skilled hands and with a competent approach, it could indeed be a breakthrough.

Nevertheless, several factors could hinder this “breakthrough.” AI filters demand significant performance. How effectively DLSS 5 will run on budget graphics cards remains an open question.

Furthermore, the current list of projects supporting DLSS 5 is small, and whether it will expand is also unclear. Could this turn into another RTX situation: the technology exists, a handful of showcase projects use it well, but the majority of games never receive full support?

Adding to the skepticism is the fact that nearly all modern gaming consoles use AMD chips. The only current console compatible with DLSS is the Nintendo Switch 2. This means that in multiplatform projects, developers primarily target the capabilities of the dominant consoles. DLSS, being NVIDIA’s proprietary technology, is simply unavailable on PlayStation and Xbox, so its support in cross-platform games often becomes merely an “optional” extra for the PC version, added only if a studio has the resources.

Consequently, a paradox emerges: a technology that NVIDIA calls “groundbreaking” effectively remains exclusive to a relatively small audience, owners of the latest GeForce RTX graphics cards, and above all the higher-end models. Steam statistics confirm this: only 23.6% of users play on GeForce RTX 50-series cards at all, and most of those are mid-range models, with 9.12% on the RTX 5070 and 10.65% on the RTX 5060 and RTX 5060 Ti.

AMD, in turn, offers FSR—an open solution that works on any hardware. However, it currently lacks a DLSS 5 equivalent with AI-powered lighting and surface generation.

Thus, a vicious circle forms: consoles do not encourage widespread adoption of “neuro-rendering,” and the lack of broad developer support renders the technology niche. While NVIDIA showcases impressive demos on powerful PC builds, the real gaming industry is moving at a different pace.

Conclusion

The technology is undoubtedly impressive. The ability to “fill in” lighting, textures, and scene details in real-time using AI is a significant step into a future that recently seemed like science fiction. The concept itself deserves recognition: if leveraged skillfully, it could unlock new possibilities in the gaming industry.

However, for now, it’s more of a spectacular gimmick than a tool ready for widespread adoption. NVIDIA’s demonstrations, despite their visual appeal, raise more questions than they answer.

The real answers to these questions will come not from press releases but from actual game projects. In the autumn of 2026, when announced games finally receive full DLSS 5 support, we will have the opportunity to evaluate the technology not in the vacuum of idealized demo videos, but within actual gameplay.

Until then, it’s wise to maintain a healthy skepticism. Not because the technology is inherently “bad,” but because the history of the gaming industry, and NVIDIA’s technologies in particular, has many examples where a proclaimed “revolution” in practice turned out to be merely an “option for enthusiasts.”

By Artemius Grimthorne

Artemius Grimthorne is an independent journalist based in Manchester, covering the intersection of technology and society. He has spent over seven years investigating cyber threats, scientific breakthroughs, and their impact on daily life, having started as a technical consultant before transitioning to journalism, where he specializes in digital security investigations.
