
I’m completely speechless. This looks so terrible I thought it was a joke, but apparently Nvidia released these demos to impress people. DLSS 5 runs the entire game through an AI filter, making every character look like it’s been run through an ultra-realistic beauty filter.
The photo above is used as the promo image for the official blog post by the way. It completely ignores artistic intent and makes Grace’s face look “sexier” because apparently that’s what realism looks like now.
I wouldn’t be so baffled if this was some experimental setting they were testing, but they’re advertising this as the next gen DLSS. As in, this is their image of what the future of gaming should be. A massive F U to every artist in the industry. Well done, Nvidia.


Almost makes me wish I hadn’t already switched to Team Red, so that I could switch to Team Red due to how comically bullshit this is (on top of their recent vibe coded driver releases)
AI hate is so strong!
I think it looks extremely promising. It may be a bit uncanny with the faces, probably fixable, but environments are dope!
This is fantastic, this is probably the way to get completely realistic graphics. This is the way, this is finally actual progress in realism.
Yeah I agree. Looks like this will begin to finally solve the uncanny valley problem. The crying is so loud though. I almost feel sorry for them. This is unstoppable. I wish they could see that but they won’t. It’s crazy to me that these anti AI cultists think that they’re going to shame AI into going away. It’s just not going to happen and they’re going to just get more screechy and moral and blind. I hope they get the help they need.
Yeah, but you know what will happen, all of a sudden the games will become amazing and everybody will want to play. All of them 😁
The uncanny valley is more a product of designing for “life-like” over things like “cute” or “endearing” or “emotionally expressive”.
Meaning, we can already cross the uncanny valley, and this AI filter will not help with any of it.
Strong disagree. I do not want these GPUs changing the game to what they “think” the artist intended.
LLMs have a place. Art is not one of them.
Not art, realism.
I don’t know how to tell you this, man, but videogames aren’t realism. They’re art.
You have no idea what you are talking about.
If you think games are more about realism than art, I’m afraid you’re the one who has no idea what they are talking about.
Some games have realistic graphics, some have stylized graphics, some have no graphics at all.
You can call them art or fart, the fact is realistic graphics are becoming better with the help of AI. The technology makes them better. Even the technology you don’t like makes them better.
Not everyone finds realism in games as the desired end goal of videogame graphics, and historically stylized graphics age better.
That’s a funny comment. Did I say they do?
Most gamers enjoy different genres in all kinds of styles.
deleted by creator
The model only works on the rendered image and motion vectors. Other than the image, it has no information about the lighting in the scene, the weather, or anything else. So in its current form it really doesn’t have much to do with realism.
That’s like saying that no ai image can be realistic, because it was ai generated.
Sloppyfilter 9000™ X-tream
Only on Nvidia
To be fair though, this is the kind of AI enhancement that could be an actual enhancement.
Most AI solutions are a race-to-the-bottom strategy. They claim to trade away a little product quality for a massive cost reduction (when in reality it’s a massive reduction in quality for only a modest saving).
This is what I imagined the AI revolution would look like 10 years ago, having AI enhance the product on top of the same level of quality as before, not really trying to get rid of the artists and developers.
Having said all that, the faces looked kind of creepy…
I agree, but also I see the other side. This is actually a neat usage of AI, it’s not slop in my book, it’s akin to upscaling.
That being said, those limitations are what drove the original artwork. The artists used those limitations to make the styles and characters we now love.
Master Chief’s classic armor was designed as much by the polygon limitations as by what they imagined could be done.
Exactly, this is forcing lowest-common denominator instagram art “style” onto existing art. Should we do this for all the paintings in the Louvre too? It would be more realistic that way, right?
This is exactly the opposite of what I want a graphics card doing in the background. Just leave the games the way the developers made them, for fucks sake. If they suck, they suck…if they don’t, they don’t. But this just makes them all suck.
This seems like it’s intended as a texture and lighting improver, not an “AI Slopificator”
Among the other screenshots, a lot of them seem to have a marked improvement.
Aspects of that Grace image comparison definitely look bad, but this is a work in progress that we’re getting a glimpse into. I really hope that bimbofication doesn’t make it into the final product.
This reminds me a lot of the Smile EQ comparison that speaker sellers would make to impress average, 45-year-old men in Best Buy.
In the Grace picture alone, it removes the distant fog, it destroys the mood, it overrides the art style, it over-brightens the scene, it adds light sources that don’t exist, it removes the warm light spilling out of the shop window, it makes the color palette colder, it hyper-contrasts everything—there is no world in which I would call this an improvement.
Oh look, DF prostituting their audience to make a buck from NVIDIA. Britain, how far you have fallen to behave just like yanks…
Bro they taught them that shit.
Nice (/s)
It seems like it’s just making it more realistic rather than “beautifying” her. The original picture doesn’t have any flaws or blemishes or anything to indicate that the modelers wanted her to be anything but attractive. If anything, the other one gives more detail to the skin and makes it look real rather than like ultra-smooth clay.
She is a nervous, FBI desk lady who’s never seen a day of action and stutters while giving case reports. Compared to Leon, she is an ordinary person. The filter makes her look like any star on The CW.
Even if it looked good, it has zero context of the original artists’ intent. This is like having AI summarize pages of a book as you read. You’re now locked a layer away from the original artist’s work and it’s a layer controlled by corpos. No thank you.
At least 2 layers.
LLMs don’t think. They copy-paste something that’s been found repeatedly in the data they were trained on: statistical probability of words going with other words. Hell, they don’t even know what words are, much less what they mean. So it’s at least 2+ layers removed from the truth, one being the one you pointed out, and another being an amalgamation (mishmash) of the data they were trained on.
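For what it’s worth, the “statistical probability of words going with other words” picture is, taken literally, a bigram model. A toy sketch (the corpus and function names here are invented purely for illustration):

```python
from collections import defaultdict

def train_bigrams(corpus):
    # count how often each word follows each other word
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for w1, w2 in zip(words, words[1:]):
        counts[w1][w2] += 1
    return counts

def most_likely_next(counts, word):
    # pick the most frequent follower, or None if the word was never seen
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

model = train_bigrams("the cat sat on the mat the cat ran")
print(most_likely_next(model, "the"))   # → cat ("the cat" appears twice)
```

Whether an LLM is meaningfully more than a (vastly scaled-up) version of this is exactly what the rest of this thread argues about.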
I get that lemmy hates AI, and I’m not going to try to talk you out of that, but please stop repeating this factually incorrect myth. LLMs are not stochastic parrots, despite what you may have heard. And they do think… to a degree. Note that they’re by no means everything CEOs and tech bros want them to be, but if you’re going to criticize them, please do it accurately.
They do know the meaning of words, but only in relation to other words. It’s how they work. It’s not a statistical thing like word frequency patterns— they’re not doing the same thing autocomplete does. Instead, they’re doing math on words in a several hundred-thousand dimensional array where placement on this grid indicates the meaning of the word— one vector direction indicates plurals, another indicates rudeness or politeness, another indicates frog-like, another might indicate related to 1993 ibm pentium CPUs, etc, etc, etc. It developed this array via training on terabytes of text, but it’s not storing a copy of that text, nor looking it up, nor copying anything from it… it’s defining words based on how they are used, then doing math on it to figure out what is the most appropriate thing to say next— not the most likely thing according to statistics, the most meaningful based on the definitions of the words it understands.
They really do not copy and paste. They do use definitions. They do think about the words in a very real way.
They don’t apply logical consistency or fact-checking. There are hacks to make them talk to themselves in ways where following the meaningful definitions of words is more likely to lead to fact-checking and logical consistency, but it’s not 100% foolproof.
You should take your own advice.
That’s only one part of meaning, and it’s the only one LLMs have. It’s fascinating what this one part can do, but we don’t operate this way. LLMs have no world model, no logic model to associate a word to. It doesn’t think, it’s still just an input-output machine.
I’m sorry, how is this not statistics?
The training is by its very nature statistical. We give millions of text inputs with expected outputs and tune the model until they match. How is this anything but statistics??
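The “tune until they match” step is, concretely, minimizing a statistical objective: cross-entropy on the next token. A minimal sketch (the logits and three-word vocabulary are invented for illustration, not any real framework’s API):

```python
import math

def softmax(logits):
    # turn raw scores into a probability distribution
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token_loss(logits, target_index):
    # cross-entropy: -log P(observed next token);
    # training nudges the weights to push this loss down
    return -math.log(softmax(logits)[target_index])

logits = [2.0, 0.5, -1.0]          # model's raw scores for a toy 3-word vocab
probs = softmax(logits)            # sums to 1.0
loss = next_token_loss(logits, 0)  # small when the model already favors the target
```

Whether you call the resulting trained model “just statistics” or something more is the disagreement here; the training objective itself is unambiguously statistical.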
Yes and no? Yes, it’s not storing a copy of the training data in text form. No, it most definitely can “memorize” text, and if that’s not a copy I don’t know what is.
I could memorize foreign script text without understanding it and then I could recreate it. Did I make a copy? no. Can I make a copy? yes.
You’re right that there is an internal representation for tokens and token sequences, but they also do copy. There is a whole area of research on this, and here is an example article on extracting image datasets.
Having a number that relates words to other words is not understanding words. Stop believing the hype for fuck’s sake. What they ‘know’ is NOT knowledge. They do not know anything. Period.
There is a reason they start to fail when trained on other slop; because they don’t know what any of it means!
Their ‘knowledge’ comes from the basic weights of which word is most likely to follow. Period. The importance of that weight comes from humans. It is not intrinsic knowledge, even after training. It is pure association, and not the kind of word association you or I do.
They do build a representation of words and sequences of words and use that representation to predict what should come next.
A simplistic representation is the classic embedding diagram showing how, in certain vector spaces, you can relate man/woman/king/queen/royal together.
The thing is, these are static representations and are only bound to the information provided to the model. Meaning there is nothing enforcing real world representations and only statistically consistent representations will be learned.
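The king/queen relationship mentioned above can be sketched with toy vectors (these 3-d numbers are hand-picked purely for illustration; real embeddings are learned from data and have hundreds or thousands of dimensions):

```python
# hand-made "embeddings": dimension 0 ≈ royal, 1 ≈ male, 2 ≈ female
vecs = {
    "king":  [1.0, 1.0, 0.0],
    "queen": [1.0, 0.0, 1.0],
    "man":   [0.0, 1.0, 0.0],
    "woman": [0.0, 0.0, 1.0],
}

def add(a, b): return [x + y for x, y in zip(a, b)]
def sub(a, b): return [x - y for x, y in zip(a, b)]

def nearest(query, vocab):
    # word whose vector has the smallest Euclidean distance to the query
    def dist(v):
        return sum((x - y) ** 2 for x, y in zip(query, v)) ** 0.5
    return min(vocab, key=lambda w: dist(vocab[w]))

q = add(sub(vecs["king"], vecs["man"]), vecs["woman"])  # king - man + woman
print(nearest(q, vecs))  # → queen
```

The analogy arithmetic only works because the vectors encode how the words are used relative to each other, which is the point being made above.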
They don’t “learn” anything, though. They’re ‘trained’ (still a bad term but at least the industry uses it) to spit the correct answer out.
People, especially CEOs and advertising firms, need to stop anthropomorphizing them. They do not learn. They do not “know”. They have statistically derived associations and that’s it. That’s all.
Holy hell, the ELIZA effect is in full swing and it’s beyond sad. They don’t build the associations themselves. They don’t know what the representations mean. They absolutely do not know why two words are strongly associated. It’s just a bunch of math that computes a path through that precomputed vector space. That’s it.
I didn’t use the word learn, although that’s really just a matter of semantics. I said they build a representation of words/sequences in a vector space to understand the interplay of words.
You can downvote me all you want, but that’s literally just the math that’s happening behind the scenes. Whether any of that approaches something called “learning”, probably not, but I’m not a neuroscientist.
“it doesn’t draw anything, it’s just a bunch of math” to describe vector graphics pipelines used to render frames for games.
I’m not actually disagreeing, it’s just really funny seeing decades of engineers’ and mathematicians’ collective output being hand-waved as “just a bunch of math”.
You know what I mean, dammit. lol
Saying that an LLM knows words is not a value judgement. It doesn’t mean “LLMs are sentient” or “LLMs are smart like humans”. It doesn’t imply they have real-world experiences. It’s just a description of what they do. That word has been used to describe much more basic kinds of information and functionality in computers already. What makes it so offensive now?
If you taught children slop at school they would not get far either. Although training LLMs on LLM output is more akin to getting rid of books and relying on what teachers remember to teach the students.
It comes from the LLM and not from the outside; that’s what intrinsic means. How is it not intrinsic knowledge? I think you mean to say that without humans to read it, an LLM’s output holds no inherent value. That is true, and nobody is claiming that it does. LLMs don’t derive pleasure from talking like humans do, so the only value LLM output has is from the person reading it.
LLM weights are anything but basic, but regardless, this is also true, and lunnrais said as much:
The difference between human knowledge and LLM knowledge is that an LLM’s entire universe is words, while humans understand words in relation to real-world experiences. Again, nobody is claiming those two understandings are equivalent, just that they both exist.
Also, on the point of statistics: I think the way people understand statistics and the statistics used in LLMs are vastly different. It is true that an LLM finds which word is most likely to be next, but how it does that is not a classical statistical method. An LLM itself is a statistical model. When one says an LLM ‘knows’ or ‘understands’, they mean it has captured abstract information in an incomprehensibly complex neural network, not dissimilar to how we do it. That it can only use that information for word prediction doesn’t change the fact that it has captured information beyond what is present in a word prediction.
It seems to me that ‘statistics’ is often brought up to devalue LLMs by associating them with basic statistics. That association is wrong, as I’ve explained in the previous paragraph. And being statistical models themselves doesn’t mean their ability to express knowledge (although limited to the textual domain) has to be inferior to a human’s.
I understand the need to warn people of the limitations of LLMs. Their limitation is that they are text models with no concept of real life, not that they are statistical models or copy-paste machines.
Even simply using the word “know” is anthropomorphising them and is wholly incorrect.
You are suffering from the ELIZA effect and it is just… sad.
Computers have been getting anthropomorphised for a long time. Why is it only when talking about LLMs that you start clutching your pearls about it? Why do you think that verb has to be exclusive to humans? To me that seems like a strange and inconsequential thing to dig your heels in over.
And I struggle to see how you could genuinely believe I was suffering from ‘ELIZA effect’ after reading my comment. You need more nuance and less absolutism in your world view if you genuinely do.
Your eagerness to fool yourself is beyond sad.
I am of the opinion now, and this is entirely AI’s fault, that for the collective mental health of our society, a grocery store self-checkout should not even be allowed to “thank” you for your purchase.
Seen a bit of a rise of that sort of person since moltbook or whatever it’s called emerged, trying to sucker people into believing the random bullshit generator is sentient or cognizant of its assets in any way.
What’s worse, homie said “nuh-uh, it’s not statistical probability” and then proceeded to describe a statistical probability mesh.
Might help a bit if we all stop slapping the AI term on everything and start calling things what they are such as scripting, large language models, cronjobs, etc.
Trying to argue with those people just makes me sad and tired :(
AI Summary of the last 240 pages
Wow that’s hyper realistic!
Only artist intent matters. Personal preference be damned!
Yes, the artist’s intent is the part that matters in art.
That’s a fine opinion you have there. It reminds me of “Only the chefs taste matters during a meal”
Why am I ordering from a chef if I don’t like their cooking?
Implying that it’s the only thing that matters is dumb. If you want uncanny valley faces, go for it. I’m not interested in dumb AI permeating yet another corner of my life.
Two 5090s for this shit lol. The first 5090 calculates all the shadows and then the second 5090 takes it back out again lmao. What a fucking joke.
Lmaooo I will stick to turning down all the settings on my shitbox computer because apparently that’s the same experience hahah.
That’s impressively bad
Would you say the same thing if you didn’t know that it was AI? I think it actually looks pretty good overall, although some of the changes (like deciding that this character dyes her hair and has undyed roots) are odd.
Edit: It seems to do a better job with the soccer player.
Edit 2: I wonder if it works better with male faces than with female ones. It’s making the woman’s eyes and lips bigger but not the man’s.
Yes, it looks bad regardless of the tech behind it.
The popular “it looks awful” kneejerk is so telling.
I dislike AI, but the utter delusion around it reminds me of how people complained about the internet in the 90s. There was a sensible fear, and then there were essentially Luddites.
Everything repeats.
“All technological development is good because there are always people who complain and then the technological development continues regardless. There’s no point in critique, everything is always perfect and nothing you say will change it anyway so who cares.”
“I only see things in black and white because I haven’t grown up”
I actually blocked you on another account because you’re such a douchebag. I wish those could be exported more easily!
Oh I remember you now!! You’re the “we should rise up against fascism” coward who months ago said
And here you are, pathetically typing about AI. What happened, tough boy?
This is still you:
Who needs AI when real humans are this worthless?
Dunno what the beef between you guys is, but it’s healthy to see and interact with people who have different opinions than you.
Blocking people just because you think they’re douchebags for having a different opinion makes you live in a bubble where you only hear things that reinforce your own views.
But the younger women in those screenshots are absolutely “sexified.”
I’m an AI evangelist as far as Lemmy goes, but that is a problem. It’s beyond a “sensible fear” problem; it’s unignorable and unacceptable. I’m kind of shocked DF didn’t point it out.
Unrelated to the AI
What’s up with his chin? The chin’s off center, and the beard is even more so.
I kind of agree with you.
It does not look bad. What I’m worried about is that the AI can’t keep up and will end up changing the look of the characters, and I hate that it will take agency away from the artists.
What of the agency of the players? Who will be sitting in the room and impacted, if this optional feature is enabled in the privacy of one’s home? Will the artist, halfway around the world suddenly wail and fall over in pain because of personal preference?
I guess it’s the same as when people put steak sauce on a steak, and all the chefs who love the taste of bloody meat cry that the meal has been ruined… Except they’re not the ones eating it. They can enjoy their own creations however they like without getting all nitpicky about how others have their own preferences.
In game design, what artists do is more than just graphics. There are minute details in characters that affect the story, details an AI overlay doesn’t know are important: scars, whether the character uses makeup or colours their hair, that sort of thing. These are story beats, and I don’t really care if people want to see them or not. Some people skip the dialogue, and that’s their decision.
Then the graphic artists and level designers work together on environmental cues, like whether that wall is climbable or that obstacle is breakable. Level design also often uses deliberate breadcrumbs, like lights flickering on a door or paint/blood splashes in vibrant colours, to show where to go. If the AI starts to change these things, they might become hard to see or overly bright, which in turn makes the game worse, and the player doesn’t necessarily even know why it feels bad.
Maybe the right way to put it is that something that sees only a 2D render of the game and enhances it however it sees best at the moment lacks understanding of the artistic intent.
Realistically the artists working on it have a say into what graphics settings are allowed, and they already deal with the fact some people will need to run on very low settings, also affecting their ideal viewing conditions. If the newer DLSS really makes such sweeping changes they would either ask Nvidia for improvements, disable it, or heavily dissuade it.
But player autonomy is also important, so it’s a balancing act. If the players end up wanting it, stripping it out still won’t make sense at the end of the day.
Do you have a problem with all the reshade presets for Skyrim you can find on Nexus Mods?
Agreed.
The effect is waaaaay too strong in those screenshots, but a more subtle version would be alright.
And yes, it’s definitely “sexifying” the woman in the shot. Transformer img2img models are notorious for doing basically this.
I could speculate why. Could be that it’s (unfortunately) mostly male Tech Bros developing them? Or it could be that a massive fraction of the dataset is sexualized photos of women scraped from social media. But TBH, while I don’t know why this is the case, pretty much all diffusion models tend to “Instagram” women more than men.
Bias in Bias out
I feel like the soccer player does look more like his real life counterpart (Virgil van Dijk) with DLSS off.
it’s an unwoke/untumblrisation filter, right here on your graphics card.
https://knowyourmeme.com/memes/original-vs-un-tumblrized
This shit would excite certain people, if 90% of the population could afford to buy graphics cards anymore.
You also apparently need a separate GPU to run this. So not only can 90% of people not afford one, but the article states they used 2 to accomplish this.
Stuff I’d expect, and maybe even be interested in, from a small indie open-source dev. And I’m absolutely glad one of the world’s richest and scummiest companies is getting shit on for doing it.
Well there is an off toggle.
You still pay for it 🤷
How so? You can avoid buying games with this feature.
You pay for hardware that’s capable of doing this. Even if you don’t use it, you still pay for it.
It’s like when you buy a car like a Tesla, that has features locked behind a subscription. The hardware is there and you can be damn sure you’ll pay for it upfront, even if they make you rent it.
AMD is an option, and who the fuck would buy a tesla except neo nazis?
Tesla isn’t the only car manufacturer that does this. Hence “like.”