- cross-posted to:
- nvidia
In other words, their USP is no longer a USP, and legacy cards stay relevant.
I’ll take it if they give it, but I’m not gonna hold my breath.
As a 3090 owner: ok, but largely meh & IDGAF.
Admittedly, I’ve only tried FSR frame generation in a single game (Cyberpunk 2077), and while it looks cool and smooth, the input lag it introduces feels horrid. DLSS FG might offer a bit lower input lag, but it’s still more than it would be without FG, so… meh?
I didn’t like motion smoothing on TVs, and (so far) I sure don’t enjoy it on my PC. Old man yells at cloud, etc.
Why would anyone want to sacrifice input latency for a few more fake frames? Just buy GeForce Now or something and cloud-stream your games at that point.
Can we stop with the fake frame nonsense? They aren’t any less real than other frames created by your computer. This is no different from the countless other shortcuts games have been using for decades.
Also, input latency isn’t “sacrificed” for this. There is about 10 ms of overhead with 4x DLSS 4 frame gen, which is, however, easily compensated for by the increase in frame rate.
The math is pretty simple: at 60 fps native, a new frame needs to be rendered every 16.67 ms (1000 ms / 60). Leaving out latency from the rest of the hardware and software (it varies a lot between input and output devices and even from game to game, and many games run their graphics and e.g. physics loops at different rates), generating three more frames per “non-fake” frame means we see a new frame on screen every 4.17 ms (assuming the display can output 240 Hz).

The system still accepts input and visibly moves the viewport based on user input between “fake” frames using reprojection, a technique borrowed from VR (where even the older approaches work exceptionally well in my experience, at otherwise unplayably low frame rates, provided the game doesn’t freeze). With the roughly 10 ms of overhead, that puts us at 14.17 ms of latency, but with four times the visual fluidity.
It’s even more striking at lower frame rates. Let’s assume a game is struggling at the desired settings and just about manages 30 fps (current example: Cyberpunk 2077 at RT Overdrive settings in 4K on a 5080). That’s one native frame every 33.33 ms. With three synthetic frames, we get one frame every 8.33 ms. Add 10 ms of input lag and we arrive at a total of 18.33 ms, close to the 16.67 ms input latency of native 60 fps. You cannot tell me that this wouldn’t feel significantly more fluid to the player. I’m pretty certain you would actually prefer it over native 60 fps in a blind test, since the screen gets refreshed 120 times per second.
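To spell that arithmetic out, here’s the same back-of-the-envelope model as a small Python sketch. The ~10 ms overhead is the figure assumed above, and the model deliberately ignores the rest of the input chain, so treat it as an illustration of the reasoning rather than a measurement:

```python
# Simplified model from the comment above: felt latency is roughly the time
# between displayed frames plus an assumed ~10 ms frame-generation overhead.
def felt_latency_ms(base_fps: float, gen_factor: int = 4, overhead_ms: float = 10.0) -> tuple[float, float]:
    base_frame_ms = 1000.0 / base_fps                  # time between rendered ("non-fake") frames
    displayed_frame_ms = base_frame_ms / gen_factor    # time between frames actually shown on screen
    return displayed_frame_ms, displayed_frame_ms + overhead_ms

for fps in (60, 30):
    shown, felt = felt_latency_ms(fps)
    print(f"{fps} fps base -> a frame shown every {shown:.2f} ms, ~{felt:.2f} ms felt latency")

# 60 fps base -> a frame shown every 4.17 ms, ~14.17 ms felt latency
# 30 fps base -> a frame shown every 8.33 ms, ~18.33 ms felt latency
```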
Keep in mind that the artifacts from previous generations of frame generation, like smearing and shimmering, are pretty much gone now, at least based on the footage I’ve seen, and frame pacing appears to be improved as well, so there really aren’t any downsides anymore.
Here’s the thing though: all of this remains optional. If you feel the need to be a purist about “real” and “fake” frames, nobody is stopping you from ignoring this setting in the options menu. Developers will, however, increasingly be using it, because it enables higher settings that were previously impossible to run on current hardware. No, that’s not laziness, it’s exploiting hardware and software capabilities, just like developers have always done.
Obligatory disclaimer: My card is several generations behind (RTX 2080, which means I can’t use Nvidia’s frame gen at all, not even 2x, but I am benefiting from the new super resolution transformer and ray reconstruction) and I don’t plan on replacing it any time soon, since it’s more than powerful enough right now. I’ve been using a mix of Intel, AMD and Nvidia hardware for decades, depending on which suited my needs and budget at any given time, and I’ll continue to use this vendor-agnostic approach. My current favorite combination is AMD for the CPU and Nvidia for the GPU, since I think it’s the best of both worlds right now, but this might change by the time I make the next substantial upgrade to my hardware.
My friend, you can throw all the numbers you want at me, but real-life use is the true tell. I have experienced this on my 4090: when playing any semi-competitive game, even Call of Duty, frame generation kills your ability to perform on par with anyone who isn’t using frame generation. When my internet latency is 3 ms, 10 ms of waiting for the GPU to generate frames and accept input is a gigantic, noticeable difference. So I stand by what I said: FAKE FRAMES.
I know I wrote perhaps a bit too much, so it’s understandable that you glossed over it, but the thing with 4x DLSS 4 frame generation compared to what you’ve tried on your 4090 is that it should result in less felt latency.
Keyword being “should”. Tech companies, Nvidia most prominently among them, love to make performance claims. Nothing is set in stone until it’s actually tested and benchmarked. I will always be skeptical; relying on frame generation instead of good optimisation will always bring caveats.
Cool, decades-old tech :/
Absolutely not. I would recommend reading into this further.
Interpolation isn’t new. TVs with interpolation have existed for decades. I have to say that I do like their extrapolation idea. And we’ll see how that turns out, I suppose.
Because with all the claims and lies from companies like Nvidia, I’ve yet to see this tech work for myself. Just a bunch of tech demos.
Just because a far simpler type of interpolation has existed before, this doesn’t mean that this type is “decades old”. You know that.
If you’d actually read up on how their interpolation works, you’d know this isn’t new tech. I’m more interested in the extrapolation.