I am probably unqualified to speak about this, as I am using a low-profile RX 550 and a 768p monitor and almost never play newer titles, but I want to kickstart a discussion, so hear me out.

The push for more realistic graphics has been ongoing for longer than most of us can remember, and it made sense for most of that time, as anyone who has looked at an older game can confirm - I am the kind of person who has fun making fun of weird-looking 3D people.

But I feel games’ graphics have reached the point of diminishing returns. Today’s AAA studios spend millions of dollars just to match the graphical level of their previous titles - often sacrificing other, more important things along the way - and people are unnecessarily spending lots of money on power-hungry, heat-generating GPUs.

I understand getting an expensive GPU for high-resolution, high-refresh-rate gaming, but for 1080p? You shouldn’t need anything more powerful than a 1080 Ti for years. I think game studios should just slow down their graphical improvements, as they are unnecessary - in my opinion - and just prevent people with lower-end systems from enjoying games. And who knows, maybe we will start seeing 50-watt gaming GPUs that are viable, capable of running games at medium/high settings, and going for cheap - even iGPUs render good graphics now.

TLDR: why pay more and hurt the environment with higher power consumption when what we have is enough - and possibly overkill?

Note: it would be insane of me to claim that there is no big difference between the two pictures - Tomb Raider (2013) vs. Shadow of the Tomb Raider (2018) - but can you really call either of them bad, especially the right picture (5 years old)?

Note 2: this is not much more than a discussion starter, and it is unlikely to evolve into something larger.

  • MentalEdge@sopuli.xyz · 11 months ago

    Shadow can definitely look a lot better than this picture suggests.

    The biggest advancements in game graphics have not occurred in characters, except perhaps in animation and subsurface-scattering tech.

    The main character always gets a disproportionate graphical resource allocation, and we achieved “really damn good” in that category a while ago.

    Adam Jensen didn’t look that much better in Mankind Divided than he did in Human Revolution, but Prague IS SO MUCH MORE DETAILED than Detroit was.

    Then there are efficiency improvements in rendering brought by systems like Nanite, material shader improvements, more detailed lighting systems, and more efficient ambient occlusion.

    Improvements in inverse kinematics are something I’m really excited about, as well.
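    For anyone unfamiliar: inverse kinematics means solving for joint angles so the end of a limb lands exactly on a target - it’s what keeps feet planted on stairs and hands on ladder rungs. A minimal two-bone solver in Python, purely as an illustrative sketch (the function and its names are mine, not from any engine):

    ```python
    import math

    def two_bone_ik(l1, l2, tx, ty):
        """Analytic two-bone IK in 2D (illustrative, not engine code).

        Returns (shoulder, elbow) angles in radians so that a chain of two
        segments with lengths l1 and l2, rooted at the origin, ends at
        (tx, ty). Unreachable targets are clamped to the nearest reachable
        distance.
        """
        d = math.hypot(tx, ty)
        d = max(abs(l1 - l2), min(l1 + l2, d)) or 1e-9  # clamp to reach

        # Law of cosines gives the elbow's relative bend...
        cos_elbow = (d * d - l1 * l1 - l2 * l2) / (2 * l1 * l2)
        elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))

        # ...and the shoulder aims at the target, offset by the interior
        # angle of the triangle formed by the two bones.
        cos_off = (l1 * l1 + d * d - l2 * l2) / (2 * l1 * d)
        shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_off)))
        return shoulder, elbow
    ```

    Real engines run this kind of solve per-limb on top of the authored animation, which is why better IK makes characters interact with the world more convincingly.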

      • MentalEdge@sopuli.xyz · 11 months ago

        It is. Adam works at a secret underground Interpol base in the middle of the city. There are abusive secret societies to dismantle, murder cases to solve, drug rings to bust, corrupt cops to beat up. Mankind Divided is a prime example of how a medium-sized but super-detailed hub world can be just as good as, if not better than, one that is huge and full of nothing.

    • Doods@infosec.pub (OP) · 11 months ago

      I cannot elaborate on that, as I am unqualified - remember, I have never played newer titles.

      • MentalEdge@sopuli.xyz · 11 months ago

        My main point is that a headshot of the main character is not a good yardstick. The MC is always going to be rendered with enough oomph to look good, no matter the settings or game generation.

        The difference in recent years has been in environment detail, material shading, and lighting - things you maybe can’t even enable due to playing on older hardware.

        While I agree ray tracing is a total energy hog, that’s not the only area seeing advancement. Rendering pipelines like Nanite enable more graphics AND less power consumption.

      • MrZee@lemm.ee · 11 months ago

        Three thoughts:

        1. I wonder if you would still have this take if you played a newer, high-quality AAA game on a high-end setup. I don’t mean to imply that your mind will definitely be blown - I really don’t know - but it would be interesting to see what doing so would do to your opinion.

        2. Gaming is about entertainment. There is no denying that better/bigger/smoother/more immersive tends to add to the entertainment. So devs push those boundaries both for marketing reasons and because they want to push the limits. I have a hard time seeing a world in which gaming development as a whole says “hey, we could keep pushing the limits, but it would be more environmentally friendly and cheaper for our customers if we all just stopped advancing game performance.”

        3. There are SO MANY smaller studios and indie devs making amazing games that can run smoothly on 5/10/15 year old hardware. And there is a huge number of older games that are still a blast to play.

      • MentalEdge@sopuli.xyz · 11 months ago

        Another point in favour of new graphics tech: you mentioned you’re worried about artists needing to do more work. As someone who has done 3D work, I can tell you that it’s actually easier to make something photo-real. The hard part is making it look good within the limitations of a game engine - getting something that looks just as good with simpler material shaders and fewer polygons.

        Tech like Nanite actually eliminates the need for all that work. You can give the game engine the full-quality asset, and it handles all the difficult work of rendering it efficiently. This is why we are now seeing games that look as good as Unrecord coming from tiny new studios like DRAMA.
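        To make the idea concrete - and this is only a toy distance-based LOD picker, not how Nanite’s cluster hierarchy actually works - an engine can switch to a coarser version of a mesh whenever the coarser mesh’s geometric error would still project to less than about a pixel on screen:

        ```python
        def pick_lod(distance, screen_height_px, max_error_px=1.0, base_error=0.01):
            """Toy LOD selector (illustrative assumption: each LOD halves the
            triangle count while doubling the mesh's world-space geometric
            error, starting from base_error at LOD 0).

            Picks the coarsest LOD whose error still projects to under
            max_error_px pixels at the given distance; field of view is
            ignored for simplicity.
            """
            lod, error = 0, base_error
            # Projected pixel size of the world-space error shrinks with distance
            while (error / distance) * screen_height_px < max_error_px:
                lod += 1
                error *= 2.0
            return lod
        ```

        Up close this returns 0 (full detail); far away it climbs to coarser levels, so the GPU draws fewer triangles for the same on-screen quality - the same trade that systems like Nanite automate at a much finer grain.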