• KingRandomGuy@lemmy.world · 7 hours ago

    “Useless” is a strong term. I do a fair amount of research on a single 4090. Lots of problems can fit in <32 GB of VRAM. Even my 3060 is good enough to run small-scale tests locally.
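
    For a quick sanity check before committing to hardware, a back-of-the-envelope sketch like this works (PyTorch; the `fits_in_vram` helper, the 4x optimizer-state factor, and the activation budget are my own rough placeholders, not exact measurements):

    ```python
    import torch
    import torchvision

    # Rough check: does a model's training footprint fit in local VRAM?
    # The 4x factor (fp32 params + grads + two Adam moment buffers) is a
    # rule of thumb, and the activation budget is a guess; real usage
    # depends heavily on batch size and architecture. Assumes a CUDA GPU.
    def fits_in_vram(model, overhead_factor=4, activation_budget_gb=4.0):
        n_params = sum(p.numel() for p in model.parameters())
        needed = n_params * 4 * overhead_factor + activation_budget_gb * 1024**3
        total = torch.cuda.get_device_properties(0).total_memory
        print(f"~{needed / 1024**3:.1f} GB needed, {total / 1024**3:.1f} GB on device")
        return needed < total

    fits_in_vram(torchvision.models.resnet50())  # comfortably fits on a 12 GB 3060
    ```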

    I’m in CV, and even with enterprise-grade hardware, most folks I know are limited to 48 GB (A40 and L40S, substantially cheaper and more accessible than A100/H100/H200). My advisor would always say that you should really try to set up a problem where you can iterate within a few days on a single GPU, and lots of problems are still approachable that way. Of course you’re not going to train the next SOTA VLM on a 5090, but not every problem is that big.
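
    If it helps anyone, here’s roughly what that single-GPU iteration loop looks like with mixed precision, one of the usual tricks for staying under those VRAM caps. The tiny model and random tensors are just stand-ins, not anything specific from this thread:

    ```python
    import torch
    import torch.nn as nn

    # Minimal single-GPU training loop with automatic mixed precision (AMP).
    # fp16 activations roughly halve activation memory, which is often the
    # difference between fitting in 24-48 GB or not.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10),
    ).to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
    criterion = nn.CrossEntropyLoss()
    scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

    for step in range(10):  # random batches stand in for a real DataLoader
        x = torch.randn(32, 3, 224, 224, device=device)
        y = torch.randint(0, 10, (32,), device=device)
        optimizer.zero_grad(set_to_none=True)
        with torch.autocast(device_type=device, enabled=(device == "cuda")):
            loss = criterion(model(x), y)
        scaler.scale(loss).backward()  # loss scaling avoids fp16 underflow
        scaler.step(optimizer)
        scaler.update()
    ```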

    • KeenFlame@feddit.nu · 1 hour ago

      Exactly, 32 GB is plenty to develop on, and why would you need to upgrade the RAM? It’s been years since I did that in any computer, let alone a tensor workstation. I feel like they made pretty good choices for what it’s for.