• brucethemoose@lemmy.world · 13 hours ago (edited)

    The 5090 is basically useless for AI dev/testing because it only has 32GB. Might as well get an array of 3090s.

    The AI Max is slower and finicky, but it will run things you’d normally need an A100 that costs as much as a car to run.

    But that aside, there are tons of workstation apps gated by nothing but VRAM capacity that this will blow open.

  • KingRandomGuy@lemmy.world · 7 hours ago

      Useless is a strong term. I do a fair amount of research on a single 4090. Lots of problems can fit in <32 GB of VRAM. Even my 3060 is good enough to run small scale tests locally.

      I’m in CV, and even with enterprise-grade hardware, most folks I know are limited to 48GB (A40 and L40S, substantially cheaper and more accessible than A100/H100/H200). My advisor would always say that you should really try to set up a problem where you can iterate within a few days’ time on a single GPU, and lots of problems are still approachable that way. Of course you’re not going to make the next SOTA VLM on a 5090, but not every problem is that big.
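      To make the "lots of problems fit in under 32 GB" point concrete, here is a rough back-of-envelope sketch (my own illustrative numbers, not from the thread) of training VRAM for mixed-precision training with Adam, ignoring activations and framework overhead:

```python
# Rough VRAM estimate for mixed-precision (fp16/fp32) training with Adam.
# Per parameter: 2 B fp16 weights + 2 B fp16 grads + 4 B fp32 master
# weights + 8 B fp32 Adam moments = 16 bytes. Activations are extra.
def training_vram_gb(params_billions: float) -> float:
    p = params_billions * 1e9
    bytes_total = (2 + 2 + 4 + 8) * p
    return bytes_total / 1e9  # decimal GB

# A ~1B-parameter model needs roughly 16 GB before activations, so it
# fits on a 32 GB card; a 70B model obviously does not.
print(round(training_vram_gb(1.0)))   # → 16
print(round(training_vram_gb(70.0)))  # → 1120
```

      Inference-only workloads are far lighter (about 2 bytes per parameter in fp16), which is why even smaller cards are fine for local testing.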

    • KeenFlame@feddit.nu · 42 minutes ago

        Exactly, 32 GB is plenty to develop on. And why would you need to upgrade the RAM? It’s been years since I did that in any computer, let alone a tensor workstation. I feel like they made pretty good choices for what it’s for.

    • Amon@lemmy.world · 4 hours ago

        No, it runs off integrated graphics, which is a good thing because you can dedicate a large amount of RAM to GPU workloads.