• Dudewitbow
    6 months ago

    Faster RAM generally has diminishing returns for everyday system use, but it does matter for GPU compute on an iGPU (e.g. gaming and ML/AI would make use of the increased memory bandwidth).
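
    A minimal sketch of what bandwidth-bound means here: a STREAM-style triad over arrays far larger than cache, where the CPU mostly waits on DRAM. The array size and build line are illustrative choices, not tied to any particular machine.

    ```c
    /* STREAM-style triad: for arrays much larger than cache, this loop spends
     * most of its time waiting on DRAM, so the reported figure tracks memory
     * bandwidth rather than CPU speed.
     * Build (assumed toolchain): cc -O2 triad.c -o triad */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1u << 25)   /* 32M doubles = 256 MiB per array, far bigger than any cache */

    int main(void) {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;

        for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];                /* 2 reads + 1 write per element */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        double bytes = 3.0 * N * sizeof(double);     /* total DRAM traffic for the triad */
        printf("a[0]=%.1f  effective bandwidth: %.1f GB/s\n", a[0], bytes / secs / 1e9);

        free(a); free(b); free(c);
        return 0;
    }
    ```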

    It's not easy to simply push a wider bus, because memory bus width directly affects design complexity, and therefore cost. It's cheaper to push memory clocks than to design a die with a wider bus.
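
    Back-of-the-envelope: peak bandwidth is just bus width times transfer rate, so the same number can come from a narrow fast bus or a wide slow one. The widths and MT/s figures below are illustrative, not any specific product.

    ```c
    /* Peak theoretical bandwidth = (bus width in bytes) x (transfers per second).
     * A wider bus buys bandwidth in silicon (die area, pins, routing); a higher
     * clock buys it on the existing wires. Configurations are made-up examples. */
    #include <stdio.h>

    static double peak_gbps(int bus_bits, int mega_transfers_per_s) {
        return (bus_bits / 8.0) * mega_transfers_per_s / 1000.0;   /* -> GB/s */
    }

    int main(void) {
        printf("128-bit @ 6400 MT/s: %6.1f GB/s\n", peak_gbps(128, 6400));
        printf("256-bit @ 3200 MT/s: %6.1f GB/s\n", peak_gbps(256, 3200));
        printf("256-bit @ 6400 MT/s: %6.1f GB/s\n", peak_gbps(256, 6400));
        return 0;
    }
    ```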

    • Paragone@lemmy.world
      6 months ago

      Computational-Fluid-Dynamics simulations are RAM-limited, iirc.
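
      More precisely bandwidth-limited: a roofline-style estimate for a generic 7-point stencil sweep, the kind of loop a CFD solver repeats endlessly. The flop/byte counts and the bandwidth/compute figures are assumptions for illustration only.

      ```c
      /* Roofline-style estimate for a 7-point stencil update over a big 3D grid.
       * All numbers are assumed, order-of-magnitude values. */
      #include <stdio.h>

      int main(void) {
          double flops_per_point = 8.0;    /* ~7 adds + 1 multiply per grid point */
          double bytes_per_point = 16.0;   /* ~1 double read + 1 double write from DRAM,
                                              assuming neighbours mostly hit in cache */
          double intensity = flops_per_point / bytes_per_point;   /* flops per byte */

          double mem_bw_gbs = 100.0;       /* assumed DRAM bandwidth, GB/s */
          double cpu_gflops = 1000.0;      /* assumed peak compute, GFLOP/s */

          printf("arithmetic intensity: %.2f flop/byte\n", intensity);
          printf("bandwidth ceiling: %.0f GFLOP/s vs %.0f GFLOP/s peak compute\n",
                 intensity * mem_bw_gbs, cpu_gflops);
          return 0;
      }
      ```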

      I’m presuming many AI models are, too, since some of them require stupendous amounts of RAM, which no non-server machine would have.
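
      The capacity side is simple arithmetic: weights alone cost parameter count times bytes per parameter. The parameter counts below are generic round numbers, not specific models.

      ```c
      /* Memory needed just to hold model weights: parameters x bytes per parameter.
       * Parameter counts are illustrative; fp16/bf16 weights assumed (2 bytes each). */
      #include <stdio.h>

      int main(void) {
          double params[] = { 7e9, 70e9, 400e9 };
          double bytes_per_param = 2.0;

          for (int i = 0; i < 3; i++)
              printf("%4.0fB params: %5.0f GB of weights\n",
                     params[i] / 1e9, params[i] * bytes_per_param / 1e9);
          return 0;
      }
      ```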

      “diminishing returns” is what Intel’s “beloved” Celeron garbage was pushing.

      When I ran Memtest86+ (or the other version, I don't remember), & saw how insanely slow RAM is compared with L2 or L3 cache, & then discovered how incredible the machine upgrade from SATA to NVMe was…
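
      Rough orders of magnitude for that ladder, purely as assumed, generic figures (they vary a lot by CPU, DRAM generation, and drive):

      ```c
      /* Approximate access latencies; rough, generic orders of magnitude only. */
      #include <stdio.h>

      int main(void) {
          struct { const char *what; double ns; } tiers[] = {
              { "L1 cache hit",      1.0   },
              { "L2 cache hit",      4.0   },
              { "L3 cache hit",      30.0  },
              { "DRAM access",       100.0 },
              { "NVMe random read",  100e3 },   /* ~100 microseconds */
              { "SATA SSD read",     200e3 },   /* ~0.2 milliseconds */
          };
          for (size_t i = 0; i < sizeof tiers / sizeof tiers[0]; i++)
              printf("%-17s ~%9.0f ns  (%7.0fx an L1 hit)\n",
                     tiers[i].what, tiers[i].ns, tiers[i].ns / tiers[0].ns);
          return 0;
      }
      ```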

      Get the fastest NVMe & RAM you can: it puts your CPU where it should have been all along, and the difference between a “normal” build and an effective build shows how the whole industry has been misframing things for decades.

      _ /\ _