Unfortunately, due to the complexity and specialized nature of AVX-512, such optimizations are typically reserved for performance-critical applications and require expertise in low-level programming and processor microarchitecture.

  • racemaniac@lemmy.dbzer0.com · 145 points · 2 months ago (edited)

    Whoever wrote this article is just misleading everyone.

    First of all, they did this for other, similar instruction sets before, so this is nothing special. Second, they measure the speedup against a basic implementation that doesn't use any optimizations.

    They did the same in the past for AVX2, which is 67x faster in the very test where AVX-512 got the 94x speedup. So it's not 94x faster now; it's 1.4x faster than the previous iteration using the older AVX2 instruction set. And it's barely twice as fast as the SSE3 implementation (40x faster than the slow version), an instruction set from 20 years ago…

    So yeah, it's awesome that they did the same great work for AVX-512, but the 94x boost is just plain bullshit… It's really sad that great work gets worded in such a misleading way to make clickbait, rather than getting a proper, informative article…

    • vithigar@lemmy.ca · 48 points · 2 months ago

      Even more ridiculous, since a 1.4x performance increase is already incredible news for anyone who makes regular use of this.

      If someone found a software optimization that improved, say, Blender performance by 1.4x, people would be shouting praises from the rooftops.

      • racemaniac@lemmy.dbzer0.com · 8 points · 2 months ago

        Indeed, it's a very nice boost and great work, but this clickbait nonsense is just so stupid…

        And i'm really bothered by how it's just parroted everywhere… Doesn't anybody wonder "94x faster is like… really a LOT… that can't be true"?

    • Finadil@lemmy.world · 24 points · 2 months ago

      Relevant section:

      Intel made waves when it disabled AVX-512 support at the firmware level on 12th-gen Core processors and later models, effectively removing the SIMD ISA from its consumer chips.

  • thingsiplay@beehaw.org · 39 points · 2 months ago

    There is an issue, though: Intel disabled AVX-512 for its Core 12th, 13th, and 14th Generations of Core processors, leaving owners of these CPUs without them. On the other hand, AMD’s Ryzen 9000-series CPUs feature a fully-enabled AVX-512 FPU so the owners of these processors can take advantage of the FFmpeg achievement.

    Intel can’t stop the L.

    As for the claims and benchmarking, we need to see how much it actually improves, because the 94x performance boost is measured against a baseline where no AVX or SIMD is used at all (if I understand the blog post correctly). So I wonder how much the handwritten AVX-512 assembly improves over AVX-512 code written in C (or Rust, maybe?). The exact hardware used for the benchmark isn't disclosed either, unfortunately.

    • zod000@lemmy.ml · 5 points · 2 months ago (edited)

      Someone else in the comments mentioned it is about 40% faster than the AVX2 code and slightly more than twice as fast as the SSE3 code. That's still a nice boost, but hopefully no one was relying on the dramatically slower unoptimized baseline.

      • thingsiplay@beehaw.org · 1 point · 2 months ago

        But my question is: how much faster is it because it's written in assembly, rather than in a "high-level" language like C or Rust? I mean, if the AVX-512 code were written in C, would it still be 40% faster than AVX2?

  • collapse_already@lemmy.ml · 16 points · 2 months ago

    As someone who has done some hand coding of AVX-512, I appreciate their willingness to take this on. Getting the input vectors set up correctly for the instructions can be a hassle, especially when the input dataset is not an even multiple of 64.

  • Mettled@reddthat.com · 11 points · 2 months ago

    When this comes to the BSDs, it will be interesting to see if there is a significant difference in multimedia. I bought Intel 11th gen over 10th gen for its AVX-512.

      • Mettled@reddthat.com · 1 up / 2 down · 2 months ago

        Given the BSDs' heavy emphasis on code correctness, that's when the quality and security of the code will be revealed. I will watch what the OpenBSD developers say when they try to port the new FFmpeg code.

  • ganymede@lemmy.ml · 9 points · 2 months ago (edited)

    nice.

    can usually get a pretty good performance increase by hand-writing asm where appropriate.

    don't know if it's a coincidence, but i've never seen someone who's good at writing assembly say that it's never useful.

    • four · 5 points · 2 months ago

      To be fair, people who don't find assembly useful probably wouldn't get good at writing assembly.

      • ganymede@lemmy.ml · 5 points · 2 months ago (edited)

        for sure, it's perfectly reasonable to say "this tool isn't useful for me"

        it's another thing to say "this tool isn't useful for anyone"

      • ganymede@lemmy.ml · 2 points · 2 months ago (edited)

        from the article it's not clear what the performance boost is relative to intrinsics (it's extremely unlikely to be anything close to 94x lol). it's not even clear from the article whether the AVX2 implementation they benchmarked against was intrinsics or handwritten either. in some cases AVX2 seems to slightly outperform AVX-512 in their implementation.

        there are also so many different ways to break a problem down that i'm not sure this is an ideal showcase, at least without more information.

        to be fair to the presenters, they may not be the ones making the specific flavour of hype that the article writers are.

          • ganymede@lemmy.ml · 1 point · 2 months ago (edited)

            yes, as i said

            from the article it’s not clear what the performance boost is relative to intrinsics

            (they don’t make that comparison in the article)

            so it's not clear exactly how handwritten asm compares to intrinsics in this specific comparison. we can't assume their handwritten AVX-512 asm and intrinsics AVX-512 will perform identically here; it may be better, or worse.

            also worth noting they're discussing benchmarks of a specific function, so overall performance when executing a given set of commands may be quite different, depending on what can and can't be unrolled, and in which order, for different dependencies.

  • Papamousse@beehaw.org · 7 points · 2 months ago

    I worked in media broadcasting; we had an internal lib to scale/convert whatever format in real time, and it went from a basic implementation, to SSE3, to AVX-512, to CUDA. And yes, crafting some functions/loops with assembly can give an enormous boost.