I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.

This one is about why it was a mistake to call 1024 bytes a kilobyte. It’s roughly a 20-minute read, so thank you very much in advance if you find the time to read it.

Feedback is very much welcome. Thank you.

  • wewbull@feddit.uk · 6 months ago

    Kilo meaning 1,000 inside computer science is the retcon.

    Tell me, how much RAM do you have in your PC? 16 gig? 32 gig?

    Surely you mean 17.18 gig? 34.36 gig?
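    The arithmetic behind those figures can be sketched in a few lines of Python (illustrative only; the factor 2**30 is bytes per GiB, 10**9 is bytes per decimal GB):

    ```python
    def gib_to_gb(gib: float) -> float:
        """Convert binary gibibytes (GiB) to decimal gigabytes (GB)."""
        return gib * 2**30 / 10**9

    print(gib_to_gb(16))  # 17.179869184 -> the "17.18 gig" above
    print(gib_to_gb(32))  # 34.359738368 -> the "34.36 gig" above
    ```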

    • PsychedSy@sh.itjust.works · 6 months ago

      abhibeckert in this thread had a good point. Floppies used the power-of-ten prefixes, so it wasn’t particularly consistent.

    • Eyron@lemmy.world · 6 months ago

      206GB? That probably doesn’t include all of the RAM in the machine, like the RAM in the SSD, GPU, NIC, and similar. Ironically, I’d probably round it to 200GB if that were the standard, but it isn’t. It wouldn’t be much of a downgrade to go from 192GiB to 206GB. Are 192 and 206 really that different? It’s not much different from remembering the numbers for a 1.44MB floppy, 1.544Mbps T1 lines, or the ~3.14159 approximation of pi. Numbers generally end up getting weird: trying to keep everything in binary prefixes doesn’t really change that.

      The definition of kilo as “1000” was standard before computer science existed. If computer science used it in a non-standard way, that may have been common or a decent approximation at the time, but it was never the standard. Does that justify the situation today, where many vendors show both definitions on the same page when you’re buying a computer or a server? Does it justify the development time and confusion from people still not understanding the difference? Was it worth the PR backlash Samsung got for, yet again, pointing out the difference?

      It’d be one thing if this confusion had stopped years ago, and everyone understood the difference today, but we’re not: and we’re probably not going to get there. We have binary prefixes, it’s long past time to use them when appropriate-- but even appropriate uses are far fewer than they appear: it’s not like you have a practical 640KiB/2GiB limit per program anymore. Even in the cases you do: is it worth confusing millions/billions on consumer spec sheets?