• jj4211@lemmy.world · 2 months ago

    Thing is that they could have preserved the textual nature and had some sort of external metadata to facilitate the ‘fanciness’. I have worked in other logging systems that did that, with the ability to consume the plaintext logs in an ‘old fashioned’ way but a utility being able to do all the nice filtering, search, and special event marking that journalctl provides without compromising the existence of the plain text.

    • Possibly linux · 2 months ago

      Plain text is slow and cumbersome for large amounts of logs. It would have had a decent performance penalty for little added value.

      If you like text, you can just pipe journalctl.

      • msage@programming.dev · 2 months ago

        But if journalctl itself is slow, piping doesn't help.

        We have only one week of very sparse logs in it, yet queries take several seconds… grepping tens of gigabytes of plain-text logs can sometimes be faster. That is insane.

      • jj4211@lemmy.world · 2 months ago

        As I said, I’ve dealt with logging where the variable-length text was kept as plain text, with the external metadata/index kept as binary. You get the best of both worlds. It’s also easier to have very predictable entry alignment, since the messy variable-length data stays outside the binary file, and the binary file can use fixed-size records. You may end up with some duplicated data (e.g. a text timestamp in the log file alongside the binary timestamp in the metadata), but overall it’s not too bad.
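        The scheme described above can be sketched in a few lines. This is only an illustration of the idea, not any particular logging system: log lines go to a grep-able plain-text file, while a side file holds fixed-size binary records (timestamp + byte offset) that let a tool binary-search to the right spot without scanning the text. File names and function names here are hypothetical.

```python
import bisect
import os
import struct
import tempfile

# Fixed-size index record: little-endian (float64 timestamp, uint64 byte offset).
REC = struct.Struct("<dQ")

def append(log_path, idx_path, ts, message):
    """Append one human-readable log line plus one binary index record."""
    with open(log_path, "ab") as log, open(idx_path, "ab") as idx:
        offset = log.tell()  # append mode: position is end of file
        log.write(f"{ts:.3f} {message}\n".encode())  # stays plain text
        idx.write(REC.pack(ts, offset))              # metadata stays binary

def entries_since(log_path, idx_path, ts):
    """Binary-search the index, then seek straight into the text file."""
    with open(idx_path, "rb") as idx:
        raw = idx.read()
    records = [REC.unpack_from(raw, i * REC.size)
               for i in range(len(raw) // REC.size)]
    pos = bisect.bisect_left([r[0] for r in records], ts)
    if pos == len(records):
        return []
    with open(log_path, "rb") as log:
        log.seek(records[pos][1])
        return log.read().decode().splitlines()

# Demo: write three entries, then query by time via the index.
tmp = tempfile.mkdtemp()
log_path = os.path.join(tmp, "app.log")
idx_path = os.path.join(tmp, "app.log.idx")
for t, msg in [(100.0, "boot"), (200.0, "login"), (300.0, "logout")]:
    append(log_path, idx_path, t, msg)
recent = entries_since(log_path, idx_path, 150.0)
```

        The text file remains usable with cat/grep/tail as-is; only the optional index is opaque, and it can always be rebuilt from the text.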