DNS is the most neoliberal shit system. Too many have just accepted it as how computers work and have always worked, to the point where I have heard actual landlord arguments deployed to defend it

It’s administered by ICANN, which is like the ideal neoliberal email-factory NGO: it justifies the de facto monopoly of a tiny few companies with responsible internet government-stewardship stakeholderism, etc type bureaucracy, while upholding the right of domain landlords to charge hundreds or even thousands of dollars in rent for like 37 bytes on a server somewhere lol

Before this it was administered by the US military-industrial complex; you can thank Bill Clinton and the US Chamber of Commerce for this version of it, along with Binky Moon for giving us cheap .shit TLDs at 3 dollars for the first year

Never forget the architects of the internet were some of the vilest US MIC and Silicon Valley ghouls who ever lived, and they are still in control fundamentally no matter how much ICANN and IANA claim to be non-partisan, neutral, non-political, accountable, democratic, international stewardshipismists

“Nooooo we’re running out of IPv4 addresses and we still can’t get everyone to use the vastly better IPv6 cuz uhhh personal network responsibility. Whattttt??? You want to take the US Department of Defense’s multiple /8 blocks? That’s uhhhh not possible for reasons :|”

The internet is simultaneously a free-market hellscape where everyone with an ASN is free to administer it however they want, while at the same time everyone is forced into contracts with massive (usually US-based) transit providers who actually run all the cables and stuff. Ohhh you wanna run traffic across MYYYYYYY NETWORK DOMAINNNNNNN??? That’ll be… 1 cent per packet please, money please now money now please money now money please now now nwoN OWOW

    • PaX [comrade/them, they/them]@hexbear.netOPM

      I have recently tried to use this and no matter what I did I could not make it work on my machine :(

      Very cool tech, absolutely overcomplicated in every way from its build system (Autotools is… unspeakably bad lol CHECKING FOR WEIRD UNIX QUIRK LAST SEEN ON SOME GNU NERD’S MACHINE IN 1994… not found :) ERROR LIBC NOT FOUND) to its weird modular configuration (they made their own fuckin init for their different services lol) to its API
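For flavor, this is roughly what one of those CHECKING FOR… lines boils down to: Autoconf emits a tiny C file that declares the symbol it’s probing for and tries to link it; if the link fails, the feature is recorded as missing. The function name here is just an example, and the probe is only compiled and linked, never run:

```c
/* Sketch of the link-only probe behind a check like
   AC_CHECK_FUNCS([strlcpy]).  The prototype is deliberately bogus --
   configure only cares whether the symbol resolves at link time,
   so this program is never actually executed. */
char strlcpy (void);

int
main (void)
{
    return strlcpy ();
}
```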

      When it wasn’t segfaulting it was dumping hundreds of thousands of lines of confusing error messages to its log :/

I tried for dayss, I was inside it with a syscall tracer and a debugger, and still couldn’t make it work and had no idea why :( Sometime I must get on their IRC and see if anyone can help me get it working on OpenBSD cuz I do rly like what they’re trying to do

Sooooo it’s the typical GNU project :3 Somehow Freenet (I refuse to acknowledge the new crypto-adjacent project under that name) is better technically lmao

      Edit: just to be clear, GNU and the FSF are cool, no hate for them they have some of the coolest free software nerds working for them. I do have hate for that sex pest who formerly headed the FSF though lol

      • hello_hello [comrade/them]@hexbear.net

The Domain Name System today enables traffic amplification attacks and censorship (e.g. China)

        catgirl-disgust

Autotools is pretty dense, but all build systems are like that. Believe me, the only good build system is the cp -rf $src $dest build system badeline-heh.

At least Autotools was one of the first (and transparent too). I don’t know what the excuse for CMakeLists.txt is

        • PaX [comrade/them, they/them]@hexbear.netOPM

Yea, all these projects are rly lib, and it sucks cuz I kinda would like a 1984 AUTHORITARIAN 1984 overlay network that actually has the power to like… ban ppl and content from the network (at least with nodes who agree with the mod team, maybe have some election process idk) cuz Freenet is extremely dead partially cuz it just became filled with the worst stuff imaginable and lib freeze peach ideology has no solution to that

Oh yeahhh, somehow CMake is harder to fix than Autotools and it has like 3 layers of macro shit going on lol. CMake does tend to work on the first try more ime at least

And I have run into good build systems, they do exist imo. Like building anything on Plan 9 is wonderful and BSD Makefile templates are great. C and Unix were just… never meant for all this :( C is somehow more portable than soooo many programming languages, but I wouldn’t say it’s a portable language lol, ppl just have to do all these hacks to write portable C seemingly. Yet we’re still stuck with it and I hate it. I wish Rust had not become the new C++ :/

          • hello_hello [comrade/them]@hexbear.net

            Programmers aren’t taught portability in Uni at all and that’s why we have a dozen build systems for Python and JS.

            Why be portable when you can shove a huge docker container into it and forget about it?

            I wish Rust had not become the new C++

As someone who had to write build scripts for Rust: fuck Rust (from a package maintainer perspective) -> no dynamic linking, a compute-intensive compiler, virtually a single source of truth in crates dot io. Dependency trees are so fucked that a trivial library has the power to pull in the test framework for a GAME ENGINE (which requires compiling that engine). Slow-ass-fuck compile times that I can’t cache because I write packages.

            • PaX [comrade/them, they/them]@hexbear.netOPM

              Programmers aren’t taught portability in Uni at all and that’s why we have a dozen build systems for Python and JS.

              Yehh, like fundamentally most programmers don’t even care. Anything after Windows and/or Ubuntu is an afterthought at best to so many programmers

              Why be portable when you can shove a huge docker container into it and forget about it?

              Legitttt lol. I hate how containers have become a substitute for portability or even good security design (yeahh it runs as root but it’s in a container, how bad could it beeeee how-much-could-it-cost, there’s never been issues with chroots or Linux cgroup namespace things before)

And yehh I feel similarly about Rust :/ They just reinvented npm again lol with all its problems. The compiler is sooooo large and slow, which is why we only have just the one :| which is very concerning for portability and sustainability reasons. Like if it takes an army of corpo-paid engineers to even keep the thing running and no one else can write a standards-complying (the Rust standard is set by the one compiler too lol) implementation, is it rly even portable?? Like you can port it and LLVM to new platforms… if you have a lottttttt of time and energy or money to pay people to do it, cuz it’s so overcomplicated and large.

This is also why I can’t have anything that uses GTK (cuz of librsvg and SpiderMonkey I think) or Firefox on my Pinebook Pro or my Mac PowerPC machines, cuz Rust is broken on 32-bit PPC architectures and needs 4 GB of RAM to build those things even just using a single processor :( In practice it just breaks in so many places idk. We don’t even have the committee like with C++ to put in every feature they can think of, rustc itself is basically the standard lol

              Although… no dynamic linking is a feature imo hehe. I forget how large Rust binary sizes are though lol

                • PaX [comrade/them, they/them]@hexbear.netOPM

I am aware of this and glad to see it’s progressing :3

But it’s also in a very early state, isn’t it? Tbh… I have been hearing about this for years but I haven’t seen anyone using it :|

              • hello_hello [comrade/them]@hexbear.net

                Anything after Windows and/or Ubuntu is an afterthought at best to so many programmers

Chromium Embedded Framework (CEF) with .NET: "take it or leave it."

                librsvg

                librsvg is such a jumpscare since it just adds the rust compiler to the dependency tree (which has to be bootstrapped from older versions of the compiler, fun!)

I mean, Rust will just have to be ported to different operating systems, it’s not gonna go away any time soon for my use cases.

                • PaX [comrade/them, they/them]@hexbear.netOPM

Chromium Embedded Framework (CEF) with .NET: "take it or leave it."

                  Oh no lol, I gotta leave it

                  librsvg is such a jumpscare since it just adds the rust compiler to the dependency tree (which has to be bootstrapped from older versions of the compiler, fun!)

Ikrrr lol, not a good time. And now you gotta build LLVM if you didn’t already, and that’s gonna take… a longggg time, but at least it doesn’t do that staged compilation stuff GCC does lol

                  I understand why they switched and I do rly like Rust’s borrow checker but they also wanna rewrite all the portable C code into supposedly-portable Rust and that’s gonna leave a lot of ppl out (especially ppl with older hardware who can’t afford newer stuff) and probably add to the complexity of entire systems a lottt (idk which Linux rewriter type I prefer: the “rewrite Linux in Rust” type or the “rewrite everything in eBPF” type hehe, there must be other rewriters)

                  I mean, Rust will have to just be ported to different operating systems, it’s not gonna go away any time soon for my use cases.

                  True :/ It’s not for me either

                  LLVM has the reputation of being easy to port but I’ve never tried and that’s only one piece of the whole thing :|

            • piggy [they/them]@hexbear.net

              Because portability has only been practical for the majority of applications since 2005ish.

You’re not having a system where every executable has 100mb of OS libs statically linked to it in the 90s, be fuckin for real.

You complain a lot about static linking in Rust, and it’s the only way to actually achieve portability.

              • PaX [comrade/them, they/them]@hexbear.netOPM

                I agree about static linking but… 100mb of code is absolutely massive, do Rust binaries actually get that large?? Idk how you do that even, must be wild amounts of automatically generated object oriented shit lol

                Because portability has only been practical for the majority of applications since 2005ish.

                Also wdym by this? Ppl have been writing portable programs for Unix since before we even had POSIX

                Also Plan 9 did without dynamic linking in the 90s. They actually found their approach was smaller in a lot of cases over having dynamic libraries around: https://groups.google.com/g/comp.os.plan9/c/0H3pPRIgw58/m/J3NhLtgRRsYJ

                • piggy [they/them]@hexbear.net

                  I agree about static linking but… 100mb of code is absolutely massive, do Rust binaries actually get that large?? Idk how you do that even, must be wild amounts of automatically generated object oriented shit lol

                  My brother in Christ if you have to put every lib in the stack into a GUI executable you’re gonna have 100mb of libs regardless of what system you’re using.

                  Also Plan 9 did without dynamic linking in the 90s. They actually found their approach was smaller in a lot of cases over having dynamic libraries around: https://groups.google.com/g/comp.os.plan9/c/0H3pPRIgw58/m/J3NhLtgRRsYJ

Plan 9 was a centrally managed system without the speed of development of a modern OS. Yes, they did it better, because it was less complex to manage. Plan 9 doesn’t have to cope with the fact that the Flatpak for your app needs lib features that don’t come with your distro.

                  Also wdym by this? Ppl have been writing portable programs for Unix since before we even had POSIX

                  It was literally not practical to have every app be portable because of space constraints.

                  • PaX [comrade/them, they/them]@hexbear.netOPM

                    My brother in Christ if you have to put every lib in the stack into a GUI executable you’re gonna have 100mb of libs regardless of what system you’re using.

                    You just link against the symbols you use though :/ Lemme go statically link some GTK thing I have lying around and see what the binary size is cuz the entire GTK/GLib/GNOME thing is one of the worst examples of massive overcomplication on modern Unix lol

                    There are also Linux distros around that don’t have a dynamic linker but I couldn’t find any stats when I did a quick search

                    Also I’m not a brother :|

                    Plan 9 was a centrally managed system without the speed of development of a modern OS. Yes they did it better because it was less complex to manage. Plan 9 doesn’t have to cope with the fact that the FlatPak for your app needs lib features that don’t come with your distro.

It was less complex cuz they made it that way though, and we can too. Flatpaks are like the worst example too cuz they’re like dynamically linked things that bring along all the libraries they need anyway (unless they started keeping track of those?), so you get the worst of both static and dynamic linking. I just don’t use them lol

                    It was literally not practical to have every app be portable because of space constraints.

You mean portable like being able to copy binaries between systems? Cuz back in the 90s you would usually just build whatever it was from source if it wasn’t in your OS, or buy a CD or smth from a vendor for your specific setup. Portable to me just means that programs can be built from source and run on other operating systems and aren’t too closely attached to wherever they were first created. Being able to copy binaries between systems isn’t something worth pursuing imo (breaking userspace is actually cool and good :3, that stable ABI shit has meant Linux keeps around so much ancient legacy code or gets stuck with badddd APIs for the rest of time or until someone writes some awful emulation layer lol)

    • piggy [they/them]@hexbear.net

Yeah, too bad GNS implements rate limiting and bad-actor protection by essentially mimicking butt coin and requiring proof of work.

Trust is not a technologically solvable problem. This includes current DNS systems, because ownership is effectively proof of stake.