DNS is the most neoliberal shit system that too many have just accepted as how computers work and have always worked, to the point where I have heard actual landlord arguments deployed to defend it
It’s administered by ICANN, which is like the ideal neoliberal email factory NGO that justifies the de facto monopoly of a tiny few companies with responsible internet government-stewardship stakeholderism, etc type bureaucracy while upholding the right of domain landlords to charge hundreds or even thousands of dollars in rent for like 37 bytes on a server somewhere lol
Before this it was administered by the US military-industrial complex, you can thank Bill Clinton and the US Chamber of Commerce for this version of it along with Binky Moon for giving us cheap .shit TLDs for 3 dollars for the first year
Never forget the architects of the internet were some of the vilest US MIC and Silicon Valley ghouls who ever lived and they are still in control fundamentally no matter how much ICANN and IANA claim to be non-partisan, neutral, non-political, accountable, democratic, international, stewardshipismists
“Nooooo we’re running out of IPv4 addresses and we still can’t get everyone to use the vastly better IPv6 cuz uhhh personal network responsibility. Whattttt??? You want to take the US Department of Defense’s multiple /8 blocks? That’s uhhhh not possible for reasons :|” Internet is simultaneously a free-market hellscape where everyone with an ASN is free to administer it however they want while at the same time everyone is forced into contracts with massive (usually US-based) transit providers who actually run all the cables and stuff. Ohhh you wanna run traffic across MYYYYYYY NETWORK DOMAINNNNNNN??? That’ll be… 1 cent per packet please, money please now money now please money now money please now now nwoN OWOW
Oh yeahhh, somehow CMake is harder to fix than Autotools and it has like 3 layers of macro shit going on lol. CMake does tend to work on the first try more often ime, at least
And I have run into good build systems, they do exist imo. Like building anything on Plan 9 is wonderful and BSD Makefile templates are great. C and Unix was just… never meant for all this :( C is somehow more portable than soooo many programming languages but I wouldn’t say it’s a portable language lol, ppl just have to do all these hacks to write portable C seemingly. Yet we’re still stuck with it and I hate it. I wish Rust had not become the new C++ :/
Programmers aren’t taught portability in Uni at all and that’s why we have a dozen build systems for Python and JS.
Why be portable when you can shove a huge docker container into it and forget about it?
As someone who has had to write build scripts for Rust: fuck Rust (from a package maintainer perspective) -> no dynamic linking, compute-intensive compiler, crates.io as virtually the single source of truth. Dependency trees are so fucked that a trivial library has the power to pull in the test framework for a GAME ENGINE (which requires compiling that engine). Slow-as-fuck compile times that I can’t cache because I write packages.
Yehh, like fundamentally most programmers don’t even care. Anything after Windows and/or Ubuntu is an afterthought at best to so many programmers
Legitttt lol. I hate how containers have become a substitute for portability or even good security design (yeahh it runs as root but it’s in a container, how bad could it beeeee, there’s never been issues with chroots or Linux cgroup namespace things before)
And yehh I feel similarly about Rust :/ They just reinvented npm again lol with all its problems. The compiler is sooooo large and slow, which is why we only have just the one :| which is very concerning for portability and sustainability reasons. Like if it takes an army of corpo-paid engineers to even keep the thing running and no one else can write a standards-complying implementation (the Rust standard is set by the one compiler too lol) is it rly even portable?? Like you can port it and LLVM to new platforms… if you have a lottttttt of time and energy or money to pay people to do it cuz it’s so overcomplicated and large. This is also why I can’t have anything that uses GTK (cuz of librsvg and Spidermonkey I think) or Firefox on my Pinebook Pro or my Mac PowerPC machines, cuz Rust is broken on 32-bit PPC architectures and needs 4 GB of RAM to build those things even just using a single processor :( In practice it just breaks in so many places idk. We don’t even have the committee like with C++ to put in every feature they can think of, rustc itself is basically the standard lol
Although… no dynamic linking is a feature imo hehe. I forget how large Rust binary sizes are though lol
https://rust-gcc.github.io/
I am aware of this and glad to see its progressing :3
But it’s also in a very early state, isn’t it? Tbh… I have been hearing about this for years but I haven’t seen anyone using it :|
Chromium Embedded Framework (CEF) with .NET: “take it or leave it.”
librsvg is such a jumpscare since it just adds the rust compiler to the dependency tree (which has to be bootstrapped from older versions of the compiler, fun!)
I mean, Rust will have to just be ported to different operating systems, it’s not gonna go away any time soon for my use cases.
Oh no lol, I gotta leave it
Ikrrr lol, not a good time. And now you gotta build LLVM if you didn’t already and that’s gonna take… a longggg time but at least it doesn’t do that staged compilation stuff GCC does lol
I understand why they switched and I do rly like Rust’s borrow checker but they also wanna rewrite all the portable C code into supposedly-portable Rust and that’s gonna leave a lot of ppl out (especially ppl with older hardware who can’t afford newer stuff) and probably add to the complexity of entire systems a lottt (idk which Linux rewriter type I prefer: the “rewrite Linux in Rust” type or the “rewrite everything in eBPF” type hehe, there must be other rewriters)
True :/ It’s not for me either
LLVM has the reputation of being easy to port but I’ve never tried and that’s only one piece of the whole thing :|
Because portability has only been practical for the majority of applications since 2005ish.
You’re not having a system in the 90’s where every executable has 100mb of OS libs statically linked into it, be fuckin for real.
You complain a lot about static linking in Rust and it’s the only way to actually achieve portability.
I agree about static linking but… 100mb of code is absolutely massive, do Rust binaries actually get that large?? Idk how you do that even, must be wild amounts of automatically generated object oriented shit lol
Also wdym by this? Ppl have been writing portable programs for Unix since before we even had POSIX
Also Plan 9 did without dynamic linking in the 90s. They actually found their approach was smaller in a lot of cases over having dynamic libraries around: https://groups.google.com/g/comp.os.plan9/c/0H3pPRIgw58/m/J3NhLtgRRsYJ
My brother in Christ if you have to put every lib in the stack into a GUI executable you’re gonna have 100mb of libs regardless of what system you’re using.
Plan 9 was a centrally managed system without the speed of development of a modern OS. Yes they did it better because it was less complex to manage. Plan 9 doesn’t have to cope with the fact that the Flatpak for your app needs lib features that don’t come with your distro.
It was literally not practical to have every app be portable because of space constraints.
You just link against the symbols you use though :/ Lemme go statically link some GTK thing I have lying around and see what the binary size is cuz the entire GTK/GLib/GNOME thing is one of the worst examples of massive overcomplication on modern Unix lol
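Like, a minimal sketch of the classic mechanism before I go test GTK (file names made up, assuming a typical cc/ar toolchain):

```c
/* used.c -- the one function the app actually calls */
int used_function(int x) { return x + 1; }

/* unused.c -- lives in the same static library, never called */
int unused_function(int x) { return x * 1000; }

/* app.c */
int used_function(int x);
int main(void) { return used_function(41); }

/*
 * cc -c used.c unused.c
 * ar rcs libmine.a used.o unused.o   # a static lib is just an archive of .o files
 * cc app.c libmine.a -o app          # the linker only pulls in used.o
 * nm app | grep unused_function      # -> nothing, it never entered the binary
 *
 * (-ffunction-sections plus -Wl,--gc-sections gets you per-function
 * granularity instead of per-object, if you wanna go further)
 */
```

The granularity is per-.o traditionally, which is afaik why old libc implementations split basically every function into its own source file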
There are also Linux distros around that don’t have a dynamic linker but I couldn’t find any stats when I did a quick search
Also I’m not a brother :|
It was less complex cuz they made it that way though, we can too. FlatPaks are like the worst example too cuz they’re like dynamically linked things that bring along all the libraries they need to use anyway (unless they started keeping track of those?) so you get the worst of both static and dynamic linking. I just don’t use them lol
You mean portable like being able to copy binaries between systems? Cuz back in the 90s you would usually just build whatever it was from source if it wasn’t in your OS or buy a CD or smth from a vendor for your specific setup. Portable to me just means like that programs can be built from source and run on other operating systems and isn’t too closely attached to wherever it was first created. Being able to copy binaries between systems isn’t something worth pursuing imo (breaking userspace is actually cool and good :3, that stable ABI shit has meant Linux keeps around so much ancient legacy code or gets stuck with badddd APIs for the rest of time or until someone writes some awful emulation layer lol)
If you link against symbols you are not creating something portable. In order for it to be portable the lib cannot ever change symbols. That’s a constraint you can practically only work with if you have low code movement and you control the whole system. (see below for another way but it’s more complex rather than less complex).
My bad. I apologize. I am being inconsiderate in my haste to reply.
But there’s no other realistic way.
That’s a completely different usage of “portable” and is basically a non-problem in the modern era, as long as you are within the same-ish compatibility time frame (and see my response to the symbols point).
It’s entirely impossible to do this over a distributed ecosystem over the long term. You need symbol migrations so that if I compile code from 1995 it can upgrade to the correct representation in modern symbols. I’ve built such dependency management systems for making evergreen data in DSLs. Mistakes, deprecation, and essentially everything you have ever written has to be permanent, it’s not a simple way to program. It can only be realized in tightly and directly controlled environments like Plan 9 or if you’re the architect of an org.
Dependency management is an organization problem that is complex, temporal, and intricate. You cannot “technology” your way out of the need to manage the essential complexity here.
I’m not entirely sure what you mean tbh. Like if something changes in a library you linked against? I guess you would have to rebuild it, but you would have to rebuild a shared library too and place it into the system. Actually, you don’t necessarily have to rebuild anything, you can just relink it if you still have the object files around (like OpenBSD does this to relink the kernel into a random order on every boot), just swap in a different object file for what you changed
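Like, a rough sketch of that flow (file names made up, assuming a typical cc setup):

```c
/* parser.c -- the only file we end up touching */
int parse(const char *s) { return s && *s ? 1 : 0; }

/* main.c */
int parse(const char *s);
int main(void) { return parse("hello"); }

/*
 * initial build, one .o per source file:
 *   cc -c main.c parser.c
 *   cc main.o parser.o -o prog
 *
 * edit parser.c? only it gets recompiled before the relink:
 *   cc -c parser.c
 *   cc main.o parser.o -o prog
 *
 * and with no edits at all you can still just relink, e.g. in a
 * shuffled order like OpenBSD does for its kernel on every boot:
 *   cc parser.o main.o -o prog
 */
```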
It’s okay :3
This is just my experience ofc but I’ve never used Flatpaks or Snaps anywhere tbh, I just get binaries from my distribution or build them myself if I need something unusual. The issue with that is that it’s not as easy as it should be, I legit should just be able to do “make” and have it work but ofc I have to fix stuff all the time. Plan 9 is a carefully tuned system ofc and I obviously have the Plan 9 brainworms but like… it’s never been a problem cuz the entire operating system builds in like… 7 minutes on a Core 2 Duo, not joking lol. And it was IO-bottlenecked during that on an SSD even! If you have fast compilers it’s not so bad and you only ever need to build the whole system on an update (and mk, the build tool, will ofc not rebuild things that don’t need rebuilding)
Tbh… I would be in favor of just having an interpreted or JIT-compiled language everywhere too (the line between static and dynamic linking gets blurrier but also simpler anyway here hehe). There are many different ways to approach this problem. Idk it’s just easy to write stuff off like that as “not realistic”, especially if you’re an expert in a highly technical field who has done it one way for a long time, but it is realistic cuz its been done even. We should do it cuz our methods and knowledge improving is good
I’ve never written any programs that were subject to such strict verification tbh. I had to look up what “DSL” means lol, Wikipedia says “definitive software library”. I rly think it’s not such a problem most of the time, code changes all the time and people update it, as they should imo, cuz it’s impossible outside of formal verification (which is cool and good) to write perfect bug-free software. And that formal verification can only get you as far as verifying there are no bugs but it can’t force you to write good systems or specifications and can’t help you if there are things like cosmic rays striking your processor ofc hehe
I’m not sure what kind of software you have experience with, like if it needs to not make planes fall out of skies or ppl’s insulin pumps not shut off (you would def know more than me about writing that kind of software) but I think there are many ways to address software reliability regardless of how you link or how you distribute software. Make hashed symbols idk hehe, relink them all you like but they all have a hash in the “definitive” software library maybe. Personally, I love formal methods for stuff like this
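Like, a purely hypothetical sketch of the hashed-symbols idea (the hash and names are made up, assuming GCC/Clang’s alias attribute):

```c
/* the "definitive" name carries a content hash; old hashed
 * versions could be kept around forever for old callers */
int myReallyCoolFunction_h4a2f91c(int x) { return x * 2; }

/* a stable human-facing name is just an alias pointing at
 * whichever hash is current */
int myReallyCoolFunction(int x)
    __attribute__((alias("myReallyCoolFunction_h4a2f91c")));

int main(void) { return myReallyCoolFunction(21); }
```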
I agreee, this isn’t just a technological problem to me but also a social one. Like ideally I would love to see way more money or resources for computer systems research and state-sponsored computer systems. Tbh I feel like most of the reason ppl focus so much on unchanging software, ABIs, APIs, instruction sets, operating systems, etc is cuz capitalists use them to make products and them never changing and just being updated forever is labor reducing lol. When software is designed badly or the world has changed and software no longer suits the world we live in (many such cases), we (the community of computer-touchers lol) should be able to change it. Ofc there will be a transition process for anything and this is quite vague but yeh
Am rly tired, may respond later if you reply
Okay let’s say I am writing MyReallyCoolLibrary V1. I have a `myReallyCoolFunction()`. You want to use `myReallyCoolFunction` in your code. Regardless of whether your system works on API or ABI symbols, what a symbol is is a universal address for a specific functionality. So when my library is compiled it makes a `S_myReallyCoolFunction`, and when your app is compiled it makes a `call S_myReallyCoolFunction`, and this symbol needs to be resolved from somewhere.

So static linking is when you compile the app with `S_myReallyCoolFunction` inside of it, so when it sees `call S_myReallyCoolFunction` it finds the `S_myReallyCoolFunction` in the app data. Dynamic linking is when it finds `call S_myReallyCoolFunction` in a library that’s a file on your machine. Plan9 uses static linking.

So let’s talk about what this means for “code portability”. Let’s say I make a MyReallyCoolLibrary V2 and I have to change a few things, here are alternate universes that can happen:

1. I do not touch `myReallyCoolFunction` at all.
2. I change `myReallyCoolFunction` but I do not change its behavior, I simply refactor the code to be more readable.
3. I change `myReallyCoolFunction` and I change its behavior.
4. I change `myReallyCoolFunction` and change its interface.
5. I remove `myReallyCoolFunction`.

So let’s compute what this should mean for encoding a Symbol in this case.

1. `myReallyCoolFunction` from V2 can stay declared as `S_myReallyCoolFunction`
2. `myReallyCoolFunction` from V2 can stay declared as `S_myReallyCoolFunction`
3. `myReallyCoolFunction` from V2 has to be declared as `S_myReallyCoolFunctionNew`
4. `myReallyCoolFunction` from V2 has to be declared as `S_myReallyCoolFunctionNew`
5. `S_myReallyCoolFunction` no longer exists at all.

Now these are the practical consequences for your code:

1. Nothing, your `call S_myReallyCoolFunction` still resolves.
2. Nothing, your `call S_myReallyCoolFunction` still resolves.
3. You have to refactor your app to `call S_myReallyCoolFunctionNew`.
4. You have to refactor your app to `call S_myReallyCoolFunctionNew`.
5. You have to refactor your app to not need that functionality at all, or get it somewhere else.

So now to make code truly portable I must now remove the app refactor pieces. I have 2 ways of doing that:

1. Keep the old V1 library around next to V2 so old symbols still resolve.
2. Keep every symbol ever published, and its behavior, inside the library forever.

With #1 you have the problem everyone complains about today.

With #2 you essentially carry forward all work ever done. Every mistake, every refactor, every public API that’s ever been written and its behaviors must be frozen in amber and reshipped in the next version.
There is no magic here, it’s a simple but difficult to manage diagram.
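Concretely, a minimal sketch of the above in C (the build commands are just one plausible cc/ELF setup, not gospel):

```c
/* mylib.c -- MyReallyCoolLibrary V1 */
int myReallyCoolFunction(void) { return 1; }

/* app.c */
int myReallyCoolFunction(void);
int main(void) { return myReallyCoolFunction(); }

/*
 * static: the symbol's body is copied into the app itself
 *   cc -c mylib.c app.c
 *   cc app.o mylib.o -o app_static
 *   nm app_static | grep myReallyCoolFunction    # T -> defined inside the binary
 *
 * dynamic: the app only records an undefined symbol, resolved at
 * run time from whatever libmylib.so is on the machine
 *   cc -shared -fPIC mylib.c -o libmylib.so
 *   cc app.c -o app_dynamic -L. -lmylib
 *   nm -D app_dynamic | grep myReallyCoolFunction   # U -> undefined, looked up later
 */
```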
I agree that Plan 9 is really cool, but in practice Linux is the height of active development OS complexity that our society is able to build right now. Windows in comparison is ossifying, and OSX is much simpler.
DSL in this case means Domain Specific Language
But here’s the problem with this statement, it unravels your definition of “code portability”. The whole point of “code portability” is that I don’t have to update my code. So I’m kind-of confused about what we’re arguing if it’s not Flatpak style portability, it’s not code portability, what are we specifically talking about?
The formal verification can only reify the fact that you need something called Foo and I can provide it. The more formal it is the more accurate we can make the description of what Foo is and the more accurately I can provide something that matches that. But I can’t make it so that your Foo is actually a Bar because you meant a Bar but you didn’t know you needed a Bar. We can match shapes to holes but we cannot imbue the shapes or the holes with meaning and match on that. We can only match geometrically, that is to say (discrete) mathematically.
I generally agree with this sentiment but I think the capitalist thing defeating better computing standards, tooling, and functionality is the commodity form. The commodity form and its practical uses don’t care about our nerd shit. The commodity form barely cares to fulfill the functional need its reified form (e.g. an apple) provides. That is to say, the commodity form doesn’t care if you make Shitty Apples or Good Apples as long as you can sell Apples. That applies to software, and as software grows more complex, capitalism tends to produce shitty software simply because the purpose of the commodity form is to facilitate trade, not to be correct/reliable/of any quality.