You just said that this software was much more complex than Unix tools
Probably need to keep in mind incidental versus essential complexity here.
So with all those configuration options, why is the standalone binary expected to have defaults that may sound sane on one system but insane on a different one?
Because this is how much of what we already use is implemented.
Significant effort goes into portability, interoperability and
balancing compromises. When I’m doing software development e.g.
writing HTTP APIs (about which I apparently know nothing ;) ) - I
feel like I’ve got a responsibility to carefully balance what I expose
as some user-configurable thing versus something managed internally by
the application. Sometimes, thankfully, the application doesn’t even
have to think about it at all - like what TCP flags to set when I dial
some service.
You bring up containers, which are a great example of some cool features
provided by the Linux kernel to solve interesting problems. If you’re
interested, have a look at FreeBSD’s Jails, Plan 9 and LXC. Compare
the interface to all these systems, both at the library level and
userspace, and compare the applications developed using those systems.
How easy is it to get going? How much do I need to keep in my head
when using these features? Docker, Kubernetes, and the rest all have made different tradeoffs and compromises.
Another one I think about is SQLite. Some seriously clever smarts.
Huge numbers of people don’t know anything about for-loops, C, or
B-Trees but can read & write SQL. That’s technology at its best.
Consider how difficult it could be to, say, start a car in all the
different operating conditions it is expected to be used in. But we
never think about it.
We as tech people pride ourselves on familiarity with esoteric detail,
but it doesn’t need to be like this. Nor does memorising it all have
anything to do with “skill”.
What I’m struggling with are thoughts of significant vested commercial
interest in exposing this kind of detail, fuelling multi-billion
dollar service industries. Feelings of being an outsider despite
understanding how it all fits together.
It is a pluggable service that connects to one or more TSDBs, performs periodic queries, and notifies another service when certain thresholds are exceeded.
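For a sense of scale, the core of that kind of service boils down to something like the following minimal sketch. This is not vmalert's actual code; the endpoints, query, response format, and threshold are all made up for illustration.

```go
// Minimal sketch of the kind of service described above: poll one or more
// TSDBs, evaluate a threshold, and notify another service when it is
// exceeded. All names, URLs, and the query format are illustrative only.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
	"strconv"
	"strings"
	"time"
)

// query asks a TSDB for the current value of expr. For simplicity the
// response body is assumed to be a bare number.
func query(tsdb, expr string) (float64, error) {
	resp, err := http.Get(tsdb + "/query?" + url.Values{"q": {expr}}.Encode())
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	if err != nil {
		return 0, err
	}
	return strconv.ParseFloat(strings.TrimSpace(string(b)), 64)
}

// notify tells another service that a threshold has been exceeded.
func notify(receiver, msg string) error {
	resp, err := http.Post(receiver+"/alert", "text/plain", strings.NewReader(msg))
	if err != nil {
		return err
	}
	return resp.Body.Close()
}

func main() {
	tsdbs := []string{"http://tsdb-1:8428", "http://tsdb-2:8428"}
	receiver := "http://alert-receiver:9093"
	for range time.Tick(30 * time.Second) { // periodic queries
		for _, tsdb := range tsdbs {
			v, err := query(tsdb, "error_rate")
			if err != nil {
				log.Print(err)
				continue
			}
			if v > 0.05 { // threshold exceeded: notify the other service
				if err := notify(receiver, fmt.Sprintf("error_rate=%v on %s", v, tsdb)); err != nil {
					log.Print(err)
				}
			}
		}
	}
}
```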
Have you ever written this kind of software before?
It sounds like you are comfortable with the status quo of this part of
the software industry, and I’m truly jealous! If you’ve got any tips on dealing with this kind of stuff you can find my email at https://www.olowe.co/about.html
Thanks :)
Probably need to keep in mind incidental versus essential complexity here.
Go on…
Because this is how much of what we already use is implemented. Significant effort goes into portability, interoperability and balancing compromises. When I’m doing software development e.g. writing HTTP APIs (about which I apparently know nothing ;) ) - I feel like I’ve got a responsibility to carefully balance what I expose as some user-configurable thing versus something managed internally by the application. Sometimes, thankfully, the application doesn’t even have to think about it at all - like what TCP flags to set when I dial some service.
In the case of vmalert, the binary makes no assumptions as to default behaviour because it was not meant to be run standalone. It comes as part of a container with specific environment variables, which in turn is packaged as a Helm chart with sane configuration. Taking the vmalert binary by itself is like taking a Kerberos server binary without its libraries and config files in /etc and complaining that it’s not working.
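To illustrate that "no assumptions" point, here is a minimal sketch of a binary that refuses to guess and expects its packaging (container image, Helm chart) to supply the endpoints. The flag names and behaviour here are for illustration only, not a description of vmalert's real interface.

```go
// Sketch of a binary that ships with no built-in defaults: the container
// image or Helm chart wrapping it is expected to supply these values.
// Flag names are illustrative.
package main

import (
	"flag"
	"log"
)

func main() {
	datasource := flag.String("datasource.url", "", "URL of the TSDB to query (required)")
	notifier := flag.String("notifier.url", "", "URL of the service to notify (required)")
	flag.Parse()

	if *datasource == "" || *notifier == "" {
		// No value is sane on every system, so instead of guessing,
		// fail fast and let the packaging provide the answer.
		log.Fatal("both -datasource.url and -notifier.url must be set")
	}
	// ... start the periodic query/notify loop with these endpoints ...
}
```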
You bring up containers, which are a great example of some cool features provided by the Linux kernel to solve interesting problems. If you’re interested, have a look at FreeBSD’s Jails, Plan 9 and LXC. Compare the interface to all these systems, both at the library level and userspace, and compare the applications developed using those systems. How easy is it to get going? How much do I need to keep in my head when using these features? Docker, Kubernetes, and the rest all have made different tradeoffs and compromises.
I am very well versed in jails, chroot, OpenVZ, LXC, etc. OCI containers are in a different class - don’t think of them as an OS-like environment, think of them as a self-contained, packaged service. Docker is then one example of a runtime on which those services run, and Kubernetes is an orchestrator that manages containers across runtimes. And yes, there are some tradeoffs and compromises, but those are well within the bounds of the Pareto principle - remove the 10% long tail of features on the host, reduce user-facing complexity by 90%.
Another one I think about is SQLite. Some seriously clever smarts. Huge numbers of people don’t know anything about for-loops, C, or B-Trees but can read & write SQL. That’s technology at its best.
Are you arguing that Kubernetes doesn’t do that for you? Because with Kubernetes I can say “run the service in this container with these settings and so many replicas”, attach some conditions like “stop sending traffic to any one container that takes longer than N seconds to respond” and “restart the container if a certain command returns an error”, and just let it run. I can do a rolling upgrade of the nodes and Kubernetes will reschedule the containers on any other available node, it can load balance traffic, I can update the spec of a deployment and Kubernetes will do a zero-downtime upgrade for me. Try implementing the same on a Unix system. You’d need a way to push configs (Ansible, Puppet, etc?). You need load balancing and leader election (Keepalived?). You need error detection. You need DNS. You need to run the services. You need to ensure there’s no library conflict. There’s a LOT of complexity that a Kubernetes user does not need to worry about any more. Tell me that’s not serious smarts and technology at its best.
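For a sense of what that declaration looks like in practice, here is a rough sketch of a Deployment built with the k8s.io/api Go types (a recent API version is assumed). The service name, image, port, and probe command are made up; it only illustrates the replicas, readiness/liveness conditions, and rolling-update settings described above.

```go
// Rough sketch of the declaration described above, using the k8s.io/api Go
// types. Names, image, port, and probe details are made up for illustration.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func exampleDeployment() *appsv1.Deployment {
	replicas := int32(3)
	labels := map[string]string{"app": "my-service"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "my-service"},
		Spec: appsv1.DeploymentSpec{
			// "run the service in this container with so many replicas"
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// updating the pod template triggers a zero-downtime rolling
			// upgrade, replacing pods a few at a time
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-service",
						Image: "registry.example.com/my-service:1.2.3",
						// stop sending traffic to a container that takes
						// longer than N seconds to respond
						ReadinessProbe: &corev1.Probe{
							ProbeHandler: corev1.ProbeHandler{
								HTTPGet: &corev1.HTTPGetAction{
									Path: "/healthz",
									Port: intstr.FromInt(8080),
								},
							},
							TimeoutSeconds: 2,
						},
						// restart the container if a certain command
						// returns an error
						LivenessProbe: &corev1.Probe{
							ProbeHandler: corev1.ProbeHandler{
								Exec: &corev1.ExecAction{
									Command: []string{"/bin/my-service", "healthcheck"},
								},
							},
						},
					}},
				},
			},
		},
	}
}

func main() {
	// print the manifest so the declarative spec is visible
	b, _ := json.MarshalIndent(exampleDeployment(), "", "  ")
	fmt.Println(string(b))
}
```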
What I’m struggling with are thoughts of significant vested commercial interest in exposing this kind of detail, fuelling multi-billion dollar service industries. Feelings of being an outsider despite understanding how it all fits together.
You seem to be conflating Kubernetes and cloud services. Being a cloud native technology does not mean it has to run on a managed cloud service. It just means that it has certain expectations as to how workloads run on it, and if those expectations are met then it makes certain promises about how it will behave.
Have you ever written this kind of software before?
I have contributed to several similar open source projects, yes. What about it?
It sounds like you are comfortable with the status quo of this part of the software industry, and I’m truly jealous!
I am comfortable with my knowledge of this part of the software industry. There is no status quo - there’s currently an equilibrium, yes, but it is a tenuous one. I know the tools I use today will likely not be the same tools I will be using a decade from now. But I also know that the concepts and architectures I learn from managing these tools will still be applicable then, and I can stay agile enough to adapt and become comfortable in a new ecosystem. I would urge you to consider the same approach for yourself.