Being a noob and all, I was wondering: what's the real benefit of having a monolithic, let's say, Proxmox instance with router, DNS, VPN, but also Home Assistant and NAS functionality all on one server? I always thought dedicated devices are simpler to maintain or replace, and some services are more critical than others, I guess?
I mean that with k3s you can get a Kubernetes cluster running with zero effort on a single machine. It's easier to maintain, because it handles restarting containers, updating containers, managing ports, provisioning storage, creating databases, etc. for you. I've found the logs and events system to be super useful for troubleshooting compared to dockerd, though it can be tricky when it does something you don't expect.
Obviously you need to learn how to use that automation to take advantage of it, and stuff like networking and persistent volumes can be confusing if you don't have a good guide. The fact that there are different drivers for networking, storage, database management, etc. can also take a bit of time. That said, networking and storage can be confusing on Docker too if you don't have a good guide, and Docker Compose has its own learning curve, so I honestly don't think Kubernetes is that much more effort. The main thing is that most guides are written for Docker, but the Kubernetes documentation is really good too.
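Persistent volumes are a decent example: once the model clicks, asking for storage is a short manifest. A minimal sketch, assuming the cluster has a default StorageClass (k3s ships with local-path); the name and size here are just placeholders:

```yaml
# Ask the cluster for 10Gi; k3s's bundled local-path
# StorageClass provisions it on a node on first use.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-config
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```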
If you just want to run containers for Jellyfin and Home Assistant, Docker Compose will be good enough. But if you want databases, reverse proxy, certificates, DNS, self-healing, etc., for running bigger stuff like Nextcloud and Lemmy, then I would spend the extra 50% effort and do it on Kubernetes; it'll save you time and headaches in the long run.
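For the simple case, the compose file really is tiny. A rough sketch for Jellyfin (the host paths and port mapping are assumptions you'd adjust):

```yaml
# docker-compose.yml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    ports:
      - "8096:8096"   # Jellyfin web UI
    volumes:
      - ./config:/config      # app state
      - /mnt/media:/media:ro  # your library, read-only
```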
Asking an LLM like Llama or ChatGPT might be a good way to learn the basics of Kubernetes, but things move fast once you start getting into the newest operators like CNPG and the Gateway API.
I do all that with docker… I fail to see what Kubernetes adds to that on a single machine.
Kubernetes does it a lot better. No more messing with Caddy config files or Docker sockets; you get the real deal, production stuff.
Containers automatically take themselves off the built-in load balancer and/or restart when they fail a health check.
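That behavior is just a couple of probes on the workload: a failing readiness probe pulls the pod out of the Service's endpoints, and a failing liveness probe gets the container restarted. A rough sketch (the image, port, and /health path are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          ports:
            - containerPort: 8080
          readinessProbe:        # failing: pod leaves the load balancer
            httpGet:
              path: /health
              port: 8080
            periodSeconds: 10
          livenessProbe:         # failing: kubelet restarts the container
            httpGet:
              path: /health
              port: 8080
            failureThreshold: 3
            periodSeconds: 10
```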
A new high-availability Postgres cluster with automatic backups is just a Cluster, a firewall rule is just a NetworkPolicy, a new subdomain is just an HTTPRoute, a new proxy container is just a Gateway, and a new auto-renewed Let's Encrypt certificate is just a Certificate. DNS is set up automatically from the hostname in the HTTPRoute without me touching anything. Everything is high-availability and self-healing; I've never had anything go down or crash.
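To give a flavor of what those look like, here's a rough sketch of two of them. This assumes the CloudNativePG operator and a Gateway API implementation are already installed, and the names, domain, and sizes are placeholders:

```yaml
# CloudNativePG: a 3-instance HA Postgres cluster
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: lemmy-db
spec:
  instances: 3
  storage:
    size: 20Gi
---
# Gateway API: route a subdomain to a Service
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: lemmy
spec:
  parentRefs:
    - name: public-gateway   # an existing Gateway
  hostnames:
    - "lemmy.example.com"
  rules:
    - backendRefs:
        - name: lemmy        # the app's Service
          port: 80
```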
The other thing is ArgoCD, which automatically syncs your cluster with git. If I edit any of my config files in git, it is instantly updated on the cluster itself.
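An Argo CD Application is itself just one more manifest, roughly like this (the repo URL and path are placeholders for your own setup):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: homelab
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://codeberg.org/you/homelab.git
    targetRevision: main
    path: argo/custom_applications
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true     # delete resources removed from git
      selfHeal: true  # revert manual drift on the cluster
```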
Here is my configuration for my 200+ containers, even my Lemmy instance is running here: https://codeberg.org/jlh/h5b/src/branch/main/argo/custom_applications
Docker and the Docker ecosystem copy a lot of features from Kubernetes, because they're solving essentially the same problem, but Kubernetes does it in a production-ready, maintainable way. Kubernetes is an automation tool that lets 1 engineer do the work of 10.
Right, right, you just have to reinvent a dozen wheels, use only software that Kubernetes knows how to work with, and learn a bunch of new names for everything.
Once you learn it, it isn't super crazy, but it obviously takes a lot of effort. I think most people who use k3s and k8s at home are people who use it for work, so they already know how and where things should work and be. That said, I work with Kubernetes every day, managing a handful of giant production clusters, and at home I use Unraid to keep it simple.