Hello everyone! During one of those illuminated evenings, I got the idea to move my small server at Scaleway to a more powerful server at Hetzner. If I make the move, I am thinking of splitting the server into several VMs to host services that belong to different trust boundaries, for example:
- A Lemmy/writefreely instance
- Vaultwarden/Gitea
- Wireguard tunnel to my home infrastructure
- Blogs, and other convenience services
To achieve the best level of separation, I was thinking of using VMs. My default choice would be Proxmox, because I have used it in the past and generally trust it. However, I am trying to evaluate multiple options, and maybe someone has good (or better) experiences to share.
Other options I thought about are:
- Run everything in Docker. I am going to do this anyway, but Docker escapes are always possible, especially with public-facing images that I did not write myself and/or that require a host volume.
- KVM directly? I am OK even without a GUI, to be honest. I don't know whether there is an Ansible module, or even better a Terraform provider, for this; that would be great. (EDIT: I found https://registry.terraform.io/providers/dmacvicar/libvirt/0.7.1 which seems awesome!)
- ESXi? I have no experience with this solution.
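On the KVM + Terraform point: here is a minimal sketch of what a config for the dmacvicar/libvirt provider could look like. The resource names, sizes, and the Debian cloud image URL are my assumptions, not a tested setup:

```hcl
terraform {
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.7.1"
    }
  }
}

# Talk to the local system libvirt daemon.
provider "libvirt" {
  uri = "qemu:///system"
}

# Base volume downloaded from a cloud image (URL is an assumption).
resource "libvirt_volume" "debian" {
  name   = "debian-base.qcow2"
  pool   = "default"
  source = "https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2"
  format = "qcow2"
}

# One of the VMs, e.g. for Vaultwarden/Gitea.
resource "libvirt_domain" "vaultwarden" {
  name   = "vaultwarden"
  memory = 2048
  vcpu   = 2

  disk {
    volume_id = libvirt_volume.debian.id
  }

  network_interface {
    network_name = "default"
  }
}
```

With something like this, `terraform apply` would fetch the base image and then define and boot the domain against the local libvirt socket.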
Any idea or recommendation?
Oh right, there is the XML aspect that I didn’t consider.
I have to say that I strongly prefer the declarative Terraform approach over Ansible, and the libvirt Terraform provider looks quite mature. I have seen that there are even some providers for Proxmox (less mature, in my opinion), so either way the machine definitions could be codified and automated. The thing is, if the machines are all defined in Terraform code, there is not much use left for Proxmox (metrics are going to come from node exporter; maybe just backups and snapshots?).
Ansible can be declarative if you do it right and take the time to write a few roles to manage your use case. For example, my Ansible libvirt config looks like this:
```yaml
libvirt_vms:
  - name: front.example.org
    xml_file: "{{ playbook_dir }}/data/libvirt/front.example.org.xml"
    autostart: no
  - name: home.example.org
    xml_file: "{{ playbook_dir }}/data/libvirt/home.example.org.xml"
    state: running

libvirt_port_forwards:
  - vm_name: front.example.org
    vm_ip: 10.10.10.225
    vm_bridge: virbr1
    dnat:
      - host_interface: eth0
        host_port: 22225  # SSH
        vm_port: 22
      - host_interface: eth0
        host_port: 19225  # netdata
        vm_port: 19999

libvirt_networks:
  - name: home
    mac_address: "52:52:10:ae:0c:cd"
    forward_dev: "eth0"
    bridge_name: "virbr1"
    ip_address: "10.10.10.1"
    netmask: "255.255.255.0"
    autostart: yes
    state: active
```
This is the only config I ever touch, since the role transparently handles configuration changes, starting/stopping VMs, networks, etc. For initial provisioning I have a shell script that wraps virsh/virt-install/virt-sysprep to set up a new VM in about a minute (it uses a preseed file, which is similar to what cloud-init offers). This part could be better integrated with Ansible. Terraform has other advanced features, such as managing hosts on cloud providers, but I don't need those at the moment. If I ever do, I think I would still use Ansible to run Terraform deployments [1].

Edit: the libvirt role, if you're curious.
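A wrapper along those lines might look like the sketch below. This is a hypothetical reconstruction, not the actual script: the VM name, disk path, sizes, installer URL, and preseed file name are all assumptions. With `DRY_RUN=1` (the default here) it only prints the commands, so it can be inspected on a machine without libvirt.

```shell
#!/bin/sh
# Hypothetical sketch of a virsh/virt-install/virt-sysprep wrapper.
set -eu

NAME="${1:-testvm}"
DISK="/var/lib/libvirt/images/${NAME}.qcow2"

# DRY_RUN=1 prints each command instead of executing it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi; }

# Unattended Debian install driven by a preseed file
# (the part that plays the role cloud-init would otherwise play).
run virt-install \
  --name "$NAME" \
  --memory 2048 --vcpus 2 \
  --disk "path=${DISK},size=10" \
  --location http://deb.debian.org/debian/dists/stable/main/installer-amd64/ \
  --initrd-inject preseed.cfg \
  --extra-args "auto=true priority=critical" \
  --noautoconsole

# Reset machine-specific state (SSH host keys, machine-id, logs) in the image.
run virt-sysprep -d "$NAME"
```

Running it with `DRY_RUN=0 ./new-vm.sh front.example.org` would then perform the actual install.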