Hello Linux Gurus,

I am seeking divine inspiration.

I don't understand the apparent lack of hypervisor-based kernel protections in desktop Linux. There seems to be a significant opportunity for improvement beyond the basics of KASLR, stack canaries, and shadow stacks. Yet I don't see much work in this area on the Linux desktop, even though people far smarter than me develop for the kernel every day, and they have not seen fit to build the specific advanced protections I get into below. Where is the gap in my understanding? Is this so difficult or costly that the open source community cannot afford it?

Windows PCs, recent Macs, iPhones, and a few Android vendors such as Samsung run their kernels atop a hypervisor. This design permits introspection and enforcement of security invariants from outside, or underneath, the kernel. Common mitigations include protecting critical data structures such as page table entries, function pointers, and SELinux policy decisions, to raise the bar on injecting kernel code. Hypervisor-enforced kernel integrity appears to be a popular and at least somewhat effective mitigation, yet it remains rare on desktop Linux despite its adoption by other OSs.

Meanwhile, in the desktop Linux world, users are lucky if a distribution even implements secure boot and offers signed kernels. Popular software packages, such as the NVidia and VirtualBox drivers, often require short-circuiting this mechanism so the user can build and install kernel modules. SELinux is uncommon, so on most installations root access is more or less equivalent to kernel privilege, including the ability to introduce arbitrary code into the kernel. TPM-based disk encryption is officially supported only experimentally by Ubuntu and is usually tied to secure boot; users are largely on their own elsewhere. Taken together, this feels like a missed opportunity for additional defense-in-depth.

It’s easy to put code in the kernel. I can do it in a couple of minutes for a “hello world” module. It’s really cool that I can do this, but is it a good idea? Shouldn’t somebody try and stop me?
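
For concreteness, here is roughly what such a "hello world" module looks like. This is a minimal sketch of my own, with arbitrary names and messages:

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Trivial hello-world module");

    /* Runs when the module is inserted into the kernel. */
    static int __init hello_init(void)
    {
            pr_info("hello: now running in kernel mode\n");
            return 0;
    }

    /* Runs when the module is removed. */
    static void __exit hello_exit(void)
    {
            pr_info("hello: goodbye\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

Build it against the running kernel's headers, insmod it as root, and the message shows up in dmesg. On a typical desktop install without module signature enforcement, nothing checks where that code came from.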

Please insert your unsigned modules into my brain-kernel. What have I failed to understand, or why is this the design of the kernel today? Is it an intentional omission? Is it somehow contrary to the desktop Linux ethos?

  • Max-P@lemmy.max-p.me · 1 day ago

    You absolutely can if you want to. Xen has been around for decades, and most people who do GPU passthrough are already kind of technically doing that with plain Linux. Xen is the closest to what Microsoft does: technically you run Hyper-V first and then Windows on top of it, which is similar to Xen and its special dom0.

    But fundamentally the hard part is that the freedom of Linux brings an infinite combination of possible distros, kernels, modules and software. Each module is compiled for the exact version of the kernel you run. The module must be signed by the same key as the kernel, each distro has its own set of kernels and modules, and those keys need to be trusted by the bootloader. So when you go to download the new NVIDIA driver directly from their site, you run into problems. And somehow this entire mess needs to link back to one source of trust at the root of the chain.

    Microsoft, on the other hand, controls the entire OS experience, so who signs what is pretty straightforward. Windows drivers are also very portable: one driver can work from Windows Vista through 11, so it's easy to evaluate one developer and sign their drivers. That's just one signature. And the Microsoft root cert is preloaded on every motherboard, so it just works.

    So Linux distros that do support secure boot properly will often have to prompt the user to enroll their own keys (which is a UX nightmare of its own), because FOSS likes to do things right by giving full control to the user. Ideally you manage your own keys, so even a developer from a distro can't build a signed kernel/module to exploit you; you are the root of trust. That's also a UX nightmare, because average users are good at losing keys and locking themselves out.

    It's kind of a huge mess in the end, to solve problems very few users have or care about. On Linux it's not routine to install kernel-mode malware like Vanguard or EAC. We use sandboxing a lot via Flatpak, Docker and the like. You usually get your apps from your distro, which you trust, or from Flathub, which you also trust. The kernel is very rarely compromised, and it's pretty easy to clean up afterwards too. It just hasn't been a problem. Linux users running malware at all is already very rare, so protecting against rogue kernel modules and the like just isn't needed badly enough for anyone to be interested in spending the time to implement it.

    But as a user armed with a lot of patience, you can make it all work, and you'll be the only one in the world who can get in. Secure boot with systemd-cryptenroll using the TPM is a fairly common setup. If you're a corporate IT person you can lock down Linux a lot with secure boot, module signing, SELinux policies and restricted executables. The tools are all there for you to do it as a user, and you get to tailor it specifically to your environment too! You can remove every single driver and feature you don't need from the kernel, sign that, and have a massively reduced attack surface. Don't need modules? Disable runtime module loading entirely (see the sketch below). Mount /home noexec. If you really care about security you can make it way, way stronger than Windows with everything enabled, and you don't even need a hypervisor to do that.
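
    For example, the "no runtime module loading" switch is just a one-way sysctl. Here is a minimal sketch; normally you would simply set kernel.modules_disabled=1 with sysctl or in sysctl.conf, and the C program is only to show it is an ordinary kernel knob:

        /* Sketch: flip kernel.modules_disabled, a one-way switch.
         * Once set to 1, the kernel refuses to load any module until the
         * next reboot, even for root. Must itself be run as root. */
        #include <stdio.h>

        int main(void)
        {
            FILE *f = fopen("/proc/sys/kernel/modules_disabled", "w");
            if (!f) {
                perror("open modules_disabled");
                return 1;
            }
            fputs("1\n", f);
            fclose(f);
            return 0;
        }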

  • catloaf@lemm.ee · 2 days ago

    It’s easy to put code in the kernel. I can do it in a couple of minutes for a “hello world” module. It’s really cool that I can do this, but is it a good idea? Shouldn’t somebody try and stop me?

    Yes, not being root stops you. Don’t run untrusted code as root.

    • henfredemars@infosec.pubOP · 2 days ago

      My illustration is meant to highlight the lack of care taken with kernel code compared to systems that require code signing. If some privileged process is compromised, it can simply ask the kernel to insert a module containing arbitrary code (sketched below). Should processes be able to do this? For many systems, the answer is no: only authenticated code can run in the kernel, and no userspace process has the right to insert arbitrary code. A system with a complete secure boot implementation and signed kernel modules prevents even root from inserting an unauthorized module. Indeed, on a Samsung Android device with RKP, unconfined root still cannot insert a kernel module that isn't signed by Samsung. The idea of restricting even root from doing dangerous things isn't new; SELinux uses rules to enforce similar concepts.
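
      To make that concrete, loading a module is a single syscall away for any process holding CAP_SYS_MODULE. Below is a rough sketch of essentially what insmod does, with minimal error handling and purely for illustration:

          /* Sketch: load a kernel module the same way insmod does.
           * Succeeds for any process with CAP_SYS_MODULE unless the kernel
           * enforces module signatures or module loading is disabled. */
          #include <fcntl.h>
          #include <stdio.h>
          #include <sys/syscall.h>
          #include <unistd.h>

          int main(int argc, char **argv)
          {
              if (argc < 2) {
                  fprintf(stderr, "usage: %s module.ko\n", argv[0]);
                  return 1;
              }

              int fd = open(argv[1], O_RDONLY);
              if (fd < 0) {
                  perror("open");
                  return 1;
              }

              /* finit_module(2): hand the kernel an object file to run. */
              if (syscall(SYS_finit_module, fd, "", 0) != 0) {
                  perror("finit_module");
                  return 1;
              }
              return 0;
          }

      On a stock desktop kernel without signature enforcement that call succeeds for root; on the signed-module systems I mentioned, it fails unless the .ko carries a trusted signature.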

      Yes, not being root is a useful step, but protecting the kernel from root might still be desirable, and many systems try to do this. Exploits can sometimes get untrusted code running as root on otherwise reasonably secure systems. It's nice if we can have layered security that goes beyond that, so I ask: why don't we have this today when other systems do?

      • Possibly linux · 2 days ago

        What is your threat model? If someone gains root they can do whatever they want. No security will protect you from that.

        • Blaster M@lemmy.world · 2 days ago

          If browserjack malware does a complicated zero-click attack to gain root when you accidentally typo a website, unfettered access to the system by root is a big problem. This is why SELinux exists. This is why browser sandboxing exists. This is why virtualization of modules and drivers and so on exists. This "security theatre", as you call it, is there to provide protection. Is protection guaranteed? No, but it's the difference between locking your door at night and leaving it wide open.

          • tiddy@sh.itjust.works · 1 day ago

            Jesus H Christ, you're running your browser as root?

            Unless you mean an Ocean's Eleven-esque double zero-day exploit that jacks the userspace browser, stacked on a root-level privilege-escalation zero-day, on arguably the most secure OS in the world.

            I think we have insanely different threat models

            • Blaster M@lemmy.world · 1 day ago

              And yet, state actors have done exactly what you've laid out. This is "challenge accepted" to a hacker.

              • tiddy@sh.itjust.works · 24 hours ago

                So your threat model is state-level hackers?

                On desktop PCs?

                Sure, any malicious actor in the universe would love to make a botnet out of 90% of the world's computers, but that doesn't make it any more plausible outside of movies.

          • Possibly linux · 1 day ago

            There is no zero-click root on any platform. That's not how it works.

            Browsers don't run as root, and all of the browser processes are sandboxed with least privilege enforced. Far too many things would need to go wrong.

          • henfredemars@infosec.pubOP · 2 days ago

            Precisely! It's about making compromise expensive and multi-layered, driving up the cost until it becomes fiscally unattractive for the attacker.

        • henfredemars@infosec.pubOP · 2 days ago

          The threat model is that root shouldn't have to be a lose condition. It is certainly very bad, but there should be some things root cannot do, like modify the kernel, even while root remains the highest privilege level designed into the system. SELinux rules severely constrain the root user on Android, for example, to frustrate a total system compromise even if an attacker gains root.

          The attacker must then find a way to patch the kernel to get the unconstrained root that we have today on Linux desktops.

          • Possibly linux · 1 day ago

            A root user can modify the kernel on disk and then trigger a reboot. You need either containers or full virtualization to protect against that.

            • henfredemars@infosec.pubOP · 1 day ago

              This cannot be done on most consumer OSs like Macs or Windows, or Android smartphones, because secure boot would refuse to load a modified kernel from the disk. It is possible on typical desktop Linux installations if they don’t implement secure boot.

              • tiddy@sh.itjust.works · 1 day ago

                Root access on any of these platforms would still result in persistent low level system access

                • henfredemars@infosec.pubOP · 23 hours ago

                  On Android, secure boot means the bootloader is validated, the kernel is validated, and the kernel then validates all application code loaded into the system. You need an additional bug to obtain persistent access if your code has not been signed by an authorized party.

                  This is why iPhone jailbreaks are bifurcated into tethered and untethered: many modern OSs require a second bug to survive a reboot and achieve persistence, because the introduced code won't pass the signature check.

              • Possibly linux · 1 day ago

                Secure boot is built on a model of proprietary software and anti-user freedom. For secure boot to do anything, you first have to restrict what software the user can run, which is already a no-no. Even if you ignore that, secure boot implementations are often riddled with security problems, and many companies ship default keys. Your average device has multiple exploits.

                Also, why would it matter whether an adversary gains root or kernel-level access? Root can do anything, so it wouldn't matter much.

      • catloaf@lemm.ee · 2 days ago

        I should not be forbidden from running my own code on my own hardware, right? But I should be protected from random code taking over my entire system, right? That’s why Linux restricts certain operations to root.

        • henfredemars@infosec.pubOP · 2 days ago

          Absolutely! It's your computer and it should always obey you. Trouble is, the kernel doesn't know the difference between you, the human being, and a program running as root on your behalf, like wpa_supplicant for example, which may potentially be open to compromise.

          Perhaps, like a safety on a gun, there should be another step to inserting code into your kernel, to ensure it's being done very deliberately. We kind of see this with mokmanager for enrolling secure boot keys: physical button presses are required to add a key, and by design it cannot be (easily) automated by software. You have to reboot and physically do it in the UEFI.

          This is where runtime or hypervisor kernel protections make sense: making sure the kernel behaves within expected parameters except when we really, truly want to load new kernel code. It's the same reason we have syscall filtering on so many services, like the OpenSSH server process pre-authentication. We don't want the system to get confused when it really matters.
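
          That filtering is typically done with seccomp. A rough sketch of my own follows; it is a toy blocklist, not what OpenSSH actually installs, and it denies just the module-loading syscalls for the calling process and its children:

              /* Sketch: seccomp-BPF filter that returns EPERM for the
               * module-loading syscalls, so even code running as root inside
               * this process can no longer ask the kernel to load modules.
               * A hardened filter would also validate seccomp_data.arch. */
              #include <errno.h>
              #include <linux/filter.h>
              #include <linux/seccomp.h>
              #include <stddef.h>
              #include <stdio.h>
              #include <sys/prctl.h>
              #include <sys/syscall.h>
              #include <unistd.h>

              #define DENY(nr) \
                  BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, (nr), 0, 1), \
                  BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA))

              int main(void)
              {
                  struct sock_filter filter[] = {
                      /* Load the number of the syscall being attempted. */
                      BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
                      DENY(__NR_init_module),
                      DENY(__NR_finit_module),
                      DENY(__NR_delete_module),
                      /* Everything else is allowed. */
                      BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
                  };
                  struct sock_fprog prog = {
                      .len = sizeof(filter) / sizeof(filter[0]),
                      .filter = filter,
                  };

                  /* Required so an unprivileged process may install a filter. */
                  prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
                  if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) {
                      perror("seccomp");
                      return 1;
                  }
                  printf("module-loading syscalls are now filtered\n");
                  return 0;
              }

          Real-world filters are usually allowlists rather than blocklists, but the principle is the same: the kernel refuses the request even though the process nominally has the privilege.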

  • Blaster M@lemmy.world · 2 days ago

    This is a question I myself have wondered about for a long while now. Before the Arch warriors come in to shout about how Secure Boot is evil and also useless, and how everything Windows, Mac, and so on do for security is only needed because they're insecure and not free and spyware and other angry words: I agree with your assessment.

    The problem is that while Linux is well tested in server environments, it is still an insignificant factor on the desktop. Servers are very well locked down in a lot of cases, so if something makes its way into the system itself, many security mitigations along the way have already failed.

    Desktops are different because the user is a lot more likely to install/run/browse to stuff that is dangerous.

    Right now, the only saving grace for Linux is that malware primarily targets Windows and Android, the most commonly used operating systems. What's the point of targeting less than 4 percent of the world when you can target 90 percent of it?

    This will change if "the year of the Linux desktop" actually happens and people start using Linux desktops en masse. You can bet on more Linux malware appearing.

    • henfredemars@infosec.pubOP · 2 days ago

      One consideration is that on a Linux server, the data of interest to attackers is more likely to be accessible by some low-privileged daemon like a SQL server. Compromising the kernel in such a fundamental way doesn't provide anything of value on its own, so perhaps defenses are not as mature along this plane. It's enough to get to the database. You might go for the kernel to move laterally, but the kernel itself isn't the gold and jewels.

      Server environments are much more tightly controlled, as you mentioned. For that reason, and because of the differences in use case, I feel like there are more degrees of trust (or distrust) on a user's system than on a server configured top-to-bottom by an expert, and desktop Linux doesn't really express this idea as well as maybe it should. It places a lot of trust in the user, to say the least, and that's not ideal for security.

      I think secure boot is a great idea. There must be a way to have layered security without abusing it to lock users out of machines they own.

    • just_another_person@lemmy.world · 2 days ago

      You have absolutely zero clue what in the world you are talking about 😂😂😂😂

      You're commenting as if there is a difference between a "desktop" and a "server" install, when in practice there is none. It's not Windows, with different tiered builds by price. 😭

      • Blaster M@lemmy.world · 1 day ago

        Incorrect. The difference is not that there's a server edition or a desktop edition (though for many Linux distros there very much are server and desktop editions, even if the only difference is which packages are installed by default). It's that when you properly set up a server with internet-exposed services, you usually are smart enough, have gone to school for this, have learned from experience, or all of the above, to know how to secure a Linux system for server use, and you end up with a configuration that would be inconvenient at best on a desktop but is more secure for the purpose of a server. In addition, when running a server you stick to what you need; you don't arbitrarily download stuff onto a server, as that could break your live service(s) if something goes wrong.

        The average desktop user has none of that experience or knowledge to lock down their system like Fort Knox, nor the willpower to resist clicking on, downloading, and running what they shouldn't. So if most of everyone stopped using Windows and jumped to Linux, you would see a lot more serious issues than the occasional half-assed attempt at Linux malware.

        • just_another_person@lemmy.world · 1 day ago

          OP is talking about hypervisor security, and now you’re off on a tangent about package and configuration management to try and prove a faulty point…what in the world.

          • Blaster M@lemmy.world · 1 day ago

            If hypervisor security is an add-on I can install via a suite of packages, okay. But I don't see that. Besides, OP is asking why it isn't part of the system natively. What's the fault in the point?

  • Natanael@slrpnk.net · 19 hours ago

    Qubes OS

    Edit: stop downvoting correct answers. If you don’t want to be helpful, just leave

    • henfredemars@infosec.pubOP · 2 days ago

      It does appear to take an interesting approach, using VMs to separate out the system components and applications, but I don't think it introspects into those VMs to ensure the parts are behaving correctly as seen from outside the VM looking in.

      It’s a really cool OS I haven’t heard about before though!

      • Natanael@slrpnk.net · 1 day ago

        The only place I've ever heard of that being done is in very high-security corporate environments.

    • henfredemars@infosec.pubOP · 2 days ago

      Why? Do you think there’s no value in using virtualization to enforce constraints on the runtime behavior of the kernel?

      • Possibly linux · 1 day ago

        What would that accomplish? If someone manages to compromise the kernel, you are in trouble. Linux uses a monolithic design, so there is no isolation between kernel modules. The kernel would need a total redesign and rewrite to run as a microkernel, and microkernels are problematic in general since they add a lot of complexity.