cross-posted from: https://lemmy.ml/post/18154572

All our servers and company laptops went down at pretty much the same time. Laptops have been boot-looping to a blue screen of death. It’s all very exciting, personally, as someone not responsible for fixing it.

Apparently caused by a bad CrowdStrike update.

  • EpicFailGuy@lemmy.world

    I work in the field. They pushed an update, and their software loads a low-level driver into the kernel at boot time. That driver is causing servers and workstations to crash. The only fix so far is to boot into safe mode, uninstall or remove the bad update, and restart again.

    Imagine having to do that by hand on every Windows device in your organization (a rough sketch of the per-machine step is below).
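
    For anyone curious what "by hand" actually involves per machine: here's a rough sketch in Python, assuming the publicly reported workaround of booting into safe mode and deleting the faulty CrowdStrike channel file. The directory and filename pattern below are taken from public guidance rather than verified against every build, so treat them as assumptions and check current vendor instructions first.

```python
# Sketch of the per-machine cleanup, assuming the publicly reported workaround:
# boot into safe mode, delete the faulty channel file, then reboot.
# The path and filename pattern are assumptions taken from public guidance.
import glob
import os

CROWDSTRIKE_DIR = r"C:\Windows\System32\drivers\CrowdStrike"
BAD_CHANNEL_PATTERN = "C-00000291*.sys"  # the faulty channel file


def remove_bad_channel_files(directory: str = CROWDSTRIKE_DIR) -> list:
    """Delete any matching channel files and return the paths that were removed."""
    removed = []
    for path in glob.glob(os.path.join(directory, BAD_CHANNEL_PATTERN)):
        os.remove(path)  # needs admin rights; intended to be run from safe mode
        removed.append(path)
    return removed


if __name__ == "__main__":
    deleted = remove_bad_channel_files()
    print(f"Removed {len(deleted)} file(s): {deleted}")
    print("Reboot to finish; this only handles the file-removal step.")
```

    Even scripted, someone still has to get each box into safe mode first, which is why this doesn't scale across a whole fleet.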

    • teft@lemmy.world

      Meh, that’s an easy fix compared to some other BSODs I’ve had to deal with.

      The “every device” part is daunting, but again, at least it’s an easy fix.

      • Sethayy@sh.itjust.works

        But have those ever been released as an update?

        And with the employee-to-computer ratio only getting worse, this really highlights a lot of issues in the system.

    • Possibly linux

      There are people on r/sysadmin who have 50,000 machines to deal with. Also, a lot of companies have remote workers.

    • Nommer@sh.itjust.works

      I was tech support at a large organization when upper IT pushed out an update that corrupted everybody’s certificates for logging into the network. Imagine having to talk 40k users, most of whom whined and bitched at us about having to do all this work to fix their computers, through removing the old certificate, rebooting, logging in with a backup account to pull down the new certificate, rebooting again, and verifying that they could log in. Each computer took about 20-40 minutes, and we only had about 50 of us working at peak hours. It took about two months of non-stop calls to get them all fixed (the math is sketched below).
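
      The two-month figure roughly checks out from those numbers alone; here's a quick back-of-the-envelope check. The 8-hour shift per tech is my assumption; everything else is from the numbers above.

```python
# Rough check of the remediation timeline described above.
# 40k machines, 20-40 minutes each, and ~50 techs are the quoted numbers;
# the 8-hour shift per tech per day is an assumption.
machines = 40_000
minutes_per_machine = (20 + 40) / 2        # midpoint of the quoted range
techs_on_shift = 50
hours_per_shift = 8                        # assumed

total_tech_hours = machines * minutes_per_machine / 60    # ~20,000 hours of calls
tech_hours_per_day = techs_on_shift * hours_per_shift     # 400 tech-hours per day
working_days = total_tech_hours / tech_hours_per_day      # ~50 working days

print(f"{total_tech_hours:,.0f} tech-hours of calls")
print(f"about {working_days:.0f} working days, roughly {working_days / 5:.0f} weeks")
```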

      • EpicFailGuy@lemmy.world

        YIKES … I’ve got one worse still … I was NOC at a company where one of my friends in Desktop Services made a mistake pushing hard-drive encryption and basically corrupted the hard drives of a large number of laptops. It wasn’t everyone, thank god, because they were rolling it out in stages … but it was THOUSANDS, and there was no real way to get the data back. Every single one had to be re-imaged.

  • scholar@lemmy.world

    This has been a lot of fun from the perspective of someone not affected. Apparently CrowdStrike has lost 20% of its share price today.

    • nova_ad_vitum@lemmy.ca

      The fact that they can fuck up this badly and only lose 20 percent is kind of hilarious. Major infrastructure across the world is on its knees because of their fuckup. If that’s not enough to kill them, then nothing is.

      Edit: when I checked they were rebounding, only down 11.3 percent on the day now. I guess the stock market has determined that this fuckup isn’t so bad.

      • Possibly linux

        They are in trouble as a company. I bet the big market leaders are going to reevaluate and potentially move to something else.