As the world recovers from the largest IT outage in history, the incident shows the danger of a single point of failure in IT infrastructure

A global IT failure wreaked havoc on Friday, grounding flights and disrupting everything from hospitals to government agencies. Over all the chaos hung a question: how did a flawed update to Microsoft Windows software bring large swaths of society to a screeching halt?

The problem originated with an Austin, Texas-based cybersecurity firm called CrowdStrike, relied upon by most of the global technology industry, including Microsoft, for its Falcon program, which blocks the execution of malware and cyber-attacks. Falcon protects devices by securing access to a wide range of internal systems and automatically updating its defenses – a level of integration that means if Falcon falters, the computer is close behind. After CrowdStrike updated Falcon on Thursday night, Microsoft systems and Windows PCs were hit with a “blue screen of death” and rendered unusable as they were trapped in a recovery boot loop.

Microsoft is a juggernaut with significant market power, dominating cloud-computing infrastructure across Europe and the United States. So it wasn’t just computers that were affected, but servers and a host of other systems as well. Overwhelming requests from users, devices, services and businesses ushered in a cascading series of failures with Microsoft products – namely Azure Cloud and Microsoft 365. Failures plaguing Azure led to additional but separate disruptions with 365 services. A giant clusterfuck ensued.

  • Ooops@feddit.org · 4 months ago

    No… the Crowdstrike debacle primarily shows the dangers of today’s corporate culture in software development.

    Ship as fast as possible, fix issues later if necessary…

    • dugmeup@lemmy.world · 4 months ago

      Yup. Push to prod!

      This is the Boeing debacle in software land. Kill the engineering and pay the executives. QA? Testing? Strict standards? People? Naaah, more conferences! More logos on F1 cars!

      • mynameisigglepiggle@lemmy.world · 4 months ago

        I totally agree. But without knowing a bit more about the specifics, I can’t help but think that just maybe… the update mechanism could have rolled back an update if it caused a BSOD? (Rough sketch of what I mean at the end of this comment.)

        Seems like that infrastructure is really the biggest oversight, and people would have been none the wiser.

        Also surprised just how many things are running Windows. I thought for sure the self-checkout registers would have been some embedded Linux system.
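        Something roughly like this is what I mean – purely hypothetical, nothing to do with how Falcon’s channel updates actually ship; every path, threshold and function name below is made up for illustration:

        ```typescript
        // Hypothetical boot-failure watchdog that rolls back the latest content
        // update after repeated failed boots. Paths, thresholds and the rollback
        // step are assumptions, not how any real agent works.
        import * as fs from "fs";

        const AGENT_DIR = "C:/ProgramData/ExampleAgent";     // assumed install dir
        const STATE_FILE = `${AGENT_DIR}/boot_state.json`;   // boot counter
        const CHANNEL_DIR = `${AGENT_DIR}/channel`;          // active update files
        const BACKUP_DIR = `${AGENT_DIR}/channel_previous`;  // last known-good copy
        const MAX_FAILED_BOOTS = 2; // roll back after this many boots that never reported healthy

        interface State { pendingBoots: number; }

        function loadState(): State {
          if (fs.existsSync(STATE_FILE)) {
            return JSON.parse(fs.readFileSync(STATE_FILE, "utf8"));
          }
          return { pendingBoots: 0 };
        }

        function saveState(state: State): void {
          fs.mkdirSync(AGENT_DIR, { recursive: true });
          fs.writeFileSync(STATE_FILE, JSON.stringify(state));
        }

        // Runs as early as possible at startup, before the freshly updated files load.
        export function onEarlyBoot(): void {
          const state = loadState();
          state.pendingBoots += 1;
          if (state.pendingBoots > MAX_FAILED_BOOTS && fs.existsSync(BACKUP_DIR)) {
            // Too many boots without ever being marked healthy: restore the
            // previous channel files so the next load uses the known-good version.
            fs.rmSync(CHANNEL_DIR, { recursive: true, force: true });
            fs.cpSync(BACKUP_DIR, CHANNEL_DIR, { recursive: true });
            state.pendingBoots = 0;
          }
          saveState(state);
        }

        // Called once the machine has stayed up and the agent is running normally.
        export function onSystemHealthy(): void {
          const state = loadState();
          state.pendingBoots = 0; // this update survived a full boot, keep it
          saveState(state);
        }
        ```

        The obvious catch is that in a real BSOD boot loop nothing in userland ever gets to run, so logic like this would have to sit at the boot/driver-load stage rather than in an ordinary service – which is presumably why it’s easier said than done.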

    • Rentlar@lemmy.ca · 4 months ago (edited)

      I disagree. You are correct that the fuck-up was caused by bad development practices. However, even if every firm were being reckless with development, a fuck-up at just one of a myriad of competing firms would maybe take one airline or one hospital network offline, something like that.

      It’s only because of consolidation and market monopolization of the sector that an outage at such a global scale was even possible to begin with.

    • Wooki@lemmy.world · 4 months ago

      You’re partly right, as is the article.

      Centralization is dangerous for security, innovation and cost (monopolies, duopolies).