We are peers (lemmy.world)
submitted 4 months ago* (last edited 4 months ago) by [email protected] to c/whitepeopletwitter
 
[–] [email protected] 7 points 4 months ago (2 children)

But if it were a race condition, then some computers would just boot normally. I didn't see anyone report that the issue was happening selectively. And that wouldn't even be a fix, just a one-off boot; unless the file is removed, the issue comes back on the next reboot.

[–] [email protected] 1 points 4 months ago

Your server also had to be patched for that to work, I think.

[–] sugar_in_your_tea 1 points 4 months ago (1 children)

It's probably one central server controlling access to the network or distributing images or something. So they only need to get one machine in that cluster through enough reboots, and then all of the machines in the cluster will work.

The botched update broke every machine; it was the fix that took multiple reboots to apply.

[–] [email protected] 1 points 4 months ago (1 children)

I'm not sure we are talking about the same issue. In the case of CrowdStrike, the update pushed a botched file that crashed the kernel on boot. Until the file was removed, the machine wouldn't even boot to be patched.
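
For reference, the publicly documented workaround amounted to booting into Safe Mode or the recovery environment and deleting the botched channel file by hand. Here's a minimal sketch of that step in Python; the directory and the `C-00000291*.sys` pattern come from CrowdStrike's public remediation guidance, and of course it only helps on a boot that gets far enough to run it:

```python
# Minimal sketch of the manual CrowdStrike workaround: delete the botched
# channel file so the sensor stops crashing the kernel on boot.
# In practice this had to be run from Safe Mode / WinRE with admin rights.
from pathlib import Path

CROWDSTRIKE_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")

def remove_botched_channel_files() -> list[Path]:
    removed = []
    for channel_file in CROWDSTRIKE_DIR.glob("C-00000291*.sys"):
        channel_file.unlink()  # delete the bad channel file
        removed.append(channel_file)
    return removed

if __name__ == "__main__":
    for path in remove_botched_channel_files():
        print(f"deleted {path}")
```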

[–] sugar_in_your_tea 2 points 4 months ago

Yes, that's what I'm talking about.

I'm saying that in production, the screens and whatnot probably aren't fetching that file on boot; they're probably pulling from some central server. So in the case of an airport, each of those screens is probably pulling its image from a local server over PXE, and that server pulls the updates from CrowdStrike. So once you get the server and its images patched, you just power-cycle all of the devices on the network and they're fixed.

So the impact would be a handful of servers in a local server rack, followed by a remote power cycle. If they're using PoE kiosks (which they should be), it's just a call to each of the switches to force the kiosks to re-PXE boot and pull down a fresh image. So you won't see IT people running around the airport; they'll be in the server room cycling servers and then sending power-cycle commands to each region of the airport.
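
To illustrate the shape of that last step, here's a rough sketch. The switch inventory and the `poe_port_cycle()` helper are hypothetical stand-ins for whatever the real switches actually expose (an SNMP set, an SSH CLI command, a REST call); the point is just the loop over switches and kiosk ports, not a vendor-specific implementation:

```python
# Rough sketch of the "power-cycle every PoE kiosk from the server room" idea.
# Everything here (switch names, addresses, the poe_port_cycle helper) is a
# hypothetical placeholder, not a real vendor API.
from dataclasses import dataclass

@dataclass
class Switch:
    name: str
    address: str
    kiosk_ports: list[int]  # ports feeding PoE kiosks/screens in this region

def poe_port_cycle(switch: Switch, port: int) -> None:
    """Hypothetical: cut and restore PoE on one port so the kiosk cold-boots
    and PXE-pulls a fresh image from the (already patched) local server."""
    print(f"[{switch.name}] power-cycling PoE on port {port} via {switch.address}")

def repave_region(switches: list[Switch]) -> None:
    for switch in switches:
        for port in switch.kiosk_ports:
            poe_port_cycle(switch, port)

if __name__ == "__main__":
    terminal_a = [
        Switch("gates-a1-a10", "10.20.1.2", kiosk_ports=[1, 2, 3, 4]),
        Switch("gates-a11-a20", "10.20.1.3", kiosk_ports=[1, 2, 3]),
    ]
    repave_region(terminal_a)
```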