this post was submitted on 19 Jul 2024
768 points (94.3% liked)

This isn't a gloat post. In fact, I was completely oblivious to this massive outage until I tried to check my bank balance and it wouldn't log in.

Apparently Visa Paywave, banks, some TV networks, EFTPOS, etc. have gone down. Flights have had to be cancelled because some airlines' systems have also gone down. Gas stations and public transport systems are inoperable, and numerous Windows systems and Microsoft services have been affected. (At least according to one of my local MSM outlets.)

Seems insane to me that one company's messed-up update could cause so much global disruption and take so many systems down :/ This is exactly why centralisation of services, and large corporations gobbling up smaller companies to become behemoth services, is so dangerous.

you are viewing a single comment's thread
[–] [email protected] 183 points 4 months ago (1 children)

The annoying aspect, from somebody with decades of IT experience, is this: what should happen is that CrowdStrike gets sued into oblivion, and the people responsible for buying that shit have an epiphany and properly look at how they are doing their infra.

But what will happen is that they'll just buy a new CrowdStrike product that promises to mitigate the fallout of them fucking up again.

[–] [email protected] 92 points 4 months ago (3 children)

decades of IT experience

Do you test any changes - especially upgrades - in a local test environment before applying them in production?

The scary bit is what most in the industry already know: critical systems are held together with duct tape and maintained by juniors, 'cos they're the cheapest Big Money can find. And even where that's not true, "There's no time" or "It's too expensive" are probably the most common answers a PowerPoint manager will give when a serious technical issue is raised.
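To make the "test before production" point concrete, here's a rough sketch in Python. None of it is real tooling - the host lists and helper functions are made up - but the shape is the point: an update hits a small test/canary group first, soaks for a while, and only reaches production if the canaries stay healthy.

```python
# Rough sketch only - made-up host lists and stub helpers, not any vendor's
# real deployment API. The point is the shape: canary first, then production.
import time

CANARY_HOSTS = ["test-vm-01", "test-vm-02"]            # local test environment
PRODUCTION_HOSTS = ["prod-app-01", "prod-app-02"]      # everything that matters

def apply_update(host: str, package: str) -> None:
    """Stand-in for whatever actually pushes the package to a host."""
    print(f"deploying {package} to {host}")

def hosts_healthy(hosts: list[str]) -> bool:
    """Stand-in health check: does the box still boot, does the service answer?"""
    return all(True for _ in hosts)  # pretend everything is fine

def rollback(hosts: list[str]) -> None:
    print(f"rolling back {hosts}")

def staged_rollout(package: str, soak_seconds: int = 3600) -> None:
    # 1. Update only the canary/test hosts.
    for host in CANARY_HOSTS:
        apply_update(host, package)

    # 2. Let it soak, then check the canaries are still alive.
    time.sleep(soak_seconds)
    if not hosts_healthy(CANARY_HOSTS):
        rollback(CANARY_HOSTS)
        raise RuntimeError(f"{package} failed canary checks; production untouched")

    # 3. Only a clean canary run earns a production rollout.
    for host in PRODUCTION_HOSTS:
        apply_update(host, package)
```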

The Earth will keep turning.

[–] [email protected] 34 points 4 months ago (1 children)

Some years back I was the 'Head' of systems stuff at a national telco that provided the national telco infra. Part of my job was to manage the national systems upgrades: I had the stop/go decision to deploy, and indeed pushed the 'enter' button to do it. I was a complete PowerPoint Manager with no clue what I was doing; it was total Accidental Empires, and I should not have been there. Luckily I got away with it for a few years, but it was horrifically stressful and not the way to mitigate national risk. I feel for the CrowdStrike engineers. I wonder if the latest embargo on Russian oil sales is in any way connected?

[–] [email protected] 17 points 4 months ago

I wonder if the latest embargo on Russian oil sales is in any way connected?

Doubt it, but it's ironic that this happens shortly after Kaspersky gets banned.

[–] [email protected] 29 points 4 months ago (2 children)

Unfortunately Falcon self-updates, and it will not work properly if you don't let it.

Also add "customer has rejected the maintenance window" to your list.

[–] [email protected] 35 points 4 months ago

Turns out it doesn't work properly if you do let it

[–] [email protected] 6 points 4 months ago

Well, "don't have self-upgrading shit on your production environment" also applies.

As in "if you brought something like this, there's a problem with you".

[–] [email protected] 25 points 4 months ago (2 children)

Not OP, but that is how it used to be done. The issue is the attacks we have seen over the years - ransomware attacks etc. - which have made corps feel they need to patch and update instantly to avoid attacks. So they depend on the corp they pay for the software to test the rollout.

Auto-update is a double-edged sword. Without it, attackers will take advantage of the delays. With it... well, today.

[–] [email protected] 15 points 4 months ago* (last edited 4 months ago) (1 children)

I'd wager most ransomware relies on old vulnerabilities. Yes, keep your software updated, but you don't need the latest and greatest delivered straight to production without any kind of test first.

[–] [email protected] 13 points 4 months ago (1 children)

Very much so. But vulnerabilities tend not to be discovered (by developers) until an attack happens, and auto-updates are generally how the spread of attacks is limited.

Open source can help slightly, because both good and bad actors unrelated to development can see the code, so it is more common for alerts to land before attacks. But it's far from a fix-all.

Generally, though, the time between discovery and fix is a worry for big corps, which is why auto-updates have been accepted with less manual intervention than was common in the past.

[–] [email protected] 5 points 4 months ago

I would add that a lot of attacks happen after a fix has been released - i.e. compare the previous release with the patch and bingo, there's the vulnerability.

But agreed, patching should happen regularly, just with a few days' delay after the supplier releases it.
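A rough sketch of that few-days' delay in Python - the window length, release timestamp, and helper below are made-up examples, not any vendor's actual release metadata:

```python
# Rough sketch, not a real patch-management tool: only install a vendor update
# once it has been public for a soak window, so obviously broken releases get
# pulled or fixed before they ever reach us. All values are made up.
from datetime import datetime, timedelta, timezone

SOAK_WINDOW = timedelta(days=3)  # "a few days' delay" - pick whatever fits your risk

def ready_to_install(released_at: datetime, now: datetime | None = None) -> bool:
    """True once the release has aged past the soak window."""
    now = now or datetime.now(timezone.utc)
    return now - released_at >= SOAK_WINDOW

# Hypothetical usage, feeding it release metadata from your own update mirror:
release_time = datetime(2024, 7, 19, 4, 0, tzinfo=timezone.utc)
if ready_to_install(release_time):
    print("soak window passed - schedule the install")
else:
    print("keep waiting - let someone else find the bugs first")
```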

[–] [email protected] 2 points 4 months ago* (last edited 4 months ago)

I get the sentiment, but defense in depth is a methodology to live by in IT, and auto-updating via the Internet is not a good risk to take in general. For example, should CrowdStrike just disappear one day, your entire infrastructure and critical services shouldn't be at enormous risk.

Even if it's your anti-virus, a virus or ransomware shouldn't be able to easily propagate through the enterprise. If it did, it's doubtful something like CrowdStrike is going to be able to update and suddenly reverse course. If it can, you're just lucky that the ransomware that made it through didn't do anything in defense of itself (disconnecting from the network, blocking CIDRs like CrowdStrike's update servers, blocking processes, whatever).

And frankly, you can still update those clients from your own AV update server - a product you'd be using anyway if you aren't allowing updates from the Internet, so you can roll them out to dev first, with phasing and/or schedules, from your own infrastructure.
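As a sketch of what "dev first, with phasing and/or schedules, from your own infrastructure" could look like - the ring names, hosts, and delays below are invented for illustration, not anyone's real rollout plan:

```python
# Rough sketch of a ring/phase plan served from your own update server instead
# of letting every endpoint pull straight from the vendor. Ring names, host
# lists and delays are invented for illustration.
from dataclasses import dataclass

@dataclass
class Ring:
    name: str
    hosts: list[str]
    delay_days: int  # how long after a release this ring is allowed to update

ROLLOUT_PLAN = [
    Ring("dev",   ["dev-01", "dev-02"],              delay_days=0),
    Ring("pilot", ["pilot-01", "pilot-02"],          delay_days=2),
    Ring("prod",  ["prod-01", "prod-02", "prod-03"], delay_days=5),
]

def hosts_due(days_since_release: int) -> list[str]:
    """Hosts whose ring delay has elapsed and may now pull the update."""
    return [host
            for ring in ROLLOUT_PLAN
            if days_since_release >= ring.delay_days
            for host in ring.hosts]

print(hosts_due(2))  # dev + pilot only; production is still on the old version
```

Production only ever sees a release that the dev and pilot rings have already been running for days.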

Crowdstrike is just another lesson in that.