IT administrators are struggling to deal with the ongoing fallout from the faulty CrowdStrike update. One spoke to The Register to share what it is like at the coalface.
Speaking on condition of anonymity, the administrator, who is responsible for a fleet of devices, many of which are used within warehouses, told us: “It is very disturbing that a single AV update can take down more machines than a global denial of service attack. I know some businesses that have hundreds of machines down. For me, it was about 25 percent of our PCs and 10 percent of servers.”
He isn’t alone. An administrator on Reddit said 40 percent of servers were affected, along with 70 percent of client computers stuck in a bootloop, or approximately 1,000 endpoints.
Sadly, for our administrator, things are less than ideal.
Another Redditor posted: "They sent us a patch but it required we boot into safe mode.
"We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.
Someone never tested their DR plans, if they even have them. Generally locking your keys inside the car is not a good idea.
I remember a few career changes ago, I was a back room kid working for an MSP.
One day I get an email to build a computer for the company, cheap as hell. Basically just enough to boot Windows 7.
I was to build it, put it online long enough to get all of the drivers installed, and then set it up in the server room, as physically far away from any network ports as possible. IIRC I was even given an IO shield that physically covered the network port for after it updated.
It was our air-gapped encryption key backup.
I feel like that shitty company was somehow prepared for this better than some of these companies today. In fact, I wonder if that computer is still running somewhere and just saved someone’s ass.
We also back up our BitLocker keys with our RMM solution for this very reason.
I hope that system doesn’t have any dependencies on the systems it’s protecting (auth, mfa).
It’s outside the primary failure domain.
They also don’t seem to have a process for testing updates like these…?
This seems to show some really shitty testing practices at a ton of IT departments.
Apparently, from what I was reading, these are forced updates from CrowdStrike; you don’t have a choice.
I’ve heard differently. But if it’s true, that should have been a non-starter for the product for exactly reasons like this. This is basic stuff.
Companies use crowdstrike so they don’t need internal cybersecurity. Not having automatic updates for new cyber threats sorta defeats the purpose of outsourcing cybersecurity.
Automatic updates should still have risk mitigation in place, and the outage didn’t only affect small businesses with no cyber security capability. Outsourcing does not mean closing your eyes and letting the third party do whatever they want.
Not bothering doing basic, minimal testing - and other mitigation processes - before rolling out updates is absolutely terrible policy.
Unfortunately, the pace of attack development doesn’t really give much time for testing.
More time than the zero time that companies appear to have invested here.
I was just thinking about something similar. I can understand wanting to get a security update as quickly as possible, but it still seems like some kind of rolling update could have mitigated something like this. When I say rolling, I mean for example split all of your customers into 24 groups and push the update once an hour to another group. If it causes a massive fuck up it’s only some or most, but not all.
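Something like this back-of-the-napkin sketch is what I have in mind. It is purely illustrative; the group count, the wait time, and the push_update/health_check callables are all made up and would come from whatever management tooling you actually use:

```python
import time

WAVES = 24  # split the fleet into 24 groups, one pushed per hour

def staged_rollout(endpoints, push_update, health_check, wait_seconds=3600):
    """Push an update wave by wave; stop the whole rollout if a wave looks unhealthy.

    endpoints: list of device IDs.
    push_update / health_check: hypothetical callables supplied by your tooling.
    """
    waves = [endpoints[i::WAVES] for i in range(WAVES)]
    for n, wave in enumerate(waves, start=1):
        for device in wave:
            push_update(device)
        time.sleep(wait_seconds)  # give the wave time to reboot and report in
        failures = [d for d in wave if not health_check(d)]
        if failures:
            # halt before the remaining waves ever see the bad update
            raise RuntimeError(f"wave {n}: {len(failures)} devices unhealthy, rollout stopped")
```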
Heck, even 30 minutes ahead for 1% of devices would’ve had a reasonable chance of catching this.
Pity the administrators who dutifully kept a list of those keys on a secure server share, only to find that the server is also now showing a screen of baleful blue.
Lol, can you imagine? It empathetically hurts me even thinking of this situation. Enter that brave hero who kept the fileshare decryption key in a local keepass :D
That’s why the 3-2-1 rule exists:
- 3 copies of everything on
- 2 different forms of media with
- 1 copy off site
For something like keys, that means:
- secure server share
- server share backup at a different site
- physical copy (either USB, printed in a safe, etc)
Any IT pro should be aware of this “rule.” Oh, and periodically test restoring from a backup to make sure the backup actually works.
We have a cron job that once a quarter files a ticket with whoever is on-call that week to test all our documented emergency access procedures to ensure they’re all working, accessible, up-to-date etc.
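There really isn’t much to it either; here is a rough sketch of the quarterly drill job (run from cron, with the on-call lookup and ticket creation as placeholders for whatever rota and ticketing systems you actually have):

```python
# Kicked off quarterly by cron, e.g.:  0 9 1 1,4,7,10 *  /usr/local/bin/dr_drill.py
from datetime import date

def current_oncall():
    # placeholder: query your real on-call rota (PagerDuty, a spreadsheet, ...)
    return "oncall@example.com"

def file_ticket(assignee, title, body):
    # placeholder: call your real ticketing system's API here
    print(f"ticket -> {assignee}: {title}\n{body}")

if __name__ == "__main__":
    quarter = (date.today().month - 1) // 3 + 1
    file_ticket(
        current_oncall(),
        f"Q{quarter} emergency-access drill",
        "Walk every documented break-glass procedure and confirm it still works, "
        "is reachable, and matches the runbook.",
    )
```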
Seems like an argument for a heterogeneous environment, perhaps a solid and secure Linux server to host important keys like that.
Lmao this is incredible
Another Redditor posted: "They sent us a patch but it required we boot into safe mode.
"We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.
“Most of our comms are down, most execs’ laptops are in infinite bsod boot loops, engineers can’t get access to credentials to servers.”
N.B.: Reddit link is from the source
I hope a lot of c-suites get fired for this. But I’m pretty sure they won’t be.
Our administrator is understandably a little bitter about the whole experience as it has unfolded, saying, "We were forced to switch from the perfectly good ESET solution which we have used for years by our central IT team last year.
Sounds like a lot of architects and admins are going to get thrown under the bus for this one.
“Yes, we ordered you to cut costs in impossible ways, but we never told you specifically to centralize everything with a third party, that was just the only financially acceptable solution that we would approve. This is still your fault, so we’re firing the entire IT department and replacing them with an AI managed by a company in Sri Lanka.”
Stupid argument though; honestly it’s just chance that CrowdStrike was the vendor to shit the bed. It might as well have been ESET. You should still have procedures for this.
Fired? I hope they get class-actioned out of existence as a warning to anyone who skimps on QA
Lemmy appears to be weathering the storm quite well…
…probably runs on linux
I doubt many Lemmy servers are running enterprise level antivirus.
If you have EC2 instances running Windows on AWS, here is a trick that works in many (not all) cases. It has recovered a few instances for us:
- Shut down the affected instance.
- Detach the boot volume.
- Move the boot volume (attach) to a working instance in the same availability zone (us-east-1a or whatever).
- Remove the file(s) recommended by Crowdstrike:
- Navigate to the C:\Windows\System32\drivers\CrowdStrike directory
- Locate the file(s) matching “C-00000291*.sys”, and delete them (unless they have already been fixed by Crowdstrike).
- Detach the volume and move it back over to the original instance (attach)
- Boot original instance
Alternatively, you can restore from a snapshot prior to when the bad update went out from Crowdstrike. But that is not always ideal.
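If you are scripting this across a lot of instances, the detach/attach shuffle above looks roughly like this in boto3. Instance IDs, region, and device name are placeholders, and deleting the C-00000291*.sys files still happens by hand on the rescue instance:

```python
import boto3

REGION = "us-east-1"                  # placeholder: region of the affected instance
BROKEN_ID = "i-0123456789abcdef0"     # placeholder: the BSOD'ing instance
RESCUE_ID = "i-0fedcba9876543210"     # placeholder: a healthy instance in the same AZ

ec2 = boto3.client("ec2", region_name=REGION)

# 1. Stop the affected instance and wait for it to fully stop
ec2.stop_instances(InstanceIds=[BROKEN_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[BROKEN_ID])

# 2. Find its root (boot) volume
inst = ec2.describe_instances(InstanceIds=[BROKEN_ID])["Reservations"][0]["Instances"][0]
root_dev = inst["RootDeviceName"]
vol_id = next(m["Ebs"]["VolumeId"]
              for m in inst["BlockDeviceMappings"] if m["DeviceName"] == root_dev)

# 3. Snapshot first, then move the volume over to the rescue instance
ec2.create_snapshot(VolumeId=vol_id, Description="pre CrowdStrike fix safety snapshot")
ec2.detach_volume(VolumeId=vol_id)
ec2.get_waiter("volume_available").wait(VolumeIds=[vol_id])
ec2.attach_volume(VolumeId=vol_id, InstanceId=RESCUE_ID, Device="xvdf")

# 4. Delete the C-00000291*.sys files from the attached volume on the rescue box,
#    then detach, re-attach to BROKEN_ID as its root device, and start it again.
```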
A word of caution, I’ve done this over a dozen times today and I did have one server where the bootloader was wiped after I attached it to another EC2. Always make a snapshot before doing the work just in case.
Good advice!
At least no mission critical services were hit, because nobody would run mission critical services in Windows, right?
…
RIGHT??
I didn’t know so many servers still run Windows.
In the corporate world, Windows very much gets used. I know Lemmy likes a circlejerk around Linux, but in the corporate world you find various OSes on both desktop and servers. I had to support several different OSes and developed for only two. They all suck in different ways; there are no clear winners.
Thanks for addressing the Lemmy circlejerk for Linux. They really take it far.
On-prem AD, at least for my MSP’s clients. We have been pushing hard the last few years to migrate to Azure.
I can’t imagine how much work it would be to migrate all your services onto Linux. The problem was people adopting windows in the first place.
I love the Linux bros coming out of the woodwork on this one, when this could very well have been Linux on the receiving end of this shit show, given that it’s a kernel-level software issue and not necessarily an OS one.
It’s largely infeasible to use Linux for many, if not most, of these endpoints. But facts are hard.
The Linux kernel has a special kernel extension scheme, eBPF, specifically to keep software like CrowdStrike from crashing it (https://ebpf.io/what-is-ebpf/). This is supported by CrowdStrike on recent versions of Linux (if you’re running an older version, then yes, CrowdStrike still has the ability to ruin your day).
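To make that concrete, here is a toy bcc sketch of the eBPF model (it assumes root and the bcc Python bindings, so it is an illustration, not anything CrowdStrike ships). The point is that the in-kernel verifier has to accept the program before it is allowed to load, a guardrail a boot-start kernel driver never gets:

```python
from bcc import BPF

# Minimal eBPF program: fires on every execve() and writes a line to the kernel
# trace pipe. The verifier checks it for safety (bounded execution, valid memory
# access) before it may load; a bad program is rejected, not a kernel panic.
prog = r"""
int hello(void *ctx) {
    bpf_trace_printk("execve observed\n");
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="hello")
b.trace_print()  # stream trace output; Ctrl-C to stop
```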
They are just butt hurt that this whole thing really shines a light on how inaccurate the line of “the world runs on Linux” truly is.
The world runs on a lot of different things for different reasons and that does not fit nicely into their Richard Stallman like world view.
Just to clarify: the world runs on Linux servers. The market share in the non-server market is abysmal.
Proxmox with Windows VMs is widely used.
Sounds like the best time to unionize
Any time is a good time to unionize
Agreed, just here they have then by the metaphorical balls.
To preface, I want to see a tech workers union so, so bad.
With that said, I genuinely don’t believe that most tech workers would unionize. So many of them are brainwashed into thinking that a union would dictate all salaries, would force hiring to be domestic-only, or would ensure jobs for life for incompetent people. Anyone that knows what a union does in 2024 knows that none of that has to be true. A tech union only needs to be a flat fee every month, guaranteed access to a lawyer with experience in your cases/employer, and the opportunity to strike when a company oversteps. It’s only beneficial.
Even if you could get hundreds of thousands of signatories, the recent layoffs have shown that tech companies at the highest level would gladly fire a sizable number of employees if it meant stamping out a union. As someone that has conducted interviews in big tech, the sheer numbers at peak of people that had applied for some roles was higher than the number of active employees in the whole company. In theory, Google could terminate everyone and replace them with brand-new workers in a few months. It would be a fucking mess, but it (in theory) shows that if a Google or Apple decided that it wanted no part of unions they could just dig into their fungible talent pool, fire a ton of people, promote people that stayed, and fill roles with foreign or under-trained talent.
I feel you with this. They do not see themselves as workers. Thank you for the preface.
Agreed, sadly to many there is still the view of tech being a meritocracy, and that they’re in FAANG because of their hard work over everything else, so fuck everyone else. Naturally, many change their tune once their employer actions regressive policies, but it’s surprising how many people just have zero understanding of what a union does. They see cop shows or The Wire and assume it’ll be like the unions there…
All true… All sad. Time to snap some fools to reality I guess
lmao
It might be CrowdStrike’s fault, but maybe this will motivate companies to adopt better workflows and actual preproduction deployments to test these sorts of updates before they go live on the rest of the systems.
I know people at big tech companies that work on client engineering, where this downtime has huge implications. Naturally, they’ve called a sev1, but instead of dedicating resources to fixing these issues, the teams are basically bullied into working insane hours to manually patch while clients scream at them. One dude worked 36 hours straight because his manager outright told him “you can sleep when this is fixed”, as if he’s responsible for CrowdStrike…
Companies won’t learn. It’s always a calculated risk, and much of the fallout of that risk lies with the workers.
That dude should not have put up with that.
Sounds so illegal that it would make the labour authority happy.
Is it illegal? I’m not American so I have no idea if there are laws in your country against on-call maximum hours.
1. It’s not about oncall, they are literally in the office.
2. See 1.
3. Not sure about America, but it is very illegal in Russia.
That comment about sleep…that’s about where I tell them to go fuck themselves. I’ll find a new job, I’m not going to put up with bullshit like that.
100% agree
Might be hard to do. Crowdstrike release several updates per day to the channel files to match changes in adversarial behaviour. In this case, BCP and backup are what need to be done.
Oh sweet summer child.
80% of our machines were hit. We were working past 9pm on Friday night, running around putting in BitLocker keys and running the fix. Our organization made it worse by hiding the BitLocker keys from local administrators.
Also gotta say… the way the boot sequence works, combined with the nonsense with RAID/NVMe drivers on some machines, really made it painful.
I got super lucky. got paid for my car just before the dealership systems went down, got my return flight 2 days before this shit started.
If it only impacts a percentage of your machines then there was a problem in the deployment strategy or the solution wasn’t worthwhile to begin with.