From the article –
McDonald’s was hit by a system failure Friday that closed restaurants and disrupted online and app orders around the world, including in the United States, Australia, Japan, Hong Kong, and the United Kingdom.
More than 20 ice cream machines were working at the same time. This caused a buffer overflow that crashed the McDatabase in the McCloud.
Right after, there were claims of starting an official open-source fix for the machines.
Ten bucks says it was a DNS issue.
“This issue was not directly caused by a cybersecurity event; rather, it was caused by a third-party provider during a configuration change.”
Sounds probable.
I swear I didn't delete the RAID config.
Or an upstream certificate expired.
Or its cousin, BGP
…it was DNS
Robble robble!
Ex-con can't get a job due to his criminal record and is forced to steal food to survive.
My question is why are these systems designed to be dependent on upstream services 24/7? Wouldn't a better approach be systems that can run disconnected, then simply upload/replicate data when a connection returns?
These are franchises, right?
I deployed such POS (Point of Sale) systems in the late '90s, because connectivity wasn't ubiquitous then. They were designed so each franchise could upload/replicate however it needed: continuously, whenever a connection was available, on a schedule, etc. Some places had pooled telephone lines to achieve the needed throughput.
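Out of curiosity, here's a minimal sketch of what that store-and-forward pattern could look like today: the kiosk writes every order to a local SQLite queue first, and a replication pass pushes queued rows upstream whenever a connection happens to work. All names here (the endpoint URL, table, and functions) are made up for illustration, not anything McDonald's actually runs.

    # Toy store-and-forward POS queue (hypothetical names throughout).
    import json
    import sqlite3
    import urllib.request

    DB_PATH = "pos_queue.db"
    UPSTREAM_URL = "https://example.invalid/orders"  # placeholder endpoint

    def init_db(conn):
        conn.execute("""
            CREATE TABLE IF NOT EXISTS orders (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                payload TEXT NOT NULL,
                replicated INTEGER NOT NULL DEFAULT 0
            )
        """)
        conn.commit()

    def take_order(conn, order):
        # The kiosk never blocks on the network: the order is durable
        # locally the moment this commit returns.
        conn.execute("INSERT INTO orders (payload) VALUES (?)",
                     (json.dumps(order),))
        conn.commit()

    def replicate_pending(conn):
        # Push queued orders upstream; on any failure, stop and retry later.
        sent = 0
        rows = conn.execute(
            "SELECT id, payload FROM orders WHERE replicated = 0 ORDER BY id"
        ).fetchall()
        for row_id, payload in rows:
            req = urllib.request.Request(
                UPSTREAM_URL, data=payload.encode(),
                headers={"Content-Type": "application/json"}, method="POST")
            try:
                with urllib.request.urlopen(req, timeout=5):
                    pass
            except OSError:
                break  # offline or upstream down: row stays queued
            conn.execute("UPDATE orders SET replicated = 1 WHERE id = ?",
                         (row_id,))
            conn.commit()
            sent += 1
        return sent

    if __name__ == "__main__":
        conn = sqlite3.connect(DB_PATH)
        init_db(conn)
        take_order(conn, {"items": ["McFlurry"], "total_cents": 399})
        print(f"replicated {replicate_pending(conn)} queued order(s)")

Continuous vs. scheduled replication then just comes down to when replicate_pending() gets called: a timer thread, a cron job, or a connectivity-change hook.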
I get the mobile ordering being impacted, but why would you tie the local kiosk to a web service?
Didn’t you hear? The future is the cloud!
Why host stuff locally when you can host it on someone else’s computer, and have fun, exciting, and completely foreseeable failures like this…
The internet is now just AWS, Azure, GCP and Cloudflare.
At first I read that McDonald's was hit by a global failure, but then I got disappointed ☹️
Someone over in IT is having a bad day 😰
So they got hacked / ransomwared?
Far more likely something just went wrong