

What, you expect people to read?! Smh
Kind of splitting hairs, but a company that can let go of “scores” of employees and still exist is not a small business.
Yes, you’re anthropomorphizing far too much. An LLM can’t understand, or recall (in the common sense of the word, i.e. have a memory), and is not aware.
Those are all things that intelligent, thinking things do. LLMs are none of that. They are a giant black box of math that predicts text. It doesn’t even understand what a word is, or the meaning of anything it vomits out. All it knows is what text is statistically most likely to come next, with a little randomization to add “creativity”.
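That “statistically most likely text plus randomization” bit can be sketched in a few lines. This is a toy illustration, not any real model’s code: the words and scores are made up, but the mechanism (softmax over scores, then a weighted random draw, with a “temperature” knob controlling how much randomness) is how next-token sampling generally works.

```python
import math
import random

def sample_next_token(scores, temperature=1.0, rng=None):
    """Turn raw scores into probabilities (softmax), then draw one token at random.

    Lower temperature -> sharper distribution -> more predictable output;
    higher temperature -> flatter distribution -> more "creative" output.
    """
    rng = rng or random.Random()
    tokens = list(scores.keys())
    scaled = [scores[t] / temperature for t in tokens]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(tokens, weights=probs, k=1)[0]

# Hypothetical scores a model might assign after "The cat sat on the"
scores = {"mat": 5.0, "roof": 3.0, "moon": 0.5}
```

At a low temperature the model almost always picks the top-scoring word; crank the temperature up and the long tail gets sampled more often. Either way, nothing in there “knows” what a mat is.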
You’re completely right, if the goal is good customer support and decent working conditions for the operators.
It’s not. The goal is, like 1rre said, to make people get fed up and stop trying to get their stuff fixed, and just buy a new one. Oh, and they could fire half the operators too, since fewer people would be willing to wade through the pile of shit to talk to them.
Money and profit, screw the rest.
And an excuse to fire half of the support staff.
This article and discussion are specifically about massively upscaling LLMs. Go follow the links and read OpenAI’s CEO literally proposing data centers which require multiple, dedicated grid-scale nuclear reactors.
I’m not sure what your definition of optimization and efficiency is, but that sure as heck does not fit mine.
Don’t look for statistical precision in analogies. That’s why it’s called an analogy, not a calculation.
No, this is the equivalent of writing off calculators if they required as much power as a city block. There are some applications for LLMs, but if they cost this much power, they’re doing far more harm than good.
I’ll take a stab at it.
“Researchers spend $X to see whether poison leaking into the ground gets into our water.”
Exactly this, and rightly so. The school’s administration has a moral and legal obligation to do what it can for the safety of its students, and allowing this to continue unchecked violates both of those obligations.
I worked on an industrial robot once, and we parked it such that the middle section of the arm was up above the robot and supposed to be level. I could tell from 50 feet away and a glance that it wasn’t, so we checked. It was off by literally 1 degree.
Degrees are bigger than we think, but also our eyes are incredible instruments.
This is the way, and good for you.
Purposefully making women afraid they’re about to get assaulted is abhorrent.
I mean, there is a hard limit on how much info your brain can take in. It’s time. Every hour spent learning one thing is an hour not spent learning everything else.
Agreed. The solution to this is to stop using LLMs to present info authoritatively, especially when it’s aimed directly at the general public. The average person has no idea how an LLM works, and therefore no idea why they shouldn’t trust it.
My guess is that your name is so poorly represented in the training data that it just picked the most common kind of job history that is represented.
Bullshit generator generating bullshit, news at 11.
Which athlete / event was this?
This. Satire would be writing the article in the voice of the most vapid executive saying they need to abandon fundamentals and turn exclusively to AI.
However, that would be indistinguishable from our current reality, which would make it poor satire.
Exactly, it’s just regular old enshittification.