Don’t look for statistical precision in analogies. That’s why it’s called an analogy, not a calculation.
No, this is the equivalent of writing off calculators if they required as much power as a city block. There are some applications for LLMs, but if they consume this much power, they’re doing far more harm than good.
I’ll take a stab at it.
“Researchers spend $X to see whether poison leaking into the ground gets into our water.”
Exactly this, and rightly so. The school’s administration has a moral and legal obligation to do what it can for the safety of its students, and allowing this to continue unchecked violates both of those obligations.
I worked on an industrial robot once, and we parked it so the middle section of the arm was raised above the rest of the robot, where it was supposed to sit level. I could tell from 50 feet away, at a glance, that it wasn’t, so we checked. It was off by literally 1 degree.
Degrees are bigger than we think, but also our eyes are incredible instruments.
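A rough back-of-the-envelope sketch of why that’s plausible (the ~2 m arm segment and 50 ft viewing distance are my assumptions, not from the original anecdote):

```python
import math

# Assumed numbers: a ~2 m arm segment that should be level, off by 1 degree.
arm_length_m = 2.0
error_deg = 1.0

# Vertical drop of the arm tip caused by the 1-degree error.
tip_drop_m = arm_length_m * math.sin(math.radians(error_deg))  # ~0.035 m (3.5 cm)

# Angle that drop subtends for an observer ~15 m (about 50 ft) away.
viewing_distance_m = 15.0
subtended_arcmin = math.degrees(math.atan2(tip_drop_m, viewing_distance_m)) * 60  # ~8 arcmin

print(f"tip drop: {tip_drop_m * 100:.1f} cm, subtends ~{subtended_arcmin:.0f} arcminutes")
```

The eye resolves roughly 1 arcminute, so an ~8 arcminute sag at that distance is comfortably within what a glance can pick up.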
I mean, there is a hard limit on how much info your brain can take in. It’s time. Every hour spent learning one thing is an hour not spent learning everything else.
Agreed. The solution to this is to stop using LLMs to present info authoritatively, especially when it’s aimed directly at the general public. The average person has no idea how an LLM works, and therefore no idea why they shouldn’t trust it.
My guess is that your name is so poorly represented in the training data that it just picked the most common kind of job history that is represented.
Bullshit generator generating bullshit, news at 11.
Which athlete / event was this?
This. Satire would be writing the article in the voice of the most vapid executive saying they need to abandon fundamentals and turn exclusively to AI.
However, that would be indistinguishable from our current reality, which would make it poor satire.
What part of “we paid these guys and they said we’re fine” do you not understand? Why would they choose, pay, and release the results from a company they didn’t trust to clear them?
I’m not saying it’s rotten, but the fact that the third party was unilaterally chosen and paid for by LMG makes all the results pretty questionable.
It’s more like, “I own 17 homes and it wasn’t that hard to get that many. They must not be trying hard enough.”
If an LLM had to say “I don’t know” when it doesn’t know, that’s all it would be allowed to say! They literally don’t know anything. They don’t even know what knowing means. They are complex (and impressive, admittedly) text generators.
Do you think every paper writer would comply? Do you think that the actually problematic writers, like those cutting so many corners that they directly paste ChatGPT results into their paper, would comply?
No, close the lid because that’s how you avoid coating everything in the room with a film of urine and feces. Open toilets are disgusting.
Came here for this, thank you for your service.
That analogy relies on the reader having any idea what wire EDM manufacturing is. ;) Not exactly an everyday topic.
This article and discussion are specifically about massively upscaling LLMs. Go follow the links and read OpenAI’s CEO literally proposing data centers that require multiple, dedicated grid-scale nuclear reactors.
I’m not sure what your definition of optimization and efficiency is, but that sure as heck does not fit mine.