

Frankly, with the garbage Microsoft is producing these days, and the rate at which the quality, for lack of a better word, is degenerating, I’m starting to wonder whether LLM slop might actually be less worse…
lizard that can spray blood from its eye, but nothing in the animal kingdom past or present has a human’s innate ability for ranged attack
I don’t know, a hawk plummeting from the sky at 190 km/h onto something the size of a small rodent is kind of impressive, too, if you count the bird throwing itself as throwing…
Also in very short races (up to 100 m), if the human is an Olympic athlete, though mostly because momentum is a bitch and it takes time for the horse to accelerate all that mass; by the time it’s done, the race is already over (it also probably helps that the athlete knows what they’re doing while the horse is just along for the ride, wondering where it can get some grass).
in the unable-to-reason-effectively sense
That’s all LLMs by definition.
They’re probabilistic text generators, not AI. They’re fundamentally incapable of reasoning in any way, shape or form.
They just take a text and produce the most probable word to follow it according to their model; that’s all.
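As a toy illustration of that loop (a minimal sketch with a made-up probability table standing in for the model, and greedy decoding instead of sampling; nothing here is how a real LLM is actually implemented):

```python
# Toy stand-in for a trained model: for each context word, the
# (completely made up) probabilities of the word that follows it.
NEXT_WORD_PROBS = {
    "the":   {"cat": 0.5, "dog": 0.3, "model": 0.2},
    "cat":   {"sat": 0.6, "ran": 0.4},
    "dog":   {"barked": 0.7, "sat": 0.3},
    "sat":   {"down": 0.9, "quietly": 0.1},
    "model": {"hallucinated": 0.8, "sat": 0.2},
}

def generate(prompt: str, max_words: int = 5) -> str:
    """Repeatedly append the most probable next word, nothing more."""
    words = prompt.split()
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if not probs:
            break  # no known continuation for this word, stop
        # Greedy decoding: just take the single most probable next word.
        words.append(max(probs, key=probs.get))
    return " ".join(words)

print(generate("the"))        # -> "the cat sat down"
print(generate("the model"))  # -> "the model hallucinated"
```

Real models do this over tokens with a giant neural network and fancier sampling, but the loop is conceptually the same: predict the next piece of text, append it, repeat.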
What Musk’s plan (using an LLM to regurgitate as much of its model as it can, expunging from the resulting garbage all references to Musk being a pedophile and whatnot, adding some racism and disinformation for good measure, and training a new model exclusively on that slop) will actually produce is a significantly more limited model, more prone to hallucinations, that occasionally spews racism and disinformation.
Most animals know humans are too much trouble to mess with.
Sure, you can kill one human. But next thing you know your whole species has gone extinct, or worse, has been domesticated into pocket yappy dogs that can’t breathe properly.
In places where we’ve been around long enough, staying away from humans has practically been bred into every surviving predator’s instincts by now (which is what makes polar bears so terrifying: they’re about the only dangerous predator that doesn’t have this instinct yet, and probably never will, now that murdering whole species has become a bit of a bad look); anything that considered us prey and didn’t learn not to simply doesn’t exist anymore.
Wolves in particular (in the few places where they survive) definitely know not to mess with us, except maybe in the frozen depths of Canada, and so do most bears (again, with possible exceptions in the least populated bits of North America) except polar ones.
Most of the stuff in Jules Verne’s books, even Paris in the Twentieth Century.
(Well, the moon gun would need to be a very long railgun, not a gunpowder cannon, if you want crewed capsules, but still.)
True (though the AVE also stops at Atocha, as it did back in 2004).
They also tend to carry more passengers, which means the number of victims was significantly larger than if it had been an AVE.
And yet, your prediction of a nine-eleven-like security theater didn’t come to pass. 🤷‍♂️
Those are densely packed commuter trains from more than 20 years ago
So, even fucking worse when it comes to number of victims.
If you search for “bomb train” you’ll get results
I don’t need to search for it, it was all over the news for months.
And yet, we got over it.
Don’t jinx it.
As I said in another reply, too late, by twenty-one years.
And yet, no TSA-like bullshit.
Didn’t cause security theater, though. 🤷‍♂️
one terrorist attack
Had one in 2004, didn’t result in security theater (though its mishandling did almost certainly result in the ruling party losing the election).
You’re not taking into account the fact that LLMs are an obvious dead end.
Once that bubble bursts, it’ll take decades before anyone invests in AI research again, or before anything attached to the term “AI” stops being seen as a scam (LLMs are obviously not AI or anything close, but they’re being sold as such, and that’s what the term will be associated with); not to mention we’ll need decades to clean up all the LLM slop spillage before proper research of any kind can proceed.
What you said was valid before the well got poisoned.
Now it’s extremely unlikely we’ll survive long enough to get back on track.
LLM peddlers murdered the future, in the name of short term profits.
We were on track for it, but LLMs derailed that.
Now we’ll have to wait for the bubble to burst, which will poison the concept of AI (since LLMs are being sold as AI despite being practically the opposite) in the minds of both users and investors for decades.
It’d probably take a couple generations for any funding for AI research to be available after that (not to mention cleaning up all the LLM slop spillage from our knowledge repositories)… but by that time we’ll almost certainly be extinct due to global warming.
The LLM peddlers murdered the future for short term profits, and doomed us all in the process.
Egalitarian, too. Sithrak doesn’t discriminate. Everyone will burn, regardless of race, gender, or creed.
Sounds like a typical enlightened centrist.
Curiously, once you Scooby Doo their mask off they’re always quite far to the right of the centre they claim to value so much.
Nah, Sithrak’s followers are chill, they know eternal torment awaits everyone eventually, there’s no rush; they just spread the horrid word, no need to implement it themselves.
Not everyone hates life like you do
Work isn’t life.
It’s the opposite of life (no, death is just its absence).
hang out with co-workers all the time
Bonding over shared trauma and Stockholm syndrome is not a good basis for a relationship (though there’s probably no relationship other than you pestering them while they try to work).
I don’t know about Silverhand (not enough chrome, really, he’s just a natural asshole), but most Vs, definitely.