Might be the “horseshoe effect” where they (ultimately) converge on a few things. Change the T-shirt, and that would fit Trumpy Gen Z too.
Would he end up institutionalized? Like, realistically, what would happen to him?
If the police did this, it’s not surprising, but it is really stupid.
This case is going to be under a microscope like an inverted OJ trial, and every bit of police misconduct is ammunition for his lawyers.
Imagine the uproar if this guy gets away on a technicality… it would be a national celebration, lol.
And it’s all evidence his lawyers can use in court.
And (indirectly) on financial breaks for the mega rich and other paranoid investigations into govt agencies, all while shooting US debt through the roof so fast a large fraction will go to paying interest to… more rich people. Indefinitely. Even after Trump.
How’s that sound?
Their reputation and past reporting is supposed to back up things they state as facts (like assuming that reviews they cite are real) for practicality and brevity. Imagine having to document every bit of background research in a presentable way.
They could have included screenshots though.
And the skepticism is healthy. I do personally ‘trust’ Axios (which I read almost daily but regularly double check).
I can picture little shitlord kids (I love them) joining this just because, reading this in surprise… And loving that they can preach to absolutely anyone with an ear.
The American way is to bail them out financially, then re-elect them.
Ah yeah, I meant Sweeney.
Epic Games is not publicly traded.
And TBH their history with Unreal is not that bad. And Valve is already extracting a truckload of money out of us through their percentage cut.
Carmack is absolutely a character though, lol. I have to wonder how controversial EGS would be without him.
And a small cluster like Alibaba used to train Qwen 2.5 is basically a drop in the bucket.
The hoard of GPUs that Meta, Microsoft/OpenAI, and especially X have is apparently being used extremely inefficiently, or perhaps mostly not used to train AI at all, but for regular ad/engagement-optimization stuff.
Ah yeah those were the good old days when vendors were free to do that, before AMD/Nvidia restricted them. It wasn’t even that long ago, I remember some AMD 7970s being double VRAM.
And, again, I’d like to point out how insane this restriction is for AMD given their market struggles…
What could they do, enlist North Korea?
That’s an Onion headline, surely…
Not even that, they really just need to pay more taxes to help Ukraine.
But nope.
And Russia wouldn’t dare bomb NATO unless they want to get totally screwed.
I suppose there hasn’t been much urgency, and the goal of many members is ‘stability,’ but they are now facing a new reality where, according to historical precedent, there will be much less stability after Jan 20th, and there is new urgency to bend rules before then.
Strix Halo is your unicorn; idle power should be very low (assuming AMD VCE is OK compared to Quick Sync).
Just that bursts of inference for a small model on a phone or even a desktop are less power hungry than a huge model on A100/H100 servers. The hardware is already spun up anyway, and (even with the efficiency advantage of batching) Nvidia runs their cloud GPUs at crazy inefficient voltages/power bands just to get more raw performance per chip and squeak out more interactive gains, while phones and such run at extremely efficient voltages.
There are also lots of tricks that can help “local” models like speculative decoding or (theoretically) bitnet models that aren’t great for cloud usage.
Also… GPT-4 is very inefficient. Open 32B models are almost matching it at a fraction of the power usage and cost, even in servers. OpenAI kind of sucks now, but the larger public hasn’t caught on yet.
Oh also you might look at Strix Halo from AMD in 2025?
Its IGP is beefy enough for LLMs, and it will be WAY lower power than any dGPU setup, with enough VRAM to be “sloppy” and run stuff in parallel with a good LLM.
You could get that with 2x B580s in a single server, I guess, though you could have already done that with the A770s.
I’m not asserting that they are close to the same, but that they end up aligning on certain issues more than either “side” would admit, especially at the ends of the horseshoe (like violent frustration over the same things, and disdain for some common institutions).