

Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit before joining the Threadiverse as well.
The problem is that they’re happy to own it.
I should note, since this comment is IMO confusingly worded, that the quote you provide is not from the First Amendment of the US Constitution. It’s from the United States Flag Code, which isn’t even a law, let alone part of the Constitution. The Flag Code is basically just a guideline of etiquette that the American Legion published.
There are preventative measures that can be taken that don’t involve them, but unfortunately it’s looking like America might be past those options now.
“Pride flags classified as a terrorist symbol” is not something I had on my bingo card, but in hindsight it probably should have been.
This is actually another of Reddit’s decisions that I’m in agreement with. Subscriber count isn’t a very useful number; it mostly just measures how old a subreddit is, and you can see that much more accurately by looking at the subreddit’s founding date.
This is actually one of the few decisions Reddit has made in recent years that I agree with. They’re trying to eliminate the pattern of having “powermods” that just accumulate endless numbers of huge subreddits under their belts.
I’ve found Qwen3-30B-A3B-Thinking-2507 to be the best all-around “do stuff for me” model that fits on my hardware. I’ve mostly been using it for analyzing and summarizing documents I’ve got on my local hard drive: meeting transcripts, books, and so forth. It’s done surprisingly well on those transcripts; I daresay its summaries tease out patterns that a human wouldn’t have had an easy time spotting.
When it comes to creative writing I mix it up with Llama-3.3-70B-Instruct to enrich the text; using multiple models helps keep the output from becoming repetitive and too recognizable in style.
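The mixing approach above can be sketched as a simple round-robin over model names. This is just an illustrative sketch: the `assign_models` helper and the exact model tags are my own stand-ins, and the actual generation call would be whatever your inference runtime provides.

```python
from itertools import cycle

# Hypothetical sketch: alternate two local models across passages so no
# single model's stylistic tics dominate the finished text. The model
# names are illustrative, matching the models mentioned above.
MODELS = ["qwen3-30b-a3b-thinking-2507", "llama-3.3-70b-instruct"]

def assign_models(passage_prompts, models=MODELS):
    """Pair each passage prompt with the next model in a round-robin cycle."""
    picker = cycle(models)
    return [(next(picker), prompt) for prompt in passage_prompts]

plan = assign_models(["opening scene", "tavern dialogue", "climax"])
# Each entry in `plan` is (model_name, prompt); feed them to your runtime
# one at a time so successive passages come from different models.
```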
I’ve got Qwen3-Coder-30B-A3B-Instruct kicking around as a programming assistant, but while it’s competent at its job, I’ve found that the big online models do better (unsurprisingly), so I use those more. Perhaps if I were focusing on code analysis and cleanup I’d use the local one instead, but when it comes to writing big new classes or applications in one swoop it pays to go with the best right off the bat. Maybe once IDEs get a little better at integrating LLMs it’ll catch up.
I’ve been using Ollama as the framework for running them; it’s got a nice, simple API, and since it runs in the background it claims and releases memory as demand comes and goes. I used to use KoboldCPP, but having to manually start and stop it all the time got tedious.
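For anyone curious what that API looks like, here’s a minimal sketch of a request to Ollama’s local REST endpoint. The `/api/generate` path, port 11434, and the `model`/`prompt`/`stream` fields are part of Ollama’s documented API; the model tag and the prompt are illustrative placeholders.

```python
import json
import urllib.request

# Sketch of a one-shot (non-streaming) generation request against a local
# Ollama server. The model tag "qwen3:30b" is a placeholder; use whatever
# tag `ollama list` shows on your machine.
def build_request(model, prompt):
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("qwen3:30b", "Summarize this meeting transcript: ...")
# urllib.request.urlopen(req) would return a JSON body whose "response"
# field holds the generated text (assuming the Ollama server is running).
```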
This paradigm has been building for a long time. Bush II was a quarter century ago now and he was already sending the US off to invade countries arbitrarily to seize their resources back then.
I don’t think the US is going to pull back from this paradigm quickly. It’s a generational process at best. So as an outsider who is under threat from the US I’m fine with quicker solutions to diminishing their power if that’s what it comes to.
I’m surprised how bad the neighborhood is with America being one big piece. I think chunking it up would make things better for us in the long run, especially considering that the relatively “saner” chunks would act as somewhat of a buffer between us and the total froot-loops down south.
As I said, there are other solutions I’d rather see happen. But if the US is going to stay as it is now then by all means break it up, that’s better than having it be one giant hostile nation.
This is not a good tool and it does not work.
For you, perhaps. But there are an awful lot of people who seem to be finding it a good tool and are getting it to work for them.
Well, I’m Canadian, and America has been threatening my country’s sovereignty lately. I’d rather they just go back to being relatively sane again but if that’s not in the cards then splitting America up into more manageable pieces is an adequate fallback I suppose.
Those pieces wouldn’t be superpowers. They’d just be regular countries like everyone else. Is that so terrible?
If you haven’t already switched to more secure algorithms, you’ll be impressed (and also penniless) when it can break 192-bit encryption with proper entropy.
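For a sense of why breaking a properly generated 192-bit key would be so impressive, here’s a back-of-envelope scale check. The attacker speed is a pure assumption for illustration, and this only covers brute force; it says nothing about any specific algorithmic or quantum attack.

```python
# Back-of-envelope: how long would exhaustive search of a 192-bit keyspace
# take, assuming (hypothetically) an attacker testing 10**18 keys/second?
KEYSPACE = 2 ** 192
RATE = 10 ** 18                  # keys per second -- assumed, not measured
SECONDS_PER_YEAR = 31_557_600    # Julian year

years = KEYSPACE // (RATE * SECONDS_PER_YEAR)
# Even at that absurd rate, the answer is astronomically many years --
# which is why an actual break would imply something far beyond brute force.
```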
You know how doctors will tell you to stop picking at a scab, and maybe put a bandage on it to get you to stop picking at it so that your body can heal without interference? It’s like that.
LLMs have no intelligence. They’re just exceedingly good at language, which has a lot of human knowledge embedded in it.
Hm… two bucks… and it only transports matter? Hm…
It’s amazing how quickly people dismiss technological capabilities as mundane that would have been miraculous just a few years earlier.
In order to make that assumption you have to first assume that they know qualitatively what is better and what is worse, that they have the appropriate skills or opportunity necessary to choose to opt in or opt out, and that they are making their decision on what tools to use based on which one is better or worse.
I don’t think you can make any of those assumptions. In fact I think you can assume the opposite.
Isn’t that what you yourself are doing, right now?
The average person does not choose their tools based on what is the most effective at producing the correct truth but instead on which one is the most usable, user friendly, convenient, generally accepted, and relatively inexpensive.
Yes, because people have more than one single criterion for determining whether a tool is “better.”
If there was a machine that would always give me a thorough well-researched answer to any question I put to it, but it did so by tattooing the answer onto my face with a rusty nail, I think I would not use that machine. I would prefer to use a different machine even if its answers were not as well-researched.
But I wasn’t trying to present an argument for which is “better” in the first place, I should note. I’m just pointing out that AI isn’t going to “go away.” A huge number of people want to use AI. You may not personally want to, and that’s fine, but other people do and that’s also fine.
So it has advantages, then.
BTW, all the modern LLMs I’ve tried that do web searching provide citations for the summaries they generate. You can indeed evaluate the validity of their responses.
OpenAI has an enormous debt burden from having developed this tech in the first place. If OpenAI went bankrupt the models would be sold off to companies that didn’t have that burden, so I doubt they’d “go away.”
As I mentioned elsewhere in this thread I use local LLMs on my own personal computer and the cost of actually running inference is negligible.
Turns out very few people use it that way. Most people use it for far more practical things.
And even if they were mostly using it for that, who are you to decide what is “valuable” for other people? I happen to think that sports are a huge waste of time, does that mean that stadiums are not valuable?
And yet a great many people are willingly, voluntarily using them as replacements for search engines and more. If they were worse then why are they doing that?
[ Removed by Reddit ] is the only idea that comes to mind, alas.