It was and has been thus far. Has a lot of great features, but this might be a good reason to fork the project.


Fair enough. I’m just hopeful I’ve given them a little spark of doubt and a reminder that multibillion-dollar companies aren’t in the business of telling the objective truth.


This is not a good source. This is effectively, “We’ve investigated ourselves and found [that AI is a miraculous wonder].” Anthropic has a gigantic profit incentive to shill AI, and you should demand impartiality and better data than this.
I don’t see a question in there. What are you asking?


Judging by your comments, you don’t seem to understand the point I was making.


Thanks! I appreciate you noticing.


Cool, know what job could easily be wiped out? Management. Sam Altman is a manager.
Therefore, Sam Altman doesn’t do real work. Fuck you, asshole.


And they were a lobbyist. They weren’t just a petty functionary writing emails and getting coffee for the people in charge. They were actively and knowingly advocating for the abuses Meta is infamous for.


I mean, they used to work for a company that quite literally is built upon the practice of stealing people’s data and selling it to the highest bidder. Surely, that means they are the best person to know how to counter that, right? Right?!
/S
It’s resistant, though, specifically because you can fork it. Don’t like where things are going? Like the features of a previous version? Fork that version and run with it.
It does mean extra work for somebody to maintain that forked version, but the option is nonetheless there.


That won’t matter. VPNs typically use a known set of IPs, and services like Reddit that are run by surveillance-capitalism companies simply block those known ranges (YouTube does the same, for example).


Cool, and I bet it will be just as trustworthy as WhatsApp (i.e. not at all).


If we Americans can’t sensibly regulate ourselves, it seems like the reasonable thing to do.
PIA does not make WireGuard configs available. To get them, you have to use third-party tools to capture and generate the necessary info. Otherwise, you have to use their client, or you don’t get WireGuard at all.
Users have been asking for years (since 2018, I think), and they’ve never provided them.
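For anyone unfamiliar, the config in question is just a small INI-style text file along these lines. This is a generic sketch with placeholder values, not real PIA details; filling in the actual keys and server info is exactly what those third-party tools handle:

```ini
[Interface]
# Client-side keypair and tunnel address (placeholders)
PrivateKey = <your-private-key>
Address = 10.0.0.2/32
DNS = 10.0.0.1

[Peer]
# The provider's server details, which PIA doesn't hand out directly
PublicKey = <server-public-key>
Endpoint = <server-hostname>:51820
AllowedIPs = 0.0.0.0/0
```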


It’s good to know what we can do to reduce our own use—we all have to live on this planet, after all—but these kinds of articles pop up and, at the very least, make people think their efforts will have a meaningful impact. They go to sleep thinking they’re solving the problem (barring extreme situations like war-driven scarcity, for example).
But if every household stopped using electricity, many countries would still have a massive energy problem on their hands, because households aren’t really the problem.


This is actually an excellent use case for AI. Physics and chemistry as scientific disciplines involve a lot of complex pattern recognition and manipulation. AI is just a pattern recognition and generation engine, despite what the tech bros and apologists like to tell us.
What these engines generate will ultimately be vetted by experts before it even goes to trials. Scientists don’t just take things on blind faith simply because a robot or even another expert comes up with something; their entire deal is to understand their particular field of study in great detail, after all!


You are correct, but who said it would be the Democrats doing the work?


This is one of the things that frustrates me about my current boss. He keeps talking about some future project that uses a new codebase we’re currently writing, at which point we’ll “clean it up and see what works and what doesn’t.” Meanwhile, he complains about my code and how it’s “too Pythonic,” what with my docstrings, functions for code reuse, and type hints.
So I secretly maintain a second codebase with better documentation and optimization.
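For context, here’s a made-up toy example (not our actual code) of the style he objects to, with a docstring, type hints, and a small reusable function:

```python
def normalize_scores(scores: list[float], max_value: float = 100.0) -> list[float]:
    """Scale raw scores into the 0-1 range relative to max_value.

    Split out into its own function so the report and the export code
    can both reuse it instead of copy-pasting the same math.
    """
    return [score / max_value for score in scores]


print(normalize_scores([50.0, 75.0, 100.0]))  # [0.5, 0.75, 1.0]
```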


No, but good projects have come from forking away from bad decisions.