

Sure, it’s always a step of 10x, but you do have to remember all the prefixes. Or you can only remember the 1000x prefixes - but you also need to remember centi-. Then, nobody says “megagram” - it’s “ton”. So there are quirks to remember.
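For what it’s worth, the quirk is easy to show in code: every prefix is a power of ten, but centi- breaks the otherwise neat 1000× pattern (a toy sketch, prefix table abbreviated):

```python
# A few SI prefixes: steps of 1000x, plus the odd one out, centi- (10^-2).
PREFIX = {"kilo": 1e3, "mega": 1e6, "giga": 1e9,
          "milli": 1e-3, "micro": 1e-6, "centi": 1e-2}

def convert(value, prefix):
    """Convert a prefixed value to base units."""
    return value * PREFIX[prefix]

# 1 megagram = 1e6 grams - which nobody calls a megagram; it's a (metric) ton.
print(convert(1, "mega"))  # → 1000000.0
```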
Well, you can theoretically make a second app-view “instance”, call it “Greenearth” or something, and have different policies than Bluesky on how to verify or select content. But until someone actually does so, it’s not really decentralized. I’m not sure what’s stopping people from doing so, but it’s been a while, so I assume there must be some roadblock.
There’s also the issue of how Bluesky itself was depicted as the decentralized network, when it’s really more akin to a single instance.
Currently not, because it’s not de facto decentralized. There would need to be multiple relays, managed by different organizations, AND multiple app views, also managed by different orgs, for me to consider it such.
The non-existence of de facto decentralization indicates that the ecosystem doesn’t actually promote decentralization, even though it technically allows for it.
Meta calls its penalty a ‘tariff’
That’s a retaliatory tariff. Meta broke the law, and the EU retaliated.
How can someone support them in good faith? I’ll focus on China, but here are some reasons:
For starters, I don’t believe it’s possible to impose LGBTQ acceptance on a society from the outside. For example, making LGBTQ acceptance a precondition for having good relations with China has literally a 0% chance of improving the lives of LGBTQ people there. It’s more likely to backfire. On the other hand, having good relations and allowing cultural exchange to happen naturally can - and I think, over the last few decades before relations soured, has - improved LGBTQ acceptance there.
Also, amongst superpowers, China has a relatively good track record when it comes to using military force. They have had conflicts with neighboring countries, but it’s nothing compared to the wars the US or Russia (and the USSR) have fought.
Finally (this one I don’t share, but I think it can be held in good faith), someone might not care about human rights all that much. For example, if you consider government-sponsored murders to be just the same as any other - not better, but also not worse - then even if you include Tiananmen Square and other killings by the government, the murder rate in China is still lower than in most of the world.
I’d be very skeptical of claims that Debian maintainers actually audit the code of each piece of software they package. Perhaps they make some brief reviews, but actually scrutinizing every line for hidden backdoors is just not feasible.
Any accessibility service will also see the “hidden links”, and while a blind person with a screen reader will notice if they wander off into generated pages, it will waste their time too. Especially if they don’t know about such a “feature”, they’ll be very confused.
Also, I don’t know about you, but I absolutely have a use for crawling X, Google Maps, Reddit, or YouTube, and getting information from there without interacting with the service myself.
I would love to think so. But the word “verified” suggests more.
while allowing legitimate users and verified crawlers to browse normally.
What is a “verified crawler” though? What I worry about is: are only big companies like Google allowed to have them now?
I agree that it’s difficult to enforce such a requirement on individuals. That said, I don’t agree that nobody cares for the content they post. If they have “something cool they made with AI generation” - then it’s not a big deal to have to mark it as AI-generated.
No, that’s because social media is mostly used for informal communication, not scientific discourse.
I guarantee you that I would not use Lemmy any differently if posts were authenticated with private keys than I do now, when posts are authenticated by the user’s instance. And I’m sure most people are the same.
Edit: Also, people can already authenticate the source, by posting a direct link there. Signing wouldn’t really add that much to that.
Sure, but that has little to do with disinformation. Misleading/wrong posts don’t usually spoof the origin - they post the wrong information in their own name. They might lie about the origin of their “information”, sure - but that’s not spoofing.
I don’t understand how this will help deep fake and fake news.
Like, if this post was signed, you would know for sure it was indeed posted by @[email protected], and not by a malicious lemm.ee admin or hacker*. But the signature can’t really guarantee the truthfulness of the content. I could make a signed post claiming that the Earth is flat - or a deep fake video of NASA’s administrator admitting so.
Maybe I’m missing your point?
(*) unless the hacker hacked me directly
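To make the point concrete, here’s a toy sketch in Python. It uses symmetric HMAC as a stand-in for a real asymmetric scheme like Ed25519 (the key and messages are made up), but the trust argument is identical: the signature proves who posted, not whether the content is true.

```python
import hashlib
import hmac

# Hypothetical per-user key; a real system would use an asymmetric keypair.
ALICE_KEY = b"alice-secret-key"

def sign(message: bytes) -> str:
    return hmac.new(ALICE_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(message), signature)

post = b"The Earth is flat."
signature = sign(post)

assert verify(post, signature)             # origin checks out: Alice wrote it
assert not verify(b"tampered", signature)  # tampering is detectable
# ...but nothing here says anything about whether the claim itself is true.
```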
I don’t think any kind of “poisoning” actually works. It’s well known by now that data quality is more important than data quantity, so nobody just feeds training data in indiscriminately. At best it would hamper some FOSS AI researchers that don’t have the resources to curate a dataset.
What makes these consumer-oriented models different is that rather than being trained on raw data, they are trained on synthetic data from pre-existing models. That’s what the “Qwen” or “Llama” parts mean in the name. The 7B model is trained on synthetic data produced by Qwen, so it is effectively a compressed version of Qwen. However, neither Qwen nor Llama can “reason,” they do not have an internal monologue.
You got that backwards. They’re other models - Qwen or Llama - fine-tuned on synthetic data generated by DeepSeek-R1. Specifically, reasoning data, so that they can learn some of its reasoning ability.
But the base model - and so the base capability there - is that of the corresponding Qwen or Llama model. Calling them “DeepSeek-R1-something” doesn’t change what they fundamentally are, it’s just marketing.
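A minimal sketch of that distillation pipeline, with toy stand-ins for the models (the class and function names here are hypothetical, not a real training API):

```python
class ToyModel:
    """Toy stand-in for an LLM; just records what it was trained on."""
    def __init__(self, name):
        self.name = name
        self.seen = []  # training examples this model has been fine-tuned on

    def __call__(self, prompt):
        # Pretend to generate a chain-of-thought answer.
        return f"<think>...</think> answer to {prompt!r}"

    def update(self, dataset):
        self.seen.extend(dataset)

def distill(teacher, student, prompts):
    # 1. The teacher (e.g. DeepSeek-R1) generates reasoning traces.
    traces = [(p, teacher(p)) for p in prompts]
    # 2. The student (e.g. a Qwen or Llama base model) is fine-tuned on them;
    #    its architecture and pretraining stay what they were.
    student.update(traces)
    return student

r1 = ToyModel("DeepSeek-R1")
qwen7b = distill(r1, ToyModel("Qwen-7B"), ["What is 2+2?"])
# qwen7b is still a Qwen model - it has merely seen R1's outputs.
```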
There are already other providers like Deepinfra offering DeepSeek. So while the average person (like me) couldn’t run it themselves, they do have alternative options.
A server-grade CPU with a lot of RAM and memory bandwidth would work reasonably well, and cost “only” ~$10k rather than $100k+…
To be fair, most people can’t actually self-host Deepseek, but there already are other providers offering API access to it.
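Rough arithmetic on why self-hosting is out of reach for most people - assuming R1’s roughly 671B total parameters and aggressive 4-bit quantization (real memory use also includes KV cache and runtime overhead):

```python
# Back-of-the-envelope RAM estimate for running a large model on CPU.
params = 671e9          # total parameter count (approximate)
bytes_per_param = 0.5   # 4-bit quantization = half a byte per weight
ram_gb = params * bytes_per_param / 1e9
print(f"~{ram_gb:.0f} GB of RAM just for the weights")  # ~336 GB
```

That’s far beyond any consumer machine, but within reach of a big-RAM server.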
A server can decide which servers it’s connected to. It can have a blacklist of blocked instances - or go further and have a whitelist of allowed instances, blocking everything else.
Such a feature is necessary to deal with issues like spam instances, or instances that host illegal content.
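As a sketch, the policy boils down to a set lookup (a hypothetical config shape, not Lemmy’s actual settings format):

```python
# Instance-level federation policy: blocklist by default, or an
# allowlist that rejects everything not explicitly listed.
BLOCKLIST = {"spam.example"}  # deny these, federate with everyone else
ALLOWLIST = None              # or a set of instances: federate ONLY with these

def accepts(instance: str) -> bool:
    """Decide whether to federate with the given instance."""
    if ALLOWLIST is not None:
        return instance in ALLOWLIST
    return instance not in BLOCKLIST

print(accepts("lemm.ee"))       # → True
print(accepts("spam.example"))  # → False
```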
One of the things I like a lot about lemm.ee is that they have blocked very few instances.