Disclaimer: I wrote this article and made this website.
There was some talk of this issue in the recent fediverse inefficiencies thread. I’m hopeful that in the future we’ll have a decentralized solution for file hosting but for now I deeply believe that users should pay for their own file hosting.
Ok, hear me out.
We find the users with the slowest internet and start sending them all the data. They don’t have to keep anything on disk. Then they send it all back and forth between each other. Any time a user makes a request, we just wait for one of the slow nodes to come across the data and send it out.
We use the slowest wires for all the storage. It’s foolproof.
Somebody actually did make this as a joke years ago haha https://github.com/yarrick/pingfs
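For anyone curious how the joke actually works: pingfs keeps file contents “in the wires” by stuffing each chunk into an ICMP echo payload and re-sending it the moment the reply comes back, so the data never rests on disk. Here’s a toy simulation of that idea in Python (not pingfs itself, which is C and uses raw ICMP sockets; the sleeps just stand in for very slow round trips):

```python
import asyncio

LATENCY = 0.5  # pretend round-trip time to our very slow peers, in seconds

class WireStorage:
    """Toy 'storage in flight': every chunk exists only as a packet that is
    perpetually bounced off slow peers; nothing is ever kept on disk."""

    def __init__(self) -> None:
        self.arrivals: dict[int, asyncio.Queue] = {}  # chunk index -> echo replies
        self.tasks: list[asyncio.Task] = []           # keep task refs alive

    async def _bounce(self, index: int, chunk: bytes) -> None:
        box = self.arrivals[index]
        while True:
            await asyncio.sleep(LATENCY)   # chunk is "on the wire"
            while not box.empty():         # unread echoes from the last trip evaporate
                box.get_nowait()
            box.put_nowait(chunk)          # echo reply: briefly readable...
            # ...and the loop immediately "re-sends" it on the next iteration

    def store(self, data: bytes, chunk_size: int = 64) -> int:
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        for idx, chunk in enumerate(chunks):
            self.arrivals[idx] = asyncio.Queue()
            self.tasks.append(asyncio.create_task(self._bounce(idx, chunk)))
        return len(chunks)

    async def retrieve(self, n_chunks: int) -> bytes:
        # a read just waits for each chunk to come around again
        return b"".join([await self.arrivals[i].get() for i in range(n_chunks)])

async def main() -> None:
    storage = WireStorage()
    n = storage.store(b"all of Lemmy's images, obviously")
    print(await storage.retrieve(n))  # costs at least one round trip per read

asyncio.run(main())
```

Reads are exactly as awful as you’d hope: you can only see a chunk during the instant its echo comes back around.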
I was brushing my teeth when reading this comment and inadvertently ended up swallowing all my toothpaste.
don’t forget to spit
Ha! That’s awesome!
Top 5 YouTube videos this one
This is amazing and I love it.
Too wet for server racks in the forest.
They grew there
Look, I know they’re called “farms”, but like I told the last forest gnome, the dank woods are no place to host data.
Any way to use the torrent protocol somehow? Like Popcorn Time did?
This is actually a super interesting idea that I’ve never seen a proof of concept for, other than maybe Freenet or something.
IPFS would be similar, but more purpose-built for this.
Never heard of that before; it looks built exactly for this problem, better than torrent. Good call.
If it was easy to set up, I’d definitely host a terabyte of Lemmy. As a pet.
What would you name your pet?
IPFS?
What would an IPFS solution look like here? That’s a genuine question. I don’t have much experience with IPFS. It seems like it isn’t really used outside of blockchain applications.
The sustainability of it is questionable. If I’m not mistaken, IPFS is based on Ethereum, which has gone over to proof of stake rather than proof of work, but it’s still a pretty cumbersome system.
We’re talking about something that needs to compete with QUIC and Cloudflare. I’m not sure that Ethereum, or even crypto itself, is efficient enough as a content delivery method, which makes IPFS, though a nice idea, seem unrealistic.
But that’s just speculation from someone who has zero knowledge behind IPFS as a technology and protocol, so take it with a grain of salt.
EDIT: honestly, why qualify with “I’m not sure” when besserwissers and their alts roam the fediverse instead of going to therapy? Smh. Give the people a TL;DR at least. I’m not here for long-form content.
IPFS has absolutely nothing whatsoever to do with Ethereum, or indeed any blockchain. It is a protocol for storing, distributing, and addressing data by hashes of its content over a peer-to-peer network.
There is, however, an initiative to create a commercial market for “pinning” (explained below), which is blockchain-based. It still has nothing to do with Ethereum, and is a distinct project that uses IPFS rather than being part of the protocol, thankfully. It is also not a “proof of work” sort of waste, but is built around proving that content which was promised to be stored is actually being stored.
Pinning in IPFS is effectively “hosting” data permanently. IPFS is inherently peer to peer: content you access gets added to your local cache and gets served to any peer near you asking for it—like BitTorrent—until that cache is cleared to make space for new content you access. If nobody keeps a copy of some data you want others to access when your machines are offline, IPFS wouldn’t be particularly useful as a CDN. So peers on the network can choose to pin some data, making it exempt from being cleared with the cache. It is perfectly possible to offer pinning services that have nothing to do with Filecoin or the blockchain, and those already exist. But the organization developing IPFS wanted an independent blockchain-based solution simply because they felt it would scale better and give them a potential way to sustain themselves.
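To make the cache-vs-pin distinction concrete, here’s roughly what it looks like against a local Kubo (go-ipfs) daemon’s RPC API, assuming it’s running on the default port 5001. The endpoints are real, but treat this as a sketch rather than a recipe:

```python
import requests

# Assumes a local Kubo (go-ipfs) daemon with its RPC API on the default port 5001.
API = "http://127.0.0.1:5001/api/v0"

# Add a file: it gets chunked, hashed into a content ID (CID), and announced to
# the DHT. At this point it only lives in this node's cache (plus the caches of
# anyone who happens to fetch it).
with open("meme.png", "rb") as f:
    cid = requests.post(f"{API}/add", files={"file": f}).json()["Hash"]
print("content ID:", cid)

# Pin it: the blocks are now exempt from garbage collection on this node, i.e.
# we've volunteered to keep hosting it even after the cache fills up.
requests.post(f"{API}/pin/add", params={"arg": cid}).raise_for_status()

# Anyone (including us) can now fetch it by content address, from whichever
# peer happens to have it cached or pinned at the time.
data = requests.post(f"{API}/cat", params={"arg": cid}).content
```

A third-party pinning service is essentially somebody else running that pin command on your behalf, no Filecoin required.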
Frankly, it was a bad idea then, as crypto grift was already becoming obvious. And it didn’t really take off. But since Filecoin has always been a completely separate thing from IPFS, it doesn’t affect how IPFS works in any way, which it continues to do.
There are many aspects of IPFS, the actual protocol, that could stand to be improved. But in a lot of ways it does do many of the things a Fediverse “CDN” should. That’s just the storage layer, though. Getting even the popular AP servers to agree to implement IPFS is going to be almost as realistic an expectation as getting federated identity working on AP. A personal, pessimistic view.
TL;DR, from Wikipedia:
IPFS allows users to host and receive content in a manner similar to BitTorrent. As opposed to a centrally located server, IPFS is built around a decentralized system of user-operators who hold a portion of the overall data. Any user in the network can serve a file by its content address, and other peers in the network can find and request that content from any node who has it using a distributed hash table (DHT).
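The “content address” part is the real difference from a URL: the address is derived from the data itself, and the DHT only answers “which peers currently have this hash?”. A stripped-down illustration, with ordinary dicts standing in for a Kademlia DHT and a bare SHA-256 standing in for IPFS’s multihash CIDs:

```python
import hashlib

# Toy stand-ins: a real network spreads these records across a Kademlia DHT
# and uses CIDs (multihashes) rather than bare SHA-256 hex digests.
provider_records: dict[str, set[str]] = {}  # content hash -> peers that have it
local_store: dict[str, bytes] = {}          # this peer's cached/pinned blocks

def add(peer_id: str, data: bytes) -> str:
    """Store data locally and announce to the 'DHT' that we can provide it."""
    cid = hashlib.sha256(data).hexdigest()  # the address *is* the content hash
    local_store[cid] = data
    provider_records.setdefault(cid, set()).add(peer_id)
    return cid

def find_providers(cid: str) -> set[str]:
    """Ask the 'DHT' which peers claim to have this content."""
    return provider_records.get(cid, set())

def verify(cid: str, data: bytes) -> bool:
    """Whatever a random peer hands back can be checked against its address."""
    return hashlib.sha256(data).hexdigest() == cid

cid = add("my-laptop", b"cat.jpg bytes")
print(find_providers(cid))  # {'my-laptop'}: fetch from any of these, then verify
```

Because the address is a hash of the content, you don’t have to trust whichever peer served it; you just verify what you got.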
So it’s BitTorrent in the web browser… thanks. How is that supposed to compete with Cloudflare and QUIC again? It has the same network issues that the blockchain has, in that it will be cumbersome and slow - for anyone who doesn’t have millions to throw into infrastructure. Welcome to the same problem again, but in a different way.
Ironically, because there’s no UDP in browsers, we can’t actually get proper p2p on the web; at best you get WebRTC through centralized coordination servers. Protocol Labs has all but given up on this use case in favor of using some bootstrapped selection of remote helper nodes.
Is file hosting really a must? I mean Reddit and feddit are basically forums. And not many forums allow file uploads. Also, we should have retention limits. Low value posts are allowed to fade away. High value posts that have some level of interaction stay alive longer.
Reddit is basically entirely image or video posts, all hosted by reddit directly.
This feels like something the Fediverse is ultimately going to build for itself. I know jack squat about the details, but it’s gonna have to be a thing eventually, I think.
I think Tenfingers could be an interesting option as hosters do not know what they host, the data can be modified, and it’s 100% decentralised.
Is a p2p system for media with the instances just hosting magnet links too slow for fediverse purposes? To me this seems like the most resilient way to handle media in a decentralized system
If a social network is to take off, it must be accessible from mobile devices behind CGNAT (carrier grade network address translation).
P2P from behind CGNAT works just fine as long as a single server is accessible and can mediate connections between the peers. Most non-servers are behind some sort of NAT these days.
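For what it’s worth, that “single accessible server” trick is just UDP hole punching: both peers register with a rendezvous server, learn each other’s public address and port from it, then fire packets at each other so their NATs open mappings in both directions. A bare-bones sketch (the hostname and session id are made up; real systems use STUN/ICE, and symmetric NATs or stricter CGNATs still need a relay like TURN):

```python
import socket

# --- rendezvous server: runs on the one publicly reachable machine ----------
def rendezvous(port: int = 3478) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    waiting: dict[bytes, tuple] = {}  # session id -> first peer's public endpoint
    while True:
        session, addr = sock.recvfrom(64)  # peers send a shared session id
        if session in waiting:
            other = waiting.pop(session)
            # introduce the peers to each other's *public* (post-NAT) endpoints
            sock.sendto(f"{other[0]}:{other[1]}".encode(), addr)
            sock.sendto(f"{addr[0]}:{addr[1]}".encode(), other)
        else:
            waiting[session] = addr

# --- peer: runs behind NAT/CGNAT on both ends --------------------------------
def peer(session: bytes = b"lemmy-media-42") -> None:
    server = ("rendezvous.example.org", 3478)  # hypothetical public server
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(session, server)               # also opens our own NAT mapping
    ip, port = sock.recvfrom(64)[0].decode().split(":")
    other = (ip, int(port))
    sock.settimeout(1.0)
    for _ in range(10):                        # repeated sends punch the hole
        sock.sendto(b"hello, directly through the NAT", other)
        try:
            print(sock.recvfrom(1024)[0].decode())
            return
        except socket.timeout:
            continue
```

In the stricter setups the punch fails and the mediation server ends up relaying traffic outright, which is where the real hosting cost comes back.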
I wish there was some version of PBS for Lemmy, like public funds for hosting. I’ll admit I haven’t really thought this through, so there’s probably some problems with my idea.
What is stopping some big giant, let’s say Yahoo/Verizon, from buying a shitload of storage and starting their own private instance: open to the public, but private in the sense that only Verizon employees are admins and mods, so only Verizon controls things? Then they advertise to the point that the average person on the street knows Verizon.Lemmy exists and associates Lemmy with being a Verizon thing. What is stopping big tech from pouring in the money required for this concept to take off, and then using their control over their instance to turn a decentralized service into a centralized one in the general public’s mind?
Right now Lemmy is 60k people. Ok. What if Lemmy was 200 million people, and only 60k knew it was a decentralized service? Everyone else just thought Verizon owned Lemmy?
This is literally exactly what happened to email. It didn’t go great
Thank you for writing this. Small typo: focued (focused).
Thanks for reading and pointing out that typo! (I fixed it)
I found this: https://github.com/Marie673/Torrent_Proxy
It’s in Chinese so I don’t know if it’s what’s needed, but if each instance also acted as a torrent proxy, that’s decentralised, domain-agnostic file hosting that doesn’t break frontends, while also letting clients update to do resource resolution themselves and help serve the fediverse’s files as a whole.
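I haven’t dug into that repo either, but the shape of the idea would be something like this: the instance keeps serving plain HTTPS URLs so existing frontends don’t break, and behind each URL it either serves a locally seeded copy or points torrent-aware clients at the swarm. A hypothetical sketch (the route, cache directory, and magnet mapping are all made up for illustration; a fuller proxy would join the swarm itself, e.g. via libtorrent, and stream the data back over HTTP instead of redirecting):

```python
from pathlib import Path
from flask import Flask, redirect, send_file

app = Flask(__name__)
MEDIA_DIR = Path("/var/lib/lemmy/torrent-cache")  # hypothetical local seed/cache
MAGNET = "magnet:?xt=urn:btih:{infohash}&dn={name}"

@app.route("/media/<infohash>/<name>")
def media(infohash: str, name: str):
    """Plain HTTPS URL that existing frontends can keep using unchanged."""
    local = MEDIA_DIR / infohash / name
    if local.exists():
        # this instance happens to seed/cache the file: serve it directly
        return send_file(local)
    # otherwise hand torrent-aware clients a magnet link so they can resolve
    # the content themselves from whichever instances or users seed it
    return redirect(MAGNET.format(infohash=infohash, name=name))

if __name__ == "__main__":
    app.run(port=8080)
```

The nice property is that the URL is keyed by infohash, so any instance (or user) seeding the same file can serve it interchangeably.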