

Internet Explorer meme bout to be replaced by just rendered on my 8gb card
He has a wife, you know!
And you can even export it there.
Better to slap it twice at half strength so that it’s cooked when you catch it.
You’ve heard of minimalism, now get ready for nonenism
As if companies would ever stop putting ads and logos on their products
The problem is that I want failover to work if a site goes offline; this happens quite a bit with the private ISPs where I live, and instead of waiting for the connection to be restored, my idea was that Kubernetes would see the failed node and replace it.
Most data will be transferred locally (with node affinity), and only on failure would the pods spread out. The problem that remained was storage, which is why I’m here looking for options.
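As a sketch of the node-affinity idea (pod name and site label are illustrative, not from any real cluster): a *preferred* affinity keeps the pod at its home site, while still letting the scheduler place it elsewhere when that site goes down.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app                        # illustrative name
spec:
  affinity:
    nodeAffinity:
      # Prefer nodes at the home site, but do not *require* them,
      # so the pod can still be rescheduled elsewhere on failover.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone   # assumed site label
            operator: In
            values:
            - site-a
  containers:
  - name: app
    image: nginx
```

Using `preferred…` instead of `requiredDuringSchedulingIgnoredDuringExecution` is what allows the spread-out-on-failure behaviour described above.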
Thanks for the info!
I’ll try Rook-Ceph; Ceph has been recommended quite a lot now, but my NVMe drives sadly don’t have PLP (power-loss protection). AFAICT that should still work, because not all nodes will face power loss at the same time.
I’d rather start with the hardware I have and upgrade as necessary; backups are always running for emergencies, and I can’t afford to replace all the hard drives.
I’ll join Home Operations and see what info I can find.
Think of all the sticks and stones everybody will have once this is over.
It’s fine if the bottleneck is upload/download speed, there’s no easy way around that.
The other problems, like high latency or using more bandwidth than necessary, are more my fear. Maybe a local read cache or something like that could be a solution too, but that’s why I’m asking what’s in use and what works vs. what is better reserved for dedicated networks.
Not dystopian enough for that.
Ceph (and Longhorn) want “10 Gbps network bandwidth between nodes”, while I’ll have around 1 Gbit/s between nodes, or even lower.
What’s your experience with Garage?
I heard that Ceph lives and dies with the network hardware. Is a slow internet connection even usable when the docs want 10 Gbit/s networking between nodes?
They both support k8s: JuiceFS with either just a hostPath (not what I’d use) or the JuiceFS CSI Driver. Linstor has an operator which uses DRBD and provides it too.
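For reference, dynamic provisioning with the JuiceFS CSI Driver is wired up through a StorageClass roughly like the sketch below (the class and secret names are illustrative; the secret holds the JuiceFS volume credentials, and `csi.juicefs.com` is the driver’s provisioner name):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: juicefs-sc                         # illustrative name
provisioner: csi.juicefs.com
parameters:
  # Secret with the JuiceFS filesystem name and metadata-engine URL
  csi.storage.k8s.io/provisioner-secret-name: juicefs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-publish-secret-name: juicefs-secret
  csi.storage.k8s.io/node-publish-secret-namespace: default
reclaimPolicy: Retain
```

PVCs then reference `juicefs-sc` as their `storageClassName`, the same as with any other provisioner.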
If you know of storage classes that are useful for this deployment (or just ones you want to talk about in general), go ahead. From what I’m seeing in this thread, I’ll probably have to deploy all the options that seem reasonable and test them myself anyway.
Well, if that is the case then I’ll have to try them all, but I’m hoping the general behaviour is at least similar across options so that I can start with a good one.
I want the failover to work in case of an internet or power outage, not local cluster node failure. Multiple clusters would make configuration and failover across locations difficult, or am I wrong?
I mean storage backends as in the provisioner; I will use local storage on the nodes with either LVM or just storage on a filesystem.
I already set up a cluster and tried Linstor; I’m searching for experiences with the options because I don’t want to test them all.
I currently manage all the servers with a NixOS repository but am looking for better failover.
You can still create a system in which not even you know the password if the right conditions are met.
Like, if you didn’t authenticate in 3 days, the master encryption key is overwritten and no one can decrypt it.
Though this would be a very unstable system: one slight mistake and you have to start over, having lost all your data.
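A minimal sketch of that dead-man’s-switch idea in Python (all names and the 3-day window are illustrative, not from any real system): if no authentication is seen within the window, the master key is overwritten in place and the data becomes unrecoverable.

```python
AUTH_WINDOW = 3 * 24 * 60 * 60  # 3 days, in seconds (illustrative)

def check_dead_mans_switch(last_auth: float, now: float, key: bytearray) -> bool:
    """Destroy the master key if the auth window has elapsed.

    `last_auth` and `now` are Unix timestamps. Returns True if the
    key was overwritten (i.e. the data is now unrecoverable).
    """
    if now - last_auth > AUTH_WINDOW:
        for i in range(len(key)):
            key[i] = 0  # zero out the key material in place
        return True
    return False
```

This also shows why the scheme is so fragile: a single missed check-in, clock skew, or downtime of the checker itself is enough to trigger the wipe.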
All of them