For the latest version of lemmy, hot sort works in the new fashion. There is a pull request with further implementation details linked in the GitHub issue.
Ah, fair enough. My response doesn’t apply then.
You misunderstand what the Hot rank is doing. It’s not balancing newness vs hotness, it’s scaling hotness according to community size. This might feel like newness if you’re focused on vote counts as a proxy for post age, but it’s a different approach. See https://github.com/LemmyNet/lemmy/issues/3622 for details.
There’s a couple ways to think about this:
At any rate, this preference toward smaller communities in Hot is a recent and deliberate change. While they might further tweak the scaling factors, I wouldn’t expect it to be drastically different. It sounds to me like what you want is Top, Active, or Most Comments. All of these are unscaled by community size and will get you top posts by their absolute metric rather than posts that are doing well relative to the size of their community.
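For the curious, here’s a toy Python sketch of what “scaling hotness according to community size” means conceptually. This is my own illustration, not Lemmy’s actual formula (that’s in the issue and PR linked above), and the names and constants here are made up:

```python
import math
from datetime import datetime, timezone

def raw_hot_rank(score: int, published: datetime) -> float:
    """Toy 'hotness': net votes decayed by post age."""
    hours_old = (datetime.now(timezone.utc) - published).total_seconds() / 3600
    return math.log10(max(1, score + 3)) / (hours_old + 2) ** 1.8

def scaled_hot_rank(score: int, published: datetime, monthly_active_users: int) -> float:
    """Divide hotness by a logarithmic measure of community size, so a post doing
    well in a 200-user community can compete with one in a 20,000-user community."""
    return raw_hot_rank(score, published) / math.log(2 + monthly_active_users)
```

The only point is that the denominator grows with community size, so a post gets ranked against what’s normal for its own community rather than against the whole instance.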
I dunno how to hotlink, but if you scroll to the active users graph at https://fedidb.org/software/lemmy you can see there’s been something like a 25% dropoff in active users since the peak in July. Lemmy has still grown 50x since May, and it’s much MUCH more active than it was then. But we’ve definitely crested a peak, and not everyone who gave Lemmy a shot then is sticking around on a monthly basis.
This isn’t necessarily bad. Lemmy is still young and has many rough edges; it wasn’t realistic to win over every user that tried it on ease-of-use in a head-to-head with Reddit. And Mastodon has had multiple growth waves interspersed with periods of declining usage, but with the spikes it has grown, or at least remained stable, overall. Early-stage commercial social media have big ups and downs in engagement and growth as well, and just like Lemmy those ups and downs are often externally driven… when competitors mess up, when a big global news story hits, when a major sporting event happens… these can all be catalysts for one-time growth. It’s not a straight line.
Time will tell what user level we stabilize at in the short-term and what events spur new growth, but it’s normal to have a big expansion be followed by some degree of contraction.
No no, sorry. I mean can I still have all my network traffic go through some VPN service (mine or a provider’s) while Tailscale is activated?
Tailscale just partnered with Mullvad so this works out of the box for that setup: https://tailscale.com/blog/mullvad-integration/
For others, it’s a “yes on paper” situation. It will probably often not work out of the box, but it seems likely to be possible as an advanced configuration. At the far end of the possibilities, it would definitely be possible to set up a couple of docker containers as one-armed routers, one with your VPN and one with Tailscale as an exit node. Then they can each have their own networking stack and you can set up your own routes and DNS, delegating only the necessary bits to each one. That’s a pretty advanced setup and you may not have the know-how for it, but it demonstrates what’s possible.
To a first approximation, Tailscale/Headscale don’t route any traffic.
Ah, well damn. Is there a way to achieve this while using Tailscale as well, or is that even recommended?
Is there a way to achieve what? Force tailscale to route all traffic through the DERP servers? I don’t know, and I don’t know why you’d want to. When my laptop is at home on the same network as my file-server, I certainly don’t want tailscale sending file-server traffic out to my Headscale server on the Internet just to download it back to my laptop on the same network it came from. I want NAT traversal to allow my laptop and file-server to negotiate the most efficient network path that works for them… whether that’s within my home lab when I’m there, across the internet when I’m traveling, or routing through the DERP server when no other option works.
OpenVPN or vanilla Wireguard are commonly set up with simple hub-and-spoke routing topologies that send all VPN traffic through “the VPN server”, but this is generally a slower path than a direct connection. It might be imperceptibly slower over the Internet, but it will be MUCH slower than the local network unless you do some split-DNS shenanigans to special-case the local-network scenario. With Tailscale, it all more or less works the same wherever you are, which is a big benefit. The exception, of course, is if you have a true multigigabit network at home and the encryption overhead slows you down… Wireguard is pretty fast though, and not a problematic throughput limiter for the vast majority of cases.
Have a read through https://tailscale.com/blog/how-nat-traversal-works/
You, and many commenters, are pretty confused about how Tailscale/Headscale work.
So I have a question, what can I do to prevent that from happening? Apart from hosting everything on my own hardware of course, for now I prefer to use VPS for different reasons.
Others have mentioned that client-caching can act as a read-only stopgap while you restore Vaultwarden.
But otherwise the solution is backup/restore. If you run Vaultwarden in a docker or podman container using volumes to hold state… then as long as you can restart Vaultwarden without losing data, you know exactly what data needs to be backed up and what needs to be done to restore it. Set up a nightly cron job somewhere (your laptop is fine enough if you don’t have somewhere better) to shut down Vaultwarden, rsync its volume dirs, and start it up again. If your VPS explodes, copy these directories to a new VPS at the same DNS name and restart Vaultwarden using the same podman or docker-compose setup.
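To make that concrete, here’s a rough sketch of what such a nightly job could look like, written as a small Python script you could call from cron. The paths, project directory, and backup destination are placeholders for whatever you actually run, and it assumes docker compose v2 and rsync are installed:

```python
#!/usr/bin/env python3
"""Nightly Vaultwarden backup: stop the container, rsync its volume dirs, start it again.
Run from cron, e.g.:  15 3 * * * /usr/bin/python3 /home/me/backup_vaultwarden.py
All paths/names below are placeholders for your own setup."""
import subprocess

COMPOSE_DIR = "/home/me/vaultwarden"                  # where your docker-compose.yml lives
VOLUME_DIR = "/home/me/vaultwarden/data"              # the bind-mounted dir holding Vaultwarden state
BACKUP_DEST = "me@backuphost:/backups/vaultwarden/"   # anywhere rsync can reach

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

try:
    # Stop Vaultwarden so its database isn't being written mid-copy.
    run("docker", "compose", "--project-directory", COMPOSE_DIR, "stop")
    # Copy the volume contents; --delete keeps the backup an exact mirror.
    run("rsync", "-a", "--delete", VOLUME_DIR + "/", BACKUP_DEST)
finally:
    # Always bring the service back up, even if the rsync failed.
    run("docker", "compose", "--project-directory", COMPOSE_DIR, "start")
```

Restore is just the reverse: rsync the directories onto the new VPS, point the same DNS name at it, and bring the container back up with the same compose file.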
All that said, KeePass+filesync is a great solution as well. The reason I moved to Vaultwarden was so I could share passwords with others in a controlled way. For single-user use, I prefer how KeePass folders work, and KeePass generally has better organization features… if it were just for me, I’d still be using it.
My take echoes this. If one puts any stock in streamer recommendations, Baalorlord, who has at various times held Spire world-record winstreaks, has recently cited Monster Train as his current favorite spirelike (other than Spire itself), and also cited Griftlands as a playthrough highlight.
Baalor probably doesn’t have an opinion on Inscryption as he tends to avoid things with even a slight horror theme. I enjoyed what I played of Inscryption a lot, but very little about playing it evoked the vibe of playing spire. Monster Train is quite adjacent though, the mechanics are different enough to feel fresh but it slots into the same gameplay mood for me whereas Inscryption is just a different (and still very good) thing.
Neither has the tight balance of Spire or feels quite as deep strategically to me (though in all honesty I’m probably not a strong enough player to be trusted in this regard), but both are fun.
That’s an interesting report but it’s possible to “work” at different latencies. And unless you have specialized audio capture/playback hardware and have done some tuning and testing to determine the lowest stable latency that your system is capable of achieving… “works” for you is likely to mean something very different than it does to someone who does a lot of music production.
It remains an interesting question to some users whether Wayland changes the minimum stable latency relative to X and if so whether it does so for better or worse.
I’d consider asking in a Linux audio or music production community (I’m not aware of any on Lemmy that are big enough to have a likely answer though). If music production is a primary use case and audio latency matters to you, almost no general users are going to be able to comment on the difference between X and Wayland from a latency perspective. There may not be a difference, but there might and you won’t be likely to learn about it outside of an audio-focused discussion.
It may seem kinda stupid to consider that an accomplishment, but I feel quite genuinely proud of myself for actually succeeding at this instead of just throwing in the towel…
Way to go. I’ve been at this a decent while and do some pretty esoteric stuff at work and at home… but this loop of feeling stupid, doing the work, and feeling good about a success has been a constant throughout. I spent a week struggling to port some advanced container setups to podman a month or so ago, same feeling of pride when I got them humming.
It’s not stupid to be proud of an accomplishment even if it’s a fundamental one that’s early in a bigger learning curve. Soak it in, then on to the next high. Good luck.
I use Headscale, but Tailscale is a great service and what I generally recommend to strangers who want to approximate my setup. The tradeoffs are pretty straightforward:
Tailscale is great, and there’s no compelling reason that should prevent most self-hosters that want it from using it. I use Headscale because I can and I’m comfortable doing so… But they’re both awesome options.
Tailscale is out, unfortunately. Because the server also runs Plex and I need to use it with Chromecast on remote access…
I rather suspect you already understand this, but for anyone following along… Tailscale can be combined with other networking techniques as well. So one could:
It’s not an all or nothing proposition, but of course the more networking components you have the more complicated everything gets. If one can simplify, it’s often well worth doing so.
Good luck, however you approach it.
So for something like Jellyfin that you are sharing to multiple people you would suggest a VPS running a reverse proxy instead of using DDNS and port forwarding to expose your home IP?
I run my Jellyfin on Tailscale and don’t expose it directly to the internet. This limits remote access to my own devices, or the devices of those I’m willing to help install and configure tailscale on. I don’t really trust Jellyfin on the public internet though. It’s a bit buggy, which doesn’t bode well for its security posture… and a misconfiguration that exposes your content could generate a lot of copyright liability, even if it’s all legitimately licensed, since you’re not allowed to redistribute it.
But if you do want it publicly accessible, there isn’t a huge difference between a VPS proxying and a dynamic DNS setup. I have a VPS and like it, but there’s nothing I do with it that couldn’t be done with Cloudflare Tunnel or dyndns.
What VPS would you recommend? I would prefer to self host, but if that is too large of a security concern I think there is a real argument for a VPS.
I use Linode, or what used to be Linode before it was acquired by Akamai. Vultr and DigitalOcean are probably what I’d look to if I got dissatisfied. There are a lot of good options available. I don’t see a VPS proxy as a security improvement over Cloudflare Tunnel or dyndns though. Tailscale is the security improvement that matters to me, by removing public internet access to a service entirely while letting me continue to use it from my devices.
Do I need to set up NGINX on a VPS (or similar cloud based server) to send the queries to my home box?
A proxy on a VPS is one way to do this, but not the only way and not necessarily the best one… depending on your goals.
Do I need to purchase a domain (randomblahblah.xyz) to use as the main access route from outside my house?
Not for tailscale, and I don’t think for Cloudflare tunnel. Yes for a VPS proxy.
I’ve run a VPS for a long while and use multiple techniques for different services.
I use k8s at work and have built a k8s cluster in my homelab… but I did not like it. I tore it down and am currently using podman, and I don’t think I would go back to k8s (though I would definitely use docker as an alternative to podman, and would probably even recommend it over podman for beginners, even though I’ve settled on podman for myself).
Overall, the simplicity and lightweight resource consumption of podman/docker are what I value at home. The extra layers of abstraction and constraints k8s employs are valuable at work, where we have a lot of machines and a lot of people that must coordinate effectively… but I don’t have those problems at home, and the overhead (compute overhead, conceptual overhead, and config overhead) of k8s’ solutions to them is annoying there.
11,263 lbs, huh? It’s not a kind estimate, but not unrealistic either.
No, Beehaw defederated your instance. The open-source community on lemmy.ml someone else already mentioned is your best bet.
Another user posted the blog where they discuss their speedup techniques: https://tailscale.com/blog/more-throughput/
It’s likely that the kernel version could use similar techniques to surpass the performance of the userspace version that Tailscale uses, but no one has put in the work to make the kernel implementation as sophisticated as the userspace one.