This is definitely not possible with base KDE. If you’re using X11, you might be able to follow https://askubuntu.com/questions/1398508/split-a-widescreen-monitor-in-two but it’s pretty fragile, and I’m not sure whether KDE will respect those monitors.
And if you can’t afford post-your-fault car insurance you don’t get to drive a car, so if you can’t afford post-your-fault health insurance you don’t get to live?
I think it’s the swing arm for a TV/monitor mount, and it swings up when you remove the TV from it.
I don’t know a lot about Tailscale, but I think that’s likely not relevant to what’s possible (though maybe relevant to how to accomplish it).
It sounds like the main issue here is DNS. If you wanted to / were okay with just IP-based connections, then you could assign each service a different port on Bob’s box and have nginx point those ports at the relevant services. This should be very easy to do with a raw nginx config; I could write one for you if you wanted. It’s pretty easy if you’re not dealing with HTTPS/certificates (in which case this method won’t work anyway).
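To make that concrete, here’s a rough, untested sketch of the kind of raw nginx config I mean. The listen ports (8081/8082) and the upstream ports (8096 for jellyfin, 5055 for jellyseer) are just assumptions; swap in whatever the containers on Bob’s box actually use.

# Sketch only: port-based reverse proxy, plain HTTP, no certificates.
# Clients connect to http://<box-ip>:8081 (jellyfin) and http://<box-ip>:8082 (jellyseer).
server {
    listen 8081;
    location / {
        proxy_pass http://127.0.0.1:8096;   # assumed jellyfin port on the box
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}

server {
    listen 8082;
    location / {
        proxy_pass http://127.0.0.1:5055;   # assumed jellyseer port on the box
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}

These server blocks would go inside the http block of nginx.conf (or a file under conf.d / sites-enabled).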
Looking quickly on Google for npm (which I’ve never used), this might require adding the ports to the Docker config and then using that port in npm (like here). This is likely the simplest solution.
If you want hostnames/HTTPS, then you need some sort of DNS, which is a bit harder. You could take over their router like you suggested, or you could use public DNS that points at a private IP (that’s the only way I’m aware of to get publicly trusted SSL certificates).
You might be able to use mDNS to get local DNS at Bob’s house automatically, which would be very clean. You’d basically register something like jellyseer.local and jellyfin.local on Bob’s network from the box and then set up the proxy manager to proxy based on those domains. You might be able to just do avahi-publish -a -R jellyseer.local 192.168.box.ip and then avahi-publish -a -R jellyfin.local 192.168.box.ip, and then any client that supports mDNS/Avahi will be able to find the service at that host. You can then register those names in nginx/npm and I think things should just work.
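If you go that route, the proxy side could look something like the sketch below (again untested, upstream ports assumed); the avahi-publish commands would need to stay running on the box, e.g. launched from a small startup script or systemd unit.

# Sketch only: name-based proxying once jellyfin.local and jellyseer.local resolve via mDNS.
server {
    listen 80;
    server_name jellyfin.local;
    location / {
        proxy_pass http://127.0.0.1:8096;   # assumed jellyfin port
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name jellyseer.local;
    location / {
        proxy_pass http://127.0.0.1:5055;   # assumed jellyseer port
        proxy_set_header Host $host;
    }
}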
To answer your questions directly:
I’d be happy to try to give more specifics if you choose a path similar to one of the options above.
Wait so the images in your post are the after images?
Cool!
I wouldn’t worry about making a second post. We can use all the content we can get, and this is neat.
Isn’t .local an mDNS auto-configured domain? Usually I think you’re supposed to choose a different domain for your local DNS zone. But that’s probably not the source of the problem?
From looking at the GitHub, I think you don’t need or want to host this publicly. It doesn’t automatically collect and store your information; it’s more a tool for visualizing and cross-referencing your takeout/exported data from a variety of tech platforms. It’s just built as a web app for ease of UI, cross-platform support, and local hosting.
I feel like this really depends on what hardware you have access to. What are you interested in doing? How long are you willing to wait for it to generate, and how good do you want it to be?
You can pull off about 0.5 words per second with one of the Mistral models on a CPU with 32GB of RAM. The Stable Diffusion image models work okay with around 8-16GB of VRAM.
Your ISP knows where you’re going anyway. They don’t need DNS for that. They see all the traffic.
I run Creo under Wine, and while the performance is great, the stability is not. Creo loves crashing even on Windows, and it’s much worse under Wine. It’s the one program I kinda wish I had kept a dual boot around for.
We were in Altmar, so kinda close.
This is one reason I’m switching from PLA+ back to normal PLA. The eSUN PLA+ really seems to get brittle when held under stress. This is an issue with printed parts as well: I’ve had parts suddenly crack in half where they’d been under stress for a few months.
Also it’s really annoying when little bits of filament get stuck in your filament guide tube :(
The symptoms you describe are exactly what happens to my machine when it runs out of memory and starts swapping really hard. That’s easy to check: see whether disk I/O also spikes when it happens and whether memory usage is high.
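On Linux you can watch for it with standard tools; nothing here is specific to your setup:

free -h            # how much RAM and swap are in use right now
swapon --show      # whether swap exists and how full it is
vmstat 1           # watch the si/so (swap in/out) and wa (IO wait) columns while the slowdown happens

If si/so jump and memory is nearly full right when the machine stalls, it’s almost certainly swapping.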
Well I either got a personal fire or I’m on fire myself. Witch fire sounds better
On Linux and Mac there’s also https://vorta.borgbase.com/, which is pretty good.
This is a really fantastic explanation of the issue!
It’s more like improv comedy with an extremely adaptable comic than a conversation with a real person.
One of the things I’ve noticed is that the training/finetuning done to make the model give good completions in the “helpful AI conversation scenario” flattens a lot of the underlying language model’s capacity for really interesting and specific completions. I remember playing around with GPT-2 in its native text-completion mode, and even though that model was much weaker, it could complete a much larger variety of text styles without sliding into the sameness and slickness of the current chat-model fine-tuning.
A lot of the research I read on LLMs uses them in the original token-completion context, but pretty much the only way people interact with them is through a thick layer of AI-chatbot improv. As an example for code: I imagine you’d have more success using an LLM to edit your code if the context you gave it read like a review of a pull request for that code, or some other commentary in a form that matches how code gets reviewed in the training data. But instead of being able to create that context directly, we have to ask for code review through the fogged window of a chat between an AI assistant and a person discussing code, and that form of chat likely isn’t well represented in the training data.