So the trick is to use the #fragment part of the URL, which is not sent to the server.
Of course, the JS you download from the server could easily upload the fragment back to it, so you still need to trust the JS.
Maybe consider static IP assignment in your DHCP server (e.g. your internet router) if at all possible… Then you can add a name for it to /etc/hosts.
Alternatively, you could use Avahi to provide mDNS names to your local network.
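For example (the address and names here are made up), a static DHCP lease plus a line like this in /etc/hosts on each client would do it:

192.168.1.50    nas.lan nas

With avahi-daemon running on the machine itself, it would instead be reachable as hostname.local from the rest of the network, with no per-client configuration.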
Do you have that file? If not, then unset SSH_AUTH_SOCK will work just as well.
If it does exist, then I suppose it has good chances of working correctly :). ssh-add -l will try to use that socket and list the keys in the agent (or list nothing if there are no keys, but it would still work without error).
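Put together, a quick check might look like this (using the SSH_AUTH_SOCK of your current shell):

if [ -S "$SSH_AUTH_SOCK" ]; then
    ssh-add -l            # asks the agent; an empty key list still means it responds
else
    unset SSH_AUTH_SOCK   # no socket, so stop ssh from trying to use it
fi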
At the end of the log you find:
822413 connect(4, {sa_family=AF_UNIX, sun_path="/run/user/1000/gcr/ssh"}, 110) = 0
...
822413 read(4,
meaning it’s trying to interact with the ssh-agent, but the agent never gives a response.
Use the lsof command to figure out which program is providing the agent service and try to resolve the issue that way. If it’s not the OpenSSH ssh-agent, then maybe you can disable its ssh-agent functionality and use the real ssh-agent in its place…
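Using the socket path from the strace log above, that would be:

lsof /run/user/1000/gcr/ssh

which lists the process(es) holding that socket open.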
My wild guess is that the program might be trying to interactively confirm the use of the key with you, but is not succeeding in that for some reason.
As mentioned, -v (or -vv) helps to analyze the situation.
My theory is that you already have something providing the ssh agent service, but that process is somehow stuck, and when ssh tries to connect to it, it either doesn’t respond to the connect, or it accepts the connection but doesn’t actually interact with ssh. Quite possibly ssh doesn’t have a timeout for interacting with the ssh-agent.
Using eval $(ssh-agent -s) starts a new ssh-agent and replaces the environment variables in question with the new ones, therefore avoiding the use of the stuck process.
If this is the actual problem here, then before running the eval, echo $SSH_AUTH_SOCK would show the path of the existing ssh agent socket. If that is the case, then you can use lsof $SSH_AUTH_SOCK to see what that process is. Quite possibly it’s provided by gnome-keyring-daemon if you’re running Gnome. As to why that process would not be working, I don’t have ideas.
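The inspect-then-replace sequence described above, as a sketch:

echo $SSH_AUTH_SOCK     # path of the current (possibly stuck) agent socket
lsof $SSH_AUTH_SOCK     # which process is behind it?
eval $(ssh-agent -s)    # start a fresh agent for this shell only
ssh-add                 # load your default key into the new agent
ssh ...                 # should now talk to the working agent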
Another way to analyze the problem is strace -o logfile -f ssh .. and then check out what is at the end of the logfile. If the theory applies, then it would likely be a connect call for the ssh-agent.
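For example (user@host standing in for your real destination):

strace -o logfile -f ssh user@host
grep AF_UNIX logfile    # the unix-socket connects, the agent’s among them
tail logfile            # a hang leaves the blocking call as the last line

In the hung case the last line would look like the unfinished read(4, shown earlier.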
Well, that’s exactly the worry. Why shouldn’t it be? It is their business and livelihood.
As if taking down the systems were the biggest cybersecurity threat a company might face.
If you want to have multi-host redundant storage at home (via e.g. MinIO or Ceph), S3 is a pretty good protocol to provide it.
S3 is nice in that it’s not a file system, so it can have relaxed semantics, while also providing secure access to individual files over HTTPS via URL signing.
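For example, presigning a download URL with the AWS CLI against a local MinIO (the endpoint, bucket, and object here are made up):

aws --endpoint-url http://localhost:9000 \
    s3 presign s3://media/photos/2024.tar --expires-in 3600

Anyone holding the resulting URL can fetch that one object until it expires, without having any S3 credentials.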
Some people seem to be stuck on the idea that S3 means cloud hosting. Not sure if that was your view, but it’s worth spelling out sometimes.
Moving away from Discord can mean you have to stop interacting with the communities that use it. My personal examples are Tilt5, Makera, and Turbo Sliders. In these cases Discord is also the way to access support for something you’ve paid for.
Getting those communities to move to something open (e.g. Matrix) can be a tall order.
I just noticed https://lemmy.ml/u/[email protected] had already proposed this, but here’s the same idea with more words ;).
I would propose you try to split the data you have manually into logically separate parts, so that you could fit, say, 0.8 TB on one drive, 0.4 TB on another, and maybe 0.2 TB + 0.2 TB sets on a third one. Then you’d have a script that uses a modern backup tool to back up the particular data set for the disk you have attached to the system. This approach painlessly gives you modern “infinite increments” backups, where older versions of the data are persisted without managing full and incremental backups separately. You should then write a script to ensure no important data is left out of the backups and that there are no overlapping backups (except for data you want to back up twice?).
For example, you could have a physical drive with a sticker “photos and music” on it to back up your ~/Photos and ~/Music.
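A sketch of the script idea (the labels, the repository path, and the choice of restic are just assumptions here):

#!/bin/sh
# Back up the data set matching the sticker on the attached drive.
case "$1" in
    photos-and-music)
        restic -r /mnt/backupdrive/repo backup ~/Photos ~/Music ;;
    documents)
        restic -r /mnt/backupdrive/repo backup ~/Documents ;;
    *)
        echo "unknown backup set: $1" >&2; exit 1 ;;
esac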
At some point some of those splits might become too large to fit their allocated drives, which would mean additional manual maintenance. Apply foresight to avoid these situations :).
If that kind of separation is not possible, then I guess tar with multi-volume splitting is one option, as suggested elsewhere.
I rather enjoy Tilix. It can tile a single tab without tmux, and it can give special handling to links matched by regexps. I use that to jump from a Python stack trace to the correct line in Emacs with a single click. It can also do a Quake-style drop-down terminal, which I use a lot.
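The stack trace case is roughly this kind of regex/command pair in the Tilix link settings (the exact group-substitution syntax is an assumption worth checking against the Tilix documentation):

File "([^"]+)", line ([0-9]+)
emacsclient +$2 $1

emacsclient +LINE FILE is standard emacsclient usage, so the Tilix-specific part is only how the match groups are referenced.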
The project is looking for maintainers, though, so it’s possible that at some point I’ll need to start looking for alternatives…
In theory, yes. But if following the link leads to downloading the JS and running it, it’s already too late to inspect it.
And even if you review it once (and it isn’t too large or obfuscated by minification), the JS can be different the next time you load the page. I guess there could be a web browser extension for pinning the code?
The only practical alternative I know of is to have a local client you can review once (and again after updates).