  • Bash and a dedicated user should work with very little effort. Basically, create a user on your VM (maybe called git), set up passwordless (and keyless) ssh for this user but force the command to be the git-shell. Next, write a simple bash script which iterates over the directories in this user’s home directory and runs git fetch --all in each. Set cron to run this script periodically (every hour?). To add a new repository, just ssh as your regular user and su to the git user, then clone the new repository into the home directory. To change the upstream, do the same but simply update the remote.
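
    A minimal sketch of that update script, assuming the mirrors live directly under /home/git and the user is called git (both paths are just assumptions):

    ```bash
    #!/usr/bin/env bash
    # Fetch every remote of every repository in the git user's home
    # directory. Intended to be run from cron.
    set -euo pipefail

    for repo in /home/git/*/; do
        # Works for both bare mirrors and normal clones
        if [ -d "$repo/.git" ] || [ -f "$repo/HEAD" ]; then
            git -C "$repo" fetch --all --prune
        fi
    done
    ```

    A crontab entry like 0 * * * * /home/git/update-mirrors.sh would run it hourly.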

    This could probably be packaged as a Dockerfile pretty easily, if you don’t mind either needing to specify a non-standard port or losing the machine’s port 22 to the container.
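
    For example, something along these lines (the image name, volume path, and host port are assumptions, and the Dockerfile itself would still need to install sshd and create the git user):

    ```bash
    # Build the image and publish sshd on a non-standard host port so the
    # machine's own port 22 stays untouched
    docker build -t git-mirror .
    docker run -d --name git-mirror \
        -p 2222:22 \
        -v /srv/git:/home/git \
        git-mirror
    ```

    Clones would then use URLs like ssh://git@yourhost:2222/home/git/reponame.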

    EDIT: I found this after posting; it might be the easiest way to serve the repositories, in combination with the update script. There’s a bunch more info in the Git Book too, and the next section covers setting up HTTP…




  • For no. 1, that shouldn’t be dind; the container would be controlling the host’s Docker, wouldn’t it?

    If so, keep in mind that this is the same as giving root SSH access to the host machine.

    As far as security goes, anything that allows GitHub to cause your server to download (pull) and use a set of arbitrary Docker images with arbitrary configuration is remote code execution. It doesn’t really matter what you do to secure access to the machine if someone compromises your GitHub account.

    I would probably set up SSH with a key dedicated to GitHub, specifically for deploying. If SSH is configured to only allow keys for access, it’s not much of a security risk to open it up to the internet. I would then configure that key to only be able to run a single command: a very simple bash script which runs git fetch, then git verify-commit origin/main (or whatever branch you deploy), before checking out the latest commit on that branch.
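
    A rough sketch of what that could look like, assuming a dedicated deploy user and an app checkout at /srv/app (the names and paths are assumptions). The forced command goes in that user’s authorized_keys:

    ```bash
    # ~/.ssh/authorized_keys entry for the deploy user:
    # command="/usr/local/bin/deploy.sh",restrict ssh-ed25519 AAAA... github-deploy

    #!/usr/bin/env bash
    # /usr/local/bin/deploy.sh - the only thing the GitHub key can run
    set -euo pipefail
    cd /srv/app

    git fetch origin
    # Refuse to deploy unless the tip of main carries a valid signature
    git verify-commit origin/main
    git checkout --detach origin/main
    # ...then restart/redeploy, e.g. docker compose up -d
    ```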

    You can sign commits fairly easily using SSH keys now, which combined with the above lets you store your data on GitHub without having to trust them with what amounts to RCE on your host.
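
    Enabling SSH-based signing is just a few git config settings (the key path and email here are placeholders):

    ```bash
    # On the machine where you commit: sign with an SSH key instead of GPG
    git config --global gpg.format ssh
    git config --global user.signingkey ~/.ssh/id_ed25519.pub
    git config --global commit.gpgsign true

    # On the server, verify-commit needs an allowed-signers list mapping
    # identities to public keys
    git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
    echo "you@example.com ssh-ed25519 AAAA..." >> ~/.ssh/allowed_signers
    ```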


  • My recommendation would be to utilize LVM. Set up a PV on the new drive, put it in a new VG, and create an LV filling the drive (with an FS), then move all the data off of one old drive onto this new drive. Reformat that first old drive as a second PV in the volume group and expand the size of the LV. Repeat the process for the second old drive, except that instead of extending the LV, set the parity option on the LV to 1. You can add further disks, increasing the LV size or adding parity or mirroring in the future, as needed. This also gives you the advantage that you can (once you have some free space) create another LV that has different mirroring or parity requirements.
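
    A rough outline of the first migration step, assuming the new drive is /dev/sdc, the old drives are /dev/sda and /dev/sdb, and an ext4 filesystem (all device, VG and LV names here are assumptions):

    ```bash
    # New drive becomes the first PV, in a new VG, with an LV filling it
    pvcreate /dev/sdc
    vgcreate data /dev/sdc
    lvcreate -l 100%FREE -n storage data
    mkfs.ext4 /dev/data/storage

    # ...copy everything off the first old drive onto the new LV, then
    # wipe that drive and fold it into the volume group
    pvcreate /dev/sda
    vgextend data /dev/sda
    lvextend -l +100%FREE data/storage
    resize2fs /dev/data/storage

    # Later, once the second old drive is in the VG, convert to a redundant
    # layout instead of extending, e.g. a mirror (the exact parity/raid5
    # conversion steps depend on your LVM version):
    # lvconvert --type raid1 -m 1 data/storage
    ```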








  • If you’re seeing an OOM killer message, note that it doesn’t necessarily kill the problem process. By default the kernel hands out memory upon request, regardless of whether it has RAM to back the allocation. When a process then writes to that memory (at some later time) and the kernel determines that there is no physical RAM to store the write, it invokes the OOM killer, which selects a process and kills it. MySQL (and MariaDB) use large quantities of RAM for cache, and by default the kernel lies about how much is available, so they often end up using more than the system can handle.

    If you have many databases in containers, set memory limits for those containers; that should make all the databases play nicer together. Additionally, you may want to disable overcommit in the kernel: this makes the kernel return out-of-memory to a process attempting to allocate RAM and stop lying about free RAM to processes that ask, often greatly increasing stability.
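
    For example (the container name, image, and limit values are illustrative):

    ```bash
    # Cap a database container's memory; docker compose has an equivalent
    # mem_limit / deploy.resources.limits.memory setting
    docker run -d --name mariadb --memory=1g --memory-swap=1g mariadb:latest

    # Disable overcommit so allocations fail up front instead of invoking
    # the OOM killer later (add to /etc/sysctl.d/ to persist across reboots)
    sysctl -w vm.overcommit_memory=2
    sysctl -w vm.overcommit_ratio=80
    ```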