I’m in the process of setting up backups for my home server, and I feel like I’m swimming upstream. It makes me think I’m just taking the wrong approach.

I’m on a shoestring budget at the moment, so I won’t really be able to implement a 3-2-1 strategy just yet. I figure the most bang for my buck right now is to set up off-site backups to a cloud provider. I first decided to do a full-system backup in the hopes I could just restore it and immediately be up and running again. I’ve seen a lot of comments saying this is the wrong approach, although I haven’t seen anyone outline exactly why.

I then decided to cherry-pick my backup locations instead. Then I started reading about backing up databases, and it seems you can't just back up the data directory (or the file, in the case of SQLite) and call it good. You need to dump the databases first and back up the dumps.
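
From what I've read, the dump step looks something like this (a sketch with placeholder container names, paths, and credentials; I haven't settled on exact tools yet):

  # SQLite: use the online backup command instead of copying the live file.
  sqlite3 /srv/app/data.db ".backup '/backups/app-$(date +%F).db'"

  # PostgreSQL in a container: pg_dump produces a consistent logical dump
  # even while the server is running.
  docker exec my-postgres pg_dump -U myuser mydb > "/backups/mydb-$(date +%F).sql"

  # MySQL/MariaDB equivalent ($MYPASS is a placeholder for the password):
  docker exec my-mariadb mysqldump -u myuser -p"$MYPASS" mydb > "/backups/mydb-$(date +%F).sql"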

So now I’m configuring a docker-db-backup container to handle each of them: hunting down every database container and SQLite database and configuring a backup job for each one. Then I plan to drop all of those dumps into a single location and back that up to the cloud. This means that, if I need to rebuild, I’ll have to restore the containers’ volumes, restore the backups, bring up new containers, and then restore each dump into its new database. It’s pretty far from my initial hope of restoring all the files and immediately using the newly restored system.
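
Spelled out, the rebuild would go roughly like this (again a sketch; every name here is made up, and compose may prefix volume names differently on your setup):

  # 1. Pull the archives and dumps back down from the cloud.
  rclone copy remote:backups /restore

  # 2. Recreate and restore the non-database volumes.
  docker volume create app_data
  tar xzf /restore/app_data.tar.gz -C /var/lib/docker/volumes/app_data/_data

  # 3. Bring up fresh containers (the databases start out empty).
  docker compose up -d

  # 4. Load each dump into its new database.
  docker exec -i my-postgres psql -U myuser mydb < /restore/mydb.sql
  sqlite3 /srv/app/data.db ".restore '/restore/app.db'"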

Am I going down the wrong path here, or is this just the best way to do it?

  • RadDevon@lemmy.zipOP · 23 hours ago

    OK, cool. That’s helpful. Thank you!

    I know that in general you can just grab a Docker volume and point a new container at it later, but I was under the impression that backing up a database this way could leave you with a database in a bad state after restoring. Fingers crossed that was just bad info. 😅

    • supersheep@lemmy.world · 22 hours ago

      In theory the database can end up in an invalid state if you back it up while the database container is running. What I do for most containers is temporarily stop them, back up the Docker volume, and then restart the container.
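
      As a sketch, assuming a compose service called app with a named volume app_data (yours will differ):

        # Stop the service so nothing writes to the volume mid-archive.
        docker compose stop app

        # Archive the volume's data directory (default local-driver path).
        tar czf /backups/app_data.tar.gz -C /var/lib/docker/volumes/app_data/_data .

        # Bring the service back up.
        docker compose start app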

      • Scrubbles@poptalk.scrubbles.tech · 21 hours ago

        Seconded, and great callout @[email protected]! Yes, part of my script was to stop the container gracefully, tar it, start it again, and then copy the tar somewhere. It “should” be fine. In a production environment where you need zero downtime I would take a different approach, but we’re selfhosters. Just schedule it for 2am or something.

        Oh, and feel free to test! Docker makes it super easy. Just extract the tar somewhere else on the drive, point your container at the new volume, and see if it spins up. Then you’ll know your backup strategy is working!
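
        Something like this, with made-up names (I’m assuming a Postgres volume here, but the same idea works for anything):

          # Unpack the archive into a scratch directory.
          mkdir -p /tmp/restore-test
          tar xzf /backups/app_data.tar.gz -C /tmp/restore-test

          # Start a throwaway container against the restored data.
          docker run --rm -d --name restore-test \
            -v /tmp/restore-test:/var/lib/postgresql/data \
            postgres:16

          # Watch the logs to see whether it comes up cleanly.
          docker logs -f restore-test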

        • RadDevon@lemmy.zipOP · 20 hours ago

          Is your script something you can share? I’d love to see your approach. I can definitely live with a few minutes of downtime in the early morning.

          • Scrubbles@poptalk.scrubbles.tech · 16 hours ago

            That particular one is long gone, I’m afraid, but it was essentially just docker compose down, tar like I did above, docker compose up -d, and then rclone to upload the result.
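
            From memory it was something like this (treat it as a sketch; the service, volume, and remote names are placeholders):

              #!/usr/bin/env bash
              set -euo pipefail

              STAMP=$(date +%F)

              # Stop the stack gracefully so the data on disk is consistent.
              docker compose down

              # Archive the volume.
              tar czf "/backups/app_data-$STAMP.tar.gz" \
                -C /var/lib/docker/volumes/app_data/_data .

              # Bring everything back up.
              docker compose up -d

              # Ship the archive off-site.
              rclone copy "/backups/app_data-$STAMP.tar.gz" remote:backups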