  • The nice thing about Docker is that all you need to do is back up your compose file, .env file, and mapped volumes, and you can easily restore on any other system (rough sketch below). I don’t know much about CasaOS, but presumably you have the ability to stop your containers and access the filesystem to copy their config and mapped volumes elsewhere? If so, this should be pretty easy. You might have some networking stuff to work out, but I suspect the rest will go smoothly, and IMO it would be a good move.

    When self-hosting, the more you know about how things actually work, the easier it is to fix things when something is acting up, and the easier it is to make known-good backups and restore them.
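
    For example, a minimal backup sketch, assuming a compose project directory with the volumes mapped under ./volumes (paths are hypothetical; adjust to your layout):

    # stop the stack so the volume data is consistent
    docker compose down
    # archive the compose file, .env file, and mapped volume directories
    tar czf ~/backup-$(date +%F).tar.gz docker-compose.yml .env volumes/
    docker compose up -d

    Restoring on another machine is just extracting that archive into a fresh directory and running docker compose up -d again.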


  • Yes, it’s paid, but the quality is worlds above Bing, DDG, or Google. The best description I can give is that it’s what Google Search was like about 15 years ago, back when there were no AI results, no ads, no artificially promoted results, and you could vote on results and block domains from appearing in your searches. Back when Google Search was actually good.

    So it doesn’t do anything new or groundbreaking; it’s just what a search engine is supposed to be, at a time when every other option has abandoned that goal in the endless search for more revenue.






  • Sure, it’s a bit hacky, but not too bad. Honestly the dockcheck portion is already pretty complete; I’m not sure what you could add to improve it. The custom plugin I’m using does nothing more than dump the array of container names with available updates to a comma-separated list in a file. In addition to that, I have a wrapper for dockcheck which does two things:

    1. dockcheck plugins only run when there’s at least one container with available updates, so the wrapper handles the case where there are no available updates.
    2. Some containers aren’t handled by dockcheck because they use their own management system; two examples are bitwarden and mailcow. The wrapper script can be modified as needed to support those as well, but that has to be done case by case, since there’s no general-purpose way to check for updates on containers that insist on doing things their own way.

    Basically there are 5 steps to the setup:

    1. Enable Prometheus metrics from Docker (this is only needed for the running/stopped counts; if those aren’t needed, this step can be skipped). To do that, add the following to /etc/docker/daemon.json (create it if necessary) and restart Docker:
    {
      "metrics-addr": "127.0.0.1:9323"
    }
    

    Once Docker is running again, you should be able to run curl http://localhost:9323/metrics and see a dump of Prometheus metrics.
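
    For example (on a systemd distro; the counts here are made up, but the container-state metric names are the ones the Python script below looks for):

    sudo systemctl restart docker
    curl -s http://localhost:9323/metrics | grep engine_daemon_container_states_containers
    # engine_daemon_container_states_containers{state="paused"} 0
    # engine_daemon_container_states_containers{state="running"} 12
    # engine_daemon_container_states_containers{state="stopped"} 1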

    2. Clone dockcheck and create a custom plugin for it at dockcheck/notify.sh:
    send_notification() {
        # dockcheck passes the names of all containers with available updates as arguments
        Updates=("$@")
        # join the names into a comma-separated string, then strip the leading ", "
        UpdToString=$(printf ", %s" "${Updates[@]}")
        UpdToString=${UpdToString:2}

        File=updatelist_local.txt

        # quote the variables so the list is written out verbatim
        echo -n "$UpdToString" > "$File"
    }
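
    For reference, this is roughly what the plugin produces when dockcheck invokes it (the container names here are made up):

    $ source dockcheck/notify.sh
    $ send_notification nginx postgres
    $ cat updatelist_local.txt
    nginx, postgres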
    
    3. Create a wrapper for dockcheck:
    #!/bin/bash

    # run from the script's own directory so the relative paths below work
    cd "$(dirname "$0")" || exit 1

    # check for updates; the custom plugin writes updatelist_local.txt when any exist
    ./dockcheck/dockcheck.sh -mni

    # the plugin only runs when there are updates, so create the file ourselves otherwise
    if [[ -f updatelist_local.txt ]]; then
      mv updatelist_local.txt updatelist.txt
    else
      echo -n "None" > updatelist.txt
    fi
    

    At this point you should be able to run your script, and at the end you’ll have the file “updatelist.txt”, which will contain either a comma-separated list of all containers with available updates, or “None” if there are none. Add this script to cron to run on whatever cadence you want; I use 4 hours (example below).
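
    A crontab entry for that cadence might look like this (the path is hypothetical; point it at wherever you saved the wrapper):

    # check for container updates every 4 hours
    0 */4 * * * /opt/docker/dockcheck-wrapper.sh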

    4. The main Python script:
    #!/usr/bin/python3

    from flask import Flask, jsonify

    import os
    import time
    import requests

    app = Flask(__name__)

    # Listen addresses for docker metrics
    dockerurls = ['http://127.0.0.1:9323/metrics']

    # Other dockerstats servers to aggregate into this one's output
    staturls = []

    # File containing list of pending updates
    updatefile = '/path/to/updatelist.txt'

    @app.route('/metrics', methods=['GET'])
    def get_tasks():
        running = 0
        stopped = 0
        updates = ""

        # scrape the Docker engine's Prometheus endpoint for container state counts
        for url in dockerurls:
            response = requests.get(url, timeout=5)

            if response.status_code == 200:
                for line in response.text.split("\n"):
                    if 'engine_daemon_container_states_containers{state="running"}' in line:
                        running += int(line.split()[1])
                    if 'engine_daemon_container_states_containers{state="paused"}' in line:
                        stopped += int(line.split()[1])
                    if 'engine_daemon_container_states_containers{state="stopped"}' in line:
                        stopped += int(line.split()[1])

        # fold in the results from any other copies of this program
        for url in staturls:
            response = requests.get(url, timeout=5)

            if response.status_code == 200:
                apidata = response.json()
                running += int(apidata['results']['running'])
                stopped += int(apidata['results']['stopped'])
                if apidata['results']['updates'] != "None":
                    updates += ", " + apidata['results']['updates']

        # read the local update list, but only trust it if it's less than a day old
        if os.path.isfile(updatefile):
            age = time.time() - os.stat(updatefile).st_mtime
            if age < 86400:
                with open(updatefile, "r") as f:
                    temp = f.readline()
                if temp != "None":
                    updates += ", " + temp
            else:
                updates += ", Error"
        else:
            updates += ", Error"

        if not updates:
            updates = "None"
        else:
            # strip the leading ", "
            updates = updates[2:]

        status = {
            'running': running,
            'stopped': stopped,
            'updates': updates
        }
        return jsonify({'results': status})

    if __name__ == '__main__':
        app.run(host='0.0.0.0')
    

    The neat thing about this program is that it’s nestable: if you run steps 1-4 independently on all of your Docker servers (assuming you have more than one), you can pick one of the machines to be the “master” and update the “staturls” variable to point at the others, allowing it to collect all of the data from the other copies of itself into its own output.

    If the output of this program only needs to be accessed from localhost, you can change the host variable in app.run to 127.0.0.1 to lock it down. Once it’s running, you should be able to run curl http://localhost:5000/metrics and see the running and stopped container counts and available updates for the current machine and any other machines you’ve added to “staturls” (example below). You can then turn the program into a service, or launch it @reboot in cron or from /etc/rc.local, whatever fits your management style for starting it on boot.

    Note that it verifies the age of the updatelist.txt file before using it; if the file is more than a day old, it likely means something is wrong with the dockcheck wrapper script or similar, so rather than using stale data the REST API reports “Error” to let you know something is wrong.
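
    For example (the counts and container names here are made up):

    $ curl -s http://localhost:5000/metrics
    {"results": {"running": 12, "stopped": 1, "updates": "nginx, postgres"}}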

    5. Finally, the Homepage custom API widget to pull the data into the dashboard:
            widget:
              type: customapi
              url: http://localhost:5000/metrics
              refreshInterval: 2000
              display: list
              mappings:
                - field:
                    results: running
                  label: Running
                  format: number
                - field:
                    results: stopped
                  label: Stopped
                  format: number
                - field:
                    results: updates
                  label: Updates
    


  • Anything on a separate disk can simply be remounted after reinstalling the OS (rough sketch below). It doesn’t have to be a NAS, DAS, RAID enclosure, or anything else that’s external to the machine unless you want it to be. Actually, it looks like that Beelink only supports a single NVMe disk and doesn’t have SATA, so I guess it does have to be external to the machine, but for different reasons than you’re alluding to.
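
    For example, a minimal sketch of remounting a data disk after a reinstall (the UUID and mount point are hypothetical; find yours with blkid):

    # append to /etc/fstab, then run mount -a
    # nofail keeps the system booting even if the disk is missing
    UUID=2f1b3c44-...  /mnt/data  ext4  defaults,nofail  0  2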