Great news! I started my self-hosting journey over a year ago, and I’m finding myself needing better hardware. There are so many services I want to run that my NAS can’t handle, and unfortunately I need to add GPU transcoding to my Jellyfin setup.

What’s the best OS for a machine focused on containers and (getting started with) VMs? I’ve heard Proxmox recommended.

What CPU specs should I be concerned about?

I’m willing to buy a pre-built as long as its hardware has sufficient longevity.

  • Cousin Mose@lemmy.hogru.ch · 6 hours ago

    Honestly I just run Alpine Linux on a mini PC (router) or Raspberry Pi (NAS). I don’t like to screw around with outdated, bloated Debian-based distros.

  • borari@lemmy.dbzer0.com · 13 hours ago

    Depending on how many bays your Synology has, you might be best off getting a NUC or a mini PC for compute and using your Synology just for storage.

      • borari@lemmy.dbzer0.com · 8 hours ago

        I have a 6-bay, so yeah that might be a little limiting. I have all my personal stuff backed up to an encrypted cloud mount, the bulk of my storage space is pirated media I could download again, and I have the Synology using SHR, so I just plug in a bigger drive, expand the array, then plug in another bigger drive and repeat. Because of the redundancy overhead you might not benefit as much from that method with just 4 bays. Or if you have enough stuff that you can’t feasibly push up to the cloud for peace of mind during a rebuild, I guess.

    • curbstickle@lemmy.dbzer0.com · 12 hours ago

      This is precisely what I do with my NAS.

      I have 9…ish tiny/mini/micros for compute, two NAS (locally).

      Solid approach

      • Zikeji@programming.dev · 12 hours ago

        9? That’s quite a bit of compute lol.

        My journey started with 1 server, then 4, then 5 (one functioning as a NAS), then 1 (just the NAS box), then I moved and decided to slim it down to a proper NAS and 1 mini PC/NUC clone. Now I’m up to two because the first was an Intel N105 which just isn’t up for the challenges lol

        • curbstickle@lemmy.dbzer0.com · 11 hours ago

          3 are for the family, 3 are for work stuff, 3 are for me as toys.

          (Plus a Mac mini and a p330 as spare desktops for me, thus the -ish)

    • Ebby@lemmy.ssba.com · 10 hours ago

      That’s the route I took too. NAS for storage and simple Docker containers, mini PC for compute/GPU.

  • Scrubbles@poptalk.scrubbles.tech · 12 hours ago

    I think at this point I agree with the other commenter. If you’re strapped for storage it’s time to leave Synology behind, but it sounds more like it’s time to separate your app server from your storage server.

    I use Proxmox, and it was my primary when I got started with the same thing. I recommend building out storage in Proxmox directly; that will be for VM images and container volumes. Then set up regular backups to your Synology box. That way you have hot storage for your VMs and running things, and cold storage for backups.
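    For reference, this is roughly what the backup side can look like from the Proxmox shell. The storage name, IP, and export path below are just placeholders for whatever your Synology exposes:

      # Add the Synology as an NFS-backed backup target (values are examples)
      pvesm add nfs synology-backup --server 192.168.1.10 \
        --export /volume1/proxmox-backups --content backup

      # Back up VM 100 to it; recurring jobs can be scheduled under Datacenter -> Backup in the UI
      vzdump 100 --storage synology-backup --mode snapshot --compress zstd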

    Then, inside your VMs and containers, you can mount things like media and other items from your Synology.

    For you, I would recommend Proxmox, and on top of that a big VM for running Docker containers. In that VM you mount your shares from the Synology, like the Jellyfin media, and pass those mounts into Docker.
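    As a rough sketch of that, assuming the Synology exports the media share over NFS (the IP and paths here are made up):

      # Inside the big Docker VM: mount the Synology share at boot
      sudo apt install -y nfs-common
      sudo mkdir -p /mnt/media
      echo '192.168.1.10:/volume1/media  /mnt/media  nfs  defaults,_netdev  0 0' | sudo tee -a /etc/fstab
      sudo mount -a

      # Then pass the mount straight into the container
      docker run -d --name jellyfin \
        -v /mnt/media:/media:ro \
        -p 8096:8096 \
        jellyfin/jellyfin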

    If you ever find yourself needing to stretch beyond the one box, then you can think about Kubernetes or something, but I think this would be a good jump for now.

    • Leax@lemmy.dbzer0.com · 10 hours ago

      Why not use Proxmox to host the containers directly instead of using a VM? I know it’s easier to use this way, but it kinda misses the point of using Proxmox then.

      • Scrubbles@poptalk.scrubbles.tech · 10 hours ago

        Not at all. Proxmox does a great job at hosting VMs and giving you a control plane for them - but it does not do containers well. LXCs are a thing, and it hosts those - but never try to do Docker in an LXC. (I tried so many different ways and guides, and there were just too many caveats, and you always end up essentially giving root access to your containers, so it’s not great anyway.) I’d like to see Proxmox offer some sort of Docker-first approach where it manages volumes at the Proxmox level, but they don’t seem concerned with that, and honestly if you’re doing that then you’re nearing Kubernetes anyway.

        Which is what I ended up doing: k3s on Proxmox VMs. Proxmox handles the instances themselves: I spin up a VM on each host and run k3s inside it. Same paradigm as the major cloud providers: GKE, AKS, and EKS all run k8s within VMs on their existing compute stack, so this fits right in.
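        If anyone wants to try the same thing, the k3s install script makes it pretty painless on plain Debian/Ubuntu VMs (the IP and token below are placeholders):

          # On the first VM (control plane)
          curl -sfL https://get.k3s.io | sh -
          sudo cat /var/lib/rancher/k3s/server/node-token   # join token for the agents

          # On each additional VM, join it as an agent
          curl -sfL https://get.k3s.io | K3S_URL=https://<first-vm-ip>:6443 K3S_TOKEN=<node-token> sh -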

    • LazerDickMcCheese@sh.itjust.works (OP) · 10 hours ago

      Thanks, that’s some of the info I need to make the jump. How’s the learning curve? One of my big concerns is wrapping all of these things under Tailscale. It was easy on Synology, but Proxmox (I imagine) isn’t as straightforward. Eventually I’d like to switch to headscale, but one thing at a time.

      • Scrubbles@poptalk.scrubbles.tech · 10 hours ago

        Just focus on one project at a time and break it out into small victories that you can celebrate. A project like this is going to take more than a single weekend. Just get Proxmox up and running. Then a simple VM. Then a backup job. Don’t try to get everything, including Tailscale, working all at once. The learning curve is a bit more than you’re probably used to, but if you take it slow and focus on those small steps you’ll be fine.
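        When you do get to the Tailscale step, it’s usually just the standard install inside the Docker VM rather than anything Proxmox-specific. The subnet below is an example, and subnet routing also needs IP forwarding enabled plus route approval in the admin console:

          # Inside the Docker VM, not on the Proxmox host
          curl -fsSL https://tailscale.com/install.sh | sh
          sudo tailscale up

          # Optional: advertise the home LAN so other tailnet devices can reach the NAS too
          sudo tailscale up --advertise-routes=192.168.1.0/24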

  • ragebutt@lemmy.dbzer0.com · 9 hours ago

    Define goals. What services can’t be handled?

    If transcoding is a goal, build around Intel. Quick Sync Video is a no-brainer, imo. A discrete GPU is unnecessary power draw (15-25 W+ idle depending on the card) and a waste of a PCIe slot unless you want to do LLM stuff. Imo 10th-gen Intel is the sweet spot for Quick Sync unless you desperately need AV1/VP9. If so, then you need the much more expensive 13th/14th gen, which uses more power and has more thermal-management considerations.
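    For what it’s worth, using Quick Sync from a container is mostly just passing the iGPU device through; something like this (image name and paths are examples), then enabling QSV/VAAPI in Jellyfin’s playback settings:

      # Expose the Intel iGPU (/dev/dri) to the Jellyfin container for Quick Sync
      docker run -d --name jellyfin \
        --device /dev/dri:/dev/dri \
        -v /mnt/media:/media:ro \
        -p 8096:8096 \
        jellyfin/jellyfin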

    OS is an endless debate. Proxmox is fine and free, why not try it? Unraid is easier to get your bearings with, but it does cost money. Debian is also free but a bit more confusing because it’s not purpose-built; TrueNAS as well. All can do containers and VMs, but they approach them in different ways. None is “best”, but some are more “free”, which is nice.

    CPU specs depend on your goals. For transcoding, as said above, Quick Sync is necessary and is seriously impressive. I can transcode a 4K remux to one device while transcoding a 1080p remux to another and direct playing a 4K remux, and the CPU sits under 25% load on the Xeon equivalent of a 10700. You don’t need a Xeon btw, I just got a great deal where this was $50 (see next point). Otherwise specs depend wildly on what you plan to do. I can run Windows VMs pretty well with this, though, for the handful of times I need a Windows machine.

    Prebuilt is a waste. Used hardware is cheap, gives you more options, and lets you plan ahead. What are you willing to buy now, and what do you eventually want? My NAS started as a 36 TB array with 16 GB RAM and no cache; years later it’s 234 TB with a 4 TB cache and 32 GB of ECC RAM. Slowly building up was easier on the wallet, and used hardware, refurb drives, etc. is 100% the way to build. Your goals will likely vary, but figure out your roadmap and go from there.

    Also keep in mind that not every service benefits from running on a NAS. My Home Assistant server runs on a Raspberry Pi, for example. It’s easier to keep it segregated, and I don’t have to worry about getting Z-Wave/Zigbee/MQTT/etc. all working in Docker, plus any server downtime impacting the home. Tbf literally everything else runs on the NAS though haha

  • some_guy@lemmy.sdf.org · 9 hours ago

    My Synology is compatible with an expansion unit and can support two of them. Check if yours can do the same for the storage aspect.