Is it a bad idea to use my desktop to self host?

What are the disadvantages? Can they be overcome?

I use it primarily for programming, sometimes gaming and browsing.

  • atzanteol@sh.itjust.works · 21 points · 1 year ago

    It’s a terrible idea - do it anyway. Experimentation is how we learn.

    If you have a reasonably modern multi-core system you probably won’t even notice a performance hit. The biggest drawback is that all your eggs are in one basket: if an upgrade goes wrong, or you’re taking things down for maintenance, then everything is affected. And there can be conflicts between the versions of libraries, OS, etc. that each service needs.

    Separating services, even logically, is a good idea. So I’d recommend you use containers or VMs to make it easier to just “whelp, that didn’t work” and throw everything away or start from scratch. It also makes library dependencies much easier to deal with.
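
    To make the “throw everything away and start from scratch” workflow concrete, here’s a rough sketch with Docker, using Jellyfin’s official image as the example service (the host paths are made up; adjust to taste):

    ```shell
    # Run a service in a container so it can be discarded and recreated
    # without touching the rest of the system. Host paths are examples.
    docker run -d --name jellyfin \
      -p 8096:8096 \
      -v /srv/jellyfin/config:/config \
      -v /srv/media:/media:ro \
      --restart unless-stopped \
      jellyfin/jellyfin

    # "Whelp, that didn't work" - throw it away:
    docker rm -f jellyfin
    # Config and media live on the host paths above, so nothing is lost.
    ```

    Because the container’s state lives in bind mounts on the host, destroying and recreating the container costs nothing, and its library dependencies never touch the host.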

    • Cyclohexane@lemmy.ml (OP) · 4 points · 1 year ago

      So I already host a lot of stuff on a Raspberry Pi 4B. But when I tried to host Jellyfin, transcoding was a struggle on it, so as a quick solution I used my desktop to host Jellyfin, with sshfs to the Raspberry Pi for accessing the media files. So now I wonder: is it worth moving Jellyfin to something else? Is it worth moving the media files to the desktop?
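
      For anyone replicating that quick solution, an sshfs mount looks roughly like this (hostname and paths are placeholders, not the OP’s actual setup):

      ```shell
      # On the machine running Jellyfin: mount the Pi's media directory
      # over SSH. Hostname and paths are placeholders.
      mkdir -p /mnt/media
      sshfs pi@raspberrypi.local:/srv/media /mnt/media \
        -o reconnect,ServerAliveInterval=15

      # Unmount when done:
      fusermount -u /mnt/media
      ```

      The `reconnect` and `ServerAliveInterval` options help the mount survive brief network drops.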

      • atzanteol@sh.itjust.works · 2 points · 1 year ago

        Is it performing well as is? sshfs isn’t very high performance, but if it’s working, it’s fine - NFS would likely perform better, though. I run Jellyfin in a VM with an NFS mount to my file server and it works fine. The interface is zippy and scanning doesn’t take too long. I don’t get GPU acceleration, but the CPU on that system (10th-gen i7, I think) is fast enough that I haven’t had much trouble with transcoding (yet).

        • Cyclohexane@lemmy.ml (OP) · 2 points · 1 year ago

          It’s actually not bad, surprisingly. I have had issues sometimes, but they’re network issues related to my router. I haven’t had them in a while.

          • atzanteol@sh.itjust.works · 1 point · 1 year ago

            If it’s working - that’s fine. Creating dependencies can make things more complex (you now need two systems running for one service to work) - but also isolating ‘concerns’ can be beneficial. Having a single “file server” lets me re-build other servers without worrying about losing important data for example. It separates system libraries and configuration from application data. And managing a file-server is pretty simple as the requirements are very basic (Ubuntu install with nfs-utils - and nothing else). It also lets me centralize backups as everything on the file server is backed-up automatically.
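
            The server side of a setup like that really is a couple of lines. A sketch, assuming Ubuntu (where the server package is nfs-kernel-server; subnet and paths are example values):

            ```shell
            # On the file server: install the NFS server and export a directory.
            sudo apt install nfs-kernel-server

            # Add the export (example subnet/path), then reload the export table:
            echo '/srv/media 192.168.1.0/24(ro)' | sudo tee -a /etc/exports
            sudo exportfs -ra

            # On a client (e.g. the Jellyfin VM):
            sudo mount -t nfs fileserver.local:/srv/media /mnt/media
            ```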

            Things can be as simple or as complex as you want. I will reiterate that keeping a “one server per service” mindset will pay off in the long run. If you only have your desktop and a Pi, then Docker can help with keeping your services well isolated from each other (as well as from your desktop).

  • SGG@lemmy.world · 11 points · 1 year ago

    If you are going to use your desktop, I would suggest putting all of the self-hosted services into a VM.

    This means if you decide you do want to move it over to dedicated hardware later on, you just migrate the VM to the new host.
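
    With libvirt/KVM, for instance, that migration is mostly copying two files; a sketch with made-up VM and host names (other hypervisors have their own export/import equivalents):

    ```shell
    # On the old host: export the VM definition and copy the disk image.
    virsh dumpxml myvm > myvm.xml
    scp myvm.xml newhost:/tmp/
    scp /var/lib/libvirt/images/myvm.qcow2 newhost:/var/lib/libvirt/images/

    # On the new host: register the VM from its definition and start it.
    virsh define /tmp/myvm.xml
    virsh start myvm
    ```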

    This is how I started out before I had a dedicated server box (a refurbished office PC repurposed as a hypervisor).

    Then host whatever/however you want to on the VM.

  • CriticalMiss@lemmy.world · +9/−2 · 1 year ago

    By hosting services on your desktop, you are increasing your attack surface. Every additional piece of software you run is another opportunity to catch malware. It also requires powering a beefy machine 24/7 to keep the services up, when in reality anything that isn’t a media server can run on a 3rd-gen Intel CPU with a relatively low TDP.

  • jubilationtcornpone@sh.itjust.works · 6 points · 1 year ago

    I would not recommend using your primary desktop for self hosting. If you absolutely have to, install VirtualBox or some other hypervisor solution and run your servers in separate VMs.

    Use a dedicated host. It can be a desktop, server, Raspberry Pi, etc., depending on your needs. Sooner or later you’ll find that hosting on a workstation you use for other things is horribly inconvenient. Depending on what you’re self hosting, it can consume lots of resources. And if you become dependent on the services you’re hosting - which is the point of self hosting to begin with - even really small things like rebooting your workstation become really inconvenient.

    I’ve got an old Dell PowerEdge ticking away in my basement that runs all my VMs. I can reboot my desktop without interrupting any of my self-hosted services. It also makes it easier to back up my VMs, and I can easily spin up a new one if needed. You have to be careful if you use server hardware, though. The T430 that I have is pretty efficient, but some servers can be thirsty little space heaters.

  • iwasgodonce@lemmy.world · 5 points · 1 year ago

    I do and it’s fine.

    I used to have a separate machine for server stuff but it just cost more in electricity since I would leave them both on 24x7 anyway.

    I’ve got 64 GB of RAM and I often use up to 48 GB of it with various VMs. I wouldn’t get any power savings from a separate server anyway, since I have a cron job to transcode everything that Plex recorded off of TV during the day to AV1 for disk-space savings (usually turns 3 GB of MPEG-2 into 700 MB of AV1), so I would need a server with a moderately powerful CPU for that regardless.
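
    A job like that might look roughly like the following; this is an illustrative sketch (SVT-AV1 via ffmpeg, made-up paths and quality settings), not the commenter’s actual script:

    ```shell
    # Illustrative nightly transcode: re-encode DVR recordings from
    # MPEG-2 transport streams to AV1 with SVT-AV1. Paths are examples.
    for f in /srv/dvr/*.ts; do
      out="${f%.ts}.mkv"
      [ -e "$out" ] && continue       # skip files already transcoded
      ffmpeg -i "$f" -c:v libsvtav1 -preset 8 -crf 30 -c:a copy "$out" \
        && rm -- "$f"                 # drop the MPEG-2 original on success
    done
    ```

    Scheduled from cron (e.g. `0 3 * * *`), it churns through the day’s recordings overnight.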

    I have a Ryzen 3700X. I got it since it was the highest-performance CPU that was still 65 W TDP at the time; I didn’t want to spend a ton on electricity and extra air conditioning since I would be leaving it running 24x7.

    The only time I notice a performance impact during gaming is when my Windows 11 VM is running. I don’t really need that one running 24x7, so I shut it down if it happens to be running at the time.

  • Justin@lemmy.world · 3 points · 1 year ago

    What do you want to self host? To learn or experiment, buy a cheap old x86 box - I get mine at Goodwill auctions. Otherwise a desktop is good if you want something that needs more compute and that you’d spin up as needed vs. always-on.

  • fuckwit_mcbumcrumble@lemmy.world · 3 points · 1 year ago (edited)

    I use a “regular” desktop as my server. It uses much less power than most servers and still has plenty of horsepower for what I do.

    Remote management and (cheap) ECC RAM are the biggest reasons to get a server. But those usually aren’t issues for most workloads, especially at home.

    Shit I used to run my stuff off of a laptop with maxed out ram, and some people just have a raspberry pi and call it a day.

  • Tylerdurdon@lemmy.world · 2 points · 1 year ago

    I would learn about MTBF (mean time between failures) if I were you. Everything has a failure point (generally buried deep in the product info), and when you start keeping it on 24/7, the hours will burn. Those fans on your GPU will give out way before they normally would.

    Finding a used server is not a good idea either: you aren’t in a data center, and servers are super loud. Also, they chew up electricity like a hungry dog at his dinner.

    As others have said, find a used desktop somewhere and a cheap KVM switch so you can use the same peripherals for both. It doesn’t need to be beefy by any measure (except maybe drive space), just affordable.

  • thejevans@lemmy.ml · 2 points · 1 year ago (edited)

    My first homelab was a Synology NAS, plus my gaming PC with a DIY Linux hypervisor as the main OS: a Linux VM for hosting servers, and a Windows/Mac/Linux VM trio (each with GPU passthrough) that I would switch between for my workstation. I lost performance for sure, but it taught me a lot without the need to purchase more hardware.

    If you consider it temporary, it’s not a bad way to learn.

  • equidamoid@lemmy.world · 2 points · 1 year ago

    Whatever works for you. Just do it. It is convenient as f when you are just starting. You can always improve incrementally later on when (if) you encounter a problem.

    Too much noise/power cost to run a small thing? Get a Pi and run it there. Too much impact on your desktop performance? Okay, buy a dedicated monster. Want to deep-dive into isolating things (and VMs are too much of a hassle)? Get multiple devices.

    No need to spend money (and maybe sponsor more e-waste) and time until it’s justified for your use cases.

  • Alami@lemmy.world · 2 points · 1 year ago (edited)

    I did it the other way round. An old NUC from 2013 turned into a GUI-less Debian self-hosted server (YunoHost) for 2.5 years, until my old laptop (from 2008) died. Then I installed Xfce on top, plugged in a wireless keyboard/touchpad combo plus HDMI to my TV, and now use it as a desktop, mainly for web browsing, OpenOffice, and some GIMP processing. But just once a week or so.