Hey folks, I’m moving my main PC to Linux soon, and for that I have settled on Mint. However, I also plan to build a homelab PC for the first time to self-host some services, mainly Jellyfin, some game servers, and possibly Nextcloud, but I’m unsure which distro to go with for that.

I have some experience running Debian headless (on an Orange Pi), and I can use SSH and the CLI just fine. However, I also want the server PC to (maybe) serve as a Moonlight client in my living room, so I was leaning towards something that isn’t headless, and I’m unsure whether I should also go with Mint for that or if something else might be more suitable.

  • Nibodhika@lemmy.world · 3 hours ago

    Everyone who said Proxmox didn’t read your post to the end. Proxmox is great for people who want a machine to just self-host things and don’t care about how things work. You don’t seem like that sort of person, and you also mentioned Moonlight, which will be annoying to do on Proxmox as it’s not intended for that use case.

    Every system capable of being used as a Moonlight client can run self-hosted services, but the other way around is not true. So it’s better to start with the Moonlight part.

    So, with that in mind, I imagine you want this machine plugged into a TV in the living room or something similar, so it needs to have a GUI, and the GUI probably needs to be something you can navigate with a controller (although the new Steam Controller probably broadens that definition dramatically).

    You will already have one system with a GUI, so it’s easier to use the same thing. Really, don’t overthink this: if it’s good for general use, it’s good for self-hosting, and you don’t want to have to learn how to solve the same problem in multiple ways because of different distros. In the future, considering different distros makes sense, but when you’re just getting started, nailing the basics is easier with consistency across systems. Think about it this way: if you were learning how to write, mixing cursive and print at the same time would be harder than choosing one and then learning the other.

    So why is Proxmox great? Because it gives you a GUI and makes it easy to add services. How hard is it to do the same on Linux using Docker? SSH into the server, edit a small text file, and run a single command, all of which should be easy for you since you’ve probably done this in the past. But for most people that is very hard, and that is where Proxmox shines.

    Don’t believe me? You said Jellyfin; this is the whole Jellyfin compose file, with comments:

    # Services that this file creates
    services:
      # Name of the service, it can be whatever you want
      jellyfin:
        # Image this service runs; this is what determines what the service actually is
        image: lscr.io/linuxserver/jellyfin:latest
        # Volumes to mount, in the format <host path>:<path inside the container>
        # So this mounts the ./jellyfin folder as /config inside the container.
        # Some services expect specific folders inside of them, e.g. /config to store
        # Jellyfin's configuration; otherwise that data would get lost with every
        # restart of the service
        volumes:
          - ./jellyfin:/config
        # Rarely needed, but this gives the container hardware access, specifically
        # access to the /dev/dri device.
        # Jellyfin in particular benefits from this for transcoding
        devices:
          - /dev/dri:/dev/dri
        # The ports you want to expose, again in the format <host>:<inside the container>
        # So if you want Jellyfin on port 8080 on your machine you don't need to change
        # any settings, just write 8080:8096
        ports:
          - 8096:8096
          - 8920:8920
          - 7359:7359/udp
        # This tells Docker to restart the service if it crashes, unless you've stopped it
        restart: unless-stopped
    

    That’s it, and this is one of the more complicated ones out there. Here’s a simple one:

    services:
      radarr:
        image: lscr.io/linuxserver/radarr:latest
        volumes:
          - ./radarr:/config
    

    Of course there’s more to those files, and lots of extra configurations to be used, but the core is very simple and the rest is just needed for special cases.
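    And the “single command” is basically `docker compose up -d`. A minimal sketch, assuming Docker and the compose plugin are already installed (file and folder names are just the examples from above):

    ```shell
    # From the folder containing your docker-compose.yml:
    docker compose up -d              # pull the image and start the service in the background
    docker compose logs -f jellyfin   # follow the logs if something goes wrong
    docker compose down               # stop and remove the container (your ./jellyfin folder stays)
    ```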

    • tinfoilhat@lemmy.ml · 10 hours ago

      Yeah, Debian is pretty solid. I use AlmaLinux, which is basically what replaced CentOS when Red Hat killed it off.

      I never think about it.

  • neclimdul@lemmy.world · 23 hours ago

    Sounds like Debian is probably your go-to, based on the experience you stated. KISS to start.

    My advice is to choose something as stable as your requirements allow: Debian, Ubuntu LTS, etc. It can be fun to try new things, but generally you just want your homelab stuff to work, and spending a ton of time fixing broken updates isn’t the fun part.

    Similar to the above, isolate and guard your data from your OS and programs. It lets you stay flexible to try new things if you want, but if things go bad, reinstalling a different OS is easy: remount your JBOD or NAS or whatever and you’re back rolling. Backing up and transferring tons of files sucks, and recovering them is worse.
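    For example (paths are illustrative), keep all service data and media on a dedicated mount and only ever point containers at it, so the OS drive stays disposable:

    ```yaml
    # Sketch: every volume points into /mnt/data, never into the OS filesystem
    services:
      jellyfin:
        image: lscr.io/linuxserver/jellyfin:latest
        volumes:
          - /mnt/data/jellyfin/config:/config
          - /mnt/data/media:/media:ro   # media mounted read-only
    ```

    Reinstall the OS, remount /mnt/data, and the containers pick up right where they left off.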

    Declarative infrastructure can be your friend: Ansible, docker compose, etc. Again, when things go bad, getting things back up is that much quicker, and you can keep doing the fun stuff instead of spending your weekend finding that old blog post, figuring out that weird AI prompt, whatever.
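    As a sketch of what that recovery looks like (the repo URL and paths are hypothetical), assuming your compose files live in a git repo and your data survived on its own mount:

    ```shell
    # Fresh OS install: install Docker, pull your declared config, start everything
    git clone https://git.example.com/me/homelab.git
    cd homelab
    docker compose up -d   # the whole stack comes back exactly as declared
    ```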

  • doodoo_wizard@lemmy.ml · 2 days ago

    Eventually Proxmox will be the right choice for you. Right now it’s not, because you’re not yet skilled or knowledgeable enough to navigate it.

    That is not a dig or a slight, it’s a very powerful and complex package built on top of an already powerful and complex package.

    Just do containerless, normal-person Debian; then, when everything’s running how you’d like and you’re ready, you can migrate to Proxmox.

    The big benefit of doing that instead of jumping into proxmox with both feet immediately is that you’ll be learning more and be able to solve your own problems as you get to the point of using proxmox.

    • MolochHorridus@piefed.social · 2 days ago

      I started with Proxmox with zero Linux or self hosting knowledge and slowly built up what I need. It wasn’t always easy, but not that hard either.

    • wltr@discuss.tchncs.de · 2 days ago

      But what would Proxmox do?

      It’s the virtualisation layer, right? Won’t it consume extra resources? Or is that unimportant, since it’s very little? (I’ve never worked with VMs seriously, only casually run some things here and there.)

      I’m not the original poster, but I’m curious too. I think I’d pick some Fedora / Arch for the task, depending whether someone else would use it too.

      • doodoo_wizard@lemmy.ml · 2 days ago

        Proxmox gives you a nice (and limited!) front end to manage containers and virtualization, but it also lets you do other cool stuff like resource pooling, credential management, and too much else to really get into.

        Really powerful enterprise and whole organization level management in that package.

        It’s not the only game in town, but it’s free and well documented, and I recommended bare-metal Debian as a stepping stone (as opposed to an alternative) because Proxmox runs on top of Debian, so knowing that system is very useful.

        The overhead is real. On the other hand, all your little VMs and containers are rarely all doing something at the same time, so it doesn’t matter much.

      • sakuraba@lemmy.ml · edited · 2 days ago

        There is some overhead, but it’s not like running a full VM when you’re talking about containers, and it’s way safer if you want to expose anything to the internet, due to the isolation.

        Edit: I’m talking about containers in general, not Proxmox exclusively. Depending on your case, Proxmox will let you spin up a whole VM if needed.

    • alphabethunter@lemmy.world (OP) · 2 days ago

      What are the downsides of not using Proxmox right now? Most people in this thread are recommending it, so even if it’s a little difficult now but more capable in the long run, I’m up for the challenge.

      • Evil_Shrubbery@thelemmy.club · edited · 1 day ago

        Another POV/idea perhaps: you can still run Proxmox with only one virtual machine (e.g. standard Debian, no performance loss), but perhaps gain backups/copies if you plan to eventually add a cheap second machine (another location?) instead of local redundancy.

        Also with proxmox you can copy your virtual machines/containers & experiment easily.

      • doodoo_wizard@lemmy.ml · edited · 2 days ago

        The downsides of not going straight to proxmox are all pretty much permutations of missing out on features or having to deal with a migration later on down the line when you do switch to it.

        Those features are almost universally stuff you might decide not to use, or to use in a particular way, so it’s easy to say “pump your brakes and get your feet underneath yourself first” before handing you a tool that can be configured (with the help of Reddit, Stack Exchange, and LLMs) in infinite wrong ways.

        Kind of like suggesting someone learn how to make a simple miter joint before handing them the universally loved and used cordless oscillating multitool. The tool is really powerful, but the skills and foresight you gain from doing even just one miter joint will let you make better choices about how to use the oscillating multitool when you have it.

        Migration from bare metal to literally anything else is incredibly well documented and not a big deal.

        Oftentimes, for some of the stuff you said you’d be running, there are guides for migrating that particular package from metal to containers, VMs, or to Proxmox itself.

        I want to make it clear that everything you learn from bare-metal Debian would transfer over and complement learning skills directly with the Proxmox package, because Proxmox runs on top of Debian, and Debian would likely be the OS your VMs or containers are made from.

        You don’t need to throw yourself in the deep end to learn how to swim.

        E: there is the extremely rare possibility that you will have some crash or security problem due to the lack of containers/VMs. I say extremely rare and I mean extremely rare. My personal server, which was bare metal for twenty years, just recently had its first one, and it was actually related to a problem with containerization rather than a lack of it. Your mileage may vary, but for home users who don’t have public IPs and services getting pounded on 24/7, it wasn’t even something I thought about.

  • DonutsRMeh@lemmy.world · 2 days ago

    My server has been running Debian for over a year now with zero issues. Here is a list of the things I run:

    1. Invidious.
    2. Audiobookshelf.
    3. Navidrome.
    4. Pihole with unbound.
    5. Searx.
    6. Cloudflare

    Hope this helps.

    • traxex@lemmy.dbzer0.com · 2 days ago

      How good is invidious? At the moment I’m pulling all of my videos in with TubeArchivist and that’s been hit or miss with getting blocked by YouTube. I also don’t watch my videos fast enough so they get backed up.

      • ohshit604@sh.itjust.works · 1 day ago

        How good is invidious?

        The experience, at least in my case, has been the same. Invidious is very hit or miss due to YouTube’s methods of blocking connections: one day my instance will be working perfectly fine, and the next it will drop some obscure error. While my instance is technically public, it’s only exposed to one country, and even then it hardly gets any hits through my reverse proxy.

        Thankfully they tend to update pretty frequently when YouTube starts acting up on a large scale.

  • darcmage@lemmy.dbzer0.com · 2 days ago

    Sounds like Proxmox would fit the bill. Virtualize everything with LXC/Docker/VMs depending on the app and you should be good to go. Moonlight should work in a VM running a Debian desktop, for example.

    • Aufgehtsabgehts@feddit.org · 2 hours ago

      How do I decide what to choose between VM and LXC? For example if I want to use Paperless/Jellyfin/Immich as Docker and Nextcloud without Docker.

      And would you have multiple VMs/LXCs for multiple Docker-Apps, or put them all in one?

  • monkeyman512@lemmy.world · edited · 2 days ago

    I agree with having the server run Proxmox like others have said. Check out these YouTube channels for helpful information/guides:

    LearnLinuxTV

    LawrenceSystems

    CraftComputing

    • alphabethunter@lemmy.world (OP) · 2 days ago

      Thanks! I’ll check these out. I’m planning to start moving things over this weekend, so I’ll take the couple of days till then to learn as much as possible.

  • Ardor von Heersburg@discuss.tchncs.de · 2 days ago

    Honestly, when you kind of know Debian, then stay with it. It’s great software. You can easily install a graphical desktop if you’d like to. Also, it’s quite comfortable to have the same package manager on all your systems.

  • Sanctus@anarchist.nexus · 2 days ago

    I’d still try Debian for this. It’s just so rock solid; mine kills it as a Jellyfin server. Though I am unsure if the sometimes-older packages will affect Moonlight in some way. Never used Moonlight.

  • spaghettiwestern@sh.itjust.works · edited · 2 days ago

    I’m running Mint for apps like Jellyfin and Icecast that aren’t critical, and Debian for apps like Frigate that are. Mint is easier to manage and more convenient, but Debian is amazingly reliable. Docker is used for everything.

    Consider adding WireGuard or similar for access from anywhere. I have Tasker automatically connect whenever I’m not on my home wifi, so everything is always available without detectable open ports on my router.
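    For reference, a minimal WireGuard client config sketch (wg0.conf); the keys, addresses, subnets, and endpoint are all placeholders for whatever your setup uses:

    ```ini
    [Interface]
    PrivateKey = <client private key>
    Address = 10.8.0.2/32

    [Peer]
    PublicKey = <server public key>
    Endpoint = home.example.com:51820
    # Route only the home subnets through the tunnel, not all traffic
    AllowedIPs = 192.168.1.0/24, 10.8.0.0/24
    PersistentKeepalive = 25
    ```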

  • sneaky@r.nf · 2 days ago

    I did something similar. Everybody is different so maybe not the best option for you, but who knows.

    I have a single mini PC that handles my stack of virtual machines hosting various things. For the main OS I went with Fedora KDE. I chose something with a GUI for two reasons, the primary being that sometimes (maybe not as often as you get more familiar, but sometimes) there is an easier way to accomplish something in the GUI than in the CLI. Things like system settings: you can save a lot of time looking up commands and syntax by flipping a switch in the settings application.

    The second and most important reason for the GUI: I watch TV on this thing. Which I would not recommend if you are hosting anything that can’t handle a little downtime. Once in a while a web browser may hang, Bluetooth could fail, and you end up having to restart. Nothing I host is critical to anybody, so this isn’t a big deal to me. I also find a little inner peace knowing that I interact with the main system controlling these hosts on a daily basis: if it does get compromised in some way, this makes it just a little more likely I will notice quickly.

    So that’s the hardware side, and I’m running libvirt as the hypervisor. It’s pretty bare-bones, but easy to use and gets the job done. The hardest step for me was generating SSH certificates/keys; not that it was hard, more so just new to me. libvirt will not allow you to connect remotely in plain text, so regardless of your threat model this is a required step if you want remote access to the hypervisor.
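    Once the keys are in place, the remote connection looks roughly like this (user and hostname are placeholders; the same URI also works in virt-manager):

    ```shell
    # Copy your public key to the server, then talk to libvirt over SSH
    ssh-copy-id user@server
    virsh -c qemu+ssh://user@server/system list --all   # list all VMs remotely
    ```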

    If you make it that far you can start really getting into the weeds with networking. I’m not going to go into the topology of my network, but I will say if you are hosting anything public you should do as much as possible to isolate that from your home network. You can create a VM to act as a firewall/router for other VMs.

  • HuntressHimbo@lemmy.zip · 2 days ago

    If you want to get real fancy with it you could do something like Nix, but honestly I would recommend Debian first almost every time