How do you set up a server? Do you do any automation or do you just open up an SSH session and YOLO? Any containers? Is docker-compose enough for you or are you one of those unicorns who had no issues whatsoever with rootless Podman? Do you use any premade scripts or do you hand craft it all? What distro are you building on top of?

I’m currently in the process of “building” my own server and I’m kinda wondering how “far” most people go, where y’all take shortcuts, and what you spend effort getting just right.

  • ppp@lemmy.one · 1 year ago

    Debian + nginx + docker (compose).

    That’s usually enough for me. I keep each container’s docker compose file in its own directory under my home directory, like ~/red-discordbot/docker-compose.yml.

    The only headache I’ve dealt with is permissions: since I have to run docker as root, it leaves a lot of messy root-owned files in the home directories. I’ve recently been trying rootless docker and it’s been great so far.

    edit: I also use rclone for backups.
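
    As a rough illustration of that layout, one of those per-service compose files might look something like this (the image name and volume paths are placeholders, not taken from the comment):

        services:
          red-discordbot:
            image: example/red-discordbot:latest   # placeholder image name
            restart: unless-stopped
            volumes:
              - ./data:/data                       # keeps state next to the compose file

    With that in place, cd ~/red-discordbot && docker compose up -d is typically all it takes to (re)start the service, which is what makes the one-directory-per-service layout convenient.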

  • SpaceNoodle@lemmy.world · 1 year ago

    I’m a lazy piece of shit and containers give me cancer, so I just keep iptables aggressive and spin up whatever on an Ubuntu box that gets upgrades when I feel like wasting a weekend in my underwear.

  • clavismil@lemmy.world · 1 year ago

    I use debian VMs and create rootless podman containers for everything. Here’s my collection so far.

    I’m currently in the process of learning how to combine this with ansible… that would save me some time when migrating servers/instances.
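
    For illustration, a minimal Ansible task list for preparing a rootless-podman host might look roughly like this (the inventory group and the “deploy” user are made-up names, not anything from the comment):

        - hosts: podman_hosts                      # placeholder inventory group
          become: true
          tasks:
            - name: Install podman
              ansible.builtin.package:
                name: podman
                state: present

            - name: Let the unprivileged "deploy" user keep services running after logout
              ansible.builtin.command: loginctl enable-linger deploy
              changed_when: false                  # illustration only; not written to be idempotent

    Enabling lingering tends to be the step people forget: without it, systemd tears down the user session (and the rootless containers running in it) as soon as the user logs out.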

  • VexCatalyst@lemmy.fmhy.ml · 1 year ago

    Generally, it’s Proxmox, Debian, then whatever is needed for what I’m spinning up. Usually Docker Compose.

    Lately I’ve been playing around with Ansible, but its use is far from common for me right now.

  • NixOS instances running Nomad/Vault/Consul. Each service behind Traefik with LE certs. Containers can mount NFS shares from a separate NAS which optionally gets backed up to cloud blob storage.

    I use SSH and some CLI commands for deployment, but only because that’s faster than CI/CD. I’m mostly just running ‘nomad run …’.

    The goal was to be resilient to single node failures and align with a stack I might use for production ops work. It’s also nice to be able to remove/add nodes fairly easily without worrying about breaking any home automation or hosting.
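
    As a sketch of the “each service behind Traefik with LE certs” part, a Traefik static config with an ACME resolver usually looks something along these lines (the email and storage path are placeholders):

        entryPoints:
          websecure:
            address: ":443"

        certificatesResolvers:
          letsencrypt:
            acme:
              email: admin@example.com             # placeholder contact address
              storage: /data/acme.json             # placeholder path for issued certs
              tlsChallenge: {}

    Individual services then just reference the resolver from their router configuration (via tags or labels in a Nomad or Docker setup) and Traefik handles issuance and renewal.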

  • Sergey Kozharinov@lem.serkozh.me · 1 year ago

    A series of VPSes running AlmaLinux. I have a relatively big Ansible playbook to set up everything after a server goes online. The idea is that I can at any time wipe the server, install the OS, put back all the persistent data (Docker volumes and the /srv partition with all the heavy data), and run the playbook.
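
    A very stripped-down sketch of that rebuild flow in Ansible terms could look like this (the host group, device, and service path are placeholders, not details from the comment):

        - hosts: vps                               # placeholder inventory group
          become: true
          tasks:
            - name: Reattach the persistent /srv partition
              ansible.posix.mount:
                path: /srv
                src: /dev/vdb1                     # placeholder device
                fstype: ext4
                state: mounted

            - name: Bring a service back up from its compose file
              ansible.builtin.command: docker compose up -d
              args:
                chdir: /srv/example-service        # placeholder path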

    Docker Compose for services; the last time I checked Podman, podman-compose didn’t work properly, and learning a new orchestration tool would take an unjustifiable amount of time.

    I try to avoid shell scripts as much as possible because they are hard to write in a way that handles all possible scenarios, they are difficult to debug, and they can make a mess when not done properly. Premade scripts are usually the big offenders here, and they are a nice way to leave you without a single clue how the stuff they set up works.

    I don’t have a selfhosting addiction.

  • saplyng@kbin.social · 1 year ago

    I’ve set up some godforsaken combination of docker, podman, nerdctl, and bare metal at work for stuff I’ve needed since they hired me. Every day I’m in constant dread that something I made will go down, because I don’t have enough time to figure out how I was supposed to do it right T.T

  • tr00st@lemmy.tr00st.co.uk · 1 year ago

    Up until now I’ve been using docker and mostly configuring manually by dumping docker compose files in /opt/whatever and calling it a day. Portainer is running, but I mainly use it for monitoring and occasional admin tasks. Yesterday, though, I spun up machine number 3, and I’m strongly considering setting up something better for provisioning/config. Once everything is set up right it’s never been a big problem, but a couple of bits of the initial setup are a bit of a pain (mostly hooking up wireguard, which I use as a tunnel for remote admin and off-site reverse proxying).

    Salt is probably the strongest contender for me, though that’s just because I’ve got a bit of experience with it.
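
    Since Salt came up: a minimal state for the “compose file in /opt/whatever” pattern might look roughly like this (the service name and paths are placeholders):

        /opt/example/docker-compose.yml:
          file.managed:
            - source: salt://example/docker-compose.yml
            - makedirs: True

        example-compose-up:
          cmd.run:
            - name: docker compose up -d
            - cwd: /opt/example
            - onchanges:
              - file: /opt/example/docker-compose.yml

    The onchanges requisite makes the cmd.run fire only when the managed compose file actually changes, which is about as close to “dump a file in /opt and call it a day” as config management gets.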

  • Neo@lemmy.hacktheplanet.be · 11 months ago

    I’ve recently switched my entire self-hosted infrastructure to NixOS, but only after a few years of evaluation; it’s quite a paradigm shift, but well worth it imho.

    Before that I used to stick to a solid base of Debian with some docker containers. There are still a few of those remaining that I have yet to migrate to my NixOS infra (namely mosquitto, gotify, nodered and portainer for managing them).

  • EmptyRadar@kbin.social · 1 year ago

    After many years of tinkering, I finally gave in and converted my whole stack over to UnRAID a few years ago. You know what? It’s awesome, and I wish I had done it sooner. It automates so many of the more tedious aspects of home server management. I work in IT, so for me it’s less about scratching the itch and more about having competent hosting of services I consider mission-critical. UnRAID lets me do that easily and effectively.

    Most of my fun stuff is controlled through Docker and VMs via UnRAID, and I have a secondary external Linux server which handles some tasks I don’t want to saddle UnRAID with (pfSense, ad blocking, etc.). The UnRAID server itself has 128GB of RAM and dual Xeon CPUs, so plenty of go for my home projects. I’m at 12TB right now, but I was just on Amazon eyeing some 8TB drives…

  • nzeayn@lemmy.world · 1 year ago

    About two years ago my setup had gotten out of control, as it will. A closet full of crap, all running VMs, all poorly managed by Chef. Different Linux flavors everywhere.

    Now it’s one big physical Ubuntu box. Everything gets its own Ubuntu VM. These days, if I can’t do it in shell scripts and XML I’m annoyed; for anything fancier than that I’d better be getting paid. I document in markdown as I go and rsync the important stuff from each VM to an external drive every night. If something goes wrong, I just burn the VM, copy-paste it back together in a new one from the mkdocs site, then get on with my day.

  • philip@kbin.chat · 1 year ago

    I use the following procedure with ansible.

    1. Set up the server with the things I need for k3s to run
    2. Set up k3s
    3. Bootstrap and create all my services on k3s via ArgoCD
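
    As a sketch of step 3, an ArgoCD Application pointing at a Git repo of manifests generally looks like this (the name and repo URL are placeholders):

        apiVersion: argoproj.io/v1alpha1
        kind: Application
        metadata:
          name: example-service                    # placeholder name
          namespace: argocd
        spec:
          project: default
          source:
            repoURL: https://git.example.com/homelab/manifests.git   # placeholder repo
            path: example-service
            targetRevision: main
          destination:
            server: https://kubernetes.default.svc
            namespace: example-service
          syncPolicy:
            automated:
              prune: true
              selfHeal: true

    With automated sync enabled, pushing a change to the repo is usually enough; ArgoCD reconciles the cluster to match, which is what makes the bootstrap-once approach work.
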
    • redcalcium@c.calciumlabs.com · 1 year ago

      People like to diss running kubernetes on your personal servers, but once you have enough services running on them, managing everything with docker compose no longer cuts it, and kubernetes is the next logical step. Tools such as k9s make navigating a kubernetes cluster a breeze.