Solar Bear

  • 0 Posts
  • 54 Comments
Joined 3 years ago
Cake day: June 27th, 2023

  • I would say there are solid benefits to breaking your networking out into at least 4 VLANs: IoT, guest, main, and infrastructure. IoT is obvious: these devices are security nightmares, but sometimes you have no alternative, so you throw them into a network black hole. Guest is for visitors who you don’t want touching your stuff but who keep asking for wifi. Main is for everybody else; this is your “real” network. Infrastructure is for servers and network equipment.

    The reason you break infrastructure off into its own VLAN is that modern firewalls are stateful: you can allow the main VLAN to initiate connections to the infrastructure VLAN but not vice versa, so if your server or IoT stuff gets infected, it can’t become an attack vector for all your other devices.
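    A minimal sketch of that one-way policy, assuming an nftables firewall; the VLAN interface names (vlan10 for Main, vlan40 for Infrastructure) are placeholders:

    ```
    # Sketch: stateful rules let Main open connections to Infrastructure,
    # while Infrastructure can only answer established flows, never
    # initiate anything toward Main.
    table inet filter {
      chain forward {
        type filter hook forward priority 0; policy drop;

        # Replies to already-established connections are always allowed.
        ct state established,related accept

        # Main (vlan10) may initiate connections to Infrastructure (vlan40)...
        iifname "vlan10" oifname "vlan40" accept

        # ...but there is no matching rule for vlan40 -> vlan10, so
        # Infrastructure can never start a connection into Main.
      }
    }
    ```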

    I take mine further and add two more VLANs: services and admin. I split infrastructure (networking, Proxmox hosts, etc.) from services (Proxmox VMs, NAS, etc.), and only the admin VLAN, which is exclusive to my PC and phone, is allowed to reach the former. Some might call this excessive, but it helps me sleep a little better at night.


  • It gets logged in the event viewer, yeah. That’s how I discovered it, on account of the screens not waking up in time to show the actual bluescreen. The users were only reporting that their computers were deleting all their windows when waking up. From their perspective, all they saw was their computer taking a mildly longer time to wake up from deep sleep and then losing their entire session, but what it was actually doing was hard rebooting.

    Headless is fine; the bug was specifically triggered when a computer woke up and detected that a monitor existed, but the monitor took too long (some unspecified amount of time) to wake up. It was also fixed at some point, I’m not sure when, but it went on long enough that we swapped dozens of cables, because it only happened on the ones using DisplayPort, not HDMI.



  • I feel like NixOS might be the only distro that could realistically handle all these use cases, but I’m a bit scared of the learning curve and the maintenance work it’d take to migrate everything over.

    It’s a very steep learning curve, but I personally think it is worth it if what you want is to sync up all your various devices to a single common baseline configuration. I sought a single-distro solution for all of my systems for a long time and always ended up fragmenting them eventually because nothing I tried until NixOS was capable of handling such a diverse set of use cases in a way that would satisfy me.

    I am similar to you, in that I regularly use a three-server cluster, a gaming desktop, a multi-purpose personal laptop, and a WSL instance on my work laptop. I still have some purpose-built distros where they make sense; I use Proxmox for the actual server hosts themselves and then run NixOS VMs on them, along with VMs for Home Assistant OS and TrueNAS (with the drives passed through, of course). All of these things I could do on raw NixOS (even Home Assistant is packaged in Nix, and there is a project to port the Proxmox UI and tooling to NixOS), but I like the stability of dedicated, battle-tested distros for critical infrastructure, especially for stuff whose configuration is very specific to a given task.

    With NixOS, every other device has a consistent shared configuration and package set. They all get updated to the exact same versions thanks to flakes, so everything works the same and as expected no matter where I am, and it’s all declaratively configured and documented in one spot. Spinning up a new system or rebuilding an existing one is as easy as pulling the config and changing a few relevant lines; from there, it effectively assembles itself from scratch into the exact state I want it to be in. There’s never any lingering package or configuration cruft, because the system is assembled from scratch every time it updates. Much of my home configuration is also managed, so aliases, environment variables, even vim configs are consistent across the board and set in one location.
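    As a rough illustration of that flake setup (a hypothetical flake.nix; the host and module names are placeholders), a single flake pins nixpkgs once, so every machine updates to identical package versions:

    ```
    {
      # One pinned nixpkgs input shared by every host.
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

      outputs = { self, nixpkgs }: {
        nixosConfigurations = {
          desktop = nixpkgs.lib.nixosSystem {
            system = "x86_64-linux";
            # Shared baseline plus per-host overrides.
            modules = [ ./common.nix ./hosts/desktop.nix ];
          };
          server1 = nixpkgs.lib.nixosSystem {
            system = "x86_64-linux";
            modules = [ ./common.nix ./hosts/server1.nix ];
          };
        };
      };
    }
    ```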

    The main downside is resource efficiency. Nix is designed to be reproducible and declarative, not fast or lean. It uses much more storage than a typical package manager, and packages are built with wide compatibility in mind, so you’re often leaving performance on the table by not targeting newer instruction sets the way a distro like CachyOS does. You can compile your own packages to fix that part, but that obviously takes a lot of spare processing power. I’ve been considering setting up my server cluster to do automatic building for me, but haven’t gotten around to it yet.


  • My main use case is using it to protect my exposed Home Assistant instance in a way that doesn’t require a VPN that family can screw up. I can just install the cert into the app for them and it Just Works. I also use it for my own Gotify notifications.

    As a more general rule, I apply it to anything I want to expose but can’t easily protect using OIDC logins. I used to put more behind it, but I recently opened up my services to friends and family, so I moved to using Authentik as my primary defense for most things. mTLS was great when it was just me, I can easily install the cert into my own browser and all of my Android apps (except Firefox Android…) but friends and family just zone out when I explain why their new phone doesn’t connect, so I had to adjust my systems to compensate.


  • It’s definitely dried up a fair bit over the last couple of years. In January 2025 I got some recertified 12TB IronWolfs for $140 each from GoHardDrive, and that was already a fair bit over what they historically had been. The same drives are now $200 on GoHardDrive and $220 on Amazon. You can just get them new for $250, so at that point I barely think it’s worth it to get recertified unless you’re really stretching a budget. I’m sure the businesses are very happy with the demand they’re getting now, but it’s hard to escape the conclusion that LTT and other YouTubers covering these sites really drove up demand and prices.

    Also, the smaller drives are a lot harder to find recertified these days since enterprise users will usually go for much larger capacities, so yeah, for 4TB you’ll probably have to go for new. You could also just get a larger drive and only use 4TB of it, assuming this is going into some kind of array. Upgrade the other one at a later date, then just expand your pool!


  • Authentik has done the opposite of enshittification. As they’ve gotten more successful, they’ve taken enterprise features and moved them into the community edition. I’ve been extremely happy with Authentik so far and the dev has been nothing short of fantastic every time I’ve seen them interacting with the community.



  • The counter to low-quality “Ubuntu sux” posts is not low quality “nuh uh it’s actually super epic!!!” posts, but that’s all we ever get. I’ve seen this pattern for probably fifteen years now, and it’s exhausting. If you don’t care about the criticisms and want to keep using it, then keep using it. More power to you. I probably use things you think are garbage. Hell, Windows users think we both use garbage. I’m just tired of people desperate to justify their choices like they need to “prove” something to everyone who disagrees.

    There are plenty of high quality takedowns of Ubuntu, but so rarely are there high quality defenses of it, generally because the criticisms are correct. Nobody ever talks about what makes Ubuntu good, not even Ubuntu users. Arch users will yap your ear off about ArchWiki and AUR. I’ll evangelize Nix to anybody who will listen as the future of advanced Linux management. OpenSUSE Tumbleweed fans will not shut up about rollbacks and bleeding edge software. Fedora users… well, Fedora users are usually busy out there actually doing productive things with their time instead of pointless internet squabbles.

    But what is Ubuntu strong at? I genuinely have no idea. All I ever see Ubuntu users say is that it “sucks the least”, in some vague indescribable way. That it’s not as bad as everyone says, that Snaps are actually fine, etc. Always on the defensive. If Ubuntu is actually good, somebody needs to get out there and make a case for what it’s good at, besides being featured as the default instructions for running proprietary third-party software.


  • I don’t know why we’re still doing snap discourse in 2025. I’m going to be harsh and direct.

    It has a proprietary server backend. This is objectively true. Theoretically you can build an open source backend, but nobody has completed a full implementation of it.

    If you don’t care about that, you can use Ubuntu, nobody is stopping you. You don’t need other people’s approval. Which is good, because of the people who disapprove, you’re never going to get their approval until it’s actually open sourced. You’re not going to convince anybody here to stop caring that it’s proprietary. So just get over it and use your own operating system without airing your insecurities online about it.


  • Solar Bear@slrpnk.nettoSelfhosted@lemmy.worldHelp me harden my home server
    1 year ago

    Something you might want to look into is using mTLS, or client certificate authentication, on any external facing services that aren’t intended for anybody but yourself or close friends/family. Basically, it means nobody can even connect to your server without having a certificate that was pre-generated by you. On the server end, you just create the certificate, and on the client end, you install it to the device and select it when asked.
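    The setup side is just a few OpenSSL commands. A minimal sketch (the file names, subject names, and export password are all placeholders): create your own CA, sign a client certificate with it, then bundle the result as PKCS#12 for import on a phone or browser.

    ```shell
    # Create a private CA (placeholder names throughout).
    openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
      -keyout ca.key -out ca.crt -subj "/CN=My Home CA"

    # Create a key and signing request for one client device.
    openssl req -newkey rsa:4096 -nodes \
      -keyout client.key -out client.csr -subj "/CN=my-phone"

    # Sign the client certificate with the CA.
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -out client.crt -days 825

    # Bundle key + cert as PKCS#12, the format phones expect to import.
    openssl pkcs12 -export -inkey client.key -in client.crt \
      -certfile ca.crt -out client.p12 -passout pass:changeit

    # Sanity check: the client cert chains back to the CA.
    openssl verify -CAfile ca.crt client.crt
    ```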

    The viability of this depends on what applications you use, as support must be implemented by each app’s developers. For anything only accessed via web browser, it’s perfect; all web browsers (except Firefox on mobile…) can handle mTLS certs. Lots of Android apps also support it. I use it for Nextcloud on Android (so the Files, Tasks, Notes, Photos, RSS, and DAVx5 apps all work) and support works across the board there. It also works for the Home Assistant and Gotify apps, and it looks like Immich does indeed support it too. In my configuration, I only require it on external connections by forwarding port 443 on the router to port 444 on the server, so I can apply different settings easily without having to do any filtering.
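    The comment doesn’t name a reverse proxy; assuming nginx (the hostname, paths, and upstream are placeholders), that split-port arrangement looks roughly like this:

    ```
    # Sketch: require client certs only on the externally-forwarded port.
    server {
        listen 444 ssl;               # router forwards WAN 443 -> LAN 444
        server_name example.home;     # placeholder hostname

        ssl_certificate     /etc/nginx/tls/server.crt;
        ssl_certificate_key /etc/nginx/tls/server.key;

        # Only clients presenting a cert signed by your private CA connect.
        ssl_client_certificate /etc/nginx/tls/ca.crt;
        ssl_verify_client on;

        location / {
            proxy_pass http://127.0.0.1:8123;  # e.g. Home Assistant
        }
    }
    ```

    A separate server block listening on 443 internally can then skip `ssl_verify_client` for LAN traffic.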

    As far as security and privacy go, mTLS is virtually impenetrable so long as you protect the certificate and configure the proxy correctly, and it’s similar in concept to using WireGuard. Nearly everything I publicly expose is protected via mTLS, with very rare exceptions like Navidrome (due to lack of support in Subsonic clients) and a couple of other things that I actually want to be universally reachable.



  • Whatever you get for your NAS, make sure it’s CMR and not SMR. SMR drives do not perform well in NAS arrays.

    I just want to follow this up and stress how important it is. This isn’t “oh, it kinda sucks but you can tolerate it” territory. It’s actually unusable after a certain point. I inherited a Synology NAS at my current job which is used for backup storage, and my job was to figure out why it wasn’t working anymore. After investigation, I found out the guy before me populated it with cheapo SMR drives, and after a certain point they just become literally unusable due to the ripple effect of rewrites inherent to shingled drives. I tried to format the array of five 6TB drives and start fresh, and it told me it would take 30 days to run whatever “optimization” process it performs after a format. After leaving it running for several days, I realized it wasn’t joking. During this period, I was getting around 1MB/s throughput to the system.

    Do not buy SMR drives for any parity RAID usage, ever. It is fundamentally incompatible with how parity RAID (RAID5/6, ZFS RAID-Z, etc) writes across multiple disks. SMR should only be used for write-once situations, and ideally only for cold storage.



  • Hard disagree. Everything you learn on Arch is transferable because Arch is vanilla almost to a fault. The deep understandings of components I learned from Arch have helped me more times than I can count. It’s only non-transferable if you view each command as an arcane spell to be cast in that specific situation. I’ve fixed so many issues over the years using this knowledge, and it’s literally what landed me my current job and promotions.

    Arch is why I know how encryption and the TPM work at a deeper level, which helped me find and fix the issue a Dell Windows PC was having that kept tripping it into BitLocker recovery. Knowledge of Grub and kernel parameters that I learned from Arch’s install process is why I was able to effortlessly break into a vendor’s DNS server, whose root password had been lost by the previous sysadmin, while everybody else was panicking. Hell, it even helps in installing other distros, because advanced disk partitioning is a hot mess in a lot of distro GUI installers, so intimate knowledge of what I actually need helps me work around their failings. Plus all the countless other times that knowledge has helped me solve little problems instantly, because I knew how things worked from implementing them manually. When my coworkers falter because the GUI fails them and they know nothing else, I simply fix it with a command.
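    The lost-root-password rescue is presumably some variant of the classic kernel-parameter trick, sketched here from general knowledge rather than from the comment (exact steps vary by distro and bootloader):

    ```
    # At the Grub menu, press 'e' to edit the boot entry, then append
    # to the end of the line beginning with "linux":
    init=/bin/bash
    # Boot with Ctrl-x; the kernel starts bash as PID 1 instead of init.
    # Then remount the root filesystem read-write and set a new password:
    mount -o remount,rw /
    passwd root
    sync
    reboot -f
    ```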

    If you use Arch and actually make the effort to learn, not just copy and paste commands from the wiki, you will objectively learn a lot about how Linux works. If you seek a career in Linux, there’s nothing I can recommend more than transitioning to using Arch (not Garuda, not Manjaro, Arch) full-time on your daily driver computer.

    Anyways, after about a decade I’ve recently switched to NixOS. Now there’s a distro where the skills you learn can’t be transferred out, but the knowledge I gained from Arch absolutely transferred in and gave me a head start.



  • Your response is “why are you doing X, you should do Y”

    Because they’re right, you shouldn’t do X. I know that’s not a satisfying answer for most people to hear, but it’s often one people need to hear.

    If the process must run as root, then giving a user direct and unauthenticated control over it is a security vulnerability. You’ve created a quick workaround for your issue, and to be clear, it is unlikely to realistically cause you problems individually, but on a larger scale it becomes a massive issue. A better solution is required, rather than recommending that everybody create a hole in their security like yours in order to do this thing.

    If this is something that unprivileged users reasonably want to control, then this control should be possible unprivileged, or at least with limited privilege, not by simply granting permanent total control of a root service.
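    As one illustration of “limited privilege” (a hypothetical sudoers drop-in; the user and unit names are placeholders), you can grant exactly one action instead of blanket control of a root process:

    ```
    # /etc/sudoers.d/restart-myservice (hypothetical): the "media" user
    # may restart this one unit, with no password, and nothing else.
    media ALL=(root) NOPASSWD: /usr/bin/systemctl restart myservice.service
    ```

    Polkit rules or a systemd user service are other ways to scope the permission to the one thing the user actually needs.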

    This is ultimately an upstream issue more than anything else.