He/Him They/Them

Working in IT for about 15 years. Been online in one way or another since the late 90’s.

I like games and anime, but I’m very picky with them.

Cats are the best people.

  • 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: June 14th, 2023

  • Mail is the one thing I refuse to self-host, for the simple reason that while it’s not particularly hard to get up and running initially, when it doesn’t work for whatever reason it can be (and often is) a gigantic pain in the ass to deal with, especially when the cause is out of your control. For personal use there are very good free options, and for enterprise those same free options have paid tiers.

    It might be Gmail having a bad day and blocking you, or your cloud provider or on-prem infrastructure crapping out for long enough that you’re cut off from email for a while and permanently missing incoming mail once the senders’ retries time out, or anything in between. It’s one of those things where I’m glad it isn’t my problem to deal with.

    My only involvement with email is making sure I have a local copy of my inbox synced up every week, so if my provider were ever to die I’d still have all my content.
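    The sync itself can be as dumb as a weekly IMAP dump to local .eml files. A minimal sketch of that idea, where the host, account and destination are placeholders rather than what I actually run:

    ```python
    # Sketch of a weekly "local copy of the inbox" job over IMAP.
    # Host, account and destination directory are placeholders.
    import imaplib
    import os

    IMAP_HOST = "imap.example.com"            # placeholder provider
    USER = "me@example.com"                   # placeholder account
    PASSWORD = os.environ["IMAP_PASSWORD"]    # app password from the environment
    DEST = os.path.expanduser("~/mail-backup")

    os.makedirs(DEST, exist_ok=True)

    with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
        imap.login(USER, PASSWORD)
        imap.select("INBOX", readonly=True)        # never touch the server copy
        _, data = imap.search(None, "ALL")         # all message sequence numbers
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")  # raw RFC 822 bytes
            # filenames are just sequence numbers here; good enough for a dump
            with open(os.path.join(DEST, f"{num.decode()}.eml"), "wb") as f:
                f.write(msg_data[0][1])
    ```

    Cron that weekly and the provider dying stops being a data-loss event.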


  • Buy the domain itself wherever you want. I like Cloudflare, and a lot of people also suggest porkbun.com. You then point the nameservers for your domain at whatever DNS service you want; if you stick with Cloudflare that part is already done for you.

    For dynamic DNS I use Cloudflare’s own records, with my router keeping the record updated. It’s easy to set up. Depending on your router you may need to run a small service on a machine to do this instead; things like pfSense/OPNsense have it built in.
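    If your router can’t do it natively, the update is a single API call you can run from any box. A rough sketch, where the zone ID, record ID, hostname and API token are placeholders you’d pull from the Cloudflare dashboard, and the IP-lookup service is just one of several options:

    ```python
    # Sketch of a Cloudflare dynamic DNS updater: look up the current public IP
    # and overwrite an existing A record via the Cloudflare v4 API.
    import os
    import requests

    TOKEN = os.environ["CF_API_TOKEN"]   # scoped token with DNS edit permission
    ZONE_ID = "your-zone-id"             # placeholder
    RECORD_ID = "your-dns-record-id"     # placeholder
    HOSTNAME = "home.example.com"        # placeholder

    # Ask an external service what our public IP currently is.
    ip = requests.get("https://api.ipify.org", timeout=10).text.strip()

    resp = requests.put(
        f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records/{RECORD_ID}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"type": "A", "name": HOSTNAME, "content": ip, "ttl": 60, "proxied": False},
        timeout=10,
    )
    resp.raise_for_status()
    print(f"{HOSTNAME} -> {ip}: success={resp.json()['success']}")
    ```

    Run it from cron every few minutes and it behaves the same as the router-based updater.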







  • Possible, yes. Cost-effective with a valid business case, probably not. Every extra 9 is diminishing returns: it’ll cost you exponentially more than the previous 9 while the money saved from avoided downtime shrinks. Like you said, that’s about 32 seconds of downtime a year (quick math at the end of this comment); how much money is that actually worth to the business?

    You’re pretty much looking at multiple geographically diverse Tier 4 datacenters with N+2 or even N+3 redundancy all the way up and down the stack, while also adding vendor diversity wherever possible so that no single supplier of anything can take you offline.

    Even with all that though, you’ll eventually get wrecked by DNS somewhere somehow, because it’s always DNS.
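    For reference, the downtime budget falls straight out of the availability percentage, and the shrinking returns are easy to see:

    ```python
    # How much downtime per year each number of nines actually allows.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    for nines in range(2, 7):                      # 99% .. 99.9999%
        availability = 1 - 10 ** -nines
        downtime = SECONDS_PER_YEAR * (1 - availability)
        print(f"{availability:.4%} uptime -> {downtime:,.0f} seconds/year of downtime")
    ```

    The jump from five nines to six only shaves off about 280 seconds a year, and that’s all the extra spend is protecting.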



  • I run Linux for everything. The nice thing is that everything is a file, so I use rsync to back up all the configs on my physical servers: I can do a clean install, run my setup script, rsync the config files back over, reboot, and everyone’s happy.

    For the actual data I also rsync from my main server to the others. Each server has a schedule for when it gets rsynced to, so I end up with about three weeks of history (rough sketch of the idea at the end of this comment).

    For virtual servers I just use Proxmox’s built-in backup system, which works great.

    Very important files also get encrypted and sent to the cloud, but out of dozens of TB that only accounts for a few gigs.

    I’ve also never thrown out a disk or USB stick in my life; I use them for archiving. Even if a drive is half dead, as long as it’ll accept data I shove a copy of something on it, then label and document it. There are so many copies of everything that it can all be rebuilt if needed, even if half of these drives end up not working. I keep most of them off-site. At some point I’ll have to physically destroy the oldest ones, like the few 13 GB IDE disks that just aren’t worth bothering with anymore.
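    The history part boils down to dated rsync snapshots that hard-link unchanged files against the previous run. A minimal sketch of the idea, with placeholder paths and a single destination instead of my actual multi-server schedule:

    ```python
    # Sketch of dated rsync snapshots with roughly three weeks of retention.
    # Unchanged files are hard-linked against the previous snapshot, so every
    # day's directory looks complete but only changed files use new space.
    import datetime
    import shutil
    import subprocess
    from pathlib import Path

    SOURCE = "/srv/data/"                 # trailing slash: copy contents, not the dir
    DEST_ROOT = Path("/mnt/backup/data")  # placeholder destination
    KEEP_DAYS = 21                        # about three weeks of history

    DEST_ROOT.mkdir(parents=True, exist_ok=True)
    today = datetime.date.today()
    dest = DEST_ROOT / today.isoformat()

    # Existing snapshots, oldest first, excluding today's in case this runs twice.
    snapshots = sorted(p for p in DEST_ROOT.iterdir() if p.is_dir() and p != dest)
    link_dest = [f"--link-dest={snapshots[-1]}"] if snapshots else []

    subprocess.run(["rsync", "-aH", "--delete", *link_dest, SOURCE, str(dest)], check=True)

    # Drop anything older than the retention window.
    cutoff = today - datetime.timedelta(days=KEEP_DAYS)
    for snap in snapshots:
        if datetime.date.fromisoformat(snap.name) < cutoff:
            shutil.rmtree(snap)
    ```

    Run something like that on a different day of the week for each target server and you get the same effect: several independent copies, each with its own slice of history.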


  • If you’re using memory for storage operations, especially something like a ZFS cache, then best practice is ECC, so errors get caught and corrected before they can corrupt your data (there’s a quick way to check the error counters at the end of this comment).

    In the real world, unless you’re buying old servers off eBay that already have it installed, the economics don’t make sense for self-hosting: the issues are rare and you should have good backups anyway. I’ve never run into a problem from not using ECC. I’ve been self-hosting since 2010 and have some ZFS pools nearly that old, running exclusively on consumer hardware (with the exception of HBAs and networking), never with ECC.
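    For anyone who does run ECC and wants to know whether it has ever actually corrected anything, the kernel’s EDAC counters are the quick check. This sketch just reads the standard sysfs counters; it only shows anything useful with ECC RAM and a loaded EDAC driver for your memory controller:

    ```python
    # Read the Linux EDAC corrected/uncorrected memory error counters from sysfs.
    from pathlib import Path

    edac = Path("/sys/devices/system/edac/mc")
    controllers = sorted(edac.glob("mc*")) if edac.exists() else []

    if not controllers:
        print("No EDAC memory controllers found (no ECC, or no EDAC driver loaded).")

    for mc in controllers:
        ce = (mc / "ce_count").read_text().strip()   # corrected errors
        ue = (mc / "ue_count").read_text().strip()   # uncorrected errors
        print(f"{mc.name}: corrected={ce} uncorrected={ue}")
    ```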




  • The Fediverse, and the internet in general, is already the metaverse. All we need to make it look like it does in sci-fi is for the majority of users to interact with it in a 3D virtual space instead of 2D. XR technology will get us there eventually, but content on those platforms is lacking at best and very far from general adoption.

    The next big hurdle is large corporations trying to “create” the metaverse, which already exists, and control it, which basically disqualifies anything they’re doing from ever being the metaverse. I actually felt some degree of rage when Facebook renamed itself Meta; they single-handedly ruined public perception of the concept, and anyone talking about it now isn’t taken seriously by the general public.

    What’s also missing from the metaverse right now is owning your identity and taking it with you everywhere you go. The Fediverse comes close to that concept but is far from perfect, since it’s still pretty hard to interact across Fediverse platforms: if I’m on Lemmy I don’t see any way to consume or interact with Mastodon or kbin content. Still, being able to traverse every Lemmy instance out there using one account, hosted on a server run by myself or someone else, is a start.



  • My general rule is not to self-host things that are already good enough and free (as in $$, not FOSS). So I don’t host email or music: I’m not a huge music person, so Spotify does the job, and Gmail has been great since it started.

    Things I do host:

    • media server (Jellyfin + Sonarr/Radarr etc.)
    • Stable Diffusion image generation server
    • game servers (mostly Starbound; killed Minecraft after the Microsoft takeover)
    • Lemmy
    • comics/manga server (Komga)
    • yt-dl web interface