You could also use nginx if you wanted; it’ll proxy arbitrary TCP with the stream module.
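Something like this is all it takes - a minimal sketch, assuming your nginx was built with the stream module (the `stream {}` block sits at the top level of nginx.conf, next to `http {}`); the port and backend address are placeholders:

```nginx
stream {
    server {
        listen 2222;                  # raw TCP in
        proxy_pass 192.168.1.10:22;   # forwarded straight to the backend
    }
}
```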
I just uh, wrote a bash script that does it.
It dumps databases as needed, then makes a single tarball of each service (or a couple, depending on what needs doing to ensure a full backup of the data).
Once all the services are backed up, I just push everything to an S3 bucket, but you could use rclone or whatever instead.
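The whole thing is roughly this shape (a sketch, not my actual script - the container name, paths, and bucket are all placeholders):

```bash
#!/usr/bin/env bash
set -euo pipefail

stamp=$(date +%F)
backup_dir=/tmp/backups
mkdir -p "$backup_dir"

# dump databases as needed (example: a postgres container)
docker exec my-postgres pg_dumpall -U postgres > /srv/my-service/dump.sql

# one tarball per service: the data dir plus the fresh dump
tar -czf "$backup_dir/my-service-$stamp.tar.gz" -C /srv my-service

# push it to the bucket (rclone works here too)
aws s3 cp "$backup_dir/my-service-$stamp.tar.gz" "s3://my-backup-bucket/$stamp/"
```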
It’s not some fancy cool toy like the dozens of other backup options the kids these days love, but I’m a fan of simple, and a couple of tarballs in an S3 bucket is about as simple as it gets: restoring doesn’t require any tools or configuration or anything, just snag the tarballs you need, unarchive them, done.
I also use a couple of tools for monitoring progress, plus a separate script that can do a full restore to make sure shit actually works, but that’s mostly just the tarball-and-upload process run in reverse.
I’m finding 8 years to be pretty realistic for when drives start failing on me, and I did the math when I was buying drives and came to the same conclusion about buying used.
For example, I’m using 16tb drives, and for the Exos ones I’m using, a new drive is like $300 and the used pricing seems to be $180.
If you assume the used drive is 3 years old and the expected lifespan is 8 years (so ~5 years of life left), then the used drive is very slightly cheaper than the new one.
But the ‘very slight’ is literally about a dollar-per-year less ($36/drive/year for used vs $37.50/drive/year for new), which doesn’t really feel like it’s worth dealing with essentially unwarrantied, unknown, possibly abused drives.
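Spelled out, that math is just:

```bash
# cost-per-year (prices from above; used drive assumed to be 3 years old)
echo "scale=2; 300 / 8" | bc        # new:  37.50 per drive-year
echo "scale=2; 180 / (8 - 3)" | bc  # used: 36.00 per drive-year
```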
You could of course get very lucky and get more than 8 years out of the used drive, or the new one could fail early, but statistically those outcomes are more or less equally likely for either drive, so I didn’t bother factoring them in.
And, frankly, at 8 years it’s time to yank the drives and replace them anyways because you’re so far down the bathtub curve it’s more like a slip n’ slide of death at that point.
I’m going to get downvoted to hell for this but uh, I usually tell clients Squarespace is what they want these days.
Self-hosting something like Wordpress or Ghost or Drupal or Joomla or whatever CMS you care to name costs time: you have to patch it, back it up, and do a lot of babysitting to keep it up, secure, and running. It’s very much not set-and-forget; really, nothing self-hosted is.
I’m very firmly of the opinion that small-business people should be focused on their business, not their email or website or whatever, because any time spent fighting your tech stack is time you could have spent actually making money. It’s all a cost; it just depends on whether you value $20 a month or your time more.
If someone came to me asking to set this stuff up for their business, I’d absolutely tell them to use gSuite for email, file sharing, documents, and such, and Squarespace for the website, and then not worry about shit, because they’re both reliable and do what they say on the tin.
They state the code will be released after the first orders ship, which makes a certain kind of sense given this is suddenly a competitive space.
Though, I 10000% agree that there’s no reason to take a leap of faith when you can just wait like, uh, a month, and see what they do after release. It’s not like they won’t still be selling these or something.
Right, but you’re pulling way more power than the homeserver I’m running: at 10-15w it’s doing frigate with openvino-based identification (on the igpu) for 4 cameras, usually 2 jellyfin streams at any given time, 4 VMs, home assistant, and ~80 other containers, plus a couple of on-host services for NAS duties (smb, nfs, ftp, afp, nginx, etc.)
I was just surprised that a Ryzen U-series chip would be worse re. power usage.
You know, I think I did the thing I always do and forgot how bad the idle power draw on Ryzen CPUs is due to how they’re architected.
Like, my home server is a 10850k, which is a CPU known for using 200+w… except that, of course, at idle/normal background loads it’s sitting at more like 8-15w. I did some tweaking to make it respect its TDP and also told turbo boost to, uh, not, but still: it’s shockingly efficient after fiddling.
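The tweaks were along these lines (a sketch using the standard intel_rapl/intel_pstate sysfs knobs; the wattage here is illustrative, not my exact setting):

```bash
# cap the package power limit (value is in microwatts; 125 W shown)
echo 125000000 | sudo tee /sys/class/powercap/intel-rapl/intel-rapl:0/constraint_0_power_limit_uw

# and tell turbo boost to, uh, not (intel_pstate driver)
echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo
```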
I wouldn’t have expected a 5500u to sit at 30w under normal loads, but I suppose that depends on the load?
Ryzen 7, so definitely not a low-power chip.
It’s a laptop chip, and a 15w one at that, so I wouldn’t exactly say that’s high-power.
I don’t really think it’s necessarily a deal breaker, but it’s caused a lot of people a lot of nagging little issues, so it might be worth making sure you’re not going to run into any of them.
I’m super stoked at the appearance of the NAS-appliance form factor with hardware whose performance isn’t rotten-potato level.
Next rebuild I do is certainly going to be one of these things, though that’s probably a billion years away since my current nas is hilariously overpowered.
Looks like it’s an i226-v NIC, which, uh, has a reputation for being kinda shit.
It’s not a universal problem, but it’s probably something to keep in mind.
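If you want to check a box you already have, something like this will confirm which NIC you’ve got and surface the reset/link-flap messages the i226-v is known for (igc is its driver; adjust the grep to taste):

```bash
# what NIC is this, actually? (-nn includes vendor:device IDs)
lspci -nn | grep -i ethernet

# resets and link flaps show up in the kernel log under the igc driver
sudo dmesg | grep -i igc
```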
Mine’s running just fine (along with about a dozen other things) on the A1/ARM instance you can get for free.
I wouldn’t say performance is stunningly good - the Ampere cores aren’t especially fast single-threaded, and postgres under really low loads isn’t exactly spreading work across them - but it does what it’s supposed to.
So don’t take this as rude, but if none of you have experience running email for a business, you’re probably better off contracting that part out.
It’s a lot of work to get working and keep working, and it’s prone to exploding for no particular reason, so if this is a business-critical component, it’s worth the $20 a month to get it hosted somewhere that makes getting your email into people’s inboxes someone else’s problem.
Same for the business website: if it being down is going to cost money, a simple static page like that is hostable for literally free with cloudflare or netlify or a couple of other providers, and that’s probably what I’d do. (And, frankly, it’s what I do with a lot of the stuff I host.)
As for storing and accessing documents remotely, if you pay for gsuite or office365, that’s included in the price, so that might be the best way to go.
I know this is literally not what you asked, but…
If you have a credit card and can pass their validation, Oracle offers a shockingly good set of free cloud options.
A 4-core, 24gb-RAM ARM instance, two potato epyc instances, 200gb of disk space, 10tb of transfer, and various other little bits and pieces, for the grand total of $0.
Some people have had their accounts closed for “no reason”, but I’m closing in on 2 years of free shit with no problems, so ymmv.
(I strongly suspect “no reason” has a reason, and that a huge number of these people were running VPNs, so I’d wager they either did something stupid/illegal or someone they gave access to did.)
OPUS is such a delightful format
Agreed. My audiobook library was transcoded from various formats to 32kbit OPUS and the books still sound about the same.
Shocking how decent it is with spoken voice at stupidly low bitrates.
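If anyone wants to try it, the transcode is basically a one-liner with ffmpeg + libopus (filenames are placeholders, and these aren’t necessarily my exact flags):

```bash
# any input container/codec in, 32 kbit/s Opus out
ffmpeg -i input.m4b -c:a libopus -b:a 32k output.opus
```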
Because I stuck a 1TB sd card in my phone and don’t have to deal with transcoding or, well, anything beyond copying new files over and listening to things.
I’ve developed quite the liking for stupidly simple solutions, and ‘copy the files to a sd card’ is about as simple as it gets.
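‘Copy the files’ really is one line, something like this (paths are placeholders; --delete makes the card mirror the library):

```bash
rsync -av --delete ~/music/ /mnt/sdcard/music/
```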
I’m going to go another route here: do you need streaming?
Like, I’ve simply gone with a giant pile of FLACs that I put on an SD card for my phone and play off the NAS when I’m at home, and I don’t currently use any fancy-pants streaming stuff.
So like, depending on how you’re using your music library, you might not even need to go down the giant self-hosting rabbithole for this.
I hate to be that guy, but uh, what do the logs say?
The container logs would probably be most useful since you should (probably) be able to tell if they’re having issues starting and/or simply not attempting to launch at all.
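Assuming docker, the quick checks would be something like this (container/service names are placeholders):

```bash
# are the containers up, restarting, or exited?
docker ps -a

# last chunk of a container's logs (compose equivalent below)
docker logs --tail=100 mycontainer
docker compose logs --tail=100 myservice
```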
Yeah, maybe I could have been clearer.
I was very vividly remembering a VERY SMART client I had a while ago who had like 600 rules blocking all manner of ports and protocols and IPs, and was wondering why everything performed like dogshit.
Sure, evaluation stops at the first match, but if you have enough rules, every packet that matches late (or never) churns through an awful lot of CPU getting there.
OP may not have been intending to do something quite that uh, special, but people do funky things.
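For what it’s worth, the usual fix for a mountain of per-IP rules is one set plus one rule, so the kernel does a hash lookup instead of walking the whole chain; a rough iptables + ipset sketch (set name and addresses are made up):

```bash
sudo ipset create blocklist hash:ip
sudo ipset add blocklist 203.0.113.5
sudo ipset add blocklist 198.51.100.7
sudo iptables -I INPUT -m set --match-set blocklist src -j DROP
```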
Yeah, it was NAS -> DAC -> switch -> endpoints, and for whatever reason, for some use cases, it would just randomly hiccup and break shit.
I could never figure out what the problem was; as far as I could tell, nothing in the network path stopped working or flapped or whatever (unless it happened so fast it didn’t trigger any of the monitoring stuff), yet somehow it still broke NFS, and only NFS.
After a bit, I figured that since everything else seemed fine and the data was already being exported via like 6 other methods, meh, I’d just use something else.
Also if you’ve never seen it, lazydocker might be something up your alley.
It’s a TUI, but it provides easy access to your docker containers: logs, updating/restarting/stopping them, and so on.
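If you’ve got Go handy, installing it is (per its README, if I’m remembering right) just:

```bash
# lazydocker is a single Go binary; this drops it in $GOPATH/bin (usually ~/go/bin)
go install github.com/jesseduffield/lazydocker@latest
lazydocker   # then run it on a box that can reach the docker socket
```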