Yup, this - batteries are consumables. They have a service life of ~2-5 years depending on load. If the manual doesn’t tell you how to replace them then it’s basically ewaste already
Depends on what you need:
I learnt a ton about Linux by fucking up my boot config and being too stubborn to just nuke and pave
Keycloak to provide OIDC, although in hindsight I should have gone with ~~Authelia~~ Authentik
Or, alternatively: comms management is important, and formally declaring an incident is an important part of outage response. Going from “hey Bob, something isn’t looking right, can you check when you get a sec” to “OK, shit’s broken, everyone put down what you’re working on and help with this. Jim is in charge of coordinating the technical people so we don’t make things worse, and should feed updates to Mike, who is going to handle comms to non-technical internal people and to externals” takes management input
There are very few things more obnoxious than an asshole with unsolicited parenting advice
https://www.servethehome.com/everything-homelab-node-goes-1u-rackmount-qotom-intel-review/ would probably be a better bet for a router
I moved just about everything to Route53 for registration - I run my own DNS so I don’t need to pay for that, and it’s ~40% cheaper than Gandi for better service.
Now I just need to move my .nz domain (R53 supports .{co,net,org}.nz, but not .nz itself?) and the 2 .xyz domains that are “premium” for some reason, so R53 won’t touch them
opens task manager
sees a system uptime of 4 years
I’ll lose my tabs!
For anything related to my backup scheme, it’s printed out as hard copy and put in an envelope in a fire safe in my house. I can tell you from experience there is nothing more stressful than “oh fuck, I need my backups, but the key to unlock the backups is in the backups fuck fuck fuck”.
And for future reference, anyone thinking about breaking into my house to get access to my backups just DM me, I’m sure we can come to an arrangement that’s less hassle for both of us
So I pull out my keyboard
And I pull out my Glock
And I dismount your girl
And I mount slash proc
Cos I’ve got your PID
And the bottom line
Is you best not front
Or it’s kill dash nine
I was in the same place as you a few years ago - I liked swarm, and was a bit intimidated by kubernetes - so I’d encourage you to take a stab at kubernetes. Everything you like about swarm kubernetes does better, and tools like k3s make it super simple to get set up. There _is_ a learning curve, but I’d say it’s worth it. Swarm is more or less a dead-end tech at this point, and there are a lot more resources about kubernetes out there.
They are, but I think the question was more “does the increased speed of an SSD make a practical difference in user experience for immich specifically”
I suspect the biggest difference would come from running the Postgres DB on an SSD, where fast random access is going to make queries significantly faster (unless you have enough RAM that Postgres can keep the entire DB in memory, in which case it makes less of a difference).
Putting the actual image storage on SSD might improve latency slightly, but your hard drive is probably already faster than your internet connection so unless you’ve got lots of concurrent users or other things accessing the hard drive a bunch it’ll probably be fast enough.
These are all reckons with no data to back them up, so maybe do some testing
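If you want to put rough numbers on the random-vs-sequential point, something like this is the kind of quick-and-dirty test I mean: time 4 KiB reads (roughly a Postgres page) against a scratch file, in order and then shuffled. All names and sizes here are made up for illustration, and on a freshly written file the page cache will flatter both numbers, so treat it as a sketch rather than a real benchmark.

```python
import os
import random
import time

# Scratch file parameters -- purely illustrative choices
PATH = "scratch.bin"
SIZE = 16 * 1024 * 1024   # 16 MiB test file
BLOCK = 4096              # 4 KiB reads, roughly one Postgres page

with open(PATH, "wb") as f:
    f.write(os.urandom(SIZE))

def timed_reads(offsets):
    """Read BLOCK bytes at each offset, return elapsed seconds."""
    with open(PATH, "rb") as f:
        start = time.perf_counter()
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
        return time.perf_counter() - start

n = SIZE // BLOCK
sequential = [i * BLOCK for i in range(n)]
shuffled = sequential[:]
random.shuffle(shuffled)

print(f"sequential: {timed_reads(sequential):.4f}s")
print(f"random:     {timed_reads(shuffled):.4f}s")
os.remove(PATH)
```

On spinning rust (and with the cache dropped) the gap between the two numbers should be dramatic; on an SSD they’ll be much closer, which is the whole argument for putting the DB there.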
I’ve not heard any out-and-out horror stories, but I’ve got no first hand experience.
I’m planning on picking up 3x manufacturer recertified 18TB drives from SPD when money allows, but for now I’m running 6x ancient (minimum 4 years old) 3TB WD Reds in RAID 6. I keep a close eye on SMART stats, and can pick up a replacement within a day if something starts to look iffy. My plan is to treat the 18TBs the same; hard drives are consumables, they wear out over time, and you have to be ready to replace them when they do
Pretty much - I try and time it so the dumps happen ~an hour before restic runs, but it’s not super critical
`pg_dumpall` on a schedule, then restic to back up the dumps. I’m running Zalando Postgres in kubernetes, so scheduled tasks and inter-container networking are a bit simpler, but you should be able to run a sidecar container in your compose file
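The sidecar idea might look something like this in compose - image tags, the schedule, credentials, and volume names are all placeholders, so adjust to taste (restic would then back up the `dumps` volume):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder
    volumes:
      - pgdata:/var/lib/postgresql/data

  # Sidecar: dump everything once a day; restic backs up /dumps separately.
  pgdump:
    image: postgres:16
    environment:
      PGPASSWORD: example          # placeholder
    entrypoint: >
      sh -c 'while true; do
        pg_dumpall -h db -U postgres > /dumps/all-$$(date +%F).sql;
        sleep 86400;
      done'
    volumes:
      - dumps:/dumps

volumes:
  pgdata:
  dumps:
```

The `$$` is compose’s escape for a literal `$` so the `date` substitution happens in the container, not at compose time.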
If you figure it out, I know several companies that would be more than willing to drop 7 figures a year to license the tech from you
Yeah, they are mostly designed for classification and inference tasks: given a piece of input data, decide which of these categories it belongs to - the sort of thing you want to do in near real time, where it isn’t really practical to ship it off to a data centre somewhere for processing.
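For a sense of the shape of that workload, here’s a toy sketch: one small matrix multiply plus a softmax picks which of three categories an input belongs to. The weights are made up for illustration; a real model would be trained, quantised, and compiled for the NPU, but the compute pattern (lots of multiply-accumulate, one answer out) is the same.

```python
import math
import random

random.seed(42)
N_FEATURES, N_CLASSES = 4, 3

# Made-up weights mapping 4 input features to 3 category scores
WEIGHTS = [[random.uniform(-1, 1) for _ in range(N_CLASSES)]
           for _ in range(N_FEATURES)]

def classify(features):
    """Return the index of the most likely category for one input."""
    logits = [sum(f * WEIGHTS[i][c] for i, f in enumerate(features))
              for c in range(N_CLASSES)]
    # Softmax just gives readable "confidence" numbers; argmax decides
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    probs = [e / sum(exps) for e in exps]
    return probs.index(max(probs))

print("category:", classify([0.5, -1.2, 0.3, 0.9]))
```

The NPU’s job is to run millions of those multiply-accumulates per inference fast and at low power, which is why they’re a poor fit for general compute but great for “what is the camera looking at right now”.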
That’s not how Neon works. Your install will upgrade itself once the team have finished rebuilding everything on top of 24.04 - it’s happening, but it takes a bit of time
I’ve used 85GB of the 128GB of my current phone after using it for 2 years and never deleting anything. I suppose if I took a lot more video I might burn through it quicker.