i’d avoid BIOS-based RAID… it doesn’t really offer many benefits over linux-based RAID like mdadm, and mdadm has a LOT of upsides for portability, repairability, diagnostics, etc
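for a rough idea of what that looks like (device names here are placeholders - point it at whatever disks you actually have, and note --create wipes them):

```
# create a RAID1 mirror from two disks (destructive! placeholder device names)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# check array health / rebuild progress
cat /proc/mdstat
sudo mdadm --detail /dev/md0

# the same array can be re-assembled on a completely different machine
sudo mdadm --assemble --scan
```

the portability win comes from the RAID metadata living in superblocks on the member disks themselves, so any linux box with mdadm can pick the array up - no dependence on a particular motherboard or BIOS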
use of the words “sane” and “normal people” suggests that you still don’t understand one of the fundamental tenets of linux: freedom to choose
the implication that people who don’t operate their computer the same as you are insane and abnormal is… well… counter to many of the philosophies that make linux fantastic
i don’t think you’re aware that people can use interfaces differently to you
you say “whose goal” like someone decided one day that this is the only way to do things… i said “the goal” as in the goal of a particular method of scrolling, explaining that there’s logical reasoning behind the UI choice; that it’s not arbitrary
you should pick a hill to die on that’s not something as arbitrary as natural scrolling. there are more important things in life than fighting with people on the internet about scrolling interface preferences
the goal is to use the same action as you would use in a touch screen:
to scroll content, you “grab” a point on the page and push it up to reveal what’s below it
to scroll a scroll bar, you “grab” the bar and move it down because you’re not moving the content; you’re moving the “%age complete” indicator
it only seems illogical if you’ve not been using natural scrolling. it is, in fact, incredibly intuitive if you haven’t built up muscle memory for the opposite
you’re thinking of openbsd; not freebsd
sure, but $1m spread over how many causes? i’d assume they don’t really even use freebsd, considering macos was based on openbsd? so i’d suggest that an employee match is pretty decent
so i just did a quick search and apparently
Starting with Gitea 1.19, Gitea Actions are available as a built-in CI/CD solution.
*edited: also they support being a package repo, including a container registry
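for a concrete feel (file name and job contents here are hypothetical, and you’d need an act_runner registered with a matching label), a workflow is just a GitHub-Actions-style file under .gitea/workflows/:

```yaml
# .gitea/workflows/ci.yaml (hypothetical example)
name: ci
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest   # needs an act_runner registered with this label
    steps:
      - run: echo "hello from gitea actions"
```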
not related to backup solution, but this is a great time to get some home monitoring sorted! put prometheus on it, run prometheus at home too, and have them monitor each other… great way to know why/when things aren’t working in general, but adds another level of confidence that your data are nice and safe
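a minimal sketch of the “monitor each other” bit (hostnames and ports are placeholders) - on each side you scrape the other prometheus:

```yaml
# prometheus.yml on the offsite box (placeholder hostname)
scrape_configs:
  - job_name: 'home-prometheus'
    static_configs:
      - targets: ['home.example.com:9090']

rule_files:
  - 'peer-alerts.yml'
```

and then a rule that fires if the other side has been unreachable for a while:

```yaml
# peer-alerts.yml (hypothetical) - alert if the home prometheus disappears
groups:
  - name: peer
    rules:
      - alert: HomePrometheusDown
        expr: up{job="home-prometheus"} == 0
        for: 5m
```

mirror the same setup in the other direction at home and each end will tell you when the other goes quiet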
so what you ideally want is for people to ONLY be able to access your backend services through caddy, so caddy should be the only one with publicly accessible ports, yes
caddy running in the same docker network as your services can talk to those services on their original ports; they don’t need to even be mapped to the host! in this case, you have 3 containers: caddy, service 1, service 2… caddy is the only one that needs to have ports forwarded, and you can just forward caddy:443 and not worry about the rest! then caddy can talk directly to the services on :80 or :443 (docker containers show up to other docker containers by their container name! so if you run eg: docker run … --name lemmy, then caddy in the same docker network would be able to connect to http://lemmy:80!) there’s a rough compose sketch below showing this
… but if you forward, say, service 1 and 2 on :8443 and :9443 (without a firewall, and even with one it makes me uncomfortable - that’s 1 step away from a subtle security problem), someone would be able to access <yourserver>:8443 directly, right? so they don’t have to go through caddy to get to the backend service… for some services, that can be a big deal in ways that are difficult to understand, so it’s best to just not allow it if possible
an alternative is to make sure your services are firewalled so that nobody from the internet can hit them, but caddy still can… but i like this less, because it’s less explicit what’s happening so it’s easier to forget about
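here’s the sketch mentioned above (domain, image, and service names are placeholders - “lemmy” is just carrying on the earlier example): only caddy publishes ports, and it reaches the backend by container name over the default compose network:

```yaml
# docker-compose.yml (hypothetical sketch)
services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"              # the only ports exposed to the outside world
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile

  lemmy:
    image: your-backend-image  # placeholder
    # no "ports:" section - only reachable from inside the compose network
```

and the matching Caddyfile is just:

```
# Caddyfile (hypothetical domain)
example.com {
    reverse_proxy lemmy:80   # or whatever port the backend listens on internally
}
```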
if you’re only going to be using those services through the proxy, it can also be a useful security upgrade to not forward their ports at all, and run caddy inside docker to connect to them directly!
if you forward the ports (without firewalling them), people can connect to the services directly, which can be a security risk (for example, many services rely on the reverse proxy adding the x-forwarded-for header to show which IP address originally made the request… if users can reach the service directly, they can add this header themselves and make it appear as though they came from anywhere! even 127.0.0.1, which can sometimes bypass things like admin authentication)
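as a quick illustration (host, port, and path are placeholders), spoofing that header is a one-liner if the backend port is reachable directly:

```
# talk to the backend directly and claim to be localhost (placeholder host/port/path)
curl -H "X-Forwarded-For: 127.0.0.1" http://yourserver:8443/admin
```

when the only way in is through the proxy, a properly configured reverse proxy sets that header itself, so the spoofed value never reaches the app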
useful thing to remember about these systems: if you fuck up, there’s a high likelihood that literally nobody at the company can do any work because all their files are inaccessible
that’s like… $10,000/hr in lost man-hours alone, not to mention the reputational hit from not being able to respond to customers accurately, and possibly missed SLAs or other contract obligations
unless your company is all about tech, it’s highly unlikely your IT team has the skills necessary to take on that level of responsibility
kinda the same reason people suggest something like linux mint over slackware, gentoo, arch, etc… mint is easy to install and is preconfigured to be an easy-to-use desktop environment. you can configure any other option to behave like that, but they tend to be a bit more “DIY”, which is great if you know what you’re doing!
dedicated NAS OSes will have good software out of the box that makes it easy to configure and manage various common disk-related setups (RAID, SMB, NFS, etc). you can certainly do all this yourself, but it might not have a pretty, unified user interface, or you might have to deal with software that isn’t compatible with some version of a library in your distro of choice… all resolvable things, but they take time to solve: anywhere from installing a package manually to applying a kernel patch and recompiling the kernel to get something to work
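to give a feel for the “DIY” side (share path, user, and subnet are placeholders), exporting a directory by hand looks roughly like this:

```
# /etc/samba/smb.conf - add a share section (placeholder path/user)
[storage]
   path = /srv/storage
   read only = no
   valid users = alice

# /etc/exports - export the same path over NFS to the LAN (placeholder subnet)
/srv/storage 192.168.1.0/24(rw,sync,no_subtree_check)
```

then something like `sudo smbpasswd -a alice`, restarting the smb/nfs services (names vary by distro), firewall rules, etc… none of it is hard, it’s just a lot of little pieces that a NAS UI bundles up for you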