• 0 Posts
  • 21 Comments
Joined 2 years ago
Cake day: June 25th, 2024

  • btrbk ... && curl https://uptime.my.domain/api/push/... is exactly what I do in a systemd service with a nightly timer. Uptime Kuma sends a Matrix message (via a bot account on matrix.org) if it doesn’t get a success notification within 25h. I have two servers in different locations that do mutual backups and mutual Uptime Kuma monitoring. Should both servers go down at the same time, there’s also a basic, free healthcheck from my dynamic-IPv6 provider https://ipv64.net/, so I also get an email if either of the two Uptime Kuma instances cannot be reached anymore.
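
    A minimal sketch of such a unit pair (the service name, paths, and the push token are placeholders; the `&&` ensures the push only fires if btrbk succeeded, so Uptime Kuma alerts when no push arrives within its 25h window):

    ```
    # /etc/systemd/system/backup.service  (sketch)
    [Unit]
    Description=Nightly btrbk backup with Uptime Kuma push

    [Service]
    Type=oneshot
    ExecStart=/bin/sh -c 'btrbk run && curl -fsS "https://uptime.my.domain/api/push/<token>?status=up"'

    # /etc/systemd/system/backup.timer  (sketch)
    [Unit]
    Description=Run backup.service nightly

    [Timer]
    OnCalendar=daily
    Persistent=true

    [Install]
    WantedBy=timers.target
    ```

    Enable it with `systemctl enable --now backup.timer`.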


  • You need to ask yourself what properties you want in your storage, then you can judge which solution fits. For me it is:

    • effortless rollback (e.g. in case a service updates, runs a db migration, and fails)
    • effortless backups that preserve database integrity, without slow/cumbersome/downtime-inducing crutches like SQL dumps
    • a scheme that works the same way for every service I host, no tailored solutions for individual services/containers
    • low maintenance

    The amount of data I’m handling fits on large hard drives (so I don’t need pools), but I don’t want to waste storage space. And my homeserver is not my learn-and-break-stuff environment anymore; it just needs to work.

    I went with btrfs RAID 1, with every service in its own subvolume. The containers are precisely referenced by their digest hashes, which get snapshotted together with all persistent data, so every snapshot holds exactly the amount of data required to do a seamless rollback. Snapper maintains a timeline of snapshots for every service. Updating is semi-automated: snapshot -> update digest hashes from container tags -> pull new images -> restart service. Nightly offsite backups happen with btrbk, which mirrors snapshots incrementally onto another offsite server with btrfs.
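
    That semi-automated update flow could look roughly like this (the service name, subvolume layout, snapper config, and image name are all hypothetical; `docker manifest inspect` output varies for multi-arch manifest lists):

    ```
    #!/bin/sh
    # Sketch: snapshot -> resolve tag to digest -> pull -> restart
    set -eu
    SERVICE=myservice
    SUBVOL=/srv/$SERVICE   # btrfs subvolume holding compose.yml + data

    # 1. Snapshot current state (pinned digest + persistent data together)
    snapper -c "$SERVICE" create --description "pre-update"

    # 2. Resolve the floating tag to a digest and pin it in an env file
    DIGEST=$(docker manifest inspect --verbose myimage:latest \
             | jq -r '.Descriptor.digest')
    echo "IMAGE=myimage@$DIGEST" > "$SUBVOL/image.env"

    # 3. Pull and restart with the pinned digest
    docker compose --project-directory "$SUBVOL" pull
    docker compose --project-directory "$SUBVOL" up -d
    ```

    Rolling back then means restoring the snapshot (data plus the pinned digest) and running `docker compose up -d` again.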







  • Go on and keep using your distro another few years, and you’ll recognize the patterns of what keeps breaking. Then try some others for a few years, and you’ll find that you can at most pick between smaller issues on a regular basis on rolling distros, or larger batches of issues on release-based ones.

    At some point you’ll find that every user creating a custom mix of packages that are all interdependent on one another is quite the mess: the number of package combinations times the number of configuration option combinations is so large that you can guarantee some of them will have issues. On top of that you have package managers rummaging around in the system while it is in use, and with a mix of old code still loaded in RAM and new code on disk, behaviour during these transients is basically undefined.

    Ultimately you’ll grow tired of this scheme, and then running a byte-for-byte copy of something that has been tested, with atomic updates, is quite attractive. A stronger focus on containerized applications not only enables immutable distros for broad adoption in the first place, but also cuts down the combinatorial complexity of the OS. And lastly, to be honest, after so many years of the same kinds of issues over and over again, the advent of immutable/atomic distros plus containerized desktop apps brought a couple of new challenges that are more interesting for the time being…



  • skilltheamps@feddit.org to Linux@lemmy.ml · Flathub has passed 3 billion downloads · 10 months ago

    Take a look from this perspective: with distro packages, a separate person (the package maintainer) has to build a piece of software against the versions of dependencies the distro offers, which are not the ones the developer of the software uses and tests against. Then you have users that encounter bugs with this build of the software, and the developer receiving bug reports against all kinds of dependency matrices, whose combinatorial complexity is overwhelming. With the different paces of distros in terms of package versions this is inevitable. On top of that you have overworked package maintainers, which leads to sparingly updated or even orphaned distro packages.

    This is not a great experience for any party in the linux ecosystem.

    The alternative is giving packages the ability not to share dependency versions, which can cost a bit of disk space. Given the low price of storage, I think it becomes quite clear why flatpaks are so popular. Also, in the end, users do not shape the linux landscape like they would with commercial products, as distros do not rely on sales to users. Developers and maintainers shape the landscape, so what floats their boat is largely what happens.

    For linux as a whole, flatpak is one of the greatest things that ever happened. For the first time, one can treat it as an actual platform, and that makes it a strong ecosystem.


  • skilltheamps@feddit.org to Linux@lemmy.ml · Windows doesn't "just work" · 1 year ago

    Nothing is bug-free, but that doesn’t mean everything is sort of the same, just a different flavor.

    The last couple of days I dealt with Windows, which is out of the ordinary for me. I had to build a little thing and chose PowerShell, which is quirky but OK at a glance. Now we are in 2025 and PowerShell is a modern thing, and kid you not: you install a thing using Install-Module, then you uninstall it using Uninstall-Module, and what happens? The thing is only partially gone and some broken remains stay. And then another curiosity comes up where, after long rummaging, it turns out that one user (Admin) simply cannot see another user’s mounted share - has Microsoft ever heard of the concept of “permission denied”?

    That’s not a differently flavored bag of bugs; that is as if decades of computing and software engineering hadn’t taken place.


  • skilltheamps@feddit.org to Linux@lemmy.ml · Can I ignore flatpak indefinitely? · 1 year ago

    Only if the application source code matches the APIs of the library versions on your system. Otherwise you also need to port the application to your available library versions. Using different dependency versions might also surface bugs that you have to sort out yourself.

    I only want to point this out because it often seems that the people who complain about flatpak do not grasp what maintaining a package entails, and your suggestion effectively puts you in the position of being a package maintainer for your specific distro. (But the upshot is that with open source software you are always free to do this, and also to share it with other people through (community) repositories.)


  • This is not practical for a home setup. Not because the extra hardware would be expensive, but because as soon as you have multiple systems doing the same thing, their state diverges. For pretty much anything that is popular for selfhosting, you cannot merge them again or migrate users between them without losing anything. Distributed databases alone are a huge PITA, and maintaining such redundant setups would be a million times more effort than just making sure that you can easily and quickly roll back failed updates atomically.






  • Nobody is forced to port kernel stuff to Rust. Also, the Rust compiler takes a lot of burden off maintainers through the safety it enforces.

    The whole conflict is not a technical one, it is entirely human. Some long-term kernel developers don’t like people turning up and replacing the code they wrote. Instead of being proud that the concepts they built get elevated into a superior implementation, they throw tantrums and sabotage.


  • Be cautious with the answers when asking things like this. In discussion boards like here many are (rightfully) very excited about selfhosting and eager to share what they learned, but may (“just”) have very few years of experience. There’s a LOT to learn in this space, and it also takes a very long time to find out what is truly foolproof and easily recoverable.

    First off, you want your OS to be disposable. And just as the OS should be decoupled, all the things you run should be decoupled from one another. Don’t build complex structures that take effort to rebuild. Everything you build is state, and you want to minimize the amount of state you need to keep track of. Ideally, the only state you should have is your payload data. That is impossible of course, but you get the idea.

    Immutable distros are indeed the way to go for long-term reliability, and ideally you want immutability by booting images (like CoreOS or Fedora IoT). Distros like MicroOS are not really immutable; they still use the regular package manager. They only make it a little more reliable by encouraging flatpak/docker/etc (and therefore cutting down on packages managed by the package manager) and a slightly more controlled, transactional update procedure. But ultimately, once your system is in some way defective, the package manager will build on top of that defect, so you keep carrying that fault along. In that sense it is not immune to “os drift” (well expressed), it is just that drift happens slower.

    “Proper” immutable distros that work with images are self-healing: once you rebase to another image (could be an update or a completely different distribution, doesn’t matter), you have a fresh system that doesn’t have anything to do with the previous image. Furthermore the new image does not get composed on your computer, it gets put together upstream. You only run the final result, which you know is bit for bit what was tested by the distro maintainers.

    So MicroOS is like receiving a puzzle and a manual for how to put it together, and gluing it into a frame is the “immutability”. Updates are like loosening the glue of specific pieces and gluing in new ones. With CoreOS you receive the glued puzzle and do not have to do anything yourself; updates are like receiving an entire new glued puzzle. This also comes down to the state idea: some mutable system that was set up a long time ago and has drifted a bit carries a ton of state. A truly immutable distro has a very tiny state: it is merely the hash of the image you run, plus your changes in /etc (which should be minimal and well documented!).
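
    On an image-based system that rebase is literally one command. A sketch with rpm-ostree (the registry reference is just an example; pick the image your distro publishes):

    ```
    # Show the currently booted deployment and its image hash
    rpm-ostree status

    # Rebase to a (new or entirely different) image; applied atomically on reboot
    rpm-ostree rebase ostree-unverified-registry:quay.io/fedora/fedora-coreos:stable

    # If the new deployment misbehaves, boot back into the previous one
    rpm-ostree rollback
    ```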

    Also, you want to steer clear of things like Proxmox and generally LXC containers and VMs. None of these are immutable (let alone immune to drift), and you only burden yourself with maintaining more mutable things with tons of state.

    Docker is a good way to run your stuff. Just make sure to put all the persistent data that belongs together in subfolders of a single subvolume, snapshot that, and then back up those snapshots. That way you ensure that you meet the requirements for the data(base)'s ACID properties to hold; your “backups” would otherwise be corrupted, since they would be a wild mosaic from different points in time. To be able to roll back cleanly if an update goes wrong, you should also snapshot the image hash together with the persistent data. This way you preserve the complete state of a docker service before updating. Here you also minimize the state: you only have your payload data, the image hash and your compose.yml.
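
    Concretely, that could look like this (the /srv/paperless subvolume and layout are hypothetical; the read-only snapshot is crash-consistent, which is what databases need to recover cleanly):

    ```
    # One subvolume per service: compose.yml, image.env (pinned digest), data/
    # Freeze a consistent point-in-time copy of all of it at once
    btrfs subvolume snapshot -r /srv/paperless \
        /srv/.snapshots/paperless-$(date +%F)

    # The snapshot now holds compose.yml, the pinned image digest and the
    # database files from the same instant
    btrbk run   # ship snapshots incrementally to the offsite btrfs target
    ```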


  • When you’re maintaining a product that is based on linux, you’re surely qualified to port that thing to your platform yourself.

    Open source developers are already thanklessly giving away their work for free, and for the many things where there’s just a github page, it is a one-man show run in spare time. Don’t demand they give away even more of their time to cater to whatever distro you’re using, just because you are not willing to invest the time to learn how linux works, nor willing to give away a few megabytes for the dependencies they’re developing against.

    All the discussions about things like distrobox and flatpak where linux novices express their dissatisfaction due to increased disk space are laughable. In the linux universe sole users have no power in deciding what goes, they do not pay anything and at worst pollute the bug tracker. Developers are what make up the linux universe, and what appeals to them is what is going to happen. Flatpak is a much more pleasant experience to develop for than a gazillion distros, hence this is where it is going, end of story. As a user either be happy with wherever the linux rollercoaster goes, or - if you want to see change- step up and contribute.