  • The command you’re looking for is btrfs send. See man btrfs-send.

    I know of at least one tool, btrbk, which automates both periodic snapshots and incremental syncs, but here’s an example of the manual process so you can see the basic idea. Run all of this in a root shell or with sudo.

    As initial setup:

    • Create a btrfs filesystem on the sender drive and another on the receiver drive. No need to link them or sync anything yet, although the receiver’s filesystem does need to be large enough to actually accept your syncs.
    • Use btrfs subvolume create /mnt/mybtrfs/stuff on the sender, substituting the actual mount point of your btrfs filesystem and the name you want to use for a subvolume under it.
    • Put all the data you care about inside that subvolume. You can mount the filesystem with a mount option like -o subvol=stuff if you want to treat the subvolume as its own separate mount from its parent.
    • Make a read-only snapshot of that subvolume (btrfs send needs the source snapshot to be read-only). Name it whatever you want, but something simple and consistent is probably best, e.g. mkdir /mnt/mybtrfs/snapshots; btrfs subvolume snapshot -r /mnt/mybtrfs/stuff /mnt/mybtrfs/snapshots/stuff-20250511.
    • If the receiver is a separate computer, make sure it’s booted up and running an SSH server. If you’re sending to another drive on the same system, make sure it’s connected and mounted.
    • Send/copy the entire contents of the snapshot with a command like btrfs send /mnt/mybtrfs/snapshots/stuff-20250511 | btrfs receive /mnt/backup, where /mnt/backup is a mounted btrfs filesystem on the receiver. You can pipe into btrfs receive over SSH if the receiver is a separate system.

    For incremental syncs after that:

    • Make another separate snapshot, and make sure not to delete the previous one: btrfs subvolume snapshot -r /mnt/mybtrfs/stuff /mnt/mybtrfs/snapshots/stuff-20250518.
    • Use another send command, this time with the -p option pointing at the snapshot from the last successful sync so the transfer is incremental: btrfs send -p /mnt/mybtrfs/snapshots/stuff-20250511 /mnt/mybtrfs/snapshots/stuff-20250518 | btrfs receive /mnt/backup.

    If you want to script a process like this, make sure the receiver stores the name of the latest synced snapshot somewhere only after the receive completes successfully, so that you aren’t trying to do incremental syncs based on a parent that didn’t finish syncing.
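    As a rough sketch, a script for that flow could look something like this (the paths, snapshot naming scheme, and state file are my own assumptions, not anything btrfs or btrbk prescribes):

    #!/bin/bash
    # Hypothetical paths; adjust to your own mount points.
    set -euo pipefail
    SRC=/mnt/mybtrfs/stuff
    SNAPDIR=/mnt/mybtrfs/snapshots
    DEST=/mnt/backup
    STATE="$DEST/.last-synced"   # name of the last snapshot that fully synced

    NEW="stuff-$(date +%Y%m%d)"
    btrfs subvolume snapshot -r "$SRC" "$SNAPDIR/$NEW"

    if [ -f "$STATE" ]; then
        # Incremental send, using the last fully synced snapshot as the parent
        btrfs send -p "$SNAPDIR/$(cat "$STATE")" "$SNAPDIR/$NEW" | btrfs receive "$DEST"
    else
        # First run: send the whole snapshot
        btrfs send "$SNAPDIR/$NEW" | btrfs receive "$DEST"
    fi

    # Record the new parent only after the receive finished successfully
    # (set -e plus pipefail abort the script before this line otherwise).
    echo "$NEW" > "$STATE"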



  • “Dynamically compiled” and dynamic linking are very different things, and dynamic linking in turn is completely different from system calls and inter-process communication. I’m no emulation expert, but I’m pretty sure you can’t just swap out a dynamically linked library for a different architecture’s build of it at link time and expect the ABI to somehow work out, unless you only do this for a small set of manually vetted libraries where you can clean up the ABI. Calling into drivers or communicating with other processes that run as the native architecture is generally fine, at least.

    I don’t know how much Asahi makes use of the capability (if at all), but Apple’s M-series processors add special architecture extensions that let x86 emulation perform much better than it can on any other ARM system.

    I wouldn’t deny that you can get a lot of things playable enough, but this is very much not hardware you buy for gaming: a CPU and motherboard combo that costs $1440 (64-core 2.2GHz) or $2350 (128-core 2.6GHz), performs substantially worse in most games than a $300 Ryzen CPU+motherboard combo, and has GPU compatibility quirks to boot will be very disappointing if that’s what you want it for. The same could, to a lesser extent, be said even of x86 workstations that prioritize core count like Xeon/Epyc/Threadripper. For compiling code, running automated tests, and other highly threaded workloads, though, this hardware is quite a treat.


  • With one of these Altra CPUs (Q64-22), I can compile the Linux kernel (aarch64 defconfig with modules, GCC 15.1) in 3m8s with -j64. Really great for compiling, and much lower power draw than any x86 system with a comparable core count: the full system idles at 68W and pulls 130W when all cores are under full load. Pulling out some of my 4 RAM sticks can drive that down by a lot more than you’d expect for just RAM. lm_sensors claims the “CPU Power” is 16W and 56W in those two situations.
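    For reference, that benchmark is roughly the following, run natively on the aarch64 box (kernel tree and toolchain are whatever you have installed):

    make defconfig       # arm64 defconfig when run on an aarch64 host
    time make -j64       # one job per core; modules are built by the default target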

    Should be awful for gaming. It’s possible to run x86 things with emulation, sure, but performance (especially single-thread) suffers a lot. Through qemu I run a few containers where the performance hit really doesn’t matter.

    Ampere has a weird PCIe bug that results in either outright incompatibility or video output filled with strange artifacts/distortion for the vast majority of GPUs; the only known-good cards are a few select Nvidia ones. I don’t happen to have any of those Nvidia cards myself, but this workstation includes one. Other, non-GPU PCIe devices like NICs, NVMe, and SAS storage controllers work great, and there are tons of PCIe lanes.


  • Depends on what you consider self-hosted. Web applications I use over LAN include Home Assistant, NextRSS, Syncthing, cockpit-machines (VM host), and media stuff (Jellyfin, Kavita, etc). Without web UI, I also run servers for NFS, SMB, and Joplin sync. Nothing but a Wireguard VPN is public-facing; I generally only use it for SSH and file transfer but can access anything else through it.

    I’ve had NextCloud running for a year or two but honestly don’t see much point and will probably uninstall it.

    I’ve been planning to someday also try out Immich (photo sync), Radicale (calendar), ntfy.sh, paperless-ngx, ArchiveBox (web archive), Tube Archivist (YouTube archive), and Frigate NVR.


  • The 6-month release cycle makes the most sense to me on desktop. Except when I choose to tinker with it at my own whim, I want my OS to stay out of my way and not feel like something I have to maintain and keep up with, so rolling releases (Arch, Tumbleweed) update too often. Wanting to use modern hardware and the current version of my DE makes a 2-year update cycle (Debian, Rocky) feel too slow.

    That leaves Ubuntu, Fedora, and derivatives of both. I hate Snap, which Ubuntu has been pushing more and more in recent years, and having packages that more closely resemble their upstream projects is nice, so I use Fedora. I also like the way Fedora has rolling kernel updates but a fixed release for most userspace; it’s like the best of both worlds.

    I use Debian stable on my home server. Slower update cycle makes a lot more sense there than on desktop.

    For work and other purposes, I sometimes touch Ubuntu, RHEL, Arch, Fedora Atomic, and others, but I generally only use each when I need to.


  • If the only problem is that you can’t use dynamic linking (or otherwise make relinking possible), you can still legally use LGPL libraries, as long as you license the project that uses them under the GPL or LGPL as well.

    However, those platforms tend to be a problem for the GPL in other ways. The GPL has long been known to conflict with Apple’s App Store and similar services, for example, because it forbids imposing extra limits that restrict user freedom, and those stores have terms of service that do exactly that.


  • If it was a community addition why would it matter? And why would they remove the codecs.

    You don’t have to be a corporation to be held liable for legal issues with hosting codecs. Just need to be big enough for lawyers to see you as an attractive target and in a country where codec patent issues apply. There’s a very good reason why the servers for deb-multimedia (Debian’s multimedia repo), RPM Fusion (Fedora’s multimedia repo), VLC’s site, and others are all hosted in France and do not offer US-based mirrors. France is a safe haven for foss media codecs because its law does not consider software patentable, unlike the US and even most other EU nations.

    Fedora’s main repos are hosted in the US. Even if they weren’t, the ability for any normal user around the world to host and use mirrors is a very important part of an open community-friendly distro, and the existence of patented codecs in that repo would open any mirrors up to liability. Debian has the same exact issue, and both distros settled on the same solution: point users to a separate repo that is hosted in France which contains extra packages for patent-encumbered codecs.


  • I stopped using Arch a long time ago for this same reason. Either Fedora (or derivatives like Nobara) or an atomic/immutable distro (like Bazzite, Silverblue, Kinoite) is probably the way to go.

    I used to feel like Ubuntu was a good option for this, but it no longer is: too often they push undesirable changes that need manual tweaking to fix after release upgrades. Debian Stable is generally good for low-maintenance use but doesn’t keep up as well with newer hardware or with updates to video drivers and Mesa, which makes it suboptimal for typical gaming use. Debian Testing can be prone to breaking things in updates (in my experience, worse than Arch does).

    I saw another comment recommend Rocky/RHEL, but note that their kernel doesn’t support btrfs. Since you mentioned a root snapshot, I expect you probably use it.



  • I’ve been using single-disk btrfs for my rootfs on every system for almost a decade. Great for snapshots while still being an in-tree driver. I also like being able to use subvolumes to treat / and /home (maybe others) similar to separate filesystems without actually being different partitions.
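    A minimal sketch of what that looks like, with hypothetical device and subvolume names (in practice the same subvol= options live in fstab so / and /home both mount from the one partition at boot):

    mount -o subvol=@     /dev/nvme0n1p2 /mnt/newroot
    mount -o subvol=@home /dev/nvme0n1p2 /mnt/newroot/home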

    I had used it for my NAS array too, with btrfs raid1 (on top of LUKS), but migrated that over to ZFS a couple of years ago because I wanted to get more usable storage space for the same money. btrfs raid5 is widely reported to be flawed and seemed stuck in a purgatory of never being fixed, so I moved to raidz1 instead.

    One thing I miss is heterogeneous arrays: with btrfs I can gradually upgrade my storage one disk at a time (without rewriting the filesystem) and it uses all of my space. For example, two 12TB drives, two 8TB drives, and one 4TB drive add up to 44TB, and raid1 cuts that in half to 22TB of effective space. ZFS doesn’t do that. Before I could migrate to ZFS I had to commit to buying a bunch of new drives (5x12TB, not counting the backup array) so that every drive is the same size, and I had to feel confident it would be enough space to last me a long time, since growing it after the fact is a burden.
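    To put numbers on that example: for raid1 (two copies), btrfs can use roughly the smaller of half the total and the total minus the largest drive. A quick sketch:

    drives=(12 12 8 8 4)          # sizes in TB from the example above
    total=0; largest=0
    for d in "${drives[@]}"; do
        total=$((total + d))
        (( d > largest )) && largest=$d
    done
    half=$((total / 2)); rest=$((total - largest))
    echo "total=${total}TB usable=$(( half < rest ? half : rest ))TB"   # total=44TB usable=22TB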



    That’s strange. As far as I can tell from any web searches, every version of Windows still defaults to storing local time in the hardware clock, there are no reports of that changing with an update, and there is no exposed setting to configure this behavior outside of regedit. If you’re curious enough, you can check the current setting in the registry at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation. Windows maintains the current time as UTC if and only if the RealTimeIsUniversal value is present and nonzero.

    I expect it’s more likely that some other issue is making the BIOS display an hour inconsistent with your local timezone: for example, a bug in the BIOS, a timezone offset setting within the BIOS, or a dead clock battery.
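    If you do want to check, this run from a Windows command prompt will show whether the value is set at all; an error saying it can’t be found means Windows is on the default local-time behavior:

    reg query "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal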


  • Unix time is far less universal in computing than you might hope. A few exceptions I’m aware of:

    • Most real-time clock hardware stores datetime as separate binary-coded decimal fields representing months, days, hours, minutes, and seconds as one byte each, and often the year too (resulting in a year 2100 limit).
    • Python’s datetime, WIN32’s SYSTEMTIME, Java’s LocalDateTime, and MySQL’s DATETIME similarly have separate attributes for year, month, day, etc.
    • NTFS stores a 64-bit number representing time elapsed since the year 1601 in 100-nanosecond resolution for things like file creation time.
    • NTP uses an epoch of midnight 1900-01-01 with unsigned seconds elapsed and an unusual base-2 fractional part.
    • GPS uses an epoch of midnight 1980-01-06 with a week number and time within the week as separate values.

    Converting between time formats is a common source of bugs and each one will overflow in different ways. A time value might overflow in the year 2036, 2038, 2070, 2100, 2156, or 9999.
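    As a tiny example of the kind of conversion where those bugs creep in, turning an NTFS-style timestamp into Unix time looks something like this (the input value is made up):

    # 100-nanosecond ticks since 1601-01-01; the two epochs are 11644473600 seconds apart
    filetime=133890000000000000
    unix=$(( filetime / 10000000 - 11644473600 ))
    date -u -d "@${unix}"         # GNU date: show the equivalent UTC calendar time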

    Also, Unix time is often paired with a separate nanoseconds component for increased resolution, as in C’s struct timespec, modern *nix filesystems like ext4/xfs/btrfs/zfs, and so on.


  • as soon as the BIOS loaded and showed the time, it was “wrong” because it was in UTC

    Because you don’t use Windows. Windows by default stores local time, not UTC, in the RTC. This behavior can be overridden with a registry tweak. Some Linux distro installers (at least Ubuntu and Fedora, maybe others) try to detect whether the system has an existing Windows install and mimic this behavior if one exists (equivalent to timedatectl set-local-rtc 1); otherwise they default to storing UTC, which is the saner choice.

    Storing local time on a computer that has more than one bootable OS becomes a particularly noticeable problem in regions that observe DST, because each OS will try to change the RTC by one hour on its first boot after the time change.
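    On the Linux side you can check and change this at any time:

    timedatectl | grep 'RTC in local TZ'   # "yes" means the RTC is being treated as local time
    timedatectl set-local-rtc 0            # go back to storing UTC in the RTC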


  • Yes.

    My home server has dropbear-initramfs installed so that after reboot I can access the LUKS decryption prompt over SSH. The one LUKS partition contains a btrfs filesystem with both rootfs and home as subvolumes. For all the other drives attached to that system, I use ZFS native encryption with a dataset that decrypts with a keyfile from that rootfs and I have backups of an encrypted copy of that keyfile.
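    For anyone wanting to replicate the ZFS side, a sketch of roughly what that looks like (pool/dataset names and the keyfile path are placeholders of mine):

    # Generate a 32-byte raw key on the LUKS-encrypted rootfs and lock it down
    mkdir -p /root/keys && chmod 700 /root/keys
    dd if=/dev/urandom of=/root/keys/tank.key bs=32 count=1
    chmod 400 /root/keys/tank.key

    # Create a natively encrypted dataset that reads its key from that file
    zfs create -o encryption=aes-256-gcm -o keyformat=raw \
        -o keylocation=file:///root/keys/tank.key tank/data

    # After a reboot (and unlocking the rootfs over SSH), load the key and mount
    zfs load-key tank/data && zfs mount tank/data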

    I don’t think there’s a substantial performance impact but I’ve never bothered benchmarking.
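    If you ever want a rough number without setting up a real benchmark, cryptsetup can at least measure in-memory cipher throughput on its own:

    cryptsetup benchmark   # reports PBKDF timings and AES-XTS throughput without touching any disk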


  • Something I’ve noticed that is somewhat related but tangential to your problem: the result I’ve always gotten from using compose files is that container names and volume names get assigned a shared prefix by default. I don’t use docker and instead prefer podman, but I would expect both to behave the same on this front. For example, when I have a file at nextcloud/compose.yml that looks like this:

    volumes:
      nextcloud:
      db:
    
    services:
      db:
        image: docker.io/mariadb:10.6
        ...
      app:
        image: docker.io/nextcloud
        ...
    

    I end up with volumes named nextcloud_nextcloud and nextcloud_db, and containers named nextcloud_db and nextcloud_app, as long as neither of those services overrides this behavior by specifying a container_name. I believe this prefix comes from the file-level name: if there is one, and otherwise from the parent directory’s name.
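    That prefix is the compose “project name”, and it can be set explicitly rather than inherited from the directory; a sketch, assuming the usual -p/--project-name flag:

    podman-compose -p nextcloud up -d    # or: docker compose -p nextcloud up -d
    podman volume ls                     # expect names like nextcloud_db and nextcloud_nextcloud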

    The reasons I adjust my own compose files to differ from the image maintainer’s recommendation include accommodating the differences between podman and docker, avoiding conflicts between published listen ports, choosing which host filesystem paths I want to mount in the container, and my own preferences. The only conflict I’ve had with other containers is the published port: zigbee2mqtt, nextcloud, and freshrss all suggest using port 8080, so I had to change at least two of them in order to run all three.


  • I never had problems with Debian stable, especially on a headless server. But it’s not especially well suited to brand new desktop hardware; even Ubuntu LTS and RHEL focus more on hardware enablement backports than Debian does.

    I’ve had a worse experience with Debian testing breaking my system through updates than with Arch. Add to that the freeze period (2012’s was the worst, lasting 11 months) and testing feels like the worst of both worlds between rolling and standard-release distros.



  • As someone who used to use Arch a decade ago: I still use pacman for devkitpro at least, and I do miss how fast its parallel downloads get, but the tool I use to manage packages is far from the most important difference between distros to me, even if you assume the AUR isn’t needed.


  • The first distro I tried was Ubuntu 7.04, but I didn’t stick with it and went back to XP, until I ended up with a hardware setup that wouldn’t work on Windows XP (a widescreen monitor plus an Intel graphics driver with no widescreen mode options) but worked perfectly on Ubuntu 9.10. I never truly went back to Windows after that.

    I tried a few other distros in 2011, then switched to Arch for a couple of years, Xubuntu for a couple of years, Ubuntu GNOME for 7-8 years, and finally moved to Fedora last year.