• 0 Posts
  • 104 Comments
Joined 1 year ago
Cake day: January 26th, 2025

  • Ok, while most of these don’t have companies behind them with huge revenues, most work on these projects is done by paid developers, with money coming from sponsorships, grants, donations and support deals. (Or in the case of Linux - upstream device drivers are a prerequisite if you want anyone to buy your hardware.)

    Developers getting paid to work on open source is a good thing. These projects may have begun their life as small hobby projects - they aren’t anymore. (And that’s probably good)


  • They most likely run smaller pools and have their redundancy and replication provided by the application layers on top, replicating everything globally. The larger you go in scale, the further up in the stack you can move your redundancy and the less you need to care about resilience at the lower levels of abstraction.

    ZFS is fairly slow on SSDs and BTRFS will probably beat it in a drag race. But ZFS won’t lose your data. Basically, if you want between a handful of TB and a few PB stored with high reliability on a single system, along with ”modest” performance requirements, ZFS is king.

    As for the defaults - BTRFS isn’t licence encumbered like ZFS, so BTRFS can be more easily integrated. Additionally, ZFS performs best when it can use a fairly large hunk of RAM for caching - not ideal for most people. One GB RAM per TB usable disk is the usual recommendation here, but less usually works fine. It also doesn’t use the ”normal” page cache, so the cache doesn’t behave in a manner people are used to.
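
    That rule of thumb is easy to turn into a quick sanity check. This is a purely illustrative sketch - `suggested_arc_gb` is a hypothetical helper, not a ZFS API, and real ARC tuning depends on workload:

```python
# Rule of thumb from above: ~1 GB of RAM per TB of usable disk for the
# ZFS ARC (read cache). Hypothetical helper, purely illustrative; less
# RAM usually still works fine, as noted in the comment.
def suggested_arc_gb(usable_tb, gb_per_tb=1.0):
    return usable_tb * gb_per_tb

print(suggested_arc_gb(48))   # 48 TB usable -> ~48 GB of RAM for caching
```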

    ZFS is a filesystem for when you actually care about your data, not something you use as a boot drive, so something else makes sense as a default. Most ZFS deployments I’ve seen just boot from any old ext4 drive. As I said, BTRFS plays in the same league as Ext4 and XFS - boot drives and small deployments. ZFS meanwhile will happily swallow a few enclosures of SAS-drives into a single filesystem and never lose a bit.

    tl;dr If you want reasonable data resilience and want raid 1 - BTRFS should work fine. You get some checksumming and modern things. As soon as you go above two drives and want to run raid5/6 you really want to use ZFS.


  • Look, there is a reason everyone who actually knows this stuff uses ZFS. A good reason. ZFS is really fucking good and BTRFS has absolutely nothing on it. It’s a toy in comparison. ZFS is the gold standard in this class.

    You have four sane options:

    • mdraid raid5 with BTRFS on top. Raid5 on BTRFS still isn’t stable as far as I know, not even in 2026.
    • Mirror or triple mirror with mdraid. Have the third drive in the pool as more redundancy or outside the pool as a separate unraided filesystem.
    • Same as above, but BTRFS. Raid1 is stable.
    • ZFS RaidZ1 (=raid5)

    (Not sure about bit rot recovery when running BTRFS on mdraid. All variants should at least have bit rot detection.)

    To reiterate, every storage professional I know has a ZFS-pool at home (and probably everywhere else they can have it, including production pools). They group BTRFS with Ext3, if they even know about it. When I built my home server, the distro and hardware were selected around running ZFS. Distros without good support for ZFS were disregarded right away.



  • Fairly significant factor when building really large systems. If we do the math, there end up being some relationships between:

    • disk speed
    • targets for ”resilver” time / risk acceptance
    • disk size
    • failure domain size (how many drives do you have per server)
    • network speed

    Basically, for a given risk acceptance and total system size there is usually a sweet spot for disk sizes.
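
    A back-of-the-envelope sketch of why resilver targets push against disk size (the 150 MB/s rebuild rate is an assumed figure, and this is the sequential best case - real resilvers are often slower):

```python
# Rough resilver estimate: time to rewrite one whole replacement disk
# at a sustained rebuild rate. 150 MB/s is an assumption; random I/O
# and a live workload will stretch this out considerably.
def resilver_hours(disk_tb, rebuild_mb_s=150):
    total_bytes = disk_tb * 1e12
    return total_bytes / (rebuild_mb_s * 1e6) / 3600

for tb in (4, 8, 16):
    print(f"{tb} TB drive: ~{resilver_hours(tb):.0f} h best-case resilver")
```

    Doubling the disk size doubles the best-case rebuild window, which is exactly the resilver-time/risk-acceptance trade-off in the list above.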

    Say you want 16TB of usable space, and you want to be able to lose 2 drives from your array (a fairly common requirement in small systems), then these are some options:

    • 3x16TB triple mirror
    • 4x8TB Raid6/RaidZ2
    • 6x4TB Raid6/RaidZ2

    The more drives you have, the better recovery speed you get and the less usable space you lose to replication. You also get more usable performance with more drives. Additionally, smaller drives are usually cheaper per TB (down to a limit).
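
    The space side of that trade-off can be sketched numerically. `usable_tb` is a hypothetical helper and a simplification - RAIDZ2 loses roughly two drives’ worth of space to parity, ignoring padding and metadata overhead:

```python
# Usable vs. raw capacity for the three layouts above. All tolerate
# two drive failures. Approximation: RAID6/RAIDZ2 capacity is
# (drives - 2) * drive size, ignoring padding/metadata overhead.
def usable_tb(drives, size_tb, parity=2):
    return (drives - parity) * size_tb

layouts = [
    ("3x16TB triple mirror", 3 * 16, 16),               # raw TB, usable TB
    ("4x8TB RAIDZ2",         4 * 8,  usable_tb(4, 8)),
    ("6x4TB RAIDZ2",         6 * 4,  usable_tb(6, 4)),
]

for name, raw, usable in layouts:
    print(f"{name}: {usable} TB usable / {raw} TB raw ({usable / raw:.0%})")
```

    Same 16 TB usable and the same two-drive failure tolerance, but efficiency climbs from 33% to 50% to 67% as the drive count goes up.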

    This means that 140TB drives become interesting if you are building large storage systems (probably at least a few PB) with low performance requirements (archives) - but tape robots already dominate that niche.

    The other interesting use case is huge systems - large numbers of petabytes, up into exabytes. More modern schemes for redundancy and caching mitigate some of the issues described above, but they are usually only relevant when building really large systems.

    tl;dr: arrays of 6-8 drives at 4-12TB are probably the sweet spot for most data hoarders.


  • Oh, I fully agree that the tech behind X is absolute garbage. Still works reasonably well a decade after abandonment.

    I’m not saying we shouldn’t move on, I’m saying the architecture and fundamental design of Wayland is broken and was fundamentally broken from the beginning. The threads online when they announced the project were very indicative of the following decade. We are replacing one big unmaintainable pile of garbage with 15 separate piles of hardware accelerated soon-to-be unmaintainable tech debt.

    Oh, and a modern server doesn’t usually have a graphics card. (Or rather, the VM you want to host users within). I won’t bother doing the pricing calculations, but you are easily looking at 2-5x cost per seat, pricing in GPU hardware, licensing for vGPUs and hypervisors.

    With Xorg I can easily reach a few hundred active users per standard 1U server. If you make that work on Wayland I know some people happy to dump money on you.



  • enumerator4829@sh.itjust.works to linuxmemes@lemmy.world · Preference · 1 month ago

    The fundamental architectural issue with Wayland is expecting everyone to implement a compositor for a half baked, changing, protocol instead of implementing a common platform to develop on. Wayland doesn’t really exist, it’s just a few distinct developer teams playing catch-up, pretending to be compatible with each other.

    Implementing the hard part once and allowing someone to write a window manager in 100 lines of C is what X did right. Plenty of other things that are bad with X, but not that.


  • Tell me you never deployed remote linux desktop in an enterprise environment without telling me you never deployed remote linux desktop in an enterprise environment.

    After these decades of Wayland prosperity, I still can’t get a commercially supported remote desktop solution that works properly for a few hundred users. Why? Because on X, you could hijack the display server itself and feed that into your nice TigerVNC-server, regardless of desktop environment. Nowadays, you need to implement this in each separate compositor to do it correctly (i.e. damage tracking). Also, unlike X, Wayland generally expects a GPU in your remote desktop servers, and have you seen the prices for those lately?



  • The M-series hardware is absofuckinglutely proprietary and locked down, and most likely horrible to repair.

    But holy shit, every other laptop I’ve ever used looks and feels like a cheap toy in comparison. Buggy firmware that can barely sleep, with shitty drivers from the cheapest components they could find. Battery life in the low single digits. The old ThinkPads are kinda up there in perceived ”build quality”, but I haven’t seen any other laptop that’s even close to a modern macbook. Please HP, Dell, Lenovo, Framework or whoever, just give me a functional high quality laptop. I’ll pay.






  • Software compatibility is a problem on X as well, so I’m extrapolating. I don’t expect the situation to get better though. I’ve managed software that caused fucking kernel panics unless it ran on Gnome. The support window for this type of software is extremely narrow and some vendors will tell you to go pound sand unless you run exactly what they want.

    I’m no longer working with either educational or research IT, so at least it’s someone else’s problem.

    As for ThinLinc, their customers have asked about what their plan is for the past decade, but to quote them: ”Fundamentally, Wayland is not compatible with remote desktops in its core design.” (And that was made clear by everyone back in 2008)

    Edit: tangentially related, the only reasonable way to run VNC now against Wayland is to use the tightly coupled VNC-server within the compositor (as you want intel on window placements and redraws and such, encoding the framebuffer is just bad). If you want to build a system on top of that, you need to integrate with every compositor separately, even though they all support ”VNC” in some capacity. The result is that vendors will go for the common denominator, which is running in a VM and grabbing the framebuffer from the hypervisor. The user experience is absolute hot garbage compared to TigerVNC on X.


  • It’s great that most showstoppers are fixed now. Seventeen years later.

    But I’ll bite: Viable software rendered and/or hardware accelerated remote desktop support with load balancing and multiple users per server (headless and GPU-less). So far - maybe possible. But then you need to allow different users to select different desktop environments (due to either user preferences or actual business requirements). All this may be technically possible, but the architecture of Wayland makes this very hard to implement and support in practice. And if you get it going, the hard focus on GPU acceleration yields an extreme cost increase, as you now need to buy expensive Nvidia-GPUs for VDI with even more expensive licenses. Every frame can’t be perfect over a WAN link.

    This is trivial with X, multiple commercially supported solutions exist, see for example ThinLinc. This is deployable in literally ten minutes. Battle tested and works well. I know of multiple institutional users actively selecting X in current greenfield deployments due to this, rolling out to thousands of users in well funded high profile projects.

    As for the KDE showstopper list - that’s exactly my point. I can’t put my showstoppers in a single place, I need to report to KDE, Gnome and wlroots and then track all of them. That’s the huge architectural flaw here. We can barely get commercial vendors to interact with a single project, and the Wayland architecture requires commercial vendors to interact with a shitton of issue trackers and different APIs (apparently also dbus).

    Suddenly you have a CAD suite that only works on KDE and some FEM software that only runs on a particular version of Gnome, with a user that wants both running at the same time. I don’t care about how well KDE works. I care that users can run the software they need; the desktop environment is just a tool to do that. The fragmentation between compositors really fucks this up by coupling software to the compositor. Eventually, this will focus commercial efforts on the biggest commercial desktop environment (i.e. whatever RHEL uses), leaving the rest behind.

    (Fun story, one of my colleagues using Wayland had a postit with ”DO NOT TURN OFF” on his monitor the entire pandemic - his VNC session died if the DisplayPort link went down.)



  • It’s hilarious that all of this was foreseen 17 years ago by basically everyone, and here is a nice list providing just those exact points. I’ve never seen a better structured ”told ya so” in my life.

    The point isn’t that the features are there or not, but how horrendously fragmented the ecosystem is. Implementing anything trying to use that mess of API surface would be insane to attempt for any open source project, even when ignoring that the compositors are still moving targets.

    (Also, holy shit the Gnome people really want everyone to use dbus for everything.)

    Edit: 17 years. Seventeen years. This is what we got. While the list is status quo, it’s telling that it took 17 years to implement most of the features expected of a display server back in the last millennium. Most features, but not all.