Just some Internet guy

He/him/them 🏳️‍🌈

  • 0 Posts
  • 271 Comments
Joined 2 years ago
Cake day: June 25th, 2023

  • The main issue you’ll run into is more niche proprietary software being hard to install, but that’s what containers are for. The main one I see is needing to install some proprietary VPN client, which gets annoying, but since you’ll be running a VM anyway you can do some network trickery. My work’s antivirus only works on Ubuntu and RHEL because of proprietary kernel modules, so it’s got to be one of those kernels.

    Linux is Linux, nothing’s impossible to solve even with Bazzite’s immutability. Worst comes to worst you make your own images and it’s not that hard, you basically just fork it on GitHub and let the CI do its thing.

    But do you have time to fiddle to make it work and take the risk, or do you want to play it safe? How confident are you with Bazzite’s more advanced topics?






  • What do you want the UI for? For configuration it’s usually meh, because it’s the kind of thing you configure with config files, often generated config files even. Stats is where it gets more interesting: usually a third-party option like Grafana is used, along with something like Prometheus to collect the metrics.

    When it comes to easy configuration, newer options go for the zero-configuration angle rather than a nice UI to configure them. Just add some Docker labels and Traefik automagically configures itself, so the UI is just for viewing information.
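
    For example, a minimal sketch of the label-driven approach using the Docker SDK for Python (the hostname, network and router name are placeholders, and it assumes Traefik is already running and watching the Docker socket):

    ```python
    import docker

    client = docker.from_env()

    # Start any web container; Traefik picks up the routing config from the labels alone.
    client.containers.run(
        "traefik/whoami",  # small demo container that just echoes requests
        detach=True,
        network="web",     # a Docker network Traefik is attached to (assumed to exist)
        labels={
            "traefik.enable": "true",
            "traefik.http.routers.whoami.rule": "Host(`whoami.example.com`)",
            "traefik.http.services.whoami.loadbalancer.server.port": "80",
        },
    )
    ```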



  • Few of them are needed for most use cases, especially on a VPS. My server has a couple of IPs, each mapping to a different VM; they can all claim 22/80/443 as you’d expect, but that’s basically just the same as having a bunch of VPSes anyway.

    It’s useful for some other things: for example, I might want to dedicate an IP to a VPN exit that doesn’t expose any services.

    Another use is that sometimes you just want two things to stay entirely separate, even if on a technical level it could work with a reverse proxy. It can eliminate some classes of exploits, like request smuggling.

    One use case I’ve had for a customer is a system that can only do TLSv1.0, which is wildly obsolete and exploitable. So that particular API endpoint was served from a secondary IP; that way I could keep enforcing TLSv1.2+ on the primary IP. It’s possible with some reverse proxy magic in HAProxy, but I could also just add a new server block in the existing NGINX bound to that IP and call it a day.
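
    (In that case the real fix was just the extra NGINX server block bound to the secondary IP. Purely to illustrate the per-IP TLS policy idea, here’s a rough Python sketch; the IPs and cert paths are made up, and modern OpenSSL builds may additionally need their security level lowered before they’ll actually accept TLSv1.0.)

    ```python
    import http.server
    import ssl
    import threading

    PRIMARY_IP = "192.0.2.10"   # modern clients: TLS 1.2+
    LEGACY_IP = "192.0.2.11"    # the one legacy client: TLS 1.0 allowed
    CERT, KEY = "cert.pem", "key.pem"

    def serve(ip, minimum_version):
        # Each listener gets its own TLS policy, keyed purely by the IP it binds to.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.minimum_version = minimum_version
        ctx.load_cert_chain(CERT, KEY)
        httpd = http.server.HTTPServer((ip, 443), http.server.SimpleHTTPRequestHandler)
        httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
        httpd.serve_forever()

    threading.Thread(target=serve, args=(PRIMARY_IP, ssl.TLSVersion.TLSv1_2), daemon=True).start()
    serve(LEGACY_IP, ssl.TLSVersion.TLSv1)
    ```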


  • The performance is a good point. You can do the striped mirror with ZFS too and still get the advantages of ZFS.

    I think you can do all of that through the Proxmox UI, but it shouldn’t be too hard to do on the CLI either. You just make two mirror vdevs and you’re good to go; ZFS will automatically distribute the load across the two mirrors.
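
    If you go the CLI route, a minimal sketch of the striped-mirror layout (the pool name and disk names are placeholders; in practice you’d want /dev/disk/by-id paths):

    ```python
    import subprocess

    disks = ["sda", "sdb", "sdc", "sdd"]  # hypothetical devices

    # Equivalent to: zpool create tank mirror sda sdb mirror sdc sdd
    # Two mirror vdevs in one pool: ZFS spreads writes across both mirrors.
    cmd = ["zpool", "create", "tank",
           "mirror", disks[0], disks[1],
           "mirror", disks[2], disks[3]]
    subprocess.run(cmd, check=True)
    ```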


  • Max-P@lemmy.max-p.me to Selfhosted@lemmy.world · First time software set up help · 1 month ago

    I’d probably do RAID-Z with ZFS rather than RAID10: better space utilization and better error correction (some quick numbers at the end of this comment). You should be able to easily set that up in the Proxmox web UI.

    Everything else sounds good. Don’t worry too much about it, you will find things you wish you did differently regardless, that’s part of the learning experience.
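
    On the space utilization point above, some back-of-the-envelope numbers for four equal disks (ignoring ZFS metadata and slop overhead):

    ```python
    disks, size_tb = 4, 4  # e.g. four 4 TB drives

    raid10 = (disks // 2) * size_tb   # striped mirrors: half the raw capacity
    raidz1 = (disks - 1) * size_tb    # RAID-Z1: one disk's worth of parity
    raidz2 = (disks - 2) * size_tb    # RAID-Z2: two disks' worth of parity

    print(f"RAID10 ~{raid10} TB, RAID-Z1 ~{raidz1} TB, RAID-Z2 ~{raidz2} TB")
    # RAID10 ~8 TB, RAID-Z1 ~12 TB, RAID-Z2 ~8 TB
    ```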



  • “want someone to prove his LLM can be as insightful and accurate as paid one.”

    The full DeepSeek model is available for download, and should generate about the same quality of answers as the official one, with the bonus of less censorship. I pretty trivially got it to talk about Tiananmen Square, and they can’t even ban me for it.

    That said, that’s rarely the point. It’s usually because you can, or as a cost-saving measure; sometimes you plainly just don’t need a good model, sometimes you want privacy, and sometimes you need privacy at the cost of quality.

    If your business is shoving customer reviews into a model, you really don’t need the best model to tell you how angry the customer is (see the sketch at the bottom of this comment).

    Personally I just do it for fun and because I can. Sometimes you just do things for no other reason than because you can.
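
    For the customer-review example above, a minimal sketch of what that could look like against a locally hosted model; this assumes something like Ollama’s HTTP API on localhost, and the model tag is a placeholder:

    ```python
    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"
    MODEL = "deepseek-r1:8b"  # placeholder: whatever local model you pulled

    def anger_score(review: str) -> str:
        # Small local models are plenty for "how angry is this customer" style classification.
        prompt = (
            "Rate how angry this customer review is on a scale of 1 to 5. "
            "Reply with just the number.\n\n" + review
        )
        resp = requests.post(
            OLLAMA_URL,
            json={"model": MODEL, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"].strip()

    print(anger_score("Third time my order showed up late and support never answers."))
    ```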







  • Max-P@lemmy.max-p.me to Linux@lemmy.ml · *Permanently Deleted* · 2 months ago

    In that specific context I was still thinking about how you need to run mysql_upgrade after an update, not the regular post-upgrade scripts. And Arch does keep those relatively simple. As I said, Arch won’t restart your database for you, and it also won’t run mysql_upgrade, because it doesn’t preconfigure a user for itself to do that. It doesn’t initialize /var/lib/mysql for you upon installation either. Arch only does maintenance tasks like rebuilding your font cache, creating system users, and reloading systemd. And if those scripts fail, it just moves on; it’s your job to read the log and fix it. It doesn’t fail the package installation, it just tells you to go figure it out yourself.

    Debian distros will bounce your database and run the upgrade script for you, and if you use unattended upgrades it’ll even randomly bounce it in the middle of the night because it pulled a critical security update that probably doesn’t apply to you anyway. It’ll bail out mid dist-upgrade and leave you completely fucked, because it couldn’t restart a fucking database. It’s infuriating. I’ve even managed to get apt to be incapable of deleting a package (or reinstalling it) because it wanted to run a pre-remove script that I had corrupted in a crash. Apt completely hosed, dpkg completely hosed, it was a pain in the ass.

    With the Arch philosophy I still need to fix my database, but at least the rest of my system gets updated perfectly and I can still use pacman to install the tools I need to fix the damn database. I have all those issues with Debian because apt tries to do way too fucking much for its own good.

    The Arch philosophy works. I can have that automated, if I ask for it and set up a hook for it.


  • Max-P@lemmy.max-p.me to Linux@lemmy.ml · *Permanently Deleted* · 2 months ago

    Pacman just does a lot less work than apt, which keeps things simpler and more straightforward.

    Pacman is as close as it gets to just untar’ing the package to your system. It does have some install scripts but they do the bare minimum needed.

    Comparatively, Debian does a whole lot more under the hood. It’s got a whole configuration management thing that generates config files and such, which is all stuff that can go wrong, especially if you’ve overwritten them. Debian just assumes apt can log into your MySQL database, for example, to update your tables after updating MySQL. If any of it goes wrong, the package is considered to have failed to install and you get stuck in a weird dependency hell. Pacman does nothing and assumes nothing: its only job is to put the files in the right place. If you want it to start, you start it. If you want to run post-upgrade tasks, you’ve got to do them yourself.

    Thus you can yank an Arch system 5 years into the future and if your configs are still valid or default, it just works. It’s technically doable with apt too but just so much more fragile. My Debian updates always fail because NGINX isn’t happy, Apache isn’t happy, MySQL isn’t happy, and that just results in apt getting real unhappy and stuck. And AFAIK there’s no easy way to gaslight it into thinking the package installed fine either.


  • You can’t really easily locate the latest version of a file on append-only media without writing an index in a footer somewhere, and even then, if you’re trying to pull an older version, you’d still need to traverse the whole medium.

    That said, you use ZFS, so you can literally just zfs send it. ZFS already knows everything that needs to be known, so it’ll be a perfect incremental. You’d definitely need to restore the entire dataset to pull anything out of it, reapplying every incremental one by one, and if just one is unreadable everything past it is unrecoverable, but the same goes for tar incrementals. It’ll be as perfect and efficient as possible, as ZFS knows the exact change set it needs to bundle up. The stream is self-contained and unidirectional, which is why you can just zfs send into a file and burn it to a CD.

    Since ZFS can easily tell you the difference between two snapshots, it also wouldn’t be too hard to make a Python script that writes out the full new version of each changed file and catalogs which file and which version is on which disc, for a more random-access pattern (rough sketch at the end of this comment).

    But really, for Blu-rays I think I’d just do it the old-fashioned way: sort the data to fit on a disc, label the disc with what’s on it, and if I update it, make a v2 of it on the next disc.
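
    For the Python-script idea above, a rough sketch of what the cataloging could look like (dataset, snapshot names, and paths are all placeholders):

    ```python
    import csv
    import shutil
    import subprocess
    from pathlib import Path

    DATASET = "tank/data"
    OLD_SNAP, NEW_SNAP = f"{DATASET}@disc-001", f"{DATASET}@disc-002"
    MOUNTPOINT = Path("/tank/data")
    STAGING = Path("/staging/disc-002")   # burn this directory to the next disc
    CATALOG = Path("/staging/catalog.csv")

    # `zfs diff -H` prints tab-separated lines: <change type> <path>
    diff = subprocess.run(
        ["zfs", "diff", "-H", OLD_SNAP, NEW_SNAP],
        check=True, capture_output=True, text=True,
    )

    STAGING.mkdir(parents=True, exist_ok=True)
    with CATALOG.open("a", newline="") as f:
        writer = csv.writer(f)
        for line in diff.stdout.splitlines():
            change, path = line.split("\t", 1)
            src = Path(path)
            # Copy the full new version of added/modified files and note which disc has it.
            if change in ("+", "M") and src.is_file():
                dest = STAGING / src.relative_to(MOUNTPOINT)
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dest)
                writer.writerow([NEW_SNAP, change, str(src), "disc-002"])
    ```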