  • I can give that a whirl if it’s not set up like that already, but the monitor is VERY slow on its own. It basically never wakes up in time for the BIOS bootscreen and any signal interruption sends it on a wild goose chase of signal searching around its inputs that can take ten seconds at a time. It’s not a cheap monitor, either, which I assume is part of the problem, as it wants to be super smart about a bunch of things and has to contend with a bunch of options and alternatives that maybe a simpler setup wouldn’t.

    Still, worth a shot to try to tune GRUB and double-check whether it’s swapping modes unnecessarily between the BIOS image and the menu (the settings I ended up poking at are sketched at the end of this comment). I hadn’t considered it. Like so many Linux features and apps, there’s a bunch of stuff you can configure that I keep not looking into because it’s only surfaced in documentation, if that.

    EDIT: Tried it, didn’t help. The motherboard rebooting gives the monitor just enough time to scan its DisplayPort input, decide it’s been unplugged and shut down, so by the time another monitor picks up the slack it’s too late and the timeout has expired unless you’re mashing the down key to stop it. The changes do make the second monitor come up at its native resolution instead of changing modes, but the mistake happens elsewhere.

    I could just set a longer timeout, but I’d rather have a faster boot when I’m sticking to the default than wait for the whole mess to sort itself out every time. Been mashing BIOS entry keys and bootloader menus since the 90s, what’s a couple decades more.

    Still dumb, though.
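
    For anyone wanting to try the same tweak, these are roughly the /etc/default/grub lines I was poking at. Just a sketch: exact values depend on your monitor and distro, and you have to regenerate the config afterwards.

        # /etc/default/grub -- rough sketch, adjust to taste
        GRUB_TIMEOUT=2                # seconds the menu waits before booting the default
        GRUB_TIMEOUT_STYLE=menu       # or "hidden" to skip the menu unless you hold Shift/Esc
        GRUB_GFXMODE=1920x1080        # pin a fixed video mode instead of auto-detecting one
        GRUB_GFXPAYLOAD_LINUX=keep    # keep that mode through the kernel handoff, avoiding a re-sync
        # then regenerate: sudo update-grub   (or: grub-mkconfig -o /boot/grub/grub.cfg)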


  • I don’t know about Gentoo, but as a serial dual booter I know this pain well.

    I swear about two thirds of the time that going through GRUB adds to every boot is spent waiting for my monitor to figure itself out. Half the time it doesn’t get there in time at all.


  • I suppose it makes more sense the less you want to do and the older your hardware is. Even when repurposing old laptops and the like, I find that even the smallest apps I’d want to run are orders of magnitude more costly than any OS overhead. This was even true that one time I got lazy and started running stuff on an older Windows machine without reinstalling the OS, so I’m guessing anything Linux-side would be fine.


  • After an OS update? I mean, I guess, but most things are going to be in containers anyway, right?

    The last update that messed me up in any way was Python-related, and that would have got me on any distro just as well.

    Once again, I get it at scale, where you have so much maintenance to manage and want to keep it to a minimum, but for home use it seems to me that being on an LTS/stable update channel would have a much bigger impact than being on a lightweight distro.


  • I’m sidetracking a bit, but am I alone in thinking self-hosting hobbyists are way too into “lightweight and not bloated” as a value?

    I mean, I get it if you have a whole data center worth of servers, but if it’s a cobbled-together home server it’s probably fine, right? My current setup idles at 1.5% of its CPU and 25% of its RAM. If I turned everything off those values would be close to zero and effectively trivial next to any one of the apps I’m running in there. Surely any amount of convenience is worth the extra bloat, right?




  • The thing is, in my memory it wasn’t that special, because at the time computers came in a lot more flavors than they do now. There were a ton of semi-recent computers that ran just some variant of BASIC, others some variant of DOS, DOS and Windows were different things and both in use, Apple IIs were a thing, but also Macs…

    I remember the first time I gave it a shot it was a bit of a teenage nerd challenge, because the documentation was so bad and you had to do the raw Arch thing with Debian, setting things up step by step to get to a semblance of an X server, let alone a DE. And then, after spending a couple of nights messing with that, I didn’t think about it much until a few years later, when Ubuntu sort of figured out making things easy.

    By the mid-2000s I remember people my age laughing at older normies for not having heard of Linux already, so it all moved relatively fast. It was maybe less than a decade between it coming into being and it being something you probably don’t use but have heard of, which is faster than I would have guessed if you’d asked me.



  • Man, Linux is one of those things where it’s less old than I think.

    I don’t see myself as an early adopter at all, but I remember trying to get some version of Debian running on the same Pentium PC I had gotten to play stuff like Duke 3D, and I don’t remember at the time thinking “oh, this is some new thing”, so I had assumed the concept existed for decades, rather than being just a handful of years old.






  • Gonna raise the notion that a good, usable piece of software would not require much, if any, awareness on this front, since most users aren’t willing or able to have that awareness in the first place.

    The way this should work is you click on things you want in a package manager and then those are present and available transparently whether you use them or not. That goes for all OSs.

    Hell, even Android’s semi-automatic hibernating of unused apps is a step too close to my face, as far as I’m concerned.


  • Yeah, we’re almost there. If you buy a pre-packaged box with Home Assistant you’re most of the way there. If you look under the hood, most commercial NAS options and even some routers are scraping that territory as well.

    I think the way it needs to work to go mainstream is you buy some box that you plug into your router and it just sets up a handful of what look to you like web services you can access from anywhere. No more steps needed.

    The biggest blockers right now are that everybody in that space is too worried about giving you the appearance of control and customizability to go that hard towards end-user focus… and that for some reason we as a planet are still dragging our feet on easily accessible permanent addresses for average users and still relying on hacks and workarounds.

    The tech is there, though. You could be selling home server alternatives to the cloud, leaning into enshittification annoyance, with the tech we have today. There’s just nobody trying to do an iServe because everybody is chasing that subscription money instead, and those who aren’t are FOSS nerds who want their home server stuff to look weird and custom and hard.


  • Yeah, that’s exactly where it comes from. And it fits just fine for people like you, doing it for a living. It’s just a bit obnoxious when us normies dabbling with what is now fairly approachable hobbyist home networking try to cosplay as that. I mean, come on, Brad, you’re not unwinding after work with more server stuff, you just have a Plex and a Pi-hole you mess around with while avoiding having actual face time with your family.

    And that’s alright, by the way. I think part of why the nomenclature makes me snarky is that I actually think we’re on the cusp of this stuff being very doable by everybody at scale. People are still running small services on dedicated Raspberry Pis and buying proprietary NASs that can do a bunch of one-button self-hosting. If you gave it a good push you could start marketing self-contained home server boxes as a mainstream product. It’s just that the people doing that are more concerned with selling you a bunch of hard drives, and the current batch of midcore users like me are more than happy to go on about their “homelab” and pretend they’re doing a lot more work than they actually are to keep their couple of Docker containers running (a sketch of what that actually amounts to is below).
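
    For the record, the entirety of the “homelab” in question is something like this. A sketch, not anyone’s literal setup: image tags, ports and host paths are placeholders you’d adjust.

        # Pi-hole: DNS on port 53, admin page exposed on host port 8080
        docker run -d --name pihole \
          -p 53:53/tcp -p 53:53/udp -p 8080:80 \
          -v ./pihole/etc:/etc/pihole \
          pihole/pihole:latest

        # Plex: host networking so local discovery works, media mounted read-only
        docker run -d --name plex \
          --network host \
          -v ./plex/config:/config \
          -v /mnt/media:/data:ro \
          plexinc/pms-docker:latest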


  • Yeeeeah, I have less of a problem with that, because… well yeah, people host stuff for you all the time, right? Any time you’re a client the host is someone else. Self-hosting makes some sense for services where you’re both the host and the client.

    Technically you’re not self hosting anything for your family in that case, you’re just… hosting it, but I can live with it.

    I do think this would all go down easier if we had a nice marketable name for it. I don’t know, power-internetting, or “the information superdriveway”. This was all easier in the 90s, I guess is what I’m accidentally saying.


  • This is a me thing and not related to this video specifically, but I absolutely hate that we’ve settled on “homelab” as a term for “I have software on some computer I expose to my home network”.

    It makes sense if you are also a system administrator of an online service and you’re testing stuff before you deploy it, but a home server isn’t a “lab” for anything, it’s the final server you’re using and don’t plan to do anything else with. Your kitchen isn’t a “test kitchen” just because you’re serving food to your family.

    Sorry, pet peeve over. The video is actually ok.


  • And maybe I could get to some more in-depth solution that sorts it out, but that’s me spending time on a problem that a) I shouldn’t have to deal with, and b) I already have a functional workaround for.

    Communal troubleshooting is the nature of the Linux desktop, but it’s also a massive problem. You shouldn’t need communal troubleshooting in the first place. It’s not a stand-in for proper UX, hardware compatibility or reliable implementation. If the goal is for more people to migrate to Linux, the community needs to get over the assumption that troubleshooting is a valid answer to these types of issues.

    Which is not to say the community shouldn’t be helpful, but there’s this tendency to aggressively troubleshoot at people complaining about issues and limitations and then to snark at people actively asking for help troubleshooting for not reading documentation or not providing thorough enough logs and information. I find that obnoxious, admittedly because it’s been decades, so I may be on a hair trigger for it at this point.