Just this guy, you know?

  • 0 Posts
  • 47 Comments
Joined 1 year ago
Cake day: June 11th, 2023



  • zaphod@lemmy.ca to linuxmemes@lemmy.world · Old is stability
    8 months ago

    Yes, I’m aware of the security tradeoffs with testing, which is why I’ve started refraining from mentioning it as an option, because pedants like to pop out of the woodwork and mention this exact issue every damn time.

    Also, testing absolutely gets “security support”; the issue is that security fixes don’t land in testing immediately, so there can be some delay (a quick way to check whether a given fix has migrated is sketched after the quote). As per the FAQ:

    Security for testing benefits from the security efforts of the entire project for unstable. However, there is a minimum two-day migration delay, and sometimes security fixes can be held up by transitions. The Security Team helps to move along those transitions holding back important security uploads, but this is not always possible and delays may occur.
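
    A rough sketch of that check, if you ever want to see whether a fix has crossed over yet (the package name here is just an example):

      # rmadison (from the devscripts package) lists a package's versions
      # across the Debian suites, so you can see whether the fixed version
      # has migrated from unstable into testing yet.
      rmadison openssl

      # apt policy shows which versions your own configured sources offer.
      apt policy openssl

      # The Debian security tracker gives the per-CVE view, e.g.:
      #   https://security-tracker.debian.org/tracker/source-package/openssl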



  • zaphod@lemmy.ca to linuxmemes@lemmy.world · Old is stability
    8 months ago

    For the target users of Debian stable? No.

    Debian stable is for servers or other applications where security and predictability are paramount. For those applications I absolutely do not want a lot of package churn. Quite the opposite.

    Meanwhile Sid provides a rolling release experience that in practice is every bit as stable as any other rolling release distro.

    And if I have something running stable and I really need to pull in the latest of something, I can always mix and match (a rough pinning sketch is below).

    What makes Debian unique is that it offers a spectrum of options for different use cases and then lets me choose.

    If you don’t want that, fine, don’t use Debian. But for a lot of us, we choose Debian because of how it’s managed, not in spite of it.
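
    A minimal sketch of that mix-and-match setup via apt pinning, assuming a stable install with a testing source added (release names and the package are just examples):

      # /etc/apt/sources.list.d/testing.list -- a testing source alongside stable
      deb http://deb.debian.org/debian testing main

      # /etc/apt/preferences.d/99-testing -- keep testing at low priority so it
      # is never pulled in by default, only when explicitly requested
      Package: *
      Pin: release a=testing
      Pin-Priority: 100

      # Pull a single package (plus whatever it needs) from testing on demand:
      apt install -t testing some-package

    Backports (e.g. bookworm-backports) are the more conservative route when the package you want is available there.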

  • That’s a goal, but it’s hardly the only goal.

    My goal is to get a synthesis of search results across multiple engines while eliminating tracking URLs and other garbage. In short, it’s a better UX for me first and foremost, and self-hosting allows me to customize that experience and also own uptime/availability. Privacy (through elimination of cookies and browser fingerprinting) is just a convenient side effect.

    That said, on the topic of privacy, it’s absolutely false to say that by self-hosting you get the same effect as using the engines directly. Intermediating my access to those search engines means things like cookies and fingerprinting cannot be used to link my search history to my browsing activity.

    Furthermore, in my case I host SearX on a VPS that’s independent of my broadband connection, which means even my IP can’t be used to correlate my activity.
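
    As a concrete example of that aggregation, here is a rough sketch of querying a self-hosted SearXNG instance’s JSON API (the hostname is hypothetical, the json format has to be enabled in settings.yml, and the field names reflect current SearXNG output):

      # The instance fans the query out to its enabled engines, merges the
      # results, and strips tracker parameters before returning them.
      curl -s 'https://searx.example.org/search?q=debian+stable&format=json' \
        | jq '.results[:5][] | {title, url, engine}'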

  • zaphod@lemmy.ca to linuxmemes@lemmy.world · My lazy ass, nowadays.
    9 months ago

    Eh, even as a Linux admin, I prefer hand installs I understand over mysterious Docker black boxes that ship god knows what.

    Sure, if I’m trialing something to see if it’s worth my time, I’ll spin up a container. But once it’s time to actually deploy it, I do it by hand.



  • Honestly, the issue here may be a lack of familiarity with how bare repos work? If that’s right, it could be worth experimenting with them if only to learn something new and fun, even if you never plan to use them (a quick sketch is below). If anything it’s a good way to learn about git internals!

    Anyway, apologies for the pissy coda at the end; I’ve deleted it as it was unnecessary. Keep on having fun!
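
    For anyone curious, a minimal sketch of the bare-repo workflow (paths, remote name, and branch name here are just examples):

      # A bare repository has no working tree; it exists purely to be
      # pushed to and cloned from, like a self-hosted "origin".
      git init --bare ~/srv/project.git

      # From an existing working copy, add it as a remote and push.
      cd ~/code/project
      git remote add homeserver ~/srv/project.git    # or user@host:/srv/project.git over SSH
      git push homeserver main

      # Fresh clones treat it like any other remote.
      git clone ~/srv/project.git ~/code/project-clone

    Poking around inside project.git afterwards (HEAD, refs/, objects/) is a nice way to see the internals mentioned above.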