

… I actually like being able to copy a website and middle-clicking to open it. I don’t think it’s a problem, it just needs to be telegraphed to the user better, and toggleable.


… sure. Nothing here is wrong, but there are ways to try and mitigate that. And then it’s kind of an arms race, and a matter of vigilance.


Good as a general recommendation.
I also feel like the risk levels are very different. If it’s something that performs a function but doesn’t save/serve any custom data (e.g. bentopdf), that’s a lot easier to decide to do than something complicated like Jellyfin.
I do have public addresses for Matrix, Overleaf, AppFlowy, and Immich because they would be much less useful otherwise. Haven’t had any problems yet, but I wouldn’t necessarily recommend it to others.
I’d never host anything with “Linux ISOs” on a public address; that seems like it’d be looking for trouble.


You just have to flash coreboot. I have three Chromebooks deployed with family, one with Mint and two with Endeavour. Even touch and audio drivers work for those specific models (Acer Santa and Asus Babytiger).


I seem to remember that Steam depends on the official Nvidia drivers, so that might still be fiddly if you use their platform.


At some point, your SSD will fail. If you’re lucky, that is quite a while away. If you’re unlucky, that’s tomorrow. If your data is truly critical, at least copy it to a second drive, even if you don’t do a proper/full 3-2-1 backup.
Also, if you’re asking whether you can convert a drive from an old file system to a new one in place - replacing the old file system on the same drive without copying the data to a different drive first - no.


If I’m not mistaken, Illustrator is vector-based and Krita is pixel-based. So drawing-wise, Krita is closer to Photoshop than to Illustrator.


cloudflarestatus.com seems to be hosted on AWS. It probably just got hammered because a lot of people suddenly cared about CF’s status.
Hey, that was made at my former uni. And now I’m wondering whether other unis adopted it. It always seemed like a neat solution.


On my desktop, I have about 200 GB free, which is about 10% and feels like the bare minimum; the only reason I haven’t upgraded yet is that there are some large directories I can archive should it become necessary.
On my server, I was recently down to about 500 GB free, also about 10%, which made me add more drives.
So it’s all relative.


Like Fedora Silverblue or openSUSE Aeon/Kalpa?


I’m in Germany, and it works pretty well. They’ve got several datacenters around here; I’ve never had an issue with speed or latency.
I don’t like that they’ve got that evil-megacorp vibe, but what big Internet firm doesn’t?
Well, I need to run two separate tunnels to avoid hairpinning issues, so, some weirdness, I guess. That’s more down to my services, though.


Interesting. As I said, I’ve never tried YunoHost. I usually work with podman and just assign local ports to pods, then route traffic to those ports internally, which seems to work fine.
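A minimal sketch of that pattern, with a made-up pod name and port (the key bit is binding the published port to loopback only, so just local routing can reach it):

```shell
# Create a pod whose container port 80 is published only on the
# loopback interface (pod name "webapp" and host port 8080 are
# hypothetical examples).
podman pod create --name webapp -p 127.0.0.1:8080:80

# Run the actual service inside the pod; containers in a pod
# share its network namespace and port mappings.
podman run -d --pod webapp docker.io/library/nginx:alpine

# A reverse proxy or tunnel process on the same machine then
# forwards traffic to 127.0.0.1:8080; other machines on the
# network can't reach the port directly.
```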
Anyway, I feel like we won’t be solving OP’s issue here. Still, it’s interesting to see some of the problems people with different setups have to deal with.


Yeah, I feel like we’re missing some info here.
I have to admit that I have no experience with Yuno. It always seemed interesting, but not like something that fits into my workflow.
If they’re self-hosting at home (which I’m also doing for some services), I’d presume they’re running their stuff on a single machine, so I’m not sure where their router would come into it. The traffic the Cloudflare tunnel process receives should look the same to the router no matter which port it is ultimately sent to, and when it is sent to an address internal to the machine, it shouldn’t pass through the router again.


I presume they mean pointing their Cloudflare tunnel to direct lemmy.example.com to http://localhost:[port], and I don’t think there are any special rules about that port on Cloudflare’s side.
I use tunnels and ports in about that range for all my sites, and don’t have any problems.
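For illustration, that mapping in cloudflared’s config.yml looks something like this (the tunnel ID and hostname are placeholders, and 8536 is just Lemmy’s default port - any local port works):

```yaml
# ~/.cloudflared/config.yml - example values, adjust to your tunnel
tunnel: <tunnel-id>
credentials-file: /home/user/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: lemmy.example.com
    service: http://localhost:8536
  - service: http_status:404   # catch-all rule required by cloudflared
```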


You probably don’t need me to tell you, but keep good backups. A friend of mine recently had his account nuked without any reason given and without any possibility of recourse.



I actually kinda did that. I sent a preconfigured ThinkCentre to my mum that boots into the Jellyfin media player and connects to my server via Tailscale. She just had to plug it into power, LAN, and HDMI. It’s an immutable, atomic system that looks for updates on boot, applies them on the next reboot, and does a rollback and pings me if an update fails.
I have SSH access, and my brother lives nearby in case everything fails, which makes things easier.


I don’t think it’s necessarily worth it for anyone currently on Linux, but if they provide support and a warranty, it might be helpful for some folks who aren’t that computer savvy, but still sick of Windows.


I guess you could install Cockpit (via terminal, sorry, but it’s pretty straightforward and there are good guides). After that, you could use the Cockpit web interface to deploy docker/podman containers. It’s a bit clunky sometimes, but it does the job purely in the UI.
You can also manage updates, backups, etc. via Cockpit if you install the required modules.
As a base, I’d use any stable Linux distro that’s recommended for server use.
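For reference, the terminal part really is just a couple of commands; this sketch assumes a Debian/Ubuntu-based server (package names differ a bit on other distros):

```shell
# Install Cockpit plus the container-management module
sudo apt install -y cockpit cockpit-podman

# Enable the web UI; it then listens on https://<server-ip>:9090
sudo systemctl enable --now cockpit.socket
```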


Meh, I actively use it. I get why it might be unintuitive to someone newly switching.