• 0 Posts
  • 8 Comments
Joined 1 year ago
Cake day: April 23rd, 2023



  • The biggest thing missing for me is good VR support at the OS level. Even with all the optimizations in Bazzite that bring regular games roughly on par with Windows, latency in VR is awful, and motion smoothing just plain isn’t supported on Linux yet, on any hardware. Those two pain points make the experience much worse than on Windows; I’d be motion sick in minutes if I tried to actually play something. Thankfully, normal gaming works just fine, and I don’t play VR as often as flat games, so I can just boot into Windows when I want to do that.

    The second thing is the poor state of music players. I’m used to the very extensive feature set in MusicBee, and not a single native player ticks all the boxes that MusicBee does. It can be run in Bottles, but not very well, and as a newbie it took me a lot of extra tinkering to get things working even sort of right: file permissions (more on that below), dotnet dependencies, font libraries, etc. I still haven’t quite gotten file permissions working right, and font rendering is pretty bad (custom font selection is broken entirely), but maybe I’ll figure some of that out eventually so I can stop booting into Windows whenever I want to make changes to my library.
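    For the file-permissions part specifically: if Bottles is the flatpak build, the flatpak sandbox is a likely culprit for the file-access problems. This is just a sketch, assuming the Flathub ID com.usebottles.bottles and a library under the standard XDG music directory; adjust the path to wherever the library actually lives:

        # Let the Bottles flatpak read and write the music library.
        # xdg-music expands to ~/Music; use an explicit path otherwise,
        # e.g. --filesystem=/mnt/media/Music
        flatpak override --user --filesystem=xdg-music com.usebottles.bottles

        # Confirm which overrides are active.
        flatpak override --user --show com.usebottles.bottles

    Flatseal does the same thing with a GUI, if fiddling with the CLI isn’t appealing.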


  • Bazzite, from Universal Blue, based on Fedora Atomic Desktops. It’s an immutable-style distro, which means critical OS files and folders are read-only and all system apps (the preinstalled ones) are updated together as a full image rather than piecemeal. Anything not preinstalled can be installed in a distrobox, as a Flatpak, AppImage, or AUR package, or, as a last resort, layered with rpm-ostree (examples below). It’s extremely user-friendly: everything a gamer needs is either installed and preconfigured out of the box or available as a Flatpak. Bazzite’s the first time I had a good enough experience on Linux that I made it my daily driver; now Windows is the secondary OS I only go to when I really need that one thing that only works there.
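    To make those install paths concrete, here’s roughly what each one looks like on the command line (the package names are just illustrative, nothing Bazzite-specific):

        # Flatpak: the preferred route for regular desktop apps
        flatpak install flathub org.videolan.VLC

        # Distrobox: a mutable container distro for CLI tools and dev work;
        # nothing installed inside it touches the host image
        distrobox create --name devbox --image fedora:40
        distrobox enter devbox    # then use dnf inside as usual

        # rpm-ostree layering: last resort, bakes the package into the
        # host image and takes effect after a reboot
        rpm-ostree install htop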



  • The first line of the bot’s readme states:

    This is a simple script that monitors specific Lemmy communities and attempts to #hashtag new posts so that they are discoverable in microblogging services like mastodon.

    It makes the Lemmy post visible to Mastodon users who search for or follow that tag; otherwise, since Lemmy doesn’t support tags and won’t for the foreseeable future, the only way for Mastodon users to find these posts would be to follow the Lemmy community itself (Lemmy communities show up as individual users in Mastodon), and that would require them to know it exists in the first place. The more people see a post, the more likely it is that a discussion will take place (there’s a rough sketch of that whole flow below).

    If you’re using Lemmy directly and don’t want to see the bot’s posts, the bio states you’re free to block or mute it.
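    For anyone curious how little a bot like that actually needs to do, here’s a rough sketch of the flow. It is not the bot’s real code, just an illustration assuming Lemmy’s /api/v3/post/list endpoint (and the usual shape of its response) plus Mastodon’s /api/v1/statuses endpoint, with the instances, token, community, and tag all as placeholders:

        #!/usr/bin/env bash
        # Sketch only: announce the newest posts of a Lemmy community
        # on Mastodon with a hashtag so tag searches and follows pick them up.
        LEMMY="lemmy.example"            # placeholder instance
        COMMUNITY="linux_gaming"         # placeholder community
        MASTODON="mastodon.example"      # placeholder instance
        TOKEN="app-token-here"           # placeholder access token
        TAG="#linux_gaming"

        # Pull the newest posts (title + canonical URL), then re-post each
        # one to Mastodon with the hashtag appended.
        curl -s "https://${LEMMY}/api/v3/post/list?community_name=${COMMUNITY}&sort=New&limit=5" \
          | jq -r '.posts[] | .post.name + " " + .post.ap_id' \
          | while read -r line; do
              curl -s -X POST "https://${MASTODON}/api/v1/statuses" \
                -H "Authorization: Bearer ${TOKEN}" \
                --data-urlencode "status=${line} ${TAG}"
            done

    A real bot would also need to remember which posts it has already announced so it doesn’t repeat itself, but that’s the whole idea in a nutshell.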


  • first training anything requires having content on hand, and that means children were exploited to get it

    Does it actually require that, though? I feel like a model trained on a sufficiently diverse selection of adult humans would be able to render an approximation of CSAM even if no CSAM was actually used to create the model. If not now, then very, very soon.

    I’m not sure what to do about that, but I am sure that instead of something reasonable like what you’ve suggested (focusing on the distribution of such material), privacy will end up in the crosshairs, as usual. Humans have always used tools as extensions of ourselves, and these generative tools will soon become yet another extension of the mind, up there with search engines. I worry that the legislative responses we actually see will stray closer to thought-crime than I’m comfortable with.