

Under notes, where you said my name, did you mean “Hedgedoc?”
Sure, but the license is limited to uses that “help you navigate, experience, and interact with online content as you indicate with your use of Firefox.”
Not sure how ads would help with that.
AI? Sure, if an AI solution did those things. But it wouldn't be them training on your data. This would be them using your data in AI-powered services, whether that be search (especially relevant if Google is mandated to stop paying them to make Google the default); automatic categorization of your web browsing to make Containers more streamlined and effective; or even just a completely opt-in AI assistant chatbot that can access data entered elsewhere in Firefox once you activate it.
Worst case, I suspect whatever they add will be things you can simply turn off in settings. Ideally it would be opt-in, of course, or at least prompted-opt-out and disabled until first use.
And there are plenty of things that aren't ad- or AI-related that this could apply to. Heck, this could be part of an effort to consolidate licenses for other products - VPN, Pocket, email anonymizers, etc. - and to enable deeper integration of those into Firefox.
local docker hub proxy
Do you mean a Docker container registry? If so, here are a couple options:
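One option (a sketch assuming a single Docker host; the container name, port, and volume are placeholders) is to run the official registry image as a pull-through cache:

```
# Run Docker's official registry image as a pull-through cache for Docker Hub.
docker run -d --name registry-mirror \
  -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  -v registry-cache:/var/lib/registry \
  registry:2
```

Then add `"registry-mirrors": ["http://localhost:5000"]` to /etc/docker/daemon.json and restart the daemon so pulls go through the cache. Harbor and Sonatype Nexus can also proxy Docker Hub if you want something more full-featured.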
Oh, 100% agreed - in this instance, it's clear that OBS has a well-maintained package that should be prioritized. But they could keep their repo first and remove OBS (and other known-to-be-well-maintained apps) from it to accomplish that.
They put their repo first on the list.
Right. And are we talking about the list for OBS or of repos in general? I doubt Fedora sets the priority on a package level. And if they don’t, and if there are some other packages in Flathub that are problematic, then it makes sense to prioritize their own repo over them.
That said, it's a different story if those problematic packages come from other repositories, or if there was some alternative to putting their repo first that would have kept unofficial builds from showing up first without also deprioritizing official, verified ones like OBS. I haven't maintained a package on Flathub like the original commenter you replied to, but I don't get the impression that that's the case.
Why did Fedora make their packages take priority? Is it because the priority is otherwise random and if you don’t have a priority set, that leads to the issue they mentioned? Because if so, that sounds like a reasonable action by Fedora and like the real culprit is Flathub.
Clearly they’re cosplaying as a Canonical engineer whose internal explanation and pleas for them to not take this approach fell upon deaf ears /j
If you’re a C developer who doesn’t know Rust, no.
I can’t use signal.
Why? Do you not have a phone number? Is it blocked in your country? Are you legally prohibited from using software with end-to-end encryption?
Giphy has a documented API that you could use. There have been bulk downloaders, but I didn't see any with recent activity. You still might be able to model your own script after one of them, though, like https://github.com/jcpsimmons/giphy-stacks
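For example, here's a rough sketch against the documented search endpoint (API_KEY and the query are placeholders; requires curl and jq):

```
# Fetch GIF URLs from Giphy's search endpoint, then download each one,
# naming files by GIF ID to avoid collisions.
curl -s "https://api.giphy.com/v1/gifs/search?api_key=$API_KEY&q=cats&limit=50" \
  | jq -r '.data[] | [.id, .images.original.url] | @tsv' \
  | while IFS=$'\t' read -r id url; do
      curl -s -o "${id}.gif" "$url"
    done
```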
There were downloaders for Gfycat - gallery-dl supported it at one point - but the site is down now. However, you might be able to find collections that other people downloaded and are now hosting. You could also use the Internet Archive - they have documented tools and APIs.
There’s a Tenor mass downloader that uses the Tenor API and an API key that you provide.
Imgur hosts GIFs and is supported by gallery-dl, so that's an option.
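Usage is a one-liner (the URL is a made-up example):

```
# gallery-dl grabs everything it can find at a supported URL.
gallery-dl "https://imgur.com/user/SomeUser"
```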
Also, read over https://github.com/simon987/awesome-datahoarding - there may be something useful for you there.
In terms of hosting, it would depend on my user base and on whether I want users to be able to upload GIFs, too. If it were just my close friends, then Immich would probably be fine, but if people I didn't know directly were using it, I'd want a more refined solution.
There's Gifable, which is pretty focused but looks like it has a small following. I haven't used it myself to see how suitable it is. If you self-host it (or something else that uses S3), note that you can use MinIO or LocalStack for the S3 storage rather than using AWS directly. I'm using MinIO as part of my stack now, though for a completely different app.
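If you go the MinIO route, it's a single container (hedged example - the ports are MinIO's defaults, but the credentials are placeholders you should change):

```
# MinIO standing in for S3. Change the credentials before exposing this!
docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=admin \
  -e MINIO_ROOT_PASSWORD=change-me \
  -v minio-data:/data \
  quay.io/minio/minio server /data --console-address ":9001"
```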
MediaCMS is another option. Less focused on GIFs but more actively developed, and intended to be used for this sort of purpose.
Do you only experience the 5-10 second buffering issue on mobile? If not, then you might be able to fix it by tuning your Nextcloud instance - upping the memory limit, disabling debug mode, dropping the log level back to warn if you ever changed it, enabling memory caching, etc.
Check out https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html and https://docs.nextcloud.com/server/latest/admin_manual/installation/php_configuration.html#ini-values for docs on the above.
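If it's easier, most of those can be set from the CLI with occ (a hedged sketch assuming a typical setup where the web server user is www-data; your user and paths may differ):

```
# Tune Nextcloud via the occ CLI, run from the Nextcloud root directory.
sudo -u www-data php occ config:system:set debug --value=false --type=boolean
sudo -u www-data php occ config:system:set loglevel --value=2 --type=integer   # 2 = warn
sudo -u www-data php occ config:system:set memcache.local --value='\OC\Memcache\APCu'
# Raising PHP's memory limit happens in php.ini instead, e.g.:
#   memory_limit = 512M
```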
You could've scrolled down to the bottom, clicked on "Links," then clicked on the repo link.
The repo has instructions to install a Snap or build from source. If you build from source, it looks like you should download an archive from the releases page rather than just pulling from master.
Open WebUI publishes a Docker image with a bundled Ollama that you can use, too: ghcr.io/open-webui/open-webui:cuda. More info at https://docs.openwebui.com/getting-started/#installing-open-webui-with-bundled-ollama-support
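For reference, the run command in those docs looks something like this (going from memory, so double-check the flags against the page above):

```
# Open WebUI with GPU support; the volumes persist models and app data.
docker run -d -p 3000:8080 --gpus=all \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:cuda
```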
I made a typo in my original question: I was afraid of taking the services offline, not online.
Gotcha, that makes more sense.
If you try to run the reverse proxy on the same server and port that an existing service is using (e.g., port 80), then you’ll run into issues. You could also run into conflicts with the ports the services themselves use. Likewise if you use the same outbound port from your router. But IME those issues will mostly stop the new services from starting - you’d have to stop the services or restart your machine for the new service to have a chance to grab the ports while they were unused. Otherwise I can’t think of any issues.
I’m afraid that when I install a reverse proxy, it’ll take my other stuff online and cause me various headaches that I’m not really in the headspace for at the moment.
If you don’t configure your other services in the reverse proxy then you have nothing to worry about. I don’t know of any proxy that auto-discovers services and routes to them by default. (Traefik does something like this with Docker services, but they need Docker labels and to be on the same Docker network as Traefik, and you’re the one configuring both of those things.)
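For example, opting a container into Traefik looks something like this (a sketch - the network name and hostname are placeholders, and it assumes Traefik's Docker provider is configured with exposedByDefault=false):

```
# Traefik only picks this container up because of the labels and the shared
# "proxy" network; without them it's invisible to the proxy.
docker run -d --network proxy \
  --label traefik.enable=true \
  --label 'traefik.http.routers.whoami.rule=Host(`whoami.example.com`)' \
  traefik/whoami
```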
Are you running this on your local network? If so, then unless you forward a port to your server on the port your reverse proxy is serving from, it’ll only be accessible from the local network. This means you can either keep it that way (and VPN in to access it) or test it by connecting directly to your server on that port and confirm that it’s working as expected before forwarding the port.
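A quick way to test it from inside the LAN before forwarding anything (the IP, port, and hostname are examples):

```
# Talk to the proxy directly and spoof the Host header so the right
# site config answers, without touching DNS or the router.
curl -H 'Host: service.example.com' http://192.168.1.10:80/
```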
How does power consumption of those x86 PCs compare?
I don’t know that a newer drive cloner will necessarily be faster. Personally, if I’d successfully used the one I already have and wasn’t concerned about it having been damaged (mainly due to heat or moisture) then I would use it instead. If it might be damaged or had given me issues, I’d get a new one.
After replacing all of the drives, you'll need to tell the NAS to use their full capacity. From reading an answer to this post, it looks like you'll need to select "Change RAID Mode," keep RAID 1 selected, keep the same disks, and then on the next screen move the slider to use the drives' full capacities.
upper capacity
There may be an upper limit, but on Amazon there is a 72 TB version that would have to come with at least 18 TB drives. If 18 TB is fine, 20 TB is also probably fine, but I couldn’t find any reports by people saying they’d loaded 20 TB drives into theirs without issue.
procedure
You could also clone them yourself, but you'd want to put the NAS into read-only mode or take it offline first.
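If you do clone them yourself on a Linux box, it can be as simple as dd (a sketch - the device names are placeholders, so verify them with lsblk first, because dd will happily overwrite the wrong disk):

```
# Clone the old drive (sdX) onto the new one (sdY). Placeholders - verify first!
sudo dd if=/dev/sdX of=/dev/sdY bs=64M status=progress
```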
I think cloning drives is generally faster than rebuilding them in RAID, as well as easier on the drives, but my personal experience with RAID is very limited.
Basically, what I’d do is:

1. Pull the first old drive.
2. Clone it to a new drive with the offline cloner.
3. Insert the clone back into the NAS.
4. Pull the second old drive.
5. Clone it.
6. Insert that clone.
7. Repeat the pull/clone/insert process for the third drive.
8. Repeat it for the fourth drive.
In terms of timing… I have a Sabrent offline cloning hub (about $50 on Amazon), and it copies data at 60 MB/s, meaning it’d take about 9 hours per clone. Startech makes a similar device ($96 on Amazon) that allegedly clones at 466 MB/s (28 GB per minute), meaning each clone would take about 2.5 hours… but people report it being just as slow as the Sabrent.
Also, if you bought two offline cloning devices, you could do steps 1-3 and 4-6 simultaneously, and do the same again with steps 7-8.
I’m not sure how long it would take RAID to rebuild a pulled drive, but my understanding is that it’s going to be fastest with RAID 1. And if you don’t want to make the NAS read-only while you clone the drives, it’s probably your only option, anyway.
They’re focused entirely on the shitty practices those other manufacturers engaged in. In that regard, Valve didn’t do much (and that’s a good thing).
More secure according to what?