![](https://lemmy.ml/pictrs/image/e550bc4a-4cad-442b-a375-b82c4b28e476.png)
![](https://lemmy.world/pictrs/image/8286e071-7449-4413-a084-1eb5242e2cf4.png)
Oh, flac fixes for HLS. I wonder if https://github.com/jellyfin/jellyfin/issues/8722 was fixed. I’ll have to try it out today.
Two regions of one provider isn't really a reliable backup. What if the provider goes out of business, gets hacked, closes your account, or has a software bug that affects all of their storage?
I also had a bad experience where I had a test website, under a megabyte, in a storage bucket. It was within the free tier and sat there for a few years. Then one month they sent me a bill (it was small, a handful of cents). I contacted support, pointing out that this use was within the free tier. They said that data had been added to and then removed from the bucket. I hadn't logged into the account and had no active API keys. They wouldn't forgive the charge.
Luckily my credit card had expired so they just locked my account.
With Ansible you need to change the relevant step to use apt remove instead of apt install, and to change the config file step into a step that removes the file.
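Roughly, as a sketch (the package name and config path here are made up), the two changed steps end up looking like this:

```yaml
# Previously these tasks installed the package and wrote the config;
# now they are flipped to remove them instead.
- name: Remove the example package
  ansible.builtin.apt:
    name: example-app              # hypothetical package name
    state: absent

- name: Remove its config file
  ansible.builtin.file:
    path: /etc/example-app/app.conf   # hypothetical path
    state: absent
```

And those removal tasks have to stay in the playbook until they have run against every host, because Ansible only applies the steps you list, it doesn't remember what it created earlier.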
Wait until you have 2 services that use the same resource. Now you need to track which services still depend on it and only remove it once nothing does.
Doing this with Ansible is a nightmare. And 99% of the time you don’t even realize that you have this problem until your configs don’t work for some reason.
It really depends on the quality of the software you are running. An SMTP server, IMAP, Mumble, Photoprism, Jellyfin, BitTorrent, Tor, a Subsonic-compatible server, who even remembers what else? Fine. One small Minecraft world? Boom, you're dead.
“Mouse movement detected, please restart for changes to take effect.”
I think that is the better case. That is just NPM aggregating the metadata. There are lots of packages that print their own ad.
Have you tried installing any packages from NPM recently?
If you haven’t used any configuration management before it would definitely be valuable to learn.
However, I would also recommend trying Nix and NixOS. They provide much better reproducibility. For example, with Ansible-like tools I would always have issues where I create a file, then remove the code that creates the file, but the file still exists or the server is still running. I wrote a post going into more detail about the difference a while ago: https://kevincox.ca/2015/12/13/nixos-managed-system/. However, this is more involved. If you already have a running server it will be a big shift, instead of just slowly starting to manage things via Ansible.
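As a rough sketch of the difference (the service here is just an example): in a NixOS configuration a service exists because its declaration exists, so deleting the declaration actually removes it on the next rebuild instead of leaving it behind.

```nix
# configuration.nix (fragment)
{
  # Delete this line and `nixos-rebuild switch` stops and removes the
  # service, rather than leaving an orphaned daemon or config file around.
  services.nginx.enable = true;
}
```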
But I would definitely consider using something. Having configuration managed and versioned with history is super valuable.
If they can shove ads into the Gmail UI, I'm sure they could have found a place to put them in Google Reader.
I just use snapshots for taking backups. This ensures that I get a consistent state when the backup occurs. It seems to work well for that.
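For example, and this is just one way to do it (assuming a btrfs filesystem and restic as the backup tool, neither of which is required): snapshot first, back up the snapshot, then drop it, so the backup never sees files changing underneath it.

```sh
# Read-only snapshot of the data at a single point in time.
btrfs subvolume snapshot -r /srv /srv/.backup-snap

# Back up from the snapshot instead of the live data.
# (Assumes a restic repository is already configured.)
restic backup /srv/.backup-snap

# Drop the snapshot once the backup is done.
btrfs subvolume delete /srv/.backup-snap
```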
Video serving is a very sequential workload so hard drives will be more than sufficient and you can typically get storage at a lower price.
An SSD may give you slightly faster startup and seeking, but it is unlikely to be noticeable.
If you want to serve multiple resolutions and bitrates you will probably want hardware that can do transcoding. However, basically any graphics card (even integrated) will be able to transcode a video stream in real time at decent quality.
(If you want, you can try to pre-transcode offline, but Jellyfin doesn't support this well.)
Although getting something that supports AV1 hardware decoding could be forward-thinking. For now you are probably fine without it, and if you are ripping DVDs you may consider just keeping the original encoding. But you will most likely start to see more AV1 files in the future, and having a server that can easily transcode AV1 to older formats will keep everything on your network working properly.
Currently `s1` and `t6`. I'm not a fun person.
Yes, if you ask about a tag on a commit that you don't have, git won't know about it. You would need to download that history. You also can't, in general, say "commit A doesn't contain commit B", as you don't know all of the parents.
You are completely right that `--depth=1` will omit some data. That is sort of the point, but it does have some downsides. Filters also omit some data, but often the data will be fetched on demand, which can be useful. (But it will also cause other issues like `blame` taking ridiculous amounts of time.)
Neither option is wrong, they just have different tradeoffs.
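To make the tradeoff concrete (the URL is just a placeholder):

```sh
# Shallow clone: only the latest commit, no history.
git clone --depth=1 https://example.com/repo.git

# Blobless partial clone: full history, but file contents are fetched
# lazily when a command needs them, which is why `git blame` can end up
# making lots of round trips to the server.
git clone --filter=blob:none https://example.com/repo.git
```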
Once you push it once, it is pretty fast.
Competence is expensive. Supply is low and demand is high.
What are you smoking? Shallow clones don’t modify commit hashes.
The only thing that you lose is history, but that usually isn’t a big deal.
`--filter=blob:none` probably also won't help too much here since the problem with `node_modules` is more about millions of individual files rather than large files (although both can be annoying).
Algorithms are like AI but accurate, predictable and usually much faster.