QSV is the highest quality video transcoding hardware acceleration out there. It’s worth using if you have a modern Intel CPU (8th gen or newer).
Jellyfin doesn’t need any particular setup to work directly from LAN because it doesn’t ever try to use a central login provider the way Plex does.
The only reason OP is struggling with it is that they set it up so they can only connect to it via Tailscale.
Right, I just mean if your connection speed is faster than your server can transcode, then the transcode speed will be the bottleneck
It’s limited to the transcode speed, but keep in mind that transcoding, especially down to a lower resolution, usually runs faster than realtime.
FYI Jellyflix also supports that
Well, Electron uses Chrome, so yes
This. Jellyfin has a direct HDHR integration and works as a DVR directly with one.
The person you’re replying to linked their literal reliability stats lmao
We’ll have to agree to disagree there.
I was making 2 separate statements: 1. I agreed with the previous comment, 2. I opined that all arthropods are bugs.
“Bug” is a colloquial term, so I was stating my personal, broader definition
This. Arthropods are bugs.
So first of all, you shouldn’t involve yourself in your friend’s business. Fraud is generally frowned upon.
But secondly, you know that ChatGPT was trained on the entire internet, right? Like, every book. I don’t think “more books” is gonna help.
I hope you take your computer skills and make something of yourself. Try not to get any more involved in this scheme, seriously. You don’t need this crap marring your reputation.
Besides, there are better reasons/ways to fight the system than helping other people avoid learning.
Just that they’re no easier to use to fool an anti-AI system than using ChatGPT, Gemini, Bing, or Claude. Those AI detectors also give false positives on works made by humans. They’re unreliable in the first place.
Basically, they’re “boring text detectors” more than anything else.
I believe commercial LLMs add some kind of watermark when you use them for grammar and general fixes, so I just need a private LLM to make these works undetectable.
That’s not how it works, sorry.
Quantized with more parameters is generally better than floating point with fewer parameters. If you can squeeze a 14B-parameter model down to a 4-bit integer quantization, it’ll still generally outperform a 16-bit floating-point 7B-parameter equivalent.
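Back-of-the-envelope weight memory makes the tradeoff concrete. This is a sketch for weights only; real runtimes add overhead for activations, KV cache, and quantization metadata:

```python
# Rough memory footprint for model weights at a given precision.
def weight_gb(params: float, bits_per_weight: int) -> float:
    """GB needed just for the weights (ignores runtime overhead)."""
    return params * bits_per_weight / 8 / 1e9

q4_14b = weight_gb(14e9, 4)    # 14B params at 4 bits each
fp16_7b = weight_gb(7e9, 16)   # 7B params at 16 bits each

print(f"14B @ int4: {q4_14b:.0f} GB")   # 7 GB
print(f" 7B @ fp16: {fp16_7b:.0f} GB")  # 14 GB
```

So the int4 14B model fits in half the memory of the fp16 7B model while keeping twice the parameters, which is why quantized-but-bigger usually wins.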
The FTC is already on Google’s case.
IMO, if you want the best deals right now on a 12+ TB HDD, you should use serverpartdeals.com instead. I got 2 manufacturer-recertified 14 TB enterprise-grade drives from them, and it was way cheaper than buying any 14 TB external drive.
Force push to the master branch or release branch, for one
They’re still mounted individually, so you can run RAID5 or RAIDZ on them, same as if they were internal. You can potentially be bandwidth-limited since USB 3.0 tops out at 5 Gbps, but realistically only for reads, and overall performance is still fine: they’re all spinning disks anyhow, and 5 Gbps is fast enough for any media server/NAS unless you’ve got a 10-gig LAN/internet connection and feel the compulsive need to saturate it.
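Quick sanity check on that bottleneck claim, using illustrative numbers (the ~200 MB/s per-disk figure and 4-drive array are assumptions, not from the thread):

```python
# USB 3.0 bus vs. aggregate spinning-disk throughput, back of the envelope.
USB3_GBPS = 5.0
usb3_mbs = USB3_GBPS * 1000 / 8 * 0.8  # assume ~80% after protocol overhead

per_disk_mbs = 200                      # assumed sustained HDD read, MB/s
drives = 4
array_read_mbs = per_disk_mbs * drives  # striped reads across all drives

print(f"USB 3.0 usable:       ~{usb3_mbs:.0f} MB/s")   # ~500 MB/s
print(f"4-disk striped reads: ~{array_read_mbs} MB/s")  # ~800 MB/s
```

Striped reads can exceed the bus, so USB 3.0 caps them, but a gigabit LAN (~118 MB/s) is still the tighter limit for a media server anyway.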
If you switch the devices line to
- /dev/dri:/dev/dri
as others have suggested, that should expose the Intel iGPU to your Jellyfin docker container. Presently you’re only exposing the Nvidia GPU.
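For context, a minimal sketch of how that line sits in a docker-compose file (the service name and image here are assumptions, not OP’s actual config):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    devices:
      # Exposes the Intel iGPU's DRI render nodes to the container
      - /dev/dri:/dev/dri
```

You can confirm the render node exists on the host with `ls /dev/dri`; you should see something like `renderD128`.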