• 5 Posts
  • 34 Comments
Joined 1 year ago
Cake day: June 21st, 2023

  • In the US at least, most equipment (unless you get into high-end datacenter stuff) runs on 120V. We also use 240V power, but a 240V connection is actually two 120V legs that are 180 degrees out of phase; the quick sketch at the end of this comment shows the math. The main feed coming into your home is 240V, so your breaker panel splits the circuits evenly between the two phases. Running dual-phase power to a server rack is as simple as running two 120V circuits from the panel.

    My rack only receives a single 120V circuit, but it’s backed up by a dual-conversion UPS and a generator on a transfer switch. That was enough for me. For redundancy, though, dual phases, each with its own UPS, and dual-PSU servers are hard to beat.
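
    Here’s a quick numeric sketch of why two 120V legs that are 180 degrees out of phase give you 240V between them. It’s just illustrative math, not tied to any particular panel:

    ```python
    import math

    F = 60                         # US mains frequency in Hz
    V_RMS = 120                    # RMS voltage of each leg to neutral
    V_PEAK = V_RMS * math.sqrt(2)

    def leg_a(t):
        """Instantaneous voltage of leg A relative to neutral."""
        return V_PEAK * math.sin(2 * math.pi * F * t)

    def leg_b(t):
        """Leg B is the same waveform shifted 180 degrees."""
        return V_PEAK * math.sin(2 * math.pi * F * t + math.pi)

    # Sample one full 60Hz cycle and look at the leg-to-leg voltage.
    diffs = [leg_a(t / 10000) - leg_b(t / 10000) for t in range(167)]
    print(round(max(diffs) / math.sqrt(2)))  # ~240V RMS between the two legs
    ```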


  • Yes, a lot of my movies are 50GB or so. Not everything has a 4k repack available, though. I’d say the vast majority are around 20GB.

    1080p would just not be acceptable for me. There’s a clear difference between 1080p and 4k on a 4k screen, especially if the screen is large.

    If I’m in a situation where I don’t have connectivity to stream from my server, I can always start a Handbrake queue the night before and transcode a few videos to a smaller size, or just dump a few onto an external drive (a rough sketch of that kind of batch queue is below). I have never actually been in a situation where I had to do this, though.
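
    For reference, the “queue it up the night before” workflow is basically just a loop around HandBrakeCLI. Here’s a rough sketch; the folder paths and preset name are placeholders, so pick whatever preset matches your target size:

    ```python
    import subprocess
    from pathlib import Path

    SOURCE_DIR = Path("/mnt/media/movies-to-shrink")   # placeholder source folder
    OUTPUT_DIR = Path("/mnt/scratch/travel-copies")    # placeholder destination folder
    PRESET = "Fast 1080p30"                            # a stock HandBrake preset; swap in your own

    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

    for src in sorted(SOURCE_DIR.glob("*.mkv")):
        dst = OUTPUT_DIR / (src.stem + ".m4v")
        if dst.exists():
            continue  # already transcoded on a previous run
        # HandBrakeCLI does the actual work; this loop just feeds it one file at a time.
        subprocess.run(
            ["HandBrakeCLI", "-i", str(src), "-o", str(dst), "--preset", PRESET],
            check=True,
        )
    ```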


  • For any sort of media, including video, I always go for the highest quality I can find. I do have a number of 4k displays, so it makes sense to a certain extent, but a lot of it has to do with future-proofing.

    Here’s a good example: When personal video cameras were first starting to support 1080, I purchased a 1080i video camera. At the time, the footage looked great on my 1920x1080 (maybe 1024x768, not sure) monitor. Fast forward more than 15 years, and the video I recorded back then looks like absolute garbage on my 4k TV.

    I remember watching a TV show when 1080p video first became available, and I was blown away at the quality of what was probably a less-than-1GB file. Now watching the same file even on my phone shows a noticeable drop in quality. I’m not surprised you saw little difference between a 670MB and a 570MB file, especially if it was animation, which contains large chunks of solid color and is thus more easily compressed; the difference between two resolutions, though, can be staggering (there’s a rough bits-per-pixel sketch at the end of this comment). At this point, I don’t think you can easily find a 1080p TV; everything is 4k. 8k is still not widespread, but it will be one day. If you ever think you’ll buy a new TV, computer monitor, or mobile device, eventually you’ll want higher-quality video.

    My recommendation would be to fill your media library with the highest-quality video you can possibly find. If you’re going to re-encode the media to a lower resolution or bitrate, keep a backup of the original. You may find, though, that if you’re re-encoding enough video, it makes more sense to save the time and storage space, and spend a bit of money on a dedicated video card for on-the-fly transcoding.

    My solution was to install an RTX A1000 in my server and set it up with my Jellyfin instance. If I’m watching HDR content on a non-HDR screen, it will transcode and tone-map the video. If I’m taking a break at work and I want to watch a video from home, it will transcode it to a lower bitrate that I can stream over my (slow) home internet. Years from now, when I’m trying to stream 8k video over a 10Gb fiber link, I’ll still be able to use most of the media I saved back in 2024 rather than trying to find a copy that meets modern standards, if one even exists.

    Edit: I wanted to point out that I realize not everyone has the time or financial resources to set up a huge NAS with enterprise-grade drives. An old motherboard and a stack of cheap consumer-grade drives can still give you a fair amount of backup storage and will be fairly robust as long as the drive array is set up with a sufficient level of redundancy.
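
    If you want to put rough numbers on the bitrate-vs-resolution point above, a quick bits-per-pixel figure helps. The file sizes and runtimes below are made-up examples, not real releases:

    ```python
    def avg_bitrate_mbps(size_gb: float, runtime_min: float) -> float:
        """Average bitrate in Mbit/s from file size and runtime (ignores audio overhead)."""
        return size_gb * 8 * 1000 / (runtime_min * 60)

    def bits_per_pixel(bitrate_mbps: float, width: int, height: int, fps: float) -> float:
        """How many bits each displayed pixel gets per frame, on average."""
        return bitrate_mbps * 1_000_000 / (width * height * fps)

    # Hypothetical examples for comparison:
    for label, size_gb, runtime_min, w, h in [
        ("4k remux, 50 GB, 2h", 50, 120, 3840, 2160),
        ("4k encode, 20 GB, 2h", 20, 120, 3840, 2160),
        ("1080p episode, 0.67 GB, 22 min", 0.67, 22, 1920, 1080),
    ]:
        mbps = avg_bitrate_mbps(size_gb, runtime_min)
        bpp = bits_per_pixel(mbps, w, h, 24)
        print(f"{label}: ~{mbps:.1f} Mbit/s, ~{bpp:.3f} bits/pixel")
    ```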



  • I did some research on this, and it turns out you’re absolutely correct. I was under the impression that ECC was a requirement for a ZFS cache. It does seem like ECC is highly recommended for ZFS, though, due to the large amount of data it stores in memory. I’m not sure I’d feel comfortable using non-ECC memory for ZFS, but it is possible.

    Anecdotally, I did have one of my memory modules fail in my TrueNAS server. It detected this, corrected itself, and sent me a warning. I don’t know if this would have worked had I been using non-ECC memory.
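
    If you’re curious how those corrected-error warnings actually surface, on a Linux-based system (TrueNAS SCALE, Proxmox, and so on) the EDAC driver exposes per-memory-controller counters in sysfs. A minimal sketch of reading them, assuming the EDAC modules for your platform are loaded:

    ```python
    from pathlib import Path

    # EDAC creates one directory per memory controller under /sys/devices/system/edac/mc/.
    for mc in sorted(Path("/sys/devices/system/edac/mc").glob("mc*")):
        ce = int((mc / "ce_count").read_text())  # corrected errors (ECC caught and fixed these)
        ue = int((mc / "ue_count").read_text())  # uncorrected errors (data was actually lost)
        print(f"{mc.name}: corrected={ce} uncorrected={ue}")
    ```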


  • One thing to keep in mind if you go with an i5 or i7 is that you won’t have the option to use ECC memory. If you’re running TrueNAS, you’ll need ECC memory for the ZFS cache. A Xeon E5 v2 server is old, but it still has more than enough power for your use case, and they’re not particularly expensive.

    If you need something more powerful, you can find some decent Xeon Gold systems on eBay, but they’ll be a bit more pricey. The new Xeon W chips are also an option, but at least for me, they’re prohibitively expensive.



  • I decided to give up on it. Looking through the docs, they recommend that due to “reasons,” it should be restarted at least daily, preferably hourly. I don’t know if they have a memory leak or some other issue, but that was reason enough for me not to use it.

    I installed TubeArchivist, and it suits my needs much better. Not only do I get an archive of my favorite channels, but when a new video is released, it gets automatically downloaded to my NAS and I can play it locally without worrying about buffering on my painfully slow internet connection.



  • Like most of us, I have plenty of pictures and such that I don’t want to lose. The most important to me, though, are some of the documents that I’ve saved or scanned from years past. While a tax return from 2003 or the title for a car I owned 20 years ago isn’t exactly useful, it’s fun to occasionally look at some of the old stuff and see how far I’ve come in my life since then.



  • I’m strongly in favor of keeping things compartmentalized. I have two main servers: one is a Proxmox host with a powerful CPU and a few hard drives set up in a fast but not-so-redundant array (I use ZFS, but my setup is similar to RAID10). Then I have a second server that runs TrueNAS; its CPU is slower, but it has a large amount of storage (120TB physical) arranged in an extremely fault-tolerant configuration.

    My Proxmox box runs every service on my network, but the only things stored on its hard drives are the main boot disks. Everything backs up daily, so I’m not too concerned about drive failure. All my data is stored on the NAS and shared with the VMs via NFS, SMB, or iSCSI, depending on which is more appropriate.

    For you, I’d recommend building a NAS and keeping all your important data there. Your NUC can host your services, and they can pull data from the NAS. The 256GB on your NUC will be more than enough to host whatever services you need.


  • 4 Mbit is exceptionally slow by today’s standards; when I signed up for internet access (there’s only one provider available where I live), I told them “I will pay for whatever the fastest connection is that you can offer.” Turns out that’s just single-channel DSL. They won’t even install bonded DSL where I live, and believe me, I’ve tried. I do have Starlink as well, but because of the terrain around me, the antenna is always going to be obstructed; when I calculated how high I would need to raise it to clear the obstructions, it was several hundred feet. My pfSense box does a good job of routing traffic between my DSL connection and my Starlink connection (and falling back when Starlink is obstructed), but for hosting anything, I need a stable connection. That leaves me with just my DSL line.




  • I honestly didn’t know that YouTube “unlisted” was even a thing; I’ve never posted a video to YouTube before, but this might be a promising idea. I’m assuming they still inject ads into unlisted videos, which is a major barrier for me… I hate ads.

    I’ll admit that I’m a snob when it comes to video and audio quality; 4k/60 might be overkill, but I think at least 4k/30 has some merit in this case. Most modern phones and tablets (and TVs) have resolutions greater than 1080p, so assuming they’re watching the video in landscape, 1080p would still mean a loss of quality. Would they care? Almost certainly not, but the idea of watching a UHD source at a lower resolution bothers me far more than it should.

    It definitely seems like VPS hosting is out of my budget. I think that hosting multiple versions of the same video (and paying for more HDD space) would probably be cheaper than a VPS with GPU resources, but the recurring fees are probably more than I’m willing to spend.


  • I’m a big fan of Jellyfin. I run it at home with a dedicated Nvidia A2000 for hardware transcoding. It’s able to transcode multiple 4k streams with tone-mapping faster than real time.

    As much as I’d love to use Jellyfin for this, there are two major issues. My internet connection is so slow that I’d be lucky to stream 720p at a low bitrate. I’d spend the money on a faster connection, but I live in an area that doesn’t even get cell phone service; my options are DSL and Starlink, and I have both. The DSL is just slow, and Starlink’s uplink speed isn’t much better, plus I have plenty of obstructions that make it somewhat unreliable. The second problem is that Jellyfin has too steep a learning curve for my relatives; telling them “oh, if it starts buffering, just lower the bitrate” isn’t an option. Not to mention, I’d have to run it on a VPS, and renting a VPS with the resources required for this is way too expensive for me.



  • corroded@lemmy.world to Selfhosted@lemmy.world · Why docker · 6 months ago

    My personal opinion is that Docker just makes things more difficult. Containers are fantastic, and I use plenty of them, but Docker is just one way to implement containers, and a bad one. I have a server that runs Proxmox; if I need to set up a new service, I just spin up an LXC and install what I need. It gives all the advantages of a full Linux installation without taking up the resources of a full-fledged OS. With Docker, I would need a VM running the Docker host, then I’d have to install my Docker containers inside that host, then forward any ports or resources between the hypervisor, the Docker host, and the Docker container.

    I just don’t get the use-case for Docker. As far as I can tell, all it does is add another layer of complexity between the host machine and the container.


  • If it’s really impossible to add an extra drive, are you able to attach an external drive or map a networked drive that has space for your VMs and LXCs?

    In your situation, what I would probably do is back up all my VMs to my NAS, replace the hard drive in my Proxmox hypervisor, re-install a fresh copy of Proxmox on the new drive, and restore the VMs to the new installation (there’s a rough sketch of the backup step at the end of this comment). If you don’t have a NAS, you could do this with a USB-attached hard drive, too.

    Ideally, though, you should have separate drives for your Proxmox boot disk and your VMs. Even if you’re using an SFF PC that doesn’t have an extra drive bay, could you double-sided-tape an SSD to the bottom of the case and use that as your storage drive? I’ve certainly done it before.
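
    For the backup step, Proxmox’s CLI backup tool is vzdump. Here’s a rough sketch of dumping a few guests to a NAS-backed storage; the storage name and guest IDs are placeholders for whatever you’ve defined:

    ```python
    import subprocess

    BACKUP_STORAGE = "nas-backups"   # placeholder: a Proxmox storage entry that points at your NAS
    GUEST_IDS = [100, 101, 102]      # placeholder IDs; list yours with `qm list` and `pct list`

    for vmid in GUEST_IDS:
        # Snapshot mode lets the guest keep running while it's being dumped.
        subprocess.run(
            ["vzdump", str(vmid), "--storage", BACKUP_STORAGE,
             "--mode", "snapshot", "--compress", "zstd"],
            check=True,
        )
    ```

    Restoring onto the fresh install can then be done from the web UI or with qmrestore / pct restore.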



  • That’s a very valid point, and certainly a reason to obfuscate the calendar event. I would argue that in general, if the concern is the government forcing you to decrypt the data, there’s really no good solution. If they have a warrant, they will get the encrypted data; the only barrier is how willing you are to refuse to give up the encryption key. I think some jurisdictions prevent compelled disclosure on 5th Amendment grounds, but I’m not a lawyer.