  • I’ve done a backup swap with friends a couple of times. Security wasn’t much of a worry since we connected to each other’s boxes over SSH, WireGuard, or similar and used tools that support encryption. The biggest challenge was that everyone in my selfhosting friend group prefers different protocols, so we had to figure out what each of us wanted to use to connect and access filesystems, and then set that up. The second challenge was uptime: the remote access we set up for each other had to stay up, and that’s what killed the project. We all eventually stopped maintaining the remote access and nobody seemed to care, so if I were to do it again I would make sure every participant has alerts monitoring their shared endpoint.
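
    For the monitoring part, even a tiny script run from cron that probes each friend’s endpoint and pushes an alert would have been enough. A minimal Python sketch, assuming a TCP check against each SSH port and a hypothetical ntfy/webhook URL for notifications (hostnames and ports below are placeholders):

```python
# Rough sketch: poll each friend's backup endpoint and push an alert when it
# stops answering. Hostnames, ports, and the webhook URL are placeholders.
import socket
import urllib.request

ENDPOINTS = {
    "friend-a": ("backup.friend-a.example", 22),    # hypothetical SSH endpoint
    "friend-b": ("backup.friend-b.example", 2222),
}
ALERT_URL = "https://ntfy.example/backup-swap"       # any webhook/push topic you run

def is_up(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def alert(message: str) -> None:
    """POST a plain-text message to the webhook so someone actually notices."""
    req = urllib.request.Request(ALERT_URL, data=message.encode(), method="POST")
    urllib.request.urlopen(req, timeout=10)

if __name__ == "__main__":
    for name, (host, port) in ENDPOINTS.items():
        if not is_up(host, port):
            alert(f"backup endpoint '{name}' ({host}:{port}) is unreachable")
```

    Something like Uptime-Kuma pointed at each endpoint would do the same job with less effort; the point is just that every participant gets told when their shared endpoint goes dark.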






  • One game I used to play suddenly started working in the latest major Proton release (I think 9). It wasn’t mentioned in the release notes, and there’s no community around the game since it came out around the Windows Vista era and was pulled from stores years ago (I still have it on Steam), so I don’t think anyone intentionally fixed it; it’s probably just the result of some system call being implemented or tweaked to behave closer to correct.

    So yeah, it’s well worth retesting your broken Wine apps every 6 months to a year, because slowly but surely everything I ever had issues with in Wine is starting to work.


  • I’ve even experienced this in the 3D printing community. I’ll design a highly parametric model and put lots of effort into making all of the major dimensions and qualities parameterized and dynamically adjustable, with bounds checking and value clamping, and with all the parameters at the top of my .scad file alongside comments explaining what each variable does.

    And then someone comes along to remix the model, says “I don’t want to install OpenSCAD”, and just scales the entire output STL to change the dimensions, squashing all the features of the model in the process (instead of letting the size adjust gracefully, with the features repositioning to account for it), and leaving anybody who starts from their work with a hard-to-remix mesh that has no parameters.
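
    For anyone unfamiliar with the pattern: the actual file is OpenSCAD, but the clamp-and-derive idea looks roughly like this (sketched here in Python with made-up parameter names, just to show the shape of it):

```python
# Illustrative only: the real model lives in a .scad file, but the idea is the same.
# User-facing parameters, documented and bounded, sit at the top...
box_width_mm = 80      # outer width of the box
wall_mm = 2.4          # wall thickness
screw_d_mm = 3.2       # clearance hole diameter for an M3 screw

def clamp(value, lo, hi):
    """Keep a parameter inside its safe range instead of producing a broken model."""
    return max(lo, min(hi, value))

# ...and every derived dimension is computed from them, so changing one
# parameter repositions the features instead of squashing the whole shape.
box_width_mm = clamp(box_width_mm, 30, 200)
wall_mm = clamp(wall_mm, 1.2, 4.0)
cavity_width_mm = box_width_mm - 2 * wall_mm
screw_offset_mm = wall_mm + screw_d_mm  # holes track the wall, not a fixed coordinate

print(cavity_width_mm, screw_offset_mm)
```

    Scaling the exported STL throws all of that away: the walls, holes, and cavities just get stretched along with everything else.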



  • Isn’t Miracast for sending the video data itself? The thing I like about Chromecast is that the phone or remote app just tells the Chromecast where to load the media from directly, and then only sends playback control commands. That makes it a lot lighter resource-wise, because you don’t need to proxy the stream through a device like a phone that wants to go to sleep to save battery.
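
    You can see that split in how the Cast protocol is driven from code. A rough sketch with the third-party pychromecast library (device name and media URL are placeholders, and I may be slightly off on the exact API):

```python
# Sketch of the "tell the Chromecast where to fetch from" model: the controller
# never touches the media bytes, it only sends a URL and control commands.
import pychromecast

# Discover a specific device by its friendly name (placeholder name).
chromecasts, browser = pychromecast.get_listed_chromecasts(
    friendly_names=["Living Room TV"]
)
cast = chromecasts[0]
cast.wait()  # block until the connection to the device is ready

mc = cast.media_controller
# Hand the device a URL; the Chromecast streams directly from that server.
mc.play_media("http://media.example.lan/movie.mp4", "video/mp4")
mc.block_until_active()

# From here on, the phone/app only issues lightweight control commands.
mc.pause()
mc.play()

browser.stop_discovery()
```

    If Miracast really does stream the video from the sender itself, that’s exactly the battery and resource cost the Cast model avoids.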



  • BakedCatboy@lemmy.ml to Selfhosted@lemmy.world · NAS/Media Server Build Recommendations

    I went with the DS1621xs+, the main driving factors being:

    • I already had a 6-drive raidz2 array in TrueNAS and wanted to keep the same configuration
    • I also wanted ECC, which is maybe not strictly necessary, but the most valuable thing I store is family photos and I want to do everything within my budget to protect them.

    If I remember correctly, only the 1621xs+ met those requirements, though if I had been willing to go without ECC (which requires going with a Xeon) then the DS620slim would have given me 6 bays and integrated graphics with Quick Sync, which would have let me do power-efficient transcoding and run Plex/Jellyfin right on the NAS. So there are tradeoffs, but I tend to lean towards overkill.

    A good way to narrow down off-the-shelf NASes is to first decide what level of redundancy you want and how many drives you want to run, factoring in what the drives will cost, whether you want redundancy left over while a rebuild is happening after one failure, and how much space gets sacrificed to parity. Newegg’s NAS builder comes in handy here: just select “All” capacities, filter by number of drive bays, and compare what’s left.
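
    As a quick sanity check on the parity question, the back-of-the-envelope math looks like this (layouts and drive size are just example numbers, and it ignores ZFS overhead and the TB-vs-TiB gap):

```python
# Rough usable-capacity math for a few common ZFS layouts.
drive_tb = 8
layouts = {
    "6-drive raidz1 (1 disk of parity)": (6, 1),
    "6-drive raidz2 (2 disks of parity)": (6, 2),  # still redundant during a rebuild
    "6-drive raidz3 (3 disks of parity)": (6, 3),
}

for name, (drives, parity) in layouts.items():
    usable = (drives - parity) * drive_tb
    sacrificed = parity * drive_tb
    print(f"{name}: ~{usable} TB usable, ~{sacrificed} TB given up to parity")
```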

    And since the 1621xs+ has a pretty powerful Xeon, I run most things on the NAS itself. Synology supports Docker and docker compose out of the box (once the container app is installed), so I just SSH into the box and keep my compose folders somewhere on the btrfs volume. Docker nicely lets anything run without worrying about dependencies being available on the host OS. The only gotcha is kernel stuff, since Docker containers share the host kernel: WireGuard, for example, relies on kernel support, and I could only get it working with a userspace WireGuard container (using boringtun), and only after the VPN/Tailscale app was installed (presumably because that adds the tun/tap interfaces VPN containers need).

    Only Jellyfin/Plex is on my NUC. On the NAS I run:

    • Adguard

    • Sonarr/Radarr/Lidarr/Prowlarr/Transmission/Overseerr

    • Castblock

    • Grocy

    • Nextcloud

    • A few nginx instances for websites

    • Uptime-kuma

    • Vaultwarden

    • Traefik and WireGuard, which connect to a VPS acting as a reverse proxy for anything that needs to be reachable from the public internet


  • Just want to second this. I use an Intel NUC10i7 with Quick Sync for Plex/Jellyfin; it can transcode at least 8 streams simultaneously without breaking a sweat, probably more if you don’t have 4K. A separate Synology NAS mainly handles storage. I run Docker containers on both, and the NUC has my media mounted over a network share via a dedicated, direct gigabit Ethernet link between the two, so all the filesystem access traffic stays off my switch/LAN.

    The idea was to be able to pick the best NAS for my redundancy needs (raidz2 / btrfs with double redundancy for my irreplaceable family memories) while getting a cost-effective, low-power Quick Sync device for transcoding my media collection. I chose that over pre-transcoding or keeping multiple qualities of each file, both to save HDD space and to stay flexible for the low-bandwidth needs of anyone I share with who has a slow connection.