  • When I was starting out I almost went down the same path. In the end, Docker secrets are mainly useful when the same key needs to be distributed to multiple nodes.

    Storing the keys locally in an env file that is only accessible to the docker user is close enough to the same thing for home use and greatly simplifies your setup.

    I would suggest using a folder for each stack that contains one docker compose file and one env file. The env file holds the passwords; the rest of the env variables are defined in the compose file itself. Exclude the env files from your git repo (if you use one for version control) so you never check a secret into git. (In practice I have one folder of compose files that is in git, and my env files are stored in a different folder outside of git.)
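
    A rough sketch of that layout (the stack name, variable and value here are just examples, and it assumes the compose file already references the variable):

      # one folder per stack: the compose file is tracked in git, the env file is not
      mkdir -p stacks/authentik && cd stacks/authentik

      # secrets live only in the env file, readable by the docker user alone
      printf 'POSTGRES_PASSWORD=example-not-a-real-secret\n' > .env
      chmod 600 .env

      # make sure env files never get committed
      echo '.env' >> .gitignore

      # docker compose picks up .env from the project folder automatically
      docker compose up -d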

    I do all of this via Portainer, which sets up the above folder structure for you. Each stack is a compose file that Portainer pulls from my self-hosted Gitea (on another machine). Portainer creates the env file itself when you add the env variables in the GUI.

    If someone gets access to your system and is able to read the env file, they already have high-level access and your system is compromised regardless of whether the secrets are encrypted via Swarm or not.








  • Thanks! Makes sense if you can’t change file systems.

    For what it’s worth, ZFS lets you dedup on a per-dataset basis, so you can easily choose to have some files deduped and not others. Same with compression.

    For example, without building anything new, the setup could have been to copy the data from the actual Minecraft server to the backup server running ZFS, using rsync or some other tool. Then the backup server just takes a snapshot every 5 minutes or whatever. You now have a backup on another system, with snapshots at whatever frequency you want, with dedup.

    Restoring an old backup just means you rsync from a snapshot back to the Minecraft server.
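
    Roughly, that flow could look like this (pool, dataset and host names are made up for the example):

      # on the backup server: per-dataset dedup and compression
      zfs create tank/minecraft-backup
      zfs set dedup=on tank/minecraft-backup
      zfs set compression=lz4 tank/minecraft-backup

      # pull the world data over, then snapshot (run both from cron or a timer)
      rsync -a --delete mcserver:/srv/minecraft/ /tank/minecraft-backup/
      zfs snapshot tank/minecraft-backup@$(date +%Y%m%d-%H%M)

      # restore: copy files straight out of any snapshot back to the game server
      rsync -a /tank/minecraft-backup/.zfs/snapshot/20240101-1200/ mcserver:/srv/minecraft/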

    Rsync is only needed if both servers don’t have ZFS. If they both have ZFS, the send and receive commands built into ZFS are designed for exactly this use case: you can easily send a snapshot to another server.
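
    A hedged sketch of that (host and dataset names are placeholders):

      # initial full copy to the backup host
      zfs snapshot tank/minecraft@monday
      zfs send tank/minecraft@monday | ssh backup zfs receive backuppool/minecraft

      # later: incremental send of only the blocks changed since @monday
      # (the target must still be at @monday, or use receive -F)
      zfs snapshot tank/minecraft@tuesday
      zfs send -i tank/minecraft@monday tank/minecraft@tuesday | ssh backup zfs receive backuppool/minecraft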

    ZFS also has Samba (SMB) and NFS export built in if you want to share the filesystem with another server.
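
    The sharing is just a dataset property; the options below are only an example, and the exact syntax varies by platform (check zfsprops):

      zfs set sharenfs=on tank/minecraft-backup
      # or with explicit NFS export options instead of plain "on"
      zfs set sharenfs='rw=@192.168.1.0/24' tank/minecraft-backup

      # SMB sharing relies on Samba being installed on the host
      zfs set sharesmb=on tank/minecraft-backup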


  • I use ZFS so I’m not sure about others, but I thought all CoW filesystems had deduplication already? ZFS has it built in (enabled per dataset). Why make your own file deduplication system instead of just using a ZFS filesystem and letting that do the work for you?

    Snapshots are also extremely efficient on CoW filesystems like ZFS, since they only store the diff between the previous state and the current one, so taking a snapshot every 5 minutes is not a big deal for my homelab.

    I can easily explore any of the snapshots and pull any file from any of them.
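
    For example (dataset and snapshot names here are made up):

      # list every snapshot of the dataset
      ls /tank/mydata/.zfs/snapshot/

      # pull a single file out of an old snapshot
      cp /tank/mydata/.zfs/snapshot/autosnap-20240101-1200/some/file.txt /tank/mydata/some/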

    I’m not trying to shit on your project, just trying to understand its use case, since it seems to me ZFS provides all of these benefits already.




  • The proper way of doing this is to have two separate systems in a cluster, such as Proxmox. The system with GPUs runs certain workloads and the non-GPU system runs the others.

    Each system can be connected (or not) to a UPS, shut down during a power outage, and then boot back up when power is restored.

    Don’t try hot-plugging a GPU; it will never be reliable.

    Run a Proxmox or Kubernetes cluster; they are designed for this type of application, but they add a fair amount of complexity.
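
    If you go the Proxmox route, forming the cluster itself is only a couple of commands (the cluster name and IP below are placeholders); the GPU/non-GPU split then just comes down to which node you create each VM or container on:

      # on the first node
      pvecm create homelab

      # on each additional node, join using an existing node's IP
      pvecm add 192.168.1.10

      # check membership and quorum
      # (a two-node cluster usually wants a QDevice or third vote for quorum)
      pvecm status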




  • I tried to find this on DDG but also had trouble, so I dug it out of my docker compose.

    Use this docker container:

    prodrigestivill/postgres-backup-local

    (I have one of these for every docker compose stack/app)

    It connects to your Postgres and runs the pg_dump command on a schedule that you set, with retention (you choose how many dumps to keep).

    The output then goes to whatever folder you want.
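
    As a rough example, the docker run equivalent of one of those sidecar containers looks something like this (container names, network, password, schedule and paths are all placeholders; check the image’s README for the exact variables):

      docker run -d --name authentik-db-backup \
        --network authentik_default \
        -e POSTGRES_HOST=authentik-db \
        -e POSTGRES_DB=authentik \
        -e POSTGRES_USER=authentik \
        -e POSTGRES_PASSWORD=example-not-a-real-secret \
        -e SCHEDULE='@hourly' \
        -e BACKUP_KEEP_DAYS=7 \
        -v /srv/docker-data/authentik/db-bak:/backups \
        prodrigestivill/postgres-backup-local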

    So I have a main folder called docker data; this folder is backed up by Borgmatic.

    Inside I have a folder per app, like authentik

    In that I have folders like data, database, db-bak etc

    The Postgres data would be in database and the output of the above dump goes to the db-bak folder.

    So if I need to recover something, the first step is to just copy back the whole app folder and see if that works. If not, I can grab a database dump and restore it into the database and see if that works. If not, I can pull a DB dump from any of my previous backups until I find one that works.
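
    The restore step is just feeding a dump back into the running database container; a sketch, assuming the plain-SQL .sql.gz dumps that image produces (file, container and database names are illustrative):

      zcat /srv/docker-data/authentik/db-bak/authentik-20240101.sql.gz \
        | docker exec -i authentik-db psql -U authentik -d authentik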

    I don’t shut down or stop the app container to back up the database.

    In addition to hourly Borg backups kept for 24 hours, I have ZFS snapshots every 5 minutes kept for an hour, and the pg_dump happens every hour as well. For a homelab this is probably more than sufficient.


  • Fair enough. I primarily use NFS for Linux-to-Linux server communication and heavy file access.

    SMB is mostly for moving files around occasionally.

    Not sure if trying to run a database over SMB is a good idea, but I do it over NFS all the time.

    Regardless, it doesn’t have to be exclusive; OP can change it up depending on the application.
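
    For reference, the two mounts side by side (server, share and credentials file are placeholders):

      # NFS: Linux to Linux, fine for constant access
      mount -t nfs fileserver:/tank/media /mnt/media

      # SMB/CIFS: good enough for occasionally shuffling files around
      mount -t cifs //fileserver/media /mnt/media -o credentials=/etc/smb-credentials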



  • Just checked, it’s working fine for me: Seadroid 3.0.0 (from F-Droid), server 11.0.8, Pixel 8 (Android).

    It was working for me before as well with the 2.3.x version I was using (don’t know the exact version).

    For example, in my Aegis I can select backup folders, and then in the file browser Seafile shows up and lets me select a folder for backup as expected.

    You could try restarting the phone in case it’s a weird Android issue. Then you could try Seadroid 3.0.0.

    What happens when you select the 3-line button on the top left? For me it shows SeaDrive.