![](https://lemmy.world/pictrs/image/8286e071-7449-4413-a084-1eb5242e2cf4.png)
NFS is always cranky for me, and you can’t get it to use symlinks at all (yeah Samba’s implementation is janky but at least it exists)
It’s UID/GID 10000 on the host because you are using an unprivileged LXC container. Unprivileged means that “root” inside the container (which is really just a restricted user account on the host) is user 10000 on the host. This is so that files and processes inside the container never run with the real UID zero: a malicious file, or a malicious program that escapes containment, can’t end up with root access on the host.
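The mapping lives in the host’s `/etc/subuid` and `/etc/subgid`. As a sketch matching the offset described above (many Proxmox installs use 100000 instead, so check your own files):

```
# /etc/subuid (same line appears in /etc/subgid)
# maps container UIDs 0..65535 to host UIDs 10000..75535,
# so container root (UID 0) shows up as host UID 10000
root:10000:65536
```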
Quickest way to make this work over samba is to force user 10000 and force group 10000. That way everything connecting to Samba would see the files as their own.
Honestly the better solution is to make your software inside the containers run with a local non-root user (which would be something like 10001) and then force samba to use that. Then nothing is running as root in or out of the containers. Samba will still limit access to shares based on the samba login, but for file access purposes it will still use the read/write levels of your non-root user (because of the force- directives)
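As a sketch of that force-user approach (the share name, path, and the `appuser` account are illustrative, not from this thread — `force user` wants a username, so first create one mapped to the right UID, e.g. `useradd -u 10001 appuser`):

```
# /etc/samba/smb.conf share section (illustrative)
[media]
   path = /srv/media
   valid users = alice bob    ; Samba logins still gate who can connect
   force user = appuser       ; all file operations performed as this user
   force group = appuser
   read only = no
```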
It’s also nice because it has ACME built right in (it takes care of your SSL/TLS certs for your site automatically without setting up a cron job or certbot yourself)
Is the container running out of disk space for its DB? Is the container running out of memory during the backup process and crashing?
If Proxmox is already installed on the machine, how are you running OPNsense? If it’s not bare metal, it’s a VM, and if it’s a VM it needs Proxmox’s virtual NICs to be VLAN aware, unless you are doing PCI passthrough of the entire network card.
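For reference, a VLAN-aware Linux bridge on the Proxmox host looks roughly like this (interface names are illustrative):

```
# /etc/network/interfaces on the Proxmox host (illustrative)
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

With that in place you can either set a VLAN tag on the VM’s virtual NIC in its hardware settings, or pass the trunk through untagged and let OPNsense create its own VLAN subinterfaces.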
Yeah VictoriaMetrics is the new favorite since Influx keeps reinventing their wheels and trying to move everyone to the cloud.
Zap&Dash is incredible and it’s amazing it can even be done as a romhack of SMB. 100% worth checking out.
Hmm, Lemmy does have a lot of client apps. OP, are you a cropper, or did you post with a special app?
Been seeing this a lot on Lemmy with webcomics lately.
“Find Nedry! Check the vending machines!”
WebDAV, as others have said
That’s right, all it is is an auto-copy program. It doesn’t host a shared folder like NextCloud; it just saves you the clicks (or commands) of copying your newly-changed files to all the places you want a copy to be.
If you edit a file on your machine, and your wife edits her copy, you might even end up with a conflict. (I don’t use Syncthing, so I don’t know how it handles that.)
Only on Ubuntu-based distros AFAIK, but `sudo do-release-upgrade` is the correct command.
Well I mean there is a big L right on his hat
And on a CLI a directory is just a list of other files.
Grocy is a neat project for stuff like this. Also available as a HomeAssistant add-on, if that’s your style
And, circling back to ports, you can make firewall rules that prevent devices from talking across VLANs on certain ports. Your Nintendo Switch doesn’t need SSH access to your KNX server, to re-use your previous example, so you block your console’s VLAN from being able to talk to your server VLAN at all.
The best way to do it is to block literally everything between VLANs, and then only allow the ports you know you need for the functionality you want.
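As a sketch of that default-deny policy in nftables syntax (interface names, VLAN numbers, and ports are all illustrative — most firewall UIs like OPNsense express the same thing as “block all, then allow” rules):

```
# drop all inter-VLAN traffic by default, then open only what you need
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;

        # replies to allowed connections flow back automatically
        ct state established,related accept

        # example: trusted VLAN may reach the server VLAN on HTTPS only
        iifname "vlan10" oifname "vlan20" tcp dport 443 accept
    }
}
```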
The cultural elitism comes from years of tinkering with their system, since all the information they can find is fragmented and spread around, highly opinionated, poorly digestible, out of date, and often dangerous.
It does sound to me like ingesting all these different formats into a normalized database (aka data warehousing) and then building your tools to report from that centralized warehouse is the way to go. Your warehouse could also track ingestion dates, original format converted from, etc. and then your tools only need to know that one source of truth.
Is there any reason not to build this as a two-step process of 1) ingestion to a central database and 2) reporting from said database?
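The two-step flow above can be sketched like this (the table and field names are my own illustration, not from any particular tool):

```python
import sqlite3
from datetime import datetime, timezone

def make_warehouse() -> sqlite3.Connection:
    """Create the central database every reporting tool reads from."""
    db = sqlite3.connect(":memory:")
    db.execute("""
        CREATE TABLE records (
            id            INTEGER PRIMARY KEY,
            source_format TEXT NOT NULL,  -- e.g. 'csv', 'xml', 'legacy-v1'
            ingested_at   TEXT NOT NULL,  -- when the row entered the warehouse
            payload       TEXT NOT NULL   -- the normalized representation
        )
    """)
    return db

def ingest(db: sqlite3.Connection, source_format: str, rows) -> None:
    """Step 1: normalize any input format into the one shared schema."""
    now = datetime.now(timezone.utc).isoformat()
    db.executemany(
        "INSERT INTO records (source_format, ingested_at, payload)"
        " VALUES (?, ?, ?)",
        [(source_format, now, r) for r in rows],
    )

def report(db: sqlite3.Connection):
    """Step 2: tools only ever query the warehouse, never the raw files."""
    return db.execute(
        "SELECT source_format, COUNT(*) FROM records GROUP BY source_format"
    ).fetchall()

db = make_warehouse()
ingest(db, "csv", ["row-a", "row-b"])
ingest(db, "xml", ["row-c"])
print(report(db))  # row counts per original source format
```

The reporting side never has to know about the original formats beyond the `source_format` tag, which is exactly the “one source of truth” property described above.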