Do you have a sample of the errors you’re getting? Are they Docker related or service related, as in Jackett can’t connect to/reach Sonarr, for example?
latest cargo crates updated
Navidrome over WireGuard, with the music library in folders and proper tagging through beets and Picard. Using a Subsonic client for it. Tried Plex and Plexamp, but I’m moving away from them.
Check out Netmaker for WireGuard VPN if you want a UI, but it’s straightforward to set it up manually.
I’d say, what kind of security are you talking about? Apart from standard HTTPS to keep things encrypted, there are other layers if you want to keep your service exposed to the internet.
Also, it matters how things are installed and whether they’re configured correctly, with proper file permissions. Nothing different from having it on a server somewhere. You just need to keep things up to date and you’ll be fine.
I like it here on Lemmy as there are quality discussions from people and not too much circlejerking of the same concepts. I actually enjoy going through here.
About 6 years of uptime on one machine before we shut it down and relocated.
Yes, that’s exactly it.
Make sure your HAProxy is listening on the wireguard interface as well.
Once you have the wireguard tunnel working, do a quick test, like curl -k -H "Host: domain.tld" https://router_wireguard_ip/ (the -k is needed because the certificate won’t match the raw IP)
and if that works, add in the iptables rules and you should be all set.
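For reference, a minimal haproxy.cfg sketch of what listening on both interfaces could look like, assuming plain TCP passthrough; the LAN/wireguard IPs and the backend address are placeholders:

frontend https_in
    # listen on the LAN address and on the wireguard interface address
    bind 192.168.1.1:443
    bind 10.0.0.1:443
    mode tcp
    default_backend home

backend home
    mode tcp
    server home1 192.168.1.10:443 check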
Thanks for the reply. Flux is pretty good; I’m using ArgoCD, but both basically follow GitOps principles.
I might give k3s a look and see how it all works together.
What would be the benefit of running k8s at home, apart from getting a bit of hands-on with it, compared to docker-compose on a single node or two? Or Docker Swarm? Unless there is a big load of self-hosted services, which I get, plus the autohealing from k8s as the orchestrator.
Just curious, not taking a swing. Thanks!
Yes, that would be possible with this setup. The port HAProxy listens on just needs to be publicly accessible, and you just DNAT traffic from the VPS to your $IP:$PORT .
Technically everything is possible; I just don’t have context on whether you have a static IP with your ISP or it changes every so often (daily, weekly, every n months). If it’s not static, you might consider using a VPN connection between the VPS and your router to keep the connection open at all times, and also to avoid exposing HAProxy directly to the live internet.
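If you go that route, a minimal wireguard config sketch for the VPS side could look like this (addresses, keys, and the interface name are placeholders):

# /etc/wireguard/wg0.conf on the VPS
[Interface]
Address = 10.0.0.1/24
PrivateKey = <VPS_PRIVATE_KEY>
ListenPort = 51820

[Peer]
# your home router
PublicKey = <ROUTER_PUBLIC_KEY>
AllowedIPs = 10.0.0.2/32

On the router side you’d add the VPS as a peer with Endpoint = <VPS_IP>:51820 and PersistentKeepalive = 25, which keeps the tunnel open through NAT even when idle.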
I’m running both, via docker.
Here’s the basic setup:
NGiNX is a standard installation, using certbot to manage the SSL certificates for the domains. Setup is via Nginx virtual hosts (servers), separate ones for Lemmy and Mastodon. Lemmy and Mastodon each run in their own Docker container, with different listening ports on localhost.
                 lemmy.domain.tld   +----------------+
                +------------------>| Lemmy          |
                |                   | 127.0.0.1:3000 |
+---------------+----+              +----------------+
| NGiNX with SSL     |
| and separate VHOSTS|
+---------------+----+              +----------------+
                |                   | Mastodon       |
                +------------------>| 127.0.0.1:3001 |
                 mastodon.domain.tld+----------------+
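A rough sketch of the two server blocks (certificate paths, and the exact proxy headers, are placeholders; certbot manages the real SSL lines, and Mastodon in particular needs more headers than shown here):

server {
    listen 443 ssl;
    server_name lemmy.domain.tld;
    # certbot-managed certificates (placeholder paths)
    ssl_certificate     /etc/letsencrypt/live/lemmy.domain.tld/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/lemmy.domain.tld/privkey.pem;
    location / {
        # Lemmy container on localhost
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 443 ssl;
    server_name mastodon.domain.tld;
    ssl_certificate     /etc/letsencrypt/live/mastodon.domain.tld/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mastodon.domain.tld/privkey.pem;
    location / {
        # Mastodon container on localhost
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}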
No problem. I’ll just go with an oversimplification.
The idea is that you just take whatever traffic hits port 443 and use iptables rules to route the traffic elsewhere; in this case:
Client --> [VPS port 443] --> [iptables] --> [home server port 443]
So, it’s basically just traffic forwarding from the VPS directly to your home server, either directly to your ISP IP address or via a wireguard IP address.
So all the traffic you are sending back from the VPS is in its original state, and the actual processing happens on your local/home server.
On the home server you have a web server of your choice listening on port 443, loaded with your SSL certificates. So a request is made to the VPS IP address, iptables just forwards the packets to your home server, and that is where the SSL/TLS termination happens. The client negotiates the TLS connection directly with your home server, and the web server on your home server then sends the request where you tell it to (reverse proxy to a Docker container, or it serves the content directly).
With this, you basically turn the VPS into a passthrough for traffic.
Here’s a quick test I did… the two servers are connected with a wireguard mesh.
On the VPS you need to have IP forwarding enabled:

net.ipv4.ip_forward=1
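To apply it without a reboot and keep it across reboots (the file name is just a convention):

sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/99-ip-forward.conf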
Your iptables rules would look like the ones below. Obviously, on the home server you can run the webserver on any port you like; it doesn’t have to be 443. But let’s keep it at 443 for the sake of argument.
# rewrite the destination of incoming HTTPS traffic to the home server
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination HOME_SERVER_IP:443
# rewrite the source address so replies route back through the VPS
iptables -t nat -A POSTROUTING -j MASQUERADE
If you want to drop the rules (note this flushes the entire nat table, not just these two):
iptables -t nat -F
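And to check what’s currently loaded in the nat table at any point:

iptables -t nat -L -n -v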
The best option is to directly NAT traffic from the VPS to your home server, either directly to your IP, or set up a wireguard peer, send traffic via wireguard to your local network, and do the SSL/TLS termination there.
You are best off exposing just port 443 on the VPS and moving that traffic over wireguard. The VPS will have your local peer’s public key, and you could implement wireguard key rotation to change the keys frequently.
Traffic sent back will be encrypted with the certificate, and even if they get the wireguard server key, you can rotate it; either way they will only see encrypted packets.
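A rough sketch of a manual rotation, assuming wg0 as the interface name and 10.0.0.2 as the home side’s tunnel IP (both are placeholders):

# on the home server: generate a fresh keypair
wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey
# load the new private key into the running interface
wg set wg0 private-key /etc/wireguard/privatekey

# on the VPS: remove the old peer and add the new public key
wg set wg0 peer <OLD_HOME_PUBLIC_KEY> remove
wg set wg0 peer <NEW_HOME_PUBLIC_KEY> allowed-ips 10.0.0.2/32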
It depends what kind of things you’re doing locally. If it’s just a website, then a reverse proxy is fine. Anything other than that, NAT would be the cleanest one.
LUKS on the disks would encrypt the data at the block storage level, and, in theory, they should not have a way of reading the block storage files directly. But since it is a VPS they can, technically, gather data from host memory.
The next step might be going down the dedicated server route, with LUKS encryption on the disks. The only thing needed there would be a sufficient network pipe.
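For what it’s worth, the basic flow on a dedi would be something like this (/dev/sdX and the mapper name are placeholders, and luksFormat destroys whatever is on the disk):

cryptsetup luksFormat /dev/sdX          # encrypt the raw disk
cryptsetup open /dev/sdX cryptdata      # unlock it as /dev/mapper/cryptdata
mkfs.ext4 /dev/mapper/cryptdata         # put a filesystem on top
mount /dev/mapper/cryptdata /mnt/data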
I tried it; it’s great if you want to get started, or if you want to run a VPN on a server that doesn’t support wireguard. My main gripe with the client is that it can’t do high speeds; it’s just too CPU bound when going close to gigabit transfer.
With wireguard I was able to get to 98% of gigabit transfer. It was fine for the month I was using it; in the end I just set up a wireguard mesh with Netmaker.
There is Headscale, where you can run your own hosted central server, so you’re not using the Tailscale one.
In the end Netmaker did what I wanted; however, they tend to introduce quite a few changes in their releases, so if you’re not super technical it might pose a challenge when upgrading, until they reach a super stable version. For example, the jump from 0.10.X to 0.20 had some big changes to the whole Netmaker internals. But that does not impact wireguard connectivity.
Same here, it’s just an M710q I believe, got it second hand. It came with 16GB RAM, a 512GB NVMe, and a 1TB SSD; I added an external 8TB drive for storage, and it works fine.
You can try Woodpecker CI/CD https://woodpecker-ci.org/ It is an open-source fork of Drone CI https://woodpecker-ci.org/faq
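A minimal pipeline sketch to get a feel for it (the image and commands are placeholders, and the syntax has shifted between versions, so check their docs for the current format):

# .woodpecker.yml
steps:
  build:
    image: golang:1.22
    commands:
      - go build ./...
  test:
    image: golang:1.22
    commands:
      - go test ./...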
My Lemmy instance is currently using about 2 GB of space; I’m hoping to set up Mastodon this weekend. There are cheap smaller dedicated servers from Kimsufi for like $10 with a 1/2 TB HDD.
Also, Hetzner Cloud has compute and disk separated, so you can scale one or the other.
I’ve got a 500GB SSD box from Hetzner, but I’m also hosting other things there; I dropped the VMs I previously had at Linode and consolidated there.
You could use object storage, like S3 or Wasabi, for Lemmy picture/media storage, if that’s what is eating up most of the space. “Cloud” providers are a bit bananas when you need more disk; at some point it’s cheaper to go down the dedicated route. Depends on your budget…
Edit: Others might chime in with a better answer; at the moment I’m not sure what your budget is, so it’s a bit of assumptions from my side.
It was sold to Squarespace. https://www.androidpolice.com/google-domains-takeover-squarespace/
I agree. Also, thanks neovim 0.10 for making me spend half a day tracking down that obscure line that was throwing errors.