There are some use cases other than web page compatibility. One for me is in dealing with firewall and proxy policy: if the agent is a browser and comes in on specified explicit ports, then force authentication, things of that nature.
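As a rough illustration of that kind of policy, here’s what it might look like in Squid; the port, auth helper path, and agent regex are all assumptions for illustration, not a drop-in config:

```
# Hypothetical Squid snippet: challenge browser user agents arriving on
# the explicit proxy port for credentials; every value here is an example.
http_port 3128

auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
auth_param basic realm internal-proxy

acl explicit_port localport 3128
acl browser_agents browser -i (mozilla|chrome|firefox|safari)
acl authed proxy_auth REQUIRED

# Browsers on the explicit port must authenticate; unauthenticated
# requests get a 407 challenge, and non-browser agents fall through
# to the rest of the policy.
http_access allow explicit_port browser_agents authed
http_access deny explicit_port browser_agents
```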
Some dingbat that occasionally builds neat stuff without breaking others. The person running this public-but-not-promoted instance because reasons.
That’s my go-to for quick scratch-pad notes, generally something I only need for a one-time deal.
I use the Bitwarden secure note feature for more permanent things.
If it is a more ongoing documentation deal that needs organization, I like Bookstack.
All depends on the purpose one uses it for.
They’re a part of the mix. Firewalls, proxies, WAF (often built into a proxy), IPS, AV, and whatever threat-intelligence systems one may like all work together to do their tasks. Visibility into the traffic is important, as is keeping the management burden low enough. I used to have to manually log into several boxes on a regular basis to update software, certs, and configs; now a majority of that is automated and I just get an email to schedule a restart if needed.
A reverse proxy can be a lot more than just host-based routing, though. Take something like a Bluecoat or an F5 and look at the options on it. Now you might say it’s not a proxy then because it does X/Y/Z, but at the heart of things, creating that bridged intercept for the traffic is still the core functionality.
It depends on what your level of confidence and paranoia is. Things on the Internet get scanned constantly; I actually noticed one of the scanners in my logs, hit them up via the associated website, and now get routine reports from them. Just take it as a given that someone out there is going to try admin/password against any login screen that’s facing the web.
For the most part, so long as you keep things updated and use reputable, maintained software for your system, the larger risk is going to come from someone clicking a link in the wrong email rather than from someone haxxoring in from the public internet.
I have a dozen services running on a myriad of ports. My reverse proxy setup allows me to map hostnames to those services and expose only 80/443 to the web, plus an entity now needs to know a hostname instead of just an exposed port. IPS signatures can help identify abstract hostname scans, and the proxy can be configured to permit only designated sources. Reverse proxies also commonly get used for SSL offloading, which permits clear-text observation of the traffic between the proxy and the backing host. There are plenty of other use cases for them out there too; don’t think of it as some one-trick on/off access-gateway tool.
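For a concrete sketch of the hostname-mapping and offload bits, an nginx server block might look like this; the hostname, backend address, cert paths, and allowed subnet are all made up for illustration:

```
# Hypothetical nginx sketch: route by hostname, terminate TLS at the
# proxy, and restrict who can reach it; every value here is an example.
server {
    listen 443 ssl;
    server_name wiki.example.lan;

    ssl_certificate     /etc/nginx/certs/wiki.example.lan.crt;
    ssl_certificate_key /etc/nginx/certs/wiki.example.lan.key;

    # Permit only designated sources.
    allow 192.168.1.0/24;
    deny  all;

    location / {
        # SSL offload: the hop to the backing host is plain HTTP, so
        # that leg of the traffic can be observed in clear text.
        proxy_pass http://10.0.0.21:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```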
Zabbix or Cacti are nice ways to draw maps that also serve a functional role in keeping track of the activity and alerting.
Looks like it was just updated today with a pending transfer, so it’s either the owner transferring registrars or someone took it over.
https://www.whois.com/whois/funkwhale.audio
The domain expired on the 19th, so it’s legitimately offline. It has always seemed to be a low-adoption platform; we’ll have to see the status in the next few days.
Exactly, the term has been pretty well claimed by people who host things like, oh say, their own Lemmy service or such.
Self-hosted in this context is pretty well aimed at the ‘I run a service on my own time and usually my own gear’ crowd. IT for a company is an entirely separate thing. Professional self-hosting would be more at home in a community like ‘serveradmin’.
It depends on the load on the disk. My main Docker host pretty well has to be on the SSD for things not to complain about access times, but there are a dozen other services on the same VM. There’s some advisory out there that things with constant I/O should avoid SSDs so as not to wear out the write endurance too fast, but I haven’t seen anything specific on just how much is too much.
Personally I split the difference and run the system on SSD and host the bulk data on a separate NAS with a pile of spinning disks.
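If you’re doing that split with Docker, a compose file can mount the NAS share directly; this is just a sketch, and the image, NAS address, and export path are placeholders:

```
# Hypothetical compose snippet: small, I/O-heavy state on the SSD host,
# bulk data pulled from the NAS over NFS; all values are examples.
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - ./config:/config   # app state stays on the local SSD
      - media:/media       # bulk library lives on the spinning disks

volumes:
  media:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,nfsvers=4,ro"
      device: ":/volume1/media"
```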
I know some VPN providers have their own DNS service that you can use, similar to other filtered public DNS. If you mean an in-house DNS/VPN gateway, then what you want is probably best served by something like a firewall distro (OPNsense/pfSense) to handle both of them.
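For the provider-DNS case, it’s usually just a line in the tunnel config; here’s a hedged wg-quick example where the keys, endpoint, and the 10.x resolver address are all placeholders:

```
# Hypothetical WireGuard client config using the provider's in-tunnel
# filtered resolver; every value below is a placeholder.
[Interface]
PrivateKey = <client-private-key>
Address = 10.64.0.2/32
DNS = 10.64.0.1          # provider's DNS, reachable only via the tunnel

[Peer]
PublicKey = <provider-public-key>
Endpoint = vpn.example.net:51820
AllowedIPs = 0.0.0.0/0
```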
It’s been a while since I used Proxmox, but that’s the nature of a lot of that free/corporate-type software. The free ‘community’ edition is pretty well a public beta that you can get forum-level support for, or sometimes you can get paid support at some limited level.
Lots of them. If you want something large and powerful you could set up Security Onion, mirror a port, and it’ll capture everything, plus graph and slice things up all over. It needs a fairly hefty box not to choke if it gets fed a lot, though.
https://www.grepular.com/Transparent_Access_to_I2P_eepSites
Something like this makes logical sense, but I can’t say I’ve ever tried such a feat. As a general rule, though, keeping the gateway/firewall free of extraneous software is a good practice just to limit the potential attack surface. If you try it, I’d create a dedicated VM somewhere to host the I2P/Tor gateway, to keep it off the network edge directly.
Not sure if you mean to run the service on the FW or what ‘handle’ means here. If you have a second box, though, it would be easy enough to run all those services on a distinct server and then route their relevant ports through it with a policy-based route on the firewall. That way you would only have to set up one Tor node, for example, and just have the client machines use that.
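On a plain Linux firewall, that kind of policy route might be sketched with fwmark rules; the interface name, ports, and the services box at 10.0.0.50 are assumptions:

```
# Hypothetical policy-based route: steer Tor SOCKS (9050) and I2P HTTP
# proxy (4444) traffic from the LAN to a dedicated services box.
iptables -t mangle -A PREROUTING -i lan0 -p tcp \
    -m multiport --dports 9050,4444 -j MARK --set-mark 1

# Marked packets consult table 100, whose default route is that box.
ip rule add fwmark 1 table 100
ip route add default via 10.0.0.50 table 100
```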
I know it exists; I’ve gotten it working with one of those AD-compatible Samba-based DCs before, but not without some messing about. I’d really like to see it be as simple as it is in Windows before saying it’s a drop-in replacement.
I tried the other day with Mint and ran into something where one of the search results recommended manually editing the hosts file to point to the DC and Kerberos address. That kind of thing shouldn’t be required and is the kind of buggery I’d like to see sorted out.
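For what it’s worth, on an Ubuntu-family distro like Mint the join is supposed to come down to realmd/sssd rather than hosts-file hacks; a hedged sketch, where the domain and join account are placeholders and the client’s DNS already points at the DC:

```
# Hypothetical realmd/sssd join on Mint/Ubuntu; the domain name and
# the join account are placeholders.
sudo apt install realmd sssd sssd-tools adcli krb5-user packagekit

# The DC must be resolvable via DNS -- point the resolver at it rather
# than hand-editing /etc/hosts.
realm discover corp.example.lan

# Join the domain and create home directories at first login.
sudo realm join --user=Administrator corp.example.lan
sudo pam-auth-update --enable mkhomedir
```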
I have not; the last time I made a real effort at moving to Nix for games was quite a while ago. The big factor is whether I can get GOG working, since that’s the preferred platform here.
Probably worth a shot. I’ve gotten it working on a version of Ubuntu in the past, but it was far from the ‘select domain, give join creds, and reboot’ simplicity that it is with Windows yet.
There is a separate Thunderbird app from K-9 now, so apparently it’s not just an update. It’s obnoxious, though, that the new app lists a bunch of collected data types while K-9 doesn’t list any. Maybe that’s just a legacy thing, since K-9 has been around forever and they didn’t list them originally, but it’s still upsetting for the privacy-minded.
TB:
K-9: