You have my sympathy. I do not know of a sure way to get ISPs to behave. Especially not if they have a regional monopoly.
Thank you! :) I also notice I completely forgot the port exhaustion issue we see with larger networks behind too few IPv4 NAT addresses…
I guess I am lucky. All 3 ISPs available in my region provide IPv6 with a stable DHCPv6-PD-assigned prefix by default. (Norway)
If there is an IPv6-only service online that you want to reach from a v4-only client, you can set up a fixed 1:1 NAT on your firewall: define a fake internal IPv4 address, destination-NAT it onto the public IPv6 address of the service, and NAT64-embed your client's internal v4 address into the source IPv6 address for the return traffic. Then provide an internal DNS view with an A record pointing to the fake internal IP. It would work, but it does not scale well, since you would have to set this up for every IPv6 address.
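The embedding part is mechanical: RFC 6052-style NAT64 just places the 32-bit IPv4 address in the low 32 bits of a /96 prefix. A minimal sketch with Python's `ipaddress` module (the helper name is mine, and 64:ff9b::/96 is the well-known NAT64 prefix):

```python
import ipaddress

def nat64_embed(prefix: str, v4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address into the low 32 bits of a /96 NAT64 prefix."""
    net = ipaddress.IPv6Network(prefix)
    if net.prefixlen != 96:
        raise ValueError("expected a /96 NAT64 prefix")
    # indexing the network by the integer value of the v4 address
    # yields prefix + v4 in the low 32 bits
    return net[int(ipaddress.IPv4Address(v4))]

print(nat64_embed("64:ff9b::/96", "192.0.2.10"))  # → 64:ff9b::c000:20a
```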
A better solution would be to use a dual-stack SOCKS5 proxy with DNS forwarding, where the client uses the IPv6 of the proxy for the connection. But that does not use NAT, though.
The best solution is to deploy IPv6, of course. ;)
That is not how it works. You can have a home network on IPv6, and it can reach all of IPv4 via NAT (just like IPv4 does today). A network with only IPv4 cannot reach any IPv6 without a proxy that terminates the v4 connection and makes a new v6 connection, since IPv6 is backwards compatible but IPv4 is naturally not forwards compatible.
Also, it is the default deny of the stateful firewall, which always coexists with NAT because NAT depends on that state, that is the actual security in a NAT router.
That default deny is not in any way dependent on the NAT part.
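To make that concrete, here is a minimal nftables sketch (not a complete ruleset): the default deny comes entirely from the filter policy plus connection tracking, with no NAT rule anywhere.

```
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;  # default deny
        ct state established,related accept              # allow return traffic for outbound connections
        iif "lo" accept
    }
}
```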
But DNS rarely breaks. The meme about it being DNS's fault is more often than not just a symptom of the complexity of the IPv4 NAT problem.
If I had to guesstimate, I would say at least 95% of the DNS issues I have ever seen are just confusion about which DNS view you are in: mixing up inside and outside NAT records, or forgetting to configure the inside when doing the outside or vice versa. DNS is very robust and stable when you can get rid of that complexity.
That being said, there are people who insist that obscurity is security (sigh) and want to keep doing DNS views when using IPv6. But even then, things are much easier when the result would be the same in either view.
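For anyone who has not run into split-horizon DNS: in BIND it looks roughly like this (the network, zone name, and file names are made up for illustration). Every record change has to be made in both zone files, which is exactly where the inside/outside confusion comes from.

```
view "inside" {
    match-clients { 192.168.0.0/16; };
    zone "example.com" {
        type master;
        file "db.example.com.inside";   // answers with internal addresses
    };
};
view "outside" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "db.example.com.outside";  // answers with public addresses
    };
};
```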
I assume it is the normal fear of unknown things. It is hard to hate IPv6 once you have equivalent competence in IPv4 and IPv6.
I felt dirty, and broke so much shit, when I had to implement NAT on networks in the mid-90s. Nowadays, getting rid of NAT with IPv6 is much more liberating. The difference is staggering!
Now, the greatest and best effect of IPv6 is none of the above. It is that with IPv6 we have a slim hope of reclaiming some of what made the Internet GREAT in the first place, when we all stood on equal footing and anyone could host their own service. Today we are all vassals of the large companies that have made the common person into a CGNAT4444-using consumer, mindlessly lapping up whatever the large providers see fit to give us, with no way to even try to be a real and true part of the Internet. Fight the companies that want to make you an eyeball in their statistics. Set up your own IPv6 service on the Internet today!
You do as well, if you run any operating system from the last 10 years.
That should simply not be allowed. CGNAT for IPv4 is fine if they also provide proper IPv6.
Something like SearXNG? Or what do you imagine?
I know it is a complete joke, but every time I think of C++ I am reminded of this prank article: https://www-users.york.ac.uk/~ss44/joke/cpp.htm
Old post, but I do wonder why you got downvoted for saying it like it is. A good ISP will give you a /56, the minimum best practice; a great ISP will give you a /48. Your router will also participate in the WAN /64, but that is just the uplink, not something that will be used on the LAN. https://www.ripe.net/publications/docs/ripe-690/#4--size-of-end-user-prefix-assignment---48---56-or-something-else-
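The arithmetic behind those prefix sizes, as a quick sketch: every LAN segment gets a /64, so the delegated prefix length decides how many subnets you can carve out.

```python
def lan_subnets(prefixlen: int) -> int:
    """Number of /64 LAN subnets available from a delegated prefix."""
    return 2 ** (64 - prefixlen)

print(lan_subnets(56))  # 256 /64 subnets from a /56
print(lan_subnets(48))  # 65536 /64 subnets from a /48
```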
As if the exact same thing cannot happen in a closed-source codebase. It probably does daily, since in closed codebases due diligence and reviews cost money and nobody can see the state, so they are intentionally neglected.
Neither open source nor closed source is immune to the $5 wrench hack.
XFree86 was sometimes a mess. And I did not have a browser anymore when it refused to start, so man pages only.
I once rm -rf'ed all the DB files of a running database. I recovered the files via their inodes, since they were all still open in the running database. That was a mess.
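On Linux the trick works because unlinking only removes the directory entry; the inode survives as long as some process holds the file open, and the data stays reachable through /proc. A minimal sketch (the filename is made up, and this demonstrates the mechanism within one process rather than a real database recovery):

```python
import os

# create a file and keep a descriptor open, then delete the directory entry
with open("db.file", "w") as f:
    f.write("important data\n")
fd = os.open("db.file", os.O_RDONLY)
os.unlink("db.file")  # "rm" — gone from the directory, but the inode lives on

# copy the contents back out via the still-open descriptor in /proc
with open(f"/proc/{os.getpid()}/fd/{fd}") as still_open:
    recovered = still_open.read()
os.close(fd)

print(recovered, end="")  # → important data
```

For a real database you would look up the open (deleted) files under /proc/&lt;pid&gt;/fd of the database process and copy them out the same way, before the process exits.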
I find this to be least accurate with Debian. On other distros, a patch may or may not install a new version of the package, which can bring changes to behavior.
On Debian stable, security fixes are backported, so you can patch and be sure there are no changes to the behavior of the system. It is basically the reason all VMs I manage are Debian stable.
It is also true that they never crash, but that is expected of Linux. It is the extreme reliability that is the Debian killer feature for me.
Unattended-upgrades keeps all systems securely patched, but there is a need for a reboot for kernel updates now and then.
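For reference, the reboot part can also be automated. A minimal sketch of the relevant apt configuration (excerpts; exact paths may differ per setup):

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
// reboot automatically when a kernel update requires it
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "02:00";
```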
All chat tools after IRC have been trash for large communities, and that includes Slack. IRC somehow still works with 1500 people in a channel; I cannot explain how. With a logging bot, the discussions can be archived for Google searchability. I guess that could be true for a Discord or Slack too, but I have never seen it implemented. In most Slacks I cannot search more than 60 days back.
How often depends on how much work it is to recreate the data, or the consequences of losing it.
Some systems have no real data locally and get a backup every week. Most get a nightly backup. Some with a high rate of change get a lunchtime/middle-of-the-workday run as well.
Some have hourly backups/snapshots, where recreating data is impossible. Critical databases get hourly backups plus transaction-log streaming offsite.
How long to keep history depends on how long an error could go unnoticed, but a minimum of 14 days. Most have 10 dailies + 5 weeklies + 6 monthlies + 1 yearly.
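That 10/5/6/1 rotation can be sketched as a small retention function: keep the newest N dailies, plus the newest backup in each of the last few weeks, months, and years. This is my own illustration of the scheme, not any particular backup tool's algorithm.

```python
from datetime import date, timedelta

def backups_to_keep(dates, dailies=10, weeklies=5, monthlies=6, yearlies=1):
    """Return the set of backup dates retained under a 10/5/6/1 rotation."""
    dates = sorted(dates, reverse=True)  # newest first
    keep = set(dates[:dailies])

    def newest_per(key_fn, count):
        buckets = {}
        for d in dates:
            buckets.setdefault(key_fn(d), d)  # first hit is the newest in that bucket
        return sorted(buckets.values(), reverse=True)[:count]

    keep.update(newest_per(lambda d: (d.isocalendar()[0], d.isocalendar()[1]), weeklies))
    keep.update(newest_per(lambda d: (d.year, d.month), monthlies))
    keep.update(newest_per(lambda d: d.year, yearlies))
    return keep

# e.g. two years of nightly backups
history = [date(2024, 1, 1) + timedelta(days=i) for i in range(730)]
kept = backups_to_keep(history)
```

Categories overlap (the newest daily is also the newest weekly, monthly, and yearly), so the kept set is at most 22 backups.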
If you have paper receipts and can easily recreate lost data, daily seems fine.
I install molly-guard on important machines for this reason. It is so easy to run a reboot in the wrong SSH session.