• Scoopta@programming.dev

    Should probably fix that, given that we’ve been out of IPv4 addresses for over a decade now and v6 is only becoming more widely deployed.

    • renzev@lemmy.worldOP

      Agreed. Though I wonder if IPv6 will ever displace IPv4 in things like virtual networks (Docker, VPNs, etc.), where there’s no need for a bigger address space.

      • Captain Janeway@lemmy.world

        I hope so. I don’t want to manage two different address spaces in my head. I’d prefer it if one standard were just the standard.

      • Domi@lemmy.secnd.me

        Yes, because Docker becomes significantly more powerful once every container has its own publicly addressable IP.

        IPv6 support in Docker is still lacking in some areas right now, though, so add that to the long list of IPv6 migration todos.
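
        As a rough illustration (a minimal sketch using Python’s standard ipaddress module; the /64 is a documentation prefix, not a real allocation), here’s what per-container global addressing buys you:

        ```python
        import ipaddress

        # Hypothetical /64 assigned to a container network (2001:db8::/32 is
        # reserved for documentation, so this is purely illustrative).
        docker_net = ipaddress.ip_network("2001:db8:d0c::/64")
        hosts = docker_net.hosts()

        # With NATed IPv4 you juggle host ports: -p 8080:80, -p 8081:80, ...
        # With a global IPv6 address per container, each one can bind :80 itself.
        for name in ("web", "api", "metrics"):
            addr = next(hosts)
            print(f"{name}: http://[{addr}]:80/")
        ```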

      • Justin@lemmy.jlh.name

        I’m using IPv6 on Kubernetes and it’s amazing. Every Pod has its own global IP address. There is no NAT and no giant ARP table slowing down the other computers on my network. Each of my nodes announces a /112 for itself to my router, letting it hand out addresses to over 65k pods. There is no feasible limit to the number of IP addresses I could assign to my containers and load balancers, and no routing overhead. I have no need for port forwarding on my router or to worry about dynamic IPs, since I just have a /80 block with no firewall that I assign to my public-facing load balancers.

        Of course, I only have around 300 pods on my cluster, and realistically it’s not possible to run over 1 million containers in current Kubernetes clusters due to other limitations. But it is still a huge upgrade in reducing overhead and complexity, and in increasing scale.
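
        For anyone checking the math, a quick sketch with Python’s ipaddress module (documentation prefixes, not my real allocations):

        ```python
        import ipaddress

        # A /112 per node leaves 128 - 112 = 16 host bits -> 65,536 pod addresses.
        node_block = ipaddress.ip_network("2001:db8:0:1::/112")
        print(node_block.num_addresses)  # 65536

        # A /80 leaves 48 host bits for public-facing load balancers.
        lb_block = ipaddress.ip_network("2001:db8:0:2::/80")
        print(lb_block.num_addresses)  # 281474976710656 == 2**48

        # Carving per-node /112s out of a single /64 yields 2**48 node blocks.
        site = ipaddress.ip_network("2001:db8:0:1::/64")
        print(next(site.subnets(new_prefix=112)))  # 2001:db8:0:1::/112
        ```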

      • 30p87@feddit.de

        I wish everything would just default to a Unix socket in /run, with only nginx handling HTTP and stream reverse proxying.

        • verstra@programming.dev

          Wait, but if you have, for example, an HTTP API and you listen on a Unix socket for incoming requests, there’s still quite a lot of overhead in parsing HTTP headers. It’s not much, but it also can’t be the recommended way to build network applications.

          • WaterWaiver@aussie.zone

            Replacing a TCP socket with a UNIX socket doesn’t affect the amount of headers you have to parse.
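
            A minimal sketch to illustrate (toy server, hypothetical socket path, not production code): the HTTP parsing is byte-for-byte identical; only the bind call changes.

            ```python
            import socket

            def serve(sock: socket.socket) -> None:
                """Identical HTTP handling regardless of the underlying transport."""
                sock.listen(1)
                conn, _ = sock.accept()
                request = conn.recv(4096)
                # Header parsing works the same over TCP or a Unix socket:
                request_line = request.split(b"\r\n", 1)[0]
                print("got:", request_line.decode())
                conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
                conn.close()

            # TCP variant:
            tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            tcp.bind(("127.0.0.1", 8080))

            # Unix variant -- the only line that differs is the bind:
            unix = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            unix.bind("/run/myapp.sock")  # hypothetical path; needs write access

            serve(unix)  # or serve(tcp); the HTTP bytes look the same either way
            ```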

    • 0x0@programming.dev

      > we’ve been out of IPv4 for over a decade now

      Really? Haven’t had trouble allocating new VPSs with IPv4 as of late…

      • frezik@midwest.social

        You’re probably in a country that got a ton of allocations in the 90s. If you were in a country that was a little late to build out its infrastructure, or tried to set up a new ISP in just about any country, you’d have a much harder time.