The IPv6 Mess (2002) (cr.yp.to)
29 points by commandersaki on Jan 1, 2022 | 68 comments


This comes up every so often but it's a fundamentally broken idea and always has been. You can't fit a quart into a pint pot, and you can't fit more than 2^30 routeable endpoints into the IPv4 internet no matter how clever you are (and you'd have a terminally fragmented address space long before you reach 2^30). So as long as there's even one non-upgraded router on the Internet, any "compatible extension" of IPv4 will route packets to the wrong place and be unusable for anyone who doesn't have an IPv4 address.


You can easily fit 128 bit addresses into an IPv4 header - or you could in the 90s, before filtering out IP options became more widespread. See my comment elsewhere in this thread on how you could arrange the header and address to make this work.

Legacy systems will route based only on the legacy part of the address - that's the whole point of a backward compatible extension - and you will see the benefits of the extended address space when major infrastructure upgrades, just like in IPv6. The key difference is that you don't have to maintain two stacks: 128 bit addresses are enabled by simply upgrading existing IPv4 infrastructure with zero operator involvement. You upgrade the software on your IPv4 server and it's suddenly reachable by clients with addresses outside the IPv4 range, with the only condition that all intermediary routers preserve IPv4 options.


> you will see the benefits of the extended address space when major infrastructure upgrades, just like in IPv6.

But you don't, because you can't use the "extended" address space until every single router everywhere supports it (since routing is dynamic and your packets might be routed anywhere). Whereas with IPv6 as soon as there's an upgraded path between two points, they can gain the benefits of having individual addresses and not needing NAT.


> But you don't, because you can't use the "extended" address space until every single router everywhere supports it

Let's break that down into two cases:

1. Hierarchically routed addresses that "split" existing IPv4 addresses into, say, a /32 domain for each IPv4 address. Since the routing is hierarchical, all the infrastructure needs to know is how to route to the upper (IPv4) part of the address. They are not concerned with interpreting the lower part, so as long as they pass along IP options you can have IPng "islands" that use the lower extended space, connected via the regular IPv4 Internet. This removes the need for the stateful NAT hacks, so while not a solution to address space exhaustion, it's a good feature promoting adoption.

2. Fully extended 128 bit addresses using the upper, say, 64 bits. These clearly cannot be routed to non-compliant IPv4 routers... but isn't that true for IPv6 also? It would make no sense for an IPng router to route such a packet to an IPv4 router knowing it would not be understood and would lead to misroutes. So two extended endpoints will be able to connect only when a full IPng route can be found between them, not when every router on the internet is upgraded.


> It would make no sense for an IPng router to route such a packet to an IPv4 router knowing it would not be understood and would lead to misroutes. So two extended endpoints will be able to connect only when a full IPng route can be found between them, not when every router on the internet is upgraded.

Right, but how do you implement that? The IPng router only knows other routers by their addresses - it doesn't know which ones support IPng and which don't (and even if it knows at a given point in time, how will it know if the router at a given address fails over or changes configuration?). The only way to make this work is to ensure that the IPng network is completely segregated from the IPv4 network - which is exactly what's being complained about with IPv6!


Maybe I don't see the full complexity, but it seems to me like an implementation detail, when setting up the routing tables to discover IPng capability, for example using a dedicated ICMP structure. Compared to upgrades required for BGP etc., this is trivial.

IPv6 is in a way "naturally protected" against such a "short-circuit" due to the different structure and IP version of the packet, but fundamentally it's the same problem, you can't expect an IPv4 box to understand it.

The fundamental complaint against IPv6 is that it requires operator intervention on each internet connected box to operate. Whereas a virtual IPng mesh over existing infrastructure could be automated completely, when a box is upgraded it will respond to ICMPs from peers and start seeing the extended space.


> Maybe I don't see the full complexity, but it seems to me like an implementation detail, when setting up the routing tables to discover IPng capability, for example using a dedicated ICMP structure. Compared to upgrades required for BGP etc., this is trivial.

I don't see how you could ever discover a reliable route, because when a packet arrives at the right place you can never know whether it was routed correctly because all the hosts in between understood IPng, or whether it was accidentally routed to the correct place by an IPv4 router - in which case packets following the same route will likely get misrouted as that router's routing decisions change in response to shifting load etc. You'd have to have a kind of ICMP structure that was never routed by IPv4 routers - but then we're right back to having a completely segregated IPng network.

> IPv6 is in a way "naturally protected" against such a "short-circuit" due to the different structure and IP version of the packet, but fundamentally it's the same problem, you can't expect an IPv4 box to understand it.

Right, but that's much easier to deal with. If you have IPng as an extension of IPv4 then you're practically guaranteed nondeterministic routing loops that affect some but not all packets, and that's much harder to deal with than just not having a route.

> The fundamental complaint against IPv6 is that it requires operator intervention on each internet connected box to operate. Whereas a virtual IPng mesh over existing infrastructure could be automated completely, when a box is upgraded it will respond to ICMPs from peers and start seeing the extended space.

What's the part that makes a difference? People have experimented with things like silently enabling IPv6 on an OS upgrade; the reason it's a bad idea is that some percentage of IPv6 routers etc. will be misconfigured, and that would still be the case with IPng.


While I don't want to go into the woods with this, those don't seem to be show stoppers, just minor technical issues somebody skilled in the art would have been able to solve some 25 years ago.

For example, the discovery protocol can be tuned so that no legacy box has any reason to forward the packet, say using TTL=1 etc., while extended boxes will recognize the magic datagram and take appropriate action. Routing loops would not form if they don't exist already; the extended links are, at most, a subset of the non-looping topology already established - you can't turn a tree into a ring by removing branches.
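
A rough sketch of what such a probe could look like (the port number and magic payload here are made up, purely to illustrate the TTL=1 trick):

  import socket

  # Hypothetical IPng discovery probe. TTL=1 means no legacy router will
  # ever forward it, so only the directly connected next hop can answer.
  DISCOVERY_PORT = 10101               # made-up port, for illustration only
  MAGIC = b"IPNG-DISCOVER-v1"          # made-up magic payload

  s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 1)
  s.settimeout(2)
  s.sendto(MAGIC, ("192.0.2.1", DISCOVERY_PORT))    # next-hop router's address (example)
  try:
      reply, _peer = s.recvfrom(512)
      ipng_capable = reply.startswith(MAGIC)        # upgraded boxes echo the magic back
  except socket.timeout:
      ipng_capable = False                          # legacy next hop: stay plain IPv4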

> What's the part that makes a difference?

The most important lesson of the IPv6 mess is that things that need to be explicitly configured to work will always break, in a vicious circle of "nobody uses that, so don't bother, so nobody can use it". So any friendly upgrade path should require zero configuration to interoperate gracefully with both IPv4 and the new protocol. You should configure only the features you want to use - say, extended address space.

Clearly IPv6 fails this test. Because you can't rely on any local IPv6 connection to actually deliver connectivity, you can't simply make IPv6 the default, since it will break many installations.

When you upgrade the software on your equipment to IPng, the existing IPv4 connection becomes an IPng connection. Depending on the capability of upstream, you will be able to use some or all of the IPng features, but existing functionality will never break.

Routers that upgrade don't require any special configuration; in the absence of extended routes they pass traffic through obliviously, yet they are ready from day one to work with extended routes if any are published. Whereas an IPv6 router is essentially two distinct routers in one: the IPv6 part is useless without sysadmin attention and is just more attack surface.

This aspect of zero configuration, plug and play compatibility both backward and forward, is fundamental. Almost any hardware you can find on the internet today is IPv6 capable, sometimes for decades, yet nobody can be bothered to configure it to work on the weird parallel internet to which no customers are demanding access.


> For example, the discovery protocol can be tuned so that no legacy box has any reason to forward the packet, say using TTL=1 etc., while extended boxes will recognize the magic datagram and take appropriate action.

At that point discovery is following different routing rules from regular packets, which isn't going to work. You could set the same magic flag on all next-gen packets, but then we're right back to having separate networks.

> Routing loops would not form if they don't exist already; the extended links are, at most, a subset of the non-looping topology already established

In that case the whole thing is completely pointless? The whole point of all this is to be able to have more routing destinations than IPv4.

> When you upgrade the software on your equipment to IPng, the existing IPv4 connection becomes an IPng connection. Depending on the capability of upstream, you will be able to use some or all of the IPng features, but existing functionality will never break.

> Routers that upgrade don't require any special configuration; in the absence of extended routes they pass traffic through obliviously, yet they are ready from day one to work with extended routes if any are published. Whereas an IPv6 router is essentially two distinct routers in one: the IPv6 part is useless without sysadmin attention and is just more attack surface.

This is concretely the same for IPv4+IPv6. The difference between "routing to IPv4 addresses over IPv4 only" and "routing according to the new system" is just as big whether you call it a new protocol or call it an extension to an existing protocol.

There's no way to let legacy hosts connect to hosts that don't have IPv4 addresses. There's no way to let hosts that don't have IPv4 addresses connect to legacy hosts (except for NAT or equivalents like DS-Lite). The only thing backwards compatibility can ever do is let hosts with IPv4 addresses use your protocol to communicate with other hosts with IPv4 addresses - but there's no motivation for those hosts to upgrade since they can already use IPv4.

> This aspect of zero configuration, plug and play compatibility both backward and forward, is fundamental. Almost any hardware you can find on the internet today is IPv6 capable, sometimes for decades, yet nobody can be bothered to configure it to work on the weird parallel internet to which no customers are demanding access.

Every possible solution makes hosts without IPv4 addresses a weird parallel internet, there's no getting away from that, and so as long as IPv4 addresses were available adoption was always going to be an uphill struggle. IPv6 at least delivers benefits to users without requiring every router on the internet to upgrade first - as soon as there's one server with an IPv6 address then clients have a reason to upgrade, and vice versa. Technically-minded gamers and voice-chatters do make the effort to enable IPv6, or even ask their ISP to enable it. It's not much, but it's a bigger incentive than exists for IPng.


> The IPv6 designers made a fundamental conceptual mistake: they designed the IPv6 address space as an alternative to the IPv4 address space, rather than an extension to the IPv4 address space

Very much this. I've so given up hope on IPv6 that I feel it is almost time to create IPv8, designing it to be an extension to IPv4, and ditching IPv6 completely.


Strange time to give up on IPv6. Global availability is ~36% now and in many countries it crossed 50% in 2021. If anything it's time to ditch IPv4 once and for all.


And yet we are probably multiple decades away from 90% adoption and it is quite possible given the level of coordination required that we may never get to 100% adoption. We are more than 20 years in and I am still required to support ipv4 if I want to reach the market reliably while ipv6 has little incentive for me to support other than a general sense of goodwill/responsibility and the vague hope that someday it will result in an even larger network.

We need an expanded address space. The ipv6 rollout is proving to be one of the least effective ways to get it in practical terms.


> And yet we are probably multiple decades away from 90% adoption

IPv4 prices were in the mid-US$30s/IP in early 2021 and they're now around the mid-$50s:

* https://auctions.ipv4.global

I think The Market™ will decide that trying to run IPv4 in a lot of places will be too expensive, and that end-points will get assigned IPv6 with various translation mechanisms being run for the 'legacy' Internet.

Heck, even AWS is finally allowing IPv6-only infrastructure:

* https://aws.amazon.com/blogs/networking-and-content-delivery...


> And yet we are probably multiple decades away from 90% adoption and it is quite possible given the level of coordination required that we may never get to 100% adoption.

100% adoption will never come. Just look how many companies still use COBOL software that was developed in the 70s.

The question rather is: when will IPv4 become insignificant enough that your company won't have to care about it anymore?


The statistics would seem to support a slightly more optimistic view: https://www.google.com/intl/en/ipv6/statistics.html

I see a linear trend: once adoption started in earnest around 2015, about 30% of users have been converted to IPv6. At some point, it'll be more common to run into issues with v4, which will drive more adoption. The toolchain now needs to support v6, and defaulting to it will become more common. The conversion will limit the scarcity of v4 to some extent, but continued growth of internet access and connected devices adds pressure on the other side.


IPv6 has an entire network range reserved for the IPv4 address space. You need an IPv4 capable router set up, but you can natively reach IPv4 from IPv6.

The problem is connectivity the other way around, but it's simply impossible to cram more address space into IPv4. You'd need a protocol on top of IPv4 and special routing software or hardware on either end of the connection to hack backwards compatibility into the old address ranges.


That’s fine, nat can do that just fine. But ipv6 fans hate nat, so instead we end up with proponents saying “run dual stack” and those who don’t want the added overhead simply ignoring ipv6 entirely


It's not just the added overhead.

For many, having a *private* IPv4 space is not just a feature, but a requirement.

The idea of a completely globally flat address space makes perfect technical sense.

However, policy sense may differ as Your Mileage Varies.


fc00::/7 is the space for that private range; it's as routable and subnettable as 10.0.0.0/8


> fc00::/7 is the space for that private range

A small nitpick: the lower part of that range is reserved for future expansion, so the private range is actually fd00::/8. And you should pick a random /48 within that to be your private 10.x.x.x-like range, not use the whole /8. (See https://en.wikipedia.org/wiki/Unique_local_address#Definitio... for more details.)
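
If you want one, it only takes a couple of lines to generate (a quick Python sketch; RFC 4193 suggests a hash-based method, but random bits are what people use in practice):

  import secrets, ipaddress

  # ULA prefix: 0xfd followed by 40 random bits (10 hex nibbles) = your /48.
  global_id = secrets.token_bytes(5)                            # 40 random bits
  prefix = ipaddress.IPv6Network((b"\xfd" + global_id + b"\x00" * 10, 48))
  print(prefix)    # e.g. fd4e:91c3:5b27::/48 - use this like 10.0.0.0/8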


Excellent. Learned something important today.

As long as there is a provision to soak it in CIDR.

I had looked at the dual-stack portion of AWS some years ago and did not see how this worked.

Doubtless, my oversight.


> As long as there is a provision to soak it in CIDR.

There is; the idea is to fill up random bits to /48, leaving you with 16 bits of subnetting space until you hit /64. If you're not concerned about aggregating networks, a common pattern on larger networks is to put the VLAN ID there in "false hex", i.e. VLAN 1357 = fd……:1357::/64.

If you need more space you can either generate more random /48 prefixes, or widen the prefix length on your existing /48. The latter is technically against the spirit of ULAs, but… only the pedants will care.
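
Concretely (borrowing the example prefix from a sibling comment; Python, just to show where the bits go):

  import ipaddress

  # The subnet ID of a /48 site sits in bits 48-63, i.e. 64 bits up from the bottom.
  site = ipaddress.IPv6Network("fdb9:3667:a8da::/48")     # your random ULA /48
  vlan = 1357                                             # VLAN ID written as "false hex"
  subnet_id = int(str(vlan), 16)                          # 1357 -> 0x1357
  subnet = ipaddress.IPv6Network((int(site.network_address) | (subnet_id << 64), 64))
  print(subnet)    # fdb9:3667:a8da:1357::/64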


Nothing in the IPv6 spec forces you to publicly advertise your globally flat address space. Even if you do choose to advertise your prefixes publicly, you can still use firewalls to block some or even all traffic.


Not this discussion again.

Okay, you can indeed do that... but how you get an address is another question. You might answer "Just go to the RIR", sure, but ASNs aren't given willy-nilly, applicants need to prove they have distinct routing needs for external peering, and some RIRs do insist on this. The loosest RIR in terms of regulations is RIPE, but what about other companies in other parts of the world?


You get your private prefix by appending 10 random hex nibbles to "fd". For example, fdb9:3667:a8da::/48. (Please make your own random prefix.)

You also missed the point of this sub-thread since the question was about private space. IPv6 fd00::/8 is IPv4 10.0/8. The only difference is an added bit of semantics to make merging networks easier.


The problem is that ipv6 advocates say "just have your network auto-renumber itself every time you change upstream provider", "run multiple IP addresses on every host" etc

I don't think I've ever seen anyone suggesting using a /48 in fd00:: and NAT66 at the edge, and NAT64 is discouraged rather than maintaining a dual stack solution (and if you need to maintain a working ipv4 solution, why even bother with ipv6)


> You might answer "Just go to the RIR", sure, but ASNs aren't given willy-nilly

You get a PI address block and ask your ISP to advertise it on your behalf in their public ASN, e.g.:

> If multi-homing to more than one provider, the customer must obtain an Autonomous System (AS) number, available at http://www.arin.net. If multi-homing only to us, we will provide a private AS number.

* https://support.allstream.com/knowledge-base/bgp-request-inf...


> You get a PI address block and ask your ISP to advertise it on your behalf

And? In some regions you get nothing, because none of the ISPs are willing to provide this service even for money?

I know provider-independent addresses, but don't pretend it'll solve the problem of IPv6. The fact that you need to do some service wrangling rather than "take this IP for private use, just don't ask us when IP conflicts happen" (and ULA implementation is inconsistent even in enterprise routers) means headaches abound that didn't exist for IPv4.


> I've so given up hope on IPv6 that I feel it is almost time to create IPv8, designing it to be an extension to IPv4, and ditching IPv6 completely.

And new code has to be written to take advantage of that new protocol. And that new code has to be deployed on every single network element. Don't forget a new DNS record type so that old code is not confused by a new address format. And then everyone has to be convinced to deploy that new code and start assigning those new addresses.

Just like had/has to be done for IPv6.

IPv8 will/would have the exact same issues with deployment as IPv6.


IPv6 has already won. Google and Facebook report 50% or higher IPv6 traffic in the United States.

There is absolutely no chance of a replacement at this point


26 years later, still not the default and at about 50% only for a couple of giant companies - and that excludes the fact that most traffic is not from users but from many other third parties - I would not call this statistic a win.

If I connect to a Wi-Fi in a Starbucks with IPv6 by default or my recently bought mobile line comes with IPv6, then I would say it is a win.

It is a huge improvement but the story of the failure is a good topic for academic research for future protocols.


> my recently bought mobile line comes with IPv6, then I would say it is a win

I’m surprised it doesn’t. In Europe most providers only give you IPv6 and CGNAT IPv4. Even my landline does that.


It's getting more and more used over time but yes, it's not that common. I'm here in Estonia and the mobile carriers - the ones I've used - use (CGNAT) IPv4.


> my recently bought mobile line comes with IPv6

it quite likely does today, IPv6 is common with mobile carriers.


Almost all ipv6 traffic is mobile, which is running under far more totalitarian execution environments (mobile OSs) and centrally controlled addressing.

While that is an ipv6 success, that is greenfield. And the telecoms are basically running a huge ipv6 nat when they need ipv4 routing.

Which is heretical to ipv6.

The issue of migration of existing ipv4 is still sucky.

Meanwhile normal desktop OS and infrastructure and consumer routers are (20 years after this article) a disaster.


Also Google: blocks mail coming from IPv6 space because of spam, effectively making it a non-option for hosted servers.


They're accepting my mails on IPv6 just fine… are you sure you're not simply missing a PTR record for your IPv6 address? That seems to be the most common rake to step on.

(Of course there's a whole bunch of other anti-spam heuristics, but missing PTR is the most common in my limited personal experience.)


Could not agree more. They also forgot to make sure every single one of the most common networking tools would work with IPv6 with no flaws.

Then there is the issue of an RFC 1918 equivalent for IPv6, another whole can of worms...

https://serverfault.com/questions/216602/what-is-the-ipv6-eq...


How would making it an extension help? You still have the fundamental compatibility problem that you can't address the wider address space with a narrower address.


> You still have the fundamental compatibility problem that you can't address the wider address space with a narrower address.

There are ways to work around that. All of them (and some more) were tried and/or are in use with IPv6: https://en.wikipedia.org/wiki/IPv6_transition_mechanisms


Those are all ways of allowing ipv6 to address ipv4 (or tunnelling ipv6 over ipv4 infrastructure), which is fairly easy to manage (getting the reverse connection still involves NAT though). None of them are ways of allowing ipv4 to address ipv6, because that fundamentally doesn't work by the pigeonhole principle.


You'd probably be encapsulating "IPv5" traffic in IPv4 traffic towards known translation routers at ISP networks. CG-NAT with some extra advertisements and routing knowledge, basically.

You'd be stuck with an abysmal MTU for "IPv5" though, because you need that wrapper and the world doesn't seem to want an MTU larger than 1500 outside internal networks.
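
The arithmetic is the same as for today's 6in4 tunnels - a plain 20-byte outer IPv4 header eats into every packet:

  # MTU left for the tunnelled "IPv5" packet, assuming no IPv4 options:
  link_mtu   = 1500                  # typical Ethernet path MTU
  outer_ipv4 = 20                    # encapsulating IPv4 header
  print(link_mtu - outer_ipv4)       # 1480 bytes, same as a typical 6in4 tunnel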


> You'd probably be encapsulating "IPv5" traffic in IPv4 traffic towards known translation routers at ISP networks.

That's what 6to4 does. When enabled, it encapsulates IPv6 traffic in IPv4 packets and sends them towards a well-known anycast IPv4 address, and the return IPv6 traffic is sent towards a well-known anycast network, where a router encapsulates it in IPv4 packets and sends them towards the IPv4 address contained within the IPv6 address.
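
For the curious, the mapping is mechanical - the public IPv4 address becomes bits 16-47 of a 2002::/16 prefix (Python sketch with an example address):

  import ipaddress

  # 6to4: a public IPv4 address owns the /48 2002:VVVV:WWWW::/48,
  # where VVVV:WWWW are the 32 bits of that IPv4 address.
  v4 = ipaddress.IPv4Address("203.0.113.5")               # example public address
  prefix = ipaddress.IPv6Network(((0x2002 << 112) | (int(v4) << 80), 48))
  print(prefix)    # 2002:cb00:7105::/48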

I find it amusing that all these "IPv5" proposals either cannot work, or do the same as something which already exists for IPv6 (and in this case, it works; I've used 6to4 for a while).


You're right about the concept, of course, but 6to4 gateways don't work the other way around. You can't reach an IPv6 network through them, you can only reach IPv4 from IPv6. My silly "IPv5" solution would also have a way for IPv4 sources to send IPv5 traffic upstream.

As complex as some may find IPv6, I find it a lot simpler and more independent of routing services than many proposals for alternatives. The weirdest one I've seen involves concatenating two IP headers, giving everyone one network IP for ISPs and such and one client IP for the individual hosts, but that's just replicating IPv6 with extra steps and an even smaller MTU.


You ought to use the past tense when speaking about 6to4...

As much as technically I think it was a brilliant idea (a whole /48 assigned to each existing IPv4 address owner), it failed because of the tragedy of the commons.


One of the problems with 6to4 that make it hard to deploy in well-configured networks is the DNS problem. Google runs a 6to4 DNS but warns that DNSSEC will be broken by design, and clients would need their own translation software to solve the problem.

6to4 gateways are still a thing, you can traceroute 192.88.99.1 to find the nearest gateway if you have IPv4 connectivity. I've got one seven hops from my device, but I've also got a static IPv4 address so I have little need for a gateway.


Don't give up hope. The iot begs for ipv6 addresses and companies beg for adding iot capable chips on everything. It's a sad driving force, but it is a driving force.


I don't think it's even that hard to create an extension at this point. Partially because we are mostly there already.

Call the extension "NAT tagging" and add a 24-bit NAT "tag" which reflects the bottom 24 bits of the 10.x space. Every machine with a private ipv4 appends this as its source tag, and it becomes the responsibility of NAT tagging aware NATs to notice the NAT tag on inbound packets and route them appropriately. Client machines which are themselves aware make sure to preserve the source tag when sending packets back to the NAT'ed machine. The tag of every existing public ipv4 is 0.0.0. The only tricky part at this point is encoding all these extra bits in a way which is ignored by existing ipv4 stacks. Although, figure out how to check for it in a handshake of some form (easier with stateful protocols like TCP) and it could hide outside of the IP header itself.
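
Something like this, in back-of-the-envelope form (all names and numbers made up, just to pin down the idea):

  import ipaddress

  # Hypothetical "NAT tag": the bottom 24 bits of a host's 10.0.0.0/8
  # address, carried next to the NAT's public IPv4 so tag-aware peers
  # can address individual hosts behind it.
  public_v4  = ipaddress.IPv4Address("198.51.100.7")   # the NAT's public side
  private_v4 = ipaddress.IPv4Address("10.12.34.56")    # a host behind the NAT
  tag = int(private_v4) & 0xFFFFFF                      # 24-bit tag = 12.34.56
  print(str(public_v4), "%d.%d.%d" % (tag >> 16, (tag >> 8) & 0xFF, tag & 0xFF))
  # Hosts with a real public IPv4 of their own always use tag 0.0.0.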

Non-compliant endpoints continue to suffer the existing port forwarding/etc issues that exist with current NATs while updated machines can directly address each other. The various hacks (already in use) to the HTTP hostname fields also solve the problem for tag-unaware machines addressing public https servers.

This gives every existing IPv4 address an additional 24 bits of publicly routable addressing without having to upgrade a single core router/etc.


> Wake up, folks: Nobody will join your IPv6 network if it can't talk to Google

It's interesting to see how Google has gone from seemingly uninterested in IPv6 (based on that quote from 2002) to today where it uses it everywhere.

As an example, it's one of the only large providers I've seen that will send and accept email over IPv6. This is in contrast to others who refuse on the basis that their anti-abuse techniques don't work in the larger IPv6 address space [1]

[1] https://sendgrid.com/blog/where-is-ipv6-in-email/


According to this article [1], Google enabled IPv6 back in March 2008 and has since started offering it on more and more of their services.

I think companies like Microsoft (which, let's be real, is the only other "large provider" when it comes to email) are afraid of breaking configurations and undeliverable mail because of bad IPv6 setups. MS isn't known for adopting anything quickly, especially when it comes to enterprise services.

The many old techs and their advocacy against IPv6 because they don't (want to) understand it will influence the web for years to come, but in reality you can enable IPv6 anywhere and things will work just fine, as long as you don't do stupid things like disabling all firewalls. Docker has some trouble working well with IPv6, but an IPv6-capable host with an nginx proxy will nip that problem right in the bud, and you're (hopefully) not using a publicly reachable IPv4 Docker network anyway.

[1]: https://www.cnet.com/news/google-tries-to-break-ipv6-logjam-...


Not quite everywhere, Google Cloud doesn't use IPv6 in their VMs (at least not publicly).


They do support it, currently only in four regions. My guess is they will enable IPv6 in all regions this year, due to competition from other providers and gov requirements.

https://cloud.google.com/compute/docs/ip-addresses/configure...


Oracle Cloud VMs support dual stack IPv4 + IPv6.


This is a very old article, but just like even many modern articles, it makes the flawed argument that you can only ever have one network stack: IPv4 or IPv6.

The truth is, dual-stack networking exists and is used all the time. My router is configured to use DHCPv4 and SLAAC and it Just Works, no strange protocol configuration or DHCP flags required. There is no choosing between what sites are on what networks, you always get both. Only IPv6-only networks pose a problem, and that's only a problem for people stuck on IPv4.

You'll have to manually disable IPv4 in your network config to miss out on anything when you enable IPv6. You'll have to do so at every separate client, because some ISPs even run their own complex 4-in-6 system (DS-Lite and the like), transparently encapsulating IPv4 traffic in IPv6 packets in their routers on the consumer side, and then turning those back into IPv4 at their network's edge. The "IPv4" that you get with those ISPs isn't even real IPv4!

Sure, you can't have any incoming connections over IPv4 in dual-stack lite configurations, but normal people don't need incoming connections. If they can reach Facebook, Tiktok and Google then they're all A-OK.

Only the minority with technical requirements for receiving connections need the full dual stack experience anyway. Businesses hosting their own stuff and techies should be able to request a public IP, at a small cost if ISPs are as greedy as ever, but that's it. The rest of the world can live with shitty CG-NAT for websites that haven't made the switch yet, because shitty NAT implementations are the de facto standard when dealing with IPv4.

My prediction for the (far away) future is that directly accessible IPv4 becomes a thing for businesses, cloud hosts and servers, where you're basically paying extra for real network connectivity if you really do need to receive connections from the legacy net. The worldwide transition is slow, but I'm certain that other countries will pick up the pace once big cloud companies start changing their IPv6 assumptions as American ISPs start to transition their last residential customers to cut costs. When your network has trouble reaching Google, no matter if it's your fault or Google's, your customers will blame you.


The big residential ISPs were some of the first to go IPv6. Consider the problem of assigning management IP addresses to 30 million cable modems, plus an unknown number of “other” managed devices like set top boxes: the 10. space only has 16 million addresses, so you have to have multiple parallel networks. Managing that is a lot of overhead. The conversion to IPv6 was considered cheaper than the alternative, so the big ones mostly did it 10-15 years ago.


Dual stack means twice as much to maintain and twice as much to go wrong.


> twice as much to go wrong

Actually it's more than that, given that extra services like DNS are pretty much required these days and that DHCP has been replaced with two things, RA and DHCPv6, instead of just one.

As a concrete example, I run split-horizon DNS for an internal service I want Let's Encrypt certificates for. This was trivial to set up and worked great from my desktop computer, but for some reason my service didn't work with my phone. Turns out Android refuses to use provided IPv4 DNS addresses if it got allocated an IPv6 address, and reverts to Google's own DNS servers.


> Turns out Android refuses to use provided IPv4 DNS addresses if it got allocated an IPv6 address, and reverts to Google's own DNS servers.

That's strange, I haven't seen this happen to my devices at all, and my pihole is IPv4 only for now.

From what I can find online, it seems that this happens when Android is configured with an IPv6 address but the provided DNS server won't resolve AAAA queries. Allowing your DNS server to look up AAAA addresses would probably fix your problem?


> Allowing your DNS server to look up AAAA addresses would probably fix your problem?

I'm running Pi-Hole as well, and I don't have an issue with AAAA records as far as I can determine. I found several forum posts suggesting I make my router hand out the IPv6 address to my DNS server (also Pi-Hole) and after I did that it started working right away.


I think the problem is you have www.somesite.com which resolves publicly to an A and AAAA record, and you override its A record with a local DNS entry, but not its AAAA entry? Sounds like an Android bug if you NXdomain the AAAA entry and it still insists on using it (or it overrides your DNS server and goes to a google one)


I couldn't find the code that does this in my cursory check of the Android source code, but I think it's a misconfiguration check that's failing. That's the feeling I get from others reporting similar issues on help forums, anyway.

I think Android sees a world routable IP address, notices it has connectivity and then finds out the DNS server doesn't return IPv6 addresses for known IPv6 capable hosts (such as Google.com). That seems like a fair way to detect misconfiguration to me, though I would've put up a notification warning about the switchover so users can get someone to fix their issue.


It also means twice as much can go right. I've been able to rescue a broken IPv4 config via IPv6 access several times, and vice versa.


Are we sure a 2002 perspective is a good basis for discussion in 2022? Does this even reflect the latest opinion of the author? I don't see how AAAA-records created problems or why dual-stacking is a bad joke.


In retrospect, the decision to bootstrap IPng as a hierarchically routed superset of IPv4 seems painfully obvious. A quarter of a century later the great IPv6 transition is still incomplete and the limits of IPv4 continue to haunt us.

The IPv4 architecture even contained mechanisms to enable such a transition in the form of IP options. This proposal from Bernstein is not functional, but IPng designers could have defined the IPng header format so that 96 bits of the 128-bit source or destination addresses mapped to a structure that looks like an unknown IP option for legacy stacks.

For example the IPng address could have had a logical design like this:

  [upper 64 bit IPng][standard 32bit IPv4][lower 32 bit IPng]
With the [upper 64 bit IPng][lower 32 bit IPng] part stored inside the option, and the IPv4 stored at the regular location in the IPv4 header.

This means that legacy systems could still interpret and route IPng traffic, and even if they strip the IPng option, the connection can fall back transparently to IPv4, as long as the extended fields are nil.

In time, the infrastructure would upgrade to the new header format and would learn to route the extended address space, leaving the legacy IPv4 space to live in a /64 inside the 128 bit space. In the meantime, the lower IPng 32 bit could have prevented the proliferation of NAT, if the endpoints were upgraded and the backbone was transparent to options (essentially, IPng islands connected via the IPv4 internet).
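
A back-of-the-envelope encoding of that layout (the option number here is made up; a real deployment would have needed an assigned one):

  import struct, ipaddress

  # The middle 32 bits travel in the normal IPv4 destination field; the
  # other 96 bits ride in an IPv4 option that legacy routers pass along.
  upper64  = 0x20010DB800000001                           # example upper 64 IPng bits
  legacy32 = int(ipaddress.IPv4Address("203.0.113.5"))    # the routable IPv4 part
  lower32  = 0x00000042                                   # example lower 32 IPng bits

  IPNG_OPTION = 0x9E                                      # made-up option number
  option = struct.pack("!BBQI", IPNG_OPTION, 14, upper64, lower32)
  # 14 bytes total: type (1) + length (1) + upper64 (8) + lower32 (4)

  full_address = (upper64 << 64) | (legacy32 << 32) | lower32
  print(ipaddress.IPv6Address(full_address))              # 2001:db8:0:1:cb00:7105:0:42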

There were proposals for such mechanisms in the 90s but the committee wanted a clean slate design, ignoring decades of experience with interoperability and technological transitions. Of course, any talk about such alternatives is today a purely pedantic exercise, the window was closed 20 years ago and today IPv6 is the only possible future.


> In retrospect, the decision to bootstrap IPng as a hierarchically routed superset of IPv4 seems painfully obvious.

Indeed. For IPv6, it's called 6to4, and it actually worked; for a while, I had configured the office network to use it, and could reach IPv6 networks even though that office only had a single public IPv4 address. (I'm working elsewhere right now and no longer the network admin, so I don't know how well 6to4 works nowadays.)

> The IPv4 architecture even contained mechanisms to enable such a transition in the form of IP options.

Unfortunately, that's no longer an option, and was already no longer an option back then. This is because of middleboxes like firewalls, IDS, or worse, which discard and/or reject anything they don't know, instead of leaving the end system to deal with it (as would be correct following the end-to-end principle the Internet is designed on). This is called "protocol ossification": https://en.wikipedia.org/wiki/Protocol_ossification

> This means that legacy systems could still interpret and route IPng traffic, [...] as long as the extended fields are nil.

They would have to be nil forever, or at least as long as the IPv6 transition is currently taking. If even a single endpoint you might want to talk to doesn't understand that new IP option (and since you don't control all the remote endpoints, you can never be sure), your local address has to have zeros on these extended fields. The dual-stack approach used by IPv6 avoids this issue by having two independent addresses, one to contact legacy remote endpoints, the other one to contact newer endpoints, and keeping them separate.

> In the meantime, the lower IPng 32 bit could have prevented the proliferation of NAT

It would instead lead to the proliferation of NAT, as that would be the only guaranteed way to make it work when the local endpoint address has a non-zero value in the extended fields.

> if the endpoints were upgraded and the backbone was option transparent

That's where all these proposals fail: they would require all endpoints to be upgraded before use of these extended addresses can start. And while the core Internet backbone might be completely transparent, the path to reach it often isn't.


> In other words: The current IPv6 specifications don't allow public IPv6 addresses to send packets to public IPv4 addresses. They also don't allow public IPv4 addresses to send packets to public IPv6 addresses. Public IPv6 addresses can only exchange packets with each other. The specifications could have defined a functionally equivalent public IPv6 address for each public IPv4 address, embedding the IPv4 address space into the IPv6 address space; but they didn't.

Why didn't they? What are the arguments against this solution?

You would have to upgrade the software and OS of systems using IPv4-only to understand the IPv6 IP header. (Which is much simpler than upgrading to support IPv6 AND assigning IPv6 addresses)

I guess the bigger problem is that all routers in the path between an IPv6-only host and an IPv4-only host would have to support IPv6 to parse the destination IPv4/IPv6 address and make the proper routing decision.

This would make "IPv6 to IPv4 and vice-versa" traffic only work for some, depending on which ISPs have upgraded their equipment to support IPv6, and which IPv4-only hosts have upgraded their software+OS to support IPv6. This could result in IPv6 getting a very bad reputation, further delaying adoption.

Anything I am missing?


>> In other words: The current IPv6 specifications don't allow public IPv6 addresses to send packets to public IPv4 addresses.

> Why didn't they? What are the arguments against this solution?

They did. See "Stateless IP/ICMP Translation Algorithm (SIIT)"

   This document specifies a transition mechanism algorithm in addition
   to the mechanisms already specified in [TRANS-MECH].  The algorithm
   translates between IPv4 and IPv6 packet headers (including ICMP
   headers) in separate translator "boxes" in the network without
   requiring any per-connection state in those "boxes".  This new
   algorithm can be used as part of a solution that allows IPv6 hosts,
   which do not have a permanently assigned IPv4 addresses, to
   communicate with IPv4-only hosts.  The document neither specifies
   address assignment nor routing to and from the IPv6 hosts when they
   communicate with the IPv4-only hosts.
* https://datatracker.ietf.org/doc/html/rfc2765 (2000)

* https://datatracker.ietf.org/doc/html/rfc7915 (update in 2016)
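
The address side of this is basically bit arithmetic - RFC 6052 later nailed down a well-known prefix (64:ff9b::/96) for exactly this kind of embedding:

  import ipaddress

  # SIIT/NAT64-style embedding: the IPv4 address occupies the low 32 bits
  # of the well-known prefix 64:ff9b::/96 (RFC 6052).
  v4 = ipaddress.IPv4Address("198.51.100.7")
  wkp = ipaddress.IPv6Network("64:ff9b::/96")
  embedded = ipaddress.IPv6Address(int(wkp.network_address) | int(v4))
  print(embedded)    # 64:ff9b::c633:6407 - the translator strips the prefix to get back 198.51.100.7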


Oh, the arguments here reminded me of the joke called IPv10.



