This is one of my favourite blog posts ever. For those unaware (or who didn't read right to the bottom), the author is the CEO of Tailscale.
One of the problems we have is when we're born we don't question anything. It just is the way it is. This, of course, lets us do things in the world much more quickly than if we had to learn everything from basic principles, but it's a disadvantage too. It means we get stuck in these local optima and can't get out. Each successive generation only finally learns enough to change anything fundamental once they're already too old and set in their ways doing the standard thing.
How I wish we could have a new generation of network engineers who just say "fuck this shit" and build their own internet.
> One of the problems we have is when we're born we don't question anything
I don't know about you personally, but every grade-school, high-school, and college-level instructor I ever had would probably vehemently disagree with this statement about me. I remember at least one 70-year-old college instructor becoming visibly irritated that I would ask what research supported the assertions he made.
> How I wish we could have a new generation of network engineers who just say "fuck this shit" and build their own internet.
And doing so would improve nothing, and be no different than the IPv6 rollout. So you have to ship new code to every 'network element' to support an "IPv4+" protocol. Just like with IPv6.
So you have to update DNS to create new resource record types ("A" is hard-coded to 32 bits) to support the new longer addresses, and have all user-land code start asking for, using, and understanding the new record replies. Just like with IPv6. (A lot of legacy code did not have room in its data structures for multiple reply types: sure, you'd get the "A", but unless you updated the code to get the "A+" address (for "IPv4+" addresses) you could never get to the longer address… just like IPv6 needed code updates to recognize AAAA, otherwise you were A-only.)
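The wire format is the whole story here: an A record's RDATA is fixed at exactly 4 octets (RFC 1035), while AAAA's is 16 (RFC 3596), so any longer address means a new record type plus new parsing code everywhere. A minimal sketch of the two layouts (the struct names are mine, not from any real resolver):

```c
#include <stdint.h>

/* RDATA layouts per the RFCs; struct names are illustrative only. */
struct a_rdata {
    uint8_t addr[4];   /* A record: exactly 32 bits, no room to grow */
};

struct aaaa_rdata {
    uint8_t addr[16];  /* AAAA record: 128 bits, a different RRTYPE */
};
```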
You need to update socket APIs to hold new data structures for longer addresses so your app can tell the kernel to send packets to the new addresses. Just like with IPv6. In any 'address extension' plan the legacy code cannot use the new address space; you have to:
* update the IP stack (like with IPv6)
* tell applications about new DNS records (like IPv6)
* set up translation layers for legacy-only code to reach extended-only destinations (like IPv6 with DNS64/NAT64, CLAT, etc)
You're updating the exact same code paths in both the "IPv4+" and IPv6 scenarios: dual-stack, DNS, socket address structures, and dealing with legacy-only code that will never be updated for the larger address space.
Deploying the new "IPv4+" code will take time, and there will be partial deployment. Partial deployment of IPv4+ is no different than partial deployment of IPv6: you have islands of it and have to fall back to the legacy plain-IPv4 protocol when the new protocol fails to connect.
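A minimal sketch of the fallback logic both worlds end up needing, using the standard getaddrinfo() API (the function name connect_any is mine):

```c
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Try every address family the resolver returns; fall back to legacy
 * IPv4 when the newer protocol can't reach the destination. */
int connect_any(const char *host, const char *port) {
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;       /* ask for AAAA and A alike */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    for (ai = res; ai != NULL; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                     /* first reachable island wins */
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;                         /* -1 if every family failed */
}
```

Note that this only works because the sockaddr-based APIs were retrofitted to be address-family-agnostic; an "IPv4+" transition would have required exactly the same retrofit.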
The eternal problem with companies like Tailscale (and Cloudflare, Google, etc. etc.) is that, by solving a problem the internet should have been designed to solve by itself, like simple end-to-end secure connectivity, Tailscale becomes incentivized to keep the problem around. What the internet would need is something like IPv6 with automatic encryption via IPSEC, with IKE provided by DNSSEC. But Tailscale has every incentive to prevent such things from being widely and compatibly implemented, because it would destroy their business. Their whole business depends on the problem persisting.
> What the internet would need is something like IPv6 with automatic encryption via IPSEC, with IKE provided by DNSSEC.
I understand the appeal of this vision, but I think history has shown that it's not consistent with the realities of incremental deployment. One of the most important factors in successful deployment is the number of different independent actors who need to change in order to get some value; the lower this number the easier it is to get deployment. By very rough analogy to the effectiveness of medical treatments, we might call it the Number To Treat (NTT).
By comparison to the technologies which occupy the same ecological niches on the current Internet, all of the technologies you list have comparatively higher NTT values. First, they require changing the operating system[0], which has proven to be a major barrier. The vast majority of new protocols deployed in the past 20 years have been implementable at the application layer (compare TLS and QUIC to IPsec). The reason for this is obviously that the application can unilaterally implement and get value right away without waiting for the OS.
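To make the application-layer point concrete: an app can adopt TLS by linking a library and wrapping an ordinary TCP socket, with no kernel involvement at all, which is exactly the low-NTT property IPsec lacks. A rough sketch using OpenSSL (error handling and certificate verification elided; the function name wrap_socket_with_tls is mine):

```c
#include <openssl/ssl.h>

/* Upgrade an already-connected TCP socket to TLS, entirely in
 * userspace. Nothing in the OS's IP stack has to change. */
SSL *wrap_socket_with_tls(int tcp_fd, const char *hostname) {
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    SSL *ssl = SSL_new(ctx);

    SSL_set_tlsext_host_name(ssl, hostname);  /* SNI */
    SSL_set_fd(ssl, tcp_fd);

    if (SSL_connect(ssl) != 1) {              /* handshake in the app */
        SSL_free(ssl);
        SSL_CTX_free(ctx);
        return NULL;
    }
    return ssl;
}
```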
IPv6 requires not only that you update your OS but that basically everyone else on the Internet upgrade to IPv6 too. By contrast, you can just throw a NAT on your network and presto, you have new IP addresses. It's not perfect, but it's fast and easy. Even the WebPKI has somewhat better NTT properties than DNSSEC: you can get a certificate for any domain you own without waiting for your TLD to start signing (admittedly less of an issue now, but we're well into path dependency).
Even if we stipulate that the specific technologies you mention would be better than the alternatives if we had them -- which I don't -- being incrementally deployable is a huge part of good design.
[0] DNSSEC doesn't strictly require this, but if you want it to integrate with IKE, it does.
> First, they require changing the operating system
This was done very quickly with IPv6; most major vendors had adequate support very early. This shows that it can be done when the companies involved actually want to do it.
> IPv6 requires you not only to update your OS
Blatantly false. AFAIK, all mainstream OSs today have enough IPv6 support to work adequately in a theoretical IPv6-only environment.
> Even the WebPKI has somewhat better NTT properties than DNSSEC: you can get a certificate for any domain you own without waiting for your TLD to start signing (admittedly less of an issue now, but we're well into path dependency).
Wait for CDS and CDNSKEY record support to become more widespread among TLDs (some support it today, and from what I can see, the number is increasing). Then you don't even need your registrar to be involved in your DNSSEC deployment; you can just enable DNSSEC in your DNS server and let it deploy automatically.
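For illustration: with CDS/CDNSKEY (RFC 7344/8078) the child zone just publishes the DS/DNSKEY material it wants, and the parent polls for it. The domain, key tag, and key data below are placeholders, not real values:

```
; Published in the child zone; a supporting TLD scans for these and
; installs/updates the DS record automatically. All values here are
; placeholders (algorithm 13 = ECDSAP256SHA256, digest type 2 = SHA-256).
example.com. 3600 IN CDS     12345 13 2 <hex-digest-of-the-KSK>
example.com. 3600 IN CDNSKEY 257 3 13 <base64-KSK-public-key>
```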
> being incrementally deployable is a huge part of good design.
Oh, agreed.
> [0] DNSSEC doesn't strictly require this, but if you want it to integrate with IKE, it does.
Yes, this kind of new feature would have to be implemented in a backwards-compatible way, with a fallback to normal connections if the other end does not support it. One idea would be to put KEY records in the reverse lookup zones; only if such a record exists would you get automatic IPsec.
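For what it's worth, something very close to this already exists as a (rarely deployed) standard: the IPSECKEY RRTYPE (RFC 4025), published under the reverse zone for an address. A sketch of what that looks like; the address and key material are placeholders:

```
; IPSECKEY under the reverse zone for 203.0.113.10 (RFC 4025):
; precedence 10, gateway type 0 (none, hence "."), algorithm 2 (RSA).
; The key material is a placeholder, not a real key.
10.113.0.203.in-addr.arpa. 7200 IN IPSECKEY ( 10 0 2 .
                                   <base64-public-key> )
```

An initiator that finds such a record could opportunistically bring up IPsec; one that finds nothing falls back to a normal connection, which is exactly the backwards-compatible behaviour described above.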
Most tech businesses exist because problems exist. Tailscale delivers a solution that's available today. The only alternative is to sit and wait for IPv6. I don't imagine Tailscale is against IPv6 any more than security professionals are against memory-safe programming languages.
I thought that too and I've written a very similar comment before. But in fact Tailscale's main product seems to be the zero trust stuff, not dealing with IPv4. At least that's what they say...
It was somewhat unexpected to find section headings such as "Is IPv6 a failure?" in the product support documentation, but I thought it was interesting and informative nonetheless.
> How I wish we could have a new generation of network engineers who just say "fuck this shit" and build their own internet.
There are plenty of anarchists and disaster-aid groups interested in building a more decentralized alternative to the internet. Meshtastic, AnoNet, Reticulum, MeshCore, etc. are all evidence of that.
Then there's also stuff like Dave Ackley's robust-first computing, which is looking toward a completely different paradigm for computing in general, one centered on robustness.