I’ve just read U.S. shrugs off world’s address shortage which is an interesting article about how the world is running out of IP addresses and how the new IP standard IPv6 addresses(!) this problem.
But the article is full of basic errors. Here’s one.
IPv6 addresses are 128 bits. The resulting list of IP addresses is two googols long, an enormous number.
The bit about IPv6 addresses being 128 bits is true. It’s the rest that’s nonsense. As anyone who followed the recent Who Wants To Be A Millionaire cheating scandal knows, a googol is 1e100 (i.e. 1 followed by 100 zeroes). That’s a large number.
The number of IPv6 addresses is simple to calculate. If there are 128 bits and each bit can be 1 or 0 then there are 2**128 (where ** means “to the power of”) addresses. This number is about 3.4e38. Still a huge number, but nowhere near the size of 2 googols (or even 1 googol).
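The arithmetic is easy to check in a couple of lines of Python:

```python
# 128 bits, each either 0 or 1, gives 2**128 possible IPv6 addresses.
total_ipv6 = 2 ** 128
print(total_ipv6)            # 340282366920938463463374607431768211456
print(f"{total_ipv6:.1e}")   # 3.4e+38 -- nowhere near a googol (1e100)
```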
Here’s another interesting sounding fact, that turns out to be nonsense.
IPv6 creates enough IP addresses for every person on Earth to have 1,000 Web-enabled devices.
Now, 1,000 personal IP addresses sounds like a lot. But remember when 64k sounded like a lot of memory in a computer? It’s quite possible that in a few decades’ time we’ll all have a use for more than 1,000 IP addresses. IPv6 is supposed to deal with these issues for centuries to come.
And, of course, it does. One current estimate has just under 6.5 billion people living on Earth right now. Another quick calculation shows that 3.4e38 IP addresses spread between 6.5e9 people gives us about 5.2e28 IP addresses each. Plenty to go round for years to come.
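Again, a couple of lines of Python confirm it (using the rough 6.5 billion population figure from above):

```python
# Share 2**128 addresses among ~6.5 billion people.
total_ipv6 = 2 ** 128
population = 6_500_000_000           # rough current estimate
per_person = total_ipv6 // population
print(f"{per_person:.1e}")           # 5.2e+28 addresses each

# The article's "1,000 devices per person" is trivially satisfied:
assert population * 1_000 < total_ipv6
```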
All of this is just simple mathematics. I really don’t understand why the author of the article couldn’t run some simple checks to verify the accuracy of what he was saying.
Though the maths doesn’t work out if we have bizarreness similar to IPv4 address classes, which mean there may be millions (substitute your favourite power of ten here) of addresses available but unusable, because a whole block was assigned to someone who only uses a fraction of them.
For example, MIT has about 16 million IP addresses and surely doesn’t use them all, but if IP address allocation in, say, China starts running out, they probably can’t use addresses from 18.0.0.0/8 without major hassle. Routing tables are already getting really large without dividing IP address ranges up further.
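The size of a block like MIT’s is easy to verify too; Python’s standard-library `ipaddress` module does the /8 arithmetic for us (a /8 leaves 32 − 8 = 24 free bits, hence 2**24 addresses):

```python
import ipaddress

# A /8 block fixes 8 of the 32 IPv4 bits, leaving 2**24 addresses.
mit_block = ipaddress.ip_network("18.0.0.0/8")
print(mit_block.num_addresses)   # 16777216 -- about 16 million
```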
I can imagine this sort of thing might mean we can only use a tiny fraction of all addresses, or whatever, so it’s good that there is such a huge number to begin with. Still, it’s not a cure-all.