Timothy B. Lee Makes Persuasive Case Against Network Neutrality Regulations

The Cato Institute recently published Timothy B. Lee’s thorough examination of proposals to regulate ISPs to ensure “network neutrality.” Lee persuasively argues that doing so would be a mistake that would likely have long-term unintended consequences.

Lee’s argument against network neutrality regulation is two-pronged. First, he argues that proponents of network neutrality laws and regulations vastly overestimate both the power of ISPs and the benefits that would accrue to an ISP that decided to wholesale abandon the end-to-end principle. When Comcast decided to interfere with the BitTorrent protocol, for example, not only was this quickly discovered, but the public outcry quickly forced the company to backtrack. Similarly, Lee notes that we’ve already seen a business model predicated on privileging a company’s own favored content over the vanilla Internet. That was, after all, AOL’s model, and look how well it worked out for them.

On the other side of the equation, Lee argues that any regulation could have long-term unforeseen consequences, much as previous well-intentioned efforts to regulate interstate commerce, the telephone system, and air travel did. In each case, laws intended to benefit consumers ended up stifling innovation for decades and costing consumers through higher prices.

Part of the innovation-killing effect of network neutrality regulation would be the introduction of legal uncertainties that could persist for years in an industry where innovation is frequently measured in months. For example, consider the problem of network jitter in multimedia applications:

As previously discussed, random delays in packet delivery (called “jitter”) degrade the performance of latency-sensitive applications. Of course, some of the major broadband providers are also telephone companies, and these firms may be tempted to increase the jitter of their networks in order to discourage competition from VoIP services. Such a strategy would sidestep some of the difficulties that would come with a strategy of explicit packet filtering because it could be applied indiscriminately to all traffic without significantly degrading the quality of non-latency-sensitive applications such as websites and e-mail. On the other hand, it would degrade the quality of latency-sensitive applications like network gaming and remote IPTV.

In other cases, jitter may have innocent explanations, but network owners may choose not to perform network upgrades that would reduce it. In still other cases, a network owner might deliberately introduce jitter but pretend it had made the change that caused it for unrelated reasons. It could be quite difficult for a regulator to distinguish among these cases. Of course, a network owner under a network neutrality regime will never admit that it is increasing jitter on its network. So the FCC could be forced to second-guess the complex network-management decisions of network owners.
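Why jitter hurts VoIP but not e-mail is easy to see in a toy model. The sketch below (my own illustration, not from Lee’s paper; the packet counts, buffer size, and uniform jitter distribution are all simplifying assumptions) counts how many voice packets arrive too late for a fixed playout buffer as jitter grows:

```python
import random

def late_packet_rate(jitter_ms, buffer_ms, n_packets=10_000, seed=1):
    """Fraction of VoIP packets that miss a fixed playout deadline.

    Each packet's delay is the base latency plus a random jitter
    component, modeled here as uniform in [0, jitter_ms]. A packet is
    unusable if its extra delay exceeds the receiver's playout buffer
    (buffer_ms); a file transfer or e-mail has no such deadline, so
    the same jitter costs it nothing.
    """
    rng = random.Random(seed)
    late = sum(1 for _ in range(n_packets)
               if rng.uniform(0, jitter_ms) > buffer_ms)
    return late / n_packets

# Modest jitter is fully absorbed by a 50 ms playout buffer...
print(late_packet_rate(jitter_ms=30, buffer_ms=50))   # 0.0 — nothing late
# ...but heavy jitter pushes most packets past the deadline.
print(late_packet_rate(jitter_ms=200, buffer_ms=50))  # roughly 0.75 late
```

The point of the toy model is Lee’s point: the same degradation that cripples a phone call is invisible to a web page load, which is what would make deliberate jitter so hard for a regulator to prove.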

Lee raises a lot of interesting questions that proponents of network neutrality are going to have to address.

Fundamental Dishonesty about Net Neutrality

When it comes to the principle of net neutrality (the idea, essentially, that a packet is a packet is a packet), I’m largely neutral. Along with the political issues involved, there are technological issues that I rarely see discussed in enough depth (and that I don’t know enough about to render a judgment), so I really don’t have an opinion one way or the other.

Unfortunately, I have noticed that some advocates of net neutrality are intentionally distorting the issues at stake. Craig Newmark does just that in a Wall Street Journal debate with Mike McCurry. The claim goes something like this: the industry wants to slow down connections from certain companies unless they pay a fee to large bandwidth providers. In Newmark’s version:

Do you believe Yahoo should be allowed to outbid Google to slow down Google on people’s computers? That’s the kind of thing that the big guys are proposing.

But Newmark debunks this idiocy himself just a bit later. His source is a BellSouth executive:

FYI, Bellsouth guys have admitted that they don’t intend to play fair [according to a December 2005 Washington Post article]: “William L. Smith, chief technology officer for Atlanta-based BellSouth Corp., told reporters and analysts that an Internet service provider such as his firm should be able, for example, to charge Yahoo Inc. for the opportunity to have its search site load faster than that of Google Inc.”

But paying for my site to load faster is not the same thing as slowing down everyone else’s site. Rather, what the telcos are proposing is essentially to leave the existing Internet as it is and build a parallel system with higher bandwidth and lower latency, charging companies for traffic carried over that network.

Such a system already exists to some extent for those of us with access to Internet2 connections, the main difference being that Internet2 doesn’t charge, say, YouTube for any of its traffic that finds its way over its network.
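The difference between the two characterizations can be made concrete with a back-of-the-envelope capacity sketch (my own illustration, with made-up packet counts and link rates; neither post contains these numbers):

```python
def transfer_ticks(n_packets, link_rate):
    """Ticks needed to move n_packets over a link that sends
    link_rate packets per tick (ceiling division)."""
    return -(-n_packets // link_rate)

# Status quo: two sites' traffic (100 packets each) shares one
# 10-packet/tick link.
shared = transfer_ticks(100 + 100, link_rate=10)   # 20 ticks for everything

# The telcos' proposal as described above: leave the old link alone
# and add a faster paid lane. The paying site moves onto the new
# 20-packet/tick link; the non-payer now has the old link to itself.
paid_lane = transfer_ticks(100, link_rate=20)      # 5 ticks  — paid traffic
legacy    = transfer_ticks(100, link_rate=10)      # 10 ticks — old link, now less congested

# Newmark's version ("slow down Google") would instead mean actively
# throttling the non-payer below the status quo:
throttled = transfer_ticks(100, link_rate=5)       # 20 ticks — strictly worse
```

In this simplified model, adding a paid fast lane leaves the legacy link no worse off (and, by relieving congestion, arguably better off), whereas throttling is a genuinely different policy. Whether real networks would behave this way is, of course, exactly the open question.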

Is building such a separate network a good idea? Should companies be allowed to charge additional fees for data that traverses that separate network? I don’t know. But that is not the same thing as believing that “Yahoo should be allowed to outbid Google to slow down Google on people’s computers”.