For some time now, large American telephone companies have been actively investigating new ways to position a “broadband” service whose business model, and profitability, would be different from what we’ve come to know as the Internet.
One potential model being discussed within the industry would be more like the “wireless web” offered on cell phones, a walled garden of selected “content,” possibly with a narrow “hobo class” pipe to the public Internet, possibly not. Such a network would be based on the IP Multimedia Subsystem (IMS) architecture that is now all the rage among the industry’s favorite equipment vendors.
They are also interested in remaking the Internet’s retail pricing model along the lines of the traditional telephone network. For years, local telephone service has run at a loss, subsidized by above-cost payments from long distance companies, who pay the local companies at both ends of a call. Applied to the Internet, this would have “content providers”—web site operators, online retail stores, net radio stations, and others—paying both their own ISPs, as they do now, and the ISPs (or telephone companies in their role as “broadband” network operators) whose subscribers are using them.
This could hold down the fixed monthly price of broadband access, but more than make up for it with usage-based payments collected indirectly. Never mind that consumers have been voting with their wallets against this business model ever since alternatives, such as flat-rate long distance plans and Internet telephony, have become available.
This certainly breaks the so-called end-to-end principle that underlies the Internet, which is sometimes called a “stupid” network, but telephone companies are not averse to complexity. A bevy of companies now produce Deep Packet Inspection systems designed to allow broadband operators to look at the contents of passing packets, to either charge for them on a “value” basis or to block them.
Carriers would then bill companies who are not their own customers—for instance, web music stores—for a share of the value of downloaded material or business transactions conducted across their networks. Encryption could, of course, make this more difficult, but encrypted packets could simply be blocked or shunted off to the slow lane.

How Much Competition is “Vibrant” Enough?
These scenarios are probably not the type of thing that consumers would be very happy with, if they were asked about it. But they’re not being asked. Instead, the Bells have been working with a compliant FCC
for the past six years to lay the groundwork. Key to making this possible has been getting rid of the independent ISPs who have, until this year, had an absolute right to lease access to the Bells’ wires, ostensibly on the same terms as the Bells’ ISP subsidiaries.
This was required under the FCC’s 1980 Computer II ruling, which was revoked in 2005, effective August 2006. Under the new rules, the owner of the wire controls the content of the wire. The distinction between common carriage and information service was abolished. The FCC cited “vibrant” competition between the cable and telephone companies as adequate to protect consumers.
That change sounded arcane, and was ignored by the press which, after all, was not usually concerned with FCC forbearance petitions and other regulatory minutiae. But SBC’s Ed Whitacre accidentally blew the whistle on himself in a November Business Week interview, when he said that operators like Yahoo wouldn’t get to ride his “pipes” for “free.”
(In the same interview, he said that a merger with BellSouth “doesn't have much chance of happening because of market power, size, etc ... I don't think the regulators would let that happen, in my judgment.” A few weeks later he announced just such an agreement. So much for regulators.)
Within days, the Internet was buzzing, and the phrase “network neutrality” rose to center stage. The Internet was under attack and its main attacker’s loose lips were threatening to sink his ship. What to do? The Bell spin doctors came up with a plan to create a distraction: say that the whole thing was a misunderstanding, that Ed was only talking about video, and that he was only referring to his company’s plans to provide “IPTV,” a form of cable television. Certainly there, who could argue for free carriage? It must be very costly!
(What he failed to notice was that cable companies pay the content providers to license their programs, not the other way around. So much for understanding the industry he was hell-bent on entering.)
So now we have a popular press that thinks the network neutrality debate is all about TV, creating a “tiered” Internet with a “fast lane” for HDTV shows and other “content” providers that pay the Bells, and a “slow lane” for everyone else. This is ridiculous, but it has influenced the debate in Congress, and has been buttressed by quotes from industry leaders.
One of those leaders, the ironically named Henry Kafka, has been widely quoted saying that the cost would be $112/month per subscriber to support standard TV, or $552/month for HDTV, if carried across the Internet. I suppose that I could arrange to have my dinner air-shipped daily from fine European restaurants too. That would cost a pretty penny, and it would be cold by the time it reached Massachusetts. But that’s about as meaningful as Kafka’s estimate.
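As a sanity check on numbers like these, it is easy to work out how much raw data continuous video viewing actually moves. A rough sketch, where the bitrates and viewing hours are illustrative assumptions of mine, not Kafka’s inputs:

```python
# Back-of-the-envelope data volume for continuous video streaming.
# Assumed rates (illustrative): ~2 Mbps for standard-definition video,
# ~8 Mbps for high-definition.

def monthly_gigabytes(bitrate_mbps, hours_per_day=8, days=30):
    """Data moved by a continuous stream, in gigabytes (10^9 bytes)."""
    seconds = hours_per_day * 3600 * days
    return bitrate_mbps * 1e6 * seconds / 8 / 1e9

sd = monthly_gigabytes(2.0)   # standard definition: 216 GB/month
hd = monthly_gigabytes(8.0)   # high definition: 864 GB/month
print(f"SD: {sd:.0f} GB/month, HD: {hd:.0f} GB/month")
```

Whatever dollar figure one attaches to a gigabyte, the volume itself is what a usage cap would meter.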
Why would anyone want to watch all of their TV across the Internet? Even if someone tried, an ISP would be well within its rights to establish some kind of usage cap, some number of gigabytes per month, on a consumer account. And it could even be a content-neutral cap.

There’s More Than One Way to Deliver TV
While some households still depend on over-the-air broadcasting for all or some of their television sets, several alternative technologies are available for video delivery. By far the most popular is cable, which nowadays uses hybrid fiber-coax (HFC) technology. HFC carries a radio-frequency spectrum over optical fiber, converting it to coaxial cable for local neighborhood distribution. A typical North American HFC network carries downstream signals from 54 to 750 MHz (though some get close to 900 MHz) and has a little space below 42 MHz for upstream use.
That spectrum is in turn divided into 6 MHz-wide TV channels. Each can carry a single analog TV signal, a digital video stream with about 10 channels multiplexed onto it, or a cable modem signal. A cable modem channel carries up to about 40 Mbps downstream, shared among a few dozen to several hundred homes. Upstream bandwidth is more constrained, though the latest technology allows up to 30 Mbps of shared bandwidth, and systems can divide each upstream channel among fewer subscribers than the downstream.
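The channel arithmetic above can be checked directly. A small sketch using the figures from the text; the 500-home node size is an illustrative assumption:

```python
# HFC downstream capacity, from the figures in the text: 6 MHz channels
# between 54 and 750 MHz, ~40 Mbps per cable modem channel, shared across
# all the homes on a node.

def downstream_channels(low_mhz=54, high_mhz=750, width_mhz=6):
    """Number of 6 MHz channels that fit in the downstream spectrum."""
    return (high_mhz - low_mhz) // width_mhz

def per_home_mbps(channel_mbps=40, homes=500):
    """Worst-case cable modem share if every home is active at once."""
    return channel_mbps / homes

print(downstream_channels())      # 116 channels of downstream spectrum
print(per_home_mbps(homes=500))   # 0.08 Mbps worst-case share
```

The worst case rarely happens, of course; the whole business rests on statistical sharing.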
Cable systems provide video on demand (VoD) using digital TV channels, assigned on demand. Since a given strand of fiber serves a finite number of homes (fewer than 100 in the newest builds, but over a thousand in a few cases) and only a modest proportion of subscribers actually watch VoD, there’s plenty of capacity on the coax. But this never touches the Internet—cable VoD comes from a server within the cable company’s video distribution network. If the content isn’t on the server, it’s not available to the user.

Verizon’s FiOS brings optical fiber all the way to the home, but other than that it has a lot in common with HFC. It’s a system called Broadband Passive Optical Network (BPON). The fiber is lit up with three lambdas (colors, though not necessarily within the visible spectrum). One carries downstream telephony and data at 622 Mbps. One carries upstream telephony and data at 155 Mbps. Those are both multiplexed using ATM, with a contention system to allow a single fiber to be split up to 32 ways.
The third lambda carries TV channels, laid out as an analog radio spectrum just like HFC. Again, the Internet is entirely separate from the broadcast TV. A newer system with more data capacity, Gigabit PON, will probably replace BPON for new installations in the near future, but it will still have three lambdas. And for video purposes, it’ll still be cable.
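The per-home shares implied by those BPON numbers are easy to compute. A quick sketch using the line rates and split ratio given above:

```python
# BPON arithmetic from the text: 622 Mbps downstream, 155 Mbps upstream,
# on a fiber that may be split up to 32 ways. Worst-case even shares:

DOWNSTREAM_MBPS = 622
UPSTREAM_MBPS = 155
MAX_SPLIT = 32

down_share = DOWNSTREAM_MBPS / MAX_SPLIT   # ~19.4 Mbps per home
up_share = UPSTREAM_MBPS / MAX_SPLIT       # ~4.8 Mbps per home
print(f"{down_share:.1f} Mbps down / {up_share:.1f} Mbps up per home, fully split")
```

Even fully split, that downstream share comfortably exceeds what most DSL lines deliver, which is one reason the video lambda can stay entirely separate.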
Then there’s “IPTV.” That has a nice, new, Internety ring to it, no? In reality, it’s more of a broad marketing concept covering a number of possible approaches to delivering video. One very common approach—think AT&T Inc.’s Lightspeed—is rather similar to what ADSL was invented for back in the early 1990s.
This provides video over DSL, with the subscriber selecting a channel from a server located within the network. The server, in turn, is fed a number of channels: the head end digitizes each TV show and encapsulates it within an IP stream, which is multicast to the various DSL servers near the subscribers. So while it’s not literally a broadcast at the subscriber’s home, the subscriber is selecting multicast access to a real broadcast. In other words, it’s cable over twisted pair. IP, yes; Internet, no.
DSL-based IPTV is less isolated from the Internet than HFC or BPON, though. Since the DSL has a relatively limited bit rate (around 20 Mbps downstream in the latest ADSL 2+ equipment, if the DSL node is within a few thousand feet of the subscriber), TV and Internet data could be contending for the same last-mile and metro-area network capacity. Internet access might slow down if someone’s watching a high-definition TV show, or if too many TVs are on. But that doesn’t mean that the TV show is coming from the Internet.
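That contention is simple to quantify. A sketch using the text’s roughly 20 Mbps ADSL2+ figure, with HD streams assumed at about 6 Mbps each (an illustrative codec rate, not a Lightspeed specification):

```python
# Contention on a shared ADSL2+ line: TV streams and Internet traffic
# draw from the same ~20 Mbps downstream pipe. Stream rate is an
# illustrative assumption.

def internet_headroom(line_mbps=20.0, hd_streams=0, stream_mbps=6.0):
    """Bandwidth left for Internet traffic after IPTV streams (floor 0)."""
    return max(0.0, line_mbps - hd_streams * stream_mbps)

for tvs in range(4):
    print(tvs, "HD streams ->", internet_headroom(hd_streams=tvs), "Mbps left")
```

By the third simultaneous HD stream, web browsing in that household is living on scraps, with no Internet video involved at all.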
Still, there is always the possibility of a true Internet-based TV service. But even this breaks down between broadcast-style and web-style. Indeed, many web sites today do provide TV programming. Sites like YouTube show videos within web browsers, actually downloading their material using TCP, not streaming it using UDP. This makes a big difference, because TCP shares capacity much more gracefully than UDP. These videos are compressed to show in a window on a computer monitor, not fill a TV screen. But even a high-def download over TCP could be handled gracefully.
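The reason download-ahead over TCP degrades gracefully can be sketched in a few lines: playback stalls only if the buffer drains faster than it refills, and a buffer can ride out short dips. All the numbers below are illustrative:

```python
# Why TCP download-ahead is forgiving: a playback buffer absorbs
# temporary throughput dips, so only sustained shortfall causes a stall.

def stalls(video_mbps, link_mbps, buffer_seconds, dip_seconds):
    """True if a dip to zero throughput lasting dip_seconds drains a
    buffer that held buffer_seconds of video before the dip."""
    if link_mbps < video_mbps:
        return True                            # can't keep up even on average
    buffered = buffer_seconds * video_mbps     # megabits sitting in the buffer
    needed = dip_seconds * video_mbps          # megabits consumed during the dip
    return needed > buffered

print(stalls(2.0, 5.0, buffer_seconds=10, dip_seconds=5))   # False: rides it out
print(stalls(2.0, 1.5, buffer_seconds=10, dip_seconds=0))   # True: link too slow
```

A live UDP stream has no such cushion: every dropped packet is a glitch on the screen, right now.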
If much real TV, especially live streaming over UDP, were carried across the Internet, though, it could become rather burdensome. Sure, if the Internet had infinite capacity worldwide, then there would be no need for local cable head ends. Users could watch yak racing streamed live from Tibet, or catch the latest telenovelas from Mexico. That would eat up costly capacity, which is why it would work best if capacity were infinite.
That brings us back to Mr. Kafka’s remarks. How many people really want to watch TV shows so obscure that their local network operator wouldn’t even want to carry copies of them, or mirror them via an IPTV multicast server? Sure, for the occasional niche program, this is fine, but for the foreseeable future, it wouldn’t make sense for this to be the norm. And if users tried it, then indeed it could seriously undermine their local provider’s shared Internet upstream connection. Or the price would have to skyrocket to, uh, Kafkaesque levels.
One solution could be for competitive content-caching providers to set up regional IPTV distribution points, with high-speed connections to the major broadband service providers, bypassing the costlier Internet backbone. Any content provider could sign up for this, much as high-volume web sites now use Akamai’s web caching service.
This would address Kafka’s cost objections while still providing open access to IPTV. But the large broadband providers are unlikely to go along with this, precisely because it competes with their own content. It’s not really about cost; it’s about revenue. Big difference! If the ISP marketplace were really competitive, though, rather than a duopoly, this approach could work.

Neutrality as a Trap
Only a few mass-market Internet providers could even begin to afford to provide enough capacity to support more than a little bit of video streaming. If watching live high-quality Tibetan yak racing broadcasts and the like across the Internet really were to catch on, then the cost per subscriber of upstream capacity would rise. The biggest providers, the cable and phone giants, would be better positioned to afford it than the struggling retail ISPs who have somehow managed to stay in business, whether by affiliating with a CLEC, putting up a wireless network, or making some kind of deal with the ILEC.
Some neutrality advocates may see IPTV as a new opportunity, and may be using this debate to force their way onto ISP networks, especially those owned by cable and ILECs. Cable companies are no doubt especially sensitive, for business reasons, to video upstarts, though they’ve generally been happier so far with leaving their Internet services alone.
It may well be reasonable for some ISPs to differentiate their services by the way they handle video. Perhaps, by some miracle, IPTV can lead to better quality viewing options. But should everyone who offers any kind of data service have to become a de facto cable provider?
The wireless ISPs would be hardest hit. Wireless system capacity is very finite. Bits equal power; a faster service requires higher power or has a shorter range. A system that today carries typical broadband ISP loads of, say, 50 kilobits per second per subscriber would be brought down quickly by a handful of 6 Mbps streams.
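Those numbers make the point starkly. A sketch, where the sector capacity is an illustrative assumption of mine; the per-subscriber load and stream rate come from the text:

```python
# A wireless ISP's dilemma in numbers. Sector capacity is an assumed
# figure for illustration; the other two come from the text.

SECTOR_MBPS = 25.0      # assumed usable capacity of one tower sector
AVG_LOAD_KBPS = 50.0    # typical broadband ISP load per subscriber
STREAM_MBPS = 6.0       # one high-rate video stream

subscribers_supported = SECTOR_MBPS * 1000 / AVG_LOAD_KBPS   # 500 subscribers
streams_to_saturate = SECTOR_MBPS / STREAM_MBPS              # ~4 streams

print(f"{subscribers_supported:.0f} ordinary subscribers, "
      f"or roughly {int(streams_to_saturate)} simultaneous 6 Mbps streams")
```

The same sector that comfortably serves hundreds of ordinary web users is flattened by a few living rooms’ worth of video.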
Yet some of the network neutrality bills brought before Congress fail to make this distinction. They regulate all providers, from AT&T down to the small-town wireless ISP, the same way. By making video the focus of neutrality, they pretty much guarantee that the little guy will not be able to protect himself against abuse by subscribers who want to watch too much Internet TV.
The old regulations focused on market power, on dominant providers, and on making it possible for ISPs to survive. New neutrality laws, however well-intentioned, shouldn’t do the opposite. The Internet is, first and foremost, a medium for data; it should not be sacrificed for the sake of TV.