Broadband Traffic Forecast: Congested, Chance of Clearing


Think DSL is fast? Think again.

A university researcher has announced a breakthrough data transfer scheme that makes conventional high-speed connections like DSL seem like tin cans and string. The new protocol—called BIC-TCP (Binary Increase Congestion-Transmission Control Protocol)—achieves speeds roughly 6,000 times that of a 1.5 Mbps DSL line, according to North Carolina State University computer scientist Injong Rhee.

It's not that BIC replaces DSL, although the rush of hyperbole that surrounded the BIC announcement in March made it sound something like that.

"Obviously, DSL and BIC-TCP are incomparable," Rhee said. "One is a medium (DSL) and the other is a protocol (BIC), and a protocol cannot run faster than the capacity of the medium on which the protocol runs."

Simply put, BIC is a protocol designed to exploit media like DSL, and particularly ultra-fast pipelines—high bandwidth connections such as ESNet (Energy Sciences Network) or NLR (National LambdaRail), continent-spanning networks established to transmit mountains of scientific data. BIC can therefore run in a 10 Gbps network and use the network's full capacity.

Rhee said that while it can take conventional TCP two hours to make full use of a 10 Gbps pipe with a 100 millisecond round trip time, BIC needs only a few seconds. While this might translate into music and video downloads in a flash, the real value of a super-protocol will more likely be seen in scientific and medical applications.
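
The arithmetic behind that two-hour figure is easy to check. The back-of-the-envelope sketch below assumes typical 1,500-byte segments and standard TCP's growth of roughly one segment per round trip; neither number comes from the article.

```python
# Back-of-the-envelope check of the two-hour figure above. The segment size
# and the one-segment-per-round-trip growth rate are assumptions for the
# sake of illustration, not numbers taken from Rhee's work.

link_bps = 10e9          # 10 Gbps pipe
rtt_s = 0.100            # 100 millisecond round trip time
segment_bytes = 1500     # typical Ethernet-sized TCP segment (assumed)

# Segments that must be in flight to keep the pipe full (bandwidth-delay product)
bdp_segments = link_bps * rtt_s / (segment_bytes * 8)

# Growing the window by roughly one segment per round trip from a small start
ramp_up_s = bdp_segments * rtt_s

print(f"segments needed in flight: {bdp_segments:,.0f}")
print(f"time to ramp up: {ramp_up_s / 3600:.1f} hours")
# Prints roughly 83,000 segments and about 2.3 hours -- consistent with the
# two-hour figure Rhee cites.
```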

Conventional TCP plays a fundamental role in almost all computer networks, including the Internet, where it accounts for about 90 percent of traffic, and it serves two masters. One, it provides a reliable transport service (receipt of packets is positively acknowledged, and dropped or damaged packets are retransmitted) and, two, it implements congestion control. Congestion control, introduced in the late 1980s after gridlock threatened the collapse of the Internet, prevents congestion collapse, a state in which most of the work done by a network is wasted retransmitting packets that are dropped before reaching their destination.

Following that development, current versions of TCP probe for available bandwidth by gently increasing their send rate and then backing off the send rate when network congestion is detected. Network congestion is inferred from the occurrence of dropped packets. In addition to preventing congestion collapse, this congestion control algorithm ensures a degree of fairness in the allocation of network resources between traffic flows.
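
To make that probe-and-back-off cycle concrete, here is a minimal sketch of the additive-increase, multiplicative-decrease behavior just described. It is an illustration, not code from any real TCP stack, and the function name and starting values are invented for the example.

```python
# Minimal sketch of TCP's classic congestion control loop: add roughly one
# segment per round trip while all is well, halve the window when a dropped
# packet signals congestion. Illustrative only -- not a real TCP stack.

def aimd_step(cwnd: float, packet_lost: bool) -> float:
    """Return the next congestion window (in segments) after one round trip."""
    if packet_lost:
        return max(cwnd / 2.0, 1.0)   # multiplicative decrease: back off
    return cwnd + 1.0                 # additive increase: probe for bandwidth

# Toy run: steady growth, then a single loss event at round trip 5
cwnd = 10.0
for rtt in range(8):
    cwnd = aimd_step(cwnd, packet_lost=(rtt == 5))
    print(f"RTT {rtt}: cwnd = {cwnd:.1f} segments")
```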

While this congestion control algorithm has been remarkably resilient for nearly 20 years, it is generally recognized that it is not up to the task of next-generation networks involving gigabit-speed links and wireless connections.

On wireless links, packets are dropped for reasons unrelated to congestion, including noise and interference. Nevertheless, TCP interprets all packet loss as a sign of congestion and backs off its send rate. Consequently, throughput can be extremely low on noisy wireless channels.

New Grid on the Block

Continued advances in computing and storage technologies, combined with the development of national and global grid systems, require companion advances in networking. Grid computing, one of the hottest topics in computing circles today, refers to the use of multiple computers, typically in different locations, as a unified resource to perform one or more tasks.

Ultrascale networks—capable of moving data at 100 Gbps or higher over wide areas—are necessary to support these new generations of ultrascale computing and the associated petabyte (a quadrillion bytes) to exabyte (a quintillion bytes) datasets that scientists say promise to drive discoveries in fundamental and applied sciences over the next decade.

The challenge is that the network control and resource sharing algorithms at the core of today's TCP are incapable of scaling to support the demand. Already, the flood of scientific and medical imaging data being pushed through existing pipes results in massive traffic congestion, even on the newest high-speed networks such as ESNet or NLR, both 10 Gbps infrastructures created specifically for scientific collaborations.

On high-speed links, current TCP probes for bandwidth too slowly, Rhee said. As a result, TCP sources spend much of their time recovering from each back-off, high-speed links sit underutilized, and throughput stays low, particularly on long-distance gigabit links.

"The problem is the inherent limitations of regular TCP, which was originally designed when Internet speeds were much slower and bandwidths much smaller," Rhee said. "Now we are trying to apply it to networks that have several orders of magnitude more available bandwidth—something like using an eyedropper to fill a water main."

Math of Least Resistance

Rhee's BIC-TCP uses a binary search approach, a technique common in database queries, to rapidly home in on the maximum capacity of a network. That solves TCP's performance problem in fast, long-distance networks, effectively opening the floodgates.

"TCP severely under-utilizes available bandwidth in high bandwidth networks beyond 1 Gbps with network delays in the range of milliseconds, or bandwidth in the range of Mbps but delays in the range of seconds—common in satellite environments," Rhee said.

BIC resolves this by modifying TCP's congestion control (window increase) function. And it does it without hogging the line. Rhee's greatest challenge was to find a way to fill the pipe fast without starving out neighbors who might be downloading music using regular TCP at the same time.
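
The sketch below shows, in simplified form, how a binary search toward a known maximum differs from ordinary TCP's one-segment-per-round-trip crawl. It is based on the description above, not on Rhee's actual implementation, and the parameter names and values are illustrative.

```python
# Simplified sketch of BIC-TCP's binary window growth. After a loss, the
# window size that caused the loss becomes an upper bound (w_max) and the
# reduced window a lower bound; each round trip the window jumps toward the
# midpoint, with the jump capped so the flow doesn't hog the line.
# Illustrative only -- not Rhee's implementation; parameter values invented.

S_MAX = 32.0   # largest jump allowed in one round trip, in segments
S_MIN = 0.01   # once jumps get this small, the search has converged

def bic_increase(cwnd: float, w_max: float) -> float:
    """One round trip of binary growth toward the last known maximum window."""
    if cwnd < w_max:
        step = (w_max - cwnd) / 2.0     # binary search toward w_max
        step = min(step, S_MAX)         # but never leap too far at once
        step = max(step, S_MIN)
    else:
        step = 1.0                      # past the old maximum: probe gently
    return cwnd + step

# Toy run: recovering toward a previous maximum of 80,000 segments
cwnd, w_max = 79_800.0, 80_000.0
for rtt in range(6):
    cwnd = bic_increase(cwnd, w_max)
    print(f"RTT {rtt}: cwnd = {cwnd:,.1f}")
```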

"Effective usage of bandwidth is of prime importance in networking," said H. Michael Chung, professor, and director, Center for Information Strategies and Technologies, California State University, Long Beach.

"BIC-TCP sounds very promising for high speed networks. As computing is getting faster a breakthrough like this one in networking would give a significant boost to the current market," Chung said.

BIC-TCP is just one of several efforts to tinker with the TCP protocol so that it runs more efficiently over these long fat pipes. The interest originated in the grid community and stems from the need to transmit enormous quantities of scientific data from one site to another. The Particle Physics Data Grid, for instance, requires an infrastructure for widely distributed analysis of particle physics data at multi-petabyte scales by hundreds to thousands of physicists. With facilities such as the Large Hadron Collider near Geneva coming online, the need for high-speed TCP data transfer is expected to become even more pressing.

"Scientific applications will be making unprecedented demands on the Internet in terms of the amounts of data to be transferred," said James Kurose, professor of computer science and co-director of the Networking Research Laboratory at the University of Massachusetts.

"The challenge is to adapt the Internet's protocols, particularly the Internet's reliable data transfer protocol, TCP, to handle data that is being transferred over these long fat pipes," Kurose said.

One TCP enhancement effort is HighSpeed TCP (HSTCP), proposed by Sally Floyd at the International Computer Science Institute, an independent nonprofit research institute closely affiliated with the Electrical Engineering and Computer Science Department of the University of California at Berkeley.

Floyd's HSTCP is a minimal modification to TCP for connections with larger congestion windows, allowing TCP to achieve higher throughput. Her experimental protocol has been blessed by the Internet Engineering Task Force (IETF)—the group that standardizes Internet protocols, including TCP. The IETF's charter is to ensure that all the variants of TCP have a fair share of network capacity and can reliably interoperate.
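
The sketch below conveys only the general shape of that modification: behave like standard TCP at ordinary window sizes, but grow faster per round trip and give back less after a loss once the congestion window is very large. The threshold, formulas, and constants are illustrative placeholders, not the parameter tables from the IETF's experimental specification.

```python
# Illustrative sketch of the HighSpeed TCP idea: ordinary behavior at small
# congestion windows, more aggressive growth and a gentler back-off at very
# large ones. The formulas and constants here are invented for illustration
# and are NOT the values from the experimental IETF specification.

import math

LOW_WINDOW = 38.0   # below this (in segments), act exactly like standard TCP

def hs_increase(cwnd: float) -> float:
    """Segments added per round trip (grows with window size)."""
    if cwnd <= LOW_WINDOW:
        return 1.0
    return 1.0 + math.log(cwnd / LOW_WINDOW)

def hs_decrease_factor(cwnd: float) -> float:
    """Fraction of the window kept after a loss (shrinks the penalty)."""
    if cwnd <= LOW_WINDOW:
        return 0.5
    return min(0.9, 0.5 + 0.05 * math.log(cwnd / LOW_WINDOW))

for w in (30, 1_000, 80_000):
    print(f"cwnd={w:>6}: +{hs_increase(w):.1f} segments/RTT, "
          f"keep {hs_decrease_factor(w):.0%} after a loss")
```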

The IETF is also responsible for judging usefulness and best practice when new features are proposed. In general, because TCP is part of virtually every computer on the Internet, the IETF is only willing to accept enhancements that benefit the Internet community as a whole, not the specific needs of a subset of the community, such as scientists or satellite users.

Another advanced alternative TCP congestion control algorithm is designed into a version of TCP known as TCP FAST, developed at Cambridge University, Melbourne University, Caltech, UCLA, the University of Illinois, and the University of Massachusetts to support high-speed gigabyte data transfers across the Atlantic. While TCP FAST does not solve infrastructure problems (no TCP implementation can increase throughput beyond the limit imposed by the underlying hardware), it does address the bottleneck issues inherent in current TCP.

Rhee's BIC-TCP is an addition to this growing body of work. "BIC-TCP, and protocols such as TCP FAST and HSTCP, are being developed to meet this need," Kurose said. "All of these protocols are concerned with making sure high bandwidth flows use link capacity efficiently and share this capacity fairly with lower bandwidth flows."

BIC-TCP is additionally concerned with making sure that, among high bandwidth flows, a flow is not severely penalized simply because it has a long round trip time from source to destination.

The Fair Watch Project

A major concern in all of these efforts is "fairness"—that is, high bandwidth TCPs should not take all of the bandwidth away from the other TCP connections in the network. Among high-speed flows there is also a secondary concern—how "fairly" do high-speed flows share bandwidth among themselves (presuming the high-speed flows as a group share bandwidth fairly with the group of non-high-speed flows). It is here—fairness among high-speed TCP flows—that BIC-TCP differs from the other projects.
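
One common yardstick for this kind of fairness in the networking literature, though not necessarily the metric used in the comparisons discussed here, is Jain's fairness index. It equals 1.0 when n flows split a link evenly and falls toward 1/n when a single flow grabs everything.

```python
# Jain's fairness index: a standard way to score how evenly a set of flows
# shares a link. 1.0 means a perfectly even split; 1/n means one of n flows
# is taking it all. (Shown as background; the article does not say which
# fairness metric the cited comparisons used.)

def jain_fairness(throughputs):
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

print(jain_fairness([5.0, 5.0, 5.0, 5.0]))   # 1.0   -- perfectly fair split
print(jain_fairness([9.7, 0.1, 0.1, 0.1]))   # ~0.27 -- one flow hogs the link
```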

Rhee has proposed a version of TCP that not only shares bandwidth relatively fairly with non-high-speed flows, but also shares bandwidth more fairly among high-speed flows. Here, BIC-TCP is more fair than HSTCP or TCP FAST. Many existing protocols simply increase the window faster without much consideration to fairness. Rhee found a way to aggressively fill the pipe without bogarting the link.

"We view congestion control as a search problem that finds the fair and efficient transmission target rate for the end-to-end path on which the current connection is running," Rhee said. This feature ensures stability of control while allowing the protocol to be friendlier to existing TCP traffic.

Your Mileage May Vary

Although BIC-TCP has yet to be implemented practically on a large scale, it consistently outperformed the other entrants in recent tests run by the Stanford Linear Accelerator Center, which compared seven competing protocols developed by researchers from schools around the world on parameters such as round trip time, aggressiveness, stability, scalability, and fairness.

"BIC-TCP performed very well overall, which is a big plus," Chung said.

Chung also said the attractive thing about BIC-TCP is that it's a win-win situation for high speed applications. "Implementation would be a relatively easy thing and would save time and money," he said. "Further testing is required, though. If they can generate some more practical results we can get a better idea."

But your mileage may vary. "If you are an average Internet user connected to the Internet behind DSL or cable modems, BIC does not help you," Rhee said. "TCP works just fine for that kind of network."

But, if you need to push obscene amounts of data through existing networks, BIC is just what the doctor ordered. Medics, such as radiologists, in fact, are among those who might benefit from a BIC-like tonic the most. The sheer size of the data generated by digital mammography and newer imaging modalities like MRI (magnetic resonance imaging) and CT (computed tomography), combined with the high-resolution requirements of all diagnostic images, has conspired to delay full implementation of medical specialties like teleradiology, hindering the use of the Internet for long-distance second opinions.

Reliable, ultrahigh-speed data transmission could also mean that data compression of digital diagnostic images—a current necessity causing malpractice attorneys to salivate—could be reduced or eliminated.

New versions of TCP might even be useful in avoiding a national disaster. The August 2003 blackout that unplugged most of the eastern U.S. and Canada exposed the need to spread data-rich backup systems rapidly across hundreds or thousands of miles.

So, although new protocols such as BIC-TCP might not speed your connection any time soon, they might help to keep it on. And by putting the rapid transfer of increasingly large packets of information over long distances on speed dial, BIC could be a boon to cutting-edge applications ranging from telemedicine and environmental monitoring to business operations and multi-user gaming.


Douglas Page is currently a contributing editor of Diagnostic Imaging Magazine and technology correspondent for Fire Chief and Homeland Protection Professional magazines.