Throughput

Throughput is the number of bits transmitted per second through a communication medium or system. It is also referred to as data rate or wire speed. Throughput is measured on actual data transmissions because real systems add delays caused by processor limitations, network congestion, buffering inefficiencies, transmission errors, traffic loads, or inadequate hardware designs. Throughput varies over time with traffic and congestion. In addition, data is packaged in frames and packets that contain header information, so if you are trying to measure actual data throughput, you need to subtract the bits used for overhead. The topic "Delay, Latency, and Jitter" describes some of the things that affect throughput.
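A simple way to see the difference between the raw rate on the wire and effective data throughput is to time a transfer and subtract the protocol overhead. The sketch below is illustrative only; the transfer size, frame count, and per-frame header size are assumptions, not measured values.

    def effective_throughput(payload_bytes, header_bytes, elapsed_seconds):
        """Return (raw, effective) throughput in bits/sec.

        raw counts every bit sent; effective counts only user payload.
        """
        total_bits = (payload_bytes + header_bytes) * 8
        payload_bits = payload_bytes * 8
        return total_bits / elapsed_seconds, payload_bits / elapsed_seconds

    # Example: 10 MB of user data carried in 7,000 frames, each with a
    # hypothetical 58 bytes of combined Ethernet/IP/TCP overhead,
    # transferred in 1.2 seconds (all figures are illustrative).
    raw, effective = effective_throughput(10_000_000, 7_000 * 58, 1.2)
    print(f"raw: {raw / 1e6:.1f} Mbits/sec, effective: {effective / 1e6:.1f} Mbits/sec")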

Fast Ethernet is rated at 100 Mbits/sec, but after delays and protocol inefficiencies, the actual user data transfer rate is much less, possibly under 50 percent of the specified data rate. The perceived speed of a shared Ethernet LAN gets worse as more users access the system: collisions occur, causing stations to back off, wait, and then retransmit.

As mentioned, the header information in frames and packets significantly reduces the throughput of actual data. Headers contain source and destination addresses, handshaking information, error-checking codes, and so on. For example, an ATM cell is 53 bytes long, but 5 of those bytes are reserved for header information, so only 48 bytes of actual user data are carried in each cell. The more header information, the less data sent: only about 90 percent of the capacity of an ATM circuit is available for transmitting actual data. In addition, some protocols require that individual frames and packets, or groups of them, be acknowledged by the receiver. This acknowledgment traffic consumes capacity without carrying real data.
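The 90 percent figure follows directly from the cell layout; the cell and header sizes below come from the ATM specification, and nothing else is assumed:

    # ATM cell: 53 bytes total, 5 bytes of header, 48 bytes of payload.
    CELL_BYTES, HEADER_BYTES = 53, 5
    payload_efficiency = (CELL_BYTES - HEADER_BYTES) / CELL_BYTES
    print(f"ATM payload efficiency: {payload_efficiency:.1%}")   # 90.6%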

Network devices like routers have store-and-forward delays that reduce throughput. So-called wire-speed devices are nonblocking, meaning that they don't hold up packets, even when fully loaded. A wire-speed device must have enough internal capacity to move all the data coming in from all ports without delay, even when all ports are running at maximum capacity.
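To make the internal-capacity point concrete, here is a back-of-the-envelope calculation for a hypothetical 24-port Fast Ethernet switch; the port count and speed are assumptions chosen for illustration.

    # A nonblocking (wire-speed) switch must carry traffic from every port
    # at full rate simultaneously. With full-duplex ports, each port can
    # send and receive at line rate at the same time, hence the factor of 2.
    ports = 24                     # hypothetical port count
    port_speed_bps = 100_000_000   # Fast Ethernet, 100 Mbits/sec
    required_capacity_bps = ports * port_speed_bps * 2
    print(f"Required internal capacity: {required_capacity_bps / 1e9:.1f} Gbits/sec")  # 4.8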

Congestion and queuing problems cause dropped packets that exacerbate the delay problem. See "Congestion Control Mechanisms" for more information.

The following tables describe the packet-forwarding capabilities of various devices. The first table outlines the packets-per-second rating of the three Ethernet technologies (the derivation of these figures is sketched after the table).

Technology                            Packets per Second
Ethernet (10 Mbits/sec)               14,880 pps
Fast Ethernet (100 Mbits/sec)         148,800 pps
Gigabit Ethernet (1,000 Mbits/sec)    1,488,000 pps
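The Ethernet figures come from the smallest legal frame: a 64-byte frame, plus the 8-byte preamble and the 12-byte interframe gap, occupies 672 bit times on the wire, and dividing the link rate by 672 bits gives the rates in the table. A quick check in Python:

    def max_frame_rate(link_bps, frame_bytes=64, preamble_bytes=8, gap_bytes=12):
        """Theoretical maximum rate for minimum-size (64-byte) Ethernet frames."""
        bits_per_frame = (frame_bytes + preamble_bytes + gap_bytes) * 8   # 672 bits
        return link_bps / bits_per_frame

    for name, speed in [("Ethernet", 10e6), ("Fast Ethernet", 100e6), ("Gigabit Ethernet", 1e9)]:
        print(f"{name}: {max_frame_rate(speed):,.0f} pps")
    # Ethernet: 14,881 pps, Fast Ethernet: 148,810 pps, Gigabit Ethernet: 1,488,095 pps
    # (the table above rounds these figures)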

The packet-per-second rates of various router and switch designs are listed here:

Device                                               Packets per Second
Traditional and legacy routers                       5,000 to 500,000 pps
Traditional routers with enhanced architectures      1 million pps
Routing switches                                     1 to 10 million pps
Gigabit routers                                      20 million pps
Core routers designed for Internet backbone networks 100 million pps or more

Throughput Considerations for TCP

Throughput is a major issue on the Internet, and several RFCs discuss the problem. RFC 1323 (TCP Extensions for High Performance, May 1992) describes TCP extensions that improve performance over high-bandwidth, long-delay networks such as cross-country fiber-optic links. The RFC refers to these links as "long, fat pipes." Another interesting document is RFC 2488 (Enhancing TCP Over Satellite Channels, January 1999).

Throughput is determined by the speed (data rate in bits/sec) of the link and by the propagation delay, which is relatively long on cross-country fiber links and satellite links. Congestion and errors add to the delay. A pipe's capacity is its bandwidth-delay product, that is, the bandwidth multiplied by the round-trip delay time. The idea is to match the TCP window size to the capacity of the pipe so that the pipe is always full. When congestion or errors occur, TCP hosts slow down their transmissions, which means that pipes may not be fully used until the transmitting hosts reestablish a window size that matches the network capacity. See "Flow-Control Mechanisms."
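To see why the window size matters on a long, fat pipe, here is the bandwidth-delay calculation for a hypothetical cross-country link; the link speed and round-trip time are illustrative assumptions.

    # Bandwidth-delay product: the amount of data that must be "in flight"
    # to keep the pipe full. A TCP window smaller than this stalls the
    # sender while it waits for acknowledgments, reducing throughput.
    link_bps = 45_000_000    # hypothetical 45 Mbits/sec (T3) cross-country link
    rtt_seconds = 0.070      # assumed 70 ms round-trip time

    bdp_bytes = link_bps * rtt_seconds / 8
    print(f"Bandwidth-delay product: {bdp_bytes:,.0f} bytes")                 # 393,750 bytes

    # The basic TCP window field is 16 bits (at most 65,535 bytes), so a
    # standard window can fill only a fraction of this pipe; RFC 1323's
    # window scale option lets hosts advertise a large enough window.
    print(f"Pipe filled by a 65,535-byte window: {65_535 / bdp_bytes:.0%}")   # 17%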




Copyright (c) 2001 Tom Sheldon and Big Sur Multimedia.
All rights reserved under Pan American and International copyright conventions.