
Prioritization of Network Traffic



Prioritization of network traffic is simple in concept: give important traffic precedence over unimportant traffic. That leads to some interesting questions. What traffic should be prioritized? Who defines priorities? Do people pay for priority, or do they get it based on traffic type (e.g., delay-sensitive traffic such as real-time voice)? For Internet traffic, where are priorities set: at the ingress, based on tags the customer has preassigned in packets, or by service provider policies defined in service-level agreements?

Prioritization is also called CoS (class of service) because traffic is classed into categories such as high, medium, and low (gold, silver, and bronze); the lower the priority, the more "drop eligible" a packet is. E-mail and Web traffic are often placed in the lowest categories. When the network gets busy, packets from the lowest categories are dropped first.
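As a rough illustration, classification can be thought of as a simple lookup from traffic type to class and drop precedence. The following sketch uses the gold/silver/bronze names from above; the specific traffic types and mappings are illustrative assumptions, not part of any standard.

```python
# Minimal sketch of class-of-service (CoS) classification: map traffic types
# to gold/silver/bronze classes; lower classes are more drop eligible.
# The traffic types and mappings here are illustrative assumptions.

COS_TABLE = {
    "voice": ("gold",   0),   # delay-sensitive, least drop eligible
    "video": ("silver", 1),
    "web":   ("bronze", 2),
    "email": ("bronze", 2),   # dropped first when the network gets busy
}

def classify(traffic_type: str) -> tuple[str, int]:
    """Return (class name, drop precedence) for a traffic type."""
    return COS_TABLE.get(traffic_type, ("bronze", 2))

if __name__ == "__main__":
    for t in ("voice", "email", "ftp"):
        print(t, "->", classify(t))
```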

Prioritization/CoS should not be confused with QoS. It is a subset of QoS. A package-delivery service provides an analogy. You can request priority delivery for a package. The delivery service has different levels of priority (next day, two-day, and so on). However, prioritization does not guarantee the package will get there on time. It may only mean that the delivery service handles that package before handling others. To provide guaranteed delivery, various procedures, schedules, and delivery mechanisms must be in place. For example, Federal Express has its own fleet of planes and trucks, as well as a computerized package tracking system.

Network QoS covers an entire range of bandwidth allocation, traffic engineering, and service provisioning techniques that deliver packets at guaranteed levels of service. Only ATM (Asynchronous Transfer Mode) networks are designed from the ground up to support QoS. RSVP (Resource Reservation Protocol) is an IETF-defined QoS strategy for TCP/IP networks. It is more than a prioritization and CoS strategy: RSVP supports bandwidth allocation and reservation for specific traffic flows. RSVP proved too difficult to scale across the Internet, but many enterprises have found it useful. See "RSVP (Resource Reservation Protocol)." MPLS (Multiprotocol Label Switching) is another QoS strategy, designed to work across service provider networks on the Internet.

The problem with network priority schemes is that lower-priority traffic may be held up indefinitely when traffic is heavy unless there is sufficient bandwidth to handle the highest load levels. Even high-priority traffic may be held up under extreme traffic loads. One solution is to overprovision network bandwidth, which is a reasonable option given the relatively low cost of networking gear today.

As traffic loads increase, router buffers begin to fill, which adds to delay. If the buffers overflow, packets are dropped. When buffers start to fill, prioritization schemes can help by forwarding high-priority and delay-sensitive traffic before other traffic. This requires that traffic be classed (CoS) and moved into queues with the appropriate service level. One can imagine an input port that classifies traffic or reads existing tags in packets to determine class, and then moves packets into a stack of queues with the top of the stack having the highest priority. As traffic loads increase, packets at the top of the stack are serviced first. See "Queuing."
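The queue-stack idea just described can be sketched in a few lines. This is a minimal illustration of strict-priority queuing, not a model of any particular device; the number of classes and the per-queue depth are assumptions.

```python
from collections import deque

# Minimal sketch of strict-priority queuing: packets are classified into a
# "stack" of queues, and the highest-priority non-empty queue is always
# serviced first. Class count and queue depth are illustrative assumptions.

NUM_CLASSES = 3          # 0 = highest priority (top of the stack), 2 = lowest
MAX_DEPTH = 64           # per-queue buffer limit; overflow means packet drop

queues = [deque() for _ in range(NUM_CLASSES)]

def enqueue(packet, cos_class):
    """Place a packet in the queue for its class, dropping on overflow."""
    q = queues[cos_class]
    if len(q) >= MAX_DEPTH:
        return False                 # buffer full: packet is dropped
    q.append(packet)
    return True

def dequeue():
    """Service the highest-priority queue that has packets waiting."""
    for q in queues:                 # index 0 is the top of the stack
        if q:
            return q.popleft()
    return None                      # all queues are empty
```

Note that a strict-priority scheduler like this can starve the lower queues under heavy load, which is exactly the problem described two paragraphs earlier; real devices often temper it with weighted or fair queuing disciplines. See "Queuing."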

Prioritization has been used in multiprotocol routers to give some protocols higher priority than others. For example, SNA (Systems Network Architecture) traffic will time out if it is not delivered promptly, causing retransmissions that degrade network performance. Such protocols should be given high priority. A number of other prioritization/CoS schemes are outlined here:

  • MAC layer prioritization    In a shared LAN environment such as Ethernet, multiple stations may contend for access to the network. Access is granted on a first-come, first-served basis. Two stations may attempt simultaneous access, causing both stations to back off and wait before making another attempt. This is minimized in switched Ethernet, where only one station is connected to each switch port. A number of vendor-specific Ethernet priority schemes have been developed. Token ring networks have a priority mechanism in which a reservation bit is set in tokens to indicate priority. See "QoS (Quality of Service)," and check under the section "MAC-Layer Prioritization."

  • VLAN tagging and 802.1p    The IEEE 802.1Q frame-tagging scheme defines a method for inserting a tag into an IEEE MAC-layer frame that identifies membership in a virtual LAN. Three bits within the tag define eight priority levels. The bit settings act as a label that signals to network devices the class of service the frame should receive. A sketch of the tag layout follows this list.

  • Network layer prioritization    The IP packet header has a field called ToS (Type of Service). This field has recently been redefined to work with the IETF's Differentiated Services (Diff-Serv) strategy. Diff-Serv classifies and marks packets so that they receive a specific per-hop forwarding behavior at network devices along a route. The field is set once, based on policy information, and then read by network devices along the path. Because IP is an internetworking protocol, Diff-Serv works across networks, including carrier and service provider networks that support the service. Therefore, Diff-Serv will support CoS on the Internet, extranets, and intranets. See "Differentiated Services (Diff-Serv)." A sketch of marking this field at the source also follows this list.
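For the VLAN tagging item above, the 802.1Q tag is a 16-bit TPID (0x8100) followed by a 16-bit TCI whose top three bits carry the 802.1p priority. The sketch below builds such a tag; the choice of priority 5 for voice-like traffic is a common convention, shown here only as an assumption.

```python
import struct

# Minimal sketch of the 4-byte 802.1Q tag: TPID 0x8100 followed by the
# 16-bit TCI. The top three TCI bits are the 802.1p priority (0-7),
# one bit is the CFI, and the low twelve bits are the VLAN ID.

TPID = 0x8100

def dot1q_tag(priority: int, vlan_id: int, cfi: int = 0) -> bytes:
    """Return the 4-byte tag inserted into an Ethernet frame."""
    if not 0 <= priority <= 7:
        raise ValueError("802.1p priority is a 3-bit value (0-7)")
    if not 0 <= vlan_id <= 0xFFF:
        raise ValueError("VLAN ID is a 12-bit value")
    tci = (priority << 13) | (cfi << 12) | vlan_id
    return struct.pack("!HH", TPID, tci)

# Example: priority 5 (commonly used for voice), VLAN 10
print(dot1q_tag(5, 10).hex())   # 8100a00a
```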
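For the network layer item, an application or host can mark its own packets by writing the DS field (the redefined ToS byte) on a socket. The DSCP value occupies the upper six bits of that byte, so the value written is DSCP shifted left by two. The sketch below assumes a platform (such as Linux) that exposes the IP_TOS socket option, and the destination address is only a documentation placeholder.

```python
import socket

# Minimal sketch of Diff-Serv marking at the source. DSCP 46 (Expedited
# Forwarding) is used for delay-sensitive traffic; the DS byte on the wire
# is DSCP << 2 = 0xB8. IP_TOS availability is platform-dependent (assumed
# here), and 192.0.2.1 is a placeholder documentation address.

DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Routers along the path that support Diff-Serv read this field and apply
# the corresponding per-hop forwarding treatment.
sock.sendto(b"hello", ("192.0.2.1", 9999))
```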

Priority settings may be made in several places. The most logical place is the application running in the end user's system. But applications may not support the various schemes that are available, so edge switches may need to infer priority levels for frames or packets by examining the contents of the packets. This is now easily done with so-called "multilayer switches" based on policies that are defined in policy-based management systems. See "Multilayer Switching" and "Policy-Based Management."
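A multilayer switch doing this kind of inference is, in effect, matching packet header fields against policy rules and assigning a class when the application has not marked the traffic itself. The rules below (protocols, ports, and class names) are illustrative assumptions standing in for what a policy-based management system would push down to the device.

```python
# Minimal sketch of policy-based classification at an edge device: infer a
# CoS class by examining layer 3/4 packet fields. The policy rules here are
# illustrative assumptions, not from any real policy system.

POLICY = [
    # (protocol, destination port, assigned class)
    ("udp", 5060, "gold"),     # e.g., call signaling
    ("tcp", 80,   "bronze"),   # Web traffic
    ("tcp", 25,   "bronze"),   # e-mail
]

def infer_class(protocol: str, dst_port: int) -> str:
    """Return the CoS class for a packet based on its headers."""
    for proto, port, cos in POLICY:
        if protocol == proto and dst_port == port:
            return cos
    return "silver"            # default class for unmatched traffic

print(infer_class("udp", 5060))   # gold
print(infer_class("tcp", 80))     # bronze
```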

RFC 3052 (Service Management Architectures Issues and Review, January 2001) discusses prioritization in the context of service management networks and policy-based management. Another document worth reading is RFC 2990 (Next Steps for the IP QoS Architecture, November 2000).




Copyright (c) 2001 Tom Sheldon and Big Sur Multimedia.
All rights reserved under Pan American and International copyright conventions.