Note: Many topics at this site are reduced versions of the text in "The Encyclopedia of Networking and Telecommunications." Search results will not be as extensive as a search of the book's CD-ROM.
Multimedia is a term that describes multiple forms of information, including audio, video, graphics, animation, text, and a variety of virtual reality types. Interest in multimedia is surging. At one point, Microsoft and RealNetworks each reported over 100,000 downloads per day of their respective streaming media players. Webcasting, a multicast technology that broadcasts multimedia from a single server to many users, is expected to grow to represent over 70 percent of Internet traffic.
The major themes of multimedia networking include the following:
- Voice, video, and data are converging on a single network in both the enterprise and the Internet. The so-called NPN (new public network) is a convergent network that can deliver voice and video with the same quality as the PSTN.
- Traffic prioritization, QoS-enabling features, and bandwidth management are critical for delivering real-time traffic over packet networks (in contrast to the circuit-based PSTN).
- Multicast provides a transport for Webcasting streaming audio and video from one source to large groups of users. No licenses are required for Webcasting, and just about anyone can set up an Internet radio or TV station. Examples of streaming multimedia include live Web cameras, live sporting events, live concerts, and distance learning.
- Voice has a relatively constant low bit rate and can be compressed to a 16-Kbit/sec stream while still maintaining reasonable quality. Small numbers of dropped packets are acceptable, and there is no reason for the source to retransmit them, since retransmitted packets would arrive too late to play back in sequence at the destination.
- Video is composed of a continuous stream of data, but because of the way compression algorithms work, the stream may vary in bandwidth. When scenes change, a burst of new image data is added to the data stream. Some packet loss is acceptable.
- Some streaming data is sensitive to delay and cannot tolerate dropped packets, such as a sensor that supplies continuous data. A QoS channel may be required.
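The bandwidth arithmetic behind the voice bullet above can be sketched in a few lines. The 16-Kbit/sec codec rate and 20-ms frame size are illustrative assumptions; the header sizes are the standard IP, UDP, and RTP values.

```python
# Rough wire-bandwidth math for a compressed voice stream.
# Assumptions (hypothetical): 16-kbit/s codec, 20 ms of audio per packet.
CODEC_BPS = 16_000          # compressed voice bit rate
FRAME_MS = 20               # audio carried in each packet

payload_bytes = CODEC_BPS // 8 * FRAME_MS // 1000   # 40 bytes of voice
packets_per_sec = 1000 // FRAME_MS                  # 50 packets/sec
overhead = 20 + 8 + 12      # IP + UDP + RTP headers, in bytes

wire_bps = (payload_bytes + overhead) * 8 * packets_per_sec
# wire_bps == 32000: header overhead doubles the 16-kbit/s codec rate,
# which is why voice-over-packet designs often compress headers too
```

The point of the sketch is that for small voice payloads, per-packet header overhead is as large as the voice data itself.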
There are two aspects of multimedia content delivery to consider: real-time delivery and stored playback (also called on-demand). In the real-time delivery model, quality of service is essential. Packets must be delivered with minimal delay. For live voice conversations, latency greater than 200 ms is noticeable to humans. Stored playback transmits multimedia in a more relaxed fashion and in one direction. An example is watching a recorded video.
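In the real-time model, receivers typically mask variable network delay (jitter) with a small playout buffer, trading a fixed added delay for smooth playback. A minimal sketch, with a hypothetical frame interval and buffer depth:

```python
def playout_times(arrivals_ms, frame_ms=20, buffer_ms=60):
    """Schedule frame seq for playback at (seq * frame_ms + buffer_ms).
    Frames arriving after their slot count as lost; real-time media
    drops them rather than retransmitting."""
    played, late = [], []
    for seq, arrival in enumerate(arrivals_ms):
        deadline = seq * frame_ms + buffer_ms
        if arrival <= deadline:
            played.append((seq, deadline))
        else:
            late.append(seq)
    return played, late

# Frames sent every 20 ms; frame 3 suffers 90 ms of network delay
arrivals = [30, 50, 70, 150, 110]
played, late = playout_times(arrivals)
# frame 3 misses its 120-ms playout deadline and is simply skipped
```

A deeper buffer tolerates more jitter but pushes total latency toward the 200-ms threshold mentioned above, which is the core tuning trade-off for live voice.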
Delivering multimedia from end to end over networks requires adequate bandwidth, compatible protocols, and quality of service. Enterprise networks can be overprovisioned to handle streaming multimedia, but bursts can still disrupt live flows, so prioritization and traffic management may be required. Bandwidth can be reserved for scheduled events such as videoconferences by using resource reservation protocols. Traffic can also be classified and marked with priority codes using differentiated services techniques. Refer to the following topics for more information:
- Congestion Control Mechanisms
- Differentiated Services (Diff-Serv)
- Integrated Services (Int-Serv)
- QoS (Quality of Service)
- RSVP (Resource Reservation Protocol)
- Traffic Management, Shaping, and Engineering
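As a concrete illustration of the differentiated services marking mentioned above, an application can ask the operating system to set the DiffServ code point on its outgoing packets. This is a sketch assuming a Linux/POSIX socket; DSCP 46 (Expedited Forwarding) is the code point commonly used for voice.

```python
import socket

# DSCP occupies the upper six bits of the IP TOS byte, so the
# Expedited Forwarding code point 46 becomes 46 << 2 == 0xB8.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
# Datagrams sent on this socket now carry the EF marking, which
# DiffServ-aware routers can map to a low-latency queue.
```

Marking is only a request: routers along the path must be configured to honor the code point, which is why DiffServ is usually deployed inside a managed enterprise or provider network.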
Streaming Media Protocols
Streaming is a method of delivering real-time or stored information such as audio and video across networks with a reasonable amount of QoS. In the case of RealNetworks' streaming media, a song or video starts to play before the entire content arrives. In other words, data continues to download in the background while the song or video plays. No space is used on a hard drive to store the content. The IETF and the World Wide Web Consortium (W3C) have created the following streaming media protocols:
- RTP (Real-time Transport Protocol) RTP is a protocol that is optimized in various ways for the delivery of real-time data such as live and/or interactive audio and video over IP packet-switched networks. RTP runs over UDP and uses its multiplexing and error-checking features. Other similar transports are supported. RTP is described in more detail at the sites listed next.
- RTSP (Real-time Streaming Protocol) RTSP is a multimedia control protocol. According to RFC 2326 (Real Time Streaming Protocol, April 1998), RTSP acts as a "network remote control" for multimedia servers. The protocol was designed to serve up multimedia from a cluster of hosts (virtual hosts). It is an application-level protocol that establishes and controls one or more time-synchronized streams of continuous media. No files are stored at the receiver. RealNetworks' RealPlayer is an example of an RTSP application. It provides play, fast forward, pause, and other controls. RealNetworks developed the protocol in conjunction with Netscape and submitted it to the IETF for standardization. Refer to the following sites for more information.
- SMIL (Synchronized Multimedia Integration Language) SMIL (pronounced "smile") is an easy-to-learn XML-based markup language for authoring TV-like multimedia presentations with timelines, layout areas, and so on. SMIL helps producers create synchronized presentations that include streaming audio, streaming video, images, text, or any other media type. For example, an opening logo can rotate for five seconds, followed by an audio and video stream, interspersed with periodic displays of text or pictures. Producers can use SMIL to create training courses, product presentations, or multimedia events for the Web. SMIL requires RTSP applications such as RealPlayer for presentation. Microsoft has a similar technology called TIME (Timed Interactive Multimedia Extensions) that applies SMIL concepts to HTML documents. Microsoft calls it "HTML+TIME." The following sites provide more information:
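To make the RTP entry above concrete, the fixed 12-byte RTP header (defined in RFC 1889) can be decoded in a few lines. The sample packet here is synthetic, built purely for illustration.

```python
import struct

def parse_rtp_header(packet: bytes):
    """Decode the fixed 12-byte RTP header."""
    if len(packet) < 12:
        raise ValueError("too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,            # always 2 for current RTP
        "marker": (b1 >> 7) & 1,
        "payload_type": b1 & 0x7F,     # e.g. 0 = PCMU audio
        "sequence": seq,               # detects loss and reordering
        "timestamp": ts,               # media clock, drives playback sync
        "ssrc": ssrc,                  # identifies the sending source
    }

# A synthetic packet: version 2, payload type 0, sequence 7, timestamp 160
pkt = struct.pack("!BBHII", 0x80, 0x00, 7, 160, 0xDEADBEEF)
hdr = parse_rtp_header(pkt)
```

The sequence number and timestamp fields are what let a receiver detect loss and play frames back in sync, as described in the architecture discussion later in this topic.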
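RTSP's "network remote control" requests are plain text, much like HTTP. A sketch of composing a request follows; the method names come from RFC 2326, while the URL and session values are invented for the example.

```python
def rtsp_request(method, url, cseq, headers=None):
    """Compose a minimal RTSP/1.0 request: request line, CSeq,
    optional headers, then a blank line (CRLF line endings)."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    return "\r\n".join(lines) + "\r\n\r\n"

# A typical exchange walks through DESCRIBE, SETUP, PLAY, and TEARDOWN;
# this builds the PLAY step (hypothetical URL and session ID)
msg = rtsp_request("PLAY", "rtsp://media.example.com/song", 3,
                   {"Session": "12345", "Range": "npt=0-"})
```

Note that RTSP only controls the stream; the media itself travels separately, normally over RTP.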
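The SMIL timeline described above (a logo for five seconds, then synchronized audio and video) looks roughly like the markup below, embedded here in a short Python well-formedness check. The media file names are placeholders.

```python
import xml.etree.ElementTree as ET

# Hand-written illustrative SMIL: <seq> plays its children one after
# another, <par> plays its children at the same time.
SMIL_DOC = """\
<smil>
  <body>
    <seq>
      <img src="logo.png" dur="5s"/>
      <par>
        <audio src="talk.rm"/>
        <video src="talk.rv"/>
      </par>
    </seq>
  </body>
</smil>
"""

root = ET.fromstring(SMIL_DOC)   # parses cleanly: the markup is well formed
```

The appeal of SMIL is exactly this readability: a producer describes timing and grouping declaratively and leaves synchronization to the player.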
ITU Multimedia Conferencing Recommendations
ITU H.32x is a series of conferencing and communications standards that cover, among other things, conferencing over ISDN, the PSTN, and packet-switched networks. They include H.320 (desktop conferencing over ISDN lines), H.323 (conferencing over IP-based networks), and H.324 (conferencing over the public switched telephone network). H.323 allows IP telephony hardware and software from different vendors to interoperate over IP networks. It defines all the components necessary for a videoconferencing network, including terminals (conference-enabled desktop systems); gateways (translators between different networks); gatekeepers (management and control); and MCUs (multipoint control units).
See "H Series ITU Recommendations," "H.323 Multimedia Conferencing Standard," and "Voice over IP (VoIP)" for more information.
Internet Multimedia Protocols
The Internet was not designed with real-time traffic in mind. Therefore, many strategies, recommendations, and protocols have been developed to provide these features. While delivering real-time data over packet-switched networks is inefficient when compared to circuit-based models, the trade-off in convenience and cost makes it worthwhile. The Internet is pervasive and supports a true distributed computing model. Long-distance Internet telephone calls are essentially free.
The IETF MMUSIC (Multiparty Multimedia Session Control) Working Group is developing protocols to support Internet multimedia sessions, with an emphasis on videoconferencing. The group has developed a document called "The Internet Multimedia Conferencing Architecture" that defines an architecture for multimedia conferencing on the Internet. This section outlines the basic architecture. You can refer to the MMUSIC Web site listed on the related entries page to read the entire document.
The conferencing architecture developed for the Internet is far more general than the ITU standards discussed previously. In particular, the Internet is more scalable to large groups, and allows new media and applications to be added. The architecture has been adapted for IP telephony (Voice over IP or VoIP). Some other important features of the architecture are listed here:
- The architecture takes advantage of efficient multicasting to distribute information to multiple parties.
- It relies on new service models that provide QoS by reserving capacity and prioritizing traffic.
- It corrects delay problems by using transport protocols such as RTP that send timing information so the recipient can synchronize and properly play back the multimedia streams.
- It supports applications such as whiteboards and shared editors.
- It defines conference policy methods (who can listen, who can speak), how participants find each other, and how they communicate.
- It defines security measures to enforce conference policies.
The Internet multimedia conferencing protocol stack is pictured in Figure M-10.
The most important of the Internet protocols related to teleconferencing and Internet multimedia sessions are described here. Each of these is described under its own topic. A general overview of the Internet Multimedia Conferencing Architecture is provided later.
- SIP (Session Initiation Protocol) A protocol for initiating sessions and inviting users to participate in multimedia sessions. See RFC 2543 (SIP: Session Initiation Protocol, March 1999). The IETF Session Initiation Protocol (sip) Working Group is continuing development on SIP. See http://www.ietf.org/html.charters/sip-charter.html. Also see "SIP (Session Initiation Protocol)."
- SAP (Session Announcement Protocol) SAP is a protocol to announce Internet multicast conferencing sessions. A conference is announced by periodically multicasting a UDP announcement packet to a multicast address and port. Because SAP is designed for multicast, it is suitable for setting up conference calls, not one-on-one calls. The IETF MMUSIC Working Group developed SAP. It is defined in RFC 2974 (Session Announcement Protocol, October 2000).
- SDP (Session Description Protocol) SDP is a protocol that describes a format for conveying descriptive information about multimedia sessions. This information includes session name and purpose, session time, type of media (voice or video), media format (MPEG, for example), transport protocol and port number, bandwidth requirements, and contact information. SDP is not a transport protocol, but relies instead on SIP or SAP to deliver the session information to destinations. For example, a caller can send SDP descriptive information in a SIP INVITE message. The callee then responds with acknowledgments regarding the descriptions that it can accept. See RFC 2327 (SDP: Session Description Protocol, April 1998). The IETF MMUSIC Working Group is continuing development on SDP.
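An SDP description itself is plain text with one field per line. Below is a minimal illustrative session description; the field order follows RFC 2327, but every value (addresses, times, ports) is made up.

```python
# Minimal SDP session description: v, o, s, c, t fields, then one
# m= line per media stream (all values are illustrative).
sdp = "\r\n".join([
    "v=0",
    "o=alice 2890844526 2890844526 IN IP4 host.example.com",
    "s=Project status call",
    "c=IN IP4 224.2.17.12/127",       # multicast address and TTL
    "t=2873397496 2873404696",        # session start and stop times
    "m=audio 49170 RTP/AVP 0",        # PCMU audio over RTP, port 49170
    "m=video 51372 RTP/AVP 31",       # H.261 video over RTP, port 51372
]) + "\r\n"

media = [line for line in sdp.splitlines() if line.startswith("m=")]
# Two media streams are offered; a callee answers with the subset
# of formats and streams it can accept.
```

Because SDP is just a payload format, this same text can ride inside a SIP INVITE or a SAP announcement unchanged.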
SIP is considered a better choice for telephony on the Internet than the ITU H.323 standard because it is a very simple protocol compared to the complex H.323 family of protocols. SIP was originally designed to announce multiparty conferences to prospective participants, and then to set up and tear down the calls.
An important SIP feature is that it works across a variety of applications and media. SIP gets the most attention as an IP telephony call setup protocol, but it can also be used to set up just about any kind of multimedia session. An important SIP concept is that it consists of a set of simple text commands. These are composed using basic HTTP syntax. That puts SIP call controls on the same level as using a Web browser or even sending an e-mail message. The commands are easy to understand across the Web, and provide a simple and radical departure from the complex call setup schemes of H.323.
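A sketch of what those text commands look like: a SIP INVITE carrying an SDP body, in the HTTP-like syntax of RFC 2543. All addresses, tags, and identifiers here are invented for illustration.

```python
# Build a minimal SIP INVITE with an SDP payload (illustrative values).
sdp_body = "v=0\r\ns=Call\r\nm=audio 49170 RTP/AVP 0\r\n"

invite = "\r\n".join([
    "INVITE sip:bob@biloxi.example.com SIP/2.0",
    "From: <sip:alice@atlanta.example.com>;tag=1928301774",
    "To: <sip:bob@biloxi.example.com>",
    "Call-ID: a84b4c76e66710",
    "CSeq: 314159 INVITE",
    "Content-Type: application/sdp",
    f"Content-Length: {len(sdp_body)}",
    "",                      # blank line separates headers from body
    sdp_body,
])
# The callee answers with "SIP/2.0 200 OK" and its own SDP, after
# which media flows directly over RTP at the negotiated addresses.
```

Every element is readable ASCII, which is exactly the "same level as a Web browser or e-mail message" simplicity the paragraph above describes.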
The following IETF Working Groups are developing recommendations and specifications for multimedia over the Internet. The sites list important drafts and RFCs.
- Multiparty Multimedia Session Control (mmusic) is developing protocols to support Internet teleconferencing sessions. The group's focus is on supporting the loosely controlled conferences. The group is working on protocols for distributing session descriptions, securing session announcements, and controlling on-demand delivery of real-time data. Refer to http://www.ietf.org/html.charters/mmusic-charter.html.
- Audio/Video Transport Working Group (avt) is working on protocols for real-time transmission of audio and video over UDP and IP multicast. In particular, the group is focused on RTP (Real-time Transport Protocol). Refer to http://www.ietf.org/html.charters/avt-charter.html.
- Session Initiation Protocol (sip) Working Group is developing SIP, a text-based protocol similar to HTTP or SMTP, for initiating interactive communication sessions between users. These sessions include voice, video, chat, interactive games, and virtual reality. See "SIP (Session Initiation Protocol)." Refer to http://www.ietf.org/html.charters/sip-charter.html.
Copyright (c) 2001 Tom Sheldon and Big Sur Multimedia.
All rights reserved under Pan American and International copyright conventions.