Note: Many topics at this site are reduced versions of the text in "The Encyclopedia of Networking and Telecommunications." Search results will not be as extensive as a search of the book's CD-ROM.
The Internet is a global web of interconnected computer networks, a "network of networks." Over the last 35 years, it has evolved into a global communication system that is analogous in many ways to the circuitry of the brain. In fact, the original concept of an interconnected mesh topology network with redundant links was conceived by Paul Baran of Rand Corporation in the early 1960s as he was thinking about the brain's neural network of redundant pathways.
Today, the Internet is huge and consists of millions of network connections. While the Internet was originally conceived as a communication network for researchers, today it is used by millions of people in business and education, and for everyday communication via e-mail, chat rooms, news services, webcasting, and so on.
No person, government, or entity owns or controls the Internet. One way to understand the Internet is to see it as a set of protocols (TCP/IP), standards, and acceptable use policies that are defined by open committees and organizations. Systems connected to the Internet must abide by these standards. The process of setting standards on the Internet is handled by organizations based on input from users, vendors, government agencies, and so on.
A volunteer organization called ISOC (Internet Society) oversees the future of the Internet. ISOC coordinates the various organizations and standard-setting procedures. Other committees and organizations of the Internet include ICANN (Internet Corporation for Assigned Names and Numbers), which coordinates the assignment of Internet "identifiers" such as domain names, autonomous system numbers, IP addresses, protocol numbers, and port numbers. Other organizations such as W3C (World Wide Web Consortium), NGI (Next Generation Internet), NSF (National Science Foundation), and GIIC (Global Information Infrastructure Commission) promote the development of interoperable technologies, advanced networking technologies, and global Internet infrastructures. These organizations are discussed under "Internet Organizations and Committees."
Internet technical specifications evolve through various levels of maturity called the "standards track." See "Internet Standards" for more information about the standards track process.
Funding for the Internet comes from many sources. The United States government has funded research into advanced networks, including the NSFNET of the late 1980s and early 1990s. At the time, this network was a platform for high-speed Internet backbones and new routing structures. Today, commercial Internet service providers, in conjunction with local and long-haul communication service providers, have built a pay-for-service infrastructure that has greatly expanded the reach of the Internet.
There is a saying that the network is the computer. Emerging Internet and Web technologies are making this a reality. More and more people rely on the Internet for their computing needs. For example, many organizations are outsourcing applications, storage, and management to Internet service providers that operate facilities staffed full time by professionals and that provide fault tolerance and high availability. See "Data Center Design," "ASP (Application Service Provider)," "MSP (Management Service Provider)," and "SSP (Storage Service Provider)."
Internet data centers have become huge facilities where ASPs, MSPs, and SSPs can host their equipment and gain direct access to core networks. These regional facilities also support the exchange of information among service providers and core backbone networks. In addition, they host Internet caching and content distribution services. In fact, Internet data centers come close to caching the entire contents of the Internet. This means that end users access information at a relatively local site, rather than at servers that are thousands of miles and many router hops away. See "Content Distribution" and "Web Caching."
More users are accessing the Web via "thin clients" such as handheld devices, palm computers, and so on. These thin clients are often called "Internet appliances" or "Web terminals." Many are kitchen counter-top devices that provide Web browser services, electronic mail, and so on. The important point is that they have minimal storage and processing power and rely on Web servers for these needs. See "Network Appliances" and "Thin Clients."
An example is a Web camera. It does two things really well. It takes pictures and it provides a Web browser-like interface. After taking pictures, you connect to the Web and upload pictures to a photo-processing Web site. Using the camera's Web browser interface, you can manipulate the pictures (crop, adjust colors, and so on), and then send them to other people or have pictures printed and sent to your home. The important point is that Web servers are doing all the processing and storage. The camera is just a Web interface. Other examples are presented under the topics "Distributed Computer Networks" and "Embedded Systems."
Internet History and Concepts
From the beginning, the thing that set the early Internet apart from other communication systems was its use of packet switching. Information was delivered in packets rather than as a single continuous transmission. If a glitch occurred during transmission, it was only necessary to resend the packets affected by the glitch.
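The idea can be sketched in a few lines of code. This is a toy illustration only, with hypothetical helper names (`packetize`, `reassemble`) that are not part of any real protocol stack; it simply shows why losing one packet does not require resending the whole message.

```python
# Toy model of packet switching: a message is split into independently
# numbered packets, and only a lost packet is retransmitted.

def packetize(message: str, size: int = 8):
    """Split a message into (sequence_number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Sort packets by sequence number and rebuild the message."""
    return "".join(payload for _, payload in sorted(packets))

msg = "Packets survive glitches"
packets = packetize(msg)

# Simulate a glitch: the packet with sequence number 8 is lost in transit...
received = [p for p in packets if p[0] != 8]
# ...so only that one packet is resent, not the entire transmission.
resent = [p for p in packets if p[0] == 8]

print(reassemble(received + resent))  # prints the original message intact
```

Because each packet carries its own sequence number and address, packets can also take different routes through the mesh and still be reassembled correctly at the destination.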
See Internet History for historical information about the Internet. Also refer to the Computer Museum History Center Web site listed on the Related Entries and Web Sites page for this topic.
Leonard Kleinrock developed the packet concept at MIT in the late 1950s and published a theory paper in 1961. Kleinrock later became a key figure in the construction of the early Internet. About the same time, Paul Baran of Rand Corporation developed a model of a distributed mesh topology network. He also developed the concept of multiplexing, in which packet transmissions from multiple sources may be traversing the network at the same time.
Baran was working on a U.S. Department of Defense project to build a communications network that could survive a variety of disruptions, including nuclear war. In thinking about the brain's neural network of redundant connections, he came up with the distributed mesh design shown on the right in the following illustration.
Illustration 7 (see book)
The centralized model was common with mainframe systems of the time. Dumb terminals connected to central computers that controlled everything. The decentralized model was basically a group of mainframes that could exchange information. The distributed mesh model supports any-to-any connectivity, meaning that any node can directly connect with any other node. This requires a universal logical addressing scheme and a mechanism that allows packets to flow from source to destination essentially unaltered.
The network designers realized that there were a variety of different hosts to connect, so a separate message-handling device was created to handle the communication process in a standard way and provide a custom interface to the local time-sharing host system. The device was called an interface message processor or IMP, which was the forerunner of today's router. The first IMP went online in 1969, providing communication services between computers at UCLA and Stanford. At this point, the early Internet was really just a remote computing network, not an internetwork. But it would soon grow and be referred to as the ARPANET.
The early designers of the Internet were particularly interested in the concept of "open architecture networking." People working on the project wanted to connect with many different types of computer systems over many different types of transmission schemes, including low-speed, high-speed, and wireless connections. The IMP separated communication services from the actual computer. This allowed any type of device to connect to the network. The result of this open approach can be seen in the near universal connectivity provided by the Internet.
Bob Kahn of DARPA and Vinton Cerf at Stanford University began developing TCP (Transmission Control Protocol) in the early 1970s. The original protocol was called TCP (IP wasn't defined until later) and it provided a range of reliable connection-oriented services (flow control, acknowledgment, retransmission, and so on). But the designers soon realized that TCP's reliability features added overhead that disrupted the ability to deliver live voice across the network. So, in 1978, TCP was reorganized into TCP and IP, with TCP providing reliability, and IP handling basic networking functions such as addressing, routing, and packet forwarding. With this, applications that didn't need TCP's reliability functions could bypass it and go directly to IP through a new protocol called UDP (User Datagram Protocol). UDP is a scaled-down version of TCP. It provides port connections for applications but foregoes all the extra services in the interest of speed.
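The difference is easy to see with the standard sockets API. The sketch below, using Python's `socket` module over the loopback interface, sends a single UDP datagram: note that there is no connection handshake, acknowledgment, or retransmission, since those are the reliability services that TCP layers on top of IP.

```python
# Minimal sketch of UDP's connectionless model using the standard
# socket API. Datagrams go straight to an IP address and port with
# no handshake; reliability, if needed, is the application's problem.
import socket

# Receiver: bind a datagram socket to a local port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
port = receiver.getsockname()[1]

# Sender: no connect() handshake is required before sending.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello via UDP", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)     # blocks until the datagram arrives
print(data.decode())                     # prints: hello via UDP

sender.close()
receiver.close()
```

A TCP version of the same exchange would require `listen()`, `connect()`, and `accept()` calls to establish the connection before any data could flow, which is exactly the overhead the designers wanted voice traffic to be able to skip.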
The official "birth" of the Internet and the transition from the ARPANET occurred on January 1, 1983. This was the day that all connected networks were required to run the new TCP/IP protocol suite. While the network's history stretches back to the late 1960s, from that day on it was called the "Internet."
ARPANET grew during the 1970s, partly because its open architecture made it easy to connect just about any system to the network. In fact, the more computers that were added, the better, in the same way that fax machines are more beneficial if everybody owns one. E-mail was probably one of the most significant contributors to Internet growth. Eventually, entire networks (and not just large computers) were attached to the network. Robert Metcalfe had created Ethernet, which quickly became a popular LAN technology. Also, the TCP/IP protocols were adopted by the UNIX community, which led to a rapid explosion in the number of users that could connect to the Internet.
In the early 1980s, the NSF (National Science Foundation) realized the benefits the ARPANET had on research and decided to build a successor to ARPANET that would be open to university research groups. Its project resulted in the creation of a high-speed backbone called the NSFNET, which changed the topology of the Internet from a distributed mesh to a hierarchical scheme in which regional networks connected to the backbone and local networks connected to the regional networks.
This model proved very successful and it is still with us today, except that the hierarchical model has since given way to a more fully meshed model as new backbones were built and interconnected with one another. In addition, regional backbone operators built cross-links directly between their networks rather than going through the backbone. By 1995, the NSF had defunded NSFNET and implemented full commercialization of the Internet. See "Internet Architecture and Backbone" and "ISPs (Internet Service Providers)" for more information.
In 1996, the Next Generation Internet (NGI) Initiative was announced to develop advanced networking technologies and applications on testbed networks that were 100 to 1,000 times faster than the networks of the day. This research is still continuing, and you can learn more at the NGI Web site at http://www.ngi.gov.
Internet2 is the latest testbed for the Internet. It is a collaborative project sponsored by UCAID (University Corporation for Advanced Internet Development), a consortium of over 180 U.S. universities that are developing advanced Internet technologies and applications to support research and higher education. See "Internet2."
The core of the Internet is now based on optical networking technologies and light circuits, or lambdas. A lambda is a single wavelength of light that can carry huge amounts of data. Multiple lambdas may exist in a single fiber strand thanks to DWDM (dense wavelength division multiplexing). These circuits provide high-speed point-to-point links that cross entire continents, thus eliminating router hops and avoiding congestion. High-speed optical access networks are also being developed in metropolitan areas, displacing the need to lease voice lines in order to carry data. See "WDM (Wavelength Division Multiplexing)," "MAN (Metropolitan Area Network)," "Network Core Technologies," "Network Access Services," "NPN (New Public Network)," and "Optical Networks."
RFCs Related to Internet History and Development
The following RFCs provide historical and developmental information about the Internet and the Internet protocols. Be sure to see "Internet Entertainment" for a list of RFCs that reflect the "lighter" side of the Internet.
Copyright (c) 2001 Tom Sheldon and Big Sur Multimedia.