History of the Internet
The earliest computer network intended to allow general communication between users of different computers was the ARPANET, the world's first packet-switching network, which went online in 1969.
The Internet's roots lie within the ARPANET, which not only was the intellectual forerunner of the Internet, but was also initially the core network in the collection of networks in the Internet, as well as an important tool in developing the Internet (being used for communication between the groups working on internetworking research).
Motivation for the Internet
The need for an internetwork arose from ARPA's sponsorship, under Robert E. Kahn, of a number of innovative networking technologies: in particular, the first packet radio networks (inspired by the ALOHA network) and a satellite packet communication program. Later, local area networks (LANs) would also join the mix.
Connecting these disparate networking technologies was not possible with the kind of protocols used on the ARPANET, which depended on the exact nature of the subnetwork. A wholly new kind of networking architecture was needed.
Early Internet work
Kahn recruited Vint Cerf, then at Stanford University, to work with him on the problem. They soon worked out a fundamental reformulation, in which the differences between network protocols were hidden by using a common internetwork protocol and, instead of the network being responsible for reliability as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmermann and Louis Pouzin (designer of the CYCLADES network) with important influences on this design. Some accounts also credit the early networking work at Xerox PARC as an important technical influence.
With the role of the network reduced to the bare minimum, it became possible to join almost any networks together, no matter what their characteristics were, thereby solving Kahn's initial problem. (One popular saying has it that TCP/IP, the eventual product of Cerf and Kahn's work, will run over "two tin cans and a string".) A computer called a gateway (later changed to router to avoid confusion with other types of gateway) is provided with an interface to each network, and forwards packets back and forth between them.
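A minimal sketch in Python illustrates the forwarding idea: a gateway holds one interface per attached network and relays packets based only on their destination. All names, addresses, and the routing scheme here are hypothetical simplifications, not the actual IP forwarding algorithm.

```python
# Toy illustration of the gateway (router) concept described above:
# one interface per attached network, packets relayed by destination.

from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # e.g. "radio.host7" (hypothetical addressing)
    dst: str        # e.g. "arpanet.host3"
    payload: bytes

class Interface:
    def __init__(self, name: str):
        self.name = name

    def transmit(self, packet: Packet) -> None:
        print(f"{self.name}: {packet.src} -> {packet.dst}")

class Gateway:
    def __init__(self):
        self.routes: dict[str, Interface] = {}  # network name -> interface

    def attach(self, network: str, interface: Interface) -> None:
        """Attach an interface to one of the networks this gateway joins."""
        self.routes[network] = interface

    def forward(self, packet: Packet) -> None:
        """Relay a packet toward its destination network, if known."""
        network = packet.dst.split(".")[0]
        interface = self.routes.get(network)
        if interface is not None:
            interface.transmit(packet)
        # If no route exists, a real router would return an error to the sender.

# Join two dissimilar (made-up) networks through one gateway.
gw = Gateway()
gw.attach("radio", Interface("radio0"))
gw.attach("arpanet", Interface("imp0"))
gw.forward(Packet("radio.host7", "arpanet.host3", b"hello"))
```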
Happily, this new concept was a perfect fit with the newly emerging local area networks, which were revolutionizing communication between computers within a single site.
Early growth
After ARPANET had been up and running for a decade, ARPA looked for another agency to hand off the network to. After all, ARPA's primary business was funding cutting-edge research and development, not running a communications utility. Eventually the network was turned over to the Defense Communications Agency, also part of the Department of Defense.
In 1983, the TCP/IP protocols replaced the earlier NCP as the principal protocol of the ARPANET; in 1984, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. At the same time, Paul Mockapetris and Jon Postel were working on what would become the Domain Name System.
The early Internet, based around the ARPANET, was government-funded and therefore restricted to non-commercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, and even to a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects, or providing services to those who were.
Another branch of the U.S. government, the National Science Foundation, became heavily involved in Internet research in the mid-1980s. The NSFNet backbone, intended to connect and provide access to a number of supercomputing centers established by the NSF, was established in 1986.
At the end of the 1980s, the U.S. Department of Defense decided the network was developed enough for its initial purposes and stopped further funding of the core Internet backbone. The ARPANET was gradually shut down (its last node was turned off in 1990), and the NSF, a civilian agency, took over responsibility for providing long-haul connectivity in the U.S.
In another NSF initiative, regional TCP/IP-based networks such as NYSERNet (New York State Education and Research Network) and BARRNet (Bay Area Regional Research Network) grew up and started interconnecting with the nascent Internet, greatly expanding the reach of the rapidly growing network. On April 30, 1995, the NSF privatized access to the network it had created. It was at this point that the growth of the Internet really took off.
Commercialization and privatization
Parallel to the Internet, other networks were growing. Some, like BITNET and CSNET, were educational and centrally organized. Others, like the UUCP network, were a grass-roots mix of academic, commercial, and hobbyist sites.
During the late 1980s, the first Internet service provider (ISP) companies were formed. Companies like PSINet, UUNET, Netcom, and Portal provided service to the regional research networks and offered alternate network access (such as UUCP-based e-mail and Usenet news) to the public. The first dial-up ISP, world.std.com, opened in 1989.
Commercial use of the Internet became a hotly debated topic. Although commercial use was forbidden, the exact definition of commercial use could be unclear and subjective. Everyone agreed that one company sending an invoice to another company was clearly commercial use, but anything less was up for debate. The alternate networks, like UUCP, had no such restrictions, so many people skirted grey areas in the interconnection of the various networks.
Many university users were outraged at the idea of non-educational use of their networks. Ironically, it was the commercial Internet service providers who brought prices low enough that junior colleges and other schools could afford to participate in the new arenas of education and research.
By 1994, the NSFNet had lost its standing as the backbone of the Internet. Competing commercial providers had created their own backbones and interconnections, and network access points (NAPs) became the primary interconnections between the many networks. With the NSFNet's retirement as the main backbone, the commercial restrictions were gone.
Early applications
E-mail existed as a message service on early time-sharing mainframe computers connected to a number of terminals. Around 1971, it developed into the first system for exchanging addressed messages between different networked computers; in 1972, Ray Tomlinson introduced the "name@computer" notation that is still used today. E-mail became the Internet's "killer application" of the 1980s.
The early e-mail system was not limited to the Internet. Gateway machines connected Internet SMTP e-mail with UUCP mail, BITNET, the Fidonet BBS network, and other services. Commercial e-mail providers such as CompuServe and The Source could also be reached.
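The flow described above is easy to sketch with Python's standard library: a message addressed in the "name@computer" style is handed to an SMTP server for relaying. The addresses and the mail server below are placeholders, and the snippet assumes such a server is actually reachable.

```python
# Minimal sketch of Internet mail: compose a message addressed as
# "name@computer" and hand it to an SMTP server for delivery.
# All hosts and addresses here are hypothetical placeholders.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@hosta.example"   # the name@computer notation
msg["To"] = "bob@hostb.example"
msg["Subject"] = "Network mail test"
msg.set_content("Hello over the network.")

# Relay via an SMTP server (assumed reachable at this placeholder name).
with smtplib.SMTP("mail.example.org") as server:
    server.send_message(msg)
```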
The second most popular application of the early Internet was Usenet, a system of distributed discussion groups that is still going strong today. Usenet existed even before the Internet, as an application of Unix computers connected by telephone lines via UUCP. The Network News Transfer Protocol (NNTP), similar in flavor to SMTP, slowly replaced UUCP for the relaying of news articles. Today, almost all Usenet traffic is carried over high-speed NNTP servers.
Other early protocols include the File Transfer Protocol (FTP, whose current specification, RFC 959, dates to 1985) and Telnet (RFC 854, 1983), a networked terminal protocol allowing users on one computer to log in to other computers.
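As an illustration of one of these protocols, the sketch below opens an anonymous FTP session using Python's standard ftplib; the server name is a hypothetical placeholder, and the snippet assumes such a server exists and permits anonymous logins.

```python
# Minimal sketch of an FTP session: connect, log in anonymously,
# and list the server's current directory. The host is a placeholder.

from ftplib import FTP

with FTP("ftp.example.org") as ftp:  # hypothetical anonymous FTP server
    ftp.login()                      # no arguments -> anonymous login
    ftp.retrlines("LIST")            # print a directory listing
```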
Host naming and the DNS
One of the scalability limitations of the early Internet was that there was no distributed way to name hosts, other than by their IP addresses. The Network Information Center (NIC) maintained a central hosts file (HOSTS.TXT), giving names for the various reachable hosts. Sites were expected to download this file regularly to have a current list of hosts.
In 1983, Paul Mockapetris devised the Domain Name System (DNS) as an alternative. Domain names (like "wikipedia.org") provided names for hosts that were at once globally unique (like IP addresses), memorable (like host names), and distributed -- sites no longer had to download a hosts file.
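The contrast is easy to demonstrate with Python's standard socket library: instead of looking a name up in a locally downloaded table, the host asks the DNS at lookup time (the query goes to whatever resolver the local machine is configured with).

```python
# Before DNS: names were looked up in a static, centrally maintained
# hosts file that every site downloaded. After DNS: names are resolved
# on demand through a distributed hierarchy of name servers.

import socket

# The old model in miniature (an illustrative entry, not a real mapping).
hosts_file = {"sri-nic": "10.0.0.73"}
print(hosts_file.get("sri-nic"))

# The DNS model: ask the resolver at lookup time.
print(socket.gethostbyname("wikipedia.org"))
```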
Domain names quickly became a feature of e-mail addresses -- replacing the older UUCP bang path notation (e.g., hosta!hostb!user) -- as well as of other services. Many years later, they would become the central part of the World Wide Web's URLs, discussed below.
Standards and control
The Internet has developed a significant subculture dedicated to the idea that the Internet is not owned or controlled by any one person, company, group, or organization. Nevertheless, some standardization and control is necessary for anything to function.
Many people wanted to put their ideas into the standards for communication between the computers that made up this network, so a system was devised for putting forward ideas. One would write one's ideas in a paper called a "Request for Comments" (RFC for short), and let everyone else read it. People commented on and improved those ideas in new RFCs.
With its basis as an educational research project, much of the documentation was written by students or others who played significant roles in developing the network (as part of the original Network Working Group) but did not have official responsibility for defining standards. This is the reason for the very low-key name of "Request for Comments" rather than something like "Declaration of Official Standards".
The first RFC (RFC 1) was published on April 7, 1969. As of 2004, there are over 3,500 RFCs, describing every aspect of how the Internet functions.
The liberal RFC publication procedure has engendered confusion about the Internet standardization process, and has led to more formalization of official accepted standards. Acceptance of an RFC by the RFC Editor for publication does not automatically make the RFC into a standard. It may be recognized as such by the IETF only after many years of experimentation, use, and acceptance have proven it to be worthy of that designation. Official standards are numbered with a prefix "STD" and a number, similar to the RFC naming style. However, even after becoming a standard, most are still commonly referred to by their RFC number.
The Internet standards process has been as innovative as the Internet technology itself. Prior to the Internet, standardization was a slow process run by committees, with arguing vendor-driven factions and lengthy delays. In networking in particular, the results were monstrous patchworks of bloated specifications.
The fundamental requirement for a networking protocol to become an Internet standard is that there be at least two working implementations that inter-operate with each other. This makes sense in retrospect, but it was a new concept at the time. Other efforts built huge specifications with many optional parts and then expected people to go off and implement them; only later would people find that the implementations did not inter-operate, or worse, that the standard was impossible to implement at all.
In the 1980s, the International Organization for Standardization (ISO) launched a new effort in networking called Open Systems Interconnection, or OSI. Prior to OSI, networking was completely vendor-developed and proprietary. OSI was a new industry effort, attempting to get everyone to agree to common network standards to provide multi-vendor interoperability. The OSI reference model became an influential tool for teaching network concepts. However, the OSI protocols or "stack" specified as part of the project were a bloated mess. Standards like X.400 for e-mail took up several large books, while Internet e-mail took only a few dozen pages at most in RFC 821 and RFC 822. Most protocols and specifications in the OSI stack, such as token-bus media, CLNP packet delivery, FTAM file transfer, and X.400 e-mail, are long gone today; in 1996, ISO finally acknowledged that TCP/IP had won and ended the OSI project. Only one OSI standard, the X.500 directory service, survives with significant usage, mainly because the original unwieldy protocol has been stripped away and effectively replaced with LDAP.
Some formal organization is necessary to make everything operate. The first central authority was the NIC (Network Information Center) at SRI (Stanford Research Institute, in Menlo Park, California).
World Wide Web
(see also Origins of the World Wide Web)
One of the Internet applications many people are most familiar with is the World Wide Web.
As the Internet grew through the 1980s and early 1990s, many people realized the growing need to be able to find and organize files and related information. Projects such as Gopher, WAIS, and the Anonymous FTP Archive Site list attempted to create schemes to organize distributed data and present it to people in an easy-to-use form. Unfortunately, these projects fell short in being able to accommodate all the various existing file and data types, and in being able to grow without centralized bottlenecks. Gopher's development would later be halted when the University of Minnesota asserted its intellectual property rights over the technology.
Meanwhile, one of the most promising user interface paradigms during this period was hypertext. The technology's creation had been inspired by Vannevar Bush's "memex" and developed through Ted Nelson's research on Project Xanadu and Douglas Engelbart's research on NLS, the oN-Line System. Many small self-contained hypertext systems had been created before, such as Apple Computer's HyperCard, but before the Internet, nobody had worked out how to scale the technology up so that one document could refer to another anywhere in the world. Both Nelson and one of Engelbart's collaborators had speculated about how to do it, but such speculations had gone nowhere.
The actual solution was invented by Tim Berners-Lee in 1989, out of sheer exasperation after he kept raising his idea at conferences and no one in the Internet or hypertext communities would implement it for him. He was a computer programmer working at CERN, the European Particle Physics Laboratory, and wanted a way for physicists to share information about their research. His documentation project was the source of the two key inventions that made the World Wide Web possible.
These two inventions were the Uniform Resource Locator (URL) and the HyperText Markup Language (HTML). The URL was a way to specify the location of a document anywhere on the Internet in one simple name, combining a computer name, a file "path" on that machine, and a protocol to use in retrieving that file. HTML was an easy way to embed codes into a text file that could define the structure of a document and also include links pointing to other documents. An additional network protocol, the HyperText Transfer Protocol (HTTP), was also invented for reduced overhead and improved speed during file transfers, but the true genius of the new system was that the new protocol was useful but not necessary: the URL and HTML system was backwards compatible with existing protocols like FTP and Gopher.
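The anatomy described above -- protocol, computer name, and file path combined in one name -- can be seen by pulling a URL apart with Python's standard urllib (the URLs below are hypothetical examples):

```python
# A URL bundles three things into one name: the protocol to speak,
# the computer to contact, and the path of the file to retrieve.

from urllib.parse import urlparse

parts = urlparse("http://www.example.org/history/internet.html")
print(parts.scheme)   # 'http' -> which protocol to use
print(parts.netloc)   # 'www.example.org' -> which computer to contact
print(parts.path)     # '/history/internet.html' -> which file to fetch

# Backwards compatibility: the same naming scheme covers older protocols.
print(urlparse("ftp://ftp.example.org/pub/readme.txt").scheme)  # 'ftp'
```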
Tim Berners-Lee's original WorldWideWeb browser was able to include so-called inline graphics in HTML pages (now also known as image transclusion), but it was not until around 1992 that other people started implementing this in their browsers. The first popular graphical web browsers, Viola and Mosaic, were then developed. Mosaic was created by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC), led by Marc Andreessen.
Andreessen left NCSA-UIUC and joined Jim Clark, one of the founders of SGI (Silicon Graphics, Inc.). They started Mosaic Communications, which became Netscape Communications Corporation, making Netscape Navigator the first commercially successful browser. Microsoft licensed Mosaic code from Spyglass, Inc. (which had in turn licensed it from NCSA) to develop Internet Explorer.
The ease of creating new Web documents and linking to existing ones caused exponential growth. As the Web grew, search engines and Web directories were created to track pages on the Web and allow people to find things. One of the first search engines, Lycos, was created in 1994 as a university project at Carnegie Mellon, and its index soon grew to some 800,000 web pages. In 1993, the first web magazine, The Virtual Journal, was published by a University of Maine student.
By August 2001, the Google search engine tracked over 1.3 billion web pages, and the growth continues. In early 2004, Google's index exceeded 4 billion pages; by November 11, 2004, this number had doubled to just over 8 billion.
See also
- Packet switching
- ARPANET
- Internet
- History of computing hardware
- Timeline of hypertext technology
- Al Gore
Further reading
- Arthur Norberg, Judy E. O'Neill, Transforming Computer Technology: Information Processing for the Pentagon, 1962-1982 (Johns Hopkins University, 1996) pp. 179-196
- Katie Hafner, Matthew Lyon, Where Wizards Stay Up Late: The Origins of the Internet (Simon & Schuster, New York, 1996, ISBN 0-684-83267-4) pp. 219-254
External links
- The Internet Society History Page (http://www.isoc.org/internet/history/brief.shtml)
- How the Internet Came to Be (http://www.internetvalley.com/archives/mirrors/cerf-how-inet.txt)
- Hobbes' Internet Timeline v7.0 (http://www.zakon.org/robert/internet/timeline/)
- A Brief History of NSF and the Internet (http://www.nsf.gov/od/lpa/news/03/fsnsf_internet.htm)
- RFC 801, planning the TCP/IP switchover (http://www.ietf.org/rfc/rfc801.txt)