Computer Network


The Internet is the worldwide, publicly accessible network of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). It is a “network of networks” that consists of millions of smaller domestic, academic, business, and government networks, which together carry various information and services, such as electronic mail, online chat, file transfer, and the interlinked Web pages and other documents of the World Wide Web. The Internet was named by the American television show Good Morning America and newspaper USA Today as one of the “New Seven Wonders of the World” [1] in 2006.

Contrary to some common usage, the Internet and the World Wide Web are not synonymous: the Internet is a collection of interconnected computer networks, linked by copper wires, fiber-optic cables, wireless connections, etc.; the Web is a collection of interconnected documents and other resources, linked by hyperlinks and URLs. The World Wide Web is accessible via the Internet, as are many other services including e-mail, file sharing, and others described below.

Creation of the Internet

The USSR’s launch of Sputnik spurred the United States to create the Advanced Research Projects Agency (ARPA, later known as the Defense Advanced Research Projects Agency, or DARPA) in February 1958 to regain a technological lead. ARPA created the Information Processing Technology Office (IPTO) to further the research of the Semi Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. J. C. R. Licklider was selected to head the IPTO, and saw universal networking as a potential unifying human revolution.

In 1950, Licklider moved from the Psycho-Acoustic Laboratory at Harvard University to MIT where he served on a committee that established MIT Lincoln Laboratory. He worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing.

Licklider recruited Lawrence Roberts to head a project to implement a network, and Roberts based the technology on the work of Paul Baran, who had written an exhaustive study for the U.S. Air Force that recommended packet switching (as opposed to circuit switching) to make a network highly robust and survivable. After much work, the first node went live at UCLA on October 29, 1969 on what would be called the ARPANET, one of the “eve” networks of today’s Internet. Following on from this, the British Post Office, Western Union International and Tymnet collaborated to create the first international packet-switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981.

The first TCP/IP wide area network was operational by 1 January 1983, when the United States’ National Science Foundation (NSF) constructed a university network backbone that would later become the NSFNet. (This date is held by some to be technically that of the birth of the Internet.) It was then followed by the opening of the network to commercial interests in 1985. Important, separate networks that offered gateways into, then later merged with, the NSFNet include Usenet, Bitnet and the various commercial and educational X.25 networks such as Compuserve and JANET. Telenet (later called Sprintnet) was a large privately funded national computer network with free dial-up access in cities throughout the U.S. that had been in operation since the 1970s. This network eventually merged with the others in the 1990s as the TCP/IP protocol became increasingly popular. The ability of TCP/IP to work over these pre-existing communication networks, especially the international X.25 IPSS network, allowed for great ease of growth. Use of the term “Internet” to describe a single global TCP/IP network originated around this time.

The network gained a public face in the 1990s. On August 6, 1991, CERN, which straddles the border between France and Switzerland, publicized the new World Wide Web project, two years after Tim Berners-Lee had begun creating HTML, HTTP and the first few Web pages at CERN.

An early popular Web browser was ViolaWWW based upon HyperCard. It was eventually replaced in popularity by the Mosaic Web Browser. In 1993 the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign released version 1.0 of Mosaic and by late 1994 there was growing public interest in the previously academic/technical Internet. By 1996 the word “Internet” was coming into common daily usage, frequently misused to refer to the World Wide Web.

Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks such as FidoNet have remained separate). This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.

Today’s Internet

Aside from the complex physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is essentially defined by its interconnections and routing policies.

As of September 18, 2006, over 1.09 billion people use the Internet according to Internet World Stats.

Internet protocols

In this context, there are three layers of protocols:

  • At the lowest level is IP (Internet Protocol), which defines the datagrams or packets that carry blocks of data from one node to another. The vast majority of today’s Internet uses version four of the IP protocol (i.e. IPv4), and although IPv6 is standardised, it exists only as “islands” of connectivity, and many ISPs have no IPv6 connectivity at all.
  • Next come TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) – the protocols by which one host sends data to another. The former makes a virtual ‘connection’, which gives some level of guarantee of reliability. The latter is a best-effort, connectionless transport, in which data packets that are lost in transit will not be re-sent.
  • On top comes the application protocol. This defines the specific messages and data formats sent and understood by the applications running at each end of the communication.
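The contrast between the two transport protocols can be seen directly with the standard sockets API. A minimal sketch in Python, using the loopback interface only (on a real network the UDP datagram could simply be lost, while the TCP stream would be retransmitted):

```python
import socket

# UDP: connectionless, best-effort. Each sendto() is an independent datagram;
# nothing is retransmitted if it is lost in transit.
udp_recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_recv.bind(("127.0.0.1", 0))            # let the OS pick a free port
udp_send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_send.sendto(b"hello via UDP", udp_recv.getsockname())
udp_data, _ = udp_recv.recvfrom(1024)      # arrives as one whole datagram, or not at all

# TCP: a virtual "connection" providing a reliable, ordered byte stream.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()
cli.sendall(b"hello via TCP")              # delivery is acknowledged and retried
tcp_data = conn.recv(1024)

for s in (udp_recv, udp_send, srv, cli, conn):
    s.close()

print(udp_data, tcp_data)
```

On loopback both messages arrive; the difference is the guarantee, not the happy path.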

Internet structure

There have been many analyses of the Internet and its structure. For example, it has been determined that the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks.
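The scale-free property can be illustrated with a toy growth model. The sketch below is a simplified preferential-attachment process in the spirit of the Barabási–Albert model (the sampling shortcut and the parameters are illustrative, not taken from any Internet measurement): each new node links to existing nodes with probability proportional to their degree, which produces a few highly connected hubs, much as in Internet routing and Web-link graphs.

```python
import random
from collections import Counter

def barabasi_albert(n, m, seed=42):
    """Grow a graph by preferential attachment: each new node links to
    m existing nodes chosen with probability proportional to degree."""
    rng = random.Random(seed)
    targets = list(range(m))   # start from a small seed set of m nodes
    repeated = []              # each node appears here once per edge endpoint
    edges = []
    for new_node in range(m, n):
        for t in set(targets):
            edges.append((new_node, t))
        repeated.extend(targets)
        repeated.extend([new_node] * m)
        # Sampling uniformly from `repeated` is sampling proportionally to
        # degree (with replacement, then dedup -- a common simplification).
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

edges = barabasi_albert(2000, 2)
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
degs = sorted(degree.values(), reverse=True)
# Heavy tail: the biggest hubs have far more links than the median node.
print(degs[:5], degs[len(degs) // 2])
```

A random (Erdős–Rényi) graph of the same size would instead show degrees clustered tightly around the mean, with no comparable hubs.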

Similar to the way the commercial Internet providers connect via Internet exchange points, research networks tend to interconnect into large subnetworks such as:

  • Internet2
  • JANET (the UK’s Joint Academic Network aka UKERNA)

These in turn are built around relatively smaller networks. See also the list of academic computer network organizations.

In network schematic diagrams, the Internet is often represented by a cloud symbol, into and out of which network communications can pass.


The Internet Corporation for Assigned Names and Numbers (ICANN) is the authority that coordinates the assignment of unique identifiers on the Internet, including domain names, Internet protocol addresses, and protocol port and parameter numbers. A globally unified namespace (i.e., a system of names in which there is one and only one holder of each name) is essential for the Internet to function. ICANN is headquartered in Marina del Rey, California, but is overseen by an international board of directors drawn from across the Internet technical, business, academic, and non-commercial communities. The US government continues to have the primary role in approving changes to the root zone file that lies at the heart of the domain name system. Because the Internet is a distributed network comprising many voluntarily interconnected networks, the Internet, as such, has no governing body. ICANN’s role in coordinating the assignment of unique identifiers distinguishes it as perhaps the only central coordinating body on the global Internet, but the scope of its authority extends only to the Internet’s systems of domain names, Internet protocol addresses, and protocol port and parameter numbers.

On November 16, 2005, the World Summit on the Information Society, held in Tunis, established the Internet Governance Forum (IGF) to discuss Internet-related issues.


The World Wide Web

Through keyword-driven Internet research using search engines, like Google, millions worldwide have easy, instant access to a vast and diverse amount of online information. Compared to encyclopedias and traditional libraries, the World Wide Web has enabled a sudden and extreme decentralization of information and data.

Many individuals and some companies and groups have adopted the use of “Web logs” or blogs, which are largely used as easily-updatable online diaries. Some commercial organizations encourage staff to fill them with advice on their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result. One example of this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public’s interest in their work.

For more information on the distinction between the World Wide Web and the Internet itself — as in everyday use the two are sometimes confused — see Dark internet where this is discussed in more detail.


The low-cost and nearly instantaneous sharing of ideas, knowledge, and skills has made collaborative work dramatically easier. Not only can a group cheaply communicate and test, but the wide reach of the Internet allows such groups to easily form in the first place, even among niche interests. An example of this is the Free/Libre/Open-Source Software (FLOSS) movement in software development, including projects such as Linux and Mozilla. Cooperation has been greatly eased in other fields as well.


The most prevalent language for communication on the Internet is English. This may be a result of the Internet’s origins, as well as English’s role as the lingua franca. It may also be related to the poor capability of early computers to handle characters other than those in the basic Latin alphabet.

The Internet’s technologies have developed enough in recent years that good facilities are available for development and communication in most widely used languages. However, some glitches such as mojibake (incorrect display of foreign language characters, also known as krakozyabry) still remain.

Internet and the workplace

The Internet is allowing greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections and Web applications.

The Internet has given employees a forum from which to voice their opinions about their jobs, employers and co-workers, creating a massive amount of information and data on work that is currently being collected by the project run by Harvard Law School’s Labor & Worklife Program.

The name Internet

Internet is traditionally written with a capital first letter, as it is a proper noun. The Internet Society, the Internet Engineering Task Force, the Internet Corporation for Assigned Names and Numbers, the World Wide Web Consortium, and several other Internet-related organizations use this convention in their publications.

Many newspapers, newswires, periodicals, and technical journals capitalize the term. Examples include the New York Times, the Associated Press, Time, The Times of India, Hindustan Times, and Communications of the ACM.

Others assert that the first letter should be written in lower case (internet). A significant number of publications use this form, including The Economist, the Canadian Broadcasting Corporation, the Financial Times, The Guardian, The Times, and The Sydney Morning Herald. As of 2005, many publications using internet appear to be located outside of North America—although one U.S. news source, Wired News, has adopted the lower case spelling.

Historically, Internet and internet have had different meanings, with internet being a contraction of internetwork or internetworking and Internet referring to the worldwide network. Under this distinction, the Internet is a particular internet, but the reverse does not apply. The distinction was evident in many RFCs, books, and articles from the 1980s and early 1990s (some of which, such as RFC 1918, refer to “internets” in the plural), but has recently fallen into disuse.


  • The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal M. Mitchell Waldrop.
  • The Internet Society History Page.
  • How the Internet Came to Be
  • Hobbes’ Internet Timeline v8.1
  • Futures and Non-futures for Scholarly Internet.
  • History of the Internet links.
  • RFC 801, planning the TCP/IP switchover
  • Video of a report on the Internet – before the Web
  • Vinton Cerf’s short history of the Internet
  • Internet Archive – A searchable database of old cached versions of Web sites dating back to 1996
  • A comprehensive history with people, concepts and many interesting quotations
  • CBC Digital Archives – Inventing the Internet Age
  • A list of lectures from the Massachusetts Institute of Technology, some of which relate to the Internet. Of particular interest is lecture #3, The Next Big Thing: Video Internet, delivered in Real Player format. The lecture gives a brief history of networking; discusses convergence between the Internet, telephone, and television networks and the expansion of broadband access; and makes predictions about the future of video delivery over the Internet.


NSFNET — The National Science Foundation Network


The time of a federally provided general-purpose backbone network for the research and science community is coming to a close as of April 1995. Its roots stem from early ARPA research on packet switching and its development of the TCP/IP protocol suite, which the NSF selected for its NSFNET program in the mid-eighties, at a time of strong momentum towards the GOSIP ISO protocols and support for X.25.

Evolving from the ARPANET core model, which centered around a single infrastructure to interconnect campuses, the NSFNET focused on a broad operational interconnection infrastructure serving regional clients and agency peer networks, each of which would connect their respective clients.

The TCP/IP selection for the NSFNET resulted in strong worldwide acceptance in the ten years since the mid-eighties, as the creation of the NSFNET was the enabler for broad interconnectability in the Internet community. The NSFNET program itself initially came out of the NSF supercomputing center program, with two of the awardees, SDSC and JvNC, having proposed a consortium network. NSF then orchestrated the interconnection of its supercomputing centers via a 56kbps “Fuzzball”-based backbone (already synchronized to radio clocks), to which shortly thereafter regional (or mid-level) networks connected, using the 56kbps NSFNET backbone as the national interconnection fabric. In July 1988, a 1.544 Mbps T1 replacement of the NSFNET backbone entered operation, and was itself replaced by a 45Mbps T3 backbone in the early nineties to meet growing demand. By then the commercialization and privatization of the Internet had begun to take off in earnest, with the NSF coming under increasing pressure to move networking activities to the private sector rather than have the federal government bulk-provide general networking services. This pressure resulted in a rethinking of the NSFNET architecture, to ensure Internet stability during the window between government-supported services and full privatization of the network.

Topology history

  • 56kbps NSFNET backbone
  • T1/448kbps physical NSFNET backbone
  • T1/448kbps logical NSFNET backbone
  • T1 non-muxed NSFNET backbone
  • T3 NSFNET backbone service

The new NSFNET architecture

To address the aforementioned time window, the National Science Foundation created four new projects, three of them infrastructure related and one supporting network research and development activities. Those are:

  • infrastructure-related projects:
    • support for interconnectivity to regional networks
  • general-purpose Network Access Points (NAPs)

NSF priority NAP details:

  • Ameritech NAP (mid west area)
  • Pacific Bell NAP (west coast area)
  • Sprint NAP (east coast area)

fourth NSF NAP:

  • MFS Datanet NAP (Washington DC area)

generic NAP related information:

  • Commercial Internet EXchange, a trade association and interconnect point in California
  • European Interconnect Information — RIPE Connectivity Fact Sheets
  • US Network Service Provider interconnect map by CERFnet

routing arbiter functions

  • routing server description
  • network research and development
  • very High Speed Backbone Network Services (vBNS)

Network services evolution

In its initial implementation, network users typically selected specific services to which they explicitly connected in a one-to-one fashion, largely to transfer files, for interactive access to remote machines, and for electronic mail to other users.

This has evolved in the last few years towards a broad “information perimeter” as seen by individual users. The information source is no longer perceived as specific machines, but as a horizon consisting of the available information resources, with a one-to-many mapping between a user and information resources.

This has contributed to the notion of an information infrastructure. In the future, even that view will be too limiting, as a many-to-many weave of connectivity is arising from a mixture of collaboration, information, and generic facility resources.


The NSFNET has been shaping the Internet from a federal network research effort, via a federally provided infrastructure, towards a commercialized environment. Some of the next challenges will be in the focus on applications, and how they are provisioned throughout the networked environment, and to support collaboration, information, and facilities resources. Some of the network analysis over the years has shown a dramatic impact of new applications on the IP switching substrate, something that will have to be considered for the overall traffic profiles, as new high end applications demand significant amounts of bandwidth for extensive periods of time.

Some of this is seen already by the increasing use of audio and video applications on the Internet. A lot of areas need further exploration, including:

  • work towards an information provisioning architecture, including
    • information resource discovery
    • network/server load considerations
    • architected information cache infrastructure
    • architected information brokerage
  • scalable multi-user collaboration environments, including
    • collaboration resource discovery
    • hierarchical server structures
    • movability of clients among servers
    • dynamic creation and support for collaboration groups
  • real-time visualization and sensory data, including
    • environment status information (e.g., air quality)
    • aggregation of data from many sources
    • provisioning of data silos/warehouses
    • data base support for the data access


The Advanced Research Projects Agency Network (ARPANET) developed by ARPA of the United States Department of Defense was the world’s first operational packet switching network, and the progenitor of the global Internet.

Packet switching, now the dominant basis for both data and voice communication worldwide, was a new and important concept in data communications. Previously, data communications was based on the idea of circuit switching, as in the old typical telephone circuit, where a dedicated circuit is tied up for the duration of the call and communication is only possible with the single party on the other end of the circuit.

With packet switching, a system could use one communication link to communicate with more than one machine by assembling data into packets. Not only could the link be shared (much as a single mail person can be used to post letters to different destinations), but each packet could be routed independently of other packets. This was a major advance.
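The idea above can be sketched in a few lines: a message is split into numbered packets, the packets may travel (and arrive) in any order, and the receiver reassembles them by sequence number. This is a toy illustration of the concept, not any real protocol; the packet size and record layout are made up:

```python
import random

PACKET_SIZE = 8  # bytes of payload per packet (tiny, for illustration)

def packetize(message: bytes):
    """Split a message into numbered packets, as a packet switch would."""
    return [
        {"seq": i, "payload": message[i * PACKET_SIZE:(i + 1) * PACKET_SIZE]}
        for i in range((len(message) + PACKET_SIZE - 1) // PACKET_SIZE)
    ]

def reassemble(packets):
    """Order packets by sequence number and concatenate the payloads."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

message = b"Packets may take different routes and arrive out of order."
packets = packetize(message)
random.shuffle(packets)   # simulate independent routing: arrival order varies
assert reassemble(packets) == message
```

Because each packet carries its own sequence number, the link can interleave packets from many conversations, which is exactly what lets one link serve many machines.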

Background of the ARPANET

The earliest ideas of a computer network intended to allow general communication between users of various computers were formulated by J.C.R. Licklider of Bolt, Beranek and Newman (BBN) in August 1962, in a series of memos discussing his “Galactic Network” concept. These ideas contained almost everything that the Internet is today.

In October 1963, Licklider was appointed head of the Behavioral Sciences and Command and Control programs at ARPA (as it was then called), the United States Department of Defense Advanced Research Projects Agency. He then convinced Ivan Sutherland and Bob Taylor that this was a very important concept, although he left ARPA before any actual work on his vision was performed.

ARPA and Taylor continued to be interested in creating a computer communication network, in part to allow ARPA-sponsored researchers in various locations to use various computers which ARPA was providing, and in part to quickly make new software and other results widely available. Taylor had three different terminals in his office, connected to three different computers which DARPA was funding: one for the SDC Q-32 in Santa Monica, one for Project Genie at the University of California, Berkeley, and one for Multics at MIT. Taylor later recalled:

For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them.

I said, oh, man, it’s obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet.[1]

Roughly contemporaneously, a number of people had (mostly independently) worked out various aspects of what later became known as “packet switching”; the people who created the ARPANET would eventually draw on all these different sources.

Origins of the ARPANET

At the end of 1966, Taylor brought Larry Roberts to ARPA from MIT Lincoln Laboratory to head a project to create the network. Roberts had some initial experience in this area: two years previously, in 1965, while at MIT Lincoln Laboratory, he had connected the TX-2 to System Development Corporation’s Q-32 over a telephone line, conducting some of the earliest experiments in which two computers communicated that way. Roberts’ initial concept for the network for ARPA was to hook the various time-sharing machines directly to each other over telephone lines.

At a meeting at the University of Michigan in Ann Arbor, Michigan in early 1967, many of the participants were unenthusiastic at having the load of managing this line put directly on their computers. One of the participants, Wesley Clark, came up with the idea of using separate smaller computers to manage the communication links; the small computers would then be connected to the large time-sharing mainframe computers which were the typical machines to be connected to the ARPANET. This concept allowed most of the detailed work of running the network to be offloaded from the large mainframes; it also meant that correct operation of the network as a whole was not subject to the vagaries of individual host implementations, and that DARPA would have complete control over the network itself.

Initial planning for the ARPANET began on that basis, with a number of working groups on specific technical subjects meeting during the late spring and summer of 1967.

Roberts then proceeded to author a “plan for the ARPANET”, which was presented at a symposium in Gatlinburg, Tennessee in October 1967. Also presenting there was Roger Scantlebury, from Donald Davies’ group at NPL. (Roberts had previously encountered Davies at a conference in Britain about time-sharing, in November 1965.) Scantlebury discussed Davies’ packet switching ideas with Roberts, and introduced Roberts to Paul Baran’s work.

The exact impact of all this is unclear, and somewhat controversial; memoirs by different people involved in the process give sharply conflicting accounts, often in conflict with their earlier recorded statements. The general view of most historians is that all four (Baran, Kleinrock, Davies and Roberts) had important contributions:

  • Davies was instrumental in passing on the knowledge of packet switching that he and Baran had developed to Lawrence Roberts [1]
  • Roberts’ ideas for the network were modified by his discussions with Scantlebury. … According to his later description, upon returning to Washington from the Gatlinburg meeting [Roberts] was influenced by Baran’s reports [2]
  • If anyone influenced Roberts in his earliest thinking about computer networks, it was Kleinrock. … Baran’s insights into data communications intrigued [Roberts] … The Gatlinburg paper presented by Scantlebury on behalf of the British effort was clearly an influence, too.

Creation of the ARPANET

By the summer of 1968, a complete plan had been prepared, and after approval at ARPA, a Request For Quotation (RFQ) was sent to 140 potential bidders. Most regarded the proposal as outlandish, and only 12 companies submitted bids, of which only four were regarded as in the top rank. By the end of the year, the field had been narrowed to two, and after negotiations, a final choice was made, and the contract was awarded to BBN on 7 April 1969.

BBN’s proposal followed Roberts’ plan closely; it called for the network to be composed of small computers known as Interface Message Processors (more commonly known as IMPs). The IMPs at each site performed store-and-forward packet switching functions, and were connected to each other using modems attached to leased lines (initially running at 50 kbit/s). Host computers connected to the IMPs via custom bit-serial interfaces.
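Store-and-forward routing of this kind can be sketched with a toy topology. The link layout below is loosely inspired by the first four ARPANET sites but is illustrative, not the historical one; each node consults a next-hop table (built here by breadth-first search) and forwards the packet one link at a time, just as each IMP held a packet until it could pass it to the next IMP:

```python
from collections import deque

# Illustrative topology: node names echo the first four sites, but the
# links are a made-up example, not the real 1969 layout.
links = {
    "UCLA": ["SRI", "UCSB"],
    "SRI":  ["UCLA", "UTAH"],
    "UCSB": ["UCLA"],
    "UTAH": ["SRI"],
}

def next_hop_table(src):
    """BFS from src: for every destination, record the first hop to take."""
    table, seen = {}, {src}
    frontier = deque((nbr, nbr) for nbr in links[src])
    while frontier:
        node, first = frontier.popleft()
        if node in seen:
            continue
        seen.add(node)
        table[node] = first
        frontier.extend((nbr, first) for nbr in links[node])
    return table

routes = {imp: next_hop_table(imp) for imp in links}

def deliver(src, dst, packet):
    """Forward the packet hop by hop, as each IMP would store and forward it."""
    path, here = [src], src
    while here != dst:
        here = routes[here][dst]
        path.append(here)
    return path

print(deliver("UCSB", "UTAH", b"LOGIN"))   # ['UCSB', 'UCLA', 'SRI', 'UTAH']
```

The key property is that no host needs to know the whole topology: each node only needs to know the next hop, which is what let the IMP subnetwork evolve independently of the hosts.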

BBN initially chose a ruggedized version of Honeywell’s DDP-516 computer to build the first-generation IMP. The 516 was originally configured with 24 kbytes of core memory (expandable) and a 16 channel Direct Multiplex Control (DMC) direct memory access control unit. Custom interfaces were used to connect, via the DMC, to each of the hosts and modems. In addition to the lamps on the front panel of the 516 there was also a special set of 24 indicator lights to show the status of the IMP communication channels. Each IMP could support up to four local hosts and could communicate with up to six remote IMPs over leased lines.

The small team at BBN (initially only seven people), helped considerably by the detail they had gone into to produce their response to the RFQ, quickly produced the first working units. The entire system, including both hardware and the world’s first packet switching software, was designed and installed in nine months.

The initial ARPANET consisted of four IMPs. They were installed at:

  • UCLA, where Leonard Kleinrock had established a Network Measurement Center (with an SDS Sigma 7 being the first computer attached to it).
  • The Stanford Research Institute’s Augmentation Research Center, where Douglas Engelbart had created the ground-breaking NLS system, a very important early hypertext system (with the SDS 940 that ran NLS, named ‘Genie’, being the first host attached).
  • The University of California, Santa Barbara (with the Culler-Fried Interactive Mathematics Centre’s IBM 360/75, running OS/MVT, being the machine attached).
  • The University of Utah’s Graphics Department, where Ivan Sutherland had moved (for a DEC PDP-10 running TENEX).

The first ARPANET link was established on October 29, 1969, between the IMP at UCLA and the IMP at SRI. By December 5, 1969, the entire 4-node network was connected [2].

Software and protocol development

The starting point for host-to-host communication on the ARPANET was the 1822 protocol which defined the way that a host sent messages to an ARPANET IMP. The message format was designed to work unambiguously with a broad range of computer architectures. Essentially, an 1822 message consisted of a message type, a numeric host address, and a data field. To send a data message to another host, the sending host would format a data message containing the destination host’s address and the data to be sent, and transmit the message through the 1822 hardware interface. The IMP would see that the message was delivered to its destination, either by delivering it to a locally connected host or by delivering it to another IMP. When the message was ultimately delivered to the destination host, the IMP would send an acknowledgment message (called Ready for Next Message or RFNM) to the sending host.
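As a rough illustration of such a message layout, the sketch below packs a message type, a numeric destination host address, and a data field into a byte string and parses it back. The field widths and byte order here are hypothetical, chosen only to make the example concrete; the actual 1822 leader defined in BBN Report 1822 is more involved:

```python
import struct

MSG_DATA = 0  # hypothetical message-type code for a regular data message

# Hypothetical fixed header: type (1 byte), destination host address (2 bytes),
# payload length (4 bytes), network byte order, no padding.
HEADER = struct.Struct("!BHI")

def build_message(msg_type: int, dest_host: int, data: bytes) -> bytes:
    """Format a message: header fields followed by the data field."""
    return HEADER.pack(msg_type, dest_host, len(data)) + data

def parse_message(raw: bytes):
    """Recover (type, destination host, data) from the wire bytes."""
    msg_type, dest_host, length = HEADER.unpack_from(raw)
    return msg_type, dest_host, raw[HEADER.size:HEADER.size + length]

wire = build_message(MSG_DATA, 42, b"hello, host 42")
assert parse_message(wire) == (MSG_DATA, 42, b"hello, host 42")
```

The point of such a format is exactly what the text describes: the host only names a destination and hands over bytes, and the IMP network takes responsibility for delivery and for the RFNM acknowledgment.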

Unlike modern Internet datagrams, the ARPANET was designed to transmit all 1822 messages reliably, or at least to be able to tell the host when a message was lost. Nonetheless, the 1822 protocol did not prove to be adequate by itself for juggling multiple connections between different applications residing on a single host. This problem was addressed with the Network Control Program or NCP, which provided a standard method to establish reliable, flow-controlled, bidirectional communications links between different processes on different hosts. The NCP interface allowed application software to connect across the ARPANET implementing higher-level communication protocols. This was an early example of the protocol layering concept incorporated into the OSI model.

In 1983, TCP/IP protocols replaced NCP as the principal protocol of the ARPANET, and the ARPANET became just one component of the fledgling Internet.

Network Applications

NCP provided a standard set of network services that could be shared by several applications running on a single host computer. This led to the evolution of application protocols that operated more or less independently of the underlying network service. When the ARPANET migrated to the Internet protocols in 1983, the major application protocols migrated along with it.

  • E-mail: In 1971, Ray Tomlinson of BBN sent the first network email [3]. By 1973, 75% of the ARPANET traffic was email.
  • File transfer: By 1973, the File Transfer Protocol (FTP) specification had been defined and implemented, enabling file transfers over the ARPANET.
  • Voice traffic: A Network Voice Protocol (NVP) specification was also defined (RFC 741) and then implemented, but conference calls over the ARPANET never worked well, for technical reasons; packet voice would not become a workable reality for several decades.

Growth of the network

In March, 1970, the ARPANET reached the U.S. East Coast, when an IMP at BBN itself was joined up to the network. Thereafter, the network grew quickly: 9 IMPs by June of 1970, and 13 by December; 18 by September, 1971 (at which point twenty-three hosts, at universities and government research centers, were connected to the ARPANET); 29 by August, 1972, and 40 by September, 1973.

At that point two satellite links, across the Pacific and Atlantic Oceans to Hawaii and Norway (Norwegian Seismic Array) respectively, had been added to the network. From Norway, a terrestrial circuit added an IMP in London to the growing network.

By June, 1974 there were 46 IMPs, and the network reached 57 in July, 1975. By 1981, the number of hosts had grown to 213, with a new host being added approximately every twenty days.

After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA’s primary business was funding cutting-edge research and development, not running a communications utility. Eventually, in July 1975, the network was turned over to the Defense Communications Agency, also part of the Department of Defense.

In 1984, the U.S. military portion of the ARPANet was broken off as a separate network, the MILNET.

Later hardware developments

Support for inter-IMP circuits of up to 230.4 kbit/s was added in 1970, although considerations of cost and IMP processing power meant this capability was not much used.

1971 saw the start of the use of the non-ruggedized (and therefore significantly lighter) H-316 as an IMP. It could also be configured as a Terminal IMP (TIP), which added support for up to 63 ASCII serial terminals through a multi-line controller in place of one of the hosts. The 316 featured a greater degree of integration than the 516, which made it less expensive and easier to maintain. The 316 was configured with 40 Kbytes of core memory for a TIP. In 1973, the size of core memory was increased, to 32 Kbytes for the IMPs and 56 Kbytes for the TIPs.

The Honeywell-based IMPs were eventually superseded by multi-processor BBN Pluribus IMPs in 1975. These in turn were later phased out in favor of machines called C/30s, which were custom-built by BBN.

The original IMPs and TIPs were phased out as the ARPANET was shut down after the introduction of the NSFNet, but some IMPs remained in service as late as 1989.

The ARPANET and nuclear attacks

A common semi-myth about the ARPANET states that it was designed to be resistant to nuclear attack. The Internet Society writes about the merger of technical ideas that produced the ARPANET in A Brief History of the Internet, and states in a note:

It was from the RAND study that the false rumor started claiming that the ARPANET was somehow related to building a network resistant to nuclear war. This was never true of the ARPANET, only the unrelated (sic) RAND study on secure voice considered nuclear war. However, the later work on Internetting did emphasize robustness and survivability, including the capability to withstand losses of large portions of the underlying networks.

The ARPANET was designed to survive network losses, but the main reason was actually that the switching nodes and network links were not highly reliable, even without any nuclear attacks. Charles Herzfeld, ARPA director from 1965 to 1967, speaks about limited computer resources helping to spur ARPANET’s creation:

The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim. To build such a system was clearly a major military need, but it was not ARPA’s mission to do this; in fact, we would have been severely criticized had we tried. Rather, the ARPAnet came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators who should have access to them were geographically separated from them.
