The word 'Internet' is commonly explained as a short form of 'interconnected network' (or 'internetwork'): the prefix 'inter-', which connotes being between or among other things or people, combined with the word 'network'.

The word 'network', as we often say and hear, refers to a group of electronic devices connected so that they can communicate with one another.

The Internet has been a very useful network, and it is used by a great number of people; all kinds of people surf it. Young people tend to use the Internet more than any other age bracket does, while adults also browse it to broaden their knowledge of a subject or to settle puzzling questions.

The Internet, clipped at the front into the commonly known short word 'the Net', was invented in the 20th century.

It all started on January 1, 1983, which is considered the Internet's official birthday; before then, the various computer networks had no standard way to communicate with each other.

After this, a new communications protocol was established, called Transmission Control Protocol/Internet Protocol (TCP/IP).

Since the invention of this worldwide network, the Internet has been an avenue for posting and viewing content that is informative, educative, entertaining, shocking, eye-opening, and revealing in many ways.

Since 1983, the year the Internet became a reality, people have been able to publish their content in the form of articles, essays, blogs, and posts, among other formats.

Some people who work on the Internet also create and publish their own websites, building pages where online researchers can view whatever they crave to know about or even hear; browsers such as Opera Mini, UC Mini, Phoenix, and Chrome, among others, are the apps through which that research is done.

These cyber-surfers may either view their results and answers online or save them offline to view later; all of these important features of the Internet trace back to the very date it was conceived. The Internet also fares best in the developed world, in places where the electronics sector is advanced.

When someone is active on the Internet, he or she is said to be online; otherwise, that person is said to be offline.

Many things are done on the Internet, including browsing, chatting, posting, playing games, and other leisure activities.

Before that D-day, the first of January 1983, there was nothing called the Internet; there was no Wi-Fi or mobile tethering (also called a hotspot). People far from one another communicated through letters, which made sending a message an exacting task; there was no WhatsApp, and nobody knew Facebook, because it did not yet exist. No one used Opera Mini to browse content, and no one could post what he knew 'on' any 'line'; all messages were offline.

All of this changed on the day the Internet (the Net) was invented. It eased the transmission of messages and words; one could now send a great many messages to someone he or she had never met or seen before.

Sharing Resources


The Internet started in the 1960s as a way for government researchers to share information. Computers in the '60s were large and immobile and in order to make use of information stored in any one computer, one had to either travel to the site of the computer or have magnetic computer tapes sent through the conventional postal system.

Another catalyst in the formation of the Internet was the heating up of the Cold War. The Soviet Union's launch of the Sputnik satellite spurred the U.S. Defense Department to consider ways information could still be disseminated even after a nuclear attack. This eventually led to the formation of the ARPANET (Advanced Research Projects Agency Network), the network that ultimately evolved into what we now know as the Internet. ARPANET was a great success but membership was limited to certain academic and research organizations who had contracts with the Defense Department. In response to this, other networks were created to provide information sharing.

January 1, 1983 is considered the official birthday of the Internet. Prior to this, the various computer networks did not have a standard way to communicate with each other. A new communications protocol was established called Transmission Control Protocol/Internet Protocol (TCP/IP). This allowed different kinds of computers on different networks to "talk" to each other. ARPANET and the Defense Data Network officially changed to the TCP/IP standard on January 1, 1983, hence the birth of the Internet. All networks could now be connected by a universal language.

What is the internet?


The internet is the world’s most popular computer network. It began as an academic research project in 1969, and became a global commercial network in the 1990s. Today it is used by more than 2 billion people around the world.

The internet is notable for its decentralization. No one owns the internet or controls who can connect to it. Instead, thousands of different organizations operate their own networks and negotiate voluntary interconnection agreements.

Who decides what domain names exist and who gets them?


The domain name system is administered by the Internet Corporation for Assigned Names and Numbers (ICANN), a non-profit organization based in California. ICANN was founded in 1998. It was granted authority over DNS by the US Commerce Department, though it has increasingly asserted its independence from the US government.

History of the Internet


Why was the internet created? In the 1950s and 60s, the United States was engaged in the Cold War with the Soviet Union. Each country was working to increase its science and technology capabilities in order to prevent nuclear attacks from the other, and also remain capable of attacking the other should the situation devolve. At that time, computers were much larger and more expensive than today's models. Mainframe computers took up entire rooms, and were only able to do specific tasks. Researchers needed to be able to use the computers to perform these tasks, but often had to travel long distances to find a computer to do a specific task. The proposed solution was a way to connect the computers so they could speak to each other, allowing researchers to share data without needing to travel to the location of the computer.

How was the internet created? The problem with having computers communicate with each other was that the method of transferring data from one computer to another, circuit switching, took a long time and could easily be interrupted. All of the data had to be sent as one continuous transmission, and if the connection was interrupted at any time during the process, none of the data would get through. Scientists developed a different method called packet switching to overcome this problem. With packet switching, the data could be broken up into smaller segments, and each segment could be sent individually. The smaller amounts of data took less time to transfer, and if an interruption occurred, some of the data would have made it through and the process could be continued without having to start over completely. Once the data reached its destination, the packets were reassembled into the complete message.
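
To make the idea concrete, here is a minimal, purely illustrative sketch in Python (not any historical protocol): a message is split into numbered packets, the packets are delivered independently and possibly out of order, and the destination reassembles them.

```python
# Illustrative packet switching: split a message into numbered packets,
# deliver them independently (possibly out of order), then reassemble.
import random

def to_packets(message: bytes, size: int = 8) -> list[tuple[int, bytes]]:
    """Break the message into (sequence number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Rebuild the message by ordering packets on their sequence numbers."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Packets may arrive out of order, yet the message survives."
packets = to_packets(message)
random.shuffle(packets)              # simulate independent, out-of-order delivery
assert reassemble(packets) == message
```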

The evolution of the internet continued from here. Packet switching allowed computers to connect to each other over a network called ARPANET, the Advanced Research Projects Agency Network. When did the internet start? In 1969, the first computers communicated over ARPANET from UCLA to SRI in California. This initial network only had four nodes, but more were added to allow research universities to share data and other resources. After ARPANET, other networks were developed, but the individual networks could not communicate with each other. In order to solve this problem, a set of rules called the Transmission Control Protocol and Internet Protocol were developed, known as TCP/IP. These rules allowed for universal communication across all networks and made sure that packets sent over a network would be delivered to the correct destination.
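
As a present-day illustration of that "universal communication", the sketch below, assuming Python's standard socket module and a loopback address chosen only for the example, shows two endpoints exchanging bytes over a TCP/IP connection.

```python
# Two endpoints talking over TCP/IP on the local machine.
import socket
import threading

srv = socket.create_server(("127.0.0.1", 0))      # bind to any free port and listen
port = srv.getsockname()[1]

def echo_once():
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024))             # echo the bytes back

threading.Thread(target=echo_once, daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello over TCP/IP")
    print(client.recv(1024))                      # b'hello over TCP/IP'

srv.close()
```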

When was the internet introduced to the public? The World Wide Web was launched in 1991. The Web allows users from any connected computer to locate resources and web pages using Uniform Resource Locators, or URLs. The URL functions as an address that tells the computer where to find the resource on the internet. The Web also uses HTTP, Hypertext Transfer Protocol, to enable users to download linked resources, and HTML, Hypertext Markup Language, the formatting language for web pages. Although people often refer to the Web as the internet, it is actually a service that runs on the internet, not the internet itself.
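
A small sketch of those pieces in use, assuming Python's standard library and using example.com purely as a placeholder host: the URL is parsed into its parts, and HTTP is used to fetch the HTML page it names.

```python
# A URL names a resource; HTTP is the protocol used to fetch it.
from urllib.parse import urlparse
from urllib.request import urlopen

url = "http://example.com/"
parts = urlparse(url)
print(parts.scheme, parts.netloc, parts.path)   # http example.com /

with urlopen(url) as response:                  # an HTTP GET request
    html = response.read().decode("utf-8")      # the HTML markup of the page
print(html[:80])
```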


Network History



Where does the internet come from? The internet would not exist without networks. A network, simply put, is a group of devices that can communicate among themselves. They can be connected either wirelessly or with cables, and include devices like smartphones, computers, tablets, and printers. ARPANET is the most famous early network, but there are plenty of others in the history of networks. Each of the early networks was developed for a specific purpose.

ARPANET was the first large-scale network to connect computers that were not in the same geographical location. It went online in late 1969, and used phone lines to allow the computers to communicate. ARPANET was not deactivated until 1990, at which point so many computers were connected to each other that it had evolved into the internet.
CSNET was funded by the National Science Foundation, and went online in 1981. Its purpose was to connect computer science researchers from different universities. This expanded access to the internet for universities who were unable to access ARPANET for various reasons.
NSFNET was funded by the National Science Foundation, and went online in 1986. It connected several supercomputers across the country, and helped cause rapid expansion of internet use throughout the 80s and early 90s.
MILNET was originally part of ARPANET, but physically separated from it in 1983 to provide a place for unclassified Department of Defense usage.


Difference Between ARPANET and World Wide Web


ARPANET and the World Wide Web are very different. ARPANET was a physical system that allowed computers to connect to each other, while the World Wide Web was a set of languages and protocols used to navigate those connections. Essentially, ARPANET was hardware, and the World Wide Web is software.

Who invented the internet first?


The internet was created through the work of many people. However, the first person to envision a network of connected computers was J.C.R. Licklider.

Who truly invented the Internet?


The internet was invented through the work of many people, not just one. Some of the key figures were Lawrence Roberts, who proposed and led ARPANET for many years, and Tim Berners-Lee, who invented the World Wide Web.

What is a brief history of the Internet?

A very brief history of the internet begins with ARPANET in 1969, which initially connected four universities' computers to each other. It grew and developed into what we recognize as the internet today, which became available for public access in 1991.


Top 10 uses of the Internet



Electronic mail. At least 85% of the inhabitants of cyberspace send and receive e-mail. Some 20 million e-mail messages cross the Internet every week.

Research.

Downloading files.

Discussion groups. These include public groups, such as those on Usenet, and the private mailing lists that ListServ manages.

Interactive games. Who hasn’t tried to hunt down at least one game?

Education and self-improvement. On-line courses and workshops have found yet another outlet.

Friendship and dating. You may be surprised at the number of electronic “personals” that you can find on the World Wide Web.

Electronic newspapers and magazines. This category includes late-breaking news, weather, and sports. We’re likely to see this category leap to the top five in the next several years.

Job-hunting. Classified ads are in abundance, but most are for technical positions.

Shopping. It’s difficult to believe that this category even ranks. It appears that “cybermalls” are more for curious than serious shoppers.

The survey shows that individuals, corporations, business people, and groups use the Internet primarily as a communications vehicle as these users reduce their use of fax machines, telephones, and the postal service. E-mail should remain at the top of the list. The Internet has continued and will continue to change how we view the world. — by Diane Myers, Analyst Communications, End Use, In-Stat, Scottsdale, AZ. (602) 483-4442



Foundations
Precursors
Data communication

The concept of data communication – transmitting data between two different places through an electromagnetic medium such as radio or an electric wire – pre-dates the introduction of the first computers. Such communication systems were typically limited to point to point communication between two end devices. Semaphore lines, telegraph systems and telex machines can be considered early precursors of this kind of communication. The telegraph in the late 19th century was the first fully digital communication system.

Information theory


Fundamental theoretical work on information theory was developed by Harry Nyquist and Ralph Hartley in the 1920s. Information theory, as enunciated by Claude Shannon in 1948, provided a firm theoretical underpinning to understand the trade-offs between signal-to-noise ratio, bandwidth, and error-free transmission in the presence of noise, in telecommunications technology. This was one of the three key developments, along with advances in transistor technology (specifically MOS transistors) and laser technology, that made possible the rapid growth of telecommunication bandwidth over the next half-century.[16]
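
The trade-off Shannon formalized can be stated in one line: channel capacity C = B * log2(1 + S/N), where B is the bandwidth and S/N the signal-to-noise ratio. A small illustrative calculation follows; the figures are assumptions chosen for the example, not historical data.

```python
# Shannon capacity: the error-free bit rate achievable over a noisy channel.
import math

def capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz voice-grade telephone line with roughly 30 dB SNR (SNR = 1000):
print(round(capacity_bps(3_000, 1_000)))   # roughly 30,000 bits per second
```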

Computers


Early computers in the 1940s had a central processing unit and user terminals. As the technology evolved in the 1950s, new systems were devised to allow communication over longer distances (for terminals) or with higher speed (for interconnection of local devices) that were necessary for the mainframe computer model. These technologies made it possible to exchange data (such as files) between remote computers. However, the point-to-point communication model was limited, as it did not allow for direct communication between any two arbitrary systems; a physical link was necessary. The technology was also considered vulnerable for strategic and military use because there were no alternative paths for the communication in case of a broken link.

Inspiration for networking and interaction with computers
The earliest computers were connected directly to terminals used by an individual user. Christopher Strachey, who became Oxford University's first Professor of Computation, filed a patent application for time-sharing in February 1959.[17][18] In June that year, he gave a paper "Time Sharing in Large Fast Computers" at the UNESCO Information Processing Conference in Paris where he passed the concept on to J. C. R. Licklider.[19][20] Licklider, vice president at Bolt Beranek and Newman, Inc., went on to propose a computer network in his January 1960 paper Man-Computer Symbiosis:[21]

A network of such centers, connected to one another by wide-band communication lines [...] the functions of present-day libraries together with anticipated advances in information storage and retrieval and symbiotic functions suggested earlier in this paper

In August 1962, Licklider and Welden Clark published the paper "On-Line Man-Computer Communication"[22] which was one of the first descriptions of a networked future.

In October 1962, Licklider was hired by Jack Ruina as director of the newly established Information Processing Techniques Office (IPTO) within DARPA, with a mandate to interconnect the United States Department of Defense's main computers at Cheyenne Mountain, the Pentagon, and SAC HQ. There he formed an informal group within DARPA to further computer research. He began by writing memos in 1963 describing a distributed network to the IPTO staff, whom he called "Members and Affiliates of the Intergalactic Computer Network".[23]

Although he left the IPTO in 1964, five years before the ARPANET went live, it was his vision of universal networking that provided the impetus for one of his successors, Robert Taylor, to initiate the ARPANET development. Licklider later returned to lead the IPTO in 1973 for two years.[24]

Packet switching

Main article: Packet switching
The issue of connecting separate physical networks to form one logical network was the first of many problems. Early networks used message switched systems that required rigid routing structures prone to single point of failure. In the 1960s, Paul Baran of the RAND Corporation produced a study of survivable networks for the U.S. military in the event of nuclear war.[25] Information transmitted across Baran's network would be divided into what he called "message blocks".[26] Independently, Donald Davies (National Physical Laboratory, UK), proposed and put into practice a local area network based on what he called packet switching, the term that would ultimately be adopted.

Packet switching is a rapid store and forward networking design that divides messages up into arbitrary packets, with routing decisions made per-packet. It provides better bandwidth utilization and response times than the traditional circuit-switching technology used for telephony, particularly on resource-limited interconnection links.[27]

Networks that led to the Internet

NPL network
Main article: NPL network
Following discussions with J. C. R. Licklider in 1965, Donald Davies became interested in data communications for computer networks.[28][29] Later that year, at the National Physical Laboratory in the United Kingdom, Davies designed and proposed a national commercial data network based on packet switching. The following year, he described the use of an "Interface computer" to act as a router.[30] The proposal was not taken up nationally but he produced a design for a local network to serve the needs of NPL and prove the feasibility of packet switching using high-speed data transmission.[31][32] To deal with packet permutations (due to dynamically updated route preferences) and with datagram losses (unavoidable when fast sources send to slow destinations), he assumed that "all users of the network will provide themselves with some kind of error control",[33] thus inventing what came to be known as the end-to-end principle. He and his team were among the first to use the term 'protocol' in a data-communication context in 1967.[34] The network's development was described at a 1968 conference.[35][36]

By 1968,[37] Davies had begun building the Mark I packet-switched network to meet the needs of the multidisciplinary laboratory and prove the technology under operational conditions.[38][39] The NPL local network and the ARPANET were the first two networks in the world to use packet switching,[40] and the NPL network was the first to use high-speed links.[41] Many other packet switching networks built in the 1970s were similar "in nearly all respects" to Davies' original 1965 design.[42] The NPL team carried out simulation work on packet networks, including datagram networks, and research into internetworking and computer network security.[43][44] The Mark II version which operated from 1973 used a layered protocol architecture.[41] In 1976, 12 computers and 75 terminal devices were attached,[45] and more were added until the network was replaced in 1986.

ARPANET

Main article: ARPANET
Robert Taylor was promoted to the head of the Information Processing Techniques Office (IPTO) at Defense Advanced Research Projects Agency (DARPA) in 1966. He intended to realize Licklider's ideas of an interconnected networking system.[46] As part of the IPTO's role, three network terminals had been installed: one for System Development Corporation in Santa Monica, one for Project Genie at University of California, Berkeley, and one for the Compatible Time-Sharing System project at Massachusetts Institute of Technology (MIT).[47] The need for networking became obvious to Taylor from the waste of resources this arrangement made apparent to him.

For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them.... I said, oh man, it's obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet.[47]

Bringing in Larry Roberts from MIT in January 1967, he initiated a project to build such a network. Roberts and Thomas Merrill had been researching computer time-sharing over wide area networks (WANs).[48] Wide area networks emerged during the 1950s and became established during the 1960s. At the first ACM Symposium on Operating Systems Principles in October 1967, Roberts presented a proposal for the "ARPA net", based on Wesley Clark's proposal to use Interface Message Processors to create a message switching network.[49][50][51] At the conference, Roger Scantlebury presented Donald Davies' work on packet switching for data communications and mentioned the work of Paul Baran at RAND. Roberts incorporated the packet switching concepts into the ARPANET design and upgraded the proposed communications speed from 2.4 kbps to 50 kbps.[52][53][54][55] Leonard Kleinrock subsequently developed the mathematical theory behind the performance of this technology building on his earlier work on queueing theory.[56]

ARPA awarded the contract to build the network to Bolt Beranek & Newman, and the first ARPANET link was established between the University of California, Los Angeles (UCLA) and the Stanford Research Institute at 22:30 hours on October 29, 1969.[57]

"We set up a telephone connection between us and the guys at SRI ...", Kleinrock ... said in an interview: "We typed the L and we asked on the phone,

"Do you see the L?"
"Yes, we see the L," came the response.
We typed the O, and we asked, "Do you see the O."
"Yes, we see the O."
Then we typed the G, and the system crashed ...
Yet a revolution had begun" ....[58]



Merit Network
Main article: Merit Network
The Merit Network[64] was formed in 1966 as the Michigan Educational Research Information Triad to explore computer networking between three of Michigan's public universities as a means to help the state's educational and economic development.[65] With initial support from the State of Michigan and the National Science Foundation (NSF), the packet-switched network was first demonstrated in December 1971 when an interactive host to host connection was made between the IBM mainframe computer systems at the University of Michigan in Ann Arbor and Wayne State University in Detroit.[66] In October 1972 connections to the CDC mainframe at Michigan State University in East Lansing completed the triad. Over the next several years, in addition to host to host interactive connections, the network was enhanced to support terminal to host connections, host to host batch connections (remote job submission, remote printing, batch file transfer), interactive file transfer, gateways to the Tymnet and Telenet public data networks, X.25 host attachments, gateways to X.25 data networks, Ethernet attached hosts, and eventually TCP/IP; additional public universities in Michigan also joined the network.[66][67] All of this set the stage for Merit's role in the NSFNET project starting in the mid-1980s.

CYCLADES

Main article: CYCLADES
The CYCLADES packet switching network was a French research network designed and directed by Louis Pouzin. He developed the network to explore alternatives to the early ARPANET design and to support internetworking research. First demonstrated in 1973, it was the first network to implement the end-to-end principle conceived by Donald Davies and make the hosts responsible for reliable delivery of data, rather than the network itself, using unreliable datagrams. Concepts implemented in this network influenced TCP/IP architecture.[68][69][5]

X.25 and public data networks

Main articles: X.25 and public data network
1974 ABC interview with Arthur C. Clarke, in which he describes a future of ubiquitous networked personal computers.
Based on international research initiatives, particularly the contributions of Rémi Després, packet switching network standards were developed by the International Telegraph and Telephone Consultative Committee (ITU-T) in the form of X.25 and related standards.[70][71] X.25 is built on the concept of virtual circuits emulating traditional telephone connections. In 1974, X.25 formed the basis for the SERCnet network between British academic and research sites, which later became JANET. The initial ITU Standard on X.25 was approved in March 1976.[72]

The British Post Office, Western Union International and Tymnet collaborated to create the first international packet switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong, and Australia by 1981. By the 1990s it provided a worldwide networking infrastructure.[73]

Unlike ARPANET, X.25 was commonly available for business use. Telenet offered its Telemail electronic mail service, which was also targeted to enterprise use rather than the general email system of the ARPANET.

The first public dial-in networks used asynchronous TTY terminal protocols to reach a concentrator operated in the public network. Some networks, such as Telenet and CompuServe, used X.25 to multiplex the terminal sessions into their packet-switched backbones, while others, such as Tymnet, used proprietary protocols. In 1979, CompuServe became the first service to offer electronic mail capabilities and technical support to personal computer users. The company broke new ground again in 1980 as the first to offer real-time chat with its CB Simulator. Other major dial-in networks were America Online (AOL) and Prodigy that also provided communications, content, and entertainment features.[74] Many bulletin board system (BBS) networks also provided on-line access, such as FidoNet which was popular amongst hobbyist computer users, many of them hackers and amateur radio operators.

UUCP and Usenet

Main articles: UUCP and Usenet
In 1979, two students at Duke University, Tom Truscott and Jim Ellis, originated the idea of using Bourne shell scripts to transfer news and messages on a serial line UUCP connection with nearby University of North Carolina at Chapel Hill. Following public release of the software in 1980, the mesh of UUCP hosts forwarding on the Usenet news rapidly expanded. UUCPnet, as it would later be named, also created gateways and links between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due to the lower costs involved, ability to use existing leased lines, X.25 links or even ARPANET connections, and the lack of strict use policies compared to later networks like CSNET and Bitnet. All connects were local. By 1981 the number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984.[75]

Sublink Network, operating since 1987 and officially founded in Italy in 1989, based its interconnectivity upon UUCP to redistribute mail and newsgroup messages throughout its Italian nodes (about 100 at the time), owned both by private individuals and small companies. Sublink Network was possibly one of the first examples of Internet technology advancing through popular diffusion.

1973–1989: Merging the networks and creating the Internet

Map of the TCP/IP test network in February 1982
TCP/IP
Main article: Internet Protocol Suite
See also: Transmission Control Protocol and Internet Protocol

First Internet demonstration, linking the ARPANET, PRNET, and SATNET on November 22, 1977
With so many different network methods, something was needed to unify them. Steve Crocker had formed a "Networking Working Group" at University of California, Los Angeles in 1969. Louis Pouzin initiated the CYCLADES project in 1971, building on the work of Donald Davies; Pouzin coined the term catenet for concatenated network. An International Networking Working Group formed in 1972; active members included Vint Cerf from Stanford University, Alex McKenzie from BBN, Donald Davies and Roger Scantlebury from NPL, and Louis Pouzin and Hubert Zimmermann from IRIA.[76][77][78] Later that year, Bob Kahn of DARPA recruited Vint Cerf to work with him on the problem. By 1973, these groups had worked out a fundamental reformulation, where the differences between network protocols were hidden by using a common internetwork protocol, and instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible.[1][3]

Kahn and Cerf published their ideas in May 1974,[79] which incorporated concepts implemented by Louis Pouzin and Hubert Zimmermann in the CYCLADES network.[80] The specification of the resulting protocol, the Transmission Control Program, was published as RFC 675 by the Network Working Group in December 1974.[81] It contains the first attested use of the term internet, as a shorthand for internetwork. This software was monolithic in design using two simplex communication channels for each user session.

With the role of the network reduced to a core of functionality, it became possible to exchange traffic with other networks independently from their detailed characteristics, thereby solving the fundamental problems of internetworking. DARPA agreed to fund development of prototype software. Testing began in 1975 through concurrent implementations at Stanford, BBN and University College London.[2] After several years of work, the first demonstration of a gateway between the Packet Radio network (PRNET) in the SF Bay area and the ARPANET was conducted by the Stanford Research Institute. On November 22, 1977, a three network demonstration was conducted including the ARPANET, the SRI's Packet Radio Van on the Packet Radio Network and the Atlantic Packet Satellite Network (SATNET).[82][83]

The software was redesigned as a modular protocol stack, using full-duplex channels; between 1976 and 1977, Yogen Dalal and Robert Metcalfe among others, proposed separating TCP's routing and transmission control functions into two discrete layers,[84][85] which led to the splitting of the Transmission Control Program into the Transmission Control Protocol (TCP) and the Internet Protocol (IP) in version 3 in 1978.[85][86] Originally referred to as IP/TCP, version 4 was described in IETF publication RFC 791 (September 1981), 792 and 793. It was installed on SATNET in 1982 and the ARPANET in January 1983 after the DoD made it standard for all military computer networking.[87][88] This resulted in a networking model that became known informally as TCP/IP. It was also referred to as the Department of Defense (DoD) model, DARPA model, or ARPANET model.[89] Cerf credits his graduate students Yogen Dalal, Carl Sunshine, Judy Estrin, Richard Karp, and Gérard Le Lann with important work on the design and testing.[90] DARPA sponsored or encouraged the development of TCP/IP implementations for many operating systems.

Nonetheless, for a period in the late 1980s and early 1990s, engineers, organizations and nations were polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks.[77][91][92]


Decomposition of the quad-dotted IPv4 address representation to its binary value
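
The decomposition the caption refers to can be shown in a few lines; this sketch assumes Python's standard ipaddress module and an arbitrary example address.

```python
# A dotted-quad IPv4 address is just one 32-bit integer.
import ipaddress

addr = ipaddress.IPv4Address("192.168.0.1")
value = int(addr)                        # the single 32-bit value
print(value)                             # 3232235521
print(format(value, "032b"))             # 11000000101010000000000000000001
print(ipaddress.IPv4Address(value))      # back to 192.168.0.1
```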

From ARPANET to NSFNET
Main article: NSFNET

BBN Technologies TCP/IP Internet map of early 1986.
After the ARPANET had been up and running for several years, ARPA looked for another agency to hand off the network to; ARPA's primary mission was funding cutting-edge research and development, not running a communications utility. Eventually, in July 1975, the network had been turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of the ARPANET was broken off as a separate network, the MILNET. MILNET subsequently became the unclassified but military-only NIPRNET, in parallel with the SECRET-level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have controlled security gateways to the public Internet.

The networks based on the ARPANET were government funded and therefore restricted to noncommercial uses such as research; unrelated commercial use was strictly forbidden. This initially restricted connections to military sites and universities. During the 1980s, the connections expanded to more educational institutions, which began to form networks of fiber optic lines, and to a growing number of companies such as Digital Equipment Corporation and Hewlett-Packard, which were participating in research projects or providing services to those who were. Data transmission speeds depended upon the type of connection, the slowest being analog telephone lines and the fastest using optical networking technology.

Several other branches of the U.S. government, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy (DOE) became heavily involved in Internet research and started development of a successor to ARPANET. In the mid-1980s, all three of these branches developed the first Wide Area Networks based on TCP/IP. NASA developed the NASA Science Network, NSF developed CSNET and DOE evolved the Energy Sciences Network or ESNet.


T3 NSFNET Backbone, c. 1992
NASA developed the TCP/IP based NASA Science Network (NSN) in the mid-1980s, connecting space scientists to data and information stored anywhere in the world. In 1989, the DECnet-based Space Physics Analysis Network (SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought together at NASA Ames Research Center creating the first multiprotocol wide area network called the NASA Science Internet, or NSI. NSI was established to provide a totally integrated communications infrastructure to the NASA scientific community for the advancement of earth, space and life sciences. As a high-speed, multiprotocol, international network, NSI provided connectivity to over 20,000 scientists across all seven continents.

In 1981 NSF supported the development of the Computer Science Network (CSNET). CSNET connected with ARPANET using TCP/IP, and ran TCP/IP over X.25, but it also supported departments without sophisticated network connections, using automated dial-up mail exchange.

In 1986, the NSF created NSFNET, a 56 kbit/s backbone to support the NSF-sponsored supercomputing centers. The NSFNET also provided support for the creation of regional research and education networks in the United States, and for the connection of university and college campus networks to the regional networks.[93] The use of NSFNET and the regional networks was not limited to supercomputer users and the 56 kbit/s network quickly became overloaded. NSFNET was upgraded to 1.5 Mbit/s in 1988 under a cooperative agreement with the Merit Network in partnership with IBM, MCI, and the State of Michigan. The existence of NSFNET and the creation of Federal Internet Exchanges (FIXes) allowed the ARPANET to be decommissioned in 1990.

NSFNET was expanded and upgraded to dedicated fiber, optical lasers and optical amplifier systems capable of delivering T3 start up speeds or 45 Mbit/s in 1991. However, the T3 transition by MCI took longer than expected, allowing Sprint to establish a coast-to-coast long-distance commercial Internet service. When NSFNET was decommissioned in 1995, its optical networking backbones were handed off to several commercial Internet service providers, including MCI, PSI Net and Sprint.[94] As a result, when the handoff was complete, Sprint and its Washington DC Network Access Points began to carry Internet traffic, and by 1996, Sprint was the world's largest carrier of Internet traffic.[95]

The research and academic community continues to develop and use advanced networks such as Internet2 in the United States and JANET in the United Kingdom.

Transition towards the Internet
The term "internet" was reflected in the first RFC published on the TCP protocol (RFC 675:[96] Internet Transmission Control Program, December 1974) as a short form of internetworking, when the two terms were used interchangeably. In general, an internet was a collection of networks linked by a common protocol. In the time period when the ARPANET was connected to the newly formed NSFNET project in the late 1980s, the term was used as the name of the network, Internet, being the large and global TCP/IP network.[97]

Opening the Internet and the fiber optic backbone to corporations and consumers increased demand for network capacity. The expense and delay of laying new fiber led providers to test a fiber bandwidth expansion alternative that had been pioneered in the late 1970s by Optelecom using “interactions between light and matter, such as lasers and optical devices used for optical amplification and wave mixing”.[98] This technology became known as wavelength division multiplexing (WDM). Bell Labs deployed a 4-channel WDM system in 1995.[99] To develop a mass capacity (dense) WDM system, Optelecom and its former head of Light Systems Research, David R. Huber, formed a new venture, Ciena Corp., that deployed the world's first dense WDM system on the Sprint fiber network in June 1996.[100] This was referred to as the real start of optical networking.[101]

As interest in networking grew, driven by needs for collaboration, exchange of data, and access to remote computing resources, Internet technologies spread throughout the rest of the world. The hardware-agnostic approach in TCP/IP supported the use of existing network infrastructure, such as the International Packet Switched Service (IPSS) X.25 network, to carry Internet traffic.

Many sites unable to link directly to the Internet created simple gateways for the transfer of electronic mail, the most important application of the time. Sites with only intermittent connections used UUCP or FidoNet and relied on the gateways between these networks and the Internet. Some gateway services went beyond simple mail peering, such as allowing access to File Transfer Protocol (FTP) sites via UUCP or mail.[102]

Finally, routing technologies were developed for the Internet to remove the remaining centralized routing aspects. The Exterior Gateway Protocol (EGP) was replaced by a new protocol, the Border Gateway Protocol (BGP). This provided a meshed topology for the Internet and reduced the centric architecture which ARPANET had emphasized. In 1994, Classless Inter-Domain Routing (CIDR) was introduced to support better conservation of address space which allowed use of route aggregation to decrease the size of routing tables.[103]
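
As a small illustration of the route aggregation CIDR makes possible (assuming Python's standard ipaddress module and documentation-range addresses), four adjacent /24 networks collapse into a single /22 advertisement:

```python
# CIDR route aggregation: contiguous prefixes advertised as one supernet.
import ipaddress

routes = [ipaddress.ip_network(f"198.51.{i}.0/24") for i in range(100, 104)]
aggregate = list(ipaddress.collapse_addresses(routes))
print(aggregate)    # [IPv4Network('198.51.100.0/22')]
```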

Optical networking
To address the need for transmission capacity beyond that provided by radio, satellite and analog copper telephone lines, engineers developed optical communications systems based on fiber optic cables powered by lasers and optical amplifier techniques.

The concept of lasing arose from a 1917 paper by Albert Einstein, “On the Quantum Theory of Radiation.” Einstein expanded upon a dialog with Max Planck on how atoms absorb and emit light, part of a thought process that, with input from Erwin Schrödinger, Werner Heisenberg and others, gave rise to Quantum Mechanics. Specifically, in his quantum theory, Einstein mathematically determined that light could be generated not only by spontaneous emission, such as the light emitted by an incandescent light or the Sun, but also by stimulated emission.

Forty years later, on November 13, 1957, Columbia University physics student Gordon Gould first realized how to make light by stimulated emission through a process of optical amplification. He coined the term LASER for this technology—Light Amplification by Stimulated Emission of Radiation.[104] Using Gould's light amplification method (patented as “Optically Pumped Laser Amplifier”) [1], Theodore Maiman made the first working laser on May 16, 1960.[105]

Gould co-founded Optelecom, Inc. in 1973 to commercialize his inventions in optical fiber telecommunications,[106] just as Corning Glass was producing the first commercial fiber optic cable in small quantities. Optelecom configured its own fiber lasers and optical amplifiers into the first commercial optical communication systems which it delivered to Chevron and the US Army Missile Defense.[107] Three years later, GTE deployed the first optical telephone system in 1977 in Long Beach, California.[108] By the early 1980s, optical networks powered by lasers, LED and optical amplifier equipment supplied by Bell Labs, NTT and Pirelli were used by select universities and long-distance telephone providers.

TCP/IP goes global (1980s)
CERN, the European Internet, the link to the Pacific and beyond
In early 1982, NORSAR and Peter Kirstein's group at University College London (UCL) left the ARPANET and began to use TCP/IP over SATNET.[109] UCL provided access between the Internet and academic networks in the UK.[110][111]

Between 1984 and 1988, CERN began installation and operation of TCP/IP to interconnect its major internal computer systems, workstations, PCs, and an accelerator control system. CERN continued to operate a limited self-developed system (CERNET) internally and several incompatible (typically proprietary) network protocols externally. There was considerable resistance in Europe towards more widespread use of TCP/IP, and the CERN TCP/IP intranets remained isolated from the Internet until 1989, when a transatlantic connection to Cornell University was established.[112][113][114]

The Computer Science Network (CSNET), which began operation in 1981 in the United States using TCP/IP, added its first international connection in Israel in 1984. Soon thereafter, connections were established to computer science departments in Australia, Canada, France, Germany, Korea, and Japan.[115]

In 1988, the first international connections to NSFNET were established by France's INRIA,[116][117] and Piet Beertema at the Centrum Wiskunde & Informatica (CWI) in the Netherlands.[118] Daniel Karrenberg, from CWI, visited Ben Segal, CERN's TCP/IP coordinator, looking for advice about the transition of EUnet, the European side of the UUCP Usenet network (much of which ran over X.25 links), over to TCP/IP. The previous year, Segal had met with Len Bosack from the then still small company Cisco about purchasing some TCP/IP routers for CERN, and Segal was able to give Karrenberg advice and forward him on to Cisco for the appropriate hardware. This expanded the European portion of the Internet across the existing UUCP networks. The NORDUnet connection to NSFNET was in place soon after, providing open access for university students in Denmark, Finland, Iceland, Norway, and Sweden.[119] In January 1989, CERN opened its first external TCP/IP connections.[120] This coincided with the creation of Réseaux IP Européens (RIPE), initially a group of IP network administrators who met regularly to carry out coordination work together. Later, in 1992, RIPE was formally registered as a cooperative in Amsterdam.

In 1991, JANET, the UK national research and education network adopted Internet Protocol on the existing network.[121][122] The same year, Dai Davies introduced Internet technology into the pan-European NREN, EuropaNet, which was built on the X.25 protocol.[123][124] The European Academic and Research Network (EARN) and RARE adopted IP around the same time, and the European Internet backbone EBONE became operational in 1992.[112]

At the same time as the rise of internetworking in Europe, ad hoc networking to ARPA and in-between Australian universities formed, based on various technologies such as X.25 and UUCPNet. These were limited in their connection to the global networks, due to the cost of making individual international UUCP dial-up or X.25 connections. In 1989, Australian universities joined the push towards using IP protocols to unify their networking infrastructures. AARNet was formed in 1989 by the Australian Vice-Chancellors' Committee and provided a dedicated IP based network for Australia. New Zealand's first international Internet connection was established the same year.[125]

In May 1982, South Korea set up a two-node domestic TCP/IP network, adding a third node the following year.[126][127] Japan, which had built the UUCP-based network JUNET in 1984, connected to NSFNET in 1989, marking the spread of the Internet to Asia. It hosted the annual meeting of the Internet Society, INET'92, in Kobe. Singapore developed TECHNET in 1990, and Thailand gained a global Internet connection between Chulalongkorn University and UUNET in 1992.[128]

The early global "digital divide" emerges

Internet users in 2015 as a percentage of a country's population
Source: International Telecommunication Union.[129]
Main articles: Global digital divide and Digital divide

Fixed broadband Internet subscriptions in 2012
as a percentage of a country's population
Source: International Telecommunication Union.[130]

Mobile broadband Internet subscriptions in 2012
as a percentage of a country's population
Source: International Telecommunication Union.[131]
While developed countries with technological infrastructures were joining the Internet, developing countries began to experience a digital divide separating them from the Internet. On an essentially continental basis, they are building organizations for Internet resource administration and sharing operational experience, as more and more transmission facilities go into place.

Africa
At the beginning of the 1990s, African countries relied upon X.25 IPSS and 2400 baud modem UUCP links for international and internetwork computer communications.

In August 1995, InfoMail Uganda, Ltd., a privately held firm in Kampala now known as InfoCom, and NSN Network Services of Avon, Colorado, sold in 1997 and now known as Clear Channel Satellite, established Africa's first native TCP/IP high-speed satellite Internet services. The data connection was originally carried by a C-Band RSCC Russian satellite which connected InfoMail's Kampala offices directly to NSN's MAE-West point of presence using a private network from NSN's leased ground station in New Jersey. InfoCom's first satellite connection was just 64 kbit/s, serving a Sun host computer and twelve US Robotics dial-up modems.

In 1996, a USAID funded project, the Leland Initiative, started work on developing full Internet connectivity for the continent. Guinea, Mozambique, Madagascar and Rwanda gained satellite earth stations in 1997, followed by Ivory Coast and Benin in 1998.

Africa is building an Internet infrastructure. AFRINIC, headquartered in Mauritius, manages IP address allocation for the continent. As in the other Internet regions, there is an operational forum, the Internet Community of Operational Networking Specialists.[132]

There are many programs to provide high-performance transmission plant, and the western and southern coasts have undersea optical cable. High-speed cables join North Africa and the Horn of Africa to intercontinental cable systems. Undersea cable development is slower for East Africa; the original joint effort between New Partnership for Africa's Development (NEPAD) and the East Africa Submarine System (Eassy) has broken off and may become two efforts.[133]

Asia and Oceania
The Asia Pacific Network Information Centre (APNIC), headquartered in Australia, manages IP address allocation for the continent. APNIC sponsors an operational forum, the Asia-Pacific Regional Internet Conference on Operational Technologies (APRICOT).[134]

South Korea's first Internet system, the System Development Network (SDN), began operation on 15 May 1982. SDN was connected to the rest of the world in August 1983 using UUCP (Unix-to-Unix Copy); connected to CSNET in December 1984; and formally connected to the U.S. Internet in 1990.[135] VDSL, a last mile technology developed in the 1990s by NextLevel Communications, connected corporate and consumer copper-based telephone lines to the Internet in South Korea.[136]

In 1991, the People's Republic of China saw its first TCP/IP college network, Tsinghua University's TUNET. The PRC went on to make its first global Internet connection in 1994, between the Beijing Electro-Spectrometer Collaboration and the Stanford Linear Accelerator Center. However, China went on to implement its own digital divide by implementing a country-wide content filter.[137]

Latin America
As with the other regions, the Latin American and Caribbean Internet Addresses Registry (LACNIC) manages the IP address space and other resources for its area. LACNIC, headquartered in Uruguay, operates DNS root, reverse DNS, and other key services.

1990–2003: Rise of the global Internet, Web 1.0
Main articles: History of the World Wide Web and Digital revolution
Initially, as with its predecessor networks, the system that would evolve into the Internet was primarily for government and government body use. Although commercial use was forbidden, the exact definition of commercial use was unclear and subjective. UUCPNet and the X.25 IPSS had no such restrictions, which would eventually see the official barring of UUCPNet use of ARPANET and NSFNET connections. (Some UUCP links still remained connected to these networks, however, as administrators turned a blind eye to their operation.)


Number of Internet hosts worldwide: 1969–present
Source: Internet Systems Consortium.[138]
As a result, during the late 1980s, the first Internet service provider (ISP) companies were formed. Companies like PSINet, UUNET, Netcom, and Portal Software were formed to provide service to the regional research networks and provide alternate network access, UUCP-based email and Usenet News to the public. The first commercial dialup ISP in the United States was The World, which opened in 1989.[139]

In 1992, the U.S. Congress passed the Scientific and Advanced-Technology Act, 42 U.S.C. § 1862(g), which allowed NSF to support access by the research and education communities to computer networks which were not used exclusively for research and education purposes, thus permitting NSFNET to interconnect with commercial networks.[140][141] This caused controversy within the research and education community, who were concerned commercial use of the network might lead to an Internet that was less responsive to their needs, and within the community of commercial network providers, who felt that government subsidies were giving an unfair advantage to some organizations.[142]

By 1990, ARPANET's goals had been fulfilled, new networking technologies had exceeded its original scope, and the project came to a close. New network service providers including PSINet, Alternet, CERFNet, ANS CO+RE, and many others were offering network access to commercial customers. NSFNET was no longer the de facto backbone and exchange point of the Internet. The Commercial Internet eXchange (CIX), Metropolitan Area Exchanges (MAEs), and later Network Access Points (NAPs) were becoming the primary interconnections between many networks. The final restrictions on carrying commercial traffic ended on April 30, 1995, when the National Science Foundation ended its sponsorship of the NSFNET Backbone Service.[143][144] NSF provided initial support for the NAPs and interim support to help the regional research and education networks transition to commercial ISPs. NSF also sponsored the very high speed Backbone Network Service (vBNS) which continued to provide support for the supercomputing centers and research and education in the United States.[145]

An event held on 11 January 1994, The Superhighway Summit at UCLA's Royce Hall, was the "first public conference bringing together all of the major industry, government and academic leaders in the field [and] also began the national dialogue about the Information Superhighway and its implications".[146]

Use in wider society

Stamped envelope of Russian Post issued in 1993 with stamp and graphics dedicated to first Russian underwater digital optic cable laid in 1993 by Rostelecom from Kingisepp to Copenhagen
During the first decade or so of the public Internet, the immense changes it would eventually enable in the 2000s were still nascent. To provide context for this period: mobile cellular devices ("smartphones" and other cellular devices), which today provide near-universal access, were used for business and were not a routine household item owned by parents and children worldwide. Social media in the modern sense had yet to come into existence, laptops were bulky, and most households did not have computers. Data rates were slow and most people lacked means to video or digitize video; media storage was transitioning slowly from analog tape to digital optical discs (to DVD, and to an extent still from floppy disc to CD). Enabling technologies used from the early 2000s such as PHP, modern JavaScript and Java, technologies such as AJAX, HTML 4 (and its emphasis on CSS), and various software frameworks, which enabled and simplified speed of web development, largely awaited invention and their eventual widespread adoption.

The Internet was widely used for mailing lists, emails, e-commerce and early popular online shopping (Amazon and eBay for example), online forums and bulletin boards, and personal websites and blogs, and use was growing rapidly, but by more modern standards the systems used were static and lacked widespread social engagement. It awaited a number of events in the early 2000s to change from a communications technology to gradually develop into a key part of global society's infrastructure.

Typical design elements of these "Web 1.0" era websites included:[147] Static pages instead of dynamic HTML;[148] content served from filesystems instead of relational databases; pages built using Server Side Includes or CGI instead of a web application written in a dynamic programming language; HTML 3.2-era structures such as frames and tables to create page layouts; online guestbooks; overuse of GIF buttons and similar small graphics promoting particular items;[149] and HTML forms sent via email. (Support for server side scripting was rare on shared servers, so the usual feedback mechanism was via email, using mailto forms and the visitor's email program.)[150]

During the period 1997 to 2001, the first speculative investment bubble related to the Internet took place, in which "dot-com" companies (referring to the ".com" top level domain used by businesses) were propelled to exceedingly high valuations as investors rapidly stoked stock values, followed by a market crash; the first dot-com bubble. However this only temporarily slowed enthusiasm and growth, which quickly recovered and continued to grow.

With the call to Web 2.0 following soon afterward, the period of the Internet up to around 2004–2005 was retrospectively named and described by some as Web 1.0.[151]

IPv6
IPv4 uses 32-bit addresses, which limits the address space to 2^32 addresses, i.e. 4,294,967,296 addresses.[86] The last available blocks of IPv4 addresses were assigned in January 2011.[152] IPv4 is being replaced by its successor, called "IPv6", which uses 128-bit addresses, providing 2^128 addresses, i.e. 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses.[153] This is a vastly increased address space. The shift to IPv6 is expected to take many years, decades, or perhaps longer, to complete, since there were four billion machines with IPv4 when the shift began.[152]
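
The size of the jump from 32-bit to 128-bit addresses is easy to compute directly; the following lines simply reproduce the figures quoted above.

```python
# Address space of IPv4 (32-bit) versus IPv6 (128-bit).
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128
print(ipv4_addresses)                       # 4294967296
print(ipv6_addresses)                       # 340282366920938463463374607431768211456
print(ipv6_addresses // ipv4_addresses)     # 2**96 times as many addresses
```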

2004–present: Web 2.0, global ubiquity, social media
Main articles: Web 2.0 and Responsive web design
The changes that would propel the Internet into its place as a social system took place during a relatively short period of no more than five years, from around 2005 to 2010. They included:

The call to "Web 2.0" in 2004 (first suggested in 1999),
Accelerating adoption and commoditization among households of, and familiarity with, the necessary hardware (such as computers).
Accelerating storage technology and data access speeds – hard drives emerged, took over from far smaller, slower floppy discs, and grew from megabytes to gigabytes (and by around 2010, terabytes), RAM from hundreds of kilobytes to gigabytes as typical amounts on a system, and Ethernet, the enabling technology for TCP/IP, moved from common speeds of kilobits to tens of megabits per second, to gigabits per second.
High-speed Internet and wider coverage of data connections, at lower prices, allowing larger traffic rates, more reliable and simpler traffic, and traffic from more locations,
The gradually accelerating perception of the ability of computers to create new means and approaches to communication, the emergence of social media and websites such as Twitter and Facebook to their later prominence, and global collaborations such as Wikipedia (which existed before but gained prominence as a result),
The mobile revolution, which provided access to the Internet to much of human society of all ages, in their daily lives, and allowed them to share, discuss, and continually update, inquire, and respond.
Non-volatile RAM rapidly grew in size and reliability, and decreased in price, becoming a commodity capable of enabling high levels of computing activity on these small handheld devices as well as solid-state drives (SSD).
An emphasis on power efficient processor and device design, rather than purely high processing power; one of the beneficiaries of this was ARM, a British company which had focused since the 1980s on powerful but low cost simple microprocessors. ARM architecture rapidly gained dominance in the market for mobile and embedded devices.
The term "Web 2.0" describes websites that emphasize user-generated content (including user-to-user interaction), usability, and interoperability. It first appeared in a January 1999 article called "Fragmented Future" written by Darcy DiNucci, a consultant on electronic information design, where she wrote:[154][155][156][157]

"The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will [...] appear on your computer screen, [...] on your TV set [...] your car dashboard [...] your cell phone [...] hand-held game machines [...] maybe even your microwave oven."
The term resurfaced during 2002–2004,[158][159][160][161] and gained prominence in late 2004 following presentations by Tim O'Reilly and Dale Dougherty at the first Web 2.0 Conference. In their opening remarks, John Battelle and Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you".[162] They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value.

Web 2.0 does not refer to an update to any technical specification, but rather to cumulative changes in the way Web pages are made and used. Web 2.0 describes an approach, in which sites focus substantially upon allowing users to interact and collaborate with each other in a social media dialogue as creators of user-generated content in a virtual community, in contrast to Web sites where people are limited to the passive viewing of content. Examples of Web 2.0 include social networking services, blogs, wikis, folksonomies, video sharing sites, hosted services, Web applications, and mashups.[163] Terry Flew, in his 3rd Edition of New Media described what he believed to characterize the differences between Web 1.0 and Web 2.0:

"[The] move from personal websites to blogs and blog site aggregation, from publishing to participation, from web content as the outcome of large up-front investment to an ongoing and interactive process, and from content management systems to links based on tagging (folksonomy)".[164]
This era saw several household names gain prominence through their community-oriented operation – YouTube, Twitter, Facebook, Reddit and Wikipedia being some examples.

The mobile revolution
Main articles: History of mobile phones and Mobile Web
The process of change that generally coincided with "Web 2.0" was itself greatly accelerated and transformed only a short time later by the increasing growth in mobile devices. This mobile revolution meant that computers in the form of smartphones became something many people used, took with them everywhere, communicated with, used for photographs and videos they instantly shared or to shop or seek information "on the move" – and used socially, as opposed to items on a desk at home or just used for work.[citation needed]

Location-based services, services using location and other sensor information, and crowdsourcing (frequently but not always location-based) became common, with posts tagged by location, or websites and services becoming location-aware. Mobile-targeted websites (such as "m.website.com") became common, designed especially for the new devices used. Netbooks, ultrabooks, widespread 4G and Wi-Fi, and mobile chips capable of running at nearly the power of desktops from not many years before on far lower power usage, became enablers of this stage of Internet development, and the term "app" emerged (short for "application program"), as did the "app store".

This "mobile revolution" has allowed for people to have a nearly unlimited amount of information at their fingertips. With the ability to access the internet from cell phones came a change in the way we consume media. In fact, looking at media consumption statistics, over half of media consumption between those aged 18 and 34 were using a smartphone.[165]

Networking in outer space
Main article: Interplanetary Internet
The first Internet link into low Earth orbit was established on January 22, 2010, when astronaut T. J. Creamer posted the first unassisted update to his Twitter account from the International Space Station, marking the extension of the Internet into space.[166] (Astronauts at the ISS had used email and Twitter before, but these messages had been relayed to the ground through a NASA data link before being posted by a human proxy.) This personal Web access, which NASA calls the Crew Support LAN, uses the space station's high-speed Ku band microwave link. To surf the Web, astronauts can use a station laptop computer to control a desktop computer on Earth, and they can talk to their families and friends on Earth using Voice over IP equipment.[167]

Communication with spacecraft beyond Earth orbit has traditionally been over point-to-point links through the Deep Space Network. Each such data link must be manually scheduled and configured. In the late 1990s NASA and Google began working on a new network protocol, Delay-tolerant networking (DTN), which automates this process, allows networking of spaceborne transmission nodes, and takes into account the fact that spacecraft can temporarily lose contact because they move behind the Moon or planets, or because space weather disrupts the connection. Under such conditions, DTN retransmits data packages instead of dropping them, as the standard TCP/IP Internet Protocol does. NASA conducted the first field test of what it calls the "deep space internet" in November 2008.[168] Testing of DTN-based communications between the International Space Station and Earth (now termed Disruption-Tolerant Networking) has been ongoing since March 2009, and was scheduled to continue until March 2014.[169]

This network technology is supposed to ultimately enable missions that involve multiple spacecraft where reliable inter-vessel communication might take precedence over vessel-to-Earth downlinks. According to a February 2011 statement by Google's Vint Cerf, the so-called "Bundle protocols" have been uploaded to NASA's EPOXI mission spacecraft (which is in orbit around the Sun) and communication with Earth has been tested at a distance of approximately 80 light seconds.[170]
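
To make the store-and-forward idea concrete, here is a toy sketch in Python. It is not the actual Bundle Protocol; the node and bundle names are invented for illustration. The point is simply that when a link is disrupted, data is held in custody and retransmitted later rather than dropped, as described above.

    # Toy sketch of delay-tolerant, store-and-forward delivery (illustrative only;
    # not the Bundle Protocol). A bundle is stored when the link is down and
    # retransmitted when contact is restored, instead of being dropped.
    from collections import deque

    class DtnNode:
        def __init__(self, name):
            self.name = name
            self.store = deque()            # custody store for pending bundles

        def send(self, bundle, link_up):
            if link_up:
                print(f"{self.name}: forwarded {bundle!r}")
            else:
                self.store.append(bundle)   # keep custody instead of dropping
                print(f"{self.name}: link down, stored {bundle!r}")

        def retry(self, link_up):
            while link_up and self.store:
                print(f"{self.name}: retransmitted {self.store.popleft()!r}")

    node = DtnNode("orbiter")
    node.send("telemetry-001", link_up=False)   # e.g. spacecraft behind the Moon: store
    node.retry(link_up=True)                    # contact restored: retransmit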

Internet governance
Main article: Internet governance
As a globally distributed network of voluntarily interconnected autonomous networks, the Internet operates without a central governing body. Each constituent network chooses the technologies and protocols it deploys from the technical standards that are developed by the Internet Engineering Task Force (IETF).[171] However, successful interoperation of many networks requires certain parameters that must be common throughout the network. For managing such parameters, the Internet Assigned Numbers Authority (IANA) oversees the allocation and assignment of various technical identifiers.[172] In addition, the Internet Corporation for Assigned Names and Numbers (ICANN) provides oversight and coordination for the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System.

NIC, InterNIC, IANA, and ICANN
The IANA function was originally performed by USC Information Sciences Institute (ISI), and it delegated portions of this responsibility with respect to numeric network and autonomous system identifiers to the Network Information Center (NIC) at Stanford Research Institute (SRI International) in Menlo Park, California. ISI's Jonathan Postel managed the IANA, served as RFC Editor and performed other key roles until his premature death in 1998.[173]

As the early ARPANET grew, hosts were referred to by names, and a HOSTS.TXT file would be distributed from SRI International to each host on the network. As the network grew, this became cumbersome. A technical solution came in the form of the Domain Name System, created by ISI's Paul Mockapetris in 1983.[174] The Defense Data Network—Network Information Center (DDN-NIC) at SRI handled all registration services, including the top-level domains (TLDs) of .mil, .gov, .edu, .org, .net, .com and .us, root nameserver administration and Internet number assignments under a United States Department of Defense contract.[172] In 1991, the Defense Information Systems Agency (DISA) awarded the administration and maintenance of DDN-NIC (managed by SRI up until this point) to Government Systems, Inc., who subcontracted it to the small private-sector Network Solutions, Inc.[175][176]
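
To illustrate the shift this paragraph describes, the sketch below contrasts a flat, HOSTS.TXT-style lookup table with a DNS query made through the operating system's resolver. The host names and addresses in the table are invented for illustration and are not historical entries.

    # Contrast between a static HOSTS.TXT-style table and a DNS lookup.
    import socket

    HOSTS_TXT = {                       # hypothetical flat name table; every host
        "sri-nic": "10.0.0.73",         # needed its own up-to-date copy of this file
        "ucla-host": "10.3.0.1",
    }

    def lookup_hosts(name):
        return HOSTS_TXT.get(name)      # a single, centrally distributed table

    def lookup_dns(name):
        # the Domain Name System delegates naming to a distributed hierarchy,
        # queried on demand through the resolver
        return socket.getaddrinfo(name, None)[0][4][0]

    print(lookup_hosts("sri-nic"))          # 10.0.0.73
    print(lookup_dns("www.example.com"))    # resolved by DNS at query time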

The increasing cultural diversity of the Internet also posed administrative challenges for centralized management of the IP addresses. In October 1992, the Internet Engineering Task Force (IETF) published RFC 1366,[177] which described the "growth of the Internet and its increasing globalization" and set out the basis for an evolution of the IP registry process, based on a regionally distributed registry model. This document stressed the need for a single Internet number registry to exist in each geographical region of the world (which would be of "continental dimensions"). Registries would be "unbiased and widely recognized by network providers and subscribers" within their region. The RIPE Network Coordination Centre (RIPE NCC) was established as the first RIR in May 1992. The second RIR, the Asia Pacific Network Information Centre (APNIC), was established in Tokyo in 1993, as a pilot project of the Asia Pacific Networking Group.[178]

Since at this point in history most of the growth on the Internet was coming from non-military sources, it was decided that the Department of Defense would no longer fund registration services outside of the .mil TLD. In 1993 the U.S. National Science Foundation, after a competitive bidding process in 1992, created the InterNIC to manage the allocation of addresses and the address databases, and awarded the contract to three organizations. Registration Services would be provided by Network Solutions; Directory and Database Services would be provided by AT&T; and Information Services would be provided by General Atomics.[179]

Over time, after consultation with the IANA, the IETF, RIPE NCC, APNIC, and the Federal Networking Council (FNC), the decision was made to separate the management of domain names from the management of IP numbers.[178] Following the examples of RIPE NCC and APNIC, it was recommended that management of IP address space then administered by the InterNIC should be under the control of those that use it, specifically the ISPs, end-user organizations, corporate entities, universities, and individuals. As a result, the American Registry for Internet Numbers (ARIN) was established in December 1997 as an independent, not-for-profit corporation by direction of the National Science Foundation and became the third Regional Internet Registry.[180]

In 1998, both the IANA and remaining DNS-related InterNIC functions were reorganized under the control of ICANN, a California non-profit corporation contracted by the United States Department of Commerce to manage a number of Internet-related tasks. As these tasks involved technical coordination for two principal Internet name spaces (DNS names and IP addresses) created by the IETF, ICANN also signed a memorandum of understanding with the IAB to define the technical work to be carried out by the Internet Assigned Numbers Authority.[181] The management of Internet address space remained with the regional Internet registries, which collectively were defined as a supporting organization within the ICANN structure.[182] ICANN provides central coordination for the DNS system, including policy coordination for the split registry / registrar system, with competition among registry service providers to serve each top-level-domain and multiple competing registrars offering DNS services to end-users.

Internet Engineering Task Force
The Internet Engineering Task Force (IETF) is the largest and most visible of several loosely related ad-hoc groups that provide technical direction for the Internet, including the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF).

The IETF is a loosely self-organized group of international volunteers who contribute to the engineering and evolution of Internet technologies. It is the principal body engaged in the development of new Internet standard specifications. Much of the work of the IETF is organized into Working Groups. Standardization efforts of the Working Groups are often adopted by the Internet community, but the IETF does not control or patrol the Internet.[183][184]

The IETF grew out of quarterly meetings with U.S. government-funded researchers, starting in January 1986. Non-government representatives were invited starting with the fourth IETF meeting in October 1986. The concept of Working Groups was introduced at the fifth meeting in February 1987. The seventh meeting in July 1987 was the first meeting with more than one hundred attendees. In 1992, the Internet Society, a professional membership society, was formed and the IETF began to operate under it as an independent international standards body. The first IETF meeting outside of the United States was held in Amsterdam, the Netherlands, in July 1993. Today, the IETF meets three times per year and attendance has been as high as ca. 2,000 participants. Typically one in three IETF meetings is held in Europe or Asia. The number of non-US attendees is typically ca. 50%, even at meetings held in the United States.[183]

The IETF is not a legal entity, has no governing board, no members, and no dues. The closest status resembling membership is being on an IETF or Working Group mailing list. IETF volunteers come from all over the world and from many different parts of the Internet community. The IETF works closely with and under the supervision of the Internet Engineering Steering Group (IESG)[185] and the Internet Architecture Board (IAB).[186] The Internet Research Task Force (IRTF) and the Internet Research Steering Group (IRSG), peer activities to the IETF and IESG under the general supervision of the IAB, focus on longer-term research issues.[183][187]

Request for Comments
Request for Comments (RFCs) are the main documentation for the work of the IAB, IESG, IETF, and IRTF. RFC 1, "Host Software", was written by Steve Crocker at UCLA in April 1969, well before the IETF was created. Originally they were technical memos documenting aspects of ARPANET development and were edited by Jon Postel, the first RFC Editor.[183][188]

RFCs cover a wide range of information from proposed standards, draft standards, full standards, best practices, experimental protocols, history, and other informational topics.[189] RFCs can be written by individuals or informal groups of individuals, but many are the product of a more formal Working Group. Drafts are submitted to the IESG either by individuals or by the Working Group Chair. An RFC Editor, appointed by the IAB, separate from IANA, and working in conjunction with the IESG, receives drafts from the IESG and edits, formats, and publishes them. Once an RFC is published, it is never revised. If the standard it describes changes or its information becomes obsolete, the revised standard or updated information will be re-published as a new RFC that "obsoletes" the original.[183][188]

The Internet Society
The Internet Society (ISOC) is an international, nonprofit organization founded during 1992 "to assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". With offices near Washington, DC, USA, and in Geneva, Switzerland, ISOC has a membership base comprising more than 80 organizational and more than 50,000 individual members. Members also form "chapters" based on either common geographical location or special interests. There are currently more than 90 chapters around the world.[190]

ISOC provides financial and organizational support to, and promotes the work of, the standards-setting bodies for which it is the organizational home: the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet Research Task Force (IRTF). ISOC also promotes understanding and appreciation of the Internet model of open, transparent processes and consensus-based decision-making.[191]

Globalization and Internet governance in the 21st century
Since the 1990s, the Internet's governance and organization have been of global importance to governments, commerce, civil society, and individuals. The organizations which held control of certain technical aspects of the Internet were the successors of the old ARPANET oversight and the current decision-makers in the day-to-day technical aspects of the network. While recognized as the administrators of certain aspects of the Internet, their roles and their decision-making authority are limited and subject to increasing international scrutiny and increasing objections. These objections led ICANN to remove itself from relationships with first the University of Southern California in 2000,[192] and in September 2009 to gain autonomy from the US government through the ending of its longstanding agreements, although some contractual obligations with the U.S. Department of Commerce continued.[193][194][195] Finally, on October 1, 2016, ICANN ended its contract with the United States Department of Commerce National Telecommunications and Information Administration (NTIA), allowing oversight to pass to the global Internet community.[196]

The IETF, with financial and organizational support from the Internet Society, continues to serve as the Internet's ad-hoc standards body and issues Requests for Comments.

In November 2005, the World Summit on the Information Society, held in Tunis, called for an Internet Governance Forum (IGF) to be convened by the United Nations Secretary-General. The IGF opened an ongoing, non-binding conversation among stakeholders representing governments, the private sector, civil society, and the technical and academic communities about the future of Internet governance. The first IGF meeting was held in October/November 2006 with follow-up meetings annually thereafter.[197] Since WSIS, the term "Internet governance" has been broadened beyond narrow technical concerns to include a wider range of Internet-related policy issues.[198][199]

Tim Berners-Lee, inventor of the web, was becoming concerned about threats to the web's future and in November 2009 at the IGF in Washington DC launched the World Wide Web Foundation (WWWF) to campaign to make the web a safe and empowering tool for the good of humanity with access to all.[200][201] In November 2019 at the IGF in Berlin, Berners-Lee and the WWWF went on to launch the Contract for the Web, a campaign initiative to persuade governments, companies and citizens to commit to nine principles to stop "misuse" with the warning "If we don't act now - and act together - to prevent the web being misused by those who want to exploit, divide and undermine, we are at risk of squandering" (its potential for good).[202]

Politicization of the Internet
Due to its prominence and immediacy as an effective means of mass communication, the Internet has also become more politicized as it has grown. This has led, in turn, to discourses and activities that would once have taken place in other ways migrating to being mediated by the Internet.

Examples include political activities such as public protest and canvassing of support and votes, but also:

The spreading of ideas and opinions;
Recruitment of followers, and "coming together" of members of the public, for ideas, products, and causes;
Providing, widely distributing, and sharing information that might be deemed sensitive or that relates to whistleblowing (and efforts by specific countries to prevent this by censorship);
Criminal activity and terrorism (and resulting law enforcement use, together with its facilitation by mass surveillance);
Politically motivated fake news.


When was the Internet first available to the public?

On April 30, 1993, four years after publishing a proposal for “an idea of linked information systems,” computer scientist Tim Berners-Lee released the source code for the world's first web browser and editor.


When did Internet start in homes?

The earliest versions of Wi-Fi were implemented in the mid-1990s, but it wasn't until Apple included the technology in the iBook laptop in 1999, and in other models in the early 2000s, that it really started to take off.

When was the Internet invented?

On 29 October 1969, a computer at Stanford Research Institute (SRI) and one at the University of California, Los Angeles (UCLA), United States, were connected in the first network to use packet switching: the US Defense Department's Advanced Research Projects Agency Network, or ARPANET.

Did the Internet exist in the 70s?
The 1970s were also notable for the birth of ARPANET, the precursor to the Internet, which was first deployed in 1969 and grew throughout the decade as additional hosts were added at various universities and government institutions.

The evolution of the Web

From here on, things began developing rapidly for the Web. The first image was uploaded in 1992, with Berners-Lee choosing a picture of French parodic rock group Les Horribles Cernettes.

In 1993, it was announced by CERN that the World Wide Web was free for everyone to use and develop, with no fees payable – a key factor in the transformational impact it would soon have on the world.

While a number of browser applications were developed during the first two years of the Web, it was Mosaic which arguably had the most impact. It was launched in 1993 and by the end of that year was available for Unix, the Commodore Amiga, Windows and Mac OS. The first browser to be freely available and accessible to the public, it inspired the birth of the first commercial browser, Netscape Navigator, while Mosaic’s technology went on to form the basis of Microsoft’s Internet Explorer.

The growth of easy-to-use Web browsers coincided with the growth of the commercial ISP business, with companies like CompuServe bringing increasing numbers of people from outside the scientific community on to the Web – and that was the start of the Web we know today.

What was initially a network of static HTML documents has become a constantly changing and evolving information organism, powered by a wide range of technologies, from server-side scripting languages like PHP and ASP that can display data dynamically from databases, to streaming media and pages that can be updated in real time. Plugins like Flash have expanded our expectations of what the Web can offer, while HTML itself has evolved to the point where its latest version can handle video natively.

The Web has become a part of our everyday lives – something we access at home, on the move, on our phones and on TV. It’s changed the way we communicate and has been a key factor in the way the Internet has transformed the global economy and societies around the world. Sir Tim Berners-Lee has earned his knighthood a thousand times over, and the decision of CERN to make the Web completely open has been perhaps its greatest gift to the world.

The future of the Web
So, where does the Web go from here? Where will it be in twenty more years? The Semantic Web will see metadata, designed to be read by machines rather than humans, become a more important part of the online experience. Tim Berners-Lee coined this term, describing it as "a web of data that can be processed directly and indirectly by machines" – a 'giant global graph' of linked data which will allow apps to automatically create new meaning from all the information out there.

This 14-minute video by Kate Ray is a great introduction to the concept of the Semantic Web.

Meanwhile, while not strictly 'the Web', the Internet of Things will allow physical objects to transmit data about themselves and their surroundings, bringing more information about the real world into the online realm. Imagine getting precise, live traffic data from all the local roads; trains that tell your smartphone that they're full before they arrive; flowers that email you when they need watering; maybe even implants in your body that give you real-time updates about your health that feed into a secure online 'locker' of your personal data. All this and more is possible with the Internet of Things, helping to transform what we expect from the Web and the Internet.

We can't accurately predict everything the future holds for the Web, but whatever happens, it won't stand still. Here's to the next twenty years.

Happy twentieth birthday, World Wide Web!
