How the Internet Works


It had been clear since the 1970s that packet networks would have to be pervasive, in much the same way the telephone system is, if they were going to be useful. As we have described earlier, the Internet/ARPANet began as a US Department of Defense research program intended to reduce the cost of research. By 1975, the ARPANet had nodes all over the US as well as in the UK (University College London) and Norway (the Norwegian Seismic Array; Norway is not exactly known for dangerous earthquakes). Throughout the 1980s, the Internet grew, largely within universities and research institutions. Early in that decade, the DoD partitioned the 'Net and moved all of the military sites to MILNET, where security could be more tightly controlled.

In the US, being attached to the 'Net required a DoD contract. If anyone in a university had a DoD contract, the entire university was connected and everyone there could use the 'Net. This, of course, led to people sharing research results and collaborating on research in everything from literature to particle physics. Clearly, not every university could have a DoD contract. This left faculty and students at those schools at a distinct disadvantage and led to a division between the haves and the have-nots. Consequently, in the late 1980s, the NSF moved to fund "the rest" of the universities in the US, creating NSFNet as part of the Internet. Along with this effort, a plan was devised to transition the operation of NSFNet to the private sector. This led to the hierarchical structure of ISPs described in the textbook, with several major switching centers established, in the US and worldwide, as points where traffic between ISPs could be exchanged.

One major difference between this structure for the Internet and that used for the telephone system is geographical distribution. In the telephone system, the cost of running wires had created (to some extent) a natural monopoly: generally, one phone company served one area or one country, with little or no overlap. (The US is one of the few countries in the world to have private telephone companies. Even in the days before deregulation, there were small phone companies operating on the periphery of the Bell System. GTE was one of the largest; most were small companies serving small towns.)

The Internet is quite different. Many providers will have a presence in the same geographical area. Consequently, one person may get Internet service from his cable company, while his next-door neighbor may have Internet access from the telephone company. This has its good and bad points. It is good in that there is direct competition for customers and lower prices (in theory). However, there is no impetus to provide quality of service.

Let's see why. In the telephone system, there was no local competition: locally there was a monopoly and the customer had one choice. By the same token, because the quality of a phone call depended on every company that might be involved, it was in all of the phone companies' interest to provide good service. From the point of view of the customer, bad service by one reflected badly on all of them. In the Internet, the opposite is the case. If one needs to communicate with someone on another ISP, the ISPs involved have no reason to work together to ensure good quality. The ISP tells the customer that he can't do anything about that other "scoundrel of an ISP," but that if your colleague would simply move to his ISP, it would be possible to ensure quality. Not cooperating is an opportunity to take customers from the competition and acquire them for your own company. Of course, the "scoundrel" is making the same argument to your colleague.

One of the key problems facing major ISPs is redundancy, and it is far from clear that there is an economic reason for them to address this issue. Multiple long-distance lines are expensive, especially if the cables have not yet been laid. In late 2006, an earthquake in the southwest Pacific severed 13 undersea cables. For several days, traffic to the financial centers in Hong Kong, Shanghai, and Singapore was interrupted or congested until it could be re-routed over other cables. The first problem was simply to get traffic flowing again; the second was to restore the previous level of service, which entailed bringing existing fiber optics online. Similarly, several major financial firms in lower Manhattan were quite surprised on 9/11 to find that the redundant lines they had been paying thousands of dollars a month for actually ran through the same switching center in the basement of the World Trade Center. Much of the delay in re-opening the financial markets after 9/11 was due to getting these major firms back online to ensure a fair market.

In some cases, this is simply sloppy engineering. However, there is a real problem here. Not only is it expensive to have redundant lines, but physical separation may be hard to achieve. Companies, including ISPs, must weigh carefully the cost of maintaining excess capacity as redundancy against the losses that may be incurred by not having it. For example, Pacific Gas and Electric was able to respond quickly after the 1989 Loma Prieta earthquake in the San Francisco area because it had always maintained its own telephone system rather than relying on Pacific Bell's lines. In the aftermath, the PacBell system was overwhelmed not only by damage but by customers trying to reach family and friends. PG&E avoided this congestion by using its own internal phone system. For 99% of the time, this parallel system was simply expensive excess capacity, but it may well have paid for itself in the hours and days immediately following the disaster.

The other problem is ensuring that lines are truly redundant. One can easily draw distinct lines on a map, but it may be much harder to keep them separate on (or in) the ground. Practically speaking, communication lines can be laid along only a few kinds of rights-of-way: streets and highways, power lines, water or gas pipelines, railroads, canals, and so on. Establishing switching centers or peering points as described in the book is expensive: buildings must be specially conditioned with environmental controls, physical security, and backup power. Once such a facility is created, there is a natural tendency to maximize its use, which concentrates traffic in precisely the way we want to avoid. In cities, the problems are worse. Other structures may limit the number of paths that can be used, and a network administrator may find that all of the "redundant" lines to his facility must come down the same street for several blocks or must all cross the same bridge. All of these issues must be considered not only when acquiring redundant lines but also, where possible, when locating facilities. A careful assessment of risk is crucial.
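To make that risk assessment concrete, here is a minimal sketch, in Python, of the kind of back-of-the-envelope comparison involved: the expected annual loss from outages on a single line versus the annual cost of leasing a second, physically separate line. All of the figures used (outage probability, outage duration, loss per hour, lease cost) are hypothetical placeholders, not data from the text.

# Rough redundancy cost/benefit sketch. All numbers are illustrative.

def expected_outage_loss(p_outage_per_year, mean_outage_hours, loss_per_hour):
    # Expected annual loss from outages on a single, non-redundant link.
    return p_outage_per_year * mean_outage_hours * loss_per_hour

def redundancy_pays_off(second_line_cost_per_year, p_outage, mean_hours, loss_per_hour):
    # True if the expected loss avoided exceeds the cost of the second line.
    return expected_outage_loss(p_outage, mean_hours, loss_per_hour) > second_line_cost_per_year

# Hypothetical figures: a 5% chance per year of a 48-hour outage costing
# $50,000 per hour, versus $60,000 per year for a second, separate line.
print(expected_outage_loss(0.05, 48, 50_000))          # roughly 120000
print(redundancy_pays_off(60_000, 0.05, 48, 50_000))   # True

Note that the comparison only holds if the second line really is physically separate; if both lines share a conduit, bridge, or switching center, the chance that they fail together is much closer to the single-line case, which is exactly the surprise the lower Manhattan firms encountered.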

In some situations, fixed wireless technologies may be used to augment the wired ones. Wireless links are not subject to the same right-of-way constraints as wires, although new constraints arise in obtaining "air rights." Even here there are problems that must be considered. In large cities, everyone may have the same idea, and one may find that there is no available spectrum or no suitable antenna sites. Then there is the problem of weather: microwave links can suffer considerable degradation from heavy rain or snow.
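As a rough illustration of the rain-fade problem, the sketch below uses the standard power-law form for rain attenuation, gamma = k * R**alpha dB/km, where R is the rain rate in mm/h. The coefficients k and alpha depend on frequency and polarization and are normally taken from published tables (e.g., ITU-R P.838); the coefficient values and link parameters used here are illustrative placeholders only, not figures from the text.

# Rain-fade sketch for a fixed microwave link. The power-law form is
# standard; the coefficients below are placeholders, not published values.

def rain_attenuation_db(rain_rate_mm_per_h, path_km, k, alpha):
    # Total path attenuation in dB: (k * R**alpha) dB/km over path_km.
    return k * rain_rate_mm_per_h ** alpha * path_km

# Hypothetical 5 km link in a 50 mm/h downpour, with placeholder k and alpha.
print(round(rain_attenuation_db(50, 5, k=0.07, alpha=1.1), 1), "dB")  # roughly 26 dB

A fade of this size has to be budgeted into the link design as extra margin, which is one reason heavy-rain regions limit the usable frequencies and path lengths for fixed wireless.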

