Network Layer


Introduction:

The network layer is responsible for taking the packets handed up by the data link layer and delivering them to their intended destinations based on the addresses contained in each packet. It locates the destination by using logical addresses, such as IP addresses.

Network Layer Design Issues:

The following are the main design issues of the network layer:

1. Store and forward packet switching:

The major components of the system are the carrier's equipment (routers), shown inside the shaded oval, and the customers' equipment, shown outside the oval. Host H1 is directly connected to one of the carrier's routers, A, by a leased line. In contrast, H2 is on a LAN with a router F owned and operated by the customer.

This router also has a leased line to the carrier's equipment. We have shown F as being outside the oval because it does not belong to the carrier, but in terms of construction, software and protocols, it is probably no different from the carrier's routers. A host with a packet to send transmits it to the nearest router, either on its own LAN or over a point-to-point link to the carrier. The packet is stored there until it has fully arrived and has been checked, and is then forwarded to the next router along the path until it reaches the destination host. This mechanism is store-and-forward packet switching.

2. Services Provided to the Transport Layer:

         The network layer provides services to the transport layer at the network layer/transport layer interface. The network layer services have been designed with the following goals -

1)      The services should be independent of the router technology.

2)      The transport layer should be shielded from the number, type and topology of the routers present.

3)      The network addresses made available to the transport layer should use a uniform numbering plan, even across local area networks and wide area networks.

Given these goals the designers of the network layer have a lot of freedom in writing detailed specifications of the services to be offered to the transport layer.

3. Implementation of Connectionless Services:

Two different organizations are possible, depending on the type of service offered. If connectionless service is offered, packets are injected into the subnet individually and routed independently of each other. No advance setup is needed. In this context, the packets are frequently called datagrams and the subnet is called a datagram subnet. If connection-oriented service is used, a path from the source router to the destination router must be established before any data packets can be sent. This connection is called a virtual circuit, in analogy with the physical circuits set up by the telephone system, and the subnet is called a virtual circuit subnet.

Let us now see how a datagram subnet works. Suppose that process P1 in the following figure has a long message for P2. It hands the message to the transport layer with instructions to deliver it to process P2 on host H2. The transport layer code runs on H1, typically within the operating system. It prepends a transport header to the front of the message and hands the result to the network layer, probably just another procedure within the operating system.

Let us assume that the message is four times longer than the maximum packet size, so the network layer has to break it into four packets, 1, 2, 3 and 4, and send each of them in turn to router A using some point-to-point protocol.
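
The forwarding behaviour of a datagram subnet can be illustrated with a small sketch. The router names, destination labels and table entries below are hypothetical, chosen only to echo the A-F topology of the figure; the point is simply that each packet is looked up independently in a per-router table.

# Sketch of per-packet forwarding in a datagram subnet (hypothetical topology).
# Each router keeps a table mapping a destination to the preferred outgoing line;
# every packet is looked up independently, so later packets may take a different
# route if the table changes between lookups.

ROUTING_TABLE_A = {          # router A's table: destination -> outgoing line
    "H1": "H1",              # directly connected host
    "H2": "B",               # reach H2 via neighbor B (could later change, say to C)
    "F":  "B",
}

def forward(router_table, packet):
    """Return the outgoing line for one packet, decided independently of all others."""
    return router_table[packet["dst"]]

message = "a long message for P2"
packets = [{"id": i, "dst": "H2", "data": message[i::4]} for i in range(4)]  # 4 datagrams

for p in packets:
    line = forward(ROUTING_TABLE_A, p)
    print(f"packet {p['id']} -> forwarded on line to {line}")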

4. Implementation of Connection-oriented Services:

For connection-oriented service, we need a virtual circuit subnet. The idea behind virtual circuits is to avoid having to choose a new route for every packet sent. Instead, when a connection is established, a route from the source machine to the destination machine is chosen as part of the connection setup and stored in tables inside the routers. That route is used for all traffic flowing over the connection, exactly the way the telephone system works. When the connection is released, the virtual circuit is also terminated. With connection-oriented service, each packet carries an identifier telling which virtual circuit it belongs to. As an example, consider the situation in the figure below. Here, host H1 has established connection 1 with host H2.


 


 

   Figure: Routing within a virtual circuit subnet.

Now let us consider what happens if H3 also wants to establish a connection to H2. It chooses connection identifier 1 and tells the subnet to establish the virtual circuit. This leads to the second row in the tables.
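
A minimal sketch of the per-router virtual circuit tables described above; the line names and VC numbers are hypothetical. Each entry maps (incoming line, incoming VC number) to (outgoing line, outgoing VC number), so that a clash such as H1 and H3 both choosing identifier 1 can be resolved by relabelling on the outgoing side.

# Sketch of a virtual circuit table at one router (hypothetical lines and VC numbers).
# Key: (incoming line, incoming VC number); value: (outgoing line, outgoing VC number).
# Storing a separate outgoing number lets the router relabel a connection, e.g. when
# two different hosts happen to pick the same identifier 1.

VC_TABLE_A = {
    ("H1", 1): ("C", 1),   # H1's connection 1 to H2, forwarded on line C as VC 1
    ("H3", 1): ("C", 2),   # H3 also chose identifier 1; relabelled to 2 on line C
}

def switch(vc_table, in_line, packet):
    """Forward a packet that carries only its short VC number, not full addresses."""
    out_line, out_vc = vc_table[(in_line, packet["vc"])]
    packet["vc"] = out_vc          # rewrite the VC number for the next hop
    return out_line, packet

line, pkt = switch(VC_TABLE_A, "H3", {"vc": 1, "data": "hello H2"})
print(line, pkt)                   # -> C {'vc': 2, 'data': 'hello H2'}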

 

5. Comparison of Virtual Circuit and Datagram Subnets:

Both virtual circuits and datagrams have their supporters and their detractors. We will now attempt to summarize the arguments both ways. The major issues are listed in the figure below, although purists could probably find a counterexample for everything in it.

 

Issue                     | Datagram subnet                                               | Virtual circuit subnet
Circuit setup             | Not needed                                                    | Required
Addressing                | Each packet contains the full source and destination address | Each packet contains a small VC number
State information         | Routers do not hold state information about connections      | Each VC requires router table space per connection
Routing                   | Each packet is routed independently                           | Route chosen when VC is set up; all packets follow it
Effect of router failures | None, except for packets lost during the crash                | All VCs that passed through the failed router are terminated
Quality of service        | Difficult                                                     | Easy if enough resources can be allocated in advance for each VC
Congestion control        | Difficult                                                     | Easy if enough resources can be allocated in advance for each VC

Figure: Comparison of datagram and virtual circuit subnets

Routing Algorithms:

The main function of the network layer is routing packets from the source machine to the destination machine. In most subnets, packets will require multiple hops to make the journey. The only notable exception is for broadcast networks, but even here routing is an issue if the source and destination are not on the same network. The algorithms that choose the routes and the data structures that they use are a major area of network layer design.

The routing algorithm is that part of the network layer software responsible for deciding which output line an incoming packet should be transmitted on. If the subnet uses datagrams internally, this decision must be made anew for every arriving data packet, since the best route may have changed since last time. If the subnet uses virtual circuits internally, such decisions are made only once per session, when the virtual circuit is set up.

Routing algorithms can be grouped into two classes:

1)      Non-adaptive: These are also called static routing algorithms. They do not base their routing decisions on measurements or estimates of the current traffic. Instead, the choice of the route to use to get from I to J is computed in advance, off-line, and downloaded to the routers when the network is booted.

2)      Adaptive: These are also called dynamic routing algorithms. In contrast, they change their routing decisions to reflect changes in the network and usually the traffic as well. Adaptive algorithms differ in where they get their information, when they change the routes, and what metric is used for optimization.

 

The Optimality Principle:

The optimality principle states that if router J is on the optimal path from router I to router K, then the optimal path from J to K also falls along the same route. As a consequence of this principle, we can see that the set of optimal routes from all sources to a given destination forms a tree rooted at the destination. Such a tree is called a sink tree.

 


Shortest Path Routing:

The goal of shortest path routing is to find a path between two nodes that has the lowest total cost, where the total cost of a path is the sum of the arc costs in that path. Shortest path routing is widely used in many forms because it is simple and easy to understand. The idea is to build a graph of the subnet, with each node of the graph representing a router and each arc representing a communication line.


 

To choose a route between a given pair of routers, the algorithm just finds the shortest path between them on the graph. The shortest path concept requires some way of measuring path length. Different metrics, such as the number of hops, the geographical distance, or the mean queuing and transmission delay, can be used. In the most general case, the labels on the arcs could be computed as a function of the distance, bandwidth, average traffic, communication cost, mean queue length, measured delay and other factors.
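
The shortest path computation itself can be sketched with Dijkstra's algorithm on a small, made-up graph; the router names and arc costs below are purely illustrative, and the arc labels could just as well encode delay, hops or any other metric.

import heapq

def dijkstra(graph, source, destination):
    """Return (cost, path) of the shortest path; graph maps node -> {neighbor: arc cost}."""
    queue = [(0, source, [source])]          # (cost so far, current node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, arc_cost in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + arc_cost, neighbor, path + [neighbor]))
    return float("inf"), []                  # destination unreachable

# Hypothetical subnet: nodes are routers, arc labels are costs.
graph = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 2},
    "D": {"B": 4, "C": 2},
}
print(dijkstra(graph, "A", "D"))             # -> (5, ['A', 'B', 'C', 'D'])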

 

Distance vector routing:

Modern computer networks generally use dynamic routing algorithms rather than static ones. Two dynamic algorithms in particular, distance vector routing and link state routing, are the most popular.

·         Distance vector routing algorithms operate by having each router maintain a table giving the best known distance to each destination and which line to use to get there. These tables are updated by exchanging information with neighbors.

The distance vector routing algorithm is sometimes called by other names including Bellman-Ford or Ford-Fulkerson. It was the original ARPANET routing algorithm and it was also used in the internet under the name RIP and in early versions of DECnet and Novell’s IPX.

In this algorithm, each router maintains a routing table indexed by, and containing one entry for, each router in the subnet. This entry contains two parts: the preferred outgoing line to use for that destination and an estimate of the time or distance to that destination. The metric used might be the number of hops, the time delay in milliseconds, the total number of packets queued along the path, or something similar.
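
One round of the distance vector (Bellman-Ford) update at a single router can be sketched as below; the neighbor names, delays and advertised vectors are hypothetical. The router combines the measured delay to each neighbor with the vector that neighbor advertises and keeps, per destination, the smallest total and the line that achieves it.

def distance_vector_update(delay_to_neighbor, neighbor_vectors, destinations):
    """One update round: per destination, pick the neighbor giving the smallest
    (delay to neighbor + neighbor's claimed distance to destination)."""
    new_table = {}
    for dest in destinations:
        best_line, best_cost = None, float("inf")
        for neighbor, vector in neighbor_vectors.items():
            cost = delay_to_neighbor[neighbor] + vector.get(dest, float("inf"))
            if cost < best_cost:
                best_line, best_cost = neighbor, cost
        new_table[dest] = (best_line, best_cost)   # (preferred outgoing line, estimate)
    return new_table

# Hypothetical measurements at router J: delay (ms) to each neighbor, plus the
# distance vectors those neighbors have advertised.
delay_to_neighbor = {"A": 8, "I": 10, "H": 12, "K": 6}
neighbor_vectors = {
    "A": {"A": 0, "B": 12, "G": 18},
    "I": {"A": 24, "B": 36, "G": 31},
    "H": {"A": 20, "B": 31, "G": 6},
    "K": {"A": 21, "B": 28, "G": 31},
}
print(distance_vector_update(delay_to_neighbor, neighbor_vectors, ["A", "B", "G"]))
# e.g. destination A: min(8+0, 10+24, 12+20, 6+21) = 8, reached via line A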


Link state routing:   

Distance vector routing was used in the ARPANET until 1979, when it was replaced by link state routing. The idea behind link state routing is simple and can be stated as five parts. Each router must:

1.      Discover its neighbors and learn their network addresses.

2.      Measure the delay or the cost to each of its neighbors.

3.      Construct a packet telling all it has just learned.

4.      Send this packet to all other routers.

5.      Compute the shortest path to every other router.

In effect, the complete topology and all delays are experimentally measured and distributed to every router. Then Dijkstra's algorithm can be run at each router to find the shortest path to every other router.
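
The link state packet built in step 3 can be pictured as a small record per router. The fields shown here (sender, sequence number, age, neighbor costs) follow the usual description of link state routing; the router names and costs are made up for illustration.

from dataclasses import dataclass, field

@dataclass
class LinkStatePacket:
    sender: str                                      # router that built the packet
    sequence: int                                    # increases with every new packet from this sender
    age: int                                         # decremented over time so stale packets expire
    neighbors: dict = field(default_factory=dict)    # neighbor -> measured cost

def build_lsp(router, sequence, measured_costs, max_age=60):
    """Step 3: pack everything the router has just learned about its neighborhood."""
    return LinkStatePacket(sender=router, sequence=sequence, age=max_age,
                           neighbors=dict(measured_costs))

# Hypothetical router B with three neighbors and the delays measured in step 2.
lsp = build_lsp("B", sequence=1, measured_costs={"A": 4, "C": 2, "F": 6})
print(lsp)   # flooded to all other routers (step 4); each then runs Dijkstra (step 5)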

 

Broadcast Routing:

For some applications, a computer needs to send messages to many or all other computers. Broadcast routing is used for this purpose. Several different methods have been proposed for doing this.


 

1.      The source sends a separate packet to each of the necessary destinations. One problem with this method is that the source has to have a complete list of the destinations.

2.      Flooding - the problem with this method is that it generates many duplicate packets.

3.      Multidestination routing - in this method each packet contains either a list or a bitmap indicating the desired destinations. When a packet arrives, the router checks all the destinations to determine the set of output lines that will be needed, generates a new copy of the packet for each output line to be used, and includes in each packet only those destinations that use that line. In effect, the destination set is partitioned among the output lines. After a sufficient number of hops, each packet will carry only one destination and can be treated as a normal packet.

4.     Spanning tree routing - this method makes use of a spanning tree of the subnet. If each router knows which of its lines belong to the spanning tree, it can copy an incoming broadcast packet onto all the spanning tree lines except the one it arrived on. Problem: each router has to know the spanning tree.

5.     Reverse path forwarding - when a broadcast packet arrives, the router checks whether it arrived on the line that is normally used for sending packets to the source of the broadcast. If so, it forwards copies of the packet on all other lines; otherwise, it discards the packet as a likely duplicate (see the sketch below).
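
The reverse path forwarding check in method 5 can be sketched as follows. It needs only the router's ordinary unicast routing table; the line names and table contents are hypothetical.

def reverse_path_forward(routing_table, source, arrival_line, all_lines):
    """Forward a broadcast packet only if it arrived on the line normally used to
    reach its source; otherwise treat it as a likely duplicate and discard it."""
    preferred_line_to_source = routing_table[source]
    if arrival_line == preferred_line_to_source:
        return [line for line in all_lines if line != arrival_line]   # flood onward
    return []                                                         # drop duplicate

# Hypothetical router with unicast table "destination -> preferred line".
routing_table = {"H1": "west"}
print(reverse_path_forward(routing_table, "H1", "west", ["north", "south", "east", "west"]))
# -> ['north', 'south', 'east']
print(reverse_path_forward(routing_table, "H1", "east", ["north", "south", "east", "west"]))
# -> [] (duplicate, discarded)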

Multicast Routing:

Sending messages to well-defined groups that are numerically large but small compared to the network as a whole is called multicasting. To do multicasting, group management is required, but that is not the concern of the routers. What is of concern is that when a process joins a group, it informs its host of this fact. It is important that routers know which of their hosts belong to which groups. Either hosts must inform their routers about changes in group membership, or routers must query their hosts periodically.

To do multicast routing, each router computes a spanning tree covering all other routers in the subnet. When a process sends a multicast packet to a group, the first router examines its spanning tree and prunes it, removing all lines that do not lead to hosts that are members of the group. The simplest way of pruning the spanning tree is possible with link state routing, where each router is aware of the complete subnet topology, including which hosts belong to which groups. Then the spanning tree can be pruned by starting at the end of each path and working toward the root, removing all routers that do not belong to the group.
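
A small sketch of this pruning idea: branches of the spanning tree that lead to no group member are cut away, leaving a multicast tree. The tree, the router names and the set of group members below are all hypothetical.

def prune(tree, node, group_members):
    """Return the subtree rooted at node, keeping only branches that lead to at least
    one group member; tree maps node -> list of children."""
    kept_children = [prune(tree, child, group_members) for child in tree.get(node, [])]
    kept_children = [c for c in kept_children if c is not None]
    if kept_children or node in group_members:
        return (node, kept_children)
    return None            # no member below this point: remove the whole branch

# Hypothetical spanning tree rooted at the source's router, with group members E and F.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
print(prune(tree, "A", group_members={"E", "F"}))
# -> ('A', [('B', [('E', [])]), ('C', [('F', [])])])   D's branch has been pruned away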

A different pruning strategy is followed with distance vector routing, based on the reverse path forwarding algorithm. Whenever a router with no hosts interested in a particular group and no connection to other routers receives a multicast message for that group, it responds with a PRUNE message, telling the sender not to send it any more multicasts for that group. When a router with no group members among its own hosts has received such messages on all its lines, it, too, can respond with a PRUNE message. In this way, the subnet is recursively pruned.

One potential disadvantage of this algorithm is that it scales poorly to large networks. An alternative design uses core-based trees. Here, a single spanning tree per group is computed, with the root (the core) near the middle of the group. To send a multicast message, a host sends it to the core, which then multicasts it along the spanning tree. Although this tree will not be optimal for all sources, the reduction in storage costs from m trees to one tree per group is a major saving.

 

Congestion Control Algorithm:

When too many packets are present in the subnet, performance degrades. This situation is called congestion. Very high traffic is the root cause: when part of the network can no longer cope with an increase in traffic, congestion builds up.


 

Other factors, such as insufficient bandwidth, poor configuration and slow routers, can also lead to congestion. There are two basic approaches to dealing with it:

1. Open loop: Try to prevent congestion occurring by good design.

2. Closed loop: Monitor the system to detect congestion, pass the information to where action can be taken, and adjust system operation to correct the problem (detect, feedback and correct).

 

General Principles of Congestion Control:

Many problems in complex systems, such as computer networks, can be analyzed from the point of view of control theory. This viewpoint leads to dividing all solutions into two groups: open loop and closed loop. In essence, open loop solutions attempt to solve the problem through good design, making sure congestion does not happen in the first place. Once the system is running, no mid-course corrections are made. The tools for doing open loop control include deciding when to accept new traffic, deciding which packets to drop, and making scheduling decisions at various points in the network. All of these have in common the fact that the decisions are made without regard to the current state of the network. In contrast, closed loop solutions are based on the concept of a feedback loop.

This approach has three parts when applied to congestion control:

1)      Monitor the system to detect when and where congestion occurs.

2)      Pass this information to places where action can be taken.

3)      Adjust system operation to correct the problem.

Several metrics can be used to monitor the subnet for congestion. The main ones are the percentage of packets discarded for lack of buffer space, the average queue length, the number of packets that time out and are retransmitted, the average packet delay, and the standard deviation of packet delay. In all cases, a rising value indicates growing congestion. The second step in the feedback loop is to transfer the information from the point where congestion is detected to the point where something can be done about it.

The most obvious way is for the router detecting the congestion to send a packet to the traffic source (or sources), announcing the problem. Of course, these extra packets increase the load at precisely the moment when more load is the last thing needed, i.e. when the subnet is congested. Fortunately, there are other options. For example, a bit or field can be reserved in every packet for routers to fill in whenever congestion exceeds some threshold. When a router detects this congested state, it fills in the field in all outgoing packets to warn its neighbors.

Another strategy is to have hosts or routers periodically send probe packets to ask explicitly about congestion. This information can then be used to route traffic around problem areas. Some radio stations have helicopters flying over the city to report on road congestion, in the hope that listeners will route their packets (cars) around the trouble spots. In all feedback schemes, the hope is that knowledge of the congestion will cause the hosts to take appropriate action to reduce it. To work correctly, the time scale must be adjusted carefully.

Congestion prevention policies:

These policies aim to minimize congestion in the first place, rather than letting it happen and reacting after the fact. They try to achieve this goal by using appropriate policies at various layers. The table below lists the data link, network and transport layer policies that can affect congestion.

Layer          Policies

Transport      1.      Retransmission policy
               2.      Out-of-order caching policy
               3.      Flow control policy
               4.      Acknowledgement policy
               5.      Timeout determination

Network        1.      Virtual circuits versus datagrams inside the subnet
               2.      Packet queuing and service policy
               3.      Routing algorithm
               4.      Packet lifetime management
               5.      Packet discard policy

Data link      1.      Retransmission policy
               2.      Out-of-order caching policy
               3.      Acknowledgement policy
               4.      Flow control policy
 

The prevention policies above decrease the occurrence of congestion; however, when congestion does occur, it is necessary to take proper actions to stop its rapid growth. These actions may be based on:

·         Monitoring the system to detect when and where congestion occurs. The main metrics of congestion are the percentage of packets discarded due to lack of buffer space, the average queue length, the number of timed-out packets, the average packet delay, and the variation of packet delay. A rise in these metrics indicates growing congestion.

·         Passing the above information to the places where action can be taken, e.g. to the hosts.

·         Adjusting system operation to correct the problem, e.g. making the hosts reduce the rate at which they send packets.

The above approach is similar to a closed loop control system, where feedback signals are used to make decisions affecting the system's behavior.

Congestion control in virtual circuit subnets:

In this section, we describe some approaches to dynamically controlling congestion in virtual circuit subnets:


 

·         Admission control: The idea is simple. Once congestion has been signaled, no more virtual circuits are set up until the problem has gone away. This approach can be improved by still allowing new virtual circuits, but only along routes that avoid the congested routers (a sketch of this idea appears after this list).

·         Resource reservation: The idea here is to reserve enough resources for each virtual circuit during its setup. The resources include buffers, table entries and line bandwidth. Such a reservation guarantees that congestion is avoided; however, it can waste a lot of resources, since they stay reserved for a particular virtual circuit even when that circuit does not need them at all times. A compromise is to make such reservations only when the subnet is congested (e.g. during rush hour), or only for virtual circuits that require a guaranteed quality of service.
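
One way to picture the improved admission control idea, routing new virtual circuits around congested routers, is to drop the congested nodes from the graph before running a shortest path search. Everything below (the graph, the congested set and the costs) is hypothetical.

import heapq

def setup_vc_route(graph, congested, source, destination):
    """Try to set up a new virtual circuit that avoids congested routers entirely.
    Returns the route, or None so the setup request can be refused (admission control)."""
    queue, visited = [(0, source, [source])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return path
        if node in visited or node in congested:
            continue
        visited.add(node)
        for nbr, c in graph[node].items():
            if nbr not in visited and nbr not in congested:
                heapq.heappush(queue, (cost + c, nbr, path + [nbr]))
    return None

graph = {"A": {"B": 1, "D": 3}, "B": {"A": 1, "C": 1}, "C": {"B": 1, "E": 1},
         "D": {"A": 3, "E": 2}, "E": {"C": 1, "D": 2}}
print(setup_vc_route(graph, congested={"B"}, source="A", destination="E"))
# -> ['A', 'D', 'E'] : the congested router B is bypassed; None would mean "refuse setup"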


Congestion control in datagram subnets:

In this approach, each router monitors the utilization of its output lines; whenever this utilization exceeds a certain threshold, the line enters a "warning" state. Each newly arriving packet is checked to see whether its output line is in the warning state. If it is, some action is taken. The action may be one of several alternatives, as follows:
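
The utilization monitoring described above can be sketched as a small class per output line. The exponential smoothing of the samples and the numeric threshold are illustrative assumptions, not values taken from the text.

class OutputLine:
    """Tracks recent utilization of one output line (illustrative constants)."""
    def __init__(self, threshold=0.8, smoothing=0.9):
        self.utilization = 0.0
        self.threshold = threshold      # above this, the line enters the warning state
        self.smoothing = smoothing      # weight given to the past estimate

    def sample(self, instantaneous_use):
        """Fold one sample (1.0 = busy, 0.0 = idle during the interval) into the running
        estimate, so brief bursts do not flip the warning state on and off."""
        self.utilization = (self.smoothing * self.utilization
                            + (1 - self.smoothing) * instantaneous_use)
        return self.utilization

    def in_warning_state(self):
        return self.utilization > self.threshold

line = OutputLine()
for busy in [1.0] * 30:                 # a sustained burst of traffic
    line.sample(busy)
print(round(line.utilization, 2), line.in_warning_state())   # -> 0.96 True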

1.      The warning bit: Here, the warning state is signaled by setting a special bit in the packet's header. When the packet arrives at the destination, the transport entity copies this bit into the next acknowledgement sent back to the source, which then cuts back on its traffic.

2.      Choke packets: In this approach, the router sends a choke packet back to the source host, giving it the destination found in the packet. The original packet is tagged so that it will not generate any more choke packets farther along the path, and is then forwarded in the usual way. When the source host gets the choke packet, it is required to reduce the traffic sent to the specified destination (a sketch of this reaction appears after this list).

3.      Hop-by-hop choke packets: At high speeds or over long distances, sending a choke packet all the way back to the source host does not work well because the reaction is too slow. An alternative approach is to have the choke packet take effect at every hop it passes through, making the routers along the path reduce their traffic immediately and thus relieving congestion much faster. The net effect of this hop-by-hop scheme is to provide quick relief at the point of congestion. It should be noted that the above schemes can also be used in virtual circuit subnets.
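
The reaction of a source host to choke packets (method 2) can be sketched as below. The halving factor and the slow recovery step are illustrative assumptions, not values prescribed by the text.

class Source:
    """Source host reacting to choke packets; the halving factor and the recovery
    step are illustrative assumptions."""
    def __init__(self, initial_rate_pps=100.0):
        self.initial = initial_rate_pps
        self.rate_to = {}                   # destination -> current sending rate (pps)

    def on_choke_packet(self, destination):
        rate = self.rate_to.get(destination, self.initial)
        self.rate_to[destination] = rate / 2        # cut traffic toward that destination

    def on_quiet_period(self, destination):
        if destination in self.rate_to:
            self.rate_to[destination] *= 1.1        # cautiously increase again

src = Source()
src.on_choke_packet("H2")       # a router on the path to H2 signalled congestion
src.on_choke_packet("H2")       # still congested: cut again (100 -> 50 -> 25 pps)
src.on_quiet_period("H2")       # no chokes for a while: recover slowly (-> 27.5 pps)
print(src.rate_to["H2"])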
