Unit IV – Network Layer – Express Learning: Data Communications and Computer Networks

10

Routing and Congestion Control

1. Explain the various design issues of the network layer.

Ans: The network layer provides end-to-end communication between two hosts. There are mainly four design issues in the network layer, which are discussed as follows:

  Network Layer Interface: The network layer can follow two methods, namely, virtual circuit (connection-oriented network service) and datagram (connectionless network service), for sending packets from node to node. In the virtual circuit method, a logical connection is first established between the two communicating users, and the quality of service to be used is decided. All the packets then travel through the established path to reach the destination. On the other hand, the datagram method does not require any fixed path to be set up before transmission. Thus, each packet must contain sufficient information for routing from source to destination so that it need not depend upon previous exchanges. Each packet may follow a different path to reach the destination.
  Routing Issues: It is the responsibility of the network layer to route packets from source to destination. For this, a piece of software, known as routing algorithm, is used which decides where the packets are to be sent. The routing algorithm has to deal with the following issues:
Correctness and simplicity: The network should never go down; individual links or routers may fail, but the network as a whole should continue to function.
Stability: When a router fails, the other routers should detect the change and converge quickly to a new, stable set of routes.
Fairness and optimality: Some aspects such as whether to maximize channel usage or minimize the average delay should be considered while making the routing decisions.
  Congestion: The network layer also deals with congestion issues, which are as follows:
The network should not be flooded with packets more than its capacity; otherwise, the delay increases and the performance degrades. If the situation does not improve then the packets have to be discarded.
If the delay increases, the sender may time out and retransmit packets unnecessarily, making the situation worse.
If congestion is not controlled, the overall performance of the network degrades as the network resources would be used for processing the packets that have actually been discarded.
  Internetworking: Internetworking is used to connect different network technologies together. Certain issues related to internetworking are as follows:
The packets should be able to pass through many different networks.
Different networks may have different frame formats. Thus, the network layer should support multiple frame formats.
The network layer should support both connectionless and connection-oriented networks.

2. Write a short note on IP addresses.

Ans: Each system (computer or router) that connects to the Internet is assigned a specific Internet address. This Internet address, referred to as IP address, uniquely identifies the connection of the system to the Internet. Here, uniquely means that no two systems can be assigned the same IP address at the same time. Moreover, IP addresses are universal which means each system wishing to connect to Internet must accept the addressing system.

Each protocol that defines IP addresses has an address space, that is, the total number of addresses that the protocol can use. If a protocol assigns n-bit addresses to machines, the address space of the protocol will be 2^n. For example, the traditional IP protocol, IPv4, defines 32-bit IP addresses and thus its address space is 2^32. On the other hand, the IPv6 protocol, the new generation of IP, defines 128-bit addresses and thus its address space is 2^128.
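The address-space sizes mentioned above can be checked directly; a minimal Python sketch (the helper name is ours, not part of any standard):

```python
def address_space(bits):
    """Number of distinct addresses an n-bit addressing scheme can express."""
    return 2 ** bits

print(address_space(32))    # IPv4: 4294967296 addresses
print(address_space(128))   # IPv6: an astronomically larger space
```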

3. Briefly describe the notations used to represent IPv4 addresses.

Ans: An IPv4 address is 32 bits in length and can be represented using two notations, namely, binary notation and dotted-decimal notation.

In binary notation, the IP address is represented as 32 bits grouped into four octets. As each octet is one byte, an IPv4 address is sometimes also referred to as a 4-byte address. The problem with this notation is that the binary address is too lengthy and difficult to read. To overcome this problem, dotted-decimal notation is used.

In dotted-decimal notation, each byte of an IPv4 address is represented by a decimal value ranging from 0 to 255, with the bytes separated by dots (.). Figure 10.1 shows a 32-bit IPv4 address written in both binary and dotted-decimal notation.

Figure 10.1 An IPv4 Address in Binary and Dotted-Decimal Notation
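The conversion between the two notations is mechanical; a small Python sketch (the address value is illustrative, the helper names are ours):

```python
def binary_to_dotted(binary):
    """Convert a 32-bit binary string (four space-separated octets)
    to dotted-decimal notation."""
    return '.'.join(str(int(octet, 2)) for octet in binary.split())

def dotted_to_binary(dotted):
    """Convert dotted-decimal notation back to the binary form."""
    return ' '.join(format(int(part), '08b') for part in dotted.split('.'))

print(binary_to_dotted('10000000 00001011 00000011 00011111'))  # 128.11.3.31
print(dotted_to_binary('128.11.3.31'))
```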

4. Explain classful and classless addressing.

Ans: IP addressing can be either classful or classless. Both classful and classless addressing are discussed as follows:

Classful Addressing

The traditional IP addressing scheme used the concept of classes, hence the name classful addressing. IPv4 addressing is classful: the entire address space is divided into five classes, namely A, B, C, D and E, with each class covering a specific portion of the address space. Each class is further divided into a fixed number of blocks of fixed size. Given an IPv4 address, its class can be identified by examining either the first few bits of the first byte (when the address is in binary notation) or the first byte (when the address is in dotted-decimal notation), as shown in Table 10.1.

Table 10.1 Determining Class of IP Address

Class   Binary notation (first bits of first byte)   Dotted-decimal notation (first byte)
A       0                                            0–127
B       10                                           128–191
C       110                                          192–223
D       1110                                         224–239
E       1111                                         240–255

The blocks of class A, B and C addresses were granted to different organizations depending on their size. Large organizations with a vast number of hosts or routers were assigned class A addresses. Class B addresses were designed for mid-size organizations with tens of thousands of hosts or routers, while class C addresses were designed for small organizations with a small number of hosts or routers. Class D addresses were intended for multicasting, where each address identifies a specific group of hosts on the Internet. Only a few class E addresses were used, while the rest were reserved for future use. The main problem with classful addressing was that a large number of available addresses were left unused, resulting in a lot of wastage.

Each IP address in class A, B and C is divided into two parts: net ID that identifies the network on the Internet and host ID that identifies the host on that network. The size of each part is different for the different classes. In a class A address, net ID is identified by one byte and host ID is identified by three bytes. In a class B address, net ID is identified by two bytes and host ID is also identified by two bytes and in a class C address, three bytes specify the net ID and one byte specifies the host ID.
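The class of an address and its net ID/host ID split follow directly from Table 10.1 and the byte counts above; a Python sketch (helper names are ours):

```python
def address_class(dotted):
    """Determine the class of an IPv4 address from its first byte,
    using the ranges of Table 10.1."""
    first = int(dotted.split('.')[0])
    if first <= 127:
        return 'A'
    if first <= 191:
        return 'B'
    if first <= 223:
        return 'C'
    if first <= 239:
        return 'D'
    return 'E'

def split_net_host(dotted):
    """Split a class A, B or C address into (net ID octets, host ID octets):
    1 net-ID byte for class A, 2 for class B, 3 for class C."""
    octets = dotted.split('.')
    n = {'A': 1, 'B': 2, 'C': 3}[address_class(dotted)]
    return octets[:n], octets[n:]

print(address_class('132.7.21.84'))    # B
print(split_net_host('132.7.21.84'))   # (['132', '7'], ['21', '84'])
```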

Classless Addressing

There were certain problems with classful addressing, such as address depletion and the inability of many organizations to obtain address blocks suited to their size. To overcome these problems, classful addressing was replaced with classless addressing. As the name implies, the addresses are not divided into classes; instead, they are divided into blocks, and the size of a block varies according to the size of the entity to which the addresses are allocated. For instance, only a few addresses may be allocated to a very small organization, while a larger organization may obtain thousands of addresses. IPv6 addressing is classless.

The Internet authorities have enforced certain limitations on classless address blocks to make the handling of addresses easier. These limitations are as follows:

The addresses of a block must be contiguous.
Each block must have a power of 2 (that is, 1, 2, 4, 8…) number of addresses.
The first address in a block must be evenly divisible by the total number of addresses in that block.
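The three rules above can be checked mechanically for a block given as a first address and an address count (the sample addresses below are hypothetical):

```python
def valid_block(first_address, count):
    """Check the classless-block rules: the number of addresses must be a
    power of 2, and the first address must be evenly divisible by that
    number. (Contiguity is implicit when a block is described by its
    first address plus a count.)"""
    as_int = sum(int(octet) << (8 * (3 - i))
                 for i, octet in enumerate(first_address.split('.')))
    power_of_two = count > 0 and (count & (count - 1)) == 0
    return power_of_two and as_int % count == 0

print(valid_block('205.16.37.32', 16))  # True: 16 = 2^4 and the address is divisible by 16
print(valid_block('205.16.37.40', 16))  # False: 40 is not divisible by 16
```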

5. Describe the notation used to represent IPv6 addresses.

Ans: An IPv6 address is 128 bits (16 bytes) long and is represented using hexadecimal colon notation. In this notation, the 128-bit address is divided into eight sections of two bytes each, and each two-byte section is represented by four hexadecimal digits. Thus, the address consists of 32 hexadecimal digits, with each group of four hexadecimal digits separated by a colon. Figure 10.2 shows an IPv6 address written in hexadecimal colon notation.

Figure 10.2 An IPv6 Address in Hexadecimal Colon Notation

Since many of the digits in the IP address represented in Figure 10.2 are zeros, the address even in hexadecimal format is still too long. However, it can be abbreviated by omitting leading (but not trailing) zeros of a section. For example, 0037 in the second section can be written as 37 and 0000 in third, fourth, fifth and seventh section can be written as 0. Figure 10.3 shows the abbreviated form of a hexadecimal address after omitting leading zeros of a section.

Figure 10.3 Abbreviated Form of IPv6 Address

Figure 10.4 More Abbreviated Form of IPv6 Address

As there are still many consecutive sections containing only zeros in the address shown in Figure 10.3, the address can be abbreviated further. A run of consecutive zero sections can be eliminated and replaced with a double colon (::) as shown in Figure 10.4. However, this type of abbreviation can be applied only once per address. That is, if there are two or more runs of consecutive zero sections in a single address, only one run can be replaced with a double colon.
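Python's standard ipaddress module applies exactly these abbreviation rules; a sketch using a hypothetical address with leading zeros and a run of zero sections:

```python
import ipaddress

# A hypothetical IPv6 address in full hexadecimal colon notation.
addr = ipaddress.ip_address('FDEC:0074:0000:0000:0000:B0FF:0000:FFF0')

# .compressed drops leading zeros in each section and replaces the single
# longest run of all-zero sections with '::' (applied only once per address).
print(addr.compressed)   # fdec:74::b0ff:0:fff0
```

Note that the isolated zero section before FFF0 stays as a plain 0: only one run may be collapsed to ::.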

6. Define network address. Is it different from net ID?

Ans: The address that defines the network itself is referred to as the network address. It cannot be assigned to a host. It is also different from a net ID. It comprises both net ID and host ID with all bits of host ID set to zero. The network address is required to route the packets to the destination. Table 10.2 lists examples of network addresses for classes A, B and C.

Table 10.2 Examples of Network Addresses

Class   Network address
A       112.0.0.0
B       155.7.0.0
C       198.48.57.0

7. List some special IP addresses.

Ans: Depending on their contents, some IP addresses have been designated as special IP addresses. These addresses are listed as follows:

An IP address with all 0s (that is, 0.0.0.0) identifies this host. This address is used by the hosts at the time when they are being booted up but not afterwards.
An IP address with all 1s (that is, 255.255.255.255) indicates broadcast on this network. This address is used for forwarding the packet to all the hosts on the local network.
An IP address with net ID all 0s and a proper host ID (see diagram below) identifies a specific host on the local network.

An IP address with a proper net ID, and host ID containing all 1s (see diagram below) indicates broadcast to some distant LAN in the Internet.

An IP address of the form 127.aa.bb.cc (see diagram below) indicates reserved address used for loopback testing.
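Several of these special addresses can be recognized with Python's standard ipaddress module (the specific addresses below are illustrative):

```python
import ipaddress

# 0.0.0.0 identifies "this host" (used only while booting up).
print(ipaddress.ip_address('0.0.0.0').is_unspecified)   # True

# 127.x.y.z is reserved for loopback testing.
print(ipaddress.ip_address('127.13.5.9').is_loopback)   # True

# 255.255.255.255 is the broadcast address on this (local) network.
broadcast = ipaddress.ip_address('255.255.255.255')
print(broadcast == ipaddress.IPv4Network('0.0.0.0/0').broadcast_address)  # True
```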

8. Write a short note on address masks.

Ans: An address mask, also referred to as default mask, is a string made up of contiguous 1s followed by contiguous 0s. Like IP address, it is of 32 bits, that is, four octets where each octet is equivalent to a decimal value in the range of 0–255. Table 10.3 lists the default masks used for classes A, B and C IP addresses.

Table 10.3 Default Masks for Classes A, B and C

Class   Address mask (binary)                  Address mask (dotted-decimal)
A       11111111 00000000 00000000 00000000    255.0.0.0
B       11111111 11111111 00000000 00000000    255.255.0.0
C       11111111 11111111 11111111 00000000    255.255.255.0

The address mask (also simply called mask) identifies which part of an IP address defines the net ID and which part defines the host ID. To determine this, the IP address and mask are compared bit by bit. If a given mask bit is 1, the corresponding bit in the IP address forms a part of the net ID; if the mask bit is 0, the corresponding bit forms a part of the host ID. For example, consider the IP address 132.7.21.84 (that is, 10000100 00000111 00010101 01010100), which belongs to class B. It is clear from Table 10.3 that the mask for class B addresses is 255.255.0.0 (that is, 11111111 11111111 00000000 00000000). Now, if we compare the given IP address with its corresponding mask, we can easily determine that 132.7 defines the net ID and 21.84 defines the host ID.
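The bit-by-bit comparison above is simply a bitwise AND; a Python sketch of the same worked example (helper names are ours):

```python
def to_int(dotted):
    """Dotted-decimal string to a 32-bit integer."""
    a, b, c, d = (int(x) for x in dotted.split('.'))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(n):
    """32-bit integer back to dotted-decimal."""
    return '.'.join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

ip = to_int('132.7.21.84')
mask = to_int('255.255.0.0')                # class B default mask

print(to_dotted(ip & mask))                 # net ID part:  132.7.0.0
print(to_dotted(ip & ~mask & 0xFFFFFFFF))   # host ID part: 0.0.21.84
```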

9. Explain the concept of subnetting in IP. What is subnet mask?

Ans: Subnetting was introduced at the time when classful addresses were being used. It means breaking a large block of addresses into many contiguous groups, which are then assigned to smaller networks known as subnets. It came into use when a single network had to be subdivided internally to accommodate more computers, since it was impractical to acquire a new network address every time a network was divided. In such a situation, subnetting was used to split the network into many subnets. When subnetting is done, the network is divided internally but still appears as a single network to the outside world. For example, in a college campus network, the main router may be connected to an Internet service provider (ISP) and different departments may use their own routers connected to the main router. Here, each department with a router and some communication lines forms a subnet; to the outside world, however, the whole campus appears as a single network (Figure 10.5).

Figure 10.5 A Campus Network Divided into Subnets

Now, a problem arises when a packet arrives at the main router: how does it know to which subnet the packet should be delivered? One way is to maintain a table in the main router with an entry for each host in the network along with the router used to reach that host. Though this scheme seems fine, it is not feasible, as it requires a large table and many manual operations as hosts are added or deleted. Thus, a more feasible scheme was introduced, in which a part of the host address is used to define a subnet number. For example, in a class B address, where the host ID is 16 bits, six bits can be used to define the subnet number and the remaining 10 bits the host ID. This allows up to 64 subnetworks, each with a maximum of 1,022 hosts.

For the implementation of subnetting, the main router requires a subnet mask, which shows the division of addresses between the network plus subnet number and the host ID. For example, Figure 10.6 shows a subnet mask written in binary notation for a subnetted class B address, where six bits represent the subnet number and 10 bits the host ID. The subnet mask can also be written in dotted-decimal notation; for the address shown in Figure 10.6, it is 255.255.252.0. Alternatively, slash (/) notation, also called CIDR (classless interdomain routing) notation, can be used to represent the subnet mask. In this notation, a slash followed by the number of bits in the network plus subnet number defines the subnet mask. The address shown in Figure 10.6 is thus written as /22, indicating that the first 22 bits of the subnet mask are 1s while the remaining 10 bits are 0s.

Figure 10.6 A Class B Network Address Subnetted into 64 Subnets
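A subnet mask can be built directly from the prefix length; a Python sketch for the class B example with six subnet bits (16 + 6 = 22 mask bits):

```python
def mask_from_prefix(prefix):
    """Build the dotted-decimal mask whose first `prefix` bits are 1s."""
    m = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
    return '.'.join(str((m >> s) & 0xFF) for s in (24, 16, 8, 0))

print(mask_from_prefix(22))   # 255.255.252.0 (class B with 6 subnet bits)
print(mask_from_prefix(16))   # 255.255.0.0   (default class B mask)
```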

10. What is supernetting? Why it is needed?

Ans: Supernetting is just the opposite of subnetting. There was a time when all the class A and B addresses were used up, yet there was still great demand for address blocks from mid-size organizations. A single class C block was too small to serve the needs of such organizations; more addresses were required. As a solution to this problem, supernetting was introduced. In this method, several class C blocks can be combined by an organization to create a larger address space; that is, smaller networks can be combined to create a super-network, or supernet. For example, an organization requiring 1,000 addresses can be allocated four contiguous blocks of class C addresses.
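The combination of four contiguous class C blocks can be demonstrated with Python's ipaddress module (the 200.1.x.0 blocks below are hypothetical):

```python
import ipaddress

# Four contiguous class C blocks of 256 addresses each.
blocks = [ipaddress.ip_network('200.1.%d.0/24' % i) for i in range(4)]

# collapse_addresses merges contiguous, aligned blocks into one supernet.
supernet = list(ipaddress.collapse_addresses(blocks))
print(supernet)                     # [IPv4Network('200.1.0.0/22')]
print(supernet[0].num_addresses)    # 1024, enough for the 1,000 needed
```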

11. What is IP spoofing?

Ans: IP spoofing is a mechanism, which is used to hide the original IP address by replacing it with an arbitrary IP address. The IP datagrams, which are transferred over a network, carry the sender's IP address apart from the data from the upper layers. Any user having control over the operating system can place an arbitrary address into a datagram's source address field by modifying the device protocols. Thus, an IP packet may be made to appear as though it is originating from an arbitrary host. IP spoofing is generally used in denial of service attacks to hide the originator of attack.

Destructive IP spoofing can be prevented through a mechanism known as ingress filtering. When it is implemented, routers receiving IP datagrams examine their source addresses to check whether these addresses fall within the range of network addresses known to the routers. This check is performed at the edge of a network, such as at a corporate gateway or a firewall.

12. What is routing? Differentiate adaptive and non-adaptive routing algorithms.

Ans: Generally, in a network there exist multiple paths from a source to a destination. To forward packets, the network layer has to choose one of the available routes. This is called routing.

The part of the network layer software responsible for deciding on which output line an incoming packet should be transmitted is referred to as the routing algorithm. Routing algorithms can be divided into two categories, namely, non-adaptive and adaptive. Non-adaptive algorithms make routing decisions regardless of the current status of the network, such as the current traffic or topology. If a packet has to be transferred from A to B, the route from A to B is computed in advance and entered in the routing table when the network is booted. This is also sometimes called static routing. Some examples of non-adaptive routing algorithms are flooding, shortest path routing and flow-based routing.

On the other hand, the adaptive algorithms change their decisions dynamically in coordination with the changes in the network attributes as topology, traffic and time delay. Whenever changes occur in the network attributes, the routes of the routers are also changed accordingly. That is why it is also referred to as dynamic routing. Some examples of the adaptive routing algorithms are distance vector routing and link state routing.

13. What is optimality principle?

Ans: The optimality principle is a general statement about optimal routes that holds regardless of the network topology and the traffic load. According to the principle, if a router F lies on the optimal path d1 from router A to router G, then the portion of that path from F to G (say, d2) is also optimal. To prove this, assume that d2 is not optimal. Then there exists some other path (say, d3) from F to G that is better than d2. Replacing d2 with d3 in d1 would then yield a better path from A to G, which contradicts the assumption that d1 is optimal.

14. What is routing table? What are its categories?

Ans: Each host or router in a network maintains a data structure, known as a routing table, which contains an entry corresponding to each destination or group of destinations used to route IP packets. Whenever an IP packet arrives at a router, the router examines the destination address of the packet and looks it up in the routing table to check whether it matches some existing entry. If so, the IP packet is forwarded through the interface specified in the matching entry.

Routing tables are of two types: static and dynamic routing table. Both these types are discussed as follows:

Static Routing Table: This routing table is created manually. The route of each of the destination is entered into the table by the administrator. If there is any change in the network, it is also done manually by the administrator. Any broken link in the network would require the routing tables to be manually reconfigured immediately so that alternate paths could be used. It is not possible to automatically update the table. Static routing tables are well suited for small networks. However, in large networks such as Internet, maintenance of static routing tables can be very tiresome.
Dynamic Routing Table: This routing table is updated automatically by the dynamic routing protocols such as RIP, BGP and OSPF whenever there is a change in the network. Generally, each router periodically shares its routing information with adjacent or all routers using the routing protocols so that the information about the network could be obtained. If some change is found in the network, the routing protocols automatically update all the routing tables.

15. Differentiate between intradomain and interdomain routing.

Ans: Due to the ever-growing size of the Internet, using only one routing protocol to update the routing tables of all routers is not sufficient. Therefore, the network is divided into various autonomous systems. An autonomous system (AS) is a group of networks and routers controlled by a single administrator. Thus, the Internet can be seen as a large collection of autonomous systems. The routing done inside an autonomous system is known as intradomain routing. One or more intradomain routing protocols may be used to handle the traffic inside an autonomous system; the two most commonly used are distance vector routing and link state routing. On the other hand, routing done between different autonomous systems is known as interdomain routing. Only one interdomain routing protocol is used to handle the routing between autonomous systems; the most popular is path vector routing.

16. Explain the following algorithms in brief.

(a) Flooding

(b) Flow-based routing

Ans: (a) Flooding: Flooding is a static routing algorithm based on the forwarding of packets. Here, every packet arriving at a router is forwarded (flooded) on all the outgoing lines of the router except the one on which it arrived. Due to this bulk forwarding, a large number of duplicate packets are generated. Flooding can be kept in check by discarding duplicate packets rather than forwarding them again. Some preventive measures that can be used to control flooding are as follows:

A hop counter can be included in the header of each packet. Initially, the hop counter is set to a value equal to the length of the path from source to destination; if this distance is not known, the counter is initialized to the full diameter of the subnet. At each hop, the counter is decremented by one, and when it reaches zero the packet is discarded.
Another technique to prevent duplication is to keep track of the packets that have already been flooded so that they are not sent a second time. This technique requires a sequence number to be included in every packet that is to be forwarded. The sequence number is entered by the source router whenever it receives a packet from a host on its network. Every router maintains a list per source router indicating which sequence numbers from that source it has already seen. If an incoming packet is on the list, it is not flooded.
A variation of flooding known as selective flooding can be used to overcome the problem. In this algorithm, not every incoming packet is sent to every outgoing line, but only to those lines, which are going in the right direction.
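The sequence-number technique in particular can be sketched in Python on a toy topology (router names and links are hypothetical):

```python
from collections import deque

# Hypothetical four-router topology (adjacency list of links).
links = {
    'A': ['B', 'C'],
    'B': ['A', 'C', 'D'],
    'C': ['A', 'B', 'D'],
    'D': ['B', 'C'],
}

def flood(source, seq):
    """Flood a packet identified by (source, seq); each router remembers
    the identifiers it has already seen and drops duplicates."""
    seen = {router: set() for router in links}
    seen[source].add((source, seq))
    delivered = []
    queue = deque((source, n) for n in links[source])
    while queue:
        sender, router = queue.popleft()
        if (source, seq) in seen[router]:
            continue                      # duplicate: drop, do not re-flood
        seen[router].add((source, seq))
        delivered.append(router)
        for n in links[router]:
            if n != sender:               # every line except the arrival line
                queue.append((router, n))
    return delivered

print(sorted(flood('A', seq=1)))   # ['B', 'C', 'D']
```

Each router delivers the packet exactly once despite the many duplicate copies generated on the way.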

Few applications where flooding is useful are as follows:

It is used in distributed database applications where the need arises to update all the databases at the same time and to choose the shortest path to the destination resulting in shorter delay.
It is used in wireless networks where one station can transmit a message, which can be received by all other stations within the range of that network.
It can be used as a metric against which other routing algorithms are compared. This is because flooding always finds the shortest path and therefore always achieves the shortest delay; no other algorithm can do better.

Figure 10.7 Flow-based Routing

(b) Flow-based Routing: This is also a static routing algorithm, in which packets are forwarded by taking into account both the traffic conditions and the structure of the network. To understand it, consider the network shown in Figure 10.7. Suppose there is always heavy traffic from node M to L. If node M has to send packets to node K, routing through node L would not be efficient because of the heavy load on path ML. Thus, another path MNJK is used even though it is longer than the path MLK. This type of routing is known as flow-based routing. The optimal routes can be found by analyzing the flow of data mathematically. However, this is possible only if the average traffic from each node to every other node is known in advance and remains constant. Once the average traffic and the capacity of a line are known, the mean delay for each packet can be calculated in advance.

17. What is shortest path routing? Explain with the help of a suitable example.

Ans: The shortest path routing is a static routing algorithm, which is based on the concept of graphs. Here, the whole subnet is depicted as a graph where the nodes of the graph represent the routers and the edges represent the links between two routers. The optimal route between two routers is determined by finding the shortest path between them. The metric used in shortest path routing can be number of hops, distance or the transmission delay. Any of the metrics used is represented as the weights assigned to the edges in the graph.

There are many algorithms for computing the shortest path between two nodes of a graph representing a network; however, Dijkstra's algorithm is the one most often used. In this algorithm, each node is labelled with a value equal to its distance from the source node along the best known path. Since no paths are known initially, all nodes are labelled with infinity. As the algorithm proceeds and paths are found, the labels may change. The labels on the nodes are of two kinds: tentative and permanent. Initially, each label is tentative. Once it is assured that a label represents the shortest distance, it is made permanent.

To understand the Dijkstra's algorithm, consider a sample subnet graph shown in Figure 10.8 where the weights on the edges represent the distance between nodes connected by that edge. Suppose the shortest path has to be calculated from node A to D.

Figure 10.8 A Sample Subnet Graph

The steps involved in calculating the shortest path from A to D are as follows:

  1. The node A is marked permanent (shown by the filled circle in Figure 10.9). Each adjacent neighbour of A is examined and is relabelled with distance from A. For example, B is relabelled as (4, A) which indicates that distance of B is 4 from A. Similarly, G is relabelled as (12, A). Rest of the nodes still remain labelled with infinity. Thus, the new tentative nodes are B and G.

    Figure 10.9 Node A Marked as Permanent

  2. As out of nodes B and G, B has the shorter distance from A, it is selected and marked permanent as shown in Figure 10.10. Thus, now each adjacent neighbour of B is examined and relabelled with distance from B. Notice that a node is relabelled only if the existing label on it is greater than the sum of the label on the source node and the distance between that node and source node. For example, while examining the node C with respect to node B, we find that the sum of label on B (that is, 4) and the distance between B and C (that is, 14) is 18, which is far less than the existing label on C (that is, ∞). Thus, node C is relabelled as (18, B). Similarly, E is relabelled as (8, B).

    Figure 10.10 Node B Marked as Permanent

  3. As out of nodes C and E, E has the smaller label (that is, the shorter distance from A), it is selected and marked permanent as shown in Figure 10.11. Now each adjacent neighbour of E is examined and relabelled with its distance via E. As the node G can be reached via E with a shorter distance (that is, 10) than its previous distance (that is, 12), G is relabelled as (10, E). Similarly, F is relabelled as (12, E).

    Figure 10.11 Node E Marked as Permanent

  4. Out of nodes G and F, G has the smaller label; however, making G permanent changes nothing, as both its neighbours (A and E) are already permanent. Therefore, node F is selected and marked permanent as shown in Figure 10.12 and its adjacent neighbours are examined. While examining the neighbouring nodes, node E is not considered, as it is already permanent. In addition, as the existing label on node C (that is, 18) is equal to the sum of the label on F (that is, 12) and the distance between F and C (that is, 6), node C is not relabelled. In contrast, node H is relabelled as (16, F).

    Figure 10.12 Node F Marked as Permanent

  5. The node H is marked permanent as shown in Figure 10.13 and its adjacent neighbour D is examined. The node D is relabelled as (20, H). Since our intended destination D has been reached, the node D is also marked permanent. Thus, the shortest path from A to D is ABEFHD as shown in Figure 10.13 and the shortest distance from A to D is 20.

    Figure 10.13 Shortest Path from A to D
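The walkthrough above can be sketched as runnable Python. The edge weights below are reconstructed from the labels in the worked example (the figure itself is not reproduced here), so treat the graph as an illustrative reconstruction rather than the book's exact figure:

```python
import heapq

# Undirected graph with edge weights inferred from the worked example.
graph = {
    'A': {'B': 4, 'G': 12},
    'B': {'A': 4, 'C': 14, 'E': 4},
    'C': {'B': 14, 'F': 6},
    'D': {'H': 4},
    'E': {'B': 4, 'F': 4, 'G': 2},
    'F': {'C': 6, 'E': 4, 'H': 4},
    'G': {'A': 12, 'E': 2},
    'H': {'F': 4, 'D': 4},
}

def dijkstra(source):
    """Classic Dijkstra: labels start at infinity (tentative) and become
    permanent when popped from the priority queue."""
    dist = {node: float('inf') for node in graph}
    prev = {}
    dist[source] = 0
    heap = [(0, source)]
    permanent = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in permanent:
            continue
        permanent.add(node)               # label is now permanent
        for neighbour, weight in graph[node].items():
            if d + weight < dist[neighbour]:
                dist[neighbour] = d + weight
                prev[neighbour] = node
                heapq.heappush(heap, (dist[neighbour], neighbour))
    return dist, prev

dist, prev = dijkstra('A')
print(dist['D'])                # 20, matching the walkthrough

# Reconstruct the path by walking the predecessor links back from D.
path, node = [], 'D'
while node != 'A':
    path.append(node)
    node = prev[node]
path.append('A')
print(''.join(reversed(path)))  # ABEFHD
```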

18. Explain distance vector routing algorithm with the help of a suitable example.

Ans: The distance vector routing (also known as Bellman-Ford routing) is a dynamic routing algorithm. In this method, each router maintains a table (known as a vector) in which routing information about the distance and route to each destination is stored. The table contains an entry for each router in the network, specifying the preferred outgoing line and the distance to that router. Each router periodically exchanges its routing information with its neighbours and updates its table accordingly; thus, each router knows the distance to each of its neighbours. The metric used for making routing decisions may be the total number of packets queued along the path, the number of hops, the time delay in milliseconds, etc. For example, if delay is used as the metric, then every t milliseconds each router sends its neighbours its estimated delays to every destination in the network. Suppose one such table has just arrived at router J, showing that its neighbour A can reach router i in an estimated time Ai. If router J knows that its delay to A is n milliseconds, then it can reach router i via A in Ai + n milliseconds. By performing this calculation for each neighbour, the optimum estimate is found.

To understand how calculations are performed in distance vector routing, consider a subnet consisting of routers and networks as shown in Figure 10.14(a). Suppose at some instant the router D receives the delay vectors from its neighbours A, C and E as shown in Figure 10.14(b). Further, assume that the estimated delays from D to its neighbours A, C and E are 5, 8 and 6 ms, respectively. The router D can now calculate the delays to all the other routers in the network. Suppose D wants to compute the route to B using the available delay information. D has a delay of 5 ms to reach A, while A takes 10 ms to reach B [Figure 10.14(b)], so D would take a total of 15 ms to reach B via A. In the same way, D can calculate the delays to reach B via the lines C and E as 18 (11 + 7) and 31 (9 + 12 + 10) ms, respectively. Since the optimum value is 15 ms, the router D updates its routing table to record that the delay to B is 15 ms via the route A. The same procedure can be followed by D to calculate the delays to all the other routers and enter them in its routing table as shown in Figure 10.14(c).

Figure 10.14 Distance Vector Routing

19. What is the count-to-infinity problem in the distance vector routing algorithm?

Ans: Theoretically, the distance vector routing algorithm works efficiently, but in practice it has a serious drawback: although it eventually converges to the correct answer, it may take a long time to do so. In particular, it reacts quickly to good news but very slowly to bad news. For example, consider a router B whose best route to destination X has a large delay. If on the next vector exchange one of its neighbours (say, A) declares a short delay to X, B starts using the line through A to reach X. The good news is thus processed by B within a single vector exchange, and it spreads further as B's neighbours receive it.

To understand how fast good news spreads, consider a linear subnet consisting of five nodes [Figure 10.15(a)]. Suppose the metric used in the subnet is the number of hops. Initially, node A is down and therefore, all the routers have recorded their delay to A as infinity. When A becomes functional, the other routers learn about it through the vector exchanges. At the first vector exchange, B learns that its left neighbour has zero delay to A, so it makes an entry of 1 in its routing table, as A is just one hop away from it. At the second vector exchange, router C comes to know that B is just one hop away from A, so C updates its table with an entry of two hops to A. Similarly, D and E update their routing tables after the third and fourth exchanges, respectively. Thus, the news about the recovery of A propagates at a rate of one hop per exchange.

Figure 10.15 The Count-to-Infinity Problem

However, the situation is very different in the case of bad news propagation. To understand the propagation of bad news, consider the situation shown in Figure 10.15(b). Here, initially all routers are functioning properly and the routers B, C, D and E are 1, 2, 3 and 4 hops away from A, respectively. Now, if A goes down all of a sudden, the other routers still believe A to be functional. At the first vector exchange, no vector is received from A, but node B sees in the vector received from C that C has a path of length 2 to A. As B does not know that the path from C to A passes through B itself, it updates its routing table with the distance to A as 3, via C. Now, at the second vector exchange, as C sees the distance of B to A as 3, it updates its distance to A as 4, via B. Similarly, all the routers continue to update their routing tables, increasing their distance to A after every vector exchange. Eventually, all routers record their distance to A as infinity, where infinity is conventionally set to the longest path plus one. This problem is known as the count-to-infinity problem.
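The bad-news scenario above can be simulated with a short sketch. The hop-count metric is used, with 16 standing in for infinity (the value used in RIP); the simultaneous-exchange model is a simplifying assumption.

```python
# Sketch of count-to-infinity on the linear subnet A-B-C-D-E after A fails:
# the distances of B..E to A climb by roughly one per exchange until they
# all reach the value chosen as "infinity".
INF = 16  # conventionally, the longest path plus one

def exchange(dist):
    """One simultaneous round of vector exchange among B, C, D, E; A is down."""
    routers = ["B", "C", "D", "E"]
    new = {}
    for i, r in enumerate(routers):
        # neighbours of r on the line (A is down and advertises nothing)
        nbrs = [routers[i - 1]] if i > 0 else []
        if i < len(routers) - 1:
            nbrs.append(routers[i + 1])
        best = min((dist[n] + 1 for n in nbrs), default=INF)
        new[r] = min(best, INF)
    return new

dist = {"B": 1, "C": 2, "D": 3, "E": 4}  # hop counts to A before the failure
for _ in range(20):
    dist = exchange(dist)
print(dist)  # every router ends up at INF (16)
```

Running fewer iterations shows the distances creeping upwards two hops at a time, exactly the slow bad-news propagation described above.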

20. Explain the link state routing protocol.

Ans: Link state routing is a dynamic routing algorithm that was designed to replace its predecessor, distance vector routing. Distance vector routing has certain limitations: it does not take the line bandwidth into account while choosing among routes, and it suffers from the count-to-infinity problem. The link state routing algorithm was devised to overcome these limitations.

In link state routing, information such as the network topology, the metric used and the type and condition of the links is available to each router (node). Therefore, each router can use Dijkstra's algorithm to compute the shortest path to all the available routers and then build its routing table. Link state routing involves the following phases:

Learning about the Neighbours: Whenever a router boots up, its first step is to identify all its neighbours and obtain their network addresses. For this, the router sends a HELLO packet on each point-to-point line. Each router receiving the HELLO packet responds with its name (or network address) to the sending router. Similarly, all the routers in the network discover their neighbours. Notice that the names of the routers must be globally unique in order to avoid any ambiguity.
Measuring the Line Cost: After discovering all its neighbours, the next step for a router is to estimate the delay to reach each of them. For this, the simplest method is to send a special ECHO packet over the line and start a timer. As soon as a router receives an ECHO packet, it immediately sends the packet back to the sending router. After receiving the ECHO packet back, the sending router can determine the round-trip time of the packet and divide it by two to obtain the delay. To get a better estimate of the delay, the router can send the ECHO packet several times and use the average time as the delay. While computing the delay, the traffic load in the network may or may not be taken into account. If the load is considered, the sending router starts the timer at the time the packet is queued up. On the other hand, if the load is not taken into account, the sending router starts the timer at the time the packet reaches the front of the queue.
Building Link State Packets: The next step for a router is to package the collected information to form a link state packet (LSP). Each LSP comprises four fields: the identity of the sending router, a sequence number, an age and the list of its neighbouring links. The first and fourth fields together determine the network topology. The second field indicates the sequence number assigned to the LSP; the sequence number is incremented by one each time the router creates a new LSP. The third field indicates the duration for which the LSP has been residing in the domain. The LSP also includes the delay to reach each adjacent neighbour. For example, Figure 10.16(b) shows the LSPs for each of the five routers of the subnet shown in Figure 10.16(a).

Figure 10.16 Subnet and Link State Packets

  The LSPs are created by the routers periodically at regular intervals of time, or on certain occasions such as when some change occurs in the topology or when some router goes down or comes up.
Flooding of LSPs: After a router has created its LSP, it needs to distribute the LSP to the other routers in the network. This process is referred to as flooding. The router forwards a copy of its LSP through each of its neighbouring interfaces. For example, the router A sends its LSP through lines AB and AC [Figure 10.16(a)]. Each receiving router compares the received LSP against the LSPs it already has. If the sequence number of the newly arrived LSP is lower than that of the stored LSP from the same sending router, the packet is simply discarded. Otherwise, the receiving router stores the new LSP and forwards a copy of it through its neighbouring interfaces, except the one through which the LSP arrived.
Computing New Routes: After receiving all LSPs, the next step for a router is to compute the shortest path to every possible destination using the Dijkstra's algorithm (explained in Q 17) and build the routing table. The routing table of a router lists all the routers in the network, the minimum cost of reaching them and the next router in the path to which the packet should be forwarded.
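The route computation phase can be sketched with a minimal Dijkstra implementation. The topology and link costs below are illustrative, not taken from the figures; the table records, for each destination, the minimum cost and the first hop, as described above.

```python
import heapq

# Minimal Dijkstra sketch as used in link state routing: each router runs it
# over the full topology learned from the flooded LSPs (illustrative costs).
topology = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 2},
    "D": {"B": 4, "C": 2},
}

def shortest_paths(source):
    """Return {router: (cost, first hop)} computed with Dijkstra's algorithm."""
    dist = {source: (0, source)}
    heap = [(0, source, source)]          # (cost, router, first hop from source)
    while heap:
        cost, node, first = heapq.heappop(heap)
        if cost > dist.get(node, (float("inf"),))[0]:
            continue                      # stale heap entry
        for nbr, w in topology[node].items():
            nxt = nbr if node == source else first
            if cost + w < dist.get(nbr, (float("inf"),))[0]:
                dist[nbr] = (cost + w, nxt)
                heapq.heappush(heap, (cost + w, nbr, nxt))
    return dist

print(shortest_paths("A"))  # e.g. D is reached at cost 5 with first hop B
```

Note how the cheaper two-hop path A-B-C (cost 3) replaces the direct link A-C (cost 5), which distance vector routing would also find, but here each router computes it independently from the full map.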

21. Differentiate between the link state and distance vector routing algorithms.

Ans: The link state routing algorithm came after the distance vector routing algorithm and offers many advantages over it. Some of the differences between the two are as follows:

The traffic generated due to routing process is less in link state routing as compared to the distance vector routing. The reasons for this difference are as follows:
  • The HELLO messages exchanged between adjacent routers are much smaller in size than the vectors in the distance vector routing. The LSPs of the link state routing contain information only about the neighbours, while the distance vectors include the net IDs of all the routers in the network.
  • In link state routing, the LSPs are exchanged between neighbours after every 30 min, while in distance vector routing, the vectors are exchanged after a comparatively very small period (for example, 30 s in RIP).
In link state routing, the optimal paths are calculated by each router independently. However, in distance vector routing each router depends on its neighbour for updating its distance vector resulting in slow processing.
Alternate routes are also possible in link state routing but in distance vector routing only specific routes are used.
Multiple cost metrics can be used in link state routing and the optimal paths can be computed with respect to each metric separately. Further, packets can be forwarded based on any one metric.

22. Explain the path vector routing protocol.

Ans: Path vector routing is an interdomain routing protocol, which is used between various autonomous systems (ASs). In every AS, there is one node (there can be more, but only one is considered here) which acts on behalf of the whole system. This node is known as the speaker node. It creates the routing table for its AS and shares it with the speaker nodes residing in the neighbouring ASs. Path vector routing is similar to distance vector routing; however, here only the speaker nodes of the ASs communicate with each other. The advertised tables show only the path, excluding the metrics of the nodes.

To understand path vector routing, consider three ASs, AS1, AS2 and AS3 with K, L and M as the speaker nodes, respectively (Figure 10.17). The path vector routing works in various phases, which are as follows:

Figure 10.17 Initial Routing Tables in Path Vector Routing

Initialization: Initially, the speaker nodes know only how to reach the nodes inside their own autonomous system. Thus, each speaker node initializes its routing table with an entry for each node that resides in its own network. For example, the routing table of node K shows that the nodes K1, K2 and K3 are located in AS1 and can be reached through K. The routing tables of L and M are initialized in the same way.
Sharing: After the routing tables have been initialized, they are shared between the speaker nodes of all ASs. For example, the node K shares its table with L and M providing the routing information inside AS1. Similarly, node L and M also share their routing tables.
Updating: When a routing table is received by a speaker node, it updates its routing table by adding the nodes that are not present in its own routing table along with the path to reach them (Figure 10.18). After updating the table, each speaker node knows how to reach every node in other ASs. For example, if node K receives a packet for node K2, it knows that the path is in AS1. However, if it receives a packet for M2, it knows that it has to follow the path AS1-AS3.

Figure 10.18 Routing Tables After Updating
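The updating phase can be sketched as follows. The speaker node and AS names follow the example (node labels such as K1 and M2 come from the figures); the merge logic is a simplified assumption of how a received table is folded into the local one.

```python
# Sketch of the path vector "updating" phase: speaker node K merges the
# table received from speaker node M, prepending its own AS to the path.
table_K = {"K1": ["AS1"], "K2": ["AS1"], "K3": ["AS1"]}
table_M = {"M1": ["AS3"], "M2": ["AS3"]}

def update(own, received, via_path):
    """Add unknown destinations from a neighbour's table, extending the path."""
    for dest, path in received.items():
        if dest not in own:
            own[dest] = via_path + path
    return own

update(table_K, table_M, ["AS1"])
print(table_K["M2"])  # ['AS1', 'AS3'] -- K reaches M2 along the path AS1-AS3
```

After the same merge is done with L's table, K knows how to reach every node in AS2 and AS3, matching Figure 10.18.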

23. List some advantages of path vector routing.

Ans: The path vector routing is an interdomain routing protocol that offers certain advantages. Some of these advantages are as follows:

Loop Prevention: Path vector routing avoids the creation of loops in a path. Whenever a router receives a message, it checks whether its own AS appears in the path list to the destination. If so, forwarding the message would create a loop, and the message is therefore ignored.
Policy Routing: Path vector routing can implement policy routing easily. Whenever a router receives a message, the AS path it carries is checked. If any of the ASs listed in the path is against its policy, that path and destination are ignored; the path is not included in the routing table and the message is not forwarded.
Optimum Path: The optimum path is the path that is best for the organization. Path vector routing shows only the path and does not include the metrics involved, as different ASs may use different metrics. The optimal path does not necessarily mean the shortest path; several other measures such as security, safety and reliability are also taken into account while determining the optimal path.
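The loop-prevention and policy checks described above can be sketched together; the AS names and the banned-AS set are illustrative.

```python
# Sketch of the checks a speaker node applies to an advertised AS path:
# reject it if it contains our own AS (loop) or a policy-banned AS.
MY_AS = "AS2"
BANNED = {"AS7"}  # ASs against local policy (illustrative)

def accept_path(path):
    """Accept an advertised AS path only if it is loop-free and policy-clean."""
    if MY_AS in path:
        return False          # our own AS is in the path -> loop, ignore
    if BANNED & set(path):
        return False          # violates policy, ignore
    return True

print(accept_path(["AS1", "AS3"]))         # True
print(accept_path(["AS1", "AS2", "AS3"]))  # False -- would loop through AS2
```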

24. Explain the following routing protocols.

(a) RIP

(b) OSPF

(c) BGP

Ans: (a) RIP: Routing Information Protocol (RIP) is an intradomain routing protocol (also known as an interior gateway protocol), which is used inside an autonomous system. It is a simple protocol whose operating procedure is similar to the widely used distance vector routing. The operation of RIP is based on certain considerations, which are as follows:

RIP is used in autonomous systems, which consist of routers and networks (links). Routing tables are created for the routers, but the networks do not maintain any such table.
The first column of a routing table specifies a destination; in RIP, the destination is a network. Therefore, a network address is defined in the first column of the RIP routing table.
The metric used by RIP is the hop count, where the term hop defines the number of subnets traversed from the source router to the destination subnet, including the destination subnet.
The maximum cost of a path in RIP cannot be more than 15, that is, any route in an autonomous system cannot have more than 15 hops.
The next-node column in the routing table of RIP contains the address of the router to which the packet is to be forwarded (the first hop to be done).

Consider an autonomous system consisting of six networks and three routers as shown in Figure 10.19. Each router has its own routing table showing how to reach each network existing in the AS. Let us study the routing table of one of the routers, say R1. The networks 120.10.0.0 and 120.11.0.0 are directly connected to the router R1; therefore, R1 need not make any hop count entries for these networks. The packets destined for both these networks can be sent directly by the router; however, such forwarding cannot be done in the case of the other networks. For example, if the router R1 needs to send packets to the two networks to its left, then it has to forward the packets to the router R3. Thus, for these two networks, the next-node column of the routing table stores the interface of router R3 with the IP address 120.10.0.1. Similarly, if R1 has to send packets to the two networks to its right, then it has to forward the packets to the router R2. Thus, for these two networks, the next-node entry of the routing table is the interface of router R2 with IP address 120.11.0.1. The routing tables of the other routers can be explained in the same way.

Figure 10.19 Example of a Domain Using RIP
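A RIP-style table update can be sketched as follows. The network addresses follow the example, while the neighbour's advertised vector is an illustrative assumption; 16 is treated as infinity, per the 15-hop limit stated above.

```python
# Sketch of a RIP update (hop-count metric, 16 treated as infinity): on
# receiving a neighbour's vector, add one hop and keep the better route.
INFINITY = 16

def rip_update(table, neighbour_addr, neighbour_vector):
    """Merge a neighbour's advertised hop counts into the routing table."""
    for network, hops in neighbour_vector.items():
        new_hops = min(hops + 1, INFINITY)
        current = table.get(network, (INFINITY, None))[0]
        if new_hops < current:
            table[network] = (new_hops, neighbour_addr)
    return table

table = {"120.10.0.0": (0, None)}  # directly connected network
rip_update(table, "120.11.0.1", {"120.12.0.0": 2, "120.10.0.0": 1})
print(table["120.12.0.0"])  # (3, '120.11.0.1') -- one hop beyond the neighbour
```

The directly connected network keeps its zero-hop entry, since the advertised two-hop alternative is worse.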

(b) OSPF: Open Shortest Path First (OSPF) is an intradomain routing protocol which works within an autonomous system. It is based on the link state routing algorithm. In OSPF, the autonomous system is divided into many areas so that the routing traffic can be handled easily. Each area is formed by a combination of networks, hosts and routers within the autonomous system and is assigned a unique identification number. The networks residing in an area must be connected to each other. The routers inside an area function normally, that is, they periodically update the area with routing information.

In OSPF, there are two types of routers, namely, area border router and backbone router. The area border router is situated at the boundary of each area and it shares the information about its area with the other areas. The backbone router is situated in a special area (called the backbone area) among all the areas inside an autonomous system. The backbone is considered as a primary area and all the other areas are connected to it. The backbone routers can also function as area border routers. Figure 10.20 depicts the idea of an OSPF protocol.

Figure 10.20 Areas in an Autonomous System

Like every other routing protocol, OSPF associates a cost, known as a metric, with each route. This metric may depend upon the type of service, such as minimum delay, minimum hop count or maximum throughput, and there may be multiple routing tables, each based on a different type of service.

The connections in OSPF are known as links. In OSPF, four types of links have been identified: point-to-point, transient, stub and virtual.

Point-to-Point Link: This link defines a direct connection between the routers. It can be represented as a graph where the nodes of the graph represent routers while the bidirectional arrows connecting the nodes represent the link between the routers (Figure 10.21).

Figure 10.21 Point-to-Point Link

Transient Link: This link defines a network to which multiple routers are attached. In this type of network, each router has many neighbours all linked through a common network. The data can enter the network through any of the routers and leave the network through any of the routers. For example, consider an Ethernet network consisting of six routers R1, R2, R3, R4, R5 and R6 [Figure 10.22(a)]. To represent transient link graphically, the network is represented as a node, which is connected directly to all other nodes [Figure 10.22(b)].

Figure 10.22 Transient Link and its Graphical Representation

Stub Link: This link is a special case of a transient link, which defines a network with only one router connected to it [Figure 10.23(a)]. The packets enter and leave the network through this single router. Graphically, it can be represented with the network and its router each as a node [Figure 10.23(b)]. The link is unidirectional, from the router to the network.

Figure 10.23 Stub Link and its Graphical Representation

Virtual Link: When the link between two routers is broken, the administrator has to establish a new link between them, known as a virtual link. This link may use a longer path covering a larger number of routers.

(c) BGP: Border Gateway Protocol (BGP) is an interdomain routing protocol, which came into existence in 1989. It is implemented using the concept of path vector routing. As BGP works between the different ASs, it is an exterior gateway routing protocol. It follows certain policies to transfer packets between various types of ASs and these policies are to be configured manually in each BGP router.

In BGP, the ASs are divided into three categories, namely, stub AS, multihomed AS and transit AS. The stub AS has a single connection with one other AS. The data traffic can either originate or terminate at a stub AS, that is, a stub AS is either a source or a sink; data traffic cannot pass through it. The multihomed AS can have multiple connections with other ASs, that is, it can send data to more than one AS and receive from more than one as well. However, it is still only a sink or source, as it does not allow transit traffic to pass through it. The transit AS is also a multihomed AS, except that it allows third-party traffic to pass through it.

Each BGP router maintains a routing table that keeps track of the path being used by the BGP router. The routing information is exchanged periodically between the routers in a session, referred to as BGP session. Whenever a BGP router needs to exchange the routing information with its neighbour, a transmission control protocol (TCP) connection is established between the two, which indicates the start of session. Using TCP connection provides a reliable communication between the routers. However, the TCP connection used for BGP is not permanent and it is terminated as soon as some unusual event occurs.

To understand the BGP operation, consider the set of BGP routers shown in Figure 10.24. Suppose the router H uses the path HDE to travel to E. When the neighbours of H provide routing information, they give their complete paths to reach E. After receiving all the paths, H examines which one will be the optimal path for going to E. The path originating from A is discarded, as it passes through H itself. The choice now has to be made among G, C and D. To select among them, BGP uses a scoring function that examines the available routes to a destination and scores them, returning a number indicating the distance of each route to that destination. After the paths have been scored, the one with the shortest distance is selected. As the path DE has the shortest distance, it is selected.

Figure 10.24 A Set of BGP Routers Along with Routing Information for H

BGP solves the count-to-infinity problem previously encountered in the distance vector routing algorithm. To understand how it happens, suppose D crashes and the line HD goes down. When H receives routing information from the remaining working neighbours A, C and G, it finds the new routes to E as GFE, AHGFE and CHGFE, respectively. The route GFE is selected as the other two pass through H itself. Thus, the new shortest path for H to reach E is HGFE via G.
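H's selection among the advertised paths can be sketched as follows, using path length as a simple stand-in for BGP's scoring function; the paths are those given for the failure scenario above.

```python
# Sketch of H's route selection: discard advertised paths that pass through
# H itself (loop prevention), then score the rest; here the score is simply
# the path length, standing in for BGP's more elaborate scoring function.
def choose_route(advertised, self_name="H"):
    """Pick the shortest loop-free path to the destination."""
    usable = [p for p in advertised if self_name not in p]
    return min(usable, key=len) if usable else None

# Paths to E advertised by H's neighbours after D has crashed
paths = {"G": "GFE", "A": "AHGFE", "C": "CHGFE"}
print(choose_route(paths.values()))  # 'GFE' -- the other two pass through H
```

Because the looping paths are discarded outright rather than counted up gradually, the count-to-infinity behaviour of distance vector routing cannot occur.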

25. Explain unicast and multicast routing.

Ans: Unicast and multicast routing are the techniques of routing based on the network structure. These are explained as follows:

Unicast Routing

In unicast communication, there is only one source and one destination, that is, one-to-one connection. The data packets are transferred between the source and the destination using their unicast addresses. The packets go through various routers to arrive at their destination. Whenever a packet arrives at the router, the shortest path to the intended destination has to be found. For this, the router checks its routing table to find out the next hop to which the packet should be forwarded in order to reach the destination through the shortest path. If a path is not found in the table, the packet is discarded; otherwise, the packet is forwarded only through the interface specified in the routing table.

Multicast Routing

In multicast communication, there is one source and many destinations. The communication is one-to-many, that is, the packets are forwarded to one or more destination addresses defined altogether by a group address. Whenever a router receives a packet destined for a group address, it forwards the packet through several of its interfaces so that the packet can reach all destinations belonging to that group. To forward a single packet to a group address, the router needs to create a shortest path tree for each group. This makes multicast routing complex, as N shortest path trees need to be created for N groups. To reduce this complexity, two approaches, namely, source-based tree and group-shared tree, can be used to construct the shortest path trees.

Source-based Tree: In this approach, each router constructs a shortest path tree for each group, which defines the next hop for each network containing members of that group. For example, consider Figure 10.25 in which there are five groups G1, G2, G3, G4 and G5 and four routers R1, R2, R3 and R4. The groups G1, G2 and G5 have members in four networks, while the groups G2 and G3 have members in two networks. Each router is required to construct five shortest path trees, one for each group, and make the shortest path entries in its routing table. Figure 10.25 shows a multicast routing table maintained by the router R3. Now, suppose the router R3 receives a packet destined for group G5. Then, it forwards a copy of the packet to routers R2 and R4. Further, router R2 forwards a copy of the packet to router R1. This process continues and eventually, the packet reaches each member of G5 in the whole network. The problem with this approach is the complexity of the routing table, which may have thousands of entries in the case of a large number of groups.

Figure 10.25 Source-based Tree Approach

Group-shared Tree: This approach overcomes the problem of the source-based approach, as each router is not required to construct a shortest path tree for each group. Rather, one router is designated as the core (or rendezvous) router, and only it is responsible for keeping N shortest paths in its routing table for N groups, one for each group. Whenever a router receives a packet containing a group address, it encapsulates the packet into a unicast packet and forwards it to the core router. The core router then extracts the multicast packet and checks its routing table to determine the route for the packet. Figure 10.26 shows the core router and its routing table.

Figure 10.26 Group-shared Tree Approach

26. Write a short note on internetworking.

Ans: Internetworking refers to the interconnection of many different networks. There exist many different networks, such as LANs, MANs and WANs, using different protocols and designs. When users existing in these different networks need to communicate with each other, internetworking has to be implemented. Internetworking not only allows these users to communicate with each other, but also to access each other's data. The communication between users is carried out through the transfer of packets. To transfer packets from one network to another, internetworking devices such as routers, transport gateways and application gateways are used at the junction between two networks. These devices help in converting the packets into a format accepted by the network to which the packets are to be transferred.

27. Explain the concept of tunnelling with respect to internetworking.

Ans: Consider two hosts A and B residing in the same type of network (say, a TCP/IP-based Ethernet LAN) but connected via a different type of network (say, a non-IP WAN) lying in between the two similar networks. Now, if A and B wish to communicate with each other, they can do so using a technique known as tunnelling, which implements internetworking.

To send a packet to host B, the host A creates an IP packet containing the IP address of the host B. The IP packet is then put in an Ethernet frame addressed to the multiprotocol router residing in A's network and eventually put on the Ethernet. When the multiprotocol router on A's network receives the frame, it removes the IP packet and inserts it in the payload field of the WAN network layer packet. The packet is now addressed to the WAN address of the multiprotocol router residing in B's network. After the packet reaches the multiprotocol router on B's network, that router removes the IP packet from the WAN packet and sends it to the host B in the local network inside an Ethernet frame. The host B then extracts the IP packet from the Ethernet frame and uses it. Figure 10.27 shows the communication between hosts A and B.

Figure 10.27 Tunnelling

Here, the WAN situated in between acts as a big tunnel connecting the two multiprotocol routers. The IP packet and the communicating hosts know nothing about the WAN architecture; only the multiprotocol routers need to understand both the IP and WAN packet formats. In effect, the path from one multiprotocol router to the other acts like a tunnel that carries IP packets without any obstruction.
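The encapsulation and decapsulation performed by the two multiprotocol routers can be sketched as follows; the dictionary field names are illustrative, not a real packet format.

```python
# Sketch of tunnelling: the IP packet travels unchanged inside a WAN packet
# between the two multiprotocol routers.
def enter_tunnel(ip_packet, wan_dest):
    """Router on A's network: wrap the IP packet in a WAN packet."""
    return {"wan_dest": wan_dest, "payload": ip_packet}

def exit_tunnel(wan_packet):
    """Router on B's network: recover the original IP packet."""
    return wan_packet["payload"]

ip_packet = {"src": "host A", "dst": "host B", "data": "hello"}
wan_packet = enter_tunnel(ip_packet, "router on B's network")
assert exit_tunnel(wan_packet) == ip_packet  # packet survives the tunnel intact
```

The hosts never see the WAN packet; only the two routers perform the wrapping and unwrapping.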

28. What is congestion? Why do we need congestion control?

Ans: Congestion is a situation that occurs in a network when the number of packets sent to the network is far more than its capacity. It results in increased traffic in the network layer and as a consequence, the TCP segments start getting lost as they pass through the network layer. Since TCP follows a retransmission policy in the case of lost or delayed segments, the congestion problem is further aggravated due to the retransmission of segments. The situation becomes even worse and the performance of the network degrades rapidly. That is why congestion needs to be controlled.

29. What is congestion control? What are its categories?

Ans: Congestion control refers to the techniques and mechanisms used to prevent or remove congestion from the network. There are various ways to control congestion, and all of them fall into two categories, namely, open-loop and closed-loop congestion control. The open-loop congestion control aims to prevent congestion from occurring by using some policies; either the source or the destination handles the congestion. On the other hand, the closed-loop congestion control attempts to remove congestion after it has occurred. The closed-loop congestion control is further divided into two categories, namely, implicit feedback and explicit feedback closed-loop congestion control. In implicit feedback algorithms, the source is required to detect the congestion by making certain observations, such as monitoring the time needed for an acknowledgement to come back. In contrast, in explicit feedback algorithms, whenever congestion occurs, packets are sent back to the source from the point of congestion to indicate the congestion.

30. What are the policies used for congestion prevention?

Ans: The open-loop congestion control method adopts certain policies to prevent the congestion. Some of these policies are described as follows:

Retransmission Policy: When the sent packets are damaged or lost, they need to be retransmitted; however, retransmission usually results in more congestion in the network. Thus, a reliable retransmission policy needs to be implemented to prevent congestion. Retransmission timers are used to control how fast the sender sends data at a particular time. Both the retransmission policy and the timers are designed in such a way that efficiency is optimized while still preventing congestion.
Acknowledgement Policy: Congestion may be affected by the acknowledgement policy used by the receiver, as acknowledgements are also a part of the load on the network. Thus, if the receiver does not send an acknowledgement for every packet, it helps to reduce congestion. Moreover, the receiver may decide to send acknowledgements only for specific packets, only when it has packets of its own to send, or only when a timer expires. All these measures result in less traffic in the network and thus prevent congestion.
Discarding Policy: A reliable discarding policy can be implemented to prevent congestion. For instance, the routers may start to discard packets when the network is not able to handle more packets, while preserving the integrity of the transmission. If the discarding policy is implemented correctly, it is the best method of preventing congestion; otherwise, it will make the situation worse.
Admission Policy: This policy is used to prevent congestion in virtual-circuit networks. According to this policy, whenever a request for a virtual connection arrives at a router/switch, the router/switch first determines the flow of data and other resources in the network before admitting the incoming data into the network. If there is congestion or even a possibility of congestion in the network, the router/switch denies the connection request.

31. What are the different mechanisms carried out to remove congestion?

Ans: After congestion has occurred in the network, it can be removed by implementing various mechanisms, some of which are described here.

Backpressure

Backpressure is a node-to-node congestion control technique in which each node knows the immediate upstream node from which it receives the flow of data. When congestion occurs, the congested node refuses to receive any more data from its upstream node. This causes the upstream node to become congested as well, and it in turn starts rejecting the data coming from its own upstream node. This procedure continues and eventually, the congestion information reaches the original source of the flow, which may then slow down the rate of data flow. As backpressure begins with the node where the congestion is first detected and propagates back to the source, this technique is a kind of explicit feedback algorithm.

Backpressure technique can be applied only in virtual circuit networks that support node-to-node connection. It was introduced in X.25, the first virtual-circuit network. However, this technique is not applicable for IP-based networks, as in such networks, a node does not know its upstream node.
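The hop-by-hop propagation of backpressure along a virtual-circuit path can be sketched as follows; the node names are illustrative.

```python
# Sketch of backpressure on a virtual-circuit path: congestion detected at
# one node propagates hop by hop back towards the source.
path = ["source", "N1", "N2", "N3", "destination"]

def propagate_backpressure(path, congested_node):
    """Return the nodes that end up throttled, from the congested node back."""
    idx = path.index(congested_node)
    throttled = []
    for node in reversed(path[:idx + 1]):  # walk upstream towards the source
        throttled.append(node)
    return throttled

print(propagate_backpressure(path, "N3"))
# ['N3', 'N2', 'N1', 'source'] -- the source is finally told to slow down
```

This upstream walk is possible only because a virtual circuit fixes the path, which is why the technique does not apply to IP-based networks.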

Choke Packet

Choke packet is a control packet that is created by the congested node to inform the source about the congestion in the network. Unlike backpressure method, the choke packet is sent by the congested node directly to the source of flow rather than to the intermediate nodes. The source receiving the choke packet is required to reduce the rate of flow towards the router from where the choke packet has come.

An example of choke packet is the source quench message used in ICMP (discussed in Chapter 11). This message is sent by the router or destination node to ask the source to slow down the rate of sending traffic. The router sends a source quench message for every datagram it discards due to overloaded buffer.

Implicit Congestion Signalling

In this method, there is no communication between the source and the nodes where congestion has occurred. Rather, the source must infer congestion in the network by observing the network's behaviour. For example, if the source, after sending many packets, still does not receive any acknowledgement, it may assume that there is congestion in the network and thus reduce its rate of sending data. This is called implicit congestion signalling. This method can be used in connection-oriented networks such as frame relay networks; however, it is more effective in connectionless networks such as packet-switched and IP-based networks.

Explicit Congestion Signalling

In the explicit signalling method, a congested node sends congestion information to the source or destination. It differs from the choke packet method: in the choke packet method, a separate packet signalling the congestion is sent to the source, whereas in explicit signalling, the signal is carried within the packets that carry data. Explicit congestion signalling can work in two directions, namely, backward signalling and forward signalling. In backward signalling, a bit is set in a packet moving in the direction opposite to the congestion. This bit informs the source that there is congestion on the link and that the traffic needs to be slowed down to avoid the discarding of packets. In forward signalling, a bit is set in a packet moving in the direction of the congestion. This bit informs the destination that there is congestion on the link and that it should adopt some policy, such as slowing down its acknowledgements, to help remove the congestion.
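The two signalling directions can be illustrated with the FECN/BECN bits that frame relay uses for exactly this purpose. The sketch below is a simplified model (the packet is just a dictionary, not a real frame): a congested node sets FECN on packets travelling towards the destination and BECN on packets travelling back towards the source.

```python
# Minimal sketch of explicit congestion signalling, modelled on frame relay's
# FECN/BECN bits.  A congested node marks packets in-flight rather than
# generating a separate control packet.

def mark_congestion(packet, direction):
    """Set the appropriate signalling bit on a packet passing a congested node."""
    if direction == "forward":      # moving in the direction of the congestion
        packet["fecn"] = 1          # tells the destination to slow (e.g. delay acks)
    elif direction == "backward":   # moving opposite to the congestion
        packet["becn"] = 1          # tells the source to reduce its sending rate
    return packet

data_pkt = {"fecn": 0, "becn": 0, "payload": "data"}
ack_pkt = {"fecn": 0, "becn": 0, "payload": "ack"}

mark_congestion(data_pkt, "forward")    # forward signalling to the destination
mark_congestion(ack_pkt, "backward")    # backward signalling to the source
```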

32. Write a short note on load shedding.

Ans: Load shedding is one of the simplest and most effective closed-loop congestion control techniques. Whenever a router discovers congestion in the network, it starts dropping packets until the congestion disappears. To decide which packets to drop, it uses certain policies. The simplest but least efficient policy is to pick the packets to be dropped at random. Another, more complex but effective, policy requires some cooperation from the sender side. This policy uses priorities among the packets: as some packets are more important than others, different priority classes are used to distinguish them from the less important ones. At the time of congestion in the network, packets from the lowest priority class are discarded first, then those from the next lower class, and so on. There is one more policy, which allows a host to break the agreement made when the network was set up. According to this policy, the host can use resources beyond its specified limit in order to deal with congestion; for example, the host can use more bandwidth than the agreed limit.
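The priority-based policy can be sketched as follows. This is an illustrative model, not a real router's queue discipline: packets carry a numeric priority (higher means more important), and when the buffer exceeds its capacity the lowest-priority packets are shed first.

```python
# A sketch of priority-based load shedding: when the buffer exceeds its
# capacity, packets are discarded starting from the lowest priority class.
# The packet format and priority values are illustrative.

def shed_load(buffer, capacity):
    """Keep the highest-priority packets; drop the rest."""
    # Sort descending by priority so the least important packets sit at the end.
    kept = sorted(buffer, key=lambda p: p["priority"], reverse=True)
    return kept[:capacity]

buffer = [
    {"id": 1, "priority": 2},
    {"id": 2, "priority": 0},   # lowest class: discarded first
    {"id": 3, "priority": 1},
    {"id": 4, "priority": 2},
]
survivors = shed_load(buffer, capacity=2)
# Only the two priority-2 packets survive the shedding.
```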

33. List the differences between congestion control and flow control.

Ans: Congestion control and flow control differ in the following ways:

In congestion control, the traffic of the whole network (involving many nodes) is monitored for congestion, while in flow control, only the point-to-point traffic between two nodes is regulated for smooth functioning.
Congestion control involves many hosts, routers and connections while the data is being transferred, whereas in flow control only the sender and the receiver interact with each other.
In congestion control, when the network is unable to handle the load, packets are dropped to remove the congestion. However, in flow control, when the receiver is unable to absorb more data, it informs the sender to slow down its rate of sending data.

34. Write a short note on hop-by-hop choke packets.

Ans: The hop-by-hop choke packet method is an improvement over the choke packet method. In the choke packet method, at the time of congestion, choke packets are sent directly from the congested node to the source. However, by the time a choke packet reaches the source, many data packets will already have been sent out by the source and still have to be dealt with. Thus, the choke packet method is not desirable at high speeds or over long distances. In such cases, the hop-by-hop choke packet method is preferred. In this method, not only the source node but every intermediate node receiving the choke packet reacts to it, thus reducing the traffic between the intermediate nodes as well as from the source.

To understand how the hop-by-hop choke packet method works, consider the subnet shown in Figure 10.28(a). Here, data is being sent from node P to node R via the path PTSR [Figure 10.28(b)]. Now, suppose node R runs out of buffers and thus sends a hop-by-hop choke packet to P via the path RSTP. As the choke packet travels from R to S, it informs node S to slow down its rate of sending data packets to R. However, as S is still receiving data packets, it has to devote more buffers to accommodate them. This puts more demand on S but makes R congestion free [Figure 10.28(c)]. S then forwards the choke packet to T, informing T to reduce its rate of sending data packets to S. This puts more demand on T but makes S congestion free [Figure 10.28(d)]. Finally, the choke packet reaches P and the traffic flow from the source is reduced [Figure 10.28(e)]. This mechanism makes all the nodes congestion free. Thus, the congestion is controlled, and without losing any packets.

Figure 10.28 Hop-by-Hop Choke Packet Method
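The propagation along the reverse path RSTP can be sketched as below. This is an illustrative model (node names follow Figure 10.28; the rates and the halving factor are assumptions): each node on the reverse path cuts its sending rate towards the next hop as soon as the choke packet passes through it, instead of only the source reacting.

```python
# Sketch of hop-by-hop choke packet propagation along the reverse path R->S->T->P:
# each node receiving the choke packet reduces its sending rate toward the
# congested side before forwarding the choke packet upstream.

def propagate_choke(reverse_path, rates, factor=0.5):
    """Reduce each upstream node's rate as the choke packet travels back."""
    for node in reverse_path[1:]:        # skip the congested node itself (R)
        rates[node] *= factor            # each hop slows down immediately
    return rates

# Sending rates (packets/s) of each node toward its next hop on path P->T->S->R.
rates = {"P": 100.0, "T": 100.0, "S": 100.0}
propagate_choke(["R", "S", "T", "P"], rates)
# Every node on the path, not just the source P, has slowed down.
```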

35. Find the class of each of the following address.

(a) 00000001 00001011 00001011 11101111

(b) 11000001 10000011 00011011 11111111

(c) 14.23.120.8

(d) 252.5.15.111

Ans: (a) As the first bit of the address is 0, it is a class A address.

(b) As the first two bits of the address are 1 and the third bit is 0 (that is, the address begins with 110), it is a class C address.

(c) As the first byte of the address is 14 (between 0 and 127), it is a class A address.

(d) As the first byte of the address is 252 (between 240 and 255), it is a class E address.
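The first-octet ranges used in these answers can be collected into a small helper. This is a straightforward sketch of classful address classification; the function name is ours.

```python
# Classify a dotted-decimal IPv4 address by its first octet,
# using the classful ranges quoted in the answers above.

def ip_class(address):
    first = int(address.split(".")[0])
    if first <= 127:
        return "A"        # leading bit 0     (0-127)
    elif first <= 191:
        return "B"        # leading bits 10   (128-191)
    elif first <= 223:
        return "C"        # leading bits 110  (192-223)
    elif first <= 239:
        return "D"        # leading bits 1110 (224-239, multicast)
    else:
        return "E"        # leading bits 1111 (240-255, reserved)

print(ip_class("14.23.120.8"))    # class A
print(ip_class("252.5.15.111"))   # class E
```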

36. Why is the largest octet in an Internet address 255?

Ans: Each octet in an IP address consists of 8 bits, each of which can be either 0 or 1. The largest value that can be expressed in 8 bits is 11111111, that is, 255. That is why the largest octet in an IP address is 255.
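The arithmetic can be checked directly:

```python
# An octet of all ones, interpreted as an unsigned 8-bit value.
largest = int("11111111", 2)
assert largest == 2**8 - 1   # eight bits can count from 0 up to 255
```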

37. Given the following IP addresses,

(a) 32.46.7.3

(b) 200.132.110.35

(c) 140.75.8.92

determine (i) class, (ii) network address, (iii) mask and (iv) broadcast address for each.

Ans: (a) Given IP address = 32.46.7.3

(i) As the first byte of this address is 32 (between 0 and 127), it is class A address.
(ii) Since it is a class A address, its first byte (that is, 32) denotes the net ID. To obtain the network address, the host ID bits are made zero. Thus, the network address is 32.0.0.0.
(iii) Being a class A address, the mask for this address is 255.0.0.0.
(iv) The broadcast address is obtained by keeping the net ID of the address same and replacing each byte of host ID with 255. Thus, the broadcast address is 32.255.255.255.
(b) Given IP address = 200.132.110.35
(i) As the first byte of this address is 200 (between 192 and 223), it is a class C address.
(ii) Since it is a class C address, its first three bytes (that is, 200.132.110) denote the net ID. To obtain the network address, the host ID bits are made zero. Thus, the network address is 200.132.110.0.
(iii) Being a class C address, the mask for this address is 255.255.255.0.
(iv) The broadcast address is obtained by keeping the net ID of the address same and replacing each byte of host ID with 255. Thus, the broadcast address is 200.132.110.255.
(c) Given IP address = 140.75.8.92
(i) As the first byte of this address is 140 (between 128 and 191), it is a class B address.
(ii) Since it is a class B address, its first two bytes (that is, 140.75) denote the net ID. To obtain the network address, the host ID bits are made zero. Thus, the network address is 140.75.0.0.
(iii) Being a class B address, the mask for this address is 255.255.0.0.
(iv) The broadcast address is obtained by keeping the net ID of the address same and replacing each byte of host ID with 255. Thus, the broadcast address is 140.75.255.255.
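These network and broadcast calculations can be verified with Python's standard `ipaddress` module by supplying each address together with its classful mask (`strict=False` permits an address with host bits set):

```python
# Check the worked answers with the standard library.
import ipaddress

net_a = ipaddress.ip_network("32.46.7.3/255.0.0.0", strict=False)
net_b = ipaddress.ip_network("200.132.110.35/255.255.255.0", strict=False)
net_c = ipaddress.ip_network("140.75.8.92/255.255.0.0", strict=False)

print(net_a.network_address, net_a.broadcast_address)   # 32.0.0.0 32.255.255.255
print(net_b.network_address, net_b.broadcast_address)   # 200.132.110.0 200.132.110.255
print(net_c.network_address, net_c.broadcast_address)   # 140.75.0.0 140.75.255.255
```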

38. The IP network 192.168.130.0 is using the subnet mask 255.255.255.224. What subnets are the following hosts on?

(a) 192.168.130.10

(b) 192.168.130.67

(c) 192.168.130.93

(d) 192.168.130.199

(e) 192.168.130.222

(f) 192.168.130.250

Ans: To determine which host is lying on which subnet, we need to determine the total number of subnets in the network, number of hosts in each subnet and the range of IP addresses assigned to each subnet. For finding the number of subnets and the number of hosts in each subnet, we need to know the number of masked and unmasked bits in the subnet mask, which can be found as follows:

Given, subnet mask = 255.255.255.224

The binary equivalent of the last byte of the subnet mask (that is, 224) is 11100000. Here, three bits are 1s while five bits are 0s. Thus, the number of masked bits (m) is 3 and the number of unmasked bits (n) is 5. Now, the number of subnets and the number of hosts in each subnet can be determined using the following formulas:

Number of subnets = 2^m = 2^3 = 8

Number of hosts in each subnet = 2^n - 2 = 2^5 - 2 = 30

Thus, there are eight subnets in the network with each subnet comprising 30 hosts. As the given IP network is 192.168.130.0, the range of IP addresses assigned to each subnet can be given as follows:

Range of first subnet = 192.168.130.0 - 192.168.130.31

Range of second subnet = 192.168.130.32 - 192.168.130.63

Range of third subnet = 192.168.130.64 - 192.168.130.95

Range of fourth subnet = 192.168.130.96 - 192.168.130.127

Range of fifth subnet = 192.168.130.128 - 192.168.130.159

Range of sixth subnet = 192.168.130.160 - 192.168.130.191

Range of seventh subnet = 192.168.130.192 - 192.168.130.223

Range of eighth subnet = 192.168.130.224 - 192.168.130.255

Notice that the first and last address of each range cannot be assigned to hosts. For example, in the first subnet, the addresses 192.168.130.0 and 192.168.130.31 cannot be used for the hosts.

Now, we can find which host lies on which network as given below:

(a) The host 192.168.130.10 lies on the first subnet.
(b) The host 192.168.130.67 lies on the third subnet.
(c) The host 192.168.130.93 lies on the third subnet.
(d) The host 192.168.130.199 lies on the seventh subnet.
(e) The host 192.168.130.222 lies on the seventh subnet.
(f) The host 192.168.130.250 lies on the eighth subnet.
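Since the mask 255.255.255.224 gives subnets of 32 addresses each, the subnet a host falls into can be computed by integer-dividing its last octet by the block size. A small sketch (the helper name is ours):

```python
# With mask 255.255.255.224, each subnet spans 2^5 = 32 addresses, so the
# subnet index follows from the last octet alone.

def subnet_number(address, block_size=32):
    """Return the 1-based subnet index for a host on 192.168.130.0 / 255.255.255.224."""
    last_octet = int(address.split(".")[-1])
    return last_octet // block_size + 1

hosts = ["192.168.130.10", "192.168.130.67", "192.168.130.93",
         "192.168.130.199", "192.168.130.222", "192.168.130.250"]
print([subnet_number(h) for h in hosts])   # [1, 3, 3, 7, 7, 8]
```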

39. What will be the subnet address if the destination address is 200.45.34.56 and subnet mask is 255.255.240.0?

Ans: Given, destination address = 200.45.34.56
Subnet mask = 255.255.240.0

Now, the subnet address can be determined by performing AND of destination address and the subnet mask.

The destination address and subnet mask can be written in binary notation as shown below:

11001000 00101101 00100010 00111000

11111111 11111111 11110000 00000000

On ANDing the two values, we get the following subnet address:

11001000 00101101 00100000 00000000

This subnet address can be written in dotted decimal notation as:

200.45.32.0.
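The same AND operation can be reproduced octet by octet in Python:

```python
# Bitwise AND of destination address and subnet mask, one octet at a time.
address = [200, 45, 34, 56]
mask = [255, 255, 240, 0]

subnet = [a & m for a, m in zip(address, mask)]
print(".".join(map(str, subnet)))   # 200.45.32.0
```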

40. Given an IP address 156.36.2.58 with default mask 255.255.0.0, subnetted with prefix /22. Determine its subnet mask in the event of subnetting.

Ans: Given IP address = 156.36.2.58
Default mask = 255.255.0.0/22

Here, /22 indicates that the first 22 bits of the subnet mask are 1s while the rest are 0s. Thus, the subnet mask is:

11111111 11111111 11111100 00000000

This subnet mask can be written in dotted decimal notation as 255.255.252.0.
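The conversion from a prefix length such as /22 to a dotted-decimal mask can be sketched as follows (the helper name is ours): set the first 22 bits of a 32-bit value to 1 and split the result into four octets.

```python
# Convert a CIDR prefix length to a dotted-decimal subnet mask.

def prefix_to_mask(prefix):
    # A 32-bit value with the first `prefix` bits set to 1.
    value = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
    # Split into four octets, most significant first.
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(prefix_to_mask(22))   # 255.255.252.0
print(prefix_to_mask(16))   # 255.255.0.0
```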

Multiple choice questions

1. How many bits are allocated for the host ID in an IP address of class A?
(a) 8
(b) 24
(c) 48
(d) 16
2. Which class of IP address is used for multicasting?
(a) class E
(b) class B
(c) class D
(d) class M
3. Which class address is used to implement supernetting?
(a) class D
(b) class A
(c) class C
(d) class B
4. What is the maximum limit for hop count in RIP protocol?
(a) 15
(b) 12
(c) 25
(d) 10
5. Which of the following metrics is used in RIP?
(a) delay
(b) hop count
(c) throughput
(d) bandwidth
6. Which of the following is an interdomain routing protocol?
(a) RIP
(b) OSPF
(c) BGP
(d) none of these
7. Which protocol uses backbone router?
(a) OSPF
(b) RIP
(c) BGP
(d) all of these
8. _________is a node-to-node congestion control technique where the congested node stops receiving data from the upstream nodes.
(a) backpressure
(b) choke packet
(c) implicit congestion
(d) explicit congestion
9. Which of the following belongs to explicit feedback closed-loop congestion control?
(a) choke packet
(b) hop-by-hop choke packet
(c) backpressure
(d) all of these
10. Which of the following is not a prevention policy used for congestion?
(a) retransmission policy
(b) discarding policy
(c) acknowledgement policy
(d) error control policy

Answers

1. (b)

2. (c)

3. (c)

4. (a)

5. (b)

6. (c)

7. (a)

8. (a)

9. (d)

10. (d)