11. Quality of Service and Protocols – Express Learning: Data Communications and Computer Networks


Quality of Service and Protocols

1. What is meant by quality of service? What are its characteristics?

Ans: A stream of packets being transmitted from the source node to destination node is referred to as the flow. Each flow is characterized by a certain set of performance parameters such as reliability, jitter, delay and bandwidth. The ability of a network to deliver the flow of packets to the destination node with the defined set of performance parameters is defined as its quality of service (QoS). The characteristics that a flow seeks to attain are as follows:

Reliability: This characteristic ensures that no packet is damaged or lost during transmission and that all packets are received correctly at the destination. If the flow does not attain reliability, then retransmission needs to be done. Reliability can be achieved by applying a checksum to each packet at the sender's end and then verifying it at the receiver's end. Some applications such as e-mail and remote login require high reliability, while others such as telephony and video-on-demand are less sensitive to reliability.
Delay: This characteristic refers to the time taken by the transmitted packets to reach the destination node from the source node. Like reliability, delay requirements also depend on the type of the application used. In the case of file transfer applications such as e-mail, a delay of a few seconds causes no harm. However, if delay occurs in real-time applications such as telephony, then the users will not be able to follow each other's conversation. Thus, for such applications, the delay should be the least.
Jitter: This characteristic defines the variation in delay of packets that correspond to the same flow. If the delay between successive packets is constant, then there is no harm but if the packets are received with irregular time intervals between them, the result may be unacceptable. Video and audio applications are highly sensitive to jitter while applications such as e-mail and file transfer have less stringent requirements with respect to jitter.
Bandwidth: This characteristic defines the number of bits that can be delivered per second. Different applications have different bandwidth requirements. For example, in e-mail applications, bandwidth consumption is low as the number of bits transmitted is small, whereas in video conferencing much more bandwidth is utilized, as millions of bits need to be transmitted.

2. What are the four general techniques to improve the QoS?

Ans: The QoS can be improved by using the four techniques, namely, scheduling, traffic shaping, resource reservation and admission control.

Scheduling: This technique schedules the arrival of packets from different flows so that packets can be processed in an appropriate manner by the router or switch. Some scheduling techniques to improve the QoS are as follows:
  • FIFO Queuing: In this technique, packets are scheduled based on the First-In-First-Out (FIFO) method. A queue is maintained by the router or switch in which packets wait for their turn to be processed. Each incoming packet is appended to the queue if there is space in the queue. The packet that enters the queue first is processed first. If the queue is full, then the new packets are discarded until some space is freed up in the queue.
  • Priority Queuing: In this scheduling, each packet is assigned a priority class and a separate queue is maintained for each priority class. The packets in the queue of the highest priority class are processed first while packets in the queue of the lowest priority class are processed last. Thus, this technique processes the higher-priority packets with less delay. However, the demerit of this technique is that low-priority queue packets will not be processed if there is a continuous flow of packets in high-priority queues. This situation is referred to as starvation of low-priority packets.
  • Weighted Fair Queuing: In this technique, weights are assigned to different priority classes such that higher-priority queues get higher weight and the lower-priority queues get the lower weight. The weight assigned to a queue indicates the number of packets that will be processed from the queue. For example, consider three queues Q1, Q2 and Q3 with priorities 1, 2 and 3 and weights assigned are 4, 1 and 2, respectively. Initially, four packets will be processed from the Q1; then, one packet will be processed from Q2 and finally, two packets from Q3.
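The weighted fair queuing example above can be sketched in Python. This is a minimal illustration, not a production scheduler; the queue contents and packet names are hypothetical:

```python
from collections import deque

def weighted_fair_dequeue(queues, weights):
    """Serve packets round-robin, taking up to `weight` packets from each
    queue per cycle. `queues` is a list of deques, `weights` a parallel list."""
    served = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    served.append(q.popleft())
    return served

# The example from the text: Q1, Q2 and Q3 with weights 4, 1 and 2.
q1 = deque(["A0", "A1", "A2", "A3"])
q2 = deque(["B0"])
q3 = deque(["C0", "C1"])
order = weighted_fair_dequeue([q1, q2, q3], [4, 1, 2])
print(order)  # ['A0', 'A1', 'A2', 'A3', 'B0', 'C0', 'C1']
```

As in the text, four packets are processed from Q1, then one from Q2 and finally two from Q3.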
Traffic Shaping: It is a technique that improves the QoS by managing the amount of traffic to be sent to the network. This technique tries to make the traffic flow at a uniform data rate, resulting in less congestion and improved QoS. The flow of traffic is regulated by monitoring it during the connection period, which is referred to as traffic policing. If a packet in a stream does not obey the policy, then it is penalized by either discarding it or assigning it a low priority. There are two traffic-shaping techniques, namely, leaky bucket and token bucket (discussed later), used to improve the QoS.
Resource Reservation: This technique reserves the resources required for a flow beforehand when a specific route to reach the destination node has been decided. The three kinds of resources that are normally reserved for the flow include bandwidth, buffer space and central processor unit (CPU) cycles. The reservation of bandwidth enables the flow to reach the destination node more effectively. For example, if a flow requires 2 Mbps and the capacity of outgoing line is 5 Mbps then reserving 2 Mbps for one flow will effectively work but trying to direct three flows through the line will lead to more congestion. The buffer space must be reserved at the destination side so that a specific flow does not compete with other flows for being placed in the queue. This is because if the buffer space is not available, then the packets have to be discarded which may degrade the QoS. The CPU cycles also need to be reserved, as the router takes some CPU time to process each packet and also, it can process only a few numbers of packets per second. Thus, for effective and timely processing of each packet, the CPU must not be overloaded.
Admission Control: This technique is used by a router or switch to decide whether to accept or reject an incoming flow. The decision to accept or reject the flow is made based on certain parameters such as bandwidth, buffer space, CPU cycles and packet size. The set of these parameters is referred to as the flow specification. Whenever a flow comes to a router, the router checks the flow specification to determine whether it can handle the incoming flow. To determine this, the router checks its current buffer size, bandwidth and CPU usage. It also checks its prior commitments made to other flows. The flow is accepted only after the router becomes sure that it can handle the flow.

 3. Explain the leaky bucket algorithm.

Ans: The leaky bucket algorithm is a traffic-shaping technique that is used to control congestion in a network. It was proposed by Turner in 1986 and uses the concept of a leaky bucket, that is, a bucket with a hole at the bottom. Water is poured into the bucket and leaks out continuously. However, the rate of leakage is always constant, irrespective of the rate at which water is poured into the bucket. This process continues until the bucket is empty. If the bucket overflows, the additional water spills over the sides of the bucket, but the leakage rate still remains constant. Turner applied the same idea to packets transmitted over a network and, therefore, the algorithm was named the leaky bucket algorithm. This algorithm smooths out bursty traffic by storing bursty chunks of packets in the leaky bucket so that they can be sent out at a constant rate.

To understand the leaky bucket algorithm, let us assume that the network has allowed the hosts to transmit data at the rate of 5 Mbps and each host is connected to the network through an interface containing a leaky bucket. The leaky bucket enables the host to shape the incoming traffic in accordance with the data rate committed by the network. Suppose the source host transmits a burst of data at the rate of 10 Mbps for the first 3 s, that is, a total of 30 Mbits of data. After resting for 3 s, the source host again transmits data at the rate of 5 Mbps for 4 s, that is, a total of 20 Mbits. Thus, the total amount of data transmitted by the source host is 50 Mbits in 10 s. The leaky bucket sends the whole data at the constant rate of 5 Mbps (that is, within the bandwidth commitment of the network for that host) regardless of the rate at which data arrives from the source host. If the concept of leaky bucket were not used, then more bandwidth would be consumed by the starting burst of data, leading to more congestion.

The leaky bucket algorithm is implemented by maintaining a FIFO queue of finite capacity to hold the arriving packets. Whenever a packet arrives, it is appended at the end of the queue if there is some space in the queue; otherwise, the packet is discarded (Figure 11.1). If each packet is of fixed size, then a fixed number of packets are removed from the queue per clock tick. However, if packets are of variable sizes, a fixed number of bytes (say, p) are removed from the queue per clock tick. The algorithm for variable-size packets is as follows:

  1. A counter is initialized to p at the tick of the clock.
  2. If packet size is smaller than or equal to p, the packet is sent and value of counter is decremented by the packet size.
  3. Repeat step 2 until the counter value becomes smaller than the size of the next packet.
  4. Reset the counter and go back to step 1.
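The steps above can be sketched in Python for variable-size packets. This is a minimal illustration; the packet sizes and the byte budget p used below are hypothetical values:

```python
from collections import deque

def leaky_bucket_tick(queue, p):
    """One clock tick of the variable-size leaky bucket: drain up to p
    bytes' worth of whole packets from the front of the FIFO queue."""
    counter = p                            # step 1: initialize counter to p
    sent = []
    while queue and queue[0] <= counter:   # stop when counter < next packet size
        pkt = queue.popleft()
        counter -= pkt                     # step 2: decrement by the packet size
        sent.append(pkt)
    return sent                            # step 4: counter resets on the next tick

# Packet sizes (in bytes) waiting in the queue; drain 1,000 bytes per tick.
q = deque([400, 300, 500, 200])
print(leaky_bucket_tick(q, 1000))  # [400, 300] -- the 500-byte packet exceeds the remaining 300
print(leaky_bucket_tick(q, 1000))  # [500, 200]
```

Note that a packet larger than the remaining counter waits for the next tick, which keeps the output at roughly p bytes per tick.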

Figure 11.1 Leaky Bucket Implementation

 4. Explain token bucket algorithm.

Ans: The token bucket algorithm is a traffic-shaping technique used to control congestion in the network. Though it is a variation of the leaky bucket algorithm, it is more flexible as it allows the output to go at a higher rate when a large burst of traffic arrives, rather than persisting with a constant output rate. Moreover, it takes into account the time for which a host remains idle so that idle hosts can take benefit of it in the future. For this, the system generates a number of tokens (say, n) per clock tick and these tokens are fed into the bucket. Tokens are added to the bucket as long as there is space in the bucket. For transmitting each packet, one token is removed from the bucket and destroyed. Now, if some host remains idle for 50 clock ticks and the system adds 20 tokens per clock tick, then there will be 1,000 (50 × 20) tokens in the bucket of the idle host. When the host becomes active, it can either transmit 1,000 packets (one per token) in a single clock tick, or transmit 100 packets per clock tick for 10 clock ticks, or distribute them in any other way, thereby sending bursty traffic.

The token bucket algorithm is implemented using a counter to count the tokens. The counter is incremented by one each time a token is added to the bucket and decremented by one each time a packet is transmitted. If the counter value becomes zero, the host cannot send any further packets.
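This counter-based behaviour can be illustrated with a small Python sketch. The class name and the capacity value are hypothetical; the token rate and idle period follow the example above:

```python
class TokenBucket:
    """Counter-based token bucket: tokens accumulate each tick (capped at
    the bucket capacity); sending one packet consumes one token."""

    def __init__(self, capacity, tokens_per_tick):
        self.capacity = capacity
        self.rate = tokens_per_tick
        self.counter = 0

    def tick(self):
        # Tokens beyond the capacity overflow and are thrown away.
        self.counter = min(self.capacity, self.counter + self.rate)

    def send(self):
        if self.counter == 0:
            return False          # no tokens left: the host must wait
        self.counter -= 1
        return True

# A host idle for 50 ticks at 20 tokens/tick accumulates 1,000 tokens,
# which it may later spend in one large burst.
tb = TokenBucket(capacity=1000, tokens_per_tick=20)
for _ in range(50):
    tb.tick()
print(tb.counter)                           # 1000
print(sum(tb.send() for _ in range(1200)))  # 1000 packets go out, then the bucket is dry
```

The `min` in `tick` is what distinguishes this from an unbounded counter: tokens overflow once the bucket is full, so the burst size is bounded by the capacity.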

The performance of token bucket algorithm is measured by the burst time (S) which is given as

S = C / (M - P)

where C is the token bucket capacity in bytes, M is the maximum output rate in bytes/s and P is the token arrival rate in bytes/s.
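As a worked example, the burst time can be computed directly from this formula; the capacity and rate values used below are hypothetical:

```python
def burst_time(C, M, P):
    """Maximum burst length S = C / (M - P), assuming M > P.
    C: bucket capacity (bytes), M: maximum output rate (bytes/s),
    P: token arrival rate (bytes/s)."""
    if M <= P:
        raise ValueError("maximum output rate must exceed token arrival rate")
    return C / (M - P)

# e.g. a 1 MB bucket, 25 MB/s maximum output rate, 5 MB/s token arrival rate
S = burst_time(1e6, 25e6, 5e6)
print(S)  # 0.05 -- the host can burst at full output speed for 50 ms
```

Intuitively, during the burst the bucket drains at M − P bytes/s (output minus replenishment), so a capacity of C bytes lasts C / (M − P) seconds.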

 5. Distinguish between leaky bucket algorithm and token bucket algorithm.

Ans: Though both leaky bucket and token bucket are traffic-shaping techniques that smooth out the traffic between routers and regulate output from nodes, still there are some differences between the two. These differences are as follows:

Token bucket algorithm stores tokens up to the maximum capacity of the bucket if the node is not sending packets for some period of time, whereas in leaky bucket algorithm no such storing of tokens happens.
Token bucket algorithm can send large bursts of packets at once if needed, but in leaky bucket algorithm the output rate always remains constant.
Token bucket algorithm never discards packets; if the bucket overflows, it throws away tokens, not packets, whereas in leaky bucket algorithm the packets are discarded when the bucket fills up.
Token bucket algorithm does not result in loss of data, as a node with the token bucket stops sending packets once its tokens are exhausted, but in leaky bucket algorithm there is no such provision to prevent data loss.

 6. Write a short note on ARP.

Ans: Address resolution protocol (ARP) is a network layer protocol used to map an Internet protocol (IP) address (logical address) to its corresponding media access control (MAC) address (physical address). The mapping from logical to physical address can be of two types, namely, static mapping and dynamic mapping. In static mapping, each node on the network maintains a table that contains entries of IP addresses of all other nodes along with their corresponding MAC addresses. If any node knows the IP address of a particular node but does not know its MAC address then it can find the corresponding entry from the table. The disadvantage of static mapping is that as the physical address of a node may change, the table must be updated at regular intervals of time and thus, causing more overhead. On the other hand, in dynamic mapping, a protocol can be used by the node to find the other address if one is known. The ARP is based on dynamic mapping.

Whenever a host wishes to send IP packets to another host or router, it knows only the IP address of the receiver and needs to know the MAC address, as the packet has to be passed through the physical network. For this, the host or router broadcasts an ARP request packet over the network. This packet consists of the IP address and MAC address of the source node and the IP address of the receiver node. As the packet travels through the network, each node on the network receives and processes the ARP request packet. If a node does not find its IP address in the request, it simply discards the packet. However, when the intended recipient recognizes its IP address in the ARP request packet, it sends back an ARP response packet. This packet contains the IP and MAC addresses of the receiver node and is delivered only to the source node; that is, the ARP response packet is unicast instead of broadcast.

The performance of ARP decreases if every time the source node or router has to broadcast an ARP request packet to know the MAC address of the same destination node. Thus, to improve the efficiency, ARP response packets are stored in the cache memory of the source system. Before sending any ARP request packet, the system first checks its cache memory and if the system finds the desired mapping in it then the packet is unicasted to the intended receiver instead of broadcasting it over the network.

The format of ARP packet is shown in Figure 11.2.

The ARP packet comprises various fields, which are described as follows:

Hardware Type: It is a 16-bit long field that defines the type of network on which ARP is running. For example, if ARP is running on Ethernet, then the value of this field will be one. ARP can be used on any physical network.
Protocol Type: It is a 16-bit long field that defines the protocol used by ARP. For example, if ARP is using IPv4 protocol then the value of this field will be (0800)16. ARP can be used with any protocol.
Hardware Length: It is an 8-bit long field that defines the length of MAC address in bytes.

Figure 11.2 ARP Packet Format

Protocol Length: It is an 8-bit long field that defines the length of IP address in bytes.
Operation: It is a 16-bit long field that defines the type of operation being carried out. For an ARP request packet, the value of this field will be one and for an ARP response packet, the value will be two.
Sender Hardware Address: It is a variable-length field that defines the MAC address of the sender node.
Sender Protocol Address: It is a variable-length field that defines the IP address of the sender node.
Target Hardware Address: It is a variable-length field that defines the MAC address of the destination node. In case of an ARP request packet, the value of this field is 0s as the MAC address of the receiver node is not known to the sender node.
Target Protocol Address: It is a variable-length field that defines the IP address of the destination node.
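Assuming Ethernet and IPv4, the field layout described above can be packed with Python's struct module. This is a sketch only; the MAC and IP addresses used are hypothetical:

```python
import struct

def build_arp_request(sender_mac, sender_ip, target_ip):
    """Pack an ARP request for Ethernet/IPv4 following the field layout
    above. MACs are 6-byte values, IPv4 addresses 4-byte values."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                 # hardware type: 1 = Ethernet
        0x0800,            # protocol type: 0x0800 = IPv4
        6,                 # hardware length: a MAC address is 6 bytes
        4,                 # protocol length: an IPv4 address is 4 bytes
        1,                 # operation: 1 = request (2 = response)
        sender_mac,        # sender hardware address
        sender_ip,         # sender protocol address
        b"\x00" * 6,       # target hardware address: all zeros in a request
        target_ip,         # target protocol address
    )

pkt = build_arp_request(b"\xaa\xbb\xcc\xdd\xee\xff",
                        bytes([192, 168, 1, 10]),
                        bytes([192, 168, 1, 1]))
print(len(pkt))  # 28 bytes: 8-byte fixed part plus two MAC/IP address pairs
```

Note how the target hardware address is zero-filled, exactly as the text describes for a request packet.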

 7. Write a short note on the following.

 (a) RARP

 (b) BOOTP

 (c) DHCP


 (a) RARP: Reverse address resolution protocol, as the name implies, performs the opposite of ARP. That is, it helps a machine that knows only its MAC address (physical address) to find its IP address (logical address). This protocol is used in situations when a diskless machine is booted from read-only memory (ROM). As the ROM is installed by the manufacturer, it does not include the IP address in its booting information, because IP addresses are assigned by the network administrator. However, the MAC address of the machine can be identified by reading its network interface card (NIC). Now, to get the IP address of the machine in a network, a RARP request packet is broadcast to all machines on the local network. The RARP request packet contains the MAC address of the inquiring machine. The RARP server on the network, which knows all the IP addresses, sees this request and responds with a RARP reply packet containing the corresponding IP address to the sender machine.

The problem in using the RARP protocol is that if there is more than one network or subnet, then a RARP server needs to be configured on each network, as RARP requests are not forwarded by the routers and, thus, cannot go beyond the boundaries of a network.

 (b) BOOTP: Bootstrap protocol is an application layer protocol designed to run in a client/server environment. The BOOTP client and BOOTP server can be on the same or different networks. BOOTP uses user datagram protocol (UDP) packets, which are encapsulated in IP packets. A BOOTP request packet from a BOOTP client to a BOOTP server is broadcast to all the nodes on the network. In case the BOOTP client is on one network and the BOOTP server is on another network and the two networks are separated by many other networks, the broadcast BOOTP request packet cannot be forwarded by the routers. To solve this problem, an intermediary node or router, which is operational at the application layer, is used as a relay agent. The relay agent knows the IP address of the BOOTP server and when it receives a BOOTP request packet, it unicasts the packet to the BOOTP server by including the IP address of the server and of itself. On receiving the packet, the BOOTP server sends a BOOTP reply packet to the relay agent, which further sends it to the BOOTP client.

A problem associated with BOOTP is that it is a static configuration protocol. The mapping table containing MAC and IP addresses is configured manually by the network administrator. Thus, a new node cannot use BOOTP until its IP and MAC address have been entered manually by the network administrator in the table.

 (c) DHCP: Dynamic host configuration protocol supports both static and dynamic address allocation, which can be done manually or automatically. Thus, it maintains two databases, one for each type of allocation. In static address allocation, DHCP acts like BOOTP, which means that a BOOTP client can request a permanent IP address from a DHCP server. In this type of allocation, the DHCP server statically maps a MAC address to an IP address using its database. On the other hand, in dynamic address allocation, a dynamic database containing the unused IP addresses is maintained by DHCP. Whenever a request comes for an IP address, the DHCP server assigns a temporary IP address to the node from the dynamic database using a technique called leasing. In this technique, the node requests the DHCP server to renew the lease just before it expires. If the request is denied by the DHCP server, then the node cannot continue using the IP address that was assigned to it earlier.

Like BOOTP, DHCP also uses a relay agent on each network to forward the DHCP requests. When a DHCP node requests an IP address, the relay agent on the network unicasts the request to the DHCP server. On receiving the request, the server checks its static database to find an entry for the requesting physical address. If it finds the address, it returns the permanent IP address to the node; otherwise, it selects some temporary address from the available pool, returns this address to the host and also adds this entry to the dynamic database.

 8. Draw and discuss the IP datagram frame format. Discuss in detail the various fields.

Ans: The Internet protocol version 4 (IPv4) is the most widely used internetworking protocol. In IPv4, the packets are termed datagrams (variable-length packets). Further, IPv4 is a connectionless and unreliable datagram protocol; connectionless means each datagram is handled independently and can follow a different path to reach the destination; unreliable means it does not guarantee successful delivery of the message. In addition, IPv4 does not provide flow control and error control, except for error detection in the header of the datagram. To achieve reliability, IP is paired with the transmission control protocol (TCP), which is a reliable protocol. Thus, it is considered as a part of the TCP/IP suite.

An IPv4 datagram consists of a header field followed by a data field. The header field is 20-60 bytes long and contains routing and delivery information. The header comprises various subfields (Figure 11.3), which are described as follows:

Version (VER): It is a 4-bit long field, which indicates the version being used. The current version of IP is 4.
Header Length (HLEN): It is a 4-bit long field, which defines the IPv4 header length in 32-bit words. The minimum length of IPv4 header is five 32-bit words.
Service: It is an 8-bit long field, which provides an indication of the desired QoS such as precedence, delay, throughput and reliability. This field is also called type of service (ToS) field.
Total Length: It is a 16-bit long field, which defines the total length (in bytes) of the datagram including header and data. The maximum permitted length is 65,535 (2¹⁶ – 1) bytes with 20-60 bytes for header and the rest for data.
Identification: It is a 16-bit long field, which uniquely identifies the datagram. The datagrams can be fragmented at the sender's end for transmission and then reassembled at the receiver's end. When a datagram is fragmented into multiple fragments, all fragments belonging to the same original datagram are labelled with the same identification number, and the fragments having the same identification number are reassembled at the receiving side.

Figure 11.3 IPv4 Datagram Header Format

Flags: It is a 3-bit long field in which the first bit is reserved and always zero, the second bit is do not fragment (DF) and the third bit is more fragment (MF). If the DF bit is set to 1, the datagram must not be fragmented, while a value of zero indicates that the datagram can be fragmented if required. If the MF bit is set to zero, this fragment is the last fragment; however, if the MF bit is 1, there are more fragments after this fragment.
Fragmentation Offset: It is a 13-bit long field that indicates the relative position of this fragment's data with respect to the beginning of the original datagram. It is measured in units of eight bytes (64 bits). The first fragment has an offset of zero.
Time-to-Live (TTL): It is an 8-bit long field, which indicates the total time (in seconds) or number of hops (routers) that an IPv4 datagram can survive before being discarded. As a router receives a datagram, it decrements the TTL value of the datagram by one and then forwards the datagram to the next hop, and so on. When the TTL value becomes zero, the datagram is discarded. Generally, when a datagram is sent by the source node, its TTL value is set to two times the maximum number of routes between the sending and the receiving hosts. This field is needed to limit the lifetime of a datagram because, otherwise, it might travel between two or more routers for a long time without ever being delivered to the destination host.
Protocol: It is an 8-bit long field, which specifies the higher-level protocol that uses the services of IPv4, such as TCP, UDP or ICMP.
Header Checksum: It is a 16-bit long field that is used to verify the validity of the header and is recomputed each time when the TTL value is decremented.
Source IP Address: It is a 32-bit long field, which holds the IPv4 address of the sending host. This field remains unchanged during the lifetime of the datagram.
Destination IP Address: It is a 32-bit long field, which holds the IPv4 address of the receiving host. This field remains unchanged during the lifetime of the datagram.
Options: These are included in the variable part of the header and can be of maximum 40 bytes. These are optional fields, which are used for debugging and network testing. However, if they are present in the header, then all implementations must be able to handle them. Options can be of single or multiple bytes. Examples of single-byte options are no-operation and end of option; examples of multiple-byte options are record route and strict source route.
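The fixed 20-byte part of this header can be parsed with Python's struct module. This is a sketch, not a full IP implementation; the sample header values below are hypothetical:

```python
import struct

def parse_ipv4_header(data):
    """Unpack the fixed 20-byte IPv4 header into a dict of fields."""
    ver_hlen, tos, total_len, ident, flags_frag, ttl, proto, cksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": ver_hlen >> 4,
        "hlen_bytes": (ver_hlen & 0x0F) * 4,          # HLEN counts 32-bit words
        "total_length": total_len,
        "identification": ident,
        "DF": (flags_frag >> 14) & 1,                 # do not fragment
        "MF": (flags_frag >> 13) & 1,                 # more fragments
        "fragment_offset": (flags_frag & 0x1FFF) * 8, # stored in units of 8 bytes
        "ttl": ttl,
        "protocol": proto,
    }

# A hand-made header: version 4, HLEN 5, total length 40, DF set,
# TTL 64, protocol 6 (TCP), checksum left as zero for simplicity.
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0x4000, 64, 6, 0,
                  bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
fields = parse_ipv4_header(hdr)
print(fields["version"], fields["hlen_bytes"], fields["DF"])  # 4 20 1
```

The bit shifts mirror the field widths in Figure 11.3: 4-bit version, 4-bit HLEN, 3-bit flags and 13-bit fragmentation offset packed into one 16-bit word.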

 9. Explain in detail about IPv6.

Ans: The IP is the foundation for most Internet communications. Further, IPv6 is a version of IP that has been designed to overcome the deficiencies in IPv4 design. Some of the important issues that reflect IPv4 inadequacies include:

The IPv4 has a two-level address structure (network number, host number), which is inconvenient and inadequate to meet current addressing requirements. In addition, IPv4 requires extra mechanisms such as subnetting, classless addressing and NAT, while address depletion still remains a big issue for efficient implementation.
The Internet also deals with real-time audio and video transmission, which requires high speed, minimum delay and resource reservation schemes. There is no such procedure in IPv4 to deal with this kind of transmission.
Some confidential applications need authentication and encryption to be performed during data transmission. However, IPv4 does not provide any authentication and encryption of the packets.

Thus, an Internetworking Protocol, version 6 (IPv6), also known as Internetworking Protocol Next Generation (IPng), with enhanced functionality has been proposed by the Internet Engineering Task Force (IETF) to accommodate the future growth of the Internet.

Packet Format of IPv6

Figure 11.4 shows the format of an IPv6 packet. An IPv6 packet consists of two fields: a base header field of 40 bytes and a payload field of length up to 65,535 bytes. The payload field further consists of optional extension headers and the data from the upper layer.

Figure 11.4 IPv6 Datagram Format

The format of IPv6 datagram header is shown in Figure 11.5.

Figure 11.5 IPv6 Datagram Header Format

The description of various fields included in IPv6 header is as follows:

Version (VER): It is a 4-bit long field, which indicates the version being used. The current version of IPv6 is 6.
Priority (PRI): It is a 4-bit long field, which indicates the priority of the packet with respect to traffic congestion.
Flow Label: It is a 24-bit (3-byte) long field, which is used to provide special handling for a particular flow of data packets, such as audio and video.
Payload Length: It is a 16-bit (2-byte) long field which defines the length of the remainder of the IPv6 packet following the header, in octets.
Next Header: It is an 8-bit long field which identifies the first extension header (if extension header is available) following the base header or defines the protocol in the upper layer such as TCP, UDP or ICMPv6.
Hop Limit: It is an 8-bit long field, which defines the maximum number of routers the IPv6 packet can travel during its lifetime. The purpose of this field is same as of TTL field in IPv4.
Source Address: It is a 128-bit long field, which holds the IPv6 address of the sending host. This field remains unchanged during the lifetime of the datagram.
Destination Address: It is a 128-bit long field, which holds the IPv6 address of the receiving host. It remains unchanged during the lifetime of the datagram.
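Following the 4-bit priority and 24-bit flow label layout described above, the 40-byte base header can be parsed in Python. This is a sketch with hypothetical field values:

```python
import struct

def parse_ipv6_header(data):
    """Unpack the fixed 40-byte IPv6 base header, using the 4-bit
    priority / 24-bit flow-label split described above."""
    first_word, payload_len, next_hdr, hop_limit = struct.unpack("!IHBB", data[:8])
    return {
        "version": first_word >> 28,              # top 4 bits
        "priority": (first_word >> 24) & 0x0F,    # next 4 bits
        "flow_label": first_word & 0x00FFFFFF,    # low 24 bits
        "payload_length": payload_len,
        "next_header": next_hdr,
        "hop_limit": hop_limit,
        "src": data[8:24],                        # 128-bit source address
        "dst": data[24:40],                       # 128-bit destination address
    }

# A hand-made header: version 6, priority 1, flow label 99,
# next header 6 (TCP), hop limit 64, zeroed addresses.
hdr = struct.pack("!IHBB", (6 << 28) | (1 << 24) | 99, 0, 6, 64) + b"\x00" * 32
f = parse_ipv6_header(hdr)
print(f["version"], f["priority"], f["flow_label"], f["hop_limit"])  # 6 1 99 64
```

Note that, unlike the IPv4 parser's HLEN arithmetic, no length computation is needed here: the base header is always exactly 40 bytes.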

10. Discuss the advantages of IPv6 over IPv4.

Ans: The IPv6 has the following advantages over IPv4:

Bigger Address Space: IPv6 has a 128-bit address, which can make 2¹²⁸ addresses.
Improved Header Format: In the IPv6 header format, options are kept distinct from the base header and can be inserted when needed. These optional headers are not examined by any router along the datagram's path, thereby simplifying and speeding up the routing process.
Auto Configuration of Addresses: IPv6 protocol can provide dynamic assignment of IPv6 addresses.
Allow Future Extensions: IPv6 design allows any future extensions, if needed, to meet the requirements of future technologies or applications.
Support for Resource Allocation: In IPv6, a new mechanism called flow label has been introduced to replace TOS field. With this mechanism, a source can request special handling of the packet. This mechanism supports real-time audio and video transmission.
Enhanced Security: IPv6 includes encryption and authentication options, which provide integrity and confidentiality of the packet.

11. Compare IPv4 header with IPv6 header.

Ans: Both IPv4 and IPv6 are variants of IP, with certain differences between them. These differences are listed in Table 11.1.

Table 11.1 Differences Between IPv4 and IPv6 Header

  IPv4 Header   IPv6 Header
• IPv4 has header length (HLEN) field and it is variable in size. • IPv6 does not have header length field because the length of the header is fixed.
• It has service field to provide desired QoS such as precedence, delay, throughput and reliability. • The priority and flow label fields together provide the functionality provided by the service field.
• It has total length field, which shows the total length (header plus data) of the datagram. • It has payload field that indicates the length of data but not the header.
• It has identification, flag, and offset fields in its base header. • These fields have been removed from the base header and are included in the fragmentation extension header.
• In this, the TTL field specifies time to live in seconds. • IPv6 uses a hop limit field, which serves the same purpose as TTL field in IPv4.
• There is a protocol field. • There is a next header field in place of protocol field in IPv4.
• There is a header checksum field in IPv4, which is used to detect errors in the header of an IP packet and results into an extra overhead while processing an IP packet. • There is no header checksum field needed because it is provided by the protocol in upper layer.
• There is an options field. • There is an extension header field.
• The source and destination address fields are of 32 bits. • The source and destination fields are of 128 bits.

12. Discuss some deficiencies of IP.

Ans: The IP provides a best-effort service to deliver a datagram from the original source to the final destination. However, it has certain deficiencies, which are as follows:

It does not support error control and assistance mechanisms. Sometimes, unexpected errors may occur; for example, a router may have to discard a datagram because it cannot find a route to the destination or because the TTL field has reached zero. In such situations, the IP protocol is unable to inform the source host.
It does not provide any support for host and management queries. Sometimes, a host wants to know whether the router or another host is alive and sometimes, a network administrator may need information about another host or router.

13. Explain message types associated with ICMP.

Ans: Internet control message protocol (ICMP) has been designed to overcome these deficiencies of IP. It provides a mechanism for error reporting and for host and management queries, conveying this information through ICMP messages. The ICMP messages are of two types, namely, error-reporting messages and query messages.

Error-reporting Messages

ICMP facilitates the reporting of errors to the original source. It does not correct errors but only reports them. The error-reporting messages report problems that a router or host faces while processing an IP packet. ICMP handles five types of errors, which are as follows:

Destination Unreachable: The destination-unreachable messages are used in case of routing or delivery errors. It may happen that routers are unable to route a datagram or a host is unable to deliver it. In such situations, destination-unreachable messages are sent back to the source host. It is to be noted that these messages can be created either by a router or by the destination host.
Time Exceeded: The time-exceeded message is sent in two cases: first, when the packet is dropped because the value of the TTL field reaches zero and second, when the destination does not receive all the fragments belonging to the same packet within a certain time limit.
Parameter Problem: When a router or the destination host finds an illegal or missing value in any field of the datagram, it discards the datagram and sends a parameter-problem message to the source.
Source Quench: If a router or host discards a datagram due to congestion in the network, it sends a source-quench message to the source host. There are two purposes of sending this message: first, to tell the source that the datagram has been discarded, and second, to slow down the sending of datagrams in order to relieve the congestion.
Redirection: When a router finds that a packet has been sent along the wrong route because of some sudden change in the routing table or the loss of a routing-table update, it sends a redirect message to the source host to notify it of the problem.
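The five error messages above correspond to well-known ICMP type numbers from the protocol specification (3, 4, 5, 11 and 12). A minimal lookup sketch in Python (the function name is illustrative):

```python
# ICMP type numbers for the five error-reporting messages (per RFC 792).
ICMP_ERROR_TYPES = {
    3:  "Destination Unreachable",
    4:  "Source Quench",
    5:  "Redirection",
    11: "Time Exceeded",
    12: "Parameter Problem",
}

def classify_icmp(icmp_type: int) -> str:
    """Name the error-reporting message for a received ICMP type number."""
    return ICMP_ERROR_TYPES.get(icmp_type, "not an error-reporting message")

print(classify_icmp(11))  # Time Exceeded
print(classify_icmp(8))   # not an error-reporting message (8 is echo request)
```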

Query Messages

ICMP allows sending query messages in addition to error-reporting messages. The query messages help a host or a network manager obtain specific information from a router or another host. For example, nodes can discover their neighbours, hosts can learn about routers on their network, and routers can help a node redirect its messages. Query messages always occur in pairs. The different types of query messages are as follows:

Echo Request and Echo Reply: The echo-request and echo-reply messages are used to determine whether two systems can communicate at the IP level. If the machine that sent the echo-request message receives an echo-reply message, it is proved that the two machines can communicate with each other through IP datagrams. The ping command uses this mechanism: a series of echo-request and echo-reply messages is exchanged between two hosts to check the connectivity.
Timestamp Request and Reply: The timestamp-request and timestamp-reply messages are used by hosts or routers to find out the round-trip time required for an IP datagram to travel between them. They are also used to synchronize clocks.
Address Mask Request and Reply: The address-mask request is sent by a host, which knows its IP address but wants to know the corresponding mask, to a router on its local area network (LAN). In response, the router sends an address-mask reply message that provides the host with the necessary mask.
Router Solicitation and Advertisement: To send data to a host on another network, the source host needs to know the address of the router on its own network that connects it to other networks. It also needs to know whether neighbouring routers are alive and functioning. The router-solicitation and router-advertisement messages help the host find out such information. The source host broadcasts a router-solicitation message, and each router that receives it broadcasts its routing-table information through a router-advertisement message. The router-advertisement message announces the presence of the sending router as well as of other routers on the network.
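As an illustration of the echo-request message that underlies ping, the sketch below builds one in Python with the standard internet (one's-complement) checksum; the identifier, sequence number and payload are arbitrary choices, not values from the book:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement checksum over 16-bit words, as used in ICMP."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    # Type 8 (echo request), code 0; checksum is computed over the whole
    # message with the checksum field set to zero, then filled in.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(ident=1, seq=1)
# Re-checksumming a correctly checksummed message yields 0.
print(internet_checksum(pkt))  # 0
```

A receiver validates an incoming ICMP message the same way: summing the entire message, checksum field included, must give zero.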

14. In a leaky bucket, what should be the capacity of the bucket if the output rate is 5 gal/min and there is an input burst of 100 gal/min for 12 s and there is no input for 48 s?

Ans: Volume entering the bucket during the 12 s burst = 100 × (12/60) = 20 gal
Volume leaking out during the same 12 s = 5 × (12/60) = 1 gal
Thus, the bucket reaches a peak content of 20 − 1 = 19 gal, so its capacity should be at least 19 gal. (During the following 48 s of no input, the bucket only drains.)
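The peak content reached in the bucket, which sets the minimum capacity, can be cross-checked with a short step-by-step simulation (a sketch; the one-second time step and function name are arbitrary choices):

```python
# Simulate the leaky bucket of question 14 one second at a time:
# input 100 gal/min for 12 s, then nothing for 48 s, draining at
# 5 gal/min throughout. Track the highest level ever reached.
def peak_bucket_content(in_rate=100 / 60, out_rate=5 / 60,
                        burst_s=12, idle_s=48) -> float:
    level, peak = 0.0, 0.0
    for t in range(burst_s + idle_s):
        level += in_rate if t < burst_s else 0.0   # burst input, if any
        level = max(0.0, level - out_rate)         # steady leak
        peak = max(peak, level)
    return peak

print(round(peak_bucket_content(), 2))  # 19.0
```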

15. Imagine a flow specification that has a maximum packet size of 1,000 bytes, a token bucket rate of 10 million bytes/s, a token bucket size of 1 million bytes and a maximum transmission rate of 50 million bytes/s. How long can a burst at maximum speed last?

Ans: Given that

Bucket size, C = 1 million bytes

Maximum transmission rate, M = 50 million bytes/s

Token bucket rate, P = 10 million bytes/s

Burst time, S = ?

We know that, S = C/ (M - P)

Putting the respective values in the above formula, we get

           S = 1/(50 - 10)

        ⇒ S = 1/40

        ⇒ S = 0.025 s

Therefore, burst will last for 0.025 s.
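The same relation, C + PS = MS solved for S, as a tiny Python helper (variable names are illustrative):

```python
# During a burst of length S, the sender transmits M*S bytes, fed by the
# initial bucket content C plus tokens arriving at rate P: C + P*S = M*S.
def burst_seconds(bucket_bytes: float, token_rate: float,
                  max_rate: float) -> float:
    """Maximum burst duration S = C / (M - P), in seconds."""
    return bucket_bytes / (max_rate - token_rate)

# Values from question 15: C = 1 MB, P = 10 MB/s, M = 50 MB/s.
print(burst_seconds(1e6, 10e6, 50e6))  # 0.025
```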

Multiple Choice Questions

1. _______ defines the variation in the packet delay.
(a) jitter
(b) bandwidth
(c) reliability
(d) none of these
2. Which of the following is not a scheduling technique used to improve the QoS?
(a) FIFO queuing
(b) LIFO queuing
(c) priority queuing
(d) weighted fair queuing
3. Which of the following ICMP messages is sent by the router to source host to inform that the packet has been discarded due to congestion?
(a) redirection
(b) parameter problem
(c) time exceeded
(d) source quench
4. In which of the following algorithms can the output rate of burst packets be variable?
(a) token bucket
(b) leaky bucket
(c) both (a) and (b)
(d) none of these
5. The protocol field in the ARP packet format is of ________ bits.
(a) 16
(b) 8
(c) variable
(d) 32
6. Mapping from MAC address to IP address is done by
(a) RARP
(b) BOOTP
(c) DHCP
(d) All of these

Answers


1. (a)

2. (b)

3. (d)

4. (a)

5. (b)

6. (d)