9. Ethernet, Virtual Circuit Networks and SONET – Express Learning: Data Communications and Computer Networks


Ethernet, Virtual Circuit Networks and SONET

1. Write a short note on IEEE 802 standards.

Ans: IEEE 802 standards were developed by the IEEE in the 1980s for local area networks (LANs). These standards are compatible with one another at the data link layer and enable intercommunication among devices developed by different manufacturers. These standards are classified into the following categories:

IEEE 802.1: This category defines interface primitives for LANs. It deals with internetworking aspects and seeks to resolve conflicts between otherwise incompatible devices.
IEEE 802.2: This category specifies the upper part of the data link layer and supports the logical link control (LLC) protocol.
IEEE 802.3: This category supports Ethernet. The CSMA/CD protocol is used in Ethernet to control simultaneous access to the channel by multiple stations.
IEEE 802.4: This category has been specified for LANs based on the Token Bus architecture. It supports the token passing access method and the bus topology.
IEEE 802.5: This category describes standards for LANs based on Token Ring. It supports the token passing access method and the ring topology.
IEEE 802.6: This category is used for the distributed queue dual bus (DQDB) architecture. It has been developed for use in metropolitan area networks (MANs).
IEEE 802.11: This category applies to wireless LANs (Wi-Fi). Bluetooth, a technology used for small wireless personal area networks, is covered separately by IEEE 802.15.1.

2. Explain IEEE 802 reference model.

Ans: IEEE 802 reference model was developed by IEEE and adopted by all the organizations for LAN standards. Initially, the American National Standards Institute (ANSI) adopted the standard and later, in 1987, it was approved by the International Organization for Standardization (ISO) as an international standard. The 802 reference model is related to the open systems interconnection (OSI) model as shown in Figure 9.1.

Figure 9.1 Relationship Between IEEE 802 Reference Model and OSI Model

Physical Layer

The lowest layer of IEEE 802 reference model is equivalent to the physical layer of the OSI model. It depends on the type and implementation of transmission media. It defines a specification for the medium of transmission and the topology. It deals with the following functions:

encoding of signals at the sender's side and decoding of signals at the receiver's side;
transmission and reception of bits; and
preamble generation for synchronization.

Data Link Layer

The layer above the physical layer in the 802 reference model corresponds to the data link layer of the OSI model. Here, the data link layer is divided into two layers, namely, logical link control (LLC) and media access control (MAC).

Figure 9.2 Format of LLC Frame Header

LLC: This layer is the upper part of the data link layer that provides an interface to the upper layers and performs functions such as flow control and error control. It also performs a part of the framing function. It provides a single data link protocol for all IEEE LANs. In addition, it must support multi-access channels and multi-user networks.
  The LLC layer is concerned with the transmission of a link-level protocol data unit (PDU) that is modelled on high-level data link control (HDLC) and referred to as the LLC frame. Each LLC frame consists of a header field followed by a data field. The header of the LLC frame contains three subfields, namely, destination service access point (DSAP), source service access point (SSAP) and control (Figure 9.2), and the data field is used to hold data received from the upper layers. The description of the subfields in the LLC frame header is as follows:
DSAP: This 1-byte field contains a 7-bit address that identifies the user of LLC at the destination, that is, the upper-layer protocol at the destination. The remaining bit specifies whether the address is an individual or a group address.
SSAP: This 1-byte field contains a 7-bit address that identifies the user of LLC at the source, that is, the upper-layer protocol at the source. The remaining bit specifies whether the PDU is a command or a response.
Control Field: This field is used to handle flow and error control.
MAC: This layer forms the lower part of the data link layer and specifies the access control method used for each type of LAN. For example, CSMA/CD is specified for Ethernet LANs, and the token passing method is specified for Token Ring and Token Bus LANs. It also handles a part of the framing function for transmitting data. To implement all the specified functions, MAC uses a PDU at its layer, which is referred to as the MAC frame. The MAC frame consists of various fields (Figure 9.3), which are described as follows:
  • MAC Control: This field contains protocol control information such as priority level, which is required for the proper functioning of protocol.
  • Destination MAC Address: This field contains the address of the destination on the LAN for this frame.
  • Source MAC Address: This field contains the address of the source on the LAN for this frame.
  • LLC PDU: This field contains the LLC data arriving from the immediate upper layer.
  • Cyclic Redundancy Check (CRC): This field contains the CRC code that is used for error detection. This field is also referred to as frame check sequence (FCS).

Figure 9.3 Format of a MAC Frame
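As an illustration of the DSAP/SSAP layout described above, the following Python sketch packs and unpacks a SAP byte. It assumes (as in IEEE 802.2) that the flag bit (I/G for DSAP, C/R for SSAP) occupies the least-significant bit and the 7-bit address the remaining bits; the SAP value used in the example is hypothetical.

```python
def pack_sap(address: int, flag: int) -> int:
    """Combine a 7-bit SAP address with a 1-bit flag
    (I/G bit for DSAP, C/R bit for SSAP)."""
    if not 0 <= address < 128:
        raise ValueError("SAP address must fit in 7 bits")
    return (address << 1) | (flag & 1)

def unpack_sap(byte: int) -> tuple[int, int]:
    """Return (address, flag) from a packed SAP byte."""
    return byte >> 1, byte & 1

# Hypothetical example: an individual destination SAP and a command PDU.
dsap = pack_sap(0x55, 0)
ssap = pack_sap(0x55, 0)
```

A receiver applies `unpack_sap` to each of the first two header bytes to recover the upper-layer protocol identifier and the flag bit.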

3. Discuss frame format of IEEE 802.3 standards.

Ans: IEEE 802.3 standard supports Ethernet, which was developed in 1976 by Xerox Corporation. It was developed as an improvement over earlier networks and is capable of controlling access to the channel when many stations attempt to transmit simultaneously. To control media access, it uses the 1-persistent CSMA/CD protocol.

In standard Ethernet, the MAC layer is responsible for performing the operations of the access method. Further, IEEE 802.3 has specified a MAC frame format for Ethernet. This frame consists of seven fields (Figure 9.4), which are described as follows:

Preamble: It is the first field of the Ethernet frame. It contains 7 bytes of alternating 1s and 0s (10101010) that alert the receiver to the incoming frame so that the receiver may synchronize its timing with the input. In fact, this field is added at the physical layer and is not formally a part of the frame.
Start Frame Delimiter (SFD): It is a 1-byte-long field that marks the beginning of the frame. The last two bits of this field are set to 11 to indicate to the receiver that the next field is the destination address; it is also the last chance for the receiver to synchronize its input timing.
Destination Address (DA): It is a 6-byte long field that holds the physical address of the next receiving station(s) to which the frame is to be transmitted.
Source Address (SA): It is a 6-byte long field that holds the physical address of the station that has sent the frame.

Figure 9.4 Format of IEEE 802.3 MAC Frame

Length or Type: It is a 2-byte-long field that defines either the length or the type of data. The original Ethernet used this field as a type field to identify the upper-layer protocol carried in the data field, while the IEEE standard used it as a length field to indicate the total number of bytes in the data field.
Data: This field carries data arriving from upper-layer protocols. The amount of data stored in this field can range between 46 and 1,500 bytes.
CRC: It is a 4-byte-long field that contains error detection information. In the Ethernet MAC frame, it is a CRC computed over all the fields except the preamble, the SFD and the CRC field itself.
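The frame layout above can be illustrated with a simplified Python sketch that builds and parses an 802.3-style frame. The preamble and SFD are omitted (they are added at the physical layer), and Python's `zlib.crc32` stands in for the Ethernet FCS: it uses the same CRC-32 polynomial, though the real FCS also involves bit-ordering and complement details omitted here.

```python
import struct
import zlib

def build_frame(dst: bytes, src: bytes, data: bytes) -> bytes:
    """Build a simplified IEEE 802.3 frame: DA (6) + SA (6) +
    length (2) + data (46-1500) + CRC (4)."""
    if not 46 <= len(data) <= 1500:
        raise ValueError("data field must be 46-1500 bytes (pad short data)")
    header = dst + src + struct.pack("!H", len(data))
    # CRC computed over DA, SA, length and data, as described above.
    fcs = struct.pack("!I", zlib.crc32(header + data))
    return header + data + fcs

def parse_frame(frame: bytes):
    """Split a frame into its fields and verify the CRC."""
    dst, src = frame[:6], frame[6:12]
    (length,) = struct.unpack("!H", frame[12:14])
    data, fcs = frame[14:-4], frame[-4:]
    ok = struct.pack("!I", zlib.crc32(frame[:-4])) == fcs
    return dst, src, length, data, ok
```

With a minimum-size 46-byte data field, the resulting frame is 64 bytes long, the minimum Ethernet frame size excluding preamble and SFD.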

4. What are the four generations of Ethernet? Discuss Ethernet cabling in all the generations of Ethernet.

Ans: The Ethernet was developed in 1976 at the Xerox's Palo Alto Research Center (PARC). Since its development, the Ethernet has been evolving continuously. This evolution of Ethernet can be categorized under four generations, which include standard Ethernet, fast Ethernet, gigabit Ethernet and 10-gigabit Ethernet. These generations are discussed here.

Standard Ethernet

Standard Ethernet uses digital signals (baseband) at the data rate of 10 Mbps and follows 1-persistent CSMA/CD as access method. A digital signal is encoded/decoded by sender/receiver using the Manchester scheme. Standard Ethernet has defined several physical layer implementations, out of which the following four are commonly used.

10Base5: It uses a thick coaxial cable of 50 ohm and is implemented in bus topology with an external transceiver connected to the coaxial cable. The transceiver deals with transmitting, receiving and detecting collisions in the network. The cable is too stiff to bend by hand. The maximum length of a cable segment should not be more than 500 m; otherwise, the signals may deteriorate. If a greater length is required, a maximum of five segments, each of 500 m, can be used, with repeaters connecting the segments. Thus, the length of the cable can be extended up to 2,500 m. The 10Base5 Ethernet is also referred to by other names, including thick Ethernet or thicknet.
10Base2: It is also implemented in the bus topology but with a thinner coaxial cable. The transceiver in this Ethernet is a part of the NIC. The 10Base2 specification is cheaper than 10Base5 because thin coaxial cable costs less than thick cable. The thinner cable can easily be bent close to the nodes, which adds flexibility and thus makes installation of 10Base2 easier. The maximum length of a cable segment must not exceed 185 m. The 10Base2 Ethernet is also referred to as thin Ethernet or Cheapernet.
10Base-T: It uses two pairs of twisted cable and is implemented in star topology. All nodes are connected to the hub via two pairs of cable and thus, creating a separate path for sending and receiving the data. The maximum length of the cable should not exceed 100 m; otherwise, the signals may attenuate. It is also referred to as twisted-pair Ethernet.
10Base-F: It is a fibre-based 10-Mbps Ethernet that is implemented in star topology. It uses a pair of fibre optic cables to connect the nodes to the central hub. The maximum length of the cable should not exceed 2,000 m. It is also referred to as fibre Ethernet.
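All the standard Ethernet implementations above use Manchester encoding at the physical layer, so a minimal sketch of the encoder may help. It assumes the IEEE 802.3 convention that a 1 is a low-to-high mid-bit transition and a 0 is high-to-low; levels are represented abstractly as 0 (low) and 1 (high).

```python
def manchester_encode(bits):
    """Encode a bit sequence with Manchester coding: each data bit
    becomes two half-bit levels with a mandatory mid-bit transition,
    which is what lets the receiver recover the clock from the signal.
    Convention assumed (IEEE 802.3): 1 -> low-to-high, 0 -> high-to-low."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out
```

Note that the output has twice as many signal elements as input bits, which is why Manchester coding needs twice the baud rate of the data rate.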

Fast Ethernet

The IEEE 802.3 committee developed a set of specifications referred to as fast Ethernet to provide low-cost data transfer at the rate of 100 Mbps. It was designed to compete with LAN protocols such as the fibre distributed data interface (FDDI), while remaining compatible with the standard Ethernet. Fast Ethernet introduced a new feature called autonegotiation, which enables two devices to negotiate certain features such as the data rate or mode of transmission. It also allows a station to determine the capabilities of a hub, and two otherwise incompatible devices can be connected to one another using this feature. Like the standard Ethernet, various physical-layer implementations of fast Ethernet have also been specified. Some of them are as follows:

100Base-TX: It uses two pairs of either cat5 UTP cable or STP cable. The maximum length of the cable should not exceed 100 m. This implementation uses the MLT-3 line coding scheme because it keeps the required signal bandwidth low. However, since MLT-3 is not self-synchronizing, the 4B/5B block coding scheme is used to prevent long sequences of 0s. The block coding increases the line rate from 100 Mbps to 125 Mbps.
100Base-FX: It uses two strands of fibre optic cable, which easily satisfy the high bandwidth requirements. The implementation uses the NRZ-I coding scheme. As NRZ-I suffers from a synchronization problem for long sequences of 0s, 4B/5B block coding is used with NRZ-I to overcome this problem. The block coding results in an increased line rate of 125 Mbps. The maximum cable length in 100Base-FX must not exceed 412 m.
100Base-T4: It is a standard that uses four pairs of cat3 or higher UTP cable, allowing the use of existing voice-grade wiring. For this implementation, the 8B/6T line coding scheme is used. The maximum length of the cable must not exceed 100 m.
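The 4B/5B block coding mentioned above, and the resulting 100 Mbps to 125 Mbps rate increase, can be sketched as follows. The table lists the standard 4B/5B data symbols: each 4-bit nibble maps to a 5-bit codeword chosen so that long runs of 0s cannot occur, which is what restores self-synchronization for MLT-3 and NRZ-I.

```python
FOUR_B_FIVE_B = {  # data symbols of the standard 4B/5B code
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode_4b5b(data: bytes) -> str:
    """Encode bytes nibble by nibble: every 4 data bits become 5 line
    bits, which is exactly why carrying 100 Mbps of data requires a
    125 Mbps line rate (a 5/4 expansion)."""
    out = []
    for byte in data:
        out.append(FOUR_B_FIVE_B[byte >> 4])   # high nibble first
        out.append(FOUR_B_FIVE_B[byte & 0x0F])  # then low nibble
    return "".join(out)
```

Every codeword has at most one leading 0 and at most two trailing 0s, so no encoded stream ever contains four consecutive 0s.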

Gigabit Ethernet

Gigabit Ethernet was developed by the IEEE 802.3 committee to meet the higher data rate requirements. This standard provides a data rate of 1000 Mbps (1 Gbps). It is backward compatible with traditional and fast Ethernet and also supports autonegotiation feature. Various physical layer implementations of gigabit Ethernet are as follows:

1000Base-SX: It is a two-wire implementation that uses short-wavelength optical fibre. One fibre is used for sending the data and the other for receiving it. The NRZ line coding scheme and the 8B/10B block coding scheme are used for this implementation. The length of the cable should not exceed 550 m in the 1000Base-SX specification.
1000Base-LX: It is also a two-wire implementation, using long-wavelength optical fibre. One fibre is used for sending the data and the other for receiving it. It is implemented with the NRZ line coding scheme and the 8B/10B block coding scheme. The length of the cable should not exceed 5,000 m in the 1000Base-LX specification.
1000Base-CX: It uses two STP wires, where one wire is used for sending the data and the other for receiving it. It is implemented with the NRZ line coding scheme and the 8B/10B block coding scheme. The length of the cable should not exceed 25 m in the 1000Base-CX specification.
1000Base-T: It uses four pairs of cat5 UTP wire. It is implemented with the 4D-PAM5 line coding scheme. In this specification, the length of the cable should not exceed 100 m.

Ten-Gigabit Ethernet

This standard was designated 802.3ae by the IEEE 802.3 committee. It was designed to increase the data rate to 10,000 Mbps (10 Gbps). It is compatible with standard Ethernet, fast Ethernet and gigabit Ethernet, and it enables Ethernet to interwork with technologies such as Frame Relay and ATM. Various physical-layer implementations of 10-gigabit Ethernet are as follows:

10GBase-S: It uses short-wavelength transmission and is designed for 850 nm operation on multimode fibre. The maximum length of the cable should not exceed 300 m.
10GBase-L: It uses long-wavelength transmission and is designed for 1,310 nm operation on single-mode fibre. The maximum length of the cable should not exceed 10 km.
10GBase-E: It uses extended-wavelength transmission and is designed for 1,550 nm operation on single-mode fibre. The maximum distance that can be achieved using this medium is up to 40 km.

5. What is Token Ring (IEEE 802.5)? How is it different from Token Bus (IEEE 802.4)?

Ans: The IEEE 802.5 is a specification of standards for LANs based on the Token Ring architecture. The Token Ring network was originally developed by IBM in the 1970s. It uses a token passing MAC protocol with the ring topology. In this protocol, the stations are connected via point-to-point links with the use of repeaters (Figure 9.5). To control media access, a small frame called a token (a 3-byte pattern of 0s and 1s) circulates around the network, and only the station possessing the token can transmit frames, within its allotted time.

Figure 9.5 Token Ring LAN

Whenever a station wants to transmit a frame, it first needs to grab the token from the network before starting any transmission. It then appends its information to the token and sends it on the network. The information frame circulates around the network and is eventually received by the intended destination. After receiving the information frame, the destination copies the information and sends the frame back on the network with two of its bits set to indicate an acknowledgement. The information frame then moves around the ring and is finally received by the sending station, which checks the returned frame to determine whether it has been received with or without errors. If the sending station has finished its transmission, it creates a new token and inserts it on the network. Notice that while one station is transmitting data, no other station can grab a token. Thus, collisions cannot occur, as only one station can transmit at a time. In addition, if a station does not have a frame to send, or its allotted time expires, the token is immediately passed to the next station.
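The token-passing behaviour just described can be sketched as a small simulation (a simplification: real Token Ring works with circulating frames and timers, not abstract queues). Because only the token holder may transmit, collisions are impossible by construction.

```python
from collections import deque

def token_ring_round(stations, frames_per_station, holding_limit):
    """Sketch of token passing: the token visits stations in ring order;
    only the token holder may transmit, and it may send at most
    `holding_limit` frames before passing the token on. Returns the
    order in which frames were actually sent."""
    queues = {s: deque(frames_per_station.get(s, [])) for s in stations}
    sent = []
    while any(queues.values()):
        for s in stations:            # token circulates around the ring
            q = queues[s]
            for _ in range(holding_limit):
                if not q:
                    break             # nothing (more) to send: pass token on
                sent.append((s, q.popleft()))
    return sent
```

For example, with stations A, B and C, where A holds two frames and B one, a holding limit of one frame per token visit interleaves A's frames with B's.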

In Token Ring networks, the ring topology is used, in which the failure of any one station can bring the entire network down. Thus, another standard known as Token Bus (IEEE 802.4) was developed as an improvement over Token Ring networks. Like Token Ring, Token Bus is also based on the token passing mechanism. In Token Bus, the stations are logically organized into a ring but physically organized into a bus (Figure 9.6). Thus, each station knows the addresses of its adjacent (left and right) stations in the logical ring. After the logical ring has been initialized, the highest numbered station may transmit. The sending station broadcasts the frame on the network; each station receives the frame and discards it if the frame is not addressed to it. When a station finishes the transmission of data, or its allotted time expires, it inserts the address of the next station in the logical ring into the token and passes the token to that station.

Figure 9.6 Token Bus LAN

6. Compare IEEE 802.3, 802.4 and 802.5.

Ans: There are certain differences among the IEEE 802.3, 802.4 and 802.5 standards that are listed in Table 9.1.

Table 9.1 Comparison among IEEE 802.3, 802.4 and 802.5

Medium access: IEEE 802.3 uses the 1-persistent CSMA/CD protocol; IEEE 802.4 uses the Token Bus protocol; IEEE 802.5 uses the Token Ring protocol.
Connectivity: In IEEE 802.3 and IEEE 802.4, the stations are logically connected to each other via a broadcast cable medium; in IEEE 802.5, the stations are physically connected to each other via point-to-point links.
Frame delivery: In IEEE 802.3 and IEEE 802.4, frames are broadcast to the destination; in IEEE 802.5, frames are transmitted to the destination over point-to-point links.
Transmission media: IEEE 802.3 generally uses coaxial cable, optical fibre or twisted pair; IEEE 802.4 generally uses coaxial cable or twisted pair and is not well suited to fibre cables; IEEE 802.5 generally uses coaxial cable, optical fibre or twisted pair.
Prioritization: In IEEE 802.3, there is no prioritization of stations for the transmission of data; in both IEEE 802.4 and IEEE 802.5, stations are prioritized.
Short frames: IEEE 802.3 cannot transmit very short frames; IEEE 802.4 and IEEE 802.5 can handle the transmission of short frames.
Applications: IEEE 802.3 cannot be used for real-time applications; IEEE 802.4 is used for real-time applications; IEEE 802.5 is used for office automation.
Encoding: IEEE 802.3 applies Manchester encoding; IEEE 802.4 applies analog encoding; IEEE 802.5 applies differential Manchester encoding.
Efficiency: IEEE 802.3 has very low efficiency at high loads but high efficiency at low loads due to less delay; IEEE 802.4 and IEEE 802.5 have high efficiency at high loads but low efficiency at low loads due to greater delay.

7. Write a short note on FDDI. Explain access method, time registers, timers and station procedure.

Ans: The fibre distributed data interface (FDDI) refers to the first high-speed LAN protocol standardized by ANSI and ITU-T. It has also been approved by the ISO and resembles the IEEE 802.5 standard. Because it uses fibre optic cable, it supports larger packet sizes, longer network segments and more stations. It offers a speed of 100 Mbps over distances of up to 200 km and connects up to 1,000 stations. The distance between any two stations, however, cannot be more than a few kilometres.

Access Method

The FDDI employs the token passing access method. A station possessing the token can transmit any number of frames within its allotted time. There are two types of frames provided by the FDDI: synchronous and asynchronous. A synchronous frame (also called an S-frame) is used for real-time applications such as audio and video; such a frame needs to be transmitted within a short period of time without much delay. An asynchronous frame (also called an A-frame) is used for non-real-time applications (such as data traffic) that can tolerate large delays. If a station has both S-frames and A-frames to send, it must send the S-frames first. After sending the S-frames, if allotted time still remains, the A-frames can be transmitted.

Time Registers

Three time registers are used to manage the movement of the token around the ring, namely, synchronous allocation (SA), target token rotation time (TTRT) and absolute maximum time (AMT). The SA register specifies the time allowed for transmitting synchronous data; each station can have a different value for it. The TTRT register specifies the average time the token should take to move around the ring exactly once. The AMT register has a value equal to twice the TTRT; it specifies the maximum time that may elapse before a station receives the token. If the token takes longer than the AMT, the ring must be reinitialized.


Timers

Each station maintains two timers to compare the actual time with the values present in the time registers. These timers are the token rotation timer (TRT) and the token holding timer (THT). The TRT measures the total time taken by the token to complete one cycle; this timer runs continuously. The THT starts when the token is received by a station and indicates the time left for sending A-frames after the S-frames have been sent.

Station Procedure

When a station receives the token, it uses the following procedure:

1. It sets the THT to a value equal to (TTRT – TRT).
2. It sets TRT to zero.
3. It transmits its synchronous frames; the timers keep running while the frames are sent.
4. It then sends asynchronous data as long as the value of THT remains positive.
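The station procedure above can be sketched in Python as follows. This is a simplification that assumes every frame takes one time unit to send (the `frame_time` parameter is an assumption of this sketch; real FDDI timers run in hardware).

```python
def station_turn(ttrt, trt, s_frames, a_frames, frame_time=1):
    """Sketch of the FDDI station procedure. Times are in arbitrary
    units; returns the frames actually transmitted during this turn."""
    tht = ttrt - trt          # step 1: time available for A-frames
    sent = list(s_frames)     # step 3: S-frames are always sent first
    for frame in a_frames:    # step 4: A-frames only while THT is positive
        if tht <= 0:
            break
        sent.append(frame)
        tht -= frame_time
    return sent
```

For example, with TTRT = 10 and TRT = 7, the station has 3 units of holding time left for A-frames after its synchronous traffic; if TRT has already reached the TTRT, only S-frames go out.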

8. What are the advantages of FDDI over a basic Token Ring?

Ans: Though FDDI, like Token Ring, is a token-passing protocol, it provides certain advantages over Token Ring. Some of these advantages are as follows:

It provides a data rate of 100 Mbps as compared to the 4–16 Mbps of Token Ring.
It supports large number of stations as compared to Token Ring.
The access method used in FDDI is timed token passing, which supports the transmission of multiple frames after capturing the token; this is not possible in basic Token Ring.
In case of Token Ring, the transmitting station releases the token after it receives the acknowledgement of sent frames. On the other hand, in FDDI the station releases the token immediately after it has finished the transmission. This is known as early token release.
It offers higher reliability than Token Ring by using a dual counter-rotating ring topology. In this, two rings are used, namely, a primary and a secondary ring, in contrast to Token Ring, in which only one ring is present. Both rings can be used to transmit data, in opposite directions, to provide fault tolerance. Thus, in case one ring breaks or some station malfunctions, the load can be shifted to the other ring. Moreover, if both rings break at the same point, the two rings can be joined together to form a single ring. Reliability can be further increased by using a dual ring of trees or dual homing, which provides multiple paths; on the failure of one path, another path can be chosen for passing a token or data.

9. What is meant by wireless LAN? Mention its advantages and disadvantages.

Ans: Wireless LAN (WLAN) refers to a network that uses wireless transmission medium. It is used to connect devices without using cables and introduces a higher flexibility for ad hoc communication. Some of the advantages of wireless LAN are as follows:

Small devices such as PDAs and laptops are difficult to design around cabled connections; thus, WLAN becomes an attractive alternative transmission medium.
In natural disasters, such as floods and earthquakes, cables may be damaged and noise in the cable increases; WLAN transmission is far less affected by such calamities.
It can cover a large area and a large number of devices. In addition, new devices can be added easily without affecting the existing network design.
The nodes can communicate without restriction and from anywhere within the coverage area.

Some disadvantages of WLAN are as follows:

It is an expensive medium of transmission as compared to cables. Certain wireless devices such as wireless LAN adaptor and PC card are quite expensive.
Installation of WLANs is expensive.
Radio waves used in WLANs can be intercepted easily and may interfere with various devices; thus, they are not secure to use.
WLANs provide low quality of transmission because of high error rate due to interference.
WLANs are restricted to only certain frequency bands in radio transmission.

10. Differentiate between wired and wireless LAN.

Ans: Both wired and wireless LANs are used for establishing a network within an organization but there are some differences between them. Table 9.2 lists these differences.

Table 9.2 Differences Between Wired and Wireless LAN

Wired LAN Wireless LAN
Setup: It is difficult to set up a wired LAN, as it requires a large number of long wires to connect the devices to each other or to a central device such as a hub or switch; it is easier to set up a WLAN.
Cost: The total cost of setting up a wired LAN includes the cost of cables, hubs, switches and the various software packages needed to install the network; wireless adapters and access points are three to four times as expensive as the corresponding wired LAN components.
Security: A wired LAN is more secure because firewalls can be installed on the computers; a WLAN is less secure because radio waves travel through the air and can be easily intercepted.
Reliability: The components used in wired LANs, including Ethernet cables, hubs and switches, are extremely reliable; a WLAN is less reliable, as its signals are easily interfered with by home appliances, causing more problems.
Flexibility: Wired LANs provide less flexibility, as separate peripherals such as printers, modems and scanners may be needed for every computer; WLANs provide more flexibility, as resources such as CD-ROM drives and colour or B/W printers can be shared without separate connections.

11. Explain two types of services of IEEE 802.11.

Ans: The IEEE 802.11 is a standard specified for wireless LANs. Two types of service sets, namely, the basic service set (BSS) and the extended service set (ESS), have been defined for IEEE 802.11. These two are explained as follows:

Basic Service Set

The BSS acts as the main building block of a WLAN. It consists of a number of wireless stations (stationary or mobile) and an optional base station called an access point (AP). If an AP is present in the network, the BSS is referred to as an infrastructure network [Figure 9.7(a)]; otherwise, the BSS is referred to as an ad hoc network [Figure 9.7(b)], which cannot transmit data to any other BSS.

Figure 9.7 Basic Service Set (BSS)

Extended Service Set

The ESS is formed by the combination of at least two BSSs with APs (Figure 9.8). All BSSs are connected via a distribution system, which is generally a wired LAN such as Ethernet. The distribution system connects the APs of the BSSs to one another and thus enables communication between BSSs. Each BSS can have two types of stations: mobile and stationary. The stations inside the BSS are mobile stations, while the AP of each BSS is a stationary station. The stations inside a BSS can communicate with one another without using an AP. However, if communication is required between two stations of different BSSs, it occurs via the APs. The ESS is analogous to a cellular network in which each cell can be considered a BSS and each base station an AP.

Figure 9.8 Extended Service Set (ESS)

12. What are the types of frames specified in IEEE 802.11?

Ans: The IEEE 802.11 has specified three types of MAC frames for WLANs, which include management frames, control frames and data frames. All these three types are discussed as follows:

Management Frame: This frame is used for managing communication between stations and APs. It handles association and reassociation requests and responses, disassociation and authentication.
Control Frame: This frame is used for accessing the channel and acknowledging frames. It ensures reliable delivery of data frames. There are six subtypes of a control frame which are as follows:
  • Power Save-Poll (PS-Poll): This frame is used by a station to send a request to an AP for the transmission of frames buffered by AP for that station while the station was in power saving mode.
  • Request to Send (RTS): Whenever a station wishes to send data to another station, it first sends an RTS frame to declare to the destination and other stations within its reception range that it is going to send data to that destination.
  • Clear to Send (CTS): This frame is sent by a station in response to an RTS frame. It indicates to the source station that the destination station has granted permission for sending data frames.
  • Acknowledgement: This frame is sent by the destination station to the source station to acknowledge the successful receipt of the previous frames.
  • Contention-Free (CF)-end: This frame is used to indicate the end of the contention-free period.
  • CF-End + ACK: This frame is used to acknowledge the CF-End frame. It ends the contention-free period and frees all the bound stations from the restrictions associated with that period.
Data Frame: This frame is used for carrying data and control information from a source station to a destination station. Data frames are divided into eight subtypes, which are further organized under two groups. One group contains the data frames that are used to carry the user data (received from upper layers) from the source station to the destination station. The data frames included in this group are described as follows:
  • Data: This is the simplest data frame used to send data in both contention period and contention-free period.
  • Data + CF-ACK: This frame carries data and also acknowledges the data, which has been received previously. It can be used only in the contention-free period.
  • Data + CF-Poll: This frame is used by a point coordinator to send data to a mobile station. It also requests the mobile station to send a data frame, which may have been buffered by the mobile station.
  • Data + CF-ACK + CF-Poll: This frame combines the functionality of two frames Data + CF-ACK and Data + CF-Poll into a single frame.

Besides, there is another group that contains four more subtypes of data frames, which do not carry any user data. One of these is the null-function data frame, which carries the power management bit in the frame control field to the AP and indicates that the station is moving to a low-power operating state. The remaining three frames (CF-ACK, CF-Poll, CF-ACK + CF-Poll) function in the same way as the last three frames in the first group, the only difference being that they do not contain any data.

13. Explain the frame format of 802.11 standards.

Ans: The IEEE 802.11 has defined three MAC-layer frame types for WLANs: control, data and management frames. Figure 9.9 shows the format of the data frame of IEEE 802.11, which comprises nine fields. The format of management frames is similar to that of data frames, except that it does not include one of the base station addresses. The format of control frames does not include the frame body and SC fields and contains only one or two address fields.

The description of fields included in the IEEE 802.11 MAC frame is as follows:

Figure 9.9 Frame Format of the IEEE 802.11 Standard

Frame Control (FC): It is a 2-byte-long field in which the first byte indicates the type of frame (control, management, or data) and the second byte contains control information such as fragmentation information and privacy information.
Duration (D): It is a 2-byte-long field that defines the channel allocation period for the transmission of a frame. However, in one type of control frame, this field stores the ID of the frame.
Addresses: The IEEE 802.11 MAC frame contains four address fields, each 6 bytes long. In case of a data frame, two of the address fields store the MAC addresses of the original source and final destination of the frame, while the other two store the MAC addresses of the base stations transmitting and receiving the frame over the WLAN.
Sequence Control (SC): It is a 2-byte (that is 16 bits) field, of which 12 bits specify the sequence number of a frame for flow control. The remaining 4 bits specify the fragment number required for reassembling at the receiver's end.
Frame Body: It is a field that ranges between 0 and 2,312 bytes and contains payload information.
FCS: It is a 4-byte-long field that comprises a 32-bit CRC.
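As a concrete illustration of these fields, the sketch below unpacks the FC, D, address and SC fields from a raw byte string. It is a minimal sketch assuming the simplified field order listed above (FC, D, four addresses, SC); real 802.11 frames place the SC field before the fourth address, and the function name and type encoding used here are illustrative assumptions, not a standards-conformant parser.

```python
import struct

def parse_80211_header(frame: bytes) -> dict:
    """Parse a hypothetical IEEE 802.11 data-frame header laid out in
    the simplified order given in the text: FC (2 bytes), D (2 bytes),
    four 6-byte addresses, then SC (2 bytes)."""
    fc, dur = struct.unpack_from("<HH", frame, 0)       # little-endian uint16s
    addrs = [frame[4 + 6 * i: 10 + 6 * i] for i in range(4)]
    sc, = struct.unpack_from("<H", frame, 28)
    return {
        "type": (fc >> 2) & 0b11,       # 2 = data under the usual encoding (assumed)
        "duration": dur,
        "addresses": [a.hex(":") for a in addrs],
        "fragment": sc & 0x000F,        # low 4 bits: fragment number
        "sequence": sc >> 4,            # high 12 bits: sequence number
    }
```

Note how the 16-bit SC field splits exactly as described: 12 bits of sequence number for flow control and 4 bits of fragment number for reassembly.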

14. With reference to 802.11 wireless LAN, explain the following:

(a) Hidden terminal problem

(b) Exposed terminal problem

(c) Collision avoidance mechanisms


(a) Hidden Terminal Problem The hidden terminal problem occurs during communication between two stations in wireless networks. It is the problem where a station is unable to detect another station while both of them are competing for the same transmission medium. To understand how this problem occurs, consider three stations X, Y and Z situated in a network as shown in Figure 9.10. The transmission range of station X is represented by the left oval and that of Z by the right oval. The stations falling in either of the ovals can hear the signal transmitted by the station situated in that oval. However, stations X and Z are outside the transmission range of each other, that is, they are hidden from each other. As station Y is situated in the area common to X and Z, it can hear the signals transmitted by both X and Z. Now, suppose that while X is transmitting data to Y, Z also wants to send data to Y. As Z is unable to hear the transmission from X to Y, it assumes that the transmission medium is free and starts sending data frames to Y. This results in a collision at station Y, as it is receiving from both X and Z, which are hidden from each other with respect to Y.

Figure 9.10 Hidden Terminal Problem

The hidden terminal problem can be solved by making use of the RTS and CTS frames before starting the transmission of data. Initially, station X sends an RTS frame to station Y to request permission to send data. The transmission of the RTS frame cannot be detected by Z. In response to the RTS frame, station Y sends a CTS frame, which specifies the duration of the transmission from X to Y. As Y is in the transmission range of Z, Z can detect the transmission of the CTS frame and learns that Y is busy with another station, and also for how long. Therefore, it does not initiate any transmission until that duration is over.

(b) Exposed Terminal Problem This problem is the reverse of the hidden terminal problem. It arises when a station refrains from transmitting even though the medium is in fact available for use. To understand how this problem occurs, consider four stations P, Q, R and S in a network as shown in Figure 9.11. Suppose that while station P is sending data to station Q, there arises a need for station R to send data to S. The transmission from R to S could proceed without disturbing the transmission from P to Q. However, as station R is exposed to the transmission range of P, it refrains from transmitting to S after sensing that P is transmitting some data. Such a situation is known as the exposed terminal problem.

Figure 9.11 Exposed Terminal Problem

(c) Collision Avoidance Mechanisms To avoid collisions in wireless networks, a collision avoidance protocol named multiple access with collision avoidance (MACA) has been designed. This protocol requires a sender to make the receiver send a short frame before starting the transmission of data frames. This frame serves as an announcement to the nearby stations that a transmission is going on between the sender and the receiver and that no other station should interfere, thus avoiding collisions.

To understand how MACA works, consider five stations P, Q, R, S and T as shown in Figure 9.12. Suppose P wants to send data to Q. Initially, P sends an RTS frame to Q specifying the length of the data frame. Station Q then responds with a CTS frame specifying the duration of the transmission from P to Q. After receiving the CTS frame, P begins transmitting data to Q. Now, as station S is in the transmission range of Q, it hears the CTS frame from Q and thus remains silent for the duration in which Q is receiving from P. Station R is in the transmission range of P and hears the RTS frame from P (but not the CTS frame from Q). Thus, R is free to transmit, as long as its transmission does not interfere with the reception of the CTS frame at P. Station T, being in the transmission range of both P and Q, hears both the RTS and the CTS frames and thus remains silent until the transmission is over.

Figure 9.12 MACA Protocol

The disadvantage of the MACA protocol is that a collision can still occur in case both Q and R transmit RTS frames to P simultaneously. The RTS frames from Q and R may collide at P; because of the collision, neither Q nor R receives a CTS frame. Both Q and R then wait for a random amount of time, using the binary exponential backoff algorithm (explained in Chapter 8), and retry the transmission.
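The random waiting step can be sketched as follows. This is a generic binary exponential backoff, not code from any particular 802.11 implementation, and the cap on the exponent is an assumed parameter.

```python
import random

def backoff_slots(attempt: int, max_exp: int = 10) -> int:
    """Binary exponential backoff as used after an RTS collision:
    after the n-th failed attempt, wait a random number of slots
    chosen uniformly from [0, 2**n - 1], with n capped at max_exp."""
    n = min(attempt, max_exp)
    return random.randint(0, 2 ** n - 1)

# Both Q and R collided; each independently picks a waiting time,
# so they are unlikely to retransmit at the same moment again.
wait_q = backoff_slots(1)
wait_r = backoff_slots(1)
```

Because the waiting window doubles with every failed attempt, repeated collisions become progressively less likely.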

To overcome the disadvantage and improve the performance of MACA, it was enhanced in 1994 and renamed as MACA for wireless (MACAW). This newer version of MACA includes several enhancements, some of which are as follows:

To identify the frames that have been lost during the transmission, the receiver must acknowledge each successfully received data frame by sending an ACK frame.
The CSMA protocol is used for carrier sensing so that no two stations can send an RTS frame at the same time to the same destination.
Instead of running the binary exponential backoff algorithm for each station, it is run for each pair of transmitting stations (source and destination).

15. What is meant by Bluetooth? Explain its architecture.

Ans: Bluetooth is a short-range wireless LAN technology through which many devices can be linked without using wires. It was originally started as a project by the Ericsson Company and then formalized by a consortium of companies (Ericsson, IBM, Intel, Nokia and Toshiba). Bluetooth gets its name from the 10th-century Danish king Harald Bluetooth, who united Scandinavian Europe (Denmark and Norway) during an era when the region was torn apart by wars. The Bluetooth technology operates in the 2.4 GHz industrial, scientific and medical (ISM) band and can connect different devices such as computers, printers and telephones. The connections can be made up to 10 m, or extended up to 100 m, depending upon the Bluetooth version being used.

Bluetooth Architecture

A Bluetooth LAN is an ad hoc network, which means the network is formed by the devices themselves by detecting each other's presence. The number of devices connected in Bluetooth LAN should not be very large as it can support only a small number of devices. The Bluetooth LANs can be classified into two types of networks, namely, piconet and scatternet (Figure 9.13).

Piconet: It refers to a small Bluetooth network in which the number of active stations cannot exceed eight. It consists of only one primary station (known as the master) and up to seven secondary stations (known as slaves). If the number of secondary stations exceeds seven, the additional secondary stations are put in the parked state. A secondary station in the parked state cannot participate in communication in the network until it leaves the parked state. All stations within a piconet share a common channel (communication link) and only the master can establish the link. However, once a link has been established, the other stations (slaves) can also request to become the master. Slaves within a piconet must also synchronize their internal clocks and frequency hops with those of the master.

Figure 9.13 Bluetooth Piconets and Scatternets

Scatternet: It refers to a Bluetooth network that is formed by the combination of piconets. A device may be a master in one piconet while a slave in another piconet, or a slave in more than one piconet.
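The active/parked membership rule for a piconet can be modelled with a small sketch. The class and method names below are hypothetical, and the real parked-state mechanics (beacon signalling, parked-member addresses) are omitted.

```python
class Piconet:
    """Toy model of the limits described above: one master and at most
    seven active slaves; any further slaves go into the parked state."""
    MAX_ACTIVE = 7

    def __init__(self, master: str):
        self.master = master
        self.active = []   # slaves that may communicate
        self.parked = []   # slaves waiting for an active slot

    def join(self, slave: str) -> str:
        if len(self.active) < self.MAX_ACTIVE:
            self.active.append(slave)
            return "active"
        self.parked.append(slave)
        return "parked"

    def unpark(self, slave: str) -> bool:
        # A parked slave may become active only if a slot frees up.
        if slave in self.parked and len(self.active) < self.MAX_ACTIVE:
            self.parked.remove(slave)
            self.active.append(slave)
            return True
        return False
```

Joining an eighth slave therefore returns "parked", matching the limit of one master plus seven active slaves.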

16. Discuss the Bluetooth protocol stack.

Ans: Bluetooth protocol stack is a combination of multiple protocols and layers (Figure 9.14). Bluetooth comprises several layers including radio layer, baseband layer, L2CAP layer and other upper layers. In addition, various protocols are also associated with Bluetooth protocol stack. The description of these layers and protocols is as follows:

Figure 9.14 Bluetooth Protocol Stack

Radio Layer

This is the lowest layer in the Bluetooth protocol stack and is similar to the physical layer of the transmission control protocol/Internet protocol (TCP/IP) model. The Bluetooth devices at this layer have low power and a range of 10 m. This layer uses the ISM band of 2.4 GHz, which is divided into 79 channels, each of 1 MHz. To avoid interference from other networks, Bluetooth applies the frequency-hopping spread spectrum (FHSS) technique, in which a packet is divided into different parts and each part is transmitted at a different frequency. The bits are converted to a signal using a variant of FSK known as Gaussian frequency shift keying (GFSK), that is, FSK with Gaussian bandwidth filtering. In GFSK, bit 1 is represented by a frequency deviation above the carrier frequency and bit 0 by a frequency deviation below the carrier frequency.

Baseband Layer

This layer is similar to the MAC sublayer in LANs. It uses time-division multiple access (TDMA), and the primary and secondary stations communicate with each other using time slots. Bluetooth uses a form of TDMA known as TDD-TDMA (time-division duplex TDMA), a sort of half-duplex communication that uses different hops for each direction of communication (from primary to secondary or vice versa). If there is only one secondary station in the piconet, the primary station uses the even-numbered slots while the secondary station uses the odd-numbered slots. That is, in slot 0, data flows from primary to secondary, while in slot 1, data flows from secondary to primary. This process continues until the end of the frame transmission. Now, consider the case where there is more than one secondary station in the piconet. In this case also, the primary station sends in even-numbered slots; however, only the one secondary station (out of many) that received data in the previous slot transmits in the odd-numbered slot. For example, if in slot 0 the primary station (P) has sent data intended for a secondary station (S), then only S can transmit in slot 1.
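The slot discipline above reduces to a tiny scheduling rule, sketched below. The helper name `slot_owner` is illustrative, not part of any Bluetooth API.

```python
def slot_owner(slot: int, last_polled: str) -> str:
    """Who transmits in a given TDD-TDMA slot, per the scheme above:
    the primary sends in even-numbered slots; in an odd-numbered slot
    only the secondary addressed in the previous (even) slot replies."""
    return "primary" if slot % 2 == 0 else last_polled

# Slot 0: primary sends to S1; slot 1: only S1 may reply.
owners = [slot_owner(s, "S1") for s in (0, 1)]
```

The rule is the same whether the piconet has one slave or seven; with several slaves, `last_polled` simply changes from one even slot to the next.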

L2CAP Layer

The logical link control and adaptation protocol (L2CAP) is similar to LLC sublayer in LANs. This layer is used for exchanging data packets. Each data packet comprises three fields (Figure 9.15), which are as follows:

Figure 9.15 Format of Data Packet of L2CAP Layer

Length: It is a 2-byte long field that is used to specify the size of data received from upper layers in bytes.
Channel ID (CID): It is a 2-byte long field that uniquely identifies the virtual channel made at this level.
Data and Control: This field contains data that can be up to 65,535 bytes as well as other control information.
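The three-field layout can be packed and unpacked as below. This is a minimal sketch of the basic L2CAP header (little-endian byte order, as in the Bluetooth specification); the function names are hypothetical.

```python
import struct

def l2cap_pack(cid: int, payload: bytes) -> bytes:
    """Build a basic L2CAP data packet: 2-byte Length, 2-byte CID,
    then the payload."""
    if len(payload) > 65535:
        raise ValueError("payload exceeds 65,535 bytes")
    return struct.pack("<HH", len(payload), cid) + payload

def l2cap_unpack(packet: bytes):
    """Recover the channel ID and payload from a packed L2CAP packet."""
    length, cid = struct.unpack_from("<HH", packet, 0)
    return cid, packet[4:4 + length]
```

The Length field counts only the payload bytes, which is why a 5-byte payload produces a header of `(5, cid)` followed by the data itself.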

The L2CAP layer performs many functions that are discussed as follows:

Segmentation and Reassembly: The application layer sometimes delivers a packet that is very large; however, the baseband layer supports only up to 2,774 bits, or 343 bytes, of data in the payload field. Thus, the L2CAP layer divides large packets into segments at the source, and these segments are reassembled at the destination.
Multiplexing: The L2CAP deals with multiplexing. At the sender's side, it acquires data from the upper layers, frames them and gives them to the baseband layer. At the receiver's station, it acquires frames from the baseband layer, extracts the data and gives them to the appropriate protocol layer.
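The segmentation and reassembly functions reduce to simple slicing. The 343-byte limit below is the maximum payload quoted in the text, and the function names are illustrative.

```python
def segment(data: bytes, max_payload: int = 343) -> list:
    """Split an upper-layer packet into baseband-sized segments
    (343 bytes here, the maximum given in the text)."""
    return [data[i:i + max_payload] for i in range(0, len(data), max_payload)]

def reassemble(segments: list) -> bytes:
    """Concatenate the segments back into the original packet."""
    return b"".join(segments)
```

A 1,000-byte packet, for example, becomes two full 343-byte segments followed by one 314-byte segment.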

Link Manager Protocol

The link manager protocol (LMP) helps a Bluetooth device to discover other devices when they come within radio range of each other. It uses peer-to-peer message exchange in order to perform various security functions such as authentication and encryption. The LMP layer performs the following functions:

Generation and exchange of encryption keys.
Link setup and negotiation of baseband packet size.
Controlling the power modes, connection state and duty cycles of Bluetooth devices in a piconet.

Host Controller Interface

The host controller interface (HCI) provides command line access to LMP and the baseband layer in order to control and receive the status information. It consists of the following three parts:

The HCI firmware, which is a part of the actual Bluetooth hardware.
The HCI driver, which is present in the Bluetooth device software.
The host controller transport layer, which is used to connect the firmware with the driver.

Radio Frequency Communication

Radio frequency communication (RFCOMM) is a serial line emulation protocol. It presents to the upper-layer protocols an emulated RS-232 wired serial interface running over the Bluetooth link.

Service Discovery Protocol (SDP)

Service discovery protocol (SDP) allows a Bluetooth device joining a piconet to discover the available services, their types and the mechanism to access these services.

Telephony Control Protocol Binary (TCS BIN)

Telephony control protocol binary (TCS BIN) is a bit-oriented protocol that helps to set up speech and data calls between Bluetooth devices by defining all the essential call control signalling protocols. It also defines mobility management procedures to handle groups of Bluetooth telephony control services (TCS) devices.


AT Commands

This protocol consists of a set of AT commands (attention commands), which are used to configure and control a mobile phone to act as a modem for fax and data transfers.

Point-to-Point Bluetooth

This is a point-to-point protocol (PPP) that carries IP packets to and from the IP layer and places them onto the link.


UDP/TCP/IP

These protocols are used for Internet communication.

Object Exchange Protocol

The Object Exchange (OBEX) protocol is a session protocol, which is used to exchange objects. It works like the hypertext transfer protocol, but in a much lighter fashion. It helps to browse the contents of a folder on some remote device.


vCard/vCal

These are the content formats supported by the OBEX protocol. A vCard specifies the format for an electronic business card, while vCal specifies the format for entries in a personal calendar; both formats are maintained by the Internet Mail Consortium.

17. Write a short note on virtual circuit networks.

Ans: A virtual circuit network combines the characteristics of circuit-switched and datagram networks and performs switching at the data link layer. Like circuit-switched networks, it requires a virtual connection to be established between the communicating nodes before any data can be transmitted. Data transmission in virtual circuit networks involves three phases: connection setup, data transfer and connection teardown. In the connection setup phase, the resources are allocated and each switch creates an entry for the virtual circuit in its table. After establishment of the virtual connection, the data transfer phase begins, in which packets are transmitted from source to destination; all packets of a single message take the same route to reach the destination. In the connection teardown phase, the communicating nodes inform the switches to delete the corresponding entry.

In virtual circuit networks, data is transmitted in the form of packets, where each packet contains an address in its header. Each packet to be transmitted carries a virtual circuit identifier (VCI) along with the data. The VCI is a small number that identifies the packet between two switches. When a packet arrives at a switch, its existing VCI is replaced with a new VCI as the packet leaves the switch.
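The VCI replacement step can be sketched with a toy switching table; the port and VCI numbers below are invented for illustration.

```python
# Hypothetical per-switch table: (incoming port, incoming VCI) ->
# (outgoing port, outgoing VCI).  Entries are created during setup.
TABLE = {
    (1, 14): (3, 22),
    (2, 71): (4, 41),
}

def forward(in_port: int, in_vci: int, payload: bytes):
    """Label swapping in a virtual circuit switch: look up the pair,
    rewrite the VCI, and emit the frame on the outgoing port."""
    out_port, out_vci = TABLE[(in_port, in_vci)]
    return out_port, out_vci, payload
```

A packet entering port 1 with VCI 14 thus leaves on port 3 carrying VCI 22; the VCI has only local, per-link significance.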

The main characteristic of virtual circuit networks is that nodes need not make any routing decision for the packets, which are to be transferred over the network. Decisions are made only once for all the packets using a specific virtual circuit. At any instance of time, each node can be connected via more than one virtual circuit to any other node. Thus, transmitted packets of a single message are buffered at each node and are queued for output while packets using another virtual circuit on the same node are using the line. Some of the advantages associated with virtual circuit approach are as follows:

All packets belonging to the same message arrive in the same order to the destination as sent by the sender. This is because every packet follows the same route to reach the receiver.
It ensures that all packets arriving at the destination are free from errors. For example, if any node receives a frame with an error, then a receiving node can request for retransmission of that frame.
Packets travel through a virtual circuit network rapidly, because routing decisions are made only once, at connection setup.

18. Write a short note on datagram networks.

Ans: Datagram networks are connectionless networks used for packet switching at the network layer. Here, the packets are referred to as datagrams. No virtual connection exists between the source and the destination, and each arriving datagram is treated independently by the switch regardless of the source and destination addresses provided in the datagram. Thus, different datagrams, even if they belong to the same message, may be forwarded through different paths to reach the destination. This results in the unordered arrival of datagrams at the receiver, with varying delay times. As switches also process datagrams belonging to other messages, it is possible that some datagrams are lost or dropped because of the unavailability of resources.

In datagram networks, there is no need of any connection setup or teardown phase. Each switch maintains a dynamic routing table that helps to deliver the datagrams to the intended receiver. This routing table contains the destination address of every node connected to the network and the corresponding forwarding output port. This approach provides better efficiency as compared to other networks such as circuit-switched networks because resources are allocated only when datagrams need to be transferred instead of setting a connection and reserving the resources in advance. However, datagrams may have to experience more delay as compared to packets in virtual circuit networks. This is because each datagram of a message can be forwarded through different switches and thus, may have to wait at a switch depending on the resources available at the switch at that instance of time.
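The per-datagram lookup can be sketched as follows, with a hypothetical routing table mapping destination addresses to output ports; no connection state is kept between lookups.

```python
# Hypothetical routing table in one datagram switch: destination
# address -> output port.  In practice the table is updated dynamically.
ROUTES = {"A": 1, "B": 2, "C": 2, "D": 3}

def route(datagram: dict) -> int:
    """Forward a datagram using only its destination address.  Because
    each datagram is looked up independently, two datagrams of the same
    message may leave on different ports if the table changes between
    their arrivals."""
    return ROUTES[datagram["dst"]]
```

This is the structural contrast with the virtual circuit approach: the lookup key is a globally meaningful destination address, not a per-link circuit identifier.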

19. Differentiate virtual circuits and datagram networks.

Ans: There are certain differences between virtual circuits and datagram networks that are listed in Table 9.3.

Table 9.3 Differences between Virtual Circuit Network and Datagram Networks

| Virtual Circuit Networks | Datagram Networks |
| --- | --- |
| It is a connection-oriented service. Thus, it requires setting up a circuit between the sender and receiver before transmission. | It is a connectionless service. |
| Each frame that is to be transmitted contains a virtual circuit identifier. | Each frame that is to be transmitted contains the source and destination addresses. |
| All frames belonging to the same message follow the same route to the destination. The route is selected at the time the virtual circuit is set up. | The frames belonging to the same message can follow different routes to reach the destination. |
| If a router fails, all virtual circuits passing through it are terminated. | A router failure has little effect on the network; only the loss of some packets takes place. |
| Congestion control is easy, as the virtual circuit is set up depending upon the available buffers. | Congestion control is difficult. |
| The delay associated with each packet is less. | The delay associated with each packet is more. |
| It provides less efficiency, because resources remain allocated to a stream of packets even when the connection is not being used. | It provides more efficiency, as resources are allocated only when required; if a packet of one message fails for some reason, the resources can be allocated to a packet of another message. |

20. What is X.25? With reference to X.25, explain the following:

(a) Switched virtual circuit and permanent virtual circuit

(b) Protocols used at the link level

(c) State diagrams to explain call setup and call clearing.

Ans: X.25, developed in 1976 by ITU-T, is the first public packet-switching network standard. It specifies an interface for exchanging data packets between the packet mode end system, called data terminal equipment (DTE), and the access node of the switched packet data network, called data circuit-terminating equipment (DCE). The DTE is operated by the user, while the DCE is operated by the service provider.

X.25 is a virtual circuit-switching network, which requires the prior establishment of a virtual connection between sender and receiver. Each connection is assigned a unique connection number that is included in each packet to be transmitted. Each packet comprises a 3-byte header followed by up to 128 bytes of data. The header consists of a 12-bit connection number, a packet sequence number and several other fields. X.25 provides flow and error control at both the data link and network layers.

(a) Switched Virtual Circuit and Permanent Virtual Circuit X.25 offers end-to-end virtual communication path through the network for communication between two DTEs. This virtual path can be of two types: switched virtual circuit (SVC) and permanent virtual circuit (PVC). An SVC is a temporary switched connection that is established upon the request of a DTE and is terminated when data transmission is over. It involves three phases, namely, call setup, data transfer and call clearing. In the call setup phase, an entry for the virtual circuit (connection between a source and a destination) is made in each switch. The network resources are allocated for the entire duration of transmission. In data transfer phase, the data packets are exchanged between the communicating DTEs. The communication between DTEs is made via local and remote DCEs. The calling DTE sends the data packets to its local DCE, which forwards the packets to remote DCE through the virtual circuit, established between them. The remote DCE finally hands over the packets to the called DTE. In call clearing phase, the virtual connection is terminated and resources are deallocated.

On the other hand, PVC is a constant (fixed) connection established between two DTEs. It need not be established or terminated for every instance of communication between the DTEs. Thus, it does not require call setup and call clearing phases and always remains in the data transfer phase.

(b) Protocols used at the Link Level The interface of X.25 has been defined at three levels including level 1, level 2 and level 3 which correspond to physical, data link and network layer of OSI model, respectively. Various protocols are used at each level. The link level (level 2) uses data link protocols whose functionality is same as that of the HDLC. These protocols are as follows:

Link Access Protocol, Balanced (LAPB): This protocol is the most common protocol and has been derived from HDLC. It supports all the characteristics of HDLC and can also form a logical link connection.

Link Access Protocol (LAP): This protocol is the earlier version of the LAPB protocol and it is rarely used today.

Link Access Procedure, D Channel (LAPD): This protocol has been derived from the LAPB protocol. It is mainly used in integrated services digital networks (ISDNs), supporting data transmission between DTE and ISDN node. The transmission is mainly done through channel D.

Logical Link Control (LLC): This is an IEEE 802 protocol used in LANs. It allows transmission of X.25 packets through a LAN channel.

(c) State Diagrams to explain Call Setup and Call Clearing The communication between two DTEs initiates through the call setup phase. In this phase, initially, the calling DTE sends a Call Request packet to its local DCE. After receiving a Call Request packet, the local DCE forwards this packet to the next node thus, establishing the virtual connection up to the remote DCE, which serves the required DTE. The remote DCE then sends an Incoming Call packet to the called DTE to indicate the willingness of calling DTE to communicate with it. If the called DTE is ready to communicate, it sends a Call Accepted packet to the remote DCE, which then forwards this packet to the local DCE via the same virtual connection. After receiving the Call Accepted packet, the local DCE sends a Call Connected packet to the calling DTE to indicate the successful establishment of connection. Figure 9.16 depicts the whole process of call setup phase.

Figure 9.16 Call Setup Phase
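The call setup exchange can be written out as data. This sketch simply enumerates the sequence described above as (sender, receiver, packet) triples; the station labels are illustrative.

```python
def call_setup_messages(calling_dte="DTE-A", called_dte="DTE-B"):
    """The packet exchange of the X.25 call setup phase, in order."""
    return [
        (calling_dte, "local DCE", "Call Request"),
        ("local DCE", "remote DCE", "Call Request"),   # forwarded via the network
        ("remote DCE", called_dte, "Incoming Call"),
        (called_dte, "remote DCE", "Call Accepted"),
        ("remote DCE", "local DCE", "Call Accepted"),
        ("local DCE", calling_dte, "Call Connected"),
    ]
```

The call-clearing phase follows the same pattern with Clear Request, Clear Indication and Clear Confirm packets.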

Generally, the call-clearing phase is initiated after the completion of data transfer between calling and called DTEs. However, in certain situations, such as when call is not accepted by the called DTE or when a virtual circuit cannot be established, the call-clearing procedure is also initiated. The call can be terminated by either of the communicating parties or by the network. For example, if the calling DTE wants to clear the connection, it sends a Clear Request packet to the local DCE which forwards this packet to the remote DCE. To forward the call-clearing request to called DTE, the remote DCE sends a Clear Indication packet to it. In response, the called DTE sends a Clear Confirm packet to the remote DCE, which then forwards this packet to local DCE. The local DCE passes this packet to the calling DTE, thus terminating the connection. Figure 9.17 depicts the whole process of call-clearing phase initiated by DTE.

Figure 9.17 Call-clearing Phase

Now, consider the case where the call-clearing phase is initiated by the network. In this case, both the local and remote DCE send a Clear Indication packet to the calling and called DTE, respectively. On receiving the packets, the calling and called DTEs respond with a Clear Confirm packet to the local and remote DCE respectively, thus, terminating the connection. The call clearing by the network may result in the loss of some data packets.

21. List some drawbacks of X.25.

Ans: X.25 is a virtual circuit network that was first developed in 1976 by ITU-T. It has some drawbacks, which are as follows:

It provides a low data rate of only up to 64 kbps. Thus, it cannot be used to transmit bursty data.
Error and flow control are performed at both the data link and network layers. This results in great overhead and reduced transmission speed.
It has its own network layer, as it was designed for private use. Thus, if X.25 is to be used with a network that has its own network layer, such as the Internet, the network layer packets of the Internet have to be delivered to X.25, which then encapsulates them into X.25 packets. This increases the overhead.

22. What are T1/T3 lease lines? List some of their drawbacks.

Ans: Some organizations started working in separation from the X.25. They started using their own private wide area networks (WANs), where a line (T1 or T3) was leased from the private service providers. These lines (T1 or T3) are known as leased lines. Like X.25, the leased lines also have certain drawbacks, which are as follows:

They were too costly, as organizations had to pay for them even when the lines were not in use.
Only fixed-rate data can be transmitted over T1/T3 lines. It is not possible to send different frames at different bandwidths.

23. Discuss Frame Relay in detail.

Ans: Frame Relay is a virtual circuit WAN that came into existence in the late 1980s to meet the demands of a new WAN with faster transmission capability. Prior to Frame Relay, some organizations were using virtual circuit network X.25 and some organizations were having their own private WANs using lease lines (T1 or T3) from public service providers. Both these technologies suffered from severe limitations and thus, were replaced by Frame Relay.

Frame Relay provides a higher transmission speed of 44.736 Mbps and allows sending a frame of size up to 9,000 bytes. It operates at the physical and data link layers only and thus can easily be used with networks having their own network layer, such as the Internet. It allows bursty data to be sent through it and is less expensive as compared with other WANs. It does not provide flow control; error detection, however, is supported, and that too only at the data link layer.


Frame Relay is a virtual circuit network in which each virtual circuit is uniquely identified by a number known as data link connection identifier (DLCI). It provides two types of virtual circuits, which are as follows:

Permanent Virtual Circuit (PVC): In this circuit, a permanent connection is created between a source and a destination, and the administrator makes a corresponding entry in the table of each switch. An outgoing DLCI is given to the source and an incoming DLCI is given to the destination. Using PVC connections is costly, as both source and destination have to pay for the connection even if it is not in use. Moreover, it connects a single source to a single destination. Thus, if the source needs a connection with another destination, a separate PVC has to be set up.
Switched Virtual Circuit (SVC): In this circuit, a short temporary connection is created and that connection exists as long as data transmission is taking place between source and the destination. After the transmission of the data, the connection is terminated.

In a Frame Relay network, the frames are routed with the help of a table associated with each switch in the network. Each table contains an entry for every virtual circuit that has already been set up. The table contains four fields for each entry: incoming port, incoming DLCI, outgoing port and outgoing DLCI. Whenever a frame arrives at a switch, the switch searches the table for an entry matching the incoming port and incoming DLCI combination. After a match has been found, the DLCI of the arrived frame is replaced with the outgoing DLCI (found in the table) and the frame is routed to the outgoing port. In this way, the frame travels from switch to switch and eventually reaches the destination. Figure 9.18 depicts how data is transferred from a source to a destination in a Frame Relay network.

Figure 9.18 Data Transfer in a Frame Relay Network
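The four-field table lookup works just like the sketch below; the ports and DLCI values are invented for illustration, and an unmatched pair is treated as "no virtual circuit set up".

```python
# Hypothetical switching table with the four fields named in the text.
FR_TABLE = [
    {"in_port": 1, "in_dlci": 122, "out_port": 2, "out_dlci": 588},
    {"in_port": 2, "in_dlci": 99,  "out_port": 3, "out_dlci": 480},
]

def relay(in_port: int, in_dlci: int):
    """Match the (incoming port, incoming DLCI) pair, replace the DLCI,
    and forward on the outgoing port; an unknown pair means no circuit
    exists, so the frame is dropped (None)."""
    for entry in FR_TABLE:
        if entry["in_port"] == in_port and entry["in_dlci"] == in_dlci:
            return entry["out_port"], entry["out_dlci"]
    return None
```

As with VCIs in a generic virtual circuit network, the DLCI is rewritten hop by hop and has only local significance on each link.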

Frame Relay Layers

Frame Relay comprises two layers: the physical and the data link layer. There is no specific protocol defined for the physical layer; the implementer can use any ANSI-recognized protocol at this layer. In contrast, at the data link layer, a simple protocol is used, which does not offer any flow or error control; however, error detection is supported by this protocol. Frame Relay uses a frame at the data link layer whose format is shown in Figure 9.19.

Figure 9.19 Format of Frame Relay Frame

The Frame Relay frame consists of five fields, which are described as follows:

Flag: It is an 8-bit long field, which is used at the start and end of the frame. The starting flag indicates the start of the frame, whereas the ending flag indicates the end of the frame.
Address: This field has a default length of 16 bits (2 bytes), and may be extended up to 4 bytes. The address field is further divided into various subfields, which are described as follows:
  • DLCI: This field is specified in two parts in the frame as shown in Figure 9.19. The first part is 6 bits long, while the second part is 4 bits long. Together, these 10 bits identify the data link connection defined by the standard.
  • Command/Response (C/R): It is a 1-bit long field that enables the upper layers to recognize whether a frame is a command or a response.
  • Extended Address (EA): This field is also specified in two parts in the frame, with each part of 1 bit. It indicates whether the current byte is the final byte of the address. If the value is zero, another address byte follows; if it is one, the current byte is the final byte of the address.
  • Forward Explicit Congestion Notification (FECN): It is a 1-bit long field that informs the destination that congestion has occurred. Congestion may lead to loss or delay of data.
  • Backward Explicit Congestion Notification (BECN): It is a 1-bit long field that informs the sender that congestion has occurred. The sender then slows down the transmission rate in order to prevent data loss.
  • Discard Eligibility (DE): It is a 1-bit long field that indicates the priority level of the frame. If its value is set to one, the network may discard the frame in case of congestion.
Information: It is a variable-length field that carries higher-level data.
FCS: It is a 16-bit long field, which is used for error detection.
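As a rough illustration of the default 2-byte address layout described above, the following Python sketch packs and unpacks the address field. The helper names are made up; the bit positions follow the field order given in the text (DLCI split 6 + 4 bits, with C/R and EA in the first byte and FECN, BECN, DE and EA in the second).

```python
def pack_address(dlci, cr=0, fecn=0, becn=0, de=0):
    """Pack the default 2-byte address field: the 10-bit DLCI is split
    6 + 4 bits across the two bytes; EA is 0 in the first byte and 1
    in the last byte to mark the end of the address."""
    assert 0 <= dlci < 1024                       # 10-bit DLCI
    b1 = ((dlci >> 4) << 2) | (cr << 1) | 0       # DLCI upper 6, C/R, EA=0
    b2 = ((dlci & 0xF) << 4) | (fecn << 3) | (becn << 2) | (de << 1) | 1
    return bytes([b1, b2])

def unpack_dlci(addr):
    """Recover the 10-bit DLCI from a packed 2-byte address field."""
    return ((addr[0] >> 2) << 4) | (addr[1] >> 4)

addr = pack_address(dlci=100, becn=1)   # e.g. a switch signalling congestion
assert unpack_dlci(addr) == 100
```

Because EA is 1 only in the last address byte, a receiver can also handle the extended 3- and 4-byte address formats by reading bytes until it sees EA set.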

24. Discuss congestion control in Frame Relay networks.

Ans: Frame Relay was designed to provide a higher data rate, and to achieve that, congestion must be avoided. However, congestion control is difficult in Frame Relay networks as they support variable data rates. In addition, Frame Relay does not have a network layer for flow control. To avoid congestion in the network, Frame Relay uses two notification bits in the frame, namely, forward explicit congestion notification (FECN) and backward explicit congestion notification (BECN). The FECN bit signals the receiver about congestion in the network so that the receiver can slow the flow of frames, for example by delaying acknowledgements to the higher layers. The BECN bit signals the sender about congestion in the network. To inform the source, a switch can either set the BECN bit in a response frame travelling from the receiver towards the sender or use a special connection with DLCI 1023 to send special frames. The sender then reduces the rate of data transmission to relieve the congestion.

25. What is the limitation of using Frame Relay?

Ans: In Frame Relay, the frame length is not fixed, that is, a user may transmit frames of different sizes. As all the frames are stored in the same queue, a small frame queued after a long frame experiences a different delay than a small frame queued before the long frame. That is, the delay varies from frame to frame. This makes Frame Relay unsuitable for real-time applications such as audio and video, as these applications are time sensitive.

26. What is ATM? Explain the architecture of ATM network?

Ans: Asynchronous transfer mode (ATM) is the standard specified by ITU-T for cell relay, in which multiple service types including voice, video and data are transferred in the form of fixed-size cells. A cell is the basic unit of data exchange in ATM. Each ATM cell is 53 bytes long, comprising 5 bytes of header and 48 bytes of payload or data. Further, ATM networks are connection-oriented even though they employ a packet-switching technique. They allow bursty traffic to pass through, and devices with different speeds can communicate with each other via an ATM network. Thus, ATM combines the advantages of both packet switching and circuit switching.

Architecture of ATM Network

An ATM network is composed of ATM switches and ATM endpoints. An ATM switch deals with transmission of a cell through the network. It takes the cell from an ATM switch or ATM endpoint, reads the cell header information and updates it. After this, it switches the cell to an output interface towards its intended destination. An ATM endpoint is a user access device such as router, workstation and digital service unit (DSU) that consists of a network interface adapter.

Two types of interfaces, namely, user-to-network interface (UNI) and network-to-network interface (NNI), are used in an ATM network. The UNI connects the ATM endpoints to the ATM switches inside the network, whereas the NNI connects the switches within the network. Each UNI and NNI can be further classified as public or private. A private UNI connects an ATM endpoint and a private switch, while a public UNI connects an ATM endpoint or a private switch to a public switch. Similarly, a private NNI connects two ATM switches located within the same private organization, while a public NNI connects two ATM switches within a public network. The architecture of an ATM network is shown in Figure 9.20.

Figure 9.20 Architecture of an ATM Network

Two ATM endpoints are connected through a transmission path (TP), virtual paths (VPs) and virtual circuits (VCs). A transmission path is the physical connection, such as a wire or cable, that links an ATM endpoint with an ATM switch or two ATM switches with one another. It consists of a set of virtual paths. A virtual path refers to the link (or a group of links) between two ATM switches. Each virtual path is a bundle of virtual circuits that share the same path. A virtual circuit refers to the logical path that connects two points. All the cells corresponding to the same message pass through the same virtual circuit, in the same order, until they reach the destination.

In order to route cells from one ATM endpoint to another, the virtual connections must be uniquely identified. Each virtual connection is identified by the combination of a virtual path identifier (VPI) and a virtual circuit identifier (VCI). The VPI uniquely identifies the virtual path, while the VCI uniquely identifies the virtual circuit within it; both VPI and VCI are included in the ATM cell header. Notice that all the virtual circuits belonging to the same virtual path possess the same VPI. The length of the VPI differs between UNI and NNI: it is 8 bits in UNI but 12 bits in NNI. On the other hand, the length of the VCI is the same in both UNI and NNI, namely 16 bits. Thus, to identify a virtual connection, a total of 24 bits is required in UNI while 28 bits are required in NNI.
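The identifier widths above translate directly into the size of the connection space at each interface; a quick arithmetic check:

```python
# UNI: 8-bit VPI + 16-bit VCI = 24 identifier bits per cell header;
# NNI: 12-bit VPI + 16-bit VCI = 28 identifier bits.
UNI_VPI_BITS, NNI_VPI_BITS, VCI_BITS = 8, 12, 16

uni_ids = 2 ** (UNI_VPI_BITS + VCI_BITS)
nni_ids = 2 ** (NNI_VPI_BITS + VCI_BITS)
print(uni_ids)  # 16777216 distinct (VPI, VCI) pairs at a UNI
print(nni_ids)  # 268435456 distinct pairs at an NNI
```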

ATM also uses PVC and SVC connections, like Frame Relay. The difference is that ATM was designed from the start to support audio and video applications using SVCs, whereas in Frame Relay, SVCs were added only later to the original PVC service.

27. How are ATM cells multiplexed?

Ans: In ATM, the asynchronous time division multiplexing technique is followed to multiplex the cells from different input sources. The size of each slot is fixed and is equal to the size of a cell. A slot is filled with a cell from an input source only when that source has a cell to send; no slot is reserved for a source that has nothing to send. Whenever one or more sources have cells waiting, the ATM multiplexer puts one of these cells into the next slot. Once all the cells have been multiplexed, any remaining slots are sent empty into the network.

Figure 9.21 shows the cell multiplexing from four input sources P, Q, R and S. At the first clock tick, as the input source Q has no cell to send, the multiplexer takes a cell from S and puts it into the slot. Similarly, at the second clock tick, Q again has no cell to send, so the multiplexer fills the slot with a cell from R. This process continues until all the cells have been multiplexed; thereafter, the output slots are empty.

Figure 9.21 Multiplexing in ATM
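The behaviour in Figure 9.21 can be imitated with a toy Python multiplexer. The source names and queued cells below are invented, and real ATM slot scheduling is more involved; the point is only that empty sources are skipped rather than wasting slots.

```python
from collections import deque

# Queued cells per source are invented; Q starts empty, as in Figure 9.21.
sources = {
    "P": deque(["P1", "P2"]),
    "Q": deque([]),
    "R": deque(["R1"]),
    "S": deque(["S1", "S2"]),
}

output = []
while any(sources.values()):
    for name, q in sources.items():
        if q:                       # empty sources are skipped, not given slots
            output.append(q.popleft())

print(output)  # -> ['P1', 'R1', 'S1', 'P2', 'S2']
```

This is what distinguishes asynchronous TDM from synchronous TDM, where Q's slot would be transmitted empty every round.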

28. Discuss ATM layers.

Ans: The ATM standard has defined three layers, namely, physical layer, ATM layer and application adaptation layer (AAL) as shown in Figure 9.22.

Figure 9.22 ATM Layers

Physical Layer

The physical layer is responsible for managing the medium-dependent transmission. It carries the ATM cells by converting them into bit streams. It is responsible for controlling the transmission and receipt of bits as well as maintaining the boundaries of an ATM cell. Originally, ATM was designed to use synchronous optical network (SONET) as the physical carrier. However, other physical technologies can also be used with ATM.

ATM Layer

The ATM layer is similar to the data link layer of the OSI model. It is responsible for cell multiplexing and for passing cells through the ATM network (called cell relay). Other services provided by the ATM layer include routing, traffic management and switching. It accepts a 48-byte segment from the AAL and adds a 5-byte header, transforming it into a 53-byte cell. Further, ATM uses separate header formats for UNI and NNI cells (Figure 9.23). The header format of the UNI cell is the same as that of the NNI cell, except for the GFC field, which is included in the UNI cell but not in the NNI cell.

Figure 9.23 Format of an ATM Cell Header

The description of fields included in cell header is as follows:

Generic Flow Control (GFC): It is a 4-bit long field that is used for flow control at the UNI. At the NNI, this field is not required, so its bits are assigned to the VPI instead. The longer VPI allows a greater number of virtual paths to be defined at the NNI.
Virtual Path Identifier (VPI): It is an 8-bit long field in UNI cell and 12-bit long field in NNI cell that identifies a specific virtual path.
Virtual Circuit Identifier (VCI): It is 16-bit long field that identifies a specific virtual circuit inside the virtual path.
Payload Type (PT): It is a 3-bit long field in which the first bit defines the type of information carried, that is, user data or management information, while the interpretation of the other two bits depends on the first bit.
Cell Loss Priority (CLP): It is a 1-bit long field that is used for congestion control. A cell with CLP set to one may be discarded first when congestion occurs.
Header Error Control (HEC): It is a cyclic redundancy code that is used for error control. It is calculated over the first 4 bytes of the header using the divisor x^8 + x^2 + x + 1, and it can correct single-bit errors and detect multiple-bit errors in the header.
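A minimal Python sketch of building a UNI cell header with its HEC, assuming the field layout listed above. The function names are made up, and note that the actual ITU-T standard additionally XORs a fixed pattern (0x55) into the HEC byte, which is omitted here for simplicity.

```python
def crc8_hec(data):
    """CRC-8 over the first 4 header bytes, generator x^8 + x^2 + x + 1."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def uni_header(gfc, vpi, vci, pt, clp):
    """Build a 5-byte UNI header: GFC(4) VPI(8) VCI(16) PT(3) CLP(1) HEC(8)."""
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    first4 = word.to_bytes(4, "big")
    return first4 + bytes([crc8_hec(first4)])

hdr = uni_header(gfc=0, vpi=5, vci=33, pt=0, clp=0)
assert len(hdr) == 5 and crc8_hec(hdr[:4]) == hdr[4]
```

A receiver recomputes the CRC over the first 4 bytes and compares it with the fifth byte; a mismatch reveals a header error.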

Application Adaptation Layer

Application adaptation layer (AAL) accepts data frames or stream of bits from upper layers and divides them into fixed-size segments of 48 bytes. At the receiver's side, these data frames or stream of bits are again reassembled into their original form. An AAL layer is partitioned into two sublayers, namely, segmentation and reassembly sublayer (SAR) and convergence sublayer (CS). The SAR sublayer is responsible for segmentation of payload at the sender's side and reassembling the segments to create the original payload at the receiver's side. The CS sublayer is responsible for ensuring the integrity of data and preparing it for segmentation by the SAR sublayer. There are various types of AAL including AAL1, AAL2, AAL3/4 and AAL5. Out of these four versions, only AAL1 and AAL5 are commonly used.

29. Explain the structure of ATM adaptation layer.

Ans: The ATM standard has defined four versions of AAL, which include AAL1, AAL2, AAL3/4 and AAL5. All these versions are discussed as follows:


AAL1

It is a connection-oriented service that supports applications needing to transfer information at a constant bit rate, such as voice and video conferencing. The bit stream received from the upper layer is divided into 47-byte segments by the CS sublayer, and the segments are then passed to the SAR sublayer below it. The SAR sublayer appends a 1-byte header to each 47-byte segment and sends the resulting 48-byte segments to the ATM layer below it. The header added by the SAR sublayer consists of two fields (Figure 9.24), namely, sequence number (SN) and sequence number protection (SNP). The SN is a 4-bit field that specifies a sequence number for ordering the cells. The SNP is a 4-bit field that protects the sequence number: its first three bits carry a code that can correct errors in the SN field, and its last bit is a parity bit used to detect errors across all eight bits of the header.

Figure 9.24 SAR Header
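A hedged Python sketch of the SAR header byte described above, assuming the SNP's first three bits are a CRC over the SN with generator x^3 + x + 1 and the last bit is an even-parity bit over the first seven bits; the generator choice follows common descriptions of AAL1 and should be checked against the standard.

```python
def crc3(sn):
    """3-bit CRC of the 4-bit SN, assumed generator x^3 + x + 1 (0b1011)."""
    reg = sn << 3                     # append three zero bits
    for shift in range(6, 2, -1):     # long division, MSB first
        if reg & (1 << shift):
            reg ^= 0b1011 << (shift - 3)
    return reg & 0b111

def aal1_sar_header(sn):
    """Assemble SN(4) | CRC(3) | even parity(1) into one header byte."""
    seven = (sn << 3) | crc3(sn)
    parity = bin(seven).count("1") & 1    # makes the popcount of all 8 bits even
    return (seven << 1) | parity

assert aal1_sar_header(5) >> 4 == 5       # SN survives in the top nibble
```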


AAL2

Initially, it was designed to support applications that require a variable data rate. However, it has since been redesigned to support low-bit-rate traffic and short-frame traffic such as audio, video or fax. AAL2 multiplexes short frames into a single cell. Here, the CS sublayer appends a 3-byte header to the short packets received from the upper layers and then passes them to the SAR sublayer. The SAR sublayer combines the short frames to form 47-byte frames and adds a 1-byte header to each frame. Then, it passes the 48-byte frames to the ATM layer.

The header added by CS sublayer consists of five fields (Figure 9.25(a)) which are described as follows:

Channel Identifier (CID): It is an 8-bit long field that specifies the channel user of the packet.
Length Indicator (LI): It is a 6-bit long field that indicates the length of data in a packet.
Packet Payload Type (PPT): It is a 2-bit long field that specifies the type of a packet.
User-to-User Indicator (UUI): It is a 3-bit long field that can be used by end-to-end user.
Header Error Control (HEC): It is a 5-bit long field that is used to correct errors in the header.

The header added by SAR consists of only one field [Figure 9.25(b)], start field (SF) that specifies an offset from the beginning of the packet.

Figure 9.25 CS and SAR Headers


AAL3/4

Originally, AAL3 and AAL4 were defined separately to support connection-oriented and connectionless services, respectively. Later, however, they were combined into a single format, AAL3/4, which thus supports both connection-oriented and connectionless services. Here, the CS sublayer forms a PDU by inserting a header at the beginning of a frame and appending a trailer at its end. It passes the PDU to the SAR sublayer, which partitions the PDU into segments and adds a 2-byte header to each segment. It also adds a trailer to each segment.

The header and trailer added by the CS layer together consist of six fields (Figure 9.26) that are described as follows:

Common Part Identifier (CPI): It is an 8-bit long field that helps to interpret the subsequent fields.
Begin Tag (Btag): It is an 8-bit long field that indicates the beginning of a message. The value of this field is same for all the cells that correspond to a single message.
Buffer Allocation Size (BAsize): It is a 16-bit long field that tells the receiver the buffer size needed to hold the incoming data.
Alignment (AL): It is an 8-bit long field that is added to make the trailer 4 bytes long.
Ending Tag (Etag): It is an 8-bit long field that indicates the end of the message. It has the same value as that of Btag.
Length (L): It is a 16-bit long field that specifies the length of the data unit.

Figure 9.26 CS Header and Trailer

The header and trailer added by the SAR sublayer together consist of five fields (Figure 9.27) that are described as follows:

Segment Type (ST): It is a 2-bit long field that specifies the position of a segment corresponding to a message.
Sequence Number (SN): It is a 4-bit long field that specifies the sequence number.
Multiplexing Identifier (MID): It is a 10-bit long field that identifies the flow of data to which the incoming cells belong.
Length Indicator (LI): It is a 6-bit long field in the trailer that specifies the length of the data in the packet, excluding padding.
CRC: It is a 10-bit long field that contains CRC computed over the entire data unit.

Figure 9.27 SAR Header and Trailer


AAL5

This layer supports both connection-oriented and connectionless data services. It assumes that all cells corresponding to a single message follow one another in sequential order and that the upper layers of the application provide the control functions. This layer is also known as the simple and efficient adaptation layer (SEAL). Here, the CS sublayer appends a trailer to the packet received from the upper layers and then passes it to the SAR sublayer. The SAR sublayer forms 48-byte frames from it and then passes them to the ATM layer.

The trailer added by CS layer consists of four fields (Figure 9.28) that are described as follows:

User-to-User (UU): It is an 8-bit long field that is available for use by the end users.
Common Part Identifier (CPI): It is an 8-bit long field that serves the same function as the CPI field in AAL3/4.
Length (L): It is a 16-bit long field that specifies the length of the data.
CRC: It is a 32-bit long field, which is used for error detection over the entire PDU.

Figure 9.28 CS Trailer
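The trailer layout and the padding needed to reach a multiple of 48 bytes can be sketched as follows. Here `zlib.crc32` merely stands in for the AAL5 CRC (ITU-T I.363.5 defines its own bit ordering), so this is a structural illustration, not a bit-exact implementation.

```python
import struct
import zlib

def aal5_cs_pdu(payload, uu=0, cpi=0):
    """Append padding plus the 8-byte trailer (UU, CPI, Length, CRC) so the
    whole CS-PDU is a multiple of the 48-byte cell payload size."""
    pad = (-(len(payload) + 8)) % 48
    body = payload + bytes(pad) + struct.pack(">BBH", uu, cpi, len(payload))
    return body + struct.pack(">I", zlib.crc32(body))

pdu = aal5_cs_pdu(b"hello")
segments = [pdu[i:i + 48] for i in range(0, len(pdu), 48)]  # SAR's 48-byte units
assert len(pdu) == 48 and len(segments) == 1
```

Because the Length field records the true payload size, the receiver can strip the padding after reassembling the cells and checking the CRC.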

30. List some benefits of ATM.

Ans: An ATM is a cell-switched network that provides several benefits over its counterpart, the Frame Relay. Some of these benefits are as follows:

It provides high bandwidth for applications that generate bursty traffic. For example, in video the amount of motion, and hence the data rate, varies over time, and in audio a conversation alternates between speech and silence. ATM is therefore well suited to such applications.
Technologies such as Frame Relay use frames of different sizes, which makes the traffic difficult to manage. In contrast, ATM was developed to carry audio and video using a cell-switched technology in which fixed-size cells are used. This improves efficiency, as it is easier to quantify, predict and manage the network traffic.
Further, although ATM is primarily a WAN technology, it can be used as both a LAN and a WAN technology. It can cover large distances to link LANs or WANs.

31. What is the relationship between SONET and SDH?

Ans: Both synchronous optical network (SONET) and synchronous digital hierarchy (SDH) are wide area networks used as transport networks: thanks to the high bandwidth of fibre-optic cables, they can carry high-rate data from other WANs as well as aggregate vast amounts of lower-rate traffic. The main difference is their origin: SONET was defined by ANSI in the United States, while SDH was defined by ITU-T and is used mainly in Europe. Both SONET and SDH are synchronous networks that use synchronous TDM multiplexing, with a master clock controlling all the clocks in the system. The two standards are independent of each other; however, they are functionally similar and compatible, and can be regarded as nearly identical.

32. How is an STS multiplexer different from an add/drop multiplexer since both can add signals together?

Ans: Both the synchronous transport signal (STS) multiplexer and the add/drop multiplexer are devices used in SONET. The STS multiplexer serves as an interface between electrical and optical networks. At the sender's end, the STS multiplexer combines the signals coming from various electrical sources into the corresponding optical carrier (OC) signal. This optical signal passes through the SONET link and finally reaches the receiver. At the receiver's end, an STS demultiplexer separates the OC signal back into the corresponding electrical signals.

The add/drop multiplexer is used along the SONET link to insert or remove signals. It can combine the STSs from several sources into a single path, or extract a desired signal and send it to some other path, without demultiplexing the whole stream. In SONET, the optical signals produced by the STS multiplexer pass through a regenerator, which regenerates the weakened signals. The regenerated signals are then passed to the add/drop multiplexer, which transmits them in the appropriate directions as per the information available in the data frames (Figure 9.29). The main difference between the add/drop multiplexer and the STS multiplexer is that the add/drop multiplexer does not demultiplex the signals before delivering them.

Figure 9.29 A Simple Network using SONET Equipment

33. What is the function of a SONET regenerator?

Ans: The SONET regenerator is a repeater that regenerates weak signals. Because of the long distances travelled from one multiplexer to another, the signals become weak and need to be regenerated. The regenerator receives an OC signal and demodulates it into the corresponding electrical signal. This electrical signal is then regenerated and finally modulated back into an OC signal. The regenerator functions at the data link layer.

34. What are the four SONET layers? Discuss the functions of each layer.

Ans: There are four functional layers included in the SONET standard, namely, photonic, section, line and path (Figure 9.30).

Figure 9.30 Relationship of SONET Layers with the OSI Model

Photonic Layer: It is the lowest layer, whose functionalities are similar to those of the physical layer of the OSI model. This layer includes the physical specifications of the optical fibre, the multiplexing functionalities and the sensitivity of the receiver. Further, SONET uses NRZ encoding, where the presence of light represents bit 1 and the absence of light represents bit 0.
Section Layer: The functions of the section layer include framing, scrambling and error control. It has the responsibility of moving a signal across a physical section. In addition, the section overhead is added to the frame at this layer.
Line Layer: The line layer takes care of the movement of signal across a physical line. At this layer, the line layer overhead is added to the frame. The line layer functions are provided by STS multiplexers and add/drop multiplexers.
Path Layer: The movement of a signal from its optical source to its optical destination is the responsibility of the path layer. At the optical source, the signal is changed from electronic form into optical form, multiplexed with other signals and finally encapsulated into a frame. At the optical destination, the received frame is demultiplexed and the individual optical signals are changed back into electronic form. STS multiplexers provide the path layer functionalities. At this layer, the path overhead is added to the signal.

35. What are virtual tributaries?

Ans: SONET was originally introduced to carry broadband payloads. However, the data rates of the existing digital hierarchy, ranging from DS-1 to DS-3, are lower than STS-1. Thus, the virtual tributary (VT) was introduced to make SONET compatible with the existing digital hierarchy. A VT is a partial payload that can be inserted into an STS-1 frame and combined with many other partial payloads to fill the frame. The VTs inserted in the STS-1 frame are organized in the form of rows and columns. Four types of VTs have been defined to make SONET compatible with the existing digital hierarchies:

VT1.5: This VT accommodates the US DS-1 service and provides a bit rate of 1.544 Mbps. It occupies three columns and nine rows.
VT2: This VT accommodates the European CEPT-1 service and provides a bit rate of 2.048 Mbps. It occupies four columns and nine rows.
VT3: This VT accommodates the DS-1C service and provides a bit rate of 3.152 Mbps. It occupies six columns and nine rows.
VT6: This VT accommodates the DS-2 service and provides a bit rate of 6.312 Mbps. It occupies 12 columns and nine rows.
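The column counts above determine each VT's raw capacity: a VT occupies columns × 9 rows bytes in every 125-microsecond SONET frame (8000 frames per second). The quick check below confirms that each VT's capacity exceeds the rate of the tributary it carries; the function name is just for illustration.

```python
# A VT occupies (columns x 9 rows) bytes in every 125 microsecond SONET frame,
# so its capacity is columns * 9 * 8 bits * 8000 frames per second.
def vt_capacity_mbps(columns):
    return columns * 9 * 8 * 8000 / 1e6

for name, cols, service_rate in [("VT1.5", 3, 1.544), ("VT2", 4, 2.048),
                                 ("VT3", 6, 3.152), ("VT6", 12, 6.312)]:
    cap = vt_capacity_mbps(cols)
    assert cap >= service_rate     # the VT has room for the tributary it carries
    print(name, cap)               # VT1.5 -> 1.728, VT2 -> 2.304, ...
```

The difference between the VT capacity and the tributary rate is used for VT overhead and pointer bytes.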

Multiple Choice Questions

1. Fast Ethernet operates at
(a) 10 Mbps
(b) 100 Mbps
(c) 1000 Mbps
(d) none of these
2. 10-Base F uses
(a) optical fibre
(b) coaxial cable
(c) twisted pair
(d) none of these
3. IEEE 802.4 is used to describe
(a) Token Ring
(b) Token Bus
(d) Ethernet
4. The access method used in FDDI is
(b) token passing
(c) timed token passing
(d) none of these
5. Extended service set in IEEE 802.11 consists of
(a) only one basic service set with AP
(b) only one basic service set without AP
(c) at least two basic service sets without APs
(d) at least two basic service sets with APs
6. An example of wireless LAN is
(a) Bluetooth
(b) Ethernet
(c) both (a) and (b)
(d) none of these
7. Which of the following statements is false?

(a) Virtual circuit network provides less efficiency.

(b) Congestion control is easy in virtual circuit networks.

(c) Virtual circuit does not require set up of a connection before transmission.

(d) all of these

8. Which of the following statements is false?
(a) In Frame Relay, frame length is not fixed.
(b) Frame Relay was developed for real-time applications.
(c) ATM uses fixed-size cells.
(d) none of these
9. Which of the following is a type of interface in an ATM network?
(a) user-to-network
(b) network-to-network
(c) user-to-user
(d) both (a) and (b)
10. AAL1 is a
(a) connection-less service
(b) connection-oriented service
(c) both (a) and (b)
(d) none of these


Answers
1. (b)

2. (a)

3. (b)

4. (c)

5. (d)

6. (a)

7. (c)

8. (b)

9. (d)

10. (b)