Over the past decade, researchers have introduced various wireless QoS mechanisms, almost all of them incorporated at the MAC or physical (PHY) layer. To address the reliability issues of the wireless channel, various error control mechanisms have also been introduced as part of the transmission protocols to improve jitter, loss rate, and overall throughput. The legacy QoS mechanisms include the following.
The medium access control (MAC) sublayer is the part of the data link layer that acts as an interface between the logical link control sublayer and the network's physical layer. The MAC layer controls which node of the wireless network is allowed to access the shared channel and how nodes communicate with each other; hence it has a significant impact on the MSR, MRR, latency, jitter, and priority characteristics observed at each node. Many different wireless MAC schemes have been developed to support a wide variety of services while trying to ensure QoS. Figure 1 depicts the QoS schemes in the legacy 802.11. Each scheme has been optimized to support a particular application or set of applications, and this optimization leads to its inherent strengths and weaknesses, which determine how effectively the scheme functions in practice for a particular mix of applications.
The MAC designed for IEEE 802.11b was originally intended to allow quick, easy, and robust access to a wireless channel without complicated addressing or queuing techniques. Differentiation of services is usually all that it achieves, and latency and jitter remain unpredictable due to the random nature of each client's waiting time for channel access. The average throughput in a saturated network running the 802.11 distributed coordination function (DCF) MAC is equal for all nodes if they all have the same traffic pattern. The IEEE 802.11e standard implements an enhanced version of DCF. This is still a contention-based MAC using carrier sense multiple access with collision avoidance (CSMA/CA). Traffic at each node is differentiated into up to eight queues, each having a different arbitration interframe space (AIFS) and a different minimum contention window. Traffic classes with a shorter AIFS and a smaller window have a higher probability of gaining access to the medium. This scheme guarantees bandwidth for high-priority traffic very well while still maintaining connectivity for low-priority traffic. The enhanced distributed channel access (EDCA) also achieves reasonably good latency performance. However, each queue essentially runs its own DCF, so as the number of users rises, the collision rate increases quite rapidly and limits the throughput. SpectraLink, one of the world's largest providers of voice over IP (VoIP) telephony products, developed its own scheme, SpectraLink voice priority (SVP), for providing QoS in 802.11 networks in the absence of a suitable standard. SVP is a modification of 802.11 which specifies that the back-off time for higher-priority packets should be set to zero. In the original specification of SVP, setting the contention window to zero for high-priority traffic is done only at the access point.
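The EDCA differentiation just described can be illustrated with a short sketch. The AIFSN and CWmin/CWmax values below roughly follow the 802.11e defaults for an OFDM PHY, but the timing constants and the single simplified backoff draw are illustrative only, not a faithful 802.11e implementation:

```python
import random

# Illustrative EDCA parameters per access category, roughly following
# the 802.11e defaults for an OFDM PHY (values are for illustration).
SLOT, SIFS = 9, 16  # microseconds
ACCESS_CATEGORIES = {
    "AC_VO": {"aifsn": 2, "cwmin": 3,  "cwmax": 7},     # voice
    "AC_VI": {"aifsn": 2, "cwmin": 7,  "cwmax": 15},    # video
    "AC_BE": {"aifsn": 3, "cwmin": 15, "cwmax": 1023},  # best effort
    "AC_BK": {"aifsn": 7, "cwmin": 15, "cwmax": 1023},  # background
}

def contention_delay(ac):
    """Time an access category defers before transmitting:
    AIFS (= SIFS + AIFSN * slot) plus a random backoff of up to
    CWmin slots. Shorter AIFS and smaller CWmin make early access
    statistically more likely for high-priority queues."""
    params = ACCESS_CATEGORIES[ac]
    aifs = SIFS + params["aifsn"] * SLOT
    backoff = random.randint(0, params["cwmin"]) * SLOT
    return aifs + backoff

# One round of contention: the category with the smallest delay wins.
delays = {ac: contention_delay(ac) for ac in ACCESS_CATEGORIES}
```

Because each queue draws its backoff independently, this also shows why EDCA behaves like several parallel DCFs: with many contenders the probability that two delays coincide (a collision) grows quickly.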
SVP also specifies that higher-priority packets should either be placed at the head of the queue or put in a separate queue entirely. Both methods are designed to give packets carrying higher-priority data timely access to the network, at the expense of more collisions; collisions, however, often reduce the total throughput of the system. Because SVP is based on the concept of DCF, many of DCF's shortcomings are also evident in SVP. The wireless token network (WTN) is another MAC design that incorporates only the overhead absolutely necessary to provide good throughput and QoS. All decisions during the design phase leaned toward lower transmission overhead, and hence WTN uses the available bit rate more efficiently than 802.11. The WTN, however, cannot offer guaranteed QoS when the network is overloaded and also suffers from higher jitter because of its design.
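The SVP queuing rule described above can be sketched as follows. The `SvpQueue` class, the dictionary frame format, and the DCF contention window value are all hypothetical, introduced only to illustrate the head-of-queue placement and zero backoff for voice frames:

```python
from collections import deque
import random

CWMIN = 15  # illustrative DCF minimum contention window, in slots

class SvpQueue:
    """Sketch of SVP-style queuing: voice frames go to the head of
    the transmit queue (SVP alternatively allows a separate queue),
    while normal frames join the tail."""

    def __init__(self):
        self.frames = deque()

    def enqueue(self, frame):
        if frame.get("voice"):
            self.frames.appendleft(frame)  # head of the queue
        else:
            self.frames.append(frame)      # tail, as usual

    def next_transmission(self):
        """Dequeue the next frame; per SVP, voice frames contend
        with a zero backoff, others draw a normal DCF backoff."""
        frame = self.frames.popleft()
        backoff = 0 if frame.get("voice") else random.randint(0, CWMIN)
        return frame, backoff
```

A voice frame enqueued after a data frame is still transmitted first, and with no backoff, which is exactly the behavior that raises the collision risk noted above.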
The 802.11 standard specifies multiple transmission rates that can be achieved by different modulation techniques at the PHY layer. The philosophy is to adapt the modulation technique to the channel conditions so that the received error rate remains within a limit and QoS does not degrade substantially. The standard, however, leaves the rate adaptation and signaling mechanisms open. Because transmission rates depend on the channel conditions, an optimized link adaptation mechanism is desirable to maximize throughput under different channel conditions. Most existing link adaptation mechanisms focus on algorithms that switch among the transmission rates specified in the physical layer convergence procedure (PLCP), without the need to modify existing standards. One scheme for 802.11b, however, adjusts the length of the direct sequence spread spectrum (DSSS) pseudo-noise (PN) code with slight modifications to its DCF. Metrics commonly used in existing link adaptation algorithms include channel signal-to-noise ratio/carrier-to-interference ratio (SNR/CIR), average payload length, received power level, and transmission acknowledgments. Received signal strength (RSS) is a metric used in adaptation algorithms under the assumption that transmission power is fixed; it further assumes a linear relationship between the average RSS and the SNR. Based on the measured RSS, the station dynamically switches to an appropriate transmission rate.
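The RSS-based switching described above might be sketched as a simple threshold table. The dBm thresholds and their mapping to the four 802.11b rates are hypothetical; real thresholds depend on the receiver hardware and must be calibrated:

```python
# Hypothetical RSS thresholds (dBm) mapped to 802.11b PHY rates (Mb/s).
# Ordered from least to most robust; real values are hardware-dependent.
RSS_RATE_TABLE = [
    (-65, 11.0),
    (-72, 5.5),
    (-80, 2.0),
    (-90, 1.0),
]

def select_rate(avg_rss_dbm):
    """Pick the highest rate whose RSS threshold the measured average
    RSS still meets (fixed transmit power assumed, and average RSS
    taken as a linear proxy for SNR)."""
    for threshold, rate in RSS_RATE_TABLE:
        if avg_rss_dbm >= threshold:
            return rate
    return RSS_RATE_TABLE[-1][1]  # fall back to the most robust rate
```

A strong signal selects 11 Mb/s; as the measured RSS drops the station steps down to more robust (lower-rate) modulations rather than suffering a high error rate.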
Packet error rate (PER) prediction is another link adaptation scheme in which decisions are based on a PER prediction that depends not only on SNR/CIR but also on the momentary channel transfer function. MAC protocol data unit (MPDU)-based link adaptation uses a combination of SNR, average payload length, and frame retry count as the metric for the adaptation algorithm; it pre-establishes a table of best transmission rates for decision making. Link adaptation with success/fail (S/F) thresholds uses the ACKs of transmitted frames as a measure of channel condition and adjusts the transmission rate depending on the subsequent successful transfer of frames. Code Adapts To Enhance Reliability (CATER) is an adaptive PN code algorithm for the DSSS used in 802.11b, designed to improve throughput under high bit error rate (BER) channel conditions.
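The S/F-threshold idea can be sketched as a counter-driven rate ladder: consecutive ACKs past a success threshold step the rate up, consecutive missing ACKs past a fail threshold step it down. The threshold values and class structure below are illustrative, not taken from any particular specification:

```python
RATES = [1.0, 2.0, 5.5, 11.0]  # 802.11b PHY rates in Mb/s
SUCCESS_THRESHOLD = 10         # illustrative S threshold
FAIL_THRESHOLD = 2             # illustrative F threshold

class SFRateAdapter:
    """Success/fail-threshold link adaptation sketch: each ACK (or
    missing ACK) updates a counter; crossing a threshold moves the
    rate one step up or down and resets both counters."""

    def __init__(self):
        self.index = len(RATES) - 1  # start optimistically at 11 Mb/s
        self.successes = 0
        self.failures = 0

    def on_ack(self, received):
        if received:
            self.successes += 1
            self.failures = 0
            if self.successes >= SUCCESS_THRESHOLD:
                self.index = min(self.index + 1, len(RATES) - 1)
                self.successes = 0
        else:
            self.failures += 1
            self.successes = 0
            if self.failures >= FAIL_THRESHOLD:
                self.index = max(self.index - 1, 0)
                self.failures = 0

    @property
    def rate(self):
        return RATES[self.index]
```

The asymmetric thresholds (slow to climb, quick to drop) reflect the usual design choice: a failed high-rate frame is costlier than a few frames sent at a conservative rate.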
A wireless network is not as reliable as a wired network, and errors in transmitted packets are common in wireless communication. Errors become even more evident when nodes are mobile, since slow and fast fading corrupt the received packets. An error control mechanism attempts to address the problems caused by errors in received signals and thereby maintains QoS by improving loss rate, jitter, and overall throughput. The Transmission Control Protocol (TCP) is a popular protocol designed to provide reliable and orderly delivery of a stream of bytes and is a key part of the TCP/IP protocol suite. TCP provides a simple interface to applications by hiding most of the underlying packet structure, rearranging out-of-order packets, minimizing network congestion, and retransmitting corrupted packets. Forward error correction (FEC) is another error control mechanism for data transmission. In FEC, the sender adds redundant data to its messages, which allows the receiver to detect and correct errors within a certain limit without the need for retransmission. FEC block codes are applied to a sequence of packets; in case of packet loss or error, the receiver reconstructs the missing packets from the redundant information carried in the error-correcting codes. Naturally, the error-correcting capability of FEC comes at a cost, because the FEC codes represent redundant information that increases the overall transmission rate. FEC is highly effective where the communication medium is unreliable and retransmitting too many packets would prove costly in the context of the available bandwidth.
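The block-FEC idea can be illustrated with the simplest possible code: a single XOR parity packet per block, which lets the receiver repair exactly one lost packet without retransmission. Real FEC schemes (e.g., Reed-Solomon block codes) tolerate more losses, but the redundancy-versus-rate trade-off is the same:

```python
def xor_parity(packets):
    """Compute one XOR parity packet over a block of equal-length
    packets. This is the redundant data the sender transmits in
    addition to the block itself."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """Reconstruct the single missing packet (marked None) by XORing
    the parity with every packet that did arrive: the surviving XOR
    terms cancel, leaving the lost packet."""
    missing = bytearray(parity)
    for pkt in received:
        if pkt is not None:
            for i, byte in enumerate(pkt):
                missing[i] ^= byte
    return bytes(missing)

# Sender: 3 data packets + 1 parity packet (33% rate overhead here).
block = [b"abcd", b"efgh", b"ijkl"]
parity = xor_parity(block)
# Receiver: the middle packet is lost in transit but is recovered
# locally, with no retransmission needed.
restored = recover([block[0], None, block[2]], parity)
```

The parity packet is the "cost" noted above: for a block of n packets, one extra packet of redundancy buys tolerance of one loss anywhere in the block.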
Although considerable effort has gone into improving QoS in the 802.11 standard, the most it can achieve is to differentiate traffic and treat each class according to its priority, and to adapt the transmission rate to varying environments so that throughput degrades gracefully. Due to its design limitations at different layers, the 802.11 standard cannot offer guaranteed QoS, which is one of the key motivations behind the introduction of another standard, IEEE 802.16, also known as WiMAX.