Tuesday, December 28, 2010

DECENTRALIZED SUBCARRIER AND POWER ALLOCATION SCHEMES


The distributed infrastructure of WiMAX mesh and relay networks, and the need to reduce communication overhead between the BS and network nodes, are the motivations behind proposing decentralized resource allocation schemes. In a decentralized resource allocation, either no central controller exists or the controller does not influence the allocation decision. This makes decentralized schemes more scalable.

Resource allocation in decentralized networks is essentially different from that in centralized ones. In centralized schemes, the BS collects CSI from all users, allocates the subcarriers or power to the users, and informs the users of the allocated resources. In decentralized schemes, on the other hand, users may not need to know the CSI of the other parties. Moreover, the parties may not be aware of each other's decisions, so a collision is probable. Accordingly, each proposed distributed resource allocation scheme for OFDM-based networks should answer the following questions:
  • How does a user obtain the required CSI? The hidden terminal and exposed node problems are common in self-organized and decentralized networks, because no central controller is assumed to exist to assist in signaling. Moreover, the signaling overhead should be reasonable for a practical implementation.

  • How should a node coordinate or compete with the other nodes to attain the resources? A node does not know the requirements of the other nodes at each instant, so competition or coordination is a “must” for a node that wants to start capturing resources.
An interference-aware subchannel allocation scheme has been proposed that overcomes these drawbacks of decentralized schemes.

As the scheme uses updated CSI at the beginning of each MAC frame, the channel need not be assumed time-invariant over multiple MAC frames. The scheme is appropriate for OFDM-TDD networks. The MAC frame is divided into mini-slots; at the first mini-slot of each MAC frame, the ongoing receiver nodes broadcast a busy signal to inform their respective transmitters of the quality of the allocated subchannel. The inactive nodes do not send any busy signal. The ongoing transmitter nodes listen to the busy signal and adapt their subcarrier allocation to their specific receiver nodes according to the information in the busy signal. Also, a node that wants to start transmission listens to the busy signal and chooses the subcarriers that are not affected by ongoing transmissions and their interference. In other words, it selects those subcarriers with a received busy signal power less than a threshold. The advantages of the scheme are as follows:
  • Signaling overhead is low compared to other radio resource allocation schemes
  • The co-channel interference is reduced significantly since the scheme is interference aware
  • Full frequency reuse is possible
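The threshold rule used by a joining node can be sketched in a few lines. The function name, power values, and threshold below are illustrative assumptions, not part of the scheme's specification.

```python
# Hypothetical sketch: a transmitter that wants to start sending picks the
# subcarriers whose received busy-signal power is below a threshold, i.e.,
# those not occupied by (or interfered with by) ongoing transmissions.

def select_free_subcarriers(busy_signal_power, threshold):
    """Return indices of subcarriers whose busy-signal power is below threshold."""
    return [k for k, p in enumerate(busy_signal_power) if p < threshold]

# Example: measured busy-signal power (arbitrary units) on 8 subcarriers.
measured = [0.9, 0.1, 0.4, 0.05, 0.7, 0.2, 0.8, 0.02]
free = select_free_subcarriers(measured, threshold=0.3)
print(free)  # subcarriers with little ongoing interference: [1, 3, 5, 7]
```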
A decentralized power allocation problem for a cooperative transmission is formulated in Ref. [43]. The objective is to minimize the total transmission power of all users subject to providing the minimum rate requirement of each user. The network model uses time division multiple access with OFDM multiplexing (TDMA-OFDM), so only one user accesses all the OFDM subcarriers in each time slot. The user may allocate some of the subcarriers to transmit its own traffic and the rest of the subcarriers to relay the other users’ traffic, so as to minimize the total transmission power. In other words, the resource allocation problem determines whether a user should cooperate with other users, and in case of cooperation it determines the subcarrier and power allocation. A cooperative user may be allocated more power than a noncooperative user, because its location and channel gain allow it to cooperate with other users. However, the total network power is minimized by cooperation among users. A decoupled subcarrier and power allocation scheme is proposed to solve the problem.

Another scheme assumes an access point in the network and proposes distributed decision making for the resource allocation. Each user measures its CSI upon receiving a beacon signal from the access point. The subcarriers are divided into several groups (equal in number to the users), and the approximate channel gain of each group is estimated for each user. Then, the users contend with each other to obtain the group with the best channel quality from their own point of view. A backoff mechanism is proposed to avoid collision: each user starts contending for its best group of subcarriers after a backoff time that is inversely proportional to the best group gain for that user, so the user with the best channel contends first. After contention, the access point informs the users of the winners, which can transmit in the next transmission interval. A frame is divided into three subframes: contention, acknowledgment, and transmission. This scheme is actually a distributed resource allocation scheme for a centralized network that aims to reduce the signaling overhead and the processing load of the access point.
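The contention step can be sketched as follows. Taking the backoff as inversely proportional to the best group gain, so that users with better channels contend sooner, is an assumption of this sketch; the names and values are illustrative.

```python
# Hypothetical sketch of the group-contention step: each user backs off for
# a time derived from the gain of its best subcarrier group, then claims
# that group. The user whose timer expires first wins the contention.

def contend(best_gains, scale=1.0):
    """best_gains: {user: channel gain of its best subcarrier group}.
    Returns users ordered by expiry of their backoff timers."""
    backoff = {u: scale / g for u, g in best_gains.items()}
    return sorted(backoff, key=backoff.get)

order = contend({"A": 2.0, "B": 0.5, "C": 4.0})
print(order)  # the user with the highest best-group gain contends first
```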

A collaborative subcarrier allocation scheme can also be designed using swarm intelligence. The scheme relies on the central controller to obtain updated information on the available subcarriers and the highest and lowest demands for each subcarrier. In other words, the nodes negotiate with the central controller iteratively in a negotiation phase. In each iteration, the nodes inform the central controller of their demand for a specific subcarrier. Then, the central controller broadcasts a message indicating the highest and lowest demands for each subcarrier. Based on these feedback messages, which occur several times in each negotiation phase, the nodes intelligently decide on their subcarriers.

The capacity performance of OFDM-TDMA and OFDM-FDMA has also been compared in a two-hop distributed network. In OFDM-TDMA, the time frame is divided into two halves; in each half, all subcarriers are devoted to transmission over one hop. A power allocation algorithm is proposed to maximize the end-to-end capacity subject to the overall transmit power constraint of the two hops. For OFDM-FDMA, the subcarriers are assigned to the two hops without overlapping, and a joint subcarrier and power allocation algorithm is proposed. Simulation results show that OFDM-FDMA achieves a higher end-to-end capacity than OFDM-TDMA.
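A toy numerical illustration of this comparison is given below. The SNR values are made-up assumptions, and the FDMA split is a simple greedy assignment rather than the joint algorithm of the cited work.

```python
import math

def capacity(snrs):
    """Sum of Shannon rates (bits/s/Hz) over a set of subcarriers."""
    return sum(math.log2(1 + s) for s in snrs)

snr_hop1 = [10.0, 8.0, 6.0, 4.0]   # per-subcarrier SNR on hop 1 (assumed)
snr_hop2 = [3.0, 5.0, 9.0, 12.0]   # per-subcarrier SNR on hop 2 (assumed)

# OFDM-TDMA: each hop gets all subcarriers for half the frame; the
# end-to-end rate is limited by the weaker hop.
c_tdma = 0.5 * min(capacity(snr_hop1), capacity(snr_hop2))

# OFDM-FDMA: subcarriers are split between the hops without overlap; here
# each subcarrier simply goes to the hop on which it is stronger.
c1 = c2 = 0.0
for a, b in zip(snr_hop1, snr_hop2):
    if a >= b:
        c1 += math.log2(1 + a)
    else:
        c2 += math.log2(1 + b)
c_fdma = min(c1, c2)

print(c_fdma > c_tdma)  # True: FDMA wins here, consistent with the text
```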

Friday, December 24, 2010

MAC SUBLAYER | OFDM-BASED WiMAX


Resource allocation is one of the major tasks of MAC because the medium access mechanisms of MAC directly affect the spectrum and power utilization. Accordingly, the MAC sublayer specifications in the IEEE 802.16 standard, relevant to our discussion in the next section, are introduced in the following.

Despite the advantages of OFDM in mitigating the channel's impairments, as mentioned before, its disadvantage is the underutilization of transmitter power and network subcarriers. When an OFDM transmitter accesses the channel in a time division manner, e.g., time division multiple access (TDMA), the transmitter is forced to transmit on all available subcarriers Nsc, although it may require fewer subcarriers to satisfy its transmission rate requirement. Consequently, the transmitter power consumption increases with the number of subcarriers. This disadvantage motivates the development of a PHY technology where transmitters are multiplexed in both time and frequency, i.e., OFDMA. In such a technology, each user is exclusively assigned a subset of the network's available subcarriers in each time slot. Both time slots and subcarriers can be dynamically assigned to each user; this is referred to as dynamic subcarrier assignment, which introduces multiuser diversity. The multiuser diversity gain arises from the fact that the utility of given resources varies from one user to another: a subcarrier may be in a deep fade for one user while it is not for another. Allocating this particular subcarrier to the user with the higher channel gain permits a higher transmission rate. To achieve the multiuser diversity gain, a scheduler at the MAC sublayer is required to schedule users in appropriate frequency and time slots.
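A minimal sketch of dynamic subcarrier assignment exploiting multiuser diversity is shown below: each subcarrier goes to the user with the highest channel gain on it. The gain values are illustrative, and a real scheduler would also weigh fairness and rate requirements.

```python
def assign_subcarriers(gains):
    """gains[u][k]: channel gain of user u on subcarrier k.
    Returns a list mapping each subcarrier to the winning user index."""
    n_sc = len(gains[0])
    return [max(range(len(gains)), key=lambda u: gains[u][k])
            for k in range(n_sc)]

# Subcarrier 1 is in a deep fade for user 0 but not for user 1, so it is
# allocated to user 1, realizing the multiuser diversity gain.
gains = [[0.9, 0.05, 0.7],
         [0.3, 0.80, 0.6]]
print(assign_subcarriers(gains))  # [0, 1, 0]
```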

Point-to-multipoint (PMP) as well as mesh topologies are supported in the IEEE 802.16 standard. PMP mode operations are centrally controlled by the BS, but mesh mode can be either centralized or decentralized, i.e., distributed. In a centralized mesh, a mesh BS (a node that is directly connected to the backbone) coordinates communications among the nodes. Decentralized mesh is similar to multihop adhoc networks in the sense that the nodes should coordinate among themselves to avoid collision or reduce the transmission interference.

In PMP mode, the uplink (UL) channel, transmissions from users to BS, is shared by all users, i.e., UL is a multiple access channel. On the other hand, downlink (DL) channel, transmissions from BS to subscriber stations (SSs), is a broadcast channel. The duplexing methods of UL and DL include time division duplexing (TDD), frequency division duplexing (FDD), and half-duplex FDD (HFDD). Unlike PMP mode, there is no clear UL and DL channel defined for mesh mode.

An outstanding feature of WiMAX is heterogeneous traffic support over wireless channels. IEEE 802.16 provides service for four traffic types known as service flows. The mechanism of bandwidth assignment to each SS depends on the QoS requirements of its service flows. The service flows and their corresponding bandwidth request mechanisms are as follows:
  • Unsolicited grant service (UGS): This service supports constant bit rate traffic. Bandwidth is granted to this service by the BS periodically, whenever traffic is present.

  • Real-time polling service (rtPS): This service has been provided for real-time service flows with variable-size data packets issued periodically. rtPS flows can send their bandwidth request to the BS after being polled.

  • Nonreal-time polling service (nrtPS): This service is for nonreal-time traffic with variable-size data packets. nrtPS can gain access to the channel using unicast or multicast polling mechanisms. Upon receiving a multicast poll, the nrtPS service can take part in contention within the bandwidth contention range.

  • Best effort (BE) service: This service provides the minimum required QoS for nonreal-time traffic. The channel access mechanism of this service is based on contention.

Sunday, December 19, 2010

WIMAX HARQ

HARQ is supported in WiMAX when the OFDMA physical layer is used. As mentioned before, WiMAX uses a basic stop-and-wait HARQ protocol. Using multiple HARQ channels can compensate for the propagation delay of the stop-and-wait scheme; that is, one channel transmits data while the others are waiting for feedback. Therefore, with a small number of HARQ channels (e.g., 6), multichannel stop-and-wait HARQ is a simple, efficient protocol that simultaneously achieves high throughput and a low memory requirement.


In this scheme, the HARQ channels are independent of each other; that is, a data burst can only be retransmitted by the HARQ channel that initially sent it. Each HARQ channel is distinguished by a HARQ channel identifier (ACID, for ARQ channel identifier). Data reordering in an SS is done by referring to Protocol Data Unit (PDU) sequence numbers, which are enabled when HARQ operation is used. The HARQ Downlink Map Information Element (DL MAP IE) contains the information about the DL HARQ Chase sub-burst IE. It specifies the location of the HARQ sub-bursts, the ACID, the HARQ Identifier Sequence Number (AI_SN), etc. By referring to the MAP IE, an SS can correctly retrieve a given data burst. HARQ feedback is sent by the SS after a fixed delay (this is called synchronous feedback). To specify the start of a new transmission on each HARQ channel, the one-bit HARQ AI_SN is toggled on each successful transmission.

When CC is used in WiMAX HARQ, the total buffer capability needed in a specific SS is determined by the maximum data size that may be transmitted by a single channel at a time and the total number of HARQ channels provided for the SS (i.e., the product of these two values). In the IEEE 802.16 standard, this buffer capacity is used by the BS as a rate control mechanism when sending data to an SS.

1. A Simple Example for WiMAX HARQ

In this section, the multiple-channel stop-and-wait HARQ used by WiMAX is illustrated by a simple example. As mentioned above, the HARQ channels are independent of each other; retransmissions of a data burst can only be done by its initial sending HARQ channel. Thus, a negatively acknowledged (NAKed) data burst can only be resent via the initial sending channel until it is successfully received.

Figure 1 shows the channel activities versus time frames of a HARQ scheme, from a BS to a particular SS, assuming that four channels are available for the SS. The first row shows the consecutive time frames. The next four rows show the data bursts sent and the ACK or NAK received on each channel at the corresponding time frame. The last row shows the data bursts stored in the SS buffer at the corresponding time frame (waiting to be forwarded to the upper layer). It is assumed that each data burst is received either correctly or erroneously. Erroneously received data bursts are placed in the receiver’s buffer waiting for correct copies. On the other hand, correctly received data bursts are placed in the receiver’s buffer waiting to be forwarded further (either to the next hop or to the upper layer) in the right sequence, when the receiver is still waiting for a correct copy of some earlier data bursts.

 
Figure 1: A simple example of the WiMAX HARQ scheme.

Figure 1 may be explained as follows. Data bursts 1 through 4 are sent via channels 1 through 4, at time frames 1 through 4, respectively. The corresponding ACK or NAK is received by the BS by the end of the 4th time frame after the BS sent out the data burst. So, at the end of the 4th time frame, a NAK for data burst 1 is received, and thus the second copy of data burst 1 is sent at time frame 5, via channel 1 (the same channel on which it was initially sent). Similarly, the second copy of data burst 2 is sent at time frame 6 via channel 2. Since an ACK for data burst 3 is received at time frame 6, a new data burst (5) is sent via channel 3 at time frame 7.

The data bursts stored in the receiver’s buffer may be explained as follows. At time frame 4, data bursts 1, 2, and 3 are in the receiver’s buffer. Data bursts 1 and 2 have been erroneously received (and are waiting for correct copies), while data burst 3, even though it has been correctly received, is also waiting so that it may be forwarded further in the correct order. Similarly, at time frame 5, data bursts 1 through 4 are in the buffer. At time frame 6, only data bursts 2 through 4 are left in the buffer, since data burst 1 has been correctly forwarded. At time frame 7, after data burst 2 is correctly forwarded, data bursts 3 and 4 are also forwarded, so nothing is left in the buffer. The rest of the time frames may be explained similarly.
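The channel activity in the example above can be reproduced with a short simulation. The round-robin channel schedule and the feedback timing mirror the description; which transmissions fail is an illustrative assumption.

```python
# Small simulation of 4-channel stop-and-wait HARQ: channel c transmits at
# frames c, c+4, c+8, ..., and the ACK/NAK for a burst arrives just in time
# for that channel's next turn, so a NAKed burst is resent on the same
# channel four frames later.

def simulate(n_frames, n_ch, failures):
    """failures: set of frame numbers whose transmission is NAKed.
    Returns {frame: burst_id} giving what is sent in each frame."""
    sent = {}
    next_burst = 1
    pending = [None] * n_ch          # burst awaiting ACK on each channel
    for t in range(1, n_frames + 1):
        c = (t - 1) % n_ch           # channel serving this frame
        if pending[c] is None:       # previous burst on c was ACKed (or none)
            pending[c] = next_burst
            next_burst += 1
        sent[t] = pending[c]
        if t not in failures:        # ACK: channel freed for a new burst
            pending[c] = None
    return sent

# Bursts 1 and 2 fail on their first transmission, as in Figure 1.
sched = simulate(8, 4, failures={1, 2})
print([sched[t] for t in range(1, 9)])  # [1, 2, 3, 4, 1, 2, 5, 6]
```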

Wednesday, December 15, 2010

ARQ AND HARQ

ARQ has long been used in wireless communications to ensure reliable, in-sequence data transmission by retransmitting corrupted data. There are three basic types of ARQ schemes: stop-and-wait, go-back-N, and selective repeat. The stop-and-wait scheme is the simplest to implement, yet it has the lowest throughput, especially when the propagation delay is long. To retain its advantage of simple implementation while avoiding the long, wasteful delays, the multiple-channel (or parallel) stop-and-wait ARQ has been proposed. One famous example is the Advanced Research Projects Agency Network (ARPAnet), which supports multiplexing of eight logical channels over a single link and runs stop-and-wait ARQ on each logical channel.


The performance of ARQ schemes degrades quickly as the channel worsens, because more retransmissions are required. To reduce the frequency of retransmissions, a system can adopt forward error correction (FEC) functionality to correct errors that occur during transmissions. Combining an ARQ scheme with FEC functionality is called HARQ.

In general, there are three types of HARQ, described below. In type I HARQ, the receiver simply discards erroneously received packets after it fails to correct their errors, and sends a negative acknowledgement (NAK) back to the transmitter to request a retransmission. There is thus no need for a buffer to store erroneously received packets. Because a fixed code rate is used for error correction, type I HARQ cannot effectively adapt to changing channel conditions. (Note that the code rate is defined as the ratio of the total number of information bits to the total bits transmitted; thus, the higher the code rate, the lower the redundancy.) Using too high a code rate may cause too many retransmissions in high packet error rate (PER) conditions; on the other hand, using too low a code rate may cause too much redundancy in low PER conditions. The throughput may therefore be degraded either by a high frequency of retransmissions or by too much redundant data in transmissions. Accordingly, choosing a suitable code rate is crucial for type I HARQ. Type I HARQ is therefore best suited for a channel with a consistent level of signal-to-noise ratio (SNR).

In type II HARQ, in addition to packets being coded with ARQ and FEC as in type I HARQ, each receiver also keeps the erroneously received packets in the buffer in order to combine them with the retransmitted packets. There are two major FEC categories of coding for type II HARQ: chase combining (CC) and incremental redundancy (IR), introduced below.

In CC, the receiver combines the received copies of the same packet to obtain diversity gain. All the redundant bits of each retransmitted packet are the same as in the first transmission; CC is therefore relatively simple but less adaptive to the channel condition, because the decoder may need only a smaller number of redundant bits to correct the errors. The buffer needed is the number of coded symbols of one coded packet.
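The diversity gain from chase combining can be sketched as follows: since every copy carries identical coded bits, combining the copies roughly sums their SNRs (a maximal-ratio-combining view). The SNR values are illustrative assumptions.

```python
import math

def combined_snr(copy_snrs):
    """Effective SNR after chase-combining all received copies of a packet."""
    return sum(copy_snrs)

first, retransmission = 1.5, 1.5
snr_after_cc = combined_snr([first, retransmission])
print(snr_after_cc)  # 3.0

# The combined signal supports a higher rate than either copy alone.
print(math.log2(1 + snr_after_cc) > math.log2(1 + first))  # True
```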

IR adapts to changing channel conditions by gradually retransmitting redundant bits to a receiver, while waiting for a positive ACK, until all redundancies are sent. At the beginning, a transmitter using IR sends coded packets with a small number of FEC redundant bits, or even none. If a retransmission is needed, different redundant bits, derived from different puncturing patterns, are retransmitted depending on the base coding rate. The information data are not retransmitted unless the receiver still cannot successfully decode the packet after the transmitter has sent all the redundant bits. This approach increases the receiver’s coding gain one retransmission at a time. The IR scheme therefore allows the system to adjust the channel encoder rates to the channel quality. Compared with CC, a bigger buffer is required to store all retransmitted data, including the first transmitted data.
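The IR retransmission loop can be sketched as follows. The decoder model, which succeeds once the accumulated code rate drops below a channel-dependent threshold, is a toy assumption; the chunk sizes stand in for punctured redundancy patterns.

```python
# Incremental-redundancy sketch: send lightly coded data first, then reveal
# extra redundancy bits only when NAKed, stopping once the receiver decodes.

def ir_transmit(info_bits, redundancy_chunks, decodable):
    """redundancy_chunks: sizes of the successive punctured-redundancy sends.
    decodable(code_rate) -> True once the receiver can decode.
    Returns the number of transmissions used."""
    sent = info_bits                      # first send: data with no redundancy
    for n, chunk in enumerate(redundancy_chunks, start=2):
        if decodable(info_bits / sent):
            return n - 1
        sent += chunk                     # retransmit *different* bits
    return len(redundancy_chunks) + 1

# Toy channel that supports code rates up to 1/2: all redundancy is needed.
n_tx = ir_transmit(100, [25, 25, 50], decodable=lambda r: r <= 0.5)
print(n_tx)  # 4
```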

Type II HARQ needs a larger buffer than type I HARQ, but it achieves higher throughput. The drawback of type II IR HARQ is that a receiver has to receive the first transmitted data to combine it with subsequently received redundant bits. To overcome this drawback, type III HARQ has been proposed [7]. It may be viewed as a special case of type II HARQ in which each retransmitted packet is self-decodable. A receiver can correctly decode the information data either by combining the first transmitted packet with a retransmitted packet or by using only one of the retransmitted packets.

Saturday, December 11, 2010

RANDOM ACCESS IN WiMAX

Random access in IEEE 802.16 involves the request portion of the request–grant process for network users. A portion of each UL frame is allocated to the contention-based initial access. This contention interval (channel) is divided into ranging (RNG) and BW request regions. These regions are used for initial network entry, ranging, power adjustments, and BW requests for UL transmission. In addition, best-effort data may be sent on the contention channel, but this is only suitable for the transmission of small amounts of data. This data may also include additional requests for resources.

The major tasks that employ the contention channel are initial ranging (IR) and BW requests. IR comprises channel synchronization and ranging procedures during network entry, such as closed-loop time–frequency and power adjustments. The contention channel is often termed the ranging channel in the IEEE 802.16e-2005 standard because BW requests are not necessarily performed on a contention basis, e.g., unsolicited grant service (UGS) and real-time polling service (rtPS) can be employed. In this case, the contention interval is mainly used for IR. A user enters the network by sending a request to the BS to allocate UL resources for data transmission.

The BS evaluates an SS request in the context of the SS service level agreement, and grants resources accordingly. The granted resources are in the form of variable-sized time by frequency bursts in the UL subframe. A burst is a group of subchannels over some PSs. These bursts constitute the major part of the WiMAX frame structure. Burst allocation information is included in the UL-MAP MAC message of the broadcast downlink subframe, specifically in the information element (IE) field. The IE is a data structure that contains complete information about a burst, such as the dimensions in time and frequency, start and end of each burst, and the corresponding physical channel information.

We explain the contention mechanisms and scheduling services for the UL contention channel as defined in the IEEE 802.16d-2004 and IEEE 802.16e-2005 standards.

WIMAX QOS AND SCHEDULING SERVICES

The scheduling service employed plays a key role in defining the QoS in WiMAX. It determines the data-handling mechanisms supported by the MAC scheduler for data transport on a connection. Each connection is characterized by a connection identifier (CID) and a set of QoS parameters. The scheduling service determines the number and quality of the UL and DL transmission opportunities, in addition to BW allocation mechanisms.

The five QoS categories in WiMAX are:
  • UGS: This class is designed to support delay-intolerant, real-time services with fixed-size data packets, such as VoIP. UGS allocates fixed grants to the specified SSs on a periodic real-time basis. Thus, an SS using the UGS class does not need to request resources, because the size and amount of the granted resources are defined at connection setup.

  • rtPS: The rtPS class is designed to support real-time services with variable-size data packets, such as Motion Picture Expert Group (MPEG) video traffic. This service offers real-time unicast request opportunities on a periodic basis, as with UGS, but with more request overhead than UGS, and it supports variable grant sizes. The periodic unicast request opportunities, set up during the connection setup phase, allow the SS to obtain UL transmission opportunities. The unicast polling opportunities are typically frequent enough to meet the latency requirements of real-time services.

  • Nonreal-time polling service (nrtPS): nrtPS is similar to rtPS except that the SS uses contention-based polling in the UL to request BW. The BS provides timely unicast opportunities, but the interarrival time of adjacent opportunities is large compared to rtPS. The SSs in a polled group contend for resource request opportunities. The contention environment can result in collisions which require a resolution strategy.

  • Best-effort service (BE): This service class is provided for applications not requiring QoS. Transmission is contention-based so users compete for transmission opportunities and only send data when resources are available.

  • Extended real-time polling service (ertPS): ertPS combines UGS and rtPS so the SS can use UL allocations for both data transmission and resource requests. This allows the SS to accommodate time-varying BW requirements. The SS is only allowed to use this service on non-UGS-related connections (ertPS was introduced only recently in the IEEE802.16e standard for mobile WiMAX).
From the above categories, we see that only the nrtPS and BE scheduling services involve random access and contention-based mechanisms for BW requests. Performance evaluations of the different QoS categories are available in the literature.

UPLINK RESOURCE REQUESTS AND GRANT MECHANISMS

During initial network entry, the BS assigns up to three dedicated CIDs to the SS for transmitting and receiving MAC control messages. The connection begins using the basic CID. As mentioned previously, WiMAX DAMA services are given resources on a demand assignment basis (as the need arises). The downlink and UL request–grant mechanisms are distinct. In the downlink, allocation of resources to an SS is done on a CID basis. The BS scheduler allocates BW according to predetermined QoS levels. As MAC PDUs arrive for a CID, the BS determines the resources based on the corresponding QoS and the scheduling algorithm. The BS indicates these allocations in the DL-MAP control message of the downlink subframe. In the UL, allocation is driven by the SSs' overall demand for resources.

Requests can be either standalone (occupying a dedicated MAC PDU for request purposes) or piggybacked on a generic MAC PDU. The UL BW requests are either incremental or aggregate. When the BS receives an incremental request, it increases the granted BW by the BW requested. The BW request type field of the MAC PDU indicates whether the request is incremental or aggregate. Because piggybacked requests are carried on a generic PDU with no type field, they are always incremental. Due to the possibility of collisions, BW requests sent in broadcast or multicast modes must be aggregate. The standard defines three BW request–grant mechanisms, defined below; evaluations of their performance with respect to the QoS mechanisms have been reported in the literature.
  • Unsolicited BW grants: In the UGS mechanism, BW requests are primarily allocated in dedicated slots in the UL subframe. The requests can also be piggybacked on a generic PDU.

  • Unicast polling: Unicast polling or simply polling is the process where dedicated resources are provided to an individual SS in the UL to make BW requests. The BS indicates to the SS the request-allocated slots in the UL-MAP MAC message in the DL subframe to send standalone request PDUs. The BS assigns the polling allocation for requests to the polled SS using the primary CID (one of the three assigned during network entry and initialization). An SS being polled should not remain silent if no BW is needed during polling. Instead, the SS can send a dummy request PDU using padding (all zeros), to fill the allocation field in the current CID. A data grant information element (IE) is associated with the basic CID. Note that implicit UGS users will not be polled unless the Poll-Me bit is set in the header of the packet in the UGS connection.

  • Multicast/broadcast polling: If the SS does not acquire sufficient BW when polled individually, multicast or broadcast polling can be used to poll a group of SSs. As with individual polling, the SS can join a group of SSs in multicast polling to obtain additional BW, but uses the BW allocated in the UL-MAP using a multicast/broadcast CID. All SSs in a polling group contend for request opportunities during the multicast/broadcast polling interval. Intuitively, multicast polling saves BW compared with unicast polling by grouping SSs. To reduce the likelihood of collisions, only SSs with a BW request will reply. However, if several SSs are requesting BW resources, collisions may occur and therefore contention resolution is necessary.
Contention-based requests are allocated in the random access channel. The size of the contention slot is assigned by the BS as a request IE. SSs belonging to a multicast polling group contend for the BW request slots. A collision occurs if two or more SSs simultaneously make requests in the same slot. Therefore, the requesting SSs must employ a contention resolution algorithm to acquire slots to send BW requests. The WiMAX standard does not define an explicit algorithm or strategy by which the SS knows the status of a request sent. In fact, if the SS does not receive the resources sought in the very next DL subframe, the request may be deemed lost or collided. Several strategies have been proposed to acknowledge the status of a request and also for resolution if a collision occurs. To better understand the contention phase and contention resolution in WiMAX, in the next section we define the WiMAX frame allocations and MAC management messages involved with contention and contention resolution mechanisms.
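Contention resolution of this kind is commonly realized with a truncated binary exponential backoff, whose initial and maximum window sizes are advertised by the BS; the concrete window values below are assumptions for illustration.

```python
import random

# Sketch of truncated binary exponential backoff: on each collision or
# presumed loss, the SS doubles its contention window (up to a maximum)
# and defers a random number of request slots before retrying.

def next_backoff(attempt, w_start=4, w_max=64, rng=random.random):
    """Number of contention slots to defer before retry number `attempt`."""
    window = min(w_start * (2 ** attempt), w_max)
    return int(rng() * window)

random.seed(0)
print([next_backoff(a) for a in range(5)])  # windows grow 4, 8, 16, 32, 64
```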

Sunday, December 5, 2010

MONITORED PERFORMANCE PARAMETERS


The simulation campaign aims to assess the behavior of the scheduling framework in terms of both (1) class-based QoS differentiation capability and fair sharing of the channel bandwidth and (2) fairness in the treatment of traffic flows within the same class. The monitored performance parameters are the following:
§  Throughput: the achieved bit rate for a given traffic flow, accounting for the totality of packets delivered at the target SS. The totality of delivered packets includes both useful and unuseful packets; unuseful packets are either errored or expired packets.
§  Goodput: the achieved bit rate for a given traffic flow, accounting only for the packets successfully delivered at the target SS, i.e., those received without errors and within the maximum tolerated latency. The goodput index intrinsically accounts for both packet losses and delays.
§  Average latency: the delay accumulated by the packets of each flow from their arrival time at the BS to their delivery time at the target SS.
§  Total packet loss percentage: accounts for packet losses occurring both at the BS, due to deadline expiration of queued packets waiting for an error-free channel, and at the receiving SS, due to channel errors or to the reception of over-delayed packets.
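A toy computation of these indices for one flow is sketched below. The packet records (size, delivered, errored, delay, deadline) and the observation interval are illustrative assumptions.

```python
# Each record: (bits, delivered?, errored?, delay_s or None, max_latency_s)
packets = [
    (8000, True,  False, 0.02, 0.10),   # useful: error-free and on time
    (8000, True,  True,  0.03, 0.10),   # delivered but errored
    (8000, True,  False, 0.15, 0.10),   # delivered but over-delayed
    (8000, False, False, None, 0.10),   # lost at the BS (deadline expiry)
]
T = 1.0  # observation interval, seconds

delivered = [p for p in packets if p[1]]
useful = [p for p in delivered if not p[2] and p[3] <= p[4]]

throughput = sum(p[0] for p in delivered) / T    # all delivered bits
goodput = sum(p[0] for p in useful) / T          # error-free, in-time bits
avg_latency = sum(p[3] for p in delivered) / len(delivered)
loss_pct = 100 * (len(packets) - len(useful)) / len(packets)

print(throughput, goodput, loss_pct)  # 24000.0 8000.0 75.0
```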

Thursday, December 2, 2010

DRR-Based Algorithm | QoS and Fairness in WiMAX

DRR is a scheduling algorithm that approximates the fairness of GPS less closely, but with a low computational complexity of O(1) processing work per packet. It uses round-robin service with a quantum of service assigned to each queue. Unlike the traditional round-robin scheme, if a queue is unable to send a packet in a given round because of its large packet size, the remainder of the quantum (the deficit) is added to the quantum for the next round. The use of the deficit variable makes DRR suitable for managing queues with different packet sizes.


DRR maintains for each queue qi (i = 1, …, 3) a deficit counter DCi and a quantum Qi measured in bytes. At the beginning, all DCi variables are set to zero. The algorithm serves the queues in turn; each time a queue is selected, its DCi is incremented by the Qi value, and then packets are sent out as long as their size (in bytes) does not exceed DCi. Each time a packet is extracted from a queue, DCi is decremented by the packet size. To avoid examining empty queues, the scheduler keeps an auxiliary list, the active list, which includes the indices of the queues that contain at least one packet. The round-robin selector points to the head of the active list. Whenever a packet arrives at a previously empty queue, the index of that queue is inserted into the active list.
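A minimal sketch of the mechanism just described is given below, assuming byte-size packets held in Python deques; the quanta and packet sizes are illustrative.

```python
from collections import deque

def drr_round(queues, quanta, deficits):
    """One round-robin pass over the queues (lists of packet sizes in bytes).
    Returns the list of (queue_index, packet_size) dequeued this round."""
    sent = []
    for i, q in enumerate(queues):
        if not q:                    # empty queue: not on the active list
            continue
        deficits[i] += quanta[i]     # queue selected: add its quantum
        while q and q[0] <= deficits[i]:
            deficits[i] -= q[0]      # spend deficit on the departing packet
            sent.append((i, q.popleft()))
        if not q:                    # queue drained: reset its deficit
            deficits[i] = 0
    return sent

queues = [deque([500, 500]), deque([1200]), deque([300])]
quanta = [600, 600, 600]
deficits = [0, 0, 0]
print(drr_round(queues, quanta, deficits))  # q1's 1200 B packet must wait
print(drr_round(queues, quanta, deficits))  # q1's deficit now covers it
```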

Tuesday, November 30, 2010

CHANNEL ERROR COMPENSATOR

At the beginning of each frame, the compensation block classifies a flow (i.e., a CID) as sensing a good or a bad channel on the basis of the CINR reports received from the relevant target SS. Accordingly, CIDs destined to an SS sensing a lossy channel are marked as “banned” from being transmitted. When the error-free service scheduler selects a head-of-line (HOL) packet to be extracted from queue i, it interacts with the compensator to check whether the CID of the selected packet is banned. If it is banned (the packet would risk being corrupted if transmitted over the radio link), the scheduler looks for a packet belonging to an unmarked flow of the same class (i.e., in the same queue) to transmit instead of the HOL packet. To help the scheduler select the substitute flow, the compensator manages a debit/credit counter for each active (leading or lagging) flow. Each time a flow substitutes another one for transmission, it is considered a leading flow and its credit counter increases by one; at the same time, the replaced flow is considered a lagging flow and its debit counter increases by one. Each time an HOL packet cannot be transmitted due to poor channel conditions, the substitute flow is the one with the highest debit counter among the unmarked flows of the same class. If no unmarked packets can be found in the queue, the entire queue is banned and the turn passes to the next queue.
The pseudocode listed below briefly shows how the compensator works and interacts with the scheduler, regardless of the specific algorithm (WF2Q+ or DRR) the latter uses.
1 ban flow if CINR < CINRthr;
2 until available resources {
3       until a pkt is dequeued || all queues examined {
4               scheduler selects queue x : x !banned, i.e., scheduler selects flow i = head(x);
5               if i is marked {
6                      select least serviced flow j : j !banned;
7                      if !found {
8                                    mark x;
9                                    goto 3; }
10                    else {
11                                   dequeue pkt;
12                                   modify scheduler variables;
13                                   modify credit/debit counter of i and j;
14                                   exit; } }
15              else {
16                     modify scheduler variables;
17                     dequeue pkt; } } }
At the beginning of each frame, the compensator bans the flows directed to SSs that currently sense a lossy channel (line 1), according to the received channel state information reports. For the duration of the DL subframe, the scheduler examines the packets within the queues as long as at least one queue contains a packet belonging to an unbanned CID (lines 2–3). The scheduler selects the queue to be served among all backlogged (and unbanned) queues; this corresponds to selecting the first packet in the queue (line 4). If this packet belongs to an unbanned flow, it gains service and the variables of the specific scheduling algorithm (WF2Q+ or DRR) are updated accordingly (lines 15–17). These variables can be the quantum and deficit counter values for DRR, or the (S, F, V) values for WF2Q+. Otherwise, if the HOL packet is banned, the scheduler searches for the unbanned flow (in the same queue) that has received the worst service in the past (lines 5–6). If no unbanned flow exists, the scheduler considers the entire queue as no longer servable and selects another queue among the backlogged ones, if any (lines 7–9). If at least one unbanned flow exists, the first enqueued packet belonging to this flow is selected and the scheduling variables are updated accordingly (lines 10–14). The credit/debit counters of the originally selected flow and the substitute flow are also updated consequently.
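The substitute-flow selection step of the compensator (lines 4–9 of the listing) can also be expressed as an executable sketch. Function and parameter names here are hypothetical; the queue is modeled as a list of (flow_id, packet) pairs in arrival order, and the debit counters track how much service each flow has missed:

```python
def pick_packet(queue, banned, debit):
    """Select a packet from one class queue per the compensator rules (sketch).

    queue  : list of (flow_id, packet) in arrival order
    banned : set of flow ids currently sensing a bad channel
    debit  : dict flow_id -> debit counter (missed service so far)
    Returns (index, flow_id) of the chosen packet, or None when the whole
    queue must be banned for this frame (no unbanned flow is backlogged).
    """
    if not queue:
        return None
    hol_flow = queue[0][0]
    if hol_flow not in banned:        # HOL flow senses a good channel: serve it
        return 0, hol_flow
    # HOL flow is banned: substitute the unbanned flow with the highest
    # debit counter, i.e., the most lagging flow in the same class.
    candidates = {f for f, _ in queue if f not in banned}
    if not candidates:
        return None                   # ban the entire queue, pass the turn
    best = max(candidates, key=lambda f: debit.get(f, 0))
    idx = next(i for i, (f, _) in enumerate(queue) if f == best)
    return idx, best
```

After a substitution, the caller would increment the credit counter of the chosen flow and the debit counter of the replaced HOL flow, as described in the text above.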

Saturday, November 27, 2010

BANDWIDTH REQUESTS AND CINR REPORTS | QoS and Fairness in WiMAX

To make the channel-aware scheduler operate effectively, the BS must gather information about the quality of the wireless links toward each SS. To this end, each SS continuously monitors the wireless channel: it takes CINR (carrier-to-interference-and-noise ratio) measurements, updates estimates of the CINR’s mean and standard deviation, and reports them back to the BS through the signalling messages provided by the IEEE 802.16 standard, e.g., BR messages (normally used by SSs for requesting bandwidth), which can also include the CINR report.

The compensation block is in charge of collecting the received information on channel quality and making the BS aware of the status of each link toward the SSs. Two assumptions hold in our reference scenario:
§  The wireless channel quality of each connection changes on a per-frame basis but remains constant for a frame duration; i.e., the BS assumes that the CINR report received from the SS at the beginning of a frame is valid for the entire frame duration. This is equivalent to modelling a block-fading channel, a feasible assumption for the slowly varying radio links of the fixed WiMAX PMP scenario.
§  Perfect channel state information is available at the BS. This means that the CINR reports must be correctly received by the BS over an error-free feedback channel.

The monitored channel quality at the SS is compared against the allowed signal-to-noise ratio and receiver sensitivity values for each transport mode (burst profile) at a given bit error rate (BER). As soon as the received power level drops below the receiver sensitivity threshold for a given DL transport mode and target BER, a more robust profile must be chosen. This change in the transport mode can proceed until the most robust modulation/coding scheme (BPSK modulation with 1/2 coding rate) is reached. A channel is considered “good” as long as a more robust transport mode capable of coping with the adverse channel conditions can be selected; otherwise, the channel is considered “bad.”
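The profile-selection rule can be sketched as a lookup over per-profile CINR thresholds. The threshold values and profile list below are illustrative placeholders, not the actual IEEE 802.16 link-budget figures; the point is only the mechanism of stepping down to more robust profiles and declaring the channel “bad” when even the most robust one cannot cope:

```python
# Hypothetical (name, minimum required CINR in dB) pairs for a target BER,
# ordered from most robust / least efficient to least robust / most efficient.
# Real thresholds come from the IEEE 802.16 receiver sensitivity tables.
PROFILES = [
    ("BPSK-1/2",  3.0),
    ("QPSK-1/2",  6.0),
    ("QPSK-3/4",  8.5),
    ("16QAM-1/2", 11.5),
    ("64QAM-3/4", 21.0),
]

def select_profile(cinr_db):
    """Return the most efficient burst profile the channel supports, or
    None when even BPSK-1/2 cannot cope (the channel is marked 'bad')."""
    chosen = None
    for name, threshold in PROFILES:
        if cinr_db >= threshold:
            chosen = name     # keep stepping up while the CINR allows it
    return chosen
```

A flow whose `select_profile` result is `None` is exactly one the compensator would mark as “banned” for the current frame.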

Tuesday, November 23, 2010

PACKET CLASSIFIER | QoS and Fairness in WiMAX

Packets received by the BS and destined to the DL are sorted by the packet classifier and buffered into one of the per-class queues in the BS, according to the class of service they belong to. In the queues, packets wait for transmission over the air toward a given SS. Queues are managed by the scheduler on a frame basis.

The main task of the classifier is to match the CID of each incoming packet with the traffic class it belongs to, and to queue such packets in their relevant buffers. Only the service received by the rtPS, nrtPS, and BE queues will be analyzed, because flows belonging to the UGS and ertPS service classes are assumed to receive a constant reserved amount of bandwidth frame by frame and thus do not need to be scheduled. This choice also lets the highly demanding (UGS and ertPS) flows save the time required by the BR process.

Another important task of the classifier is to timestamp each incoming packet according to its arrival time. This information is exploited by the buffer manager to recognize when a queued packet’s timeout expires, i.e., when its elapsed waiting time exceeds the maximum tolerated latency.
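The timestamping and timeout-expiry check can be sketched as a per-class queue with a periodic cleaning pass, which is also the “periodic buffer cleaning of over-delayed packets” mentioned later among the design guidelines. Class and method names are illustrative:

```python
import time
from collections import deque

class TimedQueue:
    """Per-class queue whose packets are timestamped on arrival (sketch).

    The buffer manager calls drop_expired() each frame to discard packets
    whose waiting time exceeds the maximum tolerated latency, so the
    scheduler never wastes bandwidth on over-delayed packets.
    """

    def __init__(self, max_latency_s):
        self.max_latency_s = max_latency_s
        self.q = deque()                  # (arrival_time, packet) pairs

    def enqueue(self, packet, now=None):
        # Timestamp the packet with its arrival time.
        self.q.append((now if now is not None else time.monotonic(), packet))

    def drop_expired(self, now=None):
        """Drop over-delayed head packets; return how many were dropped."""
        now = now if now is not None else time.monotonic()
        dropped = 0
        while self.q and now - self.q[0][0] > self.max_latency_s:
            self.q.popleft()              # deadline expired: counts as a loss
            dropped += 1
        return dropped
```

Because packets are kept in arrival order, only the head of the queue needs checking: once the head is fresh enough, every later packet is fresher still.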

Friday, November 19, 2010

CHANNEL- AND QoS-AWARE SCHEDULER DESIGN FOR WiMAX NETWORKS

In this section some design guidelines are given for a “channel-aware” and “QoS-aware” scheduler to be implemented at the BS of a WiMAX PMP network for the delivery of DL traffic to a set of distributed SSs with active connections of different traffic nature. The scheduling algorithm running at the WiMAX base station needs to fulfill the following requirements:

§  Efficient link utilization: The scheduler shall make opportunistic decisions and not assign a transmission opportunity to a flow whose link quality is currently low, because the transmission would be wasted.
§  Delay bound: The algorithm shall provide delay-bound guarantees for individual flows, to support delay-sensitive applications; besides, it shall prevent over-delayed packet transmissions from wasting bandwidth.
§  Fairness: The algorithm shall redistribute the available resources fairly among flows, thus providing short-term fairness to error-free flows and long-term fairness to error-prone flows.
§  Throughput: The algorithm shall provide guaranteed short-term throughput to error-free flows and guaranteed long-term throughput to all flows.
§  Implementation complexity: A low-complexity algorithm is necessary to take quick scheduling decisions.
§  Scalability: The algorithm shall operate efficiently as the number of flows sharing the channel increases.
To match all the above-mentioned needs:
§  Class-based wireless scheduling is recommended, to fulfill the WiMAX service class differentiation needs and to simplify interworking with the Internet (which supports class-based differentiated services).
§  “Per class” service differentiation must be achieved, and simple “per flow” mechanisms (e.g., lead/lag counters for each flow) must be provided to guarantee fair service to flows within a class.
§  Channel awareness must be exploited to make efficient use of wireless resources, and a compensation technique for missed transmission opportunities must be provided to guarantee proportional fairness in sharing the bandwidth.
§  Useless compensation must be avoided through simple measures, e.g., periodic buffer cleaning of over-delayed packets.
§  Simple expedients must be provided to avoid monopolization of the scheduler by a lagging flow after its channel recovers, and to guarantee graceful throughput degradation of leading flows; this can come at a very low cost by combining lead/lag counters and queue parameters.
In Figure 1, the reference channel- and QoS-aware scheduling architecture is illustrated. It is implemented at the MAC layer of a WiMAX BS, which manages local traffic queues for packet delivery over the DL channel. The illustrated framework must be able to provide per-class differentiated QoS and fair service to flows within the same class through the operation of the following QoS-support modules:

§  An error-free service scheduler that decides how to serve traffic flows under an error-free channel assumption.
§  A lead/lag counter for each traffic flow that indicates whether the flow is leading, in sync with, or lagging behind its error-free service model, and to what extent.
§  A compensation technique used to improve fairness among flows: a lagging flow is compensated at the expense of a leading flow when its link becomes error-free again. The system maintains credit/debit counters for each leading/lagging flow.
§  Separate per-class packet queues used to support rtPS, nrtPS, and BE traffic flows.
§  A means for monitoring and predicting the channel state of each backlogged flow.
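The per-flow lead/lag bookkeeping listed among the modules above can be sketched with a single signed counter per flow; this is a simplification of the separate credit and debit counters described in the earlier posts, and the class name is hypothetical:

```python
class LeadLagCounter:
    """Tracks whether a flow leads or lags its error-free service model (sketch)."""

    def __init__(self):
        self.counter = 0            # >0: leading, <0: lagging, 0: in sync

    def substituted_for(self, other):
        """This flow transmitted in place of `other` (whose channel was bad)."""
        self.counter += 1           # this flow leads by one more opportunity
        other.counter -= 1          # the replaced flow lags by one more

    @property
    def state(self):
        if self.counter > 0:
            return "leading"
        if self.counter < 0:
            return "lagging"
        return "in sync"
```

When the lagging flow's channel recovers, the compensator would preferentially schedule it (draining its negative counter) at the expense of flows with positive counters, which is the graceful-degradation behavior required above.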


Figure 1: Channel-aware QoS scheduling architecture in the BS.
