Sunday, August 29, 2010

Resource Request and Scheduling Policies



IEEE 802.16e-2005 divides all possible data services into five classes: Unsolicited Grant Service (UGS), Real-Time Polling Service (rtPS), Extended Real-Time Polling Service (ErtPS), Non-Real-Time Polling Service (nrtPS), and Best Effort (BE). Each service is associated with a set of quality of service (QoS) parameters that quantify aspects of its characteristics: (a) maximum sustained rate, (b) minimum reserved rate, (c) maximum latency tolerance, (d) jitter tolerance, and (e) traffic priority.
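As a concrete illustration, the class/parameter association can be sketched as a small data structure; the field names and all example values below are ours for illustration, not taken from the standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QoSParams:
    """QoS parameter set attached to an IEEE 802.16e service class.

    Field names are illustrative; a None value means the parameter is
    not meaningful for that class."""
    max_sustained_rate_bps: Optional[int] = None
    min_reserved_rate_bps: Optional[int] = None
    max_latency_ms: Optional[int] = None
    jitter_tolerance_ms: Optional[int] = None
    traffic_priority: int = 0

# Which parameters matter for each class (values are hypothetical):
SERVICE_CLASSES = {
    "UGS":   QoSParams(min_reserved_rate_bps=64_000, max_latency_ms=20,
                       jitter_tolerance_ms=10),
    "rtPS":  QoSParams(max_sustained_rate_bps=512_000,
                       min_reserved_rate_bps=128_000, max_latency_ms=50),
    "ErtPS": QoSParams(max_sustained_rate_bps=128_000,
                       min_reserved_rate_bps=32_000, max_latency_ms=30,
                       jitter_tolerance_ms=15),
    "nrtPS": QoSParams(max_sustained_rate_bps=1_000_000,
                       min_reserved_rate_bps=256_000, traffic_priority=1),
    "BE":    QoSParams(max_sustained_rate_bps=2_000_000),
}
```

Note how BE carries no minimum reserved rate or latency bound: it is precisely this absence of guarantees that motivates the fair-sharing mechanism discussed later.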

These parameters are the basic inputs for the service scheduler located in the BS, whose design and implementation are left to the manufacturers and whose aim is to fulfill service-specific QoS requirements.

Within IEEE 802.16, in particular, the scheduler's task is to define both the uplink and downlink resource allocation maps (UL-MAP and DL-MAP) on the basis of users' needs. With reference to BE services, the following subsections provide insight into the bandwidth request mechanism and propose a fair and efficient scheduling strategy.

1. BE Resource Requests

To support BE services (FTP, web browsing, and so on), a resource request mechanism is needed to make the scheduler aware of the MSs' bandwidth requirements in both directions.

In the DL direction, it is immediately clear that the scheduler has perfect knowledge of the MSs' needs, because they coincide with the amount of data waiting to be transmitted in the corresponding BS transmission queues.

In the UL direction, by contrast, the standard introduces a request mechanism that allows MSs to make use of (a) contention request opportunities, (b) unicast request opportunities, and (c) unsolicited data grant burst types. In the first case a bandwidth request is transmitted during an appropriately shared UL allocation, whereas in the other two cases each MS is given a reserved UL resource to convey its request.

All requests for bandwidth are made in terms of the number of bytes needed to carry MAC PDUs.
Once an MS has been given a UL resource, further bandwidth requests may be sent as piggyback requests, avoiding the need to resort again to one of the three bandwidth request mechanisms introduced above (a, b, and c).
Bandwidth requests may be incremental or aggregate: in the former case the BS adds the amount of bandwidth requested to its current perception of the bandwidth needs of the connection, whereas in the latter case it replaces its perception of the bandwidth needs of the connection with the amount of bandwidth requested. The type field in the bandwidth request header indicates whether the request is incremental or aggregate. Because piggyback bandwidth requests do not have a type field, they shall always be incremental.
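A minimal sketch of how a BS might maintain its perception of a connection's needs under these rules; the class and method names are hypothetical, not from the standard:

```python
class ConnectionBacklog:
    """BS-side perception of one connection's UL bandwidth needs, in bytes.

    Sketch of the incremental/aggregate semantics described in the text."""

    def __init__(self):
        self.perceived_bytes = 0

    def on_request(self, nbytes, incremental):
        if incremental:
            # Incremental request: add to the current perception.
            self.perceived_bytes += nbytes
        else:
            # Aggregate request: replace the current perception.
            self.perceived_bytes = nbytes

    def on_piggyback(self, nbytes):
        # Piggyback requests carry no type field: always incremental.
        self.on_request(nbytes, incremental=True)

    def on_acked_grant(self, nbytes):
        # Decrease the perception as granted transmissions are acknowledged.
        self.perceived_bytes = max(0, self.perceived_bytes - nbytes)
```

An aggregate request acts as a resynchronization point: whatever drift has accumulated between the BS's perception and the real queue is wiped out and replaced with the reported value.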

The mechanism of piggyback incremental requests can be conveniently exploited to reduce the time wasted by the complete bandwidth request mechanism: it is reasonable to operate in such a way that the first time an MS performs a bandwidth request (adopting one of the three previously introduced procedures), it notifies the BS of the entire amount of data waiting in its transmission queue. This way the BS has exact knowledge of each MS's needs and can decrease this figure each time PHY-level resources are assigned to that connection and the related transmissions are correctly acknowledged.

Any further data arriving in the MS queue while transmissions are still ongoing (i.e., before the MS queue empties) can be notified to the BS scheduler through incremental piggyback bandwidth requests, which update the BS's perception of the MS's bandwidth needs.

2. BE Scheduling

In this section we again focus on BE data services and show how to provide, at the same time, fair and efficient resource sharing, carefully taking into account the specific characteristics of IEEE 802.16e-2005.

As far as fairness is concerned, we consider a round robin (RR) scheduling policy among all BE services. Although this choice seems the best suited to this kind of service, it has to be pointed out that its implementation, like that of any other scheduling policy, requires some preliminary consideration of the nature of the IEEE 802.16e-2005 radio resource (the OFDMA-Slot).

Because the different modes that users can adopt convey different amounts of data in a single OFDMA-Slot, it follows that, to provide truly fair scheduling among BE users, a different number of slots has to be allocated to each of them.

To meet this fairness requirement as far as possible, and thus prevent users with large amounts of data to transmit from gathering most of the resources, it is convenient to define an elementary resource unit, hereafter called the Virtual Resource Unit (VRU): a fixed number of data bytes that constitutes the basic element the scheduler assigns, in each RR cycle, to BE users needing resources.

To better understand the scheduler's behavior, let us focus, for instance, on the DL subframe: the scheduler's task is, in this case, to define the DL-MAP, assigning PHY-level resources to the different BS-MS links requiring DL resources.

As long as there is room (in terms of available slots) in the subframe and there are pending data that can be allocated in it, the scheduler moves from one user to the next, performing the following actions: (i) it assigns a VRU to the user, adding it to the user's Virtual Resource Budget (VRB); (ii) it virtually generates the biggest PDU that can be allocated; and (iii) it correspondingly books the slots. This means that although the scheduler does not actually allocate the PDU at this time, that number of slots is definitively reserved for that user; at the next round a bigger PDU, or more PDUs, may be allocated thanks to the increased VRB, thus increasing the number of booked slots. Note that the scheduler makes no slot reservation until the VRB is sufficient to allocate a PDU containing a single ARQ block. When the RR cycle ends (i.e., there is no more room or no more data to be allocated), the PDUs are actually generated and the coding-mapping processes described in Section 8.2.3 can take place.

The separation between virtual and physical allocation guarantees flexible management: for example, doubling the VRB (during the second round) may allow the scheduler to generate a single PDU that includes two ARQ blocks, which may be preferable to generating and transmitting two separate PDUs with a single ARQ block each.

At the end of the allocation procedure users' VRBs are not reset, to allow stations with larger ARQ blocks to obtain the same long-term priority as the others. Furthermore, the VRB is increased at most up to the amount of pending data.
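The booking procedure described above can be sketched as follows; the VRU size, the data layout, and the simplified PDU model (one PDU made of whole ARQ blocks per user) are our illustrative assumptions, not the authors' implementation:

```python
VRU_BYTES = 100  # elementary Virtual Resource Unit (illustrative size)

def rr_schedule(users, free_slots):
    """One RR cycle over BE users; returns {user_id: booked_slots}.

    Each user dict carries: 'id', 'pending' (queued bytes), 'vrb'
    (Virtual Resource Budget, carried over between frames), 'arq_block'
    (bytes per ARQ block), and 'bytes_per_slot' (set by the burst profile).
    """
    booked = {u['id']: 0 for u in users}
    while free_slots > 0:
        progress = False
        for u in users:
            if u['vrb'] >= u['pending']:
                continue  # the VRB never exceeds the pending data
            # (i) assign one VRU, capped at the pending data.
            u['vrb'] = min(u['vrb'] + VRU_BYTES, u['pending'])
            progress = True
            # (ii) biggest PDU: whole ARQ blocks fitting in the VRB.
            pdu_bytes = (u['vrb'] // u['arq_block']) * u['arq_block']
            # No booking until the VRB covers at least one ARQ block.
            need = -(-pdu_bytes // u['bytes_per_slot'])  # ceil division
            extra = need - booked[u['id']]
            if 0 < extra <= free_slots:
                booked[u['id']] = need  # (iii) slot booking
                free_slots -= extra
        if not progress:
            break  # no more data to allocate
    return booked
```

With two users that have the same backlog but different burst profiles, this loop books the same number of bytes for each, hence a different number of slots, which is precisely the fairness goal of the VRU mechanism.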

The same procedure is also performed to define the UL-MAP, which is carried out on the basis of the UL resource requests made by users.

To best preserve fairness, in the next subframe of the same kind (UL or DL) the cyclic scheduling procedure will start from the user that follows, in the cyclic order, the last user served in the current frame.

Thursday, August 26, 2010

Simulation Analysis | Performance Evaluation

We now evaluate the performance of VoIP, VC, and VoD traffic, in terms of jitter, with respect to the number of SSs (N) and the frame duration. We assume that each SS has only one connection, which carries exactly one traffic source of the specified type (i.e., one of VoIP, VC, or VoD).
We start by assessing the performance of QoS traffic in a scenario with a variable number of SSs ranging from 5 to 43 and a frame duration of 10 msec. Only one SS is provisioned with a VoD connection, whereas the remaining SSs are evenly partitioned between VoIP and VC traffic. We repeated the scenario with both the MPEG4 and the H.263 trace files for VC traffic. The jitter of DL connections is reported in Figure 1. As can be seen, when the network is underloaded (i.e., N ≤ 27) the jitter is always smaller than the interarrival time of packets of each traffic source. As the number of SSs further increases, the VoD curves rise steeply. In fact, with a high offered load, the VoD traffic performance degrades because the rate provisioned for the VoD connection is equal to the mean rate of the application. The performance of VoIP and VC connections, however, remains isolated from that of VoD traffic. Note that this has been achieved without enforcing a strict priority between rtPS and nrtPS connections, which are served by the same instance of the DRR schedulers.


Figure 1: Jitter of downlink connections versus number of Subscriber Stations, with different videoconference codecs.
Additionally, the VC codec significantly impacts performance. In particular, with N < 35, in the case of H.263 the jitter is slightly higher than that of the VoD source, whereas the MPEG4 case exhibits a lower jitter. This can be explained as follows. H.263 codecs produce frames at variable time intervals. Therefore, the VC connection queue can potentially become idle due to inactivity periods. As soon as the queue becomes backlogged again, it is reinserted at the tail of the DRR list of connections waiting to be served. Therefore, the newly arrived packet has to wait until all the other connections have been served, which increases the jitter. Such a situation is less likely to happen with the MPEG4 codec because video frames are generated at fixed time intervals, with no inactivity periods.
With regard to the UL connections, results are reported in Figure 2. As expected, the jitter is higher than in the DL case, because UL connections experience the additional delay of notifying the BS of their bandwidth requests. However, the curves are almost constant as the offered load increases (except in the H.263 VC case), because the BS schedules unicast polls on a periodic basis, with the period equal to the interarrival time of SDUs of each connection. The anomaly of the H.263 curve with respect to VoIP and VC-MPEG4 is due to the variable interarrival of video frames. In fact, the transmission queue of an H.263 connection can potentially be idle when polled by the BS. Hence, a connection that misses the poll needs to wait an entire polling interval before it has a subsequent chance to send a bandwidth request.


Figure 2: Jitter of uplink connections versus number of Subscriber Stations, with different videoconference codecs.
Finally, we set up a scenario with VoIP and VC-MPEG4 traffic only, where N ranges between 5 and 45, with variable frame duration. The jitter of UL connections is reported in Figure 3. As can be seen, the longer the frame duration, the higher the curves. This can be explained as follows. As scheduling is performed at the beginning of each frame, the longer the frame duration, the longer (on average) an SS has to wait before using its grant. In other words, with longer frames the BS is less responsive to the SSs' bandwidth requests. Similar considerations also hold for the DL case, where an SDU received by the BS has to wait at least until the next frame (i.e., the transmission of the next DL-MAP) before it can be served.


Figure 3: Jitter of uplink connections, both Voice over IP and videoconference, versus number of Subscriber Stations, with variable frame duration.

Sunday, August 15, 2010

QoS Architecture | WIMAX

In general, the process of requesting and granting QoS in a network can be logically split into two separate layers: the application and network layers. The application layer provides the end-user with a simplified and standardized view of the quality level that will be granted for a given service. This layer is not aware of the technicalities of service requirements (such as bandwidth, delay, or jitter), nor does it depend on technology-specific issues related to the actual networks that will be traversed (such as fiber-optic, wireless, or xDSL). The network layer, on the other hand, deals with a set of technical QoS parameters, which it maps onto network-specific requirements that have to be fulfilled to provide the end-user with the negotiated quality level. In wired IP networks the mapping is usually performed at the network layer. However, such an approach is hardly suitable for wireless networks, where a number of factors influence resource allocation: (i) the availability of bandwidth is much more limited than in wired networks, (ii) the network capacity is highly variable due, for instance, to environmental conditions, and (iii) the link quality experienced by different terminals is location-dependent. Therefore, it is often necessary to implement QoS provisioning at the MAC layer, as in IEEE 802.16, so as to gain better insight into the current technology-dependent network status and to react as soon as possible to changes that might negatively affect QoS.
In IEEE 802.16 the prominent QoS functions of network provisioning and admission control are logically located on the management plane. As already pointed out, the latter is outside the scope of IEEE 802.16, which covers only the data/control plane, as illustrated in Figure 1. Network provisioning refers to the process of approving a given type of service, by means of its network-layer set of QoS parameters, that might be activated later. Network provisioning can be either static or dynamic. Specifically, it is said to be static if the full set of services that the BS supports is decided a priori. This model is intended for a service provider wishing to specify the full set of services that its subscribers can request, by means of manual or semiautomatic configuration of the BS's management information base (MIB). With dynamic network provisioning, on the other hand, each request to establish a new service is forwarded to an external policy server, which decides whether or not to approve it. This model allows a higher degree of flexibility in the types of service that the provider is able to offer its subscribers, but it requires a signaling protocol between the BS and the policy server, thus incurring additional communication overhead and increased complexity.


Figure 1: Quality-of-service model of the IEEE 802.16.
Unlike the network provisioning function, which deals only with services that might be activated later, and that are therefore said to be deferred, the admission control function is responsible for resource allocation. Thus, it will only accept a new service if (i) it is possible to provide the full set of QoS guarantees that the service has requested, and (ii) the QoS level of all the services that have already been admitted would remain above the negotiated threshold. Quite clearly, admission control acts on a smaller time scale than network provisioning. This is motivated by the latter being much more complex than the former, as pointed out by a recent study on an integrated end-to-end QoS reservation protocol in a heterogeneous environment with IEEE 802.16 and IEEE 802.11e devices. Test results showed that the network provisioning latency of IEEE 802.16 equipment currently available on the market is on the order of several seconds, whereas the activation latency is on the order of milliseconds.
In IEEE 802.16, the set of network-layer parameters that entirely defines the QoS of a unidirectional flow of packets resides in a service flow (SF) specification. Each SF can be in one of three states: provisioned, admitted, or active. Provisioned SFs are not bound to any specific connection, because they are only intended to indicate what types of service are available at the BS. Then, when an application on the end-user side starts, the state of the provisioned SF becomes admitted, thus booking resources that will shortly be needed to fulfill the application requirements. When the SF state becomes admitted, the SF is also assigned a connection identifier (CID) that will be used to classify SDUs among those belonging to different SFs. However, in this phase, resources are still not completely activated; for instance, the connection is not yet granted bandwidth. This last step is performed during the activation of the SF, which happens just before SDUs from the application start flowing through the network.
Thus a two-phase model is employed, where resources are booked before the application is started; this is the model employed in traditional telephony applications. At any time it is possible to "put on hold" the application by moving the state of the SF back from active to admitted. When the application stops, the SF is set to either provisioned or deleted; in either case, the one-to-one mapping between the service flow identifier (SFID) and the CID is lost, and the CID can be reassigned for other purposes. The SF transition diagram is illustrated in Figure 2.


Figure 2: Service flow transition diagram.
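The SF state transitions described above can be captured in a few lines; the method names are illustrative, not the standard's DSA/DSC/DSD management message names:

```python
class ServiceFlow:
    """Sketch of the provisioned/admitted/active SF state machine."""
    STATES = ("provisioned", "admitted", "active")

    def __init__(self, sfid):
        self.sfid = sfid
        self.state = "provisioned"
        self.cid = None  # a CID is only assigned on admission

    def admit(self, cid):
        assert self.state == "provisioned"
        self.state, self.cid = "admitted", cid  # resources booked, not granted

    def activate(self):
        assert self.state == "admitted"
        self.state = "active"  # bandwidth is now actually granted

    def hold(self):
        assert self.state == "active"
        self.state = "admitted"  # "put on hold": active -> admitted

    def release(self):
        # Back to provisioned: the SFID-CID mapping is lost and the
        # CID can be reassigned for other purposes.
        self.state, self.cid = "provisioned", None
```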
Figure 3 shows the blueprint of the functional entities for QoS support, which logically reside within the MAC CPS of the BS and SSs. Each DL connection has a packet queue (or queue, for short) at the BS (represented with solid lines). In accordance with the set of QoS parameters and the status of the queues, the BS DL scheduler selects from the DL queues, on a frame basis, the next SDUs to be transmitted to the SSs. UL connection queues, on the other hand, reside at the SSs.


Figure 3: Medium Access Control architecture of the Base and Subscriber Stations. 
Bandwidth requests are used by the BS to estimate the residual backlog of UL connections. In fact, based on the amount of bandwidth requested (and granted) so far, the BS UL scheduler estimates the residual backlog at each UL connection (represented in Figure 3 as a virtual queue, with dashed lines), and allocates future UL grants according to the respective set of QoS parameters and the (virtual) status of the queues. However, as already introduced, although bandwidth requests are per connection, the BS nevertheless grants UL capacity to each SS as a whole. Thus, when an SS receives a UL grant, it cannot deduce which of its connections the BS intended it for. Consequently, an SS scheduler must also be implemented within each SS MAC to redistribute the granted capacity to the SS's connections.

Thursday, August 12, 2010

IEEE 802.16

The IEEE 802.16 specifies the data and control plane of the MAC and PHY layers, as illustrated in Figure 1. More specifically, the MAC layer consists of three sublayers: the service-specific convergence sublayer (SSCS), the MAC common part sublayer (MAC CPS), and the security sublayer. The SSCS receives data from the upper layer entities that lie on top of the MAC layer, for example, bridges, routers, hosts. A different SSCS is specified for each entity type, including support for asynchronous transfer mode (ATM), IEEE 802.3, and Internet Protocol version 4 (IPv4) services. The MAC CPS is the core logical module of the MAC architecture, and is responsible for bandwidth management and QoS enforcement. Finally, the security sublayer provides SSs with privacy across the wireless network, by encrypting data between the BS and SSs.


Figure 1: Scope of the IEEE 802.16 standard—data/control plane.
This section reports the basic IEEE 802.16 MAC CPS and PHY layer functions so as to introduce the notation that will be used in the rest of this work. The interested reader can find all the details of the IEEE 802.16 specifications in the standard document.

1 MAC Layer
The IEEE 802.16 standard specifies two modes for sharing the wireless medium: point-to-multipoint (PMP) and mesh. With PMP, the BS serves a set of SSs within the same antenna sector in a broadcast manner, with all SSs receiving the same transmission from the BS. Transmissions from SSs are directed to and centrally coordinated by the BS. In mesh mode, on the other hand, traffic can be routed through other SSs and can occur directly among SSs. As access coordination is distributed among the SSs, the mesh mode does not include support for parameterized QoS, which is needed by multimedia applications with stringent requirements. In this study we focus on the PMP mode alone.
In PMP mode, uplink (UL) (from SS to BS) and downlink (DL) (from BS to SS) data transmissions occur in separate time frames. In the DL subframe the BS transmits a burst of MAC protocol data units (PDUs). As the transmission is broadcast, all SSs listen to the data transmitted by the BS. However, an SS is only required to process PDUs that are addressed to it or that are explicitly intended for all SSs. In the UL subframe, on the other hand, any SS transmits a burst of MAC PDUs to the BS in a time division multiple access (TDMA) manner. DL and UL subframes are duplexed using one of the following techniques, as shown in Figure 2: frequency division duplex (FDD), where DL and UL subframes occur simultaneously on separate frequencies, and time division duplex (TDD), where DL and UL subframes occur at different times and usually share the same frequency. SSs can be either full-duplex, that is, they can transmit and receive simultaneously, or half-duplex, that is, they can transmit and receive at nonoverlapping time intervals.


Figure 2: Frame structure with frequency and time division duplexes. 

The MAC protocol is connection-oriented: all data communications, for both transport and control, take place in the context of a unidirectional connection. At the start of each frame the BS schedules the UL and DL grants to meet the negotiated QoS requirements. Each SS learns the boundaries of its allocation within the current UL subframe by decoding the UL medium access protocol map (UL-MAP) message. The DL-MAP message, on the other hand, contains the timetable of the DL grants in the forthcoming DL subframe. Both maps are transmitted by the BS at the beginning of each DL subframe, as shown in Figure 2.
As the BS controls access to the medium in the UL direction, bandwidth is granted to SSs on demand. For this purpose, a number of different bandwidth request mechanisms have been specified. With unsolicited granting, a fixed amount of bandwidth on a periodic basis is requested during the setup phase of a UL connection; after that phase, bandwidth is never explicitly requested. A unicast poll consists of allocating to a polled UL connection the bandwidth needed to transmit a bandwidth request. If the polled connection has no data awaiting transmission (backlog, for short), or if it has already requested bandwidth for its entire backlog, it will not reply to the unicast poll, which is thus wasted. Broadcast (multicast) polls, instead, are issued by the BS to all (multiple) UL connections. The main drawback of this mechanism is that a collision occurs whenever two or more UL connections send a bandwidth request by responding to the same poll. In this case all collided connections [1] need to resend their bandwidth requests, but a truncated binary exponential backoff algorithm is employed to reduce the chance of colliding again.
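A sketch of the truncated binary exponential backoff applied by collided stations; the window exponents below are placeholder values (in practice the contention window bounds are advertised by the BS in the UCD message):

```python
import random

def backoff_slots(attempt, w_min_exp=3, w_max_exp=7, rng=random):
    """Number of request opportunities to defer after the given number
    of collisions ('attempt', starting at 0).

    The contention window doubles after each collision (binary
    exponential) but is capped at 2**w_max_exp (truncated)."""
    exp = min(w_min_exp + attempt, w_max_exp)  # truncation of the window
    return rng.randrange(2 ** exp)  # uniform draw within the window
```

The truncation keeps the worst-case deferral bounded, trading a slightly higher residual collision probability for bounded access delay.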
Bandwidth requests can also be piggybacked on PDUs. For instance, assume that a bandwidth request is sent by an SS for its connection x. Then, before the entire backlog of connection x is served, more data are received from the upper layers. In this case, the SS can notify the BS of the increased bandwidth demand by simply adding a Grant Management subheader to any outgoing PDU of connection x. Finally, while a connection is being served by the BS, the SS can use part of the bandwidth scheduled by the BS for data transmission to send a standalone PDU, with no data, that updates the amount of backlog notified to the BS. This mechanism is called bandwidth stealing.
It is worth noting that an SS notifies the BS of the number of bytes awaiting transmission in its connections' buffers, but the BS grants UL bandwidth to the SS as a whole. Due to this hybrid nature of the request/grant mechanism (i.e., requests per connection, grants per SS), an SS also has to implement a local scheduling algorithm to redistribute the granted capacity among all its connections.
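A minimal example of this SS-local redistribution step; the priority-then-backlog policy shown is only one plausible choice, since the standard leaves the SS scheduler to the implementer:

```python
def redistribute_grant(grant_bytes, connections):
    """Split one aggregate UL grant among the SS's own connections.

    connections: list of (cid, backlog_bytes, priority) tuples; higher
    priority is served first, each connection taking at most its backlog.
    Returns {cid: allocated_bytes}."""
    alloc = {}
    for cid, backlog, _prio in sorted(connections, key=lambda c: -c[2]):
        take = min(backlog, grant_bytes)  # never exceed the backlog
        alloc[cid] = take
        grant_bytes -= take
    return alloc
```

For example, with a 1000-byte grant and two connections, one high-priority with a 700-byte backlog and one low-priority with 600 bytes, the high-priority connection is served in full and the remainder goes to the other.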
Finally, the BS and SSs can fragment a MAC service data unit (SDU) into multiple PDUs, or pack multiple SDUs into a single PDU, so as to reduce the MAC overhead and improve transmission efficiency. A hybrid analytical-simulation study of the impact of this MAC-layer feature on performance has been carried out by Hoymann. Results showed that, if fragmentation is enabled, the frame can be filled almost completely, which can significantly increase frame utilization, depending on the size of the SDUs. These optional features have also been exploited in a cross-layer approach between the MAC and application layers to optimize the performance of multimedia streaming.
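To see why fragmentation lets a frame be filled almost completely, consider this toy burst-filling routine; modeling only a fixed 6-byte generic MAC header and ignoring subheaders is a simplifying assumption of ours:

```python
def fill_burst(sdu_sizes, capacity, header=6):
    """Pack SDUs into a burst of 'capacity' bytes, fragmenting the last
    SDU to use the residual space.

    Returns (payload bytes of each generated PDU, total bytes used).
    Without fragmentation an SDU that does not fit would leave the
    residual space empty; with it, only sub-header-sized gaps remain."""
    used, pdus = 0, []
    for size in sdu_sizes:
        remaining = capacity - used
        if remaining <= header:
            break  # not even room for a header
        payload = min(size, remaining - header)  # fragment if needed
        pdus.append(payload)
        used += header + payload
    return pdus, used
```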

2 PHY Layer
The IEEE 802.16 standard includes several noninteroperable PHY layer specifications. However, all the profiles envisaged by the WiMAX Forum for fixed BWA specify the use of orthogonal frequency division multiplexing (OFDM) with a Fast Fourier Transform (FFT) size of 256, which is thus the primary focus of this study. This PHY layer has been designed to support non-line-of-sight (NLOS) operation in the 2-11-GHz bands, both licensed and unlicensed. Transmitted data are conveyed through OFDM symbols, each made up of 200 subcarriers. Part of the OFDM symbol duration, named the cyclic prefix duration, is used to absorb multipath echoes. The interested reader can find a technical introduction to the OFDM system of IEEE 802.16 in recent survey papers.
To exploit the location-dependent characteristics of the wireless channel, IEEE 802.16 allows multiple burst profiles to coexist within the same network. In fact, SSs that are located near the BS can employ a less robust modulation than those located far from it. The combination of parameters that describes the transmission properties, in the DL or UL direction, is called a burst profile. Each burst profile is associated with an interval usage code (IUC), which is used as an identifier within the local scope of an IEEE 802.16 network. The set of burst profiles that can be used is periodically advertised by the BS using specific management messages, namely the downlink channel descriptor (DCD) and the uplink channel descriptor (UCD). To maintain the quality of the radio frequency communication link between the BS and the SSs, the wireless channel is continuously monitored so as to determine the optimal burst profile. The burst profile is thus dynamically adjusted so as to employ the least robust profile for which the link quality does not drop below a given threshold, in terms of the carrier-to-interference-and-noise ratio (CINR). However, as a side effect of the dynamic tuning of the transmission rate, it is not possible for the stations to compute the transmission time of MAC PDUs a priori. Therefore, SSs always issue bandwidth requests in terms of bytes instead of time, without including any overhead due to the MAC and PHY layers.
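The threshold-based profile selection can be sketched as follows; the CINR thresholds and per-slot capacities below are illustrative values of ours, not figures from the standard:

```python
# Hypothetical burst profiles: (IUC, name, min CINR in dB, bytes per slot).
PROFILES = [
    (1, "QPSK-1/2",   6.0,  6),
    (2, "16QAM-1/2", 11.5, 12),
    (3, "64QAM-3/4", 21.0, 27),
]

def select_profile(cinr_db):
    """Pick the least robust (most efficient) burst profile whose CINR
    threshold is still met, as described in the text; returns None if
    the link is too poor for any profile."""
    usable = [p for p in PROFILES if cinr_db >= p[2]]
    if not usable:
        return None
    return max(usable, key=lambda p: p[3])  # most bytes per slot
```

As the measured CINR varies, the selected profile (and hence the bytes carried per slot) changes, which is exactly why PDU transmission times cannot be computed a priori and requests are expressed in bytes.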
Even when the link quality lies above the given threshold, it is still possible for some data to get corrupted. To reduce the amount of data that the receiver is not able to successfully decode, several forward error correction (FEC) techniques are specified, which are employed in conjunction with data randomization, puncturing, and interleaving. Finally, each burst of data is preceded by a short physical preamble (or preamble), which is a well-known sequence of pilot subcarriers that synchronizes the receiver. The duration of a preamble is one OFDM symbol, which can be accounted as PHY layer overhead. In the DL subframe a preamble is prepended to each burst, which can be directed to multiple SSs employing the same burst profile (Figure 2). In the UL subframe, on the other hand, each SS always incurs the overhead of one preamble in each frame where it is served.
