Monday, May 28, 2012

Unsolicited Grant Service with Activity Detection

The UGS-AD algorithm is designed to support real-time service flows that generate fixed-size data packets on a semi-periodic basis (e.g., VoIP using an on–off voice codec). It incorporates activity detection, which makes it suitable for use with on/off voice codecs, and combines features of UGS and rtPS. UGS-AD has two scheduling modes, UGS and rtPS, and switches between them depending on the status of the voice user (on or off). On initialization of a VoIP service, the algorithm starts in rtPS mode. While in rtPS mode, if the voice user requests a bandwidth of zero bytes, the BS stays in rtPS mode; if the user requests a bandwidth greater than zero, the BS switches to UGS mode. While in UGS mode, if the voice user requests a bandwidth of zero bytes, the BS switches to rtPS; if the user requests a bandwidth greater than zero, the BS stays in UGS. By switching between rtPS and UGS modes, the UGS-AD algorithm largely addresses the UL resource wastage of the UGS algorithm and the MAC overhead and access delay of rtPS. This holds, however, only where the voice user employs a codec with just two data rates (on–off). Where a variable-rate codec like EVRC is used, resource wastage still occurs in UGS-AD: during the on duration, full resources are assigned even though the variable data rate of the codec means it will not operate at full rate for all of the time the resources are allocated. The operation of the UGS-AD algorithm is illustrated in Figure 1.

Figure 1: UL resource allocation using UGS-AD algorithm.
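As a sketch, the mode-switching rule just described can be written as a tiny two-state machine. The function name, mode labels, and byte counts below are illustrative, not from the standard:

```python
# Hypothetical sketch of the UGS-AD mode-switching rule described above.

def ugs_ad_step(mode, requested_bytes):
    """Return the next scheduling mode given the current mode and the
    bandwidth (in bytes) requested by the voice user this frame."""
    if mode == "rtPS":
        # Stay in rtPS while the user is silent; switch to UGS on activity.
        return "UGS" if requested_bytes > 0 else "rtPS"
    elif mode == "UGS":
        # Fall back to rtPS as soon as the user goes silent.
        return "rtPS" if requested_bytes == 0 else "UGS"
    raise ValueError("unknown mode: " + mode)

# A connection starts in rtPS mode on initialization of the VoIP service.
mode = "rtPS"
history = []
for req in [0, 0, 120, 120, 0, 120]:   # bytes requested per frame (invented)
    mode = ugs_ad_step(mode, req)
    history.append(mode)
print(history)   # ['rtPS', 'rtPS', 'UGS', 'UGS', 'rtPS', 'UGS']
```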

Thursday, May 24, 2012

Real-Time Polling Service | QoS Scheduling

The rtPS algorithm is designed to support real-time service flows, such as MPEG video or teleconferencing, that periodically generate variable-size data packets. In this algorithm, the BS assigns UL resources sufficient for a unicast bandwidth request to the voice users; this is called the polling process. The duration for which the BS continues to poll an SS with an rtPS connection is negotiated during connection initialization. The SSs use the assigned polling resources to send their bandwidth requests, reporting the exact bandwidth need of their rtPS connection. The BS then allocates exactly the bandwidth requested to the SS for transmission of the data. Figure 1 illustrates this dynamic polling process.

Figure 1: Polling process in rtPS.
Because rtPS always carries out the polling process, it can adaptively determine a suitable resource allocation from frame to frame. This adaptive request–grant process continues until the connection is terminated, and because of it the algorithm has better data transport efficiency than the UGS algorithm. The algorithm dynamically follows every data rate of the voice codec without any resource wastage, as illustrated in Figure 2 (allocated and utilized resources are equal). This is a major advantage over the UGS algorithm. The drawback of rtPS, however, is that the dynamic polling process introduces MAC overhead and access delay; rtPS therefore has more MAC overhead and a larger access delay than UGS.

Figure 2: UL resource allocation using rtPS algorithm.
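The per-frame request–grant cycle can be sketched as follows. The poll size and packet sizes are invented numbers; only the exact-grant behaviour follows the description above:

```python
# Illustrative per-frame rtPS request-grant loop (numbers are made up).

POLL_BYTES = 6   # UL bytes spent on the unicast bandwidth-request poll (overhead)

def rtps_frame(packet_bytes):
    """One frame: BS polls, SS requests its exact need, BS grants exactly that.
    Returns (granted, overhead)."""
    request = packet_bytes          # SS reports its exact bandwidth need
    grant = request                 # BS allocates exactly what was requested
    return grant, POLL_BYTES

traffic = [171, 80, 40, 171]        # variable-size packets, one per frame
grants = [rtps_frame(p) for p in traffic]
assert all(g == p for (g, _), p in zip(grants, traffic))  # no wastage
total_overhead = sum(o for _, o in grants)
print(total_overhead)  # 24 -> the price paid for per-frame polling
```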

Sunday, May 20, 2012

Unsolicited Grant Service

The UGS algorithm is designed to support real-time service flows, such as Voice over Internet Protocol (VoIP), that periodically generate fixed-size data packets. The BS periodically assigns fixed-size grants to voice users, sufficient to carry the voice data packets generated at the maximum data rate of the enhanced variable rate codec (EVRC). The grant period is negotiated during connection initialization. Thus, the MAC overhead and UL access delay caused by the bandwidth request process are minimized. The drawback of the UGS algorithm is the following. Voice users do not always have voice data packets to send throughout the duration of a connection, because they have frequent silence periods; a typical voice codec switches intermittently between "on" and "off" states, as illustrated in Figure 1. While in the "on" state, popular voice codecs like EVRC also have variable data rates. For example, EVRC operates at 1/8 of the full data rate during the off state, and at three different rates during the on state (rates 1, 1/2, and 1/4). Therefore, for a UGS algorithm that periodically reserves a flat amount of resources capable of sending data at the maximum EVRC rate, a significant amount of UL resources is wasted when the codec is silent (off) as well as when the codec is on but not operating at the full rate. This is illustrated in Figure 2. A number of other algorithms have thus been designed to adaptively determine the actual UL needs of each connection during frame periods, so as to minimize this resource wastage.

Figure 1: Voice codec status.

Figure 2: UL resource allocation using UGS algorithm.
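The wastage argument can be made concrete with a rough calculation. The frame sizes below assume the commonly cited EVRC payloads of 171, 80, 40, and 16 bits for full, 1/2, 1/4, and 1/8 rate; treat them as illustrative:

```python
# Rough arithmetic sketch of the UGS wastage argument (sizes are illustrative).

RATE_BITS = {"full": 171, "half": 80, "quarter": 40, "eighth": 16}
UGS_GRANT = RATE_BITS["full"]       # UGS always reserves for the maximum rate

codec_states = ["full", "half", "eighth", "eighth", "quarter", "full"]
allocated = UGS_GRANT * len(codec_states)
used = sum(RATE_BITS[s] for s in codec_states)
wasted = allocated - used
print(allocated, used, wasted)      # 1026 494 532 -> over half the grant wasted
```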

Wednesday, May 16, 2012

Scheduling Algorithms

In the WiMAX standard (802.16), four scheduling classes are defined: unsolicited grant service (UGS), real-time polling service (rtPS), non-real-time polling service (nrtPS), and best effort (BE). As illustrated in Figure 1, each traffic connection is associated with one of the four scheduling services, and the SS scheduler selects packets to be transmitted from each queue based on the scheduling policy employed. Usually, the scheduler selects packets from the highest-priority queue that is not empty; transmission of packets from lower-priority queues is postponed until there is no packet available to send from a higher-priority queue. Since UL traffic is generated at the SS, the SS scheduler can arrange transmissions based on up-to-date information on the current number and status of UL connections, which helps to improve QoS performance. In the following, we review the various scheduling algorithms provided in the standard for handling the transmission of packets belonging to these services.

Figure 1: UL scheduling at the SS.
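A minimal sketch of the strict-priority selection described above; the queue names follow the four scheduling classes, everything else is invented:

```python
# Strict-priority SS scheduler sketch: serve the highest-priority
# non-empty queue first, as described in the text.
from collections import deque

PRIORITY = ["UGS", "rtPS", "nrtPS", "BE"]   # highest to lowest

def next_packet(queues):
    """Pick the head packet from the highest-priority non-empty queue."""
    for svc in PRIORITY:
        if queues[svc]:
            return svc, queues[svc].popleft()
    return None  # nothing to send this transmission opportunity

queues = {svc: deque() for svc in PRIORITY}
queues["BE"].extend(["web1", "web2"])
queues["rtPS"].append("video1")
order = [next_packet(queues) for _ in range(3)]
print(order)  # [('rtPS', 'video1'), ('BE', 'web1'), ('BE', 'web2')]
```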

Saturday, May 12, 2012


IEEE 802.16 supports a fixed-length frame with flexible (adaptive) DL/UL resource usage ratios. The BS adaptively adjusts the DL and UL subframe lengths on a frame-by-frame basis depending on the DL/UL traffic and channel conditions. Typically, the DL:UL resource ratio can be varied from 3:1 to 1:1 in a PMP WiMAX network. Figure 1 illustrates the fixed-length frame in the PMP WiMAX network and the flexible DL/UL subframes. The figure also depicts the network entry process for subscriber stations (SS) and the scheduling periods for assigning transmission opportunities to SSs already initiated into the network. In access (PMP) mode, a new SS detects the preamble and frame control header (FCH) and identifies the number of DL burst transmissions from the DL MAP in the FCH. At the end of the last DL burst (Figure 1), the new SS uses a contention period to exchange network entry request signals with the BS. If successful, the BS processes the request and sends entry instructions (assigned DL/UL transmission opportunities, power, etc.) in the DL/UL MAPs of the next frame, and the SS is initiated into the network. In mesh mode, new SSs wait for a network entry signal broadcast at the beginning of a frame, to which they can respond within a specified period; the scheduling process is used to initiate new SSs into the network, and SSs transmit on the scheduled slots.

Figure 1: Adaptive DL/UL subframes in WiMAX standard.
In the WiMAX standard (802.16e), UL and DL assignments are based on time division multiple access (TDMA). In each frame, the BS scheduler assigns UL and DL transmission opportunities to SSs until their negotiated data periods expire. The resources given to an SS for its data transmission are in both the frequency and time domains; the WiMAX MAC thus supports frequency-time resource allocation in both DL and UL on a per-frame basis. The resource allocation is delivered in MAP messages at the beginning of each frame, so the allocation can be changed dynamically frame by frame in response to traffic and channel conditions. Additionally, the amount of resource in each allocation can range from one slot to the entire frame in the time domain, and from one subchannel to all subchannels of an OFDM symbol in the frequency domain. WiMAX also employs fast scheduling in both the DL and UL to respond to fast variations in channel conditions. This fast and fine-grained resource allocation allows superior QoS for data traffic under bursty traffic and rapidly changing channel conditions. The fundamental premise of the IEEE 802.16 MAC architecture is QoS. It defines service flows that can map to DiffServ code points or MPLS flow labels, enabling end-to-end IP-based QoS. Additionally, subchannelization and MAP-based signaling schemes provide a flexible mechanism for optimal scheduling of space, frequency, and time resources over the air interface on a frame-by-frame basis. This flexible scheduling allows QoS to be better enforced and enables support for guaranteed service levels, including committed and peak information rates, latency, and jitter, for various types of traffic on a customer-by-customer basis.

Wednesday, May 9, 2012


FEC allows the WiMAX MAC layer to detect and correct errors introduced during the transmission of frames over the air link. Three methods of FEC are specified in the WiMAX system: Reed-Solomon concatenated with convolutional code (RS-CC), block turbo code (BTC), and convolutional turbo code (CTC). RS-CC is mandatory, while BTC and CTC are optional due to their complexity, even though they provide 2–3 dB better coding gain than RS-CC. For 802.16e, hybrid ARQ (H-ARQ) has been included as an optional feature. There are three types of H-ARQ, classified by the manner in which they handle retransmissions.

Type I H-ARQ retransmits lost or unacknowledged blocks using chase combining, in which the old erroneous block is stored at the receiver and combined with the retransmitted copy. This increases the probability of successful decoding at the FEC block during the retransmission attempts. Type II/III H-ARQ uses incremental redundancy, sending additional coded bits on each attempt, to ensure successful decoding at the FEC block during the retransmission attempts. Rate adaptation works hand-in-hand with the FEC block in the WiMAX system: when a user experiences good channel conditions, it is desirable to exploit these peaks in the channel gain to increase throughput.

This is achieved by having the SS increase the coding rate, e.g., from a rate-1/2 code to a rate-3/4 code, so that more information bits can be transmitted per channel use while still meeting the target bit error rate (BER). When the channel degrades, the rate is reduced to the next lower rate to ensure that the target BER is met. This dynamic process is carried out on a frame-by-frame basis in the WiMAX system, using the flexibility provided by MAP signaling to adaptively adjust the UL/DL rates.

Sunday, May 6, 2012


In wired networks, channel impairments tend to be constant or at least very slowly varying. Wireless networks, in contrast, are well known for rapidly fluctuating channel conditions even when the transmitter and receiver are stationary. Broadly speaking, the lower the modulation and coding rate, and the higher the transmitted power, the more channel fading a system can tolerate while maintaining a link at a constant error level. It is therefore desirable to dynamically change the transmitted power, modulation, and data rate to best match the channel conditions of the moment, so as to continually support the highest-capacity channel possible. WiMAX systems support adaptive modulation and coding on both the downlink (DL) and uplink (UL), and adaptive power control on the UL. Adaptive modulation allows the WiMAX MAC layer to adjust the modulation rate depending on the channel or radio link quality. When the channel quality is good, the MAC layer chooses the highest modulation rate, e.g., 64QAM, giving the system the highest throughput; when the channel quality degrades, the MAC layer reduces the modulation rate, e.g., to 16QAM, reducing the throughput. In practice, adaptive modulation and coding rate control are used in conjunction with power control. In PMP network deployments with multiple users in a cell serviced by a BS, when link degradation arises for a user, the BS first increases the transmitted power of the user to provide extra link budget gain, until the maximum permitted power is reached. If the received signal quality does not improve, the coding rate is then reduced: extra redundancy is added to provide more coding gain for better error correction performance. If the received signal quality still does not improve, the modulation rate is reduced as a last resort (as this affects throughput more than the other measures). A similar process, in reverse, is followed when link quality improves.
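The adjustment order just described (power first, then coding, then modulation) can be sketched as a simple decision ladder. The power cap, rate list, and modulation list below are invented for illustration:

```python
# Hedged sketch of the link-adaptation order: boost power first, then add
# coding redundancy, and lower the modulation only as a last resort.

POWER_MAX_DBM = 23
CODING_RATES = [3/4, 1/2, 1/3]            # decreasing rate = more redundancy
MODULATIONS = ["64QAM", "16QAM", "QPSK"]  # decreasing spectral efficiency

def degrade_link(power_dbm, rate_idx, mod_idx):
    """Return the next (power, coding index, modulation index) after one
    degradation step, following the order described in the text."""
    if power_dbm < POWER_MAX_DBM:
        return power_dbm + 1, rate_idx, mod_idx      # 1: boost power first
    if rate_idx < len(CODING_RATES) - 1:
        return power_dbm, rate_idx + 1, mod_idx      # 2: add coding redundancy
    if mod_idx < len(MODULATIONS) - 1:
        return power_dbm, rate_idx, mod_idx + 1      # 3: drop modulation last
    return power_dbm, rate_idx, mod_idx              # already at the floor

state = (22, 0, 0)
steps = []
for _ in range(4):
    state = degrade_link(*state)
    steps.append(state)
print(steps)  # [(23, 0, 0), (23, 1, 0), (23, 2, 0), (23, 2, 1)]
```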
For WiMAX mesh networks using the amplify-and-forward relaying option, mesh relaying cannot exploit adaptive modulation, because relaying nodes are unable to decode the contents of the received OFDM symbols to retrieve the modulated data and remodulate them at a higher or lower rate, so as to increase or reduce the transmission rate (or throughput) of the mesh streams in response to link quality. For mesh networks using the decode-and-forward relaying option, however, adaptive modulation and coding rate control can benefit the mesh relaying operation, as mesh nodes can decode the mesh data streams and adjust the coding and modulation rate depending on the quality of the forwarding link. For example, in Figure 1 a relay node decodes a data stream originally transmitted from the source node using 16QAM modulation and remodulates it using 64QAM, since the relay has good channel quality to the destination node that can support this modulation rate. This results in fast and efficient use of the mesh links. Power control is applicable in WiMAX networks (PMP and mesh) in two ways: first, when nodes are transmitting data, they are regulated to transmit only the minimum power required to achieve successful reception at the receiver; second, when mobile nodes have no data (mesh relay or access service data) to transmit or receive, they enter sleep mode to save battery life.

Figure 1: Adaptive modulation at WiMAX mesh nodes.

Wednesday, April 4, 2012



Beamforming takes advantage of interference to change the directionality of an antenna array system. A beamformer controls the amplitude and phase of the signal at each transmitting antenna element to create a pattern of constructive interference (beamspots) and destructive interference (nulls) in the wavefront. To create a beamspot, the beamformer uses an array of closely spaced antennas, often housed in a single enclosure as illustrated in Figure 1. A spacing of λ/2 between the antenna elements is commonly used, where λ is the wavelength of the transmitted signal, given by λ = c/f, f is the frequency of the transmitted signal, and c is the speed of light. By varying the amplitude and phase of each antenna element, the beamformer can focus electromagnetic energy (a beam) in the desired directions. Beams are directed toward intended users while nulls are steered toward unintended users, reducing interference to the unintended users while increasing the received SNR for the intended user. This provides a stronger link to the intended user and improves reach and capacity.

Figure 1: Beamforming technique.
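The beamspot/null pattern can be illustrated numerically with the array factor of a uniform linear array at λ/2 spacing; the 8-element array and the angles are arbitrary choices:

```python
# Numerical sketch of beam steering with a uniform linear array (ULA).
import cmath, math

N = 8                  # antenna elements
D = 0.5                # element spacing in wavelengths (lambda/2)

def array_factor(theta_deg, steer_deg):
    """|AF| for a ULA steered toward steer_deg, evaluated at theta_deg."""
    psi = 2 * math.pi * D * (math.sin(math.radians(theta_deg))
                             - math.sin(math.radians(steer_deg)))
    return abs(sum(cmath.exp(1j * n * psi) for n in range(N)))

# The beam peaks in the steered direction (coherent gain N) and is much
# weaker off-axis, which is what reduces interference to unintended users.
print(round(array_factor(30, 30), 2))   # 8.0 -> full coherent gain
print(round(array_factor(-10, 30), 2))  # a value well below 8 off-axis
```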

Adaptive Antenna System

AAS is one of the advanced antenna technologies specified in the WiMAX standard to improve performance and coverage. In an AAS system, the transmitter (base station, BS) adaptively tracks a mobile receiver as it moves around the transmitter's coverage area, steering the focus of the beam (beamspot) onto the receiver unit as it moves; the beam steering can be either mechanical or electronic. AAS thus creates narrow beams to communicate with the desired user device, which helps to reduce interference to unintended user devices and improves the carrier-to-interference ratio (C/I) and frequency reuse, giving rise to high spectral efficiency. Through the use of adaptive processing (beam steering), AAS therefore improves the performance and coverage of the system significantly. In WiMAX networks, AAS will find wide application in both point-to-multipoint (PMP) and mesh network deployments. In PMP mode, AAS operates in a similar way as in current 3G cellular systems and is used to enhance coverage and performance. In mesh mode, AAS is used to form physical or directed mesh links. A physical or directed mesh is a form of mesh in which substantially directional antennas are used to create physical links between neighboring devices: mesh nodes adaptively steer their antennas toward other nodes in their neighborhood and direct the focus of the electromagnetic radiation accordingly, creating a physical link with the intended neighboring device. One of the main drawbacks of AAS, however, is the high complexity involved in designing antenna systems capable of adaptively switching (steering) antenna directionality toward users who may be highly mobile. The use of AAS technology in mobile network deployments is therefore very challenging from a complexity perspective.
Another drawback of AAS technology is that in an urban environment with rich scattering, the beams become blurred at the receiver and are not as focused as expected, due to reflections of the waves as they propagate from the transmitter to the receiver. This effect is known as angle spread, and it significantly impacts the performance of AAS in urban areas with cluttered structures. The gains achieved with AAS in such places are thus considerably lower than the theoretical expectations. For example, an AAS system using an eight-column array would have an ideal gain of 6.9 dB, but angle spread would reduce this to only 3.2 dB in an urban environment and 4.7 dB in a suburban environment. There are, however, techniques to mitigate the effects of angle spread, and active research is ongoing in this area.

Sunday, April 1, 2012


Advanced antenna technologies specified in the WiMAX system to mitigate non-LOS propagation problems and ensure high-quality signal reception include diversity and multiple-input multiple-output (MIMO) systems, adaptive antenna systems (AAS), and beamforming systems.

Diversity Systems

The diversity technique provides the receiver with multiple copies of the transmitted signal, each received over an independently fading wireless channel. The notion of diversity relies on the fact that, with M independently fading replicas of the transmitted signal available at the receiver, the probability that all copies fade below a usable level is reduced to p^M, where p is the probability that each individual signal fades below a usable level. The link error probability is therefore improved without increasing the transmitted power. Recently, the use of diversity at the transmitter side has also gained wide attention, leading to the consideration of the more general case of multiple transmit–multiple receive antennas, or MIMO systems.

MIMO Systems

The two options for MIMO transmission in the WiMAX standard are space-time codes and spatial multiplexing. For space-time codes, both space-time trellis codes and the Alamouti space-time block code are specified. However, it is the Alamouti space-time block code that has so far been implemented by vendors, owing to its reduced complexity (even though the space-time trellis code offers better link performance). In the Alamouti scheme, designed for two transmitting antennas, a pair of symbols is transmitted in one time instant, and a transformed version of the symbols is transmitted in the next. At the receiver, the decoder detects the four symbols transmitted over the two time slots and processes them to obtain 2-branch diversity gain. Thus the Alamouti scheme achieves full diversity with a rate-1 code. In the multiplexing option, the multiple antennas are used to increase capacity: the original high-rate stream is partitioned into low-rate substreams, and each substream is transmitted in parallel over the same channel using a different antenna. If there are enough scatterers between the transmitter and the receiver, suitable MIMO detection algorithms such as zero-forcing, minimum mean-square error (MMSE), or the Vertical Bell Labs Layered Space-Time architecture (V-BLAST) can be designed to separate the substreams. The link capacity (the theoretical upper bound on throughput) then increases linearly with min(N, M), where N is the number of transmit antennas and M is the number of receive antennas.
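The Alamouti transmit/combine steps described above can be sketched for the noise-free 2×1 case. The channel gains and symbols below are arbitrary, and perfect channel knowledge at the receiver is assumed:

```python
# Toy Alamouti (two transmit antennas, one receive antenna) sketch, assuming
# a flat channel constant over the two symbol periods and no noise.

def alamouti_tx(s1, s2):
    """Two transmit slots: antennas send (s1, s2), then (-s2*, s1*)."""
    return (s1, s2), (-s2.conjugate(), s1.conjugate())

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining at the single receive antenna."""
    s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
    return s1_hat, s2_hat

h1, h2 = 0.8 + 0.3j, -0.2 + 0.9j          # channel gains, arbitrary
s1, s2 = 1 + 1j, -1 + 1j                  # two QPSK symbols
(t11, t12), (t21, t22) = alamouti_tx(s1, s2)
r1 = h1 * t11 + h2 * t12                  # received in slot 1
r2 = h1 * t21 + h2 * t22                  # received in slot 2
g = abs(h1) ** 2 + abs(h2) ** 2           # 2-branch combined channel energy
s1_hat, s2_hat = alamouti_combine(r1, r2, h1, h2)
print(abs(s1_hat / g - s1) < 1e-9, abs(s2_hat / g - s2) < 1e-9)  # True True
```

After combining, each symbol estimate is scaled by |h1|^2 + |h2|^2, which is exactly the 2-branch diversity gain mentioned in the text.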

MIMO Systems with Antenna Selection

For MIMO systems to be deployed on mobile WiMAX devices, the concept of antenna selection is essential. Because RF chains dominate the hardware cost of wireless transceivers, mobile devices cannot implement the large number of RF chains needed for high-order MIMO. In such systems, therefore, a reduced number of RF chains is implemented, and the antennas with the best received energies are adaptively selected and switched onto the implemented RF chains for MIMO signal processing, as illustrated in Figure 1. The performance of such systems has been studied quite elaborately in the literature, in comparison with the full-complexity system that utilizes all available antennas. It was shown that the diversity gain is maintained in the reduced-complexity system despite the use of antenna selection, while the coding gain deteriorates in proportion to the ratio of the selected antennas to the total available antennas.

Figure 1: MIMO subset antenna selection.
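The selection step can be sketched as picking, each frame, the antennas with the strongest instantaneous gains for the available RF chains (a hypothetical illustration; the gain values are invented):

```python
# Subset antenna selection sketch: with fewer RF chains than antennas,
# switch the strongest antennas onto the chains, as described in the text.

def select_antennas(channel_gains, num_rf_chains):
    """Return indices of the best antennas, one per available RF chain."""
    ranked = sorted(range(len(channel_gains)),
                    key=lambda i: abs(channel_gains[i]), reverse=True)
    return sorted(ranked[:num_rf_chains])

gains = [0.2, 1.1, 0.7, 0.4]      # instantaneous gain per antenna (arbitrary)
print(select_antennas(gains, 2))  # [1, 2] -> switch these onto the RF chains
```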

MIMO Technologies in IEEE 802.16m Standard

The IEEE 802.16 standards committee has recently initiated the process of extending the existing IEEE 802.16e standard (mobile WiMAX) for high-capacity, high-QoS mobile applications. The new standard was dubbed IEEE 802.16m at the IEEE session in London in January 2008. The working group tasked with producing the working documents for the new standard was named Task Group m (TGm); the group hopes to complete the specification by the end of 2009. When completed, the standard will be backward compatible with IEEE 802.16e and interoperable with 4G cellular standards supporting the IMT-Advanced technologies. Although the details of the IEEE 802.16m standard are not available at the moment, the most important features being touted for it include
  • Target downstream speed of 100 Mbps in highly mobile mode, and up to 1 Gbps in nomadic mode (upstream rates are not yet known, but should be at least on par with 802.16e).
  • Channel sizes up to 40 MHz (802.16e currently supports channel sizes up to 20 MHz).
  • Use of TDD and FDD.
  • Backward-compatibility with 802.16e.
  • OFDMA radio (same as in 802.16e).
  • Mandatory MIMO antenna technology of size 4 × 4 (four transmitting and four receiving antennas).
In contrast to IEEE 802.16e, which supports mandatory MIMO of size 2 × 2 (two transmitting and two receiving antennas), the mandatory higher-capacity MIMO technology in 802.16m will provide extra capacity to support the targeted high downstream speed. Since the downstream has been the bottleneck in wireless services, this improvement will provide a significant boost in system capacity, enabling the system to support the wide range of multimedia services expected of 4G-compatible technologies.

Tuesday, March 27, 2012


Operations at high frequencies, ranging between 10 and 66 GHz, were initially specified in the earlier versions of the 802.16 standard for fixed access. With this specification, only line-of-sight (LOS) signal propagation, with an unobstructed path from the transmitter to the receiver, is feasible. Though high-frequency operation has the advantage of less interference, most wireless technologies prefer lower frequencies because RF signals penetrate structures much better at low frequencies, enabling non-LOS propagation techniques. In non-LOS or multipath propagation modes, the transmitted signals are scattered, reflected, and diffracted by objects in the propagation paths between the transmitter and the receiver, as shown in Figure 1. The receiver thus receives multiple copies of the transmitted signal, each arriving with a different amplitude and phase (or delay). These multipath signals may combine destructively at the receiver, resulting in severe signal fades. To accommodate services in non-LOS conditions, the 802.16-2004 standard subsequently specified operations at lower frequencies, between 2 and 11 GHz. Single-carrier transmission, known as WirelessMAN-SC, as well as two multicarrier transmissions, WirelessMAN-OFDM (orthogonal frequency division multiplexing) and WirelessMAN-OFDMA (orthogonal frequency division multiple access), are also specified. The WiMAX system also specifies a number of advanced PHY layer and antenna technologies, both fixed and adaptive, to combat the severe fading effects of the multipath propagation channel and enhance system performance.

Figure 1: Non-LOS propagation and intersymbol interference (ISI).

Friday, March 23, 2012


The WiMAX PHY is responsible for the transmission of data over the air interface (physical medium). The PHY receives MAC layer data packets through its interface with the lowest MAC sublayer and transmits them according to the MAC layer QoS scheduling. The WiMAX MAC layer comprises three sublayers, which interact through service access points (SAP) to provide the MAC layer services, as shown in Figure 1. The convergence sublayer (CS) interfaces the WiMAX network with other networks by mapping external network data (from ATM, Ethernet, IP, etc.) to the WiMAX system. The MAC common part sublayer (MAC CPS) provides the majority of the MAC layer services. The MAC CPS receives data from the CS as MAC service data units (MAC SDUs) and efficiently packs them into the payloads of MAC packet data units (MAC PDUs) through the processes of fragmentation and packing: fragmented parts of MAC SDUs are used to fill remnant portions of MAC PDU payloads that cannot accommodate a full MAC SDU. As WiMAX provides connection-oriented service, the MAC CPS is also responsible for bandwidth request/reservation for a requested connection, and for connection establishment and maintenance. In the WiMAX standard, bandwidth request/reservation is an adaptive process that takes place on a frame-by-frame basis; this allows more efficient resource utilization and optimized performance, but requires the MAC CPS to provide up-to-date data on bandwidth request/reservation for each connection, frame by frame. The MAC CPS also provides a connection ID for each established connection and marks all MAC PDUs traversing the MAC interface to the PHY with the respective connection ID. This sublayer also performs QoS scheduling, deciding the order of packet transmissions on the PHY based on the service flow decided during connection establishment. The privacy sublayer provides authentication, to prevent theft of service, and encryption, to provide security of service.

Figure 1: WiMAX Protocol stack.
The ensemble of the activities of the three sublayers of the WiMAX MAC layer constitutes the MAC layer services. These services can broadly be categorized into two groups: periodic and aperiodic activities. Periodic activities are fast, delay-sensitive activities carried out to support ongoing communications; they must be completed within one frame duration. Examples include QoS scheduling, packing, and fragmentation. Aperiodic activities are slow, delay-insensitive activities; they are executed as and when required by the system and are not bounded by frame durations. Examples include ranging and authentication for network entry.
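The fragmentation-and-packing behaviour described above can be sketched as a greedy fill of fixed-size PDU payloads. The sizes are invented, and real 802.16 PDUs carry subheaders that this sketch ignores:

```python
# Illustrative MAC SDU packing with fragmentation: fill each fixed PDU
# payload, splitting an SDU across PDUs whenever it does not fit whole.

def pack_sdus(sdu_sizes, pdu_payload):
    """Greedily pack SDU bytes into PDU payloads, fragmenting across PDUs.
    Returns a list of PDUs, each a list of (sdu_index, bytes) fragments."""
    pdus, current, room = [], [], pdu_payload
    for i, size in enumerate(sdu_sizes):
        while size > 0:
            take = min(size, room)       # fill the remnant of this payload
            current.append((i, take))
            size -= take
            room -= take
            if room == 0:                # payload full: start a new PDU
                pdus.append(current)
                current, room = [], pdu_payload
    if current:
        pdus.append(current)
    return pdus

print(pack_sdus([60, 100, 30], 80))
# [[(0, 60), (1, 20)], [(1, 80)], [(2, 30)]]
```

Note how the tail of the second SDU is used to fill the remnant of the first PDU payload, which is the packing behaviour the text describes.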

Tuesday, March 20, 2012


A standard model, suitable for planning purposes, identifies a wireless network with a set of transmitting and receiving antennas scattered over a territory. Such antennas are characterized by a position (geographical coordinates and elevation) and by a number of radio-electrical parameters. The network design process consists of establishing the locations and suitable radio-electrical parameters of the antennas. The resulting network is evaluated by means of two basic performance indicators: (1) network coverage, that is, the quality of the wanted signals perceived in the target region, and (2) network capacity, that is, the ability of the network to meet traffic demand. On the basis of quality requirements and projected demand patterns, suitable target thresholds are established for both indicators. In principle, coverage and capacity targets should be pursued simultaneously, as both depend on the network configuration. However, to handle large real-life instances, conventional network planning resorts to a natural decomposition approach, which consists of performing coverage and capacity planning at different stages. In particular, the network is designed by first placing and configuring the antennas to ensure coverage of the target area, and then assigning a suitable number of frequencies to meet (projected) capacity requirements. The final outcome can be simulated and evaluated by an expert, and the whole process repeated until a satisfactory result is obtained (Figure 1). Future changes in demand patterns can be met by increasing sectorization (i.e., mounting additional antennas at the same site), by selecting new sites, and by assigning additional transmission frequencies.

Figure 1: Phases of the conventional planning approach.
The network planning process requires an adequate representation of the territory. In past years, the standard approach was to subdivide the territory into equally sized hexagons, and basic propagation laws were implemented to calculate field strengths. By straightforward analytical computations, these simplified models could provide the (theoretical) positions of the antennas and their transmission frequencies. Unfortunately, the approximations introduced by this approach were in most cases unacceptable for practical planning, as the model does not take into account several fundamental factors (e.g., the orography of target territories, equipment configurations, the actual availability of frequencies and of geographical sites to accommodate antennas, etc.). Furthermore, the extraordinary growth of wireless communication quickly resulted in extremely large networks and a congested frequency spectrum, and called for better exploitation of the available band. It soon became apparent that effective automatic design algorithms were necessary to handle large instances of complex planning problems and to improve the exploitation of the scarce radio resources. These algorithms were provided by mathematical optimization. Indeed, already in the early 1980s it was recognized that the frequency assignment performed at the second stage of the planning process is equivalent to the Graph Coloring Problem (or its generalizations). The graph coloring problem consists of assigning a color (= frequency) to each vertex (= antenna) of a graph so that adjacent vertices receive different colors and the number of colors is minimized. The graph G = (V, E) associated with the frequency assignments of a wireless network is called the interference graph, since an edge uv ∈ E represents interference between nodes u ∈ V and v ∈ V, and implies that u and v cannot be assigned the same frequency.
The graph coloring problem is one of the best-known and most widely studied problems in combinatorial optimization. A remarkable number of exact and heuristic algorithms have been proposed over the years to obtain optimal or suboptimal colorings. Some of these methods were immediately at hand to solve the frequency assignment problem.
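As a concrete illustration of such a heuristic, a greedy coloring of an interference graph can be sketched as follows. The graph, the node names, and the Welsh-Powell-style degree ordering are illustrative assumptions, not taken from the text:

```python
# Hedged sketch: greedy coloring of a hypothetical interference graph,
# where a color stands for a frequency and an edge means two antennas
# interfere and may not share a frequency.

def greedy_coloring(adjacency):
    """Assign each vertex (antenna) the smallest color (frequency)
    not already used by a colored neighbor."""
    colors = {}
    # Color higher-degree vertices first (Welsh-Powell-style ordering).
    for v in sorted(adjacency, key=lambda v: -len(adjacency[v])):
        used = {colors[u] for u in adjacency[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# Illustrative interference graph (assumed, not from the text).
graph = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"B"},
}
assignment = greedy_coloring(graph)
```

Greedy coloring gives no optimality guarantee in general, but it is the kind of method that was "immediately at hand" for early frequency assignment.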
The development of mathematical optimization methods triggered the introduction of more accurate representations of the target territories. In particular, also inspired by standard Quality-of-Service (QoS) evaluation methodologies, the coarse hexagonal cells were replaced with (the union of) more convenient geometrical entities, namely the demand nodes introduced by Tutschku, and with the now universally adopted testpoints (TP). In the TP model, a grid of approximately square cells is overlaid on the target area. Antennas are assumed to be located at the centers of testpoints; all information about customers and QoS in a TP, such as traffic demand and received signal quality, is aggregated into single coefficients. The TP model allows for smarter representations of the territory, of the actual antenna positions, of the signal strengths, and of the demand distributions. This in turn permits a better evaluation of the QoS and, most importantly, makes it possible to construct more realistic interference graphs, thus leading to improved frequency assignments. Indeed, by means of effective coloring algorithms, it was possible to improve the design of large real-life mobile networks and also of analog and digital broadcasting networks.
Finally, based on the TP model, it was also possible to develop accurate models and effective optimization algorithms for the first stage of the planning process, namely the coverage phase: establishing suitable positions and radio-electrical parameters for the antennas of a wireless network.
In recent years, thanks to the development of more effective optimization techniques and to the increase of computational power, a number of models integrating coverage and capacity planning have been developed and applied to the design of Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), and analog and digital video broadcasting networks.

Friday, February 10, 2012

SIMULATION RESULTS | Capacity Planning and Design

The average-based design models lead to much smaller estimates of the required capacity. They therefore run the risk of not guaranteeing acceptable performance for many real-time applications, such as voice or video, where jitter must also be taken into account. We currently do not have design models that account for jitter, so we need to evaluate whether the jitter remains acceptable in a system designed with an average-delay method.
In this section, we present simulation results to study the delays encountered by individual voice and video sources under various provisioning scenarios and compare them with the required delays for voice and video, respectively. We used ns-2 to conduct the simulations. In this section, we only consider AF subclasses, where multiple sources send packets to each subclass, packets of each subclass are served in order of arrival, and bandwidth is shared between the subclasses using PDD scheduling. The simulation model for the AF class is shown in Figure 1. We simulate a voice source using a two-state on–off model that generates packets with a deterministic inter-arrival time of 15 ms in the on state. On periods are exponential with rate 2.5 and off periods are also exponential, with rate 1.67. Each packet is 120 bytes. The video source is modeled using deterministic batch arrivals with a batch inter-arrival time of 33 ms. The number of packets in a batch is geometrically distributed with an average of five packets. In each burst, the last packet has a size distributed as uniform(0, 1000) bytes; all other packets are 1000 bytes.

Figure 1: Simulation model for AF class.
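The on–off voice model above (120-byte packets every 15 ms while on, exponential on and off periods with rates 2.5 and 1.67) can be sketched as a trace generator. The function name, seed, and trace length are assumptions for illustration, not part of the original simulation setup:

```python
import random

# Sketch of the two-state on-off voice model: 120-byte packets every
# 15 ms while "on"; on/off durations are exponential with rates 2.5
# and 1.67 per second (mean on 0.4 s, mean off 0.6 s).

def voice_trace(duration_s, seed=1):
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while t < duration_s:
        on_end = t + rng.expovariate(2.5)      # exponential on period
        while t < min(on_end, duration_s):
            arrivals.append((t, 120))          # (arrival time s, bytes)
            t += 0.015                         # deterministic 15 ms spacing
        t = on_end + rng.expovariate(1.67)     # exponential off period
    return arrivals

trace = voice_trace(10.0)
rate_bps = sum(b for _, b in trace) * 8 / 10.0  # average offered load
```

With an on-state fraction of about 0.4 and a 64 kbps peak rate, the long-run average load of such a source is roughly 25 kbps, which is what the average-based design models exploit.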
Here, cLB/r, cOO/r, and cP/r refer to the overprovisioning required when dimensioning for average delays using the LB-based, on–off-based, and Poisson-based models, respectively. We compute the capacity using these models, along with PDD-based scheduling, for single, five, and ten voice and video sources. We then use that capacity in the simulation and compare, in Figures 2 through 7, the delays for single, five, and ten voice and video sources. We plot the observed mean delay with error bars corresponding to twice the sample standard deviation for the voice and video sources, together with a horizontal line showing the required average delay for each source.

Figure 2: Delay for single voice source.

Figure 3: Delay for single video source.

Figure 4: Delay for five voice sources.

Figure 5: Delay for five video sources.

Figure 6: Delay for ten voice sources.

Figure 7: Delay for ten video sources.
Note that for the Poisson-based capacity model with single sources, the actual mean delay is many times the target delay, both for voice and video. Moreover, some voice packets can have a delay as high as 400 ms and will be useless at the receiver. For video, too, packets can have delays as high as 1 s. Such capacity planning is not very useful and could lead to unsatisfied customers. When we multiplex five or ten voice and video sources, the average delays get closer to the target delays, and for ten sources they are acceptable for both voice and video. However, there is still a large variance in the observed delays: voice packets could still see delays as high as 40 ms and video packets as high as 100 ms. Such high delays could be tolerable if they affect only a small number of packets.
Next, we consider the on–off and LB-based design models. Observe that both approaches provide acceptable delays, both for the average and for the average plus twice the standard deviation. The values are smaller than the required delays, and hence a significant fraction of packets belonging to the voice and video sources will encounter less than the required delays. These models remain consistent for single, five, or ten sources and provide acceptable performance to individual sources. Note that the LB-based model keeps delays below the target for both voice and video, even though it requires less capacity than the on–off-based model. Observe that not only are the delays acceptable, but the variance is also quite small.
Based on these results, it can be argued that the LB-based model could be used to determine the required capacity for a source requesting an average-delay QoS. Even when allocating capacity for a small number of sources, it achieves multiplexing gain and provides the minimal capacity needed to meet the required delays.

Tuesday, February 7, 2012


Next, we present the architectural details for the IP transport link between the BS and the GW. The presence of multiple service classes with different QoS requirements running on a native IP link with constrained capacity necessitates prioritization and scheduling among the arriving packets. The Internet Engineering Task Force (IETF) has standardized the differentiated services (DiffServ) architecture for large-scale deployment of IP networks with QoS support. DiffServ provides three types of service to packets: expedited forwarding (EF), assured forwarding (AF), and best effort (BE). Applications requiring an absolute delay bound are mapped to the EF class. For providing an average delay bound, we use the AF class. We propose to use the proportional delay differentiation (PDD) model of Dovrolis et al. for providing different delays to the subclasses within the AF class. The PDD-based approach is unique in its simplicity and tractability. Recently, many real-time applications have been successfully mapped to the delay and loss differentiation parameters of PDD subclasses.
Next, we discuss the architectural details of a forwarding interface of the BS. For this purpose, and toward the discussions in later sections, consider multiple sources that want to send their traffic from BS (A) to GW (B), connected by a direct link.
Consider now the BS (A) and GW (B) routers. Assume that the concerned forwarding interface on BS A to GW B has been configured with EF subclasses and AF subclasses. At the interface, each source is mapped to the EF or AF class based on whether it requires an absolute or an average delay bound. The mapping to a subclass i within the class (EF or AF) is based on the application running at the source (voice, video, etc.). The IP link (A-B) has to support a set of such sources. In Figure 1, we present the architecture of the forwarding interface of BS A, supporting DiffServ. Let the capacity of the direct link connecting BS (A) to GW (B) be c. We assume that the bandwidth is distributed among the EF subclasses, the AF class, and the BE class using a weighted fair queuing (WFQ) scheduler, where a vector of weights (hi for EF subclass i, together with hAF and hBE) determines the shares used in scheduling. This ensures that each EF subclass i on the link gets no less bandwidth than ci = hi · c.

Figure 1: Forwarding interface of BS A.
Here, ci is the minimum bandwidth required for subclass i to provide the target delay di to the sources belonging to that subclass. Similarly, hAF captures the weight for the AF class, which translates into a minimal bandwidth of cAF = hAF · c.
cAF is the total bandwidth available to the AF class, to be shared among its subclasses. The bandwidth cBE available to the BE class can then be computed as the remainder, cBE = c − Σi ci − cAF.
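The capacity split above can be sketched in a few lines. The weight values are illustrative assumptions, not from the text; only the relations ci = hi · c, cAF = hAF · c, and cBE = c − Σi ci − cAF are taken from the description:

```python
# Hedged sketch: splitting link capacity c among EF subclasses, the AF
# class, and best effort according to WFQ weights. The h values below
# are assumed for illustration.

def split_capacity(c, ef_weights, h_af):
    c_ef = [h * c for h in ef_weights]   # minimum bandwidth per EF subclass
    c_af = h_af * c                      # aggregate AF bandwidth (shared via PDD)
    c_be = c - sum(c_ef) - c_af          # remainder goes to best effort
    return c_ef, c_af, c_be

# Illustrative: 100 Mbps link, two EF subclasses.
c_ef, c_af, c_be = split_capacity(100.0, ef_weights=[0.2, 0.1], h_af=0.5)
# c_ef = [20.0, 10.0], c_af = 50.0, c_be = 20.0 (Mbps)
```

In practice the weights would be chosen so that each ci meets its subclass's target delay di, which is the dimensioning problem addressed by the design models discussed earlier.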
Multiple sources belonging to the same subclass (EF or AF) at BS A are placed in the queue for that subclass on a first-come-first-serve basis. Let the set of sources belonging to the ith EF subclass be Si; then every source s ∈ Si has an absolute delay requirement ds. Similarly, for the ith AF subclass, every source s ∈ Si has an average delay requirement ds. A source belonging to the AF or EF class has an average arrival rate of rs. When generated by an on–off source model, it has a peak rate of Rs and an on period of average length Is. Such a source can be effectively shaped by an LB filter with parameters (σs, ρs), where ρs is the rate parameter and σs is the maximum allowed burst size of the LB filter. To ensure low losses, it is advisable to have ρs > 1.1rs and a high value of σs.
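The (σs, ρs) LB constraint can be sketched as a token-bucket conformance check: a trace conforms if no packet ever needs more than the σs-byte bucket refilled at rate ρs. The function name, trace, and parameter values below are illustrative assumptions:

```python
# Hedged sketch: checking whether a packet trace conforms to an LB
# (leaky-bucket) filter with parameters (sigma, rho), in bytes and
# bytes per second. Trace values are illustrative.

def lb_conformant(trace, sigma, rho):
    """trace: time-ordered list of (arrival_time_s, size_bytes)."""
    tokens, last_t = sigma, 0.0
    for t, size in trace:
        # Refill tokens at rate rho, capped at the bucket depth sigma.
        tokens = min(sigma, tokens + rho * (t - last_t))
        last_t = t
        if size > tokens:
            return False          # burst exceeds the bucket: non-conformant
        tokens -= size
    return True

trace = [(0.0, 500), (0.01, 500), (0.02, 500)]
ok = lb_conformant(trace, sigma=1000, rho=50000)   # generous parameters
```

A tighter filter, e.g. `lb_conformant(trace, sigma=600, rho=10000)`, rejects the same trace, which is why the text recommends ρs > 1.1rs and a generous σs to keep shaping losses low.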
Observe that the bandwidth allocated to the AF class must sometimes be shared between the subclasses such that each subclass meets its target delay requirement. This is done using PDD scheduling between the subclasses, where the value of the differentiation parameter δi determines the extent of differentiation. Furthermore, each AF subclass can have a per-hop delay requirement, derived from an end-to-end delay requirement, for the concerned hop. Providing hop-by-hop delay bounds allows greater flexibility and better options for mapping sources to subclasses.
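PDD between AF subclasses is commonly approximated by a waiting-time-priority rule: at each departure, serve the head-of-line packet with the largest waiting time normalized by its class's δi, so average delays tend toward the ratios di : dj = δi : δj. This is a hedged sketch of that idea, not necessarily the exact scheduler used here; queue contents and δ values are illustrative:

```python
from collections import deque

# Hedged sketch of waiting-time priority (WTP), an approximation of PDD:
# serve the head-of-line packet with the largest (now - arrival) / delta_i,
# so classes with smaller delta get proportionally smaller delays.

def wtp_pick(queues, deltas, now):
    """queues: list of deques of arrival times; returns index to serve."""
    best, best_prio = None, -1.0
    for i, q in enumerate(queues):
        if q:
            prio = (now - q[0]) / deltas[i]
            if prio > best_prio:
                best, best_prio = i, prio
    return best

queues = [deque([0.0]), deque([0.5])]   # class 0's head packet waited longer
choice = wtp_pick(queues, deltas=[1.0, 1.0], now=1.0)   # equal deltas: serve 0
```

With unequal deltas, e.g. `deltas=[4.0, 1.0]`, the same state favors class 1, since class 0 tolerates four times the delay.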

Sunday, February 5, 2012


In Figure 1, we present the overall network architecture of a WiMAX network. The network can be logically partitioned into three components: user terminals, the ASN, and the CSN. User terminals, the data origination points, may use fixed, mobile, or portable WiMAX technology; all three variations can be supported using a common air interface. The ASN spans the BS and the ASN-GW. The BS receives the transmitted signal, processes it, converts it into IP packets, and sends them to the GW on the outgoing IP transport link. The GW receives the packets, determines the destination on the network side, and forwards them. The BS and GW are connected to each other using IP transport. Typical implementations have the BS located in the field/coverage area, while the GW is centrally located in a switch center; the IP link between the BS and GW therefore forms the transport backhaul network. The CSN contains many different commercial off-the-shelf (COTS) components, which provide connectivity services to WiMAX subscribers. Authentication, authorization, and accounting (AAA) servers, the mobile IP home agent (MIP HA), IP multimedia services (IMS), content services, etc., provide support for seamless services to subscribers. AAA servers ensure that a user is uniquely identified and authenticated as a legitimate customer. The MIP HA handles roaming across IP networks and ensures accurate routing of data packets. Call-processing services are provided by the IMS entity. Billing and operational support systems help in managing the overall network.

Figure 1: Logical network architecture of a WiMAX network.
In Figure 2, we present a typical implementation of a WiMAX network in a market. For example, say a carrier plans to deploy a WiMAX network in the Washington D.C. market. Typically, more than 100 BSs would connect to a GW location; based on the anticipated traffic, each GW location might require a cluster of servers providing the GW functions. Each IP transport link would be leased from the local carrier and provisioned. Based on cost points and required capacity, the carrier can choose to directly lease a TDM segment, an Ethernet link, fiber connectivity, etc. Components of the CSN located at each switch center might also be implemented using clusters and would have enough capacity to support the entire market. Switch centers could be connected to each other using a high-speed IP network running on an OC-192 (or higher) SONET ring leased from the local exchange carrier. An actual network would also include connectivity to other markets, trunking with the public switched telephone network (PSTN) via the end office (EO), tandem connections with other wireless carriers, etc.

Figure 2: Physical network architecture of a WiMAX network.
For most WiMAX networks, it is unlikely that carriers would provision the IP transport based on the peak capacity of the WiMAX air interface. According to the WiMAX Forum, an air interface built on a 10 MHz channel with 2 × 2 MIMO can support a peak downlink rate of 63 Mbps and a peak uplink rate of 28 Mbps per sector. Assuming three sectors per BS, this would translate into close to 200 Mbps of backhaul transport for each BS. When the symbols are shared 3:1 between DL and UL, it could provide data rates of 46 Mbps DL and 8 Mbps UL per sector. Even then, it would require about 150 Mbps of capacity between the BS and the GW. Such a requirement would lead to unmanageable backhaul costs, which might become a roadblock to large-scale adoption of WiMAX technology.
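The back-of-envelope backhaul figures above can be reproduced from the quoted per-sector rates. How the totals are combined is our reading of the text's arithmetic: TDD means DL and UL share the air interface, so the "close to 200 Mbps" figure appears to count the DL peak alone, while the "about 150 Mbps" figure sums the 3:1-split DL and UL rates:

```python
# Reproducing the back-of-envelope backhaul estimates from the text.
# Assumption: in the peak case the DL peak dominates (TDD link shared),
# while in the 3:1 case DL and UL rates are summed per sector.

sectors = 3
peak_dl, peak_ul = 63, 28              # Mbps per sector (10 MHz, 2x2 MIMO)
dl_3to1, ul_3to1 = 46, 8               # Mbps per sector at a 3:1 symbol split

backhaul_peak = sectors * peak_dl               # 189 Mbps ("close to 200")
backhaul_3to1 = sectors * (dl_3to1 + ul_3to1)   # 162 Mbps ("about 150")
```

Either way, the air-interface-driven estimate is an order of magnitude above the anticipated early traffic, which is the argument for demand-based provisioning in the next paragraph.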
Our contention is that service providers will only provision based on anticipated demand. For example, they might provision just enough capacity for the anticipated number of voice and video calls, plus a few more Mbps for best effort. This would ensure that the initial cost of building the network is manageable; as the user base grows, more backhaul can be added to maintain acceptable QoS for the subscribers.