Tuesday, May 31, 2011


The big challenge for broadband wireless system design is to strike the right balance between capacity and coverage, offering good quality and reliability at a reasonable cost. It is important to look at system spectral efficiency more broadly, to include the notion of coverage area. Results presented in previous sections have demonstrated the high potential benefits of relay deployment with radio resource sharing, in terms of interference, MIMO combination, and multiuser transmission. To implement radio resource reuse and achieve highly efficient relay deployments, appropriate frequency reuse and multiuser access strategies are required. Relay systems must be based on a topology that fully exploits effective resource assignment based on the spatial separation of nodes. In this section, we propose directional distributed relaying for highly efficient multiuser transmission with reduced demands on radio resources.
Figure 1 depicts the directional distributed relaying architecture. It is based on a paired radio resource transmission scheme, which makes it possible to assign, on average, one radio resource to one user (or one group of users), even with multihop relay. The radio resource can be defined either in frequency (e.g., subcarriers in an OFDMA symbol) or in time (e.g., OFDMA time slots). Transmissions within the BS coverage are the same as in the IEEE 802.16e standard. For relay links, paired transmissions are applied, where the BS forms two directional beams, or uses two sector antennas, to communicate with RS1 and RS2 simultaneously. A pair of radio resources is required: f1 and f2. The first radio resource (f1) is applied to the BS–RS1 link and also to the RS2–MS links (in the RS2 coverage), while the second resource (f2) is applied to the BS–RS2 link and also to the RS1–MS links (in the RS1 coverage). Radio resources are shared between the RSs and MSs, and each end user employs a single pair of radio resources on average.
Figure 1: Directional distributed relaying with paired radio resource.
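As a minimal sketch of the pairing just described (the link labels and the lookup structure are our own illustration, not identifiers defined in 802.16), the resource-to-link assignment reduces to:

```python
# Paired radio resource assignment from the text: each of f1/f2 serves
# one relay link plus the access links of the *other* RS's coverage.
# Link labels are illustrative, not standard identifiers.
PAIRED_ASSIGNMENT = {
    "BS->RS1": "f1",  # relay link to RS1
    "RS2->MS": "f1",  # reused on access links inside RS2's coverage
    "BS->RS2": "f2",  # relay link to RS2
    "RS1->MS": "f2",  # reused on access links inside RS1's coverage
}

def resource_for(link: str) -> str:
    """Return the radio resource (subcarrier set or time slot) of a link."""
    return PAIRED_ASSIGNMENT[link]

# Each resource is reused once spatially, so an end user served over two
# hops still consumes only one resource pair on average.
```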

Using the sharing scheme outlined above, the interference can be controlled at the BS and RS nodes. In this relay configuration there are only two sets of interference, as also illustrated in Figure 1. The interference between the BS and the MS groups (I1 and I2) can be detected and controlled by the BS. First, the BS could employ an adaptive array to exploit the spatial separation of the groups. Second, since the power received by each MS in each MS group is known to the BS, the BS can apply interference avoidance between the two groups based on the measured signal to interference plus noise ratio (SINR) and power control, where the transmit powers of the two RSs are adjusted to balance the SINRs according to the service requirement. Furthermore, in this scenario the expected level of interference is small because the BS connects to the MSs through a relay, which means the SNR on the relay link will be much higher than the SNR on the direct access link. Interference between RSs (I3 and I4) can be reduced by array processing (including the use of sector antennas) at the RSs. Interference measurement for efficient resource assignment can be carried out during the neighborhood discovery procedure. To achieve high levels of SINR (e.g., 10–25 dB), array processing, including the use of sector antennas at the RS, is desirable.
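The SINR balancing step can be sketched numerically. Below, a toy fixed-point iteration (our own construction, with made-up linear channel gains `g11`, `g12`, `g21`, `g22`) adjusts RS2's transmit power until the two MS groups observe equal SINR:

```python
import math

# Toy fixed-point power control for the SINR balancing described above
# (our own construction; gains g_xy and noise are illustrative linear
# values, with g_xy = gain from RS x toward MS group y).
def sinr(p_sig, p_int, noise):
    return p_sig / (p_int + noise)

def balance_rs_powers(p1, p2, g11, g12, g21, g22, noise, iters=100):
    """Hold RS1's power p1 fixed and adjust RS2's power p2 until both
    MS groups observe (approximately) equal SINR."""
    for _ in range(iters):
        s1 = sinr(p1 * g11, p2 * g21, noise)  # group 1: RS2 interferes
        s2 = sinr(p2 * g22, p1 * g12, noise)  # group 2: RS1 interferes
        p2 *= math.sqrt(s1 / s2)              # damped step toward balance
    return p2
```

In a symmetric setting (equal direct gains, equal cross gains) the iteration settles at equal RS powers, as expected; in asymmetric settings it trades power for SINR balance in the same way.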
The proposed topology is fully compatible with the existing 802.16e standard, and no modifications are required at the MSs. Alternative deployment topologies are also possible based on the same concept, such as using a single RS to cover a coverage hole. In such cases, radio resource sharing is performed between the RS and its BS. Statistical studies of such deployments can be complicated, as the performance depends entirely on the deployment scenario; however, evaluation is much more feasible in a realistic application environment by employing real channel measurements and ray tracing.

Wednesday, May 18, 2011


WiMAX technology encompasses broadband wireless equipment that is designed in compliance with the IEEE 802.16 standard and certified by the WiMAX Forum. The IEEE 802.16e standard introduces several differences and enhancements over the 802.16-2004 standard to support mobile subscribers.
Scalable orthogonal frequency division multiple access (SOFDMA): Introduced in the 802.16e amendment over fixed WiMAX’s OFDM, SOFDMA supports scalable channel bandwidths from 1.25 to 20 MHz, using quadrature amplitude modulation (QAM; 16QAM or 64QAM) or quaternary phase shift keying (QPSK) modulation. SOFDMA enables additional resource allocation flexibility and adaptively optimized multiuser performance.
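The scalability can be illustrated with the nominal SOFDMA numerology: the subcarrier spacing stays fixed (about 10.94 kHz in the mobile WiMAX system profile) while the FFT size scales with the channel bandwidth. The short sketch below uses those nominal figures; treat them as approximate.

```python
# Nominal mobile WiMAX SOFDMA numerology: fixed subcarrier spacing,
# FFT size scaling with channel bandwidth (values per the WiMAX Forum
# mobile system profile; treat the exact figures as nominal).
SUBCARRIER_SPACING_HZ = 10.94e3
FFT_SIZE = {1.25e6: 128, 5e6: 512, 10e6: 1024, 20e6: 2048}

for bw, nfft in sorted(FFT_SIZE.items()):
    # sampling rate = FFT size * spacing (roughly 28/25 oversampling)
    fs = nfft * SUBCARRIER_SPACING_HZ
    print(f"{bw / 1e6:5.2f} MHz -> FFT {nfft:4d}, sampling rate {fs / 1e6:.2f} MHz")
```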
Advanced antenna technologies: MIMO PHY layer techniques have the potential to significantly increase bandwidth efficiency based on the premise that operation occurs in a rich scattering multipath environment. 802.16e defines optional support for such advanced antenna technologies. Major advantages of MIMO include diversity gains, multiplexing gains, interference suppression, and array gains. The inclusion of MIMO techniques alongside flexible subchannelization and adaptive modulation and coding (AMC) enables mobile WiMAX technology to improve system coverage and capacity.
In addition, 802.16e introduces many advanced features for performance enhancement, such as handover support, quality-of-service (QoS) support, and energy-saving mechanisms for handheld devices.
Multihop relay: Another milestone in the development of WiMAX was the introduction of multihop relay through the 802.16j multihop relay project, which targets OFDMA PHY layer and Medium Access Control (MAC) layer enhancements for licensed bands to enable the operation of RSs. The objectives of 802.16j are to enhance coverage, throughput, and system capacity by specifying 802.16 multihop relay capabilities and the functionalities of interoperable RSs and BSs. The work focused on several technical topics, including relay concepts, frame structure, network entry, bandwidth request, security, mobility management, routing, path management, interference control, and radio resource management.
The concept of multihop relaying is already well developed in the fixed telecommunications world. Microwave radio relays have been widely used to transmit digital and analog signals over long distances, with examples including telephony and broadcast television. With the evolution of mobile networks, wireless relays have been further developed for cellular transmission. Analog repeaters are sometimes used in cellular systems to extend coverage into regions that are uncovered by the standard network. Digital relaying for cellular applications was initially investigated to enhance coverage for delay-insensitive traffic. More recently, Streaming21 of the United States announced the availability of a 3G relay server in mid-2006, a carrier-grade mobile streaming solution that allows mobile operators and content providers to deliver multimedia content to mobile phone subscribers over GPRS and 3G networks. Now, the relay concept is being further developed in 802.16j to support both digital repeater and decode-and-forward (DF) relaying with various techniques, such as cooperative relaying, intelligent radio resource management (RRM) for radio resource reuse, smart antennas on RSs for direction-controlled transmission, and relay grouping.
While the IEEE and the WiMAX Forum strive to address the technological challenges of highly mobile, NLOS WiMAX services, large throughput, and wide coverage, commercial service providers face additional operational challenges, including spectrum limitations, security vulnerabilities, and QoS implementation. Many researchers have been working toward mechanisms that provide highly efficient mobile WiMAX, which is the core topic of this chapter. Further development is expected in a newly approved project from the IEEE, namely 802.16m (Advanced Air Interface), intended to meet IMT-Advanced requirements. It targets data rates of 100 Mbps for mobile applications and 1 Gbps for fixed applications, with cellular, macrocell, and microcell coverage, and currently no restrictions on the RF bandwidth.

Saturday, May 14, 2011

Relay-Assisted Mobile WiMAX

There are two major technological and social trends significantly changing people’s lives: wireless communications and the Internet. Leveraging these two trends, worldwide interoperability for microwave access (WiMAX) creates a new utility enabling the development of new services and new Internet business models. In particular, the full potential of WiMAX will be realized when it is used for innovative nomadic and mobile broadband applications. With the finalization of the IEEE 802.16e standard and upcoming test and certification of WiMAX products, mobile broadband services are becoming a reality. The 802.16e standard provides broadband wireless Internet Protocol (IP) access to support a variety of services (such as voice, data, and multimedia) on virtually any device. The operation of WiMAX is currently limited to a number of licensed frequency bands below 6 GHz for reliably supporting non-line-of-sight (NLoS) operations. The 802.16e standard has also become a part of the IMT-2000 family.
WiMAX is often quoted as combining long transmission ranges (e.g., in a macrocell) with high data capacities (multi-megabit-per-second throughput to end users). Power and spectral efficiency are key to a successful WiMAX deployment. The mobile WiMAX physical (PHY) layer is based on scalable orthogonal frequency division multiple access (SOFDMA) technology, which enables flexible channelization. The new technologies employed by mobile WiMAX result in higher data transfer rates, simpler mobility management, and lower infrastructure costs compared to current 3G systems. The underlying scenario for mobile WiMAX is an outdoor environment with multiple users within a cell. Hence scheduling (allowing a fair and efficient distribution of resources) and interference (both intracell and intercell) become important issues.
Radio relaying can address many of the challenges faced in the deployment of mobile WiMAX. The relay system targets one of the biggest challenges in next generation mobile wireless access (MWA), namely the provision of high data rate coverage in a cost-effective and ubiquitous manner. Within a multihop relay network, a multihop link can be formed between the base station (BS) and a distant mobile station (MS) using a number of intermediate relay stations (RSs). To avoid interference between the relay links, the simplest approach is to assign unique radio resources to each link. Using this approach, however, multihop users will rapidly drain the system of valuable radio resources. The wireless medium is a precious infrastructure commodity, and the situation is especially acute at lower frequencies (where radio signal propagation characteristics are more favorable), when a significant amount of radio spectrum is needed to provide ubiquitous wireless broadband connectivity. Spectrum predictions for future cellular networks indicate large shortfalls by 2010, if not before. Hence the above approach can only be used for a very small number of very high-value MSs, or in applications where spectral efficiency is not vital, such as military or disaster relief communication networks. Given its commercial applications, relaying in the context of WiMAX must conserve radio spectrum, which emphasizes the need for high spectral efficiency. Enhancing radio resource efficiency is a key challenge in competitive business development.
We focus on the efficiency of relay transmission, presenting leading-edge techniques and merging theoretical analysis with practical application for a mobile WiMAX system with multihop relay. Directional distributed relaying is then proposed to achieve high data throughput with reduced demands on radio resources.

Wednesday, May 11, 2011


The delay-distortion performance of the position-value based resource allocation is compared in this section with traditional layer-based resource allocation in mobile WiMAX. The parameters of the simulation study are as follows: default time frame duration of 0.004 s, default channel BER of 1e-4, a 6-byte MAC header, and a 2-byte fragmentation subheader. The frequency bandwidth is 20 MHz, and 64-QAM with a 3/4 coding rate and a 1/4 cyclic prefix is used.
Figures 1 and 2 show the trade-off between average loss ratio and expected delay for delivering a typical 1400-byte SDU using different fragmentation thresholds and SR-ARQ retry limit strategies. From these figures it is clear that with a larger number of fragments (i.e., a lower fragmentation threshold and correspondingly shorter PDUs) and a higher SR-ARQ retry limit, the SDU packet loss ratio decreases considerably. However, the penalty for this decrease in packet loss ratio is a prolonged delay for successful SDU delivery, mainly because of retransmission latency: since mobile WiMAX scheduling is basically TDMA based, a retransmission has to be reissued in the next frame duration. Thus, quality and latency form a trade-off that can be fine-tuned when optimizing mobile WiMAX transmission strategies.

Figure 1: SDU loss ratio for an application layer packet with 1400-byte length, at channel BER 1e-4.

Figure 2: SDU delay expectation for an application layer packet with 1400-byte length, at channel BER 1e-4.
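The trade-off in these figures can be approximated analytically. The sketch below is our own simplified model (independent bit errors, equal-size fragments, one transmission attempt per frame duration), not the chapter's simulator; it reproduces the qualitative behavior: more fragments and retries lower the loss ratio, while retries stretch the expected delay.

```python
# Simplified analytical model (our own, not from the chapter's simulator):
# independent bit errors at rate `ber`, an SDU split into `n_frag` equal
# PDUs, per-PDU SR-ARQ with `retry_limit` retries, one attempt per frame.
def pdu_error_prob(ber: float, pdu_bytes: int) -> float:
    return 1.0 - (1.0 - ber) ** (8 * pdu_bytes)

def sdu_loss_prob(ber, sdu_bytes, n_frag, retry_limit):
    p = pdu_error_prob(ber, sdu_bytes // n_frag)
    p_lost = p ** (1 + retry_limit)            # all attempts of one PDU fail
    return 1.0 - (1.0 - p_lost) ** n_frag      # SDU lost if any PDU is lost

def expected_delay(ber, sdu_bytes, n_frag, retry_limit, frame_s=0.004):
    """Expected delivery delay, conditioned on success within the budget."""
    p = pdu_error_prob(ber, sdu_bytes // n_frag)
    rounds = 0.0
    for k in range(1, retry_limit + 2):
        done_k = (1.0 - p ** k) ** n_frag      # all PDUs done within k rounds
        done_prev = (1.0 - p ** (k - 1)) ** n_frag
        rounds += k * (done_k - done_prev)
    p_success = (1.0 - p ** (retry_limit + 1)) ** n_frag
    return frame_s * rounds / p_success
```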
Figure 3 depicts the delay-distortion performance comparison of the position-value based and layer-based approaches. For both approaches, image quality under loose delay constraints is better than under strict delay constraints. This is because with loose delay constraints, more network resources, especially SR-ARQ retransmissions, can be allocated to the code stream, which improves the packet delivery ratio and thus the picture quality considerably. The position-value based approach achieves better delay-distortion performance than the layer-based approach under the same latency budget constraint. Layer-based UEP approaches allocate resources according to the importance of different layers in the code stream: important layers containing coarse image information are more effectively protected, while unimportant layers containing fine image details are less protected. The position-oriented approach allocates resources more efficiently by considering not only the unequal importance of different layers, but also the unequal importance of position and value information within each layer. With position-value based resource allocation, the p-segments, especially those in the coarse image quality layers, are more effectively protected to improve image quality, while the v-segments, especially those in the fine-detail enhancement layers, are less protected to reduce the delay penalty. From this figure we can see that at a channel BER of 1e-4, the position-value based approach shows a quality improvement of up to 7–8 dB in PSNR over the traditional layer-based approach under the same delay constraint. In the worst cases, i.e., with ultra-tight or ultra-loose delay constraints, the position-based approach performs similarly to the layer-based approach, because there are either insufficient or excessive network resources for allocation. In those situations, the performance is confined not by the efficiency of resource allocation, but by the amount of network resource itself.

Figure 3: Image quality and delay-bound for different resource allocation schemes at BER 1e-4.

Saturday, May 7, 2011


Network resources can be dynamically adjusted and adapted in the MAC or PHY layers of mobile WiMAX, for example, the physical layer FEC strategy, modulation scheme, transmission power control, link layer scheduling strategy, fragmentation threshold, and ARQ retry limit. Here, we focus our discussion on the resource management strategies that can be easily applied to mobile WiMAX networks. In WiMAX networks, each higher layer SDU usually consists of multiple link layer Protocol Data Units (PDUs). Each SDU of a traffic flow, for example, a JPEG2000 coded image stream, is dispatched to a specific 802.16e connection by the SDU classifier in the convergence sublayer. The connection is associated with a set of QoS requirement parameters and the delay budget Tmax for transmitting the whole JPEG2000 image stream. We specifically consider transmission strategy optimization within a single IEEE 802.16e connection, where the overhead of managing multiple connections is effectively obviated, and we examine the MAC layer delay performance of different fragmentation and retransmission strategies, which can be applied seamlessly to mobile WiMAX without violating what has already been defined in the IEEE 802.16e standard. In mobile WiMAX and the IEEE 802.16e standard, Selective Repeat based Automatic Repeat reQuest (SR-ARQ) is defined as the default ARQ strategy for optional performance enhancement. The characteristics of SR-ARQ for mobile WiMAX are summarized as follows:
  1. SR-ARQ in mobile WiMAX is enabled on a per-connection basis.
  2. A WiMAX connection either has SR-ARQ enabled or disabled; it cannot mix SR-ARQ and non-SR-ARQ modes.
  3. During the connection establishment process, SR-ARQ is negotiated using dynamic service addition (DSA) and dynamic service change (DSC) messages. The fragmentation threshold ARQ_BLOCK_SIZE is negotiated, and the smaller of the values provided by the BS and SS is chosen for the SR-ARQ enabled connection between them.
  4. The SR-ARQ feedback bitmap is sent either in a MAC management message via the basic management connection between the BS and SS, or in a piggyback message via the reverse link of the data connection.
  5. The SR-ARQ feedback bitmap cannot be fragmented.
The SR-ARQ operation in mobile WiMAX is described in Figure 1. Without loss of generality, we use downlink transmission in TDD mode as the example to describe the SR-ARQ process. The bandwidth resource is divided into fixed-size time frames with duration T, and each frame duration is further divided into downlink and uplink subframes with an adaptive boundary separated by a transmit/receive transition gap (TTG). The time frames themselves are separated by a receive/transmit transition gap (RTG). The downlink subframe is composed of preambles, DL_MAP, UL_MAP, DCD, and UCD control messages, as well as burst transmission opportunities allocated to each SS. The uplink subframe is composed of ranging and bandwidth request slots, as well as transmission opportunity grants for each SS. In each upper layer SDU transmission, the SDU is fragmented into fixed-size SR-ARQ blocks, and these blocks are dispatched into a specific connection queue. During the downlink transmission opportunity to the destination SS, these SR-ARQ blocks are transmitted over the time-varying and error-prone wireless channel, and some of them may be lost due to bit errors. It is worth noting that the chance of collision is minimal in time-slot-scheduled WiMAX networks; the major cause of packet loss is physical layer bit or symbol errors. The receiving SS responds with an SR-ARQ ACK bitmap to report the receiving status of the SDU during the uplink transmission opportunity in the same frame duration, and erroneous or lost blocks are negatively acknowledged. In the next frame duration T, the BS retransmits only the negatively acknowledged blocks, together with new data blocks, until the SDU is successfully delivered to the SS.

Figure 1: SR-ARQ operations for mobile WiMAX. The detailed concept is explained in the IEEE 802.16e standard.
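The per-frame SR-ARQ loop described above can be sketched as a small Monte Carlo simulation (a toy model with an i.i.d. per-block error probability, not an implementation of the standard's state machines):

```python
import random

# Toy Monte Carlo model of the per-frame SR-ARQ loop (i.i.d. block error
# probability; not an implementation of the 802.16e state machines).
def transmit_sdu(n_blocks, block_err_prob, retry_limit, frame_s=0.004, seed=1):
    """Return (delivered, delay_seconds) for one SDU of n_blocks ARQ blocks."""
    rng = random.Random(seed)
    pending = set(range(n_blocks))              # blocks not yet ACKed
    for attempt in range(1 + retry_limit):
        # all pending blocks are (re)sent in this frame's downlink burst;
        # the ACK bitmap in the same frame marks the failures
        pending = {b for b in pending if rng.random() < block_err_prob}
        if not pending:
            return True, (attempt + 1) * frame_s
    return False, (1 + retry_limit) * frame_s   # retry budget exhausted
```

Averaging over many seeds yields the loss/delay curves of the previous section; the deterministic extremes (error-free and always-failing channels) already show the frame-quantized delay.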

Wednesday, May 4, 2011


Because image and video frames are dominated by a mixture of stationary low-frequency backgrounds and transient high-frequency edges, a wavelet transform is very efficient at capturing the bulk of the image energy in a fraction of the coefficients, which facilitates compression. Wavelet-based image compression techniques such as zerotree coding or EBCOT produce excellent scalability features and rate-distortion (R-D) performance for robust multimedia transmission over wireless channels. The wavelet decomposition is illustrated in Figure 1, where the energy concentration in the low-frequency bands facilitates the construction of embedded code streams. The embedded nature of the compressed code stream provides the basis for scalable video/image coding by fine-tuning the R-D trade-off. The source encoding can be stopped as soon as a target bit-rate is met, or the decoding process can be stopped at any desired lower bit-rate by truncating the code stream. Typically, the code stream is composed of different quality layers in descending order of importance, with the base layer providing a rough image and the enhancement layers providing quality refinement. Different layers in the code stream have significantly different perceptual importance to end users: losing the base layer may cause serious distortion in the reconstructed picture, while losing quality enhancement layers can still yield acceptable picture quality.

Figure 1: Original lena image (128*128 pixels, 8 bpp), wavelet decomposition and reconstruction.
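The energy compaction that makes the embedded code stream possible is easy to demonstrate with a one-level 1-D Haar transform (a minimal stand-in for the 2-D decomposition in the figure):

```python
# One-level 1-D Haar transform: a minimal stand-in for the 2-D wavelet
# decomposition, showing that a smooth signal's energy concentrates in
# the low-pass band (the basis of embedded, quality-ordered streams).
def haar_1d(x):
    s = 2 ** -0.5  # orthonormal scaling, so energy is preserved
    low = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
    high = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
    return low, high

signal = [float(i) for i in range(16)]  # smooth ramp as a toy "image row"
low, high = haar_1d(signal)
energy = lambda v: sum(c * c for c in v)
low_share = energy(low) / (energy(low) + energy(high))  # ≈ 0.997 here
```

For this smooth ramp, over 99% of the energy lands in the eight low-pass coefficients, so truncating the high-pass band loses little; edges in real images put energy in a few high-pass coefficients, which is exactly what zerotree and EBCOT coders exploit.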


MPEG-4, introduced in 1998, was designated by ISO/IEC MPEG as the formal standard ISO/IEC 14496 and was primarily aimed at low bit-rate video applications over networks. The layer-based quality enhancement concept has been widely applied to scalable video coding (SVC) in MPEG-4 Part 10 H.264/advanced video coding (AVC) and to progressive image coding in JPEG2000. This flexibility in source coding bit-rate has laid the foundation for a number of emerging multimedia applications over bandwidth-limited wireless networks, such as IPTV, video on demand, and online video gaming. Both SVC in MPEG-4 and quality progression in JPEG2000 provide a considerable advantage for error-robust multimedia streaming over time-varying wireless channels, especially in mobile environments (e.g., mobile WiMAX networks). Without loss of generality, we use MPEG-4 video coding in this section and JPEG2000 in Section 1 as multimedia coding examples.
The video sources are coded into a number of quality layers via SVC, starting with rough pictures at low bit-rates, followed by higher layer refinement data for quality enhancement at higher bit-rates. The rough pictures in the base layers are much more important in terms of perception than the refinement data in the enhancement layers, and therefore deserve more protection during transmission over wireless channels; the refinement data in the enhancement layers can be discarded during transmission when bandwidth is limited. For each wireless mobile terminal in a mobile WiMAX network, for example a moving vehicle, the available transmission bandwidth fluctuates with the location of the vehicle and the corresponding path loss, as well as with channel errors. With SVC applied to each video stream on each wireless terminal, the actual traffic pumped into the WiMAX network from the application layer can be adaptively controlled in a rate-distortion optimized manner: if the available bandwidth is low due to a high channel error probability, only the base layers of the rough pictures are transmitted; when the channel condition improves and more bandwidth becomes available, both base layers and refinement layers are transmitted to improve the perceptual quality.
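This adaptive layer selection can be sketched in a few lines. The layer rates below are made-up illustrative numbers, not values from any standard:

```python
# Hedged sketch of SVC rate adaptation: send as many layers as the
# currently available rate allows, base layer first. The layer rates
# are made-up illustrative numbers, not from any standard.
LAYER_RATES_KBPS = [128, 256, 512, 1024]  # base + 3 enhancement layers

def layers_to_send(available_kbps: float) -> int:
    total, n = 0.0, 0
    for rate in LAYER_RATES_KBPS:
        if total + rate > available_kbps:
            break
        total += rate
        n += 1
    return n  # 0 means even the base layer does not fit
```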
Another important factor in multimedia streaming is inter-packet dependency. Typically, the dependency graph of multimedia packets is composed of packetized groups of pictures, and the multimedia packet coding is correlated, involving complex dependencies among the packets. If a set of image/video packets is received, only the packets whose ancestors have all been received can be decoded. Figure 2 illustrates the typical code stream dependency for a layer-based embedded media stream. The inter-packet dependency provides opportunities for resource allocation and adaptation in multimedia streaming over WiMAX, where packets with more descendants are much more important than their descendants. For example, for the layer-dependent media packets in Figure 2, each packet is associated with a distortion reduction value denoting the quality gain if this packet is successfully received and decoded. If the packets in layer 2 are to contribute to the decoded media, all the packets in layers 0 and 1 must be received and decoded successfully; otherwise the packets in layer 2 are useless for decoding even if they are transmitted without a single bit error.

Layer Based Multimedia Decoding:
Step 1:
           CumulativeSuccess=TRUE; iteration=0;
Step 2:
           While ( iteration < number of layers ) {
                     Decode the layer (denoted by iteration).
                     if (decoding successful)
                               Then CumulativeSuccess=TRUE; iteration=iteration+1;
                               Else CumulativeSuccess=FALSE; break;
           }
Step 3:
           Output the decoded stream up to layer iteration.

IBP Based Video Decoding:
Step 1:
           Decode the I frame. If it fails, return;
Step 2:
           Find and decode the next P frame.
           If it fails, go to Step 4; Otherwise pPFrame = the found P frame.
Step 3:
           While ( pPFrame != NULL ) {
                     Decode the B frames ahead of pPFrame;
                     Find and decode the next P frame.
                     If decoding fails, break;
                     if ( the current P frame == the last P frame ) pPFrame = NULL; Else pPFrame = pPFrame->Next;
           }
Step 4:
           Output the decoded stream.

Figure 2: Typical packet dependency of layer based embedded media stream structure.
Based on the analysis of unequal importance and inter-packet dependency, UEP-based resource allocation strategies can be generalized for image/video streaming over wireless channels: network resource allocation and adaptation are applied to the media stream according to the distortion reduction (importance) of each packet and the inter-packet dependency. Ancestor packets with more dependent child packets are protected with more network resources, including stronger FEC capability, more robust modulation schemes, and higher ARQ retry limits; descendant packets with fewer dependent children are less protected to save communication resources.
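As a sketch of this generalized rule, the snippet below maps each packet's descendant count in the dependency graph to an ARQ retry limit (the adjacency map and the retry constants are illustrative):

```python
# Sketch of the generalized UEP rule: protection grows with the number
# of descendants a packet has in the dependency graph. The `children`
# adjacency map and the retry constants are illustrative.
def count_descendants(children, node):
    stack, seen = list(children.get(node, [])), set()
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(children.get(n, []))
    return len(seen)

def retry_limit(children, node, base=1, per_descendant=1, cap=4):
    return min(cap, base + per_descendant * count_descendants(children, node))

# Layered dependency as in Figure 2: layer 0 <- layer 1 <- layer 2
children = {"L0": ["L1"], "L1": ["L2"], "L2": []}
```

With this chain, the base layer (two descendants) ends up with the highest retry budget and the last enhancement layer (a leaf) with the lowest, mirroring the protection ordering described in the text.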


Besides layer-based quality scalability, wavelet-based image compression also produces shape and position information for the regions or objects in the picture, as well as lighting magnitude value information describing those regions or objects. Without loss of generality, we use wavelet-based image coding as an example of multimedia content. The shapes and regions of the objects in the picture are much more important than the lighting magnitude values of those objects. Errors in shape and region information lead to high distortion in the reconstructed images, while errors in pixel magnitudes are more tolerable during transmission and decoding. This is because the shape and region information affects the magnitude value information associated with those regions when the image is perceived by end users. Furthermore, the shape and region information can be translated into position information segments (p-segments) and the lighting magnitude information into value information segments (v-segments) by wavelet-based progressive compression in each quality layer. The p-segments denote how small-magnitude (insignificant) wavelet coefficients are clustered, while the v-segments denote the values of large-magnitude wavelet coefficients. Layers in the code stream represent the manner of quality improvement, while the p-segments and v-segments in each layer represent the data dependency. The p-segments and v-segments can be easily identified from zerotree-based or EBCOT-based code streams. The final code stream structure is composed of p-segments and v-segments in decreasing order of importance, as shown in Figure 3.

Figure 3: Code stream format for scalable quality layers and position-value separation in each layer.
We now describe how to separate p-segments and v-segments in each quality layer. Zerotree-based compression techniques generally involve a dominant coding pass to extract the tree structures, as well as a subordinate pass to refine the leaves on the tree. These coding passes are invoked layer by layer in a bit-plane progressive way. In the dominant pass of each zerotree bit-plane coding loop, a threshold δ, halved at each pass, is specified. A wavelet coefficient is encoded as a positive or negative significant pixel if its magnitude exceeds δ, with the sign determined by the sign of that coefficient. A coefficient is encoded as a zerotree root if its magnitude and all of its descendants' magnitudes are below δ and it is not itself a descendant of a previous tree root; otherwise the wavelet coefficient is encoded as an isolated zero. Because the positive and negative significant symbols, isolated zeros, and tree root symbols all contain tree structure information, they are put into the p-segment of the current bit-plane layer. The subordinate pass is then invoked for magnitude refinement: the magnitude bit of each positive or negative significant symbol is determined according to the threshold δ and is put into the v-segment of that bit-plane layer. Thus, p-segments and v-segments are formed layer by layer with the halving threshold δ. Because p-segments contain zerotree structures and v-segments contain magnitude values, incorrect symbols in p-segments cause subsequent bits to be misinterpreted in the decoding process, whereas incorrect bits in v-segments are more tolerable.
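A much-simplified sketch of this dominant/subordinate split is given below. It works on a flat coefficient list, so a plain '0' symbol stands in for the zerotree-root and isolated-zero symbols; a real zerotree coder also exploits the parent-child coefficient tree.

```python
# Toy dominant/subordinate split on a flat coefficient list. The "0"
# symbol stands in for the zerotree-root / isolated-zero symbols that a
# real EZW-style coder derives from the coefficient tree. The initial
# threshold is half the largest magnitude, halved at every bit-plane.
def pv_segments(coeffs, n_planes=4):
    thr = max(abs(c) for c in coeffs) / 2.0
    significant = [False] * len(coeffs)
    planes = []
    for _ in range(n_planes):
        p_seg, v_seg = [], []
        for i, c in enumerate(coeffs):
            if significant[i]:
                # subordinate pass: one refinement bit -> v-segment
                v_seg.append(1 if abs(c) % (2 * thr) >= thr else 0)
            elif abs(c) >= thr:
                # dominant pass: newly significant coefficient -> p-segment
                significant[i] = True
                p_seg.append("+" if c >= 0 else "-")
            else:
                p_seg.append("0")  # insignificant at this threshold
        planes.append((p_seg, v_seg))
        thr /= 2.0
    return planes
```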
The EBCOT-based JPEG2000 is a two-tiered wavelet coder, where embedded block coding resides in tier-1 and rate-distortion optimization resides in tier-2. Without loss of generality, we only discuss the coding process within one code block (tier-1), because the p-segment and v-segment separation, interacting with context formation (CF) and arithmetic coding (AC), resides in tier-1 intra-code-block coding, and the p-segments and v-segments in all other code blocks can be separated in the same way. Unlike the coefficient-by-coefficient coding of zerotree compression, the JPEG2000 tier-1 coder processes each code block bit-plane by bit-plane, from the most significant bit (MSB) to the least significant bit (LSB), after quantization. The p-segments and v-segments are likewise formed bit-plane by bit-plane in an embedded manner. In each bit-plane coding loop, each quantization sample bit is scanned and encoded in one of three passes: the significance propagation pass, the magnitude refinement pass, and the cleanup pass. In the significance propagation pass, if a sample bit is insignificant (a “0” bit in the current bit-plane) but has at least one immediately significant context neighbor (at least one “1” bit occurring in the current or a previous bit-plane), the zero coding (ZC) and sign coding (SC) primitives are invoked according to one of the 19 contexts defined in JPEG2000. The output codeword of this sample after ZC and SC is put into the p-segment of this bit-plane, because the coded sample determines the positions of neighboring significant coefficients; in other words, it determines the structure of the code stream. The p-segment of this bit-plane is thus partly formed after the significance propagation pass. In the following magnitude refinement pass, the significant sample bits are scanned and processed: if a sample is already significant, the magnitude refinement (MR) primitive is invoked according to the context of the significance states of its eight immediate neighbors.
The codeword after MR is put into the v-segment of this bit-plane because it denotes magnitude information only, containing no position information about the significant samples. After the magnitude refinement pass, the v-segment of this bit-plane is completely formed. In the final cleanup pass, all the samples left uncoded by the first two passes are coded by invoking the ZC and run-length coding (RLC) primitives according to each sample's context. The codewords from the cleanup pass are put into the p-segment of this bit-plane, because the positions of significant samples are determined by how insignificant wavelet coefficients are clustered. At this point, the p-segment of this bit-plane is also complete. The following bit-planes are then scanned, and all the p-segments and v-segments are formed bit-plane by bit-plane.
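The three-pass scan can be sketched as follows. This toy uses a 1-D neighborhood of two samples instead of JPEG2000's eight 2-D neighbors, and it only records which pass handles each sample bit (and hence whether its output would join the p-segment or the v-segment); the ZC/SC/MR/RLC context modeling and the arithmetic coder are omitted.

```python
# Toy version of tier-1's three passes on one bit-plane. 1-D neighbors
# stand in for JPEG2000's eight 2-D neighbors; ZC/SC/MR/RLC context
# modeling and the arithmetic coder are omitted -- we only record which
# pass handles each sample, i.e. p-segment vs v-segment membership.
def classify_bitplane(samples, plane, significant):
    """samples: quantized magnitudes; plane: bit position (MSB first);
    significant: set of indices already significant from higher
    bit-planes (updated in place, following the embedded coding order)."""
    def has_sig_neighbor(i):
        return (i - 1 in significant) or (i + 1 in significant)

    passes = {"sig_prop": [], "mag_ref": [], "cleanup": []}
    newly = set()
    for i, s in enumerate(samples):
        bit = (s >> plane) & 1
        if i in significant:
            passes["mag_ref"].append(i)      # MR output -> v-segment
        elif has_sig_neighbor(i):
            passes["sig_prop"].append(i)     # ZC/SC output -> p-segment
            if bit:
                newly.add(i)
        else:
            passes["cleanup"].append(i)      # ZC/RLC output -> p-segment
            if bit:
                newly.add(i)
    significant |= newly
    return passes
```

Running it plane by plane shows the embedded behavior: everything starts in the cleanup pass, and as samples become significant they migrate to the magnitude refinement pass while their neighbors move into the significance propagation pass.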
UEP-based resource allocation with position-value enhancement is similar to layer-based resource allocation in that packets in base layers are more reliably protected than packets in quality refinement layers. Unlike layer-based UEP, however, position packets are protected more reliably than value packets within each quality layer. The network resource allocation strategy is then optimized according to the distortion reduction of each packet and the calculated dependency graph among the packets. Some position packets may have low distortion reduction values but many descendants, and these packets are protected more effectively; value packets with low distortion reduction and few descendants are protected less reliably to save communication resources.
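One simple way to encode this policy (the specific retry numbers are illustrative, not from the chapter) is a per-segment retry-limit rule:

```python
# Illustrative position-value UEP policy: coarser layers get a larger
# retry budget, and within a layer p-segments get one extra retry over
# v-segments. The constants are made up for illustration.
def segment_retry_limit(layer: int, segment: str, max_retry: int = 4) -> int:
    base = max(0, max_retry - 1 - layer)   # base layer protected most
    bonus = 1 if segment == "p" else 0     # position beats value
    return min(max_retry, base + bonus)
```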