Sunday, December 25, 2011

Optimization Process | AUTOMATIC CELL CONFIGURATION



The basic structure of the local search algorithm developed for the optimization of WiMAX networks is depicted in Figure 1.
 
Figure 1: Local search algorithm for WiMAX network optimization.
The algorithm comprises the basic elements of a local search method that have been presented in the previous section. The local search starts from an initial solution, which can for example be the current configuration of the network to be optimized, a manually planned solution, or a solution suggested by the fast heuristics presented in the previous section.
At the beginning of each search step, a cluster of cells is selected for which parameter changes are considered. Based on the selected cells, the search neighborhood is generated and explored to yield a new solution.
The quality of a certain solution is assessed by a performance analysis. The choice of the method depends on the particular application and is a trade-off between accuracy and speed of the optimization process. From the results of the performance analysis, a cost value is generated by means of a cost function. The cost function is basically a linear combination of the evaluated quantities. In addition, penalty components are added if certain thresholds are exceeded for any of the different cost function components (e.g., coverage probability below some design target). The search process can be guided by appropriately setting weights for the different cost function components.
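As an illustration of this structure, the following Python sketch combines weighted components with penalty terms. The component names, weights, targets, and penalty value are illustrative assumptions, not values from the text.

```python
# Minimal sketch of the cost function: a linear combination of evaluated
# quantities plus penalty terms when design targets are violated.
# Component names, weights, and targets are illustrative assumptions.

def cost(metrics, weights, targets, penalty=1000.0):
    """metrics/weights/targets are dicts keyed by component name."""
    value = sum(weights[k] * metrics[k] for k in weights)
    # Add a penalty for every component that misses its design target,
    # e.g., coverage probability below the target coverage.
    for k, target in targets.items():
        if metrics[k] < target:
            value += penalty * (target - metrics[k])
    return value

# Example: coverage below its 0.95 target triggers a penalty that
# steers the search back toward feasible configurations.
m = {'mean_load': 0.6, 'coverage_prob': 0.92}
w = {'mean_load': 10.0, 'coverage_prob': -5.0}   # higher coverage lowers cost
t = {'coverage_prob': 0.95}
print(cost(m, w, t))
```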
As a search paradigm, either a descent method or a Tabu search can be applied. The Tabu list is maintained independently of the selected search paradigm. The search paradigm only influences the way in which the search is terminated. In the case of the descent method, the search process is terminated if no improvement can be found. For the Tabu search, nonimproving moves are accepted in order to escape from local minima. The Tabu search is terminated once no improving moves have been found for a certain number of search steps.
The performance of the local search method strongly depends on the applied performance evaluation. The choice between the methods is a trade-off between accuracy and running time. The basic static method is the fastest but, as was shown, has some weaknesses in terms of accuracy. The presented statistical methods significantly outperform the basic method in accuracy of results but, even if implemented efficiently, are of higher computational complexity, especially if the experimental analysis is applied for evaluating quantities per pixel. The local search optimization presented in this section can be extended to a hybrid method that makes use of two methods for performance evaluation. The exploration of the neighborhood is split into two parts. In a first step, the neighborhood is explored using a simple and fast basic performance evaluation method. As a result, a list of candidate solutions is generated and sorted with respect to cost values. The next solution is then selected from this list using a more accurate but also more time-consuming advanced performance evaluation: either the first improving solution from the list, or the best solution from a subset of the most promising solutions (“short list”).
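A hedged sketch of the hybrid exploration just described, implementing the short-list variant: the neighborhood is first ranked with a fast evaluator, and only the most promising candidates are re-checked with the accurate one. The evaluator functions are placeholders for the basic and advanced performance evaluations.

```python
# Sketch of the hybrid neighborhood exploration: rank candidates with a
# fast, coarse evaluator, then re-evaluate only a short list with the
# accurate (but slow) evaluator. Evaluator functions are placeholders.

def explore_hybrid(neighborhood, fast_eval, accurate_eval,
                   current_cost, short_list_size=5):
    # Step 1: rank all neighbors by the cheap cost estimate.
    ranked = sorted(neighborhood, key=fast_eval)
    # Step 2: re-evaluate only the most promising candidates accurately.
    best, best_cost = None, current_cost
    for candidate in ranked[:short_list_size]:
        c = accurate_eval(candidate)
        if c < best_cost:
            best, best_cost = candidate, c
    return best, best_cost  # best is None if no accurate improvement found
```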

Wednesday, December 21, 2011

Advanced Search Algorithm | AUTOMATIC CELL CONFIGURATION



The optimization method described in the previous section is well suited for initial planning and for cases where the completeness or accuracy of the input data is limited. For a detailed optimization that takes the full set of input data into account, a search approach is proposed. That is, the space of possible configurations, denoted as the search space, is explored to find the point that is optimal with respect to a certain criterion. This point is denoted as the global optimum. Exhaustive search traverses the complete search space in a systematic manner; as all points in the search space are visited, it is guaranteed to find the global optimum. However, the search space is very large for typical applications in network planning. For each site there can easily be several hundred possible configurations. Furthermore, the configurations of different sites cannot be considered independently, so the number of possible network configurations grows exponentially with the number of sites. Hence, for problems of this size, exhaustive search is too time-consuming, and local search algorithms are commonly used for network optimization purposes.
Local search algorithms start at some point in the search space, denoted as the initial solution, and subsequently move from the present solution to neighboring solutions if these fulfill some criterion, i.e., appear to be better or more promising. Local search algorithms cannot guarantee to find the global optimum. The objective of a search algorithm developed or tailored for a particular problem is to find a solution that is at least close to the global optimum. Local search algorithms are thus often classified as heuristics.
The basic procedure of local search is independent of the actual search algorithm applied. Starting from the initial solution, in each search step a search neighborhood is first generated. The search neighborhood is a subset of points from the search space that are close to the current solution, i.e., that have some attributes in common with the current solution. The point that is most appropriate with respect to some criterion is selected from the search neighborhood and accepted as the starting solution for the next search step. If no appropriate new solution is found (or some other stop criterion is fulfilled), the search is terminated. Points from the search space are compared by means of cost values associated with them. The cost values are generated from a cost function, in the literature often also referred to as the objective function. The objective function maps a given point in the search space to a cost value. The cost value is usually a scalar but could also be represented by a vector; in the latter case, an appropriate function to compare cost values needs to be defined.
Local search algorithms are very much application specific. However, several search paradigms have been developed over the last three decades. The simplest search paradigm is the descent method. This method always selects the solution from the neighborhood that has the lowest cost. If this cost is lower than the lowest cost of the previous search step, the solution is accepted as the new solution; otherwise, the algorithm terminates. The algorithm hence explores the search space by always moving in the direction of the greatest improvement, so that it typically gets trapped in a local minimum. A local minimum is a solution that is optimal with respect to the search neighborhood but which generally is worse than the global optimum. To escape from local minima, one widely applied approach (among several others) is to carry out restarts; that is, the local search is restarted from a new solution selected from a different area of the search space. The new start solutions are often selected randomly. If restarts are applied, the algorithm, strictly speaking, is no longer a local search algorithm.
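A minimal Python sketch of the descent method just described; the neighborhood and cost functions are placeholders for the application-specific parts.

```python
# Sketch of the descent method: always move to the cheapest neighbor,
# stop as soon as no neighbor improves on the current solution.

def descent(initial, neighborhood_fn, cost_fn):
    current, current_cost = initial, cost_fn(initial)
    while True:
        neighbors = neighborhood_fn(current)
        if not neighbors:
            break
        best = min(neighbors, key=cost_fn)      # greatest improvement first
        best_cost = cost_fn(best)
        if best_cost >= current_cost:
            break                               # local minimum reached
        current, current_cost = best, best_cost
    return current, current_cost
```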
Another option for escaping from local minima is to accept cost-deteriorating neighbors under certain conditions. The most prominent local search paradigms that apply this strategy are simulated annealing and Tabu search.
Simulated annealing is based on an analogy with the physical annealing process. In simulated annealing, improving points from the neighborhood are always selected when exploring the search neighborhood, while nonimproving points are accepted as new solutions with a certain probability. The probability of acceptance is a function of the level of deterioration and also gradually decreases during the algorithm's execution. The reduction of the acceptance probability is determined by the cooling scheme.
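A minimal sketch of these acceptance rules, using the common exponential acceptance probability and a geometric cooling scheme; the starting temperature and cooling factor are illustrative assumptions.

```python
import math
import random

# Sketch of simulated annealing: improving neighbors are always taken;
# a worse neighbor is accepted with probability exp(-delta / T), and T
# is reduced by a geometric cooling scheme. Parameters are illustrative.

def simulated_annealing(initial, random_neighbor, cost_fn,
                        t0=10.0, alpha=0.95, steps=10000):
    current, current_cost = initial, cost_fn(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        cand = random_neighbor(current)
        delta = cost_fn(cand) - current_cost
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_cost = cand, current_cost + delta
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= alpha                      # cooling scheme: geometric decay
    return best, best_cost
```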
In contrast to simulated annealing, which comprises randomness, classical Tabu search is deterministic. The basic operation is equivalent to the descent method, with the difference that the best point from the neighborhood is also accepted if it is worse than the current solution. In this way the search is directed away from local minima. To avoid moving back to already visited solutions, a Tabu list is introduced. The Tabu list typically contains sets of attributes of solutions that have already been visited. If a point from the neighborhood exhibits one of the sets of attributes stored in the Tabu list, the point is only accepted as a new solution if its quality, i.e., cost, exceeds a certain aspiration level. The Tabu list is updated such that individual entries are kept only for a number of iterations. The size of the Tabu list is a very important design parameter of the Tabu search: it needs to be chosen large enough to prevent cycling, but a too large list might introduce too many restrictions. Several enhancements to the basic operation of the Tabu search have been introduced, most of which modify the handling of the Tabu list; these include intensification and diversification schemes.
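A sketch of the Tabu bookkeeping described above; the attribute-extraction function is a placeholder, and using the best cost found so far as the aspiration level is an assumption (a common choice).

```python
from collections import deque

# Sketch of one Tabu search step: pick the best admissible neighbor,
# where a tabu neighbor is admissible only if it beats the aspiration
# level (here: the best cost seen so far).

def tabu_step(neighborhood, cost_fn, attributes, tabu, best_cost):
    candidates = []
    for n in neighborhood:
        c = cost_fn(n)
        is_tabu = attributes(n) in tabu
        # Aspiration criterion: a tabu move is allowed if it improves
        # on the best cost found so far.
        if not is_tabu or c < best_cost:
            candidates.append((c, n))
    if not candidates:
        return None
    c, n = min(candidates, key=lambda x: x[0])   # best admissible neighbor,
    tabu.append(attributes(n))                   # even if worse than current
    return n

# Entries are kept only for a number of iterations; a deque with maxlen
# implements this "forgetting" directly. The list size is a key design
# parameter: large enough to prevent cycling, not overly restrictive.
tabu_list = deque(maxlen=20)
```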

Saturday, December 17, 2011

Optimization Parameters and Targets | AUTOMATIC CELL CONFIGURATION



The targets of radio network optimization are mainly twofold. The first target is to minimize the interference caused by the individual cells while maintaining sufficient coverage over the planning area. This is in general a trade-off and needs to be balanced: e.g., tilting down the antenna reduces coverage, but also lowers interference in neighboring cells and thus potentially increases network capacity. The second target is the traffic distribution between cells. It is desirable to maintain similar cell loading among neighboring cells to minimize blocking probabilities and to maximize spare capacity for traffic fluctuations and future traffic growth.
The most effective parameter in network optimization is the antenna tilt. Antenna tilts need to be set such that the traffic within the “own” cell is served with maximum link gain, but at the same time the interference in neighboring cells is minimized. The possible tilt angles are typically restricted because of technical and civil engineering reasons. Especially in case of collocated sites with multiband antennas there might be strong restrictions on the possible tilt angles to be taken into account during optimization.
The transmitted pilot channel power and the other common channel powers, which are typically coupled by a fixed offset, are also vital parameters of network optimization. It needs to be assured that these channels are received with sufficient quality by all users in the serving cell. At the same time a minimization of the common channel powers yields significant capacity gains: Firstly, additional power becomes available for other (user traffic) channels, and secondly, the interference is reduced. The gains obtained from reducing the pilot power are often underestimated. It is important to note that in a capacity-limited WiMAX network (e.g., in urban areas) the reduction of pilot power levels by a certain factor also reduces the total transmit power of cells and as a consequence the cell loading by up to the same factor.
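As a hedged numeric illustration of this effect (all power values below are made up, not from the text): if the common channel powers are coupled to the pilot by a fixed offset, reducing the pilot also reduces the coupled common channels and, at constant traffic power, lowers the total transmit power and hence the interference contributed to neighboring cells.

```python
# Illustrative numbers only: a cell whose other common channel powers
# are coupled to the pilot by a fixed offset factor.
pilot_w = 2.0            # pilot channel power [W] (assumed)
common_offset = 1.5      # other common channels = offset * pilot (assumed)
traffic_w = 6.0          # power for user traffic channels [W] (assumed)

total = pilot_w * (1 + common_offset) + traffic_w
print(f"total Tx power: {total:.1f} W")          # 11.0 W

# Halving the pilot also halves the coupled common channels; in a
# capacity-limited cell the freed power either serves more traffic or,
# at constant traffic, lowers the total Tx power and thereby the
# interference (and loading) seen by neighboring cells.
pilot_w /= 2
total_reduced = pilot_w * (1 + common_offset) + traffic_w
print(f"total Tx power: {total_reduced:.1f} W")  # 8.5 W
```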
Optimization of azimuth angles of sectored sites is of great importance, in particular for antennas with a rather small horizontal beam-width (e.g., 65° vs. 90° in the case of three-sectored sites). In this case the difference between the antenna gains in the direction of the main lobe and at the half-angle between neighboring sectors is comparatively large, and cells of neighboring sites might need to be adjusted such that maximum coverage is achieved. It is observed that during optimization, azimuth changes are introduced in particular to reduce coverage problems. For the possible azimuth angles, typically even stronger restrictions apply than for the tilt angles.
The antenna height is also often a degree of freedom for the optimization. Higher antennas can provide better coverage, but on the other hand also cause more interference in neighboring cells. Additional important parameters are the antenna type and the number of deployed sectors at a site. Both parameters are closely coupled, as a larger number of sectors also suggests the use of antenna patterns with smaller horizontal beam-width. The choice of sectorization is typically a trade-off between increased network capacity and higher monetary cost.

Tuesday, December 13, 2011

SITE SELECTION | SITE DEPLOYMENT AND CONFIGURATION



Site selection targets the optimization of mobile radio networks. It provides flexible methods to modify the configuration of a network in such a way that key performance measures are optimized and pivotal performance targets are met.
The working engine behind the site selection is the fast construction and solution of the downlink or uplink cell equations for a given network configuration. Based on an analysis of the “permitted configurations,” a preprocessing tree is constructed which contains all partial coefficients of the cell equations that might be relevant for the construction of the downlink/uplink/combined cell equations of an arbitrary permitted network configuration. This preprocessing tree then allows for an efficient accumulation of the coefficients of the equation system, which in turn yields a fast calculation of the cell transmit/interference powers/activities from which most performance measures can be deduced. One can observe how many configurations are evaluated by looking at the number of evaluations in the status messages (issued about once per second) while the algorithm is running after preprocessing.
Based on this fast evaluation, the network is optimized by evaluating all neighboring configurations of the current configuration and moving to the best of them if this improves the current network configuration; a neighboring configuration deviates from the current one by a short distance in the search space. As long as an improvement can be obtained, this optimization step is iterated. When no improvement can be found, the last configuration is stored as the end design. On top of this basic procedure (construct the preprocessing tree, then perform a descent to a local minimum), there is the possibility of repetition (since when many sites are switched off, the coefficients in the preprocessing tree might become more and more irrelevant and hence yield too conservative cell equations) and of dividing large problems into smaller ones, where one part of the network is optimized after another in a circular fashion.
A key role in the simple local descent iteration is played by the comparison operator between configurations. Here the algorithm allows the user to choose the relevant performance measures/constraints by assigning priorities to them. For example, one often assigns monetary cost the highest priority and the coverage constraint the second highest (nonzero) priority.
The site selection can be used for optimizing cell parameters and for selecting sites/cells. It can start from given configurations or from empty/full networks. It can search neighborhoods of selectable depth and iterate through them with a deterministic steepest or gradient descent, or a greedy random descent. However, this flexibility comes at the price of more complex user interaction: through a large number of parameters, the user has to tell the program what kind of optimization is intended. In addition, for large optimization problems, the preprocessing tree requires a huge amount of memory, and if a large number of partial interferer coefficients are stored, the evaluation can become slow. Partial coefficients that cannot be stored are accumulated in a so-called remainder coefficient. This remainder coefficient can accumulate a large number of small interferers and ensures that these interferers are not neglected in the cell equations. This saves computation time and memory, but might yield overconservative cell transmit powers/channel activities when the number of stored coefficients becomes too small or when the interferers that belong to the stored coefficients are not active (e.g., switched off). In these cases one might have to recalculate the preprocessing tree or allow for a larger number of coefficients within the nodes of the tree. Alternatively, one could divide a large problem into smaller ones, which also reduces the preprocessing tree.
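A hedged sketch of the coefficient compression just described; the data layout is an illustrative assumption, not the tool's actual structure.

```python
# Sketch of per-node coefficient storage: only the largest partial
# interferer coefficients are kept individually, the rest are lumped
# into a single conservative "remainder" coefficient.

def compress_coefficients(coeffs, max_stored):
    """coeffs: dict mapping interfering cell id -> partial coefficient."""
    ranked = sorted(coeffs.items(), key=lambda kv: kv[1], reverse=True)
    stored = dict(ranked[:max_stored])
    # Small interferers are not dropped: their coefficients are summed
    # into a remainder that is always counted as active. This saves
    # memory, but makes the cell equations overconservative when the
    # interferers behind the remainder are actually switched off.
    remainder = sum(v for _, v in ranked[max_stored:])
    return stored, remainder

def effective_coefficient(stored, remainder, active_cells):
    # Individually stored interferers only count while active; the
    # remainder is always included (the conservative part).
    return sum(v for cell, v in stored.items() if cell in active_cells) + remainder
```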


Candidate Sites and Permitted Configurations

The site selection algorithm starts from a description of the permitted network configurations. Such a description consists of a set of potential cells (configured cells that might appear in a network design) and subsets of these cells, which we call selection sets and choice sets.
The meaning of a selection set is that either all potential cells in this set appear together in a permitted network configuration or none of them do. In a site selection problem, one can think of a selection set as the set of cells that belong to a site: one has to select all of them or none of them. Since selection sets can consist of a single cell, one can also define “cell selection” problems, in which individual cells are selected, by putting each potential cell into its own selection set.
The meaning of a choice set is that exactly one of the potential cells in the set has to appear in a permitted network configuration. In a tilt choice problem, such a set would consist of all potential cells that model the same cell but with different tilt settings. Defining such a choice set tells the optimization kit that it should choose exactly one of these cells for a solution design. Through a choice set containing only one potential cell, one can tell the program that this cell must appear in every solution design.
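These two set types translate directly into a feasibility check. The following sketch models a configuration as a set of potential cell ids; the example ids are illustrative assumptions.

```python
# Sketch of the "permitted configuration" check implied by selection
# and choice sets. A configuration is a set of potential cell ids.

def is_permitted(config, selection_sets, choice_sets):
    # Selection set: all of its cells are in the configuration, or none.
    for s in selection_sets:
        chosen = s & config
        if chosen and chosen != s:
            return False
    # Choice set: exactly one of its cells appears in the configuration.
    for c in choice_sets:
        if len(c & config) != 1:
            return False
    return True

# Example: a site with three sectors forms one selection set; three
# tilt variants of the same cell form one choice set.
site_a = {"a1", "a2", "a3"}
tilts_b = {"b_tilt0", "b_tilt4", "b_tilt8"}
print(is_permitted({"a1", "a2", "a3", "b_tilt4"}, [site_a], [tilts_b]))  # True
```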


Optimization Algorithm

Given this definition of permitted configurations and the results of the preprocessing, the algorithm can walk through the permitted configurations and try to improve the performance measures of interest during this walk. The permitted configurations are the possible search choices for the optimization algorithm. The user can choose between several ways of walking towards an optimized configuration: “local steepest descent,” “local gradient descent,” “random greedy descent,” or “local immediate descent.”
The local steepest descent method evaluates all neighboring network configurations and then replaces the current configuration with the best one found, where the comparison checks the selected performance measures exactly according to their priority.
The local gradient descent method evaluates all neighbor network configurations and then replaces the current one with the best one, where the comparison is based on gradients instead of the absolute values of the performance measures.
The random greedy descent method chooses randomly one new configuration in the neighborhood of the current one and moves to it if it performs better than the old one.
The local immediate descent is an exotic procedure in which improvements are accepted immediately and the search continues from the improved configuration. This can speed up the algorithm, but at the cost of possibly not giving results as good as the more thorough search methods above.
In all cases the walk moves from the current configuration to a configuration within a certain neighborhood of the current one. The neighborhood is defined as all configurations which can be reached by a given number of permitted elementary steps. An elementary step is either changing a given permitted configuration by switching one selection set on or off (adding its cells to the configuration or removing them) or modifying one cell in a choice set (replacing it by another one of the same choice set).
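A sketch of a depth-one neighborhood built from these elementary steps; modeling configurations as sets of potential cell ids is an assumption about the data layout, not the tool's internal representation.

```python
# Sketch of neighborhood generation by elementary steps: toggle one
# selection set on/off, or swap the chosen cell within one choice set.
# This yields the depth-1 neighborhood; deeper neighborhoods compose
# several such steps.

def neighbors(config, selection_sets, choice_sets):
    out = []
    for s in selection_sets:                  # switch a site on or off
        out.append(config - s if s <= config else config | s)
    for c in choice_sets:                     # replace the chosen variant
        current = c & config
        for alt in c - current:
            out.append((config - current) | {alt})
    return out
```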
However, increasing the search depth beyond one usually increases the computation time dramatically, because a large number of configurations might have to be considered before a move is accepted. This increase can be controlled through an algorithm parameter.


Performance Evaluation and Comparison

There are two groups of performance measures: ordinary performance measures and performance constraints, which additionally have a target value that should be reached. The ordinary performance measures are
  • ActiveCells: The number of active cells in the evaluated network configuration. (Can be used instead of TotalFirstYearCost if detailed costs are irrelevant.) Small values are better than large ones.
  • ActiveSites: The number of active sites. (Can be used instead of TotalFirstYearCost if the detailed costs of the base stations are irrelevant.) Small values are better than large ones.
  • MaxOtherCoefficient: The maximum of the nondiagonal coefficients in the coupling matrix. Small values are better than large ones.
  • MaxOwnCoefficient: The maximum diagonal coefficient in the coupling matrix. Small values are better than large ones.
  • MaxSiteCostPerAccess: The maximum first year cost of one BS location (including hardware) divided by its sum of area and traffic access coverage in percent. Small values are better than large ones.
  • MaxUserLoadInPercent: The maximum user load of a cell in the network. Small values are better than large ones. If this measure has a high priority, the optimization seeks a configuration in which the maximum user load of a cell is as small as possible.
  • OverloadedCells: The number of overloaded cells. Small values are better than large ones.
  • TotalFirstYearCost: The most important (nontechnical) performance measure in site selection, namely the total first year cost of a given network configuration. Small values are better than large ones.
The performance constraints are
  • AreaCoverageInPercent: The percentage of the area that can be served by the network configuration (i.e., the considered pixels that receive a sufficiently strong signal from at least one of the active cells). Its target value is the area coverage defined in the underlying optimization profile. Large values are better than small ones.
  • AccessCoverageInPercent: The number of users on the served area of a configuration as a percentage of the number of users in the total considered area. The target value is the access coverage from the underlying optimization profile. Large values are better than small ones.
  • MinCoverageGapInPercent: Here one considers two differences: the area coverage in percent minus its target value, and the traffic coverage in percent minus its target value. The smaller of these two differences is the MinCoverageGapInPercent. Large values are better than small ones. The target value of this constraint is 0.
  • TrafficCoverageInPercent: After solving the cell equations one can assess the number of customers which can be served by the network. The target value is the traffic coverage from the underlying optimization profile. Large values are better than small ones.
Only performance measures whose priority is strictly positive are calculated and displayed during the optimization. The measure with the largest positive value has the highest priority, the one with the lowest positive value the lowest priority. The positive priorities should be pairwise different.
When two evaluations of network configurations are compared, first the numbers of violations (i.e., the number of performance constraints whose target is not met) are compared. The configuration with fewer violations is better than the other one.
Next, if both configurations have the same nonzero number of violations, the violated constraint with the highest priority is considered. The configuration in which the corresponding performance measure has the better value is considered better. If both values agree, the violated constraint with the next highest priority is considered, and so on.
If there are no violations, or the performance measures of all violated constraints agree, the performance measure with the highest priority is considered: the configuration with the better value of this measure is considered better. If both values agree, one looks at the performance measure with the second highest priority, and so on.
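These rules define a lexicographic comparison, sketched below; the record layout (priority, value, smaller-is-better flags) is an illustrative assumption.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Evaluation:
    # Violated constraints and active measures as (priority, value,
    # smaller_is_better) tuples, sorted by descending priority.
    violations: List[Tuple[int, float, bool]] = field(default_factory=list)
    measures: List[Tuple[int, float, bool]] = field(default_factory=list)

def better(a: Evaluation, b: Evaluation) -> bool:
    # 1) Fewer violated constraints wins outright.
    if len(a.violations) != len(b.violations):
        return len(a.violations) < len(b.violations)
    # 2) Same number of violations: walk the violated measures in
    #    priority order until one value is strictly better.
    for (_, va, low), (_, vb, _) in zip(a.violations, b.violations):
        if va != vb:
            return va < vb if low else va > vb
    # 3) No violations (or all violated values agree): compare the
    #    ordinary measures by descending priority.
    for (_, va, low), (_, vb, _) in zip(a.measures, b.measures):
        if va != vb:
            return va < vb if low else va > vb
    return False  # configurations are equally good
```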
Since the steepest descent algorithm accepts only “better” permitted configurations, this evaluation first tries to reach the performance targets and then to optimize the high priority measures.
If the optimization method is set to local gradient descent, the decision whether a neighbor of the current configuration is better than another neighbor of the current configuration is slightly more complex: instead of the absolute values one considers gradients with respect to the current configuration.

Wednesday, December 7, 2011

WIMAX NETWORK PLANNING PROCESS



WiMAX radio planning involves a number of steps ranging from tool setup to site survey. The process is similar to that for any wireless network. What differs between WiMAX and other technologies are the actual site configuration, the KPIs, and the propagation environment, as WiMAX may support both mobile and fixed users, where the latter may employ directional/rooftop antennas.
The final radio plan defines the site locations and their respective configurations. The configuration involves BTS height, number of sectors, assigned frequencies or major channel groups, antenna types, azimuth and downtilt, equipment type, and RF power. The final plan is tested against various KPI requirements, mainly coverage criteria and capacity (or signal quality). Figure 1 can be used as a guide in developing a planning process. The planning process also depends largely on the planning tool used.

 
Figure 1: WiMAX radio planning process.
The planning process in Figure 1 includes measurements (i.e., drive tests and verification) after the site survey. This procedure is not mandatory for all sites, particularly when the site count is very large. Usually, the site survey and the KPI analysis give an indication of which areas are expected to have poor RF quality and which sites are involved. Measurements are typically carried out when the candidate site(s) are not located in ideal locations or when the site survey finds discrepancies in the candidate(s).
There are important differences between WiMAX and 3G radio planning. The WiMAX radio offers only a modest processing gain, in the form of repetition coding and subchannelization, and these features are only exploited when the signal quality demands more processing. To support high data rates, the radio plan must offer very good SINR (signal to interference and noise ratio) values even with very limited spectrum. For example, in the absence of subchannelization and repetition coding, the required SINR for the lowest MCS (modulation and coding scheme) is around 5 dB, and this needs to be achieved even with a very tight frequency reuse factor of 1/3 or 1/4 in the presence of shadowing. Here the reuse factor is the reciprocal of the number of cells using different frequencies, where the sum of those frequencies represents the whole spectrum resource allocated to the planned system. Another consideration in WiMAX planning is the high SINR required to support high data rates. Although a site is expected to support high data rates for CPEs close to it, SINR values >30 dB are only possible in the absence of interference. This requires accurate modeling of the propagation and of the RF equipment. In 3G, for example, high data rates are possible even with a C/N of <10 dB, as the processing gain enables the receiver to tolerate some amount of interference. In WiMAX this is not the case, as the processing gain is only provided through channel coding and limited repetition coding.
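As a rough illustration of this limited processing gain (a standard back-of-the-envelope relation, not a formula from the text): repetition coding with factor R combines R copies of the data and improves the post-processing SINR by roughly 10·log10(R) dB, so the channel SINR that can be tolerated for the roughly 5 dB MCS threshold drops accordingly.

```python
import math

# Back-of-the-envelope illustration of WiMAX repetition-coding gain:
# combining R repeated copies improves post-processing SINR by about
# 10*log10(R) dB. The 5 dB threshold is the lowest-MCS figure from the
# text; the relation itself is an assumed standard approximation.

def repetition_gain_db(r):
    return 10 * math.log10(r)

required_sinr_db = 5.0        # lowest MCS, no repetition
for r in (1, 2, 4, 6):        # repetition factors defined in 802.16e
    tolerable = required_sinr_db - repetition_gain_db(r)
    print(f"R={r}: tolerable channel SINR ~ {tolerable:.1f} dB")
```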

Saturday, December 3, 2011

IMPACT OF THE RATIO OF THE COST OF BS TO RS ON SOLUTION


It is also worth noticing how the ratio of the cost of a BS to that of an RS affects the site selection. Intuitively, as the ratio rises, the RS becomes relatively cheaper, so the tool should tend to select more RSs compared to BSs, and the number of connections from TPs to RSs should increase. This trend is indeed observed as the cost ratio varies from one to ten, i.e., from the cost of a BS equal to the cost of an RS up to the cost of a BS being ten times the cost of an RS. Figure 1 shows the corresponding average path loss between each TP and its communicating node. The path loss decreases, which means the quality of the received radio signal becomes higher as the cost of the RS becomes relatively lower.


Figure 1: Average path loss between each TP and its communicating node as the ratio of the cost of BS and RS is varied.

Figures 2 and 3 show two extreme cases. In both cases, the number of candidate BS sites is 50, the number of candidate RS sites is 150, and the number of TPs is 500. Figure 2 shows the plan when the cost of a BS equals the cost of an RS. In this case, 38 BSs and 50 RSs are selected, with 177 connections between TPs and BSs and 320 connections between TPs and RSs. Figure 3 shows the plan when the cost of a BS is ten times the cost of an RS. In this case, 28 BSs and 75 RSs are selected, with 133 connections between TPs and BSs and 367 connections between TPs and RSs.


Figure 2: An output of the planning tool when the ratio of the cost of BS and RS is 1.


Figure 3: An output of the planning tool when the ratio of the cost of BS and RS is 10.

Wednesday, November 30, 2011

QUALITY, CAPACITY, AND ECONOMIC ISSUES OF NETWORK DESIGN


The increasing demand for mobile communications leads mobile service providers to look for ways to improve QoS and to support increasing numbers of users in their systems. Because the amount of frequency spectrum available for mobile communications is very limited, efficient use of the frequency resource is needed. Currently, cellular system design is challenged by the need for better QoS and the need to serve an increased number of subscribers. Network planning is becoming a key issue in this scenario, with exceedingly high growth rates in many countries forcing operators to reconfigure their networks virtually on a monthly basis. Therefore, the search for intelligent techniques that may considerably alleviate planning efforts (and the associated costs) becomes extremely important for operators in a competitive market.
Cellular network planning is a very complex task, as many aspects must be taken into account, including the topography, morphology, traffic distribution, existing infrastructure, and so on. Things become more complicated because a handful of constraints are involved, such as the system capacity, service quality, frequency bandwidth, and coordination requirements. Nowadays, it is the network planner's task to manually place base stations (BSs) and to specify their parameters based on personal experience and intuition. These manual processes have to go through a number of iterations before achieving satisfactory performance and do not necessarily guarantee an optimum solution. This approach worked well when the demand for mobile services was low. However, the explosive growth in service demand has led to a need for increased cell density. This in turn has resulted in greater network complexity, making it extremely difficult to design a high-quality network manually [13].
Furthermore, OFDM (orthogonal frequency division multiplexing) technology is emerging as an attractive solution for fast wireless access. It has been adopted for many future wireless networks, e.g., FLASH (fast low-latency access with seamless handoff) OFDM and WiMAX. The UMTS (Universal Mobile Telecommunication System) evolution will also go in the direction of OFDM, e.g., LTE (Long Term Evolution). As with other technologies, the deployment of OFDM networks poses the problem of selecting antenna locations and configurations with respect to contradictory goals: low cost versus high performance. A key to successful planning is the fast and accurate assessment of network performance in terms of coverage, capacity, and QoS [4]. This also makes conventional design methods insufficient for planning future mobile networks. Thus, more advanced and intelligent network planning tools are required. A promising planning tool should be able to aid the human planner by automating the design processes.

Tuesday, November 29, 2011

RADIO PLANNING OBJECTIVE



The task of radio planning is to define a set of site locations and respective BTS (Base Transceiver Station) configurations that address the coverage and capacity figures derived from dimensioning. Dimensioning a new network/service means determining the minimum capacity requirements that will still allow the GoS (Grade of Service) to be met; site densities in each clutter type are one of the outputs. The site count, i.e., the number of sites in a considered service area, derived in radio planning often differs from the site count derived from dimensioning, since the actual site coverage may differ significantly from the assumed empirical model(s). There is always a risk that the planned site count may exceed the site count estimated from dimensioning. As a result, several planning iterations are needed to reach a reliable figure.
One problem with radio planning concerns site density. Firstly, higher site density makes it more difficult to find suitable candidates. This is true in all clutter types. In dense areas, the most suitable sites are already overcrowded with 2G and 3G antennas, which will likely put the WiMAX antennas in less ideal positions. Secondly, there is a tendency for the candidate sites not to have comparable heights. This is a major drawback in radio planning because large differences in height can distort the site dominance areas and cell ranges. The third problem is the bandwidth constraint, which may require tighter frequency reuse. In this case, the radio plan must be as close as possible to the ideal case.
Radio network planning normally follows the dimensioning exercise. Sometimes the dimensioning process includes a rough plan to justify the site count and coverage level using some commonly accepted propagation model and generic WiMAX system modules in the planning tool. In the actual planning phase, a number of inputs are needed to improve the quality and accuracy of the radio plan. Depending on the planning tool selected, a number of inputs may be required for the tool to be fully utilized. For example, it is assumed that the following items are already well considered:
  • Propagation characteristics of various areas (propagation models tuned)
  • Required inputs defined (clutter maps, terrain maps, building data, etc.)
  • Traffic and demographic information, i.e., per clutter type
  • WiMAX RF equipment parameters are defined (antennas, RF [radio frequency] features, etc.)
  • Options for BTS configuration (sectorized, omni, PUSC [partial usage of subchannels], FUSC [full usage of subchannels])
  • CPE (customer premises equipment) types and parameters defined (antenna types, mounting, diversity)
Two important decisions regarding radio planning have to be made prior to the actual planning exercise. Firstly, the required level of accuracy for coverage and capacity needs to be considered, and this depends highly on the accuracy of the propagation model in the planning tool. Secondly, the planner needs to decide how much RF optimization will be undertaken during the planning phase. This is only possible if the planning tool, together with the planning parameters and equipment models, is accurate enough. Optimization is often neglected during the planning process; a postplanning optimization exercise is often costly, produces only minor improvements, and is usually limited to antenna adjustments (tilting and azimuth changes).
There are a number of features that are useful when selecting a planning tool, such as
  • Automatic frequency selection
  • Optimal site selection—when existing or candidate sites are provided
  • Support of mixed and multiple propagation models
  • Support of model tuning and user defined models
  • Support of OFDMA system including channel impairments
  • Optimal downtilting
  • Propagation parameters (or constants) for 2.5 and 3.5 GHz
A number of commercial planning tools are available in the market. The major factor that determines the usability of a tool is the accuracy of its RF modeling, such as propagation, BTS and CPE antenna models, interference prediction, frequency allocation, and channel models. Planning tools with OFDMA models for capacity planning are advantageous but not necessary, since the capacity figures for each site or cluster can be estimated based on the signal quality outputs.

Friday, November 25, 2011

INTEGER PROGRAMMING MODEL | Network Planning for IEEE 802.16j Relay Networks



Four sets of tests were performed with the basic variant of the problem to determine its sensitivity to different parameters.
In the first experiment, all three parameters were scaled: the number of candidate BSs, candidate RSs, and TPs. The number of BSs was varied, and the numbers of RSs and TPs were three times and ten times this figure, respectively. Figure 1 shows how the time required to find a solution scales up. As can be seen, the problem can be solved for up to 80 candidate BSs and 240 RSs with ease. Further, the results show that the problem complexity scales up quite rapidly. Indeed, in further experiments in which the number of candidate BSs was increased to 120, the resulting mean execution time was still under 30 min. The system exhibits scaling properties that are quite nonlinear, although some basic curve fitting has shown that, for the available data set, the scaling is considerably less than exponential.

 
Figure 1: Calculation time when three parameters are scaled at the same time.
Figure 2 shows the calculation time when only the number of BSs is scaled. The number of RSs is set to 90 and the number of TPs is set to 300 in all tests.

 
Figure 2: Calculation time when only the number of BS is scaled.
A similar experiment was performed in which the number of RSs was scaled up while the numbers of BSs and TPs remained constant. Again it is clear that the system scales up linearly in this parameter (Figure 3). The number of BSs is set to 30 and the number of TPs is set to 300 in all tests.

 
Figure 3: Calculation time when only the number of RS is scaled.
Finally, in this set of experiments, the sensitivity to the number of TPs was considered. The same characteristic is again observed: the system scales linearly as can be seen from Figure 4. The number of BSs is set to 30 and the number of RSs is set to 90 in all tests.

 
Figure 4: Calculation time when only the number of TP is scaled.
From the figures it can be seen that this algorithm should suit small network planning problems, since the time cost is very short for small numbers of BSs. The time varies almost linearly when an individual parameter is varied. For the problem sizes studied, which are typical for small metropolitan scenarios, the solution can be found quickly on typical desktop computers, e.g., under two minutes for problems with 50 candidate BS sites and approximately ten minutes for problems with 100 candidate BS sites. The time cost could increase to a day or a few days for planning a larger network, e.g., around 500 candidate sites, but this is still practicable.

Monday, November 14, 2011

RESULTS AND DISCUSSION | IEEE 802.16j Relay Networks



The objective of these tests can be divided into two parts. The first is to obtain an understanding of the scalability of the problem formulations: the basic model and the state space reduction model. More specifically, the objective was to understand whether these formulations can be used to solve problems of realistic size. Given that this is, in principle, an NP-hard problem, it is important to understand the range of problems for which standard solution techniques are appropriate and the range of problems that require the development of heuristics employing domain knowledge.
The second is to determine how the clustering approach compares with the more rudimentary approaches. The comparison was based on both the time taken to obtain a solution and the quality of the resulting solution; naturally, the former relates directly to the scalability characteristics of the approach and its applicability to realistic scenarios.
A number of tests were performed in which the numbers of BSs, RSs, and TPs were varied. All tests were done using a standard desktop computer (Centrino Duo 2.0 GHz, 1 GB memory, Windows Vista). Twelve tests were performed for each configuration and the mean execution time was taken. As there was some variation in the results, the minimum and maximum execution times were removed and the mean was taken over the remaining ten results.
Problems were generated at random. The locations of the BSs, RSs, and TPs were chosen randomly from an area of size 3 × 3 km: the (x, y) coordinates of each node were chosen by selecting two random variables from the distribution U(0, 3000). For each of the problems the same set of weight parameters was used: λ1 = 8, λ2 = 8, and λ3 = 20. However, it is worth noting that the values of these parameters have little impact on the time required to find solutions. In each of the problems, the BS cost was chosen at random and was three times the cost of the RS.
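A sketch of this instance generation, following the description above; the cost range and the dictionary layout are illustrative assumptions.

```python
import random

# Sketch of the random instance generation described above: coordinates
# drawn from U(0, 3000) over the 3 x 3 km area, fixed weights, and a
# BS cost three times the RS cost. The cost range is an assumption.

def generate_instance(n_bs, n_rs, n_tp, seed=None):
    rng = random.Random(seed)
    point = lambda: (rng.uniform(0, 3000), rng.uniform(0, 3000))
    bs_cost = rng.uniform(1.0, 10.0)   # chosen at random (assumed range)
    return {
        "bs_sites": [point() for _ in range(n_bs)],
        "rs_sites": [point() for _ in range(n_rs)],
        "tps": [point() for _ in range(n_tp)],
        "bs_cost": bs_cost,
        "rs_cost": bs_cost / 3.0,      # BS cost is three times the RS cost
        "weights": (8.0, 8.0, 20.0),   # lambda_1, lambda_2, lambda_3
    }

instance = generate_instance(20, 60, 200, seed=1)
```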
In all of the following tests, the branch and bound method found the optimal solution to the given problem. Figure 1 shows one possible result for planning a network with 20 candidate BSs, 60 candidate RSs, and 200 TPs. In this solution, 10 BSs and 36 RSs are selected.

 
Figure 1: A typical output of the planning tool.

Friday, November 11, 2011

RELAY STATION CAPABILITIES | IEEE 802.16J



As the standard is still evolving, it is not clear what the final variant will look like. However, at present it appears that two categories of RS will be defined: the low capability RS (simple RS) and the high capability RS (full function RS). The simple RS is intended for low cost deployment and operates on one OFDMA channel. It contains no control functionality (i.e., control functions are centralized in the MMR-BS), has one transceiver, and optionally supports multiple input multiple output (MIMO) antennas. The full function RS can operate on multiple OFDMA channels, implement distributed control functions, and support MIMO. This type of RS has two further variants: the fixed/nomadic full function RS and the mobile full function RS. Mobile RSs add support for handover and the ability to deal with a channel that varies due to mobility. Table 1 summarizes the capabilities of the different RSs.
Table 1: RS Capabilities

| Capability | Simple RS | Full Function Fixed/Nomadic RS | Mobile RS |
| --- | --- | --- | --- |
| Number of OFDMA channels | 1 | 1 | 1 |
| Duplexing on MMR and access links | TDD | TDD or FDD | TDD or FDD |
| Frequency sharing between access and MMR links | Yes | Yes or No | Yes or No |
| Mobility | Centralized in MMR-BS | Centralized in MMR-BS or distributed in RSs | Centralized in MMR-BS or distributed in RSs |
| Antenna support | SISO or MIMO | MIMO | MIMO |
At present, it is considered that an MMR network could be composed of multiple usage models, with different RS types deployed for specific purposes; however, there is as yet little work on the heterogeneous functionalities of RSs in different scenarios.
For example, an MS can move from the coverage provided inside a building by a fixed/nomadic RS to a train where coverage is provided by a mobile RS. Furthermore, there is no direct mapping between the usage models and the types of RS. An operator may deploy a variety of different RS types, depending on traffic, mobility, and topology (two hops or more), within the area of each RS location for a specific usage model.
In fact, the future standard will not answer all the issues raised by the incorporation of RSs, leaving room for vendor differentiation. For instance, intelligent scheduling, either at the BS (in a centralized approach) or at the BS and RSs (in a distributed approach), is required to minimize the interference that occurs at the RSs.

Monday, November 7, 2011

OVERVIEW OF IEEE 802.16J



In IEEE 802.16j, low cost RSs are introduced to provide enhanced coverage and capacity. Using such stations, an operator could deploy a network with wide coverage at a lower cost than using only (more expensive) BSs, while significantly increasing the system throughput. As network utilization increases, these RSs could be replaced by BSs as required. The mesh architecture defined in WiMAX is already used to increase the coverage and throughput of the system. However, this mode is not compatible with the point-to-multipoint (PMP) mode, does not support the OFDMA PHY, lacks fast route changes for mobile stations (MSs), etc. Hence, the standards organization has recognized this as an important area of development, and a task group has been charged with drafting a new standard, IEEE 802.16j mobile multihop relay, to address these issues. The first draft of the IEEE 802.16j standard was completed in August 2007.

IEEE 802.16J SCOPE

IEEE 802.16j aims to develop a relay mode based on IEEE 802.16e by introducing RSs for two usage models:
  • Coverage extension
  • Capacity enhancement
In other words, relay technology is first expected to improve the coverage reliability in geographic areas that are severely shadowed from the BS, or to extend the range of a BS. In both cases, the RS enhances coverage by transmitting from an advantageous location closer to a disadvantaged SS than the BS is. Second, it is expected to improve the throughput for users at the edges of an 802.16 cell: it has been recognized in previous 802.16 contributions that subscribers at the edges of a cell may be required to communicate at reduced rates, because the received signal strength is lower there. Finally, it is expected to increase system capacity by deploying RSs in a manner that enables more aggressive frequency reuse. Figure 1 illustrates the different scenarios in which the relay mode could be used. However, introducing such RSs considerably alters the architecture of the network and raises many issues and questions. It is still unclear what system design is appropriate and can be realized at low cost while still providing good coverage and enhanced throughput.

 
Figure 1: IEEE 802.16j example use cases.
The 802.16j task group's scope is to specify OFDMA PHY and MAC enhancements to the IEEE 802.16 standards for licensed bands. These specifications aim to enable the operation of fixed, nomadic, and mobile RSs while keeping backward compatibility with the SS/MS. In other words, the standard will define a new RS entity and modify the BS to support Mobile Multihop Relay (MMR) links and the aggregation of traffic from multiple sources. An MMR link is a radio link between an MMR-BS and an RS, or between a pair of RSs. Such links can support fixed, portable, and mobile RSs and multihop communication between a BS and the RSs on a path. An access link is a radio link that originates or terminates at an SS/MS. Table 1 summarizes the main scope of the project.
Table 1: IEEE 802.16j Project Scope

| Define New: RS Entity | Define New: “802.16j Relay” Link Air Interface | No Change | Changes to BS |
| --- | --- | --- | --- |
| Supports PMP links; supports MMR links; supports aggregation of traffic from multiple RSs; supports fixed, portable, and mobile RSs | Based on OFDMA PHY; MAC to support multihop communication; security and management | To SS/MS; to 802.16e OFDMA PMP link | Add support for MMR links; add support for aggregation of traffic from multiple RSs |