Sunday, December 25, 2011

Optimization Process | AUTOMATIC CELL CONFIGURATION



The basic structure of the local search algorithm that has been developed for the optimization of WiMAX networks is depicted in Figure 1.
 
Figure 1: Local search algorithm for WiMAX network optimization.
The algorithm comprises the basic elements of a local search method that have been presented in the previous section. The local search starts from an initial solution, which can for example be the current configuration of the network to be optimized, a manually planned solution, or a solution suggested by the fast heuristics presented in the previous section.
At the beginning of each search step, the search neighborhood is generated: first, a cluster of cells is selected for which parameter changes are considered; based on the selected cells, the neighborhood is constructed and explored to yield a new solution.
The quality of a certain solution is assessed by a performance analysis. The choice of the method depends on the particular application and is a trade-off between accuracy and speed of the optimization process. From the results of the performance analysis, a cost value is generated by means of a cost function. The cost function is basically a linear combination of the evaluated quantities. In addition, penalty components are added if certain thresholds are exceeded for any of the different cost function components (e.g., coverage probability below some design target). The search process can be guided by appropriately setting weights for the different cost function components.
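As a rough illustration of this structure, the sketch below combines weighted cost components with penalty terms; the component names, weights, and penalty factor are hypothetical and chosen only for the example.

```python
# Minimal sketch of a cost function as a weighted sum plus penalty components.
# All names and numbers are illustrative, not taken from the tool described here.

def cost(evaluation, weights, targets, penalty_weight=1000.0):
    """evaluation: quantities from the performance analysis,
    weights: relative importance of each quantity (search guidance),
    targets: design targets whose violation adds a penalty component."""
    value = sum(weights[name] * evaluation[name] for name in weights)
    for name, (kind, threshold) in targets.items():
        if kind == "min" and evaluation[name] < threshold:    # e.g. coverage too low
            value += penalty_weight * (threshold - evaluation[name])
        elif kind == "max" and evaluation[name] > threshold:  # e.g. load too high
            value += penalty_weight * (evaluation[name] - threshold)
    return value

# Example: interference is minimized, coverage probability must stay above 0.95.
evaluation = {"mean_interference": 0.40, "coverage_probability": 0.93}
weights = {"mean_interference": 1.0, "coverage_probability": -0.5}
targets = {"coverage_probability": ("min", 0.95)}
print(cost(evaluation, weights, targets))
```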
As a search paradigm, either a descent method or a Tabu search can be applied. The Tabu list is maintained independently of the selected search paradigm. The search paradigm only influences the way in which the search is terminated. In the case of the descent method, the search process is terminated if no improvement can be found. For the Tabu search, nonimproving moves are accepted to escape from local minima, and the search is terminated once no improving move has been found for a certain number of search steps.
The performance of the local search method strongly depends on the applied performance evaluation. The choice between the methods is a trade-off between accuracy and running time. The basic static method is the fastest but, as was shown, has some weaknesses in terms of accuracy of results. The presented statistical methods significantly outperform the latter method in accuracy but, even if implemented efficiently, are of higher computational complexity, especially if the experimental analysis is applied for evaluating quantities per pixel. The local search optimization presented in this section can be extended to a hybrid method which makes use of two methods for performance evaluation. The exploration of the neighborhood is split into two parts. In a first step, the neighborhood is explored using a simple and fast basic performance evaluation method. As a result, a list of candidate solutions is generated and sorted with respect to cost values. The next solution is then selected from this list using a more accurate but also more time-consuming advanced performance evaluation. Either the first improving solution from the list or the best solution from a subset of most promising solutions (“short list”) is selected.
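The two-stage exploration could be sketched as follows; fast_eval and accurate_eval stand in for the basic and advanced performance evaluation methods and are assumed to map a candidate configuration to a cost value.

```python
def explore_hybrid(neighborhood, fast_eval, accurate_eval, current_cost,
                   short_list_size=None):
    """Hybrid neighborhood exploration: rank all neighbors with the fast basic
    evaluation, then pick the next solution using the accurate evaluation."""
    candidates = sorted(neighborhood, key=fast_eval)     # cheap ranking pass
    if short_list_size is None:
        # Variant 1: take the first candidate that improves on the current cost.
        for candidate in candidates:
            if accurate_eval(candidate) < current_cost:
                return candidate
        return None                                      # no improving move found
    # Variant 2: accurately evaluate only a short list of promising candidates.
    short_list = candidates[:short_list_size]
    best = min(short_list, key=accurate_eval)
    return best if accurate_eval(best) < current_cost else None
```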

Wednesday, December 21, 2011

Advanced Search Algorithm | AUTOMATIC CELL CONFIGURATION



The optimization method described in the previous section is well suited for an initial planning and in cases where the level of completeness or accuracy of input data is limited. For a detailed optimization that takes the full set of input data into account, a search approach is proposed. That is, the space of possible configurations, denoted as the search space, is explored to find the point in the search space that is optimal with respect to a certain criterion. This point is denoted as the global optimum. Exhaustive search traverses the complete search space in a systematic manner. As all points in the search space are visited, exhaustive search is guaranteed to find the global optimum. The search space is, however, very large for typical applications in network planning. For each site there can easily be several hundreds of possible configurations. Furthermore, configurations of different sites cannot be considered independently, so that the number of possible network configurations grows exponentially with the number of sites. Hence, for problems of this size, an exhaustive search is too time-consuming, and local search algorithms are commonly used for network optimization purposes.
Local search algorithms start at some point in the search space, denoted as the initial solution, and subsequently move from the present solution to neighboring solutions if these fulfill some criterion, i.e., appear to be better or more promising. Local search algorithms cannot guarantee to find the global optimum. The objective of the search algorithm—developed or tailored for a particular problem—is to find a solution that is at least close to the global optimum. Local search algorithms are thus often classified as heuristics.
The basic procedure of local search is independent of the actual search algorithm applied. Starting from the initial solution, in each search step first a search neighborhood is generated. The search neighborhood is a subset of points from the search space that are close to the current solution, i.e., that have some attributes in common with the current solution. The point that is most appropriate with respect to some criterion is selected from the search neighborhood and accepted as the new current solution for the next search step. If no appropriate new solution is found (or some other stop criterion is fulfilled), the search is terminated. The comparison of points from the search space is carried out by means of cost values associated with them. The cost values are generated from a cost function, in the literature often also referred to as an objective function. The objective function maps a given point in the search space to a cost value. The cost value can be a scalar but could also be represented by a vector. In the latter case, an appropriate function to compare cost values needs to be defined.
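In pseudocode terms, the generic procedure might look like the following sketch, where neighborhood() and cost() are assumed helpers supplied by the concrete application.

```python
def local_search(initial_solution, neighborhood, cost, max_steps=1000):
    """Generic local search skeleton: generate the neighborhood of the current
    solution, select the most appropriate point, and stop when no acceptable
    new solution is found (here: no neighbor with lower cost)."""
    current = initial_solution
    current_cost = cost(current)
    for _ in range(max_steps):
        neighbors = neighborhood(current)        # points close to the current solution
        if not neighbors:
            break
        best = min(neighbors, key=cost)          # selection criterion: lowest cost
        best_cost = cost(best)
        if best_cost >= current_cost:            # stop criterion fulfilled
            break
        current, current_cost = best, best_cost  # accept as new current solution
    return current
```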
Local search algorithms are very much application specific. However, several search paradigms have been developed in the last three decades. The simplest search paradigm is the descent method. This method always selects the solution from the neighborhood that has the lowest cost. If this value is lower than the lowest value in the last search step, the solution is accepted as the new solution; otherwise the algorithm is terminated. The algorithm hence explores the search space by always moving in the direction of the greatest improvement, so that it typically gets trapped in a local minimum. A local minimum is a solution that is optimal with respect to the search neighborhood but which generally is worse than the global optimum. Among several approaches to escape from local minima, one widely applied option is to carry out restarts, that is, the local search is restarted from a new solution that is selected from a different area of the search space. The new start solutions are often selected randomly. If restarts are applied, the algorithm strictly speaking is not a local search algorithm anymore.
Another option for escaping from local minima is to also accept cost-deteriorating neighbors under certain conditions. The most prominent local search paradigms that apply this strategy are simulated annealing and Tabu search.
Simulated annealing is based on an analogy with the physical annealing process. When exploring the search neighborhood, improving points are always selected, while nonimproving points are accepted as new solutions with a certain probability. The probability of acceptance is a function of the level of deterioration, but also gradually decreases during the algorithm execution. The reduction of the probability of acceptance is determined by the cooling scheme.
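A compact sketch of this acceptance rule with a simple geometric cooling scheme is shown below; the start temperature, cooling factor, and step count are arbitrary illustration values.

```python
import math
import random

def simulated_annealing(initial, random_neighbor, cost,
                        start_temp=1.0, cooling=0.95, steps=10_000):
    """Improving neighbors are always accepted; deteriorating ones are accepted
    with probability exp(-delta/T), where T shrinks according to the cooling scheme."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    temperature = start_temp
    for _ in range(steps):
        candidate = random_neighbor(current)
        delta = cost(candidate) - current_cost
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current, current_cost = candidate, current_cost + delta
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        temperature *= cooling                   # acceptance probability decreases
    return best
```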
In contrast to simulated annealing, which involves randomness, classical Tabu search is deterministic. The basic operation is equivalent to the descent method, with the difference that the best point from the neighborhood is also accepted if it is worse than the current solution. In this way the search is directed away from local minima. To avoid moving back to already visited solutions, a Tabu list is introduced. The Tabu list typically contains sets of attributes of solutions that have already been visited. If a point from the neighborhood exhibits one of the sets of attributes stored in the Tabu list, the point is only accepted as a new solution if its quality, i.e., cost, exceeds a certain aspiration level. The Tabu list is updated, keeping the individual entries only for a number of iterations. The size of the Tabu list is a very important design parameter of the Tabu search; in particular, it needs to be chosen large enough to prevent cycling, but a list that is too large might introduce too many restrictions. Several enhancements to the basic operation of the Tabu search have been introduced, most of which modify the handling of the Tabu list. These include intensification and diversification schemes.
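A minimal sketch of the classical scheme described above, assuming a helper attribute() that extracts the attribute of a solution to be kept in the Tabu list (e.g., the parameter that was changed):

```python
from collections import deque

def tabu_search(initial, neighborhood, cost, attribute,
                tabu_size=20, max_no_improve=50):
    """Always move to the best admissible neighbor, even if it is worse; recently
    visited attributes are Tabu unless the move beats the best cost found so far
    (aspiration level). Entries stay in the Tabu list for tabu_size iterations."""
    current = best = initial
    best_cost = cost(initial)
    tabu = deque(maxlen=tabu_size)
    steps_without_improvement = 0
    while steps_without_improvement < max_no_improve:
        admissible = [s for s in neighborhood(current)
                      if attribute(s) not in tabu or cost(s) < best_cost]
        if not admissible:
            break
        current = min(admissible, key=cost)
        tabu.append(attribute(current))
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
            steps_without_improvement = 0
        else:
            steps_without_improvement += 1
    return best
```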

Saturday, December 17, 2011

Optimization Parameters and Targets | AUTOMATIC CELL CONFIGURATION



The targets of the radio network optimization are mainly twofold. The first target is to minimize the interference caused by the individual cells while a sufficient coverage over the planning area is maintained. This is in general a trade-off and needs to be balanced; e.g., tilting down the antenna reduces coverage, but also lowers the interference in neighboring cells and thus potentially increases network capacity. The second target is to balance the traffic distribution between cells. It is desirable to maintain similar cell loading in neighboring cells to minimize blocking probabilities and to maximize spare capacity for traffic fluctuations and future traffic evolution.
The most effective parameter in network optimization is the antenna tilt. Antenna tilts need to be set such that the traffic within the “own” cell is served with maximum link gain, while at the same time the interference in neighboring cells is minimized. The possible tilt angles are typically restricted for technical and civil engineering reasons. Especially in the case of collocated sites with multiband antennas, there might be strong restrictions on the possible tilt angles to be taken into account during optimization.
The transmitted pilot channel power and the other common channel powers, which are typically coupled by a fixed offset, are also vital parameters of network optimization. It needs to be ensured that these channels are received with sufficient quality by all users in the serving cell. At the same time, a minimization of the common channel powers yields significant capacity gains: firstly, additional power becomes available for other (user traffic) channels, and secondly, the interference is reduced. The gains obtained from reducing the pilot power are often underestimated. It is important to note that in a capacity-limited WiMAX network (e.g., in urban areas) a reduction of the pilot power levels by a certain factor also reduces the total transmit power of the cells and, as a consequence, the cell loading by up to the same factor.
Optimization of the azimuth angles of sectored sites is of great importance, in particular in the case of antennas with a rather small horizontal beam-width (e.g., 65° vs. 90° in the case of three-sectored sites). In this case the difference between the antenna gains in the direction of the main lobe and at the half-angle between neighboring sectors is comparatively large, and cells of neighboring sites might need to be adjusted such that maximum coverage is achieved. It is observed that during optimization azimuth changes are in particular introduced to reduce coverage problems. For the possible azimuth angles, typically even stronger restrictions apply than for the tilt angles.
The antenna height is also often a degree of freedom for the optimization. Higher antennas can provide better coverage, but on the other hand also cause more interference in neighboring cells. Additional important parameters are the antenna type and the number of deployed sectors at a site. Both parameters are closely coupled, as a larger number of sectors also suggests the use of antenna patterns with a smaller horizontal beam-width. The choice of sectorization is typically a trade-off between increased network capacity and higher monetary cost.

Tuesday, December 13, 2011

SITE SELECTION | SITE DEPLOYMENT AND CONFIGURATION



The site selection targets the optimization of mobile radio networks. It provides flexible methods to modify the configuration of a network in such a way that key performance measures are optimized and pivotal performance targets are met.
The working engine behind the site selection is the fast construction and solution of downlink or uplink cell equations for a given network configuration. Based on an analysis of the “permitted configurations,” a preprocessing tree is constructed which contains all partial coefficients of the cell equations that might be relevant for the construction of the downlink/uplink/combined cell equations of an arbitrary permitted network configuration. This preprocessing tree then allows for an efficient accumulation of the coefficients of the equation system, which in turn yields a fast calculation of the cell transmit/interference powers/activities from which most performance measures can be deduced. One can observe how many configurations are evaluated by looking at the number of evaluations in the status messages (which are issued about once per second) while the algorithm is running after preprocessing.
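The following is a strongly simplified sketch of this idea, not the actual data structure of the tool: partial coupling coefficients are precomputed once per potential cell, coefficients that are not stored individually are folded into a remainder term, and evaluating a permitted configuration then only sums the stored coefficients of the active cells.

```python
def preprocess(potential_cells, coupling, max_stored):
    """Precompute, for every potential cell, the strongest partial coupling
    coefficients towards the other potential cells; the rest is accumulated in a
    remainder coefficient (rough stand-in for the preprocessing tree)."""
    table = {}
    for cell in potential_cells:
        coeffs = sorted(((other, coupling(cell, other))
                         for other in potential_cells if other != cell),
                        key=lambda item: item[1], reverse=True)
        stored = dict(coeffs[:max_stored])
        remainder = sum(value for _, value in coeffs[max_stored:])
        table[cell] = (stored, remainder)
    return table

def coupling_row(cell, active_cells, table):
    """Assemble one row of the cell equations for a given configuration: stored
    coefficients of active interferers plus the remainder, which is conservative
    because it may also contain contributions of cells that are switched off."""
    stored, remainder = table[cell]
    return sum(v for other, v in stored.items() if other in active_cells) + remainder
```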
Based on this fast evaluation, an optimization of the network is performed by evaluating all neighboring configurations of the current configuration and moving to the best of them if it improves the current network configuration, where a neighboring configuration deviates from the current configuration only by a short distance in the search space. As long as an improvement can be obtained, this optimization step is iterated. When no improvement can be found, the last configuration is stored as the end design. On top of this basic procedure (construct preprocessing tree—perform descent to local minimum), there is the possibility of a repetition (since when many sites are switched off, the coefficients in the preprocessing tree might become more and more irrelevant and hence yield too conservative cell equations) and of a division of large problems into smaller ones, where one part of the network is optimized after another in a circular way.
A key role in the simple local descent iteration is played by the comparison operator between configurations. Here the algorithm allows choosing the relevant performance measures/constraints by assigning priorities to them. For example, one often assigns the monetary cost the highest priority and the coverage constraint the second highest (nonzero) priority.
The site selection can be used for optimizing cell parameters and selecting sites/cells. It can start from given configurations or from empty/full networks. It can look through neighborhoods with selectable depths and iterate through these neighborhoods with a deterministic steepest or gradient descent, or a greedy random descent. However, this flexibility comes at the price of a more complex user interaction: through a large number of parameters the user has to tell the program what kind of optimization is intended. In addition, for large optimization problems, the preprocessing tree requires a huge amount of computer memory, and if a large number of partial interferer coefficients are stored, the evaluation can become slow. Partial coefficients which cannot be stored are accumulated in a so-called remainder coefficient. This remainder coefficient can accumulate a large number of small interferers and then makes sure that these interferers are not neglected in the cell equations. This saves computation time and computer memory, but might yield overconservative cell transmit powers/channel activities when the number of stored coefficients becomes too small or when the interferers which belong to the stored coefficients are not active (e.g., are switched off). In these cases one might have to recalculate the preprocessing tree or allow for a larger number of coefficients within the nodes of the tree. Alternatively, one could divide a large problem into smaller ones, which also reduces the size of the preprocessing tree.


Candidate Sites and Permitted Configurations

The site selection algorithm starts from a description of the permitted network configurations. Such descriptions consist of a set of potential cells (configured cells which might appear in a network design) and subsets of these cells which we call selection sets and choice sets.
The meaning of a selection set is that either all potential cells in this set appear together in a permitted network configuration or none of them does. In a site selection problem, one can think of a selection set as the set of cells which belong to a site: one has to select all of them or none of them. Since selection sets can consist of a single cell, one can also define “cell selection” problems, where individual cells are selected, by putting each potential cell into an individual selection set.
The meaning of a choice set is that exactly one of the potential cells in this set has to appear in a permitted network configuration. In a tilt choice problem such a set would consist of all potential cells which model the same cell but with different tilt settings. Defining such a choice set tells the optimization tool that it should choose exactly one of these cells for a solution design. Through a choice set which contains only one potential cell, one can tell the program that this cell must appear in a solution design.
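A small sketch of these two rules, with hypothetical cell names, might look as follows:

```python
def is_permitted(configuration, selection_sets, choice_sets):
    """A configuration is permitted if every selection set is contained completely
    or not at all, and every choice set contributes exactly one of its cells."""
    cells = set(configuration)
    for selection in map(set, selection_sets):      # all cells of a site or none
        if 0 < len(cells & selection) < len(selection):
            return False
    for choice in map(set, choice_sets):            # exactly one variant per choice set
        if len(cells & choice) != 1:
            return False
    return True

# Hypothetical example: site A has two sectors (selection set), cell B exists in
# two tilt variants (choice set) of which exactly one must be part of the design.
selection_sets = [["A1", "A2"]]
choice_sets = [["B_tilt2", "B_tilt6"]]
print(is_permitted(["A1", "A2", "B_tilt2"], selection_sets, choice_sets))  # True
print(is_permitted(["A1", "B_tilt6"], selection_sets, choice_sets))        # False
```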


Optimization Algorithm

Given this definition of permitted configurations and the results of the preprocessing, the algorithm can walk through the permitted configurations and try to improve the performance measures of interest during this walk. The permitted configurations are the possible search choices for the optimization algorithm. The user can choose between several different ways of walking towards an optimized configuration: “local steepest descent,” “local gradient descent,” “random greedy descent,” or “local immediate descent.”
The local steepest descent method evaluates all neighbor network configurations and then replaces the current one with the best one found, where the comparison checks the selected performance measures exactly according to their priority.
The local gradient descent method evaluates all neighbor network configurations and then replaces the current one with the best one, where the comparison is based on gradients instead of the absolute values of the performance measures.
The random greedy descent method chooses randomly one new configuration in the neighborhood of the current one and moves to it if it performs better than the old one.
The local immediate descent is an exotic procedure where improvements are immediately accepted and the search continues from the improved configuration. This can speed up the algorithm, but at the cost of perhaps not giving as good results as the above more thorough search methods.
In all cases the walk moves from the current configuration to a configuration within a certain neighborhood of the current one. The neighborhood is defined as all configurations which can be reached by a given number of permitted elementary steps. An elementary step is either changing a given permitted configuration by switching one selection set on or off (adding its cells to the configuration or removing them) or modifying one cell in a choice set (replacing it by another one of the same choice set).
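With the same set representation as in the sketch above, the depth-one neighborhood could be generated roughly like this:

```python
def elementary_steps(configuration, selection_sets, choice_sets):
    """All configurations reachable by one elementary step: toggling a selection
    set on or off, or replacing one member of a choice set by another."""
    cells = set(configuration)
    neighbors = []
    for selection in map(set, selection_sets):
        if selection <= cells:
            neighbors.append(cells - selection)              # switch the selection set off
        elif not (selection & cells):
            neighbors.append(cells | selection)              # switch the selection set on
    for choice in map(set, choice_sets):
        active = cells & choice
        for alternative in choice - active:
            neighbors.append((cells - active) | {alternative})  # change the variant
    return neighbors
```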
However, increasing the search depth to more than one usually dramatically increases the computation time, because a large number of configurations might have to be considered before accepting a move. This increase can be controlled through an algorithm parameter.


Performance Evaluation and Comparison

There are two groups of performance measures: ordinary performance measures and performance constraints; the latter additionally have a target value which should be reached. The ordinary performance measures are:
  • ActiveCells: The number of active cells in the evaluated network configuration. (Can be used instead of TotalFirstYearCost if detailed costs are irrelevant.) Small values are better than large ones.
  • ActiveSites: The number of active sites. (Can be used instead of TotalFirstYearCost if the detailed costs of the base stations are irrelevant.) Small values are better than large ones.
  • MaxOtherCoefficient: The maximum of the nondiagonal coefficients in the coupling matrix. Small values are better than large ones.
  • MaxOwnCoefficient: The maximum diagonal coefficient in the coupling matrix. Small values are better than large ones.
  • MaxSiteCostPerAccess: The maximum first year cost of one BS location (including hardware) divided by its sum of area and traffic access coverage in percent. Small values are better than large ones.
  • MaxUserLoadInPercent: The maximum user load of a cell in the network. Small values are better than large ones. If this measure has high priority, the optimization seeks a configuration in which the maximum user load of a cell is as small as possible.
  • OverloadedCells: The number of overloaded cells. Small values are better than large ones.
  • TotalFirstYearCost: The most important (nontechnical) performance measure in site selection, namely the total first year cost of a given network configuration. Small values are better than large ones.
The performance constraints are:
  • AreaCoverageInPercent: The percentage of the area which can be served (i.e., considered pixels which have a large enough receive signal strength from at least one of the active cells) by the network configuration. Its target value is the area coverage defined in the underlying optimization profile. Large values are better than small ones.
  • AccessCoverageInPercent: The number of users on the served area of a configuration as a percentage of the number of users in the total considered area. The target value is the access coverage from the underlying optimization profile. Large values are better than small ones.
  • MinCoverageGapInPercent: Here one looks at the difference between the area coverage in percent and its target value, and the difference between the traffic coverage in percent and its target value. The smaller of these two differences is the MinCoverageGapInPercent. Large values are better than small ones. The target value of this constraint is 0.
  • TrafficCoverageInPercent: After solving the cell equations one can assess the number of customers which can be served by the network. The target value is the traffic coverage from the underlying optimization profile. Large values are better than small ones.
Only performance measures for which the priority is strictly positive are calculated and displayed during the optimization. The measure with the largest positive value has the highest priority, the one with the lowest positive value the lowest priority. The positive priorities should be different.
When two evaluations of network configurations are compared, first the number of violations (i.e., the number of performance constraints for which the target is not met) is compared. The configuration with fewer violations is better than the other one.
Next, if both configurations have the same nonzero number of violations, the violation with the highest priority is considered. The configuration where the corresponding performance measure has a better value is considered as better than the other one. If both values agree, the performance measure with a violation and next highest priority is considered, and so on.
If there are no violations or the performance measures of all violated constraints agree, the performance measure with the highest priority is considered: The configuration which has the better value of this performance measure is considered to be better than the other one. If both values agree one looks at the performance measure with the second highest priority, and so on.
Since the steepest descent algorithm accepts only “better” permitted configurations, this evaluation first tries to reach the performance targets and then to optimize the high priority measures.
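Assuming, purely for illustration, that all values have been normalized so that smaller is better and that constraint targets are upper bounds, this comparison could be sketched as follows (constraints and measures are given in descending priority order):

```python
def is_better(eval_a, eval_b, constraints, measures):
    """Return True if configuration A evaluates better than B according to the
    rules above: fewer violations first, then violated constraints by priority,
    then ordinary performance measures by priority."""
    violated_a = {name for name, target in constraints if eval_a[name] > target}
    violated_b = {name for name, target in constraints if eval_b[name] > target}
    if len(violated_a) != len(violated_b):
        return len(violated_a) < len(violated_b)         # fewer violations wins
    for name, _ in constraints:                          # highest priority first
        if name in violated_a or name in violated_b:
            if eval_a[name] != eval_b[name]:
                return eval_a[name] < eval_b[name]
    for name in measures:                                # then ordinary measures
        if eval_a[name] != eval_b[name]:
            return eval_a[name] < eval_b[name]
    return False                                         # evaluations are equivalent
```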
If the optimization method is set to local gradient descent, the decision whether a neighbor of the current configuration is better than another neighbor of the current configuration is slightly more complex: instead of the absolute values one considers gradients with respect to the current configuration.

Wednesday, December 7, 2011

WIMAX NETWORK PLANNING PROCESS



WiMAX radio planning involves a number of steps ranging from tool setup to site survey. The process is similar to that of any wireless network. What differs between WiMAX and other technologies are the actual site configuration, the KPIs, and the propagation environment, as WiMAX may support mobile and fixed users, where the latter may employ directional/rooftop antennas.
The final radio plan defines the site locations and their respective configuration. The configuration involves BTS height, number of sectors, assigned frequencies or major channel groups, types of antennas, azimuth and downtilt, equipment type, and RF power. The final plan will be tested against various KPI requirements, mainly coverage criteria and capacity (or signal quality). Figure 1 can be used as a guide in developing a planning process. The planning process also largely depends on the planning tool used.

 
Figure 1: WiMAX radio planning process.
The planning process in Figure 1 includes measurements (i.e., drive tests and verifications) after the site survey. This procedure is not mandatory for all sites if the site count is too large. Usually, the site survey and the KPI analysis give an indication of which areas are expected to have poor RF quality and which sites are involved. Measurements are usually done when the candidate site(s) are not located in ideal locations or if the site survey finds some discrepancies with the candidate(s).
There are differences between WiMAX and 3G radio planning. WiMAX radio offers a modest processing gain in the form of repetition coding and subchannelization. These features are only exploited when the signal quality demands more processing. To support high data rates, the radio plan must offer very good SINRs (signal to interference and noise ratios) even with very limited spectrum. For example, in the absence of subchannelization and repetition coding, the required SINR for the lower MCS (modulation and coding scheme) is around 5 dB, and this needs to be achieved even with a very tight frequency reuse factor of 1/3 or 1/4 in the presence of shadowing, where the reuse factor is the reciprocal of the number of cells using different frequencies and the sum of those frequencies represents the whole spectrum resource allocated to the planned system. Another consideration in the case of WiMAX planning is the high SINR requirement to support high data rates. Although a site is expected to support high data rates for CPEs close to it, SINR values >30 dB are only possible in the absence of interference. This requires accurate modeling of the propagation and of the RF equipment. For example, in 3G, high data rates are possible even with a C/N of less than 10 dB, as the processing gain enables the receiver to tolerate some amount of interference. In WiMAX, this is not the case, as the processing gain is only provided through channel coding and limited coding repetition.
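To make these relations concrete, the snippet below checks the reuse factor and the highest MCS a pixel's SINR would allow; the ~5 dB threshold for the most robust MCS comes from the text, while the other thresholds and MCS labels are illustrative assumptions, not specification values.

```python
import math

# Illustrative SINR thresholds per MCS; only the ~5 dB entry is taken from the text.
MCS_SINR_DB = {"QPSK 1/2": 5.0, "16QAM 1/2": 11.0, "64QAM 3/4": 20.0}

def reuse_factor(groups: int) -> float:
    """Reuse factor = reciprocal of the number of cells using different frequencies."""
    return 1.0 / groups

def highest_supported_mcs(signal_mw, interference_mw, noise_mw):
    """Return the pixel's SINR in dB and the highest MCS whose threshold it meets."""
    sinr_db = 10.0 * math.log10(signal_mw / (interference_mw + noise_mw))
    supported = [mcs for mcs, threshold in MCS_SINR_DB.items() if sinr_db >= threshold]
    return sinr_db, (supported[-1] if supported else None)

print(reuse_factor(3))                             # 1/3 reuse
print(highest_supported_mcs(1.0, 0.05, 0.01))      # about 12.2 dB -> "16QAM 1/2"
```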

Saturday, December 3, 2011

IMPACT OF THE RATIO OF THE COST OF BS TO RS ON SOLUTION


It is also worth noticing how the ratio of the cost of a BS to the cost of an RS affects the site selection. Intuitively, as the ratio rises, the RS becomes relatively cheaper, so the planner should tend to select more RSs compared to BSs, and the number of connections from TPs to RSs should increase. Figure 1 shows the trend: it shows the number of connections between TPs and BSs and between TPs and RSs as the cost ratio varies from one to ten, i.e., from the cost of a BS being equal to the cost of an RS up to the cost of a BS being ten times the cost of an RS. Figure 2 shows the corresponding average path loss between each TP and its communicating node. It can be seen that the path loss decreases, which means that the quality of the received radio signal becomes higher as the cost of an RS becomes relatively lower.


Figure 1: Average path loss between each TP and its communicating node as the ratio of the cost of BS and RS is varied.

Figures 2 and 3 show two extreme cases. In both cases, the number of candidate BS sites is 50, the number of candidate RS sites is 150, and the number of TPs is 500. Figure 2 shows the plan for the case where the cost of a BS equals the cost of an RS. In this case, 38 BSs and 50 RSs are selected, with 177 connections between TPs and BSs and 320 connections between TPs and RSs. Figure 3 shows the plan for the case where the cost of a BS is ten times the cost of an RS. In this case, 28 BSs and 75 RSs are selected, with 133 connections between TPs and BSs and 367 connections between TPs and RSs.


Figure 2: An output of the planning tool when the ratio of the cost of BS and RS is 1.


Figure 3: An output of the planning tool when the ratio of the cost of BS and RS is 10.