WO2016026509A1 - Technique for handling rules for operating a self-organizing network - Google Patents

Technique for handling rules for operating a self-organizing network

Info

Publication number
WO2016026509A1
Authority
WO
WIPO (PCT)
Prior art keywords
kpi
rule
son
operating
measurement
Prior art date
Application number
PCT/EP2014/067564
Other languages
French (fr)
Inventor
Peter Vaderna
András RÁCZ
Norbert REIDER
Original Assignee
Telefonaktiebolaget L M Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget L M Ericsson (Publ) filed Critical Telefonaktiebolaget L M Ericsson (Publ)
Priority to PCT/EP2014/067564 priority Critical patent/WO2016026509A1/en
Publication of WO2016026509A1 publication Critical patent/WO2016026509A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/02 Arrangements for optimising operational condition
    • H04W 24/08 Testing, supervising or monitoring using real traffic
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • H04L 41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L 41/5003 Managing SLA; Interaction between SLA and QoS
    • H04L 41/5009 Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
    • H04L 41/5019 Ensuring fulfilment of SLA
    • H04L 41/5025 Ensuring fulfilment of SLA by proactively reacting to service quality change, e.g. by reconfiguration after service quality degradation or upgrade
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0813 Configuration setting characterised by the conditions triggering a change of settings
    • H04L 41/0816 Configuration setting characterised by the conditions triggering a change of settings, the condition being an adaptation, e.g. in response to network events
    • H04L 41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability

Definitions

  • the present disclosure generally relates to a technique for handling rules for operating a self-organizing network. More specifically, and without limitation, a method for deriving at least one operating rule, a method of verifying at least one operating rule and devices for implementing such methods are disclosed.
  • MTN mobile telecommunications network
  • 3GPP 3rd Generation Partnership Project
  • UMTS Universal Mobile Telecommunications System
  • LTE Long-Term Evolution
  • the costs of the MTN can be significantly reduced by automating as many processes as possible in network deployment and network operation.
  • the MTN includes a plurality of network elements (NEs) connected to each other with standard interfaces and communicating according to standard protocols.
  • the MTN is managed by a network management system (NMS) performed separately from the NEs.
  • the NMS is also referred to as an Operation, Administration and Maintenance (OAM) system or Operational Support System (OSS).
  • OAM Operation, Administration and Maintenance
  • OSS Operational Support System
  • the NMS provides functions for network configuration, referred to as configuration management (CM), and for operation supervision, referred to as performance management (PM) and fault management (FM).
  • CM configuration management
  • PM performance management
  • FM fault management
  • the CM, PM and FM functions include rules for automatic configuration, optimization and fault handling, which are also referred to as self-configuration function, self-optimization function and self-healing function.
  • the 3GPP standard e.g., in 3GPP standard documents TS 32.500, TS 32.511 and TS 36.902 (Release 8 or later releases), collectively refers to such functions as self-organizing network (SON) functionality.
  • the SON functionality can be centralized using a network manager (NM), distributed to a plurality of domain managers (DMs), or partially centralized and partially distributed in a hybrid implementation using NM and DMs.
  • the SON functionality executes the rules against collected measurement sets and determines which rule is triggered by the measurement sets.
  • the rules are typically set by experts at the development of the SON functionality and adjusted from time to time for a particular network deployment and in response to problems occurring in the network. Thereby, the rules are input to the SON functionality.
  • the conventional installation and maintenance of the rules for the rule-based SON functionality is a knowledge-intensive and cumbersome manual task.
  • correct rules have to be found such that each rule precisely captures one particular network phenomenon or network problem.
  • the rule may specify: IF signal_strength < thr1 AND throughput < X, THEN "Too weak transmit power" => "Increase power".
  • the thresholds applied in the rules often need to be adjusted to the particular network and environment. Adjusting rules is typically done by experts that manually adapt the rules.
  • the predefined rules become incorrect or obsolete after some time, e.g., due to a change of node software, a change in user traffic structure, an evolution of terminal types, etc. It is also possible that one rule is valid for one country or region but the same rule is invalid for another country or region, e.g., due to differences in network usage, traffic structure, subscription profiles, etc.
  • a method of deriving at least one rule for operating a self-organizing network comprises a step of measuring a plurality of measurement sets, each measurement set including measurement values specifying at least one first key performance indicator (KPI) of the SON and a second KPI of the SON that is different from the at least one first KPI; a step of determining a relation between the at least one first KPI and the second KPI based on the measurement sets; and a step of deriving the at least one rule for operating the SON based on the determined relation.
  • KPI key performance indicator
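  • As a minimal illustration of this method aspect, the following Python sketch chains the three steps (all names, values and helper functions are hypothetical, not part of the disclosure):

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class MeasurementSet:
        first_kpis: Dict[str, float]  # e.g., resource-level KPIs such as SINR, CQI
        second_kpi: float             # e.g., a service-level KPI such as session drop ratio

    def measure() -> List[MeasurementSet]:
        # Placeholder: in a real SON, the values come from PM/FM measurements.
        return [MeasurementSet({"sinr": 3.2, "cqi": 9.0}, second_kpi=0.01)]

    def determine_relation(sets: List[MeasurementSet]):
        ...  # learning step, e.g., the classification tree discussed below

    def derive_rule(relation):
        ...  # map the learned relation to a triggering condition

    rule = derive_rule(determine_relation(measure()))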
  • the SON may be a mobile telecommunications network, e.g., with SON functionality.
  • the KPIs of the SON may be indicative of network performance, e.g., specific aspects of the network performance.
  • the KPIs may be aggregations of performance metrics.
  • the performance metrics may be aggregated over a certain period of time and/or a plurality of spatial domains or cells.
  • the rules may include, or may be representable by, a condition and/or an action to be performed when the condition is fulfilled.
  • the rule may specify a decision logic of the SON functionality.
  • the action may also be referred to as an actuation.
  • the action may include modifying the SON functionality, e.g., responsive to the condition that is fulfilled.
  • the rule may specify the condition for modifying one or more operating parameters for operating the SON.
  • the rule may allow formulating conditional actions of modifying the one or more operating parameters.
  • At least some implementations of the method enhance the SON functionality to achieve a self-adjusting mechanism. Same or other implementations of the method may adjust an optimization rule to a particular network and/or environment, e.g., thus reducing or avoiding a task that is conventionally done manually by an expert.
  • the technique may be applicable to an existing SON functionality, e.g., any SON functionality including a decision logic based on rules.
  • the method may be performed in parallel to the SON functionality to which the method is applied.
  • the relation between the at least one first KPI and the second KPI may be determined automatically.
  • the determining step and/or the deriving step may include a machine learning process.
  • the machine learning process may be based on the plurality of measurement sets.
  • the machine learning process may be used for adapting the rules in the SON.
  • the condition may be defined in terms of the at least one first KPI.
  • the condition may be defined exclusively in terms of the at least one first KPI.
  • the condition may specify at least one threshold value for the at least one first KPI.
  • the step of determining may include determining the at least one threshold value used for defining the condition of the corresponding rule.
  • the method may be performed in parallel to adjusting the threshold value of the rule of the SON, e.g., responsive to the condition detected based on the measurement sets.
  • the step of deriving the at least one rule may encompass a change (e.g., an adaptation) of at least one existing rule.
  • the change may be based on the measurement sets and/or the machine learning process.
  • the method may identify an existing rule and/or create a not yet existing rule, e.g., in the deriving step.
  • the method may further include a step of ranking two or more rules of the SON. For example, the rules may be ranked in order of their influence on the second KPI and/or an importance of the second KPI of the corresponding rule.
  • the relation may be represented or representable by means of a decision tree or a binary tree.
  • the binary tree may be a result of the machine learning process.
  • the tree may represent a machine learning model.
  • the tree may be constructed by training the machine learning model with the measurement sets, e.g., wherein the at least one first KPI and the second KPI are known in each case.
  • Each internal node of the binary tree may assess, or may correspond to assessing, whether one of the at least one first KPI fulfils a corresponding one of the threshold values.
  • Each leaf of the binary tree may correspond to a quality requirement for the second KPI.
  • the condition for one rule may correspond to one or more branches of the binary tree.
  • each condition may correspond to one path from a root of the binary tree to a leaf of the binary tree.
  • each branch and/or condition may be defined by one of the leaves (e.g., since the binary tree is not meshed, i.e., loop-free).
  • the quality requirement may be an input parameter of the method.
  • the deriving of the at least one rule may include selecting the one or more branches according to the input parameter.
  • the method may further include a step of applying the rule.
  • the rule may be applied by modifying the one or more operating parameters for operating the SON.
  • the modification may be triggered by the condition.
  • At least the steps of determining and applying may be performed simultaneously in the SON.
  • the SON may include a cellular telecommunications network or a portion thereof.
  • the SON may include a Radio Access Network (RAN).
  • RAN Radio Access Network
  • the SON may include one or more picocells. The picocells may be self-adjusting under temporary deployment by means of the rules of the SON.
  • the at least one first KPI may relate to radio resources of the SON, e.g., radio resources of the RAN.
  • the second KPI may be indicative of a performance of a service provided by the SON.
  • the service may use the radio resources to which the at least one first KPI relates.
  • the method may further include a step of assessing an accuracy of at least one of the determined relation and the derived rule.
  • the assessing may include counting at least one of a number of false positive incidences and a number of false negative incidences for the rule based on measurement sets, e.g., the measurement sets of the measuring step.
  • a method of verifying at least one rule for operating a self-organizing network comprises a step of measuring a plurality of measurement sets, each measurement set including measurement values specifying at least one first key performance indicator (KPI) of the SON and a second KPI of the SON that is different from the at least one first KPI; a step of receiving a quality requirement in terms of the second KPI for at least one rule, each rule including a condition in terms of the at least one first KPI; and a step of assessing, based on the measurement sets, a correlation between the condition of the rule in terms of the at least one first KPI and a violation of the quality requirement in terms of the second KPI.
  • KPI key performance indicator
  • the verifying aspect of the technique may further include any feature or any step disclosed in the context of the deriving aspect.
  • the quality requirement may specify a quality threshold value for the second KPI. The quality requirement may be violated if the second KPI falls below the quality threshold value.
  • a computer program product comprises program code portions for performing the steps of any one of the method aspects disclosed herein. The steps may be performed when the computer program product is executed on one or more computing devices.
  • the computer program product may be provided by means of a computer-readable recording medium.
  • the computer-readable recording medium may include the program code portions.
  • the computer program product may be provided for download in a data network, e.g., the SON and/or the Internet.
  • the computer-readable recording medium may include an Internet address for downloading the computer program product.
  • a device for deriving at least one rule for operating a self-organizing network comprises a measuring unit adapted to measure a plurality of measurement sets, each measurement set including measurement values specifying at least one first key performance indicator, KPI, of the SON and a second KPI of the SON that is different from the at least one first KPI; a determining unit adapted to determine a relation between the at least one first KPI and the second KPI based on the measurement sets; and a deriving unit adapted to derive the at least one rule for operating the SON based on the determined relation.
  • KPI key performance indicator
  • a device for verifying at least one rule for operating a self-organizing network comprises a measuring unit adapted to measure a plurality of measurement sets, each
  • measurement set including measurement values specifying at least one first key performance indicator, KPI, of the SON and a second KPI of the SON that is different from the at least one first KPI; a receiving unit adapted to receive a quality requirement in terms of the second KPI for at least one rule, each rule including a condition in terms of the at least one first KPI; and an assessing unit adapted to assess, based on the measurement sets, a correlation between the condition of the rule in terms of the at least one first KPI and a violation of the quality requirement in terms of the second KPI.
  • a system for deriving and verifying at least one rule for operating a self-organizing network comprises a measuring unit adapted to measure a plurality of measurement sets, each measurement set including measurement values specifying at least one first key performance indicator, KPI, of the SON and a second KPI of the SON that is different from the at least one first KPI; a determining unit adapted to determine a relation between the at least one first KPI and the second KPI based on the measurement sets; a receiving unit adapted to receive a quality requirement in terms of the second KPI for at least one rule, each rule including a condition in terms of the at least one first KPI; a deriving unit adapted to derive the at least one rule for operating the SON based on the determined relation; and an assessing unit adapted to assess, based on the measurement sets, a correlation between the condition of the rule in terms of the at least one first KPI and a violation of the quality requirement in terms of the second KPI.
  • the hardware aspects may further include any feature disclosed in the context of the method aspects. Any one of the units, or a dedicated unit, may be adapted to perform any one of the steps disclosed in the context of the method aspects.
  • Fig. 1 schematically illustrates a mobile telecommunications network as an example of a self-organizing network;
  • Fig. 2 shows a schematic block diagram of a system including a device for deriving and a device for verifying at least one rule for operating the self-organizing network of Fig. 1;
  • Fig. 3 shows a flowchart for a method of deriving a rule for operating the self-organizing network of Fig. 1;
  • Fig. 4 shows a flowchart for a method of verifying a rule for operating the self-organizing network of Fig. 1;
  • Fig. 5 shows a flowchart of a combined method implementation for deriving and verifying a rule for operating the self-organizing network of Fig. 1;
  • Fig. 6 schematically illustrates implementation steps for determining a relation between at least one resource key performance indicator and at least one service key performance indicator according to the method of Fig. 3;
  • Fig. 7 schematically illustrates a representation of the relation usable in any one of the methods of Figs. 3 to 6 for classifying the service key performance indicator in terms of the resource key performance indicators.
  • Fig. 8 schematically illustrates a first example for the relation determined by the method of Fig. 3 or used by the method of Fig. 4;
  • Fig. 9 schematically illustrates a second example for the relation determined by the method of Fig. 3 or used by the method of Fig. 4;
  • Fig. 10 schematically illustrates a validation table for assessing, in the method of Fig. 4, the relation determined, or the rule derived, by the method of Fig. 3;
  • Fig. 11 schematically illustrates a result of the assessment of Fig. 10 as a function of a threshold value of a rule derived by the method of Fig. 3;
  • Fig. 12 shows a flowchart for applying, to the self-organizing network of Fig. 1, the rule derived by the method of Fig. 3 and/or verified by the method of Fig. 4.
  • embodiments are primarily described in the context of a mobile telecommunications network, the technique is also applicable to a data network providing landline access or including at least some stationary terminals.
  • the technique described herein may be implemented according to 3GPP standards (e.g., UMTS networks, LTE networks and LTE-Advanced networks) and non-3GPP standards (e.g., Wi-Fi networks according to an IEEE 802.11 standard) or combinations thereof.
  • 3GPP standards e.g., UMTS networks, LTE networks and LTE-Advanced networks
  • non-3GPP standards e.g., Wi-Fi networks according to an IEEE 802.11 standard
  • services, functions and steps disclosed herein may be implemented using software functioning in conjunction with a programmed microprocessor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP) or a general purpose computer.
  • ASIC Application Specific Integrated Circuit
  • FPGA Field Programmable Gate Array
  • DSP Digital Signal Processor
  • Fig. 1 schematically illustrates a self-organizing network (SON) 100.
  • the SON 100 includes network elements (NE) 102 for providing end-to-end data communication in the SON 100 via standardized interfaces.
  • the SON 100 further includes domain managers (DMs) 104 for managing domains of the SON 100. Each domain may be formed by a subset of the NEs 102.
  • Each of the DMs 104 includes functionality for setting-up and configuring the NEs 102, for receiving and handling fault indications and alarms, for resolving local network problems, for measuring local network performance and for improving the network performance, e.g., based on a measured usage behavior.
  • the SON 100 further includes a network manager (NM) 106.
  • the NM 106 provides a centralized end-to-end management of the SON 100.
  • the NM 106 receives and aggregates reports from the plurality of DMs 104 for the different domains.
  • the NM 106 indirectly configures the NEs 102 across different domains, e.g., for a globally consistent network configuration.
  • any aspect of the technique described herein may be implemented, in parts or completely, in any one of the components 102 to 106 of the SON 100.
  • the technique may be deployed in the NEs 102, the DMs 104 and/or the NM 106.
  • the technique may be distributed over different management layers of the SON 100.
  • the SON 100 may be implemented according to a 3GPP standard.
  • the NEs 102 include nodes of a 3GPP Radio Access Network (RAN), e.g., Radio Base Stations (RBSs), a Radio Network Controller (RNC) and/or Evolved Radio Base Stations (ERBS).
  • RAN 3GPP Radio Access Network
  • RBS Radio Base Stations
  • RNC Radio Network Controller
  • ERBS Evolved Radio Base Stations
  • the NEs 102 include nodes of a 3GPP Core Network (CN), e.g., a Gateway GPRS Support Node (GGSN) and/or a Serving GPRS Support Node (SGSN).
  • GGSN Gateway GPRS Support Node
  • SGSN Serving GPRS Support Node
  • the NEs 102 include nodes of any other backhaul domain, e.g., a transport network, a service network, etc.
  • the RAN and the CN are optionally managed by different DMs 104.
  • An interface between the NEs 102 and the corresponding DM 104 may be proprietary.
  • An interface between the DMs 104 and the NM 106 may be standardized by 3GPP.
  • Fig. 2 shows a schematic block diagram of a system 200 for deriving and verifying one or more rules for operating a self-organizing network, e.g., the SON 100.
  • the system 200 includes a device 210 for deriving at least one rule for operating the SON 100.
  • the system 200 further includes a device 220 for verifying at least one rule for operating the SON 100.
  • the device 210 includes a measuring unit 212, a determining unit 214 and a deriving unit 216.
  • the measuring unit 212 measures or receives a plurality of measurement sets.
  • Each measurement set includes measurement values specifying one or more first key performance indicators (KPIs) of the SON 100 and at least one second KPI of the SON 100.
  • KPIs first key performance indicators
  • Each measurement set represents a state of the SON 100 at a certain point in time.
  • the different measurement sets may relate to different points in time.
  • the measuring unit 212 provides the measurement sets to the determining unit 214.
  • the determining unit 214 determines a relation (also referred to as a dependence structure) between the first KPIs and the second KPI.
  • the second KPI is estimated as a function of the first KPIs.
  • the relation may be determined by any induction algorithm.
  • the function may be continuous and/or may be derived by means of regression. Alternatively or in addition, the function may classify the first KPIs and/or may be determined by machine learning (ML).
  • ML machine learning
  • the classification may include a binary tree or any other representation of multiple classes defined by the first KPIs.
  • the deriving unit 216 outputs a rule that changes certain operating parameters, if the first KPIs fulfil a condition derived from the relation.
  • the operating parameters may encompass any parameters for configuration of the SON 100.
  • the device 220 includes a measuring unit 212, a receiving unit 224 and an assessing unit 226.
  • the measuring unit 212 may be identical to the measuring unit of the device 210.
  • Each of the rules for operating the SON 100 includes a condition in terms of the one or more first KPIs. Furthermore, each rule may be associated with a quality requirement for the second KPI.
  • the receiving unit 224 receives the quality requirement.
  • the quality requirement is input by a network operator.
  • the first KPI may specify a status of the SON 100 at a first layer of a network protocol and the second KPI may specify a status of the SON 100 at a second layer of the network protocol higher than the first layer.
  • “higher” may specify a functional abstraction that is closer to an application layer and/or farther from a physical layer.
  • the assessing unit 226 assesses the one or more rules based on the measurement sets and, optionally, the quality requirement.
  • the assessment includes computing a correlation between the condition of the rule in terms of the one or more first KPIs and a violation of the quality requirement compared with the second KPI as measured.
  • Fig. 3 shows a flowchart of a method 300 for deriving at least one rule for operating a SON. Measurements are gathered in a step 302. Based on the measurements, one or more second KPIs are related to one or more first KPIs in a step 304. A rule for operating the SON is derived based on the relation in a step 306.
  • the method 300 may be performed by the device 210 and/or in the SON 100.
  • the steps 302, 304 and 306 may be performed by the units 212, 214 and 216, respectively.
  • Fig. 4 shows a flowchart of a method 400 for verifying at least one rule for operating a SON. Measurements are gathered in a step 402. A quality requirement in terms of one or more second KPIs is received in a step 404. A rule of the SON is assessed in a step 406 by assessing the ability and/or efficiency of the rule in detecting violations of the quality requirement based on one or more first KPIs, e.g., without using in the rule the one or more second KPIs.
  • the method 400 may be performed by the device 220.
  • the quality requirement may be stored in the device 220.
  • the steps 402, 404 and 406 of the method may be performed by the units 212, 224 and 226, respectively.
  • the rule verified by the method 400 may have been derived by the method 300.
  • the device 210 performing the method 300 and the device 220 performing the method 400 are implemented separately.
  • the device 210 includes an implementation of the measuring unit 212 performing the step 302 independently of the device 220 including a different implementation of the measuring unit 212 performing the step 402.
  • At least one of the first KPIs may relate to a Resource KPI (R-KPI) indicative of the performance of network resources.
  • At least one of the second KPIs may relate to a Service KPI (S-KPI) indicative of the performance of a service provided in or via the SON 100, e.g., the performance of an end-user service.
  • R-KPI Resource KPI
  • S-KPI Service KPI
  • the operation of the SON 100 is controlled by operating parameters.
  • functionality of the SON 100 includes automated changes to the operating parameters.
  • the rules of the SON 100 control the automated changes. A corresponding change is triggered when the condition of the rule is fulfilled.
  • the automated changes may include specifying and/or updating the operating parameters.
  • the functionality of the SON may be split up into the functionalities of the NEs 102, the DMs 104 and/or the NM 106, each of which is optionally controlled by one or more of the rules.
  • the technique disclosed herein may be applied to an existing SON functionality.
  • the existing SON functionality may include the following loop. Measurements for the R-KPIs are performed. If the measured R-KPIs trigger any one of the rules of the SON 100, the corresponding operating parameter is changed according to the triggered rule. Since the changed operating parameter influences the measured R-KPIs, the loop of the SON functionality continues by measuring the R-KPIs.
  • the rules are predefined and fixed.
  • the predefined and fixed rules for the SON functionality are set by experts, e.g., in a network roll-out phase.
  • the method 300 creates new rules and/or updates existing rules of the SON 100 (e.g., previously created rules or conventionally predefined and fixed rules).
  • the determining step 304 is based on a machine learning algorithm that learns the dependencies between the R-KPIs used in the SON functionality (i.e., input to the rules as the rules are applied) and/or measured in the step 302 or 402, on the one hand, and, on the other hand, a service performance measured by the S-KPI. Based on the determined relation and in accordance with the quality requirement for the S-KPI, threshold values for the R-KPIs are determined in the step 304, so that R-KPIs corresponding to the service performance fulfilling the quality requirement are distinguished from R-KPIs corresponding to the service performance not fulfilling the quality requirement. At least in some implementations, the determining step 304 includes automatically learning, based on the measuring step 302, which combination of R-KPIs is to be compared with respective threshold values and/or includes determining the respective threshold values.
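  • The determining step 304 can be pictured with a minimal sketch, assuming scikit-learn is available and using hypothetical KPI names and synthetic numbers. A shallow classification tree is fitted to separate measurement sets that fulfil the quality requirement from those that violate it, and the learned threshold values (cutpoints) are read back:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    r_kpi_names = ["sinr_pusch", "cqi"]          # hypothetical R-KPIs
    X = np.array([[0.5, 4.0], [1.0, 5.0],        # rows: measurement sets
                  [3.0, 9.0], [6.0, 11.0]])
    y = np.array([1, 1, 0, 0])                   # 1 = S-KPI violates the requirement

    clf = DecisionTreeClassifier(max_depth=2).fit(X, y)

    # Each internal node tests one R-KPI against a learned threshold.
    tree = clf.tree_
    for node in range(tree.node_count):
        if tree.children_left[node] != tree.children_right[node]:  # internal node
            print(r_kpi_names[tree.feature[node]], "<=", tree.threshold[node])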
  • Fig. 5 shows a flowchart of an exemplary implementation 500 that combines the methods 300 and 400.
  • a parameter set related to a specific SON functionality is initialized according to a step 502 in the roll-out phase of the SON 100 in order to select the actual measurement sets to be collected.
  • the collection of the selected measurement sets starts in the combined measuring steps 302 and 402.
  • the combined measuring steps 302 and 402 are also referred to as selection and collection of measurements.
  • the measurements relate to R-KPIs and S-KPIs.
  • Each R-KPI typically measures the performance of some lower-level resource, e.g., signal strength.
  • Each of the S-KPIs describes the end-to-end service performance, e.g., a performance observed, observable or relevant from an end-user perspective.
  • the step 304 relates the lower-level R-KPIs to the higher-level S-KPIs, e.g., by finding the at least one R-KPI responsible for an S-KPI value violating the quality requirement.
  • the rule is formulated in terms of the determined at least one R-KPI. The determined relation does not have to be fully encoded into the derived rule.
  • a machine learning algorithm is applied on the collected measurements in order to find and explore the most suitable set of conditions for triggering the rule, i.e., the relation (also referred to as "inter-dependence") between S-KPIs and R-KPIs is determined.
  • the relation also referred to as "inter-dependence" between S-KPIs and R-KPIs is determined.
  • new rules are generated and/or the existing rules within the SON functionality are evaluated and modified.
  • the newly identified rules are automatically validated by statistical analysis. If the rule is found to be correct in a decision step 504, then the rule is sent to the SON functionality in a step 506.
  • the SON functionality adds the validated rule or the validated rule replaces a corresponding existing rule. Otherwise, e.g., if the outcome of the machine learning steps 304 and 306 matches the existing rules or if the new rules are found to be not correct by the validation step 406, the existing rule remains unchanged.
  • the loop continues at the combined measuring steps 302 and 402, e.g., by performing more recent measurements for the same KPIs.
  • the timing of the loop is configurable. Since the inter-dependence between the KPIs can change slowly, the rules may be updated on a long time scale (e.g., on a daily or weekly basis), but certain use-cases might require executing the loop more frequently.
  • the measurements can be performed on a layer equal to or closer to the physical layer of the SON 100, e.g., including Performance Management (PM) events, PM counters and/or Fault Management (FM) events.
  • PM Performance Management
  • FM Fault Management
  • the R-KPIs, and optionally the S- KPIs, are derived from the measurements, e.g., as aggregations of certain performance metrics.
  • the SON functionality is designed to improve an end-user service performance metric referred to as S-KPI.
  • examples of the S-KPI include a session drop ratio, an attach success ratio, a Packet Data Convergence Protocol (PDCP) throughput, etc.
  • PDCP Packet Data Convergence Protocol
  • the rules of the SON functionality relate to the lower-level performance metrics referred to as R-KPIs.
  • R-KPIs include, e.g., a Signal-to-Noise and Interference Ratio (SINR), a Channel Quality Index (CQI), Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), etc.
  • SINR Signal-to-Noise and Interference Ratio
  • CQI Channel Quality Index
  • RSRP Reference Signal Received Power
  • RSRQ Reference Signal Received Quality
  • the SON functionality measures R-KPIs and changes the operating parameters according to the rules.
  • the impact of changes of the operating parameter on the S-KPIs is not directly encoded in the SON functionality.
  • the rules trigger the changes of the operating parameters.
  • threshold values included in the rules for the R-KPIs define the triggering condition.
  • the threshold values in the rules indicate a value separating "good” R-KPI values that do not require a change of operating parameters and "bad” R-KPI values that trigger the change. What is considered as “good” or “bad” R-KPI values is specified in terms of the quality requirement for one or more S-KPI values. Hence, what is considered as “good” or “bad” R-KPI values depends on the effect of the R-KPI on the S-KPIs.
  • the triggering condition and/or the operating parameters to be conditionally changed are conventionally fixed.
  • Fig. 6 schematically illustrates a dependence structure 600 used in an exemplary implementation of the method 300.
  • the determining step 304 further includes determining the dependence structure 600 between operating parameters 612 of the SON 100 and the R-KPIs 604 based on the measurement sets 602.
  • the relation of the operating parameters 612 and the R-KPIs 604 used in the rule 610 is schematically illustrated in Fig. 6 by the lower white arrow. Although the relation between operating parameters 612 and R-KPIs 604 is not known exactly, a subset of the operating parameters 612 can be mapped to certain R-KPIs 604.
  • the dependence is usually monotonic, so that a direction of the change of operating parameters 612 determines a direction of the change of the R-KPIs 604.
  • the mapping between operating parameters 612 and R-KPIs 604 and the direction of the dependence is known, e.g., by radio experts.
  • this dependence structure is pre-configured in the algorithm.
  • the operating parameter to be changed by the rule 610 is determined based on the mapping between operating parameters 612 and the R-KPIs 604.
  • a machine learning algorithm explores the dependence to derive the rule 610, e.g., encompassing both creating new rules and modifying existing rules.
  • a state of the art machine learning method can be used as a basis.
  • Fig. 7 schematically illustrates a classification tree used in an exemplary implementation for representing the relation 700.
  • the machine learning may be based on the classification tree.
  • the output of the machine learning is the classification tree.
  • the rule 610 is directly derived (which is also referred to as mapping of the rule).
  • the measurement sets 602 are input to the machine learning algorithm.
  • Each of the measurement sets 602 includes the value of the S-KPI and the measured values of R-KPIs.
  • each of the measurement sets 602 includes measurement samples in the format of a tuple of the measured R-KPI values 604 and the corresponding S-KPI value 606.
  • Fig. 7 schematically illustrates the relation 700 represented by means of a classification tree.
  • Each leaf 704 of the tree corresponds to an S-KPI classification, e.g., fulfilment or violation of the quality requirement.
  • An example of the S-KPI classification includes the leaves 704 "low throughput" and "high throughput”.
  • the relation 700 classifies the R-KPI values 604 not only for one value of the quality requirement.
  • the relation may classify the R-KPI values 604 for a set of quality requirements relating to the same S-KPI, e.g., for determining the relation 700 independent of a quality requirement, which may be input or changed by a network operator after the determination. Repetitions of the determining step 304 may thus be reduced or avoided.
  • the internal nodes 702 of the tree correspond to R-KPIs each of which is associated with a threshold value. Different combinations of the R-KPI values leading to the same S-KPI classification are representable by multiple occurrences of equal S-KPI classification leaves at the tree.
  • Each measurement set 602 (including the S-KPI 606 and corresponding R-KPIs 604) falls into one of the leaves, as the measurement set 602 is classified by going all the way from the root (i.e., the first threshold condition 702) of the tree to the classifying leaf 704 of the tree.
  • Each leaf uniquely specifies one way from the root to the corresponding leaf, which is also referred to as a branch.
  • the branch also referred to as route
  • the branch to the leaf 704 of the tree gives a set of conditions of R-KPI values specifying the condition of the rule 610.
  • Combining the conditions 702 along the branch leading to the leaf representing the violation of the quality requirement specifies the condition for the rule 610.
  • the individual conditions 702 along the branch are combined by a logical AND (i.e., a logical conjunction) in the rule 610.
  • the AND-combined conditions 702 of each of the branches leading to the leaf representing the violation of the quality requirement are further combined by a logical OR (i.e., a logical disjunction) in the rule 610.
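  • A sketch of this mapping from the tree to the rule condition (again for a scikit-learn tree as in the earlier sketch; all names hypothetical) walks every root-to-leaf branch, AND-combines the threshold conditions along the branch, and OR-combines the branches whose leaf represents a violation:

    def branches_to_rule(clf, feature_names, violation_class=1):
        """Extract the rule condition as OR-combined AND-branches."""
        tree = clf.tree_
        branches = []

        def walk(node, conds):
            if tree.children_left[node] == tree.children_right[node]:  # leaf 704
                if tree.value[node][0].argmax() == violation_class:
                    branches.append(" AND ".join(conds) or "TRUE")
                return
            name = feature_names[tree.feature[node]]
            thr = tree.threshold[node]
            walk(tree.children_left[node], conds + ["%s <= %.2f" % (name, thr)])
            walk(tree.children_right[node], conds + ["%s > %.2f" % (name, thr)])

        walk(0, [])
        return " OR ".join("(%s)" % b for b in branches)

    # With the classifier from the earlier sketch:
    # branches_to_rule(clf, r_kpi_names)  ->  '(sinr_pusch <= 2.00)'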
  • the quality requirement 608, defining which S-KPIs values are considered as "good” or "bad”, is an input to the technique.
  • the quality requirement 608 can be determined by the operator, e.g., by inputting the quality requirement 608 at an interface of the SON functionality. Alternatively or in addition, the quality requirement is automatically determined, for example, based on known reports of end-user surveys as to perceived service quality. Standardized quality models of services (e.g., audiovisual services) can be used for automatically determining the quality requirement 608 based on an opinion of users of the SON 100.
  • the quality requirement is also referred to as a target value for the S-KPI.
  • the determining unit 214 automatically learns the dependency structure of S-KPI on R-KPIs, which includes the condition (e.g., the threshold values) of the R-KPIs and the combinations of R-KPIs under which the S-KPI reaches the target value (e.g., set by the operator) or fails the target value, respectively.
  • the determining unit 214 includes a tree-construction machine-learning algorithm.
  • Given the input (e.g., from the operator) on the target S-KPI value, the device 210 automatically creates the conditions for the R-KPIs 604 of the rule 610, under which the target value 608 is reached or failed.
  • the condition of the rule 610 includes a logical expression in an IF- statement format.
  • the condition includes the relevant R-KPIs and the corresponding threshold values to be applied for the condition.
  • An exemplary condition includes, e.g., an expression of the form: IF (R-KPI_1 < r_thresh_1) AND (R-KPI_2 < r_thresh_2) AND (R-KPI_3 > r_thresh_3).
  • the method 300 finds both the logical expression (i.e., which R-KPI values are to be included in the condition) as well as the threshold values (i.e., r_thresh_1, r_thresh_2 and r_thresh_3 in the above example) for the R-KPIs included in the condition of the rule 610.
  • the threshold values i.e., r_thresh_1, r_thresh_2 and r_thresh_3 in the above example.
  • the logical expressions of the rule conditions are determined by an expert and given as input to the rule learning algorithm in the first embodiment. In this case the machine learning only adjusts the different R-KPI thresholds in the rule conditions.
  • the first embodiment allows for more operator control on the automatic rule derivation method 300, and may be preferred by operators wishing to influence the rule setting directly.
  • Fig. 8 schematically illustrates a first example for the relation 700.
  • the relation 700 is representable by a binary classification tree.
  • the relation 700 is representable by a one-level binary tree.
  • the left leaf 704 represents network states that are (e.g., statistically) associated with a session drop.
  • the right leaf 704 represents cases that (e.g., statistically) do not lead to a session drop.
  • the threshold value, r_thresh, of one R-KPI 604 separating good and bad S-KPI values 606 is learnt. If one or more rules exist in the SON functionality that include the particular R-KPI 604, then the threshold value included in the rule is revised in the step 306 using the automatically learnt r_thresh.
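  • The revision in the step 306 amounts to replacing the threshold of the matching rule by the learnt cutpoint; a few hypothetical Python lines (the rule representation is illustrative only, not part of the disclosure):

    # Existing rules of the SON functionality, keyed by the tested R-KPI.
    rules = {"sinr_pusch": {"op": "<", "threshold": 2.0, "action": "increase_power"}}

    learnt = {"kpi": "sinr_pusch", "r_thresh": 1.5}   # output of the learning step

    if learnt["kpi"] in rules:                        # a rule includes this R-KPI
        rules[learnt["kpi"]]["threshold"] = learnt["r_thresh"]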
  • the rules 610 are determined completely in the steps 304 and 306, e.g., by means of the machine-learning algorithm.
  • In this case, no rule 610 is input to the algorithm or specified in advance.
  • the machine learning steps 304 and 306 determine the rule 610, including the logical expression for the rule condition as well as the values of the thresholds in the expression.
  • the relation 700 schematically illustrated in Fig. 8 is an example for the outcome of the machine learning algorithm to classify session drops.
  • a Radio Access Bearer (RAB) is the user-plane channel established between an end-user and the SON 100. Communication is performed via the RABs.
  • a RAB session starts with RAB establishment and ends with RAB release.
  • the RAB provides communication between the two endpoints (e.g., two NEs 102) during the session.
  • An abnormal release of the RAB is called "session drop” or "RAB drop”.
  • the S-KPI 606 is the session drop ratio, that is, the number of abnormally released sessions divided by the number of sessions.
  • a set of measurements is collected in the steps 302 and 402. Each session corresponds to one measurement set 602.
  • for each session, a session record is generated containing a plurality of R-KPIs 604.
  • each measurement set 602 further includes a type of the end of session (e.g., normal or abnormal).
  • the type is indicative of whether the session is not dropped or dropped.
  • Fig. 9 schematically illustrates a second example for the relation 700.
  • the relation 700 is representable by a two-level classification tree of session drops, as trained by real network measurement sets 602.
  • a number of dropped sessions and a number of not dropped sessions are indicated in insets at each of the leaves 704 of the classification tree.
  • the leftmost and the rightmost leaves 704 are classified as representation of "session drop” events.
  • the middle leaf 704 is classified as a "no session drop" event.
  • the step 406 of assessing the rule 610 may be implemented by calculating a correlation 1000 between events that trigger the rule 610 and events that violate the quality requirement 608.
  • Fig. 10 schematically illustrates a validity matrix (also referred to as confusion matrix or validation table) as an example for the correlation 1000 for assessing the validity of the rule 610.
  • the rule 610 assessed by means of the 2-by-2 correlation matrix (i.e., 2 x 2-correlation) shown in Fig. 10 has to fulfil one S-KPI quality requirement 608, e.g., the "no session drop" requirement described with reference to the Figs. 8 and 9.
  • a higher-dimensional correlation may reveal the validity of a rule 610 that has to fulfil a quality requirement 608 for two or more S- KPIs.
  • a 2 x 2 x 2 x 2-correlation may be computed for a rule 610 that has to fulfil two S-KPIs.
  • the rule 610 is said to be valid, if the rule 610 correctly classifies the measurement set 602, based on the R-KPIs 604 of the measurement set 602, into the correct S-KPI class (which is also referred to as S-KPI group). E.g., the measured S-KPI 606 in the measurement set 602 indeed falls into the class, as given by the corresponding leaf 704 of the classification tree. In other words, the classification estimated by the relation 700 is consistent with the measured S-KPI 606.
  • the rules 610 are not 100% correct. Therefore, the method 400 provides for a mechanism that can evaluate the accuracy of each of the rules of the SON 100, e.g., by counting how often the rule 610 is correct. For this purpose, existing mechanisms can be adapted to the technique disclosed herein.
  • the validation table for a binary S-KPI classification is shown in Fig. 10.
  • the relation 700, or the condition derived from the relation 700, are also referred to as a model. Verifying, according to the method 400, the model determined by the machine learning steps 304 and 306 provides a correction mechanism to the method 300. After the relation 700 (e.g., the classification tree) is determined in the step 304, the method 400 ensures that the S-KPI 606 is correctly classified by a simple rule 610, e.g., including an expression (R-KPI < r_thresh) or an expression (R-KPI > r_thresh).
  • a simple rule 610 e.g., including an expression (R-KPI < r_thresh) or an expression (R-KPI > r_thresh).
  • the confusion matrix is a means for verifying the model and, thus, the rule 610.
  • the term "positive” means that the estimation of the model is consistent with the actually observed S-KPI classification.
  • the term "negative” means that the model incorrectly classifies the measurement set 602.
  • Violation and fulfilment of the quality requirement 608 is denoted as bad and good performance, respectively, in the context of the measurement 402.
  • Violation and fulfilment of the condition of the rule 610 is denoted as negative and positive trigger, respectively.
  • the following four cases can be distinguished:
  • If the measurement set 602 is bad and it is modelled as bad, then the measurement set 602 is true positive (TP); if the measurement set 602 is good and it is modelled as bad, then the measurement set 602 is false positive (FP); if the measurement set 602 is bad and it is modelled as good, then the measurement set 602 is false negative (FN); and if the measurement set 602 is good and it is modelled as good, then the measurement set 602 is true negative (TN).
  • TP true positive
  • FP false positive
  • FN false negative
  • TN true negative
  • In order to decide whether or not the model is valid, certain criteria on the derived metrics are defined. For example, the model can be considered as valid, if the total error rate is below a certain threshold (e.g., 20%).
  • a certain threshold e.g. 20%
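  • A minimal Python sketch of this validity criterion, assuming each measurement set is labelled both by the model and by the measured S-KPI (all names hypothetical):

    def validate(samples, max_error_rate=0.20):
        """samples: iterable of (modelled_bad, measured_bad) booleans."""
        tp = fp = fn = tn = 0
        for modelled_bad, measured_bad in samples:
            if modelled_bad and measured_bad:
                tp += 1        # true positive
            elif modelled_bad and not measured_bad:
                fp += 1        # false positive
            elif not modelled_bad and measured_bad:
                fn += 1        # false negative
            else:
                tn += 1        # true negative
        error_rate = (fp + fn) / max(tp + fp + fn + tn, 1)
        return error_rate <= max_error_rate, error_rate

    valid, err = validate([(True, True), (False, False), (True, False), (False, False)])
    # err = 0.25 here, so this model would be rejected at the 20% criterion.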
  • inspection of validity according to the method 400 is built into the method 300. If the model is not valid, then the rule 610 is not changed in the SON 100 according to the step 506.
  • the machine learning algorithm obtains in the step 304 the most relevant R-KPI 604 (e.g., the R-KPI 604 having most impact on the S-KPI 606). Additionally, a threshold value (which is also referred to as a cutpoint) of the R-KPI 604 is determined automatically.
  • the threshold 1102 separates the R-KPI 604 into two classes so that the dropped and the not dropped sessions are separated in the best possible way.
  • the most relevant parameter found by the machine learning algorithm in the step 304 is the SINR on the Physical Uplink Shared Channel (PUSCH) and the threshold is determined to be 1.5 dB in the step 304. This means that if the SINR is below 1.5 dB the sessions are modelled to be dropped, otherwise not dropped.
  • the model is valid (e.g., having a total error rate < 20%) and the SON functionality executed in the SON 100 includes the following rules for SINR_PUSCH: if avg(SINR_PUSCH) < 2.0, then increase power on PUSCH; and if avg(SINR_PUSCH) > 8.0, then decrease power on PUSCH.
  • the new rules for SINR_PUSCH as derived in the step 306 include: if avg(SINR_PUSCH) < 1.5, then trigger the action of increasing power on PUSCH; and if avg(SINR_PUSCH) > 8.0, then trigger the action of decreasing power on PUSCH.
  • the desired depth of the classification tree (i.e., the complexity of the rule 610) is, in one embodiment of the technique, an input to the algorithm. More specifically, the desired error rate as the quality requirement for S-KPI 606 is a direct input to the algorithm.
  • the method 300 determines, based on the quality requirement how finegrained the classification should be (i.e., the depth of the tree) to achieve the predefined error rate.
  • the depth of the classification tree is limited. In the example of Fig. 8, the depth is limited to one for the simplicity of the illustration.
  • the different levels also indicate the importance of the corresponding R-KPI 604. That is, the R-KPI 604 at the first level (i.e., the root of the classification tree) is the most important for achieving the given target S-KPI.
  • the operating parameter 612 corresponding to that R-KPI should be optimized first.
  • the mapping mechanism defines a priority among the operating parameters 612.
  • the machine learning implemented in the step 304 depends on the complexity of the rule 610 (e.g., the depth of the classification tree). Furthermore, the machine learning algorithm implemented in step 304 also depends on the target of the optimization. An extended implementation properly configures the machine learning algorithm by setting the target of the optimization, i.e., the quality requirement for the S-KPI 606.
  • Fig. 11 shows an exemplary chart for a tradeoff between the different validation cases (e.g., false positive and false negative rates) changing in opposite directions as the threshold value changes. If a network operator prefers minimizing the false positive rate, then the threshold value of the SINR_PUSCH is higher than the threshold 1102 which would correspond to equal rates.
  • Conversely, if the network operator prefers minimizing the false negative rate, then the threshold of the SINR_PUSCH is lower than the threshold 1102 which would correspond to equal rates. It is up to the use-case (e.g., the specific SON functionality) whether the target of the optimization is the false positive rate, the false negative rate or a combination thereof.
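  • The tradeoff can be reproduced with a short threshold sweep over synthetic numbers (a sketch only; real samples would come from the measuring step 402). As the threshold increases, the false negative count falls while the false positive count rises:

    # Synthetic (sinr, dropped) samples, hypothetical values.
    samples = [(0.5, True), (1.0, True), (1.8, False), (2.5, False),
               (3.0, True), (4.0, False), (6.0, False), (8.0, False)]

    for thresh in (1.0, 1.5, 2.0, 3.0, 4.0):
        # The rule models a session as dropped when sinr < thresh.
        fp = sum(1 for s, d in samples if s < thresh and not d)
        fn = sum(1 for s, d in samples if s >= thresh and d)
        print("thresh=%.1f  false_pos=%d  false_neg=%d" % (thresh, fp, fn))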
  • the technique disclosed herein is applicable for any SON functionality operating based on decision rules 610.
  • the technique can be executed in parallel to an existing SON functionality, i.e., parallel to the flowchart shown in Fig. 12 for a method 1200 of applying the rule 610.
  • a loop starts that includes performing the step 1204 of selecting and collecting measurements.
  • the step 1204 of selection and collection of measurements may be identical with one or both of the steps 302 and 402 of the methods 300 and 400.
  • Each of the measurement sets 602 may relate to a Performance Management (PM) event (e.g., signal strength reports, throughput reports per user per cell), a Fault Management (FM) event (e.g., reporting of the occurrence of a fault or alarm), or a Performance Management (PM) counter.
  • PM Performance Management
  • the measurements may be aggregated, e.g., throughput may be aggregated over the last 15 minutes.
  • the condition in the rule 610 is tested against the measurement sets 602 in a step 1206. If the rule 610 is found to be triggered in a step 1208, a step 1210 increments a statistics counter for the rule 610.
  • a step 1212 assesses the sufficiency of the triggering statistics.
  • the SON functionality performs the action corresponding to the triggered rule 610 in a step 1214.
  • each rule 610 in the SON 100 is associated with one or more operating parameters 612 in a step 1216, which are modified in a modifying step 1218, if the rule 610 is triggered.
  • when the rule 610 is triggered, it may imply an adjustment of the operating parameters that impact the R-KPIs observed by the rule 610.
  • An exemplary rule 610 is the following: if the measured SINR on the PUSCH falls below a lower threshold or exceeds an upper threshold, the power of the PUSCH is increased or decreased by a given step, respectively.
  • SINR Signal to Interference plus Noise Ratio
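  • A sketch of this exemplary rule inside one pass of the loop of the method 1200 (Python; the step size, the initial power and the measured values are hypothetical, and a real implementation would actuate the live network):

    LOWER_THRESH = 1.5   # dB, e.g., as derived in the step 306
    UPPER_THRESH = 8.0   # dB
    POWER_STEP = 0.5     # dB per actuation (hypothetical step size)

    def apply_rule(avg_sinr_pusch, pusch_power):
        """One pass of the steps 1206 to 1218 for the PUSCH power rule."""
        if avg_sinr_pusch < LOWER_THRESH:
            return pusch_power + POWER_STEP   # triggered: increase power
        if avg_sinr_pusch > UPPER_THRESH:
            return pusch_power - POWER_STEP   # triggered: decrease power
        return pusch_power                    # not triggered: unchanged

    power = 10.0
    for measured in (1.2, 1.4, 5.0, 9.1):     # e.g., one aggregate per loop pass
        power = apply_rule(measured, power)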
  • the operating parameters are not fixed but vary, e.g., based on the measured network performance, and thus adapt to varying conditions.
  • the loop starts over again at the measuring step 1204.
  • a time scale for repeating the loop of the method 1200 depends on the SON functionality.
  • the SON periodicity may depend on a typical rate of change for the conditions of the rules 610 of the SON 100.
  • the SON periodicity may range from seconds or minutes to days or weeks.
  • exemplary aspects controlled by the SON functionality include load balancing, coverage and interference optimization, mobility and robustness optimization, etc.
  • At least some embodiments of the technique reduce or avoid the need for manually tuning decision rules of a self-organizing network functionality. Same or other embodiments reduce or avoid an intervention by experts for adapting the rules to an individual network deployment or to individual cells.
  • the functionality of a self-organizing network can be improved by always applying the most appropriate rules to decide on network operating parameters, e.g., for network optimization.
  • the self-organizing network functionality can be deployed faster, e.g., by avoiding a cumbersome initial setting of rule parameters.

Abstract

A technique for handling at least one rule for operating a self-organizing network is provided. As to one method aspect of the technique, a plurality of measurement sets (602) are measured. Each measurement set includes measurement values specifying at least one first key performance indicator (604) of the self-organizing network and a second key performance indicator (606) of the self-organizing network. The second key performance indicator is different from the first key performance indicator. A relation (700) is determined between the at least one first key performance indicator and the second key performance indicator based on the measurement sets. The at least one rule (610) for operating the self-organizing network is derived based on the determined relation. The relation between the at least one first KPI and the second KPI may be determined automatically by means of a machine learning process.

Description

Technique for Handling Rules for Operating a Self-Organizing Network

Technical Field
The present disclosure generally relates to a technique for handling rules for operating a self-organizing network. More specifically, and without limitation, a method for deriving at least one operating rule, a method of verifying at least one operating rule and devices for implementing such methods are disclosed.
Background
The configuration of a mobile telecommunications network (MTN) is a complex task. Setting-up, operating and optimizing MTNs, such as those defined by the 3rd Generation Partnership Project (3GPP), require high-level expert knowledge. Examples of 3GPP networks include Universal Mobile Telecommunications System (UMTS) networks, Long-Term Evolution (LTE) networks and LTE-Advanced networks.
Therefore, the costs of the MTN can be significantly reduced by automating as many processes as possible in network deployment and network operation.
The MTN includes a plurality of network elements (NEs) connected to each other with standard interfaces and communicating according to standard protocols. The MTN is managed by a network management system (NMS) performed separately from the NEs. The NMS is also referred to as an Operation, Administration and Maintenance (OAM) system or Operational Support System (OSS). The NMS provides functions for network configuration, referred to as configuration management (CM), and for operation supervision, referred to as performance management (PM) and fault management (FM).
The CM, PM and FM functions include rules for automatic configuration, optimization and fault handling, which are also referred to as self-configuration function, self-optimization function and self-healing function. The 3GPP standard, e.g., in 3GPP standard documents TS 32.500, TS 32.511 and TS 36.902 (Release 8 or later releases), collectively refers to such functions as self-organizing network (SON) functionality. The SON functionality can be centralized using a network manager (NM), distributed to a plurality of domain managers (DMs), or partially centralized and partially distributed in a hybrid implementation using NM and DMs. The SON functionality executes the rules against collected measurement sets and determines which rule is triggered by the measurement sets. The rules are typically set by experts at the development of the SON functionality and adjusted from time to time for a particular network deployment and in response to problems occurring in the network. Thereby, the rules are input to the SON functionality.
However, the conventional installation and maintenance of the rules for the rule-based SON functionality is a knowledge-intensive and cumbersome manual task. First, correct rules have to be found such that each rule precisely captures one particular network phenomenon or network problem. For example, the rule may specify:
IF signal_strength < thr1 AND throughput < X
THEN "Too weak transmit power" => "Increase power".
Establishing such a rule requires expert and system knowledge.
Second, even after defining the rules, the thresholds applied in the rules often need to be adjusted to the particular network and environment. Adjusting rules is typically done by experts that manually adapt the rules.
Furthermore, the predefined rules become incorrect or obsolete after some time, e.g., due to a change of node software, a change in user traffic structure, an evolution of terminal types, etc. It is also possible that one rule is valid for one country or region but the same rule is invalid for another country or region, e.g., due to differences in network usage, traffic structure, subscription profiles, etc.
Summary
Accordingly, there is a need for a technique that reduces the expenditure of time and the level of knowledge required for operating a self-organizing network in at least some situations.
According to one aspect, a method of deriving at least one rule for operating a self-organizing network (SON) is provided. The method comprises a step of measuring a plurality of measurement sets, each measurement set including measurement values specifying at least one first key performance indicator (KPI) of the SON and a second KPI of the SON that is different from the at least one first KPI; a step of determining a relation between the at least one first KPI and the second KPI based on the measurement sets; and a step of deriving the at least one rule for operating the SON based on the determined relation.
The SON may be a mobile telecommunications network, e.g., with SON functionality.
The KPIs of the SON may be indicative of network performance, e.g., specific aspects of the network performance. The KPIs may be aggregations of performance metrics. The performance metrics may be aggregated over a certain period of time and/or a plurality of spatial domains or cells.
The rules may include, or may be representable by, a condition and/or an action to be performed when the condition is fulfilled. For example, the rule may specify a decision logic of the SON functionality. The action may also be referred to as an actuation. The action may include modifying the SON functionality, e.g., responsive to the condition that is fulfilled. The rule may specify the condition for modifying one or more operating parameters for operating the SON. The rule may allow formulating conditional actions of modifying the one or more operating parameters.
At least some implementations of the method enhance the SON functionality to achieve a self-adjusting mechanism. Same or other implementations of the method may adjust an optimization rule to a particular network and/or environment, e.g., thus reducing or avoiding a task that is conventionally done manually by an expert.
The technique may be applicable to an existing SON functionality, e.g., any SON functionality including a decision logic based on rules. The method may be performed in parallel to the SON functionality to which the method is applied.
The relation between the at least one first KPI and the second KPI may be determined automatically. The determining step and/or the deriving step may include a machine learning process. The machine learning process may be based on the plurality of measurement sets. The machine learning process may be used for adapting the rules in the SON.
The condition may be defined in terms of the at least one first KPI. E.g., the condition may be defined exclusively in terms of the at least one first KPI. The condition may specify at least one threshold value for the at least one first KPI. The step of determining may include determining the at least one threshold value used for defining the condition of the corresponding rule. By way of example, the method may be performed in parallel to adjusting the threshold value of the rule of the SON, e.g., responsive to the condition detected based on the measurement sets.
The step of deriving the at least one rule may encompass a change (e.g., an adaptation) of at least one existing rule. The change may be based on the measurement sets and/or the machine learning process. The method may identify an existing rule and/or create a not yet existing rule, e.g., in the deriving step. The method may further include a step of ranking two or more rules of the SON. For example, the rules may be ranked in order of their influence on the second KPI and/or an importance of the second KPI of the corresponding rule.
The relation may be represented or representable by means of a decision tree or a binary tree. The binary tree may be a result of the machine learning process. The tree may represent a machine learning model. The tree may be constructed by training the machine learning model with the measurement sets, e.g., wherein the at least one first KPI and the second KPI are known in each case. Each internal node of the binary tree may assess, or may correspond to assessing, whether one of the at least one first KPI fulfils a corresponding one of the threshold values.
Each leaf of the binary tree may correspond to a quality requirement for the second KPI. The condition for one rule may correspond to one or more branches of the binary tree. For example, each condition may correspond to one path from a root of the binary tree to a leaf of the binary tree. Alternatively or in addition, each branch and/or condition may be defined by one of the leaves (e.g., since the binary tree is not meshed, i.e., is loop-free).
The quality requirement may be an input parameter of the method. The deriving of the at least one rule may include selecting the one or more branches according to the input parameter.
The method may further include a step of applying the rule. The rule may be applied by modifying the one or more operating parameters for operating the SON. The modification may be triggered by the condition. At least the steps of determining and applying may be performed simultaneously in the SON. The SON may include a cellular telecommunications network or a portion thereof. The SON may include a Radio Access Network (RAN). For example, the SON may include one or more picocells. The picocells may be self-adjusting under temporary deployment by means of the rules of the SON.
The at least one first KPI may relate to radio resources of the SON, e.g., radio resources of the RAN. The second KPI may be indicative of a performance of a service provided by the SON. The service may use the radio resources to which the at least one first KPI relates.
The method may further include a step of assessing an accuracy of at least one of the determined relation and the derived rule. The assessing may include counting at least one of a number of false positive incidences and a number of false negative incidences for the rule based on measurement sets, e.g., the measurement sets of the measuring step.
According to a further aspect, a method of verifying at least one rule for operating a self-organizing network (SON) is provided. The method comprises a step of measuring a plurality of measurement sets, each measurement set including measurement values specifying at least one first key performance indicator (KPI) of the SON and a second KPI of the SON that is different from the at least one first KPI; a step of receiving a quality requirement in terms of the second KPI for at least one rule, each rule including a condition in terms of the at least one first KPI; and a step of assessing, based on the measurement sets, a correlation between the condition of the rule in terms of the at least one first KPI and a violation of the quality requirement in terms of the second KPI.
The verifying aspect of the technique may further include any feature or any step disclosed in the context of the deriving aspect. The quality requirement may specify a quality threshold value for the second KPI. The quality requirement may be violated if the second KPI falls below the quality threshold value.
According to a still further aspect, a computer program product is provided. The computer program product comprises program code portions for performing the steps of any one of the method aspects disclosed herein. The steps may be performed when the computer program product is executed on one or more computing devices. Furthermore, the computer program product may be provided by means of a computer-readable recording medium. The computer-readable recording medium may include the program code portions. Alternatively or in addition, the computer program product may be provided for download in a data network, e.g., the SON and/or the Internet. The computer-readable recording medium may include an Internet address for downloading the computer program product.
According to a hardware aspect, a device for deriving at least one rule for operating a self-organizing network (SON) is provided. The device comprises a measuring unit adapted to measure a plurality of measurement sets, each measurement set including measurement values specifying at least one first key performance indicator, KPI, of the SON and a second KPI of the SON that is different from the at least one first KPI; a determining unit adapted to determine a relation between the at least one first KPI and the second KPI based on the measurement sets; and a deriving unit adapted to derive the at least one rule for operating the SON based on the determined relation.
According to another hardware aspect, a device for verifying at least one rule for operating a self-organizing network (SON) is provided. The device comprises a measuring unit adapted to measure a plurality of measurement sets, each measurement set including measurement values specifying at least one first key performance indicator, KPI, of the SON and a second KPI of the SON that is different from the at least one first KPI; a receiving unit adapted to receive a quality requirement in terms of the second KPI for at least one rule, each rule including a condition in terms of the at least one first KPI; and an assessing unit adapted to assess, based on the measurement sets, a correlation between the condition of the rule in terms of the at least one first KPI and a violation of the quality requirement in terms of the second KPI.
According to a further hardware aspect, a system for deriving and verifying at least one rule for operating a self-organizing network (SON) is provided. The system comprises a measuring unit adapted to measure a plurality of measurement sets, each measurement set including measurement values specifying at least one first key performance indicator, KPI, of the SON and a second KPI of the SON that is different from the at least one first KPI; a determining unit adapted to determine a relation between the at least one first KPI and the second KPI based on the measurement sets; a receiving unit adapted to receive a quality requirement in terms of the second KPI for at least one rule, each rule including a condition in terms of the at least one first KPI; a deriving unit adapted to derive the at least one rule for operating the SON based on the determined relation; and an assessing unit adapted to assess, based on the measurement sets, a correlation between the condition of the rule in terms of the at least one first KPI and a violation of the quality requirement in terms of the second KPI.
The hardware aspects may further include any feature disclosed in the context of the method aspects. Any one of the units, or a dedicated unit, may be adapted to perform any one of the steps disclosed in the context of the method aspects.
Brief Description of the Drawings
Further aspects and advantages of the technique disclosed herein will become apparent from the following description of preferred embodiments with reference to the drawings, wherein
Fig. 1 schematically illustrates a mobile telecommunications network as an example for a self-organizing network;
Fig. 2 shows a schematic block diagram of a system including a device for deriving a rule for operating the self-organizing network of Fig. 1 and a device for verifying a rule for operating the self-organizing network of Fig. 1;
Fig. 3 shows a flowchart for a method of deriving a rule for operating the self-organizing network of Fig. 1;
Fig. 4 shows a flowchart for a method of verifying a rule for operating the self-organizing network of Fig. 1;
Fig. 5 shows a flowchart of a combined method implementation for deriving and verifying a rule for operating the self-organizing network of Fig. 1;
Fig. 6 schematically illustrates implementation steps for determining a relation between at least one resource key performance indicator and at least one service key performance indicator according to the method of Fig. 3;
Fig. 7 schematically illustrates a representation of the relation usable in any one of the methods of Figs. 3 to 6 for classifying the service key performance indicator in terms of the resource key performance indicators;
Fig. 8 schematically illustrates a first example for the relation determined by the method of Fig. 3 or used by the method of Fig. 4;
Fig. 9 schematically illustrates a second example for the relation determined by the method of Fig. 3 or used by the method of Fig. 4;
Fig. 10 schematically illustrates a validation table for assessing in the method of Fig. 4 the relation determined, or the rule derived, by the method of Fig. 3;
Fig. 11 schematically illustrates a result of the assessment of Fig. 10 as a function of a threshold value of a rule derived by the method of Fig. 3; and
Fig. 12 shows a flowchart for applying, to the self-organizing network of Fig. 1, the rule derived by the method of Fig. 3 and/or verified by the method of Fig. 4.
Detailed Description
In the following description of preferred embodiments, for purposes of explanation and not limitation, specific details are set forth, such as network environments and functional concepts, in order to provide a thorough understanding of the technique. It will be apparent to one skilled in the art that the technique described herein may be practiced in other network environments and using other functional concepts that depart from these specific details. For example, the technique may be implemented in a centralized manner or may be distributed among two or more nodes of the network. Furthermore, some implementations may share, partially or completely, functional units, e.g., between co-located devices, or a modular implementation of the devices may include dedicated units. Furthermore, while the following embodiments are primarily described in the context of a mobile telecommunications network, the technique is also applicable to a data network providing landline access or including at least some stationary terminals. Moreover, it will be readily apparent that the technique described herein may be implemented according to 3GPP standards (e.g., UMTS networks, LTE networks and LTE-Advanced networks) and non-3GPP standards (e.g., Wi-Fi networks according to an IEEE 802.11 standard) or combinations thereof. Moreover, those skilled in the art will appreciate that services, functions and steps disclosed herein may be implemented using software functioning in conjunction with a programmed microprocessor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP) or a general purpose computer. It will also be appreciated that, while the following embodiments are primarily described in the context of methods and devices, the technique may also be embodied in a computer program product as well as in a system comprising a computer processor and memory coupled to the processor, wherein the memory is encoded with one or more programs that may perform the services, functions and steps disclosed herein.
Fig. 1 schematically illustrates a self-organizing network (SON) 100. The SON 100 includes network elements (NE) 102 for providing end-to-end data communication in the SON 100 via standardized interfaces. The SON 100 further includes domain managers (DMs) 104 for managing domains of the SON 100. Each domain may be formed by a subset of the NEs 102. Each of the DMs 104 includes functionality for setting-up and configuring the NEs 102, for receiving and handling fault indications and alarms, for resolving local network problems, for measuring local network performance and for improving the network performance, e.g., based on a measured usage behavior.
The SON 100 further includes a network manager (NM) 106. The NM 106 provides a centralized end-to-end management of the SON 100. The NM 106 receives and aggregates reports from the plurality of DMs 104 for the different domains. Furthermore, the NM 106 indirectly configures the NEs 102 across different domains, e.g., for a globally consistent network configuration.
Any aspect of the technique described herein may be implemented, in parts or completely, in any one of the components 102 to 106 of the SON 100. By way of example, the technique may be deployed in the NEs 102, the DMs 104 and/or the NM 106. Furthermore, the technique may be distributed over different management layers of the SON 100.
The SON 100 may be implemented according to a 3GPP standard. For example, the NEs 102 include nodes of a 3GPP Radio Access Network (RAN), e.g., Radio Base Stations (RBSs), a Radio Network Controller (RNC) and/or Evolved Radio Base Stations (ERBS). Alternatively or in addition, the NEs 102 include nodes of a 3GPP Core Network (CN), e.g., a Gateway GPRS Support Node (GGSN) and/or a Serving GPRS Support Node (SGSN). As a further alternative or in addition, the NEs 102 include nodes of any other backhaul domain, e.g., a transport network, a service network, etc.
In an exemplary 3GPP implementation of the SON 100, the RAN and the CN are optionally managed by different DMs 104. An interface between the NEs 102 and the corresponding DM 104 may be proprietary. An interface between the DMs 104 and the NM 106 may be standardized by 3GPP.
Fig. 2 shows a schematic block diagram of a system 200 for deriving and verifying one or more rules for operating a self-organizing network, e.g., the SON 100. The system 200 includes a device 210 for deriving at least one rule for operating the SON 100. The system 200 further includes a device 220 for verifying at least one rule for operating the SON 100.
The device 210 includes a measuring unit 212, a determining unit 214 and a deriving unit 216. The measuring unit 212 measures or receives a plurality of measurement sets. Each measurement set includes measurement values specifying one or more first key performance indicators (KPIs) of the SON 100 and at least one second KPI of the SON 100. Each measurement set represents a state of the SON 100 at a certain point in time. The different measurement sets may relate to different points in time.
The measuring unit 212 provides the measurement sets to the determining unit 214. The determining unit 214 determines a relation (also referred to as a dependence structure) between the first KPIs and the second KPI. For example, the second KPI is estimated as a function of the first KPIs. The relation may be determined by any induction algorithm. The function may be continuous and/or may be derived by means of regression. Alternatively or in addition, the function may classify the first KPIs and/or may be determined by machine learning (ML). The classification may include a binary tree or any other representation of multiple classes defined by the first KPIs.
The deriving unit 216 outputs a rule that changes certain operating parameters, if the first KPIs fulfil a condition derived from the relation. The operating parameters may encompass any parameters for configuration of the SON 100.
The device 220 includes a measuring unit 212, a receiving unit 224 and an assessing unit 226. The measuring unit 212 may be identical to the measuring unit of the device 210.
Each of the rules for operating the SON 100 includes a condition in terms of the one or more first KPIs. Furthermore, each rule may be associated with a quality requirement for the second KPI. The receiving unit 224 receives the quality requirement. By way of example, the quality requirement is input by a network operator. The first KPI may specify a status of the SON 100 at a first layer of a network protocol and the second KPI may specify a status of the SON 100 at a second layer of the network protocol higher than the first layer. Herein, "higher" may specify a functional abstraction that is closer to an application layer and/or farther from a physical layer.
The assessing unit 226 assesses the one or more rules based on the measurement sets and, optionally, the quality requirement. The assessment includes computing a correlation between the condition of the rule in terms of the one or more first KPIs and a violation of the quality requirement by the second KPI as measured.
Fig. 3 shows a flowchart of a method 300 for deriving at least one rule for operating a SON. Measurements are gathered in a step 302. Based on the measurements, one or more second KPIs are related to one or more first KPIs in a step 304. A rule for operating the SON is derived based on the relation in a step 306. The method 300 may be performed by the device 210 and/or in the SON 100. For example, the steps 302, 304 and 306 may be performed by the units 212, 214 and 216, respectively.
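By way of a non-limiting illustration, the steps 302 to 306 could be sketched in Python as follows; the array layout and the use of scikit-learn's DecisionTreeClassifier as the tree-construction algorithm are assumptions made for the purpose of the example, not features mandated by the technique:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def derive_rule(r_kpis: np.ndarray, s_kpi: np.ndarray, target: float):
    """Sketch of the steps 302 to 306 of the method 300.

    r_kpis: one row per measurement set 602, one column per R-KPI 604.
    s_kpi:  the measured second KPI 606 for each measurement set.
    target: an assumed quality requirement 608 for the second KPI.
    """
    # Step 304: label each measurement set as fulfilling (True) or
    # violating (False) the quality requirement, then learn the relation 700.
    fulfilled = s_kpi >= target
    relation = DecisionTreeClassifier(max_depth=2).fit(r_kpis, fulfilled)
    # Step 306: the rule condition is read off the trained tree, i.e., the
    # threshold values at the internal nodes 702 and the classes at the
    # leaves 704.
    return relation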
Fig. 4 shows a flowchart of a method 400 for verifying at least one rule for operating a SON. Measurements are gathered in a step 402. A quality requirement in terms of one or more second KPIs is received in a step 404. A rule of the SON is assessed in a step 406 by assessing the ability and/or efficiency of the rule in detecting violations of the quality requirement based on one or more first KPIs, e.g., without using in the rule the one or more second KPIs. The method 400 may be performed by the device 220. The quality requirement may be stored in the device 220. For example, the steps 402, 404 and 406 of the method may be performed by the units 212, 224 and 226, respectively. The rule verified by the method 400 may have been derived by the method 300. Alternatively or in addition, the device 210 performing the method 300 and the device 220 performing the method 400 are implemented separately. For example, the device 210 includes an implementation of the measuring unit 212 performing the step 302 independently of the device 220 including a different implementation of the measuring unit 212 performing the step 402.
At least one of the first KPIs may relate to a Resource KPI (R-KPI) indicative of the performance of network resources. At least one of the second KPIs may relate to a Service KPI (S-KPI) indicative of the performance of a service provided in or via the SON 100, e.g., the performance of an end-user service.
The operation of the SON 100 is controlled by operating parameters. The functionality of the SON 100 includes automated changes to the operating parameters (also referred to as the SON functionality). The rules of the SON 100 control the automated changes. A corresponding change is triggered when the condition of the rule is fulfilled. The automated changes may include specifying and/or updating the operating parameters. The functionality of the SON may be split up into the functionalities of the NEs 102, the DMs 104 and/or the NM 106, each of which is optionally controlled by one or more of the rules.
The technique disclosed herein may be applied to an existing SON functionality. The existing SON functionality may include the following loop. Measurements for the R-KPIs are performed. If the measured R-KPIs trigger any one of the rules of the SON 100, the corresponding operating parameter is changed according to the triggered rule. Since the changed operating parameter influences the measured R-KPIs, the loop of the SON functionality continues by measuring the R-KPIs.
Conventionally, the rules are predefined and fixed. The predefined and fixed rules for the SON functionality are set by experts, e.g., in a network roll-out phase.
The method 300 creates new rules and/or updates existing rules of the SON 100 (e.g., previously created rules or conventionally predefined and fixed rules).
Consequently, at least some implementations of the method 300 and/or the device 210 achieve adaptive rules.
The determining step 304 is based on a machine learning algorithm that learns the dependencies between the R-KPIs used in the SON functionality (i.e., input to the rules as the rules are applied) and/or measured in the step 302 or 402, on the one hand, and, on the other hand, a service performance measured by the S-KPI. Based on the determined relation and in accordance with the quality requirement for the S-KPI, threshold values for the R-KPIs are determined in the step 304, so that R-KPIs corresponding to the service performance fulfilling the quality requirement are distinguished from R-KPIs corresponding to the service performance not fulfilling the quality requirement. At least in some implementations, the determining step 304 includes automatically learning, based on the measuring step 302, which combination of R-KPIs is to be compared with respective threshold values and/or includes determining the respective threshold values.
In a combined implementation of the methods 300 and 400, the steps 302 and 402 may be identical. Fig. 5 shows a flowchart of an exemplary implementation 500 that combines the methods 300 and 400.
A parameter set related to a specific SON functionality is initialized according to a step 502 in the roll-out phase of the SON 100 in order to select the actual measurement sets to be collected. The collection of the selected measurement sets starts in the combined measuring steps 302 and 402. The combined measuring steps 302 and 402 are also referred to as selection and collection of measurements.
The measurements relate to R-KPIs and S-KPIs. Each R-KPI typically measures the performance of some lower-level resource, e.g., signal strength. Each of the S-KPIs describes the end-to-end service performance, e.g., a performance observed, observable or relevant from an end-user perspective.
The step 304 relates the lower-level R-KPIs to the higher-level S-KPIs, e.g., by finding the at least one R-KPI responsible for an S-KPI value violating the quality requirement. In the step 306, the rule is formulated in terms of the determined at least one R-KPI. The determined relation does not have to be fully encoded into the derived rule.
When a sufficient amount of measurements is collected in the combined measuring steps 302 and 402, a machine learning algorithm is applied on the collected measurements in order to find and explore the most suitable set of conditions for triggering the rule, i.e., the relation (also referred to as "inter-dependence") between S-KPIs and R-KPIs is determined. In the step 306, new rules are generated and/or the existing rules within the SON functionality are evaluated and modified. In the step 406, the newly identified rules are automatically validated by statistical analysis. If the rule is found to be correct in a decision step 504, then the rule is sent to the SON functionality in a step 506. The SON functionality adds the validated rule or the validated rule replaces a corresponding existing rule. Otherwise, e.g., if the outcome of the machine learning steps 304 and 306 matches the existing rules or if the new rules are found to be not correct by the validation step 406, the existing rule remains unchanged.
The loop continues at the combined measuring steps 302 and 402, e.g., by performing more recent measurements for the same KPIs. The timing of the loop is configurable. Since the inter-dependence between the KPIs can change slowly, the rules may be updated on a long time scale (e.g., on a daily or weekly basis), but certain use-cases might require executing the loop more frequently.
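A minimal sketch of the loop of the combined implementation 500 is given below; the son object and the helper functions learn_relation, rule_from and is_valid are hypothetical placeholders introduced only for illustration:

import time

def combined_loop(son, quality_requirement, period_s=24 * 3600):
    # Steps 302 and 402: selection and collection of measurements.
    while True:
        sets = son.collect_measurement_sets()
        relation = learn_relation(sets)                       # step 304
        candidate = rule_from(relation, quality_requirement)  # step 306
        if is_valid(candidate, sets, quality_requirement):    # steps 406, 504
            son.install_rule(candidate)                       # step 506
        # Daily update by default; certain use-cases may need shorter periods.
        time.sleep(period_s)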
The measurements can be performed on a layer equal to or closer to the physical layer of the SON 100, e.g., including Performance Management (PM) events, PM counters and/or Fault Management (FM) events. The R-KPIs, and optionally the S-KPIs, are derived from the measurements, e.g., as aggregations of certain performance metrics.
In at least some embodiments of the SON 100, the SON functionality is designed to improve an end-user service performance metric referred to as S-KPI. Examples for the S-KPI include, e.g., a session drop ratio, an attach success ratio, a Packet Data Convergence Protocol (PDCP) throughput, etc. The configurable operating parameters, e.g., for a cell, are more related to the lower-level performance metrics referred to as R-KPIs. Examples for the R-KPIs include, e.g., a Signal-to-Noise and Interference Ratio (SINR), a Channel Quality Index (CQI), Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), etc.
Changing the operating parameters directly impacts some of the R-KPIs and indirectly impacts some of the S-KPIs. Therefore, the SON functionality measures R-KPIs and changes the operating parameters according to the rules. The impact of changes of the operating parameter on the S-KPIs is not directly encoded in the SON functionality.
The rules trigger the changes of the operating parameters. E.g., threshold values included in the rules for the R-KPIs define the triggering condition. The threshold values in the rules indicate a value separating "good" R-KPI values that do not require a change of operating parameters and "bad" R-KPI values that trigger the change. What is considered as "good" or "bad" R-KPI values is specified in terms of the quality requirement for one or more S-KPI values. Hence, what is considered as "good" or "bad" R-KPI values depends on the effect of the R-KPI on the S-KPIs. For example, a threshold value CQI=5 means that for the R-KPI value CQI being less than 5, the PDCP throughput is usually low (e.g., below the quality requirement) and/or a session drop rate is high (e.g., violating the quality requirement), etc.
In contrast, the triggering condition and/or the operating parameters to be conditionally changed are conventionally fixed.
Fig. 6 schematically illustrates a dependence structure 600 used in an exemplary implementation of the method 300. Optionally, the determining step 304 further includes determining the dependence structure 600 between operating parameters 612 of the SON 100 and the R-KPIs 604 based on the measurement sets 602.
The relation of the operating parameters 612 and the R-KPIs 604 used in the rule 610 is schematically illustrated in Fig. 6 by the lower white arrow. Although the relation between operating parameters 612 and R-KPIs 604 is not known exactly, a subset of the operating parameters 612 can be mapped to certain R-KPIs 604.
Moreover, the dependence is usually monotonic, so that a direction of the change of operating parameters 612 determines a direction of the change of the R-KPIs 604. The mapping between operating parameters 612 and R-KPIs 604 and the direction of the dependence is known, e.g., by radio experts. Optionally, this dependence structure is pre-configured in the algorithm.
The operating parameter to be changed by the rule 610, and optionally the direction of the parameter change, is determined based on the mapping between operating parameters 612 and the R-KPIs 604.
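Such a pre-configured dependence structure could be represented, for instance, as a simple mapping; the entries below are illustrative assumptions rather than values taken from the disclosure:

# Hypothetical mapping of R-KPIs 604 to the operating parameters 612 that
# influence them, together with the sign of the (monotonic) dependence.
PARAMETER_MAP = {
    "SINR_PUSCH": [("pusch_power", +1)],  # more power -> higher SINR (assumed)
    "RSRP":       [("tx_power", +1)],     # more transmit power -> higher RSRP
}

def adjustments_for(r_kpi: str, direction: int):
    """Return (parameter, step direction) pairs that move r_kpi up (+1)
    or down (-1), exploiting the monotonic dependence."""
    return [(param, sign * direction) for param, sign in PARAMETER_MAP[r_kpi]]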
Regarding the relation between R-KPIs 604 and S-KPIs 608, a machine learning algorithm explores the dependence to derive the rule 610, e.g., encompassing both creating new rules and modifying existing rules.
For the execution of the machine learning to identify the rules 610, a state of the art machine learning method can be used as a basis. Fig. 7 schematically illustrates a classification tree used in an exemplary implementation for representing the relation 700. The machine learning may be tree-based; in that case, the output of the machine learning is the classification tree.
Based on the relation 700, the rule 610 is directly derived (which is also referred to as mapping of the rule). The measurement sets 602 are input to the machine learning algorithm. Each of the measurement sets 602 includes the value of the S-KPI and the measured values of R-KPIs. E.g., each of the measurement sets 602 includes measurement samples in the format of
(S-KPI, R-KPI#1, R-KPI#2, ... R-KPI#N).
Fig. 7 schematically illustrates the relation 700 represented by means of a classification tree, e.g., resulting from a tree-based machine learning algorithm in the step 304. Each leaf 704 of the tree corresponds to an S-KPI classification, e.g., fulfilment or violation of the quality requirement. An example of the S-KPI classification includes the leaves 704 "low throughput" and "high throughput".
In an extended implementation, the relation 700 classifies the R-KPI values 604 not only for one value of the quality requirement. The relation may classify the R-KPI values 604 for a set of quality requirements relating to the same S-KPI, e.g., for determining the relation 700 independent of a quality requirement, which may be input or changed by a network operator after the determination. Repetitions of the determining step 304 may thus be reduced or avoided.
The internal nodes 702 of the tree correspond to R-KPIs, each of which is associated with a threshold value. Different combinations of the R-KPI values leading to the same S-KPI classification are representable by multiple occurrences of equal S-KPI classification leaves in the tree. Each measurement set 602 (including the S-KPI 606 and corresponding R-KPIs 604) falls into one of the leaves, as the measurement set 602 is classified by going all the way from the root (i.e., the first threshold condition 702) of the tree to the classifying leaf 704 of the tree. Each leaf uniquely specifies one path from the root to the corresponding leaf, which is also referred to as a branch. For example, in an internal node 702, if the measured R-KPI 604 is below the threshold value, then the branch continues to the left. Otherwise, if the R-KPI 604 is above the threshold value, then the branch continues to the right of the node 702. Thereby, the branch (also referred to as route) to the leaf 704 of the tree gives a set of conditions on R-KPI values specifying the condition of the rule 610. Combining the conditions 702 along the branch leading to the leaf representing the violation of the quality requirement specifies the condition for the rule 610. The individual conditions 702 along the branch are combined by a logical AND (i.e., a logical conjunction) in the rule 610. If multiple leaves represent the violation of the quality requirement, the AND-combined conditions 702 of each of the branches leading to a leaf representing the violation of the quality requirement are further combined by a logical OR (i.e., a logical disjunction) in the rule 610.
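The described branch-to-rule mapping could be sketched as follows for a tree trained with scikit-learn (continuing the assumptions of the earlier sketch); the parameter violation_class identifies the leaf class representing the violation of the quality requirement:

from sklearn.tree import DecisionTreeClassifier

def rule_condition(tree: DecisionTreeClassifier, feature_names,
                   violation_class=False):
    """AND-combine the conditions 702 along each branch ending in a leaf 704
    classified as a violation, then OR-combine the branches (step 306)."""
    t = tree.tree_
    branches = []

    def walk(node, conds):
        if t.children_left[node] == -1:  # a leaf 704 is reached
            if tree.classes_[t.value[node][0].argmax()] == violation_class:
                branches.append("(" + " AND ".join(conds) + ")")
            return
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        walk(t.children_left[node], conds + [f"{name} <= {thr:.2f}"])
        walk(t.children_right[node], conds + [f"{name} > {thr:.2f}"])

    walk(0, [])
    return " OR ".join(branches)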
In one embodiment, the quality requirement 608, defining which S-KPIs values are considered as "good" or "bad", is an input to the technique. The quality requirement 608 can be determined by the operator, e.g., by inputting the quality requirement 608 at an interface of the SON functionality. Alternatively or in addition, the quality requirement is automatically determined, for example, based on known reports of end-user surveys as to perceived service quality. Standardized quality models of services (e.g., audiovisual services) can be used for automatically determining the quality requirement 608 based on an opinion of users of the SON 100. The quality requirement is also referred to as a target value for the S-KPI.
Given a set of R-KPIs and a selected S-KPI, the determining unit 214 automatically learns the dependency structure of S-KPI on R-KPIs, which includes the condition (e.g., the threshold values) of the R-KPIs and the combinations of R-KPIs under which the S-KPI reaches the target value (e.g., set by the operator) or fails the target value, respectively. In the case of the tree classification, the determining unit 214 includes a tree-construction machine-learning algorithm. Given the input (e.g., from the operator) on the target S-KPI value, the device 210 automatically creates the conditions for the R-KPIs 604 of the rule 610, under which the target value 608 is reached or failed. The condition of the rule 610 includes a logical expression in an IF-statement format. The condition includes the relevant R-KPIs and the corresponding threshold values to be applied for the condition. An exemplary condition includes:
IF (R-KPI-1 < r_thresh_1 AND R-KPI-2 < r_thresh_2) OR
(R-KPI-1 > r_thresh_1 AND R-KPI-3 < r_thresh_3)
THEN ...
In an extended implementation, the method 300 finds both the logical expression (i.e., which R-KPI values are to be included in the condition) as well as the threshold values (i.e., r_thresh_1, r_thresh_2 and r_thresh_3 in the above example) for the R-KPIs included in the condition of the rule 610. Two embodiments of the technique are described. In a first embodiment, only the threshold values of the rules 610 are derived in the step 306. The tree structure is an input to the method 300. The determining step 304 only determines the r_thresh settings for each of the leaves 704. For example, a generic machine learning scheme is adapted for the steps 304 and 306.
The logical expressions of the rule conditions are determined by an expert and given as input to the rule learning algorithm in the first embodiment. In this case the machine learning only adjusts the different R-KPI thresholds in the rule conditions. The first embodiment allows for more operator control on the automatic rule derivation method 300, and may be preferred by operators wishing to influence the rule setting directly.
Fig. 8 schematically illustrates a first example for the relation 700. The relation 700 is representable by a binary classification tree, in this example a one-level binary tree. The left leaf 704 represents network states that are (e.g., statistically) associated with a session drop. The right leaf 704 represents cases that (e.g., statistically) do not lead to a session drop.
In the case of the first embodiment, the threshold value, r_thresh, of one R-KPI 604 separating good and bad S-KPI values 606 is learnt. If one or more rules exist in the SON functionality that include the particular R-KPI 604, then the threshold value included in the rule is revised in the step 306 using the automatically learnt r_thresh.
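For the first embodiment, the learning of a single cutpoint could be sketched as follows; the brute-force search over observed values is one simple possibility among many:

import numpy as np

def learn_threshold(r_kpi_values: np.ndarray, is_bad: np.ndarray) -> float:
    """Learn r_thresh for the fixed rule structure "bad IF r_kpi < r_thresh"
    as the cutpoint that best separates good and bad S-KPI values 606."""
    candidates = np.unique(r_kpi_values)
    errors = [np.count_nonzero((r_kpi_values < thr) != is_bad)
              for thr in candidates]
    return float(candidates[int(np.argmin(errors))])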
In a second embodiment, the rules 610 are determined completely in the steps 304 and 306, e.g., by means of the machine-learning algorithm. The algorithm determines the relation 700 by completely determining the classification tree including both the structure of the tree and the threshold values at the nodes 702.
For example, no rule 610 is input or specified in advance. The machine learning steps 304 and 306 determine the rule 610, including the logical expression for the rule condition as well as the values of the thresholds in the expression.
The relation 700 schematically illustrated in Fig. 8 is an example for the outcome of the machine learning algorithm to classify session drops. In the mobile telecommunications implementation, a Radio Access Bearer (RAB) is the user-plane channel established between an end-user and the SON 100. Communication is performed via the RABs. A RAB session starts with RAB establishment and ends with RAB release. The RAB provides communication between the two endpoints (e.g., two NEs 102) during the session. An abnormal release of the RAB is called "session drop" or "RAB drop". In this example, the S-KPI 606 is the session drop ratio, that is, the number of abnormally released sessions divided by the number of sessions.
A set of measurements is collected in the steps 302 and 402. Each session corresponds to one measurement set 602. For each measurement set 602, a session record is generated containing a plurality of R-KPIs 604. Optionally, each measurement set 602 further includes a type of the end of session (e.g., normal or abnormal). The type is indicative of whether the session is not dropped or dropped.
Fig. 9 schematically illustrates a second example for the relation 700. The relation 700 is representable by a two-level classification tree of session drops, as trained by real network measurement sets 602. A number of dropped sessions and a number of not dropped sessions are indicated in insets at each of the leaves 704 of the classification tree. The leftmost and the rightmost leaves 704 are classified as representation of "session drop" events. The middle leaf 704 is classified as a "no session drop" event.
The step 406 of assessing the rule 610 may be implemented by calculating a correlation 1000 between events that trigger the rule 610 and events that violate the quality requirement 608. Fig. 10 schematically illustrates a validity matrix (also referred to as confusion matrix or validation table) as an example for the correlation 1000 for assessing the validity of the rule 610. The rule 610 assessed by means of the 2-by-2 correlation matrix (i.e., 2x2-correlation) shown in Fig. 10 has to fulfil one S-KPI quality requirement 608, e.g., the "no session drop" requirement described with reference to the Figs. 8 and 9. A higher-dimensional correlation may reveal the validity of a rule 610 that has to fulfil a quality requirement 608 for two or more S-KPIs. E.g., a 2x2x2x2-correlation may be computed for a rule 610 that has to fulfil two S-KPIs.
The rule 610 is said to be valid, if the rule 610 correctly classifies the measurement set 602, based on the R-KPIs 604 of the measurement set 602, into the correct S-KPI class (which is also referred to as S-KPI group). E.g., the measured S-KPI 606 in the measurement set 602 indeed falls into the class, as given by the corresponding leaf 704 of the classification tree. In other words, the classification estimated by the relation 700 is consistent with the measured S-KPI 606. In a realistic and/or efficient implementation, the rules 610 are not 100% correct. Therefore, the method 400 provides for a mechanism that can evaluate the accuracy of each of the rules of the SON 100, e.g., by counting how often the rule 610 is correct. For this purpose, existing mechanisms can be adapted to the technique disclosed herein.
The validation table for a binary S-KPI classification is shown in Fig. 10. The relation 700, or the condition derived from the relation 700, are also referred to as a model. Verifying, according to the method 400, the model determined by the machine learning steps 304 and 306 provides a correction mechanism to the method 300. After the relation 700 (e.g., the classification tree) is determined in the step 304, the method 400 ensures that the S-KPI 606 is correctly classified by a simple rule 610, e.g., including an expression (R-KPI < r_thresh) or an expression (R-KPI > r_thresh).
Since the model is simplified, there can be correctly classified measurement sets 602 and incorrectly classified measurement sets 602 among the measurement sets 602 resulting from the step 402. Incorrectly classified measurement sets 602 are indicated at reference signs 1002 and 1004 in Fig. 10. The confusion matrix is a means for verifying the model and, thus, the rule 610. The term "positive" means that the estimation of the model is consistent with the actually observed S-KPI classification. The term "negative" means that the model incorrectly classifies the measurement set 602.
Violation and fulfilment of the quality requirement 608 is denoted as bad and good performance, respectively, in the context of the measurement 402. Violation and fulfilment of the condition of the rule 610 is denoted as negative and positive trigger, respectively. The following four cases can be distinguished:
If the measurement set 602 is bad and it is modelled as bad, then the measurement set 602 is true positive (TP); if the measurement set 602 is good and it is modelled as bad, then the measurement set 602 is false positive (FP); if the measurement set 602 is bad and it is modelled as good, then the measurement set 602 is false negative (FN); and if the measurement set 602 is good and it is modelled as good, then the measurement set 602 is true negative (TN). The number of measurement sets 602 falling in the above four cases is representable by the 2x2 confusion matrix shown in Fig. 10.
Any one of, or any function of, the following metrics may be derived from the confusion matrix: a total error rate = (FP+FN) / (FP+FN+TP+TN); a false positive rate = FP / (FP+TN), i.e., the ratio of false positive cases among all measurement sets 602 with good performance; and a false negative rate = FN / (FN+TP), i.e., the ratio of false negative cases among all measurement sets 602 with bad performance.
In order to decide whether or not the model is valid, certain criteria on the derived metrics are defined. For example, the model can be considered as valid, if the total error rate is below a certain threshold (e.g. 20%).
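The derivation of the metrics and the validity decision could be sketched as follows; the 20% limit is the exemplary value mentioned above:

def validate_model(tp: int, fp: int, fn: int, tn: int, max_total_error=0.20):
    """Metrics derived from the confusion matrix of Fig. 10; the model is
    considered valid only if the total error rate is below the limit."""
    total_error = (fp + fn) / (fp + fn + tp + tn)
    fp_rate = fp / (fp + tn)  # share of good measurement sets modelled bad
    fn_rate = fn / (fn + tp)  # share of bad measurement sets modelled good
    return total_error < max_total_error, (total_error, fp_rate, fn_rate)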
In the combined implementation 500, the inspection of validity according to the method 400 is built into the method 300. If the model is not valid, then the rule 610 is not changed in the SON 100 according to the step 506.
For the example of the S-KPI 606 being the number of dropped sessions and the relation 700 of Fig. 8, based on the number of session records as an input, the machine learning algorithm obtains in the step 304 the most relevant R-KPI 604 (e.g., the R-KPI 604 having most impact on the S-KPI 606). Additionally, a threshold value (which is also referred to as a cutpoint) of the R-KPI 604 is determined automatically. The threshold 1102 separates the R-KPI 604 into two classes so that the dropped and not dropped sessions are separated in the best possible way.
In this example, the most relevant parameter found by the machine learning algorithm in the step 304 is the SINR on the Physical Uplink Shared Channel (PUSCH), and the threshold is determined to be 1.5 dB in the step 304. This means that if the SINR is below 1.5 dB, the sessions are modelled to be dropped, otherwise not dropped. By way of explanation, assume that the model is valid (e.g., having a total error rate < 20%) and that the SON functionality executed in the SON 100 includes the following rules for SINR_PUSCH: if avg(SINR_PUSCH) < 2.0, then increase power on PUSCH; and if avg(SINR_PUSCH) > 8.0, then decrease power on PUSCH.
The new rules for SINR_PUSCH as derived in the step 306 include: if avg(SINR_PUSCH) < 1.5, then trigger the action of increasing power on PUSCH; and if avg(SINR_PUSCH) > 8.0, then trigger the action of decreasing power on PUSCH.
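The revision of the existing threshold by the learnt cutpoint could be expressed, for example, against a hypothetical rule store of the following form:

# Hypothetical rule store; the format is an assumption for illustration.
rules = {
    "increase_pusch_power": {"kpi": "avg(SINR_PUSCH)", "op": "<", "thresh": 2.0},
    "decrease_pusch_power": {"kpi": "avg(SINR_PUSCH)", "op": ">", "thresh": 8.0},
}
# Step 306: the automatically learnt cutpoint (1.5 dB) replaces the
# expert-set threshold (2.0 dB); the upper rule is left unchanged.
rules["increase_pusch_power"]["thresh"] = 1.5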
The desired depth of the classification tree (i.e., the complexity of the rule 610) is, in one embodiment of the technique, an input to the algorithm. More specifically, the desired error rate as the quality requirement for the S-KPI 606 is a direct input to the algorithm. The method 300 determines, based on the quality requirement, how fine-grained the classification should be (i.e., the depth of the tree) to achieve the predefined error rate. Optionally, the depth of the classification tree is limited. In the example of Fig. 8, the depth is limited to one for the simplicity of the illustration.
In case of trees with depth greater than one, the different levels also indicate the importance of the corresponding R-KPI 604. That is, the R-KPI 604 at the first level (i.e., the root of the classification tree) is the most important for achieving the given target S-KPI. The operating parameter 612 corresponding to that R-KPI (as determined according to the mapping mechanism) should be optimized first. Hence, the mapping mechanism defines a priority among the operating parameters 612.
The machine learning implemented in the step 304 depends on the complexity of the rule 610 (e.g., the depth of the classification tree). Furthermore, the machine learning algorithm implemented in the step 304 also depends on the target of the optimization. An extended implementation properly configures the machine learning algorithm by setting the target of the optimization, i.e., the quality requirement for the S-KPI 606. Fig. 11 shows an exemplary chart for a tradeoff between the different validation cases (e.g., false positive and false negative rates) changing in opposite directions as the threshold value changes. If a network operator prefers minimizing the false positive rate, then the threshold value of the SINR_PUSCH is higher than the threshold 1102, which would correspond to equal rates. If the target is to minimize the false negative rate, then the threshold of the SINR_PUSCH is lower than the threshold 1102, which would correspond to equal rates. It is up to the use-case (e.g., the specific SON functionality) whether the target of the optimization is the false positive rate, the false negative rate or a combination thereof.
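The tradeoff of Fig. 11 could be explored numerically as sketched below, again for the simple rule structure assumed earlier:

import numpy as np

def rate_tradeoff(r_kpi_values, is_bad, thresholds):
    """False positive and false negative rates as functions of the threshold
    for the rule "bad IF r_kpi < threshold" (cf. Fig. 11)."""
    results = []
    for thr in thresholds:
        modelled_bad = r_kpi_values < thr
        fp_rate = float(np.mean(modelled_bad[~is_bad]))  # good sets modelled bad
        fn_rate = float(np.mean(~modelled_bad[is_bad]))  # bad sets modelled good
        results.append((thr, fp_rate, fn_rate))
    # The threshold is picked according to the optimization target, e.g.,
    # minimizing the false positive rate, the false negative rate or a mix.
    return results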
The technique disclosed herein is applicable for any SON functionality operating based on decision rules 610. The technique can be executed in parallel to an existing SON functionality, i.e., parallel to the flowchart shown in Fig. 12 for a method 1200 of applying the rule 610.
After an initial configuration step 1202 for initializing the parameter set and the rule set based on the method 300, a loop starts that includes performing measurements according to a step 1204. The step 1204 of selection and collection of measurements may be identical with one or both of the steps 302 and 402 of the methods 300 and 400.
Each of the measurement sets 602 may relate to a Performance Management (PM) event (e.g., signal strength reports, throughput reports per user per cell), a Fault Management (FM) event (e.g., reporting of the occurrence of a fault or alarm), or a Performance Management (PM) counter. The measurements may be aggregated, e.g., throughput may be aggregated over the last 15 minutes.
The condition in the rule 610 is tested against the measurement sets 602 in a step 1206. If the rule 610 is found to be triggered in a step 1208, a step 1210 increments a statistics counter for the rule 610.
A step 1212 assesses the sufficiency of the triggering statistics. When the statistics of rule triggers are considered to be stable, the SON functionality performs the action corresponding to the triggered rule 610 in a step 1214. To this end, each rule 610 in the SON 100 is associated with one or more operating parameters 612 in a step 1216, which are modified in a modifying step 1218, if the rule 610 is triggered. For example, when the rule 610 is triggered, it may imply an adjustment of the operating parameters that impact the R-KPIs observed by the rule 610. An exemplary rule 610 is the following. If the average value of the measured Signal to Interference plus Noise Ratio (SINR) on the Physical Uplink Shared Channel (PUSCH) falls below, or exceeds, a certain threshold in a radio cell, then the power of the PUSCH is increased or decreased by a given step, respectively. Thus, the power adapts to the actual signal strength and interference conditions.
In this way the operating parameters are not fixed but vary, e.g., based on the measured network performance, and thus adapt to varying conditions. The loop starts over again at the measuring step 1204.
A time scale for repeating the loop of the method 1200 (also referred to as a SON periodicity) depends on the SON functionality. E.g., the SON periodicity may depend on a typical rate of change for the conditions of the rules 610 of the SON 100. The SON periodicity may range from seconds or minutes to days or weeks.
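A minimal sketch of the loop of the method 1200 is given below; the son object, the rule attributes and the stability criterion (a simple trigger count) are illustrative placeholders:

import time

def son_loop(son, rules, min_triggers=10, period_s=900):
    """Apply the rules 610 per the method 1200 at the SON periodicity."""
    counters = {name: 0 for name in rules}
    while True:
        sets = son.collect_measurement_sets()            # step 1204
        for name, rule in rules.items():
            if rule.condition(sets):                     # steps 1206 and 1208
                counters[name] += 1                      # step 1210
                if counters[name] >= min_triggers:       # step 1212
                    son.modify_parameters(rule.actions)  # steps 1214 to 1218
                    counters[name] = 0
        time.sleep(period_s)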
Further exemplary aspects controlled by the SON functionality (e.g., in a radio access network) include load balancing, coverage and interference optimization, mobility and robustness optimization, etc.
As has become apparent from above description of exemplary embodiments, at least some embodiments of the technique reduce or avoid the need for manually tuning decision rules of a self-organizing network functionality. Same or other embodiments reduce or avoid an intervention by experts for adapting the rules to an individual network deployment or to individual cells.
The functionality of a self-organizing network can be improved by always applying the most appropriate rules to decide on network operating parameter, e.g., for network optimization.
The self-organizing network functionality can be deployed faster, e.g., by avoiding a cumbersome initial setting of rule parameters.

Claims
1. A method (300) of deriving at least one rule for operating a self-organizing network, SON (100), the method comprising:
measuring (302) a plurality of measurement sets (602), each measurement set including measurement values specifying at least one first key performance indicator, KPI (604), of the SON and a second KPI (606) of the SON that is different from the at least one first KPI;
determining (304) a relation (700) between the at least one first KPI and the second KPI based on the measurement sets; and
deriving (306) the at least one rule (610) for operating the SON based on the determined relation.
2. The method of claim 1, wherein the rule (610) specifies a condition for modifying one or more operating parameters for operating the SON.
3. The method of claim 2, wherein the condition is defined in terms of the at least one first KPI (604).
4. The method of claim 2 or 3, wherein the condition specifies at least one threshold value (1102) for the at least one first KPI (604).
5. The method of claim 4, wherein the relation (700) is represented or representable by means of a binary tree.
6. The method of claim 5, wherein the binary tree is constructed by machine learning, and wherein the binary tree is a machine learning model trained with the measurement sets.
7. The method of claim 5 or 6, wherein each internal node (702) of the binary tree assesses whether one of the at least one first KPI (604) fulfils a corresponding one of the threshold values.
8. The method of claim 7, wherein each leaf (704) of the binary tree corresponds to a quality requirement (608) for the second KPI (606), and wherein the condition of the rule (610) corresponds to one or more branches of the binary tree, each branch being defined by one of the leaves (704).
9. The method of claim 8, wherein the quality requirement (608) is an input parameter of the method and the deriving of the rule (610) includes selecting the one or more branches according to the input parameter.
10. The method of any one of claims 4 to 9, wherein the determination includes determining the at least one threshold value (1102).
11. The method of any one of claims 2 to 10, further comprising:
applying (1200) the rule by modifying (1218) the one or more operating parameters for operating the SON, if the condition of the rule (610) is fulfilled.
12. The method of claim 11, wherein at least the steps of determining (304) and applying (1200) are performed simultaneously in the SON (100).
13. The method of any one of claims 1 to 12, wherein the SON (100) is a cellular telecommunications network.
14. The method of claim 13, wherein the at least one first KPI (604) relates to radio resources of the cellular telecommunications network.
15. The method of claim 14, wherein the second KPI (606) is indicative of a performance of a service that uses the radio resources to which the at least one first KPI (604) relates.
16. The method of any one of claims 1 to 15, further comprising:
assessing (406) an accuracy of at least one of the determined relation (700) and the derived rule (610).
17. The method of claim 16, wherein the assessing (406) includes counting at least one of a number of false positive incidences and a number of false negative incidences for the rule (610) based on the measurement sets (602).
18. A method (400) of verifying at least one rule for operating a self-organizing network, SON (100), the method comprising:
measuring (402) a plurality of measurement sets (602), each measurement set including measurement values specifying at least one first key performance indicator, KPI (604), of the SON and a second KPI (606) of the SON that is different from the at least one first KPI;
receiving (404) a quality requirement (608) in terms of the second KPI for at least one rule (610), each rule including a condition in terms of the at least one first KPI; and
assessing (406), based on the measurement sets, a correlation (1000) between the condition of the rule in terms of the at least one first KPI and a violation of the quality requirement in terms of the second KPI.
19. A computer program product comprising program code portions for performing the steps of any one of the claims 1 to 18 when the computer program product is executed on one or more computing devices.
20. The computer program product of claim 19, stored on a computer-readable recording medium.
21. A device (210) for deriving at least one rule for operating a self-organizing network, SON, the device comprising:
a measuring unit (212) adapted to measure a plurality of measurement sets (602), each measurement set including measurement values specifying at least one first key performance indicator, KPI, of the SON and a second KPI of the SON that is different from the at least one first KPI;
a determining unit (214) adapted to determine a relation between the at least one first KPI and the second KPI based on the measurement sets; and
a deriving unit (216) adapted to derive the at least one rule for operating the SON based on the determined relation.
22. A device (220) for verifying at least one rule for operating a self-organizing network, SON (100), the device comprising:
a measuring unit (212) adapted to measure a plurality of measurement sets (602), each measurement set including measurement values specifying at least one first key performance indicator, KPI (604), of the SON and a second KPI (606) of the SON that is different from the at least one first KPI;
a receiving unit (224) adapted to receive a quality requirement (608) in terms of the second KPI for at least one rule, each rule including a condition in terms of the at least one first KPI; and
an assessing unit (226) adapted to assess, based on the measurement sets, a correlation (1000) between the condition of the rule in terms of the at least one first KPI and a violation of the quality requirement in terms of the second KPI.
23. A system (200) for deriving and verifying at least one rule for operating a self-organizing network, SON (100), the system comprising:
a measuring unit (212) adapted to measure a plurality of measurement sets (602), each measurement set including measurement values specifying at least one first key performance indicator, KPI (604), of the SON and a second KPI (606) of the SON that is different from the at least one first KPI;
a determining unit (214) adapted to determine a relation (700) between the at least one first KPI and the second KPI based on the measurement sets;
a receiving unit (224) adapted to receive a quality requirement (608) in terms of the second KPI for at least one rule, each rule including a condition in terms of the at least one first KPI;
a deriving unit (216) adapted to derive the at least one rule (610) for operating the SON based on the determined relation; and
an assessing unit (226) adapted to assess, based on the measurement sets, a correlation (1000) between the condition of the rule in terms of the at least one first KPI and a violation of the quality requirement in terms of the second KPI.
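For illustration only, and without limiting the claims, the following Python sketch indicates how the rule derivation of claims 1 to 10 might be realized with a standard machine-learning library: a binary decision tree is trained on measurement sets whose features are the first KPIs and whose label encodes whether the quality requirement for the second KPI is violated; the threshold tests along each branch ending in a "violation" leaf then form the condition of a derived rule. The use of scikit-learn, the synthetic measurement sets, and all names and values are assumptions of this sketch.

# Illustrative sketch (not part of the claims): deriving threshold
# rules as in claims 1 to 10 with a binary decision tree.
# scikit-learn and numpy are assumed available; all names, the
# synthetic data, and the tree depth are hypothetical.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Measurement sets (602): rows of first-KPI values (604), e.g.
# [PUSCH SINR in dB, cell load in %], each labelled 1 when the
# second KPI (606), e.g. user throughput, violates its quality
# requirement (608), and 0 otherwise.
rng = np.random.default_rng(0)
first_kpis = rng.uniform([0.0, 0.0], [30.0, 100.0], size=(500, 2))
violated = ((first_kpis[:, 0] < 8.0) & (first_kpis[:, 1] > 60.0)).astype(int)

# Determining step: the relation (700) is a binary tree trained with
# the measurement sets (claims 5 and 6).
relation = DecisionTreeClassifier(max_depth=3).fit(first_kpis, violated)

# Deriving step: each internal node (702) is a threshold test on one
# first KPI (claim 7); a branch ending in a leaf (704) that predicts
# a violation yields the condition of one derived rule (claims 8-10).
def derive_rules(t, node=0, path=()):
    if t.children_left[node] == -1:  # leaf (704)
        if np.argmax(t.value[node]) == 1:  # majority class: violation
            yield path
        return
    feat, thr = t.feature[node], t.threshold[node]
    yield from derive_rules(t, t.children_left[node], path + ((feat, "<=", thr),))
    yield from derive_rules(t, t.children_right[node], path + ((feat, ">", thr),))

for condition in derive_rules(relation.tree_):
    print(" AND ".join(f"KPI[{f}] {op} {v:.1f}" for f, op, v in condition))

A rule derived this way could then be verified as in claim 18 by counting, over further measurement sets, how often its condition fires without a violation of the quality requirement (false positives) and vice versa (false negatives), cf. claim 17.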
PCT/EP2014/067564 2014-08-18 2014-08-18 Technique for handling rules for operating a self-organizing network WO2016026509A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2014/067564 WO2016026509A1 (en) 2014-08-18 2014-08-18 Technique for handling rules for operating a self-organizing network

Publications (1)

Publication Number Publication Date
WO2016026509A1 (en)

Family

ID=51359396

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2014/067564 WO2016026509A1 (en) 2014-08-18 2014-08-18 Technique for handling rules for operating a self-organizing network

Country Status (1)

Country Link
WO (1) WO2016026509A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012072445A1 (en) * 2010-12-03 2012-06-07 Huawei Technologies Sweden Ab Method and apparatus of communications
WO2013091715A1 (en) * 2011-12-22 2013-06-27 Telefonaktiebolaget L M Ericsson (Publ) Apparatus and method for monitoring performance in a communications network
WO2014008915A1 (en) * 2012-07-09 2014-01-16 Telefonaktiebolaget L M Ericsson (Publ) Network management systems for controlling performance of a communication network
WO2014146690A1 (en) * 2013-03-19 2014-09-25 Nokia Solutions And Networks Oy System and method for rule creation and parameter adaptation by data mining in a self-organizing network

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11196625B2 (en) 2016-03-09 2021-12-07 Cisco Technology, Inc. Cross-domain service optimization
WO2017153867A3 (en) * 2016-03-09 2017-10-26 Cisco Technology, Inc. Cross-domain service optimization
US10848381B2 (en) 2016-03-09 2020-11-24 Cisco Technology, Inc. Cross-domain service optimization
US10498609B1 (en) 2017-07-11 2019-12-03 Amdocs Development Limited System, method, and computer program for enterprise service network design driven by deep machine learning and artificial intelligence
WO2019120543A1 (en) * 2017-12-21 2019-06-27 Telefonaktiebolaget Lm Ericsson (Publ) A method and apparatus for dynamic network configuration and optimisation using artificial life
US11082292B2 (en) 2017-12-21 2021-08-03 Telefonaktiebolaget Lm Ericsson (Publ) Method and apparatus for dynamic network configuration and optimisation using artificial life
US11234141B2 (en) 2018-12-17 2022-01-25 Softbank Corp. Parameter selection for network communication links using reinforcement learning
US10785664B2 (en) 2018-12-17 2020-09-22 Loon Llc Parameter selection for network communication links using reinforcement learning
WO2020164739A1 (en) * 2019-02-15 2020-08-20 Telefonaktiebolaget Lm Ericsson (Publ) Detecting interference in a wireless network
US10735287B1 (en) 2019-03-27 2020-08-04 Hcl Technologies Limited Node profiling based on performance management (PM) counters and configuration management (CM) parameters using machine learning techniques
US10999146B1 (en) 2020-04-21 2021-05-04 Cisco Technology, Inc. Learning when to reuse existing rules in active labeling for device classification
US11971962B2 (en) 2020-04-28 2024-04-30 Cisco Technology, Inc. Learning and assessing device classification rules
WO2022069036A1 (en) * 2020-09-30 2022-04-07 Telefonaktiebolaget Lm Ericsson (Publ) Determining conflicts between kpi targets in a communications network
US11894990B2 (en) 2020-09-30 2024-02-06 Telefonaktiebolaget Lm Ericsson (Publ) Determining conflicts between KPI targets in a communications network
WO2023038478A1 (en) * 2021-09-09 2023-03-16 Samsung Electronics Co., Ltd. Server and method for obtaining key performance indicator fast-adaptive artificial intelligence model

Similar Documents

Publication Publication Date Title
WO2016026509A1 (en) Technique for handling rules for operating a self-organizing network
EP2832040B1 (en) System and method for root cause analysis of mobile network performance problems
Gómez et al. Towards a QoE-driven resource control in LTE and LTE-A networks
US9026851B2 (en) System and method for intelligent troubleshooting of in-service customer experience issues in communication networks
EP3138235B1 (en) Verification in self-organizing networks
US20200127901A1 (en) Service aware uplink quality degradation detection
US11089516B2 (en) Systems and methods for network performance monitoring, event detection, and remediation
Gómez-Andrades et al. Methodology for the design and evaluation of self-healing LTE networks
US11122467B2 (en) Service aware load imbalance detection and root cause identification
US20150181022A1 (en) Technique for Performance Management in a Mobile Communications Network
CN108989880B (en) Code rate self-adaptive switching method and system
US10567467B2 (en) System and method for heuristic control of network traffic management
US20220124517A1 (en) Anomaly detection method and device, terminal and storage medium
US20130176871A1 (en) Network Bottleneck Management
WO2011045736A1 (en) Network management system and method for identifying and accessing quality of service issues within a communications network
US20220174511A1 (en) Method and system for filtering of abnormal network parameter values prior to being used in training of a prediction model in a communication network
EP2934037B1 (en) Technique for Evaluation of a Parameter Adjustment in a Mobile Communications Network
US20200304381A1 (en) Live network real time intelligent analysis on distributed system
EP3799356A1 (en) Determining dependent causes of a computer system event
WO2014040646A1 (en) Determining the function relating user-centric quality of experience and network performance based quality of service
EP4029196A1 (en) System and method of scenario-driven smart filtering for network monitoring
Tsvetkov et al. A configuration management assessment method for SON verification
Tsvetkov et al. A post-action verification approach for automatic configuration parameter changes in self-organizing networks
US20230396485A1 (en) Network management actions based on access point classification
EP2720409A1 (en) Device and method for home network analysis

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14752855

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14752855

Country of ref document: EP

Kind code of ref document: A1