FI129551B - Analyzing operation of communications network - Google Patents

Analyzing operation of communications network Download PDF

Info

Publication number
FI129551B
Authority
FI
Finland
Prior art keywords
anomaly
anomalies
data
performance
performance data
Prior art date
Application number
FI20215028A
Other languages
Finnish (fi)
Swedish (sv)
Other versions
FI20215028A1 (en)
Inventor
Petteri Lundèn
Adriana Chis
Original Assignee
Elisa Oyj
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Elisa Oyj filed Critical Elisa Oyj
Priority to FI20215028A priority Critical patent/FI129551B/en
Priority to PCT/FI2022/050009 priority patent/WO2022152967A1/en
Priority to EP22701663.1A priority patent/EP4278578A1/en
Application granted granted Critical
Publication of FI129551B publication Critical patent/FI129551B/en
Publication of FI20215028A1 publication Critical patent/FI20215028A1/en

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 Network analysis or design
    • H04L 41/142 Network analysis or design using statistical or mathematical methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/06 Generation of reports
    • H04L 43/067 Generation of reports using time frame reporting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/2433 Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06 Management of faults, events, alarms or notifications
    • H04L 41/0631 Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
    • H04L 41/064 Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis involving time analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/04 Processing captured monitoring data, e.g. for logfile generation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/04 Arrangements for maintaining operational condition

Abstract

Analysis of operation of a communications network. One or more cells of the communications network are analyzed by obtaining (302) time series of performance data of a cell of the communications network; detecting (303) anomalies in the time series; profiling (304) the detected anomalies to determine an anomaly profile for the cell; and inputting (305) the anomaly profile to a classifier model configured to identify whether the cell is a relevant anomaly candidate. At least information about cells identified as relevant anomaly candidates is output for use in management of the communications network and the classifier model has been trained (301) with training data generated based on expert analysis of identified network problems.

Description

ANALYZING OPERATION OF COMMUNICATIONS NETWORK
TECHNICAL FIELD The present application generally relates to analysis of operation of a communications network.
BACKGROUND This section illustrates useful background information without admission that any technique described herein is representative of the state of the art.
Cellular communications networks comprise a plurality of cells serving users of the network. There are various factors that affect operation of individual cells and co- operation between the cells. In order for the communications network to operate as intended and to provide planned quality of service, cells of the communications network need to operate as planned.
There are various automated measures that monitor operation of the communications networks in order to detect problems as soon as possible so that corrective actions can be taken. The challenge is that there are malfunctions that are not necessarily detected by current automated monitoring arrangements and therefore there is room for further development of the automated monitoring arrangements.
Now a new approach is taken to analyzing operation of a communications network.
SUMMARY

The appended claims define the scope of protection. Any examples and technical descriptions of apparatuses, products and/or methods in the description and/or drawings not covered by the claims are presented not as embodiments of the invention but as background art or examples useful for understanding the invention.
According to a first example aspect there is provided a computer implemented method for analysis of operation of a communications network. The method comprises analyzing one or more cells of the communications network by
- obtaining time series of performance data of a cell of the communications network;
- detecting anomalies in the time series;
- profiling the detected anomalies to determine an anomaly profile for the cell; and
- inputting the anomaly profile to a classifier model configured to identify whether the cell is a relevant anomaly candidate;
and outputting at least information about cells identified as relevant anomaly candidates for use in management of the communications network; wherein the classifier model has been trained with training data generated based on expert analysis of identified network problems.

In some example embodiments, the detected anomalies are change points.

In some example embodiments, the anomaly profile comprises information on magnitudes of changes at the change points.

In some example embodiments, the anomaly profile comprises types of detected anomalies.

In some example embodiments, the types of detected anomalies comprise one or more of the following: value peak, value drop, step change, gradual change, variance change.

In some example embodiments, entries of the training data comprise a root cause associated with the respective identified network problem as a target and an anomaly profile determined based on time series of performance data of at least one serving cell associated with the respective identified network problem as input.

In some example embodiments, the root cause has been determined by an expert based on the identified network problem and time series of performance data of the at least one serving cell associated with the respective identified network problem.
In some example embodiments, the classifier model is configured to provide a root cause for the anomaly profile that is input to the classifier model, and wherein the output information about cells identified as relevant anomaly candidates comprises information about respective root cause.
In some example embodiments, the method further comprises retraining the classifier model based on expert analysis of the output of the method and/or based on expert evaluation of further identified network problems.
In some example embodiments, the performance data comprises performance indicator data, alarm data and/or probe data. In some example embodiments, the performance data comprises time series of a plurality of performance variables. In some example embodiments, the performance data comprises data collected from multiple cells.

According to a second example aspect of the present invention, there is provided an apparatus comprising a processor and a memory including computer program code; the memory and the computer program code configured to, with the processor, cause the apparatus to perform the method of the first aspect or any related embodiment.

According to a third example aspect of the present invention, there is provided a computer program comprising computer executable program code which when executed by a processor causes an apparatus to perform the method of the first aspect or any related embodiment.

According to a fourth example aspect there is provided a computer program product comprising a non-transitory computer readable medium having the computer program of the third example aspect stored thereon.

According to a fifth example aspect there is provided an apparatus comprising means for performing the method of the first aspect or any related embodiment.

Any foregoing memory medium may comprise a digital data storage such as a data disc or diskette, optical storage, magnetic storage, holographic storage, opto-magnetic storage, phase-change memory, resistive random access memory, magnetic random access memory, solid-electrolyte memory, ferroelectric random access memory, organic memory or polymer memory. The memory medium may be formed into a device without other substantial functions than storing memory or it may be formed as part of a device with other functions, including but not limited to a memory of a computer, a chip set, and a sub-assembly of an electronic device.

Different non-binding example aspects and embodiments have been illustrated in the foregoing. The embodiments in the foregoing are used merely to explain selected aspects or steps that may be utilized in different implementations. Some embodiments may be presented only with reference to certain example aspects. It should be appreciated that corresponding embodiments may apply to other example aspects as well.
BRIEF DESCRIPTION OF THE FIGURES

Some example embodiments will be described with reference to the accompanying figures, in which:
Fig. 1 schematically shows an example scenario according to an example embodiment;
Fig. 2 shows a block diagram of an apparatus according to an example embodiment;
Fig. 3 shows a flow diagram illustrating example methods according to certain embodiments;
Fig. 4 shows logical components used in the training phase according to some embodiments;
Fig. 5 illustrates a simple example of anomaly profiling; and
Fig. 6 shows logical components used in the network analysis and retraining phases according to some embodiments.

DETAILED DESCRIPTION

Example embodiments of the present invention and its potential advantages are understood by referring to Figs. 1 through 6 of the drawings. In the following description, like reference signs denote like elements or steps.
Example embodiments of the invention provide new methods for analyzing operation of a communications network in order to identify anomalously operating cells.
Certain example embodiments of the invention are based on using results from expert analysis of individual identified network problems and generalizing these 5 results to larger scale analysis of the network (whole network or part of the network). Various embodiments provide analysis that can be used for proactively detecting network problems before they cause user complaints.
It is to be noted that the present disclosure is not related to analyzing possibly anomalous content transmitted in the communications networks, but to identifying situations that may indicate anomalous or deteriorated operation of the network.
Fig. 1 schematically shows an example scenario according to an embodiment.
The scenario shows a communications network 101 comprising a plurality of cells and base stations and other network devices, and an operations support system, OSS, 102 configured to manage operations of the communications network 101. Further, the scenario shows an automation system 111. The automation system 111 is configured to implement automated monitoring of operation of the communications network 101. The automation system 111 is operable to interact with the OSS 102 for example to receive performance data from the OSS 102. The automation system 111 is configured to implement at least some example embodiments of the present disclosure.
In an embodiment of the invention the scenario of Fig. 1 operates as follows:

In phase 11, the automation system 111 receives performance data comprising values of performance indicators and/or other performance data from the OSS 102.

In phase 12, the performance data is automatically analysed in the automation system 111 to identify cells that are relevant anomaly candidates, i.e. cells that likely require maintenance or corrective actions.

In phase 13, the results of the analysis are output for further processing. The results of the analysis may be shown on a display or otherwise output to a user. The user may then use the results for management of the communications network in order to solve problems in the cells identified as relevant anomaly candidates.
Additionally or alternatively, the results of the analysis may be directly provided to other automated processes running in the automation system 111 or elsewhere.
There may be for example a process that automatically performs corrective actions (such as resets or parameter adjustments) in the cells identified as relevant anomaly candidates. The analysis may be automatically or manually triggered. The analysis may be periodically repeated.

Fig. 2 shows a block diagram of an apparatus 20 according to an embodiment. The apparatus 20 is for example a general-purpose computer or server or some other electronic data processing apparatus. The apparatus 20 can be used for implementing at least some embodiments of the invention. That is, with suitable configuration the apparatus 20 is suited for operating for example as the automation system 111 of the foregoing disclosure.

The apparatus 20 comprises a communication interface 25; a processor 21; a user interface 24; and a memory 22. The apparatus 20 further comprises software 23 stored in the memory 22 and operable to be loaded into and executed in the processor 21. The software 23 may comprise one or more software modules and can be in the form of a computer program product.

The processor 21 may comprise a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a graphics processing unit, or the like. Fig. 2 shows one processor 21, but the apparatus 20 may comprise a plurality of processors.

The user interface 24 is configured for providing interaction with a user of the apparatus. Additionally or alternatively, the user interaction may be implemented through the communication interface 25. The user interface 24 may comprise circuitry for receiving input from a user of the apparatus 20, e.g., via a keyboard, a graphical user interface shown on the display of the apparatus 20, speech recognition circuitry, or an accessory device, such as a headset, and for providing output to the user via, e.g., a graphical user interface or a loudspeaker.

The memory 22 may comprise for example a non-volatile or a volatile memory, such as a read-only memory (ROM), a programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), a random-access memory (RAM), a flash memory, a data disk, an optical storage, a magnetic storage, a smart card, or the like. The apparatus 20 may comprise a plurality of memories. The memory 22 may serve the sole purpose of storing data, or be constructed as a part of an apparatus 20 serving other purposes, such as processing data.

The communication interface 25 may comprise communication modules that implement data transmission to and from the apparatus 20. The communication modules may comprise a wireless or a wired interface module(s) or both. The wireless interface may comprise, for example, a WLAN, Bluetooth, infrared (IR), radio frequency identification (RF ID), GSM/GPRS, CDMA, WCDMA, LTE (Long Term Evolution) or 5G radio module. The wired interface may comprise, for example, Ethernet or universal serial bus (USB). The communication interface 25 may support one or more different communication technologies. The apparatus 20 may additionally or alternatively comprise more than one of the communication interfaces 25.

A skilled person appreciates that in addition to the elements shown in Fig. 2, the apparatus 20 may comprise other elements, such as displays, as well as additional circuitry such as memory chips, application-specific integrated circuits (ASIC), other processing circuitry for specific purposes and the like. Further, it is noted that only one apparatus is shown in Fig. 2, but the embodiments of the invention may equally be implemented in a cluster of shown apparatuses.

Fig. 3 shows a flow diagram illustrating example methods according to certain embodiments. The methods may be implemented in the automation system 111 of Fig. 1 and/or in the apparatus 20 of Fig. 2. The methods are implemented in a computer and do not require human interaction unless otherwise expressly stated. It is to be noted that the methods may however provide output that may be further processed by humans and/or the methods may require user input to start. The shown method comprises various possible process phases, including some optional phases, while further phases can also be included and/or some of the phases can be performed more than once.

The method of Fig. 3 comprises the following phases:

301: A classifier model is trained. The classifier model is a machine learning or artificial intelligence model that is intended for automatically identifying whether cells are relevant anomaly candidates. In order to configure the classifier model, it is trained with training data generated based on expert analysis of identified (reported or otherwise known) network problems.
The classifier model may analyze one cell or multiple cells at a time.
The training phase is discussed in more detail in connection with Fig. 4. The trained classifier model is then used in analysis of performance data in the following phases.
In the following phases one cell is considered, but multiple cells may be equally analyzed.
Multiple cells may be separately analysed or the subject of the analysis may be performance data aggregated or collected from multiple cells.
For example, data from a group of neighboring cells (or from some other group of cells) may be collected and analysed as one entity.
The analysis then provides a result that concerns the whole group, i.e. it is determined whether the whole group is a relevant anomaly candidate.

302: Time series of performance data of a cell of the communications network are obtained.
The performance data may comprise time series of performance indicator data, alarm data and/or probe data.
At minimum the performance data comprises time series of one performance variable, but the performance data may comprise time series of a plurality of performance variables.
Or, as mentioned above, the performance data may comprise data collected from multiple cells.
The following is a non-exclusive list of possible performance variables included in the performance data: throughput, cell availability, handover failure or success rate, reference signal received power, RSRP, reference signal received quality, RSRQ, received signal strength indicator, RSSI, signal to noise ratio, SNR, signal to interference plus noise ratio, SINR, received signal code power, RSCP, and channel quality indicator, CQI.
Other performance variables may be used, too.
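To make the shape of such input concrete, the following sketch models per-cell performance data as time series keyed by KPI name. The `CellSeries` container, the KPI keys and the sample values are illustrative assumptions, not part of the claimed method.

```python
from dataclasses import dataclass, field

@dataclass
class CellSeries:
    """Hypothetical per-cell container: KPI name -> list of samples."""
    cell_id: str
    kpis: dict = field(default_factory=dict)

    def add_samples(self, kpi: str, values: list) -> None:
        # Append new samples to the time series of the given KPI.
        self.kpis.setdefault(kpi, []).extend(values)

# Illustrative hourly samples for one cell (values invented)
cell = CellSeries("cell-001")
cell.add_samples("DL_THP", [41.2, 39.8, 40.5, 2.1])  # downlink throughput, Mbit/s
cell.add_samples("RSRP", [-98, -97, -99, -121])      # dBm
```

In practice such series would be collected from the OSS rather than hard-coded; the container merely illustrates the "one or more time series per cell" structure.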
303: The performance data is analysed in order to detect anomalies in the time series.
In an embodiment, the detected anomalies are change points. - The following is non-exclusive list of possible methods or models that can be used z for detecting the anomalies: binary segmentation algorithm, Pruned Exact Linear D Time (PELT) algorithm, Z-score based method.
Other methods or models may be D used, too.
Several different methods may be used for example to detect different S 30 types of anomalies, such as long-term step changes or individual short-term peaks or drops.
304: The detected anomalies are profiled to determine an anomaly profile for the cell. In general, anomaly profiling summarizes combination of anomalies detected in the cell (series of anomalies detected in one performance variable and/or combination of anomalies of different performance variables, i.e. giving information which performance variables are having anomalies, and possibly also the number, the type, and the magnitude of the anomalies). In this way, performance data of the cell is transformed into a more manageable number of features that describe what are the unusual or anomalous characteristics in the cell's performance.
In an embodiment, the anomaly profile comprises information on magnitudes of changes at detected change points.
Additionally or alternatively, the anomaly profile may comprise types of detected anomalies. The following is non-exclusive list of possible anomaly types: value peak, value drop (peaks and drops may be sudden and there may be multiple peaks or drops), step change (or data shift) gradual change (or trend change), variance change (e.g. performance data starts to look like noise after certain point).
Additionally, the anomaly profile may include information on severity of the anomaly (e.g. anomaly score) and timing of the anomaly or anomalies (e.g. time between anomalies, time since the last detected anomaly).
305: The anomaly profile is input (for inference) to the classifier model trained in phase 301 in order to identify whether the cell is a relevant anomaly candidate and requires corrective actions or at least further consideration.
In an embodiment, the output of the classifier indicates whether the cell is = considered a relevant anomaly candidate or not. In another alternative, the output N of the classifier additionally provides a (likely) root cause for the input anomaly O 25 profile. In an embodiment, the output of the classifier is a score indicating how - confident the classifier model is that the cell is a relevant anomaly candidate, or how z confident the classifier model is of each potential root cause for the input anomaly S profile.
N 306: Output is provided. The output comprises at least information about cells N 30 identified as relevant anomaly candidates. In an embodiment, the output comprises also information about cells that are not considered as relevant anomaly candidates.
Still further, the output may comprise respective (likely) root cause of likely problems in the cell.
For cells that are not considered as relevant anomaly candidates, the output may comprise an indication of not having found any likely root cause.
It is to be noted that the output does not need to include separate indication of the cell being or not being a relevant anomaly candidates.
The root cause or lack of root cause may indirectly provide this information.
The output may then be used in management of the communications network.
Fig. 4 shows logical components used in training phase according to some embodiments.
The logical components are a identified problem 405, serving cell performance data 401, an anomaly detection block 402, an anomaly profiling block 403, an expert analysis block 406, a root cause 407, training data 408, and a classifier model 410. The identified problem 405 is for example a user complaint or an alarm from the network.
The identified problem is associated with a serving cell (the cell serving the — user having the problem). In some cases the same problem could be associated with multiple serving cells.
A network expert analyses the identified problem 405 based on at least the time series of serving cell performance data 401 in the expert analysis block 406. If the problem is associated with multiple serving cells, the network expert may analyze performance data from more than one serving cell.
As a result of the analysis the expert determines the root cause 407 for the identified problem 405. The root cause may be for example bad handover parameters, bad coverage, PCI (Physical Cell Identity) conflict, or faulty hardware or some other problem source or no root cause (the identified problem is such that there is no need N for corrective actions). eI 25 The serving cell performance data 401 is analyzed to detect anomalies and to obtain = anomaly profile of the serving cell in the anomaly detection block 402 and the > anomaly profiling block 403. The anomaly detection block 402 and the anomaly : profiling block 403 operate according to phases 303 and 304 of Fig. 3. S The training data 408 is generated based on the anomaly profile of the serving cell N 30 and the root cause.
The anomaly profile is used as an input and the root cause is used as a target in the training data 408. The training data 408 is used for training the classifier model 410 in order to configure the classifier model 410 to later identify whether a certain cell is a relevant anomaly candidate and optionally also to determine the respective root cause. The classifier model may use the anomaly profile as a count of anomaly types per performance variable or as a series of events.
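One way to realize the "count of anomaly types per performance variable" representation mentioned above is to vectorize (anomaly profile, root cause) pairs into a fixed-width feature matrix and a target vector. The record contents below are invented for illustration; only the input/target split mirrors the described training data.

```python
def vectorize(records):
    """Turn (anomaly_profile, root_cause) pairs into features X and targets y.

    Each feature is the count of one (KPI, anomaly type) key; profiles
    missing a key get a count of zero, so all rows have the same width.
    """
    feature_names = sorted({k for profile, _ in records for k in profile})
    X = [[profile.get(k, 0) for k in feature_names] for profile, _ in records]
    y = [root_cause for _, root_cause in records]
    return feature_names, X, y

# Hypothetical expert-labelled records
records = [
    ({"KPI3.peak": 3, "KPI4.step_change": 1}, "faulty hardware"),
    ({"HO_SR.drop": 2}, "no root cause"),
]
feature_names, X, y = vectorize(records)
```

The resulting `X`/`y` pair is the conventional shape expected by most classifier libraries, so the expert-labelled profiles can feed directly into model training.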
Fig. 5 illustrates a simple example of anomaly profiling. Fig. 5 shows time series of four different performance indicators KPI 1-KPI 4. The anomaly detection phase results in no anomalies in KPI 1, no anomalies in KPI 2, three anomalies 503 in KPI 3, and one anomaly 504 in KPI 4. Anomaly profiling results in an anomaly profile that indicates three peaks in KPI 3, wherein the magnitudes of the peaks are 200%, 170%, and 180% over the mean KPI value, respectively, and one step change in KPI 4, wherein the magnitude of the step change is -90% of the mean KPI value. In this example, the reference point is the mean KPI value, but clearly some other reference point could be used, too.
Such a combination of anomalies is likely to lead to or indicate a certain type of problem, while some other combination of anomalies may lead to some other problems or to no problems at all.
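The Fig. 5 magnitudes can be reproduced with simple percentage arithmetic against a baseline KPI level; the helper name and the choice of the mean as baseline are assumptions (the text itself notes that other reference points are possible).

```python
def pct_vs_baseline(value, baseline):
    """Anomaly magnitude as a percentage change from a baseline KPI level."""
    return 100.0 * (value - baseline) / baseline

# A peak at three times the mean KPI level: 200% over the mean, as in Fig. 5
peak_magnitude = pct_vs_baseline(300.0, 100.0)   # 200.0
# A step down to a tenth of the mean: the -90% step change of KPI 4
step_magnitude = pct_vs_baseline(10.0, 100.0)    # -90.0
```

Expressing magnitudes relative to a baseline rather than in raw KPI units keeps profiles comparable across cells with very different traffic levels.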
Fig. 6 shows logical components used in network analysis and retraining phases according to some embodiments. The logical components are performance data 421, an anomaly detection block 402, an anomaly profiling block 403, a classifier model 410, an output 411, and an expert analysis block 406. The anomaly detection block 402, the anomaly profiling block 403, and the classifier model 410 operate according to phases 303, 304 and 305 of Fig. 3 to analyze the = performance data 421 and to provide output as defined in phase 306 of Fig. 3.
O = Ongoing retraining of the classifier model may be provided based on network expert = 25 analysis of the output 411 and respective performance data 421 performed in the > expert analysis block 406. Ao © Further retraining of the classifier model 410 during operation may be performed O based on expert analysis of new identified network problems. O The following provides illustrative examples of the training data generated based on the anomaly profile of the serving cell and the root cause determined by an expert.
The anomaly profile is used as an input and the root cause is used as a target in the training data. It is to be noted that these are non-limiting examples and that the content of the training data may vary over time and/or depending on the communications network that is analyzed.

Example 1:
- Anomaly profile (input):
  o changepoint showing a 796% increase in random access channel setup attempts (RACH STP ATT)
  o changepoint showing a 95% decrease in random access channel setup success rate (RACH STP ATT)
  o changepoint showing a 21% increase in the uplink interference (UL INTERFERENCE) KPI
- Root cause (target): synchronization problem

Example 2:
- Anomaly profile (input):
  o outlier with mean KPI level in the lowest 2% (of the cells) in downlink throughput (DL_THP)
  o outlier with 5th percentile KPI level in the lowest 3% (of the cells) in reference signal received power (RSRP)
- Root cause (target): coverage hole

Example 3:
- Anomaly profile (input):
  o short-term drop of 35% on day -2 in handover success rate (HO_SR)
  o short-term drop of 54% on day -5 in handover success rate (HO_SR)
- Root cause (target): not relevant anomaly (i.e. the anomaly is not considered relevant in the sense that no corrective action is taken)
Without in any way limiting the scope, interpretation, or application of the appended claims, a technical effect of one or more of the example embodiments disclosed herein is improved efficiency of network problem resolution. Various embodiments provide identification of cells that are relevant anomaly candidates and are likely to require corrective actions.
Thereby maintenance personnel does not need to analyze performance data or anomalies from all cells of the network in order to monitor operation of the network.
In this way, the amount of work of the maintenance personnel can be reduced or the resources can be better targeted to likely problems to improve efficiency.
Thus, maintenance actions may be initiated faster for the most relevant issues.
Still further, identification of cells that are relevant anomaly candidates provides that some anomalies of performance data may be ignored by the maintenance personnel.
The reason for this is that some anomalies in performance data are not likely to indicate problems that affect user experience.
Various embodiments provide that such anomalies can be automatically ignored as cells with such anomalies are not likely to be identified as relevant anomaly candidates.
A further technical effect is that automatic analysis enables continuous network-wide cell monitoring through monitoring performance data.
Yet another technical effect is that a proactive approach for finding and resolving network faults may be provided, instead of waiting for user complaints to react to.
The analysis of user complaints and associated tickets or other identified problems requires a lot of work and involves going through a large amount of performance data time series.
Therefore avoidance of at least some of these by proactive actions may save a lot of resources.
Yet another technical effect is that relatively sparse training data from expert analysis of identified network problems can be used for training an automated analyzer of network-wide data (the classifier model of various embodiments). Anomaly profiling according to various embodiments provides that time series of performance data are transformed into a more manageable number of features.
This provides that dependencies between performance data and network problems can be learned from a relatively small amount of training data.
In this way, an extensive amount of resources is not needed for generating sufficient training data for the automated analysis.
Yet another technical effect is that some embodiments enable identifying combinations of anomalies (in one performance variable or in multiple performance variables of a cell) as the source of a problem.
Detecting such combinations may be difficult for a human who needs to go through a large amount of performance data time series.

If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the before-described functions may be optional or may be combined.

Various embodiments have been presented. It should be appreciated that in this document, the words comprise, include and contain are each used as open-ended expressions with no intended exclusivity.

The foregoing description has provided, by way of non-limiting examples of particular implementations and embodiments, a full and informative description of the best mode presently contemplated by the inventors for carrying out the invention. It is however clear to a person skilled in the art that the invention is not restricted to details of the embodiments presented in the foregoing, but that it can be implemented in other embodiments using equivalent means or in different combinations of embodiments without deviating from the characteristics of the invention. Furthermore, some of the features of the afore-disclosed example embodiments may be used to advantage without the corresponding use of other features. As such, the foregoing description shall be considered as merely illustrative of the principles of the present invention, and not in limitation thereof. Hence, the scope of the invention is only restricted by the appended patent claims.
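As an illustrative sketch only, and not part of the claimed subject matter, training a classifier model on expert-labelled anomaly profiles and using it to flag relevant anomaly candidates with a root cause could be outlined as follows. The feature vectors, the root-cause labels and the nearest-centroid rule are all invented for this example; the embodiments do not prescribe a particular classifier model.

```python
# Illustrative sketch only: train a classifier on sparse, expert-labelled
# anomaly profiles. The profiles, root-cause labels and the nearest-centroid
# rule are invented for illustration.
import numpy as np

# Hypothetical training data: each row is an anomaly profile
# (n_change_points, max_up_shift, max_down_shift), labelled according to
# expert analysis of an identified network problem.
X_train = np.array([
    [3.0, 2.0, 0.0],   # repeated upward shifts -> "interference" (example)
    [2.0, 1.8, 0.0],
    [1.0, 0.0, -2.5],  # sharp drop -> "sleeping_cell" (example)
    [2.0, 0.0, -1.9],
    [0.0, 0.0, 0.0],   # no anomalies -> not a relevant candidate
    [0.0, 0.0, 0.0],
])
y_train = ["interference", "interference",
           "sleeping_cell", "sleeping_cell",
           "not_relevant", "not_relevant"]

def fit_centroids(X, y):
    """One centroid per root-cause label (a stand-in for training)."""
    return {label: X[[i for i, lab in enumerate(y) if lab == label]].mean(axis=0)
            for label in set(y)}

def classify(centroids, profile):
    """Assign the label whose centroid is nearest to the anomaly profile."""
    return min(centroids,
               key=lambda lab: float(np.linalg.norm(centroids[lab] - profile)))

centroids = fit_centroids(X_train, y_train)
verdict = classify(centroids, np.array([1.0, 0.0, -2.0]))
```

A cell whose profile is classified as anything other than not_relevant would then be output as a relevant anomaly candidate together with its predicted root cause.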
Claims (15)

1. A computer implemented method for analysis of operation of a communications network (101), characterized by analyzing one or more cells of the communications network by
- obtaining (302) time series of performance data (421) of a cell of the communications network;
- detecting (303, 402) anomalies in the time series;
- profiling (304, 403) the detected anomalies to determine an anomaly profile for the cell, wherein the anomaly profile comprises a combination of the detected anomalies; and
- inputting (305) the anomaly profile to a classifier model (410) configured to identify whether the cell is a relevant anomaly candidate; and outputting (306) at least information about cells identified as relevant anomaly candidates for use in management of the communications network;
wherein the classifier model (410) has been trained (301) with training data (408) generated based on expert analysis (406) of identified network problems (405), and wherein the training data comprises anomaly profiles automatically determined based on time series of performance data (401) of at least one serving cell associated with the respective identified network problems (405).

2. The method of any preceding claim, wherein the detected anomalies are change points.
3. The method of claim 2, wherein the anomaly profile comprises information on magnitudes of changes at the change points.

4. The method of any preceding claim, wherein the anomaly profile comprises types of detected anomalies.
5. The method of claim 4, wherein the types of detected anomalies comprise one or more of the following: value peak, value drop, step change, gradual change, variance change.
6. The method of any preceding claim, wherein the anomaly profile comprises a series of anomalies detected in one performance variable and/or a combination of anomalies of different performance variables.
7. The method of any preceding claim, wherein entries of the training data (408) comprise a root cause (407) associated with respective identified network problem (405) as a target and an anomaly profile determined based on time series of performance data (401) of at least one serving cell associated with the respective identified network problem (405) as input.
8. The method of claim 7, wherein the root cause (407) has been determined by an expert based on the identified network problem and time series of performance data of the at least one serving cell associated with the respective identified network problem.
9. The method of any preceding claim, wherein the classifier model (410) is configured to provide a root cause for the anomaly profile that is input to the classifier model, and wherein the output information about cells identified as relevant anomaly candidates comprises information about the respective root cause.
10. The method of any preceding claim, further comprising retraining the classifier model based on expert analysis of the output of the method and/or based on expert evaluation of further identified network problems.

11. The method of any preceding claim, wherein the performance data comprises performance indicator data, alarm data and/or probe data.
12. The method of any preceding claim, wherein the performance data comprises time series of a plurality of performance variables.
13. The method of any preceding claim, wherein the performance data comprises data collected from multiple cells.
14. An apparatus (20, 111, 112) comprising a processor (21), and a memory (22) including computer program code; the memory and the computer program code configured to, with the processor, cause the apparatus to perform the method of any one of claims 1-13.
15. A computer program comprising computer executable program code (23) which when executed by a processor causes an apparatus to perform the method of any one of claims 1-13.
FI20215028A 2021-01-13 2021-01-13 Analyzing operation of communications network FI129551B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
FI20215028A FI129551B (en) 2021-01-13 2021-01-13 Analyzing operation of communications network
PCT/FI2022/050009 WO2022152967A1 (en) 2021-01-13 2022-01-05 Analyzing operation of communications network
EP22701663.1A EP4278578A1 (en) 2021-01-13 2022-01-05 Analyzing operation of communications network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FI20215028A FI129551B (en) 2021-01-13 2021-01-13 Analyzing operation of communications network

Publications (2)

Publication Number Publication Date
FI129551B true FI129551B (en) 2022-04-14
FI20215028A1 FI20215028A1 (en) 2022-04-14

Family

ID=80123104

Family Applications (1)

Application Number Title Priority Date Filing Date
FI20215028A FI129551B (en) 2021-01-13 2021-01-13 Analyzing operation of communications network

Country Status (3)

Country Link
EP (1) EP4278578A1 (en)
FI (1) FI129551B (en)
WO (1) WO2022152967A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FI130380B (en) 2021-11-09 2023-08-07 Elisa Oyj Analyzing operation of communications network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9918258B2 (en) * 2013-11-26 2018-03-13 At&T Intellectual Property I, L.P. Anomaly correlation mechanism for analysis of handovers in a communication network
US10261851B2 (en) * 2015-01-23 2019-04-16 Lightbend, Inc. Anomaly detection using circumstance-specific detectors
US10263833B2 (en) * 2015-12-01 2019-04-16 Microsoft Technology Licensing, Llc Root cause investigation of site speed performance anomalies
US10924330B2 (en) * 2018-09-07 2021-02-16 Vmware, Inc. Intelligent anomaly detection and root cause analysis in mobile networks

Also Published As

Publication number Publication date
WO2022152967A1 (en) 2022-07-21
EP4278578A1 (en) 2023-11-22
FI20215028A1 (en) 2022-04-14

Similar Documents

Publication Publication Date Title
US9961571B2 (en) System and method for a multi view learning approach to anomaly detection and root cause analysis
EP3259881B1 (en) Adaptive, anomaly detection based predictor for network time series data
EP3286878B1 (en) Fault diagnosis in networks
CN105744553B (en) Network association analysis method and device
EP2750432A1 (en) Method and system for predicting the channel usage
CN112243249B (en) LTE new access anchor point cell parameter configuration method and device under 5G NSA networking
Turkka et al. An approach for network outage detection from drive-testing databases
Chernogorov et al. Sequence-based detection of sleeping cell failures in mobile networks
US10517007B2 (en) Received signal strength based interferer classification of cellular network cells
FI129551B (en) Analyzing operation of communications network
Chernogorov et al. N-gram analysis for sleeping cell detection in LTE networks
WO2022117911A1 (en) Anomaly detection
EP3849231B1 (en) Configuration of a communication network
Santos et al. An unsupervised learning approach for performance and configuration optimization of 4G networks
Sallent et al. Data analytics in the 5G radio access network and its applicability to fixed wireless access
CN114745289A (en) Method, device, storage medium and equipment for predicting network performance data
CN114362906A (en) Rate matching method, device, electronic equipment and readable medium
FI129315B (en) Analyzing operation of cells of a communications network
WO2023084146A1 (en) Analyzing operation of communications network
US11877170B2 (en) Automated evaluation of effects of changes in communications networks
US20230370354A1 (en) Systems and methods for identifying spatial clusters of users having poor experience in a heterogeneous network
US20230216726A1 (en) Monitoring of target system, such as communication network or industrial process
FI129316B (en) Monitoring performance of a communication network
EP3641373B1 (en) System and method for determining communication channels in a wireless network
CN115038040A (en) Cell positioning method, device, equipment, system and medium

Legal Events

Date Code Title Description
FG Patent granted

Ref document number: 129551

Country of ref document: FI

Kind code of ref document: B