WO2014040633A1 - Identifying fault category patterns in a communication network - Google Patents
- Publication number
- WO2014040633A1 (PCT/EP2012/068092)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- communication
- fault
- indicators
- network
- indicator
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/06—Management of faults, events, alarms or notifications
- H04L41/0631—Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
-
- H04L41/065—Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis involving logical or physical relationship, e.g. grouping and hierarchies
-
- H04L41/069—Management of faults, events, alarms or notifications using logs of notifications; Post-processing of notifications
-
- H04L41/5032—Generating service level reports
-
- H04L41/5061—Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the interaction between service providers and their network customers, e.g. customer relationship management
- H04L41/5067—Customer-centric QoS measurements
Definitions
- CSA Customer Service Assurance
- KPIs Key Performance Indicators
- KQIs Key Quality Indicators
- the Transaction Details Record comprises at least one indicator from one of the following types of network performance indicators: TCP/HTTP indicators, DNS indicators, PDP indicators, GPRS (GMM) indicators, CS (CC) indicators, RAB indicators, Radio Access indicators, RRC indicators.
- the Communication Details Record comprises a Session Details Record comprising at least one user experience indicator forming the communication fault indicator.
- the Session Details Record comprises at least one indicator from one of the following types of user experience indicators: HTTP streaming model indicators, web browsing model indicators, CS voice model indicators.
- the method comprises associating each of a plurality of communications to a set of Communication Details Records, assigning each of the communication fault indicators to a predetermined fault category from the plurality of different predetermined fault categories, wherein, in the step of assigning, the communication fault indicators belonging to each set of Communication Details Records are assigned to a predetermined fault category pattern, storing each of the plurality of predetermined fault category patterns to obtain a fault category matrix, clustering respectively the corresponding fault category matrices assigned to communication fault indicators to obtain clusters of fault category matrices, and determining the most relevant cluster amongst the fault category matrices to identify recurrent predetermined fault category patterns and finally determine the network fault.
- the respectively corresponding fault category matrices are clustered upon the basis of a first distance metric.
- the most relevant cluster amongst the fault category matrices is determined upon the basis of a second distance metric, the second distance metric being different from the first distance metric.
- the fault category matrix can correspond to an anomalies pattern as described herein.
- the method comprises assigning the network fault to a Key Performance Indicator or to a Key Quality Indicator.
- the method comprises intercepting data samples of user communications, and determining the Communication Details Record upon the basis of the intercepted data samples.
- the method comprises receiving the Communication Details Record from a point of control and observation in the communication network.
- the method comprises indicating the determined network fault.
- the assigning the communication fault indicator to the predetermined fault category is performed digitally.
- the Communication Details Record is provided by a network probe element, and wherein the extracting and assigning are performed by a Network Management System.
- the network probe element can be configured to intercept data communications in the communication network.
- the network probe element can be implemented in a RNC or in a mobile agent.
- the Communication Details Record can be transmitted towards the Network Management System by the network probe element over the communication network.
- the invention relates to a computer program with a program code for performing the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect when the computer program runs on a computer.
- the invention relates to a computer system being configured to perform the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect or to execute the computer program according to the second aspect.
- the invention relates to a network system for determining a network fault in a communication network, the network system comprising a network probe element for providing a Communication Details Record, the Communication Details Record indicating a fault status of user communications in the communication network, and a Network Management System for extracting at least one communication fault indicator from the Communication Details Record and for assigning the communication fault indicator to a predetermined fault category from a plurality of different predetermined fault categories to determine the network fault.
- the network probe element can be implemented in an RNC.
- the network system can be further configured to perform the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, to execute the computer program according to the second aspect, or to implement the computer system according to the third aspect.
- the invention can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations thereof.
- Fig. 1 shows a diagram of a method for determining a network fault
- Fig. 2 shows an anomalies pattern
- Fig. 3 shows a diagram of a method for determining a network fault
- Fig. 4 shows anomalies patterns' groups
- Fig. 5 shows anomalies patterns' clusters
- Fig. 6 shows anomalies patterns' clusters
- Fig. 7 shows anomalies patterns' clusters.
- Fig. 1 shows a diagram of a method for determining a network fault in a communication network.
- the method comprises providing 101 a Communication Details Record, the Communication Details Record indicating a fault status of user communications in the communication network, extracting 103 at least one communication fault indicator from the Communication Details Record and assigning 105 the communication fault indicator to a predetermined fault category from a plurality of different predetermined fault categories to determine the network fault.
- the Communication Details Record can comprise a SDR and/or TDR, which will be denoted with xDR in the following.
- the communication fault indicator can be a generic fault indicator in a xDR such as a TDR network performance indicator or SDR user experience indicator.
- Predetermined fault categories form anomaly classes as defined in the following Tables 1 and 2.
- TDR Transaction Details Record
- NPIs Network Performance Indicators
- TDR identifiers: unique identification of the TDR within the CSA system;
- TDR context: timestamps, network identifiers, user identifiers, service identifiers;
- Performance Indicators: performance indicators related to the protocol transaction, organized in classes of anomalies; the optimal list of performance indicators can be defined independently from the data source, making reference to the protocol/interface standard specifications.
- Table 1 shows an implementation form of a TDR structure. As shown in Table 1, the performance indicators are organized in 6 anomaly classes. An anomaly class collects the performance indicators having similar effects on the transaction: e.g. the class Establishment exception includes the performance indicators affecting the success of the transaction establishment. Each class is finally characterized by an anomaly class indicator: this is a binary indicator which is flagged when the corresponding NPIs violate given thresholds. The anomaly class indicator highlights a fault or an error in the transaction, without going into protocol details.
- Network identifiers Endpoints, interface, link identifier, cell, ...
- Quality anomalies: overall transaction, i.e. low average received signal level or quality in the radio interface
- Quality anomaly: localized transaction, i.e. low average received signal level or quality in the radio interface
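The threshold-based flagging described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the NPI names, threshold values and class grouping are invented for the example.

```python
# Sketch of the binary anomaly class indicators: an anomaly class is flagged
# when any of the NPIs it collects violates a given threshold.
# NPI names, thresholds and the class grouping are illustrative assumptions.

# Map each anomaly class to its NPIs, with a per-NPI violation predicate.
ANOMALY_CLASSES = {
    "establishment_exception": {
        "setup_failure_ratio": lambda v: v > 0.05,   # more than 5% failures
        "setup_delay_ms":      lambda v: v > 2000,   # slower than 2 s
    },
    "quality_anomalies": {
        "avg_rx_signal_dbm":   lambda v: v < -105,   # weak radio signal
    },
}

def anomaly_class_indicators(npis: dict) -> dict:
    """Return one binary indicator per anomaly class for a single TDR."""
    indicators = {}
    for cls, checks in ANOMALY_CLASSES.items():
        violated = any(
            name in npis and check(npis[name])
            for name, check in checks.items()
        )
        indicators[cls] = int(violated)
    return indicators

tdr_npis = {"setup_failure_ratio": 0.12, "setup_delay_ms": 800,
            "avg_rx_signal_dbm": -98}
print(anomaly_class_indicators(tdr_npis))
# {'establishment_exception': 1, 'quality_anomalies': 0}
```

The indicator stays binary per class, mirroring the text: a fault is highlighted without going into protocol details.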
- SDR Session Details Record
- a SDR is coupled with a single session of a service, e.g. voice, video or data, and it is agnostic to lower layers and network interfaces. It relates to the service performance and corresponding QoE. It also consists of three information elements:
- SDR identifiers: enable a unique identification of the SDR within the CSA system;
- SDR context: provides timestamps, network descriptors, user descriptors, service descriptors;
- anomaly class indicators can be flagged to denote quality issues.
- the TDRs and SDRs, i.e. xDRs, generated by a given data source may be organized hierarchically.
- a generic xDRs hierarchy can be populated by the xDRs provided by complementary data sources, depending on the availability of consistent performance indicators for the sessions and network protocols.
- a software instance embedded in a UMTS Radio Network Controller can normally provide TDRs related to the RRC and RAB transactions monitored on the Iub interface, while a mobile agent can additionally supply TDRs related to GPRS MM and Session Management, as well as SDRs shaped according to service modeling methods.
- Table 3 and Table 4 depict the hierarchy according to an implementation form.
- the TDRs and SDRs are generated by software agents installed on the mobile terminals assigned to friendly users and by probes embedded in the Radio Network Controllers serving the relevant users.
- additional xDRs are generated by probes intercepting signaling in 3GPP Core network interfaces.
- xDRs patterns and anomalies patterns: when a user performs a service session in the network under monitoring, the available data sources generate SDRs or TDRs that outline a footprint of the session throughout the network.
- a xDRs pattern is the tread of xDRs belonging to the same user's session.
- An anomalies pattern is the tabular representation of a xDRs pattern, each xDR being characterized only by the xDR type and its anomaly classes indicators.
- Fig. 2 shows an anomalies pattern, according to an implementation form.
- the SDR and TDRs types can belong to a xDR hierarchy, e.g. TDR type 1 could be a RRC TDR and TDR type 2 a RAB TDR in the hierarchy depicted in Table 3.
- the abstract representation of the anomalies pattern makes it possible to automatically process the Communication Details Records with an algorithm.
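The tabular representation described above can be sketched as a small matrix builder: one row per xDR of a session, one column per anomaly class, binary cells. The record types and class names are illustrative assumptions, not the patent's exact vocabulary.

```python
# Minimal sketch of an anomalies pattern: rows are the xDRs of one user
# session, columns are the anomaly classes, cells are the binary class
# indicators. xDR types and class names are illustrative assumptions.

CLASSES = ["establishment", "release", "quality"]

def anomalies_pattern(xdrs):
    """Build the tabular representation from (xdr_type, flagged_classes)."""
    return [
        (xdr_type, [int(c in flagged) for c in CLASSES])
        for xdr_type, flagged in xdrs
    ]

session = [
    ("SDR",     {"quality"}),
    ("RRC TDR", {"establishment"}),
    ("RAB TDR", set()),
]
for row in anomalies_pattern(session):
    print(row)
# ('SDR', [0, 0, 1])
# ('RRC TDR', [1, 0, 0])
# ('RAB TDR', [0, 0, 0])
```

Because each xDR is reduced to its type plus binary class indicators, patterns from different sessions become directly comparable by an algorithm.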
- the method comprises the application of the clustering algorithm to a set of samples appropriately formatted, in order to identify the root cause of the network performance deterioration.
- the framework defines at least one of the following: the structure of the Transaction Details Records (TDRs) and Session Details Records (SDRs), the format of the data samples to be fed to the clustering algorithm, the clustering algorithm, and the mapping strategy to associate the root cause to appropriate KPIs, in order to extend the analysis to the full population of relevant network transactions.
- Fig. 3 shows a diagram of a method for determining a network fault in a communication network according to an implementation form to demonstrate a detection of an anomaly, e.g. a KQI anomaly 301 or a KPI anomaly 303.
- the respective anomaly can be characterized as a deterioration of Network Performance Indicators (NPIs) or User Experience Indicators (UXIs).
- NPIs Network Performance Indicators
- UXIs User Experience Indicators
- KPIs Key Performance Indicators
- KQIs Key Quality Indicators
- a troubleshooting engineer may want to go deeper into the analysis of a set of transaction samples 306 or session samples 308, which are identified by SDR samples 305 characterized by abnormal UXIs and/or TDRs samples 307 characterized by abnormal NPIs.
- the method comprises sampling 304 user sessions or network transactions to provide SDR samples 305 and/or TDR samples 307 respectively or jointly forming the Communication Details Record.
- communication fault indicators, e.g. a generic fault indicator in a TDR, i.e. a network performance indicator, or in a SDR, i.e. a user experience indicator, can be extracted from the Communication Details Record.
- the method comprises assigning 309 the communication fault indicators to a predetermined fault category, i.e. anomaly class, from a plurality of different predetermined fault categories to determine the network fault.
- the assigning 309 can be performed by xDRs correlation.
- each sample is correlated with all the available Details Records provided by other data sources in order to build sets of xDRs (TDRs and SDRs).
- Each sample is then characterized by a xDRs pattern.
- a clustering step is meant to agglomerate the samples in clusters, identifying the recurrence of similar patterns.
- the method further comprises pattern clustering 311 to group anomaly patterns 313, 315, e.g. by relevance, and an optional generalization step to determine the KPI or KQI sets 319, 321 which may cause the deterioration.
- if the deterioration is affecting a KPI, the sampling step 304 selects TDR samples: a mapping table identifies the NPI(s) corresponding to the deteriorated KPI, and the sampling selects a set of TDRs affected by threshold violations of these NPI(s).
- Table 5 shows a mapping between NPIs and KPIs for RRC TDRs according to an implementation form.
- an analogous mapping table identifies the relevant UXI(s); the sampling 304 will select a set of SDRs affected by threshold violations of this UXI(s).
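The mapping-driven sampling can be sketched as a simple lookup followed by a filter. The mapping entries and field names below are illustrative placeholders, not the contents of the patent's Table 5.

```python
# Sketch of the KPI -> NPI mapping lookup that drives the sampling step.
# The mapping entries are illustrative placeholders, not the patent's Table 5.

KPI_TO_NPIS = {
    "rrc_setup_success_rate": ["rrc_setup_failure", "rrc_setup_timeout"],
}

def select_samples(deteriorated_kpi, tdrs):
    """Select the TDRs whose mapped NPIs show a threshold violation."""
    npis = KPI_TO_NPIS[deteriorated_kpi]
    return [t for t in tdrs if any(t["violations"].get(n) for n in npis)]

tdrs = [
    {"id": 1, "violations": {"rrc_setup_failure": True}},
    {"id": 2, "violations": {}},
]
print([t["id"] for t in select_samples("rrc_setup_success_rate", tdrs)])
# [1]
```

An analogous table keyed on KQIs and UXIs would select the SDR samples, as the text notes.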
- the analysis can start from a set of SDRs or TDRs samples.
- the correlation step re-builds the xDRs pattern: whatever the starting point, a TDR or a SDR, each sample will be associated to the SDR that evaluates the user session and to all the TDRs related to protocol transactions belonging to that session.
- although the correlation rules depend on the structure of the network under monitoring, it is possible to define a priori a table of common parameters for a complete set of interfaces/protocols.
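A minimal sketch of such a correlation, assuming every xDR carries a common session identifier (in practice the common parameters depend on the interfaces/protocols, as noted above):

```python
# Sketch of the correlation step: each sampled xDR is joined with all other
# available xDRs sharing the same session identifier, rebuilding the xDRs
# pattern of the session. Field names are illustrative assumptions.
from collections import defaultdict

def correlate(samples, all_xdrs, key="session_id"):
    """Associate each sampled xDR with all xDR types sharing its key."""
    by_key = defaultdict(list)
    for xdr in all_xdrs:
        by_key[xdr[key]].append(xdr["type"])
    return {s["id"]: sorted(by_key[s[key]]) for s in samples}

xdrs = [
    {"id": 1, "type": "SDR", "session_id": "A"},
    {"id": 2, "type": "RRC TDR", "session_id": "A"},
    {"id": 3, "type": "RAB TDR", "session_id": "A"},
    {"id": 4, "type": "RRC TDR", "session_id": "B"},
]
samples = [xdrs[1]]  # one TDR sample with abnormal NPIs
print(correlate(samples, xdrs))
# {2: ['RAB TDR', 'RRC TDR', 'SDR']}
```

The output per sample is exactly the xDRs tread from which the anomalies pattern is then derived.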
- the output of a correlation step is a set of xDRs patterns, each one associated to an anomalies pattern, as shown in Fig. 3.
- a clustering algorithm can agglomerate the samples, in order to identify recurrence of anomalies patterns.
- Fig. 4 to 7 show groups (clusters) 401, 403, 405 of xDR patterns (anomalies patterns) 407, 409, 411.
- the patterns 407, 409, 411 are agglomerated considering, by way of example, only the xDRs treads 402, 404, 406 that characterize the patterns.
- the patterns 407, 409, 411 are grouped when consisting of the same set of xDRs.
- the xDRs patterns have been agglomerated in 3 groups 401, 403, 405.
- the characteristic xDRs tread is highlighted in the circles on the upper-right corner of each group.
- the distance function used in this step only depends on the differences between the xDRs treads, neglecting the possible anomalies affecting each TDR.
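The first clustering step — grouping patterns that consist of the same set of xDRs, ignoring the anomaly flags — can be sketched as:

```python
# Sketch of the first clustering step: patterns are grouped when they consist
# of the same set of xDR types; the anomaly flags are neglected entirely.

def group_by_tread(patterns):
    """patterns: list of (pattern_id, xdr_types). Groups identical treads."""
    groups = {}
    for pid, xdr_types in patterns:
        groups.setdefault(frozenset(xdr_types), []).append(pid)
    return list(groups.values())

patterns = [
    ("p1", ["SDR", "RRC TDR", "RAB TDR"]),
    ("p2", ["SDR", "RRC TDR"]),
    ("p3", ["SDR", "RRC TDR", "RAB TDR"]),
]
print(group_by_tread(patterns))
# [['p1', 'p3'], ['p2']]
```

Using the tread as a dictionary key is one simple way to realize a distance that depends only on tread differences: distance zero means same group.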
- the anomalies patterns 407, 409, 411 are clustered based on a distance function that considers only the anomaly classes affecting each xDRs type.
- Fig. 5 shows a second clustering step, where the anomalies patterns 407, 409, 411 are clustered based on common anomalies.
- the small circles 501 mark the relevance of each cluster, namely the amount of patterns belonging to the cluster.
- the clustering might be applied only to the groups above a minimum population.
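The second step can be sketched with a Hamming-style distance over the anomaly flags of each xDR type; a simple greedy threshold clustering stands in here for whatever agglomeration an implementation actually uses, and the distance threshold is an illustrative assumption.

```python
# Sketch of the second clustering step: within a group sharing the same xDRs
# tread, patterns are clustered by a distance that compares only the anomaly
# class flags of each xDR type.

def anomaly_distance(a, b):
    """Hamming distance between two anomaly patterns (dicts type -> flags)."""
    return sum(
        fa != fb
        for t in a
        for fa, fb in zip(a[t], b[t])
    )

def cluster(patterns, max_dist=1):
    """Greedy clustering: join the first cluster within max_dist, else open one."""
    clusters = []
    for p in patterns:
        for c in clusters:
            if anomaly_distance(c[0], p) <= max_dist:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

p1 = {"RRC TDR": [1, 0], "SDR": [0, 1]}
p2 = {"RRC TDR": [1, 0], "SDR": [0, 0]}   # distance 1 from p1
p3 = {"RRC TDR": [0, 1], "SDR": [1, 0]}   # distance 4 from p1
print(len(cluster([p1, p2, p3])))  # 2
```

The cluster sizes then give the relevance marks (the small circles 501 in Fig. 5).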
- Fig. 6 shows a third clustering step of clusters agglomeration based on xDRs tread differences and common anomalies.
- the grouping 401, 403, 405 introduced by the first clustering step is removed, then the anomalies patterns' clusters 407, 409, 411 detected in the second step are agglomerated using a distance function that considers both xDRs tread differences (common xDR types) and anomalies affecting each xDR type. Higher weight is assigned to anomalies' differences.
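The combined distance of this third step can be sketched as a weighted sum of the two components; the weight value is an illustrative assumption, chosen only to reflect that anomaly differences count more than tread differences.

```python
# Sketch of the combined distance used in the third step: tread differences
# (xDR types present in one pattern but not the other) plus anomaly-flag
# differences on the common types, with a higher weight on the anomaly part.

def combined_distance(a, b, anomaly_weight=2.0):
    types_a, types_b = set(a), set(b)
    tread_diff = len(types_a ^ types_b)        # symmetric difference of treads
    common = types_a & types_b
    anomaly_diff = sum(
        fa != fb for t in common for fa, fb in zip(a[t], b[t])
    )
    return tread_diff + anomaly_weight * anomaly_diff

a = {"RRC TDR": [1, 0], "SDR": [0, 1]}
b = {"RRC TDR": [1, 1], "RAB TDR": [0, 0], "SDR": [0, 1]}
print(combined_distance(a, b))  # 1 tread diff + 2.0 * 1 anomaly diff = 3.0
```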
- Fig. 7 shows an implementation form of an output, where the centroid 503, 505, 507 of each cluster is finally defined as the minimum common xDRs pattern including all the anomalies that characterize the cluster.
- the highest relevance is assigned to the cluster including 11 xDRs patterns out of 17.
- the centroid of the most relevant cluster is assumed as a description of the root cause of the issue that triggered the analysis.
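Under one plausible reading of "minimum common xDRs pattern including all the anomalies", the centroid keeps the xDR types shared by every pattern in the cluster and carries, for each of them, the union (logical OR) of the anomaly flags observed across the cluster. This reading is an assumption; the patent text does not spell out the construction.

```python
# Sketch of the centroid definition, under one plausible reading: the xDR
# types shared by every pattern in the cluster, each carrying the union of
# the anomaly flags observed across the cluster.

def centroid(cluster):
    common_types = set.intersection(*(set(p) for p in cluster))
    return {
        t: [int(any(p[t][i] for p in cluster))
            for i in range(len(cluster[0][t]))]
        for t in sorted(common_types)
    }

cluster = [
    {"RRC TDR": [1, 0], "SDR": [0, 1]},
    {"RRC TDR": [1, 0], "SDR": [0, 0], "RAB TDR": [0, 0]},
]
print(centroid(cluster))
# {'RRC TDR': [1, 0], 'SDR': [0, 1]}
```

The centroid of the most relevant cluster is then taken as the description of the root cause.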
- each anomaly class can be expanded into the detailed anomalies affecting the TDR.
- each detailed anomaly corresponds to a KPI: in conclusion, an anomalies pattern can be translated into a combination of KPIs (identified by the TDRs' anomalies) and KQIs (identified by the SDRs' anomalies).
- the user's sessions are considered as a whole, using the xDR patterns to highlight the interactions between the protocol layers and user's sessions. Treating all the protocol and service layers as interdependent objects, the user-centric troubleshooting process described herein goes beyond the classical network investigation methods based on statistical analysis and hierarchical relations between Key Performance Indicators (KPIs) and Key Quality Indicators (KQIs).
- the user-centric troubleshooting process described herein comprises analyzing statistically the user sessions, considering as a whole all the aspects related to a user's session (summarized in the SDR) and to the protocol transactions involved in the implementation of the session through the network (summarized in the TDRs).
- the framework can be employed for service monitoring applications in virtualized networks, by correlating service-level SDRs, platform-level Virtual Transaction Data Records (V-TDRs) and infrastructure-level TDRs.
Abstract
The invention relates to a method for determining a network fault in a communication network, the method comprising providing (101) a Communication Details Record, the Communication Details Record indicating a fault status of user communications in the communication network, extracting (103) at least one communication fault indicator from the Communication Details Record, and assigning (105) the communication fault indicator to a predetermined fault category from a plurality of different predetermined fault categories to determine the network fault.
Description
IDENTIFYING FAULT CATEGORY PATTERNS IN A COMMUNICATION NETWORK

BACKGROUND OF THE INVENTION
Service providers are increasingly aware about the effects of user perceived service quality to their business. Measuring and improving user experience is a challenging task, which can be tackled by taking into account both technical, e.g. response times, throughput, and non- technical aspects such as user expectations, price or customer support. These measures can be retrieved from a root cause analysis for network troubleshooting in a communication network such as ITC network.
In this regard, Customer Service Assurance (CSA) platforms may provide correlation capabilities, displaying in a single view the transactions belonging to different network interfaces. The sessions affected by performance and quality deteriorations are analyzed individually, delegating to the expertise of the operator the detection of recurrent transaction failures, delays or other network glitches in correspondence to the performance issues. Another approach to network troubleshooting is based on Key Performance Indicators (KPIs) and Key Quality Indicators (KQIs) hierarchies, according to the methodology suggested by the ITU-T Recommendation E.800 "Terms and Definitions Related to Quality of Service and Network Performance Including Dependability". However, the statistical approaches do not always capture the correlation between network faults and users' perception, so that quality deteriorations perceived by the users are in many cases not mirrored by deteriorations of KPIs and KQIs.
A further approach, as described in F. Guyard, S. Beker, "Towards real-time anomalies monitoring for QoE indicators", Ann. Telecommun. (2010) 65:59-71, Dec. 2009, therefore aims at evaluating the user's perception by means of user-centric modeling, making it possible to identify QoE deteriorations within a user's session or segment of users and which Key Quality Indicators (KQIs) were involved.
SUMMARY OF THE INVENTION
It is the object of the invention to provide a concept for a user-centric network diagnosis for determining a network fault in a communication network. This object is achieved by the features of the independent claims. Further implementation forms are apparent from the dependent claims, the description and the figures.
The invention is based on the finding that the above-identified object can be achieved by analyzing the user sessions, considering the aspects related to the service provided to the user as summarized in the Session Details Records (SDRs) and to the transactions involved in the implementation of the session through the network as summarized in the Transaction Details Records (TDRs). The SDRs and the TDRs can e.g. be provided by a data source such as a "Point of Control and Observation" as described in ETSI TS 102 250-1 V2.2.1 (2011-04) - "Speech and multimedia Transmission Quality (STQ); QoS aspects for popular services in mobile networks. Part 1: Assessment of Quality of Service". The SDR and the TDRs respectively or collectively form a Communication Details Record.
The data source can generate structured records reporting two types of data:
• Network Performance Indicators (NPI): performance indicators related to the protocol transactions both for the Control and the User Plane.
• User Experience Indicators (UXI): quality indicators evaluating the quality of experience for specific services (voice, video streaming, web browsing, etc.) according to service modeling criteria.
The data source can be a software instance embedded in a Network Element, a dedicated equipment intercepting the signaling exchanged on the interconnections between Network Elements or a software agent embedded in Mobile terminals. Thus, a Customer Service Assurance (CSA) system can collect data from a variety of sources, provided by different vendors.
According to a first aspect, the invention relates to a method for determining a network fault in a communication network. The method comprises providing a Communication Details Record, the Communication Details Record indicating a fault status of user communications in the communication network, extracting at least one communication fault indicator from the Communication Details Record, and assigning the communication fault indicator to a predetermined fault category from a plurality of different predetermined fault categories to determine the network fault.
The Communication Details Record enables the exploitation of user-centric criteria for network fault analysis. In particular, the interactions among the protocols contributing to the implementation of the user's sessions may be exploited, which enables a statistical analysis of the sessions as a whole. Each individual session can be described using a standard format, considering all the network protocols and service models associated to the session. The detection of the root cause can be based on clustering the samples, identifying recurrent network behavioral patterns. The identification of the features that mostly characterize the dominant cluster, such as the tread of transactions, specific failure causes or delays, is then used as a description of the root cause. Thus, an automatic network diagnosis can be performed. The fault determining process can comprise statistically analyzing the user sessions, considering all the aspects related to the service provided to the user (summarized in the SDR) and to the transactions involved in the implementation of the session through the network (summarized in the TDRs). The Communication Details Record can be implemented as a digital data set. Therefore, the communication fault indicator can digitally be extracted from the digital data set. The predetermined fault category can be formed by a stored data matrix with an entry representing the predetermined fault category, so that assigning the communication fault indicator to the predetermined fault category can be implemented by digitally assigning the communication fault indicator to an entry of the data matrix.
The inventive approach also enables the application of the clustering algorithm to an appropriately formatted set of samples, in order to identify the root cause of the network performance deterioration. The framework can define at least one of the following parameters:
• The structure of the Transaction Details Records (TDRs) and Session Details
Records (SDRs);
• The format of the data samples to be fed to the clustering algorithm;
• The clustering algorithm;
• The mapping strategy to associate the root cause to appropriate KPIs, in order to extend the analysis to the full population of relevant network transactions.
In a first possible implementation form of the method according to the first aspect, a user communication is associated with a set of Communication Details Records, and the method comprises assigning each of the communication fault indicators from the set of Communication Details Records to a predetermined fault category, wherein, in the step of assigning, the communication fault indicators belonging to the set of Communication Details Records are assigned to a predetermined fault category pattern.

In a second possible implementation form of the method according to the first aspect as such or according to the first implementation form of the first aspect, the predetermined fault category pattern is stored to form a fault category matrix comprising matrix columns associated with different fault categories and matrix lines associated with different Communication Details Records, and wherein the communication fault indicator is assigned to at least one matrix column or matrix line.
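The fault category matrix described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the category names, the record structure and the binary encoding are all assumptions made for the example.

```python
# Illustrative fault category matrix: rows (matrix lines) are the
# Communication Details Records of one user communication, columns are the
# predetermined fault categories. All names below are assumptions.

FAULT_CATEGORIES = [
    "establishment_exceptions", "quality_overall", "quality_localized",
    "transaction_counters", "localized_anomalies", "release_exceptions",
]

def build_fault_category_matrix(records):
    """Each record maps to one matrix line; a cell is 1 when the record
    carries a communication fault indicator assigned to that category."""
    matrix = []
    for record in records:
        line = [1 if cat in record["fault_indicators"] else 0
                for cat in FAULT_CATEGORIES]
        matrix.append(line)
    return matrix

records = [
    {"type": "RRC TDR", "fault_indicators": {"establishment_exceptions"}},
    {"type": "RAB TDR", "fault_indicators": set()},
]
matrix = build_fault_category_matrix(records)
```

The resulting binary matrix is the per-session "fault category pattern" that later steps can compare and cluster.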
In a third possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the
Communication Details Record comprises a Transaction Details Record comprising at least one network performance indicator relating to a communication protocol, the network performance indicator forming the communication fault indicator.
In a fourth possible implementation form of the method according to the third possible implementation form of the first aspect, the Transaction Details Record comprises at least one indicator from one of the following types of network performance indicators: TCP/HTTP indicators, DNS indicators, PDP indicators, GPRS (GMM) indicators, CS (CC) indicators, RAB indicators, Radio Access indicators, RRC indicators.
In a fifth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the
Communication Details Record comprises a Session Details Record comprising at least one user experience indicator forming the communication fault indicator.

In a sixth possible implementation form of the method according to the fifth possible implementation form of the first aspect, the Session Details Record comprises at least one
indicator from one of the following types of user experience indicators: HTTP streaming model indicators, web browsing model indicators, CS voice model indicators.
In a seventh possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the method comprises associating each of a plurality of communications to a set of
Communication Details Records, extracting a plurality of communication fault indicators from each set of Communication Details Records, assigning each of the plurality of
communication fault indicators to a predetermined fault category from the plurality of different predetermined fault categories, wherein, in the step of assigning, the communication fault indicators belonging to each set of Communication Details Records are assigned to a predetermined fault category pattern, storing each of the plurality of predetermined fault category patterns to obtain a fault category matrix, clustering respectively the corresponding fault category matrices assigned to communication fault indicators to obtain clusters of fault category matrices, and determining the most relevant cluster amongst the fault category matrices to identify recurrent predetermined fault category patterns and finally determine the network fault.
In an eighth possible implementation form of the method according to the seventh possible implementation form of the first aspect, wherein, in the step of clustering, the respectively corresponding fault category matrices are clustered upon the basis of a first distance metric, and wherein, in the step of determining the most relevant cluster, the most relevant cluster amongst the fault category matrices is determined upon the basis of a second distance metric, the second distance metric being different from the first distance metric. The fault category matrix can correspond to an anomalies pattern as described herein.
In a ninth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the method comprises assigning the network fault to a Key Performance Indicator or to a Key Quality Indicator.
In a tenth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the method comprises intercepting data samples of user communications, and determining the
Communication Details Record from the intercepted data samples.
In an eleventh possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the
method comprises receiving the Communication Details Record from a point of control and observation in the communication network.
In a twelfth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the method comprises indicating the determined network fault.
In a thirteenth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the assigning the communication fault indicator to the predetermined fault category is performed digitally.

In a fourteenth possible implementation form of the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the Communication Details Record is provided by a network probe element, and wherein the extracting and assigning are performed by a Network Management System. The network probe element can be configured to intercept data communications in the communication network. The network probe element can be implemented in a RNC or in a mobile agent. The Communication Details Record can be transmitted towards the Network Management System by the network probe element over the communication network.
According to a second aspect, the invention relates to a computer program with a program code for performing the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect when the computer program runs on a computer.
According to a third aspect, the invention relates to a computer system being configured to perform the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect or to execute the computer program according to the second aspect.
According to a fourth aspect, the invention relates to a network system for determining a network fault in a communication network, the network system comprising a network probe element for providing a Communication Details Record, the Communication Details Record indicating a fault status of user communications in the communication network, and a
Network Management System for extracting at least one communication fault indicator from the Communication Details Record and for assigning the communication fault indicator to a
predetermined fault category from a plurality of different predetermined fault categories to determine the network fault.
According to the fourth aspect, the network probe element can be implemented in an RNC. The network system can be further configured to perform the method according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, to execute the computer program according to the second aspect, or to implement the computer system according to the third aspect.
The invention can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations thereof.
BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 shows a diagram of a method for determining a network fault;
Fig. 2 shows an anomalies pattern;
Fig. 3 shows a diagram of a method for determining a network fault;
Fig. 4 shows anomalies patterns' groups;
Fig. 5 shows anomalies patterns' clusters;
Fig. 6 shows anomalies patterns' clusters; and
Fig. 7 shows anomalies patterns' clusters.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION

Fig. 1 shows a diagram of a method for determining a network fault in a communication network. The method comprises providing 101 a Communication Details Record, the Communication Details Record indicating a fault status of user communications in the communication network, extracting 103 at least one communication fault indicator from the Communication Details Record and assigning 105 the communication fault indicator to a predetermined fault category from a plurality of different predetermined fault categories to determine the network fault.
The Communication Details Record can comprise an SDR and/or a TDR, which will be denoted as xDR in the following. The communication fault indicator can be a generic fault indicator in an xDR, such as a TDR network performance indicator or an SDR user experience indicator. Predetermined fault categories form anomaly classes as defined in Tables 1 and 2 below.
The organization of the data in structured Communication Details Records enables an automatic data processing and root cause analysis.
As to the Transaction Details Record (TDR): a TDR carries Network Performance Indicators (NPIs) related to a single session, a single protocol and a single network interface, and it is organized in 3 information blocks:
- TDR identifiers: unique identification of the TDR within the CSA system;
- TDR context: timestamps, network identifiers, user identifiers, service identifiers;
- Network Performance Indicators: performance indicators related to the protocol transaction, organized in classes of anomalies; the optimal list of performance indicators can be defined independently from the data source, making reference to the protocol/interface standard specifications.
Table 1 shows an implementation form of a TDR structure. As shown in Table 1, the performance indicators are organized in six anomaly classes. An anomaly class collects the performance indicators having similar effects on the transaction: e.g. the class Establishment exceptions includes the performance indicators affecting the success of the transaction establishment. Each class is finally characterized by an anomaly class indicator: this is a binary indicator which is flagged when the corresponding NPIs violate given thresholds. The anomaly class indicator highlights a fault or an error in the transaction, without going into protocol details.
Set | Subset | Description
---|---|---
TDR identifiers | Monitoring sub-system id | Set of parameters identifying the monitoring sub-system that originated the TDR
TDR identifiers | TDR id | TDR unique identifier
TDR context | Time start, time stop | Timestamps
TDR context | Network identifiers | Endpoints, interface, link identifier, cell, ...
TDR context | User identifiers | IMSI, PDP context, equipment type
TDR context | Service identifiers | Establishment reason, service type
Network Performance Indicators (anomaly classes) | Establishment exceptions | Exceptions in the transaction establishment (i.e. establishment failures)
Network Performance Indicators | Quality overall anomalies | Quality anomalies affecting the overall transaction (i.e. low average received signal level or quality in the radio interface)
Network Performance Indicators | Quality localized anomaly | Quality anomalies affecting the transaction locally (i.e. low average received signal level or quality in the radio interface)
Network Performance Indicators | Transaction counters anomaly | Abnormal quantitative measurements related to the overall transaction (i.e. high number of handover or inter-system handover failures etc.)
Network Performance Indicators | Localized anomalies | Localized signaling or quality anomalies apparently not affecting the correct closure of the transaction (i.e. the RRC transaction is closed by an inter-system handover)
Network Performance Indicators | Release exceptions | Exceptions in the transaction release (i.e. release failures)

Table 1
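The flagging of anomaly class indicators described above can be illustrated with a minimal sketch. The class names, NPI names and thresholds below are assumptions for the example, not values from the patent.

```python
# Illustrative only: flag an anomaly class when one of its NPIs violates a
# threshold, as described for Table 1. Classes, NPIs and thresholds are
# hypothetical placeholders.

ANOMALY_CLASS_NPIS = {
    "establishment_exceptions": ["establishment_failure_ratio"],
    "quality_overall": ["avg_received_signal_level"],
}

# (threshold, direction): flag when the NPI goes above or below the limit
THRESHOLDS = {
    "establishment_failure_ratio": (0.05, "above"),
    "avg_received_signal_level": (-110.0, "below"),
}

def anomaly_class_indicators(npis):
    """Return the binary anomaly class indicators for one TDR."""
    flags = {}
    for cls, npi_names in ANOMALY_CLASS_NPIS.items():
        flagged = False
        for name in npi_names:
            if name not in npis:
                continue
            limit, direction = THRESHOLDS[name]
            if direction == "above" and npis[name] > limit:
                flagged = True
            if direction == "below" and npis[name] < limit:
                flagged = True
        flags[cls] = 1 if flagged else 0
    return flags

flags = anomaly_class_indicators(
    {"establishment_failure_ratio": 0.12, "avg_received_signal_level": -95.0})
```

The binary output matches the role of the anomaly class indicator: it highlights a fault without exposing protocol details.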
As to a Session Details Record (SDR): a SDR is coupled with a single session of a service, e.g. voice, video or data, and it is agnostic to lower layers and network interfaces. It relates to the service performance and corresponding QoE. It also consists of three information elements:
- SDR identifiers enable a unique identification of the SDR within the CSA system;
- SDR context provides timestamps, network descriptors, user descriptors, service descriptors; and
- User Experience Indicators providing a list of quality indicators that can be used to evaluate the user's perception of the quality of a session.
As shown in Table 2, the user experience indicators are also organized in anomaly classes. As a consequence of quality threshold violations, anomaly class indicators can be flagged to denote quality issues.
Table 2
The TDRs and SDRs, i.e. xDRs, generated by a given data source may be organized hierarchically. A generic xDRs hierarchy can be populated by the xDRs provided by complementary data sources, depending on the availability of consistent performance indicators for the sessions and network protocols.
According to an implementation form, a software instance embedded in a UMTS Radio Network Controller can normally provide TDRs related to the RRC and RAB transactions monitored on the Iub interface, while a mobile agent can additionally supply TDRs related to GPRS MM and Session Management, as well as SDRs shaped according to service modeling methods.
Table 3 and Table 4 depict the hierarchy according to an implementation form. In Table 3 the TDRs and SDRs are generated by software agents installed on the mobile terminals assigned to friendly users and by probes embedded in the Radio Network Controllers serving the relevant users. In Table 4 additional xDRs are generated by probes intercepting signaling in 3GPP Core network interfaces.
Table 3
Table 4
Regarding xDRs patterns and anomalies patterns, when a user performs a service session in the network under monitoring, the available data sources generate SDRs or TDRs that outline a footprint of the session throughout the network. An xDRs pattern is the tread of xDRs belonging to the same user's session. An anomalies pattern is the tabular representation of an xDRs pattern, each xDR being characterized only by the xDR type and its anomaly class indicators.
Fig. 2 shows an anomalies pattern, according to an implementation form. The SDR and TDR types can belong to an xDR hierarchy, i.e. TDR type 1 could be a RRC TDR and TDR type 2 a RAB TDR in the hierarchy depicted in Table 3. The abstract representation of the anomalies pattern makes it possible to automatically process the Communication Details Records with an algorithm.
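One way to represent such an anomalies pattern in code is sketched below. The record structure and the anomaly class names are assumptions; the point is that each xDR is reduced to its type plus its anomaly class indicators, as in Fig. 2.

```python
# Hypothetical abstract representation of an anomalies pattern: each xDR of a
# user's session keeps only its type and its flagged anomaly classes.

session_xdrs = [
    {"xdr_type": "SDR",     "anomalies": {"quality"}},
    {"xdr_type": "RRC TDR", "anomalies": set()},
    {"xdr_type": "RAB TDR", "anomalies": {"establishment_exceptions"}},
]

def anomalies_pattern(xdrs):
    """Tabular view of a session: (xDR type, sorted anomaly classes)."""
    return [(x["xdr_type"], tuple(sorted(x["anomalies"]))) for x in xdrs]

pattern = anomalies_pattern(session_xdrs)
```

Because every pattern has this homogeneous tuple structure, patterns from different sessions can be compared and clustered automatically.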
According to some implementation forms, the method comprises the application of the clustering algorithm to a set of samples appropriately formatted, in order to identify the root cause of the network performance deterioration. The framework defines at least one of the following: the structure of the Transaction Details Records (TDRs) and Session Details
Records (SDRs), the format of the data samples to be fed to the clustering algorithm, the clustering algorithm, and the mapping strategy to associate the root cause to appropriate KPIs, in order to extend the analysis to the full population of relevant network transactions.

Fig. 3 shows a diagram of a method for determining a network fault in a communication network according to an implementation form to demonstrate a detection of an anomaly, e.g. a KQI anomaly 301 or a KPI anomaly 303.
The respective anomaly can be characterized as a deterioration of Network Performance Indicators (NPIs) or User Experience Indicators (UXIs). The criteria to trigger the diagnosis process can be:
• Statistical: Key Performance Indicators (KPIs, based on NPIs) or Key Quality Indicators (KQIs, based on UXIs) violate predefined thresholds; in this case the method comprises sampling 304 user sessions or network transactions to provide SDR samples 305 and/or TDR samples 307 forming the Communication Details Record.
• Based on events: a troubleshooting engineer may want to go deeper into the analysis of a set of transaction samples 306 or session samples 308, which are identified by SDR samples 305 characterized by abnormal UXIs and/or TDRs samples 307 characterized by abnormal NPIs.
The method comprises sampling 304 user sessions or network transactions to provide SDR samples 305 and/or TDR samples 307 respectively or jointly forming the Communication Details Record. In this step, communication fault indicators, e.g. a generic fault indicator in a TDR (a network performance indicator) or in an SDR (a user experience indicator), can also be extracted from the Communication Details Record.
In the next step, the method comprises assigning 309 the communication fault indicators to a predetermined fault category, i.e. anomaly class, from a plurality of different predetermined fault categories to determine the network fault.
The assigning 309 can be performed by xDRs correlation. According to an implementation form, each sample is correlated with all the available Details Records provided by other data sources in order to build sets of xDRs (TDRs and SDRs). Each sample is then characterized by a xDRs pattern. A clustering step is meant to agglomerate the samples in clusters, identifying the recurrence of similar patterns.
The method further comprises pattern clustering 311 to group anomaly patterns 313, 315, e.g. by relevance, and an optional step of generalization to determine the KPI or KQI sets 319, 321 which may cause the deterioration.
If the deterioration is affecting a KPI, the sampling step 304:
• Identifies the NPI(s) contributing to the KPI's formula;
• Selects a set of TDRs characterized by abnormal values of that NPI(s); the thresholds are the same as described herein to define the anomaly class indicators.
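The sampling step for a degraded KPI can be sketched as follows. The KPI name, the NPI mapping and the thresholds are placeholders for illustration, not the mapping of Table 5.

```python
# Hypothetical sketch of sampling step 304 for a degraded KPI: identify the
# NPIs contributing to the KPI, then select the TDRs with abnormal values of
# those NPIs. All names and thresholds are assumptions.

KPI_TO_NPIS = {"rrc_setup_success_rate": ["rrc_establishment_failure"]}

def sample_tdrs(kpi, tdrs, thresholds):
    npis = KPI_TO_NPIS[kpi]
    selected = []
    for tdr in tdrs:
        # keep the TDR when any contributing NPI violates its threshold
        if any(tdr["npis"].get(n, 0) > thresholds[n] for n in npis):
            selected.append(tdr)
    return selected

tdrs = [
    {"id": 1, "npis": {"rrc_establishment_failure": 1}},
    {"id": 2, "npis": {"rrc_establishment_failure": 0}},
]
selected = sample_tdrs("rrc_setup_success_rate",
                       tdrs, {"rrc_establishment_failure": 0})
```

The same thresholds that define the anomaly class indicators would drive the selection, so sampling and anomaly flagging stay consistent.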
Table 5 shows a mapping between NPIs and KPIs for RRC TDRs according to an implementation form.
Table 5
If the deterioration is affecting a KQI, an analogous mapping table identifies the relevant UXI(s); the sampling 304 will select a set of SDRs affected by threshold violations of this UXI(s).
Referring to the correlation 309, whether the sampling step 304 was involved or not, at this stage the analysis can start from a set of SDR or TDR samples. For each sample, the correlation step re-builds the xDRs pattern: whatever the starting point, a TDR or an SDR, each sample will be associated with the SDR that evaluates the user session and all the TDRs related to protocol transactions belonging to that session. Although the correlation rules depend on the structure of the network under monitoring, it is possible to define a priori a table of common parameters for a complete set of interfaces/protocols.
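A minimal correlation sketch is shown below: xDRs are grouped by a common session key so that each sample's xDRs pattern is rebuilt. The key fields (IMSI plus a session id) are an assumption standing in for the a priori table of common parameters, not the patent's actual correlation rules.

```python
# Hypothetical correlation step: group xDRs by a shared session key to
# rebuild one xDRs pattern per sample. Key fields are illustrative.

from collections import defaultdict

def correlate(xdrs, key_fields=("imsi", "session_id")):
    patterns = defaultdict(list)
    for xdr in xdrs:
        key = tuple(xdr[f] for f in key_fields)
        patterns[key].append(xdr)
    return dict(patterns)

xdrs = [
    {"imsi": "A", "session_id": 1, "xdr_type": "SDR"},
    {"imsi": "A", "session_id": 1, "xdr_type": "RRC TDR"},
    {"imsi": "B", "session_id": 7, "xdr_type": "SDR"},
]
patterns = correlate(xdrs)
```

In a real deployment the key fields would differ per interface/protocol pair, which is why the patent proposes defining the common parameters a priori in a table.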
The output of the correlation step is a set of xDRs patterns, each one associated with an anomalies pattern, as shown in Fig. 3.
As to patterns clustering 311, a clustering algorithm can agglomerate the samples, in order to identify the recurrence of anomalies patterns.
The simple clustering algorithm described hereafter is enabled by the homogeneous structure of the anomalies patterns. A short description of the three clustering steps follows, introducing efficient agglomeration and cluster centroid identification, supported by an example.
Fig. 4 to 7 show groups (clusters) 401, 403, 405 of xDR patterns (anomalies patterns) 407, 409, 411. The patterns 407, 409, 411 are agglomerated considering, by way of example, only the xDRs treads 402, 404, 406 that characterize the patterns. The patterns 407, 409, 411 are grouped when consisting of the same set of xDRs. In Fig. 4, the xDRs patterns have been agglomerated in 3 groups 401, 403, 405. The characteristic xDRs tread is highlighted in the circles on the upper-right corner of each group. The distance function used in this step only depends on the differences between the xDRs treads, neglecting the possible anomalies affecting each TDR. Within each group identified in the first step, the anomalies patterns 407, 409, 411 are clustered based on a distance function that considers only the anomaly classes affecting each xDR type.
In Fig. 5, showing a second clustering step where anomalies patterns 407, 409, 411 are clustered based on common anomalies, the small circles 501 mark the relevance of each cluster, namely the amount of patterns belonging to the cluster. In order to make this step more efficient, the clustering might be applied only to the groups above a minimum population.

Fig. 6 shows a third clustering step of cluster agglomeration based on xDRs tread differences and common anomalies. The grouping 401, 403, 405 introduced by the first clustering step is removed, then the anomalies patterns' clusters 407, 409, 411 detected in the second step are agglomerated using a distance function that considers both xDRs tread differences (common xDR types) and anomalies affecting each xDR type. A higher weight is assigned to anomalies' differences.
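The two distance notions used across the clustering steps can be sketched as below. This is a hedged illustration: the exact distance formulas and the anomaly weight are assumptions, chosen only to reflect that step 1 compares xDRs treads, step 2 compares anomaly classes, and step 3 combines both with a higher weight on anomalies.

```python
# Illustrative distance functions for the three clustering steps.
# Patterns are lists of (xdr_type, anomaly_classes_tuple) pairs.

def tread_distance(pattern_a, pattern_b):
    """Step 1: difference between xDR type sets, anomalies ignored."""
    types_a = {t for t, _ in pattern_a}
    types_b = {t for t, _ in pattern_b}
    return len(types_a ^ types_b)

def anomaly_distance(pattern_a, pattern_b):
    """Step 2: for common xDR types, count differing anomaly classes."""
    a, b = dict(pattern_a), dict(pattern_b)
    common = set(a) & set(b)
    return sum(len(set(a[t]) ^ set(b[t])) for t in common)

def combined_distance(pattern_a, pattern_b, anomaly_weight=2.0):
    """Step 3: both treads and anomalies, anomalies weighted higher."""
    return tread_distance(pattern_a, pattern_b) + \
        anomaly_weight * anomaly_distance(pattern_a, pattern_b)

p1 = [("RRC TDR", ("establishment",)), ("RAB TDR", ())]
p2 = [("RRC TDR", ()), ("RAB TDR", ()), ("SDR", ())]
```

Restricting step 2 to within-group comparisons keeps it cheap, since only patterns with identical treads are compared there.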
Fig. 7 shows an implementation form of an output, where the centroid 503, 505, 507 of each cluster is finally defined as the minimum common xDRs pattern including all the anomalies that characterize the cluster. According to an implementation form, the highest relevance is assigned to the cluster including 11 xDRs patterns out of 17. The centroid of the most relevant cluster is assumed as a description of the root cause of the issue that triggered the analysis.
As to the generalization to KPIs, the root cause, as defined at the end of the clustering step, can now be translated from the abstract notation to detailed anomalies. As shown in Table 4, each anomaly class can be expanded into the detailed anomalies affecting the TDR. Moreover, each detailed anomaly corresponds to a KPI: in conclusion, an anomalies pattern can be translated into a combination of KPIs (identified by the TDRs' anomalies) and KQIs (identified by the SDRs' anomalies).
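The translation of a cluster centroid into KPIs and KQIs can be illustrated with a toy mapping. The mapping entries below are placeholders, not the NPI-to-KPI mapping of Table 5.

```python
# Hypothetical generalization step: translate the anomalies pattern of the
# most relevant cluster's centroid into KPIs (from TDR anomalies) and KQIs
# (from SDR anomalies). The mapping table is illustrative.

ANOMALY_TO_INDICATOR = {
    ("RRC TDR", "establishment_exceptions"): "KPI: RRC setup success rate",
    ("SDR", "quality_anomalies"): "KQI: web page download time",
}

def centroid_to_indicators(centroid):
    indicators = []
    for xdr_type, anomaly_classes in centroid:
        for cls in anomaly_classes:
            indicator = ANOMALY_TO_INDICATOR.get((xdr_type, cls))
            if indicator:
                indicators.append(indicator)
    return indicators

centroid = [("RRC TDR", ("establishment_exceptions",)),
            ("SDR", ("quality_anomalies",))]
indicators = centroid_to_indicators(centroid)
```

With the root cause expressed as KPIs and KQIs, the analysis can return to a statistical check over the full population of users, as the next paragraph describes.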
According to an implementation form, the impact of the anomaly pattern on the entire population of users can be verified, going back to a statistical analysis based on KPIs and KQIs. According to an implementation form, the user's sessions are considered as a whole, using the xDR patterns to highlight the interactions between the protocol layers and user's sessions. Treating all the protocol and service layers as interdependent objects, the user-centric troubleshooting process described herein goes beyond the classical network investigation methods based on statistical analysis and hierarchical relations between key performance indicators (KPIs) and key quality indicators (KQIs).
According to an implementation form, the user-centric troubleshooting process described herein comprises analyzing statistically the user sessions, considering as a whole all the aspects related to a user's session (summarized in the SDR) and to the protocol transactions involved in the implementation of the session through the network (summarized in the TDRs). The framework can be employed for applications in the ambit of the service monitoring in virtualized networks, by correlating service level SDRs, platform level Virtual Transaction Data Records (V-TDRs) and infrastructure level TDRs.
Claims
1. Method for determining a network fault in a communication network, the method comprising: providing (101) a Communication Details Record, the Communication Details Record indicating a fault status of user communications in the communication network; extracting (103) at least one communication fault indicator from the Communication Details Record; and assigning (105) the communication fault indicator to a predetermined fault category from a plurality of different predetermined fault categories to determine the network fault.
2. The method according to claim 1, wherein a user communication is associated with a set of Communication Details Records, and wherein the method comprises: assigning each of the communication fault indicators from the set of Communication Details Records to a predetermined fault category, and wherein, in the step of assigning, the communication fault indicators belonging to the set of Communication Details Records are assigned to a predetermined fault category pattern.
3. The method according to claim 1 or 2, wherein the predetermined fault category pattern is stored to form a fault category matrix comprising matrix columns associated with different fault categories and matrix lines associated to different Communication Details Records, and wherein the communication fault indicator is assigned to at least one matrix column or matrix line.
4. The method according to any one of the preceding claims, wherein the
Communication Details Record comprises a Transaction Details Record comprising at least one network performance indicator relating to a communication protocol, the network performance indicator forming the communication fault indicator.
5. The method according to claim 4, wherein the Transaction Details Record comprises at least one indicator from one of the following types of network performance indicators: TCP/HTTP indicators, DNS indicators, PDP indicators, GPRS (GMM) indicators, CS (CC) indicators, RAB indicators, Radio Access indicators, RRC indicators.
6. The method according to any one of the preceding claims, wherein the Communication Details Record comprises a Session Details Record comprising at least one user experience indicator forming the communication fault indicator.
7. The method according to claim 6, wherein the Session Details Record comprises at least one indicator from one of the following types of user experience indicators: HTTP streaming model indicators, web browsing model indicators, CS voice model indicators.
8. The method according to any one of the preceding claims, comprising: associating each of a plurality of communications to a set of Communication Details
Records; extracting a plurality of communication fault indicators from each set of Communication Details Records; assigning each of the plurality of communication fault indicators to a predetermined fault category from the plurality of different predetermined fault categories; wherein, in the step of assigning (101), the communication fault indicators belonging to each set of Communication Details Records are assigned to a predetermined fault category pattern; storing each of the plurality of predetermined fault category patterns to obtain a fault category matrix; clustering respectively the corresponding fault category matrices assigned to communication fault indicators to obtain clusters of fault category matrices; and determining the most relevant cluster amongst the fault category matrices to identify recurrent predetermined fault category patterns and finally determine the network fault.
9. The method according to claim 8, wherein, in the step of clustering, the respectively corresponding fault category matrices are clustered upon the basis of a first distance metric, and wherein, in the step of determining the most relevant cluster, the most relevant cluster amongst the fault category matrices is determined upon the basis of a second distance metric, the second distance metric being different from the first distance metric.
10. The method according to any one of the preceding claims, comprising assigning the network fault to a Key Performance Indicator or to a Key Quality Indicator.
11. The method according to any one of the preceding claims, comprising intercepting data samples of user communications, and determining the Communication Details Record from the intercepted data samples.
12. The method according to any one of the preceding claims, comprising receiving the Communication Details Record from a point of control and observation in the communication network.
13. The method according to any one of the preceding claims, wherein the assigning the communication fault indicator to the predetermined fault category is performed digitally.
14. The method according to any one of the preceding claims, wherein the
Communication Details Record is provided by a network probe element, and wherein the extracting (103) and assigning (105) are performed by a Network Management System.
15. Computer program with a program code for performing the method of anyone of claims 1 to 14 when the computer program runs on a computer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2012/068092 WO2014040633A1 (en) | 2012-09-14 | 2012-09-14 | Identifying fault category patterns in a communication network |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014040633A1 true WO2014040633A1 (en) | 2014-03-20 |
Family
ID=46888413
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2012/068092 WO2014040633A1 (en) | 2012-09-14 | 2012-09-14 | Identifying fault category patterns in a communication network |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2014040633A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1784027A1 (en) * | 2005-11-07 | 2007-05-09 | Accenture Global Services GmbH | Network performance management |
WO2011014169A1 (en) * | 2009-07-30 | 2011-02-03 | Hewlett-Packard Development Company, L.P. | Constructing a bayesian network based on received events associated with network entities |
2012
- 2012-09-14 WO PCT/EP2012/068092 patent/WO2014040633A1/en active Application Filing
Non-Patent Citations (2)
Title |
---|
F. GUYARD; S. BEKER: "Towards real-time anomalies monitoring for QoE indicators", ANN. TELECOMMUN., vol. 65, December 2009 (2009-12-01), pages 59 - 71
KHANNA G ET AL: "Distributed Diagnosis of Failures in a Three Tier E-Commerce System", RELIABLE DISTRIBUTED SYSTEMS, 2007. SRDS 2007. 26TH IEEE INTERNATIONAL SYMPOSIUM ON, IEEE, PISCATAWAY, NJ, USA, 10 October 2007 (2007-10-10), pages 185 - 198, XP031572962, ISBN: 978-0-7695-2995-0 * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015183322A1 (en) * | 2014-05-30 | 2015-12-03 | Hewlett-Packard Development Company, L.P. | Evaluating user experience |
US10725891B2 (en) | 2014-05-30 | 2020-07-28 | Micro Focus Llc | Evaluating user experience |
WO2015187156A1 (en) * | 2014-06-04 | 2015-12-10 | Hewlett-Packard Development Company, L.P. | Evaluating user experience |
US9424121B2 (en) | 2014-12-08 | 2016-08-23 | Alcatel Lucent | Root cause analysis for service degradation in computer networks |
WO2016169616A1 (en) | 2015-04-24 | 2016-10-27 | Telefonaktiebolaget Lm Ericsson (Publ) | Fault diagnosis in networks |
US10498586B2 (en) | 2015-04-24 | 2019-12-03 | Telefonaktiebolaget Lm Ericsson (Publ) | Fault diagnosis in networks |
US10531325B2 (en) | 2015-05-20 | 2020-01-07 | Telefonaktiebolaget Lm Ericsson (Publ) | First network node, method therein, computer program and computer-readable medium comprising the computer program for determining whether a performance of a cell is degraded or not |
WO2017220107A1 (en) | 2016-06-20 | 2017-12-28 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and network node for detecting degradation of metric of telecommunications network |
US10475090B2 (en) | 2016-07-11 | 2019-11-12 | Micro Focus Llc | Calculating user experience scores |
US11388040B2 (en) | 2018-10-31 | 2022-07-12 | EXFO Solutions SAS | Automatic root cause diagnosis in networks |
US11736339B2 (en) | 2018-10-31 | 2023-08-22 | EXFO Solutions SAS | Automatic root cause diagnosis in networks |
CN111224805A (en) * | 2018-11-26 | 2020-06-02 | 中兴通讯股份有限公司 | Network fault root cause detection method, system and storage medium |
US11645293B2 (en) | 2018-12-11 | 2023-05-09 | EXFO Solutions SAS | Anomaly detection in big data time series analysis |
US11496353B2 (en) | 2019-05-30 | 2022-11-08 | Samsung Electronics Co., Ltd. | Root cause analysis and automation using machine learning |
WO2020242275A1 (en) * | 2019-05-30 | 2020-12-03 | Samsung Electronics Co., Ltd. | Root cause analysis and automation using machine learning |
US11138163B2 (en) | 2019-07-11 | 2021-10-05 | EXFO Solutions SAS | Automatic root cause diagnosis in networks based on hypothesis testing |
US11522766B2 (en) | 2020-02-12 | 2022-12-06 | EXFO Solutions SAS | Method and system for determining root-cause diagnosis of events occurring during the operation of a communication network |
CN112491595A (en) * | 2020-11-12 | 2021-03-12 | 杭州迪普信息技术有限公司 | Fault area positioning method, device, equipment and computer readable storage medium |
US11611500B2 (en) | 2021-07-29 | 2023-03-21 | Hewlett Packard Enterprise Development Lp | Automated network analysis using a sensor |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2014040633A1 (en) | Identifying fault category patterns in a communication network | |
US7668109B2 (en) | Method for determining mobile terminal performance in a running wireless network | |
CN110209820B (en) | User identification detection method, device and storage medium | |
US8144599B2 (en) | Binary class based analysis and monitoring | |
CN111325463A (en) | Data quality detection method, device, equipment and computer readable storage medium | |
CN106445796B (en) | Automatic detection method and device for cheating channel | |
CN109347688B (en) | Method and device for positioning fault in wireless local area network | |
US20150019916A1 (en) | System and method for identifying problems on a network | |
CN108093427B (en) | VoLTE service quality evaluation method and system | |
CN101843134A (en) | Method and monitoring component for network traffic monitoring | |
CN113328872A (en) | Fault repair method, device and storage medium | |
US11856426B2 (en) | Network analytics | |
WO2009022953A1 (en) | Monitoring individual data flow performance | |
CN103581976B (en) | The recognition methods of community and device | |
WO2015003551A1 (en) | Network testing method and data collection method thereof, and network testing apparatus and system | |
CN110856188B (en) | Communication method, apparatus, system, and computer-readable storage medium | |
CN106998256A (en) | A kind of communication failure localization method and server | |
CN109921928A (en) | Switch network monitoring method, device, computer equipment and storage medium | |
CN109150794B (en) | VoLTE voice service quality analysis processing method and device | |
CN108111346A (en) | The definite method, apparatus and storage medium of frequent item set in warning association analysis | |
JP2015106220A (en) | Sensory communication quality estimation device and sensory communication quality estimation program | |
Rizwan et al. | A zero-touch network service management approach using AI-enabled CDR analysis
CN106878965B (en) | A kind of method and apparatus for assessing mobile terminal performance | |
US20200076707A1 (en) | Autonomic or AI-assisted validation, decision making, troubleshooting and/or performance enhancement within a telecommunications network | |
US10805186B2 (en) | Mobile communication network failure monitoring system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12761946 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 12761946 Country of ref document: EP Kind code of ref document: A1 |