EP4029196A1 - System and method for scenario-driven smart filtering for network monitoring - Google Patents

System and method for scenario-driven smart filtering for network monitoring

Info

Publication number
EP4029196A1
EP4029196A1 (application EP20772111.9A)
Authority
EP
European Patent Office
Prior art keywords
network
model
monitoring system
metrics
network monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP20772111.9A
Other languages
German (de)
English (en)
Inventor
Zsófia KALLUS
Tamas Borsos
Péter KERSCH
Peter Vaderna
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson AB filed Critical Telefonaktiebolaget LM Ericsson AB
Publication of EP4029196A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5003 Managing SLA; Interaction between SLA and QoS
    • H04L41/5009 Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3006 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3452 Performance evaluation by statistical analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting
    • H04L41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H04L41/0836 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability to enhance reliability, e.g. reduce downtime
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/091 Measuring contribution of individual network components to actual service level
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/10 Scheduling measurement reports; Arrangements for measurement reports
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/147 Network analysis or design for predicting network behaviour
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/02 Capturing of monitoring data
    • H04L43/028 Capturing of monitoring data by filtering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/04 Arrangements for maintaining operational condition

Definitions

  • the present disclosure relates generally to network monitoring systems, and more particularly, to the automatic optimization of network monitoring using machine learning (ML) models.
  • ML machine learning
  • QoE Quality of Experience
  • End-to-end service assurance relies on the horizontal monitoring of low-level event streams from a variety of sources.
  • These low-level event streams are heterogeneous event time series that can be correlated on a per user basis, such as is done in the Ericsson Expert Analytics (EEA) solution, to create per user session descriptors, for example, for the duration of a call.
  • EEA Ericsson Expert Analytics
  • these network monitoring reports should also provide information useful for root cause analysis.
  • Embodiments of the present disclosure leverage machine learning (ML) models and state-of-the-art feature impact analysis to enable the automatic optimization of network monitoring in a closed-loop, flexible control framework.
  • the present disclosure provides a closed-loop method implemented by a network node of a communication network.
  • the method comprises identifying a service assurance scenario for a user equipment (UE) based on feedback received from a network monitoring system, selecting, from a repository, a machine learning (ML) model and a corresponding model explainer based on the identified service assurance scenario, determining, based on a feature impact analysis of the ML model selected from the repository, a list of N features defining one or more metrics to be measured and reported by the network monitoring system, and configuring the network monitoring system to measure and report the one or more metrics based on the list of N features.
  • UE user equipment
  • embodiments of the present disclosure also provide a network node in a communication network.
  • the network node comprises an interface circuit and a processing circuit.
  • the interface circuit is configured for communication with one or more nodes in the communication network.
  • the processing circuit is configured to identify a service assurance scenario for a user equipment (UE) based on feedback received from a network monitoring system, select, from a repository, a machine learning (ML) model and a corresponding model explainer based on the identified service assurance scenario, determine, based on a feature impact analysis of the ML model selected from the repository, a list of N features defining one or more metrics to be measured and reported by the network monitoring system, and configure the network monitoring system to measure and report the one or more metrics based on the list of N features.
  • the present disclosure provides a computer program product stored on a non-transitory computer readable medium.
  • the computer program product comprises instructions that, when executed by at least one processor of a network node, causes the network node to identify a service assurance scenario for a user equipment (UE) based on feedback received from a network monitoring system, select, from a repository, a machine learning (ML) model and a corresponding model explainer based on the identified service assurance scenario, determine, based on a feature impact analysis of the ML model selected from the repository, a list of N features defining one or more metrics to be measured and reported by the network monitoring system, and configure the network monitoring system to measure and report the one or more metrics based on the list of N features.
  • Figure 1 illustrates a general network monitoring architecture.
  • Figure 2 illustrates an exemplary communication network according to one embodiment of the present disclosure.
  • Figure 3 illustrates an exemplary architecture, functional units, and logical steps of a closed-loop control of a flexible monitoring system that implements smart-filtering logic for resource optimization according to one embodiment of the present disclosure.
  • Figure 4 is a graph illustrating the top N features in terms of aggregate feature impact, and the top N features in terms of top maximum absolute model impact, for individual samples according to one embodiment of the present disclosure.
  • Figure 5 explains the feature naming convention used in the embodiment of Figure 4.
  • Figure 6 illustrates an exemplary closed-loop method implemented by a network node in a communication network according to one embodiment of the present disclosure.
  • Figure 7 is a block diagram illustrating some components of a network node configured according to one embodiment of the present disclosure.
  • Figure 8 is a functional block diagram illustrating some functional modules/units executing on a processing circuit of a network node according to one embodiment of the present disclosure.
  • Embodiments of the present disclosure leverage machine learning (ML) models and their explainers to automatically optimize network monitoring configuration for service assurance.
  • the present embodiments provide a near real-time solution where the active services of a user session, as well as any additional context information for the user, are used to identify applicable service assurance scenarios.
  • context information may be used according to one embodiment to determine that a user is located at or near the edge of a cell and/or is experiencing undesirable radio conditions.
  • the disclosure dynamically specifies the low-level features to be reported with highest impact on the service-level performance indicators. Not only does this “smart filtering” aspect of the present embodiments serve as input for a network configuration management, but it also creates a dynamic and optimized closed-loop control solution for flexible monitoring.
  • Figure 1 is a functional block diagram illustrating a general architecture 10 for monitoring a network.
  • a Network Management System 30 receives raw measurement data from different domains 20 of the network.
  • Examples of such domains 20 include, but are not limited to, the Radio Access Network (RAN) 22, the core network 24, the IP Multimedia Subsystem (IMS) 26, and the passive probing 28 on transmission lines.
  • the Network Management System 30, usually comprising an Operating Support System (OSS) 32 and Business Support System (BSS) 34, is responsible for collecting and processing the raw data into measurements.
  • OSS Operating Support System
  • BSS Business Support System
  • the node logging and reporting functions are configurable in mobile networks. Logging can, in some cases, be highly detailed such that it is possible to generate fine granularity reports.
  • the radio nodes in 4G mobile networks are able to report per user events and measurements from various protocol layers down to sub-second granularity in hundreds of different report types and including thousands of different event parameters. In 5G networks, the possibility for even more detailed logging is expected.
  • Another technique that may be used in service assurance is Machine Learning (ML)/Artificial Intelligence (AI).
  • ML/AI Machine Learning/Artificial Intelligence
  • One dominant area in adopting ML/AI for service assurance lies in the detection of anomalies and performance degradation. While each sub-system of a network can be treated as a separate model, predictive models for end-to-end QoE metrics operate based on the results of horizontal monitoring of network components. The resulting heterogeneous event time series need to be vectorized to create the input of the ML models. Ultimately, root cause analysis rules created by domain experts should be replaced by automated methods. This would allow the implementation of closed feedback loops for the network management system to self-correct sub-system configurations where possible.
  • interpretable ML is a hot topic for ML research efforts.
  • the goal of interpretable ML research is to explain the inner logic of a trained model.
  • Explanations can be provided, for example, in the form of a feature impact analysis. For example, given a trained model, (e.g., a boosted decision tree or a neural network) and an input vector, the research efforts aim to determine the role of each feature in forming the output of the model (i.e., the inferred label prediction).
  • In the SHapley Additive exPlanations (SHAP) method, for example (see, e.g., “A Unified Approach to Interpreting Model Predictions”, arXiv:1705.07874v2, 2017, Lundberg et al.), an “explainer” or “model explainer” is generated from an ML model. Multiple factors are considered when model explainers perform feature impact analysis.
  • One consideration, for example, is the overall absolute importance of a given feature, or the distribution of its per-input-vector importance, as measured from a set of input vectors.
  • the effect of a feature relative to the average effect of all features is also considered to determine whether the feature is pushing the output over an average with a positive effect, or pulling the output below the average with a negative effect. These effects can be regarded as a force, with both a direction and a quantified strength.
  • an order of importance can be created for the features or single input vectors can be analyzed.
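  • The role of each feature can be quantified exactly for small models by enumerating feature coalitions; this is the quantity that the SHAP method approximates efficiently for large models. The following sketch is illustrative only — the toy QoE model, feature names, and values are hypothetical, not taken from the present disclosure:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline, features):
    """Exact Shapley values by enumerating feature coalitions.

    `model` maps a dict of feature values to a prediction; features
    absent from a coalition are filled from `baseline` (the 'average'
    input). The sign of each value is the 'force' direction: positive
    pushes the output above the baseline prediction, negative below.
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley coalition weight |S|! (n-|S|-1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: x[g] if (g in S or g == f) else baseline[g] for g in features}
                without_f = {g: x[g] if g in S else baseline[g] for g in features}
                total += weight * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

# Toy 'QoE model': SINR pushes the score up, packet loss pulls it down.
def toy_model(v):
    return 3.0 + 0.1 * v["sinr"] - 0.5 * v["loss"]

baseline = {"sinr": 10.0, "loss": 0.0}   # average network conditions
sample = {"sinr": 2.0, "loss": 3.0}      # a degraded session
phi = shapley_values(toy_model, sample, baseline, ["sinr", "loss"])
# For a linear model each Shapley value is coefficient * (x - baseline):
# phi["sinr"] = 0.1 * (2 - 10) = -0.8 ; phi["loss"] = -0.5 * (3 - 0) = -1.5
```

By construction the values sum to the gap between the sample prediction and the baseline prediction, which is what makes them usable as a per-input-vector "force" decomposition.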
  • Figure 2 illustrates a wireless communication network 40 according to the NR standard currently being developed by the Third Generation Partnership Project (3GPP).
  • the wireless communication network 40 comprises one or more base stations 50 providing service to user equipment (UEs) 60a, 60b in respective cells 70 of the wireless communication network 40.
  • the base stations 50 are also referred to as Evolved NodeBs (eNBs) and gNodeBs (gNBs) in 3GPP standards.
  • eNBs Evolved NodeBs
  • gNBs gNodeBs
  • One feature of NR networks is the ability of the base stations 50 to transmit and/or receive on multiple beams in the same cell 70.
  • Figure 2 illustrates two such beams 72a, 72b (collectively, “beams 72”), although the number of beams 72 in a cell 70 may be different.
  • the UEs 60a, 60b may comprise any type of equipment capable of communicating with the base station 50 over a wireless communication channel.
  • UEs 60a, 60b may comprise cellular telephones, smart phones, laptop computers, notebook computers, tablets, machine-to-machine (M2M) devices (also known as machine type communication (MTC) devices), embedded devices, wireless sensors, or other types of wireless end user devices capable of communicating over wireless communication networks 40.
  • M2M machine-to-machine
  • MTC machine type communication
  • embodiments of the present disclosure leverage ML models and state-of-the-art feature impact analysis to enable the automatic optimization of network monitoring in a closed-loop, flexible control framework.
  • the ML models use low-level network measurement inputs to predict service experience measures.
  • the automated feature impact analysis methods are performed on top of these models to reveal the relative predictive power of each low-level input feature.
  • the network monitoring can therefore dynamically be configured in a closed-loop manner based on this analysis to report only on those features that are determined to have the most impact on the indicators associated with the service-level performance.
  • a “scenario” is defined as the context-dependent list of active ML models that are used either to infer quality of experience metrics for active services, or to perform root cause analysis upon service degradations. Scenarios can be detected dynamically, and a configuration management system of the present disclosure will receive, within defined monitoring constraints, information representing a union of the top most impactful features (i.e., the features that are determined as having the most impact on service-level performance indicators) from each of the corresponding ML models.
  • This scenario detection can be performed, according to the present embodiments, on a per-subscriber basis, a per-network function basis, or on a network node basis.
  • embodiments described herein provide advantages and benefits that conventional systems and methods of network monitoring do not or cannot provide.
  • embodiments of the present disclosure are referred to as performing “smart filtering” since they are automated and data-driven, and thus, minimize the reporting of low-level dimensions.
  • Such “smart filtering” reduces the monitoring load with minimal loss of performance in service assurance functionalities.
  • Because the present embodiments are automatic and data-driven, they create a closed-loop control for various network monitoring optimization problems, such as the issues related to the tradeoff between network observability and monitoring load. Additionally, with the present embodiments, important low-level network metrics that are required to characterize a service QoE are automatically identified and configured to be reported by the monitoring system. This advantageously provides good network observability dynamically adapted to eventual changes, while keeping the monitoring load as low as possible.
  • the closed loop automation of the present embodiments also decreases network operations costs and makes it easier and faster to rollout new complex network services (e.g., 5G, NB-IoT, delay critical industry systems, etc.).
  • a system and method configured according to the present disclosure leverages the ML models that are built for service assurance use cases for smart filtering optimization techniques in order to monitor the system more efficiently.
  • the system utilizes a Network Scenario Library (NSL).
  • NSL represents a knowledge base comprising information related to the services and QoE measures.
  • the library comprises expert knowledge related to the available network services and service assurance scenarios. For each type of service that is available, the library specifies the QoE metrics and the corresponding performance indicators along with any extra situational measures or descriptors.
  • the NSL also comprises, for each service type and QoE measure pair, a method for embedding raw event streams and a trained ML model (e.g., an extreme boosted tree regressor).
  • One or more corresponding model explainers are also available on a per-model basis from the NSL.
  • the model explainers provide a list of the top N features of the corresponding ML model along with an indication of each feature’s respective relative importance in the QoE prediction process.
  • N can be fixed, or N can be an optional configuration parameter of the optimization, or N can be dynamically derived from network monitoring and service assurance constraints (e.g., maximum network monitoring load, maximum model degradation, etc.).
  • training and/or retraining the models could be performed periodically or continuously, or online when necessary, depending on the timescale of changes in the underlying network.
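  • As an illustration only, the per-(service type, QoE measure) structure of such a library — with N either fixed, configured, or derived from a monitoring-load constraint — might be sketched as follows (all names and the canned explainer ranking are hypothetical, not part of the disclosure):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class ScenarioEntry:
    """One (service type, QoE measure) pair in the Network Scenario Library."""
    embed: Callable       # method for embedding raw event streams
    model: Callable       # trained QoE inference model
    explainer: Callable   # returns [(feature, importance), ...], sorted by impact

class NetworkScenarioLibrary:
    def __init__(self):
        self._entries: Dict[Tuple[str, str], ScenarioEntry] = {}

    def register(self, service: str, qoe_metric: str, entry: ScenarioEntry):
        self._entries[(service, qoe_metric)] = entry

    def top_features(self, service: str, qoe_metric: str,
                     n: int = None, max_load: int = None) -> List[str]:
        """Top-N features; N may be fixed, configured, or derived from a
        monitoring constraint (modeled here as a cap on reported metrics)."""
        ranked = self._entries[(service, qoe_metric)].explainer()
        if n is None:
            n = max_load if max_load is not None else len(ranked)
        return [feat for feat, _ in ranked[:n]]

# Hypothetical VoLTE/MOS entry with a canned explainer ranking.
nsl = NetworkScenarioLibrary()
nsl.register("volte", "mos", ScenarioEntry(
    embed=lambda events: events,
    model=lambda x: 4.0,
    explainer=lambda: [("ul_pusch_sinr", 0.9), ("dl_cqi", 0.4), ("rtt", 0.1)],
))
print(nsl.top_features("volte", "mos", n=2))  # ['ul_pusch_sinr', 'dl_cqi']
```

In a real deployment the explainer would be regenerated whenever the model is retrained, so the ranking returned by the lookup always reflects the current model.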
  • Figure 3 illustrates an architecture, functional units, and logical steps of a closed-loop control of a flexible monitoring system that implements smart-filtering logic for resource optimization according to one embodiment of the present disclosure.
  • the filtering is data-driven and leverages ML models created for Service Assurance QoE inference.
  • the architecture comprises a Service Detector 80, a Scenario Detector 82, a Network Scenario Library (NSL) 84, ML model(s) and model explainers 86, a Configuration Manager 88, a Network Monitoring System 90, and a Service QoE Inference function 92.
  • NSL Network Scenario Library
  • the Service Detector 80 identifies the active service(s) in a user session from the Network Monitoring feedback.
  • the detected services are input into the Scenario Detector 82, where the Network Scenario Library (NSL) 84 is used to acquire the corresponding ML and model explainer(s) 86, respectively.
  • a respective list of the top N features considered to have the most impact on the predictions of service-level performance is generated. That is, the top N features represent the low-level metrics that can be logged and reported as having the highest impact on one or more target variables in the ML model.
  • the Configuration Manager 88 sends a list of events/reports to the Network Monitoring System 90 identifying the events to be activated in a specified node for a given UE 60.
  • the Configuration Manager 88 provides a rule set, a condition list, or a program agent to be activated in the given nodes, which may be required if local decisions are to be made. Some possible reasons for making local decisions may be (i) the need to make such decisions in real time; or (ii) the need to use a wider set of internal events/reports to start/stop/select events for final reporting.
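  • A condition list of this kind, evaluated locally in a node to decide in real time which configured events should be reported for a given UE, could look like the following sketch (the rules and measurement names are hypothetical, not part of the disclosure):

```python
# Hypothetical condition list pushed by a configuration manager: each rule
# names an event to activate and the local condition under which to report it.
RULES = [
    {"event": "ul_pusch_sinr_report", "when": lambda m: m.get("sinr_db", 99.0) < 5.0},
    {"event": "handover_trace",       "when": lambda m: m.get("cell_edge", False)},
]

def events_to_report(local_measurements: dict) -> list:
    """Local, real-time decision: which configured events fire for this UE,
    given a wider set of internal measurements available only in the node."""
    return [r["event"] for r in RULES if r["when"](local_measurements)]

print(events_to_report({"sinr_db": 2.0, "cell_edge": True}))
# ['ul_pusch_sinr_report', 'handover_trace']
print(events_to_report({"sinr_db": 20.0}))  # []
```

The point of evaluating the conditions in the node rather than centrally is that the node can consult measurements that are never exported, and can react within a single reporting period.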
  • the method according to the present embodiments helps to ensure that the monitoring optimization best serves the service assurance scenarios that are related to the active services of the user.
  • control loop is closed within the nodes themselves.
  • in one embodiment, the Configuration Manager 88 can implement smart filtering. In another embodiment, however, the Configuration Manager 88 is deployed within the OSS system 32.
  • the embodiment seen in Figure 3 deploys solutions for the automated optimization of near real-time monitoring in communication networks, such as the one in network 40 seen in Figure 2.
  • the present embodiments are not so limited. According to the present disclosure, the components seen in Figure 3 can be part of a local system configuration solution, but may also be realized as cloud-native micro-services.
  • a first embodiment is implemented using actual mobile network data.
  • the first embodiment considers a single, simplified scenario for Voice over LTE (VoLTE) service assurance where the VoLTE QoE is measured via a Mean Opinion Score (MOS) metric on a scale of 1 - 5.
  • VoLTE Voice over LTE
  • MOS Mean Opinion Score
  • low-level RAN, core network, and IMS reports are correlated into VoLTE call session records from both legs of each call. Each call has been partitioned into 10-second-long slices.
  • 1) vectorized representations containing >1000 parameters (input features) are created, and 2) a MOS score is computed from RTCP reports sent by a subset of the UEs (labels).
  • decision tree ensemble models are trained to infer the VoLTE MOS metric.
  • a feature impact analysis is then performed on the model using the SHAP methodology.
  • Figure 4 is a graph illustrating an overview of which features are most important for a model.
  • the graph plots the SHAP values of some features for samples.
  • Figure 4 also illustrates a bar graph 110 indicating the impact of each feature 100 on the model output from low impact to high impact.
  • the top feature 102 (i.e., “ul_ran_period_state_ave_ul_pusch_sinr”), indicated by bar 112, is the top feature based on the aggregate model impact.
  • a set of other features 106 relate to features having a small aggregate impact on the model output, but as indicated by the corresponding bars 116, can still have a high impact individually in rare cases. Typically, such high impacts are associated with failure events.
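  • The distinction drawn in Figure 4 — features with high aggregate impact versus features with small aggregate impact but high per-sample impact in rare failure cases — can be illustrated with a small sketch (the feature names and impact values below are made up for illustration, not taken from the data set of this embodiment):

```python
# Per-sample feature impacts (e.g., SHAP values); rows = call-session slices.
impacts = {
    "ave_ul_pusch_sinr": [-0.8, -0.6, -0.7, -0.5],  # high impact on most samples
    "dl_cqi_0":          [ 0.1,  0.0,  0.1,  0.2],
    "rlc_failure_flag":  [ 0.0,  0.0, -2.5,  0.0],  # rare but severe failure event
}

def rank(score):
    """Features ordered by a per-feature score over their impact samples."""
    return sorted(impacts, key=lambda f: score(impacts[f]), reverse=True)

mean_abs = rank(lambda v: sum(abs(x) for x in v) / len(v))  # aggregate impact
max_abs  = rank(lambda v: max(abs(x) for x in v))           # per-sample peak impact

print(mean_abs)  # ['ave_ul_pusch_sinr', 'rlc_failure_flag', 'dl_cqi_0']
print(max_abs)   # ['rlc_failure_flag', 'ave_ul_pusch_sinr', 'dl_cqi_0']
```

The two rankings disagree exactly where Figure 4 does: the failure-related feature is mid-ranked on aggregate impact but tops the maximum-absolute-impact ranking.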
  • Figure 5 illustrates the feature naming convention explanations 120 for the embodiment of Figure 4.
  • the feature name is a concatenation of:
  • the “dl” is the Direction portion of the feature name and indicates that the feature represents a measurement on the downlink leg of the RTP stream.
  • the “ran_period_state” is the Blockname portion of the feature name and indicates that the measurement refers to RAN data collected for the entire duration of the analyzed call session slice.
  • the “cqi” is the EventName/Event Parameter portion of the feature name and is the name of the actual measured RAN metric.
  • the “0” is the Event Value portion of the feature name and indicates a value for the CQI.
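  • Under this convention a feature name can be split back into its portions. The sketch below assumes a known list of block names, since multi-token block names such as “ran_period_state” make naive splitting on “_” ambiguous; the example name and block list are illustrative only:

```python
# Known block names from the naming convention; extend as needed.
BLOCK_NAMES = ["ran_period_state"]  # assumed, not an exhaustive list

def parse_feature_name(name: str) -> dict:
    """Split '<direction>_<blockname>_<event>_<value>' into its portions."""
    direction, rest = name.split("_", 1)
    for block in BLOCK_NAMES:
        if rest.startswith(block + "_"):
            event_and_value = rest[len(block) + 1:]
            # The event value is the last underscore-separated token.
            event, _, value = event_and_value.rpartition("_")
            return {"direction": direction, "block": block,
                    "event": event, "value": value}
    raise ValueError(f"unknown block name in {name!r}")

print(parse_feature_name("dl_ran_period_state_cqi_0"))
# {'direction': 'dl', 'block': 'ran_period_state', 'event': 'cqi', 'value': '0'}
```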
  • scenario detection is described for a mobile network subscriber X.
  • the Service Detector 80 identifies the start of this VoLTE session from IMS signaling.
  • the NSL 84 contains an ML model for the VoLTE service that is used to infer a MOS metric for this service.
  • the Scenario Detector 82 then obtains a list of the top low-level network metrics that are required for this model from NSL 84 and provides the list to the Configuration Manager 88 with instructions to activate the monitoring of these metrics for subscriber X.
  • in another example scenario, NSL 84 can be configured to store a plurality of ML models for the YouTube service.
  • NSL 84 in this embodiment stores three ML models - one ML model to infer YouTube session boundaries, one ML model to infer video bitrate, and one ML model to infer stall time metrics.
  • the Service Detector 80 in this example scenario is configured to determine that subscriber X has started watching the YouTube video based on the DNS requests made by subscriber X for YouTube video servers. Additionally, Service Detector 80 determines the set of QoE metrics to be inferred for YouTube sessions and fetches the corresponding ML models and their explainers provided by NSL 84. The model explainers provide a list of the top N most impactful features required for those models to the Configuration Manager 88, along with an instruction to activate the monitoring of the measurements corresponding to those features for subscriber X.
  • the present disclosure can be implemented when subscribers are in fixed positions, and when they are moving. For example, consider a situation in which subscriber X is sitting on a bus while watching the YouTube video. At some point, as the bus is moving, the bus would approach a cell-edge and the radio conditions would get worse. Responsive to detecting such a cell-edge situation, the Scenario Detector 82 would obtain an additional fourth ML model from NSL 84 - used for in-depth radio level root cause analysis - and provide the list of the top low-level network metrics that are required for this model to the Configuration Manager 88 along with instructions to activate the monitoring of these metrics for subscriber X. The Configuration Manager 88 is then used to update the low-level metrics to be monitored for subscriber X based on the top N features of the fourth model.
  • Figure 6 illustrates an exemplary closed-loop method 130 implemented by a network node in a communication network (e.g., network 40) according to one embodiment of the present disclosure.
  • method 130 begins with the network node identifying a service assurance scenario for a user equipment (UE) based on feedback received from a network monitoring system (box 132).
  • the network node selects, from a repository, a machine learning (ML) model and a corresponding model explainer based on the identified service assurance scenario (box 134), and determines, based on a feature impact analysis of the ML model selected from the repository, a list of N features defining one or more metrics to be measured and reported by the network monitoring system (box 136). Having determined the list, the network node configures the network monitoring system to measure and report the one or more metrics based on the list of N features (box 138).
  • the ML model comprises information used to predict end-to-end Quality of Experience (QoE) metrics for a given service type from performance measurements associated with the UE, and one or more descriptors describing the network performance measurements.
  • selecting the ML model and the corresponding model explainer based on the identified service assurance scenario comprises determining one or more active services associated with the UE, and obtaining, from the repository, the ML model predicting the QoE metrics for the one or more active services.
  • the model explainer comprises information indicating a respective relative importance of each of the input features of the ML model in predicting the QoE metrics.
  • N is a fixed value.
  • N is a configurable value.
  • configuring the network monitoring system comprises configuring the network monitoring system to measure and report the one or more metrics responsive to one or more predefined events.
  • configuring the network monitoring system comprises providing the network monitoring system with a set of rules for measuring and reporting the one or more metrics.
  • configuring the network monitoring system comprises providing a list of one or more conditions to the network monitoring system defining the conditions under which the network monitoring system will measure and report the one or more metrics. In at least one embodiment, configuring the network monitoring system comprises activating a program agent in each of one or more nodes of the network monitoring system to measure and report the one or more metrics.
  • selecting the ML model and the corresponding model explainer comprises selecting, from the repository, a plurality of ML models and corresponding model explainers.
  • each ML model is associated with a corresponding different service assurance scenario and defines a list of M input features. Further, each input feature defines one or more network performance metrics to be measured and reported by the network monitoring system.
  • each of the plurality of ML models provides information used in predicting the end-to-end QoE metrics for a respective different service type.
  • each model explainer in the plurality of model explainers provides information indicating a respective relative importance of the M input features in its corresponding ML model in predicting the QoE metrics.
  • the one or more metrics defined in the list of N features comprises a union of the one or more network performance metrics defined in one or more of the lists of M input features.
  • configuring the network monitoring system comprises configuring the network monitoring system to measure and report the union of the one or more network performance metrics.
  • an apparatus can perform any of the methods herein described by implementing any functional means, modules, units, or circuitry.
  • the apparatus comprises respective circuits or circuitry configured to perform the steps shown in the method figures.
  • the circuits or circuitry in this regard may comprise circuits dedicated to performing certain functional processing and/or one or more microprocessors in conjunction with memory.
  • the circuitry may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include Digital Signal Processors (DSPs), special-purpose digital logic, and the like.
  • DSPs Digital Signal Processors
  • the processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory, cache memory, flash memory devices, optical storage devices, etc.
  • Program code stored in memory may include program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several embodiments.
  • the memory stores program code that, when executed by the one or more processors, carries out the techniques described herein.
  • FIG. 7 illustrates a network node 140 according to one embodiment that may be configured to function as herein described.
  • the network node 140 comprises processing circuitry 142, a memory 144 configured to store a computer program 146, and communication interface circuitry 148.
  • the processing circuitry 142 controls the overall operation of the network node 140 and processes the signals sent to or received by the network node 140. Such processing can include, but is not limited to, coding and modulation of transmitted data signals, and the demodulation and decoding of received data signals.
  • the processing circuitry 142 may comprise one or more microprocessors, hardware, firmware, or a combination thereof, and as stated above, is configured to execute a control program, such as computer program 146, to perform the previously described functions.
  • Memory 144 comprises both volatile and non-volatile memory for storing computer program code and data needed by the processing circuitry 142 for operation.
  • Memory 144 may comprise any tangible, non-transitory computer-readable storage medium for storing data including electronic, magnetic, optical, electromagnetic, or semiconductor data storage.
  • Memory 144 stores, as stated above, a computer program 146 comprising executable instructions that configure the processing circuitry 142 to implement method 130 according to Figure 6 as described herein.
  • a computer program in this regard may comprise one or more code modules corresponding to the means or units described above.
  • computer program instructions and configuration information are stored in a non-volatile memory, such as a ROM, erasable programmable read only memory (EPROM) or flash memory. Temporary data generated during operation may be stored in a volatile memory, such as a random access memory (RAM).
  • computer program 146 for configuring the processing circuitry 142 as herein described may be stored in a removable memory, such as a portable compact disc, portable digital video disc, or other removable media.
  • the computer program 146 may also be embodied in a carrier such as an electronic signal, optical signal, radio signal, or computer readable storage medium.
  • the communications interface circuitry 148 is configured to communicate data, signals, and information with other devices and/or systems via the network 40.
  • the communications interface circuitry 148 may be coupled to one or more antennas and comprise the radio frequency (RF) circuitry needed for transmitting and receiving signals over a wireless communication channel.
  • the communications interface circuitry 148 is configured to send and receive such information via an ETHERNET-based network.
  • Figure 8 illustrates processing circuitry 142 for a network node 140 configured in accordance with one or more embodiments.
  • the processing circuitry 142 comprises a scenario identification module/unit 150, a model obtaining module/unit 152, a feature determination module/unit 154, and a network monitoring system configuration module/unit 156.
  • the various modules/units 150, 152, 154, and 156 can be implemented by hardware and/or by software code that is executed by a processor or processing circuit.
  • the scenario identification module/unit 150 is configured to identify a service assurance scenario for a user equipment (UE) based on feedback received from a network monitoring system.
  • UE user equipment
  • the model obtaining module/unit 152 is configured to select, from a repository, a machine learning (ML) model and a corresponding model explainer based on the identified service assurance scenario, as previously described.
  • the feature determination module/unit 154 is configured to determine, based on a feature impact analysis of the ML model selected from the repository, a list of N features defining one or more metrics to be measured and reported by the network monitoring system, as previously described.
  • the network monitoring system configuration module/unit 156 is configured to configure the network monitoring system to measure and report the one or more metrics based on the list of N features, as previously described.
  • aspects of the present disclosure may be executed as one or more network functions on the processing circuitry 142 of a single network node 140, or on the processing circuitry 142 of multiple network nodes 140 in the communication network 40. Further, aspects of the present disclosure may be implemented on a virtual network node.
  • a computer program comprises instructions which, when executed on at least one processor of an apparatus, cause the apparatus to carry out any of the respective processing described above.
  • a computer program in this regard may comprise one or more code modules corresponding to the means or units described above.
  • Embodiments further include a carrier containing such a computer program.
  • This carrier may comprise one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
  • embodiments herein also include a computer program product stored on a non-transitory computer readable (storage or recording) medium and comprising instructions that, when executed by a processor of an apparatus, cause the apparatus to perform as described above.
  • Embodiments further include a computer program product comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by a computing device.
  • This computer program product may be stored on a computer readable recording medium.
  • any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses.
  • Each virtual apparatus may comprise a number of these functional units.
  • These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like.
  • the processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory (RAM), cache memory, flash memory devices, optical storage devices, etc.
  • Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein.
  • the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure.
  • the term unit may have conventional meaning in the field of electronics, electrical devices and/or electronic devices and may include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid-state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those described herein.
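The model-selection and configuration flow summarized in the bullets above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: the repository contents, feature names, and helper names (`select_top_features`, `monitoring_config`) are invented for the example, and the model explainer is reduced to a plain map of relative feature importances rather than any specific explainer implementation.

```python
# Hypothetical repository mapping each service assurance scenario to a
# model explainer, i.e. relative importances of the ML model's input
# features in predicting QoE. All names and values are illustrative.
MODEL_REPOSITORY = {
    "video_streaming": {
        "explainer": {
            "dl_throughput": 0.42,
            "rtt": 0.31,
            "packet_loss": 0.15,
            "cqi": 0.08,
            "handover_rate": 0.04,
        }
    },
    "voip": {
        "explainer": {
            "jitter": 0.45,
            "packet_loss": 0.30,
            "rtt": 0.20,
            "dl_throughput": 0.05,
        }
    },
}


def select_top_features(scenario: str, n: int) -> list[str]:
    """Return the N input features with the highest relative importance
    for the ML model associated with the given scenario (box 136)."""
    importances = MODEL_REPOSITORY[scenario]["explainer"]
    ranked = sorted(importances, key=importances.get, reverse=True)
    return ranked[:n]


def monitoring_config(active_services: list[str], n: int) -> set[str]:
    """Union of the top-N metrics across all active services' models,
    as in the multi-model embodiment (one ML model per scenario)."""
    metrics: set[str] = set()
    for scenario in active_services:
        metrics |= set(select_top_features(scenario, n))
    return metrics


# Single-scenario case: top-3 features for a video streaming session.
print(select_top_features("video_streaming", 3))
# Multi-scenario case: union of top-2 features across two active services.
print(sorted(monitoring_config(["video_streaming", "voip"], 2)))
```

The union in `monitoring_config` mirrors the embodiment in which the metrics configured for reporting are the union of the per-model feature lists, so a metric that matters to several active services is only measured once.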

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Physics (AREA)
  • Environmental & Geological Engineering (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A network node (140) leverages machine learning models and their corresponding model explainers (86) to automatically optimize the network monitoring configuration for service assurance. The active services of a user session are used to identify applicable service assurance scenarios. Using a knowledge base representing the corresponding machine learning models and their model explainers, one or more low-level features to be reported are selected. The selected features are those determined to have greater relative importance with respect to service-level performance indicators. The measurements associated with the selected features are then fed into the network configuration management system.
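One way the "model explainer" and feature impact analysis described in the abstract could be realized is permutation importance; the patent does not prescribe any particular explainer technique, and the toy linear model and synthetic data below are purely illustrative assumptions.

```python
import random

random.seed(0)

# Synthetic dataset: a QoE score driven strongly by throughput, weakly
# by rtt, and not at all by handover rate. All names are illustrative.
FEATURES = ["dl_throughput", "rtt", "handover_rate"]
X = [[random.random(), random.random(), random.random()] for _ in range(200)]
y = [3.0 * x[0] - 1.0 * x[1] for x in X]


def model(x):
    """Stand-in for a trained QoE prediction model (here, the known
    linear relationship, ignoring handover_rate entirely)."""
    return 3.0 * x[0] - 1.0 * x[1]


def mse(pred, true):
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)


def permutation_importance(X, y):
    """Importance of each feature = increase in prediction error when
    that feature's column is randomly shuffled."""
    base = mse([model(x) for x in X], y)
    scores = {}
    for j, name in enumerate(FEATURES):
        col = [x[j] for x in X]
        random.shuffle(col)
        Xp = [x[:j] + [c] + x[j + 1:] for x, c in zip(X, col)]
        scores[name] = mse([model(x) for x in Xp], y) - base
    return scores


scores = permutation_importance(X, y)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # dl_throughput should rank first; handover_rate scores ~0
```

A node applying this analysis would then configure the monitoring system to report only the highest-ranked features, which is the filtering effect the abstract describes.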
EP20772111.9A 2019-09-09 2020-09-08 System and method for scenario-driven smart filtering for network monitoring Pending EP4029196A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962897695P 2019-09-09 2019-09-09
PCT/IB2020/058346 WO2021048742A1 (fr) System and method for scenario-driven smart filtering for network monitoring

Publications (1)

Publication Number Publication Date
EP4029196A1 true EP4029196A1 (fr) 2022-07-20

Family

ID=72517279

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20772111.9A Pending EP4029196A1 (fr) 2019-09-09 2020-09-08 Système et procédé de filtrage intelligent piloté par scénario pour surveillance de réseau

Country Status (2)

Country Link
EP (1) EP4029196A1 (fr)
WO (1) WO2021048742A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022235525A1 (fr) * 2021-05-02 2022-11-10 Intel Corporation Collaboration améliorée entre un équipement utilisateur et un réseau pour faciliter un apprentissage machine
US11665261B1 (en) 2022-03-17 2023-05-30 Cisco Technology, Inc. Reporting path measurements for application quality of experience prediction using an interest metric
WO2023199098A1 (fr) * 2022-04-14 2023-10-19 Telefonaktiebolaget Lm Ericsson (Publ) Exploration sûre pour optimisation de réseau automatisée à l'aide d'explicateurs ml

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8787901B2 (en) 2010-10-06 2014-07-22 Telefonaktiebolaget Lm Ericsson (Publ) Method, apparatus and system for flexible user tracing in mobile networks
EP2783530A1 (fr) 2011-11-04 2014-10-01 Telefonaktiebolaget LM Ericsson (PUBL) Réduction de la quantité de reporting fait à un noeud de gestion
US9026851B2 (en) 2012-09-05 2015-05-05 Wipro Limited System and method for intelligent troubleshooting of in-service customer experience issues in communication networks
US9955373B2 (en) 2012-11-05 2018-04-24 Telefonaktiebolaget L M Ericsson (Publ) Systems and methods for controlling logging and reporting under constraints
US11018958B2 (en) * 2017-03-14 2021-05-25 Tupl Inc Communication network quality of experience extrapolation and diagnosis
US11057284B2 (en) * 2017-06-06 2021-07-06 International Business Machines Corporation Cognitive quality of service monitoring
US20180365581A1 (en) * 2017-06-20 2018-12-20 Cisco Technology, Inc. Resource-aware call quality evaluation and prediction
US10735274B2 (en) * 2018-01-26 2020-08-04 Cisco Technology, Inc. Predicting and forecasting roaming issues in a wireless network
US10673728B2 (en) * 2018-01-26 2020-06-02 Cisco Technology, Inc. Dynamic selection of models for hybrid network assurance architectures

Also Published As

Publication number Publication date
WO2021048742A1 (fr) 2021-03-18

Similar Documents

Publication Publication Date Title
US11451452B2 (en) Model update method and apparatus, and system
US11811588B2 (en) Configuration management and analytics in cellular networks
EP4029196A1 (fr) System and method for scenario-driven smart filtering for network monitoring
Wu et al. CellPAD: Detecting performance anomalies in cellular networks via regression analysis
US11228503B2 (en) Methods and systems for generation and adaptation of network baselines
Pierucci et al. A neural network for quality of experience estimation in mobile communications
WO2021025603A1 (fr) Data collection at defined network operating conditions in radio access networks
US20230198640A1 (en) Channel state information values-based estimation of reference signal received power values for wireless networks
US20230090169A1 (en) Monitoring a Communication Network
Yu et al. Self‐Organized Cell Outage Detection Architecture and Approach for 5G H‐CRAN
US20220210682A1 (en) SYSTEM AND METHOD FOR ARTIFICIAL INTELLIGENCE (AI) DRIVEN VOICE OVER LONG-TERM EVOLUTION (VoLTE) ANALYTICS
EP4275291A1 (fr) Machine learning-based channel state information estimation and machine learning-based feedback configuration
CN114124709B (zh) Optimization method and apparatus for network slice configuration, and readable storage medium
US11805043B2 (en) Method and system for real-time encrypted video quality analysis
CN104412644B (zh) Measurement method, base station, and user equipment
Parracho et al. An improved capacity model based on radio measurements for a 4G and beyond wireless network
CN115866634A (zh) Network performance anomaly analysis method and apparatus, and readable storage medium
Fré et al. Data Shower in Electronics Manufacturing: Measuring Wi-Fi 4, Wi-Fi 6, and 5G SA behavior in production assembly lines
EP4150861B1 (fr) Determining cell upgrade
US20240049032A1 (en) Analytics performance management
WO2024065752A1 (fr) CSI information reporting techniques
US20240235643A1 (en) Channel state information reporting techniques
US20230362678A1 (en) Method for evaluating action impact over mobile network performance
CN116643954A (zh) Model monitoring method, monitoring terminal, apparatus, and storage medium
WO2023079567A1 (fr) First node, second node, communication system and methods performed thereby for handling an abnormal event

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220221

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)