EP4070197A1 - Device for monitoring a computer network system - Google Patents
Device for monitoring a computer network system
- Publication number
- EP4070197A1 (application EP20703010.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- indicators
- indicator
- computer network
- network system
- anomaly
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3409—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3452—Performance evaluation by statistical analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3466—Performance evaluation by tracing or monitoring
Definitions
- the present disclosure relates to anomaly detection in a computer network system.
- embodiments of the invention provide a device for monitoring a computer network system, such as to detect anomalies in the computer network system.
- Embodiments of the invention also relate to a method for monitoring a computer network system.
- Because networking systems are rapidly expanding, the complexity of networked applications and information services grows exponentially.
- These large-scale, distributed, networking systems usually comprise a huge variety of components, which work together in a complex and coordinated manner.
- a central task in running these large-scale distributed systems is to automatically monitor the system status, detect anomalies, and diagnose system faults, so as to guarantee stable and high-quality services or outputs.
- Anomaly detection relates to the problem of identifying anomalies in a data set, where anomalies correspond to points generated by a different process than the one that normal samples are assumed to be generated from. In most applications, however, statistical anomalies do not always correspond to semantically-meaningful anomalies. For example, in a computer security application, a user may be considered statistically anomalous due to an unusually high amount of copying and printing activity, which in reality has a benign explanation, and hence is not a true anomaly.
- embodiments of the present invention aim to improve the conventional devices for monitoring computer network systems.
- An object is thereby to provide a device and method for monitoring a computer network system, which allows a more efficient expert analysis of detected anomalies.
- the device should in this way allow detecting anomalies in the computer network system faster and more reliably.
- the disclosure relates to a device for monitoring a computer network system, the device being configured to: receive a dataset comprising a set of indicators, wherein each indicator of the set of indicators is indicative of a performance of the computer network system, detect an anomaly in the performance of the computer network system based on the received set of indicators, determine a score for each indicator in the set of indicators, based on the received set of indicators and the detected anomaly, wherein the determined score is indicative of a relationship of the respective indicator with the detected anomaly, obtain an expert factor for each indicator in a subset of the set of indicators, wherein each expert factor is indicative of a level of relevance of the respective indicator for at least one previous anomaly in the performance of the computer network system, and modify the determined score of each indicator in the subset of the set of indicators based on the expert factor.
- a computer network system may comprise a plurality of computer network system entities, for instance, computers or routers.
- An anomaly may be an (unexpected) change or drop in the performance of the computer network system, which may be reflected in the set of indicators.
- an anomaly may correspond to a value of an indicator that differs from a “normal” value of the indicator.
- An expert factor may be a predetermined factor that is associated to a respective indicator.
- a human expert may have previously determined a level of relevance for one or more indicators with respect to one or more anomalies, which is reflected by the expert factors.
- Expert factors may be stored in a database and may be obtained by the device from that database.
- the set of indicators may comprise one or more indicators.
- the subset of indicators may comprise one or more indicators. Thereby, the subset may comprise all indicators comprised in the set of indicators, or may comprise fewer indicators than comprised in the set of indicators.
- the device is further configured to sort the indicators in the subset of the set of indicators based on a respective modified score and sort the indicators not included in the subset of the set of indicators based on a respective score.
- the device is further configured to modify the determined score of each indicator in the subset of the set of indicators based on a weighting factor, wherein the weighting factor is indicative of an adjustable numeric value to be applied to a value of the expert factor of the respective indicator.
- the device can flexibly accommodate current and past knowledge of the same indicator from different troubleshooting cases, by giving either more importance to the current case (important for new problems) or by trusting more the expert knowledge (very useful for recurring problems), through the tuning of the weighting factor.
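The score modification described above can be sketched as follows; this is a minimal illustrative reading of the disclosure's biasing formula S' = S(1 + γK), and the function name and defaults are assumptions, not part of the disclosure:

```python
def modify_score(score, expert_factor, gamma=1.0):
    """Bias an indicator's anomaly score with its expert factor.

    Reconstructed reading of the formula S' = S * (1 + gamma * K):
    gamma >> 1 gives more weight to past expert decisions,
    gamma << 1 favours the current case. Names are illustrative.
    """
    return score * (1.0 + gamma * expert_factor)
```

For example, a score of 2.0 with an expert factor of 0.5 and gamma = 1.0 becomes 3.0, while gamma = 0.0 leaves the score unchanged.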
- the level of relevance of the respective indicator for the at least one previous anomaly comprises the respective indicator being related to the at least one previous anomaly in an expert diagnosis of the computer network system.
- the device is further configured to obtain the expert factor for each indicator in the subset of the set of indicators by querying a database storing an expert diagnosis of the computer network system, and generate a second database based on the expert factor obtained for each indicator in the subset of the set of indicators.
- the device is further configured to update the second database in response to a modification of the expert diagnosis of the computer network system stored in the database.
- each indicator of the set of indicators is indicative of a development of the performance of the computer network system over time.
- the device is configured to sample each indicator of the set of indicators with a same frequency, wherein the indicators of the set of indicators are aligned in time.
- each indicator of the set of indicators comprises an indicator value for each of multiple time slots covered by the dataset.
- the device is configured to determine a duration of the detected anomaly.
- the device is further configured to determine the relationship of an indicator of the set of indicators with the detected anomaly, based on a difference between an average value of the indicator over a duration of the detected anomaly and an average value of the indicator before and/or after the duration of the detected anomaly.
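A minimal Python sketch of this relationship computation, assuming boolean per-timeslot anomaly flags; function and variable names are illustrative, not from the disclosure:

```python
from statistics import mean

def kpi_score(values, anomaly_flags):
    """Score one indicator as the absolute difference between its
    average value during the detected anomaly and its average value
    outside of it (before/after the anomaly duration)."""
    inside = [v for v, a in zip(values, anomaly_flags) if a]
    outside = [v for v, a in zip(values, anomaly_flags) if not a]
    if not inside or not outside:
        return 0.0  # no overlap with the anomaly: no measurable relationship
    return abs(mean(inside) - mean(outside))
```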
- the device is configured to detect the anomaly by using a machine learning method, in particular an unsupervised machine learning method.
- At least one indicator of the set of indicators is indicative of one or more of: a processing power consumption of the computer network system, a memory consumption in the computer network system, and an amount of traffic routed through the computer network system.
- the disclosure relates to a method for monitoring a computer network system, the method comprising: receiving a dataset comprising a set of indicators, wherein each indicator of the set of indicators is indicative of a performance of the computer network system, detecting an anomaly in the performance of the computer network system based on the received set of indicators, determining a score for each indicator in the set of indicators, based on the received set of indicators and the detected anomaly, wherein the determined score is indicative of a relationship of the respective indicator with the detected anomaly, obtaining an expert factor for each indicator in a subset of the set of indicators, wherein each expert factor is indicative of a level of relevance of the respective indicator for at least one previous anomaly in the performance of the computer network system, and modifying the determined score of each indicator in the subset of the set of indicators based on the expert factor.
- the method further comprises sorting the indicators in the subset of the set of indicators based on a respective modified score and sorting the indicators not included in the subset of the set of indicators based on a respective score.
- the method further comprises modifying the determined score of each indicator in the subset of the set of indicators based on a weighting factor, wherein the weighting factor is indicative of an adjustable numeric value to be applied to a value of the expert factor of the respective indicator.
- the level of relevance of the respective indicator for the at least one previous anomaly comprises the respective indicator being related to the at least one previous anomaly in an expert diagnosis of the computer network system.
- the method further comprises: obtaining the expert factor for each indicator in the subset of the set of indicators by querying a database storing an expert diagnosis of the computer network system, and generating a second database based on the expert factor obtained for each indicator in the subset of the set of indicators.
- the disclosure relates to a computer program comprising a program code for performing the method according to the second aspect or any one of the implementation forms thereof.
- the disclosure relates to a non-transitory storage medium storing executable program code which, when executed by a processor, causes the method according to the second aspect or any of its implementation forms to be performed.
- FIG. 1 shows a schematic representation of a device for monitoring a computer network system according to an embodiment
- FIG. 2 shows a schematic representation of a working scheme of a device for monitoring a computer network system according to an embodiment
- FIG. 3 shows a schematic representation of a working scheme of a device for monitoring a computer network system according to an embodiment
- FIG. 4 shows a schematic representation of a method for monitoring a computer network system according to an embodiment.
- FIG. 1 shows a schematic representation of a device 101 for monitoring a computer network system 100 according to an embodiment.
- the device 101 is configured to detect anomalies that occur in the computer network system.
- the computer network system 100 may comprise, as exemplarily shown, computer network system entities 102, 103, 104 and 105.
- the device 101 is configured to receive a dataset comprising a set of indicators, wherein each indicator of the set of indicators is indicative of a performance of the computer network system 100.
- the set of indicators may be indicative of a performance of the computer network system entities.
- the set of indicators may be obtained by the device 101 from the computer network system entities, or from a device collecting data from the computer network system entities.
- the device 101 is configured to detect an anomaly in the performance of the computer network system 100 based on the received set of indicators.
- the anomaly may, for instance, relate to one or more of the computer network system entities.
- the device 101 is configured to determine a score for each indicator in the set of indicators, based on the received set of indicators and the detected anomaly, wherein the determined score is indicative of a relationship of the respective indicator with the detected anomaly.
- the device 101 is configured to obtain an expert factor for each indicator in a subset of the set of indicators, wherein each expert factor is indicative of a level of relevance of the respective indicator for at least one previous anomaly in the performance of the computer network system 100, and modify the determined score of each indicator in the subset of the set of indicators based on the expert factor.
- the device 101 may comprise processing circuitry (not shown) configured to perform, conduct or initiate the various operations of the device 101 described herein.
- the processing circuitry may comprise hardware and software.
- the hardware may comprise analog circuitry or digital circuitry, or both analog and digital circuitry.
- the digital circuitry may comprise components such as application-specific integrated circuits (ASICs), field-programmable arrays (FPGAs), digital signal processors (DSPs), or multi-purpose processors.
- the processing circuitry comprises one or more processors and a non-transitory memory connected to the one or more processors.
- the non-transitory memory may carry executable program code which, when executed by the one or more processors, causes the device 101 to perform, conduct or initiate the operations or methods described herein.
- FIG. 2 shows a schematic representation of a working scheme of the device 101 for monitoring a computer network system 100 according to an embodiment, in particular the device 101 shown in FIG. 1.
- In a step 0, the device 101 can be configured to accept as an input the dataset, which may comprise, as the set of indicators, a set of multivariate numeric time series representing the state along time of the managed computer network system 100 (e.g. for a router, its CPU consumption, the traffic it is routing, its memory consumption, etc.).
- Each of these time series is an indicator of the set of indicators, and is specifically referred to in the following as a Key Performance Indicator (KPI). That is, an indicator may be a KPI, and the set of indicators may be a set of KPIs.
- KPIs may be indicative of a development of the performance of the computer network system 100 over time.
- the KPIs can be sampled with equal frequency, and may be aligned in time.
- each KPI may comprise a value for each of multiple time slots covered by the dataset.
- the dataset has a total of n points or timeslots, and a total of v KPIs or features.
- any algorithm that is able to perform the transformation described above would be suitable for this step.
- machine learning methods specifically from the family of unsupervised learning (methods that do not need data that has been explicitly labeled by human experts), may be used.
- isolation forests, robust random cut forests or local outlier factor can be suitable methods for this step.
- a specific policy to transform the output into a binary value would be needed.
- the usage of unsupervised learning methods has the advantage that the need for any human interaction in this step is eliminated.
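As a dependency-free stand-in for the unsupervised detectors named above (isolation forests, robust random cut forests, local outlier factor), a robust z-score detector already yields the required binary per-timeslot output. This is an illustrative sketch under that assumption, not the disclosure's method:

```python
from statistics import median

def detect_anomalies(X, threshold=3.0):
    """Flag anomalous timeslots with a robust (median/MAD) z-score per KPI.

    X: list of n timeslots, each a list of v KPI values (n x v matrix).
    Returns a list of n booleans, True where the timeslot is anomalous.
    Placeholder for the unsupervised methods named in the text.
    """
    n, v = len(X), len(X[0])
    flags = [False] * n
    for j in range(v):
        col = [row[j] for row in X]
        med = median(col)
        mad = median(abs(x - med) for x in col) or 1e-9  # avoid division by zero
        for t, x in enumerate(col):
            if abs(x - med) / mad > threshold:
                flags[t] = True  # a single deviating KPI marks the timeslot
    return flags
```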
- In a step 2, the device 101 may be configured to determine a score for each KPI.
- the device 101 is configured to take X and A as input and to output a vector of tuples:
- S_i corresponds to a numeric score associated to KPI i and is calculated based on the anomalous timeslots detected in step 1 and the dataset used as input.
- the device 101 may be configured to sort the KPIs, in order to prioritize showing those KPIs that are more correlated with the detected anomaly.
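The sorting step can be sketched as follows (names are illustrative):

```python
def sort_kpis(scores):
    """Return KPI names ordered most-anomalous first, so that the KPIs
    most correlated with the detected anomaly are shown to the expert
    before the others. scores maps KPI name -> (possibly biased) score."""
    return sorted(scores, key=scores.get, reverse=True)
```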
- In a step 3, the device 101 may be configured to add expert knowledge. While the two previous steps combined already produce a usable solution, they suffer from a specific pitfall that can reduce the effectiveness of the system: when examining KPIs in order to understand what has happened in their managed environment, the experts look for those that point at the cause of an anomaly, not at the effects (e.g., a large number of launched processes, a cause, can empty the available memory in a router, an effect). Following the anomaly detection and the feature scoring scheme, both causes and effects will be assigned a high score by these processes, which will increase the number of non-useful time series that are shown to the expert before the fault can be diagnosed.
- the device 101 may be configured to exploit information which should already be available in any organization in which human experts diagnose network faults: previous diagnoses.
- the device 101 may be configured to create an expert knowledge base (EK), a collection of the decisions of previous experts for the KPIs in the system. Assuming that the computer network system 100 has V possible KPIs, the EK can be expressed as a collection of V tuples of the form:
- K_i corresponds to the expert factor that can then be applied to S_i, biasing the scores to tune the KPI sorting and make it more similar to one produced by the experts, producing:
- S_i' = S_i(1 + γK_i), γ being a weighting factor, a real, positive number that allows for the balancing of the scores, giving more importance to past decisions (γ ≫ 1) or to the current dataset (γ ≪ 1).
- the definition of K_i is variable and can be altered or adapted to different situations. Some possibilities for it would be the proportion of times a KPI has appeared in a diagnosed case and has been anomalous, or more complex calculations involving conditional Bayesian probabilities depending on the presence or absence of other KPIs, or other approaches.
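The first possibility above (proportion of diagnosed cases in which the KPI was anomalous) can be sketched as follows; the counter semantics, modeled on the A(X)/O(X) counters mentioned in the text, are an assumption:

```python
def expert_factor(flagged, analyzed_not_flagged):
    """One possible K_i: the proportion of past diagnosed cases in which
    the KPI was analyzed and flagged as anomalous by the expert.
    flagged ~ A(X), analyzed_not_flagged ~ O(X); semantics assumed."""
    total = flagged + analyzed_not_flagged
    return flagged / total if total else 0.0  # empty EK: no bias
```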
- embodiments of the present invention provide the advantage that the device 101 works in a completely unsupervised way so as to significantly reduce the time spent by network experts diagnosing fault cases. Given the cost of these experts’ time, this translates into more reliable systems, as the time to diagnose and correct them is reduced and more cases can be tackled in the same amount of time.
- FIG. 2 shows an expert diagnosis step
- this is a task that may already be done in any organization that uses human experts for network management, so that no explicit interaction with the system may be required.
- the device 101 may be configured to alter the order in which KPIs are shown. Thus, no added workload or learning is needed for the experts to include it in a managed network.
- the device 101 may flexibly accommodate current and past knowledge of the same KPI from different troubleshooting cases, by giving either more importance to the current case (important for new problems) or by trusting more the expert knowledge (very useful for recurring problems), through the tuning of γ. Moreover, the device 101 can start from an empty EK and is capable of learning over time. Furthermore, the device 101 can directly be plugged into a new network (e.g., when the EK is transferred) and start working as intended, as long as there are shared KPIs between the old and new system.
- the device 101 leverages already existing output from the natural interaction of human expert with the current system and requires no human interaction at all to properly function. Moreover, the device 101 naturally and seamlessly interacts with the user interface by proposing minimally intrusive changes (i.e., altering the ordering in which the KPI are presented to the user) that maximize the gain (i.e., reducing the time it takes to solve the ticket by reducing the number of KPIs to inspect).
- The disclosure thus provides a concept and process of blending the results of anomaly detection with expert knowledge in order to decide the order in which KPIs, in the form of time series, are shown to network experts to diagnose a fault in a managed system, employing only resources already available in such systems and without the need for any human interaction to function.
- This provides the advantage of reducing the number of KPIs that need to be analyzed by experts, which translates into an increased reliability of the managed system, reduced costs and less reliance on human availability and expertise.
- FIG. 3 shows a schematic representation of different modules of the device 101 for monitoring the computer network system 100 according to an embodiment.
- the different modules may be used for implementing the working scheme shown in FIG. 2.
- the device 101 can in particular comprise modules 300, 301, and 302.
- the role of Anomaly Detection (AD(.)) module 300 is to implement step 1 shown in FIG. 2. In particular, it is configured to find anomalous timeslots, i.e. timeslots in which an anomaly is detected.
- the device 101 need not make any assumption about the module AD(.), nor constrain the use of a specific AD(.) function.
- unsupervised techniques may be used by the module AD(.) or 300, since they do not require previous training and are general and portable across incidents. However, in other embodiments, the use of supervised techniques (e.g. Long Short-Term Memory (LSTM)) is also possible to detect anomalies.
- the module AD(.) can be configured to leverage further data sources including, but not limited to, topology information, configuration parameters, alarm time series, etc.
- the role of the Feature Scoring (FS(.)) module 301 is to implement step 2 shown in FIG. 2, and thus to reduce human time involved to solve the ticket by prioritizing human attention to the most anomalous values.
- the scoring and sorting of module 301 can be implemented with parametric and non-parametric functions (see examples in table below).
- Non-parametric functions work well experimentally and may generalize across troubleshooting cases.
- the absolute difference of the mean feature score, abs(E[FS(X_normal)] − E[FS(X_anomaly)]), may be implemented in the FS(.) module 301, wherein FS is the feature score and E denotes the expected value.
- the role of the Expert Knowledge (EK(.)) module 302 is to support step 3 shown in FIG. 2, for instance, to transparently reuse knowledge, if available, gathered by the computer network system 100 during past cases.
- the EK(.) module 302 may be configured to give a simple statistical representation of expert knowledge.
- the EK(.) module 302 may be activated/deactivated at no additional computational cost.
- the EK(.) module 302 can be configured to learn over past solutions by experts.
- the EK(.) module 302 can be configured to work transparently with sorting, by biasing scores computed by the FS(.) module 301 to also take into account past knowledge.
- the KPI order may be given from most to least anomalous, based on the KPI values of this case determined by the FS(.) module 301 and on EK statistics from past cases (hence the most likely causes, according to the case under investigation and past cases solved by experts).
- the HCI interface can be enriched with the ability to prioritize the current case (γ ≪ 1) or past cases (γ ≫ 1). In all the above cases, the time spent by the human in the troubleshooting can be reduced, since the device 101 already “searches” the indicators for the most relevant ones for the detected anomaly.
- the device 101 does not add any burdens to the human experts: when a ticket is closed, the device 101 may be configured to export the expert knowledge by updating the EK(.) module 302 for the features involved in the case, improving its knowledge over time.
- A(X) is increased for features flagged by the expert in the report.
- O(X) is increased only for features that the expert has analyzed but not flagged in the report.
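The export on ticket closure can be sketched as follows, assuming the EK stores an (A, O) counter pair per feature; the dictionary layout and names are assumptions for illustration:

```python
def update_ek(ek, flagged, analyzed):
    """Update expert-knowledge counters when a ticket is closed:
    A is incremented for features flagged in the expert's report,
    O for features analyzed but not flagged."""
    for kpi in flagged:
        a, o = ek.get(kpi, (0, 0))
        ek[kpi] = (a + 1, o)
    for kpi in analyzed:
        if kpi not in flagged:  # analyzed but not flagged
            a, o = ek.get(kpi, (0, 0))
            ek[kpi] = (a, o + 1)
    return ek
```

Starting from an empty EK, closing a ticket in which "cpu" was flagged and "mem" merely analyzed yields counters (1, 0) for "cpu" and (0, 1) for "mem".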
- human intervention can be seamless.
- the FS module 301 can be configured to reduce human intervention time even without making use of the EK module 302.
- the device 101 can be configured to automatically update the EK function at any new ticket.
- the system learns over time and human intervention is further reduced.
- Expert knowledge can be a transferable asset; it can initially be absent, or can be seen as “added value”; and it can easily be combined (e.g. by weighted average).
- FIG. 4 shows a schematic representation of a method 400 for monitoring a computer network system 100 according to an embodiment.
- the method 400 may be performed by the device 101.
- the method 400 comprises the following steps.
- In a step 401, the method 400 receives a dataset comprising a set of indicators, wherein each indicator of the set of indicators is indicative of a performance of the computer network system 100.
- the method 400 detects an anomaly in the performance of the computer network system 100 based on the received set of indicators.
- the method 400 determines a score for each indicator in the set of indicators, based on the received set of indicators and the detected anomaly. The determined score is indicative of a relationship of the respective indicator with the detected anomaly.
- the method 400 obtains an expert factor for each indicator in a subset of the set of indicators.
- Each expert factor is indicative of a level of relevance of the respective indicator for at least one previous anomaly in the performance of the computer network system 100.
- the method 400 modifies the determined score of each indicator in the subset of the set of indicators based on the expert factor.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- General Physics & Mathematics (AREA)
- Quality & Reliability (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Probability & Statistics with Applications (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Debugging And Monitoring (AREA)
Abstract
The present disclosure relates to monitoring and anomaly detection in a computer network system. A device for monitoring the computer network system is configured to receive a dataset comprising a set of indicators representative of a performance of the system, detect an anomaly in the performance based on the indicators, and determine a score for each indicator based on the detected anomaly, the score indicating a relationship of the respective indicator with the anomaly. Furthermore, the device is configured to obtain an expert factor for each indicator in a subset of the indicators, each expert factor being representative of a level of relevance of the respective indicator for a previous anomaly, and to modify the determined score of each indicator in the subset based on the expert factor.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2020/052332 WO2021151494A1 (fr) | 2020-01-30 | 2020-01-30 | Dispositif de surveillance d'un système de réseau informatique |
Publications (1)
Publication Number | Publication Date |
---|---|
EP4070197A1 true EP4070197A1 (fr) | 2022-10-12 |
Family
ID=69411444
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP20703010.7A Pending EP4070197A1 (fr) | 2020-01-30 | 2020-01-30 | Dispositif de surveillance d'un système de réseau informatique |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP4070197A1 (fr) |
WO (1) | WO2021151494A1 (fr) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230186170A1 (en) * | 2021-12-14 | 2023-06-15 | International Business Machines Corporation | Contention detection and cause determination |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10031829B2 (en) * | 2009-09-30 | 2018-07-24 | International Business Machines Corporation | Method and system for it resources performance analysis |
US9003076B2 (en) * | 2013-05-29 | 2015-04-07 | International Business Machines Corporation | Identifying anomalies in original metrics of a system |
US9632858B2 (en) * | 2013-07-28 | 2017-04-25 | OpsClarity Inc. | Organizing network performance metrics into historical anomaly dependency data |
US10397810B2 (en) * | 2016-01-08 | 2019-08-27 | Futurewei Technologies, Inc. | Fingerprinting root cause analysis in cellular systems |
US10432661B2 (en) * | 2016-03-24 | 2019-10-01 | Cisco Technology, Inc. | Score boosting strategies for capturing domain-specific biases in anomaly detection systems |
-
2020
- 2020-01-30 EP EP20703010.7A patent/EP4070197A1/fr active Pending
- 2020-01-30 WO PCT/EP2020/052332 patent/WO2021151494A1/fr unknown
Also Published As
Publication number | Publication date |
---|---|
WO2021151494A1 (fr) | 2021-08-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11150974B2 (en) | Anomaly detection using circumstance-specific detectors | |
CN111475804B (zh) | 一种告警预测方法及系统 | |
CN103513983B (zh) | 用于预测性警报阈值确定工具的方法和系统 | |
US7647524B2 (en) | Anomaly detection | |
US10373065B2 (en) | Generating database cluster health alerts using machine learning | |
US20180039914A1 (en) | Machine learning techniques for providing enriched root causes based on machine-generated data | |
US8181069B2 (en) | Method and system for problem determination using probe collections and problem classification for the technical support services | |
AU2017274576B2 (en) | Classification of log data | |
Klinkenberg et al. | Data mining-based analysis of HPC center operations | |
US20060188011A1 (en) | Automated diagnosis and forecasting of service level objective states | |
US11886276B2 (en) | Automatically correlating phenomena detected in machine generated data to a tracked information technology change | |
Dou et al. | Pc 2 a: predicting collective contextual anomalies via lstm with deep generative model | |
US20230275915A1 (en) | Machine learning for anomaly detection based on logon events | |
Zhang et al. | Efficient and robust syslog parsing for network devices in datacenter networks | |
Yassin et al. | Signature-Based Anomaly intrusion detection using Integrated data mining classifiers | |
CN113515434B (zh) | 异常分类方法、装置、异常分类设备及存储介质 | |
US8909768B1 (en) | Monitoring of metrics to identify abnormalities in a large scale distributed computing environment | |
Chen et al. | Graph-based incident aggregation for large-scale online service systems | |
Muller | Event correlation engine | |
Pal et al. | DLME: distributed log mining using ensemble learning for fault prediction | |
Hariprasad et al. | Detection of DDoS Attack in IoT Networks Using Sample Selected RNN-ELM. | |
WO2021151494A1 (fr) | Dispositif de surveillance d'un système de réseau informatique | |
Zheng et al. | Anomaly localization in large-scale clusters | |
Gu et al. | Performance Issue Identification in Cloud Systems with Relational-Temporal Anomaly Detection | |
Kakadia et al. | Machine learning approaches for network resiliency optimization for service provider networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
17P | Request for examination filed | Effective date: 20220704 |
AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
DAV | Request for validation of the european patent (deleted) | |
DAX | Request for extension of the european patent (deleted) | |