WO2021045719A1 - System for online unsupervised event pattern extraction - Google Patents

System for online unsupervised event pattern extraction

Info

Publication number
WO2021045719A1
WO2021045719A1 (PCT/US2017/030469)
Authority
WO
WIPO (PCT)
Prior art keywords
data
event
anomaly
pattern
module operative
Prior art date
Application number
PCT/US2017/030469
Other languages
French (fr)
Inventor
Xiaohui Gu
Original Assignee
Xiaohui Gu
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaohui Gu filed Critical Xiaohui Gu
Priority to PCT/US2017/030469 priority Critical patent/WO2021045719A1/en
Publication of WO2021045719A1 publication Critical patent/WO2021045719A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/079Root cause analysis, i.e. error or fault diagnosis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0706Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
    • G06F11/0709Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems

Definitions

  • FIGURE 1 is a view of the system architecture to perform pattern extraction and relationship extraction consistent with certain embodiments of the present invention.
  • FIGURE 2 is a view of metric event pattern extraction consistent with certain embodiments of the present invention.
  • FIGURE 3 is a view of log event pattern extraction consistent with certain embodiments of the present invention.
  • FIGURE 4 is a view of system call trace anomaly detection and pattern matching consistent with certain embodiments of the present invention.
  • FIGURE 5 is a view of component causal relationship discovery consistent with certain embodiments of the present invention.
  • FIGURE 6 is a view of component correlation relationship extraction consistent with certain embodiments of the present invention.
  • the terms “a” or “an”, as used herein, are defined as one or more than one.
  • the term “plurality”, as used herein, is defined as two or more than two.
  • the term “another”, as used herein, is defined as at least a second or more.
  • the terms “including” and/or “having”, as used herein, are defined as comprising (i.e., open language).
  • the term “coupled”, as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically.
  • component refers to computers, servers, communication devices, displays, diagnostic devices, software modules, software utilities, application programming interfaces (APIs), and all other devices and modules having a network communication connection permitting each component to be connected to one or more networked systems.
  • the present invention relates to unsupervised online event pattern extraction and holistic root cause analysis in distributed systems.
  • the invention is implemented in public and private cloud environments.
  • the innovation may first perform metric event pattern extraction.
  • the innovation first provides automatic unsupervised multi-variant statistical classification methods to extract principal event patterns from large amounts of raw metric data streams for a system under analysis. Each event pattern captures a unique system state.
  • a modern server system typically operates under different states over time because of environment changes such as workload fluctuations, resource allocation changes, software updates, or other actions required to meet processing load, system updates, or other maintenance and operational needs.
  • the system may capture all unique system states of a live production server using unsupervised online learning algorithms.
  • the innovation may further identify key features of each event pattern to automatically create a label for each event pattern.
  • if the innovation identifies that the key metrics that make an event pattern unique are the result of gradually increasing memory consumption and near constant CPU usage, this event is identified as a “memory leak” and the event is labeled and stored under this identification.
  • a user may override or edit the label using domain knowledge that is specific to the domain in which the system under analysis is operational.
  • the innovation may also automatically identify recurrent event patterns by comparing newly extracted patterns with previously captured and stored patterns.
  • the innovation may also provide data compression benefit to the user by only storing unique patterns. This operation avoids duplication and promotes more efficient storage of patterns and optimizes search time when recalling event patterns for analysis or comparison.
  • the innovation may further aggregate continuous events of the same pattern into one consolidated event.
  • the innovation may next perform operations to permit the extraction from log files of event patterns.
  • log data provides useful information especially for anomaly diagnosis.
  • Existing log analysis tools focus on providing search and query support with little support for automatic pattern extraction.
  • log data are semi-structured or unstructured.
  • the innovation may first extract quantitative features from the raw log data.
  • the innovation implements two schemes to address the issue of extracting quantitative features from raw log data. The first approach is to extract popular keywords from all the log events and use the frequency of keywords to construct the feature vector for each log event.
  • the innovation may also provide word filtering functions for the user to filter out uninteresting words such as articles, verbs, and adjectives.
  • the innovation may further extract popular phrases, where the popularity of a phrase is based upon the number of times the phrase appears in the incoming log data, using a frequent episode mining algorithm.
  • the innovation may then construct frequency feature vectors in a similar manner as constructing word frequency vectors.
  • the innovation may also provide a user interface for the user to conveniently choose the interesting keywords and phrases he or she prefers to use in the analysis algorithm.
  • the innovation may apply unsupervised machine learning algorithms over extracted feature vectors to group log data with similar feature patterns together.
  • This log pattern extraction service can help users browse through a large number of log events and extract useful information in a more efficient way.
  • this classification can be useful for incident triage by helping a user to identify previously diagnosed events.
  • the innovation may also achieve log data compression by identifying common parts among similar log data and replacing common parts with a pattern identifier to eliminate duplicate log data and optimize the storage of data within the log files.
  • the innovation may also perform rare event identification by identifying those log data patterns that rarely appear in the analyzed log data.
  • a histogram may be created to present to a user those event patterns that appear more or less frequently, or appear in an unusual way.
  • the innovation is operative to perform system call trace pattern extraction for the system under analysis.
  • the system call trace pattern extraction receives analysis data from system call traces and function call traces to create a set of system call sequence patterns for application functions called. This data may be used to extract patterns for system call traces that have been affected by some anomaly.
  • the system call sequence pattern extraction may be used to develop a list of affected functions that may be reported to a user.
  • the innovation is operative to create an adaptive pattern learning framework.
  • Modern computer systems are highly complex and dynamic, especially for emerging container-based architectures where application components can be dynamically created and deleted with high frequency.
  • the innovation may provide an adaptive pattern learning framework that can accommodate both environment changes (in the form of workload changes, resource availability variations, and other environment changes) and different kinds of applications such as long-running batch jobs in comparison with short running tasks.
  • the innovation provides for associative event analysis to further develop automatic learning algorithms to extract the association patterns from individual component events.
  • the event association algorithms extract possible correlation and causality relationships among different system components based on the start time of different events.
  • a cascade of components affected by events may be discovered through a correlation of the relationships between various components in the system.
  • a sequence of relationships between components may be constructed and the correlations applied to determine all possible cascade sequences for components that are highly correlated in the event of a system anomaly affecting any component within the sequence of relationships.
  • proper system orchestration services such as auto-scaling, migration, and/or reboot may be automatically triggered by matching a detected or predicted event with a stored anomaly pattern to automatically repair an unhealthy system as an automatic fix.
  • the system manager can configure the orchestration functions based on different extracted event patterns.
  • an autofix action could be configured.
  • the autofix action could be specified to require rebooting the software to prevent the system outage and alert the developer to patch the memory leak.
  • if the detected event type is network congestion, migration actions may be undertaken to avoid the impact of a bandwidth shortage.
  • the system for computerized network anomaly prediction and correction may consist of a processor in networked data communication with a plurality of networked components and a plurality of software components, including a software module operative to capture the unique system status of one or more network production servers through the use of unsupervised learning algorithms, where pluralities of networked components transmit at least metric data, system call trace data, and log data to the system processor.
  • a software module operative to label one or more system conditions that correspond to the metric data values that contribute to an identified anomaly pattern as defined by a system user, where the event labels may be edited or overridden by a human user with specific domain knowledge.
  • the system may use said metric data, system call trace data, and log data to create one or more anomaly events associated with said anomaly pattern, where each identified anomaly pattern is given a label automatically that can be further refined by a human user.
  • the system may aggregate two or more events having substantially the same event pattern into a consolidated event and analyze said anomaly events utilizing causal and correlation relationships between said pluralities of networked components for extracting root causes of a detected or predicted anomaly event.
  • the system may extract one or more patterns from said system call trace data to develop a list of affected functions to be reported to a user. Upon such identification, the system may utilize system user defined orchestration functions to trigger autofix functions for said events, where said autofix functions correct said one or more events, reporting autofix actions, and providing predictions and recommendations for additional corrective action to a system user.
  • the innovation first provides automatic unsupervised multi-variant statistical classification methods to extract principal event patterns from large amounts of raw metric data streams for a system under analysis.
  • the system architecture provides for ingestion of events and data from data receivers integrated 102 into the system such as events received from independent and cloud-based servers, apps active on mobile devices, and infrastructure components. Additional ingestion may be received from Application Programming Interfaces (APIs) from scheduled active polling and/or queries 104 and Custom Sources such as the RESTful API 106. Event, log, and other data patterns are received from all sources by the InsightFinder application 108.
  • each event pattern captures a unique system state.
  • the system may capture all unique system states of a live production server using unsupervised online learning algorithms.
  • the InsightFinder application 108 performs trained extraction, anomaly detection, and component actions to create output that is meaningful to a user.
  • the InsightFinder application provides root cause analysis 110, provides live and predictive alerts 112 for discovered anomalies, provides autofix actions 114 for known anomalies and/or issues, provides webhooks 116 for further information discovery and problem correction, and provides for stored event patterns 118 to optimize future discovery and correction of problem events, anomalies, and issues.
  • FIGURE 2 presents a view of metric event pattern extraction consistent with certain embodiments of the present invention.
  • the system presents the operation for metric event pattern extraction utilizing incoming data values from a plurality of sources such as, but not limited to, data receivers, APIs and custom sources.
  • Input data may be composed of a series of parameters that are ingested as metric time series event data 200.
  • the metric time series data 200 may, by way of example and not of limitation, be composed of a time span snapshot of available CPU percentage over time, the amount of free memory in megabytes, the number of input data units, the amount of CPU time consumed by users in milliseconds, the amount of CPU time consumed by the system in milliseconds, total memory consumption in megabytes, and the overall queue length for jobs in process, among other parameters that may be specified by the system as such additional parameters are identified.
  • the system may have a software module operative to perform online pattern extraction 202 from the input metric time series data input to the system.
  • the online pattern extraction process may discover a pattern, entitled Event Pattern A 204, that is indicative of a memory leak on a web server.
  • Event Pattern A 204 may be established when the unique event pattern is the result of gradually increasing memory consumption and near constant CPU usage; the system may then create the label “memory leak” for Event Pattern A 204.
  • metric values that indicate that there is disk contention on a database within the purview of the received metric time series values may be labeled by the system as Event Pattern B 206.
  • metric time series values that have not previously been received, or produce a pattern that is not yet known to the system may result in an Anomaly branding by the system such as is represented by Anomaly Pattern C 208.
  • Anomaly Pattern C 208 may occur again on a frequent or intermittent basis, however, the system is operative to store Anomaly Pattern C 208 in a pattern database. This permits the system to recall Anomaly Pattern C 208, among other stored patterns, whenever the same pattern is presented by the Online Pattern Extraction process 202.
  • the system may replace the anomaly term with the identified system condition and rename the anomaly pattern with said system condition. In this manner, the system may learn to recognize anomalous conditions and provide proper labels and recommendations for such patterns.
  • FIGURE 3 presents a view of log event pattern extraction consistent with certain embodiments of the present invention.
  • the system does not depend solely upon patterns extracted from metric time series input to perform analysis and identify patterns indicating events that may require remediation.
  • the system also receives collected log data that may be semi-structured or unstructured to provide additional verification for patterns possibly requiring remediation.
  • the received log data 300 is subjected to statistical machine learning algorithms to extract patterns from those data.
  • Feature extraction 302 from the log data uses two principal schemes to analyze the received log data 300.
  • the system may extract frequently used, or popular, words from the received log data 300.
  • the system also determines the frequency of use for each popular word.
  • a word filtering function is employed to filter out uninteresting words such as articles, verbs, and adjectives to reduce the amount of processing time and optimize the extraction of words that may truly be indicative of anomalous patterns.
  • the system may also extract popular phrases using a frequent episode mining algorithm as well as the frequency of occurrence of each popular phrase.
  • the system may also present mined frequently used words and phrases to a user to permit the user to choose the interesting keywords and phrases the user wants the system to use in performing further analysis on the log data.
  • the system may utilize the occurrences of popular words and popular phrases in combination with the frequency of occurrence of each popular word and/or phrase to construct frequency feature vectors 304.
  • the frequency feature vectors may be composed of an appearance vector for each appearance of a particular word or phrase, and a frequency vector for the number of times each popular word or phrase occurs in the received log data 300.
  • the innovation may apply unsupervised machine learning algorithms over extracted feature vectors to group log data with similar feature patterns together to perform online pattern extraction 306.
  • This online pattern extraction service 306 as applied to received log data 300 can help users browse through a large number of log events and extract useful information in a more efficient way. Moreover, this classification can be useful for incident triage by helping a user to identify previously diagnosed events.
  • the system may utilize the extracted patterns to perform rare event detection 308 from the received log data 300.
  • Rare log events may indicate some interesting incidents, which could expedite the incident triage processing by giving the rare patterns higher processing priority.
  • the innovation may also compute a windowed frequency count for each extracted log pattern and construct a histogram chart for each pattern. In a non-limiting example, if the log pattern A appeared 5 times in [0, W] and 10 times in [W + 1, 2 x W], the system may produce a histogram of [5, 10]. The innovation may then perform anomaly detection over the histogram to detect which event patterns appear more or less frequently in an unusual way.
  • the histogram may provide a user with a view of event patterns that are of interest to the user, how frequently such patterns occur, and may provide the user with the ability to select particular words or phrases for additional analysis and processing.
  • the innovation may also provide system call tracing function 400 that can collect runtime system call traces for production server applications.
  • the innovation may first perform simple pre-processing to extract system call information in the form of (timestamp, process ID, thread ID, system call type).
  • the innovation may then segment the large raw system call traces into smaller groups of related system calls that are termed execution units based on process identifier, thread identifier, and the time gap between two consecutive system calls 402.
  • the innovation may next perform frequent episode mining over the system call trace within each execution unit to identify common system call sequences to trace functions that are frequently called and the frequency with which such functions are called 404.
  • the system call trace and function trace analysis contribute to the pattern extraction 406 to disclose functions that display indicators of being affected in some way that may require intervention.
  • the innovation may also perform an affected system call trace detection 408 action in each execution unit to identify which system calls are either executed more frequently or take longer time to execute within each execution unit to determine which functions require further processing.
  • the innovation may then label each execution unit as normal or abnormal based on the anomaly detection results in comparison with the system call execution time or frequency.
  • the innovation may also map each execution unit to high level program constructs such as application functions by profiling the frequent system call episodes produced by different application functions.
  • An online anomaly detection and pattern matching 410 module receives the extracted patterns from the system call trace 402 and function trace 404 operations.
  • the patterns provide information regarding the affected system calls as identified by the analysis of the system calls and operative functions.
  • the online anomaly detection and pattern matching 410 module may provide an adaptive pattern learning framework that can accommodate both environment changes and different kinds of applications such as long-running batch jobs in comparison with short-running tasks.
  • each pattern is a compressed representation of one specific system state and each model we create consists of all possible patterns of the behavior of the system being analyzed over a period of time (e.g., one day) for each system component.
  • the innovation may then take a model ensemble approach to building a group of models for each system component where each system component could be any of a job, a task, a micro-service, or any other identified system component.
  • the learning framework expressed by the innovation is adaptive with regard to both dynamic environments and application component types.
  • the innovation may adopt different sampling rates for deriving models for different application components.
  • the innovation may employ a relatively long sampling period (e.g., 5 minutes) for pattern extraction.
  • a fine grained sampling period is utilized (e.g., 1 second) for pattern extraction.
  • the innovation may then perform dynamic model consolidations to improve the model quality for each application component.
  • the innovation aggregates the training data coming from multiple similar tasks or jobs to train one consolidated model instead of creating a set of separate models trained from segmented data. Performing this operation is particularly important for short-running tasks, which often exist for only a few minutes and would otherwise result in an insufficiently trained model.
  • the result of the online anomaly detection and pattern matching function is a set of affected functions 412 for the system under analysis.
  • FIGURE 5 presents a view of component causal relationship discovery consistent with certain embodiments of the present invention.
  • the event association algorithms extract possible correlation and causality relationships among different system components based on the start time of different events 502.
  • Two components C1 and C2 are said to be correlated if anomalous events often occur on both components concurrently.
  • Two components C1 and C2 are said to have causal relationships if anomalous events on C1 often happen before anomalous events on C2.
  • if database DB always starts to experience anomalies a few minutes after the web server WS has some issues, the inference is that there exists a possible causal relationship between DB and WS.
  • holistic root cause analysis may be performed to reveal the reason why a problem occurs in the production system.
  • the root cause analysis tool may identify the exact host(s), system metric(s), application component(s), and buggy function(s) attributed to a production system problem.
  • the root cause analysis executes an automatic drill down root cause analysis protocol to gradually narrow down the location of the root cause hosts, components, and application functions in a distributed system.
  • a log and system call trace analysis may be triggered to detect whether there exist any abnormalities in log and system call trace data to further pin down the root causes.
  • it can be distinguished whether the root cause comes from outside or inside the application software. If the root cause is located inside the application software, the buggy application function may be further localized using the system call trace pattern extraction algorithm described above.
  • the system may use a set of causality relationships and probabilities of possible correlation to determine common component failure sequences 504.
  • Frequent sequence mining may also be applied to discover common component failing sequences, that is, anomaly on component A “happens before” the anomaly on component B. Since those raw event sequences can be noisy and imprecise, frequent sequence mining may be used to extract strong causality relationships. Additional dependency information may be leveraged, such as network topologies, application structures, and communication patterns, to cross validate the group patterns discovered by the causal relationship algorithms.
  • the cascade of failures among strongly correlated components may provide a determination of one or more Key Performance Indicator (KPI) violations.
  • FIGURE 6 presents a view of component correlation relationship extraction consistent with certain embodiments of the present invention.
  • holistic performance anomaly impact prediction 600 may be provided to estimate the potential impact of a detected anomaly. Based on the anomaly correlation patterns, a first estimate may be provided as to which other components are likely to become anomalous after detecting one component anomaly. In a non-limiting example, after detecting an anomaly on switch S3 (Component 1), a prediction that edge router R1 (Component 2) will probably fail soon may be made since these components always experience anomalies together. Subsequently, a prediction may be provided regarding which application or service will be likely to experience service outages or key performance indicator (KPI) violations based on the causal relationships between system metrics and KPI violations.
  • for a distributed multi-tier application consisting of a web service tier and a database tier, if it is observed that a disk contention anomaly on the database tier is likely to cause CPU contention on the web server tier and, further, a response time increase (e.g., database disk contention → Web CPU spike → KPI violation), early alarms may be raised about any web server anomaly and KPI violation when a database anomaly is detected.
  • the technique herein recited can achieve early performance problem detection by leveraging causality analysis results.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Debugging And Monitoring (AREA)

Abstract

An unsupervised pattern extraction system and method for extracting user-interested patterns from various kinds of data such as system-level metric values, system call traces, and semi-structured or free-form text log data and performing holistic root cause analysis for distributed systems. The distributed system includes a plurality of computer machines or smart devices. The system consists of both real-time data collection and analytics functions. The analytics functions automatically extract event patterns and recognize recurrent events in real time by analyzing collected data streams from different sources. A root cause analysis component analyzes the extracted events and identifies both correlation and causality relationships among different components to pinpoint the root cause of a networked-system anomaly. Furthermore, an anomaly impact prediction component estimates the impact scope of the detected anomaly and raises early alarms about impending service outages or application performance degradations based on the identified correlation and causality relationships.

Description

SYSTEM FOR ONLINE UNSUPERVISED EVENT PATTERN EXTRACTION
COPYRIGHT NOTICE
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
BACKGROUND
As computer systems become increasingly complex, computer system anomalies become more prevalent, causing serious performance degradations, service outages, and, ultimately, significant financial loss and brand damage. To handle the anomalies, system managers wish to receive early alerts, root cause analysis, and remedy suggestions to minimize system downtime.
To date, existing solutions have mostly focused on detecting anomalous metric values. However, it is difficult for the system manager to understand enormous amounts of low-level anomalous raw data and manually extract meaningful insights or patterns from that anomalous raw data. Moreover, existing techniques typically analyze system anomalies within individual components. However, production computing infrastructures often consist of many inter-dependent components. One component anomaly may cause other components to fail and eventually bring down the whole production system. Thus, it is important to understand which groups of components have strong causal relationships among their failure incidents.
BRIEF DESCRIPTION OF THE DRAWINGS
Certain illustrative embodiments illustrating organization and method of operation, together with objects and advantages may be best understood by reference to the detailed description that follows taken in conjunction with the accompanying drawings in which:
FIGURE 1 is a view of the system architecture to perform pattern extraction and relationship extraction consistent with certain embodiments of the present invention.
FIGURE 2 is a view of metric event pattern extraction consistent with certain embodiments of the present invention.
FIGURE 3 is a view of log event pattern extraction consistent with certain embodiments of the present invention.
FIGURE 4 is a view of system call trace anomaly detection and pattern matching consistent with certain embodiments of the present invention.
FIGURE 5 is a view of component causal relationship discovery consistent with certain embodiments of the present invention.
FIGURE 6 is a view of component correlation relationship extraction consistent with certain embodiments of the present invention.
DETAILED DESCRIPTION
While this invention is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail specific embodiments, with the understanding that the present disclosure of such embodiments is to be considered as an example of the principles and not intended to limit the invention to the specific embodiments shown and described. In the description below, like reference numerals are used to describe the same, similar or corresponding parts in the several views of the drawings.
The terms “a” or “an”, as used herein, are defined as one or more than one. The term “plurality”, as used herein, is defined as two or more than two. The term “another”, as used herein, is defined as at least a second or more. The terms “including” and/or “having”, as used herein, are defined as comprising (i.e., open language). The term “coupled”, as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically.
Reference throughout this document to "one embodiment", “certain embodiments”, "an embodiment" or similar terms means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of such phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments without limitation.
Reference throughout this document to “component” refers to computers, servers, communication devices, displays, diagnostic devices, software modules, software utilities, application programming interfaces (APIs), and all other devices and modules having a network communication connection permitting each component to be connected to one or more networked systems.
Reference throughout this document to an “adaptive pattern learning algorithm” refers to the development and use of automatic learning algorithms to extract one or more association patterns from individual component events.
Reference throughout this document to a “cascade event” refers to a condition where, if a correlation probability between components is at or near 1.0, the components can be said to be highly correlated in such a way that, if one component is affected by an anomaly, highly correlated components are highly likely to be affected as well, creating a cascade of events from one component to all components that are highly correlated with that one component.
Reference throughout this document to “Key Performance Indicator (KPI)” refers to a metric, measurement, value, valuation, or statistic in which a user has high confidence that the KPI is representative of the performance of a particular network, component, software module, or other system component.
In an embodiment, the present invention relates to unsupervised online event pattern extraction and holistic root cause analysis in distributed systems. In a non-limiting example, the invention is implemented in public and private cloud environments.
In an embodiment, the innovation may first perform metric event pattern extraction.
In this embodiment, the innovation first provides automatic unsupervised multi-variant statistical classification methods to extract principal event patterns from large amounts of raw metric data streams for a system under analysis. Each event pattern captures a unique system state. A modern server system typically operates under different states over time because of environment changes such as workload fluctuations, resource allocation changes, software updates, or other actions required to meet processing load, system updates, or other maintenance and operational needs. In this embodiment, the system may capture all unique system states of a live production server using unsupervised online learning algorithms.
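A minimal sketch of how such an online, unsupervised extraction step might behave is given below. It uses a simple distance-threshold clusterer over metric vectors as a stand-in for the multi-variant statistical classification described above; the class name, the threshold, and the sample metric vectors are illustrative assumptions, not the patented algorithm (in practice the metrics would also be normalized before clustering).
```python
# Illustrative sketch only: a distance-threshold online clusterer standing in for the
# unsupervised multi-variant statistical classification described above. Names and
# thresholds are assumptions, not the patented method.
import math


class OnlinePatternExtractor:
    def __init__(self, distance_threshold=50.0):
        self.threshold = distance_threshold
        self.centroids = []   # one centroid (mean metric vector) per event pattern
        self.counts = []      # number of samples folded into each centroid

    def _distance(self, a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def observe(self, metric_vector):
        """Assign a metric sample to an existing pattern or register a new one."""
        best_id, best_dist = None, float("inf")
        for pattern_id, centroid in enumerate(self.centroids):
            d = self._distance(metric_vector, centroid)
            if d < best_dist:
                best_id, best_dist = pattern_id, d
        if best_id is not None and best_dist <= self.threshold:
            # Recurrent pattern: fold the sample into the running mean.
            n = self.counts[best_id]
            self.centroids[best_id] = [
                (c * n + x) / (n + 1) for c, x in zip(self.centroids[best_id], metric_vector)
            ]
            self.counts[best_id] += 1
            return best_id
        # Previously unseen system state: register a new event pattern.
        self.centroids.append(list(metric_vector))
        self.counts.append(1)
        return len(self.centroids) - 1


if __name__ == "__main__":
    extractor = OnlinePatternExtractor(distance_threshold=50.0)
    # Hypothetical samples: [cpu_percent, free_memory_mb, queue_length]
    for sample in ([20, 900, 3], [21, 880, 4], [85, 200, 40], [22, 870, 3]):
        print(sample, "-> pattern", extractor.observe(sample))
```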
In an exemplary embodiment, the innovation may further identify key features of each event pattern to automatically create a label for each event pattern. In a non-limiting example, if the innovation identifies that the key metrics that make an event pattern unique are the result of gradually increasing memory consumption and near constant CPU usage, this event is identified as a “memory leak” and the event is labeled and stored under this identification. A user may override or edit the label using domain knowledge that is specific to the domain in which the system under analysis is operational.
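The memory-leak example above can be expressed as a simple labeling heuristic over a pattern's key metrics. The sketch below checks for a rising memory trend combined with flat CPU usage; the metric names, thresholds, and label strings are assumptions chosen for illustration only, and a user could still override the resulting label.
```python
# Illustrative labeling heuristic (assumed names and thresholds, not the patented rules):
# a pattern whose memory consumption trends upward while CPU stays nearly constant is
# auto-labeled "memory leak"; users may later override the label.
def slope(values):
    """Least-squares slope of a series sampled at unit intervals."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs) or 1.0
    return num / den


def auto_label(memory_mb, cpu_percent):
    mem_rising = slope(memory_mb) > 1.0                    # MB per sample, assumed threshold
    cpu_flat = max(cpu_percent) - min(cpu_percent) < 5.0   # assumed "near constant" band
    if mem_rising and cpu_flat:
        return "memory leak"
    return "unlabeled anomaly"


if __name__ == "__main__":
    print(auto_label([100, 140, 190, 230, 280], [22, 23, 21, 22, 23]))  # memory leak
    print(auto_label([100, 102, 99, 101, 100], [20, 60, 35, 80, 50]))   # unlabeled anomaly
```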
In an embodiment, the innovation may also automatically identify recurrent event patterns by comparing newly extracted patterns with previously captured and stored patterns. As a byproduct of the event pattern extraction service, the innovation may also provide data compression benefit to the user by only storing unique patterns. This operation avoids duplication and promotes more efficient storage of patterns and optimizes search time when recalling event patterns for analysis or comparison. To further simplify the event browsing for the user, the innovation may further aggregate continuous events of the same pattern into one consolidated event.
In an embodiment, the innovation may next perform operations to permit the extraction from log files of event patterns. In addition to metric data, many customers already collect large amounts of log data through the operation of existing log collections and search tools such as ELK and Splunk. Log data provides useful information especially for anomaly diagnosis. Existing log analysis tools focus on providing search and query support with little support for automatic pattern extraction. Different from metric data, log data are semi-structured or unstructured. In order to apply statistical machine learning algorithms to extract patterns from the accumulated log data, the innovation may first extract quantitative features from the raw log data. In this embodiment, the innovation implements two schemes to address the issue of extracting quantitative features from raw log data. The first approach is to extract popular keywords from all the log events and use the frequency of keywords to construct the feature vector for each log event. The innovation may also provide word filtering functions for the user to filter out uninteresting words such as articles, verbs, and adjectives. In the second approach, the innovation may further extract popular phrases, where the popularity of a phrase is based upon the number of times the phrase appears in the incoming log data, using a frequent episode mining algorithm. The innovation may then construct frequency feature vectors in a similar manner as constructing word frequency vectors. The innovation may also provide a user interface for the user to conveniently choose the interesting keywords and phrases he or she prefers to use in the analysis algorithm.
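As a rough illustration of the two feature-extraction schemes just described, the sketch below builds keyword-frequency feature vectors with stop-word filtering, and uses adjacent-word bigram counts as a simple stand-in for the frequent episode mining of popular phrases. The stop-word list, thresholds, and sample log lines are assumptions, not part of the disclosure.
```python
# Illustrative sketch of turning raw log lines into frequency feature vectors. Keyword
# counting plus a simple frequent-bigram pass stands in for the frequent episode mining
# described above; stop words and thresholds are assumptions.
from collections import Counter

STOP_WORDS = {"the", "a", "an", "is", "was", "to", "of", "in", "on", "for", "and"}


def tokenize(line):
    return [w.lower().strip(".,:;()[]") for w in line.split()]


def build_vocabulary(log_lines, top_k=20, min_phrase_count=2):
    words, phrases = Counter(), Counter()
    for line in log_lines:
        tokens = [t for t in tokenize(line) if t and t not in STOP_WORDS]
        words.update(tokens)
        phrases.update(zip(tokens, tokens[1:]))          # adjacent-word "phrases"
    popular_words = [w for w, _ in words.most_common(top_k)]
    popular_phrases = [p for p, c in phrases.items() if c >= min_phrase_count]
    return popular_words, popular_phrases


def feature_vector(line, popular_words, popular_phrases):
    tokens = [t for t in tokenize(line) if t and t not in STOP_WORDS]
    bigrams = list(zip(tokens, tokens[1:]))
    return [tokens.count(w) for w in popular_words] + \
           [bigrams.count(p) for p in popular_phrases]


if __name__ == "__main__":
    logs = [
        "ERROR connection timeout to database db1",
        "WARN connection timeout to cache node c3",
        "INFO request served in 12 ms",
    ]
    vocab_words, vocab_phrases = build_vocabulary(logs)
    for line in logs:
        print(feature_vector(line, vocab_words, vocab_phrases))
```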
In an embodiment, after extracting feature vectors from raw log data, the innovation may apply unsupervised machine learning algorithms over extracted feature vectors to group log data with similar feature patterns together. This log pattern extraction service can help users browse through a large number of log events and extract useful information in a more efficient way. Moreover, this classification can be useful for incident triage by helping a user to identify previously diagnosed events. The innovation may also achieve log data compression by identifying common parts among similar log data and replacing common parts with a pattern identifier to eliminate duplicate log data and optimize the storage of data within the log files.
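The log compression idea in the preceding paragraph, replacing the common part of similar log lines with a pattern identifier, might be sketched as follows. The template rule (masking numeric fields) and the record layout are illustrative assumptions rather than the disclosed implementation.
```python
# Illustrative sketch of log pattern grouping and compression: log lines that share a
# template (numeric fields masked out) are grouped under one pattern identifier, and each
# stored record keeps only the pattern id plus the variable fields. Assumed format.
import re


def template_of(line):
    """Mask numeric fields so lines differing only in values share a template."""
    return re.sub(r"\d+", "<*>", line)


def compress(log_lines):
    pattern_ids = {}          # template -> pattern id, stored once
    compressed = []           # (pattern_id, variable fields) per line
    for line in log_lines:
        tpl = template_of(line)
        pid = pattern_ids.setdefault(tpl, len(pattern_ids))
        variables = re.findall(r"\d+", line)
        compressed.append((pid, variables))
    return pattern_ids, compressed


if __name__ == "__main__":
    logs = [
        "request 42 served in 12 ms",
        "request 43 served in 9 ms",
        "disk /dev/sda usage at 91 percent",
    ]
    patterns, records = compress(logs)
    print(patterns)   # two unique templates stored once
    print(records)    # per-line records reference a pattern id
```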
In an embodiment, in addition to log event classification, the innovation may also perform rare event identification by identifying those log data patterns that rarely appear in the analyzed log data. A histogram may be created to present to a user those event patterns that appear more or less frequently, or appear in an unusual way.
In an embodiment, the innovation is operative to perform system call trace pattern extraction for the system under analysis. The system call trace pattern extraction receives analysis data from system call traces and function call traces to create a set of system call sequence patterns for application functions called. This data may be used to extract patterns for system call traces that have been affected by some anomaly. The system call sequence pattern extraction may be used to develop a list of affected functions that may be reported to a user.
In an embodiment, the innovation is operative to create an adaptive pattern learning framework. Modern computer systems are highly complex and dynamic, especially for emerging container-based architectures where application components can be dynamically created and deleted with high frequency. The innovation may provide an adaptive pattern learning framework that can accommodate both environment changes (in the form of workload changes, resource availability variations, and other environment changes) and different kinds of applications such as long-running batch jobs in comparison with short running tasks.
In an embodiment, the innovation provides for associative event analysis to further develop automatic learning algorithms to extract the association patterns from individual component events. The event association algorithms extract possible correlation and causality relationships among different system components based on the start time of different events. A cascade of components affected by events may be discovered through a correlation of the relationships between various components in the system. A sequence of relationships between components may be constructed and the correlations applied to determine all possible cascade sequences for components that are highly correlated in the event of a system anomaly affecting any component within the sequence of relationships.
In an embodiment, proper system orchestration services such as auto-scaling, migration, and/or reboot may be automatically triggered by matching a detected or predicted event with a stored anomaly pattern to automatically repair an unhealthy system as an automatic fix. The system manager can configure the orchestration functions based on different extracted event patterns. In a non-limiting example, if the event is recognized as a memory leak bug, an autofix action could be configured. The autofix action could be specified to require rebooting the software to prevent a system outage and alert the developer to patch the memory leak. If the detected event type is network congestion, migration actions may be undertaken to avoid the impact of a bandwidth shortage. By automatically identifying different event patterns, the innovation allows the system manager to configure proper autofix actions to be triggered automatically without human intervention.
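One way to read this paragraph is as a manager-configured table mapping recognized event pattern labels to orchestration actions; matching a detected or predicted event against the table triggers the autofix. The sketch below only reports the chosen action, and all labels and action names are assumptions.
```python
# Illustrative sketch: a configured table maps recognized event pattern labels to
# orchestration actions. Action names and labels are assumptions; a real deployment
# would invoke the orchestration layer instead of returning a report.
AUTOFIX_TABLE = {
    "memory leak": ("reboot", "alert developer to patch the leaking component"),
    "network congestion": ("migrate", "move workload away from the congested link"),
    "cpu saturation": ("auto-scale", "add instances to absorb the load"),
}


def trigger_autofix(event_label):
    action = AUTOFIX_TABLE.get(event_label)
    if action is None:
        return {"event": event_label, "action": "none", "note": "no autofix configured"}
    name, note = action
    return {"event": event_label, "action": name, "note": note}


if __name__ == "__main__":
    print(trigger_autofix("memory leak"))
    print(trigger_autofix("disk contention"))
```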
In an embodiment, the system for computerized network anomaly prediction and correction may consist of a processor in networked data communication with a plurality of networked components and a plurality of software components, including a software module operative to capture the unique system status of one or more network production servers through the use of unsupervised learning algorithms, where pluralities of networked components transmit at least metric data, system call trace data, and log data to the system processor.
The system additionally includes a software module operative to label one or more system conditions that correspond to the metric data values that contribute to an identified anomaly pattern as defined by a system user, where the event labels may be edited or overridden by a human user with specific domain knowledge. The system may use said metric data, system call trace data, and log data to create one or more anomaly events associated with said anomaly pattern, where each identified anomaly pattern is given a label automatically that can be further refined by a human user. The system may aggregate two or more events having substantially the same event pattern into a consolidated event and analyze said anomaly events utilizing causal and correlation relationships between said pluralities of networked components for extracting root causes of a detected or predicted anomaly event.
The system may extract one or more patterns from said system call trace data to develop a list of affected functions to be reported to a user. Upon such identification, the system may utilize system user defined orchestration functions to trigger autofix functions for said events, where said autofix functions correct said one or more events, reporting autofix actions, and providing predictions and recommendations for additional corrective action to a system user.
Turning now to FIGURE 1, this figure presents a view of the system architecture to perform pattern extraction and relationship extraction consistent with certain embodiments of the present invention. In an exemplary embodiment, the innovation first provides automatic unsupervised multi-variant statistical classification methods to extract principal event patterns from large amounts of raw metric data streams for a system under analysis. The system architecture provides for ingestion of events and data from data receivers integrated 102 into the system such as events received from independent and cloud-based servers, apps active on mobile devices, and infrastructure components. Additional ingestion may be received from Application Programming Interfaces (APIs) from scheduled active polling and/or queries 104 and Custom Sources such as the RESTful API 106. Event, log, and other data patterns are received from all sources by the InsightFinder application 108.
In an embodiment, each event pattern captures a unique system state. In this embodiment, the system may capture all unique system states of a live production server using unsupervised online learning algorithms. The InsightFinder application 108 performs trained extraction, anomaly detection, and component actions to create output that is meaningful to a user. The InsightFinder application provides root cause analysis 110, provides live and predictive alerts 112 for discovered anomalies, provides autofix actions 114 for known anomalies and/or issues, provides webhooks 116 for further information discovery and problem correction, and provides for stored event patterns 118 to optimize future discovery and correction of problem events, anomalies, and issues.
Turning now to FIGURE 2, this figure presents a view of metric event pattern extraction consistent with certain embodiments of the present invention. In an exemplary embodiment, the system presents the operation for metric event pattern extraction utilizing incoming data values from a plurality of sources such as, but not limited to, data receivers, APIs and custom sources. Input data may be composed of a series of parameters that are ingested as metric time series event data 200. The metric time series data 200 may, by way of example and not of limitation, be composed of a time span snapshot of available CPU percentage over time, the amount of free memory in megabytes, the number of input data units, the amount of CPU time consumed by users in milliseconds, the amount of CPU time consumed by the system in milliseconds, total memory consumption in megabytes, and the overall queue length for jobs in process, among other parameters that may be specified by the system as such additional parameters are identified. The system may have a software module operative to perform online pattern extraction 202 from the metric time series data input to the system.
In an embodiment, the online pattern extraction process may discover a pattern, entitled Event Pattern A 204, that is indicative of a memory leak on a web server. As previously described, Event Pattern A 204 may be established when the unique event pattern is the result of gradually increasing memory consumption and near constant CPU usage; the system may then create the label “memory leak” for Event Pattern A 204. Similarly, metric values that indicate that there is disk contention on a database within the purview of the received metric time series values may be labeled by the system as Event Pattern B 206.
In an embodiment, metric time series values that have not previously been received, or produce a pattern that is not yet known to the system may result in an Anomaly branding by the system such as is represented by Anomaly Pattern C 208. Anomaly Pattern C 208 may occur again on a frequent or intermittent basis, however, the system is operative to store Anomaly Pattern C 208 in a pattern database. This permits the system to recall Anomaly Pattern C 208, among other stored patterns, whenever the same pattern is presented by the Online Pattern Extraction process 202. As anomalies are discovered and labeled with the system condition that corresponds to the metric data values that contribute to the identified anomaly pattern, either by the user or administrator of the system or by the system, the system may replace the anomaly term with the identified system condition and rename the anomaly pattern with said system condition. In this manner, the system may learn to recognize anomalous conditions and provide proper labels and recommendations for such patterns.
Turning now to FIGURE 3, this figure presents a view of log event pattern extraction consistent with certain embodiments of the present invention. In an exemplary embodiment, the system does not depend solely upon patterns extracted from metric time series input to perform analysis and identify patterns indicating events that may require remediation. The system also receives collected log data that may be semi-structured or unstructured to provide additional verification for patterns possibly requiring remediation. The received log data 300 is subjected to statistical machine learning algorithms to extract patterns from those data. Feature extraction 302 from the log data uses two principal schemes to analyze the received log data 300. The system may extract frequently used, or popular, words from the received log data 300. The system also determines the frequency of use for each popular word. When extracting frequently used words, a word filtering function is employed to filter out uninteresting words such as articles, verbs, and adjectives to reduce the amount of processing time and optimize the extraction of words that may truly be indicative of anomalous patterns. The system may also extract popular phrases using a frequent episode mining algorithm as well as the frequency of occurrence of each popular phrase. The system may also present mined frequently used words and phrases to a user to permit the user to choose the interesting keywords and phrases the user wants the system to use in performing further analysis on the log data.
In an embodiment, the system may utilize the occurrences of popular words and popular phrases in combination with the frequency of occurrence of each popular word and/or phrase to construct frequency feature vectors 304. The frequency feature vectors may be composed of an appearance vector for each appearance of a particular word or phrase, and a frequency vector for the number of times each popular word or phrase occurs in the received log data 300. After the creation of the frequency feature vectors has been completed, the innovation may apply unsupervised machine learning algorithms over extracted feature vectors to group log data with similar feature patterns together to perform online pattern extraction 306. This online pattern extraction service 306 as applied to received log data 300 can help users browse through a large number of log events and extract useful information in a more efficient way. Moreover, this classification can be useful for incident triage by helping a user to identify previously diagnosed events.
In an embodiment, the system may utilize the extracted patterns to perform rare event detection 308 from the received log data 300. Rare log events may indicate some interesting incidents, which could expedite the incident triage processing by giving the rare patterns higher processing priority. The innovation may also compute a windowed frequency count for each extracted log pattern and construct a histogram chart for each pattern. In a non-limiting example, if the log pattern A appeared 5 times in [0, W] and 10 times in [W + 1, 2 x W], the system may produce a histogram of [5, 10]. The innovation may then perform anomaly detection over the histogram to detect which event patterns appear more or less frequently in an unusual way. The histogram may provide a user with a view of event patterns that are of interest to the user, how frequently such patterns occur, and may provide the user with the ability to select particular words or phrases for additional analysis and processing.
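The windowed count described above (the [5, 10] example) might be computed as in the sketch below, together with a simple rarity check over the resulting histograms. The window width and the rarity threshold are assumptions for illustration.
```python
# Illustrative sketch of the windowed frequency histogram described above: events are
# bucketed into fixed windows of width W per pattern, and a pattern whose total count
# stays below a threshold is flagged as rare. Thresholds are assumptions.
from collections import defaultdict


def windowed_histogram(events, window_width):
    """events: iterable of (timestamp, pattern_id); returns pattern_id -> counts per window."""
    counts = defaultdict(lambda: defaultdict(int))
    max_window = 0
    for timestamp, pattern_id in events:
        w = int(timestamp // window_width)
        counts[pattern_id][w] += 1
        max_window = max(max_window, w)
    return {
        pid: [buckets.get(w, 0) for w in range(max_window + 1)]
        for pid, buckets in counts.items()
    }


def rare_patterns(histograms, max_total=3):
    return [pid for pid, hist in histograms.items() if sum(hist) <= max_total]


if __name__ == "__main__":
    W = 60  # seconds per window, assumed
    events = [(5, "A"), (20, "A"), (70, "A"), (75, "A"), (90, "A"), (80, "B")]
    hists = windowed_histogram(events, W)
    print(hists)                 # {"A": [2, 3], "B": [0, 1]}
    print(rare_patterns(hists))  # ["B"]
```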
Turning now to FIGURE 4, this figure presents a view of system call trace anomaly detection and pattern matching consistent with certain embodiments of the present invention. In an exemplary embodiment, in addition to metric and log data, the innovation may also provide a system call tracing function 400 that can collect runtime system call traces for production server applications. The innovation may first perform simple pre-processing to extract system call information in the form of (timestamp, process ID, thread ID, system call type). The innovation may then segment the large raw system call traces into smaller groups of related system calls that are termed execution units based on process identifier, thread identifier, and the time gap between two consecutive system calls 402. The innovation may next perform frequent episode mining over the system call trace within each execution unit to identify common system call sequences to trace functions that are frequently called and the frequency with which such functions are called 404. The system call trace and function trace analysis contribute to the pattern extraction 406 to disclose functions that display indicators of being affected in some way that may require intervention.
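A simplified sketch of the segmentation and mining steps just described follows: raw (timestamp, pid, tid, syscall) records are split into execution units per process/thread whenever the gap between consecutive calls exceeds a threshold, and short call n-grams are counted as a stand-in for frequent episode mining. The gap threshold, n-gram length, and sample trace are assumptions.
```python
# Illustrative sketch: raw (timestamp, pid, tid, syscall) records are segmented into
# execution units per (pid, tid) whenever the gap between consecutive calls exceeds a
# threshold, then short call n-grams are counted as a stand-in for frequent episode
# mining. The gap threshold and n-gram length are assumptions.
from collections import Counter, defaultdict


def segment_execution_units(records, max_gap=0.5):
    """records: list of (timestamp, pid, tid, syscall), assumed sorted by timestamp."""
    units = []
    last_seen, current = {}, defaultdict(list)
    for ts, pid, tid, call in records:
        key = (pid, tid)
        if key in last_seen and ts - last_seen[key] > max_gap:
            units.append(current[key])
            current[key] = []
        current[key].append(call)
        last_seen[key] = ts
    units.extend(unit for unit in current.values() if unit)
    return units


def frequent_sequences(units, length=2, min_count=2):
    counts = Counter()
    for unit in units:
        counts.update(tuple(unit[i:i + length]) for i in range(len(unit) - length + 1))
    return {seq: c for seq, c in counts.items() if c >= min_count}


if __name__ == "__main__":
    trace = [
        (0.00, 1, 1, "open"), (0.01, 1, 1, "read"), (0.02, 1, 1, "close"),
        (1.00, 1, 1, "open"), (1.01, 1, 1, "read"), (1.02, 1, 1, "close"),
    ]
    units = segment_execution_units(trace)
    print(units)                      # two execution units split by the 0.98 s gap
    print(frequent_sequences(units))  # ("open", "read") and ("read", "close") appear twice
```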
In an embodiment, the innovation may also perform an affected system call trace detection 408 action in each execution unit to identify which system calls are either executed more frequently or take longer time to execute within each execution unit to determine which functions require further processing. The innovation may then label each execution unit as normal or abnormal based on the anomaly detection results in comparison with the system call execution time or frequency. The innovation may also map each execution unit to high level program constructs such as application functions by profiling the frequent system call episodes produced by different application functions.
An online anomaly detection and pattern matching 410 module receives the extracted patterns from the system call trace 402 and function trace 404 operations. The patterns provide information regarding the affected system calls as identified by the analysis of the system calls and operative functions. The online anomaly detection and pattern matching 410 module may provide an adaptive pattern learning framework that can accommodate both environment changes and different kinds of applications such as long-running batch jobs in comparison with short-running tasks. At a high level, each pattern is a compressed representation of one specific system state and each model we create consists of all possible patterns of the behavior of the system being analyzed over a period of time (e.g., one day) for each system component. The innovation may then take a model ensemble approach to building a group of models for each system component where each system component could be any of a job, a task, a micro-service, or any other identified system component.
In an embodiment, the learning framework expressed by the innovation is adaptive with regard to both dynamic environments and application component types. Initially, the innovation may adopt different sampling rates for deriving models for different application components. In a non-limiting example, for long-running jobs, the innovation may employ a relatively long sampling period (e.g., 5 minutes) for pattern extraction. However, for short-running tasks, a fine-grained sampling period (e.g., 1 second) is preferably utilized for pattern extraction. The innovation may then perform dynamic model consolidation to improve the model quality for each application component. The innovation aggregates the training data coming from multiple similar tasks or jobs to train one consolidated model instead of creating a set of separate models trained from segmented data. This operation is particularly important for short-running tasks, which often exist for only a few minutes and would otherwise yield insufficiently trained models. The result of the online anomaly detection and pattern matching function is a set of affected functions 412 for the system under analysis.
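A sketch of the consolidation step, assuming tasks are grouped by a job-name key and that per-type sampling periods are configured as shown, appears below; neither the grouping key nor the concrete periods are mandated by the disclosure.

```python
# Sketch only: pool training samples from similar short-running tasks into one
# consolidated training set per job, rather than training one model per task.
from collections import defaultdict

SAMPLING_PERIOD = {"long_running": 300.0, "short_running": 1.0}  # seconds (assumed)

def consolidate_training_data(task_samples):
    """task_samples: iterable of (job_name, task_id, samples), where samples is a
    list of metric vectors collected at the configured sampling period."""
    pooled = defaultdict(list)
    for job_name, _task_id, samples in task_samples:
        pooled[job_name].extend(samples)
    return pooled

tasks = [("map_job", "t1", [[0.20, 0.10], [0.30, 0.20]]),
         ("map_job", "t2", [[0.25, 0.15]]),
         ("db_service", "t3", [[0.90, 0.40]])]
for job, data in consolidate_training_data(tasks).items():
    print(job, len(data), "training samples")
```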
Turning now to FIGURE 5, this figure presents a view of component causal relationship discovery consistent with certain embodiments of the present invention. In this embodiment, the event association algorithms extract possible correlation and causality relationships among different system components based on the start time of different events 502. Two components C1 and C2 are said to be correlated if anomalous events often occur on both components concurrently. Two components C1 and C2 are said to have a causal relationship if anomalous events on C1 often happen before anomalous events on C2. In a non-limiting example, if database DB always starts to experience anomalies a few minutes after the web server WS has some issues, the inference is that there exists a possible causal relationship between DB and WS.
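The correlation and causality tests over anomaly start times might be scored as in the following sketch; the co-occurrence window, the precedence lag, and the 0.8 ratio threshold are assumptions for illustration.

```python
# Sketch only: two components are correlated when their anomalies co-occur within
# a short window, and C1 is a causal candidate for C2 when C2's anomalies are
# usually preceded by a C1 anomaly within a bounded lag.
def correlation_score(times_c1, times_c2, window=60.0):
    """Fraction of C1 anomalies with a C2 anomaly within +/- window seconds."""
    if not times_c1:
        return 0.0
    hits = sum(any(abs(t1 - t2) <= window for t2 in times_c2) for t1 in times_c1)
    return hits / len(times_c1)

def causality_score(times_c1, times_c2, max_lag=300.0):
    """Fraction of C2 anomalies preceded by a C1 anomaly within max_lag seconds."""
    if not times_c2:
        return 0.0
    hits = sum(any(0 < t2 - t1 <= max_lag for t1 in times_c1) for t2 in times_c2)
    return hits / len(times_c2)

ws_anomalies = [100.0, 700.0, 1300.0]   # web server anomaly start times
db_anomalies = [150.0, 760.0, 1350.0]   # database anomaly start times
print(correlation_score(ws_anomalies, db_anomalies))        # co-occurrence ratio
print(causality_score(ws_anomalies, db_anomalies) > 0.8)    # WS -> DB causal candidate
```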
In an embodiment, based on the extracted events from metric data, log data, and system call trace data, holistic root cause analysis may be performed to reveal the reason why a problem occurs in the production system. Specifically, the root cause analysis tool may identify the exact host(s), system metric(s), application component(s), and buggy function(s) attributable to a production system problem. The root cause analysis executes an automatic drill-down root cause analysis protocol to gradually narrow down the location of the root cause hosts, components, and application functions in a distributed system. When an abnormal metric pattern is detected, a log and system call trace analysis may be triggered to detect whether there exist any abnormalities in the log and system call trace data to further pin down the root causes. In a non-limiting example, it can be distinguished whether the root cause comes from outside or inside the application software. If the root cause is located inside the application software, the buggy application function may be further localized using the system call trace pattern extraction algorithm described above.
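The drill-down protocol could be organized as in the sketch below, where the log analysis, system call trace analysis, and function localization steps are passed in as hypothetical callables; the concrete hooks, stub results, and report fields are assumptions, not interfaces defined by the disclosure.

```python
# Sketch only: start from an abnormal metric pattern on a host, trigger log and
# system call trace analysis, and narrow down to an application function when
# the findings point inside the application software.
def drill_down_root_cause(host, metric_anomaly,
                          analyze_logs, analyze_syscalls, localize_function):
    """Each analyze_* / localize_* argument is a callable returning findings for the host."""
    report = {"host": host, "metric": metric_anomaly, "scope": "outside application"}

    log_findings = analyze_logs(host)
    trace_findings = analyze_syscalls(host)
    if log_findings or trace_findings:
        report["scope"] = "inside application"
        report["component"] = trace_findings.get("component") if trace_findings else None
        # Narrow down further to the implicated function via syscall pattern extraction.
        report["function"] = localize_function(host, trace_findings)
    return report

# Hypothetical stub hooks for demonstration only.
report = drill_down_root_cause(
    host="web-03",
    metric_anomaly="cpu_spike",
    analyze_logs=lambda h: {"errors": 12},
    analyze_syscalls=lambda h: {"component": "checkout-service"},
    localize_function=lambda h, findings: "renderCart()",
)
print(report)
```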
In an embodiment, the system may use a set of causality relationships and probabilities of possible correlation to determine common component failure sequences 504. Frequent sequence mining may also be applied to discover common component failure sequences, that is, sequences in which an anomaly on component A "happens before" an anomaly on component B. Since the raw event sequences can be noisy and imprecise, frequent sequence mining may be used to extract strong causality relationships. Additional dependency information may be leveraged, such as network topologies, application structures, and communication patterns, to cross-validate the group patterns discovered by the causal relationship algorithms. The cascade of failures among strongly correlated components may provide a determination of one or more Key Performance Indicator (KPI) violations.
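As a simplified stand-in for full frequent sequence mining, the sketch below counts how often one component's anomaly precedes another's across incidents and keeps the pairs above a support threshold; the threshold and the pair-based simplification are assumptions.

```python
# Sketch only: mine frequent "A happens before B" pairs from per-incident component
# failure sequences ordered by anomaly start time; keep pairs with high support.
from collections import Counter
from itertools import combinations

def frequent_failure_pairs(incidents, min_support=0.5):
    """incidents: list of component failure sequences, each ordered by anomaly start time."""
    pair_counts = Counter()
    for sequence in incidents:
        seen = set()
        for earlier, later in combinations(sequence, 2):
            if earlier != later and (earlier, later) not in seen:
                seen.add((earlier, later))
                pair_counts[(earlier, later)] += 1
    n = len(incidents)
    return {pair: count / n for pair, count in pair_counts.items()
            if count / n >= min_support}

incidents = [["WS", "DB", "Cache"], ["WS", "DB"], ["LB", "WS", "DB"]]
print(frequent_failure_pairs(incidents))  # ("WS", "DB") appears in every incident
```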
Turning now to FIGURE 6, this figure presents a view of component correlation relationship extraction consistent with certain embodiments of the present invention.
In this embodiment, holistic performance anomaly impact prediction 600 may be provided to estimate the potential impact of a detected anomaly. Based on the anomaly correlation patterns, a first estimate may be provided as to which other components are likely to become anomalous after one component anomaly is detected. In a non-limiting example, after detecting an anomaly on switch S3 (Component 1), a prediction may be made that edge router R1 (Component 2) will probably fail soon, since these components always experience anomalies together. Subsequently, a prediction may be provided regarding which application or service will be likely to experience service outages or key performance indicator (KPI) violations based on the causal relationships between system metrics and KPI violations.
In a non-limiting example, consider a distributed multi-tier application consisting of a web server tier and a database tier. If it is observed that a disk contention anomaly on the database tier is likely to cause CPU contention on the web server tier and, further, a response time increase (e.g., database disk contention → web server CPU spike → KPI violation), early alarms may be raised about any web server anomaly and KPI violation as soon as a database anomaly is detected. Thus, the technique herein recited can achieve early performance problem detection by leveraging causality analysis results.
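The early-alarm behavior can be illustrated by walking a causal graph learned from the causality analysis, as in the sketch below; the graph edges mirror the database-disk-to-web-CPU-to-KPI chain in the example above and are assumed inputs rather than structures defined by the disclosure.

```python
# Sketch only: breadth-first traversal of a learned causal graph to predict which
# downstream anomalies and KPI violations are likely once one anomaly is detected.
from collections import deque

def predict_impact(causal_graph, detected_anomaly):
    """causal_graph: dict mapping an event to the events it tends to cause."""
    impacted, queue, visited = [], deque([detected_anomaly]), {detected_anomaly}
    while queue:
        current = queue.popleft()
        for downstream in causal_graph.get(current, []):
            if downstream not in visited:
                visited.add(downstream)
                impacted.append(downstream)
                queue.append(downstream)
    return impacted

causal_graph = {
    "db_disk_contention": ["web_cpu_spike"],
    "web_cpu_spike": ["kpi_violation"],
}
print(predict_impact(causal_graph, "db_disk_contention"))
# ['web_cpu_spike', 'kpi_violation'] -> raise early alarms for these events
```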
While certain illustrative embodiments have been described, it is evident that many alternatives, modifications, permutations and variations will become apparent to those skilled in the art in light of the foregoing description.

Claims

CLAIMS We claim:
1. A system for anomaly pattern recognition and root cause analysis in distributed systems, comprising: a processor in networked data communication with a plurality of networked components; the plurality of networked components transmitting at least metric data, system call trace data, and log entry data to said processor; a module operative to process said metric data, system call trace data, and log entry data to create one or more anomaly events and an anomaly pattern for each anomaly event; a module operative to identify causal relationships between said plurality of networked components; analyzing said anomaly events utilizing causal relationships between said plurality of networked components for identifying one or more root cause components; utilizing an identified anomaly pattern to trigger autofix functions, where said autofix functions correct said one or more anomalies associated with an identified anomaly pattern; and reporting autofix actions to a system user.
2. The system of claim 1, further comprising a module operative to create signatures that characterize an identified anomaly pattern and to capture unique system status of one or more network production servers through the use of unsupervised machine learning algorithms.
3. The system of claim 1, further comprising a module operative to create at least one event label for each anomaly pattern where the event labels may be edited or overridden by a human user with specific domain knowledge.
4. The system of claim 1, further comprising a module operative to aggregate two or more contiguous events having the same or substantially similar event pattern into a consolidated event.
5. The system of claim 1, further comprising a module operative to extract log event patterns from received log data and constructing a feature vector for each log entry contained within said received log data.
6. The system of claim 5, further comprising a module operative to create frequency feature vectors and word frequency vectors for each log entry contained within said received log data.
7. The system of claim 1, further comprising a module operative to extract one or more patterns from said system call trace data to identify a ranked list of affected functions to be reported to a user.
8. A system for network component anomaly pattern recognition and correction, comprising: a processor in networked data communication with a plurality of networked components; a module operative to capture unique system status of one or more network production servers through the use of unsupervised machine learning algorithms; the plurality of networked components transmitting at least metric data, system call trace data and log entry data to said processor; a module operative to process said metric data, system call trace data, and log entry data to create one or more anomaly events and at least one label for each anomaly pattern, where each identified anomaly pattern is given a label that is further defined by a human user; a module operative to aggregate two or more contiguous events having the same or substantially similar event pattern into a consolidated event; analyzing said anomaly events utilizing causal relationships between said plurality of networked components for identifying one or more root cause components; a module operative to extract one or more patterns from said system call trace data to develop a list of affected functions to be reported to a user; utilizing an identified anomaly pattern to trigger autofix functions, where said autofix functions correct said one or more anomalies associated with an identified anomaly pattern; and reporting autofix actions, and providing predictions and recommendations for additional corrective action to a system user.
9. The system of claim 11, further comprising a module operative to create signatures that characterize an identified anomaly pattern, where the event signatures may be edited and where the event labels may be edited or overridden by a human user with specific domain knowledge.
10. The system of claim 11, further comprising a module operative to capture unique system state of one or more network production servers through the use of unsupervised machine learning algorithms.
11. The system of claim 11, further comprising creating event labels where discovered events are cascading events.
12. The system of claim 11, further comprising a module operative to aggregate two or more contiguous events having the same or substantially similar event pattern into a consolidated event.
13. The system of claim 11, further comprising extracting log event patterns from received log data and constructing a feature vector for each log entry.
14. The system of claim 13, further comprising creating frequency feature vectors and word frequency vectors for each log entry.
15. The system of claim 11, further comprising extracting one or more patterns from said system call trace data to identify a ranked list of affected functions and/or a ranked list of predicted functions that will be affected by each event, said lists to be reported to a user.
PCT/US2017/030469 2019-09-03 2019-09-03 System for online unsupervised event pattern extraction WO2021045719A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2017/030469 WO2021045719A1 (en) 2019-09-03 2019-09-03 System for online unsupervised event pattern extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2017/030469 WO2021045719A1 (en) 2019-09-03 2019-09-03 System for online unsupervised event pattern extraction

Publications (1)

Publication Number Publication Date
WO2021045719A1 true WO2021045719A1 (en) 2021-03-11

Family

ID=74853477

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/030469 WO2021045719A1 (en) 2019-09-03 2019-09-03 System for online unsupervised event pattern extraction

Country Status (1)

Country Link
WO (1) WO2021045719A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050262237A1 (en) * 2004-04-19 2005-11-24 Netqos, Inc. Dynamic incident tracking and investigation in service monitors
US8024618B1 (en) * 2007-03-30 2011-09-20 Apple Inc. Multi-client and fabric diagnostics and repair
US20100100774A1 (en) * 2008-10-22 2010-04-22 International Business Machines Corporation Automatic software fault diagnosis by exploiting application signatures
US20100122270A1 (en) * 2008-11-12 2010-05-13 Lin Yeejang James System And Method For Consolidating Events In A Real Time Monitoring System
US20160315822A1 (en) * 2015-04-24 2016-10-27 Goldman, Sachs & Co. System and method for handling events involving computing systems and networks using fabric monitoring system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022222623A1 (en) * 2021-04-20 2022-10-27 International Business Machines Corporation Composite event estimation through temporal logic
GB2620538A (en) * 2021-04-20 2024-01-10 Ibm Composite event estimation through temporal logic
CN113392893A (en) * 2021-06-08 2021-09-14 北京达佳互联信息技术有限公司 Method, device, storage medium and computer program product for positioning service fault
US11809267B2 (en) 2022-04-08 2023-11-07 International Business Machines Corporation Root cause analysis of computerized system anomalies based on causal graphs

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17936998

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17936998

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 28/07/2022)
