WO2017142692A1 - High fidelity data reduction for system dependency analysis with application information - Google Patents


Info

Publication number
WO2017142692A1
WO2017142692A1 (PCT/US2017/015267)
Authority
WO
WIPO (PCT)
Prior art keywords
events
tracking
shadowed
causality
module
Prior art date
Application number
PCT/US2017/015267
Other languages
English (en)
Inventor
Zhenyu Wu
Zhichun Li
Junghwan Rhee
Fengyuan XU
Guofei Jiang
Kangkook JEE
Xusheng Xiao
Zhang Xu
Original Assignee
Nec Laboratories America, Inc.
Priority date
Filing date
Publication date
Priority claimed from US15/416,462 (published as US20170244733A1)
Application filed by Nec Laboratories America, Inc.
Priority to JP2018539057A (published as JP2019506678A)
Priority to DE112017000886.7T (published as DE112017000886T5)
Publication of WO2017142692A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/30: Monitoring
    • G06F 11/3003: Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F 11/3017: where the computing system is implementing multitasking
    • G06F 11/302: where the computing system component is a software system
    • G06F 11/3055: Monitoring arrangements for monitoring the status of the computing system or of the computing system component, e.g. monitoring if the computing system is on, off, available, not available
    • G06F 11/34: Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3409: for performance assessment
    • G06F 11/3433: for load management
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/50: Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F 21/55: Detecting local intrusion or implementing counter-measures
    • G06F 21/552: involving long-term monitoring or reporting
    • G06F 2201/00: Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F 2201/81: Threshold
    • G06F 2201/865: Monitoring of software

Definitions

  • The present invention relates to causality dependency analysis and, more particularly, to data reduction on large volumes of event information.
  • A method for dependency tracking includes identifying a hot process that generates bursts of events with interleaved dependencies. Events related to the hot process are aggregated according to a process-centric dependency approximation that ignores dependencies between the events related to the hot process. Causality is tracked in a reduced event stream that includes the aggregated events using a processor.
  • A system for dependency tracking includes a busy process module configured to identify a hot process that generates bursts of events with interleaved dependencies. An aggregation module is configured to aggregate events related to the hot process according to a process-centric dependency approximation that ignores dependencies between the events related to the hot process. A causality tracking module includes a processor configured to track causality in a reduced event stream that includes the aggregated events.
  • FIG. 1 is a block/flow diagram of a method for data reduction in accordance with the present principles
  • FIG. 2 is a block/flow diagram of a method for data reduction in accordance with the present principles
  • FIG. 3 is a diagram of an exemplary set of events in accordance with the present principles
  • FIG. 4 is a diagram of an exemplary set of events in accordance with the present principles
  • FIG. 5 is a block/flow diagram of a method for data reduction in accordance with the present principles
  • FIG. 6 is a block diagram of a data reduction system in accordance with the present principles.
  • FIG. 7 is a block diagram of a processing system in accordance with the present principles.
  • FIG. 8 is a block diagram of an intrusion detection system in accordance with the present principles.
  • The present embodiments make a distinction between "key events" and "shadowed events."
  • In a stream of low-level system events, only a small fraction of events bear causality significance to other events. These events are referred to herein as "key events."
  • For each key event, there may exist a series of "shadowed events" whose causality relations to other events are negligible in the presence of the key event. That is, the presence or absence of shadowed events does not alter the results of the dependency analysis.
  • The present embodiments therefore detect key events and shadowed events in real-time system event streams. Information relevant to dependency analysis is preserved while data volume is reduced by aggregating and summarizing other information.
  • The present embodiments can operate in either "lossless" or "lossy" modes.
  • In lossless mode, data reduction is performed based only on key event and shadowed event identification, so that causality is perfectly preserved. Arbitrary dependency analysis on data before and after data reduction produces the same sequence of events in the same order.
  • Lossy mode takes advantage of the fact that some applications (e.g., system daemons) tend to exhibit intense bursts of similar events that are not reducible in lossless mode.
  • One example of such a scenario includes repeatedly accessing a set of files with interleaved dependencies.
  • Each burst generated by such an application may perform a single high-level operation, such as checking for the existence of a particular hardware component, scanning files in a directory, etc. While the high-level operation is not necessarily complex, it can translate to highly repetitive low-level operations. From the perspective of causality analysis, tracking down the high-level operations can yield enough information to aid in understanding the results, such that the details of the exact low-level operation dependencies do not add much more value. Therefore accuracy loss can be acceptable as long as the impact of the errors is contained so as not to affect events that do not belong to the burst.
  • The present embodiments thereby provide data reduction without impacting the results of causality analysis on low-level system event traces.
  • The present embodiments may be applied to any type of data, instead of needing domain-specific knowledge that applies only to certain specific types of data. As a result, the present embodiments are applicable to a greater variety of systems.
  • Although the present embodiments target low-level system event traces, they can be applied at various semantic levels.
  • Block 102 collects an event stream, for example in the form of system calls or other process interactions in a computer system.
  • The event stream includes, e.g., timing information, type of operation, and information flow directions, which can be used to reconstruct causal dependencies between historical events. It should be noted that the terms "causality" and "dependency" may be used interchangeably herein.
  • Block 104 performs data sanitization on the collected event stream.
  • Block 106 performs data reduction on the sanitized event stream. As will be described in greater detail below, data reduction in block 106 may be lossless or lossy, with key events and shadowed events being identified in either case to locate categories of event data that may be eliminated. Block 108 then indexes and stores the remaining data for later dependency analysis.
  • Block 202 identifies busy processes which generate intense bursts of events with interleaved dependencies.
  • Block 202 thereby keeps track of each live process, including tracking, e.g., the number of resources (e.g., files, network connections, etc.) that the live processes interact with in a given time interval, and their event intensity. If both metrics are above a predefined threshold, the process is classified as busy, and is referred to herein as a "hot" process. Hot processes can be detected using a statistical calculation with a sliding time window: if the number of events related to a process in a time window exceeds the threshold, the process is marked as a hot process. In one specific example, the threshold may be set to twenty events per five seconds.
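The sliding-window test described above can be sketched as follows. This is an illustrative sketch, not code from the patent: the class and method names are assumptions, and only the event-intensity metric is modeled (the resource-count metric mentioned above is omitted for brevity).

```python
from collections import deque

# Illustrative sketch of hot-process detection with a sliding time window.
# Only the event-intensity metric is modeled here; the text also counts
# the number of distinct resources a process touches in the interval.
class HotProcessDetector:
    def __init__(self, max_events=20, window_seconds=5.0):
        self.max_events = max_events          # threshold from the text
        self.window_seconds = window_seconds  # sliding window length
        self.recent = {}                      # pid -> deque of event times

    def observe(self, pid, timestamp):
        """Record one event; return True if the process is now 'hot'."""
        q = self.recent.setdefault(pid, deque())
        q.append(timestamp)
        # Evict events that have fallen out of the sliding window.
        while q and timestamp - q[0] > self.window_seconds:
            q.popleft()
        return len(q) > self.max_events
```

A process that emits more than twenty events inside any five-second span is flagged; sparse processes never trip the threshold because old timestamps are evicted.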
  • Block 203 performs event dispatching, classifying every event according to whether the event belongs to a busy process. Events belonging to busy processes are redirected by block 205 to the process flow of FIG. 5, described below. Block 204 performs dependency tracking and aggregation on the events that do not belong to busy processes. Block 206 performs event summarization, generating a reduced event stream. This method performs lossless data reduction. Another method may be performed alongside the method of FIG. 2 to perform lossy data reduction, handling busy processes that generate events that are not reducible by the lossless method.
  • The dependency tracking and aggregation of block 204 is used to update temporary events and states, which may be used as feedback for further tracking. Block 204 thereby analyzes and identifies key events that carry causality that is significant in the event stream, as well as corresponding shadowed events, which are candidates for event aggregation.
  • The nodes 302 represent different system entities (e.g., processes or files), while the directed edges between the nodes 302 represent system events between an initiator and a target. The nodes are labeled A, B, C, and D, which may, in one specific example, be considered the entities "/etc/bash," "/etc/bashrc," "/etc/inputrc," and "/bin/wget" respectively.
  • An edge may be described as eNM-i, where N represents the initiator node, M represents the target node, and i represents an index for the order of events between those two nodes. Thus the first recorded event between nodes A and B will be denoted as eAB-1, and the second such event will be denoted as eAB-2.
  • Each event is described in this example as an event type and a time window during which the event takes place. For example, an event eAB-1 may be described as a "Read" event occurring in the time window between timestamp 10 and timestamp 20: [10, 20].
  • The nodes and edges encode information needed for causality analysis: the information flow direction (reflected by the direction of the edge), the type of event, and the window during which the event takes place.
  • Causality tracking is a recursive graph traversal procedure, which follows the causal relationship of edges either in the forward or backward direction. For example, in FIG. 3, to examine the root cause of event eAD-1, backtracking is applied on this edge, which recursively follows all edges that could have contributed to eAD-1. Causality dependency may be formally defined for two events egh and eij: egh has information flow to eij if node h is the same as node i and if the end time for egh is before the end time for eij. Information flow is transitive: if egh has information flow to eij, and eij has information flow to a third event emn, then egh also has information flow to emn.
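The backward traversal can be sketched as below. This is a simplified illustration, not the patent's implementation: events are plain tuples (initiator, target, start, end), and only the information-flow and timing conditions defined above are checked, ignoring event types.

```python
# Simplified sketch of backward causality tracking (backtracking).
# An event (g, h, start, end) can contribute to (i, j, ...) when its
# target h equals the initiator i and it ends before the later event ends.
def backtrack(events, event_of_interest):
    """Return the set of recorded events that could have contributed
    to the event of interest, following dependencies recursively."""
    causes = set()
    frontier = [event_of_interest]
    while frontier:
        src, _dst, _start, end = frontier.pop()
        for e in events:
            e_src, e_dst, e_start, e_end = e
            # Information flows into the current initiator, earlier in time.
            if e_dst == src and e_end < end and e not in causes:
                causes.add(e)
                frontier.append(e)
    return causes
```

On the FIG. 3 example, backtracking from eAD-1 (exec, [36, 37]) visits eAB-1, eAC-1, and eAC-2, and skips eAB-2 (read, [40, 42]) because it ends after the event of interest.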
  • Two events are aggregable only if they have the same type and share the same source and destination nodes. For certain types of events, such as read/write, the two events also may need to share certain attributes (e.g., a file open descriptor).
  • A set of aggregable events is a superset of a key event and its shadowed events.
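The aggregability test above might be expressed as a predicate like the following sketch; the dictionary keys (`type`, `src`, `dst`, `fd`) are illustrative assumptions, not field names from the patent.

```python
# Sketch of the aggregability check: same event type, same initiator and
# target nodes, and for read/write events the same open file descriptor.
def aggregable(e1, e2):
    if e1["type"] != e2["type"]:
        return False
    if (e1["src"], e1["dst"]) != (e2["src"], e2["dst"]):
        return False
    if e1["type"] in ("read", "write"):
        # Read/write events must also share the file descriptor attribute.
        return e1.get("fd") == e2.get("fd")
    return True
```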
  • If causality analysis is employed to determine the cause of the event eAD-1, the events that cause information flow into node A prior to event eAD-1 are backtracked, including events eAB-1 (read, [10, 20]), eAC-1 (read, [15, 23]), and eAC-2 (read, [28, 32]).
  • The event eAB-2 (read, [40, 42]) occurs after the event of interest 308, eAD-1 (exec, [36, 37]), so the existence of eAB-2 has no impact on the causality of eAD-1. The irrelevant event is marked with a dotted line 307.
  • The event eAC-2 takes place after eAC-1, and both events are of the same type (read) involving the same entities. Thus eAC-2 is a key event 304 that shadows the event eAC-1, with shadowed events being denoted by dashed line 306.
  • The shadowed events describe the same attacker activities that have already been revealed by the key events. Therefore, the data volume can be reduced while keeping the causal dependencies intact by, e.g., merging or summarizing information in shadowed events into key events while preserving causally relevant information in the latter.
  • Node E may be, for example, "excel.exe," node F may be "salary.xls," node G may be "dropbox.exe," and node H may be "backup.exe."
  • Events may include eEF-1 (write, [10, 20]), eEF-2 (write, [30, 32]), eFG-1 (read, [42, 44]), eFG-2 (read, [38, 40]), and eFH-1 (read, [18, 27]).
  • The event of interest 308 is event eEF-2, with a time window of [30, 32].
  • The events eEF-1 and eFH-1 both occur before eEF-2, so they are marked as irrelevant events 307 for forward-tracking.
  • Event eFG-2 occurs before eFG-1, making eFG-2 a key event 304 and eFG-1 a shadowed event 306.
  • Block 206 is responsible for performing data reduction. Given a key event 304 and its associated shadowed events 306, block 206 merges all events' time windows into a single time window which tightly encapsulates the start and end of the entire set of events. In addition, event type-specific data summarization is performed on other attributes of the events. For example, for "read" events, the amount of data read in all events may be accumulated into a single number denoting the total amount of data read by the set.
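The merge-and-summarize step of block 206 could look like the sketch below, assuming each event carries `start`, `end`, and (for reads) a `bytes` attribute; these field names are illustrative, not from the patent.

```python
# Sketch of event summarization: collapse a key event and its shadowed
# events into one event whose window tightly covers the whole set, with
# type-specific summarization (total bytes read) for "read" events.
def summarize(key_event, shadowed_events):
    all_events = [key_event] + list(shadowed_events)
    merged = dict(key_event)
    # One window tightly encapsulating the start and end of the set.
    merged["start"] = min(e["start"] for e in all_events)
    merged["end"] = max(e["end"] for e in all_events)
    if key_event["type"] == "read":
        # Accumulate the amount of data read across the whole set.
        merged["bytes"] = sum(e.get("bytes", 0) for e in all_events)
    return merged
```

For the FIG. 3 example, merging key event eAC-2 (read, [28, 32]) with shadowed event eAC-1 (read, [15, 23]) yields a single read event with window [15, 32].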
  • Block 202 detects busy processes and block 205 dispatches the busy processes.
  • Block 502 receives the dispatched, hot process and collects all objects involved in the interactions to form a neighbor set N(u), where u is the hot process. Instead of checking the trackability of all aggregation candidates, only those events with information flow into and out of the neighbor set N(u) are checked. This ensures that, as long as no event inside N(u) is selected as an event-of-interest, high-quality tracking results are generated.
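The neighbor-set construction can be pictured as follows. This is a sketch under the assumption that events are simple (initiator, target) pairs; the function names are illustrative.

```python
# Sketch of forming the neighbor set N(u) of a hot process u: every object
# the process directly interacts with during the burst.
def neighbor_set(u, events):
    neighbors = set()
    for src, dst in events:
        if src == u:
            neighbors.add(dst)
        elif dst == u:
            neighbors.add(src)
    return neighbors

def boundary_events(u, events):
    """Events with information flow into or out of N(u) and u itself;
    only these need trackability checks."""
    n = neighbor_set(u, events) | {u}
    # Exactly one endpoint inside the set means the event crosses the boundary.
    return [(s, d) for (s, d) in events if (s in n) != (d in n)]
```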
  • Based on the events for the busy processes, block 504 performs dependency-approximating data reduction.
  • For example, a busy process may be scanning files. The process and its directed interactions with other system objects may be tracked. All of these events may be considered part of a single high-level operation. As a result, the exact causalities among the events can be ignored and the events may be aggregated, even if they would not otherwise be aggregable.
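The process-centric approximation can be pictured as folding the whole burst into one summary record, as in this illustrative sketch (the record structure and field names are assumptions):

```python
# Sketch of the process-centric dependency approximation: all events in a
# hot process's burst are folded into one high-level operation, ignoring
# their interleaved causalities.
def approximate_burst(process, burst_events):
    return {
        "process": process,
        "operation": "burst",
        # One window covering the whole burst.
        "start": min(e["start"] for e in burst_events),
        "end": max(e["end"] for e in burst_events),
        # Which objects the burst touched, without per-event ordering.
        "targets": sorted({e["dst"] for e in burst_events}),
        "count": len(burst_events),
    }
```

Any accuracy loss is confined to the burst itself: events outside the burst still link to the single summary record rather than to fabricated fine-grained dependencies.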
  • Block 206 then aggregates events as indicated by block 504. The aggregated events that result from FIG. 5 may introduce some accuracy loss, but this accuracy loss is well-contained to events generated by busy processes.
  • Embodiments described herein may be entirely hardware, entirely software or including both hardware and software elements.
  • The present invention may be implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • Embodiments may include a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
  • A computer-usable or computer-readable medium may include any apparatus that stores, communicates, propagates, or transports the program for use by or in connection with the instruction execution system, apparatus, or device.
  • The medium can be a magnetic, optical, electronic, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. The medium may include a computer-readable storage medium such as a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, an optical disk, etc.
  • Each computer program may be tangibly stored in a machine-readable storage media or device (e.g., program memory or magnetic disk) readable by a general or special purpose programmable computer, for configuring and controlling operation of a computer when the storage media or device is read by the computer to perform the procedures described herein. The inventive system may also be considered to be embodied in a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
  • A data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code to reduce the number of times code is retrieved from bulk storage during execution.
  • I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) may be coupled to the system either directly or through intervening I/O controllers.
  • Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems and Ethernet cards are just a few of the currently available types of network adapters.
  • Advanced persistent threat (APT) attacks were found to have remained undiscovered for an average of about 6 months, and in some cases years, before launching harmful actions. This implies that, to detect and understand the impact of such attacks, enterprises need to store at least half a year of event data.
  • The system-level audit data alone can easily reach 1 GB per host. In a real-world scenario of an enterprise with 200,000 hosts, the data storage is around 17 petabytes to around 70 petabytes.
  • The data not only needs to be stored efficiently, but also indexed to make retrieval efficient. The present embodiments provide the ability to aggregate event information without substantially affecting the accuracy of the ability to detect attacks.
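The storage figures above can be reproduced with a back-of-the-envelope calculation. The per-host daily volume range (0.5 to 2 GB) is an assumption chosen to bracket the quoted 17 to 70 petabyte range; the host count and roughly half-year retention come from the text.

```python
# Back-of-the-envelope storage estimate for the figures quoted above.
hosts = 200_000
retention_days = 180                  # about half a year
gb_per_host_per_day = (0.5, 2.0)      # assumed range, not from the text

low, high = (hosts * retention_days * rate / 1024**2  # GB -> PB (binary)
             for rate in gb_per_host_per_day)
print(f"roughly {low:.0f} to {high:.0f} petabytes")
```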
  • The system 600 includes a hardware processor 602 and a memory 604. The system 600 also includes one or more functional modules that may, in one embodiment, be implemented as software that is stored by the memory 604 and executed by the processor 602. Alternatively, the functional modules may be implemented as one or more discrete hardware components, for example in the form of an application-specific integrated circuit or field-programmable gate array.
  • The functional modules include, e.g., an event monitor 606 that tracks high-level and low-level events and generates an event stream. A tracking module 608 identifies key events in the event stream as well as corresponding shadowed events. A busy process module 610 identifies hot processes within the event stream, while an approximation module 612 determines aggregations of the events related to the hot processes. An aggregation module 614 aggregates events in accordance with the output of the tracking module 608 and the approximation module 612. A causality tracking module 616 then performs causality tracking for an event-of-interest, using the event stream and event aggregations.
  • The processing system 700 includes at least one processor (CPU) 704 operatively coupled to other components via a system bus 702. A cache 706, a Read Only Memory (ROM) 708, a Random Access Memory (RAM) 710, an input/output (I/O) adapter 720, a sound adapter 730, a network adapter 740, a user interface adapter 750, and a display adapter 760 are operatively coupled to the system bus 702.
  • A first storage device 722 and a second storage device 724 are operatively coupled to system bus 702 by the I/O adapter 720. The storage devices 722 and 724 can be any of a disk storage device (e.g., a magnetic or optical disk storage device), a solid state magnetic device, and so forth. The storage devices 722 and 724 can be the same type of storage device or different types of storage devices.
  • A speaker 732 is operatively coupled to system bus 702 by the sound adapter 730. A transceiver 742 is operatively coupled to system bus 702 by network adapter 740. A display device 762 is operatively coupled to system bus 702 by display adapter 760.
  • A first user input device 752, a second user input device 754, and a third user input device 756 are operatively coupled to system bus 702 by user interface adapter 750. The user input devices 752, 754, and 756 can be any of a keyboard, a mouse, a keypad, an image capture device, a motion sensing device, a microphone, a device incorporating the functionality of at least two of the preceding devices, and so forth. Of course, other types of input devices can also be used, while maintaining the spirit of the present principles. The user input devices 752, 754, and 756 can be the same type of user input device or different types of user input devices, and are used to input and output information to and from system 700.
  • The processing system 700 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements. For example, various other input devices and/or output devices can be included in processing system 700, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art. Various types of wireless and/or wired input and/or output devices can be used. Additional processors, controllers, memories, and so forth, in various configurations, can also be utilized as readily appreciated by one of ordinary skill in the art.
  • An intrusion detection and recovery system 800 is shown. The intrusion detection and recovery system 800 includes a causality tracking system 600 as described above.
  • The intrusion detection and recovery system 800 may be tightly integrated with the causality tracking system 600, using the same hardware processor 602 and memory 604, or may alternatively have its own standalone hardware processor 802 and memory 804. In the latter case, the intrusion detection and recovery system 800 may communicate with the causality tracking system by, for example, inter-process communications, network communications, or any other appropriate medium and/or protocol.
  • The intrusion detection and recovery system 800 may flag particular events for review. This may be performed automatically, for example using one or more heuristics or machine learning processes to determine when an event is unexpected or otherwise out of place. Flagging events for review may alternatively, or in addition, be performed by a human operator who selects specific events for review.
  • The intrusion detection and recovery system 800 then indicates the flagged event to the causality tracking system 600 to efficiently build a causality trace for the flagged event. Using this causality trace, an intrusion detection module 805 determines whether an intrusion has occurred.
  • The intrusion detection module 805 may operate using, e.g., one or more heuristics or machine learning processes that take advantage of the causality information provided by the causality tracking system 600, and may be supplemented by review by a human operator to determine that an intrusion has occurred.
  • A mitigation module 806 may automatically trigger one or more mitigation actions.
  • Mitigation actions may include, for example, changing access permissions in one or more affected or accessible computing systems, quarantining affected data or programs, increasing logging or monitoring activity, and any other automatic action that may serve to stop or diminish the effect or scope of an intrusion.
  • Mitigation module 806 can guide mitigation and recovery by forward-tracking the impact of an intrusion using the causality trace.
  • An alert module 808 may alert a human operator of the intrusion, providing causality information as well as information regarding any mitigation actions that have occurred.

Abstract

Methods and systems for dependency tracking include identifying a hot process that generates bursts of events with interleaved dependencies. Events related to the hot process are aggregated according to a process-centric dependency approximation that ignores dependencies between the events related to the hot process. Causality is tracked in a reduced event stream that includes the aggregated events.
PCT/US2017/015267 2016-02-18 2017-01-27 High fidelity data reduction for system dependency analysis with application information WO2017142692A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2018539057A JP2019506678A (ja) 2016-02-18 2017-01-27 アプリケーション情報に関するシステム依存関係解析についての高忠実度データ縮約
DE112017000886.7T DE112017000886T5 (de) 2016-02-18 2017-01-27 High-Fidelity-Datenreduktion zur Systemabhängigkeitsanalyse

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201662296646P 2016-02-18 2016-02-18
US62/296,646 2016-02-18
US15/416,462 US20170244733A1 (en) 2016-02-18 2017-01-26 Intrusion detection using efficient system dependency analysis
US15/416,346 2017-01-26
US15/416,346 US20170244620A1 (en) 2016-02-18 2017-01-26 High Fidelity Data Reduction for System Dependency Analysis
US15/416,462 2017-01-26

Publications (1)

Publication Number Publication Date
WO2017142692A1 2017-08-24

Family

ID=59626206

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/015267 WO2017142692A1 (fr) High fidelity data reduction for system dependency analysis with application information

Country Status (1)

Country Link
WO (1) WO2017142692A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100100421A1 (en) * 2008-10-22 2010-04-22 Arash Bateni Methodology for selecting causal variables for use in a product demand forecasting system
WO2013055760A1 (fr) * 2011-10-14 2013-04-18 Zenoss, Inc. Procédé et appareil d'analyse de cause racine d'un impact de service dans un environnement virtualisé
US20130179568A1 (en) * 2010-06-29 2013-07-11 Telefonaktiebolaget L M Ericsson Method and apparatus for analysis of the operation of a communication system using events
US20140143185A1 (en) * 2012-11-19 2014-05-22 Qualcomm Incorporated Method and apparatus for inferring logical dependencies between random processes
US20150172412A1 (en) * 2012-07-06 2015-06-18 Cornell University Managing dependencies between operations in a distributed system



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17753624

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018539057

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 112017000886

Country of ref document: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17753624

Country of ref document: EP

Kind code of ref document: A1