GB2593509A - Computer vulnerability identification

Computer vulnerability identification

Info

Publication number
GB2593509A
Authority
GB
United Kingdom
Prior art keywords
events
event
attack
pair
computer system
Prior art date
Legal status
Pending
Application number
GB2004336.0A
Other versions
GB202004336D0 (en)
Inventor
El-Moussa Fadi
Herwono Ian
Current Assignee
British Telecommunications PLC
Original Assignee
British Telecommunications PLC
Priority date
Filing date
Publication date
Application filed by British Telecommunications PLC filed Critical British Telecommunications PLC
Priority to GB2004336.0A
Publication of GB202004336D0
Publication of GB2593509A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F 21/55 Detecting local intrusion or implementing counter-measures
    • G06F 21/552 Detecting local intrusion or implementing counter-measures involving long-term monitoring or reporting
    • G06F 21/554 Detecting local intrusion or implementing counter-measures involving event detection and direct action
    • G06F 21/57 Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
    • G06F 21/577 Assessing vulnerabilities and evaluating computer system security

Abstract

A method comprising obtaining event data representing a plurality of events that have occurred in a first computer system which has undergone a successful attack and identifying, from the plurality of events, at least one event that formed part of an unsuccessful attack on the first computer system. The unsuccessful attack is correlated with the successful attack. The unsuccessful attack is used to identify at least one vulnerability of a second computer system, different from the first computer system. At least one property of the second computer system to be modified to reduce the at least one vulnerability is determined.

Description

COMPUTER VULNERABILITY IDENTIFICATION
Field of Invention
The present invention relates to identifying a vulnerability of a computer system to an attack.
Background
After a cyber-attack on a computer system, an attack flow illustrating how the attack was carried out can be manually created. To manually create an attack flow, a cyber security expert analyses logs gathered from the attacked system and any relevant security devices in order to determine when the attack began, and the root cause and anatomy of the attack.
There are generally many logs available, which typically include a large number of alerts arising from various different sources. Attack flow analysis therefore tends to be very time consuming, and requires a high level of technical skill and understanding. Furthermore, it can be difficult to identify small steps that individually appeared innocuous but nevertheless contributed to the successful achievement of an attack objective.
It is an aim of the present invention to at least alleviate some of the aforementioned problems.
Statements of Invention
According to a first aspect of the present invention, there is provided a method comprising: obtaining event data representing a plurality of events that have occurred in a first computer system which has undergone a successful attack; identifying, from the plurality of events, at least one event that formed part of an unsuccessful attack on the first computer system, wherein the unsuccessful attack is correlated with the successful attack; using the unsuccessful attack to identify at least one vulnerability of a second computer system, different from the first computer system; and determining at least one property of the second computer system to be modified to reduce the at least one vulnerability.
In some examples, the method comprises: identifying, from the plurality of events, attack-related pairs of events; and identifying the at least one event from the attack-related pairs of events.
In these examples, the method may comprise determining, using the attack-related pairs of events, a sequence of events of the plurality of events that formed part of the successful attack, wherein identifying the at least one event comprises identifying a first event of the plurality of events which is in the sequence of events and is in a pair of the attack-related pairs with a second event of the plurality of events, the second event not forming part of the sequence of events and subsequent to the first event, the at least one event comprising the second event.
In these examples, the method may comprise identifying a time-ordered series of events of the unsuccessful attack, from the first event to a third event, wherein each event between the first event and the third event is in an attack-related pair with a previous event and is in a further attack-related pair with a subsequent event, and the at least one event comprises the events of the time-ordered series of events.
Identifying the time-ordered series of events may comprise identifying respective events of the time-ordered series of events in chronological order.
The events of the time-ordered series of events other than the first event may not be in the sequence of events.
In these examples, identifying the at least one event may comprise identifying, from the attack-related pairs of events, a fourth event not forming part of the sequence of events, based on a comparison between a value of an attribute of the fourth event and a value of the attribute of an event of the sequence of events, the at least one event comprising the fourth event.
The value of the attribute of the fourth event may be the same as the value of the attribute of the event of the sequence of events.
The attribute may be obtained from a knowledge base.
A further time-ordered series of events of the unsuccessful attack may be identified, from the fourth event to a fifth event, wherein each event between the fourth event and the fifth event is in an attack-related pair with a previous event and is in a further attack-related pair with a subsequent event, and the at least one event comprises the events of the further time-ordered series of events.
Identifying the further time-ordered series of events may comprise identifying respective events of the further time-ordered series of events in chronological order.
The events of the further time-ordered series of events may not be in the sequence of events.
In these examples, the method may comprise identifying a pair type of a pair of events of the plurality of events, wherein identifying the attack-related pairs of events comprises identifying that the pair of events is an attack-related pair of events using the pair type of the pair of events.
In these examples, the method may comprise obtaining, for each of a or the plurality of predetermined pair types, a respective set of attributes for use in identifying whether a given pair of events of the plurality of events is an attack-related pair of events.
In these examples, identifying the attack-related pairs of events may comprise identifying that a pair of events of the plurality of events is an attack-related pair of events based on processing of respective attribute values of attributes associated with the pair of events using a machine learning classifier.
In some examples, the method comprises modifying the at least one property of the second computer system to reduce the at least one vulnerability.
In some examples, the method comprises: using the successful attack to identify at least one further vulnerability of the second computer system; and determining at least one further property of the second computer system to be modified to reduce the at least one further vulnerability.
The at least one further property of the second computer system may be modified to reduce the at least one further vulnerability of the second computer system.
The at least one vulnerability of the second computer may comprise a vulnerability of the second computer to an attack of the same type as the successful attack.
According to a second aspect of the present invention, there is provided a system comprising: storage for storing event data representing a plurality of events that have occurred in a first computer system which has undergone a successful attack; and at least one processor configured to: identify, from the plurality of events, at least one event that formed part of an unsuccessful attack on the first computer system, wherein the unsuccessful attack is correlated with the successful attack; use the unsuccessful attack to identify at least one vulnerability of a second computer system, different from the first computer system; and determine at least one property of the second computer system to be modified to reduce the at least one vulnerability.
In some examples, the at least one processor is configured to: identify, from the plurality of events, attack-related pairs of events; and identify the at least one event from the attack-related pairs of events.
In some examples, the at least one processor is configured to determine, using the attack-related pairs of events, a sequence of events of the plurality of events that formed part of the successful attack, wherein identifying the at least one event comprises identifying a first event of the plurality of events which is in the sequence of events and is in a pair of the attack-related pairs with a second event of the plurality of events, the second event not forming part of the sequence of events and subsequent to the first event, the at least one event comprising the second event.
According to a third aspect of the present invention, there is provided a computer-readable medium storing thereon a program for carrying out the method of any examples in accordance with the first aspect.
The invention includes any novel aspects described and/or illustrated herein. The invention also extends to methods and/or apparatus substantially as herein described and/or as illustrated with reference to the accompanying drawings. The invention is also provided as a computer program and/or a computer program product for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein, and as a computer-readable medium storing thereon a program for carrying out any of the methods and/or for embodying any of the apparatus features described herein. Features described as being implemented in hardware may alternatively be implemented in software, and vice versa.
The invention also provides a method of transmitting a signal, and a computer product having an operating system that supports a computer program for performing any of the methods described herein and/or for embodying any of the apparatus features described herein.
Any apparatus feature may also be provided as a corresponding step of a method, and vice versa.
As used herein, means plus function features may alternatively be expressed in terms of their corresponding structure, for example as a suitably-programmed processor and/or as suitably configured circuitry.
Any feature in one aspect of the invention may be applied, in any appropriate combination, to other aspects of the invention. Any, some and/or all features in one aspect can be applied to any, some and/or all features in any other aspect, in any appropriate combination. Particular combinations of the various features described and defined in any aspects of the invention can be implemented and/or supplied and/or used independently.
As used throughout, the word 'or' can be interpreted in the exclusive and/or inclusive sense, unless otherwise specified.
The invention extends at least to a method, a system and a computer-readable medium substantially as described herein and/or substantially as illustrated with reference to the accompanying drawings.
The present invention is now described, purely by way of example, with reference to the accompanying diagrammatic drawings, in which: Figure 1 is a schematic diagram showing a sequence of events that formed part of a successful attack on a computer system; Figure 2 is a schematic diagram showing a method of identifying an attack story of an attack on a computer system; Figures 3a, 3b and 3c are schematic diagrams showing examples of obtaining attribute values for respective attributes of a pair of events; Figure 4 is a schematic diagram showing the identification of attack-related pairs of events using a plurality of random forest classifiers; Figure 5 is a schematic diagram showing the identification of attack-related pairs of events using a single random forest classifier; Figures 6a to 6d are schematic diagrams showing the identification of a sequence of events that formed part of a successful attack on a computer system; Figure 7 is a schematic diagram showing an example of identifying at least one event that formed part of an unsuccessful attack on a computer system; Figure 8 is a schematic diagram showing a further example of identifying at least one event that formed part of an unsuccessful attack on a computer system; Figure 9 is a schematic diagram showing the identification of an attack story comprising the successful attack of Figure 6 and the unsuccessful attacks of Figures 7 and 8; Figure 10 is a schematic diagram of an example system for use with the methods herein; and Figure 11 is a schematic diagram of internal components of an example computer system.
Specific Description
Figure 1 is a schematic diagram showing a sequence of events 100 that formed part of a successful attack on a computer system, to put the methods and apparatuses herein into context.
The computer system in this case includes four security systems 102a-102d (collectively referred to with the reference numeral 102). Each of the security systems 102 is configured to detect a different type of malicious behaviour (although in some cases the same activity may be identified as malicious by a plurality of the security systems 102). A security system may be or include an intrusion detection system (IDS), an IPS (intrusion prevention system), an anti-virus system, a firewall and/or an anti-malware system, for example.
The security systems 102 are arranged to generate an alert if malicious behaviour is detected. Malicious behaviour is behaviour that appears to be suspicious or that is unexpected or uncommon. Such behaviour may indicate that the user is trying to interfere with the computer system, e.g. to gain unauthorised access to the computer system (or data stored thereon) and/or to compromise a performance of the computer system. Alerts 104a-104c generated by the first security system 102a, alerts 106a-106c generated by the second security system 102b, alerts 108a-108c generated by the third security system 102c and alerts 110a-110c generated by the fourth security system 102d are shown schematically in Figure 1. The alerts 104, 106, 108, 110 are received over a time period from an initial time, shown as t = 0 with respect to a time axis 112 in Figure 1.
Each alert is an example of an event, which may be considered to represent an action of the computer system (such as an action performed by the computer system or a subsystem of the computer system, or an interaction between the computer system and an external system or user). In some cases, an event is considered to be or correspond to the activity performed by the attacker, which activity caused an alert to be generated by a security system of the computer system. In others, though, the alert itself may be considered to be an event.
Information associated with respective events is typically stored in log files associated with the given subsystem at which the event occurred. Hence, in the example of Figure 1, each alert corresponds to a log entry in a log file of the respective security system 102 that generated the alert. In other cases, though, at least one of the events may correspond to a log entry associated with an action of the user rather than of a security system 102.
Most of the alerts 104, 106, 108, 110 of Figure 1 did not form part of the successful attack. For example, these other alerts may have been triggered by actions performed by the attacker that were unsuccessful in achieving an attack objective, or they may have been triggered by unrelated activity, e.g. by users other than the attacker.
However, some of the alerts (the first alert 104a of the first security system 102a, then the first alert 108a of the third security system 102c, then the second alert 104b of the first security system 102a, then the third alert 106c of the second security system 102b, then the third alert 110c of the fourth security system 102d) are identified as being a sequence of events 100 forming part of the successful attack. The identification of a sequence of events 100 such as that shown in Figure 1 is described in more detail below.
By identifying the sequence of events 100, a vulnerability of the computer system to the attack can be identified, enhancing an understanding of potential flaws in the computer system. The computer system can be appropriately modified to reduce the vulnerability. The security of the computer system can thereby be improved, reducing the likelihood of a similar attack being successful in the future. For example, a further security system can be created to detect part or all of the sequence of events and to take appropriate action to prevent the successful completion of an attack objective identified as being associated with the sequence of events 100. Additionally or alternatively, one or more of the existing security systems 102 can be modified such that performance of a similar sequence of events in future would no longer lead to successful completion of the attack objective. A vulnerability of a different computer system may also or alternatively be reduced in a similar way, based on the sequence of events 100 that occurred in the computer system.
Figure 2 is a schematic diagram showing a method of identifying an attack story of an attack on a computer system. The attack story for example indicates attack-related events that occurred during the course of the attack. By identifying the attack story, a vulnerability of a computer system (or a different computer system) can in turn be identified.
Historical logs 200 of events of a plurality of different security devices associated with the computer system are ingested by a system arranged to identify the attack story. This system may be the computer system that underwent the attack or a different computer system. Each log contains a number of events (e.g. security and network traffic events) that are usually recorded in chronological order. The logs 200 are an example of event data representing a plurality of events that have occurred in the computer system.
At step 202 of Figure 2, pairs of events are identified from the events in the logs 200 and values of attributes of the pairs of events are obtained. The attributes for which values are to be obtained are determined based on information provided by a knowledge base 204, as discussed further with reference to Figure 3. A pair of events is for example any two different events that have occurred in the logs 200. Events of a given pair need not be obtained from the same source, e.g. from the same security device. For example, one event may have occurred at a first security device and another may have occurred at a second, different, security device. It is to be appreciated that some pre-selection of events may be performed at or before this stage, e.g. to discard events that are likely to be innocuous and/or events with missing or incomplete information.
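By way of illustration only, the pair generation of step 202 might be sketched as follows in Python. This is not part of the patent disclosure: the Event fields, event-type strings and log structure are assumptions made for the sketch.

```python
from dataclasses import dataclass, field
from itertools import combinations

@dataclass(eq=False)  # identity-based equality keeps events hashable
class Event:
    event_id: str
    source: str        # event type, e.g. "ids_alert" (illustrative name)
    timestamp: float   # seconds from the start of the observation window
    attrs: dict = field(default_factory=dict)  # fields parsed from the log entry

def candidate_pairs(logs):
    """Merge per-device logs and emit every chronologically ordered pair
    of distinct events, mirroring step 202 of Figure 2."""
    events = sorted((e for log in logs for e in log), key=lambda e: e.timestamp)
    # combinations() preserves the sorted order, so the first event of each
    # pair always occurred no later than the second.
    return list(combinations(events, 2))
```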
After obtaining values of the attributes of the event pairs from the logs 200, the values of the attributes are provided as inputs to trained random forest classifiers 206. The random forest classifiers 206 identify at step 208 whether each pair of events is an attack-related pair of events or an unrelated pair of events, which is unrelated to the attack. It is to be appreciated that the identification of a pair of events as being attack-related at this stage does not necessarily mean that the event pair was actually part of the attack. Instead, an attack-related pair of events is merely a pair of events that is likely to be part of the attack. The attack itself includes both a successful attack (which succeeded in achieving an attack objective of the attacker) and an unsuccessful attack (which failed to achieve the attack objective). Attack-related pairs of events are those pairs of events that are likely to be part of either the successful attack or the unsuccessful attack. Events of an attack-related pair of events may be considered correlated with each other in that they are both likely to be correlated with the attack the computer system has undergone (rather than random events with no connection to the attack).
The pairs of events that have been positively classified as attack-related pairs of events are grouped together and a systematic selection process (in this case, backward tracking, discussed further with reference to Figures 6a to 6d) is performed at step 210 in order to identify potential attack flows 212. In this way, a sequence of events that formed part of the successful attack is determined using the attack-related pairs of events.
At step 214, another selection process (in this case, forward tracking, discussed further with reference to Figures 7 and 8) is performed to identify at least one event that formed part of an unsuccessful attack of the attack to build an attack story 216. The attack story 216 includes both the sequence of events of the successful attack as well as the at least one event of the unsuccessful attack.
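The forward tracking of step 214 is detailed with reference to Figures 7 and 8. As a rough sketch under the reading given in the statements above (a first event in the successful sequence is paired with a subsequent second event outside that sequence, and the series is then extended forward in time), it might proceed as follows; this reuses the Event sketch above and is an interpretation, not the patented method verbatim.

```python
def forward_track(attack_pairs, successful_sequence):
    """Identify events of the unsuccessful attack by following
    attack-related pairs forward in time from the successful sequence.
    attack_pairs is an iterable of (earlier_event, later_event) tuples."""
    in_sequence = set(successful_sequence)
    unsuccessful = []
    # Second events of pairs whose first event is in the successful sequence.
    frontier = [b for a, b in attack_pairs
                if a in in_sequence and b not in in_sequence]
    while frontier:
        event = frontier.pop()
        if event in unsuccessful:
            continue
        unsuccessful.append(event)
        # Extend the time-ordered series via pairs led by the new event.
        frontier.extend(b for a, b in attack_pairs
                        if a is event and b not in in_sequence)
    return unsuccessful
```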
In some examples, identifying whether a pair of events is an attack-related pair of events uses a pair type of the pair of events. Each event is generally of a particular type (which may be referred to as an event type), such as an IDS alert, an IPS (intrusion prevention system) alert, a FireEye® alert, a NetFlow traffic anomaly, an anti-malware alert, a phishing email alert, or a suspicious web proxy log entry. Various different events may be considered to be of the same event type if they relate to the same activity and/or if they are associated with the same system or subsystem. For example, a port scan alert and a brute force attack attempt alert both identified by an IDS system may each be considered to be of the "IDS alert" event type. Conversely, a phishing email alert and an HTTP request to a malicious web site relate to different activities (one relating to email and one relating to accessing a web site). These events may therefore be considered to be of different event types (a "phishing email alert" event type and a "suspicious web proxy log entry" event type, respectively).
A pair type represents a combination of the event types of a given pair of events. For example, the pair type may indicate that each event of the pair is of the same type or is of a different type.
In some examples, the pair type further indicates the event type of each of the pair of events (e.g. that both events are IDS alerts, rather than merely that both events are of the same type). In yet further examples, the pair type also indicates the order in which the events of the pair of events occurred. In such cases, the pair type is different for two events depending on the order in which the two events occurred. In other words, a pair type indicating that a "phishing email alert" occurred before a "suspicious web proxy log" differs from a pair type indicating that a "suspicious web proxy log" occurred before a "phishing email alert". This is because the order in which events occurred within a system can be indicative of whether the events were indeed malicious and were likely to form part of an attack. For example, a given set of events may be innocuous when performed in one order, but malicious when performed in a different order. In other cases, though, the pair type may be independent of the order in which the events occurred. In these other cases, the pair type indicating that a "phishing email alert" occurred before a "suspicious web proxy log" is therefore the same as a pair type indicating that a "suspicious web proxy log" occurred before a "phishing email alert".
The pair type may be indicated in various ways. In one case, a plurality of predetermined pair types may be identified and each of the predetermined pair types may be assigned a given pair type indicator (e.g. a suitable reference number), e.g.:

Pair type                                                             | Pair type indicator
Event 1 - IDS alert; Event 2 - IDS alert                              | 1
Event 1 - phishing email alert; Event 2 - suspicious web proxy alert  | 2
Event 1 - suspicious web proxy alert; Event 2 - phishing email alert  | 3

The pair type of a given pair of events can be identified based on properties of each of the events of the pair, such as the system or subsystem in which the event occurred. In one example, the event type of each event is determined based on the log from which the event was obtained (e.g. if the event was recorded in a log associated with the IDS system, it is determined to be an IDS alert). The pair type of the pair of events is then determined based on the event type of each event.
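A minimal sketch of such an indicator lookup, assuming the Event sketch above; the event-type strings are illustrative assumptions, while the indicator numbers follow the example table.

```python
# Predetermined pair types, keyed by the (ordered) event types of the pair.
PAIR_TYPE_INDICATORS = {
    ("ids_alert", "ids_alert"): 1,
    ("phishing_email_alert", "suspicious_web_proxy_alert"): 2,
    ("suspicious_web_proxy_alert", "phishing_email_alert"): 3,
}

def pair_type(first, second):
    """Return the pair type indicator for a chronologically ordered pair of
    events, or None if the combination is not a predetermined pair type."""
    return PAIR_TYPE_INDICATORS.get((first.source, second.source))
```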
This approach allows pairs of events which are of particular pair types (e.g. those that are likely to indicate malicious activity) to be identified. In some cases, the attack-related pairs of events are identified from pairs of events with a pair type that is one of a plurality of predetermined pair types (e.g. those that are considered suspicious). In this way, by identifying a subset of pairs of events (those with a pair type that is one of a plurality of predetermined pair types) and identifying the attack-related pairs from this subset of events, the amount of data to be processed to identify attack-related pairs is reduced.
In some cases, using the pair type of a given pair of events to identify whether that pair of events is attack-related allows the identification process to be tailored to the pair type of the given pair, as discussed further with reference to Figures 3a to 3c. This can improve the accuracy with which an attack-related pair of events is correctly identified as being attack-related.
In these examples, attributes for use in identifying whether a pair of events is an attack-related pair of events can be obtained based on the pair type. Attribute values of these attributes can then be used to identify whether the pair of events is attack-related. Various attributes may be more or less suspicious depending on the pair type. Hence, by using different attributes to ascertain whether the pair of events is attack-related, depending on the pair type, the true nature of the pair of events can be more effectively identified. For example, attributes that provide more information on the nature of the pair of events can be selected for use in determining whether the pair is attack-related. Other, less discriminative, attributes can be disregarded for a given pair of events, reducing the amount of information to be processed compared with processing all possible attributes for each pair of events.
Figures 3a, 3b and 3c are schematic diagrams showing examples of obtaining attribute values for respective attributes of a pair of events. In Figure 3a, a set of attributes 300 is obtained for a pair of events which are each of the same type (IDS alerts in this case). The set of attributes 300 is obtained from a knowledge base in this example. A knowledge base is for example a data storage system for storing structured and/or unstructured information. In this case, the knowledge base stores a plurality of sets of attributes, each associated with a different respective pair type of a plurality of predetermined pair types. A knowledge base typically comprises at least one database, and is generally structured to facilitate reasoning over the information stored therein. In Figure 3a, the knowledge base has been curated by a cyber security expert based on their knowledge and experience of events that are likely to be associated with cyber-attacks. The cyber security expert in this case has created sets of attributes for different respective pair types, for use in identifying whether a given pair of events of that pair type is attack-related. The cyber security expert has selected particular attributes for a given set based on their ability to distinguish attack-related pairs of events from innocuous pairs of events.
A value of an attribute, which may be referred to as a feature, indicates a characteristic of one or both events of a given pair of events and/or the relationship between the events of the pair of events. For example, the value of the attribute may indicate a feature of one of the events (e.g. the event category or the event severity), or a relationship between various features of the events (e.g. whether the source IP address of each event is the same, whether the subnet of each event is the same, whether the event category of each event is the same, a timestamp difference between timestamps associated with each event, or whether a destination IP address of the first event corresponds to a source IP address of the second event).
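As a hedged illustration of extracting such attribute values, the following sketch reuses the Event sketch above; the attribute identifiers ("A1", "A2", "A3") and the log field names (src_ip, dst_ip) are hypothetical stand-ins for the identifiers 304 and log fields of Figure 3a.

```python
def same_source_ip(first, second):
    """True if both events report the same source IP address."""
    return first.attrs.get("src_ip") == second.attrs.get("src_ip")

def timestamp_difference(first, second):
    """Time elapsed between the two events of the pair."""
    return second.timestamp - first.timestamp

def dst_matches_src(first, second):
    """True if the destination IP address of the first event corresponds
    to the source IP address of the second event."""
    return first.attrs.get("dst_ip") == second.attrs.get("src_ip")

# Hypothetical attribute identifiers mapped to their extractor functions.
ATTRIBUTE_EXTRACTORS = {
    "A1": same_source_ip,
    "A2": timestamp_difference,
    "A3": dst_matches_src,
}

def attribute_values(pair, attribute_ids):
    """Compute the set of attribute values for one pair of events, given
    the set of attributes obtained for the pair type of that pair."""
    first, second = pair
    return {a: ATTRIBUTE_EXTRACTORS[a](first, second) for a in attribute_ids}
```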
In Figure 3a, the set of attributes 300 includes a first attribute 302a of the first event, indicating that the event type of a first event (which may be referred to as an event category) is "port scanning". The set of attributes 300 also includes a second attribute 302b of a second event (which occurred after the first event), indicating that the event type of the second event is a "web application attack". The set of attributes 300 further includes a third attribute 302c of the pair of events, indicating that the source IF addresses of the events are the same. The set of attributes 300 of Figure 3a also includes various other attributes, and is merely an example.
In Figure 3a, each attribute of the set of attributes 300 includes an attribute identifier (one of which is labelled in Figure 3a with the reference numeral 304). This facilitates easy retrieval and/or processing of each of the attributes 302.
Figure 3a illustrates an example in which the set of attributes 300 is obtained for an event pair of a particular pair type (indicating that the events of the pair are both IDS alerts). However, it is to be appreciated that a respective set of attributes may be obtained for each of a plurality of predetermined pair types, e.g. before event data is processed to identify attack-related pairs of events, or even before an attack has occurred. In other words, the sets of attributes may be pre-fetched or pre-obtained from the knowledge base, to improve the efficiency of identifying the attack-related pairs of events. As for the set of attributes 300 of Figure 3a, each of these sets of attributes may be obtained from a knowledge base. The sets of attributes may be different for different respective pair types, or at least some of the sets of attributes may be the same despite being associated with different respective pair types. In such cases, when a particular pair of events is obtained from a set of logs to be processed, the set of attributes for that pair of events may be selected from the pre-fetched sets of attributes (which e.g. may be stored locally to the computer system configured to process the logs), based on the pair type of that pair of events.
In some cases, the plurality of predetermined pair types are themselves obtained from a knowledge base, such as that used to obtain the set of attributes 300. For example, the plurality of predetermined pair types may be stored in a table or other data format in the knowledge base, and may then be retrieved for use in determining whether a given pair of events is attack-related.
In one case, the plurality of predetermined pair types are obtained and a determination is made as to whether the pair type of a given pair of events is one of the plurality of predetermined pair types. If it is not, the pair of events is treated as an innocuous pair of events. Further processing of that pair of events therefore ceases. If a pair type of the pair of events is one of the plurality of predetermined pair types, however, the set of attributes for that pair type is retrieved from the knowledge base and used to determine whether that pair of events is attack-related.
Referring back to Figure 3a, after obtaining the set of attributes 300, event data 306, which in this case represents log extracts for each of the events, is processed to extract values of the attributes 302 of the set of attributes 300. For example, processing the event data 306 may include querying a database storing the event data 306 to obtain the values of the attributes for the events in question. In Figure 3a, the log extracts are shown within the same table, but this is merely illustrative. In other cases, the event data is distributed across a plurality of different logs and/or different storage systems. For example, each security system may store a log for that security system in a database of that security system.
By processing the event data 306, a set of attribute values 308 for the attributes 302 is obtained. The set of attribute values 308 includes, for each of the attributes 302, an attribute value (one of which is labelled in Figure 3a with the reference numeral 310). The attribute values 310 are each associated with the attribute identifier 304 for the given attribute. As can be seen in Figure 3a, the attribute values may be of various formats, such as Boolean values, numbers, and so forth.
Figure 3b shows a further example of obtaining attribute values 310' for respective attributes 302' of a pair of events. Features of Figure 3b that are similar to corresponding features of Figure 3a are labelled with the same reference numerals, appended with a prime ('). Figure 3b is the same as Figure 3a except that in Figure 3b the first event is a malware alert and the second event is an IDS alert (rather than both events being IDS alerts). Hence, the pair type of the pair of events of Figure 3b differs from that of Figure 3a. The attributes 302' of the set of attributes 300' in Figure 3b are also different from the attributes 302 of Figure 3a. The attributes 302, 302' of Figures 3a and 3b are tailored to the respective pair type, in order to obtain attribute values for attributes that are more useful in identifying whether the pair of events of that pair type is attack-related.
Figure 3c shows a yet further example of obtaining attribute values 310" for respective attributes 302" of a pair of events. Features of Figure 3c that are similar to corresponding features of Figure 3a are labelled with the same reference numerals, appended with a double prime ("). Figure 3c is the same as Figure 3a except that in Figure 3c the first event is a suspicious web proxy log and the second event is an anti-malware alert (rather than both events being IDS alerts). The pair type of the pair of events and the attributes 302" of Figure 3c differ from the pair type and the attributes 302 of Figure 3a, respectively, but this is merely an example.
Figure 4 is a schematic diagram indicating an example 400 of identifying attack-related pairs of events. In Figure 4, the attack-related pairs of events are obtained by processing attribute values of attributes for pairs of events to identify whether the pairs of events are related to an attack that occurred in a computer system. The attribute values may be obtained as described with reference to Figures 3a to 3c, for example. The computer system in this case includes three security systems 402a-402c (collectively referred to with the reference numeral 402), which have generated various alerts during the course of the attack, each of which is considered to correspond to a respective event. The events 404a-404e of the first security system 402a, the events 406a, 406b of the second security system 402b and the events 408a-408c of the third security system 402c are shown schematically in Figure 4. The events 404, 406, 408 occurred over a time period from an initial time, shown as t = 0 with respect to a time axis 410 in Figure 4.
In Figure 4, there are three different event types (each corresponding to an event associated with a different one of the security systems 402). In this example, there are six possible combinations of event types, i.e. six possible pair types, for a pair of events that occurred in the computer system:
* Pair type 1: both events are associated with the first security system 402a;
* Pair type 2: both events are associated with the second security system 402b;
* Pair type 3: both events are associated with the third security system 402c;
* Pair type 4: an event associated with the first security system 402a is followed by an event associated with the second security system 402b (or vice versa);
* Pair type 5: an event associated with the first security system 402a is followed by an event associated with the third security system 402c (or vice versa); and
* Pair type 6: an event associated with the second security system 402b is followed by an event associated with the third security system 402c (or vice versa).
It is to be appreciated that an event is associated with a given security system where, for example, the event was detected by or otherwise occurred within that security system or is represented by a log entry of a log of that security system. In this example, the pair type of a given pair of events does not depend on the chronological order of the events (i.e. the pair type of one of the events being associated with the first security system 402a and the other of the events being associated with the second security system 402b is the same, irrespective of the order in which the events occurred). This need not be the case in other examples, though, in which the pair type of a given pair of events does depend on the chronological order of the events.
In Figure 4, some of these possible combinations of event types are excluded from the plurality of predetermined pair types because it is determined that these possible combinations are unlikely to be attack-related. In Figure 4, there are four predetermined pair types that are considered to be potentially attack-related (pair types 1, 3, 4 and 5).
In the example of Figure 4, the attribute values for a given pair of events with a pair type of the plurality of predetermined pair types are processed using a random forest classifier. The random forest classifier used to process the attribute values for the given pair of events is selected based on the pair type of the pair of events, from a plurality of random forest classifiers. Each of the random forest classifiers is associated with a different respective pair type. In Figure 4, there are four random forest classifiers 412a-412d, one for each of the predetermined pair types. A first random forest classifier 412a is associated with pair type 1 (indicating that both events are associated with the first security system 402a). A second random forest classifier 412b is associated with pair type 4 (indicating that the first event is associated with the first security system 402a and the second event is associated with the second security system 402b (or vice versa)). A third random forest classifier 412c is associated with pair type 5 (indicating that the first event is associated with the first security system 402a and the second event is associated with the third security system 402c (or vice versa)). A fourth random forest classifier 412d is associated with pair type 3 (indicating that the first and second events are both associated with the third security system 402c).
A random forest classifier is a classifier trained according to the random forest algorithm. The random forest algorithm is a supervised learning algorithm in which multiple decision trees are built and merged together to improve the accuracy and stability of the predicted classification. Each decision tree in the forest considers a random subset of features (in this case, a random subset of attribute values) when its decision nodes are formed during the training process. In addition, each decision tree only has access to a random set of the training data. This increases diversity, improving the robustness of the predictions obtained using the trained classifier. After training the random forest classifier, a majority vote of the classifications obtained by processing input data using each of the individual decision trees of the trained random forest classifier is taken as the predicted classification of the input data.
In the present case, the random forest classifier for a given pair type has been trained to classify input pairs of events of that pair type as either attack-related or not attack-related (i.e. innocuous or otherwise unrelated to the attack). The random forest classifier for a given pair type is trained to process attribute values for the set of attributes associated with the given pair type to determine whether a pair of events of that pair type is attack-related or not. The set of attributes may be obtained from a knowledge base, e.g. as explained with reference to Figures 3a to 3c, and may differ for different pair types. In this way, the random forest classifiers 412 are trained to classify whether two separate events (a first event and a second event that occurred after the first event) are attack-related in the sense that the first event could potentially be a logical predecessor of the second event in the attack on the computing system in which the events occurred. If a pair of events are classified as attack-related, this indicates that the pair of events may together form a step of the attack, which generally also includes other steps.
The use of the random forest classifiers 412 to process the attributes of various pairs of events 404, 406, 408 is shown schematically in Figure 4 by arrows from pairs of events to be processed to the respective random forest classifier 412 used to process a given pair of events. By processing the pairs of events using the random forest classifiers 412, each pair of events can be classified as attack-related or not.
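A minimal sketch of this per-pair-type dispatch, assuming trained scikit-learn models and the pair_type and attribute_values helpers sketched earlier (the dictionary shapes are assumptions).

```python
def classify_pair(pair, classifiers, attribute_sets):
    """Route a pair of events to the random forest classifier for its pair
    type and return True if the pair is classified as attack-related.
    classifiers and attribute_sets map pair type indicators to a fitted
    model and to a set of attribute identifiers, respectively."""
    ptype = pair_type(*pair)
    if ptype is None or ptype not in classifiers:
        return False  # not a predetermined pair type: treated as innocuous
    values = attribute_values(pair, attribute_sets[ptype])
    features = [values[a] for a in sorted(values)]  # fixed feature order
    # scikit-learn predict() expects a 2-D array: one row per sample.
    return bool(classifiers[ptype].predict([features])[0])
```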
By using different random forest classifiers 412 for different respective pair types, the random forest classifiers 412 can be trained more appropriately to identify a given pair type. This approach may be used where the sets of attributes associated with each of a plurality of predetermined pair types are relatively different from each other. For example, an attribute for use in classifying pairs of events of one pair type (e.g. a packet size) may not be available to classify pairs of events of a different pair type. Hence, each random forest classifier 412 can be trained to receive different sets of attributes as inputs, to classify pairs of events of different respective pair types. The random forest classifiers 412 are hence adapted for the respective pair type they are trained to classify, improving the performance of the random forest classifiers 412.
To train the random forest classifier for a given pair type, attribute values of respective attributes of a pair of events of that pair type and the correct classification for the pair of events (i.e. whether the pair of events is attack-related or not) are input to the random forest classifier. As explained above, the attributes used for a given pair type may be obtained based on the pair type, e.g. from a knowledge base. The decisions of each decision tree (e.g. whether an attribute value of an attribute is within a particular range) are iteratively updated during the training process, to improve the accuracy with which the random forest classifier is able to classify input pairs of events as attack-related or not. The classification can be indicated in various ways, e.g. as a 1 to indicate that a pair of events is attack-related and a 0 to indicate that the pair of events is not attack-related. The training data used to train the random forest classifier for a given pair type may include historical system logs for a previous attack and/or synthetic data representing artificially generated events. The events used to train the random forest classifier may be paired and labelled as being attack-related or not by cyber security experts.
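Training one such classifier could, for example, use scikit-learn's RandomForestClassifier; the hyperparameter choices below are illustrative defaults, not values taken from the patent.

```python
from sklearn.ensemble import RandomForestClassifier

def train_pair_type_classifier(feature_rows, labels, n_trees=100):
    """Train the random forest for one pair type. feature_rows holds one
    attribute-value vector per labelled event pair of that pair type;
    labels[i] is 1 if pair i is attack-related and 0 otherwise."""
    clf = RandomForestClassifier(
        n_estimators=n_trees,  # number of decision trees in the forest
        max_features="sqrt",   # each split considers a random subset of features
        bootstrap=True,        # each tree trains on a random sample of the data
    )
    clf.fit(feature_rows, labels)
    return clf
```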
Figure 5 is a schematic diagram showing an example 500 of the identification of attack-related pairs of events using a single random forest classifier 512. Figure 5 shows the same events 504, 506, 508 as Figure 4, but with a single random forest classifier 512 used to classify pairs of events as attack-related or not, rather than the four random forest classifiers 412a-412d of Figure 4. Features of Figure 5 that are similar to or the same as corresponding features of Figure 4 are labelled with the same reference numerals but incremented by 100.
In Figure 5, the pairs of events are classified using the random forest classifier 512, even though some of them are of different respective pair types. This obviates the need to train a plurality of separate classifiers. The approach of Figure 5 may be used for example where the difference between various pair types of a plurality of predetermined pair types is relatively small, e.g. where the sets of attributes obtained for classifying each of the pair types is relatively similar or the same. If this is the case, the sets of attributes can be consolidated into a single set of attributes and a single random forest classifier 512 can be trained to classify pairs of events of various different pair types using attribute values of a consolidated set of attributes as inputs.
In some examples, the consolidated set of attributes is obtained by first obtaining the sets of attributes for each of the plurality of predetermined pair types. The attributes in common between the various sets of attributes are identified, to remove duplicate attributes from the consolidated set of attributes (which may be considered to be a consolidated features set). The consolidated set of attributes can be stored in a knowledge base and used to extract attribute values for pairs of events for use in training the random forest classifier 512 or for extracting attribute values to be supplied to the trained random forest classifier 512 to classify pairs of events as attack-related or not (i.e. during use of the trained random forest classifier 512 for classification). In general, most of the attribute values will be true or false (e.g. expressed as a 1 or a 0). Where the attribute value for a given attribute is unavailable for a particular pair of events (e.g. if that pair of events is of a pair type that does not have the given attribute), a predetermined value may be assigned to the attribute value (e.g. a 2). In this way, it can be indicated that the attribute value for the given attribute is unavailable for that pair of events.
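A sketch of building the consolidated set of attributes and a fixed-length feature vector with the predetermined sentinel value, reusing the helpers above; the sentinel value 2 follows the example in the preceding paragraph.

```python
MISSING = 2  # predetermined value marking an attribute as unavailable

def consolidated_attributes(attribute_sets):
    """Union of the per-pair-type attribute sets, with duplicates removed."""
    return sorted({a for attrs in attribute_sets.values() for a in attrs})

def feature_vector(pair, consolidated, attribute_sets):
    """Build a fixed-length input vector for the single classifier, filling
    attributes undefined for this pair type with the sentinel value."""
    values = attribute_values(pair, attribute_sets.get(pair_type(*pair), ()))
    return [values.get(a, MISSING) for a in consolidated]
```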
Figures 6a to 6d are schematic diagrams showing an example 600 of identifying a sequence of events that formed part of a successful attack on a computer system. Figures 6a to 6d show a plurality of events 604, 606 that occurred during a time period in which a successful attack on a computer system occurred. Some of the events formed part of the successful attack and others did not form part of the successful attack (e.g. they were innocuous or they formed part of an unsuccessful attack on the computer system). The computer system in this case includes two security systems 602a, 602b (collectively referred to with the reference numeral 602). The plurality of events 604, 606 include events 604a, 604b that occurred in the first security system 602a (shown in Figures 6a to 6d as E1 and E2) and events 606a-606c that occurred in the second security system 602b (shown in Figures 6a to 6d as E3 to E5). The events 604, 606 occurred over a time period from an initial time, shown as t = 0 with respect to a time axis 610 in Figures 6a to 6d.
A respective set of attributes for each of a plurality of predetermined pair types is obtained, in this case from a knowledge base. Pairs of events of the events 604, 606 that are one of the predetermined pair types are identified. In this example, the events in each pair are arranged in chronological order, as the order in which events occurred can provide an indication as to whether the events were malicious or otherwise.
In Figure 6a, pairs of events of three different pair types are identified (with the arrows between events indicating the time order in which the events occurred, i.e. the arrow in E1 -> E2 indicates that the event E1 occurred before the event E2):
* Type 1 pairs (both events associated with the first security system 602a): {E1 -> E2}
* Type 2 pairs (both events associated with the second security system 602b): {E3 -> E4}, {E4 -> E5}, {E3 -> E5}
* Type 4 pairs (one event associated with the first security system 602a and one event associated with the second security system 602b): {E1 -> E4}, {E1 -> E5}, {E2 -> E5}, {E3 -> E1}, {E3 -> E2}, {E4 -> E2}

These event pairs are indicated schematically in Figure 6b, which shows an arrow between events of each of these pairs. The direction of the arrow indicates the time order in which the events occurred. For example, the arrow from E3 to E1 in Figure 6b indicates that E3 occurred before E1.
After identifying the pairs of events, the attribute values for the set of attributes for the pairs of events of each pair type are obtained, e.g. as described with reference to Figures 3a to 3c. The attribute values for each pair of events are then input either into a random forest classifier for the pair type for the respective pair of events (e.g. as described with reference to Figure 4) or to a single random forest classifier (e.g. as described with reference to Figure 5).
The random forest classifier used to classify a given pair of events outputs a classification indicating that the given pair of events is an attack-related pair of events or otherwise. For example, the random forest classifier may indicate that the pair of events is either attack-related or that the pair of events is an unrelated pair of events, unrelated to the attack. It is to be appreciated that a pair of events may be categorised as attack-related even if it did not form part of the successful attack. For example, a pair of events that occurred contemporaneously with the successful attack and was performed by the attacker or in an attempt to attack the computer system may be categorised as attack-related even if the pair of events was unsuccessful in achieving an attack objective. If a pair of events is identified as attack-related, this indicates that the events of the pair of events could have occurred in the sequence in which they occurred to form part of the attack, i.e. that a first one of the events may be a precursor to a second, subsequent one of the events.
If a pair of events is identified as being attack-related, it is retained for further processing, to identify whether it formed part of the sequence of events of the successful attack. Otherwise, the pair of events is discarded from the plurality of events that are further processed to identify the sequence of events of the successful attack.
Figure 6c illustrates the pairs of events retained after identifying the attack-related pairs, which in this case are the following pairs: {E1 -> E2}, {E4 -> E5}, {E1 -> E4}, {E1 -> E5}, {E3 -> E2}. The remaining pairs of events were identified as being unrelated to the attack and were therefore discarded.
The attack-related pairs of events are then processed to identify a sequence of events 614 of the plurality of events 604, 606 that formed part of the attack. The sequence of events 614 may be considered to be an attack flow of the attack and is shown in Figure 6d.
The approach of this example involves constructing possible attack flows (which may be referred to as potential sequences of events) using information derived from the attack-related pairs of events. A so-called backward tracking method is used to link the attack-related pairs of events together, backwards in time, and to identify the possible attack flows. This involves identifying a plurality of potential sequences of events. In this case, the respective events of each of the potential sequences are identified in reverse chronological order, i.e. from the most recent event backwards in time. Each potential sequence of events includes a time-ordered series of events from an initial event to a final event, and each event between the initial event and the final event is in an attack-related pair with a previous event and is in a further attack-related pair with a subsequent event.
In some cases, the backwards tracking method involves starting with the most recent event from the logs of the first and second security systems 602a, 602b and working backwards to identify the most recent event that is in an attack-related pair. This event may be taken as the final event in a potential sequence of events. In other cases, the final event is identified based on properties of that event such as a semantic description or tag associated with that event. For example, the final event may be taken as the event that represents the culmination of the attack (i.e. that corresponds to the successful performance of the attack), such as an event representing a distributed denial-of-service (DDoS) attack or a data breach.
After identifying the final event (which in the example of Figure 6c is E5), the other event(s) in an attack-related pair with the final event is identified (in this case, E4 and El). The same backwards tracking process is then performed on the event(s) in the attack-related pair with the final event (in this case, E4 and El) to find their own preceding events. In this example, applying backwards tracking to the event E4 leads to the identification of El, while there is no predecessor to the event El.
This backwards tracking process is performed repeatedly until the start of the event data (in this case, the start of the logs) has been reached or until no preceding event can be identified for a potential sequence of events. At this point, the backwards tracking process is restarted from the most recent event in the plurality of events, but with the events that already form part of a potential sequence of events omitted from the backwards tracking. Hence, in this case, the backwards tracking process is performed again starting from E2, and omitting the events E5 and E4 (which form part of the previously identified potential sequence of events).
Once the backwards tracking process has been performed for each of the plurality of events (which in this case are events that have been identified as belonging to an attack-related pair of events), a sequence of events that may be taken as forming part of the attack is selected from the plurality of potential sequences.
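By way of illustration only, the backwards tracking described above might be sketched as follows in Python, assuming the attack-related pairs are provided as (earlier event, later event) tuples together with a list of events in reverse chronological order; the event names match the example of Figure 6c:

```python
# A minimal sketch (not the patented implementation itself) of backwards
# tracking over attack-related pairs. Events already used in a potential
# sequence are only skipped as starting points, mirroring the restart
# from E2 described above.
def backward_track(pairs, events_newest_first):
    """Build potential sequences by walking attack-related pairs backwards."""
    predecessors = {}
    for earlier, later in pairs:
        predecessors.setdefault(later, []).append(earlier)

    sequences = []
    used = set()
    for final_event in events_newest_first:
        if final_event in used or final_event not in predecessors:
            continue
        # Walk backwards from the final event until no predecessor remains.
        frontier = [[final_event]]
        while frontier:
            sequence = frontier.pop()
            preceding = predecessors.get(sequence[0], [])
            if not preceding:
                sequences.append(sequence)
            else:
                for event in preceding:
                    frontier.append([event] + sequence)
        for sequence in sequences:
            used.update(sequence)
    return sequences

pairs = [("E1", "E2"), ("E4", "E5"), ("E1", "E4"), ("E1", "E5"), ("E3", "E2")]
print(backward_track(pairs, ["E5", "E4", "E3", "E2", "E1"]))
# [['E1', 'E5'], ['E1', 'E4', 'E5'], ['E3', 'E2'], ['E1', 'E2']]
```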
From the attack-related pairs shown in Figure 6c (with each pair indicated using an arrow), four potential sequences of events are identified (where the arrow between events indicates the time ordering of the events, i.e. E1 -> E2 indicates that E1 occurred before E2):
* potential sequence 1: {E1 -> E2}
* potential sequence 2: {E1 -> E4 -> E5}
* potential sequence 3: {E1 -> E5}
* potential sequence 4: {E3 -> E2}
The potential sequence that is selected as the one that is considered to form part of the attack may be selected based on at least one property of the sequence of events. In Figure 6c, the property is a time-based property. In this case, the potential sequence that spans the longest time period is taken as the sequence of events of the attack. This is because an attack is more likely to have included events over a relatively long time period, as this typically reduces the likelihood of detection of the attack. This is merely an example, though, and other properties may be used in other examples, alone or in combination with the time-based property, e.g. the severity of event(s) of each of the potential sequences of events, such as the final event. The potential sequence of events may also be selected as that which included a particular event identified as representing the culmination of the attack objective (i.e. the event indicating that the attack was successful).
In the example of Figure 6, potential sequence 2 is selected as the sequence of events 614, because it spans the longest time period. This is shown schematically in Figure 6d. Potential sequence 4 may also be considered to be another, e.g. independent, attack flow since it is separate from the sequence of events 614 of the main attack. In Figure 6, the final event of potential sequence 2 had more severe consequences than the final event of potential sequence 4. Hence, as potential sequence 2 spanned a longer time period and was more severe than potential sequence 4, potential sequence 2 is considered to correspond to the main attack. However, in a different example, potential sequence 4 may have been selected as the main attack (despite occurring over a shorter time period than potential sequence 2) if the severity of potential sequence 4 was higher than that of potential sequence 2.
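By way of illustration only, the time-based selection described above might be sketched as follows, with hypothetical timestamps for the events E1 to E5:

```python
# A minimal sketch of selecting the attack flow as the potential sequence
# spanning the longest time period; the timestamps below are hypothetical.
from datetime import datetime

timestamps = {
    "E1": datetime(2020, 3, 1, 9, 0),
    "E2": datetime(2020, 3, 1, 9, 30),
    "E3": datetime(2020, 3, 1, 9, 15),
    "E4": datetime(2020, 3, 3, 14, 0),
    "E5": datetime(2020, 3, 5, 18, 45),
}

def time_span(sequence):
    """Duration between the initial and final event of a potential sequence."""
    return timestamps[sequence[-1]] - timestamps[sequence[0]]

potential_sequences = [
    ["E1", "E2"],        # potential sequence 1
    ["E1", "E4", "E5"],  # potential sequence 2
    ["E1", "E5"],        # potential sequence 3
    ["E3", "E2"],        # potential sequence 4
]
attack_flow = max(potential_sequences, key=time_span)
print(attack_flow)  # ['E1', 'E4', 'E5'], i.e. potential sequence 2
# (potential sequence 3 spans the same period; max returns the first such
# sequence, so a secondary criterion such as severity could break the tie)
```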
The approach of Figures 6a to 6d allows the sequence of events 614 of a successful attack on a computer system to be identified in an efficient manner, e.g. with reduced input from a cyber security expert. By identifying the sequence of events 614 of the successful attack, at least one property of the computer system to be modified to reduce the vulnerability of the computer system to a further attack is identified. For example, by analysing the properties of the events of the sequence of events 614, it can be determined that at least one of the events should have been detected and prevented to avoid the successful completion of the attack. An appropriate property of the existing security systems to detect and mitigate this event in future can then be identified (or a new security system that is configured to detect and prevent this event can be included in the computer system). For example, if the sequence of events 614 included a phishing email alert, it may be determined that a system for detecting phishing attempts should be improved (or a new system to provide such detection should be included) in the computer system, to reduce the risk of successful phishing attempts in future.
The at least one property can then be modified appropriately, e.g. by modifying an existing security system or including a new security system, to improve the security of the computer system in the future. Hence, by analysing previous attacks on a computer system as described herein, weaknesses or flaws in existing security systems of the computer system can be identified. These vulnerabilities can then be fixed or otherwise compensated for. The further attack may be the same as or similar to the successful attack that occurred previously, or it may be a different type of attack that nevertheless relies on a vulnerability exploited during the successful attack.
However, by reducing this vulnerability of the computer system, the further attack may be successfully prevented or the severity of the consequences of the further attack may be reduced.
In some cases, at least one event that formed part of an unsuccessful attack on the computer system is identified, as shown in Figures 7 and 8. The at least one event that formed part of the unsuccessful attack is correlated with the successful attack in that it occurred during a time period associated with the successful attack and was performed as part of the same attack process. In this way, the events of both the successful attack and the unsuccessful attack are considered attack-related. Although the computer system in which the successful attack occurred was sufficiently secure to prevent the unsuccessful attack from achieving its attack objective, by identifying the event(s) of the unsuccessful attack, a greater understanding of potential vulnerabilities in other computer systems can be obtained. In some cases, the unsuccessful attack is used to identify at least one vulnerability of a second computer system, different from the computer system in which the successful attack occurred (which may be considered a first computer system). At least one property of the second computer system to be modified to reduce the at least one vulnerability can then be determined. The at least one property of the second computer system can then be modified appropriately, to improve the security of the second computer system.
For example, if the first computer system has a robust anti-malware system, an attempt to install malware on the first computer system may be detected and prevented by the anti-malware system. The detection of the attempted installation of the malware in the first computer system causes a malware alert to be raised by the anti-malware system. The malware alert is an event forming part of an unsuccessful attack on the first computer system in this instance.
By determining that an (unsuccessful) attack strategy attempted by an attacker included an attempt to install malware, the security system(s) of the second computer system can be analysed to identify whether they would have been able to detect a similar attempt to install malware. If it is identified that the security system(s) of the second computer system are vulnerable to this attack strategy, e.g. if the second computer system lacks an anti-malware system or if the anti-malware system is out of date or is not sophisticated enough to correctly identify that these actions indicate an attempt to install malware, the security system(s) can be modified appropriately to reduce or remove this vulnerability.
Figure 7 shows schematically an example 700 of identifying at least one event that formed part of an unsuccessful attack on a computer system which underwent a successful attack. The attack represented in the example 700 of Figure 7 is the same as that in the example 600 of Figures 6a to 6d. Features of Figure 7 that are the same as corresponding features of Figures 6a to 6d are labelled with the same reference numerals but incremented by 100; corresponding descriptions are to be taken to apply.
A sequence of events 714 (formed of the events E1 704a, E4 706b and E5 706c) that formed part of the successful attack is shown schematically in Figure 7. This sequence of events 714 may be identified using the methods described herein, e.g. those described with reference to Figures 6a to 6d, or using a different method (e.g. by a cyber security expert analysing the events 704, 706 of the first and second security systems 702a, 702b of the computer system).
In Figure 7, the sequence of events 714 has been identified using the methods described herein.
Each of the events 704, 706 shown in Figure 7 is in an attack-related pair of events, which can be identified e.g. using at least one random forest classifier as explained with reference to Figures 4 and 5. At least one event (in this case, two events: E1 704a and E2 704b) that formed part of the unsuccessful attack are identified from the attack-related pairs of events (which are the same as the attack-related pairs indicated by the arrows in Figure 6c).
In the example 700 of Figure 7, the events of the unsuccessful attack are identified from the attack-related events using a so-called forward tracking method. This involves identifying a first event which is in the sequence of events 714 (in this case, event E1). In this case, the first event is chronologically first in the sequence of events 714, i.e. it occurred before the other events of the sequence of events 714. The attack-related pairs are then used to track successive events that are not part of the sequence of events 714 of the successful attack. This for example involves identifying a first event that is in the sequence of events 714 and which is in an attack-related pair with a second event (in this case, event E2) which is subsequent to the first event (event E1) but which does not form part of the sequence of events 714. This therefore generates a time-ordered series of events 716 of the unsuccessful attack.
The same approach is applied to track subsequent events from the second event until no further event that is within an attack-related pair with an event of the time-ordered series of events 716 is identified. In other words, events are iteratively added to the time-ordered series of events 716 in chronological order (i.e. from the least recent event to the most recent event). Each event of the time-ordered series of events is in an attack-related pair with a previous event and is in a further attack-related pair with a subsequent event.
In the example 700 of Figure 7, the second event (event E2) is not in an attack-related pair with a subsequent event so the time-ordered series of events 716 of the unsuccessful attack is formed of the first event (event E1) and the second event (event E2). In other cases, though, the second event is in an attack-related pair with a subsequent event, which itself may be in an attack-related pair with a further subsequent event and so on, so as to create a chain of events (in time order), each linked by being in an attack-related pair with a previous and a subsequent event. The forward tracking in such cases therefore identifies a time-ordered sequence of events from the first event to a third event (which is e.g. the most recent event of the time-ordered sequence of events that is not in an attack-related pair with a more recent event).
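By way of illustration only, the forward tracking described above might be sketched as follows, reusing the attack-related pairs of the example; where several subsequent events qualify, this sketch simply follows the first branch:

```python
# A minimal sketch of forward tracking over attack-related pairs, starting
# from an event of the successful attack flow and following events that are
# not part of that flow.
def forward_track(start_event, pairs, attack_flow):
    """Track a time-ordered series of events of the unsuccessful attack."""
    successors = {}
    for earlier, later in pairs:
        successors.setdefault(earlier, []).append(later)

    series = [start_event]
    current = start_event
    while True:
        candidates = [event for event in successors.get(current, [])
                      if event not in attack_flow and event not in series]
        if not candidates:
            break
        current = candidates[0]  # follow one branch; repeat per branch if more
        series.append(current)
    return series

pairs = [("E1", "E2"), ("E4", "E5"), ("E1", "E4"), ("E1", "E5"), ("E3", "E2")]
print(forward_track("E1", pairs, attack_flow={"E1", "E4", "E5"}))
# ['E1', 'E2'] -- the time-ordered series of events 716
```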
In this case, the events of the time-ordered series of events 716 of the unsuccessful attack other than the first event (event E1) are not in the sequence of events 714 of the successful attack. However, in other cases, at least one other event of the time-ordered series of events 716 of the unsuccessful attack may form part of the successful attack. In such cases, the time-ordered series of events 716 also includes at least one event that does not form part of the successful attack, e.g. at least one event that caused the unsuccessful attack to be prevented by the security system(s) of the computer system.
It is to be appreciated that a similar forward tracking method may be used to identify events of the sequence of events 714 of the successful attack. However, this is not needed if the sequence of events 714 has already been identified previously.
Figure 8 shows schematically a further example 800 of identifying at least one event that formed part of an unsuccessful attack on a computer system which underwent a successful attack. The attack represented in the example 800 of Figure 8 is the same as that in the examples 600, 700 of Figures 6a to 6d, and Figure 7. The further example 800 of Figure 8 may be performed instead of the example 700 of Figure 7 to identify the at least one event that formed part of the unsuccessful attack, or in addition to the example 700 of Figure 7 to identify further events that formed part of the unsuccessful attack. Features of Figure 8 that are the same as corresponding features of Figure 7 are labelled with the same reference numerals but incremented by 100; corresponding
descriptions are to be taken to apply.
In the example of Figure 7, the forward tracking method started from events included in the sequence of events 714 of the successful attack. In Figure 8, the identification of event(s) that form part of the unsuccessful attack is extended to other events that do not form part of the sequence of events 814 of the successful attack. The approach of Figure 8 involves identifying, from the attack-related pairs of events (which are the same as those illustrated by the arrows in Figure 6c), an event not forming part of the sequence of events 814, which may be referred to as a fourth event. The fourth event is identified based on a comparison between a value of an attribute of the fourth event and a value of the same attribute for an event of the sequence of events 814. In other words, the example 800 of Figure 8 involves identifying event(s) of the unsuccessful attack based on a comparison between attribute values for those event(s) and corresponding attribute values for event(s) of the sequence of events 814 of the successful attack.
The example 800 of Figure 8 involves identifying a fourth event for which the value of the attribute is the same as the value of the attribute of an event of the sequence of events 814. In this way, events which share characteristics with the sequence of events 814 (but are not themselves part of the sequence of events 814) can be identified. In other cases, though, events that do not have exactly the same characteristics but are nevertheless identified to be sufficiently similar in nature to an event of the sequence of events 814 (based on their attribute values) may be identified as forming part of the unsuccessful attack. Identifying events with similar characteristics to those of the successful attack allows further information about the activities of the attacker during the attack to be obtained. This information can be used to further improve the security of a computer system (such as a different computer system which would otherwise be vulnerable to similar activities).
The attribute or attributes used for identifying these related events can be obtained from a knowledge base, which may be similar to or the same as the knowledge base described with reference to Figures 3a to 3c. Examples of such attributes include: a public source IP address associated with an event, a source network domain associated with an event, an email address of the sender of an email (where the event is detection of receipt of a phishing email), and a malicious domain included in a phishing email (where the event is detection of receipt of a phishing email).
These attributes are typically event-based rather than event pair-based, i.e. they capture a feature of a single event only. Nevertheless, these attributes may be otherwise similar to or the same as the event-based features used in identifying whether a pair of events is an attack-related pair.
The attribute(s) for identifying these event(s) of the unsuccessful attack flow can be retrieved from the knowledge base in various ways. For example, the attribute(s) used in identifying whether a given event of the attack-related pairs is part of the unsuccessful attack (e.g. based on a correlation with an event of the successful attack as determined by attribute values of the two events) can be obtained from the knowledge base based on the event type of the event of the successful attack. For example, if the event is detection of receipt of a phishing email, the attributes may include an email address of the sender of an email and/or a malicious domain included in a phishing email.
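By way of illustration only, such a lookup might be sketched as follows; the event types and attribute names are hypothetical placeholders:

```python
# A hypothetical sketch of retrieving comparison attributes from a knowledge
# base keyed on event type; the entries below are illustrative only.
KNOWLEDGE_BASE = {
    "phishing_email_detected": ["sender_email_address", "malicious_domain"],
    "malware_alert": ["source_ip", "file_hash"],
}

def attributes_for(event_type):
    """Return the attributes to compare for a given event type."""
    return KNOWLEDGE_BASE.get(event_type, ["source_ip"])  # fallback attribute

print(attributes_for("phishing_email_detected"))
# ['sender_email_address', 'malicious_domain']
```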
In other cases, shared attributes of the sequence of events 814 of the successful attack as a whole may be obtained and used to determine the attribute(s) for use in identifying events of the unsuccessful attack that are correlated with an event of the successful attack. For example, if each (or a subset, such as a majority) of the events of the sequence of events 814 has a value of an attribute in common (e.g. the same source IP address), this attribute (the source IP address) may be used to identify at least one event that does not form part of the sequence of events 814 but which is attack-related and has the same value of this attribute.
A respective value of each of the attribute(s) can be obtained from the event data for a given event in the sequence of events 814, e.g. as described with reference to Figures 3a to 3c. These values can then be compared to corresponding values for each of the other events in the attack-related pairs of events that are not part of the sequence of events 814. In this way, events of the attack-related pairs that do not form part of the successful attack but which nevertheless have similar characteristics to an event in the sequence of events 814 can be identified.
In the example 800 of Figure 8, the event E3 is identified as having the same attribute value as one of the events of the sequence of events 814 (in this case, the same source IP address). The event E3 is therefore considered to be correlated with the successful attack, and to form part of an unsuccessful attack.
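By way of illustration only, this attribute comparison might be sketched as follows; the source IP addresses below are hypothetical placeholders:

```python
# A minimal sketch of identifying events correlated with the attack flow by
# a shared attribute value; the attribute values are hypothetical.
event_attributes = {
    "E1": {"source_ip": "203.0.113.7"},
    "E2": {"source_ip": "198.51.100.23"},
    "E3": {"source_ip": "203.0.113.7"},  # same value as the attack flow events
    "E4": {"source_ip": "203.0.113.7"},
    "E5": {"source_ip": "203.0.113.7"},
}

def correlated_events(attack_flow, candidates, attribute):
    """Events outside the attack flow sharing an attribute value with it."""
    flow_values = {event_attributes[event][attribute] for event in attack_flow}
    return [event for event in candidates
            if event not in attack_flow
            and event_attributes[event][attribute] in flow_values]

attack_flow = ["E1", "E4", "E5"]
candidates = ["E1", "E2", "E3", "E4", "E5"]  # events in attack-related pairs
print(correlated_events(attack_flow, candidates, "source_ip"))  # ['E3']
```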
The forward tracking method described with reference to Figure 7 can then be applied starting from each of the events identified as being correlated with an event of the successful attack. In this way, a further time-ordered series of events 818 of the unsuccessful attack, from the fourth event to a fifth event (e.g. a final event in the further time-ordered series of events in chronological order) can be identified. An understanding of the activities that occurred during the attack can therefore be further improved.
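Continuing the forward tracking sketch given after the discussion of Figure 7 (and reusing its forward_track function and pairs), applying it from the correlated event E3 would, under the same assumptions, yield the further series:

```python
# Reusing forward_track and pairs from the earlier sketch:
print(forward_track("E3", pairs, attack_flow={"E1", "E4", "E5"}))
# ['E3', 'E2'] -- the further time-ordered series of events 818
```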
As for the time-ordered series of events 716 of the example 700 of Figure 7, each event between the fourth event and the fifth event is in an attack-related pair with a previous event and is in a further attack-related pair with a subsequent event. In Figure 8, the fourth event (event E3) is in an attack-related pair with a fifth event (event E2). There are no intervening events in this example, and no further events after the fifth event, as the fifth event is the most recent event which is not in an attack-related pair with a subsequent event. In this case, the events of the further time-ordered series of events 818 are identified in chronological order. The events of the further time-ordered series of events 818 are not in the sequence of events 814 in this case, but in other cases the further time-ordered series of events 818 may include at least one event in common with the sequence of events 814.
This approach can be repeated for each event that is not within the sequence of events 814 but which is nevertheless identified as correlated with an event of the sequence of events 814 (e.g. on the basis of a comparison between values of respective attributes). A plurality of further time-ordered series of events of the unsuccessful attack can therefore be identified, to gain a fuller picture of the malicious activities performed by the attacker during the course of the attack.
Figure 9 illustrates an example 900 of an attack story, which in this case is formed by combining the sequence of events 914 of the successful attack with the time-ordered series of events 916 and the further time-ordered series of events 918 of the unsuccessful attack. The attack represented in the example 900 of Figure 9 is the same as that in the examples 600, 700, 800 of Figures 6a to 6d, Figure 7 and Figure 8. Features of Figure 9 that are the same as corresponding features of Figure 7 are labelled with the same reference numerals but incremented by 200 and features of Figure 9 that are the same as corresponding features of Figure 8 are labelled with the same reference numerals but incremented by 100; corresponding descriptions are to be taken to apply.
By creating an attack story such as that shown in Figure 9, a more holistic view of the attack can be obtained. The attack story includes the sequence of events 914 (which may be referred to as the attack flow). The attack flow represents the successful path of the attacker in reaching a final attack objective. The attack story also represents other activities that can be directly or indirectly attributed to the same attacker and/or to the same attack, including those series of events 916, 918 of the unsuccessful attack. Based on the attack story, the computer system that underwent the attack can be fortified where appropriate. Furthermore, the security of a different computer system can be improved based on an understanding of the activities of the attack.
In some cases, the successful attack that was performed on a first computer system, e.g. as identified using the methods herein, can be used to identify at least one further vulnerability of a second computer system. For example, from the attack story shown in Figure 9, a first vulnerability of the second computer system can be identified based on the successful attack on the first computer system and a second vulnerability of the second computer system can be identified based on the unsuccessful attack. By identifying further vulnerabilities of the second computer system, a greater understanding of the points of weakness of the second computer system can be achieved.
At least one further property of the second computer system to be modified for reducing the at least one further vulnerability of the second computer system can be determined. The determination of the at least one further property of the second computer system may be performed similarly to the determination of the at least one property of the second computer system but to address the second vulnerability rather than the first vulnerability. The at least one further property can then be modified appropriately, further improving the security of the second computer system. For example, the second computer system can be modified so it is less vulnerable to an attack of the same type as the successful attack on the first computer system. In this way, an improved understanding of the successful attack on the first computer system can be leveraged to enhance the security of the second computer system, which is different from the first computer system.
Figure 10 is a schematic diagram of a system 1000 that has undergone an attack, which included a successful attack and an unsuccessful attack. The system 1000 includes a first computer system 1002 that was attacked as well as a second computer system 1004 and a further computer system 1006. In Figure 10, the computer systems 1002, 1004, 1006 are each associated with the same entity (e.g. a business or other organisation), but this need not be the case in other examples. For example, the further computer system 1006 may be associated with a different entity than the first and second computer systems 1002, 1004, such as a cyber security entity or other entity that is dedicated to analysing and understanding prior cyber-attacks.
The system 1000 further includes first storage 1008 associated with the first computer system 1002, second storage 1010 associated with the second computer system 1004 and further storage 1012 associated with the further computer system 1006. Each of the storages 1008, 1010, 1012 may be or include any suitable apparatus, device or system for storing electronic data, such as cloud storage, which may be distributed across multiple servers. In the system 1000 of Figure 10, the storages 1008, 1010, 1012 are shown as remote from the computer systems 1002, 1004, 1006 but in other cases at least one of the storages may be integrated with the computer system associated with the storage.
The computer systems 1002, 1004, 1006 and the storages 1008, 1010, 1012 can communicate with each other via a network 1014. The network 1014 may be a single network or may comprise a plurality of networks. The network 1014 may be or include a wide area network (WAN), a local area network (LAN) and/or the Internet.
In the system 1000 of Figure 10, the first computer system 1002 was attacked by a malicious party. Event data representing events that occurred in the first computer system 1002 within the time period during which the attack occurred (e.g. in the form of logs from security systems of the first computer system 1002) is stored in the first storage 1008. For example, the event data for a given event may be generated by the first computer system 1002 and then transferred to the first storage 1008 via the network 1014 for subsequent storage.
The second computer system 1004 stores event data representing events that have occurred in the second computer system 1004 (e.g. events identified by respective security systems of the second computer system 1004) in the second storage 1010 in a similar way. However, in this case, the event data for the second computer system 1004 is not used until the second computer system 1004 has undergone a successful attack (at which point it may be analysed similarly to the event data for the first computer system 1002 to identify an attack story of the attack on the second computer system 1004).
After it has been identified that the first computer system 1002 has undergone an attack, the event data is obtained from the first storage 1008 by the further computer system 1006 for processing. The further computer system 1006 processes the event data as described in the methods herein to identify an attack story of the attack. In examples in which the attributes for identifying attack-related pairs of events and/or for identifying events that formed part of an unsuccessful attack are obtained from a knowledge base, the knowledge base may be stored in the further storage 1012 associated with the further computer system 1006. In other cases, though, the knowledge base may be stored in yet further storage, which is nevertheless accessible to the further computer system 1006. After the attack story has been identified, data representing the attack story is then transferred from the further computer system 1006 to the further storage 1012, to be stored for future use if desired.
By processing the event data, the further computer system 1006 in this example identifies at least one property of the first computer system 1002 to be modified to reduce a vulnerability of the first computer system 1002 to a further attack. This property may be communicated to a user, e.g. by displaying a suitable message using a display device of the further computer system 1006. Alternatively or additionally, this property and/or instructions to modify this property to reduce the vulnerability may be communicated to the first computer system 1002 via the network 1014. The security of the first computer system 1002 can then be improved by appropriate modification of this property.
In the example of Figure 10, the further computer system 1006 also uses the attack story to identify at least one property of the second computer system 1004 to be modified to reduce a vulnerability of the second computer system 1004 to an attack, e.g. an attack similar to or the same as the unsuccessful attack and/or the successful attack on the first computer system 1002. As described for the first computer system 1002, the at least one property of the second computer system 1004 may be notified to a user of the further and/or second computer systems 1006, 1004 and/or to the second computer system 1004 itself. The at least one property of the second computer system 1004 can then be modified appropriately, e.g. by appropriate configuration by a cyber security expert or based on instructions received from the further computer system 1006.
Figure 11 is a schematic diagram of internal components of a computer system 1100 that may be used to implement any of the methods described herein. For example, the computer system 1100 may be used as any of the computer systems 1002, 1004, 1006 of Figure 10. The computer system 1100 in Figure 11 is implemented as a single computer device but in other cases a similar computer system may be implemented as a distributed system.
The computer system 1100 includes storage 1102 which may be or include volatile or non-volatile memory, read-only memory (ROM), or random access memory (RAM). The storage 1102 may additionally or alternatively include a storage device, which may be removable from or integrated within the computer system 1100. For example, the storage 1102 may include a hard disk drive (which may be an external hard disk drive such as a solid state disk) or a flash drive. The storage 1102 is arranged to store data, temporarily or indefinitely. The storage 1102 may be referred to as memory, which is to be understood to refer to a single memory or multiple memories operably connected to one another.
The storage 1102 may be or include a non-transitory computer-readable medium. A non-transitory computer-readable storage medium includes, but is not limited to, volatile memory, non-volatile memory, magnetic and optical storage devices such as disk drives, magnetic tape, compact discs (CDs), digital versatile discs (DVDs), or other media that are capable of storing code and/or data.
In Figure 11, the storage 1102 is arranged to store event data representing a plurality of events that have occurred in a computer system that has undergone an attack (which may be the computer system 1100 of Figure 11 or a different computer system).
The computer system 1100 also includes at least one processor 1104 which is configured to implement any of the methods described herein. The processor 1104 may be or comprise processor circuitry. The at least one processor 1104 is arranged to execute program instructions and process data, such as the event data. The at least one processor 1104 is for example arranged to process instructions, obtained from the storage 1102, to implement any of the methods described herein. The at least one processor 1104 may include a plurality of processing units operably connected to one another, including but not limited to a central processing unit (CPU) and/or a graphics processing unit (GPU).
The computer system 1100 further includes a network interface 1106 for connecting to a network, such as the network 1014 of Figure 10. A computer system otherwise similar to the computer system 1100 of Figure 11 may additionally include at least one further interface for connecting at least one further component, such as a display interface for connecting to a display device (which may be separable from or integral with the computer system). The components of the computer system 1100 are communicably coupled via a suitable bus 1108.
Alternatives and Modifications
In the examples above, the events are detected by a plurality of subsystems. In other examples, though, the events may be detected by a single subsystem of a computing system, e.g. a single security system.
In the examples above, the event data is in the form of event logs. However, this is merely an example and in other cases the event data may be in various other formats.
In the examples of Figures 3a to 3c, the sets of attributes for respective pair types of a plurality of predetermined pair types have been identified by a cyber security expert and stored in a knowledge base for subsequent retrieval. However, in other examples, the sets of attributes may be generated or otherwise identified in a different manner, e.g. based on analysis of attack stories of other attacks.
In the examples above, the attributes used to identify whether a given pair of events is attack-related depend on the pair type of the pair of events. However, this need not be the case in other examples. In such cases, obtaining a plurality of predetermined pair types and/or obtaining a set of attributes for each of the plurality of predetermined pair types is omitted. Instead, the same set of attributes is used to identify whether each pair of events is attack-related, irrespective of the pair type of the pair of events.
In the examples above, at least one random forest classifier is used to identify attack-related pairs of events. In other cases, though, a different machine learning classifier than a random forest classifier may be used to identify attack-related pairs of events or the attack-related pairs of events may be identified in a different manner, e.g. which does not use a machine learning classifier.
In Figure 10, the further computer system 1006 performs the methods herein to determine an attack story of an attack on the first computer system 1002. However, in other cases, the methods herein may be performed at least partly using the first computer system 1002, e.g. without sending the event data to a different computer system. This may be the case e.g. where the event data is sensitive and it is desired to limit the transfer of the event data to reduce the risk of the event data being publicly exposed.
Each feature disclosed herein, and (where appropriate) as part of the claims and drawings may be provided independently or in any appropriate combination.
Any reference numerals appearing in the claims are for illustration only and shall not limit the scope of the claims.
Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. It is intended that the specification and examples be considered as exemplary only.
In addition, where this application has listed the steps of a method or procedure in a specific order, it could be possible, or even expedient in certain circumstances, to change the order in which some steps are performed, and it is intended that the particular steps of the method or procedure claims set forth herein not be construed as being order-specific unless such order specificity is expressly stated in the claim. That is, the operations/steps may be performed in any order, unless otherwise specified, and embodiments may include additional or fewer operations/steps than those disclosed herein. It is further contemplated that executing or performing a particular operation/step before, contemporaneously with, or after another operation is in accordance with the described embodiments.
The methods and processes described herein can be partially or fully embodied in software or partially or fully embodied in hardware modules or apparatuses or firmware, so that when the hardware modules or apparatuses are activated, they perform the associated methods and processes. The methods and processes can be embodied using a combination of code, data, and hardware modules or apparatuses.
Examples of processing systems, environments, and/or configurations that may be suitable for use with the embodiments described herein include, but are not limited to, embedded computer devices, personal computers, server computers (specific or cloud (virtual) servers), hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, mobile telephones, network personal computers (PCs), minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. Hardware modules or apparatuses described in this disclosure include, but are not limited to, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), dedicated or shared processors, and/or other hardware modules or apparatuses.

Claims (23)

CLAIMS
1. A method comprising: obtaining event data representing a plurality of events that have occurred in a first computer system which has undergone a successful attack; identifying, from the plurality of events, at least one event that formed part of an unsuccessful attack on the first computer system, wherein the unsuccessful attack is correlated with the successful attack; using the unsuccessful attack to identify at least one vulnerability of a second computer system, different from the first computer system; and determining at least one property of the second computer system to be modified to reduce the at least one vulnerability.
2. The method according to claim 1, comprising: identifying, from the plurality of events, attack-related pairs of events; and identifying the at least one event from the attack-related pairs of events.
3. The method according to claim 2, comprising determining, using the attack-related pairs of events, a sequence of events of the plurality of events that formed part of the successful attack, wherein identifying the at least one event comprises identifying a first event of the plurality of events which is in the sequence of events and is in a pair of the attack-related pairs with a second event of the plurality of events, the second event not forming part of the sequence of events and subsequent to the first event, the at least one event comprising the second event.
4. The method according to claim 3, comprising identifying a time-ordered series of events of the unsuccessful attack, from the first event to a third event, wherein each event between the first event and the third event is in an attack-related pair with a previous event and is in a further attack-related pair with a subsequent event, and the at least one event comprises the events of the time-ordered series of events.
5. The method according to claim 4, wherein identifying the time-ordered series of events comprises identifying respective events of the time-ordered series of events in chronological order.
6. The method according to claim 4 or claim 5, wherein the events of the time-ordered series of events other than the first event are not in the sequence of events.
7. The method according to any one of claims 2 to 6, wherein identifying the at least one event comprises identifying, from the attack-related pairs of events, a fourth event not forming part of the sequence of events, based on a comparison between a value of an attribute of the fourth event and a value of the attribute of an event of the sequence of events, the at least one event comprising the fourth event.
8. The method according to claim 7, wherein the value of the attribute of the fourth event is the same as the value of the attribute of the event of the sequence of events.
9. The method according to claim 7 or claim 8, comprising obtaining the attribute from a knowledge base.
10. The method according to any one of claims 7 to 9, comprising identifying a further time-ordered series of events of the unsuccessful attack, from the fourth event to a fifth event, wherein each event between the fourth event and the fifth event is in an attack-related pair with a previous event and is in a further attack-related pair with a subsequent event, and the at least one event comprises the events of the further time-ordered series of events.
11. The method according to claim 10, wherein identifying the further time-ordered series of events comprises identifying respective events of the further time-ordered series of events in chronological order.
12. The method according to claim 10 or claim 11, wherein the events of the further time-ordered series of events are not in the sequence of events.
13. The method according to any one of claims 2 to 12, comprising identifying a pair type of a pair of events of the plurality of events, wherein identifying the attack-related pairs of events comprises identifying that the pair of events is an attack-related pair of events using the pair type of the pair of events.
14. The method according to any one of claims 2 to 13, comprising obtaining, for each of a or the plurality of predetermined pair types, a respective set of attributes for use in identifying whether a given pair of events of the plurality of events is an attack-related pair of events.
15. The method according to any one of claims 2 to 14, wherein identifying the attack-related pairs of events comprises identifying that a pair of events of the plurality of events is an attack-related pair of events based on processing of respective attribute values of attributes associated with the pair of events using a machine learning classifier.
16. The method according to any one of claims 1 to 15, comprising modifying the at least one property of the second computer system to reduce the at least one vulnerability.
17. The method according to any one of claims 1 to 16, comprising: using the successful attack to identify at least one further vulnerability of the second computer system; and determining at least one further property of the second computer system to be modified to reduce the at least one further vulnerability.
18. The method according to claim 17, comprising modifying the at least one further property of the second computer system to reduce the at least one further vulnerability of the second computer system.
19. The method according to claim 17 or claim 18, wherein the at least one vulnerability of the second computer comprises a vulnerability of the second computer to an attack of the same type as the successful attack.
20. A system comprising: storage for storing event data representing a plurality of events that have occurred in a first computer system which has undergone a successful attack; and at least one processor configured to: identify, from the plurality of events, at least one event that formed part of an unsuccessful attack on the first computer system, wherein the unsuccessful attack is correlated with the successful attack; use the unsuccessful attack to identify at least one vulnerability of a second computer system, different from the first computer system; and determine at least one property of the second computer system to be modified to reduce the at least one vulnerability.
21. The system according to claim 20, wherein the at least one processor is configured to: identify, from the plurality of events, attack-related pairs of events; and identify the at least one event from the attack-related pairs of events.
22. The system according to claim 20 or claim 21, wherein the at least one processor is configured to determine, using the attack-related pairs of events, a sequence of events of the plurality of events that formed part of the successful attack, wherein identifying the at least one event comprises identifying a first event of the plurality of events which is in the sequence of events and is in a pair of the attack-related pairs with a second event of the plurality of events, the second event not forming part of the sequence of events and subsequent to the first event, the at least one event comprising the second event.
23. A computer-readable medium storing thereon a program for carrying out the method of any one of claims 1 to 19.
GB2004336.0A 2020-03-25 2020-03-25 Computer vulnerability identification Pending GB2593509A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2004336.0A GB2593509A (en) 2020-03-25 2020-03-25 Computer vulnerability identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2004336.0A GB2593509A (en) 2020-03-25 2020-03-25 Computer vulnerability identification

Publications (2)

Publication Number Publication Date
GB202004336D0 GB202004336D0 (en) 2020-05-06
GB2593509A true GB2593509A (en) 2021-09-29

Family

ID=70546612

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2004336.0A Pending GB2593509A (en) 2020-03-25 2020-03-25 Computer vulnerability identification

Country Status (1)

Country Link
GB (1) GB2593509A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220237302A1 (en) * 2019-06-06 2022-07-28 Nec Corporation Rule generation apparatus, rule generation method, and computer-readable recording medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007109721A2 (en) * 2006-03-21 2007-09-27 21St Century Technologies, Inc. Tactical and strategic attack detection and prediction
EP3079336A1 (en) * 2015-04-09 2016-10-12 Accenture Global Services Limited Event correlation across heterogeneous operations
US20180248893A1 (en) * 2017-02-27 2018-08-30 Microsoft Technology Licensing, Llc Detecting Cyber Attacks by Correlating Alerts Sequences in a Cluster Environment
WO2019035120A1 (en) * 2017-08-14 2019-02-21 Cyberbit Ltd. Cyber threat detection system and method

Also Published As

Publication number Publication date
GB202004336D0 (en) 2020-05-06
