WO2019035120A1 - Cyber threat detection system and method - Google Patents

Cyber threat detection system and method

Info

Publication number
WO2019035120A1
Authority
WO
WIPO (PCT)
Prior art keywords
behaviors
events
threat detection
suspicious
malicious
Application number
PCT/IL2018/050892
Other languages
French (fr)
Inventor
Koby KRIPS
Yasmin BOKOBZA
Original Assignee
Cyberbit Ltd.
Application filed by Cyberbit Ltd.
Publication of WO2019035120A1

Classifications

    • G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms during program execution, by executing in a restricted environment, e.g. sandbox or secure virtual machine
    • G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
    • G06F 21/561 - Virus type analysis
    • G06F 21/566 - Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
    • H04L 41/046 - Network management architectures or arrangements comprising network management agents or mobile agents therefor
    • H04L 41/0631 - Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
    • H04L 41/142 - Network analysis or design using statistical or mathematical methods
    • H04L 41/147 - Network analysis or design for predicting network behaviour
    • H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks using machine learning or artificial intelligence
    • H04L 43/028 - Capturing of monitoring data by filtering
    • H04L 43/04 - Processing captured monitoring data, e.g. for logfile generation
    • H04L 43/08 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 63/14 - Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L 63/1408 - Detecting or protecting against malicious traffic by monitoring network traffic
    • H04L 63/1416 - Event detection, e.g. attack signature detection
    • H04L 63/1425 - Traffic logging, e.g. anomaly detection
    • H04L 63/1433 - Vulnerability analysis
    • H04L 63/1441 - Countermeasures against malicious traffic
    • H04L 63/145 - Countermeasures against malicious traffic involving the propagation of malware through the network, e.g. viruses, trojans or worms
    • H04L 63/20 - Managing network security; network security policies in general

Definitions

  • the invention relates to a cyber threat detection system and method.
  • malware-related cyber security breaches increase by nearly 30% year over year.
  • sophisticated attackers still manage to bypass even the most advanced security systems, including next-generation, non-signature based security systems.
  • advanced threats are coded to look like legitimate behavior. Therefore, they are extremely difficult to identify using conventional systems and can proceed unimpeded within the target networks.
  • US Patent Application No. 2014/0215618 (Striem Amit), published on July 31, 2014, discloses a method and apparatus for intrusion detection, the method comprising: receiving a description of a computerized system, the description comprising two or more entities, one or more attributes for each entity and one or more statistical rules related to relationships between the entities; receiving data related to activity of the computerized system, the data comprising two or more events; grouping the events into two or more groups associated with the entities; comparing the groups in accordance with the statistical rules, to identify a group not complying with any of the statistical rules.
  • US Patent No. 7,181,768 (Ghosh et al.), published on February 20, 2007, discloses an intrusion detection system (IDS) that uses application monitors for detecting application-based attacks against computer systems.
  • the IDS implements application monitors in the form of a software program to learn and monitor the behavior of system programs in order to detect attacks against computer hosts.
  • the application monitors implement machine learning algorithms to provide a mechanism for learning from previously observed behavior in order to recognize future attacks that it has not seen before.
  • the application monitors include temporal locality algorithms to increase the accuracy of the IDS.
  • the IDS of the present invention may comprise a string-matching program, a neural network, or a time series prediction algorithm for learning normal application behavior and for detecting anomalies.
  • US Patent No. 8,776,218 discloses that an executing computer process is monitored for an indication of malicious behavior, wherein the indication of the malicious behavior is a result of comparing an operation with a predetermined behavior, referred to as a gene.
  • a plurality of malicious behavior indications observed for the executing process are compared to a predetermined collection of malicious behaviors, referred to as a phenotype, which comprises a grouping of specific genes that are typically present in a type of malicious code.
  • an action may be caused, where the action is based on a prediction that the executing computer process is the type of malicious code as indicated by the phenotype.
  • Related user interfaces, applications, and computer program products are disclosed.
  • US Patent No. 9,544,321 (Baikalov et al.), published on January 10, 2017, discloses anomalous activities in a computer network are detected using adaptive behavioral profiles that are created by measuring at a plurality of points and over a period of time observables corresponding to behavioral indicators related to an activity. Normal kernel distributions are created about each point, and the behavioral profiles are created automatically by combining the distributions using the measured values and a Gaussian kernel density estimation process that estimates values between measurement points. Behavioral profiles are adapted periodically using data aging to de-emphasize older data in favor of current data. The process creates behavioral profiles without regard to the data distribution. An anomaly probability profile is created as a normalized inverse of the behavioral profile, and is used to determine the probability that a behavior indicator is indicative of a threat. The anomaly detection process has a low false positive rate.
  • US Patent No. 8,490,194 discloses a method for detecting malicious behavioral patterns which are related to malicious software, such as a computer worm, in computerized systems that include data exchange channels with other systems over a data network. Accordingly, hardware and/or software parameters that can characterize known behavioral patterns thereof are determined in the computerized system.
  • Known malicious code samples are learned by a machine learning process, such as decision trees and artificial neural networks, and the results of the machine learning process are analyzed in respect to the behavioral patterns of the computerized system. Then known and unknown malicious code samples are identified according to the results of the machine learning process.
  • US Patent Application No. 2017/0200004 (Ghosh et al.), published on July 13, 2017, discloses a non-transitory processor-readable medium storing code representing instructions to cause a processor to perform a process that includes code to cause the processor to receive a set of indications of allowed behavior associated with an application.
  • the processor is also caused to initiate an instance of the application within a sandbox environment.
  • the processor is further caused to receive, from a monitor module associated with the sandbox environment, a set of indications of actual behavior of the instance of the application in response to initiating the instance of the application within the sandbox environment.
  • the processor is also caused to send an indication associated with an anomalous behavior if at least one indication from the set of indications of actual behavior does not correspond to an indication from the set of indications of allowed behavior.
  • Analytical models are constructed and dynamically updated from the data sources so as to be able to rapidly identify and characterize conditions within the environment (such as behaviors, events, and functions) that are typically characteristic of a normal state and those that are of an abnormal or potentially suspicious state.
  • the model is further able to implement statistical flagging functions, provide analytical interfaces to system administrators and estimate likely conditions that characterize the state of the system and the potential threat.
  • the model may further recommend (or alternatively implement autonomously or semi-autonomously) optimal remedial repair and recovery strategies as well as the most appropriate countermeasures to isolate or neutralize the threat and its effects.
  • US Patent Application No. 2017/0063912 (Muddu et al.), published on March 2, 2017, discloses a security platform that employs a variety of techniques and mechanisms to detect security-related anomalies and threats in a computer network environment.
  • the security platform is "big data" driven and employs machine learning to perform security analytics.
  • the security platform performs user/entity behavioral analytics (UEBA) to detect the security related anomalies and threats, regardless of whether such anomalies/threats were previously known.
  • the security platform can include both real-time and batch paths/modes for detecting anomalies and threats. By visually presenting analytical results scored with risk ratings and supporting evidence, the security platform enables network security administrators to respond to a detected anomaly or threat, and to take action promptly.
  • a threat detection system comprising a processing resource configured to: provide a suspicious behaviors classifier configured to classify a set of a plurality of suspicious behaviors as malicious or non-malicious; obtain information of a given set of a plurality of suspicious behaviors, each suspicious behavior of the set comprises information of a corresponding group including one or more grouped events, grouped from a plurality of detected events that occurred on one or more endpoints connected to an organizational network, wherein: (a) at least one of the suspicious behaviors of the given set is identified in accordance with one or more user-defined identification rules, (b) at least one of the suspicious behaviors of the given set is non-anomalous, and (c) at least one of the groups includes multiple events including two or more of the detected events; and classify, utilizing the suspicious behaviors classifier, the given set as malicious or non-malicious.
  • the grouped events in each group are inter-related events.
  • the processing resource is further configured to alert a user of the threat detection system of the classification of the given set as malicious.
  • the suspicious behaviors classifier is trained utilizing: (a) a malicious training set of a plurality of first behaviors occurring on a controlled malware infected network, and (b) a routine training set of a plurality of second behaviors occurring on a second network.
  • the suspicious behaviors classifier is re-trained utilizing at least one of: (a) an updated malicious training set of a plurality of third behaviors occurring on the controlled malware infected network infected with a new malware, or (b) a new routine training set of a plurality of fourth behaviors occurring on the second network.
  • the second network is the organizational network.
  • the second network is a part of the organizational network.
  • the first behaviors and the second behaviors are identified in accordance with one or more identification rules applied on a plurality of events detected on one or more training endpoints connected to the controlled malware infected network and to the second network, correspondingly.
  • the detected events are detected by one or more agents, each agent installed on a given endpoint of the endpoints, wherein the agents are configured to provide information of the detected events to the processing resource.
  • At least part of the detected events detected by the agents installed on a corresponding endpoint are not provided to the processing resource, according to filtering criteria.
  • At least one of the agents installed on a corresponding endpoint identifies a given occurrence of a plurality of given events occurring on the corresponding endpoint as one of the suspicious behaviors.
  • At least some of the detected events are detected on a kernel of an operating system of the given endpoint.
  • the plurality of suspicious behaviors of the given set meet one or more of the following: (a) occur within a predetermined time frame, or (b) include a sequence of behaviors where at least one event of each behavior is related to at least one other event of at least one other behavior in the sequence, or (c) relate to a common resource.
  • the relation between the event and the other event is that the other event is executed by the event.
  • a threat detection method comprising: providing a suspicious behaviors classifier configured to classify a set of a plurality of suspicious behaviors as malicious or non-malicious; obtaining, by a processor, information of a given set of a plurality of suspicious behaviors, each suspicious behavior of the set comprises information of a corresponding group including one or more grouped events, grouped from a plurality of detected events that occurred on one or more endpoints connected to an organizational network, wherein: (a) at least one of the suspicious behaviors of the given set is identified in accordance with one or more user-defined identification rules, (b) at least one of the suspicious behaviors of the given set is non-anomalous, and (c) at least one of the groups includes multiple events including two or more of the detected events; and classifying, by the processor, utilizing the suspicious behaviors classifier, the given set as malicious or non-malicious.
  • the grouped events in each group are inter-related events.
  • the method further comprises alerting a user of the threat detection system of the classification of the given set as malicious.
  • the suspicious behaviors classifier is trained utilizing: (a) a malicious training set of a plurality of first behaviors occurring on a controlled malware infected network, and (b) a routine training set of a plurality of second behaviors occurring on a second network.
  • the suspicious behaviors classifier is re-trained utilizing at least one of: (a) an updated malicious training set of a plurality of third behaviors occurring on the controlled malware infected network infected with a new malware, or (b) a new routine training set of a plurality of fourth behaviors occurring on the second network.
  • the second network is the organizational network.
  • the second network is a part of the organizational network.
  • the first behaviors and the second behaviors are identified in accordance with one or more identification rules applied on a plurality of events detected on one or more training endpoints connected to the controlled malware infected network and to the second network, correspondingly.
  • the detected events are detected by one or more agents, each agent installed on a given endpoint of the endpoints, wherein the agents are configured to provide information of the detected events to the processing resource.
  • At least part of the detected events detected by the agents installed on a corresponding endpoint are not provided to the processing resource, according to filtering criteria.
  • At least one of the agents installed on a corresponding endpoint identifies a given occurrence of a plurality of given events occurring on the corresponding endpoint as one of the suspicious behaviors.
  • At least some of the detected events are detected on a kernel of an operating system of the given endpoint.
  • the plurality of suspicious behaviors of the given set meet one or more of the following: (a) occur within a predetermined time frame, or (b) include a sequence of behaviors where at least one event of each behavior is related to at least one other event of at least one other behavior in the sequence, or (c) relate to a common resource.
  • a non-transitory computer readable storage medium having computer readable program code embodied therewith, the computer readable program code, executable by at least one processor of a computer to perform a method comprising: providing a suspicious behaviors classifier configured to classify a set of a plurality of suspicious behaviors as malicious or non-malicious; obtaining, by a processor, information of a given set of a plurality of suspicious behaviors, each suspicious behavior of the set comprises information of a corresponding group including one or more grouped events, grouped from a plurality of detected events that occurred on one or more endpoints connected to an organizational network, wherein: (a) at least one of the suspicious behaviors of the given set is identified in accordance with one or more user-defined identification rules, (b) at least one of the suspicious behaviors of the given set is non-anomalous, and (c) at least one of the groups includes multiple events including two or more of the detected events; and classifying, by the processor, utilizing the suspicious behaviors classifier, the given set as malicious or non-malicious.
  • FIG. 1 is a schematic illustration of the operation of a cyber threat detection system, in accordance with the presently disclosed subject matter
  • Fig. 2 is a block diagram schematically illustrating one example of an endpoint, in accordance with the presently disclosed subject matter
  • Fig. 3 is a block diagram schematically illustrating one example of a threat detection system, in accordance with the presently disclosed subject matter
  • Fig. 4 is a flowchart illustrating one example of a sequence of operations carried out for identifying suspicious behaviors for classification by a classifier, in accordance with the presently disclosed subject matter; and
  • Fig. 5 is a flowchart illustrating one example of a sequence of operations carried out for detecting threats using a classifier, in accordance with the presently disclosed subject matter.
  • the terms “computer”, “processor”, “processing resource” and the like should be expansively construed to cover any kind of electronic device with data processing capabilities, including, by way of non-limiting example, a personal desktop/laptop computer, a server, a computing system, a communication device, a smartphone, a tablet computer, a smart television, a processor (e.g. a digital signal processor (DSP), a microcontroller, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), a group of multiple physical machines sharing performance of various tasks, virtual servers co-residing on a single physical machine, any other electronic computing device, and/or any combination thereof.
  • non-transitory is used herein to exclude transitory, propagating signals, but to otherwise include any volatile or non-volatile computer memory technology suitable to the application.
  • the phrase “for example,” “such as”, “for instance” and variants thereof describe non-limiting embodiments of the presently disclosed subject matter.
  • Reference in the specification to “one case”, “some cases”, “other cases” or variants thereof means that a particular feature, structure or characteristic described in connection with the embodiment(s) is included in at least one embodiment of the presently disclosed subject matter.
  • the appearance of the phrase “one case”, “some cases”, “other cases” or variants thereof does not necessarily refer to the same embodiment(s).
  • Figs. 1-3 illustrate a general schematic of the system architecture in accordance with an embodiment of the presently disclosed subject matter.
  • Each module in Figs. 1-3 can be made up of any combination of software, hardware and/or firmware that performs the functions as defined and explained herein.
  • the modules in Figs. 1-3 may be centralized in one location or dispersed over more than one location.
  • the system may comprise fewer, more, and/or different modules than those shown in Figs. 2-3.
  • Any reference in the specification to a method should be applied mutatis mutandis to a system capable of executing the method and should be applied mutatis mutandis to a non-transitory computer readable medium that stores instructions that once executed by a computer result in the execution of the method.
  • Any reference in the specification to a system should be applied mutatis mutandis to a method that may be executed by the system and should be applied mutatis mutandis to a non-transitory computer readable medium that stores instructions that may be executed by the system.
  • Any reference in the specification to a non-transitory computer readable medium should be applied mutatis mutandis to a system capable of executing the instructions stored in the non-transitory computer readable medium and should be applied mutatis mutandis to a method that may be executed by a computer that reads the instructions stored in the non-transitory computer readable medium.
  • Attention is drawn to Fig. 1, a schematic illustration of the operation of a cyber threat detection system, in accordance with the presently disclosed subject matter.
  • A plurality of endpoints are provided, denoted in the figure as endpoints 100-a, 100-b, 100-c, ..., 100-n.
  • the endpoints are endpoints of an organization. In some cases, at least part of the endpoints do not have access to a network external to the organizational network; however, in other cases, at least part of such endpoints also have (possibly limited, or controlled) access to the Internet.
  • Each endpoint (e.g. endpoint 100-a, 100-n) comprises a software agent installed thereon (not shown).
  • the software agent (referred to herein also as “agent”, interchangeably) is configured to monitor local events that occur on the endpoint on which it is installed, and to provide information of all or part of the local events to the threat detection system 300 (events 105-a, 105-b, 105-c, ..., 105-n).
  • An event can include any type of action performed by a processing resource (e.g. a processor) of the endpoint, including, for example, deleting files, copying files, modifying files, opening files, closing files, executing a process, killing a process, changing any type of permissions, printing a document, sending data, receiving data, etc.
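  • By way of a non-authoritative sketch, an event record such as an agent might report could look as follows; the Event type and all field names are illustrative assumptions for this document, not a schema defined by the patent:

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Optional

    # Hypothetical event record; field names are illustrative, not the patent's.
    @dataclass
    class Event:
        endpoint_id: str       # endpoint on which the event was detected (e.g. "100-a")
        event_type: str        # e.g. "file_delete", "process_start", "network_send"
        source_entity: str     # entity triggering the event (e.g. a process)
        target_entity: str     # entity on which the event was executed (e.g. a file)
        timestamp: datetime = field(default_factory=datetime.utcnow)
        extra: Optional[dict] = None  # event-type-dependent parameters (see Fig. 4 discussion)

    # Example: a process deletes a file on endpoint 100-a.
    e = Event("100-a", "file_delete", "pid:4312", "C:/temp/payload.exe")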
  • the agent can be configured to monitor all events performed by a processing resource of the endpoint on which it is installed (whether on the operating system kernel, or on the user space). In other cases, the agent can be configured to monitor only part of the events performed by a processing resource of the endpoint on which it is installed (e.g. only a certain type of events, only events occurring on a certain resource of the endpoint, or any other subset of the events performed by a processing resource of the endpoint on which it is installed, for example in accordance with certain rules).
  • At least some of the monitored events are monitored on a kernel of an operating system of the given endpoint.
  • the information of the local events provided by the endpoints to the threat detection system 300 can include, besides the type of event detected, also information about the entity triggering the event, and information about the entity on which the event was executed.
  • an agent installed on a specific endpoint can detect a file delete event.
  • the entity that initiated the file delete event can be a certain process executed on the endpoint, and the entity on which the event was executed is a given file identified for deletion by the delete event.
  • an entity can be a local entity (i.e. a resource of the endpoint such as a process executing on the endpoint, a file stored on a local memory (ROM/RAM/other) of the endpoint, an input/output device, etc.), or an external entity (such as another endpoint having a separate network connection, or a resource connected to such other endpoint).
  • the threat detection system 300 obtains the information of the local events provided by the various endpoints (e.g. endpoints 100-a, ..., 100-n), and identifies suspicious behaviors 110.
  • a suspicious behavior is a group of one or more events that occurred on one or more endpoints (e.g. endpoints 100-a, ..., 100-n), that meets a certain user-defined identification rule, optionally out of a plurality of user-defined identification rules.
  • the user-defined identification rules can be defined by cyber analysts that analyze behaviors of malware to identify malware-characterizing behaviors. Such user-defined identification rules are network generic and not network specific, and the same user-defined identification rules will apply to any network, irrespective of the specific characteristic behaviors of different networks.
  • at least one of the suspicious behaviors that is identified by at least one of the user-defined identification rule is not an anomalous behavior, i.e. it is a behavior that can be generated by non-malicious applications and it does not deviate from a baseline behavior that regularly occurs on a monitored organizational network.
  • at least part of the events within a group of events comprising a suspicious behavior are inter-related events (e.g. one event triggers the other event, two or more events relate to a common entity, two or more events occur within a certain predefined time window, etc.). It is to be noted that in some cases, events flow into the threat detection system 300 continuously and the threat detection system 300 is configured to continuously identify suspicious behaviors.
  • a self-delete process in which a certain process initiates a second process designed to delete (either directly, or by initiating subsequent processes designed to perform the deletion) the executable from which the first process was initiated, could be identified as a suspicious behavior.
  • Such suspicious behavior could be identified according to a corresponding identification rule, stating that if a given group of events occurs, it constitutes a suspicious behavior (optionally even in cases where the given group of events is non-anomalous). So, if an event in which one process initiates a second process occurs, and then an event in which the second process attempts to delete the executable from which the first process was initiated occurs, the group of these two events can be marked as a suspicious behavior, as it meets a given identification rule (see the sketch below).
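  • For illustration, such a rule could be expressed over a stream of the hypothetical Event records sketched earlier; the event-type strings and the parent_image field are assumptions, and this is a sketch of one possible rule, not the patent's implementation:

    # Hypothetical self-delete identification rule: a process starts a second
    # process, and the second process deletes the executable from which the
    # first process was initiated.
    def self_delete_rule(events):
        for spawn in events:
            if spawn.event_type != "process_start":
                continue
            parent_image = (spawn.extra or {}).get("parent_image")  # assumed field
            if not parent_image:
                continue
            for delete in events:
                if (delete.event_type == "file_delete"
                        and delete.source_entity == spawn.target_entity
                        and delete.target_entity == parent_image
                        and delete.timestamp >= spawn.timestamp):
                    return [spawn, delete]  # the group marked as a suspicious behavior
        return None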
  • one or more identification rules can be met by a single event. For example, if a certain event occurs in which an attempt is made by a certain entity to modify a given file (for example a certain operating system file), an identification rule can be met and the event can be marked as a suspicious behavior.
  • a plurality of identification rules can be met by a given group of one or more events. For example, if a first process initiates deletion of a certain operating system file, and another process, initiated by the first process, initiates deletion of the executable from which the first process was initiated, then an identification rule that checks if deletion of an operating system file is attempted can be met, and another identification rule that checks for self-delete (a given process initiates another process to delete the executable from which the given process was initiated) can also be met.
  • a suspicious behavior can be a behavior that occurs on more than one endpoint.
  • a certain suspicious behavior can start with a given event, or group of events, occurring on a given endpoint, and then proceed on one or more other endpoints. So, if on a given endpoint a given event is detected in which a process sends a given executable file to another endpoint, then on the other endpoint an event is detected in which the executable is executed as a given process, and then another event is detected on the other endpoint in which the executed executable attempts to change certain permissions, this can be identified as a suspicious behavior occurring on multiple endpoints, in accordance with one or more identification rules (see the sketch below).
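  • The multi-endpoint scenario above can likewise be written as a rule over the merged event stream of all endpoints; again a sketch under assumed event-type and field names:

    # Hypothetical cross-endpoint rule: endpoint A sends an executable to
    # endpoint B, B executes it, and the resulting process changes permissions.
    def cross_endpoint_rule(events):
        for send in events:
            if send.event_type != "file_send":
                continue
            for start in events:
                if (start.event_type == "process_start"
                        and start.endpoint_id != send.endpoint_id
                        and (start.extra or {}).get("image") == send.target_entity):
                    for perm in events:
                        if (perm.event_type == "permission_change"
                                and perm.endpoint_id == start.endpoint_id
                                and perm.source_entity == start.target_entity):
                            return [send, start, perm]  # one behavior, two endpoints
        return None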
  • information of events is obtained from a plurality of endpoints (e.g. endpoints 100-a, ..., 100-n, utilizing the agents installed respectively thereon), and the threat detection system 300 identifies, using the identification rules (defining what events or what groups of events constitute a suspicious behavior, that is not necessarily an anomaly), suspicious behaviors based on the obtained events.
  • the identification of at least part of the suspicious behaviors can be performed by the agents themselves, on the endpoints.
  • the agents can be configured to locally identify suspicious behaviors, using the identification rules, and provide the information of any identified suspicious behaviors (including the information of the events forming the suspicious behaviors) to the threat detection system 300.
  • the threat detection system 300 can be configured to continuously identify suspicious behaviors (based on the information of events obtained from the endpoints and/or by receiving information of identified suspicious behaviors from the endpoints). In such cases, every pre-determined time period (e.g. thirty seconds, one minute, ten minutes, one hour, one day, etc.), or upon certain criteria being met (e.g. more than a certain number of suspicious behaviors were identified), the threat detection system 300 can be configured to classify, using a suspicious behaviors classifier 115, a given set of suspicious behaviors, optionally including at least one suspicious behavior that is non-anomalous (i.e. a suspicious behavior that is comprised of a non-anomalous group of events), as malicious or non-malicious, as sketched below.
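  • A minimal sketch of these time-based and count-based triggers follows; the identify, classifier and alert hooks are assumed placeholders, not components named by the patent:

    import time

    # Accumulate identified suspicious behaviors and classify them as a set
    # either every `period` seconds or once `max_behaviors` have accumulated.
    def classification_loop(event_batches, identify, classifier, alert,
                            period=60.0, max_behaviors=50):
        pending, deadline = [], time.time() + period
        for events in event_batches:              # events flow in continuously
            pending.extend(identify(events))      # rule-based identification
            if pending and (time.time() >= deadline or len(pending) >= max_behaviors):
                if classifier(pending) == "malicious":
                    alert(pending)                # e.g. notify a security analyst
                pending, deadline = [], time.time() + period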
  • the threat detection system 300 can be configured to provide an alert to a user of the threat detection system 300 (e.g. a security analyst, etc.).
  • Turning to Fig. 2, there is shown a block diagram schematically illustrating one example of an endpoint, in accordance with the presently disclosed subject matter.
  • each endpoint 100 can comprise an endpoint network interface 220 enabling connecting the endpoint 100 to a communication network (being a communication network monitored by the threat detection system 300, such as an organizational communication network, and/or the Internet) and enabling it to send and receive data sent thereto through the communication network, including sending information of local events and/or identified suspicious behaviors and receiving alerts to be provided to the users of the endpoint, as detailed herein, inter alia with reference to Figs. 3-5.
  • Each endpoint 100 can further comprise or be otherwise associated with an endpoint data repository 230 (e.g. a database, a storage system, a memory including Read Only Memory - ROM, Random Access Memory - RAM, or any other type of memory, etc.) configured to store data, including, inter alia, information of local events, and optionally also suspicious behaviors identification rules, etc.
  • endpoint data repository 230 can be further configured to enable retrieval and/or update and/or deletion of the stored data. It is to be noted that in some cases, endpoint data repository 230 can be distributed.
  • Endpoint processing resource 210 can be one or more processing units (e.g. central processing units), microprocessors, microcontrollers (e.g. microcontroller units (MCUs)) or any other computing devices or modules, including multiple and/or parallel and/or distributed processing units, which are adapted to independently or cooperatively process data for controlling relevant endpoint 100 resources and for enabling operations related to endpoint 100 resources.
  • the processing resource comprises an agent 205, comprising one or more of the following modules: endpoint suspicious behaviors identification module 240, event collection module 250 and endpoint alert module 260.
  • Event collection module 250 can be configured to obtain information of local events occurring on the endpoint 100, and provide such information to the endpoint suspicious behavior identification module 240, and/or to the threat detection system 300, for identification of suspicious behaviors.
  • Endpoint suspicious behaviors identification module 240 can be configured to obtain information of events from the event collection module 250 and to identify suspicious behaviors utilizing suspicious behaviors identification rules, as further detailed herein, inter alia with reference to Fig. 4.
  • Endpoint alert module 260 can be configured to provide alerts to a user of the endpoint 100, for example of a malicious behavior being identified, as further detailed herein, inter alia with reference to Fig. 5.
  • Fig. 3 is a block diagram schematically illustrating one example of a threat detection system, in accordance with the presently disclosed subject matter.
  • threat detection system 300 can comprise a network interface 320 enabling connecting the threat detection system 300 to a communication network (being a communication network monitored by the threat detection system 300, such as an organizational communication network, and/or the Internet) and enabling it to send and receive data sent thereto through the communication network, including receiving information of events and/or identified suspicious behaviors from endpoints (e.g. endpoint 100-a, endpoint 100-n) and sending alerts to be provided to the users of the endpoints, as detailed herein, inter alia with reference to Figs. 3-5.
  • Threat detection system 300 can further comprise or be otherwise associated with a data repository 330 (e.g. a database, a storage system, a memory including Read Only Memory - ROM, Random Access Memory - RAM, or any other type of memory, etc.) configured to store data, including, inter alia, information of events, suspicious behaviors, etc.
  • data repository 330 can be further configured to enable retrieval and/or update and/or deletion of the stored data. It is to be noted that in some cases, data repository 330 can be distributed.
  • Threat detection system 300 further comprises a processing resource 310.
  • Processing resource 310 can be one or more processing units (e.g. central processing units), microprocessors, microcontrollers (e.g. microcontroller units (MCUs)) or any other computing devices or modules, including multiple and/or parallel and/or distributed processing units, which are adapted to independently or cooperatively process data for controlling relevant threat detection system 300 resources and for enabling operations related to threat detection system 300 resources.
  • the processing resource comprises one or more of the following modules: suspicious behaviors identification module 340, classification module 350 and threat detection system alert module 360.
  • Suspicious behaviors identification module 340 can be configured to obtain information of events from the endpoints (e.g. endpoint 100-a, ..., endpoint 100-n) and to identify suspicious behaviors utilizing suspicious behaviors identification rules, as further detailed herein, inter alia with reference to Fig. 4.
  • Classification module 350 can be configured to classify, using a suspicious behaviors classifier 115, a given set of suspicious behaviors as malicious or non-malicious, as further detailed herein, inter alia with reference to Fig. 5.
  • Threat detection system alert module 360 can be configured to provide alerts to a user of the threat detection system 300 (e.g. a security analyst, etc.), and/or to the endpoint alert module 260 of the endpoints (e.g. endpoint 100-a, ..., endpoint 100-n), for example of a malicious behavior being identified, as further detailed herein, inter alia with reference to Fig. 5.
  • Turning to Fig. 4, there is shown a flowchart illustrating one example of a sequence of operations carried out for identifying suspicious behaviors for classification by a classifier, in accordance with the presently disclosed subject matter.
  • threat detection system 300 can be configured to perform a suspicious behaviors identification process 400.
  • threat detection system 300 can be configured to obtain information of a plurality of detected events (block 410).
  • the information can be obtained from agents 205, installed on endpoints (e.g. endpoints 100-a, ..., 100-n), that collect, utilizing event collection modules 250, information relating to local events detected thereby (i.e. events that took place on the respective endpoint 100) and send such information to the threat detection system 300.
  • the information can include, besides the type of detected event, also information about the entity triggering the event, information about the entity on which the event was executed, and optionally additional/alternative parameters. The additional parameters, or some of them, can depend on the event type, as the following examples and the sketch after them illustrate.
  • the additional parameters can include all, or part, of the user attribute.
  • the additional parameters can include all, or part, of the file attributes.
  • the additional parameters can include the value deleted from the registry.
  • the additional parameters can include the Internet Protocol (IP) address and port of the connection.
  • the additional parameters can include the directory path.
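  • These type-dependent parameters could, for example, be captured in a small schema table; the event-type names below are assumptions used only to illustrate the examples above:

    # Hypothetical mapping of event types to their additional parameters.
    EXTRA_PARAMS_BY_TYPE = {
        "user_login":       ["user_attributes"],
        "file_modify":      ["file_attributes"],
        "registry_delete":  ["deleted_value"],
        "network_connect":  ["ip_address", "port"],
        "directory_create": ["directory_path"],
    }

    def validate_extra(event_type, extra):
        """Check that an event's extra parameters match its type's schema."""
        return all(key in (extra or {}) for key in EXTRA_PARAMS_BY_TYPE.get(event_type, []))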
  • the agents 205 can be configured to send information relating to all events performed by a processing resource of the endpoint on which they are installed (whether on the operating system kernel, or on the user space). In other cases, the agent 205 can be configured to send information relating to only part of the events performed by an endpoint processing resource 210 of the endpoint on which it is installed (e.g. only a certain type of events, only events occurring on a certain resource of the endpoint, or any other subset of the events performed by a processing resource of the endpoint on which it is installed, for example, in accordance with certain rules), as sketched below.
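  • A minimal sketch of such agent-side filtering follows; the forwarding rules shown are illustrative examples, not criteria defined by the patent:

    # Only events passing at least one forwarding rule are sent to the
    # threat detection system; all others are dropped at the agent.
    FORWARD_RULES = [
        lambda e: e.event_type in {"process_start", "file_delete", "registry_delete"},
        lambda e: e.target_entity.lower().endswith(".exe"),  # events touching executables
    ]

    def events_to_forward(detected_events):
        return [e for e in detected_events if any(rule(e) for rule in FORWARD_RULES)]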
  • threat detection system 300 can be configured to identify, utilizing suspicious behaviors identification module 340, one or more suspicious behaviors, each comprised of an event, or a plurality of events (block 420). For this purpose, threat detection system 300 obtains the information of the local events provided by the various endpoints (e.g. endpoints 100-a, ..., 100-n), and identifies groups of events (each group can comprise one or more events) that occurred on one or more endpoints (e.g. endpoints 100-a, ..., 100-n), where each group meets a certain identification rule (e.g. a predetermined user-defined identification rule).
  • At least one of the identified suspicious behaviors is non-anomalous (i.e. a suspicious behavior that is comprised of a non-anomalous group of events). It is to be noted that in some cases, events flow into the threat detection system 300 continuously and the threat detection system 300 is configured to continuously identify suspicious behaviors out of the events flowing in. As indicated above, in some cases, at least some of the events comprising a suspicious behavior are inter-related events.
  • the threat detection system 300 can identify suspicious behaviors that occur on a single endpoint (i.e. all events within such suspicious behavior occur on the single endpoint) and/or suspicious behaviors that occur on more than one endpoint (i.e. the events within such suspicious behaviors originate from more than one endpoint).
  • the identification of at least part of the suspicious behaviors can be performed by the agents 205 themselves, on the endpoints.
  • the agents 205 can be configured to locally identify, utilizing endpoint suspicious behaviors identification modules 240, suspicious behaviors, using the identification rules, and provide the information of any identified suspicious behaviors (including the information of the events forming the suspicious behaviors) to the threat detection system 300.
  • the agents 205 can identify suspicious behaviors that occur on the endpoint on which they are installed (i.e. all events within such suspicious behavior occur on the single endpoint), and not cross-endpoint suspicious behaviors.
  • threat detection system 300 After identifying the suspicious behaviors (optionally including at least one non- anomalous suspicious behavior), threat detection system 300 provides a given set of suspicious behaviors to a classifier, for classification thereof, as malicious, or non- malicious, as further detailed herein, inter alia with reference to Fig. 5 (block 430).
  • the given set of suspicious behaviors includes all suspicious behaviors identified by the threat detection system 300 (at block 420) and/or by the agents 205 (as detailed above).
  • the given set of suspicious behaviors includes only a subset of the suspicious behaviors, e.g. according to rules (e.g. suspicious behaviors that occurred during a certain time frame, suspicious behaviors having at least a couple of inter-related events, etc.).
  • It is to be noted that the suspicious behaviors identification process 400 can be performed in a continuous and/or repeating manner (e.g. by returning to block 410).
  • Turning to Fig. 5, threat detection system 300 can be configured to perform a threat detection process 500.
  • threat detection system 300 can be configured to provide a classification module 350, being a suspicious behaviors classifier 115, configured to classify a set of a plurality of suspicious behaviors as malicious or non-malicious (block 510).
  • the suspicious behaviors classifier 115 can be generated by performing a supervised machine learning algorithm on (a) a malicious training set of a plurality of first behaviors occurring on a controlled malware infected network, and (b) a routine training set of a plurality of second behaviors occurring on a second network, as detailed above.
  • each set of suspicious behaviors is associated with a plurality of parameters, one of which being an indication if the set is part of the malicious training set, or of the routine training set.
  • the second network, from which the routine training set is obtained, is an organizational network, or a certain part of an organizational network, of the organization on which the threat detection system 300 is operating. This enables training the suspicious behaviors classifier 115 using real-life events data of routine events occurring on the organizational network of the organization on which the threat detection system 300 is operating.
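  • A hedged training sketch follows: the patent specifies supervised learning on the two labeled training sets but does not fix an algorithm or a feature representation, so the random forest (scikit-learn), the simple count features, and the behaviors' `events` attribute below are all assumptions:

    from sklearn.ensemble import RandomForestClassifier

    def featurize(behavior_set):
        """Turn a set of suspicious behaviors into a fixed-length feature vector."""
        events = [e for b in behavior_set for e in b.events]  # assumed attribute
        return [
            len(behavior_set),                     # behaviors in the set
            len(events),                           # total grouped events
            len({e.endpoint_id for e in events}),  # endpoints involved
            len({e.event_type for e in events}),   # distinct event types
        ]

    def train(malicious_sets, routine_sets):
        # label 1: sets from the controlled malware-infected network,
        # label 0: sets from the second (routine) network
        X = [featurize(s) for s in malicious_sets + routine_sets]
        y = [1] * len(malicious_sets) + [0] * len(routine_sets)
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X, y)
        return model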
  • the suspicious behaviors classifier 115 can optionally be periodically or arbitrarily re-trained utilizing at least one of: (a) an updated malicious training set of a plurality of first behaviors occurring on the controlled malware infected network infected with a new malware, or (b) a new routine training set of a plurality of second behaviors occurring on the second network.
  • the first behaviors and the second behaviors can be identified in accordance with one or more identification rules applied on a plurality of events detected by agents 205 installed on one or more training endpoints connected to the controlled malware infected network and to the second network, correspondingly.
  • This is a similar process to identifying suspicious behaviors during operation of the threat detection system 300, as detailed herein, inter alia with reference to Fig. 4. Having described the suspicious behaviors classifier 115, attention is drawn back to the threat detection process 500.
  • Threat detection system 300 can be further configured to obtain information of a given set of a plurality of suspicious behaviors to be classified as malicious or non-malicious by the suspicious behaviors classifier 115 (block 520).
  • Each suspicious behavior of the set comprises information of a corresponding group including one or more grouped events, grouped from a plurality of detected events (detected by the agents 205 of the endpoints 100) that occurred on one or more endpoints 100 connected to a network, optionally an organizational network of the organization on which the threat detection system 300 is operating.
  • at least one of the groups includes multiple events, including two or more events 105 of the detected events (detected by the agents 205 of the endpoints 100).
  • at least one of the suspicious behaviors is non-anomalous (i.e. a suspicious behavior that is comprised of a non-anomalous group of events).
  • the plurality of suspicious behaviors of the given set meet one or more of the following: (a) occur within a predetermined time frame, or (b) include a sequence of behaviors where at least one event of each behavior is related to at least one other event of at least one other behavior in the sequence, or (c) relate to a common resource. For example, all suspicious behaviors of the given set occurred within a time frame of thirty minutes, and/or at least one event of each suspicious behavior of the given set triggered another event of another suspicious behavior of the given set, and/or at least one event of each suspicious behavior of the given set relates to a common resource (e.g. a certain file or process, etc.).
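  • As a sketch, criteria (a) and (c) above might be checked as follows (attribute names are assumptions carried over from the earlier sketches); criterion (b) would require an explicit event-to-event relation field, e.g. recording which event executed which, and is omitted here:

    from datetime import timedelta

    def within_time_frame(behaviors, frame=timedelta(minutes=30)):
        """Criterion (a): all behaviors fall within a predetermined time frame."""
        times = [e.timestamp for b in behaviors for e in b.events]
        return max(times) - min(times) <= frame

    def share_common_resource(behaviors):
        """Criterion (c): every behavior touches at least one common resource."""
        resources = [{e.target_entity for e in b.events} for b in behaviors]
        return bool(set.intersection(*resources))

    def qualifies_as_set(behaviors):
        return within_time_frame(behaviors) or share_common_resource(behaviors)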
  • threat detection system 300 can be further configured to classify, utilizing the suspicious behaviors classifier 115, the given set as malicious or non-malicious (block 530).
  • Threat detection system 300 can be configured to check if the suspicious behaviors classifier 115 classifies the given set as malicious or not (block 540). In case the suspicious behaviors classifier 115 classifies the given set as malicious, threat detection system 300 can be configured to alert, utilizing the threat detection system alert module 360, a user of the threat detection system 300 of the classification of the given set as malicious (block 550).
  • threat detection system 300 can also send alerts to one or more specific endpoints (e.g. endpoints 100-a, ..., 100-n) for presenting them, by an endpoint alert module 260, to the users of the respective endpoints.
  • Threat detection system 300 can optionally be further configured to quarantine certain endpoints 100 identified as potentially infected with malware (and/or perform other protective measures thereon) based on analysis of the given set of suspicious behaviors classified as malicious.
  • the threat detection process 500 can be performed in a continuous and/or repeating manner (e.g. by returning to block 520).
  • system can be implemented, at least partly, as a suitably programmed computer.
  • the presently disclosed subject matter contemplates a computer program being readable by a computer for executing the disclosed method.
  • the presently- disclosed subject matter further contemplates a machine-readable memory tangibly embodying a program of instructions executable by the machine for executing the disclosed method.

Abstract

A threat detection system comprising a processing resource configured to: provide a suspicious behaviors classifier configured to classify a set of a plurality of suspicious behaviors as malicious or non-malicious; obtain information of a given set of a plurality of suspicious behaviors, each suspicious behavior of the set comprising information of a corresponding group including one or more grouped events, grouped from a plurality of detected events that occurred on one or more endpoints connected to an organizational network, wherein: (a) at least one of the suspicious behaviors of the given set is identified in accordance with one or more user-defined identification rules, (b) at least one of the suspicious behaviors of the given set is non-anomalous, and (c) at least one of the groups includes multiple events including two or more of the detected events; and classify, utilizing the suspicious behaviors classifier, the given set as malicious or non-malicious.

Description

The invention relates to a cyber threat detection system and method.
BACKGROUND
According to certain sources, malware-related cyber security breaches increase by nearly 30% year over year. Despite the growing investment in cyber security, sophisticated attackers still manage to bypass even the most advanced security systems, including next-generation, non-signature based security systems. One reason is that advanced threats are coded to look like legitimate behavior. Therefore, they are extremely difficult to identify using conventional systems and can proceed unimpeded within the target networks.
In addition, today's cybersecurity threats are very dynamic, as attackers can create new permutations of old threats by the minute. Security systems cannot rely solely on Indications of Compromise (IOCs) for detection of such threats. These subtle changes in a known threat's code modify its attributes and allow the malware to easily bypass IOC-based detection mechanisms and inflict damage. To detect and respond to advanced, targeted threats, forward-thinking organizations need to apply advanced detection techniques, beyond IOCs.
Still further, conventional security systems generate countless alerts requiring security experts to correlate, analyze and prioritize them manually. Detecting and responding to advanced and targeted attacks requires a new approach.
There is thus a need in the art for a new cyber threat detection system and method.
References considered to be relevant as background to the presently disclosed subject matter are listed below. Acknowledgement of the references herein is not to be inferred as meaning that these are in any way relevant to the patentability of the presently disclosed subject matter.
US Patent Application No. 2014/0215618 (Striem Amit), published on July 31, 2014, discloses a method and apparatus for intrusion detection, the method comprising: receiving a description of a computerized system, the description comprising two or more entities, one or more attributes for each entity and one or more statistical rules related to relationships between the entities; receiving data related to activity of the computerized system, the data comprising two or more events; grouping the events into two or more groups associated with the entities; comparing the groups in accordance with the statistical rules, to identify a group not complying with any of the statistical rules.
US Patent No. 7,181,768 (Ghosh et al.), published on February 20, 2007, discloses an intrusion detection system (IDS) that uses application monitors for detecting application-based attacks against computer systems. The IDS implements application monitors in the form of a software program to learn and monitor the behavior of system programs in order to detect attacks against computer hosts. The application monitors implement machine learning algorithms to provide a mechanism for learning from previously observed behavior in order to recognize future attacks that it has not seen before. The application monitors include temporal locality algorithms to increase the accuracy of the IDS. The IDS of the present invention may comprise a string-matching program, a neural network, or a time series prediction algorithm for learning normal application behavior and for detecting anomalies.
US Patent No. 8,776,218 (Wright), published on July 8, 2014, discloses that an executing computer process is monitored for an indication of malicious behavior, wherein the indication of the malicious behavior is a result of comparing an operation with a predetermined behavior, referred to as a gene. A plurality of malicious behavior indications observed for the executing process are compared to a predetermined collection of malicious behaviors, referred to as a phenotype, which comprises a grouping of specific genes that are typically present in a type of malicious code. Upon matching the malicious behavior indications with a phenotype, an action may be caused, where the action is based on a prediction that the executing computer process is the type of malicious code as indicated by the phenotype. Related user interfaces, applications, and computer program products are disclosed.
US Patent No. 7,454,790 (Potok), published on November 18, 2008, discloses a method of analyzing computer intrusion detection information that looks beyond known attacks and abnormal access patterns to the critical information that an intruder may want to access. Unique target identifiers and the type of work performed by the networked targets are added to audit log records. Analysis using vector space modeling, dissimilarity matrix comparison, and clustering of the event records is then performed.
US Patent No. 9,544,321 (Baikalov et al.), published on January 10, 2017, discloses that anomalous activities in a computer network are detected using adaptive behavioral profiles that are created by measuring, at a plurality of points and over a period of time, observables corresponding to behavioral indicators related to an activity. Normal kernel distributions are created about each point, and the behavioral profiles are created automatically by combining the distributions using the measured values and a Gaussian kernel density estimation process that estimates values between measurement points. Behavioral profiles are adapted periodically using data aging to de-emphasize older data in favor of current data. The process creates behavioral profiles without regard to the data distribution. An anomaly probability profile is created as a normalized inverse of the behavioral profile, and is used to determine the probability that a behavior indicator is indicative of a threat. The anomaly detection process has a low false positive rate.
US Patent No. 8,490,194 (Moskovitch et al.), published on July 16, 2013, discloses a method for detecting malicious behavioral patterns which are related to malicious software such as a computer worm in computerized systems that include data exchange channels with other systems over a data network. Accordingly, hardware and/or software parameters that can characterize known behavioral patterns of the computerized system are determined. Known malicious code samples are learned by a machine learning process, such as decision trees and artificial neural networks, and the results of the machine learning process are analyzed in respect to the behavioral patterns of the computerized system. Then known and unknown malicious code samples are identified according to the results of the machine learning process.
US Patent Application No. 2017/0200004 (Ghosh et al.) published on July 13, 2017, discloses a non-transitory processor-readable medium storing code representing instructions to cause a processor to perform a process includes code to cause the processor to receive a set of indications of allowed behavior associated with an application. The processor is also caused to initiate an instance of the application within a sandbox environment. The processor is further caused to receive, from a monitor module associated with the sandbox environment, a set of indications of actual behavior of the instance of the application in response to initiating the instance of the application within the sandbox environment. The processor is also caused to send an indication associated with an anomalous behavior if at least one indication from the set of indications of actual behavior does not correspond to an indication from the set of indications of allowed behavior.
US Patent Application No. 2013/0305377 (Herz), published on November 14, 2013, discloses a distributed multi-agent system and method implemented and employed across at least one intranet for purposes of real time collection, monitoring, aggregation, analysis and modeling of system and network operations, communications, internal and external accesses, code execution functions, network and network resource conditions as well as other assessable criteria within the implemented environment. Analytical models are constructed and dynamically updated from the data sources so as to be able to rapidly identify and characterize conditions within the environment (such as behaviors, events, and functions) that are typically characteristic of a normal state and those that are of an abnormal or potentially suspicious state. The model is further able to implement statistical flagging functions, provide analytical interfaces to system administrators and estimate likely conditions that characterize the state of the system and the potential threat. The model may further recommend (or alternatively implement autonomously or semi-autonomously) optimal remedial repair and recovery strategies as well as the most appropriate countermeasures to isolate or neutralize the threat and its effects.
US Patent Application No. 2017/0063912 (Muddu et al.), published on March 2, 2017, discloses a security platform that employs a variety of techniques and mechanisms to detect security related anomalies and threats in a computer network environment. The security platform is "big data" driven and employs machine learning to perform security analytics. The security platform performs user/entity behavioral analytics (UEBA) to detect the security related anomalies and threats, regardless of whether such anomalies/threats were previously known. The security platform can include both real-time and batch paths/modes for detecting anomalies and threats. By visually presenting analytical results scored with risk ratings and supporting evidence, the security platform enables network security administrators to respond to a detected anomaly or threat, and to take action promptly.
In accordance with a first aspect of the presently disclosed subject matter, there is provided a threat detection system comprising a processing resource configured to: provide a suspicious behaviors classifier configured to classify a set of a plurality of suspicious behaviors as malicious or non-malicious; obtain information of a given set of a plurality of suspicious behaviors, each suspicious behavior of the set comprising information of a corresponding group including one or more grouped events, grouped from a plurality of detected events that occurred on one or more endpoints connected to an organizational network, wherein: (a) at least one of the suspicious behaviors of the given set is identified in accordance with one or more user-defined identification rules, (b) at least one of the suspicious behaviors of the given set is non-anomalous, and (c) at least one of the groups includes multiple events including two or more of the detected events; and classify, utilizing the suspicious behaviors classifier, the given set as malicious or non-malicious.
In some cases, the grouped events in each group are inter-related events.
In some cases, the processing resource is further configured to alert a user of the threat detection system of the classification of the given set as malicious.
In some cases, the suspicious behaviors classifier is trained utilizing: (a) a malicious training set of a plurality of first behaviors occurring on a controlled malware infected network, and (b) a routine training set of a plurality of second behaviors occurring on a second network.
In some cases, the suspicious behaviors classifier is re-trained utilizing at least one of: (a) an updated malicious training set of a plurality of third behaviors occurring on the controlled malware infected network infected with a new malware, or (b) a new routine training set of a plurality of fourth behaviors occurring on the second network.
In some cases, the second network is the organizational network.
In some cases, the second network is a part of the organizational network.
In some cases, the first behaviors and the second behaviors are identified in accordance with one or more identification rules applied on a plurality of events detected on one or more training endpoints connected to the controlled malware infected network and to the second network, correspondingly.
In some cases, the detected events are detected by one or more agents, each agent installed on a given endpoint of the endpoints, wherein the agents are configured to provide information of the detected events to the processing resource.
In some cases, at least part of the detected events detected by the agents installed on a corresponding endpoint are not provided to the processing resource, according to filtering criteria.
In some cases, at least one of the agents installed on a corresponding endpoint identifies a given occurrence of a plurality of given events occurring on the corresponding endpoint as one of the suspicious behaviors.
In some cases, at least some of the detected events are detected on a kernel of an operating system of the given endpoint.
In some cases, the plurality of suspicious behaviors of the given set meet one or more of the following: (a) occur within a predetermined time frame, or (b) include a sequence of behaviors where at least one event of each behavior is related to at least one other event of at least one other behavior in the sequence, or (c) relate to a common resource.
In some cases, the relation between the event and the other event is that the other event is executed by the event.
In accordance with a second aspect of the presently disclosed subject matter, there is provided a threat detection method comprising: providing a suspicious behaviors classifier configured to classify a set of a plurality of suspicious behaviors as malicious or non-malicious; obtaining, by a processor, information of a given set of a plurality of suspicious behaviors, each suspicious behavior of the set comprising information of a corresponding group including one or more grouped events, grouped from a plurality of detected events that occurred on one or more endpoints connected to an organizational network, wherein: (a) at least one of the suspicious behaviors of the given set is identified in accordance with one or more user-defined identification rules, (b) at least one of the suspicious behaviors of the given set is non-anomalous, and (c) at least one of the groups includes multiple events including two or more of the detected events; and classifying, by the processor, utilizing the suspicious behaviors classifier, the given set as malicious or non-malicious.
In some cases, the grouped events in each group are inter-related events.
In some cases, the method further comprises alerting a user of the threat detection system of the classification of the given set as malicious.
In some cases, the suspicious behaviors classifier is trained utilizing: (a) a malicious training set of a plurality of first behaviors occurring on a controlled malware infected network, and (b) a routine training set of a plurality of second behaviors occurring on a second network.
In some cases, the suspicious behaviors classifier is re-trained utilizing at least one of: (a) an updated malicious training set of a plurality of third behaviors occurring on the controlled malware infected network infected with a new malware, or (b) a new routine training set of a plurality of fourth behaviors occurring on the second network.
In some cases, the second network is the organizational network.
In some cases, the second network is a part of the organizational network.
In some cases, the first behaviors and the second behaviors are identified in accordance with one or more identification rules applied on a plurality of events detected on one or more training endpoints connected to the controlled malware infected network and to the second network, correspondingly.
In some cases, the detected events are detected by one or more agents, each agent installed on a given endpoint of the endpoints, wherein the agents are configured to provide information of the detected events to the processing resource.
In some cases, at least part of the detected events detected by the agents installed on a corresponding endpoint are not provided to the processing resource, according to filtering criteria.
In some cases, at least one of the agents installed on a corresponding endpoint identifies a given occurrence of a plurality of given events occurring on the corresponding endpoint as one of the suspicious behaviors.
In some cases, at least some of the detected events are detected on a kernel of an operating system of the given endpoint.
In some cases, the plurality of suspicious behaviors of the given set meet one or more of the following: (a) occur within a predetermined time frame, or (b) include a sequence of behaviors where at least one event of each behavior is related to at least one other event of at least one other behavior in the sequence, or (c) relate to a common resource.
In some cases, the relation between the event and the other event is that the other event is executed by the event.
In accordance with a third aspect of the presently disclosed subject matter, there is provided a non-transitory computer readable storage medium having computer readable program code embodied therewith, the computer readable program code executable by at least one processor of a computer to perform a method comprising: providing a suspicious behaviors classifier configured to classify a set of a plurality of suspicious behaviors as malicious or non-malicious; obtaining, by a processor, information of a given set of a plurality of suspicious behaviors, each suspicious behavior of the set comprising information of a corresponding group including one or more grouped events, grouped from a plurality of detected events that occurred on one or more endpoints connected to an organizational network, wherein: (a) at least one of the suspicious behaviors of the given set is identified in accordance with one or more user-defined identification rules, (b) at least one of the suspicious behaviors of the given set is non-anomalous, and (c) at least one of the groups includes multiple events including two or more of the detected events; and classifying, by a processor, utilizing the suspicious behaviors classifier, the given set as malicious or non-malicious.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to understand the presently disclosed subject matter and to see how it may be carried out in practice, the subject matter will now be described, by way of non-limiting examples only, with reference to the accompanying drawings, in which:
Fig. 1 is a schematic illustration of the operation of a cyber threat detection system, in accordance with the presently disclosed subject matter;
Fig. 2 is a block diagram schematically illustrating one example of an endpoint, in accordance with the presently disclosed subject matter;
Fig. 3 is a block diagram schematically illustrating one example of a threat detection system, in accordance with the presently disclosed subject matter;
Fig. 4 is a flowchart illustrating one example of a sequence of operations carried out for identifying suspicious behaviors for classification by a classifier, in accordance with the presently disclosed subject matter; and
Fig. 5 is a flowchart illustrating one example of a sequence of operations carried out for detecting threats using a classifier, in accordance with the presently disclosed subject matter.
DETAILED DESCRIPTION
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the presently disclosed subject matter. However, it will be understood by those skilled in the art that the presently disclosed subject matter may be practiced without these specific details. In other instances, well- known methods, procedures, and components have not been described in detail so as not to obscure the presently disclosed subject matter.
In the drawings and descriptions set forth, identical reference numerals indicate those components that are common to different embodiments or configurations.
Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as "providing", "obtaining", "classifying", "alerting", "training", "re-training", or the like, include action and/or processes of a computer that manipulate and/or transform data into other data, said data represented as physical quantities, e.g. such as electronic quantities, and/or said data representing the physical objects. The terms "computer", "processor", and "controller" should be expansively construed to cover any kind of electronic device with data processing capabilities, including, by way of non-limiting example, a personal desktop/laptop computer, a server, a computing system, a communication device, a smartphone, a tablet computer, a smart television, a processor (e.g. a digital signal processor (DSP), a microcontroller, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), a group of multiple physical machines sharing performance of various tasks, virtual servers co-residing on a single physical machine, any other electronic computing device, and/or any combination thereof.
The operations in accordance with the teachings herein may be performed by a computer specially constructed for the desired purposes or by a general-purpose computer specially configured for the desired purpose by a computer program stored in a non-transitory computer readable storage medium. The term "non-transitory" is used herein to exclude transitory, propagating signals, but to otherwise include any volatile or non-volatile computer memory technology suitable to the application.
As used herein, the phrase "for example," "such as", "for instance" and variants thereof describe non-limiting embodiments of the presently disclosed subject matter. Reference in the specification to "one case", "some cases", "other cases" or variants thereof means that a particular feature, structure or characteristic described in connection with the embodiment(s) is included in at least one embodiment of the presently disclosed subject matter. Thus the appearance of the phrase "one case", "some cases", "other cases" or variants thereof does not necessarily refer to the same embodiment(s).
It is appreciated that, unless specifically stated otherwise, certain features of the presently disclosed subject matter, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the presently disclosed subject matter, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination.
In embodiments of the presently disclosed subject matter, fewer, more and/or different stages than those shown in Figs. 4-5 may be executed. In embodiments of the presently disclosed subject matter one or more stages illustrated in Figs. 4-5 may be executed in a different order and/or one or more groups of stages may be executed simultaneously. Figs. 1-3 illustrate a general schematic of the system architecture in accordance with an embodiment of the presently disclosed subject matter. Each module in Figs. 1-3 can be made up of any combination of software, hardware and/or firmware that performs the functions as defined and explained herein. The modules in Figs. 1-3 may be centralized in one location or dispersed over more than one location. In other embodiments of the presently disclosed subject matter, the system may comprise fewer, more, and/or different modules than those shown in Figs. 2-3.
Any reference in the specification to a method should be applied mutatis mutandis to a system capable of executing the method and should be applied mutatis mutandis to a non-transitory computer readable medium that stores instructions that once executed by a computer result in the execution of the method.
Any reference in the specification to a system should be applied mutatis mutandis to a method that may be executed by the system and should be applied mutatis mutandis to a non-transitory computer readable medium that stores instructions that may be executed by the system.
Any reference in the specification to a non-transitory computer readable medium should be applied mutatis mutandis to a system capable of executing the instructions stored in the non-transitory computer readable medium and should be applied mutatis mutandis to a method that may be executed by a computer that reads the instructions stored in the non-transitory computer readable medium.
Bearing this in mind, attention is drawn to Fig. 1, a schematic illustration of the operation of a cyber threat detection system, in accordance with the presently disclosed subject matter.
In accordance with the presently disclosed subject matter, a plurality of endpoints are provided, denoted in the figure as endpoints 100-a, 100-b, 100-c, ..., 100-n. The endpoints (e.g. endpoints 100-a, ..., 100-n) can be computerized devices, including servers, laptop computers, desktop computers, smartphones, printers, network switches, or any other device having access (via a wired and/or wireless connection) to a communication network, such as the Internet. In some cases, the endpoints (e.g. endpoints 100-a, ..., 100-n) can be endpoints of an organization (e.g. a corporation, etc.), having access to an organizational network of the organization. In some cases, when the endpoints are endpoints of an organization, at least part of the endpoints do not have access to an external network (external to the organizational network); however, in other cases, at least part of such endpoints also have access (possibly limited, or controlled) to the Internet.
Each endpoint (e.g. endpoint 100-a, ..., 100-n) comprises a software agent installed thereon (not shown). The software agent (referred to herein also as "agent" interchangeably) is configured to monitor local events that occur on the endpoint on which it is installed, and provide information of all or part of the local events to the threat detection system 300 (events 105-a, 105-b, 105-c, ..., 105-n).
An event can include any type of action performed by a processing resource (e.g. a processor) of the endpoint, including, for example, deleting files, copying files, modifying files, opening files, closing files, executing a process, killing a process, changing any type of permissions, printing a document, sending data, receiving data, etc. In some cases, the agent can be configured to monitor all events performed by a processing resource of the endpoint on which it is installed (whether on the operating system kernel, or on the user space). In other cases, the agent can be configured to monitor only part of the events performed by a processing resource of the endpoint on which it is installed (e.g. only a certain type of events, only events occurring on a certain resource of the endpoint, or any other subset of the events performed by a processing resource of the endpoint on which it is installed, for example, in accordance with certain rules). In some cases, at least some of the monitored events are monitored on a kernel of an operating system of the given endpoint.
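By way of non-limiting illustration, the following Python sketch shows one possible shape for such a local-event record. It is a minimal sketch only: the Event class, its field names and the example values are assumptions introduced here for illustration, as the present disclosure does not prescribe a concrete event schema.

```python
# Illustrative sketch only: one possible shape for a local-event record.
# All field names here are assumptions; the disclosure does not prescribe a schema.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class Event:
    event_type: str    # e.g. "file_delete", "process_start", "network_connect"
    endpoint_id: str   # endpoint on which the event occurred (e.g. "100-a")
    initiator: str     # entity triggering the event (e.g. a process)
    target: str        # entity on which the event was executed (e.g. a file)
    timestamp: float   # epoch seconds
    params: Dict[str, str] = field(default_factory=dict)  # event-type-dependent extras


# Example: a process deleting a file on endpoint 100-a.
evt = Event("file_delete", "100-a", "process:1234", "file:C:/Temp/payload.exe", 1700000000.0)
```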
The information of the local events provided by the endpoints to the threat detection system 300 can include, besides the type of event detected, information about the entity triggering the event, and information about the entity on which the event was executed. For example, an agent installed on a specific endpoint can detect a file delete event. The entity that initiated the file delete event can be a certain process executed on the endpoint, and the entity on which the event was executed is a given file identified for deletion by the delete event. It is to be noted that an entity can be a local entity (i.e. a resource of the endpoint such as a process executing on the endpoint, a file stored on a local memory (ROM/RAM/other) of the endpoint, an input/output device, etc.), or an external entity (such as another endpoint having a separate network connection, or a resource connected to such other resource).
The threat detection system 300 obtains the information of the local events provided by the various endpoints (e.g. endpoints 100-a, ..., 100-n), and identifies suspicious behaviors 110. A suspicious behavior is a group of one or more events that occurred on one or more endpoints (e.g. endpoints 100-a, ..., 100-n), that meets a certain user-defined identification rule, optionally out of a plurality of user-defined identification rules. The user-defined identification rules can be defined by cyber analysts who analyze behaviors of malware to identify malware-characterizing behaviors. Such user-defined identification rules are network generic and not network specific, and the same user-defined identification rules will apply to any network, irrespective of specific characteristic behaviors of different networks. It is to be noted that wherever reference is made herein to an identification rule or identification rules, the reference is to identification rule(s) that are user-defined. In some cases, at least one of the suspicious behaviors that is identified by at least one of the user-defined identification rules is not an anomalous behavior, i.e. it is a behavior that can be generated by non-malicious applications and it does not deviate from a baseline behavior that regularly occurs on a monitored organizational network. In some cases, at least part of the events within a group of events comprising a suspicious behavior are inter-related events (e.g. one event triggers the other event, two or more events relate to a common entity, two or more events occur within a certain predefined time window, etc.). It is to be noted that in some cases, events flow into the threat detection system 300 continuously and the threat detection system 300 is configured to continuously identify suspicious behaviors.
For example, a self-delete process, in which a certain process initiates a second process designed to delete (either directly, or by initiating subsequent processes designed to perform the deletion) the executable from which the first process was initiated, could be identified as a suspicious behavior. Such suspicious behavior could be identified according to a corresponding identification rule, stating that if a given group of events occurs - it constitutes a suspicious behavior (optionally even in cases where the given group of events is non-anomalous). So, if an event in which one process initiates a second process occurs, and then an event in which the second process attempts to delete the executable from which the first process was initiated occurs - the group of these two events can be marked as a suspicious behavior, as it meets a given identification rule. In some cases, one or more identification rules can be met by a single event; for example, if a certain event occurs in which an attempt is made by a certain entity to modify a given file (for example a certain operating system file), an identification rule can be met and the event can be marked as a suspicious behavior. In other cases, a plurality of identification rules can be met by a given group of one or more events; for example, if a first process initiates deletion of a certain operating system file, and another process, initiated by the first process, initiates deletion of the executable from which the first process was initiated - an identification rule that checks if deletion of an operating system file is attempted can be met, and another identification rule that checks self-delete (a given process initiates another process to delete the executable from which the given process was initiated) can be met.
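By way of non-limiting illustration, the self-delete pattern above can be expressed as a small matching function over the illustrative Event records sketched earlier. This is a hedged sketch only: the event-type strings and the "parent_image" parameter key are assumptions, and an actual engine for user-defined identification rules may be structured entirely differently.

```python
# Illustrative sketch of a user-defined identification rule matching the
# self-delete pattern: process P1 starts P2, and P2 then deletes P1's executable.
from typing import List, Optional


def self_delete_rule(events: List[Event]) -> Optional[List[Event]]:
    """Return the group of events forming a self-delete behavior, if found."""
    for spawn in events:
        if spawn.event_type != "process_start":
            continue
        parent_image = spawn.params.get("parent_image")  # executable of the initiator
        child = spawn.target                             # the newly started process
        for delete in events:
            if (delete.event_type == "file_delete"
                    and delete.initiator == child
                    and delete.target == parent_image
                    and delete.timestamp >= spawn.timestamp):
                return [spawn, delete]  # this group constitutes a suspicious behavior
    return None
```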
In some cases, a suspicious behavior can be a behavior that occurs on more than one endpoint. For example, a certain suspicious behavior can start by a given event, or group of events, occurring on a given endpoint, and then proceeding on one or more other endpoints. So, if on a given endpoint, a given event is detected, in which a process sends a given executable file to another endpoint, and then, on the other endpoint an event is detected in which the executable is executed on a given process, and then another event is detected on the other endpoint, in which the executed executable attempts to change certain permissions - this can be identified as a suspicious behavior occurring on multiple endpoints, in accordance with one or more identification rules.
Having described the above examples, and in order to re-generalize the operation of the threat detection system 300, information of events is obtained from a plurality of endpoints (e.g. endpoints 100-a, ..., 100-n, utilizing the agents installed respectively thereon), and the threat detection system 300 identifies, using the identification rules (defining what events or what group of events constitute a suspicious behavior, that is not necessarily an anomaly), suspicious behaviors based on the obtained events.
It is to be noted that in some cases, the identification of at least part of the suspicious behaviors can be performed by the agents themselves, on the endpoints. In such cases, the agents can be configured to locally identify suspicious behaviors, using the identification rules, and provide the information of any identified suspicious behaviors (including the information of the events forming the suspicious behaviors) to the threat detection system 300.
As indicated herein, the threat detection system 300 can be configured to continuously identify suspicious behaviors (based on the information of events obtained from the endpoints and/or by receiving information of identified suspicious behaviors from the endpoints). In such cases, every pre-determined time period (e.g. thirty seconds, one minute, ten minutes, one hour, one day, etc.), or upon certain criteria being met (e.g. more than a certain number of suspicious behaviors were identified, etc.), the threat detection system 300 can be configured to classify, using a suspicious behaviors classifier 115, a given set of suspicious behaviors, optionally including at least one suspicious behavior that is non-anomalous (i.e. a suspicious behavior that is comprised of a non-anomalous group of events), as malicious or non-malicious. In those cases where the suspicious behaviors classifier 115 classifies the given set of suspicious behaviors as malicious, the threat detection system 300 can be configured to provide an alert to a user of the threat detection system 300 (e.g. a security analyst, etc.).
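By way of non-limiting illustration, the periodic trigger described above could be organized roughly as in the following single-threaded Python sketch, where the classify and alert callables are placeholders standing in for the suspicious behaviors classifier 115 and the alerting path; neither the interval nor the count threshold is mandated by the disclosure.

```python
# Simplified sketch of the continuous classification trigger: classify the
# pending set of suspicious behaviors every interval, or earlier once enough
# behaviors have accumulated. classify() and alert() are placeholders.
import time
from typing import Callable, List


def detection_loop(pending: List[object],
                   classify: Callable[[List[object]], str],
                   alert: Callable[[List[object]], None],
                   interval_s: float = 60.0,
                   max_pending: int = 100) -> None:
    last_run = time.monotonic()
    while True:
        now = time.monotonic()
        if pending and (now - last_run >= interval_s or len(pending) >= max_pending):
            batch, pending[:] = list(pending), []   # take and clear the current set
            if classify(batch) == "malicious":      # verdict from classifier 115
                alert(batch)                        # e.g. notify a security analyst
            last_run = now
        time.sleep(1.0)                             # avoid busy-waiting
```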
Turning to Fig. 2, there is shown a block diagram schematically illustrating one example of an endpoint, in accordance with the presently disclosed subject matter.
According to certain examples of the presently disclosed subject matter, each endpoint 100 can comprise an endpoint network interface 220 enabling connecting the endpoint 100 to a communication network (being a communication network monitored by the threat detection system 300, such as an organizational communication network, and/or the Internet) and enabling it to send and receive data sent thereto through the communication network, including sending information of local events and/or identified suspicious behaviors and receiving alerts to be provided to the users of the endpoint, as detailed herein, inter alia with reference to Figs. 3-5.
Each endpoint 100 can further comprise or be otherwise associated with an endpoint data repository 230 (e.g. a database, a storage system, a memory including Read Only Memory - ROM, Random Access Memory - RAM, or any other type of memory, etc.) configured to store data, including, inter alia, information of local events, and optionally also suspicious behaviors identification rules, etc. In some cases, endpoint data repository 230 can be further configured to enable retrieval and/or update and/or deletion of the stored data. It is to be noted that in some cases, endpoint data repository 230 can be distributed.
Each endpoint 100 further comprises an endpoint processing resource 210. Endpoint processing resource 210 can be one or more processing units (e.g. central processing units), microprocessors, microcontrollers (e.g. microcontroller units (MCUs)) or any other computing devices or modules, including multiple and/or parallel and/or distributed processing units, which are adapted to independently or cooperatively process data for controlling relevant endpoint 100 resources and for enabling operations related to endpoint 100 resources.
The processing resource comprises an agent 205, comprising one or more of the following modules: endpoint suspicious behaviors identification module 240, event collection module 250 and endpoint alert module 260.
Event collection module 250 can be configured to obtain information of local events occurring on the endpoint 100, and provide such information to the endpoint suspicious behaviors identification module 240, and/or to the threat detection system 300, for identification of suspicious behaviors. Endpoint suspicious behaviors identification module 240 can be configured to obtain information of events from the event collection module 250 and to identify suspicious behaviors utilizing suspicious behaviors identification rules, as further detailed herein, inter alia with reference to Fig. 4.
Endpoint alert module 260 can be configured to provide alerts to a user of the endpoint 100, for example of a malicious behavior being identified, as further detailed herein, inter alia with reference to Fig. 5.
Fig. 3 is a block diagram schematically illustrating one example of a threat detection system, in accordance with the presently disclosed subject matter.
According to certain examples of the presently disclosed subject matter, threat detection system 300 can comprise a network interface 320 enabling connecting the threat detection system 300 to a communication network (being a communication network monitored by the threat detection system 300, such as an organizational communication network, and/or the Internet) and enabling it to send and receive data sent thereto through the communication network, including receiving information of events and/or identified suspicious behaviors from endpoints (e.g. endpoint 100-a, ..., endpoint 100-n) and sending alerts to be provided to the users of the endpoints, as detailed herein, inter alia with reference to Figs. 3-5.
Threat detection system 300 can further comprise or be otherwise associated with a data repository 330 (e.g. a database, a storage system, a memory including Read Only Memory - ROM, Random Access Memory - RAM, or any other type of memory, etc.) configured to store data, including, inter alia, information of events, suspicious behaviors, etc. In some cases, data repository 330 can be further configured to enable retrieval and/or update and/or deletion of the stored data. It is to be noted that in some cases, data repository 330 can be distributed.
Threat detection system 300 further comprises a processing resource 310. Processing resource 310 can be one or more processing units (e.g. central processing units), microprocessors, microcontrollers (e.g. microcontroller units (MCUs)) or any other computing devices or modules, including multiple and/or parallel and/or distributed processing units, which are adapted to independently or cooperatively process data for controlling relevant threat detection system 300 resources and for enabling operations related to threat detection system 300 resources. The processing resource comprises one or more of the following modules: suspicious behaviors identification module 340, classification module 350 and threat detection system alert module 360.
Suspicious behaviors identification module 340 can be configured to obtain information of events from the endpoints (e.g. endpoint 100-a, ..., endpoint 100-n) and to identify suspicious behaviors utilizing suspicious behaviors identification rules, as further detailed herein, inter alia with reference to Fig. 4.
Classification module 350 can be configured to classify, using a suspicious behaviors classifier 115, a given set of suspicious behaviors, as malicious or non-malicious, as further detailed herein, inter alia with reference to Fig. 5.
Threat detection system alert module 360 can be configured to provide alerts to a user of the threat detection system 300 (e.g. a security analyst, etc.), and/or to the endpoint alert modules 260 of the endpoints (e.g. endpoint 100-a, ..., endpoint 100-n), for example of a malicious behavior being identified, as further detailed herein, inter alia with reference to Fig. 5.
Attention is now drawn to Fig. 4, being a flowchart illustrating one example of a sequence of operations carried out for identifying suspicious behaviors for classification by a classifier, in accordance with the presently disclosed subject matter.
According to certain examples of the presently disclosed subject matter, threat detection system 300 can be configured to perform a suspicious behaviors identification process 400.
For this purpose, threat detection system 300 can be configured to obtain information of a plurality of detected events (block 410). As detailed with respect to Fig. 1, the information can be obtained from agents 205, installed on endpoints (e.g. endpoint 100-a, ..., 100-n), that collect, utilizing event collection modules 250, information relating to local events detected thereby (i.e. events that took place on the respective endpoint 100) and send such information to the threat detection system 300. The information can include, besides the type of detected event, information about the entity triggering the event, information about the entity on which the event was executed, and optionally additional/alternative parameters. The additional parameters, or some of them, can depend on the event type. For example, for a "new user created" event, the additional parameters can include all, or part, of the user attributes. For a "new file created" event, the additional parameters can include all, or part, of the file attributes. For a "registry value delete" event, the additional parameters can include the value deleted from the registry. For a "network connect" event, the additional parameters can include the Internet Protocol (IP) address and port of the connection. For a "process execution from a suspicious path" event, the additional parameter can include the directory path.
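By way of non-limiting illustration, the event-type-dependent parameters enumerated above might be selected as in the following sketch; the dictionary keys and the raw-event mapping are assumptions chosen to mirror the examples in the text.

```python
# Illustrative sketch: pick the extra parameters relevant to each event type.
# Key names are assumptions mirroring the examples in the text.
from typing import Dict, Tuple

PARAMS_BY_TYPE: Dict[str, Tuple[str, ...]] = {
    "new_user_created": ("user_attributes",),
    "new_file_created": ("file_attributes",),
    "registry_value_delete": ("deleted_value",),
    "network_connect": ("ip_address", "port"),
    "process_execution_suspicious_path": ("directory_path",),
}


def collect_params(event_type: str, raw: Dict[str, str]) -> Dict[str, str]:
    """Return only the parameters relevant to the given event type."""
    wanted = PARAMS_BY_TYPE.get(event_type, ())
    return {key: raw[key] for key in wanted if key in raw}


# Example usage:
extra = collect_params("network_connect", {"ip_address": "10.0.0.7", "port": "443"})
```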
It is to be noted that in some cases, the agents 205 can be configured to send information relating to all events performed by a processing resource of the endpoint on which they are installed (whether on the operating system kernel, or on the user space). In other cases, the agent 205 can be configured to send information relating to only part of the events performed by an endpoint processing resource 210 of the endpoint on which it is installed (e.g. only a certain type of events, only events occurring on a certain resource of the endpoint, or any other subset of the events performed by a processing resource of the endpoint on which it is installed, for example, in accordance with certain rules).
Utilizing the obtained information, optionally along with past information relating to past events (that can be stored on and retrieved from, for example, data repository 330), threat detection system 300 can be configured to identify, utilizing suspicious behaviors identification module 340, one or more suspicious behaviors, each comprised of an event, or a plurality of events (block 420). For this purpose, threat detection system 300 obtains the information of the local events provided by the various endpoints (e.g. endpoints 100-a, ..., 100-n), and identifies groups of events (each group can comprise one or more events) that occurred on one or more endpoints (e.g. endpoints 100-a, ..., 100-n), where each group meets a certain identification rule (e.g. a predetermined user-defined identification rule). Optionally, at least one of the identified suspicious behaviors is non-anomalous (i.e. a suspicious behavior that is comprised of a non-anomalous group of events). It is to be noted that in some cases, events flow into the threat detection system 300 continuously and the threat detection system 300 is configured to continuously identify suspicious behaviors out of the events flowing in. As indicated above, in some cases, at least some of the events comprising a suspicious behavior are inter-related events.
It is to be noted that the threat detection system 300 can identify suspicious behaviors that occur on a single endpoint (i.e. all events within such suspicious behavior occur on the single endpoint) and/or suspicious behaviors that occur on more than one endpoint (i.e. the events within such suspicious behaviors originate from more than one endpoint).
It is to be further noted, as indicated herein, that in some cases, the identification of at least part of the suspicious behaviors can be performed by the agents 205 themselves, on the endpoints. In such cases, the agents 205 can be configured to locally identify, utilizing endpoint suspicious behaviors identification modules 240, suspicious behaviors, using the identification rules, and provide the information of any identified suspicious behaviors (including the information of the events forming the suspicious behaviors) to the threat detection system 300. Naturally, the agents 205 can identify suspicious behaviors that occur on the endpoint on which they are installed (i.e. all events within such suspicious behavior occur on the single endpoint), and not cross-endpoint suspicious behaviors.
After identifying the suspicious behaviors (optionally including at least one non-anomalous suspicious behavior), threat detection system 300 provides a given set of suspicious behaviors to a classifier, for classification thereof as malicious or non-malicious, as further detailed herein, inter alia with reference to Fig. 5 (block 430). In some cases, the given set of suspicious behaviors includes all suspicious behaviors identified by the threat detection system 300 (at block 420) and/or by the agents 205 (as detailed above). In other cases, the given set of suspicious behaviors includes only a subset of the suspicious behaviors, e.g. according to rules (e.g. suspicious behaviors that occurred during a certain time frame, suspicious behaviors having at least a couple of inter-related events, etc.).
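By way of non-limiting illustration, one such rule, keeping only suspicious behaviors whose events all fall within a single predetermined time frame, could be sketched as follows; the Behavior container is an assumed stand-in for the per-behavior event groups of the earlier sketches, and the thirty-minute window is merely an example.

```python
# Illustrative sketch: build the given set from behaviors whose events all
# occurred within a single thirty-minute window.
from typing import List, NamedTuple


class Behavior(NamedTuple):
    events: list  # the grouped events forming this suspicious behavior


def select_set(behaviors: List[Behavior], window_s: float = 30 * 60) -> List[Behavior]:
    """Keep behaviors whose events fall inside one window starting at the earliest event."""
    stamps = [e.timestamp for b in behaviors for e in b.events]
    if not stamps:
        return []
    start = min(stamps)
    return [b for b in behaviors
            if all(start <= e.timestamp <= start + window_s for e in b.events)]
```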
It is to be noted that the process can be performed in a continuous and/or repeating manner (e.g. by returning to block 410).
It is to be noted that, with reference to Fig. 4, some of the blocks can be integrated into a consolidated block or can be broken down to a few blocks and/or other blocks may be added. It should also be noted that whilst the flow diagram is also described with reference to the system elements that realize them, this is by no means binding, and the blocks can be performed by elements other than those described herein.
Turning to Fig. 5, there is shown a flowchart illustrating one example of a sequence of operations carried out for detecting threats using a classifier, in accordance with the presently disclosed subject matter.
According to certain examples of the presently disclosed subject matter, threat detection system 300 can be configured to perform a threat detection process 500. For this purpose, threat detection system 300 can be configured to provide a classification module 350, being a suspicious behaviors classifier 115, configured to classify a set of a plurality of suspicious behaviors as malicious or non-malicious (block 510). The suspicious behaviors classifier 115 can be generated by performing a supervised machine learning algorithm (e.g. a tree-based algorithm (such as C4.5, random forest, ID3, CHAID, etc.), a neural-network-based algorithm (such as steepest descent, quasi-Newton, Levenberg-Marquardt, etc.), etc.) on two or more training sets. In some cases, the following two training sets are utilized for generating the suspicious behaviors classifier: (a) a malicious training set of a plurality of first behaviors occurring on a controlled malware infected network, and (b) a routine training set of a plurality of second behaviors occurring on a second network. During the training, each set of suspicious behaviors is associated with a plurality of parameters, one of which is an indication of whether the set is part of the malicious training set, or of the routine training set.
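By way of non-limiting illustration, the supervised training step could be sketched as follows, with scikit-learn's RandomForestClassifier standing in for the tree-based algorithms named above. The featurize() helper is a deliberately naive placeholder, since the disclosure does not specify how a set of behaviors is converted into a feature vector.

```python
# Illustrative sketch: supervised training on a malicious training set
# (label 1) and a routine training set (label 0). featurize() is a
# deliberately naive placeholder for an unspecified feature scheme.
from typing import List
from sklearn.ensemble import RandomForestClassifier


def featurize(behavior_set) -> List[float]:
    # Placeholder features: counts of behaviors, events, and distinct endpoints.
    events = [e for b in behavior_set for e in b.events]
    return [len(behavior_set), len(events), len({e.endpoint_id for e in events})]


def train(malicious_sets, routine_sets) -> RandomForestClassifier:
    X = [featurize(s) for s in malicious_sets + routine_sets]
    y = [1] * len(malicious_sets) + [0] * len(routine_sets)  # 1 = malicious
    return RandomForestClassifier(n_estimators=100).fit(X, y)
```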
In some cases, the second network, from which the routine training set is obtained, is an organizational network, or a certain part of an organizational network, of the organization on which the threat detection system 300 is operating. This enables training the suspicious behaviors classifier 115 using real-life events data of routine events occurring on the organizational network of the organization on which the threat detection system 300 is operating.
The suspicious behaviors classifier 115 can optionally be periodically or arbitrarily re-trained utilizing at least one of: (a) an updated malicious training set of a plurality of first behaviors occurring on the controlled malware infected network infected with a new malware, or (b) a new routine training set of a plurality of second behaviors occurring on the second network.
It is to be noted that during training and retraining of the suspicious behaviors classifier 115, the first behaviors and the second behaviors can be identified in accordance with one or more identification rules applied on a plurality of events detected by agents 205 installed on one or more training endpoints connected to the controlled malware infected network and to the second network, correspondingly. This is a similar process to identifying suspicious behaviors during operation of the threat detection system 300, as detailed herein, inter alia with reference to Fig. 4.
Having described the suspicious behaviors classifier 115, attention is drawn back to the threat detection process 500. Threat detection system 300 can be further configured to obtain information of a given set of a plurality of suspicious behaviors to be classified as malicious or non-malicious by the suspicious behaviors classifier 115 (block 520). Each suspicious behavior of the set comprises information of a corresponding group including one or more grouped events, grouped from a plurality of detected events (detected by the agents 205 of the endpoints 100) that occurred on one or more endpoints 100 connected to a network, optionally an organizational network of the organization on which the threat detection system 300 is operating. In some cases, at least one of the groups includes multiple events, including two or more of the detected events (detected by the agents 205 of the endpoints 100). In some cases, at least one of the suspicious behaviors is non-anomalous (i.e. a suspicious behavior that is comprised of a non-anomalous group of events). A more detailed explanation of the process of obtaining the information of a given set of a plurality of suspicious behaviors to be classified as malicious or non-malicious by the suspicious behaviors classifier 115 is provided herein, inter alia with reference to Fig. 4.
In some cases, the plurality of suspicious behaviors of the given set meet one or more of the following: (a) occur within a predetermined time frame, or (b) include a sequence of behaviors where at least one event of each behavior is related to at least one other event of at least one other behavior in the sequence, or (c) relate to a common resource. For example, all suspicious behaviors of the given set occurred within a time frame of thirty minutes, and/or at least one event of each suspicious behavior of the given set triggered another event of another suspicious behavior of the given set, and/or at least one event of each suspicious behavior of the given set relates to a common resource (e.g. a certain file or process, etc.).
Having obtained a given set of suspicious behaviors to be classified as malicious or non-malicious by the suspicious behaviors classifier 115, threat detection system 300 can be further configured to classify, utilizing the suspicious behaviors classifier 115, the given set as malicious or non-malicious (block 530). Threat detection system 300 can be configured to check if the suspicious behaviors classifier 115 classifies the given set as malicious or not (block 540). In case the suspicious behaviors classifier 115 classifies the given set as malicious - threat detection system 300 can be configured to alert, utilizing the threat detection system alert module 360, a user of the threat detection system 300 of the classification of the given set as malicious (block 550).
In some cases, threat detection system 300 can also send alerts to one or more specific endpoints (e.g. endpoints 100-a, ..., 100-n) for presenting them, by an endpoint alert module 260, to the users of the respective endpoints. Threat detection system 300 can optionally be further configured to quarantine certain endpoints 100 identified as potentially infected with malware (and/or perform other protective measures thereon) based on analysis of the given set of suspicious behaviors classified as malicious.
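By way of non-limiting illustration, blocks 530-550, together with the optional endpoint alerts and quarantine described above, could be tied together as in the following sketch, which reuses the illustrative featurize() helper from the training sketch; the alert_user, alert_endpoint and quarantine callables are placeholders for alert modules 360 and 260 and for any protective-measure mechanism, none of which is pinned to a specific API by the disclosure.

```python
# Illustrative sketch of blocks 530-550 plus the optional endpoint responses.
def handle_given_set(clf, given_set, alert_user, alert_endpoint, quarantine):
    verdict = clf.predict([featurize(given_set)])[0]   # block 530: classify
    if verdict == 1:                                   # block 540: malicious?
        alert_user(given_set)                          # block 550: alert the analyst
        endpoints = {e.endpoint_id for b in given_set for e in b.events}
        for ep in endpoints:
            alert_endpoint(ep, given_set)  # presented via endpoint alert module 260
            quarantine(ep)                 # optional protective measure
```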
The threat detection process 500 can be performed in a continuous and/or repeating manner (e.g. by returning to block 520).
It is to be noted that, with reference to Fig. 5, some of the blocks can be integrated into a consolidated block or can be broken down to a few blocks and/or other blocks may be added. It should also be noted that whilst the flow diagram is also described with reference to the system elements that realize them, this is by no means binding, and the blocks can be performed by elements other than those described herein.
It is to be understood that the presently disclosed subject matter is not limited in its application to the details set forth in the description contained herein or illustrated in the drawings. The presently disclosed subject matter is capable of other embodiments and of being practiced and carried out in various ways. Hence, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. As such, those skilled in the art will appreciate that the conception upon which this disclosure is based may readily be utilized as a basis for designing other structures, methods, and systems for carrying out the several purposes of the presently disclosed subject matter.
It will also be understood that the system according to the presently disclosed subject matter can be implemented, at least partly, as a suitably programmed computer. Likewise, the presently disclosed subject matter contemplates a computer program being readable by a computer for executing the disclosed method. The presently disclosed subject matter further contemplates a machine-readable memory tangibly embodying a program of instructions executable by the machine for executing the disclosed method.

Claims

CLAIMS:
1. A threat detection system comprising a processing resource configured to:
provide a suspicious behaviors classifier configured to classify a set of a plurality of suspicious behaviors as malicious or non-malicious;
obtain information of a given set of a plurality of suspicious behaviors, each suspicious behavior of the set comprising information of a corresponding group including one or more grouped events, grouped from a plurality of detected events that occurred on one or more endpoints connected to an organizational network, wherein: (a) at least one of the suspicious behaviors of the given set is identified in accordance with one or more user-defined identification rules, (b) at least one of the suspicious behaviors of the given set is non-anomalous, and (c) at least one of the groups includes multiple events including two or more of the detected events; and
classify, utilizing the suspicious behaviors classifier, the given set as malicious or non-malicious.
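Purely as a hypothetical illustration of the claimed data model (the dataclass and field names below are assumptions of this illustration, not claim language), a suspicious behavior holding a corresponding group of grouped events might be represented as:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Event:
        # A single detected event that occurred on an endpoint.
        endpoint_id: str
        event_type: str     # e.g. "process_create" or "file_write"
        timestamp: float
        resource: str       # e.g. a file path or a process image name

    @dataclass
    class SuspiciousBehavior:
        # A group of one or more grouped (inter-related) events; rule_id
        # names the user-defined identification rule that matched, if any.
        rule_id: Optional[str]
        events: List[Event] = field(default_factory=list)

A given set per claim 1 would then simply be a list of SuspiciousBehavior instances, at least one of which holds two or more events.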
2. The threat detection system of claim 1, wherein the grouped events in each group are inter-related events.
3. The threat detection system of claim 1, wherein the processing resource is further configured to alert a user of the threat detection system of the classification of the given set as malicious.
4. The threat detection system of claim 1, wherein the suspicious behaviors classifier is trained utilizing: (a) a malicious training set of a plurality of first behaviors occurring on a controlled malware infected network, and (b) a routine training set of a plurality of second behaviors occurring on a second network.
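As a hedged sketch only, the two-training-set scheme of claim 4 could be realized with any supervised learner; the choice of scikit-learn's RandomForestClassifier, and the assumption that each behavior has already been reduced to a numeric feature vector, are illustrations rather than part of the disclosure:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_suspicious_behaviors_classifier(malicious_features, routine_features):
        # (a) First behaviors from the controlled malware-infected network -> label 1.
        # (b) Second (routine) behaviors from the second network -> label 0.
        X = np.vstack([malicious_features, routine_features])
        y = np.concatenate([
            np.ones(len(malicious_features)),
            np.zeros(len(routine_features)),
        ])
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X, y)
        return clf

Re-training per claim 5 would amount to calling the same routine again with an updated malicious training set and/or a new routine training set.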
5. The threat detection system of claim 4, wherein the suspicious behaviors classifier is re-trained utilizing at least one of: (a) an updated malicious training set of a plurality of third behaviors occurring on the controlled malware infected network infected with a new malware, or (b) a new routine training set of a plurality of fourth behaviors occurring on the second network.
6. The threat detection system of claim 5, wherein the second network is the organizational network.
7. The threat detection system of claim 5, wherein the second network is a part of the organizational network.
8. The threat detection system of claim 4, wherein the first behaviors and the second behaviors are identified in accordance with one or more identification rules applied on a plurality of events detected on one or more training endpoints connected to the controlled malware infected network and to the second network, correspondingly.
9. The threat detection system of claim 1, wherein the detected events are detected by one or more agents, each agent installed on a given endpoint of the endpoints, wherein the agents are configured to provide information of the detected events to the processing resource.
10. The threat detection system of claim 9, wherein at least part of the detected events detected by the agents installed on a corresponding endpoint are not provided to the processing resource, according to filtering criteria.
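A minimal sketch of the agent-side filtering of claim 10, assuming events are plain dictionaries and filtering criteria are predicates over an event (both assumptions of this illustration):

    def should_forward(event: dict, filtering_criteria) -> bool:
        # Forward the event to the processing resource only if no
        # filtering criterion matches it.
        return not any(criterion(event) for criterion in filtering_criteria)

    # Illustrative criteria: drop a high-volume, low-value event type and
    # events from an allow-listed system process (hypothetical examples).
    filtering_criteria = [
        lambda e: e.get("event_type") == "registry_read",
        lambda e: e.get("process") == "trustedinstaller.exe",
    ]

    events = [{"event_type": "process_create", "process": "cmd.exe"},
              {"event_type": "registry_read", "process": "svchost.exe"}]
    forwarded = [e for e in events if should_forward(e, filtering_criteria)]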
11. The threat detection system of claim 9, wherein at least one of the agents installed on a corresponding endpoint identifies a given occurrence of a plurality of given events occurring on the corresponding endpoint as one of the suspicious behaviors.
12. The threat detection system of claim 9, wherein at least some of the detected events are detected on a kernel of an operating system of the given endpoint.
13. The threat detection system of claim 1, wherein the plurality of suspicious behaviors of the given set meet one or more of the following: (a) occur within a predetermined time frame, or (b) include a sequence of behaviors where at least one event of each behavior is related to at least one other event of at least one other behavior in the sequence, or (c) relate to a common resource.
14. The threat detection system of claim 13, wherein the relation between the event and the other event is that the other event is executed by the event.
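As an illustrative sketch of criteria (a) and (c) of claim 13 (criterion (b), the execution relation elaborated in claim 14, would additionally require a parent/child link between events and is omitted here), assuming each behavior has been reduced to a list of (timestamp, resource) pairs:

    from typing import List, Set, Tuple

    Behavior = List[Tuple[float, str]]  # (timestamp, resource) of each grouped event

    def meets_set_criteria(behaviors: List[Behavior],
                           time_frame_sec: float = 600.0) -> bool:
        # (a) All events of all behaviors occur within a predetermined time
        # frame, or (c) the behaviors relate to a common resource.
        if not behaviors:
            return False
        timestamps = [ts for b in behaviors for ts, _ in b]
        within_frame = (max(timestamps) - min(timestamps)) <= time_frame_sec
        resource_sets: List[Set[str]] = [{res for _, res in b} for b in behaviors]
        common_resource = bool(set.intersection(*resource_sets))
        return within_frame or common_resource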
15. A threat detection method comprising:
providing a suspicious behaviors classifier configured to classify a set of a plurality of suspicious behaviors as malicious or non-malicious;
obtaining, by a processor, information of a given set of a plurality of suspicious behaviors, each suspicious behavior of the set comprises information of a corresponding group including one or more grouped events, grouped from a plurality of detected events that occurred on one or more endpoints connected to an organizational network, wherein: (a) at least one of the suspicious behaviors of the given set is identified in accordance with one or more user-defined identification rules, (b) at least one of the suspicious behaviors of the given set is non-anomalous, and (c) at least one of the groups includes multiple events including two or more of the detected events; and
classifying, by the processor, utilizing the suspicious behaviors classifier, the given set as malicious or non-malicious.
16. The threat detection method of claim 15, wherein the grouped events in each group are inter-related events.
17. The threat detection method of claim 15, further comprising alerting a user of the threat detection system of the classification of the given set as malicious.
18. The threat detection method of claim 15, wherein the suspicious behaviors classifier is trained utilizing: (a) a malicious training set of a plurality of first behaviors occurring on a controlled malware infected network, and (b) a routine training set of a plurality of second behaviors occurring on a second network.
19. The threat detection method of claim 18, wherein the suspicious behaviors classifier is re-trained utilizing at least one of: (a) an updated malicious training set of a plurality of third behaviors occurring on the controlled malware infected network infected with a new malware, or (b) a new routine training set of a plurality of fourth behaviors occurring on the second network.
20. The threat detection method of claim 18, wherein the second network is the organizational network.
21. The threat detection method of claim 18, wherein the second network is a part of the organizational network.
22. The threat detection method of claim 18, wherein the first behaviors and the second behaviors are identified in accordance with one or more identification rules applied on a plurality of events detected on one or more training endpoints connected to the controlled malware infected network and to the second network, correspondingly.
23. The threat detection method of claim 15, wherein the detected events are detected by one or more agents, each agent installed on a given endpoint of the endpoints, wherein the agents are configured to provide information of the detected events to the processing resource.
24. The threat detection method of claim 23, wherein at least part of the detected events detected by the agents installed on a corresponding endpoint are not provided to the processing resource, according to filtering criteria.
25. The threat detection method of claim 23, wherein at least one of the agents installed on a corresponding endpoint identifies a given occurrence of a plurality of given events occurring on the corresponding endpoint as one of the suspicious behaviors.
26. The threat detection method of claim 23, wherein at least some of the detected events are detected on a kernel of an operating system of the given endpoint.
27. The threat detection method of claim 15, wherein the plurality of suspicious behaviors of the given set meet one or more of the following: (a) occur within a predetermined time frame, or (b) include a sequence of behaviors where at least one event of each behavior is related to at least one other event of at least one other behavior in the sequence, or (c) relate to a common resource.
28. The threat detection method of claim 27, wherein the relation between the event and the other event is that the other event is executed by the event.
29. A non-transitory computer readable storage medium having computer readable program code embodied therewith, the computer readable program code, executable by at least one processor of a computer to perform a method comprising: providing a suspicious behaviors classifier configured to classify a set of a plurality of suspicious behaviors as malicious or non-malicious;
obtaining, by a processor, information of a given set of a plurality of suspicious behaviors, each suspicious behavior of the set comprises information of a corresponding group including one or more grouped events, grouped from a plurality of detected events that occurred on one or more endpoints connected to an organizational network, wherein: (a) at least one of the suspicious behaviors of the given set is identified in accordance with one or more user-defined identification rules, (b) at least one of the suspicious behaviors of the given set is non-anomalous, and (c) at least one of the groups includes multiple events including two or more of the detected events; and
classifying, by a processor, utilizing the suspicious behaviors classifier, the given set as malicious or non-malicious.
PCT/IL2018/050892 2017-08-14 2018-08-12 Cyber threat detection system and method WO2019035120A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IL253987 2017-08-14
IL253987A IL253987B (en) 2017-08-14 2017-08-14 Cyber threat detection system and method

Publications (1)

Publication Number Publication Date
WO2019035120A1 true WO2019035120A1 (en) 2019-02-21

Family

ID=61866874

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2018/050892 WO2019035120A1 (en) 2017-08-14 2018-08-12 Cyber threat detection system and method

Country Status (2)

Country Link
IL (1) IL253987B (en)
WO (1) WO2019035120A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130305377A1 (en) * 2002-10-23 2013-11-14 Frederick S.M. Herz Sdi-scam
US20170200004A1 (en) * 2011-12-02 2017-07-13 Invincea, Inc. Methods and apparatus for control and detection of malicious content using a sandbox environment
US20170063912A1 (en) * 2015-08-31 2017-03-02 Splunk Inc. Event mini-graphs in data intake stage of machine data processing platform

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2593509A (en) * 2020-03-25 2021-09-29 British Telecomm Computer vulnerability identification
CN113746781A (en) * 2020-05-28 2021-12-03 深信服科技股份有限公司 Network security detection method, device, equipment and readable storage medium
US11562069B2 (en) 2020-07-10 2023-01-24 Kyndryl, Inc. Block-based anomaly detection
WO2022248906A1 (en) * 2021-05-24 2022-12-01 Nokia Solutions And Networks Oy Detecting manipulative network functions
CN114780810A (en) * 2022-04-22 2022-07-22 中国电信股份有限公司 Data processing method, data processing device, storage medium and electronic equipment
CN114780810B (en) * 2022-04-22 2024-02-27 中国电信股份有限公司 Data processing method and device, storage medium and electronic equipment
US11647040B1 (en) * 2022-07-14 2023-05-09 Tenable, Inc. Vulnerability scanning of a remote file system
CN117034261A (en) * 2023-10-08 2023-11-10 深圳安天网络安全技术有限公司 Exception detection method and device based on identifier, medium and electronic equipment
CN117034261B (en) * 2023-10-08 2023-12-08 深圳安天网络安全技术有限公司 Exception detection method and device based on identifier, medium and electronic equipment

Also Published As

Publication number Publication date
IL253987A0 (en) 2017-10-01
IL253987B (en) 2019-05-30

Similar Documents

Publication Publication Date Title
WO2019035120A1 (en) Cyber threat detection system and method
US20210273949A1 (en) Treating Data Flows Differently Based on Level of Interest
US20240064168A1 (en) Incorporating software-as-a-service data into a cyber threat defense system
CN108040493B (en) Method and apparatus for detecting security incidents based on low confidence security events
US9401924B2 (en) Monitoring operational activities in networks and detecting potential network intrusions and misuses
Li Using genetic algorithm for network intrusion detection
US7752665B1 (en) Detecting probes and scans over high-bandwidth, long-term, incomplete network traffic information using limited memory
NL2002694C2 (en) Method and system for alert classification in a computer network.
US20220224721A1 (en) Ordering security incidents using alert diversity
US20230012220A1 (en) Method for determining likely malicious behavior based on abnormal behavior pattern comparison
US20150358292A1 (en) Network security management
Uppal et al. An overview of intrusion detection system (IDS) along with its commonly used techniques and classifications
CN110618977B (en) Login anomaly detection method, device, storage medium and computer equipment
Sallay et al. Intrusion detection alert management for high‐speed networks: current researches and applications
WO2023163820A1 (en) Graph-based analysis of security incidents
Barhoom et al. Adaptive worm detection model based on multi classifiers
CN114584391B (en) Method, device, equipment and storage medium for generating abnormal flow processing strategy
US20230275908A1 (en) Thumbprinting security incidents via graph embeddings
CN117376030B (en) Flow anomaly detection method, device, computer equipment and readable storage medium
US20230403294A1 (en) Cyber security restoration engine
US20230275907A1 (en) Graph-based techniques for security incident matching
Mahmoud et al. A hybrid snort-negative selection network intrusion detection technique
Zope et al. Event correlation in network security to reduce false positive
Ferragut et al. Detection of anomalous events
Alsajri et al. Intrusion Detection System Based on Machine Learning Algorithms:(SVM and Genetic Algorithm)

Legal Events

Date Code Title Description
DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18845553

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18845553

Country of ref document: EP

Kind code of ref document: A1