US9977895B2 - Malicious software identification integrating behavioral analytics and hardware events - Google Patents

Malicious software identification integrating behavioral analytics and hardware events Download PDF

Info

Publication number
US9977895B2
Authority
US
United States
Prior art keywords
tier
predetermined selection
calls
module
kernel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US14/670,721
Other versions
US20150281267A1 (en)
Inventor
John J. Danahy
Ryan J. Berg
Kirk R. Swidowski
Stephen C. Carlucci
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CYLENT SYSTEMS Inc
Alert Logic Inc
Original Assignee
Barkly Protects Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Barkly Protects Inc filed Critical Barkly Protects Inc
Priority to US14/670,721 priority Critical patent/US9977895B2/en
Assigned to CYLENT SYSTEMS, INC. reassignment CYLENT SYSTEMS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BERG, RYAN J., SWIDOWSKI, KIRK R., DANAHY, JOHN J., CARLUCCI, STEPHEN C.
Publication of US20150281267A1 publication Critical patent/US20150281267A1/en
Assigned to BARKLY PROTECTS, INC. reassignment BARKLY PROTECTS, INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: CYLENT SYSTEMS, INC.
Priority to US15/095,607 priority patent/US9589132B2/en
Priority to US15/283,910 priority patent/US9733976B2/en
Priority to US15/853,795 priority patent/US10078752B2/en
Application granted granted Critical
Publication of US9977895B2 publication Critical patent/US9977895B2/en
Priority to US16/131,894 priority patent/US10460104B2/en
Assigned to SILICON VALLEY BANK reassignment SILICON VALLEY BANK SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BARKLY PROTECTS, INC.
Assigned to ALERT LOGIC, INC. reassignment ALERT LOGIC, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BARKLY PROTECTS, INC.
Assigned to BARKLY PROTECTS, INC. reassignment BARKLY PROTECTS, INC. TERMINATION AND RELEASE OF INTELLECTUAL PROPERTY SECURITY AGREEMENT Assignors: SILICON VALLEY BANK
Assigned to PACIFIC WESTERN BANK reassignment PACIFIC WESTERN BANK SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ALERT LOGIC, INC.
Assigned to ALERT LOGIC, INC. reassignment ALERT LOGIC, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: PACIFIC WESTERN BANK
Assigned to JEFFERIES FINANCE LLC, AS COLLATERAL AGENT reassignment JEFFERIES FINANCE LLC, AS COLLATERAL AGENT FIRST LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT Assignors: ALERT LOGIC, INC.
Assigned to GOLUB CAPITAL MARKETS LLC, AS COLLATERAL AGENT reassignment GOLUB CAPITAL MARKETS LLC, AS COLLATERAL AGENT SECOND LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT Assignors: ALERT LOGIC, INC.

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/52Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
    • G06F21/53Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow by executing in a restricted environment, e.g. sandbox or secure virtual machine
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55Detecting local intrusion or implementing counter-measures
    • G06F21/554Detecting local intrusion or implementing counter-measures involving event detection and direct action
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55Detecting local intrusion or implementing counter-measures
    • G06F21/56Computer malware detection or handling, e.g. anti-virus arrangements
    • G06F21/562Static detection
    • G06F21/563Static detection by source code analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55Detecting local intrusion or implementing counter-measures
    • G06F21/56Computer malware detection or handling, e.g. anti-virus arrangements
    • G06F21/566Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/14Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1416Event detection, e.g. attack signature detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45587Isolation or security of virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45591Monitoring or debugging support
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/03Indexing scheme relating to G06F21/50, monitoring users, programs or devices to maintain the integrity of platforms
    • G06F2221/034Test or assess a computer or a system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2101Auditing as a secondary aspect
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/14Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1433Vulnerability analysis
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/14Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441Countermeasures against malicious traffic

Definitions

  • This invention relates to computer system security, and more particularly, to a system and method for autonomously identifying and disrupting multiple forms of malicious software attacks through the correlation of hardware, operating system, and user space events.
  • a mix of high false positives, complex management, unacceptable performance load, and a lack of automatic responses has critically reduced the efficacy and adoption of the security technologies currently in use at the endpoint.
  • These technologies include anti-virus and malicious code detection products, network and host-based monitoring agents, and traditional host-based IPS and IDS technologies. These technologies are focused on detecting malware and automated attack mechanisms by recognizing direct representations (signatures) of known attack payloads, or by identifying a limited base of inappropriate or unauthorized actions. These approaches have proven increasingly ineffective as attackers use techniques such as polymorphism to change the appearance of attacks and increase their use of zero-day attacks, for which no signatures exist.
  • endpoint security technologies instead provide monitoring data to human interpreters and remote data aggregation suites, from which attack identification and response decisions are made. The latency between an attack, its detection, and its disruption or mitigation often amounts to months. Skilled individuals capable of recognizing attack patterns, and infrastructures capable of supporting them, also come at a high cost, making them impractical for all but the largest organizations.
  • One aspect of the invention includes a security system for securing and responding to security threats in a computer having a Central Processing Unit (CPU), a Kernel/Operating System, and a plurality of software applications.
  • the system includes one or more low-level data collector modules configured to intercept a predetermined selection of first tier calls between the CPU and Kernel/OS, and to store identifying information pertaining to the intermediated first tier calls, i.e., first tier call IDs, in a data store.
  • One or more Kernel Modules are configured to intermediate a predetermined selection of second tier calls between applications/users as they are interpreted by the Kernel/OS and to store identifying information pertaining to the intermediated second tier calls, i.e., second tier call IDs, in the data store.
  • An Analytic Engine aggregates and maps the stored first and second tier call IDs to a rulebase containing patterns of first and second tier call IDs associated with identifiable security threats, to generate a threat analysis.
  • the Analytic Engine selectively enlarges or contracts the predetermined selection of first and second tier calls to respectively increase or decrease specificity of the threat analysis.
  • the Analytic Engine is also configured to take responsive actions in response to the threat analysis.
  • a Management Module is configured to generate user interfaces accessible remotely, e.g., via the Internet, by a user device, to enable a user to update the rulebase and configure the low-level collector module, the Kernel module, and the Analytic Engine.
  • a method for securing and responding to security threats in a computer having a Central Processing Unit (CPU), a Kernel/Operating System, and a plurality of software applications.
  • the method includes intermediating a predetermined selection of first tier calls between the CPU and the Kernel/Operating System, and storing first tier call IDs in a data store.
  • Second tier calls between the Kernel/OS and the applications are intermediated, with second tier call IDs stored in the data store.
  • An Analytic Engine aggregates and maps the stored first and second tier call IDs to a rulebase to generate a threat analysis.
  • the Analytic Engine selectively enlarges or contracts the predetermined selection of first and second tier calls to respectively increase or decrease specificity of said threat analysis.
  • the Analytic Engine also implements responsive actions in response to the threat analysis.
  • a Management Module generates a plurality of user interfaces to enable a user, via a user device, to update the rulebase and configure low-level collector and Kernel modules, and the Analytic Engine.
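The two-tier collection and rulebase-mapping pipeline summarized above can be sketched in a few lines. This is an illustrative reconstruction only, not the patented implementation: the names `CallEvent`, `Rulebase`, and `AnalyticEngine`, the bounded event window, and the ordered-subsequence match are all assumptions made for the sake of the example.

```python
from collections import deque
from dataclasses import dataclass

@dataclass(frozen=True)
class CallEvent:
    tier: int        # 1 = CPU <-> Kernel/OS, 2 = Kernel/OS <-> application
    call_id: str     # identifying information stored by a collector

class Rulebase:
    """Maps named threats to ordered patterns of (tier, call_id) pairs."""
    def __init__(self, patterns):
        self.patterns = patterns   # {threat_name: [(tier, call_id), ...]}

class AnalyticEngine:
    def __init__(self, rulebase, window=32):
        self.rulebase = rulebase
        self.events = deque(maxlen=window)   # bounded buffer keeps overhead low

    def record(self, event):
        """Store a first or second tier call ID from a collector."""
        self.events.append((event.tier, event.call_id))

    def threat_analysis(self):
        """Names of threats whose pattern appears, in order, in the buffer."""
        hits = []
        seq = list(self.events)
        for name, pattern in self.rulebase.patterns.items():
            it = iter(seq)
            # ordered-subsequence match: each step must occur after the last
            if all(step in it for step in pattern):
                hits.append(name)
        return hits
```

Note that detection here keys on the *sequence of behaviors*, not on any byte-level signature, which is the distinction the description draws against conventional anti-virus approaches.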
  • FIG. 1A is a block diagram of one embodiment of a system in accordance with the present invention.
  • FIG. 1B is a block diagram of an alternate embodiment of a system in accordance with the present invention.
  • FIG. 2 is a block diagram of a system of the prior art, during a step in an exemplary malicious attack scenario;
  • FIG. 3 is a block diagram of a system of the prior art, during a step in an exemplary malicious attack scenario;
  • FIG. 4 is a block diagram of a system of the prior art, during a step in an exemplary malicious attack scenario;
  • FIG. 5 is a block diagram of a system of the prior art, during a step in an exemplary malicious attack scenario;
  • FIG. 6 is a block diagram of a system of the prior art, during a step in an exemplary malicious attack scenario;
  • FIG. 7 is a block diagram of a system of the prior art, during a step in an exemplary malicious attack scenario;
  • FIG. 8 is a block diagram of a system of the prior art, during a step in an exemplary malicious attack scenario;
  • FIG. 9 is a block diagram of the embodiment of FIG. 1B, during a step in an exemplary operation during a malicious attack scenario;
  • FIG. 10 is a block diagram of the embodiment of FIG. 1B, during a step in an exemplary operation during a malicious attack scenario;
  • FIG. 11 is a block diagram of the embodiment of FIG. 1B, during a step in an exemplary operation during a malicious attack scenario;
  • FIG. 12 is a block diagram of the embodiment of FIG. 1B, during a step in an exemplary operation during a malicious attack scenario;
  • FIG. 13 is a block diagram of the embodiment of FIG. 1B, during a step in an exemplary operation during a malicious attack scenario;
  • FIG. 14 is a block diagram of the embodiment of FIG. 1B, during a step in an exemplary operation during a malicious attack scenario;
  • FIG. 15 is a block diagram of the embodiment of FIG. 1B, during a step in an exemplary operation during a malicious attack scenario;
  • FIG. 16 is a block diagram of the embodiment of FIG. 1B, during a step in an exemplary operation during a malicious attack scenario.
  • reference to “an analyzer” includes a plurality of such analyzers.
  • reference to “an analysis” includes a plurality of such analyses.
  • the terms “computer” and “user device” are meant to encompass a workstation, personal computer, personal digital assistant (PDA), wireless telephone, or any other suitable computing device including a processor, a computer readable medium upon which computer readable program code (including instructions and/or data) may be disposed, and a user interface.
  • Terms such as “server”, “application”, “engine” and the like are intended to refer to a computer-related component, including hardware, software, and/or software in execution.
  • an engine may be, but is not limited to being, a process running on a processor, a processor including an object, an executable, a thread of execution, a program, and a computer.
  • the various components may be localized on one computer and/or distributed between two or more computers.
  • “real-time” and “on-demand” refer to sensing and responding to external events nearly simultaneously (e.g., within milliseconds or microseconds) with their occurrence, or without intentional delay, given the processing limitations of the system and the time required to accurately respond to the inputs.
  • a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and a computer.
  • an application running on a server and the server can be components.
  • One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers or control devices.
  • the system and method embodying the present invention can be programmed in any suitable language and technology, such as, but not limited to: Assembly Languages; C; C++; Visual Basic; Java; VBScript; JScript; Node.js; ECMAScript; DHTML; XML; and CGI.
  • Alternative versions may be developed using other programming languages, including Hypertext Markup Language (HTML), Active Server Pages (ASP), and JavaScript.
  • Any suitable database technology can be employed, such as, but not limited to, Microsoft SQL Server or IBM AS/400.
  • embodiments of the invention identify undesired process behaviors through high-performance analysis of a unique dataset containing outputs from custom collectors at each level of the computer system. For example, logfile, configuration, and process activity data may be gathered from user space; device driver and operating system information may be gathered from the kernel; and machine-level instruction and interrupt information may be captured or derived from native hardware events. This information is organized into a structure optimized for querying against a local rulebase that contains identifying patterns of common behaviors in malicious software. The result of this analysis is the capability to detect and disrupt the installation or operation of many types of malicious software.
  • These embodiments integrate a discrete set of collector interfaces, configured to gather a limited number of data elements required to satisfy the identification requirements of malicious behaviors defined in the rulebase. By limiting the information gathered and the calls/interfaces intermediated, minimal load is placed on the system, to likewise minimize the performance impact experienced by the users of the system.
  • the approach used in these embodiments validates the positive existence of unauthorized or malicious behavior.
  • this validation is applied to the actions undertaken by active software processes on the system, where the requests, process control, and network connections associated with a software program are monitored to identify specific indicators of potential malicious behavior. These monitored parameters, which may otherwise simply appear anomalous or benign, are then compared to a rulebase of known patterns of malicious behavior, to automatically identify and respond to threats in real time.
  • these embodiments are not merely identifying the signatures of particular viruses or malware, but instead, are broadly characterizing patterns of behavior common to entire classes of assailants, to cast a broader net than conventional approaches, such as described below.
  • This approach yields a new level of substantial certainty which drives confidence in results and the capability to take automatic remediating or mitigating action, without reliance on a human-driven system.
  • Particular embodiments may also recognize patterns associated with non-programmatic, human-driven attacks, in order to act upon those attacks in real time.
  • a newer approach to anomalous and malicious behavior detection is based on virtualization technology, in which entire sessions of user or system behavior are managed inside a virtual container, which separates the actual operation of the system from the operation of processes as perceived by the user.
  • An aspect of this invention is the inventors' recognition that the performance impact, uncertain reliability, and software platform dependence of virtualization approaches render them inappropriate for many users and security applications.
  • a generalized virtualization technology is used to construct a complete virtual image of the system, in which either the entire operating system or some user application is instantiated and run.
  • the virtualizing system is required to maintain state data around most, if not all, calls, data use, and even user interface interaction in order to simulate the expected behavior of the system.
  • the virtualization should also intermediate most, if not all, calls that are capable of existing between the user or process in the virtualized environment.
  • the problems of False Negatives and False Positives are addressed through the unique combination of a ruleset for known malicious behaviors and a new form of information gathering represented by a combination of multi-level collectors and the correlating capabilities of an Analytic Engine.
  • because behavioral data from the collectors is assembled to match known indicators in the rulebase, protection and notification occur regardless of the actual on-disk representation/signature, source, or construction of the executable.
  • when captured data correlates to the patterns of behavior represented in the rulebase, action may be effectively deemed conclusive and directly related to a known bad event.
  • the protection is applied in real time, and in particular embodiments, local to the machine.
  • the negative effects of full virtualization are mitigated by the use of the flexible low-level collector/framework, in which only a relatively small subset of the possible calls need to be examined and intermediated.
  • Such use of only a small subset of calls is possible because of the tiered approach, which will be described in greater detail hereinbelow, to significantly reduce the performance impact of the inventive solution relative to prior art approaches such as the aforementioned virtualization approach.
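The "small subset of calls" idea can be illustrated with a minimal sketch of a pass-through handler that records context only for monitored call numbers and forwards everything else untouched. The call numbers, the `on_call` signature, and the buffering scheme are invented for illustration; a real low-level collector would sit in the interrupt path, not in Python.

```python
# Illustrative first-tier call numbers the rulebase currently cares about
# (hypothetical values, not drawn from the patent).
MONITORED = {0x3C, 0x101, 0x2E}

buffer = []   # lightweight buffering; transformation happens later, in user space

def on_call(call_no, context, passthrough):
    """Record minimal context for monitored calls; always forward the call."""
    if call_no in MONITORED:
        buffer.append((call_no, context))
    return passthrough(call_no, context)
```

Because unmonitored calls take the fast path with no bookkeeping, the performance cost scales with the size of the monitored subset rather than with total call volume.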
  • the present invention includes a method and system for automatically protecting endpoint systems from the effects of attacks and malicious software.
  • a method according to the present invention provides for the identification of malicious and unauthorized behavior in order to trigger appropriate steps for mitigation and disruption.
  • Methods and systems in accordance with the present invention employ new forms of information collection and analysis that are hardware and software agnostic and are capable of informing behavior analytics. These embodiments can further use the result of these analytics to disrupt the attack in real time.
  • data is provided through a selective low level monitoring and data collection technology that operates between the CPU hardware and any existing hypervisor and/or host operating system.
  • This data provides the capability to differentiate between the actual users of a system under attack, the attack that is impersonating an authorized user or process, and the operations that are being undertaken on the system.
  • this technology provides access to system functions while employing real-time analytics that adapt the criteria of the identification activity in order to further distinguish actual attacks from potential false positive reports.
  • the system functions provide both data and operational capabilities, and the resulting information flows inform the assessment of which rules should be applied to the current scenario.
  • the criteria supplied can be organized as a structured rule syntax, extensible by authorized individuals, which is then parsed by the protection mechanism in order to identify new indicators of attack. This information may also be made available to multiple instances of the invention to provide consistency of behavior across multiple, e.g., networked, systems.
  • the structured rule syntax can also be linked to response actions, specific to the identified malicious behavior, in order to provide a flexible means of integrating organizational priorities with the output of the malicious behavioral analysis.
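A structured rule syntax that links a behavior pattern to a response action might look like the sketch below. The `WHEN ... THEN ...` form is entirely hypothetical; the patent describes an extensible rule syntax without publishing its grammar.

```python
def parse_rule(line):
    """Parse a hypothetical 'WHEN id1, id2, ... THEN action' rule
    into (pattern, action)."""
    head, sep, action = line.partition(" THEN ")
    if not sep or not head.startswith("WHEN "):
        raise ValueError(f"malformed rule: {line!r}")
    pattern = [tok.strip() for tok in head[len("WHEN "):].split(",")]
    return pattern, action.strip()
```

Parsing rules from text, rather than compiling them in, is what lets authorized individuals extend the rulebase and distribute it to multiple instances for consistent behavior across networked systems.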
  • Analytic results can be used to immediately interdict attacks in process.
  • the results can also be used to generate real time alerts to users and groups in order to better inform aggregated analysis and organizational security practices.
  • the protection provided is not visible to either local users or processes, through the use of these low-level capabilities.
  • Implementing a separate interface to technical functions such as memory management and process invocation allows the embodiments to selectively respond to requests for data, and to cloak its operation and existence.
  • Control of the rules, response, and versioning of the particular embodiment are also managed through the low-level capabilities of the host computer, through the use of user interfaces generated by an integral management module for display on remotely connected user devices. These interfaces may be configured to perform the functions of event aggregation, trending, and presentation.
  • the information presented may relate to the actual attacks or behaviors disrupted, and may not, as a matter of course, include information which is unrelated to conclusively identified attacks.
  • a thin, machine-level collector is deployed within the interrupt-handling control chain; it intermediates service requests associated with operations and interrupts servicing selected hardware and software, including events triggered by the CPU, operating system, and user space, for the purpose of providing unique context with which to positively correlate user identity, privilege, and process activity.
  • this intermediation minimizes latency and performance load by limiting its functions to simply recognizing the event in a low level collector module associated with that device, and passing the current interrupt context to a lightweight buffering mechanism which stores the data within the memory presently allocated to the low level collector(s). Transformation and processing of this information may be done within user space in order to capitalize on traditional system scheduling and performance optimizations.
  • the data, once gathered, is attributed to one or more classes of malicious behavior, and is used in conjunction with information from other collectors to identify processes or threads that are known to be destructive or unauthorized.
  • the low-level collector(s) employ a selective framework of collectors that is configurable to load only the modules necessary to intermediate events and calls directly related to malicious and unauthorized behaviors from the rulebase. As new malicious behaviors are recognized in research, or as more information is required in the analysis of system attacks, new modules can be transparently loaded and unloaded from this low-level framework, e.g., via a management module.
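Loading only the collector modules the active rules require can be sketched as a set reconciliation, assuming a mapping from call IDs to the modules that can observe them. All names here are hypothetical.

```python
def required_modules(patterns, module_for_call):
    """Minimal set of collector modules needed by the calls named in active rules."""
    return {module_for_call[c] for pattern in patterns.values() for c in pattern}

def reconcile(loaded, needed, load, unload):
    """Transparently load missing modules and unload ones no longer required."""
    for mod in sorted(needed - loaded):
        load(mod)
    for mod in sorted(loaded - needed):
        unload(mod)
```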
  • This implementation includes the creation of the configurable low level collector/framework, which is a real-time mechanism for securely controlling and modifying collector behavior, a language and storage mechanism for rules defining consequent actions, and an active component of analysis capable of translating identified risks into action.
  • Referring to FIG. 1A, a representative computer system onto which an embodiment of the present invention is deployed is shown as system 100 .
  • This system 100 includes a computer having a processor (CPU) 102 , a Kernel/Operating System (Kernel/OS) 104 , a plurality of Applications 106 , and one or more low-level data collector modules 108 , such as in the form of a conventional micro-hypervisor (“microvisor”) configured in accordance with the teachings herein.
  • microvisor refers to a Xen-based security-focused hypervisor that provides micro-virtualization technology to ensure secure computing environments.
  • “VT” refers to Virtualization Technology, and “micro-VMs” are hardware-isolated micro virtual machines.
  • embodiments of the present invention use a microvisor that has been modified to intercept (intermediate) a predetermined selection of calls (first tier calls) between the CPU 102 and Kernel/OS, and to store identifying information pertaining to the intermediated first tier calls (first tier call IDs) in a data store.
  • One or more Kernel Modules 110 are configured to intermediate a predetermined selection of calls (second tier calls) between applications/users as they are interpreted by the Kernel/OS and to store identifying information pertaining to the intermediated second tier calls (second tier call IDs) in the data store.
  • An Analytic Engine 112 aggregates and maps the stored first tier call IDs and second tier call IDs to a rulebase, to generate a threat analysis.
  • the rulebase includes patterns of first tier call IDs and second tier call IDs associated with identifiable security threats.
  • the Analytic Engine is configured to selectively enlarge or contract the predetermined selection (e.g., increase or decrease the number) of first tier calls and second tier calls to respectively increase or decrease specificity of the threat analysis.
  • the Analytic Engine is also configured to take responsive actions in response to the threat analysis.
  • a Management Module 114 is configured to generate a plurality of user interfaces accessible remotely, e.g., via the Internet, by a user computer, to enable a user (e.g., having administrative privileges) to update the rulebase and configure the low-level collector module 108 , the Kernel module 110 , and the Analytic Engine 112 .
  • the Analytic Engine 112 may take any number of actions in response to a detected threat. Non-limiting examples include one or more of (a) process termination, (b) thread termination, (c) event and alert notification and logging, (d) user disablement, (e) network disconnection, and (f) process fingerprinting.
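The enumerated responsive actions lend themselves to a dispatch-table arrangement, sketched below under assumed names; the handlers are stand-in stubs that merely record what a real engine would do.

```python
# Illustrative dispatch table for the responsive actions (a)-(f) above.
log = []

ACTIONS = {
    "terminate_process": lambda pid: log.append(f"killed process {pid}"),
    "terminate_thread": lambda tid: log.append(f"killed thread {tid}"),
    "alert": lambda msg: log.append(f"alert: {msg}"),
    "disable_user": lambda user: log.append(f"disabled {user}"),
    "disconnect_network": lambda _=None: log.append("network disconnected"),
    "fingerprint": lambda pid: log.append(f"fingerprinted {pid}"),
}

def respond(threat_analysis):
    """Apply every (action, argument) pair a threat analysis calls for."""
    for action, arg in threat_analysis:
        ACTIONS[action](arg)

respond([("alert", "process-injection"), ("terminate_process", 4242)])
```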
  • first tier calls include one or more events or calls for activity that would otherwise pass directly between the CPU, hardware devices, and/or the Kernel/Operating System.
  • The predetermined selection of first tier calls represents a relatively small subset of the full range of calls capable of being passed between the CPU 102 and Kernel/OS 104 . The use of such a subset provides the aforementioned benefits, including low processing overhead, increased processing speed, etc.
  • the second tier calls include one or more events or calls for service or data between the applications and the Kernel/Operating System including scheduling and functional service delivery.
  • the predetermined selection of second tier calls also represents a relatively small subset of the full range of calls capable of being passed between the applications and the Kernel/OS.
  • Non-limiting examples of calls includable in the predetermined selection of second tier calls include communications with one or more of a (a) Network Monitor Driver, (b) Registry Monitor Driver, (c) Filesystem Monitor Driver, (d) Process Monitor Driver, and (e) Process Governor Driver.
  • both the rulebase and the data store used to store the first and second tier call IDs are local to system 100 , e.g., disposed in memory associated with Analytic Engine 112 and/or Management Module 114 .
  • the data store and/or the rulebase may be disposed remotely from the system 100 .
  • Referring to FIG. 1B, an alternate embodiment of the present invention is shown as system 100 ′.
  • This system 100 ′ is substantially similar to system 100 ( FIG. 1A ), while also including one or more User Space Modules 120 .
  • the user space module(s) 120 is configured to collect a predetermined selection of user space data associated with the applications, and to store identifying information pertaining to the collected user space data (user space IDs) in the data store.
  • the predetermined selection of user space data represents a relatively small subset of the full range of user space data capable of being generated and/or collected.
  • user space data usable in the predetermined selection of user space data include one or more of (a) Application Mouse Activity, (b) Application Keyboard Activity, (c) System Logfile Activity, and (d) System Registry Fields.
  • the Analytic Engine 112 ′ is substantially similar to Analytic Engine 112 , while also being configured to aggregate and map the user space IDs, along with the first and second tier call IDs, to the rulebase to generate a threat analysis.
  • the rulebase includes patterns of first tier call IDs, second tier call IDs and user space IDs associated with identifiable security threats.
  • the Analytic Engine 112 ′ is configured to selectively enlarge or contract the predetermined selection of first tier calls, the predetermined selection of second tier calls, and/or the predetermined selection of user space data to respectively increase or decrease the specificity of said threat analysis.
  • any of these predetermined selections may be automatically enlarged to increase the specificity of the threat analysis from a base level to one or more escalated levels when the threat analysis identifies a potential security threat. Conversely, any of the predetermined selections may be automatically contracted to decrease the specificity of the threat analysis, e.g., to free up computing resources, from the one or more escalated levels towards the base level once one or more of the aforementioned responsive actions has been implemented.
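The automatic escalation and contraction described above can be sketched as a collector holding a selection that widens from a base level when a threat is suspected and shrinks back once the response completes. The level contents and names below are assumptions for illustration.

```python
# Minimal sketch of enlarging/contracting a predetermined selection of
# monitored calls between a base level and an escalated level.
BASE = ["proc_spawn", "mem_remap"]
ESCALATED = BASE + ["fs_enum", "net_connect", "registry_write"]

class Collector:
    def __init__(self):
        self.selection = list(BASE)

    def escalate(self):
        """Threat suspected: widen the selection to increase specificity."""
        self.selection = list(ESCALATED)

    def relax(self):
        """Response implemented: contract toward the base level to free resources."""
        self.selection = list(BASE)

c = Collector()
c.escalate()
widened = len(c.selection)  # escalated level monitors more call types
c.relax()
```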
  • the implementation integrates the information gathered during previous call examination in order to more narrowly consider the inputs necessary to further investigate potentially damaging attacks.
  • the information collection happens through information passed to lightweight buffers which surface data to higher-level processing and analytic functions operating according to ordinary system scheduling, thereby minimizing the performance impact and visibility of the embodiment.
  • knowledge gained through an observed malicious or unauthorized activity on one system 100 , 100 ′ may be shared among all systems in a network utilizing the embodiments shown and described herein. Observable events are identified locally but may be shared globally increasing the learning efficiency of unrelated systems and preventing the spread of the observed malicious activity. For example, the particular first tier, second tier, and user space IDs associated with particular threats identified by one system 100 , 100 ′, may be added to the rulebase used by other systems 100 , 100 ′ to potentially provide for quicker threat identification by those other systems.
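The sharing of locally learned threat patterns among peer systems might look like the following sketch. Representing a rulebase as a dict of pattern sets, and the `share_threat` function itself, are assumptions; the patent does not prescribe a transport or data format.

```python
# Hedged sketch: propagate a locally identified threat pattern to peer
# systems' rulebases, so an attack observed on one machine can be
# recognized immediately on the others.
def share_threat(local_rulebase, peer_rulebases, name, pattern):
    local_rulebase[name] = frozenset(pattern)
    for peer in peer_rulebases:
        # setdefault avoids clobbering a peer's own entry for the same threat
        peer.setdefault(name, frozenset(pattern))

site_a, site_b, site_c = {}, {}, {}
share_threat(site_a, [site_b, site_c], "browser-remap",
             {"T1:mem_remap", "T2:proc_spawn"})
```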
  • the capability described in the embodiment is defined by real-time knowledge of ongoing system behavior that is identified by characteristics described in the configurable rulebase.
  • the system can also leverage the capability of the embodiments' real-time rulebase modifications to react to information provided by other embodiment systems and presented from foreign systems.
  • the analytic engine may thus receive communication from other data sources that can then be transformed into the appropriate conditions to trigger application of rule changes.
  • the implementation of the rule will include necessary protection behaviors to prevent the advancement or proliferation of an identifiable attack emanating from another machine on the local network.
  • the detection of a particular malware activity on a protected machine may be shared with any locally or remotely accessible systems.
  • Embodiments of the present invention may provide additional preventative behaviors to ensure that the security event on the initial machine is not allowed to further corrupt other adjacent machines.
  • embodiments of the present invention can update both the indicators of a potential attack and the appropriate automated responses that can range from increased monitoring to denial of connections from the foreign exploited system.
  • conventional virus definition updates are based on the premise of a static signature, while the approach described with respect to the instant embodiments is effectively a “hive mind”, in which collective knowledge is shared in real-time so all systems share a collective understanding of potential malicious activity and sources.
  • a method 200 for securing and responding to security threats in a computer having a Central Processing Unit (CPU), a Kernel/Operating System, and a plurality of software applications, includes intermediating 202 , with low-level data collector module 108 , a predetermined selection of first tier calls between the CPU and the Kernel/Operating System, and storing identifying information pertaining to the intermediated first tier calls (first tier call IDs) in a data store.
  • second tier calls are intermediated with kernel module 110 , and identifying information pertaining to the intermediated second tier calls (second tier call IDs) is stored in the data store.
  • Analytic Engine 112 aggregates and maps the stored first tier call IDs and second tier call IDs to a rulebase, to generate a threat analysis.
  • the Analytic Engine selectively enlarges or contracts the predetermined selection of first tier calls and the predetermined selection of second tier calls to respectively increase or decrease specificity of said threat analysis.
  • the Analytic Engine implements one or more of a plurality of responsive actions in response to the threat analysis.
  • the Management Module 114 generates a plurality of user interfaces to enable a user, via a communicably coupled user device, to update the rulebase and configure the low-level collector module 108 , the Kernel module 110 , and the Analytic Engine 112 .
  • step 210 may further include implementing one or more of a plurality of responsive actions including process termination, thread termination, event and alert notification and logging, user disablement, network disconnection, and process fingerprinting.
  • method 200 may further include using user space module 120 to collect a predetermined selection of user space data associated with the applications, and store identifying information pertaining to the collected user space data (user space IDs) in the data store.
  • step 206 may include aggregating and mapping the stored first tier call IDs, second tier call IDs, and the user space IDs to the rulebase, to generate a threat analysis.
  • step 208 may include selectively enlarging or contracting the predetermined selection of first tier calls, the predetermined selection of second tier calls, and/or the predetermined selection of user space data to respectively increase or decrease the specificity of said threat analysis.
  • step 208 may further include automatically enlarging the predetermined selection of first tier calls, the predetermined selection of second tier calls, and/or the predetermined selection of user space data to increase the specificity of said threat analysis from a base level to one or more escalated levels when the threat analysis identifies a potential security threat.
  • step 208 may further include automatically contracting the predetermined selection of first tier calls, the predetermined selection of second tier calls, and/or the predetermined selection of user space data to decrease the specificity of said threat analysis from the one or more escalated levels towards the base level once one or more of the plurality of responsive actions has been implemented.
  • TABLE II:
    214: implement responsive actions including process termination, thread termination, event and alert notification and logging, user disablement, network disconnection, and process fingerprinting
    216: collect and store user space data
    218: aggregate and map first tier call IDs, second tier call IDs, and user space IDs to the rulebase to generate threat analysis
    220: selectively enlarge or contract the selection of first tier calls, second tier calls, and/or the selection of user space data to respectively increase or decrease the specificity of the threat analysis
    222: automatically enlarge the selection from a base level to one or more escalated levels when the threat analysis identifies a potential security threat
    224: automatically contract the selection from the one or more escalated levels towards the base level once responsive action has been implemented
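The method steps above can be sketched end to end as one pass of collection, mapping, response, and selection adjustment. Every name here is an illustrative assumption; step numbers in the comments follow the method description.

```python
# Assumed end-to-end sketch of method 200.
def method_200(first_tier, second_tier, rulebase):
    ids = set(first_tier) | set(second_tier)                  # 202/204: collect & store
    threats = [n for n, p in rulebase.items() if p <= ids]    # 206: aggregate & map
    actions = ["alert"] if threats else []                    # 210/214: respond
    level = "escalated" if threats else "base"                # 220/222: adjust selection
    return threats, actions, level

rb = {"process-injection": frozenset({"mem_remap", "proc_spawn"})}
result = method_200(["mem_remap"], ["proc_spawn"], rb)
```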
  • Referring to FIGS. 2-16 , examples of attacks on a conventional system (Example I, FIGS. 2-8 ) and on a system 100 ′ of the present invention (Example II, FIGS. 9-16 ) are shown and described.
  • the described approach is illuminated through an examination of the behavior of the system in the presence of a common attack type.
  • malicious software 300 injects itself into a running instance of an Internet browser through a vulnerable condition in either the running code of the browser or a third-party plug-in that is running within the browser. This is a common outcome and technique for phishing attack destinations and malicious websites.
  • the goal of this example malicious software is to exfiltrate copies of any PDF files that are accessible to the user of the browser and deposit them on a remote server.
  • the technical example exploit attack technique discussed here is called process injection 302 .
  • a process injection attack takes advantage of a vulnerability affecting an executing program in order to replace actively running legitimate application code with malicious code in order to mask the malicious behavior by running it in the context of an expected or trusted process.
  • the existing process's memory space is remapped with code from the malicious executable, and execution then resumes from the newly mapped malicious program 304 .
  • the new process will now be executing the malicious code, but within the context of the originally targeted legitimate executable.
  • the malicious code attempts to locate the limited functions required by the malicious executable, either by actually attempting their execution while watching for failures, or by searching the import table in the shared memory of the corrupted executable.
  • the code 304 may then attempt to download additional functions or libraries necessary to augment existing functionality within the malicious code in order to load that functionality for use.
  • the goal is the identification and exfiltration of all accessible PDF files.
  • the malicious code will attempt to identify and enumerate all physical and logical drives that are accessible to the trusted context of the corrupted process.
  • This list is then searched for occurrences of the file type PDF 306 .
  • the malicious code uses the existing HTTP capability of the corrupted Internet browser to send 308 the contents of that file to the address of a receiving malicious site using standard HTTP POST or GET requests.
  • system 100 ′ initiates components in various sections of the system.
  • the low-level collector (e.g. microvisor) 108 intercepts the lowest level commands passing between the kernel 104 of the operating system and the actual hardware (e.g., CPU) 102 .
  • the microvisor 108 acts both as observer to inform actions, and as defender of the entire system 100 ′.
  • the kernel modules 110 gather information about ongoing system operation, and the analytic engine 112 ′ relates alerts to rules and actions. These components are all managed by the management module 114 .
  • Once so configured as shown in the figure, system 100 ′ operates so that, at the time of the creation of a new process by the Internet browser, the virtual address space of critical or identified processes is intermediated by one of the low level collectors 108 and the Kernel/OS module 110 .
  • among the default rules within the system 100 ′ is a rule to observe Internet browsers for certain activities, like spawning new processes.
  • the collector interacts with a kernel level memory subsystem watching process level loading and unloading of memory-mapped regions and passes the captured intermediated call information to the Analytics Engine, where both the existing and new processes invoked are named, a hashed fingerprint of the virtual memory space is taken, and any changes to critical code or data pages are captured. For example, remapping a new process is an unusual operation from within a browser, and as a result, particular embodiments include a default kernel module rule that identifies it. When the process clears the existing browser context and replaces it with new code (e.g., with the malicious code), the kernel module 110 identifies the behavior and sends an alert which includes the process identification and behavior to the analytic engine 112 ′.
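The hashed fingerprint of a process's virtual memory space mentioned above can be illustrated with a simple sketch. A real collector would read live memory-mapped regions; here the regions are stand-in byte strings, and the two-level SHA-256 construction is an assumption.

```python
# Illustrative fingerprinting of a process's memory-mapped regions:
# hash each region, then hash the concatenated digests, so any change
# to a critical code or data page alters the overall fingerprint.
import hashlib

def fingerprint(regions):
    """Return a stable hex digest over an ordered list of memory regions."""
    outer = hashlib.sha256()
    for region in regions:
        outer.update(hashlib.sha256(region).digest())
    return outer.hexdigest()

clean = fingerprint([b"browser .text", b"browser .data"])
remapped = fingerprint([b"malicious .text", b"browser .data"])
changed = clean != remapped  # a remap of the .text region is detectable
```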
  • the Analytics Engine 112 ′ then adds additional observation capabilities to the kernel module to capture additional system events related to the targeted process.
  • the relevant new capability in this case is to alert on any filesystem enumeration.
  • This is a single additional capability, sufficient for this example; in practice, any primary alert will typically pass multiple new capabilities to the kernel or collector observation capability.
  • the remapping of the browser image is sufficiently suspicious that the Analytics Engine 112 ′ will continue its correlation activity using additional information gathered from the kernel and microvisor modules 110 , 108 , looking for conclusive malicious behavior.
  • the next element of the rule would trigger on filesystem enumeration FE, which is seen through the kernel module(s) 110 .
  • the Analytics Engine will, according to response rule directives, terminate the malicious activity.
  • the kernel module(s) 110 recognize the prohibited request as a result of the new rule and cancel the request.
  • the system may then potentially restart the parent Internet browser process ( FIG. 15 ).
  • the Analytic Engine 112 ′ now knows that the code in question was malicious, and that the Internet Browser image in memory has been corrupted.
  • the management module 114 terminates the malicious process and respawns the internet browser from a clean image.
  • the management module 114 also leaves identifying information M in the Analytics Engine 112 ′ for future use and resets both kernel and microvisor modules 110 , 108 , clearing the malicious code from memory, for normal operation ( FIG. 16 ).
  • the internet browser has now been respawned from the clean binary image on the system.
  • the management module sends a report to the appropriate console, and also forwards information about the thwarted attack to any other protected system on the subnet.
  • the Analytics Engine will also add the signature of the remapped binary to rules memory, allowing for the next attack of the same type to be immediately interdicted at time of attempted process memory remapping.
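The "rules memory" behavior just described might be sketched as follows: once the remapped binary's signature is recorded, the same attack is interdicted at remap time on the next attempt. Function names and the SHA-256 signature choice are assumptions for illustration.

```python
# Assumed sketch of signature-based interdiction at process-memory remap.
import hashlib

known_bad = set()

def record_attack(binary):
    """Add the signature of a confirmed-malicious image to rules memory."""
    known_bad.add(hashlib.sha256(binary).hexdigest())

def allow_remap(binary):
    """Deny a remap whose image matches a previously recorded attack."""
    return hashlib.sha256(binary).hexdigest() not in known_bad

record_attack(b"\x4d\x5a malicious payload")
first = allow_remap(b"\x4d\x5a malicious payload")  # interdicted immediately
other = allow_remap(b"\x4d\x5a benign image")       # unaffected
```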
  • the Analytics Engine will also report on any or all of these events to the reporting console (e.g., a user interface generated by Management Module 114 ), depending upon the configuration of the particular system.
  • Embodiments of the present invention include a computer program code-based product, which includes a computer readable storage medium having program code stored therein which can be used to instruct a computer to perform any of the functions, methods and/or modules associated with the present invention.
  • the non-transitory computer readable medium includes any of, but not limited to, the following: CD-ROM, DVD, magnetic tape, optical disc, hard drive, floppy disk, ferroelectric memory, flash memory, phase-change memory, ferromagnetic memory, optical storage, charge coupled devices, magnetic or optical cards, smart cards, EEPROM, EPROM, RAM, ROM, DRAM, SRAM, SDRAM, and/or any other appropriate static, dynamic, or volatile memory or data storage devices, but does not include a transitory signal per se.
  • the present invention may be implemented on a conventional IBM PC or equivalent, multi-nodal system (e.g., LAN) or networking system (e.g., Internet, WWW, wireless web). All programming and data related thereto are stored in computer memory, static or dynamic or non-volatile, and may be retrieved by the user in any of: conventional computer storage, display (e.g., CRT, flat panel LCD, plasma, etc.) and/or hardcopy (i.e., printed) formats.
  • the programming of the present invention may be implemented by one skilled in the art of computer systems and/or software design.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Virology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)
  • Storage Device Security (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A security system and method secures and responds to security threats in a computer having a CPU, a Kernel/OS, and software applications. A low-level data collector intercepts a selection of first tier calls between the CPU and Kernel/OS, and stores associated first tier call IDs. A Kernel module intercepts a selection of second tier calls between applications and the Kernel/OS, and stores associated second tier call IDs. An Analytic Engine maps the stored first and second tier call IDs to a rulebase containing patterns of security threats, to generate a threat analysis, and then responds to the threat analysis. The Analytic Engine enlarges or contracts the selection of first and second tier calls to increase or decrease specificity of the threat analysis. A Management Module generates user interfaces accessible remotely by a user device, to update the rulebase and configure the low-level collector, the Kernel module, and the Analytic Engine.

Description

RELATED APPLICATION
This application claims the benefit of U.S. Provisional Patent Application Ser. No. 61/971,244, entitled COMPUTER SECURITY SYSTEM AND METHOD, filed on Mar. 27, 2014, the contents of which are incorporated herein by reference in their entirety for all purposes.
BACKGROUND
Technical Field
This invention relates to computer system security, and more particularly, to a system and method for autonomously identifying and disrupting multiple forms of malicious software attacks through the correlation of hardware, operating system, and user space events.
Background Information
A mix of high false positives, complex management, unacceptable performance load, and a lack of automatic responses have critically reduced the efficacy and adoption of current security technologies in use at the endpoint. These technologies include anti-virus and malicious code detection products, network and host-based monitoring agents, and traditional host-based IPS and IDS technologies. These technologies are focused on detecting malware and automated attack mechanisms by recognizing direct representations (signatures) of known attack payloads, or by identifying a limited base of inappropriate or unauthorized actions. These approaches have proven increasingly ineffective as attackers use techniques such as polymorphism to change the appearance of attacks and increase their use of zero-day attacks, for which no signatures exist.
Modern attackers also leverage vulnerabilities in common applications and interfaces to elevate their privilege, providing them with the ability to co-opt the system configuration authority of the root user or administrator. From this position, the attackers and their tools can disable, remove, or reconfigure other software that is installed on the system. Existing technologies rely on their ability to instantiate themselves with priority over malware, and that priority is vulnerable in the case of privilege escalation attacks.
The preceding weaknesses in current technologies have led to the development of security systems that operate as nearly fully virtualized versions of the systems they seek to protect. By abstracting the actual operation of system-level functions from processes and users, these security systems can better identify patterns of behavior, and prevent malicious behavior, within the context of the virtualized image. However, the amount of data acquisition and process intermediation required by a fully virtualized or sandboxed environment often creates unacceptable performance impacts on the users of the systems along with other issues.
As a result of these multiple inadequacies, there are few automated solutions available to organizations looking to protect their endpoint systems. In an absence of trusted data and consistent reporting, endpoint security technologies instead provide monitoring data to human interpreters and remote data aggregation suites, from which attack identification and response decisions are made. This latency, between the attack, the detection of the attack, and the disruption or mitigation of the attack often takes months. Skilled individuals capable of recognizing attack patterns, and infrastructures capable of supporting them, also come at a high cost, making them inappropriate for all but the largest of organizations.
SUMMARY
One aspect of the invention includes a security system for securing and responding to security threats in a computer having a Central Processing Unit (CPU), a Kernel/Operating System, and a plurality of software applications. The system includes one or more low-level data collector modules configured to intercept a predetermined selection of first tier calls between the CPU and Kernel/OS, and to store identifying information pertaining to the intermediated first tier calls, i.e., first tier call IDs, in a data store. One or more Kernel Modules are configured to intermediate a predetermined selection of second tier calls between applications/users as they are interpreted by the Kernel/OS and to store identifying information pertaining to the intermediated second tier calls, i.e., second tier call IDs, in the data store. An Analytic Engine aggregates and maps the stored first and second tier call IDs to a rulebase containing patterns of first and second tier call IDs associated with identifiable security threats, to generate a threat analysis. The Analytic Engine selectively enlarges or contracts the predetermined selection of first and second tier calls to respectively increase or decrease specificity of the threat analysis. The Analytic Engine is also configured to take responsive actions in response to the threat analysis. A Management Module is configured to generate user interfaces accessible remotely, e.g., via the Internet, by a user device, to enable a user to update the rulebase and configure the low-level collector module, the Kernel module, and the Analytic Engine.
In another aspect of the invention, a method is provided for securing and responding to security threats in a computer having a Central Processing Unit (CPU), a Kernel/Operating System, and a plurality of software applications. The method includes intermediating a predetermined selection of first tier calls between the CPU and the Kernel/Operating System, and storing first tier call IDs in a data store. Second tier calls between the Kernel/OS and the applications are intermediated, with second tier call IDs stored in the data store. An Analytic Engine aggregates and maps the stored first and second tier call IDs to a rulebase to generate a threat analysis. The Analytic Engine selectively enlarges or contracts the predetermined selection of first and second tier calls to respectively increase or decrease specificity of said threat analysis. The Analytic Engine also implements responsive actions in response to the threat analysis. A Management Module generates a plurality of user interfaces to enable a user, via a user device, to update the rulebase and configure low-level collector and Kernel modules, and the Analytic Engine.
The features and advantages described herein are not all-inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and not to limit the scope of the inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
FIG. 1A is a block diagram of one embodiment of a system in accordance with the present invention;
FIG. 1B is a block diagram of an alternate embodiment of a system in accordance with the present invention;
FIG. 2 is a block diagram of a system of the prior art, during a step in an exemplary malicious attack scenario;
FIG. 3 is a block diagram of a system of the prior art, during a step in an exemplary malicious attack scenario;
FIG. 4 is a block diagram of a system of the prior art, during a step in an exemplary malicious attack scenario;
FIG. 5 is a block diagram of a system of the prior art, during a step in an exemplary malicious attack scenario;
FIG. 6 is a block diagram of a system of the prior art, during a step in an exemplary malicious attack scenario;
FIG. 7 is a block diagram of a system of the prior art, during a step in an exemplary malicious attack scenario;
FIG. 8 is a block diagram of a system of the prior art, during a step in an exemplary malicious attack scenario;
FIG. 9 is a block diagram of the embodiment of FIG. 1B, during a step in an exemplary operation during a malicious attack scenario;
FIG. 10 is a block diagram of the embodiment of FIG. 1B, during a step in an exemplary operation during a malicious attack scenario;
FIG. 11 is a block diagram of the embodiment of FIG. 1B, during a step in an exemplary operation during a malicious attack scenario;
FIG. 12 is a block diagram of the embodiment of FIG. 1B, during a step in an exemplary operation during a malicious attack scenario;
FIG. 13 is a block diagram of the embodiment of FIG. 1B, during a step in an exemplary operation during a malicious attack scenario;
FIG. 14 is a block diagram of the embodiment of FIG. 1B, during a step in an exemplary operation during a malicious attack scenario;
FIG. 15 is a block diagram of the embodiment of FIG. 1B, during a step in an exemplary operation during a malicious attack scenario; and
FIG. 16 is a block diagram of the embodiment of FIG. 1B, during a step in an exemplary operation during a malicious attack scenario;
DETAILED DESCRIPTION OF APPROACH
In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized. It is also to be understood that structural, procedural and system changes may be made without departing from the spirit and scope of the present invention. In addition, well-known structures, circuits and techniques have not been shown in detail in order not to obscure the understanding of this description. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims and their equivalents.
As used in the specification and in the appended claims, the singular forms “a”, “an”, and “the” include plural referents unless the context clearly indicates otherwise. For example, reference to “an analyzer” includes a plurality of such analyzers. In another example, reference to “an analysis” includes a plurality of such analyses.
Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation. All terms, including technical and scientific terms, as used herein, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs unless a term has been otherwise defined. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure. Such commonly used terms will not be interpreted in an idealized or overly formal sense unless the disclosure herein expressly so defines otherwise.
As used herein, the terms “computer” and “user device” are meant to encompass a workstation, personal computer, personal digital assistant (PDA), wireless telephone, or any other suitable computing device including a processor, a computer readable medium upon which computer readable program code (including instructions and/or data) may be disposed, and a user interface. Terms such as “server”, “application”, “engine” and the like are intended to refer to a computer-related component, including hardware, software, and/or software in execution. For example, an engine may be, but is not limited to being, a process running on a processor, a processor including an object, an executable, a thread of execution, a program, and a computer. Moreover, the various components may be localized on one computer and/or distributed between two or more computers. The terms “real-time” and “on-demand” refer to sensing and responding to external events nearly simultaneously (e.g., within milliseconds or microseconds) with their occurrence, or without intentional delay, given the processing limitations of the system and the time required to accurately respond to the inputs.
Terms such as “component,” “module”, and the like are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and a computer. By way of illustration, both an application running on a server and the server (or control related devices) can be components. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers or control devices.
Programming Languages
The system and method embodying the present invention can be programmed in any suitable language and technology, such as, but not limited to: Assembly Languages; C; C++; Visual Basic; Java; VBScript; JScript; Node.js; BCMAscript; DHTML; XML and CGI. Alternative versions may be developed using other programming languages including Hypertext Markup Language (HTML), Active Server Pages (ASP) and JavaScript. Any suitable database technology can be employed, such as, but not limited to, Microsoft SQL Server or IBM AS/400.
Overview
Briefly summarized, embodiments of the invention identify undesired process behaviors through high-performance analysis of a unique dataset containing outputs from custom collectors at each level of the computer system. For example, logfile, configuration, and process activity data may be gathered from user space; device driver and operating system information may be gathered from the kernel; and machine-level instruction and interrupt information may be captured or derived from native hardware events. This information is organized into a structure that has been optimized for querying against a local rulebase that contains identifying patterns of common behaviors in malicious software. The result of this analysis is the capability to detect and disrupt the installation or operation of many types of malicious software.
These embodiments integrate a discrete set of collector interfaces, configured to gather a limited number of data elements required to satisfy the identification requirements of malicious behaviors defined in the rulebase. By limiting the information gathered and the calls/interfaces intermediated, minimal load is placed on the system, to likewise minimize the performance impact experienced by the users of the system.
The approach used in these embodiments validates the positive existence of unauthorized or malicious behavior. In an exemplary implementation, this validation is applied to the actions undertaken by active software processes on the system, where the requests, process control, and network connections associated with a software program are monitored to identify specific indicators of potential malicious behavior. These monitored parameters, which may otherwise simply appear anomalous or benign, are then compared to a rulebase of known patterns of malicious behavior, to automatically identify and respond to threats in real time. It should be noted that these embodiments are not merely identifying the signatures of particular viruses or malware, but instead, are broadly characterizing patterns of behavior common to entire classes of assailants, to cast a broader net than conventional approaches, such as described below.
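The behavioral-pattern matching described above can be sketched as follows. This is an illustrative sketch only: the `Rule` class, the rule names, the indicator strings, and the `match_behaviors` helper are hypothetical and not taken from the patent; the point is that a rule matches a *pattern of behaviors*, not a file signature.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str
    # The full set of behavioral indicators that together characterize a
    # class of attack (not a signature of any one binary).
    pattern: frozenset

RULEBASE = [
    Rule("process-injection",
         frozenset({"open-remote-process", "write-remote-memory",
                    "create-remote-thread"})),
    Rule("ransomware-staging",
         frozenset({"enumerate-files", "bulk-encrypt",
                    "delete-shadow-copies"})),
]

def match_behaviors(observed):
    """Return the name of every rule whose complete behavior pattern is
    present in the observed events; a full match is treated as conclusive."""
    return [r.name for r in RULEBASE if r.pattern <= observed]

observed = {"open-remote-process", "write-remote-memory",
            "create-remote-thread", "read-file"}
print(match_behaviors(observed))  # ['process-injection']
```

Because a polymorphic rebuild of a virus changes its bytes but not its behavior, the same rule continues to match regardless of the on-disk representation.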
This approach yields a new level of substantial certainty which drives confidence in results and the capability to take automatic remediating or mitigating action, without reliance on a human-driven system. Particular embodiments may also recognize patterns associated with non-programmatic, human-driven attacks, in order to act upon those attacks in real time.
Legacy technologies fall into two main categories and suffer from three separate shortcomings: False Negatives, where actual attacks are not identified and disrupted; False Positives, where benign behavior is flagged as potentially harmful and the volume of data distracts users from actual protection tasks; and Unacceptable Performance Impacts.
An aspect of the present invention was the inventors' recognition that false negatives are a significant problem for existing technologies such as anti-virus and anti-malware offerings that attempt to provide protection from malicious behavior that is local to the system. These tools rely upon an ability to uniquely identify malicious software by fixed attributes of the compiled software itself, namely, the aforementioned ‘signatures’. These signatures are derived from an analysis of the content of the executable image. The present inventors have recognized that signature-based approaches tend to fail because of the new practice among virus creators of rebuilding the virus during the attack process, yielding a unique version of the same functional virus. Such an approach is called polymorphism, and it results in a widespread inability of these tools to identify many common attacks. Similarly, this signature-based approach is ineffective against new, or zero-day, attacks that have not previously been used. In this case, no signature exists, resulting in a lack of protection on the system.
Other conventional tools take the approach of seeking to use the source or network address of a connecting process or email system to identify malicious actors. The present inventors have recognized, however, that the dynamism of current network naming and address assignment makes those approaches similarly incomplete and out-of-date.
The instant inventors have also recognized that False Positives are a major problem for host and network based intrusion detection, intrusion prevention, and security incident and event management (SIEM) solutions. In most cases, these tools rely on the identification of anomalous behaviors among the messages from the systems that they protect. In the presence of activities that fall outside the set of either historical or expected behaviors, users and systems receive alerts that a potentially malicious activity is under way. This often results in an overwhelming challenge in terms of the volume of messages resulting from highly dynamic environments, or in a corresponding analytic challenge to merge and correlate data at a sufficient speed and accuracy to make the analytic results useful. Human intervention is thus typically required because the condition is best described as Not-known-good, as opposed to Known-bad. The alerts which are sent on Not-known-good events are, in the majority, benign, resulting in the challenge of widespread false positives.
A newer approach to anomalous and malicious behavior detection is based on virtualization technology, in which entire sessions of user or system behavior are managed inside a virtual container, which separates the actual operation of the system from the perceived operation of processes by the user.
An aspect of this invention is the inventors' recognition that the performance impact, uncertain reliability, and software platform dependence of virtualization approaches render them inappropriate for many users and security applications.
In a generalized virtualization approach, technology is used to construct a complete virtual image of the system in which either the entire operating system or some user application is instantiated and run. In order to do this, the virtualizing system is required to maintain state data around most, if not all, calls, data use, and even user interface interaction in order to simulate the expected behavior of the system. The virtualizing system must also intermediate most, if not all, calls capable of passing between the user or process in the virtualized environment and the underlying system.
As a result of these requirements, there is substantial overhead associated with the population and management of the virtual environment, and individual actions are separately processed and delayed through the intermediation process. Because of the depth of the intermediation, there also tends to be strict requirements for the platforms that are supported, due to the need to understand and instrument most, if not all of the calls.
In embodiments of the present invention, the problems of False Negatives and False Positives are addressed through the unique combination of a ruleset for known malicious behaviors and a new form of information gathering represented by a combination of multi-level collectors and the correlating capabilities of an Analytic Engine. When behavioral data from the collectors is assembled to match known indicators in the rulebase, protection and notification occur regardless of the actual on-disk representation/signature, source, or construction of the executable. When captured data correlates to the patterns of behavior represented in the rulebase, action may be effectively deemed conclusive and directly related to a known bad event. The protection is applied in real time, and in particular embodiments, local to the machine.
In these embodiments, the negative effects of full virtualization are mitigated by the use of the flexible low-level collector/framework, in which only a relatively small subset of the possible calls needs to be examined and intermediated. Such use of only a small subset of calls is possible because of the tiered approach, described in greater detail hereinbelow, which significantly reduces the performance impact of the inventive solution relative to prior art approaches such as the aforementioned virtualization approach.
In response to this need for immediate real time response and local action, not requiring human intermediation, the present invention includes a method and system for automatically protecting endpoint systems from the effects of attacks and malicious software. A method according to the present invention provides for the identification of malicious and unauthorized behavior in order to trigger appropriate steps for mitigation and disruption. Methods and systems in accordance with the present invention employ new forms of information collection and analysis that are hardware and software agnostic and are capable of informing behavior analytics. These embodiments can further use the result of these analytics to disrupt the attack in real time.
In accordance with embodiments of the invention, data is provided through a selective low level monitoring and data collection technology that operates between the CPU hardware and any existing hypervisor and/or host operating system. This data provides the capability to differentiate between the actual users of a system under attack, the attack that is impersonating an authorized user or process, and the operations that are being undertaken on the system.
In particular embodiments, this technology provides access to system functions while employing real-time analytics that adapt the criteria of the identification activity in order to further distinguish actual attacks from potential false positive reports. The system functions provide both data and operational capabilities, and the resulting information flows inform the assessment of which rules should be applied to the current scenario.
The criteria supplied can be organized as a structured rule syntax, extensible by authorized individuals, which is then parsed by the protection mechanism in order to identify new indicators of attack. This information may also be made available to multiple instances of the invention to provide consistency of behavior across multiple, e.g., networked, systems.
In these embodiments, the structured rule syntax can also be linked to response actions, specific to the identified malicious behavior, in order to provide a flexible means of integrating organizational priorities with the output of the malicious behavioral analysis.
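A structured rule syntax linked to response actions, as described above, might take a shape such as the following. This is a minimal sketch under stated assumptions: the JSON layout, the field names (`id`, `indicators`, `response`), and the `apply_rule` dispatcher are all invented for illustration; the patent does not specify a concrete syntax.

```python
import json

# Hypothetical rule in a structured, extensible syntax: a behavior
# pattern linked directly to a response action (field names invented).
RULE_TEXT = """
{
  "id": "exfil-pdf-over-http",
  "indicators": ["thread-injected-into-browser",
                 "bulk-pdf-read",
                 "outbound-post-to-unknown-host"],
  "response": "terminate_process"
}
"""

RESPONSES = {
    "terminate_process": lambda pid: ("terminate", pid),
    "disconnect_network": lambda pid: ("disconnect", pid),
}

def apply_rule(rule_text, observed, pid):
    """Parse a rule and, if all of its indicators have been observed,
    dispatch the linked response action against the offending process."""
    rule = json.loads(rule_text)
    if set(rule["indicators"]) <= observed:
        return RESPONSES[rule["response"]](pid)
    return None

hit = apply_rule(RULE_TEXT,
                 {"thread-injected-into-browser", "bulk-pdf-read",
                  "outbound-post-to-unknown-host"}, pid=4242)
print(hit)  # ('terminate', 4242)
```

Keeping the response inside the rule is what allows organizational priorities (terminate vs. merely alert) to be changed by editing data rather than code.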
Analytic results can be used to immediately interdict attacks in process. The results can also be used to generate real time alerts to users and groups in order to better inform aggregated analysis and organizational security practices.
In particular embodiments, the protection provided is not visible by either local users or by processes through the use of these low-level capabilities. Implementing a separate interface to technical functions such as memory management and process invocation allows the embodiments to selectively respond to requests for data, and to cloak its operation and existence.
Control of the rules, response, and versioning of the particular embodiment is also managed through the low-level capabilities of the host computer, through the use of user interfaces generated by an integral management module for display on remotely connected user devices. These interfaces may be configured to perform the functions of event aggregation, trending, and presentation. The information presented may relate to the actual attacks or behaviors disrupted, and may not, as a matter of course, include information which is unrelated to conclusively identified attacks.
Hardware Event Gathering and Analytics
An approach for selective real-time hardware interrupt vector content gathering by a security-focused behavioral analytics system is provided. In particular embodiments, a thin, machine-level collector is deployed within the interrupt handling control chain that intermediates service requests associated with operations and interrupts servicing selected hardware and software, to include events triggered by the CPU, operating system and user space, for the purpose of providing unique context in order to positively correlate user identity, privilege, and process activity.
The implementation of this intermediation minimizes latency and performance load by limiting its functions to simply recognizing the event in a low level collector module associated with that device, and passing the current interrupt context to a lightweight buffering mechanism which stores the data within the memory presently allocated to the low level collector(s). Transformation and processing of this information may be done within user space in order to capitalize on traditional system scheduling and performance optimizations.
The data, once gathered, is attributed to one or more classes of malicious behavior, and is used in conjunction with information from other collectors to identify processes or threads that are known to be destructive or unauthorized.
In particular embodiments, the Low level collector(s) employs a selective framework of collectors that is configurable to load only modules necessary to intermediate events and calls that are directly related to malicious and unauthorized behaviors from the rule base. As new malicious behaviors are recognized in research, or as more information is required in the analysis of system attacks, new modules can be transparently loaded and unloaded from this low-level framework, e.g., via a management module.
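The selective loading described in this paragraph can be sketched as follows. The module names are taken from the first-tier list later in this description; the event names and the event-to-module mapping are invented for illustration, and a real framework would of course load kernel or microvisor modules rather than entries in a dictionary.

```python
# Sketch of the selective collector framework: only the low-level
# modules whose events are referenced by some rulebase pattern are
# loaded (module names from the first-tier list; mapping invented).
EVENT_TO_MODULE = {
    "keystroke": "kymtrmod",
    "mouse-move": "msmtrmod",
    "net-packet": "nwmtrmod",
    "interrupt-vector-write": "idtmod",
    "guest-page-fault": "gmmumod",
}

def modules_to_load(rulebase_patterns):
    """Given the event sets referenced by the active rules, return the
    minimal set of collector modules that must be loaded."""
    needed_events = set().union(*rulebase_patterns) if rulebase_patterns else set()
    return {EVENT_TO_MODULE[e] for e in needed_events if e in EVENT_TO_MODULE}

# A rulebase referencing only network and keyboard events loads only
# those two collectors, leaving the rest of the framework unloaded.
print(modules_to_load([{"net-packet", "keystroke"}]))
```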
When an intermediated call is analyzed and found to contain context indicating that it could be a component of an unauthorized or malicious behavior as defined by rulebase contents, additional information that has been gathered from previous events is integrated in the correlation to confirm or exclude the call from the list of potential incidents.
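The confirm-or-exclude correlation step just described can be sketched as follows; all names (`HISTORY`, `record`, `confirm`) are hypothetical, and the lightweight buffer is modeled here as a bounded deque standing in for the memory allocated to the low-level collectors.

```python
from collections import deque

# Illustrative correlation step: a suspicious intermediated call is
# confirmed only when its required precursor events for the same process
# already appear in the lightweight history buffer. All names invented.
HISTORY = deque(maxlen=1024)

def record(pid, kind):
    """Append one intermediated event to the bounded history buffer."""
    HISTORY.append({"pid": pid, "kind": kind})

def confirm(pid, precursors):
    """Return True if every precursor event has already been seen for
    this process; otherwise the call is excluded as a false lead."""
    seen = {e["kind"] for e in HISTORY if e["pid"] == pid}
    return precursors <= seen

record(7, "open-remote-process")
record(7, "write-remote-memory")
print(confirm(7, {"open-remote-process", "write-remote-memory"}))  # True
print(confirm(9, {"open-remote-process"}))  # False
```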
This implementation includes the creation of the configurable low level collector/framework, which is a real-time mechanism for securely controlling and modifying collector behavior, a language and storage mechanism for rules defining consequent actions, and an active component of analysis capable of translating identified risks into action.
Turning now to the accompanying figures, particular aspects of the present invention will now be described in detail. As shown in FIG. 1A, a representative computer system onto which an embodiment of the present invention is deployed is shown as system 100. This system 100 includes a computer having a processor (CPU) 102, a Kernel/Operating System (Kernel/OS) 104, a plurality of Applications 106, and one or more low-level data collector modules 108, such as in the form of a conventional micro-hypervisor (“microvisor”) configured in accordance with the teachings herein. Those skilled in the art will recognize that the term microvisor refers to a Xen-based security-focused hypervisor that provides micro-virtualization technology to ensure secure computing environments. Short for micro-hypervisor, a microvisor works with the VT (Virtualization Technology) features built into Intel, AMD and other CPUs to create hardware-isolated micro virtual machines (micro-VMs) for each task performed by a user that utilizes data originating from an unknown source. In contrast to these conventional microvisors, embodiments of the present invention use a microvisor that has been modified to intercept (intermediate) a predetermined selection of calls (first tier calls) between the CPU 102 and Kernel/OS, and to store identifying information pertaining to the intermediated first tier calls (first tier call IDs) in a data store. One or more Kernel Modules 110 are configured to intermediate a predetermined selection of calls (second tier calls) between applications/users as they are interpreted by the Kernel/OS, and to store identifying information pertaining to the intermediated second tier calls (second tier call IDs) in the data store. An Analytic Engine 112 aggregates and maps the stored first tier call IDs and second tier call IDs to a rulebase, to generate a threat analysis.
As mentioned hereinabove, the rulebase includes patterns of first tier call IDs and second tier call IDs associated with identifiable security threats. In particular embodiments, the Analytic Engine is configured to selectively enlarge or contract the predetermined selection (e.g., increase or decrease the number) of first tier calls and second tier calls to respectively increase or decrease specificity of the threat analysis. The Analytic Engine is also configured to take responsive actions in response to the threat analysis. A Management Module 114 is configured to generate a plurality of user interfaces accessible remotely, e.g., via the Internet, by a user computer, to enable a user (e.g., having administrative privileges) to update the rulebase and configure the low-level collector module 108, the Kernel module 110, and the Analytic Engine 112.
The Analytic Engine 112 may take any number of actions in response to a detected threat. Non-limiting examples include one or more of (a) process termination, (b) thread termination, (c) event and alert notification and logging, (d) user disablement, (e) network disconnection, and (f) process fingerprinting.
It should be recognized that the first tier calls include one or more events or calls for activity that would otherwise pass directly between the CPU, hardware devices, and/or the Kernel/Operating System. It should also be recognized that the predetermined selection of first tier calls represents a relatively small subset of the full range of calls capable of being passed between the CPU 102 and Kernel/OS 104. The use of such a subset provides the aforementioned benefits including low processing overhead, increased processing speed, etc. Non-limiting examples of calls includable in the predetermined selection of first tier calls include one or more of (a) apicmod=Advanced Programmable Interrupt Controller Module, (b) gmmumod=Guest Memory Management Unit Module, (c) gpmmumod=Guest Physical Memory Management Unit Module, (d) idtmod=Interrupt Descriptor Table Module, (e) kymtrmod=Keyboard Monitor Module, (f) msmtrmod=Mouse Monitor Module, (g) mxmlmod=Mini XML Module, (h) nwmtrmod=Network Monitor Module, (i) prmtnmod=Preemption Module, and (j) udis86mod=udis86 Module.
The second tier calls include one or more events or calls for service or data between the applications and the Kernel/Operating System, including scheduling and functional service delivery. As discussed above with respect to the first tier calls, the predetermined selection of second tier calls also represents a relatively small subset of the full range of calls capable of being passed between the applications and the Kernel/OS. Non-limiting examples of calls includable in the predetermined selection of second tier calls include communications with one or more of a (a) Network Monitor Driver, (b) Registry Monitor Driver, (c) Filesystem Monitor Driver, (d) Process Monitor Driver, and (e) Process Governor Driver.
As mentioned above, in particular embodiments, both the rulebase and the data store used to store the first and second tier call IDs, are local to system 100, e.g., disposed in memory associated with Analytic Engine 112 and/or Management Module 114. However, in some embodiments, the data store and/or the rulebase may be disposed remotely from the system 100.
Turning now to FIG. 1B, an alternate embodiment of the present invention is shown as system 100′. This system 100′ is substantially similar to system 100 (FIG. 1A), while also including one or more User Space Modules 120.
The user space module(s) 120 is configured to collect a predetermined selection of user space data associated with the applications, and to store identifying information pertaining to the collected user space data (user space IDs) in the data store. As discussed above with respect to the first and second tier calls, the predetermined selection of user space data represents a relatively small subset of the full range of user space data capable of being generated and/or collected. Non-limiting examples of user space data usable in the predetermined selection of user space data include one or more of (a) Application Mouse Activity, (b) Application Keyboard Activity, (c) System Logfile Activity, and (d) System Registry Fields.
The Analytic Engine 112′ is substantially similar to Analytic Engine 112, while also being configured to aggregate and map the user space IDs, along with the first and second tier call IDs, to the rulebase to generate a threat analysis. It will be noted that in this embodiment, the rulebase includes patterns of first tier call IDs, second tier call IDs and user space IDs associated with identifiable security threats. Similarly, the Analytic Engine 112′ is configured to selectively enlarge or contract the predetermined selection of first tier calls, the predetermined selection of second tier calls, and/or the predetermined selection of user space data to respectively increase or decrease the specificity of said threat analysis.
It should also be noted that any of these predetermined selections may be automatically enlarged to increase the specificity of the threat analysis from a base level to one or more escalated levels when the threat analysis identifies a potential security threat. Conversely, any of the predetermined selections may be automatically contracted to decrease the specificity of the threat analysis, e.g., to free up computing resources, from the one or more escalated levels towards the base level once one or more of the aforementioned responsive actions has been implemented.
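The automatic escalation and contraction of the monitored selections can be sketched as follows. The module names come from the first-tier list given earlier; the two-level scheme, the class, and its method names are invented for illustration (the patent allows multiple escalated levels).

```python
# Sketch of automatic escalation/contraction of the monitored call
# selection. Module names are from the first-tier list in the text;
# the levels and the CollectorSelection class are hypothetical.
BASE_LEVEL = {"idtmod", "nwmtrmod"}
ESCALATED_LEVEL = BASE_LEVEL | {"kymtrmod", "msmtrmod", "gmmumod"}

class CollectorSelection:
    def __init__(self):
        self.active = set(BASE_LEVEL)

    def on_threat_identified(self):
        # Enlarge the selection: more calls intermediated, higher
        # specificity, at a temporary performance cost.
        self.active = set(ESCALATED_LEVEL)

    def on_response_implemented(self):
        # Contract back toward the base level to free computing resources.
        self.active = set(BASE_LEVEL)

sel = CollectorSelection()
sel.on_threat_identified()
print(sorted(sel.active))   # escalated: five collectors active
sel.on_response_implemented()
print(sorted(sel.active))   # back to base: two collectors active
```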
Thus, unlike existing approaches which adapt security reporting and response according to statically identified behaviors on the system, these embodiments reconfigure their own data gathering capability to create an increasingly detailed understanding of the potential security events when necessary.
Emulating the process invoked by a human analyst, the implementation integrates the information gathered during previous call examination in order to more narrowly consider the inputs necessary to further investigate potentially damaging attacks.
Information collection occurs through data passed to lightweight buffers, which surface that data to higher-level processing and analytic functions operating according to ordinary system scheduling, thereby minimizing the performance impact and visibility of the embodiment.
Automated Low Level Collector(s) Configuration
It should be recognized that knowledge gained through an observed malicious or unauthorized activity on one system 100, 100′ may be shared among all systems in a network utilizing the embodiments shown and described herein. Observable events are identified locally but may be shared globally increasing the learning efficiency of unrelated systems and preventing the spread of the observed malicious activity. For example, the particular first tier, second tier, and user space IDs associated with particular threats identified by one system 100, 100′, may be added to the rulebase used by other systems 100, 100′ to potentially provide for quicker threat identification by those other systems.
The capability described in the embodiment is defined by real-time knowledge of ongoing system behavior that is identified by characteristics described in the configurable rulebase. The system can also leverage the capability of the embodiments' real-time rulebase modifications to react to information provided by other embodiment systems and presented from foreign systems. The analytic engine may thus receive communication from other data sources that can then be transformed into the appropriate conditions to trigger application of rule changes. In this case, the implementation of the rule will include the necessary protection behaviors to prevent the advancement or proliferation of an identifiable attack emanating from another machine on the local network. In this regard, the detection of a particular malware activity on a protected machine may be shared with any locally or remotely accessible systems. In response, those other systems, while uninfected, are aware of the conditions and network location of the offending system, and can therefore apply rules that inhibit or prevent connections from the offending machine or of the traffic type known to be causing the initial response on the foreign system. Existing implementations of the current state of the art do not contemplate this level of communication and coordination between low-level system protection technologies. The communications between systems protected by the invention deliver real-time status from adjacent systems, reporting on identifiable security events that they are experiencing.
Embodiments of the present invention may provide additional preventative behaviors to ensure that the security event on the initial machine is not allowed to further corrupt other adjacent machines. In this way, embodiments of the present invention can update both the indicators of a potential attack and the appropriate automated responses that can range from increased monitoring to denial of connections from the foreign exploited system. It is noted that conventional virus definition updates are based on the premise of a static signature, while the approach described with respect to the instant embodiments is effectively a “hive mind”, in which collective knowledge is shared in real-time so all systems share a collective understanding of potential malicious activity and sources.
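The "hive mind" sharing described above can be sketched as follows: a pattern identified on one protected system is pushed to its peers, which add it to their own rulebases before ever encountering the attack themselves. The `ProtectedSystem` class, its method names, and the dictionary-shaped pattern are all hypothetical; a real deployment would use an authenticated network transport rather than direct object references.

```python
# Sketch of real-time indicator sharing between protected systems.
# All class and method names are hypothetical.
class ProtectedSystem:
    def __init__(self, name):
        self.name = name
        self.rulebase = []
        self.peers = []

    def identify_threat(self, pattern):
        """Record a locally observed malicious pattern and share it."""
        self.rulebase.append(pattern)
        for peer in self.peers:           # share globally in real time
            peer.receive(pattern)

    def receive(self, pattern):
        """Learn an indicator from a peer without having been attacked."""
        if pattern not in self.rulebase:
            self.rulebase.append(pattern)

a, b = ProtectedSystem("a"), ProtectedSystem("b")
a.peers = [b]
a.identify_threat({"source": "10.0.0.9", "behavior": "process-injection"})
print(b.rulebase)  # b now knows the offending source and behavior
```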
Having described embodiments of the system 100, 100′ of the present invention, an exemplary method in accordance with aspects of the present invention will now be described as illustrated by the following Table I.
As shown, a method 200 for securing and responding to security threats in a computer having a Central Processing Unit (CPU) and a Kernel/Operating System, includes intermediating 202, with low-level data collector module 108, a predetermined selection of first tier calls between the CPU and the Kernel/Operating System, and storing identifying information pertaining to the intermediated first tier calls (first tier call IDs) in a data store. At 204, second tier calls are intermediated with kernel module 110, and identifying information pertaining to the intermediated second tier calls (second tier call IDs) is stored in the data store. At 206, Analytic Engine 112 aggregates and maps the stored first tier call IDs and second tier call IDs to a rulebase, to generate a threat analysis. At 208, the Analytic Engine selectively enlarges or contracts the predetermined selection of first tier calls and the predetermined selection of second tier calls to respectively increase or decrease specificity of said threat analysis. At 210, the Analytic Engine implements one or more of a plurality of responsive actions in response to the threat analysis. At 212, the Management Module 114 generates a plurality of user interfaces to enable a user, via a communicably coupled user device, to update the rulebase and configure the low-level collector module 108, the Kernel module 110, and the Analytic Engine 112.
TABLE I
202 Intermediate and store first tier calls
204 Intermediate and store second tier calls
206 aggregate and map stored first tier call IDs and second tier call
IDs to a rulebase to generate a threat analysis
208 selectively enlarge or contract the selection of first tier calls
and second tier calls
210 implement responsive actions in response to the threat analysis
212 generate user interfaces to enable a user to update rulebase and
configure the low-level collector, Kernel module, and Analytic
Engine
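The data flow of Table I can be sketched end to end as follows. This covers steps 202, 204, 206 and 210 only (208, selection resizing, and 212, the management interfaces, are omitted); the `t1:`/`t2:` tagging, the rule contents, and the helper name are all invented for illustration.

```python
# End-to-end sketch of steps 202-210 from Table I (helper names and
# rule contents hypothetical; 208 and 212 omitted for brevity).
RULEBASE = [("process-injection",
             frozenset({"t1:write-remote-memory", "t2:create-remote-thread"}))]

def method_200(first_tier_calls, second_tier_calls):
    data_store = []
    data_store += [f"t1:{c}" for c in first_tier_calls]   # 202 intermediate/store
    data_store += [f"t2:{c}" for c in second_tier_calls]  # 204 intermediate/store
    observed = set(data_store)
    threats = [name for name, pattern in RULEBASE         # 206 aggregate and map
               if pattern <= observed]
    actions = ["terminate_process"] if threats else []    # 210 respond
    return threats, actions

print(method_200(["write-remote-memory"], ["create-remote-thread"]))
# (['process-injection'], ['terminate_process'])
```

Note how the mapping at 206 is only conclusive because it combines evidence from both tiers: neither the first-tier event nor the second-tier event alone satisfies the rule.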
Optional aspects of the method of Table I are shown and described with respect to the following Table II. As shown at 214, step 210 may further include implementing one or more of a plurality of responsive actions including process termination, thread termination, event and alert notification and logging, user disablement, network disconnection, and process fingerprinting. At 216, method 200 may further include using user space module 120 to collect a predetermined selection of user space data associated with the applications, and store identifying information pertaining to the collected user space data (user space IDs) in the data store. At 218, step 206 may include aggregating and mapping the stored first tier call IDs, second tier call IDs, and the user space IDs to the rulebase, to generate a threat analysis. At 220, step 208 may include selectively enlarging or contracting the predetermined selection of first tier calls, the predetermined selection of second tier calls, and/or the predetermined selection of user space data to respectively increase or decrease the specificity of said threat analysis.
At 222, step 208 may further include automatically enlarging the predetermined selection of first tier calls, the predetermined selection of second tier calls, and/or the predetermined selection of user space data to increase the specificity of said threat analysis from a base level to one or more escalated levels when the threat analysis identifies a potential security threat. At 224, step 208 may further include automatically contracting the predetermined selection of first tier calls, the predetermined selection of second tier calls, and/or the predetermined selection of user space data to decrease the specificity of said threat analysis from the one or more escalated levels towards the base level once one or more of the plurality of responsive actions has been implemented.
TABLE II
214 implement responsive actions including process termination, thread
termination, event and alert notification and logging, user
disablement, network disconnection, and process fingerprinting
216 collect and store user space data
218 aggregate and map first tier call IDs, second tier call IDs, and
user space IDs to the rulebase to generate threat analysis
220 selectively enlarge or contract the selection of first tier calls,
second tier calls, and/or the selection of user space data to
respectively increase or decrease the specificity of the threat
analysis.
222 automatically enlarge the selection from a base level to one or
more escalated levels when the threat analysis identifies a
potential security threat
224 automatically contract the selection from the one or more
escalated levels towards the base level once responsive action
has been implemented.
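The enlarge/contract behavior of steps 220-224 can be sketched as follows. This is a minimal illustration, assuming hypothetical call names and a single escalated level; the patent contemplates multiple escalated levels and separate selections per tier.

```python
# Illustrative sketch of steps 220-224. BASE_CALLS, ESCALATED_CALLS, and
# MonitoredSelection are hypothetical names, not from the patent.
BASE_CALLS = {"process_create", "memory_remap"}
ESCALATED_CALLS = BASE_CALLS | {"filesystem_enumerate", "network_send", "registry_write"}

class MonitoredSelection:
    """Tracks which calls are currently intermediated and at what level."""

    def __init__(self):
        self.level = "base"
        self.calls = set(BASE_CALLS)

    def escalate(self):
        # Step 222: enlarge the selection when a potential threat is found,
        # increasing the specificity of the threat analysis.
        self.level = "escalated"
        self.calls = set(ESCALATED_CALLS)

    def deescalate(self):
        # Step 224: contract toward the base level once a responsive action
        # has been implemented, reducing collection overhead.
        self.level = "base"
        self.calls = set(BASE_CALLS)

sel = MonitoredSelection()
sel.escalate()
assert "filesystem_enumerate" in sel.calls
sel.deescalate()
assert sel.calls == BASE_CALLS
```

The design point is that specificity is traded against overhead: the broader selection is only paid for while a potential threat is under investigation.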
EXAMPLES
Turning now to FIGS. 2-16, examples of attacks on a conventional system (Example I, FIGS. 2-8) and on a system 100′ of the present invention (Example II, FIGS. 9-16) are shown and described.
Example I
In this example, the described approach is illuminated through an examination of the behavior of the system in the presence of a common attack type.
Specific Attack
Referring to FIGS. 2 and 3, in this example, malicious software 300 injects itself into a running instance of an Internet browser through a vulnerable condition in either the running code of the browser or a third-party plug-in running within the browser. This is a common outcome and technique for phishing attack destinations and malicious websites. The goal of this example malicious software is to exfiltrate copies of any PDF files that are accessible to the user of the browser and deposit them on a remote server.
Attack Type Description
As shown in FIG. 4, the example attack technique discussed here is called process injection 302. In general, a process injection attack exploits a vulnerability in an executing program to replace actively running legitimate application code with malicious code, masking the malicious behavior by running it in the context of an expected or trusted process.
In this popular form of attack, once the malicious code is executing, these are the common steps to exploiting the system:
Create a new process in a suspended state, a common technique that defers execution until it is needed.
As shown in FIG. 5, the existing process's memory space is mapped with code from the malicious executable, and execution then resumes from the newly mapped malicious program 304. The new process will now be executing the malicious code, but within the context of the originally targeted legitimate executable.
The malicious code then attempts to locate the limited functions required by the malicious executable, either by actually attempting their execution while watching for failures, or by searching the import table in the shared memory of the corrupted executable. The code 304 may then download additional functions or libraries to augment the functionality available within the malicious code, loading that functionality for use.
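The general steps above can be expressed as the ordered event sequence a low-level observer would see, together with a check for the telltale pattern: a suspended process creation later followed by a remap of its image. All event names here are illustrative, not the patent's.

```python
# Hypothetical event trace of the injection steps described above.
INJECTION_TRACE = [
    "process_create_suspended",  # new process created, execution deferred
    "memory_remap",              # legitimate image replaced with malicious code
    "resume_thread",             # execution resumes inside the malicious program
    "import_table_search",       # payload locates the functions it needs
    "library_download",          # optional: fetch additional functionality
]

def looks_like_injection(events):
    """True if a suspended create is later followed by a remap of the image."""
    if "process_create_suspended" not in events or "memory_remap" not in events:
        return False
    return events.index("process_create_suspended") < events.index("memory_remap")

assert looks_like_injection(INJECTION_TRACE)
```

A remap alone is not flagged; it is the ordering of the two events that characterizes the technique.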
Turning now to FIG. 6, in this specific example, the goal is the identification and exfiltration of all accessible PDF files. To do this, the malicious code will attempt to identify and enumerate all physical and logical drives that are accessible to the trusted context of the corrupted process.
This list is then searched for occurrences of the file type PDF 306.
As shown in FIGS. 7 and 8, on each successful identification of a PDF file, the malicious code then uses the existing HTTP capability of the corrupted Internet browser to send 308 the contents of that file to the address of a receiving malicious site using standard HTTP POST or GET requests.
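The exfiltration pattern just described, drive enumeration filtered for PDFs followed by HTTP requests carrying each hit, can be sketched as a detector over an event stream. Event and field names are hypothetical.

```python
# Illustrative detector for the enumeration-then-POST pattern of FIGS. 6-8.
def pdf_exfiltration_suspected(events):
    """Flag a filesystem enumeration for '*.pdf' followed by an HTTP POST."""
    saw_pdf_enum = False
    for kind, detail in events:
        if kind == "filesystem_enumerate" and detail.get("filter") == "*.pdf":
            saw_pdf_enum = True
        elif kind == "http_post" and saw_pdf_enum:
            return True
    return False

events = [
    ("filesystem_enumerate", {"filter": "*.pdf"}),
    ("http_post", {"dest": "remote-site.example"}),
]
assert pdf_exfiltration_suspected(events)
```

Note that on the conventional system of Example I no such detector exists, which is why the attack succeeds; Example II shows the pattern being caught at the enumeration step.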
Example II
Referring to FIGS. 9-16, system 100′ initiates components in various sections of the system. As shown in FIG. 9, at its base, the low-level collector (e.g., microvisor) 108 intercepts the lowest level commands passing between the kernel 104 of the operating system and the actual hardware (e.g., CPU) 102. The microvisor 108 acts both as an observer to inform actions and as a defender of the entire system 100′. The kernel modules 110 gather information about ongoing system operation, and the analytic engine 112′ relates alerts to rules and actions. These components are all managed by the management module 114. Once so configured, as shown in FIG. 10, when the Internet browser visits the malicious site 300 and malicious code from the site is returned at 302, the results are much different from those shown and discussed hereinabove with respect to the prior art. Turning to FIG. 11, system 100′ operates so that at the time of the creation of a new process by the Internet browser, the virtual address space of critical or identified processes is intermediated by one of the low-level collectors 108 and the Kernel/OS module 110. For example, among the default rules within the system 100′ is a rule to observe Internet browsers for certain activities, like spawning new processes. So in this example, when the browser spawns a new process at the request of the malicious code, that action is noted and identifying information is retrieved from the kernel and microvisor (collector) modules 110 and 108, respectively, to be sent to the analytic engine 112′.
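A default rule of the kind described, watch browser images for process spawns and emit an alert with identifying information for the Analytic Engine, can be sketched as follows. The image name, rule name, and field layout are hypothetical.

```python
# Illustrative default rule: alert when a watched browser spawns a child.
WATCHED_IMAGES = {"browser.exe"}  # hypothetical watched image name

def on_process_spawn(parent_image, parent_pid, child_pid):
    """Return an alert dict for the Analytic Engine, or None if unwatched."""
    if parent_image in WATCHED_IMAGES:
        return {
            "rule": "browser_spawn",
            "parent": parent_image,
            "parent_pid": parent_pid,
            "child_pid": child_pid,
        }
    return None  # parent not watched; no alert generated

alert = on_process_spawn("browser.exe", 100, 101)
assert alert is not None
```

The alert itself is not a verdict; it simply starts the correlation activity described below.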
As shown in FIG. 12, the collector interacts with a kernel level memory subsystem, watching process-level loading and unloading of memory-mapped regions, and passes the captured intermediated call information to the Analytics Engine. There, both the existing and new processes invoked are named, a hashed fingerprint of the virtual memory space is taken, and any changes to critical code or data pages are captured. For example, remapping a new process is an unusual operation from within a browser, and as a result, particular embodiments include a default kernel module rule that identifies it. When the process clears the existing browser context and replaces it with new code (e.g., with the malicious code), the kernel module 110 identifies the behavior and sends an alert, which includes the process identification and behavior, to the analytic engine 112′.
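The hashed fingerprint of the virtual memory space can be sketched as a per-page hash, so that a later remap can be pinpointed to the pages that changed. A real implementation would hash actual process pages; byte strings stand in for pages here, and the function names are illustrative.

```python
import hashlib

def fingerprint(pages):
    """Hash each memory page of the process image (illustrative)."""
    return [hashlib.sha256(p).hexdigest() for p in pages]

def changed_pages(before, after):
    """Indices of pages whose contents changed between two fingerprints."""
    return [i for i, (a, b) in enumerate(zip(before, after)) if a != b]

clean = fingerprint([b"legit code page", b"data page"])
remapped = fingerprint([b"malicious code", b"data page"])
assert changed_pages(clean, remapped) == [0]  # only the code page changed
```

Comparing hashes rather than raw pages keeps the stored identifying information small while still detecting any change to critical code or data pages.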
As shown in FIG. 13, the Analytics Engine 112′ then adds additional observation capabilities to the kernel module to capture additional system events related to the targeted process. The relevant new capability in this case is to alert on any filesystem enumeration. A single additional capability suffices for this example; in practice, a primary Alert will typically pass multiple new capabilities to the kernel or collector. In the example shown and discussed above, the remapping of the browser image is sufficiently suspicious that the Analytics Engine 112′ will continue its correlation activity using additional information gathered from the kernel and microvisor modules 110, 108, looking for conclusive malicious behavior. In this example, the next element of the rule would trigger on filesystem enumeration FE, which is seen through the kernel module(s) 110.
When the malicious code performs the call to attempt enumeration of the discovered drives, e.g., to search for occurrences of the file type PDF at 306, a new flag in the kernel will be triggered, and an Alert will be sent to the Analytics Engine.
Receiving this second related Alert, the Analytics Engine will, according to response rule directives, terminate the offending process. In the example shown in FIG. 14, when the malicious code now tries to enumerate the file system as it looks for PDF files at 306, the kernel module(s) 110 recognize the prohibited request as a result of the new rule and cancel the request. The system may then potentially restart the parent Internet browser process (FIG. 15). For example, as shown in FIG. 15, with the enumeration stopped, the Analytic Engine 112′ now knows that the code in question was malicious and that the Internet browser image in memory has been corrupted. Using a response defined in its rulebase, the management module 114 terminates the malicious process and respawns the Internet browser from a clean image. The management module 114 also leaves identifying information M in the Analytics Engine 112′ for future use and resets both kernel and microvisor modules 110, 108, clearing the malicious code from memory, for normal operation (FIG. 16). In the example shown in FIG. 16, the Internet browser has now been respawned from the clean binary image on the system. The management module sends a report to the appropriate console, and also forwards information about the thwarted attack to any other protected system on the subnet.
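The two-alert correlation flow of FIGS. 12-15, a remap alert arms a filesystem-enumeration watch for that process, and a later enumeration alert from the same process triggers the terminate-and-respawn response, can be sketched as follows. The class and method names are illustrative, not the patent's.

```python
# Minimal correlation sketch of the remap-then-enumerate rule.
class AnalyticEngineSketch:
    def __init__(self):
        self.armed = set()   # PIDs under escalated observation
        self.actions = []    # responsive actions taken

    def on_alert(self, kind, pid):
        if kind == "memory_remap":
            # First alert: escalate observation for this process.
            self.armed.add(pid)
            return "watch_filesystem_enumeration"
        if kind == "filesystem_enumerate" and pid in self.armed:
            # Second related alert: conclusive; respond per rulebase.
            self.actions += [("terminate", pid), ("respawn_browser", pid)]
            return "terminate"
        return None  # unrelated event; no action

engine = AnalyticEngineSketch()
engine.on_alert("memory_remap", 101)
assert engine.on_alert("filesystem_enumerate", 101) == "terminate"
```

An enumeration by a process that was never remapped produces no response, which is what keeps ordinary filesystem activity from being interdicted.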
The Analytics Engine will also add the signature of the remapped binary to rules memory, allowing for the next attack of the same type to be immediately interdicted at time of attempted process memory remapping.
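Storing the signature of the remapped binary so that a repeat attack is interdicted at remap time can be sketched as a hash set lookup. Function names are hypothetical.

```python
import hashlib

known_bad_signatures = set()  # the "rules memory" of stored signatures

def record_signature(image_bytes):
    """Add the remapped binary's hash after a confirmed attack."""
    known_bad_signatures.add(hashlib.sha256(image_bytes).hexdigest())

def interdict_on_remap(image_bytes):
    """True: block the remap immediately, before any further correlation."""
    return hashlib.sha256(image_bytes).hexdigest() in known_bad_signatures

record_signature(b"malicious image")
assert interdict_on_remap(b"malicious image")
assert not interdict_on_remap(b"benign image")
```

The second occurrence of the same attack is therefore stopped at the first alert rather than after the enumeration step.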
The Analytics Engine will also report on any or all of these events to the reporting console (e.g., a user interface generated by Management Module 114), depending upon the configuration of the particular system.
Embodiments of the present invention include a computer program code-based product, which includes a computer readable storage medium having program code stored therein which can be used to instruct a computer to perform any of the functions, methods and/or modules associated with the present invention. The non-transitory computer readable medium includes any of, but not limited to, the following: CD-ROM, DVD, magnetic tape, optical disc, hard drive, floppy disk, ferroelectric memory, flash memory, phase-change memory, ferromagnetic memory, optical storage, charge coupled devices, magnetic or optical cards, smart cards, EEPROM, EPROM, RAM, ROM, DRAM, SRAM, SDRAM, and/or any other appropriate static, dynamic, or volatile memory or data storage devices, but does not include a transitory signal per se.
The above systems are implemented in various computing environments. For example, the present invention may be implemented on a conventional IBM PC or equivalent, multi-nodal system (e.g., LAN) or networking system (e.g., Internet, WWW, wireless web). All programming and data related thereto are stored in computer memory, static or dynamic or non-volatile, and may be retrieved by the user in any of: conventional computer storage, display (e.g., CRT, flat panel LCD, plasma, etc.) and/or hardcopy (i.e., printed) formats. The programming of the present invention may be implemented by one skilled in the art of computer systems and/or software design.
In the preceding specification, the invention has been described with reference to specific exemplary embodiments for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of this disclosure. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.
It should be further understood that any of the features described with respect to one of the embodiments described herein may be similarly applied to any of the other embodiments described herein without departing from the scope of the present invention.

Claims (27)

Having thus described the invention, what is claimed is:
1. A security system for securing and responding to security threats in a computer having a Central Processing Unit (CPU), a Kernel/Operating System, and a plurality of software applications, the system including:
a low-level data collector module in the form of a hypervisor, implemented with the CPU, configured to intermediate a predetermined selection of first tier calls between the CPU and the Kernel/Operating System without creating hardware-isolated virtual machines for each task performed by a user, and to store identifying information pertaining to the intermediated first tier calls (first tier call IDs) in a data store, the hypervisor being configurable in real-time to selectively enlarge or contract the predetermined selection of first tier calls;
a kernel module, implemented with the CPU, configured to intermediate a predetermined selection of second tier calls between the Kernel/Operating System and the applications, and to store identifying information pertaining to the intermediated second tier calls (second tier call IDs) in the data store;
an Analytic Engine, implemented with the CPU, configured to aggregate and map the stored first tier call IDs and second tier call IDs to a rulebase, to generate a threat analysis, the rulebase including patterns of first tier call IDs and second tier call IDs associated with identifiable security threats;
the Analytic Engine being configured to selectively enlarge or contract the predetermined selection of first tier calls and the predetermined selection of second tier calls to respectively increase or decrease specificity of said threat analysis;
the Analytic Engine being further configured to implement one or more of a plurality of responsive actions in response to said threat analysis; and
a Management Module communicably coupled to the rulebase, the low-level data collector module, the Kernel module and the Analytic Engine, the Management Module configured to generate a plurality of user interfaces accessible by a user computer communicably couplable to the system, the user interfaces configured to enable a user to update the rulebase and configure the low-level collector module, the Kernel module, and the Analytic Engine.
2. The system of claim 1, wherein the Analytic Engine is configured to implement one or more of a plurality of responsive actions including one or more of (a) process termination, (b) thread termination, (c) event and alert notification and logging, (d) user disablement, (e) network disconnection, and (f) process fingerprinting.
3. The system of claim 1, further comprising:
a user space module configured to collect a predetermined selection of user space data associated with the applications, and to store identifying information pertaining to the collected user space data (user space IDs) in the data store;
the Analytic Engine being further configured to aggregate and map the stored first tier call IDs, second tier call IDs, and the user space IDs to the rulebase, to generate a threat analysis, the rulebase including patterns of first tier call IDs, second tier call IDs and user space IDs associated with identifiable security threats; and
the Analytic Engine being configured to selectively enlarge or contract the predetermined selection of first tier calls, the predetermined selection of second tier calls, and/or the predetermined selection of user space data to respectively increase or decrease the specificity of said threat analysis.
4. The system of claim 3, wherein the Analytic Engine is configured to automatically enlarge the predetermined selection of first tier calls, the predetermined selection of second tier calls, and/or the predetermined selection of user space data to increase the specificity of said threat analysis from a base level to one or more escalated levels when the threat analysis identifies a potential security threat.
5. The system of claim 4, wherein the Analytic Engine is configured to automatically contract the predetermined selection of first tier calls, the predetermined selection of second tier calls, and/or the predetermined selection of user space data to decrease the specificity of said threat analysis from the one or more escalated levels towards the base level once one or more of the plurality of responsive actions has been implemented.
6. The system of claim 3, wherein the predetermined selection of user space data includes one or more of (a) Application Mouse Activity, (b) Application Keyboard Activity, (c) System Logfile Activity, and (d) System Registry Fields.
7. The system of claim 1, wherein the predetermined selection of first tier calls include one or more events or calls for activity that would otherwise pass directly between the CPU, hardware devices, and/or the Kernel/Operating System.
8. The system of claim 7, wherein the predetermined selection of first tier calls includes one or more of (a) apicmod=Advanced Programmable Interrupt Controller Module, (b) gmmumod=Guest Memory Management Unit Module, (c) gpmmumod=Guest Physical Memory Management Unit Module, (d) idtmod=Interrupt Descriptor Table Module, (e) kymtrmod=Keyboard Monitor Module, (f) msmtrmod=Mouse Monitor Module, (g) mxmlmod=Mini XML Module, (h) nwmtrmod=Network Monitor Module, (i) prmtnmod=Preemption Module, and (j) udis86mod=udis86 Module.
9. The system of claim 1, wherein the predetermined selection of second tier calls include one or more events or calls for service or data between the applications and the Kernel/Operating System including scheduling and functional service delivery.
10. The system of claim 9, wherein the predetermined selection of second tier calls includes communications with one or more of a (a) Network Monitor Driver, (b) Registry Monitor Driver, (c) Filesystem Monitor Driver, (d) Process Monitor Driver, and (e) Process Governor Driver.
11. The system of claim 1, further comprising the data store and the rulebase.
12. The system of claim 11, wherein said data store is disposed remotely from the computer.
13. The system of claim 11, wherein the rulebase is disposed remotely from the computer.
14. A method for securing and responding to security threats in a computer having a Central Processing Unit (CPU), a Kernel/Operating System, and a plurality of software applications, the method including:
(a) intermediating, with a low-level data collector module in the form of a hypervisor implemented with the CPU, a predetermined selection of first tier calls between the CPU and the Kernel/Operating System without creating hardware-isolated virtual machines for each task performed by a user, and storing identifying information pertaining to the intermediated first tier calls (first tier call IDs) in a data store, the hypervisor being configurable in real-time to selectively enlarge or contract the predetermined selection of first tier calls;
(b) intermediating, with a kernel module implemented with the CPU, a predetermined selection of second tier calls between the Kernel/Operating System and the applications, and storing identifying information pertaining to the intermediated second tier calls (second tier call IDs) in the data store;
(c) aggregating and mapping, with an Analytic Engine implemented with the CPU, the stored first tier call IDs and second tier call IDs to a rulebase, to generate a threat analysis, the rulebase including patterns of first tier call IDs and second tier call IDs associated with identifiable security threats;
(d) selectively enlarging or contracting, with the Analytic Engine, the predetermined selection of first tier calls and the predetermined selection of second tier calls to respectively increase or decrease specificity of said threat analysis;
(e) implementing, with the Analytic Engine, one or more of a plurality of responsive actions in response to said threat analysis; and
(f) actuating a plurality of user interfaces, with a Management Module, to update the rulebase and configure the low-level collector module, the Kernel module, and the Analytic Engine.
15. The method of claim 14, wherein said implementing (e) further comprises implementing one or more of a plurality of responsive actions including process termination, thread termination, event and alert notification and logging, user disablement, network disconnection, and process fingerprinting.
16. The method of claim 14, further comprising:
(g) collecting, with a user space module implemented with a processor, a predetermined selection of user space data associated with the applications, and storing identifying information pertaining to the collected user space data (user space IDs) in the data store;
said aggregating and mapping (c) further comprises aggregating and mapping the stored first tier call IDs, second tier call IDs, and the user space IDs to the rulebase, to generate a threat analysis, the rulebase including patterns of first tier call IDs, second tier call IDs and user space IDs associated with identifiable security threats; and
said selectively enlarging or contracting (d) further comprising selectively enlarging or contracting the predetermined selection of first tier calls, the predetermined selection of second tier calls, and/or the predetermined selection of user space data to respectively increase or decrease the specificity of said threat analysis.
17. The method of claim 16, wherein said selectively enlarging or contracting (d) further comprises automatically enlarging the predetermined selection of first tier calls, the predetermined selection of second tier calls, and/or the predetermined selection of user space data to increase the specificity of said threat analysis from a base level to one or more escalated levels when the threat analysis identifies a potential security threat.
18. The method of claim 17, wherein said selectively enlarging or contracting (d) further comprises automatically contracting the predetermined selection of first tier calls, the predetermined selection of second tier calls, and/or the predetermined selection of user space data to decrease the specificity of said threat analysis from the one or more escalated levels towards the base level once one or more of the plurality of responsive actions has been implemented.
19. The method of claim 16, wherein said collecting (g) further comprises collecting a predetermined selection of user space data including one or more of (a) Application Mouse Activity, (b) Application Keyboard Activity, (c) System Logfile Activity, and (d) System Registry Fields.
20. The method of claim 14, wherein said intermediating (a) further comprises intermediating a predetermined selection of first tier calls including one or more events or calls for activity that would otherwise pass directly between the CPU, hardware devices, and/or the Kernel/Operating System.
21. The method of claim 20, wherein said intermediating (a) further comprises intermediating a predetermined selection of first tier calls including one or more of (a) apicmod=Advanced Programmable Interrupt Controller Module, (b) gmmumod=Guest Memory Management Unit Module, (c) gpmmumod=Guest Physical Memory Management Unit Module, (d) idtmod=Interrupt Descriptor Table Module, (e) kymtrmod=Keyboard Monitor Module, (f) msmtrmod=Mouse Monitor Module, (g) mxmlmod=Mini XML Module, (h) nwmtrmod=Network Monitor Module, (i) prmtnmod=Preemption Module, and (j) udis86mod=udis86 Module.
22. The method of claim 14, wherein said intermediating (b) further comprises intermediating a predetermined selection of second tier calls include one or more events or calls for service or data between the applications and the Kernel/Operating System including scheduling and functional service delivery.
23. The method of claim 22, wherein said intermediating (b) further comprises intermediating a predetermined selection of second tier calls including communications with one or more of a (a) Network Monitor Driver, (b) Registry Monitor Driver, (c) Filesystem Monitor Driver, (d) Process Monitor Driver, and (e) Process Governor Driver.
24. The method of claim 14, comprising disposing the data store and the rulebase locally to the computer.
25. The method of claim 24, comprising disposing the data store and the rulebase remotely from the computer.
26. The method of claim 14, comprising adding particular first tier call IDs, second tier call IDs, and user space IDs, associated with particular identified threats, to the rulebase.
27. The method of claim 26, further comprising sharing the rulebase among a plurality of said computers.
US14/670,721 2014-03-27 2015-03-27 Malicious software identification integrating behavioral analytics and hardware events Active US9977895B2 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US14/670,721 US9977895B2 (en) 2014-03-27 2015-03-27 Malicious software identification integrating behavioral analytics and hardware events
US15/095,607 US9589132B2 (en) 2014-03-27 2016-04-11 Method and apparatus for hypervisor based monitoring of system interactions
US15/283,910 US9733976B2 (en) 2014-03-27 2016-10-03 Method and apparatus for SYSRET monitoring of system interactions
US15/853,795 US10078752B2 (en) 2014-03-27 2017-12-23 Continuous malicious software identification through responsive machine learning
US16/131,894 US10460104B2 (en) 2014-03-27 2018-09-14 Continuous malicious software identification through responsive machine learning

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461971244P 2014-03-27 2014-03-27
US14/670,721 US9977895B2 (en) 2014-03-27 2015-03-27 Malicious software identification integrating behavioral analytics and hardware events

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US15/069,253 Continuation-In-Part US10198280B2 (en) 2014-03-27 2016-03-14 Method and apparatus for hypervisor based monitoring of system interactions
US15/853,795 Continuation-In-Part US10078752B2 (en) 2014-03-27 2017-12-23 Continuous malicious software identification through responsive machine learning

Publications (2)

Publication Number Publication Date
US20150281267A1 US20150281267A1 (en) 2015-10-01
US9977895B2 true US9977895B2 (en) 2018-05-22

Family

ID=54192032

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/670,721 Active US9977895B2 (en) 2014-03-27 2015-03-27 Malicious software identification integrating behavioral analytics and hardware events
US15/095,607 Active US9589132B2 (en) 2014-03-27 2016-04-11 Method and apparatus for hypervisor based monitoring of system interactions

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/095,607 Active US9589132B2 (en) 2014-03-27 2016-04-11 Method and apparatus for hypervisor based monitoring of system interactions

Country Status (4)

Country Link
US (2) US9977895B2 (en)
EP (1) EP3123390A4 (en)
AU (1) AU2015235840A1 (en)
WO (1) WO2015148914A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10313379B1 (en) * 2017-06-09 2019-06-04 Symantec Corporation Systems and methods for making security-related predictions
US11936666B1 (en) 2016-03-31 2024-03-19 Musarubra Us Llc Risk analyzer for ascertaining a risk of harm to a network and generating alerts regarding the ascertained risk
US11979428B1 (en) * 2016-03-31 2024-05-07 Musarubra Us Llc Technique for verifying exploit/malware at malware detection appliance through correlation with endpoints

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10078752B2 (en) * 2014-03-27 2018-09-18 Barkly Protects, Inc. Continuous malicious software identification through responsive machine learning
US10198280B2 (en) 2015-12-14 2019-02-05 Barkly Protects, Inc. Method and apparatus for hypervisor based monitoring of system interactions
US11507663B2 (en) * 2014-08-11 2022-11-22 Sentinel Labs Israel Ltd. Method of remediating operations performed by a program and system thereof
US9710648B2 (en) 2014-08-11 2017-07-18 Sentinel Labs Israel Ltd. Method of malware detection and system thereof
US10102374B1 (en) * 2014-08-11 2018-10-16 Sentinel Labs Israel Ltd. Method of remediating a program and system thereof by undoing operations
US10387649B2 (en) * 2015-10-31 2019-08-20 Quick Heal Technologies Private Limited Detecting malware when executing in a system
US10027692B2 (en) * 2016-01-05 2018-07-17 International Business Machines Corporation Modifying evasive code using correlation analysis
US10650621B1 (en) 2016-09-13 2020-05-12 Iocurrents, Inc. Interfacing with a vehicular controller area network
US11005859B1 (en) * 2016-09-23 2021-05-11 EMC IP Holding Company LLC Methods and apparatus for protecting against suspicious computer operations using multi-channel protocol
US10685111B2 (en) * 2016-10-31 2020-06-16 Crowdstrike, Inc. File-modifying malware detection
US10467082B2 (en) * 2016-12-09 2019-11-05 Microsoft Technology Licensing, Llc Device driver verification
US11695800B2 (en) 2016-12-19 2023-07-04 SentinelOne, Inc. Deceiving attackers accessing network data
US10635479B2 (en) * 2016-12-19 2020-04-28 Bitdefender IPR Management Ltd. Event filtering for virtual machine security applications
US11616812B2 (en) 2016-12-19 2023-03-28 Attivo Networks Inc. Deceiving attackers accessing active directory data
US10235157B2 (en) 2016-12-29 2019-03-19 Arris Enterprises Llc Method and system for analytics-based updating of networked devices
US10462171B2 (en) * 2017-08-08 2019-10-29 Sentinel Labs Israel Ltd. Methods, systems, and devices for dynamically modeling and grouping endpoints for edge networking
CN107479946B (en) * 2017-08-16 2020-06-16 南京大学 Interactive behavior monitoring scheme of kernel module
US11750623B2 (en) * 2017-09-04 2023-09-05 ITsMine Ltd. System and method for conducting a detailed computerized surveillance in a computerized environment
US10754950B2 (en) * 2017-11-30 2020-08-25 Assured Information Security, Inc. Entity resolution-based malicious file detection
WO2019125516A1 (en) * 2017-12-23 2019-06-27 Barkly Protects, Inc. Continuous malicious software identification through responsive machine learning
US11470115B2 (en) 2018-02-09 2022-10-11 Attivo Networks, Inc. Implementing decoys in a network environment
US10664592B2 (en) * 2018-03-22 2020-05-26 International Business Machines Corporation Method and system to securely run applications using containers
CN110445632B (en) * 2018-05-04 2023-09-01 北京京东尚科信息技术有限公司 Method and device for preventing client from crashing
US11016798B2 (en) 2018-06-01 2021-05-25 The Research Foundation for the State University Multi-hypervisor virtual machines that run on multiple co-located hypervisors
EP3973427A4 (en) 2019-05-20 2023-06-21 Sentinel Labs Israel Ltd. Systems and methods for executable code detection, automatic feature extraction and position independent code detection
RU2750627C2 (en) * 2019-06-28 2021-06-30 Акционерное общество "Лаборатория Касперского" Method for searching for samples of malicious messages
US11579857B2 (en) 2020-12-16 2023-02-14 Sentinel Labs Israel Ltd. Systems, methods and devices for device fingerprinting and automatic deployment of software in a computing network using a peer-to-peer approach
EP4030325A1 (en) * 2021-01-19 2022-07-20 Nokia Solutions and Networks Oy Information system security
US11899782B1 (en) 2021-07-13 2024-02-13 SentinelOne, Inc. Preserving DLL hooks
CN114726633B (en) * 2022-04-14 2023-10-03 中国电信股份有限公司 Traffic data processing method and device, storage medium and electronic equipment

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080034429A1 (en) * 2006-08-07 2008-02-07 Schneider Jerome L Malware management through kernel detection
US20100017879A1 (en) * 2006-06-21 2010-01-21 Wibu-Systems Ag Method and System for Intrusion Detection
US20100192222A1 (en) 2009-01-23 2010-07-29 Microsoft Corporation Malware detection using multiple classifiers
US20110185423A1 (en) 2010-01-27 2011-07-28 Mcafee, Inc. Method and system for detection of malware that connect to network destinations through cloud scanning and web reputation
US20110289586A1 (en) 2004-07-15 2011-11-24 Kc Gaurav S Methods, systems, and media for detecting and preventing malcode execution
US8214900B1 (en) 2008-12-18 2012-07-03 Symantec Corporation Method and apparatus for monitoring a computer to detect operating system process manipulation
US20120192178A1 (en) * 2011-01-26 2012-07-26 International Business Machines Corporation Resetting a virtual function that is hosted by an input/output adapter
US20130198842A1 (en) 2012-01-31 2013-08-01 Trusteer Ltd. Method for detecting a malware
US20130275975A1 (en) * 2010-10-27 2013-10-17 Hitachi, Ltd. Resource management server, resource management method and storage medium in which resource management program is stored
US20130312099A1 (en) 2012-05-21 2013-11-21 Mcafee, Inc. Realtime Kernel Object Table and Type Protection
US20140007139A1 (en) * 2012-06-28 2014-01-02 Real Enterprise Solutions Development B.V. Dynamic rule management for kernel mode filter drivers
US20140075555A1 (en) * 2011-08-02 2014-03-13 Apoorva Technologies, LTD System and method for protecting computer systems from malware attacks
US20140115652A1 (en) * 2012-10-19 2014-04-24 Aditya Kapoor Real-Time Module Protection
US20140297780A1 (en) * 2013-03-26 2014-10-02 Vmware, Inc. Method and system for vm-granular ssd/flash cache live migration
US20150121135A1 (en) * 2013-10-31 2015-04-30 Assured Information Security, Inc. Virtual machine introspection facilities
US20170147387A1 (en) * 2014-08-26 2017-05-25 Amazon Technologies, Inc. Identifying kernel data structures

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8875295B2 (en) * 2013-02-22 2014-10-28 Bitdefender IPR Management Ltd. Memory introspection engine for integrity protection of virtual machines
US9323931B2 (en) * 2013-10-04 2016-04-26 Bitdefender IPR Management Ltd. Complex scoring for malware detection

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11936666B1 (en) 2016-03-31 2024-03-19 Musarubra Us Llc Risk analyzer for ascertaining a risk of harm to a network and generating alerts regarding the ascertained risk
US11979428B1 (en) * 2016-03-31 2024-05-07 Musarubra Us Llc Technique for verifying exploit/malware at malware detection appliance through correlation with endpoints
US10313379B1 (en) * 2017-06-09 2019-06-04 Symantec Corporation Systems and methods for making security-related predictions

Also Published As

Publication number Publication date
US20160224786A1 (en) 2016-08-04
AU2015235840A1 (en) 2016-08-18
WO2015148914A1 (en) 2015-10-01
EP3123390A4 (en) 2017-10-25
EP3123390A1 (en) 2017-02-01
US20150281267A1 (en) 2015-10-01
US9589132B2 (en) 2017-03-07

Similar Documents

Publication Publication Date Title
US9977895B2 (en) Malicious software identification integrating behavioral analytics and hardware events
US10460104B2 (en) Continuous malicious software identification through responsive machine learning
US11736530B2 (en) Framework for coordination between endpoint security and network security services
US10972493B2 (en) Automatically grouping malware based on artifacts
US10530789B2 (en) Alerting and tagging using a malware analysis platform for threat intelligence made actionable
US11636206B2 (en) Deferred malware scanning
US10200390B2 (en) Automatically determining whether malware samples are similar
US10200389B2 (en) Malware analysis platform for threat intelligence made actionable
US9251343B1 (en) Detecting bootkits resident on compromised computers
KR20150006042A (en) Systems and methods for providing mobile security based on dynamic attestation
US11706251B2 (en) Simulating user interactions for malware analysis
WO2019125516A1 (en) Continuous malicious software identification through responsive machine learning
Li et al. Viso: Characterizing malicious behaviors of virtual machines with unsupervised clustering
AU2022426852A1 (en) Zero trust file integrity protection
Samani An intelligent malware classification framework

Legal Events

Date Code Title Description
AS Assignment

Owner name: CYLENT SYSTEMS, INC., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DANAHY, JOHN J.;BERG, RYAN J.;SWIDOWSKI, KIRK R.;AND OTHERS;SIGNING DATES FROM 20150327 TO 20150625;REEL/FRAME:035976/0322

AS Assignment

Owner name: BARKLY PROTECTS, INC., MASSACHUSETTS

Free format text: CHANGE OF NAME;ASSIGNOR:CYLENT SYSTEMS, INC.;REEL/FRAME:038394/0480

Effective date: 20150514

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: SILICON VALLEY BANK, MASSACHUSETTS

Free format text: SECURITY INTEREST;ASSIGNOR:BARKLY PROTECTS, INC.;REEL/FRAME:047887/0046

Effective date: 20190102

AS Assignment

Owner name: ALERT LOGIC, INC., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BARKLY PROTECTS, INC.;REEL/FRAME:048323/0598

Effective date: 20190130

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: BARKLY PROTECTS, INC., MASSACHUSETTS

Free format text: TERMINATION AND RELEASE OF INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:049285/0620

Effective date: 20190513

AS Assignment

Owner name: PACIFIC WESTERN BANK, NORTH CAROLINA

Free format text: SECURITY INTEREST;ASSIGNOR:ALERT LOGIC, INC.;REEL/FRAME:052203/0073

Effective date: 20200317

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

AS Assignment

Owner name: ALERT LOGIC, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:PACIFIC WESTERN BANK;REEL/FRAME:059498/0361

Effective date: 20220324

AS Assignment

Owner name: JEFFERIES FINANCE LLC, AS COLLATERAL AGENT, NEW YORK

Free format text: FIRST LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:ALERT LOGIC, INC.;REEL/FRAME:060306/0555

Effective date: 20220603

Owner name: GOLUB CAPITAL MARKETS LLC, AS COLLATERAL AGENT, ILLINOIS

Free format text: SECOND LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:ALERT LOGIC, INC.;REEL/FRAME:060306/0758

Effective date: 20220603