US9378361B1 - Anomaly sensor framework for detecting advanced persistent threat attacks - Google Patents

Anomaly sensor framework for detecting advanced persistent threat attacks

Info

Publication number
US9378361B1
Authority
US
United States
Prior art keywords
activity
computer system
host
sensor
protected computer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US13/731,635
Inventor
Ting-Fang Yen
Ari Juels
Aditya Kuppa
Kaan Onarlioglu
Alina Oprea
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
EMC Corp
Original Assignee
EMC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by EMC Corp filed Critical EMC Corp
Priority to US13/731,635 priority Critical patent/US9378361B1/en
Assigned to EMC CORPORATION reassignment EMC CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KUPPA, ADITYA, JUELS, ARI, OPREA, ALINA, YEN, TING-FANG, ONARLIOGLU, KAAN
Application granted granted Critical
Publication of US9378361B1 publication Critical patent/US9378361B1/en
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT reassignment CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT SECURITY AGREEMENT Assignors: ASAP SOFTWARE EXPRESS, INC., AVENTAIL LLC, CREDANT TECHNOLOGIES, INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL SOFTWARE INC., DELL SYSTEMS CORPORATION, DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., MAGINATICS LLC, MOZY, INC., SCALEIO LLC, SPANNING CLOUD APPS LLC, WYSE TECHNOLOGY L.L.C.
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT reassignment THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT SECURITY AGREEMENT Assignors: ASAP SOFTWARE EXPRESS, INC., AVENTAIL LLC, CREDANT TECHNOLOGIES, INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL SOFTWARE INC., DELL SYSTEMS CORPORATION, DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., MAGINATICS LLC, MOZY, INC., SCALEIO LLC, SPANNING CLOUD APPS LLC, WYSE TECHNOLOGY L.L.C.
Assigned to EMC IP Holding Company LLC reassignment EMC IP Holding Company LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: EMC CORPORATION
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. reassignment THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. SECURITY AGREEMENT Assignors: CREDANT TECHNOLOGIES, INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., WYSE TECHNOLOGY L.L.C.
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. reassignment THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. SECURITY AGREEMENT Assignors: CREDANT TECHNOLOGIES INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., WYSE TECHNOLOGY L.L.C.
Assigned to ASAP SOFTWARE EXPRESS, INC., DELL INTERNATIONAL, L.L.C., EMC CORPORATION, CREDANT TECHNOLOGIES, INC., MAGINATICS LLC, DELL MARKETING L.P., EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., SCALEIO LLC, DELL USA L.P., DELL SOFTWARE INC., DELL SYSTEMS CORPORATION, MOZY, INC., WYSE TECHNOLOGY L.L.C., DELL PRODUCTS L.P., AVENTAIL LLC reassignment ASAP SOFTWARE EXPRESS, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH
Assigned to DELL INTERNATIONAL L.L.C., DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO ASAP SOFTWARE EXPRESS, INC.), EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.), SCALEIO LLC, DELL USA L.P., DELL PRODUCTS L.P., DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), EMC CORPORATION (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MAGINATICS LLC) reassignment DELL INTERNATIONAL L.L.C. RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001) Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT
Assigned to DELL PRODUCTS L.P., DELL USA L.P., EMC CORPORATION (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MAGINATICS LLC), DELL INTERNATIONAL L.L.C., DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO ASAP SOFTWARE EXPRESS, INC.), DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.), SCALEIO LLC reassignment DELL PRODUCTS L.P. RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001) Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50: Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55: Detecting local intrusion or implementing counter-measures
    • G06F21/552: Detecting local intrusion or implementing counter-measures involving long-term monitoring or reporting
    • G06F21/56: Computer malware detection or handling, e.g. anti-virus arrangements
    • G06F21/562: Static detection
    • G06F21/566: Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/14: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1416: Event detection, e.g. attack signature detection
    • H04L63/1425: Traffic logging, e.g. anomaly detection

Definitions

  • the invention relates to the field of computer system security.
  • APT: advanced persistent threat.
  • An APT generally refers to an attacker having capabilities and resources to persistently target a specific entity, along with the techniques they use, including malware that infects a system under attack.
  • An APT may be capable of locating and taking harmful action with respect to sensitive data in a computer system, such as copying confidential data to an external machine for criminal or other ill-intended activities.
  • the APT performs its tasks in a stealthy manner so as to avoid detection.
  • an APT may be active in a system for a very long period of weeks or months and create corresponding levels of damage.
  • among the mechanisms that can be utilized are detection mechanisms for detecting the presence of an APT in a computer system.
  • any malware that has entered a system can only be removed or disabled after it has first been identified in some manner. Identification in turn requires detection, i.e., acquiring knowledge indicating possible or actual presence of an APT or similar threat in a protected system.
  • the detection of APTs and similar threats in protected computer systems has relied upon human security specialists who have experience with the threats and can interrogate a protected system for operational information that may be indicative of APT presence.
  • a security specialist is looking for patterns of known bad activities, such as communications logs identifying an external host known or suspected to be used or controlled by computer criminals.
  • the process can be labor-intensive, difficult to generalize, and non-scalable.
  • Existing anomaly detection schemes commonly focus on very obvious anomalies such as volume-based outliers, but these are ill-suited for “low-and-slow” APT attacks and also suffer from high false positive rates.
  • a presently disclosed technique for threat detection is both automated and sufficiently sophisticated to detect the more subtle operation signatures presented by APTs and similar threats.
  • the technique is based partly on a hypothesis that, however stealthy an attacker might be, its behavior in attempting to steal sensitive information or subvert system operations should be different from that of a normal, benign user.
  • because APT attacks typically include multiple stages, e.g., exploitation, command-and-control, lateral movement, and objectives, each move by an attacker provides an opportunity to detect behavioral deviations from the norm. Correlating events that may otherwise be treated as independent can reveal evidence of an intrusion, exposing stealthy attacks in a manner not possible with previous methods.
  • the technique includes use of detectors of behavioral deviations referred to as “anomaly sensors”, where each sensor examines one aspect of hosts' or users' activities. For instance, a sensor may examine the external sites hosts contact to identify unusual connections (potential command-and-control channels), profile the machines each user logs into to find anomalous access patterns (potential “pivoting” behavior in the lateral movement stage), study users' regular working hours to flag suspicious activities in the middle of the night, or track the flow of data between internal hosts to find unusual locations where large amounts of data are being gathered (potential staging servers before data exfiltration).
  • a computer system realizes a threat detection system for detecting threats active in a protected computer system.
  • the system includes a set of anomaly sensors of distinct types including user-activity sensors, host-activity sensors and application-activity sensors, with each sensor being operative (1) to build a history of pertinent activity over a training period of operation, and (2) during a subsequent detection period of operation, compare current activity to the history of pertinent activity to detect new activity not occurring in the training period. The new activity is identified in respective sensor output.
  • the system further includes a set of correlators of different distinct types corresponding to different stages of threat activity according to modeled threat behavior. Each correlator receives output of one or more sensors of different types and applies logical and/or temporal testing to received sensor outputs to detect activity patterns of the different stages. Each correlator uses results of the logical and/or temporal testing to generate a respective alert output for a human or machine user of the threat detection system.
  • a framework is used in which templates are defined by a human analyst for the correlations implemented by the correlators.
  • the human analyst is given the flexibility of combining multiple sensors according to known attack patterns (e.g., command-and-control communications followed by lateral movement), to look for abnormal events that warrant investigation, or to generate behavioral reports of a given user's activities across time.
  • FIG. 1 is a block diagram of a computer system
  • FIG. 2 is a detailed block diagram of a threat detection system
  • FIG. 3 is a flow diagram of high-level operation of the threat detection system
  • FIG. 4 is a flow diagram of an address sanitization process
  • FIG. 5 is a flow diagram of a time sanitization process
  • FIG. 6 is a hardware block diagram of a computer.
  • FIG. 1 shows a computing system augmented by monitoring and/or protection elements.
  • the system includes a protected distributed computing system (PROTECTED SYSTEM) 10 and a security information and event management (SIEM) system 12 continually receiving a wide variety of system activity information 14 from operational components of the protected system 10.
  • the SIEM system generates parsed logs 16 of logged activity information which are provided to a threat detection system 18 including a preprocessor 20 and a monitor/analyzer 22.
  • the preprocessor 20 generates sanitized logs 24 for use by the monitor/analyzer 22, which in turn provides user-level functionality to a separate human or machine user, the functionality including things like alerts, reports, interactive tools for controlling or augmenting operations, etc.
  • the protected system 10 is generally a wide-area distributed computing system, such as a large organizational network. It may include one or more very large datacenters, as well as a number of smaller or “satellite” datacenters, all interconnected by a wide-area network that may include public network infrastructure (Internet) along with private networking components such as switches and routers, firewalls, virtual private network (VPN) components, etc.
  • Each datacenter includes local resources such as server computers (servers), client computers and storage systems, coupled together using local/intermediate networks such as local-area networks (LANs), metro-area networks (MANs), storage-area networks (SANs), etc.
  • the SIEM system 12 is a specialized computing system including hardware computing components executing specialized SIEM software components, including a large database for storing the parsed logs 16.
  • the SIEM system 12 receives raw logs (not shown) generated by logging devices in the system and performs basic parsing into fields (e.g. IP address, timestamp, Msg ID, etc.) to produce the parsed logs 16.
  • the SIEM system may utilize a SIEM product known as enVision™ sold by RSA Security, Inc., the security division of EMC Corporation.
  • the SIEM system 12 gathers the raw logs generated by different devices within the protected system 10 and stores the parsed logs 16 in the database, functioning as a centralized repository.
  • the logs 16 need to be stored for some period of time (e.g., at least several months) in order to enable the analysis described herein.
  • the threat detection system 18 may be primarily software-implemented, utilizing hardware resources of the SIEM system 12 or in some cases its own dedicated hardware computers.
  • the threat detection system 18 is described herein as a collection of functional components. As described below, these are to be understood as one or more general-purpose computers executing specialized software for realizing each function.
  • the major component of the threat detection system 18 is the monitor/analyzer 22. As described more below, it employs both top-down and bottom-up components.
  • the top-down component builds and utilizes templates based on known information about current and prior APT attacks, and these templates are used in analysis for detecting behavior that may be indicative of such attacks.
  • the bottom-up component gathers, stores and processes the system activity information as reflected in the sanitized logs 24 from the preprocessor 20.
  • the monitor/analyzer 22 includes sensors 30 and correlators 32.
  • various types of sensor 30 are deployed, including for example a command-and-control (C & C) sensor 30-1, new login sensor 30-2, new applications sensor 30-3 and critical servers sensor 30-4.
  • the correlators 32 work from sensor output in the form of reports 34. Examples of correlators 32 include C & C and new application correlator 32-1, unusual login correlator 32-2 and C & C and new login correlator 32-3. Functions of these elements are described below.
  • the preprocessor 20 includes a time sanitizer 40 and a host address sanitizer 42.
  • the host address sanitizer 42 includes a first sanitizer (STAT) 42-1 for static host addresses and a second sanitizer (DYN) 42-2 for dynamic host addresses.
  • Example parsed log inputs 16 are shown, including logs from web proxies, firewalls, domain controllers, VPN components, and Dynamic Host Configuration Protocol (DHCP) servers. Specific aspects of the preprocessor 20 are described below.
  • regarding the above-mentioned top-down component, it may utilize published information about current and past APT attacks (documented in existing literature) as well as human analyst experience. This information is used to build templates for APT detection. Template definition and deployment may be based on an assumed model for APT attacks. In one model, a typical APT attack consists of the following stages: 1) Exploitation, 2) Command-and-control (C & C), 3) Lateral movement, and 4) Objectives (e.g., data exfiltration, service disruption). Exploitation refers to a threat's entering a system at points of vulnerability. Command and control refers to a resident threat's communications with the attacker (e.g. to receive new attack directives or to report back the results) and operations in the system, such as gathering sensitive data.
  • Lateral movement refers to propagation or migration of threat components/activities within a system, and objectives refers to activities that provide the desired data or other results from operation of the threat.
  • Examples of attack classes include attackers that propagate through social media, attackers that gather data to a central location and exfiltrate the data, and attackers that exfiltrate from multiple machines in the enterprise. Templates for such different classes of attack can be built and deployed for use. The templates can also be refined during use as more knowledge is gathered about attacker behavior. Note that instead of utilizing known attack signatures, such as external sites known to be controlled by computer criminals or specific text strings used in known exploits, the disclosed system relies on patterns of attack behavior and hence is more difficult to evade.
  • the bottom-up component processes and analyzes information in event logs received by the SIEM system 12 in an enterprise and gathers evidence for abnormal user or host behavior as determined by different anomaly detection sensors 30 .
  • the bottom-up component generally consists of the following layers:
  • the SIEM system 12 gathers logs generated by different devices within an enterprise in a centralized repository, and stores these logs sufficiently long (e.g., at least several months) in order to enable analyses as described herein.
  • the role of individual sensors 30-x is to maintain a history of normal user and host activity in a particular respect, perform profiling related to that aspect of user or host behavior, and generate a report 34 when activity deviating from the typical user or host profile is observed.
  • the following are examples of observable activity, along with the specific sensor 30 involved as well as the model stage in which it occurs:
  • Each individual sensor 30 may use statistical and machine learning techniques for profiling, and more details are given for several example sensors 30 below.
  • the system is preferably general enough to accommodate new types of sensors 30 being added to the system and/or refinement of existing sensors 30 based on experience-based feedback.
  • the reports 34 from multiple sensors 30 are correlated and matched against the attack templates built by the top-down component.
  • An attack template might involve several sensors 30 and could use different operators applied to the reports 34 from the sensors 30 as described below.
  • Alerts may be triggered from the correlators 32 when suspicious events according to defined templates are detected.
  • the alerts may be provided to a human analyst for further investigation.
  • the correlators 32 may export an application programming interface (API) via which an external machine user can receive alerts and/or reports.
  • the system preferably allows human analysts evaluating the alerts to give feedback based on the severity of the alerts and the usefulness of the alerts in detecting real attacks. Based on the analyst feedback, the system may undertake continuous refinement. For instance, new sensors 30 may be added, reports generated by various sensors 30 may be refined, alert prioritization may be improved, etc. New templates can also be generated as more attacks are discovered by the human analysts, and in general more knowledge about various attackers is gathered.
  • One philosophy that may be used in designing the anomaly sensor framework is to make each individual sensor 30 relatively weak or coarse, meaning that it employs a low threshold in selecting event information to be included in its output reports 34, leaving the stronger detection logic for the correlators 32. In this way, more low-level information is generally obtained that may pertain to a variety of threats, both known and unknown, and strong, flexible correlations between sensors 30 can be used to boost the quality of the overall result.
  • the sensors 30 utilize information from logs collected by the SIEM system 12 (e.g., enVision) in an enterprise environment. These include logs generated by proxy web servers, VPN servers, domain controllers, firewalls and DHCP servers for example.
  • the parsed logs 16 require pre-processing before different event types can be correlated. In particular, it is generally necessary to address inconsistencies arising from dynamic IP address assignments, as well as to develop lists of static IP addresses active in the enterprise. It is also necessary to address inconsistencies in the time-stamping of log data from devices in different time zones and using different time-zone configurations.
  • to resolve dynamic IP address assignments, IP addresses are mapped to unique hosts using DHCP (Dynamic Host Configuration Protocol) and VPN logs, as described in more detail below.
  • the outcome of this pre-processing may be stored in a database table with certain fields as shown below. Users can query this table directly. There may also be an interface (e.g., a web form) for accessing this information.
  • Start_time: the start timestamp when this IP is allocated to the host.
  • End_time: the end timestamp when this IP is no longer allocated to the host.
  • Ipaddr: the IP address.
  • Hostname: the hostname.
  • Macaddr: the host's MAC address.
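As an illustration only, a minimal Python sketch of how such a mapping table might be created and queried is shown below; the sqlite3 storage, the table name ip_host_map, and the helper function are assumptions for the example rather than part of the described system.

import sqlite3

# Hypothetical schema mirroring the fields listed above (names are assumed).
SCHEMA = """
CREATE TABLE IF NOT EXISTS ip_host_map (
    start_time INTEGER,  -- UTC epoch seconds when this IP is allocated to the host
    end_time   INTEGER,  -- UTC epoch seconds when this IP is no longer allocated
    ipaddr     TEXT,     -- the IP address
    hostname   TEXT,     -- the hostname
    macaddr    TEXT      -- the host's MAC address (unique host identifier)
);
"""

def host_for_ip(conn, ip, ts):
    """Return the (hostname, macaddr) that held `ip` at epoch time `ts`, if any."""
    return conn.execute(
        "SELECT hostname, macaddr FROM ip_host_map "
        "WHERE ipaddr = ? AND start_time <= ? AND ? < end_time",
        (ip, ts, ts),
    ).fetchone()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    conn.execute("INSERT INTO ip_host_map VALUES (?, ?, ?, ?, ?)",
                 (1000, 2000, "10.0.0.5", "alice-laptop", "aa:bb:cc:dd:ee:ff"))
    print(host_for_ip(conn, "10.0.0.5", 1500))  # ('alice-laptop', 'aa:bb:cc:dd:ee:ff')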
  • IP addresses that do not appear in DHCP and VPN logs may also be examined. For example, these IPs may be obtained from security gateway logs and host operating system (e.g., Windows) event logs. The hostname associated with those IP addresses may be looked up (e.g., by reverse DNS resolution using tools such as “nslookup” or “host”) repeatedly over time. An IP address that always resolves to the same hostname is considered static.
  • the parsed logs 16 are sanitized so that all log entries for all devices are reported in one consistent time, such as UTC time.
  • the sanitization procedure is done by the time sanitizer 40 as described more below.
  • each of the sensors 30 focuses on a particular aspect of host or user network behaviors.
  • the goal of a sensor 30 is to 1) profile/understand common behaviors during a training period covering a sufficiently long span (e.g., at least one or two months) of network activity, and 2) identify outliers during a detection period following the training period.
  • each sensor 30 generates a respective report 34 with an ordered list of alerts (or suspicious behavior) identified during the detection period. The alerts are given a priority and score based on their relevance.
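As a rough illustration of this train-then-detect pattern, the following Python sketch shows a generic sensor that records (entity, observation) pairs during training and emits scored alerts for observations not seen before; the class structure and the rarity-based scoring are assumptions, not the patent's implementation.

from collections import defaultdict
from dataclasses import dataclass, field
from typing import Hashable, Iterable, List, Tuple

@dataclass
class Alert:
    entity: Hashable       # e.g., a host or user
    observation: Hashable  # e.g., a domain, login target, or user-agent string
    score: float           # relevance score used to prioritize the report

@dataclass
class AnomalySensor:
    """Generic sensor: profile behavior during training, flag new activity during detection."""
    history: defaultdict = field(default_factory=lambda: defaultdict(set))

    def train(self, events: Iterable[Tuple[Hashable, Hashable]]) -> None:
        # Build the per-entity history of observed behavior over the training period.
        for entity, observation in events:
            self.history[entity].add(observation)

    def detect(self, events: Iterable[Tuple[Hashable, Hashable]]) -> List[Alert]:
        # Flag observations absent from the training history; score by rarity so that
        # observations never seen anywhere in the enterprise rank highest.
        alerts = []
        for entity, observation in events:
            if observation in self.history[entity]:
                continue
            seen_by = sum(1 for past in self.history.values() if observation in past)
            alerts.append(Alert(entity, observation, score=1.0 / (1 + seen_by)))
        return sorted(alerts, key=lambda a: a.score, reverse=True)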
  • the goal of the C & C sensor 30-1 is to identify possible C & C (Command-and-Control) domains according to the following heuristics:
  • the C & C sensor 30-1 constructs a history of web domains contacted by each host over a training period (e.g., one or two months). Afterward, a domain is flagged if it shows up in a log but is not included in the history. A flagged domain is included in a watch list and actively monitored for some configurable time interval (e.g., one week). If at the end of the week the activity to this domain looks normal (in terms of connection rate and frequency of domain contact from internal hosts), then the domain is added to the history. Otherwise, the domain is included in the report output of the C & C sensor 30-1.
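A simplified Python sketch of this watch-list heuristic might look as follows; the specific thresholds (the one-week watch window and the minimum number of contacting hosts used to judge a domain as normal) are illustrative assumptions.

from datetime import datetime, timedelta

class CCSensor:
    """Flag web domains not seen in training; keep them on a watch list before reporting."""

    WATCH_WINDOW = timedelta(days=7)   # configurable monitoring interval
    MIN_NORMAL_HOSTS = 5               # assumed: widely contacted domains look normal

    def __init__(self):
        self.history = set()   # domains observed during the training period
        self.watchlist = {}    # domain -> (first_seen, set of internal hosts contacting it)

    def train(self, web_logs):
        # web_logs: iterable of (host, domain, timestamp) tuples from proxy logs.
        for _host, domain, _ts in web_logs:
            self.history.add(domain)

    def observe(self, host, domain, ts: datetime, report: list):
        if domain in self.history:
            return
        first_seen, hosts = self.watchlist.setdefault(domain, (ts, set()))
        hosts.add(host)
        if ts - first_seen < self.WATCH_WINDOW:
            return  # still within the watch window
        if len(hosts) >= self.MIN_NORMAL_HOSTS:
            self.history.add(domain)                                   # activity looks normal
        else:
            report.append({"domain": domain, "hosts": sorted(hosts)})  # suspected C & C domain
        del self.watchlist[domain]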
  • the goal of this sensor is to identify new login patterns not observed previously in the infrastructure using Windows log events. It profiles a user based on the set of machines the user commonly logs onto over the training period (e.g., one or two months). Similarly, for each hostname, it finds the set of users that commonly log onto the host. A history is built of {user, host} login events observed during training. Then during detection, new logins are identified and flagged (defined as new {user, host} login events not already in the history). The sensor 30-2 outputs in its report the new login events found during detection.
  • the goal of this sensor is to identify human-like activities generated by a machine when the user is not at the terminal. During the training period, a pattern of regular working hours is built per user. In addition, a whitelist of domains contacted by many different machines at night (corresponding to automated processes) is created. The output of the sensor is a report containing a list of hosts and non-whitelisted web domains contacted outside regular working hours.
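One possible Python sketch of the per-user working-hours profile and nighttime whitelist is given below; the hour-of-day histogram, the night window, and the numeric cutoffs are illustrative assumptions rather than parameters defined by the patent.

from collections import Counter, defaultdict

class AfterHoursSensor:
    """Profile each user's active hours; flag non-whitelisted domains contacted off-hours."""

    RARE_HOUR_FRACTION = 0.05   # assumed cutoff for an hour being unusual for this user
    WHITELIST_MIN_HOSTS = 10    # assumed: nightly domains hit by many machines are automated

    def __init__(self):
        self.hour_counts = defaultdict(Counter)  # user -> Counter of active hours
        self.night_domains = Counter()           # domain -> distinct machines seen at night

    def train(self, events):
        # events: iterable of (user, host, domain, timestamp) tuples.
        night_hosts = defaultdict(set)
        for user, host, domain, ts in events:
            self.hour_counts[user][ts.hour] += 1
            if ts.hour < 6 or ts.hour >= 22:     # assumed definition of "night"
                night_hosts[domain].add(host)
        for domain, hosts in night_hosts.items():
            self.night_domains[domain] = len(hosts)

    def detect(self, user, host, domain, ts):
        counts = self.hour_counts[user]
        total = sum(counts.values()) or 1
        unusual_hour = counts[ts.hour] / total < self.RARE_HOUR_FRACTION
        whitelisted = self.night_domains[domain] >= self.WHITELIST_MIN_HOSTS
        if unusual_hour and not whitelisted:
            return {"user": user, "host": host, "domain": domain, "time": ts.isoformat()}
        return None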
  • the goal of this sensor is to profile installed software/applications on a host. This may be done using user-agent string fields in HTTP requests that are recorded in security gateway proxy logs, for example.
  • a history including the list of all observed user-agent strings for each host is created.
  • new software corresponding to new user-agent strings not observed previously is identified.
  • the output is a list of suspicious hosts and new (possibly suspicious) applications they have installed.
  • This sensor profiles the internal communication among enterprise hosts to determine potential staging servers used for gathering data before exfiltrating it to external sites.
  • the sensor can use either firewall logs or records of network traffic analyzers, if available. While firewall logs normally report only basic connection information among different hosts (source and destination IP address and port number, and whether the connection is accepted or denied), network traffic analyzer records provide additional details such as the amount of data transferred in a connection.
  • a map of the communication pattern within the enterprise can be built. This includes pairs of internal hosts communicating and the average volume of traffic observed (if this information is available from traffic analyzer data). In the detection phase, the sensor flags pairs of hosts either initiating communication or deviating from the communication pattern recorded in the map.
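A condensed Python sketch of such a communication map, assuming traffic-analyzer records that include byte counts, could be structured as follows; the 3x volume-deviation factor is an assumed tuning parameter.

from collections import defaultdict

class InternalStagingSensor:
    """Baseline internal host-pair traffic; flag new pairs or unusually large transfers."""

    VOLUME_FACTOR = 3.0  # assumed: flag transfers over 3x the pair's training average

    def __init__(self):
        self.avg_bytes = {}  # (src, dst) -> average bytes per connection during training

    def train(self, flows):
        # flows: iterable of (src, dst, nbytes); nbytes may be 0 if only firewall logs exist.
        totals, counts = defaultdict(int), defaultdict(int)
        for src, dst, nbytes in flows:
            totals[(src, dst)] += nbytes
            counts[(src, dst)] += 1
        self.avg_bytes = {pair: totals[pair] / counts[pair] for pair in totals}

    def detect(self, flows):
        report = []
        for src, dst, nbytes in flows:
            baseline = self.avg_bytes.get((src, dst))
            if baseline is None:
                report.append({"pair": (src, dst), "reason": "new communication pair"})
            elif baseline and nbytes > self.VOLUME_FACTOR * baseline:
                report.append({"pair": (src, dst), "reason": "volume deviation",
                               "bytes": nbytes, "baseline": baseline})
        return report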
  • the goal of this sensor is to identify critical machines/services in the enterprise network using the firewall logs.
  • for each host, the sensor examines its number of incoming and outgoing connections, and whether those connections were allowed or denied. After the training period, it identifies hosts that have a high ratio of incoming to outgoing connections. These are considered servers.
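A minimal Python sketch of this incoming/outgoing ratio heuristic is shown below; the ratio and minimum-connection thresholds are assumed, analyst-tunable parameters.

from collections import Counter

def find_servers(allowed_connections, ratio_threshold=5.0, min_connections=100):
    """Classify hosts with a high ratio of incoming to outgoing connections as servers.

    allowed_connections: iterable of (src_ip, dst_ip) pairs from firewall logs.
    The thresholds are assumed, analyst-tunable parameters.
    """
    incoming, outgoing = Counter(), Counter()
    for src, dst in allowed_connections:
        outgoing[src] += 1
        incoming[dst] += 1
    servers = []
    for host, n_in in incoming.items():
        n_out = outgoing[host]
        if n_in + n_out >= min_connections and n_in / max(n_out, 1) >= ratio_threshold:
            servers.append(host)
    return servers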
  • an attack template can be flexibly defined by a human operator as correlating outputs of different sensors 30 using various policies. Operators that can be used when building attack templates from the outputs (reports) of multiple sensors include logical combinations (e.g., an entity appearing in the reports of several sensors) and temporal sequencing (e.g., one sensor's report followed by another's), as illustrated by the example templates described below.
  • given a set of reports from the various sensors in the system, and an attack template describing known aspects of an attack using statements with operators as above, the correlators 32 generate and output a list of (prioritized) alerts that satisfy the attack template.
  • the priority and score of these alerts are computed based on the priority and score of the alerts generated by each sensor 30 used by the template.
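For illustration, the Python sketch below shows one way a correlator might apply a logical operator (an entity appearing in all required sensors' reports) and a temporal operator (one report followed by another within a time window) and score the resulting alerts by summing the per-sensor scores; the operator names and the scoring rule are assumptions rather than the patent's defined operator set.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SensorAlert:
    sensor: str   # e.g., "C & C", "New-login"
    entity: str   # the host or user the alert refers to
    time: float   # event time (UTC epoch seconds)
    score: float  # per-sensor relevance score

def correlate_and(reports: Dict[str, List[SensorAlert]], required: List[str]):
    """Logical operator: entities that appear in the reports of all required sensors."""
    by_entity = {}
    for name in required:
        for alert in reports.get(name, []):
            by_entity.setdefault(alert.entity, {})[name] = alert
    hits = [{"entity": entity,
             "score": sum(a.score for a in found.values()),
             "alerts": list(found.values())}
            for entity, found in by_entity.items()
            if all(name in found for name in required)]
    return sorted(hits, key=lambda h: h["score"], reverse=True)

def correlate_followed_by(first: List[SensorAlert], second: List[SensorAlert], window: float):
    """Temporal operator: same entity, second alert within `window` seconds of the first."""
    hits = [{"entity": a.entity, "score": a.score + b.score, "alerts": [a, b]}
            for a in first for b in second
            if a.entity == b.entity and 0 <= b.time - a.time <= window]
    return sorted(hits, key=lambda h: h["score"], reverse=True)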
  • the alerts may be presented to a human user (analyst) in a graphical representation.
  • the system may also provide a graphical interface (tool) through which the analyst can provide feedback. For instance, the human analyst may rank each alert according to its indication of a real attack on a 1-10 scale. Based on the analyst feedback, the system is continuously refined by adding new sensors, augmenting reports generated by various sensors, refining attack templates and prioritizing alerts raised by either individual sensors or correlation among multiple sensors.
  • FIG. 3 provides a flow chart description of the overall process.
  • information from known attacks is used to analyze and describe identifying characteristics of attacks to be detected.
  • the sensors 30 and attack templates are defined, and the sensors 30 and correlators 32 implementing the attack templates are deployed.
  • the preprocessor 20 obtains and sanitizes the parsed logs 16 from the SIEM system 12 .
  • the sensors 30 are applied to the sanitized logs 24 to generate the sensor reports 34
  • the correlators 32 are applied to the reports 34 to generate output such as alerts etc.
  • an IP address can either be static or dynamic.
  • a static IP address is one that is assigned to a particular machine for an indefinitely long time, generally until some event such as network reconfiguration requires address reassignment. This period may be months or years in duration.
  • a dynamic IP address is one that is only temporarily assigned (“leased”) to any given machine for a generally much shorter period, and may be assigned to several different machines over different periods.
  • DHCP: Dynamic Host Configuration Protocol.
  • distinguishing between static and dynamic IP addresses is important to the goal of mapping IP addresses to unique hosts.
  • documentation about the use and configuration of individual IP address ranges is often scarce or non-existent, and only some dynamic IP addresses (e.g., those administered by the corporate IT department) are centrally known. Accordingly, the following address classification may be used.
  • the set of all network addresses is divided between known dynamic addresses and other addresses, which are further divided into static addresses and other dynamic addresses.
  • Known dynamic IP addresses are those generally managed centrally by the IT department, while other dynamic IP addresses are typically managed by private DHCP servers to which the IT department lacks visibility.
  • a tool for classifying IP addresses regularly (e.g., daily) extracts all the IP addresses that appear in network logs to create a large enterprise IP address pool. Similarly, it extracts dynamic IP addresses from logs that are known to include only dynamic IP addresses, such as the logs from IT-managed DHCP servers collected by the SIEM system 12 . By taking the difference between these two IP address pools, resolving the names of the hosts assigned to the resulting IP addresses, and continuously monitoring those hosts for IP address re-assignments, the tool automatically maintains up-to-date and self-correcting lists of static IP addresses and other dynamic IP addresses.
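A condensed Python sketch of this bootstrap/update logic is shown below; using the standard socket library for reverse DNS resolution and a simple one-hostname test for static addresses are assumptions made for the example.

import socket
from collections import defaultdict

class AddressClassifier:
    """Maintain lists of static and other dynamic IP addresses by repeated name resolution."""

    def __init__(self):
        self.resolutions = defaultdict(set)  # ip -> hostnames it has resolved to over time

    @staticmethod
    def resolve(ip):
        try:
            return socket.gethostbyaddr(ip)[0]  # reverse DNS, like "nslookup" or "host"
        except OSError:
            return None

    def update(self, all_logged_ips, known_dynamic_ips):
        """Run periodically (e.g., daily) over the IP pools extracted from the logs."""
        undetermined = set(all_logged_ips) - set(known_dynamic_ips)
        for ip in undetermined:
            name = self.resolve(ip)
            if name:
                self.resolutions[ip].add(name)

    def classify(self):
        # An IP that has always resolved to the same hostname is considered static;
        # confidence in the classification grows the longer an IP is monitored.
        static, other_dynamic = set(), set()
        for ip, names in self.resolutions.items():
            (static if len(names) == 1 else other_dynamic).add(ip)
        return static, other_dynamic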
  • Address sanitization is described with reference to FIG. 4 .
  • the address sanitizer 42 may be designed to perform a one-time bootstrap cycle to initialize its operations and data, and then run a periodic (e.g., daily) update cycle. These two separate cycles are both explained below with reference to FIG. 4. As shown, the process operates using the data from the time-sanitized logs 24-T. Further below is a brief description of operation of the time sanitizer 40 in creating those logs.
  • the tool builds its IP address pools and identifies a set of “undetermined” (but potentially static) IP addresses.
  • the combination of the set 62 of known dynamic IP addresses and the set 70 of static and other dynamic IP addresses makes up the address-sanitized logs 24-A that are provided to the monitor/analyzer 22 for use in the system monitoring/analysis functions.
  • the tool automatically runs an UPDATE cycle at regular intervals, e.g., daily, in order to update the IP address pools and classify IP addresses in the set 70 as static or dynamic.
  • the tool may be designed to auto-correct its outputs over time. The longer an IP address is monitored, the higher the confidence in its classification.
  • a mapping between IP addresses and corresponding unique hosts should include the following information: The IP address, a unique identifier for the host, and the start and end timestamps of the period during which the host is assigned this IP address.
  • the MAC address of a host's network interface is used as the unique identifier for the host.
  • Dynamic IP addresses are allocated to different machines at different time periods. This process takes place over the DHCP protocol.
  • dynamic IP addresses are leased to hosts for a given time period, after which the host must renew its IP assignment request, otherwise the DHCP server may reclaim that IP for use by another host.
  • DHCP logs collected by the SIEM system 12 may be parsed to collect information usable to construct a mapping of the known dynamic IP addresses 62 to unique hosts.
  • a list of all logging devices that report to the SIEM system 12 is known (e.g., a list of IP addresses of all logging devices). It is also assumed that the log timestamp translation is done after the logs are collected by the SIEM system 12 , i.e., administrator privileges to the logging devices are not available, so that the devices' clock configurations cannot be modified.
  • the output of the time sanitization technique is the time zone configuration of each logging device, stored as a time correction value Δ associated with an identifier of the device.
  • all log timestamps can then be translated into UTC by adding the corresponding Δ value to the device timestamp. For example, if a log from a device has a timestamp of T2, the adjusted log timestamp for that log message becomes T2+Δ.
  • FIG. 5 describes the general procedure by which the timestamp correction values Δ are determined.
  • first, network events occurring at known times in a reference time frame (e.g., UTC) are identified, where the network events are known to have corresponding entries with timestamps in the parsed logs 16.
  • difference values are calculated, each being a difference between an event time in the known time frame and the timestamp of the corresponding parsed log entry.
  • a selection function is applied to the difference values to obtain a correction value for each logging device.
  • the correction values are stored in association with respective identifiers of the logging devices (e.g., their IP address values).
  • One direct approach to detecting a device's configured time zone is to send it “probes” over the network soliciting responses containing clock information. This is difficult in practice because none of the IP, UDP, or TCP headers include timestamps. Also, for security reasons many machines ignore packets sent to unused ports.
  • events are generated that will be logged (and time-stamped) by the device.
  • a Windows domain controller validates user logon events and generates logs describing the outcome of the logon attempts as it does so.
  • log entries and timestamps can be created by performing logons.
  • a web proxy forwards clients' HTTP requests and generates logs describing the network connection at the same time. Log entries and timestamps can be created by issuing HTTP requests.
  • TE: the time at which a testing event E is generated.
  • TD: the device timestamp recorded in the resulting log entry.
  • while the active approach can be quite accurate and efficient, it may not be suitable for use in a large network with many different logging devices. In this case, events may be directed to different processing/logging devices depending on the source host's geographic location or network configuration. Without a comprehensive understanding of the topology of the enterprise network and access to multiple distributed client machines, the active approach may become infeasible.
  • An alternative passive approach may leverage information available in logs collected by a SIEM system to determine the devices' clock configuration.
  • the clock configuration in the SIEM system 12 may be static, which simplifies the processing.
  • the SIEM system 12 may generate all its timestamps in UTC time.
  • the passive approach compares the device timestamp TD with the SIEM system timestamp TS for all log messages generated by a device, where the SIEM system timestamp TS reflects the time that the SIEM system 12 received the log messages.
  • let Δ be the difference between TD and TS, rounded off to the nearest 15 minutes. From a set of (possibly inconsistent) Δ values derived from all logs generated by a device over a certain time period (e.g., one month), a process is employed to determine the correct actual time correction value for the device.
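A small Python sketch of this passive approach follows: for each log message the difference between the device timestamp and the (UTC) SIEM timestamp is rounded to the nearest 15 minutes, and the most common value over the collection period is taken as the device's correction Δ, which is then added to device timestamps to obtain UTC, per the convention above. Using the statistical mode as the selection function is an assumption; the description only requires some selection over the possibly inconsistent Δ values.

from collections import Counter
from datetime import timedelta

QUANTUM = timedelta(minutes=15)

def round_to_quantum(delta):
    # Round a timedelta to the nearest 15 minutes.
    return round(delta / QUANTUM) * QUANTUM

def device_correction(device_timestamps, siem_timestamps):
    """Estimate a device's correction value from paired (device, SIEM) timestamps.

    Convention (as described above): UTC = device timestamp + delta, with the SIEM
    timestamps already in UTC. Taking the most common rounded value over roughly a
    month of logs is an assumed selection function.
    """
    deltas = [round_to_quantum(ts - td)
              for td, ts in zip(device_timestamps, siem_timestamps)]
    return Counter(deltas).most_common(1)[0][0]

def to_utc(device_timestamp, delta):
    return device_timestamp + delta  # e.g., T2 + delta, as described above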
  • FIG. 6 is a generalized depiction of a computer such as may be used to realize the computers in the system, including hosts of the protected system 10 whose activities are monitored as well as computers implementing the SIEM system 12 and threat detection system 18. It includes one or more processors 90, memory 92, local storage 94 and input/output (I/O) interface circuitry 96 coupled together by one or more data buses 98.
  • the I/O interface circuitry 96 couples the computer to one or more external networks, additional storage devices or systems, and other input/output devices as generally known in the art.
  • System-level functionality of the computer is provided by the hardware executing computer program instructions (software), typically stored in the memory 92 and retrieved and executed by the processor(s) 90.
  • any description herein of a software component performing a function is to be understood as a shorthand reference to operation of a computer or computerized device when executing the instructions of the software component.
  • the collection of components in FIG. 6 may be referred to as “processing circuitry”, and when executing a given software component may be viewed as a function-specialized circuit, for example as an “analyzer circuit” when executing a software component implementing an analyzer function.
  • This template could be indicative of an APT attack consisting of the four stages documented in the literature: (1) Exploitation, (2) Command-and-control, (3) Lateral movement, (4) Data exfiltration.
  • the template requires that an entity be found in three out of four reports from respective sensors 30.
  • the “C & C” sensor output corresponds to phases (2) and/or (4); “New-logins” can be mapped to phase (3); “After-hours” could be connected to phase (2) or (4); and “Internal-staging” to phase (4) of the attack.
  • This template correlates users who contact new domains and subsequently log in to a machine they have never accessed in the past.
  • This succession of steps is also highly indicative of an APT attack. For instance: an internal host under the control of the attacker contacts a new domain (corresponding to a C & C server); the attacker obtains the credentials of the users typically logging in to the compromised host; the attacker then decides to use one of the compromised user credentials to perform lateral movement and obtain access to other internal hosts/servers.
  • New-login sensor: for a detection window of Td days, for each day, find new operating system logins that were not observed over the profiling period. These are the suspicious users.
  • Such a threshold may be an adjustable threshold that can be tuned by an analyst.
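As an illustration, this particular template could be evaluated by joining the two sensors' report entries and requiring the time ordering described above; the report field names, the join on the host that contacted the new domain, and the window parameter derived from Td are assumptions about what the deployed sensors actually emit.

def correlate_cc_then_new_login(cc_report, login_report, window_days):
    """Template sketch: a host contacts a previously unseen domain, and within
    `window_days` a user on that host performs a login never observed in training.

    cc_report:    entries like {"host": ..., "domain": ..., "time": ...}
    login_report: entries like {"user": ..., "src_host": ..., "dst_host": ..., "time": ...}
    The field names and the join on the contacting host are assumptions; times are
    UTC epoch seconds.
    """
    window = window_days * 86400
    alerts = []
    for cc in cc_report:
        for login in login_report:
            if (login["src_host"] == cc["host"]
                    and 0 <= login["time"] - cc["time"] <= window):
                alerts.append({
                    "user": login["user"],
                    "compromised_host": cc["host"],
                    "new_domain": cc["domain"],
                    "lateral_target": login["dst_host"],
                })
    return alerts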

Abstract

A threat detection system for detecting threat activity in a protected computer system includes anomaly sensors of distinct types including user-activity sensors, host-activity sensors and application-activity sensors. Each sensor builds a history of pertinent activity over a training period, and during a subsequent detection period the sensor compares current activity to the history to detect new activity. The new activity is identified in respective sensor output. A set of correlators of distinct types are used that correspond to different stages of threat activity according to modeled threat behavior. Each correlator receives output of one or more different-type sensors and applies logical and/or temporal testing to detect activity patterns of the different stages. The results of the logical and/or temporal testing are used to generate alert outputs for a human or machine user.

Description

BACKGROUND
The invention relates to the field of computer system security.
Computer systems are vulnerable to various kinds of attacks or threats, including so-called “advanced persistent threats” or APTs, which can be very sophisticated and dangerous from a security perspective. The term APT generally refers to an attacker having capabilities and resources to persistently target a specific entity, along with the techniques they use, including malware that infects a system under attack. An APT may be capable of locating and taking harmful action with respect to sensitive data in a computer system, such as copying confidential data to an external machine for criminal or other ill-intended activities. The APT performs its tasks in a stealthy manner so as to avoid detection. In some cases, an APT may be active in a system for a very long period of weeks or months and create corresponding levels of damage.
It is desirable to protect computer systems and their data from threats such as APTs. Among the mechanisms that can be utilized are detection mechanisms for detecting the presence of an APT in a computer system. Generally, any malware that has entered a system can only be removed or disabled after it has first been identified in some manner. Identification in turn requires detection, i.e., acquiring knowledge indicating possible or actual presence of an APT or similar threat in a protected system.
SUMMARY
The detection of APTs and similar threats in protected computer systems has relied upon human security specialists who have experience with the threats and can interrogate a protected system for operational information that may be indicative of APT presence. Generally such a security specialist is looking for patterns of known bad activities, such as communications logs identifying an external host known or suspected to be used or controlled by computer criminals. The process can be labor-intensive, difficult to generalize, and non-scalable. Existing anomaly detection schemes commonly focus on very obvious anomalies such as volume-based outliers, but these are ill-suited for “low-and-slow” APT attacks and also suffer from high false positive rates.
A presently disclosed technique for threat detection is both automated and sufficiently sophisticated to detect the more subtle operation signatures presented by APTs and similar threats. The technique is based partly on a hypothesis that, however stealthy an attacker might be, its behavior in attempting to steal sensitive information or subvert system operations should be different from that of a normal, benign user. Moreover, since APT attacks typically include multiple stages, e.g., exploitation, command-and-control, lateral movement, and objectives, each move by an attacker provides an opportunity to detect behavioral deviations from the norm. Correlating events that may otherwise be treated as independent can reveal evidence of an intrusion, exposing stealthy attacks in a manner not possible with previous methods.
The technique includes use of detectors of behavioral deviations referred to as “anomaly sensors”, where each sensor examines one aspect of hosts' or users' activities. For instance, a sensor may examine the external sites hosts contact to identify unusual connections (potential command-and-control channels), profile the machines each user logs into to find anomalous access patterns (potential “pivoting” behavior in the lateral movement stage), study users' regular working hours to flag suspicious activities in the middle of the night, or track the flow of data between internal hosts to find unusual locations where large amounts of data are being gathered (potential staging servers before data exfiltration).
More particularly, a computer system realizes a threat detection system for detecting threats active in a protected computer system. The system includes a set of anomaly sensors of distinct types including user-activity sensors, host-activity sensors and application-activity sensors, with each sensor being operative (1) to build a history of pertinent activity over a training period of operation, and (2) during a subsequent detection period of operation, compare current activity to the history of pertinent activity to detect new activity not occurring in the training period. The new activity is identified in respective sensor output. The system further includes a set of correlators of different distinct types corresponding to different stages of threat activity according to modeled threat behavior. Each correlator receives output of one or more sensors of different types and applies logical and/or temporal testing to received sensor outputs to detect activity patterns of the different stages. Each correlator uses results of the logical and/or temporal testing to generate a respective alert output for a human or machine user of the threat detection system.
While the triggering of one sensor indicates the presence of a singular unusual activity, the triggering of multiple sensors suggests more suspicious behavior. A framework is used in which templates are defined by a human analyst for the correlations implemented by the correlators. The human analyst is given the flexibility of combining multiple sensors according to known attack patterns (e.g., command-and-control communications followed by lateral movement), to look for abnormal events that warrant investigation, or to generate behavioral reports of a given user's activities across time.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other objects, features and advantages will be apparent from the following description of particular embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views.
FIG. 1 is a block diagram of a computer system;
FIG. 2 is a detailed block diagram of a threat detection system;
FIG. 3 is a flow diagram of high-level operation of the threat detection system;
FIG. 4 is a flow diagram of an address sanitization process;
FIG. 5 is a flow diagram of a time sanitization process; and
FIG. 6 is a hardware block diagram of a computer.
DETAILED DESCRIPTION
FIG. 1 shows a computing system augmented by monitoring and/or protection elements. In particular, the system includes a protected distributed computing system (PROTECTED SYSTEM) 10 and a security information and event management (SIEM) system 12 continually receiving a wide variety of system activity information 14 from operational components of the protected system 10. The SIEM system generates parsed logs 16 of logged activity information which are provided to a threat detection system 18 including a preprocessor 20 and a monitor/analyzer 22. The preprocessor 20 generates sanitized logs 24 for use by the monitor/analyzer 22, which in turn provides user-level functionality to a separate human or machine user, the functionality including things like alerts, reports, interactive tools for controlling or augmenting operations, etc.
The protected system 10 is generally a wide-area distributed computing system, such as a large organizational network. It may include one or more very large datacenters, as well as a number of smaller or “satellite” datacenters, all interconnected by a wide-area network that may include public network infrastructure (Internet) along with private networking components such as switches and routers, firewalls, virtual private network (VPN) components, etc. Each datacenter includes local resources such as server computers (servers), client computers and storage systems, coupled together using local/intermediate networks such as local-area networks (LANs), metro-area networks (MANs), storage-area networks (SANs), etc.
The SIEM system 12 is a specialized computing system including hardware computing components executing specialized SIEM software components, including a large database for storing the parsed logs 16. The SIEM system 12 receives raw logs (not shown) generated by logging devices in the system and performs basic parsing into fields (e.g. IP address, timestamp, Msg ID, etc.) to produce the parsed logs 16. In one embodiment the SIEM system may utilize a SIEM product known as enVision™ sold by RSA Security, Inc., the security division of EMC Corporation. The SIEM system 12 gathers the raw logs generated by different devices within the protected system 10 and stores the parsed logs 16 in the database, functioning as a centralized repository. The logs 16 need to be stored for some period of time (e.g., at least several months) in order to enable the analysis described herein.
The threat detection system 18 may be primarily software-implemented, utilizing hardware resources of the SIEM system 12 or in some cases its own dedicated hardware computers. The threat detection system 18 is described herein as a collection of functional components. As described below, these are to be understood as one or more general-purpose computers executing specialized software for realizing each function.
The major component of the threat detection system 18 is the monitor/analyzer 22. As described more below, it employs both top-down and bottom-up components. The top-down component builds and utilizes templates based on known information about current and prior APT attacks, and these templates are used in analysis for detecting behavior that may be indicative of such attacks. The bottom-up component gathers, stores and processes the system activity information as reflected in the sanitized logs 24 from the preprocessor 20.
As shown in FIG. 2, the monitor/analyzer 22 includes sensors 30 and correlators 32. Various types of sensor 30 are deployed, including for example a command-and-control (C & C) sensor 30-1, new login sensor 30-2, new applications sensor 30-3 and critical servers sensor 30-4. The correlators 32 work from sensor output in the form of reports 34. Examples of correlators 32 include C & C and new application correlator 32-1, unusual login correlator 32-2 and C & C and new login correlator 32-3. Functions of these elements are described below.
Because of inconsistencies in the parsed logs 16 (out-of-order events, time skew, missing events, etc.), the log data needs to be processed and sanitized before any analysis is attempted. To this end, the preprocessor 20 includes a time sanitizer 40 and a host address sanitizer 42. The host address sanitizer 42 includes a first sanitizer (STAT) 42-1 for static host addresses and a second sanitizer (DYN) 42-2 for dynamic host addresses. Example parsed log inputs 16 are shown, including logs from web proxies, firewalls, domain controllers, VPN components, and Dynamic Host Configuration Protocol (DHCP) servers. Specific aspects of the preprocessor 20 are described below.
Regarding the above-mentioned top-down component, it may utilize published information about current and past APT attacks (documented in existing literature) as well as human analyst experience. This information is used to build templates for APT detection. Template definition and deployment may be based on an assumed model for APT attacks. In one model, a typical APT attack consists of the following stages: 1) Exploitation, 2) Command-and-control (C& C), 3) Lateral movement, and 4) Objectives (e.g., data exfiltration, service disruption). Exploitation refers to a threat's entering a system at points of vulnerability. Command and control refers to a resident threat's communications with the attacker (e.g. to receive new attack directives or to report back the results) and operations in the system, such as gathering sensitive data. Lateral movement refers to propagation or migration of threat components/activities within a system, and objectives refers to activities that provide the desired data or other results from operation of the threat.
Starting from the general APT model, different classes of attacks can be defined and the classes used to guide template creation. Examples of attack classes include attackers that propagate through social media, attackers that gather data to a central location and exfiltrate the data, and attackers that exfiltrate from multiple machines in the enterprise. Templates for such different classes of attack can be built and deployed for use. The templates can also be refined during use as more knowledge is gathered about attacker behavior. Note that instead of utilizing known attack signatures, such as external sites known to be controlled by computer criminals or specific text strings used in known exploits, the disclosed system relies on patterns of attack behavior and hence is more difficult to evade.
The bottom-up component processes and analyzes information in event logs received by the SIEM system 12 in an enterprise and gathers evidence for abnormal user or host behavior as determined by different anomaly detection sensors 30. The bottom-up component generally consists of the following layers:
1. SIEM System 12
The SIEM system 12 gathers the logs generated by different devices within an enterprise into a centralized repository and stores these logs for a sufficiently long period (e.g., at least several months) to enable the analyses described herein.
2. Preprocessor 20
Because of log inconsistencies (out-of-order events, time skew, missing events), the data from the parsed logs 16 needs to be processed and sanitized before any analysis is attempted. Below are described three particular processing tools that may be used.
3. Anomaly Sensors 30
The role of individual sensors 30-x is to maintain a history of normal user and host activity in a particular respect, perform profiling related to that aspect of user or host behavior, and generate a report 34 when activity deviating from the typical user or host profile is observed. The following are examples of observable activity, along with the specific sensor 30 involved as well as the model stage in which it occurs:
    • New destination domains, which may be related to beaconing activity or data exfiltration (C & C sensor 30-1; C & C or data exfiltration stages)
    • Stealthy scanning or probing (lateral movement stage)
    • Abnormal VPN login patterns (lateral movement stage)
    • Abnormal login patterns to internal machines (Login sensor 30-2; lateral movement stage)
    • Abnormal communication patterns between internal hosts (lateral movement stage)
    • Activities on a machine when the user is not at the terminal (After-hours sensor; lateral movement stage)
    • Abnormal flow of data between internal hosts, e.g., in terms of volume, which may be indicative of data gathering (Internal-staging sensor; data exfiltration stage)
    • Users in multiple physical locations at the same time (lateral movement stage)
    • Executables being downloaded and installed (applications sensor 30-3; exploitation stage)
Each individual sensor 30 may use statistical and machine learning techniques for profiling; more details are given for several example sensors 30 below. The system is preferably general enough to accommodate new types of sensors 30 being added and/or refinement of existing sensors 30 based on experience-based feedback.
4. Correlators 32
The reports 34 from multiple sensors 30 are correlated and matched against the attack templates built by the top-down component. An attack template might involve several sensors 30 and could use different operators applied to the reports 34 from the sensors 30 as described below.
5. Alert/Report Generation
Alerts may be triggered from the correlators 32 when suspicious events according to defined templates are detected. The alerts may be provided to a human analyst for further investigation. Alternatively, the correlators 32 may export an application programming interface (API) via which an external machine user can receive alerts and/or reports.
6. Refinement According to Human Feedback
The system preferably allows human analysts evaluating the alerts to give feedback based on the severity of the alerts and the usefulness of the alerts in detecting real attacks. Based on the analyst feedback, the system may undertake continuous refinement. For instance, new sensors 30 may be added, reports generated by various sensors 30 may be refined, alert prioritization may be improved, etc. New templates can also be generated as more attacks are discovered by the human analysts, and in general more knowledge about various attackers is gathered.
One philosophy that may be used in designing the anomaly sensor framework is to make each individual sensor 30 relatively weak or coarse, meaning that it employs a low threshold in selecting event information to be included in its output reports 34, leaving the stronger detection logic to the correlators 32. In this way, more lower-level information is generally obtained that may pertain to a variety of threats, both known and unknown, and strong, flexible correlations between sensors 30 can be used to boost the quality of the overall result.
As mentioned, the sensors 30 utilize information from logs collected by the SIEM system 12 (e.g., enVision) in an enterprise environment. These include logs generated by proxy web servers, VPN servers, domain controllers, firewalls and DHCP servers for example. The parsed logs 16 require pre-processing before different event types can be correlated. In particular, it is generally necessary to address inconsistencies arising from dynamic IP address assignments, as well as to develop lists of static IP addresses active in the enterprise. It is also necessary to address inconsistencies in the time-stamping of log data from devices in different time zones and using different time-zone configurations.
To deal with dynamic IP addresses (IPs), it is necessary to develop a consistent mapping between network (IP) addresses and hostnames/MAC addresses. This is done by parsing DHCP and VPN logs as described in more detail below. The outcome of this pre-processing may be stored in a database table with certain fields as shown below. Users can query this table directly. There may also be an interface (e.g., a web form) for accessing this information.
Column Name    Description
Start_time     The start timestamp at which this IP address is allocated to the host.
End_time       The end timestamp at which this IP address is no longer allocated to the host.
Ipaddr         The IP address.
Hostname       The hostname.
Macaddr        The host's MAC address.
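For illustration only, the following Python sketch shows one plausible in-memory form of such a mapping and a lookup that attributes an IP address at a given time to a host. The field names mirror the table above, but the Lease record and the host_at helper are assumptions introduced here, not part of the patented system.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, Optional

@dataclass
class Lease:
    start_time: datetime   # when the IP address is allocated to the host
    end_time: datetime     # when the IP address is no longer allocated
    ipaddr: str
    hostname: str
    macaddr: str

def host_at(leases: Iterable[Lease], ip: str, when: datetime) -> Optional[Lease]:
    """Return the lease record describing which host held `ip` at time `when`."""
    for lease in leases:
        if lease.ipaddr == ip and lease.start_time <= when < lease.end_time:
            return lease
    return None

# Example: attribute a proxy-log entry to a host by IP address and timestamp.
leases = [Lease(datetime(2012, 12, 1, 8, 0), datetime(2012, 12, 1, 20, 0),
                "10.1.2.3", "laptop-042", "00:11:22:33:44:55")]
print(host_at(leases, "10.1.2.3", datetime(2012, 12, 1, 12, 30)))
```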
To study hosts that are assigned static IP addresses, IP addresses may be examined that do not appear in DHCP and VPN logs. For example, these IPs may be obtained from security gateway logs and host operating system (e.g., Windows) event logs. The hostname associated with each such IP address may be looked up (e.g., by reverse DNS resolution using tools such as "nslookup" or "host") repeatedly over time. An IP address that always resolves to the same hostname is considered static.
For time sanitization, the parsed logs 16 are sanitized so that all log entries for all devices are reported in one consistent time, such as UTC time. The sanitization procedure is done by the time sanitizer 40 as described more below.
As mentioned, each of the sensors 30 focuses on a particular aspect of host or user network behavior. The goal of a sensor 30 is to 1) profile/understand common behaviors during a training period covering a sufficiently long span (e.g., at least one or two months) of network activity, and 2) identify outliers during a detection period following the training period. As output, each sensor 30 generates a respective report 34 with an ordered list of alerts (or suspicious behaviors) identified during the detection period. The alerts are given a priority and score based on their relevance.
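A minimal sketch of this train-then-detect pattern is shown below. The BaselineSensor class, its method names, and the report shape are illustrative assumptions rather than the patent's implementation; each concrete sensor 30 would supply its own notion of an observed "key" (a domain, a {user, host} pair, a user-agent string, and so on).

```python
from collections import Counter

class BaselineSensor:
    """Illustrative train/detect skeleton for an anomaly sensor 30 (assumed API)."""

    def __init__(self):
        self.history = set()    # keys observed during the training period
        self.hits = Counter()   # how often each new key is seen during detection

    def train(self, events):
        for event in events:
            self.history.add(self.key(event))

    def detect(self, events):
        """Return a report: new keys ordered by how often they were observed."""
        for event in events:
            k = self.key(event)
            if k not in self.history:
                self.hits[k] += 1
        return [{"entity": k, "score": n} for k, n in self.hits.most_common()]

    def key(self, event):
        raise NotImplementedError  # e.g. a domain, a (user, host) pair, a user-agent
```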
The following are examples of specific types of sensor 30 that may be utilized:
C & C Sensor 30-1
The goal of this sensor 30 is to identify possible C & C (Command-and-Control) domains according to the following heuristics:
    • Compromised hosts make repeated connections to the C & C domain.
    • The C & C domain is not contacted by many hosts within the enterprise (the assumption is that APTs keep a low profile).
    • The C & C domain shows up as a “new” destination that the enterprise hosts do not usually contact.
Analysis may be done using proxy logs of security gateway devices, such as the product known as IronPort®. The C & C sensor 30-1 constructs a history of web domains contacted by each host over a training period (e.g., one or two months). Afterward, a domain is flagged if it shows up in a log but is not included in the history. A flagged domain is placed on a watch list and actively monitored for some configurable time interval (e.g., one week). If, at the end of that interval, activity to the domain looks normal (in terms of connection rate and the frequency of domain contact from internal hosts), the domain is added to the history. Otherwise, the domain is included in the report output of the C & C sensor 30-1.
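The sketch below illustrates this history/watch-list logic under stated assumptions: the one-week watch window follows the example above, while the MAX_HOSTS and MAX_CONN_RATE thresholds and the class interface are placeholders chosen only for illustration.

```python
from collections import defaultdict
from datetime import timedelta

WATCH_WINDOW = timedelta(days=7)
MAX_HOSTS = 3          # assumed: a C&C domain keeps a low profile (few internal hosts)
MAX_CONN_RATE = 100.0  # assumed: connections/day above this looks like ordinary traffic

class CandCSensor:
    def __init__(self):
        self.history = set()               # domains seen during training
        self.watch = {}                    # domain -> time it was first flagged
        self.contacts = defaultdict(list)  # domain -> [(timestamp, host), ...]

    def train(self, proxy_events):
        for _ts, _host, domain in proxy_events:
            self.history.add(domain)

    def observe(self, ts, host, domain):
        if domain in self.history:
            return
        self.watch.setdefault(domain, ts)
        self.contacts[domain].append((ts, host))

    def report(self, now):
        """Close out watch-list entries older than a week; return suspicious domains."""
        suspicious = []
        for domain, first_seen in list(self.watch.items()):
            if now - first_seen < WATCH_WINDOW:
                continue
            events = self.contacts.pop(domain)
            hosts = {h for _, h in events}
            rate = len(events) / max((now - first_seen).days, 1)
            if len(hosts) <= MAX_HOSTS and rate <= MAX_CONN_RATE:
                suspicious.append({"domain": domain, "hosts": sorted(hosts), "rate": rate})
            else:
                self.history.add(domain)   # activity looked normal: fold into the history
            del self.watch[domain]
        return suspicious
```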
New Login Sensor 30-2
The goal of this sensor is to identify new login patterns not observed previously in the infrastructure, using Windows log events. It profiles a user based on the set of machines the user commonly logs onto over the training period (e.g., one or two months). Similarly, for each hostname, it finds the set of users that commonly log onto the host. A history of {user, host} login events observed during training is built. Then, during detection, new logins are identified and flagged (defined as new {user, host} login events not already in the history). The sensor 30-2 outputs in its report the new login events found during detection.
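A minimal sketch of this sensor follows, assuming login events arrive as (timestamp, user, host) tuples extracted from domain-controller logs; the class and field names are illustrative only.

```python
class NewLoginSensor:
    """Illustrative sketch: flag {user, host} login pairs not seen during training."""

    def __init__(self):
        self.known_pairs = set()

    def train(self, login_events):
        # login_events: iterable of (timestamp, user, host) tuples
        for _, user, host in login_events:
            self.known_pairs.add((user, host))

    def detect(self, login_events):
        report = []
        for ts, user, host in login_events:
            if (user, host) not in self.known_pairs:
                report.append({"time": ts, "user": user, "host": host})
        return report
```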
After-Hours Sensor (Not Shown)
The goal of this sensor is to identify human-like activities generated by a machine when the user is not at the terminal. During the training period, a pattern of regular working hours is built per user. In addition, a whitelist of domains contacted by many different machines at night (corresponding to automated processes) is created. The output of the sensor is a report containing a list of hosts and non-whitelisted web domains contacted outside regular working hours.
New Applications Sensor 30-3
The goal of this sensor is to profile installed software/applications on a host. This may be done using user-agent string fields in HTTP requests that are recorded in security gateway proxy logs, for example. During the training period, a history including the list of all observed user-agent strings for each host is created. Afterward, new software (corresponding to new user-agent strings) not observed previously is identified. The output is a list of suspicious hosts and new (possibly suspicious) applications they have installed.
Internal-Staging (Not Shown)
This sensor profiles the internal communication among enterprise hosts to determine potential staging servers used for gathering data before exfiltrating it to external sites. The sensor can use either firewall logs or records of network traffic analyzers, if available. While firewall logs normally report only basic connection information among different hosts (source and destination IP address and port number, and whether the connection is accepted or denied), network traffic analyzer records provide additional details such as the amount of data transferred in a connection. During training, a map of the communication pattern within the enterprise can be built. This includes pairs of internal hosts communicating and the average volume of traffic observed (if this information is available from traffic analyzer data). In the detection phase, the sensor flags pairs of hosts either initiating communication or deviating from the communication pattern recorded in the map.
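The following sketch illustrates one way such a map and deviation check might look when traffic-analyzer volume data is available. The flow-record shape and the 2x volume threshold are assumptions made here for illustration; the patent itself does not fix a threshold.

```python
from collections import defaultdict

class InternalStagingSensor:
    """Illustrative sketch: profile internal host-to-host traffic and flag deviations.
    The 2x volume ratio is an assumed threshold, not taken from the patent text."""

    def __init__(self, volume_ratio=2.0):
        self.avg_volume = {}            # (src, dst) -> average bytes/day during training
        self.volume_ratio = volume_ratio

    def train(self, flow_records, days):
        totals = defaultdict(int)
        for src, dst, nbytes in flow_records:   # e.g. from a network traffic analyzer
            totals[(src, dst)] += nbytes
        self.avg_volume = {pair: total / days for pair, total in totals.items()}

    def detect(self, flow_records, days):
        totals = defaultdict(int)
        for src, dst, nbytes in flow_records:
            totals[(src, dst)] += nbytes
        flagged = []
        for pair, total in totals.items():
            observed = total / days
            baseline = self.avg_volume.get(pair)
            if baseline is None:
                flagged.append({"pair": pair, "reason": "new communication"})
            elif observed > self.volume_ratio * baseline:
                flagged.append({"pair": pair,
                                "reason": "volume %.1fx baseline" % (observed / baseline)})
        return flagged
```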
Critical Servers Sensor 30-4
The goal of this sensor is to identify critical machines/services in the enterprise network using the firewall logs. During profiling, for each host, the sensor examines its number of incoming and outgoing connections, and whether those connections were allowed or denied. After the training period, it identifies hosts that have a high ratio of incoming/outgoing connections. These are considered servers.
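A short sketch of this profiling step is given below, assuming firewall events arrive as (source, destination, allowed) tuples; both numeric thresholds are illustrative assumptions.

```python
from collections import defaultdict

def find_servers(firewall_events, min_ratio=5.0, min_incoming=100):
    """Illustrative sketch: hosts with many more incoming than outgoing connections
    are treated as servers. Both thresholds are assumptions for illustration."""
    incoming = defaultdict(int)
    outgoing = defaultdict(int)
    for src, dst, allowed in firewall_events:
        if not allowed:
            continue
        outgoing[src] += 1
        incoming[dst] += 1
    servers = []
    for host, n_in in incoming.items():
        n_out = outgoing.get(host, 0)
        if n_in >= min_incoming and n_in >= min_ratio * max(n_out, 1):
            servers.append(host)
    return servers
```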
As mentioned, in the top-down component an attack template can be flexibly defined by a human operator by correlating outputs of different sensors 30 under various policies. Examples of operators that can be used when building attack templates from the outputs (reports) of multiple sensors are listed below (a brief illustrative sketch follows the list):
    • Sensor 1 AND Sensor 2: a certain entity (e.g., host, user or domain) is required to be part of both sensors' outputs;
    • Sensor 1 OR Sensor 2: a certain entity (e.g., host, user or domain) is required to be part of at least one of the sensors' outputs;
    • Sensor 1 DIFF Sensor 2: a certain entity (e.g., host, user or domain) is required to be in the output of Sensor 1 but not Sensor 2;
    • Sensor 1 BEFORE Sensor 2: an entity or event is detected by Sensor 1 before being detected by Sensor 2;
    • MATCH(m,n,Sensor 1, . . . , Sensor n): an entity (e.g., host, user or domain) matches m out of n sensor outputs.
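As a minimal sketch of how these operators could act on sensor reports, the functions below assume each report is either a set of entity identifiers or, for BEFORE, a mapping from entity to first-detection time; these function forms are assumptions, and MATCH takes the number of reports implicitly rather than as an explicit n argument.

```python
from collections import Counter

def AND(report1, report2):
    """Entities present in both sensors' reports."""
    return set(report1) & set(report2)

def OR(report1, report2):
    """Entities present in at least one of the reports."""
    return set(report1) | set(report2)

def DIFF(report1, report2):
    """Entities reported by Sensor 1 but not by Sensor 2."""
    return set(report1) - set(report2)

def BEFORE(report1, report2):
    """Entities detected by Sensor 1 strictly earlier than by Sensor 2.
    Here each report maps entity -> first detection timestamp."""
    return {e for e in report1 if e in report2 and report1[e] < report2[e]}

def MATCH(m, *reports):
    """Entities appearing in at least m of the given sensor reports."""
    counts = Counter(e for report in reports for e in set(report))
    return {e for e, n in counts.items() if n >= m}
```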
Given a set of reports from the various sensors in the system, and an attack template describing known aspects of an attack using statements with operators such as those above, the correlators 32 generate and output a list of prioritized alerts that satisfy the attack template. The priority and score of these alerts are computed from the priority and score of the alerts generated by each sensor 30 used by the template. The alerts may be presented to a human analyst in a graphical representation. The system may also provide a graphical interface (tool) through which the analyst can provide feedback. For instance, the human analyst may rank each alert on a 1-10 scale according to how strongly it indicates a real attack. Based on the analyst feedback, the system is continuously refined by adding new sensors, augmenting reports generated by various sensors, refining attack templates, and prioritizing alerts raised either by individual sensors or by correlation among multiple sensors.
FIG. 3 provides a flow chart description of the overall process. At 50, information from known attacks is used to analyze and describe identifying characteristics of attacks to be detected. At 52, the sensors 30 and attack templates are defined, and the sensors 30 and correlators 32 implementing the attack templates are deployed. At 54, the preprocessor 20 obtains and sanitizes the parsed logs 16 from the SIEM system 12. At 56, the sensors 30 are applied to the sanitized logs 24 to generate the sensor reports 34, and at 58 the correlators 32 are applied to the reports 34 to generate output such as alerts etc.
Address Sanitization
As generally known, an IP address can be either static or dynamic. A static IP address is one that is assigned to a particular machine for an indefinitely long time, generally until some event such as a network reconfiguration requires address reassignment; this period may be months or years in duration. A dynamic IP address is one that is only temporarily assigned ("leased") to any given machine for a generally much shorter period, and may be assigned to several different machines over different periods. One well-known network protocol for managing the assignment of dynamic IP addresses is the Dynamic Host Configuration Protocol (DHCP).
Distinguishing between static and dynamic IP addresses is important to the goal of mapping IP addresses to unique hosts. However, in large enterprise networks, documentation about the use and configuration of individual IP address ranges is often scarce or non-existent. While a large subset of dynamic IP addresses (e.g., those administered by the corporate IT department) can be inferred from logs collected from designated DHCP servers, it is much more difficult to identify static IP addresses and also dynamic IP addresses managed by “private” DHCP servers, i.e., local DHCP servers deployed in small or remote networks whose DHCP traffic is not visible to more centralized monitoring systems such as the SIEM system 12.
The following address classification may be used. The set of all network addresses is divided between known dynamic addresses and other addresses, which are further divided into static addresses and other dynamic addresses. Known dynamic IP addresses are those generally managed centrally by the IT department, while other dynamic IP addresses are typically managed by private DHCP servers to which the IT department lacks visibility.
In one embodiment a tool for classifying IP addresses regularly (e.g., daily) extracts all the IP addresses that appear in network logs to create a large enterprise IP address pool. Similarly, it extracts dynamic IP addresses from logs that are known to include only dynamic IP addresses, such as the logs from IT-managed DHCP servers collected by the SIEM system 12. By taking the difference between these two IP address pools, resolving the names of the hosts assigned to the resulting IP addresses, and continuously monitoring those hosts for IP address re-assignments, the tool automatically maintains up-to-date and self-correcting lists of static IP addresses and other dynamic IP addresses.
Address sanitization is described with reference to FIG. 4. In general, the address sanitizer 42 may be designed to perform a one-time bootstrap cycle to initialize its operations and data, and then run a periodic (e.g., daily) update cycle. These two separate cycles are both explained below with reference to FIG. 4. As shown, the process operates using the data from the time-sanitized logs 24-T. Further below is a brief description of operation of the time sanitizer 40 in creating those logs.
BOOTSTRAP Cycle
During this cycle, the tool builds its IP address pools and identifies a set of “undetermined” (but potentially static) IP addresses.
    • 1. At 60, create a set 62, referred to as D, of known dynamic IP addresses by extracting the IP addresses that appear in sanitized DHCP and VPN logs.
    • 2. At 64, create a set 66, referred to as A, of all IP addresses internal to the enterprise by extracting the IP addresses that appear in logs from various other network devices (e.g., IronPort Logs, Windows Event logs).
    • 3. At 68, create a set 70, referred to as S, as the difference between A and D. This is the set of undetermined (but potentially static) IP addresses, S=A\D. As shown, this process includes the following:
      • Compute A\D
      • Perform a reverse DNS lookup for every IP address in S
      • Record the hostnames for the IP addresses as returned by DNS
As shown, the combination of the set 62 of known dynamic IP addresses and the set 70 of static and other dynamic IP addresses makes up the address-sanitized logs 24-A that are provided to the monitor/analyzer 22 for use in the system monitoring/analysis functions.
UPDATE Cycle
The tool automatically runs an UPDATE cycle at regular intervals, e.g., daily, in order to update the IP address pools and classify IP addresses in the set 70 as static or dynamic.
    • 1. At 60, create a new set of known dynamic IP addresses Dnew for that day by extracting the IP addresses that appear in DHCP and VPN logs. Merge with the existing set to create an updated set 62 of known dynamic IP addresses containing the new and old addresses, Dupdated=Dnew∪Dold.
    • 2. At 64, create a new set of all IP addresses Anew by extracting the IP addresses that appear in logs from various other network devices (e.g., IronPort Logs, Windows Event logs). Merge with the old set to create an updated set 66 of all IP addresses containing the new and old addresses, Aupdated=Anew∪Aold.
    • 3. At 68, update the set 70 of static and other dynamic IP addresses:
    • a) Compute the difference between Aupdated and Dupdated and create an updated set 70 of undetermined (but potentially static) IP addresses S=Aupdated\Dupdated.
    • b) Perform a reverse DNS lookup for every IP address in the set S and record the corresponding hostname
    • c) For each IP address in S that was also observed in a previous bootstrap or update cycle, compare its previously resolved hostname with the newly resolved name:
      • i) If they differ (i.e., the host changed IP addresses), classify the IP address as other dynamic (i.e., the host is not using a static IP address and is likely managed by a DHCP server into which the IT department lacks visibility). Move the IP address into the set of previously unknown dynamic IP addresses, Dunknown.
      • ii) If they are the same, the IP address is kept in S.
The tool may be designed to auto-correct its outputs over time. The longer an IP address is monitored, the higher the confidence in its classification.
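A compact sketch of the bootstrap/update logic follows, using the Python standard library's reverse DNS lookup to stand in for tools such as "nslookup". The AddressClassifier class, its data shapes, and its method names are assumptions made for illustration, not the patented tool.

```python
import socket

def reverse_dns(ip):
    """Best-effort reverse lookup; returns None when the address does not resolve."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return None

class AddressClassifier:
    """Illustrative sketch of the bootstrap/update cycles: D = known dynamic IPs,
    A = all observed IPs, S = A \\ D (undetermined, potentially static)."""

    def __init__(self):
        self.D = set()          # known dynamic (DHCP/VPN-managed) addresses
        self.A = set()          # every address seen in any log
        self.D_unknown = set()  # dynamic addresses behind "private" DHCP servers
        self.names = {}         # ip -> last resolved hostname

    def cycle(self, dhcp_vpn_ips, other_log_ips):
        """Run one bootstrap or daily update cycle over the day's sanitized logs."""
        self.D |= set(dhcp_vpn_ips)
        self.A |= set(other_log_ips) | self.D
        S = self.A - self.D - self.D_unknown
        for ip in S:
            name = reverse_dns(ip)
            previous = self.names.get(ip)
            if previous is not None and name is not None and name != previous:
                # The host behind this IP changed: treat it as an unknown dynamic address.
                self.D_unknown.add(ip)
            if name is not None:
                self.names[ip] = name
        return S - self.D_unknown   # addresses still presumed static
```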
Ideally, a mapping between IP addresses and corresponding unique hosts should include the following information: The IP address, a unique identifier for the host, and the start and end timestamps of the period during which the host is assigned this IP address. In one embodiment, the MAC address of a host's network interface is used as the unique identifier for the host.
In contrast to static IP addresses, which are permanently assigned to the same machine, dynamic IP addresses are allocated to different machines at different time periods. This process takes place over the DHCP protocol. In DHCP, dynamic IP addresses are leased to hosts for a given time period, after which a host must renew its lease; otherwise, the DHCP server may reclaim that IP address for use by another host. DHCP logs collected by the SIEM system 12 may be parsed to collect information usable to construct a mapping of the known dynamic IP addresses 62 to unique hosts.
Time Sanitization
It is assumed that a list of all logging devices that report to the SIEM system 12 is known (e.g., a list of IP addresses of all logging devices). It is also assumed that the log timestamp translation is done after the logs are collected by the SIEM system 12, i.e., administrator privileges to the logging devices are not available, so that the devices' clock configurations cannot be modified.
The output of the time sanitization technique is the time zone configuration of each logging device. This is stored in the following format:
Field Name    Description
Paddr         IP address of the logging device
δ             Time difference (UTC minus device time, rounded to the nearest 15-minute interval)
Given the above information for each logging device, all log timestamps can be translated into UTC by adding the corresponding δ value to the device timestamp. For example, if a log from a device has a timestamp of T2, the adjusted log timestamp for that log message becomes T2+δ.
FIG. 5 describes the general procedure by which the timestamp correction values δ are determined. At 80, network events occurring in a known time frame (e.g., UTC) are either generated or simply identified (if they already exist by action of a separate mechanism). Examples of both operations are given below. The network events are known to have corresponding entries with timestamps in the parsed logs 16. At 82, difference values are calculated, each being a difference between an event time in the known time frame and a timestamp of the parsed log entry. At 84, a selection function is applied to the difference values to obtain a correction value for each logging device. At 86, the correction values are stored in association with respective identifiers of the logging devices (e.g., the IP address values Paddr).
Two different approaches are described for the general process of FIG. 5.
1. Active Approach
One direct approach to detecting a device's configured time zone is to send it "probes" over the network soliciting responses containing clock information. This is difficult in practice because the IP, UDP, and TCP headers do not carry wall-clock timestamps. Also, for security reasons, many machines ignore packets sent to unused ports.
In an alternative active approach, rather than contacting a logging network device directly, events are generated that will be logged (and time-stamped) by the device. For example, a Windows domain controller validates user logon events and generates logs describing the outcome of the logon attempts as it does so. Thus, log entries and timestamps can be created by performing logons. As another example, a web proxy forwards clients' HTTP requests and generates logs describing the network connection at the same time. Log entries and timestamps can be created by issuing HTTP requests.
Let the known time at which a testing event E is generated be TE, which is represented in UTC time. After the logging device processes this event, a log message is created with the device's timestamp TD. In terms of elapsed time, the difference between TE and TD is very small, e.g., on the order of milliseconds, because the same device often performs event processing and log generation. This is true in both the above examples (Windows domain controller, web proxy).
The difference value δ=TE−TD can be calculated, rounded off to the nearest 15 minutes (since that is the level of granularity at which time zones are set). Since TE is represented in UTC time, the device's time zone is hence known to be configured as UTC time−δ, consistent with the definition of δ in the table above.
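The arithmetic is illustrated by the sketch below, which follows the table's sign convention (δ = UTC minus device time) and rounds to the 15-minute granularity of time zone configuration; the function names and the sample timestamps are illustrative only.

```python
from datetime import datetime, timedelta

QUANTUM = timedelta(minutes=15)   # time zones are configured at 15-minute granularity

def clock_offset(event_time_utc: datetime, device_timestamp: datetime) -> timedelta:
    """delta = UTC event time minus device timestamp, rounded to the nearest 15 minutes."""
    raw = event_time_utc - device_timestamp
    steps = round(raw / QUANTUM)
    return steps * QUANTUM

def to_utc(device_timestamp: datetime, delta: timedelta) -> datetime:
    """Translate a device timestamp into UTC by adding the stored correction value."""
    return device_timestamp + delta

# Example: a logon generated at 14:03 UTC is logged by the device at 09:03:02 local time.
delta = clock_offset(datetime(2012, 12, 1, 14, 3, 0), datetime(2012, 12, 1, 9, 3, 2))
print(delta)                                           # 5:00:00 (device clock is UTC-5)
print(to_utc(datetime(2012, 12, 1, 9, 30, 0), delta))  # 2012-12-01 14:30:00
```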
2. Passive Approach
While the active approach can be quite accurate and efficient, it may not be suitable for use in a large network with many different logging devices. In this case, events may be directed to different processing/logging devices depending on the source host's geographic location or network configuration. Without a comprehensive understanding of the topology of the enterprise network and access to multiple distributed client machines, the active approach may become infeasible.
An alternative passive approach may leverage information available in logs collected by a SIEM system to determine the devices' clock configuration. The clock configuration in the SIEM system 12 may be static, which simplifies the processing. For example, the SIEM system 12 may generate all its timestamps in UTC time.
At a high level, the passive approach compares the device timestamp TD with the SIEM system timestamp TS for each log message generated by a device, where TS reflects the time at which the SIEM system 12 received the log message. Let δ be the difference TS−TD, rounded off to the nearest 15 minutes. From the set of (possibly inconsistent) δ values derived from all logs generated by a device over a certain time period (e.g., one month), a selection function is applied to determine the device's actual time correction value.
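The sketch below illustrates this passive computation under the assumption that the SIEM timestamps are in UTC. The patent does not fix a particular selection function; taking the most frequently occurring candidate δ is one plausible choice used here for illustration.

```python
from collections import Counter
from datetime import timedelta

QUANTUM = timedelta(minutes=15)

def passive_offset(pairs):
    """pairs: iterable of (device_timestamp, siem_timestamp) for one logging device,
    with the SIEM timestamps assumed to be in UTC. Each pair yields a candidate
    delta = SIEM time minus device time, rounded to 15 minutes; the most common
    candidate over the observation period is taken as the device's correction value."""
    candidates = Counter()
    for device_ts, siem_ts in pairs:
        steps = round((siem_ts - device_ts) / QUANTUM)
        candidates[steps * QUANTUM] += 1
    if not candidates:
        return None
    delta, _count = candidates.most_common(1)[0]
    return delta
```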
FIG. 6 is a generalized depiction of a computer such as may be used to realize the computers in the system, including hosts of the protected system 10 whose activities are monitored as well as computers implementing the SIEM system 12 and threat detection system 18. It includes one or more processors 90, memory 92, local storage 94 and input/output (I/O) interface circuitry 96 coupled together by one or more data buses 98. The I/O interface circuitry 96 couples the computer to one or more external networks, additional storage devices or systems, and other input/output devices as generally known in the art. System-level functionality of the computer is provided by the hardware executing computer program instructions (software), typically stored in the memory 92 and retrieved and executed by the processor(s) 90. Any description herein of a software component performing a function is to be understood as a shorthand reference to operation of a computer or computerized device when executing the instructions of the software component. Also, the collection of components in FIG. 6 may be referred to as “processing circuitry”, and when executing a given software component may be viewed as a function-specialized circuit, for example as an “analyzer circuit” when executing a software component implementing an analyzer function.
Two specific examples are now given to illustrate the operation of the threat detection system 18.
I. MATCH(3,4,C&C, New-Logins, After-Hours, Internal-Staging)
This template could be indicative of an APT attack consisting of the four stages documented in the literature: (1) Exploitation, (2) Command-and-control, (3) Lateral movement, (4) Data exfiltration. The template requires that an entity be found in three out of four reports from respective sensors 30. The “C & C” sensor output corresponds to phases (2) and/or (4); “New-logins” can be mapped to phase (3); “After-hours” could be connected to phase (2) or (4); and “Internal-staging” to phase (4) of the attack.
Since each APT attack proceeds uniquely, the above template does not require an exact match against all four sensors, but rather a relaxed 3-out-of-4 match. The alerts generated by this template contain highly suspicious hosts that should be further investigated by a human analyst.
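For illustration, this template might be expressed with the hypothetical MATCH operator sketched earlier (repeated here so the snippet is self-contained); the sensor report contents are placeholders.

```python
from collections import Counter

def MATCH(m, *reports):
    # Same hypothetical operator as in the earlier sketch.
    counts = Counter(e for report in reports for e in set(report))
    return {e for e, n in counts.items() if n >= m}

# Placeholder sensor reports: in practice these are entity sets from the reports 34.
cc_report = {"host-17", "host-23"}
new_login_report = {"host-17", "host-42"}
after_hours_report = {"host-17", "host-23"}
internal_staging_report = {"host-99"}

# host-17 appears in 3 of the 4 reports, so it satisfies the MATCH(3,4,...) template.
print(MATCH(3, cc_report, new_login_report, after_hours_report, internal_staging_report))
# -> {'host-17'}
```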
II. C & C BEFORE New-Logins from the Same Host
This template correlates users who contact new domains and subsequently log in to a machine they have never accessed in the past. This succession of steps is also highly indicative of an APT attack. For instance: an internal host under the control of the attacker contacts a new domain (corresponding to a C & C server); the attacker obtains the credentials of the users typically logging in to the compromised host; the attacker then decides to use one of the compromised user credentials to perform lateral movement and obtain access to other internal hosts/servers.
In more detail, the output of this attack template is determined as follows:
1) Training: Build a history of users and the hosts that the users logged into over a profiling period of Tp days. Build a history of hosts and the web domains they contact over the same profiling period of Tp days.
2) New-login Sensor: For a detection window of Td days, for each day, find new operating system logins that were not observed over the profiling period. These are the suspicious users.
3) C & C Sensor: Find new web domains that were not contacted over the profiling period. These are the suspicious domains. Let the first time (respectively, the last time) host h contacted a suspicious domain be denoted t1,h (respectively, t2,h). Only consider domains for which t1,h ≠ t2,h.
4) Correlation:
    • a. For each host that contacted a suspicious domain, find all users that previously logged on to that host. Refer to this set of users as U.
    • b. Find users in U that also performed a new login in the detection window, where the new login by user u took place AFTER t1,A, and A is a machine that user u logged onto in the training period. Refer to this set of users as U′.
5) Output:
    • a. Return the set U′.
    • b. Return the set H′ of hosts that user u (in U′) logged onto in the training period, and the suspicious domains those hosts contacted. It may be desirable to consider only the first new login by user u to host h. If user u logged onto host h again in the detection window, the same information is not returned redundantly.
    • c. Return the connection rate for domains contacted by hosts in H′. The rate, for host h and domain d, is defined as (the number of connections to d from h)/(t2,h−t1,h).
Further filtering can be applied to domains (and users) based on the domains' connection rates. Such filtering may use an adjustable threshold that can be tuned by an analyst.
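The following sketch pulls the above steps together in simplified form. The data shapes (sets of pairs, tuples keyed by epoch seconds) and the alert dictionary are assumptions introduced for illustration, not the patent's interfaces.

```python
def correlate_cc_before_login(login_history, domain_history,
                              detect_logins, detect_contacts):
    """Illustrative sketch of the 'C & C BEFORE New-Logins' template.
      login_history:   set of (user, host) pairs seen during the profiling period
      domain_history:  set of web domains contacted during the profiling period
      detect_logins:   list of (epoch_seconds, user, host) logins during detection
      detect_contacts: list of (epoch_seconds, host, domain) web contacts during detection
    """
    # C & C sensor: new domains, with first (t1) and last (t2) contact time per host.
    first, last = {}, {}
    for t, host, domain in detect_contacts:
        if domain in domain_history:
            continue
        key = (host, domain)
        first[key] = min(first.get(key, t), t)
        last[key] = max(last.get(key, t), t)
    suspicious = {k: (first[k], last[k]) for k in first if first[k] != last[k]}

    # New-login sensor: {user, host} logins not observed during profiling.
    new_logins = [(t, u, h) for t, u, h in detect_logins
                  if (u, h) not in login_history]

    alerts = []
    for (host, domain), (t1, t2) in suspicious.items():
        users_of_host = {u for u, h in login_history if h == host}   # "set U"
        connections = sum(1 for _t, h, d in detect_contacts
                          if h == host and d == domain)
        rate = connections / (t2 - t1)
        for t, user, new_host in new_logins:
            if user in users_of_host and t > t1:   # new login AFTER first C&C contact
                alerts.append({"user": user, "compromised_host": host,
                               "new_login_host": new_host,
                               "domain": domain, "connection_rate": rate})
    return alerts
```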
While various embodiments of the invention have been particularly shown and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (27)

What is claimed is:
1. A threat detection system for detecting threat activity in a protected computer system, the threat detection system including one or more computers storing and executing computer program instructions of a threat detection application to cause the computers to realize components of the threat detection system, the components of the threat detection system comprising:
a set of anomaly sensors of a plurality of distinct types including user-activity sensors, host-activity sensors and application-activity sensors, each sensor being operative (1) to build a history of pertinent activity over a training period of operation, and (2) during a subsequent detection period of operation, compare current activity to the history of pertinent activity to detect new activity not occurring in the training period, the new activity being identified in respective sensor output, wherein the host-activity sensors include at least one internal staging sensor operative to i) build a history of communication activity between hosts located within the protected computer system during the training period of operation, and ii) compare current communication activity between hosts within the protected computer system to the history of communication activity between hosts within the protected computer system during the subsequent detection period of operation to detect gathering of data within the protected computer system prior to exfiltration of the data from the protected computer system to a site external to the protected computer system; and
a set of correlators of a plurality of distinct types corresponding to different stages of threat activity according to modeled threat behavior, each correlator receiving output of one or more sensors of different types and applying logical or temporal testing to received sensor outputs to detect activity patterns of the different stages, each correlator using results of the logical or temporal testing to generate a respective alert output for a human or machine user of the threat detection system; and
wherein the internal staging sensor is operative, during the detection period of operation, to flag one of the pairs of hosts located within the protected computer system as gathering data within the protected computer system prior to exfiltration of the data from the protected computer system in response to an average volume of traffic observed between the pair of hosts during the detection period of operation exceeding an average volume of traffic observed between the pair of hosts during the training period.
2. A threat detection system according to claim 1, wherein:
the user-activity sensors include a login sensor for which the pertinent activity includes user-host login events each identifying a user logging in to a respective host computer of the protected computer system;
the host-activity sensors include a command-and-control sensor for which the pertinent activity includes contact to external web domains by hosts of the protected computer system; and
the application-activity sensors include an installed-application sensor for which the pertinent activity includes installation of application software on hosts of the protected computer system.
3. A threat detection system according to claim 1, wherein the training period has a duration of at least one month.
4. A threat detection system according to claim 1, wherein the logical testing includes logical relationships among respective instances of an identifier of a system entity in the outputs of two or more of the sensors.
5. A threat detection system according to claim 4, wherein the logical relationships include logical AND, logical inclusive-OR, and logical DIFFERENCE.
6. A threat detection system according to claim 4, wherein the system entity is selected from the group consisting of host, user and domain.
7. A threat detection system according to claim 1, wherein the temporal testing includes temporal relationships among respective instances of an identifier of a system entity in the outputs of two or more of the sensors.
8. A threat detection system according to claim 7, wherein the logical temporal relationships include BEFORE and AFTER.
9. A threat detection system according to claim 1, wherein activity information about the pertinent activity is provided to the sensors via logs generated by respective components of the protected computer system.
10. A threat detection system according to claim 9, wherein the components of the protected computer system include one or more of Web proxies, firewalls, domain controllers, virtual private network (VPN) component and Dynamic Host Control Protocol (DHCP) servers.
11. A threat detection system according to claim 9, wherein the logs are parsed logs, and further comprising a preprocessor operative to generate sanitized logs from the parsed logs, the sanitized logs conveying activity information for activity reflected in the parsed logs with correction for timestamp inconsistencies or host address inconsistencies.
12. A threat detection system according to claim 1, wherein the anomaly sensors are tunable in accordance with feedback information indicating whether the new activity identified in sensor output corresponds to actual threat activity versus normal, non-threat activity.
13. A method of detecting threat activity in a protected computer system, comprising:
operating a set of anomaly sensors of a plurality of distinct types including user-activity sensors, host-activity sensors and application-activity sensors, each sensor (1) building a history of pertinent activity over a training period of operation, and (2) during a subsequent detection period of operation, comparing current activity to the history of pertinent activity to detect new activity not occurring in the training period, the new activity being identified in respective sensor output, wherein the host-activity sensors include at least one internal staging sensor operative to i) build a history of communication activity between hosts located within the protected computer system during the training period of operation, and ii) compare current communication activity between hosts within the protected computer system to the history of communication activity between hosts within the protected computer system during the subsequent detection period of operation to detect gathering of data within the protected computer system prior to exfiltration of the data from the protected computer system to a site external to the protected computer system;
operating a set of correlators of a plurality of distinct types corresponding to different stages of threat activity according to modeled threat behavior, each correlator receiving output of one or more sensors of different types and applying logical or temporal testing to received sensor outputs to detect activity patterns of the different stages, each correlator using results of the logical or temporal testing to generate a respective alert output for a human or machine user of the threat detection system; and
wherein the internal staging sensor is operative, during the detection period of operation, to flag one of the pairs of hosts located within the protected computer system as gathering data within the protected computer system prior to exfiltration of the data from the protected computer system in response to an average volume of traffic observed between the pair of hosts during the detection period of operation exceeding an average volume of traffic observed between the pair of hosts during the training period.
14. A method according to claim 13, wherein:
the user-activity sensors include a login sensor for which the pertinent activity includes user-host login events each identifying a user logging in to a respective host computer of the protected computer system;
the host-activity sensors include a command-and-control sensor for which the pertinent activity includes contact to external web domains by hosts of the protected computer system; and
the application-activity sensors include an installed-application sensor for which the pertinent activity includes installation of application software on hosts of the protected computer system.
15. A method according to claim 13, wherein the training period has a duration of at least one month.
16. A method according to claim 13, wherein the logical testing includes logical relationships among respective instances of an identifier of a system entity in the outputs of two or more of the sensors.
17. A method according to claim 16, wherein the logical relationships include logical AND, logical inclusive-OR, and logical DIFFERENCE.
18. A method according to claim 16, wherein the system entity is selected from the group consisting of host, user and domain.
19. A method according to claim 13, wherein the temporal testing includes temporal relationships among respective instances of an identifier of a system entity in the outputs of two or more of the sensors.
20. A method according to claim 19, wherein the logical temporal relationships include BEFORE and AFTER.
21. A method according to claim 13, wherein activity information about the pertinent activity is provided to the sensors via logs generated by respective components of the protected computer system.
22. A method according to claim 21, wherein the components of the protected computer system include one or more of Web proxies, firewalls, domain controllers, virtual private network (VPN) component and Dynamic Host Control Protocol (DHCP) servers.
23. A method according to claim 21, wherein the logs are parsed logs, and further comprising a preprocessor operative to generate sanitized logs from the parsed logs, the sanitized logs conveying activity information for activity reflected in the parsed logs with correction for timestamp inconsistencies or host address inconsistencies.
24. A method according to claim 13, wherein the anomaly sensors are tunable in accordance with feedback information indicating whether the new activity identified in sensor output corresponds to actual threat activity versus normal, non-threat activity.
25. A method according to claim 13, wherein:
each user-activity sensor is operative to (i) maintain a history of normal user activity for users of the protected computer system, (ii) create respective user profiles of the user activity based on the history of normal user activity, and (iii) generate user-activity sensor output and provide it to one or more of the set of correlators, the user-activity sensor output identifying anomalous user activity deviating from the respective user profile of user activity;
each host-activity sensor is operative to (i) maintain a history of normal host activity for host computers of the protected computer system, (ii) create respective host profiles of the host activity based on the history of normal host activity, and (iii) generate host-activity sensor output and provide it to one or more of the set of correlators, the host-activity sensor output identifying anomalous host activity deviating from the respective host profile of host activity; and
each application-activity sensor is operative to (i) maintain a history of normal application activity for application programs of the protected computer system, (ii) create respective application profiles of the application activity based on the history of normal application activity, and (iii) generate application-activity sensor output and provide it to one or more of the set of correlators, the application-activity sensor output identifying application activity deviating from the respective application profile of application activity.
26. A method according to claim 25, wherein:
the user-activity sensor uses first statistical or machine learning techniques in creating the user profiles;
the host-activity sensor uses second statistical or machine learning techniques in creating the host profiles; and
the application-activity sensor uses third statistical or machine learning techniques in creating the application profiles.
27. A method according to claim 13, wherein the internal staging sensor is operative to build the history of communication activity between hosts located within the protected computer system during the training period of operation by building a map of communications within the protected computer system during the training period using a record of at least one network traffic analyzer, wherein the record of the network traffic analyzer includes an amount of data transferred between each pair of hosts located within the protected computer system during the training period of operation; and
wherein the map of communications within the protected computer system during the training period includes an average volume of traffic observed during the training period between each pair of hosts located within the protected computer system.
US13/731,635 2012-12-31 2012-12-31 Anomaly sensor framework for detecting advanced persistent threat attacks Active 2033-09-02 US9378361B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/731,635 US9378361B1 (en) 2012-12-31 2012-12-31 Anomaly sensor framework for detecting advanced persistent threat attacks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/731,635 US9378361B1 (en) 2012-12-31 2012-12-31 Anomaly sensor framework for detecting advanced persistent threat attacks

Publications (1)

Publication Number Publication Date
US9378361B1 true US9378361B1 (en) 2016-06-28

Family

ID=56136411

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/731,635 Active 2033-09-02 US9378361B1 (en) 2012-12-31 2012-12-31 Anomaly sensor framework for detecting advanced persistent threat attacks

Country Status (1)

Country Link
US (1) US9378361B1 (en)

Cited By (69)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160110551A1 (en) * 2013-02-14 2016-04-21 The United States Of America As Represented By The Secretary Of The Navy Computer System Anomaly Detection Using Human Responses to Ambient Representations of Hidden Computing System and Process Metadata
US20160364745A1 (en) * 2015-06-09 2016-12-15 Yahoo! Inc. Outlier data detection
US20170048270A1 (en) * 2015-08-10 2017-02-16 Accenture Global Services Limited Network security
US20170142023A1 (en) * 2015-11-18 2017-05-18 The Rubicon Project, Inc. Networked system for interconnecting sensor-based devices
US9693195B2 (en) 2015-09-16 2017-06-27 Ivani, LLC Detecting location within a network
US20170244734A1 (en) * 2016-02-19 2017-08-24 Secureworks Corp. System and Method for Detecting and Monitoring Network Communication
US20180004958A1 (en) * 2016-07-01 2018-01-04 Hewlett Packard Enterprise Development Lp Computer attack model management
US20180096157A1 (en) * 2016-10-05 2018-04-05 Microsoft Technology Licensing, Llc Detection of compromised devices via user states
WO2018071280A1 (en) * 2016-10-13 2018-04-19 Microsoft Technology Licensing, Llc Active and passive method for ip to name resolution
US9979742B2 (en) 2013-01-16 2018-05-22 Palo Alto Networks (Israel Analytics) Ltd. Identifying anomalous messages
US10050731B1 (en) * 2017-01-27 2018-08-14 The United States Of America, As Represented By The Secretary Of The Navy Apparatus and method for detecting a multi-homed device using clock skew
US10064014B2 (en) 2015-09-16 2018-08-28 Ivani, LLC Detecting location within a network
US10075461B2 (en) 2015-05-31 2018-09-11 Palo Alto Networks (Israel Analytics) Ltd. Detection of anomalous administrative actions
US20180262498A1 (en) * 2017-03-13 2018-09-13 Microsoft Technology Licensing, Llc System to filter impossible user travel indicators
US10129239B2 (en) * 2015-05-08 2018-11-13 Citrix Systems, Inc. Systems and methods for performing targeted scanning of a target range of IP addresses to verify security certificates
US10230742B2 (en) * 2015-01-30 2019-03-12 Anomali Incorporated Space and time efficient threat detection
US10262132B2 (en) * 2016-07-01 2019-04-16 Entit Software Llc Model-based computer attack analytics orchestration
EP3484118A1 (en) * 2017-11-09 2019-05-15 Accenture Global Solutions Limited Detection of adversary lateral movement in multi-domain iiot environments
US10321270B2 (en) 2015-09-16 2019-06-11 Ivani, LLC Reverse-beacon indoor positioning system using existing detection fields
US10325641B2 (en) 2017-08-10 2019-06-18 Ivani, LLC Detecting location within a network
US10356106B2 (en) 2011-07-26 2019-07-16 Palo Alto Networks (Israel Analytics) Ltd. Detecting anomaly action within a computer network
US10361585B2 (en) 2014-01-27 2019-07-23 Ivani, LLC Systems and methods to allow for a smart device
US10382893B1 (en) 2015-09-16 2019-08-13 Ivani, LLC Building system control utilizing building occupancy
US10425436B2 (en) 2016-09-04 2019-09-24 Palo Alto Networks (Israel Analytics) Ltd. Identifying bulletproof autonomous systems
US20190379689A1 (en) * 2018-06-06 2019-12-12 ReliaQuest Holdings. LLC Threat mitigation system and method
US10574681B2 (en) * 2016-09-04 2020-02-25 Palo Alto Networks (Israel Analytics) Ltd. Detection of known and unknown malicious domains
US10599857B2 (en) 2017-08-29 2020-03-24 Micro Focus Llc Extracting features for authentication events
US10665284B2 (en) 2015-09-16 2020-05-26 Ivani, LLC Detecting location within a network
US10686687B2 (en) 2016-12-05 2020-06-16 Aware360 Ltd. Integrated personal safety and equipment monitoring system
US10686829B2 (en) 2016-09-05 2020-06-16 Palo Alto Networks (Israel Analytics) Ltd. Identifying changes in use of user credentials
US10713362B1 (en) * 2013-09-30 2020-07-14 Fireeye, Inc. Dynamically adaptive framework and method for classifying malware using intelligent static, emulation, and dynamic analyses
US10735374B2 (en) * 2015-12-24 2020-08-04 Huawei Technologies Co., Ltd. Method, apparatus, and system for detecting terminal security status
US10794093B2 (en) 2017-05-19 2020-10-06 Microsoft Technology Licensing, Llc Method of optimizing memory wire actuator energy output
US10834120B2 (en) 2014-12-03 2020-11-10 Splunk Inc. Identifying related communication interactions to a security threat in a computing environment
CN112597499A (en) * 2020-12-30 2021-04-02 北京启明星辰信息安全技术有限公司 Nondestructive safety inspection method and system for video monitoring equipment
US10984099B2 (en) 2017-08-29 2021-04-20 Micro Focus Llc Unauthorized authentication events
US10999304B2 (en) 2018-04-11 2021-05-04 Palo Alto Networks (Israel Analytics) Ltd. Bind shell attack detection
US11012492B1 (en) 2019-12-26 2021-05-18 Palo Alto Networks (Israel Analytics) Ltd. Human activity detection in computing device transmissions
US11070569B2 (en) 2019-01-30 2021-07-20 Palo Alto Networks (Israel Analytics) Ltd. Detecting outlier pairs of scanned ports
USD926200S1 (en) 2019-06-06 2021-07-27 Reliaquest Holdings, Llc Display screen or portion thereof with a graphical user interface
USD926782S1 (en) 2019-06-06 2021-08-03 Reliaquest Holdings, Llc Display screen or portion thereof with a graphical user interface
USD926811S1 (en) 2019-06-06 2021-08-03 Reliaquest Holdings, Llc Display screen or portion thereof with a graphical user interface
USD926810S1 (en) 2019-06-05 2021-08-03 Reliaquest Holdings, Llc Display screen or portion thereof with a graphical user interface
USD926809S1 (en) 2019-06-05 2021-08-03 Reliaquest Holdings, Llc Display screen or portion thereof with a graphical user interface
US11122064B2 (en) 2018-04-23 2021-09-14 Micro Focus Llc Unauthorized authentication event detection
US11184378B2 (en) 2019-01-30 2021-11-23 Palo Alto Networks (Israel Analytics) Ltd. Scanner probe detection
US11184376B2 (en) 2019-01-30 2021-11-23 Palo Alto Networks (Israel Analytics) Ltd. Port scan detection using destination profiles
US11184377B2 (en) 2019-01-30 2021-11-23 Palo Alto Networks (Israel Analytics) Ltd. Malicious port scan detection using source profiles
US11290479B2 (en) * 2018-08-11 2022-03-29 Rapid7, Inc. Determining insights in an electronic environment
US11310247B2 (en) * 2016-12-21 2022-04-19 Micro Focus Llc Abnormal behavior detection of enterprise entities using time-series data
US11316872B2 (en) 2019-01-30 2022-04-26 Palo Alto Networks (Israel Analytics) Ltd. Malicious port scan detection using port profiles
US11321463B2 (en) * 2020-01-09 2022-05-03 Rockwell Collins, Inc. Hardware malware profiling and detection system
US11350238B2 (en) 2015-09-16 2022-05-31 Ivani, LLC Systems and methods for detecting the presence of a user at a computer
US20220224704A1 (en) * 2019-05-21 2022-07-14 Schneider Electric USA, Inc. Establishing and maintaining secure device communication
US11397413B2 (en) 2017-08-29 2022-07-26 Micro Focus Llc Training models based on balanced training data sets
US11425162B2 (en) 2020-07-01 2022-08-23 Palo Alto Networks (Israel Analytics) Ltd. Detection of malicious C2 channels abusing social media sites
US20220353280A1 (en) * 2017-03-31 2022-11-03 Musarubra Us Llc Identifying malware-suspect end points through entropy changes in consolidated logs
US11509680B2 (en) 2020-09-30 2022-11-22 Palo Alto Networks (Israel Analytics) Ltd. Classification of cyber-alerts into security incidents
US11533584B2 (en) 2015-09-16 2022-12-20 Ivani, LLC Blockchain systems and methods for confirming presence
US20220407881A1 (en) * 2021-06-18 2022-12-22 Extrahop Networks, Inc. Identifying network entities based on beaconing activity
US11558301B2 (en) 2020-03-19 2023-01-17 EMC IP Holding Company LLC Method, device, and computer program product for accessing application system
US11606385B2 (en) 2020-02-13 2023-03-14 Palo Alto Networks (Israel Analytics) Ltd. Behavioral DNS tunneling identification
US20230224275A1 (en) * 2022-01-12 2023-07-13 Bank Of America Corporation Preemptive threat detection for an information system
US11709946B2 (en) 2018-06-06 2023-07-25 Reliaquest Holdings, Llc Threat mitigation system and method
US11777909B1 (en) * 2021-08-19 2023-10-03 Gen Digital Inc. Identifying and removing a tracking capability from an external domain that performs a tracking activity on a host web page
US11785025B2 (en) 2021-04-15 2023-10-10 Bank Of America Corporation Threat detection within information systems
US11799880B2 (en) 2022-01-10 2023-10-24 Palo Alto Networks (Israel Analytics) Ltd. Network adaptive alert prioritization system
US11811820B2 (en) 2020-02-24 2023-11-07 Palo Alto Networks (Israel Analytics) Ltd. Malicious C and C channel to fixed IP detection
US11921864B2 (en) 2022-09-23 2024-03-05 Reliaquest Holdings, Llc Threat mitigation system and method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080148398A1 (en) * 2006-10-31 2008-06-19 Derek John Mezack System and Method for Definition and Automated Analysis of Computer Security Threat Models
US20110153811A1 (en) * 2009-12-18 2011-06-23 Hyun Cheol Jeong System and method for modeling activity patterns of network traffic to detect botnets
US8402543B1 (en) * 2011-03-25 2013-03-19 Narus, Inc. Machine learning based botnet detection with dynamic adaptation
US8555388B1 (en) * 2011-05-24 2013-10-08 Palo Alto Networks, Inc. Heuristic botnet detection
WO2013014672A1 (en) 2011-07-26 2013-01-31 Light Cyber Ltd A method for detecting anomaly action within a computer network
US8793790B2 (en) * 2011-10-11 2014-07-29 Honeywell International Inc. System and method for insider threat detection
CN102594625A (en) 2012-03-07 2012-07-18 北京启明星辰信息技术股份有限公司 White data filter method and system in APT (Advanced Persistent Threat) intelligent detection and analysis platform

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Curry et al. "Mobilizing Intelligent security opeartions for advanced persistent threats", Feb. 2011, pp. 1-17 http://www.emc.com/collateral/industry-overview/11313-apt-brf.pdf. *
Curry et al. "Mobilizing Intelligent security opeartions for advanced persistent threats", Feb. 2011, pp. 1-17 http://www.emc.eom/collateral/industry-overview/11313-apt-brf.pdf. *
Giura et al., "Using Large Scale Distributied Computing to Unveil advanced Persistent Threats", Nov. 2012 (document properties), pp. 1-13 http://web2.research.att.com/techdocs/TD-101075.pdf. *
Giura et al., "Using Large Scale Distributied Computing to Unveil advanced Persistent Threats", Nov. 2012 (document proporties), pp. 1-13 http://web2.research.att.com/techdocs/TD-101075.pdf. *
Lunt et al., "A Real-Time Intrusion-Detection Expert System (IDES)", Feb. 1992, pp. 1-166, http://www.csl.sri.com/papers/9sri/9sri.pdf. *
Pantola et al., "Normalization of Logs for Networked Devices in a Security Information Event Management System", Nov. 2009 (document properties), pp. 1-6 http://justinspeaks.files.wordpress.com/2010/10/device-normalizer-paper.pdf. *

Cited By (140)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10356106B2 (en) 2011-07-26 2019-07-16 Palo Alto Networks (Israel Analytics) Ltd. Detecting anomaly action within a computer network
US9979739B2 (en) 2013-01-16 2018-05-22 Palo Alto Networks (Israel Analytics) Ltd. Automated forensics of computer systems using behavioral intelligence
US9979742B2 (en) 2013-01-16 2018-05-22 Palo Alto Networks (Israel Analytics) Ltd. Identifying anomalous messages
US20160110551A1 (en) * 2013-02-14 2016-04-21 The United States Of America As Represented By The Secretary Of The Navy Computer System Anomaly Detection Using Human Responses to Ambient Representations of Hidden Computing System and Process Metadata
US10713362B1 (en) * 2013-09-30 2020-07-14 Fireeye, Inc. Dynamically adaptive framework and method for classifying malware using intelligent static, emulation, and dynamic analyses
US10686329B2 (en) 2014-01-27 2020-06-16 Ivani, LLC Systems and methods to allow for a smart device
US10361585B2 (en) 2014-01-27 2019-07-23 Ivani, LLC Systems and methods to allow for a smart device
US11612045B2 (en) 2014-01-27 2023-03-21 Ivani, LLC Systems and methods to allow for a smart device
US11246207B2 (en) 2014-01-27 2022-02-08 Ivani, LLC Systems and methods to allow for a smart device
US10834120B2 (en) 2014-12-03 2020-11-10 Splunk Inc. Identifying related communication interactions to a security threat in a computing environment
US11895143B2 (en) 2014-12-03 2024-02-06 Splunk Inc. Providing action recommendations based on action effectiveness across information technology environments
US11323472B2 (en) 2014-12-03 2022-05-03 Splunk Inc. Identifying automated responses to security threats based on obtained communication interactions
US11765198B2 (en) 2014-12-03 2023-09-19 Splunk Inc. Selecting actions responsive to computing environment incidents based on severity rating
US11190539B2 (en) 2014-12-03 2021-11-30 Splunk Inc. Modifying incident response time periods based on containment action effectiveness
US11019092B2 (en) 2014-12-03 2021-05-25 Splunk Inc. Learning based security threat containment
US11647043B2 (en) 2014-12-03 2023-05-09 Splunk Inc. Identifying security actions based on computing asset relationship data
US11805148B2 (en) 2014-12-03 2023-10-31 Splunk Inc. Modifying incident response time periods based on incident volume
US11658998B2 (en) 2014-12-03 2023-05-23 Splunk Inc. Translating security actions into computing asset-specific action procedures
US11870802B1 (en) 2014-12-03 2024-01-09 Splunk Inc. Identifying automated responses to security threats based on communication interactions content
US11165812B2 (en) 2014-12-03 2021-11-02 Splunk Inc. Containment of security threats within a computing environment
US11025664B2 (en) 2014-12-03 2021-06-01 Splunk Inc. Identifying security actions for responding to security threats based on threat state information
US11677780B2 (en) 2014-12-03 2023-06-13 Splunk Inc. Identifying automated response actions based on asset classification
US10855718B2 (en) * 2014-12-03 2020-12-01 Splunk Inc. Management of actions in a computing environment based on asset classification
US11757925B2 (en) 2014-12-03 2023-09-12 Splunk Inc. Managing security actions in a computing environment based on information gathering activity of a security threat
US10230742B2 (en) * 2015-01-30 2019-03-12 Anomali Incorporated Space and time efficient threat detection
US10616248B2 (en) 2015-01-30 2020-04-07 Anomali Incorporated Space and time efficient threat detection
US10129239B2 (en) * 2015-05-08 2018-11-13 Citrix Systems, Inc. Systems and methods for performing targeted scanning of a target range of IP addresses to verify security certificates
US10630674B2 (en) 2015-05-08 2020-04-21 Citrix Systems, Inc. Systems and methods for performing targeted scanning of a target range of IP addresses to verify security certificates
US10075461B2 (en) 2015-05-31 2018-09-11 Palo Alto Networks (Israel Analytics) Ltd. Detection of anomalous administrative actions
US10713683B2 (en) * 2015-06-09 2020-07-14 Oath Inc. Outlier data detection
US20160364745A1 (en) * 2015-06-09 2016-12-15 Yahoo! Inc. Outlier data detection
US9756067B2 (en) * 2015-08-10 2017-09-05 Accenture Global Services Limited Network security
US20170048270A1 (en) * 2015-08-10 2017-02-16 Accenture Global Services Limited Network security
US10142785B2 (en) 2015-09-16 2018-11-27 Ivani, LLC Detecting location within a network
US10904698B2 (en) 2015-09-16 2021-01-26 Ivani, LLC Detecting location within a network
US10477348B2 (en) 2015-09-16 2019-11-12 Ivani, LLC Detection network self-discovery
US9693195B2 (en) 2015-09-16 2017-06-27 Ivani, LLC Detecting location within a network
US10531230B2 (en) 2015-09-16 2020-01-07 Ivani, LLC Blockchain systems and methods for confirming presence
US11178508B2 (en) 2015-09-16 2021-11-16 Ivani, LLC Detection network self-discovery
US11323845B2 (en) 2015-09-16 2022-05-03 Ivani, LLC Reverse-beacon indoor positioning system using existing detection fields
US10455357B2 (en) 2015-09-16 2019-10-22 Ivani, LLC Detecting location within a network
US10917745B2 (en) 2015-09-16 2021-02-09 Ivani, LLC Building system control utilizing building occupancy
US10397742B2 (en) 2015-09-16 2019-08-27 Ivani, LLC Detecting location within a network
US10665284B2 (en) 2015-09-16 2020-05-26 Ivani, LLC Detecting location within a network
US10667086B2 (en) 2015-09-16 2020-05-26 Ivani, LLC Detecting location within a network
US10382893B1 (en) 2015-09-16 2019-08-13 Ivani, LLC Building system control utilizing building occupancy
US11350238B2 (en) 2015-09-16 2022-05-31 Ivani, LLC Systems and methods for detecting the presence of a user at a computer
US11533584B2 (en) 2015-09-16 2022-12-20 Ivani, LLC Blockchain systems and methods for confirming presence
US10064013B2 (en) 2015-09-16 2018-08-28 Ivani, LLC Detecting location within a network
US10321270B2 (en) 2015-09-16 2019-06-11 Ivani, LLC Reverse-beacon indoor positioning system using existing detection fields
US10064014B2 (en) 2015-09-16 2018-08-28 Ivani, LLC Detecting location within a network
US20170142023A1 (en) * 2015-11-18 2017-05-18 The Rubicon Project, Inc. Networked system for interconnecting sensor-based devices
US11431676B2 (en) * 2015-12-24 2022-08-30 Huawei Technologies Co., Ltd. Method, apparatus, and system for detecting terminal security status
US10735374B2 (en) * 2015-12-24 2020-08-04 Huawei Technologies Co., Ltd. Method, apparatus, and system for detecting terminal security status
US10713360B2 (en) * 2016-02-19 2020-07-14 Secureworks Corp. System and method for detecting and monitoring network communication
US20170244734A1 (en) * 2016-02-19 2017-08-24 Secureworks Corp. System and Method for Detecting and Monitoring Network Communication
US10262132B2 (en) * 2016-07-01 2019-04-16 Entit Software Llc Model-based computer attack analytics orchestration
US20180004958A1 (en) * 2016-07-01 2018-01-04 Hewlett Packard Enterprise Development Lp Computer attack model management
US10574681B2 (en) * 2016-09-04 2020-02-25 Palo Alto Networks (Israel Analytics) Ltd. Detection of known and unknown malicious domains
US10425436B2 (en) 2016-09-04 2019-09-24 Palo Alto Networks (Israel Analytics) Ltd. Identifying bulletproof autonomous systems
US10686829B2 (en) 2016-09-05 2020-06-16 Palo Alto Networks (Israel Analytics) Ltd. Identifying changes in use of user credentials
US10534925B2 (en) * 2016-10-05 2020-01-14 Microsoft Technology Licensing, Llc Detection of compromised devices via user states
WO2018067293A1 (en) * 2016-10-05 2018-04-12 Microsoft Technology Licensing, Llc Detection of compromised devices via user states
CN109791587A (en) * 2016-10-05 2019-05-21 微软技术许可有限责任公司 Equipment is endangered via User Status detection
US20180096157A1 (en) * 2016-10-05 2018-04-05 Microsoft Technology Licensing, Llc Detection of compromised devices via user states
CN109791587B (en) * 2016-10-05 2023-05-05 微软技术许可有限责任公司 Detecting compromised devices via user status
CN109891858A (en) * 2016-10-13 2019-06-14 微软技术许可有限责任公司 Actively and passively method for IP to name resolving
WO2018071280A1 (en) * 2016-10-13 2018-04-19 Microsoft Technology Licensing, Llc Active and passive method for ip to name resolution
US10505894B2 (en) 2016-10-13 2019-12-10 Microsoft Technology Licensing, Llc Active and passive method to perform IP to name resolution in organizational environments
US10686687B2 (en) 2016-12-05 2020-06-16 Aware360 Ltd. Integrated personal safety and equipment monitoring system
US11310247B2 (en) * 2016-12-21 2022-04-19 Micro Focus Llc Abnormal behavior detection of enterprise entities using time-series data
US10050731B1 (en) * 2017-01-27 2018-08-14 The United States Of America, As Represented By The Secretary Of The Navy Apparatus and method for detecting a multi-homed device using clock skew
US10511599B2 (en) * 2017-03-13 2019-12-17 Microsoft Technology Licensing, Llc System to filter impossible user travel indicators
US20180262498A1 (en) * 2017-03-13 2018-09-13 Microsoft Technology Licensing, Llc System to filter impossible user travel indicators
US20220353280A1 (en) * 2017-03-31 2022-11-03 Musarubra Us Llc Identifying malware-suspect end points through entropy changes in consolidated logs
US11916934B2 (en) * 2017-03-31 2024-02-27 Musarubra Us Llc Identifying malware-suspect end points through entropy changes in consolidated logs
US10794093B2 (en) 2017-05-19 2020-10-06 Microsoft Technology Licensing, Llc Method of optimizing memory wire actuator energy output
US10325641B2 (en) 2017-08-10 2019-06-18 Ivani, LLC Detecting location within a network
US10599857B2 (en) 2017-08-29 2020-03-24 Micro Focus Llc Extracting features for authentication events
US11397413B2 (en) 2017-08-29 2022-07-26 Micro Focus Llc Training models based on balanced training data sets
US10984099B2 (en) 2017-08-29 2021-04-20 Micro Focus Llc Unauthorized authentication events
EP3484118A1 (en) * 2017-11-09 2019-05-15 Accenture Global Solutions Limited Detection of adversary lateral movement in multi-domain iiot environments
US10812499B2 (en) 2017-11-09 2020-10-20 Accenture Global Solutions Limited Detection of adversary lateral movement in multi-domain IIOT environments
US11522882B2 (en) 2017-11-09 2022-12-06 Accenture Global Solutions Limited Detection of adversary lateral movement in multi-domain IIOT environments
US10999304B2 (en) 2018-04-11 2021-05-04 Palo Alto Networks (Israel Analytics) Ltd. Bind shell attack detection
US11122064B2 (en) 2018-04-23 2021-09-14 Micro Focus Llc Unauthorized authentication event detection
US10848506B2 (en) 2018-06-06 2020-11-24 Reliaquest Holdings, Llc Threat mitigation system and method
US10855711B2 (en) * 2018-06-06 2020-12-01 Reliaquest Holdings, Llc Threat mitigation system and method
US11637847B2 (en) 2018-06-06 2023-04-25 Reliaquest Holdings, Llc Threat mitigation system and method
US11611577B2 (en) 2018-06-06 2023-03-21 Reliaquest Holdings, Llc Threat mitigation system and method
US11265338B2 (en) 2018-06-06 2022-03-01 Reliaquest Holdings, Llc Threat mitigation system and method
US10735444B2 (en) 2018-06-06 2020-08-04 Reliaquest Holdings, Llc Threat mitigation system and method
US11297080B2 (en) 2018-06-06 2022-04-05 Reliaquest Holdings, Llc Threat mitigation system and method
US10848512B2 (en) 2018-06-06 2020-11-24 Reliaquest Holdings, Llc Threat mitigation system and method
US11108798B2 (en) 2018-06-06 2021-08-31 Reliaquest Holdings, Llc Threat mitigation system and method
US11323462B2 (en) 2018-06-06 2022-05-03 Reliaquest Holdings, Llc Threat mitigation system and method
US10965703B2 (en) 2018-06-06 2021-03-30 Reliaquest Holdings, Llc Threat mitigation system and method
US20190379689A1 (en) * 2018-06-06 2019-12-12 Reliaquest Holdings, Llc Threat mitigation system and method
US10951641B2 (en) 2018-06-06 2021-03-16 Reliaquest Holdings, Llc Threat mitigation system and method
US10855702B2 (en) 2018-06-06 2020-12-01 Reliaquest Holdings, Llc Threat mitigation system and method
US11363043B2 (en) 2018-06-06 2022-06-14 Reliaquest Holdings, Llc Threat mitigation system and method
US11374951B2 (en) 2018-06-06 2022-06-28 Reliaquest Holdings, Llc Threat mitigation system and method
US10848513B2 (en) 2018-06-06 2020-11-24 Reliaquest Holdings, Llc Threat mitigation system and method
US11095673B2 (en) 2018-06-06 2021-08-17 Reliaquest Holdings, Llc Threat mitigation system and method
US10721252B2 (en) 2018-06-06 2020-07-21 Reliaquest Holdings, Llc Threat mitigation system and method
US10735443B2 (en) 2018-06-06 2020-08-04 Reliaquest Holdings, Llc Threat mitigation system and method
US11709946B2 (en) 2018-06-06 2023-07-25 Reliaquest Holdings, Llc Threat mitigation system and method
US11687659B2 (en) 2018-06-06 2023-06-27 Reliaquest Holdings, Llc Threat mitigation system and method
US11588838B2 (en) 2018-06-06 2023-02-21 Reliaquest Holdings, Llc Threat mitigation system and method
US11528287B2 (en) * 2018-06-06 2022-12-13 Reliaquest Holdings, Llc Threat mitigation system and method
US11856017B2 (en) 2018-08-11 2023-12-26 Rapid7, Inc. Machine learning correlator to infer network properties
US11290479B2 (en) * 2018-08-11 2022-03-29 Rapid7, Inc. Determining insights in an electronic environment
US11316872B2 (en) 2019-01-30 2022-04-26 Palo Alto Networks (Israel Analytics) Ltd. Malicious port scan detection using port profiles
US11070569B2 (en) 2019-01-30 2021-07-20 Palo Alto Networks (Israel Analytics) Ltd. Detecting outlier pairs of scanned ports
US11184377B2 (en) 2019-01-30 2021-11-23 Palo Alto Networks (Israel Analytics) Ltd. Malicious port scan detection using source profiles
US11184378B2 (en) 2019-01-30 2021-11-23 Palo Alto Networks (Israel Analytics) Ltd. Scanner probe detection
US11184376B2 (en) 2019-01-30 2021-11-23 Palo Alto Networks (Israel Analytics) Ltd. Port scan detection using destination profiles
US20220224704A1 (en) * 2019-05-21 2022-07-14 Schneider Electric USA, Inc. Establishing and maintaining secure device communication
USD926809S1 (en) 2019-06-05 2021-08-03 Reliaquest Holdings, Llc Display screen or portion thereof with a graphical user interface
USD926810S1 (en) 2019-06-05 2021-08-03 Reliaquest Holdings, Llc Display screen or portion thereof with a graphical user interface
USD926811S1 (en) 2019-06-06 2021-08-03 Reliaquest Holdings, Llc Display screen or portion thereof with a graphical user interface
USD926782S1 (en) 2019-06-06 2021-08-03 Reliaquest Holdings, Llc Display screen or portion thereof with a graphical user interface
USD926200S1 (en) 2019-06-06 2021-07-27 Reliaquest Holdings, Llc Display screen or portion thereof with a graphical user interface
US11012492B1 (en) 2019-12-26 2021-05-18 Palo Alto Networks (Israel Analytics) Ltd. Human activity detection in computing device transmissions
US11321463B2 (en) * 2020-01-09 2022-05-03 Rockwell Collins, Inc. Hardware malware profiling and detection system
US11606385B2 (en) 2020-02-13 2023-03-14 Palo Alto Networks (Israel Analytics) Ltd. Behavioral DNS tunneling identification
US11811820B2 (en) 2020-02-24 2023-11-07 Palo Alto Networks (Israel Analytics) Ltd. Malicious C and C channel to fixed IP detection
US11558301B2 (en) 2020-03-19 2023-01-17 EMC IP Holding Company LLC Method, device, and computer program product for accessing application system
US11924232B2 (en) * 2020-05-20 2024-03-05 Schneider Electric USA, Inc. Establishing and maintaining secure device communication
US11425162B2 (en) 2020-07-01 2022-08-23 Palo Alto Networks (Israel Analytics) Ltd. Detection of malicious C2 channels abusing social media sites
US11509680B2 (en) 2020-09-30 2022-11-22 Palo Alto Networks (Israel Analytics) Ltd. Classification of cyber-alerts into security incidents
CN112597499A (en) * 2020-12-30 2021-04-02 北京启明星辰信息安全技术有限公司 Nondestructive safety inspection method and system for video monitoring equipment
CN112597499B (en) * 2020-12-30 2024-02-20 北京启明星辰信息安全技术有限公司 Nondestructive security inspection method and system for video monitoring equipment
US11930025B2 (en) 2021-04-15 2024-03-12 Bank Of America Corporation Threat detection and prevention for information systems
US11785025B2 (en) 2021-04-15 2023-10-10 Bank Of America Corporation Threat detection within information systems
US20220407881A1 (en) * 2021-06-18 2022-12-22 Extrahop Networks, Inc. Identifying network entities based on beaconing activity
US11777909B1 (en) * 2021-08-19 2023-10-03 Gen Digital Inc. Identifying and removing a tracking capability from an external domain that performs a tracking activity on a host web page
US11799880B2 (en) 2022-01-10 2023-10-24 Palo Alto Networks (Israel Analytics) Ltd. Network adaptive alert prioritization system
US20230224275A1 (en) * 2022-01-12 2023-07-13 Bank Of America Corporation Preemptive threat detection for an information system
US11921864B2 (en) 2022-09-23 2024-03-05 Reliaquest Holdings, Llc Threat mitigation system and method

Similar Documents

Publication Publication Date Title
US9378361B1 (en) Anomaly sensor framework for detecting advanced persistent threat attacks
US11522887B2 (en) Artificial intelligence controller orchestrating network components for a cyber threat defense
US11902322B2 (en) Method, apparatus, and system to map network reachability
US10296748B2 (en) Simulated attack generator for testing a cybersecurity system
Chung et al. NICE: Network intrusion detection and countermeasure selection in virtual network systems
US9516039B1 (en) Behavioral detection of suspicious host activities in an enterprise
CN106537872B (en) Method for detecting attacks in a computer network
Miloslavskaya Security operations centers for information security incident management
Fuentes-García et al. Present and future of network security monitoring
Lange et al. Event Prioritization and Correlation based on Pattern Mining Techniques
Diaz-Honrubia et al. A trusted platform module-based, pre-emptive and dynamic asset discovery tool
Morgese Stepping out of the MUD: contextual network threat information for IoT devices with manufacturer-provided behavioural profiles
Mohammed Automatic Port Scanner
Taha Intrusion detection correlation in computer network using multi-agent system
Yadav et al. An automated network security checking and alert system: A new framework
Ananbeh et al. Improving ICS security through Honeynets and Machine Learning techniques
Dillabaugh et al. Cyber Security Approaches for Industrial Control Networks
Hubballi et al. Event Log Analysis and Correlation: A Digital Forensic Perspective
Wahid Estimating the internet malicious host population while preserving privacy
Sanjana et al. Network Intrusion Detection and Countermeasure Selection in Virtual Private Network Systems
NAIK et al. A Review of Network Intrusion Detection and Countermeasure Selection in Virtual Network Systems
Lucia et al. Rapid Decentralized Network Intrusion Defense System on Multiple Virtual Machines
Luiijf et al. Intrusion detection: introduction and generics
Satish et al. Protecting Host-Based Intrusion Detectors Through Virtual Machines
Minega Network security analysis using Bayesian attack graphs

Legal Events

Date Code Title Description
AS Assignment

Owner name: EMC CORPORATION, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YEN, TING-FANG;JUELS, ARI;KUPPA, ADITYA;AND OTHERS;SIGNING DATES FROM 20130103 TO 20130222;REEL/FRAME:030103/0831

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: SECURITY AGREEMENT;ASSIGNORS:ASAP SOFTWARE EXPRESS, INC.;AVENTAIL LLC;CREDANT TECHNOLOGIES, INC.;AND OTHERS;REEL/FRAME:040134/0001

Effective date: 20160907

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT, TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNORS:ASAP SOFTWARE EXPRESS, INC.;AVENTAIL LLC;CREDANT TECHNOLOGIES, INC.;AND OTHERS;REEL/FRAME:040136/0001

Effective date: 20160907

AS Assignment

Owner name: EMC IP HOLDING COMPANY LLC, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EMC CORPORATION;REEL/FRAME:040203/0001

Effective date: 20160906

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES, INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:049452/0223

Effective date: 20190320

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:053546/0001

Effective date: 20200409

AS Assignment

Owner name: WYSE TECHNOLOGY L.L.C., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: SCALEIO LLC, MASSACHUSETTS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: MOZY, INC., WASHINGTON

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: MAGINATICS LLC, CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: FORCE10 NETWORKS, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: EMC IP HOLDING COMPANY LLC, TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: EMC CORPORATION, MASSACHUSETTS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL SYSTEMS CORPORATION, TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL SOFTWARE INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL MARKETING L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL INTERNATIONAL, L.L.C., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL USA L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: CREDANT TECHNOLOGIES, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: AVENTAIL LLC, CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: ASAP SOFTWARE EXPRESS, INC., ILLINOIS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

AS Assignment

Owner name: SCALEIO LLC, MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: EMC CORPORATION (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MAGINATICS LLC), MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL INTERNATIONAL L.L.C., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL USA L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO ASAP SOFTWARE EXPRESS, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

AS Assignment

Owner name: SCALEIO LLC, MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: EMC CORPORATION (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MAGINATICS LLC), MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL INTERNATIONAL L.L.C., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL USA L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO ASAP SOFTWARE EXPRESS, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8