US11838300B1 - Run-time configurable cybersecurity system

Run-time configurable cybersecurity system

Info

Publication number
US11838300B1
US17/133,397 · US202017133397A · US11838300B1
Authority
US
United States
Prior art keywords
analytic
subscriber
submission
cybersecurity
objects
Prior art date
Legal status
Active, expires
Application number
US17/133,397
Inventor
Sai Vashisht
Sagar Khangan
Current Assignee
Musarubra US LLC
Original Assignee
Musarubra US LLC
Priority date
Filing date
Publication date
Application filed by Musarubra US LLC
Priority to US17/133,397
Assigned to UBS AG, STAMFORD BRANCH, AS COLLATERAL AGENT: Second Lien Patent Security Agreement. Assignor: FIREEYE SECURITY HOLDINGS US LLC
Assigned to UBS AG, STAMFORD BRANCH, AS COLLATERAL AGENT: First Lien Patent Security Agreement. Assignor: FIREEYE SECURITY HOLDINGS US LLC
Assigned to MANDIANT, INC.: Change of Name. Assignor: FIREEYE, INC.
Assigned to FIREEYE SECURITY HOLDINGS US LLC: Assignment of Assignors Interest. Assignor: MANDIANT, INC.
Assigned to MUSARUBRA US LLC: Merger. Assignor: FIREEYE SECURITY HOLDINGS US LLC
Application granted
Publication of US11838300B1
Legal status: Active
Adjusted expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1416 Event detection, e.g. attack signature detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/10 Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • H04L63/102 Entity profiles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1425 Traffic logging, e.g. anomaly detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1433 Vulnerability analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441 Countermeasures against malicious traffic
    • H04L63/1466 Active attacks involving interception, injection, modification, spoofing of data unit addresses, e.g. hijacking, packet injection or TCP sequence number attacks

Definitions

  • Embodiments of the disclosure relate to the field of cybersecurity. More specifically, one embodiment of the disclosure relates to a system architecture directed to cybersecurity threat detection and a corresponding method thereof.
  • Each on-premises electronic device may constitute a type of computer, such as a personal computer, a locally maintained mainframe, or a local server, for example.
  • As on-premises electronic devices became subjected to cybersecurity attacks (cyberattacks) more regularly, certain preeminent cybersecurity vendors began to develop and deploy on-premises threat detection appliances in order to protect these electronic devices.
  • For on-premises deployments, a customer has to purchase threat detection appliances from a cybersecurity vendor, which requires both a significant upfront capital outlay for the purchase of the appliances as well as significant ongoing operational costs. These operational costs may include the costs for deploying, managing, maintaining, upgrading, repairing and replacing these appliances. For instance, a customer may be required to install multiple types of threat detection appliances within the enterprise network in order to detect different types of cybersecurity threats (cyberthreats). These cyberthreats may coincide with discrete activities associated with known or highly suspected cyberattacks.
  • malware may be generally considered to be software (e.g., executable) that is coded to cause a recipient electronic device to perform unauthorized, unexpected, anomalous, and/or unwanted behaviors or operations (hereinafter, “malicious behaviors”), such as altering the functionality of an electronic device upon execution of the malware.
  • Cybersecurity vendors have provided threat detection through cloud-based offerings that are self-hosted by these vendors.
  • the responsibility for the above-described upfront capital outlays and ongoing operational costs is shifted from the customer to the cybersecurity vendor.
  • the cybersecurity vendor is now saddled with even greater overall costs than a customer itself because the cybersecurity vendor must deploy infrastructure resources sized to handle the maximum aggregate threat detection analytic workload for all of its customers.
  • These overall costs, directed to data processing and storage usage, would need to be passed on to its customers, where any significant cost increase may translate into a significant price increase for the cybersecurity services.
  • customers are unable to accurately estimate or anticipate the costs associated with current and future cybersecurity needs, given that changes in cybersecurity needs amongst all of the customers may influence the costs apportioned for processing or storage usage.
  • public cloud is a fully virtualized environment with a multi-tenant architecture that enables tenants (i.e., customers) to establish different cloud accounts, but share computing and storage resources and retain the isolation of data within each customer's cloud account.
  • the virtualized environment includes on-demand, cloud computing platforms that are provided by a collection of physical data centers, where each data center includes numerous servers hosted by the cloud provider. Examples of different types of public clouds may include, but are not limited or restricted to, Amazon Web Services®, Microsoft® Azure®, or Google Cloud Platform™.
  • FIG. 1 A is a block diagram of an exemplary embodiment of a cloud-based cybersecurity system deployed as a Security-as-a-Service (SaaS) layered on a public cloud operating as an Infrastructure-as-a-Service (IaaS).
  • FIG. 1 B is a block diagram of an exemplary embodiment of a cloud-based cybersecurity system deployed as a cybersecurity service within a cloud network.
  • FIG. 2 is a block diagram of an exemplary embodiment of logic forming the cybersecurity system of FIGS. 1 A- 1 B .
  • FIG. 3 is a block diagram of an exemplary embodiment of a multi-stage object evaluation logic implemented within the cybersecurity system of FIG. 2 .
  • FIG. 4 is a block diagram of an exemplary embodiment of a first evaluation stage of the object evaluation logic of FIG. 2 including a preliminary analytic module.
  • FIG. 5 is a block diagram of an exemplary embodiment of a second evaluation stage of the object evaluation logic including an analytic engine selection module operating with a cyberthreat analytic module deployed within a third evaluation stage of the object evaluation logic of FIG. 2 .
  • FIG. 6 is a block diagram of an exemplary embodiment of an analytic engine configured to operate as part of the cyberthreat analytic module of FIG. 3 .
  • FIG. 7 is a block diagram of an exemplary embodiment of a fourth evaluation stage of the object evaluation logic including a correlation module and a post-processing module deployed within a fifth evaluation stage of the object evaluation logic of FIG. 2 .
  • Embodiments of the present disclosure generally relate to a cloud-based cybersecurity system leveraging resources associated with the infrastructure provided by a public cloud.
  • One embodiment of the cybersecurity system operates as a multi-tenant (subscription-based) Security-as-a-Service (SaaS), which is layered on a multi-tenant Infrastructure-as-a-Service (IaaS) cloud platform.
  • the shared resources hosted by the public cloud are referred to as “public cloud infrastructure resources.”
  • the SaaS-operating cybersecurity system may be installed by a cybersecurity vendor, which is a different entity than the cloud provider.
  • the SaaS may deploy a vendor-specific proprietary software stack to run on the compute and storage resources provided by the IaaS cloud platform.
  • the cybersecurity system may be configured to charge usage in accordance with a different pricing scheme than offered by the IaaS (public cloud).
  • the cybersecurity system may be configured with a tiered subscription pricing scheme based on a number of submissions of objects undergoing cyberthreat analytics by the cybersecurity system (e.g., the number of objects uploaded via a portal or other type of interface, or the number of objects processed to account for objects created and processed during processing of another object if more detailed analytics are requested) along with additional subscription enrichments (e.g., enhanced reporting formats, memory dump capabilities, etc.).
  • the cybersecurity system may be configured with a “pay per usage” pricing scheme, which imposes no maximum submission threshold over a prescribed duration but applies higher costs to each submission.
  • the cybersecurity system enables both the customer and cybersecurity vendor to avoid the complexity and significant capital outlay in buying and operating physical servers and other datacenter infrastructure.
  • the cybersecurity vendor incurs the costs associated with the actual use of certain public cloud infrastructure resources, such as storage amounts or compute time as measured by the time of data processing conducted by computing instances hosted by the public cloud and configured as analytic engines within the cybersecurity system as described below.
  • the subscribers incur the costs associated with their actual number of object submissions for a determination as to whether the objects constitute a cyberthreat.
  • the cybersecurity system is configured to be “submission agnostic,” meaning that the same submission scheme may be followed for uploading different object types for analysis (e.g., email messages, web page content, uniform resource locators (URLs), hashes, files, documents, etc.) and/or the same multi-stage evaluation is conducted on a data sample, inclusive of that object and context information associated with the object, independent of object type.
  • the architecture of the cybersecurity system is designed to conduct cyberthreat analytics on multiple types of objects uploaded to the cybersecurity system by at least (i) validating a submission by confirming that requisite information is included within the submission, (ii) authenticating the subscriber that input the submission, and/or (iii) verifying the subscriber is authorized to perform the task(s) associated with the submission.
  • For a particular type of submission, such as a data sample submission for example, the cybersecurity system conducts cyberthreat analytics on the object in accordance with a multi-stage evaluation that is submission agnostic (i.e., evaluation stages do not change based on the object type).
  • the cybersecurity system may be configured to receive multiple types of objects through an interface (e.g., a cybersecurity portal, device interface including one or more Application Programming Interfaces “APIs”, etc.) upon completion of a subscriber onboarding process.
  • the cybersecurity system may validate the data sample submission by confirming that the submission includes requisite information such as credential(s), a subscription identifier (hereinafter, “Subscription ID”), or the like.
  • the cybersecurity system may authenticate the subscriber by confirming that the submitted credential is active and verify that the subscriber is authorized to perform the requested task(s) through analysis of entitlements made available to the subscriber based on its chosen subscription type as identified by the Subscription ID (e.g., subscription parameters such as access privileges, data sample submission thresholds, virtual key allocation threshold, etc.).
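  • For illustration only, the following minimal Python sketch outlines how the validate/authenticate/authorize sequence described above might be expressed; the field names (e.g., virtual_key, subscription_id, entitlements) are assumptions introduced for the example and are not the patented implementation.
```python
# Illustrative sketch only; field names and checks are assumptions, not the patented design.
REQUIRED_FIELDS = {"virtual_key", "subscription_id", "task"}

def handle_submission(submission: dict, accounts: dict) -> str:
    # (i) Validate: confirm the requisite information is present in the submission.
    missing = REQUIRED_FIELDS - submission.keys()
    if missing:
        return f"rejected: missing fields {sorted(missing)}"

    account = accounts.get(submission["subscription_id"])
    if account is None:
        return "rejected: unknown Subscription ID"

    # (ii) Authenticate: confirm the submitted credential (virtual key) is active.
    if submission["virtual_key"] not in account["active_virtual_keys"]:
        return "rejected: credential not active"

    # (iii) Verify: confirm the subscription entitlements permit the requested task.
    if submission["task"] not in account["entitlements"]["permitted_tasks"]:
        return "rejected: task not permitted by subscription tier"

    return "accepted"

accounts = {
    "SUB-001": {
        "active_virtual_keys": {"VK-abc123"},
        "entitlements": {"permitted_tasks": {"data_sample", "consumption_quota"}},
    }
}
print(handle_submission(
    {"virtual_key": "VK-abc123", "subscription_id": "SUB-001", "task": "data_sample"},
    accounts,
))  # -> accepted
```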
  • the cybersecurity system may conduct cyberthreat analytics on the object, namely analyses conducted on the object and/or context information associated with the object.
  • the context information may include meta-information associated with the object (object context), meta-information associated with the subscription (entitlement context), and/or meta-information associated with the submission (submission context).
  • the “submission context” may include meta-information pertaining to the submission, such as the time of input, origin of the object included in the submission (e.g., from email, network cloud shared drive, network transmission medium, etc.), location of the subscriber's network device providing the object, or the like.
  • the “entitlement context” may include meta-information pertaining to the subscription selected by the subscriber, such as information directed to what features are permitted by the subscription (e.g., types of analytics supported, reporting formats available, or other features that may distinguish different subscription tiers).
  • the “object context” may include meta-information pertaining to the object, such as its extension type.
  • the analytic engines may be selected based, at least in part, on the submission context, entitlement context and/or the object context.
  • the analytic engines may be selected as a combination of any single type or any combination of two or more types of the following analytic engines: (i) static analytic engines that conduct an analysis on the content of an object and generate results including observed features represented by characteristics of the object (and accompanying context information); (ii) dynamic analytic engines that conduct an execution of the object and generate results including features represented by observed behaviors of the analytic engine (and accompanying context information); (iii) machine learning analytic engines that conduct extraction of insights from the submitted object and context information using a trained model and generate results including features represented by a probability of an object being malicious (and accompanying context information); and/or (iv) emulation analytic engines that conduct reproduction of operations representing the execution of the object without such execution and generate results including features represented by the behaviors observed during emulation (and accompanying context information).
  • the generated results (features) produced by the cyberthreat analytics conducted on the object are correlated with features of known malicious objects and/or known benign objects to determine a threat verdict for the object (e.g., malicious/benign, good/bad, high-risk/low-risk, or any other measurement to signify the likelihood of the object being malicious or non-malicious).
  • the cybersecurity system may be further configured to conduct post-processing analytics based, at least in part, on the correlated results in order to determine what additional operations, if any, are to be conducted on the object. These operations may include retention of a portion of the context information associated with an identified malicious or benign object within the cybersecurity intelligence used by the cybersecurity system, transmission of the object to a forensic team for subsequent analysis, or the like.
  • the cybersecurity system is configured to monitor and maintain, on a per subscriber basis, SaaS metrics.
  • the SaaS metrics may include, inter alia, a sum total of data sample submissions made by a subscriber to the cybersecurity system (SaaS subscriber) during a selected time period and/or a sum total of active virtual keys currently issued to the SaaS subscriber.
  • the SaaS metrics may be used for billing of the subscriber based on the number of data sample submissions made during a selected time period, and in some cases, to ensure compliance with subscription entitlements.
  • the cybersecurity system includes an architecture that relies upon the public cloud infrastructure resources and monitors the usage of various services (e.g., data sample submissions, virtual key issuances, etc.) to ensure compliance with subscription entitlements as well as for reporting and billing purposes.
  • the cybersecurity system operates as a multi-tenant, subscription-based SaaS, which leverages resources, such as compute and storage resources, hosted by an IaaS cloud platform, although other deployments are available and pertain to the broader spirit and scope of the invention.
  • the cybersecurity system features (i) interface logic, (ii) administrative control logic, (iii) multi-stage, object evaluation logic, and (iv) reporting logic.
  • the interface logic enables communications to the administrative control logic to validate a submission, authenticate a subscriber associated with the submission, and verify that the subscriber is authorized to perform one or more tasks associated with the submission.
  • the interface logic enables the return of data requested by the submission to the subscriber or routes at least a portion of the submission to the object evaluation logic.
  • the interface logic may include a cybersecurity portal that allows any user (potential subscriber) to register and establish a subscription with the cybersecurity system.
  • the user may receive credentials to allow for the submission of objects (in the form of data samples including the object and its context information) uploaded via the cybersecurity portal for cyberthreat analytics, submission of queries for certain subscriber-based metrics, or submission of parameters for customizing functionality of the object evaluation logic to suit the subscriber's needs.
  • the interface logic may be provided with an additional interface (hereinafter, “device interface”).
  • the device interface includes logic supporting one or more APIs, where access to the APIs may depend on the subscription entitlements.
  • the APIs may include a first API for the submission of objects (data samples including the object and its context information) for cyberthreat analytics, a second API for subscription management (e.g., ascertain the subscriber-based metrics), and a third API for management and/or customization of the functionality of analytic engines operating within the object evaluation logic.
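  • As a hedged client-side illustration of the three APIs described above, the sketch below assembles hypothetical requests for a data sample submission, a subscription metrics query, and a parameter adjustment; the endpoint paths, header names, and payload fields are assumptions and are not defined by the patent.
```python
# Hypothetical client-side sketch; endpoint paths, header names, and payloads are
# assumptions for illustration and are not defined by the patent.
import base64
import json

API_BASE = "https://cybersecurity.example.com/v1"   # placeholder host

def build_request(path: str, virtual_key: str, body: dict) -> dict:
    """Assemble the pieces a subscriber's network device might send via an API."""
    return {
        "url": f"{API_BASE}{path}",
        "headers": {"Authorization": f"Bearer {virtual_key}",
                    "Content-Type": "application/json"},
        "body": json.dumps(body),
    }

virtual_key = "VK-abc123"

# First API: submit a data sample (object plus context information) for cyberthreat analytics.
submit = build_request("/submissions", virtual_key, {
    "subscription_id": "SUB-001",
    "object": {"name": "invoice.pdf",
               "content_b64": base64.b64encode(b"%PDF-1.7 ...").decode()},
    "context": {"origin": "email", "submitted_at": "2021-01-04T10:00:00Z"},
})

# Second API: subscription management (e.g., query subscriber-based metrics).
quota = build_request("/subscription/metrics", virtual_key, {"subscription_id": "SUB-001"})

# Third API: customize analytic-engine behavior (parameter adjustment submission).
tune = build_request("/configuration/parameters", virtual_key,
                     {"subscription_id": "SUB-001", "dynamic_analysis_timeout_sec": 120})

for req in (submit, quota, tune):
    print(req["url"])
```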
  • the administrative control logic includes a subscription management module, a subscriber accounts data store, a credential (key) management module, a consumption quota monitoring module, a configuration management module, a system health assessment module, an auto-scaling module, and a subscription billing module.
  • the subscriber accounts data store may be non-volatile, cloud-based storage hosted by the public cloud that is allocated to the IaaS subscriber (e.g., the cybersecurity vendor), where different portions of the subscriber accounts data store may be allocated to each SaaS subscriber. Therefore, each SaaS subscriber is provided one or more virtual data stores that are secured and inaccessible to other SaaS subscribers.
  • Others of the above-identified modules may be shared by the SaaS subscribers, where these modules are maintained within cloud-based storage hosted by the public cloud and operate based on execution of these modules by compute engines hosted by the public cloud.
  • the subscription management module is configured to control access to the cybersecurity system by controlling a subscriber onboarding process in which user information and financial information are acquired prior to selection, by the user, of a particular subscription tier.
  • the subscription tiers may be allocated based on data sample submission thresholds, over a prescribed period of time, a desired number of submission sources (e.g., number of persons or network devices to be provided with a virtual key for subscriber authentication), or the like.
  • a subscription identifier (hereinafter, “Subscription ID”) may be assigned to a subscription secured by the subscriber and stored within a particular portion of the subscriber accounts data store reserved for that subscriber, given that certain subscribers (e.g., large enterprises) may acquire multiple subscriptions and identification of a particular subscription associated with the submission may be necessary.
  • the subscriber accounts data store may be configured as (i) one or more virtual data stores each maintaining a record of the account data for a particular subscriber, (ii) one or more virtual data stores maintaining a collection of references (e.g., links, etc.) each directed to a different portion of cloud-based storage maintained in the aggregate for the IaaS subscriber (cybersecurity vendor), but allocated separately by the cybersecurity system to different SaaS subscribers to include account data, or (iii) a combination thereof (e.g., storage of credentials and/or personally identifiable information within the virtual data store(s) along with references to a remainder of the account data maintained at different virtual data stores).
  • subscriber account data may include any information (or meta-information) that may be used to identify the subscriber, provide subscription status, authenticate a subscriber based on credentials (e.g., tokens, keys or representatives thereof), identify certain entitlements to be provided to the data sample and other entitlements associated with the subscription to which compliance is required prior to the cybersecurity system completing a task requested by the submission, or the like.
  • the subscriber account data may include a Subscription ID and information associated with the subscriber (e.g., contact information, financial information, location, etc.); subscription entitlements (e.g., subscription parameters such as data sample submission threshold, virtual key allocation threshold, additional enrichments based on the particular subscription directed to additional analytic capabilities made available to data samples from the particular subscriber, additional report formatting, etc.). Additionally, the subscriber account data may further maintain metrics pertaining to the subscription (e.g., SaaS metrics and/or IaaS metrics, etc.).
  • the credential (key) management module is deployed to control credential generation and subscriber authentication.
  • upon establishing a subscription, the credential management module is notified to generate a first credential (referred to as a “master key”) assigned to a subscriber associated with the subscription.
  • the master key may be maintained as part of the subscriber account data, but it is not freely accessible to the subscriber. Instead, the master key may operate as a basis (e.g., seed keying material) used by the credential management module to generate second credentials (each referred to as a “virtual key”).
  • each virtual key may be based, at least in part, on the contents of the master key.
  • One or more virtual keys may be generated and returned to the subscriber in response to a key generation request submission, provided a sum total of the number of requested virtual keys and the number of active virtual keys does not exceed the subscription entitlements.
  • a virtual key is included as part of a submission (e.g., data sample submission, consumption quota submission, parameter adjustment submission, etc.) to authenticate the subscriber and verify that the subscriber is authorized to perform the task associated with that submission.
  • the virtual keys allow for tracking of usage of the cybersecurity system by different subscriber members (e.g., individuals, groups, departments, subsidiaries, etc.) as well as administrative control over access to the cybersecurity system, given that the virtual keys may be disabled, assigned prescribed periods of activity, or the like.
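  • A minimal sketch of the key handling described above is shown below; the HMAC-based derivation, field names, and threshold check are assumptions chosen only to convey that each virtual key may be derived from the master key and issued subject to the allocation entitlement.
```python
# Illustrative sketch; the HMAC-based derivation and field names are assumptions,
# shown only to convey that each virtual key may be based on the master key.
import hashlib
import hmac
import secrets

def issue_virtual_keys(account: dict, requested: int) -> list[str]:
    """Return new virtual keys, provided the active-key entitlement is not exceeded."""
    allowed = account["entitlements"]["virtual_key_allocation_threshold"]
    if len(account["active_virtual_keys"]) + requested > allowed:
        raise PermissionError("request exceeds the virtual key allocation threshold")

    new_keys = []
    for _ in range(requested):
        nonce = secrets.token_bytes(16)                      # per-key randomness
        digest = hmac.new(account["master_key"], nonce, hashlib.sha256).hexdigest()
        key = f"VK-{digest[:32]}"
        account["active_virtual_keys"].add(key)
        new_keys.append(key)
    return new_keys

account = {
    "master_key": secrets.token_bytes(32),                   # never exposed to the subscriber
    "active_virtual_keys": set(),
    "entitlements": {"virtual_key_allocation_threshold": 5},
}
print(issue_virtual_keys(account, 2))
```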
  • the consumption quota monitoring module may be accessed via the second API (or cybersecurity portal) to enable the subscriber to obtain metrics associated with the current state of the subscription (e.g., active status, number of submissions for a particular submission type (or in total) conducted during the subscription period, number of submissions remaining for the subscription period, etc.). Additionally, the consumption quota monitoring module may be accessed by the credential management module in order to confirm an incoming submission does not exceed the data sample submission threshold. This reliance may occur if the credential management module is permitted access to the credential information (e.g., master key, virtual keys, etc.) of the subscriber account data.
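  • The sketch below mirrors the consumption quota monitoring described above, assuming hypothetical counters: it reports per-subscriber metrics and confirms an incoming submission stays within the data sample submission threshold.
```python
# Minimal sketch, assuming hypothetical counters; it mirrors the idea of reporting
# subscription metrics and confirming a new submission stays within the threshold.
from dataclasses import dataclass

@dataclass
class ConsumptionQuota:
    submission_threshold: int        # data sample submissions allowed per period
    submissions_used: int = 0

    def metrics(self) -> dict:
        """Metrics a subscriber could retrieve via the second API or portal."""
        return {
            "submissions_used": self.submissions_used,
            "submissions_remaining": self.submission_threshold - self.submissions_used,
        }

    def record_submission(self) -> None:
        """Called before analytics to confirm the threshold is not exceeded."""
        if self.submissions_used >= self.submission_threshold:
            raise RuntimeError("data sample submission threshold exceeded")
        self.submissions_used += 1

quota = ConsumptionQuota(submission_threshold=1000)
quota.record_submission()
print(quota.metrics())   # {'submissions_used': 1, 'submissions_remaining': 999}
```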
  • the configuration management module is configured to enable a subscriber, via the third API (or cybersecurity portal), to specify parameters that control operability of the cyberthreat analytics. For instance, prior to controlling such operability, the credential management module, upon receipt of a parameter adjustment submission, may extract a virtual key included as part of the submission to authenticate the subscriber and verify that the subscriber is authorized to perform this task (parameter adjustment).
  • contents of the parameter adjustment submission are routed to the configuration management module, which may alter stored parameters that may influence workflow, such as (i) operations of an analytic engine selection module deployed within the object evaluation logic of the cybersecurity system for selection of analytic engines, (ii) operations of the analytic engines, and/or (iii) operations of the correlation module, and/or (iv) operations of the post-processing module.
  • the system health assessment module and the auto-scaling module are in communications with the object evaluation logic.
  • the system health assessment module is configured to communicate with analytic engines, which are computing instances hosted by the cloud network that are configured to conduct cyberthreat analytics on the submitted objects. Based on these communications along with additional abilities to monitor queue storage levels and other public cloud infrastructure resources, the system health assessment module may be configured to ascertain the health of cloud-based processing resources (e.g., operating state, capacity level, etc.) to surmise the overall health of the cybersecurity system.
  • the auto-scaling module is configured to (i) add additional analytic engines, as permitted by the subscription, in response to a prescribed increase in queued data samples awaiting cyberthreat analytics and/or (ii) terminate one or more analytic engines in response to a decrease in queued data samples awaiting cyberthreat analytics.
  • the increase and/or decrease may be measured based on the number of objects, rate of change in the increase or decrease, etc.
  • the auto-scaling module may be configured to monitor available queue capacity, where a decrease in available queue capacity denotes increased data samples awaiting analytics and potential addition of analytic engines and an increase in available queue capacity denotes decreased data samples awaiting analytics and potential termination of analytic engine(s).
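  • A sketch of the scale-up/scale-down decision described above follows; the specific thresholds and the queue-depth metric are illustrative assumptions rather than prescribed values.
```python
# Sketch of the scale-up/scale-down decision; thresholds and the queue-depth
# metric are illustrative assumptions, not prescribed values.
def autoscale(queued_samples: int, active_engines: int,
              max_engines: int, scale_up_at: int = 100, scale_down_at: int = 10) -> int:
    """Return the adjusted number of analytic engines for the current queue depth."""
    if queued_samples > scale_up_at and active_engines < max_engines:
        return active_engines + 1      # add an analytic engine, as permitted by subscription
    if queued_samples < scale_down_at and active_engines > 1:
        return active_engines - 1      # terminate an analytic engine
    return active_engines

engines = 4
for depth in (250, 180, 40, 5, 3):     # simulated queue depths over time
    engines = autoscale(depth, engines, max_engines=8)
    print(depth, "->", engines, "engines")
```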
  • the subscription billing module is configured to confirm that the subscription parameters have not been exceeded (to denote additional billing) for a time-based, flat-fee subscription (e.g., yearly, monthly, weekly or daily). Alternatively, for a pay-as-you-go subscription, the subscription billing module may be configured to maintain an account of the number of submissions (e.g., data sample submissions) over a prescribed period of time and generate a request for payment from the SaaS subscriber accordingly. Additionally, the subscription billing module may be operable to identify other paid cloud-based services utilized by the SaaS-subscriber for inclusion as part of the payment request. According to one embodiment, the subscription billing module may access the subscriber account data for the requisite information.
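  • A hedged sketch of the two billing behaviors described above is shown below; the rates, periods, and overage rule are hypothetical and only illustrate the bookkeeping involved.
```python
# Hedged sketch of the two billing behaviors; rates, periods, and the overage rule
# are hypothetical and only illustrate the bookkeeping involved.
def flat_fee_invoice(base_fee: float, submissions: int, threshold: int,
                     overage_rate: float) -> float:
    """Time-based flat fee; additional billing only if the entitlement is exceeded."""
    overage = max(0, submissions - threshold)
    return base_fee + overage * overage_rate

def pay_as_you_go_invoice(submissions: int, per_submission_rate: float,
                          other_services: float = 0.0) -> float:
    """Charge per data sample submission, plus any other paid cloud-based services."""
    return submissions * per_submission_rate + other_services

print(flat_fee_invoice(base_fee=500.0, submissions=1200, threshold=1000, overage_rate=0.40))
print(pay_as_you_go_invoice(submissions=1200, per_submission_rate=0.75, other_services=25.0))
```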
  • the object evaluation logic may be separated into multiple evaluation stages, where each evaluation stage is provided access to a queue that features a plurality of queue elements each storing content (object, context information, etc.) associated with a submitted data sample.
  • each “stage” queue is provided access to (or receives) content associated with a data sample evaluated in the preceding evaluation stage.
  • the object evaluation logic includes a preliminary analytic module (within a first evaluation stage), an analytic engine selection module (within a second evaluation stage), a cyberthreat analytic module (within a third evaluation stage), a correlation module (within a fourth evaluation stage) and a post-processing module (within a fifth evaluation stage).
  • the preliminary analytic module may be configured to conduct one or more preliminary analyses on content within the data sample, which includes the object and/or the context information accompanying the object, in comparison with content associated with accessible cybersecurity intelligence.
  • the cybersecurity intelligence may include context information associated with known malicious objects and known benign objects gathered from prior analytics conducted by the cybersecurity system as well as cybersecurity intelligence from sources external to the cybersecurity system.
  • the analytic engine selection module is provided access to the object and/or the context information when additional cyberthreat analytics are necessary. Otherwise, responsive to the preliminary analyses determining that the object is malicious or benign, the preliminary analytic module may bypass further cyberthreat analyses of the object.
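  • As an illustrative first-stage check, the sketch below assumes the cybersecurity intelligence is keyed by object hash; the real system may correlate richer context information, so treat this only as a sketch of the bypass behavior described above.
```python
# Illustrative first-stage check, assuming the intelligence is keyed by object hash;
# the real system may correlate richer context, so treat this only as a sketch.
import hashlib

KNOWN_MALICIOUS = {hashlib.sha256(b"evil payload").hexdigest()}
KNOWN_BENIGN = {hashlib.sha256(b"harmless report").hexdigest()}

def preliminary_analysis(object_bytes: bytes) -> str:
    """Return a verdict if intelligence already covers the object, else defer."""
    digest = hashlib.sha256(object_bytes).hexdigest()
    if digest in KNOWN_MALICIOUS:
        return "malicious"            # bypass further analytics
    if digest in KNOWN_BENIGN:
        return "benign"               # bypass further analytics
    return "unknown"                  # forward to the analytic engine selection module

print(preliminary_analysis(b"evil payload"))     # malicious
print(preliminary_analysis(b"never seen this"))  # unknown
```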
  • the analytic engine selection module is configured to determine one or more analytic engines to conduct cyberthreat analytics of the object. This determination may be conducted, at least in part, on the context information accompanying the object.
  • the context information may be categorized as submission context, entitlement context, and/or object context as described below.
  • the analytic engine selection module may select the type of analytic engines (e.g., static analytic engine(s), dynamic analytic engine(s), machine-learning engine(s), and/or emulation analytic engine(s)) based on the context information.
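  • A sketch of context-driven engine selection follows; the mapping rules (extensions, entitlement field names) are invented for illustration and are not the selection logic claimed by the patent.
```python
# Sketch of context-driven engine selection; the mapping rules are invented for
# illustration and are not the selection logic claimed by the patent.
def select_engines(object_context: dict, entitlement_context: dict) -> list[str]:
    engines = ["static"]                               # static analysis as a baseline
    if object_context.get("extension") in {".exe", ".dll", ".docm", ".pdf"}:
        engines.append("dynamic")                      # executable/active content warrants detonation
    if object_context.get("extension") in {".js", ".vbs"}:
        engines.append("emulation")                    # scripts may be emulated without execution
    if "machine_learning" in entitlement_context.get("analytics_supported", []):
        engines.append("machine_learning")             # only if the subscription tier permits it
    return engines

print(select_engines({"extension": ".pdf"},
                     {"analytics_supported": ["static", "dynamic", "machine_learning"]}))
# ['static', 'dynamic', 'machine_learning']
```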
  • the cyberthreat analytic module includes one or more analytic engines that are directed to different analysis approaches in analyzing an object for malware (and whether it constitutes a cyberthreat).
  • These analytic engines may include any one or combination of the following: (i) static analytic engines; (ii) dynamic analytic engines; (iii) machine learning analytic engines; and/or (iv) emulation analytic engines.
  • the static analytic engines conduct an analysis on the content of the object and generate results including observed features represented by characteristics of the object and context information associated with the object.
  • the context information provides additional information associated with the features (e.g., a specific characteristic deemed malicious, the location of that characteristic within the object, or the like).
  • the dynamic analytic engines conduct an execution of the object and each generates results including features represented by observed behaviors of the dynamic analytic engine along with context information accompanying the observed features (e.g., software profile, process or thread being executed that generates the malicious features, source object type, etc.).
  • machine learning analytic engines submit the object as input into a trained machine-learning model, each generating results including features represented by insights derived from the machine-learning model and accompanying context information, which may be similar to the type of context information provided with dynamic analytic results, perhaps along with additional contextual observations learned from objects similar to the object.
  • emulation analytic engines conduct reproduction of operations representing the execution of the object, without such execution, which generates results including features represented by behaviors monitored during emulation and its accompanying context information.
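  • The sketch below assumes a common interface shared by the engine types just described, with each engine returning observed features plus accompanying context; only the static and dynamic types are stubbed for brevity, and the internals are placeholders rather than the patented engines.
```python
# Minimal common interface for the analytic engine types, assuming each returns
# "features" plus accompanying context; the internals here are placeholders.
from abc import ABC, abstractmethod

class AnalyticEngine(ABC):
    @abstractmethod
    def analyze(self, obj: bytes, context: dict) -> dict:
        """Return observed features and the context accompanying those features."""

class StaticAnalyticEngine(AnalyticEngine):
    def analyze(self, obj, context):
        # Inspect content without executing it (characteristics of the object).
        features = ["embedded_macro"] if b"AutoOpen" in obj else []
        return {"features": features, "context": {"source": "static", **context}}

class DynamicAnalyticEngine(AnalyticEngine):
    def analyze(self, obj, context):
        # A real engine would execute the object in an instrumented environment
        # and report observed behaviors; a placeholder behavior is returned here.
        return {"features": ["spawned_process"], "context": {"source": "dynamic", **context}}

for engine in (StaticAnalyticEngine(), DynamicAnalyticEngine()):
    print(engine.analyze(b"...AutoOpen...", {"object_type": "document"}))
```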
  • each analytic engine may feature an analytic engine infrastructure, which includes a health assessment module, a configuration module, an update module, a task processing module and a result processing module.
  • the health assessment module is configured to determine the operational health of the analytic engine, which may be represented, at least in part, by its utilization level.
  • the configuration module controls the re-configuration of certain functionality of the analytic engine.
  • the update module is configured to receive and control installation of rule changes affecting operability of the task processing module and the result processing module and changes to software profiles (or guest images) to re-configure operability of the analytic engine.
  • the task processing module is further configured to monitor queue elements of the queue that maintain the objects (or data samples) awaiting cyberthreat analytics (i.e., third stage queue) and perhaps queues for the first and/or second evaluation stages to estimate future processing capacity needed.
  • the result processing module is responsible for queue management by removing a pending object (or data sample) from the third stage queue and moving the data sample for storage in a fourth stage queue accessible to the correlation module.
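  • The handoff between the third and fourth stage queues described above can be sketched as follows, using in-memory queues as stand-ins for the cloud-hosted stage queues; the function and field names are illustrative only.
```python
# Sketch of the queue handoff between evaluation stages, using in-memory queues as
# stand-ins for the cloud-hosted stage queues; names are illustrative only.
from collections import deque

third_stage_queue = deque()       # data samples awaiting cyberthreat analytics
fourth_stage_queue = deque()      # analytic results awaiting correlation

def task_processing(engine_analyze) -> None:
    """Pull the next pending data sample and run the engine's analytics on it."""
    if third_stage_queue:
        sample = third_stage_queue[0]                 # peek; removal happens after results
        sample["results"] = engine_analyze(sample["object"], sample["context"])
        result_processing(sample)

def result_processing(sample: dict) -> None:
    """Remove the completed sample from the third stage queue and queue it for correlation."""
    third_stage_queue.popleft()
    fourth_stage_queue.append(sample)

third_stage_queue.append({"object": b"sample bytes", "context": {"origin": "email"}})
task_processing(lambda obj, ctx: {"features": ["example_feature"]})
print(len(third_stage_queue), len(fourth_stage_queue))   # 0 1
```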
  • a correlation module is configured to classify the object included as part of the data sample as malicious, benign, unknown or suspicious based on the above-identified features collected from the analytic results produced by the analytic engines and their accompanying context information. This classification of the object (sometimes referred to as the “verdict”) is provided to the post-processing module that is part of the fifth evaluation stage.
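  • A hedged sketch of feature correlation is shown below; the overlap scoring and the example feature names are assumptions chosen only to show how observed features might be weighed against features of known malicious and known benign objects to reach a verdict.
```python
# Hedged sketch of feature correlation; the overlap scoring is an assumption chosen
# only to show how observed features might be weighed against known-object features.
KNOWN_MALICIOUS_FEATURES = {"spawned_process", "registry_persistence", "c2_beacon"}
KNOWN_BENIGN_FEATURES = {"signed_binary", "known_publisher"}

def correlate(observed: set[str]) -> str:
    """Classify the object as malicious, suspicious, benign, or unknown."""
    malicious_overlap = len(observed & KNOWN_MALICIOUS_FEATURES)
    benign_overlap = len(observed & KNOWN_BENIGN_FEATURES)
    if malicious_overlap >= 2:
        return "malicious"
    if malicious_overlap == 1:
        return "suspicious"
    if benign_overlap >= 1:
        return "benign"
    return "unknown"

print(correlate({"spawned_process", "c2_beacon"}))   # malicious
print(correlate({"signed_binary"}))                  # benign
```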
  • the post-processing module may initiate actions to remediate a detected cyberthreat (object). Additionally, or in the alternative, the post-processing module may add certain context information associated with the object to the cybersecurity intelligence utilized by the preliminary analytic module in accordance with a prescribed retention policy maintained by the post-processing module.
  • the reporting logic is configured to generate a displayable report including the comprehensive results of the cyberthreat analytics (e.g., verdict, observed features and any corresponding meta-information representing the results associated with the cyberthreat analytics, context information associated with the observed features that identify the analyses conducted to produce the observed features, circumstances surrounding the features when observed, etc.).
  • the displayable report may be provided as an interactive screen or series of screens that allow a security administrator (corresponding to a representative of the SaaS-subscriber) to view results of data sample submissions in the aggregate and “drill-down” as to specifics associated with one of the objects uploaded to the cybersecurity system within a data sample submission.
  • the reporting logic may rely on the Subscription ID or a virtual key, which may be part of the data sample submitted to the object evaluation logic, to identify the subscriber and determine a preferred method for conveyance of the alert (and set access controls to preclude access to contents of the alert by other SaaS-subscribers). Additionally, or in the alternative, the reporting logic may generate an alert based on the comprehensive results of the cyberthreat analytics.
  • the alert may be in the form of a message (e.g., “threat warning” text or other electronic message).
  • logic is representative of hardware, firmware, and/or software that is configured to perform one or more functions.
  • the logic may include circuitry having data processing and/or storage functionality. Examples of such circuitry may include, but are not limited or restricted to a hardware processor, a programmable gate array, a microcontroller, an application specific integrated circuit, wireless receiver, transmitter and/or transceiver circuitry, semiconductor memory, or combinatorial logic.
  • the logic may be software in the form of one or more software modules, which may be configured to operate as its counterpart circuitry.
  • a software module may be a software instance that operates as a processor, namely a virtual processor whose underlying operations are based on a physical processor such as an EC2 instance within the Amazon® AWS infrastructure for example.
  • a software module may include an executable application, a daemon application, an application programming interface (API), a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, a shared library/dynamic load library, or even one or more instructions.
  • the software module(s) may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical or other form of propagated signals such as carrier waves, infrared signals, or digital signals).
  • suitable non-transitory storage medium may include, but are not limited or restricted to a programmable circuit; a semiconductor memory; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or a portable memory device.
  • malware is directed to software that produces an undesirable behavior upon execution, where the behavior is deemed to be “undesirable” based on customer-specific rules, manufacturer-based rules, or any other type of rules formulated by public opinion or a particular governmental or commercial entity.
  • This undesired behavior may include a communication-based anomaly or an execution-based anomaly that (1) alters the functionality of an electronic device executing that software in a malicious manner; (2) alters the functionality of an electronic device executing that software without any malicious intent; and/or (3) provides an unwanted functionality which is generally acceptable in other contexts.
  • network device should be generally construed as a physical or virtualized device with data processing capability and/or a capability of connecting to a network, such as a public cloud network (e.g., Amazon Web Service (AWS®), Microsoft Azure®, Google Cloud®, etc.), a private cloud network, or any other network type.
  • the network devices may be used by a security operations center (SOC), a Security Information and Event Management system (SIEM), a network administrator, a forensic analyst, or a cybersecurity system for another security provider for communication with an interface (e.g., cybersecurity portal) to access a SaaS-operating cybersecurity system.
  • Examples of a network device may include, but are not limited or restricted to, the following: a server, a router or other intermediary communication device, an endpoint (e.g., a laptop, a smartphone, a tablet, a desktop computer, a netbook, etc.) or virtualized devices being software with the functionality of the network device.
  • the network device may also be deployed as part of any physical or virtualized device communicatively coupled via a device interface (e.g., API(s)) for gaining access to the SaaS-operating cybersecurity system.
  • the term “submission” refers to a type of message (in a prescribed, structured data format) that is intended to result in a particular task being performed.
  • the tasks may include object-based analytics (data sample submissions), return of requested information (consumption quota submissions), parameter updates that may influence operations associated with the cyberthreat analytics (parameter adjustment submissions), or the like.
  • the submission may include a data sample, namely an organized collection of data including one or more objects and context information at least pertaining to the object(s).
  • An “object” generally refers to a collection of information (e.g., file, document, URL, web content, email message, etc.) that may be extracted from the data sample for cyberthreat analytics.
  • cybersecurity system may be deployed to operate as a subscription-based Security-as-a-Service (SaaS) that utilizes public cloud infrastructure resources, such as virtual computing, virtual data stores, virtual (cloud) database resources for example, provided by an Infrastructure-as-a-Service (IaaS) cloud platform.
  • SaaS Security-as-a-Service
  • the cybersecurity system may be configured to operate as a multi-tenant service; namely a service made available to tenants (also referred to as “subscribers”) on demand.
  • the IaaS cloud platform may be configured to operate as a multi-tenant service to which a cybersecurity vendor offering the cybersecurity system corresponds to an IaaS-subscriber. Therefore, the cybersecurity system may leverage resources offered by the IaaS cloud platform to support operations conducted by SaaS-subscribers.
  • the terms “benign,” “suspicious” and “malicious” are used to identify different likelihoods of an object being associated with a cyberattack (i.e., constituting a cyberthreat).
  • An object may be classified as “benign” upon determining that the likelihood of the object being associated with a cyberattack is zero or falls below a first threshold (i.e. falls within a first likelihood range).
  • the object may be classified as “malicious” upon determining that the likelihood of the object being associated with a cyberattack is greater than a second threshold extending from a substantial likelihood to absolute certainty (i.e. falls within a third likelihood range).
  • the object may be classified as “suspicious” upon determining that the likelihood of the object being associated with a cyberattack falls between the first threshold and the second threshold (i.e. falls within a second likelihood range).
  • Different embodiments may use different measures of the likelihood of maliciousness and non-maliciousness, and these measures may be referenced differently. Therefore, this terminology is merely used to identify different levels of maliciousness.
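  • A direct sketch of the two-threshold scheme described above follows; the numeric thresholds are placeholders, since the document defines only the relative likelihood ranges.
```python
# Direct sketch of the two-threshold scheme; the numeric thresholds are placeholders,
# since the document defines only the relative likelihood ranges.
FIRST_THRESHOLD = 0.2    # below this: benign (first likelihood range)
SECOND_THRESHOLD = 0.8   # above this: malicious (third likelihood range)

def classify(likelihood: float) -> str:
    if likelihood < FIRST_THRESHOLD:
        return "benign"
    if likelihood > SECOND_THRESHOLD:
        return "malicious"
    return "suspicious"   # second likelihood range, between the two thresholds

for p in (0.05, 0.5, 0.95):
    print(p, classify(p))
```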
  • the terms “compare,” “comparing,” “comparison,” or other tenses thereof generally mean determining if a match (e.g., identical or a prescribed level of correlation) is achieved between two items under analysis (e.g., context information, portions of objects, etc.) or representations of the two items (e.g., hash values, checksums, etc.).
  • transmission medium generally refers to a physical or logical communication link (or path) between two or more network devices.
  • As a physical communication path, wired and/or wireless interconnects in the form of electrical wiring, optical fiber, cable, bus trace, or a wireless channel using infrared or radio frequency (RF) may be used.
  • Referring to FIG. 1 A, a block diagram of an exemplary embodiment of a cybersecurity system 100 operating as a service supported by resources hosted by a cloud platform 110 (e.g., infrastructure provided by Microsoft Azure®, Amazon Web Services®, or Google Cloud®) is shown.
  • the cybersecurity system 100 operates as a multi-tenant, Security-as-a-Service (SaaS), which is accessible by a plurality of tenants 120 1 - 120 N (N≥1) on demand (hereinafter, “subscribers” 120 1 - 120 N ) over a transmission medium 130 .
  • SaaS Security-as-a-Service
  • Examples of subscribers 120 1 - 120 N may include enterprises (companies, partnerships, co-ops, governmental agencies or other agencies, etc.), individuals, or even other cybersecurity vendors that intend to utilize the cybersecurity system 100 to conduct additional analytics on objects submitted to the cybersecurity system 100 in order to obtain a verdict (e.g., malicious or non-malicious determination) for that object or verify a verdict ascertained by another cybersecurity vendor.
  • the SaaS-operating cybersecurity system 100 may operate in cooperation with the multi-tenant, cloud platform 110 , which corresponds to an Infrastructure-as-a-Service (IaaS) cloud platform 110 .
  • multiple subscribers 120 1 - 120 N may be provided controlled access to cybersecurity services offered by the SaaS-operating cybersecurity system 100 while multiple users (e.g., two or more IaaS subscribers, including the SaaS-operating cybersecurity system 100 as shown and other IaaS subscriber 102 ), may be provided controlled access to shared resources hosted by the IaaS cloud platform 110 (hereinafter, “public cloud infrastructure resources 150 ”).
  • the SaaS 100 may deploy a vendor-specific proprietary software stack to run on the resources 150 (e.g., compute and storage resources) provided by the IaaS cloud platform 110 .
  • the SaaS-operating cybersecurity system 100 is controlled by a different entity than the IaaS cloud provider.
  • the SaaS-operating cybersecurity system 100 may be configured to charge usage of the SaaS in accordance with different parameters (and a different pricing scheme) than offered by the IaaS (public cloud).
  • the SaaS-operating cybersecurity system 100 may be configured with subscription tier pricing based on the number of submissions with objects provided to undergo cyberthreat analytics by the SaaS-operating cybersecurity system 100 (e.g., number of objects uploaded via a portal or other type of interface) or the number of objects processed (e.g., to account for objects included as part of one or more submissions and additional objects processed that were produced during the processing of another object).
  • This SaaS-IaaS deployment enables both the customer and cybersecurity vendor to avoid significant capital outlays in buying and operating physical servers and other datacenter infrastructure. Rather, the cybersecurity vendor incurs the costs associated with the actual use of certain public cloud infrastructure resources 150 in the aggregate, such as IaaS-based storage amounts or compute time for analytic engines formed from IaaS-based computing instances. The subscribers incur the costs associated with their actual number of submissions (e.g., data sample submissions described below) input into the SaaS-operating cybersecurity system 100 .
  • Referring to FIG. 1 B, a block diagram of an exemplary embodiment of the SaaS-operating cybersecurity system 100 leveraging the public cloud infrastructure resources 150 provided by the IaaS cloud platform (referred to as “public cloud”) 110 is shown.
  • the cybersecurity system 100 is configured to operate as a multi-tenant, subscription-based SaaS; namely, a cloud-based subscription service that utilizes storage and compute services hosted by the public cloud 110 and is available to the plurality of subscribers 120 1 - 120 N over the transmission medium 130 including a public network (e.g., Internet).
  • each subscriber may include one or more network devices 125 , where each of the network devices 125 may be permitted access to the cybersecurity system 100 if credentials submitted by that network device 125 are authenticated.
  • the credential authentication may be conducted in accordance with a credential (key) authentication scheme in which a (virtual) key generated by the cybersecurity system 100 and provided to a subscriber (e.g., subscriber 120 N ) is used to gain access to the cybersecurity system 100 .
  • the network devices 125 may be used by different sources, including but not limited or restricted to a security operations center (SOC), a Security Information and Event Management system (SIEM), a network administrator, a forensic analyst, a different cybersecurity vendor, or any other source seeking cybersecurity services offered by the cybersecurity system 100 .
  • the cybersecurity system 100 is logic that leverages public cloud infrastructure resources 150 .
  • the logic associated with the cybersecurity system 100 may be stored within cloud-based storage resources (e.g., virtual data stores corresponding to a physical, non-transitory storage medium provided by the public cloud 110 such as Amazon® S3 storage instances, Amazon® Glacier or other AWS Storage Services).
  • This stored logic is executed, at least in part, by cloud processing resources (e.g., one or more computing instances operating as virtual processors whose underlying operations are based on physical processors, such as EC2 instances within the Amazon® AWS infrastructure).
  • the cybersecurity system 100 may request and activate additional cloud processing resources 152 and cloud storage resources 154 .
  • the cybersecurity system 100 is configured to receive and respond to messages 140 requesting one or more tasks to be conducted by the cybersecurity system 100 (hereinafter referred to as “submissions”).
  • submissions 140 may include a data sample 142 , where the data sample submission 140 requests the cybersecurity system 100 to conduct analytics on an object 144 included as part of the data sample 142 .
  • Context information 146 pertaining to the object 144 may be included as part of the data sample 142 or part of the submission 140 .
  • the context information 146 may include different context types such as context information 147 associated with the data sample submission 140 (submission context 147 ), context information 148 associated with entitlements associated with a subscription to which the submitting source belongs (entitlement context 148 ), and/or context information 149 associated with the object 144 (object context 149 ).
  • the context information 146 is not static for the object 144 at the time of submission. Rather, the context information 146 may be modified (augmented) based on operations within the cybersecurity system 100 , especially entitlement context 148 obtained from a subscriber's account.
  • the context information 146 may be used to identify the subscriber 120 1 responsible for submitting the data sample 142 .
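  • As an illustrative data structure for a data sample submission and its three context types, the sketch below uses hypothetical field names; the reference numerals in the comments follow FIG. 1 B, but the structure itself is an assumption for the example.
```python
# Illustrative data structures for a data sample and its three context types;
# field names are assumptions, and reference numerals in comments follow FIG. 1B.
from dataclasses import dataclass, field

@dataclass
class ContextInformation:                      # context information 146
    submission_context: dict = field(default_factory=dict)   # e.g., time of input, origin
    entitlement_context: dict = field(default_factory=dict)  # e.g., permitted analytics
    object_context: dict = field(default_factory=dict)       # e.g., extension type

@dataclass
class DataSample:                              # data sample 142
    obj: bytes                                 # object 144
    context: ContextInformation                # may be augmented during processing

sample = DataSample(
    obj=b"%PDF-1.7 ...",
    context=ContextInformation(
        submission_context={"submitted_at": "2021-01-04T10:00:00Z", "origin": "email"},
        object_context={"extension": ".pdf"},
    ),
)
# Entitlement context may be added later from the subscriber's account data.
sample.context.entitlement_context = {"analytics_supported": ["static", "dynamic"]}
print(sample.context.object_context["extension"])
```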
  • the cybersecurity system 100 may leverage the public cloud infrastructure resources 150 hosted by the public cloud 110 .
  • the public cloud infrastructure resources 150 may include, but are not limited or restricted to cloud processing resources 152 (e.g., computing instances, etc.) and cloud storage resources 154 (e.g., virtual data stores operating as non-volatile or volatile storage such as a log, queues, etc.), which may be allocated for use among the subscribers 120 1 - 120 N .
  • the cybersecurity system 100 is able to immediately “scale up” (add additional analytic engines, as permitted by the subscription) or “scale down” (terminate one or more analytic engines) its cloud resource usage when such usage exceeds or falls below certain monitored thresholds.
  • the cybersecurity system 100 may monitor capacity levels of virtual data stores operating as queues that provide temporary storage at certain stages during analytics of the object 144 (hereafter, “queue capacity”).
  • the queue capacity may be determined through any number of metrics, such as the number of queued objects awaiting analytics, usage percentages of the queues, computed queue wait time per data sample, or the like.
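As an illustration of how these queue-capacity metrics might be computed, the minimal Python sketch below derives the three example measures named above (queued-object count, usage percentage, estimated wait time per data sample); the class and field names are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class AnalyticsQueue:
    """Hypothetical stand-in for one of the distributed queues."""
    max_elements: int
    elements: deque = field(default_factory=deque)

    def queued_objects(self) -> int:
        # Metric 1: number of queued objects awaiting analytics.
        return len(self.elements)

    def usage_percentage(self) -> float:
        # Metric 2: how full the queue currently is.
        return 100.0 * len(self.elements) / self.max_elements

    def estimated_wait_seconds(self, avg_analysis_seconds: float,
                               active_engines: int) -> float:
        # Metric 3: rough per-sample wait time, assuming each analytic
        # engine drains one data sample at a time.
        if active_engines <= 0:
            return float("inf")
        return len(self.elements) * avg_analysis_seconds / active_engines

# Example: a queue holding 120 data samples serviced by 4 analytic engines.
q = AnalyticsQueue(max_elements=500, elements=deque(range(120)))
print(q.queued_objects(), q.usage_percentage(), q.estimated_wait_seconds(30.0, 4))
```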
  • the cybersecurity system 100 may scale up its usage of any of the public cloud infrastructure resources 150 , such as cloud processing resources 152 customized to operate as analytic engines as described below, upon the queue capacity exceeding a first threshold, perhaps for a prolonged period of time to avoid throttling.
  • the cybersecurity system 100 may scale down its usage of the cloud processing resources 152 upon the queue capacity falling below a second threshold, perhaps for the prolonged period of time as well.
  • the cybersecurity system 100 may utilize the public cloud infrastructure resources 150 for supporting administrative tasks.
  • the cybersecurity system 100 may be allocated cloud storage resources 154 for maintaining data for use in monitoring compliance by the subscribers 120 1 - 120 N with their subscription entitlements.
  • the subscription entitlements may be represented as permissions such as (i) a maximum number of submissions over a prescribed period of time (e.g., subscription time period, yearly, monthly, weekly, daily, during certain hours, etc.), (ii) a maximum number of active virtual keys providing authorized access to the cybersecurity system 100 , (iii) additional capabilities as provided by enhancements made available based on the selected subscriber tier, or the like.
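These entitlements can be pictured as a simple per-subscription record; the sketch below is a hypothetical representation (field names are illustrative, not from the patent) of a maximum submission count, a maximum number of active virtual keys, and tier-specific enhancements, together with the checks they enable.

```python
from dataclasses import dataclass, field

@dataclass
class SubscriptionEntitlements:
    # Maximum number of submissions over the prescribed period (e.g., per year).
    max_submissions_per_period: int
    # Maximum number of active virtual keys granting access to the system.
    max_active_virtual_keys: int
    # Tier-dependent enhancements (names are illustrative).
    enabled_enhancements: set[str] = field(default_factory=set)

    def allows_submission(self, submissions_so_far: int) -> bool:
        return submissions_so_far < self.max_submissions_per_period

    def allows_new_virtual_key(self, active_keys: int) -> bool:
        return active_keys < self.max_active_virtual_keys

# Example: a mid-tier subscription.
tier = SubscriptionEntitlements(1_000_000, 25, {"ioc_generation"})
print(tier.allows_submission(999_999), tier.allows_new_virtual_key(25))
```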
  • the cybersecurity system 100 supports bidirectional communications with the subscribers 120 1 - 120 N in which one or more responses 160 to the submissions 140 are returned to the subscribers 120 1 - 120 N .
  • the response 160 may correspond to a displayable report 160 including comprehensive results of cyberthreat analytics conducted on the object 144 and its accompanying context information 146 .
  • the comprehensive results may include a verdict, observed features and any corresponding meta-information representing the results associated with the cyberthreat analytics, and context information associated with the observed features (e.g., information that identifies the analyses conducted to produce the observed features, circumstances under which the features occurred, etc.).
  • the response 160 may include one or more alert messages (hereinafter, “alert message(s)”).
  • the alert message(s) may include a portion of the comprehensive results of cyberthreat analytics, such as verdict and name of the object 144 .
  • Referring to FIG. 2 , a block diagram of an exemplary embodiment of logic forming the cybersecurity system 100 of FIG. 1 B is shown, wherein the logic relies upon the public cloud infrastructure resources 150 and monitors accesses to the cybersecurity system 100 for subscription compliance, billing and reporting.
  • the cybersecurity system 100 features interface logic 200 , administrative control logic 220 , object evaluation logic 270 , and reporting logic 290 .
  • the interface logic 200 enables communications with different modules forming the administrative control logic 220 .
  • Upon validation of the submission 140 by the interface logic 200 , authentication of a subscriber (e.g., subscriber 120 N ) providing the submission 140 , and verification that the subscriber 120 N is authorized to perform the task or tasks associated with the submission 140 , the task(s) associated with the submission 140 is(are) performed.
  • the interface logic 200 includes a cybersecurity portal 205 that allows any user (potential subscriber) to register and establish a subscription with the cybersecurity system 100 .
  • the user, referred to as the “subscriber” once registered, may be provided with additional accessibility to the cybersecurity system 100 via the device interface 210 corresponding to logic supporting one or more APIs, where different combinations of APIs may be provided depending on the terms of the subscription.
  • logic associated with an API of the device interface 210 may be configured to await the validation of the data sample submission 140 , authentication of the subscriber 120 N submitting the data sample submission 140 , and verification that the subscriber 120 N is authorized to submit at least the data sample 142 for cyberthreat analytics before routing the data sample 142 to the object evaluation logic 270 .
  • the device interface 210 supports automated network device 125 to cybersecurity system 100 communications.
  • the cybersecurity portal 205 supports all submission types.
  • the device interface 210 , when deployed, may include a first API 212 , a second API 214 and/or a third API 216 .
  • the device interface 210 may include the first API 212 that provides an interface for the submission of the object 144 for cyberthreat analytics (in the form of the data sample submission 140 featuring the data sample 142 , which may include the object 144 and/or its context information 146 ).
  • the administrative control logic 220 is configured to validate the data sample submission 140 , authenticate the subscriber 120 N submitting the data sample 142 , verify that the submission of the data sample 142 is in compliance with parameters associated with the subscriber's subscription, and thereafter, provide at least a portion of the data sample 142 (e.g., object, context information) to the object evaluation logic 270 for analysis.
  • the second API 214 provides an interface for submissions directed to subscription management, such as ascertaining SaaS-based metrics associated with a current state of a subscription. These SaaS metrics may include object submission quota (e.g., number of objects submitted during the subscription period, number of objects available for submission during the remainder of the subscription period, etc.).
  • the third API 216 provides an interface for the submission of parameters and other information to a configuration management module 250 within the administrative control logic 220 , enabling the subscriber 120 N , via the device interface 210 , to specify parameters that control operability of the cyberthreat analytics.
  • the cybersecurity portal 205 features logic, namely the first logic 206 , second logic 207 and third logic 208 of the cybersecurity portal 205 , that correspond in operation to the first API 212 , the second API 214 and the third API 216 , respectively. These logic units support the handling of the submissions through the cybersecurity portal 205 in a manner similar to the APIs of the device interface 210 , as described above.
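To make the division of labor among these interfaces concrete, the following minimal Python sketch routes the three submission types described above (data sample analysis, quota query, parameter adjustment) to hypothetical handlers; the function and field names are illustrative and not taken from the patent.

```python
from typing import Any, Callable

def handle_data_sample(submission: dict[str, Any]) -> str:
    # First API 212 / first logic 206: route the data sample toward
    # validation, authentication, and (if permitted) object evaluation.
    return f"queued object {submission.get('object_name')} for cyberthreat analytics"

def handle_quota_request(submission: dict[str, Any]) -> str:
    # Second API 214 / second logic 207: report SaaS metrics for the subscription.
    return "returning current submission quota and usage"

def handle_parameter_adjustment(submission: dict[str, Any]) -> str:
    # Third API 216 / third logic 208: forward parameters to configuration management.
    return "forwarding parameters to the configuration management module"

ROUTES: dict[str, Callable[[dict[str, Any]], str]] = {
    "data_sample": handle_data_sample,
    "quota": handle_quota_request,
    "parameter_adjustment": handle_parameter_adjustment,
}

def route_submission(submission: dict[str, Any]) -> str:
    handler = ROUTES.get(submission.get("type", ""))
    if handler is None:
        raise ValueError("unsupported submission type")
    return handler(submission)

print(route_submission({"type": "data_sample", "object_name": "invoice.pdf"}))
```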
  • the administrative control logic 220 includes a plurality of modules that collectively operate to receive and validate the submission 140 , authenticate the subscriber 120 N operating as the source of the submission 140 , and verify that the subscriber 120 N is authorized to conduct the task associated with the submission 140 .
  • the verification may involve the credential (key) management module 235 confirming that the subscriber's subscription permits the handling of the task and that the SaaS metrics associated with the current state of the subscriber's subscription and/or metrics of the current state of submissions (e.g., data sample submission threshold reached, etc.) do not preclude the handling of the task.
  • the above-identified modules of the administrative control logic 220 may include, but are not limited or restricted to the subscription management module 225 , a subscriber accounts data store 230 , the credential (key) management module 235 , a consumption quota monitoring module 245 , the configuration management module 250 , a system health assessment module 255 , an auto-scaling module 260 , and a subscription billing module 265 .
  • the subscription management module 225 is configured to control access, via the cybersecurity portal 205 , to the cybersecurity system 100 by controlling the subscription onboarding process. Via the cybersecurity portal 205 , during the onboarding process to register with and gain access to the cybersecurity system 100 , the subscription management module 225 gathers subscriber information (e.g., name of company, business address, industry by sector, geographic location, representative contact information, etc.) and financial information associated with the subscriber (e.g., bank account information, credit card information, etc.). The subscription management module 225 further prompts the subscriber, for example subscriber 120 N , for selection of a particular subscription tier. Each subscription tier may provide different types and/or levels of entitlements (e.g., access privileges, subscription parameters such as data sample submission thresholds, virtual key allocation threshold, etc.), where the usage or allocation of such entitlements may be monitored.
  • the subscription tiers may be based on different data sample submission thresholds for a prescribed period of time (e.g., a first subscription tier with one million data sample submissions per year (up to 1M/year) at cost $X and a second “pay-as-you-go” subscription tier with unlimited data sample submissions but higher submission costs per sample, $X+$Y).
  • the subscription tiers may be based on the numbers of credentials (e.g., keys, tokens, etc.) made available to the subscriber 120 N (e.g., prescribed number of active virtual keys allocated to the subscriber 120 N for subscriber/device authentication), or the like.
  • the subscription management module 225 may assign the Subscription ID 227 to the subscriber 120 N .
  • the Subscription ID 227 may be relied upon to assist in accessing account data associated with a particular subscription selected by the subscriber 120 N , which is maintained within the subscriber accounts data store 230 .
  • the subscriber accounts data store 230 constitutes a data store that is configured to maintain a record of account data associated with each subscriber 120 1 - 120 N registered to access cybersecurity services provided by the cybersecurity system 100 .
  • the subscriber accounts data store 230 may be configured as (i) one or more virtual data stores (e.g., Amazon® S3 data stores) each maintaining a record of the account data for a particular subscriber and utilized in the aggregate by the IaaS subscriber (cybersecurity vendor), (ii) one or more virtual data stores maintaining a collection of references (e.g., links, etc.), each directed to a different portion of cloud-based storage including account data maintained by public cloud infrastructure resources such as cloud (Amazon®) database resources 156 of FIG.
  • the “account data” may include any information or meta-information (e.g., Subscription ID 227 , credentials 240 / 242 such as tokens, keys or representatives thereof, metrics 232 / 234 ) that may be used to identify or authenticate its subscriber, provide subscription status or expiration date, and/or verify that a task associated with a submission may be handled by confirming compliance with entitlements provided by the subscriber-selected subscription tier.
  • each subscriber account may be located using the Subscription ID 227 and/or credentials 242 (e.g., content (or a derivative thereof) may be used to locate, within a virtual data store, the account data associated with that subscriber) and is configured to include information associated with the subscriber and subscription entitlements (e.g., which APIs are accessible by that subscriber, maximum number of submissions during a select time period, maximum number of issued virtual keys, etc.).
  • the subscriber accounts data store 230 may be configured to monitor and maintain, on a per subscriber basis, metrics including SaaS metrics 232 (representing at least some of the subscription entitlements) and IaaS metrics 234 .
  • the SaaS metrics 232 may include metrics that represent and maintain a sum total of submissions made by the (SaaS) subscriber 120 N (e.g., sum total of data sample submissions) during a particular period of time (e.g., subscription time period), which may be accessed to confirm that the sum total falls below the maximum number of submissions to ensure compliance with the subscription entitlements, especially before an incoming data sample submission is provided to the object evaluation logic 270 .
  • the SaaS metrics 232 may further include metrics that represent and maintain a sum total of virtual keys currently issued to the SaaS subscriber 120 N .
  • the SaaS metrics 232 may be used for billing of the subscriber 120 N based on the number of data sample submissions made during the particular period of time, and in some cases, to ensure compliance with subscription entitlements.
  • the SaaS metrics 232 may include aggregation metrics directed to all SaaS subscribers.
  • the SaaS metrics 232 may include an aggregate as to the number of data sample submissions for all SaaS subscribers. This metric may be used to assess the profitability of the cybersecurity system 100 and determine whether the cost structure necessitates a change in submission pricing.
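A minimal sketch of how such per-subscriber and aggregate SaaS metrics might be tracked is shown below; the class and method names are hypothetical, and the quota check mirrors the pre-submission compliance test described above.

```python
from collections import defaultdict

class SaaSMetrics:
    """Hypothetical per-subscriber counters mirroring the SaaS metrics 232."""

    def __init__(self) -> None:
        self.submissions: dict[str, int] = defaultdict(int)
        self.active_virtual_keys: dict[str, int] = defaultdict(int)

    def record_submission(self, subscription_id: str) -> None:
        self.submissions[subscription_id] += 1

    def within_quota(self, subscription_id: str, max_submissions: int) -> bool:
        # Checked before a data sample is handed to the object evaluation logic.
        return self.submissions[subscription_id] < max_submissions

    def aggregate_submissions(self) -> int:
        # Aggregate metric across all SaaS subscribers (e.g., for pricing review).
        return sum(self.submissions.values())

metrics = SaaSMetrics()
metrics.record_submission("sub-001")
metrics.record_submission("sub-002")
print(metrics.within_quota("sub-001", 1_000_000), metrics.aggregate_submissions())
```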
  • the cybersecurity system 100 may be configured to monitor and maintain, on a per subscriber basis, IaaS metrics 234 .
  • the IaaS metrics 234 may include, inter alia, information that quantifies certain resource usage by the SaaS subscriber 120 N , which may be directed to subscription compliance or certain advanced features provided by the cybersecurity system (e.g., indicator of compromise “IOC” generation, use of forensic analysts, etc.) that may involve ancillary services hosted by the public cloud 110 .
  • the IaaS metrics 234 may support subscription-based monitoring of public cloud infrastructure resources 150 (i.e., resources hosted by the public cloud network) to ensure compliance with certain subscription entitlements such as quality of service (QoS) thresholds influenced by the number of computing instances used by the subscriber concurrently (e.g., at least partially overlapping in time), a maximum amount of cloud-based storage memory allocated, or the like.
  • the credential (key) management module 235 features a credential (key) generation module 236 configured to handle credential generation and a credential (key) authentication module 237 configured to handle subscriber authentication.
  • Upon notification from the subscription management module 225 that the subscription process for the subscriber 120 N has successfully completed, the key generation module 236 generates a first (primary) credential 240 (referred to as a “master key”) assigned to the subscriber 120 N associated with the subscription.
  • the master key 240 may be maintained within a portion of the subscriber accounts data store 230 allocated to the subscriber 120 N , and it is not provided to the subscriber 120 N .
  • the master key 240 may operate as a basis (e.g., seed keying material) used by the credential generation module 236 to generate one or more second credentials 242 (referred to as “virtual keys”).
  • a virtual key 242 may be included as part of a submission (e.g., data sample, quota, parameter adjustment) and used by the credential management module 235 in authenticating the subscriber 120 N and confirming that the subscriber 120 N is authorized to perform a task associated with the submission accompanied by the virtual key 242 .
  • the key management module 235 may receive a virtual key generation request from a subscriber (e.g., the subscriber 120 N ). Upon receipt of the virtual key generation request, the key management module 235 confirms that the generation and release of the requested number of virtual keys is in compliance with the subscription entitlements (e.g., maximum number of issued (active) virtual keys available to the subscriber 120 N ). If the generation of the virtual keys is in compliance with the subscription parameters, the key generation module 236 generates and returns requested virtual keys 242 to the subscriber 120 N . Additionally, as shown in FIG. 2 , the key management module 235 stores the generated virtual keys 242 within the subscriber accounts data store 230 as part of the account data for the subscriber 120 N .
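The patent does not specify how virtual keys are derived from the master key; the sketch below assumes an HMAC-SHA256 construction purely for illustration, with issuance refused once the subscription's active-key limit would be exceeded. All names are hypothetical.

```python
import hmac
import hashlib
import secrets

def generate_master_key() -> bytes:
    # Primary credential 240: retained server-side, never released to the subscriber.
    return secrets.token_bytes(32)

def derive_virtual_key(master_key: bytes, subscription_id: str, index: int) -> str:
    # Second credential 242: derived from the master key as seed keying material.
    # HMAC-SHA256 is an assumption; the patent leaves the construction open.
    message = f"{subscription_id}:{index}".encode()
    return hmac.new(master_key, message, hashlib.sha256).hexdigest()

def issue_virtual_keys(master_key: bytes, subscription_id: str,
                       requested: int, already_active: int, max_active: int) -> list[str]:
    # Refuse issuance that would exceed the subscription's active-key entitlement.
    if already_active + requested > max_active:
        raise PermissionError("virtual key quota exceeded for this subscription")
    return [derive_virtual_key(master_key, subscription_id, already_active + i)
            for i in range(requested)]

mk = generate_master_key()
print(issue_virtual_keys(mk, "sub-001", requested=2, already_active=3, max_active=10))
```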
  • the key authentication module 237 is configured to authenticate the subscriber 120 N upon uploading the submission 140 (e.g., data sample submission, quota submission, parameter adjustment submission) and confirm that the task associated with the submission 140 is in compliance with the subscription entitlements afforded to the subscriber 120 N .
  • When the data sample submission 140 (inclusive of one of the virtual keys 242 (represented as virtual key 242 N ) along with an object selected for analysis, corresponding context information, and optionally the Subscription ID 227 ) is submitted to the cybersecurity system 100 via the interface logic 200 (e.g., first API 212 or optionally the cybersecurity portal 205 ), content from the data sample submission 140 (e.g., object 144 , portions of the context information 146 , etc.) may be withheld from being provided to the key management module 235 .
  • the key management module 235 may determine a location of the account data associated with the subscriber 120 N within the subscription accounts data store 230 to validate the virtual key 242 N , thereby authenticating the subscriber 120 N . Additionally, the key management module 235 may conduct an analysis of certain context information 146 provided with the data sample submission 140 to confirm, based on the subscription entitlements and the SaaS metrics 232 associated with data sample submissions, whether the data sample submission 140 may be submitted to the object evaluation logic 270 .
  • Upon successful authentication and verification, the key management module 235 returns a message, which prompts the interface logic 200 to at least route the data sample 142 (and perhaps other content within the data sample submission 140 ) to the object evaluation logic 270 . Otherwise, the key management module 235 returns an error code, which prompts the interface logic 200 to notify the subscriber 120 N of a submission error consistent with the error code.
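A condensed illustration of this authenticate-then-verify sequence is sketched below; the status values and helper names are hypothetical and stand in for the routing message and error code described above.

```python
from enum import Enum

class SubmissionStatus(Enum):
    ROUTE_TO_EVALUATION = "route"
    ERROR_BAD_KEY = "error: unknown virtual key"
    ERROR_QUOTA = "error: submission quota exhausted"

def check_submission(virtual_key: str,
                     issued_keys: set[str],
                     submissions_used: int,
                     max_submissions: int) -> SubmissionStatus:
    # 1) Authenticate the subscriber via the virtual key accompanying the submission.
    if virtual_key not in issued_keys:
        return SubmissionStatus.ERROR_BAD_KEY
    # 2) Verify the task complies with subscription entitlements (SaaS metrics).
    if submissions_used >= max_submissions:
        return SubmissionStatus.ERROR_QUOTA
    # 3) Otherwise prompt the interface logic to route the data sample onward.
    return SubmissionStatus.ROUTE_TO_EVALUATION

print(check_submission("vk-123", {"vk-123"}, submissions_used=10, max_submissions=100))
```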
  • the consumption quota monitoring module 245 may be accessed through the second API 214 (or via the cybersecurity portal 205 ) and is configured to enable a subscriber (e.g., the subscriber 120 N ) to obtain metrics associated with the current state of the subscription (e.g., active status, number of submissions for a particular submission type (or in total) conducted during the subscription period, number of submissions remaining for the subscription period, etc.).
  • the consumption quota monitoring module 245 may receive a message (quota request submission) from any of the subscribers 120 1 - 120 N (e.g., subscriber 120 N ) via the interface logic 200 , such as the second API 214 of the device interface 210 (or optionally logic 207 of the cybersecurity portal 205 for example).
  • the consumption quota monitoring module 245 may be configured to establish communications with the subscriber accounts data store 230 .
  • the consumption quota monitoring module 245 may access various metrics associated with the SaaS metrics 232 , such as the subscription status (active/inactive) and/or the sum total of submissions (or data sample submission in particular) made during a selected time period.
  • the consumption quota monitoring module 245 may be accessed by the key management module 235 to confirm that a requested task is in compliance with the subscription entitlements.
  • the credential management module 235 may be configured to access the consumption quota monitoring module 245 to confirm compliance with the subscription entitlements (e.g., the maximum number of data sample submissions constituting the data sample submission threshold has not been exceeded) before the task is initiated (e.g., the data sample 142 is provided to the object evaluation logic 270 for cyberthreat analytics).
  • the configuration management module 250 is configured to enable a subscriber, via the third API 216 (or optionally the cybersecurity portal 205 ), to specify parameters that control operability of the cyberthreat analytics. For instance, prior to controlling such operability, the credential management module 235 , upon receipt of a parameter adjustment submission, may extract a virtual key included as part of the submission to authenticate the subscriber 120 N and verify that the subscriber is authorized to perform this task (cyberthreat analytics configuration).
  • contents of the parameter adjustment submission are routed to the configuration management module 250 , which may alter stored parameters that may influence workflow, such as (i) operations of an analytic engine selection module deployed within the object evaluation logic 270 of the cybersecurity system 100 for selection of analytic engines (e.g., priority of analytics, change of analytics based on subscriber or attack vectors targeting the subscriber's industry, etc.), (ii) operations of the analytic engines deployed within the object evaluation logic 270 (e.g., changes in parameters that affect operations of the engines, such as available software profile(s) or guest images, run-time duration, priority in order of cyberthreat analytics, etc.), (iii) operations of the correlation module deployed within the object evaluation logic 270 (e.g., changes to threshold parameters relied upon to issue a threat verdict, etc.), and/or (iv) operations of the post-processing module deployed within the object evaluation logic 270 (e.g., change of retention time periods for context information associated with benign or malicious objects within cybersecurity intelligence).
  • the system health assessment module 255 and the auto-scaling module 260 are in communication with various modules within the object evaluation logic 270 , and SaaS subscribers have no visibility as to the operability of these modules.
  • the system health assessment module 255 is configured to monitor queue storage levels and/or the health (e.g., operating state, capacity level, etc.) of the public cloud infrastructure resources 150 , notably the analytic engines 275 utilized by the object evaluation logic 270 to conduct cybersecurity analytics on submitted data samples. From these communications, the system health assessment module 255 may be configured to ascertain the overall health of the object evaluation logic 270 . Additionally, the system health assessment module 255 may be configured to monitor the operability of certain public cloud infrastructure resources 150 utilized by the administrative control logic 220 , the reporting logic 290 and even logic associated with the interface logic 200 to surmise the overall health of the cybersecurity system 100 .
  • the auto-scaling module 260 may be configured to select and modify one or more additional computing instances 153 forming the basis for one or more analytic engines 275 within the object evaluation logic 270 .
  • the auto-scaling module 260 is configured to add additional analytic engines, as permitted by the subscription, in response to a prescribed increase in queued content associated with objects (or data samples) awaiting cyberthreat analytics (e.g., increased level of occupancy of content associated with the data sample within queuing elements being part of the distributed queues 155 hosted as part of the cloud storage resources 154 and responsible for temporarily storing data samples awaiting processing by the analytic engines 275 ).
  • the auto-scaling module 260 is configured to terminate one or more analytic engines in response to a decrease in queued data samples awaiting cyberthreat analytics.
  • the increase and/or decrease may be measured based on the number of objects, rate of change (increase or decrease), etc.
  • the auto-scaling module 260 may be configured to monitor available queue capacity, where a decrease in available queue capacity denotes increased data samples awaiting analytics and potential addition of analytic engines and an increase in available queue capacity denotes decreased data samples awaiting analytics and potential termination of analytic engine(s).
  • the prescribed decrease in available queue capacity may be measured based on a prescribed rate of change of available capacity for one or more queues, being part of the distributed queues 155 hosted as part of the cloud storage resources 154 and responsible for temporarily storing data samples awaiting processing by the analytic engines 275 , a decrease in the amount of storage available beyond a first prescribed threshold for the queue(s), or a decrease in the percentage of storage available for the queue(s).
  • the auto-scaling module 260 may be configured to terminate one or more of the computing instances operating as the analytic engines 275 in response to an increase in available queue capacity beyond a second prescribed threshold.
  • the first and second thresholds may be storage thresholds (e.g., number of data samples, percentage of storage capacity, etc.) in which the first threshold differs from the second threshold.
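One way to realize this scale-up/scale-down decision with distinct first and second thresholds, each sustained over a period to avoid thrashing, is sketched below; the threshold values and sampling scheme are illustrative assumptions rather than values from the patent.

```python
def scaling_decision(available_capacity_pct: list[float],
                     low_threshold: float = 20.0,
                     high_threshold: float = 80.0,
                     sustained_samples: int = 5) -> str:
    """Return 'scale_up', 'scale_down', or 'hold' from recent capacity readings.

    available_capacity_pct holds the most recent measurements of free queue
    capacity; requiring the condition over several consecutive samples stands
    in for the 'prolonged period of time' that avoids throttling/thrashing.
    """
    recent = available_capacity_pct[-sustained_samples:]
    if len(recent) < sustained_samples:
        return "hold"
    if all(pct < low_threshold for pct in recent):
        # Free capacity persistently below the first threshold: add analytic engines.
        return "scale_up"
    if all(pct > high_threshold for pct in recent):
        # Free capacity persistently above the second threshold: terminate engines.
        return "scale_down"
    return "hold"

print(scaling_decision([15.0, 12.0, 18.0, 10.0, 9.0]))   # scale_up
print(scaling_decision([85.0, 90.0, 92.0, 88.0, 95.0]))  # scale_down
```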
  • the subscription billing module 265 is configured to confirm that the subscription parameters have not been exceeded (to denote additional billing) for a time-based, flat-fee subscription (e.g., yearly, monthly, weekly or daily). Alternatively, for a pay-as-you-go subscription, the subscription billing module 265 may be configured to maintain an account of the number of submissions analyzed by the object evaluation logic 270 (e.g., data sample submissions) over a prescribed period of time and generate a request for payment from a SaaS subscriber (e.g., subscriber 120 N ) accordingly.
  • the number of data sample submissions includes those submitted from the subscriber 120 N and, according to some embodiments, may include additional objects uncovered during analytics conducted during the subscription period.
  • the subscription billing module 265 may be operable to identify other paid cloud-based services utilized by the SaaS-subscriber 120 N for inclusion as part of the payment request. According to one embodiment, the subscription billing module 265 may access the subscriber account data for the requisite information.
  • the object evaluation logic 270 is configured to receive data samples via the interface logic 200 and conduct cyberthreat analyses on these data samples.
  • the object evaluation logic may be separated into multiple evaluation stages, where each evaluation stage is provided access to a queue that features a plurality of queue elements each storing content (object, context information, etc.) associated with a submitted data sample.
  • each “stage” queue is provided access to (or receives) content associated with a data sample evaluated in the preceding evaluation stage.
  • the object evaluation logic includes a preliminary analytic module (within a first evaluation stage), an analytic engine selection module (within a second evaluation stage), a cyberthreat analytic module (within a third evaluation stage), a correlation module (within a fourth evaluation stage) and a post-processing module (within a fifth evaluation stage).
  • the object evaluation logic 270 is configured with logic to communicate with the administrative control logic 220 to exchange or return information, such as subscription-related information (e.g., number of processed objects, health information, queue capacity, etc.) that may be used for billing, auto-scaling and other operability provided by the cybersecurity system 100 .
  • the reporting logic 290 is configured to receive meta-information 292 associated with the analytic results produced by the object evaluation logic 270 and generate a displayable report 294 including the comprehensive results of the cyberthreat analytics (e.g., verdict, observed features and any corresponding meta-information representing the results associated with the cyberthreat analytics, context information associated with the observed features that identifies the analyses conducted to produce the observed features, circumstances under which the features occurred, etc.).
  • the displayable report 294 may be provided as one or more interactive screens or a series of screens that allow a security administrator (corresponding to a representative of the SaaS-subscriber) to view results of data sample submissions in the aggregate and “drill-down” as to specifics associated with one of the objects uploaded to the cybersecurity system within a data sample submission.
  • the reporting logic 290 may rely on the Subscription ID 227 or the virtual key 242 N , which may be part of the data sample 142 submitted to the object evaluation logic 270 , to identify the subscriber 120 N and determine a preferred method for conveyance of an alert of the presence of the displayable report 294 (and set access controls to preclude access to contents of the displayable report 294 by other SaaS-subscribers). Additionally, or in the alternative, the reporting logic 290 may generate an alert based on the comprehensive results of the cyberthreat analytics. The alert may be in the form of a message (e.g., “threat warning” text or other electronic message).
  • Referring to FIG. 3 , a block diagram of an exemplary embodiment of the object evaluation logic 270 implemented within the cybersecurity system 100 of FIG. 2 is shown.
  • the object evaluation logic 270 may be separated into multiple evaluation stages 390 - 394 , where each evaluation stage 390 . . . or 394 is assigned a queue including a plurality of queue elements to store content associated with the data sample 142 as it proceeds through the evaluation stages 390 - 394 , along with context information generated as analytics are performed on the data sample 142 .
  • the queues associated with the evaluation stages 390 - 394 are illustrated in FIG. 3 as Q 1 -Q 5 .
  • the object evaluation logic 270 includes a preliminary analytic module 310 (within the first evaluation stage 390 ), an analytic engine selection module 340 (within the second evaluation stage 391 ), a cyberthreat analytic module 350 (within the third evaluation stage 392 ), a correlation module 370 (within the fourth evaluation stage 393 ) and a post-processing module 380 (within the fifth evaluation stage 394 ).
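This five-stage arrangement, with one queue (Q1-Q5) feeding each evaluation stage and content handed from queue to queue, can be pictured with the minimal Python sketch below; the stage functions merely annotate the item and stand in for the actual modules, so all names are illustrative.

```python
from collections import deque
from typing import Any, Callable

Stage = Callable[[dict[str, Any]], dict[str, Any]]

def preliminary_analysis(item: dict[str, Any]) -> dict[str, Any]:
    item["stage1"] = "pre-filtered"
    return item

def engine_selection(item: dict[str, Any]) -> dict[str, Any]:
    item["stage2"] = "engines selected"
    return item

def cyberthreat_analysis(item: dict[str, Any]) -> dict[str, Any]:
    item["stage3"] = "features collected"
    return item

def correlation(item: dict[str, Any]) -> dict[str, Any]:
    item["stage4"] = "verdict assigned"
    return item

def post_processing(item: dict[str, Any]) -> dict[str, Any]:
    item["stage5"] = "retention applied"
    return item

STAGES: list[Stage] = [preliminary_analysis, engine_selection,
                       cyberthreat_analysis, correlation, post_processing]
# One queue per evaluation stage (Q1-Q5); the first queue receives incoming samples.
QUEUES: list[deque] = [deque() for _ in STAGES]

def run_pipeline(data_sample: dict[str, Any]) -> dict[str, Any]:
    QUEUES[0].append(data_sample)
    for index, stage in enumerate(STAGES):
        item = QUEUES[index].popleft()
        item = stage(item)
        if index + 1 < len(QUEUES):
            # Content (object plus accumulated context) is made available to
            # the queue of the next evaluation stage.
            QUEUES[index + 1].append(item)
    return item

print(run_pipeline({"object": "sample.exe", "context": {}}))
```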
  • the object evaluation logic 270 receives content from the data sample 142 , such as an object 144 for analysis along with context information 146 associated with the object 144 .
  • the context information 146 may include submission context 147 , entitlement context 148 , and/or object context 149 .
  • the submission context 147 may include information pertaining to the submission 140 and/or data sample 142 , such as (i) time of receipt or upload into the cybersecurity system 100 , (ii) origin of the object 144 included in the submission 140 (e.g., from email, network cloud shared drive, network transmission medium, etc.), (iii) location of the subscriber device 120 N submitting the object 144 , (iv) Internet Protocol (IP) address of the subscriber device 120 N , or the like.
  • the entitlement context 148 may include information pertaining to the subscription selected by the subscriber, such as information directed to what features are permitted by the subscription (e.g., types of analytics supported, reporting formats available, credentials to access third party resources, or other features that may distinguish different subscription tiers).
  • the object context 149 may include information pertaining to the object 144 , including meta-information associated with the object 144 such as the name of the object 144 , an extension type (e.g., pdf, exe, html, etc.), or the like.
  • the preliminary analytic module 310 is configured to conduct one or more preliminary analyses on content within the data sample 142 , which includes the object 144 and/or the context information 146 accompanying the object 144 , based on cybersecurity intelligence 320 accessible to the object evaluation logic 270 .
  • the cybersecurity intelligence 320 may include context information 322 associated with known malicious objects and known benign objects gathered from prior analytics conducted by the cybersecurity system 100 (hereinafter, “internal intelligence 322 ”).
  • the cybersecurity intelligence 320 may include context information 324 (hereinafter, “external intelligence 324 ”) associated with known malicious objects and known benign objects gathered from analytics conducted by other cybersecurity intelligence sources (e.g., other cloud-based cybersecurity systems, on-premises cybersecurity systems, etc.) and/or context information 326 associated with known malicious and/or benign objects accessible from one or more third party cybersecurity sources (hereinafter, “3P intelligence 326 ”).
  • the preliminary analytic module 310 includes a context extraction module 400 and a filtering module 410 , which includes a first pre-filter module 420 , and a second pre-filter module 430 .
  • the context extraction module 400 is configured to recover the context information 146 from the data sample 142 while the filtering module 410 is configured to conduct one or more preliminary analyses of the context information 146 associated with the object 144 and, based on the preliminary analyses, determine an initial classification of the object 144 .
  • the preliminary analyses of the context information 146 may be conducted on the submission context 147 , entitlement context 148 , and/or object context 149 in the aggregate.
  • Upon classifying the object 144 as suspicious, the filtering module 410 passes the object 144 and/or the context information 146 to the analytic engine selection module 340 to conduct additional cyberthreat analytics. Otherwise, responsive to a preliminary malicious (or benign) classification, the filtering module 410 may bypass further cyberthreat analyses of the object 144 , as illustrated by a feed-forward path 440 .
  • the first pre-filter module 420 analyzes the context information 146 , optionally in accordance with the separate consideration of different context types as described above, by conducting an analysis (e.g., comparison) between at least a portion of the context information 146 and the context information 322 associated with known malicious and/or benign objects gathered from prior analytics conducted by the cybersecurity system 100 .
  • the context information 322 may be maintained within one or more virtual data stores as part of the cloud storage resources 154 hosted by the cloud network 110 of FIG. 1 B .
  • Upon determining from the internal intelligence 322 that the object 144 is known to be malicious or benign, the first pre-filter module 420 may bypass operations by at least the analytic engine selection module 340 , the cyberthreat analytic module 350 and the correlation module 370 , as represented by the feed-forward path 440 . Otherwise, the context information 146 is provided to the second pre-filter module 430 .
  • the second pre-filter module 430 analyzes the context information 146 by conducting an analysis (e.g., comparison) between at least a portion of the context information 146 and the context information 324 associated with known malicious and/or benign objects gathered from analytics conducted by other cybersecurity intelligence sources and/or context information 326 associated with known malicious and/or benign objects accessible from third party cybersecurity source(s).
  • Upon a match against this external or third-party intelligence, the second pre-filter module 430 may also bypass operations by at least the analytic engine selection module 340 , the cyberthreat analytic module 350 and the correlation module 370 (and perhaps the post-processing module 380 ), as represented by the feed-forward path 440 . Otherwise, the object 144 is determined to be suspicious, where the context information 146 and/or the object 144 are made available to the second evaluation stage 391 of the object evaluation logic 270 .
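A minimal sketch of this two-pass pre-filtering flow is shown below; the hash-keyed intelligence lookup is an assumption made purely for illustration, and the class and function names are hypothetical.

```python
from enum import Enum

class Verdict(Enum):
    MALICIOUS = "malicious"
    BENIGN = "benign"
    SUSPICIOUS = "suspicious"   # requires full cyberthreat analytics

def lookup(object_hash: str, intelligence: dict[str, str]) -> Verdict | None:
    verdict = intelligence.get(object_hash)
    return Verdict(verdict) if verdict else None

def pre_filter(object_hash: str,
               internal_intel: dict[str, str],
               external_intel: dict[str, str]) -> Verdict:
    # First pre-filter: prior verdicts from this cybersecurity system.
    verdict = lookup(object_hash, internal_intel)
    if verdict:
        return verdict            # feed-forward path: bypass further analytics
    # Second pre-filter: verdicts from other sources / third-party intelligence.
    verdict = lookup(object_hash, external_intel)
    if verdict:
        return verdict
    return Verdict.SUSPICIOUS     # hand off to the analytic engine selection module

internal = {"abc123": "benign"}
external = {"def456": "malicious"}
print(pre_filter("abc123", internal, external))   # Verdict.BENIGN
print(pre_filter("zzz999", internal, external))   # Verdict.SUSPICIOUS
```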
  • the context information 146 and/or the object 144 are made available to the analytic engine selection module 340 .
  • the content associated with the object 144 and/or context information 146 within a first stage queue Q 1 may be passed (or made available by identifying its storage location) to a second stage queue Q 2 allocated for the second evaluation stage 391 .
  • the analytic engine selection module 340 is configured to determine the type and/or ordering of analytic engines to process the object 144 based on the context information 146 , such as the submission context 147 , the entitlement context 148 and/or the object context 149 maintained in the second stage queue Q 2 .
  • the analytic engine selection module 340 may select the analytic engine(s) based on the context information 146 .
  • the particular ordering (workflow) of the analytic engines may be based, at least in part, on the types of context information.
  • the entitlement context 148 may identify certain types of analytic engines that are permitted for use (e.g., allow certain analytic engine types and preclude others, allow all types of analytic engine types) based on the subscription tier.
  • object context may tailor the type of analytic engine to avoid selection of a configuration that is unsuitable or ineffective for a particular type of object, while submission context may tailor selection toward engines oriented to attack vectors associated with the origin of the object (e.g., an email source favoring an analytic engine more targeted for email analysis, etc.).
  • the analytic engine selection module 340 includes a controller 500 and a plurality of rule sets 510 , which are identified as a first rule set 520 , a second rule set 522 and a third rule set 524 .
  • the rule sets 510 may be executed or referenced by the controller 500 in the aggregate analyses of different types of context information 146 in determining the number and types of analytic engines selected for analysis of the object 144 .
  • the rule sets 510 may be maintained separate from the queue Q 2 being part of a distributed queue allocated for the analytic engine selection module 340 .
  • the controller 500 may select the analytic engine(s) based on the context information 146 considered in its totality.
  • the first rule set 520 may be used by the controller 500 in selecting a first group of analytic engines based on the submission context 147 provided with the data sample 142 .
  • the second rule set 522 may be used by the controller 500 in selecting a second group of analytic engines based on the entitlement context 148 while the third rule set 524 is used by the controller 500 in selecting a third group of analytic engines based on the object context 149 .
  • the analytic engines may be determined to be a subset of analytic engines common to the selected groups of analytic engines.
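This rule-set-driven selection, with the final choice taken as the subset common to the candidate groups, might be sketched as follows; the specific rules, engine names, and context fields are illustrative assumptions, not taken from the patent.

```python
def select_engines(submission_context: dict,
                   entitlement_context: dict,
                   object_context: dict) -> set[str]:
    # First rule set: candidate engines based on submission context (e.g., origin).
    if submission_context.get("origin") == "email":
        group_1 = {"static", "dynamic", "machine_learning"}
    else:
        group_1 = {"static", "dynamic", "machine_learning", "emulation"}

    # Second rule set: engine types permitted by the subscription tier.
    group_2 = set(entitlement_context.get("permitted_engines", []))

    # Third rule set: engine types suitable for the object type.
    if object_context.get("extension") in {"exe", "dll"}:
        group_3 = {"static", "dynamic", "emulation"}
    else:
        group_3 = {"static", "dynamic", "machine_learning"}

    # The selected engines are the subset common to all three groups.
    return group_1 & group_2 & group_3

print(select_engines({"origin": "email"},
                     {"permitted_engines": ["static", "dynamic", "emulation"]},
                     {"extension": "exe"}))   # {'static', 'dynamic'}
```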
  • the controller 500 may be configured to formulate, from the computing instances, these selected analytic engines to operate sequentially or concurrently.
  • the selected analytic engines 275 1 - 275 L may include at least one or any combination of the following: (i) static analytic engines to conduct an analysis on the content of the object 144 within the data sample 142 and generate results including observed features represented by characteristics of the object 144 (and accompanying context information); (ii) dynamic analytic engines to conduct an execution of the object 144 and generate results including features represented by observed behaviors of the analytic engine (and accompanying context information); (iii) machine learning analytic engines to conduct extraction of insights using a trained model and generate results including features represented by a probability of the object 144 being malicious (and accompanying context information); and/or (iv) emulation analytic engines to conduct reproduction of operations representing the execution of the object 144 without such execution and generate results including features represented by the reproduced operations (and accompanying context information).
  • the distributed queues 155 associated with the cyberthreat analytic module 350 may maintain the portions of the data sample 142 (e.g., object 144 , context information 146 , etc.) for retrieval by each of the selected analytic engines.
  • Features produced by the analytics conducted by the selected analytic engines 275 1 - 275 3 are collected by a feature collection module 530 operating, at least in part, as an event (feature) log.
  • the features correspond to resultant information produced by each of the selected analytic engines during analysis of at least a portion of the context information 146 and/or the object 144 .
  • the cyberthreat analytic module 350 includes one or more analytic engines 275 1 - 275 3 , which are selected to perform different analytics on the object 144 in efforts to determine whether the object is malicious (malware present) or non-malicious (no malware detected).
  • These analytic engines 275 1 - 275 3 may operate sequentially or concurrently (e.g., at least partially overlapping in time).
  • the analytic engines 275 1 - 275 3 may assess the content associated with the object 144 and/or context information 146 within a third stage queue Q 3 that is passed from the second stage queue Q 2 , where the context information 146 may include additional context information produced from the analyses conducted at the first and second evaluation stages 390 - 391 .
  • the analytic engines 275 1 - 275 L may be selected based, at least in part, on the submission context, entitlement context and/or the object context.
  • the analytic engines 275 1 - 275 3 may be selected as any one or any combination of at least two of the following analytic engines as described above: (i) static analytic engines; (ii) dynamic analytic engines, (iii) machine learning analytic engines, and/or (iv) emulation analytic engines.
  • a feedback path 360 represents that the cyberthreat analytic module 350 may need to conduct a reiterative, cascaded analysis of an additional object, uncovered during analysis of another object, with a different selection of engines (hereinafter, “sub-engines” 540 ).
  • the analytic engines 275 1 - 275 3 may be operating concurrently (in parallel), but the sub-engines 540 may be conducted serially after completion of operations by the analytic engine 275 1 .
  • the sub-engine 540 1 may be initiated to perform a sub-analysis based on an event created during processing of the object 144 by the analytic engine 275 1 .
  • the event may constitute detection of an additional object (e.g., an executable or URL embedded in the object 144 , such as a document for example, detected during analysis of the object 144 ) or detected information that warrants analytics different from those previously performed. According to one embodiment of the disclosure, this may be accomplished by returning the additional object(s) along with its context information to the second stage queue Q 2 associated with the analytic engine selection module 340 , for selection of the particular sub-engine(s) 540 .
  • the processing of the object 144 and/or context information 146 by the analytic engines 275 2 - 275 3 may be conducted in parallel with the analytic engines 275 1 as well as sub-engines 540 .
  • each analytic engine 275 1 . . . or 275 L is based on an analytic engine infrastructure hosted by the cloud network and provisioned by the analytic engine selection module 340 .
  • each analytic engine 275 1 . . . or 275 L , such as the analytic engine 275 1 for example, includes a health assessment module 600 , a configuration module 610 , an update module 620 , a task processing module 630 and a result processing module 640 .
  • the health assessment module 600 is configured to determine the operational health of the analytic engine 275 1 .
  • the operational health may be represented, at least in part, by its utilization level that signifies when the analytic engine 275 1 is stalled or non-functional (e.g., <5% utilization) or when the analytic engine 275 1 is at a higher risk than normal of failure (e.g., >90% utilization).
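Using the example utilization thresholds above, a health classification might be sketched as follows; the state labels and the aggregate view are hypothetical stand-ins for what the system health assessment module consumes.

```python
def engine_health(utilization_pct: float) -> str:
    """Classify an analytic engine's health from its utilization level.

    Thresholds follow the examples in the text: very low utilization suggests
    a stalled or non-functional engine, very high utilization suggests an
    elevated risk of failure.
    """
    if utilization_pct < 5.0:
        return "stalled_or_non_functional"
    if utilization_pct > 90.0:
        return "elevated_failure_risk"
    return "healthy"

# Aggregate view consumed by a system-wide health assessment.
readings = {"engine_1": 2.0, "engine_2": 55.0, "engine_3": 97.0}
print({name: engine_health(pct) for name, pct in readings.items()})
```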
  • the aggregate of the operational health of each of the analytic engines 275 1 - 275 3 may be accessed and used in determining overall system health by the system health assessment module 255 of FIG. 2 .
  • the configuration module 610 is configured to control the configuration and re-configuration of certain functionality of the analytic engine 275 1 .
  • the configuration module 610 may be configured to control reconfiguration and control interoperability between the analytic engine 275 1 and other modules within the object evaluation logic 270 and/or the administrative control logic 220 .
  • the configuration module 610 may be further configured to set and control the duration of an analysis conducted for the data sample 142 .
  • the duration may be uniform for all data samples independent of object type or may be set at different durations based on the type of object included as part of the data sample 142 .
  • the configuration module 610 may be configured to select (i) the queue (e.g., third stage queue Q 3 ) from which one or more data samples (including data sample 142 ) awaiting analysis by the analytic engine 275 1 is retrieved, (ii) different software profiles to install when conducting dynamic analytics on each data sample maintained in the queue, and/or (iii) what time to conduct such analytics on queued data samples.
  • the update module 620 is configured to receive and control installation of changes to sets of rules controlling operability of the task processing module 630 and the result processing module 640 (described below) and changes to parameters to modify operability of the analytic engine 275 1 .
  • the task processing module 630 is configured to monitor the queuing infrastructure associated with the third evaluation stage 392 (third stage queue Q 3 ) of the object evaluation logic 270 of FIG. 3 . More specifically, the task processing module 630 monitors the third stage queue Q 3 for retention of data samples awaiting analysis by the analytic engine 275 1 to ascertain a current processing level for the cybersecurity system 100 and determine if a capacity threshold for the third stage queue Q 3 has been exceeded, perhaps over a prescribed period of time to avoid throttling.
  • the task processing module 630 may signal the auto-scaling module 260 within the administrative control logic 220 to activate one or more additional computing instances to be configured and used as additional analytic engines for the object evaluation logic 270 . Additionally, the task processing module 630 may be configured to further monitor one or more other stage queues (e.g., first stage queue Q 1 , second stage queue Q 2 , fourth stage queue Q 4 and/or fifth stage queue Q 5 ) to estimate future processing capacity, upon which the auto-scaling module 260 may commence scaling up or scaling down analytic engines.
  • a fourth evaluation stage 393 includes a correlation module 370 , which operates in accordance with a fourth rule set 700 to classify the object included as part of the data sample as malicious, benign, unknown or suspicious based on the meta-information (events) collected from the analyses performed by the analytic engines.
  • the classification of the object 144 may be based, at least in part, on meta-information associated with the analytic results generated by the analytic engines 275 1 - 275 3 and maintained with the event log 530 (hereinafter, “analytic meta-information” 550 ).
  • the classification of the object (sometimes referred to as the “verdict”) is provided to post-processing module 380 that is part of a fifth evaluation stage 394 .
  • the post-processing module 380 , operating in compliance with a fifth rule set 710 and deployed within the fifth evaluation stage 394 , may initiate actions to remediate, in accordance with a remediation policy 720 , a detected cyberthreat represented by the object 144 (e.g., through blocking or resetting of configuration settings), or perform a particular retention policy 730 on the object 302 and/or context information 146 associated with the object 144 .
  • the object 144 and/or context information 146 , currently maintained in a fifth stage queue Q 5 , may be stored as part of the internal intelligence 322 accessible by the preliminary analytic module 310 (see FIG.
  • the context information 146 associated with the object 144 classified as “malicious” may be stored for a first prescribed period of time (e.g., ranging from a month to indefinitely) while this context information 146 may be stored for a second prescribed time less than the first prescribed time (e.g., ranging from a few days to a week or more) when the object 144 is classified as “benign”.
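A sketch of such verdict-dependent retention, with illustrative durations chosen from within the ranges mentioned above, could look like the following; the constants and function names are assumptions made for illustration only.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention windows; the text gives ranges (a month to indefinitely
# for malicious context, a few days to a week or more for benign context).
RETENTION = {
    "malicious": timedelta(days=365),
    "benign": timedelta(days=7),
}

def retention_expiry(verdict: str, stored_at: datetime) -> datetime | None:
    """Return when stored context information may be purged (None = keep indefinitely)."""
    window = RETENTION.get(verdict)
    return stored_at + window if window else None

now = datetime.now(timezone.utc)
print(retention_expiry("malicious", now))
print(retention_expiry("benign", now))
```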
  • the reporting logic 290 controls the reporting of these cyberthreat analytic results, which may include one or more alerts 160 to allow an administrator (e.g., person responsible for managing the customer cloud-hosted resources or the public cloud network itself) access to one or more dashboards via the cybersecurity portal 205 or the first API 212 .
  • the reporting logic 290 is configured to receive the meta-information 292 associated with the analytic results produced by the object evaluation logic 270 and generate the displayable report 294 including the comprehensive results of the cyberthreat analytics (e.g., verdict, observed features and any corresponding context information including meta-information), as described above.

Abstract

A system for conducting cyberthreat analytics on a submitted object to determine whether the object is malicious is described. The system features a cybersecurity system operating with a cloud platform, which is configured to host resources including cloud processing resources and cloud storage resources. The cybersecurity system is configured to analyze one or more received objects included as part of a submission received from a subscriber after authentication of the subscriber and verification that the subscriber is authorized to perform one or more tasks associated with the submission. The cybersecurity system is configured to operate as a multi-tenant Security-as-a-Service (SaaS) that relies upon the cloud processing resources and the cloud storage resources provided by the cloud platform in performing the cybersecurity operations.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of priority on U.S. Provisional Application No. 62/953,422 filed on Dec. 24, 2019, the entire contents of which are incorporated by reference herein.
FIELD
Embodiments of the disclosure relate to the field of cybersecurity. More specifically, one embodiment of the disclosure relates to a system architecture directed to cybersecurity threat detection and a corresponding method thereof.
GENERAL BACKGROUND
In the past, businesses have relied on application software installed on one or more electronic devices residing in close proximity to its user (hereinafter, “on-premises electronic devices”). Each on-premises electronic device may constitute a type of computer such as a personal computer, a locally maintained mainframe, or a local server for example. As on-premises electronic devices became subjected to cybersecurity attacks (cyberattacks) more regularly, in order to protect these electronic devices, certain preeminent cybersecurity vendors began to develop and deploy on-premises threat detection appliances.
For on-premises deployments, a customer has to purchase threat detection appliances from a cybersecurity vendor, which requires both a significant upfront capital outlay for the purchase of the appliances as well as significant ongoing operational costs. These operational costs may include the costs for deploying, managing, maintaining, upgrading, repairing and replacing these appliances. For instance, a customer may be required to install multiple types of threat detection appliances within the enterprise network in order to detect different types of cybersecurity threats (cyberthreats). These cyberthreats may coincide with discrete activities associated with known or highly suspected cyberattacks.
As an illustrative example, a cybersecurity vendor would need to install one type of on-premises threat detection appliance that is directed to analyze electronic mail (email) messages for malware, normally ingress email messages from an outside source. Similarly, the cybersecurity vendor would need to install another type of on-premises threat detection appliance to analyze web-based content (e.g., downloaded web pages and related network traffic) in an effort to detect cyberthreats such as web pages embedded with malware. Herein, “malware” may be generally considered to be software (e.g., an executable) that is coded to cause a recipient electronic device to perform unauthorized, unexpected, anomalous, and/or unwanted behaviors or operations (hereinafter, “malicious behaviors”), such as altering the functionality of an electronic device upon execution of the malware.
Cybersecurity vendors have provided threat detection through cloud-based offerings that are self-hosted by these vendors. Herein, the responsibility for the above-described upfront capital outlays and ongoing operational costs is shifted from the customer to the cybersecurity vendor. As a result, the cybersecurity vendor is now saddled with even greater overall costs than a customer itself because the cybersecurity vendor must deploy infrastructure resources sized to handle the maximum aggregate threat detection analytic workload for all of its customers. These overall costs, directed to data processing and storage usage, would need to be passed on to its customers, where any significant cost increase may translate into a significant price increase for the cybersecurity services. As a result, customers are unable to accurately estimate or anticipate the costs associated with current and future cybersecurity needs, given that changes in cybersecurity needs, amongst all of the customers, may influence the costs apportioned for processing or storage usage.
Recently, more businesses and individuals have begun to rely on a public cloud network (hereinafter, “public cloud”) for all types of services, including cybersecurity services offered by the cloud provider. A “public cloud” is a fully virtualized environment with a multi-tenant architecture that enables tenants (i.e., customers) to establish different cloud accounts, but share computing and storage resources and retain the isolation of data within each customer's cloud account. The virtualized environment includes on-demand, cloud computing platforms that are provided by a collection of physical data centers, where each data center includes numerous servers hosted by the cloud provider. Examples of different types of public clouds may include, but are not limited or restricted to, Amazon Web Services®, Microsoft® Azure®, or Google Cloud Platform™.
Comprehensive cloud-based cybersecurity services are not known to be provided. Instead, cybersecurity services offered by cloud providers are typically limited to protecting their own infrastructure. The lack of cybersecurity vendor offerings in the public cloud, where the public cloud operates as an Infrastructure-as-a-Service (IaaS) cloud service, is due in large part to the fact that such a deployment is highly complex, especially when a common interface for object analytics is crucial for subscriber acceptance and ease of use, and a great number of keys for subscriber authentication is required.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
FIG. 1A is a block diagram of an exemplary embodiment of a cloud-based cybersecurity system deployed as a Security-as-a-Service (SaaS) layered on a public cloud operating as an Infrastructure-as-a-Service (IaaS).
FIG. 1B is a block diagram of an exemplary embodiment of a cloud-based cybersecurity system deployed as a cybersecurity service within a cloud network.
FIG. 2 is a block diagram of an exemplary embodiment of logic forming the cybersecurity system of FIGS. 1A-1B.
FIG. 3 is a block diagram of an exemplary embodiment of a multi-stage object evaluation logic implemented within the cybersecurity system of FIG. 2 .
FIG. 4 is a block diagram of an exemplary embodiment of a first evaluation stage of the object evaluation logic of FIG. 2 including a preliminary analytic module.
FIG. 5 is a block diagram of an exemplary embodiment of a second evaluation stage of the object evaluation logic including an analytic engine selection module operating with a cyberthreat analytic module deployed within a third evaluation stage of the object evaluation logic of FIG. 2 .
FIG. 6 is a block diagram of an exemplary embodiment of an analytic engine configured to operate as part of the cyberthreat analytic module of FIG. 3 .
FIG. 7 is a block diagram of an exemplary embodiment of a fourth evaluation stage of the object evaluation logic including a correlation module and a post-processing module deployed within a fifth evaluation stage of the object evaluation logic of FIG. 2 .
DETAILED DESCRIPTION
Embodiments of the present disclosure generally relate to a cloud-based cybersecurity system leveraging resources associated with the infrastructure provided by a public cloud. One embodiment of the cybersecurity system operates as a multi-tenant (subscription-based) Security-as-a-Service (SaaS), which is layered on a multi-tenant Infrastructure-as-a-Service (IaaS) cloud platform. As a result, multiple subscribers may be afforded access to cybersecurity services offered by the cybersecurity system while multiple users, including the cybersecurity system, may be afforded access to shared resources hosted by the public cloud (hereinafter, “public cloud infrastructure resources”). Stated differently, as the SaaS-operating cybersecurity system (hereinafter, “cybersecurity system” or “SaaS”) may be installed by a cybersecurity vendor being a different entity than the cloud provider, the SaaS may deploy a vendor-specific proprietary software stack to run on the compute and storage resources provided by the IaaS cloud platform.
In light of this dual, multi-tenant deployment, the cybersecurity system may be configured to charge usage in accordance with a different pricing scheme than offered by the IaaS (public cloud). For example, the cybersecurity system may be configured with a tiered subscription pricing scheme based on a number of submissions of objects undergoing cyberthreat analytics by the cybersecurity system (e.g., the number of objects uploaded via a portal or other type of interface or the number of objects processed to account for objects created and processed during processing of another object if more detailed analytics are requested) along with additional subscription enrichments (e.g., enhanced reporting formats, memory dump capabilities, etc.). Additionally, or in the alternative, the cybersecurity system may be configured with a “pay per usage” pricing scheme, which imposes no maximum submission threshold over a prescribed duration but applies higher costs to each submission.
As a result of the SaaS deployment described herein, the cybersecurity system enables both the customer and cybersecurity vendor to avoid the complexity and significant capital outlay in buying and operating physical servers and other datacenter infrastructure. Instead, the cybersecurity vendor incurs the costs associated with the actual use of certain public cloud infrastructure resources, such as storage amounts or compute time as measured by the time of data processing conducted by computing instances hosted by the public cloud and configured as analytic engines within the cybersecurity system as described below. The subscribers incur the costs associated with their actual number of object submissions for a determination as to whether the objects constitute a cyberthreat.
Unlike conventional cyberthreat detection appliances, the cybersecurity system is configured to be “submission agnostic,” meaning that the same submission scheme may be followed for uploading different object types for analysis (e.g., email messages, web page content, uniform resource locators (URLs), hashes, files, documents, etc.) and/or the same multi-stage evaluation is conducted on a data sample, inclusive of that object and context information associated with the object, independent of object type. Herein, the architecture of the cybersecurity system is designed to conduct cyberthreat analytics on multiple types of objects uploaded to the cybersecurity system by at least (i) validating a submission by confirming that requisite information is included within the submission, (ii) authenticating the subscriber that input the submission, and/or (iii) verifying the subscriber is authorized to perform the task(s) associated with the submission. Upon successful validation, authentication and/or verification of a particular type of submission, such as a data sample submission for example, the cybersecurity system conducts cyberthreat analytics on the object in accordance with a multi-stage evaluation that is submission agnostic (i.e., evaluation stages do not change based on the object type).
I. General Summary
A. Overview
In general, the cybersecurity system may be configured to receive multiple types of objects through an interface (e.g., a cybersecurity portal, device interface including one or more Application Programming Interfaces “APIs”, etc.) upon completion of a subscriber onboarding process. Upon receipt of an object included as part of a data sample, the cybersecurity system may validate the data sample submission by confirming that the submission includes requisite information such as credential(s), a subscription identifier (hereinafter, “Subscription ID”), or the like. Additionally, the cybersecurity system may authenticate the subscriber by confirming that the submitted credential is active and verify that the subscriber is authorized to perform the requested task(s) through analysis of entitlements made available to the subscriber based on its chosen subscription type as identified by the Subscription ID (e.g., subscription parameters such as access privileges, data sample submission thresholds, virtual key allocation threshold, etc.).
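The following is a minimal sketch, in Python, of how the validation, authentication and verification steps described above might be sequenced; the field names, the in-memory credential store and the entitlement values are assumptions made purely for illustration and are not prescribed by this disclosure.

```python
# Hypothetical sketch of submission validation, subscriber authentication, and
# task verification; names and data layout are illustrative assumptions only.

ACTIVE_CREDENTIALS = {"vk-123": "sub-001"}          # virtual key -> Subscription ID
ENTITLEMENTS = {"sub-001": {"submission_limit": 1000, "submissions_used": 42,
                            "allowed_tasks": {"data_sample", "quota_query"}}}

def handle_submission(submission: dict) -> str:
    # (i) Validate: confirm requisite fields are present in the submission.
    for field in ("credential", "subscription_id", "task"):
        if field not in submission:
            return f"rejected: missing {field}"

    # (ii) Authenticate: confirm the credential is active and bound to the subscription.
    if ACTIVE_CREDENTIALS.get(submission["credential"]) != submission["subscription_id"]:
        return "rejected: authentication failed"

    # (iii) Verify: confirm the subscription entitles the requested task and
    #       the data sample submission threshold has not been exhausted.
    ent = ENTITLEMENTS[submission["subscription_id"]]
    if submission["task"] not in ent["allowed_tasks"]:
        return "rejected: task not permitted by subscription"
    if submission["task"] == "data_sample" and ent["submissions_used"] >= ent["submission_limit"]:
        return "rejected: submission threshold reached"

    return "accepted: routed to object evaluation logic"

print(handle_submission({"credential": "vk-123", "subscription_id": "sub-001",
                         "task": "data_sample"}))
```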
Based on data sample submission validation, subscriber authentication, and task verification, the cybersecurity system may conduct cyberthreat analytics on the object, namely analyses conducted on the object and/or context information associated with the object. The context information may include meta-information associated with the object (object context), meta-information associated with the subscription (entitlement context), and/or meta-information associated with the submission (submission context). As illustrative examples, as described below, the “submission context” may include meta-information pertaining to the submission, such as the time of input, origin of the object included in the submission (e.g., from email, network cloud shared drive, network transmission medium, etc.), location of the subscriber's network device providing the object, or the like. The “entitlement context” may include meta-information pertaining to the subscription selected by the subscriber, such as information directed to what features are permitted by the subscription (e.g., types of analytics supported, reporting formats available, or other features that may distinguish different subscription tiers). Lastly, the “object context” may include meta-information pertaining to the object, such as its extension type.
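As a non-authoritative illustration, the three context categories might be carried alongside the object as a simple structured record; the class and field names below are assumed for the sketch.

```python
from dataclasses import dataclass

@dataclass
class SubmissionContext:      # meta-information pertaining to the submission
    time_of_input: str
    origin: str               # e.g., "email", "network cloud shared drive"
    device_location: str

@dataclass
class EntitlementContext:     # meta-information pertaining to the subscription
    analytics_supported: list
    report_formats: list

@dataclass
class ObjectContext:          # meta-information pertaining to the object
    extension_type: str

@dataclass
class DataSample:             # the object plus its accompanying context information
    obj: bytes
    submission: SubmissionContext
    entitlement: EntitlementContext
    object_ctx: ObjectContext
```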
Herein, according to one embodiment of the disclosure, the analytic engines may be selected based, at least in part, on the submission context, entitlement context and/or the object context. As a result, the analytic engines may be selected as any single type or any combination of two or more types of the following analytic engines: (i) static analytic engines that conduct an analysis on the content of an object and generate results including observed features represented by characteristics of the object (and accompanying context information); (ii) dynamic analytic engines that conduct an execution of the object and generate results including features represented by observed behaviors of the analytic engine (and accompanying context information); (iii) machine learning analytic engines that conduct extraction of insights from the submitted object and context information using a trained model and generate results including features represented by a probability of an object being malicious (and accompanying context information); and/or (iv) emulation analytic engines that conduct reproduction of operations representing the execution of the object without such execution and generate results including features represented by the behaviors observed during emulation (and accompanying context information).
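One way the engine selection might be expressed is as a rule over the three context categories, as sketched below; the specific routing rules (e.g., sending executables to dynamic analysis) are invented for illustration and do not represent the claimed selection policy.

```python
def select_engines(object_ctx: dict, entitlement_ctx: dict, submission_ctx: dict) -> set:
    """Return the set of analytic engine types to run for one data sample (illustrative)."""
    engines = {"static"}                                   # static analysis as a baseline

    ext = object_ctx.get("extension_type", "")
    if ext in (".exe", ".dll", ".js") and "dynamic" in entitlement_ctx.get("analytics_supported", []):
        engines.add("dynamic")                             # execute in an instrumented environment
    if ext in (".docx", ".pdf"):
        engines.add("emulation")                           # reproduce execution without executing
    if "machine_learning" in entitlement_ctx.get("analytics_supported", []):
        engines.add("machine_learning")                    # model-based probability of maliciousness

    # Example of a submission-context rule: email-borne objects also get dynamic analysis.
    if submission_ctx.get("origin") == "email":
        engines.add("dynamic")
    return engines
```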
Thereafter, the generated results (features) produced by the cyberthreat analytics conducted on the object (and its context information) are correlated with features of known malicious objects and/or known benign objects to determine a threat verdict for the object (e.g., malicious/benign, good/bad, high-risk/low-risk, or any other measurement to signify the likelihood of the object being malicious or non-malicious). Based on the assigned threat verdict, the cybersecurity system may be further configured to conduct post-processing analytics based, at least in part, on the correlated results in order to determine what additional operations, if any, are to be conducted on the object. These operations may include retention of a portion of the context information associated with an identified malicious or benign object within the cybersecurity intelligence used by the cybersecurity system, transmission of the object to a forensic team for subsequent analysis, or the like.
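A highly simplified sketch of the correlation step, assuming a feature-overlap scoring rule and thresholds chosen only for illustration:

```python
KNOWN_MALICIOUS_FEATURES = {"self_deleting", "registry_persistence", "obfuscated_macro"}
KNOWN_BENIGN_FEATURES    = {"signed_by_trusted_ca", "no_network_activity"}

def correlate(observed_features: set) -> str:
    """Assign a threat verdict by correlating observed features with known feature sets."""
    malicious_hits = len(observed_features & KNOWN_MALICIOUS_FEATURES)
    benign_hits    = len(observed_features & KNOWN_BENIGN_FEATURES)
    score = malicious_hits - benign_hits
    if score >= 2:
        return "malicious"
    if score <= -1:
        return "benign"
    return "suspicious"          # handed to post-processing for further handling

print(correlate({"registry_persistence", "self_deleting"}))   # -> "malicious"
```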
In addition to conducting cyberthreat analytics, the cybersecurity system is configured to monitor and maintain, on a per subscriber basis, SaaS metrics. The SaaS metrics may include, inter alia, a sum total of data sample submissions made by a subscriber to the cybersecurity system (SaaS subscriber) during a selected time period and/or a sum total of active virtual keys currently issued to the SaaS subscriber. The SaaS metrics may be used for billing of the subscriber based on the number of data sample submissions made during a selected time period, and in some cases, to ensure compliance with subscription entitlements.
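Per-subscriber SaaS metrics of this kind might be kept as simple counters keyed by subscription and billing period, as in this illustrative (assumed) sketch:

```python
from collections import defaultdict
from datetime import date

class SaaSMetrics:
    """Track, per subscriber, data sample submissions and active virtual keys."""
    def __init__(self):
        self.submissions = defaultdict(int)    # (subscription_id, period) -> count
        self.active_keys = defaultdict(set)    # subscription_id -> set of virtual keys

    def record_submission(self, subscription_id: str) -> None:
        period = date.today().strftime("%Y-%m")
        self.submissions[(subscription_id, period)] += 1

    def submissions_this_period(self, subscription_id: str) -> int:
        period = date.today().strftime("%Y-%m")
        return self.submissions[(subscription_id, period)]
```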
B. Architecture
Herein, the cybersecurity system includes an architecture that relies upon the public cloud infrastructure resources and monitors the usage of various services (e.g., data sample submissions, virtual key issuances, etc.) to ensure compliance with subscription entitlements as well as for reporting and billing purposes. According to one embodiment of the disclosure, the cybersecurity system operates as a multi-tenant, subscription-based SaaS, which leverages resources, such as compute and storage resources, hosted by an IaaS cloud platform, although other deployments are available and pertain to the broader spirit and scope of the invention. The cybersecurity system features (i) interface logic, (ii) administrative control logic, (iii) multi-stage, object evaluation logic, and (iv) reporting logic.
The interface logic enables communications to the administrative control logic to validate a submission, authenticate a subscriber associated with the submission, and verify that the subscriber is authorized to perform one or more tasks associated with the submission. Depending on the submission type, upon submission validation, subscriber authentication and task verification, the interface logic enables the return of data requested by the submission to the subscriber or routes at least a portion of the submission to the object evaluation logic. For example, as an illustrative embodiment, the interface logic may include a cybersecurity portal that allows any user (potential subscriber) to register and establish a subscription with the cybersecurity system. After the subscription is established, the user (referred to as the “subscriber”) may receive credentials to allow for the submission of objects (in the form of data samples including the object and its context information) uploaded via the cybersecurity portal for cyberthreat analytics, submission of queries for certain subscriber-based metrics, or submission of parameters for customizing functionality of the object evaluation logic to suit the subscriber's needs.
Additionally, after the subscription is established, the interface logic may be provided with an additional interface (hereinafter, “device interface”). The device interface includes logic supporting one or more APIs, where access to the APIs may depend on the subscription entitlements. The APIs may include a first API for the submission of objects (data samples including the object and its context information) for cyberthreat analytics, a second API for subscription management (e.g., ascertain the subscriber-based metrics), and a third API for management and/or customization of the functionality of analytic engines operating within the object evaluation logic.
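A framework-free sketch of how the three APIs might be dispatched behind the device interface; the route paths, handler names and return payloads are assumptions made for illustration only.

```python
# Hypothetical dispatcher mapping the three device-interface APIs to handlers.
def submit_data_sample(body):   return {"status": "queued", "sample_id": "s-0001"}
def query_subscription(body):   return {"submissions_used": 42, "submissions_remaining": 958}
def update_engine_config(body): return {"status": "parameters updated"}

API_ROUTES = {
    "/api/v1/samples":       submit_data_sample,    # first API: object submission
    "/api/v1/subscription":  query_subscription,    # second API: subscription management
    "/api/v1/configuration": update_engine_config,  # third API: analytic engine customization
}

def dispatch(path: str, body: dict) -> dict:
    handler = API_ROUTES.get(path)
    return handler(body) if handler else {"error": "unknown endpoint"}

print(dispatch("/api/v1/subscription", {}))
```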
The administrative control logic includes a subscription management module, a subscriber accounts data store, a credential (key) management module, a consumption quota monitoring module, a configuration management module, a system health assessment module, an auto-scaling module, and a subscription billing module. The subscriber accounts data store may be non-volatile, cloud-based storage hosted by the public cloud that is allocated to the IaaS subscriber (e.g., the cybersecurity vendor), where different portions of the subscriber accounts data store may be allocated to each SaaS subscriber. Therefore, each SaaS subscriber is allocated one or more virtual data stores that are secured and inaccessible by other SaaS subscribers. Others of the above-identified modules may be shared by the SaaS subscribers, where these modules are maintained with cloud-based storage hosted by the public cloud and operate based on execution of these modules by compute engines hosted by the public cloud.
The subscription management module is configured to control access to the cybersecurity system by controlling a subscriber onboarding process in which user information and financial information are acquired prior to selection, by the user, of a particular subscription tier. The subscription tiers may be allocated based on data sample submission thresholds over a prescribed period of time, a desired number of submission sources (e.g., number of persons or network devices to be provided with a virtual key for subscriber authentication), or the like. Based on the chosen subscription tier, a subscription identifier (hereinafter, “Subscription ID”) may be assigned to a subscription secured by the subscriber and stored within a particular portion of the subscriber accounts data store reserved for that subscriber, given that certain subscribers (e.g., large enterprises) may acquire multiple subscriptions and identification of a particular subscription associated with the submission may be necessary.
According to one embodiment of the disclosure, the subscriber accounts data store may be configured as (i) one or more virtual data stores each maintaining a record of the account data for a particular subscriber, (ii) one or more virtual data stores maintaining a collection of references (e.g., links, etc.) each directed to a different portion of cloud-based storage maintained in the aggregate for the IaaS subscriber (cybersecurity vendor), but allocated separately by the cybersecurity system to different SaaS subscribers to include account data, or (iii) a combination thereof (e.g., storage of credentials and/or personally identifiable information within the virtual data store(s) along with references to a remainder of the account data maintained at different virtual data stores).
Herein, according to one embodiment of the disclosure, subscriber account data may include any information (or meta-information) that may be used to identify the subscriber, provide subscription status, authenticate a subscriber based on credentials (e.g., tokens, keys or representatives thereof), identify certain entitlements to be provided to the data sample and other entitlements associated with the subscription to which compliance is required prior to the cybersecurity system completing a task requested by the submission, or the like. Hence, the subscriber account data may include a Subscription ID and information associated with the subscriber (e.g., contact information, financial information, location, etc.); subscription entitlements (e.g., subscription parameters such as data sample submission threshold, virtual key allocation threshold, additional enrichments based on the particular subscription directed to additional analytic capabilities made available to data samples from the particular subscriber, additional report formatting, etc.). Additionally, the subscriber account data may further maintain metrics pertaining to the subscription (e.g., SaaS metrics and/or IaaS metrics, etc.).
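For illustration only, a subscriber account record of the kind enumerated above might be laid out as follows; the field names and example values are assumed rather than taken from the disclosure.

```python
# Hypothetical subscriber account record (illustrative field names and values).
subscriber_account = {
    "subscription_id": "sub-001",
    "subscriber_info": {"contact": "admin@example.com", "location": "US"},
    "entitlements": {
        "data_sample_submission_threshold": 1000,
        "virtual_key_allocation_threshold": 25,
        "enrichments": ["memory_dump", "enhanced_reporting"],
    },
    "metrics": {"saas": {"submissions_this_period": 42, "active_virtual_keys": 7},
                "iaas": {"compute_seconds": 1800, "storage_gb": 12}},
}
```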
Within an embodiment of the administrative control logic, the credential (key) management module is deployed to control credential generation and subscriber authentication. In particular, upon establishing a subscription, the credential management module is notified to generate a first credential (referred to as a “master key”) assigned to a subscriber associated with the subscription. The master key may be maintained as part of the subscriber account data, but it is not freely accessible to the subscriber. Instead, the master key may operate as a basis (e.g., seed keying material) used by the credential management module to generate second credentials (each referred to as a “virtual key”). In particular, according to one embodiment of the disclosure, each virtual key may be based, at least in part, on the contents of the master key. One or more virtual keys may be generated and returned to the subscriber in response to a key generation request submission, provided a sum total of the number of requested virtual keys and the number of active virtual keys does not exceed the subscription entitlements. A virtual key is included as part of a submission (e.g., data sample submission, consumption quota submission, parameter adjustment submission, etc.) to authenticate the subscriber and verify that the subscriber is authorized to perform the task associated with that submission. The virtual keys allow for tracking of usage of the cybersecurity system by different subscriber members (e.g., individuals, groups, departments, subsidiaries, etc.) as well as administrative control over access to the cybersecurity system, given that the virtual keys may be disabled, assigned prescribed periods of activity, or the like.
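One plausible (assumed) realization of virtual keys derived from a non-exported master key is an HMAC-based construction that also enforces the virtual key allocation threshold; the disclosure does not mandate any particular derivation, so the sketch below is illustrative only.

```python
import hashlib
import hmac
import secrets

class CredentialManager:
    """Issue virtual keys derived from a subscription's master key (never exported)."""
    def __init__(self, key_allocation_threshold: int):
        self._master_key = secrets.token_bytes(32)         # seed keying material
        self._active_virtual_keys = set()
        self._threshold = key_allocation_threshold

    def issue_virtual_keys(self, count: int) -> list:
        # Requested keys plus currently active keys must not exceed the entitlement.
        if len(self._active_virtual_keys) + count > self._threshold:
            raise PermissionError("virtual key allocation threshold exceeded")
        keys = []
        for _ in range(count):
            nonce = secrets.token_hex(8)
            vk = hmac.new(self._master_key, nonce.encode(), hashlib.sha256).hexdigest()
            self._active_virtual_keys.add(vk)
            keys.append(vk)
        return keys

    def authenticate(self, virtual_key: str) -> bool:
        return virtual_key in self._active_virtual_keys

    def revoke(self, virtual_key: str) -> None:             # administrative control over access
        self._active_virtual_keys.discard(virtual_key)

manager = CredentialManager(key_allocation_threshold=25)
keys = manager.issue_virtual_keys(2)
print(manager.authenticate(keys[0]))                         # -> True
```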
For this embodiment of the administrative control logic, the consumption quota monitoring module may be accessed via the second API (or cybersecurity portal) to enable the subscriber to obtain metrics associated with the current state of the subscription (e.g., active status, number of submissions for a particular submission type (or in total) conducted during the subscription period, number of submissions remaining for the subscription period, etc.). Additionally, the consumption quota monitoring module may be accessed by the credential management module in order to confirm an incoming submission does not exceed the data sample submission threshold. This reliance may occur if the credential management module is permitted access to the credential information (e.g., master key, virtual keys, etc.) of the subscriber account data.
The configuration management module is configured to enable a subscriber, via the third API (or cybersecurity portal), to specify parameters that control operability of the cyberthreat analytics. For instance, prior to controlling such operability, the credential management module, upon receipt of a parameter adjustment submission, may extract a virtual key included as part of the submission to authenticate the subscriber and verify that the subscriber is authorized to perform this task (parameter adjustment). Thereafter, contents of the parameter adjustment submission are routed to the configuration management module, which may alter stored parameters that may influence workflow, such as (i) operations of an analytic engine selection module deployed within the object evaluation logic of the cybersecurity system for selection of analytic engines, (ii) operations of the analytic engines, (iii) operations of the correlation module, and/or (iv) operations of the post-processing module.
Having no visibility to a SaaS subscriber, the system health assessment module and the auto-scaling module are in communication with the object evaluation logic. In particular, the system health assessment module is configured to communicate with analytic engines, which are computing instances hosted by the cloud network that are configured to conduct cyberthreat analytics on the submitted objects. Based on these communications along with additional abilities to monitor queue storage levels and other public cloud infrastructure resources, the system health assessment module may be configured to ascertain the health of cloud-based processing resources (e.g., operating state, capacity level, etc.) to surmise the overall health of the cybersecurity system. The auto-scaling module is configured to (i) add additional analytic engines, as permitted by the subscription, in response to a prescribed increase in queued data samples awaiting cyberthreat analytics and/or (ii) terminate one or more analytic engines in response to a decrease in queued data samples awaiting cyberthreat analytics. The increase and/or decrease may be measured based on the number of objects, rate of change in the increase or decrease, etc. Alternatively, the auto-scaling module may be configured to monitor available queue capacity, where a decrease in available queue capacity denotes increased data samples awaiting analytics and potential addition of analytic engines and an increase in available queue capacity denotes decreased data samples awaiting analytics and potential termination of analytic engine(s).
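The scale-up/scale-down decision described above can be reduced, for illustration, to a threshold comparison on queue depth; the thresholds, step size and bounds below are assumed values, not figures from the disclosure.

```python
def autoscale(queued_samples: int, active_engines: int,
              scale_up_at: int = 100, scale_down_at: int = 10,
              min_engines: int = 1, max_engines: int = 50) -> int:
    """Return the desired number of analytic engines given the current queue depth."""
    if queued_samples > scale_up_at and active_engines < max_engines:
        return active_engines + 1          # launch one more computing instance
    if queued_samples < scale_down_at and active_engines > min_engines:
        return active_engines - 1          # terminate an idle analytic engine
    return active_engines                  # no change

print(autoscale(queued_samples=150, active_engines=4))   # -> 5
```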
The subscription billing module is configured to confirm that the subscription parameters have not been exceeded (to denote additional billing) for a time-based, flat-fee subscription (e.g., yearly, monthly, weekly or daily). Alternatively, for a pay-as-you-go subscription, the subscription billing module may be configured to maintain an account of the number of submissions (e.g., data sample submissions) over a prescribed period of time and generate a request for payment from the SaaS subscriber accordingly. Additionally, the subscription billing module may be operable to identify other paid cloud-based services utilized by the SaaS-subscriber for inclusion as part of the payment request. According to one embodiment, the subscription billing module may access the subscriber account data for the requisite information.
According to this embodiment of the disclosure, the object evaluation logic may be separated into multiple evaluation stages, where each evaluation stage is provided access to a queue that features a plurality of queue elements each storing content (object, context information, etc.) associated with a submitted data sample. For this distributed queue architecture, each “stage” queue is provided access to (or receives) content associated with a data sample evaluated in the preceding evaluation stage. Herein, the object evaluation logic includes a preliminary analytic module (within a first evaluation stage), an analytic engine selection module (within a second evaluation stage), a cyberthreat analytic module (within a third evaluation stage), a correlation module (within a fourth evaluation stage) and a post-processing module (within a fifth evaluation stage).
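A simplified sketch of the five-stage, queue-per-stage workflow; the module bodies are stubs and the stage outputs are placeholders, included only to show how content flows from one stage queue to the next.

```python
from queue import Queue

# One queue per evaluation stage; each stage consumes from its own queue and
# places its output on the next stage's queue.
stage_queues = {i: Queue() for i in range(1, 6)}

def preliminary_analysis(sample):   sample["prelim"] = "suspicious"; return sample                 # stage 1
def engine_selection(sample):       sample["engines"] = ["static", "dynamic"]; return sample       # stage 2
def cyberthreat_analytics(sample):  sample["features"] = {"registry_persistence"}; return sample   # stage 3
def correlation(sample):            sample["verdict"] = "malicious"; return sample                 # stage 4
def post_processing(sample):        return sample                                                  # stage 5

PIPELINE = [preliminary_analysis, engine_selection, cyberthreat_analytics,
            correlation, post_processing]

def run(sample: dict) -> dict:
    stage_queues[1].put(sample)
    for stage, module in enumerate(PIPELINE, start=1):
        item = stage_queues[stage].get()
        result = module(item)
        if stage < 5:
            stage_queues[stage + 1].put(result)
    return result

print(run({"object": b"...", "context": {}})["verdict"])   # -> malicious
```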
Herein, operating as part of the first evaluation stage, the preliminary analytic module may be configured to conduct one or more preliminary analyses on content within the data sample, which includes the object and/or the context information accompanying the object, in comparison with content associated with accessible cybersecurity intelligence. The cybersecurity intelligence may include context information associated with known malicious objects and known benign objects gathered from prior analytics conducted by the cybersecurity system as well as cybersecurity intelligence from sources external to the cybersecurity system.
Based on analysis of the context information, upon classifying the object as suspicious, the analytic engine selection module is provided access to the object and/or the context information as additional cyberthreat analytics are necessary. Otherwise, responsive to the preliminary analyses determining that the object is malicious or benign, the preliminary analytic module may bypass further cyberthreat analyses of the object.
Operating as part of the second evaluation stage, the analytic engine selection module is configured to determine one or more analytic engines to conduct cyberthreat analytics of the object. This determination may be based, at least in part, on the context information accompanying the object. The context information may be categorized as submission context, entitlement context, and/or object context as described below. The analytic engine selection module may select the type of analytic engines (e.g., static analytic engine(s), dynamic analytic engine(s), machine-learning engine(s), and/or emulation analytic engine(s)) based on the context information.
Operating as part of the third evaluation stage, the cyberthreat analytic module includes one or more analytic engines that are directed to different analysis approaches in analyzing an object for malware (and whether it constitutes a cyberthreat). These analytic engines may include any one or combination of the following: (i) static analytic engines; (ii) dynamic analytic engines; (iii) machine learning analytic engines; and/or (iv) emulation analytic engines.
As described herein, the static analytic engines conduct an analysis on the content of the object and generate results including observed features represented by characteristics of the object and context information associated with the object. The context information provides additional information associated with the features (e.g., specific characteristic deemed malicious, location of that characteristic within the object, or the like). The dynamic analytic engines conduct an execution of the object and each generates results including features represented by observed behaviors of the dynamic analytic engine along with context information accompanying the observed features (e.g., software profile, process or thread being executed that generates the malicious features, source object type, etc.). Similarly, machine learning analytic engines submit the object as input into a trained machine-learning model, each generating results including features represented by insights derived from the machine-learning model and accompanying context information, which may be similar to the type of context information provided with dynamic analytic results, perhaps along with additional contextual observations learned from objects similar to the object. Lastly, emulation analytic engines conduct reproduction of operations representing the execution of the object, without such execution, which generates results including features represented by behaviors monitored during emulation and its accompanying context information.
According to one embodiment of the disclosure, each analytic engine may feature an analytic engine infrastructure, which includes a health assessment module, a configuration module, an update module, a task processing module and a result processing module. Herein, the health assessment module is configured to determine the operational health of the analytic engine, which may be represented, at least in part, by its utilization level. The configuration module controls the re-configuration of certain functionality of the analytic engine. The update module is configured to receive and control installation of rule changes affecting operability of the task processing module and the result processing module and changes to software profiles (or guest images) to re-configure operability of the analytic engine. The task processing module is further configured to monitor queue elements of the queue that maintain the objects (or data samples) awaiting cyberthreat analytics (i.e., third stage queue) and perhaps queues for the first and/or second evaluation stages to estimate future processing capacity needed. Lastly, the result processing module is responsible for queue management by removing a pending object (or data sample) from the third stage queue and moving the data sample for storage in a fourth stage queue accessible to the correlation module.
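The per-engine infrastructure modules might be organized as methods on an engine class, as in the following sketch; the method names, the utilization figure and the placeholder analysis are assumptions made for illustration.

```python
from queue import Queue

class AnalyticEngine:
    """Skeleton of an analytic engine's infrastructure modules (illustrative only)."""
    def __init__(self, stage3_queue: Queue, stage4_queue: Queue):
        self.stage3_queue = stage3_queue   # data samples awaiting cyberthreat analytics
        self.stage4_queue = stage4_queue   # results handed to the correlation module
        self.rules_version = "1.0"

    def health(self) -> dict:              # health assessment module
        return {"utilization": 0.35, "state": "operational"}

    def update_rules(self, version: str):  # update module
        self.rules_version = version

    def process_next(self):                # task processing + result processing modules
        if self.stage3_queue.empty():
            return None
        sample = self.stage3_queue.get()   # remove the pending data sample from the third stage queue
        sample["features"] = self.analyze(sample)
        self.stage4_queue.put(sample)      # move the result to the fourth stage queue
        return sample

    def analyze(self, sample) -> set:
        return {"example_feature"}         # placeholder for engine-specific analytics

q3, q4 = Queue(), Queue()
q3.put({"object": b"sample"})
AnalyticEngine(q3, q4).process_next()
```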
Operating as part of the fourth evaluation stage, a correlation module is configured to classify the object included as part of the data sample as malicious, benign, unknown or suspicious based on the above-identified features collected from the analytic results produced by the analytic engines and their accompanying context information. This classification of the object (sometimes referred to as the “verdict”) is provided to the post-processing module that is part of the fifth evaluation stage.
Depending on the verdict, the post-processing module may initiate actions to remediate a detected cyberthreat (object). Additionally, or in the alternative, the post-processing module may add certain context information associated with the object to the cybersecurity intelligence utilized by the preliminary analytic module in accordance with a prescribed retention policy maintained by the post-processing module.
The reporting logic is configured to generate a displayable report including the comprehensive results of the cyberthreat analytics (e.g., verdict, observed features and any corresponding meta-information representing the results associated with the cyberthreat analytics, context information associated with the observed features that identify the analyses conducted to produce the observed features, circumstances surrounding the features when observed, etc.). Accessible via the cybersecurity portal, the displayable report may be provided as an interactive screen or series of screens that allow a security administrator (corresponding to a representative of the SaaS-subscriber) to view results of data sample submissions in the aggregate and “drill-down” as to specifics associated with one of the objects uploaded to the cybersecurity system within a data sample submission. The reporting logic may rely on the Subscription ID or a virtual key, which may be part of the data sample submitted to the object evaluation logic, to identify the subscriber and determine a preferred method for conveyance of the alert (and set access controls to preclude access to contents of the alert by other SaaS-subscribers). Additionally, or in the alternative, the reporting logic may generate an alert based on the comprehensive results of the cyberthreat analytics. The alert may be in the form of a message (e.g., “threat warning” text or other electronic message).
II. Terminology
In the following description, certain terminology is used to describe aspects of the invention. In certain situations, the terms “logic,” “module,” and “engine” are representative of hardware, firmware, and/or software that is configured to perform one or more functions. As hardware, the logic (or module or engine) may include circuitry having data processing and/or storage functionality. Examples of such circuitry may include, but are not limited or restricted to a hardware processor, a programmable gate array, a microcontroller, an application specific integrated circuit, wireless receiver, transmitter and/or transceiver circuitry, semiconductor memory, or combinatorial logic.
Alternatively, or in combination with the hardware circuitry described above, the logic (or module or engine) may be software in the form of one or more software modules, which may be configured to operate as its counterpart circuitry. For instance, a software module may be a software instance that operates as a processor, namely a virtual processor whose underlying operations are based on a physical processor such as an EC2 instance within the Amazon® AWS infrastructure for example. Additionally, a software module may include an executable application, a daemon application, an application programming interface (API), a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, a shared library/dynamic load library, or even one or more instructions.
The software module(s) may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical or other form of propagated signals such as carrier waves, infrared signals, or digital signals). Examples of non-transitory storage medium may include, but are not limited or restricted to a programmable circuit; a semiconductor memory; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or a portable memory device. As firmware, the logic (or module or engine) may be stored in persistent storage.
The term “computerized” generally represents that any corresponding operations are conducted by hardware in combination with software and/or firmware.
The term “malware” is directed to software that produces an undesirable behavior upon execution, where the behavior is deemed to be “undesirable” based on customer-specific rules, manufacturer-based rules, or any other type of rules formulated by public opinion or a particular governmental or commercial entity. This undesired behavior may include a communication-based anomaly or an execution-based anomaly that (1) alters the functionality of an electronic device executing that software in a malicious manner; (2) alters the functionality of an electronic device executing that software without any malicious intent; and/or (3) provides an unwanted functionality which is generally acceptable in other contexts.
The term “network device” should be generally construed as a physical or virtualized device with data processing capability and/or a capability of connecting to a network, such as a public cloud network (e.g., Amazon Web Service (AWS®), Microsoft Azure®, Google Cloud®, etc.), a private cloud network, or any other network type. The network devices may be used by a security operations center (SOC), a Security Information and Event Management system (SIEM), a network administrator, a forensic analyst, or a cybersecurity system for another security provider for communication with an interface (e.g., cybersecurity portal) to access a SaaS-operating cybersecurity system. Examples of a network device may include, but are not limited or restricted to, the following: a server, a router or other intermediary communication device, an endpoint (e.g., a laptop, a smartphone, a tablet, a desktop computer, a netbook, etc.) or virtualized devices being software with the functionality of the network device. The network device may also be deployed as part of any physical or virtualized device communicatively coupled via a device interface (e.g., API(s)) for gaining access to the SaaS-operating cybersecurity system.
The term “submission” refers to a type of message (prescribed, structured data format) that is intended to result in a particular task being performed. The tasks may include object-based analytics (data sample submissions), return of requested information (consumption quota submissions), parameter updates that may influence operations associated with the cyberthreat analytics (parameter adjustment submissions), or the like. With respect to data sample submissions, the submission may include a data sample, namely an organized collection of data including one or more objects and context information at least pertaining to the object(s). An “object” generally refers to a collection of information (e.g., file, document, URL, web content, email message, etc.) that may be extracted from the data sample for cyberthreat analytics.
As described herein, the cybersecurity system may be deployed to operate as a subscription-based Security-as-a-Service (SaaS) that utilizes public cloud infrastructure resources, such as virtual computing, virtual data stores, and virtual (cloud) database resources, for example, provided by an Infrastructure-as-a-Service (IaaS) cloud platform. The cybersecurity system may be configured to operate as a multi-tenant service; namely a service made available to tenants (also referred to as “subscribers”) on demand. The IaaS cloud platform may be configured to operate as a multi-tenant service to which a cybersecurity vendor offering the cybersecurity system corresponds to an IaaS-subscriber. Therefore, the cybersecurity system may leverage resources offered by the IaaS cloud platform to support operations conducted by SaaS-subscribers.
The terms “benign,” “suspicious” and “malicious” are used to identify different likelihoods of an object being associated with a cyberattack (i.e., constituting a cyberthreat). An object may be classified as “benign” upon determining that the likelihood of the object being associated with a cyberattack is zero or falls below a first threshold (i.e. falls within a first likelihood range). The object may be classified as “malicious” upon determining that the likelihood of the object being associated with a cyberattack is greater than a second threshold extending from a substantial likelihood to absolute certainty (i.e. falls within a third likelihood range). The object may be classified as “suspicious” upon determining that the likelihood of the object being associated with a cyberattack falls between the first threshold and the second threshold (i.e. falls within a second likelihood range). Different embodiments may use different measures of the likelihood of maliciousness and non-maliciousness, and these measures may be referenced differently. Therefore, this terminology is merely used to identify different levels of maliciousness.
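For illustration, the three likelihood ranges can be captured in a small classification function; the threshold values are arbitrary stand-ins rather than values taken from the disclosure.

```python
def classify(likelihood: float,
             first_threshold: float = 0.2,
             second_threshold: float = 0.8) -> str:
    """Map a likelihood of maliciousness (0.0-1.0) onto the three classifications."""
    if likelihood < first_threshold:
        return "benign"        # first likelihood range
    if likelihood > second_threshold:
        return "malicious"     # third likelihood range
    return "suspicious"        # second likelihood range, between the two thresholds

assert classify(0.05) == "benign" and classify(0.5) == "suspicious" and classify(0.95) == "malicious"
```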
In certain instances, the terms “compare,” “comparing,” “comparison,” or other tenses thereof generally mean determining if a match (e.g., identical or a prescribed level of correlation) is achieved between two items under analysis (e.g., context information, portions of objects, etc.) or representations of the two items (e.g., hash values, checksums, etc.).
The term “transmission medium” generally refers to a physical or logical communication link (or path) between two or more network devices. For instance, as a physical communication path, wired and/or wireless interconnects in the form of electrical wiring, optical fiber, cable, bus trace, or a wireless channel using infrared, radio frequency (RF), may be used.
Finally, the terms “or” and “and/or” as used herein are to be interpreted as inclusive or meaning any one or any combination. As an example, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.
As this invention is susceptible to embodiments of many different forms, it is intended that the present disclosure is to be considered as an example of the principles of the invention and not intended to limit the invention to the specific embodiments shown and described.
III. Cybersecurity System Architecture
Referring to FIG. 1A, a block diagram of an exemplary embodiment of a cybersecurity system 100 operating as a service supported by resources hosted by a cloud platform 110 (e.g., infrastructure provided by Microsoft Azure®, Amazon Web Services®, or Google Cloud®) is shown. According to this embodiment, the cybersecurity system 100 operates as a multi-tenant, Security-as-a-Service (SaaS), which is accessible by a plurality of tenants 120 1-120 N (N≥1) on demand (hereinafter, “subscribers” 120 1-120 N) over a transmission medium 130. Examples of subscribers 120 1-120 N may include enterprises (companies, partnerships, co-ops, governmental agencies or other agencies, etc.), individuals, or even other cybersecurity vendors that intend to utilize the cybersecurity system 100 to conduct additional analytics on objects submitted to the cybersecurity system 100 in order to obtain a verdict (e.g., malicious or non-malicious determination) for that object or verify a verdict ascertained by another cybersecurity vendor.
The SaaS-operating cybersecurity system 100 may operate in cooperation with the multi-tenant, cloud platform 110, which corresponds to an Infrastructure-as-a-Service (IaaS) cloud platform 110. Hence, multiple subscribers 120 1-120 N may be provided controlled access to cybersecurity services offered by the SaaS-operating cybersecurity system 100 while multiple users (e.g., two or more IaaS subscribers, including the SaaS-operating cybersecurity system 100 as shown and another IaaS subscriber 102), may be provided controlled access to shared resources hosted by the IaaS cloud platform 110 (hereinafter, “public cloud infrastructure resources 150”). For example, the SaaS 100 may deploy a vendor-specific proprietary software stack to run on the resources 150 (e.g., compute and storage resources) provided by the IaaS cloud platform 110. According to this embodiment, the SaaS-operating cybersecurity system 100 is controlled by a different entity than the IaaS cloud provider.
Based on the dual multi-tenant deployment, the SaaS-operating cybersecurity system 100 may be configured to charge usage of the SaaS in accordance with different parameters (and a different pricing scheme) than offered by the IaaS (public cloud). For example, the SaaS-operating cybersecurity system 100 may be configured with subscription tier pricing based on the number of submissions with objects provided to undergo cyberthreat analytics by the SaaS-operating cybersecurity system 100 (e.g., number of objects uploaded via a portal or other type of interface) or the number of objects processed (e.g., to account for objects included as part of one or more submissions and additional objects processed that were produced during the processing of another object).
This SaaS-IaaS deployment enables both the customer and cybersecurity vendor to avoid significant capital outlays in buying and operating physical servers and other datacenter infrastructure. Rather, the cybersecurity vendor incurs the costs associated with the actual use of certain public cloud infrastructure resources 150 in the aggregate, such as IaaS-based storage amounts or compute time for analytic engines formed from IaaS-based computing instances. The subscribers incur the costs associated with their actual number of submissions (e.g., data sample submissions described below) input into the SaaS-operating cybersecurity system 100.
Referring to FIG. 1B, a block diagram of an exemplary embodiment of the SaaS-operating cybersecurity system 100 leveraging the public cloud infrastructure resource 150 provided by the IaaS cloud platform (referred to as “public cloud”) 110 is shown. For this embodiment, the cybersecurity system 100 is configured to operate as a multi-tenant, subscription-based SaaS; namely, a cloud-based subscription service that utilizes storage and compute services hosted by the public cloud 110 and is available to the plurality of subscribers 120 1-120 N over the transmission medium 130 including a public network (e.g., Internet).
As shown, according to one embodiment of the disclosure, each subscriber (e.g., subscriber 120 1 . . . , or subscriber 120 N as shown) may include one or more network devices 125, where each of the network devices 125 may be permitted access to the cybersecurity system 100 if credentials submitted by that network device 125 are authenticated. According to one embodiment of the disclosure, the credential authentication may be conducted in accordance with a credential (key) authentication scheme in which a (virtual) key generated by the cybersecurity system 100 and provided to a subscriber (e.g., subscriber 120 N) is used to gain access to the cybersecurity system 100. Herein, the network devices 125 may be used by different sources, including but not limited or restricted to a security operations center (SOC), a Security Information and Event Management system (SIEM), a network administrator, a forensic analyst, a different cybersecurity vendor, or any other source seeking cybersecurity services offered by the cybersecurity system 100.
Herein, the cybersecurity system 100 is logic that leverages public cloud infrastructure resources 150. In particular, the logic associated with the cybersecurity system 100 may be stored within cloud-based storage resources (e.g., virtual data stores corresponding to a physical, non-transitory storage medium provided by the public cloud 110 such as Amazon® S3 storage instances, Amazon® Glacier or other AWS Storage Services). This stored logic is executed, at least in part, by cloud processing resources (e.g., one or more computing instances operating as virtual processors whose underlying operations are based on physical processors, such as EC2 instances within the Amazon® AWS infrastructure). As additional storage and/or processing capabilities are required, the cybersecurity system 100 may request and activate additional cloud processing resources 152 and cloud storage resources 154.
According to this embodiment of the disclosure, the cybersecurity system 100 is configured to receive and respond to messages 140 requesting one or more tasks to be conducted by the cybersecurity system 100 (hereinafter referred to as “submissions”). One of these submissions 140 may include a data sample 142, where the data sample submission 140 requests the cybersecurity system 100 to conduct analytics on an object 144 included as part of the data sample 142. Context information 146 pertaining to the object 144 may be included as part of the data sample 142 or part of the submission 140.
According to one embodiment of the disclosure, the context information 146 may include different context types such as context information 147 associated with the data sample submission 140 (submission context 147), context information 148 associated with entitlements associated with a subscription to which the submitting source belongs (entitlement context 148), and/or context information 149 associated with the object 144 (object context 149). The context information 146 is not limited to what accompanies the object 144 at the time of submission. Rather, the context information 146 may be modified (augmented) based on operations within the cybersecurity system 100, especially entitlement context 148 obtained from a subscriber's account. Herein, the context information 146 may be used to identify the subscriber 120 1 responsible for submitting the data sample 142.
As described above, the cybersecurity system 100 may leverage the public cloud infrastructure resources 150 hosted by the public cloud 110. As described above, the public cloud infrastructure resources 150 may include, but are not limited or restricted to cloud processing resources 152 (e.g., computing instances, etc.) and cloud storage resources 154 (e.g., virtual data stores operating as non-volatile or volatile storage such as a log, queues, etc.), which may be allocated for use among the subscribers 120 1-120 N. By leveraging the infrastructure of the public cloud 110, the cybersecurity system 100 is able to immediately “scale up” (add additional analytic engines, as permitted by the subscription) or “scale down” (terminate one or more analytic engines) its cloud resource usage when such usage exceeds or falls below certain monitored thresholds.
As an illustrative example, the cybersecurity system 100 may monitor capacity levels of virtual data stores operating as queues that provide temporary storage at certain stages during analytics of the object 144 (hereafter, “queue capacity”). The queue capacity may be determined through any number of metrics, such as the number of queued objects awaiting analytics, usage percentages of the queues, computed queue wait time per data sample, or the like. Hence, the cybersecurity system 100 may scale up its usage of any public cloud infrastructure resources 150, such as cloud processing resource 152 being customized to operate as analytic engines as described below, upon exceeding a first threshold, perhaps for a prolonged period of time to avoid throttling. Similarly, the cybersecurity system 100 may scale down its usage of the cloud processing resource 152 upon falling below a second threshold, perhaps for the prolonged period of time as well.
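The “exceeds a threshold . . . for a prolonged period of time” condition suggests a hysteresis-style check; the following sketch, with invented durations and usage thresholds, shows one assumed way such a check could avoid scaling on momentary spikes.

```python
import time

class SustainedThreshold:
    """Trigger a scaling action only when a queue-capacity metric stays past a
    threshold for a sustained interval (illustrative hysteresis sketch)."""
    def __init__(self, threshold: float, hold_seconds: float, above: bool = True):
        self.threshold = threshold
        self.hold_seconds = hold_seconds
        self.above = above             # True: trigger on exceeding; False: on falling below
        self._since = None

    def check(self, metric: float, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        breached = metric > self.threshold if self.above else metric < self.threshold
        if not breached:
            self._since = None         # reset when the metric returns to normal
            return False
        if self._since is None:
            self._since = now          # start timing the sustained breach
        return (now - self._since) >= self.hold_seconds

# Assumed policy: scale up when queue usage stays above 80% for 5 minutes,
# scale down when it stays below 20% for 5 minutes.
scale_up = SustainedThreshold(threshold=0.8, hold_seconds=300)
scale_down = SustainedThreshold(threshold=0.2, hold_seconds=300, above=False)
```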
Also, the cybersecurity system 100 may utilize the public cloud infrastructure resources 150 for supporting administrative tasks. As an illustrative example, the cybersecurity system 100 may be allocated cloud storage resources 154 for maintaining data for use in monitoring compliance by the subscribers 120 1-120 N with their subscription entitlements. The subscription entitlements may be represented as permissions such as (i) a maximum number of submissions over a prescribed period of time (e.g., subscription time period, yearly, monthly, weekly, daily, during certain hours, etc.), (ii) a maximum number of active virtual keys providing authorized access to the cybersecurity system 100, (iii) additional capabilities as provided by enhancements made available based on the selected subscriber tier, or the like.
The cybersecurity system 100 supports bidirectional communications with the subscribers 120 1-120 N in which one or more responses 160 to the submissions 140 are returned to the subscribers 120 1-120 N. For example, in response to the data sample submission 140 provided from a network device 125 1 of the first subscriber 120 1, the response 160 may correspond to a displayable report 160 including comprehensive results of cyberthreat analytics conducted on the object 144 and its accompanying context information 146. Examples of the comprehensive results may include a verdict, observed features and any corresponding meta-information representing the results associated with the cyberthreat analytics, and context information associated with the observed features (e.g., information that identifies the analyses conducted to produce the observed features, circumstances under which the features occurred, etc.). Additionally, or in the alternative, the response 160 may include one or more alert messages (hereinafter, “alert message(s)”). The alert message(s) may include a portion of the comprehensive results of cyberthreat analytics, such as the verdict and the name of the object 144.
Referring now to FIG. 2 , a block diagram of an exemplary embodiment of logic forming the cybersecurity system 100 of FIG. 1B is shown, wherein the logic relies upon the public cloud infrastructure resources 150 and monitors accesses to the cybersecurity system 100 for subscription compliance, billing and reporting. Herein, the cybersecurity system 100 features interface logic 200, administrative control logic 220, object evaluation logic 270, and reporting logic 290.
As shown, according to this embodiment of the disclosure, based on the type of submission, the interface logic 200 enables communications with different modules forming the administrative control logic 220. Upon validation of the submission 140, authentication of a subscriber (e.g., subscriber 120 N) providing the submission 140 and verification that the subscriber 120 N is authorized to perform the task or tasks associated with the submission 140, the task(s) associated with the submission 140 is(are) performed.
According to one embodiment of the disclosure, as shown in FIG. 2 , the interface logic 200 includes a cybersecurity portal 205 that allows any user (potential subscriber) to register and establish a subscription with the cybersecurity system 100. After the subscription is established, the user (referred to as the “subscriber”) may be provided with additional accessibility to the cybersecurity system 100 via the device interface 210 corresponding to logic supporting one or more APIs, where different combinations of APIs may be provided depending on the terms of the subscription. For example, where the submission 140 corresponds to a data sample submission, logic associated with an API of the device interface 210 may be configured to await the validation of the data sample submission 140, authentication of the subscriber 120 N submitting the data sample submission 140 and verification that the subscriber 120 N is authorized to submit at least the data sample 142 for cyberthreat analytics before routing the data sample 142 to the object evaluation logic 270. The device interface 210 supports automated network device 125 to cybersecurity system 100 communications. However, the cybersecurity portal 205 supports all submission types.
More specifically, according to one embodiment of the disclosure, as shown in FIG. 2 , the device interface 210, when deployed, includes a first API 212, a second API 214 and/or a third API 216. In particular, as an illustrative embodiment, the device interface 210 may include the first API 212 that provides an interface for the submission of the object 144 for cyberthreat analytics (in the form of the data sample submission 140 featuring the data sample 142, which may include the object 144 and/or its context information 146). The administrative control logic 220 is configured to validate the data sample submission 140, authenticate the subscriber 120 N submitting the data sample 142, verify that the submission of the data sample 142 is in compliance with parameters associated with the subscriber's subscription, and thereafter, provide at least a portion of the data sample 142 (e.g., object, context information) to the object evaluation logic 270 for analysis.
The second API 214 provides an interface for submissions directed to subscription management, such as ascertaining SaaS-based metrics associated with a current state of a subscription. These SaaS metrics may include an object submission quota (e.g., number of objects submitted during the subscription period, number of objects available for submission during the remainder of the subscription period, etc.). The third API 216 provides an interface for submissions of parameters and other information to a configuration management module 250 within the administrative control logic 220, enabling the subscriber 120 N, via the device interface 210, to specify parameters that control operability of the cyberthreat analytics.
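As a hedged illustration of how the three submission types could be dispatched to their respective handlers, consider the following sketch; the function and handler names are hypothetical and not part of the disclosure.

# Illustrative routing of submission types to administrative handlers (hypothetical names).
def route_submission(submission: dict, admin_control) -> object:
    kind = submission.get("type")
    if kind == "data_sample":        # first API 212: object plus context for cyberthreat analytics
        return admin_control.handle_data_sample(submission)
    if kind == "quota_request":      # second API 214: SaaS metrics for the current subscription state
        return admin_control.handle_quota_request(submission)
    if kind == "parameter_adjust":   # third API 216: parameters controlling analytic operability
        return admin_control.handle_parameter_adjustment(submission)
    raise ValueError(f"unsupported submission type: {kind!r}")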
Alternatively, the cybersecurity portal 205 features logic, namely the first logic 206, second logic 207 and third logic 208 of the cybersecurity portal 205, that correspond in operation to the first API 212, the second API 214 and the third API 216, respectively. These logic units support the handling of the submissions through the cybersecurity portal 205 in a manner similar to the APIs of the device interface 210, as described above.
Referring still to FIG. 2 , an embodiment of modules deployed within the administrative control logic 220 is shown. Herein, the administrative control logic 220 includes a plurality of modules that collectively operate to receive and validate the submission 140, authenticate the subscriber 120 N operating as the source of the submission 140, and verify that the subscriber 120 N is authorized to conduct the task associated with the submission 140. The verification may involve the credential (key) management module 235 confirming that the subscriber's subscription permits the handling of the task and that the SaaS metrics associated with the current state of the subscriber's subscription, and/or metrics of the current state of the submission (e.g., data sample submission threshold reached, etc.), do not preclude the handling of the task. The above-identified modules of the administrative control logic 220 may include, but are not limited or restricted to, a subscription management module 225, a subscriber accounts data store 230, the credential (key) management module 235, a consumption quota monitoring module 245, the configuration management module 250, a system health assessment module 255, an auto-scaling module 260, and a subscription billing module 265.
The subscription management module 225 is configured to control access, via the cybersecurity portal 205, to the cybersecurity system 100 by controlling the subscription onboarding process. Via the cybersecurity portal 205, during the onboarding process to register with and gain access to the cybersecurity system 100, the subscription management module 225 gathers subscriber information (e.g., name of company, business address, industry by sector, geographic location, representative contact information, etc.) and financial information associated with the subscriber (e.g., bank account information, credit card information, etc.). The subscription management module 225 further prompts the subscriber, for example subscriber 120 N, for selection of a particular subscription tier. Each subscription tier may provide different types and/or levels of entitlements (e.g., access privileges, subscription parameters such as data sample submission thresholds, virtual key allocation threshold, etc.), where the usage or allocation of such entitlements may be monitored.
For instance, as an illustrative example, the subscription tiers may be based on different data sample submission thresholds for a prescribed period of time (e.g., a first subscription tier with one million data sample submissions per year (up to 1M/year) at cost $X and a second “pay-as-you-go” subscription tier with unlimited data sample submissions but higher submission costs per sample, $X+$Y). Additionally, or in the alternative, the subscription tiers may be based on the numbers of credentials (e.g., keys, tokens, etc.) made available to the subscriber 120 N (e.g., prescribed number of active virtual keys allocated to the subscriber 120 N for subscriber/device authentication), or the like.
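A minimal sketch of such a tier catalog, assuming placeholder costs, might look as follows; the keys and values are illustrative only and do not reflect actual pricing.

# Hypothetical tier catalog mirroring the example tiers above (values illustrative only).
SUBSCRIPTION_TIERS = {
    "tier_1_flat": {
        "max_submissions_per_year": 1_000_000,  # up to 1M data sample submissions per year
        "cost_per_year": "X",                   # flat fee $X (placeholder)
        "cost_per_extra_submission": None,
    },
    "tier_2_pay_as_you_go": {
        "max_submissions_per_year": None,       # unlimited submissions
        "cost_per_year": None,
        "cost_per_extra_submission": "X+Y",     # higher per-sample cost (placeholder)
    },
}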
Additionally, the subscription management module 225 may assign the Subscription ID 227 to the subscriber 120 N. Herein, the Subscription ID 227 may be relied upon to assist in accessing account data associated with a particular subscription selected by the subscriber 120 N, which is maintained within the subscriber accounts data store 230.
The subscriber accounts data store 230 constitutes a data store that is configured to maintain a record of account data associated with each subscriber 120 1-120 N registered to access cybersecurity services provided by the cybersecurity system 100. According to one embodiment of the disclosure, the subscriber accounts data store 230 may be configured as (i) one or more virtual data stores (e.g., Amazon® S3 data stores) each maintaining a record of the account data for a particular subscriber and utilized in the aggregate by the IaaS subscriber (cybersecurity vendor), (ii) one or more virtual data stores maintaining a collection of references (e.g., links, etc.), each directed to a different portion of cloud-based storage including account data maintained by public cloud infrastructure resources such as cloud (Amazon®) database resources 156 of FIG. 1B, which is maintained in the aggregate for the IaaS subscriber (cybersecurity vendor), but allocated separately by the cybersecurity system 100 to different SaaS subscribers (e.g., subscribers 120 1-120 N), or (iii) a hybrid deployment where the storage of credentials and/or personal identifiable information may be included in the virtual data store(s) along with references to the remainder of the account data maintained by the cloud database resources 156.
The “account data” may include any information or meta-information (e.g., Subscription ID 227, credentials 240/242 such as tokens, keys or representatives thereof, metrics 232/234) that may be used to identify or authenticate its subscriber, provide subscription status or expiration date, and/or verify that a task associated with a submission may be handled by confirming compliance with entitlements provided by the subscriber-selected subscription tier. According to one embodiment of the disclosure, each subscriber account may be located using the Subscription ID 227 and/or credentials 242 (e.g., content (or a derivative thereof) may be used to identify a location in a virtual data store for account data associated with that subscriber) and is configured to include information associated with the subscriber and subscription entitlements (e.g., which APIs are accessible by that subscriber, maximum number of submissions during a selected time period, maximum number of issued virtual keys, etc.).
According to one embodiment of the disclosure, the subscriber accounts data store 230 may be configured to monitor and maintain, on a per subscriber basis, metrics including SaaS metrics 232 (representing at least some of the subscription entitlements) and IaaS metrics 234. The SaaS metrics 232 may include metrics that represent and maintain a sum total of submissions (e.g., a sum total of data sample submissions) made by the (SaaS) subscriber 120 N during a particular period of time (e.g., subscription time period), which may be accessed to confirm that the sum total falls below the maximum number of submissions to ensure compliance with the subscription entitlements, especially before an incoming data sample submission is provided to the object evaluation logic 270. The SaaS metrics 232 may further include metrics that represent and maintain a sum total of virtual keys currently issued to the SaaS subscriber 120 N. The SaaS metrics 232 may be used for billing of the subscriber 120 N based on the number of data sample submissions made during the particular period of time, and in some cases, to ensure compliance with subscription entitlements.
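For illustration only, a per-subscriber SaaS metrics record and the compliance check described above could be sketched as follows; the names are hypothetical.

# Illustrative per-subscriber SaaS metrics and a compliance check (hypothetical names).
from dataclasses import dataclass

@dataclass
class SaaSMetrics:
    submissions_this_period: int   # running total of data sample submissions
    active_virtual_keys: int       # virtual keys currently issued to the subscriber

def submission_permitted(metrics: SaaSMetrics, entitlements) -> bool:
    """Return True only while the running total stays below the entitlement ceiling."""
    return metrics.submissions_this_period < entitlements.max_submissions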
Besides subscriber-specific metrics, the SaaS metrics 232 may include aggregation metrics directed to all SaaS subscribers. For example, the SaaS metrics 232 may include an aggregate of the number of data sample submissions for all SaaS subscribers. This metric may be used to determine the profitability of the cybersecurity system 100 and whether the cost structure necessitates a change in submission pricing.
As an alternative (and optional) embodiment, the cybersecurity system 100 may be configured to monitor and maintain, on a per subscriber basis, IaaS metrics 234. The IaaS metrics 234 may include, inter alia, information that quantifies certain resource usage by the SaaS subscriber 120 N, which may be directed to subscription compliance or certain advanced features provided by the cybersecurity system (e.g., indicator of compromise “IOC” generation, use of forensic analysts, etc.) that may involve ancillary services hosted by the public cloud 110. For example, the IaaS metrics 234 may support subscription-based monitoring of public cloud infrastructure resources 150 (i.e., resources hosted by the public cloud network) to ensure compliance with certain subscription entitlements such as quality of service (QoS) thresholds influenced by the number of computing instances used by the subscriber concurrently (e.g., at least partially overlapping in time), a maximum amount of cloud-based storage memory allocated, or the like.
As further shown in FIG. 2 , the credential (key) management module 235 features a credential (key) generation module 236 configured to handle credential generation and a credential (key) authentication module 237 configured to handle subscriber authentication. In particular, upon notification from the subscription management module 225 that the subscription process for the subscriber 120 N has successfully completed, the key generation module 236 generates a first (primary) credential 240 (referred to as a “master key”) assigned to the subscriber 120 N associated with the subscription. According to one embodiment of the invention, the master key 240 may be maintained within a portion of the subscriber accounts data store 230 allocated to the subscriber 120 N, and it is not provided to the subscriber 120 N. Instead, the master key 240 may operate as a basis (e.g., seed keying material) used by the credential generation module 236 to generate one or more second credentials 242 (referred to as “virtual keys”). A virtual key 242 may be included as part of a submission (e.g., data sample, quota, parameter adjustment) and used by the credential management module 235 in authenticating the subscriber 120 N and confirming that the subscriber 120 N is authorized to perform a task associated with the submission accompanied by the virtual key 242.
In particular, after the subscription registration process has completed, the key management module 235 may receive a virtual key generation request from a subscriber (e.g., the subscriber 120 N). Upon receipt of the virtual key generation request, the key management module 235 confirms that the generation and release of the requested number of virtual keys is in compliance with the subscription entitlements (e.g., maximum number of issued (active) virtual keys available to the subscriber 120 N). If the generation of the virtual keys is in compliance with the subscription parameters, the key generation module 236 generates and returns requested virtual keys 242 to the subscriber 120 N. Additionally, as shown in FIG. 2 , the key management module 235 stores the generated virtual keys 242 within the subscriber accounts data store 230 as part of the account data for the subscriber 120 N.
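A minimal sketch of virtual key issuance is shown below, assuming an HMAC-based derivation; the disclosure states only that the master key serves as seed keying material and is never released to the subscriber, so the specific derivation is an assumption for illustration.

# Sketch of deriving virtual keys from a server-side master key.
# HMAC-SHA256 and the nonce scheme are assumptions, not disclosed mechanisms.
import hashlib
import hmac
import secrets

def derive_virtual_key(master_key: bytes, subscription_id: str) -> str:
    nonce = secrets.token_hex(16)  # uniqueness per issued virtual key
    digest = hmac.new(master_key, f"{subscription_id}:{nonce}".encode(), hashlib.sha256)
    return digest.hexdigest()

def issue_virtual_keys(master_key: bytes, subscription_id: str,
                       requested: int, active: int, max_active: int) -> list[str]:
    # Refuse issuance that would exceed the subscription's active-key ceiling.
    if active + requested > max_active:
        raise PermissionError("virtual key request exceeds subscription entitlements")
    return [derive_virtual_key(master_key, subscription_id) for _ in range(requested)]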
Furthermore, the key authentication module 237 is configured to authenticate the subscriber 120 N upon uploading of the submission 140 (e.g., data sample submission, quota submission, parameter adjustment submission) and confirm that the task associated with the submission 140 is in compliance with the subscription entitlements afforded to the subscriber 120 N. More specifically, when the data sample submission 140 (inclusive of one of the virtual keys 242 (represented as virtual key 242 N) along with an object selected for analysis, corresponding context information, and optionally the Subscription ID 227) is submitted to the cybersecurity system 100 via the interface logic 200 (e.g., first API 212 or optionally cybersecurity portal 205), content from the data sample submission 140 (e.g., object 144, portions of the context information 146, etc.) may be withheld from the key management module 235.
Using the virtual key 242 N (or the Subscription ID), the key management module 235 may determine a location of the account data associated with the subscriber 120 N within the subscriber accounts data store 230 to validate the virtual key 242 N, thereby authenticating the subscriber 120 N. Additionally, the key management module 235 may conduct an analysis of certain context information 146 provided with the data sample submission 140 to confirm, based on the subscription entitlements and the SaaS metrics 232 associated with data sample submissions, whether the data sample submission 140 may be submitted to the object evaluation logic 270. In this case, provided that the subscriber 120 N has been authenticated and the subscriber's authority to perform the task associated with the data sample submission 140 has been verified, the key management module 235 returns a message, which prompts the interface logic 200 to at least route the data sample 142 (and perhaps other content within the data sample submission 140) to the object evaluation logic 270. Otherwise, the key management module 235 returns an error code, which prompts the interface logic 200 to notify the subscriber 120 N of a submission error consistent with the error code.
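The authenticate-then-route decision just described might be sketched as follows; the account-lookup interface is hypothetical.

# Hedged sketch of the admit-or-reject decision (hypothetical account interface).
def admit_data_sample(submission: dict, accounts) -> tuple[bool, str]:
    account = accounts.lookup(submission.get("virtual_key"), submission.get("subscription_id"))
    if account is None or not account.key_is_valid(submission["virtual_key"]):
        return False, "ERR_AUTH"      # authentication failed: return an error code
    if not account.submission_quota_available():
        return False, "ERR_QUOTA"     # entitlement ceiling reached: return an error code
    return True, "OK"                 # interface logic may route the data sample onward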
Referring still to FIG. 2 , the consumption quota monitoring module 245 may be accessed through the second API 214 (or via the cybersecurity portal 205) and is configured to enable a subscriber (e.g., the subscriber 120 N) to obtain metrics associated with the current state of the subscription (e.g., active status, number of submissions for a particular submission type (or in total) conducted during the subscription period, number of submissions remaining for the subscription period, etc.). For instance, as an illustrative example, the consumption quota monitoring module 245 may receive a message (quota request submission) from any of the subscribers 120 1-120 N (e.g., subscriber 120 N) via the interface logic 200, such as the second API 214 of the device interface 210 (or, optionally, the second logic 207 of the cybersecurity portal 205). Upon receipt of the quota request submission (after the virtual key 242 N included as part of the quota request submission has been extracted by the credential management module 235 to authenticate the subscriber 120 N and it has been verified that the subscriber 120 N is authorized to perform this task based on the subscription entitlements), the consumption quota monitoring module 245 may be configured to establish communications with the subscriber accounts data store 230. Upon establishing communications, the consumption quota monitoring module 245 may access various metrics associated with the SaaS metrics 232, such as the subscription status (active/inactive) and/or the sum total of submissions (or data sample submissions in particular) made during a selected time period.
Optionally, depending on the logical configuration of the administrative control logic 220, the consumption quota monitoring module 245 may be accessed by the key management module 235 to confirm that a requested task is in compliance with the subscription entitlements. For example, where a data sample submission requests the task of conducting analytics on a submitted data sample, the credential management module 235 may be configured to access the consumption quota monitoring module 245 to confirm compliance with the subscription entitlements (e.g., that the maximum number of data sample submissions constituting the data sample submission threshold has not been exceeded) before the task is initiated (e.g., before the data sample 142 is provided to the object evaluation logic 270 for cyberthreat analytics).
The configuration management module 250 is configured to enable a subscriber, via the third API 216 (or optionally the cybersecurity portal 205), to specify parameters that control operability of the cyberthreat analytics. For instance, prior to controlling such operability, the credential management module 235, upon receipt of a parameter adjustment submission, may extract a virtual key included as part of the submission to authenticate the subscriber 120 N and verify that the subscriber is authorized to perform this task (cyberthreat analytics configuration). Thereafter, contents of the parameter adjustment submission are routed to the configuration management module 250, which may alter stored parameters that may influence workflow, such as (i) operations of an analytic engine selection module deployed within the object evaluation logic 270 of the cybersecurity system 100 for selection of analytic engines (e.g., priority of analytics, change of analytics based on the subscriber or attack vectors targeting the subscriber's industry, etc.), (ii) operations of the analytic engines deployed within the object evaluation logic 270 (e.g., changes in parameters that affect operations of the engines, such as available software profile(s) or guest images, run-time duration, priority in order of cyberthreat analytics, etc.), (iii) operations of the correlation module deployed within the object evaluation logic 270 (e.g., changes to threshold parameters relied upon to issue a threat verdict, etc.), and/or (iv) operations of the post-processing module deployed within the object evaluation logic 270 (e.g., change of retention time periods for context information associated with benign or malicious objects within cybersecurity intelligence, etc.).
The system health assessment module 255 and the auto-scaling module 260 are in communication with various modules within the object evaluation logic 270, and SaaS subscribers have no visibility as to the operability of these modules. Herein, the system health assessment module 255 is configured to monitor queue storage levels and/or the health (e.g., operating state, capacity level, etc.) of the public cloud infrastructure resources 150, notably the analytic engines 275 utilized by the object evaluation logic 270 to conduct cybersecurity analytics on submitted data samples. From these communications, the system health assessment module 255 may be configured to ascertain the overall health of the object evaluation logic 270. Additionally, the system health assessment module 255 may be configured to monitor the operability of certain public cloud infrastructure resources 150 utilized by the administrative control logic 220, the reporting logic 290 and even logic associated with the interface logic 200 to surmise the overall health of the cybersecurity system 100.
The auto-scaling module 260 may be configured to select and modify one or more additional computing instances 153 forming the basis for one or more analytic engines 275 within the object evaluation logic 270. In particular, the auto-scaling module 260 is configured to add additional analytic engines, as permitted by the subscription, in response to a prescribed increase in queued content associated with objects (or data samples) awaiting cyberthreat analytics (e.g., an increased level of occupancy of content associated with the data samples within queuing elements being part of the distributed queues 155 hosted as part of the cloud storage resources 154 and responsible for temporarily storing data samples awaiting processing by the analytic engines 275). Additionally, the auto-scaling module 260 is configured to terminate one or more analytic engines in response to a decrease in queued data samples awaiting cyberthreat analytics. The increase and/or decrease may be measured based on the number of objects, the rate of change (increase or decrease), etc.
Alternatively, the auto-scaling module 260 may be configured to monitor available queue capacity, where a decrease in available queue capacity denotes increased data samples awaiting analytics and potential addition of analytic engines and an increase in available queue capacity denotes decreased data samples awaiting analytics and potential termination of analytic engine(s). The prescribed decrease in available queue capacity may be measured based on a prescribed rate of change of available capacity for one or more queues, being part of the distributed queues 155 hosted as part of the cloud storage resources 154 and responsible for temporarily storing data samples awaiting processing by the analytic engines 275, a decrease in the amount of storage available beyond a first prescribed threshold for the queue(s), or a decrease in the percentage of storage available for the queue(s). Similarly, the auto-scaling module 260 may be configured to terminate one or more of the computing instances operating as the analytic engines 275 in response to an increase in available queue capacity beyond a second prescribed threshold. The first and second thresholds may be storage thresholds (e.g., number of data samples, percentage of storage capacity, etc.) in which the first threshold differs from the second threshold.
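As a hedged sketch of the capacity-based scaling decision, assuming placeholder thresholds rather than disclosed values:

# Illustrative capacity-based scaling decision; the two thresholds differ, as described above,
# but the specific percentages are placeholders, not disclosed values.
def scaling_action(available_capacity_pct: float,
                   scale_up_below_pct: float = 20.0,
                   scale_down_above_pct: float = 80.0) -> str:
    if available_capacity_pct < scale_up_below_pct:
        return "add_analytic_engine"        # queues filling: provision another computing instance
    if available_capacity_pct > scale_down_above_pct:
        return "terminate_analytic_engine"  # queues draining: release an instance
    return "no_change"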
The subscription billing module 265 is configured to confirm that the subscription parameters have not been exceeded (to denote additional billing) for a time-based, flat-fee subscription (e.g., yearly, monthly, weekly or daily). Alternatively, for a pay-as-you-go subscription, the subscription billing module 265 may be configured to maintain an account of the number of submissions analyzed by the object evaluation logic 270 (e.g., data sample submissions) over a prescribed period of time and generate a request for payment from a SaaS subscriber (e.g., subscriber 120 N) accordingly. The number of data sample submissions includes those submitted by the subscriber 120 N, and according to some embodiments, may also include additional objects uncovered during analytics conducted during the subscription period. Additionally, the subscription billing module 265 may be operable to identify other paid cloud-based services utilized by the SaaS-subscriber 120 N for inclusion as part of the payment request. According to one embodiment, the subscription billing module 265 may access the subscriber account data for the requisite information.
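A minimal sketch of the pay-as-you-go computation, assuming a single per-submission rate and an optional ancillary-services charge (both placeholders):

# Hypothetical pay-as-you-go invoice computation (rates and fields are assumptions).
def compute_invoice(sample_submissions: int, uncovered_objects: int,
                    rate_per_submission: float, ancillary_charges: float = 0.0) -> float:
    # Objects uncovered during analytics may also be counted, per some embodiments.
    billable = sample_submissions + uncovered_objects
    return billable * rate_per_submission + ancillary_charges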
Referring still to FIG. 2 , the object evaluation logic 270 is configured to receive data samples via the interface logic 200 and conduct cyberthreat analyses on these data samples. The object evaluation logic may be separated into multiple evaluation stages, where each evaluation stage is provided access to a queue that features a plurality of queue elements each storing content (object, context information, etc.) associated with a submitted data sample. For this distributed queue architecture, each “stage” queue is provided access to (or receives) content associated with a data sample evaluated in the preceding evaluation stage. Herein, the object evaluation logic includes a preliminary analytic module (within a first evaluation stage), an analytic engine selection module (within a second evaluation stage), a cyberthreat analytic module (within a third evaluation stage), a correlation module (within a fourth evaluation stage) and a post-processing module (within a fifth evaluation stage). As illustrated by a bidirectional arrow, the object evaluation logic 270 is configured with logic to communicate with the administrative control logic 220 to exchange or return information, such as subscription-related information (e.g., number of processed objects, health information, queue capacity, etc.) that may be used for billing, auto-scaling and other operability provided by the cybersecurity system 100.
The reporting logic 290 is configured to receive meta-information 292 associated with the analytic results produced by the object evaluation logic 270 and generate a displayable report 294 including the comprehensive results of the cyberthreat analytics (e.g., verdict, observed features and any corresponding meta-information representing the results associated with the cyberthreat analytics, context information associated with the observed features that identifies the analyses conducted to produce the observed features, the circumstances under which the features occurred, etc.). Accessible by the subscriber 120 N via the cybersecurity portal 205, the displayable report 294 may be provided as one or more interactive screens or a series of screens that allow a security administrator (corresponding to a representative of the SaaS-subscriber) to view results of data sample submissions in the aggregate and “drill-down” as to specifics associated with one of the objects uploaded to the cybersecurity system within a data sample submission. The reporting logic 290 may rely on the Subscription ID 227 or the virtual key 242 N, which may be part of the data sample 142 submitted to the object evaluation logic 270, to identify the subscriber 120 N and determine a preferred method for conveyance of an alert of the presence of the displayable report 294 (and set access controls to preclude access to contents of the displayable report 294 by other SaaS-subscribers). Additionally, or in the alternative, the reporting logic 290 may generate an alert based on the comprehensive results of the cyberthreat analytics. The alert may be in the form of a message (e.g., “threat warning” text or other electronic message).
Referring to FIG. 3 , a block diagram of an exemplary embodiment of the object evaluation logic 270 implemented within the cybersecurity system 100 of FIG. 2 is shown. According to this embodiment of the disclosure, the object evaluation logic 270 may be separated into multiple evaluation stages 390-394, where each evaluation stage 390 . . . or 394 is assigned a queue including a plurality of queue elements to store content associated with the data sample 142 as it proceeds through the evaluation stages 390-394, along with context information generated as analytics are performed on the data sample 142. The queues associated with the evaluation stages 390-394 are illustrated in FIG. 3 as Q1-Q5. Herein, the object evaluation logic 270 includes a preliminary analytic module 310 (within the first evaluation stage 390), an analytic engine selection module 340 (within the second evaluation stage 391), a cyberthreat analytic module 350 (within the third evaluation stage 392), a correlation module 370 (within the fourth evaluation stage 393) and a post-processing module 380 (within the fifth evaluation stage 394).
Herein, the object evaluation logic 270 receives content from the data sample 142, such as an object 144 for analysis along with context information 146 associated with the object 144. More specifically, according to one embodiment of the disclosure, the context information 146 may include submission context 147, entitlement context 148, and/or object context 149. The submission context 147 may include information pertaining to the submission 140 and/or data sample 142, such as (i) the time of receipt or upload into the cybersecurity system 100, (ii) the origin of the object 144 included in the submission 140 (e.g., from email, network cloud shared drive, network transmission medium, etc.), (iii) the location of the subscriber device 120 N submitting the object 144, (iv) the Internet Protocol (IP) address of the subscriber device 120 N, or the like. The entitlement context 148 may include information pertaining to the subscription selected by the subscriber, such as information directed to what features are permitted by the subscription (e.g., types of analytics supported, reporting formats available, credentials to access third party resources, or other features that may distinguish different subscription tiers). Lastly, the object context 149 may include information pertaining to the object 144, including meta-information associated with the object 144 such as the name of the object 144, an extension type (e.g., pdf, exe, html, etc.), or the like.
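For illustration, the three context types might be carried as a structure resembling the following; every field name and value is hypothetical.

# Hypothetical shape of the three context types carried with a data sample (illustrative only).
example_context = {
    "submission_context": {
        "received_at": "2020-12-23T10:15:00Z",
        "origin": "email",                  # e.g., email, shared drive, network transmission
        "submitter_ip": "203.0.113.7",
    },
    "entitlement_context": {
        "analytics_permitted": ["static", "dynamic", "machine_learning", "emulation"],
        "report_formats": ["interactive_dashboard"],
    },
    "object_context": {
        "name": "invoice.pdf",
        "extension": "pdf",                 # e.g., pdf, exe, html
    },
}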
The preliminary analytic module 310 is configured to conduct one or more preliminary analyses on content within the data sample 142, which includes the object 144 and/or the context information 146 accompanying the object 144, based on cybersecurity intelligence 320 accessible to the object evaluation logic 270. The cybersecurity intelligence 320 may include context information 322 associated with known malicious objects and known benign objects gathered from prior analytics conducted by the cybersecurity system 100 (hereinafter, “internal intelligence 322”). Additionally, or in the alternative, the cybersecurity intelligence 320 may include context information 324 (hereinafter, “external intelligence 324”) associated with known malicious objects and known benign objects gathered from analytics conducted by other cybersecurity intelligence sources (e.g., other cloud-based cybersecurity systems, on-premises cybersecurity systems, etc.) and/or context information 326 associated with known malicious and/or benign objects accessible from one or more third party cybersecurity sources (hereinafter, “3P intelligence 326”).
Referring to FIG. 4 , the preliminary analytic module 310 includes a context extraction module 400 and a filtering module 410, which includes a first pre-filter module 420, and a second pre-filter module 430. The context extraction module 400 is configured to recover the context information 146 from the data sample 142 while the filtering module 410 is configured to conduct one or more preliminary analyses of the context information 146 associated with the object 144 and, based on the preliminary analyses, determine an initial classification of the object 144. According to one embodiment of the disclosure, the preliminary analyses of the context information 146 may be conducted on the submission context 147, entitlement context 148, and/or object context 149 in the aggregate.
Upon classifying the object 144 as suspicious, the filtering module 410 passes the object 144 and/or the context information 146 to the analytic engine selection module 340 to conduct additional cyberthreat analytics. Otherwise, responsive to a preliminary malicious (or benign) classification, the filtering module 410 may bypass further cyberthreat analyses of the object 144 as illustrated by a feed-forward path 440.
More specifically, the first pre-filter module 420 analyzes the context information 146, optionally in accordance with the separate consideration of different context types as described above, by conducting an analysis (e.g., comparison) between at least a portion of the context information 146 and the context information 322 associated with known malicious and/or benign objects gathered from prior analytics conducted by the cybersecurity system 100. The context information 322 may be maintained within one or more virtual data stores as part of the cloud storage resources 154 hosted by the cloud network 110 of FIG. 1B. In the event that the portion of the context information 146 is determined to be associated with a known malicious or benign object, the first pre-filter module 420 may bypass operations by at least the analytic engine selection module 340, the cyberthreat analytic module 350, and the correlation module 370, as represented by the feed-forward path 440. Otherwise, the context information 146 is provided to the second pre-filter module 430.
Similarly, the second pre-filter module 430 analyzes the context information 146 by conducting an analysis (e.g., comparison) between at least a portion of the context information 146 and the context information 324 associated with known malicious and/or benign objects gathered from analytics conducted by other cybersecurity intelligence sources and/or the context information 326 associated with known malicious and/or benign objects accessible from third party cybersecurity source(s). In the event that the portion of the context information 146 is determined to be associated with a known malicious or benign object, the second pre-filter module 430 may also bypass operations by at least the analytic engine selection module 340, the cyberthreat analytic module 350, and the correlation module 370 (and perhaps the post-processing module 380), as represented by the feed-forward path 440. Otherwise, the object 144 is determined to be suspicious, where the context information 146 and/or the object 144 are made available to the second evaluation stage 391 of the object evaluation logic 270.
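A hedged sketch of the two-stage pre-filter decision, assuming each intelligence source can be queried as a simple lookup keyed on a portion of the context information:

# Illustrative pre-filter: consult internal intelligence first, then external and third-party
# intelligence; anything unmatched is treated as suspicious and forwarded for engine selection.
def preliminary_classify(context_key: str, internal: dict, external: dict, third_party: dict) -> str:
    for source in (internal, external, third_party):
        verdict = source.get(context_key)   # "malicious" or "benign" if previously seen
        if verdict in ("malicious", "benign"):
            return verdict                  # bypass further cyberthreat analytics (feed-forward)
    return "suspicious"                     # forward to the analytic engine selection module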
More specifically, the context information 146 and/or the object 144 are made available to the analytic engine selection module 340. For example, according to one embodiment of the disclosure, the content associated with the object 144 and/or context information 146 within a first stage queue Q1 may be passed (or made available by identifying its storage location) to a second stage queue Q2 allocated for the second evaluation stage 391.
Referring back to FIG. 3 , the analytic engine selection module 340 is configured to determine the type and/or ordering of analytic engines to process the object 144 based on the context information 146, such as the submission context 147, the entitlement context 148 and/or the object context 149 maintained in the second stage queue Q2. The analytic engine selection module 340 may select the analytic engine(s) based on the context information 146. The particular ordering (workflow) of the analytic engines may be based, at least in part, on the types of context information. For example, the entitlement context 148 may identify certain types of analytic engines that are permitted for use (e.g., allow certain analytic engine types and preclude others, or allow all analytic engine types) based on the subscription tier. Also, the object context may tailor the type of analytic engine to avoid selection of a configuration for an analytic engine that is unsuitable or ineffective for a particular type of object, while the submission context may tailor the selection toward engines directed to attack vectors associated with the origin of the object (e.g., an email source favoring an analytic engine more targeted for email analysis, etc.).
Referring now to FIG. 5 , a block diagram of an exemplary embodiment of the logical architecture of the analytic engine selection module 340 operating with the cyberthreat analytic module 350 of FIG. 3 is shown. Herein, according to this embodiment, the analytic engine selection module 340 includes a controller 500 and a plurality of rule sets 510, which are identified as a first rule set 520, a second rule set 522 and a third rule set 524. The rule sets 510 may be executed or referenced by the controller 500 in the aggregate analyses of different types of context information 146 in determining the number and types of analytic engines selected for analysis of the object 144. According to one embodiment of the disclosure, the rule sets 510 may be maintained separate from the queue Q2, being part of a distributed queue allocated for the analytic engine selection module 340. In an alternative embodiment, however, the controller 500 may select the analytic engine(s) based on the context information 146 considered in its totality.
As another embodiment of the disclosure, the first rule set 520 may be used by the controller 500 in selecting a first group of analytic engines based on the submission context 147 provided with the data sample 142. Similarly, the second rule set 522 may be used by the controller 500 in selecting a second group of analytic engines based on the entitlement context 148, while the third rule set 524 is used by the controller 500 in selecting a third group of analytic engines based on the object context 149. Where the incoming context information 146 includes two or more different context types (e.g., any combination of two or more of submission context 147, entitlement context 148 and object context 149), the analytic engines may be determined to be a subset of analytic engines common to the selected groups of analytic engines.
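A minimal sketch of that selection, assuming each rule set yields a named group of engine types and the final selection is their common subset:

# Illustrative selection: intersect the groups chosen by each applicable rule set
# (the group contents below are hypothetical).
def select_engines(submission_group: set, entitlement_group: set, object_group: set) -> set:
    groups = [g for g in (submission_group, entitlement_group, object_group) if g]
    if not groups:
        return set()
    selected = groups[0]
    for group in groups[1:]:
        selected = selected & group   # keep only engines common to all applicable groups
    return selected

# Example: only the dynamic and machine learning engines survive all three rule sets.
print(select_engines({"static", "dynamic", "ml"}, {"dynamic", "ml", "emulation"}, {"dynamic", "ml"}))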
Upon selecting one or more analytic engines to analyze the data sample 142, the controller 500 may be configured to formulate, from the computing instances, these selected analytic engines to operate sequentially or concurrently. Herein, the selected analytic engines 275 1-275 L (L≥1, L=3 for embodiment) may include at least one or any combination of the following: (i) static analytic engines to conduct an analysis on the content of the object 144 within the data sample 142 and generate results including observed features represented by characteristics of the object 144 (and accompanying context information); (ii) dynamic analytic engines to conduct an execution of the object 144 and generate results including features represented by observed behaviors of the analytic engine (and accompanying context information); (iii) machine learning analytic engines to conduct extraction of insights using a trained model and generate results including features represented by a probability of the object 144 being malicious (and accompanying context information); and/or (iv) emulation analytic engines to conduct reproduction of operations representing the execution of the object 144 without such execution and generate results including features represented by the emulated behaviors (and accompanying context information).
As further shown in FIG. 3 , the distributed queues 155 associated with the cyberthreat analytic module 350 may maintain the portions of the data sample 142 (e.g., object 144, context information 146, etc.) for retrieval by each of the selected analytic engines. Features produced by the analytics conducted by the selected analytic engines 275 1-275 3 are collected by a feature collection module 530 operating, at least in part, as an event (feature) log. The features correspond to resultant information produced by each of the selected analytic engines during analysis of at least a portion of the context information 146 and/or the object 144.
Referring to both FIG. 3 and FIG. 5 , as shown, the cyberthreat analytic module 350 includes one or more analytic engines 275 1-275 3, which are selected to perform different analytics on the object 144 in efforts to determine whether the object is malicious (malware present) or non-malicious (no malware detected). These analytic engines 275 1-275 3 may operate sequentially or concurrently (e.g., at least partially overlapping in time). The analytic engines 275 1-275 3, according to one embodiment of the disclosure, may assess the content associated with the object 144 and/or context information 146 within a third stage queue Q3 that is passed from the second stage queue Q2, where the context information 146 may include additional context information produced from the analyses conducted by the first and second evaluation stages 390-391. As described above, the analytic engines 275 1-275 L may be selected based, at least in part, on the submission context, entitlement context and/or the object context. As a result, the analytic engines 275 1-275 3 may be selected as any one, or any combination of at least two, of the following analytic engines as described above: (i) static analytic engines; (ii) dynamic analytic engines; (iii) machine learning analytic engines; and/or (iv) emulation analytic engines.
A feedback path 360 represents that the cyberthreat analytic module 350 may need to conduct a reiterative, cascaded analysis of an additional object, uncovered during analysis of another object, with a different selection of engines (hereinafter, “sub-engines” 540). Herein, the analytic engines 275 1-275 3 may be operating concurrently (in parallel), but the sub-engines 540 may be run serially after completion of operations by the analytic engine 275 1. The sub-engine 540 may be initiated to perform a sub-analysis based on an event created during processing of the object 144 by the analytic engine 275 1. The event may constitute detection of an additional object (e.g., an executable or URL embedded in the object 144, such as a document for example, detected during analysis of the object 144) or detection of information that warrants analytics different from those previously performed. According to one embodiment of the disclosure, this may be accomplished by returning the additional object(s) along with its context information to the second stage queue Q2 associated with the analytic engine selection module 340, for selection of the particular sub-engine(s) 540. The processing of the object 144 and/or context information 146 by the analytic engines 275 2-275 3 may be conducted in parallel with the analytic engine 275 1 as well as the sub-engines 540.
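The feedback path might be sketched as a simple work queue in which objects uncovered during analysis are returned for their own engine selection; the selection and engine-execution interfaces below are assumptions.

# Hedged sketch of the feedback path: uncovered objects are re-queued for selection and analysis.
from collections import deque

def analyze_with_feedback(root_object, select_engines, run_engine):
    pending = deque([root_object])
    results = []
    while pending:
        obj = pending.popleft()
        for engine in select_engines(obj):
            outcome = run_engine(engine, obj)   # hypothetical engine-execution callable
            results.append(outcome)
            # Any embedded object (e.g., an executable or URL inside a document) is
            # returned to the selection stage for analysis by suitable sub-engines.
            pending.extend(outcome.get("uncovered_objects", []))
    return results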
Referring to FIG. 6 , a block diagram of an exemplary embodiment of an analytic engine (e.g., analytic engine 275 1) configured to operate as part of the cyberthreat analytic module 350 of FIG. 3 is shown. Herein, each analytic engine 275 1 . . . or 275 L is based on an analytic engine infrastructure hosted by the cloud network and provisioned by the analytic engine selection module 340. As shown, each analytic engine 275 1 . . . or 275 L, such as the analytic engine 275 1 for example, includes a health assessment module 600, a configuration module 610, an update module 620, a task processing module 630 and a result processing module 640.
Herein, according to one embodiment of the disclosure, the health assessment module 600 is configured to determine the operational health of the analytic engine 275 1. The operational health may be represented, at least in part, by its utilization level that signifies when the analytic engine 275 1 is stalled or non-functional (e.g., <5% utilization) or when the analytic engine 275 1 is at a higher risk than normal of failure (e.g., >90% utilization). The aggregate of the operational health of each of the analytic engines 275 1-275 3 may be accessed and used in determining overall system health by the system health assessment module 255 of FIG. 2 .
Referring still to FIG. 6 , the configuration module 610 is configured to control the configuration and re-configuration of certain functionality of the analytic engine 275 1. For example, according to one embodiment of the disclosure, the configuration module 610 may be configured to control reconfiguration of, and interoperability between, the analytic engine 275 1 and other modules within the object evaluation logic 270 and/or the administrative control logic 220. Additionally, the configuration module 610 may be further configured to set and control the duration of an analysis conducted for the data sample 142. The duration may be uniform for all data samples independent of object type or may be set at different durations based on the type of object included as part of the data sample 142. Additionally, the configuration module 610 may be configured to select (i) the queue (e.g., third stage queue Q3) from which one or more data samples (including data sample 142) awaiting analysis by the analytic engine 275 1 are retrieved, (ii) different software profiles to install when conducting dynamic analytics on each data sample maintained in the queue, and/or (iii) what time to conduct such analytics on queued data samples.
The update module 620 is configured to receive and control installation of changes to sets of rules controlling operability of the task processing module 630 and the result processing module 640 (described below) and changes to parameters to modify operability of the analytic engine 275 1.
The task processing module 630 is configured to monitor the queuing infrastructure associated with the third evaluation stage 392 (third stage queue Q3) of the object evaluation logic 270 of FIG. 3 . More specifically, the task processing module 630 monitors the third stage queue Q3 for retention of data samples awaiting analysis by the analytic engine 275 1 to ascertain a current processing level for the cybersecurity system 100 and determine if a capacity threshold for the third stage queue Q3 has been exceeded, perhaps over a prescribed period of time to avoid throttling. If so, the task processing module 630, if set by the configuration module 610, may signal the auto-scaling module 260 within the administrative control logic 220 to activate one or more additional computing instances to be configured and used as additional analytic engines for the object evaluation logic 270. Additionally, the task processing module 630 may be configured to further monitor one or more other stage queues (e.g., first stage queue Q1, second stage queue Q2, fourth stage queue Q4 and/or fifth stage queue Q5) to estimate future processing capacity, upon which the auto-scaling module 260 may commence scaling up or scaling down analytic engines.
Referring to both FIG. 3 and FIG. 7 , a fourth evaluation stage 393 includes a correlation module 370, which operates in accordance with a fourth rule set 700 to classify the object included as part of the data sample as malicious, benign, unknown or suspicious based on the meta-information (events) collected from the analyses performed by the analytic engines. The classification of the object 144 may be based, at least in part, on meta-information associated with the analytic results generated by the analytic engines 275 1-275 3 and maintained with the event log 530 (hereinafter, “analytic meta-information” 550). The classification of the object (sometimes referred to as the “verdict”) is provided to the post-processing module 380 that is part of a fifth evaluation stage 394.
Depending on the verdict, the post-processing module 380, operating in compliance with a fifth rule set 710 and deployed within the fifth evaluation stage 394, may initiate actions to remediate, in accordance with a remediation policy 720, a detected cyberthreat represented by the object 144 through blocking, resetting of configuration settings, or performance of a particular retention policy on the object 144 and/or context information 146 associated with the object 144 in accordance with a retention policy 730. For example, the object 144 and/or context information 146, currently maintained in a fifth stage queue Q5, may be stored as part of the internal intelligence 322 accessible by the preliminary analytic module 310 (see FIG. 3 ), where certain portions of the context information 146 associated with the object 144 classified as “malicious” may be stored for a first prescribed period of time (e.g., ranging from a month to indefinitely), while this context information 146 may be stored for a second prescribed period less than the first prescribed period (e.g., ranging from a few days to a week or more) when the object 144 is classified as “benign”.
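A hedged sketch of verdict-dependent retention, using placeholder day counts consistent with the ranges described above rather than disclosed values:

# Illustrative retention policy keyed on the verdict; None means retain indefinitely
# (upper end of the first prescribed period). The 7- and 30-day values are assumptions.
def retention_days(verdict: str):
    policy = {
        "malicious": None,  # first, longer prescribed period (up to indefinite retention)
        "benign": 7,        # second, shorter prescribed period (about a week)
    }
    return policy.get(verdict, 30)  # other classifications: an intermediate assumption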
Based on the results of the cyberthreat analytics and determination by the correlation module 370, the reporting logic 290 controls the reporting of these cyberthreat analytic results, which may include one or more alerts 160 to allow an administrator (e.g., person responsible for managing the customer cloud-hosted resources or the public cloud network itself) access to one or more dashboards via the cybersecurity portal 205 or the first API 212.
The reporting logic 290 is configured to receive the meta-information 292 associated with the analytic results produced by the object evaluation logic 270 and generate the displayable report 294 including the comprehensive results of the cyberthreat analytics (e.g., verdict, observed features and any corresponding context information including meta-information), as described above.
In the foregoing description, the invention is described with reference to specific exemplary embodiments thereof. However, it will be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims.

Claims (20)

What is claimed is:
1. A system for conducting cyberthreat analytics on a submitted object to determine whether the object is malicious, comprising:
a cloud platform configured to host resources including cloud processing resources and cloud storage resources; and
a cybersecurity system to analyze one or more received objects included as part of a submission received from a subscriber after authentication of the subscriber and verification that the subscriber is authorized to perform one or more tasks associated with the submission, wherein the cybersecurity system comprises
an interface to receive the submission including the one or more objects for analysis,
administrative control logic including (i) a credential management module being configured to generate a first credential assigned to the subscriber associated with the submission, and (ii) an auto-scaling module to generate analytic engines based on computing instances hosted by the cloud platform, and
an object evaluation logic configured to receive a data sample from the administrative control logic, the data sample being a portion of the submission that comprises the one or more received objects and context information associated with the one or more received objects, the object evaluation logic includes a cyberthreat analytic module that comprises the analytic engines each directed to a different analysis approach in analyzing the one or more received objects for malware,
wherein the analytic engines comprise a combination of two or more of any of (1) a static analytic engine to conduct an analysis on content of an object of the one or more received objects and generate results including observed features represented by characteristics of the object and the context information associated with the object; (2) a dynamic analytic engine to execute the object and generate results including features represented by observed behaviors of the dynamic analytic engine along with context information accompanying the observed features; (3) a machine learning analytic engine to submit the object as input into a trained machine-learning model and generates results including features represented by insights derived from the machine-learning model and accompanying context information; and (4) an emulation analytic engine to conduct reproduction of operations representing an execution of the object and generate results including features represented by behaviors captured during emulation and accompanying context information.
2. The system of claim 1, wherein the cloud platform is operating as an Infrastructure-as-a-Service.
3. The system of claim 2, wherein the cloud processing resources includes one or more computing instances.
4. The system of claim 1, wherein the cybersecurity system further includes logic to monitor a number of submissions received from the subscriber for computation of costs associated with usage of the cybersecurity system while the cloud platform to monitor (i) an amount of processing time used by the cloud processing resources for execution of logic associated with the cybersecurity system and (ii) an amount of storage used by the cybersecurity system in maintaining the logic associated with the cybersecurity system.
5. The system of claim 1, wherein the cybersecurity system further includes the object evaluation logic configured to conduct cyberthreat analytics on the one or more received objects independent of object type.
6. The system of claim 1, wherein the cybersecurity system further includes a credential management module that is configured to generate one or more keys for use in authentication of the subscriber or verify that the subscriber is authorized to perform the one or more tasks associated with the submission.
7. The system of claim 5, wherein the auto-scaling module of the administrative control logic is configured to generate one or more of the analytic engines in response to detection of at least a particular level of usage of queue elements maintaining the one or more received objects that are awaiting cyberthreat analytics being conducted on the one or more received objects.
8. The system of claim 5, wherein the cybersecurity system includes system health monitor logic being communicatively coupled to the analytic engines generated based on computing instances associated with the cloud processing resources.
9. The system of claim 1, wherein the cybersecurity system further includes a consumption quota monitoring module configured to enable the subscriber to obtain metrics associated with the current state of a subscription, the metrics include at least a total number of submissions conducted during a subscription period or a number of submissions remaining for the subscription period.
10. A cybersecurity system deployed as a cloud-based, multi-tenant Security-as-a-Service (SaaS) leveraging resources hosted by a cloud platform operating as an Infrastructure-as-a-Service (IaaS), the cybersecurity system comprising:
an interface to receive a submission including one or more objects for analysis and a virtual key provided to a subscriber for attachment to the submission;
administrative control logic including (i) a credential management module being configured to generate a first credential assigned to the subscriber associated with the submission, and (ii) an auto-scaling module to generate analytic engines based on computing instances hosted by the cloud platform in response to detection of at least a particular level of usage of queue elements maintaining objects that are awaiting cyberthreat analytics being conducted on the maintained objects; and
an object evaluation logic configured to receive a data sample from the administrative control logic, the data sample being a portion of the submission that comprises content associated with the submission including one or more objects and context information associated with the one or more objects, the object evaluation logic includes a cyberthreat analytic module that comprises one or more analytic engines each directed to a different analysis approach in analyzing the one or more objects for malware,
wherein the one or more analytic engines comprises a combination of two or more of any of (1) a static analytic engine to conduct an analysis on content of an object of the one or more objects and generate results including observed features represented by characteristics of the object and the context information associated with the object; (2) a dynamic analytic engine to execute the object and generate results including features represented by observed behaviors of the dynamic analytic engine along with context information accompanying the observed features; (3) a machine learning analytic engine to submit the object as input into a trained machine-learning model and generates results including features represented by insights derived from the machine-learning model and accompanying context information; and (4) an emulation analytic engine to conduct reproduction of operations representing an execution of the object and generate results including features represented by behaviors captured during emulation and accompanying context information.
11. The cybersecurity system of claim 10, wherein at least one analytic engine of the analytic engines to perform cyberthreat analytics on the one or more objects to determine whether any of the one or more objects include malware.
12. The cybersecurity system of claim 10, wherein the credential management module and the auto-scaling module form a portion of administrative control logic of the cybersecurity system.
13. The cybersecurity system of claim 12 further comprising an object evaluation logic configured to receive a data sample from the administrative control logic, the data sample being a portion of the submission that comprises content associated with the submission including one or more objects and context information associated with the one or more objects, the object evaluation logic to conduct cyberthreat analyses on at least the one or more objects included as part of the data sample.
14. The cybersecurity system of claim 13, wherein the object evaluation logic comprises a plurality of evaluation stages with each evaluation stage of the plurality of evaluation stages being provided access to a queue including a plurality of queue elements each storing the content.
15. The cybersecurity system of claim 14, wherein an evaluation stage of the plurality of evaluation stages includes a cyberthreat analytic module that comprises the one or more analytic engines each directed to a different analysis approach in analyzing the one or more objects for malware.
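Claims 13 through 15 describe the object evaluation logic as a series of evaluation stages that all draw on a queue of queue elements storing the submitted content, with at least one stage hosting the cyberthreat analytic module. The pipeline below is a simplified, assumed rendering of that staged structure; the stage names and the feature recorded are illustrative only.

    # Simplified, assumed rendering of the staged object evaluation logic of claims 13-15:
    # each evaluation stage is given access to the same queue of elements storing content.
    from collections import deque

    class QueueElement:
        def __init__(self, obj: bytes, context: dict):
            self.obj = obj
            self.context = context
            self.features: list = []

    class EvaluationStage:
        name = "base"
        def process(self, element: QueueElement) -> None:
            raise NotImplementedError

    class PreprocessingStage(EvaluationStage):
        name = "preprocessing"
        def process(self, element):
            element.context["object_size"] = len(element.obj)

    class AnalyticStage(EvaluationStage):
        name = "cyberthreat analytics"
        def process(self, element):
            # A cyberthreat analytic module (one or more engines) would run here.
            element.features.append({"engine": "static", "suspicious_strings": 0})

    shared_queue = deque([QueueElement(b"sample-object", {"submitter": "subscriber-1"})])
    stages = [PreprocessingStage(), AnalyticStage()]

    while shared_queue:
        element = shared_queue.popleft()
        for stage in stages:          # every stage is provided access to the queued content
            stage.process(element)
        print(element.context, element.features)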
16. The cybersecurity system of claim 10, wherein the context information associated with the features provides additional information associated with the features.
17. The cybersecurity system of claim 10, wherein a second evaluation stage of the plurality of evaluation stages includes an analytic engine selection module configured to determine the one or more analytic engines to conduct cyberthreat analytics of the object based on at least a portion of the context information accompanying the object being part of the data sample.
18. The cybersecurity system of claim 17, wherein a third evaluation stage of the plurality of evaluation stages includes a correlation module to analyze features associated with the object to determine whether the object includes malware.
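Claims 17 and 18 add an analytic engine selection module that picks engines based on the context information accompanying an object, and a correlation module that weighs the returned features into a malware verdict. The selection rules, scores, and threshold below are purely illustrative assumptions, not the claimed logic.

    # Purely illustrative: selecting analytic engines from context information (claim 17)
    # and correlating the resulting features into a malware verdict (claim 18).
    def select_engines(context: dict) -> list:
        engines = ["static"]                          # assumed default engine
        if context.get("file_type") in ("exe", "dll"):
            engines.append("dynamic")                 # executables also get behavioral analysis
        if context.get("source") == "email":
            engines.append("ml")                      # mail-borne objects also get the ML engine
        return engines

    def correlate(feature_sets: list) -> bool:
        # Assumed correlation rule: sum per-engine suspicion scores against a fixed threshold.
        total = sum(fs.get("suspicion_score", 0.0) for fs in feature_sets)
        return total >= 1.0

    context = {"file_type": "exe", "source": "email"}
    engines = select_engines(context)                               # ['static', 'dynamic', 'ml']
    results = [{"engine": e, "suspicion_score": 0.4} for e in engines]
    print(engines, "malware" if correlate(results) else "benign")   # verdict: malware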
19. The cybersecurity system of claim 10, wherein the interface includes an Application Programming Interface (API) provided to the subscriber upon completion of an onboarding subscription process provided as part of the cybersecurity system.
20. A cybersecurity system deployed as a cloud-based, multi-tenant Security-as-a-Service (SaaS) leveraging resources hosted by a cloud platform operating as an Infrastructure-as-a-Service (IaaS), the cybersecurity system comprising:
an interface to receive (i) a submission that comprises a data sample including one or more objects and (ii) a virtual key attached to the submission to identify a subscriber that provided the submission, the data sample to be provided for cyberthreat analytics;
administrative control logic to validate the submission, authenticate the subscriber submitting the submission, verify that the submission including the data sample is in compliance with parameters associated with a subscription held by the subscriber to provide the submission to the cybersecurity system, and output at least the data sample;
object evaluation logic to receive the data sample provided from the administrative control logic and conduct cyberthreat analytics on the one or more objects included in the data sample, wherein the object evaluation logic includes a cyberthreat analytic module that comprises one or more analytic engines each directed to a different cyberthreat analytic approach in analyzing the one or more objects for malware, and the one or more analytic engines comprise a combination of two or more of any of:
(1) a static analytic engine to conduct an analysis on content of an object of the one or more objects and generate results including observed features represented by characteristics of the object and context information associated with the object;
(2) a dynamic analytic engine to execute the object and generate results including features represented by behaviors observed by the dynamic analytic engine during execution of the object, along with context information accompanying the observed features;
(3) a machine learning analytic engine to submit the object as input into a trained machine-learning model and generate results including features represented by insights derived from the machine-learning model and accompanying context information; and
(4) an emulation analytic engine to conduct a reproduction of operations representing an execution of the object and generate results including features represented by behaviors captured during emulation and accompanying context information; and
reporting logic to receive meta-information associated with results of the cyberthreat analytics conducted by the object evaluation logic on the one or more objects and generate a displayable report including the results.
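Read end to end, claim 20 describes a pipeline of validate, authenticate via the attached virtual key, check subscription compliance, analyze, and report. The handler below sketches that flow under assumed data structures (VALID_KEYS, SUBSCRIPTION_LIMITS, handle_submission and the placeholder verdicts are all hypothetical) and is not the claimed implementation.

    # Assumed end-to-end sketch of the claim 20 pipeline: validate the submission,
    # authenticate the subscriber via the attached virtual key, verify subscription
    # compliance, run the cyberthreat analytics, and produce a displayable report.
    VALID_KEYS = {"vk-123": "subscriber-1"}            # hypothetical virtual-key registry
    SUBSCRIPTION_LIMITS = {"subscriber-1": {"max_object_bytes": 10_000_000}}

    def handle_submission(submission: dict) -> dict:
        # 1. Validate the submission and authenticate the subscriber.
        subscriber = VALID_KEYS.get(submission.get("virtual_key"))
        if subscriber is None:
            return {"status": "rejected", "reason": "unknown virtual key"}

        # 2. Verify the data sample complies with the subscriber's subscription parameters.
        limit = SUBSCRIPTION_LIMITS[subscriber]["max_object_bytes"]
        objects = submission["data_sample"]["objects"]
        if any(len(obj) > limit for obj in objects):
            return {"status": "rejected", "reason": "object exceeds subscription limit"}

        # 3. Conduct cyberthreat analytics on each object (placeholder verdict logic).
        results = [{"object_index": i, "verdict": "benign"} for i, _ in enumerate(objects)]

        # 4. Reporting logic: package meta-information about the results for display.
        return {"status": "completed", "subscriber": subscriber, "results": results}

    report = handle_submission({
        "virtual_key": "vk-123",
        "data_sample": {"objects": [b"hello world"], "context": {"source": "upload"}},
    })
    print(report)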
US17/133,397 2019-12-24 2020-12-23 Run-time configurable cybersecurity system Active 2041-08-25 US11838300B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/133,397 US11838300B1 (en) 2019-12-24 2020-12-23 Run-time configurable cybersecurity system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962953422P 2019-12-24 2019-12-24
US17/133,397 US11838300B1 (en) 2019-12-24 2020-12-23 Run-time configurable cybersecurity system

Publications (1)

Publication Number Publication Date
US11838300B1 true US11838300B1 (en) 2023-12-05

Family

ID=88980024

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/133,397 Active 2041-08-25 US11838300B1 (en) 2019-12-24 2020-12-23 Run-time configurable cybersecurity system

Country Status (1)

Country Link
US (1) US11838300B1 (en)

Citations (300)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3171553A (en) 1961-12-08 1965-03-02 Jr Ralph Mooney Power backhoe boom control
WO2002006928A2 (en) 2000-07-14 2002-01-24 Vcis, Inc. Computer immune system and method for detecting unwanted code in a computer system
WO2002023805A2 (en) 2000-09-13 2002-03-21 Karakoram Limited Monitoring network activity
US20020038430A1 (en) 2000-09-13 2002-03-28 Charles Edwards System and method of data collection, processing, analysis, and annotation for monitoring cyber-threats and the notification thereof to subscribers
US20020091819A1 (en) 2001-01-05 2002-07-11 Daniel Melchione System and method for configuring computer applications and devices using inheritance
US20020095607A1 (en) 2001-01-18 2002-07-18 Catherine Lin-Hendel Security protection for computers and computer-networks
US20020169952A1 (en) 1999-06-21 2002-11-14 Disanto Frank J. Method and apparatus for securing e-mail attachments
US20020184528A1 (en) 2001-04-12 2002-12-05 Shevenell Michael P. Method and apparatus for security management via vicarious network devices
US20020188887A1 (en) 2000-05-19 2002-12-12 Self Repairing Computers, Inc. Computer with switchable components
US20030084318A1 (en) 2001-10-31 2003-05-01 Schertz Richard L. System and method of graphically correlating data for an intrusion protection system
US20030188190A1 (en) 2002-03-26 2003-10-02 Aaron Jeffrey A. System and method of intrusion detection employing broad-scope monitoring
US20030191957A1 (en) 1999-02-19 2003-10-09 Ari Hypponen Distributed computer virus detection and scanning
US20040015712A1 (en) 2002-07-19 2004-01-22 Peter Szor Heuristic detection of malicious computer code by page tracking
US20040019832A1 (en) 2002-07-23 2004-01-29 International Business Machines Corporation Method and apparatus for the automatic determination of potentially worm-like behavior of a program
US20040117624A1 (en) 2002-10-21 2004-06-17 Brandt David D. System and methodology providing automation security analysis, validation, and learning in an industrial controller environment
US20040236963A1 (en) 2003-05-20 2004-11-25 International Business Machines Corporation Applying blocking measures progressively to malicious network traffic
US20040255161A1 (en) 2003-04-12 2004-12-16 Deep Nines, Inc. System and method for network edge data protection
US20040268147A1 (en) 2003-06-30 2004-12-30 Wiederin Shawn E Integrated security system
US20050021740A1 (en) 2001-08-14 2005-01-27 Bar Anat Bremler Detecting and protecting against worm traffic on a network
US20050086523A1 (en) 2003-10-15 2005-04-21 Zimmer Vincent J. Methods and apparatus to provide network traffic support and physical security support
US20050091513A1 (en) 2003-10-28 2005-04-28 Fujitsu Limited Device, method and program for detecting unauthorized access
US20050108562A1 (en) 2003-06-18 2005-05-19 Khazan Roger I. Technique for detecting executable malicious code using a combination of static and dynamic analyses
US6898632B2 (en) 2003-03-31 2005-05-24 Finisar Corporation Network security tap for use with intrusion detection system
US20050125195A1 (en) 2001-12-21 2005-06-09 Juergen Brendel Method, apparatus and sofware for network traffic management
US20050149726A1 (en) 2003-10-21 2005-07-07 Amit Joshi Systems and methods for secure client applications
US20050157662A1 (en) 2004-01-20 2005-07-21 Justin Bingham Systems and methods for detecting a compromised network
US6941348B2 (en) 2002-02-19 2005-09-06 Postini, Inc. Systems and methods for managing the transmission of electronic messages through active message date updating
US20050238005A1 (en) 2004-04-21 2005-10-27 Yi-Fen Chen Method and apparatus for controlling traffic in a computer network
US20050262562A1 (en) 2004-05-21 2005-11-24 Paul Gassoway Systems and methods of computer security
US20050283839A1 (en) 2002-09-10 2005-12-22 Ingenia Technology Limited Security device and system
US20060010495A1 (en) 2004-07-06 2006-01-12 Oded Cohen Method for protecting a computer from suspicious objects
US20060015747A1 (en) 2004-07-16 2006-01-19 Red Hat, Inc. System and method for detecting computer virus
US20060015715A1 (en) 2004-07-16 2006-01-19 Eric Anderson Automatically protecting network service from network attack
US20060021029A1 (en) 2004-06-29 2006-01-26 Brickell Ernie F Method of improving computer security through sandboxing
US20060031476A1 (en) 2004-08-05 2006-02-09 Mathes Marvin L Apparatus and method for remotely monitoring a computer network
US20060070130A1 (en) 2004-09-27 2006-03-30 Microsoft Corporation System and method of identifying the source of an attack on a computer network
US20060117385A1 (en) 2004-11-30 2006-06-01 Mester Michael L Monitoring propagation protection within a network
US20060123477A1 (en) 2004-12-06 2006-06-08 Kollivakkam Raghavan Method and apparatus for generating a network topology representation based on inspection of application messages at a network device
US20060150249A1 (en) 2003-05-07 2006-07-06 Derek Gassen Method and apparatus for predictive and actual intrusion detection on a network
US7080408B1 (en) 2001-11-30 2006-07-18 Mcafee, Inc. Delayed-delivery quarantining of network communications having suspicious contents
US7080407B1 (en) 2000-06-27 2006-07-18 Cisco Technology, Inc. Virus detection and removal system and method for network-based systems
US20060161987A1 (en) 2004-11-10 2006-07-20 Guy Levy-Yurista Detecting and remedying unauthorized computer programs
US20060173992A1 (en) 2002-11-04 2006-08-03 Daniel Weber Event detection/anomaly correlation heuristics
US20060191010A1 (en) 2005-02-18 2006-08-24 Pace University System for intrusion detection and vulnerability assessment in a computer network using simulation and machine learning
US20060242709A1 (en) 2005-04-21 2006-10-26 Microsoft Corporation Protecting a computer that provides a Web service from malware
US20060251104A1 (en) 2005-03-31 2006-11-09 Fujitsu Limited Service apparatus, method of controlling switching of connection destination of client apparatus by service apparatus, and storage medium readable by machine
US20060288417A1 (en) 2005-06-21 2006-12-21 Sbc Knowledge Ventures Lp Method and apparatus for mitigating the effects of malicious software in a communication network
US20070006313A1 (en) 2004-09-17 2007-01-04 Phillip Porras Method and apparatus for combating malicious code
US20070006288A1 (en) 2005-06-30 2007-01-04 Microsoft Corporation Controlling network access
US20070011174A1 (en) 1998-09-22 2007-01-11 Kazuo Takaragi Method and a device for managing a computer network
US20070016951A1 (en) 2005-07-13 2007-01-18 Piccard Paul L Systems and methods for identifying sources of malware
US20070064689A1 (en) 2003-09-19 2007-03-22 Shin Yong M Method of controlling communication between devices in a network and apparatus for the same
US20070143827A1 (en) 2005-12-21 2007-06-21 Fiberlink Methods and systems for intelligently controlling access to computing resources
US20070157306A1 (en) 2005-12-30 2007-07-05 Elrod Craig T Network threat detection and mitigation
US7243371B1 (en) 2001-11-09 2007-07-10 Cisco Technology, Inc. Method and system for configurable network intrusion detection
US20070192858A1 (en) 2006-02-16 2007-08-16 Infoexpress, Inc. Peer based network access control
US20070208822A1 (en) 2006-03-01 2007-09-06 Microsoft Corporation Honey Monkey Network Exploration
US20070240220A1 (en) 2006-04-06 2007-10-11 George Tuvell System and method for managing malware protection on mobile devices
US20070250930A1 (en) 2004-04-01 2007-10-25 Ashar Aziz Virtual machine with dynamic data flow analysis
US20080005782A1 (en) 2004-04-01 2008-01-03 Ashar Aziz Heuristic based capture with replay to virtual machine
GB2439806A (en) 2006-06-30 2008-01-09 Sophos Plc Classifying software as malware using characteristics (or "genes")
US20080040710A1 (en) 2006-04-05 2008-02-14 Prevx Limited Method, computer program and computer for analysing an executable computer file
US20080077793A1 (en) 2006-09-21 2008-03-27 Sensory Networks, Inc. Apparatus and method for high throughput network security systems
WO2008041950A2 (en) 2006-10-04 2008-04-10 Trek 2000 International Ltd. Method, apparatus and system for authentication of external storage devices
US20080134334A1 (en) 2006-11-30 2008-06-05 Electronics And Telecommunications Research Institute Apparatus and method for detecting network attack
US20080141376A1 (en) 2006-10-24 2008-06-12 Pc Tools Technology Pty Ltd. Determining maliciousness of software
US20080184367A1 (en) 2007-01-25 2008-07-31 Mandiant, Inc. System and method for determining data entropy to identify malware
US7448084B1 (en) 2002-01-25 2008-11-04 The Trustees Of Columbia University In The City Of New York System and methods for detecting intrusions in a computer system by monitoring operating system registry accesses
US7458098B2 (en) 2002-03-08 2008-11-25 Secure Computing Corporation Systems and methods for enhancing electronic communication security
US20080307524A1 (en) 2004-04-08 2008-12-11 The Regents Of The University Of California Detecting Public Network Attacks Using Signatures and Fast Content Analysis
US7467408B1 (en) 2002-09-09 2008-12-16 Cisco Technology, Inc. Method and apparatus for capturing and filtering datagrams for network security monitoring
US20080320594A1 (en) 2007-03-19 2008-12-25 Xuxian Jiang Malware Detector
US20090003317A1 (en) 2007-06-29 2009-01-01 Kasralikar Rahul S Method and mechanism for port redirects in a network switch
US20090064332A1 (en) 2007-04-04 2009-03-05 Phillip Andrew Porras Method and apparatus for generating highly predictive blacklists
US7519990B1 (en) 2002-07-19 2009-04-14 Fortinet, Inc. Managing network traffic flow
US20090126015A1 (en) 2007-10-02 2009-05-14 Monastyrsky Alexey V System and method for detecting multi-component malware
US20090125976A1 (en) 2007-11-08 2009-05-14 Docomo Communications Laboratories Usa, Inc. Automated test input generation for web applications
US7540025B2 (en) 2004-11-18 2009-05-26 Cisco Technology, Inc. Mitigating network attacks using automatic signature generation
US20090144823A1 (en) 2006-03-27 2009-06-04 Gerardo Lamastra Method and System for Mobile Network Security, Related Network and Computer Program Product
US20090158430A1 (en) 2005-10-21 2009-06-18 Borders Kevin R Method, system and computer program product for detecting at least one of security threats and undesirable computer files
US20090172815A1 (en) 2007-04-04 2009-07-02 Guofei Gu Method and apparatus for detecting malware infection
US20090199274A1 (en) 2008-02-01 2009-08-06 Matthew Frazier method and system for collaboration during an event
US20090198689A1 (en) 2008-02-01 2009-08-06 Matthew Frazier System and method for data preservation and retrieval
US20090198651A1 (en) 2008-02-01 2009-08-06 Jason Shiffer Method and system for analyzing data related to an event
US20090198670A1 (en) 2008-02-01 2009-08-06 Jason Shiffer Method and system for collecting and organizing data corresponding to an event
US20090241190A1 (en) 2008-03-24 2009-09-24 Michael Todd System and method for securing a network from zero-day vulnerability exploits
US20090300589A1 (en) 2008-06-03 2009-12-03 Isight Partners, Inc. Electronic Crime Detection and Tracking
US7639714B2 (en) 2003-11-12 2009-12-29 The Trustees Of Columbia University In The City Of New York Apparatus method and medium for detecting payload anomaly using n-gram distribution of normal data
US20100030996A1 (en) 2008-08-01 2010-02-04 Mandiant, Inc. System and Method for Forensic Identification of Elements Within a Computer System
US20100058474A1 (en) 2008-08-29 2010-03-04 Avg Technologies Cz, S.R.O. System and method for the detection of malware
US20100077481A1 (en) 2008-09-22 2010-03-25 Microsoft Corporation Collecting and analyzing malware data
US7698548B2 (en) 2005-12-08 2010-04-13 Microsoft Corporation Communications traffic segregation for security purposes
US20100115621A1 (en) 2008-11-03 2010-05-06 Stuart Gresley Staniford Systems and Methods for Detecting Malicious Network Content
US20100132038A1 (en) 2008-11-26 2010-05-27 Zaitsev Oleg V System and Method for Computer Malware Detection
US20100154056A1 (en) 2008-12-17 2010-06-17 Symantec Corporation Context-Aware Real-Time Computer-Protection Systems and Methods
US20100192223A1 (en) 2004-04-01 2010-07-29 Osman Abdoul Ismael Detecting Malicious Network Content Using Virtual Environment Components
US7779463B2 (en) 2004-05-11 2010-08-17 The Trustees Of Columbia University In The City Of New York Systems and methods for correlating and distributing intrusion alert information among collaborating computer systems
US20100281542A1 (en) 2004-11-24 2010-11-04 The Trustees Of Columbia University In The City Of New York Systems and Methods for Correlating and Distributing Intrusion Alert Information Among Collaborating Computer Systems
US7854007B2 (en) 2005-05-05 2010-12-14 Ironport Systems, Inc. Identifying threats in electronic messages
US20110078794A1 (en) 2009-09-30 2011-03-31 Jayaraman Manni Network-Based Binary File Extraction and Analysis for Malware Detection
US20110093951A1 (en) 2004-06-14 2011-04-21 NetForts, Inc. Computer worm defense system and method
US20110099633A1 (en) 2004-06-14 2011-04-28 NetForts, Inc. System and method of containing computer worms
US20110099635A1 (en) 2009-10-27 2011-04-28 Silberman Peter J System and method for detecting executable machine instructions in a data stream
US7949849B2 (en) 2004-08-24 2011-05-24 Mcafee, Inc. File system for a capture system
US20110167493A1 (en) 2008-05-27 2011-07-07 Yingbo Song Systems, methods, ane media for detecting network anomalies
WO2011084431A2 (en) 2009-12-15 2011-07-14 Mcafee, Inc. Systems and methods for behavioral sandboxing
US20110178942A1 (en) 2010-01-18 2011-07-21 Isight Partners, Inc. Targeted Security Implementation Through Security Loss Forecasting
US20110219450A1 (en) 2010-03-08 2011-09-08 Raytheon Company System And Method For Malware Detection
US8020206B2 (en) 2006-07-10 2011-09-13 Websense, Inc. System and method of analyzing web content
WO2011112348A1 (en) 2010-03-08 2011-09-15 Raytheon Company System and method for host-level malware detection
US20110225624A1 (en) 2010-03-15 2011-09-15 Symantec Corporation Systems and Methods for Providing Network Access Control in Virtual Environments
US20110247072A1 (en) 2008-11-03 2011-10-06 Stuart Gresley Staniford Systems and Methods for Detecting Malicious PDF Network Content
US8045458B2 (en) 2007-11-08 2011-10-25 Mcafee, Inc. Prioritizing network traffic
US20110307956A1 (en) 2010-06-11 2011-12-15 M86 Security, Inc. System and method for analyzing malicious code using a static analyzer
US20110314546A1 (en) 2004-04-01 2011-12-22 Ashar Aziz Electronic Message Analysis for Malware Detection
WO2012075336A1 (en) 2010-12-01 2012-06-07 Sourcefire, Inc. Detecting malicious software through contextual convictions, generic signatures and machine learning techniques
US8201246B1 (en) 2008-02-25 2012-06-12 Trend Micro Incorporated Preventing malicious codes from performing malicious actions in a computer system
US8204984B1 (en) 2004-04-01 2012-06-19 Fireeye, Inc. Systems and methods for detecting encrypted bot command and control communication channels
US8214905B1 (en) 2011-12-21 2012-07-03 Kaspersky Lab Zao System and method for dynamically allocating computing resources for processing security information
US20120174218A1 (en) 2010-12-30 2012-07-05 Everis Inc. Network Communication System With Improved Security
US20120210423A1 (en) 2010-12-01 2012-08-16 Oliver Friedrichs Method and apparatus for detecting malicious software through contextual convictions, generic signatures and machine learning techniques
US20120233698A1 (en) 2011-03-07 2012-09-13 Isight Partners, Inc. Information System Security Based on Threat Vectors
GB2490431A (en) 2012-05-15 2012-10-31 F Secure Corp Foiling document exploit malware using repeat calls
US20120278886A1 (en) 2011-04-27 2012-11-01 Michael Luna Detection and filtering of malware based on traffic observations made in a distributed mobile traffic management system
US20120331553A1 (en) 2006-04-20 2012-12-27 Fireeye, Inc. Dynamic signature creation and enforcement
US8370939B2 (en) 2010-07-23 2013-02-05 Kaspersky Lab, Zao Protection against malware on web resources
US8370938B1 (en) 2009-04-25 2013-02-05 Dasient, Inc. Mitigating malware
US20130097706A1 (en) 2011-09-16 2013-04-18 Veracode, Inc. Automated behavioral and static analysis using an instrumented sandbox and machine learning classification for mobile security
WO2013067505A1 (en) 2011-11-03 2013-05-10 Cyphort, Inc. Systems and methods for virtualization and emulation assisted malware detection
US8464340B2 (en) 2007-09-04 2013-06-11 Samsung Electronics Co., Ltd. System, apparatus and method of malware diagnosis mechanism based on immunization database
US20130185795A1 (en) 2012-01-12 2013-07-18 Arxceo Corporation Methods and systems for providing network protection by progressive degradation of service
US20130227691A1 (en) 2012-02-24 2013-08-29 Ashar Aziz Detecting Malicious Network Content
US8528086B1 (en) 2004-04-01 2013-09-03 Fireeye, Inc. System and method of detecting computer worms
US8539582B1 (en) 2004-04-01 2013-09-17 Fireeye, Inc. Malware containment and security analysis on connection
US20130247186A1 (en) 2012-03-15 2013-09-19 Aaron LeMasters System to Bypass a Compromised Mass Storage Device Driver Stack and Method Thereof
US8561177B1 (en) 2004-04-01 2013-10-15 Fireeye, Inc. Systems and methods for detecting communication channels of bots
US8566946B1 (en) 2006-04-20 2013-10-22 Fireeye, Inc. Malware containment on connection
US20140032875A1 (en) 2012-07-27 2014-01-30 James Butler Physical Memory Forensics System and Method
US20140181131A1 (en) 2012-12-26 2014-06-26 David Ross Timeline wrinkling system and method
US20140189882A1 (en) 2012-12-28 2014-07-03 Robert Jung System and method for the programmatic runtime de-obfuscation of obfuscated software utilizing virtual machine introspection and manipulation of virtual machine guest memory permissions
US20140189866A1 (en) 2012-12-31 2014-07-03 Jason Shiffer Identification of obfuscated computer items using visual algorithms
US20140280245A1 (en) 2013-03-15 2014-09-18 Mandiant Corporation System and method to visualize user sessions
US20140283037A1 (en) 2013-03-15 2014-09-18 Michael Sikorski System and Method to Extract and Utilize Disassembly Features to Classify Software Intent
US20140283063A1 (en) 2013-03-15 2014-09-18 Matthew Thompson System and Method to Manage Sinkholes
US8881282B1 (en) 2004-04-01 2014-11-04 Fireeye, Inc. Systems and methods for malware attack detection and identification
US20140337836A1 (en) 2013-05-10 2014-11-13 Fireeye, Inc. Optimized resource allocation for virtual machines within a malware content detection system
US20140344926A1 (en) 2013-03-15 2014-11-20 Sean Cunningham System and method employing structured intelligence to verify and contain threats at endpoints
US8898788B1 (en) 2004-04-01 2014-11-25 Fireeye, Inc. Systems and methods for malware attack prevention
US20140380474A1 (en) 2013-06-24 2014-12-25 Fireeye, Inc. System and Method for Detecting Time-Bomb Malware
US20140380473A1 (en) 2013-06-24 2014-12-25 Fireeye, Inc. Zero-day discovery system
US20150007312A1 (en) 2013-06-28 2015-01-01 Vinay Pidathala System and method for detecting malicious links in electronic messages
US8990944B1 (en) 2013-02-23 2015-03-24 Fireeye, Inc. Systems and methods for automatically detecting backdoors
US20150096023A1 (en) 2013-09-30 2015-04-02 Fireeye, Inc. Fuzzy hash of behavioral results
US20150096025A1 (en) 2013-09-30 2015-04-02 Fireeye, Inc. System, Apparatus and Method for Using Malware Analysis Results to Drive Adaptive Instrumentation of Virtual Machines to Improve Exploit Detection
US20150096024A1 (en) 2013-09-30 2015-04-02 Fireeye, Inc. Advanced persistent threat (apt) detection center
US20150096022A1 (en) 2013-09-30 2015-04-02 Michael Vincent Dynamically adaptive framework and method for classifying malware using intelligent static, emulation, and dynamic analyses
US9009822B1 (en) 2013-02-23 2015-04-14 Fireeye, Inc. Framework for multi-phase analysis of mobile applications
US9009823B1 (en) 2013-02-23 2015-04-14 Fireeye, Inc. Framework for efficient security coverage of mobile software applications installed on mobile devices
US9027135B1 (en) 2004-04-01 2015-05-05 Fireeye, Inc. Prospective client identification using malware attack detection
US20150186645A1 (en) 2013-12-26 2015-07-02 Fireeye, Inc. System, apparatus and method for automatically verifying exploits within suspect objects and highlighting the display information associated with the verified exploits
US20150199513A1 (en) 2014-01-16 2015-07-16 Fireeye, Inc. Threat-aware microvisor
US20150220735A1 (en) 2014-02-05 2015-08-06 Fireeye, Inc. Detection efficacy of virtual machine-based analysis with application specific events
US9104867B1 (en) 2013-03-13 2015-08-11 Fireeye, Inc. Malicious content analysis using simulated user interaction without user involvement
US9159035B1 (en) 2013-02-23 2015-10-13 Fireeye, Inc. Framework for computer application analysis of sensitive information tracking
US9176843B1 (en) 2013-02-23 2015-11-03 Fireeye, Inc. Framework for efficient security coverage of mobile software applications
US9189627B1 (en) 2013-11-21 2015-11-17 Fireeye, Inc. System, apparatus and method for conducting on-the-fly decryption of encrypted objects for malware detection
US9195829B1 (en) 2013-02-23 2015-11-24 Fireeye, Inc. User interface with real-time visual playback along with synchronous textual analysis log display and event/time index for anomalous behavior detection in applications
US20150372980A1 (en) 2014-06-24 2015-12-24 Fireeye, Inc. Intrusion prevention and remedy system
US9223972B1 (en) 2014-03-31 2015-12-29 Fireeye, Inc. Dynamically remote tuning of a malware content detection system
US20160004869A1 (en) 2014-07-01 2016-01-07 Fireeye, Inc. Verification of trusted threat-aware microvisor
US20160006756A1 (en) 2014-07-01 2016-01-07 Fireeye, Inc. Trusted threat-aware microvisor
US9241010B1 (en) 2014-03-20 2016-01-19 Fireeye, Inc. System and method for network behavior detection
US9251343B1 (en) 2013-03-15 2016-02-02 Fireeye, Inc. Detecting bootkits resident on compromised computers
US20160044000A1 (en) 2014-08-05 2016-02-11 Fireeye, Inc. System and method to communicate sensitive information via one or more untrusted intermediate nodes with resilience to disconnected network topology
US9311479B1 (en) 2013-03-14 2016-04-12 Fireeye, Inc. Correlation and consolidation of analytic data for holistic view of a malware attack
US9355247B1 (en) 2013-03-13 2016-05-31 Fireeye, Inc. File extraction from memory dump for malicious content analysis
US9363280B1 (en) 2014-08-22 2016-06-07 Fireeye, Inc. System and method of detecting delivery of malware using cross-customer data
US9367681B1 (en) 2013-02-23 2016-06-14 Fireeye, Inc. Framework for efficient security coverage of mobile software applications using symbolic execution to reach regions of interest within an application
US20160191550A1 (en) 2014-12-29 2016-06-30 Fireeye, Inc. Microvisor-based malware detection endpoint architecture
US20160191547A1 (en) 2014-12-26 2016-06-30 Fireeye, Inc. Zero-Day Rotating Guest Image Profile
US9398028B1 (en) 2014-06-26 2016-07-19 Fireeye, Inc. System, device and method for detecting a malicious attack based on communcations between remotely hosted virtual machines and malicious web servers
US20160241580A1 (en) 2014-04-03 2016-08-18 Isight Partners, Inc. System and Method of Cyber Threat Structure Mapping and Application to Cyber Threat Mitigation
US20160241581A1 (en) 2014-04-03 2016-08-18 Isight Partners, Inc. System and Method of Cyber Threat Intensity Determination and Application to Cyber Threat Mitigation
US9426071B1 (en) 2013-08-22 2016-08-23 Fireeye, Inc. Storing network bidirectional flow data and metadata with efficient processing technique
US9432389B1 (en) 2014-03-31 2016-08-30 Fireeye, Inc. System, apparatus and method for detecting a malicious attack based on static analysis of a multi-flow object
US9430646B1 (en) 2013-03-14 2016-08-30 Fireeye, Inc. Distributed systems and methods for automatically detecting unknown bots and botnets
US9438623B1 (en) 2014-06-06 2016-09-06 Fireeye, Inc. Computer exploit detection using heap spray pattern matching
US9438613B1 (en) 2015-03-30 2016-09-06 Fireeye, Inc. Dynamic content activation for automated analysis of embedded objects
US20160285914A1 (en) 2015-03-25 2016-09-29 Fireeye, Inc. Exploit detection system
US9467460B1 (en) 2014-12-23 2016-10-11 Fireeye, Inc. Modularized database architecture using vertical partitioning for a state machine
US9483644B1 (en) 2015-03-31 2016-11-01 Fireeye, Inc. Methods for detecting file altering malware in VM based analysis
US20160323295A1 (en) 2015-04-28 2016-11-03 Isight Partners, Inc. Computer Imposed Countermeasures Driven by Malware Lineage
US20160335110A1 (en) 2015-03-31 2016-11-17 Fireeye, Inc. Selective virtualization for security threat detection
US9537972B1 (en) 2014-02-20 2017-01-03 Fireeye, Inc. Efficient access to sparse packets in large repositories of stored network traffic
US9565202B1 (en) 2013-03-13 2017-02-07 Fireeye, Inc. System and method for detecting exfiltration content
US9591015B1 (en) 2014-03-28 2017-03-07 Fireeye, Inc. System and method for offloading packet processing and static analysis operations
US9594904B1 (en) 2015-04-23 2017-03-14 Fireeye, Inc. Detecting malware based on reflection
US9594912B1 (en) 2014-06-06 2017-03-14 Fireeye, Inc. Return-oriented programming detection
US20170083703A1 (en) 2015-09-22 2017-03-23 Fireeye, Inc. Leveraging behavior-based rules for malware family classification
US9628498B1 (en) 2004-04-01 2017-04-18 Fireeye, Inc. System and method for bot detection
US9626509B1 (en) 2013-03-13 2017-04-18 Fireeye, Inc. Malicious content analysis with multi-version application support within single operating environment
US9635039B1 (en) 2013-05-13 2017-04-25 Fireeye, Inc. Classifying sets of malicious indicators for detecting command and control communications associated with malware
US9654485B1 (en) 2015-04-13 2017-05-16 Fireeye, Inc. Analytics-based security monitoring system and method
US9690933B1 (en) 2014-12-22 2017-06-27 Fireeye, Inc. Framework for classifying an object as malicious with machine learning for deploying updated predictive models
US9690936B1 (en) 2013-09-30 2017-06-27 Fireeye, Inc. Multistage system and method for analyzing obfuscated content for malware
US9690606B1 (en) 2015-03-25 2017-06-27 Fireeye, Inc. Selective system call monitoring
US9747446B1 (en) 2013-12-26 2017-08-29 Fireeye, Inc. System and method for run-time object classification
US9773112B1 (en) 2014-09-29 2017-09-26 Fireeye, Inc. Exploit detection of malware and malware families
US9781144B1 (en) 2014-09-30 2017-10-03 Fireeye, Inc. Determining duplicate objects for malware analysis using environmental/context information
US9825976B1 (en) 2015-09-30 2017-11-21 Fireeye, Inc. Detection and classification of exploit kits
US9825989B1 (en) 2015-09-30 2017-11-21 Fireeye, Inc. Cyber attack early warning system
US9824209B1 (en) 2013-02-23 2017-11-21 Fireeye, Inc. Framework for efficient security coverage of mobile software applications that is usable to harden in the field code
US9824216B1 (en) 2015-12-31 2017-11-21 Fireeye, Inc. Susceptible environment detection system
US9838417B1 (en) 2014-12-30 2017-12-05 Fireeye, Inc. Intelligent context aware user interaction for malware detection
US9888016B1 (en) 2013-06-28 2018-02-06 Fireeye, Inc. System and method for detecting phishing using password prediction
US20180048660A1 (en) 2015-11-10 2018-02-15 Fireeye, Inc. Launcher for setting analysis environment variations for malware detection
US9912681B1 (en) 2015-03-31 2018-03-06 Fireeye, Inc. Injection of content processing delay in an endpoint
US9921978B1 (en) 2013-11-08 2018-03-20 Fireeye, Inc. System and method for enhanced security of storage devices
US9934376B1 (en) 2014-12-29 2018-04-03 Fireeye, Inc. Malware detection appliance architecture
US9973531B1 (en) 2014-06-06 2018-05-15 Fireeye, Inc. Shellcode detection
US10027689B1 (en) 2014-09-29 2018-07-17 Fireeye, Inc. Interactive infection visualization for improved exploit detection and signature generation for malware and malware families
US10025691B1 (en) 2016-09-09 2018-07-17 Fireeye, Inc. Verification of complex software code using a modularized architecture
US10033759B1 (en) 2015-09-28 2018-07-24 Fireeye, Inc. System and method of threat detection under hypervisor control
US10033747B1 (en) 2015-09-29 2018-07-24 Fireeye, Inc. System and method for detecting interpreter-based exploit attacks
US10050998B1 (en) 2015-12-30 2018-08-14 Fireeye, Inc. Malicious message analysis system
US10089461B1 (en) 2013-09-30 2018-10-02 Fireeye, Inc. Page replacement code injection
US20180288077A1 (en) * 2017-03-30 2018-10-04 Fireeye, Inc. Attribute-controlled malware detection
US10108446B1 (en) 2015-12-11 2018-10-23 Fireeye, Inc. Late load technique for deploying a virtualization layer underneath a running operating system
US10121000B1 (en) 2016-06-28 2018-11-06 Fireeye, Inc. System and method to detect premium attacks on electronic networks and electronic devices
US10133866B1 (en) 2015-12-30 2018-11-20 Fireeye, Inc. System and method for triggering analysis of an object for malware in response to modification of that object
US20180375886A1 (en) * 2017-06-22 2018-12-27 Oracle International Corporation Techniques for monitoring privileged users and detecting anomalous activities in a computing environment
US10169585B1 (en) 2016-06-22 2019-01-01 Fireeye, Inc. System and methods for advanced malware detection through placement of transition events
US10192052B1 (en) 2013-09-30 2019-01-29 Fireeye, Inc. System, apparatus and method for classifying a file as malicious using static scanning
US10191861B1 (en) 2016-09-06 2019-01-29 Fireeye, Inc. Technique for implementing memory views using a layered virtualization architecture
US10210329B1 (en) 2015-09-30 2019-02-19 Fireeye, Inc. Method to detect application execution hijacking using memory protection
US10216927B1 (en) 2015-06-30 2019-02-26 Fireeye, Inc. System and method for protecting memory pages associated with a process using a virtualization layer
US20190068619A1 (en) * 2017-08-24 2019-02-28 At&T Intellectual Property I, L.P. Systems and methods for dynamic analysis and resolution of network anomalies
US10242185B1 (en) 2014-03-21 2019-03-26 Fireeye, Inc. Dynamic guest image creation and rollback
US20190104154A1 (en) 2017-10-01 2019-04-04 Fireeye, Inc. Phishing attack detection
US20190132334A1 (en) 2017-10-27 2019-05-02 Fireeye, Inc. System and method for analyzing binary code for malware classification using artificial neural network techniques
US10341365B1 (en) 2015-12-30 2019-07-02 Fireeye, Inc. Methods and system for hiding transition events for malware detection
US20190207966A1 (en) 2017-12-28 2019-07-04 Fireeye, Inc. Platform and Method for Enhanced Cyber-Attack Detection and Response Employing a Global Data Store
US20190207967A1 (en) 2017-12-28 2019-07-04 Fireeye, Inc. Platform and method for retroactive reclassification employing a cybersecurity-based global data store
US10395029B1 (en) 2015-06-30 2019-08-27 Fireeye, Inc. Virtual system and method with threat protection
US10430586B1 (en) 2016-09-07 2019-10-01 Fireeye, Inc. Methods of identifying heap spray attacks using memory anomaly detection
US10447728B1 (en) 2015-12-10 2019-10-15 Fireeye, Inc. Technique for protecting guest processes using a layered virtualization architecture
US10454950B1 (en) 2015-06-30 2019-10-22 Fireeye, Inc. Centralized aggregation technique for detecting lateral movement of stealthy cyber-attacks
US10462173B1 (en) 2016-06-30 2019-10-29 Fireeye, Inc. Malware detection verification and enhancement by coordinating endpoint and malware detection systems
US10474813B1 (en) 2015-03-31 2019-11-12 Fireeye, Inc. Code injection technique for remediation at an endpoint of a network
US10476906B1 (en) 2016-03-25 2019-11-12 Fireeye, Inc. System and method for managing formation and modification of a cluster within a malware detection system
US10491627B1 (en) 2016-09-29 2019-11-26 Fireeye, Inc. Advanced malware detection using similarity analysis
US10503904B1 (en) 2017-06-29 2019-12-10 Fireeye, Inc. Ransomware detection and mitigation
US10515214B1 (en) 2013-09-30 2019-12-24 Fireeye, Inc. System and method for classifying malware within content created during analysis of a specimen
US10523609B1 (en) 2016-12-27 2019-12-31 Fireeye, Inc. Multi-vector malware detection and analysis
US10552610B1 (en) 2016-12-22 2020-02-04 Fireeye, Inc. Adaptive virtual machine snapshot update framework for malware behavioral analysis
US10554507B1 (en) 2017-03-30 2020-02-04 Fireeye, Inc. Multi-level control for enhanced resource and object evaluation management of malware detection system
US10565378B1 (en) 2015-12-30 2020-02-18 Fireeye, Inc. Exploit of privilege detection framework
US10581879B1 (en) 2016-12-22 2020-03-03 Fireeye, Inc. Enhanced malware detection for generated objects
US10581874B1 (en) 2015-12-31 2020-03-03 Fireeye, Inc. Malware detection system with contextual analysis
US10587647B1 (en) 2016-11-22 2020-03-10 Fireeye, Inc. Technique for malware detection capability comparison of network security devices
US10592678B1 (en) 2016-09-09 2020-03-17 Fireeye, Inc. Secure communications between peers using a verified virtual trusted platform module
US10601848B1 (en) 2017-06-29 2020-03-24 Fireeye, Inc. Cyber-security system and method for weak indicator detection and correlation to generate strong indicators
US10601865B1 (en) 2015-09-30 2020-03-24 Fireeye, Inc. Detection of credential spearphishing attacks using email analysis
US10601863B1 (en) 2016-03-25 2020-03-24 Fireeye, Inc. System and method for managing sensor enrollment
US10642753B1 (en) 2015-06-30 2020-05-05 Fireeye, Inc. System and method for protecting a software component running in virtual machine using a virtualization layer
US10671726B1 (en) 2014-09-22 2020-06-02 Fireeye Inc. System and method for malware analysis using thread-level event monitoring
US10671721B1 (en) 2016-03-25 2020-06-02 Fireeye, Inc. Timeout management services
US10706149B1 (en) 2015-09-30 2020-07-07 Fireeye, Inc. Detecting delayed activation malware using a primary controller and plural time controllers
US10715542B1 (en) 2015-08-14 2020-07-14 Fireeye, Inc. Mobile application risk analysis
US10726127B1 (en) 2015-06-30 2020-07-28 Fireeye, Inc. System and method for protecting a software component running in a virtual machine through virtual interrupts by the virtualization layer
US20200241911A1 (en) * 2019-01-29 2020-07-30 Hewlett Packard Enterprise Development Lp Automatically freeing up virtual machine resources based on virtual machine tagging
US20200252428A1 (en) 2018-12-21 2020-08-06 Fireeye, Inc. System and method for detecting cyberattacks impersonating legitimate sources
US20200257815A1 (en) * 2019-02-12 2020-08-13 Citrix Systems, Inc. Accessing encrypted user data at a multi-tenant hosted cloud service
US10747872B1 (en) 2017-09-27 2020-08-18 Fireeye, Inc. System and method for preventing malware evasion
US10785255B1 (en) 2016-03-25 2020-09-22 Fireeye, Inc. Cluster configuration within a scalable malware detection system
US10791138B1 (en) 2017-03-30 2020-09-29 Fireeye, Inc. Subscription-based malware detection
US10795991B1 (en) 2016-11-08 2020-10-06 Fireeye, Inc. Enterprise search
US10805340B1 (en) 2014-06-26 2020-10-13 Fireeye, Inc. Infection vector and malware tracking with an interactive user display
US20200327124A1 (en) * 2019-04-10 2020-10-15 Snowflake Inc. Internal resource provisioning in database systems
US10817606B1 (en) 2015-09-30 2020-10-27 Fireeye, Inc. Detecting delayed activation malware using a run-time monitoring agent and time-dilation logic
US20200341920A1 (en) * 2019-04-29 2020-10-29 Instant Labs, Inc. Data access optimized across access nodes
US10826931B1 (en) 2018-03-29 2020-11-03 Fireeye, Inc. System and method for predicting and mitigating cybersecurity system misconfigurations
US10826933B1 (en) 2016-03-31 2020-11-03 Fireeye, Inc. Technique for verifying exploit/malware at malware detection appliance through correlation with endpoints
US10846117B1 (en) 2015-12-10 2020-11-24 Fireeye, Inc. Technique for establishing secure communication between host and guest processes of a virtualization architecture
US10855700B1 (en) 2017-06-29 2020-12-01 Fireeye, Inc. Post-intrusion detection of cyber-attacks during lateral movement within networks
US10893068B1 (en) 2017-06-30 2021-01-12 Fireeye, Inc. Ransomware file modification prevention technique
US10893059B1 (en) 2016-03-31 2021-01-12 Fireeye, Inc. Verification and enhancement using detection systems located at the network periphery and endpoint devices
US10902119B1 (en) 2017-03-30 2021-01-26 Fireeye, Inc. Data extraction system for malware analysis
US10904286B1 (en) 2017-03-24 2021-01-26 Fireeye, Inc. Detection of phishing attacks using similarity analysis
US11522884B1 (en) * 2019-12-24 2022-12-06 Fireeye Security Holdings Us Llc Subscription and key management system
US20220400130A1 (en) * 2017-11-27 2022-12-15 Lacework, Inc. Generating User-Specific Polygraphs For Network Activity
US20220400129A1 (en) * 2017-11-27 2022-12-15 Lacework, Inc. Detecting Anomalous Behavior Of A Device
US11537627B1 (en) * 2018-09-28 2022-12-27 Splunk Inc. Information technology networked cloud service monitoring
US20230007483A1 (en) * 2019-11-14 2023-01-05 Intel Corporation Technologies for implementing the radio equipment directive
US11550900B1 (en) * 2018-11-16 2023-01-10 Sophos Limited Malware mitigation based on runtime memory allocation
US20230008173A1 (en) * 2015-10-28 2023-01-12 Qomplx, Inc. System and method for detection and mitigation of data source compromises in adversarial information environments
US20230014242A1 (en) * 2017-01-10 2023-01-19 Confiant Inc Methods and apparatus for hindrance of adverse and detrimental digital content in computer networks
US11570209B2 (en) * 2015-10-28 2023-01-31 Qomplx, Inc. Detecting and mitigating attacks using forged authentication objects within a domain
US11570204B2 (en) * 2015-10-28 2023-01-31 Qomplx, Inc. Detecting and mitigating golden ticket attacks within a domain
US20230032686A1 (en) * 2017-11-27 2023-02-02 Lacework, Inc. Using real-time monitoring to inform static analysis

Patent Citations (508)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3171553A (en) 1961-12-08 1965-03-02 Jr Ralph Mooney Power backhoe boom control
US20070011174A1 (en) 1998-09-22 2007-01-11 Kazuo Takaragi Method and a device for managing a computer network
US20030191957A1 (en) 1999-02-19 2003-10-09 Ari Hypponen Distributed computer virus detection and scanning
US20020169952A1 (en) 1999-06-21 2002-11-14 Disanto Frank J. Method and apparatus for securing e-mail attachments
US20020188887A1 (en) 2000-05-19 2002-12-12 Self Repairing Computers, Inc. Computer with switchable components
US7080407B1 (en) 2000-06-27 2006-07-18 Cisco Technology, Inc. Virus detection and removal system and method for network-based systems
WO2002006928A2 (en) 2000-07-14 2002-01-24 Vcis, Inc. Computer immune system and method for detecting unwanted code in a computer system
US20020038430A1 (en) 2000-09-13 2002-03-28 Charles Edwards System and method of data collection, processing, analysis, and annotation for monitoring cyber-threats and the notification thereof to subscribers
WO2002023805A2 (en) 2000-09-13 2002-03-21 Karakoram Limited Monitoring network activity
US20040117478A1 (en) 2000-09-13 2004-06-17 Triulzi Arrigo G.B. Monitoring network activity
US20020091819A1 (en) 2001-01-05 2002-07-11 Daniel Melchione System and method for configuring computer applications and devices using inheritance
US20020095607A1 (en) 2001-01-18 2002-07-18 Catherine Lin-Hendel Security protection for computers and computer-networks
US20020184528A1 (en) 2001-04-12 2002-12-05 Shevenell Michael P. Method and apparatus for security management via vicarious network devices
US20050021740A1 (en) 2001-08-14 2005-01-27 Bar Anat Bremler Detecting and protecting against worm traffic on a network
US20030084318A1 (en) 2001-10-31 2003-05-01 Schertz Richard L. System and method of graphically correlating data for an intrusion protection system
US7243371B1 (en) 2001-11-09 2007-07-10 Cisco Technology, Inc. Method and system for configurable network intrusion detection
US7080408B1 (en) 2001-11-30 2006-07-18 Mcafee, Inc. Delayed-delivery quarantining of network communications having suspicious contents
US20050125195A1 (en) 2001-12-21 2005-06-09 Juergen Brendel Method, apparatus and sofware for network traffic management
US20090083855A1 (en) 2002-01-25 2009-03-26 Frank Apap System and methods for detecting intrusions in a computer system by monitoring operating system registry accesses
US7448084B1 (en) 2002-01-25 2008-11-04 The Trustees Of Columbia University In The City Of New York System and methods for detecting intrusions in a computer system by monitoring operating system registry accesses
US6941348B2 (en) 2002-02-19 2005-09-06 Postini, Inc. Systems and methods for managing the transmission of electronic messages through active message date updating
US7458098B2 (en) 2002-03-08 2008-11-25 Secure Computing Corporation Systems and methods for enhancing electronic communication security
US20030188190A1 (en) 2002-03-26 2003-10-02 Aaron Jeffrey A. System and method of intrusion detection employing broad-scope monitoring
US20040015712A1 (en) 2002-07-19 2004-01-22 Peter Szor Heuristic detection of malicious computer code by page tracking
US7519990B1 (en) 2002-07-19 2009-04-14 Fortinet, Inc. Managing network traffic flow
US20080189787A1 (en) 2002-07-23 2008-08-07 International Business Machines Corporation Method and Apparatus for the Automatic Determination of Potentially Worm-Like Behavior of a Program
US20040019832A1 (en) 2002-07-23 2004-01-29 International Business Machines Corporation Method and apparatus for the automatic determination of potentially worm-like behavior of a program
US7467408B1 (en) 2002-09-09 2008-12-16 Cisco Technology, Inc. Method and apparatus for capturing and filtering datagrams for network security monitoring
US20050283839A1 (en) 2002-09-10 2005-12-22 Ingenia Technology Limited Security device and system
US20040117624A1 (en) 2002-10-21 2004-06-17 Brandt David D. System and methodology providing automation security analysis, validation, and learning in an industrial controller environment
US20060173992A1 (en) 2002-11-04 2006-08-03 Daniel Weber Event detection/anomaly correlation heuristics
US6898632B2 (en) 2003-03-31 2005-05-24 Finisar Corporation Network security tap for use with intrusion detection system
US20040255161A1 (en) 2003-04-12 2004-12-16 Deep Nines, Inc. System and method for network edge data protection
US20060150249A1 (en) 2003-05-07 2006-07-06 Derek Gassen Method and apparatus for predictive and actual intrusion detection on a network
US7308716B2 (en) 2003-05-20 2007-12-11 International Business Machines Corporation Applying blocking measures progressively to malicious network traffic
US20040236963A1 (en) 2003-05-20 2004-11-25 International Business Machines Corporation Applying blocking measures progressively to malicious network traffic
US20080072326A1 (en) 2003-05-20 2008-03-20 Danford Robert W Applying blocking measures progressively to malicious network traffic
US20050108562A1 (en) 2003-06-18 2005-05-19 Khazan Roger I. Technique for detecting executable malicious code using a combination of static and dynamic analyses
US20040268147A1 (en) 2003-06-30 2004-12-30 Wiederin Shawn E Integrated security system
US20070064689A1 (en) 2003-09-19 2007-03-22 Shin Yong M Method of controlling communication between devices in a network and apparatus for the same
US20050086523A1 (en) 2003-10-15 2005-04-21 Zimmer Vincent J. Methods and apparatus to provide network traffic support and physical security support
US7496961B2 (en) 2003-10-15 2009-02-24 Intel Corporation Methods and apparatus to provide network traffic support and physical security support
US20050149726A1 (en) 2003-10-21 2005-07-07 Amit Joshi Systems and methods for secure client applications
US20050091513A1 (en) 2003-10-28 2005-04-28 Fujitsu Limited Device, method and program for detecting unauthorized access
US7639714B2 (en) 2003-11-12 2009-12-29 The Trustees Of Columbia University In The City Of New York Apparatus method and medium for detecting payload anomaly using n-gram distribution of normal data
US20050157662A1 (en) 2004-01-20 2005-07-21 Justin Bingham Systems and methods for detecting a compromised network
US10284574B1 (en) 2004-04-01 2019-05-07 Fireeye, Inc. System and method for threat detection and identification
US8539582B1 (en) 2004-04-01 2013-09-17 Fireeye, Inc. Malware containment and security analysis on connection
US20160301703A1 (en) 2004-04-01 2016-10-13 Fireeye, Inc. Systems and methods for computer worm defense
US8776229B1 (en) 2004-04-01 2014-07-08 Fireeye, Inc. System and method of detecting malicious traffic while reducing false positives
US9027135B1 (en) 2004-04-01 2015-05-05 Fireeye, Inc. Prospective client identification using malware attack detection
US8516593B2 (en) 2004-04-01 2013-08-20 Fireeye, Inc. Systems and methods for computer worm defense
US8635696B1 (en) 2004-04-01 2014-01-21 Fireeye, Inc. System and method of detecting time-delayed malicious traffic
US8793787B2 (en) 2004-04-01 2014-07-29 Fireeye, Inc. Detecting malicious network content using virtual environment components
US10757120B1 (en) 2004-04-01 2020-08-25 Fireeye, Inc. Malicious network content detection
US20100192223A1 (en) 2004-04-01 2010-07-29 Osman Abdoul Ismael Detecting Malicious Network Content Using Virtual Environment Components
US10623434B1 (en) 2004-04-01 2020-04-14 Fireeye, Inc. System and method for virtual analysis of network data
US8881282B1 (en) 2004-04-01 2014-11-04 Fireeye, Inc. Systems and methods for malware attack detection and identification
US20130047257A1 (en) 2004-04-01 2013-02-21 Ashar Aziz Systems and Methods for Computer Worm Defense
US20130036472A1 (en) 2004-04-01 2013-02-07 FireEye, Inc Computer Worm Defense System and Method
US10587636B1 (en) 2004-04-01 2020-03-10 Fireeye, Inc. System and method for bot detection
US9071638B1 (en) 2004-04-01 2015-06-30 Fireeye, Inc. System and method for malware containment
US20070250930A1 (en) 2004-04-01 2007-10-25 Ashar Aziz Virtual machine with dynamic data flow analysis
US8561177B1 (en) 2004-04-01 2013-10-15 Fireeye, Inc. Systems and methods for detecting communication channels of bots
US20080005782A1 (en) 2004-04-01 2008-01-03 Ashar Aziz Heuristic based capture with replay to virtual machine
US10567405B1 (en) 2004-04-01 2020-02-18 Fireeye, Inc. System for detecting a presence of malware from behavioral analysis
US10511614B1 (en) 2004-04-01 2019-12-17 Fireeye, Inc. Subscription based malware detection under management system control
US8584239B2 (en) 2004-04-01 2013-11-12 Fireeye, Inc. Virtual machine with dynamic data flow analysis
US8291499B2 (en) 2004-04-01 2012-10-16 Fireeye, Inc. Policy based capture with replay to virtual machine
US8984638B1 (en) 2004-04-01 2015-03-17 Fireeye, Inc. System and method for analyzing suspicious network data
US9282109B1 (en) 2004-04-01 2016-03-08 Fireeye, Inc. System and method for analyzing packets
US8528086B1 (en) 2004-04-01 2013-09-03 Fireeye, Inc. System and method of detecting computer worms
US8898788B1 (en) 2004-04-01 2014-11-25 Fireeye, Inc. Systems and methods for malware attack prevention
US9306960B1 (en) 2004-04-01 2016-04-05 Fireeye, Inc. Systems and methods for unauthorized activity defense
US9591020B1 (en) 2004-04-01 2017-03-07 Fireeye, Inc. System and method for signature generation
US20160127393A1 (en) 2004-04-01 2016-05-05 Fireeye, Inc. Electronic Message Analysis For Malware Detection
US9838411B1 (en) 2004-04-01 2017-12-05 Fireeye, Inc. Subscriber based protection system
US9661018B1 (en) 2004-04-01 2017-05-23 Fireeye, Inc. System and method for detecting anomalous behaviors using a virtual machine environment
US20120174186A1 (en) 2004-04-01 2012-07-05 Ashar Aziz Policy Based Capture with Replay to Virtual Machine
US10165000B1 (en) 2004-04-01 2018-12-25 Fireeye, Inc. Systems and methods for malware attack prevention by intercepting flows of information
US9628498B1 (en) 2004-04-01 2017-04-18 Fireeye, Inc. System and method for bot detection
US8204984B1 (en) 2004-04-01 2012-06-19 Fireeye, Inc. Systems and methods for detecting encrypted bot command and control communication channels
US9356944B1 (en) 2004-04-01 2016-05-31 Fireeye, Inc. System and method for detecting malicious traffic using a virtual machine configured with a select software environment
US9516057B2 (en) 2004-04-01 2016-12-06 Fireeye, Inc. Systems and methods for computer worm defense
US10097573B1 (en) 2004-04-01 2018-10-09 Fireeye, Inc. Systems and methods for malware defense
US20110314546A1 (en) 2004-04-01 2011-12-22 Ashar Aziz Electronic Message Analysis for Malware Detection
US10068091B1 (en) 2004-04-01 2018-09-04 Fireeye, Inc. System and method for malware containment
US10027690B2 (en) 2004-04-01 2018-07-17 Fireeye, Inc. Electronic message analysis for malware detection
US9106694B2 (en) 2004-04-01 2015-08-11 Fireeye, Inc. Electronic message analysis for malware detection
US9197664B1 (en) 2004-04-01 2015-11-24 Fire Eye, Inc. System and method for malware containment
US9912684B1 (en) 2004-04-01 2018-03-06 Fireeye, Inc. System and method for virtual analysis of network data
US8689333B2 (en) 2004-04-01 2014-04-01 Fireeye, Inc. Malware defense system and method
US20080307524A1 (en) 2004-04-08 2008-12-11 The Regents Of The University Of California Detecting Public Network Attacks Using Signatures and Fast Content Analysis
US20050238005A1 (en) 2004-04-21 2005-10-27 Yi-Fen Chen Method and apparatus for controlling traffic in a computer network
US7779463B2 (en) 2004-05-11 2010-08-17 The Trustees Of Columbia University In The City Of New York Systems and methods for correlating and distributing intrusion alert information among collaborating computer systems
US20050262562A1 (en) 2004-05-21 2005-11-24 Paul Gassoway Systems and methods of computer security
US9838416B1 (en) 2004-06-14 2017-12-05 Fireeye, Inc. System and method of detecting malicious content
US8006305B2 (en) 2004-06-14 2011-08-23 Fireeye, Inc. Computer worm defense system and method
US8549638B2 (en) 2004-06-14 2013-10-01 Fireeye, Inc. System and method of containing computer worms
US20110099633A1 (en) 2004-06-14 2011-04-28 NetForts, Inc. System and method of containing computer worms
US20110093951A1 (en) 2004-06-14 2011-04-21 NetForts, Inc. Computer worm defense system and method
US20060021029A1 (en) 2004-06-29 2006-01-26 Brickell Ernie F Method of improving computer security through sandboxing
US20060010495A1 (en) 2004-07-06 2006-01-12 Oded Cohen Method for protecting a computer from suspicious objects
US20060015747A1 (en) 2004-07-16 2006-01-19 Red Hat, Inc. System and method for detecting computer virus
US20060015715A1 (en) 2004-07-16 2006-01-19 Eric Anderson Automatically protecting network service from network attack
US20060031476A1 (en) 2004-08-05 2006-02-09 Mathes Marvin L Apparatus and method for remotely monitoring a computer network
US7949849B2 (en) 2004-08-24 2011-05-24 Mcafee, Inc. File system for a capture system
US20070006313A1 (en) 2004-09-17 2007-01-04 Phillip Porras Method and apparatus for combating malicious code
US20060070130A1 (en) 2004-09-27 2006-03-30 Microsoft Corporation System and method of identifying the source of an attack on a computer network
US20060161987A1 (en) 2004-11-10 2006-07-20 Guy Levy-Yurista Detecting and remedying unauthorized computer programs
US7540025B2 (en) 2004-11-18 2009-05-26 Cisco Technology, Inc. Mitigating network attacks using automatic signature generation
US20100281542A1 (en) 2004-11-24 2010-11-04 The Trustees Of Columbia University In The City Of New York Systems and Methods for Correlating and Distributing Intrusion Alert Information Among Collaborating Computer Systems
US20060117385A1 (en) 2004-11-30 2006-06-01 Mester Michael L Monitoring propagation protection within a network
US20060123477A1 (en) 2004-12-06 2006-06-08 Kollivakkam Raghavan Method and apparatus for generating a network topology representation based on inspection of application messages at a network device
US20060191010A1 (en) 2005-02-18 2006-08-24 Pace University System for intrusion detection and vulnerability assessment in a computer network using simulation and machine learning
US20060251104A1 (en) 2005-03-31 2006-11-09 Fujitsu Limited Service apparatus, method of controlling switching of connection destination of client apparatus by service apparatus, and storage medium readable by machine
US20060242709A1 (en) 2005-04-21 2006-10-26 Microsoft Corporation Protecting a computer that provides a Web service from malware
US7854007B2 (en) 2005-05-05 2010-12-14 Ironport Systems, Inc. Identifying threats in electronic messages
US20060288417A1 (en) 2005-06-21 2006-12-21 Sbc Knowledge Ventures Lp Method and apparatus for mitigating the effects of malicious software in a communication network
US20070006288A1 (en) 2005-06-30 2007-01-04 Microsoft Corporation Controlling network access
US20070016951A1 (en) 2005-07-13 2007-01-18 Piccard Paul L Systems and methods for identifying sources of malware
US20090158430A1 (en) 2005-10-21 2009-06-18 Borders Kevin R Method, system and computer program product for detecting at least one of security threats and undesirable computer files
US7698548B2 (en) 2005-12-08 2010-04-13 Microsoft Corporation Communications traffic segregation for security purposes
US20070143827A1 (en) 2005-12-21 2007-06-21 Fiberlink Methods and systems for intelligently controlling access to computing resources
US20070157306A1 (en) 2005-12-30 2007-07-05 Elrod Craig T Network threat detection and mitigation
US20070192858A1 (en) 2006-02-16 2007-08-16 Infoexpress, Inc. Peer based network access control
US20070208822A1 (en) 2006-03-01 2007-09-06 Microsoft Corporation Honey Monkey Network Exploration
US20090144823A1 (en) 2006-03-27 2009-06-04 Gerardo Lamastra Method and System for Mobile Network Security, Related Network and Computer Program Product
US20080040710A1 (en) 2006-04-05 2008-02-14 Prevx Limited Method, computer program and computer for analysing an executable computer file
US20070240218A1 (en) 2006-04-06 2007-10-11 George Tuvell Malware Detection System and Method for Mobile Platforms
US20070240222A1 (en) 2006-04-06 2007-10-11 George Tuvell System and Method for Managing Malware Protection on Mobile Devices
WO2007117636A2 (en) 2006-04-06 2007-10-18 Smobile Systems, Inc. Malware detection system and method for compressed data on mobile platforms
US20070240220A1 (en) 2006-04-06 2007-10-11 George Tuvell System and method for managing malware protection on mobile devices
US8566946B1 (en) 2006-04-20 2013-10-22 Fireeye, Inc. Malware containment on connection
US8375444B2 (en) 2006-04-20 2013-02-12 Fireeye, Inc. Dynamic signature creation and enforcement
US20120331553A1 (en) 2006-04-20 2012-12-27 Fireeye, Inc. Dynamic signature creation and enforcement
GB2439806A (en) 2006-06-30 2008-01-09 Sophos Plc Classifying software as malware using characteristics (or "genes")
US8020206B2 (en) 2006-07-10 2011-09-13 Websense, Inc. System and method of analyzing web content
US20080077793A1 (en) 2006-09-21 2008-03-27 Sensory Networks, Inc. Apparatus and method for high throughput network security systems
US20100017546A1 (en) 2006-10-04 2010-01-21 Trek 2000 International Ltd. Method, apparatus and system for authentication of external storage devices
WO2008041950A2 (en) 2006-10-04 2008-04-10 Trek 2000 International Ltd. Method, apparatus and system for authentication of external storage devices
US20080141376A1 (en) 2006-10-24 2008-06-12 Pc Tools Technology Pty Ltd. Determining maliciousness of software
US20080134334A1 (en) 2006-11-30 2008-06-05 Electronics And Telecommunications Research Institute Apparatus and method for detecting network attack
US20080184367A1 (en) 2007-01-25 2008-07-31 Mandiant, Inc. System and method for determining data entropy to identify malware
US8069484B2 (en) 2007-01-25 2011-11-29 Mandiant Corporation System and method for determining data entropy to identify malware
US20080320594A1 (en) 2007-03-19 2008-12-25 Xuxian Jiang Malware Detector
US20090064332A1 (en) 2007-04-04 2009-03-05 Phillip Andrew Porras Method and apparatus for generating highly predictive blacklists
US20090172815A1 (en) 2007-04-04 2009-07-02 Guofei Gu Method and apparatus for detecting malware infection
US20090003317A1 (en) 2007-06-29 2009-01-01 Kasralikar Rahul S Method and mechanism for port redirects in a network switch
US8464340B2 (en) 2007-09-04 2013-06-11 Samsung Electronics Co., Ltd. System, apparatus and method of malware diagnosis mechanism based on immunization database
US20090126015A1 (en) 2007-10-02 2009-05-14 Monastyrsky Alexey V System and method for detecting multi-component malware
US8045458B2 (en) 2007-11-08 2011-10-25 Mcafee, Inc. Prioritizing network traffic
US20090125976A1 (en) 2007-11-08 2009-05-14 Docomo Communications Laboratories Usa, Inc. Automated test input generation for web applications
US20090198651A1 (en) 2008-02-01 2009-08-06 Jason Shiffer Method and system for analyzing data related to an event
US20110173213A1 (en) 2008-02-01 2011-07-14 Matthew Frazier System and method for data preservation and retrieval
US8793278B2 (en) 2008-02-01 2014-07-29 Mandiant, Llc System and method for data preservation and retrieval
US20130325792A1 (en) 2008-02-01 2013-12-05 Jason Shiffer Method and System for Analyzing Data Related to an Event
US9106630B2 (en) 2008-02-01 2015-08-11 Mandiant, Llc Method and system for collaboration during an event
US20090199274A1 (en) 2008-02-01 2009-08-06 Matthew Frazier Method and system for collaboration during an event
US20090198689A1 (en) 2008-02-01 2009-08-06 Matthew Frazier System and method for data preservation and retrieval
US8949257B2 (en) 2008-02-01 2015-02-03 Mandiant, Llc Method and system for collecting and organizing data corresponding to an event
US20090198670A1 (en) 2008-02-01 2009-08-06 Jason Shiffer Method and system for collecting and organizing data corresponding to an event
US10146810B2 (en) 2008-02-01 2018-12-04 Fireeye, Inc. Method and system for collecting and organizing data corresponding to an event
US20130325872A1 (en) 2008-02-01 2013-12-05 Jason Shiffer Method and System for Collecting and Organizing Data Corresponding to an Event
US20130318038A1 (en) 2008-02-01 2013-11-28 Jason Shiffer Method and System for Analyzing Data Related to an Event
US20130318073A1 (en) 2008-02-01 2013-11-28 Jason Shiffer Method and System for Collecting and Organizing Data Corresponding to an Event
US8566476B2 (en) 2008-02-01 2013-10-22 Mandiant Corporation Method and system for analyzing data related to an event
US20130325871A1 (en) 2008-02-01 2013-12-05 Jason Shiffer Method and System for Collecting and Organizing Data Corresponding to an Event
US20130325791A1 (en) 2008-02-01 2013-12-05 Jason Shiffer Method and System for Analyzing Data Related to an Event
US7937387B2 (en) 2008-02-01 2011-05-03 Mandiant System and method for data preservation and retrieval
US8201246B1 (en) 2008-02-25 2012-06-12 Trend Micro Incorporated Preventing malicious codes from performing malicious actions in a computer system
US20090241190A1 (en) 2008-03-24 2009-09-24 Michael Todd System and method for securing a network from zero-day vulnerability exploits
US20110167493A1 (en) 2008-05-27 2011-07-07 Yingbo Song Systems, methods, and media for detecting network anomalies
US20140297494A1 (en) 2008-06-03 2014-10-02 Isight Partners, Inc. Electronic Crime Detection and Tracking
US9904955B2 (en) 2008-06-03 2018-02-27 Fireeye, Inc. Electronic crime detection and tracking
US20090300589A1 (en) 2008-06-03 2009-12-03 Isight Partners, Inc. Electronic Crime Detection and Tracking
US8813050B2 (en) 2008-06-03 2014-08-19 Isight Partners, Inc. Electronic crime detection and tracking
US8881271B2 (en) 2008-08-01 2014-11-04 Mandiant, Llc System and method for forensic identification of elements within a computer system
US20100030996A1 (en) 2008-08-01 2010-02-04 Mandiant, Inc. System and Method for Forensic Identification of Elements Within a Computer System
US20100058474A1 (en) 2008-08-29 2010-03-04 Avg Technologies Cz, S.R.O. System and method for the detection of malware
US20100077481A1 (en) 2008-09-22 2010-03-25 Microsoft Corporation Collecting and analyzing malware data
US8990939B2 (en) 2008-11-03 2015-03-24 Fireeye, Inc. Systems and methods for scheduling analysis of network content for malware
US20100115621A1 (en) 2008-11-03 2010-05-06 Stuart Gresley Staniford Systems and Methods for Detecting Malicious Network Content
US20150180886A1 (en) 2008-11-03 2015-06-25 Fireeye, Inc. Systems and Methods for Scheduling Analysis of Network Content for Malware
US20110247072A1 (en) 2008-11-03 2011-10-06 Stuart Gresley Staniford Systems and Methods for Detecting Malicious PDF Network Content
US9118715B2 (en) 2008-11-03 2015-08-25 Fireeye, Inc. Systems and methods for detecting malicious PDF network content
US9954890B1 (en) 2008-11-03 2018-04-24 Fireeye, Inc. Systems and methods for analyzing PDF documents
US20120222121A1 (en) 2008-11-03 2012-08-30 Stuart Gresley Staniford Systems and Methods for Detecting Malicious PDF Network Content
US8997219B2 (en) 2008-11-03 2015-03-31 Fireeye, Inc. Systems and methods for detecting malicious PDF network content
US20130291109A1 (en) 2008-11-03 2013-10-31 Fireeye, Inc. Systems and Methods for Scheduling Analysis of Network Content for Malware
US9438622B1 (en) 2008-11-03 2016-09-06 Fireeye, Inc. Systems and methods for analyzing malicious PDF network content
US8850571B2 (en) 2008-11-03 2014-09-30 Fireeye, Inc. Systems and methods for detecting malicious network content
US20100132038A1 (en) 2008-11-26 2010-05-27 Zaitsev Oleg V System and Method for Computer Malware Detection
US20100154056A1 (en) 2008-12-17 2010-06-17 Symantec Corporation Context-Aware Real-Time Computer-Protection Systems and Methods
US8370938B1 (en) 2009-04-25 2013-02-05 Dasient, Inc. Mitigating malware
US20110078794A1 (en) 2009-09-30 2011-03-31 Jayaraman Manni Network-Based Binary File Extraction and Analysis for Malware Detection
US8832829B2 (en) 2009-09-30 2014-09-09 Fireeye, Inc. Network-based binary file extraction and analysis for malware detection
US8935779B2 (en) 2009-09-30 2015-01-13 Fireeye, Inc. Network-based binary file extraction and analysis for malware detection
US20120117652A1 (en) 2009-09-30 2012-05-10 Jayaraman Manni Network-Based Binary File Extraction and Analysis for Malware Detection
US20140237600A1 (en) 2009-10-27 2014-08-21 Peter J Silberman System and method for detecting executable machine instructions in a data stream
US10019573B2 (en) 2009-10-27 2018-07-10 Fireeye, Inc. System and method for detecting executable machine instructions in a data stream
US20110099635A1 (en) 2009-10-27 2011-04-28 Silberman Peter J System and method for detecting executable machine instructions in a data stream
US8713681B2 (en) 2009-10-27 2014-04-29 Mandiant, Llc System and method for detecting executable machine instructions in a data stream
WO2011084431A2 (en) 2009-12-15 2011-07-14 Mcafee, Inc. Systems and methods for behavioral sandboxing
US8494974B2 (en) 2010-01-18 2013-07-23 iSIGHT Partners Inc. Targeted security implementation through security loss forecasting
US20110178942A1 (en) 2010-01-18 2011-07-21 Isight Partners, Inc. Targeted Security Implementation Through Security Loss Forecasting
US20130282426A1 (en) 2010-01-18 2013-10-24 Isight Partners, Inc. Targeted Security Implementation Through Security Loss Forecasting
WO2011112348A1 (en) 2010-03-08 2011-09-15 Raytheon Company System and method for host-level malware detection
US20110219450A1 (en) 2010-03-08 2011-09-08 Raytheon Company System And Method For Malware Detection
US20110225624A1 (en) 2010-03-15 2011-09-15 Symantec Corporation Systems and Methods for Providing Network Access Control in Virtual Environments
US20110307956A1 (en) 2010-06-11 2011-12-15 M86 Security, Inc. System and method for analyzing malicious code using a static analyzer
US20110307955A1 (en) 2010-06-11 2011-12-15 M86 Security, Inc. System and method for detecting malicious content
US20110307954A1 (en) 2010-06-11 2011-12-15 M86 Security, Inc. System and method for improving coverage for web code
US8370939B2 (en) 2010-07-23 2013-02-05 Kaspersky Lab, Zao Protection against malware on web resources
WO2012075336A1 (en) 2010-12-01 2012-06-07 Sourcefire, Inc. Detecting malicious software through contextual convictions, generic signatures and machine learning techniques
US20120210423A1 (en) 2010-12-01 2012-08-16 Oliver Friedrichs Method and apparatus for detecting malicious software through contextual convictions, generic signatures and machine learning techniques
US20120174218A1 (en) 2010-12-30 2012-07-05 Everis Inc. Network Communication System With Improved Security
US9015846B2 (en) 2011-03-07 2015-04-21 Isight Partners, Inc. Information system security based on threat vectors
US8438644B2 (en) 2011-03-07 2013-05-07 Isight Partners, Inc. Information system security based on threat vectors
US20120233698A1 (en) 2011-03-07 2012-09-13 Isight Partners, Inc. Information System Security Based on Threat Vectors
US20130232577A1 (en) 2011-03-07 2013-09-05 Isight Partners, Inc. Information System Security Based on Threat Vectors
WO2012145066A1 (en) 2011-04-18 2012-10-26 Fireeye, Inc. Electronic message analysis for malware detection
US20120278886A1 (en) 2011-04-27 2012-11-01 Michael Luna Detection and filtering of malware based on traffic observations made in a distributed mobile traffic management system
US20130097706A1 (en) 2011-09-16 2013-04-18 Veracode, Inc. Automated behavioral and static analysis using an instrumented sandbox and machine learning classification for mobile security
WO2013067505A1 (en) 2011-11-03 2013-05-10 Cyphort, Inc. Systems and methods for virtualization and emulation assisted malware detection
US8214905B1 (en) 2011-12-21 2012-07-03 Kaspersky Lab Zao System and method for dynamically allocating computing resources for processing security information
US20130185795A1 (en) 2012-01-12 2013-07-18 Arxceo Corporation Methods and systems for providing network protection by progressive degradation of service
US20130227691A1 (en) 2012-02-24 2013-08-29 Ashar Aziz Detecting Malicious Network Content
US9519782B2 (en) 2012-02-24 2016-12-13 Fireeye, Inc. Detecting malicious network content
US10282548B1 (en) 2012-02-24 2019-05-07 Fireeye, Inc. Method for detecting malware within network content
US20130247186A1 (en) 2012-03-15 2013-09-19 Aaron LeMasters System to Bypass a Compromised Mass Storage Device Driver Stack and Method Thereof
US9275229B2 (en) 2012-03-15 2016-03-01 Mandiant, Llc System to bypass a compromised mass storage device driver stack and method thereof
GB2490431A (en) 2012-05-15 2012-10-31 F Secure Corp Foiling document exploit malware using repeat calls
US9268936B2 (en) 2012-07-27 2016-02-23 Mandiant, Llc Physical memory forensics system and method
US20140032875A1 (en) 2012-07-27 2014-01-30 James Butler Physical Memory Forensics System and Method
US20140181131A1 (en) 2012-12-26 2014-06-26 David Ross Timeline wrinkling system and method
US9633134B2 (en) 2012-12-26 2017-04-25 Fireeye, Inc. Timeline wrinkling system and method
US10380343B1 (en) 2012-12-28 2019-08-13 Fireeye, Inc. System and method for programmatic runtime de-obfuscation of obfuscated software utilizing virtual machine introspection and manipulation of virtual machine guest memory permissions
US10572665B2 (en) 2012-12-28 2020-02-25 Fireeye, Inc. System and method to create a number of breakpoints in a virtual machine via virtual machine trapping events
US9459901B2 (en) 2012-12-28 2016-10-04 Fireeye, Inc. System and method for the programmatic runtime de-obfuscation of obfuscated software utilizing virtual machine introspection and manipulation of virtual machine guest memory permissions
US20140189882A1 (en) 2012-12-28 2014-07-03 Robert Jung System and method for the programmatic runtime de-obfuscation of obfuscated software utilizing virtual machine introspection and manipulation of virtual machine guest memory permissions
US20140189687A1 (en) 2012-12-28 2014-07-03 Robert Jung System and Method to Create a Number of Breakpoints in a Virtual Machine Via Virtual Machine Trapping Events
US9690935B2 (en) 2012-12-31 2017-06-27 Fireeye, Inc. Identification of obfuscated computer items using visual algorithms
US20140189866A1 (en) 2012-12-31 2014-07-03 Jason Shiffer Identification of obfuscated computer items using visual algorithms
US8990944B1 (en) 2013-02-23 2015-03-24 Fireeye, Inc. Systems and methods for automatically detecting backdoors
US9792196B1 (en) 2013-02-23 2017-10-17 Fireeye, Inc. Framework for efficient security coverage of mobile software applications
US9225740B1 (en) 2013-02-23 2015-12-29 Fireeye, Inc. Framework for iterative analysis of mobile software applications
US9159035B1 (en) 2013-02-23 2015-10-13 Fireeye, Inc. Framework for computer application analysis of sensitive information tracking
US9009822B1 (en) 2013-02-23 2015-04-14 Fireeye, Inc. Framework for multi-phase analysis of mobile applications
US9009823B1 (en) 2013-02-23 2015-04-14 Fireeye, Inc. Framework for efficient security coverage of mobile software applications installed on mobile devices
US9594905B1 (en) 2013-02-23 2017-03-14 Fireeye, Inc. Framework for efficient security coverage of mobile software applications using machine learning
US10929266B1 (en) 2013-02-23 2021-02-23 Fireeye, Inc. Real-time visual playback with synchronous textual analysis log display and event/time indexing
US9824209B1 (en) 2013-02-23 2017-11-21 Fireeye, Inc. Framework for efficient security coverage of mobile software applications that is usable to harden in the field code
US9195829B1 (en) 2013-02-23 2015-11-24 Fireeye, Inc. User interface with real-time visual playback along with synchronous textual analysis log display and event/time index for anomalous behavior detection in applications
US10296437B2 (en) 2013-02-23 2019-05-21 Fireeye, Inc. Framework for efficient security coverage of mobile software applications
US20180121316A1 (en) 2013-02-23 2018-05-03 Fireeye, Inc. Framework For Efficient Security Coverage Of Mobile Software Applications
US9367681B1 (en) 2013-02-23 2016-06-14 Fireeye, Inc. Framework for efficient security coverage of mobile software applications using symbolic execution to reach regions of interest within an application
US9176843B1 (en) 2013-02-23 2015-11-03 Fireeye, Inc. Framework for efficient security coverage of mobile software applications
US10019338B1 (en) 2013-02-23 2018-07-10 Fireeye, Inc. User interface with real-time visual playback along with synchronous textual analysis log display and event/time index for anomalous behavior detection in applications
US10181029B1 (en) 2013-02-23 2019-01-15 Fireeye, Inc. Security cloud service framework for hardening in the field code of mobile software applications
US9355247B1 (en) 2013-03-13 2016-05-31 Fireeye, Inc. File extraction from memory dump for malicious content analysis
US10198574B1 (en) 2013-03-13 2019-02-05 Fireeye, Inc. System and method for analysis of a memory dump associated with a potentially malicious content suspect
US10848521B1 (en) 2013-03-13 2020-11-24 Fireeye, Inc. Malicious content analysis using simulated user interaction without user involvement
US9934381B1 (en) 2013-03-13 2018-04-03 Fireeye, Inc. System and method for detecting malicious activity based on at least one environmental property
US9104867B1 (en) 2013-03-13 2015-08-11 Fireeye, Inc. Malicious content analysis using simulated user interaction without user involvement
US10467414B1 (en) 2013-03-13 2019-11-05 Fireeye, Inc. System and method for detecting exfiltration content
US9912698B1 (en) 2013-03-13 2018-03-06 Fireeye, Inc. Malicious content analysis using simulated user interaction without user involvement
US9565202B1 (en) 2013-03-13 2017-02-07 Fireeye, Inc. System and method for detecting exfiltration content
US9626509B1 (en) 2013-03-13 2017-04-18 Fireeye, Inc. Malicious content analysis with multi-version application support within single operating environment
US10025927B1 (en) 2013-03-13 2018-07-17 Fireeye, Inc. Malicious content analysis with multi-version application support within single operating environment
US10200384B1 (en) 2013-03-14 2019-02-05 Fireeye, Inc. Distributed systems and methods for automatically detecting unknown bots and botnets
US9430646B1 (en) 2013-03-14 2016-08-30 Fireeye, Inc. Distributed systems and methods for automatically detecting unknown bots and botnets
US10812513B1 (en) 2013-03-14 2020-10-20 Fireeye, Inc. Correlation and consolidation holistic views of analytic data pertaining to a malware attack
US10122746B1 (en) 2013-03-14 2018-11-06 Fireeye, Inc. Correlation and consolidation of analytic data for holistic view of malware attack
US9311479B1 (en) 2013-03-14 2016-04-12 Fireeye, Inc. Correlation and consolidation of analytic data for holistic view of a malware attack
US9641546B1 (en) 2013-03-14 2017-05-02 Fireeye, Inc. Electronic device for aggregation, correlation and consolidation of analysis attributes
US20140283037A1 (en) 2013-03-15 2014-09-18 Michael Sikorski System and Method to Extract and Utilize Disassembly Features to Classify Software Intent
US9251343B1 (en) 2013-03-15 2016-02-02 Fireeye, Inc. Detecting bootkits resident on compromised computers
US10033748B1 (en) 2013-03-15 2018-07-24 Fireeye, Inc. System and method employing structured intelligence to verify and contain threats at endpoints
US10701091B1 (en) 2013-03-15 2020-06-30 Fireeye, Inc. System and method for verifying a cyberthreat
US20140283063A1 (en) 2013-03-15 2014-09-18 Matthew Thompson System and Method to Manage Sinkholes
US20140280245A1 (en) 2013-03-15 2014-09-18 Mandiant Corporation System and method to visualize user sessions
US9413781B2 (en) 2013-03-15 2016-08-09 Fireeye, Inc. System and method employing structured intelligence to verify and contain threats at endpoints
US10713358B2 (en) 2013-03-15 2020-07-14 Fireeye, Inc. System and method to extract and utilize disassembly features to classify software intent
US9824211B2 (en) 2013-03-15 2017-11-21 Fireeye, Inc. System and method to visualize user sessions
US9497213B2 (en) 2013-03-15 2016-11-15 Fireeye, Inc. System and method to manage sinkholes
US20140344926A1 (en) 2013-03-15 2014-11-20 Sean Cunningham System and method employing structured intelligence to verify and contain threats at endpoints
US10469512B1 (en) 2013-05-10 2019-11-05 Fireeye, Inc. Optimized resource allocation for virtual machines within a malware content detection system
US9495180B2 (en) 2013-05-10 2016-11-15 Fireeye, Inc. Optimized resource allocation for virtual machines within a malware content detection system
US20140337836A1 (en) 2013-05-10 2014-11-13 Fireeye, Inc. Optimized resource allocation for virtual machines within a malware content detection system
US10033753B1 (en) 2013-05-13 2018-07-24 Fireeye, Inc. System and method for detecting malicious activity and classifying a network communication based on different indicator types
US9635039B1 (en) 2013-05-13 2017-04-25 Fireeye, Inc. Classifying sets of malicious indicators for detecting command and control communications associated with malware
US10637880B1 (en) 2013-05-13 2020-04-28 Fireeye, Inc. Classifying sets of malicious indicators for detecting command and control communications associated with malware
US9536091B2 (en) 2013-06-24 2017-01-03 Fireeye, Inc. System and method for detecting time-bomb malware
US10133863B2 (en) 2013-06-24 2018-11-20 Fireeye, Inc. Zero-day discovery system
US20140380474A1 (en) 2013-06-24 2014-12-25 Fireeye, Inc. System and Method for Detecting Time-Bomb Malware
US10083302B1 (en) 2013-06-24 2018-09-25 Fireeye, Inc. System and method for detecting time-bomb malware
US10335738B1 (en) 2013-06-24 2019-07-02 Fireeye, Inc. System and method for detecting time-bomb malware
US20140380473A1 (en) 2013-06-24 2014-12-25 Fireeye, Inc. Zero-day discovery system
US10505956B1 (en) 2013-06-28 2019-12-10 Fireeye, Inc. System and method for detecting malicious links in electronic messages
US9300686B2 (en) 2013-06-28 2016-03-29 Fireeye, Inc. System and method for detecting malicious links in electronic messages
US20150007312A1 (en) 2013-06-28 2015-01-01 Vinay Pidathala System and method for detecting malicious links in electronic messages
US9888016B1 (en) 2013-06-28 2018-02-06 Fireeye, Inc. System and method for detecting phishing using password prediction
US9888019B1 (en) 2013-06-28 2018-02-06 Fireeye, Inc. System and method for detecting malicious links in electronic messages
US9426071B1 (en) 2013-08-22 2016-08-23 Fireeye, Inc. Storing network bidirectional flow data and metadata with efficient processing technique
US9876701B1 (en) 2013-08-22 2018-01-23 Fireeye, Inc. Arrangement for efficient search and retrieval of indexes used to locate captured packets
US20150096025A1 (en) 2013-09-30 2015-04-02 Fireeye, Inc. System, Apparatus and Method for Using Malware Analysis Results to Drive Adaptive Instrumentation of Virtual Machines to Improve Exploit Detection
US10515214B1 (en) 2013-09-30 2019-12-24 Fireeye, Inc. System and method for classifying malware within content created during analysis of a specimen
US9736179B2 (en) 2013-09-30 2017-08-15 Fireeye, Inc. System, apparatus and method for using malware analysis results to drive adaptive instrumentation of virtual machines to improve exploit detection
US10218740B1 (en) 2013-09-30 2019-02-26 Fireeye, Inc. Fuzzy hash of behavioral results
US10192052B1 (en) 2013-09-30 2019-01-29 Fireeye, Inc. System, apparatus and method for classifying a file as malicious using static scanning
US20160261612A1 (en) 2013-09-30 2016-09-08 Fireeye, Inc. Fuzzy hash of behavioral results
US20150096022A1 (en) 2013-09-30 2015-04-02 Michael Vincent Dynamically adaptive framework and method for classifying malware using intelligent static, emulation, and dynamic analyses
US9912691B2 (en) 2013-09-30 2018-03-06 Fireeye, Inc. Fuzzy hash of behavioral results
US10713362B1 (en) 2013-09-30 2020-07-14 Fireeye, Inc. Dynamically adaptive framework and method for classifying malware using intelligent static, emulation, and dynamic analyses
US9910988B1 (en) 2013-09-30 2018-03-06 Fireeye, Inc. Malware analysis in accordance with an analysis plan
US9628507B2 (en) 2013-09-30 2017-04-18 Fireeye, Inc. Advanced persistent threat (APT) detection center
US10657251B1 (en) 2013-09-30 2020-05-19 Fireeye, Inc. Multistage system and method for analyzing obfuscated content for malware
US20150096024A1 (en) 2013-09-30 2015-04-02 Fireeye, Inc. Advanced persistent threat (apt) detection center
US10735458B1 (en) 2013-09-30 2020-08-04 Fireeye, Inc. Detection center to detect targeted malware
US20180013770A1 (en) 2013-09-30 2018-01-11 Fireeye, Inc. System, Apparatus And Method For Using Malware Analysis Results To Drive Adaptive Instrumentation Of Virtual Machines To Improve Exploit Detection
US9690936B1 (en) 2013-09-30 2017-06-27 Fireeye, Inc. Multistage system and method for analyzing obfuscated content for malware
US9294501B2 (en) 2013-09-30 2016-03-22 Fireeye, Inc. Fuzzy hash of behavioral results
US20150096023A1 (en) 2013-09-30 2015-04-02 Fireeye, Inc. Fuzzy hash of behavioral results
US9171160B2 (en) 2013-09-30 2015-10-27 Fireeye, Inc. Dynamically adaptive framework and method for classifying malware using intelligent static, emulation, and dynamic analyses
US10089461B1 (en) 2013-09-30 2018-10-02 Fireeye, Inc. Page replacement code injection
US9921978B1 (en) 2013-11-08 2018-03-20 Fireeye, Inc. System and method for enhanced security of storage devices
US9560059B1 (en) 2013-11-21 2017-01-31 Fireeye, Inc. System, apparatus and method for conducting on-the-fly decryption of encrypted objects for malware detection
US9189627B1 (en) 2013-11-21 2015-11-17 Fireeye, Inc. System, apparatus and method for conducting on-the-fly decryption of encrypted objects for malware detection
US9306974B1 (en) 2013-12-26 2016-04-05 Fireeye, Inc. System, apparatus and method for automatically verifying exploits within suspect objects and highlighting the display information associated with the verified exploits
US10467411B1 (en) 2013-12-26 2019-11-05 Fireeye, Inc. System and method for generating a malware identifier
US20150186645A1 (en) 2013-12-26 2015-07-02 Fireeye, Inc. System, apparatus and method for automatically verifying exploits within suspect objects and highlighting the display information associated with the verified exploits
US9756074B2 (en) 2013-12-26 2017-09-05 Fireeye, Inc. System and method for IPS and VM-based detection of suspicious objects
US10476909B1 (en) 2013-12-26 2019-11-12 Fireeye, Inc. System, apparatus and method for automatically verifying exploits within suspect objects and highlighting the display information associated with the verified exploits
US9747446B1 (en) 2013-12-26 2017-08-29 Fireeye, Inc. System and method for run-time object classification
US20150199531A1 (en) 2014-01-16 2015-07-16 Fireeye, Inc. Exploit detection system with threat-aware microvisor
US9946568B1 (en) 2014-01-16 2018-04-17 Fireeye, Inc. Micro-virtualization architecture for threat-aware module deployment in a node of a network environment
US9292686B2 (en) 2014-01-16 2016-03-22 Fireeye, Inc. Micro-virtualization architecture for threat-aware microvisor deployment in a node of a network environment
US9507935B2 (en) 2014-01-16 2016-11-29 Fireeye, Inc. Exploit detection system with threat-aware microvisor
US10740456B1 (en) 2014-01-16 2020-08-11 Fireeye, Inc. Threat-aware architecture
US20150199532A1 (en) 2014-01-16 2015-07-16 Fireeye, Inc. Micro-virtualization architecture for threat-aware microvisor deployment in a node of a network environment
US20150199513A1 (en) 2014-01-16 2015-07-16 Fireeye, Inc. Threat-aware microvisor
US9740857B2 (en) 2014-01-16 2017-08-22 Fireeye, Inc. Threat-aware microvisor
US10534906B1 (en) 2014-02-05 2020-01-14 Fireeye, Inc. Detection efficacy of virtual machine-based analysis with application specific events
US20150220735A1 (en) 2014-02-05 2015-08-06 Fireeye, Inc. Detection efficacy of virtual machine-based analysis with application specific events
US9262635B2 (en) 2014-02-05 2016-02-16 Fireeye, Inc. Detection efficacy of virtual machine-based analysis with application specific events
US9916440B1 (en) 2014-02-05 2018-03-13 Fireeye, Inc. Detection efficacy of virtual machine-based analysis with application specific events
US9674298B1 (en) 2014-02-20 2017-06-06 Fireeye, Inc. Efficient access to sparse packets in large repositories of stored network traffic
US9537972B1 (en) 2014-02-20 2017-01-03 Fireeye, Inc. Efficient access to sparse packets in large repositories of stored network traffic
US10432649B1 (en) 2014-03-20 2019-10-01 Fireeye, Inc. System and method for classifying an object based on an aggregated behavior results
US9241010B1 (en) 2014-03-20 2016-01-19 Fireeye, Inc. System and method for network behavior detection
US10242185B1 (en) 2014-03-21 2019-03-26 Fireeye, Inc. Dynamic guest image creation and rollback
US9591015B1 (en) 2014-03-28 2017-03-07 Fireeye, Inc. System and method for offloading packet processing and static analysis operations
US10454953B1 (en) 2014-03-28 2019-10-22 Fireeye, Inc. System and method for separated packet processing and static analysis
US9787700B1 (en) 2014-03-28 2017-10-10 Fireeye, Inc. System and method for offloading packet processing and static analysis operations
US10341363B1 (en) 2014-03-31 2019-07-02 Fireeye, Inc. Dynamically remote tuning of a malware content detection system
US9432389B1 (en) 2014-03-31 2016-08-30 Fireeye, Inc. System, apparatus and method for detecting a malicious attack based on static analysis of a multi-flow object
US9223972B1 (en) 2014-03-31 2015-12-29 Fireeye, Inc. Dynamically remote tuning of a malware content detection system
US20180069891A1 (en) 2014-04-03 2018-03-08 Fireeye, Inc. System and Method of Mitigating Cyber Attack Risks
US20160241580A1 (en) 2014-04-03 2016-08-18 Isight Partners, Inc. System and Method of Cyber Threat Structure Mapping and Application to Cyber Threat Mitigation
US9749344B2 (en) 2014-04-03 2017-08-29 Fireeye, Inc. System and method of cyber threat intensity determination and application to cyber threat mitigation
US9749343B2 (en) 2014-04-03 2017-08-29 Fireeye, Inc. System and method of cyber threat structure mapping and application to cyber threat mitigation
US10063583B2 (en) 2014-04-03 2018-08-28 Fireeye, Inc. System and method of mitigating cyber attack risks
US20160241581A1 (en) 2014-04-03 2016-08-18 Isight Partners, Inc. System and Method of Cyber Threat Intensity Determination and Application to Cyber Threat Mitigation
US9594912B1 (en) 2014-06-06 2017-03-14 Fireeye, Inc. Return-oriented programming detection
US9438623B1 (en) 2014-06-06 2016-09-06 Fireeye, Inc. Computer exploit detection using heap spray pattern matching
US9973531B1 (en) 2014-06-06 2018-05-15 Fireeye, Inc. Shellcode detection
US10084813B2 (en) 2014-06-24 2018-09-25 Fireeye, Inc. Intrusion prevention and remedy system
US20150372980A1 (en) 2014-06-24 2015-12-24 Fireeye, Inc. Intrusion prevention and remedy system
US10757134B1 (en) 2014-06-24 2020-08-25 Fireeye, Inc. System and method for detecting and remediating a cybersecurity attack
US10805340B1 (en) 2014-06-26 2020-10-13 Fireeye, Inc. Infection vector and malware tracking with an interactive user display
US9838408B1 (en) 2014-06-26 2017-12-05 Fireeye, Inc. System, device and method for detecting a malicious attack based on direct communications between remotely hosted virtual machines and malicious web servers
US9398028B1 (en) 2014-06-26 2016-07-19 Fireeye, Inc. System, device and method for detecting a malicious attack based on communications between remotely hosted virtual machines and malicious web servers
US9661009B1 (en) 2014-06-26 2017-05-23 Fireeye, Inc. Network-based malware detection
US9680862B2 (en) 2014-07-01 2017-06-13 Fireeye, Inc. Trusted threat-aware microvisor
US20160006756A1 (en) 2014-07-01 2016-01-07 Fireeye, Inc. Trusted threat-aware microvisor
US10002252B2 (en) 2014-07-01 2018-06-19 Fireeye, Inc. Verification of trusted threat-aware microvisor
US20160004869A1 (en) 2014-07-01 2016-01-07 Fireeye, Inc. Verification of trusted threat-aware microvisor
US20160044000A1 (en) 2014-08-05 2016-02-11 Fireeye, Inc. System and method to communicate sensitive information via one or more untrusted intermediate nodes with resilience to disconnected network topology
US9912644B2 (en) 2014-08-05 2018-03-06 Fireeye, Inc. System and method to communicate sensitive information via one or more untrusted intermediate nodes with resilience to disconnected network topology
US10027696B1 (en) 2014-08-22 2018-07-17 Fireeye, Inc. System and method for determining a threat based on correlation of indicators of compromise from other sources
US9609007B1 (en) 2014-08-22 2017-03-28 Fireeye, Inc. System and method of detecting delivery of malware based on indicators of compromise from different sources
US9363280B1 (en) 2014-08-22 2016-06-07 Fireeye, Inc. System and method of detecting delivery of malware using cross-customer data
US10404725B1 (en) 2014-08-22 2019-09-03 Fireeye, Inc. System and method of detecting delivery of malware using cross-customer data
US10671726B1 (en) 2014-09-22 2020-06-02 Fireeye, Inc. System and method for malware analysis using thread-level event monitoring
US9773112B1 (en) 2014-09-29 2017-09-26 Fireeye, Inc. Exploit detection of malware and malware families
US10027689B1 (en) 2014-09-29 2018-07-17 Fireeye, Inc. Interactive infection visualization for improved exploit detection and signature generation for malware and malware families
US10868818B1 (en) 2014-09-29 2020-12-15 Fireeye, Inc. Systems and methods for generation of signature generation using interactive infection visualizations
US9781144B1 (en) 2014-09-30 2017-10-03 Fireeye, Inc. Determining duplicate objects for malware analysis using environmental/context information
US9690933B1 (en) 2014-12-22 2017-06-27 Fireeye, Inc. Framework for classifying an object as malicious with machine learning for deploying updated predictive models
US10366231B1 (en) 2014-12-22 2019-07-30 Fireeye, Inc. Framework for classifying an object as malicious with machine learning for deploying updated predictive models
US10902117B1 (en) 2014-12-22 2021-01-26 Fireeye, Inc. Framework for classifying an object as malicious with machine learning for deploying updated predictive models
US9787706B1 (en) 2014-12-23 2017-10-10 Fireeye, Inc. Modular architecture for analysis database
US9467460B1 (en) 2014-12-23 2016-10-11 Fireeye, Inc. Modularized database architecture using vertical partitioning for a state machine
US20160191547A1 (en) 2014-12-26 2016-06-30 Fireeye, Inc. Zero-Day Rotating Guest Image Profile
US10075455B2 (en) 2014-12-26 2018-09-11 Fireeye, Inc. Zero-day rotating guest image profile
US20160191550A1 (en) 2014-12-29 2016-06-30 Fireeye, Inc. Microvisor-based malware detection endpoint architecture
US9934376B1 (en) 2014-12-29 2018-04-03 Fireeye, Inc. Malware detection appliance architecture
US10528726B1 (en) 2014-12-29 2020-01-07 Fireeye, Inc. Microvisor-based malware detection appliance architecture
US9838417B1 (en) 2014-12-30 2017-12-05 Fireeye, Inc. Intelligent context aware user interaction for malware detection
US10798121B1 (en) 2014-12-30 2020-10-06 Fireeye, Inc. Intelligent context aware user interaction for malware detection
US10148693B2 (en) 2015-03-25 2018-12-04 Fireeye, Inc. Exploit detection system
US10666686B1 (en) 2015-03-25 2020-05-26 Fireeye, Inc. Virtualized exploit detection system
US9690606B1 (en) 2015-03-25 2017-06-27 Fireeye, Inc. Selective system call monitoring
US20160285914A1 (en) 2015-03-25 2016-09-29 Fireeye, Inc. Exploit detection system
US9438613B1 (en) 2015-03-30 2016-09-06 Fireeye, Inc. Dynamic content activation for automated analysis of embedded objects
US9912681B1 (en) 2015-03-31 2018-03-06 Fireeye, Inc. Injection of content processing delay in an endpoint
US20160335110A1 (en) 2015-03-31 2016-11-17 Fireeye, Inc. Selective virtualization for security threat detection
US10417031B2 (en) 2015-03-31 2019-09-17 Fireeye, Inc. Selective virtualization for security threat detection
US10474813B1 (en) 2015-03-31 2019-11-12 Fireeye, Inc. Code injection technique for remediation at an endpoint of a network
US9483644B1 (en) 2015-03-31 2016-11-01 Fireeye, Inc. Methods for detecting file altering malware in VM based analysis
US9846776B1 (en) 2015-03-31 2017-12-19 Fireeye, Inc. System and method for detecting file altering behaviors pertaining to a malicious attack
US10104102B1 (en) 2015-04-13 2018-10-16 Fireeye, Inc. Analytic-based security with learning adaptability
US10728263B1 (en) 2015-04-13 2020-07-28 Fireeye, Inc. Analytic-based security monitoring system and method
US9654485B1 (en) 2015-04-13 2017-05-16 Fireeye, Inc. Analytics-based security monitoring system and method
US9594904B1 (en) 2015-04-23 2017-03-14 Fireeye, Inc. Detecting malware based on reflection
US20160323295A1 (en) 2015-04-28 2016-11-03 Isight Partners, Inc. Computer Imposed Countermeasures Driven by Malware Lineage
US9892261B2 (en) 2015-04-28 2018-02-13 Fireeye, Inc. Computer imposed countermeasures driven by malware lineage
US10454950B1 (en) 2015-06-30 2019-10-22 Fireeye, Inc. Centralized aggregation technique for detecting lateral movement of stealthy cyber-attacks
US10395029B1 (en) 2015-06-30 2019-08-27 Fireeye, Inc. Virtual system and method with threat protection
US10216927B1 (en) 2015-06-30 2019-02-26 Fireeye, Inc. System and method for protecting memory pages associated with a process using a virtualization layer
US10726127B1 (en) 2015-06-30 2020-07-28 Fireeye, Inc. System and method for protecting a software component running in a virtual machine through virtual interrupts by the virtualization layer
US10642753B1 (en) 2015-06-30 2020-05-05 Fireeye, Inc. System and method for protecting a software component running in virtual machine using a virtualization layer
US10715542B1 (en) 2015-08-14 2020-07-14 Fireeye, Inc. Mobile application risk analysis
US10176321B2 (en) 2015-09-22 2019-01-08 Fireeye, Inc. Leveraging behavior-based rules for malware family classification
US20170083703A1 (en) 2015-09-22 2017-03-23 Fireeye, Inc. Leveraging behavior-based rules for malware family classification
US10033759B1 (en) 2015-09-28 2018-07-24 Fireeye, Inc. System and method of threat detection under hypervisor control
US10033747B1 (en) 2015-09-29 2018-07-24 Fireeye, Inc. System and method for detecting interpreter-based exploit attacks
US10887328B1 (en) 2015-09-29 2021-01-05 Fireeye, Inc. System and method for detecting interpreter-based exploit attacks
US9825989B1 (en) 2015-09-30 2017-11-21 Fireeye, Inc. Cyber attack early warning system
US10601865B1 (en) 2015-09-30 2020-03-24 Fireeye, Inc. Detection of credential spearphishing attacks using email analysis
US10817606B1 (en) 2015-09-30 2020-10-27 Fireeye, Inc. Detecting delayed activation malware using a run-time monitoring agent and time-dilation logic
US10873597B1 (en) 2015-09-30 2020-12-22 Fireeye, Inc. Cyber attack early warning system
US10706149B1 (en) 2015-09-30 2020-07-07 Fireeye, Inc. Detecting delayed activation malware using a primary controller and plural time controllers
US10210329B1 (en) 2015-09-30 2019-02-19 Fireeye, Inc. Method to detect application execution hijacking using memory protection
US9825976B1 (en) 2015-09-30 2017-11-21 Fireeye, Inc. Detection and classification of exploit kits
US11570204B2 (en) * 2015-10-28 2023-01-31 Qomplx, Inc. Detecting and mitigating golden ticket attacks within a domain
US20230008173A1 (en) * 2015-10-28 2023-01-12 Qomplx, Inc. System and method for detection and mitigation of data source compromises in adversarial information environments
US11570209B2 (en) * 2015-10-28 2023-01-31 Qomplx, Inc. Detecting and mitigating attacks using forged authentication objects within a domain
US20180048660A1 (en) 2015-11-10 2018-02-15 Fireeye, Inc. Launcher for setting analysis environment variations for malware detection
US10284575B2 (en) 2015-11-10 2019-05-07 Fireeye, Inc. Launcher for setting analysis environment variations for malware detection
US10834107B1 (en) 2015-11-10 2020-11-10 Fireeye, Inc. Launcher for setting analysis environment variations for malware detection
US10846117B1 (en) 2015-12-10 2020-11-24 Fireeye, Inc. Technique for establishing secure communication between host and guest processes of a virtualization architecture
US10447728B1 (en) 2015-12-10 2019-10-15 Fireeye, Inc. Technique for protecting guest processes using a layered virtualization architecture
US10108446B1 (en) 2015-12-11 2018-10-23 Fireeye, Inc. Late load technique for deploying a virtualization layer underneath a running operating system
US10341365B1 (en) 2015-12-30 2019-07-02 Fireeye, Inc. Methods and system for hiding transition events for malware detection
US10133866B1 (en) 2015-12-30 2018-11-20 Fireeye, Inc. System and method for triggering analysis of an object for malware in response to modification of that object
US10565378B1 (en) 2015-12-30 2020-02-18 Fireeye, Inc. Exploit of privilege detection framework
US10050998B1 (en) 2015-12-30 2018-08-14 Fireeye, Inc. Malicious message analysis system
US10872151B1 (en) 2015-12-30 2020-12-22 Fireeye, Inc. System and method for triggering analysis of an object for malware in response to modification of that object
US10581898B1 (en) 2015-12-30 2020-03-03 Fireeye, Inc. Malicious message analysis system
US10621338B1 (en) 2015-12-30 2020-04-14 Fireeye, Inc. Method to detect forgery and exploits using last branch recording registers
US10445502B1 (en) 2015-12-31 2019-10-15 Fireeye, Inc. Susceptible environment detection system
US9824216B1 (en) 2015-12-31 2017-11-21 Fireeye, Inc. Susceptible environment detection system
US10581874B1 (en) 2015-12-31 2020-03-03 Fireeye, Inc. Malware detection system with contextual analysis
US10671721B1 (en) 2016-03-25 2020-06-02 Fireeye, Inc. Timeout management services
US10476906B1 (en) 2016-03-25 2019-11-12 Fireeye, Inc. System and method for managing formation and modification of a cluster within a malware detection system
US10785255B1 (en) 2016-03-25 2020-09-22 Fireeye, Inc. Cluster configuration within a scalable malware detection system
US10601863B1 (en) 2016-03-25 2020-03-24 Fireeye, Inc. System and method for managing sensor enrollment
US10616266B1 (en) 2016-03-25 2020-04-07 Fireeye, Inc. Distributed malware detection system and submission workflow thereof
US10893059B1 (en) 2016-03-31 2021-01-12 Fireeye, Inc. Verification and enhancement using detection systems located at the network periphery and endpoint devices
US10826933B1 (en) 2016-03-31 2020-11-03 Fireeye, Inc. Technique for verifying exploit/malware at malware detection appliance through correlation with endpoints
US10169585B1 (en) 2016-06-22 2019-01-01 Fireeye, Inc. System and methods for advanced malware detection through placement of transition events
US10121000B1 (en) 2016-06-28 2018-11-06 Fireeye, Inc. System and method to detect premium attacks on electronic networks and electronic devices
US10462173B1 (en) 2016-06-30 2019-10-29 Fireeye, Inc. Malware detection verification and enhancement by coordinating endpoint and malware detection systems
US10191861B1 (en) 2016-09-06 2019-01-29 Fireeye, Inc. Technique for implementing memory views using a layered virtualization architecture
US10430586B1 (en) 2016-09-07 2019-10-01 Fireeye, Inc. Methods of identifying heap spray attacks using memory anomaly detection
US10592678B1 (en) 2016-09-09 2020-03-17 Fireeye, Inc. Secure communications between peers using a verified virtual trusted platform module
US10025691B1 (en) 2016-09-09 2018-07-17 Fireeye, Inc. Verification of complex software code using a modularized architecture
US10491627B1 (en) 2016-09-29 2019-11-26 Fireeye, Inc. Advanced malware detection using similarity analysis
US10795991B1 (en) 2016-11-08 2020-10-06 Fireeye, Inc. Enterprise search
US10587647B1 (en) 2016-11-22 2020-03-10 Fireeye, Inc. Technique for malware detection capability comparison of network security devices
US10581879B1 (en) 2016-12-22 2020-03-03 Fireeye, Inc. Enhanced malware detection for generated objects
US10552610B1 (en) 2016-12-22 2020-02-04 Fireeye, Inc. Adaptive virtual machine snapshot update framework for malware behavioral analysis
US10523609B1 (en) 2016-12-27 2019-12-31 Fireeye, Inc. Multi-vector malware detection and analysis
US20230014242A1 (en) * 2017-01-10 2023-01-19 Confiant Inc Methods and apparatus for hindrance of adverse and detrimental digital content in computer networks
US10904286B1 (en) 2017-03-24 2021-01-26 Fireeye, Inc. Detection of phishing attacks using similarity analysis
US10848397B1 (en) 2017-03-30 2020-11-24 Fireeye, Inc. System and method for enforcing compliance with subscription requirements for cyber-attack detection service
US20180288077A1 (en) * 2017-03-30 2018-10-04 Fireeye, Inc. Attribute-controlled malware detection
US10554507B1 (en) 2017-03-30 2020-02-04 Fireeye, Inc. Multi-level control for enhanced resource and object evaluation management of malware detection system
US10902119B1 (en) 2017-03-30 2021-01-26 Fireeye, Inc. Data extraction system for malware analysis
US10798112B2 (en) 2017-03-30 2020-10-06 Fireeye, Inc. Attribute-controlled malware detection
US10791138B1 (en) 2017-03-30 2020-09-29 Fireeye, Inc. Subscription-based malware detection
US20180375886A1 (en) * 2017-06-22 2018-12-27 Oracle International Corporation Techniques for monitoring privileged users and detecting anomalous activities in a computing environment
US10503904B1 (en) 2017-06-29 2019-12-10 Fireeye, Inc. Ransomware detection and mitigation
US10601848B1 (en) 2017-06-29 2020-03-24 Fireeye, Inc. Cyber-security system and method for weak indicator detection and correlation to generate strong indicators
US10855700B1 (en) 2017-06-29 2020-12-01 Fireeye, Inc. Post-intrusion detection of cyber-attacks during lateral movement within networks
US10893068B1 (en) 2017-06-30 2021-01-12 Fireeye, Inc. Ransomware file modification prevention technique
US20190068619A1 (en) * 2017-08-24 2019-02-28 At&T Intellectual Property I, L.P. Systems and methods for dynamic analysis and resolution of network anomalies
US10747872B1 (en) 2017-09-27 2020-08-18 Fireeye, Inc. System and method for preventing malware evasion
US10805346B2 (en) 2017-10-01 2020-10-13 Fireeye, Inc. Phishing attack detection
US20190104154A1 (en) 2017-10-01 2019-04-04 Fireeye, Inc. Phishing attack detection
US20190132334A1 (en) 2017-10-27 2019-05-02 Fireeye, Inc. System and method for analyzing binary code for malware classification using artificial neural network techniques
US20230032686A1 (en) * 2017-11-27 2023-02-02 Lacework, Inc. Using real-time monitoring to inform static analysis
US20220400129A1 (en) * 2017-11-27 2022-12-15 Lacework, Inc. Detecting Anomalous Behavior Of A Device
US20220400130A1 (en) * 2017-11-27 2022-12-15 Lacework, Inc. Generating User-Specific Polygraphs For Network Activity
US20190207967A1 (en) 2017-12-28 2019-07-04 Fireeye, Inc. Platform and method for retroactive reclassification employing a cybersecurity-based global data store
US20190207966A1 (en) 2017-12-28 2019-07-04 Fireeye, Inc. Platform and Method for Enhanced Cyber-Attack Detection and Response Employing a Global Data Store
US10826931B1 (en) 2018-03-29 2020-11-03 Fireeye, Inc. System and method for predicting and mitigating cybersecurity system misconfigurations
US11537627B1 (en) * 2018-09-28 2022-12-27 Splunk Inc. Information technology networked cloud service monitoring
US11550900B1 (en) * 2018-11-16 2023-01-10 Sophos Limited Malware mitigation based on runtime memory allocation
US20200252428A1 (en) 2018-12-21 2020-08-06 Fireeye, Inc. System and method for detecting cyberattacks impersonating legitimate sources
US20200241911A1 (en) * 2019-01-29 2020-07-30 Hewlett Packard Enterprise Development Lp Automatically freeing up virtual machine resources based on virtual machine tagging
US20200257815A1 (en) * 2019-02-12 2020-08-13 Citrix Systems, Inc. Accessing encrypted user data at a multi-tenant hosted cloud service
US20200327124A1 (en) * 2019-04-10 2020-10-15 Snowflake Inc. Internal resource provisioning in database systems
US20200341920A1 (en) * 2019-04-29 2020-10-29 Instant Labs, Inc. Data access optimized across access nodes
US20230007483A1 (en) * 2019-11-14 2023-01-05 Intel Corporation Technologies for implementing the radio equipment directive
US11522884B1 (en) * 2019-12-24 2022-12-06 Fireeye Security Holdings Us Llc Subscription and key management system

Non-Patent Citations (57)

* Cited by examiner, † Cited by third party
Title
"Mining Specification of Malicious Behavior"—Jha et al, UCSB, Sep. 2007 https://www.cs.ucsb.edu/.about.chris/research/doc/esec07.sub.--mining.pdf.
"Network Security: NetDetector—Network Intrusion Forensic System (NIFS) Whitepaper", ("NetDetector Whitepaper"), (2003).
"When Virtual is Better Than Real", IEEEXplore Digital Library, available at, http://ieeexplore.ieee.org/xpl/articleDetails.isp?reload=true&arnumbe-r=990073, (Dec. 7, 2013).
Abdullah, et al., Visualizing Network Data for Intrusion Detection, 2005 IEEE Workshop on Information Assurance and Security, pp. 100-108.
Adetoye, Adedayo , et al., "Network Intrusion Detection & Response System", ("Adetoye"), (Sep. 2003).
Apostolopoulos, George; Hassapis, Constantinos; "V-eM: A cluster of Virtual Machines for Robust, Detailed, and High-Performance Network Emulation", 14th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, Sep. 11-14, 2006, pp. 117-126.
Aura, Tuomas, "Scanning electronic documents for personally identifiable information", Proceedings of the 5th ACM workshop on Privacy in electronic society. ACM, 2006.
Baecher, "The Nepenthes Platform: An Efficient Approach to collect Malware", Springer-verlag Berlin Heidelberg, (2006), pp. 165-184.
Bayer, et al., "Dynamic Analysis of Malicious Code", J Comput Virol, Springer-Verlag, France., (2006), pp. 67-77.
Boubalos, Chris , "extracting syslog data out of raw pcap dumps, seclists.org, Honeypots mailing list archives", available at http://seclists.org/honeypots/2003/q2/319 ("Boubalos"), (Jun. 5, 2003).
Chaudet, C., et al., "Optimal Positioning of Active and Passive Monitoring Devices", International Conference on Emerging Networking Experiments and Technologies, Proceedings of the 2005 ACM Conference on Emerging Network Experiment and Technology, CoNEXT '05, Toulouse, France, (Oct. 2005), pp. 71-82.
Chen, P. M. and Noble, B. D., "When Virtual is Better Than Real, Department of Electrical Engineering and Computer Science", University of Michigan ("Chen") (2001).
Cisco "Intrusion Prevention for the Cisco ASA 5500-x Series" Data Sheet (2012).
Cohen, M.I. , "PyFlag—An advanced network forensic framework", Digital investigation 5, Elsevier, (2008), pp. S112-S120.
Costa, M. , et al., "Vigilante: End-to-End Containment of Internet Worms", SOSP '05, Association for Computing Machinery, Inc., Brighton U.K., (Oct. 23-26, 2005).
Didier Stevens, "Malicious PDF Documents Explained", Security & Privacy, IEEE, IEEE Service Center, Los Alamitos, CA, US, vol. 9, No. 1, Jan. 1, 2011, pp. 80-82, XP011329453, ISSN: 1540-7993, DOI: 10.1109/MSP.2011.14.
Distler, "Malware Analysis: An Introduction", SANS Institute InfoSec Reading Room, SANS Institute, (2007).
Dunlap, George W., et al., "ReVirt: Enabling Intrusion Analysis through Virtual-Machine Logging and Replay", Proceedings of the 5th Symposium on Operating Systems Design and Implementation, USENIX Association, ("Dunlap"), (Dec. 9, 2002).
FireEye Malware Analysis & Exchange Network, Malware Protection System, FireEye Inc., 2010.
FireEye Malware Analysis, Modern Malware Forensics, FireEye Inc., 2010.
FireEye v.6.0 Security Target, pp. 1-35, Version 1.1, FireEye Inc., May 2011.
Goel, et al., Reconstructing System State for Intrusion Analysis, Apr. 2008 SIGOPS Operating Systems Review, vol. 42 Issue 3, pp. 21-28.
Gregg Keizer: "Microsoft's HoneyMonkeys Show Patching Windows Works", Aug. 8, 2005, XP055143386, Retrieved from the Internet: URL:http://www.informationweek.com/microsofts-honeymonkeys-show-patching-windows-works/d/d-id/1035069? [retrieved on Jun. 1, 2016].
Heng Yin et al, Panorama: Capturing System-Wide Information Flow for Malware Detection and Analysis, Research Showcase @ CMU, Carnegie Mellon University, 2007.
Hiroshi Shinotsuka, Malware Authors Using New Techniques to Evade Automated Threat Analysis Systems, Oct. 26, 2012, http://www.symantec.com/connect/blogs/, pp. 1-4.
Idika et al., A-Survey-of-Malware-Detection-Techniques, Feb. 2, 2007, Department of Computer Science, Purdue University.
Isohara, Takamasa, Keisuke Takemori, and Ayumu Kubota. "Kernel-based behavior analysis for Android malware detection." Computational Intelligence and Security (CIS), 2011 Seventh International Conference on. IEEE, 2011.
Kaeo, Merike , "Designing Network Security", ("Kaeo"), (Nov. 2003).
Kevin A Roundy et al: "Hybrid Analysis and Control of Malware", Sep. 15, 2010, Recent Advances in Intrusion Detection, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 317-338, XP019150454 ISBN:978-3-642-15511-6.
Khaled Salah et al: "Using Cloud Computing to Implement a Security Overlay Network", Security & Privacy, IEEE, IEEE Service Center, Los Alamitos, CA, US, vol. 11, No. 1, Jan. 1, 2013 (Jan. 1, 2013).
Kim, H. , et al., "Autograph: Toward Automated, Distributed Worm Signature Detection", Proceedings of the 13th Usenix Security Symposium (Security 2004), San Diego, (Aug. 2004), pp. 271-286.
King, Samuel T., et al., "Operating System Support for Virtual Machines", ("King"), (2003).
Kreibich, C. , et al., "Honeycomb-Creating Intrusion Detection Signatures Using Honeypots", 2nd Workshop on Hot Topics in Networks (HotNets-11), Boston, USA, (2003).
Kristoff, J. , "Botnets, Detection and Mitigation: DNS-Based Techniques", NU Security Day, (2005), 23 pages.
Lastline Labs, The Threat of Evasive Malware, Feb. 25, 2013, Lastline Labs, pp. 1-8.
Li et al., A VMM-Based System Call Interposition Framework for Program Monitoring, Dec. 2010, IEEE 16th International Conference on Parallel and Distributed Systems, pp. 706-711.
Lindorfer, Martina, Clemens Kolbitsch, and Paolo Milani Comparetti. "Detecting environment-sensitive malware." Recent Advances in Intrusion Detection. Springer Berlin Heidelberg, 2011.
Marchette, David J., "Computer Intrusion Detection and Network Monitoring: A Statistical Viewpoint", ("Marchette"), (2001).
Moore, D. , et al., "Internet Quarantine: Requirements for Containing Self-Propagating Code", INFOCOM, vol. 3, (Mar. 30-Apr. 3, 2003), pp. 1901-1910.
Morales, Jose A., et al., ""Analyzing and exploiting network behaviors of malware."", Security and Privacy in Communication Networks. Springer Berlin Heidelberg, 2010. 20-34.
Mori, Detecting Unknown Computer Viruses, 2004, Springer-Verlag Berlin Heidelberg.
Natvig, Kurt , "SANDBOXII: Internet", Virus Bulletin Conference, ("Natvig"), (Sep. 2002).
NetBIOS Working Group. Protocol Standard for a NetBIOS Service on a TCP/UDP transport: Concepts and Methods. STD 19, RFC 1001, Mar. 1987.
Newsome, J. , et al., "Dynamic Taint Analysis for Automatic Detection, Analysis, and Signature Generation of Exploits on Commodity Software", In Proceedings of the 12th Annual Network and Distributed System Security, Symposium (NDSS '05), (Feb. 2005).
Nojiri, D. , et al., "Cooperation Response Strategies for Large Scale Attack Mitigation", DARPA Information Survivability Conference and Exposition, vol. 1, (Apr. 22-24, 2003), pp. 293-302.
Oberheide et al., CloudAV.sub.--N-Version Antivirus in the Network Cloud, 17th USENIX Security Symposium USENIX Security '08 Jul. 28-Aug. 1, 2008 San Jose, CA.
Reiner Sailer, Enriquillo Valdez, Trent Jaeger, Roonald Perez, Leendert van Doorn, John Linwood Griffin, Stefan Berger., sHype: Secure Hypervisor Appraoch to Trusted Virtualized Systems (Feb. 2, 2005) ("Sailer").
Silicon Defense, "Worm Containment in the Internal Network", (Mar. 2003), pp. 1-25.
Singh, S. , et al., "Automated Worm Fingerprinting", Proceedings of the ACM/USENIX Symposium on Operating System Design and Implementation, San Francisco, California, (Dec. 2004).
Thomas H. Ptacek, and Timothy N. Newsham , "Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection", Secure Networks, ("Ptacek"), (Jan. 1998).
Venezia, Paul , "NetDetector Captures Intrusions", InfoWorld Issue 27, ("Venezia"), (Jul. 14, 2003).
Vladimir Getov: "Security as a Service in Smart Clouds—Opportunities and Concerns", Computer Software and Applications Conference (COMPSAC), 2012 IEEE 36th Annual, IEEE, Jul. 16, 2012 (Jul. 16, 2012).
Wahid et al., Characterising the Evolution in Scanning Activity of Suspicious Hosts, Oct. 2009, Third International Conference on Network and System Security, pp. 344-350.
Whyte, et al., "DNS-Based Detection of Scanning Works in an Enterprise Network", Proceedings of the 12th Annual Network and Distributed System Security Symposium, (Feb. 2005), 15 pages.
Williamson, Matthew M., "Throttling Viruses: Restricting Propagation to Defeat Malicious Mobile Code", ACSAC Conference, Las Vegas, NV, USA, (Dec. 2002), pp. 1-9.
Yuhei Kawakoya et al: "Memory behavior-based automatic malware unpacking in stealth debugging environment", Malicious and Unwanted Software (Malware), 2010 5th International Conference on, IEEE, Piscataway, NJ, USA, Oct. 19, 2010, pp. 39-46, XP031833827, ISBN:978-1-4244-8-9353-1.
Zhang et al., The Effects of Threading, Infection Time, and Multiple-Attacker Collaboration on Malware Propagation, Sep. 2009, IEEE 28th International Symposium on Reliable Distributed Systems, pp. 73-82.

Similar Documents

Publication Publication Date Title
US11271955B2 (en) Platform and method for retroactive reclassification employing a cybersecurity-based global data store
US11647039B2 (en) User and entity behavioral analysis with network topology enhancement
US20190207966A1 (en) Platform and Method for Enhanced Cyber-Attack Detection and Response Employing a Global Data Store
US11627054B1 (en) Methods and systems to manage data objects in a cloud computing environment
US11483334B2 (en) Automated asset criticality assessment
US10686809B2 (en) Data protection in a networked computing environment
US11757906B2 (en) Detecting behavior anomalies of cloud users for outlier actions
CN113949557B (en) Method, system, and medium for monitoring privileged users and detecting abnormal activity in a computing environment
US10467426B1 (en) Methods and systems to manage data objects in a cloud computing environment
US11240275B1 (en) Platform and method for performing cybersecurity analyses employing an intelligence hub with a modular architecture
US8478708B1 (en) System and method for determining risk posed by a web user
US10341355B1 (en) Confidential malicious behavior analysis for virtual computing resources
US9729506B2 (en) Application programming interface wall
US9471469B2 (en) Software automation and regression management systems and methods
US20180295154A1 (en) Application of advanced cybersecurity threat mitigation to rogue devices, privilege escalation, and risk-based vulnerability and patch management
US20180033009A1 (en) Method and system for facilitating the identification and prevention of potentially fraudulent activity in a financial system
US11757920B2 (en) User and entity behavioral analysis with network topology enhancements
US20200167481A1 (en) System for information security threat assessment and event triggering
US11050773B2 (en) Selecting security incidents for advanced automatic analysis
US11888875B1 (en) Subscription and key management system
US20190319972A1 (en) Advanced threat detection through historical log analysis
US20210306342A1 (en) Dynamically generating restriction profiles for managed devices
US11838300B1 (en) Run-time configurable cybersecurity system
US20220385677A1 (en) Cloud-based security for identity imposter
WO2023020067A1 (en) Identifying credential attacks on encrypted network traffic

Legal Events

Date Code Title Description
FEPP Fee payment procedure - Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
STCF Information on status: patent grant - Free format text: PATENTED CASE