US20240031380A1 - Unifying of the network device entity and the user entity for better cyber security modeling along with ingesting firewall rules to determine pathways through a network - Google Patents

Unifying of the network device entity and the user entity for better cyber security modeling along with ingesting firewall rules to determine pathways through a network

Info

Publication number
US20240031380A1
US20240031380A1 (application US18/207,061)
Authority
US
United States
Prior art keywords
network
cyber
data
linking service
different
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US18/207,061
Inventor
Jake Lal
Guy Howlett
Alexander Fox Thomson
James Rees Wingar
Andrew Woodford
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Darktrace Holdings Ltd
Original Assignee
Darktrace Holdings Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Darktrace Holdings Ltd filed Critical Darktrace Holdings Ltd
Priority to US18/207,061
Assigned to Darktrace Holdings Limited reassignment Darktrace Holdings Limited ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: THOMSON, ALEXANDER FOX, WINGAR, JAMES REES
Publication of US20240031380A1
Assigned to Darktrace Holdings Limited reassignment Darktrace Holdings Limited ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LAL, JAKE, WOODFORD, Andrew, THOMSON, ALEXANDER FOX, WINGAR, JAMES REES, HOWLETT, Guy

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441 Countermeasures against malicious traffic
    • H04L63/145 Countermeasures against malicious traffic the attack involving the propagation of malware through the network, e.g. viruses, trojans or worms
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1416 Event detection, e.g. attack signature detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/02 Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L63/0227 Filtering policies
    • H04L63/0263 Rule management
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1425 Traffic logging, e.g. anomaly detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441 Countermeasures against malicious traffic
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441 Countermeasures against malicious traffic
    • H04L63/1458 Denial of Service
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441 Countermeasures against malicious traffic
    • H04L63/1483 Countermeasures against malicious traffic service impersonation, e.g. phishing, pharming or web spoofing

Definitions

  • Cybersecurity attacks have become a pervasive problem for enterprises as many computing devices and other resources have been subjected to attack and compromised.
  • a “cyberattack” constitutes a threat to security of an enterprise (e.g., enterprise network, one or more computing devices connected to the enterprise network, or the like).
  • the cyberattack may be a cybersecurity threat against the enterprise network, one or more computing devices connected to the enterprise network, stored or in-flight data accessible over the enterprise network, and/or other enterprise-based resources.
  • This security threat may involve malware (malicious software) introduced into a computing device or into the network.
  • the security threat may originate from an external endpoint or an internal entity (e.g., a negligent or rogue authorized user).
  • the security threats may represent malicious or criminal activity, ranging from theft of credentials to even a nation-state attack, where the source initiating or causing the security threat is commonly referred to as a “malicious” source.
  • Conventional cybersecurity products are commonly used to detect and prioritize cybersecurity threats (hereinafter, “cyber threats”) against the enterprise, and to determine preventive and/or remedial actions for the enterprise in response to those cyber threats.
  • the techniques described herein relate to an apparatus, including: a device linking service configured to unify data streams from different sources of access into a network to get a composite picture of a behavior of an individual physical network device that has different device identifiers from the different sources of access into the network via cross-referencing information from the different sources of access into the network.
  • the device linking service creates a unified network device identifier for the different device identifiers from the different sources of access into the network.
  • the device linking service supplies the unified network device identifier and associated information with the different device identifiers from the different sources of access into the network to a prediction engine.
  • the prediction engine runs a simulation of attack paths for the network that a cyber threat may take.
  • the techniques described herein relate to an apparatus, where the device linking service creates a meta entity identifier from the unified network device identifier and one or more user identifiers associated with the different device identifiers from the different sources of access into the network.
  • the device linking service supplies the meta entity identifier and associated information to a cyber security appliance configured to detect the cyber threat in the network.
  • the cyber security appliance uses the meta entity identifier and information associated with the unified network device identifier and the one or more user identifiers associated with the different device identifiers to create multiple models of a pattern of life for the meta entity identifier in order to detect the cyber threat.
  • the techniques described herein relate to an apparatus, where the cyber security appliance is configured to have an autonomous response module to autonomously respond to mitigate the cyber threat as well as to cooperate with the prediction engine in order to determine how to properly autonomously respond to a cyber attack by the cyber threat based upon simulations run in the prediction engine modelling the attack paths into and through the network.
  • the techniques described herein relate to a non-transitory computer readable medium configured to store instructions in an executable format in the non-transitory computer readable medium, which when executed by one or more processors cause operations, including: providing a device linking service to unify data streams from different sources of access into a network to get a composite picture of a behavior of an individual physical network device that has different device identifiers from the different sources of access into the network via cross-referencing information from the different sources of access into the network, providing the device linking service to create a unified network device identifier for the different device identifiers from the different sources of access into the network, providing the device linking service to then link the unified network device identifier with a user in the network, and providing the device linking service to supply the unified network device identifier and associated information with the different device identifiers from the different sources of access into the network to a prediction engine, where the prediction engine is configured to run a simulation of attack paths for the network that a cyber threat may take.
  • FIG. 1 illustrates an embodiment of a block diagram of an example device linking service to unify data streams from different sources of access into a network to get a composite picture of a behavior of an individual physical network device that has different device identifiers from the different sources of access into a network via cross-referencing information from the different sources of access into the network.
  • FIG. 2 illustrates an embodiment of a block diagram of an example device linking service that, in one process, links/correlates network device identifiers (dids) from different sources with different device data streams across a particular network into a single unified network device identifier for that individual physical device under analysis, where the device linking service next matches different device IDs (dids) across separate networks.
  • FIG. 3 illustrates an embodiment of a block diagram of an example device linking service to aggregate network presence information about each particular person/user of the network and their different user accounts on different third-party applications served from third-party platforms external to the network, who is then also associated with their particular individual physical network device.
  • FIG. 4 illustrates an embodiment of a block diagram of an example device linking service to unify network device identifiers and its user(s) into one meta entity identifier for a more accurate analysis in cyber security modeling to detect a cyber threat as well as how to properly autonomously respond to a cyber-attack based on the meta entity information being provided into the prediction engine to model the attack paths into and through the network.
  • FIG. 5 A illustrates an embodiment of a portion of the block diagram of the example device linking service shown in FIG. 4 .
  • FIG. 5 B illustrates an embodiment of another portion of the block diagram of the example device linking service shown in FIG. 4 .
  • FIG. 6 illustrates an embodiment of a block diagram of an example Artificial Intelligence based cyber security appliance.
  • FIG. 7 illustrates a diagram of an embodiment of a cyber threat prediction engine and its Artificial Intelligence-based simulations constructing a graph of nodes in an example network and simulating how the cyberattack might likely progress in the future tailored with an innate understanding of a normal behavior of the nodes in the system being protected and a current operational state of each node in the graph of the protected system during simulations of cyberattacks.
  • FIG. 8 illustrates a diagram of an embodiment of the cyber threat prediction engine and its Artificial Intelligence-based simulations constructing an example graph of nodes in an example network and simulating how the cyberattack might likely progress in the future tailored with an innate understanding of a normal behavior of the nodes in the system being protected and a current operational state of each node in the graph of the protected system during simulations of cyberattacks.
  • FIG. 9 illustrates a graph of an embodiment of an example chain of unusual behavior for, in this example, the email activities as well as IT activities deviating from a normal pattern of life for this user and/or device in connection with the rest of the network under analysis.
  • FIG. 10 illustrates an embodiment of the AI based cyber security appliance plugging in as an appliance platform to protect a system.
  • FIG. 11 illustrates a block diagram of an embodiment of one or more computing devices that can be a part of the Artificial Intelligence based cyber security system for an embodiment of the current design discussed herein.
  • FIG. 12 illustrates a block diagram of an embodiment of one or more computing devices that can be a part of an AI-based, cyber security system including the cyber security appliance, the restoration engine, the prediction engine, etc. for an embodiment of the current design discussed herein.
  • FIG. 1 illustrates an embodiment of a block diagram of an example device linking service to unify data streams from different sources of access into a network to get a composite picture of a behavior of an individual physical network device that has different device identifiers from the different sources of access into a network via cross-referencing information from the different sources of access into the network.
  • the device linking service 180 can create a unified network device identifier (e.g., new metadevice identifier) for the different device identifiers from the different sources of access into the network.
  • the device linking service 180 can reconcile different data streams into the behavior of a single physical network device such as a physical laptop, a physical tablet, a physical smart phone, etc.
  • FIG. 1 shows the cyber security appliance 100, a device linking service 180, one or more physical devices such as Laptop A, and a network.
  • the device linking service 180 can link each individual physical network device accessing into the network protected by a cyber security appliance 100 through multiple different connection pathways from the network traffic back together for each individual physical network device in the network.
  • One physical network device will likely have many different device identifiers assigned by the different sources of access into a network. If a network has four ways to access that network, then the one physical network device will likely have four different device identifiers assigned to that single device by the four different sources of access into the network.
  • An example network device of a laptop identifiable as, for example, laptop A can access into a network protected by the cyber security appliance 100 through, for example, i) a Zscaler Private Access (ZPA), ii) a Virtual Private Network (VPN), iii) a remote desktop connection, iv) a direct connection via ethernet and/or WiFi into the network, etc.
  • ZPA allows a network administrator to give users policy-based secure access only to the internal apps they need to get their work done, unlike VPNs, which require the user's physical network device to connect to the network to access the enterprise's applications.
  • with ZPA, application access does not require network access, but the device's behavior still needs to be tracked and monitored.
  • the cyber security appliance 100 facilitates the user's laptop A to connect into the network when the user is working from home on their Wi-fi on laptop A.
  • the cyber security appliance 100 will collect and monitor the data (e.g., IP traffic, logs, metadata, etc.) coming into the network from the ZPA, the VPN, the RDP, and from the direct connection.
  • the VPN (virtual private network) can be a service that creates a safe, encrypted online connection between the corporate network and the endpoint device (e.g., individual physical network device) being used by the user to remotely access the network.
  • RDP (Remote Desktop Protocol)
  • a VPN is more of an open pass, where anyone who can connect to the VPN server can use it to access secure networks. However, it is the VPN server that processes the device's outbound and inbound online traffic (the device's requests, websites' responses to those requests, and any files the user of the device decides to send or receive), rather than a remote RDP device itself performing these tasks.
  • the endpoint agent that is installed in the network device can monitor activities when the device is not directly connected to the Wi-Fi of the network (e.g. working from home). However, some of the activities and communication traffic, such as browsing while on a VPN, can be invisible to that installed endpoint agent because the activity and communication would be routed through a cloud-based VPN service.
  • the device linking service 180 can ingest the Syslog from the VPN client (e.g., Zscaler) and other information from the VPN client to create another source of data linkable via its device identifier and/or username on the VPN.
  • the network device (for example, laptop A) will also have an endpoint agent/client monitoring sensor resident on that individual physical network device itself, monitoring activities occurring on that device, which can be helpful in correlating the activities from these different ways into the network back to this single individual physical network device.
  • the endpoint agent/client monitoring sensor resident on that network device will securely and directly communicate with the cyber security appliance 100 to supply data (such as network traffic, information on monitored processes, e.g., applications, operating system events and interactions, and whether the ZPA, VPN, and/or RDP was active on the network device) resident on the network device itself. This can also supply a lot of information on what is happening with the network device when the network device is off-line/not connected to the network but still being used remotely by the user.
  • the user's physical network device (e.g. laptop A) also connects directly into the organization's Wi-Fi portion of the network protected by the cyber security appliance 100 when the user is working in their office at work.
  • the same network device laptop connects to the network being protected by the cyber security appliance 100 through these different access paths, along with an additional communication stream of data regarding that individual physical network device (e.g. a particular network device) from the endpoint client monitoring sensor resident on that network device itself.
  • the information relating back to that particular physical network device A can have different identifiers assigned by, for example, i) the ZPA for that network device (in this example, the device id Zia Device did 3 ); ii) the network itself (in this example, the device id Traffic Device did 1 ); iii) the endpoint client monitoring sensor resident on that network device (which typically can be the manufacturer's ID of that particular machine, in this example cSensor device did 2 ); and iv) the VPN (device id did 4 ), each of which can differ from the others.
  • the device linking service 180 needs to correlate these four different device identifiers into one unified network device identifier (e.g., new metadevice identifier).
  • the machine learning models are trained to model the pattern of life of this physical network device by utilizing the common unified network device identifier for that single network device, while still being able to separate out particular behavior for that network device (and corresponding models of a pattern of life/normal behavior for that network device) in each of the four different use scenarios: e.g., off-line from the main network; connecting to the main network while at home or traveling, through a third-party remote access service such as the ZPA and/or VPN; and connecting directly to the main network while in the office.
  • the cyber security appliance 100 still needs to properly analyze all of these aspects of the physical network device and its associated user in context individually for that work scenario as well as a big picture for all of their activities and data associated with the network device and its user(s).
  • Each individual device identifier for that physical network device can still have its own machine learning models, trained to model the pattern of life of that device identifier, maintained for that particular device identifier; one unified device analysis can then also occur via a machine learning model maintained for the unified network device identifier (e.g., new metadevice identifier) for that single physical network device.
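The bookkeeping of per-identifier models alongside one unified model could be sketched roughly as follows. This is an illustrative toy model, not the patented implementation; the class and method names, and the simple event-frequency notion of a "pattern of life," are assumptions for clarity.

```python
from collections import Counter

class PatternOfLife:
    """Toy per-identifier behavior model: frequency of observed event types."""
    def __init__(self):
        self.events = Counter()

    def observe(self, event_type):
        self.events[event_type] += 1

    def is_unusual(self, event_type):
        # An event never seen before for this identifier deviates from
        # that identifier's pattern of life.
        return self.events[event_type] == 0

class DeviceLinkingStore:
    """Keeps one model per source device ID plus one model for the
    unified network device identifier (the metadevice)."""
    def __init__(self, unified_id, source_ids):
        self.unified_id = unified_id
        self.models = {did: PatternOfLife() for did in source_ids}
        self.models[unified_id] = PatternOfLife()

    def observe(self, did, event_type):
        self.models[did].observe(event_type)               # per-scenario model
        self.models[self.unified_id].observe(event_type)   # composite model
```

An event routine for one source identifier (e.g., the VPN-assigned ID) thus stays unusual for the other identifiers individually, yet is already part of the composite picture kept under the unified identifier.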
  • the prediction engine 702 is configured to simulate cyber-attacks against the network being protected by the cyber security appliance 100 .
  • the prediction engine 702 can use the unified network device identifier to determine all of the ways a cyber threat could attack and spread throughout the network being protected by the cyber security appliance 100 and simulated by the prediction engine 702 for the different device identifiers from the different sources of access into the network.
  • the prediction engine 702 can create a logical and reasonable attack path through the network by unifying all of the possible routes of this network device, rather than considering each device id assigned to that physical network device separately depending upon the use scenario, because the same physical network device can have a different identifier in each of the different use scenarios.
  • the prediction engine 702 can use the unified network device identifier for that single physical network device to view that device as a connected whole while still having the capability to view individual scenarios.
  • the prediction engine 702 cooperates with the cyber security appliance 100 to monitor traffic into the network in order to map all paths into and through the network taken by the monitored traffic.
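As a greatly simplified sketch of this kind of attack-path enumeration, the network can be modeled as a graph of nodes whose edges are the monitored access routes, and the possible paths a threat could take from an entry point to a target enumerated breadth-first. The node names, the adjacency representation, and the breadth-first strategy are assumptions for illustration, not the prediction engine's actual simulation method.

```python
from collections import deque

def enumerate_attack_paths(adjacency, entry, target, max_hops=4):
    """Breadth-first enumeration of the simple paths a cyber threat
    could take from an entry node to a target node."""
    paths, queue = [], deque([[entry]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            paths.append(path)
            continue
        if len(path) > max_hops:
            continue  # bound the simulation depth
        for neighbor in adjacency.get(node, []):
            if neighbor not in path:  # simple paths only, no cycles
                queue.append(path + [neighbor])
    return paths
```

With a unified device identifier, laptop A appears as one node reachable via several gateways (ZPA, VPN, direct connection), so every route through it is considered together rather than as separate devices.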
  • the device linking service 180, in one process, links/correlates network device identifiers from different sources with different device data streams across a particular network into a single/individual unifying device identifier for that single physical device, in order to link the different device data streams to a singular physical device; the device linking service 180 next also matches user IDs across those different platforms to the device IDs, for each user ID known to use that particular device ID.
  • the device linking service 180 unifies data streams to get a composite picture of a behavior of a given network device that has different device identifiers from different sources of access into a network, via cross-referencing information from the different sources of access into the network, and then linking that unified network device Id with a user entity.
  • the device linking service 180 maps and links device ids from different sources of access to the network by looking at many shared elements amongst the physical network device identified from each different source.
  • some of the shared elements may be device identifier and/or username, IP address, hostname, and the date and time of the same activity overlapping between the different sources of data (e.g. details regarding some browsing information that occurred on the physical network device that was captured as part of the data being fed from that source to the device linking service 180). Another activity overlapping between the different sources of data could be a matching login and logout time and date.
  • the device linking service 180 can use a direct string matching of different values to find these shared elements but also could use Fuzzy logic matching. Fuzzy logic can be an artificial intelligence and machine learning technology that performs fuzzy approximate name matching and/or fuzzy approximate string matching that identifies similar, but not identical elements in data table sets.
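The combination of direct matching on strong shared elements and fuzzy approximate string matching might be sketched as below, using Python's standard-library SequenceMatcher as a stand-in for the fuzzy matching technology. The record fields and the 0.8 similarity threshold are illustrative assumptions, not the patented matching rules.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Approximate string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def records_match(rec_a, rec_b, threshold=0.8):
    """Cross-reference two device records seen by different sources of
    access (e.g., VPN log vs. traffic mirror) for the same physical device."""
    # Direct string matching: the same IP seen at the same login time
    # across two sources is strong evidence of the same device.
    if rec_a["ip"] == rec_b["ip"] and rec_a["login"] == rec_b["login"]:
        return True
    # Fuzzy matching tolerates per-source naming differences in otherwise
    # similar, but not identical, elements (e.g. hostnames).
    return similarity(rec_a["hostname"], rec_b["hostname"]) >= threshold
```

For example, "ws192-laptop" reported by an endpoint sensor and "WS192_Laptop" reported by a VPN gateway are not identical strings, but score well above the threshold and would be linked.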
  • the device linking service 180 correlates a lot of different sources for the network data, application data, and other data that the device linking service 180 can combine to create meta-entities (the unified network device identifier and one or more user identifiers associated with the different device identifiers) and link them together while still being able to put activities for a given work scenario in context according to a model of a normal pattern of life.
  • FIG. 2 illustrates an embodiment of a block diagram of an example device linking service that, in one process, links/correlates network device identifiers (dids) from different sources with different device data streams across a particular network into a single unified network device identifier for that individual physical device under analysis, where the device linking service next matches different device IDs (dids) across separate networks.
  • two networks, network 1 and network 2, each have their own set of physical network devices connecting to that particular network.
  • Each separate network is protected by its own cyber security appliance 100. The device linking service matches the device IDs across the separate networks into a single unified network device identifier for that individual physical device for improved, more efficient, and accurate simulation runs (e.g. attack path modeling) by the prediction engine 702.
  • a physical network device can be used in multiple different geographic locations that each have their particular network protected by its own cyber security appliance 100 .
  • the device linking service 180 links an individual physical network device monitored and tracked from multiple different cyber security appliance 100 platforms back together into a unified network device identifier being utilized in multiple different networks each network protected by a different cyber security appliance 100 .
  • the device linking service 180 links a physical network device used in multiple different monitored networks back together, with one common, universal view of that physical network device across the network platforms in which it is used.
  • Some organizations' global networks are so big and geographically diverse that portions of that global network need to be broken up into their own local portions of the global network.
  • Each local portion of the global network can have its own cyber security appliance 100 protecting that local portion of the global network while a master cyber security appliance 100 protects the overall global network.
  • Employees travel to different locations (the United States, the United Kingdom, Asia, etc.) and may be working at different geographically located offices, which are monitored by different cyber security appliances 100, and all of that activity and behavior has to be tied back together under one unified view of that network entity.
  • the device linking service 180 is configured to correlate information on these different device identifiers from discrete networks into the one common unified network device identifier for that single network device.
  • the device linking service 180 can use a uniform analysis format via translation and mapping (e.g. applying string matching/fuzzy logic between device ids from different platforms, and between user ids from different platforms and user ids tied to use of one or more device ids), placing the information into a central data store that stores these data points organized by how they relate to other data points, as well as creating a new record that maintains the composite data points in the uniform analysis format.
  • the cyber security appliance 100 1 monitors and tracks information relating to that particular physical network device's activities and behavior in network 1 by different device identifiers assigned by, for example, i) the Cisco ASA VPN for that physical network device (assigned in this example device did 2 ); ii) the network itself (assigned in this example as traffic mirror did 2 ); as well as iii) the endpoint client monitoring sensor resident on that physical network device (assigned in this example as device did 3 ).
  • the cyber security appliance 100 2 monitors and tracks information relating to that particular physical network device's activities and behavior in network 2 by different device identifiers assigned by, for example, i) the network itself (assigned in this example as traffic mirror did 202 ); as well as ii) the endpoint client monitoring sensor resident on that physical network device (assigned in this example as device did 203 ).
  • the device linking service 180 is configured to correlate information on these two different unified network device identifiers for that single physical network device together back into one common unified network device identifier for that single network device.
  • the device linking service 180 is configured to also create a centralized data store for information on that physical network device being used across separate networks. (e.g., see FIG. 4 )
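One simple way to collapse identifiers that have been linked by shared evidence, whether within one network or across separate networks, is a union-find (disjoint-set) structure: every linked pair ends up under one representative identifier. This is an illustrative assumption about how such a central correlation could be done, not the patented mechanism; the identifier names mirror the examples above.

```python
class DeviceIdUnifier:
    """Union-find over device identifiers: any two IDs linked by shared
    evidence collapse into one unified network device identifier."""
    def __init__(self):
        self.parent = {}

    def find(self, did):
        """Return the representative (unified) identifier for did."""
        self.parent.setdefault(did, did)
        while self.parent[did] != did:
            # Path halving keeps lookups near-constant time.
            self.parent[did] = self.parent[self.parent[did]]
            did = self.parent[did]
        return did

    def link(self, did_a, did_b):
        """Record evidence that two identifiers are the same physical device."""
        self.parent[self.find(did_a)] = self.find(did_b)
```

Linking the network 1 identifiers to each other, then linking a network 2 identifier to any one of them, leaves every identifier resolving to the same unified network device identifier.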
  • FIG. 3 illustrates an embodiment of a block diagram of an example device linking service to aggregate network presence information about each particular person/user of the network and their different user accounts on different third-party applications served from third-party platforms external to the network, who is then also associated with their particular individual physical network device.
  • a portion of the device linking service 180 can obtain device identifiers for a physical network device from information obtained through the cyber security appliance 100 monitoring the network traffic, and other data.
  • a portion of the device linking service 180 can also obtain user account identifiers for a user associated with a given physical network device from information obtained through the prediction engine 702 monitoring the traffic, application information, and other data.
  • the device linking service 180 is then configured to link the unified network device identifier with a user entity in the network.
  • the device linking service 180 is configured to create a meta entity identifier from the unified network device identifier and one or more user identifiers associated with the different device identifiers from the different sources of access into the network.
  • the device linking service 180 can link device identifiers for a physical network device from information obtained through the cyber security appliance 100 and the prediction engine 702 collecting metadata on a user and their different account information for this entity.
  • the device linking service 180 creates a meta entity for a physical network device with the example identifier of WS 192 Laptop and its associated user(s).
  • the device linking service 180 can link device identifiers for a physical network device in the top portion of FIG. 3 .
  • the device linking service 180 matches different device IDs (dids) across a particular network into a single unified network device identifier for that single physical device.
  • the device linking service 180 performs this act/methodology for each different platform/network so that each platform has one single unifying device identifier for that single physical device accessing that network from multiple different source platforms. (See FIG. 1 and the top portion of FIG. 3 )
  • the device linking service 180 next matches different device IDs (dids) across separate networks, each protected by its own cyber security appliance 100, into a single unified network device identifier for that single physical device. (See FIG.
  • the device linking service 180 can match the IP addresses, logins, approximate timestamps, and other shared elements to link device IDs from different sources to the same network device.
  • the device linking service 180 also matches user IDs from across different account information from discrete platforms into a single unifying user ID. (See the bottom portion of FIG. 3 ) Next, the device linking service 180 matches user IDs across those different platforms to the device IDs, for each user ID known to use that particular device ID. (See the interexchange between the top and bottom of FIG. 3 ) Finally, the device linking service 180 can create a meta entity ID by linking the single unifying meta user ID across the different networks and different platforms with the single unifying meta device ID across the different networks and different platforms, and store that meta entity ID, along with all of the information needed to make it, in the central data store. Thus, the device linking service 180 can secondarily link that properly understood network device (e.g.
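The matching methodology above can be sketched in code. This is a minimal illustration only: the `Observation` record, the source names, and the five-minute matching window are assumptions made for the example, not details taken from the embodiments.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Observation:
    source: str        # e.g. "vpn", "traffic_mirror", "endpoint_agent" (assumed names)
    device_id: str     # the source-specific device identifier (did)
    ip: str
    login: str
    seen_at: datetime

def link_device_ids(observations, time_window=timedelta(minutes=5)):
    """Group source-specific device IDs into unified device entities by
    matching shared IP address and login within an approximate time window."""
    unified = []  # each entry: {"dids": set of (source, did), "obs": [Observation, ...]}
    for obs in observations:
        for entity in unified:
            # Link when IP, login, and approximate timestamp all agree
            if any(o.ip == obs.ip and o.login == obs.login
                   and abs(o.seen_at - obs.seen_at) <= time_window
                   for o in entity["obs"]):
                entity["dids"].add((obs.source, obs.device_id))
                entity["obs"].append(obs)
                break
        else:
            unified.append({"dids": {(obs.source, obs.device_id)},
                            "obs": [obs]})
    return unified
```

This greedy first-match grouping is only a stand-in; the description makes clear that the actual service also applies fuzzy matching and active querying against external sources.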
  • the device linking service 180 can link meta data collected on the entity by the prediction engine 702 .
  • the device linking service 180 can also do more fuzzy matching and more active querying to link the user's ID across different sources of information, for example:
  • traffic monitoring of email accounts
  • a physical network device, such as a laptop, assigned an identifier of WS 192 on the network
  • Microsoft Defender security app master data management (MDM) data, used to create a single master record for each person, place, and/or thing in the network, from across internal and external data
  • traffic monitoring of Salesforce tools used to manage customer data, automate processes, analyze data and insights, and create personalized customer experiences (e.g. Salesforce domains)—ash.b
  • Salesforce User accounts, Active Directory security groups, and Lightweight Directory Access Protocol (LDAP) data (distributed directory information services over an Internet Protocol network), along with other user account linking services such as Account Linking from Google, etc., to determine the user's personal name and then match that back to the user's identity.
  • the device linking service 180 actively queries the sources of data to obtain additional information from each of those services regarding the user (their account information) as well as passively monitors the activities and communications to collect information that can be used for the fuzzy logic matching.
  • LDAP stands for Lightweight Directory Access Protocol.
  • the device linking service 180 actively queries third-party/external sources to obtain additional information to create the unified view of the user entity. This meta entity (of the combined user identity and unified network device entity) can then be used to improve attack path modeling, with the example of making it easier to traverse between user links and then follow them.
  • the device linking service 180 in the cyber security appliance 100 uses the I/O ports to actively pull data/query data from and passively monitor data from multiple sources.
  • the cyber security appliance 100 pulls data from and receives data from, for example, an endpoint management platform that covers the endpoint.
  • the device linking service 180 in the cyber security appliance 100 can actively pull the user account information from different third party platforms hosting these applications. For example, the cyber security appliance 100 pulls data from and receives data from SaaS and email third-party platforms.
  • the cyber security appliance 100 pulls data from and receives data from the network that it is protecting and monitoring.
  • FIG. 4 illustrates an embodiment of a block diagram of an example device linking service to unify network device identifiers and their user(s) into one meta entity identifier for a more accurate analysis in cyber security modeling to detect a cyber threat, as well as to determine how to properly autonomously respond to a cyber-attack, based on the meta entity information being provided into the prediction engine 702 to model the attack paths into and through the network.
  • FIG. 5 A illustrates an embodiment of a portion of the block diagram of the example device linking service shown in FIG. 4 .
  • the example device linking service 180 can supply the meta entity identifier and associated information to a cyber security appliance 100 configured to detect a cyber threat in the network.
  • the cyber security appliance 100 can use the meta entity identifier and information associated with the unified network device identifier and the one or more user identifiers associated with the different device identifiers to create multiple models of a pattern of life for the meta entity in order to detect the cyber threat.
  • FIG. 5 B illustrates an embodiment of another portion of the block diagram of the example device linking service 180 shown in FIG. 4 .
  • An example device linking service 180 can cooperate with the firewall configuration ingester 176 , the prediction engine 702 , the cyber security appliance 100 , and a central data store 192 .
  • the device linking service 180 can generate queries for contextual data about the user and the physical network device from sources such as a threat intel platform (for example, Sophos), an Active Directory environment, a Mobile Device Management (MDM) service such as Jamf, an asset management platform, etc.
  • the information from the device linking service 180 can be fed from there into the cyber security appliance 100 , the central data store 192 , and the prediction engine 702 .
  • once the device linking service 180 creates the conception of the common unified network device entity (what the single physical network device, e.g. a laptop, is), then the device linking service 180 will link that unified network device identifier to the user of that device as a kind of secondary entity. (See FIG. 3 SaaS account, email account linked to the user)
  • the device linking service 180 can aggregate different network device identifiers into one common unified network device entity identifier for that single network device, under analysis, so that the rest of the cyber security system (e.g. the prediction engine 702 and the cyber security appliance 100) can link this single physical network device, under analysis, accessing into a network protected by the cyber security appliance 100 through multiple different connection pathways from network traffic back together into one network entity for better pattern of life tracking, as well as better modeling by the prediction engine 702 of the possible route that a cyber-attack may be able to take through a network.
  • a Detect engine in the cyber security appliance 100 can have many modules such as a network ingestion module (including on premise/local network or cloud), a SAAS module, an email module, a login module, an endpoint agent (e.g. client sensor cSensor) input module, a cloud environment monitoring module, and an industrial network ingestion module.
  • the modules in the Detect engine can be used to establish the behavior of a user's physical network device, such as a laptop, which has been broken down between all of these different incoming data streams with some duplication. These modules are monitored and act as a source of data to model the pattern of life for each specific physical network device and user in the network in the cyber security appliance 100.
  • the cyber security appliance 100 thus creates multiple modelled entities for the pattern of life for this meta entity identifier per the user's normal work routines along with a big picture/composite model pattern of life for that model of the particular network device.
  • the device linking service 180 correlates a lot of different sources for the data that the system can combine to create these meta entities and link them together while still being able to put activities for a given work scenario in context according to a specific model of a normal pattern of life. In an embodiment, merely a unified view of the pattern of life for that model of the particular network device is created and maintained.
  • the device linking service 180 also queries for the pattern of life information on the network devices and their corresponding user(s) in the network.
  • the cyber security appliance 100 pulls data from and receives data from, for example, an endpoint vendor platform that covers endpoint devices.
  • the cyber security appliance 100 pulls data from and receives data from third-party platforms such as SaaS, email third-party platforms not run directly by the network being protected by the cyber security appliance 100 (e.g. personal email accounts versus the employee's work email account).
  • the cyber security appliance 100 pulls data from and receives data from the IT network that it is protecting and monitoring.
  • the multiple modules in the cyber security appliance 100 can supply information to the device linking service 180 regarding in this example network information from both on premise/local network as well as accessed via RDP through the cloud, SAAS information, email information, login information, an endpoint agent information, the cloud environment information from the external third party services, etc.
  • the device linking service 180 outputs a unified network device identifier for each physical network device connecting to that network.
  • the device linking service 180 also creates meta entities (combining the unified network device identifier and the user account identifier information for the user of that physical network device).
  • the device linking service 180 can supply the unified network device identifier and associated information with the different device identifiers from the different sources of access into the network to a prediction engine 702 .
  • the prediction engine 702 uses the unified network device identifier and associated information to determine nodes in the network directly touched and/or affected by the unified network device identifier, and ingests the firewall rules as well to determine and add onto the possible pathways through the network in the attack path modeling (e.g. simulation runs).
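One way to picture how a prediction engine might enumerate candidate pathways once the connections observed in traffic and the connections theoretically permitted by firewall rules are merged into a single graph. The graph encoding, node names, and hop limit below are illustrative assumptions, not the appliance's actual implementation.

```python
from collections import deque

def attack_paths(start, edges_observed, edges_firewall_allowed, max_hops=4):
    """Enumerate candidate attack paths from a start node, combining edges
    actually observed in network traffic with edges theoretically allowed
    by the ingested firewall rules."""
    graph = {}
    for a, b in list(edges_observed) + list(edges_firewall_allowed):
        graph.setdefault(a, set()).add(b)

    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        for nxt in graph.get(path[-1], ()):
            if nxt not in path:            # avoid cycles
                paths.append(path + [nxt])
                if len(path) < max_hops:   # bound the search depth
                    queue.append(path + [nxt])
    return paths
```

In this sketch, a firewall-allowed edge extends a path beyond what traffic monitoring alone would reveal, which mirrors the described benefit of ingesting firewall rules into the attack path modeling.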
  • the device linking service 180 in one process links assigned network device identifiers from different sources in order to link the different device data streams into a singular physical network device (e.g. that device's unified network device identifier) that is in a second process linked with the physical user(s) of the linked physical network devices (e.g. a meta entity identifier) for improved, more efficient and accurate attack path modeling by a prediction engine 702 .
  • the device linking service 180 unifies data streams to get a better big-picture idea of the behavior of a given physical network device that has different device identifiers from different sources of information, and then links that unified network device identifier with a user entity of that physical device (which forms a meta entity identifier).
  • the device linking service 180 actively queries the third-party/external sources to obtain additional information to create the unified view of the user entity and their user account information from the many third-party applications.
  • This meta entity (of the combined user identity and unified network device entity) can then be used to improve attack path modeling, with the example of making it easier to traverse between the user links and their associated physical network device links and then follow the combination of those links/nodes in the attack path through the network.
  • the device linking service 180 passively as well as actively queries to ingest data from various viable third-party vendor platforms and analyzes the ingested data, passing it into the prediction engine 702 to perform attack path modeling (simulations) knowing all of the hypothetically possible paths, because now it knows all of the nodes that the meta entity touches/affects instead of just paths directly related to one of the user identifiers or one of the network device IDs.
  • the device linking service 180 can use fuzzy logic and correlations to link devices to users.
  • the device linking service 180 can also use a very similar methodology to identify user identities back to a specific user within cybersecurity software by matching SaaS Accounts, Emails, Active Directory (AD) accounts, and network devices that are used by the same person.
  • an AD account can be a centralized and standardized system for Microsoft Windows that automates management of user data, security, and distributed resources and enables interoperation with other directories.
  • the device linking service 180 can identify user identities for SaaS devices and users across different services (Emails, AD Accounts, Network) via aggregation through the fuzzy matching and other more complex machine learning-based methods, and link network devices to SaaS services using aggregation data such as that acquired from Lightweight Directory Access Protocol (LDAP) and Active Directory (AD) server enrichment, and other external services.
  • network devices may be linked to SaaS services using aggregation data such as that acquired from the LDAP and AD Servers enrichment, from external services such as Microsoft Defender, or from credentials observed by our own endpoint agents.
  • LDAP can be a standard protocol designed to maintain and access distributed “directory services” within an IP network to, for example, provide a central place to store usernames and passwords.
  • the unified network device entity can be a composite representation of all of the different device ids for a physical network device.
  • the unified network device identifier for a given physical network device can be taken one step further in the device linking service 180 , where these linked devices are turned into “meta”/unified device entities.
  • Information known about composite parts of the meta entity (e.g., the user and their physical network device) can be used in, for example, the prediction engine 702 to contribute to scoring about their importance to the organization, their "weakness", and to tailor synthetic campaigns towards them.
  • the prediction engine 702 can tailor a simulated cyber-attack involving that meta entity, starting with, for example, a phishing attack using a Windows updates alert.
  • the cyber-attack would spread in all directions for each of the different device identifiers and user accounts associated with that meta entity.
  • if a meta-device entity for firstname.lastname@example.com contains an AWS account, but others do not, it can be deduced that they have access to a cloud environment, which boosts their importance.
  • the combined user's presence and their network device connections across many different facets of business operations can be aggregated to overall impact their “targetability” in the context of attack path modeling.
  • Node exposure, node weakness, and “damage” scores can all be impacted by the presence on these different services, which can be added together to assign these scores to a meta-device which represents a person in a business.
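A toy sketch of how per-service scores could be added together into scores for the meta-device, as described above. The field names and the "targetability" formula are assumptions made purely for illustration; the patent does not specify the scoring arithmetic.

```python
def meta_device_score(service_presences):
    """Combine node exposure, weakness, and 'damage' scores from each service
    where the meta entity has a presence into overall scores for the
    meta-device that represents a person in the business.

    Each item in service_presences is a dict of per-service scores, e.g.
    {"exposure": 0.4, "weakness": 0.2, "damage": 0.7} (field names assumed)."""
    totals = {"exposure": 0.0, "weakness": 0.0, "damage": 0.0}
    for presence in service_presences:
        for key in totals:
            totals[key] += presence.get(key, 0.0)
    # Presence across more services plausibly raises overall "targetability";
    # this particular formula is an illustrative assumption.
    totals["targetability"] = sum(totals.values()) * len(service_presences)
    return totals
```

The additive aggregation reflects the statement that the scores "can be added together to assign these scores to a meta-device"; any weighting beyond that is hypothetical.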
  • the meta entity identifier is an aggregation of the user's presence of all of their network devices and different user accounts.
  • the device linking service 180 can create a common meta entity identifier to aggregate different sources of network traffic from an individual physical device across multiple sources of access and a user's network presence information for a certain physical network device, for each physical network device in the network.
  • the device linking service 180 also aggregates different information about a person/user of the network and their different user accounts on different third-party platforms, who is then associated with this particular network device and also has a presence in third-party applications and its user (e.g. an aggregate meta user id).
  • the device linking service 180 correlates the common unified network device identifier with the aggregate meta user id for the user of that particular network device, under analysis, to form the meta entity identifier.
  • alternatively, the meta entity identifier can be created by the prediction engine 702. Either way, the meta entity identifier is used to improve the ability to replicate a logical and reasonable attack path by unifying all of the routes of the user presence, compared to considering each different device identifier from its source separately.
  • the device linking service 180 can passively monitor the data streams from the different sources having access into the network as well as actively query third-party platforms to gather and ingest device data, user data, and activity data from various third-party vendors and then analyze the ingested data, and then pass the ingested data on the meta entity (e.g. aggregate user Ids and device IDs) data into the prediction engine 702 to perform the (attack path modeling) simulation of attack paths for the network that the cyber threat may take.
  • the prediction engine 702 now knows all of the hypothetically possible paths because all of the known nodes that the meta entity touches/affects are identified, instead of just paths directly related to one of the user identifiers or one of the network device identifiers making up the meta entity.
  • the device linking service 180 in the cyber security appliance 100 can use the gathering module, the datastore, and the I/O ports to actively pull data/query data from and passively monitor data from the multiple third-party sources to obtain stream data needed to match the network devices identifiers across different networks.
  • the device linking service 180 can maintain data from the data streams and other sources of data in its generic format, as well as put relevant data into a uniform analysis format via translation and mapping (e.g. applying string matching/fuzzy logic), and then use a central data store 192 to store the relevant data in the uniform analysis format, with data points organized by how they relate to other data points.
  • the device linking service 180 can 1) apply at least one of string matching and fuzzy logic to cross-reference information from the different sources of access into the network as well as 2) use a central data store 192 to store data points organized by how the data points relate to another data point.
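A minimal example of the kind of string matching/fuzzy logic cross-referencing described above, using Python's standard-library `difflib`. The threshold and lowercase normalization are illustrative choices; the actual service is described as also using more complex machine learning-based methods.

```python
from difflib import SequenceMatcher

def fuzzy_match(a, b, threshold=0.8):
    """Decide whether two identifiers from different sources (e.g. a login
    fragment and an email address) plausibly refer to the same user.

    Uses a substring check plus a simple similarity ratio as a stand-in
    for richer fuzzy logic."""
    a, b = a.lower(), b.lower()
    if a in b or b in a:
        return True
    return SequenceMatcher(None, a, b).ratio() >= threshold
```

For instance, a VPN login and an Active Directory account name that differ only by punctuation would match, while unrelated names would not.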
  • the device linking service 180 supplies the meta entity identifier information to the prediction engine 702 for attack path modeling in order to have better modeling of device behavior as well as the Detect engine for modeling the pattern of life.
  • the prediction engine 702 for attack path modeling modifies both the network node exposure score as well as its weakness score, which are utilized to determine a particular cyber threat path into and then through the network in a simulated cyber-attack on that network.
  • the device linking service 180 can use the results of any of the above steps/processes to make use of that information for a better pattern of life tracking, as well as a better modeling by a prediction engine 702 of the possible route that a cyber-attack may be able to take through a network.
  • the cyber security appliance 100 can have an autonomous response module 140 to autonomously respond to mitigate the cyber threat as well as to cooperate with the prediction engine 702 in order to determine how to properly autonomously respond to a cyber-attack by the cyber threat based upon simulations run in the prediction engine 702 modeling the attack paths into and through the network.
  • the prediction engine 702 runs many, many simulations of attack paths for the network that each cyber threat may take.
  • the device linking service 180 allows the monitoring of the behavior (traffic and activity) of a network device across the many different sources of information, some from third-party platforms, and then taking autonomous action on the network device in each of those third-party platforms and on the network entity as a group, rather than individually applying actions to each one.
  • the firewall network tool is on the right side.
  • the device linking service 180 can cooperate with a firewall configuration ingester 176 and the prediction engine 702 .
  • the prediction engine 702 can combine all of the paths into and through the network taken by the monitored traffic with the possible paths through the network theoretically possible in accordance with the firewall rules from the firewall configuration ingester 176 in light of the unified network device identifier with a user entity in the network from the device linking service 180 to determine possible attack paths when running the simulation of attack paths for the network that the cyber threat may take.
  • the firewall configuration ingester 176 requests current firewall configurations for points of ingress into the network.
  • the firewall configuration ingester 176 queries for firewall configuration rules.
  • the firewall configuration ingester 176 can examine firewall rules implemented by a firewall to identify routes into the organization's network allowed by the current firewall rules and supply the prediction engine 702 with the possible routes that a cyber-attack by the cyber threat may take into the network and permitted reasons into the network.
  • the firewall configuration ingester 176 can ingest firewall rules to determine theoretically possible paths through the network in accordance with the firewall rules and a mapping of nodes of the network.
  • the firewall configuration ingester 176 also analyzes possible routes into the network based upon the configuration. For example, the firewall configuration ingester 176 analyzes possible routes between subnets allowed by the firewall configuration rules. The firewall configuration ingester 176 then determines all of the hypothetically possible paths/routes through the network based on these two factors.
  • the firewall configuration ingester 176 can then pass the hypothetically possible paths/routes through the network to the attack path modeling component in the prediction engine 702 .
  • the firewall configuration ingester 176 determines all of the hypothetically possible paths/routes through the network which can be analyzed together with the actual paths/routes that have actually been used previously in the network (which have been detected through previous network traffic and activities tracked in this network).
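The subnet-route analysis can be sketched as follows. This simplified example assumes each rule is a flat allow/deny over CIDR ranges and ignores rule ordering and deny precedence, both of which a real firewall ingester would have to honor.

```python
from ipaddress import ip_network

def possible_routes(firewall_rules, subnets):
    """Derive the theoretically possible subnet-to-subnet routes allowed by
    a set of firewall rules.

    Each rule is a dict like {"action": "allow", "src": "10.0.0.0/8",
    "dst": "10.0.0.0/8"} (this rule shape is assumed for the example)."""
    routes = set()
    nets = [ip_network(s) for s in subnets]
    for rule in firewall_rules:
        if rule["action"] != "allow":
            continue
        src, dst = ip_network(rule["src"]), ip_network(rule["dst"])
        for a in nets:
            for b in nets:
                # A route a -> b is hypothetically possible when the rule's
                # source range covers a and its destination range covers b.
                if a != b and a.subnet_of(src) and b.subnet_of(dst):
                    routes.add((str(a), str(b)))
    return routes
```

These hypothetically possible routes would then be handed to the attack path modeling component alongside the routes actually observed in traffic.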
  • the attack path modeling component in the cyber-attack simulator could also determine all of the hypothetically possible paths/routes through the network which can be analyzed together with the actual path/routes that have actually been used previously in the network (derived from the monitored network traffic).
  • the firewall configuration ingester 176 looks at current firewall configurations for points of ingress in the network and identifies changes over time to the firewall configurations causing new attack path modeling routes.
  • the firewall configuration ingester 176 in the cyber security appliance 100 can look at current firewall configurations and their settings for points of ingress into the network and identify changes to the firewall configurations that cause new attack path modeling routes.
  • a number of firewall integrations require the firewall configuration ingester 176 to request the list of all the current firewall rules that the Artificial Intelligence based cyber security appliance 100 needs to know.
  • the firewall configuration ingester 176 identifies changes to the firewall configurations by modeling the firewall configuration rules over time, and then looks at time series data for the firewall configuration rules.
  • the Artificial Intelligence based cyber security appliance 100 can utilize this information to identify “true” routes into the organization for the prediction engine 702 . This gives the Artificial Intelligence an awareness of ways/routes into the network and permitted reasons into the network.
  • the Artificial Intelligence based cyber security appliance 100 can also model the changes in these rules over time to detect unusual rules. For example, the time model can keep track of those rules over time and, for example, look for spikes in things that now have gotten access and/or have an unusual access.
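A stand-in for the time-series modeling of rule changes might look like the following, where a simple z-score over daily counts of newly added rules replaces the AI model described above; the counts, window, and threshold are all illustrative assumptions.

```python
from statistics import mean, stdev

def flag_rule_spikes(daily_new_rule_counts, z_threshold=3.0):
    """Flag days where the number of newly added firewall rules spikes well
    above the historical baseline, as a toy proxy for 'look for spikes in
    things that now have gotten access'."""
    flagged = []
    for i, count in enumerate(daily_new_rule_counts):
        history = daily_new_rule_counts[:i]
        if len(history) < 3:          # need some history before judging
            continue
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            if count > mu:            # any rise over a flat baseline
                flagged.append(i)
        elif (count - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged
```

A flagged day would then warrant checking whether the new rules open an unusual route into the network.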
  • the firewall configuration ingester 176 can examine firewall configurations and their settings for points of ingress in the network and identifies changes over time to the firewall configurations that cause new attack path modeling routes into the network and then supplies this information into the prediction engine 702 of the possible route that a cyber-attack may be able to take to progress into the network and can also model the changes in these rules over time to detect unusual rules.
  • the firewall configuration ingester 176 examines firewall configurations and their settings to model changes in these rules over time, detecting unusual rules and changes to the firewall configurations that cause new attack path modeling routes into the network.
  • the Artificial Intelligence model of the firewall configuration ingester 176 receives firewall rules (and related and/or associated data) as another form of time series data that the modules and models check for anomalies, such as when a new route into the network becomes available as new network boundary equipment is added.
  • the firewall configuration ingester 176 also ensures that the information and the relevant details are fed to a restoration component (e.g. modules scripted to restore network devices and the network to the configuration they were in before the cyber-attack occurred, without additional human input needed to perform the restoration) as well as to the prediction engine 702.
  • the firewall configuration ingester 176 uses this information to determine many things, for example, that the network has a new route into it, that a new externally exposed component is communicating into the network, a network component, etc.
  • the Artificial Intelligence based cyber security appliance 100 can create firewall rules that prevent external network access to compromised or recently “healed” devices (e.g. restored to a configuration before the cyber-attack) to halt any connectivity.
  • the modules can also feed routine and anomalous ingress data to the prediction engine 702 (simulator, virtual network generator, etc. configured to run attack scenarios and feedback the results of those attack scenarios) and also to flag detected anomalies and errant configurations that might be vulnerable to a cyber threat (e.g. malicious actor) letting unauthorized data in or out of the network.
  • the firewall configuration ingester can look at and use legacy data that the security appliance already has access to, for example, data from legacy systems.
  • the modules and model are able to figure out whether a new path for an attack into or out of the network has come into existence.
  • the firewall configuration ingester 176 pulls in the firewall configuration rules to combine i) actually detected network paths through the network with ii) information on what paths are hypothetically possible based on the firewall configuration rules for better attack path modeling.
  • the node exposure and weakness scores are changed to factor in the hypothetical paths with the actual detected paths and then to factor in the composite meta entity analysis (common unified network device identifier plus the common user entity identifier).
  • FIG. 6 illustrates a block diagram of an embodiment of the AI based cyber security appliance 100 that protects a system, including but not limited to a network/domain, from cyber threats.
  • Various Artificial Intelligence models and modules of the cyber security appliance 100 cooperate to protect one or more networks/domains under analysis from cyber threats.
  • the AI-based cyber security appliance 100 may include a trigger module 105, a gather module 110, an analyzer module 115, a cyber threat analyst module 120, an assessment module 125, a formatting module 130, a data store 135, an autonomous response module 140, a first (1st) domain module 145, a second (2nd) domain module 150, a coordinator module 155, one or more AI models 160 (hereinafter, "AI model(s)"), and/or other modules.
  • the AI model(s) 160 may be trained with machine learning on a normal pattern of life for entities in the network(s)/domain(s) under analysis, with machine learning on cyber threat hypotheses to form and investigate a cyber threat hypothesis on what are a possible set of cyber threats and their characteristics, symptoms, remediations, etc., and/or trained on possible cyber threats including their characteristics and symptoms.
  • the cyber security appliance 100 with the Artificial Intelligence (AI) based cyber security system may protect a network/domain from a cyber threat.
  • the cyber security appliance 100 can protect all of the devices (e.g., computing devices on the network(s)/domain(s) being monitored by monitoring domain activity including communications).
  • a network domain module e.g., first domain module 145
  • the gather module 110 may be configured with one or more process identifier classifiers. Each process identifier classifier may be configured to identify and track one or more processes and/or devices in the network, under analysis, making communication connections.
  • the data store 135 cooperates with the process identifier classifier to collect and maintain historical data of processes and their connections, which is updated over time as the network is in operation. An individual process may be present in only one of the domains being monitored, or in several.
  • the process identifier classifier can identify each process running on a given device along with its endpoint connections, which are stored in the data store 135 .
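The bookkeeping that the process identifier classifier and data store perform together might look like the following minimal sketch. The class and method names are hypothetical; the patent does not specify a data structure.

```python
# Hypothetical sketch: track, per device, each process and the endpoint
# connections it has made, so the data store accumulates history over time
# as the network operates. Names and structure are illustrative only.
from collections import defaultdict

class ProcessConnectionStore:
    def __init__(self):
        # device -> process -> set of (endpoint, port) connections seen
        self._history = defaultdict(lambda: defaultdict(set))

    def record(self, device, process, endpoint, port):
        """Store one observed connection made by a process on a device."""
        self._history[device][process].add((endpoint, port))

    def connections(self, device, process):
        """Historical endpoint connections for one process on one device."""
        return sorted(self._history[device][process])

    def is_new_connection(self, device, process, endpoint, port):
        """True if this process has never connected to this endpoint before,
        which is the kind of deviation the analyzer modules look for."""
        return (endpoint, port) not in self._history[device][process]
```

A never-before-seen endpoint for a known process is exactly the sort of low-level indicator the later bullets describe chaining together.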
  • a feature classifier can examine the data being analyzed and sort its features into different categories.
  • the analyzer module 115 can cooperate with the AI model(s) 160 or other modules in the cyber security appliance 100 to confirm a presence of a cyberattack against one or more domains in an enterprise's system (e.g., see system/enterprise network 50 of FIG. 11 ).
  • a process identifier in the analyzer module 115 can cooperate with the gather module 110 to collect any additional data and metrics to support a possible cyber threat hypothesis.
  • the cyber threat analyst module 120 can cooperate with the internal data sources as well as external data sources to collect data in its investigation.
  • the cyber threat analyst module 120 can cooperate with the other modules and the AI model(s) 160 in the cyber security appliance 100 to conduct a long-term investigation and/or a more in-depth investigation of potential and emerging cyber threats directed to one or more domains in an enterprise's system.
  • the cyber threat analyst module 120 and/or the analyzer module 115 can also monitor for other anomalies, such as model breaches, including, for example, deviations from the normal behavior of an entity, and other techniques discussed herein.
  • the analyzer module 115 and/or the cyber threat analyst module 120 can cooperate with the AI model(s) 160 trained on potential cyber threats in order to assist in examining and factoring these additional data points that have occurred over a given timeframe to see if a correlation exists between 1) a series of two or more anomalies occurring within that time frame and 2) possible known and unknown cyber threats.
  • the cyber threat analyst module 120 allows two levels of investigations of a cyber threat that may suggest a potential impending cyberattack. In a first level of investigation, the analyzer module 115 and AI model(s) 160 can rapidly detect and then the autonomous response module 140 will autonomously respond to overt and obvious cyberattacks.
  • the cyber threat analyst module 120 also conducts a second level of investigation over time with the assistance of the AI model(s) 160 trained with machine learning on how to form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis that can detect these advanced persistent cyber threats actively trying to avoid detection by looking at one or more of these low-level anomalies as a part of a chain of linked information.
  • a data analysis process can be algorithms/scripts written by humans to perform their function discussed herein; and can in various cases use AI classifiers as part of their operation.
  • the cyber threat analyst module 120, in conjunction with the AI model(s) 160 trained with machine learning on how to form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis, forms and investigates hypotheses on what the possible set of cyber threats is.
  • the cyber threat analyst module 120 can also cooperate with the analyzer module 115 with its one or more data analysis processes to conduct an investigation on a possible set of cyber threats hypotheses that would include an anomaly of at least one of i) the abnormal behavior, ii) the suspicious activity, and iii) any combination of both, identified through cooperation with, for example, the AI model(s) 160 trained with machine learning on the normal pattern of life of entities in the system.
  • the cyber threat analyst module 120 may perform several additional rounds 400 of gathering additional information, including abnormal behavior, over a period of time, in this example, examining data over a 7-day period to determine causal links between the information.
  • the cyber threat analyst module 120 may check and recheck various combinations/a chain of potentially related information, including abnormal behavior of a device/user account under analysis, until each of the one or more hypotheses on potential cyber threats is either 1) refuted, 2) supported, or 3) included in a report that includes details of activities assessed to be relevant to the anomaly of interest to the user and that also conveys that this particular hypothesis was neither supported nor refuted.
  • a human cyber security analyst is needed to further investigate the anomaly (and/or anomalies) of interest included in the chain of potentially related information.
  • an input from the cyber threat analyst module 120 of a supported hypothesis of a potential cyber threat will trigger the analyzer module 115 to compare, confirm, and send a signal to act upon and mitigate that cyber threat.
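The tri-state outcome described in the bullets above (supported, refuted, or neither) can be sketched as a simple investigation loop. This is a minimal illustration under assumed names and thresholds, not the appliance's actual logic.

```python
# Minimal sketch of the support/refute/inconclusive flow: each hypothesis is
# investigated over repeated data-gathering rounds until it is supported,
# refuted, or reported as neither. Thresholds and names are hypothetical.

SUPPORTED, REFUTED, INCONCLUSIVE = "supported", "refuted", "inconclusive"

def investigate(hypothesis, gather_round, max_rounds=7):
    """gather_round(hypothesis, round_no) returns an evidence delta:
    positive values support the hypothesis, negative values refute it."""
    score = 0.0
    for round_no in range(max_rounds):
        score += gather_round(hypothesis, round_no)
        if score >= 1.0:
            return SUPPORTED   # would trigger the analyzer/response modules
        if score <= -1.0:
            return REFUTED
    # Neither supported nor refuted: goes into the report for a human analyst.
    return INCONCLUSIVE
```

A supported result is what would trigger the analyzer module to compare, confirm, and act; an inconclusive one is what gets escalated to a human analyst per the bullets above.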
  • the cyber threat analyst module 120 investigates subtle indicators and/or initially seemingly isolated unusual or suspicious activity, such as a worker logging in after their normal working hours or a simple system misconfiguration having occurred.
  • Most of the investigations conducted by the cyber threat analyst module 120 cooperating with the AI model(s) 160 trained with machine learning on how to form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis on unusual or suspicious activities/behavior may not result in a cyber threat hypothesis that is supported but rather most are refuted or simply not supported.
  • the rounds of data gathering may build chains of linked low-level indicators of unusual activity along with potential activities that could be within a normal pattern of life for that entity to evaluate the whole chain of activities to support or refute each potential cyber threat hypothesis formed. (See again, for example, FIG. 4 and a chain of linked low-level indicators, including abnormal behavior compared to the normal pattern of life for that entity, all under a score of 50 on a threat indicator score).
  • the investigations by the cyber threat analyst module 120 can happen over a relatively long period of time and be far more in depth than the analyzer module 115 which will work with the other modules and AI model(s) 160 to confirm that a cyber threat has in fact been detected.
  • the gather module 110 may further extract data from the data store 135 at the request of the cyber threat analyst module 120 and/or analyzer module 115 on each possible hypothetical threat that would include the abnormal behavior or suspicious activity and then can assist to filter that collection of data down to relevant points of data to either 1) support or 2) refute each particular hypothesis of what the cyber threat, the suspicious activity and/or abnormal behavior relates to.
  • the gather module 110 cooperates with the cyber threat analyst module 120 and/or analyzer module 115 to collect data to support or to refute each of the one or more possible cyber threat hypotheses that could include this abnormal behavior or suspicious activity by cooperating with one or more of the cyber threat hypotheses mechanisms to form and investigate hypotheses on what are a possible set of cyber threats.
  • the cyber threat analyst module 120 is configured to cooperate with the AI model(s) 160 trained with machine learning on how to form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis to form and investigate hypotheses on what are a possible set of cyber threats and then can cooperate with the analyzer module 115 with the one or more data analysis processes to confirm the results of the investigation on the possible set of cyber threats hypotheses that would include the at least one of i) the abnormal behavior, ii) the suspicious activity, and iii) any combination of both, identified through cooperation with the AI model(s) 160 trained with machine learning on the normal pattern of life/normal behavior of entities in the domains under analysis.
  • the gather module 110 and the analyzer module 115 cooperate to supply any data and/or metrics requested by the analyzer module 115 cooperating with the AI model(s) 160 trained on possible cyber threats to support or rebut each possible type of cyber threat.
  • the analyzer module 115 can cooperate with the AI model(s) 160 and/or other modules to rapidly detect and then cooperate with the autonomous response module 140 to autonomously respond to overt and obvious cyberattacks, (including ones found to be supported by the cyber threat analyst module 120 ).
  • the AI-based cyber security appliance 100 can use multiple modules, each capable of identifying abnormal behavior and/or suspicious activity against the AI model(s) 160 of normal pattern of life for the entities in the network/domain under analysis, which is supplied to the analyzer module 115 and/or the cyber threat analyst module 120 .
  • the analyzer module 115 and/or the cyber threat analyst module 120 may also receive other inputs, such as AI model breaches, AI classifier breaches, etc., as well as a trigger to start an investigation from an external source.
  • the cyber threat analyst module 120 cooperating with the AI model(s) 160 trained with machine learning on how to form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis in the AI-based cyber security appliance 100 provides an advantage as it reduces the time taken for human led or cyber security investigations, provides an alternative to manpower for small organizations and improves detection (and remediation) capabilities within the cyber security appliance 100 .
  • the cyber threat analyst module 120 which forms and investigates hypotheses on what are the possible set of cyber threats, can use hypotheses mechanisms including any of 1) one or more of the AI model(s) 160 trained on how human cyber security analysts form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis that would include at least an anomaly of interest, 2) one or more scripts outlining how to conduct an investigation on a possible set of cyber threats hypotheses that would include at least the anomaly of interest, 3) one or more rules-based models on how to conduct an investigation on a possible set of cyber threats hypotheses and how to form a possible set of cyber threats hypotheses that would include at least the anomaly of interest, and 4) any combination of these.
  • the AI model(s) 160 trained on ‘how to form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis’ may use supervised machine learning on human-led cyber threat investigations and their steps, data, metrics, and metadata on how to support or to refute a plurality of the possible cyber threat hypotheses; the scripts and rules-based models will likewise include the steps, data, metrics, and metadata on how to support or to refute the plurality of the possible cyber threat hypotheses.
  • the cyber threat analyst module 120 and/or the analyzer module 115 can feed the cyber threat details to the assessment module 125 to generate a threat risk score that indicates a level of severity of the cyber threat.
  • the assessment module 125 can cooperate with the AI model(s) 160 trained on possible cyber threats to use AI algorithms to identify actual cyber threats and generate threat risk scores based on both the level of confidence that the cyber threat is a viable threat and the severity of the cyber threat (e.g., attack type, where a ransomware attack has greater severity than a phishing attack; degree of infection; computing devices likely to be targeted; etc.).
  • the threat risk scores can be used to rank alerts that may be directed to enterprise or computing device administrators. This risk assessment and ranking is conducted to avoid frequent “false positive” alerts that diminish the degree of reliance/confidence in the cyber security appliance 100.
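The confidence-times-severity scoring and alert ranking described above can be illustrated with a short sketch. The severity weights and cutoff below are invented for illustration; the patent does not disclose actual values.

```python
# Hedged sketch of the scoring idea: a threat risk score combining the
# confidence that a threat is viable with the severity of its attack type,
# then ranking alerts so low-scoring likely false positives are suppressed.
# Severity weights and the min_score cutoff are illustrative assumptions.

SEVERITY = {"ransomware": 1.0, "insider": 0.8, "phishing": 0.4}

def threat_risk_score(confidence, attack_type):
    """confidence in [0, 1]; unknown attack types get a mid-range severity."""
    return confidence * SEVERITY.get(attack_type, 0.6)

def rank_alerts(alerts, min_score=0.3):
    """Drop likely false positives, then order by descending risk score."""
    scored = [
        (threat_risk_score(a["confidence"], a["type"]), a["id"]) for a in alerts
    ]
    return [aid for score, aid in sorted(scored, reverse=True) if score >= min_score]
```

A high-confidence ransomware alert thus outranks an equally confident phishing alert, and a low-confidence alert falls below the reporting cutoff entirely.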
  • an initial training of the AI model trained on cyber threats can occur using unsupervised learning and/or supervised learning on characteristics and attributes of known potential cyber threats including malware, insider threats, and other kinds of cyber threats that can occur within that domain.
  • Each Artificial Intelligence can be programmed and configured with the background information to understand and handle particulars, including different types of data, protocols used, types of devices, user accounts, etc. of the system being protected.
  • the Artificial Intelligence pre-deployment can all be trained on the specific machine learning task that they will perform when put into deployment.
  • AI model(s) 160 trained on identifying a specific cyber threat learns at least both in the pre-deployment training i) the characteristics and attributes of known potential cyber threats as well as ii) a set of characteristics and attributes of each category of potential cyber threats and their weights assigned on how indicative certain characteristics and attributes correlate to potential cyber threats of that category of threats.
  • one of the AI model(s) 160 trained on identifying a specific cyber threat can be trained with machine learning such as Linear Regression, Regression Trees, Non-Linear Regression, Bayesian Linear Regression, Deep learning, etc. to learn and understand the characteristics and attributes in that category of cyber threats. Later, when in deployment in a domain/network being protected by the cyber security appliance 100 , the AI model trained on cyber threats can determine whether a potentially unknown threat has been detected via a number of techniques including an overlap of some of the same characteristics and attributes in that category of cyber threats. The AI model may use unsupervised learning when deployed to better learn newer and updated characteristics of cyberattacks.
  • one or more of the AI models 160 may be a self-learning AI model, trained on a normal pattern of life of entities in the system, that uses unsupervised machine learning algorithms to analyze patterns and ‘learn’ what the ‘normal behavior’ of the network is by analyzing data on the activity at, for example, the network level, the device level, and the employee level.
  • the self-learning AI model using unsupervised machine learning understands the system under analysis' normal patterns of life in, for example, a week of being deployed on that system, and grows more bespoke with every passing minute.
  • the AI unsupervised learning model learns patterns from the features in the day-to-day dataset and detects abnormal data that would not have fallen into the category (cluster) of normal behavior.
  • the self-learning AI model using unsupervised machine learning can simply be placed into an observation mode for an initial week or two when first deployed on a network/domain in order to establish an initial normal behavior for entities in the network/domain under analysis.
  • a deployed AI model trained on a normal pattern of life of entities in the system can be configured to observe the nodes in the system being protected. Training on a normal behavior of entities in the system can occur while monitoring for the first week or two until enough data has been observed to establish a statistically reliable set of normal operations for each node (e.g., user account, device, etc.).
  • Initial training of one or more of the AI models 160 of FIG. 6 trained with machine learning on a behavior of the pattern of life of the entities in the network/domain can occur where each type of network and/or domain will generally have some common typical behavior with each model trained specifically to understand components/devices, protocols, activity level, etc. to that type of network/system/domain.
  • pre-deployment machine learning training of the AI model(s) 160 of FIG. 6 trained on a normal pattern of life of entities in the system can occur.
  • What is normal behavior of each entity within that system can be established either prior to deployment and then adjusted during deployment or alternatively the model can simply be placed into an observation mode for an initial week or two when first deployed on a network/domain in order to establish an initial normal behavior for entities in the network/domain under analysis.
  • AI models 160 can be implemented with various mechanisms such as neural networks, decision trees, etc., and combinations of these.
  • one or more supervised machine learning AI models 160 may be trained to create possible hypotheses and perform cyber threat investigations on agnostic examples of past historical incidents of detecting a multitude of possible types of cyber threat hypotheses previously analyzed by human cyber security analysts. More on how the AI models 160 are trained to create one or more possible hypotheses and perform cyber threat investigations will be discussed later.
  • the self-learning AI models 160 that model the normal behavior (e.g., a normal pattern of life) of entities in the network mathematically characterize what constitutes ‘normal’ behavior, based on the analysis of a large number of different measures of a device's network behavior: packet traffic and network activity/processes, including server access, data volumes, timings of events, credential use, connection type, volume and directionality of, for example, uploads/downloads into the network, file type, packet intention, admin activity, resource and information requests, commands sent, etc.
  • the AI models can use unsupervised machine learning to algorithmically identify significant groupings, a task which is virtually impossible to do manually.
  • the AI models and AI classifiers employ a number of different clustering methods, including matrix-based clustering, density-based clustering, and hierarchical clustering techniques. The resulting clusters can then be used, for example, to inform the modeling of the normative behaviors and/or similar groupings.
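As a concrete (and deliberately tiny) illustration of how unsupervised clustering can surface groupings of similar devices, the sketch below implements a single-linkage, hierarchical-style clustering via union-find: devices whose behavioral feature vectors lie within a distance threshold are joined into one cluster. The feature vectors, threshold, and names are assumptions for illustration; the patent names matrix-based, density-based, and hierarchical techniques without specifying an algorithm.

```python
# Illustrative single-linkage clustering sketch: join devices whose
# (hypothetical) behavioral feature vectors are within `threshold` of each
# other, transitively, then report the resulting groups.
import math

def cluster(devices, features, threshold):
    """devices: list of names; features: dict name -> numeric feature tuple."""
    parent = {d: d for d in devices}

    def find(d):
        # Union-find with path halving.
        while parent[d] != d:
            parent[d] = parent[parent[d]]
            d = parent[d]
        return d

    # Single linkage: merge any pair closer than the threshold.
    for i, a in enumerate(devices):
        for b in devices[i + 1:]:
            if math.dist(features[a], features[b]) <= threshold:
                parent[find(a)] = find(b)

    groups = {}
    for d in devices:
        groups.setdefault(find(d), set()).add(d)
    return sorted(groups.values(), key=lambda g: sorted(g))
```

The resulting clusters could then inform normative-behavior modeling for each grouping, as the bullet above describes.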
  • the AI models and AI classifiers can employ a large-scale computational approach to understand sparse structure in models of network connectivity based on applying L1-regularization techniques (the lasso method). This allows the artificial intelligence to discover true associations between different elements of a network which can be cast as efficiently solvable convex optimization problems and yield parsimonious models.
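The sparsity-inducing effect of L1 regularization can be shown with a generic textbook coordinate-descent lasso. This is not the appliance's solver; it simply demonstrates how the L1 penalty drives weak associations to exactly zero, yielding the parsimonious models mentioned above.

```python
# Minimal coordinate-descent lasso (L1-regularized least squares) in NumPy.
# Objective: (1/2n)||y - Xw||^2 + alpha * ||w||_1. Textbook sketch only.
import numpy as np

def soft_threshold(rho, alpha):
    """The L1 proximal step: shrink toward zero, clipping at zero."""
    return np.sign(rho) * max(abs(rho) - alpha, 0.0)

def lasso(X, y, alpha=0.1, iters=200):
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature
    for _ in range(iters):
        for j in range(p):
            # Partial residual excluding feature j's current contribution.
            residual = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ residual / n
            w[j] = soft_threshold(rho, alpha) / col_sq[j]
    return w
```

On synthetic data where the target depends on only one feature, the irrelevant coefficients are driven to (near) zero rather than merely small values, which is the "parsimonious model" property the bullet describes.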
  • one or more supervised machine learning AI models are trained to create possible hypotheses and to perform cyber threat investigations on agnostic examples of past historical incidents of detecting a multitude of possible types of cyber threat hypotheses previously analyzed by human cyber security analysts.
  • AI models trained on forming and investigating hypotheses on what are a possible set of cyber threats can be trained initially with supervised learning.
  • these AI models can be trained on how to form and investigate hypotheses on what are a possible set of cyber threats and steps to take in supporting or refuting hypotheses.
  • the AI models trained on forming and investigating hypotheses are updated with unsupervised machine learning algorithms when correctly supporting or refuting the hypotheses including what additional collected data proved to be the most useful.
  • the various AI models and AI classifiers combine the use of unsupervised and supervised machine learning to learn ‘on the job’; they do not depend solely upon knowledge of previous cyberattacks.
  • the AI models and classifiers that combine the use of unsupervised and supervised machine learning constantly revise their assumptions about behavior, using probabilistic mathematics, and so are always up to date on what current normal behavior is, not solely reliant on human input.
  • the AI models and classifiers that combine the use of unsupervised and supervised machine learning on cyber security are capable of seeing hitherto undiscovered cyber events, from a variety of threat sources, which would otherwise have gone unnoticed.
  • these cyber threats can include, for example: insider threats, whether malicious or accidental; zero-day attacks using previously unseen, novel exploits; latent vulnerabilities; machine-speed attacks, such as ransomware and other automated attacks that propagate and/or mutate very quickly; Cloud and SaaS-based attacks; and other silent and stealthy attacks, such as advanced persistent threats and advanced spear-phishing.
  • the assessment module 125 and/or cyber threat analyst module 120 of FIG. 6 can cooperate with the AI model(s) 160 trained on possible cyber threats to use AI algorithms to account for ambiguities by distinguishing between the subtly differing levels of evidence that characterize network data. Instead of generating the simple binary outputs ‘malicious’ or ‘benign,’ the AI's mathematical algorithms produce outputs marked with differing degrees of potential threat. This enables users of the system to rank alerts or notifications to the enterprise security administrator in a rigorous manner and prioritize those which most urgently require action. Meanwhile, it also assists to avoid the problem of numerous false positives associated with simply a rule-based approach.
  • the analyzer module 115 and/or cyber threat analyst module 120 can cooperate with the one or more unsupervised AI (machine learning) model 160 trained on the normal pattern of life/normal behavior in order to perform anomaly detection against the actual normal pattern of life for that system to determine whether an anomaly (e.g., the identified abnormal behavior and/or suspicious activity) is malicious or benign.
  • the emerging cyber threat can be previously unknown, but the emerging threat landscape data 170 representative of the emerging cyber threat shares enough (or does not share enough) in common with the traits from the AI models 160 trained on cyber threats to now be identified as malicious or benign. Note, if later confirmed as malicious, then the AI models 160 trained with machine learning on possible cyber threats can update their training.
  • the one or more AI models trained on a normal pattern of life for each of the entities in the system can be updated and trained with unsupervised machine learning algorithms.
  • the analyzer module 115 can use any number of data analysis processes (discussed more in detail below and including the agent analyzer data analysis process here) to help obtain system data points so that this data can be fed and compared to the one or more AI models trained on a normal pattern of life, as well as the one or more machine learning models trained on potential cyber threats, as well as create and store data points with the connection fingerprints.
  • the AI model(s) 160 of FIGS. 1 and 3 can continually learn and train with unsupervised machine learning algorithms on an ongoing basis when deployed in the system that the cyber security appliance 100 is protecting. Thus, they learn and train on what is normal behavior for each user, each device, and the system overall, lowering the threshold of what is an anomaly.
  • Anomaly detection can discover unusual data points in a dataset; ‘anomaly’ can be a synonym for ‘outlier.’ Anomaly detection (or outlier detection) is the identification of rare items, events, or observations which raise suspicions by differing significantly from the majority of the data. Anomalous activities can be linked to some kind of problem or rare event. Since there are numerous ways to induce a particular cyberattack, it is very difficult to have information about all of these attacks beforehand in a dataset; but, because the majority of the user activity and device activity in the system under analysis is normal, the system over time captures almost all of the ways which indicate normal behavior.
  • the self-learning AI model using unsupervised machine learning can predict with high confidence that the given activity is anomalous.
  • the goal of the anomaly detection algorithm through the data fed to it is to learn the patterns of a normal activity so that when an anomalous activity occurs, the modules can flag the anomalies through the inclusion-exclusion principle.
  • the cyber threat module can perform its two-level analysis on anomalous behavior and determine correlations.
  • About 95% of the data in a normal distribution lies within two standard deviations of the mean. Since the likelihood of anomalies in general is very low, the modules cooperating with the AI model of normal behavior can say with high confidence that data points spread near the mean value are non-anomalous. And since the probability distribution values between the mean and two standard deviations are large enough, the modules cooperating with the AI model of normal behavior can set a value in this example range as a threshold (a parameter that can be tuned over time through the self-learning), where feature values with probability larger than this threshold indicate that the given feature's values are non-anomalous, and values below it are anomalous. Note, this anomaly detection can determine that a data point is anomalous/non-anomalous on the basis of a particular feature.
  • the cyber security appliance 100 should not flag a data point as an anomaly based on a single feature.
  • the modules cooperating with the AI model of normal behavior can say with high confidence whether a data point is an anomaly or not.
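A toy version of the thresholding described in the preceding bullets, including the caution against flagging on a single feature, might look like the following. The baseline format, the two-standard-deviation factor, and the two-feature minimum are illustrative assumptions.

```python
# Illustrative sketch: fit a per-feature mean and standard deviation from
# observed "normal" data, treat values within roughly two standard deviations
# as non-anomalous, and flag a data point only when several features deviate,
# never on the basis of a single feature alone.
import statistics

def fit_baseline(samples):
    """samples: list of equal-length feature tuples from normal operation."""
    cols = list(zip(*samples))
    return [(statistics.mean(c), statistics.stdev(c)) for c in cols]

def anomalous_features(point, baseline, k=2.0):
    """Indices of features more than k standard deviations from the mean."""
    return [
        i for i, ((mu, sigma), x) in enumerate(zip(baseline, point))
        if abs(x - mu) > k * sigma
    ]

def is_anomaly(point, baseline, min_features=2):
    """Flag only when multiple features deviate, per the caution above."""
    return len(anomalous_features(point, baseline)) >= min_features
```

The `k` and `min_features` parameters play the role of the tunable threshold mentioned above: the self-learning process could adjust them over time as the model's picture of normal behavior sharpens.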
  • the AI models trained on a normal pattern of life of entities in a system may perform the cyber threat detection through a probabilistic change in a normal behavior through the application of, for example, an unsupervised Bayesian mathematical model to detect the behavioral change in computers and computer networks.
  • the Bayesian probabilistic approach can determine periodicity in multiple time series data and identify changes across single and multiple time series data for the purpose of anomalous behavior detection.
  • See U.S. Pat. No. 10,701,093, granted Jun. 30, 2020, titled “Anomaly alert system for cyber threat detection,” for an example Bayesian probabilistic approach; that patent is incorporated by reference in its entirety.
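The periodicity determination mentioned above can be approximated with a simple autocorrelation sketch. The cited patent describes the actual Bayesian approach; this is only a generic illustration of finding the dominant period in a single time series, e.g. a device that beacons out every N samples.

```python
# Illustrative periodicity detection via autocorrelation: find the lag at
# which a time series best correlates with itself. Generic sketch, not the
# Bayesian method of the cited patent.

def autocorrelation(series, lag):
    n = len(series) - lag
    mean = sum(series) / len(series)
    num = sum((series[i] - mean) * (series[i + lag] - mean) for i in range(n))
    den = sum((x - mean) ** 2 for x in series)
    return num / den if den else 0.0

def dominant_period(series, max_lag=None):
    """Lag in [1, max_lag] with the highest autocorrelation."""
    max_lag = max_lag or len(series) // 2
    return max(range(1, max_lag + 1), key=lambda lag: autocorrelation(series, lag))
```

For a connection-count series that spikes every fourth sample, the dominant period comes out as 4, the kind of regular beaconing behavior that periodicity analysis across multiple time series can surface.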
  • the cyber threat analyst module 120 and the analyzer module 115 can use data analysis processes and cooperate with AI model(s) 160 trained on forming and investigating hypotheses on what are a possible set of cyber threats.
  • another set of AI models can be trained on how to form and investigate hypotheses on what are a possible set of cyber threats and steps to take in supporting or refuting hypotheses.
  • the data analysis processes used by the analyzer module 115 can use unsupervised machine learning to update the initial training learned during pre-deployment, and then update the training with unsupervised learning algorithms during the cyber security appliance's 100 deployment in the system being protected when various different steps to either i) support or ii) refute the possible set of cyber threats hypotheses worked better or worked worse.
  • the AI model(s) 160 trained on a normal pattern of life of entities in a domain under analysis may perform the threat detection through a probabilistic change in a normal behavior through the application of, for example, an unsupervised Bayesian mathematical model to detect a behavioral change in computers and computer networks.
  • a system being protected can include both email and IT network domains under analysis. Thus, email and IT network raw sources of data can be examined along with a large number of derived metrics that each produce time series data for the given metric.
  • the gather module 110 cooperates with the data store 135 .
  • the data store 135 stores comprehensive logs for network traffic observed. These logs can be filtered with complex logical queries and each IP packet can be interrogated on a vast number of metrics in the network information stored in the data store. Similarly, other domain's communications and data, such as emails, logs, etc. may be collected and stored in the data store 135 .
  • the gather module 110 may consist of multiple automatic data gatherers that each look at different aspects of the data depending on the particular hypothesis formed for the analysed event. The data relevant to each type of possible hypothesis can be automatically pulled from additional external and internal sources. Some data is pulled or retrieved by the gather module 110 for each possible hypothesis.
  • the data store 135 can store the metrics and previous threat alerts associated with network traffic for a period of time, which is, by default, at least 27 days. This corpus of data is fully searchable.
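The data store behavior described above, comprehensive logs, arbitrary logical filtering, and a default retention window of at least 27 days, can be sketched as follows. The class, entry format, and method names are hypothetical.

```python
# Hypothetical sketch of the data store: record network-traffic metadata with
# timestamps, support filtered queries over the full corpus, and expire
# entries older than the retention window (27 days by default, per the text).
import time

RETENTION_SECONDS = 27 * 24 * 3600  # default: at least 27 days

class TrafficLogStore:
    def __init__(self, retention=RETENTION_SECONDS):
        self.retention = retention
        self._logs = []  # list of dicts, e.g. {"ts", "src", "dst", "bytes"}

    def record(self, entry, now=None):
        """Store one log entry, stamping it with the current time if needed."""
        entry = dict(entry, ts=entry.get("ts", now or time.time()))
        self._logs.append(entry)

    def expire(self, now=None):
        """Drop entries that have aged out of the retention window."""
        cutoff = (now or time.time()) - self.retention
        self._logs = [e for e in self._logs if e["ts"] >= cutoff]

    def query(self, predicate):
        """Filter the searchable corpus with an arbitrary logical predicate."""
        return [e for e in self._logs if predicate(e)]
```

Passing an arbitrary predicate to `query` stands in for the "complex logical queries" over packet metrics that the bullet describes.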
  • the cyber security appliance 100 works with network probes to monitor network traffic and store and record the data and metadata associated with the network traffic in the data store.
  • the gather module 110 may have a process identifier classifier.
  • the process identifier classifier can identify and track each process and device in the network, under analysis, making communication connections.
  • the data store 135 cooperates with the process identifier classifier to collect and maintain historical data of processes and their connections, which is updated over time as the network is in operation.
  • the process identifier classifier can identify each process running on a given device along with its endpoint connections, which are stored in the data store. Similarly, data from any of the domains under analysis may be collected and compared.
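The bookkeeping side of the process identifier classifier, mapping each process on each device to its historical endpoint connections, can be sketched as follows; the class and method names are illustrative only:

```python
from collections import defaultdict

class ProcessTracker:
    """Sketch of the process identifier classifier's historical data:
    map each (device, process) pair to the endpoints it has connected
    to over time, updated as the network operates."""
    def __init__(self):
        self.connections = defaultdict(set)

    def observe(self, device, process, endpoint):
        self.connections[(device, process)].add(endpoint)

    def endpoints_for(self, device, process):
        return sorted(self.connections.get((device, process), set()))

    def is_new_endpoint(self, device, process, endpoint):
        """An endpoint never seen for this process is a weak anomaly signal."""
        return endpoint not in self.connections[(device, process)]

tracker = ProcessTracker()
tracker.observe("laptop-7", "chrome.exe", "example.com:443")
tracker.observe("laptop-7", "chrome.exe", "mail.example.com:443")
new = tracker.is_new_endpoint("laptop-7", "chrome.exe", "203.0.113.9:8080")
```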
  • Examples of domains/networks under analysis being protected can include any of i) an Information Technology network, ii) an Operational Technology network, iii) a Cloud service, iv) a SaaS service, v) an endpoint device, vi) an email domain, and vii) any combinations of these.
  • a domain module is constructed and coded to interact with and understand a specific domain.
  • the first domain module 145 may operate as an IT network module configured to receive information from and send information to, in this example, IT network-based sensors (i.e., probes, taps, etc.).
  • the first domain module 145 also has algorithms and components configured to understand, in this example, IT network parameters, IT network protocols, IT network activity, and other IT network characteristics of the network under analysis.
  • the second domain module 150 is, in this example, an email module.
  • the second domain module 150 can be an email network module configured to receive information from and send information to, in this example, email-based sensors (i.e., probes, taps, etc.).
  • the second domain module 150 also has algorithms and components configured to understand, in this example, email parameters, email protocols and formats, email activity, and other email characteristics of the network under analysis. Additional domain modules can also collect domain data from another respective domain.
  • the coordinator module 155 is configured to work with various machine learning algorithms and relational mechanisms to i) assess, ii) annotate, and/or iii) position in a vector diagram, a directed graph, a relational database, etc., activity including events occurring, for example, in the first domain compared to activity including events occurring in the second domain.
  • the domain modules can cooperate to exchange and store their information with the data store.
  • the process identifier classifier (not shown) in the gather module 110 can cooperate with additional classifiers in each of the domain modules 145 / 150 to assist in tracking individual processes and associating them with entities in a domain under analysis as well as individual processes and how they relate to each other.
  • the process identifier classifier can cooperate with other trained AI classifiers in the modules to supply useful metadata along with helping to make logical nexuses.
  • a feedback loop of cooperation exists between the gather module 110 , the analyzer module 115 , AI model(s) 160 trained on different aspects of this process, and the cyber threat analyst module 120 to gather information to determine whether a cyber threat is potentially attacking the networks/domains under analysis.
  • the analyzer module 115 and/or cyber threat analyst module 120 can use multiple factors in the determination of whether a process, event, object, entity, etc. is likely malicious.
  • the analyzer module 115 and/or cyber threat analyst module 120 can cooperate with one or more of the AI model(s) 160 trained on certain cyber threats to detect whether the anomalous activity detected, such as suspicious email messages, exhibit traits that may suggest a malicious intent, such as phishing links, scam language, sent from suspicious domains, etc.
  • the analyzer module 115 and/or cyber threat analyst module 120 can also cooperate with one or more of the AI model(s) 160 trained on potential IT based cyber threats to detect whether the anomalous activity detected, such as suspicious IT links, URLs, domains, user activity, etc., may suggest a malicious intent as indicated by the AI models trained on potential IT based cyber threats.
  • the analyzer module 115 and/or the cyber threat analyst module 120 can cooperate with the one or more AI models 160 trained with machine learning on the normal pattern of life for entities in an email domain under analysis to detect, in this example, anomalous emails which are detected as outside of the usual pattern of life for each entity, such as a user, email server, etc., of the email network/domain.
  • the analyzer module 115 and/or the cyber threat analyst module 120 can cooperate with the one or more AI models trained with machine learning on the normal pattern of life for entities in a second domain under analysis (in this example, an IT network) to detect, in this example, anomalous network activity by user and/or devices in the network, which is detected as outside of the usual pattern of life (e.g. abnormal) for each entity, such as a user or a device, of the second domain's network under analysis.
  • the analyzer module 115 and/or the cyber threat analyst module 120 can be configured with one or more data analysis processes to cooperate with the one or more of the AI model(s) 160 trained with machine learning on the normal pattern of life in the system, to identify an anomaly of at least one of i) the abnormal behavior, ii) the suspicious activity, and iii) the combination of both, from one or more entities in the system.
  • other sources such as other model breaches, can also identify at least one of i) the abnormal behavior, ii) the suspicious activity, and iii) the combination of both to trigger the investigation.
  • the analyzer module 115 and/or the cyber threat analyst module 120 can also use AI classifiers that look at the features and determine potential maliciousness based on commonality or overlap with known characteristics of malicious processes/entities. Many factors, including anomalies such as unusual and suspicious behavior as well as other indicators of processes and events, are examined by the one or more AI models 160 trained on potential cyber threats and/or by the AI classifiers looking at specific features for their malicious nature, in order to determine whether an individual factor and/or a chain of anomalies is likely malicious.
  • the rare JA3 hash and/or rare user agent connections for this network coming from a new or unusual process are factored in, just as suspicious wireless signals would be considered in a wireless domain. These are quickly determined by referencing one or more of the AI model(s) 160 trained with machine learning on the pattern of life of each device and its associated processes in the system.
  • the analyzer module 115 and/or the cyber threat analyst module 120 can have an external input to ingest threat intelligence from other devices in the network cooperating with the cyber security appliance 100 .
  • the analyzer module 115 and/or the cyber threat analyst module 120 can look for other anomalies, such as model breaches, while the AI models trained on potential cyber threats can assist in examining and factoring other anomalies that have occurred over a given timeframe to see if a correlation exists between a series of two or more anomalies occurring within that time frame.
  • the analyzer module 115 and/or the cyber threat analyst module 120 can combine these Indicators of Compromise (e.g., unusual network JA3, unusual device JA3, . . . ) with many other weak indicators to detect the earliest signs of an emerging threat, including previously unknown threats, without using strict blacklists or hard-coded thresholds.
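The fusion of many weak indicators without blacklists or hard-coded thresholds can be illustrated with a naive log-odds combination; this formula and the probabilities below are assumptions for exposition, not the patented detection logic:

```python
import math

def combine_weak_indicators(probs):
    """Combine independent weak Indicators of Compromise (each a
    probability of maliciousness) into one score via naive log-odds
    fusion. Illustrative stand-in only; no hard thresholds used."""
    log_odds = sum(math.log(p / (1 - p)) for p in probs)
    return 1 / (1 + math.exp(-log_odds))

# Each indicator alone is weak (barely better than 50/50), e.g.
# an unusual network JA3, an unusual device JA3, a rare user agent...
weak = [0.55, 0.60, 0.58, 0.62, 0.57]
combined = combine_weak_indicators(weak)
single = weak[0]
```

Individually none of these indicators would justify an alert, but their combination can surface the earliest signs of an emerging threat.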
  • the AI classifiers can also routinely look at blacklists, etc. to identify maliciousness of features looked at.
  • Another example of features may include a deeper analysis of endpoint data.
  • This endpoint data may include domain metadata, which can reveal peculiarities such as one or more indicators of a potentially malicious domain (i.e., its URL).
  • the deeper analysis may assist in confirming an analysis to determine that indeed a cyber threat has been detected.
  • the analysis module can also look at factors such as how rare the endpoint connection is, how old the endpoint is, where geographically the endpoint is located, and whether a security certificate associated with a communication is verified only by an endpoint device or by an external 3rd party, just to name a few additional factors.
  • the analyzer module 115 (and similarly the cyber threat analyst module 120 ) can then assign a weighting to these factors via machine learning, which can be supervised based on how strongly each characteristic has been found to match up to actual malicious sites in the training.
  • the agent analyzer data analysis process in the analyzer module 115 and/or cyber threat analyst module 120 may cooperate with the process identifier classifier to identify additional factors such as i) whether one or more processes are running independently of other processes, ii) whether the one or more independently running processes are recent to this network, and iii) whether the one or more independently running processes connect to an endpoint that is a rare connection for this network, which are referenced and compared to one or more AI models trained with machine learning on the normal behavior of the pattern of life of the system.
  • the Hypertext Transfer Protocol (HTTP) identifies the client software originating the request (an example user agent), using a user-agent header, even when the client is not operated by a user. Note, this identification can be faked, so on its own it is only a weak indicator of the software, but when compared to other observed user agents on the device, it can be used to identify possible software processes responsible for requests.
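A sketch of that comparison, matching a request's User-Agent header against the user agents previously observed per process on a device, might look like this; the mapping and agent strings are illustrative assumptions:

```python
def likely_process_for(device_agents, user_agent):
    """Match an HTTP User-Agent header against user agents previously
    observed per process on a device. The header can be faked, so on
    its own this is only a weak indicator of the software."""
    candidates = [proc for proc, agents in device_agents.items()
                  if user_agent in agents]
    return candidates  # empty list => agent never observed on this device

# Hypothetical history of observed agents per process on one device.
device_agents = {
    "chrome.exe": {"Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"},
    "updater.exe": {"WinHTTP/1.0"},
}
known = likely_process_for(device_agents, "WinHTTP/1.0")
unknown = likely_process_for(device_agents, "curl/8.4.0")
```

An unknown agent (`unknown == []`) is exactly the kind of weak signal the agent analyzer data analysis process would combine with other factors rather than alert on directly.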
  • the analyzer module 115 and/or the cyber threat analyst module 120 may use the agent analyzer data analysis process that detects a potentially malicious agent previously unknown to the system to start an investigation on one or more possible cyber threat hypotheses.
  • the determination and output of this step is the set of possible cyber threats that can include or be indicated by the abnormal behavior and/or suspicious activity identified by the agent analyzer data analysis process.
  • the cyber threat analyst module 120 can use the agent analyzer data analysis process and the AI model(s) trained on forming and investigating hypotheses on what are a possible set of cyber threats, using the machine learning and/or set scripts to aid in forming one or more hypotheses and in supporting or refuting each hypothesis.
  • the cyber threat analyst module 120 can cooperate with the AI models trained on forming and investigating hypotheses to form an initial set of possible hypotheses, which needs to be intelligently filtered down.
  • the cyber threat analyst module 120 can be configured to use the one or more supervised machine learning models trained on i) agnostic examples of a past history of detection of a multitude of possible types of cyber threat hypotheses previously analyzed by a human cyber security professional, ii) the behavior and input of how a plurality of human cyber security analysts make a decision and analyze the risk level of, and the probability of, a potential cyber threat, iii) the steps to take to conduct an investigation starting with an anomaly, via learning how expert humans tackle investigations into specific real and synthesized cyber threats and the steps then taken by the human cyber security professional to narrow down and identify a potential cyber threat, and iv) what types of data and metrics were helpful to further support or refute each of the types of cyber threats, in order to determine a likelihood of whether the abnormal behavior and/or suspicious activity is either i) malicious or ii) benign.
  • the cyber threat analyst module 120 using AI models, scripts and/or rules based modules is configured to conduct initial investigations regarding the anomaly of interest, collect additional information to form a chain of potentially related/linked information under analysis, form one or more hypotheses that could explain that chain of information, and then gather additional information in order to refute or support each of the one or more hypotheses.
  • a behavioral pattern analysis for identifying what are the unusual behaviors of the network/system/device/user under analysis by the AI (machine learning) models may be as follows.
  • the coordinator module 155 can tie the alerts, activities, and events from, in this example, the email domain to the alerts, activities, and events from the IT network domain.
  • the cyber threat analyst module 120 and/or analyzer module 115 can cooperate with one or more AI (machine learning) models.
  • the one or more AI (machine learning) models are trained and otherwise configured with mathematical algorithms to infer, for the cyber-threat analysis, ‘what is possibly happening with the chain of distinct alerts, activities, and/or events, which came from the unusual pattern,’ and then assign a threat risk associated with that distinct item of the chain of alerts and/or events forming the unusual pattern.
  • the unusual pattern can be determined by initially examining what activities/events/alerts do not fall within the window of the normal pattern of life for the network/system/device/user under analysis; those activities can then be analyzed to determine whether they are unusual or suspicious.
  • a chain of related activity that can include both unusual activity and activity within a pattern of normal life for that entity can be formed and checked against individual cyber threat hypothesis to determine whether that pattern is indicative of a behavior of a malicious actor—human, program, or other threat.
  • the cyber threat analyst module 120 can go back and pull in some of the normal activities to help support or refute a possible hypothesis of whether that pattern is indicative of a behavior of a malicious actor.
  • for example, the cyber threat analyst module 120 detects a chain of anomalous behavior: three unusual data transfers, and three email messages with unusual characteristics in the monitored system that seem to have some causal link to the unusual data transfers. Likewise, unusual credentials twice attempted the unusual behavior of trying to gain access to sensitive areas or malicious IP addresses, and the user associated with those unusual credentials has a causal link to at least one of the three email messages with unusual characteristics. Again, the cyber security appliance 100 can go back and pull in some of the normal activities to help support or refute a possible hypothesis of whether that pattern is indicative of a behavior of a malicious actor.
  • the cyber threat analyst module 120 can put data and entities into 1) a directed graph, in which nodes that are overlapping or close in distance have a good possibility of being related in some manner, 2) a vector diagram, 3) a relational database, and 4) other relational techniques that will at least be examined to assist in creating the chain of related activity connected by causal links, such as similar time, similar entity and/or type of entity involved, similar activity, etc., under analysis. If the pattern of behaviors under analysis is believed to be indicative of a malicious actor, then a score of how confident the system is in this assessment of whether the unusual pattern was caused by a malicious actor is created.
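The chaining of alerts/events into a directed graph via causal links can be sketched as below. The linking heuristic (shared entity plus temporal proximity), the field names, and the 300-second window are illustrative assumptions:

```python
from itertools import combinations

def build_activity_graph(events, max_gap=300):
    """Sketch of chaining related activity: draw a directed edge when
    two events share an entity and occur close together in time, two
    of the 'causal link' criteria named in the description."""
    edges = []
    for a, b in combinations(sorted(events, key=lambda e: e["t"]), 2):
        if a["entity"] == b["entity"] and 0 < b["t"] - a["t"] <= max_gap:
            edges.append((a["id"], b["id"]))  # earlier -> later
    return edges

# Hypothetical anomalies observed over a few minutes.
events = [
    {"id": "email_phish", "entity": "user1", "t": 0},
    {"id": "odd_login",   "entity": "user1", "t": 120},
    {"id": "data_xfer",   "entity": "user1", "t": 260},
    {"id": "unrelated",   "entity": "user9", "t": 130},
]
chain = build_activity_graph(events)
```

Nodes connected in the resulting graph form the chain of related activity that is then checked against individual cyber threat hypotheses.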
  • the cyber security appliance 100 is configurable in a user interface, by a user, to enable what type of automatic response actions, if any, the cyber security appliance 100 may take when different types of cyber threats, indicated by the pattern of behaviors under analysis, are equal to or above a configurable level of threat posed by this malicious actor.
  • the autonomous response module 140 is configured to take one or more autonomous mitigation actions to mitigate the cyber threat during the cyberattack by the cyber threat.
  • the autonomous response module 140 can reference an AI model trained to track a normal pattern of life for each node of the protected system to perform an autonomous act of, for example, restricting a potentially compromised node, having i) an actual indication of compromise and/or ii) mere adjacency to a known compromised node, to take merely the actions that are within that node's normal pattern of life, to mitigate the cyber threat.
  • the cyber-threat module may reference the one or more machine learning models trained on, in this example, e-mail threats to identify similar characteristics from the individual alerts and/or events forming the distinct item made up of the chain of alerts and/or events forming the unusual pattern.
  • the analyzer module 115 and/or cyber threat analyst module 120 generates one or more supported possible cyber threat hypotheses from the possible set of cyber threat hypotheses.
  • the analyzer module 115 generates the supporting data and details of why each individual hypothesis is supported or not.
  • the analyzer module 115 can also generate one or more possible cyber threat hypotheses and the supporting data and details of why they were refuted.
  • the analyzer module 115 cooperates with the following three sources.
  • the analyzer module 115 cooperates with the one or more of the AI model(s) 160 trained on cyber threats to determine whether an anomaly such as the abnormal behavior and/or suspicious activity is either 1) malicious or 2) benign when the potential cyber threat under analysis is previously unknown to the cyber security appliance 100 .
  • the analyzer module 115 cooperates with one or more of the AI model(s) 160 trained on a normal pattern of life of entities in the network under analysis.
  • the analyzer module 115 cooperates with various AI-trained classifiers.
  • the analyzer module can make a final determination to confirm that a cyber threat likely exists and send that cyber threat to the assessment module to assess the threat score associated with that cyber threat. Certain model breaches will always trigger a potential cyber threat, which the analyzer will compare against and confirm as a cyber threat.
  • the assessment module 125 with the AI classifiers is configured to cooperate with the analyzer module 115 .
  • the analyzer module 115 supplies the identity of the supported possible cyber threat hypotheses from the possible set of cyber threat hypotheses to the assessment module 125 .
  • the assessment module 125 with the AI classifiers cooperates with the one or more of the AI model(s) 160 trained on possible cyber threats to make a determination on whether a cyber threat exists and what level of severity is associated with that cyber threat.
  • the assessment module 125 with the AI classifiers cooperates with one or more of the AI model(s) 160 trained on possible cyber threats in order to assign a numerical assessment of a given cyber threat hypothesis that was found likely to be supported by the analyzer module 115 with the one or more data analysis processes, via the abnormal behavior, the suspicious activity, or the collection of system data points.
  • the assessment module 125 with the AI classifiers output can be a score (ranked number system, probability, etc.) that a given identified process is likely a malicious process.
  • the assessment module 125 with the AI classifiers can be configured to assign a numerical assessment, such as a probability, of a given cyber threat hypothesis that is supported and a threat level posed by that cyber threat hypothesis which was found likely to be supported by the analyzer module 115 , which includes the abnormal behavior or suspicious activity as well as one or more of the collection of system data points, with the one or more AI models trained on possible cyber threats.
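The numerical assessment step can be illustrated with a toy scoring function. The weights and the formula below are made-up examples for exposition, not trained classifier outputs:

```python
def threat_score(hypothesis_support, severity_weights, indicators):
    """Illustrative numerical assessment: the probability that the
    hypothesis is supported, scaled by the severity of the indicators
    observed, mapped to a 0-100 ranked score."""
    severity = sum(severity_weights[i] for i in indicators)
    severity = min(severity, 1.0)  # cap the combined severity
    return round(hypothesis_support * severity * 100)

# Hypothetical per-indicator severity weights.
weights = {"phishing_link": 0.4, "rare_domain": 0.3, "credential_use": 0.5}
score = threat_score(0.9, weights, ["phishing_link", "credential_use"])
```

The output plays the role of the score (ranked number system, probability, etc.) that a given identified process is likely malicious.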
  • the cyber threat analyst module 120 in the AI-based cyber security appliance 100 component provides an advantage over competitors' products as it reduces the time taken for cyber security investigations, provides an alternative to manpower for small organizations and improves detection (and remediation) capabilities within the cyber security appliance 100 .
  • the AI-based, cyber threat analyst module 120 performs its own computation of threat and identifies interesting network events with one or more processors. These methods of detection and identification of threats all add to the above capabilities that make the cyber threat analyst module 120 a desirable part of the cyber security appliance 100 .
  • the cyber threat analyst module 120 offers a method of prioritizing that does not simply treat the highest-scoring alert of an event evaluated by itself as the worst threat, and thereby prevents more complex attacks from being missed because their composite parts/individual threats only produced low-level alerts.
  • the AI classifiers can be part of the assessment module 125, which scores the outputs of the analyzer module 115. Again, as for the other AI classifiers discussed, the AI classifier can be coded to take in multiple pieces of information about an entity, object, and/or thing and, based on its training, output a prediction about the entity, object, or thing. Given one or more inputs, the AI classifier model will try to predict the value of one or more outcomes.
  • the AI classifiers cooperate with the range of data analysis processes that produce features for the AI classifiers. The various techniques cooperating here allow anomaly detection and assessment of a cyber threat level posed by a given anomaly; but more importantly, an overall cyber threat level posed by a series/chain of correlated anomalies under analysis.
  • the formatting module 130 can generate an output such as a printed or electronic report with the relevant data.
  • the formatting module 130 can cooperate with both the analyzer module 115 and the assessment module 125 depending on what the user wants to be reported.
  • the formatting module 130 is configured to format, present a rank for, and output one or more supported possible cyber threat hypotheses from the assessment module into a formalized report, from one or more report templates populated with the data for that incident.
  • the formatting module 130 is configured to format, present a rank for, and output one or more detected cyber threats from the analyzer module or from the assessment module into a formalized report, from one or more report templates populated with the data for that incident.
  • the formalized report on the template is outputted for a human user's consumption in a medium of any of 1) printable report, 2) presented digitally on a user interface, 3) in a machine-readable format for further use in machine-learning reinforcement and refinement, or 4) any combination of the three.
  • the formatting module 130 is further configured to generate a textual write-up of an incident report in the formalized report for a wide range of breaches of normal behavior (as used by the AI models trained with machine learning on the normal behavior of the system), based on analyzing previous reports with one or more models trained with machine learning on assessing and populating the relevant data into the incident report corresponding to each possible cyber threat.
  • the formatting module 130 can generate a threat incident report in the formalized report from a multitude of dynamic human-supplied and/or machine-created templates, each template corresponding to a different type of cyber threat and varying in format, style, and standard fields across the multitude of templates.
  • the formatting module 130 can populate a given template with relevant data, graphs, or other information as appropriate in various specified fields, along with a ranking of the likelihood that each hypothesized cyber threat is supported and its threat severity level for each of the supported cyber threat hypotheses, and then output the formatted threat incident report with the ranking of each supported cyber threat hypothesis, which is presented digitally on the user interface and/or printed as the printable report.
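Populating per-threat-type templates with incident data can be sketched as below; the templates, field names, and values are illustrative assumptions, not the appliance's actual report formats:

```python
REPORT_TEMPLATES = {
    # Hypothetical templates; real templates vary in format, style,
    # and standard fields per type of cyber threat.
    "phishing": ("Threat: phishing | Severity: {severity} | "
                 "Likelihood: {likelihood}% | User: {user}"),
    "data_exfiltration": ("Threat: data exfiltration | Severity: {severity} | "
                          "Likelihood: {likelihood}% | Device: {device}"),
}

def format_incident(threat_type, **fields):
    """Populate the template matching the cyber threat type with the
    incident's relevant data (sketch of the formatting module's role)."""
    return REPORT_TEMPLATES[threat_type].format(**fields)

report = format_incident("phishing", severity="high", likelihood=87, user="user1")
```

The resulting string stands in for the formalized report that is printed, displayed on a user interface, or emitted in a machine-readable format.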
  • the assessment module 125 with the AI classifiers, once armed with the knowledge that malicious activity is likely occurring/is associated with a given process from the analyzer module 115, then cooperates with the autonomous response module 140 to take an autonomous action such as i) deny access in or out of the device or the network, ii) shutdown activities involving a detected malicious agent, iii) restrict devices and/or users to merely operate within their particular normal pattern of life, iv) remove some user privileges/permissions associated with the compromised user account, etc.
  • the autonomous response module 140 can be configured to cause one or more rapid autonomous actions to be taken in response to counter the cyber threat.
  • a user interface for the response module can program the autonomous response module 140 i) to merely make a suggested response to take to counter the cyber threat that will be presented on a display screen and/or sent by a notice to an enterprise security administrator for explicit authorization when the cyber threat is detected or ii) to autonomously take a response to counter the cyber threat without a need for a human to approve the response when the cyber threat is detected.
  • the autonomous response module 140 will then send a notice of the autonomous response as well as display the autonomous response taken on the display screen.
  • Example autonomous responses may include cutting off connections, shutting down devices, changing the privileges of users, deleting and removing malicious links in emails, slowing down a transfer rate, cooperating with other security devices such as a firewall to trigger its autonomous actions, and other autonomous actions against the devices and/or users.
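The two response modes (suggest for human approval versus act autonomously) and an action catalog can be sketched together; the threat types and action strings here are illustrative assumptions:

```python
def respond(threat, autonomous=False):
    """Sketch of the autonomous response module's two configured modes:
    merely suggest a response for explicit human authorization, or take
    the response without waiting for approval."""
    actions = {
        # Hypothetical mapping of threat type to counter-action.
        "malicious_email": "delete and remove malicious links in email",
        "compromised_device": "cut off connections / shut down device",
        "compromised_account": "remove some user privileges",
    }
    action = actions.get(threat["type"],
                         "restrict node to its normal pattern of life")
    status = "taken" if autonomous else "suggested"
    return {"status": status, "action": action}

suggestion = respond({"type": "malicious_email"})              # human approves
taken = respond({"type": "compromised_device"}, autonomous=True)  # no approval needed
```

In either mode, a notice of the action would then be sent and displayed, as the description states.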
  • the autonomous response module 140 uses one or more of the AI model(s) 160 that are configured to intelligently work with other third-party defense systems in that customer's network against threats.
  • the autonomous response module 140 can send its own protocol commands to devices and/or take actions on its own.
  • the autonomous response module 140 uses the one or more of the AI model(s) 160 to orchestrate with other third-party defense systems to create a unified defense response against a detected threat within or external to that customer's network.
  • the autonomous response module 140 can be an autonomous self-learning digital response coordinator that is trained specifically to control and reconfigure the actions of traditional legacy computer defenses (e.g., firewalls, switches, proxy servers, etc.) to contain threats propagated by, or enabled by, networks and the internet.
  • the cyber threat analyst module 120 and/or assessment module 125 can cooperate with the autonomous response module 140 to cause one or more autonomous actions to be taken in response to counter the cyber threat, which improves computing devices in the system by limiting an impact of the cyber threat from consuming unauthorized CPU cycles, memory space, and power consumption in the computing devices via responding to the cyber threat without waiting for some human intervention.
  • the trigger module 105, analyzer module 115, assessment module 125, the cyber threat analyst module 120, and formatting module 130 cooperate to improve the analysis and formalized report generation with less repetition, consuming CPU cycles with greater efficiency than humans repetitively going through these steps and re-duplicating steps to filter and rank the one or more supported possible cyber threat hypotheses from the possible set of cyber threat hypotheses.
  • the cyber security appliance 100 and its modules use Artificial Intelligence algorithms configured and trained to perform a first machine-learned task of detecting the cyber threat, while the autonomous response module 140 can use a combination of user-configurable settings on actions to take to mitigate a detected cyber threat, a default set of actions to take to mitigate a detected cyber threat, and Artificial Intelligence algorithms configured and trained to perform a second machine-learned task of taking one or more mitigation actions to mitigate the cyber threat.
  • a cyber security restoration engine 190 deployed in the cyber security appliance 100 uses Artificial Intelligence algorithms configured and trained to perform a third machine-learned task of remediating the system/network being protected back to a trusted operational state.
  • the prediction engine 702 conducts Artificial Intelligence-based simulations by constructing a graph of nodes of the system being protected (e.g., a network including (a) the physical devices connecting to the network, any virtualized instances of the network, user accounts in the network, email accounts in the network, etc. as well as (b) connections and pathways through the network) to create a virtualized instance of the network to be tested.
  • the various cooperating modules residing in the prediction engine 702 may include, but are not limited to, a collections module 705, a cyberattack generator (e.g., a phishing email generator 710), an email module 715, a network module 720, an analyzer module 725, a payloads module 730 with first and second payloads, a communication module 735, a training module 740, a simulated attack path module 750, a cleanup module 744, a scenario module 760, a user interface 765, a reporting module 770, a formatting module 775, an orchestration module 780, and/or an AI classifier 785 with a list of specified classifiers.
  • the simulated attack path module 750 in the prediction engine 702 may be implemented via i) a simulator to model the system being protected and/or ii) a clone creator to spin up a virtual network and create a virtual clone of the system being protected configured to pen-test one or more defenses provided by the cyber security appliance 100 .
  • the prediction engine 702 may include and cooperate with one or more AI models 787 trained with machine learning on the contextual knowledge of the organization, such as those in the cyber security appliance 100, or may have its own separate model trained with machine learning on the contextual knowledge of the organization and each user's and device's normal pattern of behavior.
  • These trained AI models 787 may be configured to identify data points from the contextual knowledge of the organization and its entities, which may include, but are not limited to, language-based data, email/network connectivity and behavior pattern data, and/or historic knowledgebase data.
  • the prediction engine 702 may use the trained AI models 787 to cooperate with one or more AI classifier(s) 785 by producing a list of specific organization-based classifiers for the AI classifier(s) 785 .
  • the simulated attack path module 750, by cooperating with the other modules in the prediction engine 702, is further configured to calculate and run one or more hypothetical simulations of a possible cyberattack and/or of an actual ongoing cyberattack from a cyber threat through an attack pathway through the system being protected.
  • the prediction engine 702 is further configured to calculate, based at least in part on the results of the one or more hypothetical simulations of a possible cyberattack and/or of an actual ongoing cyberattack from a cyber threat through an attack pathway through the system being protected, a risk score for each node (e.g., each device, user account, etc.), the threat risk score being indicative of a possible severity of the compromise and/or chance of compromise before an autonomous response action is taken in response to an actual cyberattack of the cyber incident.
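A minimal sketch of such a per-node risk score, assuming the simulations yield a chance of compromise and a severity for each node; the blending weight and 0-100 scale are illustrative assumptions, not the actual formula used by the prediction engine:

```python
def risk_score(chance_of_compromise, severity, weight=0.5):
    """Illustrative risk score in [0, 100]: blends the simulated probability
    that a node is compromised with the severity of that compromise.
    The blending weight is an assumption for illustration."""
    return 100.0 * (weight * chance_of_compromise + (1 - weight) * severity)

# Hypothetical simulation results per node: (chance of compromise, severity),
# each on a 0..1 scale.
simulated = {
    "file-server": (0.8, 0.9),
    "laptop-1":    (0.4, 0.2),
}
scores = {node: risk_score(p, s) for node, (p, s) in simulated.items()}
```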
  • the simulated attack path module 750 is configured to initially create the network being protected in a simulated or virtual device environment. Additionally, the orchestration module 780 and communications module 735 may be configured to cooperate with the cyber security appliance 100 to securely obtain specific data about specific users, devices, and entities in specific networks for this specific organization. The training module 740 and simulated attack path module 750 in the prediction engine 702 use the obtained specific data to generate one or more specific cyberattacks, such as a phishing email, tailored to those specific users, devices, and/or entities of the specific organization. Many different cyberattacks can be simulated by the AI red team module, but a phishing email attack will be used as an example cyberattack.
  • the prediction engine 702 is communicatively coupled to the cyber security appliance 100 , an open source (OS) database server 790 , an email system 791 with one or more endpoint computing devices 791 A-B, and a network system 792 with one or more entities 793 - 799 , and a restoration engine 745 over one or more networks 746 / 747 .
  • the cyber security appliance 100 may cooperate with the prediction engine 702 to initiate a pen-test in the form of, for example, a software attack, which generates a customized phishing email, for example, to spoof one or more specific users/devices/entities of an organization in an email/network defense system and then looks for any security vulnerabilities, risks, threats, and/or weaknesses that could potentially allow access to one or more features and data of that specific user/device/entity.
  • the prediction engine 702 may be customized and/or driven by a centralized AI using and/or modelling a smart awareness of a variety of specific historical email/network behavior patterns and communications of a specific organization's hierarchy within a specific organization.
  • AI modelling may be trained and derived through machine learning and the understanding of the organization itself based on: (i) a variety of OS materials such as any OS materials collected from the OS database server 790 and (ii) its historical awareness of any specific email/network connectivity and behavior patterns to target for that organization as part of an offensive (or attacking) security approach.
  • the training module 740 can contain for reference a database of cyberattack scenarios as well as restoration response scenarios by the restoration engine 745 stored in the database.
  • the prediction engine 702 may use the orchestration module 780 to implement and orchestrate this offensive approach all the way from an initial social engineering attack at an earlier stage of the pentest to a subsequent payload delivery attack at a later stage of the pentest and so on.
  • the prediction engine 702 is configured to: (i) intelligently initiate a customized cyberattack on the components, for example, in the IT network and email system 791; (ii) subsequently generate a report to highlight and/or raise awareness of one or more key areas of vulnerabilities and/or risks for that organization after observing the intelligently initiated attack (e.g., such key areas may be formatted and reported in a way tailored for that organization using both the formatting and reporting modules, as described below); (iii) then allow that enterprise (e.g., organization) to be trained on that attack and its impact on those specific security postures, thereby allowing that organization to go in directly to mitigate and improve those compromised security postures going forward; as well as (iv) during an actual cyberattack, obtain and ingest data
  • the prediction engine 702 may cooperate with the cyber security appliance 100 to provide feedback on any successful attacks and detections.
  • the prediction engine 702 may be configured to at least provide the cyber security appliance 100 (and/or any other predetermined entities) with any feedback on the successful pentest as well as any specifics regarding the processes used for that successful pentest, such as providing feedback on the specific attack vectors, scenarios, targeted entities, characteristics of the customized phishing emails, payloads, and contextual data, etc., that were used.
  • the simulated attack path module 750 in the prediction engine 702 may be configured with an attack path modeling component (not shown), which is programmed to work out the key paths and devices in a network by running simulated cyberattacks on a simulated or virtual device version of the network under analysis, incorporating metrics derived from the particulars known about the specific network being protected by the cyber security appliance 100 that feed into that modeling.
  • the attack modeling has been programmed with the knowledge of a layout and connection pattern of each particular network device in a network and a number of connections and/or hops to other network devices in the network.
  • the attack path modeling component ingests the information for the purposes of modeling and simulating a potential attack against the network and routes that an attacker would take through the network.
  • the attack path modeling component can be constructed with information to i) understand an importance of network nodes in the network compared to other network nodes in the network, and ii) to determine key pathways within the network and vulnerable network nodes in the network that a cyberattack would use during the cyberattack, via modeling the cyberattack on at least one of 1) a simulated device version and 2) a virtual device version of the network under analysis.
  • FIG. 8 illustrates a diagram of an embodiment of the cyber threat prediction engine and its Artificial Intelligence-based simulations constructing an example graph of nodes in an example network and simulating how the cyberattack might likely progress in the future, tailored with an innate understanding of the normal behavior of the nodes in the system being protected and the current operational state of each node in the graph of the protected system during simulations of cyberattacks.
  • the prediction engine 702 plots the attack path through the nodes of the network and estimated times to reach critical nodes in the network.
  • the cyberattack simulation modeling of the prediction engine runs the simulations to identify the routes, difficulty, and time periods from certain entry nodes to certain key servers.
  • the simulations of the cyberattack by the cyber threat from one compromised entry point, such as Device n, to a key server can take 10 days, 100 days, etc., depending on the normal behavior of Device n, the security settings of the network devices at the different nodes of the network, etc.
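The route and time estimation described above can be sketched as a shortest-path search over pathways weighted by an estimated time to pivot between nodes; the edge weights and node names below are hypothetical stand-ins for the metrics derived from normal behavior and security settings:

```python
import heapq

def estimated_attack_time(edges, entry, target):
    """Dijkstra's algorithm over edges weighted by an estimated time (in
    days) for an attacker to pivot between nodes. Returns the fastest
    total time from an entry node to a target, or None if no pathway."""
    graph = {}
    for a, b, days in edges:
        graph.setdefault(a, []).append((b, days))
    queue, seen = [(0, entry)], set()
    while queue:
        elapsed, node = heapq.heappop(queue)
        if node == target:
            return elapsed
        if node in seen:
            continue
        seen.add(node)
        for nxt, days in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (elapsed + days, nxt))
    return None  # no pathway from entry to target

# Hypothetical pathways: (from, to, estimated days to compromise).
edges = [
    ("device-n", "workstation", 3),
    ("workstation", "key-server", 7),
    ("device-n", "key-server", 100),   # hardened direct route
]
```

Here the simulation would report the 10-day pivot via the workstation rather than the 100-day direct route.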
  • the attack path modeling component in the simulated attack path module 750, cooperating with the other modules in the prediction engine 702, is configured to determine the key pathways within the network and the vulnerable network nodes in the network that the cyberattack would use during the cyberattack, via the modeling of the cyberattack on at least one of 1) the simulated device version and 2) the virtual device version of the network under analysis, using the actual detected vulnerabilities of each network node, a predicted frequency of remediation of those vulnerabilities within a specific network device in the network without a notice from the restoration engine 745, and an importance of the key network nodes with the actual vulnerabilities compared to other network nodes in the network.
  • the modules essentially seed the attack path modeling component with weakness scores that provide current data, customized to each user account and/or network device, which then allows the artificial intelligence running the attack path simulation to choose entry network nodes into the network with more accuracy as well as plot the attack path through the nodes and estimated times to reach critical nodes in the network much more accurately based on the actual current operational condition of the many user accounts and network devices in the network.
  • the attack simulation modeling can be run to identify the routes, difficulty, and time periods from certain entry nodes to certain key servers.
  • the cyber threat analyst module 120 in the cyber security appliance 100 of FIG. 6 as well as the prediction engine 702 of FIG. 7 may use any unusual, detected behavior deviating from the normal behavior and then build a sequence/chain of unusual behavior and the causal links between the sequence/chain of unusual behavior to detect any potential cyber threats. For example, as shown in FIG.
  • the cyber security appliance 100 and the prediction engine 702 may determine the unusual patterns by analyzing (i) what activities/events/alerts fall outside of the window of what is the normal pattern of life for that network/system/entity/device/user under analysis; and (ii) then pulling in and analyzing the pattern of the behavior of the activities/events/alerts that are in the normal pattern of life but also connect to the indicators for a possible cyberattack, to determine whether that pattern is indicative of a behavior of a malicious actor, such as a human, program, and/or any other cyber harmful threat.
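One way such a sequence/chain of unusual behavior might be assembled is by linking anomalous events that share an entity and occur close together in time; this linking heuristic and the event fields below are assumptions for illustration only:

```python
from datetime import datetime, timedelta

def chain_unusual_events(events, max_gap=timedelta(hours=1)):
    """Group unusual events into causal chains: an event joins an existing
    chain when it shares an entity with the chain's latest event and
    occurs within max_gap of it; otherwise it starts a new chain."""
    events = sorted(events, key=lambda e: e["time"])
    chains = []
    for event in events:
        for chain in chains:
            last = chain[-1]
            shares_entity = bool(event["entities"] & last["entities"])
            close_in_time = event["time"] - last["time"] <= max_gap
            if shares_entity and close_in_time:
                chain.append(event)
                break
        else:
            chains.append([event])
    return chains

t0 = datetime(2024, 1, 1, 9, 0)
events = [
    {"time": t0, "entities": {"laptop-1"}, "alert": "unusual login"},
    {"time": t0 + timedelta(minutes=10),
     "entities": {"laptop-1", "file-server"}, "alert": "unusual SMB transfer"},
    {"time": t0 + timedelta(hours=5),
     "entities": {"printer"}, "alert": "odd firmware update"},
]
chains = chain_unusual_events(events)
```

The first two alerts link into one chain via the shared laptop-1 entity, while the unrelated printer alert starts its own chain.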
  • the prediction engine 702 and its Artificial Intelligence-based simulations use artificial intelligence to cooperate with the restoration engine 745 to assist in choosing one or more remediation actions to restore nodes affected by the cyberattack back to a trusted operational state while still mitigating the cyber threat during an ongoing cyberattack, based on effects determined through the simulation of possible remediation actions and their effects on the nodes making up the system being protected, and to preempt possible escalations of the cyberattack while restoring one or more nodes back to a trusted operational state.
  • the restoration engine 745 restores the one or more nodes in the protected system by cooperating with any of 1) an AI model trained to model a normal pattern of life for each node in the protected system, 2) an AI model trained on what are a possible set of cyber threats and their characteristics and symptoms to identify the cyber threat (e.g., malicious actor/device/file) that is causing a particular node to behave abnormally (e.g., malicious behavior) and fall outside of that node's normal pattern of life, and 3) the autonomous response module 140.
  • the restoration engine 745 can reference both i) a database of restoration response scenarios stored in the database and ii) a prediction engine 702 configured to run AI-based simulations and use the operational state of each node in the graph of the protected system during simulations of cyberattacks on the protected system to 1) restore each node compromised by the cyber threat and 2) promote protection of the corresponding nodes adjacent to a compromised node in the graph of the protected system.
  • the restoration engine 745 can prioritize, among the one or more nodes to restore, which nodes to remediate and an order of the nodes to remediate, based on two or more factors including i) a dependency order needed for the recovery efforts, ii) an importance of a particular recovered node compared to other nodes in the system being protected, iii) a level of compromise of a particular node contemplated to be restored, iv) an urgency to recover that node compared to whether containment of the cyber threat was successful, v) a list of the most important things in the protected system to recover earliest, and vi) factoring in a result of a cyberattack simulation being run during the cyberattack by the prediction engine 702 to predict a likely result regarding the cyberattack when that node is restored.
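A hedged sketch of this prioritization, assuming each factor has been normalized to a 0..1 score; the factor weights below are illustrative assumptions, not values from the design:

```python
def restoration_priority(node):
    """Weighted blend of the prioritization factors listed above.
    The weights are hypothetical; a real engine would tune or learn them."""
    weights = {
        "dependency_order": 0.25,   # i) needed earlier in recovery
        "importance": 0.25,         # ii) relative to other nodes
        "compromise_level": 0.20,   # iii) how badly compromised
        "urgency": 0.20,            # iv) urgency vs. containment status
        "simulation_result": 0.10,  # vi) predicted effect of restoring
    }
    return sum(node[k] * w for k, w in weights.items())

nodes = [
    {"name": "dc-1", "dependency_order": 1.0, "importance": 0.9,
     "compromise_level": 0.8, "urgency": 0.9, "simulation_result": 0.7},
    {"name": "printer", "dependency_order": 0.1, "importance": 0.1,
     "compromise_level": 0.3, "urgency": 0.2, "simulation_result": 0.1},
]
ordered = sorted(nodes, key=restoration_priority, reverse=True)
```

Sorting by this score yields a remediation order: the domain controller is restored before the printer.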
  • An interactive response loop exists between the restoration engine 745 , the cyber security appliance 100 , and the prediction engine 702 .
  • the restoration engine 745, the cyber security appliance 100, and the prediction engine 702 can be configured to cooperate to combine an understanding of normal operations of the nodes making up the devices and users in the system being protected by the cyber security appliance 100, an understanding of emerging cyber threats, an ability to contain those emerging cyber threats, and a restoration of the nodes of the system to heal the system, with adaptive feedback between the multiple AI-based engines in light of simulations of the cyberattack to predict what might occur in the nodes in the system based on the progression of the attack so far, the mitigation actions taken to contain those emerging cyber threats, and the remediation actions taken to heal the nodes using the simulated cyberattack information.
  • the multiple AI-based engines have communication hooks in between them to exchange a significant amount of behavioral metrics and data between the multiple AI-based engines to work together to provide an overall cyber threat response.
  • the cyber security appliance 100 and its modules use Artificial Intelligence algorithms configured and trained to perform a first machine-learned task of detecting the cyber threat. In addition, the autonomous response module 140 can use a combination of user-configurable settings on actions to take to mitigate a detected cyber threat, a default set of actions to take to mitigate a detected cyber threat, and Artificial Intelligence algorithms configured and trained to perform a second machine-learned task of taking one or more mitigation actions to mitigate the cyber threat.
  • the restoration engine 745 uses Artificial Intelligence algorithms configured and trained to perform a third machine-learned task of remediating the system/network being protected back to a trusted operational state.
  • the prediction engine 702 uses Artificial Intelligence algorithms configured and trained to perform a fourth machine-learned task of AI-based simulations of cyberattacks to assist in determining 1) how a simulated cyberattack might occur in the system being protected, and 2) how to use the simulated cyberattack information to preempt possible escalations of an ongoing actual cyberattack.
  • the autonomous response module 140 uses its intelligence to cooperate with the prediction engine 702 and its AI-based simulations to choose and initiate an initial set of one or more mitigation actions indicated as a preferred targeted initial response to the detected cyber threat by autonomously initiating those mitigation actions to defend against the detected cyber threat, rather than a human taking an action.
  • FIG. 10 illustrates an embodiment of the AI based cyber security appliance 100 plugging in as an appliance platform to protect a system.
  • the cyber security appliance 100 is part of an enterprise network 230 , which may further include one or more computing devices 240 such as database servers 250 , web servers 260 , networking devices 270 (e.g., bridge, switch, router, load-balancers, gateways, and/or firewalls and endpoint devices) with connectivity to resources within the enterprise network 230 as well as a publicly accessible network 280 (e.g., the Internet).
  • the endpoint devices 270 may include, but are not limited or restricted to, desktop computers, laptops, smart phones, tablets, wearables, smart appliances, or the like.
  • the security controls operate as probes and detectors that are configured to monitor, for example, network-based activity (e.g., email activity, TCP/IP communications, text or Short Message Service (SMS) activity, etc.) and computing device activity (e.g., download activity based on volume, day, time of day, etc.); credential update/modification activity (e.g., credential changes, failed access attempts to a resource, etc.); and/or resource activity (e.g., attempted/successful accesses to enterprise resources, etc.).
  • the security controls provide the monitored data (or a version of the monitored data) as input into the modules of the cyber security appliance 100 to determine what is occurring in each domain individually.
  • FIG. 11 illustrates an example Artificial Intelligence based cyber security appliance 100 using a cyber threat analyst module 104 to protect an example network.
  • the example network of computer systems 50 uses a cyber security appliance 100 .
  • the system depicted is a simplified illustration, which is provided for ease of explanation.
  • the system 50 comprises a first computer system 10 within a building, which uses the threat detection system to detect and thereby attempt to prevent threats to computing devices within its bounds.
  • the first computer system 10 comprises three computing devices 1 , 2 , 3 , a local server 4 , and a multifunctional device (MFD) 5 that provides printing, scanning and facsimile functionalities to each of the computers 1 , 2 , 3 .
  • All of the devices within the first computer system 10 are communicatively coupled via a first Local Area Network (LAN) 6 . Consequently, all of the computing devices 1 , 2 , 3 are able to access the local server 4 via the first LAN 6 and use the functionalities of the MFD 5 via the LAN 6 .
  • the first LAN 6 of the first computer system 10 is connected to the Internet 20 , which in turn provides computing devices 1 , 2 , 3 with access to a multitude of other computing devices, including a server 30 and a second computer system 40 .
  • the second computer system 40 also includes two computing devices 41 , 42 , connected by a second LAN 43 .
  • a first computing device 1 on the first computer system 10 has the electronic hardware, modules, models, and various software processes of the cyber security appliance 100 ; and therefore, runs threat detection for detecting threats to the first computer system 10 .
  • the first computing device 1 includes one or more processors arranged to run the steps of the process described herein, memory storage components required to store information related to the running of the process, as well as one or more network interfaces for collecting information from various security controls (e.g., sensors, probes, etc.) collecting data associated with the system (network) 50 under analysis.
  • the cyber security appliance 100 in the first computing device 1 builds and maintains a dynamic, ever-changing model of the ‘normal behavior’ of each user and machine within the first computer system 10 .
  • the approach is based on Bayesian mathematics, and monitors all interactions, events and communications within the first computer system 10 —which computing device is talking to which, files that have been created, networks that are being accessed.
  • a second computing device 2 is based in a company's San Francisco office and operated by a marketing employee who regularly accesses the marketing network, usually communicates with machines in the company's U.K. office in the second computer system 40 between 9:30 AM and midday, and is active from about 8:30 AM until 6 PM.
  • the cyber security appliance 100 takes all the information that is available relating to this employee and establishes a ‘pattern of life’ for that person and the devices used by that person in that system, which is dynamically updated as more information is gathered.
  • the model of the normal pattern of life for an entity in the system 50 under analysis is used as a moving benchmark, allowing the cyber security appliance 100 to spot behavior on the system 50 that seems to fall outside of this normal pattern of life and to flag this behavior as anomalous, requiring further investigation and/or autonomous action.
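A minimal sketch of such a moving benchmark, assuming a single numeric metric per entity (e.g., connections per hour) and an illustrative 3-sigma threshold; the real appliance models many metrics with richer mathematics:

```python
import statistics

class PatternOfLife:
    """Moving benchmark for one entity: the mean/stdev of its observed
    history form the 'normal' band, and each new observation is flagged
    when it falls outside a z-score threshold (3-sigma is an assumption)."""

    def __init__(self, threshold=3.0):
        self.history = []
        self.threshold = threshold

    def observe(self, value):
        anomalous = False
        if len(self.history) >= 5:      # need some history before judging
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0
            anomalous = abs(value - mean) / stdev > self.threshold
        self.history.append(value)      # the benchmark keeps moving
        return anomalous

pol = PatternOfLife()
for v in [10, 12, 11, 9, 10, 11]:       # a quiet week of activity
    pol.observe(v)
```

After the quiet baseline above, a sudden burst of activity would exceed the threshold and be flagged for investigation.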
  • the cyber security appliance 100 is built to deal with the fact that today's attackers are getting stealthier, and an attacker/malicious agent may be ‘hiding’ in a system to ensure that they avoid raising suspicion in an end user, such as by slowing their machine down.
  • the AI model(s) 160 in the cyber security appliance 100 builds a sophisticated ‘pattern of life’—that understands what represents normality for every person, device, and network activity in the system being protected by the cyber security appliance 100 .
  • the self-learning algorithms in the AI can, for example, understand the normal pattern of life of each node (user account, device, etc.) in an organization in about a week, and the understanding grows more bespoke with every passing minute.
  • the detection engine self-learning AI can learn “on the job” from real-world data occurring in the system and constantly evolves its understanding as the system's environment changes.
  • the Artificial Intelligence can use machine learning algorithms to analyze patterns and ‘learn’ what is the ‘normal behavior’ of the system (network) 50 by analyzing data on the activity on the system 50 at the device and employee level.
  • the unsupervised machine learning does not need humans to supervise the learning in the model but rather discovers hidden patterns or data groupings without the need for human intervention.
  • the unsupervised machine learning discovers the patterns and related information using the unlabeled data monitored in the system itself.
  • Unsupervised learning algorithms can include clustering, anomaly detection, neural networks, etc. Unsupervised learning can break down features of what it is analyzing (e.g., a network node of a device or user account), which can be useful for categorization, and then identify what else has similar or overlapping feature sets matching to what it is analyzing.
  • the cyber security appliance 100 can use unsupervised machine learning to work things out without pre-defined labels. In the case of sorting a series of different entities, such as different devices, the system analyzes the information and works out the different classes of devices. This allows the system 50 to handle the unexpected and embrace uncertainty when new entities and classes are examined.
  • the modules and models of the cyber security appliance 100 do not always know what they are looking for but can independently classify data and detect compelling patterns.
  • the cyber security appliance's 100 unsupervised machine learning methods do not require training data with pre-defined labels. Instead, they are able to identify key patterns and trends in the data, without the need for human input.
  • the advantage of unsupervised learning in this system is that it allows computers to go beyond what their programmers already know and discover previously unknown relationships.
  • the unsupervised machine learning methods can use a probabilistic approach based on a Bayesian framework.
  • the machine learning allows the cyber security appliance 100 to integrate a huge number of weak indicators of potentially anomalous network behavior, each a low threat value by itself, to produce a single clear overall measure of these correlated anomalies and determine how likely a network device is to be compromised.
  • This probabilistic mathematical approach provides an ability to understand important information, amid the noise of the network—even when it does not know what it is looking for.
  • the cyber security appliance 100 can use Recursive Bayesian Estimation to combine these multiple analyses of different measures of network behavior into a single overall/comprehensive picture of the state of each device; to do so, the cyber security appliance 100 takes advantage of the power of Recursive Bayesian Estimation (RBE) via an implementation of the Bayes filter.
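An RBE/Bayes-filter update of this kind can be sketched as follows; the prior and likelihood values are hypothetical, the point being that many weak indicators, each barely moving the belief on its own, combine recursively into a clear overall measure:

```python
def bayes_update(prior, p_obs_given_compromised, p_obs_given_clean):
    """One step of a discrete Bayes filter: fold a single weak indicator
    into the running probability that a device is compromised."""
    num = p_obs_given_compromised * prior
    den = num + p_obs_given_clean * (1.0 - prior)
    return num / den

belief = 0.01   # prior probability the device is compromised
# Ten weak indicators, each only slightly more likely under compromise
# (hypothetical likelihoods: P(obs|compromised), P(obs|clean)).
weak_indicators = [(0.30, 0.20)] * 10
for p_c, p_n in weak_indicators:
    belief = bayes_update(belief, p_c, p_n)
```

Each indicator multiplies the odds by only 1.5, yet the recursive combination raises the belief from 1% to well over 30%, surfacing the compromise amid the noise.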
  • the cyber security appliance's 100 AI models are able to constantly adapt themselves, in a computationally efficient manner, as new information becomes available to the system.
  • the AI model(s) of the cyber security appliance 100 may be configured to continually recalculate threat levels in the light of new evidence, identifying changing attack behaviors where conventional signature-based methods fall down.
  • Training an AI model can be accomplished by having the model learn good values for all of the weights and the bias from labeled examples created by the system, and in this case, starting with no labels initially.
  • a goal of the training of the AI model can be to find a set of weights and biases that have low loss, on average, across all examples.
  • the AI classifier can receive supervised machine learning with a labeled data set to learn to perform its task as discussed herein.
  • An anomaly detection technique that can be used is supervised anomaly detection that requires a data set that has been labeled as “normal” and “abnormal” and involves training a classifier.
  • Another anomaly detection technique that can be used is an unsupervised anomaly detection that detects anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit least to the remainder of the data set.
  • the AI model representing normal behavior from a given normal training data set can detect anomalies by establishing the normal pattern and then testing the likelihood that a test instance under analysis was generated by the AI model.
  • Anomaly detection can identify rare items, events or observations which raise suspicions by differing significantly from the majority of the data, which includes rare objects as well as things like unexpected bursts in activity.
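A small sketch of unsupervised anomaly detection in this spirit: score each instance in an unlabeled data set by its mean distance to its k nearest neighbours and surface the instance that fits the remainder least (the distance metric and k are illustrative choices):

```python
def least_fitting(data, k=3):
    """Unsupervised anomaly scoring on an unlabeled 1-D data set, under
    the assumption that the majority of instances are normal: each point
    is scored by its mean distance to its k nearest neighbours, and the
    index of the least-fitting point is returned."""
    def score(i):
        dists = sorted(abs(data[i] - x) for j, x in enumerate(data) if j != i)
        return sum(dists[:k]) / k
    return max(range(len(data)), key=score)

# Mostly steady readings plus one unexpected burst in activity.
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 42.0]
```

No labels are needed: the burst at index 5 differs significantly from the majority of the data and is surfaced as the anomaly.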
  • the method and system are arranged to be performed by one or more processing components with any portions of software stored in an executable format on a computer readable medium.
  • any portions of the method, apparatus and system implemented as software can be stored in one or more non-transitory memory storage devices in an executable format to be executed by one or more processors.
  • the computer readable medium may be non-transitory and does not include radio or other carrier waves.
  • the computer readable medium could be, for example, a physical computer readable medium such as semiconductor memory or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disc, and an optical disk, such as a CD-ROM, CD-R/W or DVD.
  • the various methods described above may also be implemented by a computer program product.
  • the computer program product may include computer code arranged to instruct a computer to perform the functions of one or more of the various methods described above.
  • the computer program and/or the code for performing such methods may be provided to an apparatus, such as a computer, on a computer readable medium or computer program product.
  • a transitory computer readable medium may include radio or other carrier waves.
  • FIG. 12 illustrates a block diagram of an embodiment of one or more computing devices that can be a part of an AI-based, cyber security system including the cyber security appliance 100 , the restoration engine, the prediction engine 702 , etc. for an embodiment of the current design discussed herein.
  • the computing device may include one or more processors (e.g. processing units) 620 to execute instructions, one or more memories 630 - 632 to store information, one or more data input components 660 - 663 to receive data input from a user of the computing device 600 , one or more modules that include the management module, a network interface communication circuit 670 to establish a communication link to communicate with other computing devices external to the computing device, one or more sensors where an output from the sensors is used for sensing a specific triggering condition and then correspondingly generating one or more preprogrammed actions, a display screen 691 to display at least some of the information stored in the one or more memories 630 - 632 and other components.
  • the processing unit 620 may have one or more processing cores, which couples to a system bus 621 that couples various system components including the system memory 630 .
  • the system bus 621 may be any of several types of bus structures selected from a memory bus, an interconnect fabric, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • Computing device 602 typically includes a variety of computing machine-readable media.
  • Non-transitory machine-readable media can be any available media that can be accessed by computing device 602 and includes both volatile and nonvolatile media, and removable and non-removable media.
  • non-transitory machine-readable media use includes storage of information, such as computer-readable instructions, data structures, other executable software, or other data.
  • Non-transitory machine-readable media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other tangible medium which can be used to store the desired information, and which can be accessed by the computing device 602 .
  • Transitory media such as wireless channels are not included in the machine-readable media.
  • Machine-readable media typically embody computer readable instructions, data structures, and other executable software.
  • a volatile memory drive 641 is illustrated for storing portions of the operating system 644 , application programs 645 , other executable software 646 , and program data 647 .
  • a user may enter commands and information into the computing device 602 through input devices such as a keyboard, touchscreen, or software or hardware input buttons 662 , a microphone 663 , a pointing device and/or scrolling input component, such as a mouse, trackball or touch pad 661 .
  • the microphone 663 can cooperate with speech recognition software.
  • These and other input devices are often connected to the processing unit 620 through a user input interface 660 that is coupled to the system bus 621 , but can be connected by other interface and bus structures, such as a Lightning port, game port, or a universal serial bus (USB).
  • a display monitor 691 or other type of display screen device is also connected to the system bus 621 via an interface, such as a display interface 690 .
  • computing devices may also include other peripheral output devices such as speakers 697 , a vibration device 699 , and other output devices, which may be connected through an output peripheral interface 695 .
  • the computing device 602 can operate in a networked environment using logical connections to one or more remote computers/client devices, such as a remote computing system 680 .
  • the remote computing system 680 can be a personal computer, a mobile computing device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computing device 602 .
  • the logical connections can include a personal area network (PAN) 672 (e.g., Bluetooth®), a local area network (LAN) 671 (e.g., Wi-Fi), and a wide area network (WAN) 673 (e.g., cellular network).
  • a browser application and/or one or more local apps may be resident on the computing device and stored in the memory.
  • When used in a LAN networking environment, the computing device 602 is connected to the LAN 671 through a network interface 670 , which can be, for example, a Bluetooth® or Wi-Fi adapter.
  • When used in a WAN networking environment (e.g., Internet), the computing device 602 typically includes some means, such as the network interface 670 , for establishing communications over the WAN 673 .
  • a radio interface which can be internal or external, can be connected to the system bus 621 via the network interface 670 , or other appropriate mechanism.
  • other software depicted relative to the computing device 602 may be stored in the remote memory storage device.
  • remote application programs 685 may reside on the remote computing device 680 .
  • the network connections shown are examples, and other means of establishing a communications link between the computing devices may be used.
  • present design can be carried out on a single computing device or on a distributed system in which different portions of the present design are carried out on different parts of the distributed computing system.
  • each of the terms “engine,” “module” and “component” is representative of hardware, firmware, and/or software that is configured to perform one or more functions.
  • the engine (or module or component) may include circuitry having data processing and/or storage functionality. Examples of such circuitry may include, but are not limited or restricted to a processor, a programmable gate array, a microcontroller, an application specific integrated circuit, wireless receiver, transmitter and/or transceiver circuitry, semiconductor memory, or combinatorial logic.
  • the engine (or module or component) may be software in the form of one or more software modules, which may be configured to operate as its counterpart circuitry.
  • a software module may be a software instance that operates as or is executed by a processor, namely a virtual processor whose underlying operations are based on a physical processor, such as virtual processor instances for the Microsoft® Azure® or Google® Cloud Services platform or an EC2 instance within the Amazon® AWS infrastructure, for example.
  • Illustrative examples of the software module may include an executable application, a daemon application, an application programming interface (API), a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, a shared library/dynamic load library, or simply one or more instructions.
  • a module may be implemented in hardware electronic components, software components, or a combination of both.
  • a module is a core component of a complex system consisting of hardware and/or software that is capable of performing its function discretely from other portions of the entire complex system but designed to interact with the other portions of the entire complex system.
  • the term “computerized” generally represents that any corresponding operations are conducted by hardware in combination with software and/or firmware.
  • the terms “computing device” or “device” should be generally construed as a physical device with data processing capability, data storage capability, and/or a capability of connecting to any type of network, such as a public cloud network, a private cloud network, or any other network type.
  • Examples of a computing device may include, but are not limited or restricted to, the following: a server, a router or other intermediary communication device, an endpoint (e.g., a laptop, a smartphone, a tablet, a desktop computer, a netbook, IoT device, networked wearable, etc.)
  • the terms “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.”
  • An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.
  • an application described herein includes, but is not limited to, software applications, mobile applications, and programs, routines, objects, widgets, and plug-ins that are part of an operating system application.


Abstract

A device linking service can unify data streams from different sources of access into a network to get a composite picture of a behavior of an individual physical network device that has different device identifiers from the different sources of access into the network via cross-referencing information from the different sources of access into the network. The device linking service creates a unified network device identifier for the different device identifiers from the different sources of access into the network. The device linking service supplies the unified network device identifier and associated information with the different device identifiers from the different sources of access into the network to a prediction engine. The prediction engine runs a simulation of attack paths for the network that a cyber threat may take.

Description

    RELATED APPLICATION
  • This application claims priority under 35 USC 119 to U.S. Provisional patent application No. 63/350,781 entitled “An AI CYBER SECURITY SYSTEM” filed Jun. 9, 2022, and U.S. Provisional patent application No. 63/396,105 entitled “A CYBER THREAT PROTECTION SYSTEM” filed Aug. 8, 2022, the contents of both of which are incorporated herein by reference in their entirety.
  • NOTICE OF COPYRIGHT
  • A portion of this disclosure contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the material subject to copyright protection as it appears in the United States Patent & Trademark Office's patent file or records, but otherwise reserves all copyright rights whatsoever.
  • FIELD
  • Cyber security and, in an embodiment, use of Artificial Intelligence in cyber security to detect malicious cyber threats.
  • BACKGROUND
  • Cybersecurity attacks have become a pervasive problem for enterprises as many computing devices and other resources have been subjected to attack and compromised. A “cyberattack” constitutes a threat to security of an enterprise (e.g., enterprise network, one or more computing devices connected to the enterprise network, or the like). As an example, the cyberattack may be a cybersecurity threat against the enterprise network, one or more computing devices connected to the enterprise network, stored or in-flight data accessible over the enterprise network, and/or other enterprise-based resources. This security threat may involve malware (malicious software) introduced into a computing device or into the network. The security threat may originate from an external endpoint or an internal entity (e.g., a negligent or rogue authorized user). The security threats may represent malicious or criminal activity, ranging from theft of credentials to even a nation-state attack, where the source initiating or causing the security threat is commonly referred to as a “malicious” source. Conventional cybersecurity products are commonly used to detect and prioritize cybersecurity threats (hereinafter, “cyber threats”) against the enterprise, and to determine preventive and/or remedial actions for the enterprise in response to those cyber threats.
  • SUMMARY
  • Methods, systems, and apparatus are disclosed for an Artificial Intelligence based cyber security system. In some aspects, the techniques described herein relate to an apparatus, including: a device linking service configured to unify data streams from different sources of access into a network to get a composite picture of a behavior of an individual physical network device that has different device identifiers from the different sources of access into the network via cross-referencing information from the different sources of access into the network. The device linking service creates a unified network device identifier for the different device identifiers from the different sources of access into the network. The device linking service supplies the unified network device identifier and associated information with the different device identifiers from the different sources of access into the network to a prediction engine. The prediction engine runs a simulation of attack paths for the network that a cyber threat may take.
  • In some aspects, the techniques described herein relate to an apparatus, where the device linking service creates a meta entity identifier from the unified network device identifier and one or more user identifiers associated with the different device identifiers from the different sources of access into the network. The device linking service supplies the meta entity identifier and associated information to a cyber security appliance configured to detect the cyber threat in the network. The cyber security appliance uses the meta entity identifier and information associated with the unified network device identifier and the one or more user identifiers associated with the different device identifiers to create multiple models of a pattern of life for the meta entity identifier in order to detect the cyber threat.
  • In some aspects, the techniques described herein relate to an apparatus, where the cyber security appliance is configured to have an autonomous response module to autonomously respond to mitigate the cyber threat as well as to cooperate with the prediction engine in order to determine how to properly autonomously respond to a cyber attack by the cyber threat based upon simulations run in the prediction engine modelling the attack paths into and through the network.
  • In some aspects, the techniques described herein relate to a non-transitory computer readable medium configured to store instructions in an executable format in the non-transitory computer readable medium, which when executed by one or more processors cause operations, including: providing a device linking service to unify data streams from different sources of access into a network to get a composite picture of a behavior of an individual physical network device that has different device identifiers from the different sources of access into the network via cross-referencing information from the different sources of access into the network, providing the device linking service to create a unified network device identifier for the different device identifiers from the different sources of access into the network, providing the device linking service to then link the unified network device identifier with a user in the network, and providing the device linking service to supply the unified network device identifier and associated information with the different device identifiers from the different sources of access into the network to a prediction engine, where the prediction engine is configured to run a simulation of attack paths for the network that a cyber threat may take.
  • These and other features of the design provided herein can be better understood with reference to the drawings, description, and claims, all of which form the disclosure of this patent application.
  • DRAWINGS
  • The drawings refer to some embodiments of the design provided herein in which:
  • FIG. 1 illustrates an embodiment of a block diagram of an example device linking service to unify data streams from different sources of access into a network to get a composite picture of a behavior of an individual physical network device that has different device identifiers from the different sources of access into a network via cross-referencing information from the different sources of access into the network.
  • FIG. 2 illustrates an embodiment of a block diagram of an example device linking service that in one process links/correlates network device identifiers (dids) from different sources with different device data streams across a particular network into a single unified network device identifier for that individual physical device, under analysis, where the device linking service next matches different device IDs (dids) across separate networks.
  • FIG. 3 illustrates an embodiment of a block diagram of an example device linking service to aggregate network presence information about each particular person/user of the network and their different user accounts on different third-party applications served from third-party platforms external to the network, who is then also associated with their particular individual physical network device.
  • FIG. 4 illustrates an embodiment of a block diagram of an example device linking service to unify network device identifiers and its user(s) into one meta entity identifier for a more accurate analysis in cyber security modeling to detect a cyber threat as well as how to properly autonomously respond to a cyber-attack based on the meta entity information being provided into the prediction engine to model the attack paths into and through the network.
  • FIG. 5A illustrates an embodiment of a portion of the block diagram of the example device linking service shown in FIG. 4 .
  • FIG. 5B illustrates an embodiment of another portion of the block diagram of the example device linking service shown in FIG. 4 .
  • FIG. 6 illustrates an embodiment of a block diagram of an example Artificial Intelligence based cyber security appliance.
  • FIG. 7 illustrates a diagram of an embodiment of a cyber threat prediction engine and its Artificial Intelligence-based simulations constructing a graph of nodes in an example network and simulating how the cyberattack might likely progress in the future tailored with an innate understanding of a normal behavior of the nodes in the system being protected and a current operational state of each node in the graph of the protected system during simulations of cyberattacks.
  • FIG. 8 illustrates a diagram of an embodiment of the cyber threat prediction engine and its Artificial Intelligence-based simulations constructing an example graph of nodes in an example network and simulating how the cyberattack might likely progress in the future tailored with an innate understanding of a normal behavior of the nodes in the system being protected and a current operational state of each node in the graph of the protected system during simulations of cyberattacks.
  • FIG. 9 illustrates a graph of an embodiment of an example chain of unusual behavior for, in this example, the email activities as well as IT activities deviating from a normal pattern of life for this user and/or device in connection with the rest of the network under analysis.
  • FIG. 10 illustrates an embodiment of the AI based cyber security appliance plugging in as an appliance platform to protect a system.
  • FIG. 11 illustrates a block diagram of an embodiment of one or more computing devices that can be a part of the Artificial Intelligence based cyber security system for an embodiment of the current design discussed herein.
  • FIG. 12 illustrates a block diagram of an embodiment of one or more computing devices that can be a part of an AI-based, cyber security system including the cyber security appliance, the restoration engine, the prediction engine, etc. for an embodiment of the current design discussed herein.
  • While the design is subject to various modifications, equivalents, and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will now be described in detail. It should be understood that the design is not limited to the particular embodiments disclosed, but—on the contrary—the intention is to cover all modifications, equivalents, and alternative forms using the specific embodiments.
  • DESCRIPTION
  • In the following description, numerous specific details are set forth, such as examples of specific data signals, named components, number of servers in a system, etc., in order to provide a thorough understanding of the present design. It will be apparent, however, to one of ordinary skill in the art that the present design can be practiced without these specific details. In other instances, well known components or methods have not been described in detail but rather in a block diagram in order to avoid unnecessarily obscuring the present design. Further, specific numeric references such as a first server, can be made. However, the specific numeric reference should not be interpreted as a literal sequential order but rather interpreted that the first server is different than a second server. Thus, the specific details set forth are merely exemplary. Also, the features implemented in one embodiment may be implemented in another embodiment where logically possible. The specific details can be varied from and still be contemplated to be within the spirit and scope of the present design. The term coupled is defined as meaning connected either directly to the component or indirectly to the component through another component.
  • FIG. 1 illustrates an embodiment of a block diagram of an example device linking service to unify data streams from different sources of access into a network to get a composite picture of a behavior of an individual physical network device that has different device identifiers from the different sources of access into a network via cross-referencing information from the different sources of access into the network. The device linking service 180 can create a unified network device identifier (e.g., new metadevice identifier) for the different device identifiers from the different sources of access into the network. The device linking service 180 can reconcile different data streams into the behavior of a single physical network device such as a physical laptop, a physical tablet, a physical smart phone, etc. FIG. 1 shows the cyber security appliance 100 , the device linking service 180 , one or more physical devices, such as laptop A, and a network.
  • The device linking service 180 can link each individual physical network device accessing into the network protected by a cyber security appliance 100 through multiple different connection pathways from the network traffic back together for each individual physical network device in the network. One physical network device will likely have many different device identifiers assigned by the different sources of access into a network. If a network has four ways to access that network, then the one physical network device will likely have four different device identifiers assigned to that single device by the four different sources of access into the network.
  • An example network device of a laptop identifiable as, for example, laptop A can access into a network protected by the cyber security appliance 100 through, for example, i) a Zscaler Private Access (ZPA), ii) a Virtual Private Network (VPN), iii) a remote desktop connection (RDP), iv) a direct connection via ethernet and/or Wi-Fi into the network, etc. Note, ZPA allows a network administrator to give users policy-based secure access only to the internal apps they need to get their work done, unlike VPNs, which require the user's physical network device to connect to the network to access the enterprise's applications. With ZPA, application access does not require network access, but the device's behavior still needs to be tracked and monitored. The ZPA, the VPN, the RDP, etc. routes of access into the organization's network resources protected by the cyber security appliance 100 facilitate the user's laptop A connecting into the network when the user is working from home on their Wi-Fi on laptop A. The cyber security appliance 100 will collect and monitor the data (e.g., IP traffic, logs, metadata, etc.) coming into the network from the ZPA, the VPN, the RDP, and from the direct connection. The VPN (virtual private network) can be a service that creates a safe, encrypted online connection between the corporate network and the endpoint device (e.g., individual physical network device) being used by the user to remotely access the network. Note, RDP (Remote Desktop Protocol) provides remote access to a computer or device but is limited to a single device or network (and thus easy to track). A VPN is more of an open pass, where anyone who can connect to the VPN server can use it to access secure networks. However, it is the VPN server processing a device's outbound and inbound online traffic (the device's requests, websites' responses to the device's requests, and any files the user of the device decides to send or receive), rather than the remote device itself doing these tasks as with RDP.
Note, the endpoint agent that is installed in the network device can monitor activities when the device is not directly connected to the Wi-Fi of the network (e.g., working from home). However, some of the activities and communication traffic, such as browsing while on a VPN, can be invisible to that installed endpoint agent because the activity and communication would be routed through a cloud-based VPN service. The device linking service 180 can ingest the Syslog from the VPN client (e.g., Zscaler) and other information from the VPN client to create another source of data linkable via its device identifier and/or username on the VPN.
  • Generally, the network device, for example, laptop A, will also have an endpoint agent/client monitoring sensor resident on that individual physical network device itself monitoring activities occurring on that device, which can be helpful in correlating the activities from these different ways into the network back to this single individual physical network device. The endpoint agent/client monitoring sensor resident on that network device will securely and directly communicate with the cyber security appliance 100 to supply data (such as network traffic, information on monitored processes, e.g., applications, operating system events and interactions, whether the ZPA, VPN, and/or RDP was active on the network device, etc.) resident on the network device itself. This can also supply a lot of information on what is happening with the network device when the network device is off-line/not connected to the network but still being used remotely by the user. The user's physical network device (e.g., laptop A) also connects directly into the organization's Wi-Fi portion of the network protected by the cyber security appliance 100 when the user is working in their office at work. In this example, the same laptop network device connects to the network being protected by the cyber security appliance 100 through these different access paths and an additional communication stream of data regarding that individual physical network device (e.g., a particular network device) from the endpoint client monitoring sensor resident on that network device itself.
The information relating back to that particular physical network device A can have different identifiers assigned by, for example, i) the ZPA for that network device (assigned in this example the device id Zia Device did3) and ii) the network itself (assigned in this example the device id Traffic Device did1), which can be a different identifier than the identifier used by the endpoint client monitoring sensor resident on that network device (which typically can be the manufacturer's ID of that particular machine, and in this example cSensor device did2), which can be different than the VPN device id (did4). The device linking service 180 needs to correlate these four different device identifiers into one unified network device identifier (e.g., new metadevice identifier) for that single physical network device so that the rest of the cyber security system (e.g., cyber security appliance 100 , prediction engine 702 , etc.) can link this physical network device accessing into a network protected by the cyber security appliance 100 through multiple different connection pathways/access paths from network traffic back together into one network entity for analysis and other purposes. In an example, machine learning models are trained to model the pattern of life of this physical network device via utilizing the common unified network device identifier for that single network device, while still being able to separate out particular behavior for that network device (and corresponding models of a pattern of life/normal behavior for that network device) in each of the four different use scenarios, e.g., off-line from the main network, connecting to the main network while at home or traveling through a third-party remote access service such as the ZPA and/or VPN, and connecting directly to the main network while in the office.
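The linking step just described can be sketched in a few lines of Python. This is a hypothetical illustration only, not the patent's implementation: the `MetaDevice` class, the `unify` helper, and the source labels are assumptions made here for clarity, while the did1–did4 identifiers mirror the example above.

```python
from dataclasses import dataclass, field

@dataclass
class MetaDevice:
    """One unified network device identifier (hypothetical structure)."""
    unified_id: str
    source_ids: dict = field(default_factory=dict)  # source of access -> device id

def unify(device_ids: dict, counter: int = 1) -> MetaDevice:
    """Collapse the per-source device ids into one metadevice identifier,
    while each per-source id (and its use scenario) stays queryable."""
    meta = MetaDevice(unified_id=f"metadevice-{counter:04d}")
    for source, did in device_ids.items():
        meta.source_ids[source] = did
    return meta

# The four device ids observed for the same physical laptop A:
laptop_a = unify({
    "network_traffic": "did1",  # id assigned by the monitored network itself
    "csensor": "did2",          # endpoint agent id (manufacturer's machine ID)
    "zpa": "did3",              # Zscaler Private Access id
    "vpn": "did4",              # VPN client id
})
```

In this sketch, per-scenario pattern-of-life models would be keyed by the entries of `source_ids`, while the unified analysis is keyed by `unified_id`.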
  • Thus, services like ZPA and all of these other VPN services cause a much more fragmented picture of an individual user's and associated device's presence in the network than existed many years ago. Because a much more dynamic workforce is being used, whose members come into one or more offices, work while traveling, work from home, and more extensively use 3rd party SaaS services, the cyber security appliance 100 now needs to analyze lots of different pieces of information from non-related sources providing that information. A user's working patterns are also so much more fragmented that maintaining one solitary model of normal behavior is not practical because of all of the ways a user can and does perform their work activities.
  • However, the cyber security appliance 100 still needs to properly analyze all of these aspects of the physical network device and its associated user, both individually in context for each work scenario and as a big picture of all of the activities and data associated with the network device and its user(s).
  • Each individual device identifier for that physical network device can still have its own machine learning models, trained to model the pattern of life of that device identifier, maintained for that particular device identifier, and then one unified device analysis can also occur via a machine learning model maintained for the unified network device identifier (e.g., new metadevice identifier) for that single physical network device.
  • Next, the prediction engine 702 is configured to simulate cyber-attacks against the network being protected by the cyber security appliance 100 . The prediction engine 702 can use the unified network device identifier to determine all of the ways a cyber threat could attack and spread throughout the network being protected by the cyber security appliance 100 and simulated by the prediction engine 702 for the different device identifiers from the different sources of access into the network. The prediction engine 702 can create a logical and reasonable attack path through the network by unifying all of the possible routes of this network device, rather than considering each device id assigned to that physical network device separately per use scenario, because the same physical network device can have a different identifier in each of the different use scenarios. The prediction engine 702 can use the unified network device identifier for that single physical network device to view that device as a connected whole while still having the capability to view individual scenarios. The prediction engine 702 cooperates with the cyber security appliance 100 to monitor traffic into the network in order to map all paths into and through the network taken by the monitored traffic.
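To make the attack-path benefit concrete, the simulation can be sketched as a search over a network graph. This is an illustrative sketch under assumed node names (the patent does not disclose the prediction engine's internals); the point is that all routes of access converge on one unified device node instead of on four disconnected device ids.

```python
# Hypothetical network graph: every route of access resolves to the same
# unified device node ("metadevice-laptopA"), so the simulated attack
# paths traverse one connected graph.
graph = {
    "internet": ["vpn_gateway", "zpa_broker"],
    "vpn_gateway": ["metadevice-laptopA"],
    "zpa_broker": ["metadevice-laptopA"],
    "metadevice-laptopA": ["file_server"],
    "file_server": [],
}

def attack_paths(graph, entry, target, path=None):
    """Depth-first enumeration of the simple paths a cyber threat could take."""
    path = (path or []) + [entry]
    if entry == target:
        return [path]
    found = []
    for nxt in graph.get(entry, []):
        if nxt not in path:  # avoid revisiting nodes (no cycles)
            found.extend(attack_paths(graph, nxt, target, path))
    return found

paths = attack_paths(graph, "internet", "file_server")
```

Here both enumerated paths pass through the single unified device node; with four separate device ids, the same laptop would appear as four unrelated stepping stones.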
  • The device linking service 180 in one process links/correlates network device identifiers from different sources with different device data streams across a particular network into a single/individual unifying device identifier for that single physical device, in order to link the different device data streams into a singular physical device. The device linking service 180 next also matches user IDs across those different platforms to device IDs for each user ID known to use that particular device ID. The device linking service 180 unifies data streams to get a composite picture of a behavior of a given network device that has different device identifiers from different sources of access into a network, via cross-referencing information from the different sources of access into the network, and then linking that unified network device id with a user entity.
  • The device linking service 180 maps and links device ids from different sources of access to the network by looking at many shared elements amongst the physical network device identified from each different source. For example, some of the shared elements may be a device identifier and/or username, an IP address, a hostname, and the date and time of the same activity overlapping between the different sources of data (e.g., details regarding some browsing information that occurred on the physical network device that was captured as part of the data being fed from that source to the device linking service 180 ). Another activity overlapping between the different sources of data could be a matching login and logout time and date. The device linking service 180 can use a direct string matching of different values to find these shared elements but also could use fuzzy logic matching. Fuzzy logic can be an artificial intelligence and machine learning technology that performs fuzzy approximate name matching and/or fuzzy approximate string matching that identifies similar, but not identical, elements in data table sets.
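The shared-element matching described above can be sketched with the Python standard library's difflib as a stand-in for whatever fuzzy-matching technology is actually used; the field names, threshold, and example records below are illustrative assumptions, not the patent's format.

```python
from difflib import SequenceMatcher

def fuzzy_match(a: str, b: str, threshold: float = 0.6) -> bool:
    """Direct string match first; otherwise a similarity ratio links
    near-identical values such as 'jsmith' vs 'j.smith'."""
    a, b = a.lower(), b.lower()
    if a == b:
        return True
    return SequenceMatcher(None, a, b).ratio() >= threshold

def records_link(rec1: dict, rec2: dict, min_shared: int = 2) -> bool:
    """Two source records refer to the same device if enough shared
    elements (username, IP, hostname, login time) agree."""
    shared = 0
    for key in ("hostname", "ip", "username", "login_time"):
        if key in rec1 and key in rec2 and fuzzy_match(str(rec1[key]), str(rec2[key])):
            shared += 1
    return shared >= min_shared

# Hypothetical records for the same laptop as seen by two sources of access:
vpn_rec = {"hostname": "laptop-a.corp.local", "username": "jsmith", "ip": "10.1.2.3"}
zpa_rec = {"hostname": "LAPTOP-A", "username": "j.smith", "ip": "10.1.2.3"}
linked = records_link(vpn_rec, zpa_rec)
```

In a real deployment the threshold and the set of shared elements would need tuning per data source; the sketch only shows the cross-referencing shape.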
  • The device linking service 180 correlates many different sources of network data, application data, and other data that the device linking service 180 can combine to create meta entities (the unified network device identifier and one or more user identifiers associated with the different device identifiers) and link them together, while still being able to put activities for a given work scenario in context according to a model of a normal pattern of life.
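As a rough illustration of the meta entity just described (the identifier format and dictionary layout are assumptions of this sketch, not the patent's data model), the unified device identifier and its associated user identifiers can be joined into one meta entity, with a separate pattern-of-life model slot per work scenario plus one unified slot:

```python
def make_meta_entity(unified_device_id, user_ids, scenarios):
    """Join a unified device id and its user ids into one meta entity id,
    keeping one pattern-of-life model key per use scenario."""
    meta_id = f"meta::{unified_device_id}::{'+'.join(sorted(user_ids))}"
    return {
        "meta_entity_id": meta_id,
        "device": unified_device_id,
        "users": sorted(user_ids),
        # one model slot per scenario, plus the unified whole-entity view
        "pol_models": {s: f"model/{meta_id}/{s}" for s in scenarios + ["unified"]},
    }

entity = make_meta_entity(
    "metadevice-0001",
    {"jsmith@corp", "jsmith@saas-app"},  # user accounts seen on this device
    ["office_direct", "vpn", "zpa", "offline"],
)
```

A cyber security appliance consuming such a meta entity could score a given activity against the scenario-specific model it occurred in, as well as against the unified model.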
  • FIG. 2 illustrates an embodiment of a block diagram of an example device linking service that, in one process, links/correlates network device identifiers (dids) from different sources with different device data streams across a particular network into a single unified network device identifier for that individual physical device, under analysis, where the device linking service next matches different device IDs (dids) across separate networks. In this example, two networks, network 1 and network 2, each have their own set of physical network devices connecting to that particular network, and each separate network is protected by its own cyber security appliance 100. The device linking service combines the device identifiers from both networks into a single unified network device identifier for that individual physical device for improved, more efficient, and accurate simulation runs (e.g. attack path modeling) by the prediction engine 702.
  • A physical network device can be used in multiple different geographic locations, each of which has its own network protected by its own cyber security appliance 100. Thus, the device linking service 180 links an individual physical network device monitored and tracked from multiple different cyber security appliance 100 platforms back together into a unified network device identifier being utilized in multiple different networks, each network protected by a different cyber security appliance 100.
  • The device linking service 180 links a physical network device used in multiple different monitored networks back together under a common universal view of that physical network device across the network platforms in which it is used. Some organizations' global networks are so big and geographically diverse that portions of that global network need to be broken up into their own local portion of the global network. Each local portion of the global network can have its own cyber security appliance 100 protecting that local portion of the global network while a master cyber security appliance 100 protects the overall global network. Employees travel to different locations (the United States, the United Kingdom, Asia, etc.) and may be working at different geographically located offices, which are monitored by different cyber security appliances 100, and all of that activity and behavior has to be tied back together under one unified view of that network entity.
  • The device linking service 180 is configured to correlate information on these different device identifiers from discrete networks into the one common unified network device identifier for that single network device. The device linking service 180 can use a uniform analysis format via translation and mapping (e.g. applying string matching/fuzzy logic between device IDs from different platforms, user IDs from different platforms, and user IDs tied to use of one or more device IDs) and place the information into a central data store that stores these data points organized by how they relate to another data point, as well as create a new record that maintains the composite data points stored in the uniform analysis format.
  • The cyber security appliance 100 1 monitors and tracks information relating to that particular physical network device's activities and behavior in network 1 by different device identifiers assigned by, for example, i) the Cisco ASA VPN for that physical network device, assigned in this example as device did1, ii) the network itself, assigned in this example as traffic mirror did2, as well as iii) the endpoint client monitoring sensor resident on that physical network device, assigned in this example as device did3.
  • The cyber security appliance 100 2 monitors and tracks information relating to that particular physical network device's activities and behavior in network 2 by different device identifiers assigned by, for example, i) the network itself assigned in this example as traffic mirror did202, as well as ii) the endpoint client monitoring sensor resident on that physical network device assigned in this example as device did203.
  • The device linking service 180 is configured to correlate information on these two different unified network device identifiers for that single physical network device together back into one common unified network device identifier for that single network device.
  • Note, the device linking service 180 is configured to also create a centralized data store for information on that physical network device being used across separate networks. (e.g., see FIG. 4 )
  • FIG. 3 illustrates an embodiment of a block diagram of an example device linking service to aggregate network presence information about each particular person/user of the network and their different user accounts on different third-party applications served from third-party platforms external to the network, who is then also associated with their particular individual physical network device. A portion of the device linking service 180 can obtain device identifiers for a physical network device from information obtained through the cyber security appliance 100 monitoring the network traffic, and other data. A portion of the device linking service 180 can also obtain user account identifiers for a user associated with a given physical network device from information obtained through the prediction engine 702 monitoring the traffic, application information, and other data. The device linking service 180 is then configured to link the unified network device identifier with a user entity in the network. The device linking service 180 is configured to create a meta entity identifier from the unified network device identifier and one or more user identifiers associated with the different device identifiers from the different sources of access into the network. The device linking service 180 can link device identifiers for a physical network device from information obtained through the cyber security appliance 100 and the prediction engine 702 collecting metadata on a user and their different account information for this entity. In this example, the device linking service 180 creates a meta entity for a physical network device with the example identifier of WS192 Laptop and its associated user(s).
  • The device linking service 180 can link device identifiers for a physical network device in the top portion of FIG. 3 . The device linking service 180 matches different device IDs (dids) across a particular network into a single unified network device identifier for that single physical device. The device linking service 180 performs this act/methodology for each different platform/network so that each platform has one single unifying device identifier for that single physical device accessing that network from multiple different source platforms. (See FIG. 1 and the top portion of FIG. 3 ) Next, the device linking service 180 matches different device IDs (dids) across separate networks, each protected by its own cyber security appliance 100, into a single unified network device identifier for that single physical device. (See FIG. 2 ) The device linking service 180 can link and log input metadata from different sources monitored by the cyber security appliance 100 such as, for example, traffic monitoring (Outlook domains) device ID=did1, endpoint monitoring sensor cSensor device did=2, ZIA device did=3, Salesforce user did=4, Microsoft 365 user ID=did5, etc. via a hostname match and/or a username match to the physical network device. The device linking service 180 can match the IP addresses, logins, approximate time stamps, and other shared elements to link device IDs from different sources to the same network device.
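The correlation step above, where observations from multiple monitored sources collapse into one unifying device identifier, can be sketched as a simple grouping on shared elements. The observation records, hostnames, IP addresses, and the five-minute overlap window are illustrative assumptions, not values from this disclosure.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical observations of device activity from three different sources.
observations = [
    {"source": "traffic_mirror",  "did": "did1", "hostname": "WS192",
     "ip": "10.0.0.5", "seen": datetime(2023, 6, 7, 9, 0)},
    {"source": "endpoint_sensor", "did": "did2", "hostname": "WS192",
     "ip": "10.0.0.5", "seen": datetime(2023, 6, 7, 9, 2)},
    {"source": "vpn",             "did": "did3", "hostname": "HQ-PRINTER",
     "ip": "10.0.0.9", "seen": datetime(2023, 6, 7, 9, 0)},
]

def link_devices(obs, window=timedelta(minutes=5)):
    """Map each source device ID to a unifying identifier when hostname and IP
    match and the observed activity overlaps in time."""
    groups = defaultdict(list)
    for o in obs:
        groups[(o["hostname"], o["ip"])].append(o)
    unified = {}
    for (hostname, _ip), members in groups.items():
        members.sort(key=lambda o: o["seen"])
        # Require consecutive sightings to fall within the overlap window.
        if all(b["seen"] - a["seen"] <= window for a, b in zip(members, members[1:])):
            uid = f"unified-{hostname}"  # the single unifying device identifier
            for m in members:
                unified[m["did"]] = uid
    return unified

mapping = link_devices(observations)
# did1 and did2 resolve to the same unified identifier; did3 stays separate.
```

A production service would of course weigh many more shared elements (logins, usernames, fuzzy hostname variants) rather than exact key equality.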
  • The device linking service 180 also matches user IDs from across different account information from discrete platforms into a single unifying user ID. (See the bottom portion of FIG. 3 ) Next, the device linking service 180 also matches user IDs across those different platforms to device IDs for each user ID known to use that particular device ID. (See the interexchange between the top and bottom of FIG. 3 ) Finally, the device linking service 180 can then create a meta entity ID by linking the single unifying meta user ID across the different networks and different platforms with the single unifying meta device ID across the different networks and different platforms, and store that, and all of the information needed to make that meta entity ID, in the central data store. Thus, the device linking service 180 can secondarily link that properly understood network device (e.g. laptop A) to the user and all of the user's other presence into and across the network. The device linking service 180 can link metadata collected on the entity by the prediction engine 702. The device linking service 180 can also do more fuzzy matching and more active querying to link the user's ID across different sources of information. For example, the sources can include: traffic monitoring of email accounts (e.g. a ben.ash@holdingsinc.com email address from an email vendor); traffic monitoring of a physical network device, such as a laptop, assigned an identifier of WS192 on the network; the Microsoft Defender security app; master data management (MDM) data used to create a single master record for each person, place, and/or thing in the network from across internal and external data; traffic monitoring of Salesforce tools used to manage customer data, automate processes, analyze data and insights, and create personalized customer experiences (e.g. Salesforce domains), such as an ash.b Salesforce User; Active Directory security groups; Lightweight Directory Access Protocol (LDAP) data from distributed directory information services over an Internet Protocol network; and other user account linking services such as Account Linking from Google, etc., to establish the user's personal name and then match that back to the user's identity. The device linking service 180 actively queries the sources of data to obtain additional information from each of those services regarding the user (their account information) as well as passively monitors the activities and communications to collect information that can be used for the fuzzy logic matching.
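Matching account identifiers such as an email local part and a SaaS username (the ben.ash / ash.b example above) back to one person can be sketched with a token-overlap heuristic. This is one plausible approach under stated assumptions; the account strings and the prefix/initial rule are illustrative, not the disclosed algorithm.

```python
def name_tokens(account: str) -> set:
    """Lowercase name tokens from an account identifier's local part."""
    local = account.split("@")[0].lower()
    return {t for t in local.replace("_", ".").split(".") if t}

def accounts_match(a: str, b: str) -> bool:
    """Match when the two accounts share a full name token and any leftover
    token of one is an initial/prefix of a leftover token of the other."""
    ta, tb = name_tokens(a), name_tokens(b)
    if not ta & tb:          # need at least one full token in common, e.g. "ash"
        return False
    rest_a, rest_b = ta - tb, tb - ta
    if not rest_a or not rest_b:
        return True
    # e.g. "b" is an initial of "ben", so ben.ash and ash.b can be linked.
    return all(any(x.startswith(y) or y.startswith(x) for x in rest_a)
               for y in rest_b)

linked = accounts_match("ben.ash@holdingsinc.com", "ash.b")
```

In practice such a heuristic would be combined with the enrichment data mentioned above (LDAP, Active Directory, MDM records) before committing to a link.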
  • The device linking service 180 actively queries third-party/external sources to obtain additional information to create the unified view of the user entity. This meta entity (of the combined user identity and unified network device entity) can then be used to improve attack path modeling, with the example of making it easier to traverse between user links and then follow them. The device linking service 180 in the cyber security appliance 100 uses the I/O ports to actively pull data/query data from and passively monitor data from multiple sources. The cyber security appliance 100 pulls data from and receives data from, for example, an endpoint management platform that covers the endpoint. The device linking service 180 in the cyber security appliance 100 can actively pull the user account information from different third-party platforms hosting these applications. For example, the cyber security appliance 100 pulls data from and receives data from SaaS and email third-party platforms. The cyber security appliance 100 pulls data from and receives data from the network that it is protecting and monitoring.
  • FIG. 4 illustrates an embodiment of a block diagram of an example device linking service to unify network device identifiers and its user(s) into one meta entity identifier for a more accurate analysis in cyber security modeling to detect a cyber threat as well as how to properly autonomously respond to a cyber-attack based on the meta entity information being provided into the prediction engine 702 to model the attack paths into and through the network. FIG. 5A illustrates an embodiment of a portion of the block diagram of the example device linking service shown in FIG. 4 . The example device linking service 180 can supply the meta entity identifier and associated information to a cyber security appliance 100 configured to detect a cyber threat in the network. The cyber security appliance 100 can use the meta entity identifier and information associated with the unified network device identifier and the one or more user identifiers associated with the different device identifiers to create multiple models of a pattern of life for the meta entity in order to detect the cyber threat. FIG. 5B illustrates an embodiment of another portion of the block diagram of the example device linking service 180 shown in FIG. 4 . An example device linking service 180 can cooperate with the firewall configuration ingester 176, the prediction engine 702, the cyber security appliance 100, and a central data store 192.
  • On the top of the Figure, the device linking service 180 can generate queries for contextual data about the user and the physical network device from sources such as a threat intel platform, for example, Sophos, an Active Directory environment, a Mobile Device Management (MDM) service such as Jamf, an asset management platform, etc. The information from the device linking service 180 can be fed from there into the cyber security appliance 100, the central data store 192, and the prediction engine 702. Once the device linking service 180 creates the conception of the common unified network device entity for what the single physical network device, e.g. a laptop, is, then the device linking service 180 will link that unified network device identifier to the user of that device as a kind of secondary entity. (See FIG. 3 SaaS account, email account linked to the user)
  • The device linking service 180 can aggregate different network device identifiers into one common unified network device entity identifier for that single network device, under analysis, so that the rest of the cyber security system (e.g. the prediction engine 702 and the cyber security appliance 100) can link this single physical network device, under analysis, accessing into a network protected by the cyber security appliance 100 through multiple different connection pathways from network traffic back together into one network entity for better pattern of life tracking, as well as better modeling by the prediction engine 702 of the possible route that a cyber-attack may be able to take through a network.
  • On the left hand side, a Detect engine in the cyber security appliance 100 can have many modules such as a network ingestion module (including on premise/local network or cloud), a SaaS module, an email module, a login module, an endpoint agent (e.g. client sensor cSensor) input module, a cloud environment monitoring module, and an industrial network ingestion module. The modules in the Detect engine can be used to establish the behavior of a user's physical network device, such as a laptop, that has been broken down between all these different incoming data streams with some duplication. These modules are monitored and act as a source of data to model the pattern of life for each specific physical network device and user in the network in the cyber security appliance 100. Note, some overlap in modeling the pattern of life can be implemented to account for different working environment scenarios as discussed herein. The cyber security appliance 100 thus creates multiple modelled entities for the pattern of life for this meta entity identifier per the user's normal work routines, along with a big picture/composite model of the pattern of life for that particular network device. The device linking service 180 correlates many different sources for the data that the system can combine to create these meta entities and link them together while still being able to put activities for a given work scenario in context according to a specific model of a normal pattern of life. In an embodiment, merely a unified view of the pattern of life for that particular network device is created and maintained.
  • The device linking service 180 also queries for the pattern of life information on the network devices and their corresponding user(s) in the network. The cyber security appliance 100 pulls data from and receives data from, for example, an endpoint vendor platform that covers endpoint devices. The cyber security appliance 100 pulls data from and receives data from third-party platforms such as SaaS and email third-party platforms not run directly by the network being protected by the cyber security appliance 100 (e.g. personal email accounts versus the employee's work email account). The cyber security appliance 100 pulls data from and receives data from the IT network that it is protecting and monitoring. The multiple modules in the cyber security appliance 100 can supply information to the device linking service 180 regarding, in this example, network information from both the on premise/local network as well as access via RDP through the cloud, SaaS information, email information, login information, endpoint agent information, the cloud environment information from the external third-party services, etc.
  • The device linking service 180 outputs a unified network device identifier for each physical network device connecting to that network. The device linking service 180 also creates meta entities (combining the unified network device identifier and the user account identifier information for the user of that physical network device). The device linking service 180 can supply the unified network device identifier and the associated information with the different device identifiers from the different sources of access into the network to a prediction engine 702. The prediction engine 702 uses the unified network device identifier and associated information to determine nodes in the network directly touched and/or affected by the unified network device identifier and ingests the firewall rules as well to determine and add onto the possible pathways through the network in the attack path modeling (e.g. simulation runs). The device linking service 180 in one process links assigned network device identifiers from different sources in order to link the different device data streams into a singular physical network device (e.g. that device's unified network device identifier) that is in a second process linked with the physical user(s) of the linked physical network devices (e.g. a meta entity identifier) for improved, more efficient, and accurate attack path modeling by a prediction engine 702. Thus, the device linking service 180 unifies data streams to get a better big picture of the behavior of a given physical network device that has different device identifiers from different sources of information and then links that unified network device identifier with a user entity of that physical device (which forms a meta entity identifier).
  • The device linking service 180 actively queries the third-party/external sources to obtain additional information to create the unified view of the user entity and their user account information from the many third-party applications. This meta entity (of the combined user identity and unified network device entity) can then be used to improve attack path modeling, with the example of making it easier to traverse between the user links and their associated physical network device links and then follow the combination of those links/nodes in the attack path through the network. The device linking service 180 passively as well as actively queries to ingest data from various viable third-party vendor platforms and analyzes the ingested data, passing it into the prediction engine 702 to perform attack path modeling (simulations) knowing all of the hypothetically possible paths because now it knows all of the nodes that the meta entity touches/affects instead of just the paths directly related to one of the user identifiers or one of the network device IDs.
  • The device linking service 180 can use fuzzy logic and correlations to link devices to users. The device linking service 180 can also use a very similar methodology to identify user identities back to a specific user within cybersecurity software by matching SaaS accounts, emails, Active Directory (AD) accounts, and network devices that are used by the same person. Note, an AD account can be a centralized and standardized system for Microsoft Windows that automates management of user data, security, and distributed resources and enables interoperation with other directories. The device linking service 180 can identify user identities for SaaS devices and users across different services (emails, AD accounts, network) via aggregation through the fuzzy matching, other more complex machine learning-based methods, and linking of network devices for SaaS services using aggregation data such as that acquired from Lightweight Directory Access Protocol (LDAP) and Active Directory (AD) server enrichment, and other external services. Similarly, network devices may be linked to SaaS services using aggregation data such as that acquired from the LDAP and AD server enrichment, from external services such as Microsoft Defender, or from credentials observed by its own endpoint agents. Note, LDAP can be a standard protocol designed to maintain and access distributed “directory services” within an IP network to, for example, provide a central place to store usernames and passwords.
  • The unified network device entity can be a composite representation of all of the different device ids for a physical network device. The unified network device identifier for a given physical network device can be taken one step further in the device linking service 180, where these linked devices are turned into “meta”/unified device entities. Information known about composite parts of the meta entity (e.g., the user and their physical network device) can be used in, for example, the prediction engine 702 to contribute to scoring about their importance to the organization, their “weakness”, and to tailor synthetic campaigns towards them.
  • In an example, if a meta entity for the firstname.lastname@example.com email address is associated with a Windows device, then the prediction engine 702 can tailor a cyber-attack involving that meta entity starting with a phishing attack using a Windows update alert. The cyber-attack would spread in all directions for each of the different device identifiers and user accounts associated with that meta entity. In another example, if a meta-device entity for firstname.lastname@example.com contains an AWS account, but others do not, it can be deduced that they have access to a cloud environment, which boosts their importance. The combined user's presence and their network device connections across many different facets of business operations (e.g., network device, corporate phone, SaaS services, email, etc.) can be aggregated to impact their overall “targetability” in the context of attack path modeling. Node exposure, node weakness, and “damage” scores can all be impacted by the presence on these different services, which can be added together to assign these scores to a meta-device which represents a person in a business.
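The additive scoring described above, where presence across more services raises a meta entity's targetability, can be sketched as below. The service names and weights are assumptions for illustration only; the disclosure does not specify particular weight values.

```python
# Hypothetical per-service weights; cloud access (e.g. an AWS account) is
# weighted higher to reflect the boosted importance described in the text.
SERVICE_WEIGHTS = {"email": 1.0, "saas": 1.0, "aws": 2.5,
                   "corporate_phone": 0.5, "network_device": 1.0}

def targetability(meta_entity: dict) -> float:
    """Sum presence weights across the services the meta entity touches;
    unknown services get a small default weight."""
    return sum(SERVICE_WEIGHTS.get(svc, 0.5) for svc in meta_entity["services"])

cloud_user = {"id": "firstname.lastname@example.com",
              "services": ["email", "saas", "aws"]}
plain_user = {"id": "other.user@example.com",
              "services": ["email", "network_device"]}
```

The same pattern could feed the node exposure, weakness, and damage scores mentioned above, each with its own weight table.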
  • The meta entity identifier is an aggregation of the user's presence across all of their network devices and different user accounts. The device linking service 180 can create a common meta entity identifier to aggregate the different sources of network traffic from an individual physical device across multiple sources of access and a user's network presence information for a certain physical network device, for each physical network device in the network. The device linking service 180 also aggregates different information about a person/user of the network and their different user accounts on different third-party platforms, who is then associated with this particular network device and who also has a presence in third-party applications (e.g. an aggregate meta user ID). The device linking service 180 correlates the common unified network device identifier with the aggregate meta user ID for the user of that particular network device, under analysis, to form the meta entity identifier. In an embodiment, the meta entity identifier can be created by the prediction engine 702. Either way, the meta entity identifier is used to improve the ability to replicate a logical and reasonable attack path by unifying all of the routes of the user presence, compared to considering each different device identifier from its source separately.
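Forming the meta entity identifier from the unified network device identifier and the aggregate meta user ID can be sketched as a simple composite record. The field names, identifier format, and example values are hypothetical.

```python
def make_meta_entity(unified_device_id: str, meta_user_id: str,
                     device_ids: list, user_accounts: list) -> dict:
    """Correlate a unified device identifier with an aggregate meta user ID
    to form one meta entity record."""
    return {
        # Composite identifier joining the user side and the device side.
        "meta_entity_id": f"{meta_user_id}::{unified_device_id}",
        "unified_device_id": unified_device_id,  # composite of all device IDs
        "meta_user_id": meta_user_id,            # composite of all user accounts
        "device_ids": device_ids,                # e.g. dids from different sources
        "user_accounts": user_accounts,          # e.g. email, SaaS, AD accounts
    }

meta = make_meta_entity(
    "unified-WS192", "ben.ash",
    ["did1", "did2", "did3"],
    ["ben.ash@holdingsinc.com", "ash.b (Salesforce)"],
)
```

The composite record keeps every constituent identifier so downstream consumers (the Detect engine, the prediction engine) can still resolve any single source ID back to the meta entity.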
  • The device linking service 180 can passively monitor the data streams from the different sources having access into the network as well as actively query third-party platforms to gather and ingest device data, user data, and activity data from various third-party vendors, then analyze the ingested data, and then pass the ingested data on the meta entity (e.g. the aggregate user IDs and device IDs) into the prediction engine 702 to perform the (attack path modeling) simulation of attack paths for the network that the cyber threat may take. The prediction engine 702 now knows all of the hypothetically possible paths because all of the known nodes that the meta entity touches/affects are identified, instead of just the paths directly related to one of the user identifiers or one of the network device identifiers making up the meta entity. Note, the device linking service 180 in the cyber security appliance 100 can use the gathering module, the datastore, and the I/O ports to actively pull data/query data from and passively monitor data from the multiple third-party sources to obtain the stream data needed to match the network device identifiers across different networks.
  • Next, the device linking service 180 can maintain data from the data streams and other sources of data in their generic format as well as put relevant data into a uniform analysis format in a central data store 192 via translation and mapping (e.g. applying string matching/fuzzy logic), with the central data store 192 storing data points organized by how they relate to another data point and storing the relevant data in the uniform analysis format. The device linking service 180 can 1) apply at least one of string matching and fuzzy logic to cross-reference information from the different sources of access into the network as well as 2) use the central data store 192 to store data points organized by how the data points relate to another data point.
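A central data store that keeps data points in a uniform format and organized by their relations to other data points might look like the minimal sketch below. The class name, schema, and example records are assumptions for illustration.

```python
class CentralDataStore:
    """Minimal relational store: uniform-format records plus a bidirectional
    map of how each data point relates to other data points."""

    def __init__(self):
        self.records = {}    # record id -> data point in uniform analysis format
        self.relations = {}  # record id -> set of related record ids

    def put(self, rec_id: str, data: dict):
        self.records[rec_id] = data
        self.relations.setdefault(rec_id, set())

    def relate(self, a: str, b: str):
        """Store the relation in both directions so linked data points can be
        walked from either side."""
        self.relations.setdefault(a, set()).add(b)
        self.relations.setdefault(b, set()).add(a)

    def related(self, rec_id: str) -> set:
        return self.relations.get(rec_id, set())

store = CentralDataStore()
store.put("did1", {"source": "vpn", "hostname": "WS192"})
store.put("ben.ash", {"source": "email", "account": "ben.ash@holdingsinc.com"})
store.relate("did1", "ben.ash")  # link a device data point to a user data point
```

Storing both directions of each relation is what lets a consumer start from either the device side or the user side of a meta entity and reach the other.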
  • The device linking service 180 supplies the meta entity identifier information to the prediction engine 702 for attack path modeling, in order to have better modeling of device behavior, as well as to the Detect engine for modeling the pattern of life. The prediction engine 702 for attack path modeling then modifies both the network node exposure score as well as its weakness score, which are utilized to determine a particular cyber threat path into and then through the network in a simulated cyber-attack on that network. The device linking service 180 can use the results of any of the above steps/processes to make use of that information for better pattern of life tracking, as well as better modeling by the prediction engine 702 of the possible route that a cyber-attack may be able to take through a network.
  • The cyber security appliance 100 can have an autonomous response module 140 to autonomously respond to mitigate the cyber threat as well as to cooperate with the prediction engine 702 in order to determine how to properly autonomously respond to a cyber-attack by the cyber threat based upon simulations run in the prediction engine 702 modeling the attack paths into and through the network. The prediction engine 702 runs many, many simulations of attack paths for the network that each cyber threat may take.
  • The device linking service 180 allows the monitoring of the behavior (traffic and activity of that network device) across the many different sources of information, some from third-party platforms, and then allows autonomous action to be taken on the network device in each of those third-party platforms and on the network entity as a group, rather than individually applying the action to each one.
  • Next, on the righthand side, the device linking service 180 can cooperate with a firewall configuration ingester 176 and the prediction engine 702. The prediction engine 702 can combine all of the paths into and through the network taken by the monitored traffic with the possible paths through the network theoretically possible in accordance with the firewall rules from the firewall configuration ingester 176, in light of the unified network device identifier linked with a user entity in the network from the device linking service 180, to determine possible attack paths when running the simulation of attack paths for the network that the cyber threat may take. The firewall configuration ingester 176 requests current firewall configurations for points of ingress into the network. The firewall configuration ingester 176 queries for firewall configuration rules. The firewall configuration ingester 176 can examine firewall rules implemented by a firewall to identify routes into the organization's network allowed by the current firewall rules and supply the prediction engine 702 with the possible routes that a cyber-attack by the cyber threat may take into the network and permitted reasons into the network. The firewall configuration ingester 176 can ingest firewall rules to determine theoretically possible paths through the network in accordance with the firewall rules and a mapping of nodes of the network. The firewall configuration ingester 176 also analyzes possible routes into the network based upon the configuration. For example, the firewall configuration ingester 176 analyzes possible routes between subnets allowed by the firewall configuration rules. The firewall configuration ingester 176 then determines all of the hypothetically possible paths/routes through the network based on these two factors.
The firewall configuration ingester 176 can then pass the hypothetically possible paths/routes through the network to the attack path modeling component in the prediction engine 702. Alternatively, the firewall configuration ingester 176 determines all of the hypothetically possible paths/routes through the network, which can be analyzed together with the actual paths/routes that have actually been used previously in the network (which have been detected through previous network traffic and activities tracked in this network). The attack path modeling component in the cyber-attack simulator could also determine all of the hypothetically possible paths/routes through the network, which can be analyzed together with the actual paths/routes that have actually been used previously in the network (derived from the monitored network traffic).
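Ingesting firewall rules and enumerating the theoretically possible routes between subnets can be sketched as a path search over the allowed edges. The rule format, subnet names, and ports are illustrative assumptions, not an actual firewall configuration schema.

```python
from collections import deque

# Hypothetical ingested firewall rules: the direct internet-to-database route
# is denied, but a chain of allow rules still permits a multi-hop path.
firewall_rules = [
    {"action": "allow", "src": "internet",   "dst": "dmz"},
    {"action": "allow", "src": "dmz",        "dst": "app_subnet"},
    {"action": "allow", "src": "app_subnet", "dst": "db_subnet"},
    {"action": "deny",  "src": "internet",   "dst": "db_subnet"},
]

def possible_paths(rules, start, goal):
    """Breadth-first enumeration of routes permitted by the 'allow' rules."""
    allowed = {}
    for r in rules:
        if r["action"] == "allow":
            allowed.setdefault(r["src"], []).append(r["dst"])
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            paths.append(path)
            continue
        for nxt in allowed.get(path[-1], []):
            if nxt not in path:  # avoid cycles
                queue.append(path + [nxt])
    return paths

routes = possible_paths(firewall_rules, "internet", "db_subnet")
# The deny rule blocks the direct route, yet a hypothetically possible attack
# path remains through the DMZ and the application subnet.
```

These enumerated routes are the kind of hypothetically possible paths the ingester would hand to the attack path modeling component, to be intersected with paths actually observed in traffic.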
  • Next, the firewall configuration ingester 176 looks at current firewall configurations for points of ingress in the network and identifies changes over time to the firewall configurations causing new attack path modeling routes. The firewall configuration ingester 176 in the cyber security appliance 100 can look at current firewall configurations and their settings for points of ingress into the network and identify changes to the firewall configurations that cause new attack path modeling routes. A number of firewall integrations require the firewall configuration ingester 176 to request the list of all the current firewall rules that the Artificial Intelligence based cyber security appliance 100 needs to know. The firewall configuration ingester 176 identifies changes to the firewall configurations by modeling the firewall configuration rules over time, and then looking at time series data for the firewall configuration rules. The Artificial Intelligence based cyber security appliance 100 can utilize this information to identify “true” routes into the organization for the prediction engine 702. This gives the Artificial Intelligence an awareness of ways/routes into the network and permitted reasons into the network. The Artificial Intelligence based cyber security appliance 100 can also model the changes in these rules over time to detect unusual rules. For example, the time model can keep track of those rules over time and, for example, look for spikes in things that now have gotten access and/or have an unusual access.
The firewall configuration ingester 176 can examine firewall configurations and their settings for points of ingress in the network, identify changes over time to the firewall configurations that cause new attack path modeling routes into the network, supply this information to the prediction engine 702 on the possible routes that a cyber-attack may be able to take to progress into the network, and also model the changes in these rules over time to detect unusual rules. The Artificial Intelligence model of the firewall configuration ingester 176 receives firewall rules (and their related and/or associated data) as another form of time series data that the modules and models check for anomalies, including when a new route into the network appears as new network boundary equipment becomes available. The firewall configuration ingester 176 also ensures that the information and the relevant details are fed to a restoration component (e.g. modules scripted to restore network devices and the network to a configuration they were in before the cyber-attack occurred, without additional human input needed to perform the restoration) as well as the prediction engine 702. The firewall configuration ingester 176 uses this information to determine many things, for example, that the network has a new route into it, that a new externally exposed component is communicating into the network, or that a network component is externally exposed to attacks that the network operators were previously unaware of. As a remediation action, the Artificial Intelligence based cyber security appliance 100 can create firewall rules that prevent external network access to compromised or recently “healed” devices (e.g. restored to a configuration before the cyber-attack) to halt any connectivity. The modules can also feed routine and anomalous ingress data to the prediction engine 702 (a simulator, virtual network generator, etc. configured to run attack scenarios and feed back the results of those attack scenarios) and also flag detected anomalies and errant configurations that might be vulnerable to a cyber threat (e.g. malicious actor) letting unauthorized data in or out of the network. Thus, the firewall configuration ingester can look at and use legacy data that the security appliance already has access to, for example, data from legacy systems. By analyzing the data in light of firewall configurations, the modules and model are able to determine whether a new path for an attack into or out of the network has come into existence.
  • In addition, the firewall configuration ingester 176 pulls in the firewall configuration rules to combine i) actually detected network paths through the network with ii) information on what paths are hypothetically possible based on the firewall configuration rules for better attack path modeling. The node exposure and weakness scores are changed to factor in the hypothetical paths with the actual detected paths and then to factor in the composite meta entity analysis (common unified network device identifier plus the common user entity identifier).
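One way the node exposure scoring described above could factor hypothetical paths in alongside actual detected paths is sketched below. The weighting scheme (observed paths weigh more than merely permitted ones) and the weight values are illustrative assumptions:

```python
def exposure_score(node, actual_paths, hypothetical_paths,
                   w_actual=1.0, w_hypothetical=0.5):
    """Score a node's exposure: paths actually observed in traffic weigh
    more than routes that are merely permitted by the firewall rules."""
    actual = sum(1 for p in actual_paths if node in p)
    hypo = sum(1 for p in hypothetical_paths
               if node in p and p not in actual_paths)
    return w_actual * actual + w_hypothetical * hypo

actual = [["internet", "dmz", "app"]]
hypo = [["internet", "dmz", "app"], ["internet", "dmz", "app", "db"]]
score_db = exposure_score("db", actual, hypo)    # reachable only hypothetically
score_app = exposure_score("app", actual, hypo)  # on actual and hypothetical paths
```

A node that sits only on firewall-permitted but never-observed routes still accumulates exposure, so attack path modeling does not overlook it.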
  • FIG. 6 illustrates a block diagram of an embodiment of the AI based cyber security appliance 100 that protects a system, including but not limited to a network/domain, from cyber threats. Various Artificial Intelligence models and modules of the cyber security appliance 100 cooperate to protect one or more networks/domains under analysis from cyber threats. As shown, according to one embodiment of the disclosure, the AI-based cyber security appliance 100 may include a trigger module 105, a gather module 110, an analyzer module 115, a cyber threat analyst module 120, an assessment module 125, a formatting module 130, a data store 135, an autonomous response module 140, a first (1st) domain module 145, a second (2nd) domain module 150, a coordinator module 155, one or more AI models 160 (hereinafter, “AI model(s) 160”), and/or other modules. The AI model(s) 160 may be trained with machine learning on a normal pattern of life for entities in the network(s)/domain(s) under analysis, with machine learning on cyber threat hypotheses to form and investigate a cyber threat hypothesis on what are a possible set of cyber threats and their characteristics, symptoms, remediations, etc., and/or trained on possible cyber threats including their characteristics and symptoms.
  • The cyber security appliance 100 with the Artificial Intelligence (AI) based cyber security system may protect a network/domain from a cyber threat. In an embodiment, the cyber security appliance 100 can protect all of the devices (e.g., computing devices) on the network(s)/domain(s) being monitored by monitoring domain activity, including communications. For example, a network domain module (e.g., first domain module 145) may communicate with network sensors to monitor network traffic going to and from the computing devices on the network as well as receive secure communications from software agents embedded in host computing devices/containers. The steps below detail the activities and functions of several of the components in the cyber security appliance 100.
  • The gather module 110 may be configured with one or more process identifier classifiers. Each process identifier classifier may be configured to identify and track one or more processes and/or devices in the network under analysis making communication connections. The data store 135 cooperates with the process identifier classifier to collect and maintain historical data of processes and their connections, which is updated over time as the network is in operation. An individual process may be present in only one, or in several, of the domains being monitored. In an example, the process identifier classifier can identify each process running on a given device along with its endpoint connections, which are stored in the data store 135. In addition, a feature classifier can examine the data being analyzed and sort its features into different categories.
  • The analyzer module 115 can cooperate with the AI model(s) 160 or other modules in the cyber security appliance 100 to confirm a presence of a cyberattack against one or more domains in an enterprise's system (e.g., see system/enterprise network 50 of FIG. 11). A process identifier in the analyzer module 115 can cooperate with the gather module 110 to collect any additional data and metrics to support a possible cyber threat hypothesis. Similarly, the cyber threat analyst module 120 can cooperate with the internal data sources as well as external data sources to collect data in its investigation. More specifically, the cyber threat analyst module 120 can cooperate with the other modules and the AI model(s) 160 in the cyber security appliance 100 to conduct a long-term investigation and/or a more in-depth investigation of potential and emerging cyber threats directed to one or more domains in an enterprise's system. Herein, the cyber threat analyst module 120 and/or the analyzer module 115 can also monitor for other anomalies, such as model breaches, including, for example, deviations from a normal behavior of an entity, and other techniques discussed herein. As an illustrative example, the analyzer module 115 and/or the cyber threat analyst module 120 can cooperate with the AI model(s) 160 trained on potential cyber threats in order to assist in examining and factoring these additional data points that have occurred over a given timeframe to see if a correlation exists between 1) a series of two or more anomalies occurring within that time frame and 2) possible known and unknown cyber threats.
  • According to one embodiment of the disclosure, the cyber threat analyst module 120 allows two levels of investigation of a cyber threat that may suggest a potential impending cyberattack. In a first level of investigation, the analyzer module 115 and AI model(s) 160 can rapidly detect, and then the autonomous response module 140 will autonomously respond to, overt and obvious cyberattacks. However, thousands to millions of low level anomalies occur in a domain under analysis all of the time; thus, most other systems need to set the threshold for trying to detect a cyberattack at a level higher than the low level anomalies examined by the cyber threat analyst module 120, both to avoid too many false positive indications of a cyberattack when one is not actually occurring and to avoid overwhelming a human cyber security analyst with so many notifications of low level anomalies that they start tuning out those alerts. However, advanced persistent threats attempt to avoid detection by making these low-level anomalies in the system over time during their cyberattack before making their final coup de grace/ultimate mortal blow against the system (e.g., domain) being protected. The cyber threat analyst module 120 also conducts a second level of investigation over time, with the assistance of the AI model(s) 160 trained with machine learning on how to form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis, that can detect these advanced persistent cyber threats actively trying to avoid detection by looking at one or more of these low-level anomalies as a part of a chain of linked information.
  • Note, a data analysis process can be algorithms/scripts written by humans to perform their function discussed herein and can in various cases use AI classifiers as part of their operation. The cyber threat analyst module 120, in conjunction with the AI model(s) 160 trained with machine learning on how to form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis, forms and investigates hypotheses on what are a possible set of cyber threats. The cyber threat analyst module 120 can also cooperate with the analyzer module 115 with its one or more data analysis processes to conduct an investigation on a possible set of cyber threat hypotheses that would include an anomaly of at least one of i) the abnormal behavior, ii) the suspicious activity, and iii) any combination of both, identified through cooperation with, for example, the AI model(s) 160 trained with machine learning on the normal pattern of life of entities in the system. For example, as shown in FIG. 4, the cyber threat analyst module 120 may perform several additional rounds 400 of gathering additional information, including abnormal behavior, over a period of time, in this example examining data over a 7-day period to determine causal links between the information. The cyber threat analyst module 120 may check and recheck various combinations/a chain of potentially related information, including abnormal behavior of a device/user account under analysis, until each of the one or more hypotheses on potential cyber threats is one of 1) refuted, 2) supported, or 3) included in a report that includes details of activities assessed to be relevant to the anomaly of interest to the user and that also conveys that this particular hypothesis was neither supported nor refuted.
For this embodiment, a human cyber security analyst is needed to further investigate the anomaly (and/or anomalies) of interest included in the chain of potentially related information.
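The rounds of evidence gathering described above, which continue until a hypothesis is supported, refuted, or handed to a human analyst, can be sketched minimally as follows. The evidence thresholds, the 7-round window, and the toy daily signals are illustrative assumptions:

```python
def investigate(hypothesis, gather_round, support_threshold=3,
                refute_threshold=-3, max_rounds=7):
    """Run repeated rounds of evidence gathering for one cyber threat
    hypothesis until it is supported, refuted, or left for human review."""
    evidence = 0
    for day in range(max_rounds):
        evidence += gather_round(hypothesis, day)
        if evidence >= support_threshold:
            return "supported"
        if evidence <= refute_threshold:
            return "refuted"
    return "needs human review"

# Toy evidence source: small low-level anomalies accumulate across a
# 7-day window until the chain of linked information supports the hypothesis.
daily_signals = [1, 0, 1, 0, 1, 1, 0]
result = investigate("lateral movement", lambda h, d: daily_signals[d])
```

Individually weak signals (each well below any alerting threshold) accumulate over the window, mirroring how chained low-level indicators can support a hypothesis that no single anomaly would.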
  • Returning to FIG. 6, an input from the cyber threat analyst module 120 of a supported hypothesis of a potential cyber threat will trigger the analyzer module 115 to compare, confirm, and send a signal to act upon and mitigate that cyber threat. In contrast, the cyber threat analyst module 120 investigates subtle indicators and/or initially seemingly isolated unusual or suspicious activity, such as a worker logging in after their normal working hours or a simple system misconfiguration. Most of the investigations conducted by the cyber threat analyst module 120 cooperating with the AI model(s) 160 trained with machine learning on how to form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis on unusual or suspicious activities/behavior may not result in a cyber threat hypothesis that is supported; rather, most are refuted or simply not supported. Typically, during the investigations, several rounds of data gathering to support or refute the long list of potential cyber threat hypotheses formed by the cyber threat analyst module 120 will occur before the algorithms in the cyber threat analyst module 120 will determine whether a particular cyber threat hypothesis is supported, refuted, or needs further investigation by a human. The rounds of data gathering may build chains of linked low-level indicators of unusual activity along with potential activities that could be within a normal pattern of life for that entity to evaluate the whole chain of activities to support or refute each potential cyber threat hypothesis formed. (See again, for example, FIG. 4 and a chain of linked low-level indicators, including abnormal behavior compared to the normal pattern of life for that entity, all under a score of 50 on a threat indicator score.)
The investigations by the cyber threat analyst module 120 can happen over a relatively long period of time and be far more in depth than the analyzer module 115 which will work with the other modules and AI model(s) 160 to confirm that a cyber threat has in fact been detected.
  • The gather module 110 may further extract data from the data store 135 at the request of the cyber threat analyst module 120 and/or analyzer module 115 on each possible hypothetical threat that would include the abnormal behavior or suspicious activity and then can assist to filter that collection of data down to relevant points of data to either 1) support or 2) refute each particular hypothesis of what the cyber threat, the suspicious activity and/or abnormal behavior relates to. The gather module 110 cooperates with the cyber threat analyst module 120 and/or analyzer module 115 to collect data to support or to refute each of the one or more possible cyber threat hypotheses that could include this abnormal behavior or suspicious activity by cooperating with one or more of the cyber threat hypotheses mechanisms to form and investigate hypotheses on what are a possible set of cyber threats.
  • Thus, the cyber threat analyst module 120 is configured to cooperate with the AI model(s) 160 trained with machine learning on how to form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis to form and investigate hypotheses on what are a possible set of cyber threats and then can cooperate with the analyzer module 115 with the one or more data analysis processes to confirm the results of the investigation on the possible set of cyber threats hypotheses that would include the at least one of i) the abnormal behavior, ii) the suspicious activity, and iii) any combination of both, identified through cooperation with the AI model(s) 160 trained with machine learning on the normal pattern of life/normal behavior of entities in the domains under analysis.
  • Note, in the first level of threat detection, the gather module 110 and the analyzer module 115 cooperate to supply any data and/or metrics requested by the analyzer module 115 cooperating with the AI model(s) 160 trained on possible cyber threats to support or rebut each possible type of cyber threat. Again, the analyzer module 115 can cooperate with the AI model(s) 160 and/or other modules to rapidly detect and then cooperate with the autonomous response module 140 to autonomously respond to overt and obvious cyberattacks, (including ones found to be supported by the cyber threat analyst module 120).
  • As a starting point, the AI-based cyber security appliance 100 can use multiple modules, each capable of identifying abnormal behavior and/or suspicious activity against the AI model(s) 160 of normal pattern of life for the entities in the network/domain under analysis, which is supplied to the analyzer module 115 and/or the cyber threat analyst module 120. The analyzer module 115 and/or the cyber threat analyst module 120 may also receive other inputs, such as AI model breaches, AI classifier breaches, etc., as well as a trigger to start an investigation from an external source.
  • Many other model breaches of the AI model(s) 160 trained with machine learning on the normal behavior of the system can send an input into the cyber threat analyst module 120 and/or the trigger module 105 to trigger an investigation to start the formation of one or more hypotheses on what are a possible set of cyber threats that could include the initially identified abnormal behavior and/or suspicious activity. Note, a deeper analysis can look at example factors such as i) how long the endpoint has existed or been registered; ii) what kind of certificate the communication is using; iii) whether the endpoint is on a known good domain, a known bad domain, or an unknown domain, and if unknown, what other information exists such as the registrant's name and/or country; iv) how rare the endpoint is; etc.
  • Note, the cyber threat analyst module 120 cooperating with the AI model(s) 160 trained with machine learning on how to form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis in the AI-based cyber security appliance 100 provides an advantage as it reduces the time taken for human led or cyber security investigations, provides an alternative to manpower for small organizations and improves detection (and remediation) capabilities within the cyber security appliance 100.
  • The cyber threat analyst module 120, which forms and investigates hypotheses on what are the possible set of cyber threats, can use hypotheses mechanisms including any of 1) one or more of the AI model(s) 160 trained on how human cyber security analysts form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis that would include at least an anomaly of interest, 2) one or more scripts outlining how to conduct an investigation on a possible set of cyber threat hypotheses that would include at least the anomaly of interest, 3) one or more rules-based models on how to conduct an investigation on, and how to form, a possible set of cyber threat hypotheses that would include at least the anomaly of interest, and 4) any combination of these. Again, the AI model(s) 160 trained on ‘how to form cyber threat hypotheses and how to conduct investigations for a cyber threat hypothesis’ may use supervised machine learning on human-led cyber threat investigations and the steps, data, metrics, and metadata on how to support or to refute a plurality of the possible cyber threat hypotheses, and then the scripts and rules-based models will include the steps, data, metrics, and metadata on how to support or to refute the plurality of the possible cyber threat hypotheses. The cyber threat analyst module 120 and/or the analyzer module 115 can feed the cyber threat details to the assessment module 125 to generate a threat risk score that indicates a level of severity of the cyber threat.
  • According to one embodiment of the disclosure, the assessment module 125 can cooperate with the AI model(s) 160 trained on possible cyber threats to use AI algorithms to identify actual cyber threats and generate threat risk scores based on both the level of confidence that the cyber threat is a viable threat and the severity of the cyber threat (e.g., the attack type, where a ransomware attack has greater severity than a phishing attack; the degree of infection; the computing devices likely to be targeted, etc.). The threat risk scores can be used to rank alerts that may be directed to enterprise or computing device administrators. This risk assessment and ranking is conducted to avoid frequent “false positive” alerts that diminish the degree of reliance/confidence on the cyber security appliance 100.
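Scoring threats on both confidence and severity, and ranking alerts by the resulting score, can be sketched minimally as follows. The severity weights and the multiplicative combination are illustrative assumptions, not values from this disclosure:

```python
# Illustrative severity weights per attack type (assumed for this sketch).
SEVERITY = {"ransomware": 0.9, "data exfiltration": 0.8, "phishing": 0.4}

def threat_risk_score(confidence, attack_type):
    """Combine the confidence that a cyber threat is viable with the
    severity of that attack type into a single rankable score."""
    return round(confidence * SEVERITY.get(attack_type, 0.5), 3)

alerts = [
    {"id": 1, "confidence": 0.6, "type": "ransomware"},
    {"id": 2, "confidence": 0.9, "type": "phishing"},
]
# Rank alerts so the most urgent ones surface first.
ranked = sorted(alerts,
                key=lambda a: threat_risk_score(a["confidence"], a["type"]),
                reverse=True)
```

Note that a lower-confidence ransomware alert can outrank a higher-confidence phishing alert, since the score is not a simple binary malicious/benign output.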
  • Training of AI Pre Deployment and then During Deployment
  • In step 1, an initial training of the AI model trained on cyber threats can occur using unsupervised learning and/or supervised learning on characteristics and attributes of known potential cyber threats including malware, insider threats, and other kinds of cyber threats that can occur within that domain. Each Artificial Intelligence can be programmed and configured with the background information to understand and handle the particulars, including different types of data, protocols used, types of devices, user accounts, etc., of the system being protected. The Artificial Intelligence pre-deployment can all be trained on the specific machine learning task that they will perform when put into deployment. For example, the AI model, such as the AI model(s) 160, trained on identifying a specific cyber threat learns in the pre-deployment training at least both i) the characteristics and attributes of known potential cyber threats as well as ii) a set of characteristics and attributes of each category of potential cyber threats and their weights assigned on how indicative certain characteristics and attributes correlate to potential cyber threats of that category of threats.
  • In this example, one of the AI model(s) 160 trained on identifying a specific cyber threat can be trained with machine learning such as Linear Regression, Regression Trees, Non-Linear Regression, Bayesian Linear Regression, Deep learning, etc. to learn and understand the characteristics and attributes in that category of cyber threats. Later, when in deployment in a domain/network being protected by the cyber security appliance 100, the AI model trained on cyber threats can determine whether a potentially unknown threat has been detected via a number of techniques including an overlap of some of the same characteristics and attributes in that category of cyber threats. The AI model may use unsupervised learning when deployed to better learn newer and updated characteristics of cyberattacks.
  • In an embodiment, one or more of the AI models 160 may be trained on a normal pattern of life of entities in the system and may be self-learning AI models using unsupervised machine learning and machine learning algorithms to analyze patterns and ‘learn’ what is the ‘normal behavior’ of the network by analyzing data on the activity on, for example, the network level, at the device level, and at the employee level. The self-learning AI model using unsupervised machine learning understands the normal patterns of life of the system under analysis in, for example, a week of being deployed on that system, and grows more bespoke with every passing minute. The AI unsupervised learning model learns patterns from the features in the day-to-day dataset and detects abnormal data which would not have fallen into the category (cluster) of normal behavior. The self-learning AI model using unsupervised machine learning can simply be placed into an observation mode for an initial week or two when first deployed on a network/domain in order to establish an initial normal behavior for entities in the network/domain under analysis.
  • A deployed AI model trained on a normal pattern of life of entities in the system can be configured to observe the nodes in the system being protected. Training on a normal behavior of entities in the system can occur while monitoring for the first week or two until enough data has been observed to establish a statistically reliable set of normal operations for each node (e.g., user account, device, etc.). Initial training of one or more of the AI models 160 of FIG. 6 trained with machine learning on a behavior of the pattern of life of the entities in the network/domain can occur where each type of network and/or domain will generally have some common typical behavior, with each model trained specifically to understand the components/devices, protocols, activity level, etc. of that type of network/system/domain. Alternatively, pre-deployment machine learning training of the AI model(s) 160 of FIG. 6 trained on a normal pattern of life of entities in the system can occur. What is normal behavior of each entity within that system can be established either prior to deployment and then adjusted during deployment, or alternatively the model can simply be placed into an observation mode for an initial week or two when first deployed on a network/domain in order to establish an initial normal behavior for entities in the network/domain under analysis.
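Establishing a per-entity baseline from an observation window and then flagging deviations can be sketched minimally as follows. The simple z-score test is an illustrative stand-in for the unsupervised models described in this disclosure, and the entity names and threshold are assumptions:

```python
import statistics

class NormalPatternModel:
    """Learn per-entity baselines from an observation window, then flag
    values far outside the learned normal range (a z-score stand-in for
    the unsupervised pattern-of-life models)."""
    def __init__(self, z_threshold=3.0):
        self.z = z_threshold
        self.history = {}

    def observe(self, entity, value):
        # Accumulate observations during the initial monitoring period.
        self.history.setdefault(entity, []).append(value)

    def is_anomalous(self, entity, value):
        obs = self.history.get(entity, [])
        if len(obs) < 2:
            return False  # not enough data yet to judge reliably
        mean, stdev = statistics.mean(obs), statistics.stdev(obs)
        return stdev > 0 and abs(value - mean) / stdev > self.z

model = NormalPatternModel()
for mb in [10, 12, 11, 9, 10, 11, 10]:  # a week of daily upload volumes (MB)
    model.observe("laptop-42", mb)
normal = model.is_anomalous("laptop-42", 11)   # within the learned pattern
spike = model.is_anomalous("laptop-42", 500)   # far outside it
```

In deployment, the baseline would keep updating with new observations, so "normal" shifts as the entity's behavior legitimately changes.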
  • During deployment, what is considered normal behavior will change as each different entity's behavior changes, and this will be reflected through the use of unsupervised learning in the model such as various Bayesian techniques, clustering, etc. The AI models 160 can be implemented with various mechanisms such as neural networks, decision trees, etc. and combinations of these. Likewise, one or more supervised machine learning AI models 160 may be trained to create possible hypotheses and perform cyber threat investigations on agnostic examples of past historical incidents of detecting a multitude of possible types of cyber threat hypotheses previously analyzed by human cyber security analysts. More on the training of the AI models 160 to create one or more possible hypotheses and perform cyber threat investigations will be discussed later.
  • At its core, the self-learning AI models 160 that model the normal behavior (e.g. a normal pattern of life) of entities in the network mathematically characterizes what constitutes ‘normal’ behavior, based on the analysis of a large number of different measures of a device's network behavior—packet traffic and network activity/processes including server access, data volumes, timings of events, credential use, connection type, volume, and directionality of, for example, uploads/downloads into the network, file type, packet intention, admin activity, resource and information requests, command sent, etc.
  • Clustering Methods
  • In order to model what should be considered as normal for a device or cloud container, its behavior can be analyzed in the context of other similar entities on the network. The AI models (e.g., AI model(s) 160) can use unsupervised machine learning to algorithmically identify significant groupings, a task which is virtually impossible to do manually. To create a holistic image of the relationships within the network, the AI models and AI classifiers employ a number of different clustering methods, including matrix-based clustering, density-based clustering, and hierarchical clustering techniques. The resulting clusters can then be used, for example, to inform the modeling of the normative behaviors and/or similar groupings.
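A minimal sketch of one of the clustering families named above (density-based clustering, in the spirit of DBSCAN) is shown below; the point coordinates, `eps`, and `min_pts` values are illustrative assumptions, and matrix-based and hierarchical clustering would be separate implementations:

```python
def density_clusters(points, eps=1.5, min_pts=2):
    """Tiny density-based grouping: points with enough close neighbours
    coalesce into a cluster; isolated points are labelled -1 (outliers)."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    labels, cluster = {}, 0
    for p in points:
        if p in labels:
            continue
        neighbours = [q for q in points if dist(p, q) <= eps]
        if len(neighbours) < min_pts:
            labels[p] = -1  # too sparse: an outlier
            continue
        cluster += 1
        stack = list(neighbours)
        while stack:  # grow the cluster through density-reachable points
            q = stack.pop()
            if labels.get(q, -1) > 0:
                continue  # already assigned to a cluster
            labels[q] = cluster
            more = [r for r in points if dist(q, r) <= eps]
            if len(more) >= min_pts:
                stack.extend(r for r in more if r not in labels)
    return labels

# Two behavioural groupings of devices plus one clear outlier.
pts = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (20, 20)]
labels = density_clusters(pts)
```

The resulting cluster labels could then inform which peer group an entity's behavior should be compared against when modeling normative behavior.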
  • The AI models and AI classifiers can employ a large-scale computational approach to understand sparse structure in models of network connectivity based on applying L1-regularization techniques (the lasso method). This allows the artificial intelligence to discover true associations between different elements of a network which can be cast as efficiently solvable convex optimization problems and yield parsimonious models. Various mathematical approaches assist.
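The L1-regularization (lasso) idea referenced above can be illustrated with a minimal coordinate-descent sketch: the penalty drives weak coefficients to exactly zero, leaving a parsimonious model of which inputs are truly associated with the output. The toy data and the regularization strength are assumptions:

```python
def lasso(X, y, lam, iters=200):
    """Coordinate-descent lasso (L1-regularised least squares): shrinks
    weak associations to exactly zero, yielding a sparse model."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Partial residual excluding feature j's current contribution.
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            # Soft-thresholding: small correlations collapse to zero.
            if rho < -lam:
                w[j] = (rho + lam) / z
            elif rho > lam:
                w[j] = (rho - lam) / z
            else:
                w[j] = 0.0
    return w

# y depends strongly on feature 0 and not at all on feature 1 (noise).
X = [[1, 0.1], [2, -0.2], [3, 0.15], [4, -0.1], [5, 0.05]]
y = [2.0, 4.1, 5.9, 8.0, 10.1]
w = lasso(X, y, lam=1.0)
```

The noise feature's coefficient is shrunk exactly to zero, analogous to discovering the true (sparse) associations between elements of a network.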
  • Next, one or more supervised machine learning AI models are trained to create possible hypotheses and to perform cyber threat investigations on agnostic examples of past historical incidents of detecting a multitude of possible types of cyber threat hypotheses previously analyzed by human cyber security analysts. AI models trained on forming and investigating hypotheses on what are a possible set of cyber threats can be trained initially with supervised learning. Thus, these AI models can be trained on how to form and investigate hypotheses on what are a possible set of cyber threats and the steps to take in supporting or refuting hypotheses. The AI models trained on forming and investigating hypotheses are updated with unsupervised machine learning algorithms when correctly supporting or refuting the hypotheses, including what additional collected data proved to be the most useful.
  • Next, the various AI models and AI classifiers combine use of unsupervised and supervised machine learning to learn ‘on the job’; they do not depend solely upon knowledge of previous cyberattacks. The AI models and classifiers combining use of unsupervised and supervised machine learning constantly revise assumptions about behavior, using probabilistic mathematics, and are always up to date on what current normal behavior is, not solely reliant on human input. The AI models and classifiers combining use of unsupervised and supervised machine learning on cyber security are capable of seeing hitherto undiscovered cyber events, from a variety of threat sources, which would otherwise have gone unnoticed.
  • Next, these cyber threats can include, for example: insider threat (malicious or accidental); zero-day attacks (previously unseen, novel exploits); latent vulnerabilities; machine-speed attacks (ransomware and other automated attacks that propagate and/or mutate very quickly); cloud and SaaS-based attacks; and other silent and stealthy attacks such as advanced persistent threats, advanced spear-phishing, etc.
  • Ranking the Cyber Threat
  • The assessment module 125 and/or cyber threat analyst module 120 of FIG. 6 can cooperate with the AI model(s) 160 trained on possible cyber threats to use AI algorithms to account for ambiguities by distinguishing between the subtly differing levels of evidence that characterize network data. Instead of generating the simple binary outputs ‘malicious’ or ‘benign,’ the AI's mathematical algorithms produce outputs marked with differing degrees of potential threat. This enables users of the system to rank alerts or notifications to the enterprise security administrator in a rigorous manner and prioritize those which most urgently require action. Meanwhile, it also assists to avoid the problem of numerous false positives associated with simply a rule-based approach.
  • As discussed in more detail above, the analyzer module 115 and/or cyber threat analyst module 120 can cooperate with the one or more unsupervised AI (machine learning) model 160 trained on the normal pattern of life/normal behavior in order to perform anomaly detection against the actual normal pattern of life for that system to determine whether an anomaly (e.g., the identified abnormal behavior and/or suspicious activity) is malicious or benign. In the operation of the cyber security appliance 100, the emerging cyber threat can be previously unknown, but the emerging threat landscape data 170 representative of the emerging cyber threat shares enough (or does not share enough) in common with the traits from the AI models 160 trained on cyber threats to now be identified as malicious or benign. Note, if later confirmed as malicious, then the AI models 160 trained with machine learning on possible cyber threats can update their training. Likewise, as the cyber security appliance 100 continues to operate, then the one or more AI models trained on a normal pattern of life for each of the entities in the system can be updated and trained with unsupervised machine learning algorithms. The analyzer module 115 can use any number of data analysis processes (discussed more in detail below and including the agent analyzer data analysis process here) to help obtain system data points so that this data can be fed and compared to the one or more AI models trained on a normal pattern of life, as well as the one or more machine learning models trained on potential cyber threats, as well as create and store data points with the connection fingerprints.
  • The AI model(s) 160 of FIGS. 1 and 3 can continually learn and train with unsupervised machine learning algorithms on an ongoing basis when deployed in the system that the cyber security appliance 100 is protecting. Thus, they learn and train on what is normal behavior for each user, each device, and the system overall, lowering the threshold of what constitutes an anomaly.
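The continual, per-entity learning described above can be sketched as a running statistical profile that is updated with each new observation. This is an illustrative simplification under assumed names and metrics (the class, the single upload-volume metric, and the two-standard-deviation rule are not from the source); the appliance's actual models are far richer.

```python
import math

class RunningProfile:
    """Illustrative running profile of 'normal' behavior for one entity
    (a user or a device), updated continually as new observations arrive,
    using Welford's online algorithm for mean and variance."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def update(self, value: float) -> None:
        # Fold each new metric sample into the learned profile incrementally.
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)

    def std(self) -> float:
        return math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0

    def is_anomalous(self, value: float, k: float = 2.0) -> bool:
        # Flag values more than k standard deviations from the learned mean.
        if self.n < 2:
            return False  # not enough history yet to judge
        return abs(value - self.mean) > k * self.std()

profile = RunningProfile()
for mb in [10, 12, 11, 13, 12, 11, 10, 12]:   # hypothetical daily upload volumes (MB)
    profile.update(mb)
print(profile.is_anomalous(11))   # within the normal range -> False
print(profile.is_anomalous(500))  # large deviation from normal -> True
```

Because the profile keeps updating as the system runs, what counts as an anomaly tightens or loosens with the entity's actual behavior rather than with a hard-coded limit.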
  • Anomaly Detection/Deviations
  • Anomaly detection can discover unusual data points in a dataset. ‘Anomaly’ can be a synonym for the word ‘outlier.’ Anomaly detection (or outlier detection) is the identification of rare items, events, or observations which raise suspicions by differing significantly from the majority of the data. Anomalous activities can be linked to some kind of problem or rare event. Since there are countless ways to mount a particular cyberattack, it is very difficult to have information about all of these attacks beforehand in a dataset. But, since the majority of the user activity and device activity in the system under analysis is normal, the system over time captures almost all of the ways which indicate normal behavior. And from the inclusion-exclusion principle, if an activity under scrutiny does not give indications of normal activity, the self-learning AI model using unsupervised machine learning can predict with high confidence that the given activity is anomalous. The unsupervised AI learning model learns patterns from the features in the day-to-day dataset and detects abnormal data which would not have fallen into the category (cluster) of normal behavior. The goal of the anomaly detection algorithm, through the data fed to it, is to learn the patterns of normal activity so that when an anomalous activity occurs, the modules can flag the anomalies through the inclusion-exclusion principle. The cyber threat module can perform its two-level analysis on anomalous behavior and determine correlations.
  • In an example, 95% of data in a normal distribution lies within two standard deviations from the mean. Since the likelihood of anomalies in general is very low, the modules cooperating with the AI model of normal behavior can say with high confidence that data points spread near the mean value are non-anomalous. And since the probability distribution values between the mean and two standard deviations are large enough, the modules cooperating with the AI model of normal behavior can set a value in this example range as a threshold (a parameter that can be tuned over time through the self-learning), where feature values with probability larger than this threshold indicate that the given feature's values are non-anomalous; otherwise, they are anomalous. Note, this anomaly detection can determine that a data point is anomalous/non-anomalous on the basis of a particular feature. In reality, the cyber security appliance 100 should not flag a data point as an anomaly based on a single feature. Only when a combination of all the probability values for all features for a given data point is calculated can the modules cooperating with the AI model of normal behavior say with high confidence whether a data point is an anomaly or not.
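The multi-feature threshold test described above can be sketched as follows. The per-feature Gaussian densities, the example feature parameters, and the specific threshold value are illustrative assumptions, not the appliance's actual model; the point is that the combined probability across all features, not any single feature, decides the flag.

```python
import math

def gaussian_pdf(x, mean, std):
    """Probability density of x under a normal distribution."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def is_anomaly(point, feature_params, threshold=1e-4):
    """Flag a data point only when the combined (product) probability
    across ALL features falls below a tunable threshold -- no single
    feature decides on its own."""
    p = 1.0
    for value, (mean, std) in zip(point, feature_params):
        p *= gaussian_pdf(value, mean, std)
    return p < threshold, p

# Learned per-feature 'normal' parameters: (mean, std) for, e.g.,
# bytes transferred and connections per hour (illustrative values).
params = [(100.0, 15.0), (20.0, 5.0)]

print(is_anomaly([105.0, 22.0], params))  # near both means -> not anomalous
print(is_anomaly([300.0, 90.0], params))  # far from both means -> anomalous
```

In practice the threshold would itself be tuned over time through the self-learning, as the passage above notes.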
  • Again, the AI models trained on a normal pattern of life of entities in a system (e.g., domain) under analysis may perform the cyber threat detection through a probabilistic change in a normal behavior through the application of, for example, an unsupervised Bayesian mathematical model to detect the behavioral change in computers and computer networks. The Bayesian probabilistic approach can determine periodicity in multiple time series data and identify changes across single and multiple time series data for the purpose of anomalous behavior detection. Please reference U.S. Pat. No. 10,701,093, granted Jun. 30, 2020, titled “Anomaly alert system for cyber threat detection,” for an example Bayesian probabilistic approach, which is incorporated by reference in its entirety. In addition, please reference US patent publication number US2021273958A1, filed Feb. 26, 2021, titled “Multi-stage anomaly detection for process chains in multi-host environments,” for another example anomalous behavior detector using a recurrent neural network and a bidirectional long short-term memory (LSTM), which is incorporated by reference in its entirety. In addition, please reference US patent publication number US2020244673A1, filed Apr. 23, 2019, titled “Multivariate network structure anomaly detector,” which is incorporated by reference in its entirety, for another example anomalous behavior detector with a Multivariate Network and Artificial Intelligence classifiers.
  • Next, as discussed further below, during pre-deployment the cyber threat analyst module 120 and the analyzer module 115 can use data analysis processes and cooperate with AI model(s) 160 trained on forming and investigating hypotheses on what are a possible set of cyber threats. In addition, another set of AI models can be trained on how to form and investigate hypotheses on what are a possible set of cyber threats and steps to take in supporting or refuting hypotheses. The AI models trained on forming and investigating hypotheses are updated with unsupervised machine learning algorithms when correctly supporting or refuting the hypotheses including what additional collected data proved to be the most useful.
  • Similarly, during deployment, the data analysis processes (discussed herein) used by the analyzer module 115 can use unsupervised machine learning to update the initial training learned during pre-deployment, and then update the training with unsupervised learning algorithms during the cyber security appliance's 100 deployment in the system being protected when various different steps to either i) support or ii) refute the possible set of cyber threats hypotheses worked better or worked worse.
  • The AI model(s) 160 trained on a normal pattern of life of entities in a domain under analysis may perform the threat detection through a probabilistic change in a normal behavior through the application of, for example, an unsupervised Bayesian mathematical model to detect a behavioral change in computers and computer networks. The Bayesian probabilistic approach can determine periodicity in multiple time series data and identify changes across single and multiple time series data for the purpose of anomalous behavior detection. In an example, a system being protected can include both email and IT network domains under analysis. Thus, email and IT network raw sources of data can be examined along with a large number of derived metrics that each produce time series data for the given metric.
  • Additional Module Interactions
  • Referring back to FIG. 6 , the gather module 110 cooperates with the data store 135. The data store 135 stores comprehensive logs for network traffic observed. These logs can be filtered with complex logical queries and each IP packet can be interrogated on a vast number of metrics in the network information stored in the data store. Similarly, other domain's communications and data, such as emails, logs, etc. may be collected and stored in the data store 135. The gather module 110 may consist of multiple automatic data gatherers that each look at different aspects of the data depending on the particular hypothesis formed for the analysed event. The data relevant to each type of possible hypothesis can be automatically pulled from additional external and internal sources. Some data is pulled or retrieved by the gather module 110 for each possible hypothesis.
  • The data store 135 can store the metrics and previous threat alerts associated with network traffic for a period of time, which is, by default, at least 27 days. This corpus of data is fully searchable. The cyber security appliance 100 works with network probes to monitor network traffic and store and record the data and metadata associated with the network traffic in the data store.
  • The gather module 110 may have a process identifier classifier. The process identifier classifier can identify and track each process and device in the network, under analysis, making communication connections. The data store 135 cooperates with the process identifier classifier to collect and maintain historical data of processes and their connections, which is updated over time as the network is in operation. In an example, the process identifier classifier can identify each process running on a given device along with its endpoint connections, which are stored in the data store. Similarly, data from any of the domains under analysis may be collected and compared.
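The process identifier classifier's bookkeeping of processes and their endpoint connections can be sketched as a simple historical map. The class and method names below are hypothetical illustrations of the tracking described above, not the appliance's actual data store schema.

```python
from collections import defaultdict
from datetime import datetime, timezone

class ProcessConnectionTracker:
    """Illustrative sketch of a process identifier classifier: maintains
    historical endpoint connections per (device, process) pair, updated
    over time as the network is in operation."""

    def __init__(self):
        # (device, process) -> {endpoint: [timestamps of connections]}
        self.history = defaultdict(lambda: defaultdict(list))

    def record(self, device, process, endpoint, when=None):
        # Log one observed connection for this process on this device.
        when = when or datetime.now(timezone.utc)
        self.history[(device, process)][endpoint].append(when)

    def known_endpoints(self, device, process):
        return set(self.history[(device, process)])

    def is_new_connection(self, device, process, endpoint):
        # 'New' means this process on this device has never contacted
        # that endpoint before -- a useful input for rarity analysis.
        return endpoint not in self.history[(device, process)]

tracker = ProcessConnectionTracker()
tracker.record("laptop-07", "outlook.exe", "mail.example.com:443")
print(tracker.is_new_connection("laptop-07", "outlook.exe", "mail.example.com:443"))  # False
print(tracker.is_new_connection("laptop-07", "outlook.exe", "203.0.113.9:8080"))      # True
```

The stored timestamps allow the same structure to answer "how old is this connection pattern" questions later in the analysis.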
  • Examples of domains/networks under analysis being protected can include any of i) an Informational Technology network, ii) an Operational Technology network, iii) a Cloud service, iv) a SaaS service, v) an endpoint device, vi) an email domain, and vii) any combinations of these. A domain module is constructed and coded to interact with and understand a specific domain.
  • For instance, the first domain module 145 may operate as an IT network module configured to receive information from and send information to, in this example, IT network-based sensors (i.e., probes, taps, etc.). The first domain module 145 also has algorithms and components configured to understand, in this example, IT network parameters, IT network protocols, IT network activity, and other IT network characteristics of the network under analysis. The second domain module 150 is, in this example, an email module. The second domain module 150 can be an email network module configured to receive information from and send information to, in this example, email-based sensors (i.e., probes, taps, etc.). The second domain module 150 also has algorithms and components configured to understand, in this example, email parameters, email protocols and formats, email activity, and other email characteristics of the network under analysis. Additional domain modules can also collect domain data from another respective domain.
  • The coordinator module 155 is configured to work with various machine learning algorithms and relational mechanisms to i) assess, ii) annotate, and/or iii) position in a vector diagram, a directed graph, a relational database, etc., activity including events occurring, for example, in the first domain compared to activity including events occurring in the second domain. The domain modules can cooperate to exchange and store their information with the data store.
  • The process identifier classifier (not shown) in the gather module 110 can cooperate with additional classifiers in each of the domain modules 145/150 to assist in tracking individual processes and associating them with entities in a domain under analysis as well as individual processes and how they relate to each other. The process identifier classifier can cooperate with other trained AI classifiers in the modules to supply useful metadata along with helping to make logical nexuses.
  • A feedback loop of cooperation exists between the gather module 110, the analyzer module 115, AI model(s) 160 trained on different aspects of this process, and the cyber threat analyst module 120 to gather information to determine whether a cyber threat is potentially attacking the networks/domains under analysis.
  • Determination of Whether Something is Likely Malicious
  • In the following examples, the analyzer module 115 and/or cyber threat analyst module 120 can use multiple factors in the determination of whether a process, event, object, entity, etc. is likely malicious.
  • In an example, the analyzer module 115 and/or cyber threat analyst module 120 can cooperate with one or more of the AI model(s) 160 trained on certain cyber threats to detect whether the anomalous activity detected, such as suspicious email messages, exhibit traits that may suggest a malicious intent, such as phishing links, scam language, sent from suspicious domains, etc. The analyzer module 115 and/or cyber threat analyst module 120 can also cooperate with one of more of the AI model(s) 160 trained on potential IT based cyber threats to detect whether the anomalous activity detected, such as suspicious IT links, URLs, domains, user activity, etc., may suggest a malicious intent as indicated by the AI models trained on potential IT based cyber threats.
  • In the above example, the analyzer module 115 and/or the cyber threat analyst module 120 can cooperate with the one or more AI models 160 trained with machine learning on the normal pattern of life for entities in an email domain under analysis to detect, in this example, anomalous emails which are detected as outside of the usual pattern of life for each entity, such as a user, email server, etc., of the email network/domain. Likewise, the analyzer module 115 and/or the cyber threat analyst module 120 can cooperate with the one or more AI models trained with machine learning on the normal pattern of life for entities in a second domain under analysis (in this example, an IT network) to detect, in this example, anomalous network activity by user and/or devices in the network, which is detected as outside of the usual pattern of life (e.g. abnormal) for each entity, such as a user or a device, of the second domain's network under analysis.
  • Thus, the analyzer module 115 and/or the cyber threat analyst module 120 can be configured with one or more data analysis processes to cooperate with the one or more of the AI model(s) 160 trained with machine learning on the normal pattern of life in the system, to identify an anomaly of at least one of i) the abnormal behavior, ii) the suspicious activity, and iii) the combination of both, from one or more entities in the system. Note, other sources, such as other model breaches, can also identify at least one of i) the abnormal behavior, ii) the suspicious activity, and iii) the combination of both to trigger the investigation.
  • Accordingly, during this cyber threat determination process, the analyzer module 115 and/or the cyber threat analyst module 120 can also use AI classifiers that look at the features and determine a potential maliciousness based on commonality or overlap with known characteristics of malicious processes/entities. Many factors including anomalies that include unusual and suspicious behavior, and other indicators of processes and events are examined by the one or more AI models 160 trained on potential cyber threats and/or the AI classifiers looking at specific features for their malicious nature in order to make a determination of whether an individual factor and/or whether a chain of anomalies is determined to be likely malicious.
  • Initially, in this example of activity in an IT network analysis, the rare JA3 hash and/or rare user agent connections for this network coming from a new or unusual process are factored in, just as suspicious wireless signals would be considered in a first wireless domain. These are quickly determined by referencing the one or more of the AI model(s) 160 trained with machine learning on the pattern of life of each device and its associated processes in the system. Next, the analyzer module 115 and/or the cyber threat analyst module 120 can have an external input to ingest threat intelligence from other devices in the network cooperating with the cyber security appliance 100. Next, the analyzer module 115 and/or the cyber threat analyst module 120 can look for other anomalies, such as model breaches, while the AI models trained on potential cyber threats can assist in examining and factoring other anomalies that have occurred over a given timeframe to see if a correlation exists between a series of two or more anomalies occurring within that time frame.
  • The analyzer module 115 and/or the cyber threat analyst module 120 can combine these Indicators of Compromise (e.g., unusual network JA3, unusual device JA3, . . . ) with many other weak indicators to detect the earliest signs of an emerging threat, including previously unknown threats, without using strict blacklists or hard-coded thresholds. However, the AI classifiers can also routinely look at blacklists, etc. to identify maliciousness of features looked at.
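The idea of combining many weak Indicators of Compromise without blacklists or hard-coded thresholds can be sketched with a noisy-OR style combination, where each indicator contributes a small probability-like score. The indicator names and scores below are illustrative assumptions; the appliance's actual combination is model-driven, not this formula.

```python
def combine_weak_indicators(indicators):
    """Combine many weak indicators (each a probability-like score in [0, 1])
    into one composite suspicion score via a noisy-OR style product,
    rather than via a blacklist or a single hard-coded threshold.
    Illustrative sketch only."""
    p_benign = 1.0
    for name, score in indicators:
        # Each indicator independently reduces the chance everything is benign.
        p_benign *= (1.0 - score)
    return 1.0 - p_benign

weak = [
    ("unusual network JA3",   0.30),
    ("unusual device JA3",    0.25),
    ("rare user agent",       0.20),
    ("new external endpoint", 0.15),
]
score = combine_weak_indicators(weak)
print(round(score, 3))  # each indicator alone is weak; together they compound
```

No single indicator here would clear a typical alert bar on its own, but the composite score rises quickly as weak signals co-occur, which is the property the passage above describes for detecting the earliest signs of an emerging threat.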
  • Another example of features may include a deeper analysis of endpoint data. This endpoint data may include domain metadata, which can reveal peculiarities such as one or more indicators of a potentially malicious domain (i.e., its URL). The deeper analysis may assist in confirming an analysis to determine that indeed a cyber threat has been detected. The analyzer module can also look at factors of how rare the endpoint connection is, how old the endpoint is, where geographically the endpoint is located, and whether a security certificate associated with a communication is verified only by an endpoint device or by an external 3rd party, just to name a few additional factors. The analyzer module 115 (and similarly the cyber threat analyst module 120) can then assign the weighting given to these factors in the machine learning, which can be supervised based on how strongly each characteristic has been found to match up to actual malicious sites in the training.
  • In another AI classifier to find potentially malicious indicators, the agent analyzer data analysis process in the analyzer module 115 and/or cyber threat analyst module 120 may cooperate with the process identifier classifier to identify the additional factors of i) whether one or more processes are running independently of other processes, ii) whether the one or more independently running processes are recent to this network, and iii) whether the one or more independently running processes connect to an endpoint that is a rare connection for this network, which are referenced and compared to one or more AI models trained with machine learning on the normal behavior of the pattern of life of the system.
  • Note, a user agent, such as a browser, can act as a client in a network protocol used in communications within a client-server distributed computing system. In particular, the Hypertext Transfer Protocol (HTTP) identifies the client software originating (an example user agent) the request, using a user-agent header, even when the client is not operated by a user. Note, this identification can be faked, so it is only a weak indicator of the software on its own, but when compared to other observed user agents on the device, this can be used to identify possible software processes responsible for requests.
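The weak-indicator use of the User-Agent header described above can be sketched as a lookup against the agents previously observed on a device. The process names and agent strings are hypothetical examples; real attribution would weigh many more signals, since the header can be faked.

```python
def candidate_processes(device_user_agents, request_user_agent):
    """The User-Agent header is only a weak indicator on its own (it can
    be faked), but comparing it against user agents already observed on
    the device narrows down which software process likely made a request.
    Illustrative mapping only."""
    matches = [proc for proc, agents in device_user_agents.items()
               if request_user_agent in agents]
    # An agent never seen on this device is itself a weak signal of a
    # new or unusual process making requests.
    return matches or ["<previously unseen agent -- possible new process>"]

# User agents previously observed per process on one device (hypothetical).
observed = {
    "chrome.exe":  {"Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"},
    "updater.exe": {"WinHTTP/1.0"},
}
print(candidate_processes(observed, "WinHTTP/1.0"))  # -> ['updater.exe']
print(candidate_processes(observed, "curl/8.5.0"))   # unseen agent flagged
```

The unseen-agent case feeds naturally into the agent analyzer data analysis process described in the next passage, which starts an investigation on a potentially malicious agent previously unknown to the system.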
  • The analyzer module 115 and/or the cyber threat analyst module 120 may use the agent analyzer data analysis process that detects a potentially malicious agent previously unknown to the system to start an investigation on one or more possible cyber threat hypotheses. The determination and output of this step is what are possible cyber threats that can include or be indicated by the identified abnormal behavior and/or identified suspicious activity identified by the agent analyzer data analysis process.
  • In an example, the cyber threat analyst module 120 can use the agent analyzer data analysis process and the AI model(s) trained on forming and investigating hypotheses on what are a possible set of cyber threats, using the machine learning and/or set scripts to aid in forming one or more hypotheses and then supporting or refuting each hypothesis. The cyber threat analyst module 120 can cooperate with the AI models trained on forming and investigating hypotheses to form an initial set of possible hypotheses, which needs to be intelligently filtered down. The cyber threat analyst module 120 can be configured to use the one or more supervised machine learning models trained on i) agnostic examples of a past history of detection of a multitude of possible types of cyber threat hypotheses previously analyzed by a human cyber security professional, ii) the behavior and input of how a plurality of human cyber security analysts make a decision and analyze a risk level regarding, and a probability of, a potential cyber threat, iii) steps to take to conduct an investigation, starting with an anomaly, via learning how expert humans tackle investigations into specific real and synthesized cyber threats and then the steps taken by the human cyber security professional to narrow down and identify a potential cyber threat, and iv) what type of data and metrics were helpful to further support or refute each of the types of cyber threats, in order to determine a likelihood of whether the abnormal behavior and/or suspicious activity is either i) malicious or ii) benign.
  • The cyber threat analyst module 120, using AI models, scripts, and/or rules-based modules, is configured to conduct initial investigations regarding the anomaly of interest, collect additional information to form a chain of potentially related/linked information under analysis, form one or more hypotheses that could explain this chain of potentially related/linked information under analysis, and then gather additional information in order to refute or support each of the one or more hypotheses.
  • In an example, a behavioral pattern analysis for identifying what are the unusual behaviors of the network/system/device/user under analysis by the AI (machine learning) models may be as follows. The coordinator module 155 can tie the alerts, activities, and events from, in this example, the email domain to the alerts, activities, and events from the IT network domain. FIG. 9 shows a graph 410 of an embodiment of an example chain of unusual behavior for, in this example, the email activities as well as IT activities deviating from a normal pattern of life for this user and/or device in connection with the rest of the system/network under analysis. The cyber threat analyst module 120 and/or analyzer module 115 can cooperate with one or more AI (machine learning) models. The one or more AI (machine learning) models are trained and otherwise configured with mathematical algorithms to infer, for the cyber-threat analysis, ‘what is possibly happening with the chain of distinct alerts, activities, and/or events, which came from the unusual pattern,’ and then assign a threat risk associated with that distinct item of the chain of alerts and/or events forming the unusual pattern. The unusual pattern can be determined by initially examining which activities/events/alerts do not fall within the window of the normal pattern of life for that network/system/device/user under analysis, and then analyzing whether each such activity is unusual or suspicious. A chain of related activity that can include both unusual activity and activity within a pattern of normal life for that entity can be formed and checked against individual cyber threat hypotheses to determine whether that pattern is indicative of a behavior of a malicious actor (human, program, or other threat).
The cyber threat analyst module 120 can go back and pull in some of the normal activities to help support or refute a possible hypothesis of whether that pattern is indicative of a behavior of a malicious actor.
  • An illustrative example of a behavioral pattern included in the chain is shown in the graph over a time frame of, in this example, 7 days. The cyber threat analyst module 120 detects a chain of anomalous behavior of unusual data transfers three times, and unusual characteristics in email messages in the monitored system three times, which seem to have some causal link to the unusual data transfers. Likewise, twice unusual credentials attempted the unusual behavior of trying to gain access to sensitive areas or malicious IP addresses, and the user associated with the unusual credentials trying the unusual behavior has a causal link to at least one of those three email messages with unusual characteristics. Again, the cyber security appliance 100 can go back and pull in some of the normal activities to help support or refute a possible hypothesis of whether that pattern is indicative of a behavior of a malicious actor. The analyzer module 115 of FIG. 6 can cooperate with one or more models trained on cyber threats and their behavior to try to determine if a potential cyber threat is causing these unusual behaviors. The cyber threat analyst module 120 can put data and entities into 1) a directed graph, where nodes in that graph that are overlapping or close in distance have a good possibility of being related in some manner, 2) a vector diagram, 3) a relational database, and 4) other relational techniques that will at least be examined to assist in creating the chain of related activity connected by causal links, such as similar time, similar entity and/or type of entity involved, similar activity, etc., under analysis. If the pattern of behaviors under analysis is believed to be indicative of a malicious actor, then a score is created reflecting how confident the system is in this assessment that the unusual pattern was caused by a malicious actor. Next, a threat level score or probability is also assigned, indicative of what level of threat this malicious actor poses.
Lastly, the cyber security appliance 100 is configurable in a user interface, by a user, to enable what type of automatic response actions, if any, the cyber security appliance 100 may take when different types of cyber threats, indicated by the pattern of behaviors under analysis, are equal to or above a configurable level of threat posed by this malicious actor.
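The causal-link chaining described above can be sketched with a simple heuristic: group alerts that share an entity and occur close together in time. This is a deliberate simplification of the directed-graph/vector-diagram techniques in the passage; the 24-hour window and the event fields are illustrative assumptions.

```python
from datetime import datetime, timedelta

def chain_related_events(events, window=timedelta(hours=24)):
    """Group alerts/activities/events into chains by causal-link
    heuristics: here, a shared entity and closeness in time. An
    illustrative sketch of chaining related unusual behavior."""
    events = sorted(events, key=lambda e: e["time"])
    chains = []
    for ev in events:
        for chain in chains:
            last = chain[-1]
            # Extend an existing chain when the same entity acts again
            # within the time window; otherwise start a new chain.
            if ev["entity"] == last["entity"] and ev["time"] - last["time"] <= window:
                chain.append(ev)
                break
        else:
            chains.append([ev])
    return chains

t0 = datetime(2024, 1, 1, 9, 0)
events = [
    {"entity": "user-a", "kind": "unusual email",         "time": t0},
    {"entity": "user-a", "kind": "unusual data transfer", "time": t0 + timedelta(hours=2)},
    {"entity": "user-b", "kind": "unusual login",         "time": t0 + timedelta(days=3)},
]
chains = chain_related_events(events)
print(len(chains))    # 2 chains: a linked user-a pair, and a user-b singleton
print(len(chains[0])) # 2 causally linked events for user-a
```

A resulting multi-event chain, rather than any single anomaly, is what gets scored for confidence and threat level in the analysis above.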
  • Referring still to FIG. 6 , the autonomous response module 140 is configured to take one or more autonomous mitigation actions to mitigate the cyber threat during the cyberattack by the cyber threat. The autonomous response module 140 can reference an AI model trained to track a normal pattern of life for each node of the protected system to perform an autonomous act of, for example, restricting a potentially compromised node having i) an actual indication of compromise and/or ii) merely adjacent to a known compromised node, to merely take actions that are within that node's normal pattern of life to mitigate the cyber threat.
  • The chain of the individual alerts, activities, and events that form the pattern, including one or more unusual or suspicious activities, is combined into a distinct item for cyber-threat analysis of that chain of distinct alerts, activities, and/or events. The cyber-threat module may reference the one or more machine learning models trained on, in this example, e-mail threats to identify similar characteristics from the individual alerts and/or events forming the distinct item made up of the chain of alerts and/or events forming the unusual pattern.
  • Cyber Threat Assessment and Autonomous Actions
  • In the next step, the analyzer module 115 and/or cyber threat analyst module 120 generates one or more supported possible cyber threat hypotheses from the possible set of cyber threat hypotheses. The analyzer module 115 generates the supporting data and details of why each individual hypothesis is supported or not. The analyzer module 115 can also generate one or more possible cyber threat hypotheses and the supporting data and details of why they were refuted.
  • In general, the analyzer module 115 cooperates with the following three sources. The analyzer module 115 cooperates with the one or more of the AI model(s) 160 trained on cyber threats to determine whether an anomaly such as the abnormal behavior and/or suspicious activity is either 1) malicious or 2) benign when the potential cyber threat under analysis is previously unknown to the cyber security appliance 100. The analyzer module 115 cooperates with one or more of the AI model(s) 160 trained on a normal pattern of life of entities in the network under analysis. The analyzer module 115 cooperates with various AI-trained classifiers. With all of these sources, when they input information that indicates a potential cyber threat that is i) severe enough to cause real harm to the network under analysis and/or ii) a close match to known cyber threats, then the analyzer module can make a final determination to confirm that a cyber threat likely exists and send that cyber threat to the assessment module to assess the threat score associated with that cyber threat. Certain model breaches will always trigger a potential cyber threat, which the analyzer will compare against and confirm as a cyber threat.
  • In the next step, the assessment module 125 with the AI classifiers is configured to cooperate with the analyzer module 115. The analyzer module 115 supplies the identity of the supported possible cyber threat hypotheses from the possible set of cyber threat hypotheses to the assessment module 125. The assessment module 125 with the AI classifiers cooperates with the one or more of the AI model(s) 160 trained on possible cyber threats to make a determination on whether a cyber threat exists and what level of severity is associated with that cyber threat. The assessment module 125 with the AI classifiers cooperates with one or more of the AI model(s) 160 trained on possible cyber threats in order to assign a numerical assessment of a given cyber threat hypothesis that was found likely to be supported by the analyzer module 115 with the one or more data analysis processes, via the abnormal behavior, the suspicious activity, or the collection of system data points. The assessment module 125 with the AI classifiers can output a score (ranked number system, probability, etc.) that a given identified process is likely a malicious process. The assessment module 125 with the AI classifiers can be configured to assign a numerical assessment, such as a probability, of a given cyber threat hypothesis that is supported and a threat level posed by that cyber threat hypothesis which was found likely to be supported by the analyzer module 115, which includes the abnormal behavior or suspicious activity as well as one or more of the collection of system data points, with the one or more AI models trained on possible cyber threats.
  • The cyber threat analyst module 120 in the AI-based cyber security appliance 100 provides an advantage over competitors' products as it reduces the time taken for cyber security investigations, provides an alternative to manpower for small organizations, and improves detection (and remediation) capabilities within the cyber security appliance 100. The AI-based cyber threat analyst module 120 performs its own computation of threat and identifies interesting network events with one or more processors. These methods of detection and identification of threat all add to the above capabilities that make the cyber threat analyst module 120 a desirable part of the cyber security appliance 100. The cyber threat analyst module 120 offers a method of prioritizing that does not simply treat the highest-scoring alert of an event, evaluated by itself, as the worst, and thereby prevents more complex attacks from being missed because their composite parts/individual threats only produced low-level alerts.
  • The AI classifiers can be part of the assessment module 125, which scores the outputs of the analyzer module 115. Again, as for the other AI classifiers discussed, the AI classifier can be coded to take in multiple pieces of information about an entity, object, and/or thing and based on its training and then output a prediction about the entity, object, or thing. Given one or more inputs, the AI classifier model will try to predict the value of one or more outcomes. The AI classifiers cooperate with the range of data analysis processes that produce features for the AI classifiers. The various techniques cooperating here allow anomaly detection and assessment of a cyber threat level posed by a given anomaly; but more importantly, an overall cyber threat level posed by a series/chain of correlated anomalies under analysis.
  • In the next step, the formatting module 130 can generate an output such as a printed or electronic report with the relevant data. The formatting module 130 can cooperate with both the analyzer module 115 and the assessment module 125 depending on what the user wants to be reported. The formatting module 130 is configured to format, present a rank for, and output one or more supported possible cyber threat hypotheses from the assessment module into a formalized report, from one or more report templates populated with the data for that incident. The formatting module 130 is configured to format, present a rank for, and output one or more detected cyber threats from the analyzer module or from the assessment module into a formalized report, from one or more report templates populated with the data for that incident. Many different types of formalized report templates exist to be populated with data and can be outputted in an easily understandable format for a human user's consumption. The formalized report on the template is outputted for a human user's consumption in a medium of any of 1) printable report, 2) presented digitally on a user interface, 3) in a machine-readable format for further use in machine-learning reinforcement and refinement, or 4) any combination of the three. The formatting module 130 is further configured to generate a textual write up of an incident report in the formalized report for a wide range of breaches of normal behavior, used by the AI models trained with machine learning on the normal behavior of the system, based on analyzing previous reports with one or more models trained with machine learning on assessing and populating relevant data into the incident report corresponding to each possible cyber threat. 
The formatting module 130 can generate a threat incident report in the formalized report from a multitude of dynamic human-supplied and/or machine-created templates, each template corresponding to different types of cyber threats and varying in format, style, and standard fields. The formatting module 130 can populate a given template with relevant data, graphs, or other information as appropriate in various specified fields, along with a ranking of a likelihood of whether that hypothesis cyber threat is supported and its threat severity level for each of the supported cyber threat hypotheses, and then output the formatted threat incident report with the ranking of each supported cyber threat hypothesis, which is presented digitally on the user interface and/or printed as the printable report.
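The template population and ranking described above might be sketched as follows; the template fields and hypothesis records are hypothetical examples rather than the appliance's actual report format:

```python
# Illustrative sketch: rank supported hypotheses by probability and fill a
# report template. Field names and hypothesis structure are assumptions.
TEMPLATE = ("Incident: {incident}\n"
            "Top hypothesis: {top} (probability {prob:.0%})\n"
            "Severity: {severity}")

def format_report(incident: str, hypotheses: list) -> str:
    """Rank the supported hypotheses and populate the template."""
    ranked = sorted(hypotheses, key=lambda h: h["probability"], reverse=True)
    top = ranked[0]
    return TEMPLATE.format(incident=incident, top=top["name"],
                           prob=top["probability"], severity=top["severity"])

report = format_report("Unusual outbound transfer", [
    {"name": "Data exfiltration", "probability": 0.82, "severity": "high"},
    {"name": "Misconfigured backup", "probability": 0.35, "severity": "low"},
])
```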
  • In the next step, the assessment module 125 with the AI classifiers, once armed with the knowledge that malicious activity is likely occurring/is associated with a given process from the analyzer module 115, then cooperates with the autonomous response module 140 to take an autonomous action such as i) deny access in or out of the device or the network, ii) shut down activities involving a detected malicious agent, iii) restrict devices and/or users to merely operate within their particular normal pattern of life, iv) remove some user privileges/permissions associated with the compromised user account, etc.
  • The autonomous response module 140, rather than a human taking an action, can be configured to cause one or more rapid autonomous actions to be taken in response to counter the cyber threat. A user interface for the response module can program the autonomous response module 140 i) to merely make a suggested response to take to counter the cyber threat that will be presented on a display screen and/or sent by a notice to an enterprise security administrator for explicit authorization when the cyber threat is detected or ii) to autonomously take a response to counter the cyber threat without a need for a human to approve the response when the cyber threat is detected. The autonomous response module 140 will then send a notice of the autonomous response as well as display the autonomous response taken on the display screen. Example autonomous responses may include cut off connections, shutdown devices, change the privileges of users, delete and remove malicious links in emails, slow down a transfer rate, cooperate with other security devices such as a firewall to trigger its autonomous actions, and other autonomous actions against the devices and/or users. The autonomous response module 140 uses one or more of the AI model(s) 160 that are configured to intelligently work with other third-party defense systems in that customer's network against threats. The autonomous response module 140 can send its own protocol commands to devices and/or take actions on its own. In addition, the autonomous response module 140 uses the one or more of the AI model(s) 160 to orchestrate with other third-party defense systems to create a unified defense response against a detected threat within or external to that customer's network. 
The autonomous response module 140 can be an autonomous self-learning digital response coordinator that is trained specifically to control and reconfigure the actions of traditional legacy computer defenses (e.g., firewalls, switches, proxy servers, etc.) to contain threats propagated by, or enabled by, networks and the internet. The cyber threat analyst module 120 and/or assessment module 125 can cooperate with the autonomous response module 140 to cause one or more autonomous actions to be taken in response to counter the cyber threat, which improves computing devices in the system by limiting an impact of the cyber threat from consuming unauthorized CPU cycles, memory space, and power consumption in the computing devices via responding to the cyber threat without waiting for some human intervention. The trigger module 105, analyzer module 115, assessment module 125, the cyber threat analyst module 120, and formatting module 130 cooperate to improve the analysis and formalized report generation with less repetition to consume CPU cycles with greater efficiency than humans repetitively going through these steps and re-duplicating steps to filter and rank the one or more supported possible cyber threat hypotheses from the possible set of cyber threat hypotheses.
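The two response modes described above (suggest-and-await-authorization versus fully autonomous) might be sketched as a simple dispatch on the threat score; the thresholds, mode flag, and action strings are illustrative assumptions:

```python
# Hypothetical sketch of the configurable response modes: below a confirm
# threshold no action is taken, between the thresholds a response is only
# suggested to the administrator, and above an act threshold (when the
# autonomous mode is enabled) the response is taken without approval.
AUTONOMOUS_MODE = True
CONFIRM_THRESHOLD = 0.5   # suggest a response above this score
ACT_THRESHOLD = 0.8       # act without human approval above this score

def respond(threat_score: float) -> str:
    if threat_score >= ACT_THRESHOLD and AUTONOMOUS_MODE:
        # e.g., cut off the connection, then notify the administrator
        return "autonomous: connection cut off, notice sent to administrator"
    if threat_score >= CONFIRM_THRESHOLD:
        return "suggested: awaiting explicit administrator authorization"
    return "monitor: no response action taken"
```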
  • Prediction Engine and Restoration Engine
  • Overall, the cyber security appliance 100 and its modules use Artificial Intelligence algorithms configured and trained to perform a first machine-learned task of detecting the cyber threat. The autonomous response module 140 can use a combination of user configurable settings on actions to take to mitigate a detected cyber threat, a default set of actions to take to mitigate a detected cyber threat, and Artificial Intelligence algorithms configured and trained to perform a second machine-learned task of taking one or more mitigation actions to mitigate the cyber threat. A cyber security restoration engine 190 deployed in the cyber security appliance 100 uses Artificial Intelligence algorithms configured and trained to perform a third machine-learned task of remediating the system/network being protected back to a trusted operational state. The prediction engine 702 of FIG. 7 uses Artificial Intelligence algorithms configured and trained to perform a fourth machine-learned task of Artificial Intelligence-based simulations of cyberattacks to assist in determining 1) how a simulated cyberattack might occur in the system being protected, and 2) how to use the simulated cyberattack information to preempt possible escalations of an ongoing actual cyberattack.
  • Referring now to FIG. 7 , an exemplary block diagram of an embodiment of the prediction engine 702 is shown. The prediction engine 702 conducts Artificial Intelligence-based simulations by constructing a graph of nodes of the system being protected (e.g., a network including (a) the physical devices connecting to the network, any virtualized instances of the network, user accounts in the network, email accounts in the network, etc. as well as (b) connections and pathways through the network) to create a virtualized instance of the network to be tested. As shown in FIG. 7 , the various cooperating modules residing in the prediction engine 702 may include, but are not limited to, a collections module 705, a cyberattack generator (e.g. phishing email generator) 710, an email module 715, a network module 720, an analyzer module 725, a payloads module 730 with first and second payloads, a communication module 735, a training module 740, a simulated attack path module 750, a cleanup module 744, a scenario module 760, a user interface 765, a reporting module 770, a formatting module 775, an orchestration module 780, and/or an AI classifier 785 with a list of specified classifiers.
  • The simulated attack path module 750 in the prediction engine 702 may be implemented via i) a simulator to model the system being protected and/or ii) a clone creator to spin up a virtual network and create a virtual clone of the system being protected configured to pen-test one or more defenses provided by the cyber security appliance 100. The prediction engine 702 may include and cooperate with one or more AI models 787 trained with machine learning on the contextual knowledge of the organization, such as those in the cyber security appliance 100 or have its own separate model trained with machine learning on the contextual knowledge of the organization and each user's and device's normal pattern of behavior. These trained AI models 787 may be configured to identify data points from the contextual knowledge of the organization and its entities, which may include, but is not limited to, language-based data, email/network connectivity and behavior pattern data, and/or historic knowledgebase data. The prediction engine 702 may use the trained AI models 787 to cooperate with one or more AI classifier(s) 785 by producing a list of specific organization-based classifiers for the AI classifier(s) 785.
  • The simulated attack path module 750, by cooperating with the other modules in the prediction engine 702, is further configured to calculate and run one or more hypothetical simulations of a possible cyberattack and/or of an actual ongoing cyberattack from a cyber threat through an attack pathway through the system being protected. The prediction engine 702 is further configured to calculate, based at least in part on the results of the one or more hypothetical simulations of a possible cyberattack and/or of an actual ongoing cyberattack from a cyber threat through an attack pathway through the system being protected, a risk score for each node (e.g. each device, user account, etc.), the threat risk score being indicative of a possible severity of the compromise and/or chance of compromise prior to an autonomous response action being taken in response to an actual cyberattack of the cyber incident.
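A minimal sketch of such a per-node threat risk score, assuming the score simply combines a simulated chance of compromise with a severity of compromise (the combination rule and example nodes are illustrative assumptions, not the specification's actual formula):

```python
# Illustrative sketch: a per-node risk score combining the simulated chance
# of compromise with the severity if compromised. Inputs in 0..1; the
# multiplicative rule and the example nodes are assumptions for illustration.
def risk_score(chance_of_compromise: float, severity: float) -> float:
    """Return a 0..100 threat risk score for a node."""
    return round(100.0 * chance_of_compromise * severity, 1)

nodes = {
    "key_server":  risk_score(0.30, 0.95),  # hard to reach, but critical asset
    "user_laptop": risk_score(0.80, 0.20),  # easy entry point, low value
}
```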
  • The simulated attack path module 750 is configured to initially create the network being protected in a simulated or virtual device environment. Additionally, the orchestration module 780 and communications module 735 may be configured to cooperate with the cyber security appliance 100 to securely obtain specific data about specific users, devices, and entities in specific networks for this specific organization. The training module 740 and simulated attack path module 750 in the prediction engine 702 use the obtained specific data to generate one or more specific cyberattacks, such as a phishing email, tailored to those specific users, devices, and/or entities of the specific organization. Many different cyberattacks can be simulated by the AI red team module but a phishing email attack will be used as an example cyberattack.
  • The prediction engine 702 is communicatively coupled to the cyber security appliance 100, an open source (OS) database server 790, an email system 791 with one or more endpoint computing devices 791A-B, a network system 792 with one or more entities 793-799, and a restoration engine 745 over one or more networks 746/747. The cyber security appliance 100 may cooperate with the prediction engine 702 to initiate a pen-test in the form of, for example, a software attack, which generates a customized, for example, phishing email to spoof one or more specific users/devices/entities of an organization in an email/network defense system and then looks for any security vulnerabilities, risks, threats, and/or weaknesses potentially gaining access to one or more features and data of that specific user/device/entity.
  • The prediction engine 702 may be customized and/or driven by a centralized AI using and/or modelling a smart awareness of a variety of specific historical email/network behavior patterns and communications of a specific organization's hierarchy within a specific organization. Such AI modelling may be trained and derived through machine learning and the understanding of the organization itself based on: (i) a variety of OS materials such as any OS materials collected from the OS database server 790 and (ii) its historical awareness of any specific email/network connectivity and behavior patterns to target for that organization as part of an offensive (or attacking) security approach. The training module 740 can contain for reference a database of cyberattack scenarios as well as restoration response scenarios by the restoration engine 745 stored in the database.
  • The prediction engine 702 may use the orchestration module 780 to implement and orchestrate this offensive approach all the way from an initial social engineering attack at an earlier stage of the pentest to a subsequent payload delivery attack at a later stage of the pentest and so on. The prediction engine 702 is configured to: (i) intelligently initiate a customized cyberattack on the components, for example, in the IT network and email system 791; as well as (ii) subsequently generate a report to highlight and/or raise awareness of one or more key areas of vulnerabilities and/or risks for that organization after observing the intelligently initiated attack (e.g., such key areas may be formatted and reported in a way tailored for that organization using both the formatting and reporting modules, as described below); (iii) then allow that enterprise (e.g., organization) to be trained on that attack and its impact on those specific security postures, thereby allowing that organization to go in directly to mitigate and improve those compromised security postures going forward; as well as (iv) during an actual cyberattack, obtain and ingest data known on the cyberattack, run simulations, and then supply information, for example, to the autonomous response module in the cyber security appliance to mitigate the actual cyberattack.
  • The prediction engine 702 may cooperate with the cyber security appliance 100 to provide feedback on any successful attacks and detections. For example, in the event that the prediction engine 702 is successful in pentesting any of the organization's entities in the email and network defense systems 791/792, the prediction engine 702 may be configured to at least provide the cyber security appliance 100 (and/or any other predetermined entities) with any feedback on the successful pentest as well as any specifics regarding the processes used for that successful pentest, such as providing feedback on the specific attack vectors, scenarios, targeted entities, characteristics of the customized phishing emails, payloads, and contextual data, etc., that were used.
  • The simulated attack path module 750 in the prediction engine 702 may be configured with an attack path modeling component (not shown), which is programmed to work out the key paths and devices in a network via running cyberattacks on a simulated or virtual device version of the network under analysis, incorporating metrics that feed into that modeling by running simulated cyberattacks on the particulars known about this specific network being protected by the cyber security appliance 100. The attack modeling has been programmed with the knowledge of a layout and connection pattern of each particular network device in a network and a number of connections and/or hops to other network devices in the network. Also, how important a particular device is (a key importance) can be determined by the function of that network device, the user(s) associated with that network device, the location of the device within the network, and a number of connections and/or hops to other important devices in the network. The attack path modeling component ingests the information for the purposes of modeling and simulating a potential attack against the network and routes that an attacker would take through the network. The attack path modeling component can be constructed with information to i) understand an importance of network nodes in the network compared to other network nodes in the network, and ii) to determine key pathways within the network and vulnerable network nodes in the network that a cyberattack would use during the cyberattack, via modeling the cyberattack on at least one of 1) a simulated device version and 2) a virtual device version of the network under analysis.
  • FIG. 8 illustrates a diagram of an embodiment of the cyber threat prediction engine and its Artificial Intelligence-based simulations constructing an example graph of nodes in an example network and simulating how the cyberattack might likely progress in the future tailored with an innate understanding of a normal behavior of the nodes in the system being protected and a current operational state of each node in the graph of the protected system during simulations of cyberattacks. The prediction engine 702 plots the attack path through the nodes of the network and estimated times to reach critical nodes in the network. The cyberattack simulation modeling of the prediction engine runs the simulations to identify the routes, difficulty, and time periods from certain entry nodes to certain key servers. The simulations of the cyberattack by the cyber threat from one compromised entry point such as Device n to a key server can take 10 days, 100 days, etc. depending on normal behavior of Device n, security settings of the network devices in the different nodes of the network, etc.
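The route-and-time estimation described above can be sketched as a shortest-path search over a graph of nodes, where each edge carries an estimated number of days for an attacker to traverse that hop. The topology, node names, and per-hop times below are illustrative assumptions, not the appliance's actual simulation method:

```python
# Illustrative sketch: find the fastest attack route from an entry node to a
# key server with Dijkstra's algorithm. Edge weights are estimated days for
# an attacker to traverse that hop; all values here are assumptions.
import heapq

EDGES = {  # node -> [(neighbor, estimated_days_to_compromise), ...]
    "device_n":          [("file_server", 4), ("workstation", 1)],
    "workstation":       [("domain_controller", 6)],
    "file_server":       [("domain_controller", 2)],
    "domain_controller": [("key_server", 3)],
    "key_server":        [],
}

def fastest_attack_path(entry: str, target: str):
    """Return (total_days, path) for the quickest route, or (inf, [])."""
    queue = [(0, entry, [entry])]
    seen = set()
    while queue:
        days, node, path = heapq.heappop(queue)
        if node == target:
            return days, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in EDGES.get(node, []):
            heapq.heappush(queue, (days + cost, nxt, path + [nxt]))
    return float("inf"), []

days, path = fastest_attack_path("device_n", "key_server")
```

In this toy topology the route through the file server (4 + 2 + 3 days) beats the route through the workstation (1 + 6 + 3 days), illustrating why the simulation must compare whole pathways rather than individual hops.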
  • The attack path modeling component in the simulated attack path module 750 cooperating with the other modules in the prediction engine 702 are configured to determine the key pathways within the network and the vulnerable network nodes in the network that the cyberattack would use during the cyberattack, via the modeling of the cyberattack on at least one of 1) the simulated device version and 2) the virtual device version of the network under analysis via using the actual detected vulnerabilities of each network node, a predicted frequency of remediation of those vulnerabilities within a specific network device in the network without a notice from the restoration engine 745, and an importance of the key network nodes with the actual vulnerabilities compared to other network nodes in the network.
  • The modules essentially seed the attack path modeling component with weakness scores that provide current data, customized to each user account and/or network device, which then allows the artificial intelligence running the attack path simulation to choose entry network nodes into the network with more accuracy as well as plot the attack path through the nodes and estimated times to reach critical nodes in the network much more accurately based on the actual current operational condition of the many user accounts and network devices in the network. The attack simulation modeling can be run to identify the routes, difficulty, and time periods from certain entry nodes to certain key servers.
  • Note, the cyber threat analyst module 120 in the cyber security appliance 100 of FIG. 6 as well as the prediction engine 702 of FIG. 7 may use any unusual, detected behavior deviating from the normal behavior and then build a sequence/chain of unusual behavior and the causal links between the sequence/chain of unusual behavior to detect any potential cyber threats. For example, as shown in FIG. 6 , the cyber security appliance 100 and the prediction engine 702 may determine the unusual patterns by analyzing (i) what activities/events/alerts fall outside of the window of what is the normal pattern of life for that network/system/entity/device/user under analysis; and (ii) then pulling in and analyzing the pattern of the behavior of the activities/events/alerts that are in the normal pattern of life but also connect to the indicators for a possible cyberattack, to determine whether that pattern is indicative of a behavior of a malicious actor, such as a human, program, and/or any other cyber harmful threat.
  • The prediction engine 702 and its Artificial Intelligence-based simulations use artificial intelligence to cooperate with the restoration engine 745 to assist in choosing one or more remediation actions to perform on nodes affected by the cyberattack to bring them back to a trusted operational state while still mitigating the cyber threat during an ongoing cyberattack, based on effects determined through the simulation of possible remediation actions and their effects on the nodes making up the system being protected, and to preempt possible escalations of the cyberattack while restoring one or more nodes back to a trusted operational state. Thus, for example, the restoration engine 745 restores the one or more nodes in the protected system by cooperating with any of 1) an AI model trained to model a normal pattern of life for each node in the protected system, 2) an AI model trained on what are a possible set of cyber threats and their characteristics and symptoms to identify the cyber threat (e.g. malicious actor/device/file) that is causing a particular node to behave abnormally (e.g. malicious behavior) and fall outside of that node's normal pattern of life, and 3) the autonomous response module 140.
  • The restoration engine 745 can reference both i) a database of restoration response scenarios stored in the database and ii) a prediction engine 702 configured to run AI-based simulations and use the operational state of each node in the graph of the protected system during simulations of cyberattacks on the protected system to restore 1) each node compromised by the cyber threat and 2) promote protection of the corresponding nodes adjacent to a compromised node in the graph of the protected system.
  • The restoration engine 745 can prioritize among the one or more nodes to restore, which nodes to remediate and an order of the nodes to remediate, based on two or more factors including i) a dependency order needed for the recovery efforts, ii) an importance of a particular recovered node compared to other nodes in the system being protected, iii) a level of compromise of a particular node contemplated to be restored, iv) an urgency to recover that node compared to whether containment of the cyber threat was successful, v) a list of a most important things in the protected system to recover earliest, and vi) factoring in a result of a cyberattack simulation being run during the cyberattack by the prediction engine 702 to predict a likely result regarding the cyberattack when that node is restored.
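The multi-factor prioritization above might be sketched as a weighted scoring of candidate nodes, remediated in descending score order; the subset of factors modeled and their weights are illustrative assumptions, not values from the specification:

```python
# Hypothetical sketch: score each candidate node on a few of the recovery
# factors listed above and remediate highest-score first. Weights and the
# example nodes/factor values are assumptions for illustration only.
FACTOR_WEIGHTS = {
    "dependency_order": 3.0,   # other recoveries depend on this node being up
    "importance": 2.5,         # importance relative to other nodes
    "compromise_level": 2.0,   # how badly the node is compromised
    "urgency": 1.5,            # urgency given whether containment succeeded
}

def recovery_order(candidates: dict) -> list:
    """candidates: node -> {factor: 0..1}; return nodes, highest priority first."""
    def score(factors):
        return sum(FACTOR_WEIGHTS[f] * v for f, v in factors.items())
    return sorted(candidates, key=lambda n: score(candidates[n]), reverse=True)

order = recovery_order({
    "auth_server": {"dependency_order": 1.0, "importance": 0.9,
                    "compromise_level": 0.4, "urgency": 0.8},
    "print_server": {"dependency_order": 0.1, "importance": 0.2,
                     "compromise_level": 0.9, "urgency": 0.3},
})
```

Note how the heavily depended-upon authentication server outranks the more badly compromised but unimportant print server, matching the dependency-order factor described above.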
  • An interactive response loop exists between the restoration engine 745, the cyber security appliance 100, and the prediction engine 702. The restoration engine 745, the cyber security appliance 100, and the prediction engine 702 can be configured to cooperate to combine an understanding of normal operations of the nodes making up the devices and users in the system being protected by the cyber security appliance 100, an understanding of emerging cyber threats, an ability to contain those emerging cyber threats, and a restoration of the nodes of the system to heal the system with an adaptive feedback between the multiple AI-based engines in light of simulations of the cyberattack to predict what might occur in the nodes in the system based on the progression of the attack so far, mitigation actions taken to contain those emerging cyber threats and remediation actions taken to heal the nodes using the simulated cyberattack information. The multiple AI-based engines have communication hooks in between them to exchange a significant amount of behavioral metrics including data between the multiple AI-based engines to work together to provide an overall cyber threat response.
  • The cyber security appliance 100 and its modules use Artificial Intelligence algorithms configured and trained to perform a first machine-learned task of detecting the cyber threat. The autonomous response module 140 can use a combination of user configurable settings on actions to take to mitigate a detected cyber threat, a default set of actions to take to mitigate a detected cyber threat, and Artificial Intelligence algorithms configured and trained to perform a second machine-learned task of taking one or more mitigation actions to mitigate the cyber threat. The restoration engine 745 uses Artificial Intelligence algorithms configured and trained to perform a third machine-learned task of remediating the system/network being protected back to a trusted operational state. The prediction engine 702 uses Artificial Intelligence algorithms configured and trained to perform a fourth machine-learned task of AI-based simulations of cyberattacks to assist in determining 1) how a simulated cyberattack might occur in the system being protected, and 2) how to use the simulated cyberattack information to preempt possible escalations of an ongoing actual cyberattack. In an example, the autonomous response module 140 uses its intelligence to cooperate with the prediction engine 702 and its AI-based simulations to choose and initiate an initial set of one or more mitigation actions indicated as a preferred targeted initial response to the detected cyber threat by autonomously initiating those mitigation actions to defend against the detected cyber threat, rather than a human taking an action.
  • FIG. 10 illustrates an embodiment of the AI based cyber security appliance 100 plugging in as an appliance platform to protect a system. The cyber security appliance 100 is part of an enterprise network 230, which may further include one or more computing devices 240 such as database servers 250, web servers 260, networking devices 270 (e.g., bridge, switch, router, load-balancers, gateways, and/or firewalls and endpoint devices) with connectivity to resources within the enterprise network 230 as well as a publicly accessible network 280 (e.g., the Internet). The endpoint devices 270 may include, but are not limited or restricted to, desktop computers, laptops, smart phones, tablets, wearables, smart appliances, or the like. The security controls operate as probes and detectors that are configured to monitor, for example, network-based activity (e.g., email activity, TCP/IP communications, text or Short Message Service (SMS) activity, etc.) and computing device activity (e.g., download activity based on volume, day, time of day, etc.); credential update/modification activity (e.g., credential changes, failed access attempts to a resource, etc.); and/or resource activity (e.g., attempted/successful accesses to enterprise resources, etc.). The security controls provide the monitored data (or a version of the monitored data) as input into the modules of the cyber security appliance 100 to determine what is occurring in each domain individually.
  • FIG. 11 illustrates an example Artificial Intelligence based cyber security appliance 100 using a cyber threat analyst module 120 to protect an example network. The example network of computer systems 50 uses a cyber security appliance 100. The system depicted is a simplified illustration, which is provided for ease of explanation.
  • Referring to FIG. 11 , an exemplary and generalized embodiment of a system (e.g., enterprise network) 50 featuring computer systems 10 and 40, where one or more of these computer systems 10 and/or 40 may deploy the AI-based, cyber security appliance 100 of FIG. 6 to protect the enterprise, is shown. Herein, the system 50 comprises a first computer system 10 within a building, which uses the threat detection system to detect and thereby attempt to prevent threats to computing devices within its bounds. The first computer system 10 comprises three computing devices 1, 2, 3, a local server 4, and a multifunctional device (MFD) 5 that provides printing, scanning and facsimile functionalities to each of the computers 1, 2, 3. All of the devices within the first computer system 10 are communicatively coupled via a first Local Area Network (LAN) 6. Consequently, all of the computing devices 1, 2, 3 are able to access the local server 4 via the first LAN 6 and use the functionalities of the MFD 5 via the LAN 6. The first LAN 6 of the first computer system 10 is connected to the Internet 20, which in turn provides computing devices 1, 2, 3 with access to a multitude of other computing devices, including a server 30 and a second computer system 40. The second computer system 40 also includes two computing devices 41, 42, connected by a second LAN 43.
  • In this exemplary embodiment of the cyber security appliance 100, a first computing device 1 on the first computer system 10 has the electronic hardware, modules, models, and various software processes of the cyber security appliance 100; and therefore, runs threat detection for detecting threats to the first computer system 10. As such, the first computing device 1 includes one or more processors arranged to run the steps of the process described herein, memory storage components required to store information related to the running of the process, as well as one or more network interfaces for collecting information from various security controls (e.g., sensors, probes, etc.) collecting data associated with the system (network) 50 under analysis.
  • The cyber security appliance 100 in the first computing device 1 builds and maintains a dynamic, ever-changing model of the ‘normal behavior’ of each user and machine within the first computer system 10. The approach is based on Bayesian mathematics, and monitors all interactions, events and communications within the first computer system 10—which computing device is talking to which, files that have been created, networks that are being accessed. For example, a second computing device 2 is based in a company's San Francisco office and operated by a marketing employee who regularly accesses the marketing network, usually communicates with machines in the company's U.K. office in the second computer system 40 between 9.30 AM and midday, and is active from about 8:30 AM until 6 PM. The same employee virtually never accesses the employee time sheets, very rarely connects to the company's Atlanta network and has no dealings in South-East Asia. The cyber security appliance 100 takes all the information that is available relating to this employee and establishes a ‘pattern of life’ for that person and the devices used by that person in that system, which is dynamically updated as more information is gathered. The model of the normal pattern of life for an entity in the system 50 under analysis is used as a moving benchmark, allowing the cyber security appliance 100 to spot behavior on the system 50 that seems to fall outside of this normal pattern of life, and flag this behavior as anomalous, requiring further investigation and/or autonomous action.
  • The cyber security appliance 100 is built to deal with the fact that today's attackers are getting stealthier, and an attacker/malicious agent may be ‘hiding’ in a system to ensure that they avoid raising suspicion in an end user, such as by slowing their machine down. The AI model(s) 160 in the cyber security appliance 100 builds a sophisticated ‘pattern of life’—that understands what represents normality for every person, device, and network activity in the system being protected by the cyber security appliance 100. The self-learning algorithms in the AI can, for example, understand the normal patterns of life of each node (user account, device, etc.) in an organization in about a week, and grow more bespoke with every passing minute. Conventional AI typically relies solely on identifying threats based on historical attack data and reported techniques, requiring data to be cleansed, labelled, and moved to a centralized repository. The detection engine self-learning AI can learn “on the job” from real-world data occurring in the system and constantly evolves its understanding as the system's environment changes. The Artificial Intelligence can use machine learning algorithms to analyze patterns and ‘learn’ what is the ‘normal behavior’ of the system (network) 50 by analyzing data on the activity on the system 50 at the device and employee level. The unsupervised machine learning does not need humans to supervise the learning in the model but rather discovers hidden patterns or data groupings without the need for human intervention. The unsupervised machine learning discovers the patterns and related information using the unlabeled data monitored in the system itself. Unsupervised learning algorithms can include clustering, anomaly detection, neural networks, etc. 
Unsupervised learning can break down features of what it is analyzing (e.g., a network node of a device or user account), which can be useful for categorization, and then identify what else has similar or overlapping feature sets to what it is analyzing.
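As an illustrative sketch only, not the appliance's actual implementation, grouping unlabeled devices by overlapping feature sets might look like the following, where the device names, observed features, and similarity threshold are all hypothetical:

```python
def jaccard(a, b):
    """Similarity between two feature sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def group_by_features(devices, threshold=0.5):
    """Greedily cluster devices whose feature sets overlap above a threshold.

    devices: dict of device name -> iterable of observed features.
    Returns a list of clusters (lists of device names); no labels are needed.
    """
    clusters = []
    for name, feats in devices.items():
        for cluster in clusters:
            rep = devices[cluster[0]]  # compare against the cluster's first member
            if jaccard(feats, rep) >= threshold:
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Hypothetical per-device feature observations (protocols seen on the wire).
observed = {
    "laptop-1": {"dns", "http", "smb"},
    "laptop-2": {"dns", "http", "smtp"},
    "camera-1": {"rtsp", "dns"},
}
print(group_by_features(observed))  # the two laptops group together
```

The greedy single-pass grouping keeps the sketch short; a production system would use a proper clustering algorithm over richer features.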
  • The cyber security appliance 100 can use unsupervised machine learning to work things out without pre-defined labels. In the case of sorting a series of different entities, such as different devices, the system analyzes the information and works out the different classes of devices. This allows the system 50 to handle the unexpected and embrace uncertainty when new entities and classes are examined. The modules and models of the cyber security appliance 100 do not always know what they are looking for but can independently classify data and detect compelling patterns. The cyber security appliance's 100 unsupervised machine learning methods do not require training data with pre-defined labels. Instead, they are able to identify key patterns and trends in the data, without the need for human input. The advantage of unsupervised learning in this system is that it allows computers to go beyond what their programmers already know and discover previously unknown relationships. The unsupervised machine learning methods can use a probabilistic approach based on a Bayesian framework. The machine learning allows the cyber security appliance 100 to integrate a huge number of weak indicators of potentially anomalous network behavior, each a low threat value by itself, to produce a single clear overall measure of these correlated anomalies and determine how likely a network device is to be compromised. This probabilistic mathematical approach provides an ability to understand important information amid the noise of the network, even when it does not know what it is looking for.
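A minimal sketch of combining many weak indicators into one overall measure, using a naive Bayesian log-odds update; the prior and the likelihood ratios below are hypothetical placeholders, not disclosed values:

```python
import math

def compromise_probability(prior, likelihood_ratios):
    """Combine many weak indicators into one probability of compromise.

    Each likelihood ratio is P(indicator | compromised) / P(indicator | normal);
    values barely above 1.0 are weak evidence alone, but their product
    (a sum in log-odds space) can become decisive.
    """
    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical weak indicators: each alone shifts belief only slightly,
# yet together they raise a 1% prior to a clearly elevated probability.
weak = [1.5, 1.4, 1.6, 1.5, 1.7, 1.4, 1.6]
p = compromise_probability(prior=0.01, likelihood_ratios=weak)
print(f"{p:.3f}")
```

Working in log-odds keeps the combination numerically stable however many indicators are folded in.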
  • To combine these multiple analyses of different measures of network behavior into a single overall/comprehensive picture of the state of each device, the cyber security appliance 100 takes advantage of the power of Recursive Bayesian Estimation (RBE) via an implementation of the Bayes filter. Using RBE, the cyber security appliance's 100 AI models are able to constantly adapt themselves, in a computationally efficient manner, as new information becomes available to the system. The AI model(s) of the cyber security appliance 100 may be configured to continually recalculate threat levels in the light of new evidence, identifying changing attack behaviors where conventional signature-based methods fall down.
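A Bayes filter of the kind RBE builds on can be sketched as a two-state (normal versus compromised) recursive update; every transition probability and likelihood below is a hypothetical placeholder:

```python
class CompromiseBeliefFilter:
    """Minimal discrete Bayes filter over two states: normal vs compromised.

    predict() models state drift between observations; update() folds in a
    new observation's likelihoods via Bayes' rule.
    """

    def __init__(self, p_compromised=0.01, p_onset=0.001, p_remediate=0.01):
        self.belief = p_compromised     # P(compromised | evidence so far)
        self.p_onset = p_onset          # P(normal -> compromised) per step
        self.p_remediate = p_remediate  # P(compromised -> normal) per step

    def predict(self):
        # Markov transition: the true state can flip between observations.
        b = self.belief
        self.belief = b * (1 - self.p_remediate) + (1 - b) * self.p_onset

    def update(self, lik_if_compromised, lik_if_normal):
        # Bayes' rule: reweight by how likely the observation is in each state.
        num = lik_if_compromised * self.belief
        den = num + lik_if_normal * (1 - self.belief)
        self.belief = num / den

f = CompromiseBeliefFilter()
for _ in range(5):  # a run of mildly anomalous observations
    f.predict()
    f.update(lik_if_compromised=0.6, lik_if_normal=0.3)
print(round(f.belief, 3))  # belief grows with each consistent anomaly
```

Each step reuses only the previous belief, which is what makes the recursion computationally cheap as new evidence streams in.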
  • Training an AI model can be accomplished by having the model learn good values for all of the weights and the bias from labeled examples created by the system, in this case starting with no labels initially. A goal of the training of the AI model can be to find a set of weights and biases that have low loss, on average, across all examples.
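A toy illustration of finding a weight and bias with low average loss, here by gradient descent on mean squared error over hypothetical labeled examples (the appliance's actual model and loss are not specified at this level):

```python
def train_linear(examples, lr=0.1, epochs=200):
    """Fit a weight and bias by gradient descent on mean squared error.

    examples: list of (x, label) pairs. Returns (w, b, final_loss).
    """
    w, b = 0.0, 0.0
    n = len(examples)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in examples) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in examples) / n
        w -= lr * grad_w
        b -= lr * grad_b
    loss = sum((w * x + b - y) ** 2 for x, y in examples) / n
    return w, b, loss

# Hypothetical labeled examples generated from y = 2x + 1.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b, loss = train_linear(data)
print(round(w, 2), round(b, 2))  # recovers roughly w = 2, b = 1
```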
  • The AI classifier can receive supervised machine learning with a labeled data set to learn to perform its task as discussed herein. An anomaly detection technique that can be used is supervised anomaly detection, which requires a data set that has been labeled as “normal” and “abnormal” and involves training a classifier. Another anomaly detection technique that can be used is unsupervised anomaly detection, which detects anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit least to the remainder of the data set. The AI model representing normal behavior from a given normal training data set can detect anomalies by establishing the normal pattern and then testing the likelihood that a test instance under analysis was generated by the AI model. Anomaly detection can identify rare items, events or observations which raise suspicions by differing significantly from the majority of the data, which includes rare objects as well as things like unexpected bursts in activity.
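The unsupervised variant, which assumes the majority of instances are normal, can be sketched with a robust median/MAD distance score; the traffic figures and flagging threshold below are hypothetical:

```python
import statistics

def anomaly_scores(values):
    """Score each value by how far it sits from the bulk of the data.

    Assumes the majority of instances are normal: the median and the median
    absolute deviation (MAD) are robust to a few outliers, so genuine
    anomalies receive large scores.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9
    return [abs(v - med) / mad for v in values]

# Hypothetical hourly upload volumes for one device; one burst stands out.
uploads = [10, 12, 9, 11, 10, 13, 11, 480]
scores = anomaly_scores(uploads)
flagged = [v for v, s in zip(uploads, scores) if s > 5.0]
print(flagged)  # the unexpected burst is the instance that fits least
```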
  • The method and system are arranged to be performed by one or more processing components with any portions of software stored in an executable format on a computer readable medium. Thus, any portions of the method, apparatus and system implemented as software can be stored in one or more non-transitory memory storage devices in an executable format to be executed by one or more processors. The computer readable medium may be non-transitory and does not include radio or other carrier waves. The computer readable medium could be, for example, a physical computer readable medium such as semiconductor memory or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disc, and an optical disk, such as a CD-ROM, CD-R/W or DVD.
  • The various methods described above may also be implemented by a computer program product. The computer program product may include computer code arranged to instruct a computer to perform the functions of one or more of the various methods described above. The computer program and/or the code for performing such methods may be provided to an apparatus, such as a computer, on a computer readable medium or computer program product. For the computer program product, a transitory computer readable medium may include radio or other carrier waves.
  • Computing Devices
  • FIG. 12 illustrates a block diagram of an embodiment of one or more computing devices that can be a part of an AI-based, cyber security system including the cyber security appliance 100, the restoration engine, the prediction engine 702, etc. for an embodiment of the current design discussed herein.
  • The computing device may include one or more processors (e.g. processing units) 620 to execute instructions, one or more memories 630-632 to store information, one or more data input components 660-663 to receive data input from a user of the computing device 600, one or more modules that include the management module, a network interface communication circuit 670 to establish a communication link to communicate with other computing devices external to the computing device, one or more sensors where an output from the sensors is used for sensing a specific triggering condition and then correspondingly generating one or more preprogrammed actions, a display screen 691 to display at least some of the information stored in the one or more memories 630-632 and other components. Note, portions of this design implemented in software 644, 645, 646 are stored in the one or more memories 630-632 and are executed by the one or more processors 620. The processing unit 620 may have one or more processing cores, which couples to a system bus 621 that couples various system components including the system memory 630. The system bus 621 may be any of several types of bus structures selected from a memory bus, an interconnect fabric, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • Computing device 602 typically includes a variety of computing machine-readable media. Non-transitory machine-readable media can be any available media that can be accessed by computing device 602 and includes both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, use of non-transitory machine-readable media includes storage of information, such as computer-readable instructions, data structures, other executable software, or other data. Non-transitory machine-readable media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other tangible medium which can be used to store the desired information, and which can be accessed by the computing device 602. Transitory media such as wireless channels are not included in the machine-readable media. Machine-readable media typically embody computer readable instructions, data structures, and other executable software.
  • In an example, a volatile memory drive 641 is illustrated for storing portions of the operating system 644, application programs 645, other executable software 646, and program data 647.
  • A user may enter commands and information into the computing device 602 through input devices such as a keyboard, touchscreen, or software or hardware input buttons 662, a microphone 663, a pointing device and/or scrolling input component, such as a mouse, trackball or touch pad 661. The microphone 663 can cooperate with speech recognition software. These and other input devices are often connected to the processing unit 620 through a user input interface 660 that is coupled to the system bus 621, but can be connected by other interface and bus structures, such as a lightning port, game port, or a universal serial bus (USB). A display monitor 691 or other type of display screen device is also connected to the system bus 621 via an interface, such as a display interface 690. In addition to the monitor 691, computing devices may also include other peripheral output devices such as speakers 697, a vibration device 699, and other output devices, which may be connected through an output peripheral interface 695.
  • The computing device 602 can operate in a networked environment using logical connections to one or more remote computers/client devices, such as a remote computing system 680. The remote computing system 680 can be a personal computer, a mobile computing device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computing device 602. The logical connections can include a personal area network (PAN) 672 (e.g., Bluetooth®), a local area network (LAN) 671 (e.g., Wi-Fi), and a wide area network (WAN) 673 (e.g., cellular network). Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet. A browser application and/or one or more local apps may be resident on the computing device and stored in the memory.
  • When used in a LAN networking environment, the computing device 602 is connected to the LAN 671 through a network interface 670, which can be, for example, a Bluetooth® or Wi-Fi adapter. When used in a WAN networking environment (e.g., Internet), the computing device 602 typically includes some means for establishing communications over the WAN 673. With respect to mobile telecommunication technologies, for example, a radio interface, which can be internal or external, can be connected to the system bus 621 via the network interface 670, or other appropriate mechanism. In a networked environment, other software depicted relative to the computing device 602, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, remote application programs 685 may reside on the remote computing system 680. It will be appreciated that the network connections shown are examples and other means of establishing a communications link between the computing devices may be used. It should be noted that the present design can be carried out on a single computing device or on a distributed system in which different portions of the present design are carried out on different parts of the distributed computing system.
  • In certain situations, each of the terms “engine,” “module” and “component” is representative of hardware, firmware, and/or software that is configured to perform one or more functions. As hardware, the engine (or module or component) may include circuitry having data processing and/or storage functionality. Examples of such circuitry may include, but are not limited or restricted to, a processor, a programmable gate array, a microcontroller, an application specific integrated circuit, wireless receiver, transmitter and/or transceiver circuitry, semiconductor memory, or combinatorial logic. Alternatively, or in combination with the hardware circuitry described above, the engine (or module or component) may be software in the form of one or more software modules, which may be configured to operate as its counterpart circuitry. For instance, a software module may be a software instance that operates as or is executed by a processor, namely a virtual processor whose underlying operations are based on a physical processor, such as virtual processor instances for Microsoft® Azure® or Google® Cloud Services platform or an EC2 instance within the Amazon® AWS infrastructure, for example. Illustrative examples of the software module may include an executable application, a daemon application, an application programming interface (API), a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, a shared library/dynamic load library, or simply one or more instructions. A module may be implemented in hardware electronic components, software components, or a combination of both. A module is a core component of a complex system consisting of hardware and/or software that is capable of performing its function discretely from other portions of the entire complex system but designed to interact with the other portions of the entire complex system.
The term “computerized” generally represents that any corresponding operations are conducted by hardware in combination with software and/or firmware. The terms “computing device” or “device” should be generally construed as a physical device with data processing capability, data storage capability, and/or a capability of connecting to any type of network, such as a public cloud network, a private cloud network, or any other network type. Examples of a computing device may include, but are not limited or restricted to, the following: a server, a router or other intermediary communication device, an endpoint (e.g., a laptop, a smartphone, a tablet, a desktop computer, a netbook, an IoT device, a networked wearable, etc.). Finally, the terms “or” and “and/or” as used herein are to be interpreted as inclusive or meaning any one or any combination. As an example, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.
  • Note, an application described herein includes but is not limited to software applications, mobile applications, and programs, routines, objects, widgets, and plug-ins that are part of an operating system application. Some portions of this description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to convey the substance of their work most effectively to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. These algorithms can be written in a number of different software programming languages such as Python, C, C++, Java, or other similar languages. Also, an algorithm can be implemented with lines of code in software, configured logic gates in hardware, or a combination of both. In an embodiment, the logic consists of electronic circuits that follow the rules of Boolean Logic, software that contains patterns of instructions, or any combination of both. Note, many functions performed by electronic hardware components can be duplicated by software emulation. Thus, a software program written to accomplish those same functions can emulate the functionality of the hardware components in the electronic circuitry.
  • Unless specifically stated otherwise as apparent from the above discussions, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers, or other such information storage, transmission or display devices.
  • While the foregoing design and embodiments thereof have been provided in considerable detail, it is not the intention of the applicant(s) for the design and embodiments provided herein to be limiting. Additional adaptations and/or modifications are possible, and, in broader aspects, these adaptations and/or modifications are also encompassed. Accordingly, departures may be made from the foregoing design and embodiments without departing from the scope afforded by the following claims, which scope is only limited by the claims when appropriately construed.

Claims (20)

1. An apparatus, comprising:
a device linking service configured to unify data streams from different sources of access into a network to get a composite picture of a behavior of an individual physical network device that has different device identifiers from the different sources of access into the network via cross-referencing information from the different sources of access into the network, where the device linking service is configured to create a unified network device identifier for the different device identifiers from the different sources of access into the network, where the device linking service is configured to supply the unified network device identifier and associated information with the different device identifiers from the different sources of access into the network to a prediction engine, where the prediction engine is configured to run a simulation of attack paths for the network that a cyber threat may take, and where any instructions for the device linking service and the prediction engine are stored in an executable format on one or more non-transitory computer readable mediums, which are executable by one or more processors.
2. The apparatus of claim 1,
where the device linking service is configured to create a meta entity identifier from the unified network device identifier and one or more user identifiers associated with the different device identifiers from the different sources of access into the network,
where the device linking service is configured to supply the meta entity identifier and associated information to a cyber security appliance configured to detect the cyber threat in the network, and
where the cyber security appliance is configured to use the meta entity identifier and information associated with the unified network device identifier and the one or more user identifiers associated with the different device identifiers to create multiple models of a pattern of life for the meta entity identifier in order to detect the cyber threat.
3. The apparatus of claim 2, where the cyber security appliance is configured to have an autonomous response module to autonomously respond to mitigate the cyber threat as well as to cooperate with the prediction engine in order to determine how to properly autonomously respond to a cyber attack by the cyber threat based upon simulations run in the prediction engine modelling the attack paths into and through the network.
4. The apparatus of claim 1,
where the device linking service is configured to cooperate with a firewall configuration ingester and the prediction engine,
where the prediction engine is configured to monitor traffic into the network in order to map all of the paths into and through the network taken by the monitored traffic,
where the firewall configuration ingester is configured to ingest firewall rules to determine theoretically possible paths through the network in accordance with the firewall rules and a mapping of nodes of the network, and
where the prediction engine is configured to combine all of the paths into and through the network taken by the monitored traffic with the possible paths through the network theoretically possible in accordance with the firewall rules from the firewall configuration ingester in light of the unified network device identifier with a user entity in the network from the device linking service to determine possible attack paths when running the simulation of attack paths for the network that the cyber threat may take.
5. The apparatus of claim 1, where the device linking service is configured to passively monitor the data streams from different sources having access into the network as well as to actively query third party platforms to gather and ingest device data, user data, and activity data from multiple third party vendors and then analyze the ingested data, and then pass the ingested data into the prediction engine to perform the simulation of attack paths for the network that the cyber threat may take.
6. The apparatus of claim 1, where the device linking service is configured to maintain data from the data streams in their generic format as well as put relevant data into a uniform analysis format in a central data store via translation and mapping, and then use the central data store to store the relevant data for the uniform analysis format.
7. The apparatus of claim 1, where the device linking service is configured to 1) apply at least one of string matching and fuzzy logic to cross-reference information from the different sources of access into the network as well as 2) use a central data store to store data points organized by how the data points relate to another data point.
8. The apparatus of claim 1, where the device linking service is configured to aggregate network presence information about a user of the network and their different user accounts on different third-party applications served from third-party platforms external to the network, who is then also associated with this particular individual physical network device.
9. The apparatus of claim 1, further comprising:
a firewall configuration ingester configured to cooperate with the device linking service, where the firewall configuration ingester is configured to examine rules of firewall configurations and their settings to model changes in these rules over time to detect unusual rules over time to the firewall configurations that cause new attack path modelling routes into the network.
10. The apparatus of claim 1, further comprising:
a firewall configuration ingester configured to cooperate with the device linking service and the prediction engine, where the firewall configuration ingester is configured to examine firewall rules implemented by a firewall to identify routes into the network allowed by the current firewall rules and supply the prediction engine with a set of possible routes that a cyber attack by the cyber threat may take into the network and permitted routes into the network.
11. A non-transitory computer readable medium configured to store instructions in an executable format in the non-transitory computer readable medium, which when executed by one or more processors cause operations, comprising:
providing a device linking service to unify data streams from different sources of access into a network to get a composite picture of a behavior of an individual physical network device that has different device identifiers from the different sources of access into the network via cross-referencing information from the different sources of access into the network,
providing the device linking service to create a unified network device identifier for the different device identifiers from the different sources of access into the network,
providing the device linking service to then link the unified network device identifier with a user in the network, and
providing the device linking service to supply the unified network device identifier and associated information with the different device identifiers from the different sources of access into the network to a prediction engine, where the prediction engine is configured to run a simulation of attack paths for the network that a cyber threat may take.
12. The non-transitory computer readable medium of claim 11, further comprising:
providing the device linking service to create a meta entity identifier from the unified network device identifier and one or more user identifiers associated with the different device identifiers from the different sources of access into the network,
providing the device linking service to supply the meta entity identifier and associated information to a cyber security appliance configured to detect the cyber threat in the network, and
providing the cyber security appliance to use the meta entity identifier and information associated with the unified network device identifier and the one or more user identifiers associated with the different device identifiers to create multiple models of a pattern of life for the meta entity identifier in order to detect the cyber threat.
13. The non-transitory computer readable medium of claim 12, further comprising:
providing the cyber security appliance to have an autonomous response module to autonomously respond to mitigate the cyber threat as well as to cooperate with the prediction engine in order to determine how to properly autonomously respond to a cyber attack by the cyber threat based upon simulations run in the prediction engine modelling the attack paths into and through the network.
14. The non-transitory computer readable medium of claim 11, further comprising:
providing the prediction engine to monitor traffic into the network in order to map all of the paths into and through the network taken by the monitored traffic,
providing a firewall configuration ingester to ingest firewall rules to determine theoretically possible paths through the network in accordance with the firewall rules and a mapping of nodes of the network, and
providing the prediction engine to combine all of the paths into and through the network taken by the monitored traffic with the possible paths through the network theoretically possible in accordance with the firewall rules from the firewall configuration ingester in light of the unified network device identifier with a user entity in the network from the device linking service to determine possible attack paths when running the simulation of attack paths for the network that the cyber threat may take.
15. The non-transitory computer readable medium of claim 11, further comprising:
providing the device linking service to passively monitor the data streams from different sources having access into the network as well as to actively query third party platforms to gather and ingest device data, user data, and activity data from multiple third party vendors and then analyze the ingested data, and then pass the ingested data into the prediction engine to perform the simulation of attack paths for the network that the cyber threat may take.
16. The non-transitory computer readable medium of claim 11, further comprising:
providing the device linking service to maintain data from the data streams in their generic format as well as put relevant data into a uniform analysis format in a central data store via translation and mapping, and then use the central data store to store the relevant data for the uniform analysis format.
17. The non-transitory computer readable medium of claim 11, further comprising:
providing the device linking service to 1) apply at least one of string matching and fuzzy logic to cross-reference information from the different sources of access into the network as well as 2) use a central data store to store data points organized by how the data points relate to another data point.
18. The non-transitory computer readable medium of claim 11, further comprising:
providing the device linking service to aggregate network presence information about the user of the network and their different user accounts on different third-party applications served from third-party platforms external to the network, who is then also associated with this particular individual physical network device.
19. The non-transitory computer readable medium of claim 11, further comprising:
providing a firewall configuration ingester to examine rules of firewall configurations and their settings to model changes in these rules over time to detect unusual rules over time to the firewall configurations that cause new attack path modelling routes into the network.
20. The non-transitory computer readable medium of claim 11, further comprising:
providing a firewall configuration ingester to examine firewall rules implemented by a firewall to identify routes into the network allowed by the current firewall rules and supply the prediction engine with a set of possible routes that a cyber attack by the cyber threat may take into the network and permitted routes into the network.
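Claims 4, 10, 14, and 20 describe combining observed traffic paths with paths theoretically permitted by ingested firewall rules. A sketch of that second half, assuming a hypothetical zone mapping, rule tuple format, and first-match/default-deny semantics:

```python
from collections import deque

def allowed(rules, src_zone, dst_zone, port):
    """First-match evaluation over ordered (src, dst, port, action) rules."""
    for src, dst, p, action in rules:
        if src in (src_zone, "any") and dst in (dst_zone, "any") and p in (port, "any"):
            return action == "allow"
    return False  # default deny

def possible_paths(links, rules, start, target, port):
    """Enumerate loop-free paths from start to target whose every hop the firewall permits.

    links maps each zone to its directly connected zones (the network mapping);
    rules are the ingested firewall configuration.
    """
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        for nxt in links.get(path[-1], []):
            if nxt in path or not allowed(rules, path[-1], nxt, port):
                continue
            if nxt == target:
                paths.append(path + [nxt])
            else:
                queue.append(path + [nxt])
    return paths

# Hypothetical three-zone mapping and rules: the firewall admits web traffic
# into the internal zone but blocks SMB (port 445) from the DMZ.
links = {"internet": ["dmz"], "dmz": ["internal"], "internal": []}
rules = [("any", "dmz", "any", "allow"),
         ("dmz", "internal", 445, "deny"),
         ("dmz", "internal", "any", "allow")]
print(possible_paths(links, rules, "internet", "internal", 443))
print(possible_paths(links, rules, "internet", "internal", 445))
```

A prediction engine could intersect or union these theoretically possible routes with the paths actually observed in monitored traffic before simulating attack paths.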
US18/207,061 2022-06-09 2023-06-07 Unifying of the network device entity and the user entity for better cyber security modeling along with ingesting firewall rules to determine pathways through a network Pending US20240031380A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/207,061 US20240031380A1 (en) 2022-06-09 2023-06-07 Unifying of the network device entity and the user entity for better cyber security modeling along with ingesting firewall rules to determine pathways through a network

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US202263350781P 2022-06-09 2022-06-09
US202263396105P 2022-08-08 2022-08-08
US18/207,061 US20240031380A1 (en) 2022-06-09 2023-06-07 Unifying of the network device entity and the user entity for better cyber security modeling along with ingesting firewall rules to determine pathways through a network

Publications (1)

Publication Number Publication Date
US20240031380A1 true US20240031380A1 (en) 2024-01-25

Family

ID=89076963

Family Applications (3)

Application Number Title Priority Date Filing Date
US18/207,061 Pending US20240031380A1 (en) 2022-06-09 2023-06-07 Unifying of the network device entity and the user entity for better cyber security modeling along with ingesting firewall rules to determine pathways through a network
US18/207,058 Pending US20230403296A1 (en) 2022-06-09 2023-06-07 Analyses and aggregation of domain behavior for email threat detection by a cyber security system
US18/207,059 Pending US20240121262A1 (en) 2022-06-09 2023-06-07 Endpoint agents and scalable cloud architecture for low latency classification

Family Applications After (2)

Application Number Title Priority Date Filing Date
US18/207,058 Pending US20230403296A1 (en) 2022-06-09 2023-06-07 Analyses and aggregation of domain behavior for email threat detection by a cyber security system
US18/207,059 Pending US20240121262A1 (en) 2022-06-09 2023-06-07 Endpoint agents and scalable cloud architecture for low latency classification

Country Status (2)

Country Link
US (3) US20240031380A1 (en)
WO (2) WO2023239813A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230024602A1 (en) * 2021-07-21 2023-01-26 Box, Inc. Identifying and resolving conflicts in access permissions during migration of data and user accounts
US20230289448A1 (en) * 2022-03-10 2023-09-14 Denso Corporation Securing software package composition information

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
US20240291842A1 (en) * 2023-02-23 2024-08-29 Reliaquest Holdings, Llc Threat mitigation system and method

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
US7624047B1 (en) * 2002-07-31 2009-11-24 Amazon Technologies, Inc. Managing server load by varying responses to requests for dynamically-generated web pages
US7472183B1 (en) * 2003-08-07 2008-12-30 Cisco Technology, Inc. Approaches for capturing illegal and undesired behavior in network components and component interactions
US9525606B1 (en) * 2014-09-04 2016-12-20 HCA Holdings, Inc. Differential processing of data streams based on protocols
US9697355B1 (en) * 2015-06-17 2017-07-04 Mission Secure, Inc. Cyber security for physical systems
US10218735B2 (en) * 2015-06-30 2019-02-26 The Mitre Corporation Network attack simulation systems and methods
EP3329373A1 (en) * 2015-07-29 2018-06-06 B+B Smartworx Limited An edge network device for a data network and a method of processing data in a data network
US11632382B2 (en) * 2017-05-15 2023-04-18 Forcepoint Llc Anomaly detection using endpoint counters
US12034767B2 (en) * 2019-08-29 2024-07-09 Darktrace Holdings Limited Artificial intelligence adversary red team
EP4111343A1 (en) * 2020-02-28 2023-01-04 Darktrace Holdings Limited An artificial intelligence adversary red team
US20220224724A1 (en) * 2021-01-08 2022-07-14 Darktrace Holdings Limited Artificial intelligence based analyst as an evaluator
EP4367839A1 (en) * 2021-07-07 2024-05-15 Darktrace Holdings Limited Cyber security system utilizing interactions between detected and hypothesize cyber-incidents

Cited By (3)

Publication number Priority date Publication date Assignee Title
US20230024602A1 (en) * 2021-07-21 2023-01-26 Box, Inc. Identifying and resolving conflicts in access permissions during migration of data and user accounts
US20230289448A1 (en) * 2022-03-10 2023-09-14 Denso Corporation Securing software package composition information
US12039056B2 (en) * 2022-03-10 2024-07-16 Denso Corporation Securing software package composition information

Also Published As

Publication number Publication date
WO2023239813A1 (en) 2023-12-14
US20240121262A1 (en) 2024-04-11
US20230403296A1 (en) 2023-12-14
WO2023239812A1 (en) 2023-12-14

Similar Documents

Publication Publication Date Title
US20240022595A1 (en) Method for sharing cybersecurity threat analysis and defensive measures amongst a community
US20210273953A1 (en) ENDPOINT AGENT CLIENT SENSORS (cSENSORS) AND ASSOCIATED INFRASTRUCTURES FOR EXTENDING NETWORK VISIBILITY IN AN ARTIFICIAL INTELLIGENCE (AI) THREAT DEFENSE ENVIRONMENT
US20230164158A1 (en) Interactive artificial intelligence-based response loop to a cyberattack
US20230009127A1 (en) Method for cyber threat risk analysis and mitigation in development environments
US20230336581A1 (en) Intelligent prioritization of assessment and remediation of common vulnerabilities and exposures for network nodes
US20240031380A1 (en) Unifying of the network device entity and the user entity for better cyber security modeling along with ingesting firewall rules to determine pathways through a network
US20230095415A1 (en) Helper agent and system
US20230283629A1 (en) Automated vulnerability and threat landscape analysis
US20240098100A1 (en) Automated sandbox generator for a cyber-attack exercise on a mimic network in a cloud environment
EP4154136A1 (en) Endpoint client sensors for extending network visibility
US20240045990A1 (en) Interactive cyber security user interface
WO2023283356A1 (en) Cyber security system utilizing interactions between detected and hypothesize cyber-incidents
US20240223596A1 (en) Large scale security data aggregation, with machine learning analysis and use of that security data aggregation
US20240223592A1 (en) Use of graph neural networks to classify, generate, and analyze synthetic cyber security incidents
US20230403294A1 (en) Cyber security restoration engine
US20240333743A1 (en) Generation of embeddings and use thereof for detection and cyber security analysis
WO2024035746A1 (en) A cyber security restoration engine

Legal Events

Date Code Title Description
AS Assignment

Owner name: DARKTRACE HOLDINGS LIMITED, UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:THOMSON, ALEXANDER FOX;WINGAR, JAMES REES;REEL/FRAME:063887/0825

Effective date: 20230606

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: DARKTRACE HOLDINGS LIMITED, UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LAL, JAKE;HOWLETT, GUY;THOMSON, ALEXANDER FOX;AND OTHERS;SIGNING DATES FROM 20230602 TO 20231018;REEL/FRAME:066563/0551