WO2020046260A1 - Process semantic based causal mapping for security monitoring and assessment of control networks - Google Patents


Info

Publication number
WO2020046260A1
Authority
WO
WIPO (PCT)
Application number
PCT/US2018/048047
Other languages
French (fr)
Inventor
Leandro Pfleger De Aguiar
Jiaxing PI
Dong Wei
Stefan Woronka
Original Assignee
Siemens Aktiengesellschaft
Siemens Corporation
Application filed by Siemens Aktiengesellschaft and Siemens Corporation
Priority to PCT/US2018/048047
Publication of WO2020046260A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/14: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1425: Traffic logging, e.g. anomaly detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50: Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55: Detecting local intrusion or implementing counter-measures
    • G06F21/552: Detecting local intrusion or implementing counter-measures involving long-term monitoring or reporting

Definitions

  • This application relates to network security. More particularly, this application relates to causal mapping for security monitoring and assessment of control networks.
  • ICS Industrial Control Systems
  • IT Information Technology
  • OT Operations Technology
  • ICS networks Due to aspects like vertical integration of the production systems and horizontal integration of the value chain, ICS networks are often directly or indirectly connected to IT networks (office network) and the Internet, hence offering an opportunity for cyber attackers to penetrate such environments and exploit any existing vulnerabilities.
  • ICS continue to be recognized as highly vulnerable due to the lack of basic protection features. Attacks targeting the ICS network can cause significant economic loss, human casualties and national security threatening incidents.
  • PLCs Programmable Logic Controllers
  • APTs Advanced Persistent Threats
  • ICS systems such as Distributed Control Systems (DCS), motion controllers, Supervisory Control and Data Acquisition (SCADA) servers, and Human Machine Interfaces (HMIs) offer additional challenges when it comes to deploying security measures.
  • DCS Distributed Control Systems
  • SCADA Supervisory Control and Data Acquisition
  • HMIs Human Machine Interfaces
  • Recent innovative approaches developed to improve the detection space in industrial systems include the ability to detect deviations in values observed from process variables/IO (i.e., sensor and actuator values) and to model the process trend data based on neural networks or autoregressive models.
  • Causality methods have been applied in fields such as energy management to construct models between different data sources. Such methods might suffer from the following major issues when monitoring production process specific data.
  • Embodiments of this disclosure construct a semantics-based causal graph for a cyber-physical system, such as an Industrial Control System (ICS), building automation control, traffic systems, energy control systems, or the like.
  • the causal graph may be composed of a single layer or multiple layers of system information.
  • collected process data typically represent the physical information and might include control logic.
  • Extraction of control system semantics includes determining how data collected from a single source may directly relate to the information collected from a different layer of the system.
  • PLC programmable logic controller
  • HMI human machine interface
  • FIG. 1 shows an example of a multilevel intrusion detection system for an Industrial Control System according to embodiments of this disclosure.
  • FIG. 2 shows an example of a PLC with an intrusion detection agent according to embodiments of this disclosure.
  • FIG. 3 is a diagram for an example of a central unit according to embodiments of the present disclosure.
  • FIG. 4 shows an example of a multilayer causal mapping according to embodiments of this disclosure.
  • FIG. 5 shows a diagram for an example of anomaly detection according to embodiments of this disclosure.
  • FIG. 6 is a diagram for an example of security assessment according to embodiments of the disclosure.
  • FIG. 7 is a diagram for an example of root cause analysis according to embodiments of the disclosure.
  • FIG. 8 shows an example of a computing environment within which embodiments of the disclosure may be implemented.
  • Methods and systems are disclosed for security monitoring and assessment of multilevel Industrial Control System devices.
  • the disclosed computer-implemented methods and systems present an improvement to the functionality of the computer used to perform such a computer based task. While currently available tools may detect an anomaly based on deviations in values observed from process variables of a sensed measurement or an actuator value, monitoring of production process specific data may fail to detect intrusion types that could evade conventional measures due to the inability to correlate data from sources of different domains or different system layers, or the inability to generalize analytical methods to different cyber-physical systems.
  • Data may be collected from multiple software agents placed at different levels of the control network, which may autonomously activate and execute data collection, and in some instances, transform the data from a fieldbus protocol to a communication protocol that is more conducive to causal analysis.
  • the embodiments of the present disclosure enable the intrusion detection system to be more robust, efficient, and effective than conventional means.
  • FIG. 1 is a block diagram for an example of an ICS according to embodiments of this disclosure.
  • an ICS 100 may have a plant wide structure that includes multiple control levels, such as a production scheduling control level 4, a production control level 3, a plant supervisory control level 2, a direct control level 1, and a field bus control level 0, as shown in FIG. 1.
  • Each of the control levels may communicate according to an industrial Ethernet protocol, controlled by routers or Ethernet switches at each level.
  • switch 135 is placed within the control network to control data packet routing between control levels 3 and 4.
  • the control level 4 components of the ICS 100 may include one or more production scheduling servers 141 as the highest level of control for the plant wide ICS 100.
  • the server 141 may be remotely located and connected to the ICS 100 via a network 143 such as the internet, and connected to other fleet plants via network 144.
  • a DMZ 145 may provide a firewall between the plant control network and the external network 143.
  • the control level 3 components of the ICS 100 may include one or more coordinating computers 131, and one or more web servers or central archiving servers 133.
  • An office network 132 may share a common router 135 with the control level 3 components, and may include one or more user terminals used by plant personnel to perform administrative functions that may be ancillary to plant control.
  • the office network 132 may present a vulnerability to the ICS 100 by way of external communication via network 143, such as the internet. For example, an office worker laptop could be victimized by a cyber attack and infected with malware that could later move laterally to potentially intercept and alter data packets in the ICS 100.
  • Control level 2 of the ICS 100 may perform a supervisory function for the network.
  • the level 2 components of the ICS 100 may include one or more SCADA servers 127, one or more historian units 125, an engineering workstation 121, and an HMI unit 123.
  • SCADA servers 127 are useful for remote access to level 1 controllers and may serve to provide overriding functionality at a supervisory level.
  • Historian units 125 may record and archive process data received from control components at lower levels for later retrieval and analysis.
  • Level 2 switches may control data packets for level 2 ICS components. For example, level 2 switches, such as switch 128, may be similarly placed within the ICS 100 for controlling other level 2 control components dedicated to different zones of the plant.
  • Control level 1 of the ICS 100 may include direct controllers responsible for controlling actions of field devices and for collecting sensor and measurement information related to the field devices.
  • Control level 1 may include one or more controllers 115, one or more PLCs 111, and one or more remote telemetry units (RTUs) 117.
  • Each of the PLCs 111 may be coupled to a data collector 113 for logging and storing historical and production data related to the field devices, such as to database storage.
  • a PLC 111 may perform scan cycles of inputs and outputs, which are stored as process images for access by the SCADA server 127. The outputs may be communicated to the operator at an HMI unit such as HMI unit 123.
  • Such data transmissions between control components at the control levels may be susceptible to a cyber attack, such as a manipulation of process view.
  • Control level 0 of the ICS 100 may include one or more field buses to which field devices, such as sensors and actuators, are connected.
  • the signals exchanged at the field bus may be referred to as process variables, including received control instructions from the level 0 control devices, and control feedback signals, such as instrument measurements and sensor readings, sent back to the level 0 control devices.
  • field device 102 may be controlled by controller 115, while field devices 104, 106 are controlled by PLC 111.
  • a control level 1 switch 114 may be implemented as an Ethernet router and/or gateway for exchanging data packets from control level 1 to control level 2.
  • switch 114 may include a gateway for conversion of PLC data to Ethernet based data for communication with higher control level ICS components, such as SCADA server 127.
  • the interface between the controllers, such as PLC 111, and the level 0 field devices may be a serial port protocol, such as Profibus RS-485 standard protocol, which is incompatible with Ethernet. While Ethernet or industrial Ethernet is described as one possible protocol for higher levels of the ICS 100, other data transfer protocols may be applied with conversion and switching as appropriate according to the same manner as described.
  • the data collection may include one or more network based implementations, which utilize high level detection tools, such as IDS/PDS units 136.
  • the IDS/PDS units 136 may be configured to read one or more communication protocols, such as Modbus, S7comm, Ethernet/IP, or the like.
  • Network based data collection may track origins and destinations of network data packets and detect anomalies based on signatures or unexpected behavior of network devices.
  • a table of data collection for communication packets between two devices, such as SCADA server 127 and historian 125 may indicate an expected throughput (e.g., 25kbps) during a particular time span (e.g., between 08:00 and 20:00 each day).
  • a network based anomaly may be indicated in response to detection of constraint violations, such as changing direction of data flow or maximum throughput being exceeded.
  • Other examples of network based anomaly detection may include monitoring relevant security alerts relative to performed functions and processing, such as execution of code and other performed events at different control levels.
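The constraint-based network anomaly checks described above (expected throughput, expected time span, and direction of data flow) can be sketched as follows. This is a minimal illustration: the `FlowConstraint` and `check_flow` helpers and the device names are assumptions, keyed only to the 25 kbps / 08:00-20:00 example in the text, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class FlowConstraint:
    # Expected behavior for packets between two devices,
    # e.g. SCADA server 127 -> historian 125.
    src: str
    dst: str
    max_kbps: float
    active_hours: range  # hours of day during which traffic is expected

def check_flow(constraint, src, dst, kbps, hour):
    """Return the list of constraint violations for one observed flow sample."""
    violations = []
    if (src, dst) != (constraint.src, constraint.dst):
        violations.append("unexpected direction")
    if kbps > constraint.max_kbps:
        violations.append("throughput exceeded")
    if hour not in constraint.active_hours:
        violations.append("outside expected time span")
    return violations

# Expected: SCADA -> historian at up to 25 kbps between 08:00 and 20:00.
c = FlowConstraint("scada_127", "historian_125", 25.0, range(8, 20))
```

Any non-empty result would be reported as a network-based anomaly.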
  • the data collection may include one or more host based implementations, which utilize placement of a local agent in a host device or at a network switch.
  • an agent 162 may be disposed in a host device, such as the PLC 111.
  • agent 162 may include a software function block 230 to implement data collection at the host device, such as extraction of process variables.
  • the function blocks of agent 162 may be executed by PLC processor 201.
  • the agent 162 may be implemented as an embedded computer with a separate microprocessor to execute the function blocks.
  • the host based data collection may be implemented as a separate unit, such as agent 161, connected to the memory of the host device, such as PLC memory 200.
  • While FIG. 1 shows the data collection functionality in one PLC 111 for centrally based causal analysis to detect potential cyberattacks on itself, the functionality can also be implemented in a way that one PLC assists the centrally based causal analysis in detecting potential cyberattacks on its peer PLCs, by collecting data from other PLCs.
  • the control program 225 includes the instructions executed by the PLC 111 for operation of connected field devices. Additionally, the control program 225 manages input/output, global variables, and access paths. A central unit in the network, such as central unit 152, 153 shown in FIG. 1, may be configured to analyze these input/output, global variables, and access paths as they are received or modified to identify conditions which may indicate that a malicious intrusion is taking place.
  • the data collection function block 231 may collect process data including physical information and control logic. Collected process data may include measurements observed by sensors of a production system. For example, conveyors of a packaging line may be measured to be operating in the speed range of 0.25 to 0.5 m/s when the production line is in a producing state. As another example during a production state, a controller may receive a speed setting of 0.4 m/s for the conveyor from an operator via the HMI 123. As PLC 111 may control a variety of devices, data collection function block 231 is configured to collect data associated with different types of process variables obtained from multiple sensors on field bus level 0. Each sensor data stream may be collected and stored as a time series data graph. Data collected by data collection function block 231 may directly relate to information collected from different levels of the control network, such as data from HMI 123 in level 2 and data collected from agent 160 at field bus level 0.
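The per-variable time-series collection described above can be sketched as below. The `DataCollector` class and the range check are illustrative assumptions keyed to the 0.25 to 0.5 m/s conveyor example, not an API from the disclosure.

```python
from collections import defaultdict

class DataCollector:
    """Minimal sketch of a data collection function block:
    one time series per process variable."""
    def __init__(self):
        self.series = defaultdict(list)  # variable name -> [(t, value), ...]

    def record(self, name, t, value):
        self.series[name].append((t, value))

    def values(self, name):
        return [v for _, v in self.series[name]]

def in_producing_range(speed, lo=0.25, hi=0.5):
    # Conveyor speed expected within 0.25 to 0.5 m/s while the line is producing.
    return lo <= speed <= hi

dc = DataCollector()
for t, v in enumerate([0.39, 0.40, 0.41, 0.40]):
    dc.record("conveyor_speed", t, v)
```

Each stored series would then feed the causal analysis described later in the disclosure.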
  • a combination of network-based data collection and host-based data collection may be implemented.
  • data collection of a network-based detection device, such as IDS unit 136, and data collection of one or more host-based devices, such as ID unit 160, may be monitored continuously, which can be used by a centrally based causal analysis of network anomalies and input/output (I/O) process anomalies for network security assessment.
  • I/O input/output
  • multiple agents may be deployed at different levels.
  • one or more agents may be placed at each control level, such as agent 142 in control level 4 server 141, agent 134 in switch 135 for control level 3, agents 122, 175, 176, 177, 178 in control level 2, agent 161/162 in control level 1, and agent 160 at control level 0.
  • an agent may be deployed at each control level in at least one control level component or switch for the control level.
  • the device or switch in which the agent may be deployed can either be the same type of component, or may be a variety of component types.
  • the agents may consist of agent 142 at control level 4 server 141, agents 134, 164, 174 at network switches for control levels 1, 2, and 3, and agent 160 for control level 0.
  • An agent may be a free standing computer connected to a control level component, or may be an embedded system within a control level component.
  • an agent may be implemented as an industrial personal computer (PC), such as a ruggedized PC, connected to a network switch, such as agent 163.
  • the agent may include a software application installed on memory within the unit, programmed to execute a local data collection function.
  • Because control level 0 devices may communicate over a serial protocol, such as Profibus, standard IT protocol detection is not compatible, and a transformation of the signals is required.
  • the agent 160 at control level 0 may therefore include a transformation component (e.g., a gateway) to translate the extracted data to a protocol (e.g., Ethernet) useable by the system for causal analysis with higher control level data.
  • network data packets may be encrypted and each agent is configured to have access to the encryption key(s) in order to decrypt the data packets.
  • one or more agents may be configured to perform data collection.
  • the agent 161, 162 may periodically scan the memory of the control component, such as the PLC 111.
  • FIG. 2 shows a diagram for an example of a data collection agent according to embodiments of the disclosure.
  • Agent 161, 162 may be implemented for PLC 111 having a processor 201 and a memory 200, which may include a process image input table (PII) 210, a process image output table (PIQ) 220, and a control program 225.
  • PII process image input table
  • PIQ process image output table
  • processor 201 may read the status of inputs 210, such as sensor value 80 at address A1, and execute the control program 225 using the status of the inputs. Output values generated by the control program are stored in the output table 220. For example, the output value 10 at address B1 is the result of the corresponding input value 80. The processor 201 may then scan the output table, and send the output values to the level 0 field devices.
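The scan cycle described above (read the input image, execute the control program, write the output image) can be sketched as follows. The `example_program` logic mapping input 80 at A1 to output 10 at B1 is a hypothetical stand-in for real control logic; only the input/output values come from the text.

```python
def scan_cycle(input_table, control_program):
    """One PLC scan: snapshot the input image (PII), execute the control
    program against it, and return the resulting output image (PIQ)."""
    inputs = dict(input_table)  # snapshot of the process image input table
    output_table = control_program(inputs)
    return output_table

# Hypothetical control logic reproducing the example: input 80 at A1 -> output 10 at B1.
def example_program(inputs):
    return {"B1": inputs["A1"] // 8}

out = scan_cycle({"A1": 80}, example_program)
```

The agent described next would read these tables between scans.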
  • An agent, such as agent 161 or agent 162, may periodically scan the input table 210 and output table 220 of the PLC memory 200. To conserve system resources, the agent active scan may implement a discriminate scan of the input and output tables.
  • the periodicity of the active scan may be determined based on a learning algorithm that optimizes data collection relative to a threshold of excessive data dumping and processor usage of the PLC 111.
  • the agents may be programmed to limit the types of information to actively scan from the stored data, which may be identified by the source or the address for example.
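A discriminate, periodic scan of selected addresses might look like the sketch below. The `AgentScanner` class is an assumption, and the doubling back-off in `adapt_period` is only a crude stand-in for the learning algorithm the text mentions.

```python
class AgentScanner:
    """Sketch of a host-based agent that periodically scans only a
    selected subset of PLC memory addresses."""
    def __init__(self, watched_addresses, period_s=1.0):
        self.watched = set(watched_addresses)
        self.period_s = period_s

    def scan(self, memory):
        # Discriminate scan: copy only the watched subset of the process image.
        return {addr: val for addr, val in memory.items() if addr in self.watched}

    def adapt_period(self, cpu_load, load_threshold=0.8):
        # Crude stand-in for the learning algorithm: back off when the PLC is busy.
        if cpu_load > load_threshold:
            self.period_s *= 2
        return self.period_s

agent = AgentScanner({"A1", "B1"}, period_s=1.0)
snapshot = agent.scan({"A1": 80, "A2": 5, "B1": 10})
```

Limiting the scan to watched addresses keeps data volume and PLC processor usage bounded, as the surrounding paragraphs require.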
  • control components of the network may be coupled using an open platform communication (OPC) protocol.
  • OPC open platform communication
  • By collecting data at OPC interfaces, such as link 190 between historian unit 125 and PLC 111 (e.g., a wireless protocol as shown in FIG. 1, or a wired protocol), using multiple agents, a distributed process history can be accumulated, from which one or more causal graphs can reveal anomalies that may be analyzed as a potential cyber attack.
  • One advantage of data collection at OPC interfaces for historian units is that cyber attacks typically alter data for presentation at a user access point, such as displayed data at an HMI unit for an operator, but fail to alter the data recorded at a historian unit.
  • As the agents collect data among the multiple control levels in real time during plant operations, the data may be analyzed by a central unit, such as CU 152, 153, according to one or more embodiments in order to search for anomalies based on causal graphs.
  • a central unit may extract process semantics from collected data received from each agent that corresponds to field device 160, and compare the data values in search of any inconsistency that could be an indication of an anomaly due to a cyber attack.
  • This process may be performed by the central unit at each scan for multiple field devices, or other types of corresponding data retrieved by the agents.
  • the central unit may be implemented as an embedded system within any of the control level components.
  • FIG. 3 is a diagram for an example of a central unit according to embodiments of the present disclosure.
  • a central unit 301 may include a processor 302 and a memory 304 with application programs 310 executable by the processor 302.
  • Each application program may be stored as a module, such as a causal mapping module 311, an anomaly detection module 313, a security assessment module 315, a root cause module 317, and an alert module 319.
  • the central unit 301 corresponds to the central units 152, 153 shown and described above with respect to FIG. 1.
  • the causal mapping module 311 may construct a causal graph based on the collected process variables from the distributed agents 321 for analysis of causal relations of control network behavior and events. To develop the causal relations, the causal mapping module 311 may extract semantics from the collected process variable values and, if applicable, control logic of the control device. To illustrate the extraction of process semantics, an example is now described in which PLC 111 controls operations related to a water tank. Agent 162 may collect process variables for tank level sensor measurement, water temperature sensor measurement, and control variables for heater status, temperature setpoint, and valve status. The central unit 152 may read the collected data over time as a stream of values, and read control logic commands stored in PLC 111 memory.
  • the causal mapping module of the central unit 152 may identify the corresponding process variable by the behavior of values for the collected data over time as one of the following: remains constant (maps to setpoints); fluctuates among a set of discrete values (maps to a heater or valve state); varies gradually and continuously (maps to sensor measurements).
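The behavior-based mapping above can be sketched with a simple heuristic. The distinct-value threshold is an assumed parameter, not taken from the disclosure.

```python
def classify_variable(values, max_discrete=5):
    """Heuristic mapping of a value stream to a process variable type:
    constant -> setpoint; few discrete values -> actuator state (e.g. heater
    or valve); otherwise a gradually varying sensor measurement."""
    distinct = set(values)
    if len(distinct) == 1:
        return "setpoint"
    if len(distinct) <= max_discrete:
        return "state"
    return "sensor"
```

A real system would likely add smoothness tests for the "varies gradually and continuously" case; this sketch only counts distinct values.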
  • the causal mapping module 311 may map each process variable obtained from the agents to a node while constructing the causal graph, and define directional edges connecting nodes to represent the causal relationship between the nodes.
  • the causal mapping module 311 may calculate a pairwise causality measurement between nodes to determine the weight of the connecting edge.
  • Information propagation may be extracted by the causal mapping module 311 based on causal relationships between the nodes representing process related variables.
  • causality estimation methods implemented include, but are not limited to, Granger causality, transfer entropy and transfer entropy variations, and the like.
  • causal relationships are directional and reflect how one process variable has impact on its connected neighbor variables. Therefore, by observing how the information is propagated through the causal graph, further analysis is available that is not possible using only correlation information, such as root cause analysis and identification of critical nodes for security assessment of a control system.
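Building the weighted, directed causal graph from pairwise causality measurements can be sketched as follows. For brevity, a lagged absolute correlation stands in for Granger causality or transfer entropy, and the threshold and toy setpoint/sensor series are assumptions.

```python
def lagged_dependence(x, y, lag=1):
    """Stand-in causality score: absolute correlation between x[t-lag] and y[t].
    A real implementation would use Granger causality or transfer entropy."""
    xs, ys = x[:-lag], y[lag:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return abs(cov / (vx * vy)) if vx and vy else 0.0

def build_causal_graph(series, threshold=0.9):
    """Directed edge u -> v, weighted by the pairwise score, kept above a threshold."""
    edges = {}
    for u, xu in series.items():
        for v, xv in series.items():
            if u != v:
                w = lagged_dependence(xu, xv)
                if w >= threshold:
                    edges[(u, v)] = w
    return edges

setpoint = [40, 40, 40, 41, 41, 41, 40, 40]
sensor = [0] + setpoint[:-1]  # the sensor trails the setpoint by one step
graph = build_causal_graph({"setpoint": setpoint, "sensor": sensor})
```

Because the score is computed on lagged values, only the setpoint-to-sensor direction survives the threshold, which is what makes the resulting edges directional.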
  • Some causality measurement methods do not consider the causal impact from one variable to others. For example, a potential indirect causal relationship may be revealed when causality measurements indicate that x → y, y → z and x → z, as it must be determined whether z is impacted directly by x or indirectly through the influence of y.
  • the causal mapping module 311 may maintain the same dynamics, without a decision as to direct or indirect causal relationship.
  • the causal mapping module 311 may perform a determination analysis to classify the relationship as direct or indirect causality.
  • When causal relations x → y, y → z and x → z are observed and z has an event for which the root cause needs to be found, the direct impact from other variables needs to be quantified. In this case, whether the impact from x to z is through y is undecided. Therefore, a causality measurement method that finds the direct causal relation between two variables, such as a direct transfer entropy (DTE) method, may be applied by the causal mapping module 311 to decide whether x has impact on z directly or through the intermediate variable y.
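A sketch of the direct-versus-indirect decision is shown below. Instead of a true direct transfer entropy computation, it uses linear residualization on the intermediate variable y as an illustrative stand-in: if x's lagged influence on z disappears once y is accounted for, the x → z relation is treated as indirect. All helper names and series are assumptions.

```python
def _corr(a, b):
    """Pearson correlation of two equal-length sequences (0.0 if degenerate)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    va = sum((p - ma) ** 2 for p in a) ** 0.5
    vb = sum((q - mb) ** 2 for q in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def residualize(target, regressor):
    """Remove the best 1-D linear fit of regressor from target."""
    n = len(target)
    mt, mr = sum(target) / n, sum(regressor) / n
    num = sum((r - mr) * (t - mt) for r, t in zip(regressor, target))
    den = sum((r - mr) ** 2 for r in regressor)
    beta = num / den if den else 0.0
    return [t - mt - beta * (r - mr) for r, t in zip(regressor, target)]

def direct_score(x, z, y, lag=2):
    """Stand-in for DTE: dependence of z[t] on x[t-lag] after removing
    the influence of the intermediate variable y[t-1]."""
    z_t, x_lag, y_mid = z[lag:], x[:-lag], y[lag - 1:-1]
    return abs(_corr(residualize(x_lag, y_mid), residualize(z_t, y_mid)))

x = [0.0, 1.0, 0.0, 2.0, 1.0, 3.0, 0.0, 2.0, 1.0, 3.0]
y = [0.0] + x[:-1]  # y lags x by one step
z = [0.0] + y[:-1]  # z lags y by one step, so x -> z is purely indirect
```

With the chain x → y → z, the residual influence of x on z vanishes once y is removed; replacing y with an unrelated constant restores the full dependence.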
  • DTE direct transfer entropy
  • FIG. 4 shows an example of a multilayer causal mapping according to embodiments of this disclosure.
  • One or more layers 401, 402, 403 of causal graphs may be constructed by the causal mapping module 311 based on the process data collected by the agents 321.
  • an intra-device layer 401 causal graph may be constructed based on data collected by an agent associated with one device, such as data inside a computer being transformed.
  • a central unit may analyze input and output data collected by an agent and extract semantics based on the expected data flow to generate the causal graph at intra-device layer 401.
  • agent 162 in PLC 111 shown in FIG. 1 may collect input process image 210 and output process image 220, from which causal mapping module 311 of central unit 301 may extract semantics related to process variables for a production component, such as sensor measurement input and a control signal output to the production component.
  • the sensor measurement may relate to a water level reading
  • the control signal output may be a value corresponding to open/close state for a valve to control the water level.
  • the extracted semantic for this operation may be the rule applied by the control logic to change the valve state once a control limit for the water level is reached. From this rule based semantic, a causal mapping may generate a causal map at intra-device layer 401 for this device (i.e., the valve control). Anomaly detection for the device may then be based on detection of changes to the causal graph at intra-device layer 401.
  • an intra-network layer 402 causal graph may be constructed based on process semantics extracted from the network traffic flow within the network.
  • the causal mapping module 311 may extract process semantics based on observation of direction of data flow between agents of devices disposed at different control levels, for example, process variable data flowing from field device 104 (e.g., a temperature sensor) to PLC 111 to HMI 123, or from field devices 104, 106 to PLC 111.
  • an extra-network layer 403 causal graph may be constructed, which monitors the data flowing from a first control system related to a first industrial or production process to another control system related to a different industrial or production process.
  • a causal mapping module 31 1 of the first control system may receive process variable data from certain agents at control devices that interface with control system B.
  • the causal mapping module 311 may extract process semantics from such extra-network process variable data, from which an extra-network layer 403 causal graph may then be constructed.
  • the layers 401, 402, 403 may overlap where a common node represents a process variable collected from a control device common to two or more layers.
  • Causal analysis and security assessment may be applied to one of the causal graph layers 401, 402, 403, or any combination thereof.
  • false anomalies may be detected where overlap of the layers exists, and inconsistent predictions of anomalous nodes may be an indication that a predicted anomaly is erroneous.
  • historical data of the intra-network layer 402 causal map may produce more reliable anomaly detection compared with anomaly detections based on the intra-device layer 401 causal graph with respect to process variables modeled for a particular control device.
  • an anomaly detection from the intra-device layer 401 causal graph may be given less weight to trigger an alert signal when no anomaly detection for the same process variable is presented from the intra-network layer 402 causal map. Accordingly, the implementation of multiple causal mapping layers 401, 402, 403 provides more reliable security monitoring than conventional methods.
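The layer-weighting idea above can be sketched as a simple weighted fusion of per-layer detections. The weight values and threshold are assumptions chosen so that an intra-device detection alone stays below the alert threshold unless the intra-network layer confirms it.

```python
def fuse_layer_alerts(detections, weights=None):
    """Combine per-layer anomaly flags into one alert score.

    detections: {layer_name: bool}. An intra-device detection alone carries
    less weight than one confirmed by the intra-network layer.
    """
    weights = weights or {"intra_device": 0.3, "intra_network": 0.5, "extra_network": 0.2}
    return sum(weights[layer] for layer, hit in detections.items() if hit)

ALERT_THRESHOLD = 0.5
score_unconfirmed = fuse_layer_alerts({"intra_device": True, "intra_network": False})
score_confirmed = fuse_layer_alerts({"intra_device": True, "intra_network": True})
```

Only the confirmed case crosses the threshold and would trigger an alert signal.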
  • the anomaly detection module 313 may analyze dynamics of the causal graph over time to detect an edge or a group of edges in the causal map demonstrating abnormal weight evolution over time.
  • An example of a type of temporal analysis for the anomaly detection is temporal graph mining.
  • FIG. 5 shows a diagram for an example of anomaly detection according to embodiments of this disclosure.
  • a group of nodes and edges 510 from a portion of a causal map includes edges 502a, 504a, and 506a.
  • the edge weights are monitored by the anomaly detection module 313, and an anomalous edge weight is detected for edge 502b, between nodes 501 and 503, compared to edge weights 504b and 506b.
  • the anomaly detection module 313 may trigger an alert from alert module 319, which may include a visual indication on a causal graph rendering, such as a color change or other form of highlighting of the edge or edge weight value 502b, on a display device available to an operator.
  • the anomaly detection may be an indication of a potential intrusion in the control system by a cyber attacker.
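The edge-weight monitoring of FIG. 5 can be sketched as a per-edge deviation test. The z-score rule and the toy weight histories are assumptions standing in for temporal graph mining.

```python
def anomalous_edges(history, current, k=3.0):
    """Flag edges whose current weight deviates from their own history by
    more than k standard deviations."""
    flagged = []
    for edge, weights in history.items():
        n = len(weights)
        mean = sum(weights) / n
        std = (sum((w - mean) ** 2 for w in weights) / n) ** 0.5
        if std == 0:
            if current[edge] != mean:
                flagged.append(edge)
        elif abs(current[edge] - mean) > k * std:
            flagged.append(edge)
    return flagged

# Hypothetical histories for two edges; the 501 -> 503 edge jumps abruptly.
history = {
    ("501", "503"): [0.30, 0.31, 0.29, 0.30],
    ("503", "505"): [0.50, 0.51, 0.49, 0.50],
}
current = {("501", "503"): 0.90, ("503", "505"): 0.50}
flags = anomalous_edges(history, current)
```

Each flagged edge would then be highlighted on the causal graph rendering and trigger the alert module.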
  • the security assessment module 315 may perform a security assessment for a target node in a causal graph.
  • the security assessment module 315 may monitor a causal graph to track vulnerability of the system by assessing target nodes that meet or exceed criticality thresholds, such as number of causal relationships that are direct, indirect, or a combination thereof.
  • the security assessment module 315 may include functionality to annotate a rendering of the causal graph with device vulnerability information as a result of an intermediate step of system vulnerability scanning.
  • the annotated vulnerability information for a target node may include a score value indicating level of criticality reached within a defined scale or range containing the allowable threshold.
  • a secondary causal graph might be used and correlated with the main causal graph to detect lateral moves.
  • the security assessment module 315 may include functionality to enable a visual tool for the security assessment, such as shown in FIG. 6.
  • On a rendering of a portion of the causal graph, node 601 may be selected by an operator as a query for security assessment, such as by using a graphical user interface to select node 601 with a graphical tool (e.g., clicking on a graphical representation of the node on a display). The security assessment module 315 may respond by highlighting the selected node 601 (e.g., rendering the node in a particular color, such as red, or by a similar highlighting indication), and also highlighting affected nodes 602, 603, and 604 (e.g., rendering affected nodes in a different color than node 601, such as blue, or by a similar highlighting indication).
  • Affected nodes are determined by the security assessment module 315 based on propagation edges between node 601 and affected nodes, which would be impacted by any change to node 601.
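A minimal sketch of how affected nodes might be determined and color-coded in response to such a query (the function name, node identifiers, and color choices are illustrative assumptions): affected nodes are all nodes reachable from the selected node along propagation edges.

```python
def assess_target(edges, selected):
    """Highlight a selected node and every node reachable from it along
    propagation edges (i.e., the nodes impacted by any change to it).

    edges: dict mapping a node to the list of nodes its changes propagate to.
    Returns a node -> color map for the rendering (colors illustrative).
    """
    affected, stack = set(), [selected]
    while stack:
        node = stack.pop()
        for nxt in edges.get(node, []):
            if nxt not in affected and nxt != selected:
                affected.add(nxt)
                stack.append(nxt)
    colors = {selected: "red"}
    colors.update({n: "blue" for n in affected})
    return colors
```

The depth-first traversal captures both direct and indirect propagation, matching the transitive nature of causal influence in the graph.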
  • the security assessment of the causal graph provides an indication of how information is propagated from one node to other components, such that asset criticality relationships in the ICS may be derived.
  • the impact on the whole network of a single component being security compromised may be derived, no matter how large the cyber-physical network.
  • the root cause module 317 may respond to an anomaly detected by anomaly detection module 313 by performing an analysis of propagation from the anomalous map entity to the causal source node.
  • FIG. 7 is a diagram for an example of root cause analysis according to embodiments of the disclosure.
  • a cluster of nodes from a portion of the entire causal graph may be observed by the root cause module 317 over three time period evolutions 710a, 710b and 710c.
  • an anomalous node may be detected by anomaly detection module 313 (e.g., by applying a temporal graph mining technique).
  • root cause module 317 may determine the propagation of the root cause by following the causal graph from node 701 in the opposite direction of the causal relationship (i.e., against the directional arrows) along the path of the causal graph in 710c until a dissimilar causality measurement is detected over a time sequence. Moving from node 701 to node 703, the causality measurement value 702 is observed between times 710a and 710b, and the values 0.3 and 0.4 are determined to be relatively similar. Next, moving from node 703 to node 705, the causality measurements 704 are again relatively similar.
  • the root cause module 317 may infer that an anomalous node 705 affected the pairwise causality measurement to node 707, and as such the root cause is assigned to node 705. As a graphical aid, the root cause module 317 may render the nodes 701, 703, and 705 differently (e.g., according to a different color scheme) than other local nodes as an indication of the propagation from the root cause node 705 to the affected nodes 703 and 701.
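The upstream walk just described might be sketched as follows, under simplifying assumptions (a single upstream parent per node, two time snapshots of edge weights, and a fixed similarity tolerance — none of which is prescribed by the disclosure):

```python
def trace_root_cause(parents, weights_t0, weights_t1, start, tol=0.15):
    """Walk upstream from an anomalous node, against the causal arrows,
    until the pairwise causality measurement on the incoming edge changes
    between two time snapshots by more than `tol`; the node whose incoming
    measurement changed is assigned as the root cause.

    parents: node -> upstream (causal source) node, or None at a source.
    weights_t0/weights_t1: (src, dst) -> causality measurement per snapshot.
    """
    node = start
    while parents.get(node) is not None:
        src = parents[node]
        w0, w1 = weights_t0[(src, node)], weights_t1[(src, node)]
        if abs(w1 - w0) > tol:
            return node          # this node's incoming causality changed
        node = src               # measurements similar: keep moving upstream
    return node                  # reached a causal source node
```

With the FIG. 7 values (0.3 vs. 0.4 on edge 702, similar values on edge 704, and a large change on the edge from node 707 into node 705), the walk stops at node 705, matching the assignment above.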
  • the alert module 319 may include predetermined thresholds for triggering an alert signal or message in response to one or more anomalies being detected that surpass the thresholds. For example, a detection of edge value change that exceeds a threshold may trigger an alert to indicate a detected anomalous edge evolution.
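The edge-evolution threshold check might be sketched as below; the function name, the dict-of-edge-weights representation, and the default threshold value are assumptions for illustration.

```python
def edge_alerts(baseline, current, threshold=0.2):
    """Return the edges whose weight changed by more than `threshold`
    relative to the baseline, as candidate anomalous edge evolutions
    that should trigger an alert.

    baseline/current: dict mapping (src, dst) edges to edge weights.
    """
    return [edge for edge, weight in current.items()
            if abs(weight - baseline.get(edge, 0.0)) > threshold]
```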
  • the alert module 319 may also trigger alert messages in response to a detected anomaly by anomaly module 313 or in response to a root cause detection by root cause module 317. In response to a triggered alert, the alert module 319 may generate the alert signal or message for display to an operator at a user terminal display device, such as the HMI 123.
  • a central unit may be deployed in a cloud-based implementation, such as a cloud server.
  • a cloud server may be configured to run a product data management service, such as MindSphere, to which the production of network 100 is tied.
  • the ICS 100 may utilize the service with the central unit extension to additionally incorporate the data retrieved by the multilevel agents and to perform the root cause and security assessment at the cloud server.
  • the control level 3 server 141 may deploy a central unit 152 which can be utilized to implement fleet-level intrusion detection by collecting data from agents deployed at multiple control levels at other plants in a similar manner as shown for ICS 100. Applying fleet-level analytics may include monitoring and comparing similar process setups or identical equipment running on different plant sites, or for different customers.
  • a central correlation unit may be deployed in a plant-level network server located on-premises, such as central unit 153 in network server 133.
  • the central unit may be implemented as an embedded system with a dedicated processor, or by sharing an existing processor in the network server 133.
  • the causal mapping for the multilevel agents may be implemented as a distributed network of smart agents.
  • agents deployed at multiple levels may each be equipped with communication means, such as a transceiver, to communicate peer-to-peer (P2P) to form a network, such as wireless local area network (WLAN).
  • the agents may be configured as nodes of a network virtualization to form an overlay network, such as a software defined network (SDN), which would be invisible to a cyber attacker.
  • a P2P, overlay, or virtual network may allow each agent to receive the data from the other agents, and each agent may be equipped to independently execute a causal analysis that compares its own data to corresponding data received from the other agents. From the comparison, each agent may determine any anomalous readings, mismatched or unexpected values as an indication of a potential cyber attack.
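One way such a per-agent comparison might look, assuming each agent sees peer-reported values for the same process variable and flags readings far from the peer consensus (the median-based rule and tolerance are illustrative assumptions, not the disclosed method):

```python
import statistics

def peer_consistency_check(own_id, readings, tol=0.05):
    """An agent compares its own value for a shared process variable with
    the values received from its peers; a reading far from the peer median
    is flagged as anomalous (a potential indication of a cyber attack).

    readings: dict mapping agent_id -> reported value for the variable.
    """
    median = statistics.median(readings.values())
    return abs(readings[own_id] - median) > tol * max(abs(median), 1.0)
```

Because every agent runs the same check independently on peer-to-peer data, a compromised node that reports manipulated values can be flagged by its uncompromised peers.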
  • the analysis of the time series generated by the agents deployed at various multilevel collection points, together with continuous causal analysis by the central unit(s), allows for instant association of such monitoring points and their in-memory representations for monitoring.
  • stream analytics or edge analytics methods may be utilized by a central unit by tagging the agents, and by mapping dependencies through machine learning, which can define a baseline of normal behavior for subsequent anomaly detection.
  • the causal mapping module 311 of the central units may construct a baseline graph during normal operations. Real-time traffic may then be compared to the baseline graph to detect anomalies. Accordingly, the anomaly detection module 313 may detect both single process variable anomalies as well as discoordination of different process variables.
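A sketch of the baseline comparison, under the assumption that both graphs are represented as edge-weight dictionaries (the function name, report structure, and drift tolerance are illustrative): edges that vanish, appear unexpectedly, or drift in weight are the candidate signs of discoordination between process variables.

```python
def compare_to_baseline(baseline, observed, drift=0.2):
    """Compare a real-time causal graph against the baseline graph built
    during normal operations.  Reports edges that vanished, edges that
    appeared unexpectedly, and edges whose weight drifted beyond `drift`.

    baseline/observed: dict mapping (src, dst) edges to edge weights.
    """
    missing = sorted(set(baseline) - set(observed))
    unexpected = sorted(set(observed) - set(baseline))
    drifted = sorted(e for e in set(baseline) & set(observed)
                     if abs(observed[e] - baseline[e]) > drift)
    return {"missing": missing, "unexpected": unexpected, "drifted": drifted}
```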
  • the security assessment may be implemented as an automated process.
  • an automated method of anomaly detection may detect process variable dependencies and causal relations based on a symbolic or simulated execution of extracted control logic. Additional tracking may be performed relating to user activities, such as user interactions at an HMI unit 123 or an engineering workstation 121. Accordingly, ICS 100 anomalies can be used to back trace a root cause of the cyber attack by tracing the propagation along the causal graph with network and user activities.
  • FIG. 8 illustrates an example of a computing environment within which embodiments of the present disclosure may be implemented.
  • a computing environment 800 includes a computer system 810 that may include a communication mechanism such as a system bus 821 or other communication mechanism for communicating information within the computer system 810.
  • the computer system 810 further includes one or more processors 820 coupled with the system bus 821 for processing the information.
  • computing environment 800 corresponds to a portion of an ICS, in which the computer system 810 relates to a central unit 301 described below in greater detail.
  • the processors 820 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as described herein is a device for executing machine- readable instructions stored on a computer readable medium, for performing tasks and may comprise any one or combination of, hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device.
  • a processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer.
  • a processor may include any type of suitable processing unit including, but not limited to, a central processing unit, a microprocessor, a Reduced Instruction Set Computer (RISC) microprocessor, a Complex Instruction Set Computer (CISC) microprocessor, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a System-on-a-Chip (SoC), a digital signal processor (DSP), and so forth.
  • processor(s) 820 may have any suitable microarchitecture design that includes any number of constituent components such as, for example, registers, multiplexers, arithmetic logic units, cache controllers for controlling read/write operations to cache memory, branch predictors, or the like.
  • the microarchitecture design of the processor may be capable of supporting any of a variety of instruction sets.
  • a processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between.
  • a user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof.
  • a user interface comprises one or more display images enabling user interaction with a processor or other device.
  • the system bus 821 may include at least one of a system bus, a memory bus, an address bus, or a message bus, and may permit exchange of information (e.g., data (including computer-executable code), signaling, etc.) between various components of the computer system 810.
  • the system bus 821 may include, without limitation, a memory bus or a memory controller, a peripheral bus, an accelerated graphics port, and so forth.
  • the system bus 821 may be associated with any suitable bus architecture including, without limitation, an Industry Standard Architecture (ISA), a Micro Channel Architecture (MCA), an Enhanced ISA (EISA), a Video Electronics Standards Association (VESA) architecture, an Accelerated Graphics Port (AGP) architecture, a Peripheral Component Interconnects (PCI) architecture, a PCI-Express architecture, a Personal Computer Memory Card International Association (PCMCIA) architecture, a Universal Serial Bus (USB) architecture, and so forth.
  • the computer system 810 may also include a system memory 830 coupled to the system bus 821 for storing information and instructions to be executed by processors 820.
  • the system memory 830 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 831 and/or random access memory (RAM) 832.
  • the RAM 832 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM).
  • the ROM 831 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM).
  • system memory 830 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 820.
  • a basic input/output system 833 (BIOS) containing the basic routines that help to transfer information between elements within computer system 810, such as during start-up, may be stored in the ROM 831.
  • RAM 832 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 820.
  • System memory 830 may additionally include, for example, operating system 834, application programs 835, and other program modules 836.
  • Application programs 835 may also include a user portal for development of the application program, allowing input parameters to be entered and modified as necessary.
  • the operating system 834 may be loaded into the memory 830 and may provide an interface between other application software executing on the computer system 810 and hardware resources of the computer system 810. More specifically, the operating system 834 may include a set of computer-executable instructions for managing hardware resources of the computer system 810 and for providing common services to other application programs (e.g., managing memory allocation among various application programs). In certain example embodiments, the operating system 834 may control execution of one or more of the program modules depicted as being stored in the data storage 840.
  • the operating system 834 may include any operating system now known or which may be developed in the future including, but not limited to, any server operating system, any mainframe operating system, or any other proprietary or non-proprietary operating system.
  • the computer system 810 may also include a disk/media controller 843 coupled to the system bus 821 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 841 and/or a removable media drive 842 (e.g., floppy disk drive, compact disc drive, tape drive, flash drive, and/or solid state drive).
  • Storage devices 840 may be added to the computer system 810 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).
  • Storage devices 841, 842 may be external to the computer system 810.
  • the computer system 810 may include a user input interface or graphical user interface (GUI) 861, which may comprise one or more input devices, such as a keyboard, touchscreen, tablet and/or a pointing device, for interacting with a computer user and providing information to the processors 820.
  • the computer system 810 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 820 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 830. Such instructions may be read into the system memory 830 from another computer readable medium of storage 840, such as the magnetic hard disk 841 or the removable media drive 842.
  • the magnetic hard disk 841 and/or removable media drive 842 may contain one or more data stores and data files used by embodiments of the present disclosure.
  • the data store 840 may include, but is not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed data stores in which data is stored on more than one node of a computer network, peer-to-peer network data stores, or the like. Data store contents and data files may be encrypted to improve security.
  • the processors 820 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 830.
  • hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
  • the computer system 810 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein.
  • the term “computer readable medium” as used herein refers to any medium that participates in providing instructions to the processors 820 for execution.
  • a computer readable medium may take many forms including, but not limited to, non-transitory, non-volatile media, volatile media, and transmission media.
  • Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as magnetic hard disk 841 or removable media drive 842.
  • Non-limiting examples of volatile media include dynamic memory, such as system memory 830.
  • Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the system bus 821.
  • Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
  • Computer readable medium instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
  • the computing environment 800 may further include the computer system 810 operating in a networked environment using logical connections to one or more remote computers, such as remote computing device 880 and remote agents 881.
  • remote computing devices 880 may correspond to at least one additional central unit 301 in the ICS.
  • the network interface 870 may enable communication, for example, with other remote devices 880 or systems and/or the storage devices 841, 842 via the network 871.
  • Remote computing device 880 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 810.
  • computer system 810 may include modem 872 for establishing communications over a network 871, such as the Internet. Modem 872 may be connected to system bus 821 via user network interface 870, or via another appropriate mechanism.
  • Network 871 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 810 and other computers (e.g., remote computing device 880).
  • the network 871 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-6, or any other wired connection generally known in the art.
  • Wireless connections may be implemented using Wi-Fi, WiMAX, and Bluetooth, infrared, cellular networks, satellite or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 871.
  • program modules, applications, computer- executable instructions, code, or the like depicted in FIG. 8 as being stored in the system memory 830 are merely illustrative and not exhaustive and that processing described as being supported by any particular module may alternatively be distributed across multiple modules or performed by a different module.
  • various program module(s), script(s), plug-in(s), Application Programming Interface(s) (API(s)), or any other suitable computer-executable code hosted locally on the computer system 810, the remote device 880, and/or hosted on other computing device(s) accessible via one or more of the network(s) 871 may be provided to support functionality provided by the program modules, applications, or computer-executable code depicted in FIG.
  • functionality may be modularized differently such that processing described as being supported collectively by the collection of program modules depicted in FIG. 8 may be performed by a fewer or greater number of modules, or functionality described as being supported by any particular module may be supported, at least in part, by another module.
  • program modules that support the functionality described herein may form part of one or more applications executable across any number of systems or devices in accordance with any suitable computing model such as, for example, a client-server model, a peer- to-peer model, and so forth.
  • any of the functionality described as being supported by any of the program modules depicted in FIG. 8 may be implemented, at least partially, in hardware and/or firmware across any number of devices.
  • the computer system 810 may include alternate and/or additional hardware, software, or firmware components beyond those described or depicted without departing from the scope of the disclosure. More particularly, it should be appreciated that software, firmware, or hardware components depicted as forming part of the computer system 810 are merely illustrative and that some components may not be present or additional components may be provided in various embodiments. While various illustrative program modules have been depicted and described as software modules stored in system memory 830, it should be appreciated that functionality described as being supported by the program modules may be enabled by any combination of hardware, software, and/or firmware. It should further be appreciated that each of the above-mentioned modules may, in various embodiments, represent a logical partitioning of supported functionality.
  • This logical partitioning is depicted for ease of explanation of the functionality and may not be representative of the structure of software, hardware, and/or firmware for implementing the functionality. Accordingly, it should be appreciated that functionality described as being provided by a particular module may, in various embodiments, be provided at least in part by one or more other modules. Further, one or more depicted modules may not be present in certain embodiments, while in other embodiments, additional modules not depicted may be present and may support at least a portion of the described functionality and/or additional functionality. Moreover, while certain modules may be depicted and described as sub-modules of another module, in certain embodiments, such modules may be provided as independent modules or as sub-modules of other modules.
  • any operation, element, component, data, or the like described herein as being based on another operation, element, component, data, or the like can be additionally based on one or more other operations, elements, components, data, or the like. Accordingly, the phrase “based on,” or variants thereof, should be interpreted as “based at least in part on.”
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the Figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.


Abstract

Systems and methods are disclosed for security assessment in an Industrial Control System (ICS). A plurality of agents, disposed in the network at different control levels of the ICS, collects data including process variables related to control processes. A causal mapping module constructs a causal graph of nodes by mapping each of the process variables to a node, mapping semantics based directional relationships to edges between nodes, and assigning edge weights based on calculated pairwise causality measurements between nodes. An anomaly detection module analyzes dynamics of the causal graph over time to detect an anomaly in response to observing an abnormal edge weight evolution. A security assessment module performs a security assessment for a target node in the causal graph by assessing a criticality threshold for the target node based on the number of causal relationships with the target node.

Description

PROCESS SEMANTIC BASED CAUSAL MAPPING FOR SECURITY MONITORING AND ASSESSMENT OF CONTROL NETWORKS
TECHNICAL FIELD
[0001] This application relates to network security. More particularly, this application relates to causal mapping for security monitoring and assessment of control networks.
BACKGROUND
[0002] Most Industrial Control Systems (ICS) have been historically designed to operate isolated from other systems, with the “air-gap” as a standard approach to provide security. However, Information Technology (IT) and Operations Technology (OT) networks have been converging to enable additional use cases on monitoring and management of the industrial process. Due to aspects like vertical integration of the production systems and horizontal integration of the value chain, ICS networks are often directly or indirectly connected to IT networks (office network) and the Internet, hence offering an opportunity for cyber attackers to penetrate such environments and exploit any existing vulnerabilities. At the same time, ICS continue to be recognized as highly vulnerable due to the lack of basic protection features. Attacks targeting the ICS network can cause significant economic loss, human casualties, and incidents that threaten national security.
[0003] Most of the existing security features adopted within many industrial control systems were directly migrated from the IT world, and thus they are useful only in mitigating attacks that are not specially tailored for ICS components and networks. Past cases have proven, however, that if attackers are able to construct malicious payloads that exploit process information without disrupting normal network traffic flow, those attacks might successfully evade existing intrusion detection methods.
[0004] In process control, Programmable Logic Controllers (PLCs) are the key components that collect data from field devices, process the data, and control field actuators. Through unauthorized access to a PLC, attackers might directly change process variables and thus manipulate field devices. Skilled attackers can hide their actions under normal network traffic levels and use legitimate systems to alter PLC control logic. Advanced Persistent Threats (APTs), carefully designed by highly motivated top experts, sometimes with extended resources sponsored by nation states, might create additional challenges from a detection perspective because they have a specific purpose and can run for a long time without being discovered. Such sophisticated cyber-attacks aimed at ICS devices are often intentionally camouflaged under normal network traffic and hidden inside legitimate systems with methods that avoid detection by existing signature-based malware detection methods. Currently existing security controls have proven to be insufficient for such threats, especially in the case of process-level attacks. Other ICS systems such as Distributed Control Systems (DCS), motion controllers, Supervisory Control and Data Acquisition (SCADA) servers, and Human Machine Interfaces (HMIs) offer additional challenges when it comes to deploying security measures.
[0005] Recent innovative approaches developed to improve the detection space in industrial systems include the ability to detect deviations in values observed from process variables/IO (i.e., sensor and actuator values) and to model process trend data based on neural networks or autoregressive models. There are other methods that are based on a pure physical model for a particular system, such as energy management, to construct the model between different data sources. Such methods might suffer from the following major issues when monitoring production process specific data.
[0006] Firstly, there may be a failure to correlate detected deviations on a given process with anomalies observed from other data sources. For example, when one process variable starts a pattern change, it is difficult to determine if this is caused by an external stimulus to the environment or by a naturally expected process behavior. This happens because different monitoring domains (e.g. security, network performance, production process) are monitored independently.
[0007] Secondly, such methods are unable to provide further root cause analysis without knowing a priori the causal relationships between process variables or information from different layers. For example, in a water tank storage system, if the water pressure variables and valve status are monitored independently and the valve is not working properly, it is hard to tell whether the pressure sensor or the valve should be investigated, since the water pressure will be impacted by the valve malfunction.
[0008] Thirdly, such methods lack the ability to generalize their analytical methods to different cyber-physical systems. Each monitoring method is designed for a particular physical system. For example, the physical model for an energy management system cannot be used for a manufacturing system. Also, constructing multiple specific systems demands specific knowledge from subject matter experts and thus might be expensive and time consuming.
SUMMARY
[0009] Methods and systems are disclosed for generating a semantics based causal graph for a cyber-physical system, such as an Industrial Control System (ICS), building automation control, traffic systems, energy control systems, or the like. The causal graph may be composed of single layer or multiple layers of system information. For an ICS, collected process data typically represent the physical information and might include control logic. Extraction of control system semantics includes determining how data collected from a single source may directly relate to the information collected from a different layer of the system. For example, data collected from a programmable logic controller (PLC) of an ICS may directly relate to information collected from another layer source, such as a field bus or human machine interface (HMI). Finding the relationship between different variables across different data collection points can be used to generate indicators that, when monitored by specialized security analytics algorithms, can have impact on how threat detection might happen.
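As a concrete sketch of a pairwise causality measurement that could weight an edge between two process variables, a lagged cross-correlation is shown below. This is a simple illustrative proxy only; the disclosure does not prescribe a particular measure, and a real deployment might instead use Granger causality, transfer entropy, or similar.

```python
import statistics

def lagged_causality(x, y, lag=1):
    """Pearson correlation between x at time t and y at time t + lag,
    used here as a simple proxy for a pairwise causality measurement
    from process variable x to process variable y.
    """
    xs, ys = x[:-lag], y[lag:]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = (sum((a - mx) ** 2 for a in xs)
           * sum((b - my) ** 2 for b in ys)) ** 0.5
    return num / den if den else 0.0
```

A value near 1.0 suggests that changes in x are echoed in y one step later, which is the kind of directional, semantics-derived relationship the causal graph encodes as a weighted edge.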
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The foregoing and other aspects of the present invention are best understood from the following detailed description when read in connection with the accompanying drawings. For the purpose of illustrating the invention, there is shown in the drawings embodiments that are presently preferred, it being understood, however, that the invention is not limited to the specific instrumentalities disclosed. Included in the drawings are the following Figures: FIG. 1 shows an example of a multilevel intrusion detection system for an Industrial Control System according to embodiments of this disclosure;
FIG. 2 shows an example of a PLC with an intrusion detection agent according to embodiments of this disclosure;
FIG. 3 is a diagram for an example of a central unit according to embodiments of the present disclosure;
FIG. 4 shows an example of a multilayer causal mapping according to embodiments of this disclosure;
FIG. 5 shows a diagram for an example of anomaly detection according to embodiments of this disclosure;
FIG. 6 is a diagram for an example of security assessment according to embodiments of the disclosure;
FIG. 7 is a diagram for an example of root cause analysis according to embodiments of the disclosure; and
FIG. 8 shows an example of a computing environment within which embodiments of the disclosure may be implemented.
DETAILED DESCRIPTION
[0011] Methods and systems are disclosed for security monitoring and assessment of multilevel Industrial Control System devices. The disclosed computer-implemented methods and systems present an improvement to the functionality of the computer used to perform such a computer based task. While currently available tools may detect an anomaly based on deviations in values observed from process variables of a sensed measurement or an actuator value, monitoring of production process specific data may fail to detect intrusion types that could evade conventional measures due to the inability to correlate data from sources of different domains or different system layers, or the inability to generalize analytical methods to different cyber-physical systems.
[0012] Data may be collected from multiple software agents placed at different levels of the control network, which may autonomously activate and execute data collection, and in some instances, transform the data from a fieldbus protocol to a communication protocol that is more conducive to causal analysis. Hence, the embodiments of the present disclosure enable the intrusion detection system to be more robust, efficient, and effective than conventional means.
[0013] Causal graphs created for single layer or multiple system layers capture collected process data and control logic for finding relationships between different variables across different data collection points, which are useful for providing accurate indicators to better predict threat detections.
[0014] FIG. 1 is a block diagram for an example of an ICS according to embodiments of this disclosure. In an embodiment, an ICS 100 may have a plant wide structure that includes multiple control levels, such as a production scheduling control level 4, a production control level 3, a plant supervisory control level 2, a direct control level 1, and a field bus control level 0, as shown in FIG. 1. Each of the control levels may communicate according to an industrial Ethernet protocol, controlled by routers or Ethernet switches at each level. For example, switch 135 is placed within the control network to control data packet routing between control levels 3 and 4.
[0015] The control level 4 components of the ICS 100 may include one or more production scheduling servers 141 as the highest level of control for the plant wide ICS 100. The server 141 may be remotely located and connected to the ICS 100 via a network 143 such as the internet, and connected to other fleet plants via network 144. A DMZ 145 may provide a firewall between the plant control network and the external network 143.
[0016] The control level 3 components of the ICS 100 may include one or more coordinating computers 131, and one or more web servers or central archiving servers 133. An office network 132 may share a common router 135 with the control level 3 components, and may include one or more user terminals used by plant personnel to perform administrative functions that may be ancillary to plant control. However, by sharing a common path at switch 135, the office network 132 may present a vulnerability to the ICS 100 by way of external communication via network 143, such as the internet. For example, an office worker laptop could be victimized by a cyber attack and infected with malware that could later move laterally to potentially intercept and alter data packets in the ICS 100.
[0017] Control level 2 of the ICS 100 may perform a supervisory function for the network. The level 2 components of the ICS 100 may include one or more SCADA servers 127, one or more historian units 125, an engineering workstation 121, and an HMI unit 123. SCADA servers 127 are useful for remote access to level 1 controllers and may serve to provide overriding functionality at a supervisory level. Historian units 125 may be embedded or external devices used for storing historical process data, such as process variable information, event information, and/or user action information, collected by a SCADA server 127 or an HMI unit 123. For example, a historian unit 125 may be implemented as a plant information management system (PIMS) device. Level 2 switches may control data packets for level 2 ICS components. For example, switch 126 may control communications to and from each of SCADA servers 127, historians 125, engineering workstations 121, and HMIs 123 when communicating with ICS components of other levels. Other level 2 switches, such as switch 128, may be similarly placed within the ICS 100 for controlling other level 2 control components dedicated to different zones of the plant.
[0018] Control level 1 of the ICS 100 may include direct controllers responsible for controlling actions of field devices and for collecting sensor and measurement information related to the field devices. Control level 1 may include one or more controllers 115, one or more PLCs 111, and one or more remote telemetry units (RTUs) 117. Each of the PLCs 111 may be coupled to a data collector 113 for logging and storing historical and production data related to the field devices, such as to database storage. During plant operations, a PLC 111 may perform scan cycles of inputs and outputs, which are stored as process images for access by the SCADA server 127. The outputs may be communicated to the operator at an HMI unit such as HMI unit 123. Such data transmissions between control components at the control levels may be susceptible to a cyber attack, such as a manipulation of process view.
[0019] Control level 0 of the ICS 100 may include one or more field buses to which field devices, such as sensors and actuators, are connected. The signals exchanged at the field bus may be referred to as process variables, including control instructions received by the level 0 field devices, and control feedback signals, such as instrument measurements and sensor readings, sent back from the level 0 field devices. For example, field device 102 may be controlled by controller 115, while field devices 104, 106 are controlled by PLC 111. A control level 1 switch 114 may be implemented as an Ethernet router and/or gateway for exchanging data packets from control level 1 to control level 2. For PLCs 111 that are not Ethernet enabled, switch 114 may include a gateway for conversion of PLC data to Ethernet based data for communication with higher control level ICS components, such as SCADA server 127. The interface between the controllers, such as PLC 111, and the level 0 field devices may be a serial port protocol, such as the Profibus RS-485 standard protocol, which is incompatible with Ethernet. While Ethernet or industrial Ethernet is described as one possible protocol for higher levels of the ICS 100, other data transfer protocols may be applied with conversion and switching as appropriate in the same manner as described.
[0020] In an embodiment, the data collection may include one or more network based implementations, which utilize high level detection tools, such as IDS/PDS units 136. For example, the IDS/PDS units 136 may be configured to read one or more communication protocols, such as Modbus, S7comm, Ethernet/IP, or the like. Network based data collection may track origins and destinations of network data packets and detect anomalies based on signatures or unexpected behavior of network devices. For example, a table of data collection for communication packets between two devices, such as SCADA server 127 and historian 125, may indicate an expected throughput (e.g., 25kbps) during a particular time span (e.g., between 08:00 and 20:00 each day). A network based anomaly may be indicated in response to detection of constraint violations, such as changing direction of data flow or maximum throughput being exceeded. Other examples of network based anomaly detection may include monitoring relevant security alerts relative to performed functions and processing, such as execution of code and other performed events at different control levels.
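As a non-limiting illustration of the network based constraint checking described above, the following sketch encodes an expected throughput and time span for one device pair and flags violations such as a reversed direction of data flow or an exceeded maximum throughput. The device labels, the constraint table structure, and the function name are illustrative assumptions and not part of the disclosure.

```python
from datetime import datetime, time

# Hypothetical constraint record for one monitored device pair
# (e.g., SCADA server 127 -> historian 125): expected maximum
# throughput and the daily time window in which traffic is expected.
CONSTRAINTS = {
    ("scada_127", "historian_125"): {
        "max_kbps": 25.0,
        "window": (time(8, 0), time(20, 0)),
    },
}

def check_packet_flow(src, dst, kbps, timestamp):
    """Return a list of constraint violations for one observed flow."""
    violations = []
    rule = CONSTRAINTS.get((src, dst))
    if rule is None:
        # A flow known only in the opposite direction indicates that
        # the direction of data flow has changed.
        if (dst, src) in CONSTRAINTS:
            violations.append("unexpected direction of data flow")
        else:
            violations.append("unknown device pair")
        return violations
    start, end = rule["window"]
    if not (start <= timestamp.time() <= end):
        violations.append("traffic outside expected time span")
    if kbps > rule["max_kbps"]:
        violations.append("maximum throughput exceeded")
    return violations
```

A compliant flow (e.g., 10 kbps at noon from `scada_127` to `historian_125`) yields an empty list, while an excessive or reversed flow yields the corresponding violation strings.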
[0021] In an embodiment, the data collection may include one or more host based implementations, which utilize placement of a local agent in a host device or at a network switch. For example, as shown in FIG. 1, an agent 162 may be disposed in a host device, such as the PLC 111. As shown in FIG. 2, agent 162 may include a software function block 230 to implement data collection at the host device, such as extraction of process variables. In an embodiment, the function blocks of agent 162 may be executed by PLC processor 201. In an embodiment, the agent 162 may be implemented as an embedded computer with a separate microprocessor to execute the function blocks. The host based data collection may be implemented as a separate unit, such as agent 161, connected to the memory of the host device, such as PLC memory 200. Although the example of FIG. 1 shows the data collection functionality in one PLC 111 for centrally based causal analysis to detect potential cyberattacks on itself, one PLC can also assist the central based causal analysis to detect potential cyberattacks on its peer PLCs by collecting data from other PLCs.
[0022] The control program 225 includes the instructions executed by the PLC 111 for operation of connected field devices. Additionally, the control program 225 manages input/output, global variables, and access paths. A central unit in the network, such as central unit 152, 153 shown in FIG. 1, may be configured to analyze these input/output, global variables, and access paths as they are received or modified to identify conditions which may indicate that a malicious intrusion is taking place.
[0023] The data collection function block 231 may collect process data including physical information and control logic. Collected process data may include measurements observed by sensors of a production system. For example, conveyors of a packaging line may be measured to be operating in the speed range of 0.25~0.5 m/s when the production line is in a producing state. As another example during a production state, a controller may receive a speed setting of 0.4 m/s for the conveyor from an operator via the HMI 123. As PLC 111 may control a variety of devices, data collection function block 231 is configured to collect data associated with different types of process variables obtained from multiple sensors on field bus level 0. Each sensor data stream may be collected and stored as a time series data graph. Data collected by data collection function block 231 may directly relate to information collected from different levels of the control network, such as data from HMI 123 in level 2 and data collected by agent 160 at field bus level 0.
[0024] A combination of network-based data collection and host-based data collection may be implemented. For example, data collection of a network-based detection device, such as an IDS unit 136, and data collection of one or more host-based devices, such as ID unit 160, may be monitored continuously, which can be used by a centrally based causal analysis of network anomalies and input/output (I/O) process anomalies for network security assessment.
[0025] In an embodiment, multiple agents may be deployed at different levels. As shown in FIG. 1, one or more agents, either as embedded agents or external units, may be placed at each control level, such as agent 142 in control level 4 server 141, agent 134 in switch 135 for control level 3, agents 122, 175, 176, 177, 178 in control level 2, agent 161/162 in control level 1, and agent 160 at control level 0. In an embodiment, an agent may be deployed at each control level in at least one control level component or switch for the control level. The device or switch in which the agent may be deployed can either be the same type of component, or may be a variety of component types. As one example of a combination of agents to cover the multiple control levels, the agents may consist of agent 142 at control level 4 server 141, agents 134, 164, 174 at network switches for control levels 1, 2, 3, and agent 160 for control level 0.
[0026] An agent may be a free standing computer connected to a control level component, or may be an embedded system within a control level component. In an embodiment, an agent may be implemented as an industrial personal computer (PC), such as a ruggedized PC, connected to a network switch, such as agent 163. The agent may include a software application installed on memory within the unit, programmed to execute a local data collection function. For control level 0 devices connected to the field bus via a serial protocol, such as Profibus, standard IT protocol detection is not compatible, and a transformation of the signals is required. For example, the agent 160 at control level 0 may therefore include a transformation component (e.g., a gateway) to translate the extracted data to a protocol (e.g., Ethernet) useable by the system for causal analysis with higher control level data. In an embodiment, network data packets may be encrypted and each agent is configured to have access to the encryption key(s) in order to decrypt the data packets.
[0027] In an embodiment, one or more agents may be configured to perform data collection. For example, the agent 161, 162 may periodically scan the memory of the control component, such as the PLC 111.
[0028] FIG. 2 shows a diagram for an example of a data collection agent according to embodiments of the disclosure. Agent 161, 162 may be implemented for PLC 111 having a processor 201 and a memory 200, which may include a process image input table (PII) 210, a process image output table (PIQ) 220, and a control program 225. During a scanning cycle for PLC 111, processor 201 may read the status of inputs 210, such as sensor value 80 at address A1, and execute the control program 225 using the status of the inputs. Output values generated by the control program are stored in the output table 220. For example, the output value 10 at address B1 is the result of the corresponding input value 80. The processor 201 may then scan the output table, and send the output values to the level 0 field devices. An agent, such as agent 161 or agent 162, may periodically scan the input table 210 and output table 220 of the PLC memory 200. To conserve system resources, the agent active scan may implement a discriminate scan of the input and output tables. For example, the periodicity of the active scan may be determined based on a learning algorithm that optimizes data collection relative to a threshold of excessive data dumping and processor usage of the PLC 111. As another example, the agents may be programmed to limit the types of information to actively scan from the stored data, which may be identified by the source or the address, for example.
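The scan described above can be sketched as follows. The in-memory representation of the process image tables and the snapshot structure are illustrative assumptions (a real agent would read PLC memory through a vendor interface); the address and value pairs mirror the FIG. 2 example.

```python
# Simplified stand-in for PLC memory 200: the process image input
# table (PII) and process image output table (PIQ) from FIG. 2.
plc_memory = {
    "PII": {"A1": 80},   # sensor value 80 at address A1
    "PIQ": {"B1": 10},   # resulting output value 10 at address B1
}

def scan(memory, watch_addresses=None):
    """Copy selected input/output entries into a snapshot.

    A discriminate scan limits collection to configured addresses,
    conserving PLC resources; None collects everything.
    """
    snapshot = {}
    for table in ("PII", "PIQ"):
        for addr, value in memory[table].items():
            if watch_addresses is None or addr in watch_addresses:
                snapshot[(table, addr)] = value
    return snapshot

full = scan(plc_memory)                             # collects A1 and B1
limited = scan(plc_memory, watch_addresses={"A1"})  # input address only
```

Successive snapshots of this kind form the time series from which the central unit later extracts process semantics.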
[0029] In an embodiment, control components of the network may be coupled using an open platform communication (OPC) protocol. By monitoring OPC interfaces, such as link 190 between historian unit 125 and PLC 111 (e.g., a wireless protocol as shown in FIG. 1, or a wired protocol), using multiple agents, a distributed process history can be accumulated, from which one or more causal graphs can reveal anomalies which may be analyzed as a potential cyber attack. One advantage of data collection at OPC interfaces for historian units is that cyber attacks typically alter data for presentation at a user access point, such as displayed data at an HMI unit for an operator, but fail to alter the data recorded at a historian unit in a consistent manner.
[0030] As the agents collect data among the multiple control levels in real-time during plant operations, the data may be analyzed by a central unit, such as CU 152, 153, according to one or more embodiments in order to search for anomalies based on causal graphs. For example, time series data scans may be performed by agents 160, 161, 177 and agent 134, and a central unit may extract process semantics from collected data received from each agent that corresponds to field device 160, and compare the data values in search of any inconsistency that could be an indication of an anomaly due to a cyber attack. This process may be performed by the central unit at each scan for multiple field devices, or other types of corresponding data retrieved by the agents. The central unit may be implemented as an embedded system within any of the control level components.
[0031] FIG. 3 is a diagram for an example of a central unit according to embodiments of the present disclosure. In an embodiment, a central unit 301 may include a processor 302 and a memory 304 with application programs 310 executable by the processor 302. Each application program may be stored as a module, such as a causal mapping module 311, an anomaly detection module 313, a security assessment module 315, a root cause module 317 and an alert module 319. The central unit 301 corresponds to the central units 152, 153 shown and described above with respect to FIG. 1.
[0032] The causal mapping module 311 may construct a causal graph based on the collected process variables from the distributed agents 321 for analysis of causal relations of control network behavior and events. To develop the causal relations, the causal mapping module 311 may extract semantics from the collected process variable values and, if applicable, control logic of the control device. To illustrate the extraction of process semantics, an example is now described in which PLC 111 controls operations related to a water tank. Agent 162 may collect process variables for tank level sensor measurement and water temperature sensor measurement, and control variables for heater status, temperature setpoint, and valve status. The central unit 152 may read the collected data over time as a stream of values, and read control logic commands stored in PLC 111 memory. The causal mapping module of the central unit 152 may identify the corresponding process variable by the behavior of values for the collected data over time as one of the following: remains constant - maps to setpoints; fluctuates among a set of discrete values - maps to a heater or valve state; varies gradually and continuously - maps to sensor measurements.
[0033] The causal mapping module 311 may map each process variable obtained from the agents to a node while constructing the causal graph, and define directional edges connecting nodes to represent the causal relationship between the nodes. The causal mapping module 311 may calculate a pairwise causality measurement between nodes to determine the weight of the connecting edge.
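The three-way classification of a value stream described above can be sketched as follows. The tolerance and the maximum number of discrete levels are illustrative assumptions, not thresholds stated in the disclosure.

```python
def classify_variable(values, tol=1e-6, max_discrete_levels=5):
    """Heuristic mapping of a collected value stream to a role.

    A constant stream maps to a setpoint; fluctuation among a few
    discrete values maps to a heater or valve state; a gradually and
    continuously varying stream maps to a sensor measurement.
    """
    if max(values) - min(values) < tol:
        return "setpoint"
    if len(set(values)) <= max_discrete_levels:
        return "heater/valve state"
    return "sensor measurement"

classify_variable([70.0] * 20)                      # constant temperature setpoint
classify_variable([0, 1, 1, 0, 1, 0] * 4)           # valve open/close states
classify_variable([20.0 + 0.1 * i for i in range(30)])  # rising tank level
```

Each classified variable then becomes a node of the causal graph, with edges weighted by the pairwise causality measurement.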
[0034] Information propagation may be extracted by the causal mapping module 311 based on causal relationships between the nodes representing process related variables. To estimate the amount of information passed from one process to another, causality estimation methods implemented include, but are not limited to, Granger causality, transfer entropy and transfer entropy variations, and the like. Unlike correlation approaches to security assessment, causal relationships are directional and reflect how one process variable impacts its connected neighbor variables. Therefore, by observing how the information is propagated through the causal graph, further analysis is available that is not possible using only correlation information, such as root cause analysis and identification of critical nodes for security assessment of a control system.
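One of the named estimators, transfer entropy, can be sketched for already-discretized value streams with a history length of one. The binning, the history length, and the function name are simplifying assumptions; practical estimators use longer histories and continuous-valued variants.

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Estimate discrete transfer entropy T(X -> Y), in bits, from two
    equally long symbol sequences with history length one."""
    n = len(x) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_t+1, y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_t+1, y_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    singles_y = Counter(y[:-1])
    te = 0.0
    for (y1, y0, x0), count in triples.items():
        p_joint = count / n                          # p(y_t+1, y_t, x_t)
        p_cond_yx = count / pairs_yx[(y0, x0)]       # p(y_t+1 | y_t, x_t)
        p_cond_y = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y_t+1 | y_t)
        te += p_joint * log2(p_cond_yx / p_cond_y)
    return te

# y copies x with a one-step lag, so information flows from X to Y and
# the estimate is positive, giving a directional edge weight X -> Y.
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
y = [0] + x[:-1]
transfer_entropy(x, y)   # positive: X drives Y
```

A constant target sequence yields zero, reflecting that nothing is transferred, which is the directionality correlation-based measures lack.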
[0035] Some causality measurement methods do not consider the causal impact from one variable to other ones. For example, a potential indirect causal relationship may be revealed when causality measurements indicate that x → y, y → z and x → z, as it must be determined whether z is impacted directly by x or indirectly through the influence of y. To resolve the decision for a security monitoring function, the causal mapping module 311 may maintain the same dynamics, without a decision as to direct or indirect causal relationship. When the causal graph is applied to performing a security assessment by security assessment module 315, the causal mapping module 311 may perform a determination analysis to classify the relationship as direct or indirect causality. If causal relations x → y, y → z and x → z are observed and z has an event for which the root cause needs to be found, the direct impact from other variables needs to be quantified. In this case, whether the impact from x to z is through y is undecided. Therefore, a causality measurement method that finds the direct causal relation between two variables, such as a direct transfer entropy (DTE) method, may be applied by the causal mapping module 311 to decide whether x has impact on z directly or through the intermediate variable y.
[0036] FIG. 4 shows an example of a multilayer causal mapping according to embodiments of this disclosure. One or more layers 401, 402, 403 of causal graphs may be constructed by the causal mapping module 311 based on the process data collected by the agents 321.
[0037] In an embodiment, an intra-device layer 401 causal graph may be constructed based on data collected by an agent associated with one device, such as data being transformed inside a single computer. In an embodiment, a central unit may analyze input and output data collected by an agent and extract semantics based on the expected data flow to generate the causal graph at intra-device layer 401. For example, agent 162 in PLC 111 shown in FIG. 1 may collect the input process image 210 and output process image 220, from which causal mapping module 311 of central unit 301 may extract semantics related to process variables for a production component, such as a sensor measurement input and a control signal output to the production component. In particular, the sensor measurement may relate to a water level reading, and the control signal output may be a value corresponding to the open/close state for a valve to control the water level. The extracted semantic for this operation may be the rule applied by the control logic to change the valve state once a control limit for the water level is reached. From this rule based semantic, a causal mapping may generate a causal map at intra-device layer 401 for this device (i.e., the valve control). Anomaly detection for the device may then be based on detection of changes to the causal graph at intra-device layer 401.
[0038] In an embodiment, an intra-network layer 402 causal graph may be constructed based on process semantics extracted from the network traffic flow within the network. In an embodiment, the causal mapping module 311 may extract process semantics based on observation of the direction of data flow between agents of devices disposed at different control levels. For example, process variable data may flow from field device 104 (e.g., a temperature sensor) to PLC 111 to HMI 123, or from field devices 104, 106 to PLC 111.
[0039] In an embodiment, an extra-network layer 403 causal graph may be constructed, which monitors the data flowing from a first control system related to a first industrial or production process to another control system related to a different industrial or production process. For example, consider a production facility where product is manufactured in sector A of the facility using a deployed control system A from a supplier A, while sector B has a packaging process staged to receive the product from sector A, with a deployed control system B of a different provider. The control systems have overlaps to allow the entire manufacturing and packaging processes to coordinate. A causal mapping module 311 of the first control system may receive process variable data from certain agents at control devices that interface with control system B. The causal mapping module 311 may extract process semantics from such extra-network process variable data, from which an extra-network layer 403 causal graph may then be constructed.
[0040] The layers 401, 402, 403 may overlap where a common node represents a process variable collected from a control device common to two or more layers. Causal analysis and security assessment may be applied to one of the causal graph layers 401, 402, 403, or any combination thereof. In an embodiment, false anomalies may be detected where overlap of the layers exists, and inconsistent predictions of anomalous nodes may be an indication that a predicted anomaly is erroneous. For example, historical data of the intra-network layer 402 causal map may produce more reliable anomaly detection compared with anomaly detections based on the intra-device layer 401 causal graph with respect to process variables modeled for a particular control device. In such a case, an anomaly detection from the intra-device layer 401 causal graph may be given less weight to trigger an alert signal when no anomaly detection for the same process variable is presented from the intra-network layer 402 causal map. Accordingly, the implementation of multilayer 401, 402, 403 causal mapping provides more reliable security monitoring than conventional methods.
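One simple way to realize the layer weighting described above is a weighted vote over per-layer anomaly flags. The weight values and the alert threshold below are illustrative assumptions; in practice they might be learned from historical reliability of each layer.

```python
# Hypothetical per-layer reliability weights: the intra-network layer
# is trusted more than the intra-device layer for this variable.
LAYER_WEIGHTS = {"intra_device": 0.3, "intra_network": 0.7, "extra_network": 0.5}

def alert_score(flags):
    """Combine per-layer anomaly flags for one process variable into a
    single score; an alert fires only above some trigger threshold."""
    return sum(LAYER_WEIGHTS[layer] for layer, flagged in flags.items() if flagged)

# Intra-device layer alone flags an anomaly: weak evidence, likely
# below an alert trigger such as 0.5.
alert_score({"intra_device": True, "intra_network": False})
# Both layers agree: strong evidence for raising an alert.
alert_score({"intra_device": True, "intra_network": True})
```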
[0041] Returning to FIG. 3, the anomaly detection module 313 may analyze dynamics of the causal graph over time to detect an edge or a group of edges in the causal map demonstrating abnormal weight evolution over time. An example of a type of temporal analysis for the anomaly detection is temporal graph mining. As an example, FIG. 5 shows a diagram for an example of anomaly detection according to embodiments of this disclosure. A group of nodes and edges 510 from a portion of a causal map include edges 502a, 504a, and 506a. After a period of time, the edge weights are monitored by the anomaly detection module 313, and an anomalous edge weight is detected for edge 502b, between nodes 501 and 503, compared to edge weights 504b and 506b. In particular, the edge weight value of 502b decreased by 0.3, while edge weight values 504b and 506b only decreased by 0.1. As a result, the anomaly detection module 313 may trigger an alert from alert module 319, which may include a visual indication on a causal graph rendering, such as a color change or other form of highlighting of the edge or edge weight value 502b, on a display device available to an operator. The anomaly detection may be an indication of a potential intrusion in the control system by a cyber attacker.
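A minimal sketch of this edge-weight evolution check follows; the node labels, starting weights, and the 0.2 change threshold are illustrative assumptions, while the deltas (one edge dropping by 0.3, the others by 0.1) mirror the FIG. 5 example.

```python
def anomalous_edges(weights_before, weights_after, threshold=0.2):
    """Flag edges whose weight changed by more than `threshold`
    between two snapshots of the causal graph."""
    flagged = []
    for edge, w_before in weights_before.items():
        delta = abs(weights_after.get(edge, 0.0) - w_before)
        if delta > threshold:
            flagged.append(edge)
    return flagged

before = {("501", "503"): 0.8, ("503", "505"): 0.7, ("505", "507"): 0.6}
after  = {("501", "503"): 0.5, ("503", "505"): 0.6, ("505", "507"): 0.5}
anomalous_edges(before, after)   # only the edge between 501 and 503 is flagged
```

A flagged edge would then be handed to the alert module for highlighting on the causal graph rendering.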
[0042] The security assessment module 315 may perform a security assessment for a target node in a causal graph. In an embodiment, the security assessment module 315 may monitor a causal graph to track vulnerability of the system by assessing target nodes that meet or exceed criticality thresholds, such as the number of causal relationships that are direct, indirect, or a combination thereof. The security assessment module 315 may include functionality to annotate a rendering of the causal graph with device vulnerability information as a result of an intermediate step of system vulnerability scanning. For example, the annotated vulnerability information for a target node may include a score value indicating the level of criticality reached within a defined scale or range containing the allowable threshold. Alternatively, a secondary causal graph might be used and correlated with the main causal graph to detect lateral moves.
[0043] In an embodiment, the security assessment module 315 may include functionality to enable a visual tool for the security assessment, such as shown in FIG. 6. On a rendering of a portion of the causal graph, node 601 may be selected by an operator as a query for security assessment, such as by using a graphical user interface to select node 601 with a graphical tool (e.g., clicking on a graphical representation of the node on a display), and the security assessment module 315 may respond by highlighting the selected node 601 (e.g., rendering the node in a particular color such as red, or by a similar highlighting indication), and also highlighting affected nodes 602, 603, and 604 (e.g., rendering affected nodes in a different color than node 601, such as blue, or by a similar highlighting indication). Affected nodes are determined by the security assessment module 315 based on propagation edges between node 601 and affected nodes, which would be impacted by any change to node 601. As a result, the security assessment of the causal graph provides an indication of how information is propagated from one node to other components, such that asset criticality relationships in the ICS may be derived. The impact on the whole network of a single component being security compromised may be derived, no matter how large the size of the cyber physical network.
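The affected-node determination described above amounts to a forward reachability search along the directed causal edges. The following sketch assumes an adjacency-list representation of the causal graph; the edge set is an illustrative stand-in for the FIG. 6 example.

```python
from collections import deque

# Hypothetical directed causal edges: a change at 601 propagates to
# 602 and 603 directly, and to 604 through 603.
EDGES = {"601": ["602", "603"], "603": ["604"]}

def affected_nodes(graph, query):
    """Breadth-first traversal along causal edges to find every node
    that a change at `query` would propagate to."""
    seen, queue = set(), deque([query])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

affected_nodes(EDGES, "601")   # nodes to highlight alongside the query node
```

The size of the returned set is one possible measure of the queried asset's criticality.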
[0044] The root cause module 317 may respond to an anomaly detected by anomaly detection module 313 by performing an analysis of propagation from the anomalous map entity to the causal source node. As an example, FIG. 7 is a diagram for an example of root cause analysis according to embodiments of the disclosure. A cluster of nodes from a portion of the entire causal graph may be observed by the root cause module 317 over three time period evolutions 710a, 710b and 710c. In an embodiment, an anomalous node may be detected by anomaly detection module 313 (e.g., by applying a temporal graph mining technique). In response to the detection of anomalous node 701 at 710b of the sequence, root cause module 317 may determine the propagation of the root cause by following the causal graph from node 701 in the opposite direction of the causal relationship (i.e., the directional arrows) along the path of the causal graph in 710c until a dissimilar causality measurement is detected over a time sequence. Moving from node 701 to node 703, the causality measurement value 702 is observed between time 710a and 710b, and it is determined that the values 0.3 and 0.4 are relatively similar. Next, moving from node 703 to node 705, again the causality measurements 704 are relatively similar. Finally, observing the causality measurement 706 from node 705 to 707 between time 710a and 710b, it is determined that a significant change has occurred. The root cause module 317 may infer that an anomalous node 705 affected the pairwise causality measurement to node 707, and as such the root cause is assigned to node 705. As a graphical aid, the root cause module 317 may render the nodes 701, 703 and 705 differently (e.g., according to a different color scheme) than other local nodes as an indication of the propagation from the root cause node 705 to the affected nodes 703 and 701.
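The backtracking walk described above can be sketched as follows. The 0.3 and 0.4 values for measurement 702 come from the example; the remaining measurement values, the parent map, and the 0.2 dissimilarity threshold are illustrative assumptions.

```python
# Upstream (against-the-arrow) parent of each node along the causal path,
# and the pairwise causality measurements at two observation times.
PARENTS = {"701": "703", "703": "705", "705": "707"}
MEASURE_T0 = {("703", "701"): 0.3, ("705", "703"): 0.4, ("707", "705"): 0.8}
MEASURE_T1 = {("703", "701"): 0.4, ("705", "703"): 0.5, ("707", "705"): 0.2}

def root_cause(anomalous, threshold=0.2):
    """Walk upstream from the anomalous node until the causality
    measurement on an incoming edge changes sharply; the child of that
    edge is assigned as the root cause."""
    node = anomalous
    while node in PARENTS:
        parent = PARENTS[node]
        edge = (parent, node)
        if abs(MEASURE_T1[edge] - MEASURE_T0[edge]) > threshold:
            return node   # dissimilar measurement found on this edge
        node = parent     # still similar: keep walking upstream
    return node

root_cause("701")   # walks 701 -> 703 -> 705, stops at the 707->705 edge
```

With these values the walk passes the similar edges (changes of 0.1) and stops where the measurement shifts by 0.6, assigning node 705 as the root cause, matching the FIG. 7 narrative.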
[0045] The alert module 319 may include predetermined thresholds for triggering an alert signal or message in response to one or more anomalies being detected that surpass the thresholds. For example, detection of an edge value change that exceeds a threshold may trigger an alert indicating a detected anomalous edge evolution. The alert module 319 may also trigger alert messages in response to an anomaly detected by the anomaly detection module 313 or a root cause detected by the root cause module 317. In response to a triggered alert, the alert module 319 may generate the alert signal or message for display to an operator at a user terminal display device, such as the HMI 123.
[0046] In an embodiment, a central unit may be deployed in a cloud-based implementation, such as a cloud server. For example, a cloud server may be configured to run a product data management service, such as MindSphere, to which the production of network 100 is tied. Accordingly, the ICS 100 may utilize the service with the central unit extension to additionally incorporate the data retrieved by the multilevel agents and to perform the root cause and security assessment at the cloud server. As another example, the control level 3 server 141 may deploy a central unit 152, which can be utilized to implement fleet-level intrusion detection by collecting data from agents deployed at multiple control levels at other plants in a similar manner as shown for ICS 100. Applying fleet-level analytics may include monitoring and comparing similar process setups or identical equipment running on different plant sites, or for different customers.
[0047] In an embodiment, a central correlation unit may be deployed in a plant-level network server located on-premises, such as central unit 153 in network server 133. The central unit may be implemented as an embedded system with a dedicated processor, or by sharing an existing processor in the network server 133.
[0048] In an embodiment, the causal mapping for the multilevel agents may be implemented as a distributed network of smart agents. For example, agents deployed at multiple levels may each be equipped with communication means, such as a transceiver, to communicate peer-to-peer (P2P) to form a network, such as a wireless local area network (WLAN). In an embodiment, the agents may be configured as nodes of a network virtualization to form an overlay network, such as a software defined network (SDN), which would be invisible to a cyber attacker. Accordingly, a P2P, overlay, or virtual network may allow each agent to receive the data from the other agents, and each agent may be equipped to independently execute a causal analysis that compares its own data to corresponding data received from the other agents. From the comparison, each agent may determine any anomalous readings and mismatched or unexpected values as an indication of a potential cyber attack.
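The peer cross-check described in paragraph [0048] might, under assumed data structures, look like each agent comparing its own reading of a shared process variable against the values reported by its peers. The tolerance and the median-consensus rule below are illustrative choices, not specified in the disclosure:

```python
def cross_check(own_readings, peer_readings, tolerance=0.05):
    """Compare this agent's process-variable readings against peer reports.

    own_readings: dict variable -> value measured locally.
    peer_readings: dict variable -> list of values received from peers.
    Returns variables whose local value deviates from the peer consensus
    (median) by more than a relative tolerance, as a potential indication
    of a compromised data path.
    """
    suspicious = []
    for var, own in own_readings.items():
        peers = peer_readings.get(var, [])
        if not peers:
            continue  # no peer data to compare against
        consensus = sorted(peers)[len(peers) // 2]  # median of peer values
        if abs(own - consensus) > tolerance * max(abs(consensus), 1e-9):
            suspicious.append(var)
    return suspicious

own = {"flow_rate": 10.0, "pressure": 2.1}
peers = {"flow_rate": [10.01, 9.98, 10.02], "pressure": [3.0, 3.1, 2.95]}
print(cross_check(own, peers))  # -> ['pressure']
```

Because every agent runs the same comparison independently, a single compromised agent cannot suppress the mismatch it introduces: the peers observe it too.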
[0049] The analysis of the time series generated by the agents deployed at various multilevel collection points and a continuous causal analysis by the central unit(s) allow for instant association of such monitoring points and their memory representations for monitoring. For example, stream analytics or edge analytics methods may be utilized by a central unit by tagging the agents, and by mapping dependencies through machine learning, which can define a baseline of normal behavior for subsequent anomaly detection. The causal mapping module 311 of the central units may construct a baseline graph. Real-time traffic during normal operations may be compared to the baseline graph to detect anomalies. Accordingly, the anomaly detection module 313 may detect both single process variable anomalies as well as discoordination of different process variables. In an embodiment, the security assessment may be implemented as an automated process. For example, an automated method of anomaly detection may detect process variable dependencies and causal relations based on a symbolic or simulated execution of extracted control logic. Additional tracking may be performed relating to user activities, such as user interactions at an HMI unit 123 or an engineering workstation 121. Accordingly, ICS 100 anomalies can be used to back trace a root cause of the cyber attack by tracing the propagation along the causal graph with network and user activities.
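The baseline-graph comparison can be sketched as checking live pairwise causality measurements against a learned baseline: edges that weaken, strengthen, or appear unexpectedly are flagged. This is a minimal illustration under assumed data structures and an assumed deviation threshold; the edge weights stand in for the pairwise causality measurements described earlier:

```python
def detect_anomalous_edges(baseline, live, threshold=0.2):
    """Compare live causal-edge weights against a learned baseline graph.

    baseline, live: dict (src, dst) -> causality measurement.
    Returns (edge, deviation) pairs whose weight deviates beyond the
    threshold, covering both a weakened/missing dependency and a new,
    unexpected one (discoordination of process variables).
    """
    anomalies = []
    for edge in set(baseline) | set(live):
        delta = abs(live.get(edge, 0.0) - baseline.get(edge, 0.0))
        if delta > threshold:
            anomalies.append((edge, delta))
    return sorted(anomalies, key=lambda item: -item[1])  # largest first

baseline = {("pump", "flow"): 0.8, ("valve", "pressure"): 0.6}
live = {("pump", "flow"): 0.75,          # small drift: within baseline
        ("valve", "pressure"): 0.1,      # weakened dependency: flagged
        ("hmi", "setpoint"): 0.4}        # unexpected new dependency: flagged
print(detect_anomalous_edges(baseline, live))
# edges sorted by deviation: valve->pressure first, then hmi->setpoint
```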
[0050] While the embodiments have been described primarily in the context of an industrial production domain, the methods and systems may also be applied to multilevel control systems of other types of networks requiring security monitoring and assessment against cyber attacks, including but not limited to building automation control, traffic systems, energy control systems, or the like.

[0051] FIG. 8 illustrates an example of a computing environment within which embodiments of the present disclosure may be implemented. A computing environment 800 includes a computer system 810 that may include a communication mechanism such as a system bus 821 or other communication mechanism for communicating information within the computer system 810. The computer system 810 further includes one or more processors 820 coupled with the system bus 821 for processing the information. In an embodiment, computing environment 800 corresponds to a portion of an ICS, in which the computer system 810 relates to a central unit 301 described below in greater detail.
[0052] The processors 820 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as described herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks, and may comprise any one of, or a combination of, hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device. A processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer. A processor may include any type of suitable processing unit including, but not limited to, a central processing unit, a microprocessor, a Reduced Instruction Set Computer (RISC) microprocessor, a Complex Instruction Set Computer (CISC) microprocessor, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a System-on-a-Chip (SoC), a digital signal processor (DSP), and so forth. Further, the processor(s) 820 may have any suitable microarchitecture design that includes any number of constituent components such as, for example, registers, multiplexers, arithmetic logic units, cache controllers for controlling read/write operations to cache memory, branch predictors, or the like. The microarchitecture design of the processor may be capable of supporting any of a variety of instruction sets. A processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between.
A user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof. A user interface comprises one or more display images enabling user interaction with a processor or other device.
[0053] The system bus 821 may include at least one of a system bus, a memory bus, an address bus, or a message bus, and may permit exchange of information (e.g., data (including computer-executable code), signaling, etc.) between various components of the computer system 810. The system bus 821 may include, without limitation, a memory bus or a memory controller, a peripheral bus, an accelerated graphics port, and so forth. The system bus 821 may be associated with any suitable bus architecture including, without limitation, an Industry Standard Architecture (ISA), a Micro Channel Architecture (MCA), an Enhanced ISA (EISA), a Video Electronics Standards Association (VESA) architecture, an Accelerated Graphics Port (AGP) architecture, a Peripheral Component Interconnects (PCI) architecture, a PCI-Express architecture, a Personal Computer Memory Card International Association (PCMCIA) architecture, a Universal Serial Bus (USB) architecture, and so forth.
[0054] Continuing with reference to FIG. 8, the computer system 810 may also include a system memory 830 coupled to the system bus 821 for storing information and instructions to be executed by processors 820. The system memory 830 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 831 and/or random access memory (RAM) 832. The RAM 832 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM). The ROM 831 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM). In addition, the system memory 830 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 820. A basic input/output system 833 (BIOS) containing the basic routines that help to transfer information between elements within computer system 810, such as during start-up, may be stored in the ROM 831. RAM 832 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 820. System memory 830 may additionally include, for example, operating system 834, application programs 835, and other program modules 836. Application programs 835 may also include a user portal for development of the application program, allowing input parameters to be entered and modified as necessary.
[0055] The operating system 834 may be loaded into the memory 830 and may provide an interface between other application software executing on the computer system 810 and hardware resources of the computer system 810. More specifically, the operating system 834 may include a set of computer-executable instructions for managing hardware resources of the computer system 810 and for providing common services to other application programs (e.g., managing memory allocation among various application programs). In certain example embodiments, the operating system 834 may control execution of one or more of the program modules depicted as being stored in the data storage 840. The operating system 834 may include any operating system now known or which may be developed in the future including, but not limited to, any server operating system, any mainframe operating system, or any other proprietary or non-proprietary operating system.
[0056] The computer system 810 may also include a disk/media controller 843 coupled to the system bus 821 to control one or more storage devices for storing information and instructions, such as a magnetic hard disk 841 and/or a removable media drive 842 (e.g., floppy disk drive, compact disc drive, tape drive, flash drive, and/or solid state drive). Storage devices 840 may be added to the computer system 810 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire). Storage devices 841, 842 may be external to the computer system 810.
[0057] The computer system 810 may include a user input interface or graphical user interface (GUI) 861, which may comprise one or more input devices, such as a keyboard, touchscreen, tablet and/or a pointing device, for interacting with a computer user and providing information to the processors 820.
[0058] The computer system 810 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 820 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 830. Such instructions may be read into the system memory 830 from another computer readable medium of storage 840, such as the magnetic hard disk 841 or the removable media drive 842. The magnetic hard disk 841 and/or removable media drive 842 may contain one or more data stores and data files used by embodiments of the present disclosure. The data store 840 may include, but is not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed data stores in which data is stored on more than one node of a computer network, peer-to-peer network data stores, or the like. Data store contents and data files may be encrypted to improve security. The processors 820 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 830. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
[0059] As stated above, the computer system 810 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein. The term "computer readable medium" as used herein refers to any medium that participates in providing instructions to the processors 820 for execution. A computer readable medium may take many forms including, but not limited to, non-transitory, non-volatile media, volatile media, and transmission media. Non-limiting examples of non-volatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as magnetic hard disk 841 or removable media drive 842. Non-limiting examples of volatile media include dynamic memory, such as system memory 830. Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the system bus 821. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
[0060] Computer readable medium instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
[0061] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer readable medium instructions.
[0062] The computing environment 800 may further include the computer system 810 operating in a networked environment using logical connections to one or more remote computers, such as remote computing device 880 and remote agents 881. In an embodiment, remote computing devices 880 may correspond to at least one additional central unit 301 in the ICS. The network interface 870 may enable communication, for example, with other remote devices 880 or systems and/or the storage devices 841, 842 via the network 871. Remote computing device 880 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 810. When used in a networking environment, computer system 810 may include modem 872 for establishing communications over a network 871, such as the Internet. Modem 872 may be connected to system bus 821 via user network interface 870, or via another appropriate mechanism.
[0063] Network 871 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 810 and other computers (e.g., remote computing device 880). The network 871 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-6, or any other wired connection generally known in the art. Wireless connections may be implemented using Wi-Fi, WiMAX, and Bluetooth, infrared, cellular networks, satellite or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 871.
[0064] It should be appreciated that the program modules, applications, computer- executable instructions, code, or the like depicted in FIG. 8 as being stored in the system memory 830 are merely illustrative and not exhaustive and that processing described as being supported by any particular module may alternatively be distributed across multiple modules or performed by a different module. In addition, various program module(s), script(s), plug-in(s), Application Programming Interface(s) (API(s)), or any other suitable computer-executable code hosted locally on the computer system 810, the remote device 880, and/or hosted on other computing device(s) accessible via one or more of the network(s) 871 , may be provided to support functionality provided by the program modules, applications, or computer-executable code depicted in FIG. 8 and/or additional or alternate functionality. Further, functionality may be modularized differently such that processing described as being supported collectively by the collection of program modules depicted in FIG. 8 may be performed by a fewer or greater number of modules, or functionality described as being supported by any particular module may be supported, at least in part, by another module. In addition, program modules that support the functionality described herein may form part of one or more applications executable across any number of systems or devices in accordance with any suitable computing model such as, for example, a client-server model, a peer- to-peer model, and so forth. In addition, any of the functionality described as being supported by any of the program modules depicted in FIG. 8 may be implemented, at least partially, in hardware and/or firmware across any number of devices.
[0065] It should further be appreciated that the computer system 810 may include alternate and/or additional hardware, software, or firmware components beyond those described or depicted without departing from the scope of the disclosure. More particularly, it should be appreciated that software, firmware, or hardware components depicted as forming part of the computer system 810 are merely illustrative and that some components may not be present or additional components may be provided in various embodiments. While various illustrative program modules have been depicted and described as software modules stored in system memory 830, it should be appreciated that functionality described as being supported by the program modules may be enabled by any combination of hardware, software, and/or firmware. It should further be appreciated that each of the above-mentioned modules may, in various embodiments, represent a logical partitioning of supported functionality. This logical partitioning is depicted for ease of explanation of the functionality and may not be representative of the structure of software, hardware, and/or firmware for implementing the functionality. Accordingly, it should be appreciated that functionality described as being provided by a particular module may, in various embodiments, be provided at least in part by one or more other modules. Further, one or more depicted modules may not be present in certain embodiments, while in other embodiments, additional modules not depicted may be present and may support at least a portion of the described functionality and/or additional functionality. Moreover, while certain modules may be depicted and described as sub-modules of another module, in certain embodiments, such modules may be provided as independent modules or as sub-modules of other modules. 
[0066] Although specific embodiments of the disclosure have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the disclosure. For example, any of the functionality and/or processing capabilities described with respect to a particular device or component may be performed by any other device or component. Further, while various illustrative implementations and architectures have been described in accordance with embodiments of the disclosure, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative implementations and architectures described herein are also within the scope of this disclosure. In addition, it should be appreciated that any operation, element, component, data, or the like described herein as being based on another operation, element, component, data, or the like can be additionally based on one or more other operations, elements, components, data, or the like. Accordingly, the phrase "based on," or variants thereof, should be interpreted as "based at least in part on."
[0067] Although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the disclosure is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the embodiments. Conditional language, such as, among others, "can," "could," "might," or "may," unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments could include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment.
[0068] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.

Claims

What is claimed is:
1. A system for security assessment in a control network of an industrial control system, the system comprising:
a plurality of agents configured to collect data, including process variables, the agents disposed in the network at different control levels of the automation and control network, the control levels comprising a field bus control level and a direct control level, wherein at least one agent connected at the field bus control level is configured to translate field bus control data from a serial protocol to a communication protocol used by higher control levels;
a central unit comprising a processor to execute algorithmic modules stored on a storage device, the modules comprising:
a causal mapping module configured to:
receive the collected data from the plurality of agents,
extract process semantics from the collected data; and
construct a causal graph of nodes by mapping each of the process variables to a node, mapping directional relationships to edges between nodes using the process semantics, and assigning edge weights based on calculated pairwise causality measurements between the nodes; and
an anomaly detection module configured to analyze dynamics of the causal graph over time to detect an anomaly in response to observing an abnormal edge weight evolution.
2. The system of claim 1, further comprising:
a security assessment module configured to perform a security assessment for a target node in the causal graph by assessing a criticality threshold for the target node, wherein the criticality threshold is based on number of causal relationships with the target node that are direct, indirect, or a combination thereof.
3. The system of claim 2, wherein the security assessment module is further configured to graphically indicate impacted nodes in the causal graph on a condition that the target node becomes security compromised.
4. The system of claim 2, wherein the security assessment module is further configured to annotate the causal graph with vulnerability information for the target node.
5. The system of claim 1, further comprising:
a root cause module configured to perform a root cause analysis by tracing a path in the causal graph from a detected anomalous node to a root cause node by following the directional relationships node by node in a reverse direction in the causal graph until a dissimilar causality measurement is detected over a time sequence.
6. The system of claim 1, wherein the causal mapping module is configured to construct the causal map to include one or more of the following layer types:
an intra-device layer based on extracted process semantics associated with data flow within a single device;
an intra-network layer based on the extracted process semantics associated with data flow between devices of different control levels; and
an extra-network layer based on the extracted process semantics associated with data flow between different control systems in the network.
7. The system of claim 6, wherein the causal map comprises at least two different layer types sharing a common node, and an anomaly detection of the node in a first layer type is determined to be a false anomaly on a condition that no anomaly detection occurs for the node in the second layer type.
8. A computer-based method for security assessment in an automation and control network, the method comprising:
collecting data, including process variables, from a plurality of agents disposed at different control levels of the automation and control network, the control levels comprising a field bus control level and a direct control level, wherein at least one agent connected at the field bus control level is configured to translate field bus control data from a serial protocol to a communication protocol used by higher control levels; and using at least one central computing unit for:
receiving the collected data from the plurality of agents,
extracting process semantics from the collected data;
constructing a causal graph of nodes by mapping each of the process variables to a node, mapping directional relationships to edges between nodes using the process semantics, and assigning edge weights based on calculated pairwise causality measurements between the nodes; and
analyzing dynamics of the causal graph over time to detect an anomaly in response to observing an abnormal edge weight evolution.
9. The method of claim 8, further comprising:
determining asset criticality relationships of the network by graphical indication of information propagation through the causal graph, wherein criticality of a target node is based on number of related nodes impacted by a change to the target node.
10. The method of claim 9, further comprising:
graphically indicating impacted nodes in the causal graph on a condition that the target node becomes security compromised.
11. The method of claim 9, further comprising:
annotating the causal graph with vulnerability information for the target node.
12. The method of claim 8, further comprising:
performing a root cause analysis by tracing a path in the causal graph from a detected anomalous node to a root cause node by following the directional relationships node by node in a reverse direction in the causal graph until a dissimilar causality measurement is detected over a time sequence.
13. The method of claim 8, wherein constructing the causal graph comprises:
including one or more of the following layer types:
an intra-device layer based on extracted process semantics associated with data flow within a single device;
an intra-network layer based on the extracted process semantics associated with data flow between devices of different control levels; and
an extra-network layer based on the extracted process semantics associated with data flow between different control systems in the network.
14. The method of claim 13, wherein the causal map comprises at least two different layer types sharing a common node, the method further comprising:
determining an anomaly detection of the node in a first layer type to be a false anomaly on a condition that no anomaly detection occurs for the node in the second layer type.
PCT/US2018/048047 2018-08-27 2018-08-27 Process semantic based causal mapping for security monitoring and assessment of control networks WO2020046260A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2018/048047 WO2020046260A1 (en) 2018-08-27 2018-08-27 Process semantic based causal mapping for security monitoring and assessment of control networks


Publications (1)

Publication Number Publication Date
WO2020046260A1 true WO2020046260A1 (en) 2020-03-05

Family

ID=63557681

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/048047 WO2020046260A1 (en) 2018-08-27 2018-08-27 Process semantic based causal mapping for security monitoring and assessment of control networks

Country Status (1)

Country Link
WO (1) WO2020046260A1 (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3079337A1 (en) * 2015-04-09 2016-10-12 Accenture Global Services Limited Event correlation across heterogeneous operations
US20180157838A1 (en) * 2016-12-07 2018-06-07 General Electric Company Feature and boundary tuning for threat detection in industrial asset control system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CYNTHIA PHILLIPS ET AL: "A graph-based system for network-vulnerability analysis", NEW SECURITY PARADIGMS WORKSHOP. PROCEEDINGS. CHARLOTTSVILLE, VA, SEPT. 22 - 25, 1998; [NEW SECURITY PARADIGMS WORKSHOP. PROCEEDINGS], NEW YORK, NY : ACM, US, 1 January 1998 (1998-01-01), pages 71 - 79, XP058107412, ISBN: 978-1-58113-168-0, DOI: 10.1145/310889.310919 *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220191227A1 (en) * 2019-04-02 2022-06-16 Siemens Energy Global GmbH & Co. KG User behavorial analytics for security anomaly detection in industrial control systems
US20230052533A1 (en) * 2020-03-05 2023-02-16 Aetna Inc. Systems and methods for identifying access anomalies using network graphs
US11848952B2 (en) * 2020-03-05 2023-12-19 Aetna Inc. Systems and methods for identifying access anomalies using network graphs
IT202000008155A1 (en) * 2020-04-17 2021-10-17 Nsr S R L Method and system for vulnerability assessment of IoT devices
EP3896591A1 (en) * 2020-04-17 2021-10-20 NSR S.r.l. Method and system for security assessment of iot devices
CN111565201B (en) * 2020-07-15 2020-11-10 北京东方通科技股份有限公司 Multi-attribute-based industrial internet security assessment method and system
CN111565201A (en) * 2020-07-15 2020-08-21 北京东方通科技股份有限公司 Multi-attribute-based industrial internet security assessment method and system
US12074893B2 (en) * 2020-10-16 2024-08-27 Visa International Service Association System, method, and computer program product for user network activity anomaly detection
US20230308464A1 (en) * 2020-10-16 2023-09-28 Visa International Service Association System, Method, and Computer Program Product for User Network Activity Anomaly Detection
CN112417462B (en) * 2020-12-10 2024-02-02 中国农业科学院农业信息研究所 Network security vulnerability tracking method and system
CN112417462A (en) * 2020-12-10 2021-02-26 中国农业科学院农业信息研究所 Network security vulnerability tracking method and system
GB2602967A (en) * 2021-01-19 2022-07-27 British Telecomm Anomalous network behaviour identification
WO2022188172A1 (en) * 2021-03-12 2022-09-15 Siemens Aktiengesellschaft Graph transformation method, apparatus and system of function block chain
EP4120110A1 (en) * 2021-07-12 2023-01-18 Abb Schweiz Ag Opc ua-based anomaly detection and recovery system and method
EP4149090A1 (en) * 2021-09-10 2023-03-15 Rockwell Automation Technologies, Inc. Security and safety of an industrial operation using opportunistic sensing
CN114640496A (en) * 2021-11-26 2022-06-17 北京天融信网络安全技术有限公司 Flow transmission control method and device, electronic equipment and storage medium
CN114640496B (en) * 2021-11-26 2024-02-06 北京天融信网络安全技术有限公司 Flow transmission control method and device, electronic equipment and storage medium
CN116107847A (en) * 2023-04-13 2023-05-12 平安科技(深圳)有限公司 Multi-element time series data anomaly detection method, device, equipment and storage medium
CN116107847B (en) * 2023-04-13 2023-06-27 平安科技(深圳)有限公司 Multi-element time series data anomaly detection method, device, equipment and storage medium
CN116541305A (en) * 2023-06-26 2023-08-04 京东方艺云(杭州)科技有限公司 Abnormality detection method and device, electronic equipment and storage medium
CN116541305B (en) * 2023-06-26 2023-12-15 京东方艺云(杭州)科技有限公司 Abnormality detection method and device, electronic equipment and storage medium
CN118569655A (en) * 2024-08-02 2024-08-30 深圳建安润星安全技术有限公司 Staged data life cycle safety assessment method and system

Similar Documents

Publication Publication Date Title
WO2020046260A1 (en) Process semantic based causal mapping for security monitoring and assessment of control networks
EP3607484B1 (en) Multilevel intrusion detection in automation and control systems
EP3528459B1 (en) A cyber security appliance for an operational technology network
US10148686B2 (en) Telemetry analysis system for physical process anomaly detection
CN107976968B (en) Method and system for detecting an operating mode of a valve in a process plant
US10044749B2 (en) System and method for cyber-physical security
MR et al. Machine learning for intrusion detection in industrial control systems: challenges and lessons from experimental evaluation
WO2018044410A1 (en) High interaction non-intrusive industrial control system honeypot
US20160330225A1 (en) Systems, Methods, and Devices for Detecting Anomalies in an Industrial Control System
US11252169B2 (en) Intelligent data augmentation for supervised anomaly detection associated with a cyber-physical system
Robles-Durazno et al. PLC memory attack detection and response in a clean water supply system
US20210382989A1 (en) Multilevel consistency check for a cyber attack detection in an automation and control system
US11924227B2 (en) Hybrid unsupervised machine learning framework for industrial control system intrusion detection
CN113924570A (en) User behavior analysis for security anomaly detection in industrial control systems
Alrumaih et al. Cyber resilience in industrial networks: A state of the art, challenges, and future directions
Hamouda et al. Intrusion detection systems for industrial internet of things: A survey
MR et al. AICrit: A unified framework for real-time anomaly detection in water treatment plants
Sung et al. Design-knowledge in learning plant dynamics for detecting process anomalies in water treatment plants
Ghaeini et al. Zero residual attacks on industrial control systems and stateful countermeasures
Alqurashi et al. On the performance of isolation forest and multi layer perceptron for anomaly detection in industrial control systems networks
Ahakonye et al. Trees Bootstrap Aggregation for Detection and Characterization of IoT-SCADA Network Traffic
Aliyari Securing industrial infrastructure against cyber-attacks using machine learning and artificial intelligence at the age of industry 4.0
Guibene et al. False data injection attack against cyber-physical systems protected by a watermark
Yask et al. A review of model on malware detection and protection for the distributed control systems (Industrial control systems) in oil & gas sectors
Manyfield-Donald et al. The Current State of Fingerprinting in Operational Technology Environments

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18769258

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18769258

Country of ref document: EP

Kind code of ref document: A1