WO2023218660A1 - Security risk assessment device, security risk assessment system, security risk assessment method, and program - Google Patents

Security risk assessment device, security risk assessment system, security risk assessment method, and program

Info

Publication number
WO2023218660A1
Authority
WO
WIPO (PCT)
Prior art keywords
risk
security
probability
risk assessment
information
Application number
PCT/JP2022/020265
Other languages
French (fr)
Japanese (ja)
Inventor
諒平 佐藤
Original Assignee
日本電信電話株式会社
Application filed by 日本電信電話株式会社 filed Critical 日本電信電話株式会社
Priority to PCT/JP2022/020265 priority Critical patent/WO2023218660A1/en
Publication of WO2023218660A1 publication Critical patent/WO2023218660A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/57 Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities

Definitions

  • the present invention relates to security risk management and security risk assessment of network systems, and particularly relates to a security risk assessment device, a security risk assessment system, a security risk assessment method, and a program.
  • Security risk assessment, which accurately identifies and quantitatively analyzes and evaluates network system security risks, is essential for correct and efficient security risk management.
  • Security risk assessment involves collecting and analyzing a huge amount of security information to identify, analyze, and evaluate risks.
  • Security information includes network configuration management information, vulnerability information, intrusion detection system (IDS)/intrusion prevention system (IPS) alarms, etc.
  • Non-Patent Document 1 describes a security risk assessment method using Bayesian Attack Graph (BAG).
  • A BAG is a Bayesian Network (BN) for expressing the probabilistic dependencies between vulnerabilities inherent in a network system; it is a graph that comprehensively describes the routes (attack procedures) that an attacker can take when attacking information assets in the network system.
  • In a BAG, nodes represent system states, and edges represent the probabilities of system state transitions.
  • A system state is a unique combination of system variables, such as "a state in which administrator authority for a specific information asset (host, etc.) has been handed over to an attacker." A system state transition corresponds to the (successful) exploitation of a vulnerability.
  • By using a BAG, it is possible to mechanically and quantitatively calculate the probability of transition to each system state while taking vulnerability dependencies into account.
  • Non-Patent Document 2 describes a technique for automatically creating an Attack Graph (AG).
  • Non-Patent Document 1 does not discuss the method of creating a BAG.
  • In the method of Non-Patent Document 1, the collection of security information and risk identification are left entirely to humans.
  • System administrators and others who create BAGs are required to collect and analyze various security information and accurately identify dependencies between vulnerabilities. Therefore, BAG creators are required to have highly specialized knowledge regarding network systems and vulnerabilities. Additionally, the amount of effort and time required to create a BAG is enormous.
  • In the technique of Non-Patent Document 2, in order to automatically create an AG, it is necessary to manually prepare a huge amount of input information, such as configuration management information and vulnerability dependencies.
  • the present invention aims to solve the above problems and automate risk identification and risk analysis in security risk assessment operations.
  • To solve the above problems, the security risk assessment device is characterized by comprising: a risk identification unit that identifies security risks inherent in a network system based on security information regarding information assets, vulnerabilities, and threats in the network system, each collected by a plurality of security devices, and that creates an analysis graph describing the probabilistic dependencies of the identified security risks in the form of a directed acyclic graph; and a risk analysis unit that uses the analysis graph to calculate risk probabilities regarding node states.
  • According to the present invention, risk identification and risk analysis in security risk assessment work can be automated.
  • FIG. 1 is a schematic configuration diagram of a system including a security risk assessment device according to a first embodiment of the present invention.
  • FIG. 2 is a schematic diagram showing probabilistic dependencies of information assets, vulnerabilities, threats, and various security information in a network system.
  • FIG. 3 is a schematic diagram of an analysis graph created by the security risk assessment device.
  • FIG. 4 is a flowchart showing the flow of processing for creating an analysis graph.
  • FIG. 5 is a flowchart showing the flow of step S1 in FIG. 4.
  • FIG. 6 is a schematic diagram showing the correspondence between step S1 in FIG. 4 and an analysis graph.
  • FIG. 7A is a schematic diagram showing the correspondence between step S2 in FIG. 4 and an analysis graph.
  • FIG. 7B is a schematic diagram showing an example of calculation performed in step S2 in FIG. 4.
  • FIG. 8 is a schematic diagram showing the correspondence between step S3 in FIG. 4 and an analysis graph.
  • FIG. 9A is a schematic diagram showing the correspondence between step S4 in FIG. 4 and an analysis graph.
  • FIG. 9B is a schematic diagram showing an example of calculation performed in step S4 in FIG. 4.
  • FIG. 10 is a flowchart showing the flow of processing for calculating risk probabilities regarding node states using an analysis graph.
  • FIG. 11 is a schematic configuration diagram of a system including a security risk assessment device according to a second embodiment of the present invention.
  • FIG. 12 is a hardware configuration diagram showing an example of a computer that implements the functions of the security risk assessment device according to each embodiment of the present invention.
  • the security risk assessment system 1 includes an information acquisition section 10, a data processing section 20, a database 30, a risk assessment section (security risk assessment device) 40, and an assessment result output section 50.
  • the information acquisition unit 10 automatically collects security information regarding information assets, vulnerabilities, and threats in a network system, which are collected by a plurality of security devices.
  • the data processing unit 20 processes the security information acquired by the information acquisition unit 10 to format it according to predetermined requirements, and stores in the database 30 data associated with the time at which the security information was collected.
  • the risk assessment unit 40 performs risk assessment based on data stored in the database 30.
  • the assessment result output unit 50 presents the results of the risk assessment to the system administrator 2 and the like.
  • FIG. 2 is a schematic diagram showing the probabilistic dependencies of information assets, vulnerabilities, threats, and various security information in a network system.
  • information assets in a network system are expressed as a tuple of "host” and "application or protocol.” Therefore, it is assumed that an ID that can uniquely identify each of them is given in advance.
  • an IP address and a port number are used as IDs, but in reality, any ID may be used.
  • an IP address is used as an ID to identify a host
  • a TCP/UDP port number is used as an ID to identify an application or protocol.
  • an information asset is expressed as (196.216.0.1, TCP3306 (MySQL (registered trademark))).
  • This example tuple indicates an application (DB, etc.) that uses MySQL (registered trademark) on the host having the IP address.
  • Vulnerabilities within a network system are uniquely identified using a "vulnerability ID.”
  • For example, a CVE identification number (CVE-ID: Common Vulnerabilities and Exposures) can be used as the vulnerability ID.
  • the threat information is the "average number of alerts per unit time" for each information asset.
  • the probabilistic dependencies of the various security information mentioned above are calculated mainly based on network flow (communication amount). The details of each part of the security risk assessment system 1 will be described below.
  • the information acquisition unit 10 includes a network flow information acquisition unit 11, a vulnerability information acquisition unit 12, and a threat information acquisition unit 13, as shown in FIG.
  • the network flow information acquisition unit 11 acquires security information regarding information assets.
  • the vulnerability information acquisition unit 12 acquires security information regarding vulnerability.
  • the threat information acquisition unit 13 acquires security information regarding threats.
  • a network flow collector (for example, NetFlow) installed within the network can be applied to the network flow information acquisition unit 11.
  • a network flow collector can obtain statistical information on packets flowing through a router.
  • a vulnerability detection tool (for example, Vuls) installed within the network can be applied to the vulnerability information acquisition unit 12.
  • An IDS/IPS (for example, Suricata) installed within the network can be applied to the threat information acquisition unit 13. Suricata can identify attacks by combining IDS and IPS functions.
  • the information obtained from various security products of the information acquisition unit 10 includes logs, alerts, detection results, and the like.
  • the data processing unit 20 aggregates information obtained from various security products of the information acquisition unit 10, organizes and formats the information into a format that satisfies the following requirements, and stores the information in the database 30.
  • the database 30 stores the information described below.
  • a tuple of a host and a source port is referred to as a "source interface.”
  • a tuple of a host and a destination port is called a "destination interface.”
  • Here, the data is also expressed using symbols: let h denote a host, s a source port, and d a destination port.
  • the database 30 receives a specific period as a query from the risk identification unit 41, and provides to the risk identification unit 41 the data flows related to each interface during that period, the unit-time average of the number of malicious communications during that period, and the vulnerability information inherent in each host during that period.
  • To meet this requirement, these data are stored together with time information. It is assumed that such processing is performed in advance by the data processing unit 20, and that the database 30 satisfies this requirement.
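  • As an illustration of the requirement above, the following is a minimal sketch of how such time-stamped records and a period query might look. The record types, field names, and the query function are illustrative assumptions and are not taken from the publication.

```python
from dataclasses import dataclass

# Illustrative record types; the field names are assumptions, not taken from the publication.
@dataclass
class FlowRecord:            # data flow between a source interface and a destination interface
    timestamp: float
    src_host: str            # h: host (e.g. an IP address)
    src_port: int            # s: source port
    dst_host: str
    dst_port: int            # d: destination port
    volume: float            # network flow (communication volume)

@dataclass
class AlertRecord:           # IDS/IPS alert against unauthorized communication
    timestamp: float
    dst_host: str
    dst_port: int

@dataclass
class VulnRecord:            # vulnerability inherent in a host
    timestamp: float
    host: str
    port: int
    vuln_id: str             # e.g. a CVE-ID

def query_period(records, t_start, t_end):
    """Return the records whose collection time falls within the queried period."""
    return [r for r in records if t_start <= r.timestamp <= t_end]
```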
  • the risk assessment unit 40 calculates the "probability that unauthorized communication reaches each asset (interface) in a certain period" and the "probability that each vulnerability is exploited in a certain period."
  • For this purpose, the risk assessment unit 40 uses a directed acyclic graph as shown in FIG. 3.
  • such a directed acyclic graph will be referred to as an analytic graph.
  • The source interfaces (Src Interface) and destination interfaces (Dst Interface) in the network system, and the vulnerabilities inherent in each host, are described as nodes in the analysis graph.
  • the analysis graph expresses the probability that unauthorized communication from the outside will propagate between interfaces, the probability that it will lead to a vulnerability attack, etc. as edges and their weights.
  • Each node has its own state variable. If the node is an interface node, the state variable is a binary variable indicating whether unauthorized communication has arrived (1) or not (0). If the node is a vulnerability node, the state variable is a binary variable indicating whether the vulnerability has been exploited (1) or not (0).
  • By determining, based on graph theory, the probability that each of these state variables becomes 1 (that is, the expected value of each state variable), the risk assessment unit 40 obtains the probability that unauthorized communication from the outside will propagate between interfaces and the probability that it will lead to a vulnerability attack.
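  • The analysis graph described above can be sketched with the following minimal data model. The class and field names are illustrative assumptions; it only shows that the graph holds interface nodes and vulnerability nodes with binary state variables, connected by weighted directed edges.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                # "src_interface", "dst_interface", or "vulnerability"
    key: tuple               # e.g. ("196.216.0.1", 3306) for an interface, (host, vuln_id) for a vulnerability
    state: float = 0.0       # state variable: 1 = unauthorized communication arrived / exploited, 0 = not;
                             # later overwritten by its expected value (the risk probability)

@dataclass
class AnalysisGraph:         # directed acyclic graph
    nodes: dict = field(default_factory=dict)   # key -> Node
    edges: dict = field(default_factory=dict)   # (parent_key, child_key) -> weight (conditional probability)

    def add_edge(self, parent_key, child_key, weight):
        self.edges[(parent_key, child_key)] = weight

    def parents(self, child_key):
        return [p for (p, c) in self.edges if c == child_key]
```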
  • The nodes of the BAG in Non-Patent Document 1 represent system states (unique combinations of system variables), so when that prior art is applied to a large-scale network, the number of system variables explodes, making it difficult to create the BN in terms of computational cost.
  • In contrast, the BN nodes created by the risk assessment unit 40 represent ports or vulnerabilities in hosts, so the graph is less likely to become excessively large and the method can be applied to large-scale networks.
  • the risk assessment section 40 includes a risk identification section 41 and a risk analysis section 42.
  • the risk identification unit 41 identifies security risks inherent in the network system based on the security information, and creates an analytical graph that describes the probabilistic dependencies of the identified security risks in the form of an acyclic directed graph.
  • the risk analysis unit 42 calculates the risk probability regarding the state of the node using the analysis graph. The details of each part of the risk assessment section 40 will be explained below.
  • Based on the information obtained from the information acquisition unit 10 during a certain period, the risk identification unit 41 identifies the risks inherent in the network system during that period and describes their probabilistic dependencies as an analysis graph. For this purpose, a system administrator or the like specifies an arbitrary past period t. The period t may be, for example, the period from 5 minutes ago to the present.
  • the risk identification unit 41 sequentially executes the following processes.
  • the risk identification unit 41 inputs the period t into the database 30 as a query and acquires various information for the period.
  • the risk identification unit 41 receives information obtained from the database 30 and creates an analysis graph for use in risk analysis.
  • the risk identification unit 41 inputs the created analysis graph to the risk analysis unit 42.
  • the risk identifying unit 41 identifies dependencies between hosts in a predetermined period t (step S1).
  • the risk identification unit 41 describes dependencies between hosts based on data flow and alert information.
  • the risk identification unit 41 identifies inter-interface dependencies between the host groups (step S2).
  • the risk identification unit 41 calculates and describes the dependency relationship between ports (interfaces), that is, the probability that unauthorized communication will flow between the interfaces, according to the dependency relationship between hosts identified in step S1.
  • the risk identifying unit 41 identifies the dependency relationship between the end point interface and the vulnerability in each host (step S3).
  • the risk identification unit 41 calculates and describes the probability that a vulnerability will be accessed from the endpoint port (endpoint interface) and exploited.
  • the risk identifying unit 41 identifies the dependency relationship between the vulnerability and the starting point interface in each host (step S4).
  • the risk identification unit 41 calculates and describes the probability that unauthorized communication will be performed from the source port (source interface) after the vulnerability is exploited.
  • the risk identification unit 41 sets the state variable of each node in each host based on the alert information (step S5).
  • In step S1, the risk identification unit 41 obtains the inter-host dependency relationships L0, L1, L2, ..., Ll during the period t.
  • L l is a set of hosts whose “distance” from the Internet (external) is l.
  • a list in which hosts whose distance from the outside is l is enumerated is called a list L l .
  • List L 0 enumerates hosts that have directly received unauthorized communications from the outside.
  • List L1 contains hosts that have received direct communication from hosts belonging to the immediately preceding list L0 .
  • FIG. 6 is a schematic diagram showing the correspondence between the inter-host dependencies L0, L1, L2, ..., Ll and the analysis graph when the analysis graph creation process by the risk identification unit 41 is completed.
  • the risk identification unit 41 first enumerates hosts that communicated (alerts, network flows) during the period t in a list LA (step S11).
  • This list L A initially lists all hosts for which an alert has been generated due to communication from the outside within the period t, or all hosts that have communicated with other hosts within the network system. Then, during the following process, the risk identification unit 41 performs an update to sequentially delete hosts from the list LA according to the dependency relationship between the hosts.
  • the risk identification unit 41 enumerates, in the list L0, the hosts from the list LA that have received malicious communications (alerts) from the outside during the period t (step S12). In the list L0, as shown in FIG. 6, all hosts that have received an alert are listed in close proximity to the outside (Internet). The risk identification unit 41 then determines whether the list L0 is empty (step S13). If the list L0 is not empty (step S13: No), the risk identification unit 41 deletes the hosts in the list L0 from the list LA (step S14), updating the list LA. The risk identification unit 41 continues updating the list LA until the list LA becomes empty, as described below. First, the risk identification unit 41 determines whether the list LA is empty (step S15). If the list LA is not empty (step S15: No), the risk identification unit 41 sets the value of the distance l from the Internet (outside) to 0 (l ← 0: step S16), and repeats the following process.
  • the risk identification unit 41 enumerates, in a list Ll+1, the hosts included in the list LA from among the hosts that directly received communication (network flow) from each host in the list Ll during the period t (step S21). Note that when the value of the distance l is 0 in step S21, the list Ll+1 becomes L1. At this time, as shown in FIG. 6, in the list L1, all hosts that have directly received communication (network flow) from each host in the list L0 are listed in proximity to the list L0.
  • the risk identification unit 41 determines whether the list Ll+1 is empty (step S22). If the list Ll+1 is not empty (step S22: No), the risk identification unit 41 deletes the hosts in the list Ll+1 from the list LA (step S23) and updates the list LA. Subsequently, the risk identification unit 41 determines whether the list LA is empty (step S24). If the list LA is not empty (step S24: No), the risk identification unit 41 increments the value of l (l ← l+1: step S25) and returns to step S21. Note that when the value of the distance l is 1 in step S21, the list Ll+1 becomes L2. At this time, as shown in FIG. 6, in the list L2, all hosts that have directly received communication (network flow) from each host in the list L1 are listed in proximity to the list L1.
  • In general, as shown in FIG. 6, the list Ll enumerates all hosts that have directly received communication (network flow) from each host in the list Ll-1, in proximity to the list Ll-1.
  • If the list LA is empty in step S24 (step S24: Yes), the risk identification unit 41 increments the value of l (l ← l+1: step S26), obtains the inter-host dependency relationships L0, L1, L2, ..., Ll (step S27), and ends the process.
  • If the list Ll+1 is empty in step S22 (step S22: Yes), the risk identification unit 41 proceeds to step S27.
  • If the list LA is empty in step S15 (step S15: Yes), or if the list L0 is empty in step S13 (step S13: Yes), the risk identification unit 41 obtains L0 as the inter-host dependency relationship at that point (step S28), and ends the process.
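  • The list-update procedure of steps S11 to S28 is essentially a breadth-first layering of hosts by their distance from the outside. The following is a minimal sketch under the assumption that alerts and network flows are available as simple host-level records; the function and variable names are illustrative.

```python
def layer_hosts(alerted_hosts, flows):
    """
    Group hosts by their "distance" from the Internet (steps S11 to S28).
    alerted_hosts: hosts that received unauthorized communication (alerts) from the outside during period t
    flows: (src_host, dst_host) pairs observed as network flows during period t
    Returns [L0, L1, ..., Ll], where each element is the set of hosts at that distance.
    """
    flows = list(flows)
    # Step S11: hosts that communicated (alerts, network flows) during the period t.
    LA = set(alerted_hosts) | {h for pair in flows for h in pair}
    # Step S12: hosts that directly received malicious communications (alerts) from the outside.
    L0 = set(alerted_hosts)
    if not L0:                          # step S13: Yes -> obtain L0 as the dependency (step S28)
        return [L0]
    layers = [L0]
    LA -= L0                            # step S14
    while LA:                           # steps S15 / S24: continue until LA becomes empty
        prev = layers[-1]
        # Step S21: hosts in LA that directly received communication from a host in the previous layer.
        nxt = {dst for (src, dst) in flows if src in prev and dst in LA}
        if not nxt:                     # step S22: Yes -> proceed to step S27
            break
        layers.append(nxt)
        LA -= nxt                       # step S23
    return layers                       # steps S27 / S28
```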
  • In step S2, the risk identification unit 41 calculates the conditional probability that malicious communication will occur between each source interface node and destination interface node, and creates an edge using the calculated value as a weight.
  • An edge is defined between a host group (L n ) and a host group (L n+1 ) that are adjacent to each other.
  • FIG. 7A is a schematic diagram showing the correspondence between step S2 and the analysis graph.
  • an area 201 surrounded by a virtual line indicates communication occurring between a starting point interface node of a host belonging to host group L 0 and a destination point interface node of a host belonging to host group L 1 .
  • an area 202 surrounded by a virtual line schematically shows communication occurring between a starting point interface node of a host belonging to host group L 1 and a destination point interface node of a host belonging to host group L 2 . .
  • In step S2, the risk identification unit 41 normalizes the ratio of network flows into a probability pij.
  • the risk identification unit 41 calculates the probability p ij using, for example, the following equation (1).
  • In equation (1), the numerator is the network flow (communication volume) sent from the source interface node i to the destination interface node j, and the denominator is the total amount of network flows sent out from the source interface node i.
  • the probability p ij calculated by the risk identification unit 41 will be described with reference to FIG. 7B.
  • Suppose that the network flows (communication volumes) from the source interface node i to the destination interface nodes j1, j2, and j3 are 2, 5, and 4, respectively.
  • Then the total amount of network flows sent out from the source interface node i is 11. Therefore, according to equation (1), the probabilities that communication will occur from the source interface node i to the destination interface nodes j1, j2, and j3 are 2/11, 5/11, and 4/11, respectively.
  • system administrator may give an arbitrary probability value as the probability p ij . Also, the system administrator may provide another method for calculating the probability p ij .
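  • A minimal sketch of the flow-ratio normalization of equation (1), matching the worked example above (2/11, 5/11, 4/11). The function and symbol names are illustrative assumptions.

```python
def edge_probabilities(flow_volume):
    """
    flow_volume: dict mapping (src_interface, dst_interface) -> network flow (communication volume)
    Returns a dict mapping each pair to p_ij = flow(i, j) / total flow sent out from i.
    """
    totals = {}
    for (i, _), vol in flow_volume.items():
        totals[i] = totals.get(i, 0) + vol
    return {(i, j): vol / totals[i] for (i, j), vol in flow_volume.items()}

# Worked example from FIG. 7B: flows of 2, 5 and 4 from i to j1, j2 and j3 give 2/11, 5/11 and 4/11.
p = edge_probabilities({("i", "j1"): 2, ("i", "j2"): 5, ("i", "j3"): 4})
assert abs(p[("i", "j1")] - 2 / 11) < 1e-9
```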
  • FIG. 8 is a schematic diagram showing the correspondence between step S3 and the analysis graph.
  • an area 301 surrounded by a virtual line schematically shows access from an end point interface node to a vulnerable node in a host belonging to host group L0 .
  • An area 302 surrounded by a virtual line schematically shows access from the end point interface node to the vulnerable node within the hosts belonging to the host group L1 .
  • An area 303 surrounded by a virtual line schematically shows access from the end point interface node to the vulnerable node within the hosts belonging to the host group L2 .
  • An area 304 surrounded by a virtual line schematically shows access from the end point interface node to the vulnerable node within the host belonging to the host group L l .
  • the probability qju is defined as a binary value of 0 or 1, for example. For example, it can be defined as 1 if the application having the vulnerability uses d as its destination port, and 0 otherwise. Further, the probability wu can be calculated using, for example, a calculation method based on the Common Vulnerability Scoring System (CVSS). Note that a calculation method based on CVSS is described in Non-Patent Document 1.
  • system administrator may give arbitrary probability values as the probability q ju and the probability w u . Further, the system administrator may provide another calculation method for the probability q ju and the probability w u .
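  • A minimal sketch of the step S3 edge weight between an end point interface node and a vulnerability node. It assumes, as one natural reading of the text, that the weight combines the binary reachability qju with the CVSS-derived exploitation probability wu by multiplication; this combination rule and the CVSS normalization are illustrative assumptions, not stated explicitly in the publication.

```python
def q_ju(dst_port, vuln_app_port):
    """Binary reachability: 1 if the application having the vulnerability uses this destination port."""
    return 1.0 if dst_port == vuln_app_port else 0.0

def w_u(cvss_exploitability_subscore):
    """Exploitation probability derived from CVSS; here the exploitability subscore is simply
    normalized to [0, 1], which is only one possible CVSS-based calculation."""
    return min(cvss_exploitability_subscore / 10.0, 1.0)

def dst_to_vuln_weight(dst_port, vuln_app_port, cvss_exploitability_subscore):
    # Assumed combination: the vulnerability must be reachable from the port AND exploitable.
    return q_ju(dst_port, vuln_app_port) * w_u(cvss_exploitability_subscore)
```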
  • In step S4, the risk identification unit 41 calculates "the probability that unauthorized communication will be performed from each source interface node after each vulnerable node is exploited" and creates an edge using the calculated value as a weight.
  • An edge is defined between a vulnerable node and a source interface (port) node within the same host.
  • FIG. 9A is a schematic diagram showing the correspondence between step S4 and the analysis graph.
  • an area 401 surrounded by a virtual line schematically shows access from a vulnerable node to a starting point interface node in a host belonging to host group L0 .
  • An area 402 surrounded by a virtual line schematically shows access from a vulnerable node to a starting point interface node within a host belonging to host group L1 .
  • An area 403 surrounded by a virtual line schematically shows access from a vulnerable node to a starting point interface node in a host belonging to host group L2 .
  • the risk identification unit 41 calculates the probability rui using, for example, the following equation (2).
  • In equation (2), the numerator is the total amount of network flows sent out from the source interface node i, and the denominator is the total amount of network flows sent out from all source interface nodes within the same host.
  • FIG. 9B is a schematic diagram showing the following conditions 1 to 4.
  • (Condition 1) Network flows are sent out from each of the start point interface nodes i 1 , i 2 , and i 4 , and other start point interface nodes including i 3 do not send out network flows.
  • (Condition 2) The total amount of network flows sent out from the starting point interface node i1 is 6.
  • (Condition 3) The total amount of network flows sent out from the starting point interface node i2 is 8.
  • (Condition 4) The total amount of network flows sent out from the starting point interface node i4 is 15.
  • the total number of network flows sent out from all source interface nodes within the host is 29. Therefore, according to equation (2), the probabilities that the starting point interface nodes i 1 , i 2 , and i 4 are accessed from the vulnerable node u are 6/29, 8/29, and 15/29, respectively.
  • system administrator may give an arbitrary probability value as the probability r ui . Additionally, the system administrator may provide another method for calculating the probability r ui .
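  • A minimal sketch matching the worked example of FIG. 9B (6/29, 8/29, 15/29): the weight from a vulnerability node to each source interface node in the same host is that interface's share of the host's total outgoing network flow. Names are illustrative.

```python
def vuln_to_src_weights(outgoing_volume):
    """
    outgoing_volume: dict mapping each source interface of one host -> total network flow it sent out
    Returns a dict mapping each source interface i to r_ui = flow(i) / total flow of all source
    interfaces within the host. Interfaces that sent nothing (e.g. i3 in FIG. 9B) get weight 0.
    """
    total = sum(outgoing_volume.values())
    if total == 0:
        return {i: 0.0 for i in outgoing_volume}
    return {i: vol / total for i, vol in outgoing_volume.items()}

# Worked example from FIG. 9B: totals of 6, 8, 0 and 15 give 6/29, 8/29, 0 and 15/29.
r = vuln_to_src_weights({"i1": 6, "i2": 8, "i3": 0, "i4": 15})
assert abs(r["i4"] - 15 / 29) < 1e-9
```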
  • In step S5, the risk identification unit 41 defines and sets the state variables of each source interface node, destination interface node, and vulnerability node.
  • the risk identification unit 41 defines a state variable M i representing whether or not unauthorized communication has arrived at the starting point interface node i.
  • the state variable M i is a binary variable, and is 1 if an unauthorized communication has arrived, and 0 otherwise. Let the initial value of the state variable M i be 0.
  • the risk identification unit 41 defines a state variable M j that indicates whether or not unauthorized communication has arrived at the end point interface node j .
  • the state variable M j is a binary variable, and is 1 if an unauthorized communication has arrived, and 0 otherwise. Let the initial value of the state variable M j be 0.
  • the risk identification unit 41 defines a state variable E u representing whether or not the vulnerability node u has been exploited.
  • the state variable E u is a binary variable; it is 1 if it has been exploited and 0 otherwise. Let the initial value of the state variable E u be 0.
  • In step S5, the risk identification unit 41 sets the state variable Mj of each end point interface node that directly received unauthorized communication from the outside to 1, for the host group belonging to the list L0. That is, the risk identification unit 41 sets the state variable Mj using, for example, the following equation (3).
  • The quantity used in equation (3) is the unit-time average of the number of fraudulent communications (IDS/IPS alerts that occurred) received by the end point interface j.
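  • A minimal sketch of the step S5 initialization. One natural reading of equation (3) is that Mj is set to 1 whenever the unit-time average alert count of the end point interface j is positive, and left at 0 otherwise; this thresholding is an assumption made for illustration.

```python
def init_state_variables(alert_rate_per_dst_interface):
    """
    alert_rate_per_dst_interface: dict mapping each end point interface j of the host group L0
                                  -> unit-time average number of IDS/IPS alerts it received.
    Returns a dict mapping j -> initial state variable M_j (assumed: 1 if any alerts occurred, else 0).
    """
    return {j: 1.0 if rate > 0 else 0.0 for j, rate in alert_rate_per_dst_interface.items()}
```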
  • the risk analysis unit 42 sequentially executes the following processes.
  • the risk analysis unit 42 acquires the analysis graph from the risk identification unit 41.
  • the risk analysis unit 42 calculates risk probabilities regarding node states.
  • the risk analysis unit 42 inputs the obtained risk probability of each node to the assessment result output unit 50.
  • FIG. 10 is a flowchart showing the flow of processing for calculating risk probabilities. Note that the state variable calculation and update method will be described later.
  • the risk analysis unit 42 sets the value of the identifier n of the list Ln that specifies dependencies between hosts to 0 (n ← 0: step S31). Then, for all hosts in the list Ln, the risk analysis unit 42 calculates the probability that the state variable of each end point interface node becomes 1 (the expected value of the state variable) and substitutes it as the new value of the state variable, thereby updating it (step S32).
  • After step S35, the risk analysis unit 42 increments the value of n (n ← n+1: step S36), and returns to step S32.
  • the risk analysis unit 42 calculates and updates the value (expected value) of the state variable of each node sequentially from the top to the bottom along the edge direction.
  • the risk analysis unit 42 performs the calculation by regarding the weight of each edge as a conditional probability regarding the state variables.
  • the risk analysis unit 42 performs calculations assuming that the analysis graph is a BN.
  • In step S32, the risk analysis unit 42 performs the calculation by regarding the weight of the edge extending from the source interface node to the destination interface node as "the probability that unauthorized communication will reach the destination interface node given that unauthorized communication has reached the source interface node."
  • When the risk analysis unit 42 regards the analysis graph as a BN and calculates the probability that the state variable of the end point interface node becomes 1, this value is itself the risk probability.
  • the risk probability in this case is the probability that fraudulent communication has reached the end point interface node.
  • In step S33, the risk analysis unit 42 performs the calculation by regarding the weight of the edge extending from the end point interface node to the vulnerable node as "the probability that the vulnerable node will be exploited given that fraudulent communication has reached the end point interface node."
  • When the risk analysis unit 42 regards the analysis graph as a BN and calculates the probability that the state variable of the vulnerable node becomes 1, this value is itself the risk probability.
  • the risk probability in this case is the probability that a vulnerable node is being exploited.
  • In step S35, the risk analysis unit 42 performs the calculation by regarding the weight of the edge extending from the vulnerable node to the starting point interface node as "the probability that unauthorized communication will occur from the starting point interface node given that the vulnerable node has been exploited."
  • When the risk analysis unit 42 regards the analysis graph as a BN and calculates the probability that the state variable of the starting point interface node becomes 1, this value is itself the risk probability.
  • the risk probability in this case is the probability that fraudulent communication has reached the starting point interface node.
  • the risk analysis unit 42 may apply a BN calculation method using a Conditional Probability Table (CPT).
  • the calculation method by which the risk analysis unit 42 obtains each risk probability is not limited to the BN calculation method using CPT.
  • the risk analysis unit 42 can also obtain the risk probability by calculating the expected value of the state variable of the end point interface node using the following equation (4).
  • the risk analysis unit 42 can also obtain the risk probability by calculating the expected value of the state variable of the vulnerable node using the following equation (5).
  • the risk analysis unit 42 can also obtain the risk probability by calculating the expected value of the state variable of the starting point interface node using the following equation (6).
  • When an edge extends from node A to node B, node A is called a parent node of node B.
  • P j is a set of parent nodes of the end point interface node j.
  • P u is a set of parent nodes of the vulnerable node u.
  • P i is a set of parent nodes of the starting point interface node i.
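  • A minimal sketch of the expected-value propagation over the analysis graph. Since equations (4) to (6) are not reproduced here, the combination over parent nodes is assumed to take the standard noisy-OR form (the child is not reached only if no parent transmits along its edge); the actual equations in the publication may differ.

```python
def propagate_expected_values(nodes_in_order, parents, edge_weight, state):
    """
    nodes_in_order: nodes sorted from the top of the analysis graph downward (along the edge direction)
    parents[n]:     list of parent nodes of n (nodes with an edge into n)
    edge_weight[(p, n)]: conditional probability attached to the edge p -> n
    state[n]:       current state variable / expected value of n (initial values set in step S5)
    Updates state[n] with the expected value (risk probability) of each node and returns it.
    """
    for n in nodes_in_order:
        if not parents.get(n):
            continue                     # no parents: keep the initial value (e.g. nodes in L0)
        # Assumed noisy-OR combination: n is reached unless no parent transmits along its edge.
        not_reached = 1.0
        for p in parents[n]:
            not_reached *= 1.0 - edge_weight[(p, n)] * state[p]
        state[n] = 1.0 - not_reached
    return state
```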
  • the risk assessment unit 40 can perform an objective risk assessment that takes specific threats into consideration. For comparison, in the conventional technology described in Non-Patent Document 2, only the "probability that the network system will be attacked" was considered as a threat parameter, and this probability had to be set arbitrarily by hand. On the other hand, the risk assessment unit 40 considers actual threat information. In other words, as shown in equation (3) above, the risk assessment unit 40 sets the state variable Mj of each end point interface node that directly received unauthorized communication from the outside based on the unit time average of the number of unauthorized communications (IDS/IPS alerts that occurred), and uses it to calculate the risk probability.
  • the risk assessment unit 40 can perform a more objective analysis than the conventional technology described in Non-Patent Document 2. Moreover, by suppressing the increase in the size of the analysis graph, scalability can be improved compared to the case of using the BAG of Non-Patent Document 2.
  • the assessment result output unit 50 provides the various risk probabilities (i.e., the assessment results) received from the risk assessment unit 40 to the system administrator 2 and the like.
  • the assessment result output unit 50 may provide the results to a system administrator or the like after processing the data, such as determining a risk probability threshold or sorting. This allows system administrators and the like to obtain assessment results that are easier to understand.
  • By referring to the risk probabilities, system administrators and the like can take risk countermeasures, such as preferentially patching vulnerabilities with higher risk probabilities.
  • system administrators and the like can take risk countermeasures, such as preferentially analyzing IDS/IPS alerts related to ports with higher risk probability.
  • the risk assessment unit 40 can automate and mechanize risk identification and risk analysis in security risk assessment work. Such automation and mechanization not only reduce human operation (human work time and workload), but also contribute to speeding up assessments.
  • the security risk assessment system 1 according to the first embodiment can automate everything from information collection to risk identification and risk analysis. Automation also makes it easier for system administrators without advanced expertise to conduct risk assessments, minimizing the possibility of human error.
  • a security risk assessment system 1B according to a second embodiment will be described. Note that the same components as the security risk assessment system 1 shown in FIG. 1 are denoted by the same reference numerals, and the description thereof will be omitted.
  • the security risk assessment system 1B is different from the security risk assessment system 1 in that it includes an asset value information input section 60 and also includes a risk evaluation section 43 in the risk assessment section 40B.
  • the asset value information input section 60 is a function for inputting asset values and risk criteria.
  • Asset value is the value of information assets in a network system.
  • the asset value may be, for example, the cost of loss when a risk materializes.
  • the information input from the asset value information input section 60 is stored in the database 30 like other various security information. Information input from the asset value information input section 60 is transmitted from the database 30 to the risk evaluation section 43.
  • the risk evaluation unit 43 is a function that performs risk evaluation using the various risk probabilities calculated by the risk analysis unit 42 and the input asset value and risk criteria. For example, a system administrator or the like inputs the asset value and risk standard of each information asset.
  • An asset value may be assigned to each end point interface node.
  • Alternatively, an asset value may be assigned to each starting point interface node.
  • Alternatively, an asset value may be assigned to each vulnerable node.
  • a case will be explained in which an asset value is given to an end point interface node.
  • a system administrator or the like may give asset values in real numbers to all end point interface nodes on the analysis graph.
  • a system administrator or the like may give asset values in real numbers to some end point interface nodes that are considered to be particularly important on the analysis graph.
  • Suppose that an information asset satisfying the following conditions 1 to 3 exists in the target network system.
  • (Condition 1) Important company management information is stored in an application (database, etc.) on host h.
  • (Condition 2) The application can be accessed from the end port d on host h.
  • (Condition 3) The estimated loss cost if the important company management information is lost or leaked is 50 million yen.
  • the system administrator or the like inputs "50 million yen" as the asset value of the corresponding end point interface node (h, d) of the analysis graph using the asset value information input unit 60.
  • the input asset value is not limited to the monetary amount, and may be, for example, the human working time (man-months) required for risk treatment.
  • The input asset values may also be expressed using broad categories such as "large," "medium," and "small," by arbitrarily setting thresholds for each range.
  • the risk criterion is the acceptable cost of loss.
  • the risk criterion may be, for example, an upper bound on the expected value of tolerable loss costs.
  • the system administrator or the like may input "1 million yen" as the risk standard using the asset value information input unit 60.
  • the information input to the asset value information input section 60 does not necessarily need to be stored in the database 30.
  • a system administrator or the like may directly input asset values and risk standards into the risk evaluation section 43.
  • the risk evaluation unit 43 calculates the estimated loss cost using the asset value information input by the system administrator or the like and the risk probability calculated by the risk analysis unit 42, and compares the estimated loss cost with the risk standard to determine the acceptability of the risk.
  • the risk evaluation unit 43 calculates the expected value of loss cost for all end point interface nodes from their risk probabilities and asset values. At this time, for nodes for which asset values have not been input, default values may be determined in advance according to some kind of policy. Note that the risk evaluation unit 43 saves the calculated information in the database 30 or the like so that it can be referenced by a system administrator or the like. Therefore, even if the risk is deemed to be acceptable, the system administrator or the like can refer to the information on the estimated loss cost calculated by the risk evaluation unit 43.
  • the risk evaluation unit 43 can determine the acceptability of the risk by comparing the calculated estimated loss cost with the risk standard.
  • the risk evaluation unit 43 uses the risk standard as a threshold to determine whether the estimated loss cost is smaller than the risk standard (threshold), and if it is smaller, determines that the risk is acceptable.
  • the risk evaluation unit 43 determines that the risk is "acceptable" when the estimated loss cost is less than 1 million yen. In this way, when the risk evaluation unit 43 determines that the risk is acceptable, there is no need to notify the system administrator or the like of the determination result.
  • On the other hand, when the estimated loss cost exceeds 1 million yen, the risk evaluation unit 43 notifies the system administrator or the like of the specific estimated loss cost. In this case, it is preferable that the risk evaluation unit 43 also notifies the system administrator or the like of the location and risk probability of the interface node (or vulnerable node) that is considered to be particularly important.
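  • A minimal sketch of the risk evaluation by the risk evaluation unit 43: the expected loss cost of each node is its risk probability multiplied by its asset value, and the result is compared with the risk standard. The per-node comparison, the default asset value, and the output format are illustrative assumptions.

```python
def evaluate_risk(risk_probability, asset_value, risk_criterion, default_asset_value=0.0):
    """
    risk_probability: dict mapping a node -> risk probability calculated by the risk analysis unit
    asset_value:      dict mapping a node -> asset value (e.g. loss cost in yen) entered by the administrator
    risk_criterion:   acceptable expected loss cost (e.g. 1,000,000 yen)
    Returns (acceptable, findings), where findings lists nodes whose expected loss meets or exceeds the criterion.
    """
    findings = []
    for node, prob in risk_probability.items():
        expected_loss = prob * asset_value.get(node, default_asset_value)
        if expected_loss >= risk_criterion:
            findings.append((node, prob, expected_loss))
    return (len(findings) == 0), findings

# Example: an asset value of 50,000,000 yen with a risk probability of 0.04 gives an expected loss of
# 2,000,000 yen, which exceeds a risk criterion of 1,000,000 yen, so the risk is judged not acceptable.
ok, hits = evaluate_risk({("h", "d"): 0.04}, {("h", "d"): 50_000_000}, 1_000_000)
assert not ok
```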
  • the risk assessment unit 40B according to the second embodiment can automate and mechanize risk identification, risk analysis, and risk evaluation in security risk assessment work. Furthermore, the security risk assessment system 1B according to the second embodiment can automate everything from information collection to risk identification, risk analysis, and risk evaluation.
  • the third embodiment differs from the first and second embodiments in that the function of the risk identification section 41 in the risk assessment sections 40 and 40B is expanded.
  • the risk identification unit 41 according to the first and second embodiments always inputs 0 or 1 when setting the state variable of each node of the analysis graph.
  • the risk identification unit 41 according to the third embodiment inputs a probability value (an arbitrary value within the closed interval [0,1]) to each node of the analysis graph in advance. Two specific examples will be described below.
  • the risk identification unit 41 inputs in advance the IDS/IPS false negative probability to all end point interface nodes that have not received fraudulent communication in the analysis graph.
  • the false negative probability of IDS/IPS is the probability of overlooking fraudulent communications that should have raised an alert.
  • Specifically, the risk identification unit 41 sets the initial value of the state variable Mj of every end point interface node j that has not received unauthorized communication (for which no alert has occurred) in the created analysis graph to the false negative probability instead of 0.
  • As a result, in the risk analysis unit 42, the actually calculated risk probability and the false negative probability set as the initial value exist for the same node at the same time.
  • the risk probability is the probability that the node's state variable becomes 1 or the expected value of the state variable. Then, there is a possibility that the risk probability and the false negative probability, which exist simultaneously in the risk analysis section 42, conflict with each other. In such a case, the risk analysis unit 42 may determine the risk probability by comparing the two and taking the maximum value.
  • the risk identification unit 41 inputs the probability of receiving fraudulent communication during a predetermined period in the future for each end point interface node of the analysis graph.
  • So far, the risk probability for the past predetermined period t has been calculated based on the record of fraudulent communications actually received during that period.
  • In this example, by contrast, the risk probability for a predetermined period in the future is estimated based on such past records.
  • the risk identification unit 41 calculates, based on the past record of receiving fraudulent communications, the probability that each end point interface node will receive fraudulent communication during a predetermined period in the future, and sets the calculated probability value as the initial value of the state variable of the corresponding end point interface node. Thereby, the risk analysis unit 42 downstream of the risk identification unit 41 calculates the risk probability for the predetermined period in the future. However, if the actually calculated risk probability conflicts with the probability value set as the initial value, the risk analysis unit 42 may determine the risk probability by comparing the two and taking the maximum value. Note that when a value other than 0 or 1 is set to a state variable, the risk analysis unit 42 cannot perform calculations using a CPT, and therefore uses the calculation method based on expected values.
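  • A minimal sketch of the third embodiment's extension: a non-binary initial value (the IDS/IPS false negative probability, or a forecast probability derived from past alert records) is set on an end point interface node, and when it conflicts with the propagated value the maximum of the two is taken. Function names and the forecasting rule are illustrative assumptions.

```python
def initial_value_third_embodiment(alerted, false_negative_prob, forecast_prob=None):
    """
    alerted: True if the end point interface node received unauthorized communication (an alert) in period t.
    false_negative_prob: probability that the IDS/IPS overlooked fraudulent communication (false negative).
    forecast_prob: optional probability of receiving fraudulent communication during a future period,
                   estimated from past alert records.
    """
    if alerted:
        return 1.0
    if forecast_prob is not None:
        return forecast_prob
    return false_negative_prob          # used instead of 0 in the first and second embodiments

def resolve_risk_probability(propagated_value, initial_value):
    """If the propagated expected value and the preset initial value conflict, take the maximum."""
    return max(propagated_value, initial_value)
```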
  • According to the third embodiment, the accuracy of risk assessment can be further improved compared to the first and second embodiments.
  • FIG. 12 is a hardware configuration diagram showing an example of a computer 900 that implements the functions of the risk assessment sections 40 and 40B according to this embodiment.
  • the computer 900 includes a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, a RAM (Random Access Memory) 903, an HDD (Hard Disk Drive) 904, an input/output I/F (Interface) 905, a communication I/F 906, and a media I/F 907.
  • the CPU 901 operates based on a program stored in the ROM 902 or HDD 904.
  • the ROM 902 stores a boot program executed by the CPU 901 when the computer 900 is started, programs related to the hardware of the computer 900, and the like.
  • the CPU 901 controls an input device 910 such as a mouse and a keyboard, and an output device 911 such as a display and a printer via an input/output I/F 905.
  • the CPU 901 obtains data from the input device 910 via the input/output I/F 905 and outputs the generated data to the output device 911.
  • A GPU (Graphics Processing Unit) or the like may be used as the processor in addition to the CPU 901.
  • the HDD 904 stores programs executed by the CPU 901 and data used by the programs.
  • Communication I/F 906 receives data from other devices via communication network 920 and outputs it to CPU 901 , and also transmits data generated by CPU 901 to other devices via communication network 920 .
  • the media I/F 907 reads the program or data stored in the recording medium 912 and outputs it to the CPU 901 via the RAM 903.
  • the CPU 901 loads a program related to target processing from the recording medium 912 onto the RAM 903 via the media I/F 907, and executes the loaded program.
  • the recording medium 912 is an optical recording medium such as a DVD (Digital Versatile Disc) or a PD (Phase change rewritable disk), a magneto-optical recording medium such as an MO (Magneto Optical disk), a magnetic recording medium, a semiconductor memory, or the like.
  • the CPU 901 executes the program loaded onto the RAM 903 to realize the functions of the risk assessment units 40 and 40B. Furthermore, the data in the RAM 903 is stored in the HDD 904 .
  • the CPU 901 reads a program related to target processing from the recording medium 912 and executes it. In addition, the CPU 901 may read a program related to target processing from another device via the communication network 920.
  • As described above, the security risk assessment device is characterized by comprising: a risk identification unit 41 that identifies security risks inherent in a network system based on security information regarding information assets, vulnerabilities, and threats in the network system, each collected by a plurality of security devices, and that creates an analysis graph describing the probabilistic dependencies of the identified security risks in the form of a directed acyclic graph; and a risk analysis unit 42 that uses the analysis graph to calculate risk probabilities regarding node states.
  • the risk identification unit 41 can calculate the probability that fraudulent communication has reached each information asset in the network system based on the security information. Further, the risk identification unit 41 can calculate the probability that each vulnerability in the network system is exploited based on the security information. Therefore, a system administrator or the like can refer to the risk probability calculated by the security risk assessment device without manually creating a BN. In addition, the security risk assessment device automates risk identification and risk analysis, which makes it easier for system administrators without advanced specialized knowledge to conduct risk assessments and minimizes the possibility of human error.
  • The security risk assessment device is further characterized by comprising a risk evaluation unit 43 that estimates the loss cost when the risk materializes based on the risk probability calculated by the risk analysis unit 42 and predetermined asset value information, and determines that the risk is acceptable if the estimated loss cost is smaller than a predetermined risk standard.
  • the risk evaluation unit 43 can obtain an estimated loss cost based on the risk probability and the asset value information input by the system administrator or the like. Further, the risk evaluation unit 43 can automatically determine the acceptability of the risk based on the estimated loss cost and the risk criteria input by the system administrator or the like.
  • The nodes of the analysis graph are nodes each representing a source port, a destination port, or a vulnerability in a host, and the risk identification unit 41 uses, as the security information collected over a predetermined period, the communication traffic and the alerts against unauthorized communication.
  • Using this information, the risk identification unit 41 performs a first process of describing the dependencies between hosts by classifying the multiple hosts in the network system into multiple host groups according to their distance from the outside based on the alerts against unauthorized communication;
  • a second process of calculating, as the weight of an edge between nodes, the probability that unauthorized communication will propagate between the interfaces of adjacent host groups;
  • a third process of calculating, as the weight of an edge between nodes, the probability that a vulnerability will be accessed from the destination port and exploited within each host;
  • a fourth process of calculating, as the weight of an edge between nodes, the probability that unauthorized communication will be performed from the source port after the vulnerability is exploited within each host; and a fifth process of setting the state variable of each node based on the alerts.
  • The present invention is characterized in that the analysis graph is created by performing the first to fifth processes.
  • the risk identification unit 41 can calculate the probability that unauthorized communication from outside the network system will propagate from the source port to the destination port as the weight of the edge. Furthermore, the risk identification unit 41 can calculate the probability that propagation of unauthorized communication from the outside will lead to a vulnerability attack, etc. as the edge weight.
  • The risk analysis unit 42 is characterized in that it sequentially calculates and updates the expected value of the state variable of each node along the direction of the edges in the analysis graph, regarding the weight of each edge as a conditional probability regarding the state variables of the nodes.
  • the risk analysis unit 42 performs calculations by regarding the weight of the edge extending from the end port to the vulnerability as the probability that the vulnerability will be exploited if unauthorized communication reaches the end port. be able to. Furthermore, the risk analysis unit 42 can perform calculations by regarding the weight of the edge extending from the vulnerability to the source port as the probability that unauthorized communication will occur from the source port if the vulnerability is exploited. Further, the risk analysis unit 42 can perform calculations by regarding the weight of the edge extending from the start port to the end port as the probability that the fraudulent communication will arrive at the end port if the unauthorized communication has arrived at the start port. .
  • The security risk assessment system is characterized by comprising: an information acquisition unit 10 that automatically collects security information regarding information assets, vulnerabilities, and threats in a network system, each collected by a plurality of security devices; a data processing unit 20 that processes the security information acquired by the information acquisition unit 10 to format it according to predetermined requirements and stores the data, associated with the time at which the security information was collected, in a database 30; and the security risk assessment device 40, which creates an analysis graph based on the data stored in the database 30.
  • the database 30 receives a predetermined period as a query from the risk identification unit 41 of the security risk assessment device 40, and provides each data regarding information assets, vulnerabilities, and threats for the predetermined period to the risk identification unit 41. can do.
  • The security risk assessment method is a security risk assessment method performed by the security risk assessment device 40, in which the security risk assessment device 40 executes steps of: identifying security risks inherent in a network system based on security information regarding information assets, vulnerabilities, and threats in the network system, each collected by a plurality of security devices; creating an analysis graph that describes the probabilistic dependencies of the identified security risks in the form of a directed acyclic graph; and calculating risk probabilities regarding node states using the analysis graph.
  • the security risk assessment device 40 can calculate the probability that unauthorized communication has reached each information asset in the network system based on the security information. Furthermore, the security risk assessment device 40 can calculate the probability that each vulnerability in the network system has been exploited based on the security information. Therefore, a system administrator or the like can refer to the risk probability calculated by the security risk assessment device 40 without manually creating a BN.
  • the security risk assessment device 40 estimates the loss cost when the risk materializes based on the risk probability and predetermined asset value information, and the estimated loss cost is calculated in advance. If the risk is smaller than a predetermined risk standard, the method further includes determining that the risk is acceptable.
  • the security risk assessment device 40 can obtain an estimated loss cost based on the risk probability and the asset value information input by the system administrator or the like. Furthermore, the security risk assessment device 40 can automatically determine the acceptability of the risk based on the estimated loss cost and the risk criteria input by the system administrator or the like.
  • 1, 1B Security risk assessment system; 10 Information acquisition unit; 11 Network flow information acquisition unit; 12 Vulnerability information acquisition unit; 13 Threat information acquisition unit; 20, 20B Data processing unit; 30 Database; 40, 40B Risk assessment unit (security risk assessment device); 41 Risk identification unit; 42 Risk analysis unit; 43 Risk evaluation unit; 50 Assessment result output unit; 60 Asset value information input unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)

Abstract

A security risk assessment device comprises a risk identification unit (41) and a risk analysis unit (42). The risk identification unit (41): identifies a security risk inherent in a network system on the basis of security information pertaining to information assets, vulnerabilities, and threats within the network system that are respectively collected by a plurality of security devices; and creates an analysis graph in which the probabilistic dependency relationships of the identified security risks are described in the form of a directed acyclic graph. The risk analysis unit (42) uses the analysis graph and calculates risk probabilities pertaining to node states.

Description

Security risk assessment device, security risk assessment system, security risk assessment method, and program
 The present invention relates to security risk management and security risk assessment of network systems, and particularly relates to a security risk assessment device, a security risk assessment system, a security risk assessment method, and a program.
 In recent years, network systems have been exposed to the threat of various cyber attacks, and security risk management has become increasingly important. Security risk assessment, which accurately identifies, quantitatively analyzes, and evaluates the security risks of a network system, is essential for correct and efficient security risk management.
 Security risk assessment involves collecting and analyzing a huge amount of security information to identify, analyze, and evaluate risks. Security information includes network configuration management information, vulnerability information, intrusion detection system (IDS)/intrusion prevention system (IPS) alerts, and the like.
 Particularly in large-scale network systems such as enterprise networks, it is not realistic, in terms of working time and workload, for system operators to perform this whole series of risk assessment tasks manually. Furthermore, because the environment surrounding a network system and the characteristics and magnitude of the risks it faces change from day to day, security risk assessment needs to be performed not only accurately but also dynamically and quickly.
 Non-Patent Document 1 describes a security risk assessment method using a Bayesian Attack Graph (BAG). A BAG is a Bayesian Network (BN) for expressing the probabilistic dependencies between vulnerabilities inherent in a network system, and is a graph that comprehensively describes the paths (attack procedures) an attacker can take when attacking information assets in the network system. In a BAG, nodes represent system states and edges represent the probabilities of system state transitions. A system state is a unique combination of system variables, such as "a state in which administrator authority for a specific information asset (a host or the like) has been handed over to an attacker," and a transition of the system state corresponds to "(successful) exploitation of a vulnerability." By using a BAG, it is possible to mechanically and quantitatively calculate the probability of transition to each system state while taking vulnerability dependencies into account. Non-Patent Document 2 describes a technique for automatically creating an Attack Graph (AG).
 Non-Patent Document 1 does not discuss how to create the BAG. In the conventional technology described in Non-Patent Document 1, the collection of security information and the identification of risks are left entirely to humans; that is, a system administrator or the like must create the BAG manually. Those who create a BAG are required to collect and analyze various security information and to accurately identify the dependencies between vulnerabilities, and therefore need highly specialized knowledge of network systems, vulnerabilities, and the like. In addition, the effort and time required to create a BAG are enormous. In the conventional technology described in Non-Patent Document 2, a huge amount of input information, such as configuration management information and vulnerability dependencies, must be prepared manually in order to create an AG automatically.
 Therefore, an object of the present invention is to solve the above problems and to automate risk identification and risk analysis in security risk assessment work.
 A security risk assessment device according to the present invention comprises: a risk identification unit that identifies security risks inherent in a network system based on security information regarding information assets, vulnerabilities, and threats in the network system, each collected by a plurality of security devices, and creates an analysis graph describing the probabilistic dependencies of the identified security risks in the form of a directed acyclic graph; and a risk analysis unit that calculates risk probabilities regarding node states using the analysis graph.
 According to the present invention, risk identification and risk analysis in security risk assessment work can be automated.
FIG. 1 is a schematic configuration diagram of a system including a security risk assessment device according to a first embodiment of the present invention.
FIG. 2 is a schematic diagram showing the probabilistic dependencies of information assets, vulnerabilities, threats, and various security information in a network system.
FIG. 3 is a schematic diagram of an analysis graph created by the security risk assessment device.
FIG. 4 is a flowchart showing the flow of processing for creating the analysis graph.
FIG. 5 is a flowchart showing the flow of step S1 in FIG. 4.
FIG. 6 is a schematic diagram showing the correspondence between step S1 in FIG. 4 and the analysis graph.
FIG. 7A is a schematic diagram showing the correspondence between step S2 in FIG. 4 and the analysis graph.
FIG. 7B is a schematic diagram showing an example of the calculation performed in step S2 in FIG. 4.
FIG. 8 is a schematic diagram showing the correspondence between step S3 in FIG. 4 and the analysis graph.
FIG. 9A is a schematic diagram showing the correspondence between step S4 in FIG. 4 and the analysis graph.
FIG. 9B is a schematic diagram showing an example of the calculation performed in step S4 in FIG. 4.
FIG. 10 is a flowchart showing the flow of processing for calculating risk probabilities regarding node states using the analysis graph.
FIG. 11 is a schematic configuration diagram of a system including a security risk assessment device according to a second embodiment of the present invention.
Also included is a hardware configuration diagram showing an example of a computer that implements the functions of the security risk assessment device and of each device according to each embodiment of the present invention.
 Hereinafter, the security risk assessment device according to the present embodiment will be described in detail with reference to the drawings.
[System configuration overview]
 As shown in FIG. 1, the security risk assessment system 1 includes an information acquisition unit 10, a data processing unit 20, a database 30, a risk assessment unit (security risk assessment device) 40, and an assessment result output unit 50. The information acquisition unit 10 automatically collects security information regarding information assets, vulnerabilities, and threats in a network system, each collected by a plurality of security devices. The data processing unit 20 formats the security information acquired by the information acquisition unit 10 according to predetermined requirements and stores, in the database 30, data associated with the time at which the security information was collected. The risk assessment unit 40 performs a risk assessment based on the data stored in the database 30. The assessment result output unit 50 presents the results of the risk assessment to the system administrator 2 and the like.
[Premise]
 The premise of this embodiment will be described with reference to FIG. 2. FIG. 2 is a schematic diagram showing the probabilistic dependencies of information assets, vulnerabilities, threats, and various security information in a network system. In this embodiment, an information asset in the network system is expressed as a tuple of a "host" and an "application or protocol." It is therefore assumed that IDs that uniquely identify each of them are given in advance. In the following, an IP address and a port number are used as the IDs, but in practice any IDs may be used.
 As an example, an IP address is used as the ID for identifying a host, and a TCP/UDP port number is used as the ID for identifying an application or protocol. For example, an information asset is expressed as (196.216.0.1, TCP3306 (MySQL (registered trademark))). This example tuple refers to an application (a DB or the like) that uses MySQL (registered trademark) on the host having that IP address.
 A vulnerability in the network system is uniquely identified by a "vulnerability ID." Here, the CVE identification number (CVE-ID) of Common Vulnerabilities and Exposures is used as the vulnerability ID. As threats to the network system, external attacks on information assets and unauthorized communications are assumed. Here, the "unit-time average of the number of alerts" for each information asset is used as the threat information. The probabilistic dependencies among the various kinds of security information described above are calculated mainly based on network flows (communication volumes).
 The details of each part of the security risk assessment system 1 are described below.
[Information acquisition unit 10]
 As shown in FIG. 1, the information acquisition unit 10 includes a network flow information acquisition unit 11, a vulnerability information acquisition unit 12, and a threat information acquisition unit 13. The network flow information acquisition unit 11 acquires security information regarding information assets. The vulnerability information acquisition unit 12 acquires security information regarding vulnerabilities. The threat information acquisition unit 13 acquires security information regarding threats.
 Commercially available products can be used for the information acquisition unit 10. A network flow collector (for example, NetFlow) installed in the network can be applied to the network flow information acquisition unit 11; a network flow collector can obtain statistical information on the packets flowing through routers. A vulnerability detection tool (for example, Vuls) installed in the network can be applied to the vulnerability information acquisition unit 12. An IDS/IPS (for example, Suricata) installed in the network can be applied to the threat information acquisition unit 13; Suricata can identify attacks by combining IDS and IPS functions. The information obtained from the various security products of the information acquisition unit 10 includes logs, alerts, detection results, and the like.
[Data processing unit 20, database 30]
 The data processing unit 20 aggregates the information obtained from the various security products of the information acquisition unit 10, organizes and formats it into a format that satisfies, for example, the following requirements, and then stores it in the database 30. The database 30 stores the information shown in the table below.
Figure JPOXMLDOC01-appb-T000001
 In this specification, a tuple of a host and a source port is called a "source interface," and a tuple of a host and a destination port is called a "destination interface." For convenience, the data are also expressed in symbols here. Let h be a host, s a source port, and d a destination port. A source interface is expressed as i = (h, s), and a destination interface as j = (h, d). Let v be a vulnerability, and let the vulnerability v inherent in host h be expressed as the vulnerability node u = (h, v).
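 As a concrete illustration of this notation, the following Python sketch shows one possible in-memory representation of source interfaces, destination interfaces, and vulnerability nodes; the class and field names are illustrative assumptions and are not prescribed by this specification.

from collections import namedtuple

# Illustrative containers for the notation above (names are assumptions):
# a host h is identified by its IP address, ports s/d by TCP/UDP port numbers,
# and a vulnerability v by its CVE-ID.
SrcInterface = namedtuple("SrcInterface", ["host", "src_port"])   # i = (h, s)
DstInterface = namedtuple("DstInterface", ["host", "dst_port"])   # j = (h, d)
VulnNode     = namedtuple("VulnNode",     ["host", "cve_id"])     # u = (h, v)

# Example: a MySQL application on host 196.216.0.1 reached via TCP 3306,
# carrying a hypothetical vulnerability.
j = DstInterface("196.216.0.1", "TCP3306")
u = VulnNode("196.216.0.1", "CVE-2021-0000")   # hypothetical CVE-ID
print(j, u)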
 As will be described later, the database 30 receives a specific period as a query from the risk identification unit 41 and provides the risk identification unit 41 with the data flows for each interface during that period, the unit-time average of the number of malicious communications, and the vulnerability information inherent in each host during that period. These data are therefore stored together with time information so that this requirement can be satisfied. Such processing is performed in advance by the data processing unit 20, and the database 30 is also assumed to satisfy this requirement.
[Risk assessment unit 40]
 Based on the information in the database 30, the risk assessment unit 40 calculates the "probability that unauthorized communication reached each asset (interface) in a certain period" and the "probability that each vulnerability was exploited in a certain period." For this purpose, the risk assessment unit 40 uses a directed acyclic graph as shown in FIG. 3. Hereinafter, such a directed acyclic graph is referred to as an analysis graph.
 In the analysis graph, the source interfaces (Src Interface) in the network system, the destination interfaces (Dst Interface), and the vulnerabilities (Vulnerability) inherent in each host are described as nodes. The probability that unauthorized communication from the outside propagates between interfaces, the probability that it leads to a vulnerability attack, and the like are expressed as edges and their weights.
 Each node has its own state variable. If the node is an interface node, the state variable is a binary variable indicating whether unauthorized communication has reached it (1) or not (0). If the node is a vulnerability node, the state variable is a binary variable indicating whether it has been exploited (1) or not (0). The risk assessment unit 40 obtains, based on graph theory, the probability that each of these state variables becomes 1 (that is, the expected value of each state variable), thereby calculating the probability that unauthorized communication from the outside propagates between interfaces and the probability that it leads to a vulnerability attack.
 Note that the BN nodes in the conventional technology described in Non-Patent Document 1 represent system states (unique combinations of system variables); therefore, when that technology is applied to a large-scale network, the system variables cause a combinatorial explosion, and creating the BN itself becomes difficult from the viewpoint of computational cost. In contrast, the nodes of the BN created by the risk assessment unit 40 represent ports or vulnerabilities of hosts, so the graph is less likely to become bloated and can be applied to large-scale networks.
 As shown in FIG. 1, the risk assessment unit 40 includes a risk identification unit 41 and a risk analysis unit 42. The risk identification unit 41 identifies security risks inherent in the network system based on the security information and creates an analysis graph that describes the probabilistic dependencies of the identified security risks in the form of a directed acyclic graph. The risk analysis unit 42 calculates risk probabilities regarding node states using the analysis graph.
 The details of each part of the risk assessment unit 40 are described below.
[Risk identification unit 41]
 Based on the information obtained from the information acquisition unit 10 for a certain period, the risk identification unit 41 identifies the risks inherent in the network system during that period and describes their probabilistic dependencies as an analysis graph. To this end, a system administrator or the like specifies an arbitrary past period t; the period t may be, for example, the period from five minutes ago to the present. The risk identification unit 41 sequentially executes the following processes: it inputs the period t into the database 30 as a query and acquires the various kinds of information for that period; it creates, from the information obtained from the database 30, the analysis graph used for risk analysis; and it inputs the created analysis graph to the risk analysis unit 42.
 Next, the flow of the processing in which the risk identification unit 41 creates the analysis graph will be described with reference to FIG. 4 (and FIG. 3 as appropriate).
 The risk identification unit 41 identifies the dependencies between hosts in the predetermined period t (step S1). Here, the risk identification unit 41 describes the dependencies between hosts based on the data flows and the alert information. The risk identification unit 41 then identifies the dependencies between interfaces between adjacent host groups (step S2). Here, in accordance with the host dependencies identified in step S1, the risk identification unit 41 calculates and describes the dependencies between ports (interfaces), that is, the probability that unauthorized communication flows between interfaces.
 The risk identification unit 41 then identifies, within each host, the dependencies between the destination interfaces and the vulnerabilities (step S3). Here, the risk identification unit 41 calculates and describes the probability that a vulnerability is accessed from a destination port (destination interface) and exploited.
 The risk identification unit 41 then identifies, within each host, the dependencies between the vulnerabilities and the source interfaces (step S4). Here, the risk identification unit 41 calculates and describes the probability that, after a vulnerability has been exploited, unauthorized communication is performed from a source port (source interface).
 The risk identification unit 41 then sets, within each host, the state variable of each node based on the alert information (step S5).
 The details of steps S1 to S5 are described below with reference to the drawings and formulas.
(Step S1)
 In step S1, the risk identification unit 41 obtains the inter-host dependencies L0, L1, L2, ..., Ll for the period t. Here, Ll is the set of hosts whose "distance" from the Internet (outside) is l, and the list enumerating the hosts at distance l from the outside is called list Ll. List L0 enumerates the hosts that directly received unauthorized communication from the outside. List L1 contains the hosts that directly received communication from hosts belonging to the immediately preceding list L0. Similarly, list L2 and each subsequent list contain the hosts that directly received communication from hosts belonging to the immediately preceding list.
 Note that the value of l is determined automatically according to the input information (the network environment). Also, if no host performed any communication in the first place, these dependencies cannot be obtained, so the graph cannot be created and the assessment cannot be performed.
 Next, the detailed processing flow of step S1 will be described with reference to FIGS. 5 and 6. FIG. 6 is a schematic diagram showing the correspondence between the inter-host dependencies L0, L1, L2, ..., Ll and the analysis graph at the time the analysis graph creation processing by the risk identification unit 41 is completed.
 To create this analysis graph, the risk identification unit 41 first enumerates, in a list LA, the hosts that communicated (alerts, network flows) during the period t (step S11). This list LA initially lists all hosts for which an alert was generated by communication from the outside within the period t, or all hosts that performed some communication with other hosts in the network system. In the course of the following processing, the risk identification unit 41 then updates the list LA by sequentially deleting hosts from it according to the dependencies between hosts.
 Next, the risk identification unit 41 enumerates, in a list L0, the hosts in the list LA that received malicious communication (alerts) from the outside during the period t (step S12). In the list L0, as shown in FIG. 6, all hosts that received an alert are listed adjacent to the outside (Internet). The risk identification unit 41 then determines whether the list L0 is empty (step S13). If the list L0 is not empty (step S13: No), the risk identification unit 41 deletes the hosts in the list L0 from the list LA (step S14), thereby updating the list LA. The risk identification unit 41 continues updating the list LA as follows until the list LA becomes empty. First, the risk identification unit 41 determines whether the list LA is empty (step S15). If the list LA is not empty (step S15: No), the risk identification unit 41 sets the value of the distance l from the Internet (outside) to 0 (l ← 0: step S16) and performs the following iterative processing.
 In the iterative processing, the risk identification unit 41 first enumerates, in a list Ll+1, those hosts that directly received communication (network flows) from the hosts in the list Ll during the period t and that are included in the list LA (step S21). When the value of the distance l is 0 in step S21, the list Ll+1 is L1. At this time, as shown in FIG. 6, all hosts that directly received communication (network flows) from the hosts in the list L0 are listed in the list L1, adjacent to the list L0.
 Following step S21, the risk identification unit 41 determines whether the list Ll+1 is empty (step S22). If the list Ll+1 is not empty (step S22: No), the risk identification unit 41 deletes the hosts in the list Ll+1 from the list LA (step S23) and updates the list LA. Subsequently, the risk identification unit 41 determines whether the list LA is empty (step S24). If the list LA is not empty (step S24: No), the risk identification unit 41 increments the value of l (l ← l+1: step S25) and returns to step S21.
 When the value of the distance l is 1 in step S21, the list Ll+1 is L2. At this time, as shown in FIG. 6, all hosts that directly received communication (network flows) from the hosts in the list L1 are listed in the list L2, adjacent to the list L1.
 Similarly, in step S21, as shown in FIG. 6, the list Ll contains all hosts that directly received communication (network flows) from the hosts in the list Ll-1, listed adjacent to the list Ll-1.
 On the other hand, if the list LA is empty in step S24 (step S24: Yes), the risk identification unit 41 increments the value of l (l ← l+1: step S26), obtains L0, L1, L2, ..., Ll as the inter-host dependencies for the period (step S27), and ends the processing. If the list Ll+1 is empty in step S22 (step S22: Yes), the risk identification unit 41 proceeds to step S27.
 If the list LA is empty in step S15 (step S15: Yes), the risk identification unit 41 obtains L0 as the inter-host dependencies for the period (step S28) and ends the processing.
 If the list L0 is empty in step S13 (step S13: Yes), the risk identification unit 41 ends the processing.
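 The layering of steps S11 to S28 can be pictured with the following minimal Python sketch, which assumes that the alert information is given as a set of hosts and the network flows as host-to-host pairs; it is only one possible reading of the procedure, not the implementation of this embodiment.

def layer_hosts(alerted_hosts, flows, active_hosts):
    """
    Compute the host dependency lists L0, L1, ..., Ll of step S1.

    alerted_hosts : set of hosts that received external malicious traffic
                    (IDS/IPS alerts) during the period t
    flows         : iterable of (src_host, dst_host) pairs observed during t
    active_hosts  : set of all hosts that communicated during t (list LA)

    Returns a list of sets [L0, L1, ...]; an empty list means no graph
    can be created (no externally attacked host was observed).
    """
    remaining = set(active_hosts)                       # list LA
    l0 = alerted_hosts & remaining                      # list L0
    if not l0:
        return []                                       # assessment not possible
    layers = [l0]
    remaining -= l0
    while remaining:
        prev = layers[-1]
        nxt = {dst for (src, dst) in flows if src in prev and dst in remaining}
        if not nxt:
            break                                       # no further propagation
        layers.append(nxt)
        remaining -= nxt
    return layers

# Toy usage: external alerts hit host A; A talks to B, B talks to C.
flows = [("A", "B"), ("B", "C")]
print(layer_hosts({"A"}, flows, {"A", "B", "C"}))       # [{'A'}, {'B'}, {'C'}]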
(Step S2)
 In step S2, the risk identification unit 41 calculates the conditional probability that malicious communication occurs between each source interface node and each destination interface node, and creates an edge whose weight is that value. Edges are defined between adjacent host groups (Ln) and (Ln+1). Here, given that malicious communication has reached the source interface node i = (hi, s), hi ∈ Ln, let pij be the conditional probability that malicious communication occurs between the source interface node i = (hi, s), hi ∈ Ln, and the destination interface node j = (hj, d), hj ∈ Ln+1. This probability pij is expressed as the weight of the edge between node i and node j.
 FIG. 7A is a schematic diagram showing the correspondence between step S2 and the analysis graph. In the analysis graph shown in FIG. 7A, for example, the region 201 surrounded by a phantom line schematically shows communication occurring between the source interface nodes of hosts belonging to the host group L0 and the destination interface nodes of hosts belonging to the host group L1. Likewise, the region 202 surrounded by a phantom line schematically shows communication occurring between the source interface nodes of hosts belonging to the host group L1 and the destination interface nodes of hosts belonging to the host group L2.
 In step S2, the risk identification unit 41 normalizes the ratio of network flows to obtain the probability pij. For example, the risk identification unit 41 obtains the probability pij by the following equation (1).
p_{ij} = \frac{\mu_{ij}}{\rho_i}    (1)
 In equation (1), μij is the network flow between the source interface node i and the destination interface node j, and ρi is the total amount of network flow sent out from the source interface node i.
 Next, a specific example of the probability pij calculated by the risk identification unit 41 will be described with reference to FIG. 7B. For example, as shown in FIG. 7B, when the network flows (communication volumes) from the source interface node i to the destination interface nodes j1, j2, and j3 are 2, 5, and 4, respectively, the total amount of network flow ρi sent out from the source interface node i is 11. Therefore, by equation (1), the probabilities that communication occurs from the source interface node i to the destination interface nodes j1, j2, and j3 are 2/11, 5/11, and 4/11, respectively.
 Note that the system administrator may give an arbitrary probability value as the probability pij, or may give another method of calculating the probability pij.
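 The normalization of equation (1) and the worked example of FIG. 7B can be reproduced with the following sketch; the data layout (a dictionary keyed by interface pairs) is an assumption made for illustration.

from collections import defaultdict

def edge_probabilities(flow_volumes):
    """
    Step S2 sketch: normalize per-destination flow volumes into the
    conditional probabilities p_ij of equation (1).

    flow_volumes : dict mapping (src_interface, dst_interface) -> volume mu_ij
    Returns a dict mapping the same keys to p_ij = mu_ij / rho_i.
    """
    totals = defaultdict(float)                   # rho_i per source interface
    for (src, _dst), mu in flow_volumes.items():
        totals[src] += mu
    return {(src, dst): mu / totals[src]
            for (src, dst), mu in flow_volumes.items()}

# The worked example of FIG. 7B: flows of 2, 5 and 4 from i to j1, j2, j3.
flows = {("i", "j1"): 2, ("i", "j2"): 5, ("i", "j3"): 4}
print(edge_probabilities(flows))   # j1: 2/11 ≈ 0.18, j2: 5/11 ≈ 0.45, j3: 4/11 ≈ 0.36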
(Step S3)
 In step S3, the risk identification unit 41 calculates the "probability that each vulnerability node is exploited after being accessed from each destination interface node" and creates an edge whose weight is that value. Edges are defined between a destination interface (port) node and a vulnerability node within the same host. Here, given that malicious communication has reached the destination interface node j = (h, d), let qju be the conditional probability that the vulnerability node u = (h, v) is accessed from the destination interface node j = (h, d), and let wu be the conditional probability that the vulnerability node u is then further exploited. The weight of the edge between the destination interface node j and the vulnerability node u is qju × wu.
 FIG. 8 is a schematic diagram showing the correspondence between step S3 and the analysis graph. In the analysis graph shown in FIG. 8, for example, the region 301 surrounded by a phantom line schematically shows access from a destination interface node to a vulnerability node within a host belonging to the host group L0. The region 302 schematically shows such access within a host belonging to the host group L1, the region 303 shows such access within a host belonging to the host group L2, and the region 304 shows such access within a host belonging to the host group Ll.
 The probability qju is defined, for example, as a binary value of 0 or 1: it can be defined as 1 if the application a containing the vulnerability v uses d as a destination port, and 0 otherwise. The probability wu can be calculated using, for example, a calculation method based on the Common Vulnerability Scoring System (CVSS); such a CVSS-based calculation method is described in Non-Patent Document 1.
 Note that the system administrator may give arbitrary probability values as the probability qju and the probability wu, or may give other methods of calculating the probability qju and the probability wu.
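 A minimal sketch of the step S3 edge weight is shown below; the binary qju follows the port-based definition above, while the value of wu is simply taken as given (for example, a value obtained elsewhere by a CVSS-based method), and the application names, ports, and the value 0.6 are made-up examples.

def q_ju(app_ports, vuln_app, dst_port):
    """
    Binary q_ju of step S3: 1 if the application containing vulnerability v
    listens on destination port d, otherwise 0.

    app_ports : dict mapping application name -> set of destination ports it uses
    """
    return 1 if dst_port in app_ports.get(vuln_app, set()) else 0

def edge_weight_s3(q, w_u):
    """Weight of the edge from destination interface j to vulnerability node u."""
    return q * w_u

# Illustration only: assume MySQL listens on TCP3306 and w_u has already been
# derived elsewhere (e.g. by a CVSS-based method); 0.6 is a made-up value.
apps = {"mysql": {"TCP3306"}}
print(edge_weight_s3(q_ju(apps, "mysql", "TCP3306"), 0.6))   # 0.6
print(edge_weight_s3(q_ju(apps, "mysql", "TCP22"), 0.6))     # 0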
(Step S4)
 In step S4, the risk identification unit 41 calculates the "probability that unauthorized communication is performed from each source interface node after each vulnerability node has been exploited" and creates an edge whose weight is that value. Edges are defined between a vulnerability node and a source interface (port) node within the same host. Here, given that the vulnerability node u = (h, v) has been exploited, let rui be the probability that the source interface node i = (h, s) is accessed from the vulnerability node u = (h, v). This probability rui is expressed as the weight of the edge between node u and node i.
 FIG. 9A is a schematic diagram showing the correspondence between step S4 and the analysis graph. In the analysis graph shown in FIG. 9A, for example, the region 401 surrounded by a phantom line schematically shows access from a vulnerability node to a source interface node within a host belonging to the host group L0. The region 402 schematically shows such access within a host belonging to the host group L1, and the region 403 shows such access within a host belonging to the host group L2.
 As for the probability rui calculated by the risk identification unit 41, it can be defined, for example, by assuming that the source ports used by a host are random, so that the ratio of the network flows sent out from each source interface node is taken directly as the probability rui. In this case, the risk identification unit 41 obtains the probability rui by the following equation (2). In equation (2), ρi is the total amount of network flow sent out from the source interface node i.
r_{ui} = \frac{\rho_i}{\sum_{i'} \rho_{i'}}    (2)
(where the sum in the denominator runs over all source interface nodes i' of the same host h)
 Next, a specific example of the probability rui calculated by the risk identification unit 41 will be described with reference to FIG. 9B. FIG. 9B is a schematic diagram showing the following conditions 1 to 4.
(Condition 1) Network flows are sent out from the source interface nodes i1, i2, and i4, and the other source interface nodes, including i3, send out no network flows.
(Condition 2) The total amount of network flow sent out from the source interface node i1 is 6.
(Condition 3) The total amount of network flow sent out from the source interface node i2 is 8.
(Condition 4) The total amount of network flow sent out from the source interface node i4 is 15.
 In this case, the total network flow sent out from all source interface nodes in the host is 29. Therefore, by equation (2), the probabilities that the source interface nodes i1, i2, and i4 are accessed from the vulnerability node u are 6/29, 8/29, and 15/29, respectively.
 Note that the system administrator may give an arbitrary probability value as the probability rui, or may give another method of calculating the probability rui.
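 The share-of-outgoing-flow definition of equation (2) and the worked example of FIG. 9B can be reproduced with the following sketch, again under the assumption that the per-interface outgoing totals are available as a dictionary.

def source_access_probabilities(outgoing_totals):
    """
    Step S4 sketch: probability r_ui that an exploited vulnerability node u
    sends traffic out through source interface i, per equation (2): the share
    of interface i in the total outgoing flow of the host.

    outgoing_totals : dict mapping source interface -> total outgoing flow rho_i
    """
    total = sum(outgoing_totals.values())
    if total == 0:
        return {}
    return {iface: rho / total for iface, rho in outgoing_totals.items()}

# The worked example of FIG. 9B: i1, i2, i4 send 6, 8 and 15 flows; i3 sends none.
print(source_access_probabilities({"i1": 6, "i2": 8, "i3": 0, "i4": 15}))
# i1: 6/29, i2: 8/29, i3: 0, i4: 15/29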
(Step S5)
 In step S5, the risk identification unit 41 defines and sets the state variables of each source interface node, destination interface node, and vulnerability node.
 For a source interface node i, the risk identification unit 41 defines a state variable Mi indicating whether unauthorized communication has reached it. The state variable Mi is a binary variable that is 1 if unauthorized communication has reached the node and 0 otherwise; its initial value is 0.
 For a destination interface node j, the risk identification unit 41 defines a state variable Mj indicating whether unauthorized communication has reached it. The state variable Mj is a binary variable that is 1 if unauthorized communication has reached the node and 0 otherwise; its initial value is 0.
 For a vulnerability node u, the risk identification unit 41 defines a state variable Eu indicating whether it has been exploited. The state variable Eu is a binary variable that is 1 if the vulnerability has been exploited and 0 otherwise; its initial value is 0.
 In step S5, for the host group belonging to the list L0, the risk identification unit 41 sets to 1 the state variable Mj of each destination interface node that directly received unauthorized communication from the outside. That is, the risk identification unit 41 sets the state variable Mj by, for example, the following equation (3). In equation (3), λj is the unit-time average of the number of unauthorized communications (generated IDS/IPS alerts) received by the destination interface j.
Figure JPOXMLDOC01-appb-M000004
 Note that the system administrator may give another method of setting this state variable Mj.
 This completes the processing of the risk identification unit 41, and the created analysis graph is sent to the risk analysis unit 42.
[Risk analysis unit 42]
 The risk analysis unit 42 sequentially executes the following processes: it acquires the analysis graph from the risk identification unit 41, calculates the risk probabilities regarding the node states, and inputs the obtained risk probability of each node to the assessment result output unit 50.
 Next, the flow of the processing in which the risk analysis unit 42 calculates the risk probabilities will be described with reference to FIG. 10 (and FIGS. 1 and 6 as appropriate). FIG. 10 is a flowchart showing the flow of the processing for calculating the risk probabilities. The method of calculating and updating the state variables will be described later.
 First, the risk analysis unit 42 sets to 0 the value of the identifier n of the list Ln that specifies the dependencies between hosts (n ← 0: step S31). The risk analysis unit 42 then calculates, for all hosts in the list Ln, the probability that the state variable of each destination interface node becomes 1 (the expected value of the state variable), and substitutes it as the new value of that state variable (step S32). However, when n = 0 (that is, for the host group belonging to the list L0), step S32 is omitted because the state variables of the destination interface nodes have already been set by the risk identification unit 41. The risk analysis unit 42 then calculates, for all hosts in the list Ln, the probability that the state variable of each vulnerability node becomes 1 (the expected value of the state variable), and substitutes it as the new value of that state variable (step S33). The risk analysis unit 42 then determines whether n = l (step S34). If n ≠ l (step S34: No), the risk analysis unit 42 calculates, for all hosts in the list Ln, the probability that the state variable of each source interface node becomes 1 (the expected value of the state variable), and substitutes it as the new value of that state variable (step S35). Subsequently, the risk analysis unit 42 increments the value of n (n ← n+1: step S36) and returns to step S32. On the other hand, if n = l in step S34 (step S34: Yes), the risk analysis unit 42 ends the processing for calculating the risk probabilities.
 The risk analysis unit 42 calculates and updates the value (expected value) of the state variable of each node sequentially from top to bottom along the direction of the edges. Here, the risk analysis unit 42 performs the calculation by regarding the weight of each edge as a conditional probability concerning the state variables; in other words, the risk analysis unit 42 performs the calculation by regarding the analysis graph as a BN.
 The details of step S32, step S33, and step S35 are described below.
(Step S32)
 In step S32, the risk analysis unit 42 performs the calculation by regarding the weight of an edge extending from a source interface node to a destination interface node as "the probability that unauthorized communication reaches the destination interface node given that unauthorized communication has reached the source interface node." When the risk analysis unit 42 regards the analysis graph as a BN and calculates the probability that the state variable of the destination interface node becomes 1, that value is directly the risk probability. The risk probability in this case is the probability that unauthorized communication has reached the destination interface node.
(Step S33)
 In step S33, the risk analysis unit 42 performs the calculation by regarding the weight of an edge extending from a destination interface node to a vulnerability node as "the probability that the vulnerability node is exploited given that unauthorized communication has reached the destination interface node." When the risk analysis unit 42 regards the analysis graph as a BN and calculates the probability that the state variable of the vulnerability node becomes 1, that value is directly the risk probability. The risk probability in this case is the probability that the vulnerability node has been exploited.
(Step S35)
 In step S35, the risk analysis unit 42 performs the calculation by regarding the weight of an edge extending from a vulnerability node to a source interface node as "the probability that unauthorized communication is performed from the source interface node given that the vulnerability node has been exploited." When the risk analysis unit 42 regards the analysis graph as a BN and calculates the probability that the state variable of the source interface node becomes 1, that value is directly the risk probability. The risk probability in this case is the probability that unauthorized communication has reached the source interface node. The risk analysis unit 42 may apply a BN calculation method using a Conditional Probability Table (CPT).
 In step S32, step S33, and step S35, the calculation method by which the risk analysis unit 42 obtains each risk probability is not limited to the BN calculation method using a CPT.
 The risk analysis unit 42 can also obtain the risk probability by calculating the expected value of the state variable of a destination interface node using the following equation (4).
 The risk analysis unit 42 can also obtain the risk probability by calculating the expected value of the state variable of a vulnerability node using the following equation (5).
 The risk analysis unit 42 can also obtain the risk probability by calculating the expected value of the state variable of a source interface node using the following equation (6).
Figure JPOXMLDOC01-appb-M000005
 Here, when an edge with a non-zero weight extends from node A to node B, node A is called a parent node of node B. In equation (4), Pj is the set of parent nodes of the destination interface node j; in equation (5), Pu is the set of parent nodes of the vulnerability node u; and in equation (6), Pi is the set of parent nodes of the source interface node i. A calculation method that considers only the expected values in this way is likely to be processed faster than the BN calculation method using a CPT.
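 Equations (4) to (6) are published only as images, so their exact form is not reproduced here; the following sketch therefore uses a noisy-OR style combination of parent expectations and edge weights purely as an assumed stand-in, to illustrate how an expected-value update over parent nodes might look.

def expected_state(parent_expectations, edge_weights):
    """
    Hedged sketch of the expected-value update (cf. equations (4)-(6)):
    combine each parent's expected state with the weight of the edge from
    that parent, here with a noisy-OR style formula (an assumption; the
    publication's exact formula is not reproduced).
    """
    prob_not_reached = 1.0
    for e_parent, weight in zip(parent_expectations, edge_weights):
        prob_not_reached *= (1.0 - weight * e_parent)
    return 1.0 - prob_not_reached

# Example: a destination interface with two parent source interfaces whose
# state expectations are 1.0 and 0.5, connected with edge weights 2/11 and 5/11.
print(expected_state([1.0, 0.5], [2/11, 5/11]))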
 This completes the processing of the risk analysis unit 42, and the calculated risk probabilities are sent to the assessment result output unit 50.
 The risk assessment unit 40 can perform an objective risk assessment that takes specific threats into consideration. For comparison, the conventional technology described in Non-Patent Document 2, for example, considers only the "probability that the network system is attacked" as a threat parameter, and moreover this probability must be set arbitrarily by hand. In contrast, the risk assessment unit 40 takes actual threat information into account: as shown in equation (3) above, it sets the state variable Mj of each destination interface node that directly received unauthorized communication from the outside based on the unit-time average λj of the number of unauthorized communications (generated IDS/IPS alerts), and uses this to calculate the risk probabilities. Therefore, the risk assessment unit 40 can perform a more objective analysis than the conventional technology described in Non-Patent Document 2. Furthermore, by suppressing the increase in the size of the analysis graph, scalability can be improved compared with the case of using the BAG of Non-Patent Document 2.
[Assessment result output unit 50]
 The assessment result output unit 50 provides the various risk probabilities (that is, the assessment results) received from the risk assessment unit 40 to the system administrator 2 and the like.
 The assessment result output unit 50 may provide the results to the system administrator or the like after processing the data, for example by applying threshold judgment to the risk probabilities or by sorting them. This allows the system administrator or the like to obtain assessment results that are easier to understand.
 By referring to the risk probabilities, the system administrator or the like can take risk responses such as preferentially patching vulnerabilities with higher risk probabilities, or preferentially analyzing the IDS/IPS alerts related to ports with higher risk probabilities.
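 As a simple illustration of such post-processing, the sketch below sorts risk probabilities and applies a threshold; the threshold value, the node labels, and the output format are assumptions made for this example.

def format_results(risk_probabilities, threshold=0.5):
    """
    Illustrative post-processing of assessment results: sort nodes by risk
    probability and flag those at or above a threshold. The threshold value
    and the output format are assumptions, not prescribed by this embodiment.
    """
    ranked = sorted(risk_probabilities.items(), key=lambda kv: kv[1], reverse=True)
    return [(node, p, p >= threshold) for node, p in ranked]

print(format_results({"(h1, TCP3306)": 0.72, "CVE-2021-0000@h1": 0.31}))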
 As explained above, the risk assessment unit 40 according to the first embodiment can automate and mechanize risk identification and risk analysis in security risk assessment work. Such automation and mechanization not only reduce human effort (work time and workload) but also help speed up assessments. Furthermore, the security risk assessment system 1 according to the first embodiment can automate the entire flow from information collection through risk identification to risk analysis. Automation also makes it easier for system administrators without advanced expertise to conduct risk assessments and minimizes the possibility of human error.
(Second embodiment)
 Next, a security risk assessment system 1B according to a second embodiment will be described with reference to FIG. 11. Components that are the same as in the security risk assessment system 1 shown in FIG. 1 are denoted by the same reference numerals, and their description is omitted. The security risk assessment system 1B differs from the security risk assessment system 1 in that it includes an asset value information input unit 60 and in that its risk assessment unit 40B includes a risk evaluation unit 43.
[Asset value information input unit 60]
 The asset value information input unit 60 is a function for inputting asset values and risk criteria. An asset value is the value of an information asset in the network system and may be, for example, the loss cost incurred when a risk materializes. Information input from the asset value information input unit 60 is stored in the database 30 in the same way as the other types of security information, and is transmitted from the database 30 to the risk evaluation unit 43.
[Risk evaluation unit 43]
 The risk evaluation unit 43 is a function that performs risk evaluation using the various risk probabilities calculated by the risk analysis unit 42 together with the input asset values and risk criteria. For example, a system administrator or the like inputs the asset value and risk criterion of each information asset.
 An asset value is assigned to each end-point interface node. Alternatively, an asset value is assigned to each start-point interface node, or to each vulnerability node. As an example, the case where asset values are given to end-point interface nodes will be described.
 The system administrator or the like may, for example, give real-valued asset values to all end-point interface nodes in the analysis graph, or only to those end-point interface nodes in the analysis graph that are considered particularly important.
 For example, assume that an information asset satisfying the following conditions 1 to 3 exists in the target network system.
(Condition 1) Important company management information is stored in an application (database, etc.) on host h.
(Condition 2) The application can be accessed from end-point port d on host h.
(Condition 3) The estimated loss cost if that important management information is lost or leaked is 50 million yen.
 In this case, the system administrator or the like uses the asset value information input unit 60 to enter "50 million yen" as the asset value of the corresponding end-point interface node (h, d) of the analysis graph.
 Note that the asset value entered is not limited to a monetary amount; it may be, for example, the human effort (person-months) required for risk treatment. The asset value may also be expressed with categories such as "large", "medium", and "small", provided the threshold for each category range is defined in advance.
 The risk criterion is the acceptable loss cost. The risk criterion may be, for example, an upper limit on the expected value of the acceptable loss cost. As an example, the system administrator or the like may enter "1 million yen" as the risk criterion through the asset value information input unit 60.
 Note that the information entered into the asset value information input unit 60 does not necessarily need to be stored in the database 30. For example, a system administrator or the like may input the asset values and risk criteria directly into the risk evaluation unit 43.
 The risk evaluation unit 43 calculates an estimated loss cost using the asset value information entered by the system administrator or the like and the risk probabilities calculated by the risk analysis unit 42, and determines the acceptability of the risk by comparing the estimated loss cost with the risk criterion.
 As an example, the case where asset values are given to end-point interface nodes will be described. The risk evaluation unit 43 calculates, for every end-point interface node, the expected value of the loss cost from its risk probability and asset value. For nodes whose asset value has not been entered, a default value may be determined in advance according to some policy. The risk evaluation unit 43 stores the calculated information in the database 30 or the like so that the system administrator or the like can refer to it. Therefore, even when a risk is deemed acceptable, the system administrator or the like can still refer to the estimated loss cost calculated by the risk evaluation unit 43.
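 A minimal sketch of this expected-loss calculation is shown below; the figures and the default-value policy are illustrative assumptions.

```python
# Sketch: expected loss per end-point interface node = risk probability x asset value,
# with a policy-defined default for nodes whose asset value was not entered (assumption).
DEFAULT_ASSET_VALUE = 1_000_000  # yen; assumed default

def expected_losses(risk_probabilities, asset_values):
    return {
        node: prob * asset_values.get(node, DEFAULT_ASSET_VALUE)
        for node, prob in risk_probabilities.items()
    }

risk_probabilities = {("h", "d"): 0.05, ("web01", 443): 0.4}
asset_values = {("h", "d"): 50_000_000}   # 50 million yen, as in the example above
print(expected_losses(risk_probabilities, asset_values))
# {('h', 'd'): 2500000.0, ('web01', 443): 400000.0}
```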
 The risk evaluation unit 43 can judge the acceptability of a risk by comparing the calculated estimated loss cost with the risk criterion. Using the risk criterion as a threshold, the risk evaluation unit 43 determines whether the estimated loss cost is smaller than the risk criterion (threshold), and if it is smaller, judges the risk to be acceptable.
 As an example, assume that the system administrator or the like has entered "1 million yen" as the risk criterion through the asset value information input unit 60. In this case, the risk evaluation unit 43 judges the risk to be acceptable when the estimated loss cost is less than 1 million yen. When the risk evaluation unit 43 judges the risk to be acceptable in this way, it does not need to notify the system administrator or the like of the judgment result.
 On the other hand, when the estimated loss cost exceeds 1 million yen, the risk evaluation unit 43 notifies the system administrator or the like of the specific estimated loss cost. In this case, it is preferable that the risk evaluation unit 43 also notify the system administrator or the like of the locations and risk probabilities of the interface nodes (or vulnerability nodes) considered particularly important.
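 The acceptability judgment described above could look roughly like the following; the reporting format and the criterion value of 1 million yen are taken from the example, while everything else is an illustrative assumption.

```python
# Sketch: compare each estimated loss against the risk criterion and report only
# the nodes whose expected loss is not acceptable.
RISK_CRITERION = 1_000_000  # yen, as in the example above

def judge(expected_losses, risk_probabilities):
    for node, loss in expected_losses.items():
        if loss < RISK_CRITERION:
            continue  # risk is acceptable; no notification needed
        print(f"Risk NOT acceptable at node {node}: "
              f"expected loss {loss:,.0f} yen, risk probability {risk_probabilities[node]:.2f}")

judge({("h", "d"): 2_500_000.0, ("web01", 443): 400_000.0},
      {("h", "d"): 0.05, ("web01", 443): 0.4})
```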
 As explained above, the risk assessment unit 40B according to the second embodiment can automate and mechanize risk identification, risk analysis, and risk evaluation in security risk assessment work. Furthermore, the security risk assessment system 1B according to the second embodiment can automate the entire flow from information collection through risk identification and risk analysis to risk evaluation.
(Third embodiment)
 Next, a security risk assessment system according to a third embodiment will be described with reference to FIGS. 1 and 11. The third embodiment differs from the first and second embodiments in that the function of the risk identification unit 41 in the risk assessment units 40 and 40B is extended.
 The risk identification unit 41 according to the first and second embodiments always inputs 0 or 1 when setting the state variable of each node of the analysis graph.
 In contrast, the risk identification unit 41 according to the third embodiment inputs, in advance, a probability value (an arbitrary value in the closed interval [0,1]) for each node of the analysis graph. Two specific examples are described below.
[First specific example]
 As a first specific example, the risk identification unit 41 inputs, in advance, the IDS/IPS false-negative probability for every end-point interface node in the analysis graph that has not received unauthorized communication. Here, the IDS/IPS false-negative probability is the probability of overlooking unauthorized communication that should have raised an alert. Specifically, the risk identification unit 41 sets the initial value of the state variable Mj of every end-point interface node j in the created analysis graph that has not received unauthorized communication (for which no alert occurred) to the false-negative probability instead of 0.
 Note that when the risk identification unit 41 inputs false-negative probabilities, the actually calculated risk probability and the false-negative probability set as the initial value will coexist in the risk analysis unit 42, which follows the risk identification unit 41. Here, the risk probability is the probability that the node's state variable becomes 1, or the expected value of the state variable. The risk probability and the false-negative probability coexisting in the risk analysis unit 42 may therefore conflict. In such a case, the risk analysis unit 42 may settle the risk probability by, for example, comparing the two and taking the maximum value.
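 A short sketch of this first specific example follows; the false-negative value is an assumed figure, and the max-based conflict resolution mirrors the behaviour just described.

```python
# Sketch: initialize non-alerted end-point interface nodes with the IDS/IPS
# false-negative probability, and resolve conflicts with calculated risk by max.
FALSE_NEGATIVE_PROB = 0.05  # assumed IDS/IPS miss rate

def initialize_states(endpoint_nodes, alerted_nodes):
    return {node: (1.0 if node in alerted_nodes else FALSE_NEGATIVE_PROB)
            for node in endpoint_nodes}

def resolve(initial_value, calculated_risk):
    # Conflict between the preset initial value and the calculated risk
    # probability is settled by taking the larger of the two.
    return max(initial_value, calculated_risk)

states = initialize_states({("web01", 443), ("db01", 3306)}, alerted_nodes={("web01", 443)})
print(states)                                  # db01 starts at 0.05 rather than 0
print(resolve(states[("db01", 3306)], 0.02))   # -> 0.05
```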
[Second specific example]
 As a second specific example, the risk identification unit 41 inputs, for each end-point interface node of the analysis graph, the probability of receiving unauthorized communication during a predetermined future period.
 In the first and second embodiments, the risk probability for a predetermined past period t is calculated based on the record of unauthorized communications actually received during that period.
 In contrast, in this specific example of the third embodiment, the risk probability for a predetermined future period is determined based on past results.
 Specifically, the risk identification unit 41 determines, from the past record of received unauthorized communications, the probability that each end-point interface node will receive unauthorized communication during a predetermined future period, and sets that probability value as the initial value of the state variable of the corresponding end-point interface node. The risk analysis unit 42, which follows the risk identification unit 41, then calculates the risk probability for the predetermined future period. If the actually calculated risk probability conflicts with the probability value set as the initial value, the risk analysis unit 42 may settle the risk probability by, for example, comparing the two and taking the maximum value. Note that when a value other than 0 or 1 is set for a state variable, the risk analysis unit 42 cannot perform calculations using CPTs and therefore uses the calculation method based on expected values.
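 One possible way to derive such a future-period probability from the past record is sketched below; the Poisson-style estimator is an illustrative assumption, and any other forecasting model could be substituted.

```python
# Sketch: probability that a node receives at least one unauthorized communication
# during a future period, estimated from its past alert record (assumed Poisson model).
import math

def future_receive_probability(past_alert_count, past_period, future_period):
    rate = past_alert_count / past_period          # average alerts per unit time
    return 1.0 - math.exp(-rate * future_period)   # P(at least one arrival)

# 6 alerts over the past 30 days -> probability of at least one in the next 7 days
p = future_receive_probability(past_alert_count=6, past_period=30.0, future_period=7.0)
print(round(p, 3))  # about 0.753
```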
 According to the third embodiment, the accuracy of the risk assessment can be improved even further than in the first and second embodiments.
[Hardware configuration]
 The risk assessment units (security risk assessment devices) 40 and 40B according to the embodiments described above are realized by, for example, a computer 900 configured as shown in FIG. 12.
 FIG. 12 is a hardware configuration diagram showing an example of the computer 900 that implements the functions of the risk assessment units 40 and 40B according to these embodiments. The computer 900 includes a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, a RAM (Random Access Memory) 903, an HDD (Hard Disk Drive) 904, an input/output I/F (Interface) 905, a communication I/F 906, and a media I/F 907.
 The CPU 901 operates based on programs stored in the ROM 902 or the HDD 904. The ROM 902 stores a boot program executed by the CPU 901 when the computer 900 starts, programs related to the hardware of the computer 900, and the like.
 The CPU 901 controls an input device 910 such as a mouse or keyboard and an output device 911 such as a display or printer via the input/output I/F 905. The CPU 901 obtains data from the input device 910 via the input/output I/F 905 and outputs generated data to the output device 911. A GPU (Graphics Processing Unit) or the like may be used together with the CPU 901 as the processor.
 The HDD 904 stores programs executed by the CPU 901, the data used by those programs, and the like. The communication I/F 906 receives data from other devices via a communication network 920 and outputs it to the CPU 901, and transmits data generated by the CPU 901 to other devices via the communication network 920.
 The media I/F 907 reads a program or data stored on a recording medium 912 and outputs it to the CPU 901 via the RAM 903. The CPU 901 loads a program for the target processing from the recording medium 912 onto the RAM 903 via the media I/F 907 and executes the loaded program. The recording medium 912 is an optical recording medium such as a DVD (Digital Versatile Disc) or PD (Phase change rewritable Disk), a magneto-optical recording medium such as an MO (Magneto Optical disk), a magnetic recording medium, a semiconductor memory, or the like.
 For example, when the computer 900 functions as the risk assessment unit 40 or 40B according to the embodiments described above, the CPU 901 realizes the functions of the risk assessment unit 40 or 40B by executing the program loaded onto the RAM 903. The HDD 904 also stores the data held in the RAM 903. The CPU 901 reads the program for the target processing from the recording medium 912 and executes it; alternatively, the CPU 901 may read the program for the target processing from another device via the communication network 920.
[Effects]
 As explained above, the security risk assessment device includes: a risk identification unit 41 that identifies security risks inherent in a network system based on security information about information assets, vulnerabilities, and threats in the network system collected by a plurality of security devices, and creates an analysis graph describing the probabilistic dependencies of the identified security risks in the form of a directed acyclic graph; and a risk analysis unit 42 that uses the analysis graph to calculate risk probabilities regarding the states of the nodes.
 With this configuration, in the security risk assessment device, the risk identification unit 41 can calculate, based on the security information, the probability that unauthorized communication has reached each information asset in the network system and the probability that each vulnerability in the network system has been exploited. A system administrator or the like can therefore refer to the risk probabilities calculated by the security risk assessment device without creating a BN by hand. Furthermore, because the security risk assessment device automates risk identification and risk analysis, it makes it easier for system administrators without advanced expertise to conduct risk assessments and minimizes the possibility of human error.
 The security risk assessment device further includes a risk evaluation unit 43 that estimates, based on the risk probability calculated by the risk analysis unit 42 and predetermined asset value information, the loss cost incurred when the risk materializes, and that judges the risk to be acceptable when the estimated loss cost is smaller than a predetermined risk criterion.
 With this configuration, in the security risk assessment device, the risk evaluation unit 43 can obtain an estimate of the loss cost based on the risk probability and the asset value information entered by the system administrator or the like, and can automatically judge the acceptability of the risk based on the estimated loss cost and the risk criterion entered by the system administrator or the like.
 In the security risk assessment device, each node of the analysis graph represents a start-point port, an end-point port, or a vulnerability on a host, and the risk identification unit 41 creates the analysis graph by performing: a first process of describing the dependencies between hosts by classifying the plurality of hosts in the network system into a plurality of host groups according to their distance from the outside, based on the traffic volume and the alerts against unauthorized communication collected as security information during a predetermined period; a second process of calculating, in accordance with the dependencies between hosts, the probability that unauthorized communication flows between a start-point port of a host belonging to a given host group and an end-point port of a host belonging to another host group that depends on the given host group, as the weight of the edge between those nodes; a third process of calculating, within each host, the probability that a vulnerability is accessed from an end-point port and exploited, as the weight of the edge between those nodes; a fourth process of calculating, within each host, the probability that unauthorized communication is performed from a start-point port after the vulnerability has been exploited, as the weight of the edge between those nodes; and a fifth process of setting the state variable of each node based on the alerts.
 With this configuration, in the security risk assessment device, the risk identification unit 41 can calculate, as edge weights, the probability that unauthorized communication from outside the network system propagates from a start-point port to an end-point port, and the probability that the propagation of unauthorized communication from the outside leads to an attack on a vulnerability.
 In the security risk assessment device, the risk analysis unit 42 treats the weight of each edge in the analysis graph as a conditional probability regarding the state variables of the nodes, and sequentially calculates and updates the expected values of the state variables of the nodes along the directions of the edges.
 With this configuration, the risk analysis unit 42 can treat the weight of an edge extending from an end-point port to a vulnerability as the probability that the vulnerability is exploited given that unauthorized communication has reached the end-point port, the weight of an edge extending from a vulnerability to a start-point port as the probability that unauthorized communication is performed from the start-point port given that the vulnerability has been exploited, and the weight of an edge extending from a start-point port to an end-point port as the probability that unauthorized communication reaches the end-point port given that unauthorized communication has reached the start-point port.
 The security risk assessment system includes: an information acquisition unit 10 that automatically collects security information about information assets, vulnerabilities, and threats in the network system, each collected by a plurality of security devices; a data processing unit 20 that formats the security information acquired by the information acquisition unit 10 according to predetermined requirements and stores the data, associated with the time at which the security information was collected, in a database 30; and the security risk assessment device 40, where the security risk assessment device 40 creates the analysis graph based on the data stored in the database 30.
 With this configuration, the database 30 can receive a predetermined period as a query from the risk identification unit 41 of the security risk assessment device 40 and provide the risk identification unit 41 with the data on information assets, vulnerabilities, and threats for that predetermined period.
 The security risk assessment method is a security risk assessment method of the security risk assessment device 40, in which the security risk assessment device 40 executes: a step of identifying security risks inherent in the network system based on security information about information assets, vulnerabilities, and threats in the network system collected by a plurality of security devices, and creating an analysis graph describing the probabilistic dependencies of the identified security risks in the form of a directed acyclic graph; and a step of calculating risk probabilities regarding the states of the nodes using the analysis graph.
 With this method, the security risk assessment device 40 can calculate, based on the security information, the probability that unauthorized communication has reached each information asset in the network system and the probability that each vulnerability in the network system has been exploited. A system administrator or the like can therefore refer to the risk probabilities calculated by the security risk assessment device 40 without creating a BN by hand.
 In the security risk assessment method, the security risk assessment device 40 further executes a step of estimating, based on the risk probability and predetermined asset value information, the loss cost incurred when the risk materializes, and judging the risk to be acceptable when the estimated loss cost is smaller than a predetermined risk criterion.
 With this method, the security risk assessment device 40 can obtain an estimate of the loss cost based on the risk probability and the asset value information entered by the system administrator or the like, and can automatically judge the acceptability of the risk based on the estimated loss cost and the risk criterion entered by the system administrator or the like.
 The present invention is not limited to the embodiments described above, and many modifications are possible within the technical idea of the present invention by those with ordinary knowledge in this field.
 1, 1B Security risk assessment system
 10 Information acquisition unit
 11 Network flow information acquisition unit
 12 Vulnerability information acquisition unit
 13 Threat information acquisition unit
 20, 20B Data processing unit
 30 Database
 40, 40B Risk assessment unit (security risk assessment device)
 41 Risk identification unit
 42 Risk analysis unit
 43 Risk evaluation unit
 50 Assessment result output unit
 60 Asset value information input unit

Claims (8)

  1.  A security risk assessment device comprising:
     a risk identification unit that identifies security risks inherent in a network system based on security information about information assets, vulnerabilities, and threats in the network system collected by a plurality of security devices, and creates an analysis graph describing probabilistic dependencies of the identified security risks in the form of a directed acyclic graph; and
     a risk analysis unit that calculates a risk probability regarding a state of a node using the analysis graph.
  2.  The security risk assessment device according to claim 1, further comprising a risk evaluation unit that estimates, based on the risk probability calculated by the risk analysis unit and predetermined asset value information, a loss cost incurred when the risk materializes, and that judges the risk to be acceptable when the estimated loss cost is smaller than a predetermined risk criterion.
  3.  The security risk assessment device according to claim 1 or claim 2, wherein the nodes of the analysis graph are nodes each representing a start-point port, an end-point port, or a vulnerability on a host, and
     the risk identification unit creates the analysis graph by performing:
     a first process of describing dependencies between hosts by classifying a plurality of hosts in the network system into a plurality of host groups according to distance from the outside, based on traffic volume and alerts against unauthorized communication collected as security information during a predetermined period;
     a second process of calculating, in accordance with the dependencies between the hosts, a probability that unauthorized communication flows between a start-point port of a host included in a given host group and an end-point port of a host included in another host group that depends on the given host group, as a weight of an edge between the corresponding nodes;
     a third process of calculating, within each host, a probability that a vulnerability is accessed from an end-point port and exploited, as a weight of an edge between the corresponding nodes;
     a fourth process of calculating, within each host, a probability that unauthorized communication is performed from a start-point port after the vulnerability has been exploited, as a weight of an edge between the corresponding nodes; and
     a fifth process of setting a state variable of each node based on the alerts.
  4.  The security risk assessment device according to claim 3, wherein the risk analysis unit treats the weight of each edge in the analysis graph as a conditional probability regarding the state variables of the nodes, and sequentially calculates and updates expected values of the state variables of the nodes along the directions of the edges.
  5.  A security risk assessment system comprising:
     an information acquisition unit that automatically collects security information about information assets, vulnerabilities, and threats in a network system, each collected by a plurality of security devices;
     a data processing unit that formats the security information acquired by the information acquisition unit according to predetermined requirements and stores data associated with a time at which the security information was collected in a database; and
     the security risk assessment device according to claim 1 or claim 2,
     wherein the security risk assessment device creates the analysis graph based on the data stored in the database.
  6.  A security risk assessment method of a security risk assessment device, wherein the security risk assessment device executes:
     a step of identifying security risks inherent in a network system based on security information about information assets, vulnerabilities, and threats in the network system collected by a plurality of security devices, and creating an analysis graph describing probabilistic dependencies of the identified security risks in the form of a directed acyclic graph; and
     a step of calculating a risk probability regarding a state of a node using the analysis graph.
  7.  The security risk assessment method according to claim 6, wherein the security risk assessment device further executes a step of estimating, based on the risk probability and predetermined asset value information, a loss cost incurred when the risk materializes, and judging the risk to be acceptable when the estimated loss cost is smaller than a predetermined risk criterion.
  8.  A program for causing a computer to function as the security risk assessment device according to claim 1 or claim 2.
PCT/JP2022/020265 2022-05-13 2022-05-13 Security risk assessment device, security risk assessment system, security risk assessment method, and program WO2023218660A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/020265 WO2023218660A1 (en) 2022-05-13 2022-05-13 Security risk assessment device, security risk assessment system, security risk assessment method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/020265 WO2023218660A1 (en) 2022-05-13 2022-05-13 Security risk assessment device, security risk assessment system, security risk assessment method, and program

Publications (1)

Publication Number Publication Date
WO2023218660A1 true WO2023218660A1 (en) 2023-11-16

Family

ID=88730203

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/020265 WO2023218660A1 (en) 2022-05-13 2022-05-13 Security risk assessment device, security risk assessment system, security risk assessment method, and program

Country Status (1)

Country Link
WO (1) WO2023218660A1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016091402A (en) * 2014-11-07 2016-05-23 株式会社日立製作所 Risk evaluation system and risk evaluation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YOSHIAKI ISOBE; AKIHIRO SUGIMOTO: "Development of Security Risk Analyzing System based on Dynamically Modeling using the Bayesian Network", IPSJ SIG TECHNICAL REPORT, INFORMATION PROCESSING SOCIETY OF JAPAN, JP, vol. 2015-CSEC-68, no. 6, 26 February 2015 (2015-02-26), JP, pages 1 - 6, XP009544605 *

Similar Documents

Publication Publication Date Title
US11876824B2 (en) Extracting process aware analytical attack graphs through logical network analysis
US10805321B2 (en) System and method for evaluating network threats and usage
US20220124108A1 (en) System and method for monitoring security attack chains
JP6622928B2 (en) Accurate real-time identification of malicious BGP hijacking
US20210256528A1 (en) Automated cloud security computer system for proactive risk detection and adaptive response to risks and method of using same
US7530105B2 (en) Tactical and strategic attack detection and prediction
EP2498198B1 (en) Information system security based on threat vectors
KR102295654B1 (en) Method and apparatus for predicting attack target based on attack graph
Kumar et al. A robust intelligent zero-day cyber-attack detection technique
US10129276B1 (en) Methods and apparatus for identifying suspicious domains using common user clustering
EP1768046A2 (en) Systems and methods of associating security vulnerabilities and assets
US11424993B1 (en) Artificial intelligence system for network traffic flow based detection of service usage policy violations
Alissa et al. Botnet attack detection in iot using machine learning
Elfeshawy et al. Divided two-part adaptive intrusion detection system
Patel et al. Od-ids2022: generating a new offensive defensive intrusion detection dataset for machine learning-based attack classification
JP2023525127A (en) Protect computer assets from malicious attacks
WO2023218660A1 (en) Security risk assessment device, security risk assessment system, security risk assessment method, and program
Yeboah-Ofori et al. Cyber resilience in supply chain system security using machine learning for threat predictions
Chakir et al. A real-time risk assessment model for intrusion detection systems using pattern matching
Aslanyan et al. Comparative analysis of attack graphs
JP7033560B2 (en) Analytical equipment and analytical method
Skopik The limitations of national cyber security sensor networks debunked: Why the human factor matters
Nhlabatsi et al. Threatriskevaluator: A tool for assessing threat-specific security risks in the cloud
Akshaya et al. Enhancing Zero-Day Attack Prediction a Hybrid Game Theory Approach with Neural Networks
Byers et al. Real-time fusion and projection of network intrusion activity

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22941736

Country of ref document: EP

Kind code of ref document: A1