WO2021028060A1 - Security automation system - Google Patents

Security automation system

Info

Publication number
WO2021028060A1
Authority
WO
WIPO (PCT)
Prior art keywords
security
asset
risk
automation system
assets
Prior art date
Application number
PCT/EP2019/071971
Other languages
French (fr)
Inventor
Harri Hakala
Anu PUHAKAINEN
Ari PIETIKÄINEN
Kyösti TOIVANEN
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/EP2019/071971 priority Critical patent/WO2021028060A1/en
Publication of WO2021028060A1 publication Critical patent/WO2021028060A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L 63/1433 Vulnerability analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F 21/55 Detecting local intrusion or implementing counter-measures
    • G06F 21/554 Detecting local intrusion or implementing counter-measures involving event detection and direct action
    • G06F 21/57 Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
    • G06F 21/577 Assessing vulnerabilities and evaluating computer system security
    • H04L 63/1408 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L 63/1416 Event detection, e.g. attack signature detection
    • H04L 63/1425 Traffic logging, e.g. anomaly detection
    • G06F 2221/00 Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 2221/03 Indexing scheme relating to G06F21/50, monitoring users, programs or devices to maintain the integrity of platforms
    • G06F 2221/034 Test or assess a computer or a system

Definitions

  • the proposed technology generally relates to Information Security (IS) and Information Technology (IT), and specifically concerns a security automation system configured for security management of an Information Technology system, and a network entity of a communication system or network comprising such a security automation system, a method for security management of an Information Technology system as well as a corresponding computer program and computer-program product.
  • IS Information Security
  • IT Information Technology
  • SIEM Security Information and Event Management
  • SOAR Security Orchestration, Automation and Response
  • Vulnerability scanners are used to check servers one by one for known vulnerabilities in the servers or their sub-components. Vulnerability management reports are usually analyzed manually by a security expert.
  • Vulnerability information for different third-party (3PP) software components is received or fetched from public sources for manual processing and mapping to the servers or sub-components.
  • Threat intelligence information is received or fetched from commercial or public sources for manual processing.
  • Asset inventories are used to store information and characteristics about servers.
  • IDS/IPS Intrusion Detection System/Intrusion Prevention System
  • Compliance tools exist but are limited to snapshot views rather than continuous compliance monitoring and verification. Risk management is usually performed as a desktop/paper exercise at specific time slots and risk management applications are rare. Thus, the current security management systems are to a large extent not operating in real time to document and report on vulnerabilities, threats, security controls and risks.
  • SIEM and SOAR systems are designed for use in enterprise server security monitoring and management for enterprise business use cases, and are not suited to telecom network elements and business use cases without major customization.
  • a SIEM system provides a central console for viewing, monitoring and managing security-related events and log data from across the considered enterprise.
  • a SIEM system can enable an analyst to identify and respond to suspicious behavior patterns.
  • Another object is to provide a method for security management of an Information Technology system.
  • Another specific object is to provide situational awareness of an Information Technology system security posture, including e.g. a visualization platform and means of notification to human security analysts.
  • Yet another object is to provide a computer program for performing, when executed, security management of an Information Technology system.
  • Still another object is to provide a complementary security automation system configured for security management of an Information Technology system.
  • a security automation system configured for security management of an Information Technology (IT) system.
  • the Information Technology system has a number of interacting system components, each system component being associated with one or more operational assets for the operation of the system component.
  • the operational assets are organized in one or more security domains according to a system topology.
  • the security automation system is configured to obtain: i) security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets of the system components, ii) asset information representative of the system topology, asset configuration and/or dependency between assets, and/or iii) security configuration information representative of asset and/or domain security configuration.
  • the security automation system is further configured to determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a trust indication representing trustworthiness of the operational asset and/or domain at least partly based on the security configuration information representative of asset and/or domain security configuration.
  • the security automation system is also configured to determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a risk level at least partly based on the security information, the asset information and at least a subset of the determined trust indications.
  • the security automation system is configured to determine one or more security control actions related to at least a subset of the operational assets and/or domains based on at least a subset of the determined risk levels.
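The flow set out in the preceding bullets (obtain security, asset and security configuration information; determine per-asset trust indications; determine risk levels from security information, asset information and trust; derive control actions from risk levels) can be sketched in Python. This is an illustrative sketch only, not the claimed implementation: the `Asset` fields, the scoring formulas, the 0.5 dependency weight and the action name are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    domain: str
    vulnerabilities: int = 0        # from security information
    hardening_score: float = 1.0    # from security configuration information (0..1)
    depends_on: list = field(default_factory=list)  # from asset information

def trust_indication(asset: Asset) -> float:
    """Trust indication derived from the asset's security configuration."""
    return asset.hardening_score

def risk_level(asset: Asset, trust: dict) -> float:
    """Risk grows with known vulnerabilities and shrinks with trust;
    dependencies contribute a damped share of their own exposure."""
    exposure = lambda a: a.vulnerabilities * (1.0 - trust[a.name])
    return exposure(asset) + 0.5 * sum(exposure(d) for d in asset.depends_on)

def control_actions(assets, risks, threshold=1.0):
    """One security control action per asset whose risk exceeds the threshold."""
    return {a.name: "isolate-and-patch" for a in assets if risks[a.name] > threshold}

db = Asset("db", "core", vulnerabilities=3, hardening_score=0.4)
web = Asset("web", "dmz", vulnerabilities=1, hardening_score=0.9, depends_on=[db])
assets = [db, web]

trust = {a.name: trust_indication(a) for a in assets}
risks = {a.name: risk_level(a, trust) for a in assets}
actions = control_actions(assets, risks)
```

In this toy scenario the poorly hardened database both carries the highest risk itself and raises the risk of the web front-end that depends on it.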
  • a network entity of a communication system or network, comprising such a security automation system.
  • a method for security management of an Information Technology system having a number of interacting system components.
  • Each system component is associated with one or more operational assets relevant to the operation of the system component, and the operational assets are organized in one or more security domains according to a system topology.
  • the method comprises: obtaining: i) security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets of the system components, ii) asset information representative of the system topology, asset configuration and/or dependency between assets, and/or iii) security configuration information representative of asset and/or domain security configuration; determining, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a trust indication representing trustworthiness of the operational asset and/or domain at least partly based on the security configuration information representative of asset and/or domain security configuration; determining, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a risk level at least partly based on the security information, the asset information and at least a subset of the determined trust indications; and determining one or more security control actions related to at least a subset of the operational assets and/or domains based on at least a subset of the determined risk levels.
  • a computer program for performing, when executed, security management of an Information Technology system having a number of interacting system components.
  • Each system component is associated with one or more operational assets relevant to the operation of the system component, and the operational assets are organized in one or more security domains according to a system topology.
  • the computer program comprises instructions, which when executed by at least one processor, cause the at least one processor to: obtain: i) security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets of the system components, ii) asset information representative of the system topology, asset configuration and/or dependency between assets, and/or iii) security configuration information representative of asset and/or domain security configuration; determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a trust indication representing trustworthiness of the operational asset and/or domain at least partly based on the security configuration information representative of asset and/or domain security configuration; determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a risk level at least partly based on the security information, the asset information and at least a subset of the determined trust indications; and determine one or more security control actions related to at least a subset of the operational assets and/or domains based on at least a subset of the determined risk levels.
  • a computer-program product comprising a non-transitory computer-readable medium having stored thereon such a computer program.
  • a security automation system configured for security management of an Information Technology system.
  • the Information Technology system has a number of interacting system components, each system component being associated with one or more operational assets relevant to the operation of the system component.
  • the operational assets are organized in one or more security domains according to a system topology.
  • the security automation system comprises: a trust engine configured to perform trust valuation of at least a subset of the operational assets and/or domains at least partly based on security configuration information representative of asset and/or domain security configuration; a risk engine configured to perform risk assessment for each of at least a subset of the operational assets and/or for each of a number of the security domains at least partly based on the trust evaluation performed by the trust engine, and security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets, asset information representative of the system topology, asset configuration and/or dependency between assets; and a security adaptation engine configured to determine one or more security control actions related to at least a subset of the operational assets and/or domains at least partly based on the risk assessment performed by the risk engine.
  • the security adaptation engine may be configured to communicate, via workflows to at least a subset of security automation system sub-components, risk information and events at least partly based on the risk assessment performed by the risk engine.
  • the security automation system comprises a security configuration engine configured to set initial security profiles of the assets with the help of executable security controls; and to configure new dynamic or static policies or decommission existing ones automatically for permanent and/or temporary protection based on the instructions of the risk engine via workflows of the security adaptation engine.
  • FIG. 1 is a schematic diagram illustrating an example of an improved security automation process or procedure according to an embodiment.
  • FIG. 2 is a schematic diagram illustrating an example of a security automation system according to an embodiment.
  • FIG. 3 is a schematic diagram illustrating another example of a security automation system according to an embodiment.
  • FIG. 4 is a schematic diagram illustrating an example of a monitored Information Technology system.
  • FIG. 5 is a schematic diagram illustrating another example of a monitored Information Technology system.
  • FIG. 6 is a schematic diagram illustrating an example of a communication system/network having Information Technology components.
  • FIG. 7 is a schematic flow diagram illustrating an example of a method for security management of an Information Technology system according to an embodiment.
  • FIG. 8 is a schematic diagram illustrating an example of security automation system in connection with a managed environment.
  • FIG. 9 is a schematic diagram illustrating an example of a security automation system according to a specific embodiment.
  • FIG. 10 is a schematic flow diagram illustrating an example operation of a risk engine according to an embodiment.
  • FIG. 11 is a schematic flow diagram illustrating an example operation of an asset trust index assigner according to an embodiment.
  • FIG. 12 is a schematic flow diagram illustrating an example operation of an asset trust index calculator according to an embodiment.
  • FIG. 13 is a schematic flow diagram illustrating an example of possible initialization of a security automation system according to an embodiment.
  • FIG. 14 is a schematic diagram illustrating an example of security domain organization according to an embodiment.
  • FIG. 15 is a schematic diagram illustrating a use case example according to an embodiment.
  • FIG. 16 is a schematic diagram illustrating a use case example according to another embodiment.
  • FIG. 17A is a schematic block diagram illustrating an example of a security automation system according to an embodiment.
  • FIG. 17B is a schematic block diagram illustrating an example of a network entity (node) comprising a security automation system according to an embodiment.
  • FIG. 18 is a schematic diagram illustrating an example of a computer-implementation according to an embodiment.
  • trust may for example refer to the extent to which an entity is willing to depend on another entity in a given situation with a sense of relative security, even though negative consequences may arise.
  • risk may for example relate to a situation in which it is possible but not certain that some unwanted or undesirable event will occur.
  • the term may often be used synonymously with the probability of the unwanted event to occur.
  • risk may be regarded as the product of threat impact and probability of threat occurrence, where threat impact is defined as the value of losses due to a threat being realized.
  • risk level may for example refer to a calculated value of risk.
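The product formulation above (risk as the product of threat impact and probability of threat occurrence) can be written out directly; the monetary figures in the example are hypothetical.

```python
def risk(threat_impact: float, probability: float) -> float:
    """Risk as the product of threat impact (the value of losses if the
    threat is realized) and the probability of the threat occurring."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    return threat_impact * probability

# A threat costing 200 000 if realized, with a 5 % chance of occurring,
# yields roughly the same risk level as a 10 000 loss that is certain.
rare_but_costly = risk(200_000.0, 0.05)
certain_but_small = risk(10_000.0, 1.0)
```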
  • trustworthiness may for example refer to the assurance of selected security principles/requirements and/or availability and/or reliability requirements related to an operational asset, device, component and/or system, including hardware and/or software components as well as configurations and/or communication interfaces and/or protocols of such hardware and/or software components as well as dependencies or relations between operational assets.
  • Trustworthiness normally includes an understanding of the resilience of the operational asset, device, component and/or system to conditions that stress the security, availability and/or reliability requirements. In other words, trustworthiness can be regarded as an indication as to how much an operational asset, device, component and/or system can be trusted from a technical point of view.
  • "trust indication" and "trust index" may for example refer to a value of perceived trustworthiness of an asset.
  • Information Technology system may refer to any technological system for generating, processing, storing and/or transferring information.
  • server may generally refer to any technical entity, component, device, system and/or node located in connection with a network environment such as an enterprise network.
  • network entity may generally refer to any technical entity, component, device, system and/or node located in connection with a network environment such as a wired and/or wireless communication network, including Internet components, servers and/or network nodes of wireless communication systems.
  • network node may refer to any node or device located in connection with a communication network, including but not limited to devices in access networks, core networks and similar network structures.
  • the term network node may also encompass cloud-based network devices.
  • engine may refer to any functional module or unit for executing one or more process steps and/or performing one or more actions such as computations, decisions, execution of security control workflows and so forth, and may be implemented in hardware or software executing on processing hardware.
  • a security automation system configured for security management of an Information Technology (IT) system.
  • the Information Technology system has a number of interacting system components, each system component being associated with one or more operational assets for the operation of the system component.
  • the operational assets are organized in one or more security domains according to a system topology.
  • the security automation system is configured to obtain: i) security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets of the system components, ii) asset information representative of the system topology, asset configuration and/or dependency between assets, and/or iii) security configuration information representative of asset and/or domain security configuration.
  • the security automation system is further configured to determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a trust indication representing trustworthiness of the operational asset and/or domain at least partly based on the security configuration information representative of asset and/or domain security configuration.
  • the security configuration information representative of asset and/or domain security configuration may thus include information representative of asset security configuration and/or information representative of domain security configuration.
  • the security automation system is also configured to determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a risk level at least partly based on the security information, the asset information and at least a subset of the determined trust indications.
  • the security automation system is configured to determine one or more security control actions related to at least a subset of the operational assets and/or domains based on at least a subset of the determined risk levels.
  • FIG. 1 is a schematic diagram illustrating an example of an improved security automation process or procedure according to an embodiment.
  • risk evaluation and trust evaluation may be executed, wherein results of the trust evaluation may be used as effective input to the risk evaluation to thereby improve the risk evaluation. Based on such improved risk evaluation, proper security adaptation and/or configuration may be determined and/or updated to decide on and/or provide one or more security control actions.
  • information about the determined and/or updated security configuration may be used as feedback to provide useful information for the trust evaluation and/or risk evaluation.
  • FIG. 2 is a schematic diagram illustrating an example of a security automation system according to an embodiment.
  • the security automation system 100 comprises a trust engine 110, a risk engine 120, a security adaptation engine 130 and an associated database 140.
  • the security automation system 100 may implement interconnection and mapping of information from different sources and create automated security control action flows, keeping the risk level acceptable, and the context compliant and trustworthy according to given policies.
  • the security automation system 100 may be configured to determine one or more security control actions for a fully automated or security-analyst- assisted workflow to mitigate security risks and/or security threats.
  • FIG. 3 is a schematic diagram illustrating another example of a security automation system according to an embodiment.
  • the security automation system 100 further includes a security policy based security configuration engine 150, and optionally also a security analytics engine 160, a threat intelligence engine 170 and an asset management engine 180, and a Risk and Trust Awareness Dashboard 105 as will be described in greater detail later on.
  • the security automation system may interconnect different security functions.
  • Threat intelligence from external sources may be combined with system- internal security monitoring information to compose so-called asset trust indexes.
  • Trust indexes may be used for risk evaluation, which makes it possible to assess security status comprehensively across the entire system.
  • the security automation system may provide fully automated or security-analyst-assisted workflows to mitigate security risks for any given context.
  • the security automation system may provide end-to-end visibility into the trust and risk status of the system and make automated risk qualifications in near real-time, e.g. with the aid of machine learning technologies, to adjust security controls, insert security functions and/or provide information to external systems.
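One way such an asset trust index could be composed is shown below as a minimal sketch; the function name, the three inputs and the weights are assumptions, not taken from the disclosure. External threat activity and internally observed anomalies lower trust, while configuration compliance raises it.

```python
def asset_trust_index(external_threat_score: float,
                      internal_anomaly_score: float,
                      config_compliance: float,
                      weights=(0.3, 0.3, 0.4)) -> float:
    """Compose a 0..1 trust index from external threat intelligence,
    internal security monitoring and security configuration compliance.
    All inputs are normalized to 0..1; the weights are illustrative."""
    w_ext, w_int, w_cfg = weights
    score = (w_ext * (1.0 - external_threat_score)
             + w_int * (1.0 - internal_anomaly_score)
             + w_cfg * config_compliance)
    return round(score, 3)

# A heavily targeted but anomaly-free, well-configured asset:
ti = asset_trust_index(external_threat_score=0.8,
                       internal_anomaly_score=0.1,
                       config_compliance=0.9)
```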
  • FIG. 4 is a schematic diagram illustrating an example of a monitored Information Technology system.
  • the Information Technology system 200 has a number of interacting system components 205, each system component 205 being associated with one or more operational assets 210 for the operation of the system component.
  • the system components 205 and the associated operational assets 210 are organized in one or more security domains 220 according to a system topology. Different security domains 220 may be interconnected via an optional interconnect 230, such as a firewall (FW).
  • FW firewall
  • the operational assets 210 may use various communication interfaces and/or protocols, and appropriate security controls may be deployed for the operational asset(s) or planned for deployment.
  • the operational assets 210 may include at least one of:
  • the operational assets 210 include at least one digital asset
  • the one or more security control actions typically include at least one executable security control operating on the at least one digital asset and/or an associated security domain.
  • FIG. 5 is a schematic diagram illustrating another example of a monitored Information Technology system.
  • another asset topology is illustrated, with assets 210 being organized in different security domains that may be interconnected to one another.
  • the assets 210 within a security domain 220 may have various interdependencies.
  • the security automation system may be configured to determine, for each of at least a subset of the operational assets 210 and/or for each of a number of the security domains 220, a risk level also at least partly based on dependency between assets 210.
  • the security automation system is configured to determine, for each of at least a subset of the operational assets 210 and/or for each of a number of the security domains 220, a risk level also at least partly based on asset placement in the system topology, the number and nature of security risks and/or threats, communication interface(s) and/or protocol(s) used by the operational asset(s) and/or deployed security controls for the operational asset(s).
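A minimal sketch of how dependency information could feed the risk determination: the exposure of a shared component propagates, damped, to the assets that depend on it. The one-hop propagation and the 0.5 damping factor are illustrative assumptions.

```python
def propagate_risk(base_risk: dict, depends_on: dict, damping: float = 0.5) -> dict:
    """Add a damped share of each dependency's base risk to the dependent
    asset, so the exposure of a shared component (e.g. a database) raises
    the risk level of everything built on top of it (one hop only)."""
    total = dict(base_risk)
    for asset, deps in depends_on.items():
        total[asset] += damping * sum(base_risk[d] for d in deps)
    return total

base = {"db": 0.8, "app": 0.2, "web": 0.1}   # risk before considering topology
deps = {"app": ["db"], "web": ["app"]}       # app uses db, web uses app
risks = propagate_risk(base, deps)
```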
  • the security automation system may be configured to determine, for each of at least a subset of the operational assets 210 and/or for each of a number of the security domains 220, a trust indication representing trustworthiness of the operational asset and/or domain also at least partly based on asset information representative of the system topology, asset configuration and/or dependency between assets.
  • the security automation system may be configured to determine one or more security control actions based on Machine Learning (ML).
  • ML Machine Learning
  • the security automation system may be configured to perform ML-based risk treatment analysis on how to handle security risks when the determined risk levels exceed a given threshold and determine one or more of the security control actions related to at least a subset of the operational assets 210 and/or domains 220 based on the ML-based risk treatment analysis.
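The threshold-triggered treatment step can be sketched as follows. A real deployment would use a trained model; here a 1-nearest-neighbour lookup over hypothetical past decisions stands in for the ML-based risk treatment analysis, and all names and numbers are assumptions.

```python
# Hypothetical history of past risk-treatment decisions:
# (risk level, asset criticality) -> action chosen at the time.
HISTORY = [
    ((0.9, 0.9), "isolate"),
    ((0.7, 0.3), "patch"),
    ((0.5, 0.8), "restrict-access"),
]

def treat_risk(risk_level: float, criticality: float, threshold: float = 0.6):
    """If the risk level exceeds the threshold, choose the action whose
    historical context is nearest (1-NN, a trivial stand-in for a trained
    risk-treatment model); otherwise accept the risk."""
    if risk_level <= threshold:
        return None  # below threshold: no treatment triggered

    def distance(entry):
        (r, c), _action = entry
        return (r - risk_level) ** 2 + (c - criticality) ** 2

    return min(HISTORY, key=distance)[1]
```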
  • the security automation system may be configured to detect one or more security risks, system vulnerabilities and/or security threats based on Machine Learning (ML).
  • ML Machine Learning
  • the one or more security control actions includes updating a security policy, adjusting a security configuration, removing an existing security function and/or inserting a new security function.
  • the security automation system may be configured to adaptively determine trust indication(s), risk level(s) and security control action(s) in an automated manner.
  • the security automation system is configured as an adaptive system.
  • the security automation system may, e.g., be configured to re-determine at least one risk level after deployment of the determined security control action(s).
  • Information Technology system may be any system for generating, processing, storing and/or transferring information.
  • the security automation system may be configured for security management of the Information Technology system 200 of a communication system or network 300.
  • FIG. 6 is a schematic diagram illustrating an example of a communication system/network having Information Technology components.
  • the overall communication system/network 300 may include wireless communication systems such as radio access networks 310, wired communication systems such as server/data networks 320 as well as interconnecting networks 330, and may include system components such as radio base stations, access points, radio controllers, network management units, communication gateways, routers, servers and the like, each of which may have Information Technology components for generating, processing, storing and/or transferring information.
  • the Information Technology system may be an integrated part of the communication system or network 300 and the interacting system components 205 of the Information Technology system 200 may involve network nodes and/or elements of the communication system or network 300.
  • the security automation system 100 may include a trust engine 110, a risk engine 120 and a security adaptation engine 130 that are operatively interconnected.
  • the trust engine 110 may be configured for trust determination or valuation
  • the risk engine 120 may be configured for risk evaluation or assessment.
  • the security adaptation engine 130 may be seen as an orchestration layer for effectively interworking with the other system components.
  • the security adaptation engine may be configured to communicate, via workflows to at least a subset of security automation system sub-components, risk information and events at least partly based on the risk assessment performed by the risk engine.
  • the trust engine 110 may be configured to perform trust valuation of at least a subset of the operational assets and/or domains at least partly based on security configuration information representative of asset and/or domain security configuration.
  • the risk engine 120 may be configured to perform risk assessment for each of at least a subset of the operational assets and/or for each of a number of the security domains at least partly based on the trust evaluation performed by the trust engine, and security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets, asset information representative of the system topology, asset configuration and/or dependency between assets.
  • the security adaptation engine 130 may be configured to determine one or more security control actions related to at least a subset of the operational assets and/or domains at least partly based on the risk assessment performed by the risk engine.
  • the trust engine 110 may be configured to determine, for each of at least a subset of the operational assets and/or domains, the trust indication representing trustworthiness of the operational asset and/or domain.
  • the risk engine 120 may be configured to determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, the risk level of the operational asset and/or security domain.
  • the security adaptation engine 130 may be configured to determine one or more security control actions related to at least a subset of the operational assets and/or domains. In essence, this means that the security control action(s) may be performed per asset and/or per domain.
  • the security automation system 100 may include a security configuration engine 150 for performing a security configuration according to the determined security control action(s).
  • the security adaptation engine 130 and the security configuration engine 150 may be integrated, at least partly.
  • the security configuration engine 150 may be configured to set initial security profiles of the assets with help of executable security controls, and to configure new dynamic or static policies or decommission existing ones automatically for permanent and/or temporary protection based on the instructions of the risk engine 120 via workflows of the security adaptation engine 130.
  • database 140 may be a shared trust and risk database accessible by at least the trust engine 110 and the risk engine 120.
  • the security adaptation engine 130 may be based on a rule engine that commands or runs digital security control workflows to make decisions automatically based on predefined rules.
  • the security adaptation engine 130 may be configured to learn from previous security control actions and use Machine Learning (ML) to adapt rules and/or create new rules.
  • ML Machine Learning
• the risk and trust awareness dashboard 105 may be regarded as a visualization and/or notification platform for enabling situational awareness of the security posture of the Information Technology system, e.g. for presenting suitable trust, risk or other security information and/or for notifications to human security analysts. It can be seen as a unified display of information from multiple sources, showing the end-to-end risk and trust status of the managed context. It gives security operators immediate visibility of the main potential issues and risks in the network that require constant monitoring and immediate action, hence helping to target the mitigation actions to the areas at most risk at any given time.
  • the security analytics engine 160 may be configured to detect security events/threats
  • the threat intelligence engine 170 may be configured to provide security information from external sources.
  • the asset management engine 180 or asset manager, is configured to manage and/or provide information on assets and system topology including aspects such as asset configurations, interfaces and/or interdependencies.
  • FIG. 7 is a schematic flow diagram illustrating an example of a method for security management of an Information Technology system according to an embodiment.
  • the method comprises the following basic steps:
  • S1 obtaining: i) security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets of the system components, ii) asset information representative of the system topology, asset configuration and/or dependency between assets, and/or iii) security configuration information representative of asset and/or domain security configuration;
  • S2 determining, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a trust indication representing trustworthiness of the operational asset and/or domain at least partly based on the security configuration information representative of asset and/or domain security configuration;
  • S3 determining, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a risk level at least partly based on the security information, the asset information and at least a subset of the determined trust indications;
  • S4 determining one or more security control actions related to at least a subset of the operational assets and/or domains based on at least a subset of the determined risk levels.
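The four steps S1-S4 above can be sketched as a single evaluation pass. All function and field names below are illustrative assumptions, not part of the claimed system; the trust and risk rules are placeholders standing in for the formulas defined later in the description:

```python
# Illustrative sketch of method steps S1-S4 (names and rules are assumptions).

def security_management_cycle(assets, get_security_info, get_asset_info, get_config_info):
    """One pass of the claimed method: gather inputs (S1), derive trust (S2),
    derive risk (S3), and decide on security control actions (S4)."""
    # S1: obtain security, asset, and security-configuration information
    security_info = {a: get_security_info(a) for a in assets}
    asset_info = {a: get_asset_info(a) for a in assets}
    config_info = {a: get_config_info(a) for a in assets}

    # S2: trust indication per asset, based on its security configuration
    # (placeholder rule: hardened assets get the maximum trust value)
    trust = {a: 1.5 if config_info[a].get("hardened") else 0.5 for a in assets}

    # S3: risk level per asset from security info, asset info and trust
    risk = {
        a: security_info[a]["threat_severity"] * asset_info[a]["value"] / trust[a]
        for a in assets
    }

    # S4: security control actions for assets whose risk is unacceptable
    actions = {a: "add_controls" for a, r in risk.items() if r > 10}
    return trust, risk, actions
```

The threshold of 10 and the simple severity-times-value rule are stand-ins; the actual risk formulas appear later in the description.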
  • FIG. 8 is a schematic diagram illustrating an example of security automation system in connection with a managed environment.
  • the managed environment comprises assets organized in security domains.
  • FIG. 9 is a schematic diagram illustrating an example of a security automation system according to a specific embodiment.
  • the security automation system or apparatus 100 comprises a risk and trust awareness dashboard 105, a trust engine 110, a risk engine 120, an adaptation engine 130, a risk and trust index database 140, a security configuration engine 150, a security analytics engine 160, a threat intelligence engine 170 and an asset management engine 180.
  • the trust engine 110 and risk engine 120 may be connected to a risk and trust awareness dashboard 105, as will be explained later on.
• the trust engine 110, which may also be referred to as a trust evaluation platform, may include an asset trust index calculator and an asset trust index assigner.
• the risk engine 120, which may also be referred to as a risk evaluation and/or treatment platform, may include a risk index calculator and a risk treatment handler.
• the adaptation engine 130 may for example include an event/action analyzing module, and modules for workflow actions such as interworking with the risk and trust engines and with sub-components.
  • the security automation system may interconnect different security functions.
• Threat intelligence from external sources may be combined with system-internal security monitoring information to compose so-called asset trust indexes.
• Trust indexes may be used for risk evaluation, which makes it possible to assess the security status comprehensively across the entire system.
  • the security automation system may provide fully automated or security-analyst-assisted workflows to mitigate security risks for any given context.
  • the security automation system may provide end-to-end visibility to the trust and risk status of the system and make automated risk qualifications in near real-time, e.g. with the aid of machine learning technologies to adjust security controls, insert security functions and/or provide information to external systems
  • the Adaptation Engine 130 is a rule engine that runs digital “workflows” to make decisions automatically based on predefined rules without manual intervention. It determines and facilitates the flow of information, tasks, and events and how these actions flow from one Adaptive Security Automation component to another. It learns from the action performed and uses ML algorithms to create new rules and to trigger actions towards other components.
• the Risk engine 120 calculates the risk level for the assets under monitoring and carries out a risk treatment analysis of how to treat unacceptable risks.
  • Risk Engine includes three main components:
  • Risk and Trust Index Database 140 may include e.g. the following information:
• Asset identity and estimated asset value
• Time of entry of the Asset to the Risk and Trust Index Database
• Communication interfaces within security domain
o Communication interfaces across security domains
o Number of unpatched known vulnerabilities
o The highest CVE score of known vulnerabilities
o The average score of known vulnerabilities
o The time of latest configuration change
  • the Risk and Trust Index Database may be initialized with information collected from the various sources. All known Threats and Vulnerabilities are collected, downloaded and indexed from external sources. System specific asset information (asset & asset contextual information) is inserted from the asset inventory. The security controls are downloaded from the Policy automation databases and mitigation indexes are allocated.
  • the Risk and Trust Index Database does regular queries to external sources to update its content regarding the new Threats, Vulnerabilities, Assets and Controls.
  • Asset contextual information contains information about the asset placement in security domain topology and asset dependencies on other assets.
  • Asset dependency factor is a variable that is used to consider an increased risk factor due to high-risk neighboring asset(s).
• the Asset dependency factor impacts the risk level as a function of high-risk neighboring assets, that is, more high-risk neighboring assets, or even one very high-risk neighboring asset, increase the dependency factor value.
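One plausible way to map neighboring-asset risk levels to the [0.5...1.5] dependency factor described above is sketched below. The description only states the direction of the effect, so the threshold and the linear ramp are illustrative assumptions:

```python
def asset_dependency_factor(neighbor_risks, high_risk_threshold=10.0):
    """Map the risk levels of neighboring assets to a dependency factor
    in [0.5, 1.5]: more (or more severe) high-risk neighbors push the
    factor up; an isolated or low-risk neighborhood pulls it down.
    The threshold and the ramp are assumptions, not claim language."""
    if not neighbor_risks:
        return 0.5  # no dependencies, minimum factor
    # fraction of neighbors that are high-risk, boosted if the worst
    # neighbor is very high-risk (twice the threshold)
    high = [r for r in neighbor_risks if r >= high_risk_threshold]
    share = len(high) / len(neighbor_risks)
    worst_bonus = 0.5 if max(neighbor_risks) >= 2 * high_risk_threshold else 0.0
    return min(1.5, 0.5 + share + worst_bonus)
```

Note that a single very high-risk neighbor already drives the factor to the maximum, matching the stated behaviour.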
• Asset Initial Risk Level (AIRL) = [(Impact x Asset Dependency Factor) x (Probability x 1/Trust Index)] (formula 1)
• Impact = Asset Value Index x Threat Severity Index. Impact value e.g. within range [1...5].
• Probability = Vulnerability Criticality Index x 1/Control Mitigation Index. Probability value e.g. within range [1...5].
• Trust Index = value received from the Asset Trust Index calculation function. Trust Index value within range [0.5...1.5].
• Asset Dependency Factor = factor which impacts the risk level as a function of high-risk neighboring assets. The factor value within range [0.5...1.5].
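Formula 1 and its parameter ranges translate directly to code (parameter names follow the definitions above):

```python
def asset_initial_risk_level(asset_value_index, threat_severity_index,
                             vulnerability_criticality_index,
                             control_mitigation_index,
                             trust_index, asset_dependency_factor):
    """Formula 1: AIRL = (Impact x Asset Dependency Factor)
                        x (Probability x 1/Trust Index).
    Impact and Probability fall in [1..5]; the Trust Index and the
    Asset Dependency Factor fall in [0.5..1.5]."""
    impact = asset_value_index * threat_severity_index
    probability = vulnerability_criticality_index * (1 / control_mitigation_index)
    return (impact * asset_dependency_factor) * (probability * (1 / trust_index))
```

A lower trust index raises the risk level (the formula divides by it), as does a higher dependency factor.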
• the Trust Index is considered when calculating the Risk probability. It is also considered a novelty that dependencies on other assets are considered when calculating the Risk Impact.
• the security domain initial risk level is an average of the risk levels of all assets within the security domain.
  • AIRL Asset Initial Risk Level calculated by the formula 1.
  • An event can be for instance a new or changed threat or vulnerability, a new added asset or control, or an old removed or changed asset or control.
• the Risk Level can be calculated using the same formula as in the initialization phase, except that the Index-decay multiplier is also considered.
• the Index-decay multiplier can be a dedicated multiplier for a threat, vulnerability, asset or control, or there can be one common multiplier for all of them.
• the index-decay multiplier normally weakens the index value of the threat, vulnerability, asset or control as a function of time.
  • the decay multiplier can be e.g. X % per time period (a week, a month), increasing or decreasing according to machine learning algorithms. After considerable change in a threat, vulnerability, asset or control, the time period can be restarted.
  • Index-Decay Multiplier value is within range [0.5...1.5]
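The decay behaviour described above (X % per time period, restarted after a considerable change, clamped to the stated [0.5...1.5] range) might be sketched as follows; the per-period rate is an assumed parameter, with the description noting it can be adjusted by machine learning:

```python
def index_decay_multiplier(periods_since_change, rate_per_period=0.05):
    """Weaken an index value as a function of time: start at 1.0 and
    decay by rate_per_period for each elapsed period (week, month, ...),
    clamped to the stated [0.5, 1.5] range. A considerable change in the
    threat, vulnerability, asset or control resets periods_since_change
    to 0, restarting the time period as described above."""
    value = (1.0 - rate_per_period) ** periods_since_change
    return max(0.5, min(1.5, value))
```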
  • the risk level calculation formula for the known threat, vulnerability, asset or control is:
• Asset Risk Level Known (ARLK) = [(Impact x Asset Dependency Factor) x (Probability x 1/Trust Index)] x 1/Index-Decay Multiplier (formula 3)
• Impact = Asset Value Index x Threat Severity Index. Impact value e.g. within range [1...5].
• Probability = Vulnerability Criticality Index x 1/Control Mitigation Index. Probability value e.g. within range [1...5].
• Trust Index = value received from the Asset Trust Index calculation function. Trust Index value within range [0.5...1.5].
• Asset Dependency Factor = factor which impacts the risk level as a function of high-risk neighboring assets. The factor value within range [0.5...1.5].
• Index-Decay Multiplier = multiplier which affects a threat, vulnerability, asset or control as a function of time. Index-Decay Multiplier value within range [0.5...1.5].
  • Index-Decay Multiplier is bringing time aspects into the Risk level calculation.
• the security domain risk level is an average of the risk levels of all assets within the security domain.
  • the security domain risk level calculation formula for the known threat, vulnerability, asset or control is:
  • ARLK Asset Risk Level for Known threat, vulnerability, asset or control calculated by the formula 3.
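Since the domain-level risk is stated to be the average of the per-asset risk levels, the domain-level calculation can be sketched as follows (the averaging itself is from the description; the guard against an empty domain is an added assumption):

```python
def security_domain_risk_level(asset_risk_levels):
    """Domain-level risk per the description above: the security domain
    risk level is the average of the risk levels (e.g. ARLK values) of
    all assets within the security domain."""
    if not asset_risk_levels:
        raise ValueError("a security domain must contain at least one asset")
    return sum(asset_risk_levels) / len(asset_risk_levels)
```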
• the uncertainty factor is a parameter that is used to compensate for the difficulty of estimating the level of insecurity in the risk calculation caused by a previously unassessed threat or vulnerability.
• the uncertainty factor increases the risk level as a function of the new unknown, that is, the greater the lack of knowledge of the threat or vulnerability, the higher the index value.
  • the formula for the unknown threat or vulnerability risk level calculation is:
• Impact = Asset Value Index x Threat Severity Index. Impact value e.g. within range [1...5].
• Probability = Vulnerability Criticality Index x 1/Control Mitigation Index. Probability value e.g. within range [1...5].
• Trust Index = value received from the Asset Trust Index calculation function. Trust Index value within range [0.5...1.5].
• Asset Dependency Factor = factor which impacts the risk level as a function of high-risk neighboring assets. The factor value within range [0.5...1.5].
• Index-Decay Multiplier = multiplier which affects a threat, vulnerability, asset or control as a function of time. Index-Decay Multiplier value within range [0.5...1.5].
• Uncertainty Factor = factor which increases the risk level as a function of the level of the unknown.
• The factor value e.g. within range [1...2].
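The unknown-threat formula itself is not reproduced in the text above, but the listed parameters suggest a composition in which the known-risk expression of formula 3 is scaled by the uncertainty factor. The following is a plausible sketch under that assumption, not the patent's exact formula:

```python
def asset_risk_level_unknown(impact, probability, trust_index,
                             asset_dependency_factor, index_decay_multiplier,
                             uncertainty_factor):
    """Hypothetical sketch of the unknown-threat risk level: the
    known-risk expression of formula 3 multiplied by the uncertainty
    factor in [1..2], so that a greater lack of knowledge about a
    threat or vulnerability yields a higher risk level. The exact
    composition is an assumption inferred from the listed parameters."""
    known = ((impact * asset_dependency_factor)
             * (probability * (1 / trust_index))) * (1 / index_decay_multiplier)
    return known * uncertainty_factor
```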
  • the Risk Treatment Handler is queried to carry out a proper risk treatment for the risk.
• the unacceptable risk level Index is not static, but a changing value based on the machine learning algorithm, which continuously takes into account the changing security posture of the system.
  • Risk and Trust index database is updated after each risk level calculation to reflect a new Security Risk posture of the monitored system.
  • Risk and trust visibility status is also updated in real time, so that changes in the different applications are immediately updated to reflect the change in risk status of the system.
  • FIG. 10 is a schematic flow diagram illustrating an example operation of a risk engine according to an embodiment.
• After the calculated or estimated risks are compared against the risk acceptance criteria, the Risk Treatment Handler carries out a machine-learning-based risk treatment analysis of how to treat each unacceptable risk.
  • Risk Treatment Handler performs the risk treatment analysis for the risk which requires risk modification decisions. It selects the proper controls to provide additional protection for the monitored system. It forwards the protection instructions to the Security Configuration Engine via Adaptation Engine to introduce, remove or alter dynamic or static security controls automatically for the permanent or temporary protection.
  • the Trust Engine 110 calculates, stores and reports Asset-specific Trust Indexes.
• the Asset Trust Index gives a snapshot of the policy compliance status based on requirements for different characteristics, e.g. security, privacy, resilience, reliability and safety, and the respective controls deployed for the Asset.
• the Asset Trust Index is used as input information for the Risk and Trust Awareness Dashboard, impacting the Risk probability value, either increasing it through e.g. new vulnerabilities and threats or decreasing it due to newly implemented controls.
  • Trust Engine includes three main components:
  • Asset Trust Index Assigner takes as input Asset related policy, configuration and contextual information. If a new asset is introduced, or there are changes to an existing asset, Asset Trust Index Assigner requests Asset Trust Index Calculator to define an Asset Trust Index for the asset and then updates the Risk and Trust Index Database accordingly. If an existing Asset is to be removed, the Asset Trust Index Assigner erases the relevant Risk and Trust Index Database information. In both cases, notifications of the changes are issued to the Adaptation Engine and the Risk and Trust Awareness Dashboard.
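The Asset Trust Index Assigner flow described above can be sketched as an event handler. All callables and the dictionary-backed database below are illustrative stand-ins:

```python
def handle_asset_event(event, asset, db, calculator, notify):
    """Sketch of the Asset Trust Index Assigner flow: new or changed
    assets get a (re)calculated Asset Trust Index stored in the Risk
    and Trust Index Database; removed assets have their database
    information erased; in both cases notifications are issued to the
    Adaptation Engine and the Risk and Trust Awareness Dashboard."""
    if event in ("new", "changed"):
        db[asset["id"]] = calculator(asset)   # define/update the Asset Trust Index
    elif event == "removed":
        db.pop(asset["id"], None)             # erase the database information
    notify("adaptation_engine", asset["id"], event)
    notify("dashboard", asset["id"], event)
    return db
```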
  • FIG. 11 is a schematic flow diagram illustrating an example operation of an asset trust index assigner according to an embodiment.
• the Asset Trust Index Calculator first defines an appropriate Asset Trust Index calculation uncertainty function for the Asset, based on contextual information of the Asset, e.g. using machine learning. Then the Asset Trust Index Calculator applies Asset policy and configuration information to the Asset Trust Index calculation function and returns an Asset Trust Index value within range [0.5...1.5] to the requesting entity. This approach ensures that if there is uncertainty about asset trustworthiness, the Asset Trust Index can be lower than 1 even if policies are complied with.
• Trusted Asset = weighted compliance level to the Security Policies relevant for the asset.
• the Asset Trustworthiness Uncertainty function takes parameters including, but not exclusively, the following:
o Number of unpatched known vulnerabilities (reflects known attack surface)
o The highest CVE score of known vulnerabilities (reflects highest severity of known attack surface)
o The average score of known vulnerabilities (reflects the average severity of known attack surface)
o The time of latest patching (reflects attack time window since latest patching)
o The time of latest configuration change (reflects attack window for the current configuration)
o attestation capability (reflects external judgement)
o asset complexity (reflects statistical likelihood of unknown weaknesses)
  • AssetCompliance is the metric of how well Asset fulfils the relevant security controls defined in the Asset specific security policy, see Equation 8.
• AssetTrustworthinessUncertainty is the metric of the uncertainty in the accuracy of knowledge of the Asset's security attributes, see Equation (11).
• the range of the function output may be normalized to [0.5...1.5] by using the constant 0.5 in Equation (7).
  • Uncertainty factor is used when calculating Asset trustworthiness.
• AVE denotes averaging of the weighted controls, ActualControls[1..n] is the set of implemented security controls according to the defined security policy for the Asset, and
• Weight[1..n] is the set of weight factors of the ActualControls[1..n].
• the range of the function output is [0.01...NumberOfActualControls].
• RELCON = AVE({RelevantControl[1..n] x RELCONWeight[1..n]}) where:
• AVE denotes averaging of the weighted controls, RelevantControls[1..n] is the set of security controls according to the defined security policy for the Asset, and
• RELCONWeight[1..n] is the set of weight factors of the RelevantControls[1..n].
• the range of the function output is [0.01...NumberOfRelevantControls].
• Uncertainty attributes are used to define asset-specific uncertainties of asset trustworthiness.
• ATU = VariableFunction(UncertaintyParameter[1..n]) where:
  • VariableFunction is initially the average of uncertainty attribute values and in further iterations a weighted average of uncertainty attributes, where the weights are defined based on Asset history.
  • the range of the function output is [0...1].
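Equation (7) itself is not fully reproduced above, but its stated building blocks (compliance relative to the relevant controls, an ATU uncertainty metric in [0...1], and the normalizing constant 0.5) suggest a composition along the following lines. The exact combination is an assumption:

```python
def asset_trust_index(actual_controls, relevant_controls, uncertainty_params):
    """Hypothetical sketch of the Asset Trust Index calculation:
    compliance is the weighted average of implemented controls relative
    to the weighted average of the controls the policy requires; the
    uncertainty (ATU) is initially the average of the uncertainty
    attribute values in [0..1]; and the constant 0.5 normalizes the
    output into the stated [0.5..1.5] range. Controls are given as
    (value, weight) pairs; the composition is an assumption."""
    def ave(weighted):  # AVE over (value, weight) pairs
        return sum(v * w for v, w in weighted) / len(weighted)

    compl = ave(actual_controls)        # AssetCompliance-style metric
    relcon = ave(relevant_controls)     # RELCON-style reference level
    atu = sum(uncertainty_params) / len(uncertainty_params)  # ATU in [0..1]
    compliance_ratio = min(1.0, compl / relcon)
    return 0.5 + compliance_ratio * (1.0 - atu)
```

With full compliance and zero uncertainty this yields the maximum 1.5; with maximum uncertainty it drops to 0.5 even when policies are complied with, matching the behaviour described for the calculator.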
  • Asset contextual information is received from the Adaptation Engine and is composed of Asset-specific information e.g. historical security events and related adaptations, Asset placement in system topology, interfaces and protocols deployed by the Asset, and other characteristics that are relevant to the Asset’s security posture.
  • Asset-specific information e.g. historical security events and related adaptations, Asset placement in system topology, interfaces and protocols deployed by the Asset, and other characteristics that are relevant to the Asset’s security posture.
• the Asset Trust Index Calculator uses Asset information and contextual information, e.g. the Asset's topological localization and the active interfaces and protocols used to communicate with other components.
  • FIG. 12 is a schematic flow diagram illustrating an example operation of an asset trust index calculator according to an embodiment.
  • the Risk and Trust Index Database stores Asset Trust information of all assets handled by Asset Trust Index Engine.
  • Asset Trust Index DB stores Asset Trust information of all assets handled by Trust Engine.
  • the information may include but is not restricted to:
• Asset TrustLeap factors:
o Number of unpatched known vulnerabilities
o The highest CVE score of known vulnerabilities
o The average score of known vulnerabilities
o The time of latest patching
o The time of latest configuration change
o asset complexity (reflects statistical likelihood of unknown weaknesses)
o Asset Lifecycle Status
• the Risk and Trust Awareness Dashboard 105 gives a single pane of glass, as a unified display of information from multiple sources, showing the end-to-end risk and trust status of the managed context. It gives security operators immediate visibility of the main potential issues and risks in the network that require constant monitoring and immediate action, hence helping to target the mitigation actions to the areas at most risk at any given time.
  • the dashboard can be seen as an intuitive graphical user interface.
  • the Security Configuration Engine 150 configures the initial security profiles of the assets with help of the executable security controls.
• an acknowledgement is sent to the risk and trust engines via the adaptation engine to re-run the risk calculation and confirm the latest risk and trust level of the system.
  • the Security analytics engine 160 is a supporting component, which uses rule and machine-learning based analytics for detecting known and unknown threats across different network domains. It provides constant visibility to the risk landscape and helps to detect the riskiest areas where actions are needed to reduce the attack surface. The functionality is fully configurable, enabling definition of new analytics correlation rules and behavioral algorithms as the threat landscape evolves. Security analytics triggers a new risk calculation when a new security event or anomaly is detected.
  • the Threat Intelligence engine 170 is a supporting component, which collects and shares threat and vulnerability information from the internal and external sources.
• Threat information in the form of IoCs (Indicators of Compromise) is received from cybersecurity communities, or it is collected using open source specifications that are widely supported by common SIEMs or other security control systems.
• the found threat or vulnerability information is transmitted to the policy and trust engines for risk and trust evaluation. Based on the risk evaluation results, new security controls can be requested to be added by the Security Configuration Engine.
  • the Asset Management engine 180 is a sub-component, which keeps track of assets.
  • Assets are context-dependent; e.g. nodes and elements in the network, configurations, databases, connections between assets, hardware and software subcomponents, workflows, policies, data (at rest, in transit, in use - which have different risk levels), human specialists etc.
  • An asset always has an identity.
  • assets are organized in domains. Domain topology is itself an asset.
  • Asset Management provides asset related information to other components of the system/apparatus.
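The asset model described above (every asset has an identity, belongs to a security domain, and may depend on other assets) can be sketched with a minimal record type; the field names are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """Minimal illustrative asset record: an asset always has an
    identity, is organized into a security domain, and may depend on
    other assets (configurations, connections, data, etc.)."""
    identity: str
    domain: str
    asset_type: str = "network-element"   # nodes, configurations, data, ...
    depends_on: list = field(default_factory=list)

def assets_in_domain(assets, domain):
    """Asset-Management-style query: the assets within one security domain."""
    return [a for a in assets if a.domain == domain]
```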
  • Key functionality in the Risk Engine may include one or more of:
  • Risk evaluation is adaptive, highly automated, and near real-time, e.g. using automatically updated Asset Trust Indexes in Risk probability calculation.
• Risk Probability depends on several factors, e.g. asset placement in system topology, number and nature of known threats, interfaces and protocols the asset deploys, and mitigating controls, as specified in the risk level calculation formulas 1-6.
• Threat Intelligence related to the development toolkit, e.g. a source code language and compiler having a history of a high number of vulnerabilities over time, can be taken into account, enabling evaluation of the probability of an Asset being subject to unknown threats.
• the trust index is connected to the risk status, where the risk status takes the trust index into account for risk evaluation.
  • Risks are calculated not only for individual Assets, but also for Domains, taking into account the trust relations between the Assets based on the risk level calculation formulas 2, 4 and 6.
• Key functionality in the Adaptation Engine may include one or more of:
  • the whole system security posture may be continuously modified based on changes in threat surface and associated risks, taking input from external components and processes that in traditional systems act individually or with limited interaction. Mitigation actions can be enforced automatically or be subjected to human interaction. No unnecessary security controls remain in use after the respective risks have ceased to exist.
  • Key functionality in machine learning may involve using historical data with adjustable time decay factors to ensure that also slowly developing changes in threat environment are detected.
  • Key functionality in the Trust Engine may include one or more of:
  • FIG. 13 is a schematic flow diagram illustrating an example of possible initialization of a security automation system according to an embodiment.
  • the idea of the initialization is to collect input from internal and/or external sources and to define initial configuration tasks.
  • a context specific asset and network topology information is downloaded from Asset Management.
• Policy automation is initialized by security standard frameworks such as NIST SP 800-53r4, ISO 27001:2013, ISO 27002:2013, ISO 27552, NIST Cybersecurity Framework 1.1, CIS Benchmarks and the EU GDPR. Policy sets based on the predefined policy families, policies, and controls are set according to the customer security policies. The defined policy sets are bound to the assets.
  • Initial trust index is calculated based on asset related policy, network configuration and contextual information.
• the initial risk level is calculated using risk calculation formula (1) based on context-specific asset information, defined controls, known threats and vulnerabilities, and network topology.
• An example of a system/network topology for a use case is illustrated in FIG. 14.
  • the illustrated system/network topology separates and groups different types of assets, e.g. network elements, into three different security domains according to their different functionality, security policies and security protection requirements.
• Examples of action sequences for two different use cases are shown in FIG. 15 and FIG. 16, as described below.
  • FIG. 15 is a schematic diagram illustrating a use case example according to an embodiment.
• a threat event is found by Security Analytics impacting assets within one security domain.
  • a new security event indicating that a security policy change concerning disallowing a specific protocol, SNMPv2, has been made, is detected by the Security Analytics.
  • a threat event is transmitted to the Adaptation engine.
  • the rules of the adaptation engine trigger a risk and trust analysis on the assets A1 and A2 that currently support the SNMPv2 protocol that needs to be deprecated.
  • Trust engine calculates ATI for A1 and A2 using the new compliance information:
  • Trust Index Calculator stores and reports Trust Index to be updated in the Risk and Trust Awareness dashboard.
  • a trust index is reported back and forwarded to the Risk Engine.
• the Risk index calculator performs a risk level calculation for the known threats and vulnerabilities using the risk calculation formulas 3 and 4, that is, the risk level is calculated for each asset within security domain A and also for security domain A as a whole.
  • Asset value index
• Vulnerability Criticality factor (weak SNMP protocol in use): 3
• Control Mitigation Index (SNMPv3 in use): 1
• Asset Initial Risk Levels = [(Impact x Asset Dependency Factor) x (Probability x 1/Trust Index)]:
• Asset Risk Level Known = [(Impact x Asset Dependency Factor) x (Probability x 1/Trust Index)] x 1/Index-Decay Multiplier
• A1: 2.5 x 3 x 0.6 = 4.5
• A2: 2.3 x 3 x 0.7 = 4.83
• A3: 1.2 x 3 x 0.7 = 2.52
• 1 x 3 x 0.6 = 1.8
• Vulnerability Criticality factor (weak SNMP protocol in use): 3
• Control Mitigation Index (SNMPv3 in use): 1
• Control Mitigation Index (SNMPv2 in use): 0.8
  • ARLK Asset Risk Levels Known
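The per-asset products listed above can be checked mechanically. The three factors are taken as printed, without attributing them to specific formula terms, and the label for the last, unlabeled line is an assumption:

```python
# Reproducing the per-asset products from the use case; the grouping of
# the three factors follows the numbers as printed in the description.
cases = {
    "A1": (2.5, 3, 0.6, 4.5),
    "A2": (2.3, 3, 0.7, 4.83),
    "A3": (1.2, 3, 0.7, 2.52),
    "A4": (1.0, 3, 0.6, 1.8),  # asset label on the unlabeled line is assumed
}
for asset, (a, b, c, expected) in cases.items():
    assert abs(a * b * c - expected) < 1e-9, asset
```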
• the Risk calculation indicates an unacceptable risk level for the security domain A (> 10) and the assets A1 (> 15) and A2 (> 15) in security domain A, but there is no indication of increased risk for the assets A3 and A4 or for security domains B or C.
  • the risk and trust awareness dashboard is updated to illustrate a new Risk status to the assets A1 and A2.
• the Risk Treatment Handler performs a risk treatment analysis, resulting in a determination that additional security controls are required to be put in place to protect Assets A1 and A2, and reports mitigation instructions to the Adaptation Engine.
  • the adaptation engine forwards mitigation request to the Security Configuration Engine.
• the Security Configuration Engine chooses from the policy catalog two new security controls. Enforcement of the controls, that is, the actual security configuration of the assets, is performed by executing scripts on the assets A1 and A2. The result of adding the security controls is returned to the adaptation engine.
  • the rules of the adaptation engine trigger a new risk and trust analysis
  • a trust index is reported back and forwarded to the Risk Engine.
• the Risk index calculator performs a new risk level calculation, using a similar calculation as shown in point 5, taking into account that two new security controls have been added to Assets A1 and A2.
  • This time Risk calculation indicates an acceptable risk status for the monitored system.
  • the risk and trust awareness dashboard is updated to illustrate an updated Risk status.
  • FIG. 16 is a schematic diagram illustrating a use case example according to another embodiment.
  • a threat is found by Threat Intelligence/Security Analytics that has impact beyond one security domain.
  • the rules of the adaptation engine trigger a risk and trust analysis.
  • Trust engine calculates, stores and reports Trust Index to be updated in the Risk and Trust Awareness dashboard.
  • Asset trust index indicates impacts not only to assets in security domain A, but also assets in security domain B.
  • a trust index is reported back and forwarded to the Risk Engine.
  • Risk index calculator performs a risk level calculation for the unknown vulnerability using the risk calculation formulas 5 and 6, that is, the risk level is calculated to each asset within the security domain A and also to security domain A.
  • Risk calculation indicates an unacceptable risk for the assets A3 and A4 in the security domain A, which may also cause increased security risk to assets in the security domain B and C.
  • the risk and trust awareness dashboard is updated to illustrate increased Risk status to the assets A1 and A2 and the security domains A, B and C.
• the Risk Treatment Handler performs a risk treatment analysis, resulting in a determination that additional security controls are required to be put in place to protect Assets A1 and A2.
• the risk treatment analysis also advises adding additional firewall rules to the network firewall (FW) between the security domains.
• the Risk Engine reports the mitigation instructions to the Adaptation Engine.
• the Security Configuration Engine chooses from the policy catalog two new security controls to be added to the assets A and B. Enforcement of the controls, that is, the actual security configuration of the assets, is performed by executing scripts on the assets A1 and A2. New firewall rules are also configured on the Network Firewall to provide additional protection for the Security Domains B and C. The result of adding the security controls is returned to the Adaptation Engine.
  • a trust index is reported back and forwarded to the Risk Engine.
  • The risk index calculator of the Risk Engine performs a new risk level calculation, taking into account that two new security controls have been added to the assets A1 and A2 and that new firewall rules have been set in the network FW.
  • This time Risk calculation indicates an acceptable risk status for the monitored system.
  • the risk and trust awareness dashboard is updated to illustrate an updated Risk status.
  • Rules of the Adaptation Engine are updated according to new risk and trust levels.
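The risk treatment loop of this use case can be sketched as follows. This is a minimal illustration only: the asset fields, the trust and risk scoring rules and the risk threshold below are placeholder assumptions, since the actual risk calculation formulas 5 and 6 are not reproduced in this excerpt.

```python
from dataclasses import dataclass, field

RISK_THRESHOLD = 0.3  # hypothetical "acceptable risk" limit


@dataclass
class Asset:
    name: str
    likelihood: float   # estimated probability of exploitation (0..1)
    impact: float       # estimated business impact (0..1)
    controls: list = field(default_factory=list)


def trust_index(asset: Asset) -> float:
    """Hypothetical trust index: each deployed security control raises trust."""
    return min(1.0, 0.4 + 0.2 * len(asset.controls))


def risk_level(asset: Asset) -> float:
    """Hypothetical risk formula: likelihood x impact, damped by trust."""
    return asset.likelihood * asset.impact * (1.0 - trust_index(asset))


def treat(assets):
    """Risk treatment: add controls until each asset's risk is acceptable."""
    for a in assets:
        while risk_level(a) > RISK_THRESHOLD:
            a.controls.append("extra-security-control")  # e.g. a new FW rule
    return {a.name: round(risk_level(a), 3) for a in assets}


domain_a = [Asset("A1", 0.9, 0.9), Asset("A2", 0.8, 0.9)]
print(treat(domain_a))  # all assets brought below the risk threshold
```

As in the use case, the recalculated risk after enforcement of the new controls falls to an acceptable level, at which point the loop terminates and the (hypothetical) dashboard could be updated.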
  • embodiments may be implemented in hardware, or in software for execution by suitable processing circuitry, or a combination thereof.
  • steps, functions, procedures, modules and/or blocks described herein may be implemented in hardware using any conventional technology, such as discrete circuit or integrated circuit technology, including both general-purpose electronic circuitry and application-specific circuitry. Alternatively, or as a complement, at least some of the steps, functions, procedures, modules and/or blocks described herein may be implemented in software such as a computer program for execution by suitable processing circuitry such as one or more processors or processing units.
  • processing circuitry and “processor” may be used interchangeably in parts of this disclosure.
  • processing circuitry includes, but is not limited to, one or more microprocessors, one or more Digital Signal Processors (DSPs), one or more Central Processing Units (CPUs), video acceleration hardware, and/or any suitable programmable logic circuitry such as one or more Field Programmable Gate Arrays (FPGAs), or one or more Programmable Logic Controllers (PLCs).
  • FIG. 17A is a schematic block diagram illustrating an example of a security automation system according to an embodiment.
  • the security automation system 400 comprises processing circuitry 410 including one or more processors and a memory 420, the memory 420 comprising instructions executable by the processing circuitry 410, whereby the security automation system 400 is operative to perform security management of the Information Technology system.
  • processing circuitry is operative to perform at least some of the steps, actions and/or functions described herein, including the operations of the security automation system 400.
  • the security automation system 400 may also include a communication circuit 430.
  • the communication circuit 430 may include functions for wired and/or wireless communication with other devices and/or network nodes in the network.
  • the communication circuit 430 may be based on radio circuitry for communication with one or more other nodes, including transmitting and/or receiving information.
  • the communication circuit 430 may be interconnected to the processing circuitry 410 and/or memory 420.
  • the communication circuit 430 may include any of the following: a receiver, a transmitter, a transceiver, input/output (I/O) circuitry, input port(s) and/or output port(s).
  • FIG. 17B is a schematic block diagram illustrating an example of a network entity (node) comprising a security automation system according to an embodiment.
  • the network entity 350 may be a network node or part thereof and/or a cloud-based network device.
  • FIG. 18 is a schematic diagram illustrating an example of a computer-implementation according to an embodiment.
  • a computer program 525; 535 is loaded into the memory 520 for execution by processing circuitry including one or more processors 510.
  • the processor(s) 510 and memory 520 are interconnected to each other to enable normal software execution.
  • An optional input/output device 540 may also be interconnected to the processor(s) 510 and/or the memory 520 to enable input and/or output of relevant data such as input parameter(s) and/or resulting output parameter(s).
  • processor should be interpreted in a general sense as any system or device capable of executing program code or computer program instructions to perform a particular processing, determining or computing task.
  • the processing circuitry including one or more processors 510 is thus configured to perform, when executing the computer program 525, well-defined processing tasks such as those described herein.
  • the processing circuitry does not have to be dedicated to only execute the above-described steps, functions, procedures and/or blocks, but may also execute other tasks.
  • a computer program 525; 535 for performing, when executed, security management of an Information Technology system having a number of interacting system components, each system component being associated with one or more operational assets relevant to the operation of the system component, wherein the operational assets are organized in one or more security domains according to a system topology.
  • the computer program 525; 535 comprises instructions, which when executed by at least one processor 510, cause the at least one processor 510 to: obtain: i) security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets of the system components, ii) asset information representative of the system topology, asset configuration and/or dependency between assets, and/or iii) security configuration information representative of asset and/or domain security configuration; determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a trust indication representing trustworthiness of the operational asset and/or domain at least partly based on the security configuration information representative of asset and/or domain security configuration; determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a risk level at least partly based on the security information, the asset information and at least a subset of the determined trust indications; and determine one or more security control actions related to at least a subset of the operational assets and/or domains based on at least a subset of the determined risk levels.
  • a computer-program product comprising a non-transitory computer-readable medium 520; 530 having stored thereon such a computer program 525; 535.
  • the proposed technology also provides a carrier comprising the computer program, wherein the carrier is one of an electronic signal, an optical signal, an electromagnetic signal, a magnetic signal, an electric signal, a radio signal, a microwave signal, or a computer-readable storage medium.
  • the software or computer program 525; 535 may be realized as a computer program product, which is normally carried or stored on a computer-readable medium 520; 530, in particular a non-volatile medium.
  • the computer-readable medium may include one or more removable or non-removable memory devices including, but not limited to a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc (CD), a Digital Versatile Disc (DVD), a Blu-ray disc, a Universal Serial Bus (USB) memory, a Hard Disk Drive (HDD) storage device, a flash memory, a magnetic tape, or any other conventional memory device.
  • the computer program may thus be loaded into the operating memory of a computer or equivalent processing device for execution by the processing circuitry thereof.
  • the flow diagram or diagrams presented herein may be regarded as a computer flow diagram or diagrams, when performed by one or more processors.
  • a corresponding apparatus may be defined as a group of function modules, where each step performed by the processor corresponds to a function module.
  • the function modules are implemented as a computer program running on the processor.
  • the computer program residing in memory may thus be organized as appropriate function modules configured to perform, when executed by the processor, at least part of the steps and/or tasks described herein.
  • any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses.
  • Each virtual apparatus may comprise a number of these functional units.
  • These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like.
  • the processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory (RAM), cache memory, flash memory devices, optical storage devices, etc.
  • Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein.
  • the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure.
  • It is also possible to realize the module(s) predominantly by hardware modules, or alternatively entirely by hardware, with suitable interconnections between relevant modules.
  • Particular examples include one or more suitably configured digital signal processors and other known electronic circuits, e.g. discrete logic gates interconnected to perform a specialized function, and/or Application Specific Integrated Circuits (ASICs) as previously mentioned.
  • Other examples of usable hardware include input/output (I/O) circuitry and/or circuitry for receiving and/or sending signals.
  • the virtual apparatus may comprise processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like.
  • the processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory, cache memory, flash memory devices, optical storage devices, etc.
  • Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several embodiments.
  • The terms module or unit may have a conventional meaning in the field of electronics, electrical devices and/or electronic devices and may include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, such as those described herein. It is also becoming increasingly popular to provide computing services (hardware and/or software) in network devices such as network nodes and/or servers, where the resources are delivered as a service to remote locations over a network. By way of example, this means that functionality, as described herein, can be distributed or re-located to one or more separate physical nodes or servers.
  • the functionality may be re-located or distributed to one or more jointly acting physical and/or virtual machines that can be positioned in separate physical node(s), i.e. in the so-called cloud.
  • This is sometimes also referred to as cloud computing, which is a model for enabling ubiquitous on-demand network access to a pool of configurable computing resources such as networks, servers, storage, applications and general or customized services.
  • a Network Device may generally be seen as an electronic device being communicatively connected to other electronic devices in the network.
  • the network device may be implemented in hardware, software or a combination thereof.
  • the network device may be a special-purpose network device or a general purpose network device, or a hybrid thereof.
  • a special-purpose network device may use custom processing circuits and a proprietary operating system (OS), for execution of software to provide one or more of the features or functions disclosed herein.
  • a general purpose network device may use common off-the-shelf (COTS) processors and a standard OS, for execution of software configured to provide one or more of the features or functions disclosed herein.
  • a special-purpose network device may include hardware comprising processing or computing resource(s), which typically include a set of one or more processors, and physical network interfaces (NIs), which sometimes are called physical ports, as well as non-transitory machine readable storage media having stored thereon software.
  • a physical NI may be seen as hardware in a network device through which a network connection is made, e.g. wirelessly through a wireless network interface controller (WNIC) or through plugging in a cable to a physical port connected to a network interface controller (NIC).
  • the software may be executed by the hardware to instantiate a set of one or more software instance(s).
  • Each of the software instance(s), and that part of the hardware that executes that software instance, may form a separate virtual network element.
  • a general purpose network device may for example include hardware comprising a set of one or more processor(s), often COTS processors, and network interface controller(s) (NICs), as well as non-transitory machine readable storage media having stored thereon software.
  • the processor(s) executes the software to instantiate one or more sets of one or more applications.
  • While one embodiment does not implement virtualization, alternative embodiments may use different forms of virtualization, for example represented by a virtualization layer and software containers.
  • one such alternative embodiment implements operating system-level virtualization, in which case the virtualization layer represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple software containers that may each be used to execute one of the sets of applications.
  • Each of the software containers (also called virtualization engines, virtual private servers, or jails) is a user space instance, typically a virtual memory space.
  • In another such alternative embodiment: 1) the virtualization layer represents a hypervisor (sometimes referred to as a Virtual Machine Monitor (VMM)), or the hypervisor is executed on top of a host operating system; and 2) the software containers each represent a tightly isolated form of software container called a virtual machine, which is executed by the hypervisor and may include a guest operating system.
  • a hypervisor is the software/hardware that is responsible for creating and managing the various virtualized instances and in some cases the actual physical hardware.
  • the hypervisor manages the underlying resources and presents them as virtualized instances. What the hypervisor virtualizes to appear as a single processor may actually comprise multiple separate processors. From the perspective of the operating system, the virtualized instances appear to be actual hardware components.
  • a virtual machine is a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine; and applications generally do not know they are running on a virtual machine as opposed to running on a “bare metal” host electronic device, though some systems provide para-virtualization, which allows an operating system or application to be aware of the presence of virtualization for optimization purposes.
  • the instantiation of the one or more sets of one or more applications as well as the virtualization layer and software containers if implemented, are collectively referred to as software instance(s).
  • Each set of applications, corresponding software container if implemented, and that part of the hardware that executes them forms a separate virtual network element(s).
  • the virtual network element(s) may perform functionality similar to that of Virtual Network Element(s) (VNEs). This virtualization of the hardware is sometimes referred to as Network Function Virtualization (NFV).
  • NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which could be located in data centers, NDs, and Customer Premise Equipment (CPE).
  • different embodiments may implement one or more of the software container(s) differently.
  • While embodiments are illustrated with each software container corresponding to a VNE, alternative embodiments may implement this correspondence or mapping between software container and VNE at a finer level of granularity; it should be understood that the techniques described herein with reference to a correspondence of software containers to VNEs also apply to embodiments where such a finer level of granularity is used.
  • A hybrid network device includes both custom processing circuitry/proprietary OS and COTS processors/standard OS, e.g. in a card or circuit board within a network device (ND).
  • A platform Virtual Machine (VM), such as a VM that implements the functionality of a special-purpose network device, could provide for para-virtualization to the hardware present in the hybrid network device.

Abstract

There is provided a security automation system (100) configured for security management of an Information Technology system. The security automation system (100) comprises a trust engine (110) configured to perform trust evaluation of at least a subset of a number of operational assets and/or security domains at least partly based on security configuration information representative of asset and/or domain security configuration. The security automation system (100) further comprises a risk engine (120) configured to perform risk assessment for each of at least a subset of the operational assets and/or for each of a number of security domains at least partly based on the trust evaluation performed by the trust engine, security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets, and asset information representative of the system topology, asset configuration and/or dependency between assets. The security automation system (100) also comprises a security adaptation engine (130) configured to determine one or more security control actions related to at least a subset of the operational assets and/or domains at least partly based on the risk assessment performed by the risk engine (120).

Description

SECURITY AUTOMATION SYSTEM
TECHNICAL FIELD
The proposed technology generally relates to Information Security (IS) and Information Technology (IT), and specifically concerns a security automation system configured for security management of an Information Technology system, and a network entity of a communication system or network comprising such a security automation system, a method for security management of an Information Technology system as well as a corresponding computer program and computer-program product.
BACKGROUND
Current security management related solutions are targeted for IS/IT environments in enterprise networks, and are normally specific to one particular purpose such as:
• Security Incident and Event Management (SIEM) and Security Orchestration, Automation and Response (SOAR) systems may collect security events and logs from different servers and perform filtering and/or alerting procedures based on those and raise alarms on suspicious events.
• Configuration of servers with scripts and tools is meant for hardening the network nodes; this is usually performed server by server and is often a manual procedure.
• Vulnerability scanners are used to check servers one by one for existing known vulnerabilities in the servers or their sub-components. Vulnerability management reports are usually analyzed manually by a security expert.
• Vulnerability information for different 3pp software components are received or fetched from public sources for manual processing and mapping to the servers or sub-components.
• Threat intelligence information is received or fetched from commercial or public sources for manual processing.
• Asset inventories are used to store information and characteristics about servers.
• Intrusion Detection Systems/Intrusion Prevention System (IDS/IPS) are used to block traffic based on known anomaly fingerprints.
• Workflow engines are targeted to automate simple incident response flows.
Compliance tools exist but are limited to snapshot views rather than continuous compliance monitoring and verification. Risk management is usually performed as a desktop/paper exercise at specific time slots and risk management applications are rare. Thus, the current security management systems are to a large extent not operating in real time to document and report on vulnerabilities, threats, security controls and risks.
The following issues can be identified with the current technical solutions:
1. They are not interconnected, and they are usually separate systems or solutions.
2. They are not combining the information automatically or on-line from different systems or solutions into actionable end to end threat information valid for the whole network context.
3. They require many manual steps to be performed by security analyst to map the security event information, threat and vulnerability information, historical data and other data with the actual network asset inventory in order to be able to decide what protective actions need to be performed.
4. They do not provide visibility to the end to end security status as status information is scattered into different systems.
5. Manual steps are time consuming and the amount of data makes it ineffective and error prone to try to find links and connections between different data elements.
6. There is no effective end to end security management automation in technical or operational level.
7. Current systems are focusing on technical security mechanisms.
8. Multiple individual systems are operated based on pre-defined rules that are specific to each system.
9. Current workflow systems are focusing on incident response flows, not combining information from different sources, mapping them to the managed context and orchestrating between security risk management, security configurations and policies and security analytics.
10. There is no on-line, risk insight available that is regularly updated by the different information flows.
11. There are no solutions for evaluating or judging trust in the telecom network, e.g. between network elements or security domains, based on risks, threats and vulnerabilities.
12. Current solutions do not take into account that a change of security posture in one network node impacts other network nodes.
13. Often, deployed security controls remain in place even if the respective threats/risks have disappeared.
14. Existing SIEM and SOAR systems are designed for use in enterprise server security monitoring and management for enterprise business use cases, and are not apt for telecom network elements and business use cases without major customization.
A representative example of state-of-the-art SIEM is disclosed in the SANS Whitepaper “An Evaluator’s Guide to Next Gen SIEM” by Barbara Filkins (Chris Crowley), December 2018. As stated therein, a SIEM system provides a central console for viewing, monitoring and managing security-related events and log data from across the considered enterprise. A SIEM system can enable an analyst to identify and respond to suspicious behavior patterns.
However, there is still a general need for improvements related to security management of Information Technology systems in various technological applications.
SUMMARY
It is a general object to provide improved security management of an Information Technology system. It is a specific object to provide a security automation system configured for security management of an Information Technology system.
It is also an object to provide a network entity, of a communication system or network, comprising such a security automation system.
Another object is to provide a method for security management of an Information Technology system.
Another specific object is to provide situational awareness of an Information Technology system security posture, including e.g. a visualization platform and means of notification to human security analysts.
Yet another object is to provide a computer program for performing, when executed, security management of an Information Technology system.
It is also an object to provide a corresponding computer-program product.
Still another object is to provide a complementary security automation system configured for security management of an Information Technology system.
These and other objects are met by embodiments of the proposed technology.
According to a first aspect, there is provided a security automation system configured for security management of an Information Technology (IT) system. The Information Technology system has a number of interacting system components, each system component being associated with one or more operational assets for the operation of the system component. The operational assets are organized in one or more security domains according to a system topology.
The security automation system is configured to obtain: i) security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets of the system components, ii) asset information representative of the system topology, asset configuration and/or dependency between assets, and/or iii) security configuration information representative of asset and/or domain security configuration.
The security automation system is further configured to determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a trust indication representing trustworthiness of the operational asset and/or domain at least partly based on the security configuration information representative of asset and/or domain security configuration.
The security automation system is also configured to determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a risk level at least partly based on the security information, the asset information and at least a subset of the determined trust indications.
The security automation system is configured to determine one or more security control actions related to at least a subset of the operational assets and/or domains based on at least a subset of the determined risk levels.
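The four operations of the first aspect (obtaining information, determining trust indications, determining risk levels and determining security control actions) can be sketched as a simple pipeline. All data shapes and scoring rules below are illustrative assumptions, not the claimed algorithms.

```python
def determine_trust(config_info):
    """Trust indication per asset, derived from its security configuration."""
    return {asset: min(1.0, 0.2 * len(cfg["controls"]))
            for asset, cfg in config_info.items()}


def determine_risk(security_info, asset_info, trust):
    """Risk level per asset: threat severity scaled by criticality and distrust."""
    return {asset: security_info.get(asset, {}).get("severity", 0.0)
                   * asset_info[asset]["criticality"]
                   * (1.0 - trust[asset])
            for asset in asset_info}


def determine_actions(risk, limit=0.4):
    """Security control actions for every asset whose risk exceeds the limit."""
    return [("harden", asset) for asset, level in risk.items() if level > limit]


# Illustrative inputs for the three information categories i), ii) and iii):
security_info = {"A1": {"severity": 0.9}}                       # threats/vulns
asset_info = {"A1": {"criticality": 1.0}, "A2": {"criticality": 0.5}}
config_info = {"A1": {"controls": ["tls"]}, "A2": {"controls": ["tls", "fw"]}}

trust = determine_trust(config_info)
risk = determine_risk(security_info, asset_info, trust)
print(determine_actions(risk))  # [('harden', 'A1')]
```

Note how the trust indications feed the risk determination, and the risk levels in turn drive the control actions, mirroring the ordering of the claimed operations.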
According to a second aspect, there is provided a network entity, of a communication system or network, comprising such a security automation system.
According to a third aspect, there is provided a method for security management of an Information Technology system having a number of interacting system components. Each system component is associated with one or more operational assets relevant to the operation of the system component, and the operational assets are organized in one or more security domains according to a system topology.
The method comprises: obtaining: i) security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets of the system components, ii) asset information representative of the system topology, asset configuration and/or dependency between assets, and/or iii) security configuration information representative of asset and/or domain security configuration; determining, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a trust indication representing trustworthiness of the operational asset and/or domain at least partly based on the security configuration information representative of asset and/or domain security configuration; determining, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a risk level at least partly based on the security information, the asset information and at least a subset of the determined trust indications; and determining one or more security control actions related to at least a subset of the operational assets and/or domains based on at least a subset of the determined risk levels.
According to a fourth aspect, there is provided a computer program for performing, when executed, security management of an Information Technology system having a number of interacting system components. Each system component is associated with one or more operational assets relevant to the operation of the system component, and the operational assets are organized in one or more security domains according to a system topology.
The computer program comprises instructions, which when executed by at least one processor, cause the at least one processor to: obtain: i) security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets of the system components, ii) asset information representative of the system topology, asset configuration and/or dependency between assets, and/or iii) security configuration information representative of asset and/or domain security configuration; determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a trust indication representing trustworthiness of the operational asset and/or domain at least partly based on the security configuration information representative of asset and/or domain security configuration; determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a risk level at least partly based on the security information, the asset information and at least a subset of the determined trust indications; and determine one or more security control actions related to at least a subset of the operational assets and/or domains based on at least a subset of the determined risk levels.
According to a fifth aspect, there is provided a computer-program product comprising a non-transitory computer-readable medium having stored thereon such a computer program.
According to a sixth aspect, there is provided a security automation system configured for security management of an Information Technology system. The Information Technology system has a number of interacting system components, each system component being associated with one or more operational assets relevant to the operation of the system component. The operational assets are organized in one or more security domains according to a system topology.
The security automation system comprises: a trust engine configured to perform trust evaluation of at least a subset of the operational assets and/or domains at least partly based on security configuration information representative of asset and/or domain security configuration; a risk engine configured to perform risk assessment for each of at least a subset of the operational assets and/or for each of a number of the security domains at least partly based on the trust evaluation performed by the trust engine, security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets, and asset information representative of the system topology, asset configuration and/or dependency between assets; and a security adaptation engine configured to determine one or more security control actions related to at least a subset of the operational assets and/or domains at least partly based on the risk assessment performed by the risk engine.
By way of example, the security adaptation engine may be configured to communicate, via workflows to at least a subset of security automation system sub-components, risk information and events at least partly based on the risk assessment performed by the risk engine.
Optionally, the security automation system comprises a security configuration engine configured to set initial security profiles of the assets with the help of executable security controls, and to configure new dynamic or static policies, or decommission existing ones, automatically for permanent and/or temporary protection based on the instructions of the risk engine via workflows of the security adaptation engine.
In this way, it is possible to effectively provide and/or support improved security management of an Information Technology system.
Other advantages will be appreciated when reading the detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
The embodiments, together with further objects and advantages thereof, may best be understood by making reference to the following description taken together with the accompanying drawings, in which:
FIG. 1 is a schematic diagram illustrating an example of an improved security automation process or procedure according to an embodiment.
FIG. 2 is a schematic diagram illustrating an example of a security automation system according to an embodiment.

FIG. 3 is a schematic diagram illustrating another example of a security automation system according to an embodiment.
FIG. 4 is a schematic diagram illustrating an example of a monitored Information Technology system.
FIG. 5 is a schematic diagram illustrating another example of a monitored Information Technology system.
FIG. 6 is a schematic diagram illustrating an example of a communication system/network having Information Technology components.
FIG. 7 is a schematic flow diagram illustrating an example of a method for security management of an Information Technology system according to an embodiment.
FIG. 8 is a schematic diagram illustrating an example of security automation system in connection with a managed environment.
FIG. 9 is a schematic diagram illustrating an example of a security automation system according to a specific embodiment.
FIG. 10 is a schematic flow diagram illustrating an example operation of a risk engine according to an embodiment.
FIG. 11 is a schematic flow diagram illustrating an example operation of an asset trust index assigner according to an embodiment.
FIG. 12 is a schematic flow diagram illustrating an example operation of an asset trust index calculator according to an embodiment.
FIG. 13 is a schematic flow diagram illustrating an example of possible initialization of a security automation system according to an embodiment.

FIG. 14 is a schematic diagram illustrating an example of security domain organization according to an embodiment.
FIG. 15 is a schematic diagram illustrating a use case example according to an embodiment.
FIG. 16 is a schematic diagram illustrating a use case example according to another embodiment.
FIG. 17A is a schematic block diagram illustrating an example of a security automation system according to an embodiment.
FIG. 17B is a schematic block diagram illustrating an example of a network entity (node) comprising a security automation system according to an embodiment.
FIG. 18 is a schematic diagram illustrating an example of a computer-implementation according to an embodiment.
DETAILED DESCRIPTION
Throughout the drawings, the same reference designations are used for similar or corresponding elements.
Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description.
As used herein, the non-limiting term “trust” may for example refer to the extent to which an entity is willing to depend on another entity in a given situation with a sense of relative security, even though negative consequences may arise.
As used herein, the non-limiting term “risk” may for example relate to a situation in which it is possible but not certain that some unwanted or undesirable event will occur. The term may often be used synonymously with the probability of the unwanted event to occur. In some cases, risk may be regarded as the product of threat impact and probability of threat occurrence, where threat impact is defined as the value of losses due to a threat being realized.
As used herein, the non-limiting term “risk level” may for example refer to a calculated value of risk.
As used herein, the non-limiting term “trustworthiness” may for example refer to the assurance of selected security principles/requirements and/or availability and/or reliability requirements related to an operational asset, device, component and/or system, including hardware and/or software components as well as configurations and/or communication interfaces and/or protocols of such hardware and/or software components as well as dependencies or relations between operational assets. Trustworthiness normally includes an understanding of the resilience of the operational asset, device, component and/or system to conditions that stress the security, availability and/or reliability requirements. In other words, trustworthiness can be regarded as an indication as to how much an operational asset, device, component and/or system can be trusted from a technical point of view.
As used herein, the non-limiting terms “trust indication” and “trust index” may for example refer to a value of perceived trustworthiness of an asset.

As used herein, the non-limiting term “Information Technology system” may refer to any technological system for generating, processing, storing and/or transferring information.
As used herein, the non-limiting term “server” may generally refer to any technical entity, component, device, system and/or node located in connection with a network environment such as an enterprise network.
As used herein, the non-limiting term “network entity” may generally refer to any technical entity, component, device, system and/or node located in connection with a network environment such as a wired and/or wireless communication network, including Internet components, servers and/or network nodes of wireless communication systems.
As used herein, the non-limiting term “network node” may refer to any node or device located in connection with a communication network, including but not limited to devices in access networks, core networks and similar network structures. The term network node may also encompass cloud-based network devices.
As used herein, the non-limiting term “engine” may refer to any functional module or unit for executing one or more process steps and/or performing one or more actions such as computations, decisions, execution of security control workflows and so forth, and may be implemented in hardware or software executing on processing hardware.
Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein; the disclosed subject matter should not be construed as limited to only the embodiments set forth herein. Rather, these embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art.
As mentioned in the background, there are a number of issues and/or problems with the current technical solutions for security management of Information Technology systems. The proposed technology provides an improved solution, as exemplified below with reference to exemplary embodiments.
According to a first aspect, there is provided a security automation system configured for security management of an Information Technology (IT) system. The Information Technology system has a number of interacting system components, each system component being associated with one or more operational assets for the operation of the system component. The operational assets are organized in one or more security domains according to a system topology.
The security automation system is configured to obtain: i) security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets of the system components, ii) asset information representative of the system topology, asset configuration and/or dependency between assets, and/or iii) security configuration information representative of asset and/or domain security configuration.
The security automation system is further configured to determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a trust indication representing trustworthiness of the operational asset and/or domain at least partly based on the security configuration information representative of asset and/or domain security configuration.
The security configuration information representative of asset and/or domain security configuration may thus include information representative of asset security configuration and/or information representative of domain security configuration.
The security automation system is also configured to determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a risk level at least partly based on the security information, the asset information and at least a subset of the determined trust indications. The security automation system is configured to determine one or more security control actions related to at least a subset of the operational assets and/or domains based on at least a subset of the determined risk levels.
In this way, it is possible to effectively provide and/or support improved security management of an Information Technology system.
FIG. 1 is a schematic diagram illustrating an example of an improved security automation process or procedure according to an embodiment.
Based on various bases (sets) of information, such as information on security risks, threats and/or vulnerabilities, and/or asset information including asset context, asset topology, configuration and/or interfaces, risk evaluation and trust evaluation may be executed, wherein results of the trust evaluation may be used as effective input to the risk evaluation to thereby improve the risk evaluation. Based on such improved risk evaluation, proper security adaptation and/or configuration may be determined and/or updated to decide on and/or provide one or more security control actions.
Optionally, information about the determined and/or updated security configuration may be used as feedback to provide useful information for the trust evaluation and/or risk evaluation.
FIG. 2 is a schematic diagram illustrating an example of a security automation system according to an embodiment. In this particular example, the security automation system 100 comprises a trust engine 110, a risk engine 120, a security adaptation engine 130 and an associated database 140.
By way of example, the security automation system 100 may implement interconnection and mapping of information from different sources and create automated security control action flows, keeping the risk level acceptable, and the context compliant and trustworthy according to given policies. By way of example, the security automation system 100 may be configured to determine one or more security control actions for a fully automated or security-analyst-assisted workflow to mitigate security risks and/or security threats.
The configuration and operation of such a security automation system or apparatus will be described in greater detail later on.
FIG. 3 is a schematic diagram illustrating another example of a security automation system according to an embodiment. In this particular example, the security automation system 100 further includes a security policy based security configuration engine 150, and optionally also a security analytics engine 160, a threat intelligence engine 170 and an asset management engine 180, and a Risk and Trust Awareness Dashboard 105 as will be described in greater detail later on.
By way of example, instead of deploying singular security controls to detect local deficiencies, the security automation system may interconnect different security functions. Threat intelligence from external sources may be combined with system-internal security monitoring information to compose so-called asset trust indexes. Trust indexes may be used for risk evaluation, which makes it possible to assess security status comprehensively across the entire system.
For example, based on risk evaluation and treatment decisions reflecting applicable security policies, the security automation system may provide fully automated or security-analyst-assisted workflows to mitigate security risks for any given context. The security automation system may provide end-to-end visibility to the trust and risk status of the system and make automated risk qualifications in near real-time, e.g. with the aid of machine learning technologies, to adjust security controls, insert security functions and/or provide information to external systems.
The security automation system of the proposed technology may be operated more or less autonomously in connection with an Information Technology system that needs improved security, or may be operated on top of existing SIEM and/or SOAR systems and utilize such systems for information retrieval and risk evaluation.

FIG. 4 is a schematic diagram illustrating an example of a monitored Information Technology system. As can be seen, the Information Technology system 200 has a number of interacting system components 205, each system component 205 being associated with one or more operational assets 210 for the operation of the system component. The system components 205 and the associated operational assets 210 are organized in one or more security domains 220 according to a system topology. Different security domains 220 may be interconnected via an optional interconnect 230, such as a firewall (FW).
The operational assets 210 may use various communication interfaces and/or protocols, and appropriate security controls may be deployed for the operational asset(s) or planned for deployment.
For example, the operational assets 210 may include at least one of:
• hardware and/or software components of the Information Technology system
200,
• configurations, workflows, communication interfaces, and/or associated databases of the hardware and/or software components, and
• organization of the assets 210 in one or more security domains 220 according to the system topology.
In a particular example, the operational assets 210 include at least one digital asset, and the one or more security control actions typically include at least one executable security control operating on the at least one digital asset and/or an associated security domain.
FIG. 5 is a schematic diagram illustrating another example of a monitored Information Technology system. In the example of FIG. 5, another asset topology is illustrated, with assets 210 being organized in different security domains that may be interconnected to one another. The assets 210 within a security domain 220 may have various interdependencies.
For example, the security automation system may be configured to determine, for each of at least a subset of the operational assets 210 and/or for each of a number of security domains 220, a risk level also at least partly based on dependency between assets 210.

In a particular example, the security automation system is configured to determine, for each of at least a subset of the operational assets 210 and/or for each of a number of security domains 220, a risk level also at least partly based on asset placement in the system topology, the number and nature of security risks and/or threats, communication interface(s) and/or protocol(s) used by the operational asset(s) and/or deployed security controls for the operational asset(s).
Optionally, the security automation system may be configured to determine, for each of at least a subset of the operational assets 210 and/or for each of a number of the security domains 220, a trust indication representing trustworthiness of the operational asset and/or domain also at least partly based on asset information representative of the system topology, asset configuration and/or dependency between assets.
As an example, the security automation system may be configured to determine one or more security control actions based on Machine Learning (ML).
For example, the security automation system may be configured to perform ML-based risk treatment analysis on how to handle security risks when the determined risk levels exceed a given threshold and determine one or more of the security control actions related to at least a subset of the operational assets 210 and/or domains 220 based on the ML-based risk treatment analysis.
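Purely as an illustration of such a risk treatment analysis, the following sketch selects a treatment for risks above a given threshold by a nearest-neighbour lookup over historical analyst decisions; a real implementation might use a trained ML model, and all names, values and treatments here are hypothetical:

```python
# Hypothetical illustration of ML-based risk treatment: risks at or below the
# threshold are accepted, risks above it are matched to the closest historical
# case for which a treatment is already known.

HISTORY = [
    # (risk_level, criticality, treatment chosen in a past, similar case)
    (0.9, 0.8, "isolate_asset"),
    (0.6, 0.4, "apply_patch"),
    (0.4, 0.2, "tighten_policy"),
]

def select_treatment(risk_level, criticality, threshold=0.5):
    """Return a treatment for risks above the threshold, else None."""
    if risk_level <= threshold:
        return None  # risk acceptable, no action needed
    # pick the historical case closest to the current (risk, criticality) point
    best = min(HISTORY,
               key=lambda h: (h[0] - risk_level) ** 2 + (h[1] - criticality) ** 2)
    return best[2]
```

In this toy model, a high-risk, high-criticality asset is matched to the most drastic past treatment, while acceptable risks trigger no action at all.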
Optionally, the security automation system may be configured to detect one or more security risks, system vulnerabilities and/or security threats based on Machine Learning (ML).
In a particular example, the one or more security control actions include updating a security policy, adjusting a security configuration, removing an existing security function and/or inserting a new security function.

By way of example, the security automation system may be configured to adaptively determine trust indication(s), risk level(s) and security control action(s) in an automated manner.
Preferably, the security automation system is configured as an adaptive system. In this way, the security automation system may be configured, e.g., to re-determine at least one risk level after deployment of the determined security control action(s).
It should be understood that the Information Technology system may be any system for generating, processing, storing and/or transferring information.
In a particular, non-limiting example, the security automation system may be configured for security management of the Information Technology system 200 of a communication system or network 300.
FIG. 6 is a schematic diagram illustrating an example of a communication system/network having Information Technology components. The overall communication system/network 300 may include wireless communication systems such as radio access networks 310, wired communication systems such as server/data networks 320 as well as interconnecting networks 330, and may include system components such as radio base stations, access points, radio controllers, network management units, communication gateways, routers, servers and the like, each of which may have Information Technology components for generating, processing, storing and/or transferring information.
For example, the Information Technology system may be an integrated part of the communication system or network 300 and the interacting system components 205 of the Information Technology system 200 may involve network nodes and/or elements of the communication system or network 300.
With reference once again to FIG. 2 and FIG. 3, it is clear that the security automation system 100 may include a trust engine 110, a risk engine 120 and a security adaptation engine 130 that are operatively interconnected. For example, the trust engine 110 may be configured for trust determination or valuation, and the risk engine 120 may be configured for risk evaluation or assessment.
In a sense, the security adaptation engine 130 may be seen as an orchestration layer for effectively interworking with the other system components. By way of example, the security adaptation engine may be configured to communicate, via workflows to at least a subset of security automation system sub-components, risk information and events at least partly based on the risk assessment performed by the risk engine.
In a particular example, the trust engine 110 may be configured to perform trust valuation of at least a subset of the operational assets and/or domains at least partly based on security configuration information representative of asset and/or domain security configuration. The risk engine 120 may be configured to perform risk assessment for each of at least a subset of the operational assets and/or for each of a number of the security domains at least partly based on the trust evaluation performed by the trust engine, security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets, and asset information representative of the system topology, asset configuration and/or dependency between assets. The security adaptation engine 130 may be configured to determine one or more security control actions related to at least a subset of the operational assets and/or domains at least partly based on the risk assessment performed by the risk engine.
By way of example, the trust engine 110 may be configured to determine, for each of at least a subset of the operational assets and/or domains, the trust indication representing trustworthiness of the operational asset and/or domain. The risk engine 120 may be configured to determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, the risk level of the operational asset and/or security domain. The security adaptation engine 130 may be configured to determine one or more security control actions related to at least a subset of the operational assets and/or domains. In essence, this means that the security control action(s) may be performed per asset and/or per domain. As can be seen in FIG. 3, the security automation system 100 may include a security configuration engine 150 for performing a security configuration according to the determined security control action(s).
Optionally, the security adaptation engine 130 and the security configuration engine 150 may be integrated, at least partly.
Optionally, the security configuration engine 150 may be configured to set initial security profiles of the assets with the help of executable security controls, and to configure new dynamic or static policies, or decommission existing ones, automatically for permanent and/or temporary protection based on the instructions of the risk engine 120 via workflows of the security adaptation engine 130.
For example, database 140 may be a shared trust and risk database accessible by at least the trust engine 110 and the risk engine 120.
As an example, the security adaptation engine 130 may be based on a rule engine that commands or runs digital security control workflows to make decisions automatically based on predefined rules.
Preferably, the security adaptation engine 130 may be configured to learn from previous security control actions and use Machine Learning (ML) to adapt rules and/or create new rules.
The risk and trust awareness dashboard 105 may be regarded as a visualization and/or notification platform for enabling situational awareness of the security posture of the Information Technology system, e.g. for presenting suitable trust, risk or other security information and/or for notification to human security analysts. It can be seen as a unified display of information from multiple sources, presenting the end-to-end risk and trust status of the managed context. It gives the security operators immediate visibility into the main potential issues and risks in the network that require constant monitoring and immediate action, thereby helping to target mitigation actions to the areas at most risk at any given time.

For example, the security analytics engine 160 may be configured to detect security events/threats, and the threat intelligence engine 170 may be configured to provide security information from external sources. The asset management engine 180, or asset manager, is configured to manage and/or provide information on assets and system topology, including aspects such as asset configurations, interfaces and/or interdependencies.
FIG. 7 is a schematic flow diagram illustrating an example of a method for security management of an Information Technology system according to an embodiment.
The method comprises the following basic steps:
S1 : obtaining: i) security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets of the system components, ii) asset information representative of the system topology, asset configuration and/or dependency between assets, and/or iii) security configuration information representative of asset and/or domain security configuration;
S2: determining, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a trust indication representing trustworthiness of the operational asset and/or domain at least partly based on the security configuration information representative of asset and/or domain security configuration;
S3: determining, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a risk level at least partly based on the security information, the asset information and at least a subset of the determined trust indications; and
S4: determining one or more security control actions related to at least a subset of the operational assets and/or domains based on at least a subset of the determined risk levels.

For a better understanding, the proposed technology will now be described with reference to one or more non-limiting examples.
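By way of illustration only, the flow of steps S1-S4 above can be sketched in simplified form as follows; all function names, inputs and scoring rules are hypothetical placeholders and not part of the claimed method:

```python
# Illustrative sketch of the S1-S4 flow, assuming simple dict-based inputs
# (S1 input gathering is assumed already done by the caller).

def trust_indication(asset, security_config):
    # S2: toy rule -- more configured security controls means higher trust
    controls = security_config.get(asset, [])
    return min(1.0, 0.2 + 0.2 * len(controls))

def risk_level(asset, threats, trust):
    # S3: toy rule -- more threats and less trust means higher risk
    return min(1.0, 0.1 * len(threats.get(asset, [])) + (1.0 - trust))

def security_actions(assets, threats, security_config, threshold=0.5):
    # S4: decide a control action per asset whose risk exceeds the threshold
    actions = {}
    for asset in assets:
        trust = trust_indication(asset, security_config)
        risk = risk_level(asset, threats, trust)
        if risk > threshold:
            actions[asset] = "mitigate"
    return actions
```

Here, a well-configured asset with no known threats yields no action, while a poorly configured asset with open threats is flagged for mitigation.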
FIG. 8 is a schematic diagram illustrating an example of security automation system in connection with a managed environment. The managed environment comprises assets organized in security domains.
FIG. 9 is a schematic diagram illustrating an example of a security automation system according to a specific embodiment. In this example, the security automation system or apparatus 100 comprises a risk and trust awareness dashboard 105, a trust engine 110, a risk engine 120, an adaptation engine 130, a risk and trust index database 140, a security configuration engine 150, a security analytics engine 160, a threat intelligence engine 170 and an asset management engine 180.
By way of example, the trust engine 110 and risk engine 120 may be connected to a risk and trust awareness dashboard 105, as will be explained later on.
As an example, the trust engine 110, which may also be referred to as a trust evaluation platform, may include an asset trust index calculator and an asset trust index assigner.
For example, the risk engine 120, which may also be referred to as a risk evaluation and/or treatment platform, may include a risk index calculator and a risk treatment handler.
The adaptation engine 130 may for example include an event/action analyzing module, and modules for workflow actions such as interworking with the risk and trust engines and interworking with sub-components.
As mentioned, instead of deploying singular security controls to detect local deficiencies, the security automation system may interconnect different security functions. Threat intelligence from external sources may be combined with system-internal security monitoring information to compose so-called asset trust indexes. Trust indexes may be used for risk evaluation, which makes it possible to assess security status comprehensively across the entire system.
For example, based on risk evaluation and treatment decisions reflecting applicable security policies, the security automation system may provide fully automated or security-analyst-assisted workflows to mitigate security risks for any given context. The security automation system may provide end-to-end visibility to the trust and risk status of the system and make automated risk qualifications in near real-time, e.g. with the aid of machine learning technologies, to adjust security controls, insert security functions and/or provide information to external systems.
The proposed technology may offer one or more of the following advantages:
• Ability to combine individual pieces of information into risks that imply risk-impacting changes to the managed context and need to be followed, monitored and/or mitigated;
• Ability to adjust the security posture of the telco network and the effectiveness of risk response measures by updating security policies according to changing risks;
• Ability to utilize a historical threat and vulnerability database to evaluate and assess new security events faster in the future;
• Ability to combine individual security tools into a “community” sharing common rules and machine learning algorithms to provide a unified security management system capable of responding on the same day, whereas the median time to respond to a security event is usually several months according to market studies;
• Ability to trigger automated and security-analyst-assisted workflows to facilitate faster response to security events. As of today, the median time to respond to a security event is 101 days according to market studies;
• Provides continuous Risk and Trust status, so that changes in the threat and vulnerability landscape, relevant to the managed context, will be immediately reflected. The status highlights effectiveness of risk response measures, and overall trustworthiness of the context at any given time;
• Provides fast Risk Treatment based on the real-time Risk Analysis results;
• Provides the ability to remove security controls that are no longer required; and

• Trust evaluation and trust index calculation is connected to risk management, risk evaluation and risk status.
In the following, non-limiting examples will be given to provide a better feeling and understanding of certain aspects of the proposed technology. For improved readability, reference can once again be made to FIG. 3 and FIG. 9.
Example components of the Security Automation System
Adaptation Engine
The Adaptation Engine 130 is a rule engine that runs digital “workflows” to make decisions automatically based on predefined rules, without manual intervention. It determines and facilitates the flow of information, tasks and events, and how these actions flow from one Adaptive Security Automation component to another. It learns from the actions performed and uses ML algorithms to create new rules and to trigger actions towards other components.
However, the criteria of what to automate and to what extent depends on the required actions, and in some cases human actions may be integrated into the flow as well.
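A minimal sketch of such a rule engine follows, assuming events are represented as simple dictionaries; the rule conditions and the actions they trigger are hypothetical placeholders, not the actual workflows of the system:

```python
# Illustrative rule engine: each rule is a (condition, action) pair, and an
# incoming event triggers every workflow action whose condition matches it.

class AdaptationEngine:
    def __init__(self):
        self.rules = []  # list of (condition, action) pairs

    def add_rule(self, condition, action):
        self.rules.append((condition, action))

    def handle(self, event):
        # run every workflow whose condition matches the incoming event
        return [action(event) for condition, action in self.rules
                if condition(event)]

engine = AdaptationEngine()
engine.add_rule(lambda e: e["risk"] > 0.7,
                lambda e: f"isolate {e['asset']}")
engine.add_rule(lambda e: e["risk"] > 0.9,
                lambda e: f"notify analyst about {e['asset']}")
```

An ML component could extend this by appending newly learned (condition, action) pairs to the same rule list, which matches the engine's stated ability to create new rules from observed actions.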
Risk Engine
The Risk engine 120 calculates the risk level for the assets under monitoring and carries out a risk treatment analysis of how to treat unacceptable risks.
The Risk Engine includes three main components:
• Risk and Trust Index Database (shared Database with Trust Engine)
• Risk Index Calculator and
• Risk Treatment Handler
Risk and Trust Index Database 140 may include e.g. the following information:
• Asset identity and estimated asset value • Time of entry of the Asset to Risk and Trust Index Database
• Asset contextual information
o Placement in security domain topology
o Asset dependencies on other assets
o Communication interfaces within security domain
o Communication interfaces across security domains
o Number of unpatched known vulnerabilities
o The highest CVE score of known vulnerabilities
o The average score of known vulnerabilities
o The time of latest configuration change
• Threat IDs and their severity Indexes
• Vulnerability IDs and their Criticality indexes
• Control IDs and their Mitigation Indexes
• Unacceptable risk level o Asset level index o Security domain level index
• Index-decay multiplier
• Uncertainty factor
• Latest Trust index function
• Latest result of Trust index
• Latest asset dependency change list
At the initialization phase the Risk and Trust Index Database may be initialized with information collected from the various sources. All known Threats and Vulnerabilities are collected, downloaded and indexed from external sources. System specific asset information (asset & asset contextual information) is inserted from the asset inventory. The security controls are downloaded from the Policy automation databases and mitigation indexes are allocated.
The Risk and Trust Index Database makes regular queries to external sources to update its content regarding new Threats, Vulnerabilities, Assets and Controls.
Risk Index Calculator
The Risk Index Calculator calculates the initial asset-specific risk levels based on formula 1) below, instead of the generic risk calculation formula (Risk = Impact x Probability), which does not evaluate interdependencies and does not consider trust when calculating the Probability.
Asset contextual information contains information about the asset placement in security domain topology and asset dependencies on other assets. The Asset dependency factor is a variable that is used to account for an increased risk due to high-risk neighboring asset(s). The Asset dependency factor impacts the risk level as a function of high-risk neighboring assets; that is, more high-risk neighboring assets, or even one very high-risk neighboring asset, increases the dependency factor value.
1) Asset Initial Risk Level (AIRL) = [(Impact x Asset Dependency Factor) x (Probability x 1/Trust Index)] where:
AIRL = max 25
Impact = Asset Value Index x Threat Severity Index. Impact value e.g. within range [1...5].
Probability = Vulnerability Criticality Index x 1/Control Mitigation Index. Probability value e.g. within range [1...5].
Trust Index = Value received from the Asset Trust Index calculation function: Trust Index value within range [0.5...1.5]
Asset dependency factor = Factor which impacts the risk level as a function of high-risk neighboring assets. The factor value is within range [0.5...1.5].
For example, it is considered a novelty that the Trust Index is considered when calculating the Risk probability. It is also considered a novelty that dependencies on other assets are considered when calculating the Risk Impact.
It is further considered a novelty that the Risk calculation may be applied on domain level.
The security domain initial risk level is an average of the risk levels of all assets within the security domain.
2) Security Domain Initial Risk Level = (AIRL1 +...+ AIRLn) / n where:
AIRL = Asset Initial Risk Level calculated by the formula 1.
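As a non-limiting illustration, formulas 1) and 2) can be sketched in Python as follows. All function and parameter names are assumptions chosen for readability; they do not appear in the source formulas.

```python
def asset_initial_risk_level(asset_value_index, threat_severity_index,
                             vulnerability_criticality_index,
                             control_mitigation_index,
                             trust_index, asset_dependency_factor):
    """Formula 1: AIRL = [(Impact x ADF) x (Probability x 1/Trust Index)],
    capped at the maximum value of 25."""
    impact = asset_value_index * threat_severity_index
    probability = vulnerability_criticality_index / control_mitigation_index
    airl = (impact * asset_dependency_factor) * (probability / trust_index)
    return min(airl, 25.0)


def security_domain_initial_risk_level(asset_risk_levels):
    """Formula 2: the average of the AIRLs of all assets in the domain."""
    return sum(asset_risk_levels) / len(asset_risk_levels)
```

For example, an asset with Asset Value Index 2.5, Threat Severity Index 3, Vulnerability Criticality Index 3, Control Mitigation Index 1, Trust Index 1.3 and Asset Dependency Factor 0.6 yields an AIRL of approximately 10.4.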
After the initialization phase, each time when a new event is received from the Adaptation Engine it triggers a new Risk level calculation in Risk Engine. An event can be for instance a new or changed threat or vulnerability, a new added asset or control, or an old removed or changed asset or control.
If a threat, vulnerability, asset or control in the event is known by the Risk Index database, the Risk Level can be calculated using the same formula as in the initialization phase, except that the Index-decay multiplier is also considered. The Index-decay multiplier can be a dedicated multiplier for a threat, vulnerability, asset or control, or there can be one common multiplier for all of them. The Index-decay multiplier normally weakens the index value of the threat, vulnerability, asset or control as a function of time. The decay multiplier can be e.g. X % per time period (a week, a month), increasing or decreasing according to machine learning algorithms. After a considerable change in a threat, vulnerability, asset or control, the time period can be restarted. The Index-Decay Multiplier value is within range [0.5...1.5]. The risk level calculation formula for the known threat, vulnerability, asset or control is:
3) Asset Risk Level Known (ARLK) = [(Impact x Asset Dependency Factor) x (Probability x 1/Trust Index)] x 1/Index-Decay Multiplier where:
ARLK = max 25
Impact = Asset Value Index x Threat Severity Index. Impact value e.g. within range [1...5].
Probability = Vulnerability Criticality Index x 1/Control Mitigation Index. Probability value e.g. within range [1...5].
Trust Index = Value received from the Asset Trust Index calculation function: Trust Index value within range [0.5...1.5].
Asset dependency factor = Factor which impacts the risk level as a function of high-risk neighboring assets. The factor value is within range [0.5...1.5].
Index-Decay Multiplier = Multiplier which affects a threat, vulnerability, asset or control as a function of time. Index-Decay Multiplier value within range [0.5...1.5].
For example, it is considered a novelty that the Index-Decay Multiplier brings time aspects into the Risk level calculation.
If one common Index-Decay Multiplier is not used, then dedicated multipliers for a threat, vulnerability, asset or control can be used in the following way:
Impact = (Asset Value Index x Index-decay multiplier) x (Threat Severity Index x Index-decay multiplier) x Asset Dependency Factor. Probability = (Vulnerability Criticality Index x Index-decay multiplier) x (1/Control Mitigation Index x Index-decay multiplier) x 1/Trust Index.
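A minimal sketch of formula 3), together with one possible interpretation of the Index-Decay Multiplier as an X % decay per elapsed time period, might look as follows (the names and the decay interpretation are assumptions, not definitions from the source):

```python
def index_decay_multiplier(decay_rate_percent, periods_elapsed):
    """Illustrative Index-Decay Multiplier: starts at 1.0 and weakens the
    index by X % per elapsed time period (e.g. per week or month), clamped
    to the stated range [0.5...1.5]. The rate itself may be tuned up or down
    by machine learning algorithms."""
    value = (1.0 - decay_rate_percent / 100.0) ** periods_elapsed
    return min(1.5, max(0.5, value))


def asset_risk_level_known(impact, asset_dependency_factor, probability,
                           trust_index, decay_multiplier):
    """Formula 3: ARLK = [(Impact x ADF) x (Probability x 1/Trust Index)]
    x 1/Index-Decay Multiplier, capped at 25."""
    arlk = ((impact * asset_dependency_factor)
            * (probability / trust_index)
            / decay_multiplier)
    return min(arlk, 25.0)
```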
It is considered a novelty that the Risk level calculation may be applied on Domain level.
The security domain level risk level is an average of the risk levels of all assets within the security domain. The security domain risk level calculation formula for the known threat, vulnerability, asset or control is:
4) Security Domain Risk Level Known = (ARLK1 + ... + ARLKn) / n where:
ARLK = Asset Risk Level for Known threat, vulnerability, asset or control calculated by the formula 3.
If an event contains a new type of threat or vulnerability, it is estimated in the risk calculation using machine learning based capabilities and with the uncertainty factor. The uncertainty factor is a parameter that is used to compensate for the difficulty of estimating the level of insecurity in the risk calculation caused by a previously unassessed threat or vulnerability. The uncertainty factor increases the risk level as a function of the new unknown; that is, the greater the lack of knowledge of the threat or vulnerability, the higher the index value is.
The formula for the unknown threat or vulnerability risk level calculation is:
5) Asset Risk Level Unknown (ARLU) = [(Impact x Asset Dependency Factor) x (Probability x 1/Trust Index)] x 1/Index-Decay Multiplier x Uncertainty Factor where:
ARLU = max 25.
Impact = Asset Value Index x Threat Severity Index. Impact value e.g. within range [1...5].
Probability = Vulnerability Criticality Index x 1/Control Mitigation Index. Probability value e.g. within range [1...5].
Trust Index = Value received from the Asset Trust Index calculation function: Trust Index value within range [0.5...1.5].
Asset dependency factor = Factor which impacts the risk level as a function of high-risk neighboring assets. The factor value is within range [0.5...1.5].
Index-Decay Multiplier = Multiplier which affects a threat, vulnerability, asset or control as a function of time. Index-Decay Multiplier value within range [0.5...1.5].
The uncertainty factor = Factor which increases the risk level as a function of the level of unknown. The factor value e.g. within range [1...2].
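Formula 5) extends formula 3) with the Uncertainty Factor; a corresponding sketch, under the same naming assumptions as above, is:

```python
def asset_risk_level_unknown(impact, asset_dependency_factor, probability,
                             trust_index, decay_multiplier,
                             uncertainty_factor):
    """Formula 5: as formula 3, but multiplied by the Uncertainty Factor
    (e.g. within [1...2]) to inflate the risk of a previously unassessed
    threat or vulnerability; capped at 25."""
    arlu = ((impact * asset_dependency_factor)
            * (probability / trust_index)
            / decay_multiplier
            * uncertainty_factor)
    return min(arlu, 25.0)
```

With an uncertainty factor of 1 the result coincides with formula 3; larger factors raise the risk level accordingly.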
For example, it is considered a novelty that the Uncertainty factor is applied in the Risk level calculation.
It is considered a novelty that the Risk Level uncertainty may be applied on Domain level.
The formula for the unknown threat or vulnerability risk level calculation for a security domain is:
6) Security Domain Risk Level Unknown = (ARLU1 + ... + ARLUn) / n where: ARLU = Asset Risk Level for Unknown threat or vulnerability calculated by the formula 5.
If the Risk level for an asset is higher than the defined unacceptable risk level, the Risk Treatment Handler is queried to carry out a proper risk treatment for the risk. The unacceptable risk level index is not static, but a changing value based on the machine learning algorithm, which continuously takes into account the changing security posture of the system.
The Risk and Trust Index Database is updated after each risk level calculation to reflect the new Security Risk posture of the monitored system.
The risk and trust visibility status is also updated in real time, so that changes are immediately reflected in the different applications to show the new risk status of the system.
FIG. 10 is a schematic flow diagram illustrating an example operation of a risk engine according to an embodiment.
Risk Treatment Handler
After the calculated or estimated risks are compared against the risk acceptance criteria, the Risk Treatment Handler carries out a machine learning based risk treatment analysis on how to treat each unacceptable risk.
The well-known risk treatment options are risk modification, risk avoidance and risk sharing. The two latter usually require human interaction, deeper treatment analysis and often also a higher management decision; thus they are handled using a separate risk treatment workflow. The Risk Treatment Handler performs the risk treatment analysis for risks which require risk modification decisions. It selects the proper controls to provide additional protection for the monitored system. It forwards the protection instructions to the Security Configuration Engine via the Adaptation Engine to introduce, remove or alter dynamic or static security controls automatically for permanent or temporary protection.
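The routing logic described above can be sketched as a simple decision function. The return values and the option names are hypothetical labels for illustration only:

```python
def route_risk_treatment(risk_level, unacceptable_risk_level,
                         treatment_option="modification"):
    """Hypothetical routing sketch: risk modification is analyzed
    automatically and results in control-selection instructions for the
    Security Configuration Engine; risk avoidance and risk sharing are
    diverted to a separate, human-assisted risk treatment workflow."""
    if risk_level <= unacceptable_risk_level:
        return "accept"           # risk within tolerance, no treatment needed
    if treatment_option == "modification":
        return "select-controls"  # forwarded via the Adaptation Engine
    return "separate-workflow"    # avoidance / sharing need human decisions
```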
Trust Engine
The Trust Engine 110 calculates, stores and reports Asset-specific Trust Indexes.
The Asset Trust Index gives a snapshot of the policy compliance status based on the requirements of different characteristics, e.g. security, privacy, resilience, reliability and safety, and the respective controls deployed for the Asset. The Asset Trust Index is used as input information for the Risk and Trust Awareness Dashboard, impacting the Risk probability value, which is either increased through e.g. new vulnerabilities and threats or decreased due to newly implemented controls.
Trust Engine includes three main components:
• Asset Trust Index Calculator
• Asset Trust Index Assigner
• Asset Trust Index DB.
Asset Trust Index Assigner takes as input Asset related policy, configuration and contextual information. If a new asset is introduced, or there are changes to an existing asset, Asset Trust Index Assigner requests Asset Trust Index Calculator to define an Asset Trust Index for the asset and then updates the Risk and Trust Index Database accordingly. If an existing Asset is to be removed, the Asset Trust Index Assigner erases the relevant Risk and Trust Index Database information. In both cases, notifications of the changes are issued to the Adaptation Engine and the Risk and Trust Awareness Dashboard.
FIG. 11 is a schematic flow diagram illustrating an example operation of an asset trust index assigner according to an embodiment. The Asset Trust Index Calculator first defines an appropriate Asset Trust Index calculation uncertainty function for the Asset, based on contextual information of the Asset, e.g. using machine learning. Then the Asset Trust Index Calculator applies Asset policy and configuration information to the Asset Trust Index calculation function and returns an Asset Trust Index value within range [0.5...1.5] to the requesting entity. This approach ensures that if there is uncertainty about asset trustworthiness, the Asset Trust Index can be lower than 1 even if policies are complied with.
Trusted Asset (Compliance) = Weighted compliance level to Security Policies relevant for the asset.
The Asset Trustworthiness Uncertainty function takes parameters including, but not exclusively, the following:
o Number of unpatched known vulnerabilities (reflects known attack surface)
o The highest CVE score of known vulnerabilities (reflects highest severity of known attack surface)
o The average score of known vulnerabilities (reflects the average severity of known attack surface)
o The time of latest patching (reflects attack time window since latest patching)
o The time of latest configuration change (reflects attack window for the current configuration)
o Attestation capability (reflects external judgement)
o Asset complexity (reflects statistical likelihood of unknown weaknesses)
   Number of external interfaces
   Code complexity index
o Asset Lifecycle Status (reflects the security lifecycle induced asset trustworthiness)
   Time from first release
   Time from end of maintenance
   Rate of functional feature changes
   Rate of patching
The formula for the Asset Trust Index calculation is:
7) ATI = 0.5 + (AssetCompliance * AssetTrustworthinessUncertainty) where:
AssetCompliance (AC) is the metric of how well Asset fulfils the relevant security controls defined in the Asset specific security policy, see Equation 8), and
AssetTrustworthinessUncertainty (ATU) is the metric of the uncertainty in accuracy of knowledge of Asset’s security attributes, see Equation 11).
The range of the function output may be normalized to [0.5...1.5] by using the constant 0.5 in Equation 7).
For example, it is considered a novelty that the Uncertainty factor is used when calculating Asset trustworthiness.
The formula for the Asset Compliance calculation is:
8) AC = (ActualControls/RelevantControls) where:
ActualControls (ACCON) are the implemented security controls according to the defined security policy for the Asset, see Equation 9), and
RelevantControls (RELCON) is the set of security controls defined for the Asset, see Equation 10). The range of the function output is [0...1]. The formula for the ActualControls calculation is:
9) ACCON = ActualControls = AVE({ActualControl[1..n] * ACCONWeight[1..n]}) where:
AVE denotes averaging the weighted set,
ActualControl[1..n] is the set of implemented security controls according to the defined security policy for the Asset, and
ACCONWeight[1..n] is the set of weight factors of the ActualControl[1..n]. The range of the function output is [0.01...NumberOfActualControls].
The formula for the RelevantControls calculation is:
10) RELCON = AVE({RelevantControl[1..n] * RELCONWeight[1..n]}) where:
AVE denotes averaging the weighted set,
RelevantControl[1..n] is the set of security controls according to the defined security policy for the Asset, and
RELCONWeight[1..n] is the set of weight factors of the RelevantControl[1..n]. The range of the function output is [0.01...NumberOfRelevantControls].
For example, it is considered a novelty that Uncertainty attributes are used to define asset-specific uncertainties of asset trustworthiness.
The formula for the Asset Trustworthiness Uncertainty (ATU) calculation is:
11) ATU = VariableFunction(UncertaintyParameter[1..n]) where:
VariableFunction is initially the average of the uncertainty attribute values and, in further iterations, a weighted average of the uncertainty attributes, where the weights are defined based on Asset history. The range of the function output is [0...1].
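Equations 7) to 11) can be collected into a small set of Python functions. The function names are assumptions; the arithmetic follows the equations above:

```python
def ave_weighted(values, weights):
    """AVE in Equations 9) and 10): average of the element-wise weighted set."""
    return sum(v * w for v, w in zip(values, weights)) / len(values)


def asset_compliance(actual_controls, actual_weights,
                     relevant_controls, relevant_weights):
    """Equation 8: AC = ACCON / RELCON, with output in [0...1]."""
    accon = ave_weighted(actual_controls, actual_weights)       # Equation 9
    relcon = ave_weighted(relevant_controls, relevant_weights)  # Equation 10
    return accon / relcon


def asset_trustworthiness_uncertainty(uncertainty_parameters, weights=None):
    """Equation 11: initially a plain average of the uncertainty attributes;
    later iterations may weight them based on Asset history."""
    if weights is None:
        weights = [1.0] * len(uncertainty_parameters)
    return (sum(p * w for p, w in zip(uncertainty_parameters, weights))
            / sum(weights))


def asset_trust_index(ac, atu):
    """Equation 7: ATI = 0.5 + AC * ATU, normalized to [0.5...1.5]."""
    return 0.5 + ac * atu
```

With the A1 example values used later in this description (AC = 0.75, ATU = 0.66), asset_trust_index returns 0.995.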
For example, it is considered a novelty that instead of utilizing one common calculation function to calculate Trustworthiness, a Variable function depending on the asset's context is used.
Asset contextual information is received from the Adaptation Engine and is composed of Asset-specific information e.g. historical security events and related adaptations, Asset placement in system topology, interfaces and protocols deployed by the Asset, and other characteristics that are relevant to the Asset’s security posture.
The Asset Trust Index Calculator uses Asset policy and configuration information and contextual information, e.g. the Asset's topological localization and the active interfaces and protocols used to communicate with other components.
FIG. 12 is a schematic flow diagram illustrating an example operation of an asset trust index calculator according to an embodiment.
The Risk and Trust Index Database stores Asset Trust information of all assets handled by Asset Trust Index Engine.
Asset Trust Index DB stores Asset Trust information of all assets handled by Trust Engine.
The information may include but is not restricted to:
- Asset identity
- Time of entry of the Asset to Risk and Trust Index Database
- Asset contextual information
o Asset placement in security domain topology
o Asset dependencies on other assets
o Communication interfaces within security domain
o Communication interfaces across security domains
- Asset TrustLeap factors
o Number of unpatched known vulnerabilities
o The highest CVE score of known vulnerabilities
o The average score of known vulnerabilities
o The time of latest patching
o The time of latest configuration change
o Attestation capability (reflects external judgement)
o Asset complexity (reflects statistical likelihood of unknown weaknesses)
o Asset Lifecycle Status
   Maturity
   Rate of functional feature changes
   Rate of patching
- Latest Trust index function
- Latest result of Trust index
- Latest asset dependency list.
Reference may once again be made to FIG. 3 and/or FIG. 9 for the description below.
Risk and Trust Awareness Dashboard
The Risk and Trust Awareness Dashboard 105 provides a single pane of glass, as a unified display of information from multiple sources, for the end-to-end risk and trust status of the managed context. It gives the security operators immediate visibility into the main potential issues and risks in the network requiring constant monitoring and immediate actions, hence helping to target the mitigation actions to the areas at most risk at any given time. The dashboard can be seen as an intuitive graphical user interface.
Security Policy Based Security Configuration Engine
The Security Configuration Engine 150 configures the initial security profiles of the assets with the help of the executable security controls.
It also configures new dynamic or static policies (or decommission existing ones) automatically for permanent or temporary protection based on the instructions of the risk engine via workflows of the security adaptation engine.
After enforcing a new policy configuration, an acknowledgement is sent to the risk and trust engines via the adaptation engine to re-run the risk calculation and confirm the latest risk and trust level of the system.
Security Analytics
The Security analytics engine 160 is a supporting component, which uses rule and machine-learning based analytics for detecting known and unknown threats across different network domains. It provides constant visibility to the risk landscape and helps to detect the riskiest areas where actions are needed to reduce the attack surface. The functionality is fully configurable, enabling definition of new analytics correlation rules and behavioral algorithms as the threat landscape evolves. Security analytics triggers a new risk calculation when a new security event or anomaly is detected.
Threat Intelligence
The Threat Intelligence engine 170 is a supporting component, which collects and shares threat and vulnerability information from the internal and external sources.
Threat information in the form of IOCs (Indicators of Compromise) is received from cybersecurity communities, or it is collected using open source specifications that are widely supported by common SIEMs or other security control systems.
The found threat or vulnerability information is transmitted to the policy and trust engines for risk and trust evaluation. Based on the risk evaluation results, new security controls can be requested to be added by the Security Configuration Engine.
Asset Management
The Asset Management engine 180 is a sub-component, which keeps track of assets.
Assets are context-dependent; e.g. nodes and elements in the network, configurations, databases, connections between assets, hardware and software subcomponents, workflows, policies, data (at rest, in transit, in use - which have different risk levels), human specialists etc. An asset always has an identity. Within the context, assets are organized in domains. Domain topology is itself an asset.
Asset Management provides asset related information to other components of the system/apparatus.
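As an illustrative sketch, an asset record with the properties described above (identity, domain membership, dependencies on other assets) could be modeled as follows; the field names are assumptions:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Asset:
    """Minimal sketch of an asset record as tracked by Asset Management."""
    identity: str                  # an asset always has an identity
    domain: str                    # within the context, assets are organized in domains
    asset_value_index: float       # estimated asset value used in risk calculation
    dependencies: List[str] = field(default_factory=list)  # identities of assets this asset depends on
```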
Key functionality in the Risk Engine may include one or more of:
• Risk evaluation is adaptive, highly automated, and near real-time, e.g. using automatically updated Asset Trust Indexes in Risk probability calculation.
• Risk Probability depends on several factors, e.g. asset placement in system topology, number and nature of known threats, interfaces and protocols the asset deploys, and mitigating controls, as specified in the risk level calculation formulas 1-6.
• Threat Intelligence related to development toolkit, e.g. source code language and compiler having a history of high number of vulnerabilities over time, can be taken into account, enabling evaluation of the probability of an Asset being subject to unknown threats.
• The Trust index is connected to the Risk status, where the risk status takes the trust index into account for risk evaluation.
• Risks are calculated not only for individual Assets, but also for Domains, taking into account the trust relations between the Assets based on the risk level calculation formulas 2, 4 and 6.
Key functionality in the Adaptation Engine may include one or more of:
• The whole system security posture may be continuously modified based on changes in threat surface and associated risks, taking input from external components and processes that in traditional systems act individually or with limited interaction. Mitigation actions can be enforced automatically or be subjected to human interaction. No unnecessary security controls remain in use after the respective risks have ceased to exist.
• Gathering a history of events and subsequent adaptations provides further insight to which assets are more vulnerable and which have been less impacted by threats. That information is used to adjust weighting of threats in later adaptation decisions.
Key functionality in machine learning may involve using historical data with adjustable time decay factors to ensure that also slowly developing changes in threat environment are detected.
Key functionality in the Trust Engine may include one or more of:
• The Asset Trust Index calculation method utilizing Asset-specific contextual information to determine the calculation function of the Asset Trust Index and connecting that to the risk evaluation, see Equations 7 to 11 in chapter 4.
• Whenever Asset’s trustworthiness may have changed, the Asset specific Trust Index is recalculated and made available for updating the Asset’s Risk.
FIG. 13 is a schematic flow diagram illustrating an example of possible initialization of a security automation system according to an embodiment. The idea of the initialization is to collect input from internal and/or external sources and to define initial configuration tasks.
In a particular example, the following non-limiting initialization process may be performed:
1. Context-specific asset and network topology information is downloaded from Asset Management.
2. Policy automation is initialized by security standard frameworks such as NIST SP800-53r4, ISO 27001:2013, ISO 27002:2013, ISO 27552, NIST Cybersecurity Framework 1.1, CIS Benchmarks and EU GDPR. Policy sets based on the pre-defined policy families, policies, and controls are set according to the customer security policies. The defined policy sets are bound with the assets.
3. Context specific threat and vulnerability information is collected from the external sources.
4. Initial Threat and Machine Learning rules are defined taking into account the asset and network topology information.
5. Pre-defined, initial “workflows” are defined based on the underlying network context.
6. Initial trust index is calculated based on asset related policy, network configuration and contextual information.
7. The initial risk level is calculated using risk calculation formula 1) based on context-specific asset information, defined controls, known threats and vulnerabilities, and network topology.
8. Initial risk and trust status are illustrated in the risk and trust awareness dashboard.
Example use cases
Threat Intelligence/Security
An example of a system/network topology for a use case is illustrated in FIG. 14. The illustrated system/network topology separates and groups different types of assets, e.g. network elements, into three different security domains according to their different functionality, security policies and security protection requirements.
Examples of action sequences of two different use cases are shown in FIG. 15 and FIG. 16, as described below.
FIG. 15 is a schematic diagram illustrating a use case example according to an embodiment. In this example, a threat event is found by Security Analytics impacting assets within one security domain.
1. A new security event, indicating that a security policy change concerning disallowing a specific protocol, SNMPv2, has been made, is detected by the Security Analytics.
A threat event is transmitted to the Adaptation engine.
2. The rules of the adaptation engine trigger a risk and trust analysis on the assets A1 and A2, which currently support the SNMPv2 protocol that needs to be deprecated.
3. Trust engine calculates ATI for A1 and A2 using the new compliance information:
A1 ActualControl(1) = MinimumPasswordLength set to 12 (Weight = 1)
A1 ActualControl(2) = MaximumLoginAttemptsIn5min set to 3 (Weight = 1)
A1 ActualControl(3) = SNMPv2 set to supported (Weight = 1)
A1 ActualControl(4) = AuditLogging is set to enabled (Weight = 1)
A1 RelevantControl(1) = set MinimumPasswordLength higher than 10 (Weight = 1)
A1 RelevantControl(2) = set MaximumLoginAttemptsIn5min to less than 6 (Weight = 1)
A1 RelevantControl(3) = disable SNMPv2 (Weight = 1)
A1 RelevantControl(4) = enable AuditLogging (Weight = 1)
A2 ActualControl(1) = MinimumPasswordLength set to 12 (Weight = 1)
A2 ActualControl(2) = MaximumLoginAttemptsIn5min set to 3 (Weight = 1)
A2 ActualControl(3) = SNMPv2 set to disabled (Weight = 1)
A2 ActualControl(4) = AuditLogging is set to enabled (Weight = 1)
A2 RelevantControl(1) = set MinimumPasswordLength higher than 10 (Weight = 1)
A2 RelevantControl(2) = set MaximumLoginAttemptsIn5min to less than 6 (Weight = 1)
A2 RelevantControl(3) = disable SNMPv2 (Weight = 1)
A2 RelevantControl(4) = enable AuditLogging (Weight = 1)
A1 ATU(1) = AssetComplexity = 0.8
A1 ATU(2) = AssetLifecycleStatus = 0.5
A1 ATU(3) = NumOfUnpatchedVulns = 1
A1 ATU(4) = TimeOfLastPatching = 0.2
A1 ATU(5) = AttestationCapability = 0.8
A2 ATU(1) = AssetComplexity = 0.7
A2 ATU(2) = AssetLifecycleStatus = 0.9
A2 ATU(3) = NumOfUnpatchedVulns = 0.8
A2 ATU(4) = TimeOfLastPatching = 0.95
A2 ATU(5) = AttestationCapability = 0.8
[Equation 9] ACCON(A1) = AVE({1,1,1,1}*{1,1,0,1}) = (1*1 + 1*1 + 1*0 + 1*1)/4 = 0.75
[Equation 10] RELCON(A1) = AVE({1,1,1,1}*{1,1,1,1}) = (1*1 + 1*1 + 1*1 + 1*1)/4 = 1
[Equation 11] ATU(A1) = AVE(0.8, 0.5, 1, 0.2, 0.8) = (0.8 + 0.5 + 1 + 0.2 + 0.8)/5 = 0.66
[Equation 8] AC(A1) = ACCON(A1)/RELCON(A1) = 0.75/1 = 0.75
[Equation 7] ATI(A1) = 0.5 + AC(A1)*ATU(A1) = 0.5 + 0.75*0.66 = 0.995
[Equation 9] ACCON(A2) = AVE({1,1,1,1}*{1,1,1,1}) = (1*1 + 1*1 + 1*1 + 1*1)/4 = 1
[Equation 10] RELCON(A2) = AVE({1,1,1,1}*{1,1,1,1}) = (1*1 + 1*1 + 1*1 + 1*1)/4 = 1
[Equation 11] ATU(A2) = AVE(0.7, 0.9, 0.8, 0.95, 0.8) = (0.7 + 0.9 + 0.8 + 0.95 + 0.8)/5 = 0.83
[Equation 8] AC(A2) = ACCON(A2)/RELCON(A2) = 1/1 = 1
[Equation 7] ATI(A2) = 0.5 + AC(A2)*ATU(A2) = 0.5 + 1*0.77 = 1.27
This yields ATI values of 0.995 and 1.27 for assets A1 and A2, respectively.
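The A1 figures above can be reproduced with a few lines of Python (decimal commas written as decimal points; the ave helper mirrors the AVE operator of Equations 9 to 11):

```python
def ave(values, weights):
    # AVE operator: average of the element-wise weighted values
    return sum(v * w for v, w in zip(values, weights)) / len(values)

# A1: the third weight is 0 because A1 still supports SNMPv2,
# so RelevantControl(3) "disable SNMPv2" is not fulfilled
accon_a1 = ave([1, 1, 1, 1], [1, 1, 0, 1])     # Equation 9  -> 0.75
relcon_a1 = ave([1, 1, 1, 1], [1, 1, 1, 1])    # Equation 10 -> 1.0
atu_a1 = sum([0.8, 0.5, 1, 0.2, 0.8]) / 5      # Equation 11 -> 0.66
ac_a1 = accon_a1 / relcon_a1                   # Equation 8  -> 0.75
ati_a1 = 0.5 + ac_a1 * atu_a1                  # Equation 7  -> 0.995
```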
Trust Index Calculator stores and reports Trust Index to be updated in the Risk and Trust Awareness dashboard.
4. A trust index is reported back and forwarded to the Risk Engine.
5. The Risk index calculator performs a risk level calculation for the known threats and vulnerabilities using the risk calculation formulas 3 and 4; that is, the risk level is calculated for each asset within the security domain A and also for the security domain A.
At the initial phase the following values were defined:
Asset value index:
A1: 2.5 A2: 2.3 A3: 1.2 A4: 1
Asset dependency factor for all assets = 0.6
Threat severity index (unauthorized access) = 3
Thus, the Impact values are for:
A1: 2.5 x 3 x 0.6 = 4.5
A2: 2.3 x 3 x 0.6 = 4.14
A3: 1.2 x 3 x 0.6 = 2.16
A4: 1 x 3 x 0.6 = 1.8
Vulnerability Criticality factor (weak SNMP protocol in use): 3
Control Mitigation Index (SNMPv3 in use): 1
Trust Indexes:
A1: 1.3 A2: 1.27 A3: 1.1 A4: 1.15
Thus, the probability values are for:
A1: 3 x 1 x 1/1.3 = 2.3
A2: 3 x 1 x 1/1.27 = 2.4
A3: 3 x 1 x 1/1.1 = 2.7
A4: 3 x 1 x 1/1.15 = 2.6
And the Asset Initial Risk Levels (AIRL) (formula 1) = [(Impact x Asset Dependency Factor) x (Probability x 1/Trust Index)]:
A1: 4.5 x 2.3 = 10.35
A2: 4.14 x 2.4 = 9.94
A3: 2.16 x 2.7 = 5.83
A4: 1.8 x 2.6 = 4.68
and the Security Domain Initial Risk Level (formula 2) = (AIRL1 + ... + AIRLn) / n for the domain A was: (10.35 + 9.94 + 5.83 + 4.68) / 4 = 7.7
Now, the risk level calculation (Formula 3) for the known threat, vulnerability, asset and control is:
Asset Risk Level Known (ARLK) = [(Impact x Asset Dependency Factor) x (Probability x 1/Trust Index)] x 1/Index-Decay Multiplier
Asset dependency factor:
A1: 0.6 (same as initial state)
A2: 0.7 (increased due to higher risk for A1)
A3: 0.7 (increased due to higher risk for A1)
A4: 0.6 (same as initial state, no dependency on A1)
Threat severity index (unauthorized access) = 3
Thus, the Impact values are for
A1: 2.5 x 3 x 0.6 = 4.5
A2: 2.3 x 3 x 0.7 = 4.83
A3: 1.2 x 3 x 0.7 = 2.52
A4: 1 x 3 x 0.6 = 1.8
Vulnerability Criticality factor (weak SNMP protocol in use): 3
Control Mitigation Index (SNMPv3 in use): 1
Control Mitigation Index (SNMPv2 in use): 0.8
Trust Indexes:
A1: 0.995 A2: 1.27 A3: 1.1 A4: 1.15
Thus, the probability values are for:
A1: 3 x 1/0.8 x 1/0.995 = 3.77
A2: 3 x 1 x 1/1.27 = 3.81
A3: 3 x 1 x 1/1.1 = 3.3
A4: 3 x 1 x 1/1.15 = 3.45
Common Index-Decay Multiplier for a threat, vulnerability, asset and control: 1.1
Asset Risk Levels Known (ARLK):
A1: 4.5 x 3.77 x 1/1.1 = 15.42
A2: 4.83 x 3.81 x 1/1.1 = 16.72
A3: 2.52 x 3.3 x 1/1.1 = 7.56
A4: 1.8 x 3.45 x 1/1.1 = 5.6
The Security Domain Risk Level Known = (ARLK1 + ... + ARLKn) / n for the domain A is (15.42 + 16.72 + 7.56 + 5.6) / 4 = 11.33
The risk calculation indicates an unacceptable risk level for the security domain A (> 10) and for the assets A1 (> 15) and A2 (> 15) in security domain A, but there is no indication of increased risk for the assets A3 and A4, nor for the security domains B or C. The risk and trust awareness dashboard is updated to illustrate the new Risk status for the assets A1 and A2.
6. The Risk Treatment Handler performs a risk treatment analysis, the result of which is that additional security controls are required to be put in place to protect Assets A1 and A2, and reports mitigation instructions to the Adaptation Engine.
7. Based on the instructions from the Risk Engine, the adaptation engine forwards the mitigation request to the Security Configuration Engine.
8. The Security Configuration Engine chooses two new security controls from the policy catalog. Enforcement of the controls, that is, the actual security configuration of the assets, is performed by executing scripts on the assets A1 and A2. The result of adding the security controls is returned to the adaptation engine.
9. The rules of the adaptation engine trigger a new risk and trust analysis.
10. The Trust Engine's trust index in the Risk and Trust Awareness Dashboard is updated, using a calculation similar to that shown in step 3.
11. A trust index is reported back and forwarded to the Risk Engine.
12. The Risk Index Calculator performs a new risk level calculation, using a calculation similar to that shown in step 5, taking into account that two new security controls have been added to assets A1 and A2.
This time, the risk calculation indicates an acceptable risk status for the monitored system. The risk and trust awareness dashboard is updated to show the updated risk status.
13. The rules of the Adaptation Engine are updated according to the new risk and trust levels.

FIG. 16 is a schematic diagram illustrating a use case example according to another embodiment. In this example, a threat that has impact beyond one security domain is found by Threat Intelligence/Security Analytics.
1. A new, previously unknown vulnerability is found by Threat Intelligence. A new threat event is transmitted to the Adaptation Engine.
2. The rules of the adaptation engine trigger a risk and trust analysis.
3. The Trust Engine calculates, stores and reports the Trust Index to be updated in the Risk and Trust Awareness Dashboard. The asset trust indexes indicate impacts not only on assets in security domain A, but also on assets in security domain B.
4. A trust index is reported back and forwarded to the Risk Engine.
5. The Risk Index Calculator performs a risk level calculation for the unknown vulnerability using risk calculation formulas 5 and 6; that is, the risk level is calculated for each asset within security domain A and for security domain A as a whole.
The risk calculation indicates an unacceptable risk for assets A3 and A4 in security domain A, which may also cause an increased security risk to assets in security domains B and C. The risk and trust awareness dashboard is updated to show the increased risk status of assets A1 and A2 and of security domains A, B and C.
6. The Risk Treatment Handler performs a risk treatment analysis, concluding that additional security controls must be put in place to protect assets A1 and A2.
The risk treatment analysis also advises adding firewall rules to the network firewall (FW) between the security domains. The Risk Engine reports mitigation instructions to the Adaptation Engine.
7. Based on the instructions from the Risk Engine, the Adaptation Engine forwards a mitigation request to the Security Configuration Engine.
8. The Security Configuration Engine chooses two new security controls from the policy catalog to be added to assets A1 and A2. Enforcement of the controls, that is, the actual security configuration of the assets, is performed by executing scripts on assets A1 and A2. New firewall rules are also configured on the Network Firewall to provide additional protection for security domains B and C. The result of adding the security controls is returned to the Adaptation Engine.
9. The rules of the Adaptation Engine trigger a new risk and trust analysis.
10. The Trust Engine's trust index in the Risk and Trust Awareness Dashboard is updated.
11. A trust index is reported back and forwarded to the Risk Engine.
12. The Risk Engine's Risk Index Calculator performs a new risk level calculation, taking into account that two new security controls have been added to assets A1 and A2 and that new firewall rules have been set on the network FW.
This time, the risk calculation indicates an acceptable risk status for the monitored system. The risk and trust awareness dashboard is updated to show the updated risk status.
13. Rules of the Adaptation Engine are updated according to new risk and trust levels.
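The numbered steps above (and the corresponding steps for FIG. 15) follow a recurring pattern: analyse trust, calculate risk, treat and enforce, then re-analyse until the risk status is acceptable. A minimal sketch of that loop in Python is given below; the engine objects, their method names and the threshold value are illustrative assumptions rather than part of the described system.

```python
# Minimal sketch of the automated risk-and-trust adaptation loop described in
# steps 1-13 above. The engine APIs are illustrative assumptions; the
# description does not prescribe a concrete interface.

ACCEPTABLE_DOMAIN_RISK = 10  # example acceptance threshold from the use case

def adaptation_loop(threat_event, trust_engine, risk_engine, config_engine,
                    max_rounds=10):
    domain_risk = None
    for _ in range(max_rounds):
        # Steps 2-4: trust analysis; trust indexes are forwarded to the
        # Risk Engine (and would update the Risk and Trust Awareness Dashboard).
        trust_indexes = trust_engine.calculate(threat_event)
        # Step 5: risk level calculation per asset and per security domain.
        domain_risk, asset_risks = risk_engine.calculate(threat_event,
                                                         trust_indexes)
        if domain_risk <= ACCEPTABLE_DOMAIN_RISK:
            # Steps 12-13: acceptable risk status; adaptation rules updated.
            return domain_risk
        # Steps 6-8: risk treatment analysis selects additional security
        # controls, which the Security Configuration Engine enforces.
        controls = risk_engine.treat(asset_risks)
        config_engine.enforce(controls)
        # Step 9: the enforced controls trigger a new risk and trust analysis.
    return domain_risk
```

The `max_rounds` guard is an addition of this sketch: it bounds the loop if enforcement never brings the risk below the threshold, a case the use cases above do not exercise.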
It will be appreciated that the methods and arrangements described herein can be implemented, combined and re-arranged in a variety of ways.
For example, embodiments may be implemented in hardware, or in software for execution by suitable processing circuitry, or a combination thereof.
The steps, functions, procedures, modules and/or blocks described herein may be implemented in hardware using any conventional technology, such as discrete circuit or integrated circuit technology, including both general-purpose electronic circuitry and application-specific circuitry. Alternatively, or as a complement, at least some of the steps, functions, procedures, modules and/or blocks described herein may be implemented in software such as a computer program for execution by suitable processing circuitry such as one or more processors or processing units. The terms “processing circuitry” and “processor” may be used interchangeably in parts of this disclosure.
Examples of processing circuitry include, but are not limited to, one or more microprocessors, one or more Digital Signal Processors (DSPs), one or more Central Processing Units (CPUs), video acceleration hardware, and/or any suitable programmable logic circuitry such as one or more Field Programmable Gate Arrays (FPGAs), or one or more Programmable Logic Controllers (PLCs).
It should also be understood that it may be possible to re-use the general processing capabilities of any conventional device or unit in which the proposed technology is implemented. It may also be possible to re-use existing software, e.g. by reprogramming of the existing software or by adding new software components.
FIG. 17A is a schematic block diagram illustrating an example of a security automation system according to an embodiment.
In this particular example, the security automation system 400 comprises processing circuitry 410 including one or more processors and a memory 420, the memory 420 comprising instructions executable by the processing circuitry 410, whereby the security automation system 400 is operative to perform security management of the Information Technology system.
In other words, the processing circuitry is operative to perform at least some of the steps, actions and/or functions described herein, including the operations of the security automation system 400.
Optionally, the security automation system 400 may also include a communication circuit 430. The communication circuit 430 may include functions for wired and/or wireless communication with other devices and/or network nodes in the network. In a particular example, the communication circuit 430 may be based on radio circuitry for communication with one or more other nodes, including transmitting and/or receiving information. The communication circuit 430 may be interconnected to the processing circuitry 410 and/or memory 420. By way of example, the communication circuit 430 may include any of the following: a receiver, a transmitter, a transceiver, input/output (I/O) circuitry, input port(s) and/or output port(s).
FIG. 17B is a schematic block diagram illustrating an example of a network entity (node) comprising a security automation system according to an embodiment.
For example, the network entity 350 may be a network node or part thereof and/or a cloud-based network device.
FIG. 18 is a schematic diagram illustrating an example of a computer-implementation according to an embodiment. In this particular example, at least some of the steps, functions, procedures, modules and/or blocks described herein are implemented in a computer program 525; 535, which is loaded into the memory 520 for execution by processing circuitry including one or more processors 510. The processor(s) 510 and memory 520 are interconnected to each other to enable normal software execution. An optional input/output device 540 may also be interconnected to the processor(s) 510 and/or the memory 520 to enable input and/or output of relevant data such as input parameter(s) and/or resulting output parameter(s).
The term ‘processor’ should be interpreted in a general sense as any system or device capable of executing program code or computer program instructions to perform a particular processing, determining or computing task.
The processing circuitry including one or more processors 510 is thus configured to perform, when executing the computer program 525, well-defined processing tasks such as those described herein.
The processing circuitry does not have to be dedicated to only execute the above-described steps, functions, procedures and/or blocks, but may also execute other tasks.

In a particular embodiment, there is provided a computer program 525; 535 for performing, when executed, security management of an Information Technology system having a number of interacting system components, each system component being associated with one or more operational assets relevant to the operation of the system component, wherein the operational assets are organized in one or more security domains according to a system topology. The computer program 525; 535 comprises instructions, which when executed by at least one processor 510, cause the at least one processor 510 to: obtain: i) security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets of the system components, ii) asset information representative of the system topology, asset configuration and/or dependency between assets, and/or iii) security configuration information representative of asset and/or domain security configuration; determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a trust indication representing trustworthiness of the operational asset and/or domain at least partly based on the security configuration information representative of asset and/or domain security configuration; determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, a risk level at least partly based on the security information, the asset information and at least a subset of the determined trust indications; and determine one or more security control actions related to at least a subset of the operational assets and/or domains based on at least a subset of the determined risk levels.
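As a concrete illustration of the obtain/determine steps the computer program performs, a top-level flow might look as follows in Python. The data shapes (dicts keyed by asset identifier) and the trivial helper formulas are assumptions chosen only to make the sketch self-contained; they are not the calculation methods of the embodiments.

```python
# Hypothetical sketch of the computer program's top-level flow: obtain inputs,
# determine trust indications and risk levels, then derive control actions.
# Data shapes and helper formulas are illustrative assumptions only.
from dataclasses import dataclass

RISK_THRESHOLD = 10  # example acceptance threshold

@dataclass
class SecurityInputs:
    security_info: dict    # per-asset risks, vulnerabilities, threats
    asset_info: dict       # per-asset topology/configuration/dependency data
    security_config: dict  # per-asset (or per-domain) security configuration

def trust_indication(config):
    # Placeholder: trustworthiness derived from the security configuration.
    return config.get("trust", 1.0)

def risk_level(security, asset, trust):
    # Placeholder: risk from security info, asset info and trust indication.
    return security.get("severity", 0) * asset.get("value", 1) / trust

def security_management(inputs: SecurityInputs):
    trust = {a: trust_indication(c) for a, c in inputs.security_config.items()}
    risks = {a: risk_level(inputs.security_info.get(a, {}),
                           inputs.asset_info.get(a, {}),
                           trust[a])
             for a in trust}
    # One security control action per asset whose risk level is unacceptable.
    return [f"add-security-control:{a}" for a, r in risks.items()
            if r > RISK_THRESHOLD]
```

In an actual deployment the helpers would be replaced by the trust and risk calculations of the embodiments, and the returned actions would drive the security configuration engine rather than being plain strings.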
According to another aspect, there is provided a computer-program product comprising a non-transitory computer-readable medium 520; 530 having stored thereon such a computer program 525; 535. The proposed technology also provides a carrier comprising the computer program, wherein the carrier is one of an electronic signal, an optical signal, an electromagnetic signal, a magnetic signal, an electric signal, a radio signal, a microwave signal, or a computer-readable storage medium.
By way of example, the software or computer program 525; 535 may be realized as a computer program product, which is normally carried or stored on a computer-readable medium 520; 530, in particular a non-volatile medium. The computer-readable medium may include one or more removable or non-removable memory devices including, but not limited to a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc (CD), a Digital Versatile Disc (DVD), a Blu-ray disc, a Universal Serial Bus (USB) memory, a Hard Disk Drive (HDD) storage device, a flash memory, a magnetic tape, or any other conventional memory device. The computer program may thus be loaded into the operating memory of a computer or equivalent processing device for execution by the processing circuitry thereof.
The flow diagram or diagrams presented herein may be regarded as a computer flow diagram or diagrams, when performed by one or more processors. A corresponding apparatus may be defined as a group of function modules, where each step performed by the processor corresponds to a function module. In this case, the function modules are implemented as a computer program running on the processor.
The computer program residing in memory may thus be organized as appropriate function modules configured to perform, when executed by the processor, at least part of the steps and/or tasks described herein.
Any appropriate steps, methods, features, functions, or benefits disclosed herein may be performed through one or more functional units or modules of one or more virtual apparatuses. Each virtual apparatus may comprise a number of these functional units. These functional units may be implemented via processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory (RAM), cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein. In some implementations, the processing circuitry may be used to cause the respective functional unit to perform corresponding functions according to one or more embodiments of the present disclosure.
Alternatively, it is possible to realize such module(s) predominantly by hardware modules, or alternatively by hardware, with suitable interconnections between relevant modules. Particular examples include one or more suitably configured digital signal processors and other known electronic circuits, e.g. discrete logic gates interconnected to perform a specialized function, and/or Application Specific Integrated Circuits (ASICs) as previously mentioned. Other examples of usable hardware include input/output (I/O) circuitry and/or circuitry for receiving and/or sending signals. The extent of software versus hardware is purely an implementation selection.
For example, the virtual apparatus may comprise processing circuitry, which may include one or more microprocessors or microcontrollers, as well as other digital hardware, which may include digital signal processors (DSPs), special-purpose digital logic, and the like. The processing circuitry may be configured to execute program code stored in memory, which may include one or several types of memory such as read-only memory (ROM), random-access memory, cache memory, flash memory devices, optical storage devices, etc. Program code stored in memory includes program instructions for executing one or more telecommunications and/or data communications protocols as well as instructions for carrying out one or more of the techniques described herein, in several embodiments.
The term module or unit may have conventional meaning in the field of electronics, electrical devices and/or electronic devices and may include, for example, electrical and/or electronic circuitry, devices, modules, processors, memories, logic, solid state and/or discrete devices, computer programs or instructions for carrying out respective tasks, procedures, computations, outputs, and/or displaying functions, and so on, such as those that are described herein.

It is also becoming increasingly popular to provide computing services (hardware and/or software) in network devices such as network nodes and/or servers where the resources are delivered as a service to remote locations over a network. By way of example, this means that functionality, as described herein, can be distributed or re-located to one or more separate physical nodes or servers. The functionality may be re-located or distributed to one or more jointly acting physical and/or virtual machines that can be positioned in separate physical node(s), i.e. in the so-called cloud. This is sometimes also referred to as cloud computing, which is a model for enabling ubiquitous on-demand network access to a pool of configurable computing resources such as networks, servers, storage, applications and general or customized services.
There are different forms of virtualization that can be useful in this context, including one or more of:
• Consolidation of network functionality into virtualized software running on customized or generic hardware. This is sometimes referred to as network function virtualization.
• Co-location of one or more application stacks, including operating system, running on separate hardware onto a single hardware platform. This is sometimes referred to as system virtualization, or platform virtualization.
• Co-location of hardware and/or software resources with the objective of using some advanced domain level scheduling and coordination technique to gain increased system resource utilization. This is sometimes referred to as resource virtualization, or centralized and coordinated resource pooling.
Although it may often be desirable to centralize functionality in so-called generic data centers, in other scenarios it may in fact be beneficial to distribute functionality over different parts of the network.
A Network Device (ND) may generally be seen as an electronic device being communicatively connected to other electronic devices in the network. By way of example, the network device may be implemented in hardware, software or a combination thereof. For example, the network device may be a special-purpose network device or a general purpose network device, or a hybrid thereof.
A special-purpose network device may use custom processing circuits and a proprietary operating system (OS), for execution of software to provide one or more of the features or functions disclosed herein.
A general purpose network device may use common off-the-shelf (COTS) processors and a standard OS, for execution of software configured to provide one or more of the features or functions disclosed herein.
By way of example, a special-purpose network device may include hardware comprising processing or computing resource(s), which typically include a set of one or more processors, and physical network interfaces (NIs), which sometimes are called physical ports, as well as non-transitory machine readable storage media having stored thereon software. A physical NI may be seen as hardware in a network device through which a network connection is made, e.g. wirelessly through a wireless network interface controller (WNIC) or through plugging in a cable to a physical port connected to a network interface controller (NIC). During operation, the software may be executed by the hardware to instantiate a set of one or more software instance(s). Each of the software instance(s), and that part of the hardware that executes that software instance, may form a separate virtual network element.
By way of another example, a general purpose network device may for example include hardware comprising a set of one or more processor(s), often COTS processors, and network interface controller(s) (NICs), as well as non-transitory machine readable storage media having stored thereon software. During operation, the processor(s) executes the software to instantiate one or more sets of one or more applications. While one embodiment does not implement virtualization, alternative embodiments may use different forms of virtualization - for example represented by a virtualization layer and software containers. For example, one such alternative embodiment implements operating system-level virtualization, in which case the virtualization layer represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple software containers that may each be used to execute one of the sets of applications. In an example embodiment, each of the software containers (also called virtualization engines, virtual private servers, or jails) is a user space instance (typically a virtual memory space). These user space instances may be separate from each other and separate from the kernel space in which the operating system is executed; the set of applications running in a given user space, unless explicitly allowed, cannot access the memory of the other processes. Another such alternative embodiment implements full virtualization, in which case: 1) the virtualization layer represents a hypervisor (sometimes referred to as a Virtual Machine Monitor (VMM)) or the hypervisor is executed on top of a host operating system; and 2) the software containers each represent a tightly isolated form of software container called a virtual machine that is executed by the hypervisor and may include a guest operating system.
A hypervisor is the software/hardware that is responsible for creating and managing the various virtualized instances and in some cases the actual physical hardware. The hypervisor manages the underlying resources and presents them as virtualized instances. What the hypervisor virtualizes to appear as a single processor may actually comprise multiple separate processors. From the perspective of the operating system, the virtualized instances appear to be actual hardware components.
A virtual machine is a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine; and applications generally do not know they are running on a virtual machine as opposed to running on a “bare metal” host electronic device, though some systems provide para- virtualization which allows an operating system or application to be aware of the presence of virtualization for optimization purposes.
The instantiation of the one or more sets of one or more applications as well as the virtualization layer and software containers, if implemented, are collectively referred to as software instance(s). Each set of applications, corresponding software container if implemented, and that part of the hardware that executes them (be it hardware dedicated to that execution and/or time slices of hardware temporally shared by software containers), forms a separate virtual network element(s). The virtual network element(s) may perform similar functionality compared to Virtual Network Element(s) (VNEs). This virtualization of the hardware is sometimes referred to as Network Function Virtualization (NFV). Thus, NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which could be located in data centers, NDs, and Customer Premise Equipment (CPE). However, different embodiments may implement one or more of the software container(s) differently. For example, while embodiments are illustrated with each software container corresponding to a VNE, alternative embodiments may implement this correspondence or mapping between software containers and VNEs at a finer granularity level; it should be understood that the techniques described herein with reference to a correspondence of software containers to VNEs also apply to embodiments where such a finer level of granularity is used.
According to yet another embodiment, there is provided a hybrid network device, which includes both custom processing circuitry/proprietary OS and COTS processors/standard OS in a network device, e.g. in a card or circuit board within a network device ND. In certain embodiments of such a hybrid network device, a platform Virtual Machine (VM), such as a VM that implements functionality of a special-purpose network device, could provide for para-virtualization to the hardware present in the hybrid network device.
The embodiments described above are merely given as examples, and it should be understood that the proposed technology is not limited thereto. It will be understood by those skilled in the art that various modifications, combinations and changes may be made to the embodiments without departing from the present scope as defined by the appended claims. In particular, different part solutions in the different embodiments can be combined in other configurations, where technically possible.

Claims

1. A security automation system (100; 400; 500) configured for security management of an Information Technology (IT) system (200), said Information Technology system (200) having a number of interacting system components (205), each system component (205) being associated with one or more operational assets (210) for the operation of the system component, wherein the operational assets (210) are organized in one or more security domains (220) according to a system topology, wherein the security automation system is configured to obtain: i) security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets (210) of the system components, ii) asset information representative of the system topology, asset configuration and/or dependency between assets (210), and/or iii) security configuration information representative of asset and/or domain security configuration; wherein the security automation system (100; 400; 500) is configured to determine, for each of at least a subset of the operational assets (210) and/or for each of a number of the security domains (220), a trust indication representing trustworthiness of the operational asset (210) and/or domain (220) at least partly based on the security configuration information representative of asset and/or domain security configuration; wherein the security automation system (100; 400; 500) is configured to determine, for each of at least a subset of the operational assets (210) and/or for each of a number of the security domains (220), a risk level at least partly based on the security information, the asset information and at least a subset of the determined trust indications; and wherein the security automation system (100; 400; 500) is configured to determine one or more security control actions related to at least a subset of the operational assets (210) and/or domains (220) based on at least a subset of the determined risk levels.
2. The security automation system of claim 1, wherein the security automation system (100; 400; 500) is configured to determine one or more security control actions for a fully automated or security-analyst-assisted workflow to mitigate security risks and/or security threats.
3. The security automation system of claim 1 or 2, wherein the security automation system (100; 400; 500) is configured to determine, for each of at least a subset of the operational assets (210) and/or for each of a number of security domains (220), a risk level also at least partly based on dependency between assets (210).
4. The security automation system of any of the claims 1 to 3, wherein the security automation system (100; 400; 500) is configured to determine, for each of at least a subset of the operational assets (210) and/or for each of a number of security domains (220), a risk level also at least partly based on asset placement in the system topology, the number and nature of security risks and/or threats, communication interface(s) and/or protocol(s) used by the operational asset(s) and/or deployed security controls for the operational asset(s).
5. The security automation system of any of the claims 1 to 4, wherein the security automation system (100; 400; 500) is configured to determine, for each of at least a subset of the operational assets (210) and/or for each of a number of the security domains (220), a trust indication representing trustworthiness of the operational asset and/or domain also at least partly based on asset information representative of the system topology, asset configuration and/or dependency between assets.
6. The security automation system of any of the claims 1 to 5, wherein the security automation system (100; 400; 500) is configured to determine one or more security control actions based on Machine Learning (ML).
7. The security automation system of claim 6, wherein the security automation system (100; 400; 500) is configured to perform ML-based risk treatment analysis on how to handle security risks when the determined risk levels exceed a given threshold and determine one or more of the security control actions related to at least a subset of the operational assets (210) and/or domains (220) based on the ML-based risk treatment analysis.
8. The security automation system of any of the claims 1 to 7, wherein the security automation system (100; 400; 500) is configured to detect one or more security risks, system vulnerabilities and/or security threats based on Machine Learning (ML).
9. The security automation system of any of the claims 1 to 8, wherein the one or more security control actions includes updating a security policy, adjusting a security configuration, removing an existing security function and/or inserting a new security function.
10. The security automation system of any of the claims 1 to 9, wherein the security automation system (100; 400; 500) is configured to adaptively determine trust indication(s), risk level(s) and security control action(s) in an automated manner.
11. The security automation system of any of the claims 1 to 10, wherein the security automation system (100; 400; 500) is configured to re-determine at least one risk level after deployment of the determined security control action(s).
12. The security automation system of any of the claims 1 to 11 , wherein the operational assets (210) include at least one of hardware and/or software components of the Information Technology system (200), configurations, workflows, communication interfaces, and/or associated databases of the hardware and/or software components, and organization of the assets (210) in one or more security domains (220) according to the system topology.
13. The security automation system of any of the claims 1 to 12, wherein the operational assets (210) include at least one digital asset, and the one or more security control actions include at least one executable security control operating on the at least one digital asset and/or an associated security domain.
14. The security automation system of any of the claims 1 to 13, wherein the Information Technology system (200) is any system for generating, processing, storing and/or transferring information.
15. The security automation system of any of the claims 1 to 14, wherein the security automation system (100; 400; 500) is configured for security management of the Information Technology system (200) of a communication system or network (300).
16. The security automation system of claim 15, wherein the Information Technology system (200) is an integrated part of the communication system or network (300) and the interacting system components (205) of the Information Technology system (200) involves network nodes and/or elements of the communication system or network (300).
17. The security automation system of any of the claims 1 to 16, wherein the security automation system (100; 400; 500) comprises a trust engine (110), a risk engine (120) and a security adaptation engine (130) that are operatively interconnected, wherein the trust engine (110) is configured to determine, for each of at least a subset of the operational assets and/or domains, the trust indication representing trustworthiness of the operational asset and/or domain; wherein the risk engine (120) is configured to determine, for each of at least a subset of the operational assets and/or for each of a number of the security domains, the risk level of the operational asset and/or security domain; and wherein the security adaptation engine (130) is configured to determine one or more security control actions related to at least a subset of the operational assets and/or domains.
18. The security automation system of claim 17, wherein the security automation system (100; 400; 500) comprises a security configuration engine (150) for performing a security configuration according to the determined security control action(s).
19. The security automation system of claim 17 or 18, wherein the security adaptation engine (130) and the security configuration engine (150) are integrated.
20. The security automation system of any of the claims 17 to 19, wherein the security automation system (100; 400; 500) comprises a shared trust and risk database (140) accessible by the trust engine (110) and the risk engine (120).
21. The security automation system of any of the claims 17 to 20, wherein the security adaptation engine (130) is based on a rule engine that commands or runs digital security control workflows to make decisions automatically based on predefined rules.
22. The security automation system of claim 21, wherein the security adaptation engine (130) is configured to learn from previous security control actions and use Machine Learning (ML) to adapt rules and/or create new rules.
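Purely as an illustration of claims 21 and 22 (and not as the claimed subject-matter), a rule engine commanding digital security control workflows from predefined rules could be sketched as below, with a naive placeholder standing in for the ML-based rule adaptation; all names and thresholds are hypothetical:

```python
# Hypothetical sketch of a rule-driven security adaptation engine (claims 21-22).
# A real system would replace the naive threshold heuristic with an ML model.
class RuleEngine:
    def __init__(self):
        # Predefined rules: a condition on the risk level -> workflow to run.
        self.rules = [
            (lambda risk: risk >= 0.8, "isolate_asset"),
            (lambda risk: 0.5 <= risk < 0.8, "patch_and_monitor"),
        ]
        self.history = []  # past (risk, action) pairs, used for rule adaptation

    def decide(self, risk: float) -> str:
        """Make a decision automatically based on the predefined rules."""
        for condition, workflow in self.rules:
            if condition(risk):
                self.history.append((risk, workflow))
                return workflow
        return "no_action"

    def adapt(self):
        """Learn from previous control actions: if isolation fires repeatedly,
        create a new, stricter monitoring rule just below its threshold."""
        isolations = [r for r, a in self.history if a == "isolate_asset"]
        if len(isolations) >= 2:
            self.rules.append((lambda risk: 0.4 <= risk < 0.5, "extra_monitoring"))

engine = RuleEngine()
engine.decide(0.9)   # isolate_asset
engine.decide(0.85)  # isolate_asset
engine.adapt()       # learning step creates the new rule
print(engine.decide(0.45))  # → extra_monitoring
```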
23. The security automation system of any of the claims 1 to 22, wherein the security automation system comprises a visualization and/or notification platform for enabling situational awareness of the security posture of the Information Technology system.
24. The security automation system of any of the claims 1 to 23, wherein the security automation system (100; 400; 500) comprises processing circuitry (410; 510) and memory (420; 520), the memory (420; 520) comprising instructions executable by the processing circuitry (410; 510), whereby the security automation system (100; 400; 500) is operative to perform security management of the Information Technology system (200).
25. A network entity (350) of a communication system or network (300) comprising a security automation system (100; 400; 500) according to any of the claims 1 to 24.
26. The network entity of claim 25, wherein the network entity (350) is a network node or a cloud-based network device.
27. A method for security management of an Information Technology system (200) having a number of interacting system components (205), each system component (205) being associated with one or more operational assets (210) relevant to the operation of the system component, wherein the operational assets (210) are organized in one or more security domains (220) according to a system topology, wherein the method comprises:
(S1) obtaining: i) security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets (210) of the system components, ii) asset information representative of the system topology, asset configuration and/or dependency between assets, and/or iii) security configuration information representative of asset and/or domain security configuration;
(S2) determining, for each of at least a subset of the operational assets (210) and/or for each of a number of the security domains (220), a trust indication representing trustworthiness of the operational asset and/or domain at least partly based on the security configuration information representative of asset and/or domain security configuration;
(S3) determining, for each of at least a subset of the operational assets (210) and/or for each of a number of the security domains (220), a risk level at least partly based on the security information, the asset information and at least a subset of the determined trust indications; and
(S4) determining one or more security control actions related to at least a subset of the operational assets (210) and/or domains (220) based on at least a subset of the determined risk levels.
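As a non-limiting illustration only, the obtaining and determining steps (S1) to (S4) of the method claim could be sketched as a single pipeline; every name, data layout and formula below is hypothetical:

```python
# Hypothetical sketch of method steps (S1)-(S4); not the claimed implementation.
def security_management(components):
    # (S1) Obtain security, asset and security-configuration information
    #      for the operational assets of the interacting system components.
    assets = {a["name"]: a for c in components for a in c["assets"]}

    # (S2) Determine a trust indication per asset from its security configuration.
    trust = {name: a["config_score"] for name, a in assets.items()}

    # (S3) Determine a risk level per asset from the security information
    #      and the determined trust indications.
    risk = {name: a["vulns"] * (1.0 - trust[name]) for name, a in assets.items()}

    # (S4) Determine security control actions based on the risk levels.
    actions = [f"restrict:{name}" for name, r in risk.items() if r > 1.0]
    return trust, risk, actions

components = [
    {"name": "core", "assets": [{"name": "db", "config_score": 0.9, "vulns": 1}]},
    {"name": "edge", "assets": [{"name": "gw", "config_score": 0.3, "vulns": 4}]},
]
trust, risk, actions = security_management(components)
print(actions)  # → ['restrict:gw']
```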
28. A computer program (525; 535) for performing, when executed, security management of an Information Technology system (200) having a number of interacting system components (205), each system component (205) being associated with one or more operational assets (210) relevant to the operation of the system component, wherein the operational assets (210) are organized in one or more security domains (220) according to a system topology, wherein the computer program (525; 535) comprises instructions, which when executed by at least one processor (510), cause the at least one processor (510) to: obtain: i) security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets (210) of the system components, ii) asset information representative of the system topology, asset configuration and/or dependency between assets, and/or iii) security configuration information representative of asset and/or domain security configuration; determine, for each of at least a subset of the operational assets (210) and/or for each of a number of the security domains (220), a trust indication representing trustworthiness of the operational asset and/or domain at least partly based on the security configuration information representative of asset and/or domain security configuration; determine, for each of at least a subset of the operational assets (210) and/or for each of a number of the security domains (220), a risk level at least partly based on the security information, the asset information and at least a subset of the determined trust indications; and determine one or more security control actions related to at least a subset of the operational assets (210) and/or domains (220) based on at least a subset of the determined risk levels.
29. A computer-program product comprising a non-transitory computer-readable medium (520; 530) having stored thereon a computer program (525; 535) of claim 28.
30. A security automation system (100; 400; 500) configured for security management of an Information Technology system (200), said Information Technology system (200) having a number of interacting system components (205), each system component (205) being associated with one or more operational assets (210) relevant to the operation of the system component, wherein the operational assets (210) are organized in one or more security domains (220) according to a system topology, wherein the security automation system (100; 400; 500) comprises: a trust engine (110) configured to perform trust evaluation of at least a subset of the operational assets (210) and/or domains (220) at least partly based on security configuration information representative of asset and/or domain security configuration; a risk engine (120) configured to perform risk assessment for each of at least a subset of the operational assets (210) and/or for each of a number of the security domains (220) at least partly based on the trust evaluation performed by the trust engine (110), security information representative of security risks, system vulnerabilities and/or security threats related to at least a subset of the operational assets, and asset information representative of the system topology, asset configuration and/or dependency between assets; and a security adaptation engine (130) configured to determine one or more security control actions related to at least a subset of the operational assets and/or domains at least partly based on the risk assessment performed by the risk engine (120).
PCT/EP2019/071971 2019-08-15 2019-08-15 Security automation system WO2021028060A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2019/071971 WO2021028060A1 (en) 2019-08-15 2019-08-15 Security automation system


Publications (1)

Publication Number Publication Date
WO2021028060A1 (en) 2021-02-18

Family

ID=67660566

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2019/071971 WO2021028060A1 (en) 2019-08-15 2019-08-15 Security automation system

Country Status (1)

Country Link
WO (1) WO2021028060A1 (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130191919A1 (en) * 2012-01-19 2013-07-25 Mcafee, Inc. Calculating quantitative asset risk
US20180124072A1 (en) * 2016-10-31 2018-05-03 Acentium Inc. Systems and methods for computer environment situational awareness


Non-Patent Citations (1)

Title
BARBARA FILKINS, SANS Whitepaper "An Evaluator's Guide to Next Gen SIEM", December 2018 (2018-12-01)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022177760A1 (en) * 2021-02-19 2022-08-25 Mcafee, Llc Methods and apparatus to orchestrate personal protection across digital assets
CN113792292A (en) * 2021-09-14 2021-12-14 山石网科通信技术股份有限公司 Response method and device of security script, storage medium and processor
CN114338145A (en) * 2021-12-27 2022-04-12 绿盟科技集团股份有限公司 Safety protection method and device and electronic equipment
CN114338145B (en) * 2021-12-27 2023-09-26 绿盟科技集团股份有限公司 Safety protection method and device and electronic equipment


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19755608; Country of ref document: EP; Kind code of ref document: A1)
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19755608; Country of ref document: EP; Kind code of ref document: A1)