EP4032246A1 - Systems and methods for monitoring and correcting security practices in a computer system - Google Patents

Systems and methods for monitoring and correcting security practices in a computer system

Info

Publication number
EP4032246A1
Authority
EP
European Patent Office
Prior art keywords
security policy
procedures
risk management
computing
changes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP20865458.2A
Other languages
English (en)
French (fr)
Other versions
EP4032246A4 (de)
Inventor
Jack Allen Jones
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Risklens LLC
Original Assignee
RiskLens Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/573,175 (US11258828B2)
Application filed by RiskLens Inc filed Critical RiskLens Inc
Publication of EP4032246A1
Publication of EP4032246A4

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/20 Network architectures or network communication protocols for network security for managing network security; network security policies in general
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3466 Performance evaluation by tracing or monitoring
    • G06F11/3495 Performance evaluation by tracing or monitoring for systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/57 Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1433 Vulnerability analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00 Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/84 Using snapshots, i.e. a logical point-in-time copy of the data

Definitions

  • the present disclosure relates to systems and methods for determining the efficacy of security measures taken for a computer system.
  • An organization's cyber risk landscape comprises its assets (e.g., hardware, software, data, etc.), the threats to those assets (e.g., cyber criminals), and the controls intended to manage the frequency and magnitude of cyber-related loss.
  • Threat intelligence includes both a tactical and a strategic point of view, i.e., what is going on right now and is likely to happen in the near future, versus how the threat landscape is evolving.
  • making well-informed risk-based decisions can be a complex and challenging process.
  • Risk management efficacy boils down to the ability to make well-informed decisions (e.g., prioritization, solution selection, identification and treatment of root causes, etc.) and reliable execution. These are difficult to measure directly, particularly when related to a party or asset to which on-site access is limited or nonexistent. Also, it is often desirable to evaluate a party or asset more objectively and efficiently than can be achieved using questionnaires or site visits. Further, while known methods can be effective to reduce risk, such as by patching a security hole, there is no ability to analyze system management quality. For example, there is no reliable way of determining whether the corrective actions are effective and efficient over time because the complex interplay between elements and dynamic nature of computing systems makes cause and effect difficult, if not impossible, to correlate.
  • Key Performance Indicators (KPIs)
  • Key Risk Indicators (KRIs)
  • KRIs signal increased probability of events that have a negative impact on business performance.
  • a KPI could be factory output while a KRI could be a power outage that affects equipment, or a cyber-attack.
  • Cyber risk KRIs include new vulnerabilities (the results of scanning tools), patching compliance levels, security education and awareness levels, malware infections, detected attacks, lost laptops/mobile devices, the number of open "high risk" audit findings, and on-time remediation of audit findings.
  • KRIs typically have thresholds defined for them.
  • the response to exceeding the threshold is often overlooked and treated in an ad hoc fashion. It is very difficult to analyze and measure the risk associated with the normal or abnormal states of KRIs. Therefore, most organizations have no clear notion of where to define their KRI thresholds, resulting in not being able to determine the correct response to an abnormal state and not being able to determine if a particular response, or series of responses, was effective.
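  • As an illustration of the threshold problem described above, the following Python sketch (not part of the patent text; the KRI names and threshold values are hypothetical) shows how KRI values can be checked against defined thresholds and how breaches can be recorded so that the effect of any response can later be evaluated.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class KRI:
    """A Key Risk Indicator with a hypothetical threshold."""
    name: str
    threshold: float                 # value above which the KRI is considered abnormal
    breaches: list = field(default_factory=list)

    def record(self, value: float, when: datetime) -> bool:
        """Record an observation; return True if the threshold is exceeded."""
        exceeded = value > self.threshold
        if exceeded:
            self.breaches.append((when, value))
        return exceeded

# Hypothetical KRIs and thresholds; real values depend on the organization and its landscape.
kris = [
    KRI("open_high_risk_audit_findings", threshold=5),
    KRI("days_since_last_patch_cycle", threshold=30),
    KRI("malware_infections_per_week", threshold=2),
]

observed = {
    "open_high_risk_audit_findings": 7,
    "days_since_last_patch_cycle": 12,
    "malware_infections_per_week": 3,
}

now = datetime.now()
for kri in kris:
    if kri.record(observed[kri.name], now):
        print(f"KRI '{kri.name}' is abnormal: {observed[kri.name]} exceeds threshold {kri.threshold}")
```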
  • FIG. 3 illustrates a typical risk landscape architecture.
  • risk 310 is determined by threats, assets, and impact.
  • Risk management 320 includes execution and decisions.
  • a feedback loop which runs from Risk 310 to monitoring and testing 330 to analysis and reporting 340 to risk management 320, provides decision-makers with intelligence regarding their risk landscape. The better this feedback loop is operating, the better able decision-makers will be to make appropriate risk management choices.
  • this feedback loop includes not just information about risk (threats, assets, and impact) but also information regarding the efficacy of risk management practices. For the reasons noted above, the feedback loop of FIG. 3 is often unreliable and inaccurate.
  • One implementation includes a system configured for monitoring and correcting cyber security practices on a computing environment having computing assets, the system comprising: one or more hardware processors configured by machine-readable instructions to: (a) receive a cyber security policy defining changes to be applied to computing assets, the cyber security policy further defining procedures to be taken by an organization for effecting the changes, the cyber security policy further defining timing at which the procedures should be implemented; (b) determine a set of risk management parameters which indicate a state of the computing environment at a time of collection of the risk management parameters, wherein the set of risk management parameters is determined based on at least one of the procedures; (c) collect successive sets of values of the risk management parameters at predetermined times; (d) determine, based on at least two of the sets of values of the risk management parameters, that at least one of the procedures has not resulted in the corresponding changes being applied to the computing assets based on the timing at which the procedures should be implemented defined by the cyber security policy; and (e) in response to (d), adjust at least one of the procedures of the cyber security policy to create an adjusted cyber security policy.
  • Another implementation includes a computer-implemented method for monitoring and correcting a cyber security policy on a computing environment having computing assets, the method comprising: (a) receiving a cyber security policy defining changes to be applied to computing assets, the cyber security policy further defining procedures to be taken by an organization for effecting the changes, the cyber security policy further defining timing at which the procedures should be implemented; (b) determining a set of risk management parameters which indicate a state of the computing environment at a time of collection of the risk management parameters, wherein the set of risk management parameters is determined based on at least one of the procedures; (c) collecting successive sets of values of the risk management parameters at predetermined times; (d) determining, based on at least two of the sets of values of the risk management parameters, that at least one of the procedures has not resulted in the corresponding changes being applied to the computing assets based on the timing at which the procedures should be implemented defined by the cyber security policy; and (e) in response to (d), adjusting at least one of the procedures of the security policy to create an adjusted cyber security policy.
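  • The following Python sketch is one possible reading of steps (a) through (e) above, under stated assumptions: the policy structure, the parameter names, and the rule used in step (d) are illustrative inventions for this example, not the claimed implementation.

```python
from datetime import date

# (a) A cyber security policy: a change to apply, the procedure for effecting it,
#     and the timing by which the procedure should be implemented (all hypothetical).
policy = {
    "apply_critical_patches": {
        "change": "critical patches installed on all servers",
        "procedure": "monthly patch window run by the operations team",
        "deadline_days": 30,
    },
}

# (b)/(c) Successive snapshots of a risk management parameter tied to the procedure.
snapshots = {
    date(2020, 1, 1): {"unpatched_servers": 40},
    date(2020, 2, 1): {"unpatched_servers": 38},
}

def procedure_ineffective(name: str, snaps: dict, pol: dict) -> bool:
    """(d) Decide whether the procedure failed to produce its change within the policy's timing."""
    (t1, s1), (t2, s2) = sorted(snaps.items())[:2]
    elapsed = (t2 - t1).days
    deadline = pol[name]["deadline_days"]
    # If the deadline has passed and the deficient population has not shrunk meaningfully,
    # treat the procedure as not having effected the intended change.
    return elapsed >= deadline and s2["unpatched_servers"] >= 0.9 * s1["unpatched_servers"]

# (e) Adjust the procedure when (d) holds; here the cadence is simply tightened.
if procedure_ineffective("apply_critical_patches", snapshots, policy):
    policy["apply_critical_patches"]["procedure"] = "weekly patch window with a compliance report"
    policy["apply_critical_patches"]["deadline_days"] = 7
    print("Adjusted procedure:", policy["apply_critical_patches"])
```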
  • FIG. 1 is a schematic block diagram of a distributed computing system configured for determining the efficacy of cyber security measures, in accordance with one or more implementations.
  • FIG. 2 is a flowchart of a method for determining the efficacy of cyber security measures, in accordance with one or more implementations.
  • FIG. 3 is a schematic diagram of a risk management architecture.
  • the disclosed embodiments can be analogized to the optical concept of parallax.
  • an understanding of our universe begins with measurements of distance — e.g., how far away a star, galaxy, etc., is.
  • This parallax principle is used to measure the distance of objects in space, but in order to make this work given the great distances involved, there must be a very large distance between two points of perspective.
  • The "snapshots" (the two points that form the base of the triangle) can be taken when the earth is at extreme points in the opposite ends of its orbit of the sun. That significant distance in space from one side of the orbit to the other provides sufficient changes in perspective to allow the use of simple geometry to derive distances to objects in deep space.
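  • For background, the geometry alluded to here is the standard stellar-parallax relation (general astronomy, not text from the patent): with the two snapshots taken at opposite ends of Earth's orbit, the baseline is roughly 2 AU and the distance follows from the half-angle of the apparent shift.

```latex
% Baseline b \approx 2\,\mathrm{AU}; the object appears to shift by an angle 2p between the two snapshots.
d = \frac{b/2}{\tan p} = \frac{1\,\mathrm{AU}}{\tan p} \approx \frac{1\,\mathrm{AU}}{p}
\quad (\text{small } p), \qquad
d\,[\mathrm{pc}] \approx \frac{1}{p\,[\mathrm{arcsec}]}.
```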
  • other important characteristics such as whether the object is moving toward or away from the earth, how hot a star is, whether it is getting hotter or cooler, and whether it is moving in concert with the objects around it, can be measured.
  • The interval between applied snapshots can be adjusted based upon the current risk concern. For example, snapshots taken at more frequent intervals will help to capture sudden changes to the normal state, which can be used to detect time-sensitive changes in a timely fashion. Snapshots taken at longer intervals will be more effective at detecting systemic and strategic conditions. Over time, examined conditions can be correlated across a large base of organizations (e.g., within the government) with loss experience, to begin statistically forecasting loss expectancy given certain conditions. The snapshots can be taken at a fixed, relatively short interval, and longer intervals can be created by using non-successive snapshots.
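  • A minimal sketch of that interval scheme (an assumed design for illustration, not taken from the patent): snapshots are collected at a fixed short interval, and longer effective intervals are formed by pairing non-successive snapshots.

```python
from datetime import datetime, timedelta

def select_pair(snapshots, span):
    """Pick two snapshots roughly `span` apart from a series taken at a fixed short interval.

    `snapshots` is a list of (time, values) tuples in chronological order. Short spans
    favour detection of sudden, time-sensitive changes; long spans favour detection of
    systemic and strategic conditions.
    """
    latest_time, latest_values = snapshots[-1]
    for t, values in snapshots:
        if latest_time - t <= span:
            return (t, values), (latest_time, latest_values)
    return snapshots[0], snapshots[-1]

# Hypothetical weekly snapshots of one risk parameter.
weekly = [(datetime(2020, 1, 1) + timedelta(weeks=i), {"unpatched_servers": 40 - i})
          for i in range(12)]

short_pair = select_pair(weekly, timedelta(weeks=1))   # catches sudden changes to the normal state
long_pair = select_pair(weekly, timedelta(weeks=8))    # highlights slower, systemic drift
```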
  • the disclosed embodiments can identify and manage fundamental and/or systemic weaknesses that might otherwise go unresolved.
  • Machine learning algorithms can be applied to provide automated analysis and reporting.
  • The embodiments can be used in the following situations: third-party risk management; development and tracking of KRIs and KPIs; policy/process definition (e.g., visibility levels, change management practices) as well as compliance to those expectations; strategic reporting to management; regulatory reporting/oversight; identifying and managing differences in risk management capabilities across different parts of large, decentralized organizations (e.g., government, global entities, etc.); and cyber warfare (to identify and strategically target weaknesses in an enemy).
  • FIG. 1 illustrates a system 100 configured for determining the efficacy of security measures taken for a collection of computing assets 118, in accordance with one or more implementations.
  • system 100 may include one or more servers 102.
  • Server(s) 102 may be configured to communicate with one or more client computing platforms 104 and system computing assets 118, according to a client/server architecture and/or other architectures.
  • Client computing platform(s) 104 are used to provide user interaction (e.g., monitoring and control) and may be configured to communicate with other client computing platforms via server(s) 102 and/or according to a peer-to-peer architecture and/or other architectures. Users may access system 100 via client computing platform(s) 104.
  • Assets 118 make up the collection of assets for which risk is being managed.
  • Assets 118 can be remotely distributed in any manner and under the control and/or possession of one or multiple parties.
  • Server(s) 102 may be configured by machine-readable instructions 106.
  • Machine-readable instructions 106 may include one or more instruction modules.
  • the instruction modules may include computer program modules.
  • the instruction modules may include one or more of a parameter set determination module 108, a set collection module 110, a determining module 112, a parameter adjusting module 114, an efficacy determination module 116, and/or other instruction modules.
  • Parameter set determination module 108 may be configured to determine a set of risk parameters of the computing system to be collected from assets 118.
  • Risk parameters are system state parameters or variables which can be indicative of potential risk or lack thereof. The selection of risk parameters to be included in the set depends on the composition and operation of the system being monitored, the threat landscape, and the risk tolerance of the organization of concern. Risk parameter selection is based upon what one is trying to understand/learn about the risk management program. For example, if one wants to understand whether the organization has incomplete visibility into their risk landscape, vulnerability scanning data that shows some systems aren't being patched at all could be collected, even though the organization's policies and practices would dictate patching every 30 days.
  • the set of risk parameters may include parameters related to asset existence, asset value, control conditions, network traffic volume, and/or threat landscape.
  • The set of risk parameters may include parameters collected by antivirus software, network-based vulnerability scanners such as NetRecon, network read/write utilities such as Netcat, data loss prevention technologies, configuration management database (CMDB) technologies, and/or vulnerability scanning technologies.
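  • To illustrate the visibility example given above (the 30-day patching expectation), the sketch below derives a small risk parameter set from hypothetical vulnerability-scan output; the field names, asset names, and the 30-day window are assumptions for illustration only.

```python
from datetime import date

PATCH_POLICY_DAYS = 30  # the organization's stated patching expectation (hypothetical)

# Hypothetical scan output: asset name -> date it was last seen patched (None = never seen patched).
scan_results = {
    "web-01": date(2020, 7, 20),
    "web-02": date(2020, 5, 2),
    "db-01": None,
}

def risk_parameters(scan: dict, as_of: date) -> dict:
    """Turn raw scan output into risk parameters tied to the patching procedure."""
    ages = {asset: (as_of - patched).days if patched else None for asset, patched in scan.items()}
    return {
        "assets_never_patched": [a for a, age in ages.items() if age is None],
        "assets_past_policy": [a for a, age in ages.items()
                               if age is not None and age > PATCH_POLICY_DAYS],
    }

# Assets that are never patched despite a 30-day policy suggest incomplete visibility
# into the risk landscape, as described above.
print(risk_parameters(scan_results, as_of=date(2020, 8, 5)))
```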
  • The threat landscape can encompass various known components, such as human threats (e.g., hackers, internal personnel error, malicious internal personnel), natural phenomena (e.g., power outages or hardware damage due to weather or earthquake), and internal asset threats (e.g., security weaknesses in mobile devices, software bugs).
  • Set collection module 110 may be configured to collect a first set of values of the risk parameters of the computing system at a first time t1, collect a second set of values of the risk parameters of the computing system at a second time t2, collect a third set of values of the risk parameters of the computing system at a third time t3, and collect a fourth set of values of the risk parameters of the computing system at a fourth time t4.
  • Time t1 may be a time that is before the adjustment is made, t2 may be a time after the adjustment is made, t3 may be a time after t2, and t4 may be a time after t3.
  • the timing of collection of sets of values can be predetermined and can have fixed intervals.
  • the collected sets of values for any determination need not be consecutive and thus the interval between sets of values used in a determination can be varied. Any number of sets of values of risk parameters can be collected and the time interval therebetween can vary based on specific application parameters.
  • the period between successive times of value collection may be constant.
  • The successive times may follow one another in regular succession without gaps, according to some implementations.
  • The time period between successive times may vary.
  • Determining module 112 may be configured to determine, based on the first set of parameters, that there is a cyber risk management issue relating to the computing system.
  • Parameter adjusting module 114 may be configured to adjust operating parameters of the computer system to address the cyber risk management issue. Adjustments to identified risk management deficiencies might include: changes to policies, changes to procedures, additional training or enforcement activities, implementation of new technologies, and the like.
  • Efficacy determination module 116 may be configured to determine the efficacy of the adjustment based on a comparison of two or more of the sets of values and an elapsed time between collection of two of the sets of values. For example, if a problem had been identified and adjustments had been made, the degree to which the adjustments eradicated the original problem can be monitored and determined.
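  • A minimal sketch of the comparison such an efficacy determination might perform, assuming a single numeric risk parameter (here a hypothetical count of deficient assets) observed before and after an adjustment, with the elapsed time between snapshots taken into account:

```python
def adjustment_efficacy(before: int, after: int, elapsed_days: int, grace_days: int = 30) -> float:
    """Score an adjustment by how much a deficiency count fell after it was made.

    `before` and `after` are values of the same risk parameter from two snapshots,
    one taken before the adjustment (t1) and one taken after it. Within `grace_days`
    the score is discounted, since the change may not yet have fully propagated;
    after that, the raw reduction fraction is used.
    """
    if before == 0:
        return 1.0  # nothing was deficient to begin with
    reduction = (before - after) / before
    if elapsed_days < grace_days:
        return reduction * (elapsed_days / grace_days)
    return reduction

# t1 precedes the adjustment; t2..t4 follow it (values are a hypothetical deficiency count).
values = {"t1": 40, "t2": 35, "t3": 20, "t4": 6}
print(adjustment_efficacy(values["t1"], values["t2"], elapsed_days=14))   # early, discounted score
print(adjustment_efficacy(values["t1"], values["t4"], elapsed_days=90))   # longer-run effect
```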
  • An algorithm applied to determine the efficacy of the adjustment may vary based on the elapsed time.
  • the algorithm may be a rule.
  • The algorithm may include a precise rule specifying how to solve some problem, according to some implementations. Examples of the algorithm may include one or more of:
  • Server(s) 102, client computing platform(s) 104, and/or assets 118 may be operatively linked via one or more electronic communication links.
  • Electronic communication links may be established, at least in part, via a network such as the Internet and/or other networks. It will be appreciated that this is not intended to be limiting, and that the scope of this disclosure includes implementations in which server(s) 102, client computing platform(s) 104, and/or assets 118 may be operatively linked via some other communication media.
  • Server(s) 102 may include electronic storage 120, one or more processors 122, and/or other components. Server(s) 102 may include communication lines, or ports to enable the exchange of information with a network and/or other computing platforms. Illustration of server(s) 102 in FIG. 1 is not intended to be limiting. Server(s) 102 may include a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to server(s) 102. For example, server(s) 102 may be implemented by a cloud of computing platforms operating together as server(s) 102. Assets 118 may be under the possession and/or control of external entities participating with system 100, and/or other resources. Assets 118 make up the computer system/architecture that is being managed.
  • Electronic storage 120 may comprise non-transitory storage media that electronically stores information.
  • the electronic storage media of electronic storage 120 may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with server(s) 102 and/or removable storage that is removably connectable to server(s) 102 via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.).
  • Electronic storage 120 may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media.
  • Electronic storage 120 may include one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources).
  • Electronic storage 120 may store software algorithms, information determined by processor(s) 122, information received from server(s) 102, information received from client computing platform(s) 104, and/or other information that enables server(s) 102 to function as described herein.
  • Processor(s) 122 may be configured to provide information processing capabilities in server(s) 102.
  • processor(s) 122 may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information.
  • Although processor(s) 122 is shown in FIG. 1 as a single entity, this is for illustrative purposes only.
  • processor(s) 122 may include a plurality of processing units. These processing units may be physically located within the same device, or processor(s) 122 may represent processing functionality of a plurality of devices operating in coordination.
  • Processor(s) 122 may be configured to execute modules 108, 110, 112, 114, 116, and/or other modules.
  • Processor(s) 122 may be configured to execute modules 108, 110, 112, 114, 116, and/or other modules by software; hardware; firmware; some combination of software, hardware, and/or firmware; and/or other mechanisms for configuring processing capabilities on processor(s) 122.
  • the term “module” may refer to any component or set of components that perform the functionality attributed to the module. This may include one or more physical processors during execution of processor readable instructions, the processor readable instructions, circuitry, hardware, storage media, or any other components.
  • Although modules 108, 110, 112, 114, and 116 are illustrated in FIG. 1 as being implemented within a single processing unit, in implementations in which processor(s) 122 includes multiple processing units, one or more of modules 108, 110, 112, 114, and/or 116 may be implemented remotely from the other modules.
  • the description of the functionality provided by the different modules 108, 110, 112, 114, and/or 116 described below is for illustrative purposes, and is not intended to be limiting, as any of modules 108, 110, 112, 114, and/or 116 may provide more or less functionality than is described.
  • Modules 108, 110, 112, 114, and/or 116 may be eliminated, and some or all of their functionality may be provided by other ones of modules 108, 110, 112, 114, and/or 116.
  • processor(s) 122 may be configured to execute one or more additional modules that may perform some or all of the functionality attributed below to one of modules 108, 110, 112, 114, and/or 116.
  • FIG. 2 illustrates a method 200 for determining the efficacy of security measures taken for a computer system, in accordance with one or more implementations.
  • the operations of method 200 presented below are intended to be illustrative. In some implementations, method 200 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 200 are illustrated in FIG. 2 and described below is not intended to be limiting.
  • method 200 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
  • the one or more processing devices may include one or more devices executing some or all of the operations of method 200 in response to instructions stored electronically on an electronic storage medium.
  • the one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of method 200.
  • An operation 202 may include determining a set of risk parameters of the computing system. Operation 202 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to parameter set determination module 108, in accordance with one or more implementations.
  • An operation 204 may include collecting a first set of values of the risk parameters of the computing system at a first time t1. Operation 204 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to set collection module 110, in accordance with one or more implementations.
  • An operation 206 may include determining, based on the first set of parameters, that there is a cyber risk management issue relating to the computing system. Operation 206 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to determining module 112, in accordance with one or more implementations.
  • An operation 208 may include adjusting operating parameters of the computer system to address the cyber risk management issue. Operation 208 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to parameter adjusting module 114, in accordance with one or more implementations.
  • An operation 210 may include collecting a second set of values of the risk parameters of the computing system at a second time t2. Operation 210 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to set collection module 110, in accordance with one or more implementations.
  • An operation 212 may include collecting a third set of values of the risk parameters of the computing system at a third time t3. Operation 212 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to set collection module 110, in accordance with one or more implementations.
  • An operation 214 may include collecting a fourth set of values of the risk parameters of the computing system at a fourth time t4. Operation 214 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to set collection module 110, in accordance with one or more implementations.
  • An operation 216 may include determining the efficacy of the adjustment based on a comparison of two of the sets of values and an elapsed time between collection of two of the sets of values. Operation 216 may be performed by one or more hardware processors configured by machine-readable instructions including a module that is the same as or similar to efficacy determination module 116, in accordance with one or more implementations.
  • Intentional and desirable changes to a cyber risk landscape tend to happen either: on a set schedule (e.g., weekly system patches), as part of a project (e.g., implementation of a new business or security technology), or in response to undesirable changes that have occurred (e.g., in response to notification of a new threat exploit, discovery of non-compliant systems, etc.)
  • The timing and scope of these changes (e.g., the systems or software where these changes are expected to occur) will be well defined.
  • Data from the organization's landscape should confirm that the changes occurred when and where expected. However, if data shows that the intended changes occurred in some places but not others, it strongly suggests that the parts of the landscape where the changes did not occur are poorly managed or controlled.
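  • A small sketch of that comparison (all asset names hypothetical): the expected scope of a scheduled change is compared with where the change was actually observed, and the remainder is flagged as potentially poorly managed.

```python
# Systems where a scheduled change (e.g., a weekly patch) was expected to occur.
expected_scope = {"web-01", "web-02", "db-01", "db-02", "hr-app"}

# Systems where snapshot data shows the change actually occurred.
observed_changed = {"web-01", "web-02", "db-01"}

# The difference marks the parts of the landscape that may be poorly managed or controlled.
suspect_assets = expected_scope - observed_changed
print("Change did not occur where expected:", sorted(suspect_assets))   # ['db-02', 'hr-app']
```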
  • Cybersecurity-related surprises also can be a reliable indicator of an organization's overall cyber risk management maturity.
  • Antivirus technologies (e.g., McAfee™)
  • Sensitive data discovery technologies (e.g., ProofPoint™)
  • Access privilege management technologies (e.g., CyberArk™)
  • Password strength analysis technologies (e.g., Password Meter™)
  • Some assets may show signs of active management (e.g., regular patching, etc.), and yet evidence may exist that suggests an absence of visibility into changes that occur to control conditions, a condition referred to here as "risk management groundhog day" (RMGD).
  • For example, a population of assets may receive regular patching and yet one or more of the assets may revert to an unpatched state soon after the patches have been applied. If this deficient state persists until the next patch cycle, it may suggest that the organization does not monitor control conditions for changes and deficiencies. This can be a particularly important consideration for assets that exist in high-threat landscapes (e.g., are Internet-facing).
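  • The reversion pattern described above can be spotted by comparing control state across successive snapshots; the sketch below (with field and asset names assumed for illustration) flags assets that were compliant after patching but appear deficient again before the next patch cycle.

```python
def reverted_assets(after_patching: dict, before_next_cycle: dict) -> list:
    """Return assets that were compliant after patching but deficient again later.

    Each snapshot maps an asset name to True (patched/compliant) or False. A
    True -> False transition before the next patch cycle suggests the organization
    is not monitoring control conditions for changes and deficiencies.
    """
    return [asset for asset, patched in after_patching.items()
            if patched and not before_next_cycle.get(asset, False)]

after_june_patching = {"web-01": True, "web-02": True, "vpn-gw": True}
mid_july_check = {"web-01": True, "web-02": False, "vpn-gw": False}
print(reverted_assets(after_june_patching, mid_july_check))   # ['web-02', 'vpn-gw']
```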
  • Implementations disclosed herein can indicate whether or not an enterprise's cyber risk management is improving, how fast it is improving (or backsliding), and which controls have had the greatest effect.
  • The number of problem areas, as well as the frequency and severity of rebounding, can provide objective, data-driven evidence regarding an organization's risk management effectiveness.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Debugging And Monitoring (AREA)
EP20865458.2A 2019-09-17 2020-08-05 Systems and methods for monitoring and correcting security practices in a computer system Pending EP4032246A4 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/573,175 US11258828B2 (en) 2018-05-28 2019-09-17 Systems and methods for monitoring and correcting computer system security practices
PCT/US2020/044948 WO2021055112A1 (en) 2019-09-17 2020-08-05 Systems and methods for monitoring and correcting computer system security practices

Publications (2)

Publication Number Publication Date
EP4032246A1 true EP4032246A1 (de) 2022-07-27
EP4032246A4 EP4032246A4 (de) 2023-10-18

Family

ID=74884515

Family Applications (1)

Application Number Priority Date Filing Date Title
EP20865458.2A Pending EP4032246A4 (de) 2019-09-17 2020-08-05 Systeme und verfahren zur überwachung und korrektur von sicherheitspraktiken in einem computersystem

Country Status (4)

Country Link
EP (1) EP4032246A4 (de)
AU (1) AU2020348194A1 (de)
CA (1) CA3150264A1 (de)
WO (1) WO2021055112A1 (de)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6735701B1 (en) * 1998-06-25 2004-05-11 Macarthur Investments, Llc Network policy management and effectiveness system
US20150033323A1 (en) * 2003-07-01 2015-01-29 Securityprofiling, Llc Virtual patching system, method, and computer program product
US20130179936A1 (en) * 2012-01-09 2013-07-11 International Business Machines Corporation Security policy management using incident analysis
DE102012209829A1 (de) 2012-04-20 2013-10-24 Robert Bosch Gmbh Motor vehicle on-board electrical system with subnetworks and generator arrangement, generator arrangement and method for operating an on-board electrical system
US10135874B1 (en) * 2016-11-16 2018-11-20 VCE IP Holding Company LLC Compliance management system and method for an integrated computing system

Also Published As

Publication number Publication date
EP4032246A4 (de) 2023-10-18
CA3150264A1 (en) 2021-03-25
AU2020348194A1 (en) 2022-03-31
WO2021055112A1 (en) 2021-03-25

Similar Documents

Publication Publication Date Title
US11693964B2 (en) Cyber security using one or more models trained on a normal behavior
Allodi et al. Security events and vulnerability data for cybersecurity risk estimation
EP3211854B1 (de) Cyber-sicherheit
Mu et al. An intrusion response decision-making model based on hierarchical task network planning
Onwubiko Cyber security operations centre: Security monitoring for protecting business and supporting cyber defense strategy
Sendi et al. Real time intrusion prediction based on optimized alerts with hidden Markov model
WO2019231826A1 (en) Systems and methods for determining the efficacy of computer system security policies
Beigh et al. Intrusion Detection and Prevention System: Classification and Quick
Dressler et al. Operational data classes for establishing situational awareness in cyberspace
US11979426B2 Predictive vulnerability management analytics, orchestration, automation and remediation platform for computer systems, networks and devices
Judijanto et al. Edge of Enterprise Architecture in Addressing Cyber Security Threats and Business Risks
Bristow A sans 2021 survey: Ot/ics cybersecurity
US11258828B2 (en) Systems and methods for monitoring and correcting computer system security practices
Crowley et al. The Definition of SOC-cess
Mitsarakis Contemporary Cyber Threats to Critical Infrastructures: Management and Countermeasures
Panguluri et al. Cyber security: protecting water and wastewater infrastructure
EP4032246A1 Systems and methods for monitoring and correcting security practices in a computer system
Akheel Vulnerability Assessment and Analysis of SCADA and Foundation Fieldbus on Industrial Control System (ICS) Networks: A Literature Review.
Ikuomola et al. A framework for collaborative, adaptive and cost sensitive intrusion response system
Irfan et al. Information Security Framework Targeting DDOS attacks in Financial Institutes
US20240134990A1 (en) Monitoring and remediation of cybersecurity risk based on calculation of cyber-risk domain scores
Malik Cybersecurity: Security Automation and Continous Monitoring
Dimitrios Security information and event management systems: benefits and inefficiencies
Jumaat Incident prioritisation for intrusion response systems
Masera et al. ICT aspects of power systems and their security

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220311

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Free format text: PREVIOUS MAIN CLASS: H04L0029060000

Ipc: H04L0009400000

A4 Supplementary search report drawn up and despatched

Effective date: 20230919

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 21/57 20130101ALI20230913BHEP

Ipc: G06F 11/00 20060101ALI20230913BHEP

Ipc: H04L 9/40 20220101AFI20230913BHEP

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: RISKLENS, LLC