CN111164597A - Dynamic reassembly of patch groups using stream clustering - Google Patents
- Publication number
- CN111164597A (application CN201880062930A)
- Authority
- CN
- China
- Prior art keywords
- risk
- server
- computer
- server group
- group
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/561—Virus type analysis
- G06F11/3404—Recording or statistical evaluation of computer activity, e.g. of down time or of input/output operation, for parallel or distributed programming
- G06F11/3495—Performance evaluation by tracing or monitoring for systems
- G06F21/566—Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
- G06F21/568—Computer malware detection or handling, e.g. anti-virus arrangements eliminating virus, restoring damaged files
- G06F21/577—Assessing vulnerabilities and evaluating computer system security
Abstract
Techniques are provided herein for dynamically assembling server groups that can be patched together using a stream clustering algorithm, along with a learning component that reuses repeatable patterns through machine learning. In one example, in response to a first risk associated with a first server device, a risk assessment component patches a server group to mitigate vulnerabilities of the first server device and a second server device, wherein the server group includes the first server device and the second server device. Further, a monitoring component monitors data associated with a second risk to the server group to mitigate the second risk to the server group.
Description
Technical Field
The present disclosure relates to cloud management, and more particularly, to dynamic reassembly of patch groups (patch groups) using stream clustering.
Background
Machines operating within the same environment may benefit from the same software patch (e.g., patch set) to mitigate common vulnerabilities. For example, an application may execute on multiple servers. Thus, a web server, database, and/or cache may run applications on multiple servers or different physical machines, which may increase vulnerability to malware, viruses, attacks, and the like. Conventionally, such machines are patched one at a time.
Machines in a patch group share a common vulnerability and therefore need to be processed together. Patch groups have traditionally been created manually and statically based on the knowledge of subject matter experts, but this does not reflect different granularities, such as shared infrastructure, networks, workloads, and the like.
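As a hedged illustration of the alternative to static, expert-defined groups (all machine names and vulnerability identifiers below are invented, not from the patent), patch groups might be derived automatically by merging machines that share any vulnerability identifier:

```python
from collections import defaultdict


def derive_patch_groups(machine_vulns):
    """Group machines that share at least one vulnerability identifier.

    machine_vulns: dict mapping machine name -> set of vulnerability IDs.
    Returns a list of patch groups (sets of machine names), where two
    machines land in the same group if they are linked by a chain of
    shared vulnerabilities.
    """
    # Map each vulnerability to the machines exposed to it.
    vuln_to_machines = defaultdict(set)
    for machine, vulns in machine_vulns.items():
        for v in vulns:
            vuln_to_machines[v].add(machine)

    # Union-find: merge machines that share a vulnerability.
    parent = {m: m for m in machine_vulns}

    def find(m):
        while parent[m] != m:
            parent[m] = parent[parent[m]]  # path halving
            m = parent[m]
        return m

    for machines in vuln_to_machines.values():
        machines = list(machines)
        for other in machines[1:]:
            ra, rb = find(machines[0]), find(other)
            if ra != rb:
                parent[ra] = rb

    groups = defaultdict(set)
    for m in machine_vulns:
        groups[find(m)].add(m)
    return list(groups.values())
```

For instance, two web servers and a database linked by overlapping vulnerability IDs fall into one group, while an unrelated cache machine forms its own group.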
Accordingly, there is a need in the art to address the above-mentioned problems.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of one or more embodiments of the disclosure. This summary is not intended to identify key or critical elements or to delineate any scope of the particular embodiments or any scope of the claims. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later. In one or more embodiments described herein, an apparatus, system, computer-implemented method, and/or computer program product is described that facilitates dynamically reorganizing a patch group using stream clustering.
Viewed from a first aspect, the present invention provides a system for managing a group of servers, comprising: a memory storing computer-executable components; and a processor that executes computer-executable components stored in the memory, wherein the computer-executable components comprise: a risk assessment component operable to: in response to a first risk associated with a first server device, patching a server group to mitigate vulnerabilities of the first server device and a second server device, wherein the server group includes the first server device and the second server device; and a monitoring component operable to: data associated with a second risk to the server group is monitored to mitigate the second risk to the server group.
Viewed from another aspect, the present invention provides a computer-implemented method for managing a group of servers, comprising: in response to a first risk associated with a first server device, patching, by a device operatively coupled to a processor, a server group to mitigate vulnerabilities of the first server device and a second server device, wherein the server group includes the first server device and the second server device; and monitoring, by the device, data associated with the second risk to the server group to mitigate the second risk to the server group.
Viewed from another aspect, the present invention provides a computer program product for managing a server group, the computer program product comprising a computer readable storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method for performing the steps of the present invention.
Viewed from another aspect, the present invention provides a computer program stored on a computer readable medium and loadable into the internal memory of a digital computer, comprising software code portions, when said program is run on a computer, for performing the steps of the invention.
According to an embodiment, a system may include a memory storing computer-executable components, and a processor executing the computer-executable components stored in the memory. The computer-executable components of the system may include a risk management component that, in response to a first risk associated with a first server device, patches a server group to mitigate vulnerabilities of the first server device and a second server device, wherein the server group includes the first server device and the second server device. The computer-executable components of the system may also include a monitoring component that monitors data associated with a second risk to the group of servers to mitigate the second risk to the group of servers.
According to another embodiment, a computer program product for facilitating server group patching may include a computer-readable storage medium having program instructions embodied therein. The program instructions may be executable by a processor, and the processor may patch a server group to mitigate vulnerabilities of a first server device and a second server device in response to a first risk associated with the first server device, wherein the server group includes the first server device and the second server device. The program instructions are also executable to monitor, by the processor, data associated with a second risk to the server group to mitigate the second risk to the server group.
According to yet another embodiment, a computer-implemented method is provided. The computer-implemented method may include patching, by a device operatively coupled to a processor, a server group to mitigate vulnerabilities of a first server device and a second server device in response to a first risk associated with the first server device, wherein the server group includes the first server device and the second server device. The computer-implemented method may also include monitoring, by the device, data associated with a second risk to the server group to mitigate the second risk to the server group.
According to yet another embodiment, a system may include a memory storing computer-executable components, and a processor executing the computer-executable components stored in the memory. The computer-executable components of the system may include a risk management component that, in response to a first risk associated with a first server device, patches a server group to mitigate vulnerabilities of the first server device and a second server device, wherein the server group includes the first server device and the second server device. The computer-executable components of the system may also include a monitoring component that monitors data associated with a second risk to the group of servers to mitigate the second risk to the group of servers. Further, the computer-executable components of the system can also include a learning component that analyzes risk data associated with prior risks received from workstation devices, resulting in risk predictions.
According to yet another embodiment, a computer program product for facilitating server group patching may include a computer-readable storage medium having program instructions embodied therein. The program instructions may be executable by a processor, and the processor may patch a server group to mitigate vulnerabilities of a first server device and a second server device in response to a first risk associated with the first server device, wherein the server group includes the first server device and the second server device. The program instructions are also executable to monitor, by the processor, data associated with a second risk to the server group to mitigate the second risk to the server group. The program instructions may also be executable to analyze, by the processor, risk data associated with a previous risk received from the workstation device, resulting in a risk prediction.
In some embodiments, one or more of the above elements described in connection with a system, computer-implemented method, and/or computer program may be embodied in different forms, such as a computer-implemented method, computer program product, or system.
Drawings
The invention will now be described, by way of example only, with reference to preferred embodiments, as illustrated in the following figures:
FIG. 1 illustrates a block diagram of an example non-limiting system that facilitates grouping and patching of machines in accordance with one or more embodiments described herein.
FIG. 2 illustrates another block diagram of an example non-limiting system that facilitates machine group analysis in accordance with one or more embodiments described herein.
FIG. 3 illustrates an additional block diagram of an example non-limiting machine patching component in accordance with one or more embodiments described herein.
FIG. 4 illustrates yet another block diagram of an example non-limiting system that facilitates flow clustering in accordance with one or more embodiments described herein.
FIG. 5 illustrates an additional block diagram of an example non-limiting system that facilitates micro-streaming clustering in accordance with one or more embodiments described herein.
FIG. 6 illustrates a flow diagram of an example non-limiting process overview in accordance with one or more embodiments described herein.
FIG. 7 illustrates a flow diagram of an example non-limiting flow diagram that facilitates evaluation of a risk measure according to one or more embodiments described herein.
FIG. 8 illustrates a flow diagram of an example non-limiting computer-implemented method of facilitating grouping and patching of machines according to one or more embodiments described herein.
FIG. 9 sets forth a flow chart of another exemplary non-limiting computer-implemented method of facilitating grouping and patching of machines according to one or more embodiments described herein.
FIG. 10 illustrates a flow diagram of another example non-limiting computer-implemented method of facilitating grouping and patching of machines according to one or more embodiments described herein.
FIG. 11 illustrates a block diagram of an example non-limiting operating environment in which one or more embodiments described herein can be facilitated.
FIG. 12 shows a block diagram of an example non-limiting cloud computing operating environment in accordance with one or more embodiments described herein.
FIG. 13 illustrates a block diagram of an example non-limiting abstraction model layer in accordance with one or more embodiments described herein.
Detailed Description
The following detailed description is merely illustrative and is not intended to limit the embodiments and/or the application or uses of the embodiments. Furthermore, there is no intention to be bound by any expressed or implied information presented in the preceding background or summary or the detailed description.
One or more embodiments are now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a more thorough understanding of one or more embodiments. In various instances, however, it may be evident that one or more embodiments may be practiced without these specific details.
It should be understood at the outset that although this disclosure includes a detailed description of cloud computing, implementation of the techniques set forth therein is not limited to a cloud computing environment, but may be implemented in connection with any other type of computing environment, whether now known or later developed.
Cloud computing is a service delivery model for convenient, on-demand network access to a shared pool of configurable computing resources. Configurable computing resources are resources that can be deployed and released quickly with minimal administrative cost or interaction with a service provider, such as networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services. Such a cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
Machines operating within the same environment may benefit from the same software patch (e.g., patch group) to mitigate common vulnerabilities. For example, an application may execute on multiple servers. Thus, a web server, database, and/or cache may run applications on multiple servers or different physical machines, which may increase vulnerabilities to malware, viruses, attacks, and the like. Because some machines may share a common vulnerability, grouping the machines together and applying patches to the group may create better defense and/or security against the common vulnerability, rather than patching the machines one at a time.
One or more embodiments described herein may patch a set of machines that share a common vulnerability. The patch groups may be defined at different granularities and may be dynamically adjusted. One or more embodiments described herein include systems, computer-implemented methods, apparatuses, and computer program products that facilitate patching of a machine.
FIG. 1 illustrates a block diagram of an example non-limiting system that facilitates grouping and patching of machines in accordance with one or more embodiments described herein. In various embodiments, the system 100 may be associated with or included in a data analysis system, a data processing system, a graph analysis system, a graph processing system, a big data system, a social networking system, a speech recognition system, an image recognition system, a graph modeling system, a bioinformatics system, a data compression system, an artificial intelligence system, an authentication system, a syntactic pattern recognition system, a medical system, a health monitoring system, a networking system, a computer networking system, a communication system, a router system, a server system, or the like.
As shown, the system 100 may include a control device 112 and one or more physical machines 104, 106, 110 communicatively coupled to one or more cloud networks 102, 108. In some embodiments, the cloud networks 102, 108 may include virtual machines (not shown). In one or more embodiments, the control device 112, the physical machines 104, 106, 110, and/or one or more virtual machines may be electrically and/or communicatively coupled to each other.
In one embodiment, the control device 112 may patch machines by identifying machines that share a common risk. Common risks may include, but are not limited to: malware, viruses, memory leaks, cyber attacks, and the like. It should be understood, with reference to this disclosure, that a machine may be a server device, a virtual machine, a Central Processing Unit (CPU), a physical machine, and so forth. In one embodiment, the control device 112 may group the physical machines 104, 106, 110 in the cloud network 102. The physical machines 104, 106, 110 may also represent virtual machines.
In some embodiments, the physical machines 104, 106, 110 may be grouped by the control device 112 based on common characteristics of the physical machines 104, 106, 110 and/or common threats or vulnerabilities. The control device 112 is capable of monitoring individual physical machines and/or groups of physical machines. For example, if an Operating System (OS) of the physical machine experiences a memory leak, the control device 112 may receive data associated with a vulnerability of the physical machine 104. Because the physical machine 106 is grouped with the physical machine 104 in the cloud network 102, the control device 112 may determine that the physical machine 106 may also be at risk of a memory leak, and that a vulnerability of the physical machine 104 may expose the OS of the physical machine 106 to an attack. Accordingly, the control device 112 may apply a public patch to software, middleware, and/or an OS running within an environment associated with the cloud network 102 to mitigate the vulnerability. It should also be understood that from time to time, the control device 112 may perform a system check on the physical machines 104, 106, 110 to proactively determine whether there are vulnerabilities associated with one or more of the physical machines, rather than waiting for data to be sent from the physical machines 104, 106, 110.
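The group-wide mitigation and periodic system checks described above can be sketched as follows. This is purely illustrative; the `scan`, `choose_patch`, and `apply_patch` callables are hypothetical stand-ins for the control device's internals, not an API from the patent:

```python
def proactive_check(groups, scan, choose_patch, apply_patch):
    """For each group, scan every member; if any member reports a
    vulnerability, apply one common patch to the whole group rather
    than waiting for each machine to report individually.

    groups: iterable of sets of machine names.
    scan(machine) -> set of vulnerability identifiers found.
    choose_patch(vulns) -> patch identifier covering those vulns.
    apply_patch(machine, patch) -> applies the patch.
    Returns a list of (group, patch) pairs that were patched.
    """
    patched = []
    for group in groups:
        # One vulnerable member exposes the whole group.
        found = {v for machine in group for v in scan(machine)}
        if found:
            patch = choose_patch(found)
            for machine in group:
                apply_patch(machine, patch)
            patched.append((frozenset(group), patch))
    return patched
```

For example, if one machine in a two-machine group exhibits a memory leak, both machines receive the common patch while an unaffected group is left alone.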
In some embodiments, cloud operations such as migration, extensibility, snapshot, and replication may result in a change in the state of an application or physical machine. For example, the control device 112 may determine that a state (e.g., in use, restarted, etc.) of an application or physical machine has changed. In this way, a change in state may change the data structure of the environment, and in one or more embodiments described herein, the group change is prompted by the control device 112, resulting in a dynamic environment change. For example, at a first point in time, a physical machine may run an application in a first geographic area, and at a second point in time, the application may be run by another physical machine in a second geographic area. Because applications may run in various geographic locations due to migration, extending multiple instances of an application into a geographically distributed data center may increase vulnerability exposure. Thus, in various embodiments described herein, a patch set may be dynamically reassembled whenever there is a change in the environment.
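The dynamic reassembly described above rests on stream clustering, where group structure is updated incrementally as machines change state. A minimal online micro-clustering sketch is shown below; the fixed radius threshold, flat feature vectors, and incremental-centroid update are illustrative assumptions, not the patent's actual algorithm:

```python
import math


class MicroClusterer:
    """Minimal online (stream) clustering sketch: each incoming machine
    feature vector joins the nearest micro-cluster within `radius`,
    otherwise it seeds a new micro-cluster. Centroids are updated
    incrementally, so group membership can shift as the environment
    changes, without re-clustering from scratch."""

    def __init__(self, radius):
        self.radius = radius
        self.clusters = []  # list of (centroid, member_count)

    def add(self, point):
        """Assign one feature vector; return its cluster index."""
        best, best_d = None, None
        for i, (centroid, _) in enumerate(self.clusters):
            d = math.dist(centroid, point)
            if best_d is None or d < best_d:
                best, best_d = i, d
        if best is not None and best_d <= self.radius:
            centroid, n = self.clusters[best]
            # Incremental mean update of the centroid.
            new_c = [(c * n + p) / (n + 1) for c, p in zip(centroid, point)]
            self.clusters[best] = (new_c, n + 1)
            return best
        self.clusters.append((list(point), 1))
        return len(self.clusters) - 1
```

A machine whose features drift far from its current centroid would, on its next arrival in the stream, seed or join a different micro-cluster, which is one way the patch groups can be reassembled dynamically.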
Thus, in response to another physical machine 110 exhibiting the same characteristics and/or vulnerabilities as the physical machines 104, 106 associated with the cloud network 102, the control device 112 may group the physical machines into the cloud network 108, causing a dynamic environment change. In a dynamically changing environment, changes in the space of physical machines or data structures are also common. Thus, the control device 112 may send information to the physical machines 104, 106, 110 to cause changes in the set of patches and/or to cause the patches to be applied to software associated with the physical machines 104, 106, 110. A patch group is a group of machines operating within the same environment that may benefit from the same software patch to mitigate a common vulnerability. Because some machines may share a common vulnerability, grouping machines together and applying patches to the group may create better defense and/or security against the common vulnerability, rather than individually fixing the machines.
Patches may be developed and sent to and/or received by a machine to fix or mitigate machine vulnerabilities. A patch may be rated according to the vulnerability and the effect of the patch on the fix. Thus, the severity of a software vulnerability may be expressed as a Common Vulnerabilities and Exposures (CVE) number. For example, CVE relates to scoring vulnerabilities (e.g., low, medium, high), and a patch may be associated with a CVE number (e.g., a score of 0-10). The higher the risk of exposure, the higher the CVE number, indicating a more pressing need for remediation. For example, if a particular physical machine has a CVE of ten, a patch with a rating of ten may be applied to that physical machine to mitigate the vulnerability. Furthermore, patches may be applied to software running in any environment, which is particularly important for cloud network 102, 108 operations, as anyone can access the cloud.
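The low/medium/high rating and the matching of a patch to a 0-10 score can be sketched as follows. The band thresholds below are assumptions borrowed from common CVSS-style practice, and the "lowest sufficient rating" selection rule is an invented heuristic, neither taken from the patent:

```python
def severity_band(cve_score):
    """Map a 0-10 vulnerability score to a qualitative band
    (thresholds are assumed, CVSS-style: <4 low, <7 medium, else high)."""
    if not 0 <= cve_score <= 10:
        raise ValueError("score must be in [0, 10]")
    if cve_score < 4:
        return "low"
    if cve_score < 7:
        return "medium"
    return "high"


def select_patch(cve_score, patches):
    """Pick a patch whose rating meets or exceeds the vulnerability
    score, preferring the lowest sufficient rating.

    patches: dict mapping patch name -> rating (0-10).
    Returns the chosen patch name, or None if no patch covers the score.
    """
    candidates = [(rating, name) for name, rating in patches.items()
                  if rating >= cve_score]
    return min(candidates)[1] if candidates else None
```

So a machine with a score of ten would receive a rating-ten patch, matching the example in the text, while a medium-severity score could be covered by a lower-rated patch.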
In a cloud network 102, 108, a machine may have an application running on multiple servers. Thus, if one server is compromised, the other servers may be compromised. For example, if a firewall associated with a server is breached, one or more of the servers behind the firewall may be seen (not just the server that experienced the breach). Thus, patching a group of servers that share a common vulnerability may generate system efficiency. Based on the vulnerability and/or the CVE, the control device 112 may accurately determine what patch to use to mitigate the vulnerability.
The system 100 may be used to solve problems that are highly technical in nature (e.g., software patching, machine grouping, stream clustering, etc.) using hardware and/or software that are not abstract and cannot be performed by humans as a set of mental actions due to, for example, the processing power required to facilitate machine grouping and patching. Further, some of the processes performed (e.g., computer processing, vulnerability pattern recognition, etc.) may be performed by a special purpose computer for performing defined tasks related to memory operations. For example, a special purpose computer may be employed to perform tasks related to software patching, etc. The special purpose computer may automatically adjust the cloud network environment in response to an indication that the physical machine 104, 106, 110 is susceptible to the vulnerability.
FIG. 2 illustrates another block diagram of an example non-limiting system 200 that facilitates machine group analysis in accordance with one or more embodiments described herein. Repeated descriptions of similar elements employed in other embodiments described herein are omitted for the sake of brevity.
In another embodiment, the control device 112 may perform machine grouping based on characteristics associated with the physical machine. For example, as shown in fig. 2, group a physical machines may be a workload group of machines, group B physical machines may be a network group of machines, and group C physical machines may be an infrastructure group of machines.
Although the control device 112 may group the physical machines individually according to their function (e.g., workload, network, infrastructure, etc.), some physical machines may include multiple characteristics and be grouped into multiple groups. For example, two physical machines 110 may be grouped into all three groups A, B, C, so that the physical machines 110 lie in the overlap of all three groups A, B, C. Consequently, if any other physical machine 104, 202B, 202C within any of the groups A, B, C is exposed to a vulnerability, the two physical machines 110 associated with all three groups A, B, C are also exposed to the same vulnerability. Likewise, if any other physical machine 104, 202B, 202C within any group receives a particular patch from the control device 112, both physical machines 110 may also receive the same patch from the control device 112. Similarly, the physical machine 104 is associated with groups A and B. Thus, if the physical machine 202B is exposed to a vulnerability, the physical machine 104 may also be exposed to that vulnerability, and patches applied to the physical machine 202B should also be applied to the physical machine 104. It should be understood that although every variation of the foregoing scenario is not discussed with respect to FIG. 2, the same principles apply to other physical machines grouped in a similar manner.
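The overlap logic above can be sketched as a simple set computation. The group and machine names below are invented placeholders echoing the labels of FIG. 2:

```python
def machines_to_patch(group_members, patched_machine):
    """Given possibly overlapping groups, return every other machine
    that shares at least one group with the machine that just received
    a patch, since those machines share its exposure.

    group_members: dict mapping group name -> set of machine names.
    """
    affected = set()
    for members in group_members.values():
        if patched_machine in members:
            affected |= members
    affected.discard(patched_machine)
    return affected
```

With machines 110 in all three groups, patching a machine that appears only in group C still pulls in the machines 110 via the overlap, while patching a machine in groups A and B pulls in its A- and B-group peers.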
FIG. 3 illustrates an additional block diagram of an example non-limiting machine patching component in accordance with one or more embodiments described herein. Repeated descriptions of similar elements employed in other embodiments described herein are omitted for the sake of brevity.
In the embodiment illustrated in FIG. 3, the system 300 may include a machine patching component 302 that may receive input data from a remote workstation device 316. It should be noted that the sub-components (e.g., the monitoring component 304, the learning component 310, the adjustment component 306, the risk assessment component 308, the pattern database 318, and the patch database 320), the processor 314, and the memory 312 can be electrically and/or communicatively coupled to one another. It should also be noted that in alternative embodiments, other components including, but not limited to, subcomponents, processor 314 and/or memory 312 may be external to the machine patching component 302. For example, in another embodiment, the pattern database 318 and the patch database 320 may be external to the machine patching component 302.
In one aspect of FIG. 3, the risk assessment component 308 can assess risk associated with a server device. The risk may be determined based on previous risks, software, manual input, degradation in performance, and the like. For example, if the control device 112 receives an indication that a physical machine 104, 106, 110 has experienced a performance degradation, the control device 112 may scan the physical machine 104, 106, 110 for indications of malware. If malware is found, the control device 112 may determine a CVE associated with the malware and send a patch related to that CVE. Once the risk assessment component 308 determines the risk with respect to the first server device, the risk assessment component 308 can patch the server group to mitigate vulnerabilities of the first server device and the second server device of the server group. The monitoring component 304 can monitor risks associated with the various machines. For example, if a third server device of the server group experiences a vulnerability, the monitoring component 304 may observe the vulnerability and indicate that the other server devices of the server group should be patched concurrently with the third server device.
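The risk-assessment flow described above (performance degradation, then a malware scan, then a CVE lookup, then a group-wide patch) can be sketched as follows; the scan results, CVE identifier, and patch identifier are illustrative assumptions, not part of any actual implementation:

```python
# Hedged sketch of the risk-assessment flow: a degraded server is scanned;
# found malware is mapped to a CVE, and the patch for that CVE is applied
# to every server in the group. All names below are hypothetical.
cve_for_malware = {"trojan.x": "CVE-2017-1234"}
patch_for_cve = {"CVE-2017-1234": "patch-9981"}

def assess_and_patch(degraded_server, server_group, scan_results):
    """Return a mapping of every server in the group to the patch pushed
    to it, or None if the scan found no malware."""
    malware = scan_results.get(degraded_server)
    if malware is None:
        return None
    cve = cve_for_malware[malware]
    patch = patch_for_cve[cve]
    # Patch every server in the group, not just the degraded one, since
    # grouped servers are assumed to share the same vulnerability.
    return {server: patch for server in server_group}

result = assess_and_patch(
    "server1", ["server1", "server2", "server3"], {"server1": "trojan.x"})
```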
The monitoring component 304 can further include a learning component 310, wherein the learning component 310 can analyze previous risks and input data received from the remote workstation device 316 to predict future risks or vulnerabilities to the server device. For example, if the physical machine 104 was previously susceptible to malware attacks, data identifying the physical machine 104 and the software patches mitigating the malware attacks may be stored according to a learning algorithm. The learning algorithm may then use the data to determine whether the same or a similar malware attack may occur on the physical machine 104 and share the same or a similar patch with the grouped physical machines 106.
The learning component 310 can employ a probabilistic and/or statistical-based analysis to prognose or infer an action that can be performed. A Support Vector Machine (SVM) is one example of a classifier that may be employed. The SVM may operate by finding a hypersurface in the space of possible inputs. Other directed and undirected classification approaches, including, for example, naïve Bayes, Bayesian networks, decision trees, neural networks, fuzzy logic models, and probabilistic classification models providing different patterns of independence, can be employed. Classifying risk according to CVE number, as used herein, may also include statistical regression for developing a priority model. The disclosed aspects can employ classifiers that are explicitly trained (e.g., via generic training data) as well as implicitly trained (e.g., via observing the use of an entry as it relates to software code, receiving extrinsic information, etc.).
The learning component 310 can utilize SVMs to employ a learning-based algorithm for multi-label group patterns. A multi-label group pattern is a group pattern associated with different environments. For example, a group pattern may be associated with a network infrastructure (network group) and still be associated with an infrastructure group. The inputs to the SVM may include: a labeled set D_l, an unlabeled set D_u, a number of steps T, and a number of instances S per iteration, with the iteration counter initialized to t = 1. While t < T, a multi-label SVM classifier f may be trained based on the training data D_l. For one or more instances x in D_u, the SVM may predict the label vector y using a Loss Reduction (LR) based prediction method:

Equation 1: D*_s = argmax_{D_s} Σ_{x ∈ D_s} Σ_{i=1}^{k} (1 − y_i f_i(x)) / 2,

subject to y_i ∈ {−1, 1} (the label assignment that maximizes the expected loss reduction with the greatest confidence).

The most confident label vector y can be used to calculate the expected loss reduction:

Equation 2: score(x) = Σ_{i=1}^{k} (1 − y_i f_i(x)) / 2.

The learning component 310 can sort score(x) over all x in D_u in descending order. A set D*_s of the S instances with the largest scores may then be selected (or entered via the remote workstation device 316), and the training set may be updated as D_l ← D_l + D*_s. Thereafter, the classifier trained on D_l can be retrained by the learning component 310, with t = t + 1. Here, f_i(x) is the SVM classifier associated with class i, and the data points x_1, ..., x_n are feature vectors for one or more grouping patterns, such as network segmentation, workload, and data drive. Learning algorithms can thereby be used to define group patterns, reuse history to capture patterns, and reuse patches for known vulnerabilities.
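A minimal sketch of the selection step defined by Equations 1 and 2 follows, assuming linear per-class classifiers f_i(x) = w_i · x and using the most confident label vector y_i = sign(f_i(x)), under which each term (1 − y_i f_i(x))/2 reduces to (1 − |f_i(x)|)/2. The weights and instances below are toy values, not data from the disclosure:

```python
import numpy as np

def expected_loss_reduction_scores(W, X):
    """Score each unlabeled instance per Equation 2.

    W : (k, d) array of per-class linear SVM weights, so f_i(x) = W[i] @ x.
    X : (n, d) array of unlabeled instances.
    With the most confident labels y_i = sign(f_i(x)), the term
    (1 - y_i * f_i(x)) / 2 reduces to (1 - |f_i(x)|) / 2.
    """
    margins = X @ W.T                    # (n, k) values f_i(x)
    return np.sum((1.0 - np.abs(margins)) / 2.0, axis=1)

def select_batch(W, X, s):
    """Pick the s instances with the largest score (Equation 1) to label
    and add to the training set D_l."""
    scores = expected_loss_reduction_scores(W, X)
    return np.argsort(scores)[::-1][:s]  # indices, descending score

# Two classes (e.g. "network group", "infrastructure group"), toy weights.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
X = np.array([[2.0, 2.0],    # large margins on both labels -> low score
              [0.1, 0.1]])   # uncertain on both labels -> high score
batch = select_batch(W, X, s=1)
```

Instances the classifier is least certain about score highest and are selected first, which is the intended effect of the loss-reduction criterion.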
The adjustment component 306 can modify and/or adjust the cloud network 102, 108 environment. For example, if a particular physical machine 104 is susceptible to an elevated risk factor, the adjustment component 306 may remove the physical machine 104 from the group of physical machines 106, 110 to mitigate the risk. It should be noted that the adjustment component 306 can also add a physical machine 110 to a particular group of physical machines 104, 106 such that patches associated with the physical machine 110 can be applied to the other physical machines 104, 106. Further, the remote workstation device 316 can be used to provide additional input data for analysis by the machine patching component 302. For example, user input may override the automated process in response to the automated process conflicting with the user input from the remote workstation device 316.
Input data from the remote workstation device 316 can be sent to the learning component 310 for analysis and reuse by the machine patching component 302. Input data from the remote workstation device 316 may also be stored in the patch database 320 and/or the pattern database 318. The pattern database 318 may be configured to store data associated with patch groups, clusters, dynamic environments, and identify patterns associated therewith. For example, patterns identified by the learning component 310 can be stored in the pattern database 318. Patch database 320 may be configured to store data relating to various patches and which vulnerabilities they mitigate for certain patch groups.
Aspects of processor 314 may constitute machine-executable component(s) embodied within machine(s), e.g., in one or more computer-readable medium(s) associated with the machine(s). When executed by one or more machines (e.g., computer(s), computing device, virtual machine(s), etc.), such component(s) can cause the machine(s) to perform the operations described by machine patching component 302. In an aspect, the machine patching component 302 may also include a memory 312 that stores computer-executable components and instructions.
FIG. 4 illustrates yet another block diagram of an example non-limiting system 400 that facilitates stream clustering in accordance with one or more embodiments described herein. Repeated descriptions of similar elements employed in other embodiments described herein are omitted for the sake of brevity.
The system 400 includes a data abstraction component 402, a data structure statistical summary component 404, and a clustering component 406, wherein the aforementioned components are electrically and/or communicatively coupled to each other.
In response to the control device 112 determining that the physical machine 104 is experiencing a lag, the control device 112 may generate and send control information to the physical machine 106 to cause the physical machine 106 to increase its operating speed to compensate for the degraded performance of the physical machine 104 (e.g., a dynamic environmental change). In this way, the control device 112 may group the physical machines 104 and 106 into different groups. Because data points may be received continuously in response to dynamic environmental changes, the data stream may be processed as data chunks. Clustering algorithms that create and/or delete groups of physical machines may be used to modify the cloud networks 102, 108. The continuous data stream may represent, and cause, changes in the environment. Thus, when a clustering algorithm is executed (on micro-clusters, as discussed later with reference to FIG. 5), the data abstraction component 402 can summarize the data as input to the algorithm. Because the data can be specific or customized to its data source, the data can be formatted according to a target data structure of the data structure statistical summary component 404 (e.g., average, frequency, etc.). The clustering component 406 can then receive the formatted data to perform clustering.
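The data-abstraction step described above can be sketched as follows: the unbounded stream is cut into chunks, and each chunk is reduced to the summary statistics (e.g., count, average, frequency) expected by the clustering component. The chunk size, machine identifiers, and field names are illustrative assumptions:

```python
# Sketch of the data-abstraction step: an unbounded stream of per-machine
# measurements is cut into fixed-size chunks, and each chunk is summarized
# into the target data structure consumed by the clustering component.
from collections import Counter

CHUNK_SIZE = 4  # illustrative; real chunk sizes would be tuned

def summarize_chunk(chunk):
    """Reduce a chunk of (source, value) data points to summary statistics."""
    values = [value for _, value in chunk]
    return {
        "count": len(chunk),
        "mean": sum(values) / len(values),
        "frequency": Counter(source for source, _ in chunk),
    }

def abstract_stream(stream):
    """Yield one summary per CHUNK_SIZE data points; a trailing partial
    chunk is summarized as well."""
    chunk = []
    for point in stream:
        chunk.append(point)
        if len(chunk) == CHUNK_SIZE:
            yield summarize_chunk(chunk)
            chunk = []
    if chunk:
        yield summarize_chunk(chunk)

stream = [("m104", 1.0), ("m104", 3.0), ("m106", 2.0), ("m106", 2.0),
          ("m104", 5.0)]
summaries = list(abstract_stream(stream))
```

Each summary, rather than the raw points, would then be handed to the clustering component 406, keeping memory bounded even though the stream is not.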
FIG. 5 illustrates an additional block diagram of an example non-limiting system that facilitates micro-cluster stream clustering in accordance with one or more embodiments described herein. Repeated descriptions of similar elements employed in other embodiments described herein are omitted for the sake of brevity.
FIG. 6 illustrates a flow diagram of an example non-limiting process overview in accordance with one or more embodiments described herein. Repeated descriptions of similar elements employed in other embodiments described herein are omitted for the sake of brevity.
The method 600 may be used to facilitate the above-described systems. At element 602, the environment in which the physical machines operate can be discovered, noting that the environment is subject to dynamic changes. At element 604, the system can run a commonality analysis to compose patch group sets and shared elements (e.g., via the risk assessment component 308). The commonality analysis may be based on perceived vulnerabilities associated with the physical machines. At element 606, based on the data received from the patch database 320, the physical machines may be grouped into a group patch set, and at element 608, a risk measure for the group patch set may be evaluated (e.g., via the risk assessment component 308).
Based on the risk measure at element 608, the patch policy may be applied to the group patch set at element 610 (e.g., via the monitoring component 304). At element 612, servers can be monitored (e.g., for server additions, server deletions, etc.) and a micro-cluster set can be composed (e.g., via the monitoring component 304). The risk measure for the micro-group can then be evaluated at element 614 (e.g., via the risk assessment component 308), and another patch for the micro-group can be applied at element 616 (e.g., via the monitoring component 304). At element 618, group data may be updated and evaluated against the patterns stored in the pattern database 318. At element 620, the system can determine whether the pattern is known (e.g., via the learning component 310 utilizing a machine learning algorithm); that is, the method 600 may identify a pattern associated with a vulnerability and a patch group. If no pattern is found, the process repeats at element 612. However, if a pattern is found, the pattern may be determined to be a known pattern at element 622 and updated in the pattern database 318. Thus, at element 624, the new pattern may be updated so as to be monitored at element 612.
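A loose sketch of the monitoring loop of elements 612 through 624 follows; the risk threshold, record shapes, and callback signatures are hypothetical simplifications of the components described above:

```python
# One pass over elements 612-624 of method 600: monitor micro-groups,
# evaluate a risk measure, patch when risk is high, and record any newly
# observed pattern in the pattern database for later reuse.
def run_iteration(micro_groups, pattern_db, evaluate_risk, apply_patch):
    """Process each micro-group once; return patterns newly learned."""
    new_patterns = []
    for group in micro_groups:
        risk = evaluate_risk(group)        # element 614
        if risk >= 0.5:                    # illustrative threshold
            apply_patch(group)             # element 616
        pattern = (frozenset(group["members"]), group["vulnerability"])
        if pattern not in pattern_db:      # elements 618-622
            pattern_db.add(pattern)        # element 624: known next time
            new_patterns.append(pattern)
    return new_patterns

pattern_db = set()
groups = [{"members": ["s1", "s2"], "vulnerability": "cve-1"}]
learned = run_iteration(groups, pattern_db, lambda g: 0.9, lambda g: None)
```

On the first pass the pattern is unknown and gets recorded; a second pass over the same groups finds it already in the pattern database and learns nothing new, mirroring the loop back to element 612.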
FIG. 7 illustrates a flow diagram of an example non-limiting process 700 that facilitates evaluating a risk measure in accordance with one or more embodiments described herein. Repeated descriptions of similar elements employed in other embodiments described herein are omitted for the sake of brevity.
The process for evaluating a risk measure may begin at block 702. At block 704, the system may determine whether the machine is susceptible to risk (e.g., via the risk assessment component 308). If the machine is not risk-prone, the process may end at block 706. However, if the system does identify a risk-prone machine at block 704, the system can estimate (e.g., via the monitoring component 304) the risk associated with that machine. For example, if the likelihood of high risk is greater than one percent and/or the likelihood of medium risk is greater than twenty percent, the machine may be flagged as a high-risk machine at block 710 (e.g., via the risk assessment component 308) and the process may end at block 706. Conversely, if the high risk is at most one percent and the medium risk is at most twenty percent, the system may continue to check for other risk factors. If the system then determines that the machine has a high risk greater than zero percent and less than or equal to one percent, or a medium risk greater than two percent and at most twenty percent, or a low risk greater than fifty percent, the machine may be flagged as medium risk (e.g., via the risk assessment component 308) and the process may proceed to end at block 706. Conversely, if none of the above factors is satisfied, the system may flag the machine as low risk (e.g., via the risk assessment component 308) at element 716 and proceed to end at block 706. It should be understood that although the above risk percentages and divisions are used with respect to the present disclosure, other percentages and divisions may be used based on the risk threshold.
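The threshold logic described above can be sketched as a simple classification function; the percentage cut-offs follow the description, and, as noted, other percentages and divisions may be substituted based on the risk threshold:

```python
# Hedged sketch of the risk-flagging thresholds of FIG. 7. The cut-offs
# mirror the description above but are configurable in practice.
def flag_risk(high_pct, medium_pct, low_pct):
    """Classify a machine as 'high', 'medium', or 'low' risk from the
    likelihoods (in percent) of high-, medium-, and low-risk indicators."""
    if high_pct > 1 or medium_pct > 20:
        return "high"
    if (0 < high_pct <= 1) or (2 < medium_pct <= 20) or low_pct > 50:
        return "medium"
    return "low"
```

For example, a machine with a 25 percent medium-risk likelihood would be flagged high, while one with only a 10 percent medium-risk likelihood would be flagged medium.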
FIG. 8 illustrates a flow diagram of an example non-limiting computer-implemented method 800 of facilitating grouping and patching of machines in accordance with one or more embodiments described herein. Repeated descriptions of similar elements employed in other embodiments described herein are omitted for the sake of brevity.
At element 802, the method 800 includes patching (e.g., via the risk assessment component 308), by a device operatively coupled to the processor, a server group to mitigate vulnerabilities of a first server device and a second server device in response to a first risk associated with the first server device, wherein the server group includes the first server device and the second server device. At element 804, the method 800 includes monitoring (e.g., via the monitoring component 304), by the device, data associated with a second risk to the server group to mitigate the second risk to the server group.
FIG. 9 illustrates a flow diagram of another example non-limiting computer-implemented method 900 of facilitating grouping and patching of machines in accordance with one or more embodiments described herein. Repeated descriptions of similar elements employed in other embodiments described herein are omitted for the sake of brevity.
At element 902, the method 900 includes patching (e.g., via the risk assessment component 308), by a device operatively coupled to the processor, a server group to mitigate vulnerabilities of a first server device and a second server device in response to a first risk associated with the first server device, wherein the server group includes the first server device and the second server device. At element 904, the method 900 includes monitoring (e.g., via the monitoring component 304), by the device, data associated with a second risk to the server group to mitigate the second risk to the server group. Additionally, at element 906, the method 900 includes modifying (e.g., via the adjustment component 306), by the device, the server group to mitigate the second risk to the server group, resulting in a server group modification.
FIG. 10 illustrates a flow diagram of another example non-limiting computer-implemented method 1000 of facilitating grouping and patching of machines in accordance with one or more embodiments described herein. Repeated descriptions of similar elements employed in other embodiments described herein are omitted for the sake of brevity.
At element 1002, the method 1000 includes, in response to a first risk associated with a first server device, patching (e.g., via the risk assessment component 308), by a device operatively coupled to the processor, a server group to mitigate vulnerabilities of the first server device and a second server device, wherein the server group includes the first server device and the second server device. At element 1004, the method 1000 includes monitoring (e.g., via the monitoring component 304), by the device, data associated with a second risk to the server group to mitigate the second risk to the server group. Further, at element 1006, the method 1000 can include receiving an indication that the server group has been modified (e.g., via the adjustment component 306).
In order to provide a context for the various aspects of the disclosed subject matter, FIG. 11 as well as the following discussion are intended to provide a general description of a suitable environment in which the various aspects of the disclosed subject matter may be implemented. FIG. 11 illustrates a block diagram of an example non-limiting operating environment in which one or more embodiments described herein can be facilitated. Repeated descriptions of similar elements employed in other embodiments described herein are omitted for the sake of brevity. With reference to FIG. 11, a suitable operating environment 1100 for implementing various aspects of this disclosure can also include a computer 1112. The computer 1112 can also include a processing unit 1114, a system memory 1116, and a system bus 1118. The system bus 1118 couples system components including, but not limited to, the system memory 1116 to the processing unit 1114. The processing unit 1114 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 1114. The system bus 1118 can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any of a variety of available bus architectures including, but not limited to, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MSA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Card Bus, Universal Serial Bus (USB), Advanced Graphics Port (AGP), Firewire (IEEE 1394), and Small Computer Systems Interface (SCSI).
The system memory 1116 may also include volatile memory 1120 and nonvolatile memory 1122. The basic input/output system (BIOS), containing the basic routines to transfer information between elements within the computer 1112, such as during start-up, is stored in nonvolatile memory 1122. By way of illustration, and not limitation, nonvolatile memory 1122 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, or nonvolatile random access memory (RAM) (e.g., ferroelectric RAM (FeRAM)). Volatile memory 1120 can also include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), direct Rambus RAM (DRRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM.
Referring now to FIG. 12, an exemplary cloud computing environment 1200 is shown. As shown, cloud computing environment 1200 includes cloud 50 and one or more cloud computing nodes 10 with which local computing devices used by cloud consumers, such as Personal Digital Assistants (PDAs) or mobile phones 54A, desktops 54B, laptops 54C, and/or automobile computer systems 54N may communicate. The cloud computing nodes 10 may communicate with each other. Cloud computing nodes 10 may be physically or virtually grouped (not shown) in one or more networks including, but not limited to, private, community, public, or hybrid clouds, or a combination thereof, as described above. In this way, a cloud consumer can request infrastructure as a service (IaaS), platform as a service (PaaS), and/or software as a service (SaaS) provided by the cloud computing environment 1200 without having to maintain resources on the local computing devices. It should be appreciated that the types of computing devices 54A-N shown in FIG. 12 are merely illustrative and that cloud computing node 10, as well as cloud 50, may communicate with any type of computing device over any type of network and/or network addressable connection (e.g., using a web browser).
Referring now to FIG. 13, therein is shown a set of functional abstraction layers 1300 provided by cloud computing environment 1200 (FIG. 12). It should be understood at the outset that the components, layers, and functions illustrated in FIG. 13 are illustrative only and that embodiments of the present invention are not limited thereto. As shown, the following layers and corresponding functions are provided:
the hardware and software layer 60 includes hardware and software components. Examples of hardware components include: a host computer 61; a RISC (reduced instruction set computer) architecture based server 62; a server 63; a blade server 64; a storage device 65; networks and network components 66. Examples of software components include: web application server software 67 and database software 68.
The virtual layer 70 provides an abstraction layer that can provide examples of the following virtual entities: virtual server 71, virtual storage 72, virtual network 73 (including a virtual private network), virtual applications and operating system 74, and virtual client 75.
In one example, the management layer 80 may provide the following functions. Resource provisioning function 81: provides dynamic procurement of computing resources and other resources utilized to perform tasks within the cloud computing environment. Metering and pricing function 82: provides cost tracking of resource usage within the cloud computing environment, as well as billing and invoicing therefor. In one example, these resources may include application software licenses. Security function: provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal function 83: provides access to the cloud computing environment for consumers and system administrators. Service level management function 84: provides allocation and management of cloud computing resources such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment function 85: provides pre-arrangement and procurement of cloud computing resources for which a future requirement is anticipated in accordance with an SLA.
The present invention may be embodied as a system, method, apparatus, and/or computer program product at any possible level of technical detail of integration. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present invention may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, integrated circuit configuration data, or source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, such that the electronic circuit can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational acts to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
While the subject matter has been described above in the general context of computer-executable instructions of a computer program product that runs on one or more computers, those skilled in the art will recognize that the disclosure also can be implemented in, or in combination with, other program modules. Generally, program modules include routines, programs, components, data structures, etc. that perform particular tasks and/or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the computer-implemented methods of the invention may be practiced with other computer system configurations, including single-processor or multiprocessor computer systems, minicomputers, mainframe computers, as well as computers, hand-held computing devices (e.g., PDAs, telephones), microprocessor-based or programmable consumer or industrial electronics, and the like. The illustrated aspects may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. However, some, if not all aspects of the disclosure can be practiced on stand-alone computers. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
As used in this application, the terms "component," "system," "platform," "interface," and the like can refer to and/or can include a computer-related entity or an entity associated with an operating machine having one or more specific functions. The entities disclosed herein may be hardware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers. In another example, respective components can execute from various computer readable media having various data structures stored thereon. The components may communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems via the signal). As another example, a component may be a device having a particular function provided by mechanical parts operated by an electrical or electronic circuit operated by a software or firmware application executed by a processor. In this case, the processor may be internal or external to the apparatus and may execute at least a portion of a software or firmware application. As yet another example, a component may be an apparatus that provides specific functionality through electronic components rather than mechanical components, where an electronic component may include a processor or other means for executing software or firmware that imparts, at least in part, functionality to an electronic component. 
In an aspect, a component can emulate an electronic component via a virtual machine, for example, within a cloud computing system.
Furthermore, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise, or clear from context, "X employs A or B" is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then "X employs A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this specification and the drawings should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form. As used herein, the terms "example" and/or "exemplary" are used to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by these examples. Moreover, any aspect or design described herein as an "example" and/or "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art.
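The inclusive permutations described above can be sketched as a quick truth-table check (a minimal illustration, not part of the patent text; Python's `or` operator is likewise inclusive):

```python
# Inclusive "or": "X employs A or B" is satisfied when A holds, B holds, or both hold.
satisfying = [(True, False), (False, True), (True, True)]

assert all(a or b for a, b in satisfying)   # all three inclusive permutations satisfy it
assert not (False or False)                 # only "neither A nor B" fails to satisfy it
```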
As used in this specification, the term "processor" may refer to substantially any computing processing unit or device, including, but not limited to, a single-core processor; a single processor with software multi-threaded execution capability; a multi-core processor; a multi-core processor having software multi-thread execution capability; a multi-core processor having hardware multithreading; a parallel platform; and parallel platforms with distributed shared memory. Additionally, a processor may refer to an integrated circuit, an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Controller (PLC), a Complex Programmable Logic Device (CPLD), a discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. Further, processors may employ nanoscale infrastructure, such as, but not limited to, molecular and quantum dot based transistors, switches, and gates, in order to optimize space usage or enhance performance of user equipment. A processor may also be implemented as a combination of computing processing units. In this disclosure, terms such as "storage," "data storage," "database," and substantially any other information storage component related to the operation and function of the component are used to refer to "memory components," entities embodied in "memory," or components including memory. It will be appreciated that the memory and/or memory components described herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. 
By way of illustration, and not limitation, nonvolatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, or nonvolatile random access memory (RAM) (e.g., ferroelectric RAM (FeRAM)). Volatile memory can include RAM, which can act as external cache memory, for example. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), direct Rambus RAM (DRRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
What has been described above includes examples of systems and computer-implemented methods only. It is, of course, not possible to describe every conceivable combination of components or computer-implemented methods for purposes of describing the present disclosure, but one of ordinary skill in the art may recognize that many further combinations and permutations of the present disclosure are possible. Furthermore, to the extent that the terms "includes," "has," "possesses," and the like are used in the detailed description, claims, appendices, and drawings, these terms are intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim.
The foregoing description of the embodiments has been presented for purposes of illustration and not limitation, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (20)
1. A system for managing a server group, comprising:
a memory storing computer-executable components; and
a processor that executes the computer-executable components stored in the memory, wherein the computer-executable components comprise:
a risk assessment component operable to:
in response to a first risk associated with a first server device, patch the server group to mitigate vulnerabilities of the first server device and a second server device, wherein the server group includes the first server device and the second server device; and
a monitoring component operable to:
monitor data associated with a second risk to the server group to mitigate the second risk to the server group.
2. The system of claim 1, wherein the first risk is associated with the vulnerability of the first server device to a malware attack.
3. The system of any one of the preceding claims, wherein the computer-executable components further comprise:
an adjustment component operable to modify the server group to mitigate the second risk of the server group, resulting in a server group modification.
4. The system of claim 3, wherein the server group modification removes the second server device from the server group to mitigate the second risk.
5. The system of any one of the preceding claims, wherein the monitoring component is operable to receive an indication that the group of servers has been modified.
6. The system of any one of the preceding claims, wherein the risk assessment component is operable to receive risk data representing a risk associated with the first server device.
7. The system of any one of the preceding claims, wherein the server group is a first server group, and wherein the risk assessment component is operable to assess a third risk for a second server group that is not the first server group.
8. The system of any one of the preceding claims, further comprising:
a learning component that:
analyzes risk data associated with previous risks received from workstation devices, resulting in a risk prediction.
9. The system of claim 8, wherein the risk prediction is used as an input to the monitoring component to mitigate a third risk to the server group.
10. A computer-implemented method for managing server groups, comprising:
in response to a first risk associated with a first server device, patching, by a device operatively coupled to a processor, the server group to mitigate vulnerabilities of the first server device and a second server device, wherein the server group includes the first server device and the second server device; and
monitoring, by the device, data associated with a second risk to the server group to mitigate the second risk to the server group.
11. The computer-implemented method of claim 10, wherein the first risk is associated with the vulnerability of the first server device to a malware attack.
12. The computer-implemented method of any of claims 10 or 11, further comprising:
modifying, by the device, the server group to mitigate the second risk of the server group, resulting in a server group modification.
13. The computer-implemented method of claim 12, wherein the server group modification removes the second server device from the server group to mitigate the second risk.
14. The computer-implemented method of any of claims 10 to 13, further comprising:
receiving an indication that the server group has been modified.
15. The computer-implemented method of any of claims 10 to 14, further comprising receiving risk data representing a risk associated with the first server device.
16. The method of any of claims 10 to 15, wherein the server group is a first server group, the method further comprising evaluating a third risk for a second server group that is not the first server group.
17. The method of any of claims 10 to 16, further comprising:
analyzing, by the device, risk data associated with previous risks received from workstation devices, resulting in a risk prediction.
18. The method of claim 17, wherein the risk prediction is used as an input to a monitoring component to mitigate a third risk to the server group.
19. A computer program product for managing a server group, the computer program product comprising:
a computer readable storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing the method of any of claims 10-18.
20. A computer program stored on a computer readable medium and loadable into the internal memory of a digital computer, comprising software code portions, when said program is run on a computer, for performing the method of any of claims 10 to 18.
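Claims 1 through 5 can be read as the following minimal sketch (an illustrative, non-authoritative reading; every class, method, and variable name below is invented for illustration, and the claims prescribe no particular implementation or API):

```python
# Hypothetical sketch of the claimed server-group patch management.
# "ServerGroup", "RiskAssessment", and "Monitoring" are invented names,
# loosely mirroring the claimed components; the claims define no such API.

class ServerGroup:
    def __init__(self, name, servers):
        self.name = name
        self.servers = set(servers)   # claim 1: group contains first and second server devices
        self.patched = set()

class RiskAssessment:
    """Loosely models claim 1: patch the whole group when a first risk hits one member."""
    def patch_group(self, group, risky_server):
        if risky_server in group.servers:
            group.patched |= group.servers   # mitigate the vulnerability group-wide

class Monitoring:
    """Loosely models claims 3-5: react to a second risk by re-composing the group."""
    def mitigate(self, group, second_risk_server):
        if second_risk_server in group.servers:
            group.servers.discard(second_risk_server)   # claim 4: remove the second server device
            return True   # claim 5: indication that the group has been modified
        return False

# A first risk against s1 patches the whole group; a second risk removes s2 from it.
group = ServerGroup("patch-group-A", {"s1", "s2"})
RiskAssessment().patch_group(group, "s1")
modified = Monitoring().mitigate(group, "s2")
```

Here claim 1's group-wide patching is modeled as marking every current member patched, and claim 4's mitigation as removing the second server device from the group; the stream-clustering re-composition named in the title is not modeled.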
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/721,566 | 2017-09-29 | ||
US15/721,566 US10540496B2 (en) | 2017-09-29 | 2017-09-29 | Dynamic re-composition of patch groups using stream clustering |
PCT/IB2018/057407 WO2019064176A1 (en) | 2017-09-29 | 2018-09-25 | Dynamic re-composition of patch groups using stream clustering |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111164597A true CN111164597A (en) | 2020-05-15 |
CN111164597B CN111164597B (en) | 2024-08-23 |
Family
ID=65896062
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201880062930.6A Active CN111164597B (en) | 2017-09-29 | 2018-09-25 | Method and system for managing server groups |
Country Status (6)
Country | Link |
---|---|
US (3) | US10540496B2 (en) |
JP (1) | JP7129474B2 (en) |
CN (1) | CN111164597B (en) |
DE (1) | DE112018004284T5 (en) |
GB (1) | GB2582460B (en) |
WO (1) | WO2019064176A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021203975A1 (en) * | 2020-11-11 | 2021-10-14 | 平安科技(深圳)有限公司 | Server dispatching method and apparatus, and device and storage medium |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210352095A1 (en) * | 2020-05-05 | 2021-11-11 | U.S. Army Combat Capabilities Development Command, Army Research Labortary | Cybersecurity resilience by integrating adversary and defender actions, deep learning, and graph thinking |
US11783068B2 (en) * | 2021-03-24 | 2023-10-10 | Bank Of America Corporation | System for dynamic exposure monitoring |
US20230315438A1 (en) * | 2022-03-30 | 2023-10-05 | Kyndryl, Inc. | Contextually cognitive edge server manager |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1577272A (en) * | 2003-07-16 | 2005-02-09 | 微软公司 | Automatic detection and patching of vulnerable files |
CN1610887A (en) * | 2001-12-31 | 2005-04-27 | 大本营安全软件公司 | Automated computer vulnerability resolution system |
US20070204346A1 (en) * | 2006-02-27 | 2007-08-30 | Microsoft Corporation | Server security schema |
US20140189873A1 (en) * | 2009-12-21 | 2014-07-03 | Symantec Corporation | System and method for vulnerability risk analysis |
US20150066575A1 (en) * | 2013-08-28 | 2015-03-05 | Bank Of America Corporation | Enterprise risk assessment |
CN106716432A (en) * | 2014-09-22 | 2017-05-24 | 迈克菲股份有限公司 | Pre-launch process vulnerability assessment |
Family Cites Families (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7313822B2 (en) * | 2001-03-16 | 2007-12-25 | Protegrity Corporation | Application-layer security method and system |
JP2005530239A (en) * | 2002-06-18 | 2005-10-06 | コンピュータ アソシエイツ シンク,インコーポレイテッド | Method and system for managing enterprise assets |
KR100599451B1 (en) * | 2004-07-23 | 2006-07-12 | 한국전자통신연구원 | Device for Treatment of Internet Worm and System Patch using Movable Storage Unit and Method thereof |
US9325728B1 (en) * | 2005-01-27 | 2016-04-26 | Leidos, Inc. | Systems and methods for implementing and scoring computer network defense exercises |
JP2006350543A (en) * | 2005-06-14 | 2006-12-28 | Mitsubishi Electric Corp | Log analyzing apparatus |
US7647637B2 (en) | 2005-08-19 | 2010-01-12 | Sun Microsystems, Inc. | Computer security technique employing patch with detection and/or characterization mechanism for exploit of patched vulnerability |
US8321941B2 (en) * | 2006-04-06 | 2012-11-27 | Juniper Networks, Inc. | Malware modeling detection system and method for mobile platforms |
US8321944B1 (en) * | 2006-06-12 | 2012-11-27 | Redseal Networks, Inc. | Adaptive risk analysis methods and apparatus |
US7900259B2 (en) * | 2007-03-16 | 2011-03-01 | Prevari | Predictive assessment of network risks |
US8689330B2 (en) * | 2007-09-05 | 2014-04-01 | Yahoo! Inc. | Instant messaging malware protection |
US8839225B2 (en) | 2008-01-23 | 2014-09-16 | International Business Machines Corporation | Generating and applying patches to a computer program code concurrently with its execution |
US20090282457A1 (en) * | 2008-05-06 | 2009-11-12 | Sudhakar Govindavajhala | Common representation for different protection architectures (crpa) |
JP5148442B2 (en) * | 2008-09-30 | 2013-02-20 | 株式会社東芝 | Vulnerability response priority display device and program |
US8769683B1 (en) * | 2009-07-07 | 2014-07-01 | Trend Micro Incorporated | Apparatus and methods for remote classification of unknown malware |
US8793681B2 (en) * | 2011-06-24 | 2014-07-29 | International Business Machines Corporation | Determining best practices for applying computer software patches |
CN102404715A (en) | 2011-11-18 | 2012-04-04 | 广东步步高电子工业有限公司 | Method for resisting worm virus of mobile phone based on friendly worm |
US9069969B2 (en) * | 2012-06-13 | 2015-06-30 | International Business Machines Corporation | Managing software patch installations |
US20140025796A1 (en) | 2012-07-19 | 2014-01-23 | Commvault Systems, Inc. | Automated grouping of computing devices in a networked data storage system |
US9083689B2 (en) * | 2012-12-28 | 2015-07-14 | Nok Nok Labs, Inc. | System and method for implementing privacy classes within an authentication framework |
KR101901911B1 (en) * | 2013-05-21 | 2018-09-27 | 삼성전자주식회사 | Method and apparatus for detecting malware and medium record of |
US10489861B1 (en) * | 2013-12-23 | 2019-11-26 | Massachusetts Mutual Life Insurance Company | Methods and systems for improving the underwriting process |
WO2015105486A1 (en) | 2014-01-08 | 2015-07-16 | Hewlett-Packard Development Company, L.P. | Dynamically applying a software patch to a computer program |
US10462158B2 (en) * | 2014-03-19 | 2019-10-29 | Nippon Telegraph And Telephone Corporation | URL selection method, URL selection system, URL selection device, and URL selection program |
US9405530B2 (en) | 2014-09-24 | 2016-08-02 | Oracle International Corporation | System and method for supporting patching in a multitenant application server environment |
US9430219B2 (en) | 2014-12-16 | 2016-08-30 | Sap Se | Revision safe upgrade in a hybrid cloud landscape |
US9521160B2 (en) | 2014-12-29 | 2016-12-13 | Cyence Inc. | Inferential analysis using feedback for extracting and combining cyber risk information |
US9699209B2 (en) | 2014-12-29 | 2017-07-04 | Cyence Inc. | Cyber vulnerability scan analyses with actionable feedback |
US9923912B2 (en) * | 2015-08-28 | 2018-03-20 | Cisco Technology, Inc. | Learning detector of malicious network traffic from weak labels |
US10084811B1 (en) * | 2015-09-09 | 2018-09-25 | United Services Automobile Association (Usaa) | Systems and methods for adaptive security protocols in a managed system |
US10021120B1 (en) * | 2015-11-09 | 2018-07-10 | 8X8, Inc. | Delayed replication for protection of replicated databases |
US10142362B2 (en) * | 2016-06-02 | 2018-11-27 | Zscaler, Inc. | Cloud based systems and methods for determining security risks of users and groups |
US10728261B2 (en) * | 2017-03-02 | 2020-07-28 | ResponSight Pty Ltd | System and method for cyber security threat detection |
US11436113B2 (en) * | 2018-06-28 | 2022-09-06 | Twitter, Inc. | Method and system for maintaining storage device failure tolerance in a composable infrastructure |
US10853046B2 (en) * | 2018-12-13 | 2020-12-01 | Salesforce.Com, Inc. | Deployment of software applications on server clusters |
2017
- 2017-09-29 US US15/721,566 patent/US10540496B2/en active Active
2018
- 2018-09-25 JP JP2020516821A patent/JP7129474B2/en active Active
- 2018-09-25 WO PCT/IB2018/057407 patent/WO2019064176A1/en active Application Filing
- 2018-09-25 CN CN201880062930.6A patent/CN111164597B/en active Active
- 2018-09-25 DE DE112018004284.7T patent/DE112018004284T5/en active Granted
- 2018-09-25 GB GB2006140.4A patent/GB2582460B/en active Active
2019
- 2019-12-04 US US16/703,200 patent/US10977366B2/en active Active
2020
- 2020-12-28 US US17/135,553 patent/US11620381B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
WO2019064176A1 (en) | 2019-04-04 |
GB202006140D0 (en) | 2020-06-10 |
JP7129474B2 (en) | 2022-09-01 |
US20200110877A1 (en) | 2020-04-09 |
US20210150029A1 (en) | 2021-05-20 |
JP2020535515A (en) | 2020-12-03 |
US10977366B2 (en) | 2021-04-13 |
DE112018004284T5 (en) | 2020-05-14 |
US11620381B2 (en) | 2023-04-04 |
US20190102548A1 (en) | 2019-04-04 |
GB2582460A (en) | 2020-09-23 |
GB2582460B (en) | 2021-01-20 |
US10540496B2 (en) | 2020-01-21 |
CN111164597B (en) | 2024-08-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11411979B2 (en) | Compliance process risk assessment | |
CN110914809B (en) | Compliance-aware runtime generation based on application patterns and risk assessment | |
US11620381B2 (en) | Dynamic re-composition of patch groups using stream clustering | |
US11601468B2 (en) | Detection of an adversarial backdoor attack on a trained model at inference time | |
US10649758B2 (en) | Group patching recommendation and/or remediation with risk assessment | |
US11023325B2 (en) | Resolving and preventing computer system failures caused by changes to the installed software | |
US11374958B2 (en) | Security protection rule prediction and enforcement | |
WO2022083742A1 (en) | Context based risk assessment of computing resource vulnerability | |
JP7332250B2 (en) | AI-assisted rule generation | |
US20180025160A1 (en) | Generating containers for applications utilizing reduced sets of libraries based on risk analysis | |
US10673775B2 (en) | Orchestration engine using a blockchain for a cloud resource digital ledger | |
US11853017B2 (en) | Machine learning optimization framework | |
US12003390B2 (en) | Orchestration engine blueprint aspects for hybrid cloud composition | |
US20190190798A1 (en) | Orchestration engine blueprint aspects for hybrid cloud composition | |
US20220303302A1 (en) | Shift-left security risk analysis | |
CA3176367A1 (en) | Protecting computer assets from malicious attacks | |
US11212162B2 (en) | Bayesian-based event grouping | |
US11221908B1 (en) | Discovery of an inexplicit link between a change and an incident in a computing environment | |
US11294759B2 (en) | Detection of failure conditions and restoration of deployed models in a computing environment | |
US11012463B2 (en) | Predicting condition of a host for cybersecurity applications | |
US11075954B2 (en) | Identifying systems where a policy as a code is applicable |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 20220216
Address after: New York, United States
Applicant after: Qindarui Co.
Address before: Armonk, New York
Applicant before: International Business Machines Corp.
|
GR01 | Patent grant | ||
GR01 | Patent grant |