GB2548147A - Self-propagating cloud-aware distributed agents for benign cloud exploitation - Google Patents


Info

Publication number
GB2548147A
Authority
GB
United Kingdom
Prior art keywords
agent
data store
data
node
controller
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1604143.6A
Other versions
GB201604143D0 (en)
Inventor
Benkhelifa Elhadj
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Staffordshire University
Original Assignee
Staffordshire University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Staffordshire University filed Critical Staffordshire University
Priority to GB1604143.6A
Publication of GB201604143D0
Publication of GB2548147A
Status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/52Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
    • G06F21/53Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow by executing in a restricted environment, e.g. sandbox or secure virtual machine
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55Detecting local intrusion or implementing counter-measures
    • G06F21/552Detecting local intrusion or implementing counter-measures involving long-term monitoring or reporting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55Detecting local intrusion or implementing counter-measures
    • G06F21/56Computer malware detection or handling, e.g. anti-virus arrangements
    • G06F21/566Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Virology (AREA)
  • Computer And Data Communications (AREA)

Abstract

A virtual computing architecture protection system, and a method for monitoring a virtual computer system, comprising: a first agent 24A having a controller component 26 for propagating one or more further agent 24B; a data store component 36 for storage of data pertaining to computational nodes VM1, VM2 of the virtual computing architecture; and one or more sensor component 28 arranged to sense data communications between virtual computing applications on one or more node, wherein the first agent controller is arranged to interpret the sensed data communications and to query the data store to identify a computational node of the virtual computing architecture that is susceptible to agent propagation, the first agent controller propagating and deploying the further agent on the said computational node once identified. The first agent may be configured to determine if a first or further agent is already installed on the identified computational node.

Description

TITLE
Self-Propagating Cloud-aware Distributed Agents for Benign Cloud Exploitation
BACKGROUND OF THE INVENTION
This disclosure concerns computational agents for deployment within a cloud-based system architecture.
Cloud computing models have been emerging for a number of years and offer alternative approaches for providing computing resources to more conventional approaches in which hardware and software/services are provided and managed locally. A number of potential advantages to the use of cloud computing have been well documented, including efficiencies due to centralised/shared computational resources and flexibility in how computational services are provided to users.
The outsourcing of computational services to a third party provider can potentially lead to increased security, since the provider has greater security resources at its disposal, although this is contentious because the integrity of the provider and its employees is often difficult to verify. It has also been proposed in the art that the virtualization technology used to support cloud environments is more secure than conventional shared resources due to its inherent isolation. However, as with all such software, there are a number of potential vulnerabilities that may be susceptible to exploitation and subversion, for example due to the complexity and heterogeneous nature of virtual environments.
Malicious software, or ‘malware’, exists to automate the process of malicious activity via third party computer systems based on an understanding of the vulnerabilities of the system to which it is applied. Whilst more traditional malware has been minimalistic in size, e.g. to evade detection, and tailored to specific platforms, malware development practices have evolved alongside computational paradigms and greater resources are expended on malware development for potentially greater economic returns or societal disruption.
In the past, malware was typically categorized into a number of distinct forms, as follows: • viruses typically infect executables, using them to propagate across file systems; • worms propagate across nodes in a network, often at a rapid rate, via any form of exploitable vulnerability to which a node may be susceptible; • Trojan horses disguise themselves as legitimate software in order to infect a machine; • Remote Administration Tools (RATs), once executed, allow a remote user complete or partial control over the infected machine; • root-kits replace parts of an operating system with compromised malicious tools in order to gain control over a system, and typically hide other software.
The overall purpose of malicious software over recent years has generally shifted away from being one of research and curiosity to that of crime, political motivation and financial gain. Thus many of the above distinct forms of malware have begun to disappear as malicious software has evolved to become larger and more complex hybrid software encompassing aspects from all forms of malicious software.
Cloud services are typically delivered in one of three different types: • Software-as-a-Service (SaaS), where an application is provided on demand; • Platform-as-a-Service (PaaS), where development resources are provided on demand; and • Infrastructure-as-a-Service (IaaS), where typical computer resources are provided on demand (e.g. computer hardware, networking hardware, storage etc.).
IaaS is the model typically used by cloud software and is the focus of the invention disclosed herein. On-demand infrastructure services are usually offered as virtual hardware, the practical advantages over physical hardware being rather apparent, e.g. for ease of provisioning, recovery and storage. Virtual hardware is known as a virtual machine (VM), which is executed and managed by a virtual machine monitor (VMM) or hypervisor. VMMs execute CPU instructions on behalf of the guest, screening them where appropriate.
One claim to enhanced security when running virtualized hardware is that the VMs may be isolated from each other and as such provide a high level of security, although the inventor's prior research into this area has revealed vulnerabilities, as with all software systems. A cloud will consist of multiple, networked VMMs which allow guests to be migrated between monitors in order to optimize hardware usage, allowing for scaling of resources according to demand and continuation of operations in case of hardware failure. VMMs are known as 'type 1' or 'type 2' hypervisors. A type 2 hypervisor runs hosted within an operating system, whereas a type 1 hypervisor runs 'bare-metal' on the base computer's hardware. There are a number of popular VMMs widely in use and, although such VMMs generally perform similar functions, they can differ in architecture, features and availability of source code. Each one may therefore suffer from different vulnerabilities and needs to be assessed separately. In addition, a cloud may be homogeneous in nature, utilizing only one VMM type, or heterogeneous, utilizing a mixture.
When it is considered that any specific cloud environment can result from an array of different configuration options, software versions, etc. it will be appreciated that the security of virtual environments is a complex issue, which is not easily broached by generic solutions.
Some conventional techniques for detection of malware within cloud environments utilize VM introspection, or network traffic analysis from the hypervisor level, to examine a particular machine within the network. Much of this work focuses upon analysis of a single virtual machine and thus does not take into account malware which is aware of the cloud environment and may therefore exist within the hypervisor or network layers of the cloud environment. Malware existing at this higher privilege level can therefore survive detection and removal in this context.
Wang, L., Peng, Y., Liu, W., Gao, H.: "Vmsecurexec: Transparent on-access virus detection for virtual machine in the cloud" (ICT and Energy Efficiency and Workshop on Information Theory and Security (CI-ICT 2012) Symposium, pp. 116-121 (2012)) discusses the increased efficiency of performing malware scanning from outside the system versus traditional scanning techniques. The authors use VM introspection for real-time transparent analysis of execution flow for malware detection. However, the computational expense of an additional VM remains a questionable factor.
Harrison, K., Bordbar, B., Ali, S., Dalton, C., Norman, A.: "A framework for detecting malware in cloud by identifying symptoms" (Enterprise Distributed Object Computing Conference (EDOC), 2012 IEEE 16th International, pp. 164-172 (2012)) discloses the use of VM introspection to conduct a different malware detection technique via behaviour analysis. As with the previous work, VMs are examined by additional VMs. However, these examining VMs are minimalistic and search for individual symptoms whilst cooperating via secure message exchange.
It has been found that malware which can exist in the hypervisor itself may thwart such systems by operating at a higher level of privilege in a root-kit-like fashion. Additionally, the complex nature of the hypervisor system and its access to VMs make it a target for malware.
Benkhelifa, E., Welsh, T.: "A novel architecture for self-propagating malicious software within cloud infrastructures" (Cloud Computing and Big Data (CloudCom-Asia), 2013 International Conference on, pp. 60-67 (2013)) and Benkhelifa, E., Welsh, T.: "Towards Malware Inspired Cloud Self-Protection" (IEEE International Conference on Cloud and Autonomic Computing (2014)) disclose the use of agents to inhabit cloud environments. The present invention represents the outcome of further research and development of those principles with the aim of producing an implementable system.
Accordingly it is an aim of the invention to provide a computational security/maintenance system or agent by which the complexities and/or vulnerabilities of cloud-based computing architecture can be better accommodated. It may be considered an additional or alternative aim to provide an autonomic security/maintenance system which is better adapted to a cloud environment.
BRIEF SUMMARY OF THE INVENTION
A virtual computing architecture protection system comprising a first agent having a controller component for propagating one or more further agent, a data store for storage of data pertaining to computational nodes of the virtual computing architecture and one or more sensor component arranged to sense data communications between virtual computing applications on one or more node, wherein the agent controller is arranged to interpret the sensed data communications and to query the data store to identify a computational node of the virtual computing architecture that is susceptible to agent propagation, the agent controller propagating and deploying the further agent on the said computational node once identified.
Multiple distributed agents may be provided in use. The distributed agents may collectively or individually interpret the sensed data communications and/or control deployment of further agents.
The invention may rely on agents having a component-based architecture. The automated self-propagating component-based agent architecture is beneficial in providing for continued and comprehensive inhabitation of a dynamic and/or heterogeneous cloud computing architecture. The first agent may replicate itself or any component thereof as required to suit a further agent location within the cloud architecture network.
The agent controller may identify a computational node that is susceptible to agent propagation by determining that a first agent, or an instance thereof, is not already deployed on the computational node. Additionally or alternatively, the agent controller may identify a computational node that is susceptible to agent propagation by determining that a further agent, or an instance thereof, is not already deployed on the computational node.
The agent controller may identify a computational node that is susceptible to agent propagation, at least in part, by determining a type of the computational node and/or matching the determined type against a list of predetermined types in the data store. The computational node type may comprise a machine or operating system type, e.g. including a virtual machine or hypervisor type.
The data store, or a component thereof, may comprise a library which is stored locally on a node occupied by the first agent and/or as a distributed library across at least a portion of the system, e.g. across a plurality of agents and/or associated nodes. The library may comprise a library of attack vectors or exploits, for example comprising data of computational node types. The library may comprise exploit data associated with node types. The controller may query the library by way of an attack vector request based on the received sensor data. The controller component may determine attack vectors for local neighbour nodes in the virtual network. The agent may contain a library of exploits suitable for a variety of cloud architecture networks and software platforms to enable survival of one or more agent in a number of different scenarios. Such exploits may be additional to back doors provided by a service provider which allow access to a given system in a more legitimate/conventional fashion.
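The two checks described above (agent not already present; node type matched against the data store's library) can be sketched as follows. This is purely an illustrative sketch, not code from the patent: the store layout and the field names `address`, `type`, `deployed_agents` and `exploit_library` are assumptions for illustration.

```python
def is_susceptible(node, data_store):
    """A node is susceptible if no agent is deployed there and its type is known."""
    if node["address"] in data_store["deployed_agents"]:
        return False  # a first or further agent is already installed here
    return node["type"] in data_store["exploit_library"]

def attack_vectors_for(node, data_store):
    """Answer the controller's attack vector request for this node's type."""
    return data_store["exploit_library"].get(node["type"], [])
```

For example, with a store holding `{"deployed_agents": {"10.0.0.5"}, "exploit_library": {"linux-vm": [...]}}`, a fresh `linux-vm` node would be reported susceptible while the already-inhabited node would not.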
The controller may determine any or any combination of data communication frequency, size, address and/or content or header data from the sensor component.
The data store, or a portion thereof, may comprise an agent component library.
The agent component library may comprise a plurality of modules for construction/propagation of the further agent. The data store may comprise a plurality of optional or alternative agent component modules. The controller may select one or a plurality of modules from the agent component library according to an attack vector for the further agent. Advantageously, the first agent may compose one or more further agent according to the sensor data and/or querying the data store. Agents may be created that are context-aware or bespoke to their environment.
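Module selection from the component library, as described above, might be sketched as follows. The module names and the attack-vector fields are illustrative assumptions, not taken from the patent.

```python
# Hypothetical component library: module name -> module payload bytes.
COMPONENT_LIBRARY = {
    "controller": b"controller-module",
    "passive_sensor": b"traffic-listener-module",
    "active_sensor": b"polling-module",
    "payload": b"monitoring-payload-module",
}

def compose_agent(attack_vector):
    """Select modules from the library to suit the further agent's context."""
    selected = ["controller", "payload"]  # every agent carries these
    if attack_vector.get("stealth"):
        selected.append("passive_sensor")  # avoid detectable polling
    else:
        selected.append("active_sensor")
    return {name: COMPONENT_LIBRARY[name] for name in selected}
```

A context-aware agent for a stealth deployment would thus receive a passive sensor in place of an active one.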
The first agent may comprise a cloud manager or hypervisor agent. The further agent may comprise a hypervisor agent or a virtual machine (VM) agent. The invention is beneficial in ensuring that changes in the cloud computing environment, e.g. due to new VM instigation, existing VM migrations or other virtual network restructuring operations, are accommodated without jeopardising the protection afforded by the agent system. The agent system can thus adapt to network restructuring.
The multi-agent architecture of the present invention can offer a distributed and collaborative agent system architecture providing reliability and redundancy.
The data store may comprise a distributed data store. The distributed data store may be shared amongst a plurality of agents. The data store may comprise a first data store, e.g. on a hypervisor or a relatively higher level of a virtual environment architecture, and a further data store, e.g. on a virtual machine or a relatively lower level of the virtual environment architecture. The further data store may comprise a partial data store, e.g. comprising only partial data or data types from the first data store. The first agent controller may instigate the further data store, e.g. upon deployment of a sensor or the further agent. In this manner, hypervisor level agent storage may be larger and shared with other lower level agents.
The location of deployment of the/each further agent may be logged in the data store.
The data store may comprise a log of identified addresses (e.g. IP addresses) for the virtual computing environment. The log of identified addresses may comprise node type and/or status data, e.g. machine operational status and/or agent inhabitation status. A node may comprise a VM, a hypervisor, a cloud network manager, a communication module or operating system thereof, or a virtual network communication device. The first agent may reside on a virtual machine monitor (VMM), e.g. comprising applications for supervision of the virtual environment, such as managing VM initiation and/or migration.
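The address log described above might take the following shape. This is a hedged sketch only; the record fields (`type`, `operational`, `inhabited`) are assumed names for the node type, machine operational status and agent inhabitation status mentioned in the text.

```python
def log_node(log, address, node_type, operational=True, inhabited=False):
    """Record an identified address with its type and status data."""
    log[address] = {"type": node_type,
                    "operational": operational,
                    "inhabited": inhabited}

def uninhabited_nodes(log):
    """Operational nodes not yet occupied by an agent: candidates for propagation."""
    return sorted(addr for addr, rec in log.items()
                  if rec["operational"] and not rec["inhabited"])
```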
The controller component of the first and/or further agent may comprise an agent payload. The payload may comprise one or more anti-malware, active defence, VM or network maintenance and/or health monitoring application. The payload implemented by an agent may be dependent on its location/context. The agent payload may comprise a reporting tool, e.g. to understand agent operation or the effect of malware within cloud environments.
The first agent controller may implement one or more sensor and/or deploy the further agent at a different level in the virtual computing architecture from its own. The first agent may instigate vertical and/or horizontal deployment of further agents or sensors, e.g. so as to cross one or more virtualisation boundary in the network. Traversal of one or more hierarchical layers in a cloud architecture has been found to be highly advantageous in ensuring comprehensive coverage.
The sensor component may comprise an active or passive sensor component. A plurality of active and/or passive sensor components may be provided. The system may comprise one or more active sensor arranged to poll one or more computational node. The system may comprise one or more passive sensor to identify the presence/location of, or monitor the status of, a computational node.
The first agent controller component may provide for localized coordination of any or any combination of the sensor(s), further agent(s) and/or data store. The first agent controller component may passively or actively place sensors within the cloud network.
The first agent, or any other agent within the distributed system, may instigate data communication or transfer with a computational node by tunnelling. An in/out data channel for a computational node may be used. An agent may create a data store/disk image. An agent may initiate mounting of the image by the node. The node may mount the image as a data store drive. Data communication may be established between virtualisation boundaries using this method.
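The image-based channel described above can be illustrated with a minimal sketch: one side writes a length-prefixed message into a raw image file, and the side that mounts the image reads it back. This is an assumption-laden illustration (real mounting of the image by a guest is omitted); the length-prefix framing is our choice, not the patent's.

```python
import struct

def write_channel_image(path, message, size=1 << 20):
    """Create a raw disk image carrying a length-prefixed message; padding
    the file to a fixed size makes it presentable as a mountable drive."""
    with open(path, "wb") as img:
        img.write(struct.pack(">I", len(message)))  # 4-byte big-endian length
        img.write(message)
        img.truncate(size)  # zero-pad the file out to the image size

def read_channel_image(path):
    """Read the message back, as the mounting side of the channel would."""
    with open(path, "rb") as img:
        (length,) = struct.unpack(">I", img.read(4))
        return img.read(length)
```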
The first agent controller, or any other agent of the distributed system, may search for or identify one or more computational node initiation, removal or migration, e.g. according to the received sensor data.
The first agent controller may cause a VM migration within the virtual computing architecture/network. The first agent controller may inhibit communication to/from a VM. The first agent may implement a denial-of-service attack on a VM, e.g. to trigger migration of the VM by a hypervisor.
The first agent may propagate and/or deploy a further agent upon sensing of VM migration or shutdown. The first agent may deploy the further agent onto the migrating memory of a VM.
The first and/or further agent may comprise one or more module for escalating agent privileges.
The first and/or further agent controller may operate according to the general control algorithm of: i. sense the environment; ii. determine vulnerable hosts/nodes; iii. infect hosts; iv. execute payload.
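The four-step loop above can be expressed as a minimal sketch. Every callable here is a placeholder for the components described earlier (sensors, data store query, propagation, payload); nothing in the sketch is taken verbatim from the patent.

```python
def agent_cycle(sense, find_vulnerable, infect, execute_payload):
    """One pass of the general control algorithm: sense, determine, infect, execute."""
    observations = sense()                      # i.  sense the environment
    for host in find_vulnerable(observations):  # ii. determine vulnerable nodes
        infect(host)                            # iii. deploy onto each node
    execute_payload()                           # iv. run the benign payload
```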
The location and status of each of the first and further agents may be logged, so as to generate a propagation record, which may be updated over time. In some embodiments, each further agent may ping or otherwise alert another agent or sensor of its address once established on a node.
The sensor or a further sensor in the system may comprise an HTTP listener or an HTTP request reflector, which may populate an agent status log. Thus the propagation rate of the agent system through the network can be tracked.
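One plausible shape for such an HTTP listener is sketched below, using Python's standard `http.server`: each agent announces itself with a single GET request once established on a node, and the handler appends the announcement to a status log. The URL scheme (`/agent/<node>`) is an assumption for illustration.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

status_log = []  # each entry: (client address, reported path)

class AgentPingHandler(BaseHTTPRequestHandler):
    """Listener that records each agent's announcement into the status log."""
    def do_GET(self):
        status_log.append((self.client_address[0], self.path))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # suppress default stderr logging

def start_listener():
    """Start the listener on an ephemeral local port and return the server."""
    server = HTTPServer(("127.0.0.1", 0), AgentPingHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

An agent established on a node could then ping `http://<listener>:<port>/agent/<node-id>`, and the growth of `status_log` over time tracks the propagation rate.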
According to a second aspect of the invention, there is provided a cloud architecture based network comprising the agent system according to the first aspect.
According to a third aspect of the invention, there is provided a method of monitoring a virtual computing system comprising: installing a first agent having a controller component for propagating one or more further agent, maintaining a data store containing data of potential exploits for computational nodes of the virtual computing system and sensing data communications between virtual computing applications on one or more node using one or more agent sensor, interpreting the sensed data communications at the first agent controller and querying the data store to identify a computational node of the virtual computing system that is susceptible to agent propagation, automatically propagating the further agent by the first agent controller and deploying the further agent on said computational node once identified.
According to a fourth aspect of the invention, there is provided a data carrier comprising machine-readable code for the operation of one or more computational process to operate according to the method of the third aspect and/or to implement the system of the first aspect.
Any of the agent, system and/or method of the aspects defined above may operate in an autonomic fashion, and may implement the recited functionality without user intervention.
BRIEF DESCRIPTION OF THE DRAWINGS
Practicable examples of the invention are described in further detail below with reference to the accompanying drawings, of which:
Fig. 1 shows an example of a cloud infrastructure having a plurality of hierarchical layers for deployment of virtualized environments, within which infrastructure agents according to examples of the invention have been tested;
Fig. 2 shows a schematic arrangement of components constituting an operational agent according to an example of the invention;
Fig. 3 shows a relationship between different types of agent according to an example of the invention;
Fig. 4 shows a bespoke agent development framework, which may be implemented as a process for construction of agents in accordance with examples of the invention;
Fig. 5 shows an overview of basic hypervisor agent operation according to an example of the invention;
Fig. 6 shows a process by which agents operate to inhabit a cloud infrastructure including cloud network components;
Fig. 7 shows a process by which data may be transferred between a hypervisor and a VM according to an example of implementation of the invention;
Fig. 8 shows an example of a vertical agent propagation scenario;
Fig. 9 shows an example of a horizontal agent propagation scenario;
Fig. 10 shows an example of an agent propagation model according to an example of the invention; and,
Fig. 11 shows a further implementation of an agent propagation model according to an example of the invention.
DETAILED DESCRIPTION OF THE INVENTION
The invention derives from the basic concept of utilising general malware-based principles and reapplying them to benign agent applications.
With reference to Fig. 1, there is shown an environment developed specifically for evaluation of potential attacks on cloud infrastructure 10. The virtualization boundaries 12, 14 within the architecture represent different layers within the architecture. A top level hypervisor 16 has oversight of a plurality of sub-level hypervisors 18 as well as managing a top level data store 20, e.g. comprising an image store.
The sub-level or mini hypervisors 18 manage a plurality of virtual machines 22, i.e. guest machines, such as virtual PCs or virtual servers, which host guest services, e.g. accessible to remote users of the cloud architecture.
It will also be appreciated by the skilled person that the virtual machines will have a guest user space and a guest kernel space, thereby defining a further level of the hierarchy. Each of the virtual machines 22 under common supervision may potentially communicate with each other, and so a mini hypervisor 18 may manage a virtual network of guest machines and associated services. The mini hypervisor 18 runs its own host operating system. Similarly the top level hypervisor 16 runs its own operating system above the context of the mini hypervisors 18. Thus additional levels may be defined according to partitions between the virtualized systems and services running within each of the macroscopic levels of the hierarchy. Communication/transfer of data to another VM, system or service within a common level of the hierarchy may be referred to as a horizontal movement, whereas a vertical movement involves communication/transfer of data to a higher or lower level.
The hierarchy defines the abilities for machines/hypervisors existing at one level to communicate with the level above in the hierarchy and/or with machines at the same level. This ‘field of view’ of each level conventionally preserves security and ensures that cloud services hosted at the lowest level within virtualisation boundary 14 are effectively unaware of the top level hypervisor 16 and unable to communicate directly therewith.
The following technical description of agents according to examples of the invention provides functionality allowing each agent to self-propagate within the cloud infrastructure 10, e.g. to cross layer/virtualisation boundaries 12, 14 as well as the interface between guest kernel space, user space and hypervisor operating system. In order to survive in the hostile, non-deterministic and dynamic cloud environment, a multi-component agent architecture 24 is provided, an example of which is shown in Fig. 2. Although the term 'agent' is used consistently herein, it will be appreciated by the person skilled in the art that such a propagating entity could also be described as a worm.
The example of Fig. 2 shows the components of an individual agent 24 and their interactions, although it will be described below how certain components may be shared/distributed between agents. A controller 26 comprises a component of an agent that controls its operation, including - in some examples - the replication process for constructing further agents. The controller 26 comprises modules for communication with the other agent components shown in Fig. 2 for identifying its environment and/or identifying attack vectors, e.g. for candidate locations in the cloud environment in which to instigate an action or deploy one or more further agent. The controller 26 may thus instruct/control operation of the other agent components shown in Fig. 2 within the local environment. The controller 26 may also communicate with other agents, either directly or via a commonly accessible data store 27.
Whilst Fig. 2 shows the generic agent architecture, Fig. 3 draws a distinction between two different types of agent, namely a hypervisor/constructor agent 24A and a VM agent 24B. The hypervisor agent 24A controller 26A has module(s) for generation of other agents 24 and can thus construct and deploy other agents on VMs or other hypervisors within the virtual architecture. The controller 26A may control operation and/or removal of the agents 24B. The VM agents 24B are less complex and are arranged to deploy the intended agent action (i.e. the agent payload) within its local environment. However the VM agents 24B may additionally undertake any of the common sensing, communicating and/or data storage/retrieval actions described below.
The controller 26 of VM 24B and/or hypervisor 24A agents (herein referred to collectively as agents 24) may also carry the agent payload, comprising one or more module of code for performing the intended function of the agent within the cloud environment. It is intended that an agent 24 according to the present invention will bear a benevolent payload serving to protect or maintain the desired operational state of the cloud environment 10 itself and/or hypervisors 16,18 or VMs 22 operating therein. Thus each agent may comprise a payload for any or any combination of: monitoring activity of hypervisors and/or VMs; reporting normal/abnormal activity; implementing changes to the environment; and/or implementing changes to VM/hypervisor operation. There exist a significant number of functions that could be employed for implementing the intention of a benevolent agent and it is not intended to disclose all of the options for, or precise details of, the payload carried by the agent since a number of desirable functions would be known to the skilled person in the field of anti-malware tools. However it is envisioned that any or any combination of the following non-exhaustive list of example functions could be made available through the agent payload: identification of third party agents or malware; blocking/inhibiting/isolating malware or malware communications; rectifying erroneous/malicious changes to applications or services and associated registry entries; maintaining white/black lists and/or modifying account or address data; inhibiting operation of applications or services; backing up and restoring VMs or associated application data.
The agent architecture 24 comprises one or more sensor 28, 30. Sensors are used to understand the agent’s environment and to identify candidate attack vectors for deployment of further agents and/or agent payload. The sensor data is assessed by the controller 26 and used to identify possible exploits in the cloud environment. Reconnaissance can thus be performed using sensors to determine attack vectors.
The sensors may additionally or alternatively be used to identify activity requiring attention according to the controller payload.
In the examples of Figs. 2 and 3, both an active sensor type 28 and a passive sensor type 30 are shown. The passive sensor 30 may monitor data traffic or other operations without implementing/instructing any change in the monitored system/environment. The passive sensor may monitor data flows 32, such as network traffic, external of the agent itself, e.g. between applications of a virtual machine, over a virtual network and/or across any of the virtualization boundaries described above.
One or more active sensor 28 may be used to probe or poll an external process or application 34, such as an operating system or application. The sensor may transmit and receive resulting response data from the external process 34 under investigation. The received responses may be communicated to the controller 26 in order to identify/assess the external process 34. Active sensors may be under the control of the controller 26, i.e. for control of the active polling process. Passive sensors may be external of the host machine. The controller 26 instructs the desired sensor operation according to available data about its environment. The active sensor 28 is prone to easier detection by other applications but may obtain data which is not readily or quickly available to passive sensors. A combination of both active and passive sensors has been found beneficial to rapid and effective deployment of agents.
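The division of labour between the two sensor types can be sketched as follows. This is an illustrative sketch only, not an implementation from the specification; the class and callback names (PassiveSensor, ActiveSensor, the `report` callback and the `probe` stand-in) are hypothetical.

```python
# Hypothetical sketch: a passive sensor that only observes data flows, and an
# active sensor that polls an external process, both reporting findings to a
# shared controller-side collection. Names are illustrative assumptions.

class PassiveSensor:
    """Observes data flows without altering the monitored system."""
    def __init__(self, report):
        self.report = report              # callback to the controller

    def observe(self, packet):
        # Log only; never respond to or modify the monitored traffic.
        self.report({"sensor": "passive", "data": packet})

class ActiveSensor:
    """Probes an external process and relays the response received."""
    def __init__(self, report, probe):
        self.report = report
        self.probe = probe                # callable standing in for a poll

    def poll(self, target):
        response = self.probe(target)     # easier to detect, but richer data
        self.report({"sensor": "active", "target": target, "data": response})

# Example: the controller collects findings from both sensor types.
findings = []
passive = PassiveSensor(findings.append)
active = ActiveSensor(findings.append, probe=lambda t: f"banner-of-{t}")

passive.observe("vm1->vm2 traffic")
active.poll("hypervisor-1")
```

The controller would then assess the combined `findings` to identify candidate attack vectors, reflecting the text's point that active probing yields data passive monitoring cannot obtain quickly.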
The agent system makes use of a shared and/or distributed data store 27. A data storage system of this kind can be accessed by multiple agents and thus allows collation of data from multiple sources as well as shared access to the stored data. Any of the agent controllers 26 and/or sensors 28, 30 may add data to the data store. The data store may be replicated in more than one virtual environment, e.g. on multiple VMs or hypervisors and/or at a plurality of levels in the virtualization hierarchy. The distributed approach thus provides redundancy and resilience within the dynamic cloud environment. The data store may comprise one or more database.
The data store 27 logs sensor results and any interpreted findings of sensor results, e.g. indicating the data signals identified, the responses received from polling of external processes 34, the inferred location and type of nodes within the cloud architecture.
The data store may include the location and/or status of agents 24 or nodes within the cloud environment, e.g. including IP address and/or a most recent or pending action of each agent. The data store may include an 'infected' status of each identified node/VM/hypervisor, e.g. according to the status of an agent or sensor for the node. The data store may include other operational status identifiers for the nodes, such as a normal, idle or abnormal status, e.g. inoperative, unresponsive, or displaying abnormal processes.
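As an illustration only (the specification does not prescribe a schema), a per-node record in the shared data store 27 might carry the fields described above. All field names and values here are hypothetical assumptions.

```python
# Hypothetical sketch of a per-node record in the shared data store (27).
# Field names and values are illustrative assumptions, not from the patent.
node_record = {
    "node_id": "vm-1",
    "node_type": "VM",                   # VM / hypervisor / cloud manager
    "ip_address": "192.168.100.46",
    "agent_status": "deployed",          # e.g. none / pending / deployed
    "infected": True,                    # an agent is resident on this node
    "operational_status": "normal",      # normal / idle / abnormal
    "last_action": "sensor poll",
}

# The store collates records from multiple agents and supports shared queries,
# e.g. listing identified nodes not yet inhabited by an agent.
data_store = {r["node_id"]: r for r in [node_record]}
uninhabited = [n for n, r in data_store.items() if r["agent_status"] == "none"]
```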
As well as the data store 27, an agent architecture library 36 is also shown in Fig. 2. The library contains details of existing/known backdoors/vulnerabilities/exploits, e.g. as a library of attack vectors to support heterogeneous environments.
This library may comprise modules/elements for agent construction and replication, e.g. defining agent diagnostic/monitoring abilities and/or payload.
An existing agent controller 26, typically the constructor agent controller 26A, can submit vector requests to the library 36 identifying potential nodes for agent deployment, e.g. including the relevant information gathered by sensors 28, 30 and/or maintained in the data store 27. The vector request will typically indicate the location and type of node as well as any other relevant information about its local environment, e.g. neighbouring nodes and the like. The pertinent node elements for construction/replication of an agent are returned from the library such that the relevant type of agent is deployed to the available node. A null response from the library may be returned if no meaningful/compatible agent elements or payload are available, at which point the node status may be logged in the data store. Ongoing sensing may be implemented to monitor such a node, or else the inhospitable nature of the node may be logged and agent resources may be deployed elsewhere.
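The vector-request exchange with the library 36 can be sketched as follows. The request/response fields, keys and the example library entries are assumptions for illustration; the specification does not fix a request format.

```python
# Illustrative sketch of the vector-request exchange with the library (36).
# Request fields and library keys are hypothetical assumptions.

def query_library(library, request):
    """Return agent-construction elements for a node, or None (null response)."""
    key = (request["node_type"], request["environment"])
    return library.get(key)   # None models the null response described above

library = {
    ("VM", "xen"): {"modules": ["sensor", "payload"], "vector": "exploit-A"},
}

hit = query_library(library, {"node_type": "VM", "environment": "xen",
                              "location": "192.168.100.46"})
miss = query_library(library, {"node_type": "VM", "environment": "kvm",
                               "location": "192.168.100.47"})
# 'hit' supplies construction elements for deployment; 'miss' (None) would be
# logged in the data store as an inhospitable node for ongoing sensing.
```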
In some embodiments the library may be maintained within the data store, although in some examples, a local instance of the library may be made available to agents, e.g. hypervisor agents 24A, for the specific purpose of configuring new agents for the available attack vectors.
Prior to discussion of examples of full operation of the benign agent system in cloud environments, reference is made to Fig. 4, which shows the process 38 of agent design/configuration. In its current state of development, a user inputs configuration requirements/limits 40 into a configuration parser 42 in order to initiate the process of constructing an agent adapted to those requirements. This agent development framework allows semi-automated and rapid development of malware-based agents for benign purposes. Current applications include developing malware and/or benign agents to test novel security systems and processes within academic research, as well as use by security professionals in testing the security of an organisation's network, e.g. in addition to a variety of protective tasks.
Similar to the cloud-aware agent construction described above in relation to Figs 2 and 3, this application consists of multiple components. The current software framework of Fig. 4 is not run dynamically on a network by agents themselves. It is instead configured and run prior to network infiltration, allowing advanced optimised configurability of agents for the particular cloud environment/architecture in which it is to be deployed. However similar components do exist between the system used in Fig. 4 and the process followed by the constructor agents and accordingly it is proposed that any or any combination of the features described and shown in Fig. 4 may be implemented in the live system of Fig. 2 or 3, e.g. such that automated/sensed agent requirements may be used in place of manually-entered user configuration requirements 40. The agent is constructed according to a highly detailed configuration file 40 entered by the user. It contains such information as any or any combination of: stealth requirements (e.g. indicating to what extent the agent must remain undetected); performance requirements; relevant payload execution instructions; information regarding the target environment; and/or any specific exploits for execution.
The parser 42 takes the inputs 40, e.g. including any or any combination of payload instructions, agent performance requirements, virtual environment features/identifiers and/or stealth requirements (e.g. indicating to what extent the agent must remain undetected) and parses the information in order to output target system data (e.g. including potential exploit data) and a parsed agent specification. The system data is fed to an exploit developer and the identified exploits are logged in an exploit database 46.
An agent software and architecture database 48 maintains the modules/elements used to construct agents and is accessed by the agent constructor 50 upon receiving the parsed agent specification in order to construct the agent elements to fit the specification. Thus the combined datastores 46 and 48 may or may not make up the library 36 of Fig. 2 and contain security attacks for a wide variety of known scenarios, as well as providing a novel exploit development tool.
The agent specification is also logged in the storage template constructor 52, which may create and log an agent type or template for future use. The outputs of both the constructors 50 and 52 are sent to the compiler 54, which outputs the operational agent 24 as an aggregation of the selected modules.
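The constructor/compiler stage described above can be sketched as selecting named components from a database and aggregating them into one operational agent. This is a minimal, hypothetical sketch; the component names and the `compile_agent` function are illustrative, not from the specification.

```python
# Hypothetical sketch of the constructor (50) / compiler (54) stage: modules
# are selected from a component database (48) to fit the parsed specification
# and aggregated into a single operational agent (24). Names are illustrative.

COMPONENT_DB = {
    "sensor/passive": "passive-sensor-module",
    "sensor/active": "active-sensor-module",
    "payload/monitor": "monitoring-payload-module",
    "store/partial": "partial-data-store-module",
}

def compile_agent(spec):
    """Aggregate the modules named in the parsed agent specification."""
    missing = [c for c in spec["components"] if c not in COMPONENT_DB]
    if missing:
        # Mirrors the null-response case: no compatible elements available.
        raise ValueError(f"no component available for: {missing}")
    return {"name": spec["name"],
            "modules": [COMPONENT_DB[c] for c in spec["components"]]}

agent = compile_agent({"name": "vm-agent",
                       "components": ["sensor/passive", "payload/monitor"]})
```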
The resulting agents can exist within multiple layers of the cloud environment in order to collaborate in allowing self-propagation, forming an expansive, redundant and distributed system which is suited to surviving the elasticity of the cloud environment. The multi-component architecture allows these agents to be constructed depending upon the operating environment of the layer it is to reside in, for example prior reconnaissance of the environment via sensors will allow appropriate attack vectors to be included with the agent, facilitating communication with the rest of the agents once in place.
Some components will be customized to suit the resource constraints. For example, an agent within a hypervisor may have more available space for storage and therefore has a full storage system which acts as a hub allowing other agents to store data within it. In contrast, agents with smaller constraints may only have a partial storage system, containing only what is necessary. Figs. 5 and 6 show simplified examples of how agents can inhabit their environment. Dashed objects indicate agent components whilst solid objects indicate cloud network components. HS and FS indicate half (i.e. partial) and full distributed data stores.
In Figs. 5 and 6 a hypervisor agent 24A is deployed initially at stage 56 and deploys a sensor 28 to understand the environment of Hypervisor 1. The agent 24A implements a full data store 27 on Hypervisor 1 and collates sensor data at 58, indicating potential attack vectors for VM 1 and the Cloud Manager. At step 60, the agent controller sends the vector requests to the agent library 36, which responds indicating the available nodes. The agent then configures 62 and constructs the relevant payload of hypervisor agent 24A, which infects the hypervisor file system at step 64 based on the sensed address data.
From that point the agent 24A can implement further sensors and/or receive further data from the existing sensor 28, by which the environment data in the data store 27 is updated at 66. Further vector requests against the amassing library reveal the attack vector for VM1, allowing construction of agent 24B, which is deployed according to the relevant IP address for VM 1. A partial data store may be implemented on VM1 for its local environment.
The passive sensor 30 may identify communications to the Cloud Manager and may allow deployment of another sensor 30 for detection of traffic indicative of Hypervisor 2. Also the agent 24B on VM1 and/or sensor 28 on Hypervisor 1 may allow for implementation of a further sensor 28 on VM2.
Thus the iterative process of vector identification and attack/infection may proceed until no further unexploited vectors are identified.
One of the principal aspects of the cloud-aware self-propagating software agent system is the ability to traverse multiple layers of the cloud architecture. The software may travel horizontally, e.g. from hypervisor to hypervisor when given an appropriate attack vector or vertically, e.g. from hypervisor to virtual machine. Vertical propagation is only dependent upon an appropriate attack vector when moving up through the hierarchy.
Post propagation, the individual agents must establish communication channels across layers. Network-based communication builds upon already established layers; however, communication through virtualization software requires novel techniques. Arguably the most difficult is the ability to travel from VM to hypervisor and vice versa.
Turning to Fig. 7, there is shown an example of how this has been achieved in a preliminary implementation of the invention. A method of tunnelling data between the hypervisor 18 and a VM using a data I/O channel was used. The hypervisor agent 24A creates a disk image 38, which can then be mounted on the VM 22 as a USB drive 70 via the relevant virtual drivers 72. The mounted drive 70 thus allows communication to/from the VM and thus to/from the VM agent 24B thereon. The VM agent can write to and read from the mounted drive 70, which is in turn communicated to the hypervisor agent 24A as the disk image data.
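The disk-image tunnelling of Fig. 7 can be mimicked, very loosely, with a shared directory standing in for the mounted drive. Real mounting of a USB disk image via virtual drivers is hypervisor-specific and is not shown; the `DiskImageChannel` class and file names are hypothetical.

```python
# Simplified, file-based analogue of the Fig. 7 tunnelling channel: the
# hypervisor-side agent writes into an image, the VM-side agent sees it as a
# mounted drive and replies through the same channel. A temporary directory
# stands in for the mounted USB drive; all names are illustrative.
import os, tempfile

class DiskImageChannel:
    def __init__(self, mount_dir):
        self.mount = mount_dir            # stands in for the mounted drive 70

    def write(self, name, payload):       # either side writes raw bytes
        with open(os.path.join(self.mount, name), "wb") as f:
            f.write(payload)

    def read(self, name):
        with open(os.path.join(self.mount, name), "rb") as f:
            return f.read()

mount = tempfile.mkdtemp()
channel = DiskImageChannel(mount)
channel.write("cmd", b"run-sensor")       # hypervisor agent -> VM agent
channel.write("result", b"vm2-detected")  # VM agent -> hypervisor agent
```

Raw bytes are used deliberately, matching the text's point that data is exchanged in a minimally processed format to facilitate communication.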
Further data tunnelling techniques may be enabled through manipulation of network traffic, e.g. to employ steganographic techniques, which may enable hidden channels in a variety of protocols and formats.
Data is written and read in a minimally processed and/or lossless compression format, such as RAW file format, to facilitate communication.
Using the technique shown in Fig. 7, external agents 24 can communicate with a remote VM agent 24B across a virtualisation boundary in order to send commands and/or share sensor data. A data store on the VM 22 may also be populated from another layer in this manner.
Figs. 8 and 9 show two different examples of how the architecture of the agents could traverse the layers of a virtual network.
Turning firstly to Fig. 8, there is shown a vertical propagation bearing a relatively low probability of success. Dashed lines indicate agent entities or data flow. In this example, an agent, a1, begins with low privileges at the bottom level of the virtual system hierarchy on VM1 and thus attempts to move up the hierarchy. Due to this, the probability of success would be considered low, but feasible. An example of the process undertaken in Fig. 8 is as follows:
1. Agent a1 is executed within a purposely crafted, custom kernel inside a guest image.
2. The agent analyses its environment and determines its type, e.g. it is within a Xen hypervisor. A library is queried for an attack vector and, given a number of tests to conduct, the worm determines that FLASK is disabled on the hypervisor.
3. The agent executes an attack via an exploitation of the FLASK bug and elevates privileges to the dom0.
4. The agent a2 executes a sensor under these new privileges, scanning the network for live VM migrations occurring between hypervisors. No live migrations occur but other hypervisors are identified.
5. Agent a2 queries the library and produces a denial-of-service (DoS) attack and a migration interception for Hypervisor 2.
6. The DoS is executed on Hypervisor 2, causing a live migration of VM2 to occur to Hypervisor 3.
7. The agent infects the migrating memory and implants another agent a3 into the memory of the migrated VM, successfully propagating, whilst also elevating its privileges.
8. The two agents then establish a communication route via the network.
Turning now to Fig. 9, there is shown an example of vertical propagation with a higher starting success probability. In this example, an agent a1 begins with higher level privileges in the middle of the hierarchy, and thus can move between any layers given an appropriate attack vector. The example process is as follows:
1. Agent a1 is executed within a VMM (Hypervisor 1). During a local VM (VM1) shutdown, the agent infects VM1 with another agent, a2.
2. A VMM exploit check fails: a sensor, S, installed on the network reveals that the local VMMs are not susceptible to any known remote exploits.
3. So as to cross layers to another VMM, agent a1 executes a local DoS.
4. The cloud manager notices the drop in capacity at Hypervisor 1 and migrates VM1 to another VMM (Hypervisor 2).
5. With low probability, a2 on VM1 infects other VMs belonging to Hypervisor 2, therefore propagating horizontally across the network.
Many possible techniques are available and more will continue to become available. Some are as follows:
• Vertical propagation can be enabled by tampering with data transfers going into a target VM. Binary code can be replaced with back doors, thus enabling privileged execution.
• Introspection can enable recovery of encryption keys, passwords and other information, enabling malware to falsely authenticate as a privileged user.
• A highly effective method is exploiting back doors put in place by the system administrator. Because the benign worm-based system is employed by the administrator, it is likely to be given access to different areas, whereas most other methods must breach systems discreetly or outside the administrator's control.
The above examples show how agents could potentially act differently in different environments. The following description details cloud propagation modelling for the agent system described above. The models have been derived using a simulated cloud environment, affording basic features of an elastic cloud network, so as to determine the effect of varying workloads on the propagation rate of the agents.
Typically these models are based upon the classical epidemic model, whereby each individual within a set (the population), at any time interval may be in one of two states: either (i) infected or (ii) susceptible.
The number of infected hosts in the simple epidemic model evolves as: dI(t)/dt = βI(t)[N - I(t)], where β is the infection rate, I(t) returns the infected nodes at time t and [N - I(t)] returns the quantity of susceptible nodes. This model does not take into account any of the population recovering, or the effect of the worm on the topology of the network. The Kermack-McKendrick model improves upon the simple epidemic model by assuming some infected nodes become immune via patching, etc. and therefore includes a rate of node recovery. Where R(t) denotes the number of removed hosts, γ is the rate of removal (or death rate) and J(t) denotes the number of infected hosts at time t: dJ(t)/dt = βJ(t)[N - R(t) - J(t)] - γJ(t) and dR(t)/dt = γJ(t).
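The simple epidemic model can be illustrated with a short discrete-time integration of dI(t)/dt = βI(t)[N - I(t)]. The parameter values below are illustrative only, not taken from the patent's experiments.

```python
# Discrete-time sketch of the simple epidemic model described above:
# dI/dt = beta * I(t) * (N - I(t)). Parameter values are illustrative.

def simulate_epidemic(N=100, I0=1, beta=0.001, steps=200, dt=1.0):
    """Return the infected-count trajectory I(0), I(dt), ... for `steps` steps."""
    I = float(I0)
    history = [I]
    for _ in range(steps):
        I += dt * beta * I * (N - I)   # logistic growth toward N
        history.append(I)
    return history

h = simulate_epidemic()
# Infection is monotone non-decreasing and saturates below the population N,
# reflecting that this model has no recovery term.
```

Adding the Kermack-McKendrick removal term would simply subtract `gamma * I` each step and accumulate it into a removed count R(t).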
A two-factor worm model may also take into account human counter-measures, such as patching and removal of hosts, and in addition takes into account the worm's effect upon network traffic by having a variable birth rate. This is accomplished by applying functions which remove hosts from the infected and susceptible populations. A model which applies to malware whose propagation rate is dependent upon local topology is known as the spatial-temporal model. The network is represented as a directed graph, where the probability of each node being infected is dependent upon the number of locally infected nodes at any time. This is best represented as a graph structure as shown in Fig. 10, where all visible nodes are those either infected, I, or susceptible, S. The arrows of the graph denote the direction(s) in which the agent may propagate from one node to another. Along the edge of the graph is given the birth rate (βn), or otherwise the probability of infection of the node. At each node is given the agent death rate, or chance of recovery, (δn), where 'n' is the node number or another suitable identifier.
The direction of each edge in Fig. 10 thus denotes a possible attack path for an agent.
Another model which bases propagation upon topology is represented by a connected network G = (N; E) where N is the nodes and E the edges. A standard Susceptible-Infected function is defined:
Then, the state of each node at time t is dependent upon the state of its neighbours at time t - 1. Susceptible nodes may be infected with probability βi where: βi(t) = g(m(i, t - 1)).
Function g denotes the infection function, with function m returning the number of infected neighbours of node i at the given time. Therefore the number of infected hosts at any point can be calculated with: I(t) = Σi si(t), where si(t) = 1 if node i is infected at time t and 0 otherwise.
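The topology-dependent model above can be illustrated with a small simulation in which each susceptible node's infection probability at time t is an assumed function g of its number of infected neighbours at t - 1. The graph, the choice of g and the random seed are illustrative assumptions.

```python
# Hedged sketch of the topology-dependent model: a susceptible node becomes
# infected with probability g(m), where m counts its infected neighbours at
# the previous time step. Graph and g are illustrative assumptions.
import random

def step(graph, infected, g, rng):
    """graph: node -> neighbour list; infected: set of infected node ids."""
    new_infected = set(infected)
    for node, neighbours in graph.items():
        if node in infected:
            continue
        m = sum(1 for n in neighbours if n in infected)  # infected neighbours
        if rng.random() < g(m):
            new_infected.add(node)
    return new_infected

# Tiny example: one hypervisor connected to two VMs, hypervisor infected first.
graph = {"h1": ["vm1", "vm2"], "vm1": ["h1"], "vm2": ["h1"]}
g = lambda m: min(1.0, 0.5 * m)   # illustrative infection function
infected = {"h1"}
rng = random.Random(0)            # fixed seed for a reproducible run
for _ in range(10):
    infected = step(graph, infected, g, rng)
```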
Traditional susceptible-infected models make few assumptions about the epidemic but have nonetheless tended to be quite accurate, whilst newer models make more assumptions and provide a more accurate description of the malware's propagation, but might not always take into account complex types of network.
For modelling of propagation in conjunction with the present invention, it is considered to be beneficial to use models which take into account the topology of the network, due to the elasticity of cloud environments.
Malware propagation within cloud networks is related to the quantity of local neighbours which are infected, as this changes due to the network's dynamic structure.
The propagation model is thus underpinned by the probability of a node being infected based upon location and time (i.e. topology-based propagation). Therefore, the model developed for use with the present invention is similar to the spatial-temporal model previously discussed. It is represented structurally as a directed graph as shown in Fig. 11, where an edge and its associated direction denote the availability of an attack path. The weighting of an edge denotes the birth rate, or the probability of successful agent propagation via the attack path. Birth rates differ due to the varying difficulty in traversing an attack path, e.g. according to the nature/type of attack required, the type of node to be infected and/or the layer of the node relative to the agent on an existing node.
Nodes may be one of two types, virtual machines or hypervisors. In Fig. 11, it can be seen that propagation vertically downwards in the hierarchy from hypervisor to one of its associated VMs bears a high likelihood of success, whereas propagation between hypervisors is generally of lower success probability, and propagation between VMs is potentially lower still.
The malware propagation stage of the proposed model is similar to that presented in Gu, Y., Song, Y., Jiang, G., Wang, S.: A new susceptible infected model of malware propagation in the internet. In: Young Computer Scientists, 2008. ICYCS 2008. The 9th International Conference for, pp. 2771-2775 (2008). The worm propagation susceptibility is dependent upon the topology of the network and the status of the other nodes. However, since the environment is heterogeneous in nature, each node belongs to a separate subset of N, where H contains all hypervisors, C contains all cloud managers and V contains all virtual machines, such that: N = H ∪ C ∪ V.
As the cloud has inherent elasticity, the structure will continuously vary somewhat depending upon a number of factors. At each point in time t, there is a potential for hardware failures to occur within the cloud network, so H will vary according to the probability of the global hardware failure threshold at any point in t, such that the probability of each h in H failing is:
Where R is a continuous random value where 0 < R < 1 and Θ is the global failure threshold. In addition, as the network is elastic depending upon demand, V will also directly vary in line with the workload, as the cloud manager will balance resources to maintain an equal workload distribution across the hypervisors. V = W, where W is a discrete random integer representing the workload such that 0 < W < 100. Therefore the average quantity of v directly connected to each member of H is:
Where P is the global, individual hypervisor capacity. Propagation is then considered according to which set the node resides in. Therefore by adapting the original model, the birth rate β is replaced by function b(i), giving:
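The elastic-network assumptions above can be sketched numerically: hypervisors fail when a uniform random draw R falls below the global threshold Θ, and the VM population tracks a random workload W spread evenly across surviving hypervisors, capped by the per-hypervisor capacity P. All parameter values and the function name are illustrative assumptions.

```python
# Illustrative sketch of one time step of the elastic cloud model: hardware
# failures thin the hypervisor set H, and the VM population V follows the
# workload W. Parameter values are assumptions for illustration only.
import random

def simulate_step(num_hypervisors, theta, rng, capacity=10):
    """Return (surviving hypervisors, VM count, avg VMs per hypervisor)."""
    # Hardware failure: hypervisor h fails when its draw R < theta (0 < R < 1).
    surviving = sum(1 for _ in range(num_hypervisors) if rng.random() >= theta)
    workload = rng.randint(0, 100)    # W: discrete workload, 0 <= W <= 100
    vms = workload                    # V varies directly with the workload
    per_hypervisor = vms / surviving if surviving else 0.0
    return surviving, vms, min(per_hypervisor, capacity)  # capped by P

rng = random.Random(1)
surviving, vms, avg = simulate_step(8, theta=0.1, rng=rng)
```

Iterating this step while running the graph-based infection model over the surviving nodes reproduces, in miniature, the experiments on how elasticity affects propagation rate.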
A significant number of experiments of simulated propagation have been run so as to assess the effect of varying cloud factors on the rate of propagation and thereby examine the dynamic cloud susceptibility to agents. It was found in general that the more dynamic the network was, the greater the rate of propagation, as migrations catalyse the agents' ability to reach new, previously isolated, hosts. Indeed the failure rate and migration rate had a very strong correlation with propagation rate, with only a small decrease in migration rate significantly increasing the time taken for the agents to propagate.
However testing has also been carried out to examine the effects of workload on propagation rate. This also had the effect of causing the network to be dynamic, instigating migrations and removal of nodes, but through two means instead of one. It was found that increasing workload generally correlated with increasing propagation rate, but only up to a certain value of workload, beyond which the effectiveness of the agents becomes diminished, e.g. due to the rapid rate at which the network changes. This is likely due to the infected nodes not having a chance to propagate to another node before they are removed when the network restructures. A number of tests were run under varying conditions of network size. Static networks were first tested as a baseline comparison and then various network sizes were tested. The system collected two log files per experiment. The first describes the network structure at each stage, including simulated hardware failures, cloud workload values etc. The second log file is generated by an HTTP listener. As each agent/worm infects a VM, it pings this listener with the MAC address of its host environment. Thus the propagation rate of the worm through the network can be monitored as it pings from different IP addresses. Below is a small sample of an HTTP listener file which clearly shows the malware propagation across the cloud network.
192.168.100.252 - [15/Aug/2014 16:28:46] "GET /worm.py HTTP/1.1" 200 -
192.168.100.252 - [15/Aug/2014 16:30:52] "GET /00:16:3e:1d:49:3d HTTP/1.1" 404
192.168.100.252 - [15/Aug/2014 16:30:55] "GET /00:16:3e:4b:85:30 HTTP/1.1" 404
192.168.100.252 - [15/Aug/2014 16:33:34] "GET /00:16:3e:66:15:29 HTTP/1.1" 404
192.168.100.46 - [15/Aug/2014 16:35:58] "GET /00:16:3e:66:15:29 HTTP/1.1" 404
192.168.100.46 - [15/Aug/2014 16:36:32] "GET /00:16:3e:4b:85:30 HTTP/1.1" 404
192.168.100.46 - [15/Aug/2014 16:39:31] "GET /00:16:3e:0c:9c:03 HTTP/1.1" 404
192.168.100.252 - [15/Aug/2014 16:50:16] "GET /00:16:3e:31:9c:62 HTTP/1.1" 404
192.168.100.43 - [15/Aug/2014 16:50:17] "GET /00:16:3e:7e:7b:cf HTTP/1.1" 404
192.168.100.43 - [15/Aug/2014 16:50:23] "GET /00:16:3e:7e:7b:cf HTTP/1.1" 404
192.168.100.46 - [15/Aug/2014 16:50:27] "GET /00:16:3e:22:58:0f HTTP/1.1" 404
192.168.100.46 - [15/Aug/2014 16:50:34] "GET /00:16:3e:15:ea:32 HTTP/1.1" 404
192.168.100.17 - [15/Aug/2014 16:51:06] "GET /00:16:3e:0b:34:dc HTTP/1.1" 404
192.168.100.17 - [15/Aug/2014 16:51:11] "GET /00:16:3e:0b:34:dc HTTP/1.1" 404
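A minimal stand-in for such a propagation-monitoring listener can be built with the Python standard library: agents report by requesting a path containing their host MAC address, and the listener records source address and path, as in the log above. The exact implementation of the original tool is not given in the specification; the handler and log names here are assumptions.

```python
# Minimal stand-in for the propagation-monitoring HTTP listener: each GET of
# a MAC-address path is recorded with the requester's source address. The 404
# reply is harmless; only the log entry matters. Names are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading, urllib.request, urllib.error

propagation_log = []

class ListenerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        propagation_log.append((self.client_address[0], self.path))
        self.send_response(404)
        self.end_headers()
    def log_message(self, *args):       # silence default stderr logging
        pass

server = HTTPServer(("127.0.0.1", 0), ListenerHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# An infected node pings the listener with its MAC address as the path.
port = server.server_address[1]
try:
    urllib.request.urlopen(f"http://127.0.0.1:{port}/00:16:3e:1d:49:3d")
except urllib.error.HTTPError:
    pass                                # expected: the listener replies 404
server.shutdown()
```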
The monitoring of propagation progress may also be beneficial in a working implementation of the invention, in order to establish proper coverage of the virtualization network. Thus the sending of short messages indicating status and/or location of agents, possibly also including an indication of identified node status, to a central repository such as the hypervisor memory store or a remote monitor may form a part of the deployed system according to various aspects of the invention.
In examples of use, the results using the monitoring tools may be compared to the developed propagation model results in order to assess/refine propagation models. This can facilitate improvements to the agent system and/or understanding of propagation behaviour.
It will be appreciated based on the above disclosure that the invention provides a novel approach to cloud system/architecture defence, maintenance and/or monitoring. The modelling and/or reporting system developed alongside the novel agent approach may allow examining of methods of propagation and propagation effectiveness in order to assess cloud operational status. The modelling and/or reporting system may also aid development of future generations of security technologies for the cloud, e.g. by understanding not only benevolent agent behaviour but also the malevolent worm/malware behaviour that the cloud architecture is to defend against.

Claims (26)

1. A virtual computing architecture protection system comprising: a first agent having a controller component for propagating one or more further agent; a data store component for storage of data pertaining to computational nodes of the virtual computing architecture; and one or more sensor component arranged to sense data communications between virtual computing applications on one or more node, wherein the first agent controller is arranged to interpret the sensed data communications and to query the data store to identify a computational node of the virtual computing architecture that is susceptible to agent propagation, the first agent controller propagating and deploying the further agent on the said computational node once identified.
2. The system of claim 1, wherein the first agent controller is arranged to determine whether an instance of the first or further agent is already installed on a computational node and to identify a computational node type and one or more exploit for said node type in order to identify whether the node is susceptible to agent propagation.
3. The system of claim 1 or claim 2, wherein the data store comprises a library of exploits for known computational node types and the first agent controller queries the library in order to identify whether the node is susceptible to agent propagation.
4. The system of claim 3, wherein a positive library query returns an attack vector confirmation to the first agent controller for deploying the further agent.
5. The system of any preceding claim, wherein the data store comprises an agent component library, comprising a plurality of further agent modules for propagation of the further agent, the further agent modules being selected from the agent component library according to the identification of the susceptible computational node.
6. The system of claim 5, wherein the propagation of the further agent comprises composing the further agent using the selected modules from the agent component library.
7. The system of any preceding claim, wherein the first agent is a cloud manager agent or hypervisor agent.
8. The system of any preceding claim, wherein the further agent is a hypervisor agent or a virtual machine (VM) agent.
9. The system of any preceding claim, wherein the data store is a distributed data store, comprising a first data store on a relatively higher level of the virtual computing architecture, and a further data store on a relatively lower level of the virtual environment architecture.
10. The system of claim 9, wherein the further data store is a partial data store replicating only partial data or data types from the first data store.
11. The system of any preceding claim, wherein the data store comprises the location within the virtual computing architecture of the/each further agent and/or its status.
12. The system of any preceding claim, wherein the data store comprises a log of identified addresses for the computational nodes in the virtual computing architecture, said log comprising node operational status and/or agent inhabitation status.
13. The system of any preceding claim, wherein the controller component of the first and/or further agent comprises an agent payload having one or more of an anti-malware, active defence, VM or network maintenance and/or health monitoring application.
14. The system of any preceding claim, wherein the first agent and the further agent exist at different hierarchical levels and/or on opposing sides of a virtualisation boundary in the virtual computing architecture.
15. The system of any preceding claim, wherein the first agent comprises both an active and a passive sensor component, the active sensor component being arranged to poll one or more computational node.
16. The system of any preceding claim, wherein the further agent comprises a further controller component and/or one or more further sensor component.
17. The system of any preceding claim, wherein the first agent instigates data transfer with a computational node by tunnelling.
18. The system of claim 17, wherein the first agent creates a data store image and initiates mounting of the image by the computational node as a data store drive, the data within the data store image being transferred to the computational node via the data store drive.
19. The system of any preceding claim, wherein the first agent controller identifies one or more predetermined action of the computational node based on the received sensor data and deploys the further agent in response to said action identification, wherein the predetermined action comprises one or more of initiation, removal or migration of the computational node.
20. The system of any preceding claim, wherein the first agent controller is arranged to inhibit communication to/from the identified computational node so as to trigger migration, removal or replacement of a virtual machine by a hypervisor of the virtual computing architecture.
21. The system of any preceding claim, wherein each further agent communicates its location and status to the data store or a monitoring application so as to generate an agent propagation record.
22. The system of any preceding claim, further comprising a HTTP listener or a HTTP request reflector, arranged to log data for an agent propagation record.
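The HTTP listener of claim 22 can be sketched as a minimal server that appends each agent's reported location and status (claim 21) to a propagation record. The endpoint path and report fields below are assumptions, not part of the claims.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

propagation_record = []  # ordered log of agent check-ins

class ListenerHandler(BaseHTTPRequestHandler):
    # Minimal HTTP listener: each further agent POSTs its location and
    # status, and the listener appends the report to the record.
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        report = json.loads(self.rfile.read(length))
        propagation_record.append(report)
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

server = HTTPServer(("127.0.0.1", 0), ListenerHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# An agent reporting in; the field names are illustrative.
report = json.dumps({"agent": "a-2", "location": "vm-12",
                     "status": "deployed"}).encode()
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/report", data=report,
    headers={"Content-Type": "application/json"})
urllib.request.urlopen(req)
server.shutdown()
```

Accumulated reports of this kind are what allow the monitoring application to reconstruct how agents have propagated through the architecture over time.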
23. The system of any preceding claim, in which multiple agents act as a collective distributed system to provide any or any combination of: the controller component for propagating one or more further agent; the data store component for distributed storage of data pertaining to computational nodes of the virtual computing architecture; and/or a plurality of the sensor components.
24. A method of monitoring a virtual computing system comprising: installing a first agent having a controller component for propagating one or more further agent; maintaining a data store containing data of potential exploits for computational nodes of the virtual computing system; sensing data communications between virtual computing applications on one or more node using one or more agent sensor; interpreting the sensed data communications at the first agent controller; querying the data store to identify a computational node of the virtual computing system that is susceptible to agent propagation; automatically propagating the further agent by the first agent controller; and deploying the further agent on said computational node once identified.
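The claimed method can be traced end to end with an in-memory stand-in for the data store; a node flagged `susceptible` represents one with a known exploit path open to benign agent propagation. Every name here (the class, its fields, the sensing stub) is an illustrative assumption.

```python
# In-memory stand-in for the data store of potential exploits.
data_store = {
    "vm-1": {"susceptible": False},
    "vm-2": {"susceptible": True},  # known exploit path: agent can propagate
}

class Agent:
    # Sketch of an agent whose controller component propagates further agents.
    def __init__(self, node):
        self.node = node
        self.deployed_agents = []

    def sense(self, node):
        # Passive sensing stand-in for observed inter-application traffic.
        return {"node": node, "traffic": "app-to-app"}

    def propagate(self):
        # Controller: interpret sensed data, query the data store for
        # susceptible nodes, and deploy a further agent on each one.
        targets = [n for n, v in data_store.items() if v["susceptible"]]
        for node in targets:
            self.sense(node)  # interpretation step, elided in this sketch
            further = Agent(node)
            self.deployed_agents.append(further)
            data_store[node]["susceptible"] = False  # node now inhabited
        return self.deployed_agents

first_agent = Agent("controller-vm")
deployed = first_agent.propagate()
```

Because each deployed agent carries the same controller component, the propagation step can recurse from every inhabited node, which is what makes the agents self-propagating.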
25. A data carrier comprising machine-readable code for the operation of one or more computational processor to operate according to the system of any one of claims 1-23 or to perform the method of claim 24.
26. A system or method substantially as hereinbefore described with reference to the accompanying drawings.
GB1604143.6A 2016-03-10 2016-03-10 Self-propagating cloud-aware distributed agents for benign cloud exploitation Withdrawn GB2548147A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1604143.6A GB2548147A (en) 2016-03-10 2016-03-10 Self-propagating cloud-aware distributed agents for benign cloud exploitation

Publications (2)

Publication Number Publication Date
GB201604143D0 GB201604143D0 (en) 2016-04-27
GB2548147A true GB2548147A (en) 2017-09-13

Family

ID=55952157

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021001235A1 (en) * 2019-06-30 2021-01-07 British Telecommunications Public Limited Company Impeding threat propagation in computer network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040128530A1 (en) * 2002-12-31 2004-07-01 Isenberg Henri J. Using a benevolent worm to assess and correct computer security vulnerabilities
US20050204150A1 (en) * 2003-08-22 2005-09-15 Cyrus Peikari Attenuated computer virus vaccine

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)