CN106775942B - Cloud application-oriented solid-state disk cache management system and method - Google Patents


Info

Publication number
CN106775942B
CN106775942B (application CN201611127232.9A)
Authority
CN
China
Prior art keywords
module
virtual machine
decision
cloud application
solid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611127232.9A
Other languages
Chinese (zh)
Other versions
CN106775942A (en)
Inventor
黄涛
唐震
吴恒
魏峻
王伟
支孟轩
Current Assignee
Institute of Software of CAS
Original Assignee
Institute of Software of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Software of CAS filed Critical Institute of Software of CAS
Priority to CN201611127232.9A
Publication of CN106775942A
Application granted
Publication of CN106775942B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines

Abstract

The invention relates to a cloud application-oriented solid-state disk cache management system and method. Its core idea is to describe, from the perspective of the cloud application, the mapping relationship between virtual machines and the solid-state disk using a multilayer network model, and on that basis to determine the optimal solid-state disk cache size for each virtual machine. When the workload of the cloud application changes, the system automatically triggers an adjustment process that performs live migration of virtual machines and changes cache capacities, thereby improving both the performance of the cloud application and the utilization of the solid-state disk.

Description

Cloud application-oriented solid-state disk cache management system and method
Technical Field
The invention relates to a cloud application-oriented solid-state disk cache management system and method, and in particular to a multilayer-network-based method for solid-state disk cache allocation and cache-aware live migration of virtual machines. The invention belongs to the technical field of software.
Background
Virtualization technology is now in widespread use. With virtualization, multiple virtual machines can be consolidated onto one physical server, effectively improving the utilization of hardware resources. A virtualization server (Hypervisor) typically stores virtual machine images either on magnetic-medium Hard Disk Drives (HDDs) or on large-capacity shared storage attached at the back end through a protocol such as iSCSI. In this architecture, the IO performance of the virtualization server directly affects the performance of the virtual machines themselves.
As a fast nonvolatile medium, a Solid-State Disk (SSD) is usually deployed on the Hypervisor as a read-write cache in front of the back-end virtual machine image storage. An IO request from a virtual machine first passes through the cache; on a cache hit, the cached data is returned immediately and no further read or write operation is triggered against the comparatively slow back-end HDD or shared storage, which effectively improves IO performance. The solid-state disk cache deployed on a Hypervisor is shared by all virtual machines it hosts, so reasonable use of solid-state disk cache resources is of great importance.
Cloud applications are the primary service model in a virtualized environment. A typical cloud application is composed of multiple virtual machines on which different components are deployed; the components are associated with each other and collectively provide services to the outside. A representative scenario is the Web-based transactional cloud application, which usually consists of a front-end load balancer, service middleware, and a back-end database or persistent store. Such an application typically provides an HTTP(S) interface that users access through a browser, or exposes a RESTful API, for example for third-party open-platform applications. It may also connect to complex back-end transaction logic such as social-network graphs or big-data analytics. For this kind of application the average response time is the most critical index and directly determines the end-user experience; optimizing response time must therefore be considered from the perspective of the application as a whole, since the optimization goal and means cannot be determined directly from the individual machines of the cluster.
In addition, the load of a cloud application changes dynamically. Users access a cloud application in certain patterns, which manifest themselves as distinct IO load profiles on the virtual machines. Solid-state disk cache resource management therefore needs adaptive capability: it must sense changes in the cloud application's load pattern and trigger a new round of adjustment of the solid-state disk resources, so that optimal application performance is maintained.
However, current mainstream solid-state disk cache resource management methods mainly approach the resource allocation problem from the perspective of the virtual machine and the cache itself, aiming to allocate the solid-state disk cache among all virtual machines as reasonably as possible and to achieve optimal performance. The indicators of interest in this line of work are the cache miss rate and related derived indicators observed from the virtual machine, such as IO response time and IO bandwidth. The miss rate, as the most critical cache index, directly reflects how the cache is used and is closely related to the average IO response time of the client (that is, the virtual machine) that ultimately uses the cache, so reducing the miss rate is an intuitive and effective means of improving IO performance. It is worth noting, however, that approaching the problem purely from bottom-layer IO indicators neglects the performance influence of the upper-layer workload on the underlying storage device, so the actual adjustment effect can differ from the result calculated by the theoretical model.
In addition, these cache resource management methods treat virtual machines as independent units and adjust them separately; they neither consider the natural association between virtual machines belonging to the same cloud application nor evaluate the application's overall average response time from the application perspective. Different virtual machines of the same cloud application may have different priorities and IO loads and may handle different transactions, so adjusting the cache capacity of different virtual machines affects the application's average response time differently. As a result, even if each independent virtual machine achieves its optimal miss rate, IO response time, or IO bandwidth, the application as a whole may still not achieve optimal performance under such solid-state disk cache resource management.
When a solid-state disk is used as a cache resource, it exhibits not only the characteristics of a cache but also those of a storage medium, namely upper-bounded service capabilities such as bandwidth and IO Operations Per Second (IOPS). When multiple virtual machines share one solid-state disk cache and run high-IO-load applications, the demand on the cache's service capability may exceed its supply, producing resource contention: IO requests queue up at the solid-state disk, its average latency rises far above the ideal value, and the caching effect is reduced.
Therefore, cache resource management must consider not only the allocation of solid-state disk cache capacity but also the placement of virtual machines from the cluster point of view, so as to make maximal use of the solid-state disk's service capability and avoid resource contention as far as possible. The characteristics of the cloud application need to be taken into account to select an appropriate Hypervisor for each virtual machine of the application, and the concrete allocation of the solid-state disk cache among the virtual machines hosted on the same Hypervisor is then determined according to their demand for the cache and their degree of influence on application performance. Existing work does not combine virtual machine placement with solid-state disk resource allocation well.
At present, no relevant literature reports exist.
Disclosure of Invention
The aim of the invention: addressing the deficiencies of the prior art, the invention provides a cloud application-oriented solid-state disk cache management system and method.
The technical scheme of the invention is as follows: a cloud application-oriented solid-state disk cache management system is mainly composed of a control module, a monitoring module, an analysis module, a decision module and an execution module, as shown in figure 1. The main responsibilities, interaction modes and implementations of the modules are as follows:
a control module: the system is used for coordinating the work of the monitoring module, the analysis module, the decision module and the execution module, interacting with each module and collecting results, and realizing solid-state disk cache management based on a self-adaptive closed loop. In a complete closed-loop execution process, a control module interacts with a monitoring module firstly, the monitoring module is relied on to continuously monitor the working load of the cloud application and the dependency relationship between the virtual machines, and the cloud application state and performance data are collected for subsequent analysis and use; then, the cloud application state and performance data are transmitted to the analysis module through interaction with the analysis module, and the generated related information of the multilayer network model is collected; interacting with a decision module, transmitting the data to a multi-layer network model and relying on the decision module to complete solid-state disk cache management decisions, such as calculation of a virtual machine placement scheme and further calculation of a cache allocation scheme on a Hypervisor; and finally, interacting with an execution module, transmitting a specific solid-state disk cache management decision, and finishing the online migration of a specific virtual machine and the dynamic adjustment of the cache capacity by the execution module. The control module is also responsible for triggering a new round of closed-loop execution when the cloud application workload is detected to be mutated;
a monitoring module: the system comprises a monitoring module deployed on a Hypervisor and a monitoring module deployed on a virtual machine; the monitoring module deployed on the Hypervisor is responsible for monitoring the related information of the Hypervisor, the solid state disk and the cache, wherein the related information comprises an idle CPU and a memory resource of the Hypervisor; maximum bandwidth and IOPS of the solid state disk, currently used bandwidth and IOPS; the utilization rate, the read-write times and the hit rate of the cache; the monitoring module deployed on the virtual machine is responsible for monitoring relevant information of cloud application components deployed on the virtual machine and IO performance of the virtual machine, wherein the relevant information of the cloud application components comprises the proportion of each transaction, the execution time of each component when the cloud application deals with the current workload, and network interaction and dependency relationship among the components; the IO performance of the virtual machine comprises the used bandwidth and the IOPS, and the IO load condition of the virtual machine obtained by calculation, namely the proportion of the used IO resources in the available resources; the monitoring module can continuously monitor the information, receives a request of the control module in the closed-loop execution process, and returns corresponding cloud application state and performance data for subsequent analysis and decision; the control module firstly interacts with the monitoring module in the execution process of the self-adaptive closed loop to acquire necessary information for a subsequent analysis module, a decision module and an execution module to use;
an analysis module: receives the information passed from the monitoring module and constructs the multilayer network model, which comprises a resource demand end, a resource supply end and several decision layers that match the two. By analyzing the network interactions of the cloud application components and the virtual machine IO performance returned by the monitoring components deployed in the virtual machines, it constructs a dependency graph between virtual machines and then traverses all strongly connected components of the dependency relationships of all virtual machines in the cluster, thereby delineating the boundary of each cloud application. Next, combining the dependency graph with the IO load of each virtual machine, it establishes each virtual machine's demand for the solid-state disk cache, completing the construction of the resource demand end. The analysis module then assembles the decision modules to be called and adapts the back-end resource supply end, completing the construction of the multilayer network model. During adaptive closed-loop execution, the control module passes the cloud application information and virtual machine IO performance information collected from the monitoring module to the analysis module, and receives the topology of the multilayer network model computed by the analysis module, together with the decision layers selected within it, for use in the subsequent interaction with the decision module;
a decision module: the method is used for matching a supply end and a demand end in a multilayer network model to realize the optimal matching of resource supply and demand; the decision-making module is based on a specific algorithm to realize the aim of resource management and is represented as a decision-making layer in a multi-layer network; the method comprises the following steps that two default decision layers are included at present, and a first decision layer adopts a bipartite graph matching algorithm to calculate a placement scheme of a virtual machine; the second decision layer calculates the optimal cache size to be allocated to each virtual machine on the Hypervisor by adopting a minimum cost maximum flow algorithm in the network flow; more decision layers can be expanded according to different requirements during implementation; after the execution control module receives the multilayer network model transmitted back by the analysis module, the appointed decision layer is called to complete the calculation of the corresponding resource management scheme, after all decision layers selected in the model are called, the matching of the resource demand end and the resource supply end is completed, and the final resource management scheme is generated, wherein the final resource management scheme comprises a virtual machine placing scheme and a cache allocation scheme;
an execution module: deployed on a Hypervisor; the execution module receives the virtual machine placement scheme obtained by calculation of the decision module and the solid-state disk cache size of the virtual machine on each Hypervisor, and executes specific dynamic migration and cache capacity adjustment operations of the virtual machine; the control module interacts with the execution module finally in the execution process of the self-adaptive closed loop, transmits a resource management scheme generated by the decision module and applies the resource management scheme to the virtual machine; in addition, when the control module triggers a new round of self-adaptive closed-loop execution, the execution module also calculates a virtual machine dynamic migration scheme with the least steps and a cache capacity adjustment scheme with the least influence on the workload of the virtual machine, and adjusts the scheme; and finally, finishing the cloud application-oriented solid-state disk cache management.
The control module is specifically realized as follows:
(1) constructing a self-adaptive closed-loop mechanism: the self-adaptive closed loop comprises four steps of monitoring, analyzing, deciding and executing, wherein the four main steps are respectively and specifically finished by a monitoring module, an analyzing module, a deciding module and an executing module. The self-adaptive closed loop is a core control flow of the cloud application-oriented solid-state disk cache management method.
(2) On the basis, a communication protocol is constructed, and information transmission formats and interaction modes among the control module, the monitoring module, the analysis module, the decision module and the execution module are specified;
(3) interaction with the monitoring module, the analysis module, the decision module and the execution module is realized based on a communication protocol, and execution of a self-adaptive closed loop is completed;
3.1 calling a monitoring module to continuously monitor the cloud application information and the performance information of the virtual machine and the Hypervisor;
3.2 calling an analysis module, inputting cloud application information and performance information, and constructing a multi-layer network model consisting of a resource demand end, a resource supply end and a plurality of decision layers;
3.3 according to the decision layer selected in the constructed multilayer network model, calling a specific decision module to complete resource supply and demand matching and generating resource management decisions (including cache allocation and virtual machine migration);
3.4 calling an execution module to complete specific cache allocation operation and virtual machine migration operation according to the cache allocation and virtual machine migration decision generated by the decision module;
(4) a continuous trigger mechanism of the self-adaptive closed loop is constructed, and a new round of self-adaptive closed loop execution is triggered when the load of the cloud application is monitored to be suddenly changed (the access mode of the cloud application is changed, the dependency relationship is changed, and the topology of a virtual machine bearing the cloud application is changed);
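The control flow of steps 3.1-3.4 and the trigger mechanism of step (4) can be sketched as follows. This is an illustrative Python sketch only; all class names, method names, and the workload-change threshold are assumptions for exposition, not part of the patented implementation.

```python
# Minimal sketch of the adaptive closed loop (monitor -> analyze -> decide -> execute).
# All module interfaces here are hypothetical illustrations.

class ControlModule:
    def __init__(self, monitor, analyzer, decider, executor):
        self.monitor, self.analyzer = monitor, analyzer
        self.decider, self.executor = decider, executor
        self.last_workload = None

    def run_once(self):
        # (3.1) collect cloud-application and performance information
        data = self.monitor.collect()
        # (3.2) build the multilayer network model (demand end, supply end, decision layers)
        model = self.analyzer.build_model(data)
        # (3.3) match supply and demand, producing cache-allocation and migration decisions
        plan = self.decider.decide(model)
        # (3.4) apply live migration and cache-capacity changes
        self.executor.apply(plan)
        self.last_workload = data.get("workload")
        return plan

    def workload_changed(self, data, threshold=0.3):
        # (4) trigger a new round when the workload mix shifts by more than `threshold`
        if self.last_workload is None:
            return True
        old, new = self.last_workload, data["workload"]
        keys = set(old) | set(new)
        return any(abs(old.get(k, 0) - new.get(k, 0)) > threshold for k in keys)
```

A driver would call `run_once` whenever `workload_changed` reports a mutation, which mirrors the continuous trigger mechanism of step (4).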
the monitoring module is specifically realized as follows:
(1) performing program instrumentation in a target cloud application so as to obtain fine-grained performance information;
1.1 selecting, for different types of cloud applications, the cloud application function modules that need instrumentation;
1.2 performing program instrumentation on the cloud application function modules deployed on the virtual machines and writing monitoring code; the main instrumentation points are network interaction operations (such as HTTP(S), TCP and UDP connections) and persistence operations (such as database operations);
1.3 the monitoring code transmits its messages by outputting logs;
(2) on the basis of program instrumentation, collecting logs output by monitoring codes, and calculating to obtain the execution time of the cloud application on each module when the cloud application processes a user request and the network interaction and dependency relationship among the modules forming the cloud application;
(3) deploying Agent (Agent) modules on the virtual machine and the Hypervisor;
3.1 the agent module deployed on the virtual machine is used for monitoring IO performance information of the virtual machine, including bandwidth and IOPS;
3.2 the agent module deployed on the Hypervisor is used for monitoring cache related information, including cache usage and read-write hit rate.
(4) The monitoring module interacts with the agent module and collects related performance data;
(5) and sorting the execution time, network interaction and dependency relationship of each module of the cloud application output by the monitoring code and the agent module, and the performance information of the virtual machine and the Hypervisor, and outputting the information as the monitoring module.
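The log aggregation of steps (2)-(5), which turns instrumentation output into per-component execution times and inter-component dependency edges, can be illustrated with a small sketch. The comma-separated log format and its field names are assumptions for illustration; real instrumentation logs would differ.

```python
# Hypothetical sketch: aggregate instrumentation logs into per-component execution
# times and inter-component call (dependency) edges. Assumed log format:
#   "<module>,<operation>,<target_module>,<elapsed_ms>"
from collections import defaultdict

def aggregate_logs(lines):
    exec_time = defaultdict(float)   # total time spent in each component
    edges = defaultdict(int)         # network-interaction counts between components
    for line in lines:
        module, op, target, ms = line.strip().split(",")
        exec_time[module] += float(ms)
        if target:                   # a cross-component call implies a dependency edge
            edges[(module, target)] += 1
    return dict(exec_time), dict(edges)
```

The resulting edge counts feed the dependency-graph construction performed by the analysis module.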
The analysis module is specifically realized as follows:
(1) receiving information transmitted by the control module, and generating a resource demand side:
1.1, the topology of the cloud application, the association between the application components and the dependency information are transmitted by the control module;
1.2, the control module transmits IO load information of the virtual machine;
1.3, constructing a dependency relationship graph among virtual machines by computing network interaction of each component of the cloud application, and further finding out a strongly-connected component in the dependency relationship graph so as to depict the boundary of the cloud application;
1.4, describing IO dependency degree among the cloud application components by calculating networks and IO loads among the cloud application components;
1.5, describing the requirement of the cloud application on the solid-state disk cache according to the IO load of each component of the cloud application and the IO dependence degree of other components;
(2) receiving information transmitted by the control module, generating a resource supplier:
2.1 receiving the Hypervisor resource information passed by the control module, mainly including CPU performance, memory capacity and network bandwidth;
2.2 receiving the solid-state disk resource information passed by the control module, including the capacity, bandwidth and IOPS of the solid-state disk;
2.3 quantifying the CPU, memory, network and solid-state disk resources with one unified score, and constructing the resource supply end;
(3) determining a decision module to be used according to the matching requirements of the resource demand end and the resource supply end, and constructing a plurality of decision layers;
(4) combining a resource supply end, a resource demand end and a plurality of layers of decision layers into a plurality of layers of network models for a decision module to use;
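The strongly connected component computation of step 1.3, which delineates cloud application boundaries in the virtual machine dependency graph, can be illustrated as follows. This is an iterative form of Tarjan's algorithm; the adjacency-dict graph representation is an assumption for illustration.

```python
# Hypothetical sketch: find strongly connected components of the VM dependency
# graph; each component approximates the boundary of one cloud application.
# Graph is {node: [successors]}; iterative Tarjan's algorithm.

def strongly_connected_components(graph):
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def visit(v):
        work = [(v, 0)]
        while work:
            node, pi = work.pop()
            if pi == 0:                       # first visit: assign DFS index
                index[node] = low[node] = counter[0]; counter[0] += 1
                stack.append(node); on_stack.add(node)
            recurse = False
            succs = graph.get(node, [])
            for i in range(pi, len(succs)):
                w = succs[i]
                if w not in index:            # descend into unvisited successor
                    work.append((node, i + 1)); work.append((w, 0))
                    recurse = True; break
                elif w in on_stack:
                    low[node] = min(low[node], index[w])
            if recurse:
                continue
            if low[node] == index[node]:      # node is the root of an SCC
                comp = []
                while True:
                    w = stack.pop(); on_stack.discard(w); comp.append(w)
                    if w == node:
                        break
                sccs.append(sorted(comp))
            if work:                          # propagate low-link to parent
                parent = work[-1][0]
                low[parent] = min(low[parent], low[node])

    for v in graph:
        if v not in index:
            visit(v)
    return sccs
```

In the sketch below a load balancer calls two web servers, one of which interacts bidirectionally with a database; the mutually dependent pair forms one strongly connected component.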
the decision module is specifically implemented as follows:
(1) the quantized resource supply end and the resource demand end are transmitted by the control module;
(2) the control module transmits information of a decision layer needing to be combined;
(3) calling a specific decision layer to complete the matching of a resource supply end and a resource demand end and generating an adjusting scheme, wherein the adjusting scheme mainly comprises a virtual machine cache capacity allocation scheme and a virtual machine placement scheme;
(4) sending the adjustment scheme back to the control module for use by the execution module;
the implementation of the execution module is as follows:
(1) the control module transmits a solid-state disk cache allocation scheme and a virtual machine placement scheme;
(2) safely removing the existing virtual machine cache;
(3) starting from the placement of the virtual machines of the current cluster, calculating a virtual machine migration operation set;
(4) carrying out virtual machine migration according to the principle of minimizing quality-of-service degradation;
(5) reconfiguring the cache on the target virtual machine according to a solid-state disk cache allocation scheme;
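Step (3), computing the virtual machine migration operation set from the current cluster placement, can be illustrated by diffing the current placement against the target placement produced by the decision module. The dictionary shapes and names are assumptions for illustration.

```python
# Hypothetical sketch: derive the set of live-migration operations by diffing
# the current placement against the target placement; VMs that are already on
# their target Hypervisor are left untouched, keeping the operation set small.

def migration_ops(current, target):
    ops = []
    for vm, dst in target.items():
        src = current.get(vm)
        if src is not None and src != dst:
            ops.append((vm, src, dst))   # (vm, source hypervisor, destination)
    return ops
```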
a cloud application-oriented solid-state disk cache management method comprises the following implementation steps:
(1) performing program Instrumentation in the target application, so as to support fine-grained monitoring of the cloud application by the monitoring module. For the specific application type and the cloud application modules hosted on the virtual machines, HTTP(S) access operations and accesses to the database and persistent storage are instrumented; the inserted code reports the execution time and access target of each operation by writing logs.
(2) The control module begins a complete adaptive closed-loop execution. The control module first interacts with the monitoring module; on the basis of the logs output by the instrumentation code of step (1), the monitoring module derives the execution time of the cloud application on each module when responding to user requests, the specific network interaction flow and the network access targets, and returns them to the control module.
The control module interacts with the monitoring module, and the monitoring module calls an Agent module deployed on the Hypervisor and the virtual machine to monitor the performance. The Agent module deployed on the virtual machine monitors the execution time, average response time and bandwidth of the IO operation of the disk, and IO load. The Agent module deployed on the Hypervisor monitors the cache use condition, the solid state disk condition and the CPU, memory and network resource use condition corresponding to each virtual machine. The cache use condition comprises cache hit rate, use rate, read-write operation times and the like. The solid state disk conditions include bandwidth, IOPS, and average response time. The CPU, memory, network resource usage includes currently used and idle CPU time, used and idle memory capacity, and used and idle network bandwidth. These performance data are ultimately returned to the control module.
(3) After obtaining the cloud application information and performance information returned by the monitoring module in step (2), the control module passes the information to the analysis module, which constructs the multilayer network model.
The first step is to build the resource demand end. After obtaining the performance information, the analysis module combines each virtual machine's IO dependencies on the other virtual machines of the cloud application with that virtual machine's IO load and random-access frequency to compute its demand for the solid-state disk cache. After obtaining the cloud application information, the analysis module converts the dependency relationships and network interactions between the virtual machines composing the cloud application into a virtual machine dependency graph. The nodes of the dependency graph represent the independent modules composing the cloud application and correspond one-to-one to virtual machines; the weights on the directed edges represent the previously computed demands of the virtual machines for the solid-state disk cache. The state of the cloud application while executing a particular workload can then be mapped to a subgraph of the dependency graph.
The second step is to build the resource supply end. After obtaining the performance information, the analysis module quantifies the CPU, memory, network and solid-state disk resources of each Hypervisor and constructs a uniform resource supply model: CPU resources are quantified by CPU time, memory resources by capacity, network resources by bandwidth, and solid-state disk resources by IOPS and capacity.
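The unified quantification of the resource supply end can be illustrated with a small sketch that normalizes each resource dimension against its capacity and combines them into one score. The equal per-dimension weights and the field names are assumptions for illustration, not specified by the invention.

```python
# Hypothetical sketch: quantify heterogeneous Hypervisor resources (CPU time,
# memory capacity, network bandwidth, SSD IOPS/capacity) into one normalized
# supply score: a weighted sum of each dimension's free fraction.

def supply_score(free, capacity, weights=None):
    weights = weights or {"cpu": 0.25, "mem": 0.25, "net": 0.25, "ssd": 0.25}
    score = 0.0
    for dim, w in weights.items():
        if capacity[dim] > 0:
            score += w * (free[dim] / capacity[dim])   # free fraction per dimension
    return score
```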
And finally, the analysis module determines a decision module required by adjustment according to the current workload condition of the cloud application, determines the application sequence of the decision module and constructs a multilayer decision layer.
And the analysis module returns the constructed multilayer network model consisting of the resource supply end, the resource demand end and the multilayer decision layer to the control module.
(4) The control module calls corresponding decision modules according to the sequence of the decision layers in the multilayer network model, transmits the whole multilayer network model to the decision modules, realizes the matching of resource supply and demand by the decision modules according to different strategies, and finally derives an adjustment scheme of the solid-state disk resources, wherein the adjustment scheme comprises the adjustment of the solid-state disk cache capacity of the virtual machine and the adjustment of the placement of the virtual machine. The decision module sends the adjustment back to the control module.
When deciding the virtual machine placement scheme, the decision module converts the placement problem into bipartite graph matching under specific constraints: all virtual machines are distributed in a balanced manner on the premise that the maximum service capacity (CPU, memory, network and solid-state disk resources) of each supply end is respected. The decision module then uses a bipartite graph matching algorithm to realize the supply-demand matching of resources.
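The bipartite matching step can be sketched with Kuhn's augmenting-path algorithm. This is a deliberately simplified illustration in which each Hypervisor hosts at most one virtual machine and capacity is a single scalar; the patent's actual decision layer matches against multidimensional capacities.

```python
# Hypothetical sketch: place VMs on Hypervisors via bipartite matching
# (Kuhn's augmenting-path algorithm). An edge exists when the Hypervisor's
# capacity covers the VM's demand; each Hypervisor hosts at most one VM here.

def place_vms(demands, capacities):
    feasible = {v: [h for h, c in capacities.items() if c >= d]
                for v, d in demands.items()}
    match = {}                       # hypervisor -> vm

    def try_place(v, seen):
        for h in feasible[v]:
            if h in seen:
                continue
            seen.add(h)
            # take a free host, or evict the occupant if it can move elsewhere
            if h not in match or try_place(match[h], seen):
                match[h] = v
                return True
        return False

    for v in demands:
        try_place(v, set())
    return {v: h for h, v in match.items()}
```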
When the solid-state disk cache allocation decision of the virtual machine is made, the decision module converts the decision into the problem of the minimum cost optimal (maximum) flow on the subgraph, namely, the solid-state disk cache is preferentially allocated to the virtual machine with high cost performance under the condition that the capacity limit of the solid-state disk cache is met. The decision module analyzes the values of different virtual machines in the current working load mode according to the transmitted relevant information of the cloud application, and maps the values to the flow on the virtual machine dependency graph, so that the solid-state disk cache proportion which is required to be obtained by each virtual machine is determined by using a minimum cost maximum flow algorithm, and the solid-state disk cache allocation decision is completed.
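The cache allocation step can be sketched with a successive-shortest-path minimum-cost maximum-flow routine. The network modeling below (source-to-VM edges whose capacity is the VM's cache demand and whose cost is inversely related to the VM's value to application response time, plus a single SSD-capacity edge into the sink) is an illustrative assumption, not the patent's exact construction.

```python
# Hypothetical sketch: min-cost max-flow via successive shortest paths
# (Bellman-Ford on the residual graph). edges: list of [u, v, cap, cost].

def min_cost_max_flow(n, edges, s, t):
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:            # build residual graph with reverse edges
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    flow = total_cost = 0
    while True:
        dist = [float("inf")] * n
        dist[s] = 0
        parent = [None] * n
        for _ in range(n - 1):               # Bellman-Ford (handles negative residuals)
            updated = False
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, i)
                        updated = True
            if not updated:
                break
        if dist[t] == float("inf"):          # no augmenting path remains
            break
        push = float("inf")                  # bottleneck along the shortest path
        v = t
        while v != s:
            u, i = parent[v]
            push = min(push, graph[u][i][1])
            v = u
        v = t
        while v != s:                        # apply the augmentation
            u, i = parent[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        flow += push
        total_cost += push * dist[t]
    return flow, total_cost
```

In the test below, two VMs demand 8 and 6 cache units at costs 1 and 2 per unit, but the SSD-capacity edge admits only 10 units; the cheaper (higher-value) VM is served fully first, mirroring the "allocate to the virtual machine with high cost-performance first" behavior described above.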
(5) And after the control module obtains the adjustment scheme, the control module transmits the adjustment scheme to the execution module, and calls the execution module to finish the execution of the specific adjustment scheme. The execution module firstly removes the solid disk cache used by the current virtual machine safely, then calculates the optimized dynamic migration sequence of the virtual machine according to the current virtual machine placement scheme and the introduced target placement scheme, then executes the dynamic migration of the virtual machine according to the principle of reducing the service quality default as much as possible, and finally carries out cache allocation on the target virtual machine according to the introduced cache allocation scheme.
(6) The steps (2) to (5) constitute a complete adaptive closed-loop execution. And (5) continuously calling the monitoring module by the control module, and triggering a new round of self-adaptive closed-loop execution according to the sequence of the steps (2) to (5) when the sudden change of the cloud application workload is monitored. The cloud application workload mutation is mainly characterized in that the composition mode (proportion of each request) of the workload is mutated, and the intensity (IO operation intensity on each node) of the workload is mutated.
In addition, the control module sets a threshold to periodically trigger the adaptive closed-loop adjustment, i.e., the execution of steps (2) - (6).
Compared with the prior art, the invention has the advantages that:
(1) The present invention takes into account the associative relationships between the virtual machines that make up an application. These virtual machines naturally carry certain associations because they belong to the same application, but the related art treats each virtual machine as an independent unit, so these associations are split apart or ignored.
(2) The associations between virtual machines are characterized with a multi-layer network model. The prior art has not extracted an abstract model with sufficient expressive power.
(3) The concept of a software-defined solid-state disk cache is introduced. According to the workload and user demand, the system can dynamically control the use of the solid-state disk cache in two respects, cache allocation and bandwidth management, realizing a balance of demand and supply.
Drawings
FIG. 1 is a system architecture diagram of the present invention;
FIG. 2 is a flow chart of an embodiment of the present invention;
FIG. 3 is a flow chart of the operation of the monitoring module;
FIG. 4 is a flow chart of the operation of the analysis module;
FIG. 5 is a flow chart of the operation of the decision module;
FIG. 6 is a flow chart of the operation of the execution module.
Detailed Description
The present invention will be described in more detail below with reference to specific embodiments and the accompanying drawings.
The invention provides a cloud application-oriented solid-state disk cache management method and system. The system mainly comprises a control module, a monitoring module, an analysis module, a decision module, and an execution module; its main deployment mode is shown in figure 1. The control module is deployed on an independent virtual machine or physical machine; it coordinates the monitoring, analysis, decision, and execution modules, completes the execution of the adaptive closed loop, and triggers a new round of closed-loop execution when it detects an abrupt change in the cloud application workload. The monitoring module is deployed on the Hypervisor and the virtual machines and collects their performance information, including the CPU, memory, network, and solid-state disk resource information of the Hypervisor and the IO load of each virtual machine. The analysis module is deployed on an independent virtual machine or physical machine; it receives the information produced by the monitoring module and constructs the multi-layer network model. The decision module is deployed on an independent virtual machine or physical machine; it completes the supply-demand matching of resources by running specific algorithms (bipartite graph matching and minimum-cost maximum flow) and generates a virtual machine placement scheme and a cache capacity adjustment scheme. The execution module is deployed on the Hypervisor; it derives the live-migration sequence of the virtual machines from the generated placement scheme and carries out the concrete live-migration and cache allocation operations.
The following describes the specific steps of an example, as shown in fig. 2. The target cloud application of this embodiment is built in Java: the Web front-end application server is Apache Tomcat, the application server carrying the service middleware is also Apache Tomcat, and the back-end database server is MySQL. The virtual machines carrying these components run a Linux operating system. The physical cluster consists of physical machines each running a Hypervisor; every Hypervisor is connected to the same shared storage and has an independent solid-state disk deployed. Each Hypervisor carries several virtual machines, multiple cloud applications are deployed on the virtual machine cluster, and the virtual machines belonging to one cloud application may reside on different Hypervisors.
The specific steps for this example are as follows:
(1) Program instrumentation is carried out in the target application. For the cloud application based on Java and Apache Tomcat, instrumentation mainly targets the Servlets, so that all request types acceptable to the cloud application are obtained. In addition, probes are inserted around the HTTP requests and JDBC connections in the Servlets to obtain the call chains and degrees of dependency among the different components of the cloud application, as well as the cloud application's access to the back-end database. Finally, file operations are instrumented to obtain information about the files read and written during program execution. The instrumentation code reports the execution time and access target of each operation by writing a log to a text file.
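The instrumentation above targets Java Servlets; as a language-neutral illustration of the same idea, wrapping an operation so that its execution time and access target are logged, the following Python sketch can be used (the log format, the `target` URL, and the function names are all hypothetical, not from the patent):

```python
import functools
import json
import time

def instrument(target):
    """Wrap an operation so its execution time and access target are
    logged, mimicking the patent's instrumentation output (the JSON log
    format here is a hypothetical stand-in for the text-file log)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000.0
                # The patent writes the log to a text file; print here.
                print(json.dumps({"op": fn.__name__, "target": target,
                                  "ms": round(elapsed_ms, 3)}))
        return wrapper
    return decorator

@instrument(target="jdbc:mysql://db:3306/app")  # hypothetical target
def query_orders():
    time.sleep(0.01)  # stand-in for an actual JDBC call
    return ["order-1"]
```

Each wrapped call then emits one log line with the operation name, its access target, and the elapsed milliseconds, which the monitoring module can later correlate with network access information.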
(2) The control module begins a full adaptive closed-loop execution. The control module first interacts with the monitoring module; the execution flow of the monitoring module is shown in fig. 3. Based on the logs output by the instrumentation code in step (1), the monitoring module analyzes and obtains the execution time of the cloud application on each module when responding to user requests, the specific network interaction flows, and the network access targets, and returns them to the control module.
In this example, the workload of the cloud application can be characterized as the collection of Web requests that users issue to the cloud application over a particular period of time. After learning all request types the cloud application can accept, the monitoring module observes, for a specific workload, the time each application component spends processing each kind of request. Because the components' file operations are also monitored, this time can be further divided into IO time and non-IO time, and the IO time into the time spent on sequential versus random read-write operations, finally yielding the execution time of the cloud application on each module when responding to user requests.
In order to further obtain the network interaction flows and network access targets, the monitoring module monitors the processes on the virtual machine, implemented in a Linux environment with the ps command combined with the built-in /proc file system. For processes of particular interest, such as java and mysqld, the command-line parameters are analyzed further to locate the Apache Tomcat and MySQL instances and thus their configuration files and related parameters, chiefly the open-port information of Apache Tomcat and the database file location and port information of MySQL. Matching this information against the logs returned by the instrumentation code yields the network access information.
The resource usage of the virtual machines and the Hypervisor is monitored mainly by means of the sysstat toolkit in a Linux environment: CPU monitoring is implemented with the mpstat command; memory monitoring reads the Linux built-in /proc file system directly; disk monitoring is implemented with the iostat command; network connection monitoring is implemented with the netstat command; and network traffic monitoring uses the APIs of the Linux kernel.
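As an illustration of how such monitoring output is consumed, the sketch below parses a device-utilization table in the style of `iostat -x` (the exact column layout varies across sysstat versions; the sample text and column names here are illustrative):

```python
# Sample text in the style of `iostat -x` extended device statistics.
# Real output differs across sysstat versions; this is illustrative.
SAMPLE = """\
Device            r/s     w/s     rkB/s     wkB/s  %util
sda              1.20    3.40     48.00    152.00   2.10
nvme0n1        210.50  180.30   8420.00  7212.00  63.40
"""

def parse_util(text):
    """Return {device_name: %util} from an iostat-style table by
    locating the %util column in the header row."""
    lines = [line.split() for line in text.strip().splitlines()]
    header, rows = lines[0], lines[1:]
    util_col = header.index("%util")
    return {row[0]: float(row[util_col]) for row in rows}
```

A monitoring agent would run the command periodically and feed tables like this through such a parser before returning the figures to the control module.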
The cache usage of a virtual machine is monitored mainly by observing, on the Hypervisor, the state of the solid-state disk cache bound to that virtual machine. The solid-state disk cache is implemented on top of Linux dm-cache; its state is obtained by invoking the corresponding dm-cache interfaces, chiefly monitoring the cache utilization, the hit rate, and the solid-state disk utilization.
(3) After obtaining the cloud application information and performance information returned by the monitoring module in step (2), the control module passes this information to the analysis module. The analysis module then constructs the multi-layer network model; the specific execution flow is shown in fig. 4.
The first task is to build the resource demand side. After obtaining the performance information, the analysis module combines each virtual machine's IO dependency on the other virtual machines in the cloud application with that virtual machine's IO load and random-access frequency to compute its demand for solid-state disk cache. After obtaining the cloud application information, the analysis module converts the dependency relationships and network interactions between the virtual machines composing the cloud application into a virtual machine dependency graph. The nodes of the dependency graph represent the independent modules composing the cloud application and correspond one-to-one to the virtual machines; the weights on the directed edges represent the previously computed demands for solid-state disk cache. The state of the cloud application while executing a particular workload can then be mapped to a subgraph of the dependency graph.
In this example, a dependency graph of the cloud application is constructed according to the monitored cloud application boundaries (i.e., all virtual machines included in the cloud application) and network connections established between the virtual machines. And then, refining the dependency graph of the cloud application and depicting the importance degree of each component. The finally formed cloud application dependency graph comprises the dependencies among the components and the dependencies on the IO capacity of other components, so that a demand end of the multi-layer network model is formed.
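A minimal sketch of this demand-side construction follows, assuming a hypothetical demand formula: the patent combines IO dependency, IO load, and random-access frequency, but does not give an exact expression, so the product used here is purely illustrative.

```python
def cache_demand(io_load, random_ratio):
    """Hypothetical stand-in for the patent's demand computation:
    the heavier and more random a VM's IO, the more cache it wants."""
    return io_load * random_ratio

def build_dependency_graph(calls, vm_stats):
    """Build a VM dependency graph as nested dicts.
    calls: [(caller_vm, callee_vm)] observed network interactions.
    vm_stats: {vm: (io_load, random_access_ratio)}.
    Edge weight = the callee's demand for solid-state disk cache."""
    graph = {}
    for caller, callee in calls:
        io_load, random_ratio = vm_stats[callee]
        graph.setdefault(caller, {})[callee] = cache_demand(io_load,
                                                            random_ratio)
    return graph

# Three-tier example mirroring the embodiment: web -> app -> db.
g = build_dependency_graph(
    [("web", "app"), ("app", "db")],
    {"web": (0.2, 0.1), "app": (0.5, 0.3), "db": (0.9, 0.8)})
# g["app"]["db"] is the db tier's cache demand, about 0.72 here.
```

A workload that only exercises part of the application then selects the corresponding subgraph of `g`, matching the subgraph mapping described above.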
The second task is to build the resource supply side. After obtaining the performance information, the analysis module quantizes the CPU, memory, network, and solid-state disk resources of each Hypervisor and builds a unified resource supply model: CPU resources are quantized by CPU time, memory resources by capacity, network resources by bandwidth, and solid-state disk resources by IOPS and capacity.
In this example, the resource supply end is constructed by combining the resource usage conditions of the physical machine and the solid-state disk cache and the maximum resource supply capacity, which are monitored on the Hypervisor, according to the deployment relationship of the physical cluster.
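The unified supply model can be sketched as a plain record with one field per quantization index named above; the field names and the `can_host` feasibility check are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class HypervisorSupply:
    """Unified resource-supply model: one field per quantization index
    (field names are illustrative)."""
    cpu_time_ms: float         # CPU quantized by CPU time
    mem_capacity_mb: float     # memory quantized by capacity
    net_bandwidth_mbps: float  # network quantized by bandwidth
    ssd_iops: float            # solid-state disk quantized by IOPS...
    ssd_capacity_gb: float     # ...and by capacity

    def can_host(self, demand: "HypervisorSupply") -> bool:
        """True if every supplied resource covers the demand."""
        return (self.cpu_time_ms >= demand.cpu_time_ms and
                self.mem_capacity_mb >= demand.mem_capacity_mb and
                self.net_bandwidth_mbps >= demand.net_bandwidth_mbps and
                self.ssd_iops >= demand.ssd_iops and
                self.ssd_capacity_gb >= demand.ssd_capacity_gb)
```

The same record type can express both a Hypervisor's remaining capacity and a virtual machine's demand, which keeps the supply-demand comparison uniform.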
Finally, the analysis module determines, from the current workload of the cloud application, which decision modules the adjustment requires, fixes the order in which they are applied, and constructs the multi-layer decision layers. In this example, the analysis module configures the supply-demand relationship between the resource demand side and the resource supply side and, using the dynamic plug-in mechanism of the policy layers, configures two policy layers, virtual machine placement optimization and solid-state disk cache capacity allocation optimization, completing the construction of the multi-layer network model.
The analysis module returns the constructed multi-layer network model, consisting of the resource supply side, the resource demand side, and the multi-layer decision layers, to the control module.
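The dynamic plug-in mechanism of the policy layers can be sketched as a simple strategy pipeline: the analysis module picks and orders the layers, and the control module runs them in sequence. Class names, the `model` dict, and the placeholder bodies are illustrative; real layers would run the matching algorithms of step (4).

```python
class DecisionLayer:
    """Base class for a pluggable decision layer."""
    def apply(self, model):
        raise NotImplementedError

class PlacementLayer(DecisionLayer):
    def apply(self, model):
        # Stand-in for the bipartite-matching placement computation.
        model["placement"] = "computed"
        return model

class CacheAllocationLayer(DecisionLayer):
    def apply(self, model):
        # Stand-in for the min-cost max-flow cache allocation.
        model["cache_plan"] = "computed"
        return model

def run_layers(model, layers):
    """Apply the decision layers in the order chosen by analysis."""
    for layer in layers:
        model = layer.apply(model)
    return model
```

New policy layers can be plugged in simply by appending further `DecisionLayer` subclasses to the list, matching the extensibility the claims describe.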
(4) The control module calls the corresponding decision modules in the order of the decision layers in the multi-layer network model; the execution flow of the decision modules is shown in fig. 5. The control module passes the whole multi-layer network model to the decision module, which matches resource supply and demand according to its strategy and finally derives an adjustment scheme for the solid-state disk resources, covering both the solid-state disk cache capacity of each virtual machine and the placement of the virtual machines. The decision module sends the adjustment scheme back to the control module.
When deciding the virtual machine placement scheme, the decision module converts it into a bipartite graph matching problem under specific constraints: all virtual machines are distributed in a balanced way subject to the maximum service capacity (CPU, memory, network, and solid-state disk resources) of the supply side. The decision module then matches resource supply and demand with a bipartite matching algorithm.
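A minimal sketch of this matching step using Kuhn's augmenting-path algorithm for bipartite matching: hosts are expanded into capacity-many "slots" to approximate the balanced-distribution constraint, and the `feasible` predicate stands in for the multi-resource capacity check (both simplifications are assumptions, not the patent's exact formulation).

```python
def match_vms(vms, slots, feasible):
    """Kuhn's augmenting-path bipartite matching.
    vms: left-side vertices; slots: right-side host slots;
    feasible(vm, slot) -> bool says whether the slot's host can
    satisfy the VM's resource demand. Returns {vm: slot}."""
    match = {}  # slot -> vm currently assigned to it

    def augment(vm, seen):
        for slot in slots:
            if feasible(vm, slot) and slot not in seen:
                seen.add(slot)
                # Take a free slot, or evict and re-place its occupant.
                if slot not in match or augment(match[slot], seen):
                    match[slot] = vm
                    return True
        return False

    for vm in vms:
        augment(vm, set())
    return {vm: slot for slot, vm in match.items()}
```

For example, two VMs against a single host with two slots yields a full matching; with realistic `feasible` checks, unmatched VMs indicate insufficient supply.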
When making the solid-state disk cache allocation decision for the virtual machines, the decision module converts it into a minimum-cost maximum-flow problem on the subgraph: subject to the solid-state disk cache capacity limit, the cache is preferentially allocated to the virtual machines with the highest cost-performance ratio. Based on the cloud application information passed in, the decision module evaluates the value of each virtual machine under the current workload pattern and maps these values to flows on the virtual machine dependency graph, so that a minimum-cost maximum-flow algorithm determines the share of solid-state disk cache each virtual machine should receive, completing the cache allocation decision.
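A full min-cost max-flow implementation is lengthy; as a simplified stand-in that captures the stated objective, preferring high cost-performance virtual machines under a capacity limit, a greedy allocation by value density can be sketched. The value numbers and the greedy rule are assumptions for illustration, not the patent's algorithm.

```python
def allocate_cache(capacity_gb, demands):
    """Greedy stand-in for the min-cost max-flow cache allocation.
    demands: {vm: (value, demand_gb)} where value reflects the VM's
    worth under the current workload. Allocate to the highest
    value-per-GB first until the SSD cache capacity runs out."""
    plan, left = {}, capacity_gb
    ranked = sorted(demands.items(),
                    key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    for vm, (value, demand_gb) in ranked:
        grant = min(demand_gb, left)
        if grant > 0:
            plan[vm] = grant
            left -= grant
    return plan
```

With 10 GB of cache and a database tier whose value density dominates, the database VM is served first and the remainder goes to the next-best tier, mirroring the "high cost-performance first" rule in the text.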
(5) After obtaining the adjustment scheme, the control module passes it to the execution module and calls the execution module to carry it out; the execution flow of the execution module is shown in fig. 6. The execution module first safely detaches the solid-state disk cache used by the affected virtual machines, then computes an optimized live-migration sequence from the current virtual machine placement and the given target placement, executes the live migrations so as to minimize quality-of-service violations, and finally allocates cache to the target virtual machines according to the given cache allocation scheme.
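The migration-ordering step can be sketched as a greedy loop that only schedules a move when the destination currently has room, so that each completed move frees capacity for later ones. A single scalar capacity per host is an illustrative simplification; the patent's scheme would weigh all resource dimensions and quality-of-service impact.

```python
def migration_sequence(current, target, size, free):
    """Order live migrations so each move fits its destination's free
    capacity; a completed move frees space on the source host.
    current/target: {vm: host}; size: {vm: units}; free: {host: units}.
    Returns an ordered list of (vm, destination_host) moves."""
    pending = {vm for vm in current if current[vm] != target[vm]}
    order = []
    while pending:
        progressed = False
        for vm in sorted(pending):
            dst = target[vm]
            if size[vm] <= free[dst]:
                free[dst] -= size[vm]        # occupy the destination
                free[current[vm]] += size[vm]  # release the source
                order.append((vm, dst))
                pending.remove(vm)
                progressed = True
                break
        if not progressed:
            # A swap cycle with no slack needs an intermediate host,
            # which this simplified sketch does not handle.
            raise RuntimeError("deadlock: needs an intermediate host")
    return order
```

For two VMs swapping hosts where one destination has slack, the unblocked move runs first and its freed space enables the second, which is exactly the ordering effect the text describes.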
(6) Steps (2) to (5) constitute one complete adaptive closed-loop execution. The control module keeps calling the monitoring module and, when it detects an abrupt change in the cloud application workload, triggers a new round of adaptive closed-loop execution following the order of steps (2) to (5). An abrupt workload change manifests mainly as a change in the workload's composition (the proportion of each request type) or in its intensity (the IO operation intensity on each node).
In addition, the control module sets a threshold to periodically trigger the adaptive closed-loop adjustment, i.e., the execution of steps (2) - (6).
Although specific embodiments of the invention and the accompanying drawings have been disclosed for illustrative purposes to aid understanding of the invention, those skilled in the art will appreciate that various substitutions, changes, and modifications are possible without departing from the spirit and scope of the invention and the appended claims. The invention should therefore not be limited to the disclosure of the preferred embodiments and the accompanying drawings.

Claims (2)

1. A cloud application-oriented solid-state disk cache management system, comprising: the device comprises a control module, a monitoring module, an analysis module, a decision module and an execution module;
a control module: the system comprises a control module, a monitoring module, an analysis module, a decision module and an execution module, wherein the control module is used for coordinating the work of the monitoring module, the analysis module, the decision module and the execution module, interacting with each module and collecting results, realizing solid-state disk cache management based on a self-adaptive closed loop, interacting with the monitoring module by the control module firstly in a complete closed-loop execution process, continuously monitoring the working load of cloud application and the dependency relationship among virtual machines by the monitoring module, and collecting cloud application state and performance data for subsequent analysis and use; then, the cloud application state and performance data are transmitted to the analysis module through interaction with the analysis module, and the generated related information of the multilayer network model is collected; interacting with a decision module, transmitting the data to a multi-layer network model and relying on the decision module to complete solid-state disk cache management decision, wherein the decision comprises calculation of a virtual machine placement scheme and further calculation of a cache allocation scheme on the Hypervisor; finally, interacting with an execution module, transmitting a specific solid-state disk cache management decision, and finishing the online migration of a specific virtual machine and the dynamic adjustment of cache capacity by the execution module; the control module is also responsible for triggering a new round of closed-loop execution when the cloud application workload is detected to be mutated;
a monitoring module: the system comprises a monitoring module deployed on a Hypervisor and a monitoring module deployed on a virtual machine; the monitoring module deployed on the Hypervisor is responsible for monitoring the related information of the Hypervisor, the solid state disk and the cache, wherein the related information comprises idle CPU and memory resources of the Hypervisor, the maximum bandwidth and IOPS of the solid state disk, the currently used bandwidth and IOPS, the utilization rate of the cache, the read-write times and the hit rate; the monitoring module deployed on the virtual machine is responsible for monitoring relevant information of cloud application components deployed on the virtual machine and IO performance of the virtual machine, wherein the relevant information of the cloud application components comprises the proportion of each transaction, the execution time of each component when the cloud application deals with the current workload, and network interaction and dependency relationship among the components; the IO performance of the virtual machine comprises the used bandwidth and the IOPS, and the IO load condition of the virtual machine obtained by calculation, namely the proportion of the used IO resources in the available resources; the monitoring module can continuously monitor the information, receives a request of the control module in the closed-loop execution process, and returns corresponding cloud application state and performance data for subsequent analysis and decision; the control module firstly interacts with the monitoring module in the execution process of the self-adaptive closed loop to acquire necessary information for a subsequent analysis module, a decision module and an execution module to use;
an analysis module: the monitoring module is used for receiving information transmitted by the monitoring module and constructing a multi-layer network model; the multi-layer network model comprises a resource demand end, a resource supply end and a plurality of decision layers for matching; the method comprises the steps that a dependency relationship graph between virtual machines is constructed by analyzing network interaction conditions of cloud application components and IO (input/output) performance of the virtual machines, which are transmitted back by monitoring modules deployed in the virtual machines, and all strongly-connected components of the dependency relationships of all the virtual machines in a cluster are further traversed, so that the boundary of cloud application is described; then, the requirement of the virtual machine on the solid-state disk cache is established by combining the dependency relationship diagram and the IO load condition of the virtual machine, and the establishment of a resource demand end is completed; then, the analysis module assembles a decision module to be called and adapts to a resource supply end at the rear end to complete the construction of a multi-layer network model; the control module transmits the relevant information of the cloud application and the IO performance information of the virtual machine, which are collected from the monitoring module, to the analysis module in the execution process of the self-adaptive closed loop, and receives the topological structure of the multilayer network model and the decision layer selected from the topological structure, which are calculated by the analysis module, of the multilayer network model for use in subsequent interaction with the decision module;
a decision module: the method is used for matching a supply end and a demand end in a multilayer network model to realize the optimal matching of resource supply and demand; the decision-making module is based on a specific algorithm to realize the aim of resource management and is represented as a decision-making layer in a multi-layer network; the method comprises the following steps that two default decision layers are included at present, and a first decision layer adopts a bipartite graph matching algorithm to calculate a placement scheme of a virtual machine; the second decision layer calculates the optimal cache size to be allocated to each virtual machine on the Hypervisor by adopting a minimum cost maximum flow algorithm in the network flow; more decision layers can be expanded according to different requirements during implementation; after receiving the multi-layer network model transmitted back by the analysis module, the control module calls a decision layer designated in the decision module to complete the calculation of a corresponding resource management scheme, and after all decision layers selected in the model are called, the matching of a resource demand end and a resource supply end is completed, and a final resource management scheme is generated, wherein the final resource management scheme comprises a virtual machine placement scheme and a cache allocation scheme;
an execution module: deployed on a Hypervisor; the execution module receives the virtual machine placement scheme obtained by calculation of the decision module and the solid-state disk cache size of the virtual machine on each Hypervisor, and executes specific dynamic migration and cache capacity adjustment operations of the virtual machine; the control module interacts with the execution module finally in the execution process of the self-adaptive closed loop, transmits a resource management scheme generated by the decision module and applies the resource management scheme to the virtual machine; in addition, when the control module triggers a new round of self-adaptive closed-loop execution, the execution module also calculates a virtual machine dynamic migration scheme with the least steps and a cache capacity adjustment scheme with the least influence on the workload of the virtual machine, and adjusts the scheme; and finally, finishing the cloud application-oriented solid-state disk cache management.
2. A cloud application-oriented solid-state disk cache management method is characterized by comprising the following implementation steps:
(1) performing program Instrumentation (Instrumentation) in a target application, so as to support a monitoring module to perform fine-grained monitoring on a cloud application; inserting HTTP access operation and access operation of a database and persistent storage aiming at a specific application type and a cloud application module borne on a virtual machine, wherein an insertion code can give execution time and an access target of the operation in a log output mode;
(2) the control module starts a complete self-adaptive closed-loop execution, the control module firstly interacts with the monitoring module, and the monitoring module analyzes and obtains the execution time of the cloud application on each module when responding to the user request, a specific network interaction flow and a network access target on the basis of obtaining the log output by the plug-in code in the step (1) and returns the execution time to the control module;
the control module interacts with the monitoring module, and the monitoring module calls an Agent module deployed on the Hypervisor and the virtual machine to monitor the performance; the Agent module deployed on the virtual machine can monitor the execution time, average response time and bandwidth of IO operation of a disk and IO load, and the Agent module deployed on the Hypervisor can monitor the cache use condition, the solid-state disk condition and the CPU, memory and network resource use condition corresponding to each virtual machine; the cache use condition comprises cache hit rate, use rate and read-write operation times; solid state disk conditions include bandwidth, IOPS, and average response time; the CPU, memory and network resource use conditions comprise the current used and idle CPU time, the used and idle memory capacity and the used and idle network bandwidth, and the performance data are finally returned to the control module;
(3) after the cloud application information and the performance information returned by the monitoring module in step (2) are obtained, the control module transmits the information to the analysis module, and the analysis module executes the construction of the multilayer network model;
firstly, a resource demand end is constructed, and after performance information is obtained by an analysis module, the IO dependency relationship of a virtual machine on other virtual machines in cloud application, IO loads of corresponding virtual machines and random access frequency are combined to calculate to obtain the demand of the virtual machine on solid-state disk cache; after the analysis module obtains cloud application information, converting the dependency relationship and network interaction between virtual machines forming the cloud application into a dependency graph of the virtual machines, wherein nodes in the dependency graph represent independent modules forming the cloud application, the nodes correspond to the virtual machines one by one, the weights on directed edges represent the requirements of the virtual machines on solid-state disk cache obtained by calculation in the past, and the condition of the cloud application when a specific workload is executed can be mapped into a subgraph of the dependency graph;
secondly, a resource supply end is constructed, after the analysis module obtains performance information, a CPU, a memory, a network and a solid-state disk resource of the Hypervisor are quantized, and a unified resource supply model is constructed, wherein the CPU resource is quantized by taking CPU time as an index, the memory resource is quantized by taking capacity as an index, the network resource is quantized by taking bandwidth as an index, and the solid-state disk resource is quantized by taking IOPS and the capacity as indexes;
finally, the analysis module determines a decision module required for adjustment according to the current working load condition of the cloud application, determines the application sequence of the decision module and constructs a multilayer decision layer;
the analysis module returns the constructed multilayer network model consisting of the resource supply end, the resource demand end and the multilayer decision layer to the control module;
(4) the control module calls corresponding decision modules according to the sequence of decision layers in the multilayer network model, transmits the whole multilayer network model to the decision modules, realizes the matching of resource supply and demand by the decision modules according to different strategies, and finally derives an adjustment scheme of the solid-state disk resources, wherein the adjustment scheme comprises the adjustment of the cache capacity of the solid-state disk of the virtual machine and the adjustment of the placement of the virtual machine, and the decision modules send the adjustment scheme back to the control module;
when the virtual machine placement scheme is decided, the decision module converts it into bipartite graph matching under a specific constraint condition, namely, all virtual machines are distributed in a balanced manner on the premise of meeting the maximum service capacity of the supply side, covering CPU, memory, network and solid-state disk resources, and the decision module realizes supply and demand matching of the resources by using a bipartite graph matching algorithm;
when the solid-state disk cache allocation decision of a virtual machine is made, the decision module converts it into a minimum-cost maximum-flow problem on the subgraph, namely, the solid-state disk cache is preferentially allocated to virtual machines with a high cost-performance ratio under the condition that the solid-state disk cache capacity limit is met; the decision module analyzes the values of different virtual machines under the current workload mode according to the transmitted relevant information of the cloud application and maps the values to flows on the virtual machine dependency graph, so that the share of solid-state disk cache each virtual machine should obtain is determined by using a minimum-cost maximum-flow algorithm, completing the solid-state disk cache allocation decision;
(5) after obtaining the adjustment scheme, the control module passes it to the execution module and calls the execution module to carry it out. The execution module first safely removes the solid-state disk cache currently used by the affected virtual machines, then computes an optimized live-migration sequence from the current virtual machine placement and the given target placement, executes the live migrations so as to minimize service-quality violations, and finally allocates cache to the target virtual machines according to the given cache allocation scheme;
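The migration-ordering idea can be sketched as repeatedly running whichever migration's destination currently has room, so each completed move frees capacity for later ones; cyclic dependencies (which would need a temporary buffer host) are only flagged here. All names are illustrative, not the patent's algorithm:

```python
# Order live migrations so each destination host has a free slot when its
# migration runs. Illustrative sketch; cycle handling is left as an error.

def migration_order(current, target, free_slots):
    """current/target: {vm: host}; free_slots: {host: spare VM slots}.
    Returns an ordered list of (vm, src, dst) migrations."""
    pending = {vm: (current[vm], target[vm])
               for vm in current if current[vm] != target[vm]}
    free = dict(free_slots)
    order = []
    progress = True
    while pending and progress:
        progress = False
        for vm, (src, dst) in list(pending.items()):
            if free.get(dst, 0) > 0:               # destination has room now
                free[dst] -= 1
                free[src] = free.get(src, 0) + 1   # source frees a slot
                order.append((vm, src, dst))
                del pending[vm]
                progress = True
    if pending:  # remaining migrations form a cycle: needs a buffer host
        raise RuntimeError("cyclic migrations: %s" % sorted(pending))
    return order
```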
(6) steps (2) to (5) form one complete adaptive closed-loop execution. The control module continuously calls the monitoring module, and when an abrupt change in the cloud application workload is detected, it triggers a new round of the closed loop following the order of steps (2) to (5). An abrupt workload change manifests mainly in the workload's composition, i.e., a shift in the proportion of each request type, or in its intensity, i.e., a shift in the IO operation intensity on each node;
in addition, the control module sets a time threshold that periodically triggers the adaptive closed-loop adjustment, i.e., the execution of steps (2)-(5).
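The two triggers of the closed loop above, a mutation in workload composition or intensity and a periodic time threshold, might look like this; the threshold values and field names are assumptions, not taken from the patent:

```python
# Decide whether to start a new adaptive closed-loop round. Thresholds and the
# monitoring-sample shape are illustrative assumptions.

def mix_shift(old_mix, new_mix):
    """L1 distance between two request-proportion dicts."""
    keys = set(old_mix) | set(new_mix)
    return sum(abs(old_mix.get(k, 0) - new_mix.get(k, 0)) for k in keys)

def should_adapt(prev, cur, last_run, now,
                 mix_threshold=0.2, io_threshold=0.5, period=300.0):
    """prev/cur: {'mix': {request_type: proportion}, 'io': ops_per_sec}.
    last_run/now: timestamps in seconds."""
    if now - last_run >= period:                    # periodic trigger
        return True
    if mix_shift(prev["mix"], cur["mix"]) > mix_threshold:
        return True                                 # composition mutated
    base = max(prev["io"], 1e-9)
    return abs(cur["io"] - prev["io"]) / base > io_threshold  # intensity mutated
```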
CN201611127232.9A 2016-12-09 2016-12-09 Cloud application-oriented solid-state disk cache management system and method Active CN106775942B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611127232.9A CN106775942B (en) 2016-12-09 2016-12-09 Cloud application-oriented solid-state disk cache management system and method

Publications (2)

Publication Number Publication Date
CN106775942A CN106775942A (en) 2017-05-31
CN106775942B true CN106775942B (en) 2020-06-16

Family

ID=58882022

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10824562B2 (en) 2018-01-09 2020-11-03 Hossein Asadi Reconfigurable caching
US11099999B2 (en) * 2019-04-19 2021-08-24 Chengdu Haiguang Integrated Circuit Design Co., Ltd. Cache management method, cache controller, processor and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6345241B1 (en) * 1999-02-19 2002-02-05 International Business Machines Corporation Method and apparatus for simulation of data in a virtual environment using a queued direct input-output device
CN102385532A (en) * 2011-12-02 2012-03-21 Inspur Group Co., Ltd. Method for improving cloud application performance via a non-transparent cache
CN103870312A (en) * 2012-12-12 2014-06-18 Huawei Technologies Co., Ltd. Method and device for establishing a storage cache shared by virtual machines
CN104050014A (en) * 2014-05-23 2014-09-17 Shanghai Eisoo Software Co., Ltd. Efficient storage management method based on a virtualization platform
CN102662725B (en) * 2012-03-15 2015-01-28 Institute of Software, Chinese Academy of Sciences Event-driven highly concurrent process virtual machine implementation method
CN105323282A (en) * 2014-07-28 2016-02-10 Digital China Information Systems Co., Ltd. Enterprise application deployment and management system for multiple tenants
CN105718280A (en) * 2015-06-24 2016-06-29 LeCloud Computing Co., Ltd. Method and management platform for accelerating virtual machine IO
CN103026347B (en) * 2010-05-27 2016-08-03 Cisco Technology, Inc. Virtual machine memory partitioning in a multicore architecture
CN105868020A (en) * 2015-02-09 2016-08-17 International Business Machines Corporation Method for running a virtual manager scheduler and virtual manager scheduler unit
CN103457775B (en) * 2013-09-05 2016-09-14 Institute of Software, Chinese Academy of Sciences A role-based highly available virtual machine pool management system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9514507B2 (en) * 2011-11-29 2016-12-06 Citrix Systems, Inc. Methods and systems for maintaining state in a virtual machine when disconnected from graphics hardware
US10152340B2 (en) * 2014-03-07 2018-12-11 Vmware, Inc. Configuring cache for I/O operations of virtual machines

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A revenue-sensitive on-demand provisioning method for virtual resources; Wu Heng, et al.; Journal of Software (《软件学报》); 2013-08-31; Vol. 24, No. 8; pp. 1963-1980 *
An adaptive SSD cache system for multi-objective optimization in virtualized environments; Tang Zhen, et al.; Journal of Software (《软件学报》); 2017-08-31; Vol. 28, No. 8; pp. 1982-1998 *

Similar Documents

Publication Publication Date Title
Shi et al. MDP and machine learning-based cost-optimization of dynamic resource allocation for network function virtualization
CN108182105B (en) Local dynamic migration method and control system based on Docker container technology
US8595364B2 (en) System and method for automatic storage load balancing in virtual server environments
Jindal et al. Function delivery network: Extending serverless computing for heterogeneous platforms
Tudoran et al. Overflow: Multi-site aware big data management for scientific workflows on clouds
US10896059B2 (en) Dynamically allocating cache in a multi-tenant processing infrastructure
JP2018514018A (en) Timely resource migration to optimize resource placement
Fu et al. Layered virtual machine migration algorithm for network resource balancing in cloud computing
TW201820165A (en) Server and cloud computing resource optimization method thereof for cloud big data computing architecture
US9152640B2 (en) Determining file allocation based on file operations
US20120221730A1 (en) Resource control system and resource control method
US10411977B2 (en) Visualization of workload distribution on server resources
US11347550B1 (en) Autoscaling and throttling in an elastic cloud service
Mostafavi et al. A stochastic approximation approach for foresighted task scheduling in cloud computing
US11755576B1 (en) Data-driven task-execution scheduling using machine learning
Zhang et al. Zeus: Improving resource efficiency via workload colocation for massive kubernetes clusters
CN115718644A (en) Computing task cross-region migration method and system for cloud data center
Bourhim et al. Inter-container communication aware container placement in fog computing
CN106775942B (en) Cloud application-oriented solid-state disk cache management system and method
CN106210120B (en) A server recommendation method and device
CN109062669A (en) Virtual machine migration method and system under random load
Zhang A QoS-enhanced data replication service in virtualised cloud environments
Zhang et al. Speeding up vm startup by cooperative vm image caching
KR102054068B1 (en) Partitioning method and partitioning device for real-time distributed storage of graph stream
Costa Filho et al. An adaptive replica placement approach for distributed key‐value stores

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant