CN112181601A - Memory pre-copying and virtual machine migration method and system based on dirtying rate prediction - Google Patents

Memory pre-copying and virtual machine migration method and system based on dirtying rate prediction Download PDF

Info

Publication number
CN112181601A
CN112181601A (application CN202011139343.8A)
Authority
CN
China
Prior art keywords
virtual machine
memory
copying
rate prediction
dirty
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011139343.8A
Other languages
Chinese (zh)
Inventor
黄辰林
蹇松雷
谭郁松
李宝
王晓川
张建锋
董攀
丁滟
任怡
谭霜
马俊
罗军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202011139343.8A priority Critical patent/CN112181601A/en
Publication of CN112181601A publication Critical patent/CN112181601A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45562 Creating, deleting, cloning virtual machine instances
    • G06F2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G06F2009/45575 Starting, stopping, suspending or resuming virtual machine instances
    • G06F2009/45583 Memory management, e.g. access or allocation
    • G06F2009/45587 Isolation or security of virtual machine instances

Abstract

The invention discloses a memory pre-copying and virtual machine migration method and system based on dirtying rate prediction. In each round of memory pre-copying, the method calculates the dirtying rate of the memory pages before the ith round of memory pre-copying relative to the memory pages after the (i-1)th round of memory pre-copying; if the dirtying rate exceeds a preset upper limit value, the dirtying-rate check is executed again. Otherwise, the method further judges whether the dirtying rate is lower than a preset lower limit value; if it is not lower than the lower limit value, the dirty pages are transmitted to the destination node in compressed form and the next round of memory pre-copying is executed; if the dirtying rate is lower than the preset lower limit value, memory pre-copying is finished and the method exits. The invention can reduce the virtual machine downtime caused by memory synchronization, reduce the number of memory pre-copy iterations, improve the online migration efficiency of the virtual machine, and shorten the service interruption time of the virtual machine during online migration.

Description

Memory pre-copying and virtual machine migration method and system based on dirtying rate prediction
Technical Field
The invention relates to a virtual machine migration technology, in particular to a memory pre-copying and virtual machine migration method and system based on dirty rate prediction.
Background
With the rapid development of the Internet, the data generated by various industries is growing explosively and the volume of data to be processed grows geometrically, so many existing systems can no longer meet data-processing requirements and the computing model has changed substantially. Against this background, cloud computing has developed rapidly and is becoming the computing approach of choice for more and more industries. As the core of cloud computing, virtualization technology enables the large amount of infrastructure resources in a cloud computing platform to be used efficiently. Among all the technologies that make up the cloud computing ecosystem, virtualization is the foundation of cloud computing; there is no cloud computing without virtualization, because a central idea of cloud computing is to use resources on demand, that is, to virtualize and pool the resources and to allocate to each user, from the resource pool, exactly as many resources as that user needs.
Mainstream virtualization technologies fall into full virtualization (e.g., VMware, VirtualPC, KVM) and para-virtualization (e.g., XEN), both of which require an additional complete operating system (Guest OS) to run inside the virtual machine, which wastes resources and reduces the efficiency of the virtual machine. Another lightweight virtualization technology, rarely discussed in academia but recently popular in industry, is operating-system-level virtualization (the system virtualization technology studied herein is Linux containers, i.e., LXC, which is becoming popular in industry and is supported by the Linux kernel mainline). It solves the above problems well: operating-system-level virtualization does not need to run an additional complete operating system (Guest OS) in the virtual machine; each virtual machine shares the same kernel with the physical host on which it resides, and instructions run in the virtual machine do not need to be translated or simulated but are executed directly on the CPU of the physical host, which avoids a large waste of resources and improves the operating efficiency of the virtual machine. LXC stands for Linux containers (also called operating-system virtualization), a lightweight virtualization technology with finer granularity than conventional virtualization such as XEN and KVM; it requires neither virtualization hardware nor instruction-level simulation or just-in-time compilation. The original idea was to create multiple virtual machines with the same operating system and to isolate these virtualized environments on a common operating system, saving the resources needed to run multiple operating system kernels and thus avoiding waste.
A cloud service provider guarantees the quality of the services it provides to a customer through a Service Level Agreement (SLA) with that customer. This requires that, when a service node fails or its quality of service degrades, the cloud platform can automatically take measures to resolve the failure and restore the previous quality of service, and the failed service must not affect other normal services. When the load on a host becomes too high, the virtual machines providing services can be transparently migrated to idle hosts through live migration to achieve load balancing; when certain hardware must be shut down for maintenance, the virtual machines can be transparently moved to other hosts through live migration; a virtual machine providing a service can even be moved close to the client through live migration to achieve efficient local service. Virtual machine migration technology has therefore become an object of extensive research by cloud service providers. A Service Level Agreement (SLA) is a contract between a network service provider and a customer in which terms such as service type, quality of service, and customer payment are defined. Traditionally, SLAs have included guarantees on service availability, such as guarantees on troubleshooting time and service timeouts. But as more commercial applications spread across the Internet, there is an increasing need for performance guarantees (e.g., response time) in SLAs, and this need will become more important as more businesses operate over the Internet. In practice, the guarantees of an SLA are defined as a series of Service Level Objectives (SLOs). A service level objective is a combination of measurements of one or more defined service components; an SLO is met when the measured values of those components fall within the defined ranges. An SLO has a so-called operation period during which it must be met, but due to the statistical nature of the Internet it is not possible to meet these guarantees at all times. An SLA therefore typically specifies a fulfillment period and a fulfillment proportion, the fulfillment proportion being defined as the ratio of the time during which the SLA is actually met to the fulfillment period. For example: on the premise that the workload is <100 transactions/s, the service response time from 8 a.m. to 5 p.m. is <85 ms, the service availability is >95%, and the overall fulfillment proportion within one month is >97%.
Virtual machine live migration is a research hotspot in cloud computing: it is not only an important cornerstone of cloud computing but also an important means of realizing load balancing and improving resource utilization. Domestic research on virtual machine live migration is already quite extensive, but it mainly focuses on two cases: 1) live migration with a shared file system, in which only the runtime memory and CPU state of the virtual machine are migrated from one host to another while the image file of the virtual machine remains stored on the original host; 2) live migration without a shared file system, in which the runtime memory and CPU state of the virtual machine are live-migrated and, at the same time, the image files used by the virtual machine are migrated to the destination host. Whether the file system is shared or not, the handling of runtime memory and CPU state during migration is similar.
For example, the Chinese patent document entitled "method for predicting and migrating active memory in virtual machine migration" (application number CN201911278389.5) discloses a method for predicting and migrating active memory in virtual machine migration, belonging to the technical field of virtual machine migration, which finally takes the priority weight of each memory page as the basis for prioritization and adjusts the sending order of the active memory pages accordingly. That method requires a great deal of preparation before migrating the virtual machine, making it unsuitable for use in a production environment, and the virtual machine manager also incurs extra overhead.
The Chinese patent document entitled "cross-platform virtual machine online migration method and related component" (application number 201911025091.3) discloses a cross-platform virtual machine online migration method that does not require an agent program to be installed in the virtual machine, which avoids the threat to disk files posed by illegal use of an in-guest agent and ensures the security of the disk files. During execution, the disk data of the virtual machine to be migrated is first read and written to the new platform without affecting the normal operation of the virtual machine, i.e., the migration is executed while the virtual machine remains powered on; the virtual machine to be migrated is paused only after the disk change data relative to the already copied disk data has been determined, so the service pause time is roughly the time needed to write the disk change data. This greatly shortens the service pause time of the virtual machine and effectively ensures the continuity of the running service applications. However, that method still pauses the virtual machine during migration, and although the service pause time can be shortened, it does not achieve truly real-time migration.
The Chinese patent document entitled "control method, control apparatus, and control device for virtual machine live migration" (publication number CN110941476A) discloses a control method for virtual machine live migration that generates a memory snapshot of a first virtual machine when a live migration command for that virtual machine is received, copies the memory snapshot to the target host to generate a second virtual machine there, synchronizes to the target host the instructions received by the first virtual machine since the memory snapshot was generated, and finally updates the memory data of the second virtual machine according to those instructions to implement live migration of the first virtual machine. The migration process has to go back and forth between the source host and the destination host, which makes it inconvenient to operate.
The Chinese patent document entitled "a blockchain-based self-perception method and system for virtual machine migration behavior" (application number CN201910884864.7) discloses a blockchain-based self-perception method for virtual machine migration behavior, which includes: 1) running a virtual machine monitoring program a on a user virtual machine A, where the monitoring program a monitors the migration characteristics of the user virtual machine A; 2) the monitoring program a uploads monitoring data to a blockchain, where a smart contract stores the structured monitoring data and the blockchain's file storage system stores the monitoring files; 3) judging whether the virtual machine A has been migrated according to the monitoring data uploaded by the monitoring program a. Although that invention achieves intelligent migration awareness and trusted storage of migration data through the blockchain, it does not involve the memory copy technology of the present application and does not consider migration efficiency.
The Chinese patent document entitled "a control method of virtual machine migration in cloud environment" (application number 201910870048.0) discloses a control method for virtual machine migration in a cloud environment, which includes traversing all servers and detecting their states; when a server is in a closed state, detecting whether any virtual machines are running on it and, if so, adding all of them to a migration list; when a server is in an open state, calculating its resource utilization and judging whether migration is needed; and migrating the virtual machines according to the migration list. Although that invention migrates virtual machines according to server resource usage in order to use resources reasonably, it does not involve the memory copy technology of the present application and does not consider migration efficiency.
The Chinese patent document entitled "method for migrating virtual machines in a multi-cloud environment" (application number 201910743328.5) discloses a method for migrating virtual machines in a multi-cloud environment: a snapshot of the virtual machine's disk is taken and mounted to a virtual machine A; virtual machine A reads the block device of the snapshot and generates a qcow2 metadata file and a data identification file; the logical qcow2 file blocks are uploaded to the object storage of the cloud platform; the qcow2 file in the object store is registered as a virtual machine image file and the virtual machine is recreated from that image file. The method solves the problems caused by directly converting a block device into an image file during virtual machine migration in a cloud environment and can be used to migrate virtual machines in a multi-cloud environment. However, although that invention snapshots the original virtual machine and restores it on another node to achieve migration, it does not involve real-time migration that keeps the service in the virtual machine uninterrupted, and it does not consider the problem of uninterrupted service during migration.
The Chinese patent document entitled "network target range-oriented virtual machine dynamic migration method" (application number 201910519583.1) discloses a dynamic virtual machine migration method oriented to a network target range, which periodically detects the load information of each physical machine of the range system and judges the overall load condition using a low threshold, a high threshold, and an adaptive threshold; if the load of a physical machine is higher than the high threshold, the future load trend is predicted using a load prediction mechanism and whether to trigger migration is decided according to the prediction result. After migration is triggered, the source virtual machine to migrate is selected according to the resource characteristics of the virtual machines, their maximum remaining lifetimes, and the influence of migration on communication overhead; physical machines with load below the low threshold are taken as candidate migration targets, and the final target physical machine is determined using a first adaptive descent algorithm. That invention requires extra overhead to detect the physical machine load in order to decide whether to migrate a virtual machine; moreover, it does not involve the memory copy technology of the present application and does not consider migration efficiency.
The Chinese patent document entitled "a load dynamic migration method in a container and virtual machine mixed cloud environment" (application number CN201910494970.4) discloses a dynamic load migration method for a mixed container and virtual machine cloud environment, which first evaluates the virtual machine load and the container load to find the component with the larger influence on the server load; if the virtual machine has the larger influence, the entire virtual machine is migrated; if the container has the larger influence, the container's historical load data is analyzed, its load over the next period of time is predicted, and a container whose load is increasing is selected for migration. Although that invention achieves reasonable resource usage by judging load conditions and migrating the corresponding loads, it does not involve the memory copy technology of the present application and does not consider migration efficiency.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: aiming at the availability requirement of service applications during online migration of a virtual machine, the invention reduces the virtual machine downtime caused by memory synchronization through a memory pre-copy method based on dirtying rate prediction, reduces the number of memory pre-copy iterations by predicting the write probability of the virtual machine's memory pages, and reduces the volume of transmitted memory-page data by compressing the virtual machine's memory pages before transmission, thereby improving the online migration efficiency of the virtual machine and shortening the service interruption time during online migration.
In order to solve the above technical problem, the invention adopts the following technical scheme:
a memory pre-copy method based on dirtying rate prediction comprises the following steps executed by a source node:
1) initializing the iteration count i to 1, and executing the ith round of memory pre-copying of the target file to the destination node;
2) judging whether the target file has been completely copied; if so, ending and exiting; if not, adding 1 to the iteration count i and proceeding to the next step;
3) calculating the dirtying rate of the memory pages before the ith round of memory pre-copying relative to the memory pages after the (i-1)th round of memory pre-copying; if the dirtying rate exceeds a preset upper limit value, jumping back to execute step 3) again; otherwise, further judging whether the dirtying rate is lower than a preset lower limit value; if it is not lower than the preset lower limit value, transmitting the dirty pages to the destination node in compressed form and jumping to execute step 2); and if the dirtying rate is lower than the preset lower limit value, finishing memory pre-copying and exiting.
In addition, the invention also provides a virtual machine migration method based on the dirty rate prediction, which comprises the following steps:
firstly, transmitting a file system of a source node to a destination node;
freezing all processes in the source node virtual machine;
collecting and storing the complete states of all processes in the source node virtual machine and the virtual machine, wherein the complete states comprise the state information of a memory and a CPU (central processing unit) in operation, and storing the complete states of the source node virtual machine in a check point file;
fourthly, transmitting the file system of the source node to the destination node again;
fifthly, transmitting the check point file to a destination node by adopting the memory pre-copying method based on the dirtiness rate prediction in claim 1;
creating and starting a virtual machine on a destination node;
step (c), restoring the execution of the process in the destination node virtual machine according to the check point file;
step eight, finishing all processes in the source node virtual machine and unloading the file system of the source node virtual machine;
and ninthly, removing the file system and the configuration file of the source node virtual machine.
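The sketch is illustrative only: every helper function it takes (sync_filesystem, freeze, dump_checkpoint, precopy_transfer, create_and_start, restore, stop, remove) is a hypothetical placeholder introduced here for readability, not an API defined by the invention; the detailed embodiments describe how such steps can be realized with tools such as rsync and CRIU.

```python
# Illustrative orchestration of the nine migration steps. All helpers are
# hypothetical placeholders standing in for file synchronization, CRIU/LXC
# checkpointing, and the dirtying-rate-based pre-copy transfer described
# in this document; they are not real library APIs.

def migrate_vm(vm_name, dest_host,
               sync_filesystem, freeze, dump_checkpoint, precopy_transfer,
               create_and_start, restore, stop, remove):
    sync_filesystem(vm_name, dest_host)      # 1) first file-system sync
    freeze(vm_name)                          # 2) freeze all processes in the source VM
    ckpt = dump_checkpoint(vm_name)          # 3) save memory/CPU state to a checkpoint file
    sync_filesystem(vm_name, dest_host)      # 4) re-sync files changed before the freeze
    precopy_transfer(ckpt, dest_host)        # 5) dirtying-rate-based pre-copy of the checkpoint
    create_and_start(vm_name, dest_host)     # 6) create and start the VM on the destination
    restore(vm_name, dest_host, ckpt)        # 7) resume processes from the checkpoint
    stop(vm_name)                            # 8) terminate source processes, unmount its file system
    remove(vm_name)                          # 9) delete the source file system and configuration
```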
Optionally, in step 1), the transmission of the file system of the source node to the destination node is implemented by a file synchronization tool.
Optionally, in step 4), the transmission of the file system of the source node to the destination node again is implemented by a file synchronization tool.
In addition, the invention also provides a memory pre-copy system based on the dirty rate prediction, which comprises a microprocessor and a memory which are connected with each other, wherein the microprocessor is programmed or configured to execute the steps of the memory pre-copy method based on the dirty rate prediction; or a computer program programmed or configured to perform the memory pre-copy method based on a dirtying rate prediction is stored in the memory.
In addition, the invention also provides a virtual machine migration system based on the dirty rate prediction, which comprises a microprocessor and a memory which are connected with each other, wherein the microprocessor is programmed or configured to execute the steps of the memory pre-copy method based on the dirty rate prediction; or a computer program programmed or configured to perform the memory pre-copy method based on a dirtying rate prediction is stored in the memory.
In addition, the invention also provides a virtual machine migration system based on the dirty rate prediction, which comprises a microprocessor and a memory which are connected with each other, wherein the microprocessor is programmed or configured to execute the steps of the virtual machine migration method based on the dirty rate prediction; or a computer program programmed or configured to perform the virtual machine migration method based on the dirty rate prediction is stored in the memory.
Furthermore, the present invention also provides a computer readable storage medium having stored therein a computer program programmed or configured to execute the memory pre-copy method based on the dirty rate prediction, or a computer program programmed or configured to execute the virtual machine migration method based on the dirty rate prediction.
Compared with the prior art, the invention has the following advantages: aiming at the availability requirement of service applications during online migration of a virtual machine, the invention reduces the virtual machine downtime caused by memory synchronization through the memory pre-copy method based on dirtying rate prediction, reduces the number of memory pre-copy iterations by predicting the write probability of the virtual machine's memory pages, and reduces the volume of transmitted memory-page data by compressing the virtual machine's memory pages before transmission, thereby improving the online migration efficiency of the virtual machine and shortening the service interruption time during online migration.
Drawings
Fig. 1 is a schematic diagram of a basic flow of a memory pre-copy method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram illustrating a principle of memory pre-copy in an embodiment of the present invention.
Fig. 3 is a schematic diagram of an online migration process of a virtual machine according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of an online migration system of a virtual machine in the embodiment of the present invention.
Detailed Description
Referring to fig. 1, the memory pre-copy method based on the dirty rate prediction in this embodiment includes the following steps performed by the source node:
1) initializing the iteration count i to 1, and executing the ith round of memory pre-copying of the target file to the destination node;
2) judging whether the target file has been completely copied; if so, ending and exiting; if not, adding 1 to the iteration count i and proceeding to the next step;
3) calculating the dirtying rate of the memory pages before the ith round of memory pre-copying relative to the memory pages after the (i-1)th round of memory pre-copying; if the dirtying rate exceeds a preset upper limit value, jumping back to execute step 3) again; otherwise, further judging whether the dirtying rate is lower than a preset lower limit value; if it is not lower than the preset lower limit value, transmitting the dirty pages to the destination node in compressed form and jumping to execute step 2); and if the dirtying rate is lower than the preset lower limit value, finishing memory pre-copying and exiting.
In this embodiment, the dirtying rate is calculated as follows: the number of memory pages changed before the ith round of memory pre-copying relative to the memory pages after the (i-1)th round of memory pre-copying is determined, and this change amount is divided by the total number of memory pages after the (i-1)th round of memory pre-copying.
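Expressed as a formula (a restatement of the definition above; the symbols are introduced here for clarity: r_i denotes the dirtying rate checked before round i, D_i the set of memory pages modified since round i-1 finished, and N_{i-1} the total number of memory pages after round i-1):

$$ r_i = \frac{|D_i|}{N_{i-1}} $$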
As shown in fig. 2, the memory pre-copy method based on dirtying rate prediction of this embodiment improves the continuous-iteration approach of the conventional pre-copy algorithm used during migration. At the start of each pre-copy iteration (from the second round onward), the dirtying rate of the current memory pages relative to the memory pages after the previous round of pre-copy is first determined. If this rate is too high, the current iteration is ended immediately (the dotted arrows in fig. 2 denote ending the current iteration) and the above process is repeated; if the rate is below a certain value, the iteration continues and the dirty pages are transmitted to the destination host (the solid arrows in fig. 2 denote continuing the iteration); if the rate is below another, smaller value, the iterative pre-copy is ended, which speeds up the iterative process. This method significantly reduces the transmission of redundant data: when the dirtying rate of the memory pages concerned is too high, the current pre-copy round is stopped, preventing redundant dirty data from continuing to be transmitted in that round.
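The control flow just described can be summarized in the following minimal Python sketch. It is an illustration only, not the patent's implementation: the helpers snapshot_memory, get_dirty_pages, total_pages, and send, the default threshold values, and the max_rounds safety bound are all assumptions introduced here for readability.

```python
import zlib

def precopy(snapshot_memory, get_dirty_pages, total_pages, send,
            upper_limit=0.5, lower_limit=0.05, max_rounds=30):
    """Threshold-controlled iterative memory pre-copy (illustrative only).

    snapshot_memory() -> bytes        full memory image for the first round
    get_dirty_pages() -> list[bytes]  pages dirtied since the previous round
    total_pages()     -> int          total number of memory pages
    send(data)                        transmit (compressed) data to the destination node
    """
    send(zlib.compress(snapshot_memory()))      # round 1: copy the full memory image
    i = 1
    while i < max_rounds:                       # safety bound, not part of the patent
        i += 1
        dirty = get_dirty_pages()               # changes relative to round i-1
        rate = len(dirty) / total_pages()
        if rate > upper_limit:
            continue    # rate too high: end this round and re-check the rate
        if rate < lower_limit:
            break       # rate low enough: stop pre-copying, enter stop-and-copy
        send(zlib.compress(b"".join(dirty)))    # compress and ship the dirty pages
    return i
```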
As shown in fig. 3, the present embodiment further provides a virtual machine migration method based on a dirty rate prediction, including:
1) transmitting the file system of the source node to the destination node;
2) freezing all processes in the source node virtual machine;
3) collecting and saving the complete state of all processes in the source node virtual machine and of the virtual machine itself, the complete state comprising runtime memory and CPU state information, and saving the complete state of the source node virtual machine in a checkpoint file;
4) transmitting the file system of the source node to the destination node again;
5) transmitting the checkpoint file to the destination node by using the memory pre-copy method based on dirtying rate prediction described above;
6) creating and starting a virtual machine on the destination node;
7) resuming execution of the processes in the destination node virtual machine according to the checkpoint file;
8) terminating all processes in the source node virtual machine and unmounting the file system of the source node virtual machine;
9) removing the file system and the configuration files of the source node virtual machine.
In order to reduce the complexity of virtual machine migration, the virtual machine migration method based on dirtying rate prediction of this embodiment provides an accompanying migration tool; the migration tool can complete the migration of the virtual machine given only the IP address of the migration destination host and the name of the virtual machine to be migrated. As shown in fig. 4, the system architecture corresponding to the virtual machine migration method of this embodiment comprises LXC, CRIU, and the migration tool; by providing the IP address of the destination host and executing the migration tool, the virtual machine can be migrated from the source host to the destination host and continue to run there. CRIU (Checkpoint/Restore In Userspace) is a software tool running on the Linux operating system whose function is to implement checkpoint/restore in user space. Using this tool, a running program can be frozen and checkpointed to a series of files, and those files can later be used to restore the program to the frozen point on any host.
A virtual machine virtualized by operating-system-level virtualization is essentially a set of isolated processes. Because a virtual machine is an isolated instance (meaning that all inter-process relationships, such as parent-child relationships and inter-process communication, are confined within the virtual machine), the complete state of the virtual machine can be saved in a disk file called a checkpoint, and the virtual machine can later be recovered (restored) from that checkpoint file. Migration can thus be summarized in three steps: first, stop the virtual machine and save its state to image files; second, make the image files accessible to the remote host; third, reconstruct the task from the image files. The time during which the task is stopped is reduced by using the pre-copy (pre-dump) action of the CRIU tool. In pre-copy memory migration, the virtual machine manager usually copies all memory pages from the source host to the destination host while the virtual machine is still running on the source host; if some memory pages are changed during this process, those pages are said to have become dirty. Since the dirtying rate of the memory pages is unpredictable while the virtual machine is running, if it exceeds the transmission rate of the internal network, the pre-copy iteration would go on endlessly and the memory migration would fail because it could never finish, which is obviously unacceptable. The runtime memory is crucial to the running of the virtual machine, which makes migrating the runtime memory the most complex part of the whole live migration process. In this embodiment, the continuous-iteration approach of the traditional pre-copy algorithm is improved during live migration: at the start of each pre-copy iteration (from the second round onward), the dirtying rate of the current memory pages relative to the memory pages after the previous round of pre-copy is first determined; if this rate is too high, i.e., greater than the transmission rate of memory over the network, the current iteration is ended immediately and the above process is repeated; if the rate is below a certain value, the current iteration continues and the dirty pages are transmitted to the destination host; if the rate is below another, smaller value, the iterative pre-copy ends. The benefit is that the transmission of redundant data is significantly reduced, and when the dirtying rate of the memory pages concerned is too high, the current pre-copy processing is stopped so that redundant dirty data is not transmitted further in that round.
An operating-system-level virtual machine is a group of isolated processes. Because a virtual machine is an isolated instance (meaning that all inter-process relationships, such as parent-child relationships and inter-process communication, are confined within the virtual machine), its complete state can be saved in a disk file through a checkpoint, and the virtual machine can later be recovered (restored) from the checkpoint file. The checkpoint mechanism comprises the following three phases: (1) freeze the processes: suspend all processes to a known state while the network is disabled; (2) dump the container: collect and save all processes in the virtual machine and the complete state of the virtual machine itself (runtime memory and CPU state information), then save the complete state of the virtual machine in a checkpoint file; (3) stop the container: terminate all processes in the virtual machine and unmount the file system of the virtual machine. The recovery process is the reverse of the checkpoint process and also contains three phases: (1) restart the container: create a virtual machine of the same state based on the information stored in the checkpoint file; (2) restart the processes: create, in the frozen state, all processes in the virtual machine, and recover the memory pages and the resources occupied by each process from the information in the checkpoint file; (3) resume the container: resume the execution of the processes in the virtual machine and re-enable the network, after which the virtual machine can resume its previous normal operation. At the heart of migration is the checkpoint/restore (CR) mechanism, which enables a running virtual machine to be moved from one host to another without requiring a reboot.
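As an illustration of how such a checkpoint/restore cycle can be driven from a script, the sketch below invokes the CRIU command-line tool through Python's subprocess module. The criu dump and criu restore subcommands and the -t (target PID), -D (images directory), and --shell-job options do exist in CRIU, and CRIU also offers a pre-dump action for iterative pre-copy; however, the exact options required depend on the workload and the CRIU version, and LXC virtual machines are usually checkpointed through the lxc-checkpoint wrapper instead, so this should be read as an assumption-laden example rather than a complete recipe.

```python
import subprocess
from pathlib import Path

def checkpoint(pid: int, images_dir: str) -> None:
    """Dump a running process tree to checkpoint image files with CRIU."""
    Path(images_dir).mkdir(parents=True, exist_ok=True)
    # --shell-job is needed when the target was started from a shell;
    # further options (e.g. for open TCP connections) may be required in practice.
    subprocess.run(["criu", "dump", "-t", str(pid), "-D", images_dir,
                    "--shell-job"], check=True)

def restore(images_dir: str) -> None:
    """Recreate the frozen process tree from the checkpoint image files."""
    subprocess.run(["criu", "restore", "-D", images_dir, "--shell-job"],
                   check=True)
```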
As an optional implementation, the transmission of the file system of the source node to the destination node in step 1) is implemented by a file synchronization tool; a tool such as rsync may be used as needed.
As an optional implementation, the transmission of the file system of the source node to the destination node again in step 4) is implemented by a file synchronization tool; a tool such as rsync may be used as needed.
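As an example of what such a file-synchronization step could look like, the snippet below shells out to rsync in archive mode with compression and deletion of removed files. rsync and its -a, -z, and --delete options are standard, but the example paths and the choice of rsync itself are only one possible implementation, not something mandated by the method.

```python
import subprocess

def sync_vm_filesystem(vm_rootfs: str, dest: str) -> None:
    """Synchronize the virtual machine's root file system to the destination host.

    vm_rootfs: local path to the VM file system, e.g. "/var/lib/lxc/vm1/rootfs/"
    dest:      rsync destination, e.g. "root@192.168.1.20:/var/lib/lxc/vm1/rootfs/"
    """
    # -a: archive mode (preserve permissions, timestamps, symlinks, etc.)
    # -z: compress data during transfer
    # --delete: remove files on the destination that no longer exist on the source
    subprocess.run(["rsync", "-az", "--delete", vm_rootfs, dest], check=True)
```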
In this embodiment, two physical hosts on a local area network (each with 2 GB of memory and a 200 GB hard disk) simulate the source node and the destination node; both hosts have the same Ubuntu 16.04 system installed, together with the CRIU and LXC operating environments. On each host an LXC operating-system-level virtual machine running the same Ubuntu 16.04 system is created, keeping the systems homogeneous. Assume that a virtual machine is to be migrated from host 1 to host 2. Executing the migration tool on host 1 starts the live migration. The migration process is divided into the following steps, which are performed in sequence, although the operator does not perceive them because the steps are packaged by the migration tool. (1) Synchronize the file system of the virtual machine: the file system of the host 1 virtual machine is transferred to the destination host 2, which may be implemented by a tool such as rsync. (2) Freeze the virtual machine: all processes in the host 1 virtual machine are frozen. (3) Dump the virtual machine: all processes in the host 1 virtual machine and the complete state of the virtual machine (runtime memory and CPU state information) are collected and saved, and the complete state of the host 1 virtual machine is then saved in a checkpoint file. (4) Synchronize the file system of the virtual machine again: during the first synchronization the host 1 virtual machine was still running, so some files on the destination host 2 may already be out of date; the file system therefore needs to be synchronized again after the host 1 virtual machine has been frozen. (5) Transmit the checkpoint file: the checkpoint file is transmitted to the destination host 2. (6) Start the virtual machine on the destination host: at this stage the virtual machine is created on the destination host 2 and its processes are recreated from the information saved in the checkpoint file; after this stage completes, the processes in the host 1 virtual machine are still frozen. (7) Restore the virtual machine: execution of the processes in the virtual machine is resumed on the destination host 2. (8) Terminate the virtual machine on the source host: all processes in the virtual machine on host 1 are terminated and its file system is unmounted. (9) Delete the virtual machine on the source host: the file system and configuration files of the virtual machine on source host 1 are removed. From this point the virtual machine has migrated from host 1 to host 2 and continues running there. The memory pre-copy method based on dirtying rate prediction and the virtual machine migration method based on dirtying rate prediction were also verified on a domestic Feiteng (Phytium) CPU and the domestic Kylin operating system, and the verification results show that both methods are well compatible with the domestic platform.
In addition, this embodiment also provides a memory pre-copy system based on dirtying rate prediction, comprising a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to execute the steps of the foregoing memory pre-copy method based on dirtying rate prediction, or the memory stores a computer program programmed or configured to execute the foregoing memory pre-copy method based on dirtying rate prediction.
In addition, this embodiment also provides a virtual machine migration system based on dirtying rate prediction, comprising a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to execute the steps of the foregoing memory pre-copy method based on dirtying rate prediction, or the memory stores a computer program programmed or configured to execute the foregoing memory pre-copy method based on dirtying rate prediction.
In addition, this embodiment also provides a virtual machine migration system based on dirtying rate prediction, comprising a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to execute the steps of the foregoing virtual machine migration method based on dirtying rate prediction, or the memory stores a computer program programmed or configured to execute the foregoing virtual machine migration method based on dirtying rate prediction.
Furthermore, the present embodiment also provides a computer readable storage medium, in which a computer program programmed or configured to execute the foregoing memory pre-copy method based on the dirty rate prediction is stored, or a computer program programmed or configured to execute the foregoing virtual machine migration method based on the dirty rate prediction is stored.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The present application is described with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application, and each flow and/or block, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.

Claims (8)

1. A memory pre-copy method based on a dirtying rate prediction, characterized by comprising the following steps performed by a source node:
1) initializing the iteration count i to 1, and executing the ith round of memory pre-copying of the target file to the destination node;
2) judging whether the target file has been completely copied; if so, ending and exiting; if not, adding 1 to the iteration count i and proceeding to the next step;
3) calculating the dirtying rate of the memory pages before the ith round of memory pre-copying relative to the memory pages after the (i-1)th round of memory pre-copying; if the dirtying rate exceeds a preset upper limit value, jumping back to execute step 3) again; otherwise, further judging whether the dirtying rate is lower than a preset lower limit value; if it is not lower than the preset lower limit value, transmitting the dirty pages to the destination node in compressed form and jumping to execute step 2); and if the dirtying rate is lower than the preset lower limit value, finishing memory pre-copying and exiting.
2. A virtual machine migration method based on a dirtying rate prediction is characterized by comprising the following steps:
1) transmitting the file system of the source node to the destination node;
2) freezing all processes in the source node virtual machine;
3) collecting and saving the complete state of all processes in the source node virtual machine and of the virtual machine itself, the complete state comprising runtime memory and CPU state information, and saving the complete state of the source node virtual machine in a checkpoint file;
4) transmitting the file system of the source node to the destination node again;
5) transmitting the checkpoint file to the destination node by using the memory pre-copy method based on dirtying rate prediction of claim 1;
6) creating and starting a virtual machine on the destination node;
7) resuming execution of the processes in the destination node virtual machine according to the checkpoint file;
8) terminating all processes in the source node virtual machine and unmounting the file system of the source node virtual machine;
9) removing the file system and the configuration files of the source node virtual machine.
3. The virtual machine migration method based on dirtying rate prediction according to claim 2, wherein the transmission of the file system of the source node to the destination node in step 1) is implemented by a file synchronization tool.
4. The virtual machine migration method based on dirtying rate prediction according to claim 2, wherein the transmission of the file system of the source node to the destination node again in step 4) is implemented by a file synchronization tool.
5. A memory pre-copy system based on dirtying rate prediction, comprising a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to perform the steps of the memory pre-copy method based on dirtying rate prediction of claim 1, or the memory stores a computer program programmed or configured to perform the memory pre-copy method based on dirtying rate prediction of claim 1.
6. A virtual machine migration system based on dirtying rate prediction, comprising a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to perform the steps of the memory pre-copy method based on dirtying rate prediction of claim 1, or the memory stores a computer program programmed or configured to perform the memory pre-copy method based on dirtying rate prediction of claim 1.
7. A virtual machine migration system based on dirty rate prediction, comprising a microprocessor and a memory which are connected with each other, wherein the microprocessor is programmed or configured to execute the steps of the virtual machine migration method based on dirty rate prediction according to any one of claims 2-4; or the memory stores a computer program programmed or configured to execute the virtual machine migration method based on the dirty rate prediction according to any one of claims 2 to 4.
8. A computer-readable storage medium, wherein the computer-readable storage medium stores therein a computer program programmed or configured to execute the method for pre-copying a memory based on a dirtying rate prediction according to claim 1, or the computer-readable storage medium stores therein a computer program programmed or configured to execute the method for migrating a virtual machine based on a dirtying rate prediction according to any one of claims 2 to 4.
CN202011139343.8A 2020-10-22 2020-10-22 Memory pre-copying and virtual machine migration method and system based on dirtying rate prediction Pending CN112181601A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011139343.8A CN112181601A (en) 2020-10-22 2020-10-22 Memory pre-copying and virtual machine migration method and system based on dirtying rate prediction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011139343.8A CN112181601A (en) 2020-10-22 2020-10-22 Memory pre-copying and virtual machine migration method and system based on dirtying rate prediction

Publications (1)

Publication Number Publication Date
CN112181601A true CN112181601A (en) 2021-01-05

Family

ID=73923173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011139343.8A Pending CN112181601A (en) 2020-10-22 2020-10-22 Memory pre-copying and virtual machine migration method and system based on dirtying rate prediction

Country Status (1)

Country Link
CN (1) CN112181601A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114640597A (en) * 2022-02-24 2022-06-17 烽台科技(北京)有限公司 Network target range configuration migration method and device, computer equipment and medium
CN114640597B (en) * 2022-02-24 2023-08-15 烽台科技(北京)有限公司 Network target range configuration migration method, device, computer equipment and medium

Similar Documents

Publication Publication Date Title
US10156986B2 (en) Gang migration of virtual machines using cluster-wide deduplication
US10298670B2 (en) Real time cloud workload streaming
Medina et al. A survey of migration mechanisms of virtual machines
Sahni et al. A hybrid approach to live migration of virtual machines
Zheng et al. Workload-aware live storage migration for clouds
Nicolae et al. Going back and forth: Efficient multideployment and multisnapshotting on clouds
US20150331635A1 (en) Real Time Cloud Bursting
Svärd et al. Principles and performance characteristics of algorithms for live VM migration
US10055307B2 (en) Workflows for series of snapshots
Sharma et al. A technical review for efficient virtual machine migration
Patel et al. Improved pre-copy algorithm using statistical prediction and compression model for efficient live memory migration
Li et al. Efficient live virtual machine migration for memory write-intensive workloads
Patel et al. Survey on a combined approach using prediction and compression to improve pre-copy for efficient live memory migration on Xen
CN112181601A (en) Memory pre-copying and virtual machine migration method and system based on dirtying rate prediction
Modi et al. Live migration of virtual machines with their local persistent storage in a data intensive cloud
US20210064250A1 (en) Preparing a data storage system for maintenance operations
Anala et al. Application performance analysis during live migration of virtual machines
Li et al. Adaptive live migration of virtual machines under limited network bandwidth
US10891154B2 (en) Hosting virtual machines on a secondary storage system
Chen et al. A method of self-adaptive pre-copy container checkpoint
CN111459607A (en) Virtual server cluster building method, system and medium based on cloud desktop virtualization
Tsao et al. Efficient virtualization-based fault tolerance
US11836512B2 (en) Virtual machine replication strategy based on predicted application failures
Sharma et al. A Review on Efficient Virtual Machine Live Migration: Challenges, requirements and technology of VM migration in cloud
Babu et al. Optimised pre-copy live VM migration approach for evaluating mathematical expression by dependency identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination