CN114443211A - Virtual machine live migration method, equipment and storage medium - Google Patents
- Publication number
- CN114443211A CN202111577821.8A CN202111577821A
- Authority
- CN
- China
- Prior art keywords
- vcpu
- dirty page
- virtual machine
- page rate
- rate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
- G06F2009/45583—Memory management, e.g. access or allocation
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention discloses a virtual machine live migration method, equipment and a storage medium, wherein the method comprises the following steps: in the process of virtual machine live migration, determining the dirty page rate of each virtual processor VCPU of a virtual machine; determining the dirty page rate of the virtual machine according to the dirty page rate of each VCPU; if the set migration convergence condition is determined not to be met according to the dirty page rate of the virtual machine, selecting a target VCPU meeting the set dirty page rate condition from each VCPU; and limiting the memory access bandwidth of the target VCPU, wherein the memory access bandwidth limitation is used for reducing the dirty page rate of the target VCPU. Therefore, the invention improves the success rate of the virtual machine live migration.
Description
Technical Field
The invention relates to the technical field of virtual machines, in particular to a virtual machine live migration method, equipment and a storage medium.
Background
The virtual machine live migration technology can migrate a virtual machine from one physical machine to another physical machine without service interruption.
Before live migration starts, the virtual machine runs on the source physical machine. After migration starts, the virtual machine is started on the target physical machine with its state set to suspended, and it continuously receives memory data sent by the source physical machine until the amount of memory remaining on the source physical machine is small enough; finally, the source virtual machine is suspended and the remaining memory is copied to the target physical machine in one pass.
However, during the live migration of the virtual machine, a large number of dirty memory pages may be generated, which reduces the success rate of the live migration of the virtual machine.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, a device, and a storage medium for virtual machine live migration, so as to solve the problem that a large number of dirty memory pages may be generated during a virtual machine live migration process, thereby reducing a success rate of virtual machine live migration.
According to a first aspect, an embodiment of the present invention provides a virtual machine live migration method, including:
in the process of virtual machine live migration, determining the dirty page rate of each virtual processor VCPU of a virtual machine;
determining the dirty page rate of the virtual machine according to the dirty page rate of each VCPU;
if the set migration convergence condition is determined not to be met according to the dirty page rate of the virtual machine, selecting a target VCPU meeting the set dirty page rate condition from each VCPU;
and performing memory access bandwidth limitation on the target VCPU, wherein the memory access bandwidth limitation is used for reducing the dirty page rate of the target VCPU.
In the embodiment of the application, when it is determined according to the dirty page rate of the virtual machine that the set migration convergence condition is not met, a target VCPU meeting the set dirty page rate condition is selected from the VCPUs and its memory access bandwidth is limited, which saves computing resources while improving the success rate of virtual machine live migration.
With reference to the first aspect, in a first implementation manner of the first aspect, the setting the migration convergence condition includes: the dirty page rate of the virtual machine is less than or equal to the migration transmission rate of the virtual machine.
In the embodiment of the application, whether migration is converged can be determined according to the magnitude relation between the dirty page rate of the virtual machine and the migration transmission rate of the virtual machine, so that the reliability of judging whether migration is converged is improved.
With reference to the first aspect, in a second implementation manner of the first aspect, the selecting, from each VCPU, a target VCPU that satisfies a set dirty page rate condition includes:
performing a first sorting of the VCPUs in descending order of dirty page rate, and sequentially selecting a set number of VCPUs backward from the head of the first sorting, the set number of VCPUs being the target VCPUs; or
performing a second sorting of the VCPUs in ascending order of dirty page rate, and sequentially selecting the set number of VCPUs forward from the tail of the second sorting, the set number of VCPUs being the target VCPUs.
In the embodiment of the application, one or more target VCPUs with higher dirty page rates can be determined by sorting according to the dirty page rate of each VCPU, improving the efficiency and accuracy of selecting the target VCPUs.
With reference to the first aspect, or the first implementation manner of the first aspect, or the second implementation manner of the first aspect, in a third implementation manner of the first aspect, the determining a dirty page rate of each VCPU of the virtual machine includes:
and aiming at any VCPU in each VCPU, according to the number of the dirty page addresses increased in the set time by the dirty page modification record PML cache corresponding to the VCPU, determining the dirty page rate of the VCPU.
In the embodiment of the application, the dirty page rate of the VCPU can be determined according to the number of the dirty page addresses added in the set time of the dirty page modification record corresponding to the VCPU, and the efficiency of determining the dirty page rate can be improved.
With reference to the third embodiment of the first aspect, in a fourth embodiment, the method further includes:
and starting the PML cache allocated to each VCPU.
In the embodiment of the application, when the virtual machine is started, a PML cache can be allocated to each VCPU, and during live migration the PML cache allocated to each VCPU can be enabled, so that the dirty page rate of a VCPU can be determined from the number of dirty page addresses added to its corresponding PML cache within the set time, improving the reliability of determining the dirty page rate.
With reference to the first aspect, or the first implementation manner of the first aspect, or the second implementation manner of the first aspect, in a fifth implementation manner, the determining a dirty page rate of the virtual machine according to the dirty page rate of each virtual processor includes:
and calculating the sum of the dirty page rates of each virtual processor, wherein the obtained calculation result is the dirty page rate of the virtual machine.
In the embodiment of the application, the dirty page rate of the virtual machine can be determined by calculating the sum of the dirty page rates of each virtual processor, and then whether migration is converged can be judged according to the dirty page rate of the virtual machine, so that the accuracy of judging whether migration is converged is improved.
With reference to the first aspect, or the first implementation manner of the first aspect, or the second implementation manner of the first aspect, in a sixth implementation manner, the performing memory access bandwidth limitation on the target VCPU includes:
determining the upper limit value of the memory access bandwidth of the thread corresponding to the target VCPU;
and limiting the memory access bandwidth of the target VCPU according to the memory access bandwidth upper limit value.
In the embodiment of the application, the memory access bandwidth of the target VCPU can be limited by the memory access bandwidth upper limit value, so that the memory access time delay can be increased, and the dirty page amount generated by the virtual machine VCPU in unit time can be effectively reduced.
According to a second aspect, an embodiment of the present invention provides an electronic device, including:
a first determining module, configured to determine the dirty page rate of each virtual processor VCPU of a virtual machine during live migration of the virtual machine;
a second determining module, configured to determine a dirty page rate of the virtual machine according to the dirty page rate of each VCPU;
the selecting module is used for selecting a target VCPU meeting the set dirty page rate condition from each VCPU if the set migration convergence condition is determined not to be met according to the dirty page rate of the virtual machine;
and the memory access bandwidth limiting module is used for limiting the memory access bandwidth of the target VCPU, and the memory access bandwidth limitation is used for reducing the dirty page rate of the target VCPU.
With reference to the second aspect, in a first embodiment of the second aspect, the setting of the migration convergence condition includes: the dirty page rate of the virtual machine is less than or equal to the migration transmission rate of the virtual machine.
With reference to the second aspect, in a second implementation manner of the second aspect, the selecting module is specifically configured to:
performing a first sorting of the VCPUs in descending order of dirty page rate, and sequentially selecting a set number of VCPUs backward from the head of the first sorting, the set number of VCPUs being the target VCPUs; or
performing a second sorting of the VCPUs in ascending order of dirty page rate, and sequentially selecting the set number of VCPUs forward from the tail of the second sorting, the set number of VCPUs being the target VCPUs.
With reference to the second aspect, or the first embodiment of the second aspect, or the second embodiment of the second aspect, in a third embodiment of the second aspect, the first determining module is specifically configured to:
and aiming at any VCPU in each VCPU, according to the number of the dirty page addresses increased in the set time by the dirty page modification record PML cache corresponding to the VCPU, determining the dirty page rate of the VCPU.
With reference to the third embodiment of the second aspect, in a fourth embodiment, the electronic device further includes:
and the starting module is used for starting the PML cache distributed for each VCPU.
With reference to the second aspect, the first embodiment of the second aspect, or the second embodiment of the second aspect, in a fifth embodiment, the second determining module is specifically configured to:
and calculating the sum of the dirty page rates of each virtual processor, wherein the obtained calculation result is the dirty page rate of the virtual machine.
With reference to the second aspect, or the first implementation manner of the second aspect, or the second implementation manner of the second aspect, in a sixth implementation manner, the memory access bandwidth limiting module is specifically configured to:
determining the upper limit value of the memory access bandwidth of the thread corresponding to the target VCPU;
and limiting the memory access bandwidth of the target VCPU according to the memory access bandwidth upper limit value.
According to a third aspect, an embodiment of the present invention provides an electronic device, including a memory and a processor communicatively connected to each other, wherein the memory stores computer instructions, and the processor executes the computer instructions to perform the virtual machine live migration method described in the first aspect or any one of its implementation manners.
According to a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, which stores computer instructions for causing a computer to execute the virtual machine live migration method described in the first aspect or any one implementation manner of the first aspect.
Drawings
The features and advantages of the present invention will be more clearly understood by reference to the accompanying drawings, which are illustrative and not to be construed as limiting the invention in any way, and in which:
FIG. 1 illustrates a method flow diagram of a virtual machine live migration method.
FIG. 2 illustrates another method flow diagram of a virtual machine live migration method.
FIG. 3 shows a schematic graph of the calculation of the dirty page rate for each VCPU.
Fig. 4 shows a schematic structural diagram of an electronic device.
Fig. 5 shows another schematic structural diagram of an electronic device.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 shows a method flowchart of a virtual machine live migration method, which may be used for an electronic device controlling virtual machine live migration, where the electronic device may be located in a source physical machine in virtual machine live migration. As shown in fig. 1, the virtual machine live migration method includes:
Step 101, in the process of virtual machine live migration, determining the dirty page rate of each virtual processor VCPU of the virtual machine.
Specifically, a large number of memory dirty pages are generated during live migration, and if the virtual machine is configured with multiple VCPUs, the dirty page rate at which each VCPU generates dirty pages needs to be calculated.
The Virtual Machine in this embodiment may be a Kernel-based Virtual Machine (KVM).
And step 102, determining the dirty page rate of the virtual machine according to the dirty page rate of each VCPU.
Specifically, if the virtual machine is configured with multiple VCPUs, the dirty page rate of the virtual machine needs to be determined according to the dirty page rates of all VCPUs in calculating the dirty page rate of the virtual machine. Such as: and determining the dirty page rate of the virtual machine according to a certain rule by combining the dirty page rates of all VCPUs.
And 103, if the set migration convergence condition is determined not to be met according to the dirty page rate of the virtual machine, selecting a target VCPU meeting the set dirty page rate condition from each VCPU.
Specifically, for a virtual machine with a high memory load, the set migration convergence condition may not be met (that is, the number of dirty pages generated by the virtual machine cannot converge). To improve the success rate of live migration, one or more target VCPUs with higher dirty page rates (that is, target VCPUs meeting the set dirty page rate condition) can be selected from the VCPUs configured in the virtual machine, and the dirty page rate of each target VCPU is then reduced by limiting its memory access bandwidth. This avoids reducing the dirty page rate by forcing all VCPUs to exit from Guest mode to Host mode, thereby avoiding the overhead of VCPU mode switching and saving computing resources.
And step 104, performing memory access bandwidth limitation on the target VCPU, wherein the memory access bandwidth limitation is used for reducing the dirty page rate of the target VCPU.
Specifically, the target VCPU may be memory access bandwidth limited by Intel (Intel) RDT MBA technology. The RDT is an abbreviation of Resource Director Technology (Resource Director Technology); MBA is an abbreviation of Memory Bandwidth Allocation (Memory Bandwidth Allocation).
After the step 102, if it is determined that the set migration convergence condition is satisfied according to the dirty page rate of the virtual machine, it may be determined that the virtual machine can be successfully live migrated.
The above steps 101 to 104 may be used when the migration convergence condition is not satisfied in a given memory iterative-copy round; that is, in each memory iterative-copy round, the dirty page rate of each VCPU of the virtual machine is determined; the dirty page rate of the virtual machine is determined according to the dirty page rate of each VCPU; if it is determined according to the dirty page rate of the virtual machine that the set migration convergence condition is not met, a target VCPU meeting the set dirty page rate condition is selected from the VCPUs and its memory access bandwidth is limited; and so on, until the set migration convergence condition is satisfied.
Namely: when each iteration of migration is finished, updating the dirty page rate of each VCPU and re-judging whether the migration converges (i.e. whether the set migration convergence condition is satisfied), if not (i.e. the set migration convergence condition is not satisfied), continuing to perform memory access bandwidth limitation, for example: the memory access bandwidth limitation is carried out by reducing the upper limit value of the VCPU memory access bandwidth.
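The per-round decision described above can be sketched as follows. This is an illustrative helper, not from the patent: the step size and floor used to lower the bandwidth cap are assumed values.

```python
def throttle_step(vcpu_rates: dict[int, float],
                  transfer_rate: float,
                  mba_percent: int,
                  step: int = 10, floor: int = 10) -> tuple[int, bool]:
    """At the end of one iterative-copy round, check convergence; if the
    VM dirty page rate still exceeds the transfer rate, lower the memory
    bandwidth cap for the next round. Returns (new_cap, converged)."""
    if sum(vcpu_rates.values()) <= transfer_rate:
        return mba_percent, True   # converged: keep the current cap
    return max(floor, mba_percent - step), False
```

For example, with per-VCPU rates summing to 1200 pages/s against a transfer rate of 1000 pages/s and a current cap of 50%, the next round would run with a 40% cap.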
In the embodiment of the application, when it is determined that the set migration convergence condition is not met according to the dirty page rate of the virtual machine, the target VCPU meeting the set dirty page rate condition is selected from each VCPU, and the memory access bandwidth limitation is performed on the target VCPU, so that the computing resources are saved, and the success rate of the hot migration of the virtual machine is improved.
In an alternative embodiment, in performing step 101, the following implementation may be adopted:
for any VCPU among the VCPUs, determining the dirty page rate of that VCPU according to the number of dirty page addresses added within a set time to the Page Modification Logging (PML) cache corresponding to that VCPU.
Specifically, during the virtual machine live migration, newly-added dirty page address entries in the PML cache may be periodically counted, and the dirty page rate of each VCPU in unit time is calculated.
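A minimal sketch of this periodic computation (the function name and sampling interface are illustrative, not part of the patent):

```python
def dirty_page_rate(new_entries: int, interval_s: float) -> float:
    """Dirty pages per second, from the number of dirty page address
    entries newly added to a VCPU's PML cache over one sampling window."""
    if interval_s <= 0:
        raise ValueError("sampling interval must be positive")
    return new_entries / interval_s

# e.g. 1024 new dirty page addresses counted over a 0.5 s window
rate = dirty_page_rate(1024, 0.5)  # 2048.0 pages/s
```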
In the embodiment of the application, the dirty page rate of the VCPU can be determined according to the number of the dirty page addresses added in the set time of the dirty page modification record corresponding to the VCPU, and the efficiency of determining the dirty page rate can be improved.
In an optional embodiment, in executing step 101, the method may further include:
and starting the PML cache allocated to each VCPU.
Specifically, when the virtual machine is started, a PML cache may be allocated to each VCPU to save its dirty page addresses during migration. In the process of virtual machine live migration, the PML cache allocated to each VCPU may be enabled, so that dirty page addresses generated during migration are stored in the PML cache; meanwhile, newly added dirty page address entries in the PML cache are periodically counted and the dirty page rate of each VCPU per unit time is calculated.
In the embodiment of the application, a PML cache can be allocated to each VCPU when the virtual machine is started and enabled during live migration, so that the dirty page rate of a VCPU can be determined from the number of dirty page addresses added to its corresponding PML cache within the set time, improving the reliability of determining the dirty page rate.
In an alternative embodiment, in performing step 102, the following implementation may be employed:
and calculating the sum of the dirty page rates of each virtual processor, wherein the obtained calculation result is the dirty page rate of the virtual machine.
In particular, the dirty page rate of a virtual machine may be the sum of the dirty page rates of each virtual processor of the virtual machine.
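The accumulation described above is a plain sum (illustrative helper name):

```python
def vm_dirty_page_rate(vcpu_rates: list[float]) -> float:
    """The virtual machine's dirty page rate is the sum of the
    dirty page rates of all of its virtual processors."""
    return sum(vcpu_rates)

# e.g. three VCPUs dirtying 1500, 300 and 200 pages/s respectively
total = vm_dirty_page_rate([1500.0, 300.0, 200.0])  # 2000.0 pages/s
```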
In the embodiment of the application, the dirty page rate of the virtual machine can be determined by calculating the sum of the dirty page rates of each virtual processor, and then whether migration is converged can be judged according to the dirty page rate of the virtual machine, so that the accuracy of judging whether migration is converged is improved.
In an alternative embodiment, the setting of the migration convergence condition in step 103 may include: the dirty page rate of the virtual machine is less than or equal to the migration transfer rate of the virtual machine.
Specifically, when the dirty page rate of the virtual machine is greater than the migration transmission rate of the virtual machine, it indicates that the memory pressure of the virtual machine is large, and the number of the generated dirty pages cannot be converged. And when the dirty page rate of the virtual machine is less than or equal to the migration transmission rate of the virtual machine, the dirty page number generated by the virtual machine can be converged.
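The set convergence condition can be expressed as a simple predicate. This sketch assumes both quantities are expressed in comparable units (e.g. pages per second, with the transmission rate converted from bytes per second):

```python
def migration_converges(vm_dirty_rate: float, transfer_rate: float) -> bool:
    """Set migration convergence condition: the VM's dirty page rate is
    less than or equal to its migration transmission rate."""
    return vm_dirty_rate <= transfer_rate
```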
In the embodiment of the application, whether migration is converged can be determined according to the size relationship between the dirty page rate of the virtual machine and the migration transmission rate of the virtual machine, so that the reliability of judging whether migration is converged is improved.
In an alternative embodiment, in performing step 103, the following implementation may be employed:
performing a first sorting of the VCPUs in descending order of dirty page rate, and sequentially selecting a set number of VCPUs backward from the head of the first sorting, the set number of VCPUs being the target VCPUs; or
performing a second sorting of the VCPUs in ascending order of dirty page rate, and sequentially selecting the set number of VCPUs forward from the tail of the second sorting, the set number of VCPUs being the target VCPUs.
Specifically, the set number of VCPUs may be one or more. In order to improve the success rate of the virtual machine live migration, one or more target VCPUs with higher dirty page rate can be selected from a plurality of VCPUs configured in the virtual machine.
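Either sorting direction yields the same selection; a sketch using a single descending sort (names are illustrative):

```python
def select_target_vcpus(rates: dict[int, float], n: int) -> list[int]:
    """First sorting variant: order VCPU ids by dirty page rate,
    descending, and take the first n from the head. The ascending
    variant taking n from the tail selects the same VCPUs."""
    return sorted(rates, key=rates.get, reverse=True)[:n]

# VCPU 1 and VCPU 2 have the highest dirty page rates
targets = select_target_vcpus({0: 100.0, 1: 300.0, 2: 200.0}, 2)  # [1, 2]
```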
In the embodiment of the application, one or more target VCPUs with higher dirty page rate can be determined in a sequencing mode according to the dirty page rate of each VCPU, and the efficiency and the accuracy of selecting the target VCPUs are improved.
In an alternative embodiment, in performing step 104, the following implementation may be employed:
determining the upper limit value of the memory access bandwidth of the thread corresponding to the target VCPU;
and limiting the memory access bandwidth of the target VCPU according to the memory access bandwidth upper limit value.
Specifically, the memory access bandwidth upper limit value of the thread corresponding to the target VCPU may be set by an Intel RDT MBA tool, and the memory access bandwidth of the target VCPU is limited.
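On Linux, an RDT MBA cap can be applied through the resctrl filesystem: creating a group directory, assigning the VCPU thread to it, and writing an `MB:` schemata line. The sketch below is an illustration under stated assumptions — the group name and throttle percentage are hypothetical, the `root` parameter exists only to make the function testable, and a real deployment needs a mounted resctrl filesystem, MBA-capable hardware, and one schemata entry per cache domain:

```python
import os

def limit_vcpu_bandwidth(tid: int, percent: int,
                         root: str = "/sys/fs/resctrl",
                         group: str = "vm_throttle") -> None:
    """Place the host thread backing a target VCPU into a resctrl group
    whose MBA schemata caps its memory access bandwidth to `percent`."""
    gdir = os.path.join(root, group)
    os.makedirs(gdir, exist_ok=True)      # mkdir under resctrl creates a group
    # Assign the VCPU thread to the group.
    with open(os.path.join(gdir, "tasks"), "w") as f:
        f.write(str(tid))
    # Throttle memory bandwidth on domain 0 only (illustrative).
    with open(os.path.join(gdir, "schemata"), "w") as f:
        f.write(f"MB:0={percent}\n")
```

Tightening the cap raises memory access latency for the throttled thread, which is exactly the mechanism the embodiment relies on to reduce the dirty page amount per unit time.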
In the embodiment of the application, the memory access bandwidth of the target VCPU can be limited through the upper limit value of the memory access bandwidth, so that the memory access time delay can be increased, and the dirty page amount generated by the virtual machine VCPU in unit time can be effectively reduced.
The implementation process of the virtual machine live migration is described below by using a specific example.
The dirty page rate of each VCPU is calculated as shown in fig. 3. In fig. 3, the VCPUs of the Virtual Machine (VM) include: VCPU1, VCPU2, and VCPU3.
In calculating the Dirty Page rates for VCPU1, VCPU2, and VCPU3, calculations may be based on the Intel PML mechanism and the Dirty Page Ring (Dirty Page Ring). The specific process comprises the following steps:
the PML hardware records the guest physical address (GPA) of each memory page accessed by a virtual machine VCPU into the PML cache, a buffer with a fixed size of 512 entries, and the KVM synchronizes the contents of the PML cache into the Dirty Page Ring of each VCPU. The Dirty Page Ring is a reusable ring whose implementation can be an array or a lock-free queue. It may hold the physical frame number (PFN) of each memory page accessed by a VCPU of the virtual machine, such as a Guest Frame Number (GFN).
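A toy model of the per-VCPU ring described above (illustrative only: the real KVM structure is a kernel ring shared with userspace, not a Python deque):

```python
from collections import deque

class DirtyPageRing:
    """Fixed-size reusable ring of guest frame numbers (GFNs), mimicking
    the per-VCPU structure that KVM synchronizes PML entries into."""

    def __init__(self, size: int = 512):
        self.entries: deque = deque(maxlen=size)  # oldest entries drop off

    def push(self, gfn: int) -> None:
        """Record one dirtied guest frame number."""
        self.entries.append(gfn)

    def drain(self) -> list:
        """Consume all pending dirty GFNs (what a userspace poller reads)."""
        out = list(self.entries)
        self.entries.clear()
        return out
```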
Further, the dirty page information acquiring module of QEMU periodically accesses the contents of the Dirty Page Ring, checks whether each VCPU has newly added memory dirty pages, and calculates the dirty page rate of each VCPU. The sum of the VCPU dirty page rates is taken as the virtual machine dirty page rate, which is compared with the migration transmission rate to judge whether the migration converges.
Step 204, when each iteration round of migration ends, update the dirty page rate of each VCPU and judge again whether the migration converges; if not, continue to reduce the upper limit of the VCPU memory access bandwidth until the migration converges.
As can be seen from the above embodiments, the VCPU memory access bandwidth can be reduced through the Intel RDT MBA technology to make the migration converge; the dirty page rate of each VCPU can be calculated through the dirty page statistics structure (Dirty Page Ring) added to the KVM; and the dirty page rate of the virtual machine can be calculated by accumulating the dirty page rates of the VCPUs.
Fig. 4 shows a schematic structural diagram of an electronic device. The electronic device may be a device for controlling virtual machine live migration. As shown in fig. 4, the electronic apparatus includes:
a first determining module 11, configured to determine a dirty page rate of each virtual processor VCPU of a virtual machine during a live migration of the virtual machine;
a second determining module 12, configured to determine a dirty page rate of the virtual machine according to the dirty page rate of each VCPU;
a selecting module 13, configured to select, if it is determined according to the dirty page rate of the virtual machine that the set migration convergence condition is not satisfied, a target VCPU that satisfies the set dirty page rate condition from each VCPU;
and a memory access bandwidth limiting module 14, configured to perform memory access bandwidth limitation on the target VCPU, where the memory access bandwidth limitation is used to reduce a dirty page rate of the target VCPU.
In one possible implementation manner, the setting of the migration convergence condition includes: the dirty page rate of the virtual machine is less than or equal to the migration transmission rate of the virtual machine.
In a possible implementation manner, the selecting module 13 is specifically configured to:
performing a first sorting of the VCPUs in descending order of dirty page rate, and sequentially selecting a set number of VCPUs backward from the head of the first sorting, the set number of VCPUs being the target VCPUs; or
performing a second sorting of the VCPUs in ascending order of dirty page rate, and sequentially selecting the set number of VCPUs forward from the tail of the second sorting, the set number of VCPUs being the target VCPUs.
In a possible implementation manner, the first determining module 11 is specifically configured to:
and aiming at any VCPU in each VCPU, according to the number of the dirty page addresses increased in the set time by the dirty page modification record PML cache corresponding to the VCPU, determining the dirty page rate of the VCPU.
In one possible implementation, the electronic device further includes:
and the starting module is used for starting the PML cache distributed for each VCPU.
In a possible implementation manner, the second determining module 12 is specifically configured to:
calculate the sum of the dirty page rates of the virtual processors, the result of the calculation being the dirty page rate of the virtual machine.
In a possible implementation manner, the memory access bandwidth limiting module 14 is specifically configured to:
determine an upper limit of the memory access bandwidth for the thread corresponding to the target VCPU;
and limit the memory access bandwidth of the target VCPU according to the upper limit.
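On Linux hosts, one concrete mechanism for capping the memory bandwidth of an individual thread is Intel Memory Bandwidth Allocation exposed through the resctrl filesystem. The sketch below is an illustration of that mechanism, not the patented implementation; the group name, single-domain schemata, and percentage cap are assumptions made for the example.

```python
import os

RESCTRL = "/sys/fs/resctrl"  # assumes resctrl is mounted and MBA is supported

def mba_schemata(percent, domains=(0,)):
    """Build a resctrl schemata line capping memory bandwidth at `percent`."""
    body = ";".join(f"{d}={percent}" for d in domains)
    return f"MB:{body}\n"

def throttle_vcpu_thread(tid, percent, group="migration-throttle"):
    """Move a vCPU thread into a resctrl group with a bandwidth cap.

    Illustrative only: real code must handle permissions, every memory
    domain on the host, and lifting the cap once migration converges.
    """
    path = os.path.join(RESCTRL, group)
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "schemata"), "w") as f:
        f.write(mba_schemata(percent))   # e.g. "MB:0=20" caps domain 0 at 20%
    with open(os.path.join(path, "tasks"), "w") as f:
        f.write(str(tid))                # assign the vCPU thread to the group
```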
It should be noted that the electronic device provided in this embodiment of the present application can implement all the method steps of the above method embodiment and achieve the same technical effect; the parts and beneficial effects identical to those of the method embodiment are not described again herein.
Fig. 5 shows another schematic structural diagram of an electronic device. As shown in Fig. 5, the electronic device may include: a processor (processor) 510, a communication interface (Communications Interface) 520, a memory (memory) 530 and a communication bus 540, wherein the processor 510, the communication interface 520 and the memory 530 communicate with each other via the communication bus 540. The processor 510 may call logic instructions in the memory 530 to perform a virtual machine live migration method comprising:
determining, during live migration of a virtual machine, a dirty page rate of each virtual processor (VCPU) of the virtual machine;
determining a dirty page rate of the virtual machine according to the dirty page rate of each VCPU;
if it is determined according to the dirty page rate of the virtual machine that a set migration convergence condition is not satisfied, selecting, from the VCPUs, a target VCPU that satisfies a set dirty page rate condition;
and limiting the memory access bandwidth of the target VCPU, where the limitation is used to reduce the dirty page rate of the target VCPU.
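Taken together, the four steps above form a periodic convergence check. The following is a hedged Python rendering of one iteration under stated assumptions: `throttle_vcpu` is a hypothetical callback applying the memory access bandwidth limit, and dirty page rates and the transmission rate are assumed to be in comparable units; it sketches the control flow, not the actual implementation.

```python
def migration_throttle_step(vcpu_dirty_rates, transfer_rate, throttle_count,
                            throttle_vcpu):
    """One iteration of the convergence check described above.

    vcpu_dirty_rates: dict of VCPU id -> dirty page rate.
    transfer_rate: current migration transmission rate (same units).
    throttle_vcpu: hypothetical callback that caps a VCPU's bandwidth.
    Returns the list of throttled VCPUs (empty when already converging).
    """
    # The VM's dirty page rate is the sum of the per-VCPU rates.
    vm_rate = sum(vcpu_dirty_rates.values())
    # Set migration convergence condition: VM dirty rate <= transfer rate.
    if vm_rate <= transfer_rate:
        return []
    # Otherwise throttle the VCPUs with the highest dirty page rates.
    targets = sorted(vcpu_dirty_rates, key=vcpu_dirty_rates.get,
                     reverse=True)[:throttle_count]
    for vcpu in targets:
        throttle_vcpu(vcpu)  # reduces that VCPU's dirty page rate
    return targets
```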
Furthermore, the logic instructions in the memory 530 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions which, when executed by a computer, enable the computer to execute the virtual machine live migration method provided above, the method comprising:
determining, during live migration of a virtual machine, a dirty page rate of each virtual processor (VCPU) of the virtual machine;
determining a dirty page rate of the virtual machine according to the dirty page rate of each VCPU;
if it is determined according to the dirty page rate of the virtual machine that a set migration convergence condition is not satisfied, selecting, from the VCPUs, a target VCPU that satisfies a set dirty page rate condition;
and limiting the memory access bandwidth of the target VCPU, where the limitation is used to reduce the dirty page rate of the target VCPU.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the virtual machine live migration method provided above, the method comprising:
determining, during live migration of a virtual machine, a dirty page rate of each virtual processor (VCPU) of the virtual machine;
determining a dirty page rate of the virtual machine according to the dirty page rate of each VCPU;
if it is determined according to the dirty page rate of the virtual machine that a set migration convergence condition is not satisfied, selecting, from the VCPUs, a target VCPU that satisfies a set dirty page rate condition;
and limiting the memory access bandwidth of the target VCPU, where the limitation is used to reduce the dirty page rate of the target VCPU.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A method for virtual machine live migration, comprising:
determining, during live migration of a virtual machine, a dirty page rate of each virtual processor (VCPU) of the virtual machine;
determining a dirty page rate of the virtual machine according to the dirty page rate of each VCPU;
if it is determined according to the dirty page rate of the virtual machine that a set migration convergence condition is not satisfied, selecting, from the VCPUs, a target VCPU that satisfies a set dirty page rate condition;
and limiting the memory access bandwidth of the target VCPU, where the limitation is used to reduce the dirty page rate of the target VCPU.
2. The method of claim 1, wherein the set migration convergence condition comprises: the dirty page rate of the virtual machine being less than or equal to the migration transmission rate of the virtual machine.
3. The method of claim 1, wherein said selecting a target VCPU from said each VCPU that satisfies a set dirty page rate condition comprises:
sorting the VCPUs in descending order of dirty page rate to obtain a first ordering, and selecting a set number of VCPUs starting from the head of the first ordering, the set number of VCPUs being the target VCPUs; or
sorting the VCPUs in ascending order of dirty page rate to obtain a second ordering, and selecting the set number of VCPUs starting from the tail of the second ordering, the set number of VCPUs being the target VCPUs.
4. The method of any of claims 1 to 3, wherein determining a dirty page rate for each VCPU of a virtual machine comprises:
for any VCPU among the VCPUs, determining the dirty page rate of that VCPU according to the number of dirty page addresses added, within a set time, to the page modification logging (PML) buffer corresponding to the VCPU.
5. The method of claim 4, further comprising:
enabling the PML buffer allocated to each VCPU.
6. The method of any of claims 1 to 3, wherein said determining a dirty page rate for the virtual machine based on the dirty page rate for each virtual processor comprises:
calculating the sum of the dirty page rates of the virtual processors, the result of the calculation being the dirty page rate of the virtual machine.
7. The method according to any of claims 1 to 3, wherein the performing memory access bandwidth limitation on the target VCPU comprises:
determining an upper limit of the memory access bandwidth for the thread corresponding to the target VCPU;
and limiting the memory access bandwidth of the target VCPU according to the upper limit.
8. An electronic device, comprising:
a first determining module, configured to determine a dirty page rate of each virtual processor (VCPU) of a virtual machine during live migration of the virtual machine;
a second determining module, configured to determine a dirty page rate of the virtual machine according to the dirty page rate of each VCPU;
a selecting module, configured to select, from the VCPUs, a target VCPU that satisfies a set dirty page rate condition if it is determined according to the dirty page rate of the virtual machine that a set migration convergence condition is not satisfied;
and a memory access bandwidth limiting module, configured to limit the memory access bandwidth of the target VCPU, where the limitation is used to reduce the dirty page rate of the target VCPU.
9. An electronic device, comprising: a memory and a processor, the memory and the processor being communicatively coupled to each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the virtual machine live migration method of any one of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions for causing a computer to perform the virtual machine live migration method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111577821.8A CN114443211A (en) | 2021-12-22 | 2021-12-22 | Virtual machine live migration method, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114443211A true CN114443211A (en) | 2022-05-06 |
Family
ID=81364211
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115794315A (en) * | 2023-02-02 | 2023-03-14 | 天翼云科技有限公司 | Statistical method and device of dirty page rate, electronic equipment and storage medium |
CN115827169A (en) * | 2023-02-07 | 2023-03-21 | 天翼云科技有限公司 | Virtual machine migration method and device, electronic equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||