CN110134490B - Virtual machine dynamic migration method, device and storage medium - Google Patents

Virtual machine dynamic migration method, device and storage medium

Info

Publication number
CN110134490B
CN110134490B CN201810130116.5A CN201810130116A
Authority
CN
China
Prior art keywords
iteration
memory
dirty page
virtual machine
page rate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810130116.5A
Other languages
Chinese (zh)
Other versions
CN110134490A (en)
Inventor
童遥
李华
申光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201810130116.5A priority Critical patent/CN110134490B/en
Publication of CN110134490A publication Critical patent/CN110134490A/en
Application granted granted Critical
Publication of CN110134490B publication Critical patent/CN110134490B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F9/4856Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing

Abstract

The invention discloses a virtual machine dynamic migration method, which comprises the following steps: calculating a first dirty page rate of each memory page in the to-be-migrated list in the current round of iteration; judging whether the first dirty page rate is smaller than a preset current dirty page rate threshold; and if so, sending the memory pages meeting the condition to a receiving end, where a memory page meets the condition when its first dirty page rate is smaller than the preset current dirty page rate threshold. The invention also discloses a device and a storage medium. With the method and the device, pre-copying based on dirty page rate prediction reduces the amount of data transmitted and the number of memory pages sent in each iteration: only memory pages whose dirty page rate is smaller than the current dirty page rate threshold are transmitted, so the total number of iterations is reduced and the migration time is shortened.

Description

Virtual machine dynamic migration method, device and storage medium
Technical Field
The present invention relates to the field of cloud computing data processing technologies, and in particular, to a virtual machine dynamic migration method, device, and storage medium.
Background
Virtualization, especially server virtualization, improves hardware resource utilization, fully exploits the computing capability of physical machines, responds quickly to changing demands, and raises the level of office automation, so it is widely used in environments such as data centers and cloud computing. Virtual machine dynamic migration is one of the core technologies of virtualization: while a virtual machine is running, its entire running state is quickly and completely migrated from the original physical machine to another physical machine, and the whole migration process is so smooth that users hardly perceive any difference. Virtual machine dynamic migration can effectively support server load balancing, online maintenance and upgrading, disaster recovery, and the like.
The physical memory of a virtual machine is the largest and most complex part of the state handled during migration, because the virtual machine continues to provide service during dynamic migration; memory migration must therefore balance the contradiction between downtime and total migration time, reducing both as far as possible.
In general, the process of memory migration can be divided into three phases:
(1) Iterative copy (Push copy) phase: the source virtual machine transmits a portion of its memory pages to the target host while it continues to run, and memory pages that are modified during this process must be transferred again to ensure consistency.
(2) Shutdown copy (Stop-and-Copy) phase: the source virtual machine is shut down, the remaining memory pages are copied to the target host, and the target virtual machine is then started.
(3) Copy on demand (Pull copy) phase: while the target virtual machine runs, if a memory page required by a process on it has not yet been copied, a page fault is generated and the page is copied from the source virtual machine over the network.
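For orientation, the following is a minimal Python sketch of how these three phases fit together in a generic live-migration loop. It is an assumed illustration only, not any particular hypervisor's implementation: the names live_migrate and dirty_prob, the toy per-page dirty probabilities, and the simple convergence test are all hypothetical.

import random

def live_migrate(memory, dirty_prob, max_rounds=30, stop_threshold=8):
    """Toy simulation of the three memory-migration phases.

    memory:     dict {page_id: content} held by the source.
    dirty_prob: per-page probability of being rewritten during one round.
    """
    target = {}                        # pages already present on the target
    to_send = set(memory)              # (1) Push phase starts with every page

    for _ in range(max_rounds):
        for page in to_send:           # copy while the source VM keeps running
            target[page] = memory[page]
        # pages rewritten during this round must be sent again next round
        to_send = {p for p in memory if random.random() < dirty_prob[p]}
        if len(to_send) <= stop_threshold:          # toy convergence test
            break

    # (2) Stop-and-Copy phase: the VM is paused and the remaining pages
    # (plus non-memory state, not modelled here) are copied in one go.
    for page in to_send:
        target[page] = memory[page]

    # (3) Pull phase: the restarted target would fault in any page that was
    # never copied; in this toy model every page is already present.
    return target

pages = {i: "content-%d" % i for i in range(100)}
rates = {i: (0.9 if i < 10 else 0.05) for i in pages}   # 10 "hot" pages
assert live_migrate(pages, rates) == pages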
Two methods are commonly used in the prior art: post-copy and pre-copy. Post-copy uses the Stop-and-Copy and Pull phases: the source virtual machine is first paused, only the key memory pages holding kernel structures are transmitted to the target host in a short Stop-and-Copy phase, the target virtual machine is then started, and the remaining memory pages are fetched from the source virtual machine over the network when a page fault is generated on first use. Although the downtime of this approach is very short, it greatly increases the total migration time and severely degrades the performance of the target virtual machine during the Pull phase. Post-copy is also very fragile: once either the source or the target virtual machine fails during the Pull phase, the migration cannot restore the virtual machine to a correct state.
Pre-copy consists of a number of iterative Push phases and a very short Stop-and-Copy phase. Although pre-copy balances the contradiction between downtime and total migration time to some extent, it does not address the question of whether a given memory page should be transferred in the pre-copy stage or in the Stop-and-Copy stage.
Disclosure of Invention
The main purpose of the present invention is to provide a virtual machine dynamic migration method, device, and storage medium, aiming to solve problems such as the long duration of the virtual machine dynamic migration process, its susceptibility to errors, and the uncertainty about the stage in which a memory page should be transmitted.
In order to achieve the above object, the present invention provides a virtual machine dynamic migration method, which includes the steps of:
calculating a first dirty page rate of each memory page in the to-be-migrated list in the current round of iteration;
judging whether the first dirty page rate is smaller than a preset current dirty page rate threshold;
if yes, sending the memory pages meeting the condition to a receiving end, wherein a memory page meets the condition when its first dirty page rate is smaller than the preset current dirty page rate threshold.
In addition, in order to achieve the above object, the present invention also proposes an apparatus comprising a processor and a memory;
the processor is configured to execute the virtual machine dynamic migration program stored in the memory to implement the method described above.
In addition, to achieve the above object, the present invention also proposes a computer-readable storage medium storing one or more programs executable by one or more processors to implement the above method.
According to the virtual machine dynamic migration method, device, and storage medium, the first dirty page rate of each memory page in the to-be-migrated list is calculated in the current round of iteration, and when the first dirty page rate is smaller than the preset current dirty page rate threshold, the memory pages meeting the condition are sent to the receiving end. With this method, pre-copying based on dirty page rate prediction reduces the amount of data transmitted and the number of memory pages sent in each iteration: only memory pages whose dirty page rate is smaller than the current dirty page rate threshold are transmitted, so the total number of iterations is reduced and the migration time is shortened.
Drawings
FIG. 1 is a flowchart illustrating a virtual machine dynamic migration method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating another process of the virtual machine dynamic migration method according to the first embodiment of the present invention;
FIG. 3 is a second flow chart of a virtual machine dynamic migration method according to the first embodiment of the present invention;
FIG. 4 is a schematic diagram of another process of the virtual machine dynamic migration method according to the first embodiment of the present invention;
FIG. 5 is a flowchart illustrating a virtual machine dynamic migration method according to a second embodiment of the present invention;
FIG. 6 is a flowchart illustrating a virtual machine dynamic migration method according to a third embodiment of the present invention;
FIG. 7 is a schematic diagram of a device hardware architecture according to a fourth embodiment of the present invention;
FIG. 8 is a block diagram of the virtual machine dynamic migration program in the device of FIG. 7.
The achievement of the objects, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings and in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are adopted only to facilitate the description of the present invention and have no specific meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
First embodiment
Fig. 1 is a schematic flow chart of a virtual machine dynamic migration method according to a first embodiment of the present invention. In fig. 1, the virtual machine dynamic migration method includes the steps of:
step 110, calculating a first dirty page rate of each memory page in the to-be-migrated list in the present iteration.
Specifically, when memory migration starts, prediction of the dirty page rate also starts. In the present invention, the dirty page rate is the probability that a memory page will be rewritten within a period of time, estimated from how the page was rewritten during a previous period. As those skilled in the art will appreciate, the dirty page rate is related to the rate at which a memory page is rewritten (the overwrite rate), which is the average number of times the page is overwritten within a period of time. The larger the overwrite rate, the higher the dirty page rate. The converse does not necessarily hold: because the dirty page rate is a probability value indicating how likely the page is to be modified, a higher dirty page rate does not necessarily imply a higher overwrite rate.
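The patent defines the dirty page rate as a probability estimated from a page's rewrite behaviour over a previous period, without fixing a particular estimator. As one assumed illustration, the sketch below uses an exponentially weighted moving average over fixed sampling periods; the class name, the alpha parameter, and the period-by-period write sets are hypothetical.

class DirtyRateEstimator:
    """Per-page dirty page rate estimated from recent rewrite history.

    An exponentially weighted moving average is only one possible choice;
    the invention merely requires that the probability be derived from how
    the page was rewritten during a previous period of time.
    """

    def __init__(self, alpha=0.5):
        self.alpha = alpha          # weight given to the most recent period
        self.rate = {}              # page_id -> estimated dirty page rate

    def update(self, written_pages, all_pages):
        # Called once per sampling period with the pages rewritten in it.
        for page in all_pages:
            observed = 1.0 if page in written_pages else 0.0
            prev = self.rate.get(page, 0.0)
            self.rate[page] = self.alpha * observed + (1 - self.alpha) * prev

    def __getitem__(self, page):
        return self.rate.get(page, 0.0)

# A page written in three of the last four periods keeps a high rate, while
# a page written only once long ago decays toward zero.
est = DirtyRateEstimator(alpha=0.5)
for writes in [{1, 2}, {1}, {1}, set()]:
    est.update(writes, all_pages={1, 2, 3})
print(round(est[1], 3), round(est[2], 3), round(est[3], 3))   # 0.438 0.062 0.0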
Optionally, in this embodiment, a memory page that has been transferred to the destination and is then modified is referred to as a dirty page. If a memory page is modified frequently, that is, its overwrite rate is high, it is referred to as a high dirty page.
In this embodiment, the to-be-migrated list contains the pages marked as newly dirtied in the previous round of iteration, that is, the memory pages that need to be transferred in the current round. For the first iteration, the to-be-migrated list contains all memory pages.
During the iterations, the dirty page rate changes dynamically as the frequency with which memory pages are modified changes. In step 110, the first dirty page rate of each memory page in the to-be-migrated list is calculated at a first time in the current round of iteration. In this embodiment, the time interval from the start of the first iteration to the first time is taken as the interval for updating the dirty page rate.
Step 120, judging whether the first dirty page rate is smaller than a preset current dirty page rate threshold; if yes, go to step 130, if no, go to step 140.
Specifically, since the dirty page rate changes dynamically, the dirty page rate threshold also changes dynamically. As the number of iterations increases, the duration of each round tends to decrease and fewer dirty pages are actually generated within each round, so the dirty page rate threshold can be raised appropriately; more memory pages are then transferred in the pre-copy stage and the downtime is reduced.
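The embodiment only states that the threshold may be raised appropriately as the rounds shorten; the linear schedule below is an assumed example of such an increasing threshold, not a formula taken from the patent, and the base, step, and ceiling values are illustrative.

def current_threshold(round_index, base=0.3, step=0.1, ceiling=0.9):
    """Assumed schedule: the current dirty page rate threshold rises with
    each iteration round, so later rounds admit more pages into pre-copy
    and leave fewer for the stop-and-copy (downtime) phase."""
    return min(base + step * round_index, ceiling)

# Round 0 admits only fairly "cold" pages; by round 6 any page with a
# dirty page rate below 0.9 is eligible for transfer.
for r in range(7):
    print(r, current_threshold(r))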
step 130, sending the memory pages meeting the condition to the receiving end.
Specifically, when the first dirty page rate is smaller than the current dirty page rate threshold, the memory page meets the condition and can be transmitted to the receiving end in the current round of iteration.
Optionally, after sending the memory page meeting the condition to the receiving end, the memory page is cleared from the to-be-migrated list.
step 140, suspending transmission of the memory page.
Specifically, when the first dirty page rate is greater than or equal to the current dirty page rate threshold, there is a high probability that the memory page will be modified again later in this round of iteration, so its transmission is suspended in the current round.
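Putting steps 110 to 140 together, one possible per-round filter is sketched below. The estimator and threshold arguments stand for the dirty page rate predictor and the current threshold discussed above; the send callback and the returned set of suspended pages are bookkeeping assumptions, not details prescribed by the patent.

def run_iteration(to_migrate, estimator, threshold, send):
    """One pre-copy round following steps 110-140.

    to_migrate: set of page ids marked for this round (the to-be-migrated list).
    estimator:  maps page id -> first dirty page rate (step 110).
    threshold:  preset current dirty page rate threshold (step 120).
    send:       callable that transmits a page to the receiving end.
    Returns the pages whose transmission was suspended in this round.
    """
    suspended = set()
    for page in list(to_migrate):
        if estimator[page] < threshold:    # step 120: below the threshold?
            send(page)                     # step 130: transmit the page now
            to_migrate.discard(page)       # clear it from the list
        else:
            suspended.add(page)            # step 140: suspend; the page is
                                           # likely to be rewritten again
    return suspended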
Optionally, before step 110, the following steps may be further included:
judging whether the current iteration is the first iteration; if yes, the to-be-migrated list contains all memory pages; if not, the to-be-migrated list contains the modified memory pages generated during the previous round of iteration.
In this embodiment, the first iteration and each subsequent iteration may select the memory pages to be transferred according to the dirty page rate. Memory pages that are never rewritten are transferred in the first iteration. For memory pages that are rarely rewritten, if such a page is modified after its x-th transmission (x being a positive integer greater than 1) after the first round, its dirty page rate rises and then decays rapidly, and the page is transmitted again once its dirty page rate has dropped. For frequently rewritten memory pages, the improved method can predict, even in the first round, that they are high dirty pages, and these pages should be held back and transmitted in subsequent iterations or in the shutdown phase.
Optionally, to avoid blocking the migration process, high dirty pages may be removed from the to-be-migrated list and added back to it when the dirty page rate of a removed memory page is predicted to decrease.
Optionally, as a further improvement of this embodiment, for a memory page whose dirty page rate is higher than the preset current dirty page rate threshold and whose transmission has been suspended, there are three ways to subsequently continue its transmission to the receiving end (a combined sketch follows mode three below):
mode one:
as shown in fig. 2, after step 140, the virtual machine dynamic migration method further includes:
step 210, in the present iteration, calculating a second dirty page rate of the memory page at least one preset time period after the first time;
step 220, judging whether the second dirty page rate is smaller than a preset current dirty page rate threshold; if yes, go to step 230;
step 230, sending the memory page to the receiving end.
Specifically, after the first time in the current round of iteration, the dirty page rate of the memory page (for example, the second dirty page rate) is updated at every preset period (T). When the second dirty page rate is smaller than the preset current dirty page rate threshold, the memory page was modified earlier in the current round but is now unlikely to be rewritten again; after one or more preset periods its dirty page rate has dropped, so the memory page is sent to the receiving end in the current round of iteration. Otherwise, its transmission remains suspended.
Mode two:
as shown in fig. 3, after step 140, the virtual machine dynamic migration method further includes:
step 310, in the present iteration, calculating a second dirty page rate of the memory page at least one preset time period after the first time;
step 320, determining whether the second dirty page rate is greater than the first dirty page rate, and whether the difference between the second dirty page rate and the first dirty page rate is less than a preset threshold; if yes, go to step 330;
step 330, sending the memory page to the receiving end.
Specifically, after the first time in the current round of iteration, the dirty page rate of the memory page (for example, the second dirty page rate) is updated at every preset period (T). When the second dirty page rate is higher than the first dirty page rate, the increase from the first dirty page rate to the second dirty page rate is calculated; if the increase is smaller than the preset threshold, the memory page is transmitted to the receiving end in the current round of iteration. Otherwise, its transmission remains suspended.
Mode three:
when the virtual machine enters the shutdown stage, the memory pages that have not yet been transmitted are sent to the receiving end.
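A combined sketch of these three continuation options might look as follows. The period length T, the number of re-checks, the increase_limit parameter, and the send callback are assumptions made for illustration; the estimator is assumed to be refreshed elsewhere once per period T.

def recheck_suspended(page, first_rate, estimator, threshold,
                      increase_limit, send, periods=3):
    """Re-evaluate a suspended page after each preset period T.

    Mode one: send once the refreshed (second) dirty page rate drops below
              the preset current dirty page rate threshold.
    Mode two: send if the rate has risen, but by less than a preset limit
              (the page is dirty yet not getting markedly hotter).
    Mode three (not shown here): whatever is still unsent is transmitted
              when the virtual machine enters the shutdown stage.
    """
    for _ in range(periods):
        second_rate = estimator[page]        # refreshed after a period T
        if second_rate < threshold:          # mode one
            send(page)
            return True
        if first_rate < second_rate < first_rate + increase_limit:   # mode two
            send(page)
            return True
    return False                             # left for the shutdown stage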
Optionally, as shown in fig. 4, after step 130, the virtual machine dynamic migration method of the present embodiment further includes:
step 410, judging whether the current round of iteration has finished; if yes, go to step 420, if no, return to step 110;
step 420, marking the modified memory pages generated in the current round of iteration as the memory pages to be transferred in the next round, so as to update the to-be-migrated list.
Specifically, when the current round of iteration ends, the dirty pages newly generated during this round are marked and used as the memory pages to be transmitted in the next round of iteration, and the to-be-migrated list is updated accordingly.
According to the virtual machine dynamic migration method provided by this embodiment, the first dirty page rate of each memory page in the to-be-migrated list is calculated in the current round of iteration, and when the first dirty page rate is smaller than the preset current dirty page rate threshold, the memory pages meeting the condition are sent to the receiving end. With this method, pre-copying based on dirty page rate prediction reduces the amount of data transmitted and the number of memory pages sent in each iteration: only memory pages whose dirty page rate is smaller than the current dirty page rate threshold are transmitted, so the total number of iterations is reduced and the migration time is shortened.
Second embodiment
Fig. 5 is a schematic flow chart of a virtual machine dynamic migration method according to a second embodiment of the present invention. In fig. 5, the virtual machine dynamic migration method is a further improvement over the first embodiment, except that the method further includes:
step 510, judging whether the number of memory pages already migrated in the current round of iteration is smaller than a preset value; if yes, go to step 520, if not, return to step 110 to continue calculating the dirty page rate of the memory pages;
step 520, terminating the iteration, with the virtual machine entering a shutdown phase;
step 530, sending the untransmitted memory pages to the receiving end.
Specifically, when the number of memory pages migrated in the current round of iteration is smaller than the preset value, that is, when many high dirty pages remain, the iteration is terminated quickly, the virtual machine enters the shutdown phase, and the remaining memory pages are transmitted to the receiving end; in the shutdown phase, the other non-memory state is also transmitted to the receiving end.
According to this virtual machine dynamic migration method, when the number of memory pages migrated in the current round of iteration is smaller than the preset value, the iteration is terminated, the virtual machine enters the shutdown phase, and the untransmitted memory is sent to the receiving end, which reduces the number of iterations, avoids repeated retransmission of high dirty pages, and shortens the total migration time.
Third embodiment
Fig. 6 is a schematic flow chart of a virtual machine dynamic migration method according to a third embodiment of the present invention. In fig. 6, the virtual machine dynamic migration method is a further improvement over the first embodiment, except that the method further includes:
step 610, judging whether the number of iterations has reached the maximum value; if yes, go to step 620, if not, return to step 110 to continue calculating the dirty page rate of the memory pages;
step 620, terminating the iteration, with the virtual machine entering a shutdown phase;
step 630, sending the untransmitted memory pages to the receiving end.
Specifically, when the number of iterations reaches the maximum value, the iteration is terminated, the virtual machine enters the shutdown phase, and the remaining memory pages are transmitted to the receiving end; in the shutdown phase, the other non-memory state is also transmitted to the receiving end.
According to this virtual machine dynamic migration method, when the number of iterations reaches the maximum value, the iteration is terminated, the virtual machine enters the shutdown phase, and the untransmitted memory is sent to the receiving end, which reduces the total number of iterations and shortens the migration time.
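Taken together, the second and third embodiments add two termination checks to the iterative phase. The helper below is an assumed consolidation of both; the min_progress and max_rounds defaults are illustrative only.

def should_stop(pages_migrated_this_round, round_index,
                min_progress=16, max_rounds=30):
    """Assumed consolidation of the two stop conditions:
    - second embodiment: too few pages migrated in this round (many high
      dirty pages remain), so further iteration would mostly resend them;
    - third embodiment: the number of iterations has reached its maximum.
    Either condition ends iteration; the virtual machine then enters the
    shutdown stage, where the remaining pages and non-memory state are sent.
    """
    return (pages_migrated_this_round < min_progress
            or round_index >= max_rounds)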
Fourth embodiment
As shown in fig. 7, a schematic diagram of a device hardware architecture is provided in the fourth embodiment of the present invention. In fig. 7, the apparatus includes: a memory 710, a processor 720, and a virtual machine dynamic migration program 730 stored on the memory 710 and executable on the processor 720. In this embodiment, the virtual machine dynamic migration program 730 comprises a series of computer program instructions stored on the memory 710 which, when executed by the processor 720, implement the virtual machine dynamic migration operations of the embodiments of the present invention. In some embodiments, the virtual machine dynamic migration program 730 may be divided into one or more modules according to the specific operations implemented by the respective portions of the computer program instructions. As shown in fig. 8, the virtual machine dynamic migration program 730 includes: a calculation module 810, a judging module 820, a sending module 830, a pause transmission module 840, a marking module 850, and an iteration termination module 860. Wherein,
the calculating module 810 is configured to calculate a first dirty page rate of each memory page in the to-be-migrated list in the present iteration.
Specifically, when memory migration starts, the calculation module 810 begins to predict the dirty page rate. In the present invention, the dirty page rate is the probability that a memory page will be rewritten within a period of time, estimated from how the page was rewritten during a previous period. As those skilled in the art will appreciate, the dirty page rate is related to the rate at which a memory page is rewritten (the overwrite rate), which is the average number of times the page is overwritten within a period of time. The larger the overwrite rate, the higher the dirty page rate. The converse does not necessarily hold: because the dirty page rate is a probability value indicating how likely the page is to be modified, a higher dirty page rate does not necessarily imply a higher overwrite rate.
Optionally, in this embodiment, a memory page that has been transferred to the destination and is then modified is referred to as a dirty page. If a memory page is modified frequently, that is, its overwrite rate is high, it is referred to as a high dirty page.
In this embodiment, the to-be-migrated list contains the pages marked as newly dirtied in the previous round of iteration, that is, the memory pages that need to be transferred in the current round. For the first iteration, the to-be-migrated list contains all memory pages.
During the iterations, the dirty page rate changes dynamically as the frequency with which memory pages are modified changes. In the current round of iteration, the calculation module 810 calculates the first dirty page rate of each memory page in the to-be-migrated list at a first time. In this embodiment, the time interval from the start of the first iteration to the first time is taken as the interval for updating the dirty page rate.
A judging module 820, configured to judge whether the first dirty page rate is less than a preset current dirty page rate threshold; if yes, the sending module 830 is triggered, and if not, the pause transmission module 840 is triggered.
Specifically, since the dirty page rate changes dynamically, the dirty page rate threshold also changes dynamically. As the number of iterations increases, the duration of each round tends to decrease and fewer dirty pages are actually generated within each round, so the dirty page rate threshold can be raised appropriately; more memory pages are then transferred in the pre-copy stage and the downtime is reduced.
The sending module 830 is configured to send the memory pages meeting the condition to the receiving end.
Specifically, when the first dirty page rate is smaller than the current dirty page rate threshold, the memory page may be transmitted to the receiving end in this round of iteration.
Optionally, after sending the memory page meeting the condition to the receiving end, the memory page is cleared from the to-be-migrated list.
The pause transmission module 840 is configured to suspend transmission of the memory page.
Specifically, when the first dirty page rate is greater than or equal to the current dirty page rate threshold, there is a high probability that the memory page will be modified again later in this round of iteration, so the pause transmission module 840 suspends its transmission in the current round.
The judging module 820 is further configured to judge whether the current iteration is the first iteration; if yes, the to-be-migrated list contains all memory pages; if not, the to-be-migrated list contains the modified memory pages generated during the previous round of iteration.
In this embodiment, the first iteration and each subsequent iteration may select the memory pages to be transferred according to the dirty page rate. Memory pages that are never rewritten are transferred in the first iteration. For memory pages that are rarely rewritten, if such a page is modified after its x-th transmission (x being a positive integer greater than 1) after the first round, its dirty page rate rises and then decays rapidly, and the page is transmitted again once its dirty page rate has dropped. For frequently rewritten memory pages, the improved method can predict, even in the first round, that they are high dirty pages, and these pages should be held back and transmitted in subsequent iterations or in the shutdown phase.
Optionally, to avoid blocking the migration process, high dirty pages may be removed from the to-be-migrated list and added back to it when the dirty page rate of a removed memory page is predicted to decrease.
Optionally, as a further improvement of this embodiment, for a memory page whose dirty page rate is higher than the preset current dirty page rate threshold and whose transmission has been suspended, there are three ways to subsequently continue its transmission to the receiving end:
mode one:
the calculating module 810 is further configured to calculate, in the present iteration, a second dirty page rate of the memory page for at least one preset period of time after the first time;
a judging module 820, configured to further judge whether the second dirty page rate is less than a preset current dirty page rate threshold; if so, the sending module 830 sends the memory page to the receiving end.
Specifically, after the first time in the current round of iteration, the dirty page rate of the memory page (for example, the second dirty page rate) is updated at every preset period (T). When the judging module 820 determines that the second dirty page rate is smaller than the preset current dirty page rate threshold, the memory page was modified earlier in the current round but is now unlikely to be rewritten again; after one or more preset periods its dirty page rate has dropped, so the sending module 830 sends the memory page to the receiving end in the current round of iteration. Otherwise, the pause transmission module 840 continues to suspend its transmission.
Mode two:
the judging module 820 is further configured to judge whether the second dirty page rate is greater than the first dirty page rate, and whether a difference between the second dirty page rate and the first dirty page rate is less than a preset threshold; if so, the sending module 830 sends the memory page to the receiving end.
Specifically, after the first time in the current round of iteration, the dirty page rate of the memory page (for example, the second dirty page rate) is updated at every preset period (T). When the judging module 820 determines that the second dirty page rate is higher than the first dirty page rate, the increase from the first dirty page rate to the second dirty page rate is calculated; if the judging module 820 determines that the increase is smaller than the preset threshold, the sending module 830 transmits the memory page to the receiving end in the current round of iteration. Otherwise, the pause transmission module 840 continues to suspend its transmission.
Mode three:
the sending module 830 is further configured to send an untransmitted memory page to the receiving end when the virtual machine enters a shutdown phase.
Optionally, the judging module 820 is further configured to judge whether the current round of iteration has ended; if yes, the marking module 850 is triggered, which marks the modified memory pages generated in the current round of iteration as the memory pages to be transferred in the next round so as to update the to-be-migrated list.
Specifically, when the current round of iteration ends, the marking module 850 marks the dirty pages newly generated during this round and uses them as the memory pages to be transmitted in the next round of iteration, thereby updating the to-be-migrated list.
Optionally, the judging module 820 is further configured to judge whether the number of memory pages already migrated in the current round of iteration is smaller than a preset value; if yes, the iteration termination module 860 is triggered, and if not, the calculation module 810 is triggered to continue calculating the dirty page rate of the memory pages.
Optionally, in other embodiments, the judging module 820 is further configured to judge whether the number of iterations has reached the maximum value; if yes, the iteration termination module 860 is triggered to terminate the iteration and the virtual machine enters the shutdown phase; if not, the calculation module 810 is triggered to continue calculating the dirty page rate of the memory pages.
The iteration termination module 860 is configured to terminate the iteration, with the virtual machine entering the shutdown stage, when the judging module 820 determines that the number of memory pages migrated in the current round of iteration is smaller than the preset value or that the number of iterations has reached the maximum value.
the sending module 830 is further configured to send, in a shutdown phase, an untransmitted memory page to the receiving end.
Specifically, when the judging module 820 determines that the number of memory pages migrated in the current round of iteration is smaller than the preset value or that the number of iterations has reached the maximum value, the iteration is terminated, that is, the iteration is ended quickly when many high dirty pages remain; the virtual machine enters the shutdown phase and transmits the remaining memory pages to the receiving end, and in the shutdown phase the other non-memory state is also transmitted to the receiving end.
The device provided in this embodiment calculates, by means of the calculation module 810, the first dirty page rate of each memory page in the to-be-migrated list in the current round of iteration, and when the judging module 820 determines that the first dirty page rate is smaller than the preset current dirty page rate threshold, the sending module 830 sends the memory pages meeting the condition to the receiving end. With the device of this embodiment, pre-copying based on dirty page rate prediction reduces the amount of data transmitted and the number of memory pages sent in each iteration: only memory pages whose dirty page rate is smaller than the current dirty page rate threshold are transmitted, so the total number of iterations is reduced and the migration time is shortened.
Fifth embodiment
The embodiment of the invention also provides a computer-readable storage medium storing one or more programs. The computer-readable storage medium may include volatile memory, such as random access memory; it may also include non-volatile memory, such as read-only memory, flash memory, a hard disk, or a solid state disk; it may also comprise a combination of the above types of memory. When the one or more programs in the computer-readable storage medium are executed by one or more processors, the virtual machine dynamic migration method provided in the above embodiments is implemented.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Many other forms may be made by those of ordinary skill in the art in light of the present invention without departing from its spirit and the scope of the claims, and all of these fall within the protection of the present invention.

Claims (8)

1. A virtual machine dynamic migration method, the method comprising the steps of:
calculating a first dirty page rate of each memory page in the to-be-migrated list in the current round of iteration;
judging whether the first dirty page rate is smaller than a preset current dirty page rate threshold;
if yes, sending the memory pages meeting the condition to a receiving end in the current round of iteration, wherein a memory page meets the condition when its first dirty page rate is smaller than the preset current dirty page rate threshold, and the preset current dirty page rate threshold is continuously increased as the number of iterations increases;
when the first dirty page rate is greater than or equal to the preset current dirty page rate threshold, suspending transmission of the memory page;
wherein, if the first dirty page rate is calculated at a first time, after suspending the transmission of the memory page, the method further comprises:
in the current round of iteration, calculating a second dirty page rate of the memory page at least one preset time period after the first time;
judging whether the second dirty page rate is smaller than the preset current dirty page rate threshold; or, judging whether the second dirty page rate is greater than the first dirty page rate and whether the difference between the second dirty page rate and the first dirty page rate is smaller than a preset threshold;
if yes, sending the memory page to the receiving end.
2. The virtual machine dynamic migration method of claim 1, wherein after suspending the transfer of the memory page, the method further comprises:
when the virtual machine enters a shutdown stage, transmitting the memory pages that have not been transmitted to the receiving end.
3. The virtual machine dynamic migration method of claim 1, further comprising:
judging whether the number of memory pages already migrated in the current round of iteration is smaller than a preset value;
if yes, terminating iteration and enabling the virtual machine to enter a shutdown stage;
and sending the untransmitted memory pages to the receiving end.
4. The virtual machine dynamic migration method of claim 1, further comprising:
judging whether the number of iterations has reached the maximum value;
if yes, terminating iteration and enabling the virtual machine to enter a shutdown stage;
and sending the untransmitted memory pages to the receiving end.
5. The virtual machine dynamic migration method of claim 1, wherein if the current round of iteration is the first iteration, the to-be-migrated list contains all memory pages; if it is not the first iteration, the to-be-migrated list contains the modified memory pages generated during the previous round of iteration.
6. The virtual machine dynamic migration method of claim 5, wherein after sending the eligible memory pages to the receiving end, the method further comprises:
judging whether the current round of iteration has finished;
if yes, marking the modified memory pages generated in the current round of iteration as the memory pages to be transferred in the next round of iteration, so as to update the to-be-migrated list.
7. An apparatus comprising a processor and a memory;
the processor is configured to execute a virtual machine dynamic migration program stored in the memory to implement the method of any one of claims 1-6.
8. A computer readable storage medium storing one or more programs executable by one or more processors to implement the method of any of claims 1-6.
CN201810130116.5A 2018-02-08 2018-02-08 Virtual machine dynamic migration method, device and storage medium Active CN110134490B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810130116.5A CN110134490B (en) 2018-02-08 2018-02-08 Virtual machine dynamic migration method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810130116.5A CN110134490B (en) 2018-02-08 2018-02-08 Virtual machine dynamic migration method, device and storage medium

Publications (2)

Publication Number Publication Date
CN110134490A CN110134490A (en) 2019-08-16
CN110134490B (en) 2023-12-29

Family

ID=67567874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810130116.5A Active CN110134490B (en) 2018-02-08 2018-02-08 Virtual machine dynamic migration method, device and storage medium

Country Status (1)

Country Link
CN (1) CN110134490B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112559116A (en) * 2019-09-25 2021-03-26 阿里巴巴集团控股有限公司 Memory migration method and device and computing equipment
CN110971468B (en) * 2019-12-12 2022-04-05 广西大学 Delayed copy incremental container check point processing method based on dirty page prediction
CN111638937A (en) * 2020-04-23 2020-09-08 龙芯中科技术有限公司 Virtual machine migration method and device, electronic equipment and storage medium
CN115794315B (en) * 2023-02-02 2023-06-23 天翼云科技有限公司 Dirty page rate statistics method and device, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103365704A (en) * 2012-03-26 2013-10-23 中国移动通信集团公司 Memory pre-copying method in virtual machine migration, device executing memory pre-copying method and system
WO2014082459A1 (en) * 2012-11-30 2014-06-05 华为技术有限公司 Method, apparatus, and system for implementing hot migration of virtual machine
CN103955399A (en) * 2014-04-30 2014-07-30 华为技术有限公司 Migrating method and device for virtual machine, as well as physical host
CN104750620A (en) * 2015-04-23 2015-07-01 四川师范大学 Memory migration method and device
CN107479944A (en) * 2017-07-20 2017-12-15 上海交通大学 Mix the adaptive thermophoresis dispatching method of virutal machine memory and system under cloud mode

Also Published As

Publication number Publication date
CN110134490A (en) 2019-08-16

Similar Documents

Publication Publication Date Title
CN110134490B (en) Virtual machine dynamic migration method, device and storage medium
US10552199B2 (en) System and method for binary throttling for live migration of virtual machines
US10572285B2 (en) Method and apparatus for elastically scaling virtual machine cluster
CN110058966B (en) Method, apparatus and computer program product for data backup
US20140280485A1 (en) Pre-fetching remote resources
US20140115162A1 (en) Providing automated quality-of-service ('qos') for virtual machine migration across a shared data center network
WO2020140634A1 (en) Storage space optimization method and device, computer apparatus, and storage medium
CN108153594B (en) Resource fragment sorting method of artificial intelligence cloud platform and electronic equipment
CN107797859B (en) Scheduling method of timing task and scheduling server
WO2022183802A1 (en) Load balancing method, apparatus, and device, storage medium, and computer program product
CN112527470B (en) Model training method and device for predicting performance index and readable storage medium
Aldhalaan et al. Analytic performance modeling and optimization of live VM migration
US9769022B2 (en) Timeout value adaptation
KR20180122593A (en) How to delete a cloud host in a cloud computing environment, devices, servers, and storage media
CN114443211A (en) Virtual machine live migration method, equipment and storage medium
CN111555987B (en) Current limiting configuration method, device, equipment and computer storage medium
CN115525400A (en) Method, apparatus and program product for managing multiple computing tasks on a batch basis
US11003379B2 (en) Migration control apparatus and migration control method
CN107783826B (en) Virtual machine migration method, device and system
CN112631994A (en) Data migration method and system
CN113254191A (en) Method, electronic device and computer program product for running applications
US20160378536A1 (en) Control method and information processing device
CN114201458B (en) Information updating method, micro-service system and computer readable storage medium
CN112241398A (en) Data migration method and system
CN109117277B (en) Method and device for simulating synchronous blocking in asynchronous environment, storage medium, server and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant