CN115827169A - Virtual machine migration method and device, electronic equipment and medium - Google Patents

Info

Publication number
CN115827169A
Authority
CN
China
Prior art keywords: dirty page, virtual machine, dirty page collection, trend
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310075635.7A
Other languages
Chinese (zh)
Other versions
CN115827169B
Inventor
吴重云
邓鹏程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202310075635.7A
Publication of CN115827169A
Application granted
Publication of CN115827169B
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The embodiment of the invention provides a virtual machine migration method, a virtual machine migration device, an electronic device and a storage medium, wherein the method comprises the following steps: in the process of virtual machine live migration, if the virtual machine generates an exit event because a preset memory area is full, counting the virtual machine exit events to obtain corresponding counting information; counting a dirty page collection rate change trend from a dirty page collection thread of the virtual machine; and adjusting the dirty page collection rate of the dirty page collection thread according to the counting information and/or the change trend of the dirty page collection rate. The method dynamically adjusts the workload of the dirty page collection thread according to the business pressure inside the virtual machine, where that pressure is characterized by counting virtual machine exit events and by the change trend of the dirty page collection rate. Under high pressure the method dynamically accelerates dirty page collection, thereby reducing the occurrence of virtual machine exit events and improving the service availability of customers.

Description

Virtual machine migration method and device, electronic equipment and medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a virtual machine migration method, a virtual machine migration apparatus, an electronic device, and a computer-readable storage medium.
Background
In the field of cloud computing, online (live) migration of a virtual machine is an important function, frequently used by operation and maintenance personnel when handling a faulty host. Live migration requires that memory data, device state information, disk data and the like be migrated without interrupting the services of the client virtual machine; ideally the client cannot even perceive that a live migration has occurred. During migration, if a memory region of the client virtual machine is modified, the modified pages, called memory dirty pages, must be recorded. By tracking memory dirty pages, the virtualization device can, after completing one round of memory copying, migrate the dirty pages generated during that round to the destination in the next round. Through successive rounds of iteration, all of the latest memory in the client virtual machine is eventually migrated to the destination, execution is switched to the destination, and the live migration completes.
The dirty ring is a newer memory dirty page tracking feature. It is more advantageous for online migration of virtual machines with large memory and collects memory dirty pages more flexibly: collection can be performed per vcpu (virtual processor), which enables further features to be developed. In practical application, however, when the business pressure of the migrated virtual machine is high, migration can become time-consuming and the client's business performance can drop sharply during the migration process.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide a virtual machine migration method and a corresponding virtual machine migration apparatus, an electronic device, and a computer-readable storage medium that overcome or at least partially solve the above problems.
The embodiment of the invention discloses a virtual machine migration method, which comprises the following steps:
in the process of virtual machine live migration, if a virtual machine exit event is generated by the virtual machine based on a preset memory area full event, counting the virtual machine exit event to obtain corresponding counting information;
counting a dirty page collecting rate change trend from a dirty page collecting thread of the virtual machine;
and adjusting the dirty page collection rate of the dirty page collection thread according to the counting information and/or the change trend of the dirty page collection rate, so as to adjust the recovery rate of the preset memory area in the virtual machine by adjusting the dirty page collection rate.
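The three steps above can be sketched as one round of a control loop. The function below is an illustrative simplification, not the patent's implementation: the function name, the doubling/halving steps, and the string trend labels are all assumptions introduced for the sketch (the later optional paragraphs refine the trend handling with a stability counter).

```python
def adjust_collection_rate(exit_count_cur, exit_count_prev,
                           trend_cur, trend_prev, rate):
    """One round of the adjustment flow (illustrative sketch).

    exit_count_cur/prev: ring-full exit counts for this/last round (step 1).
    trend_cur/prev: collection-rate trends for this/last round (step 2).
    rate: current dirty page collection rate; a new rate is returned (step 3).
    """
    if exit_count_cur > exit_count_prev:
        # More ring-full exits than last round: business pressure rose,
        # so collect dirty pages faster in the next round.
        return rate * 2
    # Otherwise fall back to the collection-rate trend comparison.
    if trend_cur == trend_prev == "increasing":
        return rate * 2
    if trend_cur == trend_prev == "decreasing":
        return rate / 2
    return rate  # inconsistent trends: leave the rate unchanged
```

Under this sketch, exit-count growth always wins over the trend signal, mirroring the "and/or" ordering in the optional paragraphs that follow.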
Optionally, the counting information includes first counting information corresponding to a current round of dirty page collection and second counting information corresponding to a previous round of dirty page collection, and the adjusting the dirty page collection rate of the dirty page collection thread according to the counting information and/or the change trend of the dirty page collection rate includes:
and if the count value corresponding to the first counting information is larger than the count value corresponding to the second counting information, increasing the dirty page collection rate of the dirty page collection thread for performing dirty page collection in the next round.
Optionally, the dirty page collection rate variation trend includes a first variation trend of the dirty page collection rate in the current round and a second variation trend of the dirty page collection rate in the previous round, and the adjusting the dirty page collection rate of the dirty page collection thread according to the count information and/or the dirty page collection rate variation trend includes:
and if the count value corresponding to the first counting information is not greater than the count value corresponding to the second counting information, adjusting the dirty page collection rate of the dirty page collection thread according to the first change trend and the second change trend.
Optionally, the counting a dirty page collection rate variation trend from a dirty page collection thread of the virtual machine includes:
counting the quantity information of the collected dirty pages from the dirty page collection thread of the virtual machine; the quantity information comprises first quantity information collected in the current round and second quantity information collected in the previous round;
if the quantity value corresponding to the first quantity information is larger than the quantity value corresponding to the second quantity information, determining that the first change trend of the dirty page collection rate in the current round is an increasing trend;
and if the quantity value corresponding to the first quantity information is smaller than the quantity value corresponding to the second quantity information, determining that the first change trend of the dirty page collection rate in the current round is a reduction trend.
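The trend determination above compares the number of dirty pages collected in the current round against the previous round. A minimal sketch follows; the function name is hypothetical, and the "unchanged" branch for equal counts is my addition, since the text only specifies the strictly greater and strictly smaller cases.

```python
def first_trend(count_cur, count_prev):
    """Determine the current round's collection-rate change trend from
    the dirty page counts of the current and previous rounds."""
    if count_cur > count_prev:
        return "increasing"
    if count_cur < count_prev:
        return "decreasing"
    return "unchanged"  # equal counts: case left open by the text above
```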
Optionally, the adjusting the dirty page collection rate of the dirty page collection thread according to the first variation trend and the second variation trend includes:
if the first change trend is consistent with the second change trend, increasing a preset numerical value by a count value corresponding to preset stable count information; the preset stable counting information is used for measuring the stability degree of the variation trend; the variation trend comprises an increasing trend and a decreasing trend;
if the increased count value corresponding to the preset stable counting information meets a preset stable counting threshold condition and the first change trend is an increase trend, increasing the dirty page collection rate of the dirty page collection thread for performing dirty page collection in the next round;
and if the increased count value corresponding to the preset stable counting information meets a preset stable counting threshold condition and the first change trend is a decrease trend, reducing the dirty page collecting rate of the dirty page collecting thread for collecting dirty pages in the next round.
Alternatively,
if the increased count value corresponding to the preset stable counting information meets the preset stable counting threshold condition and the first change trend is an increase trend, increasing the dirty page collecting rate of the dirty page collecting thread for collecting dirty pages in the next round, including:
if the increased count value corresponding to the preset stable counting information is equal to a preset stable counting threshold value and the first change trend is an increase trend, increasing the dirty page collection rate of the dirty page collection thread for performing dirty page collection in the next round;
if the increased count value corresponding to the preset stable counting information meets a preset stable counting threshold condition and the first change trend is a decrease trend, reducing the dirty page collecting rate of the dirty page collecting thread for collecting dirty pages in the next round, including:
and if the increased count value corresponding to the preset stable counting information is equal to the preset stable counting threshold value and the first change trend is a decrease trend, reducing the dirty page collecting rate of the dirty page collecting thread for collecting dirty pages in the next round.
Optionally, the method further comprises:
and if the first change trend is inconsistent with the second change trend, setting a count value corresponding to the preset stable counting information as a preset initial numerical value.
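The stable-count logic of the preceding paragraphs can be sketched as a small state machine: consistent trends across two rounds increment a counter, reaching the threshold triggers a rate adjustment in the trend's direction, and an inconsistent trend resets the counter to its initial value. The class name, the threshold value, and the increment of 1 are illustrative assumptions; the patent only says "preset" values.

```python
class TrendStabilizer:
    """Sketch of the 'preset stable counting information' logic above."""

    def __init__(self, threshold=3):
        self.threshold = threshold  # preset stable counting threshold
        self.stable_count = 0       # preset initial numerical value

    def update(self, trend_cur, trend_prev):
        """Process one round; return 'increase', 'decrease', or None."""
        if trend_cur != trend_prev:
            self.stable_count = 0   # reset on inconsistent trends
            return None
        self.stable_count += 1      # increase by the preset value (here 1)
        if self.stable_count == self.threshold:
            return "increase" if trend_cur == "increasing" else "decrease"
        return None
```

Requiring the trend to persist for several rounds before acting filters out single-round noise in the dirty page counts.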
Optionally, the increasing the dirty page collecting rate of the dirty page collecting thread for collecting dirty pages in the next round includes:
reducing the sleep time of the dirty page collection thread according to a preset first proportion, so as to increase the dirty page collection rate of the dirty page collection thread for collecting dirty pages in the next round.
Optionally, the reducing the dirty page collecting rate of the dirty page collecting thread for collecting dirty pages in the next round includes:
and increasing the sleep time of the dirty page collection thread according to a preset second proportion so as to reduce the dirty page collection rate of the dirty page collection thread for collecting dirty pages in the next round.
Optionally, the method further comprises:
if the reduced sleep time of the dirty page collection thread is not in a preset sleep time interval, maintaining the sleep time of the dirty page collection thread unchanged;
and if the increased sleep time of the dirty page collection thread is not in the preset sleep time interval, maintaining the sleep time of the dirty page collection thread unchanged.
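The sleep-time adjustment and the interval guard described above can be sketched together. The ratio values and the interval bounds below are illustrative assumptions (the patent only calls them "preset"); the rule that an out-of-interval result leaves the sleep time unchanged is taken directly from the two paragraphs above.

```python
def adjust_sleep_time(sleep_ms, direction,
                      first_ratio=0.5, second_ratio=2.0,
                      interval=(1.0, 100.0)):
    """Adjust the dirty page collection thread's sleep time.

    direction='increase_rate' shrinks the sleep time (collect faster);
    anything else grows it (collect slower). If the candidate value
    falls outside the preset interval, keep the sleep time unchanged.
    """
    if direction == "increase_rate":
        candidate = sleep_ms * first_ratio   # sleep less -> faster collection
    else:
        candidate = sleep_ms * second_ratio  # sleep more -> slower collection
    lo, hi = interval
    if not (lo <= candidate <= hi):
        return sleep_ms  # out of the preset sleep time interval: no change
    return candidate
```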
Optionally, the counting the collected quantity information of the dirty pages from the dirty page collection thread of the virtual machine includes:
traversing a virtual processor of the virtual machine, and determining the preset memory area corresponding to the virtual processor;
storing the collected dirty page information into the preset memory area through the kernel of the virtual machine; the preset memory area is a shared memory area;
and collecting dirty page information from the preset memory area through the dirty page collecting thread, and counting the quantity information of the collected dirty pages.
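The traversal-and-count step above can be sketched as draining each vcpu's shared ring and summing the entries collected. The data shapes are hypothetical (a dict from vcpu id to a list of entries stands in for the real per-vcpu shared memory rings), and clearing the list models the space recovery that the collection triggers.

```python
def collect_and_count(vcpu_rings):
    """Traverse each vcpu's preset memory area (ring), collect its
    dirty page entries, and return the total collected this round.

    vcpu_rings: hypothetical dict mapping vcpu id -> list of entries.
    """
    collected = 0
    for vcpu_id, ring in vcpu_rings.items():
        entries = list(ring)  # read dirty page info from the shared area
        ring.clear()          # recovering entries frees ring space
        collected += len(entries)
    return collected
```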
Optionally, the virtual machine comprises a virtualization component, the virtualization component comprising the dirty page collection thread; the virtualization component is a QEMU component.
The embodiment of the invention also discloses a virtual machine migration device, which comprises:
the counting module is used for counting the virtual machine exit events to obtain corresponding counting information if the virtual machine generates the virtual machine exit events based on the preset memory area full events in the virtual machine live migration process;
the statistical module is used for counting the change trend of the dirty page collecting rate from the dirty page collecting thread of the virtual machine;
and the adjusting module is used for adjusting the dirty page collecting rate of the dirty page collecting thread according to the counting information and/or the change trend of the dirty page collecting rate so as to adjust the recovery rate of the preset memory area in the virtual machine by adjusting the dirty page collecting rate.
Optionally, the count information includes first count information corresponding to a current round of dirty page collection and second count information corresponding to a previous round of dirty page collection, and the adjusting module includes:
and the adding submodule is used for increasing the dirty page collecting rate of the dirty page collecting thread for collecting dirty pages in the next round if the count value corresponding to the first counting information is greater than the count value corresponding to the second counting information.
Optionally, the trend of the dirty page collecting rate includes a first trend of the dirty page collecting rate in the current round and a second trend of the dirty page collecting rate in the previous round, and the adjusting module includes:
and the adjusting submodule is used for adjusting the dirty page collecting speed of the dirty page collecting thread according to the first change trend and the second change trend if the count value corresponding to the first count information is not greater than the count value corresponding to the second count information.
Optionally, the statistical module includes:
the statistic submodule is used for counting the quantity information of the collected dirty pages from the dirty page collection thread of the virtual machine; the quantity information comprises first quantity information collected in the current round and second quantity information collected in the previous round;
the first determining submodule is used for determining that the first change trend of the dirty page collecting rate in the current round is an increasing trend if the quantity value corresponding to the first quantity information is larger than the quantity value corresponding to the second quantity information;
and the second determining submodule is used for determining that the first change trend of the dirty page collection rate in the current round is a reduction trend if the quantity value corresponding to the first quantity information is smaller than the quantity value corresponding to the second quantity information.
Optionally, the adjusting sub-module includes:
the first increasing unit is used for increasing a count value corresponding to preset stable count information by a preset numerical value if the first change trend is consistent with the second change trend; the preset stable counting information is used for measuring the stability degree of the variation trend; the variation trend comprises an increasing trend and a decreasing trend;
a second increasing unit, configured to increase a dirty page collecting rate at which a dirty page collection thread performs a dirty page collection in a next round if the increased count value corresponding to the preset stable counting information meets a preset stable counting threshold condition and the first change trend is an increasing trend;
and the reducing unit is used for reducing the dirty page collecting rate of the dirty page collecting thread for collecting dirty pages in the next round if the increased count value corresponding to the preset stable counting information meets the preset stable counting threshold condition and the first change trend is a reduction trend.
Alternatively,
the second adding unit includes:
a first increasing subunit, configured to increase, if the increased count value corresponding to the preset stable counting information is equal to a preset stable counting threshold and the first change trend is an increasing trend, the dirty page collecting rate at which the dirty page collecting thread performs dirty page collection in a next round;
the lowering unit includes:
and the reducing subunit is configured to reduce the dirty page collecting rate at which the dirty page collecting thread performs dirty page collection in a next round if the increased count value corresponding to the preset stable counting information is equal to the preset stable counting threshold and the first change trend is a reduction trend.
Optionally, the apparatus further comprises:
and the setting module is used for setting the counting value corresponding to the preset stable counting information as a preset initial numerical value if the first change trend is inconsistent with the second change trend.
Optionally, the second adding unit includes:
a reducing subunit, configured to reduce sleep time of the dirty page collecting thread according to a preset first ratio, so as to increase a dirty page collecting rate at which a next round of dirty page collection by the dirty page collecting thread is performed by reducing the sleep time of the dirty page collecting thread.
Optionally, the reducing unit includes:
and the second increasing subunit is configured to increase the sleep time of the dirty page collecting thread according to a preset second proportion, so as to reduce the dirty page collecting rate of a next dirty page collecting of the dirty page collecting thread by increasing the sleep time of the dirty page collecting thread.
Optionally, the apparatus further comprises:
the first maintaining module is used for maintaining the sleep time of the dirty page collecting thread unchanged if the reduced sleep time of the dirty page collecting thread is not in a preset sleep time interval;
and the second maintaining module is used for maintaining the sleep time of the dirty page collecting thread unchanged if the increased sleep time of the dirty page collecting thread is not in the preset sleep time interval.
Optionally, the statistics submodule includes:
the traversal and determination unit is used for traversing a virtual processor of the virtual machine and determining the preset memory area corresponding to the virtual processor;
the storage unit is used for storing the collected dirty page information into the preset memory area through the kernel of the virtual machine; the preset memory area is a shared memory area;
and the collecting and counting unit is used for collecting dirty page information from the preset memory area through the dirty page collecting thread and counting the quantity information of the collected dirty pages.
Optionally, the virtual machine comprises a virtualization component, the virtualization component comprising the dirty page collection thread; the virtualization component is a QEMU component.
The embodiment of the invention also discloses an electronic device, which comprises: a processor, a memory and a computer program stored on the memory and capable of running on the processor, the computer program when executed by the processor implementing a virtual machine migration method as described above.
The embodiment of the invention also discloses a computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, and when being executed by a processor, the computer program realizes the virtual machine migration method.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, in the live migration process of the virtual machine, if the virtual machine generates a virtual machine exit event because the preset memory region is full, the virtual machine exit events can be counted, the change trend of the dirty page collection rate can be counted from the dirty page collection thread of the virtual machine, and the dirty page collection rate of the dirty page collection thread can be adjusted based on the counting information and/or the change trend of the dirty page collection rate. This reduces how often the virtual machine exits due to exit events caused by the preset memory region being full, thereby optimizing the customer service performance of the virtual machine during live migration, effectively reducing the influence on the customer service, and improving customer service availability during migration. The method dynamically adjusts the workload of the dirty page collection thread according to the business pressure inside the virtual machine, where that pressure is characterized by counting virtual machine exit events and by the change trend of the dirty page collection rate. Under high pressure the method dynamically accelerates dirty page collection, thereby reducing the occurrence of virtual machine exit events and improving the service availability of customers.
Drawings
Fig. 1 is a flowchart illustrating steps of a virtual machine migration method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of another virtual machine migration method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a virtual machine migration method according to an embodiment of the present invention;
fig. 4 is a block diagram of a virtual machine migration apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present invention.
At present, there are two modes for marking dirty pages during online migration of a virtual machine: a bitmap mode and a dirty ring mode.
In the bitmap mode, dirty page information is recorded in a bitmap in the kernel. When a user queries dirty pages from the kernel through an ioctl (input/output control) call, the kernel copies the bitmap information from kernel space to user space, and QEMU (the virtualization emulator) determines, after acquiring the dirty page information, which memory pages need to be sent in the next round.
dirty ring mode: the method comprises the steps that a shared memory area called ring is mapped through an IOMMU (input/output management unit), a kernel inserts information such as offset of a dirty page into the ring after the dirty page is collected, and a virtualization simulator obtains dirty page information from the shared ring without copying bitmap data from a kernel state to a user state. And each vcpu corresponds to one ring, so that dirty pages on different cpus can be acquired respectively, which cannot be achieved by the bitmap method.
The existing dirty page collection thread collects dirty pages at a fixed calling frequency and pays no attention to the business pressure inside the virtual machine. If the internal pressure of the virtual machine is high, the dirty page collection thread cannot collect dirty pages in time and therefore cannot trigger recovery of the shared dirty page ring space, so the virtual machine exits frequently and the client's business performance is seriously affected.
Because the workload of the dirty page collection thread is not dynamically adjusted to different business pressures, when the business pressure in the virtual machine is high the user-mode dirty page collection thread collects dirty pages too slowly and the ring space in the shared memory region is not recovered in time; the kernel then finds no available space when marking dirty pages and triggers a virtual machine exit, and frequent exits cause the performance of services inside the virtual machine to degrade remarkably, to the point where the virtual machine is almost unusable. Accordingly, the present invention is intended to provide a virtual machine migration method and a corresponding virtual machine migration apparatus, an electronic device, and a computer-readable storage medium that overcome the above-mentioned problems or at least partially solve them.
One of the core concepts of the embodiments of the present invention is that, in a live migration process of a virtual machine, if the virtual machine generates a virtual machine exit event based on a preset memory region full event, the virtual machine exit event may be counted, a dirty page collection rate variation trend may be counted from a dirty page collection thread of the virtual machine, and a dirty page collection rate of the dirty page collection thread may be adjusted based on count information and/or the dirty page collection rate variation trend, so as to reduce an exit frequency of the virtual machine due to the preset memory region full event, thereby optimizing a virtual machine customer service performance in the live migration process, effectively reducing an influence on a customer service, and improving availability of the customer service in the migration process. By adopting the method, the method for dynamically adjusting the workload of the dirty page collection thread according to the internal service pressure condition of the virtual machine is provided, wherein the internal service pressure condition of the virtual machine is specifically represented by counting the exit events of the virtual machine and counting the change trend of the dirty page collection rate. The method can dynamically accelerate dirty page collection under high pressure, thereby reducing the occurrence of virtual machine exit events and improving the service availability of customers.
Referring to fig. 1, a flowchart illustrating steps of a virtual machine migration method according to an embodiment of the present invention is shown, which may specifically include the following steps:
step 101, in the process of live migration of a virtual machine, if the virtual machine generates a virtual machine exit event based on a preset memory area full event, counting the virtual machine exit event to obtain corresponding counting information.
The preset memory area may be a shared memory area called a ring, and the preset memory area full event may then be a ring full event. The number of entries the ring can hold is limited (4096 by default), and if the kernel finds the ring full when inserting dirty page information, an exception is triggered that causes the virtual machine to exit; that is, a virtual machine exit event is generated based on the preset memory area full event.
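The ring-full behavior described above can be sketched as a kernel-side insert that fails, and is counted as an exit event, when the ring has no free entry. The function name, the counter dict, and the `ring_size` parameter (defaulting to the 4096 entries mentioned above) are assumptions introduced for the sketch.

```python
def mark_dirty(ring, page_offset, exit_counter, ring_size=4096):
    """Kernel-side sketch: insert dirty page info into the ring; if the
    ring is full, record a ring-full virtual machine exit event instead.

    Returns True if the entry was inserted, False on a ring-full exit.
    """
    if len(ring) >= ring_size:
        exit_counter["ring_full_exits"] += 1  # counted per step 101
        return False
    ring.append(page_offset)
    return True
```

In this model, the exit counter is exactly the counting information that step 101 compares across rounds.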
In the embodiment of the invention, the virtual machine exit events can be counted to obtain corresponding counting information.
And 102, counting the change trend of the dirty page collection rate from the dirty page collection thread of the virtual machine.
A virtualization component is included in the virtual machine, and the virtualization component includes a dirty page collection thread. The dirty page collection thread may be a thread dedicated to collecting memory dirty pages; the migration thread queries the pages it collects in order to decide which memory to migrate.
In practical application, if an exit is caused by the ring being full, one round of memory dirty page collection is triggered and the memory dirty page information is synchronized to the user-mode dirty page data structure, so that the kernel can insert new memory dirty page data into the ring.
The change trend of the dirty page collection rate may be an increasing trend or a decreasing trend, reflecting either that dirty pages are generated faster and faster (more dirty pages in the same time period) or that they are generated slower and slower (fewer dirty pages in the same time period).
In the embodiment of the invention, the change trend of the dirty page collection rate can be counted from the dirty page collection thread of the virtual machine.
And 103, adjusting the dirty page collection rate of the dirty page collection thread according to the counting information and/or the change trend of the dirty page collection rate, so as to adjust the recovery rate of a preset memory area in the virtual machine by adjusting the dirty page collection rate.
The dirty page collection rate of the dirty page collection thread is adjusted according to the counting information for the virtual machine exit events and the counted change trend of the dirty page collection rate. Dynamically adjusting the dirty page collection rate through these two factors influences the recovery rate of the preset memory area and reduces frequent virtual machine exits caused by the dirty page space running out because dirty pages were not collected in time; this optimizes the client's business performance inside the virtual machine during live migration, reduces interference with the client's business, and improves customer service availability. The preset memory area is a ring memory space (ring buffer) for storing dirty page information, and the memory recovery rate, that is, the recovery rate of the ring memory space, can be influenced by dynamically adjusting the dirty page collection rate.
In summary, in the embodiment of the present invention, during live migration of a virtual machine, if the virtual machine generates a virtual machine exit event because the preset memory area is full, the exit event may be counted, and a change trend of the dirty page collection rate may be gathered from the dirty page collection thread of the virtual machine. The dirty page collection rate of the thread is then adjusted based on the count information and/or the change trend, which reduces the frequency of virtual machine exits caused by the preset memory area becoming full. This optimizes customer service performance inside the virtual machine during live migration, effectively reduces the impact on customer services, and improves their availability during migration. The method dynamically adjusts the workload of the dirty page collection thread according to the service pressure inside the virtual machine, where that pressure is represented by the count of virtual machine exit events and by the change trend of the dirty page collection rate. Under high pressure the method dynamically accelerates dirty page collection, thereby reducing virtual machine exit events and improving service availability.
Referring to fig. 2, a flowchart illustrating steps of another virtual machine migration method provided in the embodiment of the present invention is shown, which may specifically include the following steps:
Step 201, in the live migration process of the virtual machine, if the virtual machine generates a virtual machine exit event because the preset memory area is full, the virtual machine exit event is counted to obtain corresponding counting information.
The dirty page tracking mechanism of the kernel of a current KVM (Kernel-based Virtual Machine) is a ring data structure in shared memory mapped into user space. The kernel KVM puts memory dirty page information directly into the ring; the user-mode QEMU periodically takes dirty pages out of the ring and marks each taken-out ring entry as reset, telling the kernel that the entry's space can be recycled for inserting new dirty pages. The user-mode QEMU has a dedicated thread for collecting memory dirty pages, which makes them available for the migration thread to query and use to decide whether memory is migrated. That is, the virtual machine includes a virtualization component, a QEMU component, which contains the dirty page collection thread.
If the virtual machine exits because the ring is full, a memory dirty page collection is triggered, and the memory dirty page information is synchronized into the user-mode dirty page data structure so that the kernel can insert new memory dirty page data into the ring. In this process, the kernel KVM plays the role of producer and the user-mode QEMU the role of consumer, connected by the shared preset memory area (the ring), and a scheme is needed to balance the pace (rate) of producer and consumer.
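The producer-consumer relationship described above can be sketched as follows. This is an illustrative Python model only, not part of the claimed implementation; the class, method names, and ring size are hypothetical stand-ins for the kernel/QEMU roles:

```python
from collections import deque

RING_SIZE = 4  # illustrative; real KVM dirty rings hold far more entries


class DirtyRing:
    """Toy model of the shared dirty ring: kernel produces, QEMU consumes."""

    def __init__(self, size=RING_SIZE):
        self.size = size
        self.entries = deque()
        self.exit_events = 0  # counts "ring full" virtual machine exits

    def kernel_insert(self, gfn):
        """Producer side: insert a dirty page entry, or record an exit if full."""
        if len(self.entries) >= self.size:
            self.exit_events += 1  # VM exit: ring full, must wait for collection
            return False
        self.entries.append(gfn)
        return True

    def qemu_collect(self):
        """Consumer side: drain entries and mark slots reusable by clearing them."""
        collected = list(self.entries)
        self.entries.clear()
        return collected
```

If the consumer (`qemu_collect`) runs too rarely relative to the producer, `exit_events` grows, which is exactly the imbalance the counting in step 201 detects.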
If the customer service pressure inside the virtual machine is high, memory dirty pages are generated too quickly, causing the kernel to frequently mark dirty pages and insert them into the dirty ring. If the user-mode QEMU only collects the dirty pages and frees ring space at fixed intervals (for example, every 1 second), the kernel may be unable to insert new dirty pages, causing the virtual machine to exit and the customer service to pause. The greater the pressure, the more frequently the virtual machine exits, and client traffic may be affected to the point of appearing nearly unavailable.
In the embodiment of the invention, during live migration of the virtual machine, if a virtual machine exit event is generated because the preset memory area is full, the virtual machine exit event can be counted to obtain corresponding counting information.
Step 202, counting a dirty page collection rate change trend from a dirty page collection thread of the virtual machine.
In the embodiment of the invention, the change trend of the dirty page collection rate can be counted from the dirty page collection thread of the virtual machine. The dirty page collection rate trend may refer to an increasing trend or a decreasing trend of the dirty page collection rate.
The dirty page collection rate trend comprises a first trend of the dirty page collection rate of the current round and a second trend of the dirty page collection rate of the previous round. The variation trend of the dirty page collection rate in the current round can be referred to as a first variation trend, and the variation trend of the dirty page collection rate in the previous round can be referred to as a second variation trend.
In an optional embodiment of the present invention, in step 202, counting a change trend of a dirty page collecting rate from a dirty page collecting thread of a virtual machine, specifically, the following sub-steps may be included:
in the substep S11, information on the number of dirty pages collected is counted from the dirty page collection thread of the virtual machine.
And a substep S12, if the quantity value corresponding to the first quantity information is greater than the quantity value corresponding to the second quantity information, determining that the first change trend of the dirty page collection rate in the round is an increasing trend.
And a substep S13, if the quantity value corresponding to the first quantity information is smaller than the quantity value corresponding to the second quantity information, determining that the first change trend of the dirty page collection rate in the current round is a reduction trend.
The quantity information comprises first quantity information collected in the current round and second quantity information collected in the previous round. The variation tendency includes an increasing tendency and a decreasing tendency.
The quantity information comprises first quantity information collected in the current round and second quantity information collected in the previous round. The number information of the dirty pages collected in the current round may be referred to as first number information, and the number information of the dirty pages collected in the previous round may be referred to as second number information.
If the quantity value corresponding to the first quantity information is larger than the quantity value corresponding to the second quantity information, determining that the first change trend of the dirty page collection rate in the current round is an increasing trend; and if the quantity value corresponding to the first quantity information is smaller than the quantity value corresponding to the second quantity information, determining that the first change trend of the dirty page collection rate in the current round is a reduction trend.
The number of dirty pages collected in this round may be compared with the number collected in the previous round to determine whether it is increasing or decreasing. If it increased, the change trend of the dirty page collection rate in the current round is an increasing trend; if it decreased, the trend is a decreasing trend. The trend in the current round may be referred to as the first change trend, and the trend in the previous round as the second change trend.
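The comparison in sub-steps S12 and S13 can be sketched as a small helper. This is an illustrative Python sketch; the function name is hypothetical, and the behavior when the two counts are equal is an assumption (the text does not specify it), here taken as keeping the previous trend:

```python
INCREASING, DECREASING = "increasing", "decreasing"


def collection_trend(current_count, previous_count, previous_trend):
    """Determine this round's (first) change trend of the dirty page collection
    rate by comparing this round's collected-page count with the previous round's."""
    if current_count > previous_count:
        return INCREASING
    if current_count < previous_count:
        return DECREASING
    return previous_trend  # equal counts: assume the trend is unchanged
```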
In an optional embodiment of the present invention, the step S11 of counting the quantity information of the collected dirty pages from the dirty page collection thread of the virtual machine may specifically include the following steps:
traversing a virtual processor of the virtual machine, and determining a preset memory area corresponding to the virtual processor; storing the collected dirty page information into a preset memory area through a kernel of the virtual machine; and collecting dirty page information from a preset memory area through a dirty page collecting thread, and counting the quantity information of the collected dirty pages.
The preset memory area is a shared memory area.
In practical application, all virtual processors (VCPUs) on the virtual machine can be traversed and their corresponding preset memory regions determined; the collected dirty page information is stored into the preset memory region by the kernel of the virtual machine. Then, dirty page items not yet collected by the dirty page collection thread can be found in the shared preset memory region (the ring), the dirty page information is collected from the preset memory region by the dirty page collection thread, and the number of collected dirty pages is counted. In addition, offset information of each dirty page may be extracted from the dirty page entries in the preset memory area, and the memory dirty page flag bit corresponding to the offset stored into the user-mode dirty_bitmap.
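The per-VCPU traversal and counting above can be sketched as follows. This is a minimal illustrative Python sketch, not the QEMU implementation: rings are modeled as lists of entry dictionaries with a hypothetical `"offset"` key, and the user-mode dirty_bitmap is modeled as a set of offsets:

```python
def collect_dirty_pages(vcpu_rings, dirty_bitmap):
    """Walk every vCPU's ring, record each dirty page offset in the user-mode
    dirty_bitmap, and return how many dirty pages were collected this round."""
    collected = 0
    for ring in vcpu_rings:
        for entry in ring:
            dirty_bitmap.add(entry["offset"])  # set the flag bit for this offset
            collected += 1
        ring.clear()  # stand-in for marking entries reset so the kernel can reuse them
    return collected
```

The returned count is the "quantity information" of the round, which later rounds compare to derive the change trend.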
The counting information comprises first counting information corresponding to the current round of dirty page collection and second counting information corresponding to the previous round of dirty page collection. The count of virtual machine exit events observed during the current round of dirty page collection may be referred to as first count information, and the count observed during the previous round as second count information.
Step 203, if the count value corresponding to the first count information is greater than the count value corresponding to the second count information, the dirty page collection rate at which the dirty page collection thread performs dirty page collection in the next round is increased.
In the embodiment of the present invention, if the count value corresponding to the first count information is greater than the count value corresponding to the second count information, the dirty page collection rate of the dirty page collection thread for performing dirty page collection in the next round may be increased.
Step 204, if the count value corresponding to the first count information is not greater than the count value corresponding to the second count information, the dirty page collection rate of the dirty page collection thread is adjusted according to the first change trend and the second change trend.
In this embodiment of the present invention, if the count value corresponding to the first count information is not greater than the count value corresponding to the second count information, the dirty page collection rate of the dirty page collection thread may be adjusted according to the first variation trend and the second variation trend.
In an optional embodiment of the present invention, in step 204, the dirty page collecting rate of the dirty page collecting thread is adjusted according to the first variation trend and the second variation trend, and specifically includes the following sub-steps:
in the substep S21, if the first variation trend is consistent with the second variation trend, a preset value is added to the count value corresponding to the preset stable count information.
Wherein the preset stable counting information is used to measure how stable the change trend is. The stable count works as follows: if the change trends obtained in two consecutive rounds of statistics are consistent, the count is increased, and once it reaches a certain value the trend is considered stable; if the trends are inconsistent, the trend is considered unstable, and the count value corresponding to the preset stable counting information can be cleared to indicate instability. For example, if the last round showed an increasing trend and this round also shows an increasing trend, the count is increased; if the last round was increasing and this round is decreasing, the count is cleared; if the last round was decreasing and this round is also decreasing, the count is likewise increased.
In the embodiment of the present invention, if the first variation trend is consistent with the second variation trend, a preset value is added to the count value corresponding to the preset stable count information.
And a substep S22, if the count value corresponding to the increased preset stable counting information meets the preset stable counting threshold condition and the first change trend is an increasing trend, increasing a dirty page collecting rate of the dirty page collecting thread for performing dirty page collection in the next round.
And a substep S23, if the increased count value corresponding to the preset stable counting information meets the preset stable counting threshold condition and the first change trend is a decrease trend, reducing a dirty page collecting rate of the dirty page collecting thread for performing dirty page collection in a next round.
If the count value corresponding to the increased preset stable counting information meets the preset stable counting threshold condition and the first change trend is an increase trend, increasing the dirty page collecting speed of the dirty page collecting thread for collecting dirty pages in the next round; and if the count value corresponding to the increased preset stable counting information meets the preset stable counting threshold condition and the first change trend is a reduction trend, reducing the dirty page collecting speed of the dirty page collecting thread for collecting the dirty pages in the next round.
In an optional embodiment of the present invention, if the count value corresponding to the increased preset stable counting information in the substep S22 meets the preset stable counting threshold condition and the first change trend is an increase trend, the dirty page collection rate of the dirty page collection thread for performing the dirty page collection in the next round may be increased, specifically including the substeps of:
and if the count value corresponding to the increased preset stable counting information is equal to the preset stable counting threshold value and the first change trend is an increase trend, increasing the dirty page collecting rate of the dirty page collecting thread for collecting dirty pages in the next round.
In an optional embodiment of the present invention, if the count value corresponding to the increased preset stable counting information in the sub-step S23 meets the preset stable counting threshold condition and the first change trend is a decrease trend, the dirty page collecting rate of the dirty page collecting thread for performing the dirty page collection in the next round may be reduced, specifically including the following sub-steps:
and if the count value corresponding to the increased preset stable counting information is equal to the preset stable counting threshold value and the first change trend is a decrease trend, reducing the dirty page collection rate of the dirty page collection thread for performing dirty page collection in the next round.
In an optional embodiment of the present invention, the following steps may be specifically performed:
and if the first change trend is inconsistent with the second change trend, setting a count value corresponding to the preset stable count information as a preset initial numerical value.
In practical application, whether the number of dirty pages in the current round increased or decreased relative to the previous round is counted, and it is determined whether the change trend itself changed. If the trend changed, the stable count is cleared; if it did not change (two consecutive increasing rounds or two consecutive decreasing rounds), the stable count is increased. The stable count stops increasing once it reaches 3 (the preset stable count threshold). If the stable count is 3 and the current round's trend is an increasing trend, the dirty page collection rate of the next round can be increased, specifically by setting the real adjustment direction flag of the next round to speed_up (increase the dirty page collection rate); if the stable count is 3 and the current round's trend is a decreasing trend, the dirty page collection rate of the next round can be reduced, specifically by setting the real adjustment direction flag of the next round to speed_down (reduce the dirty page collection rate).
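The jitter-filtering behavior above can be sketched as a small state machine. This is an illustrative Python sketch under the stated rules (clear on trend change, cap the count at the threshold, emit a direction flag once stable); the class and method names are hypothetical:

```python
STABLE_THRESHOLD = 3  # the preset stable count threshold from the text


class JitterFilter:
    """Filters transient dirty page fluctuations before adjusting the rate."""

    def __init__(self):
        self.stable_count = 0
        self.prev_trend = None

    def update(self, trend):
        """Feed this round's trend ('increasing'/'decreasing'); return
        'speed_up' or 'speed_down' once the trend is stable, else None."""
        if trend == self.prev_trend:
            if self.stable_count < STABLE_THRESHOLD:
                self.stable_count += 1  # consistent trend: count toward stability
        else:
            self.stable_count = 0  # trend changed: not stable, clear the count
        self.prev_trend = trend
        if self.stable_count == STABLE_THRESHOLD:
            return "speed_up" if trend == "increasing" else "speed_down"
        return None
```

A short burst of dirty pages that reverses on the next round resets the count, so no adjustment flag is emitted, which is the oscillation the filter jitter flow is meant to suppress.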
In an optional embodiment of the present invention, the increasing of the dirty page collecting rate of the dirty page collecting thread in the next round of dirty page collection in the substep S22 may specifically include the following substeps:
and reducing the sleep time of the dirty page collection thread according to a preset first proportion so as to increase the dirty page collection rate of the dirty page collection thread for collecting the dirty pages in the next round.
In an optional embodiment of the present invention, in the substep S23, reducing a dirty page collection rate of a dirty page collection thread for performing a next dirty page collection cycle may specifically include the substeps of:
and increasing the sleep time of the dirty page collection thread according to a preset second proportion so as to reduce the dirty page collection rate of the dirty page collection thread for collecting the dirty pages in the next round.
In an optional embodiment of the present invention, the following steps may be specifically performed:
and if the sleep time of the reduced dirty page collecting thread is not in the preset sleep time interval, maintaining the sleep time of the dirty page collecting thread unchanged.
And if the sleep time of the increased dirty page collecting thread is not in the preset sleep time interval, maintaining the sleep time of the dirty page collecting thread unchanged.
In practical application, assume the sleep time T of the dirty page collection thread is initially 1000 milliseconds. If the real adjustment direction flag is speed_up, the sleep time is adjusted to 50% of its original value; if the flag is speed_down, the sleep time is adjusted to 200% of its original value. The adjustable range of the sleep time (the preset sleep time interval) is 50 to 1000 milliseconds; beyond this interval the original value is kept without adjustment.
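The clamped adjustment above can be sketched as follows. This is an illustrative Python sketch using the example values from the text (50%/200% proportions, a 50-1000 ms interval); the function name is hypothetical:

```python
MIN_SLEEP_MS, MAX_SLEEP_MS = 50, 1000  # the preset sleep time interval


def adjust_sleep(sleep_ms, direction):
    """Apply speed_up (halve the sleep) or speed_down (double it); keep the
    old value if the adjusted sleep time would leave the allowed interval."""
    if direction == "speed_up":
        candidate = sleep_ms * 0.5  # the preset first proportion: 50%
    elif direction == "speed_down":
        candidate = sleep_ms * 2.0  # the preset second proportion: 200%
    else:
        return sleep_ms  # no stable adjustment direction this round
    if MIN_SLEEP_MS <= candidate <= MAX_SLEEP_MS:
        return candidate
    return sleep_ms  # out of the preset interval: keep the original value
```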
Dynamically adjusting the sleep time of the dirty page collection thread adjusts its workload: the shorter the sleep time, the faster dirty pages are collected, so the thread can adapt to faster dirty page generation and reduce the frequency of virtual machine exits caused by the ring being full.
In general, memory dirty pages are not generated frequently, and if the sleep time is made too short, the dirty page collection thread idles unnecessarily and wastes processor resources. Therefore, the working frequency of the dirty page collection thread needs to be dynamically adjusted according to the real load of the virtual machine's customer services to reach a balanced state, ensuring that customer service performance does not drop suddenly because the ring is full, while avoiding frequent idling of the dirty page collection thread that would waste system resources under low pressure.
In order to enable those skilled in the art to better understand steps 201 to 204 of the embodiment of the present invention, the following description is provided by way of an example:
referring to fig. 3, a flowchart of a virtual machine migration method according to an embodiment of the present invention is shown, where the specific flow includes:
1. The dirty page collection thread of the virtual machine applies for the dirty ring lock.
2. All VCPUs on the virtual machine are traversed, all dirty page items not yet collected are found in the shared memory area ring, the offset of each dirty page is taken from its item, and the memory dirty page flag bit corresponding to the offset is stored into the dirty_bitmap of the user-mode space.
3. The number of dirty pages collected in this round is counted.
4. The dirty ring lock is released.
5. The kernel KVM is notified to reclaim the dirty page space in the ring.
6. The count information of virtual machine exit events currently caused by the ring being full is queried.
7. If the count increased relative to the last stored value, the real adjustment direction flag of the next round of the dirty page collection thread is set to accelerate dirty page collection (speed_up), and the dynamic adjustment flow of the dirty page collection frequency is entered directly.
8. If the count did not increase relative to the last stored value, the filter jitter flow is executed: the number of dirty pages collected in this round is counted in the dirty ring collection thread of QEMU, it is determined whether this number increased or decreased relative to the previous round to give the change trend of the dirty page collection rate for this round, and then whether that trend changed relative to the previous round; if the trend changed, the stable count is cleared; if it did not change, the stable count is increased, stopping once it reaches the preset stable count threshold; if the stable count equals the preset stable count threshold and the current round's trend is increasing, the real adjustment direction of the next round is set to speed_up; if the stable count equals the threshold and the trend is decreasing, the real adjustment direction of the next round is set to speed_down.
9. The dynamic adjustment flow of the dirty page collection frequency is executed: the sleep time T is initially 1000 milliseconds; if the real adjustment direction is speed_up, the sleep time is adjusted to 50% of its original value; if it is speed_down, to 200% of its original value. The preset sleep time interval is 50 to 1000 milliseconds, and beyond it the original value is kept without adjustment.
10. The thread sleeps for the sleep time T calculated by the dynamic adjustment flow.
11. The next round of dirty page collection continues with the same steps 1 to 10.
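The decision logic of steps 6 to 8 can be sketched in one function. This is an illustrative, self-contained Python sketch; the function name, parameter names, and the tie-breaking choice (equal dirty page counts treated as decreasing) are assumptions for illustration only:

```python
def next_round_direction(exit_count, prev_exit_count,
                         pages_now, pages_prev, prev_trend, stable_count,
                         threshold=3):
    """Decide the adjustment direction for the next collection round:
    exit-count growth forces speed_up (step 7); otherwise the filter
    jitter flow decides (step 8). Returns (direction, trend, stable_count)."""
    if exit_count > prev_exit_count:
        return "speed_up", prev_trend, stable_count  # step 7: exits increased
    # step 8: derive this round's trend and update the stable count
    trend = "increasing" if pages_now > pages_prev else "decreasing"
    stable_count = min(stable_count + 1, threshold) if trend == prev_trend else 0
    if stable_count == threshold:
        return ("speed_up" if trend == "increasing" else "speed_down"), trend, stable_count
    return None, trend, stable_count  # not yet stable: no adjustment this round
```

The returned direction would then feed the dynamic adjustment flow of step 9 that halves or doubles the sleep time T.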
Virtual machine migration was performed using the virtual machine migration method above; the test data are shown in Table 1. The test tool is the Guestperf tool shipped with QEMU. The memory pressure indicates how many GB of memory dirty pages are generated per second, and the performance index "time to update 1 GB of memory" represents the efficiency with which the client accesses memory, where a shorter time means higher performance. Since memory performance greatly influences client services, it represents the availability of client services to a large extent. Comparing the test data shows that the optimization scheme can effectively improve the availability of services in the virtual machine during live migration under high-pressure conditions and can also shorten the migration time.
TABLE 1
In summary, in the embodiment of the present invention, during live migration of a virtual machine, if the virtual machine generates a virtual machine exit event because the preset memory area is full, the exit event may be counted, and a change trend of the dirty page collection rate may be gathered from the dirty page collection thread of the virtual machine. The dirty page collection rate of the thread is then adjusted based on the count information and/or the change trend, which reduces the frequency of virtual machine exits caused by the preset memory area becoming full. This optimizes customer service performance inside the virtual machine during live migration, effectively reduces the impact on customer services, and improves their availability during migration. The method dynamically adjusts the workload of the dirty page collection thread according to the service pressure inside the virtual machine, where that pressure is represented by the count of virtual machine exit events and by the change trend of the dirty page collection rate. Under high pressure the method dynamically accelerates dirty page collection, thereby reducing virtual machine exit events and improving service availability.
The invention provides a method for improving the memory access performance of a virtual machine during online migration based on the dirty ring, addressing the problem that, due to the dirty ring characteristic of online live migration, client performance drops sharply or even becomes nearly unusable in certain high-pressure scenarios. It can also accelerate dirty page synchronization, improve memory migration efficiency, and shorten the online migration time of the virtual machine.
In the invention, the dirty page collection frequency of the user-mode QEMU can be dynamically adjusted, balancing the pace of the kernel dirty page producer and the dirty page consumer while dynamically adapting to different dirty page pressures. This effectively reduces frequent virtual machine exits caused by insufficient space in the shared memory area ring, reduces the impact on customer services, and improves their availability during migration. When the load of the dirty page collection thread is dynamically adjusted, a filter jitter flow is introduced to avoid adjustment oscillation caused by continuous adjustment under transient dirty page fluctuations, and a dynamic adjustment flow for the dirty page collection frequency is introduced to adapt to the changing service pressure and different pressures of the guest virtual machine. Under high pressure the method dynamically accelerates dirty page collection, reducing virtual machine exits and greatly improving client service availability, and through the provided filter jitter flow and dynamic adjustment flow the adjustment can be performed stably and correctly.
It should be noted that for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently depending on the embodiment of the invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 4, a block diagram of a virtual machine migration apparatus according to an embodiment of the present invention is shown, and specifically includes the following modules:
a counting module 401, configured to count, in a live migration process of a virtual machine, a virtual machine exit event generated by the virtual machine based on a preset memory area full event, to obtain corresponding counting information;
a statistics module 402, configured to count a dirty page collection rate change trend from a dirty page collection thread of the virtual machine;
an adjusting module 403, configured to adjust a dirty page collection rate of the dirty page collection thread according to the count information and/or the change trend of the dirty page collection rate, so as to adjust a recovery rate of the preset memory area in the virtual machine by adjusting the dirty page collection rate.
In an embodiment of the present invention, the counting information includes first counting information corresponding to a current round of dirty page collection and second counting information corresponding to a previous round of dirty page collection, and the adjusting module includes:
and the adding submodule is used for increasing the dirty page collecting rate of the dirty page collecting thread for collecting dirty pages in the next round if the count value corresponding to the first counting information is greater than the count value corresponding to the second counting information.
In an embodiment of the present invention, the trend of the dirty page collecting rate includes a first trend of the dirty page collecting rate in the current round and a second trend of the dirty page collecting rate in the previous round, and the adjusting module includes:
and the adjusting submodule is used for adjusting the dirty page collecting speed of the dirty page collecting thread according to the first change trend and the second change trend if the count value corresponding to the first count information is not greater than the count value corresponding to the second count information.
In an embodiment of the present invention, the statistical module includes:
the statistic submodule is used for counting the quantity information of the collected dirty pages from the dirty page collection thread of the virtual machine; the quantity information comprises first quantity information collected in the current round and second quantity information collected in the previous round;
the first determining submodule is used for determining that the first change trend of the dirty page collecting rate in the current round is an increasing trend if the quantity value corresponding to the first quantity information is larger than the quantity value corresponding to the second quantity information;
and the second determining submodule is used for determining that the first change trend of the dirty page collection rate in the current round is a reduction trend if the quantity value corresponding to the first quantity information is smaller than the quantity value corresponding to the second quantity information.
In an embodiment of the present invention, the adjusting sub-module includes:
a first increasing unit, configured to increase the count value corresponding to preset stable counting information by a preset value if the first change trend is consistent with the second change trend; the preset stable counting information is used to measure the degree of stability of the change trend; the change trend comprises an increasing trend and a decreasing trend;
a second increasing unit, configured to increase the dirty page collection rate at which the dirty page collection thread performs dirty page collection in the next round if the increased count value corresponding to the preset stable counting information meets a preset stable counting threshold condition and the first change trend is an increasing trend;
and the reducing unit is used for reducing the dirty page collection rate at which the dirty page collection thread performs dirty page collection in the next round if the increased count value corresponding to the preset stable counting information meets the preset stable counting threshold condition and the first change trend is a decreasing trend.
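The stable-count mechanism described by these three units can be sketched as a single decision step. The function name, the trend labels, and the default threshold and step values are illustrative assumptions, not values taken from the patent:

```python
def adjust_with_stable_count(first_trend, second_trend, stable_count,
                             threshold=3, step=1):
    """One application of the first increasing unit, second increasing
    unit, and reducing unit: consecutive rounds with a consistent trend
    increment a stability counter; once the counter meets the threshold
    condition, the rate is adjusted in the trend's direction."""
    action = "keep"
    if first_trend == second_trend:
        stable_count += step
        if stable_count >= threshold:
            action = ("increase_rate" if first_trend == "increasing"
                      else "decrease_rate")
    else:
        stable_count = 0  # inconsistent trends reset the counter
    return stable_count, action
```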
In an embodiment of the present invention, the second increasing unit includes:
a first increasing subunit, configured to increase, if the increased count value corresponding to the preset stable counting information is equal to a preset stable counting threshold and the first change trend is an increasing trend, the dirty page collecting rate at which the dirty page collecting thread performs dirty page collection in a next round;
the reduction unit includes:
and the reducing subunit is configured to reduce the dirty page collecting rate at which the dirty page collecting thread performs dirty page collection in a next round if the increased count value corresponding to the preset stable counting information is equal to the preset stable counting threshold and the first change trend is a reduction trend.
In an embodiment of the present invention, the apparatus further includes:
and the setting module is used for setting the count value corresponding to the preset stable counting information to a preset initial value if the first change trend is inconsistent with the second change trend.
In an embodiment of the present invention, the second increasing unit includes:
a reducing subunit, configured to reduce the sleep time of the dirty page collection thread according to a preset first proportion, thereby increasing the dirty page collection rate at which the dirty page collection thread performs dirty page collection in the next round.
In an embodiment of the present invention, the reducing unit includes:
a second increasing subunit, configured to increase the sleep time of the dirty page collection thread according to a preset second proportion, thereby reducing the dirty page collection rate at which the dirty page collection thread performs dirty page collection in the next round.
In an embodiment of the present invention, the apparatus further includes:
the first maintaining module is used for maintaining the sleep time of the dirty page collection thread unchanged if the reduced sleep time of the dirty page collection thread falls outside a preset sleep time interval;
and the second maintaining module is used for maintaining the sleep time of the dirty page collection thread unchanged if the increased sleep time of the dirty page collection thread falls outside the preset sleep time interval.
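The sleep-time adjustment and the maintaining modules together amount to a ratio-based update clamped to a preset interval. The concrete ratios and interval bounds below are illustrative placeholders; the patent leaves their values unspecified:

```python
def adjust_sleep_time(sleep_ms, action,
                      first_ratio=0.5, second_ratio=0.5,
                      min_ms=1, max_ms=100):
    """Shorten the collection thread's sleep to raise the collection
    rate, lengthen it to lower the rate, and keep the current sleep
    time whenever the adjusted value leaves the preset interval."""
    if action == "increase_rate":
        candidate = sleep_ms * (1 - first_ratio)   # shorter sleep, faster collection
    elif action == "decrease_rate":
        candidate = sleep_ms * (1 + second_ratio)  # longer sleep, slower collection
    else:
        return sleep_ms
    # Maintaining modules: discard out-of-interval adjustments.
    if min_ms <= candidate <= max_ms:
        return candidate
    return sleep_ms
```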
In an embodiment of the present invention, the statistics submodule includes:
the traversal and determination unit is used for traversing the virtual processors of the virtual machine and determining the preset memory area corresponding to each virtual processor;
the storage unit is used for storing the collected dirty page information into the preset memory area through the kernel of the virtual machine; the preset memory area is a shared memory area;
and the collecting and counting unit is used for collecting dirty page information from the preset memory area through the dirty page collecting thread and counting the quantity information of the collected dirty pages.
In an embodiment of the present invention, the virtual machine comprises a virtualization component, the virtualization component comprising the dirty page collection thread; the virtualization component is a QEMU component.
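The traversal, storage, and collection units can be sketched as draining per-virtual-processor shared regions. The data layout here is a deliberate simplification (a list of dirty page addresses per vCPU); in a QEMU/KVM setting this plausibly corresponds to per-vCPU dirty ring buffers, though the patent does not name that mechanism:

```python
def collect_dirty_pages(vcpu_shared_regions):
    """Walk each virtual processor's shared memory area, which the
    guest kernel fills with dirty page records, collect the records,
    and count how many pages were collected this round. Draining the
    area frees its slots, reducing 'area full' VM-exit events."""
    collected = []
    for region in vcpu_shared_regions.values():
        collected.extend(region)
        region.clear()  # hand the slots back to the kernel
    return collected, len(collected)
```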
In summary, in the embodiments of the present invention, during the live migration of a virtual machine, if the virtual machine generates a virtual machine exit event because a preset memory area becomes full, the exit event is counted, the change trend of the dirty page collection rate is tracked from the dirty page collection thread of the virtual machine, and the dirty page collection rate of that thread is adjusted based on the count information and/or the change trend. This reduces how often the virtual machine exits because the preset memory area fills up, which optimizes the performance of guest services during live migration, effectively reduces the impact on those services, and improves their availability while migration is in progress. In this way, the embodiments provide a method for dynamically adjusting the workload of the dirty page collection thread according to the service pressure inside the virtual machine, where that pressure is characterized by counting virtual machine exit events and tracking the change trend of the dirty page collection rate. Under high pressure, dirty page collection is dynamically accelerated, which reduces the occurrence of virtual machine exit events and improves the availability of guest services.
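The per-round decision summarized above can be outlined as a single function. This is a non-authoritative sketch: the function name, trend labels, the default stability threshold, and the handling of equal page counts are all assumptions for illustration:

```python
def next_round_action(exit_count_cur, exit_count_prev,
                      pages_cur, pages_prev,
                      prev_trend, stable_count, stable_threshold=3):
    """One decision per dirty page collection round: a rising VM-exit
    count (the shared area filled up more often) forces a faster
    collection rate immediately; otherwise the rate follows a trend in
    the per-round dirty page counts once that trend has held steady."""
    if exit_count_cur > exit_count_prev:
        return "increase_rate", prev_trend, stable_count
    if pages_cur > pages_prev:
        trend = "increasing"
    elif pages_cur < pages_prev:
        trend = "decreasing"
    else:
        trend = prev_trend  # equal counts: carry the trend forward (assumption)
    if trend == prev_trend:
        stable_count += 1
        if stable_count >= stable_threshold:
            action = "increase_rate" if trend == "increasing" else "decrease_rate"
            return action, trend, stable_count
    else:
        stable_count = 0  # inconsistent trends reset the stability counter
    return "keep", trend, stable_count
```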
For the device embodiment, since it is substantially similar to the method embodiment, it is described briefly; for relevant details, refer to the corresponding description of the method embodiment.
An embodiment of the present invention further provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor. When the computer program is executed by the processor, each process of the above virtual machine migration method embodiments is implemented and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the above virtual machine migration method embodiments and can achieve the same technical effects; to avoid repetition, details are not repeated here.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or terminal device that comprises the element.
The foregoing has described in detail a virtual machine migration method, a virtual machine migration apparatus, an electronic device, and a computer-readable storage medium provided by the present invention. Specific examples are used herein to explain the principles and implementations of the invention, and the above description of the embodiments is intended only to help understand the method and its core idea. Meanwhile, those skilled in the art may, according to the idea of the present invention, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (15)

1. A virtual machine migration method, the method comprising:
in the process of virtual machine live migration, if a virtual machine exit event is generated by the virtual machine based on a preset memory area full event, counting the virtual machine exit event to obtain corresponding counting information;
counting a dirty page collecting rate change trend from a dirty page collecting thread of the virtual machine;
and adjusting the dirty page collection rate of the dirty page collection thread according to the counting information and/or the change trend of the dirty page collection rate, so as to adjust the recovery rate of the preset memory area in the virtual machine by adjusting the dirty page collection rate.
2. The method according to claim 1, wherein the count information includes first count information corresponding to a current round of dirty page collection and second count information corresponding to a previous round of dirty page collection, and the adjusting the dirty page collection rate of the dirty page collection thread according to the count information and/or the dirty page collection rate variation trend includes:
and if the count value corresponding to the first counting information is greater than the count value corresponding to the second counting information, increasing the dirty page collection rate at which the dirty page collection thread collects dirty pages in the next round.
3. The method according to claim 2, wherein the trend of the dirty page collection rate includes a first trend of a dirty page collection rate of a current round and a second trend of a dirty page collection rate of a previous round, and the adjusting the dirty page collection rate of the dirty page collection thread according to the count information and/or the trend of the dirty page collection rate includes:
if the count value corresponding to the first counting information is not greater than the count value corresponding to the second counting information, adjusting the dirty page collection rate of the dirty page collection thread according to the first change trend and the second change trend.
4. The method of claim 3, wherein the counting a dirty page collection rate trend from a dirty page collection thread of the virtual machine comprises:
counting the quantity information of the collected dirty pages from the dirty page collection thread of the virtual machine; the quantity information comprises first quantity information collected in the current round and second quantity information collected in the previous round;
if the quantity value corresponding to the first quantity information is larger than the quantity value corresponding to the second quantity information, determining that the first change trend of the dirty page collection rate in the current round is an increasing trend;
and if the quantity value corresponding to the first quantity information is smaller than the quantity value corresponding to the second quantity information, determining that the first change trend of the dirty page collection rate in the current round is a reduction trend.
5. The method of claim 4, wherein the adjusting the dirty page collection rate of the dirty page collection thread according to the first trend of change and the second trend of change comprises:
if the first change trend is consistent with the second change trend, increasing a preset numerical value by a count value corresponding to preset stable count information; the preset stable counting information is used for measuring the stability degree of the variation trend; the variation trend comprises an increasing trend and a decreasing trend;
if the increased count value corresponding to the preset stable counting information meets a preset stable counting threshold condition and the first change trend is an increase trend, increasing the dirty page collection rate of the dirty page collection thread for performing dirty page collection in the next round;
and if the increased count value corresponding to the preset stable counting information meets a preset stable counting threshold condition and the first change trend is a decrease trend, reducing the dirty page collecting rate of the dirty page collecting thread for collecting dirty pages in the next round.
6. The method of claim 5,
if the increased count value corresponding to the preset stable counting information meets the preset stable counting threshold condition and the first change trend is an increase trend, increasing the dirty page collecting rate of the dirty page collecting thread for collecting dirty pages in the next round, including:
if the increased count value corresponding to the preset stable counting information is equal to a preset stable counting threshold value and the first change trend is an increase trend, increasing the dirty page collection rate of the dirty page collection thread for performing dirty page collection in the next round;
if the increased count value corresponding to the preset stable counting information meets a preset stable counting threshold condition and the first change trend is a decrease trend, reducing the dirty page collecting rate of the dirty page collecting thread for collecting dirty pages in the next round, including:
and if the increased count value corresponding to the preset stable counting information is equal to the preset stable counting threshold and the first change trend is a decrease trend, reducing the dirty page collection rate at which the dirty page collection thread collects dirty pages in the next round.
7. The method of claim 5, further comprising:
and if the first change trend is inconsistent with the second change trend, setting a count value corresponding to the preset stable counting information as a preset initial numerical value.
8. The method of claim 5, wherein said increasing the dirty page collection rate for a next dirty page collection round of the dirty page collection thread comprises:
reducing the sleep time of the dirty page collection thread according to a preset first proportion, so as to increase the dirty page collection rate of the dirty page collection thread for collecting dirty pages in the next round.
9. The method of claim 5, wherein the reducing the dirty page collection rate for a next round of dirty page collection by the dirty page collection thread comprises:
and increasing the sleep time of the dirty page collection thread according to a preset second proportion so as to reduce the dirty page collection rate of the dirty page collection thread for collecting dirty pages in the next round.
10. The method according to claim 8 or 9, characterized in that the method further comprises:
if the reduced sleep time of the dirty page collection thread is not in a preset sleep time interval, maintaining the sleep time of the dirty page collection thread unchanged;
and if the increased sleep time of the dirty page collection thread is not in the preset sleep time interval, maintaining the sleep time of the dirty page collection thread unchanged.
11. The method of claim 4, wherein the counting the collected quantity information of the dirty pages from the dirty page collection thread of the virtual machine comprises:
traversing a virtual processor of the virtual machine, and determining the preset memory area corresponding to the virtual processor;
storing the collected dirty page information into the preset memory area through the kernel of the virtual machine; the preset memory area is a shared memory area;
and collecting dirty page information from the preset memory area through the dirty page collecting thread, and counting the quantity information of the collected dirty pages.
12. The method of claim 11, wherein the virtual machine comprises a virtualization component that comprises the dirty page collection thread; the virtualization component is a QEMU component.
13. An apparatus for virtual machine migration, the apparatus comprising:
the counting module is used for counting the virtual machine exit events to obtain corresponding counting information if the virtual machine generates the virtual machine exit events based on the preset memory area full events in the virtual machine live migration process;
the statistical module is used for counting the change trend of the dirty page collecting rate from the dirty page collecting thread of the virtual machine;
and the adjusting module is used for adjusting the dirty page collecting rate of the dirty page collecting thread according to the counting information and/or the change trend of the dirty page collecting rate so as to adjust the recovery rate of the preset memory area in the virtual machine by adjusting the dirty page collecting rate.
14. An electronic device, comprising: a processor, a memory and a computer program stored on the memory and capable of running on the processor, the computer program when executed by the processor implementing a virtual machine migration method as claimed in any one of claims 1 to 12.
15. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out a virtual machine migration method according to any one of claims 1 to 12.
CN202310075635.7A 2023-02-07 2023-02-07 Virtual machine migration method and device, electronic equipment and medium Active CN115827169B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310075635.7A CN115827169B (en) 2023-02-07 2023-02-07 Virtual machine migration method and device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN115827169A true CN115827169A (en) 2023-03-21
CN115827169B CN115827169B (en) 2023-06-23

Family

ID=85520866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310075635.7A Active CN115827169B (en) 2023-02-07 2023-02-07 Virtual machine migration method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN115827169B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120221710A1 (en) * 2011-02-28 2012-08-30 Tsirkin Michael S Mechanism for Virtual Machine Resource Reduction for Live Migration Optimization
US8402226B1 (en) * 2010-06-18 2013-03-19 Emc Corporation Rate proportional cache write-back in a storage server
CN103365704A (en) * 2012-03-26 2013-10-23 中国移动通信集团公司 Memory pre-copying method in virtual machine migration, device executing memory pre-copying method and system
WO2017049617A1 (en) * 2015-09-25 2017-03-30 Intel Corporation Techniques to select virtual machines for migration
CN107729119A (en) * 2017-09-26 2018-02-23 联想(北京)有限公司 Virtual machine migration method and its system
CN107832118A (en) * 2017-11-18 2018-03-23 浙江网新恒天软件有限公司 A kind of KVM live migration of virtual machine optimization methods of reduction VCPU temperatures
CN109189545A (en) * 2018-07-06 2019-01-11 烽火通信科技股份有限公司 A kind of realization method and system improving live migration of virtual machine reliability
CN110928636A (en) * 2018-09-19 2020-03-27 阿里巴巴集团控股有限公司 Virtual machine live migration method, device and equipment
CN112988332A (en) * 2021-04-26 2021-06-18 杭州优云科技有限公司 Virtual machine live migration prediction method and system and computer readable storage medium
CN113886012A (en) * 2021-09-29 2022-01-04 济南浪潮数据技术有限公司 Method, device and equipment for automatically selecting virtual machine thermal migration acceleration scheme
CN114443211A (en) * 2021-12-22 2022-05-06 天翼云科技有限公司 Virtual machine live migration method, equipment and storage medium
CN114924836A (en) * 2022-05-17 2022-08-19 上海仪电(集团)有限公司中央研究院 Optimized KVM pre-copy virtual machine live migration method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Fang Yiqiu; Chen Yuyang; Ge Junwei: "Improvement of a virtual machine live migration mechanism based on probability prediction" *
Xiong Anping; Xu Xiaolong: "Research on a live migration mechanism for Xen virtual machines based on iterative memory copying" *
Chen Yuyang: "Research on virtual machine live migration strategies based on cloud computing" *

Also Published As

Publication number Publication date
CN115827169B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
US10997209B2 (en) Creating replicas at user-defined points in time
Waldspurger et al. Efficient {MRC} construction with {SHARDS}
US9418020B2 (en) System and method for efficient cache utility curve construction and cache allocation
Zhao et al. Dynamic memory balancing for virtual machines
Stoica et al. Enabling efficient OS paging for main-memory OLTP databases
US10678596B2 (en) User behavior-based dynamic resource capacity adjustment
EP2772853B1 (en) Method and device for building memory access model
US7181588B2 (en) Computer apparatus and method for autonomic adjustment of block transfer size
CN102868763A (en) Energy-saving dynamic adjustment method of virtual web application cluster in cloud computing environment
TW200805080A (en) Apparatus, system, and method for dynamic adjustment of performance monitoring
CN104731974A (en) Dynamic page loading method based on big data stream type calculation
EP2199915B1 (en) Monitoring memory consumption
CN112346829A (en) Method and equipment for task scheduling
CN104899071A (en) Recovery method and recovery system of virtual machine in cluster
US20140258672A1 (en) Demand determination for data blocks
Balmau et al. Silk+ preventing latency spikes in log-structured merge key-value stores running heterogeneous workloads
CN103399791A (en) Method and device for migrating virtual machines on basis of cloud computing
US11934665B2 (en) Systems and methods for ephemeral storage snapshotting
Wang et al. Dynamic memory balancing for virtualization
Yu et al. {ADOC}: Automatically Harmonizing Dataflow Between Components in {Log-Structured}{Key-Value} Stores for Improved Performance
CN115827169A (en) Virtual machine migration method and device, electronic equipment and medium
CN112559119B (en) Virtual machine migration method and device, electronic equipment and storage medium
CN112783713A (en) Method, device, equipment and storage medium for processing multi-core virtual machine stuck
US20230385159A1 (en) Systems and methods for preventing data loss
CN107018163B (en) Resource allocation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 100007 room 205-32, floor 2, building 2, No. 1 and No. 3, qinglonghutong a, Dongcheng District, Beijing

Patentee after: Tianyiyun Technology Co.,Ltd.

Address before: 100093 Floor 4, Block E, Xishan Yingfu Business Center, Haidian District, Beijing

Patentee before: Tianyiyun Technology Co.,Ltd.