CN109739618B - Virtual machine migration method and device - Google Patents


Info

Publication number
CN109739618B
CN109739618B (Application CN201811505526.XA)
Authority
CN
China
Prior art keywords
virtual machine
network card
host
queue
migration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811505526.XA
Other languages
Chinese (zh)
Other versions
CN109739618A (en)
Inventor
钟晋明 (Zhong Jinming)
Current Assignee
New H3C Cloud Technologies Co Ltd
Original Assignee
New H3C Cloud Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by New H3C Cloud Technologies Co Ltd filed Critical New H3C Cloud Technologies Co Ltd
Priority to CN201811505526.XA priority Critical patent/CN109739618B/en
Publication of CN109739618A publication Critical patent/CN109739618A/en
Application granted granted Critical
Publication of CN109739618B publication Critical patent/CN109739618B/en
Legal status: Active

Abstract

The present disclosure provides a virtual machine migration method and apparatus. A combined transceiving queue is configured in both the hardware virtualization network card driver and the paravirtualization network card driver of a first host. When the first host switches to the migration state, the memory information of the first virtual machine is synchronized to a second virtual machine, and throughout the migration the first virtual machine exchanges I/O data with the paravirtualized network card of the SR-IOV NIC through the combined transceiving queue of the paravirtualization network card driver. The switch into the migration state is therefore completed automatically on the driver side, without bonding software, which simplifies subsequent operation and maintenance, and high forwarding performance is maintained in both the migration and non-migration states.

Description

Virtual machine migration method and device
Technical Field
The disclosure relates to the technical field of cloud computing, in particular to a virtual machine migration method and device.
Background
SR-IOV (Single-Root I/O Virtualization) technology allows a PCIe (Peripheral Component Interconnect Express) device to be efficiently shared among virtual machines, letting VMs (Virtual Machines) use the host device directly and obtain higher I/O performance through hardware.
An SR-IOV NIC (Network Interface Card) generally defines one PF (Physical Function) and a plurality of VFs (Virtual Functions). The PF and the VFs can each be used as a separate network card, and a VF is usually passed through to a virtual machine by PCIe pass-through, so that the VF's hardware registers can be accessed directly from inside the virtual machine. However, because the SR-IOV VF network card is a physical PCIe network card function, a guest that keeps the VF network card attached cannot be live-migrated.
Disclosure of Invention
In order to overcome the above disadvantages in the prior art, an object of the present disclosure is to provide a virtual machine migration method and apparatus, so as to solve or improve the above problems.
In order to achieve the above purpose, the embodiments of the present disclosure adopt the following technical solutions:
In a first aspect, the present disclosure provides a virtual machine migration method applied to a first host communicatively connected to a second host, where the first host and the second host each include a single-root I/O virtualization network interface card (SR-IOV NIC), and the SR-IOV NIC includes a hardware-virtualized network card and a paravirtualized network card, the method including:
after a virtual machine migration instruction is received, sending configuration information of a first virtual machine to be migrated to the second host, so that the second host creates a second virtual machine according to the configuration information and configures a hardware virtualization network card driver and a paravirtualization network card driver for the second virtual machine;
when the second virtual machine completes configuration, switching the current running state of the first virtual machine to a migration state, wherein in the migration state, the first virtual machine runs the paravirtualization network card driver to perform I/O data interaction with the paravirtualized network card of the SR-IOV NIC through a combined transceiving queue of the paravirtualization network card driver;
and sending memory information of the first virtual machine to the second host, so that the second host synchronizes the memory information of the first virtual machine to the second virtual machine.
In a second aspect, an embodiment of the present disclosure further provides a virtual machine migration method applied to a second host that receives a first virtual machine migrated from a first host, where the first host and the second host each include a single-root I/O virtualization network interface card (SR-IOV NIC), and the SR-IOV NIC includes a hardware-virtualized network card and a paravirtualized network card, the method including:
receiving configuration information of the first virtual machine to be migrated, sent by the first host according to a virtual machine migration instruction;
creating a second virtual machine according to the configuration information, and configuring a hardware virtualization network card driver and a paravirtualization network card driver for the second virtual machine, so that the first host switches the current running state of the first virtual machine to a migration state when the second virtual machine completes configuration, wherein in the migration state, the first virtual machine runs the paravirtualization network card driver to perform I/O data interaction with the paravirtualized network card of the SR-IOV NIC through a combined transceiving queue of the paravirtualization network card driver;
and receiving memory information of the first virtual machine sent by the first host, and synchronizing the memory information of the first virtual machine to the second virtual machine.
In a third aspect, an embodiment of the present disclosure further provides a virtual machine migration method applied to a first host and a second host communicatively connected to each other, where the first host and the second host each include a single-root I/O virtualization network interface card (SR-IOV NIC), and the SR-IOV NIC includes a hardware-virtualized network card and a paravirtualized network card, the method including:
after receiving a virtual machine migration instruction, the first host sends configuration information of a first virtual machine to be migrated to the second host;
the second host creates a second virtual machine according to the configuration information, and configures a hardware virtualization network card driver and a paravirtualization network card driver for the second virtual machine;
when the second virtual machine completes configuration, the first host switches the current running state of the first virtual machine to a migration state and sends memory information of the first virtual machine to the second host in the migration state, wherein in the migration state, the first virtual machine runs the paravirtualization network card driver to perform I/O data interaction with the paravirtualized network card of the SR-IOV NIC through a combined transceiving queue of the paravirtualization network card driver;
and the second host synchronizes the memory information of the first virtual machine to the second virtual machine.
In a fourth aspect, an embodiment of the present disclosure further provides a virtual machine migration apparatus applied to a first host communicatively connected to a second host, where the first host and the second host each include a single-root I/O virtualization network interface card (SR-IOV NIC), and the SR-IOV NIC includes a hardware-virtualized network card and a paravirtualized network card, the apparatus including:
a configuration information sending module, configured to send configuration information of a first virtual machine to be migrated to the second host after a virtual machine migration instruction is received, so that the second host creates a second virtual machine according to the configuration information and configures a hardware virtualization network card driver and a paravirtualization network card driver for the second virtual machine;
a first switching module, configured to switch the current running state of the first virtual machine to a migration state when the second virtual machine completes configuration, wherein in the migration state, the first virtual machine runs the paravirtualization network card driver to perform I/O data interaction with the paravirtualized network card of the SR-IOV NIC through a combined transceiving queue of the paravirtualization network card driver;
and a memory information sending module, configured to send memory information of the first virtual machine to the second host, so that the second host synchronizes the memory information of the first virtual machine to the second virtual machine.
In a fifth aspect, an embodiment of the present disclosure further provides a virtual machine migration apparatus applied to a second host that receives a first virtual machine migrated from a first host, where the first host and the second host each include a single-root I/O virtualization network interface card (SR-IOV NIC), and the SR-IOV NIC includes a hardware-virtualized network card and a paravirtualized network card, the apparatus including:
a configuration information receiving module, configured to receive configuration information of the first virtual machine to be migrated, sent by the first host according to a virtual machine migration instruction;
a creating module, configured to create a second virtual machine according to the configuration information and configure a hardware virtualization network card driver and a paravirtualization network card driver for the second virtual machine, so that the first host switches the current running state of the first virtual machine to a migration state when the second virtual machine completes configuration, wherein in the migration state, the first virtual machine runs the paravirtualization network card driver to perform I/O data interaction with the paravirtualized network card of the SR-IOV NIC through a combined transceiving queue of the paravirtualization network card driver;
and a memory information receiving module, configured to receive memory information of the first virtual machine sent by the first host and synchronize the memory information of the first virtual machine to the second virtual machine.
In a sixth aspect, an embodiment of the present disclosure further provides a server, where the server includes:
a storage medium;
a processor; and
the virtual machine migration apparatus described above, where the virtual machine migration apparatus is stored in the storage medium as computer-executable instructions executed by the processor.
In a seventh aspect, an embodiment of the present disclosure further provides a readable storage medium, where a computer program is stored in the readable storage medium, and the computer program, when executed, implements the virtual machine migration method described above.
Compared with the prior art, the method has the following beneficial effects:
the virtual machine migration method and device provided by the disclosure are characterized in that a combined receiving and sending queue is configured in a hardware virtualization network card driver and a semi-virtualization network card driver of a first host, when the first host is switched to a migration state, memory information of the first virtual machine is synchronized into a second virtual machine, and the first virtual machine performs I/O data interaction with a semi-virtualization driving network card of an SR-IOV network card through the combined receiving and sending queue driven by the semi-virtualization network card in the whole migration process. Therefore, the method and the device can automatically complete the switching of the migration state on the driving side without adopting binding software, are beneficial to subsequent operation and maintenance, and can keep higher forwarding performance no matter in the migration state or the non-migration state.
In order to make the aforementioned objects, features and advantages of the embodiments of the present disclosure comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present disclosure and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings may be obtained from the drawings without inventive effort.
Fig. 1 is a schematic diagram of an application scenario of a common virtual machine migration method;
fig. 2 is a schematic diagram of an application scenario of another common virtual machine migration method;
fig. 3 is a schematic view of an application scenario of a virtual machine migration method according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a virtual machine migration method according to an embodiment of the present disclosure;
fig. 5 is another schematic flow chart of a virtual machine migration method according to an embodiment of the present disclosure;
Fig. 6 is a functional block diagram of a first virtual machine migration apparatus according to an embodiment of the present disclosure;
fig. 7 is another functional block diagram of a first virtual machine migration apparatus according to an embodiment of the present disclosure;
fig. 8 is a functional block diagram of a second virtual machine migration apparatus according to an embodiment of the present disclosure;
fig. 9 is another functional block diagram of a second virtual machine migration apparatus according to an embodiment of the present disclosure;
fig. 10 is a block diagram of a server for implementing the virtual machine migration method according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In order to better understand the technical solution of the present disclosure, the host in the present disclosure is first described in detail below.
The host is provided with an SR-IOV NIC and runs a virtualization software layer, the Hypervisor. The Hypervisor may also be referred to as a Virtual Machine Monitor (VMM); for example, it may be VMware ESXi, KVM (Kernel-based Virtual Machine), or the like. In addition, the host may further include a plurality of VMs.
An SR-IOV NIC is a network interface card that employs SR-IOV technology. SR-IOV allows the physical function of an I/O device's port to be shared without virtualization-software emulation: a series of VFs can be created on the device's physical port, and each VF can be passed through to its corresponding VM, achieving near-native performance. In short, SR-IOV partitions a PCIe function into multiple virtual interfaces so that the resources of one PCI device can be shared in a virtualized environment; network traffic is delivered directly to the VMs, bypassing the Hypervisor and its I/O performance overhead. SR-IOV can thus provide each VM with independent memory space, interrupts, and DMA (Direct Memory Access) streams. SR-IOV introduces two new types of function:
Physical Function (PF): a PCIe function that supports the SR-IOV extended capability and is used to configure and manage the SR-IOV feature; and
Virtual Function (VF): a lightweight PCIe function that contains the resources necessary for data movement and a carefully minimized set of configuration resources.
Thus, the SR-IOV NIC may include: one PF and multiple VFs.
In addition, an L2 vSwitch (virtual switch) with a basic Layer-2 switching function is integrated into the SR-IOV NIC. Each VF is directly connected to the L2 vSwitch and corresponds to a port on the L2 vSwitch.
The Hypervisor may include a vSwitch and a PCI manager. In addition, a driver for the SR-IOV NIC, called the PF driver, is installed on the Hypervisor. The PCI manager is used to configure and manage the PCI bus of the SR-IOV NIC.
A VM is a complete computer system that has full hardware functionality, runs in a completely isolated environment, and is provided through software emulation plus hardware-assisted virtualization. Through VM software, one or more virtual computers can be simulated on a single host, and each VM runs on the Hypervisor. Each VM recognizes a VF of the SR-IOV NIC as an ordinary PCIe device, so a VF driver can be installed in each VM to attach to the corresponding VF network card on the SR-IOV NIC, bypassing the vSwitch in the Hypervisor. That is, the VM sends data directly to its corresponding VF network card on the SR-IOV NIC rather than to the Hypervisor's vSwitch, so the Hypervisor does not participate in the data path.
Building on the description above, a VM receives a packet as follows: when a packet arrives at the SR-IOV NIC, it is handed to the L2 vSwitch for classification and switching; the target VF network card is determined from the packet's destination address, the packet is forwarded to that VF network card, and the VF network card initiates a DMA operation to transfer the packet into the VM attached to it. A VM sends a packet as follows: the VM writes the packet to its corresponding VF network card on the SR-IOV NIC, and the VF network card transmits it.
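The receive path above can be sketched as a small model: the embedded L2 vSwitch classifies an incoming packet by destination MAC and hands it to the matching VF, which DMAs it into the attached VM's buffer. All class and field names here (`L2VSwitch`, `Packet`, `vm_buffer`, the example MACs) are illustrative assumptions, not from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    dst_mac: str
    payload: bytes

@dataclass
class VF:
    vm_buffer: list = field(default_factory=list)

    def dma_to_vm(self, pkt: Packet) -> None:
        # Models the DMA transfer from the VF into its VM's memory.
        self.vm_buffer.append(pkt)

class L2VSwitch:
    def __init__(self) -> None:
        self.ports: dict[str, VF] = {}   # destination MAC -> VF port

    def attach(self, mac: str, vf: VF) -> None:
        self.ports[mac] = vf

    def receive(self, pkt: Packet) -> bool:
        # Classify by destination address; forward to the matching VF.
        vf = self.ports.get(pkt.dst_mac)
        if vf is None:
            return False          # no port for this MAC: drop
        vf.dma_to_vm(pkt)
        return True

# Example: two VFs behind one SR-IOV NIC, each attached to a VM.
vswitch = L2VSwitch()
vf1, vf2 = VF(), VF()
vswitch.attach("52:54:00:00:00:01", vf1)
vswitch.attach("52:54:00:00:00:02", vf2)
vswitch.receive(Packet("52:54:00:00:00:02", b"hello"))
```

The send path is the mirror image: the VM writes into its own VF's queue and the NIC transmits, with no vSwitch in the Hypervisor involved.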
For the technical problem described in the Background, the inventors found through research that two solutions are currently used:
the first scheme is as follows: referring to the application scenario shown in fig. 1, in the application scenario, the first host 100 and the second host 200 both include SR-IOV NICs, the SR-IOV NICs operate an SR-IOV VF network card and an SR-IOV PF network card, the first virtual machine is configured in the first host 100, the second virtual machine is configured in the second host 200, and the first virtual machine is configured with a Virtio-net driver and an SR-IOV VF driver. In implementation, the first virtual machine may configure the active/standby mode through binding software, for example, bind information (e.g., MAC address, IP address, etc.) of the SR-IOV VF network card in the active mode, and bind information (e.g., MAC address, IP address, etc.) of the SR-IOVPF network card in the standby mode. Therefore, when the first virtual machine normally runs, the information of the SR-IOV VF network card is bound, and I/O data interaction is completed through the SR-IOV VF drive and the SR-IOV VF network card. When the first virtual machine needs to be migrated to the second host machine 200 and the information of the SR-IOV PF network card is bound, I/O data interaction is completed with the SR-IOV PF network card through the Virtio-net drive.
However, the inventors found in the course of their research that the above-mentioned solution has the following problems:
First, the SR-IOV PF network card of the SR-IOV NIC is used only during migration of the first virtual machine, yet it permanently occupies physical function resources.
Second, during migration the Virtio-net driver exchanges I/O data with the SR-IOV PF network card, so transmit/receive performance is low and the migration downtime is long.
Third, software bonding uses different commands and tools on different operating systems, making standardized operation difficult and hindering subsequent operation and maintenance.
The second scheme: referring to the application scenario shown in fig. 2, the first host 100 and the second host 200 both include SR-IOV NICs that operate a Vhost-User network card; the first virtual machine runs on the first host 100, the second virtual machine runs on the second host 200, and the first virtual machine is configured with a Virtio-net driver.
In this scheme, the first virtual machine exchanges I/O data with the Vhost-User network card of the SR-IOV NIC through the Virtio-net driver, achieving zero-copy of packets from the first virtual machine to the first host 100 and better I/O performance than having the Virtio-net driver exchange I/O data with the SR-IOV PF network card.
However, the inventors found that although this scheme supports live migration, migrations are infrequent and short-lived compared with normal operation, and in the normal state the I/O performance of this scheme is still low relative to completing I/O data interaction through the SR-IOV VF driver and the SR-IOV VF network card. For example, in the normal state part of the performance is spent on software processing, so I/O performance remains lower than in the SR-IOV VF mode. In other words, this scheme makes the first host 100 support live migration by sacrificing part of the normal-state I/O performance.
Based on these findings, the inventors propose the following technical solution. In detail, a combined transceiving queue is configured in both the hardware virtualization network card driver and the paravirtualization network card driver of the first host 100. When the first host 100 switches to the migration state, the memory information of the first virtual machine is synchronized into the second virtual machine, and throughout the migration the first virtual machine exchanges I/O data with the paravirtualized network card of the SR-IOV NIC through the combined transceiving queue of the paravirtualization network card driver. The switch into the migration state is thus completed automatically on the driver side without bonding software, which benefits subsequent operation and maintenance, and high forwarding performance is maintained in both the migration and non-migration states.
It should be noted that the shortcomings of the above prior-art solutions were identified by the inventors through practice and careful study; therefore, both the discovery of the above problems and the solutions proposed below should be regarded as the inventors' contribution to the present disclosure.
Referring to fig. 3, in an application scenario of the virtual machine migration method provided by the embodiment of the present disclosure, the first host 100 and the second host 200 both include SR-IOV NICs, each of which includes a hardware-virtualized network card and a paravirtualized network card; the first virtual machine runs on the first host 100 and is configured with a hardware virtualization network card driver and a paravirtualization network card driver.
In the embodiment of the disclosure, the hardware-virtualized network card may be an SR-IOV VF network card, the paravirtualized network card may be a Vhost-User network card, the hardware virtualization network card driver may be an SR-IOV VF driver, and the paravirtualization network card driver may be a Virtio-net driver.
The hardware virtualization network card driver and the paravirtualization network card driver are configured with a combined transceiving queue. The combined transceiving queue is obtained by uniformly abstracting the VF queue of the hardware virtualization network card driver and the virtual I/O ring (Virtio-ring) of the paravirtualization network card driver, so that it has the functions of both the VF queue and the Virtio-ring.
As an embodiment, the combined transceiving queue may be generated as follows: first, the queue parameters of the virtual function (VF) queue of the hardware virtualization network card driver (for example, the hardware virtualization drive parameters of the SR-IOV VF driver) and the address of each I/O memory block in the virtual I/O ring (Virtio-ring) of the paravirtualization network card driver are obtained; then a queue union of the VF queue and the Virtio-ring is generated from these queue parameters and addresses, and this union serves as the combined transceiving queue. Consequently, when switching from the hardware virtualization network card driver to the paravirtualization network card driver, or vice versa, the combined transceiving queue can represent the Virtio-ring and the VF queue simultaneously, so the switch completes smoothly without unloading the original driver. Both the Virtio-ring and the VF queue provide a communication channel between the first virtual machine and the SR-IOV network card.
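The union described above can be sketched minimally: one object carries both the VF queue parameters and the Virtio-ring I/O block addresses, so either driver can address it without unloading the other. The field names and example values here are assumptions for illustration only, not the patent's data layout.

```python
from dataclasses import dataclass

@dataclass
class CombinedQueue:
    vf_params: dict        # queue parameters taken from the SR-IOV VF driver
    virtio_ring: tuple     # address of each I/O memory block in the Virtio-ring

def build_combined_queue(vf_params: dict, ring_addrs: list) -> CombinedQueue:
    # Union of the two descriptions: the result can stand in for the
    # VF queue (hardware path) and for the Virtio-ring (paravirtual path).
    return CombinedQueue(vf_params=dict(vf_params), virtio_ring=tuple(ring_addrs))

q = build_combined_queue(
    {"queue_id": 0, "depth": 256},     # hypothetical VF driver parameters
    [0x1000, 0x2000, 0x3000],          # hypothetical I/O block addresses
)
```

Because both views live in one object, switching drivers changes which view is active rather than rebuilding the queue, which is what lets the switch complete without unloading the original driver.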
The virtual machine migration method shown in fig. 4, executed by the first host 100 of fig. 3, is described in detail below with reference to the application scenario shown in fig. 3. It should be understood that in other embodiments the order of some steps may be interchanged according to actual needs, and some steps may be omitted. The detailed steps of the virtual machine migration method are as follows.
Step S110: after receiving the virtual machine migration instruction, sending the configuration information of the first virtual machine to be migrated to the second host 200.
In this embodiment, after receiving the virtual machine migration instruction, the first host 100 may obtain, from the virtual machine migration instruction, the virtual machine information of the first virtual machine to be migrated in the first host 100 and the host information of the second host 200 to which the first virtual machine needs to be migrated. The virtual machine information may include a virtual machine name or a virtual machine address (e.g., IP address, MAC address) of the first virtual machine, and the host information may include address information (e.g., IP address, MAC address) of the second host 200.
Next, the first host 100 may obtain configuration information of the first virtual machine according to the virtual machine information, where the configuration information may include network configuration information of the first virtual machine, and for example, may include configuration information of a hardware virtualization network card driver and a semi-virtualization network card driver.
Finally, the first host 100 may transmit the configuration information to the second host 200 according to the host information. For example, the first host 100 may search for a communication port of the second host 200 according to the host information, and send the configuration information to the second host 200 according to the searched communication port.
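The three steps just described (parse the instruction, look up the VM's configuration, send it to the target host) can be sketched as below. The dictionary keys and the `send` callback are illustrative assumptions, not the patent's actual message format.

```python
def handle_migration_instruction(instruction, vm_configs, send):
    """Model of Step S110 on the first host."""
    vm_name = instruction["vm_name"]          # first virtual machine to migrate
    target = instruction["target_host_ip"]    # address of the second host
    config = vm_configs[vm_name]              # network configuration info,
                                              # including both NIC driver configs
    send(target, config)                      # deliver to the second host
    return config

# Usage with a stub transport that records what was sent.
sent = []
config = handle_migration_instruction(
    {"vm_name": "vm1", "target_host_ip": "10.0.0.2"},
    {"vm1": {"hw_driver": "sriov-vf", "pv_driver": "virtio-net"}},
    lambda host, conf: sent.append((host, conf)),
)
```

In the real system the transport would be the communication port looked up from the host information, but the control flow is the same.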
Then, the second host 200 may create a second virtual machine according to the configuration information, configure for it the same hardware virtualization network card driver and paravirtualization network card driver as the first virtual machine, and send first feedback information to the first host 100 after the configuration is completed.
Step S120, switching the current running state of the first virtual machine to a migration state.
In this embodiment, after receiving the first feedback information, the first host 100 determines that the second host 200 has completed the configuration of the second virtual machine, and then determines whether the current running state of the first virtual machine is a non-migration state (Hard state).
In the Hard state, the first virtual machine runs the hardware virtualization network card driver to exchange I/O data with the hardware-virtualized network card of the SR-IOV NIC through the combined transceiving queue of that driver. For example, the first virtual machine may run the SR-IOV VF driver and exchange I/O data with the SR-IOV VF network card of the SR-IOV NIC through the combined transceiving queue in the SR-IOV VF driver. Taking packet transmission as an example, the first virtual machine writes the packet to be sent into the combined transceiving queue of the SR-IOV VF driver; the packet travels through the VF queue within the combined transceiving queue to the SR-IOV VF network card of the SR-IOV NIC, which transmits it to the destination.
If the current running state of the first virtual machine is judged to be the Hard state, the current running state of the first virtual machine is switched to a temporary migration state (PrepareSoft state), and in the PrepareSoft state the number of VF queues still undergoing I/O data interaction is acquired from the combined transceiving queue of the hardware virtualization network card driver currently run by the first virtual machine. If the VF queue number is 0, the current running state is switched from the PrepareSoft state to the migration state (Soft state). In this way, the VF queues are allowed to complete transmission of all data packets before the switch to the migration state, avoiding loss of data packets during the switching process.
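The Hard → PrepareSoft → Soft switch with its drain check can be sketched as follows; the class and method names are invented for illustration, and the drain check is shown as a single poll rather than a real wait loop:

```python
class MigrationStateMachine:
    """Illustrative model of the first VM's running-state switch."""
    HARD = "Hard"                 # non-migration: SR-IOV VF path active
    PREPARE_SOFT = "PrepareSoft"  # temporary migration state (draining)
    SOFT = "Soft"                 # migration: Virtio-net / Vhost-User path

    def __init__(self, active_vf_queues):
        self.state = self.HARD
        # Number of VF queues still undergoing I/O data interaction
        self.active_vf_queues = active_vf_queues

    def switch_to_migration(self):
        """Enter Soft only once every VF queue has finished transmitting,
        so no data packet is lost during the switch."""
        if self.state != self.HARD:
            return self.state
        self.state = self.PREPARE_SOFT
        if self.active_vf_queues == 0:   # all packets have been sent
            self.state = self.SOFT
        return self.state
```

With pending VF traffic the machine stays in PrepareSoft; with drained queues it proceeds directly to Soft.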
Thereafter, the first virtual machine runs the paravirtualized network card driver in the Soft state to perform I/O data interaction with the paravirtualized drive network card of the SR-IOV NIC through the combined transceiving queue of the paravirtualized network card driver. For example, the first virtual machine may run a Virtio-net driver and perform I/O data interaction with the Vhost-User drive network card of the SR-IOV NIC through the combined transceiving queue of the Virtio-net driver. Taking sending a data packet as an example, the first virtual machine may write the data packet to be sent into the Virtio-net driver's combined transceiving queue, send the data packet to the Vhost-User drive network card of the SR-IOV NIC through the Virtio-ring in the combined transceiving queue, and send the data packet to the destination terminal through the Vhost-User network card.
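The two transmit paths just described — VF queue in the Hard state, Virtio-ring in the Soft state — amount to a dispatch on the current running state. A hedged sketch, where `vf_send` and `virtio_send` are placeholders for the real VF-queue and Virtio-ring paths, not APIs from the patent:

```python
def send_packet(state, packet, vf_send, virtio_send):
    """Forward `packet` through the path matching the current running state."""
    if state == "Hard":
        # Non-migration: SR-IOV VF driver -> VF queue -> SR-IOV VF network card
        return vf_send(packet)
    # Migration (Soft): Virtio-net driver -> Virtio-ring -> Vhost-User card
    return virtio_send(packet)
```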
Step S130, sending the memory information of the first virtual machine to the second host 200.
In this embodiment, after receiving the memory information of the first virtual machine, the second host 200 synchronizes the memory information of the first virtual machine to the second virtual machine.
Based on this design, in the migration state, I/O data interaction is carried out between the Virtio-net driver and the Vhost-User network card of the SR-IOV NIC, which achieves zero copy of data packets from the first virtual machine to the first host 100 and improves I/O performance compared with a mode in which the Virtio-net driver performs I/O data interaction with the SR-IOV PF network card. In the non-migration state, I/O data interaction is completed through the SR-IOV VF driver and the SR-IOV VF network card, which improves I/O performance compared with a mode in which the Virtio-net driver performs I/O data interaction with the Vhost-User network card of the SR-IOV NIC. Therefore, compared with the prior art, this embodiment can maintain higher forwarding performance in both the migration state and the non-migration state. In addition, the SR-IOV PF network card is not needed during migration, and the migration state can be switched automatically on the driver side without binding software, which is beneficial to subsequent operation and maintenance.
Referring to fig. 5, after the memory information of the first virtual machine is synchronized to the second virtual machine, the second host 200 sends second feedback information to the first host 100, and on this basis, the virtual machine migration method may further include the following steps:
step S140, suspending the operation of the first virtual machine, and sending the third feedback information to the second host 200.
In this embodiment, after receiving the second feedback information, the first host 100 determines that the second host 200 has synchronized the memory information of the first virtual machine into the second virtual machine, and then suspends the operation of the first virtual machine, and sends third feedback information to the second host 200.
After receiving the third feedback information, the second host 200 determines that the first virtual machine has been suspended, then starts the second virtual machine, and switches the current running state of the second virtual machine to a non-migration state (Hard state). In detail, the second virtual machine runs a hardware virtualization network card driver in the Hard state to perform interaction of I/O data with a hardware virtualization driver network card of the SR-IOV NIC according to a combined transceiving queue of the hardware virtualization network card driver.
As an embodiment, a specific implementation manner of the second host 200 switching the current running state of the second virtual machine to the non-migration state may be: judging whether the current running state is the Soft state; if so, switching the current running state to a temporary non-migration state (PrepareHard state); in the PrepareHard state, acquiring the number of virtual I/O rings (Virtio-ring) still undergoing I/O data interaction from the combined transceiving queue of the paravirtualized network card driver currently run by the second virtual machine; and if the Virtio-ring number is 0, switching the current running state from the temporary non-migration state to the non-migration state.
Therefore, before the switching to the non-migration state, the Virtio-ring is waited to finish the transmission of all the data packets, and the data packets are prevented from being lost in the switching process.
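This reverse switch is symmetric to the Hard → Soft switch on the first host: a sketch under the same assumptions (invented function signature, drain check shown as a single poll):

```python
def switch_to_non_migration(state, active_virtio_rings):
    """Soft -> PrepareHard -> Hard, entering Hard only after every
    Virtio-ring has finished transmitting its data packets."""
    if state != "Soft":
        return state               # only a Soft-state VM performs this switch
    state = "PrepareHard"          # temporary non-migration state (draining)
    if active_virtio_rings == 0:   # no ring still exchanging I/O data
        state = "Hard"             # non-migration: VF path takes over
    return state
```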
Further, still referring to fig. 4, a flowchart of another virtual machine migration method provided by the embodiment of the present disclosure is shown. Different from the above embodiment, the virtual machine migration method provided in this embodiment is executed by the second host 200, and it is understood that steps involved in the virtual machine migration method to be described next have been described in the above embodiment, and specific details of each step may be described with reference to the above embodiment, and only the steps executed by the second host 200 will be briefly described below.
Step S210, a second virtual machine is created according to the configuration information, and a hardware virtualization network card driver and a paravirtualized network card driver are configured for the second virtual machine.
In this embodiment, after receiving the configuration information of the first virtual machine to be migrated, which is sent by the first host 100 according to the virtual machine migration instruction, the second host 200 creates the second virtual machine according to the configuration information, and configures the hardware virtualization network card driver and the paravirtualization network card driver for the second virtual machine.
When the second host 200 completes configuration of the second virtual machine, first feedback information is sent to the first host 100, and the first host 100 switches the current running state of the first virtual machine to the migration state, wherein in the migration state, the first virtual machine runs the paravirtualized network card driver to perform I/O data interaction with the paravirtualized drive network card of the SR-IOV NIC through the combined transceiving queue of the paravirtualized network card driver.
Next, the first host 100 transmits the memory information of the first virtual machine to the second host 200.
Step S220, synchronizing the memory information of the first virtual machine to the second virtual machine.
Further, referring to fig. 5, after the memory information of the first virtual machine is synchronized to the second virtual machine, the second host 200 sends second feedback information to the first host 100, and after the first host 100 receives the second feedback information, it is determined that the second host 200 has synchronized the memory information of the first virtual machine to the second virtual machine, and then the first virtual machine is suspended, and third feedback information is sent to the second host 200.
Step S230, starting the second virtual machine, and switching the current running state of the second virtual machine to a non-migration state.
After receiving the third feedback information, the second host 200 determines that the first virtual machine has been suspended, then starts the second virtual machine, and switches the current running state of the second virtual machine to a non-migration state (Hard state). In detail, the second virtual machine runs a hardware virtualization network card driver in the Hard state to perform interaction of I/O data with a hardware virtualization driver network card of the SR-IOV NIC according to a combined transceiving queue of the hardware virtualization network card driver.
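The full exchange between the two hosts can be summarized end to end. In this hedged sketch, the host classes and field names are invented; only the ordering of steps S110/S210/S120/S130/S220/S140/S230 follows the text, and the three feedback messages are implicit in the call ordering:

```python
class FirstHost:
    """Stand-in for the first host 100; fields are illustrative."""
    def __init__(self):
        self.vm_state = "Hard"   # first VM starts in the non-migration state
        self.vm_running = True

class SecondHost:
    """Stand-in for the second host 200; fields are illustrative."""
    def __init__(self):
        self.vm = None
        self.vm_started = False

def migrate(first, second):
    """Replay the migration handshake in the order given by the embodiment."""
    config = {"drivers": ["hardware-virtualization", "paravirtualized"]}  # S110
    second.vm = {"config": config, "memory": None}   # S210 -> first feedback
    first.vm_state = "Soft"                          # S120: migration state
    memory = {"pages": [0, 1, 2]}                    # S130: send memory info
    second.vm["memory"] = memory                     # S220 -> second feedback
    first.vm_running = False                         # S140 -> third feedback
    second.vm_started = True                         # S230: second VM in Hard state
```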
Further, still referring to FIG. 4, a flowchart of another virtual machine migration method is shown. Unlike the above embodiment, the virtual machine migration method provided in the present embodiment is executed by the first host 100 and the second host 200 which are communicatively connected to each other, and it is understood that steps involved in the virtual machine migration method to be described next have been described in the above embodiment, and specific details of each step may be described with reference to the above embodiment.
In step S110, after receiving the virtual machine migration instruction, the first host 100 sends the configuration information of the first virtual machine to be migrated to the second host 200.
Step S210, the second host 200 creates a second virtual machine according to the configuration information, and configures a hardware virtualization network card driver and a paravirtualized network card driver for the second virtual machine.
When the second virtual machine completes configuration, the second host 200 sends first feedback information to the first host 100, and after receiving the first feedback information, the first host 100 performs:
in step S120, the first host 100 switches the current running state of the first virtual machine to the migration state, and executes in the migration state:
step S130, sending the memory information of the first virtual machine to the second host 200.
In the migration state, the first virtual machine runs the paravirtualized network card driver to perform I/O data interaction with the paravirtualized drive network card of the SR-IOV NIC through the combined transceiving queue of the paravirtualized network card driver.
In step S220, the second host 200 synchronizes the memory information of the first virtual machine to the second virtual machine.
Further, referring to fig. 6, an embodiment of the present disclosure further provides a first virtual machine migration apparatus 300, where functions implemented by the first virtual machine migration apparatus 300 may correspond to steps of the virtual machine migration method executed by the first host 100. The first virtual machine migration apparatus 300 may be understood as the first host 100, or a processor of the first host 100, or may be understood as a component that is independent from the first host or the processor and implements the functions of the present disclosure under the control of the first host 100. As shown in fig. 6, the first virtual machine migration apparatus 300 may include a configuration information sending module 310, a first switching module 320, and a memory information sending module 330, and the functional modules of the first virtual machine migration apparatus 300 are described in detail below.
The configuration information sending module 310 may be configured to send the configuration information of the first virtual machine to be migrated to the second host 200 after receiving the virtual machine migration instruction, so that the second host 200 creates the second virtual machine according to the configuration information, and configures a hardware virtualization network card driver and a paravirtualized network card driver for the second virtual machine. It is understood that the configuration information sending module 310 may be configured to perform step S110, and for the detailed implementation of the configuration information sending module 310, reference may be made to the content related to step S110.
The first switching module 320 may be configured to switch a current running state of the first virtual machine to a migration state when the second virtual machine completes configuration, where in the migration state, the first virtual machine runs a paravirtualized network card driver to perform I/O data interaction with a paravirtualized network card of the SR-IOV NIC through a combined transceiving queue driven by the paravirtualized network card. It is understood that the first switching module 320 can be used to perform the step S120, and for the detailed implementation of the first switching module 320, reference can be made to the above description about the step S120.
The memory information sending module 330 may be configured to send the memory information of the first virtual machine to the second host 200, so that the second host 200 synchronizes the memory information of the first virtual machine to the second virtual machine. It is understood that the memory information sending module 330 may be configured to perform the step S130, and for the detailed implementation of the memory information sending module 330, reference may be made to the content related to the step S130.
In a possible implementation manner, the configuration information sending module 310 may specifically send the configuration information of the first virtual machine to be migrated to the second host 200 by:
the virtual machine information of a first virtual machine to be migrated in the first host machine 100 and the host information of a second host machine 200 to which the first virtual machine needs to be migrated are obtained from the virtual machine migration instruction, the configuration information of the first virtual machine is obtained according to the virtual machine information, and the configuration information is sent to the second host machine 200 according to the host information.
In one possible implementation manner, referring to fig. 7, the first virtual machine migration apparatus 300 may further include a pause module 340.
The suspending module 340 may be configured to suspend running the first virtual machine after the memory information of the first virtual machine is synchronized to the second virtual machine, so that when the first virtual machine is suspended, the second host 200 starts the second virtual machine, and switches the current running state of the second virtual machine to a non-migration state, where in the non-migration state, the second virtual machine runs a hardware virtualization network card driver to perform I/O data interaction with a hardware virtualization driver network card of the SR-IOV NIC through a combined transceiving queue of the hardware virtualization network card driver. It is understood that the pause module 340 can be used to execute the step S140, and for the detailed implementation of the pause module 340, reference can be made to the above description about the step S140.
In a possible implementation manner, the first switching module 320 may specifically switch the current running state of the first virtual machine to the migration state by:
judging whether the current running state of the first virtual machine is a non-migration state;
if the current running state is a non-migration state, switching the current running state to a temporary migration state;
in the temporary migration state, acquiring the number of virtual function VF queues undergoing I/O data interaction from the combined transceiving queue of the hardware virtualization network card driver currently run by the first virtual machine;
and if the VF queue number is 0, switching the current running state from the temporary migration state to the migration state.
Further, referring to fig. 8, an embodiment of the present disclosure further provides a second virtual machine migration apparatus 400, where functions implemented by the second virtual machine migration apparatus 400 may correspond to steps of the virtual machine migration method executed by the second host 200. The second vm migration apparatus 400 may be understood as the second host 200 or a processor of the second host 200, or may be understood as a component that is independent from the second host 200 or the processor and implements the functions of the present disclosure under the control of the second host 200. As shown in fig. 8, the second virtual machine migration apparatus 400 may include a configuration information receiving module 410, a creating module 420, and a memory information receiving module 430, and the functional modules of the second virtual machine migration apparatus 400 are described in detail below.
The configuration information receiving module 410 may be configured to receive configuration information of the first virtual machine to be migrated, which is sent by the first host 100 according to the virtual machine migration instruction. It is understood that the configuration information receiving module 410 may be configured to perform the step S210, and for the detailed implementation of the configuration information receiving module 410, reference may be made to the content related to the step S210.
The creating module 420 may be configured to create a second virtual machine according to the configuration information, and configure a hardware virtualization network card driver and a paravirtualization network card driver for the second virtual machine, so that when the second virtual machine completes configuration, the first host 100 switches a current running state of the first virtual machine to a migration state, where in the migration state, the first virtual machine performs I/O data interaction with the paravirtualization drive network card of the SR-IOV NIC through a combined transceiving queue of the paravirtualization network card driver by running the paravirtualization network card driver. It is understood that the creating module 420 can be used to execute the step S210, and the detailed implementation manner of the creating module 420 can refer to the content related to the step S210.
The memory information receiving module 430 may be configured to receive the memory information of the first virtual machine sent by the first host 100, and synchronize the memory information of the first virtual machine into the second virtual machine. It is understood that the memory information receiving module 430 can be used to execute the step S220, and the detailed implementation manner of the memory information receiving module 430 can refer to the content related to the step S220.
In a possible implementation manner, referring to fig. 9, the second virtual machine migration apparatus 400 may further include a second switching module 440.
The second switching module 440 may be configured to start the second virtual machine when the first virtual machine is suspended, and switch the current running state of the second virtual machine to the non-migration state, where in the non-migration state, the second virtual machine runs the hardware virtualization network card driver to perform I/O data interaction with the hardware virtualization driver network card of the SR-IOV NIC through the combined transceiving queue of the hardware virtualization network card driver. It is understood that the second switching module 440 can be used to perform step S230, and for the detailed implementation of the second switching module 440, reference can be made to the above content related to step S230.
In a possible implementation manner, the second switching module 440 may specifically switch the current running state of the second virtual machine to the non-migration state by:
judging whether the current running state is a migration state;
if the current running state is a migration state, switching the current running state to a temporary non-migration state;
in the temporary non-migration state, acquiring the number of virtual I/O rings Virtio-ring undergoing I/O data interaction from the combined transceiving queue of the paravirtualized network card driver currently run by the second virtual machine;
and if the Virtio-ring number is 0, switching the current running state from the temporary non-migration state to the non-migration state.
Further, referring to fig. 10, an embodiment of the present disclosure further provides a server 500 for implementing the virtual machine migration method, and in this embodiment, the server 500 may be implemented by a bus 510 as a general bus architecture. The bus 510 may include any number of interconnecting buses and bridges depending on the specific application of the server 500 and the overall design constraints. Bus 510 connects together various circuits including processor 520, storage medium 530, and bus interface 540. Alternatively, the server 500 may connect the network adapter 550 or the like via the bus 510 using the bus interface 540. The network adapter 550 may be used to implement signal processing functions of a physical layer in the server 500 and implement transmission and reception of radio frequency signals through an antenna. The user interface 560 may connect external devices such as: a keyboard, a display, a mouse or a joystick, etc. The bus 510 may also connect various other circuits such as timing sources, peripherals, voltage regulators, or power management circuits, which are well known in the art, and therefore, will not be described in detail.
Alternatively, the server 500 may be configured as a general purpose processing system, commonly referred to as a chip, including: one or more microprocessors providing processing functions, and an external memory providing at least a portion of the storage medium 530, all connected together with other support circuitry through an external bus architecture.
Alternatively, server 500 may be implemented using: an ASIC (application specific integrated circuit) having a processor 520, a bus interface 540, a user interface 560; and at least a portion of storage medium 530 integrated in a single chip, or server 500 may be implemented using: one or more FPGAs (field programmable gate arrays), PLDs (programmable logic devices), controllers, state machines, gate logic, discrete hardware components, any other suitable circuitry, or any combination of circuitry capable of performing the various functions described throughout this disclosure.
Among other things, the processor 520 is responsible for managing the bus 510 and general processing (including the execution of software stored on the storage medium 530). Processor 520 may be implemented using one or more general-purpose processors and/or special-purpose processors. Examples of processor 520 include microprocessors, microcontrollers, DSP processors, and other circuits capable of executing software. Software should be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
Storage medium 530 is shown in fig. 10 as being separate from processor 520, however, it will be readily apparent to those skilled in the art that storage medium 530, or any portion thereof, may be located external to server 500. Storage medium 530 may include, for example, a transmission line, a carrier waveform modulated with data, and/or a computer product separate from the wireless node, all of which may be accessed by processor 520 through bus interface 540. Alternatively, storage medium 530, or any portion thereof, may be integrated into processor 520, e.g., may be a cache and/or general purpose registers.
The processor 520 may be configured to perform a virtual machine migration method of the present disclosure.
Further, an embodiment of the present disclosure also provides a non-volatile computer storage medium, where the computer storage medium stores computer-executable instructions, and the computer-executable instructions may execute the virtual machine migration method in any of the above method embodiments.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus and method embodiments described above are illustrative only, as the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present disclosure may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
Alternatively, all or part of the implementation may be in software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions described in accordance with the embodiments of the disclosure are generated, in whole or in part, when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
It will be evident to those skilled in the art that the disclosure is not limited to the details of the foregoing illustrative embodiments, and that the present disclosure may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the disclosure being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (15)

1. A virtual machine migration method applied to a first host machine in communication connection with a second host machine, wherein the first host machine and the second host machine comprise a single-root I/O virtualization network interface card (SR-IOV NIC) comprising a hardware virtualization drive network card and a paravirtualization drive network card, the method comprising:
after a virtual machine migration instruction is received, sending configuration information of a first virtual machine to be migrated to the second host machine, so that the second host machine creates a second virtual machine according to the configuration information, and configures a hardware virtualization network card driver and a paravirtualization network card driver for the second virtual machine;
when the second virtual machine completes configuration, switching the current running state of the first virtual machine to a migration state, wherein in the migration state, the first virtual machine runs a paravirtualization network card driver to perform interaction of I/O data with the paravirtualization drive network card of the SR-IOV NIC through a combined transceiving queue of the paravirtualization network card driver, and the combined transceiving queue is generated in the following manner: acquiring queue parameters of a virtual function VF queue of the hardware virtualization network card driver and the address of each I/O memory block in a virtual I/O ring Virtio-ring of the paravirtualization network card driver; generating a queue union set of the VF queue and the Virtio-ring according to the queue parameters of the VF queue and the address of each I/O memory block in the Virtio-ring, and taking the queue union set as the combined transceiving queue;
and sending the memory information of the first virtual machine to the second host machine so that the second host machine synchronizes the memory information of the first virtual machine to the second virtual machine.
2. The virtual machine migration method according to claim 1, wherein the step of sending the configuration information of the first virtual machine to be migrated to the second host machine after receiving the virtual machine migration instruction includes:
obtaining the virtual machine information of a first virtual machine to be migrated in the first host machine and the host machine information of a second host machine to which the first virtual machine needs to be migrated from the virtual machine migration instruction;
acquiring configuration information of the first virtual machine according to the virtual machine information;
and sending the configuration information to the second host according to the host information.
3. The method for migrating a virtual machine according to claim 1, wherein after the step of sending the memory information of the first virtual machine to the second host machine, the method further comprises:
after the memory information of the first virtual machine is synchronized to the second virtual machine, pausing running of the first virtual machine so that the second host starts the second virtual machine when the first virtual machine pauses running, and switching the current running state of the second virtual machine to a non-migration state, wherein in the non-migration state, the second virtual machine runs a hardware virtualization network card driver to perform I/O data interaction with the hardware virtualization driver network card of the SR-IOV NIC according to a combined transceiving queue of the hardware virtualization network card driver.
4. The virtual machine migration method according to claim 1, wherein the step of switching the current running state of the first virtual machine to the migration state includes:
judging whether the current running state of the first virtual machine is a non-migration state;
if the current running state is a non-migration state, switching the current running state to a temporary migration state;
in the temporary migration state, acquiring, from the combined transceiving queue of the hardware virtualization network card driver currently run by the first virtual machine, the number of virtual function (VF) queues still performing I/O data interaction;
and if the number of VF queues is 0, switching the current running state from the temporary migration state to the migration state.
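The switch described in claim 4 is a small state machine: enter a temporary migration state first, then commit to the migration state only once no VF queue still has I/O in flight. A sketch of that logic, with state names and the single-call shape chosen for illustration (the patent does not prescribe this API):

```python
import enum

class RunState(enum.Enum):
    NON_MIGRATION = "non-migration"
    TEMP_MIGRATION = "temporary-migration"
    MIGRATION = "migration"

def switch_to_migration(current: RunState, active_vf_queues: int) -> RunState:
    """Enter the temporary migration state from the non-migration state,
    then commit to the migration state only when the count of VF queues
    still performing I/O data interaction has drained to zero."""
    if current == RunState.NON_MIGRATION:
        current = RunState.TEMP_MIGRATION
    if current == RunState.TEMP_MIGRATION and active_vf_queues == 0:
        current = RunState.MIGRATION
    return current
```

The temporary state is what makes the switch safe: outstanding hardware-driver I/O finishes before the paravirtualized path becomes the only active one.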
5. A virtual machine migration method, applied to a second host that receives a first virtual machine migrated from a first host, wherein the first host and the second host each comprise a single-root I/O virtualization network interface card (SR-IOV NIC) that comprises a hardware virtualization-driven network card and a paravirtualization-driven network card, the method comprising:
receiving configuration information of a first virtual machine to be migrated, which is sent by the first host according to a virtual machine migration instruction;
creating a second virtual machine according to the configuration information, and configuring a hardware virtualization network card driver and a paravirtualization network card driver for the second virtual machine, so that the first host switches the current running state of the first virtual machine to a migration state when the second virtual machine completes configuration, wherein in the migration state, the first virtual machine runs the paravirtualization network card driver to perform I/O data interaction with the paravirtualization-driven network card of the SR-IOV NIC through a combined transceiving queue of the paravirtualization network card driver, and the combined transceiving queue is generated in the following manner: acquiring queue parameters of a virtual function (VF) queue of the hardware virtualization network card driver and the address of each I/O memory block in a virtual I/O ring (Virtio-ring) of the paravirtualization network card driver; and generating a queue union of the VF queue and the Virtio-ring according to the queue parameters of the VF queue and the address of each I/O memory block in the Virtio-ring, and taking the queue union as the combined transceiving queue;
and receiving the memory information of the first virtual machine sent by the first host, and synchronizing the memory information of the first virtual machine to the second virtual machine.
6. The virtual machine migration method according to claim 5, wherein after the step of synchronizing the memory information of the first virtual machine into the second virtual machine, the method further comprises:
when the first virtual machine is suspended from running, starting the second virtual machine, and switching the current running state of the second virtual machine to a non-migration state, wherein in the non-migration state, the second virtual machine runs the hardware virtualization network card driver to perform I/O data interaction with the hardware virtualization-driven network card of the SR-IOV NIC through a combined transceiving queue of the hardware virtualization network card driver.
7. The virtual machine migration method according to claim 6, wherein the step of switching the current running state of the second virtual machine to the non-migration state includes:
judging whether the current running state is a migration state;
if the current running state is the migration state, switching the current running state to a temporary non-migration state;
in the temporary non-migration state, acquiring, from the combined transceiving queue of the paravirtualization network card driver currently run by the second virtual machine, the number of virtual I/O rings (Virtio-rings) still performing I/O data interaction;
and if the number of Virtio-rings is 0, switching the current running state from the temporary non-migration state to the non-migration state.
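Claim 7 mirrors claim 4 on the destination side: the second virtual machine sits in a temporary non-migration state until the count of Virtio-rings still carrying I/O drains to zero, then commits. A polling-style sketch, where the activity samples and the `max_polls` bound are illustrative assumptions:

```python
import itertools

def drain_then_commit(ring_activity, max_polls=10):
    """From the migration state, enter 'temporary-non-migration', poll the
    number of Virtio-rings still performing I/O data interaction in the
    combined transceiving queue, and commit to 'non-migration' only when
    that count reaches 0 (or give up after max_polls samples)."""
    state = "temporary-non-migration"
    for active_rings in itertools.islice(ring_activity, max_polls):
        if active_rings == 0:
            state = "non-migration"
            break
    return state

# Simulated samples of in-flight Virtio-ring counts: 3, then 1, then drained.
final_state = drain_then_commit(iter([3, 1, 0]))
```

Once committed, the hardware virtualization driver takes over I/O, restoring SR-IOV forwarding performance on the destination host.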
8. A virtual machine migration method, applied to a first host and a second host communicatively connected to each other, wherein the first host and the second host each comprise a single-root I/O virtualization network interface card (SR-IOV NIC) that comprises a hardware virtualization-driven network card and a paravirtualization-driven network card, the method comprising:
after receiving a virtual machine migration instruction, the first host machine sends configuration information of a first virtual machine to be migrated to the second host machine;
the second host creates a second virtual machine according to the configuration information, and configures a hardware virtualization network card driver and a paravirtualization network card driver for the second virtual machine;
when the second virtual machine completes configuration, the first host switches the current running state of the first virtual machine to a migration state, and sends memory information of the first virtual machine to the second host in the migration state, wherein in the migration state, the first virtual machine runs the paravirtualization network card driver to perform I/O data interaction with the paravirtualization-driven network card of the SR-IOV NIC through a combined transceiving queue of the paravirtualization network card driver, and the combined transceiving queue is generated in the following manner: acquiring queue parameters of a virtual function (VF) queue of the hardware virtualization network card driver and the address of each I/O memory block in a virtual I/O ring (Virtio-ring) of the paravirtualization network card driver; and generating a queue union of the VF queue and the Virtio-ring according to the queue parameters of the VF queue and the address of each I/O memory block in the Virtio-ring, and taking the queue union as the combined transceiving queue;
and the second host machine synchronizes the memory information of the first virtual machine to the second virtual machine.
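The two-host flow of claim 8 can be summarized end to end: send configuration, create and configure the destination VM with both drivers, switch the source VM to the migration state, then synchronize memory. The sketch below models the hosts as plain dictionaries; the dictionary keys and the `migrate` function are hypothetical illustrations, not a real hypervisor API.

```python
def migrate(first_host: dict, second_host: dict, vm_id: str) -> dict:
    # Step 1: the first host sends the configuration information of VM1.
    config = first_host["vms"][vm_id]["config"]
    # Step 2: the second host creates VM2 and configures both the hardware
    # virtualization and paravirtualization network card drivers for it.
    vm2 = {"config": config,
           "drivers": ["hardware-virtualization-nic", "paravirtualization-nic"],
           "memory": None}
    second_host["vms"][vm_id] = vm2
    # Step 3: with VM2 configured, the first host switches VM1 into the
    # migration state (paravirtualized I/O path) and sends its memory.
    first_host["vms"][vm_id]["state"] = "migration"
    # Step 4: the second host synchronizes the memory information into VM2.
    vm2["memory"] = dict(first_host["vms"][vm_id]["memory"])
    return vm2

host_a = {"vms": {"vm1": {"config": {"cpus": 2},
                          "memory": {"0x0": b"data"},
                          "state": "non-migration"}}}
host_b = {"vms": {}}
vm2 = migrate(host_a, host_b, "vm1")
```

Note the ordering: the state switch on the source happens only after the destination VM is fully configured, so the paravirtualized path is guaranteed to have a landing point before memory transfer begins.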
9. A virtual machine migration apparatus, applied to a first host communicatively connected to a second host, wherein the first host and the second host each comprise a single-root I/O virtualization network interface card (SR-IOV NIC) that comprises a hardware virtualization-driven network card and a paravirtualization-driven network card, the apparatus comprising:
a configuration information sending module, configured to send configuration information of a first virtual machine to be migrated to the second host after receiving a virtual machine migration instruction, so that the second host creates a second virtual machine according to the configuration information and configures a hardware virtualization network card driver and a paravirtualization network card driver for the second virtual machine;
a first switching module, configured to switch the current running state of the first virtual machine to a migration state when the second virtual machine completes configuration, wherein in the migration state, the first virtual machine runs the paravirtualization network card driver to perform I/O data interaction with the paravirtualization-driven network card of the SR-IOV NIC through a combined transceiving queue of the paravirtualization network card driver, and the combined transceiving queue is generated in the following manner: acquiring queue parameters of a virtual function (VF) queue of the hardware virtualization network card driver and the address of each I/O memory block in a virtual I/O ring (Virtio-ring) of the paravirtualization network card driver; and generating a queue union of the VF queue and the Virtio-ring according to the queue parameters of the VF queue and the address of each I/O memory block in the Virtio-ring, and taking the queue union as the combined transceiving queue;
and a memory information sending module, configured to send the memory information of the first virtual machine to the second host, so that the second host synchronizes the memory information of the first virtual machine to the second virtual machine.
10. The virtual machine migration apparatus according to claim 9, wherein the configuration information sending module sends the configuration information of the first virtual machine to be migrated to the second host by:
obtaining the virtual machine information of a first virtual machine to be migrated in the first host machine and the host machine information of a second host machine to which the first virtual machine needs to be migrated from the virtual machine migration instruction;
acquiring configuration information of the first virtual machine according to the virtual machine information;
and sending the configuration information to the second host according to the host information.
11. The virtual machine migration apparatus according to claim 9, wherein said apparatus further comprises:
a suspension module, configured to suspend the first virtual machine after memory information of the first virtual machine is synchronized to the second virtual machine, so that the second host starts the second virtual machine while the first virtual machine is suspended, and switches the current running state of the second virtual machine to a non-migration state, wherein in the non-migration state, the second virtual machine runs the hardware virtualization network card driver to perform I/O data interaction with the hardware virtualization-driven network card of the SR-IOV NIC through a combined transceiving queue of the hardware virtualization network card driver.
12. The virtual machine migration apparatus according to claim 9, wherein the first switching module switches the current running state of the first virtual machine to the migration state by:
judging whether the current running state of the first virtual machine is a non-migration state;
if the current running state is a non-migration state, switching the current running state to a temporary migration state;
in the temporary migration state, acquiring, from the combined transceiving queue of the hardware virtualization network card driver currently run by the first virtual machine, the number of virtual function (VF) queues still performing I/O data interaction;
and if the number of VF queues is 0, switching the current running state from the temporary migration state to the migration state.
13. A virtual machine migration apparatus, applied to a second host that receives a first virtual machine migrated from a first host, wherein the first host and the second host each comprise a single-root I/O virtualization network interface card (SR-IOV NIC) that comprises a hardware virtualization-driven network card and a paravirtualization-driven network card, the apparatus comprising:
a configuration information receiving module, configured to receive configuration information of the first virtual machine to be migrated, which is sent by the first host according to a virtual machine migration instruction;
a creating module, configured to create a second virtual machine according to the configuration information, and configure a hardware virtualization network card driver and a paravirtualization network card driver for the second virtual machine, so that the first host switches the current running state of the first virtual machine to a migration state when the second virtual machine completes configuration, wherein in the migration state, the first virtual machine runs the paravirtualization network card driver to perform I/O data interaction with the paravirtualization-driven network card of the SR-IOV NIC through a combined transceiving queue of the paravirtualization network card driver, and the combined transceiving queue is generated in the following manner: acquiring queue parameters of a virtual function (VF) queue of the hardware virtualization network card driver and the address of each I/O memory block in a virtual I/O ring (Virtio-ring) of the paravirtualization network card driver; and generating a queue union of the VF queue and the Virtio-ring according to the queue parameters of the VF queue and the address of each I/O memory block in the Virtio-ring, and taking the queue union as the combined transceiving queue;
and a memory information receiving module, configured to receive the memory information of the first virtual machine sent by the first host, and synchronize the memory information of the first virtual machine into the second virtual machine.
14. The virtual machine migration apparatus according to claim 13, wherein said apparatus further comprises:
and a second switching module, configured to start the second virtual machine when the first virtual machine is suspended from running, and switch the current running state of the second virtual machine to a non-migration state, wherein in the non-migration state, the second virtual machine runs the hardware virtualization network card driver to perform I/O data interaction with the hardware virtualization-driven network card of the SR-IOV NIC through a combined transceiving queue of the hardware virtualization network card driver.
15. The virtual machine migration apparatus according to claim 14, wherein the second switching module switches the current running state of the second virtual machine to the non-migration state by:
judging whether the current running state is a migration state;
if the current running state is the migration state, switching the current running state to a temporary non-migration state;
in the temporary non-migration state, acquiring, from the combined transceiving queue of the paravirtualization network card driver currently run by the second virtual machine, the number of virtual I/O rings (Virtio-rings) still performing I/O data interaction;
and if the number of Virtio-rings is 0, switching the current running state from the temporary non-migration state to the non-migration state.
CN201811505526.XA 2018-12-10 2018-12-10 Virtual machine migration method and device Active CN109739618B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811505526.XA CN109739618B (en) 2018-12-10 2018-12-10 Virtual machine migration method and device


Publications (2)

Publication Number Publication Date
CN109739618A CN109739618A (en) 2019-05-10
CN109739618B true CN109739618B (en) 2021-04-06

Family

ID=66358812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811505526.XA Active CN109739618B (en) 2018-12-10 2018-12-10 Virtual machine migration method and device

Country Status (1)

Country Link
CN (1) CN109739618B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110955517B * 2019-09-03 2021-08-20 Huawei Technologies Co., Ltd. Message forwarding method, computer equipment and intermediate equipment
CN110928728A * 2019-11-27 2020-03-27 Shanghai Yingfang Software Co., Ltd. Virtual machine copying and switching method and system based on snapshot
CN114691287A * 2020-12-29 2022-07-01 Huawei Cloud Computing Technologies Co., Ltd. Virtual machine migration method, device and system
CN112596818B * 2020-12-30 2023-12-05 Shanghai Zhongyuan Network Co., Ltd. Application program control method, system and device
CN113553137B * 2021-06-17 2022-11-01 PLA Strategic Support Force Information Engineering University DPDK-based access capability network element high-speed data processing method under NFV architecture
CN113472571B * 2021-06-28 2023-11-03 Beijing Huijun Technology Co., Ltd. Intelligent network card device and bypass detection method of intelligent network card device
CN114760242B * 2022-03-30 2024-04-09 Sangfor Technologies Inc. Migration method and device of virtual router, electronic equipment and storage medium
CN114546604B * 2022-04-26 2022-08-05 Saixin Semiconductor Technology (Beijing) Co., Ltd. Live migration method and device for virtual machine
CN115499385B * 2022-09-21 2023-09-12 China Electronics Cloud Digital Intelligence Technology Co., Ltd. Method for preventing packet loss during live migration of vDPA virtual machine
CN116932229B * 2023-09-13 2023-12-12 New H3C Information Technologies Co., Ltd. Memory allocation method and device, network manager and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102594652B * 2011-01-13 2015-04-08 Huawei Technologies Co., Ltd. Migration method of virtual machine, switch and virtual machine system
CN102681913A * 2011-12-21 2012-09-19 ZTE Corporation Method and device for realizing live migration of virtual machines
CN106302322B * 2015-05-19 2020-05-26 Tencent Technology (Shenzhen) Co., Ltd. Virtual machine data flow management method and system
US9858102B2 * 2015-05-21 2018-01-02 Dell Products, L.P. Data path failover method for SR-IOV capable ethernet controller
CN106557444B * 2015-09-30 2022-01-25 ZTE Corporation Method and device for realizing SR-IOV network card and method and device for realizing dynamic migration
CN106815067B * 2015-11-30 2020-08-18 China Mobile Communications Group Co., Ltd. Online migration method and device for virtual machine with I/O virtualization
CN107544841B * 2016-06-29 2022-12-02 ZTE Corporation Virtual machine live migration method and system
CN107729149A * 2017-10-16 2018-02-23 Zhengzhou Yunhai Information Technology Co., Ltd. A kind of virtual machine migration method and device

Also Published As

Publication number Publication date
CN109739618A (en) 2019-05-10

Similar Documents

Publication Publication Date Title
CN109739618B (en) Virtual machine migration method and device
US10095645B2 (en) Presenting multiple endpoints from an enhanced PCI express endpoint device
US20210232528A1 (en) Configurable device interface
US20180373557A1 (en) System and Method for Virtual Machine Live Migration
US9996484B1 (en) Hardware acceleration for software emulation of PCI express compliant devices
US8533713B2 (en) Efficent migration of virtual functions to enable high availability and resource rebalance
US8776090B2 (en) Method and system for network abstraction and virtualization for a single operating system (OS)
US9031081B2 (en) Method and system for switching in a virtualized platform
US9176767B2 (en) Network interface card device pass-through with multiple nested hypervisors
US7970852B2 (en) Method for moving operating systems between computer electronic complexes without loss of service
US9720712B2 (en) Physical/virtual device failover with a shared backend
US8966480B2 (en) System for migrating a virtual machine between computers
JP2019503599A (en) Packet processing method, host and system in cloud computing system
US10509758B1 (en) Emulated switch with hot-plugging
CN109753346B (en) Virtual machine live migration method and device
US9537797B2 (en) MTU management in a virtualized computer system
CN112306624A (en) Information processing method, physical machine and PCIE (peripheral component interface express) equipment
EP4053706A1 (en) Cross address-space bridging
CN113312143A (en) Cloud computing system, command processing method and virtualization simulation device
CN108737131B (en) Method and device for realizing network equipment virtualization
US9483290B1 (en) Method and system for virtual machine communication
CN104239120A (en) State information synchronization method, state information synchronization device and state information synchronization system for virtual machine
EP3823230B1 (en) Communication apparatus, communication method, and computer-readable medium
WO2019165355A1 (en) Technologies for nic port reduction with accelerated switching
CN107949828B (en) Method and apparatus for dynamically migrating execution of machine code in an application to a virtual machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant