CN116257276A - Virtual host machine user back-end upgrading method supporting virtualized hardware acceleration - Google Patents


Info

Publication number
CN116257276A
Authority
CN
China
Prior art keywords
virtual host
queue
user
virtual
host machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310513795.5A
Other languages
Chinese (zh)
Other versions
CN116257276B (en)
Inventor
王旭
陈森法
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Xingyun Zhilian Technology Co Ltd
Original Assignee
Zhuhai Xingyun Zhilian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Xingyun Zhilian Technology Co Ltd
Priority to CN202310513795.5A
Publication of CN116257276A
Application granted
Publication of CN116257276B
Active legal status
Anticipated expiration legal status

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 - Arrangements for software engineering
    • G06F8/60 - Software deployment
    • G06F8/65 - Updates
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 - Arrangements for software engineering
    • G06F8/70 - Software maintenance or management
    • G06F8/71 - Version control; Configuration management
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a method for upgrading a virtual host user backend supporting virtualized hardware acceleration, which is applied at the virtual host user backend. The virtual host user backend supports virtualized hardware acceleration of a paravirtualized architecture; the paravirtualized architecture comprises a front-end driver side and a back-end device side; and the virtualized hardware acceleration comprises data plane pass-through between the front-end driver side and the back-end device side, together with control plane negotiation between the two sides realized through the virtual host user backend. The method requires neither shutting down and restarting the virtual machine nor migrating it, can meet the need to change and upgrade a virtual host user backend supporting virtualized hardware acceleration, such as changing and upgrading the OVS-DPDK software, is conducive to the stability of the virtual machine during the upgrade, and has the advantages of high efficiency and low resource occupation.

Description

Virtual host machine user back-end upgrading method supporting virtualized hardware acceleration
Technical Field
The application relates to the technical field of computers, in particular to a virtual host machine user back-end upgrading method supporting acceleration of virtualized hardware.
Background
With the development of computer science, cloud computing, data centers, and the like, virtualization technology has been widely used. Virtualization technology converts various computer hardware resources, such as memory, disks, and central processing units, into one or more virtual computers that can be partitioned and combined, so that the hardware resources can be used without being limited by the existing architecture and physical configuration. The virtio architecture provides a paravirtualization standard for interactions between a general-purpose input/output device within a virtual machine, i.e., the virtio front end, and a virtualization handler, i.e., the virtio back end. A multi-layer virtual switch (Open vSwitch, OVS) based on the Data Plane Development Kit (DPDK), also known as OVS-DPDK, combines with the virtio architecture to provide a virtio network back-end driver scheme, i.e., vhost-user. The virtio architecture combined with virtualized hardware acceleration (vhost data path acceleration, VDPA), also called virtual host data path acceleration, can realize pass-through between the data plane and the hardware network card. VDPA means that control information is transferred to the hardware; after the hardware completes the configuration of the data plane, the data communication process is completed by the data plane passed through between the virtual machine and the hardware, without intervention of the host. However, in a paravirtualized architecture supporting virtualized hardware acceleration, i.e., supporting VDPA, the negotiation of the control plane is still done with OVS-DPDK based on the virtual host user (vhost-user) protocol.
Network cards, virtual machines, and the like based on a paravirtualized architecture are generally used for complex networking services such as traffic offloading, and therefore require frequent software changes, for example changes and upgrades of the OVS-DPDK software; that is, the virtual host user backend supporting virtualized hardware acceleration needs to be changed and upgraded frequently. In the prior art, one upgrade method is to shut down the virtual machine, remove the original virtual host user backend module, install the new module, and then restart the virtual machine, but this entails the shutdown and restart of the virtual machine itself. Another upgrade method is to use virtual machine migration to upgrade on the current host, but migration occupies considerable physical resources for backup, results in a longer migration time, and may lose packets during the migration. Yet another upgrade method is to use network traffic forwarding to switch between user mode and kernel mode, so that network service is maintained while the old module is removed and the new module inserted; but switching the forwarding of network traffic causes additional loss and, in the context of frequent changes and upgrades, leads to frequent switching between user mode and kernel mode.
Therefore, the present application provides a method for upgrading a virtual host user backend supporting virtualized hardware acceleration that requires neither shutting down or restarting the virtual machine nor migrating it, meets the need to change and upgrade the virtual host user backend supporting virtualized hardware acceleration, such as changing and upgrading the OVS-DPDK software, is conducive to the stability of the virtual machine during the upgrade, and has the advantages of high efficiency and low resource occupation.
Disclosure of Invention
In a first aspect, the present application provides a method for upgrading a virtual host user backend supporting virtualized hardware acceleration. The method is applied at the virtual host user backend; the virtual host user backend supports virtualized hardware acceleration of a paravirtualized architecture; the paravirtualized architecture comprises a front-end driver side and a back-end device side; and the virtualized hardware acceleration comprises data plane pass-through between the front-end driver side and the back-end device side, together with control plane negotiation between the two sides realized through the virtual host user backend. The method comprises: in response to an upgrade request associated with the virtual host user backend, stopping the transmit-receive packet queue corresponding to the back-end device side and draining the transmit-receive packet queue, wherein the transmit-receive packet queue is used for data interaction between the front-end driver side and the back-end device side, and the data interaction is realized by the front-end driver side updating a first pointer of the ring buffer corresponding to the transmit-receive packet queue and the back-end device side updating a second pointer of the ring buffer; at least after the transmit-receive packet queue has been drained, reading, through the back-end device side, a first position of the second pointer of the ring buffer corresponding to the transmit-receive packet queue; transmitting the first position through the virtual host user backend to the system simulator that simulates the front-end driver side, the first position then being saved by the system simulator; at least after the first position has been transmitted to the system simulator by the virtual host user backend, executing the upgrade flow corresponding to the upgrade request on the virtual host user backend; and after the upgrade flow has been executed, issuing, by the system simulator, queue parameters including the first position to the back-end device side, so as to restore the queue state of the transmit-receive packet queue to the queue state at the time the back-end device side read the first position.
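The sequence of steps above can be sketched in simplified form as follows. This is an illustrative model only, not code from the application; all class and function names (`TxRxQueue`, `Simulator`, `upgrade_backend`) and the queue parameter values shown are hypothetical.

```python
class TxRxQueue:
    """Models one transmit-receive packet queue backed by a ring buffer."""
    def __init__(self, length=256):
        self.length = length
        self.avail_idx = 0       # first pointer: updated by the front-end driver side
        self.last_avail_idx = 0  # second pointer: updated by the back-end device side
        self.stopped = False

    def drain(self):
        # Process all pending descriptors so that no data remains in the queue.
        while self.last_avail_idx != self.avail_idx:
            self.last_avail_idx = (self.last_avail_idx + 1) % (1 << 16)

class Simulator:
    """Stands in for the system simulator (e.g. QEMU) that saves queue state."""
    def __init__(self):
        self.saved = {}

    def save_position(self, qid, pos):
        self.saved[qid] = pos

    def queue_params(self, qid, dma_addr, length):
        # Queue parameters re-issued to the back-end device side after the upgrade.
        return {"last_avail_idx": self.saved[qid],
                "dma_addr": dma_addr, "len": length}

def upgrade_backend(queue, simulator, qid=0):
    # 1. Stop and drain the queue in response to the upgrade request.
    queue.stopped = True
    queue.drain()
    # 2. Read the first position of the second pointer of the ring buffer.
    first_position = queue.last_avail_idx
    # 3. Hand the position to the system simulator, which saves it.
    simulator.save_position(qid, first_position)
    # 4. Execute the upgrade flow (e.g. swap the OVS-DPDK process); elided here.
    # 5. The simulator re-issues queue parameters to restore the queue state.
    params = simulator.queue_params(qid, dma_addr=0x1000, length=queue.length)
    restored = TxRxQueue(params["len"])
    restored.last_avail_idx = params["last_avail_idx"]
    restored.avail_idx = first_position
    return restored
```

Because the queue is drained before the position is read, restoring only the saved index (plus the DMA address and length) is enough to resume without replaying in-flight descriptors.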
According to the first aspect of the application, the virtual machine is neither shut down and restarted nor migrated; the need to change and upgrade the virtual host user backend supporting virtualized hardware acceleration, such as changing and upgrading the OVS-DPDK software, is met; stability of the virtual machine during the upgrade is facilitated; and the method has the advantages of high efficiency and low resource occupation.
In a possible implementation manner of the first aspect of the present application, draining the transmit-receive packet queue comprises: processing, through the back-end device side, all pending data in the transmit-receive packet queue.
In one possible implementation manner of the first aspect of the present application, the virtual host user backend includes a data plane development suite based multi-layer virtual switch, and the upgrade request associated with the virtual host user backend includes an upgrade request associated with the data plane development suite based multi-layer virtual switch.
In a possible implementation manner of the first aspect of the present application, the method for upgrading the virtual host user backend further comprises: during execution of the upgrade flow corresponding to the upgrade request on the virtual host user backend, the connection between the virtual host user backend and the paravirtualized architecture is disconnected and the traffic of the paravirtualized architecture is interrupted.
In a possible implementation manner of the first aspect of the present application, the paravirtualized architecture is a virtio architecture.
In a possible implementation manner of the first aspect of the present application, the backend device side includes at least one virtualized hardware acceleration device, and the virtualized hardware acceleration includes offloading message forwarding operation hardware to the at least one virtualized hardware acceleration device.
In a possible implementation manner of the first aspect of the present application, the control plane negotiation between the front end driver side and the back end device side implemented by the virtual host user back end includes: the system simulator negotiates with the virtual host user backend based on a virtual host user protocol.
In a possible implementation manner of the first aspect of the present application, the control plane negotiation between the front end driver side and the back end device side implemented by the virtual host user back end further includes: and the virtual host user back end transmits data plane parameters to the back end equipment side, wherein the data plane parameters are used for configuring data plane direct connection between the front end driving side and the back end equipment side.
In a possible implementation manner of the first aspect of the present application, transferring, by the virtual host user backend, the first position to the system simulator comprises: extending the virtual host user protocol to add a first message format, based on which the virtual host user backend communicates the first position to the system simulator.
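A generic vhost-user message carries a request type, flags, and payload size, followed by the payload. The sketch below shows how such a first message format might be framed; the request identifier `VHOST_USER_SET_SAVED_AVAIL_IDX` and the payload layout are assumptions for illustration, not constants defined by the actual protocol or by this application.

```python
import struct

# Hypothetical new request ID; a real extension must pick an unused value.
VHOST_USER_SET_SAVED_AVAIL_IDX = 64  # assumption, not a real protocol constant
VHOST_USER_VERSION = 0x1

def pack_first_position_msg(queue_index, first_position):
    # Generic vhost-user framing: u32 request, u32 flags, u32 payload size,
    # followed by the payload (here: u32 queue index + u32 ring position).
    payload = struct.pack("<II", queue_index, first_position)
    header = struct.pack("<III", VHOST_USER_SET_SAVED_AVAIL_IDX,
                         VHOST_USER_VERSION, len(payload))
    return header + payload

def unpack_first_position_msg(data):
    # Inverse of pack_first_position_msg: split header and payload fields.
    req, flags, size = struct.unpack_from("<III", data, 0)
    queue_index, first_position = struct.unpack_from("<II", data, 12)
    return req, queue_index, first_position
```

Keeping the extension inside the existing header-plus-payload framing means an unmodified peer can still skip the message by its advertised size.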
In a possible implementation manner of the first aspect of the present application, the method for upgrading the virtual host user backend further comprises: in response to the upgrade request associated with the virtual host user backend, cleaning up the resources of the virtualized hardware acceleration device on the back-end device side for which a virtual host user connection has been established.
In a possible implementation manner of the first aspect of the present application, the queue parameter further includes a queue direct memory access address and a queue length.
In a possible implementation manner of the first aspect of the present application, the first pointer of the ring buffer is used to indicate an available tag of the transmit-receive packet queue, the second pointer of the ring buffer is used to indicate a last available tag of the transmit-receive packet queue, and when the last available tag is consistent with the available tag, there is no data to be processed in the transmit-receive packet queue.
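Because virtio-style ring indices are free-running 16-bit counters, the comparison between the available tag and the last available tag must be taken modulo 2^16; the queue is empty exactly when the two tags agree. A minimal sketch, assuming 16-bit indices:

```python
RING_INDEX_MOD = 1 << 16  # ring indices wrap as free-running 16-bit counters

def pending_descriptors(avail_idx, last_avail_idx):
    # Number of entries the back-end device side has not yet consumed,
    # correct even when avail_idx has wrapped past 65535.
    return (avail_idx - last_avail_idx) % RING_INDEX_MOD

def queue_is_empty(avail_idx, last_avail_idx):
    # No pending data exactly when the last available tag equals the available tag.
    return pending_descriptors(avail_idx, last_avail_idx) == 0
```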
In a possible implementation manner of the first aspect of the present application, restoring the queue state of the transceiver packet queue to the queue state of the transceiver packet queue when the backend device side reads the first location includes: the transmit-receive packet queue is configured such that a last available tag of the transmit-receive packet queue corresponds to the first location.
In a possible implementation manner of the first aspect of the present application, the method for upgrading the virtual host user backend further comprises: after restoring the queue state of the transmit-receive packet queue to the queue state at the time the back-end device side read the first position, synchronizing the physical network card queue of the back-end device side with the transmit-receive packet queue, so as to restore the input/output functions of the paravirtualized architecture.
In a second aspect, embodiments of the present application further provide a computer device, where the computer device includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements a method according to any implementation manner of any one of the foregoing aspects when the computer program is executed.
In a third aspect, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when run on a computer device, cause the computer device to perform a method according to any one of the implementations of any one of the above aspects.
In a fourth aspect, embodiments of the present application also provide a computer program product comprising instructions stored on a computer-readable storage medium, which when run on a computer device, cause the computer device to perform a method according to any one of the implementations of any one of the above aspects.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a paravirtualized architecture that accelerates using a multi-layer virtual switch based on a data plane development suite;
FIG. 2 is a schematic diagram of a paravirtualized architecture supporting virtualized hardware acceleration according to an embodiment of the present application;
fig. 3 is a flow chart of a method for upgrading a user back end of a virtual host machine supporting acceleration of virtualized hardware according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that in the description of this application, "at least one" means one or more than one, and "a plurality" means two or more than two. In addition, the words "first," "second," and the like, unless otherwise indicated, are used solely for the purposes of description and are not to be construed as indicating or implying a relative importance or order.
FIG. 1 is a schematic diagram of a paravirtualized architecture that is accelerated using a multi-layer virtual switch based on a data plane development kit. As shown in fig. 1, host a100 includes user space a110 and kernel space a120. In the user space a110, a system simulator a112, such as QEMU, is deployed for creating virtual machines and providing virtualization simulation such as simulated hardware devices. System simulator a112 creates a virtual machine, client a114. In the context of cloud computing, servers, data centers, and the like, a physical server that provides physical server resources is referred to as a host, and the physical server resources of the host are virtualized and combined into multiple virtual servers that run on the host and are logically isolated from each other, referred to as clients (guests). In some embodiments, the paravirtualized architecture shown in fig. 1 applies to cloud computing, server, and data center scenarios, so host a100 may be a physical server and client a114 may be a virtual server running on host a100 and created using the physical server resources it provides (client a114 being generated by simulation by system simulator a112). In the user space a110, a multi-layer virtual switch (Open vSwitch, OVS) a116 based on a data plane development kit (Data Plane Development Kit, DPDK), also called OVS-DPDK, is also deployed. The multi-layer virtual switch a116 interacts with the client a114 generated by the system simulator a112. As also shown in fig. 1, host a100 includes a physical network interface controller (physical NIC) a130. Deployed in kernel space a120 is a kernel-based virtual machine (Kernel-based Virtual Machine, KVM) a122. The system simulator a112 interacts with the kernel-based virtual machine a122 deployed in the kernel space a120 to provide a paravirtualization solution.
With continued reference to FIG. 1, full-virtualization refers to the technique of fully virtualizing functional components and emulating all the requested instructions. Paravirtualization technology (Para-virtualization) refers to directly applying for use of access to hardware resources to improve virtual machine performance. Hardware-assisted virtualization techniques refer to providing special instruction interception and redirection through a physical platform to enhance hardware virtualization. The paravirtualized architecture shown in fig. 1 combines a paravirtualized technique and a hardware-assisted virtualized technique, completes a part of instructions through hardware and virtualizes a part of instructions to improve input-output performance. As shown in fig. 1, the paravirtualized architecture includes a front-end driver side a140 and a back-end device side a142. Wherein client a114 is created by system simulator a112, front-end driver side a140 is provided inside client a114, e.g. using standard network drivers or source code of various front-end drivers. The back end device side a142 of the paravirtualized architecture is implemented by the system simulator a112 in conjunction with the multi-layer virtual switch a116 of the data plane development suite. Wherein the system simulator a112 negotiates with the front-end driver side a140 inside the client a114 based on a specific protocol (when the paravirtualized architecture is a virtio architecture, the protocol is a virtio protocol), and at the same time, the system simulator a112 negotiates with the multi-layer virtual switch a116 based on a data plane development suite based on a virtual host user (vhost user) protocol, thereby completing the control plane negotiation. 
After the control plane negotiation is completed, the front-end driver side a140 inside the client a114 interacts on the data plane with the multi-layer virtual switch a116 based on the data plane development kit to realize data plane traffic. The data interaction, i.e. the data plane interaction, between the front-end driver side a140 and the back-end device side a142 is shown in fig. 1 with a double-arrowed line. Control plane negotiation, as described above, depends on the system simulator a112 and the multi-layer virtual switch a116 based on the data plane development kit, and involves multiple interactions, including interactions between the client a114 and the front-end driver side a140, interactions between the system simulator a112 and the client a114, and interactions between the system simulator a112 and the multi-layer virtual switch a116. The kernel-based virtual machine a122 deployed in the kernel space a120 serves as the host kernel module in the paravirtualized architecture for providing an operational kernel, and the system simulator a112 serves to provide device simulation and implements the paravirtualized architecture by interacting with the kernel-based virtual machine a122. Here, the front-end driver side a140 of the paravirtualized architecture is located inside the client a114, and the client a114 is created by the system simulator a112 deployed in the user space a110 of the host a100, so the front-end driver side a140 provides a driver module in user mode for receiving requests, encapsulating requests, and notifying the back-end device side a142. The back-end device side a142 provides a back-end handler module for receiving the request, parsing the request according to the transport protocol, and performing the corresponding operations.
The back-end device side a142 in fig. 1 is implemented by the system simulator a112 in conjunction with the multi-layer virtual switch a116. Based on the virtual host user protocol, the system simulator a112 may offload the network packet processing of the paravirtualized architecture into an application of the data plane development kit, for example into the multi-layer virtual switch a116. Thus, the paravirtualized architecture shown in fig. 1, which uses the multi-layer virtual switch based on the data plane development kit for acceleration, provides a user-mode network back-end driver scheme for the paravirtualized architecture; that is, it implements the user-mode back-end device side a142 of the paravirtualized architecture on the basis of the data plane development kit. In addition, the multi-layer virtual switch a116 also interacts with the physical network interface controller a130 to enable interaction with the physical network card. In this way, the paravirtualized architecture shown in fig. 1 establishes an interaction mechanism between the front-end driver side a140, i.e. the general-purpose input/output devices inside a virtual machine such as client a114, and the virtualized back-end device side a142, so as to implement an efficient virtualization process, realize virtualization of general-purpose input/output devices such as network cards and disks, and use the characteristics of paravirtualization, hardware-assisted virtualization, and the data plane development kit to improve data plane traffic performance.
It should be understood that, in the paravirtualized architecture of fig. 1 that uses the multi-layer virtual switch based on the data plane development kit for acceleration, the interactions between the front-end driver side a140 and the back-end device side a142 depend on forwarding by the multi-layer virtual switch a116, and these interactions must also satisfy the relevant specifications of the virtual host user protocol; that is, the communication between the two processes in the user space a110 is performed in a manner specified by the virtual host user protocol, for example via shared memory. As described above, the system simulator a112 negotiates with the front-end driver side a140 inside the client a114 based on a specific protocol such as the virtio protocol, and at the same time negotiates with the multi-layer virtual switch a116 based on the virtual host user protocol, thereby completing the control plane negotiation. Thus, following the feature negotiation and configuration aspects of the virtual host user protocol, the system simulator a112 determines the intersection between the features supported by itself and those supported by the virtual host user backend to complete the feature negotiation, and performs the shared memory configuration so that the processes of the front-end driver side a140 and the back-end device side a142 can interact in the user space a110 in the manner specified by the virtual host user protocol. Depending on the transport protocol employed by the paravirtualized architecture and the specifics of the virtual host user protocol employed, control plane negotiation and data interaction between the front-end driver side a140 and the back-end device side a142 may be accomplished in any suitable manner.
In some embodiments, multiple queues may be set for each device, each handling a different data transmission. In some embodiments, a ring buffer may be provided between the system simulator a112 and the client a114, from which the system simulator a112 and the front-end driver side a140 read data and to which they write data. In some embodiments, the front-end driver side a140 may use two virtual queues, one for receiving and one for transmitting, or may use any number of virtual queues, the specific number being determined according to requirements. In addition, the system simulator a112 may create more virtual clients and implement a paravirtualized architecture for them with reference to the control flow negotiated between the system simulator a112 and the client a114.
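The relationship between such a ring buffer and the two index pointers can be modeled as follows. This is a deliberately simplified illustration (the names `VirtQueue`, `push`, and `pop` are hypothetical), not the virtio specification's actual descriptor/available/used three-ring layout.

```python
class VirtQueue:
    """Minimal model of one virtual queue ring shared by front end and back end."""
    def __init__(self, size=16):
        self.size = size
        self.ring = [None] * size
        self.avail_idx = 0  # advanced by the front-end driver side on write
        self.used_idx = 0   # advanced by the back-end device side on consume

    def push(self, buf):
        # Front-end driver side publishes a buffer at the available index.
        assert self.avail_idx - self.used_idx < self.size, "ring full"
        self.ring[self.avail_idx % self.size] = buf
        self.avail_idx += 1

    def pop(self):
        # Back-end device side consumes the oldest unprocessed buffer.
        assert self.used_idx != self.avail_idx, "ring empty"
        buf = self.ring[self.used_idx % self.size]
        self.used_idx += 1
        return buf

# A network device typically uses at least two such queues: one RX and one TX.
rx_queue, tx_queue = VirtQueue(), VirtQueue()
```

Since each side advances only its own index, the two processes can share the ring without locking, which is the property the shared-memory communication above relies on.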
Fig. 2 is a schematic diagram of a paravirtualized architecture supporting virtualized hardware acceleration according to an embodiment of the present application. As shown in fig. 2, host B200 includes user space B210 and kernel space B220. In the user space B210, a system simulator B212, such as QEMU, is deployed for creating virtual machines and providing virtualization simulation such as simulated hardware devices. System simulator B212 creates a virtual machine, client B214. In some embodiments, the paravirtualized architecture shown in fig. 2 applies to cloud computing, server, and data center scenarios; host B200 may be a physical server, and client B214 may be a virtual server running on host B200 and created using the physical server resources it provides. In the user space B210, a multi-layer virtual switch B216 based on a data plane development kit is also deployed. Host B200 also includes a physical network interface controller B230, which includes a virtual function (VF) B232. A kernel-based virtual machine B222 is deployed in kernel space B220. The paravirtualized architecture shown in fig. 2 employs virtualized hardware acceleration (vhost data path acceleration, VDPA), also called virtual host data path acceleration, to achieve pass-through with the hardware network card and thereby a high-performance network. The paravirtualized architecture includes a front-end driver side B240 and a back-end device side B242. Client B214 is created by system simulator B212, and front-end driver side B240 is provided within client B214, for example using standard network drivers or the source code of various front-end drivers. As for the back-end device side B242, the data plane is offloaded to hardware, namely the physical network interface controller B230 shown in fig. 2.
The control plane negotiation is basically similar to that in the paravirtualized architecture shown in fig. 1, but after the control information has been transferred to the hardware and the hardware has completed the data plane configuration, the data communication process is completed by the pass-through between the virtual machine and the network card. Specifically, the front-end driver side B240 communicates directly with the hardware, i.e., the physical network interface controller B230, so that the back-end device side B242 is in effect offloaded to the physical network interface controller B230, and the data plane interaction between the front-end driver side B240 and the back-end device side B242 proceeds through the pass-through between the front-end driver side B240 and the physical network interface controller B230. Control plane negotiation between the front-end driver side B240 and the back-end device side B242 still relies on the system simulator B212 and the multi-layer virtual switch B216 based on the data plane development kit, and involves multiple interactions, including interactions between the client B214 and the front-end driver side B240, interactions between the system simulator B212 and the client B214, and interactions between the system simulator B212 and the multi-layer virtual switch B216. The multi-layer virtual switch B216 also interacts with the physical network interface controller B230, and when negotiating with the system simulator B212 it must additionally issue data plane parameters to the hardware network card, i.e., the physical network interface controller B230, so that pass-through with the hardware network card on the data plane can be realized.
Thus, the paravirtualized architecture supporting virtualized hardware acceleration shown in fig. 2 realizes pass-through between the general-purpose input/output device inside the virtual machine, such as the front-end driver side B240 inside the client B214, and the hardware, that is, pass-through between the virtual machine and the hardware on the data plane. A physical device such as a physical network card, for example the physical network interface controller B230 shown in fig. 2, or a virtual function in the physical network card, for example the virtual function B232 of the physical network interface controller B230 shown in fig. 2, may be directly allocated to the virtual machine, so that pass-through between the front-end driver side B240 and the physical network interface controller B230 or its virtual function B232 can be achieved. Therefore, virtualization of the general-purpose input/output device can be realized; the characteristics of the paravirtualization technology, the hardware-assisted virtualization technology, and the data plane development suite are utilized to improve data plane traffic performance; and the virtualized hardware acceleration technology is utilized to realize pass-through with the hardware network card so as to achieve a high-performance network.
With continued reference to fig. 2, kernel-based virtual machine B222 deployed in kernel space B220 acts as a host kernel module in the paravirtualized architecture for providing an operational kernel, and system simulator B212 is used to provide device simulation and to implement the paravirtualized architecture by interacting with kernel-based virtual machine B222 deployed in kernel space B220. Here, the front-end driver side B240 of the paravirtualized architecture is located inside the client B214, and the client B214 is created by the system simulator B212 deployed in the user space B210 of the host B200; the front-end driver side B240 thus provides a driver module for receiving requests, encapsulating the requests, and notifying the back-end device side B242. In some embodiments, the front-end driver side B240 is a virtio network driver inside the client B214, which may be in kernel mode or user mode; in both modes, packets are received and transmitted through negotiation with the virtio back end and the subsequent steps. The back-end device side B242 provides a back-end handler module for receiving the request, parsing the request according to the transport protocol, and performing the corresponding operations. The data interaction, i.e., the data plane interaction, between the front-end driver side B240 and the back-end device side B242 is based on the pass-through between the front-end driver side B240 and the physical network interface controller B230 or its virtual function B232. Control plane negotiation between the front-end driver side B240 and the back-end device side B242 still relies on the system simulator B212 and the data plane development suite-based multi-layer virtual switch B216.
Also, the system simulator B212 negotiates with the data plane development suite-based multi-layer virtual switch B216 based on the virtual host user protocol. In some embodiments, the data plane development suite-based multi-layer virtual switch B216 passes the control plane through a VDPA driver (not shown) located in the kernel space B220; in other embodiments, when using the data plane development suite-based multi-layer virtual switch B216, i.e., the OVS-DPDK framework, the VDPA driver (not shown) is located in the user-mode OVS-DPDK process, i.e., in the data plane development suite-based multi-layer virtual switch B216 in the user space B210. In other words, the system simulator B212 and the client B214 still use a control plane protocol meeting the requirements of the virtual host user protocol, control information such as data plane parameters is transferred to the physical network interface controller B230, the physical network interface controller B230 configures the data plane according to the control information, and the configured data plane is used for pass-through between the front-end driver side B240 and the physical network interface controller B230. Because the back-end data processing has been offloaded to hardware such as the physical network interface controller B230, pass-through between the virtual machine and the hardware can be done without host intervention. Also, interrupt information may be sent by hardware, such as the physical network interface controller B230, directly to a virtual machine, such as client B214, without host intervention.
It should be appreciated that, although the pass-through between the front end driver side B240 and the physical network interface controller B230 is implemented, that is, the data plane pass-through between the front end driver side B240 and the back end device side B242 is implemented, the control plane negotiation between the front end driver side B240 and the back end device side B242 still complies with the relevant specifications of the virtual host user protocol. In order to ensure that the front-end driver side B240 and the back-end device side B242 of the paravirtualized architecture can perform correct communication, a communication mode specified by a virtual host user protocol, for example, a communication mode of shared memory, needs to be adopted. In other words, the paravirtualized architecture supporting virtualized hardware acceleration shown in fig. 2 needs to employ a communication scheme specified by the virtual host user protocol that is used by the paravirtualized architecture using data plane development suite-based multi-layer virtual switch acceleration shown in fig. 1. Specifically, the system simulator B212 performs shared memory configuration so that the front-end driver side B240 and the back-end device side B242 can interact according to a communication manner specified by the virtual host user protocol.
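The shared-memory communication mode mandated by the virtual host user (vhost-user) protocol can be illustrated with a minimal sketch. The following Python model is hypothetical and is not the patent's implementation: the message name follows the vhost-user protocol, while the classes are illustrative stand-ins for the system simulator and the back-end process.

```python
# Hypothetical model of vhost-user shared-memory setup. The simulator
# shares guest memory regions with the back end, after which both sides
# see the same bytes; the real mechanism uses fds passed over a unix
# socket and mmap, modeled here with a plain bytearray.

from dataclasses import dataclass, field

@dataclass
class MemRegion:
    guest_phys_addr: int   # base of the region in guest physical memory
    size: int
    backing: bytearray     # stands in for the mmap'd shared file

@dataclass
class VhostUserBackend:
    regions: list = field(default_factory=list)

    def handle(self, request, payload):
        # A real back end dispatches on a numeric request code read from
        # the socket; here we dispatch on the name for readability.
        if request == "VHOST_USER_SET_MEM_TABLE":
            self.regions = payload   # real code would mmap each region fd

    def read_guest(self, gpa, length):
        # Translate a guest physical address through the shared regions.
        for r in self.regions:
            if r.guest_phys_addr <= gpa < r.guest_phys_addr + r.size:
                off = gpa - r.guest_phys_addr
                return bytes(r.backing[off:off + length])
        raise ValueError("address not in any shared region")

# The simulator configures one region of guest memory for the back end.
region = MemRegion(guest_phys_addr=0x4000_0000, size=4096,
                   backing=bytearray(4096))
backend = VhostUserBackend()
backend.handle("VHOST_USER_SET_MEM_TABLE", [region])

# A write by the front end into guest memory ...
region.backing[16:20] = b"ping"
# ... is visible to the back end through the shared mapping.
assert backend.read_guest(0x4000_0010, 4) == b"ping"
```

This is the property the paravirtualized architecture relies on: once memory is shared per the protocol, the two sides interact without copying control messages for each packet.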
With continued reference to fig. 2, the data plane development suite-based multi-layer virtual switch B216 provides a user-mode network back-end driving scheme, i.e., the virtual host user back end, in conjunction with the paravirtualized architecture shown in fig. 2. The paravirtualized architecture combines virtualized hardware acceleration, i.e., VDPA, to achieve the pass-through of the data plane and the hardware network card. VDPA means that control information is transferred to the hardware, and after the hardware completes the configuration of the data plane, the data communication process is completed by the data plane passed through between the virtual machine and the hardware without intervention of the host. However, in the paravirtualized architecture supporting virtualized hardware acceleration shown in fig. 2, the negotiation of the control plane is still completed with the data plane development suite-based multi-layer virtual switch B216 based on the virtual host user protocol. Network cards, virtual machines, and the like based on a paravirtualized architecture are generally used for complex networking services such as traffic offloading, and therefore require frequent software modification, for example modification and upgrading of the software of the data plane development suite-based multi-layer virtual switch B216, that is, frequent modification and upgrading of the virtual host user back end supporting virtualized hardware acceleration. The data plane development suite-based multi-layer virtual switch B216 may serve as a client of the virtual host user and also bears the switch offloading function; for example, the switch forwarding plane processes the forwarding of the first packet of a traffic flow, and subsequent packets of the corresponding data flow are forwarded directly by the hardware network card.
Therefore, when the virtual host user back end supporting virtualized hardware acceleration needs to be changed and upgraded frequently, this may cause frequent upgrades and restarts of the data plane development suite-based multi-layer virtual switch B216, which may cause the loss of queue status information of the physical network card queue, prevent the physical network card queue from being resynchronized with the transmit-receive packet queue in the virtual machine, and thus prevent input and output from being recovered after the upgrade. Moreover, because the paravirtualized architecture supporting virtualized hardware acceleration shown in fig. 2 offloads the back-end data processing to hardware such as the physical network interface controller B230 so as to implement the data plane pass-through, it is difficult to timely grasp the queue status information of the physical network card queue through a virtual machine such as the client B214, which further aggravates the influence of frequently modifying and upgrading the virtual host user back end supporting virtualized hardware acceleration. Therefore, the embodiment of the application provides a virtual host user back-end upgrade method supporting virtualized hardware acceleration, which requires neither shutting down and restarting the virtual machine nor migrating the virtual machine, can meet the requirement of changing and upgrading the virtual host user back end supporting virtualized hardware acceleration, such as the requirement of changing and upgrading the OVS-DPDK software, is beneficial to the stability of the virtual machine during the upgrade, and has the advantages of high efficiency and low resource occupation. This is described in detail below in conjunction with fig. 3.
Fig. 3 is a flowchart of a method for upgrading a user back end of a virtual host machine supporting acceleration of virtualized hardware according to an embodiment of the present application. The virtual host machine user back-end upgrading method is applied to a virtual host machine user back-end, the virtual host machine user back-end is used for supporting virtualized hardware acceleration of a paravirtualized architecture, the paravirtualized architecture comprises a front-end driving side and a back-end equipment side, and the virtualized hardware acceleration comprises data plane straight-through between the front-end driving side and the back-end equipment side and control plane negotiation between the front-end driving side and the back-end equipment side, which are realized through the virtual host machine user back-end. As shown in fig. 3, the method for upgrading the user back-end of the virtual host machine includes the following steps.
Step S302: and responding to an upgrading request associated with the back end of the virtual host user, stopping a receiving and transmitting packet queue corresponding to the back end equipment side and draining the receiving and transmitting packet queue, wherein the receiving and transmitting packet queue is used for data interaction between the front end driving side and the back end equipment side, and the data interaction between the front end driving side and the back end equipment side is realized by updating a first pointer of a ring buffer corresponding to the receiving and transmitting packet queue through the front end driving side and updating a second pointer of the ring buffer through the back end equipment side respectively.
Step S304: and at least after the receiving and transmitting packet queue is emptied, reading a first position of the second pointer of the annular buffer corresponding to the receiving and transmitting packet queue through the back-end equipment side.
Step S306: and transmitting the first position to a system simulator for simulating the front-end driving side through the virtual host machine user back end, and then storing the first position by the system simulator.
Step S308: and executing an upgrading flow corresponding to the upgrading request on the virtual host machine user back end at least after the virtual host machine user back end transmits the first position to the system simulator.
Step S310: after the upgrade process is executed, the system simulator issues the queue parameters including the first position to the back-end equipment side so as to restore the queue state of the receiving-transmitting packet queue to the queue state when the receiving-transmitting packet queue reads the first position at the back-end equipment side.
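The five steps above can be condensed into the following sketch. It is a hypothetical Python model of the upgrade sequence, assuming simple stand-in objects for the back-end device side and the system simulator; the class and method names are illustrative, not an actual API.

```python
# Hypothetical model of steps S302-S310: stop and drain the queue,
# read the second pointer's position, save it in the simulator, run the
# upgrade, then restore the queue state from the saved position.

class BackendDevice:
    """Stand-in for the back-end device side (hardware NIC queue)."""
    def __init__(self):
        self.avail_idx = 7      # first pointer, advanced by the front end
        self.used_idx = 3       # second pointer, advanced by the device
        self.stopped = False

    def stop_and_drain(self):            # S302
        self.stopped = True              # no new writes accepted
        self.used_idx = self.avail_idx   # hardware processes all pending data

    def read_position(self):             # S304: the "first position"
        return self.used_idx

    def restore(self, queue_params):     # S310: re-init from issued params
        self.used_idx = queue_params["position"]
        self.avail_idx = queue_params["position"]
        self.stopped = False

class SystemSimulator:
    """Stand-in for the system simulator (e.g. QEMU)."""
    def __init__(self):
        self.queue_info = {}             # per-queue information structure

    def save_position(self, qid, pos):   # S306
        self.queue_info[qid] = {"position": pos}

    def push_queue_params(self, device, qid):  # S310
        device.restore(self.queue_info[qid])

def upgrade_user_backend(device, simulator, run_upgrade):
    device.stop_and_drain()                    # S302
    pos = device.read_position()               # S304
    simulator.save_position(0, pos)            # S306
    run_upgrade()                              # S308 (backend-only restart)
    simulator.push_queue_params(device, 0)     # S310

device, simulator = BackendDevice(), SystemSimulator()
upgrade_user_backend(device, simulator, run_upgrade=lambda: None)
assert device.used_idx == 7 and not device.stopped
```

The key design point visible in the sketch is that only the simulator, which survives the back-end restart, keeps the position, so the virtual machine itself never stops.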
The virtual host user back-end upgrading method shown in fig. 3 is applied to a virtual host user back-end, and the virtual host user back-end is used for supporting virtualized hardware acceleration of a paravirtualized architecture, wherein the paravirtualized architecture comprises a front-end driving side and a back-end device side, and the virtualized hardware acceleration comprises data plane through between the front-end driving side and the back-end device side and control plane negotiation between the front-end driving side and the back-end device side, which are realized through the virtual host user back-end. Here, the virtual host machine user back-end upgrade method shown in fig. 3 may be applied to the virtual host machine user back-end, which may be used to support a paravirtualized architecture supporting virtualized hardware acceleration as shown in fig. 2. In the method for upgrading the back end of the virtual host user shown in fig. 3, the paravirtualized architecture includes a front end driving side and a back end device side, and may refer to a front end driving side B240 and a back end device side B242 shown in fig. 2, respectively. The virtualized hardware acceleration includes a data plane pass-through between the front end driver side and the back end device side, which may refer to a data plane pass-through between front end driver side B240 and back end device side B242. Control plane negotiation between the front end driver side B240 and the back end device side B242, implemented by the virtual host user back end, may be referred to as control plane negotiation between the front end driver side and the back end device side. Taking the paravirtualized architecture supporting virtualized hardware acceleration as shown in fig. 
2 as an example, the virtualized hardware acceleration realizes the pass-through of a data plane and a hardware network card, and means that control information is transferred to hardware, and after the hardware completes the configuration of the data plane, a data communication process is completed by the data plane passed through between the virtual machine and the hardware without intervention of a host. The negotiation of the control plane is still done with the multi-layer virtual switch based on the data plane development suite based on the virtual host user protocol. Network cards, virtual machines and the like based on a paravirtualized architecture are generally used for complex networking services such as traffic offloading and the like, and therefore, frequent software modification is required, for example, software modification and upgrading of a multi-layer virtual switch based on a data plane development suite are required, that is, frequent modification and upgrading of the back end of a user of a virtual host supporting acceleration of virtualized hardware are required. The multi-layer virtual switch based on the data plane development suite may serve as a client of the virtual host machine user and also bears the function of unloading the switch, for example, the switch forwarding plane processes the forwarding of the first packet of the traffic and the subsequent message corresponding to the data flow is directly forwarded by the hardware network card. 
Therefore, when the virtual host user back end supporting virtualized hardware acceleration needs to be frequently changed and upgraded, this may cause frequent upgrades and restarts of the data plane development suite-based multi-layer virtual switch, which may cause the loss of queue state information of the physical network card queue, prevent the physical network card queue from being resynchronized with the transmit-receive packet queue in the virtual machine, and thus prevent input and output from being recovered after the upgrade. In addition, because the paravirtualized architecture supporting virtualized hardware acceleration offloads the back-end data processing to hardware such as a physical network interface controller so as to realize the data plane pass-through, it is difficult to timely grasp the queue state information of the physical network card queue through a virtual machine such as a client, which further aggravates the influence of frequently changing and upgrading the virtual host user back end supporting virtualized hardware acceleration.
Referring to the above steps, in step S302, in response to an upgrade request associated with the virtual host user back end, the transmit-receive packet queue corresponding to the back-end device side is stopped and drained. Here, the transmit-receive packet queue is used for data interaction between the front-end driver side and the back-end device side. The data interaction between the front-end driver side and the back-end device side is realized by the front-end driver side updating a first pointer of the ring buffer corresponding to the transmit-receive packet queue and the back-end device side updating a second pointer of the ring buffer, respectively. The upgrade request associated with the virtual host user back end represents any possible modification to the virtual host user back end, such as modifying its configuration, upgrading its version, or replacing it. Thus, before and after execution of the upgrade request there may be both an old version or module of the virtual host user back end and a new one. If the upgrade is performed by means of hot migration, additional physical resources are required for backup, and the hot migration itself may take a long migration time. This results in a loss of system efficiency and performance in scenarios where upgrade requests associated with the virtual host user back end must be handled frequently. For this reason, in step S302, by stopping the transmit-receive packet queue corresponding to the back-end device side and draining it, the virtual machine need not be shut down and no subsequent hot migration is required.
Here, stopping the transmit-receive packet queue means stopping write operations to the queue after the upgrade request is received, that is, no longer accepting new data to be processed. For example, the OVS-DPDK senses the upgrade request or a corresponding message, and the VDPA device in the OVS-DPDK unmaps the transmit-receive packet events of each enabled queue from the virtual machine, thereby stopping the transmit-receive packet queue, and then waits for the queue to drain. Draining the transmit-receive packet queue may be understood as the hardware processing the data still pending in the queue, and may include any suitable data processing operation. The first pointer and the second pointer exist on the ring buffer corresponding to the transmit-receive packet queue; the front-end driver side updates the first pointer and the back-end device side updates the second pointer, thereby realizing data interaction between the front-end driver side and the back-end device side. This means that the data interaction between the front-end driver side and the back-end device side adopts a communication manner specified by the virtual host user protocol, for example a shared-memory communication manner. Taking the paravirtualized architecture shown in fig. 2 as an example, the system simulator B212 performs the shared memory configuration so that the front-end driver side B240 and the back-end device side B242 can interact according to the communication manner specified by the virtual host user protocol.
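The two-pointer ring described above can be sketched as follows. This is a hypothetical Python model loosely following virtio split-ring semantics (an available index written only by the driver, a used index written only by the device); the names are illustrative, not the patent's implementation.

```python
# Hypothetical ring buffer shared by the two sides: the front-end driver
# side advances the "first pointer" (avail_idx) when posting buffers; the
# back-end device side advances the "second pointer" (used_idx) when
# consuming them. Draining means consuming until the pointers coincide.

QUEUE_SIZE = 8

class SharedRing:
    def __init__(self):
        self.descriptors = [None] * QUEUE_SIZE
        self.avail_idx = 0   # first pointer: front-end driver side
        self.used_idx = 0    # second pointer: back-end device side

    def post(self, data):
        # Front-end driver side: publish a buffer, advance first pointer.
        self.descriptors[self.avail_idx % QUEUE_SIZE] = data
        self.avail_idx += 1

    def consume(self):
        # Back-end device side: take one buffer, advance second pointer.
        if self.used_idx == self.avail_idx:
            return None      # nothing pending: queue is drained
        data = self.descriptors[self.used_idx % QUEUE_SIZE]
        self.used_idx += 1
        return data

def stop_and_drain(ring):
    """Stopping then draining: no further post() calls are made, and the
    device consumes everything still pending in the ring."""
    processed = []
    while (item := ring.consume()) is not None:
        processed.append(item)
    return processed

ring = SharedRing()
for pkt in ("p0", "p1", "p2"):
    ring.post(pkt)              # front end posts three packets
assert stop_and_drain(ring) == ["p0", "p1", "p2"]
assert ring.used_idx == ring.avail_idx   # drained: pointers coincide
```

After draining, the second pointer's value (here `used_idx`) is exactly the "first position" that step S304 reads, since no partially processed descriptors remain between the two pointers.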
Next, in step S304, at least after the transmit-receive packet queue is emptied, the first location of the second pointer of the ring buffer corresponding to the transmit-receive packet queue is read by the back-end device side. Then, in step S306, the first location is transferred through the virtual host user back end to a system simulator for simulating the front-end driving side, and the system simulator then saves the first location. The virtualized hardware acceleration includes a data plane pass-through between the front-end driver side and the back-end device side and a control plane negotiation between the front-end driver side and the back-end device side implemented by the virtual host user back end. The data plane pass-through between the front-end driver side and the back-end device side is based on virtualized hardware acceleration, i.e., VDPA, but control plane negotiation is still implemented through the virtual host user back end, i.e., through the virtual host user back end supporting virtualized hardware acceleration of the paravirtualized architecture. For example, the virtual host user back end comprising the OVS-DPDK is used for control plane negotiation, and the data plane needs to be configured by issuing data plane parameters through the OVS-DPDK. Therefore, upgrades and changes to the virtual host user back end, such as to the OVS-DPDK framework and software, can affect the data plane pass-through and possibly cause loss of queue state information and desynchronization of the transmit-receive packet queues. Furthermore, since the paravirtualized architecture supporting virtualized hardware acceleration offloads the back-end data processing to hardware such as a physical network interface controller so as to realize the data plane pass-through, it is difficult to timely grasp the queue state information of the physical network card queue through a virtual machine such as a client.
For this purpose, in step S304, at least after the transmit-receive packet queue is emptied, the first location of the second pointer of the ring buffer corresponding to the transmit-receive packet queue is read by the back-end device side. This means that the first location of the second pointer of the ring buffer after the transmit-receive packet queue is emptied is read by the back-end device side, such as the physical network interface controller B230 of fig. 2, and the first location is subsequently used to recover the queue status information of the physical network card queue. In some embodiments, the information carried by the first location indicates the last available index of the transmit-receive packet queue, i.e., the transmit-receive packet descriptor location up to which the hardware has finished processing. However, as the upgrade request associated with the virtual host user back end starts to be executed, the virtual host user connection is disconnected, and the connection between the virtual machine and the hardware is disconnected, so after the upgrade is completed, the back-end device side may have difficulty retaining the previously acquired queue state information of the physical network card queue, such as the first location. This means that after the upgrade is completed, the back-end device side may start processing from a default position, so that the back-end processing is erroneous, the queue status information is lost, and the transmit-receive packet queues cannot be synchronized. For this, in step S306, the first location is transferred through the virtual host user back end to a system simulator for simulating the front-end driving side, and the system simulator then saves the first location.
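The failure mode described in this paragraph can be made concrete with a small hypothetical illustration; the values and the `QueueState` class are invented for the example only.

```python
# Hypothetical illustration of why the first position must survive the
# upgrade outside the back end: a re-created back end that starts from a
# default index of 0 would diverge from the front end, whereas one
# initialized from the saved position resumes where the old one stopped.

class QueueState:
    def __init__(self, last_used_idx=0):
        self.last_used_idx = last_used_idx   # default position on cold init

# Before the upgrade, suppose the hardware had completed descriptors
# up to index 42 (the "first position" read in step S304).
position_read_at_s304 = 42

# Re-created back end without the saved state: processing restarts at
# the default position, out of sync with the front-end driver side.
fresh = QueueState()
assert fresh.last_used_idx != position_read_at_s304

# Re-created back end initialized from the position saved by the system
# simulator: processing resumes exactly at the old stopping point.
restored = QueueState(last_used_idx=position_read_at_s304)
assert restored.last_used_idx == position_read_at_s304
```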
It should be appreciated that the reading of the first location by the back-end device side in step S304 and the passing of the first location to the system simulator through the virtual host user back end in step S306 involve message passing and routing that are separate from the control plane negotiation and the control plane protocol determined based on the virtual host user protocol. The control plane negotiation between the front-end driver side and the back-end device side implemented by the virtual host user back end is based on the virtual host user protocol. Taking the paravirtualized architecture shown in fig. 2 as an example, the system simulator B212 and the client B214 use a control plane protocol that meets the requirements of the virtual host user protocol, control information such as data plane parameters is transferred to the physical network interface controller B230, the physical network interface controller B230 configures the data plane according to the control information, and the configured data plane is used for pass-through between the front-end driver side B240 and the physical network interface controller B230. Thus, passing the first location from the back-end device side, e.g., from the physical network interface controller B230 shown in fig. 2, to the virtual host user back end, i.e., the data plane development suite-based multi-layer virtual switch B216 shown in fig. 2, requires adding a new message format (relative to the message formats and characteristics supported by the virtual host user protocol), and configuring the first location or equivalent information to the system simulator based on the newly added message format so that it can be saved by the system simulator in the corresponding queue information structure. In some embodiments, there may be multiple transmit-receive packet queues; for example, data interactions between the front-end driver side and the back-end device side may use two or more virtual queues.
In some embodiments, multiple queues may be set for each device, each for handling different data transmissions. For this purpose, the first location of the queue status information for restoring the queue needs to be read for each queue through the backend device side, and the first location corresponding to the queue is configured to the system simulator based on a new message format, and is saved in a queue information structure corresponding to the queue by the system simulator for subsequent issuing of the queue parameters and restoration of the queue status.
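A per-queue message of the kind described above could be laid out as in the following sketch. The field layout, request code, and names here are hypothetical illustrations of "adding a new message format", not the actual protocol extension defined by the patent or by the vhost-user specification.

```python
# Hypothetical wire layout for a per-queue "first position" message:
# a header of request code (u32), flags (u32), payload size (u32),
# followed by a payload of queue index (u32) and first position (u32).
# All field choices here are invented for illustration.

import struct

HDR = "<III"
PAYLOAD = "<II"
MSG_SAVE_QUEUE_POSITION = 0x40   # hypothetical request code

def encode_position_msg(queue_index, position):
    payload = struct.pack(PAYLOAD, queue_index, position)
    header = struct.pack(HDR, MSG_SAVE_QUEUE_POSITION, 0, len(payload))
    return header + payload

def decode_position_msg(data):
    code, _flags, _size = struct.unpack_from(HDR, data, 0)
    assert code == MSG_SAVE_QUEUE_POSITION
    queue_index, position = struct.unpack_from(PAYLOAD, data,
                                               struct.calcsize(HDR))
    return queue_index, position

# Multi-queue case: one message per transmit-receive packet queue, each
# saved into the simulator's corresponding queue information structure.
saved = {}
for qid, pos in [(0, 17), (1, 23)]:
    q, p = decode_position_msg(encode_position_msg(qid, pos))
    saved[q] = {"position": p}
assert saved == {0: {"position": 17}, 1: {"position": 23}}
```

Carrying the queue index in the payload is what lets the same message format cover the multi-queue configurations mentioned above, with one queue information structure per queue on the simulator side.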
Next, in step S308, at least after the virtual host user back end transfers the first location to the system simulator, an upgrade procedure corresponding to the upgrade request is performed on the virtual host user back end. When the upgrade process corresponding to the upgrade request starts to be executed, this means that the connection of the virtual host user is disconnected and the connection between the virtual machine and the hardware is disconnected, so that the upgrade process corresponding to the upgrade request needs to be executed on the virtual host user back end after the virtual host user back end transfers the first location to the system simulator. This ensures that the backend device side can effectively acquire the queue status information of the physical network card queue, such as the first location, and that the first location is passed to the system simulator. Then, in step S310, after the upgrade procedure is performed, the system simulator issues a queue parameter including the first location to the back-end device side so as to restore the queue status of the transmit-receive packet queue to the queue status of the transmit-receive packet queue when the back-end device side reads the first location. After the upgrade is completed, the user back end of the virtual host machine after the upgrade can be restarted, for example, the OVS-DPDK process is restarted, so that the connection of the user of the virtual host machine disconnected before the upgrade is reestablished, and the connection between the virtual machine disconnected before the upgrade and the hardware is reestablished. 
Then, control plane negotiation is required to be performed based on the updated virtual host user back end, that is, control plane negotiation between the front end driving side and the back end device side is performed by the updated virtual host user back end, so that it is ensured that after the update is completed, data plane direct connection between the front end driving side and the back end device side can be performed normally. The system simulator issues queue parameters to the back-end equipment side, wherein the queue parameters at least comprise the first position, namely the first position which is transmitted to the system simulator by the back-end of the virtual host machine user. After the upgrading is finished, the back-end equipment side can initialize the queue according to the queue parameters issued by the system simulator and re-initialize hardware by utilizing the issued queue parameters, so that the back-end processing can be restored based on the first position, and the queue state information of the physical network card queue can be restored, thereby being beneficial to avoiding the loss of the queue state information, realizing the re-synchronization of the physical network card queue and the receiving and transmitting packet queue in the virtual machine, and further restoring the input and output after the upgrading.
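Restoring the queue state after the restart can be sketched as follows. The parameter names and the re-initialization routine are hypothetical, assuming the back-end device accepts a queue-parameter structure that carries the saved first position; this is an illustration of the issuing-and-restore step, not the actual driver interface.

```python
# Hypothetical restore path for step S310: after the upgraded virtual
# host user back end restarts and the connection is rebuilt, the system
# simulator issues the saved queue parameters, and the hardware queue is
# re-initialized from the saved first position instead of a default one.

class HardwareQueue:
    """Stand-in for a hardware NIC queue re-created after the upgrade."""
    def __init__(self):
        self.last_used_idx = 0   # default position after a cold init
        self.enabled = False

    def init_from_params(self, params):
        # Re-initialize hardware with the issued queue parameters so the
        # back end resumes from the saved position, not the default one.
        self.last_used_idx = params["first_position"]
        self.enabled = True

def reestablish_connection(simulator_saved_state, queues):
    """On reconnect, issue the saved parameters for every queue."""
    for qid, queue in enumerate(queues):
        queue.init_from_params(simulator_saved_state[qid])

# Positions the simulator saved before the upgrade (illustrative values).
saved_state = {0: {"first_position": 17}, 1: {"first_position": 23}}
queues = [HardwareQueue(), HardwareQueue()]
reestablish_connection(saved_state, queues)
assert [q.last_used_idx for q in queues] == [17, 23]
assert all(q.enabled for q in queues)
```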
The method for upgrading the virtual host user back end supporting virtualized hardware acceleration shown in fig. 3 automatically obtains the device queue state and sets it in the system simulator when the virtual host user back-end service is upgraded and restarted, so that after the restart and update are completed, the saved information, namely the first position, can be issued to the virtual host user service when the virtual host user connection is rebuilt, and the queue is reconfigured at the driver layer to ensure that the queue state is consistent with the front end, namely the network driver state in the virtual machine, thereby realizing stable operation of the virtual machine during the upgrade. The device state is acquired and saved during the restart of the virtual host user back-end service, and the parameters of the device data queue supporting virtualized hardware acceleration are issued again after the restart, so that the queue state is restored, start and stop of the virtual machine caused by network upgrading are avoided, the virtual machine does not need to be migrated, and the stability of the virtual machine during the upgrade is ensured. In addition, by stopping the queue and acquiring the hardware queue state, and further adding a new message format to the system simulator, re-issuing of the queue parameters and traffic recovery can be realized when the restarted virtual host user reconnects.
In summary, the method for upgrading the back end of the virtual host machine user supporting the acceleration of the virtualization hardware does not need to turn off or restart the virtual machine itself or migrate the virtual machine, can meet the requirement of changing and upgrading the back end of the virtual host machine user supporting the acceleration of the virtualization hardware, such as the requirement of changing and upgrading the OVS-DPDK software, is beneficial to stability during the upgrading of the virtual machine, and has the advantages of high efficiency and low resource occupation.
In one possible implementation, draining the transmit-receive packet queue includes: processing all data to be processed in the transmit-receive packet queue through the back-end device side. Draining the transmit-receive packet queue may be understood as the hardware processing the data pending in the queue, and may include any suitable data processing operation.
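The drain step can be modeled with the two virtio-style indices: the front end advances the available index as it posts descriptors, and the back end advances its last-available index as it consumes them. A minimal sketch (illustrative only; `process` is a hypothetical per-descriptor handler, not a DPDK API):

```python
def drain_queue(avail_idx: int, last_avail_idx: int, process) -> int:
    """Process every pending descriptor until the queue is empty.

    avail_idx      -- first pointer, updated by the front-end driver side
    last_avail_idx -- second pointer, updated by the back-end device side
    process        -- callback handling one descriptor slot
    Returns the final last_avail_idx (the "first location" to snapshot).
    """
    while last_avail_idx != avail_idx:                   # pending data remains
        process(last_avail_idx)                          # hardware handles one slot
        last_avail_idx = (last_avail_idx + 1) & 0xFFFF   # free-running 16-bit index
    return last_avail_idx

handled = []
first_location = drain_queue(avail_idx=7, last_avail_idx=3,
                             process=handled.append)
# first_location == 7 and slots 3..6 were processed
```

The queue is drained precisely when the two indices are equal, which is the value that is then read and saved as the first location.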
In one possible implementation, the virtual host machine user back end comprises a multi-layer virtual switch based on a data plane development suite, and the upgrade request associated with the virtual host machine user back end comprises an upgrade request associated with the data plane development suite-based multi-layer virtual switch. The upgrade request associated with the virtual host machine user back end represents any possible modification to the virtual host machine user back end, such as modifying its configuration, upgrading its version, or replacing it. The upgrade request associated with the data plane development suite-based multi-layer virtual switch includes, for example, a software upgrade request.
In one possible implementation, the virtual host machine user back-end upgrade method further includes: during execution of the upgrade procedure corresponding to the upgrade request on the virtual host machine user back end, disconnecting the connection between the virtual host machine user back end and the paravirtualized architecture and cutting off the traffic of the paravirtualized architecture. Cutting off the connection and the traffic keeps network behavior consistent during the upgrade; after the upgrade finishes, the connection can be rebuilt and the traffic restored, which helps ensure the stability of the system upgrade.
In one possible implementation, the paravirtualized architecture is a virtio architecture. Any other suitable paravirtualized architecture may also be employed, as long as it has virtualized hardware acceleration characteristics. For the virtio architecture or any other paravirtualized architecture, the virtual host user back-end upgrade method is applied to a virtual host user back end used for supporting virtualized hardware acceleration of the paravirtualized architecture; the paravirtualized architecture comprises a front-end driver side and a back-end device side, and the virtualized hardware acceleration comprises data plane passthrough between the front-end driver side and the back-end device side as well as control plane negotiation between the front-end driver side and the back-end device side implemented through the virtual host user back end.
In one possible implementation, the back-end device side includes at least one virtualized hardware acceleration device, and the virtualized hardware acceleration includes offloading message forwarding operations in hardware to the at least one virtualized hardware acceleration device. Message forwarding is thus offloaded to hardware, providing a high-performance network.
In one possible implementation, the control plane negotiation between the front-end driver side and the back-end device side implemented by the virtual host user back end includes: the system simulator negotiating with the virtual host user back end based on a virtual host user protocol. In some embodiments, the control plane negotiation further comprises: the virtual host user back end transmitting data plane parameters to the back-end device side, where the data plane parameters are used for configuring the data plane passthrough between the front-end driver side and the back-end device side. In some embodiments, passing the first location to the system simulator through the virtual host user back end comprises: extending the virtual host user protocol to add a first message format, based on which the first location is passed to the system simulator. As described above, the control plane negotiation between the front-end driver side and the back-end device side implemented by the virtual host user back end is based on a virtual host user protocol. Taking the paravirtualized architecture shown in fig. 2 as an example, the system simulator B212 and the client B214 use a control plane protocol that meets the requirements of a virtual host user protocol; control information such as the data plane parameters is transferred to the physical network interface controller B230, which configures the data plane according to that information, and the configured data plane provides the passthrough between the front-end driver side B240 and the physical network interface controller B230. Passing the first location from the back-end device side, for example from the physical network interface controller B230 shown in fig. 2, to the multi-layer virtual switch B216 of the data plane development suite at the virtual host user back end therefore requires adding a new message format (relative to the message formats and features already supported by the virtual host user protocol) and configuring the first location, or equivalent information, to the system simulator based on the newly added message format, so that the system simulator can save it in the corresponding queue information structure. Here, the virtual host user protocol is extended to add a first message format, and the first location is subsequently passed to the system simulator based on that format.
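As a sketch of what such a protocol extension could look like: vhost-user-style control messages carry a 32-bit request type, 32-bit flags, a 32-bit payload size, and then the payload. The request code `VHOST_USER_SET_QUEUE_POSITION` and the payload layout below are hypothetical illustrations of the patent's "first message format", not part of the actual vhost-user specification.

```python
import struct

# Hypothetical new request code for the added first message format
# (assumed to be an unused code; for illustration only).
VHOST_USER_SET_QUEUE_POSITION = 64

def encode_queue_position(queue_index: int, first_location: int) -> bytes:
    """Encode a vhost-user-style message: u32 request, u32 flags, u32 size,
    followed by a payload of (u32 queue index, u32 position)."""
    payload = struct.pack("<II", queue_index, first_location)
    header = struct.pack("<III",
                         VHOST_USER_SET_QUEUE_POSITION,
                         0x1,            # flags (version bits in real vhost-user)
                         len(payload))   # payload size in bytes
    return header + payload

def decode_queue_position(msg: bytes):
    """Decode the message back into (request, queue index, first location)."""
    request, _flags, _size = struct.unpack_from("<III", msg, 0)
    queue_index, first_location = struct.unpack_from("<II", msg, 12)
    return request, queue_index, first_location

msg = encode_queue_position(queue_index=0, first_location=1234)
# decode_queue_position(msg) -> (64, 0, 1234)
```

Because the header carries an explicit size field, a system simulator that does not understand the new request code can still skip the payload cleanly, which is why extending the protocol with one extra message format is backward-compatible in spirit.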
In one possible implementation, the virtual host machine user back-end upgrade method further includes: in response to the upgrade request associated with the virtual host machine user back end, performing resource cleanup on the virtualized hardware acceleration device of the back-end device side to which the established virtual host machine user connection belongs. In this way, in response to the upgrade request, the transmit-receive packet queue corresponding to the back-end device side is stopped and drained, and resource cleanup is performed on the virtualized hardware acceleration device of the established virtual host machine user connection on the back-end device side, which helps ensure the stability of the system upgrade.
In one possible implementation, the queue parameters further include a queue direct memory access address and a queue length. In some embodiments, the first pointer of the ring buffer is used to indicate an available tag of the transmit-receive packet queue, the second pointer of the ring buffer is used to indicate a last available tag of the transmit-receive packet queue, and when the last available tag is consistent with the available tag, there is no data to be processed in the transmit-receive packet queue. In some embodiments, restoring the queue state of the transmit-receive packet queue to its state when the back-end device side read the first location comprises: configuring the transmit-receive packet queue so that its last available tag corresponds to the first location. In some embodiments, the virtual host user back-end upgrade method further includes: after restoring the queue state of the transmit-receive packet queue to its state when the back-end device side read the first location, synchronizing the physical network card queue of the back-end device side with the transmit-receive packet queue so as to restore the input and output functions of the paravirtualized architecture. Once execution of the upgrade request associated with the virtual host user back end starts, the virtual host user connection is disconnected and the connection between the virtual machine and the hardware is broken, so after the upgrade completes, the back-end device side may be unable to retain the previously acquired queue state information of the physical network card queue, such as the first location.
Therefore, after the upgrade completes, the queue parameters are issued, including the first location as well as the queue's direct memory access address and length, so that the queue state of the transmit-receive packet queue can be restored, based on the issued parameters, to its state when the back-end device side read the first location. The first pointer of the ring buffer indicates the available tag (available index) of the transmit-receive packet queue, and the second pointer indicates the last available tag (last available index). As described above, the ring buffer corresponding to the transmit-receive packet queue has a first pointer updated by the front-end driver side and a second pointer updated by the back-end device side, which together achieve the data interaction between the two sides. This means that the data interaction between the front-end driver side and the back-end device side adopts a communication mode specified by the virtual host user protocol, for example shared memory. Whether the transmit-receive packet queue is empty can be determined by comparing the two pointers: when the last available tag is consistent with the available tag, there is no data to be processed in the queue.
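The emptiness test described above reduces to comparing the two indices. Since virtio-style ring indices are free-running 16-bit counters, equality (not ordering) is the correct test, and the pending count must be computed modulo 2^16. An illustrative sketch:

```python
RING_IDX_MASK = 0xFFFF  # virtio-style indices are free-running 16-bit counters

def queue_is_empty(avail_idx: int, last_avail_idx: int) -> bool:
    """No pending data when the last available tag equals the available tag."""
    return (avail_idx & RING_IDX_MASK) == (last_avail_idx & RING_IDX_MASK)

def pending_descriptors(avail_idx: int, last_avail_idx: int) -> int:
    """Count of descriptors posted by the front end but not yet consumed."""
    return (avail_idx - last_avail_idx) & RING_IDX_MASK

# Equality means empty; subtraction modulo 2**16 handles wraparound:
# queue_is_empty(0x0003, 0x0003)      -> True
# pending_descriptors(0x0002, 0xFFFE) -> 4  (indices wrapped past 0xFFFF)
```

The masked subtraction is why the free-running indices never need to be reset: the difference stays correct even after either counter wraps.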
Further, by configuring the transmit-receive packet queue so that its last available tag corresponds to the first location, the queue state is restored to what it was when the back-end device side read the first location. After the upgrade completes, the back-end device side can initialize the queue according to the queue parameters issued by the system simulator, re-initialize the hardware with those parameters, and resume back-end processing from the first location. The queue state information of the physical network card queue is thus recovered rather than lost, the physical network card is resynchronized with the transmit-receive packet queue in the virtual machine, and input and output resume after the upgrade.
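Restoring the queue then amounts to re-initializing the hardware queue from the re-issued parameters and setting its last available tag to the saved first location, so consumption resumes exactly where it stopped. A simplified model (hypothetical names, not a real driver API):

```python
class HardwareQueue:
    """Minimal model of a back-end queue re-initialized after the upgrade."""
    def __init__(self, dma_addr: int, length: int):
        self.dma_addr = dma_addr       # queue direct memory access address
        self.length = length           # queue length
        self.last_avail_idx = 0        # second pointer (last available tag)

    def restore(self, first_location: int):
        # Resume consumption exactly where the pre-upgrade back end stopped.
        self.last_avail_idx = first_location & 0xFFFF

    def consume(self, avail_idx: int):
        """Process descriptors posted while the back end was restarting."""
        processed = []
        while self.last_avail_idx != (avail_idx & 0xFFFF):
            processed.append(self.last_avail_idx % self.length)  # ring slot
            self.last_avail_idx = (self.last_avail_idx + 1) & 0xFFFF
        return processed

q = HardwareQueue(dma_addr=0x1000, length=256)
q.restore(first_location=7)        # first location saved before the upgrade
slots = q.consume(avail_idx=10)    # front end posted 3 more descriptors meanwhile
# slots == [7, 8, 9]: no descriptor is lost or replayed across the upgrade
```

Because the restore point is the exact index at which the pre-upgrade back end stopped, descriptors posted by the front end during the restart window are neither skipped nor processed twice.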
Fig. 4 is a schematic structural diagram of a computing device provided in an embodiment of the present application, where the computing device 400 includes: one or more processors 410, a communication interface 420, and a memory 430. The processor 410, communication interface 420, and memory 430 are interconnected by a bus 440. Optionally, the computing device 400 may further include an input/output interface 450, where the input/output interface 450 is connected to an input/output device for receiving parameters set by a user, etc. The computing device 400 can be used to implement some or all of the functionality of the device embodiments or system embodiments described above in the embodiments of the present application; the processor 410 can also be used to implement some or all of the operational steps of the method embodiments described above in the embodiments of the present application. For example, specific implementations of the computing device 400 performing various operations may refer to specific details in the above-described embodiments, such as the processor 410 being configured to perform some or all of the steps of the above-described method embodiments or some or all of the operations of the above-described method embodiments. For another example, in the present embodiment, the computing device 400 may be configured to implement some or all of the functions of one or more components of the apparatus embodiments described above, and the communication interface 420 may be configured to implement communication functions and the like necessary for the functions of the apparatuses, components, and the processor 410 may be configured to implement processing functions and the like necessary for the functions of the apparatuses, components.
It should be appreciated that the computing device 400 of fig. 4 may include one or more processors 410, which may cooperatively provide processing power in a parallel, serial, serial-parallel, or any other connection, or may constitute a processor sequence or processor array, or may be separated into primary and secondary processors, or may have different architectures such as heterogeneous computing architectures. In addition, the structure of the computing device 400 shown in fig. 4 and the associated functional description are exemplary, not limiting. In some example embodiments, the computing device 400 may include more or fewer components than shown in fig. 4, or combine certain components, or split certain components, or have a different arrangement of components.
The processor 410 may have various specific implementations, for example, the processor 410 may include one or more of a central processing unit (central processing unit, CPU), a graphics processor (graphics processing unit, GPU), a neural network processor (neural-network processing unit, NPU), a tensor processor (tensor processing unit, TPU), or a data processor (data processing unit, DPU), which are not limited in this embodiment. Processor 410 may also be a single-core processor or a multi-core processor. Processor 410 may be comprised of a combination of a CPU and hardware chips. The hardware chip may be an application-specific integrated circuit (application-specific integrated circuit, ASIC), a programmable logic device (programmable logic device, PLD), or a combination thereof. The PLD may be a complex programmable logic device (complex programmable logic device, CPLD), a field-programmable gate array (field-programmable gate array, FPGA), general-purpose array logic (generic array logic, GAL), or any combination thereof. The processor 410 may also be implemented solely with logic devices incorporating processing logic, such as an FPGA or digital signal processor (digital signal processor, DSP) or the like. The communication interface 420 may be a wired interface, such as an ethernet interface or a local interconnect network (local interconnect network, LIN) interface, or a wireless interface, such as a cellular network interface or a wireless local area network interface, for communicating with other modules or devices.
The memory 430 may be a nonvolatile memory such as a read-only memory (read-only memory, ROM), a programmable ROM (programmable ROM, PROM), an erasable PROM (erasable PROM, EPROM), an electrically erasable EPROM (electrically EPROM, EEPROM), or a flash memory. Memory 430 may also be volatile memory, which may be random access memory (random access memory, RAM) used as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (static RAM, SRAM), dynamic RAM (dynamic RAM, DRAM), synchronous DRAM (synchronous DRAM, SDRAM), double data rate SDRAM (double data rate SDRAM, DDR SDRAM), enhanced SDRAM (enhanced SDRAM, ESDRAM), synclink DRAM (synclink DRAM, SLDRAM), and direct rambus RAM (direct rambus RAM, DR RAM). Memory 430 may also be used to store program code and data such that processor 410 invokes the program code stored in memory 430 to perform some or all of the operational steps of the method embodiments described above, or to perform corresponding functions in the apparatus embodiments described above. Moreover, computing device 400 may contain more or fewer components than shown in fig. 4, or may have a different configuration of components.
The bus 440 may be a peripheral component interconnect express (peripheral component interconnect express, PCIe) bus, an extended industry standard architecture (extended industry standard architecture, EISA) bus, a unified bus (Ubus or UB), a compute express link (compute express link, CXL), a cache coherent interconnect for accelerators (cache coherent interconnect for accelerators, CCIX), or the like. The bus 440 may be divided into an address bus, a data bus, a control bus, and so on; in addition to a data bus, it may include a power bus, a control bus, a status signal bus, and the like. For clarity of illustration the bus is shown as only one bold line in fig. 4, but this does not mean there is only one bus or only one type of bus.
The method and the device provided in the embodiments of the present application are based on the same inventive concept; because the principles by which they solve the problem are similar, the embodiments, implementations, and examples of the method and the device may refer to each other, and repeated details are not described again. Embodiments of the present application also provide a system that includes a plurality of computing devices, each of which may be structured as described above. The functions or operations that may be implemented by the system may refer to the specific implementation steps in the above method embodiments and/or the specific functions described in the above apparatus embodiments, which are not described again here.
Embodiments of the present application also provide a computer-readable storage medium having stored therein computer instructions which, when executed on a computer device (e.g., one or more processors), may implement the method steps in the above-described method embodiments. The specific implementation of the processor of the computer readable storage medium in executing the above method steps may refer to specific operations described in the above method embodiments and/or specific functions described in the above apparatus embodiments, which are not described herein again.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. The present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Embodiments of the present application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The present application may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means. A computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains one or more collections of available media. Usable media may be magnetic media (e.g., floppy disks, hard disks, tape), optical media, or semiconductor media.
The semiconductor medium may be a solid state disk, or may be a random access memory, flash memory, read only memory, erasable programmable read only memory, electrically erasable programmable read only memory, register, or any other form of suitable storage medium.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. Each flow and/or block of the flowchart and/or block diagrams, and combinations of flows and/or blocks in the flowchart and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments. It will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. The steps in the method of the embodiment of the application can be sequentially adjusted, combined or deleted according to actual needs; the modules in the system of the embodiment of the application can be divided, combined or deleted according to actual needs. Such modifications and variations of the embodiments of the present application are intended to be included herein, if they fall within the scope of the claims and their equivalents.

Claims (16)

1. A virtual host machine user back-end upgrading method supporting virtualized hardware acceleration, characterized in that the virtual host machine user back-end upgrading method is applied to a virtual host machine user back end, the virtual host machine user back end is used for supporting the virtualized hardware acceleration of a paravirtualized architecture, the paravirtualized architecture comprises a front-end driving side and a back-end device side, the virtualized hardware acceleration comprises data plane passthrough between the front-end driving side and the back-end device side and control plane negotiation between the front-end driving side and the back-end device side realized by the virtual host machine user back end, and the virtual host machine user back-end upgrading method comprises the following steps:
Stopping a transceiving packet queue corresponding to the back-end equipment side and draining the transceiving packet queue in response to an upgrading request associated with the back-end of the virtual host user, wherein the transceiving packet queue is used for data interaction between the front-end driving side and the back-end equipment side, and the data interaction between the front-end driving side and the back-end equipment side is realized by updating a first pointer of a ring buffer corresponding to the transceiving packet queue through the front-end driving side and a second pointer of the ring buffer through the back-end equipment side respectively;
at least after the receiving and transmitting packet queue is emptied, reading a first position of the second pointer of the annular buffer corresponding to the receiving and transmitting packet queue through the back-end equipment side;
transmitting the first position to a system simulator for simulating the front-end driving side through the user back end of the virtual host machine, and then storing the first position by the system simulator;
after the first position is transmitted to the system simulator at least by the back end of the virtual host machine user, executing an upgrading flow corresponding to the upgrading request on the back end of the virtual host machine user;
After the upgrade process is executed, the system simulator issues the queue parameters including the first position to the back-end equipment side so as to restore the queue state of the receiving-transmitting packet queue to the queue state when the receiving-transmitting packet queue reads the first position at the back-end equipment side.
2. The method of claim 1, wherein draining the transmit-receive packet queue comprises: and processing all data to be processed in the receiving-transmitting packet queue through the back-end equipment side.
3. The virtual host machine user back-end upgrade method of claim 1, wherein the virtual host machine user back-end comprises a data plane development suite based multi-layer virtual switch, and wherein the upgrade request associated with the virtual host machine user back-end comprises an upgrade request associated with the data plane development suite based multi-layer virtual switch.
4. The method for upgrading a back end of a user of a virtual host machine according to claim 1, further comprising: during execution of the upgrade procedure corresponding to the upgrade request on the virtual host machine user back end, a connection between the virtual host machine user back end and the paravirtualized architecture is disconnected and traffic of the paravirtualized architecture is disconnected.
5. The method of claim 1, wherein the paravirtualized architecture is a virtio architecture.
6. The method of claim 1, wherein the back-end device side includes at least one virtualized hardware acceleration device, and the virtualized hardware acceleration comprises offloading message forwarding operations in hardware to the at least one virtualized hardware acceleration device.
7. The virtual host user back-end upgrade method according to claim 1, wherein the control plane negotiation between the front-end driver side and the back-end device side implemented by the virtual host user back-end comprises: the system simulator negotiates with the virtual host user backend based on a virtual host user protocol.
8. The virtual host user back-end upgrade method of claim 7, wherein the control plane negotiation between the front-end driver side and the back-end device side implemented by the virtual host user back-end further comprises: and the virtual host user back end transmits data plane parameters to the back end equipment side, wherein the data plane parameters are used for configuring data plane direct connection between the front end driving side and the back end equipment side.
9. The virtual host user back-end upgrade method of claim 7, wherein passing the first location to the system simulator through the virtual host user back-end comprises: the virtual host user protocol is extended to add a first message format based on which the virtual host communicates the first location to the system simulator.
10. The method for upgrading a back end of a user of a virtual host machine according to claim 1, further comprising: and responding to the upgrading request associated with the virtual host machine user back end, and performing resource cleaning on the virtualized hardware acceleration device of the back end equipment side, which is connected with the established virtual host machine user.
11. The method of claim 1, wherein the queue parameters further comprise a queue direct memory access address and a queue length.
12. The method for upgrading a user back-end of a virtual host according to claim 11, wherein the first pointer of the ring buffer is used for indicating an available tag of the transmit-receive packet queue, the second pointer of the ring buffer is used for indicating a last available tag of the transmit-receive packet queue, and when the last available tag is consistent with the available tag, no data to be processed exists in the transmit-receive packet queue.
13. The method for upgrading a user back-end of a virtual host according to claim 12, wherein restoring the queue state of the transceiving packet queue to the queue state of the transceiving packet queue when the back-end device side reads the first location comprises: the transmit-receive packet queue is configured such that a last available tag of the transmit-receive packet queue corresponds to the first location.
14. The virtual host machine user back-end upgrade method of claim 13, wherein the virtual host machine user back-end upgrade method further comprises: after restoring the queue state of the transmit-receive packet queue to its state when the back-end device side read the first location, synchronizing the physical network card queue of the back-end device side with the transmit-receive packet queue so as to restore the input and output functions of the paravirtualized architecture.
15. A computer device, characterized in that it comprises a memory, a processor and a computer program stored on the memory and executable on the processor, which processor implements the method according to any of claims 1 to 14 when executing the computer program.
16. A computer readable storage medium storing computer instructions which, when run on a computer device, cause the computer device to perform the method of any one of claims 1 to 14.
CN202310513795.5A 2023-05-09 2023-05-09 Virtual host machine user back-end upgrading method supporting virtualized hardware acceleration Active CN116257276B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310513795.5A CN116257276B (en) 2023-05-09 2023-05-09 Virtual host machine user back-end upgrading method supporting virtualized hardware acceleration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310513795.5A CN116257276B (en) 2023-05-09 2023-05-09 Virtual host machine user back-end upgrading method supporting virtualized hardware acceleration

Publications (2)

Publication Number Publication Date
CN116257276A true CN116257276A (en) 2023-06-13
CN116257276B CN116257276B (en) 2023-07-25

Family

ID=86686502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310513795.5A Active CN116257276B (en) 2023-05-09 2023-05-09 Virtual host machine user back-end upgrading method supporting virtualized hardware acceleration

Country Status (1)

Country Link
CN (1) CN116257276B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103870311A (en) * 2012-12-10 2014-06-18 华为技术有限公司 Method of accessing to hardware by virtue of para-virtualized drive, back-end drive and front-end drive
CN106406977A (en) * 2016-08-26 2017-02-15 山东乾云启创信息科技股份有限公司 Virtualization implementation system and method of GPU (Graphics Processing Unit)
US20180176233A1 (en) * 2016-12-16 2018-06-21 Nxp Usa, Inc. Stateful Backend Drivers For Security Processing Through Stateless Virtual Interfaces
US20220365729A1 (en) * 2019-01-31 2022-11-17 Intel Corporation Shared memory mechanism to support fast transport of sq/cq pair communication between ssd device driver in virtualization environment and physical ssd
CN111211999A (en) * 2019-11-28 2020-05-29 中国船舶工业系统工程研究院 OVS-based real-time virtual network implementation method
CN111147391A (en) * 2019-12-05 2020-05-12 深圳市任子行科技开发有限公司 Data transmission method and system between DPDK user mode and linux kernel network protocol stack
WO2021169127A1 (en) * 2020-02-26 2021-09-02 平安科技(深圳)有限公司 Virtual machine upgrade method, apparatus and device, and storage medium
CN111352647A (en) * 2020-02-26 2020-06-30 平安科技(深圳)有限公司 Virtual machine upgrading method, device, equipment and storage medium
CN112099916A (en) * 2020-09-07 2020-12-18 平安科技(深圳)有限公司 Virtual machine data migration method and device, computer equipment and storage medium
CN112925581A (en) * 2021-02-22 2021-06-08 百果园技术(新加坡)有限公司 Method and device for starting DPDK container and electronic equipment
CN113312142A (en) * 2021-02-26 2021-08-27 阿里巴巴集团控股有限公司 Virtualization processing system, method, device and equipment
CN114125015A (en) * 2021-11-30 2022-03-01 上海斗象信息科技有限公司 Data acquisition method and system
CN116032638A (en) * 2023-01-09 2023-04-28 上海交通大学 Unified paravirtualized framework oriented to heterogeneous encryption and decryption computing resources
CN115858102A (en) * 2023-02-24 2023-03-28 珠海星云智联科技有限公司 Method for deploying virtual machine supporting virtualization hardware acceleration
CN115858103A (en) * 2023-02-27 2023-03-28 珠海星云智联科技有限公司 Method, apparatus, and medium for live migration between open stack architecture virtual machines

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116800616A (en) * 2023-08-25 2023-09-22 珠海星云智联科技有限公司 Management method and related device of virtualized network equipment
CN116800616B (en) * 2023-08-25 2023-11-03 珠海星云智联科技有限公司 Management method and related device of virtualized network equipment

Also Published As

Publication number Publication date
CN116257276B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
JP5536878B2 (en) Changing access to the Fiber Channel fabric
EP2831732B1 (en) System and method for supporting live migration of virtual machines in an infiniband network
CN109739618B (en) Virtual machine migration method and device
WO2015101128A1 (en) Virtual machine live migration method, virtual machine memory data processing method, server, and virtual machine system
US10671423B2 (en) Hot-plug hardware and software implementation
CN108183871B Virtual switch, virtual switch startup method, and electronic device
KR101242908B1 (en) Distributed virtual switch for virtualized computer systems
US9489230B1 (en) Handling of virtual machine migration while performing clustering operations
US20130254368A1 (en) System and method for supporting live migration of virtual machines in an infiniband network
US10552080B2 (en) Multi-target post-copy guest migration
CN102446119B (en) Virtual machine dynamical migration method based on Passthrough I/O device
CN116257276B (en) Virtual host machine user back-end upgrading method supporting virtualized hardware acceleration
EP3985508A1 (en) Network state synchronization for workload migrations in edge devices
CN104239120A (en) State information synchronization method, state information synchronization device and state information synchronization system for virtual machine
CN115858103B Method, device and medium for virtual machine live migration under an OpenStack architecture
CN112328365A (en) Virtual machine migration method, device, equipment and storage medium
US20230153140A1 (en) Live migration between hosts of a virtual machine connection to a host interface
CN115858102A (en) Method for deploying virtual machine supporting virtualization hardware acceleration
CN115857995B (en) Method, medium and computing device for upgrading interconnection device
CN109656675B (en) Bus equipment, computer equipment and method for realizing physical host cloud storage
CN114925012A (en) Ethernet frame issuing method, Ethernet frame uploading method and related devices
US20210208920A1 (en) Dynamic reconfiguration of virtual devices for migration across device generations
JPWO2009145098A1 (en) I / O connection system, method and program
EP1815333A1 (en) Migration of tasks in a computing system
US20190065527A1 (en) Information processing device and information processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant