WO2024093574A1 - Virtual machine and configuration method and device thereof - Google Patents
Virtual machine and configuration method and device thereof
- Publication number
- WO2024093574A1 (application PCT/CN2023/120629)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- network interface
- container group
- virtual machine
- dpdk
- network
- Prior art date: 2022-10-31
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/445—Program loading or initiating
- G06F9/44505—Configuring for program initiating, e.g. using registry, configuration files
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45587—Isolation or security of virtual machine instances
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45595—Network integration; Enabling network access in virtual machine instances
Definitions
- the present disclosure relates to the field of information technology, and in particular to a virtual machine and a configuration method and device thereof.
- In order to facilitate the creation and deployment of applications in different operating systems, containerization is usually used to develop applications. Compared with traditional methods, containerization does not rely on a specific computing environment, and applications can run independently and portably on any platform.
- a method for configuring a virtual machine comprising: in response to starting a container group in the virtual machine, identifying a virtual function VF network interface requirement of the container group; after identifying the VF network interface requirement, triggering a VF controller to mount at least one idle VF network interface to the virtual machine, wherein the VF controller is deployed in a physical machine where the virtual machine is located, and the at least one VF network interface is virtualized by a network card in the physical machine through a single-root I/O virtualization SRIOV; according to the VF network interface requirement, binding the at least one VF network interface mounted on the virtual machine to the container group, so that the container group transmits data via the at least one VF network interface.
- the VF network interface requirement includes the number of VF network interfaces required by the container group.
- the method further includes: in response to the destruction of the container group, unbinding the at least one VF network interface bound to the container group from the container group.
- the method further includes: after the at least one VF network interface is unbound from the container group, triggering the VF controller to release the at least one VF network interface from the virtual machine.
- identifying the virtual function VF network interface requirement of the container group includes: in response to the startup of the container group in the virtual machine, obtaining a custom resource definition CRD resource of the container group; and identifying the number of VF network interfaces required by the container group based on the CRD resource of the container group.
- the method further includes: in response to a requirement for using a data plane development kit DPDK of the container group, loading the DPDK for the container group; binding the at least one VF network interface bound to the container group to the DPDK, so that the container group bypasses the kernel-mode network protocol stack of the virtual machine and performs data transmission via the DPDK and the at least one VF network interface.
- the method further includes: in response to the destruction of the container group, releasing the environment of the DPDK.
- the virtual machine is deployed in a Kubernetes system.
- a configuration device for a virtual machine comprising: a first plug-in configured to identify the virtual function VF network interface requirement of the container group in response to the startup of the container group in the virtual machine; bind at least one VF network interface mounted on the virtual machine to the container group according to the VF network interface requirement, so that the container group transmits data via the at least one VF network interface; a second plug-in configured to trigger the VF controller to mount the at least one idle VF network interface to the virtual machine after identifying the network interface requirement, wherein the VF controller is deployed in the physical machine where the virtual machine is located, and the at least one VF network interface is obtained by the network card in the physical machine through SRIOV virtualization.
- the first plug-in is further configured to unbind the at least one VF network interface bound to the container group from the container group in response to the destruction of the container group.
- the second plug-in is further configured to trigger the VF controller to release the at least one VF network interface from the virtual machine after the at least one VF network interface is unbound from the container group.
- the device further includes: a third plug-in configured to notify the second plug-in to load the DPDK for the container group in response to the data plane development kit DPDK usage requirement of the container group; the second plug-in is further configured to bind the at least one VF network interface bound to the container group to the DPDK, so that the container group bypasses the kernel-mode network protocol stack of the virtual machine and performs data transmission via the DPDK and the at least one VF network interface.
- the third plug-in is embedded in the container group.
- the second plug-in is further configured to release the DPDK environment in response to the destruction of the container group.
- a configuration device for a virtual machine comprising: an identification module, configured to identify a virtual function VF network interface requirement of the container group in response to starting the container group in the virtual machine; a triggering module, configured to trigger a VF controller to mount at least one idle VF network interface to the virtual machine after identifying the VF network interface requirement, wherein the VF controller is deployed in a physical machine where the virtual machine is located, and the at least one VF network interface is obtained by a network card in the physical machine through SRIOV virtualization; a binding module, configured to bind the at least one VF network interface mounted on the virtual machine to the container group according to the VF network interface requirement, so that the container group transmits data via the at least one VF network interface.
- a virtual machine configuration device comprising: a memory; and a processor coupled to the memory, configured to implement the method described in any one of the above embodiments when executed based on instructions stored in the memory.
- a virtual machine comprising the device described in any one of the above embodiments.
- a computer-readable storage medium comprising computer program instructions, wherein when the computer program instructions are executed by a processor, the method described in any one of the above embodiments is implemented.
- a computer program product including a computer program, wherein when the computer program is executed by a processor, the method described in any one of the above embodiments is implemented.
- a computer program/instruction is provided, wherein when the computer program/instruction is executed by a processor, the method described in any one of the above embodiments is implemented.
- by binding the container group deployed in the virtual machine with the VF (Virtual Function) network interface virtualized through SRIOV (Single Root Input/Output Virtualization) by the network card in the physical machine where the virtual machine is located, the container group can skip the OVS switch of the virtualization layer and communicate directly with the VF network interface, thereby improving the data transmission efficiency.
- FIG. 1 is a flow chart of a method for configuring a virtual machine according to some embodiments of the present disclosure;
- FIG. 2 is a schematic diagram of a data transmission process of a virtual machine according to some embodiments of the present disclosure;
- FIG. 3 is a schematic diagram of the deployment position of a virtual machine configuration device in a physical machine according to some embodiments of the present disclosure;
- FIG. 4 is a schematic diagram of the configuration flow during the startup and destruction of a container group in a virtual machine according to some embodiments of the present disclosure;
- FIG. 5 is a schematic diagram of the structure of a virtual machine configuration device according to some embodiments of the present disclosure;
- FIG. 6 is a schematic diagram of the structure of a virtual machine configuration device according to yet other embodiments of the present disclosure.
- Containerization technology has shortcomings in resource isolation and security management. Therefore, in the related art, containers are deployed in virtual machines, and resources are allocated and security is managed for the containers through the virtual machines, thereby solving the problems of container resource isolation and security management.
- However, in the related art, data transmission efficiency is low. Specifically, when containers are deployed in virtual machines and a container exchanges data with other network elements outside the physical machine where the virtual machine is located, the data must pass through the Open vSwitch (OVS) switch of the virtualization layer, resulting in low data transmission efficiency.
- FIG. 1 is a flowchart of a method for configuring a virtual machine according to some embodiments of the present disclosure.
- the method for configuring a virtual machine includes steps 102 to 106 .
- in step 102, in response to starting a container group in a virtual machine, a VF network interface requirement of the container group is identified.
- the container group includes one or more containers.
- the virtual machine is deployed in a Kubernetes system, in which case the container group is a Pod, the smallest deployable unit of the Kubernetes system.
- a Pod can consist of a single container or multiple mutually coupled containers.
- in step 104, after identifying the VF network interface requirement, the VF controller is triggered to mount at least one idle VF network interface to the virtual machine.
- the VF controller is deployed in the physical machine where the virtual machine is located, and at least one VF network interface is virtualized by the network card in the physical machine through SRIOV.
- the at least one VF network interface includes only one VF network interface. In other embodiments, the at least one VF network interface includes multiple VF network interfaces.
- Using SRIOV, the physical function (PF) of a physical network card can be virtualized into multiple VFs. Each VF corresponds to one VF network interface, and each VF can only perform data I/O (Input/Output) processing.
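- As a concrete illustration that is not part of the disclosure itself: on Linux, the VFs of a PF are typically created by writing the desired count to the PF's sriov_numvfs sysfs attribute. A minimal Go sketch, assuming an SR-IOV capable NIC whose PF appears as eth0 (an assumed name) and a process running as root:

```go
package main

import (
	"fmt"
	"os"
)

// createVFs asks the kernel to create `count` virtual functions on the
// given physical function by writing to its sriov_numvfs attribute.
// Assumption: "eth0" is the PF's netdev name and the caller is root.
func createVFs(pfName string, count int) error {
	path := fmt.Sprintf("/sys/class/net/%s/device/sriov_numvfs", pfName)
	// The kernel requires resetting the VF count to 0 before changing it.
	if err := os.WriteFile(path, []byte("0"), 0644); err != nil {
		return err
	}
	return os.WriteFile(path, []byte(fmt.Sprintf("%d", count)), 0644)
}

func main() {
	if err := createVFs("eth0", 4); err != nil {
		fmt.Fprintln(os.Stderr, "creating VFs:", err)
		os.Exit(1)
	}
	fmt.Println("VFs created")
}
```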
- the VF controller virtualizes the PF of the network card on the physical machine into at least one VF through SRIOV, and the VFs virtualized by the PF constitute a VF pool.
- if a VF network interface in the VF pool is not mounted on any virtual machine, it is an idle VF network interface.
- different VF network interfaces correspond to different identifiers; the VF controller reads the data on the PF to obtain the status information of each VF network interface, and thereby determines which VF network interfaces are idle.
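- The disclosure does not pin down how the VF controller reads this state. One plausible host-side realization (our assumption, not the patented mechanism) is to resolve the virtfn* symlinks that the Linux kernel creates under the PF's device directory, one per VF, and to treat every VF whose PCI address is not recorded as mounted as idle:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// listVFs returns the PCI addresses of all VFs of the given PF by resolving
// the virtfn0, virtfn1, ... symlinks the kernel creates under the PF device.
func listVFs(pfName string) ([]string, error) {
	pattern := fmt.Sprintf("/sys/class/net/%s/device/virtfn*", pfName)
	links, err := filepath.Glob(pattern)
	if err != nil {
		return nil, err
	}
	var addrs []string
	for _, link := range links {
		target, err := os.Readlink(link) // e.g. "../0000:03:00.2"
		if err != nil {
			return nil, err
		}
		addrs = append(addrs, filepath.Base(target))
	}
	return addrs, nil
}

func main() {
	// The VF controller would maintain this set as it attaches VFs to
	// virtual machines; here it is a hard-coded placeholder.
	mounted := map[string]bool{"0000:03:00.2": true}

	addrs, err := listVFs("eth0") // "eth0" is an assumed PF name
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, addr := range addrs {
		if !mounted[addr] {
			fmt.Println("idle VF:", addr)
		}
	}
}
```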
- in step 106, at least one VF network interface mounted on the virtual machine is bound to the container group according to the VF network interface requirement, so that the container group performs data transmission via the at least one VF network interface.
- in the above method, by binding the container group deployed in the virtual machine with the VF network interface obtained through SRIOV virtualization of the network card in the physical machine where the virtual machine is located, the container group can skip the OVS switch of the virtualization layer and communicate directly with the VF network interface, thereby improving data transmission efficiency.
- the VF network interfaces are managed through pass-through: once a VF network interface mounted on the virtual machine is bound to a container group, it is no longer visible on the virtual machine side. In this way, data transmission errors caused by binding the same VF network interface to multiple different container groups can be avoided.
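- The pass-through rule above amounts to an exclusive-ownership invariant over VF network interfaces. A minimal in-memory sketch of a binding registry that enforces it (an illustration, not the disclosed implementation):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// vfRegistry enforces that each VF network interface is bound to at most
// one container group at a time.
type vfRegistry struct {
	mu    sync.Mutex
	owner map[string]string // VF PCI address -> container group ID
}

func (r *vfRegistry) bind(vf, group string) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if g, taken := r.owner[vf]; taken {
		return errors.New("VF " + vf + " already bound to " + g)
	}
	r.owner[vf] = group
	return nil
}

func (r *vfRegistry) unbind(vf string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.owner, vf)
}

func main() {
	r := &vfRegistry{owner: map[string]string{}}
	fmt.Println(r.bind("0000:03:00.2", "pod-a")) // <nil>: first bind succeeds
	fmt.Println(r.bind("0000:03:00.2", "pod-b")) // error: already bound
	r.unbind("0000:03:00.2")
	fmt.Println(r.bind("0000:03:00.2", "pod-b")) // <nil>: rebind after unbind
}
```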
- FIG. 2 is a schematic diagram of the data transmission flow of a virtual machine according to some embodiments of the present disclosure. As shown in FIG. 2, container group 205 and container group 206 are deployed in virtual machine 202, and virtual machine 202 is located in physical machine 201.
- when the container group 206 needs to transmit data to another network element 204 outside the physical machine 201, the data transmission path is as follows.
- the data of the container group 206 follows the dotted path shown in FIG. 2: it is first copied into the kernel mode of the virtual machine, transmitted from the kernel-mode network protocol stack 207 of the virtual machine to the OVS switch 208 of the virtualization layer, then from the OVS switch 208 to the network card 203 in the physical machine 201, and finally from the network card 203 to the network element 204.
- in this transmission process, both the kernel-mode network protocol stack 207 of the virtual machine and the OVS switch 208 of the virtualization layer occupy resources (for example, CPU) of the physical machine 201, and the data transmission efficiency is low.
- the VF controller in the physical machine 201 virtualizes the PF of the network card 203 in the physical machine into multiple VFs through the SRIOV technology, which are represented as VF-1 to VF-N. After identifying the VF network interface requirements of the container group 205, the idle VF-1 network interface is bound to the container group 205 according to the method shown in the above embodiment.
- when the data transmission direction is from the container group 205 to the network element 204, the data follows solid path (1) in the figure: after being forwarded by the kernel-mode network protocol stack 207 of the virtual machine, it is transmitted directly to the network element 204 through the VF-1 network interface using the network card 203, without being forwarded by the OVS switch 208 of the virtualization layer, thereby improving the data transmission efficiency.
- the VF network interface requirement includes the number of VF network interfaces required by the container group.
- the VFs obtained through SRIOV virtualization do not distinguish specific functions, so the VF network interface requirement of a container group can be simply represented by the number of VF network interfaces it requires.
- identifying the virtual function VF network interface requirement of the container group includes: in response to starting the container group in the virtual machine, obtaining a custom resource definition (CRD) resource of the container group; and identifying the number of VF network interfaces required by the container group based on the CRD resource of the container group.
- multiple CRD resources can be defined through the functions of the Kubernetes system, and the correspondence between container groups and the number of VF network interfaces can be defined through the CRD resources.
- for example, multiple CRD resources are defined as CRD-1, CRD-2, and CRD-3: CRD-1 corresponds one container group to one VF network interface, CRD-2 corresponds one container group to two VF network interfaces, and CRD-3 corresponds one container group to three VF network interfaces.
- the container group deployed in the virtual machine specifies the corresponding configuration through the CRD resource at startup, including the VF network interface requirements of the container group.
- after the VF network interface is bound to the corresponding container group according to the VF network interface requirement, the CRD resource of the container group is instantiated, and the container group can use the VF network interface bound to it to accelerate data processing, thereby improving data transmission efficiency.
- in this way, the number of VF network interfaces mounted on the virtual machine can be one or more, and a container group can simultaneously use one or more VF network interfaces to accelerate data transmission, which improves the flexibility of data transmission acceleration.
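- To make the CRD mechanics concrete, the following hedged Go sketch decodes such a requirement from a CRD instance. The kind names follow the CRD-1/CRD-2/CRD-3 example above, but the spec.vfCount field is our own invention and is not taken from the disclosure:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// VFNetworkRequirement mirrors a hypothetical CRD instance; the field
// names (kind, spec.vfCount) are illustrative, not from the patent.
type VFNetworkRequirement struct {
	Kind string `json:"kind"`
	Spec struct {
		VFCount int `json:"vfCount"`
	} `json:"spec"`
}

func main() {
	// What the first plug-in might receive when the container group starts.
	raw := []byte(`{"kind": "CRD-3", "spec": {"vfCount": 3}}`)

	var req VFNetworkRequirement
	if err := json.Unmarshal(raw, &req); err != nil {
		panic(err)
	}
	fmt.Printf("container group needs %d VF network interface(s)\n",
		req.Spec.VFCount)
}
```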
- in response to the destruction of the container group, the at least one VF network interface bound to the container group is unbound from the container group.
- once the container group is destroyed, the bound VF network interface is no longer needed for data transmission; unbinding the VF network interface from the container group after its destruction facilitates the subsequent reuse of the unbound VF network interface.
- after the at least one VF network interface is unbound from the container group, the VF controller is triggered to release it from the virtual machine. After being unbound from the container group, the at least one VF network interface is still mounted on the virtual machine; by further triggering the VF controller to release the VF network interface, it returns to the idle state and can be bound to other container groups when they are started, realizing the data transmission acceleration function.
- the above method can improve resource utilization.
- the VF controller mounts at least one VF network interface to the virtual machine by means of device-attach.
- the at least one network interface is mounted to the virtual machine through a pipeline operation.
- compared with other mounting approaches, the life cycle is then managed by the pipeline process itself, with no need to manage it through code.
- under concurrency, each pipeline operation is scheduled by its own pipeline process, with a corresponding resource space, so concurrency need not be implemented in code, which reduces the difficulty of code implementation and the risk of operation.
- pipeline operations can also be scheduled on different virtualization layers, and are thus compatible with different hypervisors (a hedged example follows below).
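- The disclosure leaves device-attach hypervisor-agnostic. On a libvirt/KVM stack, for example, a VF could be hot-plugged into a running VM by piping a PCI hostdev description to virsh attach-device; the domain name and PCI address below are placeholders, and this is only one possible realization:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// attachVF hot-plugs the VF at the given PCI address (e.g. "0000:03:00.2")
// into the named libvirt domain as a pass-through hostdev.
func attachVF(domain, pciAddr string) error {
	var dom, bus, slot, fn string
	if _, err := fmt.Sscanf(pciAddr, "%4s:%2s:%2s.%1s",
		&dom, &bus, &slot, &fn); err != nil {
		return err
	}
	xml := fmt.Sprintf(`<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x%s' bus='0x%s' slot='0x%s' function='0x%s'/>
  </source>
</hostdev>`, dom, bus, slot, fn)

	// Feed the hostdev XML to virsh over stdin; --live hot-plugs it.
	cmd := exec.Command("virsh", "attach-device", domain, "/dev/stdin", "--live")
	cmd.Stdin = strings.NewReader(xml)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("virsh: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := attachVF("vm-202", "0000:03:00.2"); err != nil {
		fmt.Println(err)
	}
}
```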
- the configuration method of the virtual machine further includes: in response to a Data Plane Development Kit (DPDK) usage requirement of the container group, loading the DPDK for the container group; and binding the at least one VF network interface bound to the container group to the DPDK, so that the container group bypasses the kernel-mode network protocol stack of the virtual machine and transmits data via the DPDK and the at least one VF network interface.
- the above method can further improve the data transmission efficiency.
- when the container group 205 transmits data to the network element 204 through data transmission path (1), the data no longer needs to be forwarded by the OVS switch 208 of the virtualization layer, but still needs to be forwarded by the kernel-mode network protocol stack 207 of the virtual machine.
- to further improve data transmission efficiency, DPDK 209 is loaded in response to the DPDK loading requirement of the container group 205, and is bound to the VF-1 network interface bound to the container group 205.
- DPDK 209 runs in the user-mode space of the virtual machine 202; it contains a function library and a set of drivers for processing data, and processes data packets by polling through this function library and driver set. In this way, when a data packet is received from the container group 205, it is not copied from user-mode space into the kernel mode and then transmitted through the kernel-mode network protocol stack 207 of the virtual machine.
- instead, the data packet is delivered directly to DPDK 209 for processing and transmitted through DPDK 209 to the bound VF-1 network interface, bypassing the kernel-mode network protocol stack 207 of the virtual machine, so the data need not be copied between user mode and kernel mode.
- the above method can further improve the data transmission efficiency.
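- In a typical DPDK deployment (our illustration, not something mandated by the disclosure), the binding of a VF to DPDK is performed inside the guest with DPDK's dpdk-devbind.py tool, which rebinds the VF from its kernel network driver to a userspace I/O driver such as vfio-pci so that DPDK's poll-mode drivers can claim it:

```go
package main

import (
	"fmt"
	"os/exec"
)

// bindToDPDK detaches the VF from its kernel network driver and binds it
// to a userspace I/O driver so DPDK's poll-mode drivers can claim it.
// The PCI address is a placeholder.
func bindToDPDK(pciAddr string) error {
	out, err := exec.Command("dpdk-devbind.py",
		"--bind=vfio-pci", pciAddr).CombinedOutput()
	if err != nil {
		return fmt.Errorf("dpdk-devbind: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := bindToDPDK("0000:03:00.2"); err != nil {
		fmt.Println(err)
	}
}
```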
- the configuration method of the virtual machine further includes releasing the DPDK environment in response to the destruction of the container group.
- resources such as the hugepage cache occupied by the DPDK during loading and running are released when the container group is destroyed. The above method can improve resource utilization.
- the present disclosure also provides a virtual machine configuration device, which will be introduced below in conjunction with FIG. 3 .
- FIG. 3 is a schematic diagram of the deployment position of a virtual machine configuration device in a physical machine according to some embodiments of the present disclosure. As shown in FIG. 3, container group 205 and container group 206 are deployed in virtual machine 202, and virtual machine 202 is located in physical machine 201.
- the configuration device of the virtual machine includes: a first plug-in 211 and a second plug-in 212 .
- the first plug-in 211 is configured to identify the VF network interface requirements of the container group 205 in response to the startup of the container group 205 in the virtual machine 202; and bind at least one VF network interface (for example, the VF-1 network interface) mounted on the virtual machine 202 to the container group 205 according to the VF network interface requirements.
- the second plug-in 212 is configured to trigger the VF controller 210 to mount at least one idle VF network interface to the virtual machine 202 after identifying the VF network interface requirement, wherein the VF controller 210 is deployed in the physical machine 201 where the virtual machine 202 is located, and at least one VF network interface is obtained by the network card 203 in the physical machine 201 through SRIOV virtualization.
- by binding the container group 205 deployed in the virtual machine 202 with the VF-1 network interface virtualized through SRIOV by the network card 203 in the physical machine 201 where the virtual machine 202 is located, the container group 205 can skip the OVS switch 208 of the virtualization layer and communicate directly with the VF-1 network interface, thereby improving data transmission efficiency.
- the configuration device of the virtual machine further includes a third plug-in 213, which is configured to notify the second plug-in 212 to load the DPDK 209 for the container group 205 in response to the DPDK usage requirement of the container group 205.
- the second plug-in 212 is also configured to bind at least one VF network interface bound to the container group 205, namely the VF-1 network interface, to the DPDK 209, so that the container group 205 bypasses the kernel-mode network protocol stack 207 of the virtual machine and transmits data via the DPDK 209 and the VF-1 network interface.
- the above method can further improve the data transmission efficiency.
- the first plug-in 211 is further configured to, in response to the destruction of the container group 205, unbind at least one VF network interface bound to the container group 205, namely, the VF-1 network interface, from the container group 205.
- once the container group 205 is destroyed, the VF-1 network interface is no longer needed for accelerated data transmission.
- unbinding the VF network interface occupied by the container group 205 from the container group facilitates the reuse of VF network interface resources.
- the second plug-in 212 is further configured to trigger the VF controller 210 to release the at least one VF network interface from the virtual machine 202 after it is unbound from the container group 205. After being unbound from the container group 205, the at least one VF network interface is still mounted on the virtual machine 202; further triggering the VF controller 210 to release it returns the VF network interface to the idle state, so that it can be bound to other container groups when they are started, realizing the data transmission acceleration function.
- the above method can further facilitate the reuse of VF network interface resources and reduce resource waste.
- where the VF controller 210 mounts the at least one VF network interface to the virtual machine 202 by means of device-attach, the at least one VF network interface mounted on the virtual machine 202 is released by means of device-detach, returning the VF network interface to the idle state.
- device-detach corresponds to device-attach and refers to releasing at least one network interface from the virtual machine 202 through a pipeline operation.
- the third plug-in 213 is embedded in the container group 205. Since the second plug-in 212 cannot automatically sense the startup of the container group 205, it cannot spontaneously start the DPDK 209 loading for the container group 205.
- the third plug-in 213 is embedded in the container group 205, which makes it easier to identify the startup of the container group 205, thereby notifying the second plug-in 212 to start the DPDK 209 loading for the container group 205.
- the third plug-in 213 is embedded in the container group 205 in a sidecar manner, so that the DPDK 209 for the container group 205 can be operated and maintained without affecting the applications in the container group 205.
- the third plug-in 213 notifies the second plug-in 212 of the startup of the container group 205 via a remote procedure call (RPC), such as gRPC.
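- The disclosure only requires that the third plug-in notify the second plug-in over RPC, gRPC being one example. A real gRPC service would need generated protobuf stubs, so the sketch below uses Go's standard net/rpc as a self-contained stand-in for the same request/response shape; all names are illustrative:

```go
package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// LoadDPDKArgs is what the sidecar (third plug-in) sends when its
// container group starts; the field names are illustrative.
type LoadDPDKArgs struct {
	PodID   string
	VFAddrs []string
}

// DPDKLoader is the service the second plug-in would expose.
type DPDKLoader struct{}

func (DPDKLoader) LoadDPDK(args LoadDPDKArgs, reply *bool) error {
	fmt.Printf("loading DPDK for %s on %v\n", args.PodID, args.VFAddrs)
	*reply = true
	return nil
}

func main() {
	// Second plug-in side: expose the loader service.
	if err := rpc.Register(DPDKLoader{}); err != nil {
		panic(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:4321")
	if err != nil {
		panic(err)
	}
	go rpc.Accept(ln)

	// Sidecar side: notify on container-group startup.
	client, err := rpc.Dial("tcp", "127.0.0.1:4321")
	if err != nil {
		panic(err)
	}
	var ok bool
	err = client.Call("DPDKLoader.LoadDPDK",
		LoadDPDKArgs{PodID: "pod-205", VFAddrs: []string{"0000:03:00.2"}}, &ok)
	fmt.Println(ok, err)
}
```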
- the second plug-in 212 is further configured to release the DPDK environment in response to the destruction of the container group 205.
- resources such as the hugepage cache occupied by the DPDK 209 during its loading and running are released when the container group 205 is destroyed. The above method can reduce resource waste and improve resource utilization.
- FIG. 4 is a schematic diagram of the configuration flow during the startup and destruction of a container group in a virtual machine according to some embodiments of the present disclosure.
- the network card 203 virtualizes six VF network interfaces through SRIOV, corresponding to the VF-1 to VF-6 network interfaces, respectively.
- the VF-2 network interface, the VF-4 network interface and the VF-6 network interface are in an idle state.
- when the container group 205 in the virtual machine 202 is started, the configuration process of the virtual machine is as described in steps 1 to 5.
- Step 1: When the virtual machine 202 starts the container group 205, the first plug-in 211 is triggered to identify the VF network interface requirement of the container group 205. The identification result is that the container group 205 requires three VF network interfaces.
- Step 2: After identifying the VF network interface requirement, the first plug-in 211 triggers the second plug-in 212 to send a command to the VF controller 210, thereby triggering the VF controller 210 to mount the three idle VF network interfaces, namely the VF-2, VF-4, and VF-6 network interfaces, to the virtual machine 202.
- Step 3: The first plug-in 211 binds the VF-2, VF-4, and VF-6 network interfaces mounted on the virtual machine 202 to the container group 205 according to the VF network interface requirement.
- Step 4: The container group 205 is instantiated. During the instantiation of the container group 205, the third plug-in 213 embedded in the container group 205 notifies the second plug-in 212 to load the DPDK 209 for the container group 205 in response to the DPDK usage requirement of the container group 205.
- Step 5: The second plug-in 212 binds the VF-2, VF-4, and VF-6 network interfaces bound to the container group 205 to the DPDK 209, so that the container group 205 bypasses the kernel-mode network protocol stack 207 of the virtual machine and transmits data via the DPDK 209 and at least one of the VF-2, VF-4, and VF-6 network interfaces.
- After the DPDK 209 is loaded, the application 214 deployed in the container group 205 can use the DPDK 209 and multiple VF network interfaces to transmit data.
- when the container group 205 in the virtual machine 202 is destroyed, the configuration process of the virtual machine is as described in steps 6 to 9.
- Step 6: In response to the destruction of the container group 205, the first plug-in 211 unbinds the VF-2, VF-4, and VF-6 network interfaces bound to the container group 205 from the container group 205.
- Step 7: In response to the destruction of the container group 205, the first plug-in 211 notifies the second plug-in 212 to release the DPDK 209 environment of the container group 205.
- Step 8: After the VF-2, VF-4, and VF-6 network interfaces are unbound from the container group 205, the first plug-in 211 triggers the second plug-in 212 to trigger the VF controller 210 to release the VF-2, VF-4, and VF-6 network interfaces from the virtual machine 202 by means of device-detach.
- Step 9: The destruction of the container group 205 is completed, and the VF-2, VF-4, and VF-6 network interfaces are restored to the idle state.
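- Reading steps 1 to 9 together, the two plug-ins implement a symmetric setup/teardown pair around the container group's life cycle. A compressed sketch of that orchestration, with stub functions standing in for the mechanisms illustrated earlier (again an illustration, not the claimed implementation):

```go
package main

import "fmt"

// Stub operations; each corresponds to a numbered step of FIG. 4 and stands
// in for the concrete mechanisms sketched earlier.
func identifyRequirement(pod string) int { return 3 }                                    // step 1
func deviceAttach(n int) []string        { return []string{"VF-2", "VF-4", "VF-6"}[:n] } // step 2
func bindToPod(pod string, vfs []string) { fmt.Println("bind", vfs, "to", pod) }         // step 3
func loadDPDK(pod string)                { fmt.Println("load DPDK for", pod) }           // step 4
func bindVFsToDPDK(vfs []string)         { fmt.Println("bind", vfs, "to DPDK") }         // step 5
func unbindFromPod(pod string, vfs []string) { fmt.Println("unbind", vfs, "from", pod) } // step 6
func releaseDPDK(pod string)             { fmt.Println("release DPDK env of", pod) }     // step 7
func deviceDetach(vfs []string)          { fmt.Println("device-detach", vfs) }           // steps 8-9

// onPodStart mirrors the startup flow (steps 1-5).
func onPodStart(pod string) []string {
	vfs := deviceAttach(identifyRequirement(pod))
	bindToPod(pod, vfs)
	loadDPDK(pod)
	bindVFsToDPDK(vfs)
	return vfs
}

// onPodDestroy mirrors the teardown flow (steps 6-9), the mirror image of startup.
func onPodDestroy(pod string, vfs []string) {
	unbindFromPod(pod, vfs)
	releaseDPDK(pod)
	deviceDetach(vfs)
}

func main() {
	vfs := onPodStart("pod-205")
	onPodDestroy("pod-205", vfs)
}
```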
- FIG. 5 is a schematic diagram of the structure of a virtual machine configuration device according to some embodiments of the present disclosure.
- the configuration device of a virtual machine includes: an identification module 501 , a triggering module 502 and a binding module 503 .
- the identification module 501 is configured to identify a virtual function VF network interface requirement of the container group in response to starting the container group in the virtual machine.
- the trigger module 502 is configured to trigger the VF controller to mount at least one idle VF network interface to the virtual machine after identifying the VF network interface requirement, wherein the VF controller is deployed in the physical machine where the virtual machine is located, and at least one VF network interface is obtained by the network card in the physical machine through SRIOV virtualization.
- the binding module 503 is configured to bind at least one VF network interface mounted on the virtual machine to the container group according to the VF network interface requirement, so that the container group performs data transmission via the at least one VF network interface.
- the above-mentioned virtual machine configuration device may also include other modules to execute the virtual machine configuration method of any one of the above-mentioned embodiments.
- FIG. 6 is a schematic diagram of the structure of a virtual machine configuration device according to yet other embodiments of the present disclosure.
- a virtual machine configuration device 600 includes a memory 601 and a processor 602 coupled to the memory 601 .
- the processor 602 is configured to execute a method of any one of the aforementioned embodiments based on instructions stored in the memory 601 .
- the memory 601 may include, for example, a system memory, a fixed non-volatile storage medium, etc.
- the system memory may store, for example, an operating system, an application program, a boot loader, and other programs.
- the virtual machine configuration device 600 may also include an input/output interface 603, a network interface 604, and a storage interface 605.
- the input/output interface 603, the network interface 604, the storage interface 605, the memory 601, and the processor 602 can be connected, for example, via a bus 606.
- the input/output interface 603 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, and a touch screen.
- the network interface 604 provides a connection interface for various networked devices.
- the storage interface 605 provides a connection interface for external storage devices such as an SD card and a USB flash drive.
- the embodiments of the present disclosure further provide a computer-readable storage medium, including computer program instructions, which implement the method of any one of the above embodiments when executed by a processor.
- the embodiments of the present disclosure further provide a computer program product, including a computer program, which implements the method of any one of the above embodiments when executed by a processor.
- the embodiments of the present disclosure also provide a computer program/instruction, which implements the method of any one of the above embodiments when executed by a processor.
- the embodiments of the present disclosure may be provided as methods, systems, or computer program products. Therefore, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present disclosure may take the form of a computer program product implemented on one or more computer-usable non-transitory storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
- These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
- These computer program instructions may also be loaded onto a computer or other programmable data processing device so that a series of operational steps are executed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Stored Programmes (AREA)
Abstract
The present application provides a virtual machine and a configuration method and device thereof, relating to the field of information technology. The method includes: in response to the startup of a container group in a virtual machine, identifying a virtual function (VF) network interface requirement of the container group; after identifying the VF network interface requirement, triggering a VF controller to mount at least one idle VF network interface to the virtual machine, wherein the VF controller is deployed in the physical machine where the virtual machine is located, and the at least one VF network interface is obtained by virtualizing a network card in the physical machine through single-root I/O virtualization (SRIOV); and binding, according to the VF network interface requirement, the at least one VF network interface mounted on the virtual machine to the container group, so that the container group transmits data via the at least one VF network interface, skipping the OVS switch of the virtualization layer and thereby improving data transmission efficiency.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
The present application is based on, and claims priority to, Chinese patent application No. 202211345026.0 filed on October 31, 2022, the disclosure of which is incorporated into the present application by reference in its entirety.
The present disclosure relates to the field of information technology, and in particular to a virtual machine and a configuration method and device thereof.
To facilitate creating and deploying applications across different operating systems, applications are usually developed using containerization. Compared with traditional approaches, containerization does not depend on a specific computing environment, and applications can run independently and portably on any platform.
SUMMARY
Embodiments of the present disclosure provide the following technical solutions.
According to one aspect of the embodiments of the present disclosure, a method for configuring a virtual machine is provided, including: in response to the startup of a container group in a virtual machine, identifying a virtual function (VF) network interface requirement of the container group; after identifying the VF network interface requirement, triggering a VF controller to mount at least one idle VF network interface to the virtual machine, wherein the VF controller is deployed in the physical machine where the virtual machine is located, and the at least one VF network interface is obtained by virtualizing a network card in the physical machine through single-root I/O virtualization (SRIOV); and binding, according to the VF network interface requirement, the at least one VF network interface mounted on the virtual machine to the container group, so that the container group transmits data via the at least one VF network interface.
In some embodiments, the VF network interface requirement includes the number of VF network interfaces required by the container group.
In some embodiments, the method further includes: in response to the destruction of the container group, unbinding the at least one VF network interface bound to the container group from the container group.
In some embodiments, the method further includes: after the at least one VF network interface is unbound from the container group, triggering the VF controller to release the at least one VF network interface from the virtual machine.
In some embodiments, in response to the startup of the container group in the virtual machine, identifying the VF network interface requirement of the container group includes: in response to the startup of the container group in the virtual machine, obtaining a custom resource definition (CRD) resource of the container group; and identifying the number of VF network interfaces required by the container group based on the CRD resource of the container group.
In some embodiments, the method further includes: in response to a Data Plane Development Kit (DPDK) usage requirement of the container group, loading the DPDK for the container group; and binding the at least one VF network interface bound to the container group to the DPDK, so that the container group bypasses the kernel-mode network protocol stack of the virtual machine and transmits data via the DPDK and the at least one VF network interface.
In some embodiments, the method further includes: in response to the destruction of the container group, releasing the environment of the DPDK.
In some embodiments, the virtual machine is deployed in a Kubernetes system.
According to another aspect of the embodiments of the present disclosure, a configuration device for a virtual machine is provided, including: a first plug-in configured to identify the VF network interface requirement of a container group in response to the startup of the container group in the virtual machine, and to bind at least one VF network interface mounted on the virtual machine to the container group according to the VF network interface requirement, so that the container group transmits data via the at least one VF network interface; and a second plug-in configured to trigger a VF controller to mount the at least one idle VF network interface to the virtual machine after the VF network interface requirement is identified, wherein the VF controller is deployed in the physical machine where the virtual machine is located, and the at least one VF network interface is obtained by virtualizing a network card in the physical machine through SRIOV.
In some embodiments, the first plug-in is further configured to unbind the at least one VF network interface bound to the container group from the container group in response to the destruction of the container group.
In some embodiments, the second plug-in is further configured to trigger the VF controller to release the at least one VF network interface from the virtual machine after the at least one VF network interface is unbound from the container group.
In some embodiments, the device further includes a third plug-in configured to notify the second plug-in to load the DPDK for the container group in response to a DPDK usage requirement of the container group; the second plug-in is further configured to bind the at least one VF network interface bound to the container group to the DPDK, so that the container group bypasses the kernel-mode network protocol stack of the virtual machine and transmits data via the DPDK and the at least one VF network interface.
In some embodiments, the third plug-in is embedded in the container group.
In some embodiments, the second plug-in is further configured to release the environment of the DPDK in response to the destruction of the container group.
According to a further aspect of the embodiments of the present disclosure, a configuration device for a virtual machine is provided, including: an identification module configured to identify the VF network interface requirement of a container group in response to the startup of the container group in the virtual machine; a triggering module configured to trigger a VF controller to mount at least one idle VF network interface to the virtual machine after the VF network interface requirement is identified, wherein the VF controller is deployed in the physical machine where the virtual machine is located, and the at least one VF network interface is obtained by virtualizing a network card in the physical machine through SRIOV; and a binding module configured to bind, according to the VF network interface requirement, the at least one VF network interface mounted on the virtual machine to the container group, so that the container group transmits data via the at least one VF network interface.
According to yet another aspect of the embodiments of the present disclosure, a configuration device for a virtual machine is provided, including: a memory; and a processor coupled to the memory and configured to implement the method of any one of the above embodiments based on instructions stored in the memory.
According to still another aspect of the embodiments of the present disclosure, a virtual machine is provided, including the device of any one of the above embodiments.
According to still another aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, including computer program instructions that, when executed by a processor, implement the method of any one of the above embodiments.
According to still another aspect of the embodiments of the present disclosure, a computer program product is provided, including a computer program that, when executed by a processor, implements the method of any one of the above embodiments.
According to still another aspect of the embodiments of the present disclosure, a computer program/instruction is provided that, when executed by a processor, implements the method of any one of the above embodiments.
In the embodiments of the present disclosure, by binding a container group deployed in a virtual machine to a VF (Virtual Function) network interface obtained by virtualizing, through SRIOV (Single Root Input/Output Virtualization), a network card in the physical machine where the virtual machine is located, the container group can skip the OVS switch of the virtualization layer and communicate directly with the VF network interface, thereby improving data transmission efficiency.
The technical solutions of the present disclosure are described in further detail below with reference to the accompanying drawings and embodiments.
To explain the technical solutions in the embodiments of the present disclosure or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a method for configuring a virtual machine according to some embodiments of the present disclosure;
FIG. 2 is a schematic diagram of a data transmission flow of a virtual machine according to some embodiments of the present disclosure;
FIG. 3 is a schematic diagram of the deployment position of a virtual machine configuration device in a physical machine according to some embodiments of the present disclosure;
FIG. 4 is a schematic diagram of the configuration flow during the startup and destruction of a container group in a virtual machine according to some embodiments of the present disclosure;
FIG. 5 is a schematic structural diagram of a virtual machine configuration device according to some embodiments of the present disclosure;
FIG. 6 is a schematic structural diagram of a virtual machine configuration device according to yet other embodiments of the present disclosure.
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some of the embodiments of the present disclosure, not all of them. Based on the embodiments in the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.
Unless specifically stated otherwise, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure.
Meanwhile, it should be understood that, for ease of description, the dimensions of the various parts shown in the drawings are not drawn to actual scale.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
In all examples shown and discussed here, any specific value should be interpreted as merely exemplary, not as a limitation. Therefore, other examples of the exemplary embodiments may have different values.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
Containerization technology has shortcomings in resource isolation and security management. Therefore, in the related art, containers are deployed in virtual machines, and resources are allocated and security is managed for the containers through the virtual machines, thereby solving the problems of container resource isolation and security management. However, in the related art, data transmission efficiency is low. Specifically, when containers are deployed in virtual machines and a container exchanges data with other network elements outside the physical machine where the virtual machine is located, the data must pass through the Open vSwitch (OVS) switch of the virtualization layer, resulting in low data transmission efficiency.
Some embodiments of the present disclosure provide a method for configuring a virtual machine. FIG. 1 is a schematic flowchart of a method for configuring a virtual machine according to some embodiments of the present disclosure.
As shown in FIG. 1, the method for configuring a virtual machine includes steps 102 to 106.
In step 102, in response to the startup of a container group in a virtual machine, a VF network interface requirement of the container group is identified. Here, the container group includes one or more containers.
In some embodiments, the virtual machine is deployed in a Kubernetes system, in which case the container group is a Pod, the smallest deployable unit of the Kubernetes system. A Pod can consist of a single container or multiple mutually coupled containers.
In step 104, after the VF network interface requirement is identified, a VF controller is triggered to mount at least one idle VF network interface to the virtual machine. Here, the VF controller is deployed in the physical machine where the virtual machine is located, and the at least one VF network interface is obtained by virtualizing a network card in the physical machine through SRIOV.
In some embodiments, the at least one VF network interface includes only one VF network interface. In other embodiments, the at least one VF network interface includes multiple VF network interfaces.
Using SRIOV, the physical function (PF) of a physical network card can be virtualized into multiple VFs; each VF corresponds to one VF network interface, and each VF can only perform data I/O (Input/Output) processing.
In some implementations, the VF controller virtualizes the PF of the network card on the physical machine into at least one VF through SRIOV, and the VFs obtained from the PF constitute a VF pool; a VF network interface in the VF pool that is not mounted on a virtual machine is an idle VF network interface. In some implementations, different VF network interfaces correspond to different identifiers; the VF controller reads data on the PF to obtain the status information of each VF network interface, and thereby determines which VF network interfaces are idle.
In step 106, the at least one VF network interface mounted on the virtual machine is bound to the container group according to the VF network interface requirement, so that the container group transmits data via the at least one VF network interface.
In the above method, by binding the container group deployed in the virtual machine to a VF network interface obtained by SRIOV virtualization of the network card in the physical machine where the virtual machine is located, the container group can skip the OVS switch of the virtualization layer and communicate directly with the VF network interface, thereby improving data transmission efficiency.
In some embodiments, the VF network interfaces are managed through pass-through: once a VF network interface mounted on the virtual machine is bound to a container group, it is no longer visible on the virtual machine side. In this way, data transmission errors caused by binding the same VF network interface to multiple different container groups can be avoided.
FIG. 2 is a schematic diagram of a data transmission flow of a virtual machine according to some embodiments of the present disclosure. As shown in FIG. 2, container group 205 and container group 206 are deployed in virtual machine 202, and virtual machine 202 is located in physical machine 201.
When container group 206 needs to transmit data to another network element 204 outside physical machine 201, the transmission path is as follows. The data of container group 206 follows the dotted path shown in FIG. 2: it is first copied into the kernel mode of the virtual machine, transmitted from the kernel-mode network protocol stack 207 of the virtual machine to the OVS switch 208 of the virtualization layer, then from the OVS switch 208 to the network card 203 in physical machine 201, and finally from the network card 203 to the network element 204. In this transmission process, both the kernel-mode network protocol stack 207 of the virtual machine and the OVS switch 208 of the virtualization layer occupy resources (for example, CPU) of physical machine 201, and the data transmission efficiency is low.
In some implementations, the VF controller in physical machine 201 virtualizes the PF of the network card 203 into multiple VFs through SRIOV, denoted VF-1 to VF-N. After the VF network interface requirement of container group 205 is identified, the idle VF-1 network interface is bound to container group 205 according to the method shown in the foregoing embodiments. In this case, when the data transmission direction is from container group 205 to network element 204, the data follows solid path (1) in the figure: after being forwarded by the kernel-mode network protocol stack 207 of the virtual machine, it is transmitted directly to network element 204 through the VF-1 network interface using the network card 203, without being forwarded by the OVS switch 208 of the virtualization layer, thereby improving data transmission efficiency.
In some embodiments, the VF network interface requirement includes the number of VF network interfaces required by the container group. Since the VFs obtained through SRIOV virtualization do not distinguish specific functions, the VF network interface requirement of a container group can be simply represented by the number of VF network interfaces it requires.
In some embodiments, in response to the startup of the container group in the virtual machine, identifying the VF network interface requirement of the container group includes: in response to the startup of the container group in the virtual machine, obtaining a custom resource definition (CRD) resource of the container group; and identifying the number of VF network interfaces required by the container group based on the CRD resource of the container group.
In some implementations, when the virtual machine is deployed in a Kubernetes system, multiple CRD resources can be defined through the functions of the Kubernetes system, and the correspondence between container groups and the number of VF network interfaces can be defined through the CRD resources. For example, multiple CRD resources are defined as CRD-1, CRD-2, and CRD-3, where CRD-1 corresponds one container group to one VF network interface, CRD-2 corresponds one container group to two VF network interfaces, and CRD-3 corresponds one container group to three VF network interfaces. A container group deployed in the virtual machine specifies its configuration, including its VF network interface requirement, through a CRD resource at startup. After the VF network interface(s) are bound to the container group according to the VF network interface requirement, the CRD resource of the container group is instantiated, and the container group can use the VF network interface(s) bound to it to accelerate data processing, thereby improving data transmission efficiency. In this way, one or more VF network interfaces can be mounted on the virtual machine, and a container group can simultaneously use one or more VF network interfaces to accelerate data transmission, which improves the flexibility of data transmission acceleration.
In some embodiments, in response to the destruction of the container group, the at least one VF network interface bound to the container group is unbound from the container group. Once the container group is destroyed, the bound VF network interface is no longer needed for data transmission; unbinding it after the destruction facilitates subsequent reuse of the unbound VF network interface.
In some embodiments, after the at least one VF network interface is unbound from the container group, the VF controller is triggered to release the at least one VF network interface from the virtual machine. After being unbound from the container group, the at least one VF network interface is still mounted on the virtual machine; by further triggering the VF controller to release the VF network interface, it returns to the idle state and can be bound to other container groups when they are started, realizing the data transmission acceleration function. This approach can improve resource utilization.
In some embodiments, the VF controller mounts the at least one VF network interface to the virtual machine by means of device-attach.
In some embodiments, the at least one network interface is mounted to the virtual machine through a pipeline operation. Compared with other mounting approaches, when mounting is implemented through a pipeline operation, the life cycle is managed by the pipeline process itself and need not be managed in code. Under concurrency, each pipeline operation is scheduled by its own pipeline process, with its own resource space, so concurrency need not be implemented in code, which reduces implementation difficulty and operational risk. Moreover, pipeline operations can be scheduled on different virtualization layers, and are thus compatible with different hypervisors.
In some embodiments, the method for configuring a virtual machine further includes: in response to a Data Plane Development Kit (DPDK) usage requirement of the container group, loading the DPDK for the container group; and binding the at least one VF network interface bound to the container group to the DPDK, so that the container group bypasses the kernel-mode network protocol stack of the virtual machine and transmits data via the DPDK and the at least one VF network interface. This approach can further improve data transmission efficiency.
In some implementations, as shown in FIG. 2, when container group 205 transmits data to network element 204 through data transmission path (1), the data no longer needs to be forwarded by the OVS switch 208 of the virtualization layer, but still needs to be forwarded by the kernel-mode network protocol stack 207 of the virtual machine.
To further improve data transmission efficiency, DPDK 209 is loaded in response to the DPDK loading requirement of container group 205 and is bound to the VF-1 network interface bound to container group 205. DPDK 209 runs in the user-mode space of virtual machine 202; it contains a function library and a set of drivers for processing data, and processes data packets by polling through this function library and driver set. Thus, when a data packet is received from container group 205, it is not copied from user-mode space into the kernel mode and then transmitted through the kernel-mode network protocol stack 207 of the virtual machine; instead, as shown by data transmission path (2) in FIG. 2, the packet is delivered directly to DPDK 209 for processing and transmitted through DPDK 209 to the bound VF-1 network interface, bypassing the kernel-mode network protocol stack 207, so the data need not be copied between user mode and kernel mode. This approach can further improve data transmission efficiency.
In some embodiments, the method for configuring a virtual machine further includes releasing the environment of the DPDK in response to the destruction of the container group. Resources such as the hugepage cache occupied by the DPDK during loading and running are released when the container group is destroyed, which improves resource utilization.
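As background for the hugepage remark above (our addition, not part of the disclosure): DPDK reserves hugepage-backed memory when it initializes, so a component loading DPDK can sanity-check availability beforehand by parsing the HugePages_Free field of /proc/meminfo:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// freeHugePages parses HugePages_Free from /proc/meminfo, the pool from
// which DPDK allocates its packet buffers.
func freeHugePages() (int, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "HugePages_Free:") {
			var n int
			if _, err := fmt.Sscanf(
				strings.TrimPrefix(line, "HugePages_Free:"), "%d", &n); err != nil {
				return 0, err
			}
			return n, nil
		}
	}
	if err := sc.Err(); err != nil {
		return 0, err
	}
	return 0, fmt.Errorf("HugePages_Free not found")
}

func main() {
	n, err := freeHugePages()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("free hugepages:", n)
}
```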
Except that the forwarding nodes are traversed in reverse order, embodiments in which the data transmission direction is from another network element 204 outside physical machine 201 to container group 205 can refer to the embodiments in which data is transmitted from container group 205 to the network element 204 outside the physical machine.
In addition to the above method for configuring a virtual machine, the present disclosure also provides a configuration device for a virtual machine, which is introduced below with reference to FIG. 3.
FIG. 3 is a schematic diagram of the deployment position of a virtual machine configuration device in a physical machine according to some embodiments of the present disclosure. As shown in FIG. 3, container group 205 and container group 206 are deployed in virtual machine 202, and virtual machine 202 is located in physical machine 201.
In some embodiments, as shown in FIG. 3, the configuration device of the virtual machine includes a first plug-in 211 and a second plug-in 212.
The first plug-in 211 is configured to identify the VF network interface requirement of container group 205 in response to the startup of container group 205 in virtual machine 202, and to bind at least one VF network interface mounted on virtual machine 202 (for example, the VF-1 network interface) to container group 205 according to the VF network interface requirement.
The second plug-in 212 is configured to trigger the VF controller 210 to mount at least one idle VF network interface to virtual machine 202 after the VF network interface requirement is identified, wherein the VF controller 210 is deployed in physical machine 201 where virtual machine 202 is located, and the at least one VF network interface is obtained by virtualizing the network card 203 in physical machine 201 through SRIOV.
By binding container group 205 deployed in virtual machine 202 to the VF-1 network interface obtained by SRIOV virtualization of the network card 203 in physical machine 201 where virtual machine 202 is located, container group 205 can skip the OVS switch 208 of the virtualization layer and communicate directly with the VF-1 network interface, thereby improving data transmission efficiency.
In some embodiments, as shown in FIG. 3, the configuration device further includes a third plug-in 213, configured to notify the second plug-in 212 to load DPDK 209 for container group 205 in response to the DPDK usage requirement of container group 205. Accordingly, the second plug-in 212 is further configured to bind the at least one VF network interface bound to container group 205, namely the VF-1 network interface, to DPDK 209, so that container group 205 bypasses the kernel-mode network protocol stack 207 of the virtual machine and transmits data via DPDK 209 and the VF-1 network interface. This approach can further improve data transmission efficiency.
In some embodiments, the first plug-in 211 is further configured to, in response to the destruction of container group 205, unbind the at least one VF network interface bound to container group 205, namely the VF-1 network interface, from container group 205. Once container group 205 is destroyed, the VF-1 network interface is no longer needed for accelerated data transmission; unbinding the VF network interface occupied by container group 205 in response to its destruction facilitates the reuse of VF network interface resources.
In some embodiments, the second plug-in 212 is further configured to trigger the VF controller 210 to release the at least one VF network interface from virtual machine 202 after the at least one VF network interface is unbound from container group 205. After being unbound from container group 205, the at least one VF network interface is still mounted on virtual machine 202; further triggering the VF controller 210 to release it returns the VF network interface to the idle state, so that it can be bound to other container groups when they are started, realizing the data transmission acceleration function. This approach further facilitates the reuse of VF network interface resources and reduces resource waste.
In some embodiments, where the VF controller 210 mounts the at least one VF network interface to virtual machine 202 by means of device-attach, the at least one VF network interface mounted on virtual machine 202 is released by means of device-detach, returning the VF network interface to the idle state. Device-detach corresponds to device-attach and refers to releasing at least one network interface from virtual machine 202 through a pipeline operation.
In some embodiments, the third plug-in 213 is embedded in container group 205. Since the second plug-in 212 cannot automatically sense the startup of container group 205, it cannot spontaneously initiate the loading of DPDK 209 for container group 205; with the third plug-in 213 embedded in container group 205, the startup of container group 205 is easier to identify, so the second plug-in 212 can be notified to initiate the loading of DPDK 209 for container group 205.
In some implementations, the third plug-in 213 is embedded in container group 205 as a sidecar, so that DPDK 209 for container group 205 can be operated and maintained without affecting the applications in container group 205.
In some implementations, the third plug-in 213 notifies the second plug-in 212 of the startup of container group 205 through a remote procedure call (RPC), such as gRPC.
In some embodiments, the second plug-in 212 is further configured to release the DPDK environment in response to the destruction of container group 205. Resources such as the hugepage cache occupied by DPDK 209 during its loading and running are released when container group 205 is destroyed, which reduces resource waste and improves resource utilization.
For ease of understanding, the configuration flow during the startup and destruction of a container group in a virtual machine is introduced below.
FIG. 4 is a schematic diagram of the configuration flow during the startup and destruction of a container group in a virtual machine according to some embodiments of the present disclosure.
In some embodiments, as shown in FIG. 4, the network card 203 virtualizes six VF network interfaces through SRIOV, corresponding to the VF-1 to VF-6 network interfaces, respectively. The VF-2, VF-4, and VF-6 network interfaces are in the idle state.
In some embodiments, when container group 205 in virtual machine 202 is started, the configuration flow of the virtual machine is as described in steps 1 to 5.
Step 1: When virtual machine 202 starts container group 205, the first plug-in 211 is triggered to identify the VF network interface requirement of container group 205; the identification result is that container group 205 requires three VF network interfaces.
Step 2: After identifying the VF network interface requirement, the first plug-in 211 triggers the second plug-in 212 to send a command to the VF controller 210, which triggers the VF controller 210 to mount the three idle VF network interfaces, namely the VF-2, VF-4, and VF-6 network interfaces, to virtual machine 202.
Step 3: The first plug-in 211 binds the VF-2, VF-4, and VF-6 network interfaces mounted on virtual machine 202 to container group 205 according to the VF network interface requirement.
Step 4: Container group 205 is instantiated. During the instantiation of container group 205, the third plug-in 213 embedded in container group 205 notifies the second plug-in 212 to load DPDK 209 for container group 205 in response to the DPDK usage requirement of container group 205.
Step 5: The second plug-in 212 binds the VF-2, VF-4, and VF-6 network interfaces bound to container group 205 to DPDK 209, so that container group 205 bypasses the kernel-mode network protocol stack 207 of the virtual machine and transmits data via DPDK 209 and at least one of the VF-2, VF-4, and VF-6 network interfaces. After DPDK 209 is loaded, the application 214 deployed in container group 205 can use DPDK 209 and the multiple VF network interfaces to transmit data.
In some embodiments, when container group 205 in virtual machine 202 is destroyed, the configuration flow of the virtual machine is as described in steps 6 to 9.
Step 6: In response to the destruction of container group 205, the first plug-in 211 unbinds the VF-2, VF-4, and VF-6 network interfaces bound to container group 205 from container group 205.
Step 7: In response to the destruction of container group 205, the first plug-in 211 notifies the second plug-in 212 to release the DPDK 209 environment of container group 205.
Step 8: After the VF-2, VF-4, and VF-6 network interfaces are unbound from container group 205, the first plug-in 211 triggers the second plug-in 212 to trigger the VF controller 210 to release the VF-2, VF-4, and VF-6 network interfaces from virtual machine 202 by means of device-detach.
Step 9: The destruction of container group 205 is completed, and the VF-2, VF-4, and VF-6 network interfaces return to the idle state.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts among the embodiments, reference may be made to one another. Since the device embodiments substantially correspond to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
FIG. 5 is a schematic structural diagram of a virtual machine configuration device according to some embodiments of the present disclosure.
As shown in FIG. 5, the configuration device of the virtual machine includes an identification module 501, a triggering module 502, and a binding module 503.
The identification module 501 is configured to identify the virtual function VF network interface requirement of a container group in response to the startup of the container group in the virtual machine.
The triggering module 502 is configured to trigger the VF controller to mount at least one idle VF network interface to the virtual machine after the VF network interface requirement is identified, wherein the VF controller is deployed in the physical machine where the virtual machine is located, and the at least one VF network interface is obtained by virtualizing the network card in the physical machine through SRIOV.
The binding module 503 is configured to bind, according to the VF network interface requirement, the at least one VF network interface mounted on the virtual machine to the container group, so that the container group transmits data via the at least one VF network interface.
It should be understood that the above configuration device may also include other modules to execute the virtual machine configuration method of any one of the above embodiments.
FIG. 6 is a schematic structural diagram of a virtual machine configuration device according to yet other embodiments of the present disclosure.
As shown in FIG. 6, the virtual machine configuration device 600 includes a memory 601 and a processor 602 coupled to the memory 601; the processor 602 is configured to execute the method of any one of the foregoing embodiments based on instructions stored in the memory 601.
The memory 601 may include, for example, a system memory, a fixed non-volatile storage medium, and the like. The system memory may store, for example, an operating system, application programs, a boot loader, and other programs.
The virtual machine configuration device 600 may also include an input/output interface 603, a network interface 604, a storage interface 605, and the like. The input/output interface 603, the network interface 604, the storage interface 605, the memory 601, and the processor 602 may be connected, for example, via a bus 606. The input/output interface 603 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, and a touch screen. The network interface 604 provides a connection interface for various networked devices. The storage interface 605 provides a connection interface for external storage devices such as an SD card and a USB flash drive.
Embodiments of the present disclosure also provide a computer-readable storage medium, including computer program instructions that, when executed by a processor, implement the method of any one of the above embodiments.
Embodiments of the present disclosure also provide a computer program product, including a computer program that, when executed by a processor, implements the method of any one of the above embodiments.
Embodiments of the present disclosure also provide a computer program/instruction that, when executed by a processor, implements the method of any one of the above embodiments.
The embodiments of the present disclosure have thus been described in detail. Some details well known in the art are not described, in order to avoid obscuring the concept of the present disclosure. Based on the above description, those skilled in the art can fully understand how to implement the technical solutions disclosed herein.
Those skilled in the art should understand that the embodiments of the present disclosure may be provided as methods, systems, or computer program products. Therefore, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present disclosure may take the form of a computer program product implemented on one or more computer-usable non-transitory storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present disclosure is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present disclosure. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although some specific embodiments of the present disclosure have been described in detail by way of example, those skilled in the art should understand that the above examples are for illustration only and are not intended to limit the scope of the present disclosure. Those skilled in the art should understand that the above embodiments may be modified, or some technical features may be equivalently replaced, without departing from the scope and spirit of the present disclosure. The scope of the present disclosure is defined by the appended claims.
Claims (20)
- A method for configuring a virtual machine, comprising: in response to the startup of a container group in a virtual machine, identifying a virtual function VF network interface requirement of the container group; after identifying the VF network interface requirement, triggering a VF controller to mount at least one idle VF network interface to the virtual machine, wherein the VF controller is deployed in the physical machine where the virtual machine is located, and the at least one VF network interface is obtained by virtualizing a network card in the physical machine through single-root I/O virtualization SRIOV; and binding, according to the VF network interface requirement, the at least one VF network interface mounted on the virtual machine to the container group, so that the container group transmits data via the at least one VF network interface.
- The method according to claim 1, wherein the VF network interface requirement comprises the number of VF network interfaces required by the container group.
- The method according to claim 1 or 2, further comprising: in response to the destruction of the container group, unbinding the at least one VF network interface bound to the container group from the container group.
- The method according to claim 3, further comprising: after the at least one VF network interface is unbound from the container group, triggering the VF controller to release the at least one VF network interface from the virtual machine.
- The method according to claim 2, wherein, in response to the startup of the container group in the virtual machine, identifying the virtual function VF network interface requirement of the container group comprises: in response to the startup of the container group in the virtual machine, obtaining a custom resource definition CRD resource of the container group; and identifying the number of VF network interfaces required by the container group based on the CRD resource of the container group.
- The method according to any one of claims 1-5, further comprising: in response to a Data Plane Development Kit DPDK usage requirement of the container group, loading the DPDK for the container group; and binding the at least one VF network interface bound to the container group to the DPDK, so that the container group bypasses the kernel-mode network protocol stack of the virtual machine and transmits data via the DPDK and the at least one VF network interface.
- The method according to claim 6, further comprising: in response to the destruction of the container group, releasing the environment of the DPDK.
- The method according to any one of claims 1-7, wherein the virtual machine is deployed in a Kubernetes system.
- A configuration device for a virtual machine, comprising: a first plug-in configured to identify a virtual function VF network interface requirement of a container group in response to the startup of the container group in the virtual machine, and to bind, according to the VF network interface requirement, at least one VF network interface mounted on the virtual machine to the container group, so that the container group transmits data via the at least one VF network interface; and a second plug-in configured to trigger a VF controller to mount the at least one idle VF network interface to the virtual machine after the network interface requirement is identified, wherein the VF controller is deployed in the physical machine where the virtual machine is located, and the at least one VF network interface is obtained by virtualizing a network card in the physical machine through SRIOV.
- The device according to claim 9, wherein the first plug-in is further configured to unbind the at least one VF network interface bound to the container group from the container group in response to the destruction of the container group.
- The device according to claim 10, wherein the second plug-in is further configured to trigger the VF controller to release the at least one VF network interface from the virtual machine after the at least one VF network interface is unbound from the container group.
- The device according to any one of claims 9-11, further comprising: a third plug-in configured to notify the second plug-in to load the DPDK for the container group in response to a Data Plane Development Kit DPDK usage requirement of the container group; wherein the second plug-in is further configured to bind the at least one VF network interface bound to the container group to the DPDK, so that the container group bypasses the kernel-mode network protocol stack of the virtual machine and transmits data via the DPDK and the at least one VF network interface.
- The device according to claim 12, wherein the third plug-in is embedded in the container group.
- The device according to claim 12 or 13, wherein the second plug-in is further configured to release the environment of the DPDK in response to the destruction of the container group.
- A configuration device for a virtual machine, comprising: an identification module configured to identify a virtual function VF network interface requirement of a container group in response to the startup of the container group in the virtual machine; a triggering module configured to trigger a VF controller to mount at least one idle VF network interface to the virtual machine after the VF network interface requirement is identified, wherein the VF controller is deployed in the physical machine where the virtual machine is located, and the at least one VF network interface is obtained by virtualizing a network card in the physical machine through SRIOV; and a binding module configured to bind, according to the VF network interface requirement, the at least one VF network interface mounted on the virtual machine to the container group, so that the container group transmits data via the at least one VF network interface.
- A configuration device for a virtual machine, comprising: a memory; and a processor coupled to the memory and configured to execute the method according to any one of claims 1-8 based on instructions stored in the memory.
- A virtual machine, comprising the device according to any one of claims 9-16.
- A computer-readable storage medium, comprising computer program instructions, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1-8.
- A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-8.
- A computer program/instruction, wherein the computer program/instruction, when executed by a processor, implements the method according to any one of claims 1-8.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211345026.0A CN115509692A (zh) | 2022-10-31 | 2022-10-31 | Virtual machine and configuration method and device thereof
CN202211345026.0 | 2022-10-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024093574A1 true WO2024093574A1 (zh) | 2024-05-10 |
Family
ID=84511424
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/120629 WO2024093574A1 (zh) | 2022-10-31 | 2023-09-22 | Virtual machine and configuration method and device thereof
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115509692A (zh) |
WO (1) | WO2024093574A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115509692A (zh) * | 2022-10-31 | 2022-12-23 | China Telecom Corporation Limited | Virtual machine and configuration method and device thereof |
-
2022
- 2022-10-31 CN CN202211345026.0A patent/CN115509692A/zh active Pending
-
2023
- 2023-09-22 WO PCT/CN2023/120629 patent/WO2024093574A1/zh unknown
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110704155A (zh) * | 2018-07-09 | 2020-01-17 | Alibaba Group Holding Limited | Container network construction method and apparatus, physical host, and data transmission method |
US20220276886A1 (en) * | 2019-08-26 | 2022-09-01 | Microsoft Technology Licensing, Llc | Computer device including process isolated containers with assigned virtual functions |
CN110688202A (zh) * | 2019-10-09 | 2020-01-14 | Tencent Technology (Shenzhen) Co., Ltd. | Service process scheduling method, apparatus, device and storage medium |
CN112130957A (zh) * | 2020-09-11 | 2020-12-25 | FiberHome Telecommunication Technologies Co., Ltd. | Method and system for a container to use a smart NIC across virtualization isolation |
CN112925581A (zh) * | 2021-02-22 | 2021-06-08 | Bigo Technology (Singapore) Pte. Ltd. | DPDK container startup method and apparatus, and electronic device |
US20220279420A1 (en) * | 2021-03-01 | 2022-09-01 | Juniper Networks, Inc. | Containerized router with virtual networking |
CN114124714A (zh) * | 2021-11-11 | 2022-03-01 | Xiamen Yealink Network Technology Co., Ltd. | Multi-level network deployment method, apparatus, device and storage medium |
CN115509692A (zh) * | 2022-10-31 | 2022-12-23 | China Telecom Corporation Limited | Virtual machine and configuration method and device thereof |
Non-Patent Citations (2)
Title |
---|
"Master's Thesis", 28 April 2021, UNIVERSITY OF ELECTRONIC SCIENCE AND TECHNOLOGY OF CHINA, CN, article HU, XUNPEI: "Research on Implementation Technology of Customized VNF Based on DPDK and Docker Container", pages: 1 - 91, XP009555281, DOI: 10.27005/d.cnki.gdzku.2021.002067 * |
LEIVADEAS ARIS; FALKNER MATTHIAS; PITAEV NIKOLAI: "Analyzing Service Chaining of Virtualized Network Functions with SR-IOV", 2020 IEEE 21ST INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE SWITCHING AND ROUTING (HPSR), IEEE, 11 May 2020 (2020-05-11), pages 1 - 6, XP033773972, DOI: 10.1109/HPSR48589.2020.9098975 * |
Also Published As
Publication number | Publication date |
---|---|
CN115509692A (zh) | 2022-12-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10120736B2 (en) | Executing a kernel device driver as a user space process | |
US7103625B1 (en) | Virtual resource ID mapping | |
US7979869B2 (en) | Method and system for performing I/O operations using a hypervisor | |
RU2429530C2 (ru) | Managing the state of distributed hardware in virtual machines | |
WO2017152633A1 (zh) | Port binding implementation method and device | |
CN100511156C (zh) | Device and method for forcibly terminating threads blocked by input/output operations | |
JP5295228B2 (ja) | 複数のプロセッサを備えるシステム、ならびにその動作方法 | |
US20050198647A1 (en) | Snapshot virtual-templating | |
CN103309792B (zh) | Log information control method and system | |
CN103699428A (zh) | Method and computer device for virtual network card interrupt affinity binding | |
CN101271401A (zh) | Server cluster system with a single system image | |
WO2024093574A1 (zh) | Virtual machine and configuration method and device thereof | |
CN111213127B (zh) | Virtualized operations for directly assigned devices | |
US20180239624A1 (en) | Preloading enhanced application startup | |
CN102567090A (zh) | 在计算机处理器中创建执行线程的方法和系统 | |
WO2018040845A1 (zh) | Computing resource scheduling method and device | |
CN108304248A (zh) | Mobile device with multi-system virtualization | |
Avanzini et al. | Integrating Linux and the real-time ERIKA OS through the Xen hypervisor | |
CN102141915B (zh) | RTLinux-based real-time device control method | |
US9122549B2 (en) | Method and system for emulation of instructions and hardware using background guest mode processing | |
CN101539973A (zh) | Method for seamless operation of integrity measurement technology in a trusted virtual domain | |
US10152341B2 (en) | Hyper-threading based host-guest communication | |
CN118550595B (zh) | Method and system for a Linux system to organize and manage an RTOS system | |
CN116361033B (zh) | Communication method, electronic device and storage medium | |
US20230305875A1 (en) | Virtual networking for special types of nested virtual machines |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23884506 Country of ref document: EP Kind code of ref document: A1 |