CN117076051A - Network implementation method, system, equipment and medium for running virtual machine in container - Google Patents

Network implementation method, system, equipment and medium for running virtual machine in container

Info

Publication number
CN117076051A
CN117076051A (application number CN202311076110.1A)
Authority
CN
China
Prior art keywords
virtual machine
port
ingress
pod
tap
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311076110.1A
Other languages
Chinese (zh)
Inventor
廖桥生
李明
金伟毅
种保中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Sicui Industrial Internet Technology Research Institute Co ltd
Original Assignee
Suzhou Sicui Industrial Internet Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Sicui Industrial Internet Technology Research Institute Co ltd filed Critical Suzhou Sicui Industrial Internet Technology Research Institute Co ltd
Priority to CN202311076110.1A priority Critical patent/CN117076051A/en
Publication of CN117076051A publication Critical patent/CN117076051A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45562Creating, deleting, cloning virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a network implementation method, system, device and medium for running a virtual machine in a container, belonging to the technical field of cloud computing and containers, and aims to solve the technical problem of reducing the network delay of a virtual machine in a container and improving network efficiency. The invention adopts the following technical scheme: creating a first tc ingress queue on the tap port of the virtual machine in the container; creating a first tc mirred policy based on the first tc ingress queue, wherein the first tc mirred policy is used for intercepting the ingress traffic of the tap port of the virtual machine and redirecting it to the veth pair device port in the container; creating a second tc ingress queue on the veth pair device port in the container; and creating a second tc mirred policy based on the second tc ingress queue, wherein the second tc mirred policy is used for intercepting the ingress traffic of the veth pair device port in the container and redirecting it to the virtual machine tap port. The system is applied to virtual machines managed by a Kubernetes platform; the system comprises an interface configuration module, a monitoring module and a Pod configuration module.

Description

Network implementation method, system, equipment and medium for running virtual machine in container
Technical Field
The invention relates to the technical field of cloud computing and containers, and in particular to a network implementation method, system, device and medium for running a virtual machine in a container.
Background
KubeVirt is currently the mainstream technology for managing virtual machines on Kubernetes. Each KubeVirt virtual machine corresponds to one VMI object and one Pod object of Kubernetes; an independent virt-launcher process and a libvirt process are started in the Pod corresponding to each KubeVirt virtual machine. The virt-launcher process monitors configuration changes of the VMI object of the KubeVirt virtual machine, updates the xml configuration of the virtual machine and issues it to the libvirt process in the Pod, and the libvirt process manages the life cycle of the qemu process of the virtual machine according to the xml configuration of the virtual machine.
Because Kubernetes itself imposes a specification (CNI) on the Pod network, the tap port of the KubeVirt virtual machine, which is located in the network namespace of the Pod, cannot directly communicate with the node where the Pod is located and cannot be connected to the data plane module of the CNI on the node, for example the ovs bridge of the kube-ovn CNI.
At present, KubeVirt provides binding methods for connecting to the CNI data plane module that support masquerade, bridge, passt, slirp, sriov and other technologies. Because the masquerade, passt and slirp technologies cannot configure a real IP address inside the virtual machine and cannot support trunk, they cannot meet standard virtual machine network requirements; the sriov technology requires the node network card to support sriov, its configuration is very complex, its hardware requirements are high, and it cannot be used universally. The more universal binding method for connecting the KubeVirt virtual machine to the CNI data plane module is the bridge technology. CNI (Container Network Interface) refers to the specification that Kubernetes imposes on the Pod network; a network plug-in meeting the CNI specification is called a CNI plug-in, and the CNI data plane module refers to the data plane module of a CNI plug-in that meets the CNI specification, such as ovs of the kube-ovn CNI plug-in.
As shown in figure 1, a Linux bridge and a veth pair device are created in the Pod to which the virtual machine belongs; the tap port of the virtual machine in the Pod and one port of the veth pair device are connected to the Linux bridge, and the other port of the veth pair device is connected to the data plane module of the node CNI, so that the tap port of the virtual machine in the Pod can communicate with the CNI data plane module on the node, and the virtual machine in the Pod can communicate with other Pods or virtual machines of the container platform through the node CNI data plane module.
However, in the bridge technology, traffic between the virtual machine network and the CNI data plane module of the node additionally passes through the Linux bridge in the Pod, and also passes through unnecessary kernel network protocol stack paths such as netfilter of the Pod network namespace. The overly long network link and the unnecessary kernel network protocol stack processing greatly increase the network delay and reduce the network efficiency.
Patent application CN114510323A discloses a network optimization implementation method for running a virtual machine in a container. In that method, under a Kubernetes scenario, the container group Pod of the virtual machine is created in host network mode, then the host network device is created in a container of the Pod through KubeVirt and the network setting is completed, and finally the creation of the virtual machine is completed. That technical scheme solves the network delay problem caused by the overly long link from the virtual machine network to the node, shortens the link from the virtual machine network to the node, reduces the network delay, improves the network transmission efficiency, and reduces the loss of virtual machine network performance. In addition, that scheme improves the flow of creating a virtual machine with KubeVirt, simplifies the processing, eliminates the CNI business flow in the whole life cycle of the virtual machine, reduces the coupling with ovn, improves the user experience, and assists the development of container and virtual machine convergence technology. However, that technology is realized by having the Pod running the virtual machine use the host network: the network isolation between Pods running virtual machines is abandoned, and the virtual machine cannot use the various CNI plug-ins of Kubernetes to configure its network, so the virtual machine and an ordinary Pod cannot be connected to the same CNI plug-in network.
Therefore, how to reduce the data link length between the virtual machine network and the CNI data plane module of the node where it is located, and skip unnecessary kernel network protocol stack processing such as netfilter of the Pod network namespace, on the premise of not changing the original architecture of KubeVirt and meeting the Kubernetes specification (CNI) for the Pod network, thereby reducing the network delay of the virtual machine in the container and improving the network efficiency, is a technical problem to be solved at present.
Disclosure of Invention
The technical task of the invention is to provide a network implementation method, system, device and medium for running a virtual machine in a container, which solve the problem of how to reduce the data link length between the virtual machine network and the CNI data plane module of the node where the virtual machine is located, and skip unnecessary kernel network protocol stack processing such as netfilter of the Pod network namespace, on the premise of not changing the original architecture of KubeVirt and meeting the Kubernetes specification (CNI) for the Pod network, thereby reducing the network delay of the virtual machine in the container and improving the network efficiency.
The technical task of the invention is realized in the following way: a network implementation method for running a virtual machine in a container, in which the tap port of the virtual machine in the container and the port of the veth pair device connected to the CNI data plane module are connected at the network level through the tc mirred technology, so as to reduce the data link length between the virtual machine network and the CNI data plane module and skip the kernel network protocol stack processing of the container network namespace; the method comprises the following steps:
creating a first tc ingress queue on the tap port of the virtual machine in the container;
creating a first tc mirred policy based on the first tc ingress queue, wherein the first tc mirred policy is used for intercepting the ingress traffic of the tap port of the virtual machine and redirecting it to the veth pair device port in the container;
creating a second tc ingress queue on the veth pair device port in the container;
and creating a second tc mirred policy based on the second tc ingress queue, wherein the second tc mirred policy is used for intercepting the ingress traffic of the veth pair device port in the container and redirecting it to the virtual machine tap port.
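For illustration, the four steps above map onto standard Linux tc commands roughly as follows. This is a minimal sketch, assuming example port names tap0 and nic-eth0 and that the commands are run inside the Pod's network namespace; the actual port names are determined by KubeVirt and the CNI plug-in.

    # Step 1: first tc ingress queue on the virtual machine tap port
    tc qdisc add dev tap0 ingress
    # Step 2: first tc mirred policy - intercept tap0 ingress traffic and
    # redirect it to the in-Pod veth pair device port
    tc filter add dev tap0 parent ffff: protocol all u32 match u32 0 0 \
        action mirred egress redirect dev nic-eth0
    # Step 3: second tc ingress queue on the veth pair device port
    tc qdisc add dev nic-eth0 ingress
    # Step 4: second tc mirred policy - intercept nic-eth0 ingress traffic and
    # redirect it back to the virtual machine tap port
    tc filter add dev nic-eth0 parent ffff: protocol all u32 match u32 0 0 \
        action mirred egress redirect dev tap0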
Preferably, the creation of the first tc ingress queue on the tap port of the virtual machine in the container and the creation of the second tc ingress queue on the veth pair device port in the container are specifically as follows:
running a virtual machine in a Pod with an independent network namespace, wherein the virtual machine has one or more network cards, each network card of the virtual machine has a tap port in the Pod that corresponds to it one to one and communicates with it, and the one or more tap ports of the virtual machine are located in the Pod network namespace;
each tap port of the virtual machine corresponds one to one to a veth pair device, and each veth pair device has two ports, namely a veth pair device first port and a veth pair device second port; the veth pair device first port is located in the default network namespace of the node to which the Pod belongs and is connected to the CNI data plane module of that node; the veth pair device second port is located in the Pod network namespace;
creating, in the Pod namespace, a respective first tc ingress queue on each of the one or more tap ports of the virtual machine, used for buffering the respective ingress traffic of the one or more tap ports;
and creating, in the Pod namespace, a respective second tc ingress queue on each veth pair device second port corresponding to the one or more tap ports of the virtual machine, used for buffering the respective ingress traffic of the veth pair device second ports corresponding to the one or more tap ports.
More preferably, a first tc mirred policy is created based on the first tc ingress queue and is used for intercepting the ingress traffic of the tap port of the virtual machine and redirecting it to the veth pair device port in the container, specifically as follows:
based on the respective first tc ingress queues created on the one or more tap ports of the virtual machine, respective first tc mirred policies are created, and the ingress traffic of the one or more tap ports of the virtual machine is intercepted and redirected to the veth pair device second port corresponding to each of the one or more tap ports; the ingress traffic of the one or more tap ports of the virtual machine refers to the uplink traffic of the virtual machine network card corresponding to the one or more tap ports.
More preferably, a second tc mirred policy is created based on the second tc ingress queue and is used for intercepting the ingress traffic of the veth pair device port in the container and redirecting it to the virtual machine tap port, specifically as follows:
based on the respective second tc ingress queues created on the veth pair device second ports corresponding to the one or more tap ports of the virtual machine, respective second tc mirred policies are created, and the ingress traffic of the veth pair device second port corresponding to each of the one or more tap ports is intercepted and redirected to that tap port; the ingress traffic of the veth pair device second port corresponding to one or more tap ports refers to the downlink traffic of the virtual machine network card corresponding to the one or more tap ports.
More preferably, the direction of the first tc mirred policy is egress; the action of the first tc mirred policy is redirect; the target port of the first tc mirred policy is the veth pair device second port; the buffer queue of the first tc mirred policy is the first tc ingress queue associated with it; the filter of the first tc mirred policy filters traffic by the elements of source mac address, destination mac address, source IP address, destination IP address, layer-4 protocol, source port and destination port;
the direction of the second tc mirred policy is egress; the action of the second tc mirred policy is redirect; the target port of the second tc mirred policy is the one or more tap ports of the virtual machine; the buffer queue of the second tc mirred policy is the second tc ingress queue associated with it; the filter of the second tc mirred policy filters traffic by the elements of source mac address, destination mac address, source IP address, destination IP address, layer-4 protocol, source port and destination port.
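As an illustration of how a filter built from these elements could be expressed, the following is a hedged sketch using the Linux tc flower classifier; all addresses, ports and device names are placeholders and are not part of the claimed method.

    tc filter add dev tap0 parent ffff: protocol ip flower \
        src_mac 52:54:00:aa:bb:cc dst_mac 52:54:00:dd:ee:ff \
        src_ip 192.168.1.10 dst_ip 192.168.1.20 \
        ip_proto tcp src_port 12345 dst_port 80 \
        action mirred egress redirect dev nic-eth0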
More preferably, intercepting and redirecting the ingress traffic of the one or more tap ports of the virtual machine to the veth pair device second port corresponding to the one or more tap ports of the virtual machine is specifically as follows:
intercepting and redirecting all ingress traffic of the one or more tap ports of the virtual machine to the veth pair device second port corresponding to the one or more tap ports of the virtual machine;
or,
intercepting and redirecting, through the filter of the first tc mirred policy, part of the ingress traffic of the one or more tap ports of the virtual machine to the veth pair device second port corresponding to the one or more tap ports of the virtual machine;
intercepting and redirecting the ingress traffic of the veth pair device second port corresponding to the one or more tap ports of the virtual machine to the one or more tap ports is specifically as follows:
intercepting and redirecting all ingress traffic of the veth pair device second port corresponding to the one or more tap ports of the virtual machine to the one or more tap ports;
or,
intercepting and redirecting, through the filter of the second tc mirred policy, part of the ingress traffic of the veth pair device second port corresponding to the one or more tap ports to the one or more tap ports.
A network implementation system for running virtual machines in a container, the system being applied to virtual machines managed by a Kubernetes platform; the system comprises an interface configuration module, a monitoring module and a Pod configuration module;
the interface configuration module is used for creating tc ingress queues and tc mirred policies and deleting tc ingress queues and tc mirred policies;
the monitoring module is used for monitoring the change of Pod and virtual machine resources through the watch API of Kubernetes;
when the Pod configuration module, through the monitoring module, detects a newly added Pod running a virtual machine on the node, the Pod configuration module creates, through the interface configuration module and based on the virtual machine newly running in the Pod, a first tc ingress queue on the one or more tap ports of the virtual machine in the Pod, and creates a first tc mirred policy on the one or more tap ports of the virtual machine in the Pod based on the first tc ingress queue; the Pod configuration module also creates, through the interface configuration module, a second tc ingress queue on the veth pair device second port corresponding to the one or more tap ports of the virtual machine in the Pod, and configures a second tc mirred policy on the veth pair device second port corresponding to the one or more tap ports of the virtual machine in the Pod based on the second tc ingress queue.
Preferably, when the Pod configuration module, through the monitoring module, detects that a Pod running a virtual machine on the node has been deleted, the Pod configuration module deletes, through the interface configuration module, the first tc mirred policy and first tc ingress queue on the one or more tap ports of the virtual machine in the Pod, and deletes the second tc mirred policy and second tc ingress queue on the veth pair device second port corresponding to the one or more tap ports of the virtual machine in the Pod.
An electronic device, comprising: a memory and at least one processor;
wherein the memory has a computer program stored thereon;
the at least one processor executes the computer program stored by the memory, causing the at least one processor to perform a network implementation method of running virtual machines in a container as described above.
A computer readable storage medium having stored therein a computer program executable by a processor to implement a network implementation method of running a virtual machine in a container as described above.
The "tc" refers to abbreviation of Traffic Control, a tool used for controlling packet sending logic of a network card in a linux system, can be used for simulating network delay, jitter, packet loss, disorder, damage, redirection and the like, is flexible in configuration mode, can limit all Traffic of the whole network card, and can set a network segment or a port according to requirements.
The "tc ingres queue" refers to linux tc ingress qdisc, and each network card in the linux system may be configured with a tc ingres qdisc to buffer the traffic of the network card in the direction of the ingres, and a tc filter or tc policy may be created based on the tc ingres qdisc (policy, tc mired is one of the tc policies).
"tc mixed" is a policy command used by linux tc to mirror or redirect a message received by a network card to another network card.
"Kubernetes" is a container orchestration engine, open-source by google corporation, and is the most mainstream container management platform at present.
"Pod" is a combination of one or more containers, which are specifications of how the Kubernetes can create, schedule, manage, and store, network, and namespaces that the containers share;
"veth pair" is commonly referred to as Virtual Ethernet Pair, a pair of ports, and all packets coming in from one end of the pair will come out from the other end, and vice versa.
The network implementation method, system, device and medium for running a virtual machine in a container of the invention have the following advantages:
first, on the premise of not changing the original architecture of KubeVirt and conforming to the Kubernetes specification (CNI) for the Pod network, the invention connects the tap port of the virtual machine in the Pod with the CNI data plane module network of the node where the Pod is located through the Linux tc mirred technology, thereby reducing the data link length between the virtual machine network and the CNI data plane module of the node where the Pod is located and skipping unnecessary kernel network protocol stack processing such as netfilter of the Pod network namespace, which greatly reduces the network delay of the virtual machine in the container and improves the network efficiency;
secondly, the Pod running the virtual machine can use the various CNI plug-ins of Kubernetes to configure its network, which conforms to the Kubernetes specification (CNI) for the Pod network, while the tap port of the virtual machine in the Pod is connected with the CNI data plane module network of the node where the Pod is located through the Linux tc mirred technology.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is an application schematic diagram of a network implementation method for running a virtual machine in a container according to the prior art;
FIG. 2 is a flow diagram of the network implementation method for running a virtual machine in a container;
FIG. 3 is an application schematic diagram of the network implementation method for running a virtual machine in a container;
FIG. 4 is a schematic diagram of a virtual machine running in a container accessing the outside through the Linux tc mirred policy;
FIG. 5 is a structural block diagram of the network implementation system for running virtual machines in a container.
Detailed Description
The network implementation method, system, device and medium for running virtual machines in a container according to the present invention are described in detail below with reference to the accompanying drawings and specific embodiments.
Example 1:
as shown in fig. 2, the present embodiment provides a network implementation method for running a virtual machine in a container, where the method specifically includes:
s1, creating a first tc ingress queue on a tap port of a virtual machine in a container.
In this embodiment, a virtual machine runs in a Pod with an independent network namespace; the virtual machine has one or more network cards, each network card of the virtual machine has a tap port in the Pod, the tap ports of the virtual machine are located in the Pod network namespace, and the network cards and the tap ports correspond one to one and communicate with each other. As shown in fig. 3, Pod1 has an independent network namespace, a virtual machine VM1 is running in Pod1, the virtual machine VM1 has two network cards eth0 and eth1, the virtual machine network cards eth0 and eth1 are connected to the tap0 port and tap1 port in the Pod respectively, and the tap0 port and tap1 port are both located in the Pod1 network namespace.
In this embodiment, each tap port of the virtual machine running in the Pod corresponds one to one to a veth pair device, and each veth pair device has two ports, namely the veth pair device first port and the veth pair device second port; the veth pair device first port is located in the default network namespace of the node to which the Pod belongs and is connected to the CNI data plane module of that node; the veth pair device second port is located in the Pod network namespace. As shown in fig. 3, Pod1 has two veth pair devices: the veth pair device composed of nic-eth0 and veth0 corresponds to the tap0 port, and the veth pair device composed of nic-eth1 and veth1 corresponds to the tap1 port. The first ports of the two veth pair devices are veth0 and veth1 respectively; veth0 and veth1 are located in the default network namespace of node1 to which Pod1 belongs and are connected to CNI data plane module 1 of node1. The second ports of the two veth pair devices are nic-eth0 and nic-eth1; nic-eth0 and nic-eth1 are located in the network namespace of Pod1 and are not connected to any bridge.
In this embodiment, in the Pod namespace, respective first tc ingress queues are created on one or more tap ports of the virtual machine, and are used to buffer respective ingress traffic of the one or more tap ports. As shown in fig. 3, in the namespace of Pod1, a first tc ingress queue is created on the tap0 port and the tap1 port of the virtual machine VM1 running in Pod1, respectively.
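A minimal sketch of this step, assuming the Pod1 network namespace is reachable as pod1-netns (the actual namespace name or path depends on the container runtime):

    PODNS=pod1-netns
    # S1: first tc ingress queues on the virtual machine tap ports
    ip netns exec "$PODNS" tc qdisc add dev tap0 ingress
    ip netns exec "$PODNS" tc qdisc add dev tap1 ingress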
S2, creating a first tc mirred policy based on the first tc ingress queue, wherein the first tc mirred policy is used for intercepting the ingress traffic of the tap port of the virtual machine and redirecting it to the veth pair device port in the container.
In this embodiment, respective first tc mirred policies are created based on the respective first tc ingress queues created on the one or more tap ports of the virtual machine, and the ingress traffic of the one or more tap ports of the virtual machine is intercepted and redirected to the veth pair device second port corresponding to each of the one or more tap ports. As shown in fig. 3, a first tc mirred policy for the tap0 port is created on the tap0 port of the virtual machine VM1 running in Pod1, based on the first tc ingress queue on the tap0 port; the action of the first tc mirred policy on the tap0 port is to intercept the traffic in the first tc ingress queue on the tap0 port and redirect it to the nic-eth0 port, so as to intercept and redirect the ingress traffic of the tap0 port of virtual machine VM1 to the nic-eth0 port. Likewise, a first tc mirred policy for the tap1 port is created on the tap1 port based on the first tc ingress queue on the tap1 port; its action is to intercept the traffic in the first tc ingress queue on the tap1 port and redirect it to the nic-eth1 port, so as to intercept and redirect the ingress traffic of the tap1 port of virtual machine VM1 to the nic-eth1 port.
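Continuing the sketch under the same assumed namespace, the first tc mirred policies of this step could be expressed as:

    # S2: redirect tap0/tap1 ingress traffic to nic-eth0/nic-eth1
    ip netns exec "$PODNS" tc filter add dev tap0 parent ffff: protocol all \
        u32 match u32 0 0 action mirred egress redirect dev nic-eth0
    ip netns exec "$PODNS" tc filter add dev tap1 parent ffff: protocol all \
        u32 match u32 0 0 action mirred egress redirect dev nic-eth1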
S3, creating a second tc ingress queue on the veth pair device port in the container.
In this embodiment, in the Pod namespace, respective second tc ingress queues are created on the veth pair device second ports corresponding to the one or more tap ports of the virtual machine, and are used to buffer the respective ingress traffic of those veth pair device second ports. As shown in fig. 3, in the namespace of Pod1, a second tc ingress queue is created on the veth pair device second port nic-eth0 corresponding to the virtual machine tap0 port, and a second tc ingress queue is created on the veth pair device second port nic-eth1 corresponding to the tap1 port.
S4, creating a second tc mirred policy based on the second tc ingress queue, wherein the second tc mirred policy is used for intercepting the ingress traffic of the veth pair device port in the container and redirecting it to the virtual machine tap port.
In this embodiment, respective second tc mirred policies are created based on the respective second tc ingress queues created on the veth pair device second ports corresponding to the one or more tap ports of the virtual machine, and the ingress traffic of the veth pair device second port corresponding to each tap port is intercepted and redirected to that tap port. As shown in fig. 3, a second tc mirred policy for the nic-eth0 port is created, based on the second tc ingress queue on the nic-eth0 port, on the veth pair device second port nic-eth0 corresponding to the tap0 port of virtual machine VM1 running in Pod1; the action of the second tc mirred policy on the nic-eth0 port is to intercept the traffic in the second tc ingress queue on the nic-eth0 port and redirect it to the tap0 port, so as to intercept and redirect the ingress traffic of nic-eth0, the veth pair device second port corresponding to the VM1 tap0 port, to the tap0 port. Likewise, a second tc mirred policy for the nic-eth1 port is created on nic-eth1 based on the second tc ingress queue on the nic-eth1 port; its action is to intercept the traffic in the second tc ingress queue on the nic-eth1 port and redirect it to the tap1 port, so as to intercept and redirect the ingress traffic of nic-eth1, the veth pair device second port corresponding to the tap1 port, to the tap1 port.
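The two remaining steps, under the same assumed namespace and port names, could look as follows:

    # S3: second tc ingress queues on the veth pair device second ports
    ip netns exec "$PODNS" tc qdisc add dev nic-eth0 ingress
    ip netns exec "$PODNS" tc qdisc add dev nic-eth1 ingress
    # S4: redirect nic-eth0/nic-eth1 ingress traffic back to tap0/tap1
    ip netns exec "$PODNS" tc filter add dev nic-eth0 parent ffff: protocol all \
        u32 match u32 0 0 action mirred egress redirect dev tap0
    ip netns exec "$PODNS" tc filter add dev nic-eth1 parent ffff: protocol all \
        u32 match u32 0 0 action mirred egress redirect dev tap1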
In this embodiment, as shown in fig. 4, the data links through which the eth0 and eth1 network cards of virtual machine VM1 and CNI data plane module 1 access each other are shown as dashed lines. Taking the mutual access between the VM1 eth0 network card and CNI data plane module 1 as an example: first, after a packet sent from VM1 eth0 to CNI data plane module 1 reaches tap0, it enters the first tc ingress queue on the tap0 port because it is traffic in the ingress direction; the first tc mirred policy on the tap0 port intercepts the ingress traffic of the tap0 port and forwards it to the nic-eth0 port. Because nic-eth0 and veth0 belong to the same veth pair, the packet received by nic-eth0 is directly forwarded to veth0, and since veth0 is connected to CNI data plane module 1, CNI data plane module 1 receives the packet sent by VM1 eth0. Similarly, when CNI data plane module 1 accesses the VM1 eth0 network card, the packet is forwarded to the veth0 interface; because nic-eth0 and veth0 belong to the same veth pair, the packet received by veth0 is directly forwarded to nic-eth0. After the packet reaches nic-eth0 it is traffic in the ingress direction, so it enters the second tc ingress queue on the nic-eth0 port, and the second tc mirred policy on the nic-eth0 port intercepts the ingress traffic of the nic-eth0 port and forwards it to the tap0 port, so the packet reaches the VM1 eth0 network card.
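To observe this behaviour in practice one could, under the same assumed names, inspect the packet and byte counters of the mirred filters; non-zero counters on both ports indicate that traffic is being redirected in both directions of the link:

    ip netns exec "$PODNS" tc -s filter show dev tap0 ingress
    ip netns exec "$PODNS" tc -s filter show dev nic-eth0 ingress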
Example 2:
This embodiment further provides a network implementation system for running virtual machines in a container, which is applied to virtual machines managed by the Kubernetes platform.
As shown in fig. 5, the network implementation system for running the virtual machine in the container includes an interface configuration module, a monitoring module and a Pod configuration module;
the interface configuration module is used for creating tc ingress queues and tc mirred policies, and deleting tc ingress queues and tc mirred policies;
the monitoring module is used for monitoring the change of resources such as Pods and virtual machines through the watch API of Kubernetes (an illustrative command-line equivalent is sketched after this module list);
when the Pod configuration module, through the monitoring module, detects a newly added Pod running a virtual machine on the node, the Pod configuration module creates, through the interface configuration module and based on the virtual machine newly running in the Pod, a first tc ingress queue on the one or more tap ports of the virtual machine in the Pod, and creates a first tc mirred policy on the one or more tap ports of the virtual machine in the Pod based on the first tc ingress queue; the Pod configuration module also creates, through the interface configuration module, a second tc ingress queue on the veth pair device second port corresponding to the one or more tap ports of the virtual machine in the Pod, and configures a second tc mirred policy on the veth pair device second port corresponding to the one or more tap ports of the virtual machine in the Pod based on the second tc ingress queue.
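The change stream that the monitoring module consumes through the watch API can also be observed from the command line; the following is an illustration only, where NODE is a placeholder for the local node name:

    NODE=node1
    kubectl get pods --all-namespaces --watch \
        --field-selector spec.nodeName="$NODE" -o wide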
In this embodiment, when the Pod configuration module, through the monitoring module, detects that a Pod running a virtual machine on the node has been deleted, the Pod configuration module deletes, through the interface configuration module, the first tc mirred policy and first tc ingress queue on the one or more tap ports of the virtual machine in the Pod, and deletes the second tc mirred policy and second tc ingress queue on the veth pair device second port corresponding to the one or more tap ports of the virtual machine in the Pod.
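A cleanup sketch under the same assumed names; deleting an ingress qdisc also removes the tc mirred filters attached to it (and when the Pod's network namespace itself is destroyed, these objects disappear with it):

    ip netns exec "$PODNS" tc qdisc del dev tap0 ingress
    ip netns exec "$PODNS" tc qdisc del dev nic-eth0 ingress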
For the parts of this embodiment that are the same as embodiment 1, reference is made to embodiment 1 and they are not repeated here. The network implementation system for running virtual machines in a container provided by this embodiment can be deployed in scenarios such as a hyper-converged all-in-one machine, a computer, a server, a data center, a virtual cluster, a portable mobile terminal, a Web system, a financial payment platform or ERP system, and a virtual online payment platform/system.
Example 3:
the embodiment also provides an electronic device, including: a memory and a processor;
wherein the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, so that the processor executes the network implementation method for running the virtual machine in the container in any embodiment of the present invention.
The processor may be a Central Processing Unit (CPU), but may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the electronic device by running or executing the computer programs and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like, and the data storage area may store data created according to the use of the terminal, etc. The memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, memory, plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card, flash memory card, at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
Example 4:
The embodiment of the invention also provides a computer readable storage medium in which a plurality of instructions are stored, and the instructions are loaded by a processor so that the processor executes the network implementation method for running a virtual machine in a container in any embodiment of the present invention. Specifically, a system or apparatus may be provided with a storage medium on which software program code realizing the functions of any of the above embodiments is stored, and the computer (or CPU or MPU) of the system or apparatus may be caused to read out and execute the program code stored in the storage medium.
In this case, the program code itself read from the storage medium may realize the functions of any of the above-described embodiments, and thus the program code and the storage medium storing the program code form part of the present invention.
Examples of storage media for providing the program code include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), magnetic tapes, non-volatile memory cards, and ROMs. Alternatively, the program code may be downloaded from a server computer via a communication network.
Further, it should be apparent that the functions of any of the above-described embodiments may be implemented not only by executing the program code read out by the computer, but also by causing an operating system or the like operating on the computer to perform part or all of the actual operations based on the instructions of the program code.
Further, it is understood that the program code read out from the storage medium may be written into a memory provided in an expansion board inserted into the computer or into a memory provided in an expansion unit connected to the computer, and then a CPU or the like mounted on the expansion board or the expansion unit may be caused to perform part or all of the actual operations based on the instructions of the program code, thereby realizing the functions of any of the above embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (10)

1. A network implementation method for running a virtual machine in a container, characterized in that the tap port of the virtual machine in the container is connected at the network level, through the tc mirred technology, with the port of the veth pair device connected to the CNI data plane module, so as to reduce the data link length between the virtual machine network and the CNI data plane module and skip the kernel network protocol stack processing of the container network namespace; the method comprises the following steps:
creating a first tc ingress queue on the tap port of the virtual machine in the container;
creating a first tc mirred policy based on the first tc ingress queue, wherein the first tc mirred policy is used for intercepting the ingress traffic of the tap port of the virtual machine and redirecting it to the veth pair device port in the container;
creating a second tc ingress queue on the veth pair device port in the container;
and creating a second tc mirred policy based on the second tc ingress queue, wherein the second tc mirred policy is used for intercepting the ingress traffic of the veth pair device port in the container and redirecting it to the virtual machine tap port.
2. The network implementation method for running a virtual machine in a container according to claim 1, wherein the creation of the first tc ingress queue on the tap port of the virtual machine in the container and the creation of the second tc ingress queue on the veth pair device port in the container are specifically as follows:
running a virtual machine in a Pod with an independent network namespace, wherein the virtual machine has one or more network cards, each network card of the virtual machine has a tap port in the Pod that corresponds to it one to one and communicates with it, and the one or more tap ports of the virtual machine are located in the Pod network namespace;
each tap port of the virtual machine corresponds one to one to a veth pair device, and each veth pair device has two ports, namely a veth pair device first port and a veth pair device second port; the veth pair device first port is located in the default network namespace of the node to which the Pod belongs and is connected to the CNI data plane module of that node; the veth pair device second port is located in the Pod network namespace;
creating, in the Pod namespace, a respective first tc ingress queue on each of the one or more tap ports of the virtual machine, used for buffering the respective ingress traffic of the one or more tap ports;
and creating, in the Pod namespace, a respective second tc ingress queue on each veth pair device second port corresponding to the one or more tap ports of the virtual machine, used for buffering the respective ingress traffic of the veth pair device second ports corresponding to the one or more tap ports.
3. The network implementation method for running a virtual machine in a container according to claim 1 or 2, wherein creating a first tc mirred policy based on the first tc ingress queue, used for intercepting the ingress traffic of the tap port of the virtual machine and redirecting it to the veth pair device port in the container, specifically comprises:
based on the respective first tc ingress queues created on the one or more tap ports of the virtual machine, respective first tc mirred policies are created, and the ingress traffic of the one or more tap ports of the virtual machine is intercepted and redirected to the veth pair device second port corresponding to each of the one or more tap ports; the ingress traffic of the one or more tap ports of the virtual machine refers to the uplink traffic of the virtual machine network card corresponding to the one or more tap ports.
4. The network implementation method for running a virtual machine in a container according to claim 3, wherein creating a second tc mirred policy based on the second tc ingress queue, used for intercepting the ingress traffic of the veth pair device port in the container and redirecting it to the virtual machine tap port, specifically comprises:
based on the respective second tc ingress queues created on the veth pair device second ports corresponding to the one or more tap ports of the virtual machine, respective second tc mirred policies are created, and the ingress traffic of the veth pair device second port corresponding to each of the one or more tap ports is intercepted and redirected to that tap port; the ingress traffic of the veth pair device second port corresponding to one or more tap ports refers to the downlink traffic of the virtual machine network card corresponding to the one or more tap ports.
5. The network implementation method for running a virtual machine in a container according to claim 3, wherein the direction of the first tc mirred policy is egress; the action of the first tc mirred policy is redirect; the target port of the first tc mirred policy is the veth pair device second port; the buffer queue of the first tc mirred policy is the first tc ingress queue associated with it; the filter of the first tc mirred policy filters traffic by the elements of source mac address, destination mac address, source IP address, destination IP address, layer-4 protocol, source port and destination port;
the direction of the second tc mirred policy is egress; the action of the second tc mirred policy is redirect; the target port of the second tc mirred policy is the one or more tap ports of the virtual machine; the buffer queue of the second tc mirred policy is the second tc ingress queue associated with it; the filter of the second tc mirred policy filters traffic by the elements of source mac address, destination mac address, source IP address, destination IP address, layer-4 protocol, source port and destination port.
6. The network implementation method for running a virtual machine in a container according to claim 4, wherein intercepting and redirecting the ingress traffic of the one or more tap ports of the virtual machine to the veth pair device second port corresponding to the one or more tap ports of the virtual machine is specifically as follows:
intercepting and redirecting all ingress traffic of the one or more tap ports of the virtual machine to the veth pair device second port corresponding to the one or more tap ports of the virtual machine;
or,
intercepting and redirecting, through the filter of the first tc mirred policy, part of the ingress traffic of the one or more tap ports of the virtual machine to the veth pair device second port corresponding to the one or more tap ports of the virtual machine;
intercepting and redirecting the ingress traffic of the veth pair device second port corresponding to the one or more tap ports of the virtual machine to the one or more tap ports is specifically as follows:
intercepting and redirecting all ingress traffic of the veth pair device second port corresponding to the one or more tap ports of the virtual machine to the one or more tap ports;
or,
intercepting and redirecting, through the filter of the second tc mirred policy, part of the ingress traffic of the veth pair device second port corresponding to the one or more tap ports to the one or more tap ports.
7. A network implementation system for running virtual machines in a container, characterized in that the system is applied to virtual machines managed by a Kubernetes platform; the system comprises an interface configuration module, a monitoring module and a Pod configuration module;
the interface configuration module is used for creating tc ingress queues and tc mirred policies and deleting tc ingress queues and tc mirred policies;
the monitoring module is used for monitoring the change of Pod and virtual machine resources through the watch API of Kubernetes;
when the Pod configuration module, through the monitoring module, detects a newly added Pod running a virtual machine on the node, the Pod configuration module creates, through the interface configuration module and based on the virtual machine newly running in the Pod, a first tc ingress queue on the one or more tap ports of the virtual machine in the Pod, and creates a first tc mirred policy on the one or more tap ports of the virtual machine in the Pod based on the first tc ingress queue; the Pod configuration module also creates, through the interface configuration module, a second tc ingress queue on the veth pair device second port corresponding to the one or more tap ports of the virtual machine in the Pod, and configures a second tc mirred policy on the veth pair device second port corresponding to the one or more tap ports of the virtual machine in the Pod based on the second tc ingress queue.
8. The network implementation system for running virtual machines in a container according to claim 7, wherein when the Pod configuration module, through the monitoring module, detects that a Pod running a virtual machine on the node has been deleted, the Pod configuration module deletes, through the interface configuration module, the first tc mirred policy and first tc ingress queue on the one or more tap ports of the virtual machine in the Pod, and deletes the second tc mirred policy and second tc ingress queue on the veth pair device second port corresponding to the one or more tap ports of the virtual machine in the Pod.
9. An electronic device, comprising: a memory and at least one processor;
wherein the memory has a computer program stored thereon;
the at least one processor executing the computer program stored by the memory causes the at least one processor to perform the network implementation method of running virtual machines in a container as claimed in any one of claims 1 to 6.
10. A computer readable storage medium having stored therein a computer program executable by a processor to implement a network implementation method of running a virtual machine in a container as claimed in any one of claims 1 to 6.
CN202311076110.1A 2023-08-25 2023-08-25 Network implementation method, system, equipment and medium for running virtual machine in container Pending CN117076051A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311076110.1A CN117076051A (en) 2023-08-25 2023-08-25 Network implementation method, system, equipment and medium for running virtual machine in container

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311076110.1A CN117076051A (en) 2023-08-25 2023-08-25 Network implementation method, system, equipment and medium for running virtual machine in container

Publications (1)

Publication Number Publication Date
CN117076051A true CN117076051A (en) 2023-11-17

Family

ID=88711297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311076110.1A Pending CN117076051A (en) 2023-08-25 2023-08-25 Network implementation method, system, equipment and medium for running virtual machine in container

Country Status (1)

Country Link
CN (1) CN117076051A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination