CN115022408A - Service mesh-based data transmission method and device, and electronic device - Google Patents

Service mesh-based data transmission method and device, and electronic device

Info

Publication number
CN115022408A
CN115022408A
Authority
CN
China
Prior art keywords
container
service
scheduling
data transmission
kernel extension
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210635793.9A
Other languages
Chinese (zh)
Inventor
顾欣
王鹏培
凌晨
刘成锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202210635793.9A priority Critical patent/CN115022408A/en
Publication of CN115022408A publication Critical patent/CN115022408A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 - Hypervisors; Virtual machine monitors
    • G06F 9/45558 - Hypervisor-specific management and integration aspects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 - Hypervisors; Virtual machine monitors
    • G06F 9/45558 - Hypervisor-specific management and integration aspects
    • G06F 2009/45595 - Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a service mesh-based data transmission method and device, and an electronic device, relating to the technical field of cloud computing. The data transmission method comprises the following steps: receiving network traffic data; pushing the network traffic data to a proxy sidecar, wherein a kernel extension program is configured in the proxy sidecar in advance; and sending the network traffic data to a target service container managed by a target container scheduling unit through the kernel extension program. The invention solves the technical problem in the prior art that intercepting and forwarding service container traffic through preset network rules in a service mesh lengthens the network link and increases time consumption.

Description

Service mesh-based data transmission method and device, and electronic device
Technical Field
The invention relates to the technical field of cloud computing, and in particular to a service mesh-based data transmission method and device, and an electronic device.
Background
With the popularity of distributed microservices, the service mesh has emerged as a new distributed service architecture: service container network traffic is proxied through a proxy sidecar (proxy container) in each container scheduling unit, so that services are decoupled from the infrastructure and the service iteration cycle is accelerated. The service mesh has therefore been widely adopted.
However, in the related art, the proxy sidecar in each container scheduling unit occupies a portion of system resources, and at large cluster scale the accumulated resource overhead is substantial. Meanwhile, the sidecar proxy intercepts and forwards the traffic entering and leaving the service container through iptables rules (conditions predefined by the network administrator), which adds a certain amount of time consumption to the original service network call link.
Fig. 1 is a schematic diagram of an optional distributed service architecture in the prior art. As shown in fig. 1, a registration center is connected to a control platform; information of a plurality of K8s clusters is registered with the control platform through the registration center, and rules are issued through the control platform to the container scheduling unit (Pod) of each K8s cluster, where all containers in a Pod share network and storage resources. The container scheduling unit includes a service provider and a proxy sidecar, and each K8s cluster further includes an API service module. As shown in fig. 1, service container network traffic is proxied through the proxy sidecar in the Pod, thereby facilitating the decoupling of services from the infrastructure.
Fig. 2 is a schematic diagram of traffic proxying by a sidecar in an optional service mesh in the prior art. As shown in fig. 2, iptables rules are adopted: the proxy sidecar intercepts incoming traffic at the network level and forwards it to each APP, and the traffic of each APP is likewise intercepted and sent out by the proxy sidecar. In the traffic proxy manner illustrated in fig. 2, the sidecar intercepts and forwards the traffic entering and leaving the service container, forcing all traffic entering and leaving the Pod to be redirected to the sidecar, so that services are decoupled from the infrastructure; that is, capabilities such as gray (canary) routing, monitoring, and security authentication are realized by proxying the communication traffic. However, this method lengthens the network link, adds considerable time consumption to the original service network call link, and makes the iptables rules difficult to maintain.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a service mesh-based data transmission method and device, and an electronic device, to at least solve the technical problem in the prior art that intercepting and forwarding service container traffic through preset network rules in a service mesh lengthens the network link and increases time consumption.
According to an aspect of the embodiments of the present invention, there is provided a service mesh-based data transmission method, comprising: receiving network traffic data; pushing the network traffic data to a proxy sidecar, wherein a kernel extension program is configured in the proxy sidecar in advance; and sending the network traffic data to a target service container managed by a target container scheduling unit through the kernel extension program.
Optionally, before receiving the network traffic data, the method further includes: establishing a link relation with the service containers managed by each container scheduling unit by adopting a preset interface scheduling strategy; invoking a kernel extension interface of the container kernel of the service container based on the link relation; and writing the kernel extension program corresponding to the service container based on the kernel extension interface.
Optionally, before receiving the network traffic data, the method further includes: acquiring the namespace of each container scheduling unit; determining lease information corresponding to each container scheduling unit based on the namespace, wherein the lease information provides the scheduling information and link information of the container scheduling unit; and respectively setting, in the kernel extension program, a scheduling tenant corresponding to each container scheduling unit based on the lease information.
Optionally, the step of sending the network traffic data to a target service container managed by a target container scheduling unit through the kernel extension program includes: extracting service lease information from the network traffic data; determining a target scheduling tenant corresponding to the target service container based on the service lease information; and sending the network traffic data to the target service container through the target scheduling tenant and the kernel extension program.
Optionally, the service mesh-based data transmission method is applied to a virtual machine, where the virtual machine includes a plurality of container scheduling units, each container scheduling unit correspondingly manages a plurality of service containers, and all the service containers share the proxy sidecar.
Optionally, the plurality of service containers on the virtual machine interface with a container cluster management system, and the container cluster management system manages the plurality of service containers.
According to another aspect of the embodiments of the present invention, there is also provided a service mesh-based data transmission apparatus, comprising: a receiving unit, configured to receive network traffic data; a pushing unit, configured to push the network traffic data to a proxy sidecar, wherein a kernel extension program is configured in the proxy sidecar in advance; and a transmission unit, configured to send the network traffic data to a target service container managed by a target container scheduling unit through the kernel extension program.
Optionally, the service mesh-based data transmission apparatus further includes: an establishing unit, configured to establish, before receiving the network traffic data, a link relation with the service containers managed by each container scheduling unit by adopting a preset interface scheduling strategy; a scheduling unit, configured to invoke a kernel extension interface of the container kernel of the service container based on the link relation; and a writing unit, configured to write the kernel extension program corresponding to the service container based on the kernel extension interface.
Optionally, the service mesh-based data transmission apparatus further includes: a first acquisition module, configured to acquire the namespace of each container scheduling unit before receiving the network traffic data; a first determining module, configured to determine lease information corresponding to each container scheduling unit based on the namespace, where the lease information provides the scheduling information and link information of the container scheduling unit; and a first setting module, configured to respectively set, in the kernel extension program, scheduling tenants corresponding to the container scheduling units based on the lease information.
Optionally, the transmission unit includes: a first extraction module, configured to extract the service lease information from the network traffic data; a second determining module, configured to determine a target scheduling tenant corresponding to the target service container based on the service lease information; and a first sending module, configured to send the network traffic data to the target service container through the target scheduling tenant and the kernel extension program.
Optionally, the service mesh-based data transmission apparatus is applied to a virtual machine, where the virtual machine includes a plurality of container scheduling units, each container scheduling unit correspondingly manages a plurality of service containers, and all the service containers share the proxy sidecar.
Optionally, the plurality of service containers on the virtual machine interface with a container cluster management system, and the container cluster management system manages the plurality of service containers.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium storing a computer program, where the computer program, when run, controls a device in which the computer-readable storage medium is located to execute any one of the above service mesh-based data transmission methods.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including one or more processors and a memory for storing one or more programs, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any one of the above service mesh-based data transmission methods.
In the invention, network traffic data is received and pushed to a proxy sidecar in which a kernel extension program is pre-configured, and the network traffic data is sent, through the kernel extension program, to a target service container managed by a target container scheduling unit. In this embodiment, an extension program can be customized through the host kernel interface, and the network traffic of all Pods on the service mesh is proxied by one shared sidecar proxy program, which greatly reduces system resource overhead and simplifies the network link, thereby solving the technical problem in the prior art that intercepting and forwarding service container traffic through preset network rules in a service mesh lengthens the network link and increases time consumption.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of an alternative distributed service architecture in the prior art;
FIG. 2 is a schematic illustration of a traffic proxy of a sidecar in a service mesh in the prior art;
FIG. 3 is a flow chart of an alternative service mesh-based data transmission method in accordance with an embodiment of the present invention;
FIG. 4 is a flow diagram illustrating an alternative forwarding process after a virtual machine receives traffic in the prior art;
FIG. 5 is a schematic diagram of an alternative traffic forwarding scheme using a kernel extension program, according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an alternative service mesh-based data transmission arrangement in accordance with an embodiment of the present invention;
fig. 7 is a block diagram of a hardware configuration of an electronic device (or mobile device) for the service mesh-based data transmission method according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
To facilitate understanding of the invention by those skilled in the art, some terms or nouns referred to in various embodiments of the invention are explained below:
A container is essentially a process: one whose view is isolated and whose resources are limited.
The K8s cluster, short for Kubernetes cluster, supports various workload types, such as long-running services, batch jobs, node daemons (node-daemon), and stateful applications.
The container scheduling unit, Pod, is the minimum unit for running and deploying an application or service in a K8s cluster; different Pods are deployed for different service types. A Pod can host multiple containers, and the containers within one Pod share a network address and a file system, so that services can be composed and completed simply and efficiently through inter-process communication and file sharing.
Proxy sidecars (sidecar containers) are, in the prior art, deployed within each Pod; the Pod's access to external systems or external services is achieved by deploying a container that interfaces with the external service cluster.
The Service Mesh is an infrastructure layer dedicated to handling inter-service communication, realizing reliable request delivery under the complex topology of an application composed of services. It may include, but is not limited to, a set of lightweight network proxies deployed alongside the application services and transparent to them. A service mesh places a proxy next to each service and adds a group of management components; the management components, called the control plane, are responsible for communicating with the proxies and issuing policies and configurations. The proxies, called the data plane in the service mesh, directly process inbound and outbound data packets: forwarding, routing, health checking, load balancing, authentication, generating monitoring data, and so on.
Through the service mesh, the following functions can be realized. First, decoupling of microservice governance from business logic: the service mesh strips most of the capabilities of the SDK in the software installation package out of the application, disassembles them into an independent process, and deploys them in sidecar mode. The service mesh separates service communication and the related governance functions from the business program and sinks them to the infrastructure layer, completely decoupling them from the business system so that developers can concentrate on the business itself. Second, unified governance of heterogeneous systems: with the development of new technologies and personnel turnover, applications and services in different languages and different frameworks often appear within the same company; to govern these services uniformly, the main service governance capabilities are sunk into the infrastructure to realize multi-language support.
It should be noted that the service mesh-based data transmission method and device in the present disclosure may be used for data transmission in the cloud computing field, and may also be used for data transmission in any field other than cloud computing.
It should be noted that relevant information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data for presentation, analyzed data, etc.) referred to in the present disclosure are information and data that are authorized by the user or sufficiently authorized by various parties. For example, an interface is provided between the system and the relevant user or organization, before obtaining the relevant information, an obtaining request needs to be sent to the user or organization through the interface, and after receiving the consent information fed back by the user or organization, the relevant information is obtained.
The following embodiments of the invention can be applied to distributed microservice scenarios, in particular service mesh scenarios; they realize data transmission and enable the tenants represented by a plurality of service containers in a service mesh scenario to share one proxy sidecar.
The present invention will be described in detail with reference to examples.
Example one
In accordance with an embodiment of the present invention, an embodiment of a service mesh-based data transmission method is provided. It should be noted that the steps illustrated in the flowchart of the accompanying drawings may be executed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be executed in a different order.
Fig. 3 is a flowchart of an alternative service mesh-based data transmission method according to an embodiment of the present invention. As shown in fig. 3, the method includes the following steps:
step S301, receiving network flow data;
step S302, network flow data is pushed to a proxy side car, wherein a kernel extension program is pre-configured in the proxy side car;
step S303, the network traffic data is sent to the target service container managed by the target container scheduling unit through the kernel extension program.
Through the above steps, network traffic data can be received and pushed to a proxy sidecar in which a kernel extension program is pre-configured, and the network traffic data is sent, through the kernel extension program, to the target service container managed by the target container scheduling unit. In this embodiment, an extension program can be customized through the host kernel interface, and the network traffic of all Pods on the service mesh is proxied by one shared sidecar proxy program, which greatly reduces system resource overhead and simplifies the network link, thereby solving the technical problem in the prior art that intercepting and forwarding service container traffic through preset network rules in a service mesh lengthens the network link and increases time consumption.
For ease of comparison with the embodiment of the present invention, the service traffic forwarding flow in the prior art is described first.
Fig. 4 is a schematic diagram of an optional forwarding flow after a virtual machine receives traffic in the prior art. As shown in fig. 4, after the virtual machine node of the original service mesh receives traffic, the traffic passes through the virtual machine's network rules (on the left of fig. 4: from the ethernet interface through Tcp/ip address mapping) and is then sent through the virtual network card interface (Veth) into the Pod of the application APP. Inside the Pod, the traffic again traverses the ethernet interface, Tcp/ip address mapping, and a socket to reach the APP; after the APP runs, the traffic passes back through the socket, the Tcp/ip address mapping, and the ethernet interface, and finally, through a software port and the network-layer ethernet interface, Tcp/ip address mapping, and socket, it reaches the proxy sidecar.
According to the prior art described above, iptables intercepts and forwards the traffic of the service container; the interception must pass through multiple stages such as the virtual network card interface (veth), eth, and tcp/ip, and the traffic is redirected to the sidecar, which then sends it to the App and back again. As a result, the network link is complex and time consumption increases.
The network forwarding procedure of this embodiment will be described below.
The following describes embodiments of the present invention in detail with reference to the above-described respective implementation steps.
Optionally, the service mesh-based data transmission method is applied to a virtual machine node, where the virtual machine includes multiple container scheduling units (Pods), each container scheduling unit correspondingly manages multiple service containers, and all service containers share one proxy sidecar.
Optionally, the service containers on the virtual machine interface with a container cluster management system, and the container cluster management system manages the multiple service containers.
Optionally, before receiving the network traffic data, the method further includes: establishing a link relation with the service containers managed by each container scheduling unit by adopting a preset interface scheduling strategy; invoking a kernel extension interface of the container kernel of the service container based on the link relation; and writing the kernel extension program corresponding to the service container based on the kernel extension interface.
It should be noted that the kernel extension interface may include, but is not limited to, a BPF interface. BPF, the Berkeley Packet Filter, is originally the raw interface to the data link layer in Unix-like systems, providing reception and transmission of raw link-layer packets; its modern extended form (eBPF) allows custom programs to be attached to kernel hook points.
In this embodiment, the kernel extension interface is used to write a program that extends the kernel's capability. After the virtual machine node receives network traffic and the traffic flows through the sidecar program, it is sent directly to the corresponding container of the service Pod through the kernel extension program, thereby reducing network hops.
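The patent does not publish the extension program itself; a real implementation would typically be an eBPF program written in C and attached to a socket-level kernel hook. As a purely illustrative sketch, the control flow described above, in which traffic arriving at the shared sidecar is handed directly to the owning Pod's container instead of re-traversing veth/eth/tcp-ip and iptables, can be modeled as follows (all names are hypothetical):

```python
# Illustrative model only: a registry playing the role of the kernel
# extension program inside the shared proxy sidecar. A real system would
# attach an eBPF program to a kernel hook; nothing here is kernel code.

class KernelExtension:
    def __init__(self):
        self._routes = {}  # lease id -> container delivery callback

    def register(self, lease_id, deliver):
        """Register a service container's delivery endpoint."""
        self._routes[lease_id] = deliver

    def forward(self, lease_id, payload):
        """Hand traffic straight to the target container (no iptables hop)."""
        return self._routes[lease_id](payload)

ext = KernelExtension()
ext.register("pod-a", lambda data: ("pod-a", data))
result = ext.forward("pod-a", b"hello")
print(result)  # ('pod-a', b'hello')
```

The point of the sketch is the single lookup: once the extension program knows which Pod a lease belongs to, delivery is one step rather than a chain of redirect rules.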
In another optional step, before receiving the network traffic data, the method further includes: acquiring the namespace of each container scheduling unit; determining lease information corresponding to each container scheduling unit based on the namespace, wherein the lease information provides the scheduling information and link information of the container scheduling unit; and respectively setting, in the kernel extension program, a scheduling tenant corresponding to each container scheduling unit based on the lease information.
Because all service containers on the virtual machine share one sidecar proxy program, a tenant is set up on the proxy program for each service Pod according to the Pod's namespace, thereby realizing the sharing. That is, a scheduling tenant corresponding to each container scheduling unit can be set in the kernel extension program based on the lease information, and the proxy sidecar is then shared during use by means of each tenant's scheduling tenant information.
It should be noted that the lease information serves to distinguish the container scheduling units (Pods): because all Pods share one proxy sidecar, the traffic to be forwarded must be distinguishable when it is forwarded through the proxy sidecar.
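The tenant-registration step above can be sketched as a simple mapping from each Pod's namespace to its lease information and a scheduling tenant; the patent does not specify a data model, so all field names below are hypothetical:

```python
# Hypothetical sketch: derive one scheduling tenant per Pod from its
# namespace, carrying the lease (scheduling + link) information used to
# tell the Pods' traffic apart inside the shared sidecar.

def build_scheduling_tenants(pods):
    tenants = {}
    for pod in pods:
        ns = pod["namespace"]
        tenants[ns] = {
            "tenant": f"tenant-{ns}",  # illustrative tenant naming
            "lease": {"node": pod["node"], "link": pod["ip"]},
        }
    return tenants

pods = [
    {"namespace": "orders", "node": "vm-1", "ip": "10.0.0.5"},
    {"namespace": "billing", "node": "vm-1", "ip": "10.0.0.6"},
]
tenants = build_scheduling_tenants(pods)
print(tenants["orders"]["tenant"])          # tenant-orders
print(tenants["billing"]["lease"]["link"])  # 10.0.0.6
```

Both Pods sit on the same node and will share one sidecar; the per-namespace tenant entry is what keeps their traffic separable.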
After the writing of the kernel extension program and the tenant registration are completed, the actual traffic forwarding process is explained below.
Step S301, receiving network traffic data.
The network traffic data may refer to traffic data transmitted by an external service application and received by the virtual machine; this embodiment does not specifically limit the type or size of the transmitted traffic data, which are adjusted according to how each distributed storage cluster uses the data.
Step S302, pushing the network traffic data to the proxy sidecar, wherein a kernel extension program is configured in the proxy sidecar in advance.
In this embodiment, the kernel capability may be dynamically extended through an OS interface, and the traffic of all Pods on the virtual machine node at layers L3-L7 (mainly the network-layer portion, including the virtual network card interface, the ethernet interface, Tcp/ip address mapping, and the socket) is directed through this kernel capability straight to the shared sidecar on the virtual machine node, thereby greatly reducing system resource consumption and shortening the network link.
Step S303, sending the network traffic data, through the kernel extension program, to the target service container managed by the target container scheduling unit.
Optionally, the step of sending the network traffic data to the target service container managed by the target container scheduling unit through the kernel extension program includes: extracting the service lease information from the network traffic data; determining the target scheduling tenant corresponding to the target service container based on the service lease information; and sending the network traffic data to the target service container through the target scheduling tenant and the kernel extension program.
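The three sub-steps above (extract the lease information, resolve the scheduling tenant, deliver to the container) can be condensed into a single dispatch function. This is a hypothetical illustration of the control flow, not the patented implementation; the tenant table and container registry stand in for state the kernel extension program would hold:

```python
# Illustrative dispatch for step S303.

def dispatch(traffic, tenant_table, containers):
    lease = traffic["lease"]                    # 1. extract service lease info
    tenant = tenant_table[lease]                # 2. resolve target scheduling tenant
    return containers[tenant](traffic["data"])  # 3. deliver to target container

tenant_table = {"lease-42": "tenant-orders"}
containers = {"tenant-orders": lambda data: ("tenant-orders", data)}

out = dispatch({"lease": "lease-42", "data": b"req"}, tenant_table, containers)
print(out)  # ('tenant-orders', b'req')
```

Because every Pod's traffic carries its lease, the shared sidecar needs no per-Pod iptables rules: one table lookup selects the destination.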
Fig. 5 is a schematic diagram of optional traffic forwarding using a kernel extension program according to an embodiment of the present invention. As shown in fig. 5, after the virtual machine node receives network traffic, the traffic flows through the sidecar program; that is, after socket resolution is performed by the kernel extension program, the traffic is sent directly to the APP container of the service Pod (illustrated as the APP Pod in fig. 5).
As shown in fig. 5, the traffic of all Pods on the virtual machine node at layers L3-L7 is directed through the kernel capability straight to the shared sidecar on the virtual machine node, thereby greatly reducing system resource consumption and shortening the network link.
Through this embodiment, a scheme in which multiple Pod tenants share one proxy sidecar in a service mesh scenario can be realized. This solves the problems that each Pod of a service mesh must start its own sidecar proxy, wasting system resources, while the network link becomes complex and time-consuming; through the kernel interface extension, the complexity of the iptables-redirected network link is reduced and network time consumption decreases.
The invention is described below in connection with an alternative embodiment.
Example two
The embodiment of the present invention provides a data transmission device based on a service grid, where each implementation unit included in the data transmission device corresponds to each implementation step in the first embodiment.
Fig. 6 is a schematic diagram of an alternative service grid-based data transmission apparatus according to an embodiment of the present invention, as shown in fig. 6, the data transmission apparatus may include: a receiving unit 61, a pushing unit 62, a transmitting unit 63, wherein,
a receiving unit 61, configured to receive network traffic data;
the pushing unit 62 is configured to push the network traffic data to the agent sidecar, where a kernel extension program is pre-configured in the agent sidecar;
and the transmission unit 63 is configured to send the network traffic data to the target service container managed by the target container scheduling unit through the kernel extension program.
In the above steps, the receiving unit 61 may receive the network traffic data, and the pushing unit 62 may push the network traffic data to the agent sidecar, where a kernel extension program is pre-configured in the agent sidecar, and the transmitting unit 63 sends the network traffic data to the target service container managed by the target container scheduling unit through the kernel extension program. In this embodiment, an extension interface program can be customized through the host kernel interface, so that all pod network traffic on the service grid is proxied by one shared agent sidecar program, greatly reducing system resource overhead and simplifying the network link, thereby solving the prior-art technical problem that intercepting and forwarding service container traffic through preset network rules in the service grid lengthens the network link and increases time consumption.
Optionally, the data transmission apparatus based on the service grid further includes: the establishing unit is used for establishing a link relation with the service container managed by each container scheduling unit by adopting a preset interface scheduling strategy before receiving the network traffic data; the scheduling unit is used for scheduling a kernel extension interface of a container kernel of the service container based on the link relation; and the writing unit is used for writing a kernel extension program corresponding to the service container based on the kernel extension interface.
Optionally, the data transmission apparatus based on the service grid further includes: the first acquisition module is used for acquiring the name space of each container scheduling unit before receiving the network traffic data; the first determining module is used for determining lease information corresponding to each container scheduling unit based on the namespace, wherein the lease information provides scheduling information and link information of the container scheduling units; and the first setting module is used for respectively setting scheduling tenants corresponding to each container scheduling unit in the kernel extension program based on the lease information.
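The setup path described by these three modules can be sketched as a short Python illustration. The derivation of lease information from a namespace is an assumption made purely for the example (the patent does not specify a format), and all identifiers (`lease_from_namespace`, `build_tenants`, the pod names) are hypothetical.

```python
# Hypothetical sketch of the setup step: for each container scheduling
# unit (pod), its namespace is read, lease information (scheduling and
# link info) is derived, and a scheduling tenant is registered per unit.

def lease_from_namespace(ns):
    # Assumed derivation: lease info carries scheduling and link details.
    return {"lease_id": f"lease-{ns}", "link": f"veth-{ns}"}

def build_tenants(scheduling_units):
    tenants = {}
    for unit, ns in scheduling_units.items():
        lease = lease_from_namespace(ns)
        tenants[lease["lease_id"]] = {"unit": unit, "link": lease["link"]}
    return tenants

units = {"pod-frontend": "frontend", "pod-orders": "orders"}
registry = build_tenants(units)
```

The resulting registry is what the dispatch step would later consult to route traffic for a given lease to its scheduling unit.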
Optionally, the transmission unit includes: the first extraction module is used for extracting service lease information in the network flow data; the second determining module is used for determining a target scheduling tenant corresponding to the target service container based on the service lease information; and the first sending module is used for sending the network flow data to the target service container through the target scheduling tenant and the kernel extension program.
Optionally, the service grid-based data transmission apparatus is applied to a virtual machine, the virtual machine includes a plurality of container scheduling units, each container scheduling unit correspondingly manages a plurality of service containers, and all the service containers share the agent sidecar.
Optionally, the service containers on the multiple virtual machines are interfaced with a container cluster management system, and the container cluster management system manages the multiple service containers.
The data transmission device based on the service grid may further include a processor and a memory, where the receiving unit 61, the pushing unit 62, the transmitting unit 63, and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to implement corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels may be provided, and the network traffic data is sent to the target service container managed by the target container scheduling unit through the kernel extension program by adjusting kernel parameters.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium for storing a computer program, wherein when the computer program runs, the apparatus on which the computer-readable storage medium is located is controlled to execute any one of the above-mentioned service grid-based data transmission methods.
According to another aspect of embodiments of the present invention, there is also provided an electronic device, including one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the service grid-based data transmission method of any one of the above.
The present application further provides a computer program product which, when executed on a data processing device, is adapted to execute a program initialized with the following method steps: receiving network traffic data; pushing the network traffic data to an agent sidecar, wherein a kernel extension program is pre-configured in the agent sidecar; and sending the network traffic data to a target service container managed by a target container scheduling unit through the kernel extension program.
Fig. 7 is a block diagram of a hardware configuration of an electronic device (or mobile device) for the service grid-based data transmission method according to an embodiment of the present invention. As shown in fig. 7, the electronic device may include one or more processors 102 (shown as 102a, 102b, … …, 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data. In addition, the electronic device may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a keyboard, a power supply, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 7 is only an illustration and is not intended to limit the structure of the electronic device. For example, the electronic device may also include more or fewer components than shown in fig. 7, or have a different configuration than shown in fig. 7.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technical content can be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various improvements and modifications can be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A method for data transmission based on a serving grid, comprising:
receiving network traffic data;
pushing the network traffic data to an agent sidecar, wherein a kernel extension program is pre-configured in the agent sidecar;
and sending the network traffic data to a target service container managed by a target container scheduling unit through the kernel extension program.
2. The data transmission method of claim 1, further comprising, prior to receiving the network traffic data:
establishing a link relation with the service container managed by each container scheduling unit by adopting a preset interface scheduling strategy;
scheduling a kernel extension interface of a container kernel of the service container based on the link relation;
writing the kernel extension program corresponding to the service container based on the kernel extension interface.
3. The data transmission method of claim 1, further comprising, prior to receiving the network traffic data:
acquiring a namespace of each container scheduling unit;
determining lease information corresponding to each of the container scheduling units based on the namespace, wherein the lease information provides scheduling information and link information of the container scheduling units;
and respectively setting scheduling tenants corresponding to the container scheduling units in the kernel extension program based on the lease information.
4. The data transmission method according to claim 3, wherein the step of sending the network traffic data to the target service container managed by the target container scheduling unit through the kernel extension program comprises:
extracting service lease information from the network traffic data;
determining a target scheduling tenant corresponding to the target service container based on the service lease information;
and sending the network traffic data to the target service container through the target scheduling tenant and the kernel extension program.
5. The data transmission method according to any one of claims 1 to 4, wherein the data transmission method based on the service grid is applied to a virtual machine, the virtual machine includes a plurality of container scheduling units, each of the container scheduling units correspondingly manages a plurality of service containers, and all the service containers share the agent sidecar.
6. The data transmission method according to claim 5, wherein a container cluster management system is interfaced with the service containers on the plurality of virtual machines, and the container cluster management system manages the plurality of service containers.
7. A data transmission apparatus based on a service grid, comprising:
a receiving unit, configured to receive network traffic data;
the pushing unit is used for pushing the network traffic data to the agent sidecar, wherein a kernel extension program is pre-configured in the agent sidecar;
and the transmission unit is used for sending the network traffic data to a target service container managed by the target container scheduling unit through the kernel extension program.
8. The data transmission apparatus according to claim 7, further comprising:
the establishing unit is used for establishing a link relation with the service container managed by each container scheduling unit by adopting a preset interface scheduling strategy before receiving the network traffic data;
the scheduling unit is used for scheduling the kernel extension interface of the container kernel of the service container based on the link relation;
and the writing unit is used for writing the kernel extension program corresponding to the service container based on the kernel extension interface.
9. A computer-readable storage medium for storing a computer program, wherein when the computer program runs, the device on which the computer-readable storage medium is located is controlled to execute the service grid-based data transmission method according to any one of claims 1 to 6.
10. An electronic device comprising one or more processors and memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the service grid-based data transmission method of any of claims 1 to 6.
CN202210635793.9A 2022-06-07 2022-06-07 Data transmission method and device based on service grid and electronic equipment Pending CN115022408A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210635793.9A CN115022408A (en) 2022-06-07 2022-06-07 Data transmission method and device based on service grid and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210635793.9A CN115022408A (en) 2022-06-07 2022-06-07 Data transmission method and device based on service grid and electronic equipment

Publications (1)

Publication Number Publication Date
CN115022408A true CN115022408A (en) 2022-09-06

Family

ID=83073074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210635793.9A Pending CN115022408A (en) 2022-06-07 2022-06-07 Data transmission method and device based on service grid and electronic equipment

Country Status (1)

Country Link
CN (1) CN115022408A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115733746A (en) * 2022-11-09 2023-03-03 中科驭数(北京)科技有限公司 Service grid unit deployment method, device, equipment and storage medium
CN116032806A (en) * 2023-03-27 2023-04-28 杭州谐云科技有限公司 Flow dyeing method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111552496A (en) * 2020-05-07 2020-08-18 上海道客网络科技有限公司 System and method for realizing seamless upgrade of sidecar based on temporary container addition
CN112929230A (en) * 2021-01-22 2021-06-08 中信银行股份有限公司 Test processing method and device, electronic equipment and computer readable storage medium
CN114329443A (en) * 2021-12-28 2022-04-12 杭州谐云科技有限公司 Method and system for generating container sandbox rule, electronic device and storage medium
CN114518969A (en) * 2022-02-18 2022-05-20 杭州朗和科技有限公司 Inter-process communication method, system, storage medium and computer device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111552496A (en) * 2020-05-07 2020-08-18 上海道客网络科技有限公司 System and method for realizing seamless upgrade of sidecar based on temporary container addition
CN112929230A (en) * 2021-01-22 2021-06-08 中信银行股份有限公司 Test processing method and device, electronic equipment and computer readable storage medium
CN114329443A (en) * 2021-12-28 2022-04-12 杭州谐云科技有限公司 Method and system for generating container sandbox rule, electronic device and storage medium
CN114518969A (en) * 2022-02-18 2022-05-20 杭州朗和科技有限公司 Inter-process communication method, system, storage medium and computer device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
THOMAS GRAF: "Goodbye Sidecar: Unlocking a Kernel-Level Service Mesh with eBPF", 《Cloud Native Community Updates》, pages 1 - 12 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115733746A (en) * 2022-11-09 2023-03-03 中科驭数(北京)科技有限公司 Service grid unit deployment method, device, equipment and storage medium
CN115733746B (en) * 2022-11-09 2024-06-07 中科驭数(北京)科技有限公司 Deployment method, device and equipment of service grid unit and storage medium
CN116032806A (en) * 2023-03-27 2023-04-28 杭州谐云科技有限公司 Flow dyeing method and system
CN116032806B (en) * 2023-03-27 2023-06-09 杭州谐云科技有限公司 Flow dyeing method and system

Similar Documents

Publication Publication Date Title
US20220123960A1 (en) Data Packet Processing Method, Host, and System
US10700979B2 (en) Load balancing for a virtual networking system
EP3654620B1 (en) Packet processing method in cloud computing system, host, and system
EP3669532B1 (en) Managing network connectivity between cloud computing service endpoints and virtual machines
WO2016155394A1 (en) Method and device for establishing link between virtual network functions
CN109302466B (en) Data processing method, related device and computer storage medium
CN115022408A (en) Data transmission method and device based on service grid and electronic equipment
US20140269712A1 (en) Tagging virtual overlay packets in a virtual networking system
CN107222324B (en) Service configuration method and device of network service
US10171294B2 (en) Information processing device and system design support method
US11095716B2 (en) Data replication for a virtual networking system
CN107133109B (en) Method and device for communication between modules and computing equipment
CN108200018A (en) Flow forwarding method and equipment, computer equipment and readable medium in cloud computing
CN114942826A (en) Cross-network multi-cluster system, access method thereof and cloud computing equipment
CN112202744A (en) Multi-system data communication method and device
CN111800523A (en) Management method, data processing method and system of virtual machine network
CN103795603A (en) Edge virtual bridging method and device based on multiple network interface cards
CN114124714A (en) Multi-level network deployment method, device, equipment and storage medium
CN116800616B (en) Management method and related device of virtualized network equipment
CN113765801B (en) Message processing method and device applied to data center, electronic equipment and medium
CN110049017B (en) Message intercommunication device and method between heterogeneous platforms
WO2023159956A1 (en) Bare metal server inspection and deployment method and apparatus, and device and medium
CN114172807A (en) Whole machine system and firmware upgrading method of intelligent network card thereof
CN110851512B (en) Data configuration method and device for open source framework
US9787805B2 (en) Communication control system and communication control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination