CN117812159A - Message transmission method, device, equipment and storage medium


Info

Publication number: CN117812159A
Authority: CN (China)
Prior art keywords: target, hardware, flow table, message, software
Legal status: Pending
Application number: CN202311685230.1A
Other languages: Chinese (zh)
Inventors: 王绍坤, 黄明亮, 荆慧
Current Assignee: Yusur Technology Co ltd
Original Assignee: Yusur Technology Co ltd
Priority and filing date: 2023-12-08
Publication date: 2024-04-02
Application filed by Yusur Technology Co ltd
Priority to CN202311685230.1A

Abstract

The disclosure relates to a message transmission method, apparatus, device, and storage medium in the technical field of data transmission. In response to receiving a target message, hardware flow table matching is performed on the target message based on a hardware acceleration engine; in response to the target message successfully matching the hardware flow table, the target message is forwarded to the target service container based on the hardware forwarding path determined by the hardware flow table. Hardware flow table matching and hardware forwarding of messages are thus performed by the hardware acceleration engine, which reduces the number of messages forwarded in software, greatly reduces the consumption of data processing resources on the data processing unit (DPU), increases the bandwidth and throughput of DPU network transmission, lowers its latency, and improves DPU network forwarding performance.

Description

Message transmission method, device, equipment and storage medium
Technical Field
The disclosure relates to the technical field of data transmission, and in particular to a message transmission method, apparatus, device, and storage medium.
Background
Since the central processing unit (Central Processing Unit, CPU) resources purchased by a tenant on a cloud server are very limited, the applications running in the service containers on a cloud server node need as many CPU resources as possible in order to concentrate on the computation or storage logic of the core business. However, while the service containers run, tasks and policies such as traffic forwarding and acceleration also occupy a large amount of CPU resources, so the limited CPU resources are contended for between the application programs and the network communication tasks.
To reserve more CPU resources for the tenant's core business applications, the data processing unit (Data Processing Unit, DPU) emerged. The DPU is a processor dedicated to handling network data: network forwarding tasks that originally occupied server CPU resources, such as a service mesh, are sunk onto the DPU network interface card for processing, which releases the CPU resources of the cloud server. This process is called DPU network traffic offloading.
In the current solution, a service mesh centralized proxy on the DPU transmits messages between service containers by software forwarding. Because software forwarding is implemented in software, it must rely on and consume the data processing resources on the DPU to forward messages, so the acceleration that DPU forwarding can achieve is limited by those resources. Although the CPU resources on the cloud server node are released, the software forwarding approach therefore limits DPU network forwarding performance, including bandwidth, throughput, and latency. A message transmission method capable of improving DPU network forwarding performance is therefore needed to solve the current problems.
Disclosure of Invention
To solve the above technical problems, the present disclosure provides a message transmission method, apparatus, device, and storage medium.
A first aspect of the present disclosure provides a method for transmitting a message, including:
in response to receiving a target message, performing hardware flow table matching on the target message based on a hardware acceleration engine;
and in response to the target message successfully matching the hardware flow table, forwarding the target message to a target service container based on a hardware forwarding path determined by the hardware flow table.
A second aspect of the present disclosure provides a message transmission apparatus, including:
the first matching module is used for performing hardware flow table matching on the target message based on a hardware acceleration engine in response to receiving the target message;
and the first forwarding module is used for forwarding the target message to the target service container based on the hardware forwarding path determined by the hardware flow table in response to the target message successfully matching the hardware flow table.
A third aspect of the present disclosure provides a computer device, including a memory and a processor, where the memory stores a computer program that, when executed by the processor, implements the message transmission method of the first aspect.
A fourth aspect of the present disclosure provides a computer readable storage medium having a computer program stored therein that, when executed by a processor, implements the message transmission method of the first aspect.
Compared with the prior art, the technical solution provided by the disclosure has the following advantages:
in response to receiving a target message, hardware flow table matching is performed on the target message based on a hardware acceleration engine; in response to the target message successfully matching the hardware flow table, the target message is forwarded to the target service container based on the hardware forwarding path determined by the hardware flow table. Hardware flow table matching and hardware forwarding of messages are thus performed by the hardware acceleration engine, which reduces the number of messages forwarded in software, greatly reduces the consumption of data processing resources on the data processing unit (DPU), increases the bandwidth and throughput of DPU network transmission, lowers its latency, and improves DPU network forwarding performance.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart of a conventional message transmission method provided in an embodiment of the present disclosure;
fig. 2 is a flowchart of a message transmission method provided in an embodiment of the present disclosure;
fig. 3 is a flowchart of a message transmission method provided in an embodiment of the present disclosure;
fig. 4 is a flowchart of a message transmission method provided in an embodiment of the present disclosure;
fig. 5 is a flowchart of a message transmission method provided in an embodiment of the present disclosure;
fig. 6 is a flowchart of a message transmission method provided in an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a message transmission device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
It should be noted that in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
It should be noted that references to "a" and "an" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
Since the central processing unit (Central Processing Unit, CPU) resources purchased by a tenant on a cloud server are very limited, the applications running in the service containers on a cloud server node need as many CPU resources as possible in order to concentrate on the computation or storage logic of the core business. However, while the service containers run, tasks and policies such as traffic forwarding and acceleration also occupy a large amount of CPU resources, so the limited CPU resources are contended for between the application programs and the network communication tasks.
To reserve more CPU resources for the tenant's core business applications, the data processing unit (Data Processing Unit, DPU) emerged. The DPU is a processor dedicated to handling network data: network forwarding tasks that originally occupied server CPU resources, such as a service mesh, are sunk onto the DPU network interface card for processing, which releases the CPU resources of the cloud server. This process is called DPU network traffic offloading.
In the current solution, a service mesh centralized proxy on the DPU transmits messages between service containers by software forwarding. Because software forwarding is implemented in software, it must rely on and consume the data processing resources on the DPU to forward messages, so the acceleration that DPU forwarding can achieve is limited by those resources. Although the CPU resources on the cloud server node are released, the software forwarding approach therefore limits DPU network forwarding performance, including bandwidth, throughput, and latency.
For example, as shown in fig. 1, which is a flowchart of an existing message transmission method provided by an embodiment of the present disclosure, a centralized proxy on the DPU and the application services in the service containers (Service Pods) on the cloud server node typically form the centralized cloud-native service mesh shown in fig. 1. In this technical framework, the data forwarding plane responsible for network forwarding in the container cloud network, i.e., the service mesh centralized proxy, is offloaded onto the SoC of the DPU; the service mesh centralized proxy that originally occupied a large amount of CPU resources of the cloud server node is thus sunk onto the DPU network interface card, releasing the CPU resources of the cloud server node. The DPU is a network interface card installed in a cloud server node of the data center and provides the node with a high-bandwidth, low-latency heterogeneous network computing acceleration engine.
The message transmission method provided by the embodiments of the disclosure may be performed by a computer device, which may be understood as any device with processing and computing capability in which a data processing unit (DPU) configured to perform network forwarding is provided.
In order to better understand the inventive concepts of the embodiments of the present disclosure, the technical solutions of the embodiments of the present disclosure are described below in conjunction with exemplary embodiments.
Fig. 2 is a flowchart of a message transmission method provided by an embodiment of the present disclosure, and as shown in fig. 2, the message transmission method provided by the embodiment includes the following steps:
step 210, in response to receiving the target message, performing hardware flow table matching on the target message based on the hardware acceleration engine.
In the embodiment of the disclosure, a hardware flow table is prestored in the DPU of the computer device. After receiving the target message, the DPU in the computer device may, in response, perform hardware flow table matching on the target message based on its internal hardware acceleration engine. Specifically, it may judge whether a target hardware table entry corresponding to the target message exists in the hardware flow table, the target hardware table entry containing the five-tuple information corresponding to the target message. If the entry exists, it may be determined that the target message matches the hardware flow table successfully; if not, it may be determined that the target message fails to match the hardware flow table.
A hardware acceleration engine may be understood as a module inside the DPU that performs hardware flow table matching on messages and forwards them in hardware.
A hardware flow table may be understood as a flow table composed of multiple hardware table entries, where each hardware table entry determines one hardware forwarding path for messages and contains the five-tuple information corresponding to that forwarding path. The five-tuple information comprises a source IP address, a source port, a destination IP address, a destination port, and a transport layer protocol.
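To make the matching step concrete, the following is a minimal sketch in C of a five-tuple lookup against a hardware flow table. The structure layouts, names, and the linear search are illustrative assumptions for exposition only; on a real DPU the lookup is performed by the hardware acceleration engine itself, not by CPU code like this.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Five-tuple as defined above: source IP/port, destination IP/port, protocol. */
typedef struct {
    uint32_t src_ip;
    uint32_t dst_ip;
    uint16_t src_port;
    uint16_t dst_port;
    uint8_t  proto;              /* transport layer protocol, e.g. 6 = TCP */
} five_tuple_t;

/* One hardware table entry: a five-tuple key plus the forwarding path it determines. */
typedef struct {
    five_tuple_t key;
    int          fwd_path;       /* hypothetical hardware forwarding path id */
} hw_flow_entry_t;

static bool tuple_eq(const five_tuple_t *a, const five_tuple_t *b)
{
    return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
           a->src_port == b->src_port && a->dst_port == b->dst_port &&
           a->proto == b->proto;
}

/* Return the target hardware table entry for the message, or NULL on a miss. */
static const hw_flow_entry_t *
hw_flow_lookup(const hw_flow_entry_t *table, size_t n, const five_tuple_t *msg)
{
    for (size_t i = 0; i < n; i++)
        if (tuple_eq(&table[i].key, msg))
            return &table[i];
    return NULL;                 /* match failed: fall back to the software flow table */
}

int main(void)
{
    hw_flow_entry_t table[] = {
        { { 0x0A000001, 0x0A000002, 40000, 8080, 6 }, 1 },
    };
    five_tuple_t msg = { 0x0A000001, 0x0A000002, 40000, 8080, 6 };

    const hw_flow_entry_t *e = hw_flow_lookup(table, 1, &msg);
    if (e)
        printf("hardware flow table hit: forwarding path %d\n", e->fwd_path);
    else
        printf("hardware flow table miss\n");
    return 0;
}
```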
Step 220, in response to the target message successfully matching the hardware flow table, forwarding the target message to the target service container based on the hardware forwarding path determined by the hardware flow table.
In the embodiment of the disclosure, in response to the target message successfully matching the hardware flow table, the DPU in the computer device may forward the target message to the target service container based on the hardware forwarding path determined by the hardware flow table.
The hardware forwarding path is determined by the five-tuple information in the target hardware table entry.
A service container (Service Pod) may be understood as a container that packages an application together with its dependent libraries, configuration files, etc., forming an independent operating environment and facilitating deployment and management of the application.
In some embodiments, forwarding the target message to the target service container based on the hardware forwarding path determined by the hardware flow table may include steps 2201-2204:
step 2201, obtain target five-tuple information in the target hardware table entry.
In the embodiment of the disclosure, the data processor DPU in the computer device may acquire the target five-tuple information in the target hardware table entry.
Step 2202, determining a hardware forwarding path corresponding to the target message based on the target quintuple information.
In the embodiment of the disclosure, the data processor DPU in the computer device may determine a hardware forwarding path corresponding to the target packet based on the target quintuple information.
Step 2203, match the hardware forwarding path with the corresponding hardware queue.
In the embodiment of the disclosure, the interface of each service container corresponds to one or more hardware queues in the DPU, that is, each hardware forwarding path corresponds to one or more hardware queues. For example, the interface of the service container may be a Virtual Function (VF) interface.
After determining the hardware forwarding path corresponding to the target message, the DPU in the computer device may match a corresponding hardware queue for the hardware forwarding path. For example, according to the priority of the target message, a hardware queue with the corresponding priority may be matched for the hardware forwarding path corresponding to the target message.
The hardware queues may be used to adjust the transmission order of messages, and may also apply a configurable RSS (Receive Side Scaling) algorithm, for example hashing on the source port in the five-tuple information so that concurrent traffic is hashed into different queues. This increases the concurrency of traffic forwarding and thereby improves forwarding performance.
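As a rough illustration of the RSS idea just described, the sketch below hashes the source port of the five-tuple to select one of several hardware queues. The hash function and queue count are arbitrary assumptions, not the DPU's actual (configurable) RSS algorithm.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_QUEUES 4             /* assumed number of hardware queues per VF */

/* Spread concurrent flows across queues by hashing the source port, as in
 * the example above. Knuth's multiplicative hash is used here purely for
 * illustration. */
static unsigned rss_queue(uint16_t src_port)
{
    uint32_t h = (uint32_t)src_port * 2654435761u;
    return h % NUM_QUEUES;
}

int main(void)
{
    for (uint16_t port = 40000; port < 40008; port++)
        printf("src_port %u -> hardware queue %u\n", port, rss_queue(port));
    return 0;
}
```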
Step 2204, forwarding the target message to the target service container based on the hardware queue corresponding to the hardware forwarding path.
In the embodiment of the disclosure, the DPU in the computer device may forward the target message to the target service container via the hardware queue corresponding to the hardware forwarding path.
For example, as shown in fig. 3, which is a flowchart of a message transmission method provided by an embodiment of the present disclosure: after the DPU receives the target message, hardware flow table matching is performed on it based on the hardware acceleration engine. In response to the target message successfully matching the hardware flow table, the target five-tuple information in the target hardware table entry is acquired, the hardware forwarding path corresponding to the target message is determined based on that five-tuple information, a corresponding hardware queue is matched for the hardware forwarding path, and the target message is forwarded to the target service container via that hardware queue.
The SoC (System on Chip) on the DPU runs an operating system, such as the openEuler system.
K8s, short for Kubernetes, is a service container orchestration tool deployed on the cloud server nodes.
The service containers of the service applications are deployed on the cloud server node based on K8s. Each service container corresponds to a virtual function (VF) interface, and the counterpart of the VF interface on the DPU SoC is called a virtual function representor interface (VFr).
A service mesh control plane (e.g., Istio, Linkerd) deployed based on Kubernetes is used to obtain the dynamic information and custom resource definition (Custom Resource Definition, CRD) configuration of each service container from the Kubernetes service API.
A service mesh control plane agent, such as the Istio control plane agent, is deployed on each cloud server node based on Kubernetes to synchronize with the service mesh control plane and obtain the full set of service application information and service mesh configuration managed by Kubernetes.
A centralized proxy (such as Envoy, Nginx, linkerd-proxy, etc.) is deployed on the DPU SoC based on a Docker component; the centralized proxy synchronizes with the service mesh control plane agent on the cloud server node and obtains the full set of service application information and service mesh configuration managed by Kubernetes.
Based on the obtained service mesh configuration, the centralized proxy and the service containers of the cloud server node form a centralized cloud-native service mesh unit, and multiple centralized cloud-native service mesh units receiving the same service mesh control plane information together form a cloud-native service mesh.
According to the configuration information of the service mesh control plane, the cloud-native service mesh provides traffic forwarding rules such as service discovery, load balancing, request routing, and rule configuration for each service application.
In the embodiment of the disclosure, hardware flow table matching is performed on the target message based on the hardware acceleration engine in response to receiving it, and in response to a successful match the target message is forwarded to the target service container based on the hardware forwarding path determined by the hardware flow table. Hardware flow table matching and hardware forwarding of messages are thus performed by the hardware acceleration engine, which reduces the number of messages forwarded in software, greatly reduces the consumption of data processing resources on the DPU, increases the bandwidth and throughput of DPU network transmission, lowers its latency, and improves DPU network forwarding performance.
Fig. 4 is a flowchart of a message transmission method provided by an embodiment of the present disclosure, and as shown in fig. 4, the message transmission method provided by the embodiment includes the following steps:
step 410, in response to receiving the target message, performing hardware flow table matching on the target message based on the hardware acceleration engine.
And step 420, in response to successful matching of the target message with the hardware flow table, forwarding the target message to the target service container based on the hardware forwarding path determined by the hardware flow table.
And 430, responding to failure of matching the hardware flow table of the target message, and performing software flow table matching on the target message based on the software acceleration engine.
In the embodiment of the disclosure, if the target message fails to match the hardware flow table, no target hardware table entry corresponding to the target message exists in the hardware flow table, that is, no hardware forwarding path can be found for it. In response to this failure, the DPU in the computer device may perform software flow table matching on the target message based on the software acceleration engine. Specifically, it may judge whether a target software flow table corresponding to the target message exists among the software flow tables, the target software flow table being one that determines a software forwarding path for the target message. If it exists, it may be determined that the target message matches the software flow table successfully; if not, it may be determined that the match fails.
A software acceleration engine may be understood as a module inside the DPU that performs software flow table matching on messages and forwards them in software.
The software flow tables may be understood as the at least one flow table corresponding to each software forwarding path.
Step 440, in response to the target message successfully matching the software flow table, determining the target software flow table that the target message matches.
In the embodiment of the disclosure, in response to the target message successfully matching the software flow table, the DPU in the computer device may determine the target software flow table that the target message matches; this flow table determines the software forwarding path corresponding to the target message.
Step 450, forwarding the target message to the target service container based on the software forwarding path determined by the target software flow table.
In the embodiment of the disclosure, after obtaining the software forwarding path corresponding to the target message, the DPU in the computer device may forward the target message to the target service container based on the software forwarding path determined by the target software flow table.
In this way, when the target message matches the hardware flow table successfully, hardware flow table matching and hardware forwarding are performed by the hardware acceleration engine, and when the match fails, software flow table matching and software forwarding are performed by the software acceleration engine. This reduces the number of messages forwarded in software, greatly reduces the consumption of data processing resources on the DPU, increases the bandwidth and throughput of DPU network transmission, lowers its latency, and improves DPU network forwarding performance.
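The control-flow sketch below summarizes the two-stage pipeline of this embodiment: the hardware flow table is tried first, and the software flow table is used as a fallback on a miss. The stub functions merely stand in for the hardware and software acceleration engines and are not real DPU interfaces.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct { int id; } msg_t;

/* Stubs standing in for the two engines; a real DPU would do this work in
 * hardware and in the software acceleration engine respectively. */
static bool hw_match(const msg_t *m)   { return m->id % 2 == 0; }  /* toy rule */
static void hw_forward(const msg_t *m) { printf("msg %d: hardware forwarding path\n", m->id); }
static bool sw_match(const msg_t *m)   { (void)m; return true; }
static void sw_forward(const msg_t *m) { printf("msg %d: software forwarding path\n", m->id); }

static void transmit(const msg_t *m)
{
    if (hw_match(m)) {
        /* steps 410-420: hardware flow table hit, forward in hardware */
        hw_forward(m);
    } else if (sw_match(m)) {
        /* steps 430-450: hardware miss, match and forward in software;
         * the flow can then be offloaded to hardware (see fig. 5 below) */
        sw_forward(m);
    } else {
        printf("msg %d: no matching flow\n", m->id);
    }
}

int main(void)
{
    msg_t msgs[] = { {0}, {1}, {2} };
    for (int i = 0; i < 3; i++)
        transmit(&msgs[i]);
    return 0;
}
```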
In some embodiments of the present disclosure, before performing software flow table matching on the target message based on the software acceleration engine in response to the target message failing to match the hardware flow table, the DPU in the computer device may obtain the traffic forwarding rules between service containers constructed by the service mesh control plane component, and construct, based on these rules, the software flow table corresponding to the software forwarding path between any two service containers.
A service mesh (Service Mesh) may be understood as an infrastructure layer that transparently manages and regulates communication between services. Its basic idea is to separate inter-service communication and related functions from the core logic of the application itself. By separating service communication and the related management and control functions from the business program and sinking them to the infrastructure layer, the service mesh is completely decoupled from the business system, allowing developers to concentrate on the business itself. Each service mesh may manage the communication of the applications in at least one service container.
Traffic forwarding rules may include communication control policies such as service discovery, load balancing, request routing, and rule configuration.
The service mesh control plane may be understood as the component that constructs the traffic forwarding rules between the service containers. According to the traffic forwarding rules configured by the service mesh control plane, the service mesh can provide services such as service discovery, load balancing, request routing, and rule configuration for the applications in each service container.
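As a hedged sketch of this construction step, the code below turns hypothetical control-plane forwarding rules into software flow entries, one per pair of communicating service containers. The rule structure is an assumption made for illustration; a real control plane such as Istio expresses such rules as CRD configuration rather than C structs.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical traffic forwarding rule between two service containers: the
 * control plane maps a (source, destination) container pair to the
 * destination container's VF interface. */
typedef struct {
    uint32_t src_container_ip;
    uint32_t dst_container_ip;
    int      dst_vf;
} fwd_rule_t;

/* Software flow entry: a match key plus the software forwarding path. */
typedef struct {
    uint32_t src_ip, dst_ip;
    int      fwd_path;
} sw_flow_t;

/* Build one software flow entry per rule, i.e. per container pair. */
static size_t build_sw_flows(const fwd_rule_t *rules, size_t n, sw_flow_t *out)
{
    for (size_t i = 0; i < n; i++) {
        out[i].src_ip   = rules[i].src_container_ip;
        out[i].dst_ip   = rules[i].dst_container_ip;
        out[i].fwd_path = rules[i].dst_vf;
    }
    return n;
}

int main(void)
{
    fwd_rule_t rules[] = {
        { 0x0A000001, 0x0A000002, 1 },   /* container A -> container B */
        { 0x0A000002, 0x0A000001, 2 },   /* container B -> container A */
    };
    sw_flow_t flows[2];
    size_t n = build_sw_flows(rules, 2, flows);
    printf("built %zu software flow entries\n", n);
    return 0;
}
```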
Fig. 5 is a flowchart of a message transmission method provided by an embodiment of the present disclosure, and as shown in fig. 5, the message transmission method provided by the embodiment includes the following steps:
step 510, in response to receiving the target message, performing hardware flow table matching on the target message based on the hardware acceleration engine.
Step 520, in response to the target message failing to match the hardware flow table, performing software flow table matching on the target message based on the software acceleration engine.
Step 530, in response to the target message successfully matching the software flow table, determining the target software flow table that the target message matches.
Step 540, forwarding the target message to the target service container based on the software forwarding path determined by the target software flow table.
Step 550, performing hardware flow table conversion on the target software flow table based on the software forwarding path determined by the target software flow table, to obtain a target hardware table entry corresponding to the target message.
In the embodiment of the disclosure, after obtaining the target software flow table that the target message matches, the DPU in the computer device may obtain the software forwarding path determined by that flow table and perform hardware flow table conversion on it based on this path, obtaining the target hardware table entry corresponding to the target message.
In some embodiments, the target five-tuple information of the software forwarding path determined by the target software flow table may be obtained first, and the target hardware table entry corresponding to the target message may then be constructed based on this five-tuple information.
Step 560, sending the target hardware table entry to the hardware flow table in the hardware acceleration engine.
In the embodiment of the disclosure, after obtaining the target hardware table entry corresponding to the target message, the DPU in the computer device may send the target hardware table entry to the hardware flow table in the hardware acceleration engine.
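The sketch below illustrates steps 550 and 560 using the same illustrative structures as earlier: the five-tuple of the matched software flow is copied into a hardware table entry, which is then installed into an in-memory stand-in for the hardware flow table. On a real DPU the installation would go through the hardware acceleration engine's own interface, which this sketch does not model.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
} five_tuple_t;

typedef struct { five_tuple_t key; int fwd_path; } sw_flow_t;
typedef struct { five_tuple_t key; int fwd_path; } hw_flow_entry_t;

#define HW_TABLE_CAP 1024
static hw_flow_entry_t hw_table[HW_TABLE_CAP];  /* stand-in for the hardware flow table */
static size_t hw_table_len;

/* Step 550: hardware flow table conversion, built from the target
 * five-tuple of the matched software flow. */
static hw_flow_entry_t convert_to_hw_entry(const sw_flow_t *sw)
{
    hw_flow_entry_t e = { sw->key, sw->fwd_path };
    return e;
}

/* Step 560: send the target hardware table entry to the hardware flow table. */
static int hw_table_install(hw_flow_entry_t e)
{
    if (hw_table_len == HW_TABLE_CAP)
        return -1;               /* table full: keep forwarding this flow in software */
    hw_table[hw_table_len++] = e;
    return 0;
}

int main(void)
{
    sw_flow_t sw = { { 0x0A000001, 0x0A000002, 40000, 8080, 6 }, 1 };
    if (hw_table_install(convert_to_hw_entry(&sw)) == 0)
        printf("flow offloaded: the next identical message hits the hardware table\n");
    return 0;
}
```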
For example, as shown in fig. 6, which is a flowchart of a message transmission method provided in an embodiment of the present disclosure: in response to receiving a target message, the DPU performs hardware flow table matching on it based on the hardware acceleration engine. In response to the target message failing to match the hardware flow table, software flow table matching is performed on it based on the software acceleration engine. In response to the target message successfully matching the software flow table, the target software flow table that it matches is determined, the target message is forwarded to the target service container based on the software forwarding path determined by the target software flow table, hardware flow table conversion is performed on the target software flow table based on that software forwarding path to obtain the target hardware table entry corresponding to the target message, and the entry is sent to the hardware flow table in the hardware acceleration engine.
In fig. 6, the centralized proxy in the DPU SoC and OVS-DPDK together form the software acceleration engine. The centralized proxy generates software flow tables according to the traffic forwarding rules between service containers configured by the service mesh control plane and issues them to OVS-DPDK. OVS-DPDK performs software flow table matching on the target message, determines the target software flow table that the target message matches in response to a successful match, performs hardware flow table conversion on the target software flow table based on the software forwarding path it determines to obtain the target hardware table entry corresponding to the target message, and sends the entry to the hardware flow table in the hardware acceleration engine.
OVS stands for Open vSwitch, an open-source virtual switch.
DPDK stands for Data Plane Development Kit, a set of libraries and drivers for fast packet processing. DPDK uses polling instead of interrupts to handle packets: when a packet is received, the network card driver reloaded by DPDK does not notify the CPU through an interrupt, but stores the packet directly into memory and lets application-layer software process it directly through the interfaces provided by DPDK. This saves a large amount of CPU interrupt-handling time and memory-copying time, greatly improves data processing performance and throughput, and improves the working efficiency of data plane applications.
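For reference, a DPDK poll-mode receive loop has roughly the shape sketched below: the core spins on rte_eth_rx_burst instead of waiting for interrupts. Memory pool creation, port configuration, and queue setup are omitted, so this is an outline of the polling model rather than a complete, runnable DPDK program.

```c
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return EXIT_FAILURE;     /* environment abstraction layer init failed */

    /* ... port/queue setup omitted for brevity ... */

    struct rte_mbuf *bufs[BURST_SIZE];
    for (;;) {
        /* Poll the NIC directly: no interrupt, no context switch. */
        uint16_t n = rte_eth_rx_burst(0 /* port */, 0 /* queue */, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < n; i++) {
            /* Application-layer processing would happen here, e.g. the
             * five-tuple flow table matching described in this disclosure. */
            rte_pktmbuf_free(bufs[i]);
        }
    }
    /* unreachable */
}
```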
Step 570, in response to receiving the target message again, performing hardware flow table matching on the target message based on the hardware acceleration engine.
Step 580, in response to the target message successfully matching the hardware flow table, forwarding the target message to the target service container based on the hardware forwarding path determined by the hardware flow table.
In the embodiment of the disclosure, because the hardware flow table now contains the target hardware table entry corresponding to the target message, the target message matches the hardware flow table successfully, and the DPU in the computer device may forward the target message to the target service container based on the hardware forwarding path determined by the hardware flow table.
In this way, after the target message fails to match the hardware flow table, software flow table matching and software forwarding are performed by the software acceleration engine, the target hardware table entry corresponding to the target message is constructed, and the entry is sent to the hardware flow table in the hardware acceleration engine. When the DPU receives the target message again, hardware flow table matching and hardware forwarding are performed by the hardware acceleration engine. This reduces the number of messages forwarded in software, greatly reduces the consumption of data processing resources on the DPU, increases the bandwidth and throughput of DPU network transmission, lowers its latency, and improves DPU network forwarding performance.
Fig. 7 is a schematic structural diagram of a message transmission apparatus according to an embodiment of the present disclosure. The apparatus may be understood as the above-mentioned computer device or as part of the functional modules in that computer device. As shown in fig. 7, the message transmission apparatus 700 includes:
a first matching module 710, configured to perform hardware flow table matching on the target message based on the hardware acceleration engine in response to receiving the target message;
the first forwarding module 720 is configured to forward the target message to the target service container based on the hardware forwarding path determined by the hardware flow table in response to the target message successfully matching the hardware flow table.
Optionally, the first matching module includes:
the judging submodule is used for judging whether a target hardware table entry corresponding to the target message exists in the hardware flow table;
the first determining submodule is used for determining that the target message matches the hardware flow table successfully if the entry exists;
and the second determining submodule is used for determining that the target message fails to match the hardware flow table if the entry does not exist.
Optionally, the first forwarding module includes:
the first forwarding sub-module is used for forwarding the target message to the target service container based on the hardware forwarding path determined by the target hardware table entry.
Optionally, the message transmission apparatus includes:
the second matching module is used for performing software flow table matching on the target message based on the software acceleration engine in response to the target message failing to match the hardware flow table;
the determining module is used for determining the target software flow table that the target message matches in response to the target message successfully matching the software flow table;
and the second forwarding module is used for forwarding the target message to the target service container based on the software forwarding path determined by the target software flow table.
Optionally, the message transmission apparatus includes:
the acquisition module is used for acquiring the traffic forwarding rules between the service containers configured by the service mesh control plane;
the construction module is used for constructing, based on the traffic forwarding rules, the software flow table corresponding to the software forwarding path between any two service containers.
Optionally, the message transmission apparatus includes:
the conversion module is used for performing hardware flow table conversion on the target software flow table based on the software forwarding path determined by the target software flow table, to obtain the target hardware table entry corresponding to the target message;
and the sending module is used for sending the target hardware table entry to the hardware flow table in the hardware acceleration engine.
Optionally, the conversion module includes:
the first acquisition sub-module is used for acquiring the target five-tuple information of the software forwarding path determined by the target software flow table;
and the construction submodule is used for constructing the target hardware table entry corresponding to the target message based on the target five-tuple information.
Optionally, the first forwarding module includes:
the second acquisition sub-module is used for acquiring the target five-tuple information in the target hardware table entry;
the third determining submodule is used for determining the hardware forwarding path corresponding to the target message based on the target five-tuple information;
the matching sub-module is used for matching the corresponding hardware queue for the hardware forwarding path;
and the second forwarding sub-module is used for forwarding the target message to the target service container based on the hardware queue corresponding to the hardware forwarding path.
The message transmission apparatus provided in the embodiments of the present disclosure may perform the method of any one of the above embodiments; its implementation and beneficial effects are similar and are not repeated here.
The embodiments of the present disclosure further provide a computer device including a processor and a memory, where the memory stores a computer program that, when executed by the processor, implements the method of any one of the foregoing embodiments; its implementation and beneficial effects are similar and are not repeated here.
The embodiments of the present disclosure provide a computer readable storage medium in which a computer program is stored, where the computer program, when executed by a processor, implements the method of any one of the foregoing embodiments; its implementation and beneficial effects are similar and are not repeated here.
The computer readable storage medium described above may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer programs described above may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer device, partly on the user's device, as a stand-alone software package, partly on the user's computer device and partly on a remote computer device or entirely on the remote computer device or server.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed, without departing from the spirit of the disclosure, by any combination of the features described above or by substituting them with technical features having similar functions disclosed in this disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for transmitting a message, comprising:
in response to receiving a target message, performing hardware flow table matching on the target message based on a hardware acceleration engine; and
in response to the target message successfully matching the hardware flow table, forwarding the target message to a target service container based on a hardware forwarding path determined by the hardware flow table.
2. The method of claim 1, wherein performing hardware flow table matching on the target message based on the hardware acceleration engine comprises:
judging whether a target hardware table entry corresponding to the target message exists in the hardware flow table;
if so, determining that the target message matches the hardware flow table successfully;
if not, determining that the target message fails to match the hardware flow table;
and wherein forwarding the target message to the target service container based on the hardware forwarding path determined by the hardware flow table comprises:
forwarding the target message to the target service container based on the hardware forwarding path determined by the target hardware table entry.
3. The method of claim 1, wherein after the hardware flow table matching is performed on the target message based on the hardware acceleration engine, the method further comprises:
in response to the target message failing to match the hardware flow table, performing software flow table matching on the target message based on a software acceleration engine;
in response to the target message successfully matching the software flow table, determining a target software flow table that the target message matches; and
forwarding the target message to a target service container based on a software forwarding path determined by the target software flow table.
4. The method of claim 3, wherein before performing software flow table matching on the target message based on the software acceleration engine in response to the target message failing to match the hardware flow table, the method further comprises:
acquiring the traffic forwarding rules between service containers configured by a service mesh control plane; and
constructing, based on the traffic forwarding rules, a software flow table corresponding to a software forwarding path between any two service containers.
5. The method of claim 3, wherein after determining the target software flow table that the target message matches, the method further comprises:
performing hardware flow table conversion on the target software flow table based on the software forwarding path determined by the target software flow table, to obtain a target hardware table entry corresponding to the target message; and
sending the target hardware table entry to a hardware flow table in the hardware acceleration engine.
6. The method of claim 5, wherein performing hardware flow table conversion on the target software flow table based on the software forwarding path determined by the target software flow table to obtain the target hardware table entry corresponding to the target message comprises:
acquiring target five-tuple information of the software forwarding path determined by the target software flow table; and
constructing the target hardware table entry corresponding to the target message based on the target five-tuple information.
7. The method of claim 1, wherein the forwarding the target message to a target service container based on the hardware forwarding path determined by the hardware flow table comprises:
acquiring target five-tuple information in the target hardware table entry;
determining a hardware forwarding path corresponding to the target message based on the target five-tuple information;
matching corresponding hardware queues for the hardware forwarding paths;
and forwarding the target message to a target service container based on the hardware queue corresponding to the hardware forwarding path.
8. A message transmission apparatus, comprising:
the first matching module is used for performing hardware flow table matching on the target message based on a hardware acceleration engine in response to receiving the target message;
and the first forwarding module is used for forwarding the target message to a target service container based on a hardware forwarding path determined by the hardware flow table in response to the target message successfully matching the hardware flow table.
9. A computer device, comprising:
a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, implements the message transmission method of any of claims 1-7.
10. A computer readable storage medium, characterized in that the storage medium has stored therein a computer program which, when executed by a processor, implements the message transmission method according to any of claims 1-7.
CN202311685230.1A, filed 2023-12-08: Message transmission method, device, equipment and storage medium (status: pending)

Priority application: CN202311685230.1A, priority date 2023-12-08, filing date 2023-12-08
Publication: CN117812159A, published 2024-04-02
Family ID: 90434167
Country: CN (China)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination