CN117194017A - Message forwarding method, device, electronic equipment and computer readable medium - Google Patents

Message forwarding method, device, electronic equipment and computer readable medium

Info

Publication number
CN117194017A
Authority
CN
China
Prior art keywords
message
thread
node
data
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311110002.1A
Other languages
Chinese (zh)
Inventor
潘道俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CCB Finetech Co Ltd
Original Assignee
CCB Finetech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CCB Finetech Co Ltd filed Critical CCB Finetech Co Ltd
Priority to CN202311110002.1A priority Critical patent/CN117194017A/en
Publication of CN117194017A publication Critical patent/CN117194017A/en
Pending legal-status Critical Current

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a message forwarding method, a message forwarding device, electronic equipment and a computer readable medium, relating to the technical field of cloud computing. One embodiment of the method comprises the following steps: receiving a message sent by a network card receiving end and forwarding the message to the message queue of a target thread; the target thread takes the message out of the message queue and looks up the node data under the target node where the target thread is located according to the message parameters carried in the message, so as to determine, from the node data, the target operation to be executed on the message and the next-hop address of the route; the target thread executes the target operation on the message and records the thread data generated by executing the target operation; and the message and the next-hop address are sent to a network card sending end. This embodiment can solve the technical problem that the performance improvement of gateway services is limited.

Description

Message forwarding method, device, electronic equipment and computer readable medium
Technical Field
The present invention relates to the field of cloud computing technologies, and in particular, to a method and apparatus for forwarding a message, an electronic device, and a computer readable medium.
Background
In applications outside the cloud-computing network field, the DPDK technical framework and the high-performance libraries DPDK provides are generally sufficient to meet the performance requirements of a service scenario: for the run-time memory data, it is enough to distribute the service sub-functions across different nodes for processing, without deliberately analyzing the role of the memory data or the association relationships between data items.
Against the technical background of widespread cloud-computing adoption, traditional gateway services implemented on a kernel-mode protocol stack hit a performance bottleneck, so the DPDK technology is currently commonly adopted to implement the data-plane forwarding function in user mode, which allows CPU resources to be used more fully to improve gateway performance; even so, the resulting improvement in gateway service performance is limited.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, an apparatus, an electronic device, and a computer readable medium for forwarding a message, so as to solve the technical problem that performance enhancement of a gateway service is limited.
In order to achieve the above object, according to an aspect of the embodiments of the present invention, there is provided a message forwarding method, applied to a CPU, including:
receiving a message sent by a network card receiving end, and forwarding the message to a message queue of a target thread;
The target thread takes out the message from the message queue, and searches node data under a target node where the target thread is located according to message parameters carried in the message, so that target operation executed on the message and a next hop address of a route are determined according to the node data;
the target thread executes the target operation on the message and records thread data generated by executing the target operation;
and sending the message and the next hop address to a network card sending end.
Optionally, searching node data under the target node where the target thread is located according to the message parameter carried in the message includes:
determining a storage address of node data in a memory under a target node where the target thread is located according to the global data of the CPU; the global data comprises context environment data, storage addresses of node data under each node in a memory and storage addresses of thread data of each thread in the memory;
and searching the node data from the memory according to the storage address.
Optionally, the CPU integrates a plurality of cores, the cores are grouped into different nodes, each core is bound with each thread one by one, and node data under each node is the same.
Optionally, before receiving a message sent by the network card receiving end and forwarding the message to the message queue of the target thread, the method further includes:
determining the storage address of the node data under each node in the memory according to the global data of the CPU;
and synchronously modifying the node data under each node according to the storage address of the node data under each node in the memory.
Optionally, forwarding the message to a message queue of the target thread includes:
mapping out a target thread through a hash algorithm according to the five-tuple information carried in the message, so as to forward the message to a message queue of the target thread; the five-tuple information comprises an outer layer source address, an outer layer destination address, an inner layer source address, an inner layer destination address and an inner layer protocol type.
Optionally, the node data includes virtual private cloud network configuration data, an access control list, a routing table, and network address translation configuration data.
Optionally, the target operation includes at least one of:
removing the GRE message header, packaging the GRE message header, forwarding the message, discarding the message, and modifying the source and destination in the message header.
In addition, according to another aspect of the embodiment of the present invention, there is provided a packet forwarding device, provided in a CPU, including:
the receiving module is used for receiving the message sent by the network card receiving end and forwarding the message to a message queue of the target thread;
the thread module is used for taking out the message from the message queue through the target thread, searching node data under a target node where the target thread is located according to message parameters carried in the message, and determining target operation and a next hop address of a route executed on the message according to the node data; executing the target operation on the message through the target thread, and recording thread data generated by executing the target operation;
and the sending module is used for sending the message and the next hop address to a network card sending end.
Optionally, the thread module is further configured to:
determining a storage address in a memory of node data under a target node where the target thread is located according to global data of the CPU by the target thread; the global data comprises context environment data, storage addresses of node data under each node in a memory and storage addresses of thread data of each thread in the memory;
And searching the node data from the memory by the target thread according to the storage address.
Optionally, the CPU integrates a plurality of cores, the cores are grouped into different nodes, each core is bound with each thread one by one, and node data under each node is the same.
Optionally, the apparatus further comprises a configuration module for:
determining the storage address of the node data under each node in the memory according to the global data of the CPU;
and synchronously modifying the node data under each node according to the storage address of the node data under each node in the memory.
Optionally, the receiving module is further configured to:
mapping out a target thread through a hash algorithm according to the five-tuple information carried in the message, so as to forward the message to a message queue of the target thread; the five-tuple information comprises an outer layer source address, an outer layer destination address, an inner layer source address, an inner layer destination address and an inner layer protocol type.
Optionally, the node data includes virtual private cloud network configuration data, an access control list, a routing table, and network address translation configuration data.
Optionally, the target operation includes at least one of:
removing the GRE message header, packaging the GRE message header, forwarding the message, discarding the message, and modifying the source and destination in the message header.
According to another aspect of an embodiment of the present invention, there is also provided an electronic device including:
one or more processors;
storage means for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the method of any of the embodiments described above.
According to another aspect of an embodiment of the present invention, there is also provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the method according to any of the embodiments described above.
According to another aspect of embodiments of the present invention, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to any of the embodiments described above.
One embodiment of the above invention has the following advantage or beneficial effect: by adopting the technical means of taking the message out of the message queue with the target thread and looking up the node data under the target node where the target thread is located according to the message parameters carried in the message, so as to determine from the node data the target operation to be executed on the message and the next-hop address of the route, the technical problem in the prior art that the performance improvement of gateway services is limited is solved. According to the embodiment of the invention, the memory data is divided into global data, node data and thread data; the global data is globally unique, the node data is redundant among the nodes, the thread data is partitioned among the threads, and each thread only accesses the node data of the node where its bound CPU core is located, so that cross-node memory access is avoided and the gateway performance is effectively improved.
Further effects of the above-described non-conventional alternatives are described below in connection with the embodiments.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
FIG. 1 is a flow chart of a message forwarding method according to an embodiment of the present invention;
fig. 2 is a system architecture diagram of a message forwarding method according to an embodiment of the present invention;
FIG. 3 is a flow chart of a message forwarding method according to one exemplary embodiment of the present invention;
FIG. 4 is a flow chart of a message forwarding method according to one exemplary embodiment of the present invention;
fig. 5 is a schematic diagram of a packet forwarding device according to an embodiment of the present invention;
FIG. 6 is an exemplary system architecture diagram in which embodiments of the present invention may be applied;
fig. 7 is a schematic diagram of a computer system suitable for use in implementing an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present invention are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the technical scheme of the invention, the acquisition, analysis, use, transmission and storage of user personal information all comply with the relevant laws and regulations, serve lawful and reasonable purposes, are not shared, leaked or sold outside such lawful uses, and are subject to the supervision and management of the regulatory authorities. Necessary measures should be taken to prevent illegal access to such personal information data, to ensure that personnel with access to it comply with the relevant laws and regulations, and to protect the personal information of users. Once this personal information is no longer needed, the risk should be minimized by limiting or even prohibiting its collection and/or by deleting the data.
User privacy is protected by de-identifying data when applicable, including in certain related applications, such as by removing specific identifiers (e.g., name, account, date of birth, etc.), controlling the amount or specificity of stored data, controlling how data is stored, and/or other methods.
Fig. 1 is a flow chart of a message forwarding method according to an embodiment of the present invention. As an embodiment of the present invention, as shown in fig. 1, the message forwarding method is applied to a CPU, and may include the following steps:
step 101, receiving a message sent by a network card receiving end, and forwarding the message to a message queue of a target thread.
As shown in fig. 2, a gateway service deployed on the CPU receives the data-plane messages from the PMD (RX) and forwards them to the message queue of the target thread. PMD stands for Poll Mode Driver, a network card driver provided by the DPDK that replaces the kernel interrupt model with polling. RX and TX denote the receive side and the transmit side of the network card, respectively. As shown in fig. 2, a message is received by the PMD (RX) at the receiving end of the network card and, after being processed by the gateway service, is sent out by the PMD (TX) at the sending end of the network card.
Optionally, forwarding the message to a message queue of the target thread includes: mapping out a target thread through a hash algorithm according to the five-tuple information carried in the message, so as to forward the message to the message queue of the target thread; the five-tuple information comprises an outer-layer source address, an outer-layer destination address, an inner-layer source address, an inner-layer destination address and an inner-layer protocol type. In the embodiment of the invention, the five-tuple hash algorithm is adopted to evenly disperse received messages to different threads for processing, which keeps thread resources balanced while each thread can still quickly access the memory of the node where its bound CPU core is located. Specifically, a 32-bit check code is calculated with the CRC32 algorithm over five dimensions of the GRE message (the outer-layer source address, outer-layer destination address, inner-layer source address, inner-layer destination address and inner-layer protocol type), and the check code is then treated as an integer and mapped to a thread index according to the number of threads.
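The five-tuple dispatch described above can be sketched as follows. This is an illustrative Python model, not the patent's implementation: the function and parameter names are assumptions, with CRC32 serving as the 32-bit check code and the thread index taken modulo the thread count.

```python
import socket
import struct
import zlib

def map_to_thread(outer_src: str, outer_dst: str,
                  inner_src: str, inner_dst: str,
                  inner_proto: int, num_threads: int) -> int:
    """Map a GRE packet's five-tuple to a thread index.

    A CRC32 check code is computed over the packed five-tuple and taken
    modulo the thread count, so all packets of the same session always
    land in the same thread's message queue.
    """
    key = struct.pack(
        "!4s4s4s4sB",
        socket.inet_aton(outer_src), socket.inet_aton(outer_dst),
        socket.inet_aton(inner_src), socket.inet_aton(inner_dst),
        inner_proto,
    )
    return zlib.crc32(key) % num_threads
```

Because the mapping depends only on the five-tuple, one session's packets are always handled by one thread, which is what allows that thread to keep its session-tracking data private.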
Step 102, the target thread takes the message out of the message queue, and searches node data under the target node where the target thread is located according to the message parameter carried in the message, so as to determine a target operation executed on the message and a next hop address of a route according to the node data.
Optionally, the CPU integrates a plurality of cores, the cores are grouped into different nodes, each core is bound to a thread one by one, and the node data under each node is identical. It should be noted that, under the NUMA architecture, multiple cores are integrated on the CPU and the cores are grouped into nodes (NODE) for management, so each node has its own CPU cores, bus and local memory; the CPU cores are bound to threads one by one, which avoids thread-scheduling switches. The embodiment of the invention implements the gateway service based on the DPDK technology and binds each thread to a CPU core by means of thread affinity. On this basis, the embodiment of the invention divides the run-time memory data of the gateway service into global data, node data and thread data: the global data is globally unique, the node data is redundant among the nodes, and the thread data is partitioned among the threads, ensuring that each thread only needs to access the memory of the node where its bound CPU core is located. Because accessing the local node's memory is faster than accessing the memory of other nodes, the performance of the gateway service can be significantly improved.
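The grouping and one-to-one binding can be sketched as below. This is a hedged Python model with illustrative names; a real DPDK gateway would discover the topology from the hardware and pin threads via CPU affinity (e.g. per-lcore EAL threads) rather than compute it like this.

```python
def group_cores_into_nodes(num_cores: int, num_nodes: int) -> dict:
    """Group CPU core ids into NUMA node ids as contiguous blocks
    (for illustration; the real mapping comes from the hardware)."""
    per_node = num_cores // num_nodes
    return {n: list(range(n * per_node, (n + 1) * per_node))
            for n in range(num_nodes)}

def bind_threads(nodes: dict) -> dict:
    """Bind thread ids to cores one by one; returns thread -> (node, core),
    so each thread knows which node's local memory it should use."""
    binding = {}
    tid = 0
    for node, cores in sorted(nodes.items()):
        for core in cores:
            binding[tid] = (node, core)
            tid += 1
    return binding
```

With 8 cores split over 2 nodes, threads 0 to 3 are pinned to node 0 and threads 4 to 7 to node 1, matching the redundant per-node data layout described above.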
Searching node data under the target node where the target thread is located according to the message parameters carried in the message includes: determining, according to the global data of the CPU, the storage address in memory of the node data under the target node where the target thread is located, the global data comprising context environment data, the storage address in memory of the node data under each node, and the storage address in memory of the thread data of each thread; and looking up the node data from the memory according to the storage address. From the viewpoint of hierarchical management of run-time memory data, the embodiment of the invention divides the memory data of the running gateway service into global data, node data and thread data, and each thread only needs to access the memory of the node where its bound CPU core is located, which reduces cross-node data access, balances thread resources and further improves the performance of the gateway service. The storage address of the global data in memory is fixed; the global data comprises the context environment data of the running gateway service, the storage address in memory of the node data under each node, and the storage address in memory of the thread data of each thread. The node data is distributed redundantly on each node and is only accessed by the threads on that node; it cannot be accessed across nodes. Therefore, the node data serves CPU access within its node, and a CPU cannot directly access cross-node data. If cross-node data must be accessed, it has to be reached indirectly by dereferencing, step by step, the pointers in the global data.
It should be noted that the global data is uniquely identified by global_ctx; the global data is managed in memory, and any data item can be accessed directly or indirectly through global_ctx.
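A minimal sketch of this three-level layout follows, assuming the class and field names (which are illustrative, not from the patent): the global context holds references to every node's redundant data copy and every thread's private data, and a thread resolves its own node's copy through it.

```python
class GlobalCtx:
    """Globally unique context (the patent's global_ctx): its storage
    location is fixed, and it holds references to every node's redundant
    node-data copy and every thread's private thread data."""

    def __init__(self, num_nodes: int, num_threads: int):
        self.env = {}  # context/environment data of the running gateway
        # one redundant node-data copy per NUMA node
        self.node_data = [dict() for _ in range(num_nodes)]
        # one private data area per thread (e.g. session tracking)
        self.thread_data = [dict() for _ in range(num_threads)]

    def lookup_node_data(self, node_id: int, key: str):
        # A thread dereferences only its own node's copy; reaching another
        # node's copy must go indirectly through this global context.
        return self.node_data[node_id].get(key)
```

In the real gateway these would be raw memory addresses recorded in global_ctx rather than Python object references; the structure (global, per-node, per-thread) is the point of the sketch.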
Optionally, the node data includes virtual private cloud network configuration data, an access control list, a routing table, and network address translation configuration data. In the embodiment of the present invention, the node data mainly comprises static configuration data related to gateway forwarding, such as VPC (Virtual Private Cloud) network configuration, ACL (Access Control List), RTB (Route Table) and NAT (Network Address Translation) data; this data is mostly read during the forwarding process.
The thread data is managed dynamically along with the processing of messages and mainly comprises session tracking (CT) data. Each thread can determine the storage address of its thread data in memory according to the global data and thereby access the thread data corresponding to it; the thread data is likewise stored in memory.
Optionally, the target operation includes at least one of: removing the GRE message header, encapsulating the GRE message header, forwarding the message, discarding the message, and modifying the source and destination in the message header. After the thread takes the message out of the message queue, it looks up the VPC, ACL, RTB and NAT configuration data under the node and determines, from this configuration data, the target operation to be performed on the message and the next-hop address of the route. For example, overlay-to-underlay traffic requires removing the GRE message header, while underlay-to-overlay traffic requires encapsulating a GRE message header; the ACL determines whether the message is forwarded or discarded; the RTB determines the next-hop route of the message; and the NAT configuration data requires the source in the header to be modified before the message is forwarded. Because this configuration data is stored redundantly on each node, a thread only needs to access the memory units of the node where it is located, avoiding cross-node access, which effectively improves the performance of the gateway service, including throughput (bps), packet rate (pps), connection count and latency.
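The decision step can be sketched as follows. This is an illustrative Python model under assumed shapes: the `direction`, `inner_src` and `inner_dst` fields and the simple dict-shaped ACL/RTB/NAT tables are assumptions for the sketch, not the patent's actual data structures.

```python
def decide(packet: dict, node_data: dict):
    """Determine the target operation(s) and next-hop address for one
    packet from the node-local ACL / RTB / NAT configuration."""
    ops = []
    # ACL: decide whether the packet is forwarded or discarded
    acl = node_data.get("acl", {})
    if acl.get(packet["inner_dst"]) == "deny":
        return ["discard"], None
    # direction decides GRE decapsulation vs encapsulation
    if packet.get("direction") == "overlay_to_underlay":
        ops.append("strip_gre_header")
    elif packet.get("direction") == "underlay_to_overlay":
        ops.append("encap_gre_header")
    # NAT: rewrite the source address before forwarding, if configured
    nat = node_data.get("nat", {})
    if packet["inner_src"] in nat:
        ops.append("rewrite_src")
        packet["inner_src"] = nat[packet["inner_src"]]
    ops.append("forward")
    # RTB: the next-hop route for the inner destination
    next_hop = node_data.get("rtb", {}).get(packet["inner_dst"])
    return ops, next_hop
```

Since `node_data` here is the local node's redundant copy, every lookup in this function stays within the thread's own NUMA node.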
It should be noted that underlay refers to the underlying bearer network, in the present invention the physical network, that is, the communication network between cloud physical machine nodes; overlay refers to the logical service network above the bearer network, in the present invention the on-cloud network, that is, the logical network between virtual cloud hosts inside a VPC or between VPCs.
And 103, the target thread executes the target operation on the message and records thread data generated by executing the target operation.
In the message processing process, the thread needs to record thread data, such as CT data. The thread data is maintained dynamically along with the content of the processed messages and is accessed and modified only during message processing; it is evenly dispersed across the different threads, and each thread only concerns itself with the data generated by the session messages it has itself processed.
While processing a message according to the node data, the thread records the thread data generated by that single message, identified with the five-tuple information as the key; this thread data is then created and destroyed dynamically along with the processing of messages. In step 101, messages are hashed evenly to the message queues according to the five-tuple information in the header, so the thread data is likewise dispersed evenly along with the five-tuple; each thread only records the thread data generated by the messages forwarded to it, and the thread data is thus partitioned among the threads.
For example, after TCP packets are received by the network card, they are evenly dispersed to different threads for processing. While processing a single packet, a thread records the session state of the TCP connection according to the progress of the connection (i.e. the intermediate states of the three-way handshake and the four-way teardown), and this state data is only accessed within that single thread. When the client and the server complete the three-way handshake, the thread creates the session tracking data and stores it in memory; when the client and the server complete the four-way teardown, the thread destroys the session tracking data.
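This per-thread session life cycle can be sketched as below; the Python class and method names are assumptions for illustration, and a real gateway would track every intermediate TCP state rather than only these two transitions.

```python
class SessionTracker:
    """Per-thread session-tracking (CT) table keyed by the five-tuple.
    An entry is created when the TCP three-way handshake completes and
    destroyed when the four-way teardown completes."""

    def __init__(self):
        self.table = {}  # five-tuple -> session state

    def on_handshake_complete(self, five_tuple: tuple) -> None:
        # client and server finished the three-way handshake
        self.table[five_tuple] = {"state": "ESTABLISHED"}

    def on_teardown_complete(self, five_tuple: tuple) -> None:
        # client and server finished the four-way teardown
        self.table.pop(five_tuple, None)
```

Because the five-tuple hash pins a session to one thread, each thread owns one such table exclusively and no locking between threads is needed.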
And 104, transmitting the message and the next hop address to a network card transmitting end.
As shown in fig. 2, after the thread processes the message, the processed message and the next hop address are sent to the PMD (TX) at the network card transmitting end, and the network card transmitting end sends the processed message and the next hop address to a switch or a router, etc.
From the various embodiments described above, it can be seen that the embodiment of the present invention adopts the technical means of taking a message out of a message queue with a target thread and looking up the node data under the target node where the target thread is located according to the message parameters carried in the message, so as to determine from the node data the target operation to be executed on the message and the next-hop address of the route, thereby solving the technical problem in the prior art that the performance improvement of gateway services is limited. According to the embodiment of the invention, the memory data is divided into global data, node data and thread data; the global data is globally unique, the node data is redundant among the nodes, the thread data is partitioned among the threads, and each thread only accesses the node data of the node where its bound CPU core is located, so that cross-node memory access is avoided and the gateway performance is effectively improved.
Fig. 3 is a flow chart of a message forwarding method according to one exemplary embodiment of the present invention. As a further embodiment of the present invention, as shown in fig. 3, the message forwarding method is applied to a CPU, and may include the following steps:
step 301, determining the storage address of the node data in the memory under each node according to the global data of the CPU.
Step 302, synchronously modifying the node data under each node according to the storage address of the node data under each node in the memory.
If the node data needs to be changed, the node data of all the nodes can be found through the global data and then modified in sequence, ensuring that the content of all node-data copies stays consistent. The node data is accessed during message processing and is stored redundantly under each node; once the node data changes, it must be modified synchronously under all nodes so that the node data under every node remains identical.
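A minimal sketch of this synchronized update, assuming the redundant copies are reachable through the global data as a list of per-node dicts (an illustrative shape, not the patent's memory layout):

```python
import copy

def update_node_data(node_copies: list, key: str, value) -> None:
    """Apply one configuration change to every node's redundant copy in
    sequence, so all copies stay identical. Each node receives its own
    deep copy of the value, since in the real gateway the copies live in
    different nodes' local memories."""
    for node_copy in node_copies:
        node_copy[key] = copy.deepcopy(value)
```

Walking the copies in sequence is what the description above calls synchronous modification: after the loop, a thread on any node reads the same configuration from its local copy.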
Step 303, receiving a message sent by a network card receiving end, and forwarding the message to a message queue of a target thread.
And 304, the target thread takes the message out of the message queue, and searches node data under a target node where the target thread is located according to the message parameter carried in the message, so as to determine a target operation executed on the message and a next hop address of a route according to the node data.
Step 305, the target thread executes the target operation on the message, and records thread data generated by executing the target operation.
And step 306, the message and the next hop address are sent to a network card sending end.
From the viewpoint of hierarchical management of run-time memory data, the embodiment of the invention divides the memory data of the running gateway service into global data, node data and thread data, and each thread only needs to access the memory of the node where its bound CPU core is located, which reduces cross-node data access, balances thread resources and further improves the performance of the gateway service. The storage address of the global data in memory is fixed; the global data comprises the context environment data of the running gateway service, the storage address in memory of the node data under each node, and the storage address in memory of the thread data of each thread. The node data is distributed redundantly on each node and is only accessed by the threads on that node; it cannot be accessed across nodes. Therefore, the node data serves CPU access within its node, and a CPU cannot directly access cross-node data. If cross-node data must be accessed, it has to be reached indirectly by dereferencing, step by step, the pointers in the global data.
The embodiment of the invention has the following beneficial effects. The application scenarios based on horizontal expansion of nodes and threads mainly include: 1) server hardware upgrades, such as increasing the number of CPU cores or NUMA nodes; 2) server migration in a localized environment, e.g. migrating from x86-architecture servers to ARM-architecture servers. The configuration is modified according to the number of nodes and CPU cores, which ensures that the hardware performance is fully exploited after the upgrade or migration. In addition, managing the run-time memory data from the angle of data classification makes it convenient to expand into a stateless service cluster; the application scenario of a stateless service covers the case where a single machine cannot meet the performance requirements of the actual scenario. The number of servers can simply be increased, and the stateless service is agnostic to the number of machines in the cluster.
In addition, in the embodiment of the present invention, the specific implementation of the message forwarding method is described in detail in the foregoing message forwarding method, so that the description is not repeated here.
Fig. 4 is a flow chart of a message forwarding method according to another exemplary embodiment of the present invention. As another embodiment of the present invention, as shown in fig. 4, the message forwarding method is applied to a CPU, and may include the following steps:
Step 401, receiving a message sent by a network card receiving end.
And step 402, mapping out a target thread through a hash algorithm according to the five-tuple information carried in the message, so as to forward the message to a message queue of the target thread.
The embodiment of the invention uses a five-tuple hash algorithm to evenly distribute received messages across different threads for processing. This keeps the thread resources balanced, and each thread can quickly access the memory of the node on which its bound CPU core resides. Specifically, based on five fields of the GRE message, namely the outer-layer source address, outer-layer destination address, inner-layer source address, inner-layer destination address and inner-layer protocol type, a 32-bit check code is computed with the CRC32 algorithm; the check code is then treated as an integer value and mapped to a thread according to the number of threads.
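As an illustrative sketch of the mapping described above (the function name `select_thread` and the field encoding are assumptions, not taken from the embodiment):

```python
import zlib

def select_thread(outer_src: str, outer_dst: str, inner_src: str,
                  inner_dst: str, inner_proto: int, num_threads: int) -> int:
    # Compute a 32-bit check code over the five fields with CRC32, then
    # map the integer value onto a thread index by the thread count.
    # Joining the fields with '|' is an illustrative encoding choice.
    key = "|".join([outer_src, outer_dst, inner_src, inner_dst,
                    str(inner_proto)]).encode()
    checksum = zlib.crc32(key) & 0xFFFFFFFF  # 32-bit check code
    return checksum % num_threads
```

Because the mapping is deterministic, every message of one session lands on the same thread, which is what lets each thread keep session state privately.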
Step 403, the target thread takes the message out of the message queue, determines, according to the global data of the CPU, the in-memory storage address of the node data under the target node where the target thread is located, and looks up the node data in memory by that address.
The global data is uniquely identified by global_ctx; all in-memory data is managed under it, and any data item can be accessed directly or indirectly through global_ctx. The storage address of the global data in memory is fixed, and the global data comprises the context data of the running gateway service, the in-memory storage address of the node data under each node, and the in-memory storage address of the thread data of each thread. In addition, the node data is redundantly replicated on every node, is accessed only by the threads on that node, and is never accessed across nodes. In other words, node data serves the CPU cores under its own node, and a CPU core cannot directly access data on another node. If cross-node data must be accessed, it is reached indirectly, by dereferencing the pointers held in the global data step by step.
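A minimal sketch of this three-level layout, with Python objects standing in for the stored memory addresses (the names `GlobalCtx`, `NodeData`, `ThreadData` and `lookup_next_hop` are illustrative assumptions):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class NodeData:
    # Redundant per-node copy of forwarding configuration (routes, ACLs, ...)
    routes: Dict[str, str] = field(default_factory=dict)

@dataclass
class ThreadData:
    # Per-thread session state, touched only by the owning thread
    sessions: Dict[str, int] = field(default_factory=dict)

@dataclass
class GlobalCtx:
    # Fixed root (global_ctx): holds references to each node's data and
    # each thread's data, playing the role of the stored addresses
    node_data: List[NodeData]
    thread_data: List[ThreadData]

def lookup_next_hop(ctx: GlobalCtx, node_id: int, dst: str) -> str:
    # A thread reaches its own node's copy through global_ctx; data on
    # another node is reachable only through the same step-by-step
    # dereference, never by direct access.
    return ctx.node_data[node_id].routes[dst]
```

Keeping every copy of the node data identical is what makes it safe for a thread to consult only its local copy.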
Step 404, the target thread determines, according to the node data, the target operation to be executed on the message and the next hop address of the route.
Step 405, the target thread executes the target operation on the message and records the thread data generated by executing the target operation.
During message processing, the thread records thread data, such as connection-tracking (CT) data. The thread data is maintained dynamically as messages are processed and is accessed and modified only during message processing. Because sessions are evenly spread across the threads, each thread is concerned only with the data generated by the session messages it processes itself.
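For illustration only (the record layout and the name `record_ct` are assumptions), the per-thread CT data can be sketched as a dictionary owned by a single thread:

```python
def record_ct(thread_data: dict, session_key: str) -> dict:
    # Record connection-tracking (CT) state for a session. Because each
    # session is hashed to exactly one thread, only the owning thread
    # ever touches this dictionary, so no locking is required.
    entry = thread_data.setdefault(session_key, {"packets": 0})
    entry["packets"] += 1
    return entry
```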
Step 406, the message and the next hop address are sent to the network card sending end.
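Steps 403 to 406 together form the per-thread worker loop. A simplified sketch under assumed names (`worker_loop` is illustrative, and `send` stands in for the network-card transmit call):

```python
import queue

def worker_loop(msg_queue: "queue.Queue", node_routes: dict,
                thread_data: dict, send) -> None:
    # Dequeue (step 403), resolve the next hop from this node's data
    # (step 404), record per-thread state (step 405), and hand the
    # message to the NIC send side (step 406). None is a stop sentinel
    # used only by this sketch.
    while True:
        msg = msg_queue.get()
        if msg is None:
            break
        next_hop = node_routes[msg["dst"]]
        thread_data[msg["dst"]] = thread_data.get(msg["dst"], 0) + 1
        send(msg, next_hop)
```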
According to this embodiment of the invention, the in-memory data of the running gateway service is managed hierarchically as global data, node data and thread data; each thread only accesses the memory of the node on which its bound CPU core resides, which reduces cross-node data access, balances thread resources, and thereby improves the performance of the gateway service.
In addition, in another embodiment of the present invention, the specific implementation of the message forwarding method has been described in detail above, so the description is not repeated here.
Fig. 5 is a schematic diagram of a message forwarding device according to an embodiment of the present invention. As shown in fig. 5, the message forwarding device 500 is disposed on a CPU and includes a receiving module 501, a thread module 502 and a sending module 503. The receiving module 501 is configured to receive a message sent by a network card receiving end and forward the message to a message queue of a target thread. The thread module 502 is configured to take the message out of the message queue through the target thread, and to look up, according to the message parameters carried in the message, the node data under the target node where the target thread is located, so as to determine, according to the node data, the target operation to be executed on the message and the next hop address of the route; and to execute the target operation on the message through the target thread and record the thread data generated by executing the target operation. The sending module 503 is configured to send the message and the next hop address to a network card sending end.
Optionally, the thread module 502 is further configured to:
determining a storage address in a memory of node data under a target node where the target thread is located according to global data of the CPU by the target thread; the global data comprises context environment data, storage addresses of node data under each node in a memory and storage addresses of thread data of each thread in the memory;
And searching the node data from the memory by the target thread according to the storage address.
Optionally, the CPU integrates a plurality of cores, the cores are grouped into different nodes, each core is bound with each thread one by one, and node data under each node is the same.
Optionally, the apparatus further comprises a configuration module for:
determining the storage address of the node data under each node in the memory according to the global data of the CPU;
and synchronously modifying the node data under each node according to the storage address of the node data under each node in the memory.
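The configuration module's synchronized update can be sketched as follows, with plain dictionaries standing in for the per-node memory regions (`update_node_data` is an assumed name):

```python
def update_node_data(per_node_copies: list, key: str, value) -> None:
    # Walk the storage addresses of every node's redundant copy (here,
    # the list of per-node dictionaries) and apply the same change to
    # each, so the copies stay identical across nodes.
    for node_copy in per_node_copies:
        node_copy[key] = value
```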
Optionally, the receiving module 501 is further configured to:
mapping out a target thread through a hash algorithm according to the five-tuple information carried in the message, so as to forward the message to a message queue of the target thread; the five-tuple information comprises an outer layer source address, an outer layer destination address, an inner layer source address, an inner layer destination address and an inner layer protocol type.
Optionally, the node data includes virtual private cloud network configuration data, an access control list, a routing table, and network address translation configuration data.
Optionally, the target operation includes at least one of:
removing the GRE message header, packaging the GRE message header, forwarding the message, discarding the message, and modifying the source and destination in the message header.
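Two of these operations, removing and adding a GRE header, can be sketched for the minimal four-byte base header; real traffic may carry the optional checksum, key or sequence fields, which this sketch deliberately rejects:

```python
import struct

GRE_PROTO_IPV4 = 0x0800  # EtherType of the encapsulated payload

def add_gre_header(payload: bytes, proto: int = GRE_PROTO_IPV4) -> bytes:
    # Encapsulate: prepend the 4-byte base GRE header
    # (flags/version word = 0, followed by the protocol type).
    return struct.pack("!HH", 0, proto) + payload

def strip_gre_header(packet: bytes) -> bytes:
    # Decapsulate: drop the 4-byte base header after checking that the
    # optional checksum (0x8000), key (0x2000) and sequence (0x1000)
    # bits are clear, since this sketch does not parse those fields.
    flags_ver, _proto = struct.unpack("!HH", packet[:4])
    if flags_ver & 0xB000:
        raise ValueError("optional GRE fields not handled in this sketch")
    return packet[4:]
```

A decapsulation followed by an encapsulation with the same protocol type is a round trip, which is a convenient sanity check.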
The specific implementation of the message forwarding device of the present invention has been described in detail in the foregoing message forwarding method, so the description is not repeated here.
Fig. 6 illustrates an exemplary system architecture 600 in which a message forwarding method or message forwarding apparatus of embodiments of the present invention may be applied.
As shown in fig. 6, the system architecture 600 may include terminal devices 601, 602, 603, a network 604, and a server 605. The network 604 is used as a medium to provide communication links between the terminal devices 601, 602, 603 and the server 605. The network 604 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 605 via the network 604 using the terminal devices 601, 602, 603 to receive or send messages, etc. Various communication client applications such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only) may be installed on the terminal devices 601, 602, 603.
The terminal devices 601, 602, 603 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 605 may be a server providing various services, such as a background management server (by way of example only) that provides support for shopping websites browsed by users with the terminal devices 601, 602, 603. The background management server can analyze received data, such as product information query requests, and feed the processing results back to the terminal devices.
It should be noted that, the message forwarding method provided in the embodiment of the present invention is generally executed by the server 605, and accordingly, the message forwarding device is generally disposed in the server 605.
It should be understood that the number of terminal devices, networks and servers in fig. 6 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 7, a schematic diagram of a computer system 700 suitable for implementing an embodiment of the present invention is illustrated. The terminal device shown in fig. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the system 700. The CPU 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 708 including a hard disk or the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read therefrom is installed into the storage section 708 as necessary.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 709, and/or installed from the removable medium 711. The above-described functions defined in the system of the present invention are performed when the computer program is executed by the Central Processing Unit (CPU) 701.
The computer readable medium shown in the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer programs according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described modules may also be provided in a processor, which may, for example, be described as: a processor comprising a receiving module, a thread module, and a sending module. In some cases the names of these modules do not constitute a limitation on the modules themselves.
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist alone without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, implement the following method: receiving a message sent by a network card receiving end, and forwarding the message to a message queue of a target thread; the target thread takes the message out of the message queue, and searches node data under a target node where the target thread is located according to message parameters carried in the message, so as to determine, according to the node data, a target operation to be executed on the message and a next hop address of a route; the target thread executes the target operation on the message and records thread data generated by executing the target operation; and sending the message and the next hop address to a network card sending end.
As a further aspect, embodiments of the present invention also provide a computer program product comprising a computer program which, when executed by a processor, implements the method according to any of the above embodiments.
According to the technical scheme of the embodiment of the invention, the target thread takes the message out of the message queue and looks up the node data under the target node where the target thread is located according to the message parameters carried in the message, so as to determine, according to the node data, the target operation to be executed on the message and the next hop address of the route. This solves the technical problem in the prior art that the performance of the gateway service can be improved only to a limited extent. In the embodiment of the invention, the memory data is divided into global data, node data and thread data: the global data is globally unique, the node data is replicated redundantly across nodes, and the thread data is partitioned among threads. A thread accesses only the node data of the node where its bound CPU core is located, so cross-node memory access is avoided and the gateway performance is effectively improved.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made depending on design requirements and other factors. Any modification, equivalent substitution or improvement made within the spirit and principles of the present invention shall be included in the scope of the present invention.

Claims (17)

1. A message forwarding method, characterized in that the method is applied to a CPU and comprises the following steps:
receiving a message sent by a network card receiving end, and forwarding the message to a message queue of a target thread;
the target thread takes out the message from the message queue, and searches node data under a target node where the target thread is located according to message parameters carried in the message, so that target operation executed on the message and a next hop address of a route are determined according to the node data;
the target thread executes the target operation on the message and records thread data generated by executing the target operation;
and sending the message and the next hop address to a network card sending end.
2. The method of claim 1, wherein searching node data under a target node where the target thread is located according to a message parameter carried in the message comprises:
determining a storage address of node data in a memory under a target node where the target thread is located according to the global data of the CPU; the global data comprises context environment data, storage addresses of node data under each node in a memory and storage addresses of thread data of each thread in the memory;
And searching the node data from the memory according to the storage address.
3. The method of claim 2, wherein a plurality of cores are integrated on the CPU, the plurality of cores are grouped into different nodes, each core is bound with each thread one by one, and node data under the respective nodes is the same.
4. The method of claim 3, wherein receiving a message sent by a receiving end of a network card, and before forwarding the message to a message queue of a target thread, further comprises:
determining the storage address of the node data under each node in the memory according to the global data of the CPU;
and synchronously modifying the node data under each node according to the storage address of the node data under each node in the memory.
5. The method of claim 1, wherein forwarding the message to the message queue of the target thread comprises:
mapping out a target thread through a hash algorithm according to the five-tuple information carried in the message, so as to forward the message to a message queue of the target thread; the five-tuple information comprises an outer layer source address, an outer layer destination address, an inner layer source address, an inner layer destination address and an inner layer protocol type.
6. The method of claim 1, wherein the node data comprises virtual private cloud network configuration data, access control lists, routing tables, network address translation configuration data.
7. The method of claim 6, wherein the target operation comprises at least one of:
removing the GRE message header, packaging the GRE message header, forwarding the message, discarding the message, and modifying the source and destination in the message header.
8. A message forwarding device, characterized in that the device is disposed on a CPU and comprises:
the receiving module is used for receiving the message sent by the network card receiving end and forwarding the message to a message queue of the target thread;
the thread module is used for taking the message out of the message queue through the target thread, searching node data under a target node where the target thread is located according to message parameters carried in the message, and determining, according to the node data, a target operation to be executed on the message and a next hop address of a route; and executing the target operation on the message through the target thread and recording thread data generated by executing the target operation;
and the sending module is used for sending the message and the next hop address to a network card sending end.
9. The apparatus of claim 8, wherein the thread module is further configured to:
determining a storage address in a memory of node data under a target node where the target thread is located according to global data of the CPU by the target thread; the global data comprises context environment data, storage addresses of node data under each node in a memory and storage addresses of thread data of each thread in the memory;
and searching the node data from the memory by the target thread according to the storage address.
10. The apparatus of claim 9, wherein the CPU has integrated thereon a plurality of cores, the plurality of cores being grouped into different nodes, each core being bound one-to-one to each thread, node data under the respective nodes being identical.
11. The apparatus of claim 10, further comprising a configuration module configured to:
determining the storage address of the node data under each node in the memory according to the global data of the CPU;
and synchronously modifying the node data under each node according to the storage address of the node data under each node in the memory.
12. The apparatus of claim 8, wherein the receiving module is further configured to:
mapping out a target thread through a hash algorithm according to the five-tuple information carried in the message, so as to forward the message to a message queue of the target thread; the five-tuple information comprises an outer layer source address, an outer layer destination address, an inner layer source address, an inner layer destination address and an inner layer protocol type.
13. The apparatus of claim 8, wherein the node data comprises virtual private cloud network configuration data, access control lists, routing tables, network address translation configuration data.
14. The apparatus of claim 13, wherein the target operation comprises at least one of:
removing the GRE message header, packaging the GRE message header, forwarding the message, discarding the message, and modifying the source and destination in the message header.
15. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
16. A computer readable medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-7.
CN202311110002.1A 2023-08-30 2023-08-30 Message forwarding method, device, electronic equipment and computer readable medium Pending CN117194017A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311110002.1A CN117194017A (en) 2023-08-30 2023-08-30 Message forwarding method, device, electronic equipment and computer readable medium


Publications (1)

Publication Number Publication Date
CN117194017A true CN117194017A (en) 2023-12-08

Family

ID=89002678

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311110002.1A Pending CN117194017A (en) 2023-08-30 2023-08-30 Message forwarding method, device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN117194017A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination