CN114500470A - Data packet processing method and device

Publication number: CN114500470A
Application number: CN202111643710.2A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 谢金壮, 肖玮勇, 黄永远, 莫琛, 户才来, 罗印威
Current assignee: Tianyi Cloud Technology Co Ltd
Original assignee: Tianyi Cloud Technology Co Ltd
Application filed by: Tianyi Cloud Technology Co Ltd
Priority date / Filing date: 2021-12-29
Publication date: 2022-05-13
Prior art keywords: data packet, address, CPU, client, queue
Abstract

The invention provides a data packet processing method and apparatus. The method comprises: a first CPU reads, in a polling manner, a data packet from a client in a queue associated with the first CPU, modifies the source IP address of the data packet from the IP address of the client to a local IP address of the first CPU, and sends the modified data packet to the service system that the client requests to access. In this way, each CPU manages only the sessions it processes and there are no lock conflicts between different CPUs, so packet processing efficiency can be improved.

Description

Data packet processing method and device
Technical Field
The present invention relates to the field of network technologies and security, and in particular, to a method and an apparatus for processing a data packet.
Background
The access gateway system described here was developed as part of a cloud computer project. It is mainly used to meet the requirement of secure access from cloud computer clients to the cloud service system and, through generalization, to let different cloud computer clients access the cloud service system quickly.
The cloud computer client mainly transmits images, files, and user operation instructions to the cloud service system; the data volume is large and the real-time requirements on interaction are high. A traditional gateway system is implemented mainly at the application layer: after the client connects to the gateway system and completes authentication, the data uploaded by the client is cached in local memory and then forwarded to the corresponding service system. This implementation uses the traditional, interrupt-driven packet processing model: after the network card driver receives a data packet, it notifies the Central Processing Unit (CPU) through an interrupt, and the CPU then copies the data and hands it to the protocol stack. When the data volume is large, this model generates a large number of CPU interrupts and leaves the CPU unable to run other programs.
The major factors by which the traditional packet processing model creates a network I/O bottleneck include:
1. In the traditional packet send/receive model, hard interrupts are used for notification; each hard interrupt consumes about 100 microseconds, not counting the cache misses (Cache Miss) caused by the resulting context switches.
2. Data must be copied from kernel mode to user mode, and global-lock contention brings significant CPU consumption.
3. Both sending and receiving packets incur system-call overhead.
4. The kernel works across multiple cores, and the bus locking and memory barriers this requires cause performance loss that cannot be avoided.
Therefore, a new packet processing method is needed to overcome the above problems.
Disclosure of Invention
The invention provides a data packet processing method and a data packet processing device, which are used for improving the data packet processing efficiency.
In a first aspect, the present invention provides a data packet processing method, including: the first CPU reads a data packet from a client in a queue associated with the first CPU in a polling mode; modifying the source IP address of the data packet from the IP address of the client to the local IP address of the first CPU; and sending the modified data packet to a service system which the client requests to access.
Therefore, with this method, each CPU manages only the sessions it processes and there are no lock conflicts between different CPUs, so the time spent waiting on global locks is greatly reduced and packet processing efficiency can be improved.
In one possible design, the method further comprises: saving the session information of the client in a connection pool.
In one possible design, the method further comprises: receiving a backhaul data packet from the service system, wherein the destination IP address of the backhaul data packet is the local IP address of the first CPU; querying, in the connection pool, the session information corresponding to the backhaul data packet; when the session information corresponding to the backhaul data packet is the session information of the client, modifying the destination IP address of the backhaul data packet from the local IP address of the first CPU to the IP address of the client; and sending the modified backhaul data packet to the client.
In one possible design, the method further comprises: before the source IP address of the data packet is modified to the local IP address of the first CPU, performing, by the first CPU, handshake and security authentication procedures with the client corresponding to the data packet.
In a second aspect, the present invention provides a data packet processing method, including: the gateway system respectively configures at least one local IP address and a queue for a plurality of CPUs; receiving a data packet from a client; determining a queue to which the data packet belongs, and caching the data packet to the queue to which the data packet belongs, wherein the queue to which the data packet belongs is associated with a first CPU, and the first CPU is one of the CPUs.
In one possible design, the method further comprises: publishing, by the gateway system, a virtual IP address, wherein the virtual IP address is used by the client to access the service system.
In one possible design, the method further comprises: receiving a backhaul data packet from the service system, and sending the backhaul data packet to the first CPU when the destination IP address of the backhaul data packet is a local IP address of the first CPU.
In one possible design, the method further comprises: when the gateway system establishes a connection with the client, verifying the messages sent by the client at least twice using a preset algorithm.
In a third aspect, the present application further provides an apparatus that can perform the methods designed above. The apparatus may be a chip or a circuit capable of performing the functions corresponding to the methods, or a device comprising such a chip or circuit.
In one possible implementation, the apparatus includes: a memory for storing computer executable program code; and a processor coupled with the memory. Wherein the program code stored in the memory comprises instructions which, when executed by the processor, cause the apparatus or a device in which the apparatus is installed to perform the method of any of the above possible designs.
Wherein the apparatus may further comprise a communication interface, which may be a transceiver, or, if the apparatus is a chip or a circuit, an input/output interface of the chip, such as an input/output pin or the like.
In one possible embodiment, the device comprises corresponding functional units for carrying out the steps of the above method. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more units corresponding to the above functions.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program for performing the method of any one of the above possible designs when the computer program runs on an apparatus.
In addition, for technical effects brought by any one implementation manner of the third aspect to the fourth aspect, reference may be made to technical effects brought by different implementation manners of the first aspect, and details are not described here.
Drawings
Fig. 1 is a diagram of a gateway system architecture provided by an embodiment of the present invention;
fig. 2 is a flowchart illustrating an overview of a packet processing method according to an embodiment of the present invention;
fig. 3 is a flow chart of a client access service system according to an embodiment of the present invention;
fig. 4 is a schematic diagram illustrating a process of establishing a connection and a security authentication between a client and a gateway system according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an apparatus according to an embodiment of the present invention;
fig. 6 is a second schematic structural diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The application scenario described in the embodiment of the present invention is for more clearly illustrating the technical solution of the embodiment of the present invention, and does not form a limitation on the technical solution provided in the embodiment of the present invention, and it can be known by a person skilled in the art that with the occurrence of a new application scenario, the technical solution provided in the embodiment of the present invention is also applicable to similar technical problems. In the description of the present invention, the term "plurality" means two or more unless otherwise specified.
The gateway system provided by the invention adopts the Data Plane Development Kit (DPDK). Compared with the traditional packet processing model, DPDK processes packets by polling: DPDK reloads the network card driver, and the driver does not interrupt the CPU after receiving a data packet but stores the packet into memory through zero-copy technology; the application-layer program can then read the packet directly from memory through the interfaces provided by DPDK. This processing model saves CPU interrupt time and memory copy time and gives the application layer a simple and efficient way to process packets, which makes network application development more convenient.
DPDK packet transmission and reception has the following advantages: it reduces the number of CPU interrupts; it reduces the number of memory copies; it bypasses the Linux kernel protocol stack and enters a user-space protocol stack, giving the user control of the protocol stack so that it can be customized to reduce complexity; and it uses huge-page memory, which reduces cache misses.
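As an illustration of this polling model, the following minimal C sketch, which assumes a DPDK 18.11 environment whose EAL, port, and receive queue have already been initialized, reads bursts of packets directly from one queue without any interrupt; BURST_SIZE and the commented-out process_packet() are illustrative placeholders rather than parts of the patented gateway.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32   /* illustrative burst size */

/* Poll one receive queue of one port forever; no interrupt is involved. */
static void poll_rx_queue(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *pkts[BURST_SIZE];

    for (;;) {
        /* rte_eth_rx_burst() returns immediately with 0..BURST_SIZE packets. */
        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SIZE);

        for (uint16_t i = 0; i < nb_rx; i++) {
            /* process_packet(pkts[i]);  hand the frame to the user-space stack */
            rte_pktmbuf_free(pkts[i]);  /* release the mbuf after processing */
        }
    }
}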
To improve the universality and extensibility of the gateway system, the invention adopts a general forwarding framework in which users can define forwarding rules themselves; compared with a traditional application gateway, the gateway system is thus further decoupled from the service functions. Meanwhile, the system uses the Border Gateway Protocol (BGP) to advertise the gateways' Virtual IP (VIP) address routes as equal-cost routes, enabling multi-gateway cluster deployment and improving the reliability of the system and the scalability of its throughput.
In the embodiment of the present invention, the gateway system serves as the "gate" between the client and the cloud service system. To satisfy service scenarios with high real-time interaction requirements and high data throughput (such as cloud computer services and video transmission), the following factors need to be considered:
The invention adopts DPDK technology and achieves high-performance forwarding of service traffic by passing the traffic received by the network card directly to user space in a polling manner and implementing a lightweight TCP/IP protocol stack in user space.
The gateway system of the invention uses a 25G high-performance network card together with the DPDK software package, which greatly improves network card utilization. The system is deployed with load balancing, so if system throughput is insufficient, capacity can be expanded quickly to meet the throughput requirements of different service systems.
For security protection, the gateway system needs to perform security authentication on accessing clients and protect the back-end service system, preventing the service system from being brought down by traffic attacks (syn flood). The invention uses the BGP protocol to advertise the gateway's VIP address routes as equal-cost routes, enabling multi-gateway cluster deployment so that a failure of a single gateway node does not affect the service system.
The gateway system of the application mainly comprises a data plane and a control plane; the overall architecture is shown in fig. 1. The data plane mainly implements forwarding of client data, session management, security authentication, traffic statistics, and other functions, and mainly comprises the following modules:
(1) Network device layer: uses DPDK to implement network card transmission and reception, port aggregation, VLAN, flow control, and other functions.
(2) IP protocol stack: a lightweight IP protocol stack implemented with reference to the Linux kernel protocol stack, with the kernel stack simplified.
(3) Access gateway layer: mainly implements gateway data forwarding, session management, security authentication, traffic statistics, and other functions.
The control plane mainly implements functions such as issuing and management of gateway configuration and traffic monitoring, and mainly comprises the following modules: a logic control layer; a proxy layer; a Web display layer; and a configuration management layer.
The following describes a data packet processing procedure with reference to fig. 2 by taking a specific flow of accessing a service system by a client as an example.
Step 200: the gateway system configures at least one local IP address and a queue for the plurality of CPUs respectively.
Illustratively, when the gateway system is started, the gateway program is bound to run on each CPU (for example, bound to 16 CPUs) by setting kernel affinity, and each CPU is allocated at least one Local IP (LIP) address. The number of local IP addresses determines the maximum concurrency the gateway system supports; for example, with 16 CPUs and 1 local IP address per CPU, the maximum concurrency is 16 × 1 × 65,000 = 1,040,000 (about 1.04 million) sessions. As shown in fig. 3, the gateway system configures queue 1 for CPU 1, queue 2 for CPU 2, queue 3 for CPU 3, ..., and queue n for CPU n.
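The following minimal C sketch shows one way the binding described above could be represented in the gateway program; the structure and field names are illustrative assumptions, not the patent's actual data layout.

#include <stdint.h>

#define NB_WORKERS    16       /* example: gateway bound to 16 CPU cores   */
#define LIPS_PER_CPU  1        /* one local IP per CPU in this example     */
#define PORTS_PER_LIP 65000    /* usable source ports per local IP         */

struct worker_cfg {
    unsigned lcore_id;   /* CPU core the worker thread is pinned to             */
    uint16_t queue_id;   /* NIC receive/transmit queue owned by this core       */
    uint32_t local_ip;   /* local IP (LIP) used as the rewritten source address */
};

static struct worker_cfg workers[NB_WORKERS];

static void init_workers(uint32_t first_lip)
{
    for (unsigned i = 0; i < NB_WORKERS; i++) {
        workers[i].lcore_id = i;
        workers[i].queue_id = (uint16_t)i;    /* queue i is bound to CPU i    */
        workers[i].local_ip = first_lip + i;  /* one consecutive LIP per CPU  */
    }
}

/* Maximum concurrency = NB_WORKERS * LIPS_PER_CPU * PORTS_PER_LIP
 *                     = 16 * 1 * 65000 = 1,040,000 sessions.                */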
Step 210: the client sends a data packet to the gateway system.
Step 220: the gateway system determines the queue to which the data packet belongs and buffers the data packet in that queue. The queue to which the data packet belongs is associated with a first CPU, and the first CPU is one of the CPUs.
Illustratively, the gateway system also employs quagga to publish a virtual IP address (i.e., VIP) externally, which is used for clients to access the business system.
For example, the VIP address is 115.34.154.1:80, the IP of the client is 100.124.101.3:2468, and the client can access the gateway system by accessing the VIP, and further access the business system.
The gateway system performs a hash calculation on the five-tuple of the packet, determines the queue to which the packet belongs, and buffers the packet in that queue, for example queue 3 shown in fig. 3.
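The patent does not name the hash function used here; the sketch below maps a packet's five-tuple to one of the per-CPU queues with a simple XOR/fold hash purely for illustration. A real deployment could equally rely on the NIC's RSS/Toeplitz hashing.

#include <stdint.h>

struct five_tuple {
    uint32_t src_ip, dst_ip;       /* network-order addresses */
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

/* Map a five-tuple to one of nb_queues per-CPU queues so that all packets of
 * the same flow land in the same queue and are handled by the same CPU. */
static uint16_t select_queue(const struct five_tuple *t, uint16_t nb_queues)
{
    uint32_t h = t->src_ip ^ t->dst_ip ^ t->proto;
    h ^= ((uint32_t)t->src_port << 16) | t->dst_port;
    h ^= h >> 16;                  /* fold the high bits down */
    return (uint16_t)(h % nb_queues);
}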
Step 230: the first CPU processes the data packets in the queue associated with it in a polling manner.
At this point the data packet may already have been processed by the custom lightweight IP protocol stack, so that the content of the Transmission Control Protocol (TCP) data segment has been obtained. Using a lightweight IP protocol stack reduces the complexity of the logic involved.
For example, as shown in FIG. 3, the first CPU is CPU3, the queue associated with CPU3 is queue 3, and at step 200, the gateway system configures or binds queue 3 for CPU 3.
The data packets are directly read from the queue in a polling mode, so that kernel interruption and kernel-to-user space copying can be reduced.
Step 240: the first CPU modifies the source IP address of the data packet from the IP address of the client to the local IP address of the first CPU.
For example, in FIG. 3, the local IP address of the CPU3 is 192.168.1.3.
In addition, the first CPU modifies the destination IP address and port of the data packet to the IP address and port of the service system. For example, in fig. 3, the IP address and port of the service system are 192.168.244.1:8080. The source port uses an unassigned port (e.g., 1001).
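A minimal sketch of the address rewrite in step 240, written against the DPDK 18.11 header types; locating the IPv4 header in the mbuf and rewriting the TCP ports and TCP checksum are omitted, and the function is an illustration rather than the patent's actual implementation.

#include <stdint.h>
#include <rte_ip.h>

/* Rewrite an outbound client packet: the source becomes this CPU's local IP,
 * the destination becomes the service system's IP, and the IPv4 header
 * checksum is refreshed.  Both addresses are expected in network byte order. */
static void rewrite_outbound(struct ipv4_hdr *ip, uint32_t lip_be,
                             uint32_t service_ip_be)
{
    ip->src_addr = lip_be;          /* client IP -> local IP of this CPU */
    ip->dst_addr = service_ip_be;   /* VIP       -> service system IP    */
    ip->hdr_checksum = 0;
    ip->hdr_checksum = rte_ipv4_cksum(ip);
    /* The TCP source/destination ports and the TCP checksum would be
     * rewritten in the same way. */
}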
In addition, before the source IP address of the data packet is modified into the local IP address of the first CPU, the first CPU and the client corresponding to the data packet execute handshaking and security authentication processes. Exemplarily, reference may be made to the following first to fourth stages.
Step 250: and the first CPU sends the modified data packet to a service system which is requested to be accessed by the client.
Illustratively, the first CPU (CPU 3) sends the modified data packet to the service system that the client requests to access and establishes a connection with the service system; the connection pool stores the current session information, that is, the session information of the client.
Therefore, with this method, each CPU manages only the sessions it processes and there are no lock conflicts between different CPUs, so the time spent waiting on global locks is greatly reduced and packet processing efficiency can be improved.
Optionally, the method further includes:
step 260: and the service system sends a backhaul data packet to the gateway system.
Step 270: when the destination IP address of the backhaul data packet is the local IP address of the first CPU, the gateway system sends the backhaul data packet to the first CPU.
Step 280: the first CPU queries the connection pool for the session information corresponding to the backhaul data packet, and when that session information is the session information of the client, modifies the destination IP address of the backhaul data packet from the local IP address of the first CPU to the IP address of the client.
Illustratively, the first CPU performs a hash calculation on the five-tuple of the backhaul data packet and looks up the session information corresponding to the backhaul data packet in the connection pool according to the hash value. When that session information is the session information of the client, the first CPU modifies the destination IP address of the backhaul data packet to the IP address of the client and modifies its destination port to the port of the client. In this way, packets in both directions of a session are processed by the same CPU, and session management is localized to each CPU without lock conflicts.
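A minimal sketch of the per-CPU connection-pool lookup described above; the structure, bucket count, and hash are illustrative assumptions. Because each CPU owns its own table, the lookup takes no lock.

#include <stdint.h>

#define POOL_BUCKETS (1u << 16)   /* illustrative per-CPU connection-pool size */

struct session_key {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

struct session {
    struct session_key key;   /* five-tuple of the backhaul direction  */
    uint32_t client_ip;       /* original client address to restore    */
    uint16_t client_port;     /* original client port to restore       */
    struct session *next;
};

/* One connection pool per CPU: the table is never shared, hence no lock. */
static struct session *conn_pool[POOL_BUCKETS];

static uint32_t key_hash(const struct session_key *k)
{
    uint32_t h = k->src_ip ^ k->dst_ip ^ k->proto;
    h ^= ((uint32_t)k->src_port << 16) | k->dst_port;
    return h ^ (h >> 16);
}

/* Find the session for a backhaul packet; the caller then rewrites the
 * destination IP/port from the local IP back to the stored client address. */
static struct session *lookup_session(const struct session_key *k)
{
    struct session *s = conn_pool[key_hash(k) & (POOL_BUCKETS - 1)];
    for (; s != NULL; s = s->next)
        if (s->key.src_ip == k->src_ip && s->key.dst_ip == k->dst_ip &&
            s->key.src_port == k->src_port && s->key.dst_port == k->dst_port &&
            s->key.proto == k->proto)
            return s;
    return NULL;
}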
The gateway system may buffer the backhaul data packet in the queue associated with the first CPU, and the first CPU may process the backhaul data packets in that queue in a polling manner.
Step 290: and the first CPU sends the modified backhaul data packet to the client.
The establishment of the connection and the security authentication process between the client and the gateway system is another innovation of the system. The whole process is divided into four stages, shown in fig. 4.
The first stage: establishing a connection and preventing traffic attacks (syn flood)
1. The client sends a synchronization (Syn) message to the gateway system. The gateway system replies with a Syn+Ack acknowledgement (Ack) message using a preset algorithm of the TCP layer; the preset algorithm may be the syn-cookie algorithm or another algorithm for preventing traffic attacks.
2. When the Ack message of the client's three-way handshake reaches the gateway system, the gateway system verifies it using the preset algorithm of the TCP layer; if verification fails, the message is discarded, and if it passes, the three-way handshake with the client is completed.
3. The gateway system caches the client's Ack message.
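For illustration, the following is a heavily simplified syn-cookie sketch in C: the cookie is carried in the Syn+Ack's initial sequence number, so the gateway keeps no per-connection state until the final Ack is verified. The hash, the time granularity, and the omission of MSS encoding are simplifying assumptions, not the preset algorithm actually used by the gateway.

#include <stdint.h>

/* Toy stand-in for a keyed cryptographic hash (a real syn-cookie would use
 * something like SipHash over the addresses, ports, secret and time slot). */
static uint32_t cookie_mac(uint32_t cip, uint16_t cport,
                           uint32_t sip, uint16_t sport,
                           uint32_t minute, uint32_t secret)
{
    uint32_t h = cip ^ sip ^ secret ^ minute;
    h ^= ((uint32_t)cport << 16) | sport;
    return h ^ (h >> 13);
}

/* Cookie placed in the Syn+Ack's initial sequence number. */
static uint32_t make_syn_cookie(uint32_t cip, uint16_t cport,
                                uint32_t sip, uint16_t sport,
                                uint32_t minute, uint32_t secret)
{
    return cookie_mac(cip, cport, sip, sport, minute, secret);
}

/* Verify the third-handshake Ack: its acknowledgment number minus one must
 * equal the cookie for the current or the previous time slot; otherwise the
 * message is discarded, exactly as in step 2 above. */
static int check_syn_cookie(uint32_t ack_num, uint32_t cip, uint16_t cport,
                            uint32_t sip, uint16_t sport,
                            uint32_t minute, uint32_t secret)
{
    uint32_t cookie = ack_num - 1;
    return cookie == cookie_mac(cip, cport, sip, sport, minute, secret) ||
           cookie == cookie_mac(cip, cport, sip, sport, minute - 1, secret);
}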
The second stage: security authentication
1. After the three-way handshake is completed, the gateway system sends its public key (public key) to the client.
2. The client generates a string of random numbers (key), encrypts it with the gateway system's public key (public key), and sends the encrypted data to the gateway system.
3. The gateway system decrypts the received data with its private key (private key), symmetrically encrypts the decryption result, and sends the encrypted data to the client.
4. The client decrypts the received data with the random number (key); if the decrypted plaintext is consistent with the data it last sent, it continues to the next step, otherwise authentication is stopped.
5. The client encrypts the authentication information with the random number (key) and sends the encrypted authentication information to the gateway system.
6. After obtaining the authentication information, the gateway system determines the IP address and port of the service system to be connected.
The third stage: establishing a connection between the gateway system and the service system
1. The gateway system sends a Syn message to the service system to start establishing the connection.
2. After receiving the service system's Syn+Ack message, the gateway system sends the client's Ack message cached in the first stage to the service system to complete the three-way handshake.
The fourth stage: data forwarding
When subsequent data packets arrive at the gateway system, they are forwarded directly to the service system over the connection established in the third stage, without being buffered.
Compared with a traditional gateway implementation based on kernel-mode forwarding, the method provided by the embodiments of the application offers significantly better performance: the packet forwarding rate increases from 2 million pps to 8 million pps. Second, the gateway system performs syn-cookie verification in the handshake phase, which effectively blocks syn flood attacks and protects the security of the back-end service system. The gateway system also implements a Transport Layer Security (TLS) security protocol on the TCP layer, and the client uses a different key for each connection, ensuring secure authentication and secure data transmission. Third, the gateway system adopts a general forwarding framework, so any service transmitted over TCP can be accessed through the gateway system, and the gateway system is completely decoupled from the services. Finally, the system uses the BGP protocol to advertise the gateway's VIP address routes as equal-cost routes, enabling multi-gateway cluster deployment and ensuring the reliability and scalability of the gateway.
The gateway system is written in the C language and uses DPDK version 18.11. In deployment it uses a Mellanox MT27710 25G network card, 16 dedicated CPU cores, and 32 GB of huge-page memory, and the gateway's VIP address routes are advertised via the BGP protocol to form equal-cost routes.
The division of units in the embodiments of the present invention is schematic and is only a division by logical function; there may be other division manners in actual implementation. In addition, each functional unit in the embodiments of the present invention may be integrated in one processor, may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
An embodiment of the present invention further provides an apparatus 500, as shown in fig. 5, including: a processing module 510 and a transceiver module 520.
The transceiving module 520 may include a receiving unit and a transmitting unit. The processing module 510 is used for controlling and managing the actions of the apparatus 500. The transceiver module 520 is used to support the communication between the apparatus 500 and other apparatuses. Optionally, the apparatus 500 may further comprise a storage unit for storing program codes and data of the apparatus 500.
Alternatively, the modules in the apparatus 500 may be implemented by software.
Alternatively, the processing module 510 may be a processor or a controller, such as a general-purpose Central Processing Unit (CPU), a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of the embodiments of the application. The processor may also be a combination of computing elements, for example one or more microprocessors, or a combination of a DSP and a microprocessor. The transceiver module 520 may be a communication interface, a transceiver, or a transceiver circuit, where "communication interface" is a general term that in a specific implementation may include multiple interfaces. The storage unit may be a memory.
In one implementation, the processing module 510 is configured to read, in a polling manner, a data packet from a client in a queue associated with the processing module; modifying the source IP address of the data packet from the IP address of the client to the local IP address of the first CPU; the transceiver module 520 is configured to send the modified data packet to the service system that the client requests to access.
In one implementation, the processing module 510 is configured to configure at least one local IP address and a queue for the plurality of CPUs, respectively; a transceiver module 520, configured to receive a data packet from a client; a processing module 510, configured to determine a queue to which the data packet belongs, and cache the data packet in the queue to which the data packet belongs, where the queue to which the data packet belongs is associated with a first CPU, and the first CPU is one of the CPUs.
Another apparatus 600 is provided in the embodiment of the present invention, as shown in fig. 6, including:
a communication interface 601, a memory 602, and a processor 603;
The apparatus 600 communicates with other devices through the communication interface 601, for example to receive and send messages; the memory 602 is used to store program instructions; and the processor 603 is used to call the program instructions stored in the memory 602 and execute the method according to the obtained program.
in one implementation, the processor 603 invokes program instructions stored in the memory 602 to perform: reading a data packet from a client in a queue associated with the client in a polling mode; modifying the source IP address of the data packet from the IP address of the client to the local IP address of the first CPU; and sending the modified data packet to a service system which the client requests to access.
In one implementation, the processor 603 invokes program instructions stored in the memory 602 to perform: respectively configuring at least one local IP address and a queue for a plurality of CPUs; receiving a data packet from a client; determining a queue to which the data packet belongs, and caching the data packet to the queue to which the data packet belongs, wherein the queue to which the data packet belongs is associated with a first CPU, and the first CPU is one of the CPUs.
In the embodiment of the present invention, the specific connection medium among the communication interface 601, the memory 602, and the processor 603 is not limited, for example, a bus, and the bus may be divided into an address bus, a data bus, a control bus, and the like.
In the embodiments of the present invention, the processor may be a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present invention. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in the processor.
In the embodiment of the present invention, the memory may be a nonvolatile memory, such as a Hard Disk Drive (HDD) or a solid-state drive (SSD), and may also be a volatile memory, for example, a random-access memory (RAM). The memory can also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory in embodiments of the present invention may also be circuitry or any other device capable of performing a storage function for storing program instructions and/or data.
Embodiments of the present invention also provide a computer-readable storage medium, which includes program code for causing a computer to perform the steps of the method provided above in the embodiments of the present invention when the program code runs on the computer.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method for processing a data packet, comprising:
a first Central Processing Unit (CPU) reads a data packet from a client in a queue associated with the first CPU in a polling mode;
modifying the source IP address of the data packet from the IP address of the client to the local IP address of the first CPU;
and sending the modified data packet to a service system which the client requests to access.
2. The method of claim 1, further comprising:
and saving the session information of the client in a connection pool.
3. The method of claim 2, further comprising:
receiving a backhaul data packet from the service system, wherein the destination IP address of the backhaul data packet is the local IP address of the first CPU;
inquiring session information corresponding to the backhaul data packet in the connection pool;
when the session information corresponding to the backhaul data packet is the session information of the client, modifying the destination IP address of the backhaul data packet from the local IP address of the first CPU to the IP address of the client;
and sending the modified backhaul data packet to the client.
4. The method of any one of claims 1-3, further comprising:
the first CPU obtains the authentication result of handshake and security authentication executed by the gateway system and the client corresponding to the data packet;
and modifying the source IP address of the data packet into the local IP address of the first CPU according to the authentication result.
5. A method for processing a data packet, comprising:
the gateway system respectively configures at least one local IP address and a queue for a plurality of CPUs;
receiving a data packet from a client;
determining a queue to which the data packet belongs, and caching the data packet to the queue to which the data packet belongs, wherein the queue to which the data packet belongs is associated with a first CPU, and the first CPU is one of the CPUs.
6. The method of claim 5, further comprising:
and the gateway system issues a virtual IP address, and the virtual IP address is used for the client to access the service system.
7. The method of claim 6, further comprising:
receiving a backhaul data packet from the business system;
and when the destination IP address of the backhaul data packet is the local IP address of the first CPU, sending the backhaul data packet to the first CPU.
8. The method of any one of claims 5-7, further comprising:
and when the gateway system establishes connection with the client, verifying the message sent by the client at least twice by adopting a preset algorithm.
9. A packet processing apparatus comprising a processor and an interface circuit, wherein the interface circuit is configured to receive signals from, or transmit signals to, a device other than the apparatus, and the processor is configured to implement the method of any one of claims 1 to 4 or 5 to 8 by means of logic circuitry or by executing code instructions.
10. A computer-readable storage medium having stored thereon computer instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 8.
Priority Applications (1)

Application CN202111643710.2A, filed 2021-12-29 (priority date 2021-12-29): Data packet processing method and device — Pending.

Publications (1)

CN114500470A, published 2022-05-13.

Family

Family ID: 81509039; country: CN.

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1845513A (en) * 2006-05-23 2006-10-11 烽火通信科技股份有限公司 Method for multi service access node access device sharing public network IP address
CN101739380A (en) * 2009-12-11 2010-06-16 中国航空无线电电子研究所 Shared memory architecture-based multiprocessor communication device and method thereof
CN102938718A (en) * 2012-10-19 2013-02-20 中兴通讯股份有限公司 Home gateway and intelligent terminal integrated system and communication method thereof
CN106713185A (en) * 2016-12-06 2017-05-24 瑞斯康达科技发展股份有限公司 Load balancing method and apparatus of multi-core CPU
CN110704211A (en) * 2019-09-29 2020-01-17 烽火通信科技股份有限公司 Method and system for receiving packets across CPUs (central processing units) in multi-core system
CN112887229A (en) * 2021-01-11 2021-06-01 杭州迪普科技股份有限公司 Session information synchronization method and device
CN113010379A (en) * 2021-03-09 2021-06-22 爱瑟福信息科技(上海)有限公司 Electronic equipment monitoring system
CN113507532A (en) * 2021-08-24 2021-10-15 优刻得科技股份有限公司 Method for network address translation, corresponding server, storage medium and electronic device
CN113794646A (en) * 2021-09-13 2021-12-14 国网电子商务有限公司 Monitoring data transmission system and method for energy industry



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination