CN116366582A - Data packet processing method and system based on OVS-DPDK - Google Patents


Info

Publication number
CN116366582A
Authority
CN
China
Prior art keywords
data packet
processing
flow
batch
cache node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310639196.8A
Other languages
Chinese (zh)
Other versions
CN116366582B (en)
Inventor
吴绍华
李易
郑理
邹明
文旭
韩丁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Cloud Technology Co Ltd
Original Assignee
Tianyi Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Cloud Technology Co Ltd filed Critical Tianyi Cloud Technology Co Ltd
Priority to CN202310639196.8A
Publication of CN116366582A
Application granted
Publication of CN116366582B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/90 Buffering arrangements
    • H04L 49/9015 Buffering arrangements for supporting a linked list
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/70 Virtual switches
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/50 Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The disclosure relates to the technical field of cloud computing, and in particular to a data packet processing method and system based on OVS-DPDK. The method of the embodiments of the disclosure comprises the following steps. Step S01: PMD data packets are obtained from a DPDK data path and cached into a batch. Step S02: PMD data packets are obtained from the batch for the flow table processing flow; for a CT table entry that does not yet exist, the flow table processing flow performs one of two operations: when the lock is acquired successfully, the CT table entry is inserted into the CT tracking linked list, and when lock acquisition fails, a pointer to the CT table entry is stored in a CT cache node. Step S03: after all data packets have been fetched, the CT table entry insertion flow of the CT cache nodes is executed; when the lock is acquired successfully, the CT table entry corresponding to the pointer in a CT cache node is inserted into the CT tracking linked list and the CT cache node is deleted; after this flow finishes, the method returns to step S01 to process the next round of PMD data packets. The system of the embodiments of the disclosure is implemented based on the method. The embodiments of the disclosure reduce the serial processing time when data packets are processed in batches and improve the CPS rate and the flow table offloading speed.

Description

Data packet processing method and system based on OVS-DPDK
Technical Field
The embodiment of the disclosure relates to the technical field of cloud computing, in particular to a data packet processing method and system based on OVS-DPDK.
Background
A cloud platform network conventionally processes the data packets of application programs in the kernel space of OVS (Open vSwitch). The Data Plane Development Kit (DPDK) is a library of functions and a set of drivers for fast packet processing. Combining DPDK with OVS makes packet processing efficient and development convenient.
The CT (Connection Tracking) module in the OVS-DPDK architecture uses a global lock to protect the CT tracking linked list, providing data protection when CT flow-table tracking entries (hereinafter collectively referred to as CT table entries) are inserted or deleted. The CT module performs connection tracking: it tracks all logical network connections or sessions so that all data packets that may make up a connection can be associated. Most of the work on the DPDK datapath is handled by Poll Mode Driver (PMD) threads, which also perform tasks such as continuously polling the input ports. Once a PMD thread has received and classified a data packet, it performs the processing actions after classification is complete.
The general flow of the existing flow table processing is as follows: a data packet sent from a network device connected to the OVS is received on the DPDK data path, and the five-tuple information of source/destination IP, source/destination port and protocol number is extracted from the data packet; a table lookup is then performed based on the hash value generated from the five-tuple. When the index value obtained by the lookup is not empty, the flow table does not need to be updated; when the index value obtained by the lookup is empty, the global lock must be acquired and the newly created CT table entry is inserted into the CT tracking linked list. If acquisition of the global lock fails during this period, the thread has to wait for the lock to be released before the lookup/modification operations (including insertion, deletion and so on) can be performed.
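For illustration, a minimal C sketch of this baseline behaviour follows. It is not the OVS source code: the names (ct_entry, ct_list, ct_global_lock, ct_insert_serial) are hypothetical stand-ins, and a POSIX mutex stands in for the CT module's global lock.

    #include <pthread.h>
    #include <stdint.h>

    /* Illustrative connection-tracking entry holding the five-tuple. */
    struct ct_entry {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint8_t  proto;
        struct ct_entry *next;            /* link in the CT tracking linked list */
    };

    static struct ct_entry *ct_list;      /* shared CT tracking linked list */
    static pthread_mutex_t ct_global_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Baseline path: block on the global lock, then insert at the list head.
     * Every other PMD thread that misses in its lookup waits here. */
    static void ct_insert_serial(struct ct_entry *e)
    {
        pthread_mutex_lock(&ct_global_lock);
        e->next = ct_list;
        ct_list = e;
        pthread_mutex_unlock(&ct_global_lock);
    }

The blocking call in this sketch is exactly the serialization point the following paragraphs describe.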
It follows that when multiple threads concurrently collect data and interact with the TCP/IP protocol stack, access must be serialized through locking. This causes the following problems. (1) In the existing OVS, the CT module restricts multiple PMD threads to accessing the same CT tracking linked list during connection establishment, and the global lock on lookup and insertion cannot be removed, so CPS (Connections Per Second) performance cannot improve as expected when the number of PMD threads increases. Furthermore, as the bandwidth of hardware network cards increases further (25 Gbps to 100 Gbps), the PMD allocation policy of OVS-DPDK cannot raise the CPS of the OVS by increasing the number of PMD threads; that is, the packet processing speed cannot match the bandwidth processing speed of the hardware. (2) After one thread acquires the lock, it executes the related logic code; if the execution time is too long, the other threads can only sleep and wait, the serial processing granularity is too large, and CPU resources cannot be fully utilized.
Disclosure of Invention
Embodiments of the present disclosure provide a method, a system, and a computer program product for OVS-DPDK-based packet processing that address one or more of the above-mentioned problems as well as other potential problems.
In order to achieve the above object, the following technical solution is provided:
according to a first aspect of the present disclosure, there is provided a data packet processing method based on OVS-DPDK, including:
step S01, obtaining PMD data packets from a DPDK data path and caching them into a batch;
step S02, obtaining PMD data packets from the batch and performing the flow table processing flow;
wherein, when the flow table processing flow is executed, a table lookup is performed for each PMD data packet in the batch, and when no CT table entry is found in the CT tracking linked list, a CT table entry is created and different operations are performed according to the result of acquiring the global lock: when the global lock is acquired successfully, the CT table entry is inserted into the CT tracking linked list, and when acquisition of the global lock fails, a pointer to the CT table entry is filled into a CT cache node;
step S03, after all data packets in the batch have been fetched, executing the CT table entry insertion flow of the CT cache nodes, and after the CT table entry insertion flow of the CT cache nodes is finished, returning to step S01 to process the next round of PMD data packets;
wherein, when the CT table entry insertion flow of the CT cache nodes is executed and the global lock is acquired successfully, the CT table entry corresponding to the pointer in a CT cache node is inserted into the CT tracking linked list and the CT cache node is deleted.
In the method of the embodiments of the disclosure, when lock acquisition fails during connection establishment, the thread does not wait for the lock to be released; instead, it uses that time to do other work, namely filling the pointer to the CT table entry into a CT cache node, so no time is spent waiting for lock release in this process. After all data packets in the batch have been fetched, the deferred CT table entries are inserted through the CT table entry insertion flow of the CT cache nodes. The embodiments of the disclosure reduce the serial processing time when data packets are processed in batches, improve the CPS rate and the flow table offloading speed, and optimize network performance.
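The core of this scheme can be summarized in a short C sketch. It is a minimal illustration under stated assumptions rather than the actual OVS-DPDK implementation: ct_entry, ct_cache_node, ct_list, ct_global_lock and ct_try_insert_or_cache are hypothetical names, and a POSIX mutex stands in for the CT module's global lock.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative CT table entry (five-tuple plus list linkage). */
    struct ct_entry {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint8_t  proto;
        struct ct_entry *next;
    };

    /* A CT cache node only stores a pointer to an entry awaiting insertion. */
    struct ct_cache_node {
        struct ct_entry *entry;
    };

    static struct ct_entry *ct_list;      /* shared CT tracking linked list */
    static pthread_mutex_t ct_global_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Insert at the head of the CT tracking linked list; caller holds the lock. */
    static void ct_list_insert_locked(struct ct_entry *e)
    {
        e->next = ct_list;
        ct_list = e;
    }

    /* Step S02 fast path: try the global lock without blocking.  On success the
     * entry goes straight into the tracking list; on failure its pointer is
     * parked in the thread-local cache array and the PMD moves on to the next
     * packet.  Returns true if the entry was inserted immediately. */
    static bool ct_try_insert_or_cache(struct ct_entry *e,
                                       struct ct_cache_node *cache, int *nb_cached)
    {
        if (pthread_mutex_trylock(&ct_global_lock) == 0) {
            ct_list_insert_locked(e);
            pthread_mutex_unlock(&ct_global_lock);
            return true;
        }
        cache[(*nb_cached)++].entry = e;  /* no waiting: defer the insertion to step S03 */
        return false;
    }

Step S03 then drains the cache array under the lock once the whole batch has been processed; a matching sketch of that drain follows the CT table entry insertion flow in the detailed description below.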
In some embodiments, the flow table processing flow includes:
determining whether all data packets in the batch have been fetched; if so, ending the flow, and if not, executing the subsequent steps;
querying the CT tracking linked list, and when no CT table entry is found, creating a CT table entry and filling the CT table entry based on the data packet;
acquiring the global lock, and inserting the CT table entry into the CT tracking linked list when the global lock is acquired successfully, or filling the pointer to the CT table entry into a CT cache node when acquisition of the global lock fails;
and fetching data packets from the batch one by one to execute the above flow until all data packets have been fetched.
In some embodiments, the flow table processing flow further includes: when a CT table entry is found, updating the state of the CT table entry, and then fetching the next data packet from the batch to execute the flow table processing flow.
In some embodiments, the process of filling the CT table entry based on the data packet includes: storing the source IP, destination IP, source port, destination port and protocol number of the data packet, together with the state of the CT table entry, into the CT table entry.
In some embodiments, step S02 further includes: determining whether the CT cache nodes are empty; when the CT cache nodes are empty, obtaining PMD data packets from the batch to perform the flow table processing flow; and when the CT cache nodes are not empty, traversing all CT cache nodes on the thread and executing the CT table entry insertion flow of the CT cache nodes.
In some embodiments, the CT table entry insertion flow of the CT cache nodes includes:
obtaining a CT cache node on the thread;
acquiring the global lock, and inserting the CT table entry corresponding to the pointer in the CT cache node into the CT tracking linked list when the global lock is acquired successfully;
deleting the CT cache node;
and traversing all CT cache nodes on the thread in sequence until the CT table entries corresponding to the pointers in all cache nodes have been inserted into the CT tracking linked list and the CT cache nodes have been deleted.
In some embodiments, the CT table entry insertion flow of the CT cache nodes further includes: after the CT table entry insertion flow of the CT cache nodes has been executed, continuing the flow table processing flow.
In some embodiments, the method further includes: creating CT cache nodes after the PMD data packets are obtained in step S01 and before step S02 is performed.
In some embodiments, the number of CT cache nodes equals the maximum number of PMD data packets that can be obtained from the DPDK data path in step S01.
According to a second aspect of the present disclosure, there is provided a data packet processing system based on OVS-DPDK, including:
a data packet acquisition module, configured to obtain PMD data packets from a DPDK data path;
a batch module, configured to cache the PMD data packets obtained by the data packet acquisition module;
a flow table processing module, configured to obtain PMD data packets from the batch and perform the flow table processing flow, wherein, when the flow table processing flow is executed, a table lookup is performed for each PMD data packet in the batch, and when no CT table entry is found in the CT tracking linked list, a CT table entry is created and different operations are performed according to the result of acquiring the global lock: when the global lock is acquired successfully, the CT table entry is inserted into the CT tracking linked list, and when acquisition of the global lock fails, a pointer to the CT table entry is filled into a CT cache node; the flow table processing module triggers the cache insertion module to execute the CT table entry insertion flow of the CT cache nodes after all data packets in the batch have been fetched, and triggers the data packet acquisition module to process the next round of PMD data packets after the CT table entry insertion flow of the CT cache nodes is completed;
and a cache insertion module, configured to, when executing the CT table entry insertion flow of the CT cache nodes and the global lock is acquired successfully, insert the CT table entry corresponding to the pointer in the CT cache node into the CT tracking linked list and delete the CT cache node.
Drawings
The above, as well as additional purposes, features, and advantages of embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. In the accompanying drawings, several embodiments of the present disclosure are shown by way of example, and not by way of limitation.
Fig. 1 shows a flowchart of a packet processing method based on OVS-DPDK according to an embodiment of the present disclosure;
Fig. 2 shows an example flowchart of a packet processing method based on OVS-DPDK according to an embodiment of the present disclosure;
Fig. 3 shows a comparison of the packet processing effect achieved by an OVS-DPDK-based packet processing method according to an embodiment of the present disclosure and by an existing method.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are illustrated in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The term "comprising" and variations thereof as used herein means open ended, i.e., "including but not limited to. The term "or" means "and/or" unless specifically stated otherwise. The term "based on" means "based at least in part on". The terms "one example embodiment" and "one embodiment" mean "at least one example embodiment. The term "another embodiment" means "at least one additional embodiment".
When the existing flow table processing flow fails to acquire the lock during connection establishment, it has to wait for the lock to be released before serially updating the CT tracking linked list. As a result, CPS performance degrades as the number of PMD threads increases; even if the bandwidth of the hardware network card is increased, the packet processing speed cannot match the bandwidth processing speed of the hardware.
Therefore, the embodiments of the disclosure provide a data packet processing method based on OVS-DPDK, which adds a parallel processing mechanism to the creation of CT session tracking in the OVS. When the number of PMD threads is small and lock contention is weak, the original serial processing logic is used; when the number of PMD threads increases and lock contention becomes strong, a thread that fails to acquire the lock does not wait for the lock to be released but processes other logic, caching the newly created table entry in a CT cache node. By increasing the parallel processing time, the waste of CPU computing resources caused by all tasks waiting for the lock to be released is reduced.
Fig. 1 shows a flowchart of a packet processing method based on OVS-DPDK according to an embodiment of the present disclosure. The method comprises the following steps:
step S01, obtaining PMD data packets from a DPDK data path and caching them into a batch;
step S02, obtaining PMD data packets from the batch and performing the flow table processing flow;
wherein, when the flow table processing flow is executed, a table lookup is performed for each PMD data packet in the batch, and when no CT table entry is found in the CT tracking linked list, a CT table entry is created and different operations are performed according to the result of acquiring the global lock: when the global lock is acquired successfully, the CT table entry is inserted into the CT tracking linked list, and when acquisition of the global lock fails, a pointer to the CT table entry is filled into a CT cache node;
step S03, after all data packets in the batch have been fetched, executing the CT table entry insertion flow of the CT cache nodes, and after the CT table entry insertion flow of the CT cache nodes is finished, returning to step S01 to process the next round of PMD data packets;
wherein, when the CT table entry insertion flow of the CT cache nodes is executed and the global lock is acquired successfully, the CT table entry corresponding to the pointer in a CT cache node is inserted into the CT tracking linked list and the CT cache node is deleted.
The method of the embodiments of the disclosure is applied at the network card side: the network card obtains PMD data packets from the DPDK data path and executes the processing flow.
In step S01, data packets are acquired one batch at a time; for example, 32 data packets are acquired in one batch, and these 32 data packets are cached into the batch.
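As a sketch of this step, the batch can be filled with the standard DPDK poll-mode receive call; port and queue initialisation (rte_eal_init, rte_eth_dev_configure and so on) is omitted, and the batch size of 32 simply follows the example above.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BATCH_SIZE 32

    /* Step S01 sketch: pull up to 32 packets from one RX queue into the batch.
     * rte_eth_rx_burst() returns the number of packets actually received. */
    static uint16_t fetch_batch(uint16_t port_id, uint16_t queue_id,
                                struct rte_mbuf *batch[BATCH_SIZE])
    {
        return rte_eth_rx_burst(port_id, queue_id, batch, BATCH_SIZE);
    }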
In step S02, the flow table processing flow is executed sequentially on all data packets in the batch; once the current batch of data packets has been processed, the method enters the next round of acquiring a new batch of data packets and running the flow table processing flow on the new data packets.
In the embodiments of the disclosure, when step S02 is performed, a pre-lookup first determines whether a CT table entry for the current data packet already exists in the CT tracking linked list; if not, a CT table entry is created and the global lock must be acquired to insert it into the CT tracking linked list. In this process, when the global lock is acquired successfully, the CT table entry is inserted into the CT tracking linked list according to the existing flow; when acquisition of the global lock fails, and unlike the existing flow, the CT table entry is filled into a CT cache node instead of waiting for the global lock to be released, and the CT table entry insertion flow of the CT cache nodes is executed after the data packets have been processed. This addresses the CPS degradation problem under multiple PMD threads: by increasing the parallel processing time, the serial processing time during batch packet processing is reduced and the CPS rate is improved.
Specifically, the flow table processing flow includes:
A1: determine whether all data packets in the batch have been fetched; if so, end the flow, and if not, execute the subsequent steps.
A2: query the CT tracking linked list to check whether a CT table entry exists.
For the query, a hash value is computed from the PMD data packet: the source IP, destination IP, source port, destination port and protocol number are extracted from the data packet and assembled into a tuple, and a hash is computed over this tuple. The hash value is then used as the index to query the CT tracking linked list; if an entry is found, a non-null result is returned, and when nothing is found, Null is returned.
When a CT table entry is found, the state of the CT table entry is updated. The state of the CT table entry includes a timeout state and an established state; the update refreshes the aging time of the CT entry and performs the state transition. After the state is updated, the data packet is marked: when a CT table entry is found, the data packet is marked "+est" (connection established) according to the state of the CT table entry, and when no CT table entry is found, the data packet is marked "+new" (connection to be established). After the state is updated, the next data packet is fetched from the batch to execute the flow table processing flow.
When no CT table entry is found, a CT table entry is created and filled based on the data packet. Specifically, the source IP, destination IP, source port, destination port and protocol number of the data packet, together with the state of the CT table entry, are obtained, and the five-tuple and the CT entry state are filled into the newly created CT table entry.
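A C sketch of the entry layout and tuple hash described above follows, refining the ct_entry sketched earlier with the fields named in this paragraph; the field names, the FNV-1a hash and the two-value state are illustrative assumptions, not the actual OVS conntrack key or hash functions.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Five-tuple extracted from the packet headers. */
    struct five_tuple {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
        uint8_t  proto;
    };

    enum ct_state { CT_NEW, CT_ESTABLISHED };   /* "+new" vs "+est" marking */

    struct ct_entry {
        struct five_tuple key;
        enum ct_state state;
        struct ct_entry *next;
    };

    /* One FNV-1a accumulation step over a field. */
    static uint32_t fnv1a(uint32_t h, const void *data, size_t len)
    {
        const uint8_t *p = data;
        while (len--) { h ^= *p++; h *= 16777619u; }
        return h;
    }

    /* Toy hash over the five-tuple, used as the lookup index; hashed field by
     * field so that struct padding bytes never enter the hash. */
    static uint32_t tuple_hash(const struct five_tuple *t)
    {
        uint32_t h = 2166136261u;
        h = fnv1a(h, &t->src_ip,   sizeof t->src_ip);
        h = fnv1a(h, &t->dst_ip,   sizeof t->dst_ip);
        h = fnv1a(h, &t->src_port, sizeof t->src_port);
        h = fnv1a(h, &t->dst_port, sizeof t->dst_port);
        h = fnv1a(h, &t->proto,    sizeof t->proto);
        return h;
    }

    /* Fill a freshly created CT table entry from the packet's five-tuple. */
    static void ct_entry_fill(struct ct_entry *e, const struct five_tuple *t)
    {
        memcpy(&e->key, t, sizeof(*t));
        e->state = CT_NEW;                       /* connection to be established */
        e->next  = NULL;
    }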
A3: acquire the global lock to perform the linked-list insertion.
When the global lock is acquired successfully, the CT table entry is inserted into the CT tracking linked list. After the insertion, the next data packet is fetched from the batch to execute the flow table processing flow.
When acquisition of the global lock fails, the pointer to the CT table entry is filled into a CT cache node. After the filling, the next data packet is fetched from the batch to execute the flow table processing flow.
Data packets are fetched from the batch one by one and processes A1 to A3 are executed until all data packets have been fetched, as sketched below.
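The per-batch loop sketched below ties processes A1 to A3 together; it reuses the non-blocking ct_try_insert_or_cache from the earlier sketch, and ct_lookup, ct_entry_create and ct_entry_update_state are assumed helper names standing in for the actual lookup, creation and state-update code.

    #include <stdbool.h>

    struct packet;            /* stands in for struct dp_packet / struct rte_mbuf */
    struct ct_entry;
    struct ct_cache_node;

    /* Assumed helpers (hypothetical names, see the earlier sketches). */
    extern struct ct_entry *ct_lookup(const struct packet *p);        /* A2: pre-lookup  */
    extern struct ct_entry *ct_entry_create(const struct packet *p);  /* A2: create+fill */
    extern void ct_entry_update_state(struct ct_entry *e);            /* hit: refresh    */
    extern bool ct_try_insert_or_cache(struct ct_entry *e,
                                       struct ct_cache_node *cache, int *nb_cached);

    /* A1 to A3: walk the batch; nothing in this loop ever blocks on the global lock. */
    static void flow_table_process_batch(struct packet **batch, int nb_pkts,
                                         struct ct_cache_node *cache, int *nb_cached)
    {
        for (int i = 0; i < nb_pkts; i++) {              /* A1: until the batch is drained */
            struct ct_entry *e = ct_lookup(batch[i]);
            if (e != NULL) {
                ct_entry_update_state(e);                /* entry exists: mark packet +est */
                continue;
            }
            e = ct_entry_create(batch[i]);               /* miss: new entry, mark +new     */
            ct_try_insert_or_cache(e, cache, nb_cached); /* A3: trylock, else cache node   */
        }
    }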
Specifically, the CT table entry insertion flow of the CT cache nodes includes:
B1: obtain a CT cache node on the thread.
B2: acquire the global lock, and insert the CT table entry corresponding to the pointer in the CT cache node into the CT tracking linked list when the global lock is acquired successfully; when acquisition of the global lock fails, continue waiting for the lock to be released until the lock is acquired.
B3: delete the CT cache node after the insertion in B2 is completed.
All CT cache nodes on the thread are traversed in sequence until the CT table entries corresponding to the pointers in all cache nodes have been inserted into the CT tracking linked list and the CT cache nodes have been deleted.
After all data packets have been fetched, the above steps are executed, so that, based on the CT cache nodes, CT table entry insertion can be performed for the data packets whose lock acquisition failed during the flow table processing flow, as sketched below.
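A matching C sketch of steps B1 to B3 follows, again with hypothetical names; a blocking lock is acceptable here because the batch has already been drained and the thread has no other packet work to overlap.

    #include <pthread.h>

    struct ct_entry;
    struct ct_cache_node { struct ct_entry *entry; };

    extern pthread_mutex_t ct_global_lock;                 /* from the earlier sketch */
    extern void ct_list_insert_locked(struct ct_entry *e); /* caller holds the lock   */

    /* B1 to B3: traverse the thread's cache nodes, wait for the global lock, insert
     * the deferred entries, then clear each node so it can be reused next round. */
    static void ct_cache_flush(struct ct_cache_node *cache, int *nb_cached)
    {
        for (int i = 0; i < *nb_cached; i++) {
            pthread_mutex_lock(&ct_global_lock);            /* B2: wait until acquired   */
            ct_list_insert_locked(cache[i].entry);
            pthread_mutex_unlock(&ct_global_lock);
            cache[i].entry = NULL;                          /* B3: delete the cache node */
        }
        *nb_cached = 0;
    }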
In addition, step S02 of the method of the embodiments of the disclosure further includes: determining whether the CT cache nodes are empty; when the CT cache nodes are empty, obtaining PMD data packets from the batch to perform the flow table processing flow; and when the CT cache nodes are not empty, traversing all CT cache nodes on the thread and executing the CT table entry insertion flow of the CT cache nodes.
Before the flow table processing flow is executed, this check of the cache nodes can be performed first, and when the CT cache nodes are not empty, the resources in the cache nodes are processed with priority. The method of the embodiments of the disclosure thus executes the CT table entry insertion flow of the CT cache nodes both before and after the flow table processing flow, for different purposes: the former gives priority to processing the resources in the cache nodes, while the latter performs CT table entry insertion, based on the CT cache nodes, for the entries that did not wait for lock release.
The CT table entry insertion flow of the CT cache nodes further includes: after the CT table entry insertion flow of the CT cache nodes has been executed, entering the flow table processing flow. When the CT table entry insertion flow ends, if not all data packets have been fetched, the flow table processing flow continues; if all data packets have been fetched, the current batch processing flow ends and the method returns to step S01 to execute the next round of data packet acquisition and processing.
The method of the embodiments of the disclosure further includes: creating CT cache nodes after the PMD data packets are obtained in step S01 and before step S02 is performed. Each PMD thread allocates a block of cache for the cache nodes; its size can hold a number of CT cache nodes equal to the maximum number of PMD data packets the PMD can obtain from the DPDK data path in one batch.
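A small sketch of this per-PMD initialisation follows; MAX_BURST and ct_cache_create are illustrative, with the node count fixed to the largest batch a PMD can fetch in step S01 so the cache can never overflow within one round.

    #include <stdlib.h>

    #define MAX_BURST 32                 /* largest batch fetched in step S01 */

    struct ct_entry;
    struct ct_cache_node { struct ct_entry *entry; };

    /* Reserve one cache node per possible packet in a batch for this PMD thread. */
    static struct ct_cache_node *ct_cache_create(void)
    {
        return calloc(MAX_BURST, sizeof(struct ct_cache_node));
    }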
Fig. 2 shows an example flowchart of an implementation of the data packet processing method based on OVS-DPDK according to an embodiment of the present disclosure, where ct_cache denotes a cache node. The network card prefetches data packets and caches them into the batch. It is first determined whether ct_cache is empty: when it is empty, the flow table processing flow on the right is executed; when it is not empty, the CT table entry insertion flow of the CT cache nodes below is executed. After all data packets in the batch have been fetched, the data cached in ct_cache is processed in a single traversal. This is performed according to the CT table entry insertion flow of the CT cache nodes and is not described in detail here.
The embodiments of the disclosure also provide a data packet processing system based on OVS-DPDK, comprising a data packet acquisition module, a batch module, a flow table processing module and a cache insertion module. The system is implemented based on the method of the embodiments of the disclosure.
The data packet acquisition module is configured to obtain PMD data packets from a DPDK data path.
The batch module is configured to cache the PMD data packets obtained by the data packet acquisition module.
The flow table processing module is configured to obtain PMD data packets from the batch module and perform the flow table processing flow. When the flow table processing flow is executed, a table lookup is performed for each PMD data packet in the batch; when a CT table entry is found in the CT tracking linked list, the state of the CT table entry is updated, and when no CT table entry is found, a CT table entry is created and different operations are performed according to the result of acquiring the global lock: when the global lock is acquired successfully, the CT table entry is inserted into the CT tracking linked list, and when acquisition of the global lock fails, a pointer to the CT table entry is filled into a CT cache node. After all data packets in the batch have been fetched, the cache insertion module is triggered to execute the CT table entry insertion flow of the CT cache nodes, and after that flow is completed, the data packet acquisition module is triggered to process the next round of PMD data packets.
The cache insertion module is configured to, when executing the CT table entry insertion flow of the CT cache nodes and the global lock is acquired successfully, insert the CT table entry corresponding to the pointer in the CT cache node into the CT tracking linked list and delete the CT cache node. When acquisition of the global lock fails, the cache insertion module waits for the lock to be released until the lock is acquired.
The system of the embodiments of the disclosure further comprises a cache judgment module, configured to determine whether the CT cache nodes are empty; when they are empty, PMD data packets are obtained from the batch module and the flow table processing module is triggered to perform the flow table processing flow; when they are not empty, all CT cache nodes on the thread are traversed and the cache insertion module is triggered to execute the CT table entry insertion flow of the CT cache nodes.
The system of the embodiments of the disclosure further comprises a node creation module, configured to create CT cache nodes after the PMD data packets are obtained by the data packet acquisition module and before the cache judgment module acts. Each PMD thread allocates a block of cache for the cache nodes; its size can hold a number of CT cache nodes equal to the maximum number of PMD data packets the PMD can obtain from the DPDK data path in one batch.
Fig. 3 shows a comparison of the packet processing effect achieved with the existing method and with the method of the embodiments of the disclosure. The left side shows the processing effect of the existing method; the right side shows the packet processing effect of the method of the embodiments of the disclosure. In the example, 96 data packets are processed per batch, coming from three PMD threads (32 data packets per PMD). The total serial time on the right is lower than the time used on the left. The embodiments of the disclosure reduce the serial processing time of the parallel processing tasks across multiple PMD threads, increase the degree of parallel processing of each PMD, reduce the contention among PMDs, greatly improve CPS performance, and increase the speed at which the OVS processes data packets. When the method and system of the embodiments of the disclosure are applied in a network card scenario, the flow table offloading speed also increases significantly as CPS increases.
While several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to method logic acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A data packet processing method based on OVS-DPDK, characterized by comprising:
step S01, obtaining PMD data packets from a DPDK data path and caching them into a batch;
step S02, obtaining PMD data packets from the batch and performing a flow table processing flow;
wherein, when the flow table processing flow is executed, a table lookup is performed for each PMD data packet in the batch, and when no CT table entry is found in a CT tracking linked list, a CT table entry is created and different operations are performed according to the result of acquiring a global lock: when the global lock is acquired successfully, the CT table entry is inserted into the CT tracking linked list, and when acquisition of the global lock fails, a pointer to the CT table entry is filled into a CT cache node;
step S03, after all data packets in the batch have been fetched, executing a CT table entry insertion flow of the CT cache nodes, and after the CT table entry insertion flow of the CT cache nodes is finished, returning to step S01 to process the next round of PMD data packets;
wherein, when the CT table entry insertion flow of the CT cache nodes is executed and the global lock is acquired successfully, the CT table entry corresponding to the pointer in a CT cache node is inserted into the CT tracking linked list and the CT cache node is deleted.
2. The data packet processing method based on OVS-DPDK according to claim 1, wherein the flow table processing flow comprises:
determining whether all data packets in the batch have been fetched; if so, ending the flow, and if not, executing the subsequent steps;
querying the CT tracking linked list, and when no CT table entry is found, creating a CT table entry and filling the CT table entry based on the data packet;
acquiring the global lock, and inserting the CT table entry into the CT tracking linked list when the global lock is acquired successfully, or filling the pointer to the CT table entry into a CT cache node when acquisition of the global lock fails;
and fetching data packets from the batch one by one to execute the above flow until all data packets have been fetched.
3. The data packet processing method based on OVS-DPDK according to claim 2, wherein the flow table processing flow further comprises: when a CT table entry is found, updating the state of the CT table entry, and then fetching the next data packet from the batch to execute the flow table processing flow.
4. The data packet processing method based on OVS-DPDK according to claim 2, wherein filling the CT table entry based on the data packet comprises: storing the source IP, destination IP, source port, destination port and protocol number of the data packet, together with the state of the CT table entry, into the CT table entry.
5. The data packet processing method based on OVS-DPDK according to claim 1, wherein step S02 further comprises: determining whether the CT cache nodes are empty; when the CT cache nodes are empty, obtaining PMD data packets from the batch to perform the flow table processing flow; and when the CT cache nodes are not empty, traversing all CT cache nodes on the thread and executing the CT table entry insertion flow of the CT cache nodes.
6. The data packet processing method based on OVS-DPDK according to claim 1 or 5, wherein the CT table entry insertion flow of the CT cache nodes comprises:
obtaining a CT cache node on the thread;
acquiring the global lock, and inserting the CT table entry corresponding to the pointer in the CT cache node into the CT tracking linked list when the global lock is acquired successfully;
deleting the CT cache node;
and traversing all CT cache nodes on the thread in sequence until the CT table entries corresponding to the pointers in all cache nodes have been inserted into the CT tracking linked list and the CT cache nodes have been deleted.
7. The data packet processing method based on OVS-DPDK according to claim 6, wherein the CT table entry insertion flow of the CT cache nodes further comprises: after the CT table entry insertion flow of the CT cache nodes has been executed, continuing the flow table processing flow.
8. The data packet processing method based on OVS-DPDK according to claim 1, further comprising: creating CT cache nodes after the PMD data packets are obtained in step S01 and before step S02 is performed.
9. The data packet processing method based on OVS-DPDK according to claim 8, wherein the number of CT cache nodes equals the maximum number of PMD data packets that can be obtained from the DPDK data path in step S01.
10. A data packet processing system based on OVS-DPDK, characterized by comprising:
a data packet acquisition module, configured to obtain PMD data packets from a DPDK data path;
a batch module, configured to cache the PMD data packets obtained by the data packet acquisition module;
a flow table processing module, configured to obtain PMD data packets from the batch and perform a flow table processing flow, wherein, when the flow table processing flow is executed, a table lookup is performed for each PMD data packet in the batch, and when no CT table entry is found in a CT tracking linked list, a CT table entry is created and different operations are performed according to the result of acquiring a global lock: when the global lock is acquired successfully, the CT table entry is inserted into the CT tracking linked list, and when acquisition of the global lock fails, a pointer to the CT table entry is filled into a CT cache node; the flow table processing module triggers the cache insertion module to execute a CT table entry insertion flow of the CT cache nodes after all data packets in the batch have been fetched, and triggers the data packet acquisition module to process the next round of PMD data packets after the CT table entry insertion flow of the CT cache nodes is completed;
and a cache insertion module, configured to, when executing the CT table entry insertion flow of the CT cache nodes and the global lock is acquired successfully, insert the CT table entry corresponding to the pointer in the CT cache node into the CT tracking linked list and delete the CT cache node.
CN202310639196.8A 2023-06-01 2023-06-01 Data packet processing method and system based on OVS-DPDK Active CN116366582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310639196.8A CN116366582B (en) 2023-06-01 2023-06-01 Data packet processing method and system based on OVS-DPDK

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310639196.8A CN116366582B (en) 2023-06-01 2023-06-01 Data packet processing method and system based on OVS-DPDK

Publications (2)

Publication Number Publication Date
CN116366582A (en) 2023-06-30
CN116366582B (en) 2023-08-04

Family

ID=86928335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310639196.8A Active CN116366582B (en) 2023-06-01 2023-06-01 Data packet processing method and system based on OVS-DPDK

Country Status (1)

Country Link
CN (1) CN116366582B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017186042A1 (en) * 2016-04-29 2017-11-02 华为技术有限公司 Method and device for data transmission in virtual switch technique
CN111797119A (en) * 2020-05-19 2020-10-20 武汉乐程软工科技有限公司 Caching device, caching system and caching method
CN115695522A (en) * 2022-09-16 2023-02-03 中电信数智科技有限公司 Data packet drainage system based on OVS-DPDK and implementation method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
宋卫平 et al., "基于DPDK的虚拟化系统高性能网络模块的研究与实现" (Research and Implementation of a High-Performance Network Module for Virtualized Systems Based on DPDK), 《信息科学》 (Information Science), pp. 125-133 *

Also Published As

Publication number Publication date
CN116366582B (en) 2023-08-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant