CN116033017A - Processing equipment and processing method for network data packet and electronic equipment - Google Patents

Processing equipment and processing method for network data packet and electronic equipment

Info

Publication number
CN116033017A
CN116033017A
Authority
CN
China
Prior art keywords
rule
network data
data packet
processing
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211418623.1A
Other languages
Chinese (zh)
Inventor
贺巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zeku Technology Shanghai Corp Ltd
Original Assignee
Zeku Technology Shanghai Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zeku Technology Shanghai Corp Ltd filed Critical Zeku Technology Shanghai Corp Ltd
Priority to CN202211418623.1A priority Critical patent/CN116033017A/en
Publication of CN116033017A publication Critical patent/CN116033017A/en
Pending legal-status Critical Current

Abstract

The application discloses a processing device for network data packets, a processing method, and an electronic device. The processing device comprises: an input interface for receiving network data packets; a memory for storing processing rules for the network data packets, the processing rules including at least one of: a filtering rule, an address translation rule, and a routing rule for the network data packets; a cache for caching one or more rule entries hit in the processing rules; an output interface for outputting network data packets processed according to the processing rules; and a control circuit for performing the following operations: in response to the input interface receiving a first network data packet, querying the cache to determine whether the first network data packet matches the one or more rule entries; if the first network data packet matches a first rule entry of the one or more rule entries, processing the first network data packet based on the first rule entry; and forwarding the first network data packet processed according to the first rule entry to the output interface.

Description

Processing equipment and processing method for network data packet and electronic equipment
Technical Field
Embodiments of the present application relate to the technical field of data processing, and more particularly to a processing device, a processing method, and an electronic device for network data packets.
Background
To process a network data packet, a filtering table, an address translation table, a routing table, and the like must be queried in sequence. Because network data packets are governed by many rules, the tables to be queried are often large and require two-level or multi-level queries, so the query time becomes excessive and the processing efficiency of network data packets is low.
Disclosure of Invention
Embodiments of the present application provide a processing device and a processing method for network data packets, and an electronic device. Various aspects of the embodiments are described below.
In a first aspect, a processing device for network data packets is provided, including: an input interface for receiving network data packets; a memory for storing processing rules for the network data packets, the processing rules including at least one of: a filtering rule for the network data packets, an address translation rule for the network data packets, and a routing rule for the network data packets; a cache for caching one or more rule entries hit in the processing rules; an output interface for outputting network data packets processed according to the processing rules; and a control circuit for performing the following operations: in response to the input interface receiving a first network data packet, querying the cache to determine whether the first network data packet matches the one or more rule entries; if the first network data packet matches a first rule entry of the one or more rule entries, processing the first network data packet based on the first rule entry; and forwarding the first network data packet processed according to the first rule entry to the output interface.
In a second aspect, a processing method for network data packets is provided, applied to a processing device for network data packets, the processing device including: an input interface for receiving network data packets; a memory for storing processing rules for the network data packets, the processing rules including at least one of: a filtering rule for the network data packets, an address translation rule for the network data packets, and a routing rule for the network data packets; a cache for caching one or more rule entries hit in the processing rules; and an output interface for outputting network data packets processed according to the processing rules. The method includes: in response to the input interface receiving a first network data packet, querying the cache to determine whether the first network data packet matches the one or more rule entries; if the first network data packet matches a first rule entry of the one or more rule entries, processing the first network data packet based on the first rule entry; and forwarding the first network data packet processed according to the first rule entry to the output interface.
In a third aspect, there is provided an electronic device comprising a processing device as described in the first aspect.
In a fourth aspect, there is provided a computer readable storage medium having stored thereon executable code which when executed is capable of carrying out the method of the second aspect.
An embodiment of the present application provides a processing device for network data packets, which comprises: an input interface for receiving network data packets; a memory for storing processing rules for the network data packets, the processing rules including at least one of: a filtering rule, an address translation rule, and a routing rule for the network data packets; a cache for caching one or more rule entries hit in the processing rules; an output interface for outputting network data packets processed according to the processing rules; and a control circuit for performing the following operations: in response to the input interface receiving a first network data packet, querying the cache to determine whether the first network data packet matches the one or more rule entries; if the first network data packet matches a first rule entry of the one or more rule entries, processing the first network data packet based on the first rule entry; and forwarding the first network data packet processed according to the first rule entry to the output interface. With a cache mechanism for the filtering, address translation, routing, and other rules introduced in this scheme, the rule table lookup process can be skipped entirely for network data packets that match the same rule, which greatly improves the processing efficiency of network data packets.
Drawings
Fig. 1 is a schematic structural diagram of a PTA according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of a processing device for network data packets according to an embodiment of the present application.
Fig. 3 is a flow diagram of one possible network packet processing based on the processing device shown in fig. 2.
Fig. 4 is a flow diagram of another possible network packet processing based on the processing device shown in fig. 2.
Fig. 5 is a flowchart of a processing method of a network packet according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a processing device for network data packets according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application.
Before the embodiments of the present application are described, a network data packet processing device in the related art and the problems it presents are described with reference to the accompanying drawings.
To improve the processing speed of network data packets, a hardware accelerator is generally used to process the data of the network data packets. Fig. 1 is a schematic diagram of one possible packet accelerator. As shown in fig. 1, the flow in which the packet traffic accelerator (PTA) 100 processes a network data packet may include the following steps:
Step 1.1: After the input interface of the packet traffic accelerator 100 receives a network data packet, the received packet may be checked and then filtered; for example, the rule entries of the filtering rule table may be queried to match a filtering rule to the packet, and packets that do not meet the filtering rules may be filtered out and discarded. Further, before routing is performed on the filtered packet, whether the packet requires network address translation (NAT) between the internal and external networks may be determined according to the filtering rule table; if address translation is needed, the flow jumps to step 1.2, otherwise it jumps to step 1.3.
Step 1.2: The address of the network data packet is translated according to the rule entries in the network address translation rule table, which is queried for a matching entry.
Step 1.3: After filtering, the routing sequence number of the network data packet may be determined; the matching routing rule may then be queried in the routing rule table according to the routing sequence number, and the processed packet may be forwarded to the output interface of the packet traffic accelerator 100.
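For illustration only, the conventional flow of steps 1.1 to 1.3 can be pictured with the C sketch below. It is a minimal sketch under simplifying assumptions: the structure layouts and the helper functions filter_lookup, nat_translate, and route_lookup are hypothetical stand-ins for the rule-table queries and are not part of the PTA described here.

#include <stdbool.h>

/* Hypothetical packet header fields relevant to rule matching. */
struct packet {
    unsigned int   src_ip, dst_ip;
    unsigned short src_port, dst_port;
    unsigned char  proto;
};

/* Result of the filtering lookup; needs_nat mirrors the decision in step 1.1. */
struct filter_result {
    bool pass;       /* false: packet is discarded            */
    bool needs_nat;  /* true: jump to step 1.2 before routing */
};

/* Stand-ins for the rule-table queries; in a real PTA each of these may
 * itself involve two-level or multi-level lookups in SRAM/DDR. */
static struct filter_result filter_lookup(const struct packet *p) {
    (void)p;
    struct filter_result r = { true, false };
    return r;
}
static void nat_translate(struct packet *p) { (void)p; /* rewrite addresses/ports */ }
static bool route_lookup(const struct packet *p) { (void)p; return true; }

/* Conventional processing: filter (1.1), optional NAT (1.2), route (1.3).
 * Returns true if the packet is forwarded to the output interface. */
bool pta_process_conventional(struct packet *p)
{
    struct filter_result fr = filter_lookup(p);   /* step 1.1 */
    if (!fr.pass)
        return false;                             /* filtered and discarded */
    if (fr.needs_nat)
        nat_translate(p);                         /* step 1.2 */
    return route_lookup(p);                       /* step 1.3 */
}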
In general, because network data packets come from many sources under complex conditions, the filtering rule table, the address translation rule table, the routing rule table, and similar rule tables contain a large number of rule entries and are therefore quite large. To improve query efficiency, the rule tables often need to be classified and placed in different storage media; for example, part of each rule table may be stored in the memory inside the PTA and part in memory outside the PTA, and a multi-level query mechanism may be set up accordingly on top of this storage arrangement.
As shown in fig. 1, processing a network data packet requires querying the filtering rule table, the address translation rule table, and the routing rule table in sequence. Because the rules are numerous, the tables to be queried are often large and require two-level or multi-level queries, so the query time is prolonged, the power consumption is high, and the processing efficiency of network data packets is low.
To solve the above problem, an embodiment of the present application provides a processing device for network data packets, including: an input interface for receiving network data packets; a memory for storing processing rules for the network data packets, the processing rules including at least one of: a filtering rule, an address translation rule, and a routing rule for the network data packets; a cache for caching one or more rule entries hit in the processing rules; an output interface for outputting network data packets processed according to the processing rules; and a control circuit for performing the following operations: in response to the input interface receiving a first network data packet, querying the cache to determine whether the first network data packet matches the one or more rule entries; if the first network data packet matches a first rule entry of the one or more rule entries, processing the first network data packet based on the first rule entry; and forwarding the first network data packet processed according to the first rule entry to the output interface. With a cache mechanism for the filtering, address translation, routing, and other rules introduced in this scheme, the rule table lookup process can be skipped entirely for network data packets that match the same rule, which greatly improves the processing efficiency of network data packets.
The processing device for network data packets in the embodiments of the present application is described in detail below with reference to fig. 2. The network data packet processing device 200 may be a packet traffic accelerator, such as the PTA described above. Of course, the processing device 200 may also be another type of hardware accelerator, which is not specifically limited in this application.
As shown in fig. 2, the processing device 200 for network data packets may include an input interface 210, a memory 220, a cache 230, an output interface 240, and a control circuit 250.
The input interface 210 may be used to receive network data packets. The input interface 210 may be a serial or parallel interface, such as a universal asynchronous receiver-transmitter (UART) interface, an I2C interface, or the like.
The memory 220 may be used to store processing rules for network data packets, which may include at least one of the following: a filtering rule for the network data packets, an address translation rule for the network data packets, a routing rule for the network data packets, and the like. Any of the above processing rules may include one or more rule entries. The memory 220 may be, for example, a static random-access memory (SRAM) or a double data rate (DDR) memory, which is not particularly limited in this application. It should be noted that the memory 220 may store all or part of the processing rules for the network data packets. When only part of the processing rules is stored in the memory 220, a storage device external to the processing device 200 may store the remaining processing rules.
The cache 230 may be used to cache one or more rule entries hit in the processing rules. That is, the cache 230 may cache one or more hit rule entries from rule tables such as the filtering rule table, the address translation rule table, and the routing rule table. A hit rule entry is a rule entry, stored outside the cache 230, that was hit while processing previous network data packets.
In some embodiments, hit rule entries may be added to the cache 230 or evicted from it. As one example, because the space of the cache 230 is limited, cached rule entries may be evicted according to when they were stored; for instance, the earliest cached rule entry may be evicted first, i.e., the oldest hit rule entry in the cache 230 may be replaced with the most recent hit rule entry. As another example, the eviction of cached rule entries from the cache 230 may be actively controlled, for instance by evicting rule entries that are used less frequently.
The output interface 240 may be configured to output network data packets processed according to the processing rules. The output interface 240 may be a serial or parallel interface, such as a UART interface, an I2C interface, or the like. It should be understood that the output interface 240 may be the same as or different from the input interface 210, which is not specifically limited in this application.
The control circuit 250 may be used to perform the following operations: in response to the input interface 210 receiving a first network data packet, the rule entries cached in the cache 230 may be queried first to determine whether the first network data packet matches (which may also be described as hits) the one or more cached rule entries. If the first network data packet matches a first rule entry of the one or more rule entries, the first network data packet is processed based on the first rule entry, and the packet processed according to the first rule entry is forwarded to the output interface 240.
It is understood that the first network data packet may refer to any one of the network data packets received by the input interface 210.
It can be understood that the first rule entry is the matching rule entry for the first network data packet cached in the cache 230. Depending on the matching rules, the first rule entry may include a filtering rule entry, an address translation rule entry, and a routing rule entry, or may include only a filtering rule entry and a routing rule entry, which is not specifically limited in this application. If the first network data packet hits the first rule entry, it can be processed according to the first rule entry and forwarded directly to the output interface 240, without querying the filtering rule table, the address translation rule table, and the routing rule table in the memory 220 or in external memory in sequence, which greatly improves the processing efficiency of network data packets. In particular, in high-frequency usage scenarios such as data downloading, video playback, and gaming, the table lookup latency of network data packets can be effectively reduced, the processing performance of the device can be noticeably improved, and the power consumption of the processing device can be reduced.
In some embodiments, the control circuit 250 may also be used to perform the following operations: if the first network data packet does not match any of the one or more rule entries cached in the cache 230, the first network data packet may be processed based on the processing rules stored in the memory 220. It should be appreciated that the first network data packet may also be processed based on processing rules stored outside the processing device 200.
It can be understood that if the first network data packet matches a second rule entry in the processing rules stored in the memory 220, the cache 230 is updated based on the second rule entry. The second rule entry may refer to one or more hit rule entries that are stored in the memory 220 but not currently stored in the cache 230. The second rule entry may include a filtering rule entry, an address translation rule entry, and a routing rule entry, or may include only a filtering rule entry and a routing rule entry, which is not specifically limited in this application.
That is, if the first network data packet does not hit any rule entry in the cache 230, the flow jumps to the conventional processing flow, in which filtering rule matching, network address translation (which may be omitted), and routing rule lookup and matching are performed on the first network data packet in sequence. The hit second rule entry is then added to the cache 230, which can further improve the hit rate for subsequent network data packets.
In some embodiments, the cache 230 stores the matching rule corresponding to each of the one or more rule entries, and the matching rule includes the five-tuple information of the network data packet. The five-tuple information includes the source IP address, the source port, the destination IP address, the destination port, and the transport layer protocol.
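As a non-limiting illustration of five-tuple matching, the C sketch below shows how a small rule cache keyed on the five-tuple might be queried on the fast path. The entry layout, the size RULE_CACHE_SIZE, and the cached action fields are assumptions made for this example only and do not describe the actual organization of the cache 230.

#include <stdbool.h>
#include <stddef.h>

/* Five-tuple used as the matching rule for a cached entry. */
struct five_tuple {
    unsigned int   src_ip, dst_ip;
    unsigned short src_port, dst_port;
    unsigned char  proto;          /* transport layer protocol */
};

/* One cached rule entry: the match key plus the previously recorded
 * filtering, address translation and routing results. */
struct rule_cache_entry {
    bool              valid;
    struct five_tuple key;
    bool              drop;        /* cached filtering decision      */
    bool              do_nat;      /* cached NAT decision            */
    unsigned int      nat_ip;      /* cached translated address      */
    unsigned int      route_id;    /* cached routing result          */
    unsigned long     last_used;   /* timestamp for the aging policy */
};

#define RULE_CACHE_SIZE 8          /* illustrative size only */
static struct rule_cache_entry rule_cache[RULE_CACHE_SIZE];

static bool tuple_equal(const struct five_tuple *a, const struct five_tuple *b)
{
    return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
           a->src_port == b->src_port && a->dst_port == b->dst_port &&
           a->proto == b->proto;
}

/* Fast path: return the cached entry for this five-tuple, or NULL on a miss.
 * On a hit the packet can be processed and forwarded without querying the
 * filtering, address translation and routing rule tables again. */
struct rule_cache_entry *rule_cache_lookup(const struct five_tuple *key,
                                           unsigned long now)
{
    for (int i = 0; i < RULE_CACHE_SIZE; i++) {
        if (rule_cache[i].valid && tuple_equal(&rule_cache[i].key, key)) {
            rule_cache[i].last_used = now;   /* feed the aging mechanism */
            return &rule_cache[i];
        }
    }
    return NULL;                             /* miss: fall back to the rule tables */
}

A linear scan suffices for a small illustrative cache; a hardware cache would typically compare entries in parallel, but that is outside the scope of this sketch.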
To further illustrate the packet processing flow of the processing device 200, an example is described below with reference to fig. 2 and fig. 3. The processing of a network data packet includes the following steps.
Step 3.1: The cache 230 is empty at initialization.
Step 3.2: When the first packet flows in, it does not hit in the cache 230 because the cache is empty, so the packet undergoes normal data processing; for example, the filtering rule table, the address translation rule table (which may be omitted), and the routing rule table in the memory may be accessed in sequence.
Step 3.3: After the first packet is processed, its filtering query result, address translation result, and routing result are recorded and stored in the first entry of the cache 230 while the first packet flows out.
Step 3.4: When the second packet flows in, because it is of the same type as the first packet (it will be appreciated that matching is typically on the five-tuple information, and other matching rules can be set flexibly), it hits directly in the cache 230 on arrival; there is no need to look up the filtering rule table, the address translation rule table, and the routing rule table again, and the packet takes the cached rule and flows out directly.
Step 3.5: Similarly, if a subsequent packet matches a previously cached rule, the query process is skipped and the packet is output directly. When there is no match in the cache 230, the flow jumps into the normal query process and the new rule entries are written into the cache 230, until all the space in the cache 230 is filled with matching rules. It should be understood that this embodiment of the present application illustratively uses eight data packets and three cache entries.
It should be noted that, according to an aging mechanism, the processing device 200 in this embodiment of the present application replaces the oldest rule or the least recently used rule in the cache 230 with the most recent matching rule, so as to maintain a high hit rate for the cache 230.
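One possible realization of such an aging mechanism is a least-recently-used replacement, sketched below in C. The reduced entry layout and the table size are hypothetical and shown only to make the replacement decision concrete: on a miss, the newest matching rule overwrites an empty slot if one exists, otherwise the entry with the oldest last_used timestamp.

#include <stdbool.h>

/* Reduced view of a cached rule entry: only the fields the aging
 * (replacement) policy needs. The layout is illustrative. */
struct aged_entry {
    bool          valid;
    unsigned long last_used;   /* updated on every cache hit */
};

#define RULE_CACHE_SIZE 8      /* illustrative size only */

/* Pick the slot to receive the newest matching rule: an empty slot if one
 * exists, otherwise the least recently used (oldest last_used) entry. */
int rule_cache_victim(const struct aged_entry cache[RULE_CACHE_SIZE])
{
    int victim = 0;
    for (int i = 0; i < RULE_CACHE_SIZE; i++) {
        if (!cache[i].valid)
            return i;                                    /* free slot first */
        if (cache[i].last_used < cache[victim].last_used)
            victim = i;                                  /* oldest so far   */
    }
    return victim;
}

Evicting the entry with the lowest last_used timestamp approximates the aging behavior described above; counting uses instead of recording timestamps would approximate a least-frequently-used variant.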
To further illustrate the processing device 200 described above, an embodiment of the present application also provides a network data packet processing flow, as shown in fig. 4, which includes the following steps. The processing device 200 is described below taking a PTA as an example.
Step 4.1: After the PTA receives a packet, if the cache 230 (also referred to as the rule cache) is enabled, the flow jumps into the rule-cache lookup state; if the rule cache hits, the processing of the packet is completed and the data processing flow ends. If the rule cache is not enabled, or the rule cache does not hit, the flow jumps to step 4.2.
Step 4.2: Normal data processing is performed; for example, the filtering rule table, the address translation rule table (which may be omitted), and the routing rule table in the memory may be accessed in sequence. It should be appreciated that, depending on the current data processing scenario, the flow may jump into the memory inside the PTA for normal packet data processing. Of course, normal packet data processing may also be performed by jumping into the memory outside the PTA. As one example, the filtering rules outside the PTA may be searched by hashing, the address translation rules outside the PTA may likewise be searched by hashing, and the routing rule query may then be performed to complete the packet processing. As another example, the filtering rules outside the PTA may be searched without hashing, the address translation rules outside the PTA may likewise be searched without hashing, and the routing rule query may then be performed to complete the packet processing.
Step 4.3: When the lookup in the memory inside the PTA does not hit, the flow may jump into the memory outside the PTA for normal packet data processing; for example, the rule query may be performed with or without hashing, as described in step 4.2, and is not repeated here.
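The miss path of steps 4.2 and 4.3 can be pictured with the C sketch below. Purely for illustration, it assumes a hash-indexed rule table outside the PTA with open addressing and a hook that writes the hit rule back into the rule cache; the hash function, table layout, and helper names are hypothetical and do not describe the actual storage of the rules.

#include <stdbool.h>
#include <stddef.h>

struct pkt_key {            /* five-tuple condensed into one lookup key */
    unsigned int   src_ip, dst_ip;
    unsigned short src_port, dst_port;
    unsigned char  proto;
};

struct ext_rule {           /* one entry of a hash-indexed external rule table */
    bool           valid;
    struct pkt_key key;
    bool           drop;
    bool           do_nat;
    unsigned int   route_id;
};

#define EXT_TABLE_SIZE 1024                 /* illustrative size only */
static struct ext_rule ext_table[EXT_TABLE_SIZE];

/* Toy hash over the five-tuple; a real implementation would use a
 * better-distributed hash. */
static unsigned int key_hash(const struct pkt_key *k)
{
    unsigned int h = k->src_ip ^ k->dst_ip ^ k->proto;
    h ^= ((unsigned int)k->src_port << 16) | k->dst_port;
    return h % EXT_TABLE_SIZE;
}

static bool key_equal(const struct pkt_key *a, const struct pkt_key *b)
{
    return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
           a->src_port == b->src_port && a->dst_port == b->dst_port &&
           a->proto == b->proto;
}

/* Hypothetical hook that writes the hit rule back into the rule cache
 * (the second-rule-entry update described above). */
static void rule_cache_insert(const struct ext_rule *hit) { (void)hit; }

/* Miss path: search the hash-indexed table outside the PTA and, on a hit,
 * update the rule cache so later packets of the same flow hit directly. */
const struct ext_rule *miss_path_lookup(const struct pkt_key *k)
{
    unsigned int idx = key_hash(k);
    for (unsigned int probe = 0; probe < EXT_TABLE_SIZE; probe++) {
        const struct ext_rule *r = &ext_table[(idx + probe) % EXT_TABLE_SIZE];
        if (!r->valid)
            break;                      /* open addressing: empty slot ends search */
        if (key_equal(&r->key, k)) {
            rule_cache_insert(r);       /* fill the cache for subsequent packets */
            return r;
        }
    }
    return NULL;                        /* no matching rule found */
}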
From the above it can be seen that introducing a general cache mechanism into the PTA hardware accelerator greatly improves the lookup efficiency of the filtering rule table, the address translation rule table, and the routing rule table, reduces the frequency of accesses to the SRAM or DDR memory, and thereby also reduces power consumption. It should be appreciated that, with the cache mechanism introduced, the SRAM or DDR may even be eliminated, thereby saving area in the PTA.
An embodiment of the present application further provides an electronic device including any of the processing devices described above. The electronic device may be, for example, a mobile phone or a tablet computer, which is not specifically limited in this application.
Embodiments of the apparatus portion of the present application are described above in detail with reference to fig. 1 to 4, and method embodiments of the present application are described below in detail with reference to fig. 5 and fig. 6. It is to be understood that the description of the method embodiments corresponds to the description of the device embodiments; therefore, parts not described in detail can be found in the preceding device embodiments.
An embodiment of the present application provides a processing method for network data packets, illustrated by the flowchart in fig. 5. The method 500 is applied to a processing device for network data packets, where the processing device includes: an input interface for receiving network data packets; a memory for storing processing rules for the network data packets, the processing rules including at least one of: a filtering rule for the network data packets, an address translation rule for the network data packets, and a routing rule for the network data packets; a cache for caching one or more rule entries hit in the processing rules; and an output interface for outputting network data packets processed according to the processing rules. The method 500 includes steps S520 to S560.
In step S520, in response to the input interface receiving a first network data packet, the cache is queried to determine whether the first network data packet matches the one or more rule entries.
In step S540, if the first network data packet matches a first rule entry of the one or more rule entries, the first network data packet is processed based on the first rule entry.
In step S560, the first network data packet processed according to the first rule entry is forwarded to the output interface.
Optionally, the method 500 further includes: if the first network data packet does not match the one or more rule entries, processing the first network data packet based on the processing rules stored in the memory.
Optionally, the method 500 further includes: if the first network data packet matches a second rule entry in the processing rules stored in the memory, updating the cache based on the second rule entry.
Optionally, the cache stores the matching rule corresponding to each of the one or more rule entries, where the matching rule includes the five-tuple information of the network data packet.
Optionally, the processing device is a packet traffic accelerator.
The following describes a processing device 600 for network packets according to an embodiment of the present application with reference to fig. 6. The dashed lines in fig. 6 indicate that the unit or module is optional. The apparatus 600 may be used to implement the methods described in the method embodiments above. The apparatus 600 may be a computer or any type of electronic device.
The apparatus 600 may include one or more processors 610. The processor 610 may support the apparatus 600 to implement the methods described in the method embodiments above.
It is to be appreciated that the processor 610 may be a general purpose processor or a special purpose processor. For example, the processor 610 may be a central processing unit (CPU) or an application processor (AP). Alternatively, the processor 610 may be another general purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The apparatus 600 may also include one or more memories 620. The memory 620 has stored thereon a program that can be executed by the processor 610 to cause the processor 610 to perform the method described in the method embodiments above. The memory 620 may be separate from the processor 610 or may be integrated into the processor 610.
The apparatus 600 may also include a transceiver 630. The processor 610 may communicate with other devices through the transceiver 630. For example, the processor 610 may transmit and receive data to and from other devices through the transceiver 630.
Embodiments of the present application also provide a machine-readable storage medium for storing a program that causes a computer to perform the methods in the various embodiments of the present application.
Embodiments of the present application also provide a computer program product. The computer program product includes a program that causes a computer to perform the methods in the various embodiments of the present application.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present disclosure are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a machine-readable storage medium or transmitted from one machine-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The machine-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The foregoing is merely specific embodiments of the disclosure, but the protection scope of the disclosure is not limited thereto; any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope of the disclosure shall be covered by the protection scope of the disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. A processing device for network data packets, comprising:
an input interface for receiving network data packets;
a memory for storing processing rules for the network data packets, the processing rules including at least one of: a filtering rule for the network data packets, an address translation rule for the network data packets, and a routing rule for the network data packets;
a cache for caching one or more rule entries hit in the processing rules;
an output interface for outputting network data packets processed according to the processing rules; and
a control circuit for performing the following operations:
in response to the input interface receiving a first network data packet, querying the cache to determine whether the first network data packet matches the one or more rule entries;
if the first network data packet matches a first rule entry of the one or more rule entries, processing the first network data packet based on the first rule entry; and
forwarding the first network data packet processed according to the first rule entry to the output interface.
2. The processing device of claim 1, wherein the control circuit is further configured to:
if the first network data packet does not match the one or more rule entries, process the first network data packet based on the processing rules stored in the memory.
3. The processing device of claim 2, wherein the control circuit is further configured to:
if the first network data packet matches a second rule entry in the processing rules stored in the memory, update the cache based on the second rule entry.
4. The processing device of claim 1, wherein the cache stores a matching rule corresponding to each of the one or more rule entries, the matching rule including five-tuple information of the network data packet.
5. The processing device of claim 1, wherein the processing device is a packet traffic accelerator.
6. A processing method for network data packets, applied to a processing device for network data packets, the processing device comprising:
an input interface for receiving network data packets;
a memory for storing processing rules for the network data packets, the processing rules including at least one of: a filtering rule for the network data packets, an address translation rule for the network data packets, and a routing rule for the network data packets;
a cache for caching one or more rule entries hit in the processing rules;
an output interface for outputting network data packets processed according to the processing rules;
the method comprises the following steps:
in response to the input interface receiving a first network data packet, querying the cache to determine whether the first network data packet matches the one or more rule entries;
if the first network data packet matches a first rule entry of the one or more rule entries, processing the first network data packet based on the first rule entry; and
forwarding the first network data packet processed according to the first rule entry to the output interface.
7. The processing method according to claim 6, further comprising:
if the first network data packet does not match the one or more rule entries, processing the first network data packet based on the processing rules stored in the memory.
8. The processing method according to claim 7, further comprising:
if the first network data packet matches a second rule entry in the processing rules stored in the memory, updating the cache based on the second rule entry.
9. The processing method according to claim 6, wherein the cache stores a matching rule corresponding to each of the one or more rule entries, the matching rule including five-tuple information of the network data packet.
10. The processing method of claim 6, wherein the processing device is a packet traffic accelerator.
11. An electronic device, comprising: the processing apparatus of any of claims 1-5.
CN202211418623.1A 2022-11-14 2022-11-14 Processing equipment and processing method for network data packet and electronic equipment Pending CN116033017A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211418623.1A CN116033017A (en) 2022-11-14 2022-11-14 Processing equipment and processing method for network data packet and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211418623.1A CN116033017A (en) 2022-11-14 2022-11-14 Processing equipment and processing method for network data packet and electronic equipment

Publications (1)

Publication Number Publication Date
CN116033017A true CN116033017A (en) 2023-04-28

Family

ID=86073011

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211418623.1A Pending CN116033017A (en) 2022-11-14 2022-11-14 Processing equipment and processing method for network data packet and electronic equipment

Country Status (1)

Country Link
CN (1) CN116033017A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination