CN117235317A - Tri-state content addressable memory, electronic device, system on chip, and related methods - Google Patents

Tri-state content addressable memory, electronic device, system on chip, and related methods

Info

Publication number
CN117235317A
CN117235317A (application CN202311206373.XA)
Authority
CN
China
Prior art keywords: input data, content, processing action, item, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311206373.XA
Other languages
Chinese (zh)
Inventor
秦军杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pingtouge Shanghai Semiconductor Co Ltd
Original Assignee
Pingtouge Shanghai Semiconductor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pingtouge Shanghai Semiconductor Co Ltd filed Critical Pingtouge Shanghai Semiconductor Co Ltd
Priority to CN202311206373.XA priority Critical patent/CN117235317A/en
Publication of CN117235317A publication Critical patent/CN117235317A/en
Pending legal-status Critical Current

Landscapes

  • Logic Circuits (AREA)

Abstract

Embodiments of the present application provide a ternary content addressable memory, an electronic device, a system on chip, and related methods. The ternary content addressable memory comprises: a receiving unit for receiving input data; a comparison unit for comparing the input data with a plurality of content items and determining, from among them, a target content item that matches the input data; and a reading unit for reading the target processing action item corresponding to the target content item and outputting it. This scheme shortens the delay of looking up processing action items.

Description

Tri-state content addressable memory, electronic device, system on chip, and related methods
Technical Field
The embodiments of the present application relate to the technical field of chips, and in particular to a ternary content addressable memory, an electronic device, a system on chip, and related methods.
Background
A network processor (Network Processor, NP) must obtain a subsequent processing action (action) according to its input data (key). To improve processing performance, this high-performance query is implemented with a ternary content addressable memory (Ternary Content Addressable Memory, TCAM).
Currently, a scheme combining a TCAM with a static random-access memory (Static Random-Access Memory, SRAM) is used to query processing actions. The TCAM performs a comparison operation on the input data to obtain a comparison result (hit); an encoding module (encoder) converts the comparison result into address information (index) for the SRAM; the SRAM then performs a read operation at that address and outputs the processing action, which is used for subsequent logic processing.
However, because the SRAM can only be read after the TCAM outputs its comparison result and the encoding module converts that result into address information, obtaining a processing action from input data takes multiple clock cycles, resulting in a large delay.
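To make the multi-stage pipeline concrete, the following is a minimal software sketch of the conventional TCAM-plus-SRAM lookup described above; the function names (`tcam_compare`, `encode`, `lookup_action`) and the entry format are illustrative assumptions, not taken from the patent.

```python
def tcam_compare(key, entries):
    """Return a hit vector: which stored entries match the key."""
    return [1 if key == e else 0 for e in entries]

def encode(hit_vector):
    """Priority-encode the hit vector into an SRAM address (index)."""
    for index, hit in enumerate(hit_vector):
        if hit:
            return index
    return None  # miss

def lookup_action(key, entries, sram_actions):
    """Three sequential stages: compare -> encode -> SRAM read.
    In hardware each stage costs at least one clock cycle, which is
    the source of the delay described above."""
    hit = tcam_compare(key, entries)
    index = encode(hit)
    return sram_actions[index] if index is not None else None

entries = ["dst=10.0.0.1", "dst=10.0.0.2"]  # illustrative content items
actions = ["forward_port_1", "drop"]        # illustrative SRAM contents
```

Each stage depends on the previous stage's output, so the stages cannot overlap for a single key; this serialization is what the scheme of the present application removes.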
Disclosure of Invention
Accordingly, embodiments of the present application provide a ternary content addressable memory, an electronic device, a system on chip, and related methods to solve, or at least alleviate, the above-mentioned problems.
According to a first aspect of the embodiments of the present application, there is provided a ternary content addressable memory, comprising: a receiving unit for receiving input data; a comparison unit for comparing the input data with a plurality of content items and determining, from among them, a target content item that matches the input data; and a reading unit for reading the target processing action item corresponding to the target content item and outputting it.
According to a second aspect of the embodiments of the present application, there is provided a data processing method, comprising: receiving input data; comparing the input data with a plurality of content items and determining, from among them, a target content item matching the input data; and reading the target processing action item corresponding to the target content item and outputting it.
According to a third aspect of the embodiments of the present application, there is provided an electronic device, comprising: at least one ternary content addressable memory according to the first aspect above; a control unit for sending input data to the ternary content addressable memory; and a logic processing unit for receiving the target processing action item output by the ternary content addressable memory and executing the processing action according to the target processing action item.
According to a fourth aspect of the embodiments of the present application, there is provided a system on chip, comprising: at least one ternary content addressable memory according to the first aspect above; a control module for sending input data to the ternary content addressable memory; and a logic processing module for receiving the target processing action item output by the ternary content addressable memory and executing the processing action according to the target processing action item.
According to a fifth aspect of an embodiment of the present application, there is provided a data center including the electronic device according to the third aspect or the system-on-chip according to the fourth aspect.
With the ternary content addressable memory provided by the embodiments of the present application, a plurality of content items and a corresponding plurality of processing action items are stored in the ternary content addressable memory. After the receiving unit receives input data, the comparison unit determines a target content item matching the input data from the plurality of content items; the reading unit then determines the target processing action item from the plurality of processing action items according to the target content item. After the target processing action item is output, a logic processing circuit executes the corresponding processing action. Because both the content items and the processing action items are stored in the ternary content addressable memory, it can directly output the hit target processing action item, eliminating the delay of encoding the comparison result and of reading the processing action item from an SRAM, and thereby shortening the delay of looking up processing action items.
Drawings
In order to more clearly illustrate the embodiments of the present application and the technical solutions in the prior art, the drawings required for describing them are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; a person of ordinary skill in the art may derive other drawings from them.
FIG. 1 is a schematic diagram of a data center of one embodiment of the present application;
FIG. 2 is a schematic diagram of an electronic device according to one embodiment of the application;
FIG. 3 is a schematic diagram illustrating operation of a ternary content addressable memory according to one embodiment of the present application;
FIG. 4 is a schematic diagram illustrating operation of a ternary content addressable memory according to another embodiment of the present application;
FIG. 5 is a schematic diagram of a system-on-chip of one embodiment of the application;
FIG. 6 is a schematic diagram of a ternary content addressable memory according to one embodiment of the present application;
FIG. 7 is a flow chart of a data processing method of one embodiment of the present application.
Detailed Description
The present application is described below based on embodiments, but it is not limited to these embodiments. In the following detailed description, certain specific details are set forth; those skilled in the art will fully understand the present application even without these details. To avoid obscuring the essence of the application, well-known methods, procedures, and flows are not described in detail. The figures are not necessarily drawn to scale.
First, some terms or terminology appearing in the description of the embodiments of the application are explained as follows.
Content addressable memory: a content-addressable memory (CAM) is a memory addressed by content. It operates by comparing an input data item with the data items stored in the CAM, determining which stored data item matches the input, and outputting the matching information corresponding to that data item.
Ternary content addressable memory: a ternary content addressable memory (Ternary Content Addressable Memory, TCAM) supports ternary searches, unlike a plain CAM: by using masks it can perform both exact-match and fuzzy (wildcard) match searches, whereas a CAM can only perform exact-match searches. TCAMs are mainly used for fast lookup of entries such as access control lists (Access Control Lists, ACL) and routes.
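As an illustration of ternary matching, the following sketch shows the common value/mask convention for a TCAM entry: bits where the mask is 1 must agree with the key, and bits where the mask is 0 are "don't care". The bit-level representation here is an assumption for illustration, not a description of the patent's hardware.

```python
def ternary_match(key, value, mask):
    """A TCAM entry (value, mask) matches a key when every bit where
    mask = 1 agrees with the key; bits where mask = 0 are wildcards."""
    return (key & mask) == (value & mask)

# An entry matching any key whose high nibble is 0xA
# (the low nibble is "don't care"):
value, mask = 0xA0, 0xF0
```

With mask = 0xFF the same mechanism degenerates to an exact match, which is why a TCAM subsumes the plain-CAM behavior.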
Static random access memory: a static random-access memory (Static Random-Access Memory, SRAM) is a type of random access memory; the data stored in it is retained as long as the memory remains powered.
Electronic device: a device with computing or processing capabilities. It may be embodied as a terminal, such as an Internet of Things device, a mobile terminal, a desktop computer, or a laptop computer, or as a server or server cluster. In the data center context in which the present application is applied, the electronic devices may be servers, switches, routers, etc. in the data center.
Application environment of the application
The embodiments of the present application provide a scheme for looking up processing actions through a ternary content addressable memory. The overall scheme is fairly general and can be used in various hardware devices that include a ternary content addressable memory, such as data centers, servers, personal computers, switches, routers, Internet of Things (IoT) devices, embedded devices, and the like. The processing-action lookup scheme is independent of the hardware on which it is deployed, but for illustrative purposes the description below mainly takes a data center as the application scenario. Those skilled in the art will appreciate that the embodiments of the present application are also applicable to other application scenarios.
Data center
Data centers are globally coordinated networks of specific devices used to communicate, accelerate, display, compute, and store data information over an Internet network infrastructure. In future development, data centers will also be an asset in enterprise competition. With their widespread use, artificial intelligence and similar technologies are increasingly applied in data centers, and neural networks, as an important artificial intelligence technology, have been widely used in data center big-data analysis operations.
In a conventional large data center, the network architecture is generally as shown in fig. 1, i.e., a hierarchical inter-networking model. This model contains the following parts:
server 140: each server 140 is a processing and storage entity of a data center in which the processing and storage of large amounts of data is accomplished by these servers 140.
Access switch 130: a switch through which servers 140 access the data center. One access switch 130 connects a plurality of servers 140. Access switches 130 are typically located at the top of the rack, so they are also called Top of Rack switches; they physically connect to the servers.
Aggregation switch 120: each aggregation switch 120 connects multiple access switches 130 while providing other services such as firewall, intrusion detection, network analysis, etc.
Core switch 110: core switch 110 provides high-speed forwarding of packets into and out of the data center and provides connectivity for the aggregation switches 120. The network of the entire data center is divided into an L3 routing network and an L2 routing network, and the core switch 110 generally provides a flexible L3 routing network for the entire data center.
Typically, the aggregation switch 120 is the demarcation point between the L2 and L3 routing networks: below it is the L2 network, above it the L3 network. Each group of aggregation switches manages a point of delivery (Point Of Delivery, POD), and each POD is an independent VLAN network. Servers migrating within a POD need not modify their IP addresses or default gateways, because one POD corresponds to one L2 broadcast domain.
The spanning tree protocol (Spanning Tree Protocol, STP) is typically used between the aggregation switches 120 and the access switches 130. With STP, only one aggregation switch 120 is active for a given VLAN network; the others are used only in the event of a failure. That is, the aggregation layer does not scale horizontally, since even if multiple aggregation switches 120 are added, only one is working at a time.
The ternary content addressable memory provided by the embodiments of the present application can be applied to network message parsing or other match-and-lookup scenarios. When applied to network message parsing, the ternary content addressable memory is deployed in a switch, such as any one or more of the access switch 130, the aggregation switch 120, and the core switch 110, to parse network messages exchanged between servers 140 or between servers 140 and clients.
Electronic equipment
Fig. 2 shows the internal structure of an electronic device 20 according to an embodiment of the present application. The electronic device 20 may be the core switch 110, aggregation switch 120, or access switch 130 in the foregoing data center embodiment, or a device for network message parsing, such as a router. As shown in fig. 2, the electronic device 20 comprises a control unit 21, a logic processing unit 22, and at least one ternary content addressable memory 23.
The control unit 21 may send the input data to the ternary content addressable memory 23. When the electronic device 20 comprises a plurality of ternary content addressable memories 23, the control unit 21 may send the input data to the plurality of ternary content addressable memories 23 in a parallel manner. The input data is a query condition for looking up processing action entries from the ternary content addressable memory 23, e.g., the input data may include one or more keywords.
The ternary content addressable memory 23 stores a plurality of content items (content) and a plurality of processing action items (action); the content items correspond one-to-one with the processing action items, and different content items correspond to different processing action items. Upon receiving the input data, the ternary content addressable memory 23 compares the input data with the plurality of content items to determine a target content item that matches the input data. It then determines the target processing action item corresponding to the target content item from the plurality of processing action items and sends it to the logic processing unit 22.
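A minimal software sketch of this single-memory lookup, assuming a first-match-wins table of (value, mask, action) entries; the entry format and the action names are illustrative assumptions, not taken from the patent:

```python
def tcam_lookup(key, table):
    """table: list of (value, mask, action) entries stored together in the
    ternary content addressable memory. Because the action item is stored
    alongside the content item, a hit returns the action directly, with no
    encoder stage and no external SRAM read."""
    for value, mask, action in table:
        if (key & mask) == (value & mask):
            return action
    return None  # no content item matched

table = [
    (0x10, 0xFF, "shift_left"),  # exact match on key 0x10
    (0x20, 0xF0, "add_offset"),  # any key in 0x20..0x2F (low nibble masked)
]
```

Contrast this with the conventional scheme, where the hit vector must first be encoded into an SRAM address before the action can be read.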
The logic processing unit 22, upon receiving the target processing action entry sent by the ternary content addressable memory 23, executes the corresponding processing action. Different processing action entries indicate different processing actions; a processing action may be, for example, a shift, an addition or subtraction, or an operation on the value of a certain field in entries such as access control lists and routing tables. The embodiments of the present application do not limit the processing actions indicated by the processing action entries.
In the embodiments of the present application, the ternary content addressable memory 23 stores a plurality of content items and a corresponding plurality of processing action items. After the control unit 21 sends input data to the ternary content addressable memory 23, the memory determines a target content item matching the input data from the plurality of content items, then determines the corresponding target processing action item from the plurality of processing action items, and sends it to the logic processing unit 22, which executes the corresponding processing action. Because both the content items and the processing action items are stored in the ternary content addressable memory 23, the memory can determine the target processing action item directly after determining the target content item, saving the encoding delay of an encoder and the delay of reading the processing action item from an SRAM, and thereby shortening the delay of looking up processing action items.
In one possible implementation, the control unit 21 sends the input data included in a network message to the ternary content addressable memory 23. A single network message includes M input data that must be processed serially, where M is a positive integer greater than or equal to 2; that is, for the input data included in a single network message, the processing action entry for a later input datum can be looked up only after the processing action for the preceding input datum has been executed.
During network message processing, the ternary content addressable memory 23 needs P clock cycles to process one input datum, and the logic processing unit 22 needs Q clock cycles to execute a processing action according to the target processing action entry, where P is a positive integer greater than or equal to 2 and Q is a positive integer greater than or equal to 1. For example, if a network message includes 2 serially processed input data, the ternary content addressable memory 23 needs 3 clock cycles to process one input datum and obtain the corresponding processing action entry, and the logic processing unit 22 needs 1 clock cycle to execute the processing action, then the logic processing unit 22 completes the processing action for the last input datum of the message after 8 clock cycles.
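The cycle count in this example follows from a simple tally, sketched here as a hypothetical helper; it assumes each of the M serially dependent lookups incurs the full P + Q cycles before the next key can be issued:

```python
def packet_latency(M, P, Q):
    """Cycles until the last processing action of a network message
    completes: M serially dependent keys, each needing P cycles of
    TCAM lookup plus Q cycles of action execution."""
    return M * (P + Q)
```

With M = 2, P = 3, and Q = 1 this gives the 8 clock cycles stated above.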
The electronic device 20 may include M ternary content addressable memories 23. The control unit 21 may send the input data of network messages to the M ternary content addressable memories 23, and each ternary content addressable memory 23 can process P+Q input data in parallel; that is, the control unit 21 may send P+Q input data to one ternary content addressable memory 23, which processes them in parallel to obtain the corresponding P+Q target processing action entries.
When the control unit 21 sends P+Q input data to one ternary content addressable memory 23, the P+Q input data belong to different network messages, and the control unit 21 sends the input data of different network messages being processed to different ternary content addressable memories 23. For example, if the electronic device 20 includes 2 ternary content addressable memories 23 and P+Q=4, the control unit 21 sends the input data of network messages 1, 2, 3, and 4 to the first ternary content addressable memory 23, and the input data of network messages 5, 6, 7, and 8 to the second.
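Reading the example as a round-robin assignment of consecutive packets to memories (an illustrative interpretation, not a policy mandated by the patent), the distribution can be sketched as:

```python
def assign_tcam(packet_index, threads_per_tcam=4, num_tcams=2):
    """Map a 0-based packet index to a 0-based TCAM index: each TCAM
    takes threads_per_tcam consecutive packets, then the control unit
    moves on to the next TCAM, wrapping around."""
    return (packet_index // threads_per_tcam) % num_tcams
```

This reproduces the example above: packets 1 to 4 (indices 0 to 3) go to the first memory, packets 5 to 8 to the second, and packets 9 to 12 wrap back to the first.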
To ensure that the logic processing unit 22 can output the processing result of one network message in every clock cycle, the control unit 21 sends input data to a ternary content addressable memory 23 in every clock cycle, and in every clock cycle a ternary content addressable memory 23 sends the logic processing unit 22 the target processing action entry corresponding to the last input datum of some network message. The logic processing unit 22 can therefore output the processing result of a network message in every clock cycle.
When the electronic device 20 comprises 2 ternary content addressable memories 23, fig. 3 shows the operation of one of them and fig. 4 the operation of the other. For ease of description, in the diagrams of figs. 3 and 4, Cycle1 represents the 1st clock cycle, Cycle2 the 2nd, and so on up to Cycle19, the 19th. Packet1 represents the 1st network message, Packet2 the 2nd, and so on up to Packet16, the 16th. Key1 represents the first input datum in the processing of a network message and Key2 the second. Action1 represents the processing action corresponding to the first input datum and Action2 the one corresponding to the second.
Thus Packet1Key1 represents the first input datum in the processing of network message Packet1 and Packet1Key2 the second, and so on up to Packet16Key1 and Packet16Key2. Likewise, Packet1Action1 represents the processing action corresponding to the first input datum in the processing of Packet1 and Packet1Action2 the one corresponding to the second, and so on up to Packet16Action1 and Packet16Action2.
Fig. 3 shows the process in which the control unit 21 sends input data to the 1st ternary content addressable memory 23. As shown in fig. 3:
Packet1Key1 is sent at Cycle1, and Packet1Action1 is executed at Cycle4 to obtain its execution result; Packet1Key2 is sent at Cycle5, and Packet1Action2 is executed at Cycle8 to obtain its execution result, which is the processing result of Packet1.
Packet2Key1 is sent at Cycle2, and Packet2Action1 is executed at Cycle5 to obtain its execution result; Packet2Key2 is sent at Cycle6, and Packet2Action2 is executed at Cycle9 to obtain its execution result, which is the processing result of Packet2.
Packet3Key1 is sent at Cycle3, and Packet3Action1 is executed at Cycle6 to obtain its execution result; Packet3Key2 is sent at Cycle7, and Packet3Action2 is executed at Cycle10 to obtain its execution result, which is the processing result of Packet3.
Packet4Key1 is sent at Cycle4, and Packet4Action1 is executed at Cycle7 to obtain its execution result; Packet4Key2 is sent at Cycle8, and Packet4Action2 is executed at Cycle11 to obtain its execution result, which is the processing result of Packet4.
Fig. 4 shows the process in which the control unit 21 sends input data to the 2nd ternary content addressable memory 23. As shown in fig. 4:
Packet5Key1 is sent at Cycle5, and Packet5Action1 is executed at Cycle8 to obtain its execution result; Packet5Key2 is sent at Cycle9, and Packet5Action2 is executed at Cycle12 to obtain its execution result, which is the processing result of Packet5.
Packet6Key1 is sent at Cycle6, and Packet6Action1 is executed at Cycle9 to obtain its execution result; Packet6Key2 is sent at Cycle10, and Packet6Action2 is executed at Cycle13 to obtain its execution result, which is the processing result of Packet6.
Packet7Key1 is sent at Cycle7, and Packet7Action1 is executed at Cycle10 to obtain its execution result; Packet7Key2 is sent at Cycle11, and Packet7Action2 is executed at Cycle14 to obtain its execution result, which is the processing result of Packet7.
Packet8Key1 is sent at Cycle8, and Packet8Action1 is executed at Cycle11 to obtain its execution result; Packet8Key2 is sent at Cycle12, and Packet8Action2 is executed at Cycle15 to obtain its execution result, which is the processing result of Packet8.
As shown in fig. 3, the 1 st ternary content addressable memory 23 starts processing Packet9 at Cycle9, processing Packet10 at Cycle10, processing Packet11 at Cycle11, and processing Packet12 at Cycle 12. As shown in fig. 4, the 2 nd ternary content addressable memory 23 starts processing Packet13 at Cycle13, processing Packet14 at Cycle14, processing Packet15 at Cycle15, and processing Packet16 at Cycle 16.
As shown in figs. 3 and 4, the logic processing unit 22 obtains the processing result of Packet1 at Cycle8, the processing result of Packet2 at Cycle9, and so on up to the processing result of Packet8 at Cycle15. Provided that there are enough network messages to process, from the 8th clock cycle onward the logic processing unit 22 obtains the processing result of one network message in every clock cycle, ensuring timely processing of network messages.
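Under the stated assumptions (P = 3 TCAM cycles, Q = 1 execution cycle, M = 2 serial keys, and a new packet's Key1 issued in each successive cycle), the schedule of figs. 3 and 4 can be checked with a small sketch; `result_cycle` is a hypothetical helper, not part of the patent:

```python
def result_cycle(issue_cycle, M=2, P=3, Q=1):
    """Cycle in which a packet's last processing action completes,
    assuming Key(i+1) is issued in the cycle after Action(i) finishes:
    the result lands M*(P+Q) - 1 cycles after Key1 is issued."""
    return issue_cycle + M * (P + Q) - 1

# Packets 1..8 issued in cycles 1..8 finish in cycles 8..15,
# one result per cycle from Cycle8 onward:
finish_cycles = [result_cycle(c) for c in range(1, 9)]
```

This matches the figures: Packet1 (Key1 at Cycle1) finishes at Cycle8, and Packet8 (Key1 at Cycle8) finishes at Cycle15.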
It should be noted that in figs. 3 and 4 the dashed boxes represent threads for processing network messages: each ternary content addressable memory 23 processes network messages through 4 parallel threads, so the two memories together provide 8 parallel threads, which is what allows the logic processing unit 22 to obtain one network message processing result per clock cycle. A network message containing two serially processed input data is only an example; a network message may contain another number of serially processed input data, such as 3, 4, 5, or 6, and the embodiments of the present application do not limit this number. Likewise, the assumption that the logic processing unit 22 needs 1 clock cycle to execute a processing action is only an example; it may execute a processing action over several clock cycles, such as 2, 3, 5, or 8, and the embodiments of the present application do not limit the time required to execute a processing action.
In one possible implementation, the ternary content addressable memory 23 takes one clock cycle to receive input data, one clock cycle to determine the matching content item from the input data, and one clock cycle to determine the corresponding processing action item from the content item; the logic processing unit 22 then takes one clock cycle to execute the processing action, so processing one input datum takes 4 clock cycles in total. In the related art, the ternary content addressable memory takes one clock cycle to receive input data, one clock cycle to compare the input data with the content items, and one clock cycle to send the comparison result to the encoder; the encoder takes one clock cycle to encode the comparison result and one clock cycle to send the encoding result (the SRAM address information) to the SRAM; the SRAM takes one clock cycle to determine the corresponding processing action item from the encoding result; and executing the processing action takes one more clock cycle, so processing one input datum takes 7 clock cycles in total.
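The two cycle counts above are simple tallies of one cycle per listed step; as an arithmetic check (step names taken from the paragraph, one cycle each):

```python
# Proposed scheme: receive, compare, read action item, execute action.
proposed_cycles = 1 + 1 + 1 + 1

# Related art: receive, compare, send hit to encoder, encode,
# send index to SRAM, SRAM read, execute action.
related_art_cycles = 1 + 1 + 1 + 1 + 1 + 1 + 1

saved = related_art_cycles - proposed_cycles  # cycles saved per input datum
```

The 3 saved cycles correspond exactly to the encoder transfer, encoding, and SRAM stages eliminated by storing the processing action items inside the ternary content addressable memory.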
If a network message includes two serially processed input data, then in order to output the processing result of one network message per clock cycle, the electronic device 20 provided by the embodiments of the present application needs 8 clock cycles for a single ternary content addressable memory 23 to output a message's processing result; with two ternary content addressable memories 23, each processes 4 network messages concurrently, so each needs buffering for the processing logic of 4 network messages, and 8 in total. With the related-art scheme, a single ternary content addressable memory needs 14 clock cycles to output a message's processing result, so two TCAM-plus-SRAM combinations are required; each combination must process 7 network messages concurrently and buffer the processing logic of 7 network messages, and 14 in total.
In view of the above, compared with the solution of combining a TCAM and an SRAM in the related art, the electronic device 20 provided in the embodiment of the present application can shorten the time required for processing a network message, reduce the delay of processing the network message, and reduce design resources in high-performance multi-serial-query application scenarios.
The embodiment of the present application mainly focuses on the structure and configuration of the ternary content addressable memory 23, and the structure and configuration of the ternary content addressable memory 23 will be described in detail later.
System on chip
Fig. 5 is an internal block diagram of the system on chip 30 according to an embodiment of the present application. The system on chip 30 may be included in the core switch 110, the aggregation switch 120, or the access switch 130 in the above-described data center embodiment, and may also be included in other devices used for network packet parsing, such as routers. As shown in fig. 5, the system on chip 30 includes a control module 31, a logic processing module 32, and at least one ternary content addressable memory 23. The control module 31 may send the input data to the ternary content addressable memory 23, the ternary content addressable memory 23 may send a corresponding target processing action entry to the logic processing module 32 according to the input data, and the logic processing module 32 may perform the corresponding processing action according to the target processing action entry.
It should be noted that the process by which the system on chip 30 executes the corresponding processing action according to the input data may refer to the description in the embodiment of the electronic device 20, and will not be repeated here. Compared with the electronic device 20, the control module 31, the logic processing module 32, and the ternary content addressable memory 23 included in the system on chip 30 are integrated in one chip, whereas the control unit 21, the logic processing unit 22, and the ternary content addressable memory 23 included in the electronic device 20 are separate elements or circuits; the logic by which the system on chip 30 and the electronic device 20 process input data is, however, the same.
The embodiment of the present application mainly focuses on the structure and configuration of the ternary content addressable memory 23, and the structure and configuration of the ternary content addressable memory 23 will be described in detail later.
Tri-state content addressable memory
The operation of the ternary content addressable memory 23 will be described in detail below in conjunction with the internal structure of the ternary content addressable memory 23 shown in fig. 6.
As shown in fig. 6, the ternary content addressable memory 23 includes a receiving unit 231, a comparing unit 232, and a reading unit 233. The receiving unit 231 may receive input data and transmit the received input data to the comparing unit 232. The comparison unit 232 may compare the received input data with a plurality of content items to determine a target content item matching the input data from the plurality of content items, and transmit the target content item to the reading unit 233. The reading unit 233 reads a target processing action entry corresponding to the target content entry after receiving the target content entry, and outputs the target processing action entry.
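The receive/compare/read flow of fig. 6 can be sketched behaviorally as follows. This is a minimal Python model for illustration only; the class and method names (`TernaryCAM`, `lookup`) and the sample data are not from the patent:

```python
# Behavioral sketch of the receiving unit 231 / comparing unit 232 /
# reading unit 233 flow: content items and processing action entries are
# stored together, so a lookup returns the action entry directly.

class TernaryCAM:
    def __init__(self, content_items, action_entries):
        assert len(content_items) == len(action_entries)  # one-to-one correspondence
        self.content_items = content_items
        self.action_entries = action_entries

    def lookup(self, input_data):
        # comparing unit: find the target content item matching the input data
        for i, item in enumerate(self.content_items):
            if item == input_data:
                # reading unit: read and output the corresponding action entry
                return self.action_entries[i]
        return None

cam = TernaryCAM(["tcp:80", "udp:53"], ["forward", "drop"])
assert cam.lookup("udp:53") == "drop"
```

Because the action entry is returned in the same lookup, no external encoder or SRAM access appears in the flow, which is the point of the design.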
The receiving unit 231 may receive input data sent by the control unit 21 or the control module 31, where the input data may be a query condition in a network packet used for searching for the corresponding processing action entry; for example, the input data may include one or more keywords. A network message may include one or more input data; for example, the network message may include a plurality of input data that need to be processed serially, that is, after the processing action corresponding to the preceding input data in the network message is executed, the processing action entry corresponding to the following input data is searched for.
The ternary content addressable memory 23 is provided with a comparison buffer and an action buffer. The comparison buffer stores a plurality of content items, the action buffer stores a plurality of processing action items, the content items in the comparison buffer are in one-to-one correspondence with the processing action items in the action buffer, and different content items correspond to different processing action items. A content item may be a protocol type, a source address, a destination address, a source port, a destination port, an Internet Protocol (IP) address priority, or the like. A processing action item is used to indicate a processing action, and the processing action may be shifting, incrementing, or decrementing a certain field segment in entries such as access control lists and routing tables, or assigning a value to a certain field segment, and the like.
The comparison unit 232 may compare the input data with the content items stored in the comparison buffer to find a target content item that matches the input data. Each content item stored in the comparison buffer has a status flag indicating whether the corresponding content item is valid; according to the status flags, the comparison unit 232 may compare the input data only with the content items in the valid state and determine the target content item matching the input data from among them, without comparing the input data with content items in the invalid state.
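The valid-flag filtering described above can be sketched as follows. A minimal Python illustration; the function name `find_target` and the tuple representation of (content, status flag) are assumptions of this sketch:

```python
# Sketch of comparing input data only against content items whose status
# flag marks them as valid; invalid items are skipped entirely.

def find_target(input_data, items):
    """items: list of (content, valid) tuples.
    Returns the index of the first valid matching content item, or None."""
    for i, (content, valid) in enumerate(items):
        if valid and content == input_data:
            return i
    return None

items = [("a", False), ("a", True), ("b", True)]
assert find_target("a", items) == 1  # the invalid item at index 0 is skipped
```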
Since the content items are in one-to-one correspondence with the processing action items, the reading unit 233 may determine, from the plurality of processing action items, a target processing action item corresponding to the target content item after receiving the target content item, and then output the target processing action item, for example, send the target processing action item to the logic processing unit 22 or the logic processing module 32, so that the logic processing unit 22 or the logic processing module 32 performs a corresponding processing action according to the target processing action item.
In the embodiment of the present application, the ternary content addressable memory 23 stores a plurality of content items and a plurality of corresponding processing action items. After the receiving unit 231 receives the input data, the comparing unit 232 determines a target content item matching the input data from the plurality of content items, the reading unit 233 then determines a target processing action item from the plurality of processing action items according to the target content item and outputs the target processing action item, and the logic processing circuit executes the corresponding processing action according to the target processing action item. Because the content items and the processing action items are both stored in the ternary content addressable memory 23, the ternary content addressable memory 23 can directly output the hit target processing action item, which saves the delay of encoding the comparison result output by a ternary content addressable memory and reading the processing action item from an SRAM, so that the delay of searching for the processing action item can be shortened.
In one possible implementation, the comparing unit 232 may compare the input data with the plurality of content items in parallel to obtain hit information indicating a target content item matching the input data among the plurality of content items, and then send the hit information to the reading unit 233. The reading unit 233 may determine a target processing action entry corresponding to the target content entry according to the hit information after receiving the hit information.
The comparison buffer stores a plurality of content items, and after the receiving unit 231 transmits the input data to the comparing unit 232, the comparing unit 232 compares the input data with the plurality of content items in a parallel manner to determine whether the input data matches each content item. The comparison unit 232 generates hit information from the comparison results, and the hit information may indicate the target content item that matches the input data. For example, if the comparison buffer stores 512 content items, the comparison unit 232 compares the input data with the 512 content items in a parallel manner to obtain 512-bit hit information; each bit in the hit information corresponds to one content item, different bits correspond to different content items, and the value of a bit represents the matching relationship between the corresponding content item and the input data. In one example, the i-th bit in the hit information characterizes the matching relationship of the input data to the i-th content item, where i is a positive integer greater than or equal to 1; if the i-th bit in the hit information is the binary number 1, the input data matches the i-th content item, and if the i-th bit is the binary number 0, the input data does not match the i-th content item.
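The construction of the hit information can be sketched as a bit vector, with bit i set when the input data matches the i-th content item. A minimal Python illustration; the function name `hit_vector` and the sample data are invented for this sketch (note the sketch uses 0-based bit positions, while the text counts from 1):

```python
# Sketch of the parallel comparison producing N-bit hit information:
# bit i is 1 when the input data matches the i-th content item.

def hit_vector(input_data, content_items):
    bits = 0
    for i, item in enumerate(content_items):
        if item == input_data:
            bits |= 1 << i  # set the bit corresponding to this content item
    return bits

content = ["x", "y", "x", "z"]
assert hit_vector("x", content) == 0b0101  # the 1st and 3rd items (1-based) hit
```

In hardware each comparison happens in the same cycle; the loop here only models the resulting bit pattern, not the timing.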
The reading unit 233 may determine the target content item matching the input data based on the hit information, and further determine the processing action item corresponding to the target content item as the target processing action item. For example, if every bit in the hit information is the binary number 0 except for the 8th bit, which is 1, the 8th content item can be determined as the target content item, and the processing action item corresponding to the 8th content item can be determined as the target processing action item.
The comparison buffer includes a plurality of content addresses for storing content items, and the action buffer includes a plurality of action addresses for storing processing action items; the content addresses are in one-to-one correspondence with the action addresses, and different content addresses correspond to different action addresses. When content items and processing action items are stored in the ternary content addressable memory 23, each corresponding content item and processing action item is stored to a corresponding content address and action address, so that the correspondence between content items and processing action items can be conveniently established. When the content items and processing action items stored in the ternary content addressable memory 23 are changed, each corresponding content item and processing action item need only be stored to the corresponding content address and action address according to the correspondence between them, which makes maintenance of the content items and processing action items more convenient.
In the embodiment of the present application, the comparing unit 232 compares the input data with a plurality of content items in a parallel manner, so that the time required for determining the target content item can be shortened, thereby reducing the delay of searching for the processing action item and improving the processing efficiency of the network message. The comparing unit 232 generates hit information for indicating the target content item, and after the hit information is sent to the reading unit 233, the reading unit 233 can directly determine the target processing action item according to the hit information, so that the hit information does not need to be encoded, and the delay of searching the processing action item is reduced.
In one possible implementation, after receiving the hit information, the reading unit 233 reads the target processing action entry corresponding to the target content entry according to the hit information and outputs the read target processing action entry, so that the logic processing circuit executes the corresponding processing action according to the target processing action entry.
Since the hit information may indicate a target content item, the reading unit 233 may determine the target content item according to the hit information, and thus may determine a target processing action item corresponding to the target content item, and after determining the target processing action item, the reading unit 233 may read the target processing action item.
In the embodiment of the present application, after determining the target processing action item according to the hit information, the reading unit 233 reads the target processing action item and outputs it. Since the reading unit 233 reads only the target processing action item indicated by the hit information, without reading non-target processing action items, the power consumption of the reading unit 233 is lower, which ensures that the ternary content addressable memory 23 has lower power consumption and is thus suitable for electronic devices or systems on chip with stricter power consumption requirements.
In another possible implementation manner, in the process of acquiring the hit information by the comparing unit 232, the reading unit 233 reads a plurality of processing action entries corresponding to a plurality of content entries, and after receiving the hit information, outputs the read target processing action entry according to the hit information.
In the process of comparing the input data with the plurality of content items by the comparing unit 232, the reading unit 233 reads the plurality of processing action items in synchronization. When the comparing unit 232 transmits hit information indicating a target content item to the reading unit 233, the reading unit 233 has completed reading each processing action item, and after receiving the hit information, the reading unit 233 finds a target processing action item from among the read processing action items according to the hit information, and outputs the target processing action item.
In the embodiment of the present application, after the receiving unit 231 receives the input data, the comparing unit 232 and the reading unit 233 operate in parallel: the comparing unit 232 compares the input data with the plurality of content items to obtain hit information that can indicate the target content item, while the reading unit 233 reads the plurality of processing action items. After the comparing unit 232 obtains the hit information, it sends the hit information to the reading unit 233, and after receiving the hit information, the reading unit 233 determines the target processing action item indicated by the hit information from among the already-read processing action items and outputs it. Since the reading unit 233 has already read each processing action item before receiving the hit information, it can directly output the target processing action item according to the hit information, saving the time that would otherwise be spent reading the target processing action item after the hit information arrives, thereby reducing the delay of searching for the processing action item and improving the processing efficiency of the network message.
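The overlapped scheme above can be sketched as a prefetch-then-select step. A minimal Python illustration; the function name `select_action` is invented, and the hit information is assumed to be one-hot here (the multi-hit case is addressed later in the text):

```python
# Sketch of the overlapped implementation: all processing action entries are
# read while the comparison runs, and the hit information then selects one
# of the prefetched entries with no further memory read.

def select_action(hit_bits, prefetched_actions):
    for i, action in enumerate(prefetched_actions):
        if hit_bits >> i & 1:
            return action
    return None  # no content item matched

actions = ["act0", "act1", "act2"]
assert select_action(0b010, actions) == "act1"
```

The trade-off relative to the targeted-read variant is explicit: selection after the hit information arrives costs no read latency, at the price of reading every entry speculatively.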
In one possible implementation, the ternary content addressable memory 23 is provided with a comparison buffer and an action buffer, where N content entries are stored in the comparison buffer, and N processing action entries are stored in the action buffer, where different content entries correspond to different processing action entries, and N is a positive integer greater than or equal to 2. The hit information comprises N binary digits, and the binary digit stored in the ith binary digit in the hit information is used for indicating the matching property of the ith content item in the N content items and input data, wherein i is a positive integer less than or equal to N.
The N content items in the comparison buffer are in one-to-one correspondence with the N processing action items in the action buffer, and different content items correspond to different processing action items. For example, 512 content items are stored in the comparison buffer and 512 processing action items are stored in the action buffer; the i-th content item in the comparison buffer corresponds to the i-th processing action item in the action buffer, where i is a positive integer less than or equal to 512.
the comparison unit 232 generates hit information including N binary digits by comparing the input data with N content items, the binary digit stored in the i-th binary digit in the hit information indicating the matching of the input data with the i-th content item. For example, when the input data matches the ith content item, the binary number stored in the ith binary bit in the hit information is 1, and when the input data does not match the ith content item, the binary number stored in the ith binary bit in the hit information is 0.
After receiving the hit information, the reading unit 233 can directly determine the target content item according to the binary numbers stored in the binary bits of the hit information, and further determine the processing action item corresponding to the target content item as the target processing action item. For example, if in the hit information the binary number stored in the 99th binary bit is 1 and the binary numbers stored in all other binary bits are 0, the reading unit 233 determines the 99th content item as the target content item, and further determines the processing action item corresponding to the 99th content item as the target processing action item.
Alternatively, the reading unit 233 directly determines the target processing action entry from the binary numbers stored in the binary bits of the hit information. For example, if in the hit information the binary number stored in the 99th binary bit is 1 and the binary numbers stored in all other binary bits are 0, the reading unit 233 determines the 99th processing action entry as the target processing action entry.
It should be noted that if the hit information indicates that a plurality of content items match the input data, the top-ranked one of the matching content items may be determined as the target content item according to the ranking of the content items in the comparison buffer. For example, if, according to the order of the content items in the comparison buffer, the 5th content item, the 24th content item, and the 125th content item all match the input data, the 5th content item is determined as the target content item corresponding to the input data.
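The multi-hit resolution rule above can be sketched with a lowest-set-bit computation. A minimal Python illustration; the function name `first_hit` is invented, and the bit-trick is one possible way to model the priority rule, not necessarily how the hardware implements it:

```python
# Sketch of resolving multiple hits: the top-ranked (lowest-indexed)
# matching content item wins.

def first_hit(hit_bits: int):
    """Return the 1-based position of the lowest set bit, or None."""
    if hit_bits == 0:
        return None
    return (hit_bits & -hit_bits).bit_length()  # isolate lowest set bit

# items 5, 24, and 125 (1-based) all hit; item 5 is chosen
multi_hit = (1 << 4) | (1 << 23) | (1 << 124)
assert first_hit(multi_hit) == 5
```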
In the embodiment of the present application, the hit information generated by the comparing unit 232 includes a plurality of binary digits, the number of binary digits included in the hit information is equal to the number of content items, and the binary digits stored in different binary digits in the hit information are used to indicate the matching property of the input data and different content items, and the reading unit 233 can directly determine the target processing action item according to the hit information, without encoding the hit information, so that the delay of searching the processing action item can be reduced.
In one possible implementation, the comparing unit 232 compares the input data with a plurality of content items to find a target content item matching the input data, and if the comparing unit 232 does not find any content item matching the input data, the reading unit 233 reads a default processing action item and outputs the default processing action item.
The ternary content addressable memory 23 is provided with a comparison buffer in which a plurality of content items are stored, and the comparison unit 232 compares the input data with each content item in the comparison buffer to determine a target content item matching the input data. If there is no content item matching the input data in the comparison buffer, i.e., the target content item does not exist, the reading unit 233 reads the default processing action item and outputs the default processing action item.
The ternary content addressable memory 23 is provided with an action buffer in which a plurality of processing action entries are stored, and different content entries correspond to different processing action entries. A certain processing action entry in the action buffer is designated in advance as the default processing action entry, and when there is no content entry matching the input data, the reading unit 233 reads the default processing action entry and outputs it. The processing action indicated by the default processing action entry may be shifting, incrementing, or decrementing a certain field segment in an entry such as an access control list or a route, or assigning a value to a certain field segment; the embodiment of the present application does not limit the processing action indicated by the default processing action entry.
It should be understood that "different processing action entries" refers to any two processing action entries in the action buffer, and that different processing action entries may include the same content, i.e., different processing action entries may indicate the same processing action.
In the embodiment of the present application, after the comparing unit 232 compares the input data with the plurality of content items, if no content item matching the input data exists in the plurality of content items, the reading unit 233 reads the default processing action item and outputs the default processing action item, so that after the receiving unit 231 receives the input data, the reading unit 233 can output the processing action item, and further, the normal operation of the processing action searching process is ensured.
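The miss-handling behavior above can be sketched as a lookup with a fallback. A minimal Python illustration; the names (`lookup_with_default`, `DEFAULT_ACTION`) and the sample data are invented for this sketch:

```python
# Sketch of falling back to a pre-designated default processing action
# entry when no content item matches the input data.

DEFAULT_ACTION = "default_action"  # designated in advance (illustrative value)

def lookup_with_default(input_data, content_items, action_entries):
    for item, action in zip(content_items, action_entries):
        if item == input_data:
            return action
    return DEFAULT_ACTION  # miss: the reading unit outputs the default entry

assert lookup_with_default("miss", ["a"], ["act_a"]) == "default_action"
```

The fallback guarantees that every received input data produces some processing action entry, which is what keeps the downstream search process running normally.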
In one possible implementation, the input data may include identification information of a target content item, the identification information of the target content item including at least part of the content of the target content item, different content items corresponding to different identification information.
The identification information of the content item is used to distinguish between different content items, which correspond to different identification information, and the input data may include the identification information of the target content item. After receiving the input data, the comparing unit 232 compares the identification information included in the input data with the plurality of content items, and determines the content item matched with the identification information included in the input data as the target content item because the different content items correspond to the different identification information.
The identification information comprises at least part of the content item, e.g. the identification information may be a digest or a keyword of the corresponding content item. When the identification information is a keyword of the corresponding content item, the identification information may include one or more keywords of the corresponding content item, at least part of the keywords of different content items being different.
When the comparison unit 232 compares the input data with the content item, it may detect whether each keyword included in the input data is located in the content item. If each keyword included in the input data is located in the content item, the content item is determined to be a target content item. If the input data includes at least one keyword that is not in the content item, it is determined that the content item does not match the input data.
In the embodiment of the present application, the input data includes the identification information of the target content item, the identification information includes at least part of the content of the target content item, and different content items correspond to different identification information, so the comparison unit 232 may perform exact match search according to the input data to determine the target content item matched with the input data, which is suitable for the application scenario of the exact match search.
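The keyword-based exact-match check described above can be sketched as follows. A minimal Python illustration; the function name `matches` and the dictionary representation of a content item are assumptions of this sketch:

```python
# Sketch of keyword matching: a content item is the target only when every
# keyword included in the input data is located in the content item.

def matches(keywords, content_item):
    return all(k in content_item for k in keywords)

item = {"proto": "tcp", "dport": "80", "sport": "1234"}
assert matches(["proto", "dport"], item)     # both keywords present: match
assert not matches(["proto", "vlan"], item)  # "vlan" absent: no match
```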
In another possible implementation manner, the input data includes the identification information and the mask information of the target content item, where the identification information has been described in the foregoing embodiment, and is not described herein. The mask information is used to mask a portion of the content in the identification information.
In the embodiment of the application, exact-match searching of content items can be realized through the identification information, and since the mask information can mask part of the content in the identification information, fuzzy-match searching of content items can be realized according to the identification information and the mask information. Therefore, when the input data includes both identification information and mask information, either exact-match searching or fuzzy-match searching of content items can be realized, which makes the scheme suitable for different application scenarios and improves the applicability of the ternary content addressable memory 23.
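The identification-plus-mask matching above is the classic ternary ("don't care") comparison, and can be sketched bitwise. A minimal Python illustration; the function name `ternary_match` and the mask polarity (1 = compare, 0 = don't care) are assumptions of this sketch:

```python
# Sketch of ternary matching with identification and mask information:
# masked-out bits are ignored, enabling both exact and fuzzy lookups.

def ternary_match(input_bits, ident_bits, mask_bits):
    """mask_bits: 1 = compare this bit, 0 = don't care."""
    return (input_bits & mask_bits) == (ident_bits & mask_bits)

# exact match: full mask compares every bit
assert ternary_match(0b1010, 0b1010, 0b1111)
# fuzzy match: the low two bits are masked out ("don't care")
assert ternary_match(0b1011, 0b1000, 0b1100)
```

With the mask all ones the comparison degenerates to an exact match, which is why a single mechanism covers both application scenarios.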
In one possible implementation, the ternary content addressable memory 23 may process a plurality of input data in parallel by means of the receiving unit 231, the comparing unit 232, and the reading unit 233, and output a plurality of target processing action entries corresponding to the plurality of input data.
The sequential processing of receiving the input data by the receiving unit 231, comparing the input data with the content items by the comparing unit 232, and reading the target content items by the reading unit 233 is regarded as one data processing thread, and the ternary content addressable memory 23 can process a plurality of such data processing threads in parallel. As shown in fig. 3 and 4, the ternary content addressable memory 23 processes 4 input data in parallel by 4 parallel data processing threads.
It should be understood that the ternary content addressable memory 23 processing multiple data processing threads in parallel means that the data processing threads executing in the ternary content addressable memory 23 overlap in time, not that they all begin and end simultaneously. For example, a first data processing thread starts at clock cycle 1 and ends at clock cycle 8, a second data processing thread starts at clock cycle 2 and ends at clock cycle 9, a third data processing thread starts at clock cycle 3 and ends at clock cycle 10, and a fourth data processing thread starts at clock cycle 4 and ends at clock cycle 11.
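The staggered schedule in the example above can be sketched directly. A minimal Python illustration; the function name `schedule` is invented, and an 8-cycle per-thread latency is taken from the example:

```python
# Sketch of the overlapping thread schedule: thread k starts at cycle k
# and occupies `latency` consecutive cycles (inclusive start and end).

def schedule(num_threads, latency=8):
    return [(k, k + latency - 1) for k in range(1, num_threads + 1)]

assert schedule(4) == [(1, 8), (2, 9), (3, 10), (4, 11)]
```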
In the embodiment of the application, the ternary content addressable memory 23 can process a plurality of input data in parallel and output a plurality of corresponding target processing action items, thereby realizing synchronous processing of multiple input data and improving the processing efficiency of network messages; moreover, through the cooperation of a plurality of ternary content addressable memories 23, the processing result of a network message can be output in each clock cycle, reducing the delay of processing network messages.
Data processing method
Fig. 7 is a flow chart of a data processing method according to an embodiment of the present application, which may be performed by the ternary content addressable memory 23 in any of the embodiments described above. As shown in fig. 7, the data processing method includes:
step 701, receiving input data;
step 702, comparing the input data with a plurality of content items, and determining a target content item matched with the input data from the plurality of content items;
step 703, reading a target processing action entry corresponding to the target content entry, and outputting the target processing action entry.
In the embodiment of the application, the ternary content addressable memory stores a plurality of content items and a plurality of corresponding processing action items. After input data is received, a target content item matching the input data is determined from the plurality of content items, a target processing action item is then determined from the plurality of processing action items according to the target content item, and after the target processing action item is output, the logic processing circuit executes the corresponding processing action according to the target processing action item. Because the content items and the processing action items are both stored in the ternary content addressable memory, the ternary content addressable memory can directly output the hit target processing action item, saving the delay of encoding the comparison result output by a ternary content addressable memory and reading the processing action item from an SRAM, so that the delay of searching for the processing action item can be shortened.
It should be noted that, because details of the data processing method have been described in the embodiments of the ternary content addressable memory and the embodiments of the electronic device with reference to the schematic structural diagrams, specific processes may be referred to in the embodiments of the ternary content addressable memory and the embodiments of the electronic device, and will not be described herein.
Commercial value of embodiments of the application
In the embodiment of the application, because the content items and the processing action items are both stored in the ternary content addressable memory, the ternary content addressable memory can directly output the hit target processing action item, saving the delay of encoding the comparison result output by a ternary content addressable memory and reading the processing action item from an SRAM. This shortens the delay of searching for processing action items, reduces design resources in high-performance multi-serial-query application scenarios, and improves the competitiveness of the ternary content addressable memory and of related products that include it.
It should be noted that, the information related to the user (including, but not limited to, user equipment information, user personal information, etc.) and the data related to the embodiment of the present application (including, but not limited to, sample data for training the model, data for analyzing, stored data, displayed data, etc.) are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and are provided with corresponding operation entries for the user to select authorization or rejection.
It should be understood that each embodiment in this specification is described in an incremental manner, and the same or similar parts between each embodiment are referred to each other, and the embodiments focus on differences from other embodiments. In particular, for method embodiments, the description is relatively simple as it is substantially similar to the methods described in the apparatus and system embodiments, with reference to the description of other embodiments being relevant.
It should be understood that the foregoing describes specific embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
It should be understood that elements described herein in the singular or shown in the drawings are not intended to limit the number of elements to one. Furthermore, modules or elements described or illustrated herein as separate may be combined into a single module or element, and modules or elements described or illustrated herein as a single may be split into multiple modules or elements.
It is also to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. The use of these terms and expressions is not meant to exclude any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible and intended to fall within the scope of the claims. Other modifications, variations, and alternatives are also possible. Accordingly, the claims are intended to cover all such equivalents.

Claims (14)

1. A ternary content addressable memory, comprising:
a receiving unit configured to receive input data;
a comparison unit configured to compare the input data with a plurality of content entries and determine, from the plurality of content entries, a target content entry that matches the input data; and
a reading unit configured to read a target processing action entry corresponding to the target content entry and output the target processing action entry.
2. The ternary content addressable memory of claim 1, wherein
the comparison unit is configured to compare the input data with the plurality of content entries in parallel to obtain hit information and to send the hit information to the reading unit, the hit information indicating which of the plurality of content entries is the target content entry that matches the input data.
3. The ternary content addressable memory of claim 2, wherein
the reading unit is configured to, after receiving the hit information, read the target processing action entry corresponding to the target content entry according to the hit information.
4. The ternary content addressable memory of claim 2, wherein
the reading unit is configured to read the plurality of processing action entries corresponding to the plurality of content entries while the comparison unit is obtaining the hit information, and, after receiving the hit information, to output the read target processing action entry according to the hit information.
5. The ternary content addressable memory of claim 2, wherein a comparison buffer in the ternary content addressable memory stores N content entries, an action buffer in the ternary content addressable memory stores N processing action entries, different content entries correspond to different processing action entries, and N is a positive integer greater than or equal to 2; and
the hit information comprises N binary bits, the i-th bit of the N binary bits storing a binary number indicating whether the i-th content entry of the N content entries matches the input data, where i is a positive integer less than or equal to N.
6. The ternary content addressable memory of claim 1, wherein
the reading unit is configured to, when none of the plurality of content entries matches the input data, read a default processing action entry and output the default processing action entry.
7. The ternary content addressable memory of claim 1, wherein
the input data comprises identification information of the target content entry, the identification information of the target content entry comprises at least part of the content of the target content entry, and different content entries correspond to different identification information;
or
the input data comprises identification information of the target content entry and mask information for masking part of the content in the identification information of the target content entry.
8. The ternary content addressable memory of any one of claims 1-7, wherein the ternary content addressable memory processes a plurality of input data in parallel through the receiving unit, the comparison unit, and the reading unit, and outputs a plurality of target processing action entries corresponding to the plurality of input data.
9. A data processing method, comprising:
receiving input data;
comparing the input data with a plurality of content entries and determining, from the plurality of content entries, a target content entry that matches the input data; and
reading a target processing action entry corresponding to the target content entry and outputting the target processing action entry.
10. An electronic device, comprising:
at least one ternary content addressable memory according to any one of claims 1-8;
a control unit configured to send input data to the ternary content addressable memory; and
a logic processing unit configured to receive the target processing action entry output by the ternary content addressable memory and execute a processing action according to the target processing action entry.
11. The electronic device of claim 10, wherein the electronic device comprises M ternary content addressable memories, M being a positive integer greater than or equal to 2;
the control unit is configured to send input data contained in a network packet to the M ternary content addressable memories, the network packet containing M input data that need to be processed in series, each ternary content addressable memory requiring P clock cycles to process one input data of the network packet and the logic processing unit requiring Q clock cycles to execute a processing action according to the target processing action entry, P being a positive integer greater than or equal to 2 and Q being a positive integer greater than or equal to 1; and
each ternary content addressable memory is configured to process P+Q input data in parallel.
12. A system on a chip, comprising:
at least one ternary content addressable memory according to any one of claims 1-8;
a control module configured to send input data to the ternary content addressable memory; and
a logic processing module configured to receive the target processing action entry output by the ternary content addressable memory and execute a processing action according to the target processing action entry.
13. The system on a chip of claim 12, wherein the system on a chip comprises M ternary content addressable memories, M being a positive integer greater than or equal to 2;
the control module is configured to send input data contained in a network packet to the M ternary content addressable memories, the network packet containing M input data that need to be processed in series, each ternary content addressable memory requiring P clock cycles to process the M input data contained in the network packet, P being a positive integer greater than or equal to 2; and
each ternary content addressable memory is configured to process P input data in parallel.
14. A data center, comprising the electronic device of any one of claims 10-11 or the system on a chip of any one of claims 12-13.
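As a non-authoritative sketch of the mechanism described in claims 5-7 (all names here are invented for illustration and do not appear in the patent), the hit information can be modeled as an N-bit vector in which bit i records whether the i-th content entry matches the masked input data; the reading unit then selects the action entry corresponding to a set bit, or a default entry when the vector is all zeros, as in claim 6.

```python
# Sketch of claim 5's N-bit hit vector and claim 7's masked matching.
# Function and variable names are illustrative, not taken from the patent.

def hit_vector(content_entries, key):
    """Return an integer whose i-th bit is 1 iff content entry i matches.
    Each content entry is a (value, mask) pair; mask bits set to 0 are
    "don't care" positions, playing the role of claim 7's mask information."""
    bits = 0
    for i, (value, mask) in enumerate(content_entries):
        if (key & mask) == (value & mask):
            bits |= 1 << i
    return bits

def read_action(hit_bits, action_entries, default_action):
    """Reading unit: pick the action entry for the first set hit bit, or
    the default processing action entry when nothing matched (claim 6)."""
    if hit_bits == 0:
        return default_action
    index = (hit_bits & -hit_bits).bit_length() - 1  # lowest set bit
    return action_entries[index]

entries = [(0b1010_0000, 0b1111_0000),   # matches keys 0b1010_xxxx
           (0b0000_0011, 0b0000_1111)]   # matches keys xxxx_0011
actions = ["action_A", "action_B"]

bits = hit_vector(entries, 0b1010_0111)  # matches entry 0 only
print(bin(bits), read_action(bits, actions, "default"))
```

In hardware the hit vector is simply the N match lines of the comparison buffer, so computing it costs no extra cycle; the sketch only makes the bit-per-entry correspondence of claim 5 explicit.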
CN202311206373.XA 2023-09-18 2023-09-18 Tri-state content addressable memory, electronic device, system on chip, and related methods Pending CN117235317A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311206373.XA CN117235317A (en) 2023-09-18 2023-09-18 Tri-state content addressable memory, electronic device, system on chip, and related methods


Publications (1)

Publication Number Publication Date
CN117235317A true CN117235317A (en) 2023-12-15

Family

ID=89089108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311206373.XA Pending CN117235317A (en) 2023-09-18 2023-09-18 Tri-state content addressable memory, electronic device, system on chip, and related methods

Country Status (1)

Country Link
CN (1) CN117235317A (en)

Similar Documents

Publication Publication Date Title
US11102120B2 (en) Storing keys with variable sizes in a multi-bank database
Li et al. Packet forwarding in named data networking requirements and survey of solutions
US10469235B2 (en) Methods and systems for network address lookup engines
Yu et al. Efficient multimatch packet classification and lookup with TCAM
Che et al. DRES: Dynamic range encoding scheme for TCAM coprocessors
US9424366B1 (en) Reducing power consumption in ternary content addressable memory (TCAM)
US6430190B1 (en) Method and apparatus for message routing, including a content addressable memory
US7680806B2 (en) Reducing overflow of hash table entries
US11362948B2 (en) Exact match and ternary content addressable memory (TCAM) hybrid lookup for network device
US6987683B2 (en) Magnitude comparator based content addressable memory for search and sorting
Ghasemi et al. A fast and memory-efficient trie structure for name-based packet forwarding
US9210082B2 (en) High speed network bridging
CN112667526B (en) Method and circuit for realizing access control list circuit
Lee et al. Bundle-updatable SRAM-based TCAM design for openflow-compliant packet processor
CN111984835A (en) IPv4 mask quintuple rule storage compression method and device
EP3964966B1 (en) Message matching table lookup method, system, storage medium, and terminal
Wang et al. Statistical optimal hash-based longest prefix match
CN111819552B (en) Access control list management method and device
CN117235317A (en) Tri-state content addressable memory, electronic device, system on chip, and related methods
Kuo et al. A memory-efficient TCAM coprocessor for IPv4/IPv6 routing table update
US7523251B2 (en) Quaternary content-addressable memory
CN112653639B (en) IPv6 message fragment recombination method based on multi-thread interactive processing
Kogan et al. Efficient FIB representations on distributed platforms
Saxena et al. Scalable, high-speed on-chip-based NDN name forwarding using FPGA
CN112965970A (en) Abnormal flow parallel detection method and system based on Hash algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination