US8195873B2 - Ternary content-addressable memory - Google Patents
- Publication number
- US8195873B2 (U.S. application Ser. No. 12/322,794)
- Authority
- US
- United States
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/903—Querying
- G06F16/90335—Query processing
- G06F16/90339—Query processing by using parallel associative memories or content-addressable memories
Definitions
- the present invention generally relates to ternary content-addressable memory. More particularly, the present invention relates to a low-heat, large-scale ternary content-addressable memory using hash. Compression may also be used to further enhance results but is not required.
- CAM Content-addressable memory
- associative memory is a type of memory used for high-speed computer searches.
- CAMs typically comprise one or more arrays; each array comprises a large number of entries. Each entry, in turn, comprises information to be used in a search or comparison involving one or more input records. One or more input record entries can be compared in parallel.
- a CAM performs comparison (i.e., exclusive-OR or equivalent) operations at the bit level; CAMs fundamentally constitute an array of linked exclusive-OR gates.
- the results of comparing a group of bits in words or entries in the CAM storage are transmitted to a processing unit; a CAM can thus be viewed as comprising a number of bit-serial processing elements.
- Binary content-addressable memory employs search terms composed entirely of 1s and 0s.
- Ternary content-addressable memory employs search terms comprising 1s, 0s, and a third state of “X” or a so-called “Don't Care” bit.
- the “X” or “Don't Care” bit is a bit whose value is of no relevance to the search being conducted. The X bit is thus determined based on the search interests of the user.
- TCAMs are used in a number of applications including network routing tables, database engines, data compression, neural networks, intrusion prevention systems, central processing unit (CPU) cache controllers, and translation look-aside buffers.
- TCAMs to date have suffered from large use of resources. X bits are typically not eliminated from the searched entries and a TCAM controller searches all TCAM entries or a substantial portion of the TCAM entries. Silicon usage is large due to suboptimal processing of records and entries to remove as much data as possible that is not essential to the search process. Excessive generation of heat has also limited the compactness and speed available.
- a method for generating an output reporting a success or failure in comparing an input with a set of entries in a ternary content-addressable memory comprising the steps of removing X bits from the set of entries to create one or more retained entry bit sets; storing the one or more retained entry bit sets into the TCAM; selecting, in response to instructions executed by a central processing unit, a set of retained input bits and one or more corresponding sets of retained entry bits; determining, by a digital comparator, whether a match exists between the retained input bit set and the corresponding retained entry bit set, a match indicating success and no match indicating a failure; and generating, by the digital comparator, output reporting the comparison as success or failure.
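The claimed method can be pictured in software (a minimal illustrative sketch only; `build_tcam`, `search`, the position list, and the use of Python's built-in `hash` are hypothetical stand-ins for the claimed hardware components):

```python
def build_tcam(entries, retained_positions, num_lines):
    """Store each entry's retained (non-X) bits in the RAM line chosen by a hash.
    retained_positions are 1-indexed positions known to hold non-X bits."""
    tcam = {i: [] for i in range(num_lines)}
    for entry in entries:
        retained = "".join(entry[p - 1] for p in retained_positions)
        tcam[hash(retained) % num_lines].append(retained)
    return tcam

def search(tcam, record, retained_positions, num_lines):
    """Compare the retained input bits only against entries in the matching RAM line."""
    retained = "".join(record[p - 1] for p in retained_positions)
    return any(e == retained for e in tcam[hash(retained) % num_lines])
```

Only the one RAM line selected by the hash is searched, which is the source of the claimed savings in comparisons.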
- TCAM ternary content-addressable memory
- a method for generating an output reporting a success or failure in comparing an input with a set of entries in a TCAM comprising the steps of removing X bits from the set of entries to create one or more retained entry bit sets; compressing, in response to instructions executed by a central processing unit, the one or more retained entry bit sets to create one or more sets of fields including retained entry bits; storing the sets of fields into a TCAM; selecting, in response to instructions executed by the central processing unit, sets of retained input bits with positions that correspond to the sets of fields; comparing, by a digital comparator linked to a field processor in an ordered series of one or more field processors, sets of retained input bits with corresponding sets of fields; determining, by the digital comparator, whether a match exists between one or more sets of retained input bits and the corresponding sets of fields, the existence of a match indicating a success and the lack of a match indicating a failure; reporting the successes and failures to a TCAM controller by the one or more field processors; and collating, by the TCAM controller, the reported successes and failures into an output reporting the comparison as a success or a failure.
- a method for generating an output reporting a success or failure in comparing an input with a set of entries in a TCAM comprising the steps of removing X bits from the set of entries to create one or more retained entry bit sets; compressing, in response to instructions executed by a central processing unit, the one or more retained entry bit sets to create one or more sets of triplets; storing the sets of triplets into a TCAM; selecting, in response to instructions executed by the central processing unit, sets of retained input bits with positions that correspond to the sets of triplets; comparing, by a digital comparator linked to a field processor in an ordered series of one or more field processors, sets of retained input bits with corresponding sets of triplets; determining, by the digital comparator, whether a match exists between one or more sets of retained input bits and the corresponding sets of triplets, the existence of a match indicating a success and the lack of a match indicating a failure; reporting the successes and failures to a TCAM controller by the one or more field processors; and collating, by the TCAM controller, the reported successes and failures into an output reporting the comparison as a success or a failure.
- a method for generating an output reporting a success or failure in comparing an input with a set of entries in a TCAM comprising storage lines having a storage limit comprising the following steps: removing X bits from the set of entries to create one or more retained entries; using one or more hash functions, converting the retained entries into hashed entries comprising hashed entry bit sets, so that the largest number of hashed entry bits is less than or equal to the storage limit; determining a number of hashed entry values for each hashed entry bit set; using a bin-packing algorithm, allocating an optimized number of storage lines to store the one or more hashed entries into the TCAM; storing the one or more hashed entries into the TCAM; selecting, in response to instructions executed by a central processing unit, a set of retained input bits and one or more corresponding sets of hashed entry bits; determining, by a digital comparator, whether a match exists between the retained input bit set and the corresponding hashed entry bit set, a match indicating success and no match indicating a failure; and generating, by the digital comparator, output reporting the comparison as success or failure.
- a computer-readable storage medium having embodied thereon a program, the program being executable by a computer to perform a method for efficiently comparing an input with a set of entries in a TCAM, the method comprising the steps of removing X bits from the set of entries to create one or more retained entry bit sets; storing the one or more retained entry bit sets into the TCAM; selecting a set of retained input bits and one or more corresponding sets of retained entry bits; determining whether a match exists between the retained input bit set and the corresponding retained entry bit set, a match indicating success and a lack of a match indicating failure; and generating output reporting the comparison as success or failure.
- FIG. 1A illustrates processing and storing an entry and an input pursuant to a TCAM using hash without compression.
- FIG. 1B illustrates processing and storing an entry and an input pursuant to a TCAM using compression and hash.
- FIG. 1C illustrates comparing an input line against a matching TCAM entry line pursuant to a TCAM using compression and hash.
- FIG. 1D illustrates comparing an input line against a non-matching TCAM entry line pursuant to a TCAM using compression and hash.
- FIG. 2 illustrates a system for comparing an input with a set of entries.
- FIG. 3 illustrates generation of a triplet of TCAM fields.
- FIG. 4 illustrates a flow chart for a TCAM using compression and hash.
- FIG. 5 illustrates a flow chart for a TCAM using compression, hash, and field processors.
- FIG. 6 illustrates a flow chart for a TCAM using compression, hash, and bin-packing.
- Embodiments of the invention offer a system and method whereby X bits do not have to be checked and therefore do not have to be remembered. The number of entries that must be checked in a search is thereby greatly reduced.
- X bits are selected based on a user's search criteria as bits not relevant to the search criteria of the user. X bits are eliminated by a central processing unit. In certain embodiments, only a subset of the non-X entry bits is retained and used with the central processing unit eliminating the non-retained entry bits. Searches only need to be performed on TCAM entries in the RAM line or RAM lines corresponding to an input or inputs of interest.
- An optional compression step reduces the number of bits that must be checked per entry. Memory requirements are reduced because X bits need not be retained and, in some embodiments, not all non-X bits need be retained.
- Embodiments of the invention may allow for increased computational speed relative to existing systems.
- RAM is optimized by eliminating X bits, by optionally compressing entries, and by hashing both entries and inputs, so that little heat is generated. Because of the X bit, conventional TCAM uses a theoretical minimum of 1.5 RAM bits per entry bit, which in practice typically amounts to two RAM bits.
- An optional compression operation applies a static function to the entry data to compress the number of compressed entry bits.
- the optional compression process of selecting compressed entry bits streamlines data processing and reduces required memory.
- the optional data compression process is followed by a hash operation on the compressed entry bits.
- the entry bits are then saved in a ternary content-addressable memory (TCAM). Entries that produce the same output from the hashing function or hash table will be saved in the same RAM line.
- TCAM ternary content-addressable memory
- Near-optimal hashing uses bin packing principles to allocate the TCAM entries across the different possible RAM lines so that each RAM line contains approximately the same number of entries, with no RAM line containing more entries than its capacity permits. This approach saves greatly on memory space and heat while not causing substantial losses in accuracy or speed of results.
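The allocation step can be illustrated with a simple first-fit-decreasing heuristic (a hedged sketch: the patent does not prescribe a particular bin-packing algorithm, and `pack_entries` is a hypothetical helper):

```python
def pack_entries(entry_counts, capacity):
    """Assign hash buckets (bucket_id -> number of entries) to RAM lines
    using first-fit decreasing, so no line exceeds its capacity.
    Returns a list of [remaining_capacity, [bucket_ids]] per RAM line."""
    lines = []
    # Place the fullest buckets first; ties keep dictionary order.
    for bucket, count in sorted(entry_counts.items(), key=lambda kv: -kv[1]):
        if count > capacity:
            raise ValueError("bucket %r exceeds the line capacity" % bucket)
        for line in lines:
            if line[0] >= count:          # first line with enough room
                line[0] -= count
                line[1].append(bucket)
                break
        else:                             # no existing line fits: open a new one
            lines.append([capacity - count, [bucket]])
    return lines
```

First-fit decreasing is a classic near-optimal bin-packing heuristic, which matches the document's goal of roughly balanced RAM lines without exceeding any line's capacity.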
- FIG. 1A illustrates processing and storing an entry pursuant to a TCAM using hash without compression.
- the entries are entered, hashed, and then stored as TCAM entries in the RAM line of TCAM storage corresponding to the matching hash key.
- Entry 110 which includes entry bits 115 , is entered into central processing unit 120 .
- entry 110 includes 72 entry bits 115 .
- Central processing unit 120 removes X bits from entry 110 , retaining a number N of non-X bits as retained entry 130 comprising retained entry bits 135 .
- Retained entry 130 includes a more manageable number N of retained entry bits 135 versus the larger number of bits from entry 110 .
- N=60.
- N can be preset by a user or otherwise predetermined. N can also be selected during operation according to a preset algorithm. 60 bits of the 72 entry bits 115 in entry 110 are retained as retained entry bits 135 in retained entry 130 .
- None of the retained entry bits 135 is an X bit.
- the first ten retained entry bits 135 are the entry bits 115 A- 115 C, 115 I- 115 K, and 115 O- 115 R.
- Entry bits 115 D, 115 F, 115 H, and 115 L- 115 N are X bits so they are omitted from retained entry 130 .
- Entry bits 115 E and 115 G, while not X bits, are among the entry bits not retained in retained entry 130 . This example is chosen purely for illustrative purposes; more complex retention algorithms may be used.
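The retention step above might be sketched as follows (illustrative only; `retain_bits` and the 1-indexed position list are assumptions, not the patent's implementation):

```python
def retain_bits(entry, retained_positions):
    """Keep only the selected non-X positions (1-indexed) of a TCAM entry,
    given as a string over '0', '1', and 'X'. Positions holding an X bit
    must never appear in retained_positions."""
    bits = []
    for pos in retained_positions:
        b = entry[pos - 1]
        assert b in "01", "X bits are never retained"
        bits.append(b)
    return "".join(bits)
```

Using the example entry string from FIG. 3 and the positions of bits 115A-115C, 115I-115K, and 115O-115R, the first ten retained bits come out as "0000011000".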
- Retained entry 130 next enters into central processing unit 120 .
- Using one or more of a hash table, a set of one or more hashing functions, and direct hashing bits, central processing unit 120 generates a hash key 150 that fits the retained entry bits 135 in retained entry 130 .
- the 60 retained entry bits 135 are transformed into hash key 150 including 10 entry hash key bits 155 . Since entry hash key bits 155 are generated from retained entry bits 135 , which are not X bits, entry hash key bits 155 are also not X bits.
- Central processing unit 120 directs the retained entry 130 matching hash key 150 to TCAM storage 160 .
- Retained entries 130 are saved as TCAM entries 162 A, 162 B, 162 C, 162 D . . . in one of the RAM lines 165 in TCAM storage 160 .
- Each RAM line 165 matches one hash key 150 .
- TCAM entries are also known as word lines.
- TCAM storage 160 comprises an array of RAM lines 165 .
- RAM lines 165 can hold any number of TCAM entries 162 comprising retained entry bits 135 .
- TCAM entry 162 A comprises the same bits as retained entry 130 .
- Another TCAM entry 162 B fits the same hash key and therefore is stored in the same RAM line 165 .
- TCAM entries 162 C and 162 D are stored in other RAM lines 165 .
- TCAM controller 167 conducts searches of TCAM storage 160 .
- Each of the entry hash key bits 155 comprising hash key 150 will not be an X bit.
- a hash function transforms a set of retained entry bits into an index of a line in the RAM.
- the hash functions make no use of X bits.
- the TCAM controller 167 selects one or more hash functions that will distribute the entries in an approximately pseudo-random fashion so that each line of RAM will have approximately the same number of entries. Approximately the same number of zeroes and ones will be generated with X bits minimized.
- one or more hash tables can be used.
- If hash functions are employed, a small number of hash functions (around eight) may be sufficient to adequately disperse the entries into different hash keys while not unduly slowing the speed of the process. Each hash function can then be relatively expensive to implement without unduly influencing the total size or cost of the system.
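One way to picture such a small family of hash functions (a sketch; `ram_line_index`, the SHA-256 construction, and the `seed` parameter standing in for the ~8 candidate functions are all hypothetical choices, not the patent's):

```python
import hashlib

def ram_line_index(retained_bits, num_lines, seed=0):
    """Map a string of retained (non-X) bits to a RAM-line index.
    The seed selects one member of a small family of candidate hash
    functions; the controller would pick the one that best balances
    entries across RAM lines."""
    h = hashlib.sha256(bytes([seed]) + retained_bits.encode()).digest()
    return int.from_bytes(h[:4], "big") % num_lines
```

Because the hash is computed only over retained bits, it makes no use of X bits, matching the constraint stated above.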
- a RAM line 165 can hold any number of retained entries 130 .
- TCAM storage 160 thus stores an array of retained entries 130 with each TCAM entry 162 matching entry hash key bits 155 with each entry hash key bit 155 having a corresponding position and also a corresponding value.
- an input is entered. Retained input bits are determined and the input is hashed and then stored as a TCAM input in the RAM line of TCAM storage corresponding to the matching hash key. Due to X bits, more than one hash key may be generated for a given input, and therefore there may be more than one matching hash key. In that event, a TCAM input will be stored in the RAM lines corresponding to the matching hash keys.
- Input record 170 comprising input bits 175 A- 175 X . . . , is entered into central processing unit 120 .
- Input record 170 comprises, in this example, 60 input bits 175 .
- Input record 170 passes through central processing unit 120 .
- Central processing unit 120 selects a set of input bits 185 that corresponds in position to the retained entry bits 135 .
- all 60 of the input bits 175 in input record 170 are retained as retained input 180 comprising retained input bits 185 .
- the retained input bits 185 each have a corresponding position and a corresponding value.
- Retained input 180 next enters into central processing unit 120 .
- Using one or more of a hash table, a set of one or more hashing functions, or direct hashing bits, central processing unit 120 generates an input hash key 190 that fits the retained input bits 185 in retained input 180 .
- input hash key 190 comprises 10 input hash key bits 195 and is the same as entry hash key 150 .
- Central processing unit 120 directs the retained input 180 matching input hash key 190 to TCAM storage 160 .
- Retained input 180 is saved as TCAM input 197 A in a RAM line 165 in TCAM storage 160 that matches input hash key 190 .
- TCAM input 197 A comprises the same bits as retained input 180 .
- TCAM controller 167 compares TCAM inputs 197 A with TCAM entries 162 derived from retained entries 130 that share a matching hash key with retained input 180 .
- Although FIGS. 1C and 1D present a comparison of a compressed TCAM input and a compressed TCAM entry, the process is similar for comparison of an uncompressed TCAM input and an uncompressed TCAM entry.
- FIG. 1B illustrates processing and storing an entry pursuant to a TCAM using compression and hash.
- the entries are entered, have compression performed on them, are hashed, and then are stored as TCAM entries in the RAM line of TCAM storage corresponding to the matching hash key.
- the compression process uses compression algorithms and techniques well known in the art.
- entry 110 comprising entry bits 115 , is entered into central processing unit 120 .
- entry 110 comprises 72 entry bits 115 .
- Central processing unit 120 removes X bits from entry 110 and compresses entry 110 to retained entry 130 .
- retained entry 130 comprises a more manageable number N of retained entry bits 135 versus the larger number of bits from entry 110 . Compression also generally reduces the number N relative to the non-compressed case in FIG. 1A .
- N=16.
- N can be preset by a user or otherwise predetermined. N can also be selected during operation according to a preset algorithm. After compression, 16 bits of the 72 entry bits 115 comprised in entry 110 are retained as retained entry bits 135 comprised in retained entry 130 .
- the 16 retained entry bits 135 are the entry bits 115 A- 115 C, 115 I- 115 K, 115 O- 115 R, 115 T, 115 V- 115 X, and 115 AA- 115 BB.
- Entry bits 115 D, 115 F, 115 H, 115 L- 115 N, 115 S, 115 U, and 115 Y- 115 Z are X bits so they are omitted from retained entry 130 .
- Entry bits 115 E and 115 G, while not X bits, are among the entry bits not retained in retained entry 130 .
- Retained entry 130 next enters into central processing unit 120 .
- Using one or more of a hash table, a set of one or more hashing functions, and direct hashing bits, central processing unit 120 generates a hash key 150 that fits the retained entry bits 135 comprised in retained entry 130 .
- the 16 retained entry bits 135 are transformed into hash key 150 comprising 10 entry hash key bits 155 . Since entry hash key bits 155 are generated from retained entry bits 135 , which are not X bits, entry hash key bits 155 are also not X bits.
- Central processing unit 120 directs the retained entry 130 matching hash key 150 to TCAM storage 160 .
- Retained entries 130 are saved as TCAM entries 162 E, 162 F, 162 G, 162 H . . . in one of the RAM lines 165 in TCAM storage 160 .
- Each RAM line 165 matches one hash key 150 .
- each of the 16 retained entry bits has two possible states (0 or 1), so the number of possible TCAM entries 162 is 2^16 .
- TCAM entry 162 E comprises the same bits as retained entry 130 .
- Another TCAM entry 162 F fits the same hash key and therefore is stored in the same RAM line 165 .
- TCAM entries 162 G and 162 H are stored in other RAM lines 165 .
- TCAM controller 167 conducts searches of TCAM storage 160 .
- an input is entered. As in FIG. 1A , retained input bits are determined and the input is hashed, and then stored as a TCAM input in the RAM line of TCAM storage corresponding to the matching hash key.
- Input record 170 comprising input bits 175 , is entered into central processing unit 120 .
- Input record 170 comprises, in this example, 60 input bits 175 .
- Input record 170 passes through central processing unit 120 .
- Central processing unit 120 selects a set of input bits 185 that corresponds in position to the retained entry bits 135 .
- Central processing unit 120 does not, however, compress input record 170 .
- the number of input bits 185 can equal N, the number of retained entry bits 135 .
- N=16.
- 16 bits of the 60 input bits 175 in input record 170 are retained as retained input 180 comprising retained input bits 185 .
- the retained input bits 185 each have a corresponding position and a corresponding value.
- the 16 retained input bits 185 are input bits 175 A- 175 E, 175 G, 175 K- 175 N, 175 R- 175 V, and 175 X.
- Retained input 180 next enters into central processing unit 120 .
- Using one or more of a hash table, a set of one or more hashing functions, or direct hashing bits, central processing unit 120 generates an input hash key 190 that fits the retained input bits 185 in retained input 180 .
- input hash key 190 comprises 10 input hash key bits 195 and is the same as entry hash key 150 .
- Central processing unit 120 directs the retained input 180 matching input hash key 190 to TCAM storage 160 .
- Retained input 180 is saved as TCAM input 197 B in a RAM line 165 in TCAM storage 160 that matches input hash key 190 .
- TCAM input 197 B comprises the same bits as retained input 180 .
- TCAM controller 167 compares retained input 180 with TCAM entries 162 derived from retained entries 130 that share a matching hash key with retained input 180 .
- FIGS. 1C and 1D present the process whereby the TCAM controller 167 (not pictured) compares TCAM entry 162 from TCAM storage 160 against the TCAM input 197 that occupies the same hash key.
- a digital comparator coupled to a truth table and comprised in TCAM controller 167 (not pictured) selects the TCAM entries 162 that share the same hash key as the corresponding TCAM input 197 .
- the digital comparator compares the TCAM entry 162 with the TCAM input 197 . That is, the digital comparator then compares retained entry bits 135 of retained entry 130 with the retained input bits 185 of retained input 180 that have a corresponding position. In case of a match between all the compared pairs of bits for input and entry sharing the same hash key, a success is reported. In case of a failure, the failure is reported. According to alternative embodiments, failures need not be reported.
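The comparator's decision rule can be sketched as follows (illustrative; `compare` is a hypothetical helper, and the real comparison happens in hardware — note that an entry X bit matches anything, although retained entries normally contain no X bits):

```python
def compare(entry_bits, input_bits):
    """Success iff every entry bit matches the input bit in the same
    position; an 'X' in the entry (normally already removed during
    compression) matches any input bit."""
    if len(entry_bits) != len(input_bits):
        return False
    return all(e == "X" or e == i for e, i in zip(entry_bits, input_bits))
```

Applied to the values in FIGS. 1C and 1D, the entry "0000011000100011" matches the input, while "0000011010100011" fails on the ninth bit.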
- the top line depicts an exemplary TCAM input 197 B after an input record goes through the compression and hash processes according to embodiments of the invention.
- TCAM entry 162 E is “0000011000100011.”
- TCAM input 197 B is the same as in FIG. 1B , “0000011000100011.”
- each bit of TCAM input 197 B matches the corresponding bit of TCAM entry 162 E. This matching process is therefore a success and will be reported as such. If the corresponding bit of TCAM entry 162 E were an X bit, this would also be considered a match. However, normally this would not occur because, as stated, the X bits are eliminated from the entries in the compression process.
- In FIG. 1D , the top line depicts exemplary TCAM input 197 B after an input record goes through the compression and hash processes according to embodiments of the invention.
- TCAM entry 162 F is “0000011010100011.”
- TCAM input 197 B is again as in FIG. 1B , “0000011000100011.”
- each bit of TCAM input 197 B matches the corresponding bit of TCAM entry 162 F, except for the bit in the ninth position, which does not match. This comparison process is therefore a failure and will be reported as such.
- a TCAM 160 may be 100 MB in size but only 100 KB of it will need to be compared against hashed input record 190 as only the one or more TCAM entries 162 that match the corresponding RAM line 165 need be compared.
- embodiments of the invention offer a robust, low-heat, large-scale system for searching an input against stored TCAM values.
- FIG. 2 illustrates a system 200 designed to efficiently compare an input with one or more TCAM entries.
- the system 200 comprises central processing unit 120 , TCAM storage 160 , TCAM controller 167 , digital comparator 205 , memory 210 , an input 230 , and an output 240 .
- Central processing unit 120 retrieves stored hashed input 190 from memory 210 via memory bus 265 .
- a TCAM bus 275 allows TCAM storage 160 to communicate and exchange data with CPU 120 .
- a TCAM controller bus 290 allows TCAM controller 167 to communicate and exchange data with central processing unit 120 .
- a digital comparator 205 is coupled to a truth table comprised in memory 210 .
- Digital comparator 205 compares retained entry bits 135 of retained entry 130 , which are stored in TCAM storage 160 with retained input bits 185 of retained input 180 , which are stored in memory 210 . If one or more of the retained input bits 185 does not match the corresponding retained entry bit 135 , then a failure has occurred. If they all match, a success has occurred. Failures and successes are reported to TCAM controller 167 and then potentially reported, displayed, or otherwise communicated using output 240 . Alternatively, failures need not be reported to TCAM controller 167 .
- central processing unit 120 can be located externally to system 200 .
- the central processing unit performing the compression and the central processing unit performing the hash can be the same device, in which case the compression operation may be a part of the hash operation or may be a separate operation.
- the CPU codes entries into records of variable length.
- the entries may comprise metadata and compressed data.
- the compressed data may comprise one or more corresponding sets of triplets. In that case, the compressed data can comprise a long string of triplets of: 1) position of start of current string of retained entry bits; 2) number of retained entry bits in current string of retained entry bits; and 3) retained entry bit data in current string.
- the first field of the triplet can provide information on the start position of the current string of retained entry bits.
- the first field can be either a relative number compared to the previous triplet or an absolute number. If the former, it can comprise a length from the start of the current string of retained entry bits to the start of the next string of retained entry bits. In that case, extra hardware will be needed for fast calculation of absolute position, but some bits will be saved.
- the limiting factor for the hardware is compressed size, i.e., number of significant or “live” bits and metadata, not uncompressed size or total bits.
- FIG. 3 provides a schematic example of generation of a triplet by central processing unit 120 (not pictured).
- an exemplary portion of an entry 110 comprises entry bits 115 A, 115 B . . . and is illustrated as “000X0X1X001XXX1000X.”
- the first eight bits 115 A- 115 H of the entry 110 are, in the case of FIG. 3A , presumed to have previously been processed into triplets.
- central processing unit 120 inserts breaks between X bits and non-X bits to divide up the significant data that will be transformed into triplets from the X bits that will be ignored. This produces the result “000|X|0|X|1|X|001|XXX|1000|X.”
- central processing unit 120 processes the start bit of the next triplet, which is the first non-X entry bit that has not yet been processed, entry bit 115 I.
- the triplet is derived based on the set of non-X entry bits beginning with entry bit 115 I. In this case, the triplet is derived based on a set comprising three non-X entry bits, entry bits 115 I, 115 J, and 115 K. For simplicity, entry 110 is no longer depicted nor is the portion of the entry 110 that has already been processed depicted.
- central processing unit 120 calculates the first triplet. Triplets do not include X bits. Entry bit 115 I, which starts the set of three entry bits, is in the ninth position. The number of non-X bits in the first set of non-X bits starting with the ninth entry bit 115 is 3, and the data is 001. Accordingly, the first triplet generated in this example is (9, 3, 001).
- central processing unit 120 calculates the second triplet. Again the process skips over the X bits. The start position for the second triplet is, as can be seen from the figure, the next non-X entry bit that has not yet been processed, entry bit 115 O, which is in the fifteenth position. The triplet is derived based on the set of non-X entry bits beginning with entry bit 115 O.
- the triplet is derived based on a set comprising four non-X entry bits, entry bits 115 O, 115 P, 115 Q, and 115 R.
- the number of non-X bits in the second set of non-X bits is 4, and the data is 1000. Accordingly, the second and final triplet generated in this schematic example is (15, 4, 1000).
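The triplet generation walked through above can be sketched as follows (a minimal illustration; `to_triplets` is a hypothetical helper using 1-indexed absolute start positions):

```python
def to_triplets(entry):
    """Split an entry such as '000X0X1X001XXX1000X' into
    (start_position, length, bits) triplets over its non-X runs."""
    triplets, i, n = [], 0, len(entry)
    while i < n:
        if entry[i] == "X":
            i += 1               # X bits are skipped, never encoded
            continue
        j = i
        while j < n and entry[j] != "X":
            j += 1               # extend the current non-X run
        triplets.append((i + 1, j - i, entry[i:j]))
        i = j
    return triplets
```

On the FIG. 3 example this yields (9, 3, "001") and (15, 4, "1000") for the runs discussed above, preceded by the triplets for the first eight bits.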
- If more than one entry matches the input, the logic of the TCAM controller will need to decide which one to pick.
- the choice is made based on priorities.
- a priority is associated with each record that determines which one is evaluated first.
- the entries are arranged in order of priority and the highest priority entry can be evaluated first.
- the priorities tree is static in time; that is, no comparison operation is needed to decide the “winner.” Entries may be prioritized in the order of their entry into memory, or in the order of their first bit number. Alternatively, priority can be calculated while the comparison of the input and entries is performed.
- Relative priorities can be stored in a compressed form. Relative priorities can be used in place of absolute priorities. They can either be calculated sequentially or by a tree structure. Resources will then be used to compute actual priorities from relative priorities, but the savings in memory may compensate. Alternatively, a relative priority can have two possible values, a maximum relative priority and a typical relative priority.
- the maximum relative priority can be a global property of the chip or a property of a specific RAM line.
- the maximum relative priority can equal the number of bits needed to represent the maximum priority value. For example, the number of bits needed to represent the maximum priority difference between the ith and jth entries is on the order of the base-2 logarithm of that difference.
- Priorities can be determined using a predetermined priority determination protocol. One possible priority determination protocol is a predetermined order of priority.
- predetermined priorities for a set of nine entries are: 6, 100, 102, 2000, 10000, 13456, 22678, 63543, 64000.
- expressed in binary, the relative priorities (the differences between successive predetermined priorities) are then: 110, 1011110, 10, 11101101010, 1111101000000, 110110000000, 10010000000110, 1001111110100001, 111001001.
- Relative priorities equal to the priority differences between successive entries are: 6, 94, 2, 1898, 8000, 3456, 9222, 40865, 457.
- the number of bits required to represent these relative priorities is: 3, 7, 2, 11, 13, 12, 14, 16, 9.
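The relative priorities and bit counts in this example can be verified with a short sketch (Python chosen for illustration; not part of the patent):

```python
def relative_priorities(priorities):
    """Differences between successive entries; the first entry's
    relative priority is its absolute value."""
    out, prev = [], 0
    for p in priorities:
        out.append(p - prev)
        prev = p
    return out

pri = [6, 100, 102, 2000, 10000, 13456, 22678, 63543, 64000]
rel = relative_priorities(pri)
print(rel)                            # [6, 94, 2, 1898, 8000, 3456, 9222, 40865, 457]
print([r.bit_length() for r in rel])  # [3, 7, 2, 11, 13, 12, 14, 16, 9]
print([format(r, 'b') for r in rel])  # the binary forms listed in the text
```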
- a field processor compares the data in the input to the data in the significant (non-X) entry bits. Success is reported if each significant entry bit matches the corresponding input bit; operationally, success occurs if the data in each triplet matches the corresponding input data. Otherwise, a failure has occurred and is reported. Alternatively, failures need not be reported.
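Assuming the triplet representation described earlier, the field processor's comparison can be sketched as follows (a software model; positions are 1-indexed as in the example):

```python
def triplet_match(triplets, input_bits):
    """Success iff the data of every (start, length, data) triplet equals
    the input bits at the corresponding positions; X positions, which do
    not appear in any triplet, are never compared."""
    for start, length, data in triplets:
        if input_bits[start - 1:start - 1 + length] != data:
            return False
    return True

triplets = [(9, 3, '001'), (15, 4, '1000')]
print(triplet_match(triplets, '110101010011011000'))  # True: both fields match
print(triplet_match(triplets, '110101010111011000'))  # False: position 10 differs
```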
- a network of linked AND gates each connected to a corresponding digital comparator, jointly processes the results of comparing the triplets with corresponding portions of a given input record.
- a key issue for a real-time TCAM is to break the input into a structure that lets every field processor access the data it needs to process.
- Two exemplary methods to achieve the same are: 1) Pad fields of a size less than a fixed length whenever necessary. Set a fixed length k for every field and pad the extra bits. 2) Place bits determining triplet length, which can be called “length bits,” in predetermined positions. For example, bits in positions 0, k, 2*k, . . . can give the length of a triplet, i.e., the number of retained entry bits in the corresponding string of retained entry bits. The processor will know to “skip” appropriately so as to process the length bits in the positions 0, k, 2*k . . . . In this case, the bits in positions other than 0, k, 2*k . . . will contain the other two pieces of each triplet: position of start of current string of retained entry bits; and retained entry bit data in current string.
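Method 2 can be modeled in software as follows. The block size K, the 4 length bits, and the 8 position bits are illustrative assumptions, not values given in the text:

```python
# Simplified model of method 2: each fixed-size block begins with length
# bits, followed by the start position and retained-bit data of one triplet.
K = 24        # assumed fixed block size
LEN_BITS = 4  # assumed width of the length field
POS_BITS = 8  # assumed width of the start-position field

def encode_block(start, data):
    """Pack one (start, data) triplet into a K-bit block."""
    block = (format(len(data), f'0{LEN_BITS}b')
             + format(start, f'0{POS_BITS}b')
             + data)
    return block.ljust(K, '0')  # pad the block to the fixed size

def decode_block(block):
    """Recover the (start, length, data) triplet from a block; the
    length bits tell the processor how many data bits to read."""
    length = int(block[:LEN_BITS], 2)
    start = int(block[LEN_BITS:LEN_BITS + POS_BITS], 2)
    data = block[LEN_BITS + POS_BITS:LEN_BITS + POS_BITS + length]
    return (start, length, data)

print(decode_block(encode_block(9, '001')))  # (9, 3, '001')
```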
- Both methods can work depending on hardware and timing requirements. For a small k, method 1 may be better, whereas for a large k, method 2 may be preferred. Compression is achieved through both methods. The advantage of a large k is better compression; the drawback is longer processing time. In both methods, every field processor has to process about k bits, but in method 2, if the next triplet happens to start near the end of an entry, the field processor may have to process approximately 2*k bits.
- Method 2 is straightforward for the case of a single processor. In a chip-level parallel implementation, method 2 may be more difficult to implement because input bits have to be matched to retained entry bits.
- One means of achieving this is by rotating the entry data as follows. Once the entry data has been entered, it can be copied into shift registers. For an input size of S, n shift registers will be required, each of approximate size S/n. The first shift register will contain bits 0, n, 2n, etc.; the second will contain bits 1, n+1, 2n+1, etc. Every field processor will see only one bit of every shift register in a processing cycle, and will be able to choose the retained entry bit it needs, if any, by a simple selector. To keep the time required reasonable, a limit is placed on the maximum difference in position between the first and last retained entry bit that a field processor processes, that is, on the number of shift registers with which a given field processor must interact.
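The distribution of entry bits across the n shift registers can be modeled in one line of Python (a sketch of the data layout only, not of the hardware timing):

```python
def distribute_to_shift_registers(entry_bits, n):
    """Register r holds bits r, r+n, r+2n, ... of the entry, so every
    field processor sees one bit of every register per processing cycle."""
    return [entry_bits[r::n] for r in range(n)]

# Letters stand in for entry bits to make the interleaving visible.
print(distribute_to_shift_registers('abcdefgh', 4))  # ['ae', 'bf', 'cg', 'dh']
```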
- FIG. 4 illustrates a flow chart 400 for a TCAM using compression and hash according to the invention.
- a central processing unit removes X bits from the set of entries to create one or more retained entry bit sets.
- in step 420, the central processing unit stores the retained entry bit sets into the TCAM.
- in step 430, the central processing unit selects a set of retained input bits and one or more corresponding sets of retained entry bits.
- a digital comparator determines whether a match exists between the retained input bit set and the corresponding retained entry bit set. A match indicates success and the absence of a match indicates a failure.
- in step 450, the digital comparator generates output reporting the comparison as a success or a failure.
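The steps of flow chart 400 can be sketched as a software model (a simplification for illustration; the patent describes dedicated hardware):

```python
def tcam_400_lookup(entries, input_bits):
    """Strip X bits from each entry, then compare the retained entry
    bits against the input bits at the same positions. Returns one
    success/failure flag per entry."""
    results = []
    for entry in entries:
        retained = [(i, b) for i, b in enumerate(entry) if b != 'X']
        match = all(input_bits[i] == b for i, b in retained)
        results.append(match)
    return results

entries = ['1XX0', 'X11X', '0XXX']
print(tcam_400_lookup(entries, '1110'))  # [True, True, False]
```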
- FIG. 5 illustrates a flow chart 500 using an ordered series of field processors for efficiently comparing an input with a set of TCAM entries according to the invention, in a high-level view.
- a central processing unit removes X bits from the set of entries to create one or more retained entry bit sets.
- a central processing unit converts the retained entry bit sets into sets of fields.
- the central processing unit stores the sets of fields in the TCAM.
- the central processing unit selects from an input sets of retained input bits with positions that correspond to the sets of fields.
- a digital comparator linked to a corresponding field processor in an ordered series of one or more field processors compares the sets of retained input bits to corresponding sets of fields and determines if a match exists, indicating a success, or if the match does not exist, indicating a failure.
- in step 550, the field processor reports the successes and failures to a TCAM controller.
- in step 570, when no more retained input bits remain to be compared, the TCAM controller collates priorities for all field processors reporting success.
- in step 580, the TCAM controller applies a predetermined priority determination protocol to all field processors reporting success to determine their priority.
- in step 590, the TCAM controller generates an output reporting as the winner the field processor reporting success that has the highest priority according to the priority determination protocol.
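The winner selection at the end of flow chart 500 can be sketched as follows. The convention that a smaller numeric value means higher priority is an assumption for this sketch; the text leaves the encoding to the priority determination protocol:

```python
def tcam_500_winner(successes, priorities):
    """Among field processors reporting success, return the index of the
    one with the highest priority (smallest priority value here, an
    assumed convention), or None if nothing matched."""
    matched = [i for i, ok in enumerate(successes) if ok]
    if not matched:
        return None
    return min(matched, key=lambda i: priorities[i])

print(tcam_500_winner([True, False, True], [100, 6, 2000]))  # 0
```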
- FIG. 6 illustrates a flow chart 600 for a TCAM using compression, hash, and bin-packing according to the invention.
- a central processing unit removes X bits from the set of entries to create one or more retained entries.
- in step 620, the central processing unit, using a hash function, converts the retained entries into hashed entries comprising hashed entry bit sets.
- the CPU is programmed to minimize any remaining X bits, to generate a set of hashed entries with as level a distribution of values as feasible, and to ensure that the largest number of hashed entry bits is less than or equal to the storage limit.
- the central processing unit determines a number of hashed entry values for each hashed entry bit set.
- the CPU counts each hashed entry bit that is an X bit as having two possible values. For example, if a hashed entry has three X bits, the CPU will count eight possible values.
- the result is a table that gives a number of hashed entry values for every possible combination of hash bits.
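The counting rule of step 630 is a one-liner: each remaining X bit doubles the number of values a hashed entry can take.

```python
def possible_values(hashed_entry):
    """Number of concrete values a hashed entry can represent: 2 raised
    to the number of remaining X (don't-care) bits."""
    return 2 ** hashed_entry.count('X')

print(possible_values('1X0X1X'))  # 8: three X bits give 2**3 values
```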
- in step 640, using a bin-packing algorithm, the CPU allocates an optimized number of storage lines to store the one or more hashed entries into the TCAM. Two hash bit values will be mapped to the same RAM line if, and only if, the bin-packing algorithm puts them in the same bin.
- in step 650, the CPU stores the hashed entries into the TCAM.
- in step 670, the central processing unit selects a set of retained input bits and one or more corresponding sets of hashed entries.
- a digital comparator determines whether a match exists between the retained input bit set and the corresponding hashed entry bit set. A match indicates success and the absence of a match indicates a failure.
- in step 690, the digital comparator generates output reporting the comparison as a success or a failure.
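The line allocation of step 640 can be sketched with first-fit decreasing, a standard bin-packing heuristic. The text does not name a specific algorithm, so this choice, along with the line capacity and the per-hash-value counts, is an assumption for illustration:

```python
def first_fit_decreasing(counts, line_capacity):
    """Pack the per-hash-value entry counts into storage lines of fixed
    capacity. Hash values placed in the same bin correspond to hash bit
    values mapped to the same RAM line. Returns (placement, line count)."""
    lines = []       # remaining free capacity of each allocated line
    placement = {}   # hash value -> line index
    for hval, cnt in sorted(counts.items(), key=lambda kv: -kv[1]):
        for li, free in enumerate(lines):
            if cnt <= free:          # first line with enough room
                lines[li] -= cnt
                placement[hval] = li
                break
        else:                        # no existing line fits: open a new one
            lines.append(line_capacity - cnt)
            placement[hval] = len(lines) - 1
    return placement, len(lines)

# Hypothetical table from step 630: entry counts for each 2-bit hash value.
counts = {0b00: 3, 0b01: 5, 0b10: 2, 0b11: 4}
placement, n_lines = first_fit_decreasing(counts, line_capacity=8)
print(n_lines)  # 2: two 8-entry lines hold all 14 entries
```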
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/322,794 US8195873B2 (en) | 2009-02-06 | 2009-02-06 | Ternary content-addressable memory |
Publications (2)
Publication Number | Publication Date |
---|---|
US20100205364A1 US20100205364A1 (en) | 2010-08-12 |
US8195873B2 true US8195873B2 (en) | 2012-06-05 |