EP1702275A1 - TRIE memory device with circular pipeline mechanism - Google Patents
TRIE memory device with circular pipeline mechanism
- Publication number
- EP1702275A1 (application EP03758229A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- analysis
- memory
- data
- cell
- chain
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C15/00—Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/903—Querying
- G06F16/90335—Query processing
- G06F16/90339—Query processing by using parallel associative memories or content-addressable memories
Definitions
- the present invention relates to associative memories, and in particular memories of the "TRIE" type (from the English verb "reTRIEve”).
- The register assigned to the first slice of the chain, which is also the entry point of the table, is called the gatekeeper register.
- The data to be analyzed in the form of bit strings, i.e. to be compared with the content of the TRIE memory, will also be called routes below.
- The succession of chained cells associated with a route will be called a path in the table.
- Each register of the table will be said to be of order i ≥ 0 if it is assigned to the (i+1)-th slice of one or more stored routes.
- The gatekeeper register is therefore of order 0.
- The TRIE memory associates with each of its registers of order i > 0 a unique sequence of i·K bits, corresponding to the first i·K bits of each route whose path in the table passes through a cell of the register in question.
- The content of the TRIE memory can be presented as shown in Figure 1, where the underlined data are statuses.
- The patterns 45A4, 45AB, 67AB, 788A and 788BD are respectively represented in the table of Figure 1 by the paths: T[0,4] → T[1,5] → T[2,A] → T[3,4]; T[0,4] → T[1,5] → T[2,A] → T[3,B]; T[0,6] → T[4,7] → T[5,A] → T[6,B]; T[0,7] → T[7,8] → T[8,8] → T[9,A]; and T[0,7] → T[7,8] → T[8,8] → T[9,B] → T[10,D].
- The analysis rank i is set to 0 and the gatekeeper register R0 is selected as register R.
- The content C of the cell T[R, Vi] designated by the (i+1)-th slice Vi of the route in the selected register R of order i is read in step 2. If this cell contains a pointer for further analysis, as indicated at test 3 by the value 1 of a bit FP(C) stored in the cell, the register of order i+1 designated by this pointer Ptr(C) is selected as register R for the next iteration in step 4, and the rank i is incremented.
- Otherwise, the status Ref(C) read in the cell concerned is returned in step 5 as the result of consulting the table.
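- To make the consultation procedure concrete, here is a minimal software sketch of the loop of Figure 2; the cell layout, field widths and helper names are illustrative assumptions, not the patent's exact encoding.

```c
#include <stdint.h>

#define K 4                       /* slice width in bits (nibbles here)      */
#define CELLS_PER_REG (1u << K)   /* 2^K cells per register                  */

/* One TRIE cell: either a pointer for further analysis or a final status.   */
typedef struct {
    uint8_t  fp;      /* FP(C): 1 = pointer for further analysis, 0 = status */
    uint32_t value;   /* Ptr(C) when fp == 1, Ref(C) when fp == 0            */
} trie_cell;

/* table[r][v] is cell T[r, v]; register 0 is assumed to be the gatekeeper.  */
/* route[] holds the successive K-bit slices V0, V1, ... of the data string. */
uint32_t trie_lookup(const trie_cell table[][CELLS_PER_REG],
                     const uint8_t *route, unsigned n_slices)
{
    uint32_t r = 0;                        /* rank i = 0, gatekeeper R0      */
    for (unsigned i = 0; i < n_slices; i++) {
        trie_cell c = table[r][route[i]];  /* read T[R, Vi]                  */
        if (c.fp)                          /* pointer for further analysis?  */
            r = c.value;                   /* select register Ptr(C), i++    */
        else
            return c.value;                /* status Ref(C): analysis ends   */
    }
    return 0;                              /* route exhausted with no status */
}
```

A hardware implementation performs the same iteration, one memory read per K-bit slice.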
- This algorithm allows the analysis of routes comprising any number of sections.
- The same table can be used for several types of analysis by managing the data from different gatekeeper registers.
- It also makes it possible to control the data analysis time: the analysis of a number N of K-bit slices will last at most N times the duration of an iteration.
- the algorithm of FIG. 2 can be implemented very quickly by a hardware component managing access to the memory table.
- the packet header is analyzed on the fly by the component, and the status associated with a route designates for example an output port of the router to which the packets carrying a destination address conforming to this route must be routed.
- Such a router can be multi-protocol.
- This first analysis provides a reference which, although corresponding to a logical end of analysis, can be materialized in the TRIE memory by a pointer for further analysis designating another gatekeeper register to be used to analyze the rest of the header.
- the reference in question can also trigger timers or jumps of a determined number of bits in the analyzed header in order to be able to choose which portion of the header should then be analyzed.
- A certain number of analyses are generally executed successively, to trigger the operations required by the supported protocols according to the content of the headers.
- One of these analyses will relate to the destination address, to perform the routing function proper.
- Another advantage of TRIE tables is that they allow routing constraints to be taken into account on the basis of the longest recorded path corresponding to a prefix of the route to be recognized, a constraint encountered in particular in the context of IP routing (see EP-A-0 989 502).
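- As a hedged illustration of this longest-prefix behaviour: if the cells are assumed to be able to carry a status in addition to a pointer for further analysis (an assumed layout, not necessarily that of EP-A-0 989 502), the lookup simply remembers the most recent status seen along the path.

```c
#include <stdint.h>

#define CELLS_PER_REG 16   /* 2^K with K = 4, as in the previous sketch */

/* Assumed cell layout: a pointer for further analysis and/or a status. */
typedef struct {
    uint8_t  has_ptr;      /* 1 if the cell carries a pointer for further analysis */
    uint8_t  has_status;   /* 1 if the cell also carries a status                  */
    uint32_t ptr;
    uint32_t status;
} lpm_cell;

/* Longest-prefix lookup: keep the status of the longest prefix traversed. */
uint32_t lpm_lookup(const lpm_cell table[][CELLS_PER_REG],
                    const uint8_t *route, unsigned n_slices,
                    uint32_t default_status)
{
    uint32_t r = 0, best = default_status;
    for (unsigned i = 0; i < n_slices; i++) {
        lpm_cell c = table[r][route[i]];
        if (c.has_status)
            best = c.status;   /* a longer recorded prefix matches here   */
        if (!c.has_ptr)
            break;             /* no further path: return the best so far */
        r = c.ptr;
    }
    return best;
}
```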
- EP-A-1 030 493 describes a TRIE memory whose content includes, in addition to the references proper associated with the packet headers, a program consisting of the sequence of elementary analyses to be carried out according to the different configurations taken into account by the memory. These sequences are fully programmable. The user can arbitrarily define, at each step of the process, which portion of the header should be examined and from which register of the TRIE memory, which provides great processing flexibility.
- a TRIE memory can also be described in tree form, with nodes distributed in several successive stages corresponding to the analysis orders i previously mentioned. Each node of a stage i represents a decision to be made during the analysis of the (i + 1) th section of a route.
- the root node of the tree corresponds to the gatekeeper register, the leaf nodes to the status, and the intermediate nodes to the registers designated by the further analysis pointers.
- the tree representation makes it easy to visualize the paths.
- the tree in FIG. 3 thus shows the paths recorded in the table in FIG. 1, the root and the intermediate nodes being represented by circles (registers) and the leaves by rectangles (status).
- the tree representation makes it possible to design compression methods aimed at reducing the memory size required to implement a TRIE table. This reduction is particularly useful for rapid implementations of large tables using static memory circuits (SRAM).
- A hardware implementation in the form of a table where each register contains 2^K cells is inefficient in terms of memory occupation, since such a table has many empty cells, as shown in Figure 1.
- The nodes close to the root have a number of valid descendants close to the number of possible descendants (2^K).
- Further from the root, the average number of valid descendants of a given node decreases considerably and tends towards 1 (or 2 if a default status is taken into account). In this case, only between 10% and 15% of the cells in the memory are useful.
- Path compression consists in aggregating at a node Y of a stage i the non-empty nodes of stages i+1 to i+j-1 (j ≥ 2) which are descendants of this node Y, when each of these nodes of stages i to i+j-1 has a single non-empty descendant (register or status). See also US-A-6,014,659 or US-A-6,505,206. The length of the slice to be analyzed in relation to the compressed node Y is multiplied by j.
- Level compression consists in aggregating at a node Z of a stage i the non-empty nodes of stages i+1 to i+j-1 (j ≥ 2) which are descendants of this node Z, when each of these nodes of stages i+1 to i+j-1 itself has at least one non-empty descendant (register or status). The length of the slice to be analyzed in relation to the compressed node Z is multiplied by j.
- Width compression (also called "pointer compression") consists in eliminating the empty descendants of a given node.
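- As an illustration of width compression only, a common realization (assumed here, not taken from the patent) replaces the 2^K cells of a register by a 2^K-bit occupancy bitmap plus a packed array holding only the non-empty descendants.

```c
#include <stdint.h>
#include <stddef.h>

#define K 4   /* slice width: 2^K = 16 possible descendants per node */

typedef struct {
    uint8_t  fp;      /* 1 = pointer for further analysis, 0 = status */
    uint32_t value;
} wc_cell;

/* Width-compressed node: empty descendants are eliminated and only the
 * non-empty ones are stored, packed in ascending slice order.          */
typedef struct {
    uint16_t bitmap;       /* bit v set <=> a descendant exists for slice v */
    const wc_cell *kids;   /* packed array of the non-empty descendants     */
} wc_node;

/* Returns the cell addressed by slice v, or NULL if it was eliminated.  */
const wc_cell *wc_child(const wc_node *n, unsigned v)
{
    if (!(n->bitmap & (1u << v)))
        return NULL;
    /* Position in the packed array = number of set bits below bit v     */
    /* (__builtin_popcount is a GCC/Clang builtin, used here for brevity) */
    unsigned idx = (unsigned)__builtin_popcount(n->bitmap & ((1u << v) - 1u));
    return &n->kids[idx];
}
```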
- a TRIE table is suitable for parallel processing in pipeline mode, as mentioned in the article "Putting Routing Tables in Silicon", by T-B. Pei et al., IEEE Network Magazine, January 1992, pages 42-50.
- Denoting M the maximum number of stages in the tree and K the width in bits of the analyzed slices, the available memory space is divided into N memory planes.
- Each memory plane Pj of level j (0 ≤ j < N) is reserved for the nodes of one or more consecutive stages of the tree.
- N operators operate in parallel, each with a respective buffer containing a data chain to be analyzed.
- This pipeline processing by the N operators increases the maximum rate of processing of the device.
- When N = M (one stage per pipeline level), the memory plane associated with the gatekeeper only handles a single node, while the following stages have many nodes and therefore require much larger memory planes, with wider address buses and correspondingly increased access times.
- The invention thus provides a TRIE memory device, comprising means for storing binary patterns associated with respective references, and means for analyzing data strings by successive K-bit slices (K ≥ 1) so as to extract one of the references upon a match between an analyzed data string and a stored binary pattern associated with said reference.
- The storage means comprise several successive stages of memory cells, and the analysis means have access to a cell of a stage i ≥ 0 in relation to the analysis of the (i+1)-th slice of a data chain.
- The invention enriches this structure with a circular pipeline mechanism according to which, N and p being two integers such that N ≥ 2, p ≥ 1 and N·p is less than a maximum number of stages of the storage means, the storage means are divided into N separate memory areas of levels 0 to N-1, and the analysis means comprise at most N parallel analysis modules, each cell of a stage i ≥ 0 belonging to the memory area of a level determined by i, p and N.
- The TRIE memory is thus divided into N memory zones or planes, which the analysis means access in a circular manner.
- Each zone contains stages or groups of stages distributed regularly in the tree associated with the TRIE memory, so that the memory can be shared in a relatively uniform manner between the N memory planes.
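- A minimal sketch of a stage-to-zone mapping consistent with this regular, circular distribution is given below; the exact formula ⌊i/p⌋ mod N is an assumption, since the level assignment is not spelled out explicitly in the passages above.

```c
/* Circular distribution of the tree stages over the N memory zones:
 * stages are taken p at a time and the groups are dealt out to the
 * zones of levels 0 .. N-1 in round-robin order (assumed mapping).   */
unsigned zone_of_stage(unsigned i, unsigned p, unsigned N)
{
    return (i / p) % N;
}
/* Example: N = 2, p = 1 gives stages 0, 2, 4, ... in the zone of level 0
 * and stages 1, 3, 5, ... in the zone of level 1.                        */
```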
- the increase in the speed of the device is maximum when the analysis means comprise N parallel modules.
- A smaller number of analysis modules could be provided (down to a single one), for example to reserve one or more pipeline levels for operations of inserting or deleting paths in the TRIE memory.
- Another advantage of the device according to the invention is its compatibility with various compression schemes of the TRIE memory, in particular with the schemes which, such as width compression, do not modify the length of the slices to be analyzed at each stage. The implementation of such a compression scheme essentially affects the calculation performed by each analysis module, but not the general architecture of the pipeline.
- the parallel analysis modules are associated with respective data buffers to each receive a chain of data to be analyzed.
- Each memory area has a respective data bus and address bus.
- The analysis means comprise data multiplexing means connected to the N data buses and to the analysis modules, and address multiplexing means connected to the N address buses and to the analysis modules. These multiplexing means are controlled so as to supply to each analysis module, during the analysis of the (i+1)-th slice of a data chain, the content of a cell whose address lies in the memory area of the corresponding level.
- the device comprises a buffer memory for receiving up to N strings of data to be analyzed.
- The analysis means comprise N parallel modules, each associated with one of the N memory areas, and multiplexing means for distributing the slices of said chains to the N analysis modules so that, for all i ≥ 0, the (i+1)-th slice of each data string is addressed to the module associated with the memory area of the corresponding level.
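- As a concrete trace of the resulting schedule (a sketch under assumptions: N = 3, p = 1, one chain injected per cycle, illustrative naming), the active chains address pairwise distinct zones at every cycle, so the modules never contend for the same memory area.

```c
#include <stdio.h>

/* Trace of the circular pipeline schedule, assuming N = 3 zones/modules,
 * p = 1 stage per zone, and chain j injected at cycle j.  At cycle t,
 * chain j is at analysis rank i = t - j; the cell it needs lies in the
 * zone of level i mod N, handled by the module associated with that zone. */
int main(void)
{
    const int N = 3, cycles = 6;
    for (int t = 0; t < cycles; t++)
        for (int j = 0; j < N && j <= t; j++) {
            int i = t - j;   /* analysis rank of chain j at cycle t */
            printf("cycle %d: chain %d, slice %d -> zone/module %d\n",
                   t, j, i, i % N);
        }
    return 0;
}
```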
- FIG. 1, previously commented on, shows an example of the content of a TRIE memory;
- FIG. 2, previously commented on, is a flow diagram of a conventional analysis procedure executed to consult the TRIE memory;
- Figure 3, previously commented on, is a tree representation of the TRIE memory having the content illustrated in Figure 1;
- Figure 4 is a block diagram of a packet router incorporating a device according to the invention;
- Figure 5 is a diagram of a circuit forming a device according to the invention;
- Figure 6 is a timing diagram of signals involved in the operation of the circuit of Figure 5;
- Figure 7 is a diagram of an alternative embodiment of the device according to the invention.
- ATM asynchronous transfer mode
- the router 10 shown in FIG. 4 operates with a host computer 11.
- the host computer 11 can send and receive packets, in particular for managing the routing process. For this, it has a virtual channel (VC) at the input and output of router 10.
- The router 10 comprises a forwarding module 12 which routes the received packets according to instructions, hereinafter called "routing references" or "final statuses", obtained by an analysis unit 13 from a memory 14 organized as a TRIE table.
- The forwarding module 12 can essentially carry out a translation of the virtual path and channel identifiers VPI/VCI (Virtual Path Identifier / Virtual Channel Identifier), the merging of virtual channels according to the virtual paths, and the delivery of the packets on the output ports of the router. For this, it needs to know the VPI/VCI pairs of the outgoing packets, which can constitute the routing references stored in the TRIE memory 14.
- Each ATM cell containing the header of a packet to be routed passes through a buffer memory 15 to which the analysis unit 13 has access to analyze portions of these headers by means of the TRIE memory 14.
- Configuring the router 10 consists in recording the relevant data in the TRIE memory 14. This operation is carried out by a unit (not shown) for managing the TRIE memory under the control of the host computer 11.
- the configuration commands can be received in packets transmitted over the network and intended for router 10.
- The analysis unit 13 cooperates with an automaton 16 programmed to perform certain checks and certain actions on the headers of the packets, in a manner dependent on the communication protocols supported by the router. Apart from this automaton 16, the operation of the router 10 is independent of the packet transport protocols.
- each elementary cell of the TRIE memory occupies 32 bits.
- Figure 5 shows the analysis unit 13 and the TRIE memory 14, as well as the buffer memory 15, subdivided into two buffer registers 15a, 15b, each intended to receive a chain of data to be analyzed in K-bit slices.
- Each memory plane P0, P1 comprises a 32-bit wide data bus D0, D1, as well as an address bus A0, A1 whose width depends on the quantity of data to be stored in the TRIE memory.
- A K-bit slice of the data chain contained in the register 15a (respectively 15b) is supplied to the operator OPa (respectively OPb).
- The analysis unit 13 also comprises multiplexers 18a, 18b, 19₀, 19₁ for managing the communication of the buses D0, D1, A0, A1 with the two operators OPa, OPb under the control of a periodic clock signal CK.
- The clock signal CK alternately takes a high level and a low level, with a duty cycle of 1/2.
- In one phase of CK, the multiplexer 18a puts the data bus D1 of the memory plane P1 into communication with an input bus Da of the operator OPa, the multiplexer 18b puts the data bus D0 of the memory plane P0 into communication with an input bus Db of the operator OPb, the multiplexer 19₀ puts an output bus Aa of the operator OPa into communication with the address bus A0 of the memory plane P0, and the multiplexer 19₁ puts an output bus Ab of the operator OPb into communication with the address bus A1 of the memory plane P1.
- In the other phase of CK, the multiplexer 18a puts the data bus D0 of the memory plane P0 into communication with the input bus Da of the operator OPa, and the multiplexer 18b puts the data bus D1 of the memory plane P1 into communication with the input bus Db of the operator OPb.
- Each operator OPx (x = a or b) performs the following elementary processing: the "status" or "pointer" type of the node received on the bus Dx is determined by examining the value of a flag of one or more bits provided, for example, at the head of each cell; if the node is of status type, the operator OPx extracts the reference contained in the node, presents it on the output bus Ax, and warns the automaton 16 so that the latter retrieves the extracted reference from the output marked Sa or Sb in Figure 5; if the node is of pointer type, the operator OPx combines the pointer with the next K-bit slice Vx of its chain to form the address presented on the output bus Ax.
- The combination consists for example of a simple concatenation, the slice Vx forming the K least significant bits of the address.
- To start an analysis, the data field presented on the input bus Dx is initialized as being of pointer type, with a pointer value DP designating the location of the gatekeeper register in the memory plane P0.
- The first slice Vx(0) of the chain completes this address to provide the address of the gatekeeper cell which should be read.
- the processing then continues cyclically as indicated above.
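- A compact sketch of this elementary cycle is given below; the field names and widths are assumptions, while the address combination follows the concatenation described above, the slice forming the K least significant bits.

```c
#include <stdint.h>

#define K 4   /* slice width in bits */

/* Node word as seen on the operator's input data bus Dx. */
typedef struct {
    int      is_status;   /* type flag read at the head of the cell          */
    uint32_t value;       /* reference if status, register pointer otherwise */
} node_word;

/* One elementary cycle of an operator OPx: either report the final reference
 * or form the address of the next cell to present on the output bus Ax.      */
int operator_cycle(node_word in, uint8_t slice,
                   uint32_t *next_address, uint32_t *reference)
{
    if (in.is_status) {
        *reference = in.value;                 /* hand Ref to the automaton 16 */
        return 1;                              /* analysis of this chain ends  */
    }
    *next_address = (in.value << K) | slice;   /* pointer concatenated with Vx */
    return 0;
}
```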
- The buffer register 15x is then made available to receive the next data string to be analyzed.
- The following analysis can only start when the operator OPx associated with this buffer 15x has access to the memory plane P0 containing the gatekeeper.
- FIG. 6 illustrates the operation of the circuit according to FIG. 5, by showing the information circulating on the various buses.
- The arrows in dashed lines illustrate the analysis of two routes successively introduced into the buffer register 15a associated with the operator OPa.
- FIG. 6 clearly shows the parallelism between the two operators, capable of accessing the two memory planes alternately to process simultaneously, in circular pipeline mode, the data strings presented in the two buffer registers.
- The diagram of FIG. 5 is easily transposable to a case where N > 2: it suffices to add memory planes and operators, to increase the number of inputs of the multiplexers 18 and 19, and to control them by means of mutually offset clock signals, for example generated using modulo-N counters.
- Each operator OP0, OP1, ..., OPN-1 is associated with a respective memory plane P0, P1, ..., PN-1.
- The output bus of each operator OPi (0 ≤ i < N) directly serves as the address bus for the memory plane Pi.
- The data bus Di of each memory plane Pi serves as the input bus for the operator OP(i+1) mod N.
- The buffer memory 15 is shared between the N operators, instead of the one-buffer-register-per-operator arrangement of the embodiment illustrated by FIG. 5.
- This buffer memory 15 has a matrix organization, with N rows and N columns.
- The column of rank j (0 ≤ j < N) receives a string of data to be analyzed, so that the buffer memory 15 is capable of receiving up to N strings simultaneously, one per column.
- The row of rank i (0 ≤ i < N) contains the slices of ranks i, i+N, i+2N, etc. of each chain to be analyzed.
- A multiplexing module 20 distributes the data present in the matrix buffer 15 so that each operator OPi successively receives the slices present in the row of rank i, cyclically column by column.
- This module 20 is clocked so that the analysis of a chain is always started by the operator OP0 associated with the memory plane containing the gatekeeper (at this time the initialization data word DP is provided on the input bus DN-1 of the operator OP0), and so that each analysis cycle of order q ≥ 0 is executed q cycles later by the operator OPq mod N, which then commands a reading in the memory plane Pq mod N.
- An operator that encounters a status signals it so that the corresponding reference is retrieved on its output bus, which frees the column of the matrix 15 holding the chain whose analysis thus ends.
- A new chain inserted in this column will only begin to be analyzed when it is the turn of a slice of that column to be addressed to the operator OP0 by the multiplexing module 20.
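- A sketch of the N x N matrix buffer addressing just described follows; the indexing helper and its phase are assumptions. Column j holds one chain, row i holds its slices of ranks i, i+N, i+2N, ..., and the multiplexing module serves each operator from its row while rotating over the columns.

```c
#include <stdint.h>

#define N 4   /* number of rows and columns, operators and memory zones */

/* Matrix buffer 15: cell [i][j] holds the pending slice of rank
 * i (then i+N, i+2N, ...) of the chain stored in column j.          */
typedef struct {
    uint8_t slice[N][N];
} matrix_buffer;

/* Multiplexing module 20 (assumed phase): at pipeline cycle t, operator OPi
 * is served from row i and column (t - i) mod N, so each operator scans its
 * row cyclically, column by column, and no column is read twice per cycle. */
uint8_t slice_for_operator(const matrix_buffer *b, unsigned i, unsigned t)
{
    /* i is a row index in [0, N); adding N keeps the subtraction non-negative */
    return b->slice[i][(t + N - i) % N];
}
```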
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/FR2003/002508 WO2005024659A1 (fr) | 2003-08-11 | 2003-08-11 | Dispositif de memoire trie a mecanisme de pipeline circulaire |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1702275A1 true EP1702275A1 (de) | 2006-09-20 |
Family
ID=34259339
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP03758229A Withdrawn EP1702275A1 (de) | 2003-08-11 | 2003-08-11 | Trie-speicheranordnung mit zirkularem pipeline-mechanismus |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP1702275A1 (de) |
AU (1) | AU2003274246A1 (de) |
WO (1) | WO2005024659A1 (de) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103365992A (zh) * | 2013-07-03 | 2013-10-23 | 深圳市华傲数据技术有限公司 | 一种基于一维线性空间实现Trie树的词典检索方法 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102739550B (zh) * | 2012-07-17 | 2015-11-25 | 中山大学 | 基于随机副本分配的多存储器流水路由体系结构 |
CN103365991B (zh) * | 2013-07-03 | 2017-03-08 | 深圳市华傲数据技术有限公司 | 一种基于一维线性空间实现Trie树的词典存储管理方法 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2707775B1 (fr) * | 1993-07-12 | 1996-04-12 | Duret Christian | Procédé et dispositif d'analyse d'informations contenues dans des structures de données. |
US6691124B2 (en) * | 2001-04-04 | 2004-02-10 | Cypress Semiconductor Corp. | Compact data structures for pipelined message forwarding lookups |
-
2003
- 2003-08-11 EP EP03758229A patent/EP1702275A1/de not_active Withdrawn
- 2003-08-11 WO PCT/FR2003/002508 patent/WO2005024659A1/fr active Application Filing
- 2003-08-11 AU AU2003274246A patent/AU2003274246A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
See references of WO2005024659A1 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103365992A (zh) * | 2013-07-03 | 2013-10-23 | 深圳市华傲数据技术有限公司 | 一种基于一维线性空间实现Trie树的词典检索方法 |
CN103365992B (zh) * | 2013-07-03 | 2017-02-15 | 深圳市华傲数据技术有限公司 | 一种基于一维线性空间实现Trie树的词典检索方法 |
Also Published As
Publication number | Publication date |
---|---|
WO2005024659A1 (fr) | 2005-03-17 |
AU2003274246A1 (en) | 2005-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Eatherton et al. | Tree bitmap: hardware/software IP lookups with incremental updates | |
US8780926B2 (en) | Updating prefix-compressed tries for IP route lookup | |
US7237058B2 (en) | Input data selection for content addressable memory | |
EP0639013B1 (de) | Verfahren und Vorrichtung zur Auswertung von Datenstrukturinformationen | |
US6744652B2 (en) | Concurrent searching of different tables within a content addressable memory | |
US7281085B1 (en) | Method and device for virtualization of multiple data sets on same associative memory | |
Huang et al. | A fast IP routing lookup scheme for gigabit switching routers | |
US7177978B2 (en) | Generating and merging lookup results to apply multiple features | |
Le et al. | Scalable tree-based architectures for IPv4/v6 lookup using prefix partitioning | |
Pao et al. | Efficient hardware architecture for fast IP address lookup | |
FR2789778A1 (fr) | Procede pour associer des references d'acheminement a des paquets de donnees au moyen d'une memoire trie, et routeur de paquets appliquant ce procede | |
Le et al. | Memory-efficient and scalable virtual routers using FPGA | |
Luo et al. | A hybrid IP lookup architecture with fast updates | |
US20110255544A1 (en) | System and method for an exact match search using pointer based pipelined multibit trie traversal technique | |
Le et al. | Scalable high throughput and power efficient ip-lookup on fpga | |
Sun et al. | An on-chip IP address lookup algorithm | |
FR2835991A1 (fr) | Procede de configuration d'une memoire trie pour le traitement de paquets de donnees, et dispositif de traitement de paquets mettant en oeuvre un tel procede | |
EP1702275A1 (de) | Trie-speicheranordnung mit zirkularem pipeline-mechanismus | |
EP0857005B1 (de) | Verfahren zum Zuordnen von Daten zu ATM Zellen | |
Smiljanić et al. | A comparative review of scalable lookup algorithms for IPv6 | |
US20060018142A1 (en) | Concurrent searching of different tables within a content addressable memory | |
EP1678632A1 (de) | Trie-basierte speichervorrichtung mit kompressionsmechanismus | |
WO2005024840A1 (fr) | Dispositif de memoire trie avec compression en etendue | |
EP0989502B1 (de) | Aktualisierungsverfahren für einen inhaltsadressierbaren Trietypspeicher, und Router um solch ein Verfahren zu implementieren | |
Smiljanić et al. | Scalable Lookup Algorithms for IPv6 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20060425 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR |
|
DAX | Request for extension of the european patent (deleted) | ||
17Q | First examination report despatched |
Effective date: 20070626 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G11C 15/00 20060101ALN20081111BHEP Ipc: G06F 17/30 20060101AFI20081111BHEP |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20090613 |