US5978885A - Method and apparatus for self-timing associative data memory - Google Patents

Method and apparatus for self-timing associative data memory Download PDF

Info

Publication number
US5978885A
Authority
US
United States
Prior art keywords
signal
search
match
cam
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US08/920,395
Inventor
Airell R. Clark II
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avago Technologies International Sales Pte Ltd
Original Assignee
Hewlett Packard Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US08/664,902 external-priority patent/US5828324A/en
Application filed by Hewlett Packard Co filed Critical Hewlett Packard Co
Priority to US08/920,395 priority Critical patent/US5978885A/en
Assigned to HEWLETT-PACKARD COMPANY reassignment HEWLETT-PACKARD COMPANY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CLARK II, AIRELL R.
Application granted granted Critical
Publication of US5978885A publication Critical patent/US5978885A/en
Assigned to HEWLETT-PACKARD COMPANY, A DELAWARE CORPORATION reassignment HEWLETT-PACKARD COMPANY, A DELAWARE CORPORATION MERGER (SEE DOCUMENT FOR DETAILS). Assignors: HEWLETT-PACKARD COMPANY, A CALIFORNIA CORPORATION
Assigned to AGILENT TECHNOLOGIES INC reassignment AGILENT TECHNOLOGIES INC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEWLETT-PACKARD COMPANY
Assigned to AVAGO TECHNOLOGIES GENERAL IP PTE. LTD. reassignment AVAGO TECHNOLOGIES GENERAL IP PTE. LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AGILENT TECHNOLOGIES, INC.
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. reassignment AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. CORRECTIVE ASSIGNMENT TO CORRECT THE NAME OF THE ASSIGNEE PREVIOUSLY RECORDED ON REEL 017207 FRAME 0020. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT. Assignors: AGILENT TECHNOLOGIES, INC.
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C15/00Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores
    • G11C15/04Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores using semiconductor elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • G06F16/90335Query processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • G06F16/90335Query processing
    • G06F16/90339Query processing by using parallel associative memories or content-addressable memories
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/3084Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction using adaptive string matching, e.g. the Lempel-Ziv method

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

A computer memory system provides self-timed precharging and output signal latching. The method and apparatus are useful in accelerating search cycles for associative data in a memory such as a content addressable memory (CAM) where single transition memory search and output signal encoding is required. Feedback is provided to initiate memory precharging as soon as an actual search of the memory ends rather than at a next system clock transition.

Description

RELATED APPLICATIONS
This is a continuation-in-part of U.S. patent application Ser. No. 08/664,902, filed Jun. 17, 1996, now U.S. Pat. No. 5,828,324, by Clark II, for Match and Match Address Signal Generation in a Content Addressable Memory Encoder.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to associative data storage and retrieval as, for example, in content addressable memory ("CAM") devices. More particularly, the invention relates to a CAM search mode of operation, and more specifically to a method and apparatus for a CAM circuit having self-timing functionality.
2. Description of Related Art
Random access memory ("RAM") is perhaps the most common form of integrated circuit memory available in the state of the art. However, RAM devices are not suited for use in systems which process associative data. The well-known methodology of sequentially accessing data when reading from a RAM, where the data address is input and the data stored at that address is output, is inefficient for systems whose stored information involves pattern recognition, data compression, natural language recognition, sparse matrix processes, database interrogation, and the like, since the address associated with the desired stored data may not be known. For this type of data, it is more efficient to interrogate a memory by supplying a compressed subset of the desired data or a code representative of the full data set. The memory responds by indicating either the presence or absence of the desired data set and, if a match occurs, the respective address in the memory bank for that data set.
In the 1980's, another type of integrated circuit memory device was developed to have ambiguous, non-contiguous addressing and was dubbed the content addressable memory ("CAM"). See e.g., U.S. Pat. No. 3,701,980 (Mundy). In essence, for this type of associative data storage, the entire CAM can be searched in a single clock cycle, giving it a great advantage over the sequential search technique required when using a RAM device.
For example, a data string dictionary can be stored in a CAM and used in generating Lempel-Ziv compressed output data (known in the art as "LZ," generically used for any LZ data compression technique; see "Compression of Individual Sequences Via Variable-Rate Coding", IEEE Transactions on Information Theory, 24(5):530-536, September 1978). The input data signal to the CAM would comprise a bit string representation of the data which is being searched for in the CAM. The output would be a signal indicative as to whether the data was found, e.g., a MATCH signal, and, if found, the location within the CAM array of memory cells, also referred to as the CAM core, e.g., a MATCH-- ADDRESS signal. Obtaining this MATCH and MATCH-- ADDRESS information is done with a "match encoder."
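For illustration only, the lookup behavior described above can be pictured as the following minimal Python sketch; it models a CAM as a list of stored words and is not the patented circuit, and the names (cam_search, dictionary) are illustrative rather than taken from the patent.

```python
# Minimal behavioral sketch of a CAM lookup (illustrative model only,
# not the patented circuit): the stored words are compared against the
# search data "in parallel," and the device reports a MATCH flag plus
# the MATCH_ADDRESS of the matching location.

def cam_search(stored_words, search_data):
    """Return (match, match_address) for a single-match CAM model."""
    for address, word in enumerate(stored_words):
        if word == search_data:
            return True, address          # MATCH and MATCH_ADDRESS
    return False, None                    # data not present anywhere

# Example: a tiny LZ-style string dictionary held in the CAM.
dictionary = ["0101", "1100", "0011"]
print(cam_search(dictionary, "1100"))     # (True, 1)
print(cam_search(dictionary, "1111"))     # (False, None)
```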
The problem with CAM devices is that, compared to RAM, each individual cell structure is relatively complex. See e.g., U.S. Pat. No. 4,780,845 (Threewitt); incorporated herein by reference. Thus, for the same integrated circuit real estate, a CAM device cannot match the density, speed, or low-power performance of a RAM device. Integrated circuit fabrication process improvements generally affect both types of devices equally, so that in relative terms, CAM architects cannot do much to narrow the performance gap.
Many signals in the CAM are signals which will only transition in one direction between the start of a search cycle and the output of the MATCH and MATCH-- ADDRESS. The time after the MATCH-- ADDRESS is output and before the next CAM search is started must include returning the CAM cells to a pre-search state, referred to as "precharge time." For example, during precharge time, all output logic of the CAM is driven to a HIGH state, ready to be driven to its NO-- MATCH state, a HIGH to LOW transition, in one direction during a search. If a search is initiated and completed in one clock period, the precharge time must be less than or equal to half the cycle. For example, in a 15-nanosecond ("ns") system clock period, the precharge has to be completed in less than 7.5 ns. Depending on CAM size, combinatorial input logic timing, and the like as would be known to a person skilled in the art, the CAM precharge setup allotment of time may even be much less.
Therefore, there is a need for a self-timed precharge method and apparatus for CAM devices.
SUMMARY OF THE INVENTION
In its basic aspects, the present invention provides a method for self-timing a computer data memory system having a single transition associative data memory device, a system clock providing a system timing signal, and a single transition output encoder for providing a memory data match signal and memory data match address signal, including the steps of: providing a memory search signal for starting a memory search and for disabling memory pre-transition state precharging; delaying the memory search signal until memory search is complete, providing a delayed memory search signal; using the delayed memory search signal, enabling said output encoder and using the delayed memory search signal as a feedback signal substantially simultaneously re-enabling memory precharging.
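The sequence of steps recited above can be rendered, purely for illustration, as the following ordered event sketch in Python; it is a hypothetical reading of the method, not the circuit, and all function and variable names are assumptions introduced here.

```python
# Hypothetical rendering of the claimed method as an ordered sequence
# of events (Python used as pseudocode; all names are illustrative).

def self_timed_search_cycle(stored_words, search_data):
    # Step 1: a memory search signal starts the search and disables
    # pre-transition-state precharging.
    searching, precharging = True, False

    # Step 2: the search signal is delayed until the array search is
    # complete, yielding the delayed memory search signal.
    raw_results = [word == search_data for word in stored_words]
    delayed_search_signal = True

    # Step 3: the delayed signal enables the output encoder and, fed
    # back, re-enables memory precharging substantially simultaneously.
    if delayed_search_signal:
        match = any(raw_results)
        match_address = raw_results.index(True) if match else None
        precharging, searching = True, False
    return match, match_address, precharging

print(self_timed_search_cycle(["0101", "1100"], "1100"))  # (True, 1, True)
```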
In another basic aspect, the present invention provides a self-timed computer memory system for associative data storage, search, and retrieval, the system including a system clock providing a system timing signal; an array of memory cells, including search driver circuitry and cell output precharge circuitry; encoder circuitry for providing array search match and array match address output signals based on array search results, the encoder circuitry having output encoder circuitry, encoder precharge circuitry, and output circuitry for latching the match and match address output signals; first circuitry connected to receive a signal indicative of a search request and the system timing signal and to transmit the signal to the search driver circuitry and cell precharge driver circuitry, for turning the search driver circuitry on and cell precharge driver circuitry off substantially simultaneously; second circuitry, connecting the array and the encoder, for substantially simultaneously turning off the encoder precharge circuitry and resetting first circuitry as soon as the encoder is enabled.
In another basic aspect, the present invention provides a content addressable memory (CAM) apparatus for a system having a system clock timing signal. Self-timing is provided in the apparatus using: a CAM device having an input and an output, an array of CAM cells, CAM search driver circuitry, and CAM precharging circuitry; a CAM output encoder having CAM array match signal and CAM array match address signal encoding circuitry connected to the CAM output and encoder precharging circuitry; a set-reset first flip-flop having set inputs connected for receiving the clock timing signal and a signal indicative of a search request, a reset input, and an output connected to the search driver circuitry and the precharging circuitry such that a set condition of the first flip-flop transmits a signal enabling a search of the array and disabling precharging of said array; and a set-reset second flip-flop having a set input connected for receiving a first delayed signal indicative of a search request and a reset input connected for receiving a first delayed signal indicative of disabling precharging and an output connected to the encoder precharging circuitry and to the first flip-flop reset input, wherein the first delayed signal indicative of a search request sets the second flip-flop and transmits a signal enabling encoding the CAM array output with the match signal and match address signal encoding circuitry and disabling the encoder precharging circuitry and resetting the first flip-flop, enabling the CAM precharging circuitry.
It is an advantage of the present invention that it provides a CAM precharge methodology that is self-timed, eliminating any auxiliary timing and precharging circuitry.
It is an advantage of the present invention that it provides a CAM precharge that permits circuit timing overlap with other, on-going circuit functions.
It is another advantage of the present invention that CAM core cell gates that are used to have only one direction of change during a CAM search can be unbalanced to change quickly in that direction and slower in the other, increasing the speed through the cells and decreasing the core search timing budget requirement.
Other objects, features and advantages of the present invention will become apparent upon consideration of the following explanation and the accompanying drawings, in which like reference designations represent like features throughout the drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic block diagram of a CAM system incorporating the invention of the parent application.
FIGS. 2A, 2B, 2C are comparison schematic diagrams of detail for CAM core encoders as shown in FIG. 1 in which:
FIG. 2A represents a traditional encoder design,
FIG. 2B represents an encoder in accordance with the invention of the parent application, and
FIG. 2C shows a detail of FIGS. 2A and 2B.
FIGS. 3A and 3B in conjunction with FIGS. 4A and 4B are comparison schematic diagrams of final -- encoders as shown in FIG. 1 in which:
FIGS. 3A-3B represent a traditional final -- encoder design, and
FIGS. 4A-4B represent a final -- encoder in accordance with the invention of the parent application as shown in FIG. 1.
FIG. 5A is a detailed schematic of one final-- encoder subsection for a CAM-- CORE_x as shown in FIG. 4B.
FIG. 5B is a detail of FIG. 5A.
FIG. 6 is a schematic block diagram of a section of a CAM system incorporating the present invention.
FIG. 7 depicts timing waveform diagrams for the present invention as shown in FIG. 6.
FIG. 8 is a detailed schematic block diagram of components of a subsystem in an alternative embodiment to the system as shown in FIG. 1 in accordance with the present invention.
The drawings referred to in this specification should be understood as not being drawn to scale except if specifically noted.
DESCRIPTION OF THE PREFERRED EMBODIMENT
Reference is made now in detail to a specific embodiment of the present invention, which illustrates the best mode presently contemplated by the inventor for practicing the invention. Alternative embodiments are also briefly described as applicable. As will be recognized by a person skilled in the art of digital electronic design, the exemplary digital logic selection for signal transitions can be reversed HIGH for LOW and LOW for HIGH as long as consistency is maintained. Thus, the specifics of the exemplary embodiment signal transitions should not be extrapolated as a limitation on the scope of the present invention.
Turning now to FIG. 1, an exemplary embodiment of a CAM-based memory circuit is depicted. SEARCH-- DATA on standard bus 101 is fed from the search engine (e.g., a central processing unit (not shown)) through a driver 103, comprising standard buffering hardware as would be known in the art, to drive the relatively high capacitance CAM core cell architecture.
Each CAM-- CORE 105_1 -105_N comprises an array of standard transistor-based cell circuitry and search circuitry as would also be known to a person skilled in the art. Each cell of the array stores one bit. In accordance with the exemplary embodiment, a total CAM of 768-- words by 19-- bits is described. It is assumed for the exemplary embodiment that, due to integrated circuit layout constraints, N=6; that is, six CAM-- CORES 105_1 -105_6 of 128-- words by 19-- bits each are provided. The SEARCH-- DATA is input through standard buses 107_1 -107_N to interrogate each CAM-- CORE 105_1 -105_N. While for implementations of certain algorithms more than one CAM-- CORE 105_1 -105_N could have a MATCH, it is assumed in this exemplary embodiment that only one cell on one CAM-- CORE 105_1 -105_N contains the data set of interest. Thus, if any, there will be only one MATCH_x signal and one corresponding FIRST-- MATCH-- ADDRESS_x.
Perhaps the most critical path through a system circuit using a CAM is the search cycle; that is, the time from receipt of an input data search signal, or code, at the encoder input, used to determine if the CAM has the desired data set, to the output of a match or mismatch indication and, if a MATCH signal is generated, the MATCH-- ADDRESS. In general, it is known to precharge the CAM-- CORE prior to starting a search; for example, the signal lines to each cell of the CAM array are precharged to all HIGH, approximately to the system voltage, VDD. Match detection and encoder circuitry can then determine which cells are transitioning during the cycle, providing the MATCH and MATCH-- ADDRESS. Each CAM-- CORE 105_1 -105_N has an output bus 109_1 -109_N with one line for each of the stored data words, viz. 128-- words in the exemplary embodiment. If a mismatch occurs for any location, the output bit for that location is pulled LOW to indicate a mismatch; thus, if an output stays HIGH, it indicates a MATCH. If there is no match, all outputs go LOW. Thus, for each CAM-- CORE 105_1 -105_N, one hundred and twenty eight outputs on respective buses 109_1 -109_N tell whether a particular address in that cell array is a MATCH or a MISMATCH. The output signal derivation for each CAM-- CORE output of the six-device memory bank is accomplished using a memory FIRST-- ENCODER 111_1 -111_N.
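The match-line discipline just described (precharge HIGH, pull LOW on mismatch) can be sketched behaviorally as follows; this is an assumed software model for exposition, not the transistor-level design, and the names are illustrative.

```python
# Behavioral sketch of one CAM_CORE's match-line discipline (an assumed
# model, not the transistor-level design): every output line is
# precharged HIGH, and a mismatch at a location pulls that line LOW, so
# a line that stays HIGH marks a MATCH and all lines LOW means no match.

WORDS_PER_CORE = 128

def core_search(stored_words, search_data):
    """Return the 128 match-line levels after a single-transition search."""
    match_lines = [1] * WORDS_PER_CORE          # precharged to HIGH (~VDD)
    for location in range(WORDS_PER_CORE):
        if stored_words[location] != search_data:
            match_lines[location] = 0           # mismatch pulls the line LOW
    return match_lines
```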
The one hundred and twenty eight outputs of each of the six CAM-- COREs 105_1 -105_6 now need to be turned into a final match signal, MATCH-- SIGNAL_F, 401, and a CAM address signal, DATA-- MATCH-- ADDRESS_F, 403, 405, preferably in one clock cycle, where DATA-- MATCH-- ADDRESS_F comprises both the address of a particular CAM-- CORE 105_x and its cell array address, FIRST-- MATCH-- ADDRESS_x. Assuming again only one MATCH designation for one CAM-- CORE 105_1 -105_N of the memory bank, CAM-- CORE_1 105_1 through CAM-- CORE_N 105_N, a MATCH_F signal and an appropriate DATA-- MATCH-- ADDRESS_F are derived using a FINAL-- ENCODER 113.
Turning now to FIGS. 2A and 2B, a standard CAM encoder 201, FIG. 2A, is shown. Such an encoder 201 is used in a CAM system such as shown in the assignee's U.S. Pat. No. 5,373,290 (Lempel et al.) as element 194, FIG. 5, explained beginning in column 12, line 28 et seq., incorporated herein by reference in its entirety. In the encoder 201, a MATCH line 203 has a pull down transistor 205, configured as in FIG. 2C, one for each of the one hundred and twenty eight data words in each CAM-- CORE 105_1 -105_N. Likewise, one hundred and twenty eight CORE-- MATCH lines 207_0000000 (location zero) through 207_1111111 (location 127) are multiplexed to the MATCH line 203, from a least significant bit ("LSB") MATCH-- ADDRESS line 209_1 through a most significant bit ("MSB") MATCH-- ADDRESS line 209_7, in essence a multiplexed, wired-OR configuration [note: as will be described hereinafter, these seven bits will also form the lower address bits of a ten-bit address from the FINAL-- ENCODER 113, FIG. 1]. Thus, the MATCH line 203 has one hundred and twenty eight pull down transistors 205 (counted vertically in FIG. 2A), but each of the MATCH-- ADDRESS lines 209_1 -209_7 has only sixty four pull down transistors.
Comparing this embodiment of the standard CAM encoder 201 in FIG. 2A to the FIRST-- ENCODER 201 in accordance with the present invention as shown in FIG. 2B, the difference is that on MATCH line 203, pull down transistors 205 are provided for only one half of the CORE-- MATCH lines, 207_0000000 (location zero) through 207_1111110 (location 126). For locations having no MATCH line 203 pull down transistor 205, a designated pull down transistor of one of the MATCH-- ADDRESS lines 209_1 -209_7 is used to serve double duty, that is, to also indicate a match condition when switched.
For example, as shown, where only every other location has a pull down transistor 205 on the MATCH line 203, if the DATA of interest in the SEARCH-- DATA is at location 0000011 (a location having no MATCH line 203 pull down transistor 205 but using bit-- 0 to do the double duty), no conflict will occur, since only one location of the CAM-- CORE is ever a match. That is, if the CAM-- CORE has set the MATCH-- ADDRESS at location 0000011, bit-- 0 has changed state, indicating a MATCH. As another example, if the most significant MATCH-- ADDRESS bit is used for the double duty, only the top sixty-four locations require pull down transistors 205 on the MATCH line 203. Thus, use of one of the MATCH-- ADDRESS bits to also indicate a MATCH when a true match has occurred reduces the number of pull down transistors on the MATCH line 203 to sixty-four. As a result, the MATCH line 203 will be as fast as the MATCH-- ADDRESS lines 209.
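For exposition, the double-duty scheme can be rendered as the behavioral sketch below. It is an assumed reading of the FIG. 2B wired-OR logic in which the even-numbered locations keep their MATCH-line pull-downs, consistent with the location-0000011 example above; the function and variable names are introduced here and are not from the patent.

```python
# Assumed behavioral rendering of the halved-pull-down encoding of FIG. 2B:
# even locations keep a pull-down on the MATCH line, odd locations omit it
# and let the bit_0 MATCH_ADDRESS line "do double duty," its transition
# alone proving that a match occurred.

def first_encode(match_lines):
    """match_lines: 128 levels from a core; at most one is HIGH."""
    address_bits = [0] * 7                  # wired-OR MATCH_ADDRESS lines
    match_line_pulled = False               # the halved MATCH line
    for location, level in enumerate(match_lines):
        if level:                                       # matching location
            for b in range(7):
                if (location >> b) & 1:
                    address_bits[b] = 1                 # address pull-down fires
            if (location & 1) == 0:                     # even location keeps its
                match_line_pulled = True                # MATCH-line pull-down
    match = match_line_pulled or address_bits[0] == 1   # bit_0 double duty
    return match, address_bits

print(first_encode([0] * 3 + [1] + [0] * 124))  # location 0000011 -> (True, [1, 1, 0, 0, 0, 0, 0])
```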
In a commercial implementation having a search access time of approximately 6 nanoseconds, an improvement of approximately 0.5 nanosecond has been found to be achieved.
Recall that the present exemplary embodiment as shown in FIG. 1 uses a bank of six CAM-- CORES 105_1-6, each with its own FIRST-- ENCODER 111_1-6. Now each of the output signals MATCH_1-6 on each FIRST-- ENCODER-- MATCH output bus 115_1-6 and its appurtenant FIRST-- MATCH-- ADDRESS output bus 117_1-6 needs to be encoded in order to obtain both a final MATCH_F signal back to the CPU, indicating the data of interest has been found, and a DATA-- MATCH-- ADDRESS_F specifying both the FIRST-- MATCH-- ADDRESS on bus 117_x, identifying the CAM-- CORE location (0-127) which generated a MATCH signal, and a designation of which of the six CAM-- CORES 105_1-6 generated the MATCH signal. This function is accomplished in the FINAL-- ENCODER 113 by adding three upper address bits to the seven FIRST-- MATCH-- ADDRESS bits for the CAM-- CORE 105 location where the full data of interest resides.
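Viewed as arithmetic, the 10-bit address is simply the core designation concatenated with the 7-bit core address; the short sketch below is illustrative only, and the helper name final_address is an assumption introduced here.

```python
# Assumed composition of the 10-bit DATA_MATCH_ADDRESS_F: three added
# upper bits designate which of the six CAM_COREs matched, and the seven
# lower bits pass through that core's FIRST_MATCH_ADDRESS.

def final_address(core_index, first_match_address):
    assert 0 <= core_index < 6 and 0 <= first_match_address < 128
    return (core_index << 7) | first_match_address

# e.g. a match at location 0b0000011 of the core with index 5:
print(format(final_address(5, 0b0000011), '010b'))   # 1010000011
```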
Turning to FIGS. 3A-3B and 4A-4B, a FINAL-- ENCODER 113 for accomplishing this task is provided.
FIG. 3A again refers to an embodiment as shown in assignee's U.S. Pat. No. 5,373,290 as part of element 194, FIG. 5. In element 194, a final-- encoder 301 for an array of six cam-- cores has six sections, one designated for each cam-- core of the array. As stated earlier, each FIRST-- ENCODER 111_1-N, FIG. 1, has an output line 115_1-N for a MATCH_1-N signal and an output bus 117_1-N for a FIRST-- MATCH-- ADDRESS_1-N. Looking to both FIGS. 3A-3B and 4A-4B for comparison, and focusing on the section of FINAL-- ENCODER 113, FIG. 1, for CAM-- CORE_6 as an example of each section, the MATCH_6 signal on line 115_6 provides an appropriate HIGH or LOW state signal to each respective FINAL-- ENCODER 113 input subsection, CAM-- CORE_1-N, 303_1-N. Each FIRST-- MATCH-- ADDRESS 7-bit bus 117_1-N is likewise input to each FINAL-- ENCODER 113 input subsection, CAM-- CORE_1-N. That is to say, each CAM-- CORE_x has its respective FIRST-- ENCODER 111_x output connected to a respective subsection of the FINAL-- ENCODER 113, which will in turn provide the actual MATCH_F signal and DATA-- MATCH-- ADDRESS_F for the data of interest based on the SEARCH-- DATA input.
Looking also to FIGS. 5A and 5B, detail for FINAL-- ENCODER 113 subsection CAM-- CORE_6 303_6 is depicted. The FINAL-- ENCODER 113 is multiplexed with the inputs 115, 117 from the FIRST-- ENCODER_x. Match signal pull down transistors 501 are provided such that when a MATCH_6 and FIRST-- MATCH-- ADDRESS_6 are received from a FIRST-- ENCODER_6, the FINAL-- ENCODER input subsection CAM-- CORE_6 will provide both a MATCH_F signal on FINAL-- MATCH-- LINE 401 and an expanded, 10-bit address for the data, DATA-- MATCH-- ADDRESS_F. In the example, the DATA-- MATCH-- ADDRESS_F designates CAM-- CORE_6 in its added upper three bits on DATA-- MATCH-- ADDRESS_F upper bit lines 403_1-3, and passes through the FIRST-- MATCH-- ADDRESS_6 on DATA-- MATCH-- ADDRESS_F lower bit lines 405_1-7 (with reversal of all signal levels, HIGH to LOW and LOW to HIGH, if necessary to use standard logic where HIGH=1).
Returning to FIGS. 3A-3B and 4A-4B, each CAM-- CORE_x section can be compared, and it can be seen that the half of the pull down transistors 205 removed from the FIRST-- ENCODER-- MATCH lines 207 in FIG. 2B for providing the MATCH_x signal has been added back in the FINAL-- ENCODER 113 on the MATCH_F lines 401. However, it has been found that this arrangement of the critical path in the present invention, as shown in FIGS. 2B, 4A-4B, and 5A-5B, reduces the cycle time by approximately ten percent relative to the arrangement of FIGS. 2A, 3A-3B, in a synergistic manner.
For some implementations the assumption that only one matching data set will be found is not true. Prioritization (selection of one of a possible plurality of matching data sets) must be accomplished to prevent an unresolved contention and logic error. A priority encoder for the situation where there may be more than one match and match address, e.g., in a data compression implementation where multiple compression dictionaries are employed, is shown in FIG. 8, where elements 811_0 -811_N are analogous to element 611 for the purpose of explaining the invention in terms of a particular exemplary embodiment.
Generally speaking, since the memory output, for example of a set of data compression dictionaries stored in the CAM-- CORES 105_0 -105_N, is deterministic, more than one core location can contain the data sought at a given time. As an example of use, assume there are two actual CAM devices, one holding data compression string information and the second holding status information telling the status of a particular dictionary, e.g., 00=previous dictionary, 01=current dictionary, 10=standby dictionary, 11=invalid. There is a one-to-one relationship between the string CAM and the status CAM. The status information tells which of the multiple dictionaries the information is actually in. (See e.g., U.S. Pat. No. 5,455,576, elements 40 and 28). Multiple matching entries are thus a distinct possibility in such a system.
While the CAM-- CORES 105_0 -105_N of CAM-- CORE device 200 are shown in FIG. 8 as discrete devices, it will be recognized by a person skilled in the art that generally one memory cell array is used and, for the purpose of the present invention, is subdivided. For this example, let N=11: 768 inputs divided into twelve segments of 64. The present invention serves to provide both the MATCH signal and a 10-bit MATCH-- ADDRESS signal to select the first location having the data sought. It will be recognized by those skilled in the art that this is a design expedient for the purpose of describing the present invention and that modifications can be made to develop other selection criteria for a different implementation; for example, for 1024 entries, N=16 and circuitry expansion to develop a 10-bit MATCH-- ADDRESS is required; that is, the circuit sections are sized in powers of two, 2^n, e.g., 2^10 = 1024.
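The segment arithmetic implied above can be shown in a few lines; the sketch is illustrative only (the name split_address and the example value are assumptions), and it simply reflects that a 64-entry segment consumes six address bits.

```python
# Illustrative arithmetic for the segmented addressing: with 64-entry
# segments, the lower six bits of a MATCH_ADDRESS give the word line
# within a segment and the upper bits give the segment number.

SEGMENT_SIZE = 64                                  # 2**6 locations per segment

def split_address(match_address):
    segment = match_address // SEGMENT_SIZE        # upper address bits
    word_line = match_address % SEGMENT_SIZE       # lower six bits
    return segment, word_line

print(split_address(131))   # (2, 3): third segment, word line 3
```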
Signal convention hereinafter uses "N-- " to indicate active low logic signals, "P-- " for prioritized signals.
When the CAM-- CORES 105_0 -105_N have their respective 64-bit output bus lines 809_0 -809_N (analogous to FIG. 1, elements 109_x) set to output SEARCH-- DATA results, and the EVALUATEBUF enable signal is set, priority encoding commences. Each CAM-- CORE section may have one or more of the sixty-four match lines of its bus, 809_0-63, indicating either a HIGH, if there is a MATCH at the connected location, or a LOW, if there is no match for that location.
The goal is to have the prioritizer circuit, including PRIORITY-- ENCODER 811_x and ADDRESS-- ENCODER 813_x (analogous to FIG. 1, elements 111_x), provide a MATCH and a MATCH-- ADDRESS for only the first location where the data is to be found in a CAM-- CORE 105_n. MATCH signals appear relatively quickly following an EVALUATEBUF signal (see, e.g., and compare FIG. 6, FIG. 7, waveforms circle-6 and circle-12, and FIG. 8, line 817), whereas the MATCH-- ADDRESS signals take longer to establish and output. By dividing the encoding functionality as follows, by the time the six lower bits of a MATCH-- ADDRESS are available, the four upper bits are also generated, such that the MATCH-- ADDRESS provided points to the first location word lines of the first CAM-- CORE of the bank having the required data. A FINAL-- ENCODER 113' can be provided as explained heretofore.
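The selection rule itself ("first matching location wins") can be sketched behaviorally as follows; this is an assumed model of the prioritization only, not of the divided upper/lower-bit encoding hardware, and its names are introduced here for illustration.

```python
# Minimal behavioral sketch of the prioritization rule (assumed logic):
# when several locations match, only the first one, i.e. the lowest
# word line of the lowest-numbered segment, is reported.

def prioritize(match_lines_per_segment):
    """match_lines_per_segment: lists of 64 match-line levels (0/1) each."""
    for segment, lines in enumerate(match_lines_per_segment):
        for word_line, level in enumerate(lines):
            if level:                              # first HIGH line wins
                upper_bits = segment               # upper MATCH_ADDRESS bits
                lower_bits = word_line             # lower six bits
                return True, (upper_bits << 6) | lower_bits
    return False, None
```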
Turning to FIG. 6, a block diagram of a preferred embodiment of the system architecture in accordance with the present invention is shown, detailing a CAM-- CORE device 200 and an ENCODER 611 device.
A signal timing diagram for the system architecture is shown in FIG. 7. While actual timing in a specific implementation will vary, and while actual signal occurrence timing will vary with fabrication process, voltage, and temperature ("PVT") fluctuations, relative signal occurrence timing is substantially constant as shown for the exemplary embodiment described throughout.
Referring to both FIGS. 6 and 7, waveform-2, the system is exemplified by a 15-nanosecond ("ns") system clock cycle. Assume that the chip logic output is a CAM search request, ASEARCH, waveform-1, having a rising edge occurring at t=112+. A next CAMCORE search is enabled, SEARCHEN-- 1, waveform-3, issued at the falling edge of the CLOCK signal, t=112.5. Assume further that the system is timed for a search of the CAM array to be accomplished in about 3.0 ns, after which SEARCHEN-- 1 goes LOW, as explained hereinafter. During the remainder of the clock cycle, 12 ns, time must be budgeted for the CAM output and for setting up the next system clock cycle repeat, starting at t=127.5, where, absent the present invention, the next precharge can also be triggered. Thus, with only the clock transitions as triggers, CAM precharge would have to wait until the start of each new cycle. Depending upon the CAMCORE size, system speed, and clock cycle budgeting for a specific implementation, there might not be enough time in such a budget to precharge the CAMCORE in this manner. However, if the CAM search time can be shortened and precharge can be initiated as soon as the actual search of the CAMCORE 105 ends, a greater precharge time can be made available in which to schedule and accomplish precharging.
An advantage of having a longer precharge time is that, where only one transition of a cell gate of the CAMCORE is necessary during the clock cycle, viz. to indicate a match, the cells can be designed as unbalanced, i.e., to change more quickly in one direction. For example, a NAND gate that goes HIGH to LOW in 0.2 ns during the search and LOW to HIGH in 2.0 ns during precharge is acceptable when enough precharge time can be scheduled. Whereas a balanced gate might take 0.4 ns in each direction, unbalancing the gate thus doubles the speed through the gate in the search direction. Maximizing the precharge time allows a maximal unbalance factor in the gates, thereby maximizing search speed.
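The numbers above can be put together in a back-of-envelope sketch; the "self-timed" window below is an assumed upper bound (next search taken as one full period later, encoder overlap ignored), not a figure stated in the patent.

```python
# Back-of-envelope for the exemplary 15 ns cycle.  The self-timed window
# is an assumed approximation; the gate delays repeat the example above.

clock_period_ns = 15.0
search_time_ns = 3.0                                    # CAMCORE search budget

window_clock_triggered = clock_period_ns / 2            # <= 7.5 ns to precharge
window_self_timed = clock_period_ns - search_time_ns    # up to ~12 ns

fall_ns, rise_ns = 0.2, 2.0   # skewed gate: fast search edge, slow precharge edge
print(window_clock_triggered, window_self_timed, rise_ns < window_self_timed)
```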
Returning to FIGS. 6 and 7, as CLOCK goes LOW, SEARCHEN-- 1 goes HIGH, t=113.0. This enables the SEARCH DRIVERS 609 and disables the CORE PRECHARGE circuitry 613, 615, NPRECHARGE going HIGH [note that a signal name starting with "N-- " symbolizes an active LOW signal]. The CAMCORE precharge signal, NPREML2, waveform-4, goes HIGH, t=113.5, turning the core precharge off, and DUMM1, waveform-5, goes LOW, t=114.0. The search signals thus pass through the CAMCORE 105 to clock an edge-triggered, set-reset flip-flop DFF2 617, which drives EVALUATEBUF1, waveform-6, HIGH, t=113.5. DFF2 617 and DFF3 633, detailed hereinafter, receive a system initialization signal, NRESET, whenever re-initialization is required, going LOW and clocking a HIGH, respectively, at that time.
Note from FIG. 6 that EVALUATEBUF1 is also inverted, becoming SRCHDFFRST1, waveform-7, which feeds back and resets the search-enabling, edge-triggered, set-reset flip-flop DFF1 619 at t=115+. Resetting flip-flop DFF1 619 drives SEARCHEN-- 1 LOW, t=117.3, disabling the SEARCH DRIVER 609 and enabling the PRECHARGE DRIVER 615 and CORE PRECHARGE 613 circuitry as NPREML2, waveform-4, goes LOW, t=118.0. The CORE PRECHARGE signal NPREML2 feeds the DFF2 617 reset port and EVALUATEBUF1 goes LOW, t=119+. This portion of the CAM 200 system is thus back to its original state, ready for the next clock cycle to begin.
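The feedback ordering just described can be summarized as a step-through. The sketch below reuses the patent's signal names but is only an illustrative event trace, not a timing-accurate model of FIG. 6.

```python
# Illustrative step-through of the self-timing feedback loop (event
# ordering only; real timing is set by gate and array delays).

def self_timed_loop():
    trace = []
    # Falling CLOCK edge sets DFF1: search enabled, core precharge off.
    SEARCHEN_1, NPREML2 = 1, 1
    trace.append("DFF1 set: SEARCHEN_1 HIGH, NPREML2 HIGH (precharge off)")

    # The search edge propagates through the CAMCORE and sets DFF2.
    EVALUATEBUF1 = 1
    trace.append("DFF2 set: EVALUATEBUF1 HIGH (encoder enabled)")

    # The inverted copy of EVALUATEBUF1 (SRCHDFFRST1) resets DFF1, so the
    # search drivers turn off and core precharge restarts immediately.
    SRCHDFFRST1 = 1 - EVALUATEBUF1
    SEARCHEN_1, NPREML2 = 0, 0
    trace.append(f"SRCHDFFRST1={SRCHDFFRST1} resets DFF1: precharge restarts")

    # NPREML2 going LOW in turn resets DFF2, closing the loop.
    EVALUATEBUF1 = 0
    trace.append("DFF2 reset: EVALUATEBUF1 LOW, ready for the next cycle")
    return trace

for event in self_timed_loop():
    print(event)
```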
EVALUATEBUF1 going HIGH also triggers the encoder signals, waveforms 8 through 13. While SEARCHEN-- 1 and EVALUATEBUF1 are HIGH, namely from t=113.0 to t=119+, the PRIORITY-- ENCODER section 611 generates MATCH and MATCH-- ADDRESS signals based on the CAMCORE 105 search results. EVALUATEBUF1 going HIGH turns off the precharge for the PRIORITY ENCODER 611 as NBANKPRE, waveform-8, goes HIGH, t≅115.5, just as the MATCH signals from the CAMCORE 105, SCTRMATCH, waveform-9, are fed into the FINAL PRIORITY ENCODER 621 (see also FIG. 8, element 113'). Furthermore, EVALUATEBUF1 drives the BANK ENCODER DELAY 623, DUMMYENC, waveform-10, which waits for the amount of time needed for the BANK PRIORITY ENCODER 625 to generate a MATCH signal, waveform-12, and send it to the FINAL PRIORITY ENCODER 621, and then turns off the FINAL ENCODER PRECHARGE 627 for the FINAL PRIORITY ENCODER 621.
Following the end of a search cycle, when SEARCHEN-- 1 goes LOW and NPREML2 goes LOW, t=118.0-119.0, restarting the precharge of the CAMCORE 105 cells, NPREML2 also pulls the MATCH output lines from the CAMCORE to LOW and starts the precharge of the BANK PRIORITY ENCODER 625 as the PRIORITY ENCODE PRECHARGE 620 signal NBANKPRE goes LOW, t=121.0-122.0. As before, the precharge signal NBANKPRE feeds through the BANK ENCODER DELAY 623 and turns on the FINAL ENCODE PRECHARGE 627 just as the BANK PRIORITY ENCODER 625 stops driving the FINAL PRIORITY ENCODER 621. Sometime during this process, the desired output MATCH and MATCH-- ADDRESS signals appear on the output ports of the CAMCORE 105. The time at which this happens, and the length of time these CAMCORE outputs remain valid, is search, process, voltage, and temperature dependent.
The desired action is to hold the outputs until after the next CLOCK edge, t=127.5. This is done by placing OUTPUT LATCHES 631 on the FINAL PRIORITY ENCODER 621 outputs, MADDR, waveform-13, and FBMAT1, waveform-11, under control of LATCHMOUT, where the OUTPUT LATCHES 631 are set at t=118+ and release the latched bits at t=128+ following the CLOCK cycle falling edge at t=127.5.
The OUTPUT LATCHES 631 are also self-timed; an edge-triggered, set-reset flip-flop DFF3 633 is triggered by the MATCH signal going HIGH, t=116.5-117.5, driving LATCHMOUT, waveform-13, and causing the MATCH and MATCH-- ADDRESS signals to be latched. The OUTPUT LATCHES 631 remain closed until reset by the falling edge of the CLOCK at t=127.5. Note that if no match occurs on a particular search, the OUTPUT LATCHES 631 will not close, since the CAMCORE 105 will continuously output the desired LOW values during a no-match search cycle.
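A minimal sketch of this latch rule, assuming the behavior described above, is given below; the class and method names are illustrative, not part of the patent.

```python
# Minimal sketch of the self-timed output latch rule (assumed behavior
# of DFF3 and the OUTPUT LATCHES; names are illustrative only).

class OutputLatch:
    def __init__(self):
        self.closed = False
        self.held = (0, 0)                       # (MATCH, MATCH_ADDRESS)

    def on_match_rising(self, match, match_address):
        # MATCH going HIGH closes the latch and freezes the outputs.
        if match and not self.closed:
            self.closed, self.held = True, (match, match_address)

    def on_clock_falling(self):
        # The falling CLOCK edge reopens the latch for the next cycle.
        self.closed = False

    def outputs(self, live_match, live_address):
        # With no match the latch never closes and the LOW levels flow through.
        return self.held if self.closed else (live_match, live_address)
```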
Thus, once the memory bank encoder has the data, the cam cores can start precharging. Once the final encoder has the data, the bank encoder can start precharging and one of the output latches can close, allowing the final encoder to start precharging. Conversely, the bank encoder does not stop precharging until the cam cores have search data to send. The final encoder does not stop precharging until the bank encoder has data to send. The output latches are set to open on the falling edge of the clock cycle rather than when the final encoder has data to send. Note that in an alternative embodiment the functionality can be reversed. The present invention provides a CAM search functionality which self-times CAMCORE precharge functions and latched output MATCH and MATCH-- ADDRESS signals.
The foregoing description of the preferred embodiment of the present invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form or to exemplary embodiments disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in this art. The invention can be applied to associative data storage and retrieval in devices other than cam circuits. Similarly, any process steps described might be interchangeable with other steps in order to achieve the same result. The embodiment was chosen and described in order to best explain the principles of the invention and its best mode practical application, thereby to enable others skilled in the art to understand the invention for various embodiments and with various modifications as are suited to the particular use or implementation contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.

Claims (8)

What is claimed is:
1. A method for self-timing a computer data memory system having a single transition associative data memory device, a system clock providing a system timing signal, and a single transition output encoder for providing a memory data match signal and memory data match address signal, comprising the steps of:
providing a memory search signal for starting a memory search and for disabling memory pre-transition state precharging;
delaying said memory search signal until memory search is complete, providing a delayed memory search signal;
using said delayed memory search signal, enabling said output encoder and using said delayed memory search signal as a feedback signal substantially simultaneously re-enabling memory precharging.
2. The method as set forth in claim 1, further comprising the step of:
using said memory data match signal to latch said memory data match signal and memory data match address signal until a next system timing signal transition.
3. The method as set forth in claim 1, comprising the further steps of:
using said memory search signal to start precharging said output encoder; and
using said delayed memory search signal to stop precharging said output encoder and enable encoding of memory search output transition signals.
4. A self-timed computer memory system for associative data storage, search, and retrieval, said system including a system clock providing a system timing signal, comprising:
an array of memory cells, including search driver circuitry and cell output precharge circuitry;
encoder means for providing array search match and array match address output signals based on array search results, said encoder means having output encoder circuitry, encoder precharge circuitry, and output circuitry for latching said match and match address output signals;
first means, connected to receive a signal indicative of a search request and said system timing signal and to transmit said signal to said search driver circuitry and said cell precharge driver circuitry, for turning said search driver circuitry on and said cell precharge driver circuitry off substantially simultaneously;
second means, connecting said array and said encoder means, for substantially simultaneously turning off said encoder precharge circuitry and resetting said first means as soon as said encoder means is enabled.
5. The system as set forth in claim 4, wherein said output encoder circuitry further comprises:
third means for latching said match and match address output signals such that said signals are held until a next system timing signal transition.
6. The system as set forth in claim 5, wherein said first means, said second means, and said third means are edge-triggered, reset-set flip-flop devices.
7. A content addressable memory (CAM) apparatus for a system having a system clock timing signal, comprising:
a CAM device having
an input and an output,
an array of CAM cells, CAM search driver circuitry, and
CAM precharging circuitry;
a CAM output encoder having
CAM array match signal and CAM array match address signal encoding circuitry connected to said CAM output and encoder precharging circuitry;
a set-reset first flip-flop having
set inputs connected for receiving said clock timing signal and a signal indicative of a search request,
a reset input, and
an output connected to said search driver circuitry and said precharging circuitry
such that a set condition of said first flip-flop transmits a signal enabling a search of said array and disabling precharging of said array; and
a set-reset second flip-flop having
a set input connected for receiving a first delayed signal indicative of a search request and
a reset input connected for receiving a first delayed signal indicative of disabling precharging and
an output connected to said encoder precharging circuitry and to said first flip-flop reset input,
wherein said first delayed signal indicative of a search request sets said second flip-flop and transmits a signal enabling encoding said CAM array output with said match signal and match address signal encoding circuitry and disabling said encoder precharging circuitry and resetting said first flip-flop, enabling said CAM precharging circuitry.
8. The apparatus as set forth in claim 7, further comprising:
each CAM cell including unbalanced gates having search time specifications substantially shorter than precharge time specifications such that search time is minimized.
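For illustration only, and not as part of the claims: a minimal behavioral sketch of the two set-reset flip-flop control chain recited in claim 7, using hypothetical class, method, and signal names rather than the patent's own node names:

```python
class SRFlipFlop:
    """Simple set-reset flip-flop state, illustration only."""
    def __init__(self) -> None:
        self.q = False
    def set(self) -> None:
        self.q = True
    def reset(self) -> None:
        self.q = False

class SelfTimedSearchControl:
    """First flip-flop launches the search and blocks CAM precharge; the
    delayed search signal sets the second flip-flop, which enables encoding,
    stops encoder precharge, and resets the first flip-flop so CAM precharge
    can resume."""
    def __init__(self) -> None:
        self.ff1 = SRFlipFlop()   # drives search, disables CAM precharge while set
        self.ff2 = SRFlipFlop()   # disables encoder precharge while set

    def search_request(self) -> None:
        self.ff1.set()            # search enabled, CAM precharge disabled

    def delayed_search_signal(self) -> None:
        self.ff2.set()            # encoder enabled, encoder precharge disabled
        self.ff1.reset()          # CAM precharge re-enabled

    @property
    def cam_precharging(self) -> bool:
        return not self.ff1.q

    @property
    def encoder_precharging(self) -> bool:
        return not self.ff2.q

ctl = SelfTimedSearchControl()
ctl.search_request()
assert not ctl.cam_precharging and ctl.encoder_precharging
ctl.delayed_search_signal()
assert ctl.cam_precharging and not ctl.encoder_precharging
```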
US08/920,395 1996-06-17 1997-08-29 Method and apparatus for self-timing associative data memory Expired - Fee Related US5978885A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US08/920,395 US5978885A (en) 1996-06-17 1997-08-29 Method and apparatus for self-timing associative data memory

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/664,902 US5828324A (en) 1996-06-17 1996-06-17 Match and match address signal generation in a content addressable memory encoder
US08/920,395 US5978885A (en) 1996-06-17 1997-08-29 Method and apparatus for self-timing associative data memory

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US08/664,902 Continuation-In-Part US5828324A (en) 1996-06-17 1996-06-17 Match and match address signal generation in a content addressable memory encoder

Publications (1)

Publication Number Publication Date
US5978885A true US5978885A (en) 1999-11-02

Family

ID=46203187

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/920,395 Expired - Fee Related US5978885A (en) 1996-06-17 1997-08-29 Method and apparatus for self-timing associative data memory

Country Status (1)

Country Link
US (1) US5978885A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3701980A (en) * 1970-08-03 1972-10-31 Gen Electric High density four-transistor mos content addressed memory
US4780845A (en) * 1986-07-23 1988-10-25 Advanced Micro Devices, Inc. High density, dynamic, content-addressable memory cell
US4881075A (en) * 1987-10-15 1989-11-14 Digital Equipment Corporation Method and apparatus for adaptive data compression
EP0313190A2 (en) * 1987-10-19 1989-04-26 Hewlett-Packard Company Performance-based reset of data compression dictionary
EP0380294A1 (en) * 1989-01-23 1990-08-01 Codex Corporation String matching
US5175543A (en) * 1991-09-25 1992-12-29 Hewlett-Packard Company Dictionary reset performance enhancement for data compression applications
US5373290A (en) * 1991-09-25 1994-12-13 Hewlett-Packard Corporation Apparatus and method for managing multiple dictionaries in content addressable memory based data compression
US5448733A (en) * 1993-07-16 1995-09-05 International Business Machines Corp. Data search and compression device and method for searching and compressing repeating data
US5602770A (en) * 1995-02-03 1997-02-11 Kawasaki Steel Corporation Associative memory device
US5859791A (en) * 1997-01-09 1999-01-12 Northern Telecom Limited Content addressable memory

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Compression Of Individual Sequences Via Variable-Rate Coding", By Jacob Ziv and Abraham Lempel, IEEE Transactions on Information Theory, vol. IT-24, No. 5, Sep. 1978.
"Practical Dictionary Management For Hardware Data Compression", By Ziv & Lempel, Development of a Theme, Department of Computer Science & Engineering FR-35 University of Washington Seattle WA 98195, pp. 33-50.

Cited By (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6658002B1 (en) 1998-06-30 2003-12-02 Cisco Technology, Inc. Logical operation unit for packet processing
US6738862B1 (en) 1998-08-07 2004-05-18 Cisco Technology, Inc. Block mask ternary CAM
US6892272B1 (en) 1999-02-23 2005-05-10 Netlogic Microsystems, Inc. Method and apparatus for determining a longest prefix match in a content addressable memory device
US6191970B1 (en) * 1999-09-09 2001-02-20 Netlogic Microsystems, Inc. Selective match line discharging in a partitioned content addressable memory array
US7143231B1 (en) 1999-09-23 2006-11-28 Netlogic Microsystems, Inc. Method and apparatus for performing packet classification for policy-based packet routing
US7272027B2 (en) 1999-09-23 2007-09-18 Netlogic Microsystems, Inc. Priority circuit for content addressable memory
US6374326B1 (en) * 1999-10-25 2002-04-16 Cisco Technology, Inc. Multiple bank CAM architecture and method for performing concurrent lookup operations
US6526474B1 (en) 1999-10-25 2003-02-25 Cisco Technology, Inc. Content addressable memory (CAM) with accesses to multiple CAM arrays used to generate result for various matching sizes
US7230840B2 (en) 2000-06-08 2007-06-12 Netlogic Microsystems, Inc. Content addressable memory with configurable class-based storage partition
US20050063241A1 (en) * 2000-06-08 2005-03-24 Pereira Jose P. Content addressable memory with configurable class-based storage partition
US6606681B1 (en) 2001-02-23 2003-08-12 Cisco Systems, Inc. Optimized content addressable memory (CAM)
US6862281B1 (en) 2001-05-10 2005-03-01 Cisco Technology, Inc. L4 lookup implementation using efficient CAM organization
US20060104286A1 (en) * 2001-05-21 2006-05-18 Cisco Technology, Inc., A California Corporation Using ternary and binary content addressable memory stages to classify information such as packets
US7602787B2 (en) 2001-05-21 2009-10-13 Cisco Technology, Inc. Using ternary and binary content addressable memory stages to classify information such as packets
US7002965B1 (en) 2001-05-21 2006-02-21 Cisco Technology, Inc. Method and apparatus for using ternary and binary content-addressable memory stages to classify packets
US7260673B1 (en) 2001-07-20 2007-08-21 Cisco Technology, Inc. Method and apparatus for verifying the integrity of a content-addressable memory result
US20040032775A1 (en) * 2001-08-22 2004-02-19 Varadarajan Srinivasan Concurrent searching of different tables within a content addressable memory
US6744652B2 (en) 2001-08-22 2004-06-01 Netlogic Microsystems, Inc. Concurrent searching of different tables within a content addressable memory
US6967855B2 (en) 2001-08-22 2005-11-22 Netlogic Microsystems, Inc. Concurrent searching of different tables within a content addressable memory
US7065083B1 (en) 2001-10-04 2006-06-20 Cisco Technology, Inc. Method and apparatus for dynamically generating lookup words for content-addressable memories
US7210003B2 (en) 2001-10-31 2007-04-24 Netlogic Microsystems, Inc. Comparand generation in a content addressable memory
US20030084236A1 (en) * 2001-10-31 2003-05-01 Sandeep Khanna Bit level programming interface in a content addressable memory
US6993622B2 (en) 2001-10-31 2006-01-31 Netlogic Microsystems, Inc. Bit level programming interface in a content addressable memory
US6715029B1 (en) 2002-01-07 2004-03-30 Cisco Technology, Inc. Method and apparatus for possibly decreasing the number of associative memory entries by supplementing an associative memory result with discriminator bits from an original set of information
US6961808B1 (en) 2002-01-08 2005-11-01 Cisco Technology, Inc. Method and apparatus for implementing and using multiple virtual portions of physical associative memories
US20080288721A1 (en) * 2002-01-14 2008-11-20 Argyres Dimitri C Transposing of bits in input data to form a comparand within a content addressable memory
US20040240484A1 (en) * 2002-01-14 2004-12-02 Argyres Dimitri C. Transposing of bits in input data to form a comparand within a content addressable memory
US7856524B2 (en) 2002-01-14 2010-12-21 Netlogic Microsystems, Inc. Transposing of bits in input data to form a comparand within a content addressable memory
US7412561B2 (en) 2002-01-14 2008-08-12 Netlogic Microsystems, Inc. Transposing of bits in input data to form a comparand within a content addressable memory
US7237058B2 (en) 2002-01-14 2007-06-26 Netlogic Microsystems, Inc. Input data selection for content addressable memory
US6871262B1 (en) 2002-02-14 2005-03-22 Cisco Technology, Inc. Method and apparatus for matching a string with multiple lookups using a single associative memory
US7114026B1 (en) 2002-06-17 2006-09-26 Sandeep Khanna CAM device having multiple index generators
US20060106977A1 (en) * 2002-08-10 2006-05-18 Cisco Technology, Inc. A California Corporation Performing lookup operations on associative memory entries
US20040170171A1 (en) * 2002-08-10 2004-09-02 Cisco Technology, Inc., A California Corporation Generating and merging lookup results to apply multiple features
US7028136B1 (en) 2002-08-10 2006-04-11 Cisco Technology, Inc. Managing idle time and performing lookup operations to adapt to refresh requirements or operational rates of the particular associative memory or other devices used to implement the system
US7082492B2 (en) 2002-08-10 2006-07-25 Cisco Technology, Inc. Associative memory entries with force no-hit and priority indications of particular use in implementing policy maps in communication devices
US20040030802A1 (en) * 2002-08-10 2004-02-12 Eatherton William N. Performing lookup operations using associative memories optionally including selectively determining which associative memory blocks to use in identifying a result and possibly propagating error indications
US7103708B2 (en) 2002-08-10 2006-09-05 Cisco Technology, Inc. Performing lookup operations using associative memories optionally including modifying a search key in generating a lookup word and possibly forcing a no-hit indication in response to matching a particular entry
US7689485B2 (en) 2002-08-10 2010-03-30 Cisco Technology, Inc. Generating accounting data based on access control list entries
US20040030803A1 (en) * 2002-08-10 2004-02-12 Eatherton William N. Performing lookup operations using associative memories optionally including modifying a search key in generating a lookup word and possibly forcing a no-hit indication in response to matching a particular entry
US20070002862A1 (en) * 2002-08-10 2007-01-04 Cisco Technology, Inc., A California Corporation Generating and merging lookup results to apply multiple features
US7177978B2 (en) 2002-08-10 2007-02-13 Cisco Technology, Inc. Generating and merging lookup results to apply multiple features
US20040170172A1 (en) * 2002-08-10 2004-09-02 Cisco Technology, Inc., A California Corporation Associative memory entries with force no-hit and priority indications of particular use in implementing policy maps in communication devices
US7441074B1 (en) 2002-08-10 2008-10-21 Cisco Technology, Inc. Methods and apparatus for distributing entries among lookup units and selectively enabling less than all of the lookup units when performing a lookup operation
US7237059B2 (en) 2002-08-10 2007-06-26 Cisco Technology, Inc Performing lookup operations on associative memory entries
US20050021752A1 (en) * 2002-08-10 2005-01-27 Cisco Technology, Inc., A California Corporation Reverse path forwarding protection of packets using automated population of access control lists based on a forwarding information base
US20040172346A1 (en) * 2002-08-10 2004-09-02 Cisco Technology, Inc., A California Corporation Generating accounting data based on access control list entries
US7065609B2 (en) 2002-08-10 2006-06-20 Cisco Technology, Inc. Performing lookup operations using associative memories optionally including selectively determining which associative memory blocks to use in identifying a result and possibly propagating error indications
US7350020B2 (en) 2002-08-10 2008-03-25 Cisco Technology, Inc. Generating and merging lookup results to apply multiple features
US7349382B2 (en) 2002-08-10 2008-03-25 Cisco Technology, Inc. Reverse path forwarding protection of packets using automated population of access control lists based on a forwarding information base
US7313666B1 (en) 2002-08-10 2007-12-25 Cisco Technology, Inc. Methods and apparatus for longest common prefix based caching
US7941605B1 (en) 2002-11-01 2011-05-10 Cisco Technology, Inc Methods and apparatus for generating a result based on a lookup result from a lookup operation using an associative memory and processing based on a discriminator portion of a lookup word
US7493328B2 (en) 2003-07-09 2009-02-17 Cisco Technology, Inc. Storing and searching a hierarchy of policies and associations thereof of particular use with IP security policies and security associations
US20060074899A1 (en) * 2003-07-09 2006-04-06 Cisco Technology, Inc., A California Corporation Storing and searching a hierarchy of policies and associations thereof of particular use with IP security policies and security associations
US20050010612A1 (en) * 2003-07-09 2005-01-13 Cisco Technology, Inc. Storing and searching a hierarchy of items of particular use with IP security policies and security associations
US6988106B2 (en) 2003-07-09 2006-01-17 Cisco Technology, Inc. Strong and searching a hierarchy of items of particular use with IP security policies and security associations
US20060018142A1 (en) * 2003-08-11 2006-01-26 Varadarajan Srinivasan Concurrent searching of different tables within a content addressable memory
US7305519B1 (en) 2004-03-29 2007-12-04 Cisco Technology, Inc. Error protection for associative memory entries and lookup operations performed thereon
US7290083B2 (en) 2004-06-29 2007-10-30 Cisco Technology, Inc. Error protection for lookup operations in content-addressable memory entries
US20050289295A1 (en) * 2004-06-29 2005-12-29 Cisco Technology, Inc. Error Protection For Lookup Operations in Content-Addressable Memory Entries
US20060168494A1 (en) * 2005-01-22 2006-07-27 Cisco Technology, Inc., A California Corporation Error protecting groups of data words
US7350131B2 (en) 2005-01-22 2008-03-25 Cisco Technology, Inc. Error protecting groups of data words
US7689889B2 (en) 2006-08-24 2010-03-30 Cisco Technology, Inc. Content addressable memory entry coding for error detection and correction
US20080049522A1 (en) * 2006-08-24 2008-02-28 Cisco Technology, Inc. Content addressable memory entry coding for error detection and correction

Similar Documents

Publication Publication Date Title
US5978885A (en) Method and apparatus for self-timing associative data memory
US6069573A (en) Match and match address signal prioritization in a content addressable memory encoder
US6191969B1 (en) Selective match line discharging in a partitioned content addressable memory array
US6064625A (en) Semiconductor memory device having a short write time
Kadota et al. An 8-kbit content-addressable and reentrant memory
US5398213A (en) Access time speed-up circuit for a semiconductor memory device
US6243280B1 (en) Selective match line pre-charging in a partitioned content addressable memory array
US6564289B2 (en) Method and apparatus for performing a read next highest priority match instruction in a content addressable memory device
US6901020B2 (en) Integrated charge sensing scheme for resistive memories
US6338127B1 (en) Method and apparatus for resynchronizing a plurality of clock signals used to latch respective digital signals, and memory device using same
EP1097455B1 (en) Method and apparatus for controlling the data rate of a clocking circuit
KR20110124326A (en) Daisy chain cascading devices
WO1998057331A1 (en) Two step memory device command buffer apparatus and method and memory devices and computer systems using same
US6301185B1 (en) Random access memory with divided memory banks and data read/write architecture therefor
JPH11306757A (en) Synchronization-type semiconductor storage
US6195309B1 (en) Timing circuit for a burst-mode address counter
US4314353A (en) On chip ram interconnect to MPU bus
US5726950A (en) Synchronous semiconductor memory device performing input/output of data in a cycle shorter than an external clock signal cycle
US6370627B1 (en) Memory device command buffer apparatus and method and memory devices and computer systems using same
JP2779114B2 (en) Associative memory
US5828324A (en) Match and match address signal generation in a content addressable memory encoder
US6707734B2 (en) Method and circuit for accelerating redundant address matching
US4328558A (en) RAM Address enable circuit for a microprocessor having an on-chip RAM
US6195308B1 (en) Self-timed address decoder for register file and compare circuit of a multi-port cam
US5363337A (en) Integrated circuit memory with variable addressing of memory cells

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT-PACKARD COMPANY, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CLARK II, AIRELL R.;REEL/FRAME:008778/0656

Effective date: 19970829

AS Assignment

Owner name: HEWLETT-PACKARD COMPANY, A DELAWARE CORPORATION, C

Free format text: MERGER;ASSIGNOR:HEWLETT-PACKARD COMPANY, A CALIFORNIA CORPORATION;REEL/FRAME:010841/0649

Effective date: 19980520

AS Assignment

Owner name: AGILENT TECHNOLOGIES INC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD COMPANY;REEL/FRAME:010977/0540

Effective date: 19991101

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AGILENT TECHNOLOGIES, INC.;REEL/FRAME:017207/0020

Effective date: 20051201

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20071102

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE NAME OF THE ASSIGNEE PREVIOUSLY RECORDED ON REEL 017207 FRAME 0020. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:AGILENT TECHNOLOGIES, INC.;REEL/FRAME:038633/0001

Effective date: 20051201