US20180211699A1 - Method for calibrating capturing read data in a read data path for a DDR memory interface circuit
- Publication number
- US20180211699A1 (U.S. application Ser. No. 15/926,902)
- Authority
- US
- United States
- Prior art keywords
- signal
- read data
- dqs
- timing
- calibration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/21—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
- G11C11/34—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
- G11C11/40—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
- G11C11/401—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming cells needing refreshing or charge regeneration, i.e. dynamic cells
- G11C11/4063—Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing
- G11C11/407—Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing for memory cells of the field-effect type
- G11C11/4076—Timing circuits
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/04—Generating or distributing clock signals or signals derived directly therefrom
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/04—Generating or distributing clock signals or signals derived directly therefrom
- G06F1/08—Clock generators with changeable or programmable clock frequency
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/04—Generating or distributing clock signals or signals derived directly therefrom
- G06F1/12—Synchronisation of different clock signals provided by a plurality of clock generators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/04—Generating or distributing clock signals or signals derived directly therefrom
- G06F1/14—Time supervision arrangements, e.g. real time clock
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/06—Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
- G06F12/0646—Configuration or reconfiguration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1668—Details of memory controller
- G06F13/1689—Synchronisation and timing concerns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/42—Bus transfer protocol, e.g. handshake; Synchronisation
- G06F13/4204—Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
- G06F13/4234—Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being a memory bus
- G06F13/4243—Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being a memory bus with synchronous protocol
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/065—Replication mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/21—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
- G11C11/34—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
- G11C11/40—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
- G11C11/401—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming cells needing refreshing or charge regeneration, i.e. dynamic cells
- G11C11/4063—Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing
- G11C11/407—Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing for memory cells of the field-effect type
- G11C11/409—Read-write [R-W] circuits
- G11C11/4093—Input/output [I/O] data interface arrangements, e.g. data buffers
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/21—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
- G11C11/34—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
- G11C11/40—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
- G11C11/401—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming cells needing refreshing or charge regeneration, i.e. dynamic cells
- G11C11/4063—Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing
- G11C11/407—Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing for memory cells of the field-effect type
- G11C11/409—Read-write [R-W] circuits
- G11C11/4096—Input/output [I/O] data management or control circuits, e.g. reading or writing circuits, I/O drivers or bit-line switches
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C29/00—Checking stores for correct operation ; Subsequent repair; Testing stores during standby or offline operation
- G11C29/02—Detection or location of defective auxiliary circuits, e.g. defective refresh counters
- G11C29/022—Detection or location of defective auxiliary circuits, e.g. defective refresh counters in I/O circuitry
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C29/00—Checking stores for correct operation ; Subsequent repair; Testing stores during standby or offline operation
- G11C29/02—Detection or location of defective auxiliary circuits, e.g. defective refresh counters
- G11C29/023—Detection or location of defective auxiliary circuits, e.g. defective refresh counters in clock generator or timing circuitry
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C29/00—Checking stores for correct operation ; Subsequent repair; Testing stores during standby or offline operation
- G11C29/02—Detection or location of defective auxiliary circuits, e.g. defective refresh counters
- G11C29/028—Detection or location of defective auxiliary circuits, e.g. defective refresh counters with adaption or trimming of parameters
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/10—Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers
- G11C7/1072—Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers for memories with random access ports synchronised on clock signal pulse trains, e.g. synchronous memories, self timed memories
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/22—Read-write [R-W] timing or clocking circuits; Read-write [R-W] control signal generators or management
- G11C7/222—Clock generating, synchronizing or distributing circuits within memory device
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/21—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
- G11C11/34—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
- G11C11/40—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/04—Arrangements for writing information into, or reading information out from, a digital store with means for avoiding disturbances due to temperature effects
Definitions
- U.S. patent application Ser. No. 14/882,226 also claimed priority as a Continuation-In-Part of U.S. Utility patent application Ser. No. 14/752,903, filed on Jun. 27, 2015, registered as U.S. Pat. No. 9,552,853 on Jan. 24, 2017, and entitled “Methods for Calibrating a Read Data Path for a Memory Interface,” which in turn claims priority as a Continuation of U.S. Utility patent application Ser. No. 14/152,902, filed on Jan. 10, 2014, patented as U.S. Pat. No. 9,081,516 on Jul. 14, 2015 and entitled “Application Memory Preservation for Dynamic Calibration of Memory Interfaces,” which in turn claimed priority as a Continuation of U.S.
- This invention relates to circuits that interface with memories, in particular DDR or “double data rate” dynamic memories.
- Such circuits are found in a wide variety of integrated circuit devices including processors, ASICs, and ASSPs used in a wide variety of applications, as well as devices whose primary purpose is interfacing between memories and other devices.
- Double Data Rate, or "DDR," memories are extremely popular due to their performance and density; however, they present challenges to designers.
- In order to interface with DDR memories, circuits known as DDR memory controllers are used. These controller circuits may reside on processor, ASSP, or ASIC semiconductor devices, or alternatively may reside on semiconductor devices dedicated solely to controlling DDR memories.
- ASSP: Application-Specific Standard Product
- ASIC: Application-Specific Integrated Circuit
- FIG. 1 shows a typical prior-art DDR memory controller where an Asynchronous FIFO 101 is utilized to move data from the clock domain of the Phy 102 to the core clock domain 103.
- Incoming read data dq0 is clocked into input registers 105 and 106, each of these input registers being clocked on the opposite phase of a delayed version of the dqs clock 107, this delay having been performed by delay element 108.
- Asynchronous FIFO 101 typically consists of at least eight stages of flip-flops, requiring at least 16 flip-flops in total per dq data bit. Notice also that an additional circuit 109 for delay and gating of dqs has been added prior to driving the Write Clock input of FIFO 101. This is due to the potential for glitches on dqs. Both data and control signals on a typical DDR memory bus are bidirectional. As such, dqs may float at times during the transition between writes and reads and be susceptible to glitches during those time periods. For this reason, typical prior-art DDR controller designs utilizing asynchronous FIFOs add gating element 109 to reduce the propensity for errors due to glitches on dqs.
- Read data is then transferred to the core domain according to Core_Clk 110.
- Additional circuitry is typically added to FIFO 101 in order to deal with timing issues relative to potential metastable conditions given the unpredictable relationship between Core_Clk and dqs.
- FIG. 2 shows another prior-art circuit for implementing a DDR memory controller, in particular a style utilized by the FPGA manufacturer Altera Corp. Portions of two byte lanes are shown in FIG. 2, the first byte lane represented by data bit dq0 201 and corresponding dqs strobe 202. The second byte lane is represented by dqs strobe 203 and data bit dq0 204.
- The data and strobe signals connecting a DDR memory and a DDR memory controller are organized such that each byte, or eight bits, of data has its own dqs strobe signal. Each of these groupings is referred to as a byte lane.
- The PLL resynchronization clock generator 214 is phase- and frequency-synchronized with dqs. Notice that at this point, data stored in final-stage registers 212 has not yet been captured by the core clock of the memory controller. Also notice that the circuit of FIG. 2 utilizes an individual delay element for each data bit, such as dq0 201 and dq0 204.
- FIG. 4 describes some of the timing relationships that occur for a prior art DDR memory controller which uses delay elements within the Phy for individual read data bits.
- FIG. 4a shows a simplified diagram where a single data bit is programmably delayed by element 401, in addition to the dqs strobe being delayed by element 402.
- Data from input dq is captured on both the rising and falling edges of dqs, as shown in FIGS. 1 and 2; however, for the sake of simplicity, the diagrams of FIGS. 3-12 show only the schematic and timing for the dq bits captured on the rising edge of dqs.
- The output of capture register 403 can be delayed by any amount within the range of the delay elements before it is passed into the core clock domain and clocked into register 404 by the Core_Clk signal 405.
- The dqs_delayed signal 406 is placed near the center of the valid window for dq 407 and, after being captured in register 403, the data then enters the core domain at clock edge 408 as shown.
- Here the latency to move the data into the core domain is relatively low simply because of the natural relationship between the core clock and dqs. This relationship, however, is extremely dependent upon the system topology and delays, and in fact could have almost any phase alignment.
- A different phase relationship is possible, as shown in FIG. 4c.
- Here a first edge 409 of Core_Clk happens to occur just before the leading edge 410 of dqs_delayed.
- The result is that each data bit will not be captured in the core clock domain until leading edge 411 of Core_Clk as shown, and thus will be delayed by the amount of time 412 before being transferred into the core domain.
- While the ability to delay both dq and dqs can accomplish synchronization with the core clock, it may introduce a significant amount of latency in the process.
- A DDR memory controller circuit and method are therefore needed that reliably capture and process memory data during read cycles while requiring a small gate count, resulting in implementations that occupy a small amount of silicon real estate.
- The controller should also offer a high yield for memory controller devices as well as a high yield for memory system implementations using those controller devices. Further, it is desirable to provide a DDR memory controller that is calibrated to compensate for system-level timing irregularities and for chip process parameter variations, with that calibration occurring not only during power-up initialization but also dynamically during system operation to further compensate for power supply voltage variations over time as well as system-level timing variations as the system warms during operation.
- One object of this invention is to provide a DDR memory controller with a more flexible timing calibration capability such that the controller may be calibrated for higher performance operation while at the same time providing more margin for system timing variations.
- Another object of this invention is to provide a DDR memory controller with a more flexible timing calibration capability where this timing calibration is operated during the power-up initialization of the device containing the DDR memory controller, where this timing calibration is performed in conjunction with at least one DDR memory device, both said memory device and controller being installed in a system environment, and where the timing calibration performed by the memory controller takes into account delays in the round-trip path between the DDR memory controller and the DDR memory.
- Another object of this invention is to provide a DDR memory controller that transfers captured data on memory read cycles from the dqs clock domain to the core clock domain at an earlier point in time. This reduces the possibility that a glitch on dqs, which may occur during the time period when dqs is not driven, would inadvertently clock invalid data into the controller during read cycles.
- Another object of this invention is to provide a DDR Memory Controller with a smaller gate count thereby reducing the amount of silicon required to implement the controller and the size and cost of the semiconductor device containing the controller function. Gate count is reduced by eliminating delay elements on the dq data inputs, and by eliminating the use of an asynchronous FIFO for transitioning data from the dqs clock domain to the core clock domain.
- Another object of this invention is to move captured data into the core clock domain as quickly as possible for read cycles to minimize latency.
- Another object of this invention is to provide a DDR memory controller that is calibrated to compensate for system level timing irregularities and for chip process parameter variations where that calibration occurs dynamically during system operation to compensate for power supply voltage variations over time as well as system level timing variations as the system warms during operation.
- Another object of the invention is to provide a memory interface that includes two different windows for gating key timing signals like DQS—a first that is large and allows for performing initial calibration functions when the precise timing is not yet known, and a second for gating key timing signals more precisely as timing relationships become more defined as the calibration process progresses.
- Another object of the invention is to provide a memory interface that operates at substantially half a DQS clock rate, or a reduced clock rate, such that data can be captured accurately and calibration performed accurately even as primary clock rates for memories increase over successive technology generations.
- FIG. 1 shows a prior art DDR memory controller which utilizes an asynchronous FIFO with gated clock, all contained within the Phy portion of the controller circuit.
- FIG. 2 shows a prior art DDR memory controller where delay elements are used on both dq and dqs signals and a form of FIFO is used for data levelization, the FIFO being clocked by a clock that is PLL-synchronized with dqs, the entire circuit contained within the Phy portion of the memory controller.
- FIG. 3 describes the read data path for a prior art DDR memory controller having delay elements on both dq and dqs inputs.
- FIG. 4 shows the data capture and synchronization timing for the read data path of a prior art DDR memory controller having delay elements on both dq and dqs inputs.
- FIG. 5 shows the read data path for a DDR memory controller according to an embodiment of the present invention where delay elements are used on dqs but not on dq inputs, and read data synchronization is performed with the core clock by way of a core clock delay element.
- FIG. 6 shows the data capture and synchronization timing for the read data path of a DDR memory controller according to an embodiment of the present invention where delay elements are used on dqs but not on dq inputs, and read data synchronization is performed with the core clock by way of a core clock delay element.
- FIG. 7 shows the read data path for a DDR memory controller according to one embodiment of the present invention including a CAS latency compensation circuit which is clocked by the core clock.
- FIG. 8 shows the glitch problem which can occur on the bidirectional dqs signal in DDR memory systems.
- FIG. 9 shows a comparison of prior art memory controllers which utilize delay elements on both dq and the dqs inputs when compared with the memory controller of one embodiment of the present invention, with emphasis on the number of total delay elements required for each implementation.
- FIG. 10 shows a diagram for the read data path of a DDR memory controller according to one embodiment of the present invention with emphasis on the inputs and outputs for the Self Configuring Logic function which controls the programmable delay elements.
- FIG. 11 describes the timing relationships involved in choosing the larger passing window when the delay element producing Capture_Clk is to be programmed according to one embodiment of the present invention.
- FIG. 12 shows a timing diagram for the data eye indicating the common window for valid data across a group of data bits such as a byte lane, given the skew that exists between all the data bits.
- FIG. 13 shows a flow chart for the power-on initialization test and calibration operation according to one embodiment of the present invention, the results of this operation including choosing programmable delay values.
- FIG. 14 shows the functionality of FIG. 10 with circuitry added to implement a dynamically calibrated DDR controller function according to one embodiment of the invention, in particular to determine an optimum Capture_Clk delay.
- FIG. 15 shows a timing diagram where Core_Clk and ip_dqs are delayed and sampled as part of implementing a dynamically calibrated DDR controller function according to one embodiment of the invention.
- FIG. 16 shows a flowchart describing the process of delaying and sampling both ip_dqs and Core_Clk, and for computing an optimum Capture_Clk delay.
- FIG. 17 shows circuitry added for dynamic calibration, in particular for a second phase according to the process of FIG. 18.
- FIG. 18 shows a flowchart describing the process of iteratively capturing read data from the DDR memory while sweeping different CAS latency compensation values to determine the settings for the DDR memory controller that provide the optimum CAS latency compensation.
- FIGS. 19-22 show circuit details and timing relationships for providing a memory interface that includes two different windows for gating key timing signals like DQS—a first that is large and allows for performing initial calibration functions when the precise timing is not yet known, and a second for gating key timing signals more precisely as timing relationships become more defined as the calibration process progresses.
- Also shown in FIGS. 19-22 are circuit details and timing relationships for a memory interface that operates at substantially half a DQS clock rate, or a reduced clock rate, such that data can be captured accurately and calibration performed accurately even as primary clock rates for memories increase over successive technology generations.
- FIGS. 23-26 depict additional details of the half frequency operation, pursuant to one embodiment of the invention.
- The DDR memory controller of one embodiment of the present invention focuses on utilizing core domain clocking mechanisms, at times combined with circuitry in the Phy, to implement an improved solution for a timing-adaptive DDR memory controller.
- FIG. 5 shows a simplified version of a DDR controller circuit according to an embodiment of the present invention.
- The data inputs for a byte lane 501 are shown being captured in dq read data registers 502 without any additional delay elements added, these registers being clocked by a delayed version of dqs.
- The dqs clock signal 503 has dqs delay element 504 added, typically delaying dqs by approximately 90 degrees relative to the dqs signal driven by the DDR memory.
- The outputs of registers 502 enter the core domain and are captured in first core domain registers 505.
- Registers 505 are clocked by a delayed version of Core_Clk called Capture_Clk 506.
- Capture_Clk is essentially the output of core clock delay element 507, which produces a programmably delayed version of Core_Clk 508.
- The outputs of first core domain registers 505 feed second core domain registers 509, which are clocked by Core_Clk.
- The amount of delay assigned to programmable delay element 507 is controlled by a self-configuring logic circuit (SCL) contained within the memory controller, this self-configuring logic circuit determining the appropriate delay for element 507 during a power-on initialization test and calibration operation.
- SCL: self-configuring logic circuit
- FIG. 6 shows how the timing for the read data path can occur for the DDR memory controller circuit of one embodiment of the present invention.
- A simplified version of the read data path is shown in FIG. 6a, where dqs is delayed by dqs delay element 601, which clocks dq into Phy data capture register 602.
- The output of data capture register 602 then feeds the first core domain register 603, which is clocked by Capture_Clk, the output of core clock delay element 604.
- The timing scenario shown in FIG. 6 occurs when the active edge of Core_Clk 605 (depicted in FIG. 6(b)) occurs just after dq data 606 has been clocked into Phy data capture register 602 by dqs_delayed 607.
- Here data can be immediately clocked into first core domain register 603, and thus delay element 604 may be programmably set to a delay of essentially zero, making the timing for Capture_Clk essentially the same as Core_Clk.
- FIG. 6(c) shows another timing scenario where the active edge of Core_Clk 608 occurs just prior to dq data 609 being clocked into Phy data capture register 602 by dqs_delayed 610.
- Here core clock delay element 604 will be programmed with delay 611 such that first core domain register 603 is clocked on the active edge of Capture_Clk 612.
- Capture_Clk will be positioned such that data will move from the Phy domain to the core domain in a predictable manner with minimal added latency due to random clock alignment.
- FIG. 7 shows an embodiment of the present invention including a circuit that compensates for CAS latency.
- CAS latency is the time (in number of clock cycles) that elapses between the memory controller telling the memory module to access a particular column in the current row, and the data from that column being read from the module's output pins.
- Data is stored in individual memory cells, each uniquely identified by a memory bank, row, and column.
- To access DRAM, controllers first select a memory bank, then a row (using the row address strobe, RAS), then a column (using the CAS), and finally request to read the data from the physical location of the memory cell.
- The CAS latency is the number of clock cycles that elapse from the time the request for data is sent to the actual memory location until the data is transmitted from the module.
- The amount of this timing unpredictability can be determined during the power-on initialization test and calibration operation, and then compensated for by the circuit shown in FIG. 7, where the output of second core domain register 701 feeds a partially populated array of registers 702, 703, and 704, which along with direct connection path 705 feed multiplexer 706.
- These registers are all clocked by Core_Clk and thus create different numbers of clock cycles of CAS latency compensation depending upon which input is selected for multiplexer 706.
- Different inputs for multiplexer 706 will be selected at different times during the test in order to determine which of the paths leading to multiplexer 706 is appropriate to properly compensate for the CAS delay in a particular system installation.
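- To make the selection mechanism of FIG. 7 concrete, the following C sketch models the register chain 702-704, the direct path 705, and multiplexer 706 as a small cycle-stepped software simulation. The chain depth follows the figure, but the type and function names are illustrative assumptions rather than circuitry or code from the patent.

```c
#include <stdint.h>
#include <stdio.h>

#define CHAIN_DEPTH 3   /* registers 702, 703, 704 in FIG. 7 */

/* State of the CAS-latency compensation path: the direct connection (705)
 * plus a chain of registers clocked by Core_Clk. */
typedef struct {
    uint8_t chain[CHAIN_DEPTH];
} cas_comp_t;

/* One Core_Clk cycle: shift new data from register 701 into the chain. */
static void cas_comp_clock(cas_comp_t *c, uint8_t data_from_701)
{
    for (int i = CHAIN_DEPTH - 1; i > 0; i--)
        c->chain[i] = c->chain[i - 1];
    c->chain[0] = data_from_701;
}

/* Multiplexer 706: select 0 extra cycles (direct path 705) or 1..3 extra
 * Core_Clk cycles of CAS latency compensation. */
static uint8_t cas_comp_select(const cas_comp_t *c, uint8_t data_from_701,
                               unsigned mux_sel)
{
    return (mux_sel == 0) ? data_from_701 : c->chain[mux_sel - 1];
}

int main(void)
{
    cas_comp_t c = { {0, 0, 0} };
    uint8_t stream[] = { 0xA5, 0x5A, 0xFF, 0x00, 0x3C };

    /* With mux_sel = 2 the output lags the input by two Core_Clk cycles. */
    for (unsigned t = 0; t < sizeof stream; t++) {
        uint8_t out = cas_comp_select(&c, stream[t], 2);
        printf("cycle %u: in=0x%02X out=0x%02X\n", t, stream[t], out);
        cas_comp_clock(&c, stream[t]);
    }
    return 0;
}
```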
- The dqs strobe is first driven by the memory controller during a write cycle and then, during a read cycle, it is driven by the DDR memory. In between, there is a transitional time period 801 where the dqs connection may float, that is, not be driven by either the memory or the controller.
- Glitches 802 are induced on dqs from a variety of sources, including cross-coupling from edges on other signals on boards or in the IC packages for the memory and/or the controller.
- The embodiment of the present invention as shown in FIGS. 5 through 7 allows capture clock 803 to be optimally positioned relative to dqs_delayed 804 such that read data is always moved into the core clock domain as early as possible.
- FIG. 9 shows a comparison between an embodiment of the present invention and prior-art memory controllers according to FIGS. 2 through 4, with emphasis on the amount of silicon real estate required based on the numbers of delay elements introduced for an example implementation containing a total of 256 data bits.
- FIG. 9a shows that prior-art memory controllers that include delay elements on all dq data bits 901 would require 256 delay elements 902 for dq inputs in addition to 16 delay elements 903 for dqs inputs.
- FIG. 9b shows an implementation according to one embodiment of the present invention where only dqs input delay elements 904 are required, and therefore the total number of delay elements in the Phy for an embodiment of the present invention is 16, versus 272 for the prior-art implementation of FIG. 9a.
- FIG. 10 shows a diagram of how the Self Configuring Logic (SCL) function 1001 interfaces with other elements of the DDR memory controller according to an embodiment of the present invention.
- The SCL 1001 receives the output 1002 of the first core domain register (clocked by Capture_Clk) as well as the output 1003 of the second core domain register (clocked by Core_Clk).
- The SCL provides output 1004, which controls the delay of the delay element 1005 that creates Capture_Clk.
- The SCL also drives multiplexer 1006, which selects among the different paths that implement the CAS latency compensation circuit as previously described for FIG. 7, where multiplexer 706 performs this selection function.
- SCL 1001 also receives data 1007 from input data register 1008 , and in turn also controls 1009 dqs delay element 1010 , thereby enabling a much finer degree of control for the dqs delay function than is normally utilized in most memory controller designs, as well as allowing the dqs delay to be initialized as part of the power on initialization test and calibration operation.
- FIG. 11 describes the concept behind the process for choosing the larger passing window when positioning Capture_Clk.
- The core clock signal is delayed in element 1101, as shown in FIG. 11a, to produce Capture_Clk.
- FIG. 11b shows a timing diagram where the RD_Data signal 1102 is to be captured in first core domain register 1103.
- The position of core clock 1104 rarely falls in the center of the time that RD_Data 1102 is valid, in this instance being positioned towards the beginning of the valid time period 1105 for RD_Data.
- Thus two passing windows 1106 and 1107 have been created, with 1106 being the smaller passing window and 1107 being the larger passing window.
- FIG. 12 shows a timing diagram for a group of data bits in a byte lane, such as Rd_Data 1201, where the timing skew 1202 across the group of bits is indicated.
- The common time across all data bits in the group where data is simultaneously valid is called the data eye 1203.
- Delay line increments 1207 represent the possible timing positions that may be chosen for a programmable delay line to implement core clock delay element 604, which produces Capture_Clk.
- There is a minimum number of delay line increments 1207 for which the power-on initialization test must determine that data is captured successfully; achieving that minimum number is necessary for the manufacturer of the system to feel confident that the timing margin is robust enough for a production unit to be declared good.
- This number of delay line increments that is seen as the minimum requirement for a successful test is specified and stored in the system containing the memory controller, and is utilized in determining whether the power-on initialization and calibration test is successful.
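- As an illustration of the window analysis of FIGS. 11 and 12, the C sketch below scans a pass/fail result per delay-line increment, finds the largest contiguous passing window, checks it against the stored minimum-width requirement, and returns the center tap for Capture_Clk. The array representation and the function name are assumptions made for this example.

```c
#include <stdbool.h>
#include <stddef.h>

/* results[i] is true if read data was captured correctly with Capture_Clk
 * delayed by tap i.  Returns the center tap of the largest contiguous
 * passing window, or -1 if no window meets the stored minimum-width
 * requirement. */
int choose_capture_clk_tap(const bool *results, size_t num_taps,
                           size_t min_passing_window)
{
    size_t best_start = 0, best_len = 0;
    size_t run_start = 0, run_len = 0;

    for (size_t i = 0; i < num_taps; i++) {
        if (results[i]) {
            if (run_len == 0)
                run_start = i;
            run_len++;
            if (run_len > best_len) {
                best_len = run_len;
                best_start = run_start;
            }
        } else {
            run_len = 0;
        }
    }

    if (best_len < min_passing_window)
        return -1;                       /* window too narrow: calibration fails */

    /* Place Capture_Clk in the center of the largest passing window. */
    return (int)(best_start + best_len / 2);
}
```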
- FIG. 13 shows a flow chart for the process implemented according to one embodiment of the present invention for a power-on initialization test and calibration operation.
- Software or firmware controls this operation and typically runs on a processor located in the system containing the DDR memory and the controller functionality described herein.
- This processor may be located on the IC containing the memory controller functionality, or may be located elsewhere within the system.
- First, a minimum passing window requirement is specified in terms of a minimum number of delay increments for which data is successfully captured, as described in the diagram of FIG. 12.
- The minimum passing window requirement will be used to determine a pass or fail condition during the test, and also may be used to determine the number of delay increments that must be tested and how many iterations of the test loops (steps 1302 through 1307) must be performed.
- Steps 1302, 1303, 1304, 1305, and 1306 together implement what is generally known as nested "for" loops.
- Each byte lane will be tested according to step 1303.
- Each delay tap value within a chosen range of delay tap values will be tested according to step 1304.
- The BIST (Built-In Self-Test) for the read data will be run according to step 1305, and a pass or fail result will be recorded according to step 1306.
- The processor controlling the power-on initialization and calibration test will then check (step 1308) to see if the minimum passing window requirement has been met as specified in step 1301. If the minimum has not been met, then the system will indicate a failure 1311.
- In step 1309, for each byte lane the processor will choose the latency value that offers the largest passing window, and then choose the delay tap value that places the capture clock in the center of that window. Finally, values will be programmed into control registers according to step 1310 such that all delays within the controller system according to this invention are programmed with optimum settings.
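- The flow of FIG. 13 can be summarized as the nested-loop procedure sketched below in C. The helper functions stand in for hardware-specific register accesses and BIST control that the patent does not spell out; only the loop structure mirrors steps 1302 through 1310.

```c
#include <stdbool.h>

/* Placeholder hooks: register accesses and BIST control are implementation specific. */
extern void set_cas_latency_comp(int lane, int latency_setting); /* multiplexer 706 select */
extern void set_capture_clk_tap(int lane, int tap);              /* delay element 507/604  */
extern bool run_read_bist(int lane);                             /* step 1305              */
extern void commit_settings(int lane, int latency_setting, int tap); /* step 1310          */

#define NUM_LANES      4   /* byte lanes, step 1303               */
#define NUM_LATENCIES  4   /* CAS latency compensation settings   */
#define NUM_TAPS      64   /* Capture_Clk delay taps, step 1304   */

bool power_on_calibration(int min_passing_window)   /* requirement from step 1301 */
{
    static bool pass[NUM_LANES][NUM_LATENCIES][NUM_TAPS];

    /* Steps 1302-1307: nested loops; run the read BIST at every combination
     * and record a pass/fail result (steps 1305 and 1306). */
    for (int lat = 0; lat < NUM_LATENCIES; lat++)
        for (int lane = 0; lane < NUM_LANES; lane++)
            for (int tap = 0; tap < NUM_TAPS; tap++) {
                set_cas_latency_comp(lane, lat);
                set_capture_clk_tap(lane, tap);
                pass[lane][lat][tap] = run_read_bist(lane);
            }

    /* Steps 1308-1310: per byte lane, pick the latency setting with the widest
     * passing window and center Capture_Clk within that window. */
    for (int lane = 0; lane < NUM_LANES; lane++) {
        int best_lat = -1, best_tap = -1, best_len = 0;
        for (int lat = 0; lat < NUM_LATENCIES; lat++) {
            int run = 0;
            for (int tap = 0; tap < NUM_TAPS; tap++) {
                run = pass[lane][lat][tap] ? run + 1 : 0;
                if (run > best_len) {
                    best_len = run;
                    best_lat = lat;
                    best_tap = tap - run / 2;   /* approximate window center */
                }
            }
        }
        if (best_len < min_passing_window)
            return false;                        /* failure indication, step 1311 */
        commit_settings(lane, best_lat, best_tap);
    }
    return true;
}
```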
- DSCL, a dynamic version of the SCL or Self Configuring Logic functionality described herein, addresses the problem of VT (voltage and temperature) variations during normal operation of a chip that utilizes a DDR memory controller as described herein to access a DRAM.
- Regular SCL as described earlier is typically run only at system power-on. It can calibrate for the system-level timing at the time it is run and can compensate for PVT (process, in addition to voltage and temperature) variations that occur from chip to chip, and do so in the context of the system operation.
- Computer memory is vulnerable to temperature changes both in the controller and the corresponding memory modules.
- As any DDR memory chip or the chip containing the DDR memory controller heats up, and as supply voltage variations occur due to other external factors such as loading experienced by the power supply source, VT variations can cause system-level timing to change. These changes can affect the optimal programming settings as compared with those that were produced by operation of the SCL function when calibration was run at power-on.
- DSCL functionality helps the chip to continuously compensate for VT variations, providing the best DRAM timing margin even as system timing changes significantly over time. By performing the necessary calibration in the shortest period of time, DSCL also ensures that the impact on system performance is minimal.
- DSCL divides the problem of calculating the Capture_Clk delay and the problem of CAS latency compensation into separate problems per FIGS. 16 and 18, and solves each of these problems independently. It also runs independently and in parallel in each byte lane. Thus the whole calibration process is greatly sped up. Specifically, in one embodiment, if the user has an on-board CPU, the non-dynamic SCL could be run within about 2 milliseconds assuming 4 byte lanes, and 4 milliseconds for 8 byte lanes. In one embodiment of the dynamic SCL, regardless of 4 or 8 byte lanes, SCL would run within 1 microsecond.
- The operation of the DSCL functionality described herein utilizes portions of the existing SCL circuitry previously described, during both the calibration phase and the operational phase; however, new circuitry is added for DSCL, and the calibration phase is broken into two sub-phases.
- One of these sub-phases corresponds to the process described in FIG. 16
- The other sub-phase corresponds to the process described in FIG. 18.
- FIG. 14, when compared with FIG. 10, shows the circuit component additions which may be present in order to support the dynamically calibrated version of the DDR memory controller as described herein.
- The purpose of the additions to FIG. 10 as shown in FIG. 14 is to support the first phase of the SCL calibration, whereby an optimum Capture_Clk delay is determined according to the process of FIG. 16.
- The optimum Capture_Clk value is determined by the Self-Configuring Logic 1001 and applied via output 1004 to the delay element 1005.
- The delayed version of the dqs input signal produced by delay element 1010, herein called ip_dqs, is sampled in flip-flop 1413.
- Flip-flop 1413 is clocked by the output of delay element 1411, which delays Core_Clk.
- The output of flip-flop 1413 is connected 1414 to the self-configuring logic function 1001.
- Core_Clk is also delayed in delay element 1415, which in turn clocks flip-flop 1417 to sample Core_Clk.
- The output of flip-flop 1417 is connected 1418 to the self-configuring logic function 1001.
- Delay elements 1411 and 1415 are controlled respectively by signals 1412 and 1416 from self configuring logic function 1001 .
- An output 1419 of SCL logic function 1001 controls the select lines of multiplexer 1006 which is the same multiplexer as shown earlier as multiplexer 706 in FIG. 7 and is used to select captured read data which is delayed by different increments according to which flip-flop delay chain path is most appropriate.
- FIG. 15 graphically shows some of the timing delays that are manipulated as part of the dynamic calibration sequence of the DDR memory controller per one embodiment of the present invention and as described in FIG. 16 .
- Core_Clk 1501 is delayed by different values, here marked as value "A" 1503 in FIG. 15.
- The ip_dqs signal 1502 is also delayed by different values, here marked as value "B" 1504.
- FIG. 16 shows a flowchart for the dynamic calibration procedure in order to determine an optimum delay for Core_Clk delay element 1005 in order to produce an optimum timing for the Capture_Clk signal.
- In step 1601, a sequence of read commands is issued so that the ip_dqs signal toggles continuously.
- In step 1602, the Core_Clk signal is delayed and used to sample ip_dqs at different delay increments until a 1-to-0 transition is detected on ip_dqs, whereby this value for the Core_Clk delay is recorded as value "A".
- In step 1603, the Core_Clk signal is delayed and used to sample Core_Clk at different delay increments until a 0-to-1 transition is detected on Core_Clk, whereby this value for the Core_Clk delay is recorded as value "B".
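- A firmware-level sketch of the FIG. 16 procedure is given below. The sampling hooks are hypothetical register accesses onto the circuitry of FIG. 14, and the final combination of the measured values "A" and "B" into a Capture_Clk delay is shown only schematically (here as A plus half of B), since the exact expression is determined by the SCL implementation rather than stated in this passage.

```c
#include <stdbool.h>

/* Hypothetical hardware hooks: program delay elements 1411/1415 and read the
 * sampled values from flip-flops 1413/1417 (see FIG. 14). */
extern void set_dqs_sample_delay(int taps);      /* delay element 1411        */
extern void set_core_sample_delay(int taps);     /* delay element 1415        */
extern bool sample_ip_dqs(void);                 /* flip-flop 1413, output 1414 */
extern bool sample_core_clk(void);               /* flip-flop 1417, output 1418 */
extern void issue_back_to_back_reads(void);      /* step 1601                 */
extern void set_capture_clk_delay(int taps);     /* delay element 1005        */

#define MAX_TAPS 128

void dynamic_capture_clk_calibration(void)
{
    int a = -1, b = -1;

    /* Step 1601: keep ip_dqs toggling while the measurements are made. */
    issue_back_to_back_reads();

    /* Step 1602: delay Core_Clk and sample ip_dqs until a 1-to-0 transition
     * is seen across successive delay settings; record that delay as "A". */
    set_dqs_sample_delay(0);
    bool prev = sample_ip_dqs();
    for (int t = 1; t < MAX_TAPS && a < 0; t++) {
        set_dqs_sample_delay(t);
        bool cur = sample_ip_dqs();
        if (prev && !cur)
            a = t;
        prev = cur;
    }

    /* Step 1603: delay Core_Clk and sample Core_Clk itself until a 0-to-1
     * transition is seen; record that delay as "B". */
    set_core_sample_delay(0);
    prev = sample_core_clk();
    for (int t = 1; t < MAX_TAPS && b < 0; t++) {
        set_core_sample_delay(t);
        bool cur = sample_core_clk();
        if (!prev && cur)
            b = t;
        prev = cur;
    }

    /* Combine A and B into a Capture_Clk delay.  This expression is an
     * assumption for the sketch; the SCL derives the optimum value from the
     * two measurements. */
    if (a >= 0 && b >= 0)
        set_capture_clk_delay(a + b / 2);
}
```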
- FIG. 17 shows the circuitry within the DSCL functionality that is utilized during the portion of the calibration sequence described in the process of FIG. 18 .
- Read data has been captured in flip-flop 1103 by Capture_Clk to produce Rd_Data_Cap 1110.
- Rd_Data_Cap 1110 is then captured in each of flip-flops 1701 on an edge of Core_Clk; these flip-flops are enabled to register Rd_Data_Cap by one of counters 1702, which themselves are also clocked by Core_Clk.
- Counters 1702 are enabled to start counting by a Read Command 1703 issued by the DSCL functionality.
- The outputs of flip-flops 1701 each go to a data comparator 1704, where they are compared with a predefined data value 1705 which is stored in the DDR memory controller in location 1706 and has also been previously placed in the DDR memory itself, as described in the process of FIG. 18.
- The outputs of the data comparators enter encoder 1707, whose output 1419 controls multiplexer 1006, which chooses a flip-flop chain delay path from those previously described in FIG. 7.
- FIG. 18 shows a procedure for operating the DDR memory controller in order to calibrate the controller during dynamic operation, and in particular to determine the optimum overall CAS latency compensation.
- First, the Capture_Clk delay is set to the previously determined optimum value according to the procedure described in the flowchart of FIG. 16.
- Next, a known data pattern is read from a DDR memory connected to the DDR memory controller. This known data pattern originates in a stored location 1706 in the DDR controller device and would typically have been previously saved or located in the DDR memory. If such a pattern is not available in the DDR memory, an appropriate pattern would be written to the DDR memory before this step and subsequent steps are executed.
- In step 1803, read data is captured from the DDR memory in an iterative manner while sweeping possible predetermined CAS latency compensation values from a minimum to a maximum value, utilizing the different delay paths that can be chosen with the circuitry shown in FIG. 17.
- In step 1804, when the read data matches at a particular CAS latency compensation, the parameters and settings that produced that optimum value of CAS latency compensation, i.e. the chosen delay path through the flip-flop chains feeding multiplexer 706 in combination with the previously determined optimum Capture_Clk delay, are recorded as the optimum parameters for the CAS latency compensation value and used thereafter during normal operation until another dynamic calibration sequence is performed.
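- The loop below sketches the FIG. 18 procedure in C. The read and compare hooks, the pattern length, and the range of multiplexer settings are placeholders standing in for the hardware of FIG. 17, not definitions taken from the patent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical hooks onto the hardware of FIG. 17. */
extern void set_capture_clk_delay(int taps);            /* result of the FIG. 16 procedure */
extern void select_latency_path(int mux_sel);           /* multiplexer 1006 / 706          */
extern void read_burst_from_ddr(uint8_t *buf, int len); /* issue a read of the test pattern */

#define PATTERN_LEN 8
static const uint8_t known_pattern[PATTERN_LEN] =       /* stored value 1705 in location 1706 */
    { 0xAA, 0x55, 0xAA, 0x55, 0xFF, 0x00, 0xFF, 0x00 };

/* Returns the mux setting whose read data matches, or -1 on failure. */
int calibrate_cas_latency_comp(int optimum_capture_clk_taps,
                               int min_sel, int max_sel)
{
    uint8_t buf[PATTERN_LEN];

    /* Use the Capture_Clk delay found by the FIG. 16 procedure. */
    set_capture_clk_delay(optimum_capture_clk_taps);

    /* Step 1803: read the known pattern back while sweeping the CAS latency
     * compensation setting from minimum to maximum. */
    for (int sel = min_sel; sel <= max_sel; sel++) {
        select_latency_path(sel);
        read_burst_from_ddr(buf, PATTERN_LEN);

        /* Step 1804: the first setting whose data matches is recorded and
         * used until the next dynamic calibration sequence. */
        if (memcmp(buf, known_pattern, PATTERN_LEN) == 0)
            return sel;
    }
    return -1;
}
```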
- Circuits and methods are described for a DDR memory controller where two different DQS gating modes are utilized. These gating modes together ensure that the DQS signal, driven by a DDR memory to the memory controller, is only available when read data is valid, thus eliminating capture of undesirable data into the memory controller caused by glitches when DQS is floating.
- Two types of gating logic are used: Initial DQS gating logic, and Functional DQS gating logic.
- The Initial gating logic has additional margin to allow for the unknown round-trip timing during initial bit-levelling calibration. Eventually the memory controller will establish precise timing in view of the actual round-trip delay.
- Round trip delay is the difference between the instant when a read command is issued by the memory controller and the instant when the corresponding data from a DDR memory is received at the memory controller, excluding the known and fixed number of clock cycle delays involved in fetching data in the DDR protocol. Even though this round trip delay has not been characterized when initial bit-levelling calibration is performed, it is useful to perform bit-levelling early in the overall calibration process, as this makes subsequent phase and latency calibration for data capture more precise and consistent across all data bits. During bit-levelling calibration an alternating pattern of 1s and 0s is read from the memory, and the memory controller is able to perform bit-levelling regardless of the round-trip delay due to the predictable nature of the pattern and the manner in which bit-levelling calibration operates.
- DQS functional gating is optimized to gate DQS precisely as Capture_Clk delay and CAS latency compensation calibration is performed. This gating functionality is especially useful when data capture into a core clock domain is performed at half the DQS frequency in view of rising clock rates for DDR memories.
- This capability to use two gating modes of operation is also useful for an implementation even where the clocks are operated at full frequency, in view of the smaller available timing margins as memory access clock speeds continue to rise from year to year.
- The waveform of FIG. 19 shows a hypothetical example of the goal of DQS gating: only the DQS pulses that correspond to the issued read command are allowed to be operated on by the memory controller.
- As shown in FIG. 20, there are two types of gating logic, the Initial gating logic 2002 and the Functional gating logic 2003. The difference between the two is how precisely they work.
- The Initial gating logic 2002 has additional margin to allow for the unknown input DQS round-trip timing during initial bit-levelling calibration.
- The Functional gating logic 2003 gates DQS precisely based on the round-trip timing information discovered and refined during SCL calibration.
- The gated strobe is shown as ip_dqs post gate 2005.
- There is also a disable control 2004 that can be used to forgo gating, but it is not advised to enable it in half-frequency mode since glitches can invert the phase of the divided DQS.
- FIG. 20 shows a high-level block diagram representation for the logic used for both Initial DQS gating 2002 and for Functional DQS gating 2003 .
- The Initial gating mode is only used for the first time that bit-levelling calibration is run. At this initial point in the calibration process, SCL calibration has not yet been run; therefore the Functional gate timing would be imprecise if used at this stage of the calibration process.
- Functional gating mode is used during SCL calibration and for functional operation after determination of precise timing values for Capture_clk 2105 and CAS latency calibration. Thereafter, whenever bit-levelling or dynamic SCL calibration is run from time to time during functional system operation, the Functional gating timing is used.
- Capture_clk 2105 is the variable-delay clock which SCL will tune so that there are optimal setup and hold margins for clocking data from the input DDR3/DDR4 strobe domain to the memory controller's core clock domain, where it is captured by core_clk 2104.
- The memory controller will continuously look for the location of the second falling edge of ip_dqs 2102. This is the edge on which valid data on ip_dq 2101 will be available. The data will cross clock domains from this edge to the falling edge of d1_half_rate_dqs 2103, which happens on the same edge of ip_dqs that triggered d1_half_rate_dqs to go low. This is done to reduce latency on the read path, but it must be noted that to check timing based on this, a multi-cycle path of zero is used to time the path during Static Timing Analysis.
- SCL will find the center between the rising edge of core_clk and the falling edge of the next d1_half_rate_dqs strobe, shown by points A 2201 and B 2202 in FIG. 22. Whichever point gives the largest setup and hold margins (point B in the example shown) will be set as the active edge location for capture_clk.
- In the Initial gating mode, the gate is extended 8 full-rate cycles beyond the falling edge of rd_data_en_scl 2001 to ensure that the maximum round-trip delay in receiving valid DQS pulses is accounted for. This is exemplary, and extension by other numbers of full-rate cycles is possible.
- FIG. 23 shows an example timing diagram of the fundamental signals used in the initial gating routine to create the final gating signal.
- The signals shown in FIG. 23 are defined as follows:
- Full Rate Clock 2301: One of two clock domains in the memory controller; it has the same frequency as ip_dqs and is used sparingly, as some portions of the memory controller must be in the full-rate domain.
- Read Data Enable SCL 2001: Read enable signal from the memory controller which is used for calibration purposes and to control the DQS gate signal.
- Read Data Enable SCL Delayed 2303: The read data enable SCL signal delayed by two full-rate cycles.
- Read Data Enable Count 2304: A counter which is used to extend the final DQS gate signal by eight full-rate cycles.
- Read Data Enable SCL Extended 2305: A one-bit signal derived from the read data enable count to extend the final DQS gate by eight cycles.
- DQS Gate Final 2306: This signal will gate DQS, but it has no concept of round-trip time and therefore opens earlier and closes later, giving more margin. (NOTE: this signal is the same one used for functional gating, but the logic to open/close the gate is different since the round-trip time is known.)
- DQS 2307: The incoming DQS from the memory.
- Round trip delay is the time it takes for the read data and strobe to be received at the memory controller after the memory has received the read address and command issued by the memory controller.
- The read data enable SCL delayed signal will open the gate before the DQS strobe is received by the memory controller, as it is much more lenient.
- That is, the delayed version of the read data enable signal will open the gate, albeit a bit earlier than the time when the DQS from the memory reaches the memory controller.
- The memory controller will then extend the gating signal by 8 full-rate cycles and then close it. The position at which it closes will be after the DQS has arrived at the memory controller from the memory.
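- A cycle-level C model of the Initial gating behavior just described is sketched below: the gate opens when a two-cycle-delayed copy of rd_data_en_scl asserts and is held open for eight additional full-rate cycles after it deasserts. The struct and its field names are illustrative, not the patent's logic.

```c
#include <stdbool.h>

#define RD_EN_DELAY_CYCLES 2   /* rd_data_en_scl delayed by two full-rate cycles */
#define GATE_EXTEND_CYCLES 8   /* extension beyond the falling edge of the enable */

typedef struct {
    bool delay_line[RD_EN_DELAY_CYCLES]; /* models "Read Data Enable SCL Delayed" */
    int  extend_count;                   /* models "Read Data Enable Count"       */
} initial_gate_t;

/* Advance the model by one full-rate clock cycle.  Returns true while the
 * DQS gate is open (DQS pulses are allowed through to the controller). */
static bool initial_gate_step(initial_gate_t *g, bool rd_data_en_scl)
{
    /* Shift the read enable through the two-cycle delay line. */
    bool rd_en_delayed = g->delay_line[RD_EN_DELAY_CYCLES - 1];
    for (int i = RD_EN_DELAY_CYCLES - 1; i > 0; i--)
        g->delay_line[i] = g->delay_line[i - 1];
    g->delay_line[0] = rd_data_en_scl;

    /* While the delayed enable is high the extension counter is reloaded;
     * after it falls, the gate stays open for GATE_EXTEND_CYCLES more cycles
     * so that the slowest possible round-trip DQS is still let through. */
    if (rd_en_delayed)
        g->extend_count = GATE_EXTEND_CYCLES;
    else if (g->extend_count > 0)
        g->extend_count--;

    return rd_en_delayed || g->extend_count > 0;
}
```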
- The logic for generating the Functional gating signal is more intricate. It is necessary to begin gating shortly before the rising edge of the first DQS pulse during the preamble and to stop gating shortly after the last falling edge during the postamble, as shown in FIG. 25.
- d1_half_rate_dqs 2103 is a divided version of ip_dqs (post DLL) 2401 which toggles on every falling edge of ip_dqs (post DLL).
- When SCL calibration runs, it determines the phase difference between the rising edge of core_clk 2104 and the falling edge of d1_half_rate_dqs 2103 which corresponds to the second falling edge of ip_dqs (post DLL) 2401, and stores this value as a variable called cycle_cnt (this is the same as the SCL measurement point A mentioned previously with respect to FIG. 22). Therefore the invention uses cycle_cnt as a reference to determine when ip_dqs will pulse with respect to core_clk so that gating can begin beforehand.
- First, cycle_cnt_clk 2402 is created by delaying core_clk by the value cycle_cnt. This new clock (cycle_cnt_clk) has each positive edge aligned to each second falling edge of ip_dqs (post DLL). Another clock, cycle_cnt_modified_clk 2403, is generated 1/4 full-rate clock cycle sooner or one and 3/4 full-rate clock cycles later than cycle_cnt_clk (depending on whether cycle_cnt is greater than 1/4 full-rate clock cycle or less than 1/4 cycle, respectively).
- Thus each positive edge of cycle_cnt_modified_clk 2403 is aligned to each second falling edge of ip_dqs (pre DLL) 2102 and is therefore centered in the middle of the ip_dqs preamble time, as shown by the dotted line 2501 in FIG. 25.
- the read enable signal from the controller is registered into this new cycle_cnt_modified_clk domain using capture_clk and cycle_cnt_clk as staging clocks.
- Capture_Clk is guaranteed by SCL calibration to be positioned so that maximum setup and hold margins are obtained when transitioning between the core_clk and cycle_cnt_clk domains. Timing from cycle_cnt_clk to cycle_cnt_modified_clk is met by design.
- This read enable signal once latched in the cycle_cnt_modified_clk domain, is used to signal the start of DQS gating.
- the clock cycle latency of the read enable signal is also adjusted based on SCL calculated CAS latency as described previously. Also the enable signal is shortened by 1 clock cycle compared to the length of the read burst so that it does not affect the gate closing timing.
- the DQS gate is closed directly by the last falling edge of the final DQS pulse. This is done by latching the third staged read data enable signal (in cycle_cnt_clk domain) into the d 1 _half_rate_dqs domain.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Hardware Design (AREA)
- Microelectronics & Electronic Packaging (AREA)
- Computer Security & Cryptography (AREA)
- Databases & Information Systems (AREA)
- Dram (AREA)
- Memory System (AREA)
- Logic Circuits (AREA)
Abstract
Description
- This application claims priority as a Continuation of U.S. patent application Ser. No. 15/722,209 filed on Oct. 2, 2017, currently pending, the contents of which are incorporated by reference.
- U.S. patent application Ser. No. 15/722,209 claimed priority as a Continuation of U.S. patent application Ser. No. 15/249,188, filed on Aug. 26, 2016, registered as U.S. Pat. No. 9,805,784 on Oct. 31, 2017, the contents of which all are incorporated by reference.
- U.S. patent application Ser. No. 15/249,188 claimed priority as a Continuation of U.S. patent application Ser. No. 14/882,226, filed on Oct. 13, 2015, registered as U.S. Pat. No. 9,431,091 on Aug. 30, 2016, the contents of which all are incorporated by reference.
- U.S. patent application Ser. No. 14/882,226, in turn claimed priority as a Nonprovisional patent application of U.S. Provisional Patent Application Ser. No. 62/063,136, filed on Oct. 13, 2014, currently expired and entitled “Half-Frequency Dynamic Calibration for DDR Memory Controllers,” commonly assigned with the present application and incorporated herein by reference.
- U.S. patent application Ser. No. 14/882,226 also claimed priority as a Continuation-In-Part of U.S. Utility patent application Ser. No. 14/752,903, filed on Jun. 27, 2015, registered as U.S. Pat. No. 9,552,853 on Jan. 24, 2017, and entitled “Methods for Calibrating a Read Data Path for a Memory Interface,” which in turn claims priority as a Continuation of U.S. Utility patent application Ser. No. 14/152,902, filed on Jan. 10, 2014, patented as U.S. Pat. No. 9,081,516 on Jul. 14, 2015 and entitled “Application Memory Preservation for Dynamic Calibration of Memory Interfaces,” which in turn claimed priority as a Continuation of U.S. Utility patent application Ser. No. 14/023,630, filed on Sep. 11, 2013, patented as U.S. Pat. No. 8,843,778 on Sep. 23, 2014 and entitled “Dynamically Calibrated DDR Memory Controller,” which in turn claimed priority as a Continuation of U.S. Utility patent application Ser. No. 13/172,740, filed Jun. 29, 2011, patented as U.S. Pat. No. 8,661,285 on Feb. 25, 2014 and entitled “Dynamically Calibrated DDR Memory Controller,” which in turn claimed priority as a Continuation-In-Part of U.S. Utility patent application Ser. No. 12/157,081, filed on Jun. 6, 2008, patented as U.S. Pat. No. 7,975,164 on Jul. 5, 2011 and entitled “DDR Memory Controller,” all commonly assigned with the present application and incorporated herein by reference.
- A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
- This invention relates to circuits that interface with memories, in particular DDR or “double data rate” dynamic memories. Such circuits are found in a wide variety of integrated circuit devices including processors, ASICs, and ASSPs used in a wide variety of applications, as well as devices whose primary purpose is interfacing between memories and other devices.
- Double Data Rate, or “DDR” memories are extremely popular due to their performance and density, however they present challenges to designers. In order to reduce the amount of real estate on the memory chips, much of the burden of controlling the devices has been offloaded to circuits known as DDR memory controllers. These controller circuits may reside on Processor, ASSP, or ASIC semiconductor devices, or alternately may reside on semiconductor devices dedicated solely to the purpose of controlling DDR memories. Given the high clock rates and fast edge speeds utilized in today's systems, timing considerations become challenging and it is often the case that timing skews vary greatly from one system implementation to another, especially for systems with larger amounts of memory and a greater overall width of the memory bus.
- In general, the industry has responded by moving towards memory controllers that attempt to calibrate themselves during a power-on initialization sequence in order to adapt to a given system implementation. Such an approach has been supported by the DDR3 standard where a special register called a “Multi-Purpose Register” is included on the DDR3 memories in order for test data to be written prior to the calibration test performed during power-on initialization. The circuitry on memory controllers typically used for receiving data from DDR memories normally incorporates features into the Phy portion (Physical interface) of the memory controller circuit where the controller can adapt to system timing irregularities, this adaptation sometimes being calibrated during a power-on initialization test sequence.
-
FIG. 1 shows a typical prior art DDR memory controller where an Asynchronous FIFO 101 is utilized to move data from the clocking domain of the Phy 102 to the Core clock domain 103. Incoming read data dq0 is clocked into input registers by a delayed version of the dqs clock 107, this delay having been performed by delay element 108. - Asynchronous FIFO 101 typically consists of at least eight stages of flip-flops, requiring at least 16 flip-flops in total per dq data bit. Notice also that an
additional circuit 109 for delay and gating of dqs has been added prior to driving the Write Clock input of FIFO 101. This is due to the potential that exists for glitches on dqs. Both data and control signals on a typical DDR memory bus are actually bidirectional. As such, dqs may float at times during the transition between writes and reads, and may therefore be susceptible to glitches during those time periods. For this reason, typical prior art DDR controller designs utilizing asynchronous FIFOs add gating element 109 to reduce the propensity for errors due to glitches on dqs. After passing through the entire asynchronous FIFO 101, read data is transferred to the core domain according to Core_Clk 110. Additional circuitry is typically added to FIFO 101 in order to deal with timing issues relative to potential metastable conditions given the unpredictable relationship between Core_Clk and dqs. -
FIG. 2 shows another prior art circuit for implementing a DDR memory controller, in particular a style utilized by the FPGA manufacturer Altera Corp. Portions of two byte lanes are shown in FIG. 2 , the first byte lane represented by data bit dq0 201 and corresponding dqs strobe 202. The second byte lane is represented by dqs strobe 203 and data bit dq0 204. In general, the data and strobe signals connecting a DDR memory and a DDR memory controller are organized such that each byte, or eight bits, of data has its own dqs strobe signal. Each of these groupings is referred to as a byte lane. - Looking at the data path starting with
dq data bit 201 and dqs strobe 202, these pass through programmable delay elements, after which the data is captured in capture registers and then passed through further registers clocked from tapped delay line 213. These registers form what is called a levelization FIFO and attempt to align the data bits within a byte lane relative to other byte lanes. Tapped delay line 213 is driven by a PLL re-synchronization clock generator 214 which also drives the final stage registers 212 of the levelization FIFO as well as being made available to the core circuitry of the controller. The PLL re-synchronization clock generator 214 is phase and frequency synchronized with dqs. Notice that at this point, data stored in final stage registers 212 has not yet been captured by the core clock of the memory controller. Also notice that the circuit of FIG. 2 utilizes an individual delay element for each data bit such as dq0 201 and dq0 204. - When we examine fully-populated byte lanes, it should be noted that the additional delay elements required to provide an individual programmable delay on all incoming data bits can consume a large amount of silicon real estate on the device containing a DDR memory controller circuit. Such a situation is shown in
FIG. 3 where a single dqs strobe 301 requires a single programmable delay 302, while the eight data bits 303 of the byte lane each drive a programmable delay element 304. -
FIG. 4 describes some of the timing relationships that occur for a prior art DDR memory controller which uses delay elements within the Phy for individual read data bits. FIG. 4a shows a simplified diagram where a single data bit is programmably delayed by element 401 in addition to the dqs strobe being delayed by element 402. Typically data from input dq is captured on both the rising and falling edges of dqs as shown in FIGS. 1 and 2 ; however, for the sake of simplicity, the diagrams of FIGS. 3-12 only show the schematic and timing for the dq bits captured on the rising edge of dqs. By controlling both of these two delays, the output of capture register 403 can be delayed by any amount within the range of the delay elements before it is passed into the core clock domain and clocked into register 404 by the Core_Clk signal 405. In FIG. 4b , the dqs_delayed signal 406 is placed near the center of the valid window for dq 407 and, after being captured in register 403, data then enters the core domain at clock edge 408 as shown. In this scenario the latency to move the data into the core domain is relatively low simply because of the natural relationship between core clock and dqs. This relationship, however, is extremely dependent upon the system topology and delays, and in fact could have almost any phase relationship. - A different phase relationship is possible as shown in
FIG. 4c . Here, a first edge 409 of Core_Clk happens to occur just before the leading edge 410 of dqs_delayed. The result is that each data bit will not be captured in the core clock domain until leading edge 411 of Core_Clk as shown, and thus will be delayed by an amount of time 412 before being transferred into the core domain. Thus, while the ability to delay both dq and dqs can accomplish synchronization with the core clock, it may introduce a significant amount of latency in the process. - A DDR memory controller circuit and method are therefore needed that reliably capture and process memory data during read cycles while requiring a small gate count, resulting in implementations requiring a small amount of silicon real estate. The controller should also offer a high yield for memory controller devices as well as a high yield for memory system implementations using those controller devices. Further, it is desirable to provide a DDR memory controller that is calibrated to compensate for system level timing irregularities and for chip process parameter variations—that calibration occurring not only during power-up initialization, but also dynamically during system operation to further compensate for power supply voltage variations over time as well as system level timing variations as the system warms during operation.
- Further it is useful to have a memory controller circuit that can perform a portion of calibration operations while allowing a signal gating window that is large, and then can perform further calibration operations and functional operation with an optimized signal gating window.
- Also, given the ever increasing clock rates that memories are capable of, it is useful to perform calibration and functional operation with some number of related signals within a memory controller operating at half the frequency of memory strobe signals such as DQS.
- One object of this invention is to provide a DDR memory controller with a more flexible timing calibration capability such that the controller may be calibrated for higher performance operation while at the same time providing more margin for system timing variations.
- Another object of this invention is to provide a DDR memory controller with a more flexible timing calibration capability where this timing calibration is operated during the power-up initialization of the device containing the DDR memory controller and, where this timing calibration is performed in conjunction with at least one DDR memory device, both said device and controller installed in a system environment, and where the timing calibration performed by the memory controller takes into account delays in the round-trip path between the DDR memory controller and the DDR memory. By taking into account system delays during this calibration, the overall yield of the system is improved, and effectively the yield of the devices containing the DDR memory controller is also improved since the DDR memory controller is therefore self-adaptive to the irregularities of the system environment.
- Another object of this invention is to provide a DDR memory controller that transfers, at an earlier point in time, captured data on memory read cycles from the dqs clock domain to the core clock domain. This reduces the possibility that a glitch on dqs that may occur during the time period where dqs is not driven, would inadvertently clock invalid data into the controller during read cycles.
- Another object of this invention is to provide a DDR Memory Controller with a smaller gate count thereby reducing the amount of silicon required to implement the controller and the size and cost of the semiconductor device containing the controller function. Gate count is reduced by eliminating delay elements on the dq data inputs, and by eliminating the use of an asynchronous FIFO for transitioning data from the dqs clock domain to the core clock domain.
- Another object of this invention is to move captured data into the core clock domain as quickly as possible for read cycles to minimize latency.
- Another object of this invention is to provide a DDR memory controller that is calibrated to compensate for system level timing irregularities and for chip process parameter variations where that calibration occurs dynamically during system operation to compensate for power supply voltage variations over time as well as system level timing variations as the system warms during operation.
- Another object of the invention is to provide a memory interface that includes two different windows for gating key timing signals like DQS—a first that is large and allows for performing initial calibration functions when the precise timing is not yet known, and a second for gating key timing signals more precisely as timing relationships become more defined as the calibration process progresses.
- Another object of the invention is to provide a memory interface that operates at substantially half a DQS clock rate, or a reduced clock rate, such that data can be captured accurately and calibration performed accurately even as primary clock rates for memories increase over successive technology generations.
-
FIG. 1 shows a prior art DDR memory controller which utilizes an asynchronous FIFO with gated clock, all contained within the Phy portion of the controller circuit. -
FIG. 2 shows a prior art DDR memory controller where delay elements are used on both dq and dqs signals and a form of FIFO is used for data levelization, the FIFO being clocked by a clock that is PLL-synchronized with dqs, the entire circuit contained within the Phy portion of the memory controller. -
FIG. 3 describes the read data path for a prior art DDR memory controller having delay elements on both dq and dqs inputs. -
FIG. 4 shows the data capture and synchronization timing for the read data path of a prior art DDR memory controller having delay elements on both dq and dqs inputs. -
FIG. 5 shows the read data path for a DDR memory controller according to an embodiment of the present invention where delay elements are used on dqs but not on dq inputs, and read data synchronization is performed with the core clock by way of a core clock delay element. -
FIG. 6 shows the data capture and synchronization timing for the read data path of a DDR memory controller according to an embodiment of the present invention where delay elements are used on dqs but not on dq inputs, and read data synchronization is performed with the core clock by way of a core clock delay element. -
FIG. 7 shows the read data path for a DDR memory controller according to one embodiment of the present invention including a CAS latency compensation circuit which is clocked by the core clock. -
FIG. 8 shows the glitch problem which can occur on the bidirectional dqs signal in DDR memory systems. -
FIG. 9 shows a comparison of prior art memory controllers which utilize delay elements on both dq and the dqs inputs when compared with the memory controller of one embodiment of the present invention, with emphasis on the number of total delay elements required for each implementation. -
FIG. 10 shows a diagram for the read data path of a DDR memory controller according to one embodiment of the present invention with emphasis on the inputs and outputs for the Self Configuring Logic function which controls the programmable delay elements. -
FIG. 11 describes the timing relationships involved in choosing the larger passing window when the delay element producing Capture_Clk is to be programmed according to one embodiment of the present invention. -
FIG. 12 shows a timing diagram for the data eye indicating the common window for valid data across a group of data bits such as a byte lane, given the skew that exists between all the data bits. -
FIG. 13 shows a flow chart for the power-on initialization test and calibration operation according to one embodiment of the present invention, the results of this operation including choosing programmable delay values. -
FIG. 14 shows the functionality of FIG. 10 with circuitry added to implement a dynamically calibrated DDR controller function according to one embodiment of the invention, in particular to determine an optimum Capture_Clk delay. -
FIG. 15 shows a timing diagram where Core_Clk and ip_dqs are delayed and sampled as part of implementing a dynamically calibrated DDR controller function according to one embodiment of the invention. -
FIG. 16 shows a flowchart describing the process of delaying and sampling both ip_dqs and Core_Clk, and for computing an optimum Capture_Clk delay. -
FIG. 17 includes circuitry added for dynamic calibration, in particular for a second phase according to the process of FIG. 18 . -
FIG. 18 shows a flowchart describing the process of iteratively capturing read data from the DDR memory while sweeping different CAS latency compensation values to determine the settings for the DDR memory controller that provide the optimum CAS latency compensation. -
FIGS. 19-22 show circuit details and timing relationships for providing a memory interface that includes two different windows for gating key timing signals like DQS—a first that is large and allows for performing initial calibration functions when the precise timing is not yet known, and a second for gating key timing signals more precisely as timing relationships become more defined as the calibration process progresses. - Also shown in
FIGS. 19-22 are circuit details and timing relationships for a memory interface that operates at substantially half a DQS clock rate, or a reduced clock rate, such that data can be captured accurately and calibration performed accurately even as primary clock rates for memories increase over successive technology generations. -
FIGS. 23-26 depict additional details of the half frequency operation, pursuant to one embodiment of the invention. - In contrast to prior art DDR memory controllers where calibration features for timing inconsistencies are implemented only in the Phy portion of the controller, the DDR memory controller of one embodiment of the present invention focuses on utilizing core domain clocking mechanisms, at times combined with circuitry in the Phy, to implement an improved solution for a timing-adaptive DDR memory controller.
- In contrast with the prior art circuit of
FIG. 4 , FIG. 5 shows a simplified version of a DDR controller circuit according to an embodiment of the present invention. Here, the data inputs for a byte lane 501 are shown being captured in dq read data registers 502 without any additional delay elements added, these registers being clocked by a delayed version of dqs. The dqs clock signal 503 has dqs delay element 504 added, typically delaying dqs by approximately 90 degrees relative to the dqs signal driven by the DDR memory. The outputs of registers 502 enter the core domain and are captured in first core domain registers 505. Registers 505 are clocked by a delayed version of Core_Clk called Capture_Clk 506. Capture_Clk is essentially the output of core clock delay element 507 which produces a programmably delayed version of Core_Clk 508. The outputs of first core domain registers 505 feed second core domain registers 509 which are clocked by Core_Clk. The amount of delay assigned to programmable delay element 507 is controlled by a self-configuring logic circuit (SCL) contained within the memory controller, this self-configuring logic circuit determining the appropriate delay for element 507 during a power-on initialization test and calibration operation. -
FIG. 6 shows how the timing for the read data path can occur for the DDR memory controller circuit of one embodiment of the present invention. A simplified version of the read data path is shown in FIG. 6a where dqs is delayed by dqs delay element 601 which clocks dq into Phy data capture register 602. The output of data capture register 602 then feeds the first core domain register 603 which is clocked by Capture_Clk, the output of core clock delay element 604. The timing scenario shown in FIG. 6 occurs when the active edge of Core_Clk 605 (depicted in FIG. 6(b) ) occurs just after dq data 606 has been clocked into Phy data capture register 602 by dqs_delayed 607. In this scenario, data can be immediately clocked into first core domain register 603, and thus delay element 604 may be programmably set to a delay of essentially zero, making the timing for Capture_Clk essentially the same as Core_Clk. -
FIG. 6(c) shows another timing scenario where the active edge of Core_Clk 608 occurs just prior to dq data 609 being clocked into Phy data capture register 602 by dqs_delayed 610. As a result, core clock delay element 604 will be programmed with delay 611 such that first core domain register 603 is clocked on the active edge of Capture_Clk 612. Thus, regardless of the natural timing of Core_Clk relative to dqs, Capture_Clk will be positioned such that data will move from the Phy domain to the core domain in a predictable manner with minimal added latency due to random clock alignment. -
FIG. 7 shows an embodiment of the present invention including a circuit that compensates for CAS latency. According to Wikipedia: "CAS latency (CL) is the time (in number of clock cycles) that elapses between the memory controller telling the memory module to access a particular column in the current row, and the data from that column being read from the module's output pins. Data is stored in individual memory cells, each uniquely identified by a memory bank, row, and column. To access DRAM, controllers first select a memory bank, then a row (using the row address strobe, RAS), then a column (using the CAS), and finally request to read the data from the physical location of the memory cell. The CAS latency is the number of clock cycles that elapse from the time the request for data is sent to the actual memory location until the data is transmitted from the module." Thus, there is a timing unpredictability in any system implementation involving DDR memory between the read request from the controller to the memory and the resulting data actually arriving back at the memory controller. The amount of this timing unpredictability can be determined during the power-on initialization test and calibration operation, and then compensated for by the circuit shown in FIG. 7 where the output of second core domain register 701 feeds a partially populated array of registers which, along with direct connection path 705, feed multiplexer 706. These registers are all clocked by Core_Clk and thus create different numbers of clock cycles of CAS latency compensation depending upon which input is selected for multiplexer 706. During the power-on initialization test and calibration operation, different inputs for multiplexer 706 will be selected at different times during the test in order to determine which of the paths leading to multiplexer 706 is appropriate in order to properly compensate for the CAS delay in a particular system installation.
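- A small software model can illustrate the selectable-delay behavior of this compensation path. The sketch below is illustrative only and is not taken from FIG. 7 ; the structure name, stage depth, and field names are assumptions made for the example.

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_CAS_COMP_STAGES 4   /* depth of the register array feeding the mux (illustrative) */

/* Software model of the CAS latency compensation path: captured read data is
 * pushed through a chain of core-clock registers, and the multiplexer select
 * decides how many stages of delay (0..MAX_CAS_COMP_STAGES) are applied. */
typedef struct {
    uint32_t stage[MAX_CAS_COMP_STAGES]; /* register array clocked by Core_Clk */
    unsigned mux_select;                 /* 0 = direct path, N = N stages of delay */
} cas_comp_model_t;

/* Call once per Core_Clk cycle with the newly captured word; returns the word
 * presented to the core after the selected amount of CAS latency compensation. */
static uint32_t cas_comp_clock(cas_comp_model_t *m, uint32_t captured_word)
{
    uint32_t out = (m->mux_select == 0)
                 ? captured_word                /* direct connection path        */
                 : m->stage[m->mux_select - 1]; /* output of the selected stage  */

    /* Shift the register chain (models the flip-flops clocked by Core_Clk). */
    for (size_t i = MAX_CAS_COMP_STAGES - 1; i > 0; i--)
        m->stage[i] = m->stage[i - 1];
    m->stage[0] = captured_word;

    return out;
}
```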
In the earlier discussion with reference to FIG. 1 , it was mentioned that delay and gating element 109 was included in order to lower the propensity for spurious glitches on dqs inadvertently clocking FIFO 101. The timing diagram of FIG. 8 shows this problem in more detail. During the normal sequence of operation of a DDR memory, the dqs strobe is first driven by the memory controller during a write cycle and then, during a read cycle, it is driven by the DDR memory. In between, there is a transitional time period 801 where the dqs connection may float, that is, not be driven by either the memory or the controller. During time periods 801, it is possible for glitches 802 to be induced in dqs from a variety of sources including cross coupling from edges on other signals on boards or in the IC packages for the memory and/or the controller. In order to minimize the chance of any glitch on dqs causing data corruption, the embodiment of the present invention as shown in FIGS. 5 through 7 allows capture clock 803 to be optimally positioned relative to dqs_delayed 804 such that read data is always moved into the core clock domain as early as possible. -
FIG. 9 shows a comparison between an embodiment of the present invention and prior art memory controllers according to FIGS. 2 through 4 , with emphasis on the amount of silicon real estate required based on the numbers of delay elements introduced for an example implementation containing a total of 256 data bits. Notice in FIG. 9a that prior art memory controllers that include delay elements on all dq data bits 901 would require 256 delay elements 902 for dq inputs in addition to 16 delay elements 903 for dqs inputs. In contrast to this, FIG. 9b shows an implementation according to one embodiment of the present invention where only dqs input delay elements 904 are required, and therefore the total number of delay elements in the Phy for an embodiment of the present invention is 16 versus 272 for the prior art implementation of FIG. 9a . -
FIG. 10 shows a diagram of how the Self Configuring Logic (SCL) function 1001 interfaces with other elements of the DDR memory controller according to an embodiment of the present invention. In a first embodiment of the present invention, the SCL 1001 receives the output 1002 of the first core domain register (clocked by Capture_Clk) as well as the output 1003 of the second core domain register (clocked by Core_Clk). In turn, the SCL provides output 1004 which controls the delay of the delay element 1005 which creates Capture_Clk. The SCL also drives multiplexer 1006 which selects the different paths which implement the CAS latency compensation circuit as previously described in FIG. 7 where multiplexer 706 performs this selection function. - In an alternate embodiment of the present invention,
SCL 1001 also receives data 1007 from input data register 1008, and in turn also controls 1009 dqs delay element 1010, thereby enabling a much finer degree of control for the dqs delay function than is normally utilized in most memory controller designs, as well as allowing the dqs delay to be initialized as part of the power-on initialization test and calibration operation. -
FIG. 11 describes the concept behind the process for choosing the larger passing window when positioning Capture_Clk. As described previously for an embodiment of the present invention, the core clock signal is delayed in element 1101 as shown in FIG. 11a to produce Capture_Clk. FIG. 11b shows a timing diagram where the RD_Data signal 1102 is to be captured in first core domain register 1103. As shown in FIG. 11b , the position of core clock 1104 rarely falls in the center of the time that RD_Data 1102 is valid, in this instance being positioned towards the beginning of the valid time period 1105 for RD_Data. In this instance, two passing windows result. - Therefore in the scenario shown in
FIG. 11b , some amount of delay 1108 would be programmed into delay element 1101 so that Capture_Clk 1109 may be positioned in the larger passing window 1107. -
FIG. 12 shows a timing diagram for a group of data bits in a byte lane such as Rd_Data 1201 where the timing skew 1202 across the group of bits is shown as indicated. The common time across all data bits in the group where data is simultaneously valid is called the data eye 1203. After subtracting setup time 1204 and hold time 1205 from data eye 1203, what remains is the window within which Capture_Clk 1206 may be placed in order to properly clock valid data on all bits of Rd_Data 1201 within the byte lane. Delay line increments 1207 represent the possible timing positions that may be chosen for a programmable delay line to implement core clock delay element 604 that produces Capture_Clk. For all systems there will be a minimum number of delay line increments 1207 for which the power-on initialization test must determine that data is captured successfully; achieving that minimum number is necessary for the manufacturer of the system to be confident that the timing margin is robust enough for a production unit to be declared good. Thus, this number of delay line increments that is seen as a minimum requirement for a successful test is specified and stored in the system containing the memory controller, and is utilized in determining if the power-on initialization and calibration test is successful.
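- The relationship between the data eye, the setup and hold requirements, and the minimum number of passing delay-line increments can be expressed as a simple check, sketched below in C. The parameter names and the use of picosecond units are assumptions for illustration and do not come from FIG. 12 .

```c
#include <stdbool.h>

/* All times in picoseconds; the values and names are illustrative assumptions. */
typedef struct {
    int data_eye_ps;        /* common valid window across the byte lane        */
    int setup_ps;           /* required setup time at the capture register     */
    int hold_ps;            /* required hold time at the capture register      */
    int delay_step_ps;      /* size of one programmable delay-line increment   */
    int min_passing_steps;  /* minimum passing increments demanded by the test */
} eye_params_t;

/* Returns true when the usable Capture_Clk window spans at least the required
 * number of delay-line increments, and reports that width in *passing_steps. */
static bool capture_window_ok(const eye_params_t *p, int *passing_steps)
{
    int usable_ps = p->data_eye_ps - p->setup_ps - p->hold_ps;
    if (usable_ps < 0)
        usable_ps = 0;
    *passing_steps = usable_ps / p->delay_step_ps;
    return *passing_steps >= p->min_passing_steps;
}
```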
FIG. 13 shows a flow chart for the process implemented according to one embodiment of the present invention for a power-on initialization test and calibration operation. Software or firmware controls this operation and typically runs on a processor located in the system containing the DDR memory and the controller functionality described herein. This processor may be located on the IC containing the memory controller functionality, or may be located elsewhere within the system. In step 1301, a minimum passing window requirement is specified in terms of a minimum number of delay increments for which data is successfully captured, as described in the diagram of FIG. 12 . The minimum passing window requirement will be used to determine a pass or fail condition during the test, and also may be used in order to determine the number of delay increments that must be tested and how many iterations of the test loops (steps 1302 through 1307) must be performed. Steps 1302, 1303, and 1304 form a set of nested loops: for each latency value to be tested according to step 1302, each byte lane will be tested according to step 1303. And, for each byte lane to be tested according to step 1303, each delay tap value within a chosen range of delay tap values will be tested according to step 1304. So, for each specific permutation of latency delay, byte lane, and delay tap value, the BIST test (Built-In Self-Test for the read data test) will be run according to step 1305, and a pass or fail result will be recorded according to step 1306. Once all iterations of the nested "for" loops are completed as determined by step 1307, the processor controlling the power-on initialization and calibration test will then check (step 1308) to see if the minimum passing window requirement has been met as specified in step 1301. If the minimum has not been met, then the system will indicate a failure 1311. If the requirement has been met, then according to step 1309, for each byte lane the processor will choose the latency value that offers the largest passing window, and then choose the delay tap value that places capture clock in the center of that window. Finally, values will be programmed into control registers according to step 1310 such that all delays within the controller system according to this invention are programmed with optimum settings.
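- A firmware implementation of this sweep might be organized as the nested loops sketched below. This is an illustration only: the hook functions (set_cas_latency_comp, set_capture_delay_tap, run_read_bist) and the loop bounds are hypothetical placeholders, not an actual driver interface defined by the flow chart.

```c
#include <stdbool.h>

/* Hypothetical platform hooks; names and signatures are assumptions. */
extern void set_cas_latency_comp(int byte_lane, int latency_setting);
extern void set_capture_delay_tap(int byte_lane, int tap);
extern bool run_read_bist(int byte_lane);   /* true = BIST pass */

#define NUM_LATENCY_SETTINGS 4
#define NUM_BYTE_LANES       8
#define NUM_DELAY_TAPS       64

static bool pass[NUM_LATENCY_SETTINGS][NUM_BYTE_LANES][NUM_DELAY_TAPS];

/* Power-on sweep in the spirit of steps 1302-1307: for every latency setting,
 * byte lane, and delay tap, run the read BIST and record the result. */
static void poweron_calibration_sweep(void)
{
    for (int lat = 0; lat < NUM_LATENCY_SETTINGS; lat++) {
        for (int lane = 0; lane < NUM_BYTE_LANES; lane++) {
            set_cas_latency_comp(lane, lat);
            for (int tap = 0; tap < NUM_DELAY_TAPS; tap++) {
                set_capture_delay_tap(lane, tap);
                pass[lat][lane][tap] = run_read_bist(lane);
            }
        }
    }
}

/* Given one lane's pass map for a chosen latency, return the tap at the center
 * of the longest contiguous passing window (step 1309), or -1 if none passed. */
static int center_tap_of_widest_window(const bool *tap_pass, int num_taps)
{
    int best_start = -1, best_len = 0, start = -1;
    for (int t = 0; t <= num_taps; t++) {
        if (t < num_taps && tap_pass[t]) {
            if (start < 0) start = t;        /* window opens */
        } else if (start >= 0) {
            int len = t - start;             /* window closes */
            if (len > best_len) { best_len = len; best_start = start; }
            start = -1;
        }
    }
    return (best_len > 0) ? best_start + best_len / 2 : -1;
}
```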
- Computer memory is vulnerable to temperature changes both in the controller and the corresponding memory modules. As any DDR memory chip or as the chip containing the DDR memory controller heat up, and supply voltage variations occur due to other external factors such as loading experienced by the power supply source, VT variations can cause system level timing to change. These changes can affect the optimal programming settings as compared with those that were produced by operation of the SCL function when calibration was run at power on. Thus, DSCL functionality helps the chip to continuously compensate for VT variations providing the best DRAM timing margin even as system timing changes significantly over time. By performing the necessary calibration in the shortest period of time, DSCL also ensures that the impact on system performance is minimal. DSCL divides the problem of calculating the Capture_Clk delay and the problem of CAS latency compensation into separate problems per
FIGS. 16 and 18 , and solves each of these problems independently. It also runs independently and parallely in each byte lane. Thus the whole calibration process is greatly speeded up. Specifically, in one embodiment, if the user has an on-board CPU, the non-dynamic SCL could be run within about 2 milliseconds assuming 4 byte lanes and 4 milliseconds for 8 byte lanes. In one embodiment of the dynamic SCL, regardless of 4 or 8 byte lanes, SCL would run within 1 micro-second. - The operation of the DSCL functionality described herein utilizes portions of the existing SCL circuitry previously described and utilizes that existing circuitry during both the calibration phase and operational phase, however new circuitry is added for DSCL and the calibration phase is broken into two sub-phases. One of these sub-phases corresponds to the process described in
FIG. 16 , and the other sub-phase corresponds to the process described inFIG. 18 . -
FIG. 14 , when compared with FIG. 10 , shows the circuit component additions which may be present in order to support the dynamically calibrated version of the DDR memory controller as described herein. The purpose of the additions to FIG. 10 as shown in FIG. 14 is to support the first phase of the SCL calibration whereby an optimum Capture_Clk delay is determined according to the process of FIG. 16 . The optimum Capture_Clk value is determined by the Self-Configuring Logic 1001 and applied via output 1004 to delay element 1005. Here, the delayed version of the dqs input signal produced by delay element 1010, and herein called ip_dqs, is sampled in flip-flop 1413. Flip-flop 1413 is clocked by the output of delay element 1411 which delays Core_Clk. The output of flip-flop 1413 is connected 1414 to the self configuring logic function 1001. Core_Clk is also delayed in delay element 1415 which in turn samples Core_Clk in flip-flop 1417. The output of flip-flop 1417 is connected 1418 to the self configuring logic function 1001. Delay elements 1411 and 1415 are controlled by signals from the self configuring logic function 1001. An output 1419 of SCL logic function 1001 controls the select lines of multiplexer 1006, which is the same multiplexer as shown earlier as multiplexer 706 in FIG. 7 and is used to select captured read data which is delayed by different increments according to which flip-flop delay chain path is most appropriate. -
FIG. 15 graphically shows some of the timing delays that are manipulated as part of the dynamic calibration sequence of the DDR memory controller per one embodiment of the present invention and as described in FIG. 16 . Here, Core_Clk 1501 is delayed by different values, here marked value "A" 1503 in FIG. 15 . The ip_dqs signal 1502 is also delayed by different values, here marked value "B" 1504 . -
FIG. 16 shows a flowchart for the dynamic calibration procedure used to determine an optimum delay for Core_Clk delay element 1005 in order to produce an optimum timing for the Capture_Clk signal. In step 1601, a sequence of read commands is issued so that the ip_dqs signal toggles continuously. In step 1602, the Core_Clk signal is delayed and used to sample ip_dqs at different delay increments until a 1 to 0 transition is detected on ip_dqs, whereby this value for the Core_Clk delay is recorded as value "A". In step 1603, the Core_Clk signal is delayed and used to sample Core_Clk at different delay increments until a 0 to 1 transition is detected on Core_Clk, whereby this value for the Core_Clk delay is recorded as value "B". In step 1604, the optimum delay value "C" for delaying Core_Clk in order to produce an optimum Capture_Clk signal is computed according to the formula: if B−A>A, then C=(A+B)/2; otherwise C=A/2.
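- Step 1604 reduces to a few lines of arithmetic, shown below as a C sketch. The tap-count parameter names are illustrative assumptions; only the formula itself comes from the flowchart of FIG. 16 .

```c
/* Dynamic Capture_Clk delay computation following step 1604 of FIG. 16.
 * 'a_taps' is the Core_Clk delay at which a 1->0 transition was seen when
 * sampling ip_dqs, and 'b_taps' is the delay at which a 0->1 transition was
 * seen when sampling Core_Clk itself (both in delay-line taps). */
static int optimum_capture_clk_delay(int a_taps, int b_taps)
{
    /* If the distance from A to B exceeds A, center Capture_Clk between the
     * two measurements; otherwise place it halfway to A. */
    if ((b_taps - a_taps) > a_taps)
        return (a_taps + b_taps) / 2;
    return a_taps / 2;
}
```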
- FIG. 17 shows the circuitry within the DSCL functionality that is utilized during the portion of the calibration sequence described in the process of FIG. 18 . According to FIG. 11 , read data has been captured in flip-flop 1103 by Capture_Clk to produce Rd_Data_Cap 1110. Rd_Data_Cap 1110 is then captured in each of flip-flops 1701 on an edge of Core_Clk; these flip-flops are enabled to register Rd_Data_Cap by one of counters 1702 which themselves are also clocked by Core_Clk. Counters 1702 are enabled to start counting by a Read Command 1703 issued by the DSCL functionality. The outputs of flip-flops 1701 each go to a data comparator 1704 where they are compared with a predefined data value 1705 which is stored in the DDR memory controller in location 1706 and has also been previously placed in the DDR memory itself as described in the process of FIG. 18 . The outputs of the data comparators enter encoder 1707 whose output 1419 controls multiplexer 1006 which chooses a flip-flop chain delay path from those previously described in FIG. 7 . -
FIG. 18 shows a procedure for operating the DDR memory controller in order to calibrate the controller during dynamic operation, and in particular to determine the optimum overall CAS latency compensation. First, in step 1801 the Capture_Clk delay is set to the previously determined optimum value according to the procedure described in the flowchart of FIG. 16 . In step 1802 a known data pattern is read from a DDR memory connected to the DDR memory controller. This known data pattern originates in a stored location 1706 in the DDR controller device and would typically have been previously saved or located in the DDR memory. If such a pattern is not available in the DDR memory, an appropriate pattern would be written to the DDR memory before this step and subsequent steps are executed. If, in order to write such a known data pattern to the DDR memory, existing data at those memory locations needs to be preserved, the existing data may be read out and saved inside the memory controller or at another (unused) memory location, and then may be restored after the DSCL dynamic calibration sequence per FIGS. 16 and 18 is run. In step 1803 read data is captured from the DDR memory in an iterative manner while sweeping possible predetermined CAS latency compensation values from a minimum to a maximum value utilizing the different delay paths that can be chosen with the circuitry shown in FIG. 17 . In step 1804, when the read data matches at a particular CAS latency compensation, the parameters and settings that produced that optimum value of CAS latency compensation, i.e. the chosen delay path through the flip-flop chains feeding multiplexer 706 in combination with the previously determined optimum Capture_Clk delay, are recorded as the optimum parameters for the CAS latency compensation value and used thereafter during normal operation until another dynamic calibration sequence is performed.
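- The sweep of step 1803 and the match test of step 1804 could be driven by firmware along the lines of the sketch below. The hook functions, the compensation-setting range, and the training pattern value are assumptions for illustration and are not specified by FIG. 18 .

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical hooks; names and signatures are illustrative only. */
extern void     set_capture_clk_delay(int taps);
extern void     set_cas_comp_path(int setting);   /* selects one multiplexer input */
extern uint32_t read_training_word(void);          /* one captured read-back word   */

#define CAS_COMP_MIN  0
#define CAS_COMP_MAX  7
#define KNOWN_PATTERN 0xA5A55A5Au  /* assumed training pattern stored on both sides */

/* Second DSCL phase in the spirit of FIG. 18: with Capture_Clk already at its
 * optimum delay, sweep the CAS latency compensation settings and keep the
 * first one whose read-back matches the known pattern. Returns -1 on failure. */
static int calibrate_cas_latency(int optimum_capture_delay)
{
    set_capture_clk_delay(optimum_capture_delay);

    for (int setting = CAS_COMP_MIN; setting <= CAS_COMP_MAX; setting++) {
        set_cas_comp_path(setting);
        if (read_training_word() == KNOWN_PATTERN)
            return setting;      /* record this path for functional operation */
    }
    return -1;                   /* no setting matched: calibration failed    */
}
```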
- Circuits and methods are described for a DDR memory controller where two different DQS gating modes are utilized. These gating modes together ensure that the DQS signal, driven by a DDR memory to the memory controller, is only available when read data is valid, thus eliminating capture of undesirable data into the memory controller caused by glitches when DQS is floating. Two types of gating logic are used: Initial DQS gating logic, and Functional DQS gating logic. The Initial gating logic has additional margin to allow for the unknown round trip timing during initial bit levelling calibration. Eventually the memory controller will establish precise timing in view of the actual round-trip delay. Round trip delay is the difference between the instant when a read command is issued by the memory controller and the instant when the corresponding data from a DDR memory is received at the memory controller excluding the known and fixed number of clock cycle delays involved in fetching data in the DDR protocol. Even though this round trip delay has not been characterized when initial bit-levelling calibration is performed, it is useful to perform bit-levelling early in the overall calibration process as this makes subsequent phase and latency calibration for data capture more precise and consistent across all data bits. During bit-levelling calibration an alternating pattern of 1 s and 0 s is read from the memory and the memory controller is able to perform bit-levelling regardless of the round-trip delay due to the predictable nature of the pattern and the manner in which bit-leveling calibration operates. This does, however, require a wider window for DQS gating and hence the Initial gating mode as described herein is used. Please see co-pending U.S. Ser. No. 13/797,200 for details on calibration for bit-levelling. DQS functional gating is optimized to gate DQS precisely as Capture_Clk delay and CAS latency compensation calibration is performed. This gating functionality is especially useful when data capture into a core clock domain is performed at half the DQS frequency in view of rising clock rates for DDR memories.
- With newer DDR technologies, memory speeds are becoming faster and faster. This means that the period of the clocks are becoming smaller and smaller. This is problematic for successful data capture because the related timing windows also become smaller. By operating with some of the clocks involved in data capture at the half frequency, as well as other associated logic, the size of these timing windows can be increased. Whereas while operating at full frequency, SCL could theoretically choose a position for Capture_Clk in such a way that input DQS gating is not necessary, when running at half frequency such an option no longer exists. This is because the input DQS needs to be divided to half its frequency using a toggling flip-flop to produce a signal shown as
d1_half_rate_dqs 2103 inFIG. 21 . If d1_half_rate_dqs were to toggle because of a spurious noise pulse oninput DQS 1903 inFIG. 19 , or when DQS is toggling at other times not corresponding to a valid input being driven from theDRAM 1904, then it could have an opposite polarity from what is required to latch the input data from the DRAM correctly. - Especially when some of the capture-related clocks and logic are operated at half frequency, it can become problematic during a first run of bit-levelling calibration when the gating for
input DQS 1902 may not yet be perfect. In such a condition, it may be unclear how to best open/close DQS gating, since write side bit-levelling may need the gate to be open either perfectly or for more time. An initial gating strategy is therefore used for the first bit-levelling calibration because it is more lenient in that it will leave the gate open for a larger amount of time before closing it. This does not cause a problem for the bit-leveling function to work properly since it does not depend on d1_half_rate_dqs to perform its function. This capability and extra margin is not needed after SCL calibration is performed, as described earlier in this specification with respect to Self-Configuring Logic 1001, because the gating can then be programmed more precisely within the functional gating mode using the information obtained by SCL. - This capability to use two gating modes of operation is also useful for an implementation even where the clocks are operated at full frequency, in view of the smaller available timing margins as memory access clock speeds continue to rise from year to year.
- The waveform of
FIG. 19 shows a hypothetical example of the goal of DQS Gating by only allowing the DQS pulses that correspond to the issued read command to be operated on by the memory controller. As shown inFIG. 20 , there are two types of gating logic, theInitial gating logic 2002, and theFunctional gating logic 2003. The difference between the two is how precisely they work. TheInitial gating logic 2002 has additional margin to allow for the unknown input DQS round trip timing during initial bit-levelling calibration. TheFunctional gating logic 2003 gates DQS precisely based on the round trip timing information discovered and refined during SCL calibration. Regardless of which gating logic is active, either 2002 or 2003, the resulting output is a gated ip_dqs called ip_dqs (post gate) 2005. There is also a disablecontrol 2004 that can be used which forgoes gating but it is not advised to turn it on with half-frequency mode since glitches can invert the phase of the divided DQS. -
- FIG. 20 shows a high-level block diagram representation of the logic used for both Initial DQS gating 2002 and Functional DQS gating 2003. The Initial gating mode is only used for the first time that bit-levelling calibration is run. At this initial point in the calibration process, SCL calibration has not yet been run. Therefore the Functional gate timing would be imprecise if used at this stage of the calibration process. After the first time bit levelling is run using Initial DQS gating, Functional gating mode is used during SCL calibration and for functional operation after determination of precise timing values for Capture_clk 2105 and CAS latency calibration. Thereafter, whenever bit levelling or dynamic SCL calibration is run from time to time during functional system operation, the Functional gating timing is used.
Capture_clk 2105 timing. During the first run of SCL calibration, the gate opening timing is not precise, so it is possible that for half-frequency operation—for applications where half-frequency functionality according to the present invention is used—the divided input DQS, called d1_half_rate_dqs 2103, has the opposite phase from what is required. This situation is automatically detected and corrected by SCL calibration as described below with respect to SCL Clock Domain Crossing. After SCL calibration has completed, the just discovered Capture_Clk and CAS latency settings are used to close the gate precisely, for functional operation and for any further calibration operations. - SCL Clock Domain Crossing and Half-Frequency Capture Logic
- One exemplary circuit used to implement the read capture logic is shown in
FIG. 21 for applications where half-frequency functionality according to the present invention is used. As described earlier in this specification,capture_clk 2105 is the variable delay clock which SCL will tune so that there is optimal setup and hold margins for clocking data from the input DDR3/DDR4 strobe domain to the memory controller's core clock domain, where it is captured bycore_clk 2104. - During SCL operation, the memory controller will continuously look for the location of the second falling edge of
ip_dqs 2102. This is the edge in which valid data onip_dq 2101 will be available. The data will cross clock domains from this edge to the falling edge ofd1_half_rate_dqs 2103 which happens on the same edge of ip_dqs that triggered d1_half_rate_dqs to go low. This is done to reduce latency on the read path but it must be noted that to check timing based on this, a multi-cycle path of zero is used to time the path during Static Timing Analysis. SCL will find the center between the rising edge of core_clk and the falling edge of the next d1_half_rate_dqs strobe, shown by points A 2201 andB 2202 in theFIG. 22 . Whichever point gives the largest setup and hold margins—point B in the example below—will be set as the active edge location for capture_clk. - Phase Fixing
- As described above, valid read data is available after the second falling edge of ip_dqs or the falling edge of the divided DQS, d1_half_rate_dqs. It is possible that d1_half_rate_dqs could start or become out of phase. If out of phase, the data read back will not be correct. SCL calibration has the ability to detect this situation. Once SCL finishes calibration, it will check to see if it failed or not. If it passed, the phase is correct and normal functionality will follow. If it failed, SCL will run CAS latency calibration again after flipping the polarity of d1_half_rate_dqs placing it back into phase. The setting for Capture_Clk will also be recalculated by moving point A in
FIG. 22 either forward or backward by 1 cycle of ip_dqs based on whether A is lesser or greater than one cycle of ip_dqs. - Logic for Initial Gating During Initial Bit Levelling Calibration
- In the Initial gating mode, the gate is extended 8 full rate cycles beyond the falling edge of
rd_data_en_scl 2001 to ensure maximum round trip delay in receiving valid DQS pulses is accounted for. This is exemplary, and extension by other numbers of full rate cycles is possible. -
- FIG. 23 shows an example timing diagram of the fundamental signals in the initial ABC gating routine used to create the final gating signal. The signals shown in FIG. 23 are defined as follows:
- Read Data Enable SCL 2001: Read enable signal from the memory controller which is used for calibration purposes and to control the DQS gate signal.
- Read Data Enable SCL Delayed 2303: This is the read data enable SCL signal but delayed by two full rate cycles.
- Read Data Enable Count 2304: A counter which is used to extend the final DQS gate signal by eight full rate cycles.
- Read Data Enable SCL Extended 2305: A one bit signal derived from the read data enable count to extend the final DQS gate by eight cycles.
- DQS Gate Final 2306: This signal will gate DQS but it has no concept of round trip time and therefore opens earlier and closes later giving more margins. (NOTE: this signal is the same one used for functional gating, but the logic to have the gate open/close is different since the round trip time is known)
- DQS 2307: The incoming DQS from the memory.
- Note that in
FIG. 23 the round trip delay here looks relatively small as the drawing has been simplified. Round trip delay is the time it takes for the read data and strobe to be received at the memory controller after the memory has received the read address and command issued by the memory controller. The read data enable SCL delayed signal will open before the DQS strobe is received by the memory controller as it is much more lenient. - Before SCL calibration has been run, the memory controller does not know anything about the round trip time and therefore the gate will not open/close perfectly. This is why Initial gating mode is used since it is much more lenient on when it opens and closes the gate, thus not interfering with bit levelling calibration. Again, Initial gating mode in half frequency mode is only used during the initial run of bit levelling calibration for both the read and write side. When the memory controller is going start reading data for calibration, it will generate a read data enable signal which takes in account the read latency of the memory. When this read data enable signal is used for gating, it is delayed further by two cycles. This is exemplary and could be delayed more or less. The delayed version of the read data enable signal will open the gate albeit a bit earlier than the time when the DQS from the memory reaches the memory controller. At the falling edge of the delayed read data enable signal, the memory controller will extend the gating signal by 8 full rate cycles and then will close it. The position at which it closes will be after the DQS has arrived at the memory controller from the memory.
- Logic for Functional Gating (Functional Gating Logic)
- The logic for generating the functional gating signal is more intricate. It is necessary to being gating shortly before the rising edge of the first DQS pulse during the preamble and to stop gating shortly after the last falling edge during the postamble as shown in
FIG. 25 . - How each of the gating logic functions fits in the overall memory interface according to the invention is shown in the schematic block diagram per
FIG. 24 in conjunction with the timing diagram ofFIG. 25 . - Gate Opening Timing for Functional Gating
- Per FIG. 25, in order to begin gating just before the first pulse of DQS, it must be determined when that first pulse actually occurs with respect to something that is known. Note that there is also an analog or digital DLL that is used to delay the input DQS by ¼ cycle in order to center it with respect to DQ. The waveforms of FIG. 25 show the timing of the gating signal with respect to ip_dqs prior to being delayed by the DLL (pre DLL) 2102 as well as after being delayed by the DLL (post DLL) 2401. In FIG. 25, with respect to half-frequency operation, dl_half_rate_dqs 2103 is a divided version of ip_dqs (post DLL) 2401 which toggles on every falling edge of ip_dqs (post DLL). When SCL calibration runs, it determines the phase difference between the rising edge of core_clk 2104 and the falling edge of dl_half_rate_dqs 2103, which corresponds to the second falling edge of ip_dqs (post DLL) 2401, and stores this value in a variable called cycle_cnt (this is the same as the SCL measurement point A mentioned previously with respect to FIG. 22). Therefore the invention uses cycle_cnt as a reference to determine when ip_dqs will pulse with respect to core_clk, so that gating can begin beforehand.
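As an illustration of the cycle_cnt measurement just described, the sketch below searches for the delay, in taps after a core_clk rising edge, at which dl_half_rate_dqs is observed to fall. The tap-by-tap sweep and the helper names are assumptions made purely for illustration; the patent does not prescribe this particular search procedure.

```python
# Illustrative sketch only: locate the phase offset from a core_clk rising
# edge to the falling edge of dl_half_rate_dqs by sampling the strobe at
# successively larger delays. Not necessarily how SCL performs it in hardware.

def measure_cycle_cnt(sample_dl_half_rate_dqs, num_taps):
    """sample_dl_half_rate_dqs(tap) returns the 0/1 value of dl_half_rate_dqs
    observed 'tap' delay steps after a core_clk rising edge."""
    prev = sample_dl_half_rate_dqs(0)
    for tap in range(1, num_taps):
        cur = sample_dl_half_rate_dqs(tap)
        if prev == 1 and cur == 0:   # falling edge of dl_half_rate_dqs found
            return tap               # stored as cycle_cnt (in delay taps)
        prev = cur
    return None                      # no falling edge seen within the sweep

# Example with an idealized waveform whose falling edge sits at tap 37.
waveform = lambda tap: 1 if tap < 37 else 0
print(measure_cycle_cnt(waveform, num_taps=128))   # prints 37
```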
- First, cycle_cnt_clk 2402 is created by delaying core_clk by the value cycle_cnt. This new clock (cycle_cnt_clk) has each positive edge aligned to each second falling edge of ip_dqs (post DLL). Another clock, cycle_cnt_modified_clk 2403, is generated ¼ full rate clock cycle sooner or one and ¾ full rate clock cycles later than cycle_cnt_clk (depending on whether cycle_cnt is greater than ¼ full rate clock cycle or less than ¼ cycle, respectively).
- It can be seen that each positive edge of cycle_cnt_modified_clk 2403 is aligned to each second falling edge of ip_dqs (pre DLL) 2102 and is therefore centered in the middle of the ip_dqs preamble time, as shown by the dotted line 2501 in FIG. 25.
- Next, the read enable signal from the controller is registered into this new cycle_cnt_modified_clk domain using capture_clk and cycle_cnt_clk as staging clocks. Capture_clk is guaranteed by SCL calibration to be positioned so that maximum setup and hold margins are obtained when transitioning between the core_clk and cycle_cnt_clk domains. Timing from cycle_cnt_clk to cycle_cnt_modified_clk is met by design. This read enable signal, once latched in the cycle_cnt_modified_clk domain, is used to signal the start of DQS gating. The clock cycle latency of the read enable signal is also adjusted based on the SCL-calculated CAS latency, as described previously. The enable signal is also shortened by one clock cycle compared to the length of the read burst so that it does not affect the gate closing timing.
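The sketch below restates the gate-opening arithmetic and staging in Python, with all delays expressed in full rate clock cycles. The function and class names are illustrative assumptions, and the flip-flop model only shows the order of the staging clocks; it ignores real setup and hold timing.

```python
# Sketch of the gate-opening path described above (assumed, simplified model).

FULL_RATE_CYCLE = 1.0

def cycle_cnt_modified_delay(cycle_cnt):
    """Delay from core_clk to cycle_cnt_modified_clk, derived from cycle_cnt."""
    quarter = 0.25 * FULL_RATE_CYCLE
    if cycle_cnt > quarter:
        return cycle_cnt - quarter             # 1/4 cycle sooner than cycle_cnt_clk
    return cycle_cnt + 1.75 * FULL_RATE_CYCLE  # 1 3/4 cycles later than cycle_cnt_clk

class ReadEnableStaging:
    """Read enable staged toward the gating domain, one flip-flop per clock:
    capture_clk (placed by SCL for margin), then cycle_cnt_clk, then
    cycle_cnt_modified_clk, whose output starts DQS gating."""
    def __init__(self):
        self.stage_capture = 0
        self.stage_cycle_cnt = 0
        self.stage_modified = 0      # drives the start of DQS gating

    def on_capture_clk(self, read_enable_core):
        self.stage_capture = read_enable_core

    def on_cycle_cnt_clk(self):
        self.stage_cycle_cnt = self.stage_capture

    def on_cycle_cnt_modified_clk(self):
        self.stage_modified = self.stage_cycle_cnt
        return self.stage_modified

print(cycle_cnt_modified_delay(0.5))    # prints 0.25  (enough headroom)
print(cycle_cnt_modified_delay(0.125))  # prints 1.875 (not enough headroom)
```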
- Gate Closing
- Per FIG. 26, the DQS gate is closed directly by the last falling edge of the final DQS pulse. This is done by latching the third staged read data enable signal (in the cycle_cnt_clk domain) into the dl_half_rate_dqs domain.
- Thus, the foregoing description of preferred embodiments of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations will be apparent to one of ordinary skill in the relevant arts. For example, unless otherwise specified, steps performed in the embodiments of the invention disclosed can be performed in alternate orders, certain steps can be omitted, and additional steps can be added. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications that are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims and their equivalents.
Claims (30)
Priority Applications (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/926,902 US10032502B1 (en) | 2008-06-06 | 2018-03-20 | Method for calibrating capturing read data in a read data path for a DDR memory interface circuit |
US15/996,365 US10242730B2 (en) | 2008-06-06 | 2018-06-01 | Double data rate (DDR) memory controller apparatus and method |
US16/049,693 US10269408B2 (en) | 2008-06-06 | 2018-07-30 | Double data rate (DDR) memory controller apparatus and method |
US16/296,025 US10586585B2 (en) | 2008-06-06 | 2019-03-07 | Double data rate (DDR) memory controller apparatus and method |
US16/584,600 US10734061B2 (en) | 2008-06-06 | 2019-09-26 | Double data rate (DDR) memory controller apparatus and method |
US16/909,871 US11348632B2 (en) | 2008-06-06 | 2020-06-23 | Double data rate (DDR) memory controller apparatus and method |
US17/728,673 US11710516B2 (en) | 2008-06-06 | 2022-04-25 | Double data rate (DDR) memory controller apparatus and method |
US18/208,050 US12014767B2 (en) | 2008-06-06 | 2023-06-09 | Double data rate (DDR) memory controller apparatus and method |
US18/657,640 US20240290372A1 (en) | 2008-06-06 | 2024-05-07 | Double data rate (ddr) memory controller apparatus and method |
Applications Claiming Priority (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/157,081 US7975164B2 (en) | 2008-06-06 | 2008-06-06 | DDR memory controller |
US13/172,740 US8661285B2 (en) | 2008-06-06 | 2011-06-29 | Dynamically calibrated DDR memory controller |
US14/023,630 US8843778B2 (en) | 2008-06-06 | 2013-09-11 | Dynamically calibrated DDR memory controller |
US14/152,902 US9081516B2 (en) | 2008-06-06 | 2014-01-10 | Application memory preservation for dynamic calibration of memory interfaces |
US201462063136P | 2014-10-13 | 2014-10-13 | |
US14/752,903 US9552853B2 (en) | 2008-06-06 | 2015-06-27 | Methods for calibrating a read data path for a memory interface |
US14/882,226 US9431091B2 (en) | 2008-06-06 | 2015-10-13 | Multiple gating modes and half-frequency dynamic calibration for DDR memory controllers |
US15/249,188 US9805784B2 (en) | 2008-06-06 | 2016-08-26 | Multiple gating modes and half-frequency dynamic calibration for DDR memory controllers |
US15/722,209 US10229729B2 (en) | 2008-06-06 | 2017-10-02 | Method for calibrating capturing read data in a read data path for a DDR memory interface circuit |
US15/926,902 US10032502B1 (en) | 2008-06-06 | 2018-03-20 | Method for calibrating capturing read data in a read data path for a DDR memory interface circuit |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/722,209 Continuation US10229729B2 (en) | 2008-06-06 | 2017-10-02 | Method for calibrating capturing read data in a read data path for a DDR memory interface circuit |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/996,365 Continuation US10242730B2 (en) | 2008-06-06 | 2018-06-01 | Double data rate (DDR) memory controller apparatus and method |
Publications (2)
Publication Number | Publication Date |
---|---|
US10032502B1 (en) | 2018-07-24
US20180211699A1 (en) | 2018-07-26
Family
ID=55180702
Family Applications (12)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/882,226 Active US9431091B2 (en) | 2008-06-06 | 2015-10-13 | Multiple gating modes and half-frequency dynamic calibration for DDR memory controllers |
US15/249,188 Active US9805784B2 (en) | 2008-06-06 | 2016-08-26 | Multiple gating modes and half-frequency dynamic calibration for DDR memory controllers |
US15/722,209 Active US10229729B2 (en) | 2008-06-06 | 2017-10-02 | Method for calibrating capturing read data in a read data path for a DDR memory interface circuit |
US15/926,902 Active US10032502B1 (en) | 2008-06-06 | 2018-03-20 | Method for calibrating capturing read data in a read data path for a DDR memory interface circuit |
US15/996,365 Active US10242730B2 (en) | 2008-06-06 | 2018-06-01 | Double data rate (DDR) memory controller apparatus and method |
US16/049,693 Active US10269408B2 (en) | 2008-06-06 | 2018-07-30 | Double data rate (DDR) memory controller apparatus and method |
US16/296,025 Active US10586585B2 (en) | 2008-06-06 | 2019-03-07 | Double data rate (DDR) memory controller apparatus and method |
US16/584,600 Active US10734061B2 (en) | 2008-06-06 | 2019-09-26 | Double data rate (DDR) memory controller apparatus and method |
US16/909,871 Active US11348632B2 (en) | 2008-06-06 | 2020-06-23 | Double data rate (DDR) memory controller apparatus and method |
US17/728,673 Active US11710516B2 (en) | 2008-06-06 | 2022-04-25 | Double data rate (DDR) memory controller apparatus and method |
US18/208,050 Active US12014767B2 (en) | 2008-06-06 | 2023-06-09 | Double data rate (DDR) memory controller apparatus and method |
US18/657,640 Pending US20240290372A1 (en) | 2008-06-06 | 2024-05-07 | Double data rate (ddr) memory controller apparatus and method |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/882,226 Active US9431091B2 (en) | 2008-06-06 | 2015-10-13 | Multiple gating modes and half-frequency dynamic calibration for DDR memory controllers |
US15/249,188 Active US9805784B2 (en) | 2008-06-06 | 2016-08-26 | Multiple gating modes and half-frequency dynamic calibration for DDR memory controllers |
US15/722,209 Active US10229729B2 (en) | 2008-06-06 | 2017-10-02 | Method for calibrating capturing read data in a read data path for a DDR memory interface circuit |
Family Applications After (8)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/996,365 Active US10242730B2 (en) | 2008-06-06 | 2018-06-01 | Double data rate (DDR) memory controller apparatus and method |
US16/049,693 Active US10269408B2 (en) | 2008-06-06 | 2018-07-30 | Double data rate (DDR) memory controller apparatus and method |
US16/296,025 Active US10586585B2 (en) | 2008-06-06 | 2019-03-07 | Double data rate (DDR) memory controller apparatus and method |
US16/584,600 Active US10734061B2 (en) | 2008-06-06 | 2019-09-26 | Double data rate (DDR) memory controller apparatus and method |
US16/909,871 Active US11348632B2 (en) | 2008-06-06 | 2020-06-23 | Double data rate (DDR) memory controller apparatus and method |
US17/728,673 Active US11710516B2 (en) | 2008-06-06 | 2022-04-25 | Double data rate (DDR) memory controller apparatus and method |
US18/208,050 Active US12014767B2 (en) | 2008-06-06 | 2023-06-09 | Double data rate (DDR) memory controller apparatus and method |
US18/657,640 Pending US20240290372A1 (en) | 2008-06-06 | 2024-05-07 | Double data rate (ddr) memory controller apparatus and method |
Country Status (1)
Country | Link |
---|---|
US (12) | US9431091B2 (en) |
Families Citing this family (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014207237A1 (en) * | 2013-06-27 | 2014-12-31 | Napatech A/S | An apparatus and a method for determining a point in time |
JP2015103262A (en) * | 2013-11-25 | 2015-06-04 | ルネサスエレクトロニクス株式会社 | Semiconductor device |
US9722767B2 (en) * | 2015-06-25 | 2017-08-01 | Microsoft Technology Licensing, Llc | Clock domain bridge static timing analysis |
KR20170101597A (en) | 2016-02-29 | 2017-09-06 | 에스케이하이닉스 주식회사 | Test apparatus |
KR102666132B1 (en) | 2016-12-21 | 2024-05-14 | 삼성전자주식회사 | Data alignment circuit of a semiconductor memory device, semiconductor memory device and method of aligning data in a semiconductor memory device |
US10359803B2 (en) * | 2017-05-22 | 2019-07-23 | Qualcomm Incorporated | System memory latency compensation |
US10347307B2 (en) * | 2017-06-29 | 2019-07-09 | SK Hynix Inc. | Skew control circuit and interface circuit including the same |
KR102378384B1 (en) | 2017-09-11 | 2022-03-24 | 삼성전자주식회사 | Operation method of nonvolatile memory device and operation method of memory controller |
KR102438991B1 (en) | 2017-11-28 | 2022-09-02 | 삼성전자주식회사 | Memory device and operation method thereof |
US10580476B2 (en) | 2018-01-11 | 2020-03-03 | International Business Machines Corporation | Simulating a single data rate (SDR) mode on a dual data rate (DDR) memory controller for calibrating DDR memory coarse alignment |
US10481834B2 (en) * | 2018-01-24 | 2019-11-19 | Samsung Electronics Co., Ltd. | Erasure code data protection across multiple NVME over fabrics storage devices |
US10535387B2 (en) * | 2018-02-07 | 2020-01-14 | Micron Technology, Inc. | DQS gating in a parallelizer of a memory device |
US10607671B2 (en) * | 2018-02-17 | 2020-03-31 | Micron Technology, Inc. | Timing circuit for command path in a memory device |
CN108922570B (en) * | 2018-07-13 | 2020-11-13 | 豪威科技(上海)有限公司 | Phase offset detection method, training method, circuit and system for reading DQS signal |
US10522204B1 (en) * | 2018-11-07 | 2019-12-31 | Realtek Semiconductor Corporation | Memory signal phase difference calibration circuit and method |
US11735237B2 (en) | 2019-02-27 | 2023-08-22 | Rambus Inc. | Low power memory with on-demand bandwidth boost |
US10936222B2 (en) | 2019-06-19 | 2021-03-02 | International Business Machines Corporation | Hardware abstraction in software or firmware for hardware calibration |
CN110310685A (en) * | 2019-06-28 | 2019-10-08 | 西安紫光国芯半导体有限公司 | One kind writing clock delay method of adjustment and circuit |
US11270745B2 (en) * | 2019-07-24 | 2022-03-08 | Realtek Semiconductor Corp. | Method of foreground auto-calibrating data reception window and related device |
CN110399319B (en) * | 2019-07-25 | 2021-03-23 | 尧云科技(西安)有限公司 | NAND Flash PHY |
US11416353B2 (en) | 2019-09-13 | 2022-08-16 | Dell Products L.P. | DIMM voltage regulator soft start-up for power fault detection |
CN114127697A (en) | 2019-09-13 | 2022-03-01 | 铠侠股份有限公司 | Memory system |
JP7458740B2 (en) * | 2019-10-21 | 2024-04-01 | キオクシア株式会社 | Memory system and control method |
US11935613B2 (en) * | 2020-08-05 | 2024-03-19 | Texas Instruments Incorporated | Method for tuning an external memory interface |
US11360709B1 (en) * | 2020-11-20 | 2022-06-14 | Faraday Technology Corporation | Gate signal control circuit for DDR memory system |
US11892506B2 (en) * | 2020-11-30 | 2024-02-06 | Mediatek Singapore Pte. Ltd. | Method and circuit for at-speed testing of multicycle path circuits |
US11609868B1 (en) | 2020-12-31 | 2023-03-21 | Waymo Llc | Control calibration timing to avoid memory write blackout period |
US20230176608A1 (en) * | 2021-12-08 | 2023-06-08 | Advanced Micro Devices, Inc. | Read clock start and stop for synchronous memories |
US12002541B2 (en) | 2021-12-08 | 2024-06-04 | Advanced Micro Devices, Inc. | Read clock toggle at configurable PAM levels |
US11923853B2 (en) | 2022-02-25 | 2024-03-05 | Nvidia Corp. | Circuit structures to measure flip-flop timing characteristics |
CN117253520B (en) * | 2023-01-18 | 2024-05-28 | 北京忆芯科技有限公司 | Read clock and programming clock and method for distinguishing operation of NVM chip |
CN116665731B (en) * | 2023-08-02 | 2023-10-03 | 成都智多晶科技有限公司 | DDR memory sampling calibration method and DDR memory |
CN118244689B (en) * | 2024-05-29 | 2024-09-03 | 小米汽车科技有限公司 | Chip system control method, system-level chip and vehicle |
Family Cites Families (87)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5157530A (en) | 1990-01-18 | 1992-10-20 | International Business Machines Corporation | Optical fiber system |
US5548620A (en) | 1994-04-20 | 1996-08-20 | Sun Microsystems, Inc. | Zero latency synchronized method and apparatus for system having at least two clock domains |
US6510503B2 (en) | 1998-07-27 | 2003-01-21 | Mosaid Technologies Incorporated | High bandwidth memory interface |
US6779128B1 (en) | 2000-02-18 | 2004-08-17 | Invensys Systems, Inc. | Fault-tolerant data transfer |
US6316980B1 (en) | 2000-06-30 | 2001-11-13 | Intel Corporation | Calibrating data strobe signal using adjustable delays with feedback |
US6782459B1 (en) | 2000-08-14 | 2004-08-24 | Rambus, Inc. | Method and apparatus for controlling a read valid window of a synchronous memory device |
US6691214B1 (en) | 2000-08-29 | 2004-02-10 | Micron Technology, Inc. | DDR II write data capture calibration |
US6370067B1 (en) | 2001-01-25 | 2002-04-09 | Ishoni Networks, Inc. | Automatic configuration of delay parameters in a dynamic memory controller |
US6442102B1 (en) | 2001-04-04 | 2002-08-27 | International Business Machines Corporation | Method and apparatus for implementing high speed DDR SDRAM read interface with reduced ACLV effects |
US6975595B2 (en) | 2001-04-24 | 2005-12-13 | Atttania Ltd. | Method and apparatus for monitoring and logging the operation of a distributed processing system |
JP2002324398A (en) | 2001-04-25 | 2002-11-08 | Mitsubishi Electric Corp | Semiconductor memory device, memory system and memory module |
JP4067787B2 (en) | 2001-07-05 | 2008-03-26 | 富士通株式会社 | Parallel signal transmission device |
JP2003050738A (en) | 2001-08-03 | 2003-02-21 | Elpida Memory Inc | Calibration method and memory system |
US20030041295A1 (en) | 2001-08-24 | 2003-02-27 | Chien-Tzu Hou | Method of defects recovery and status display of dram |
US6646929B1 (en) | 2001-12-05 | 2003-11-11 | Lsi Logic Corporation | Methods and structure for read data synchronization with minimal latency |
US6496043B1 (en) | 2001-12-13 | 2002-12-17 | Lsi Logic Corporation | Method and apparatus for measuring the phase of captured read data |
TW559694B (en) | 2002-06-21 | 2003-11-01 | Via Tech Inc | Method and system of calibrating the control delay time |
US7036053B2 (en) | 2002-12-19 | 2006-04-25 | Intel Corporation | Two dimensional data eye centering for source synchronous data transfers |
US6864715B1 (en) | 2003-02-27 | 2005-03-08 | Xilinx, Inc. | Windowing circuit for aligning data and clock signals |
US7209531B1 (en) | 2003-03-26 | 2007-04-24 | Cavium Networks, Inc. | Apparatus and method for data deskew |
US7177379B1 (en) | 2003-04-29 | 2007-02-13 | Advanced Micro Devices, Inc. | DDR on-the-fly synchronization |
US7240249B2 (en) | 2003-06-26 | 2007-07-03 | International Business Machines Corporation | Circuit for bit skew suppression in high speed multichannel data transmission |
US6940768B2 (en) | 2003-11-04 | 2005-09-06 | Agere Systems Inc. | Programmable data strobe offset with DLL for double data rate (DDR) RAM memory |
US20050114725A1 (en) | 2003-11-24 | 2005-05-26 | Qualcomm, Inc. | Calibrating an integrated circuit to an electronic device |
KR20050061123A (en) | 2003-12-18 | 2005-06-22 | 삼성전자주식회사 | Data control circuit in the double data rate synchronous dram controller |
US7259606B2 (en) | 2004-01-27 | 2007-08-21 | Nvidia Corporation | Data sampling clock edge placement training for high speed GPU-memory interface |
US6972998B1 (en) | 2004-02-09 | 2005-12-06 | Integrated Device Technology, Inc. | Double data rate memory devices including clock domain alignment circuits and methods of operation thereof |
US7123051B1 (en) | 2004-06-21 | 2006-10-17 | Altera Corporation | Soft core control of dedicated memory interface hardware in a programmable logic device |
US7171321B2 (en) | 2004-08-20 | 2007-01-30 | Rambus Inc. | Individual data line strobe-offset control in memory systems |
US7157948B2 (en) | 2004-09-10 | 2007-01-02 | Lsi Logic Corporation | Method and apparatus for calibrating a delay line |
US7366862B2 (en) | 2004-11-12 | 2008-04-29 | Lsi Logic Corporation | Method and apparatus for self-adjusting input delay in DDR-based memory systems |
US7543172B2 (en) | 2004-12-21 | 2009-06-02 | Rambus Inc. | Strobe masking in a signaling system having multiple clock domains |
US7493461B1 (en) | 2005-01-20 | 2009-02-17 | Altera Corporation | Dynamic phase alignment for resynchronization of captured data |
US7461287B2 (en) | 2005-02-11 | 2008-12-02 | International Business Machines Corporation | Elastic interface de-skew mechanism |
US7342838B1 (en) | 2005-06-24 | 2008-03-11 | Lattice Semiconductor Corporation | Programmable logic device with a double data rate SDRAM interface |
US7215584B2 (en) | 2005-07-01 | 2007-05-08 | Lsi Logic Corporation | Method and/or apparatus for training DQS strobe gating |
JP2007059040A (en) | 2005-07-26 | 2007-03-08 | Nippon Densan Corp | Chucking device and brushless motor and disk drive device in which chucking device is installed |
JP4718933B2 (en) | 2005-08-24 | 2011-07-06 | 富士通株式会社 | Parallel signal skew adjustment circuit and skew adjustment method |
US7177230B1 (en) | 2005-08-25 | 2007-02-13 | Mediatek Inc. | Memory controller and memory system |
US8121237B2 (en) | 2006-03-16 | 2012-02-21 | Rambus Inc. | Signaling system with adaptive timing calibration |
US7698589B2 (en) | 2006-03-21 | 2010-04-13 | Mediatek Inc. | Memory controller and device with data strobe calibration |
US7222036B1 (en) | 2006-03-31 | 2007-05-22 | Altera Corporation | Method for providing PVT compensation |
US7647467B1 (en) | 2006-05-25 | 2010-01-12 | Nvidia Corporation | Tuning DRAM I/O parameters on the fly |
US7685393B2 (en) | 2006-06-30 | 2010-03-23 | Mosaid Technologies Incorporated | Synchronous memory read data capture |
US7543171B2 (en) | 2006-07-10 | 2009-06-02 | Alcatel Lucent | Method and system for dynamic temperature compensation for a source-synchronous interface |
JP5013768B2 (en) | 2006-08-03 | 2012-08-29 | ルネサスエレクトロニクス株式会社 | Interface circuit |
TWI302320B (en) * | 2006-09-07 | 2008-10-21 | Nanya Technology Corp | Phase detection method, memory control method, and related device |
US7405984B2 (en) | 2006-09-19 | 2008-07-29 | Lsi Corporation | System and method for providing programmable delay read data strobe gating with voltage and temperature compensation |
US7818528B2 (en) | 2006-09-19 | 2010-10-19 | Lsi Corporation | System and method for asynchronous clock regeneration |
US7571396B2 (en) | 2006-09-19 | 2009-08-04 | Lsi Logic Corporation | System and method for providing swap path voltage and temperature compensation |
US7739539B2 (en) | 2006-10-13 | 2010-06-15 | Atmel Corporation | Read-data stage circuitry for DDR-SDRAM memory controller |
JP2008103013A (en) | 2006-10-18 | 2008-05-01 | Nec Electronics Corp | Memory read control circuit and its control method |
US7849345B1 (en) | 2006-10-26 | 2010-12-07 | Marvell International Ltd. | DDR control |
US7593273B2 (en) | 2006-11-06 | 2009-09-22 | Altera Corporation | Read-leveling implementations for DDR3 applications on an FPGA |
JP2008117195A (en) | 2006-11-06 | 2008-05-22 | Hitachi Ltd | Semiconductor storage device |
US7590008B1 (en) | 2006-11-06 | 2009-09-15 | Altera Corporation | PVT compensated auto-calibration scheme for DDR3 |
US7716510B2 (en) * | 2006-12-19 | 2010-05-11 | Micron Technology, Inc. | Timing synchronization circuit with loop counter |
US7454303B2 (en) | 2006-12-21 | 2008-11-18 | Lsi Logic Corporation | System and method for compensating for PVT variation effects on the delay line of a clock signal |
US8775701B1 (en) | 2007-02-28 | 2014-07-08 | Altera Corporation | Method and apparatus for source-synchronous capture using a first-in-first-out unit |
US20080276133A1 (en) | 2007-05-02 | 2008-11-06 | Andrew Hadley | Software-Controlled Dynamic DDR Calibration |
KR20090026939A (en) * | 2007-09-11 | 2009-03-16 | 삼성전자주식회사 | Apparatus and method for controlling data strobe signal |
US7886176B1 (en) * | 2007-09-24 | 2011-02-08 | Integrated Device Technology, Inc. | DDR memory system for measuring a clock signal by identifying a delay value corresponding to a changed logic state during clock signal transitions |
US7558151B1 (en) | 2007-09-25 | 2009-07-07 | Integrated Device Technology, Inc. | Methods and circuits for DDR-2 memory device read data resynchronization |
US20090168563A1 (en) | 2007-12-31 | 2009-07-02 | Yueming Jiang | Apparatus, system, and method for bitwise deskewing |
US7924637B2 (en) | 2008-03-31 | 2011-04-12 | Advanced Micro Devices, Inc. | Method for training dynamic random access memory (DRAM) controller timing delays |
US8661285B2 (en) | 2008-06-06 | 2014-02-25 | Uniquify, Incorporated | Dynamically calibrated DDR memory controller |
US7975164B2 (en) | 2008-06-06 | 2011-07-05 | Uniquify, Incorporated | DDR memory controller |
US9431091B2 (en) | 2008-06-06 | 2016-08-30 | Uniquify, Inc. | Multiple gating modes and half-frequency dynamic calibration for DDR memory controllers |
US8139430B2 (en) | 2008-07-01 | 2012-03-20 | International Business Machines Corporation | Power-on initialization and test for a cascade interconnect memory system |
US8237475B1 (en) | 2008-10-08 | 2012-08-07 | Altera Corporation | Techniques for generating PVT compensated phase offset to improve accuracy of a locked loop |
JP2012515376A (en) | 2009-01-12 | 2012-07-05 | ラムバス・インコーポレーテッド | Clock transfer low power signaling system |
US8742791B1 (en) | 2009-01-31 | 2014-06-03 | Xilinx, Inc. | Method and apparatus for preamble detection for a control signal |
TWI410982B (en) | 2009-03-18 | 2013-10-01 | Mstar Semiconductor Inc | Method and circuit of calibrating data strobe signal in memory controller |
US8347020B2 (en) | 2009-03-20 | 2013-01-01 | Qualcomm Incorporated | Memory access controller, systems, and methods for optimizing memory access times |
US8269538B2 (en) | 2009-04-27 | 2012-09-18 | Mosys, Inc. | Signal alignment system |
US7957218B2 (en) | 2009-06-11 | 2011-06-07 | Freescale Semiconductor, Inc. | Memory controller with skew control and method |
US7791375B1 (en) | 2009-07-10 | 2010-09-07 | Altera Corporation | DQS re sync calibration |
JP2011059762A (en) | 2009-09-07 | 2011-03-24 | Ricoh Co Ltd | System and method for controlling memory |
US8284621B2 (en) | 2010-02-15 | 2012-10-09 | International Business Machines Corporation | Strobe offset in bidirectional memory strobe configurations |
US8159888B2 (en) | 2010-03-01 | 2012-04-17 | Qualcomm Incorporated | Recalibration systems and techniques for electronic memory applications |
US8300464B2 (en) | 2010-04-13 | 2012-10-30 | Freescale Semiconductor, Inc. | Method and circuit for calibrating data capture in a memory controller |
US8446195B2 (en) | 2010-06-04 | 2013-05-21 | Xilinx, Inc. | Strobe signal management to clock data into a system |
US8385144B2 (en) | 2011-02-25 | 2013-02-26 | Lsi Corporation | Utilizing two algorithms to determine a delay value for training DDR3 memory |
US8565033B1 (en) | 2011-05-31 | 2013-10-22 | Altera Corporation | Methods for calibrating memory interface circuitry |
US8588014B1 (en) | 2011-05-31 | 2013-11-19 | Altera Corporation | Methods for memory interface calibration |
US8897084B2 (en) | 2011-09-08 | 2014-11-25 | Apple Inc. | Dynamic data strobe detection |
US9100027B2 (en) * | 2013-03-12 | 2015-08-04 | Uniquify, Inc. | Data interface circuit for capturing received data bits including continuous calibration |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10229729B2 (en) | 2008-06-06 | 2019-03-12 | Uniquify IP Company, LLC | Method for calibrating capturing read data in a read data path for a DDR memory interface circuit |
WO2022213058A1 (en) * | 2021-04-01 | 2022-10-06 | Micron Technology, Inc. | Dynamic random access memory speed bin compatibility |
US11823767B2 (en) | 2021-04-01 | 2023-11-21 | Micron Technology, Inc. | Dynamic random access memory speed bin compatibility |
Also Published As
Publication number | Publication date |
---|---|
US20200321044A1 (en) | 2020-10-08 |
US11710516B2 (en) | 2023-07-25 |
US20160365135A1 (en) | 2016-12-15 |
US10229729B2 (en) | 2019-03-12 |
US10242730B2 (en) | 2019-03-26 |
US12014767B2 (en) | 2024-06-18 |
US10734061B2 (en) | 2020-08-04 |
US10586585B2 (en) | 2020-03-10 |
US20220254403A1 (en) | 2022-08-11 |
US11348632B2 (en) | 2022-05-31 |
US20200020381A1 (en) | 2020-01-16 |
US20240290372A1 (en) | 2024-08-29 |
US20240112721A1 (en) | 2024-04-04 |
US20180033477A1 (en) | 2018-02-01 |
US9805784B2 (en) | 2017-10-31 |
US9431091B2 (en) | 2016-08-30 |
US20190206479A1 (en) | 2019-07-04 |
US20180336942A1 (en) | 2018-11-22 |
US10269408B2 (en) | 2019-04-23 |
US20180277195A1 (en) | 2018-09-27 |
US20160035409A1 (en) | 2016-02-04 |
US10032502B1 (en) | 2018-07-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12014767B2 (en) | Double data rate (DDR) memory controller apparatus and method | |
US9552853B2 (en) | Methods for calibrating a read data path for a memory interface | |
US7975164B2 (en) | DDR memory controller | |
KR101374417B1 (en) | Synchronous memory read data capture | |
US7443741B2 (en) | DQS strobe centering (data eye training) method | |
US20190164583A1 (en) | Signal training for prevention of metastability due to clocking indeterminacy | |
US7808849B2 (en) | Read leveling of memory units designed to receive access requests in a sequential chained topology | |
US20110051531A1 (en) | Data output control circuit of a double data rate (ddr) synchronous semiconductor memory device responsive to a delay locked loop (dll) clock | |
KR20090045672A (en) | Digial delay locked circuit | |
Plessas et al. | Advanced calibration techniques for high-speed source–synchronous interfaces | |
Koutsomitsos et al. | Advanced calibration techniques for high-speed source–synchronous interfaces |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: UNIQUIFY, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GOPALAN, MAHESH;WU, DAVID;IYER, VENKAT;REEL/FRAME:045894/0425 Effective date: 20151130 |
|
AS | Assignment |
Owner name: UNIQUIFY IP COMPANY, LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:UNIQUIFY, INC.;REEL/FRAME:045966/0672 Effective date: 20170412 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: UNIQUIFY, INC., CALIFORNIA Free format text: CERTIFICATE OF MERGER;ASSIGNOR:UNIQUIFY IP COMPANY, LLC;REEL/FRAME:057326/0887 Effective date: 20210630 Owner name: UNIQUIFY, INC., CALIFORNIA Free format text: MERGER;ASSIGNOR:UNIQUIFY IP COMPANY, LLC;REEL/FRAME:056725/0939 Effective date: 20210630 Owner name: NEWLIGHT CAPITAL LLC, AS SERVICER, CALIFORNIA Free format text: SHORT FORM INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:UNIQUIFY, INC.;REEL/FRAME:056728/0317 Effective date: 20210630 Owner name: NEWLIGHT CAPITAL LLC, AS SERVICER, CALIFORNIA Free format text: SHORT FORM INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:UNIQUIFY, INC.;REEL/FRAME:056729/0519 Effective date: 20210630 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 4 |
|
AS | Assignment |
Owner name: GALLAGHER IP SOLUTIONS LLC, NEW JERSEY Free format text: SECURITY INTEREST;ASSIGNOR:NEWLIGHT CAPITAL, LLC;REEL/FRAME:068201/0194 Effective date: 20240731 |