US20220222191A1 - Flash-DRAM hybrid memory module - Google Patents
- Publication number
- US20220222191A1 (application US 17/582,797)
- Authority
- US
- United States
- Prior art keywords
- memory
- volatile memory
- data
- dram
- controller
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/20—Handling requests for interconnection or transfer for access to input/output bus
- G06F13/28—Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/18—Packaging or power distribution
- G06F1/183—Internal mounting support structures, e.g. for printed circuit boards, internal connecting means
- G06F1/185—Mounting of expansion boards
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
- G06F12/0246—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/06—Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
- G06F12/0638—Combination of memories, e.g. ROM and RAM such as to permit replacement or supplementing of words in one module by words in another module
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1668—Details of memory controller
- G06F13/1694—Configuration of memory controller to different memory types
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
- G06F13/4004—Coupling between buses
- G06F13/4027—Coupling between buses using bus bridges
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/42—Bus transfer protocol, e.g. handshake; Synchronisation
- G06F13/4204—Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
- G06F13/4234—Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being a memory bus
- G06F13/4243—Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being a memory bus with synchronous protocol
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0613—Improving I/O performance in relation to throughput
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0685—Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/005—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor comprising combined but independently operative RAM-ROM, RAM-PROM, RAM-EPROM cells
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C14/00—Digital stores characterised by arrangements of cells having volatile and non-volatile storage properties for back-up when the power is down
- G11C14/0009—Digital stores characterised by arrangements of cells having volatile and non-volatile storage properties for back-up when the power is down in which the volatile element is a DRAM cell
- G11C14/0018—Digital stores characterised by arrangements of cells having volatile and non-volatile storage properties for back-up when the power is down in which the volatile element is a DRAM cell whereby the nonvolatile element is an EEPROM element, e.g. a floating gate or metal-nitride-oxide-silicon [MNOS] transistor
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/10—Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers
- G11C7/1072—Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers for memories with random access ports synchronised on clock signal pulse trains, e.g. synchronous memories, self timed memories
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/20—Employing a main memory using a specific memory technology
- G06F2212/205—Hybrid memory, e.g. using both volatile and non-volatile memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7208—Multiple device management, e.g. distributing data over multiple flash devices
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/21—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
- G11C11/34—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
- G11C11/40—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
- G11C11/401—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming cells needing refreshing or charge regeneration, i.e. dynamic cells
- G11C11/4063—Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing
- G11C11/407—Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing for memory cells of the field-effect type
- G11C11/409—Read-write [R-W] circuits
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C16/00—Erasable programmable read-only memories
- G11C16/02—Erasable programmable read-only memories electrically programmable
- G11C16/06—Auxiliary circuits, e.g. for writing into memory
- G11C16/10—Programming or data input circuits
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- The present disclosure relates generally to computer memory devices, and more particularly to devices that employ different types of memory, such as combinations of Flash and random-access memories.
- Data centers are built by clustering multiple servers that are networked to increase performance.
- Server technology has evolved to be specific to particular applications such as finance transactions (for example, point-of-service, inter-bank transactions, stock market transactions), scientific computation (for example, fluid dynamics for automobile and ship design, weather prediction, oil and gas expeditions), medical diagnostics (for example, diagnostics based on fuzzy logic, medical data processing), simple information sharing and searching (for example, web search, retail store websites, company home pages), email (information distribution and archiving), security services, entertainment (for example, video-on-demand), and so on.
- The data transfer limitations of the CPU are exemplified by the arrangement shown in FIG. 1, and apply to data transfers between main storage (for example, a hard disk (HD) or solid state drive (SSD)) and the memory subsystems (for example, DRAM DIMMs (Dynamic Random Access Memory Dual In-line Memory Modules) connected to the front side bus (FSB)).
- FIG. 1 specifically shows, through the double-headed arrow, the data flow path between the computer or server main storage (SSD/HD) and the DRAM DIMMs.
- Since the SSD/HD data I/O and the DRAM DIMM data I/O are controlled by the CPU, the CPU needs to allocate its process cycles to control these I/Os, which may include the IRQ (Interrupt Request) service that the CPU performs periodically. As will be appreciated, the more time a CPU allocates to controlling data transfer traffic, the less time it has to perform other tasks. Therefore, the overall performance of a server deteriorates as the amount of time the CPU must spend performing data transfers increases.
- EcoRAM™, developed by Spansion, provides an SSD-based storage system in the physical form factor of a DIMM.
- The EcoRAM™ is populated with Flash memories and a relatively small DRAM capacity that serves as a data buffer.
- This arrangement is capable of delivering a higher throughput rate than a standard SSD-based system because the EcoRAM™ is connected to the CPU (central processing unit) via a high-speed interface, such as the HT (HyperTransport) interface, while an SSD/HD is typically connected via SATA (serial AT attachment), USB (universal serial bus), or PCI Express (peripheral component interconnect express).
- The read random-access throughput rate of the EcoRAM™ is near 3 GB/s, compared with 400 MB/s for a NAND SSD memory subsystem using a standard PCI Express interface, a 7.5× performance improvement.
- The performance improvement for the write random-access throughput rate, however, is less than 2× (197 MB/s for the EcoRAM™).
- FIG. 2 is an example of the EcoRAM™ using an SSD with the form factor of a standard DIMM such that it can be connected to the FSB (front side bus).
- An interface device, the EcoRAM Accelerator™, occupies one of the server's CPU sockets; this further reduces the server's performance by reducing the number of available CPU sockets and, in turn, the overall computation efficiency.
- The server's performance suffers further due to the limited utilization of the CPU bus caused by the large difference in data transfer throughput between read and write operations.
- The EcoRAM™ architecture enables the CPU to view the Flash DIMM controller chip as another processor with a large memory available for CPU access.
- The access speed of a Flash-based system is limited by four items: the read/write speed of the Flash memory, the CPU's FSB speed and efficiency, the Flash DIMM controller's inherent latency, and the HT interconnect speed and efficiency, which is dependent on the HT interface controllers in the CPU and in the Flash DIMM controller chip.
- Volatile memory generally maintains stored information only when it is powered. Batteries have been used to provide power to volatile memory during power failures or interruptions. However, batteries may require maintenance, may need to be replaced, are not environmentally friendly, and the status of batteries can be difficult to monitor.
- Non-volatile memory can generally maintain stored information while power is not applied to the non-volatile memory. In certain circumstances, it can therefore be useful to backup volatile memory using non-volatile memory.
- Described herein is a memory module couplable to a memory controller of a host system. The memory module includes a non-volatile memory subsystem, a data manager coupled to the non-volatile memory subsystem, a volatile memory subsystem coupled to the data manager and operable to exchange data with the non-volatile memory subsystem by way of the data manager, and a controller operable to receive commands from the memory controller and to direct (i) operation of the non-volatile memory subsystem, (ii) operation of the volatile memory subsystem, and (iii) transfer of data between any two or more of the memory controller, the volatile memory subsystem, and the non-volatile memory subsystem, based on at least one command received from the memory controller.
- Also described herein is a method for managing a memory module by a memory controller, the memory module including volatile and non-volatile memory subsystems.
- The method includes receiving control information from the memory controller, wherein the control information is received using a protocol of the volatile memory subsystem.
- The method further includes identifying a data path to be used for transferring data to or from the memory module using the received control information, and using a data manager and a controller of the memory module to transfer data between any two or more of the memory controller, the volatile memory subsystem, and the non-volatile memory subsystem based on at least one of the received control information and the identified data path. A sketch of this path selection follows below.
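To make the path-identification step concrete, the following is a minimal C sketch of a module-side controller decoding received control information and selecting one of the three data paths named above. The command encoding, type names, and field layout are hypothetical illustrations, not a format defined by this disclosure.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical data paths the controller can select (illustrative only). */
typedef enum {
    PATH_MCH_DRAM,   /* host <-> volatile (DRAM) subsystem        */
    PATH_MCH_FLASH,  /* host <-> non-volatile subsystem, buffered */
    PATH_DRAM_FLASH  /* on-module DRAM <-> Flash transfer         */
} path_t;

/* Control information carried by a received command; the field layout
 * is an assumption for illustration, not the disclosure's format.    */
typedef struct {
    uint8_t target;   /* 0 = DRAM space, 1 = Flash space */
    uint8_t onmodule; /* 1 = DRAM<->Flash, no host data  */
} ctrl_info_t;

static path_t decode_path(ctrl_info_t c)
{
    if (c.onmodule) return PATH_DRAM_FLASH;
    return c.target ? PATH_MCH_FLASH : PATH_MCH_DRAM;
}

int main(void)
{
    ctrl_info_t cmd = { .target = 1, .onmodule = 0 };
    printf("selected path: %d\n", decode_path(cmd)); /* PATH_MCH_FLASH */
    return 0;
}
```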
- Also described herein is a memory module wherein the data manager is operable to control one or more of data flow rate, data transfer size, data buffer size, data error monitoring, and data error correction in response to receiving at least one of a control signal and control information from the controller.
- Also described herein is a memory module wherein the data manager controls data traffic between any two or more of the memory controller, the volatile memory subsystem, and the non-volatile memory subsystem based on instructions received from the controller.
- Data traffic control relates to any one or more of data flow rate, data transfer size, data buffer size, data transfer bit width, formatting information, direction of data flow, and the starting time of data transfer.
- Also described herein is a memory module wherein the controller configures at least one of a first memory address space of the volatile memory subsystem and a second memory address space of the non-volatile memory subsystem in response to at least one of a received command from the memory controller and memory address space initialization information of the memory module.
- Also described herein is a memory module wherein the data manager is configured as a bi-directional data transfer fabric having two or more sets of data ports coupled to any one of the volatile and non-volatile memory subsystems.
- Also described herein is a memory module wherein at least one of the volatile and non-volatile memory subsystems comprises one or more memory segments.
- Also described herein is a memory module wherein each memory segment comprises at least one memory circuit, memory device, or memory die.
- Also described herein is a memory module wherein the volatile memory subsystem comprises DRAM memory and the non-volatile memory subsystem comprises Flash memory.
- Also described herein is a memory module wherein at least one set of data ports is operated by the data manager to independently and/or concurrently transfer data to or from one or more memory segments of the volatile or non-volatile memory subsystems.
- Also described herein is a memory module wherein the data manager and controller are configured to effect data transfer between the memory controller and the non-volatile memory subsystem in response to memory access commands received by the controller from the memory controller.
- Also described herein is a memory module wherein the volatile memory subsystem is operable as a buffer for the data transfer between the memory controller and the non-volatile memory.
- Also described herein is a memory module wherein the data manager further includes a data format module configured to format data to be transferred between any two or more of the memory controller, the volatile memory subsystem, and the non-volatile memory subsystem based on control information received from the controller.
- Also described herein is a memory module wherein the data manager further includes a data buffer for buffering data delivered to or from the non-volatile memory subsystem.
- Also described herein is a memory module wherein the controller is operable to perform one or more of memory address translation, memory address mapping, address domain conversion, memory access control, data error correction, and data width modulation between the volatile and non-volatile memory subsystems.
- Also described herein is a memory module wherein the controller is configured to effect operation with the host system in accordance with a prescribed protocol.
- Also described herein is a memory module wherein the prescribed protocol is selected from one or more of the DDR, DDR2, DDR3, and DDR4 protocols.
- Also described herein is a memory module wherein the controller is operable to configure memory space in the memory module based on at least one of a command received from the memory controller, a programmable value written into a register, a value corresponding to a first portion of the volatile memory subsystem, a value corresponding to a first portion of the non-volatile memory subsystem, and a timing value.
- Also described herein is a memory module wherein the controller configures the memory space of the memory module using at least a first portion of the volatile memory subsystem and a first portion of the non-volatile memory subsystem, and presents a unified memory space to the memory controller, as sketched below.
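One simple way to realize such a unified space is a linear split: host addresses below the size of the volatile portion resolve to DRAM, and the remainder resolve to Flash. The C sketch below illustrates this idea only; the sizes and the split policy are assumptions, and an actual controller could use any mapping.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative sizes: a DRAM portion fronting a larger Flash portion.
 * A real module would take these from a command or on-module register. */
#define DRAM_PORTION  (1ULL << 30)  /* 1 GiB of volatile space     */
#define FLASH_PORTION (8ULL << 30)  /* 8 GiB of non-volatile space */

typedef struct { bool in_dram; uint64_t offset; } mapped_t;

/* Present one unified space [0, DRAM_PORTION + FLASH_PORTION) to the
 * host: low addresses resolve to DRAM, the remainder to Flash.       */
static mapped_t map_unified(uint64_t host_addr)
{
    mapped_t m;
    if (host_addr < DRAM_PORTION) {
        m.in_dram = true;  m.offset = host_addr;
    } else {
        m.in_dram = false; m.offset = host_addr - DRAM_PORTION;
    }
    return m;
}

int main(void)
{
    mapped_t m = map_unified(3ULL << 30);  /* lands in the Flash portion */
    printf("%s offset 0x%llx\n", m.in_dram ? "DRAM" : "Flash",
           (unsigned long long)m.offset);
    return 0;
}
```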
- Also described herein is a memory module wherein the controller configures the memory space in the memory module using partitioning instructions that are application-specific.
- Also described herein is a memory module wherein the controller is operable to copy booting information from the non-volatile to the volatile memory subsystem during power up.
- Also described herein is a memory module wherein the controller includes a volatile memory control module, a non-volatile memory control module, a data manager control module, a command interpreter module, and a scheduler module.
- Also described herein is a memory module wherein commands from the volatile memory control module to the volatile memory subsystem are subordinated to commands from the memory controller to the controller.
- Also described herein is a memory module wherein the controller effects pre-fetching of data from the non-volatile to the volatile memory.
- Also described herein is a memory module wherein the pre-fetching is initiated by the memory controller writing an address of requested data into a register of the controller.
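As a concrete illustration of this register-initiated pre-fetch, the C sketch below models the host writing a Flash address into a controller register, which triggers a page copy into DRAM so that later reads complete at DRAM speed. The register interface, page size, and memory arrays are hypothetical stand-ins, not the disclosure's interface.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 2048u  /* example pre-fetch granularity (a 2 KB page) */

static uint8_t flash_mem[16 * PAGE_SIZE]; /* stand-ins for the subsystems */
static uint8_t dram_mem[4 * PAGE_SIZE];

/* Hypothetical controller register: the host initiates a pre-fetch by
 * writing the Flash address of the requested data here. The controller
 * reacts by copying one page from Flash into DRAM.                     */
static void write_prefetch_register(uint32_t flash_addr, uint32_t dram_slot)
{
    memcpy(&dram_mem[dram_slot * PAGE_SIZE],
           &flash_mem[flash_addr & ~(PAGE_SIZE - 1)], PAGE_SIZE);
}

int main(void)
{
    flash_mem[5 * PAGE_SIZE] = 0xAB;        /* data living in Flash */
    write_prefetch_register(5 * PAGE_SIZE, 0);
    printf("pre-fetched byte: 0x%02X\n", dram_mem[0]); /* now in DRAM */
    return 0;
}
```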
- Also described herein is a memory module wherein the controller is operable to initiate a copy operation of data of a closed block in the volatile memory subsystem to a target block in the non-volatile memory subsystem.
- Also described herein is a memory module wherein, if the closed block is re-opened, the controller is operable to abort the copy operation and to erase the target block from the non-volatile memory subsystem.
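The closed-block write-back and its abort path can be summarized as a small state machine: closing a DRAM block starts a copy toward a Flash target block, and re-opening the block before the copy commits aborts the copy and erases the target. The C sketch below is an illustrative model under those assumptions; the state names and erase hook are invented.

```c
#include <stdio.h>

typedef enum { BLK_OPEN, BLK_COPYING, BLK_COMMITTED /* copy finished */ } blk_state_t;
typedef struct { blk_state_t state; int flash_target; } block_t;

/* Closing a DRAM block starts its copy toward a Flash target block. */
static void on_block_closed(block_t *b, int flash_target)
{
    b->state = BLK_COPYING;
    b->flash_target = flash_target;
}

/* Re-opening the block before the copy commits aborts the copy and
 * erases the now-stale Flash target block.                          */
static void on_block_reopened(block_t *b)
{
    if (b->state == BLK_COPYING) {
        printf("abort copy; erase Flash block %d\n", b->flash_target);
        b->flash_target = -1;
    }
    b->state = BLK_OPEN;
}

int main(void)
{
    block_t b = { BLK_OPEN, -1 };
    on_block_closed(&b, 42);   /* block closed: write-back begins   */
    on_block_reopened(&b);     /* re-opened: abort and erase target */
    return 0;
}
```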
- Also described herein is a method for managing a memory module wherein the transfer of data includes a bidirectional transfer of data between the non-volatile and the volatile memory subsystems.
- Also described herein is a method for managing a memory module further comprising operating the data manager to control one or more of data flow rate, data transfer size, data width size, data buffer size, data error monitoring, data error correction, and the starting time of the transfer of data.
- Also described herein is a method for managing a memory module further comprising operating the data manager to control data traffic between the memory controller and at least one of the volatile and non-volatile memory subsystems.
- Also described herein is a method for managing a memory module wherein data traffic control relates to any one or more of data transfer size, formatting information, direction of data flow, and the starting time of the transfer of data.
- Also described herein is a method for managing a memory module wherein data traffic control by the data manager is based on instructions received from the controller.
- Also described herein is a method for managing a memory module further comprising operating the data manager as a bi-directional data transfer fabric with two or more sets of data ports coupled to any one of the volatile and non-volatile memory subsystems.
- Also described herein is a method for managing a memory module wherein at least one of the volatile and non-volatile memory subsystems comprises one or more memory segments.
- Also described herein is a method for managing a memory module wherein each memory segment comprises at least one memory circuit, memory device, or memory die.
- Also described herein is a method for managing a memory module wherein the volatile memory subsystem comprises DRAM memory and the non-volatile memory subsystem comprises Flash memory.
- Also described herein is a method for managing a memory module further comprising operating the data ports to independently and/or concurrently transfer data to or from one or more memory segments of the volatile or non-volatile memory subsystems.
- Also described herein is a method for managing a memory module further comprising directing transfer of data bi-directionally between the volatile and non-volatile memory subsystems using the data manager and in response to memory access commands received by the controller from the memory controller.
- Also described herein is a method for managing a memory module further comprising buffering the data transferred between the memory controller and non-volatile memory subsystem using the volatile memory subsystem.
- Also described herein is a method for managing a memory module further comprising using the controller to perform one or more of memory address translation, memory address mapping, address domain conversion, memory access control, data error correction, and data width modulation between the volatile and non-volatile memory subsystems.
- Also described herein is a method for managing a memory module further comprising using the controller to effect communication with a host system by the volatile memory subsystem in accordance with a prescribed protocol.
- Also described herein is a method for managing a memory module wherein the prescribed protocol is selected from one or more of DDR, DDR2, DDR3, and DDR4 protocols.
- Also described herein is a method for managing a memory module further comprising using the controller to configure memory space in the memory module based on at least one of a command received from the memory controller, a programmable value written into a register, a value corresponding to a first portion of the volatile memory subsystem, a value corresponding to a first portion of the non-volatile memory subsystem, and a timing value.
- Also described herein is a method for managing a memory module wherein the controller configures the memory space of the memory module using at least a first portion of the volatile memory subsystem and a first portion of the non-volatile memory subsystem, and the controller presents a unified memory space to the memory controller.
- Also described herein is a method for managing a memory module wherein the controller configures the memory space in the memory module using partitioning instructions that are application-specific.
- Also described herein is a method for managing a memory module further comprising using the controller to copy booting information from the non-volatile to the volatile memory subsystem during power up.
- Also described herein is a method for managing a memory module wherein the controller includes a volatile memory control module, the method further comprising generating commands by the volatile memory control module in response to commands from the memory controller, and transmitting the generated commands to the volatile memory subsystem.
- Also described herein is a method for managing a memory module further comprising pre-fetching of data from the non-volatile memory subsystem to the volatile memory subsystem.
- Also described herein is a method for managing a memory module wherein the pre-fetching is initiated by the memory controller writing an address of requested data into a register of the controller.
- Also described herein is a method for managing a memory module further comprising initiating a copy operation of data of a closed block in the volatile memory subsystem to a target block in the non-volatile memory subsystem.
- Also described herein is a method for managing a memory module further comprising aborting the copy operation when the closed block of the volatile memory subsystem is re-opened, and erasing the target block in the non-volatile memory subsystem.
- Also described herein is a memory system having a volatile memory subsystem, a non-volatile memory subsystem, a controller coupled to the non-volatile memory subsystem, and a circuit coupled to the volatile memory subsystem, to the controller, and to a host system.
- The circuit is operable to selectively isolate the controller from the volatile memory subsystem, and to selectively couple the volatile memory subsystem to the host system to allow data to be communicated between the volatile memory subsystem and the host system.
- The circuit is also operable to selectively couple the controller to the volatile memory subsystem to allow data to be communicated between the volatile memory subsystem and the non-volatile memory subsystem using the controller, and to selectively isolate the volatile memory subsystem from the host system.
- Also described herein is a method that includes coupling a circuit to a host system, a volatile memory subsystem, and a controller, wherein the controller is coupled to a non-volatile memory subsystem.
- In a first mode of operation that allows data to be communicated between the volatile memory subsystem and the host system, the circuit is used to (i) selectively isolate the controller from the volatile memory subsystem, and (ii) selectively couple the volatile memory subsystem to the host system.
- In a second mode of operation that allows data to be communicated between the volatile memory subsystem and the non-volatile memory subsystem, the circuit is used to (i) selectively couple the controller to the volatile memory subsystem, and (ii) selectively isolate the volatile memory subsystem from the host system.
- Also described herein is a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more computing devices.
- The programs, when executing on the one or more computing devices, cause a circuit, which is coupled to a host system, to a volatile memory subsystem, and to a controller that is coupled to a non-volatile memory subsystem, to perform a method in which, in a first mode of operation that allows data to be communicated between the volatile memory subsystem and the host system, the circuit is operated to (i) selectively isolate the controller from the volatile memory subsystem, and (ii) selectively couple the volatile memory subsystem to the host system.
- In a second mode of operation that allows data to be communicated between the volatile memory subsystem and the non-volatile memory subsystem via the controller, the circuit is operated to (i) selectively couple the controller to the volatile memory subsystem, and (ii) selectively isolate the volatile memory subsystem from the host system. The two switch settings are sketched below.
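The two modes reduce to complementary settings of the isolation circuit's two couplings, as in this minimal C sketch. The signal and type names are illustrative assumptions, not the disclosure's terminology.

```c
#include <stdbool.h>
#include <stdio.h>

/* The isolation circuit's two switch pairs, per the two modes above. */
typedef struct {
    bool host_to_dram;  /* volatile subsystem coupled to host          */
    bool ctrl_to_dram;  /* backup controller coupled to the subsystem  */
} isolation_t;

typedef enum { MODE_HOST_ACCESS, MODE_BACKUP_RESTORE } op_mode_t;

static isolation_t set_mode(op_mode_t m)
{
    isolation_t c;
    if (m == MODE_HOST_ACCESS) {          /* first mode  */
        c.host_to_dram = true;  c.ctrl_to_dram = false;
    } else {                              /* second mode */
        c.host_to_dram = false; c.ctrl_to_dram = true;
    }
    return c;
}

int main(void)
{
    isolation_t c = set_mode(MODE_BACKUP_RESTORE);
    printf("host=%d controller=%d\n", c.host_to_dram, c.ctrl_to_dram);
    return 0;
}
```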
- FIG. 1 is a block diagram illustrating the path of data transfer, via a CPU, of a conventional memory arrangement
- FIG. 2 is a block diagram of a known EcoRAMTM architecture
- FIGS. 3A and 3B are block diagrams of a non-volatile memory DIMM or NVDIMM;
- FIGS. 4A and 4B are block diagrams of a Flash-DRAM hybrid DIMM or FDHDIMM
- FIG. 5A is a block diagram of a memory module 500 in accordance with certain embodiments described herein;
- FIG. 5B is a block diagram showing some functionality of a memory module such as that shown in FIG. 5A ;
- FIG. 6 is a block diagram showing some details of the data manager (DMgr);
- FIG. 7 is a functional block diagram of the on-module controller (CDC).
- FIG. 8A is a block diagram showing more details of the prior art Flash-DRAM hybrid DIMM (FDHDIMM) of FIGS. 4A and 4B ;
- FIG. 8B is a block diagram of a Flash-DRAM hybrid DIMM (FDHDIMM) in accordance with certain embodiments disclosed herein;
- FIG. 9 is a flow diagram directed to the transfer of data from Flash memory to DRAM memory and vice versa in an exemplary FDHDIMM;
- FIG. 10 is a block diagram showing an example of mapping of DRAM address space to Flash memory address space.
- FIG. 11 is a table showing estimates of the maximum allowed closed blocks in a queue to be written back to Flash memory for different DRAM densities using various average block use time.
- FIG. 12 is a block diagram of an example memory system compatible with certain embodiments described herein.
- FIG. 13 is a block diagram of an example memory module with ECC (error-correcting code) having a volatile memory subsystem with nine volatile memory elements and a non-volatile memory subsystem with five non-volatile memory elements in accordance with certain embodiments described herein.
- FIG. 14 is a block diagram of an example memory module having a microcontroller unit and logic element integrated into a single device in accordance with certain embodiments described herein.
- FIGS. 15A-15C schematically illustrate example embodiments of memory systems having volatile memory subsystems comprising registered dual in-line memory modules in accordance with certain embodiments described herein.
- FIG. 16 schematically illustrates an example power module of a memory system in accordance with certain embodiments described herein.
- FIG. 17 is a flowchart of an example method of providing a first voltage and a second voltage to a memory system including volatile and non-volatile memory subsystems.
- FIG. 18 is a flowchart of an example method of controlling a memory system operatively coupled to a host system and which includes at least 100 percent more storage capacity in non-volatile memory than in volatile memory.
- FIG. 19 schematically illustrates an example clock distribution topology of a memory system in accordance with certain embodiments described herein.
- FIG. 20 is a flowchart of an example method of controlling a memory system operatively coupled to a host system, the method including operating a volatile memory subsystem at a reduced rate in a back-up mode.
- FIG. 21 schematically illustrates an example topology of a connection to transfer data slices from two DRAM segments of a volatile memory subsystem of a memory system to a controller of the memory system.
- FIG. 22 is a flowchart of an example method of controlling a memory system operatively coupled to a host system, the method including backing up and/or restoring a volatile memory subsystem in slices.
- Example embodiments are described herein in the context of a system of computers, servers, controllers, memory modules, hard disk drives and software. Those of ordinary skill in the art will realize that the following description is illustrative only and is not intended to be in any way limiting. Other embodiments will readily suggest themselves to such skilled persons having the benefit of this disclosure. Reference will now be made in detail to implementations of the example embodiments as illustrated in the accompanying drawings. The same reference indicators will be used to the extent possible throughout the drawings and the following description to refer to the same or like items.
- The components, process steps, and/or data structures described herein may be implemented using various types of operating systems, computing platforms, computer programs, and/or general-purpose machines.
- Devices of a less general-purpose nature, such as hardwired devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), or the like, may also be used without departing from the scope and spirit of the inventive concepts disclosed herein.
- Where a method comprising a series of process steps is implemented by a computer or a machine, and those process steps can be stored as a series of instructions readable by the machine, they may be stored on a tangible medium such as a computer memory device (e.g., ROM (Read Only Memory), PROM (Programmable Read Only Memory), EEPROM (Electrically Erasable Programmable Read Only Memory), Flash memory, jump drive, and the like), magnetic storage medium (e.g., tape, magnetic disk drive, and the like), optical storage medium (e.g., CD-ROM, DVD-ROM, paper card, paper tape, and the like), or other types of program memory.
- Described herein is a Flash-DRAM hybrid DIMM (FDHDIMM), along with methods for controlling such an arrangement.
- The actual memory density (size or capacity) of the DIMM and/or the ratio of DRAM memory to Flash memory are configurable for optimal use with a particular application (for example, point-of-service, inter-bank transactions, stock market transactions, scientific computation such as fluid dynamics for automobile and ship design, weather prediction, oil and gas expeditions, medical diagnostics such as diagnostics based on fuzzy logic, medical data processing, simple information sharing and searching such as web search, retail store websites, company home pages, email or information distribution and archiving, security services, and entertainment such as video-on-demand).
- The device contains a high-density Flash memory with a low-density DRAM, wherein the DRAM is used as a data buffer for read/write operations and the Flash serves as the main memory.
- A memory system 300 includes a non-volatile (for example, Flash) memory subsystem 302 and a volatile (for example, DRAM) memory subsystem 304.
- The examples of FIGS. 3A and 3B are directed to architectures of a non-volatile DIMM (NVDIMM) system that may use a power subsystem (not shown), which can include a battery or a capacitor for energy storage, to copy DRAM data into Flash memory when power loss occurs, is detected, or is anticipated during operation.
- The density of the Flash is about the same as the DRAM memory size, or within a few multiples of it, although in some applications it may be higher.
- This type of architecture may also be used to provide non-volatile storage that is connected to the FSB (front side bus) to support RAID (Redundant Array of Independent Disks) based systems or other type of operations.
- An NVDIMM controller 306 receives and interprets commands from the system memory controller hub (MCH) and controls the NVDIMM DRAM and Flash memory operations.
- The DRAM 304 communicates data with the MCH, while an internal bus 308 is used for data transfer between the DRAM and Flash memory subsystems.
- In FIG. 3B, the NVDIMM controller 306′ of NVDIMM 300′ monitors events or commands and enables data transfer to occur in a first mode between the DRAM 304′ and Flash 302′, or in a second mode between the DRAM and the MCH.
- A general architecture for a Flash and DRAM hybrid DIMM (FDHDIMM) system 400 is shown in FIG. 4A.
- The FDHDIMM interfaces with an MCH (memory controller hub) to operate and behave as a high-density DIMM, wherein the MCH's interface to the non-volatile memory subsystem (for example, Flash) 402 is controlled by an FDHDIMM controller 404.
- Because the MCH interfaces with the Flash via the FDHDIMM controller, the overall FDHDIMM performance is governed by the Flash access time.
- The volatile memory subsystem (for example, DRAM) 406 is primarily used as a data buffer or temporary storage location, such that data from the Flash memory 402 is transferred to the DRAM 406 at the Flash access speed, buffered or collected in the DRAM 406, and then transferred to the MCH based on the access time of the DRAM.
- The FDHDIMM controller 404 manages the data transfer from the DRAM 406 to the Flash 402. Since the Flash memory access speed (both read and write) is slower than that of DRAM (for example, a few hundred microseconds for read access), the average data throughput rate of the FDHDIMM 400 is limited by the Flash access speed.
- The DRAM 406 serves as a data buffer stage that buffers the MCH read or write data.
- The DRAM 406 also serves as temporary storage for the data to be transferred from/to the Flash 402.
- The MCH recognizes the physical density of an FDHDIMM operating as a high-density DIMM as the density of the Flash alone.
- A read operation can be performed by the MCH by sending an activate command (which may be referred to simply as RAS, or row address strobe) to the FDHDIMM 400 to conduct a pre-fetch read data operation from the Flash 402 to the DRAM 406, with the pre-fetch data size being, for example, a page (1 KB or 2 KB, or programmable to any size).
- The MCH then sends a read command (which may be referred to simply as CAS, or column address strobe) to read the data out of the DRAM.
- The data transfer from Flash to DRAM occurs at Flash access speed rates, while data transfer from DRAM to MCH occurs at DRAM access speed rates.
- Data latency and throughput rates are the same as for any DRAM operation, as long as the read operations are executed on pages that were opened with the activate command previously sent to pre-fetch data from the Flash to the DRAM.
- A longer separation time period between the RAS (i.e., the activate command) and the first CAS (i.e., the read or write command) is required to account for the time it takes to pre-fetch data from the Flash to the DRAM, as the following sketch illustrates.
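The arithmetic below contrasts a conventional activate-to-read delay with one that must also absorb a Flash page read and its transfer into DRAM. All timing figures are invented placeholders for illustration; the disclosure does not specify these values.

```c
#include <stdio.h>

/* Illustrative timing only: a DRAM-only activate-to-read delay (tRCD)
 * versus the extended spacing needed when the activate also triggers a
 * Flash-to-DRAM page pre-fetch.                                        */
int main(void)
{
    double tRCD_ns         = 13.75;    /* typical DDR3 tRCD (example)   */
    double flash_read_ns   = 75000.0;  /* example Flash page read time  */
    double xfer_to_dram_ns = 25000.0;  /* example page transfer time    */

    /* The first CAS may not issue until the pre-fetched page is in DRAM. */
    double ras_to_cas_ns = tRCD_ns + flash_read_ns + xfer_to_dram_ns;

    printf("plain DRAM RAS->CAS: %.2f ns\n", tRCD_ns);
    printf("Flash-backed RAS->CAS: %.2f ns (~%.0fx longer)\n",
           ras_to_cas_ns, ras_to_cas_ns / tRCD_ns);
    return 0;
}
```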
- An example of an FDHDIMM operating as a DDR DIMM with SSD is shown in FIG. 4B, wherein the FDHDIMM 400′ supports two different interface interpretations to the MCH.
- In a first interpretation, the MCH views the FDHDIMM 400′ as a combination of DRAM DIMM and SSD (not illustrated).
- Here, the MCH needs to manage two address spaces, one for the DRAMs 402′ and one for the Flash 404′.
- The MCH is coupled to, and controls, both the DRAM and Flash memory subsystems.
- One advantage of this mode is that the CPU does not need to be in the data path when data is moved from DRAM to Flash or from Flash to DRAM.
- In a second interpretation, the MCH views the FDHDIMM 400′ as an on-DIMM Flash with the SSD in an extended memory space that sits behind the DRAM space.
- The MCH physically fetches data from the SSD to the DDR DRAM, and the DRAM then sends the data to the MCH. Since all data movement occurs on the FDHDIMM, this mode provides better performance than if the data were moved through or via the CPU.
- The FDHDIMM 400′ receives control signals 408 from the MCH, where the control signals may include one or more control signals specifically for the DRAM 402′ operation and one or more control signals specifically for the Flash 404′ operation.
- The MCH or CPU is coupled to the FDHDIMM via a single data bus interface 410, which couples the MCH to the DRAM.
- FIGS. 5A and 5B are block diagrams of a memory module 500 that is couplable to a host system (not shown).
- The host system may be a server or any other system comprising a memory system controller or an MCH for providing and controlling read/write access to one or more memory systems, wherein each memory system may include a plurality of memory subsystems, a plurality of memory devices, or at least one memory module.
- Read/write access means the ability of the MCH to interface with a memory system or subsystem in order to write data to it or read data from it, depending on the particular requirement at a particular time.
- In certain embodiments, memory module 500 is a Flash-DRAM hybrid memory subsystem which may be integrated with other components of a host system.
- In certain embodiments, memory module 500 is a Flash-DRAM hybrid memory module that has the DIMM (dual in-line memory module) form factor and may be referred to as an FDHDIMM, although it is to be understood that in both structure and operation it may differ from the FDHDIMM discussed above and described with reference to FIGS. 4A and 4B.
- Memory module 500 includes two on-module intermediary components: a controller and a data manager.
- These on-module intermediary components may be physically separate components, circuits, or modules, or they may be integrated onto a single integrated circuit or device, or integrated with other memory devices, for example in a three-dimensional stack, or in any one of several other possible expedients for integration known to those skilled in the art to achieve a specific design, application, or economic goal.
- In the illustrated embodiment, these on-module intermediary components are an on-DIMM controller (CDC) 502 and an on-DIMM data manager (DMgr) 504. While the DIMM form factor predominates the discussion herein, it should be understood that this is for illustrative purposes only, and memory systems using other form factors are contemplated as well.
- CDC 502 and data manager DMgr 504 are operative to manage the interface between a non-volatile memory subsystem such as a Flash 506 , a volatile memory subsystem such as a DRAM 508 , and a host system represented by MCH 510 .
- CDC 502 controls the read/write access to/from Flash memory 506 from/to DRAM memory 508 , and to/from DRAM memory from/to MCH 510 .
- Read/write access between DRAM 508 , Flash 506 and MCH 510 may be referred to herein generally as communication, wherein control and address information C/A 560 is sent from MCH 510 to CDC 502 , and possible data transfers follow as indicated by Data 550 , Data 555 , and/or Data 556 .
- The CDC 502 performs specific functions for memory address transformation, such as address translation, mapping, or address domain conversion, Flash access control, data error correction, manipulation of data width, or data formatting or data modulation between the Flash memory and the DRAM, and so on.
- The CDC 502 ensures that memory module 500 provides transparent operation to the MCH in accordance with certain industry standards, such as the DDR, DDR2, DDR3, and DDR4 protocols.
- Thus, the Flash access speed has minimal impact on the overall FDHDIMM access speed.
- The CDC 502 receives standard DDR commands from the MCH, interprets them, and produces commands and/or control signals to control the operation of the data manager (DMgr), the Flash memory, and the DRAM memory.
- The DMgr controls the data path routing among the DRAMs, the Flash, and the MCH, as detailed below.
- The data path routing control signals are operated independently, without any exclusivity.
- DMgr 504 provides a variety of functions to control data flow rate, data transfer size, data buffer size, and data error monitoring or data error correction. These functions or operations can be performed on the fly (while data is being transferred via the DMgr 504) or performed on buffered or stored data in the DRAM or a buffer.
- One role of DMgr 504 is to provide interoperability among the various memory subsystems or components and/or MCH 510.
- An exemplary host system operation begins with initialization.
- The CDC 502 receives a first command from the MCH 510 to initialize FDHDIMM 500 using a certain memory space.
- The memory space controlled by MCH 510 can be configured or programmed during initialization or after initialization has completed.
- The MCH 510 can partition or parse the memory space in various ways that are optimized for a particular application that the host system needs to run or execute.
- The CDC 502 maps the actual physical Flash 506 and DRAM 508 memory space using the information sent by MCH 510 via the first command.
- The CDC 502 maps the memory address space of any one of the Flash 506 and DRAM 508 memory subsystems using memory address space information that is received from the host system, stored in a register within FDHDIMM 500, or stored in a memory location of a non-volatile memory subsystem, for example a portion of Flash 506 or a separate non-volatile memory subsystem.
- The memory address space information corresponds to a portion of the initialization information of the FDHDIMM 500.
- MCH 510 may send a command to restore a certain amount of data information from Flash 506 to DRAM 508 .
- The CDC 502 provides control information to DMgr 504 to appropriately copy the necessary information from Flash 506 to the DRAM 508.
- This operation can provide support for various host system booting operations and/or a special host system power up operation.
- MCH 510 sends a command which may include various fields comprising control information regarding data transfer size, data format options, and/or startup time.
- CDC 502 receives and interprets the command and provides control signals to DMgr 504 to control the data traffic between the Flash 506 , the DRAM 508 , and the MCH 510 .
- DMgr 504 receives the data transfer size, formatting information, direction of data flow (via one or more multiplexers such as 611, 612, 621, and 622, as detailed below), and the starting time of the actual data transfer from CDC 502.
- DMgr 504 may also receive additional control information from the CDC 502 to establish a data flow path and/or to correctly establish the data transfer fabric.
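The parameters the DMgr receives (transfer size, formatting, direction via the multiplexers, and start time) can be pictured as a per-transfer descriptor handed from the CDC to the DMgr. The C sketch below is purely illustrative; the field names, widths, and encoding are assumptions, not the disclosure's format.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical transfer directions keyed to the fabric described below. */
typedef enum { DIR_DRAM_TO_MCH, DIR_DRAM_TO_FLASH,
               DIR_MCH_TO_DRAM, DIR_FLASH_TO_DRAM } xfer_dir_t;

/* Hypothetical per-transfer descriptor the CDC might hand to the DMgr,
 * mirroring the parameters listed above.                               */
typedef struct {
    uint32_t   size_bytes;   /* data transfer size                  */
    uint8_t    format;       /* formatting option selector          */
    xfer_dir_t direction;    /* selects mux 611/612/621/622 routing */
    uint64_t   start_cycle;  /* starting time of the transfer       */
} dmgr_xfer_desc_t;

int main(void)
{
    dmgr_xfer_desc_t d = { 4096, 0, DIR_FLASH_TO_DRAM, 1000 };
    printf("move %u bytes, dir %d, at cycle %llu\n",
           d.size_bytes, d.direction, (unsigned long long)d.start_cycle);
    return 0;
}
```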
- DMgr 504 also functions as a bi-directional data transfer fabric.
- DMgr 504 may have more than two sets of data ports facing the Flash 506 and the DRAM 508.
- Multiplexers 611 and 612 provide controllable data paths from either of the DRAMs 508(1) and 508(2) (DRAM-A and DRAM-B) to either of the MCH 510 and the Flash 506.
- Similarly, multiplexers 621 and 622 provide controllable data paths from either of the MCH and the Flash memory to either of the DRAMs 508(1) and 508(2) (DRAM-A and DRAM-B).
- In some embodiments, DRAM 508(1) is a segment of DRAM 508, while in other embodiments DRAM 508(1) is a separate DRAM memory subsystem. It will be understood that each memory segment can comprise one or more memory circuits, memory devices, and/or memory integrated circuits. Of course, other configurations for DRAM 508 are possible, and other data transfer fabrics using complex data paths and suitable types of multiplexing logic are contemplated.
- The two sets of multiplexers 611, 612 and 621, 622 allow independent data transfer to the Flash 506 from DRAM-A 508(1) and DRAM-B 508(2).
- For example, DMgr 504 can transfer data from DRAM-A 508(1) to MCH 510 via multiplexer 611 at the same time as from DRAM-B 508(2) to the Flash 506 via multiplexer 612; alternatively, data can be transferred from DRAM-B 508(2) to MCH 510 via multiplexer 611 while data is simultaneously transferred from the Flash 506 to DRAM-A 508(1) via multiplexer 621. One such concurrent configuration is sketched below.
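The following C sketch models the four-multiplexer fabric as independent source selections, showing a concurrent routing of the kind named above (DRAM-A serving the MCH while the Flash fills DRAM-B). The enum values and struct are illustrative assumptions keyed to the reference numerals in FIG. 6.

```c
#include <stdio.h>

/* Illustrative fabric state: each output mux independently selects its
 * source, so two transfers can proceed at once.                        */
typedef enum { SRC_NONE, SRC_DRAM_A, SRC_DRAM_B, SRC_FLASH, SRC_MCH } src_t;

typedef struct {
    src_t mux611_to_mch;     /* drives the MCH-facing port   */
    src_t mux612_to_flash;   /* drives the Flash-facing port */
    src_t mux621_to_dram_a;  /* drives DRAM-A                */
    src_t mux622_to_dram_b;  /* drives DRAM-B                */
} fabric_t;

int main(void)
{
    /* DRAM-A serves the host while Flash restores into DRAM-B. */
    fabric_t f = { SRC_DRAM_A, SRC_NONE, SRC_NONE, SRC_FLASH };
    printf("MCH <- src %d, DRAM-B <- src %d (concurrent)\n",
           f.mux611_to_mch, f.mux622_to_dram_b);
    return 0;
}
```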
- Data can be transferred to or from the DRAM in either device-wide or segment-by-segment fashion.
- Similarly, data can be transferred to or from the Flash memory in device-wide or segment-by-segment fashion, and the Flash memory can be addressed and accessed accordingly.
- the illustrated arrangement of data transfer fabric of DMgr 504 also allows the CDC 502 to control data transfer from the Flash memory to the MCH by buffering the data from the Flash 506 using a buffer 602 , and matching the data rate and/or data format of MCH 510 .
- the buffer 602 is shown in FIG. 6 as a portion of a data format module 604 ; however, buffer 602 may also be a distributed buffer such that one buffer is used for each one of the set of multiplexer logic elements shown as multiplexers 611 , 612 , 621 , and 622 .
- Various buffer arrangements may be used, such as a programmable size buffer to meet the requirement of a given system design requirement, for example the disparity between read/write access time; or overall system performance, for example latency.
- the buffer 602 may introduce one or more clock cycle delays into a data communication path between MCH 510 , DRAM 508 , and Flash 506 .
- data format module 604 contains a data formatting subsystem (not shown) to enable DMgr 504 to format and perform data transfer in accordance with control information received from CDC 502 .
- Data buffer 602 of data format module 604 also supports a wide data bus 606 coupled to the Flash memory 506 operating at a first frequency, while receiving data from DRAM 508 using a relatively smaller width data bus 608 operating at a second frequency, the second frequency being larger than the first frequency in certain embodiments.
- the buffer 602 is designed to match the data flow rate between the DRAM 508 and the Flash 506 .
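- A quick way to see the rate-matching role of buffer 602 is to compare aggregate bandwidths of the two buses. The widths and clock rates below are purely illustrative; the text only requires the DRAM-side frequency to exceed the Flash-side frequency.

```c
#include <stdio.h>

int main(void)
{
    /* hypothetical numbers: wide/slow Flash bus 606, narrow/fast DRAM bus 608 */
    double flash_bits = 128, flash_mhz = 100;
    double dram_bits  = 64,  dram_mhz  = 200;

    double flash_mbps = flash_bits * flash_mhz;  /* Mbit/s */
    double dram_mbps  = dram_bits  * dram_mhz;

    /* buffer 602 absorbs any residual mismatch between the two rates */
    printf("flash: %.0f Mbit/s, dram: %.0f Mbit/s\n", flash_mbps, dram_mbps);
    return 0;
}
```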
- a register 690 provides the ability to register commands received from MCH 510 via C/A 560 ( FIG. 5A ).
- the register 690 may communicate these commands to CDC 502 and/or to the DRAM 508 and/or Flash 506 .
- the register 690 communicates these registered commands to CDC 502 for processing.
- the register 690 may also include multiple registers (not shown), such that it can provide the ability to register multiple commands, a sequence of commands, or provide a pipeline delay stage for buffering and providing a controlled execution of certain commands received from MCH 510 .
- the register 690 may register commands from MCH 510 and transmit the registered commands to DRAM 508 and/or Flash 506 memory subsystems.
- the CDC 502 monitors commands received from MCH 510 , via control and address bus C/A 560 , and provides appropriate control information to DMgr 504 , DRAM 508 , or Flash 506 to execute these commands and perform data transfer operations between MCH 510 and FDHDIMM 500 via MCH data bus 610 .
- FIG. 7 illustrates a functional block diagram of the CDC 502 .
- the major functional blocks of the CDC 502 are a DRAM control block DRAMCtrl 702 , Flash control block FlashCtrl 704 , MCH command interpreter CmdInt 706 , DRAM-Flash interface scheduler Scheduler 708 , and DMgr control block (DMgrCtrl) 710 .
- DRAMCtrl 702 generates DRAM commands that are independent from the commands issued by the MCH 510 .
- the CDC 502 may choose to instruct DRAMCtrl 702 to abort its operation in order to execute the operation initiated by the MCH.
- the CDC 502 may also pipeline the operation so that it causes DRAMCtrl 702 to either halt or complete its current operation prior to executing that of the MCH.
- the CDC 502 may also instruct DRAMCtrl 702 to resume its operation once the command from MCH 510 is completed.
- the FlashCtrl 704 generates appropriate Flash commands for the proper read/write operations.
- the CmdInt 706 intercepts commands received from MCH 510 , generates the appropriate control information and control signals, and transmits them to the appropriate FDHDIMM functional block. For example, CmdInt 706 issues an interrupt signal to the DRAMCtrl 702 when the MCH issues a command that collides (conflicts) with currently executing or pending commands that DRAMCtrl 702 has initiated independently from MCH 510 , thus subordinating these commands to those from the MCH.
- the Scheduler 708 schedules the Flash-DRAM interface operation such that there is no resource conflict in the DMgr 504 .
- the Scheduler 708 assigns time slots for the DRAMCtrl 702 and FlashCtrl 704 operation based on the current status and the pending command received or to be received from the MCH.
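- The interaction between CmdInt, DRAMCtrl, and the Scheduler can be sketched as a small priority rule: MCH commands always win, and preempted internal commands are resumed afterward. This C fragment is an interpretation of the behavior described above, not the patent's implementation; the notion of a per-bank resource is an assumption.

```c
#include <stdbool.h>
#include <stdint.h>

enum op_state { OP_IDLE, OP_RUNNING, OP_HALTED };

struct internal_op {
    enum op_state state;
    uint32_t      bank;   /* resource the internal command occupies (assumed) */
};

/* CmdInt: returns true if the MCH command preempted internal work. */
static bool on_mch_command(struct internal_op *op, uint32_t mch_bank)
{
    if (op->state == OP_RUNNING && op->bank == mch_bank) {
        op->state = OP_HALTED;   /* "interrupt" to DRAMCtrl on collision */
        return true;
    }
    return false;                /* no conflict: both may proceed        */
}

/* Scheduler: resume the subordinated internal command afterward. */
static void on_mch_command_done(struct internal_op *op)
{
    if (op->state == OP_HALTED)
        op->state = OP_RUNNING;
}
```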
- the DMgrCtrl 710 generates and sends appropriate control information and control signals for the proper operation and control of the data transfer fabric to enable or disable data paths between Flash 506 , DRAM 508 , and the MCH 510 .
- FIG. 8A is a block diagram showing a Flash-DRAM hybrid DIMM (FDHDIMM).
- this Flash-DRAM hybrid DIMM requires two separate and independent address buses to separately control the address spaces: one for the Flash memory Flash 506 and the other for the DRAM memory DRAM 508 .
- the MCH treats the DRAM 508 and Flash 506 as separate memory subsystems, for example DRAM and SSD/HD memory subsystems.
- the memory in each address space is controlled directly by the MCH.
- the on-DIMM data path between Flash 506 and DRAM 508 allows for direct data transfer to occur between the Flash 506 and the DRAM 508 in response to control information from Ctrl 502 .
- this data transfer mechanism provides direct support for executing commands from the MCH without having the MCH directly control the data transfer, thus improving data transfer performance from the Flash 506 to the DRAM 508 .
- the MCH needs to manage two address spaces and two different memory protocols simultaneously.
- the MCH needs to map the DRAM memory space into the Flash memory space, and the data interface time suffers due to the difference in the data access time between the Flash memory and the DRAM memory.
- a memory space mapping of a Flash-DRAM hybrid DIMM is shown in FIG. 8B .
- a memory controller of a host system (not shown) controls both the DRAM 508 address space and the Flash 506 address space using a single unified address space.
- the CDC 502 receives memory access commands from the MCH and generates control information for appropriate mapping and data transfer between the Flash and DRAM memory subsystems to properly carry out the memory access commands.
- the memory controller of the host system views the large Flash memory space as a DRAM memory space, and accesses this unified memory space with a standard DDR (double data rate) protocol used for accessing DRAM.
- the unified memory space in this case can exhibit overlapping memory address space between the Flash 506 and the DRAM 508 .
- the overlapping memory address space may be used as a temporary storage or buffer for data transfer between the Flash 506 and the DRAM 508 .
- the DRAM memory space may hold a copy of data from the selected Flash memory space such that the MCH can access this data normally via DDR memory access commands.
- the CDC 502 controls the operation of the Flash 506 and DRAM 508 memory subsystems in response to commands received from a memory controller of a host system.
- the unified memory space corresponds to a contiguous address space comprising a first portion of the address space of the Flash 506 and a first portion of the address space of the DRAM 508 .
- the first portion of the address space of the Flash 506 can be determined via a first programmable register holding a first value corresponding to the desired Flash memory size to be used.
- the first portion of the address space of the DRAM 508 can be determined via a second programmable register holding a second value corresponding to the desired DRAM memory size to be used.
- any one of the first portion of the address space of the Flash 506 and the first portion of the address space of the DRAM 508 is determined via a first value corresponding to a desired performance or memory size, the first value being received by the CDC 502 via a command sent by memory controller of the host system.
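- A simple way to picture the unified address space is as a contiguous range whose split point is set by the programmable registers described above. The sketch below assumes the non-overlapping case (the overlapping-window variant described earlier would add a remapping step) and uses hypothetical register names.

```c
#include <stdint.h>

struct cdc_regs {
    uint64_t dram_portion_bytes;   /* first programmable register (assumed)  */
    uint64_t flash_portion_bytes;  /* second programmable register (assumed) */
};

enum target { TARGET_DRAM, TARGET_FLASH, TARGET_INVALID };

/* Map a unified (contiguous) host address to the backing subsystem. */
static enum target decode(const struct cdc_regs *r, uint64_t addr,
                          uint64_t *local_addr)
{
    if (addr < r->dram_portion_bytes) {
        *local_addr = addr;                          /* DRAM offset  */
        return TARGET_DRAM;
    }
    if (addr < r->dram_portion_bytes + r->flash_portion_bytes) {
        *local_addr = addr - r->dram_portion_bytes;  /* Flash offset */
        return TARGET_FLASH;
    }
    return TARGET_INVALID;
}
```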
- a flow diagram directed to the transfer of data from Flash memory to DRAM memory and vice versa in an exemplary FDHDIMM is shown in FIG. 9 .
- data transfer from the Flash 506 to the DRAM 508 occurs in accordance with memory access commands which the CDC 502 receives from the memory controller of the host system.
- the CDC 502 controls the data transfer from the DRAM 508 to the Flash 506 so as to avoid conflict with any memory operation that is currently being executed, for example, when all the pages in a particular DRAM memory block are closed.
- the CDC 502 partitions the DRAM memory space into a number of blocks for the purpose of optimally supporting the desired application.
- the controller can configure memory space in the memory module based on at least one of one or more commands received from the MCH, instructions received from the MCH, a programmable value written into a register, a value corresponding to a first portion of the volatile memory subsystem, a value corresponding to a first portion of the non-volatile memory subsystem, and a timing value.
- the block size can be configurable by the memory controller of the host system, such that the number of pages in a block can be optimized to support a particular application or task.
- the block size may be configured on-the-fly, e.g. the CDC 502 can receive an instruction regarding a desired block size from the memory controller via a memory command, or via a programmable value.
- a memory controller can access the memory module using a standard access protocol, such as JEDEC's DDR DRAM, by sending a memory access command to the CDC 502 , which in turn determines the type of data transfer operation and the corresponding target address where the data information is stored, e.g. in the DRAM 508 or Flash 506 memory subsystems.
- if the CDC 502 determines that data information, e.g. a page (or block), does not reside in the DRAM 508 but resides in the Flash 506 , then the CDC 502 initiates and controls all necessary data transfer operations from Flash 506 to DRAM 508 and subsequently to the memory controller.
- the CDC 502 alerts the memory controller to retrieve the data information from the DRAM 508 .
- the memory controller initiates the copying of data information from Flash 506 to DRAM 508 by writing, into a register in the CDC 502 , the target Flash address along with a valid block size.
- the CDC 502 executes appropriate operations and generates control information to copy the data information to the DRAM 508 . Consequently, the memory controller can access or retrieve the data information using standard memory access commands or protocol.
- An exemplary flow chart is shown in FIG. 9 . A starting step or power up 902 is followed by an initialization step 904 . The memory controller initiates, at step 906 , a data move from the Flash 506 to the DRAM 508 by writing a target address and size to a control register in the CDC 502 , which then copies, at 908 , data information from the Flash 506 to the DRAM 508 and erases the block in the Flash. Erasing the data information from the Flash may be accomplished independently from (or concurrently with) other steps that the CDC 502 performs in this flow chart, i.e. other steps can be executed concurrently with the erase-the-Flash-block step.
- the memory controller can operate on this data block using standard memory access protocol or commands at 910 .
- the CDC 502 checks, at 912 , if any of the DRAM 508 blocks, or copied blocks, are closed. If the memory controller closed any open blocks in DRAM 508 , then the CDC 502 initiates a Flash write to write the closed block from the DRAM 508 to the Flash 506 , at 914 . If the memory controller, at 916 , reopens the closed block that is currently being written into the Flash 506 , then the CDC 502 stops the Flash write operation and erases the Flash block which was being written to, as shown at 918 . Otherwise, the CDC 502 continues and completes the writing operation to the Flash at 920 .
- the dashed lines in FIG. 9 indicate independent or parallel activities that can be performed by the CDC 502 .
- the CDC 502 receives a DRAM load command from a memory controller, which writes a Flash target address and/or block size information into the RC register(s) at 922 , as described above; the CDC 502 then executes a load-DRAM-with-RC step 906 and initiates another branch (or a thread) of activities that includes steps 908 - 922 .
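- The block life cycle that FIG. 9 walks through (close, queue, write back, possibly reopen and cancel) amounts to a small state machine. The following C sketch captures that cycle under assumed state names; it is illustrative, not the patent's logic.

```c
enum blk_state { BLK_OPEN, BLK_CLOSED_QUEUED, BLK_WRITING, BLK_IN_FLASH };

struct block { enum blk_state state; };

static void on_block_closed(struct block *b)     /* steps 912/914 */
{
    if (b->state == BLK_OPEN)
        b->state = BLK_CLOSED_QUEUED;            /* queue Flash write */
}

static void on_write_started(struct block *b)
{
    if (b->state == BLK_CLOSED_QUEUED)
        b->state = BLK_WRITING;
}

static void on_block_reopened(struct block *b)   /* steps 916/918 */
{
    if (b->state == BLK_WRITING || b->state == BLK_CLOSED_QUEUED)
        b->state = BLK_OPEN;  /* stop the write; erase the partial Flash block */
}

static void on_write_complete(struct block *b)   /* step 920 */
{
    if (b->state == BLK_WRITING)
        b->state = BLK_IN_FLASH;
}
```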
- the CDC 502 controls the data transfer operations between DRAM 508 and Flash 506 such that the Flash 506 is completely hidden from the memory controller.
- the CDC 502 monitors all memory access commands sent by the memory controller using standard DRAM protocol and appropriately configures and manipulates both the Flash 506 and DRAM 508 memory subsystems to perform the requested memory access operation and thus achieve the desired results.
- the memory controller does not interface directly with the Flash memory subsystem. Instead, the memory controller interfaces with the CDC 502 and/or DMgr 504 as shown in FIG. 5 and FIG. 6 .
- the memory controller may use one or more protocols, such as DDR, DDR2, DDR3, DDR4 protocols or the like.
- an example of mapping a DRAM address space to Flash memory address space is shown in FIG. 10 .
- Two sets ( 1002 , 1004 ) of address bits AD 6 to AD 17 are allocated for the block address. For example, assuming a Block size of 256K Bytes, a 24-bit block address space (using the two sets of AD 6 to AD 17 1002 and 1004 ) would enable access to 4 TB of Flash memory storage space. If a memory module has 1 GB of DRAM storage capacity, then it can hold approximately 4K Blocks of data in the DRAM memory, each Block comprising 256K Bytes of data.
- the DRAM address space corresponding to the 4K blocks can be assigned to different virtual ranks and banks, where the number of virtual ranks and banks is configurable and can be manipulated to meet specific design or performance needs. For example, if a 1 G Bytes memory module is configured to comprise two ranks with eight banks per rank, then each bank would hold two hundred fifty (250) blocks, or the equivalent of 62 M Bytes or 62K pages, where each page corresponds to 1K Bytes. Other configurations using different numbers of pages, blocks, banks, or ranks may also be used. Furthermore, an exemplary mapping of a 24-bit DDR DIMM block address to a Flash memory address, using Block addressing as described above, is shown in FIG. 10 .
- the 24-bit block address can be decomposed into fields, such as a logical unit number LUN address 1061 field, a Block address 1051 field, a Plane address 1041 , a Page address 1031 , and a group of least significant address bits A 0 A 1 1021 .
- the Plane address 1041 is a sub-address of the block address, and it may be used to support multiple-page IO so as to improve Flash memory subsystem operation. In this example, it is understood that a different number of bits may be allocated to each field of the 24-bit address.
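- Since the text leaves the exact bit allocation open, the following C sketch decodes the 24-bit block address with one assumed split (2-bit LUN, 12-bit Block, 2-bit Plane, 6-bit Page, 2-bit A1A0) and also reproduces the capacity arithmetic quoted above.

```c
#include <stdint.h>
#include <stdio.h>

struct flash_addr { unsigned lun, block, plane, page, a1a0; };

static struct flash_addr decode24(uint32_t a /* 24-bit block address */)
{
    struct flash_addr fa;
    fa.a1a0  =  a        & 0x3;    /* A1 A0 1021: 2 bits (assumed)  */
    fa.page  = (a >> 2)  & 0x3f;   /* Page 1031:  6 bits (assumed)  */
    fa.plane = (a >> 8)  & 0x3;    /* Plane 1041: 2 bits (assumed)  */
    fa.block = (a >> 10) & 0xfff;  /* Block 1051: 12 bits (assumed) */
    fa.lun   = (a >> 22) & 0x3;    /* LUN 1061:   2 bits (assumed)  */
    return fa;
}

int main(void)
{
    /* 2^24 block addresses x 256 KB per block = 4 TB of Flash space;
     * 1 GB of DRAM / 256 KB per block = 4096 (~4K) resident blocks.  */
    unsigned long long flash_tb  = ((1ULL << 24) * (256ULL << 10)) >> 40;
    unsigned long long dram_blks = (1ULL << 30) / (256ULL << 10);
    struct flash_addr fa = decode24(0x123456);

    printf("flash = %llu TB, dram blocks = %llu, lun=%u block=%u\n",
           flash_tb, dram_blks, fa.lun, fa.block);
    return 0;
}
```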
- the CDC 502 manages the block write-back operation by queuing the blocks that are ready to be written back to the Flash memory. As described above, if any page in a block queued for a write operation is reopened, then the CDC 502 will stop the queued block write operation and remove the block from the queue. Once all the pages in a block are closed, the CDC 502 restarts the write-back operation and queues the block for a write operation.
- an exemplary read operation from Flash 506 to DRAM 508 can be performed in approximately 400 μs, while a write operation from DRAM 508 to Flash 506 can be performed in approximately 22 ms, resulting in a read-to-write ratio of 55 to 1. Therefore, if the average time a host system's memory controller spends accessing data information in a Block of DRAM is about 22 ms (that is, the duration during which a Block comprises one or more open pages), then the block write-back operation from DRAM to Flash would not impact performance, and hence the disparity between read and write access may be completely hidden from the memory controller.
- the CDC 502 controls the data transfer operation between DRAM 508 and Flash 506 such that there are no more than 9 closed blocks in the queue to be written back to the Flash memory; hence approximately an average of 100 ms can be maintained for a standard DDR DRAM operation.
- the number of closed Blocks in the queue to be written-back to the Flash memory subsystem varies with the average block usage time and the desired performance for a specific host system or for a specific application running using the host system resources.
- the above relationship also indicates that a bigger DRAM memory space can support shorter block usage times. For example, 2 GB of DRAM memory allows the 8 closed blocks to be written back to Flash.
- the table in FIG. 11 provides an estimation of the maximum number of closed blocks allowed in the queue to be written back to the Flash memory for different DRAM densities using various average block use times.
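- The timing figures above can be checked with simple arithmetic. The sketch below uses the text's example numbers (400 μs read, 22 ms write) plus an assumed 100 ms average block usage time to show why a bounded write-back queue keeps the Flash's slow writes hidden.

```c
#include <stdio.h>

int main(void)
{
    double t_read  = 400e-6;  /* Flash -> DRAM block read (example)  */
    double t_write = 22e-3;   /* DRAM -> Flash block write (example) */
    double t_use   = 100e-3;  /* average block usage time (assumed)  */

    printf("write/read ratio: %.0f to 1\n", t_write / t_read);  /* 55 */

    /* If blocks close on average once per t_use and each write-back
     * takes t_write, the queue of closed blocks drains whenever
     * t_use >= t_write; a bounded queue (e.g., 9 blocks) then only
     * absorbs short bursts of faster closes.                        */
    printf("queue drains: %s\n", t_use >= t_write ? "yes" : "no");
    return 0;
}
```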
- Certain embodiments described herein include a memory system which can communicate with a host system such as a disk controller of a computer system.
- the memory system can include volatile and non-volatile memory, and a controller.
- the controller backs up the volatile memory using the non-volatile memory in the event of a trigger condition.
- Trigger conditions can include, for example, a power failure, power reduction, request by the host system, etc.
- the memory system can include a secondary power source which does not comprise a battery and may include, for example, a capacitor or capacitor array.
- the memory system can be configured such that the operation of the volatile memory is not adversely affected by the non-volatile memory or by the controller when the volatile memory is interacting with the host system.
- one or more isolation devices may isolate the non-volatile memory and the controller from the volatile memory when the volatile memory is interacting with the host system and may allow communication between the volatile memory and the non-volatile memory when the data of the volatile memory is being restored or backed-up. This configuration generally protects the operation of the volatile memory when isolated while providing backup and restore capability in the event of a trigger condition, such as a power failure.
- the memory system includes a power module which provides power to the various components of the memory system from different sources based on a state of the memory system in relation to a trigger condition (e.g., a power failure).
- the power module may switch the source of the power to the various components in order to efficiently provide power in the event of the power failure. For example, when no power failure is detected, the power module may provide power to certain components, such as the volatile memory, from system power while charging a secondary power source (e.g., a capacitor array). In the event of a power failure or other trigger condition, the power module may power the volatile memory elements using the previously charged secondary power source.
- the power module transitions relatively smoothly from powering the volatile memory with system power to powering it with the secondary power source.
- the power system may power volatile memory with a third power source from the time the memory system detects that power failure is likely to occur until the time the memory system detects that the power failure has actually occurred.
- the volatile memory system can be operated at a reduced frequency during backup and/or restore operations which can improve the efficiency of the system and save power.
- the volatile memory communicates with the non-volatile memory by writing and/or reading data words in bit-wise slices instead of by writing entire words at once.
- the unused slice(s) of volatile memory are not active, which can reduce the power consumption of the system.
- the non-volatile memory can include at least 100 percent more storage capacity than the volatile memory. This configuration can allow the memory system to efficiently handle subsequent trigger conditions.
- FIG. 12 is a block diagram of an example memory system 1010 compatible with certain embodiments described herein.
- the memory system 1010 can be coupled to a host computer system and can include a volatile memory subsystem 1030 , a non-volatile memory subsystem 1040 , and a controller 1062 operatively coupled to the non-volatile memory subsystem 1040 .
- the memory system 1010 includes at least one circuit 1052 configured to selectively operatively decouple the controller 1062 from the volatile memory subsystem 1030 .
- the memory system 1010 comprises a memory module.
- the memory system 1010 may comprise a printed-circuit board (PCB) 1020 .
- the memory system 1010 has a memory capacity of 512-MB, 1-GB, 2-GB, 4-GB, or 8-GB. Other volatile memory capacities are also compatible with certain embodiments described herein.
- the memory system 1010 has a non-volatile memory capacity of 512-MB, 1-GB, 2-GB, 4-GB, 8-GB, 16-GB, or 32-GB. Other non-volatile memory capacities are also compatible with certain embodiments described herein.
- the PCB 1020 has an industry-standard form factor.
- the PCB 1020 can have a low profile (LP) form factor with a height of 30 millimeters and a width of 133.35 millimeters.
- the PCB 1020 has a very high profile (VHP) form factor with a height of 50 millimeters or more.
- the PCB 1020 has a very low profile (VLP) form factor with a height of 18.3 millimeters.
- Other form factors including, but not limited to, small-outline (SO-DIMM), unbuffered (UDIMM), registered (RDIMM), fully-buffered (FBDIMM), miniDIMM, mini-RDIMM, VLP mini-DIMM, micro-DIMM, and SRAM DIMM are also compatible with certain embodiments described herein.
- certain non-DIMM form factors are possible such as, for example, single in-line memory module (SIMM), multi-media card (MMC), and small computer system interface (SCSI).
- the memory system 1010 is in electrical communication with the host system. In other embodiments, the memory system 1010 may communicate with a host system using some other type of communication, such as, for example, optical communication. Examples of host systems include, but are not limited to, blade servers, 1U servers, personal computers (PCs), and other applications in which space is constrained or limited.
- the memory system 1010 can be in communication with a disk controller of a computer system, for example.
- the PCB 1020 can comprise an interface 1022 that is configured to be in electrical communication with the host system (not shown).
- the interface 1022 can comprise a plurality of edge connections which fit into a corresponding slot connector of the host system.
- the interface 1022 of certain embodiments provides a conduit for power voltage as well as data, address, and control signals between the memory system 1010 and the host system.
- the interface 1022 can comprise a standard 240-pin DDR2 edge connector.
- the volatile memory subsystem 1030 comprises a plurality of volatile memory elements 1032 and the non-volatile memory subsystem 1040 comprises a plurality of non-volatile memory elements 1042 .
- Certain embodiments described herein advantageously provide nonvolatile storage via the non-volatile memory subsystem 1040 in addition to high-performance (e.g., high speed) storage via the volatile memory subsystem 1030 .
- the first plurality of volatile memory elements 1032 comprises two or more dynamic random-access memory (DRAM) elements.
- Types of DRAM elements 1032 compatible with certain embodiments described herein include, but are not limited to, DDR, DDR2, DDR3, and synchronous DRAM (SDRAM). For example, in the block diagram of FIG. 12 , the first memory bank 1030 comprises eight 64M × 8 DDR2 SDRAM elements 1032 .
- the volatile memory elements 1032 may comprise other types of memory elements such as static random-access memory (SRAM).
- volatile memory elements 1032 having bit widths of 4, 8, 16, 32, as well as other bit widths, are compatible with certain embodiments described herein.
- Volatile memory elements 1032 compatible with certain embodiments described herein have packaging which includes, but is not limited to, thin small-outline package (TSOP), ball-grid-array (BGA), fine-pitch BGA (FBGA), micro-BGA (μBGA), mini-BGA (mBGA), and chip-scale packaging (CSP).
- the second plurality of non-volatile memory elements 1042 comprises one or more flash memory elements.
- Types of flash memory elements 1042 compatible with certain embodiments described herein include, but are not limited to, NOR flash, NAND flash, ONE-NAND flash, and multi-level cell (MLC).
- the second memory bank 1040 comprises 512 MB of flash memory organized as four 128 Mb × 8 NAND flash memory elements 1042 .
- nonvolatile memory elements 1042 having bit widths of 4, 8, 16, 32, as well as other bit widths, are compatible with certain embodiments described herein.
- Non-volatile memory elements 1042 compatible with certain embodiments described herein have packaging which includes, but is not limited to, thin small-outline package (TSOP), ball-grid-array (BGA), fine-pitch BGA (FBGA), micro-BGA (μBGA), mini-BGA (mBGA), and chip-scale packaging (CSP).
- FIG. 13 is a block diagram of an example memory module 1010 with ECC (error-correcting code) having a volatile memory subsystem 1030 with nine volatile memory elements 1032 and a non-volatile memory subsystem 1040 with five non-volatile memory elements 1042 in accordance with certain embodiments described herein.
- the additional memory element 1032 of the first memory bank 1030 and the additional memory element 1042 of the second memory bank 1040 provide the ECC capability.
- the volatile memory subsystem 1030 comprises other numbers of volatile memory elements 1032 (e.g., 2, 3, 4, 5, 6, 7, more than 9).
- the non-volatile memory subsystem 1040 comprises other numbers of nonvolatile memory elements 1042 (e.g., 2, 3, more than 5).
- the logic element 1070 comprises a field-programmable gate array (FPGA).
- the logic element 1070 comprises an FPGA available from Lattice Semiconductor Corporation which includes an internal flash.
- the logic element 1070 comprises an FPGA available from another vendor.
- the internal flash can improve the speed of the memory system 1010 and save physical space.
- Other types of logic elements 1070 compatible with certain embodiments described herein include, but are not limited to, a programmable-logic device (PLD), an application-specific integrated circuit (ASIC), a custom-designed semiconductor device, and a complex programmable logic device (CPLD).
- the logic element 1070 is a custom device.
- the logic element 1070 comprises various discrete electrical elements, while in certain other embodiments, the logic element 1070 comprises one or more integrated circuits.
- FIG. 14 is a block diagram of an example memory module 1010 having a microcontroller unit 1060 and logic element 1070 integrated into a single controller 1062 in accordance with certain embodiments described herein.
- the controller 1062 includes one or more other components. For example, in one embodiment, an FPGA without an internal flash is used and the controller 1062 includes a separate flash memory component which stores configuration information to program the FPGA.
- the at least one circuit 1052 comprises one or more switches coupled to the volatile memory subsystem 1030 , to the controller 1062 , and to the host computer (e.g., via the interface 1022 , as schematically illustrated by FIGS. 12-14 ).
- the one or more switches are responsive to signals (e.g., from the controller 1062 ) to selectively operatively decouple the controller 1062 from the volatile memory subsystem 1030 and to selectively operatively couple the controller 1062 to the volatile memory subsystem 1030 .
- the at least one circuit 1052 selectively operatively couples and decouples the volatile memory subsystem 1030 and the host system.
- the volatile memory subsystem 1030 can comprise a registered DIMM subsystem comprising one or more registers 1160 and a plurality of DRAM elements 1180 , as schematically illustrated by FIG. 15A .
- the at least one circuit 1052 can comprise one or more switches 1172 coupled to the controller 1062 (e.g., logic element 1070 ) and to the volatile memory subsystem 1030 which can be actuated to couple and decouple the controller 1062 to and from the volatile memory subsystem 1030 , respectively.
- the memory system 1010 further comprises one or more switches 1170 coupled to the one or more registers 1160 and to the plurality of DRAM elements 1180 as schematically illustrated by FIG. 15A .
- the one or more switches 1170 can be selectively switched, thereby selectively operatively coupling the volatile memory subsystem 1030 to the host system 1150 .
- the one or more switches 1174 are also coupled to the one or more registers 1160 and to a power source 1162 for the one or more registers 1160 .
- the one or more switches 1174 can be selectively switched to turn power on or off to the one or more registers 1160 , thereby selectively operatively coupling the volatile memory subsystem 1030 to the host system 1150 .
- the at least one circuit 1052 comprises a dynamic on-die termination (ODT) 1176 circuit of the logic element 1070 .
- the logic element 1070 can comprise a dynamic ODT circuit 1176 which selectively operatively couples and decouples the logic element 1070 to and from the volatile memory subsystem 1030 , respectively.
- the non-volatile memory subsystem 1040 may backup the volatile memory subsystem 1030 in the event of a trigger condition, such as, for example, a power failure or power reduction or a request from the host system.
- the nonvolatile memory subsystem 1040 holds intermediate data results in a noisy system environment when the host computer system is engaged in a long computation.
- a backup may be performed on a regular basis. For example, in one embodiment, the backup may occur every millisecond in response to a trigger condition.
- the trigger condition occurs when the memory system 1010 detects that the system voltage is below a certain threshold voltage.
- the threshold voltage is 10 percent below a specified operating voltage.
- a trigger condition occurs when the voltage goes above a certain threshold value, such as, for example, 10 percent above a specified operating voltage.
- a trigger condition occurs when the voltage goes below a threshold or above another threshold.
- a backup and/or restore operation may occur in reboot and/or non-reboot trigger conditions.
- the controller 1062 may comprise a microcontroller unit (MCU) 1060 and a logic element 1070 .
- the MCU 1060 provides memory management for the non-volatile memory subsystem 1040 and controls data transfer between the volatile memory subsystem 1030 and the non-volatile memory subsystem 1040 .
- the MCU 1060 of certain embodiments comprises a 16-bit microcontroller, although other types of microcontrollers are also compatible with certain embodiments described herein.
- the logic element 1070 of certain embodiments is in electrical communication with the non-volatile memory subsystem 1040 and the MCU 1060 .
- the logic element 1070 can provide signal level translation between the volatile memory elements 1032 (e.g., 1.8V SSTL-2 for DDR2 SDRAM elements) and the non-volatile memory elements 1042 (e.g., 3V TTL for NAND flash memory elements). In certain embodiments, the logic element 1070 is also programmed to perform address translation between the volatile memory subsystem 1030 and the non-volatile memory subsystem 1040 . In certain preferred embodiments, 1-NAND type flash elements are used for the non-volatile memory elements 1042 because of their superior read speed and compact structure.
- the memory system 1010 of certain embodiments is configured to be operated in at least two states.
- the at least two states can comprise a first state in which the controller 1062 and the non-volatile memory subsystem 1040 are operatively decoupled (e.g., isolated) from the volatile memory subsystem 1030 by the at least one circuit 1052 and a second state in which the volatile memory subsystem 1030 is operatively coupled to the controller 1062 to allow data to be communicated between the volatile memory subsystem 1030 and the nonvolatile memory subsystem 1040 via the controller 1062 .
- the memory system 1010 may transition from the first state to the second state in response to a trigger condition, such as when the memory system 1010 detects that there is a power interruption (e.g., power failure or reduction) or a system hang-up.
- the memory system 1010 may further comprise a voltage monitor 1050 .
- the voltage monitor circuit 1050 monitors the voltage supplied by the host system via the interface 1022 . Upon detecting a low voltage condition (e.g., due to a power interruption to the host system), the voltage monitor circuit 1050 may transmit a signal to the controller 1062 indicative of the detected condition.
- the controller 1062 responds to the signal from the voltage monitor circuit 1050 by transmitting a signal to the at least one circuit 1052 to operatively couple the controller to the volatile memory system 1030 , such that the memory system 1010 enters the second state.
- the voltage monitor 1050 may send a signal to the MCU 1060 which responds by accessing the data on the volatile memory system 1030 and by executing a write cycle on the nonvolatile memory subsystem 1040 . During this write cycle, data is read from the volatile memory subsystem 1030 and is transferred to the non-volatile memory subsystem 1040 via the MCU 1060 .
- the voltage monitor circuit 1050 is part of the controller 1062 (e.g., part of the MCU 1060 ) and the voltage monitor circuit 1050 transmits a signal to the other portions of the controller 1062 upon detecting a power threshold condition.
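- The trigger path just described (the voltage monitor detects a threshold crossing, the controller couples itself to the volatile memory and starts the backup) can be sketched as follows. The 10-percent threshold mirrors the example given above; everything else is an assumed simplification.

```c
#include <stdbool.h>

#define V_NOMINAL 1.8                    /* volts (assumed rail)     */
#define V_LOW     (V_NOMINAL * 0.9)      /* 10% below nominal        */

static bool controller_coupled;          /* state of circuit 1052    */

static void backup_volatile_to_nonvolatile(void)
{
    /* ... read volatile memory, write non-volatile memory ...       */
}

/* Called periodically (or on an analog comparator event). */
static void voltage_monitor_tick(double v_system)
{
    if (v_system < V_LOW && !controller_coupled) {
        controller_coupled = true;       /* enter the second state   */
        backup_volatile_to_nonvolatile();
    }
}
```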
- the isolation or operational decoupling of the volatile memory subsystem 1030 from the non-volatile memory subsystem in the first state can preserve the integrity of the operation of the memory system 1010 during periods of operation in which signals (e.g., data) are transmitted between the host system and the volatile memory subsystem 1030 .
- the controller 1062 and the nonvolatile memory subsystem 1040 do not add a significant capacitive load to the volatile memory system 1030 when the memory system 1010 is in the first state.
- the capacitive load of the controller 1062 and the non-volatile memory subsystem 1040 does not significantly affect the signals propagating between the volatile memory subsystem 1030 and the host system.
- the at least one circuit 1052 comprises an FSA1208 Low-Power, Eight-Port, Hi-Speed Isolation Switch from Fairchild Semiconductor. In other embodiments, the at least one circuit 1052 comprises other types of isolation devices.
- Power may be supplied to the volatile memory subsystem 1030 from a first power supply (e.g., a system power supply) when the memory system 1010 is in the first state and from a second power supply 1080 when the memory system 1010 is in the second state.
- the memory system 1010 is in the first state when no trigger condition (e.g., a power failure) is present and the memory system 1010 enters the second state in response to a trigger condition.
- the memory system 1010 has a third state in which the controller 1062 is operatively decoupled from the volatile memory subsystem 1030 and power is supplied to the volatile memory subsystem 1030 from a third power supply (not shown).
- the third power supply may provide power to the volatile memory subsystem 1030 when the memory system 1010 detects that a trigger condition is likely to occur but has not yet occurred.
- the second power supply 1080 does not comprise a battery. Because a battery is not used, the second power supply 1080 of certain embodiments may be relatively easy to maintain, does not generally need to be replaced, and is relatively environmentally friendly.
- the second power supply 1080 comprises a step-up transformer 1082 , a step-down transformer 1084 , and a capacitor bank 1086 comprising one or more capacitors (e.g., double-layer capacitors).
- capacitors may take about three to four minutes to charge and about two minutes to discharge. In other embodiments, the one or more capacitors may take a longer time or a shorter time to charge and/or discharge.
- the second power supply 1080 is configured to power the volatile memory subsystem 1030 for less than thirty minutes.
- the second power supply 1080 may comprise a battery.
- the second power supply 1080 comprises a battery and one or more capacitors and is configured to power the volatile memory subsystem 1030 for no more than thirty minutes.
- the capacitor bank 1086 of the second power supply 1080 is charged by the first power supply while the memory system 1010 is in the first state. As a result, the second power supply 1080 is fully charged when the memory system 1010 enters the second state.
- the memory system 1010 and the second power supply 1080 may be located on the same printed circuit board 1020 . In other embodiments, the second power supply 1080 may not be on the same printed circuit board 1020 and may be tethered to the printed circuit board 1020 , for example.
- the step-up transformer 1082 keeps the capacitor bank 1086 charged at a peak value.
- the step-down transformer 1084 acts as a voltage regulator to ensure that regulated voltages are supplied to the memory elements (e.g., 1.8V to the volatile DRAM elements 1032 and 3.0V to the non-volatile flash memory elements 1042 ) when operating in the second state (e.g., during power down).
- the memory module 1010 further comprises a switch 1090 (e.g., FET switch) that switches power provided to the controller 1062 , the volatile memory subsystem 1030 , and the non-volatile memory subsystem 1040 , between the power from the second power supply 1080 and the power from the first power supply (e.g., system power) received via the interface 1022 .
- the switch 1090 may switch from the first power supply to the second power supply 1080 when the voltage monitor 1050 detects a low voltage condition.
- the switch 1090 of certain embodiments advantageously ensures that the volatile memory elements 1032 and non-volatile memory elements 1042 are powered long enough for the data to be transferred from the volatile memory elements 1032 and stored in the non-volatile memory elements 1042 .
- the switch 1090 then switches back to the first power supply and the controller 1062 transmits a signal to the at least one circuit 1052 to operatively decouple the controller 1062 from the volatile memory subsystem 1030 , such that the memory system 1010 reenters the first state.
- data may be transferred back from the non-volatile memory subsystem 1040 to the volatile memory subsystem 1030 via the controller 1062 .
- the host system can then resume accessing the volatile memory subsystem 1030 of the memory module 1010 .
- the host system accesses the volatile memory subsystem 1030 rather than the non-volatile memory subsystem 1040 because the volatile memory elements 1032 have superior read/write characteristics.
- the transfer of data from the volatile memory bank 1030 to the non-volatile memory bank 1040 , or from the non-volatile memory bank 1040 to the volatile memory bank 1030 , takes less than one minute per GB.
- the memory system 1010 protects the operation of the volatile memory when communicating with the host system and provides backup and restore capability in the event of a trigger condition such as a power failure. In certain embodiments, the memory system 1010 copies the entire contents of the volatile memory subsystem 1030 into the non-volatile memory subsystem 1040 on each backup operation. Moreover, in certain embodiments, the entire contents of the non-volatile memory subsystem 1040 are copied back into the volatile memory subsystem 1030 on each restore operation.
- the entire contents of the non-volatile memory subsystem 1040 are accessed for each backup and/or restore operation, such that the non-volatile memory subsystem 1040 (e.g., flash memory subsystem) is used generally uniformly across its memory space and wear-leveling is not performed by the memory system 1010 . In certain embodiments, avoiding wear-leveling can decrease cost and complexity of the memory system 1010 and can improve the performance of the memory system 1010 . In certain other embodiments, the entire contents of the volatile memory subsystem 1030 are not copied into the non-volatile memory subsystem 1040 on each backup operation, but only a partial copy is performed. In certain embodiments, other management capabilities such as bad-block management and error management for the flash memory elements of the non-volatile memory subsystem 1040 are performed in the controller 1062 .
- the memory system 1010 generally operates as a write-back cache in certain embodiments.
- the host system (e.g., a disk controller) writes data to the volatile memory subsystem 1030 , which then writes the data to non-volatile storage which is not part of the memory system 1010 , such as, for example, a hard disk.
- the disk controller may wait for an acknowledgment signal from the memory system 1010 indicating that the data has been written to the hard disk or is otherwise secure.
- the memory system 1010 of certain embodiments can decrease delays in the system operation by indicating that the data has been written to the hard disk before it has actually done so.
- the memory system 1010 will still be able to recover the data efficiently in the event of a power outage because of the backup and restore capabilities described herein.
- the memory system 1010 may be operated as a write-through cache or as some other type of cache.
- FIG. 16 schematically illustrates an example power module 1100 of the memory system 1010 in accordance with certain embodiments described herein.
- the power module 1100 provides power to the various components of the memory system 1010 using different elements based on a state of the memory system 1010 in relation to a trigger condition.
- the power module 1100 comprises one or more of the components described above with respect to FIG. 12 .
- the power module 1100 includes the second power supply 1080 and the switch 1090 .
- the power module 1100 provides a plurality of voltages to the memory system 1010 comprising volatile and non-volatile memory subsystems 1030 , 1040 .
- the plurality of voltages comprises at least a first voltage 1102 and a second voltage 1104 .
- the power module 1100 comprises an input 1106 providing a third voltage 1108 to the power module 1100 and a voltage conversion element 1120 configured to provide the second voltage 1104 to the memory system 1010 .
- the power module 1100 further comprises a first power element 1130 configured to selectively provide a fourth voltage 1110 to the conversion element 1120 .
- the first power element 1130 comprises a pulse-width modulation power controller.
- the first power element 1130 is configured to receive a 1.8V input system voltage as the third voltage 1108 and to output a modulated 5 V output as the fourth voltage 1110 .
- the power module 1100 further comprises a second power element 1140 that can be configured to selectively provide a fifth voltage 1112 to the conversion element 1120 .
- the power module 1100 can be configured to selectively provide the first voltage 1102 to the memory system 1010 either from the conversion element 1120 or from the input 1106 .
- the power module 1100 can be configured to be operated in at least three states in certain embodiments.
- in a first state, the first voltage 1102 is provided to the memory system 1010 from the input 1106 and the fourth voltage 1110 is provided to the conversion element 1120 from the first power element 1130 .
- in a second state, the fourth voltage 1110 is provided to the conversion element 1120 from the first power element 1130 and the first voltage 1102 is provided to the memory system 1010 from the conversion element 1120 .
- in a third state, the fifth voltage 1112 is provided to the conversion element 1120 from the second power element 1140 and the first voltage 1102 is provided to the memory system 1010 from the conversion element 1120 .
- the power module 1100 transitions from the first state to the second state upon detecting that a trigger condition is likely to occur and transitions from the second state to the third state upon detecting that the trigger condition has occurred. For example, the power module 1100 may transition to the second state when it detects that a power failure is about to occur and transitions to the third state when it detects that the power failure has occurred.
- providing the first voltage 1102 in the second state from the first power element 1130 rather than from the input 1106 allows a smoother transition from the first state to the third state. For example, in certain embodiments, providing the first voltage 1102 from the first power element 1130 has capacitive and other smoothing effects.
- switching the point of power transition to be between the conversion element 1120 and the first and second power elements 1130 , 1140 can smooth out potential voltage spikes.
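- The three states described above reduce to a small table: which element feeds the conversion element, and where the first voltage comes from. The C sketch below encodes that table; the state and source names are invented for illustration.

```c
enum pm_state   { PM_NORMAL, PM_TRIGGER_LIKELY, PM_TRIGGERED };
enum v1_source  { V1_FROM_INPUT, V1_FROM_CONVERSION };
enum conv_input { CONV_FROM_ELEM_1130, CONV_FROM_ELEM_1140 };

struct pm_config { enum v1_source v1; enum conv_input conv; };

static struct pm_config pm_configure(enum pm_state s)
{
    struct pm_config c;
    switch (s) {
    case PM_NORMAL:                    /* first state                */
        c.v1 = V1_FROM_INPUT;      c.conv = CONV_FROM_ELEM_1130; break;
    case PM_TRIGGER_LIKELY:            /* second state: pre-switch   */
        c.v1 = V1_FROM_CONVERSION; c.conv = CONV_FROM_ELEM_1130; break;
    case PM_TRIGGERED:                 /* third state: capacitors    */
    default:
        c.v1 = V1_FROM_CONVERSION; c.conv = CONV_FROM_ELEM_1140; break;
    }
    return c;
}
```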
- the second power element 1140 does not comprise a battery and may comprise one or more capacitors.
- the second power element 1140 comprises a capacitor array 1142 , a buck-boost converter 1144 which adjusts the voltage for charging the capacitor array and a voltage/current limiter 1146 which limits the charge current to the capacitor array 1142 and stops charging the capacitor array 1142 when it has reached a certain charge voltage.
- the capacitor array 1142 comprises two 50 farad capacitors capable of holding a total charge of 4.6V.
- the buck-boost converter 1144 receives a 1.8V system voltage (third voltage 1108 ) and boosts the voltage to 4.3V, which is outputted to the voltage/current limiter 1146 .
- the voltage/current limiter 1146 limits the current going to the capacitor array 1142 to 1 A and stops charging the array 1142 when it is charged to 4.3V.
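- As a rough sanity check of the charge figures, assume the two 50 F capacitors are in series (25 F effective, consistent with the ~4.6 V combined rating) and charged at the 1 A limit to 4.3 V; an ideal constant-current estimate lands in the same order of magnitude as the few-minute charge time quoted earlier.

```c
#include <stdio.h>

int main(void)
{
    double c_eff = 25.0;  /* farads: two 50 F caps in series (assumed) */
    double v_tgt = 4.3;   /* volts: limiter cut-off                    */
    double i_max = 1.0;   /* amps: charge-current limit                */

    /* t = C * dV / I for an ideal constant-current charge (~108 s);
     * real charging tapers, stretching this toward several minutes.  */
    printf("ideal charge time ~ %.0f s\n", c_eff * v_tgt / i_max);
    return 0;
}
```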
- alternative embodiments of the second power element 1140 may be used. For example, different components and/or components with different values may be used.
- a pure boost converter may be used instead of a buck-boost converter.
- only one capacitor may be used instead of a capacitor array 1142 .
- the conversion element 1120 can comprise one or more buck converters and/or one or more buck-boost converters.
- the conversion element 1120 may comprise a plurality of sub-blocks 1122 , 1124 , 1126 as schematically illustrated by FIG. 16 , which can provide more voltages in addition to the second voltage 1104 to the memory system 1010 .
- the sub-blocks may comprise various converter circuits such as buck-converters, boost converters, and buck-boost converter circuits for providing various voltage values to the memory system 1010 .
- sub-block 1122 comprises a buck converter, sub-block 1124 comprises a dual buck converter, and sub-block 1126 comprises a buck-boost converter, as schematically illustrated by FIG. 16 .
- the conversion element 1120 receives as input either the fourth voltage 1110 from the first power element 1130 or the fifth voltage 1112 from the second power element 1140 , depending on the state of the power module 1100 , and reduces the input to an appropriate amount for powering various components of the memory system.
- the buck-converter of sub-block 1122 can provide 1.8V at 2 A for about 60 seconds to the volatile memory elements 1032 (e.g., DRAM), the non-volatile memory elements 1042 (e.g., flash), and the controller 1062 (e.g., an FPGA) in one embodiment.
- the sub-block 1124 can provide the second voltage 1104 as well as another reduced voltage 1105 to the memory system 1010 .
- the second voltage 1104 is 2.5V and is used to power the at least one circuit 1052 (e.g., isolation device) and the other reduced voltage 1105 is 1.2V and is used to power the controller 1062 (e.g., FPGA).
- the sub-block 1126 can provide yet another voltage 1107 to the memory system 1010 .
- the voltage 1107 may be 3.3V and may be used to power both the controller 1062 and the at least one circuit 1052 .
- alternative embodiments of the conversion element 1120 may also be used.
- the volatile memory elements 1032 and nonvolatile memory elements 1042 are powered using independent voltages and are not both powered using the first voltage 1102 .
- FIG. 17 is a flowchart of an example method 1200 of providing a first voltage 1102 and a second voltage 1104 to a memory system 1010 including volatile and nonvolatile memory subsystems 1030 , 1040 . While the method 1200 is described herein by reference to the memory system 1010 schematically illustrated by FIGS. 12-15 , other memory systems are also compatible with embodiments of the method 1200 .
- the method 1200 comprises providing the first voltage 1102 to the memory system 1010 from an input power supply 1106 and providing the second voltage 1104 to the memory system 1010 from a first power subsystem in operational block 1210 .
- the first power subsystem comprises the first power element 1130 and the voltage conversion element 1120 described above with respect to FIG. 16 . In other embodiments, other first power subsystems are used.
- the method 1200 further comprises detecting a second condition in operational block 1220 .
- detecting the second condition comprises detecting that a trigger condition is likely to occur.
- the method 1200 comprises providing the first voltage 1102 and the second voltage 1104 to the memory system 1010 from the first power subsystem in an operational block 1230 .
- a switch 1148 can be toggled to provide the first voltage 1102 from the conversion element 1120 rather than from the input power supply.
- the method 1200 further comprises charging a second power subsystem in operational block 1240 .
- the second power subsystem comprises the second power element 1140 or another power supply that does not comprise a battery.
- the second power subsystem comprises the second power element 1140 and the voltage conversion element 1120 described above with respect to FIG. 16 . In other embodiments, some other second power subsystem is used.
- the method 1200 further comprises detecting a third condition in an operational block 1250 and during the third condition, providing the first voltage 1102 and the second voltage 1104 to the memory system 1010 from the second power subsystem 1140 in an operational block 1260 .
- detecting the third condition comprises detecting that the trigger condition has occurred.
- the trigger condition may comprise various conditions described herein.
- the trigger condition comprises a power reduction, power failure, or system hang-up.
- the operational blocks of the method 1200 may be performed in different orders in various embodiments. For example, in certain embodiments, the second power subsystem 1140 is charged before detecting the second condition.
- the memory system 1010 comprises a volatile memory subsystem 1030 and a non-volatile memory subsystem 1040 comprising at least 100 percent more storage capacity than does the volatile memory subsystem.
- the memory system 1010 also comprises a controller 1062 operatively coupled to the volatile memory subsystem 1030 and operatively coupled to the non-volatile memory subsystem 1040 .
- the controller 1062 can be configured to allow data to be communicated between the volatile memory subsystem 1030 and the host system when the memory system 1010 is operating in a first state and to allow data to be communicated between the volatile memory subsystem 1030 and the non-volatile memory subsystem 1040 when the memory system 1010 is operating in a second state.
- while the memory system 1010 having extra storage capacity in the non-volatile memory subsystem 1040 has been described with respect to certain embodiments, alternative configurations exist. For example, in certain embodiments, there may be more than 100 percent more storage capacity in the non-volatile memory subsystem 1040 than in the volatile memory subsystem 1030 . In various embodiments, there may be at least 200, 300, or 400 percent more storage capacity in the non-volatile memory subsystem 1040 than in the volatile memory subsystem 1030 . In other embodiments, the non-volatile memory subsystem 1040 includes at least some other integer multiple of the storage capacity of the volatile memory subsystem 1030 .
- the non-volatile memory subsystem 1040 includes a non-integer multiple of the storage capacity of the volatile memory subsystem 1030 . In one embodiment, the non-volatile memory subsystem 1040 includes less than 100 percent more storage capacity than does the volatile memory subsystem 1030 .
- the extra storage capacity of the non-volatile memory subsystem 1040 can be used to improve the backup capability of the memory system 1010 .
- the extra storage capacity of the nonvolatile memory subsystem 1040 allows the volatile memory subsystem 1030 to be backed up in the event of a subsequent power failure or other trigger event.
- the extra storage capacity of the non-volatile memory subsystem 1040 may allow the memory system 1010 to backup the volatile memory subsystem 1030 efficiently in the event of multiple trigger conditions (e.g., power failures).
- the data in the volatile memory system 1030 is copied to a first, previously erased portion of the nonvolatile memory subsystem 1040 via the controller 1062 . Since the non-volatile memory subsystem 1040 has more storage capacity than does the volatile memory subsystem 1030 , there is a second portion of the non-volatile memory subsystem 1040 which does not have data from the volatile memory subsystem 1030 copied to it and which remains free of data (e.g., erased).
- the controller 1062 of the memory system 1010 restores the data to the volatile memory subsystem 1030 by copying the backed-up data from the non-volatile memory subsystem 1040 back to the volatile memory subsystem 1030 . After the data is restored, the memory system 1010 erases the non-volatile memory subsystem 1040 . While the first portion of the non-volatile memory subsystem 1040 is being erased, it may be temporarily inaccessible.
- the volatile memory subsystem 1030 can be backed-up or stored again in the second portion of the non-volatile memory subsystem 1040 as described herein.
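- The alternating use of two erased regions sketched above is essentially double buffering. The following C fragment illustrates the idea under assumed names; the actual controller logic is not specified at this level of detail.

```c
#include <stdbool.h>

struct nv_region { bool erased; bool holds_backup; };

static struct nv_region region[2] = { { true, false }, { true, false } };
static int active;                     /* region used for the next backup */

static void backup(void)               /* on a trigger condition */
{
    region[active].holds_backup = true;
    region[active].erased = false;
}

static void restore_and_recycle(void)  /* after power returns */
{
    /* ... copy region[active] back to the volatile memory ... */
    int old = active;
    active = 1 - active;               /* next backup targets the other region */
    region[old].holds_backup = false;
    region[old].erased = true;         /* erase can proceed in the background  */
}
```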
- the extra storage capacity of the non-volatile memory subsystem 1040 may allow the memory system 1010 to operate more efficiently. For example, because of the extra storage capacity of the non-volatile memory subsystem 1040 , the memory system 1010 can handle a higher frequency of trigger events that is not limited by the erase time of the non-volatile memory subsystem 1040 .
- FIG. 18 is a flowchart of an example method 1300 of controlling a memory system 1010 operatively coupled to a host system and which includes a volatile memory subsystem 1030 and a non-volatile memory subsystem 1040 .
- the non-volatile memory subsystem 1040 comprises at least 100 percent more storage capacity than does the volatile memory subsystem 1030 as described herein. While the method 1300 is described herein by reference to the memory system 1010 schematically illustrated by FIGS. 12-14 , the method 1300 can be practiced using other memory systems in accordance with certain embodiments described herein.
- the method 1300 comprises communicating data between the volatile memory subsystem 1030 and the host system when the memory system 1010 is in a first mode of operation.
- the method 1300 further comprises storing a first copy of data from the volatile memory subsystem 1030 to the non-volatile memory subsystem 1040 at a first time when the memory system 1010 is in a second mode of operation in an operational block 1320 .
- the method 1300 comprises restoring the first copy of data from the non-volatile memory subsystem 1040 to the volatile memory subsystem 1030 .
- the method 1300 further comprises erasing the first copy of data from the non-volatile memory subsystem 1040 in an operational block 1340 .
- the method further comprises storing a second copy of data from the volatile memory subsystem 1030 to the non-volatile memory subsystem 1040 at a second time when the memory system 1010 is in the second mode of operation in an operational block 1350 . Storing the second copy begins before the first copy is completely erased from the non-volatile memory subsystem 1040 .
- the memory system 1010 enters the second mode of operation in response to a trigger condition, such as a power failure.
- the first copy of data and the second copy of data are stored in separate portions of the nonvolatile memory subsystem 1040 .
- the method 1300 can also include restoring the second copy of data from the non-volatile memory subsystem 1040 to the volatile memory subsystem 1030 in an operational block 1360 .
- the operational blocks of method 1300 referred to herein may be performed in different orders in various embodiments. For example, in some embodiments, the second copy of data is restored to the volatile memory subsystem 1030 at operational block 1360 before the first copy of data is completely erased in the operational block 1340 .
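- The following is a minimal C sketch of the ping-pong backup scheme of method 1300, assuming a non-volatile memory large enough to hold two full images of the volatile memory. All identifiers (copy_dram_to_flash, copy_flash_to_dram, flash_erase, flash_region_t) are hypothetical stand-ins for the hardware behavior described above, and the erase is shown synchronously, whereas in the embodiments above a new backup into the other, already erased portion can begin before the erase of the stale portion completes.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the hardware copy/erase primitives; the
 * disclosure describes behavior, not a software API. */
static void copy_dram_to_flash(int r) { printf("backup  -> region %d\n", r); }
static void copy_flash_to_dram(int r) { printf("restore <- region %d\n", r); }
static void flash_erase(int r)        { printf("erase      region %d\n", r); }

typedef struct { bool erased; } flash_region_t;

static flash_region_t region[2] = { { true }, { true } };
static int active = -1;   /* region holding the latest backup, if any */

/* On a trigger condition, back up into whichever region is already
 * erased, so the backup never waits for the erase of a stale image
 * (operational blocks 1320 and 1350). */
static int backup_volatile_memory(void)
{
    int target = region[0].erased ? 0 : (region[1].erased ? 1 : -1);
    if (target < 0)
        return -1;                      /* no erased region available */
    copy_dram_to_flash(target);
    region[target].erased = false;
    active = target;
    return 0;
}

/* When power returns, restore from the active region and erase it
 * (operational blocks 1330 and 1340); the other, still-erased region
 * remains ready to accept a new backup in the meantime. */
static void restore_and_recycle(void)
{
    copy_flash_to_dram(active);
    flash_erase(active);
    region[active].erased = true;
    active = -1;
}

int main(void)
{
    backup_volatile_memory();   /* first trigger event  */
    restore_and_recycle();      /* power restored       */
    backup_volatile_memory();   /* second trigger event */
    return 0;
}
```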
- FIG. 19 schematically illustrates an example clock distribution topology 1400 of a memory system 1010 in accordance with certain embodiments described herein.
- the clock distribution topology 1400 generally illustrates the creation and routing of the clock signals provided to the various components of the memory system 1010 .
- a clock source 1402 such as, for example, a 25 MHz oscillator, generates a clock signal.
- the clock source 1402 may feed a clock generator 1404 which provides a clock signal 1406 to the controller 1062 , which may be an FPGA.
- the clock generator 1404 generates a 125 MHz clock signal 1406 .
- the controller 1062 receives the clock signal 1406 and uses it to clock the master state control logic of the controller 1062 .
- the master state control logic may control the general operation of an FPGA controller 1062 .
- the clock signal 1406 can also be input into a clock divider 1410 which produces a frequency-divided version of the clock signal 1406 .
- the clock divider 1410 is a divide-by-two clock divider and produces a 62.5 MHz clock signal in response to the 125 MHz clock signal 1406 .
- a non-volatile memory phase-locked loop (PLL) block 1412 can be included (e.g., in the controller 1062 ) which distributes a series of clock signals to the non-volatile memory subsystem 1040 and to associated control logic.
- a series of clock signals 1414 can be sent from the controller 1062 to the non-volatile memory subsystem 1040 .
- Another clock signal 1416 can be used by the controller logic which is dedicated to controlling the non-volatile memory subsystem 1040 .
- the clock signal 1416 may clock the portion of the controller 1062 which is dedicated to generating address and/or control lines for the non-volatile memory subsystem 1040 .
- a feedback clock signal 1418 is fed back into the non-volatile memory PLL block 1412 .
- the PLL block 1412 compares the feedback clock 1418 to the reference clock 1411 and varies the phase and frequency of its output until the reference 1411 and feedback 1418 clocks are phase and frequency matched.
- a version of the clock signal 1406 such as the backup clock signal 1408 may be sent from the controller to the volatile memory subsystem 1030 .
- the clock signal 1408 may be, for example, a differential version of the clock signal 1406 .
- the backup clock signal 1408 may be used to clock the volatile memory subsystem 1030 when the memory system 1010 is backing up the data from the volatile memory subsystem 1030 into the non-volatile memory subsystem 1040 .
- the backup clock signal 1408 may also be used to clock the volatile memory subsystem 1030 when the memory system 1010 is copying the backed-up data back into the volatile memory subsystem 1030 from the nonvolatile memory subsystem 1040 (also referred to as restoring the volatile memory subsystem 1030 ).
- the volatile memory subsystem 1030 may normally be run at a higher frequency (e.g., DRAM running at 400 MHz) than the nonvolatile memory subsystem 1040 (e.g., flash memory running at 62.5 MHz) when communicating with the host system (e.g., when no trigger condition is present). However, in certain embodiments the volatile memory subsystem 1030 may be operated at a reduced frequency (e.g., at twice the frequency of the non-volatile memory subsystem 1040 ) without introducing significant delay into the system during backup operation and/or restore operations. Running the volatile memory subsystem 1030 at the reduced frequency during a backup and/or restore operation may advantageously reduce overall power consumption of the memory system 1010 .
- the backup clock 1408 and the volatile memory system clock signal 1420 are received by a multiplexer 1422 , as schematically illustrated by FIG. 19 .
- the multiplexer 1422 can output either the volatile memory system clock signal 1420 or the backup clock signal 1408 depending on the backup state of the memory system 1010 .
- the volatile memory system clock signal 1420 may be provided by the multiplexer 1422 to the volatile memory PLL block 1424 when the memory system 1010 is not performing a backup or restore operation. When the memory system 1010 is backing up or restoring data, the backup clock signal 1408 may be provided instead.
- the volatile memory PLL block 1424 receives the volatile memory reference clock signal 1423 from the multiplexer 1422 and can generate a series of clock signals which are distributed to the volatile memory subsystem 1030 and associated control logic. For example, in one embodiment, the PLL block 1424 generates a series of clock signals 1426 which clock the volatile memory elements 1032. A clock signal 1428 may be used to clock control logic associated with the volatile memory elements, such as one or more registers (e.g., the one or more registers of a registered DIMM). Another clock signal 1430 may be sent to the controller 1062. A feedback clock signal 1432 is fed back into the volatile memory PLL block 1424. In one embodiment, the PLL block 1424 compares the feedback clock signal 1432 to the reference clock signal 1423 and varies the phase and frequency of its output until the reference clock signal 1423 and the feedback clock signal 1432 are phase and frequency matched.
- the clock signal 1430 may be used by the controller 1062 to generate and distribute clock signals which will be used by controller logic which is configured to control the volatile memory subsystem 1030 .
- control logic in the controller 1062 may be used to control the volatile memory subsystem 1030 during a backup or restore operation.
- the clock signal 1430 may be used as a reference clock signal for the PLL block 1434 which can generate one or more clocks 1438 used by logic in the controller 1062 .
- the PLL block 1434 may generate one or more clock signals 1438 used to drive logic circuitry associated with controlling the volatile memory subsystem 1030 .
- the PLL block 1434 includes a feedback clock signal 1436 and operates in a similar manner to other PLL blocks described herein.
- the clock signal 1430 may be used as a reference clock signal for the PLL block 1440 which may generate one or more clock signals used by a sub-block 1442 to generate one or more other clock signals 1444 .
- the volatile memory subsystem 1030 comprises DDR2 SDRAM elements and the sub-block 1442 generates one or more DDR2 compatible clock signals 1444 .
- a feedback clock signal 1446 is fed back into the PLL block 1440 .
- the PLL block 1440 operates in a similar manner to other PLL blocks described herein.
- the volatile memory subsystem 1030 operates on the volatile memory clock signal 1420 and there is no backup clock signal 1408 . In some embodiments, the volatile memory subsystem 1030 is operated at a reduced frequency during a backup operation and not during a restore operation. In other embodiments, the volatile memory subsystem 1030 is operated at a reduced frequency during a restore operation and not during a backup operation.
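- As a rough illustration of the clock selection described above, the following C sketch models multiplexer 1422 choosing the reference clock for the volatile memory PLL block 1424. The frequencies are the examples given in the text (the 125 MHz clock 1406 and the 62.5 MHz flash clock); the 400 MHz system clock value and all function names are assumptions for illustration only.

```c
#include <stdio.h>

typedef enum { MODE_NORMAL, MODE_BACKUP_OR_RESTORE } mem_mode_t;

/* Example frequencies from the text: a 25 MHz source feeds a clock
 * generator producing the 125 MHz clock 1406; a divide-by-two yields
 * the 62.5 MHz flash clock. */
static const double clock_1406_mhz  = 125.0;
static const double flash_clock_mhz = 125.0 / 2.0;   /* 62.5 MHz */

/* Models multiplexer 1422: the volatile-memory PLL reference is the
 * host system clock 1420 in normal operation, and the slower backup
 * clock 1408 (a version of clock 1406) during backup/restore. */
static double volatile_mem_ref_clock_mhz(mem_mode_t mode)
{
    const double system_clock_1420 = 400.0;  /* assumed DRAM clock */
    const double backup_clock_1408 = clock_1406_mhz;
    return (mode == MODE_NORMAL) ? system_clock_1420 : backup_clock_1408;
}

int main(void)
{
    printf("normal operation : %.1f MHz\n",
           volatile_mem_ref_clock_mhz(MODE_NORMAL));
    printf("backup/restore   : %.1f MHz (2x the %.1f MHz flash clock)\n",
           volatile_mem_ref_clock_mhz(MODE_BACKUP_OR_RESTORE),
           flash_clock_mhz);
    return 0;
}
```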
- FIG. 20 is a flowchart of an example method 1500 of controlling a memory system 1010 operatively coupled to a host system. Although described with respect to the memory system 1010 described herein, the method 1500 is compatible with other memory systems.
- the memory system 1010 may include a clock distribution topology 1400 similar to the one described above with respect to FIG. 19 or another clock distribution topology.
- the memory system 1010 can include a volatile memory subsystem 1030 and a non-volatile memory subsystem 1040 .
- the method 1500 comprises operating the volatile memory subsystem 1030 at a first frequency when the memory system 1010 is in a first mode of operation in which data is communicated between the volatile memory subsystem 1030 and the host system.
- the method 1500 comprises operating the non-volatile memory subsystem 1040 at a second frequency when the memory system 1010 is in a second mode of operation in which data is communicated between the volatile memory subsystem 1030 and the non-volatile memory subsystem 1040 .
- the method 1500 further comprises operating the volatile memory subsystem 1030 at a third frequency in an operational block 1530 when the memory system 1010 is in the second mode of operation.
- the memory system 1010 is not powered by a battery when it is in the second mode of operation.
- the memory system 1010 may switch from the first mode of operation to the second mode of operation in response to a trigger condition.
- the trigger condition may be any trigger condition described herein such as, for example, a power failure condition.
- the second mode of operation includes both backup and restore operations as described herein.
- the second mode of operation includes backup operations but not restore operations.
- the second mode of operation includes restore operations but not backup operations.
- the third frequency can be less than the first frequency.
- the third frequency can be approximately equal to the second frequency.
- the reduced frequency operation is an optional mode.
- the first, second and/or third frequencies are configurable by a user or by the memory system 1010 .
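- The following C sketch, offered only as an illustration, encodes the frequency relations stated for method 1500: the third frequency may be less than the first and approximately equal to the second. The freq_config_t structure and the 5% tolerance standing in for "approximately equal" are assumptions, not part of the disclosure.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical representation of the three operating frequencies of
 * method 1500; the disclosure leaves them configurable. */
typedef struct {
    double first_mhz;   /* volatile memory, first mode (host traffic)    */
    double second_mhz;  /* non-volatile memory, second mode              */
    double third_mhz;   /* volatile memory, second mode (backup/restore) */
} freq_config_t;

/* Checks the stated relations: third below first, third near second. */
static bool config_plausible(const freq_config_t *c)
{
    const double tol = 0.05;   /* "approximately equal": within 5% (assumed) */
    bool third_below_first = c->third_mhz < c->first_mhz;
    bool third_near_second =
        c->third_mhz >= c->second_mhz * (1.0 - tol) &&
        c->third_mhz <= c->second_mhz * (1.0 + tol);
    return third_below_first && third_near_second;
}

int main(void)
{
    freq_config_t cfg = { 400.0, 62.5, 62.5 };  /* example values from text */
    printf("config %s\n", config_plausible(&cfg) ? "plausible" : "rejected");
    return 0;
}
```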
- FIG. 21 schematically illustrates an example topology of a connection to transfer data slices from two DRAM segments 1630 , 1640 of a volatile memory subsystem 1030 of a memory system 1010 to a controller 1062 of the memory system 1010 . While the example of FIG. 21 shows a topology including two DRAM segments 1630 , 1640 for the purposes of illustration, each address location of the volatile memory subsystem 1030 comprises more than the two segments in certain embodiments.
- the data lines 1632 , 1642 from the first DRAM segment 1630 and the second DRAM segment 1640 of the volatile memory subsystem 1030 are coupled to switches 1650 , 1652 which are coupled to the controller 1062 (e.g., logic element 1070 ) of the memory system 1010 .
- the chip select lines 1634 , 1644 and the self-refresh lines 1636 , 1646 (e.g., CKe signals) of the first and second DRAM segments 1630 , 1640 , respectively, are coupled to the controller 1062 .
- the controller 1062 comprises a buffer (not shown) which is configured to store data from the volatile memory subsystem 1030 .
- the buffer is a first-in, first-out (FIFO) buffer.
- data slices from each DRAM segment 1630 , 1640 comprise a portion of the volatile memory subsystem data bus.
- the volatile memory subsystem 1030 comprises a 72-bit data bus (e.g., each data word at each addressable location is 72 bits wide and includes, for example, 64 bits of accessible SDRAM and 8 bits of ECC), the first data slice from the first DRAM segment 1630 may comprise 40 bits of the data word, and the second data slice from the second DRAM segment 1640 may comprise the remaining 32 bits of the data word.
- Certain other embodiments comprise data buses and/or data slices of different sizes.
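- To make the slice sizes concrete, the following C sketch splits a 72-bit data word (64 data bits plus 8 ECC bits) into a 40-bit slice and a 32-bit slice and recombines them, as the controller 1062 does after both slices have been buffered. The particular assignment of bits to each slice is an assumption for illustration; the text specifies only the slice widths.

```c
#include <stdint.h>
#include <stdio.h>

/* A 72-bit data word modeled as a 64-bit payload plus an 8-bit ECC
 * field; the exact bit layout is assumed for illustration. */
typedef struct {
    uint64_t data;  /* 64 accessible SDRAM bits */
    uint8_t  ecc;   /* 8 ECC bits               */
} word72_t;

/* Split into the two slices described above: a 40-bit slice (here the
 * low 32 data bits plus the 8 ECC bits) and a 32-bit slice (the high
 * 32 data bits). */
static void split_word(word72_t w, uint64_t *slice40, uint32_t *slice32)
{
    *slice40 = ((uint64_t)w.ecc << 32) | (w.data & 0xFFFFFFFFu);
    *slice32 = (uint32_t)(w.data >> 32);
}

/* Inverse operation, as performed when the controller recombines the
 * buffered slices into a complete word before writing it to flash. */
static word72_t join_word(uint64_t slice40, uint32_t slice32)
{
    word72_t w;
    w.data = ((uint64_t)slice32 << 32) | (slice40 & 0xFFFFFFFFu);
    w.ecc  = (uint8_t)(slice40 >> 32);
    return w;
}

int main(void)
{
    word72_t w = { 0x0123456789ABCDEFull, 0x5A }, back;
    uint64_t s40;
    uint32_t s32;
    split_word(w, &s40, &s32);
    back = join_word(s40, s32);
    printf("round trip %s\n",
           (back.data == w.data && back.ecc == w.ecc) ? "ok" : "FAILED");
    return 0;
}
```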
- the switches 1650 , 1652 can each be switched to selectively and operatively couple the data lines 1632 , 1642 , respectively, from the first and second DRAM segments 1630 , 1640 to the controller 1062 .
- the chip select lines 1634 , 1644 enable the first and second DRAM segments 1630 , 1640 , respectively, of the volatile memory subsystem 1030 , and the self-refresh lines 1636 , 1646 toggle the first and second DRAM segments 1630 , 1640 , respectively, from self-refresh mode to active mode.
- the first and second DRAM segments 1630 , 1640 maintain stored information but are not accessible when they are in self-refresh mode, and maintain stored information and are accessible when they are in active mode.
- data slices from only one of the two DRAM segments 1630 , 1640 at a time are sent to the controller 1062 .
- the controller 1062 sends a signal via the CKe line 1636 to the first DRAM segment 1630 to put the first DRAM segment 1630 in active mode.
- the data slice from the first DRAM segment 1630 for multiple words is written to the controller 1062 before writing the second data slice from the second DRAM segment 1640 to the controller 1062 .
- While the first data slice is being written to the controller 1062 , the controller 1062 also sends a signal via the CKe line 1646 to put the second DRAM segment 1640 in self-refresh mode. Once the first data slice for one word or for a block of words is written to the controller 1062 , the controller 1062 puts the first DRAM segment 1630 into self-refresh mode by sending a signal via the CKe line 1636 to the first DRAM segment 1630 . The controller 1062 also puts the second DRAM segment 1640 into active mode by sending a signal via the CKe line 1646 to the second DRAM segment 1640 . The second data slice for a word or for a block of words is then written to the controller 1062 .
- When the first and second data slices have been written to the buffer in the controller 1062 , the controller 1062 combines them into complete words or blocks of words and then writes each complete word or block of words to the non-volatile memory subsystem 1040 . In certain embodiments, this process is called “slicing” the volatile memory subsystem 1030 .
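- The following C sketch illustrates the backup-direction slicing sequence just described: one DRAM segment is held in self-refresh while a block of slices from the other segment is read into the controller's buffer, after which the buffered slices are combined into complete 72-bit words and written to the non-volatile memory. The hardware hooks (set_cke, read_slice, flash_write_word) and the block size are hypothetical, and ECC generation/checking is omitted.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical hardware hooks; the disclosure describes signal
 * behavior (CKe toggling, a FIFO in the controller), not an API. */
static void set_cke(int segment, int active)
{ printf("segment %d -> %s\n", segment, active ? "active" : "self-refresh"); }
static uint64_t read_slice(int segment, int word)
{ (void)word; return segment == 0 ? 0x123456789Full : 0x89ABCDEFul; }
static void flash_write_word(uint64_t d, uint8_t e)
{ printf("flash <= data %016llx ecc %02x\n", (unsigned long long)d, (unsigned)e); }

#define BLOCK_WORDS 4

int main(void)
{
    uint64_t slice40[BLOCK_WORDS];   /* 40-bit slices, segment 1630 */
    uint32_t slice32[BLOCK_WORDS];   /* 32-bit slices, segment 1640 */

    /* Read the first slice of a block of words while the other
     * segment sits in low-power self-refresh. */
    set_cke(0, 1); set_cke(1, 0);
    for (int i = 0; i < BLOCK_WORDS; i++)
        slice40[i] = read_slice(0, i);
    set_cke(0, 0);

    /* Then do the same for the second segment. */
    set_cke(1, 1);
    for (int i = 0; i < BLOCK_WORDS; i++)
        slice32[i] = (uint32_t)read_slice(1, i);
    set_cke(1, 0);

    /* Combine the buffered slices into complete 72-bit words and
     * write them to the non-volatile memory. */
    for (int i = 0; i < BLOCK_WORDS; i++) {
        uint64_t data = ((uint64_t)slice32[i] << 32) | (slice40[i] & 0xFFFFFFFFu);
        uint8_t  ecc  = (uint8_t)(slice40[i] >> 32);
        flash_write_word(data, ecc);
    }
    return 0;
}
```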
- the data may be sliced in a restore operation as well as, or instead of, during a backup operation.
- the nonvolatile memory elements 1042 write each backed-up data word to the controller 1062 which writes a first slice of the data word to the volatile memory subsystem 1030 and then a second slice of the data word to the volatile memory subsystem 1030 .
- slicing the volatile memory subsystem 1030 during a restore operation may be performed in a manner generally inverse to slicing the volatile memory subsystem 1030 during a backup operation.
- FIG. 22 is a flowchart of an example method 1600 of controlling a memory system 1010 operatively coupled to a host system and which includes a volatile memory subsystem 1030 and a non-volatile memory subsystem 1040 .
- the method 1600 comprises communicating data words between the volatile memory subsystem 1030 and the host system when the memory system 1010 is in a first mode of operation in an operational block 1610 .
- the memory system 1010 may be in the first mode of operation when no trigger condition has occurred and the memory system is not performing a backup and/or restore operation or is not being powered by a secondary power supply.
- the method further comprises transferring data words from the volatile memory subsystem 1030 to the non-volatile memory subsystem 1040 when the memory system 1010 is in a second mode of operation.
- each data word comprises the data stored in a particular address of the memory system 1010 .
- the memory system 1010 may enter the second mode of operation, for example, when a trigger condition (e.g., a power failure) occurs.
- transferring each data word comprises storing a first portion (also referred to as a slice) of the data word in a buffer in an operational block 1622 , storing a second portion of the data word in the buffer in an operational block 1624 , and writing the entire data word from the buffer to the non-volatile memory subsystem 1040 in an operational block 1626 .
- the data word may be a 72-bit data word (e.g., 64 bits of accessible SDRAM and 8 bits of ECC), the first portion (or “slice”) may comprise 40 bits of the data word, and the second portion (or “slice”) may comprise the remaining 32 bits of the data word.
- the buffer is included in the controller 1062 .
- the buffer is a first-in, first-out buffer implemented in the controller 1062 which comprises an FPGA.
- the method 1600 may generally be referred to as “slicing” the volatile memory during a backup operation.
- the process of “slicing” the volatile memory during a backup includes bringing the 32-bit slice out of self-refresh, reading a 32-bit block from the slice into the buffer, and putting the 32-bit slice back into self-refresh.
- the 40-bit slice is then brought out of self-refresh and a 40-bit block from the slice is read into a buffer.
- Each block may comprise a portion of multiple words.
- each 32-bit block may comprise 32-bit portions of multiple 72-bit words.
- each block comprises a portion of a single word.
- the 40-bit slice is then put back into self-refresh in the example embodiment.
- the 32-bit and 40-bit slices are then combined into a 72-bit block by the controller 1062 and ECC detection/correction is performed on each 72-bit word as it is read from the buffer and written into the non-volatile memory subsystem (e.g., flash).
- the entire data word may comprise more than two portions.
- the entire data word may comprise three portions instead of two and transferring each data word further comprises storing a third portion of each data word in the buffer.
- the data word may comprise more than three portions.
- the data may be sliced in a restore operation as well as, or instead of, during a backup operation.
- the non-volatile memory elements 1042 write each backed-up data word to the controller 1062 which writes a first portion of the data word to the volatile memory subsystem 1030 and then a second portion of the data word to the volatile memory subsystem 1030 .
- slicing the volatile memory subsystem 1030 during a restore operation may be performed in a manner generally inverse to slicing the volatile memory subsystem 1030 during a backup operation.
- the method 1600 can advantageously provide significant power savings and can lead to other advantages.
- in certain embodiments in which the volatile memory subsystem 1030 comprises DRAM elements, only the slice of the DRAM which is currently being accessed (e.g., written to the buffer) during a backup is configured in full-operational mode.
- the slice or slices that are not being accessed may be put in self-refresh mode. Because DRAM in self-refresh mode uses significantly less power than DRAM in full-operational mode, the method 1600 can allow significant power savings.
- each slice of the DRAM includes a separate self-refresh enable (e.g., CKe) signal which allows each slice to be accessed independently.
- the connection between the DRAM elements and the controller 1062 may be as large as the largest slice instead of as large as the full data bus.
- the connection between the controller 1062 and the DRAM may be 40 bits instead of 72 bits.
- pins on the controller 1062 may be used for other purposes or a smaller controller may be used due to the relatively low number of pin-outs used to connect to the volatile memory subsystem 1030 .
- the full width of the data bus is connected between the volatile memory subsystem 1030 and the controller 1062 but only a portion of it is used during slicing operations.
- memory slicing is an optional mode.
Description
- This application is a continuation of U.S. patent application Ser. No. 17/138,766, filed Dec. 30, 2020, titled “Flash-Dram Hybrid Memory”, now U.S. Pat. No. 11,016,918, which is a continuation of U.S. patent application Ser. No. 15/934,416, filed Mar. 23, 2018, titled “Flash-Dram Hybrid Memory Module,” which is a continuation of U.S. patent application Ser. No. 14/840,865, filed Aug. 31, 2015, titled “Flash-Dram Hybrid Memory Module,” now U.S. Pat. No. 9,928,186, which is a continuation of U.S. patent application Ser. No. 14/489,269, filed Sep. 17, 2014, titled “Flash-Dram Hybrid Memory Module,” now U.S. Pat. No. 9,158,684, which is a continuation of U.S. patent application Ser. No. 13/559,476, filed Jul. 26, 2012, titled “Flash-Dram Hybrid Memory Module,” now U.S. Pat. No. 8,874,831, which claims the benefit of U.S. Provisional Patent Application No. 61/512,871, filed Jul. 28, 2011, and is a continuation-in-part of U.S. patent application Ser. No. 12/240,916, filed Sep. 29, 2008, titled “Non-Volatile Memory Module,” now U.S. Pat. No. 8,301,833, which is a continuation of U.S. patent application Ser. No. 12/131,873, filed Jun. 2, 2008, which claims the benefit of U.S. Provisional Patent Application No. 60/941,586, filed Jun. 1, 2007, the contents of all of which are incorporated herein by reference in their entirety.
- This application may be considered related to U.S. patent application Ser. No. 14/173,242, titled “Isolation Switching For Backup Of Registered Memory,” filed Feb. 5, 2014, which is a continuation of U.S. patent application Ser. No. 13/905,053, titled “Isolation Switching For Backup Of Registered Memory,” filed May 29, 2013, now U.S. Pat. No. 8,677,060, issued Mar. 18, 2014, which is a continuation of U.S. patent application Ser. No. 13/536,173, titled “Data Transfer Scheme For Non-Volatile Memory Module,” filed Jun. 28, 2012, now U.S. Pat. No. 8,516,187, issued Aug. 20, 2013, which is a divisional of U.S. patent application Ser. No. 12/240,916, titled “Non-Volatile Memory Module,” filed Sep. 29, 2008, now U.S. Pat. No. 8,301,833, issued Oct. 30, 2012, which is a continuation of U.S. patent application Ser. No. 12/131,873, filed Jun. 2, 2008, now abandoned, which claims the benefit of U.S. Provisional Application No. 60/941,586, filed Jun. 1, 2007, the contents of which are incorporated by reference herein in their entirety.
- This application may also be considered related to U.S. patent application Ser. No. 15/000,834, filed Jan. 19, 2016 (abandoned), which is a continuation of U.S. patent application Ser. No. 14/489,332, filed Sep. 17, 2014, now U.S. Pat. No. 9,269,437, which is a continuation of U.S. patent application Ser. No. 14/173,219, filed Feb. 5, 2014, now U.S. Pat. No. 8,904,099, which is a continuation of U.S. patent application Ser. No. 13/905,048, filed May 29, 2013, now U.S. Pat. No. 8,671,243, which is a continuation of U.S. patent application Ser. No. 13/536,173, referenced above.
- This application may also be considered related to U.S. patent application Ser. No. 15/924,866 (abandoned), which is a continuation of U.S. patent application Ser. No. 14/489,281, filed Sep. 17, 2014, now U.S. Pat. No. 9,921,762, which is a continuation of U.S. patent application Ser. No. 13/625,563, filed Sep. 24, 2012, now U.S. Pat. No. 8,904,098, which claims the benefit of U.S. Provisional Application No. 61/583,775, filed Sep. 23, 2011.
- The present disclosure relates generally to computer memory devices, and more particularly, to devices that employ different types of memory devices such as combinations of Flash and random access memories.
- As technology advances and the usage of portable computing devices, such as tablet and notebook computers, increases, more data needs to be transferred among data centers and to/from end users. In many cases, data centers are built by clustering multiple servers that are networked together to increase performance.
- Although there are many types of networked servers that are specific to the types of applications envisioned, the basic concept is generally to increase server performance by dynamically allocating computing and storage resources. In recent years, server technology has evolved to be specific to particular applications such as ‘finance transactions’ (for example, point-of-service, inter-bank transaction, stock market transaction), ‘scientific computation’ (for example, fluid dynamics for automobile and ship design, weather prediction, oil and gas expeditions), ‘medical diagnostics’ (for example, diagnostics based on fuzzy logic, medical data processing), ‘simple information sharing and searching’ (for example, web search, retail store website, company home page), ‘email’ (information distribution and archive), ‘security service’, ‘entertainment’ (for example, video-on-demand), and so on. However, all of these applications suffer from the same information transfer bottleneck due to the inability of a high speed CPU (central processing unit) to efficiently transfer data in and out of relatively slower speed storage or memory subsystems, particularly since data transfers typically pass through the CPU input/output (I/O) channels.
- The data transfer limitations by the CPU are exemplified by the arrangement shown in FIG. 1 , and apply to data transfers between main storage (for example, the hard disk (HD) or solid state drive (SSD)) and the memory subsystems (for example, DRAM DIMM (Dynamic Random Access Memory Dual In-line Memory Module)) connected to the front side bus (FSB). In arrangements such as that of FIG. 1 , the SSD/HD and DRAM DIMM of a conventional memory arrangement are connected to the CPU via separate memory control ports (not shown). FIG. 1 specifically shows, through the double-headed arrow, the data flow path between the computer or server main storage (SSD/HD) and the DRAM DIMMs. Since the SSD/HD data I/O and the DRAM DIMM data I/O are controlled by the CPU, the CPU needs to allocate its process cycles to control these I/Os, which may include the IRQ (Interrupt Request) service which the CPU performs periodically. As will be appreciated, the more time a CPU allocates to controlling the data transfer traffic, the less time the CPU has to perform other tasks. Therefore, the overall performance of a server will deteriorate with the increased amount of time the CPU has to expend in performing data transfers.
- There have been various approaches to increasing the data transfer throughput rates from/to the main storage, such as SSD/HD, to local storage, such as DRAM DIMM. In one example, as illustrated in FIG. 2 , EcoRAM™ developed by Spansion provides a storage SSD based system that assumes the physical form factor of a DIMM. The EcoRAM™ is populated with Flash memories and a relatively small memory capacity using DRAMs which serve as a data buffer. This arrangement is capable of delivering a higher throughput rate than a standard SSD based system since the EcoRAM™ is connected to the CPU (central processing unit) via a high speed interface, such as the HT (Hyper Transport) interface, while an SSD/HD is typically connected via SATA (serial AT attachment), USB (universal serial bus), or PCI Express (peripheral component interface express). For example, the read random access throughput rate of EcoRAM™ is near 3 GB/s compared with 400 MB/s for a NAND SSD memory subsystem using the standard PCI Express-based interface, a 7.5× performance improvement. However, the performance improvement for the write random access throughput rate is less than 2× (197 MB/s for the EcoRAM™ vs. 104 MB/s for NAND SSD). This is mainly because the write speed cannot be faster than the NAND Flash write access time. FIG. 2 is an example of EcoRAM™ using SSD with the form factor of a standard DIMM such that it can be connected to the FSB (front side bus). However, due to the interface protocol difference between DRAM and Flash, an interface device (the EcoRAM Accelerator™) which occupies one of the server's CPU sockets is used, hence further reducing the server's performance by reducing the number of available CPU sockets, and in turn reducing the overall computation efficiency. The server's performance will further suffer due to the limited utilization of the CPU bus caused by the large difference in the data transfer throughput rate between read and write operations.
- The EcoRAM™ architecture enables the CPU to view the Flash DIMM controller chip as another processor with a large size of memory available for CPU access.
- In general, the access speed of a Flash-based system is limited by four items: the read/write speed of the Flash memory, the CPU's FSB bus speed and efficiency, the Flash DIMM controller's inherent latency, and the HT interconnect speed and efficiency, which is dependent on the HT interface controller in the CPU and the Flash DIMM controller chip.
- The published results indicate that these shortcomings are evident in that the maximum throughput rate is 1.56 GB/s for the read operation and 104 MB/s for the write operation. These access rates are 25% of the DRAM read access speed and 1.7% of the DRAM access speed at 400 MHz operation. The disparity in access speed (15 to 1) between the read operation and the write operation highlights a major disadvantage of this architecture. The discrepancy in access speed between this type of architecture and a JEDEC standard DRAM DIMM is expected to grow wider as DRAM memory technology advances much faster than Flash memory.
- Certain types of memory modules comprise a plurality of dynamic random-access memory (DRAM) devices mounted on a printed circuit board (PCB). These memory modules are typically mounted in a memory slot or socket of a computer system (e.g., a server system or a personal computer) and are accessed by the computer system to provide volatile memory to the computer system.
- Volatile memory generally maintains stored information only when it is powered. Batteries have been used to provide power to volatile memory during power failures or interruptions. However, batteries may require maintenance, may need to be replaced, are not environmentally friendly, and the status of batteries can be difficult to monitor.
- Non-volatile memory can generally maintain stored information while power is not applied to the non-volatile memory. In certain circumstances, it can therefore be useful to backup volatile memory using non-volatile memory.
- Described herein is a memory module couplable to a memory controller of a host system. The memory module includes a non-volatile memory subsystem, a data manager coupled to the non-volatile memory subsystem, a volatile memory subsystem coupled to the data manager and operable to exchange data with the non-volatile memory subsystem by way of the data manager, and a controller operable to receive commands from the memory controller and to direct (i) operation of the non-volatile memory subsystem, (ii) operation of the volatile memory subsystem, and (iii) transfer of data between any two or more of the memory controller, the volatile memory subsystem, and the non-volatile memory subsystem based on at least one received command from the memory controller.
- Also described herein is a method for managing a memory module by a memory controller, the memory module including volatile and non-volatile memory subsystems. The method includes receiving control information from the memory controller, wherein the control information is received using a protocol of the volatile memory subsystem. The method further includes identifying a data path to be used for transferring data to or from the memory module using the received control information, and using a data manager and a controller of the memory module to transfer data between any two or more of the memory controller, the volatile memory subsystem, and the non-volatile memory subsystem based on at least one of the received control information and the identified data path.
- Also described herein is a memory module wherein the data manager is operable to control one or more of data flow rate, data transfer size, data buffer size, data error monitoring, and data error correction in response to receiving at least one of a control signal and control information from the controller.
- Also described herein is a memory module wherein the data manager controls data traffic between any two or more of the memory controller, the volatile memory subsystem, and the non-volatile memory subsystem based on instructions received from the controller.
- Also described herein is a memory module wherein data traffic control relates to any one or more of data flow rate, data transfer size, data buffer size, data transfer bit width, formatting information, direction of data flow, and the starting time of data transfer.
- Also described herein is a memory module wherein the controller configures at least one of a first memory address space of the volatile memory subsystem and a second memory address space of the non-volatile memory subsystem in response to at least one of a received command from the memory controller and memory address space initialization information of the memory module.
- Also described herein is a memory module wherein the data manager is configured as a bi-directional data transfer fabric having two or more sets of data ports coupled to any one of the volatile and non-volatile memory subsystems.
- Also described herein is a memory module wherein at least one of the volatile and non-volatile memory subsystems comprises one or more memory segments.
- Also described herein is a memory module wherein each memory segment comprises at least one memory circuit, memory device, or memory die.
- Also described herein is a memory module wherein the volatile memory subsystem comprises DRAM memory.
- Also described herein is a memory module wherein the non-volatile memory subsystem comprises flash memory.
- Also described herein is a memory module wherein at least one set of data ports is operated by the data manager to independently and/or concurrently transfer data to or from one or more memory segments of the volatile or non-volatile memory subsystems.
- Also described herein is a memory module wherein the data manager and controller are configured to effect data transfer between the memory controller and the non-volatile memory subsystem in response to memory access commands received by the controller from the memory controller.
- Also described herein is a memory module wherein the volatile memory subsystem is operable as a buffer for the data transfer between the memory controller and non-volatile memory.
- Also described herein is a memory module wherein the data manager further includes a data format module configured to format data to be transferred between any two or more of the memory controller, the volatile memory subsystem, and the non-volatile memory subsystem based on control information received from the controller.
- Also described herein is a memory module wherein the data manager further includes a data buffer for buffering data delivered to or from the non-volatile memory subsystem.
- Also described herein is a memory module wherein the controller is operable to perform one or more of memory address translation, memory address mapping, address domain conversion, memory access control, data error correction, and data width modulation between the volatile and non-volatile memory subsystems.
- Also described herein is a memory module wherein the controller is configured to effect operation with the host system in accordance with a prescribed protocol.
- Also described herein is a memory module wherein the prescribed protocol is selected from one or more of DDR, DDR2, DDR3, and DDR4 protocols.
- Also described herein is a memory module wherein the controller is operable to configure memory space in the memory module based on at least one of a command received from the memory controller, a programmable value written into a register, a value corresponding to a first portion of the volatile memory subsystem, a value corresponding to a first portion of the non-volatile memory subsystem, and a timing value.
- Also described herein is a memory module wherein the controller configures the memory space of the memory module using at least a first portion of the volatile memory subsystem and a first portion of the non-volatile memory subsystem, and the controller presents a unified memory space to the memory controller.
- Also described herein is a memory module wherein the controller configures the memory space in the memory module using partitioning instructions that are application-specific.
- Also described herein is a memory module wherein the controller is operable to copy booting information from the non-volatile to the volatile memory subsystem during power up.
- Also described herein is a memory module wherein the controller includes a volatile memory control module, a non-volatile memory control module, a data manager control module, a command interpreter module, and a scheduler module.
- Also described herein is a memory module wherein commands from the volatile memory control module to the volatile memory subsystem are subordinated to commands from the memory controller to the controller.
- Also described herein is a memory module wherein the controller effects pre-fetching of data from the non-volatile to the volatile memory.
- Also described herein is a memory module wherein the pre-fetching is initiated by the memory controller writing an address of requested data into a register of the controller.
- Also described herein is a memory module wherein the controller is operable to initiate a copy operation of data of a closed block in the volatile memory subsystem to a target block in the non-volatile memory subsystem.
- Also described herein is a memory module wherein, if the closed block is re-opened, the controller is operable to abort the copy operation and to erase the target block from the non-volatile memory subsystem.
- Also described herein is a method for managing a memory module wherein the transfer of data includes a bidirectional transfer of data between the non-volatile and the volatile memory subsystems.
- Also described herein is a method for managing a memory module further comprising operating the data manager to control one or more of data flow rate, data transfer size, data width size, data buffer size, data error monitoring, data error correction, and the starting time of the transfer of data.
- Also described herein is a method for managing a memory module further comprising operating the data manager to control data traffic between the memory controller and at least one of the volatile and non-volatile memory subsystems.
- Also described herein is a method for managing a memory module wherein data traffic control relates to any one or more of data transfer size, formatting information, direction of data flow, and the starting time of the transfer of data.
- Also described herein is a method for managing a memory module wherein data traffic control by the data manager is based on instructions received from the controller.
- Also described herein is a method for managing a memory module further comprising operating the data manager as a bi-directional data transfer fabric with two or more sets of data ports coupled to any one of the volatile and non-volatile memory subsystems.
- Also described herein is a method for managing a memory module wherein at least one of the volatile and non-volatile memory subsystems comprises one or more memory segments.
- Also described herein is a method for managing a memory module wherein each memory segment comprises at least one memory circuit, memory device, or memory die.
- Also described herein is a method for managing a memory module wherein the volatile memory subsystem comprises DRAM memory.
- Also described herein is a method for managing a memory module wherein the non-volatile memory subsystem comprises Flash memory.
- Also described herein is a method for managing a memory module further comprising operating the data ports to independently and/or concurrently transfer data to or from one or more memory segments of the volatile or non-volatile memory subsystems.
- Also described herein is a method for managing a memory module further comprising directing transfer of data bi-directionally between the volatile and non-volatile memory subsystems using the data manager and in response to memory access commands received by the controller from the memory controller.
- Also described herein is a method for managing a memory module further comprising buffering the data transferred between the memory controller and non-volatile memory subsystem using the volatile memory subsystem.
- Also described herein is a method for managing a memory module further comprising using the controller to perform one or more of memory address translation, memory address mapping, address domain conversion, memory access control, data error correction, and data width modulation between the volatile and non-volatile memory subsystems.
- Also described herein is a method for managing a memory module further comprising using the controller to effect communication with a host system by the volatile memory subsystem in accordance with a prescribed protocol.
- Also described herein is a method for managing a memory module wherein the prescribed protocol is selected from one or more of DDR, DDR2, DDR3, and DDR4 protocols.
- Also described herein is a method for managing a memory module further comprising using the controller to configure memory space in the memory module based on at least one of a command received from the memory controller, a programmable value written into a register, a value corresponding to a first portion of the volatile memory subsystem, a value corresponding to a first portion of the non-volatile memory subsystem, and a timing value.
- Also described herein is a method for managing a memory module wherein the controller configures the memory space of the memory module using at least a first portion of the volatile memory subsystem and a first portion of the non-volatile memory subsystem, and the controller presents a unified memory space to the memory controller.
- Also described herein is a method for managing a memory module wherein the controller configures the memory space in the memory module using partitioning instructions that are application-specific.
- Also described herein is a method for managing a memory module further comprising using the controller to copy booting information from the non-volatile to the volatile memory subsystem during power up.
- Also described herein is a method for managing a memory module wherein the controller includes a volatile memory control module, the method further comprising generating commands by the volatile memory control module in response to commands from the memory controller, and transmitting the generated commands to the volatile memory subsystem.
- Also described herein is a method for managing a memory module further comprising pre-fetching of data from the non-volatile memory subsystem to the volatile memory subsystem.
- Also described herein is a method for managing a memory module wherein the pre-fetching is initiated by the memory controller writing an address of requested data into a register of the controller.
- Also described herein is a method for managing a memory module further comprising initiating a copy operation of data of a closed block in the volatile memory subsystem to a target block in the non-volatile memory subsystem.
- Also described herein is a method for managing a memory module further comprising aborting the copy operation when the closed block of the volatile memory subsystem is re-opened, and erasing the target block in the non-volatile memory subsystem.
- Also described herein is a memory system having a volatile memory subsystem, a non-volatile memory subsystem, a controller coupled to the non-volatile memory subsystem, and a circuit coupled to the volatile memory subsystem, to the controller, and to a host system. In a first mode of operation, the circuit is operable to selectively isolate the controller from the volatile memory subsystem, and to selectively couple the volatile memory subsystem to the host system to allow data to be communicated between the volatile memory subsystem and the host system. In a second mode of operation, the circuit is operable to selectively couple the controller to the volatile memory subsystem to allow data to be communicated between the volatile memory subsystem and the nonvolatile memory subsystem using the controller, and the circuit is operable to selectively isolate the volatile memory subsystem from the host system.
- Also described herein is a method for operating a memory system. The method includes coupling a circuit to a host system, a volatile memory subsystem, and a controller, wherein the controller is coupled to a non-volatile memory subsystem. In a first mode of operation that allows data to be communicated between the volatile memory subsystem and the host system, the circuit is used to (i) selectively isolate the controller from the volatile memory subsystem, and (ii) selectively couple the volatile memory subsystem to the host system. In a second mode of operation that allows data to be communicated between the volatile memory subsystem and the nonvolatile memory subsystem via the controller, the circuit is used to (i) selectively couple the controller to the volatile memory subsystem, and (ii) selectively isolate the volatile memory subsystem from the host system.
- Also described herein is a nontransitory computer readable storage medium storing one or more programs configured to be executed by one or more computing devices. The programs, when executing on the one or more computing devices, cause a circuit that is coupled to a host system, to a volatile memory subsystem, and to a controller that is coupled to a nonvolatile memory subsystem, to perform a method in which, in a first mode of operation that allows data to be communicated between the volatile memory subsystem and the host system, the circuit is operated to (i) selectively isolate the controller from the volatile memory subsystem, and (ii) selectively couple the volatile memory subsystem to the host system; and in which, in a second mode of operation that allows data to be communicated between the volatile memory subsystem and the nonvolatile memory subsystem via the controller, the circuit is operated to (i) selectively couple the controller to the volatile memory subsystem, and (ii) selectively isolate the volatile memory subsystem from the host system.
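- A compact C sketch of the two operating modes of this circuit is given below. The coupling functions are hypothetical placeholders for the isolation switching; the disclosure defines which paths are coupled or isolated in each mode, not a programming interface.

```c
#include <stdio.h>

typedef enum { MODE_FIRST, MODE_SECOND } op_mode_t;

/* Hypothetical switch controls standing in for the isolation circuit. */
static void couple_host_to_dram(int on)
{ printf("host<->DRAM %s\n", on ? "coupled" : "isolated"); }
static void couple_ctrl_to_dram(int on)
{ printf("ctrl<->DRAM %s\n", on ? "coupled" : "isolated"); }

/* In the first mode the host talks to the volatile memory and the
 * controller is isolated from it; in the second mode the controller
 * talks to the volatile memory (for backup/restore via the
 * non-volatile memory) and the host is isolated. */
static void set_mode(op_mode_t m)
{
    couple_host_to_dram(m == MODE_FIRST);
    couple_ctrl_to_dram(m == MODE_SECOND);
}

int main(void)
{
    set_mode(MODE_FIRST);    /* normal host access              */
    set_mode(MODE_SECOND);   /* trigger condition: backup mode  */
    return 0;
}
```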
- The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate one or more examples of embodiments and, together with the description of example embodiments, serve to explain the principles and implementations of the embodiments.
- In the drawings:
- FIG. 1 is a block diagram illustrating the path of data transfer, via a CPU, of a conventional memory arrangement;
- FIG. 2 is a block diagram of a known EcoRAM™ architecture;
- FIGS. 3A and 3B are block diagrams of a non-volatile memory DIMM or NVDIMM;
- FIGS. 4A and 4B are block diagrams of a Flash-DRAM hybrid DIMM or FDHDIMM;
- FIG. 5A is a block diagram of a memory module 500 in accordance with certain embodiments described herein;
- FIG. 5B is a block diagram showing some functionality of a memory module such as that shown in FIG. 5A ;
- FIG. 6 is a block diagram showing some details of the data manager (DMgr);
- FIG. 7 is a functional block diagram of the on-module controller (CDC);
- FIG. 8A is a block diagram showing more details of the prior art Flash-DRAM hybrid DIMM (FDHDIMM) of FIGS. 4A and 4B ;
- FIG. 8B is a block diagram of a Flash-DRAM hybrid DIMM (FDHDIMM) in accordance with certain embodiments disclosed herein;
- FIG. 9 is a flow diagram directed to the transfer of data from Flash memory to DRAM memory and vice versa in an exemplary FDHDIMM;
- FIG. 10 is a block diagram showing an example of mapping of DRAM address space to Flash memory address space;
- FIG. 11 is a table showing estimates of the maximum allowed closed blocks in a queue to be written back to Flash memory for different DRAM densities using various average block use times;
- FIG. 12 is a block diagram of an example memory system compatible with certain embodiments described herein;
- FIG. 13 is a block diagram of an example memory module with ECC (error-correcting code) having a volatile memory subsystem with nine volatile memory elements and a non-volatile memory subsystem with five non-volatile memory elements in accordance with certain embodiments described herein;
- FIG. 14 is a block diagram of an example memory module having a microcontroller unit and logic element integrated into a single device in accordance with certain embodiments described herein;
- FIGS. 15A-15C schematically illustrate example embodiments of memory systems having volatile memory subsystems comprising registered dual in-line memory modules in accordance with certain embodiments described herein;
- FIG. 16 schematically illustrates an example power module of a memory system in accordance with certain embodiments described herein;
- FIG. 17 is a flowchart of an example method of providing a first voltage and a second voltage to a memory system including volatile and non-volatile memory subsystems;
- FIG. 18 is a flowchart of an example method of controlling a memory system operatively coupled to a host system and which includes at least 100 percent more storage capacity in non-volatile memory than in volatile memory;
- FIG. 19 schematically illustrates an example clock distribution topology of a memory system in accordance with certain embodiments described herein;
- FIG. 20 is a flowchart of an example method of controlling a memory system operatively coupled to a host system, the method including operating a volatile memory subsystem at a reduced rate in a back-up mode;
- FIG. 21 schematically illustrates an example topology of a connection to transfer data slices from two DRAM segments of a volatile memory subsystem of a memory system to a controller of the memory system; and
- FIG. 22 is a flowchart of an example method of controlling a memory system operatively coupled to a host system, the method including backing up and/or restoring a volatile memory subsystem in slices.
- Example embodiments are described herein in the context of a system of computers, servers, controllers, memory modules, hard disk drives and software. Those of ordinary skill in the art will realize that the following description is illustrative only and is not intended to be in any way limiting. Other embodiments will readily suggest themselves to such skilled persons having the benefit of this disclosure. Reference will now be made in detail to implementations of the example embodiments as illustrated in the accompanying drawings. The same reference indicators will be used to the extent possible throughout the drawings and the following description to refer to the same or like items.
- In the interest of clarity, not all of the routine features of the implementations described herein are shown and described. It will, of course, be appreciated that in the development of any such actual implementation, numerous implementation-specific decisions must be made in order to achieve the developer's specific goals, such as compliance with application- and business-related constraints, and that these specific goals will vary from one implementation to another and from one developer to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking of engineering for those of ordinary skill in the art having the benefit of this disclosure.
- In accordance with this disclosure, the components, process steps, and/or data structures described herein may be implemented using various types of operating systems, computing platforms, computer programs, and/or general purpose machines. In addition, those of ordinary skill in the art will recognize that devices of a less general purpose nature, such as hardwired devices, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), or the like, may also be used without departing from the scope and spirit of the inventive concepts disclosed herein. Where a method comprising a series of process steps is implemented by a computer or a machine and those process steps can be stored as a series of instructions readable by the machine, they may be stored on a tangible medium such as a computer memory device (e.g., ROM (Read Only Memory), PROM (Programmable Read Only Memory), EEPROM (Electrically Erasable Programmable Read Only Memory), Flash memory, Jump Drive, and the like), magnetic storage medium (e.g., tape, magnetic disk drive, and the like), optical storage medium (e.g., CD-ROM, DVD-ROM, paper card, paper tape and the like) and other types of program memory.
- The term “exemplary” where used herein is intended to mean “serving as an example, instance or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments.
- Disclosed herein are arrangements for improving memory access rates and addressing the high disparity (15 to 1 ratio) between the read and write data throughput rates. In one arrangement, a Flash-DRAM-hybrid DIMM (FDHDIMM) with integrated Flash and DRAM is used. Methods for controlling such an arrangement are described.
- In certain embodiments, the actual memory density (size or capacity) of the DIMM and/or the ratio of DRAM memory to Flash memory are configurable for optimal use with a particular application (for example, POS, inter-bank transaction, stock market transaction, scientific computation such as fluid dynamics for automobile and ship design, weather prediction, oil and gas expeditions, medical diagnostics such as diagnostics based on the fuzzy logic, medical data processing, simple information sharing and searching such as web search, retail store website, company home page, email or information distribution and archive, security service, and entertainment such as video-on-demand).
- In certain embodiments, the device contains a high density Flash memory with a low density DRAM, wherein the DRAM is used as a data buffer for read/write operations. The Flash serves as the main memory. Certain embodiments described herein overcome the need for a long separation period between an Activate command (which may be referred to as RAS) and a corresponding read or write command (which may be referred to as the first CAS command).
- In accordance with one embodiment, described with reference to
FIGS. 3A and 3B , amemory system 300 includes a non-volatile (for example Flash)memory subsystem 302 and a volatile (for example DRAM)memory subsystem 304. The examples ofFIGS. 3A and 3B are directed to architectures of a non-volatile DIMM (NVDIMM) NVDIMM system that may use a power subsystem (not shown) that can include a battery or a capacitor as a means for energy storage to copy DRAM memory data into Flash memory when power loss occurs, is detected, or is anticipated to occur during operation. When normal power is restored, a restore NVDIMM operation is initiated and the data stored in the Flash memory is properly restored to the DRAM memory. In this architecture, the density of the Flash is about the same as the DRAM memory size or within a few multiples, although in some applications it may be higher. This type of architecture may also be used to provide non-volatile storage that is connected to the FSB (front side bus) to support RAID (Redundant Array of Independent Disks) based systems or other type of operations. AnNVDIMM controller 306 receives and interprets commands from the system memory controller hub (MCH). TheNVDIMM controller 306 control the NVDIMM DRAM and Flash memory operations. InFIG. 3A , theDRAM 304 communicates data with the MCH, while aninternal bus 308 is used for data transfer between the DRAM and Flash memory subsystems. InFIG. 3B , theNVDIMM controller 306′ ofNVDIMM 300′ monitors events or commands and enables data transfer to occur in a first mode between theDRAM 304′ andFlash 302′ or in a second mode between the DRAM and the MCH. - In accordance with one embodiment, a general architecture for a Flash and DRAM hybrid DIMM (FDHDIMM)
system 400 is shown inFIG. 4A . The FDHDIMM interfaces with an MCH (memory controller hub) to operate and behave as a high density DIMM, wherein the MCH interfaces with the non-volatile memory subsystem (for example Flash) 402 is controlled by anFDHDIMM controller 404. Although the MCH interfaces with the Flash via the FDHDIMM controller, the FDHDIMM overall performance is governed by the Flash access time. The volatile memory subsystem (for example DRAM) 406 is primarily used as a data buffer or a temporary storage location such that data from theFlash memory 402 is transferred to theDRAM 406 at the Flash access speed, and buffered or collected into theDRAM 406, which then transfers the buffered data to the MCH based on the access time of DRAM. Similarly, when the MCH transfers data to theDRAM 406, theFDHDIMM controller 404 manages the data transfer from theDRAM 406 to theFlash 402. Since the Flash memory access speed (both read and write) is relatively slower than DRAM, (e.g. for example a few hundred microseconds for read access), the average data throughput rate ofFDHDIMM 400 is limited by the Flash access speed. TheDRAM 406 serves as a data buffer stage that buffers the MCH read or write data. Thus, theDRAM 406 serves as a temporary storage for the data to be transferred from/to theFlash 402. Furthermore, in accordance with one embodiment, the MCH recognizes the physical density of an FDHDIMM operating as a high density DIMM as the density of Flash alone. - In accordance with one embodiment, a read operation can be performed by the MCH by sending an activate command (may be simply referred to as RAS, or row address strobe) to the
FDHDIMM 400 to conduct a pre-fetch read data operation from the Flash 402 to the DRAM 406, with the pre-fetch data size being, for example, a page (1 KB or 2 KB, or programmable to any size). The MCH then sends a read command (which may simply be referred to as CAS, or column address strobe) to read the data out of the DRAM. In this embodiment, the data transfer from Flash to DRAM occurs at Flash access speed rates, while data transfer from DRAM to MCH occurs at DRAM access speed rates. In this example, data latency and throughput rates are the same as for any DRAM operation as long as the read operations are executed on the pages that were opened with the activate command previously sent to pre-fetch data from the Flash to the DRAM. Thus, a longer separation time period between the RAS (e.g. Activate command) and the first CAS (column address strobe, e.g. read or write command) is required to account for the time it takes to pre-fetch data from the Flash to the DRAM.
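- The effect of the pre-fetch on command spacing can be sketched with a small timing model. The following Python fragment is illustrative only and is not part of the described embodiments; the latency constants and function names (FLASH_FETCH_US, activate, read) are assumptions chosen to mirror the RAS/CAS sequence described above.

    # Minimal timing model of the pre-fetch read described above (illustrative only).
    FLASH_FETCH_US = 400.0   # assumed Flash page pre-fetch latency, microseconds
    DRAM_CAS_US = 0.015      # assumed DRAM column access latency, microseconds

    open_pages = set()       # pages already pre-fetched from Flash into DRAM

    def activate(page):
        """RAS: pre-fetch one page from Flash into the DRAM buffer."""
        open_pages.add(page)
        return FLASH_FETCH_US          # the RAS-to-first-CAS gap must cover this

    def read(page):
        """CAS: read from DRAM; only legal on a page opened by activate()."""
        if page not in open_pages:
            raise RuntimeError("page not pre-fetched; issue activate() first")
        return DRAM_CAS_US             # subsequent accesses run at DRAM speed

    first = activate(7) + read(7)      # first access pays the Flash pre-fetch cost
    later = read(7)                    # later CAS commands to the open page are fast

- An example of FDHDIMM operating as a DDR DIMM with SSD is shown in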
FIG. 4B , wherein the FDHDIMM 400′ supports two different interface interpretations to the MCH. In the first interface interpretation, the MCH views the FDHDIMM 400′ as a combination of a DRAM DIMM and an SSD (not illustrated). In this mode the MCH needs to manage two address spaces, one for the DRAMs 402′ and one for the Flash 404′. The MCH is coupled to, and controls, both the DRAM and Flash memory subsystems. One advantage of this mode is that the CPU does not need to be in the data path when data is moved from DRAM to Flash or from Flash to DRAM. In the second interface interpretation, the MCH views the FDHDIMM 400′ as an on-DIMM Flash with the SSD in an extended memory space that is behind the DRAM space. Thus, in this mode, the MCH physically fetches data from the SSD to the DDR DRAM and then the DRAM sends the data to the MCH. Since all data movement occurs on the FDHDIMM, this mode provides better performance than if the data were moved through or via the CPU. - In accordance with one embodiment and as shown in
FIG. 4B , the FDHDIMM 400′ receives control signals 408 from the MCH, where the control signals may include one or more control signals specifically for the DRAM 402′ operation and one or more control signals specifically for the Flash 404′ operation. In this embodiment, the MCH or CPU is coupled to the FDHDIMM via a single data bus interface 410 which couples the MCH to the DRAM. -
FIGS. 5A and 5B are block diagrams of a memory module 500 that is couplable to a host system (not shown). The host system may be a server or any other system comprising a memory system controller or an MCH for providing and controlling the read/write access to one or more memory systems, wherein each memory system may include a plurality of memory subsystems, a plurality of memory devices, or at least one memory module. The term “read/write access” means the ability of the MCH to interface with a memory system or subsystem in order to write data into it or read data from it, depending on the particular requirement at a particular time. - In certain embodiments,
memory module 500 is a Flash-DRAM hybrid memory subsystem which may be integrated with other components of a host system. In certain embodiments, memory module 500 is a Flash-DRAM hybrid memory module that has the DIMM (dual in-line memory module) form factor, and may be referred to as an FDHDIMM, although it is to be understood that in both structure and operation it may be different from the FDHDIMM discussed above and described with reference to FIGS. 4A and 4B . Memory module 500 includes two on-module intermediary components: a controller and a data manager. These on-module intermediary components may be physically separate components, circuits, or modules, or they may be integrated onto a single integrated circuit or device, or integrated with other memory devices, for example in a three dimensional stack, or in any one of several other possible expedients for integration known to those skilled in the art to achieve a specific design, application, or economic goal. In the case of a DIMM, these on-module intermediary components are an on-DIMM controller (CDC) 502 and an on-DIMM data manager (DMgr) 504. While the DIMM form factor predominates in the discussion herein, it should be understood that this is for illustrative purposes only and memory systems using other form factors are contemplated as well. CDC 502 and data manager DMgr 504 are operative to manage the interface between a non-volatile memory subsystem such as a Flash 506, a volatile memory subsystem such as a DRAM 508, and a host system represented by MCH 510. - In certain embodiments,
CDC 502 controls the read/write access to/from Flash memory 506 from/to DRAM memory 508, and to/from DRAM memory from/to MCH 510. Read/write access between DRAM 508, Flash 506, and MCH 510 may be referred to herein generally as communication, wherein control and address information C/A 560 is sent from MCH 510 to CDC 502, and possible data transfers follow as indicated by Data 550, Data 555, and/or Data 556. In certain embodiments, the CDC 502 performs specific functions for memory address transformation, such as address translation, mapping, or address domain conversion, Flash access control, data error correction, manipulation of data width or data formatting or data modulation between the Flash memory and DRAM, and so on. In certain embodiments, the CDC 502 ensures that memory module 500 provides transparent operation to the MCH in accordance with certain industry standards, such as the DDR, DDR2, DDR3, or DDR4 protocols. In the arrangement shown in FIGS. 5A and 5B , there is no direct access from the MCH 510 to the Flash 506 memory subsystem. Thus, in accordance with certain embodiments, the Flash access speed has minimal impact on the overall FDHDIMM access speed. In the schematic illustration of FIG. 5B and in accordance with one embodiment, the CDC controller 502 receives standard DDR commands from the MCH, interprets them, and produces commands and/or control signals to control the operation of the data manager (DMgr), the Flash memory, and the DRAM memory. The DMgr controls the data path routing amongst the DRAMs, the Flash, and the MCH, as detailed below. The data path routing control signals are independently operated without any exclusivity. - An exemplary role of
DMgr 504 is described with reference to FIG. 6 . In certain embodiments and in response to communication from CDC 502, DMgr 504 provides a variety of functions to control data flow rate, data transfer size, data buffer size, data error monitoring, or data error correction. For example, these functions or operations can be performed on-the-fly (while data is being transferred via the DMgr 504) or performed on buffered or stored data in DRAM or a buffer. In addition, one role of DMgr 504 is to provide interoperability among various memory subsystems or components and/or MCH 510. - In one embodiment, an exemplary host system operation begins with initialization. The
CDC 502 receives a first command from the MCH 510 to initialize FDHDIMM 500 using a certain memory space. The memory space controlled by MCH 510 can be configured or programmed during initialization or after initialization has completed. The MCH 510 can partition or parse the memory space in various ways that are optimized for a particular application that the host system needs to run or execute. In one embodiment, the CDC 502 maps the actual physical Flash 506 and DRAM 508 memory space using the information sent by MCH 510 via the first command. In one embodiment, the CDC 502 maps the memory address space of any one of the Flash 506 and DRAM 508 memory subsystems using memory address space information that is received from the host system, stored in a register within FDHDIMM 500, or stored in a memory location of a non-volatile memory subsystem, for example a portion of Flash 506 or a separate non-volatile memory subsystem. In one embodiment, the memory address space information corresponds to a portion of the initialization information of the FDHDIMM 500. - In one embodiment,
MCH 510 may send a command to restore a certain amount of data information from Flash 506 to DRAM 508. The CDC 502 provides control information to DMgr 504 to appropriately copy the necessary information from Flash 506 to the DRAM 508. This operation can provide support for various host system booting operations and/or a special host system power up operation. - In one embodiment,
MCH 510 sends a command which may include various fields comprising control information regarding data transfer size, data format options, and/or startup time. CDC 502 receives and interprets the command and provides control signals to DMgr 504 to control the data traffic between the Flash 506, the DRAM 508, and the MCH 510. For example, DMgr 504 receives the data transfer size, formatting information, direction of data flow (via one or more multiplexers such as 611, 612, 621, 622 as detailed below), and the starting time of the actual data transfer from CDC 502. DMgr 504 may also receive additional control information from the CDC 502 to establish a data flow path and/or to correctly establish the data transfer fabric. In certain embodiments, DMgr 504 also functions as a bi-directional data transfer fabric. For example, DMgr 504 may have more than 2 sets of data ports facing the Flash 506 and the DRAM 508. Multiplexers 611 and 612 provide controllable data paths from any one of the DRAMs 508(1) and 508(2) to any one of the MCH 510 and the Flash 506. Similarly, multiplexers 621 and 622 provide controllable data paths from any one of the MCH and the Flash memory to any one of the DRAMs 508(1) and 508(2) (DRAM-A and DRAM-B). In one embodiment, DRAM 508(1) is a segment of DRAM 508, while in other embodiments, DRAM 508(1) is a separate DRAM memory subsystem. It will be understood that each memory segment can comprise one or more memory circuits, memory devices, and/or memory integrated circuits. Of course other configurations for DRAM 508 are possible, and other data transfer fabrics using complex data paths and suitable types of multiplexing logic are contemplated.
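- As a rough illustration of the data transfer fabric just described, the following Python sketch models the two sets of multiplexers as simple routing selections. It is a simplification under stated assumptions; the variable names are hypothetical and do not come from the embodiment.

    # Simplified model of the DMgr data transfer fabric (names are assumptions).
    # Muxes 611/612 choose which DRAM segment feeds the MCH and the Flash;
    # muxes 621/622 choose which source (MCH or Flash) feeds each DRAM segment.
    mux_611 = "DRAM-A"    # source routed to the MCH
    mux_612 = "DRAM-B"    # source routed to the Flash
    mux_621 = "Flash"     # source routed to DRAM-A
    mux_622 = "MCH"       # source routed to DRAM-B

    # Because each destination has its own multiplexer, the paths below share
    # no resources and can be driven simultaneously by the CDC.
    concurrent_paths = [
        (mux_611, "MCH"),      # e.g. DRAM-A -> MCH
        (mux_612, "Flash"),    # e.g. DRAM-B -> Flash
    ]
    for src, dst in concurrent_paths:
        print(f"routing {src} -> {dst}")

- In accordance with one embodiment, the two sets of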
multiplexers (611, 612 and 621, 622) allow concurrent transfers: under control of the CDC 502, DMgr 504 can transfer data from DRAM-A 508(1) to MCH 510, via multiplexer 611, at the same time as from DRAM-B 508(2) to the Flash 506, via multiplexer 612; or data is transferred from DRAM-B 508(2) to MCH 510, via multiplexer 611, while simultaneously data is transferred from the Flash 506 to DRAM-A 508(1), via multiplexer 621. Further, in the same way that data can be transferred to or from the DRAM in both device-wide or segment-by-segment fashion, data can be transferred to or from the Flash memory in device-wide or segment-by-segment fashion, and the Flash memory can be addressed and accessed accordingly. - In accordance with one embodiment the illustrated arrangement of the data transfer fabric of
DMgr 504 also allows the CDC 502 to control data transfer from the Flash memory to the MCH by buffering the data from the Flash 506 using a buffer 602, and matching the data rate and/or data format of MCH 510. The buffer 602 is shown in FIG. 6 as a portion of a data format module 604; however, buffer 602 may also be a distributed buffer such that one buffer is used for each one of the set of multiplexer logic elements shown as multiplexers 611, 612, 621, and 622. The buffer 602 may introduce one or more clock cycle delays into a data communication path between MCH 510, DRAM 508, and Flash 506. - In certain embodiments,
data format module 604 contains a data formatting subsystem (not shown) to enable DMgr 504 to format and perform data transfer in accordance with control information received from CDC 502. Data buffer 602 of data format module 604, discussed above, also supports a wide data bus 606 coupled to the Flash memory 506 operating at a first frequency, while receiving data from DRAM 508 using a relatively smaller width data bus 608 operating at a second frequency, the second frequency being higher than the first frequency in certain embodiments. The buffer 602 is designed to match the data flow rate between the DRAM 508 and the Flash 506.
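- The rate matching performed by buffer 602 can be made concrete with simple arithmetic. In the Python sketch below, the bus widths and frequencies are hypothetical example values (the embodiment does not specify them); the point is only that a wider, slower Flash-side bus can sustain the same throughput as a narrower, faster DRAM-side bus.

    # Illustrative rate-matching arithmetic for buffer 602 (assumed values only).
    flash_bus_bits = 128     # hypothetical width of the wide bus 606 to the Flash
    flash_freq_mhz = 100     # hypothetical first (lower) frequency
    dram_bus_bits = 64       # hypothetical width of the narrower bus 608 to the DRAM
    dram_freq_mhz = 200      # hypothetical second (higher) frequency

    flash_rate = flash_bus_bits * flash_freq_mhz   # Mbit/s sustained, Flash side
    dram_rate = dram_bus_bits * dram_freq_mhz      # Mbit/s sustained, DRAM side

    # When width x frequency balances, the buffer only absorbs burst-level
    # differences rather than a sustained rate mismatch.
    assert flash_rate == dram_rate

- A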
register 690 provides the ability to register commands received from MCH 510 via C/A 560 ( FIG. 5A ). The register 690 may communicate these commands to CDC 502 and/or to the DRAM 508 and/or Flash 506. The register 690 communicates these registered commands to CDC 502 for processing. The register 690 may also include multiple registers (not shown), such that it can provide the ability to register multiple commands, a sequence of commands, or provide a pipeline delay stage for buffering and providing a controlled execution of certain commands received from MCH 510. - In certain embodiments, the
register 690 may register commands from MCH 510 and transmit the registered commands to the DRAM 508 and/or Flash 506 memory subsystems. In certain embodiments, the CDC 502 monitors commands received from MCH 510, via control and address bus C/A 560, and provides appropriate control information to DMgr 504, DRAM 508, or Flash 506 to execute these commands and perform data transfer operations between MCH 510 and FDHDIMM 500 via MCH data bus 610. -
FIG. 7 illustrates a functional block diagram of the CDC 502. In certain embodiments, the major functional blocks of the CDC 502 are a DRAM control block DRAMCtrl 702, a Flash control block FlashCtrl 704, an MCH command interpreter CmdInt 706, a DRAM-Flash interface scheduler Scheduler 708, and a DMgr control block (DMgrCtrl) 710. - In accordance with one embodiment,
DRAMCtrl 702 generates DRAM commands that are independent from the commands issued by the MCH 510. In accordance with one embodiment, when the MCH 510 initiates a read/write operation from/to the same DRAM 508 that is currently executing a command from the DRAMCtrl 702, the CDC 502 may choose to instruct DRAMCtrl 702 to abort its operation in order to execute the operation initiated by the MCH. However, the CDC 502 may also pipeline the operation so that it causes DRAMCtrl 702 to either halt or complete its current operation prior to executing that of the MCH. The CDC 502 may also instruct DRAMCtrl 702 to resume its operation once the command from MCH 510 is completed. - In accordance with one embodiment, the FlashCtrl 704 generates appropriate Flash commands for the proper read/write operations. The
CmdInt 706 intercepts commands received from MCH 510, generates the appropriate control information and control signals, and transmits them to the appropriate FDHDIMM functional block. For example, CmdInt 706 issues an interrupt signal to the DRAMCtrl 702 when the MCH issues a command that collides (conflicts) with the currently executing or pending commands that DRAMCtrl 702 has initiated independently from MCH 510, thus subordinating these commands to those from the MCH. The Scheduler 708 schedules the Flash-DRAM interface operation such that there is no resource conflict in the DMgr 504. In accordance with one embodiment, the Scheduler 708 assigns time slots for the DRAMCtrl 702 and FlashCtrl 704 operations based on the current status and the pending commands received or to be received from the MCH. The DMgrCtrl 710 generates and sends appropriate control information and control signals for the proper operation and control of the data transfer fabric to enable or disable data paths between the Flash 506, the DRAM 508, and the MCH 510. -
FIG. 8A is a block diagram showing a Flash-DRAM hybrid DIMM (FDHDIMM). As seen from FIG. 8A , this Flash-DRAM hybrid DIMM requires two separate and independent address buses to separately control the address spaces: one for the Flash memory 506 and the other for the DRAM memory 508. The MCH treats the DRAM 508 and Flash 506 as separate memory subsystems, for example DRAM and SSD/HD memory subsystems. The memory in each address space is controlled directly by the MCH. However, the on-DIMM data path between Flash 506 and DRAM 508 allows direct data transfer to occur between the Flash 506 and the DRAM 508 in response to control information from the CDC 502. In this embodiment, this data transfer mechanism provides direct support for executing commands from the MCH without having the MCH directly controlling the data transfer, thus improving data transfer performance from Flash 506 to the DRAM 508. However, the MCH needs to manage two address spaces and two different memory protocols simultaneously. Moreover, the MCH needs to map the DRAM memory space into the Flash memory space, and the data interface time suffers due to the difference in the data access time between the Flash memory and the DRAM memory. - In accordance with one embodiment, a memory space mapping of a Flash-DRAM hybrid DIMM is shown in
FIG. 8B . A memory controller of a host system (not shown) controls both the DRAM 508 address space and the Flash 506 address space using a single unified address space. The CDC 502 receives memory access commands from the MCH and generates control information for appropriate mapping and data transfer between the Flash and DRAM memory subsystems to properly carry out the memory access commands. In one embodiment, the memory controller of the host system views the large Flash memory space as a DRAM memory space, and accesses this unified memory space with a standard DDR (double data rate) protocol used for accessing DRAM. The unified memory space in this case can exhibit overlapping memory address space between the Flash 506 and the DRAM 508. The overlapping memory address space may be used as a temporary storage or buffer for data transfer between the Flash 506 and the DRAM 508. For example, the DRAM memory space may hold a copy of data from the selected Flash memory space such that the MCH can access this data normally via DDR memory access commands. The CDC 502 controls the operation of the Flash 506 and DRAM 508 memory subsystems in response to commands received from a memory controller of a host system. - In one embodiment, the unified memory space corresponds to a contiguous address space comprising a first portion of the address space of the
Flash 506 and a first portion of the address space of the DRAM 508. The first portion of the address space of the Flash 506 can be determined via a first programmable register holding a first value corresponding to the desired Flash memory size to be used. Similarly, the first portion of the address space of the DRAM 508 can be determined via a second programmable register holding a second value corresponding to the desired DRAM memory size to be used. In one embodiment, any one of the first portion of the address space of the Flash 506 and the first portion of the address space of the DRAM 508 is determined via a first value corresponding to a desired performance or memory size, the first value being received by the CDC 502 via a command sent by the memory controller of the host system.
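- A minimal sketch of the unified-space bookkeeping described above follows. The register values, the overlay policy, and the function name resolve are illustrative assumptions, not the actual CDC registers or mapping.

    # Sketch of a unified address space assembled from programmable portions of
    # the Flash and DRAM spaces (register values and layout are hypothetical).
    FLASH_PORTION_BYTES = 4 << 30   # first programmable register: Flash portion
    DRAM_PORTION_BYTES = 1 << 30    # second programmable register: DRAM portion

    def resolve(addr):
        """Map a unified address to (subsystem, offset). Overlaying the DRAM
        portion at the front of the space is an illustrative policy only."""
        if addr < DRAM_PORTION_BYTES:
            return ("DRAM", addr)            # buffered region, DRAM-speed access
        if addr < FLASH_PORTION_BYTES:
            return ("Flash", addr)           # region backed only by Flash
        raise ValueError("address outside the configured unified space")

    print(resolve(0x1000))     # ('DRAM', 4096)
    print(resolve(2 << 30))    # ('Flash', 2147483648)

- In accordance with one embodiment, a flow diagram directed to the transfer of data from Flash memory to DRAM memory and vice versa in an exemplary FDHDIMM is shown in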
FIG. 9 . In certain embodiments, data transfer from the Flash 506 to the DRAM 508 occurs in accordance with memory access commands which the CDC 502 receives from the memory controller of the host system. In certain embodiments, the CDC 502 controls the data transfer from the DRAM 508 to the Flash 506 so as to avoid conflict with any memory operation that is currently being executed; for example, a transfer may be performed when all the pages in a particular DRAM memory block are closed. The CDC 502 partitions the DRAM memory space into a number of blocks for the purpose of optimally supporting the desired application. The controller can configure memory space in the memory module based on at least one of one or more commands received from the MCH, instructions received from the MCH, a programmable value written into a register, a value corresponding to a first portion of the volatile memory subsystem, a value corresponding to a first portion of the non-volatile memory subsystem, and a timing value. Furthermore, the block size can be configurable by the memory controller of the host system, such that the number of pages in a block can be optimized to support a particular application or task. Furthermore, the block size may be configured on-the-fly, e.g. the CDC 502 can receive instructions regarding a desired block size from the memory controller via a memory command, or via a programmable value. - In certain embodiments, a memory controller can access the memory module using a standard access protocol, such as JEDEC's DDR DRAM, by sending a memory access command to the
CDC 502 which in turn determines the type of data transfer operation and the corresponding target address where the data information is stored, e.g. whether the data information is stored in the DRAM 508 or Flash 506 memory subsystem. In response to a read operation, if the CDC 502 determines that the data information, e.g. a page (or block), does not reside in the DRAM 508 but resides in Flash 506, then the CDC 502 initiates and controls all necessary data transfer operations from Flash 506 to DRAM 508 and subsequently to the memory controller. In one embodiment, once the CDC 502 completes the data transfer operation of the requested data information from the Flash 506 to the DRAM 508, the CDC 502 alerts the memory controller to retrieve the data information from the DRAM 508. In one embodiment, the memory controller initiates the copying of data information from Flash 506 to DRAM 508 by writing, into a register in the CDC 502, the target Flash address along with a valid block size. The CDC 502, in turn, executes appropriate operations and generates control information to copy the data information to the DRAM 508. Consequently, the memory controller can access or retrieve the data information using standard memory access commands or protocol.
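- The read-miss handling described in this paragraph can be summarized in a short sketch. The helper names (fetch_from_flash, alert_mch) are hypothetical stand-ins for CDC-internal operations and are not from the embodiment.

    # Sketch of the read-miss handling described above (helper names are
    # hypothetical stand-ins for CDC-internal operations).
    dram_resident = {}    # block address -> data currently held in DRAM

    def fetch_from_flash(block):
        return f"<data of block {block:#x}>"      # placeholder for a Flash read

    def alert_mch(block):
        print(f"block {block:#x} ready in DRAM")  # placeholder for the alert

    def handle_read(block):
        """Serve from DRAM when resident; otherwise move the block from Flash
        to DRAM first and alert the memory controller to retrieve it."""
        if block not in dram_resident:            # miss: data resides in Flash
            dram_resident[block] = fetch_from_flash(block)
            alert_mch(block)
        return dram_resident[block]               # hit path runs at DRAM speed

    handle_read(0x2A0000)    # miss -> Flash-to-DRAM transfer, then served
    handle_read(0x2A0000)    # hit  -> served directly from DRAM

- An exemplary flow chart is shown in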
FIG. 9 : a starting step or power up 902 is followed by an initialization step 904. The memory controller initiates, at step 906, a data move from the Flash 506 to the DRAM 508 by writing a target address and size to a control register in the CDC 502, which then copies, at 908, data information from the Flash 506 to the DRAM 508 and erases the block in the Flash. Erasing the data information from Flash may be accomplished independently from (or concurrently with) other steps that the CDC 502 performs in this flow chart, i.e. other steps can be executed concurrently with the erase-the-Flash-block step. Once the data information or a block of data information is thus moved to the DRAM 508, the memory controller can operate on this data block using standard memory access protocol or commands at 910. The CDC 502 checks, at 912, whether any of the DRAM 508 blocks, or copied blocks, are closed. If the memory controller closed any open blocks in DRAM 508, then the CDC 502 initiates a Flash write to write the closed block from the DRAM 508 to the Flash 506, at 914. If, at 916, the memory controller reopens the closed block that is currently being written into the Flash 506, then the CDC 502 stops the Flash write operation and erases the Flash block which was being written to, as shown at 918. Otherwise, the CDC 502 continues and completes the writing operation to the Flash at 920. - The dashed lines in
FIG. 9 indicate independent or parallel activities that can be performed by the CDC 502. Any time the CDC 502 receives a DRAM load command from a memory controller which writes a Flash target address and/or block size information into the RC register(s) at 922, as described above, the CDC 502 executes a load DRAM w/RC step 906 and initiates another branch (or a thread) of activities that includes steps 908-922. In one embodiment, the CDC 502 controls the data transfer operations between DRAM 508 and Flash 506 such that the Flash 506 is completely hidden from the memory controller. The CDC 502 monitors all memory access commands sent by the memory controller using standard DRAM protocol and appropriately configures and manipulates both the Flash 506 and DRAM 508 memory subsystems to perform the requested memory access operation and thus achieve the desired results. The memory controller does not interface directly with the Flash memory subsystem. Instead, the memory controller interfaces with the CDC 502 and/or DMgr 504 as shown in FIG. 5 and FIG. 6 . Moreover, the memory controller may use one or more protocols, such as the DDR, DDR2, DDR3, DDR4 protocols or the like. - In accordance with one embodiment, an example of mapping a DRAM address space to Flash memory address space is shown in
FIG. 10 . Two sets (1002, 1004) of address bits AD6 to AD17, forming a 24-bit extended memory page address, are allocated for the block address. For example, assuming a block size of 256K Bytes, a 24-bit block address space (using the two sets of AD6 to AD17, 1002 and 1004) would enable access to 4 TB of Flash memory storage space. If a memory module has 1 GB of DRAM storage capacity, then it can hold approximately 4K blocks of data in the DRAM memory, each block comprising 256K Bytes of data. The DRAM address space, corresponding to the 4K blocks, can be assigned to different virtual ranks and banks, where the number of virtual ranks and banks is configurable and can be manipulated to meet specific design or performance needs. For example, if a 1 GByte memory module is configured to comprise two ranks with eight banks per rank, then each bank would hold two hundred fifty (250) blocks, or the equivalent of 62 MBytes or 62K pages, where each page corresponds to 1K Bytes. Other configurations using different page, block, bank, or rank numbers may also be used. Furthermore, an exemplary mapping of a 24-bit DDR DIMM block address to a Flash memory address, using block addressing as described above, is shown in FIG. 10 . The 24 bits can be decomposed into fields, such as a logical unit number LUN address 1061 field, a Block address 1051 field, a Plane address 1041, a Page address 1031, and a group of least significant address bits A0A1 1021. The Plane address 1041 is a sub-address of the block address, and it may be used to support multiple-page IO so as to improve Flash memory subsystem operation. In this example, it is understood that a different number of bits may be allocated to each field of the 24-bit address.
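- The field decomposition above can be expressed as shift-and-mask arithmetic. The field widths in the following sketch are assumptions chosen only so that the fields total 24 bits; as noted above, a different number of bits may be allocated to each field.

    # Decompose a 24-bit block address into the fields named above.
    # Field widths are illustrative assumptions (A0/A1 = 2, page = 8, plane = 2,
    # block = 8, LUN = 4, totalling 24); the embodiment allows other splits.
    FIELDS = [("A0A1", 2), ("page", 8), ("plane", 2), ("block", 8), ("lun", 4)]

    def decode(addr24):
        """Return a dict of field values, consuming bits LSB-first."""
        out = {}
        for name, width in FIELDS:
            out[name] = addr24 & ((1 << width) - 1)
            addr24 >>= width
        return out

    print(decode(0xABCDEF))
    # {'A0A1': 3, 'page': 123, 'plane': 3, 'block': 188, 'lun': 10}

- The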
CDC 502 manages the block write-back operation by queuing the blocks that are ready to be written back to the Flash memory. As described above, if any page in a block queued for a write operation is reopened, then the CDC 502 will stop the queued block write operation and remove the block from the queue. Once all the pages in a block are closed, the CDC 502 restarts the write-back operation and queues the block for a write operation. - In accordance with one embodiment, an exemplary read operation from
Flash 506 to DRAM 508 can be performed in approximately 400 μs, while a write operation from DRAM 508 to Flash 506 can be performed in approximately 22 ms, resulting in a read to write ratio of 55 to 1. Therefore, if the average time a host system's memory controller spends accessing data information in a block of DRAM is about 22 ms (that is, the duration during which a block comprises one or more open pages), then the block write-back operation from DRAM to Flash would not impact performance, and hence the disparity between read and write access may be completely hidden from the memory controller. If the block usage time is 11 ms instead of 22 ms, then the CDC 502 controls the data transfer operation between DRAM 508 and Flash 506 such that there are no more than 9 closed blocks in the queue to be written back to the Flash memory; hence approximately an average of 100 ms can be maintained for a standard DDR DRAM operation. Moreover, the number of closed blocks in the queue to be written back to the Flash memory subsystem varies with the average block usage time and the desired performance for a specific host system or for a specific application running using the host system resources. - Consequently, the maximum number of closed Blocks to be written-back to Flash can be approximated to be
- ((number of Blocks per bank) / (Flash_block_write_time / Flash_read_time)) × ((Block usage time) / Flash_block_write_time)
- In order to maintain a time period of less than 100 ms for queued write-back blocks, using a Flash memory subsystem having a 22 ms write access time per block results in a maximum of four blocks queued for a write operation to
Flash 506. Therefore, an average of approximately 88 ms (= 22 ms × 4) for the queued blocks means that each bank should not have more than four blocks that need to be written back to the Flash 506. - The above equation also indicates that a bigger DRAM memory space can support shorter block usage times. For example, 2 GB of DRAM memory allows 8 closed blocks to be queued for write-back to Flash.
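- The arithmetic above can be checked directly. The following sketch merely restates the figures from this embodiment (a 22 ms Flash block write time and a 100 ms queueing budget); it is a worked example, not a general formula.

    # Worked example of the write-back queue bound using the figures above.
    flash_block_write_ms = 22.0    # write access time per block (from above)
    queue_budget_ms = 100.0        # target: keep queued write-backs under 100 ms

    max_queued_blocks = int(queue_budget_ms // flash_block_write_ms)
    worst_case_ms = max_queued_blocks * flash_block_write_ms

    print(max_queued_blocks)   # 4 blocks queued per bank at most
    print(worst_case_ms)       # 88.0 ms, matching the approximation above

The table in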
FIG. 11 provides an estimation of the maximum allowed closed blocks in the queue to be written back to the Flash memory for different DRAM densities using various average block usage times. - Certain embodiments described herein include a memory system which can communicate with a host system such as a disk controller of a computer system. The memory system can include volatile and non-volatile memory, and a controller. The controller backs up the volatile memory using the non-volatile memory in the event of a trigger condition. Trigger conditions can include, for example, a power failure, power reduction, request by the host system, etc. In order to power the system in the event of a power failure or reduction, the memory system can include a secondary power source which does not comprise a battery and may include, for example, a capacitor or capacitor array.
- In certain embodiments, the memory system can be configured such that the operation of the volatile memory is not adversely affected by the non-volatile memory or by the controller when the volatile memory is interacting with the host system. For example, one or more isolation devices may isolate the non-volatile memory and the controller from the volatile memory when the volatile memory is interacting with the host system, and may allow communication between the volatile memory and the non-volatile memory when the data of the volatile memory is being restored or backed up. This configuration generally protects the operation of the volatile memory when isolated while providing backup and restore capability in the event of a trigger condition, such as a power failure.
- In certain embodiments described herein, the memory system includes a power module which provides power to the various components of the memory system from different sources based on a state of the memory system in relation to a trigger condition (e.g., a power failure). The power module may switch the source of the power to the various components in order to efficiently provide power in the event of the power failure. For example, when no power failure is detected, the power module may provide power to certain components, such as the volatile memory, from system power while charging a secondary power source (e.g., a capacitor array). In the event of a power failure or other trigger condition, the power module may power the volatile memory elements using the previously charged secondary power source.
- In certain embodiments, the power module transitions relatively smoothly from powering the volatile memory with system power to powering it with the secondary power source. For example, the power system may power the volatile memory with a third power source from the time the memory system detects that a power failure is likely to occur until the time the memory system detects that the power failure has actually occurred.
- In certain embodiments, the volatile memory system can be operated at a reduced frequency during backup and/or restore operations, which can improve the efficiency of the system and save power. In some embodiments, during backup and/or restore operations, the volatile memory communicates with the non-volatile memory by writing and/or reading data words in bit-wise slices instead of by writing entire words at once. In certain embodiments, when each slice is being written to or read from the volatile memory, the unused slice(s) of volatile memory are not active, which can reduce the power consumption of the system.
- In yet other embodiments, the non-volatile memory can include at least 100 percent more storage capacity than the volatile memory. This configuration can allow the memory system to efficiently handle subsequent trigger conditions.
-
FIG. 12 is a block diagram of an example memory system 1010 compatible with certain embodiments described herein. The memory system 1010 can be coupled to a host computer system and can include a volatile memory subsystem 1030, a non-volatile memory subsystem 1040, and a controller 1062 operatively coupled to the non-volatile memory subsystem 1040. In certain embodiments, the memory system 1010 includes at least one circuit 1052 configured to selectively operatively decouple the controller 1062 from the volatile memory subsystem 1030. - In certain embodiments, the
memory system 1010 comprises a memory module. The memory system 1010 may comprise a printed-circuit board (PCB) 1020. In certain embodiments, the memory system 1010 has a volatile memory capacity of 512-MB, 1-GB, 2-GB, 4-GB, or 8-GB. Other volatile memory capacities are also compatible with certain embodiments described herein. In certain embodiments, the memory system 1010 has a non-volatile memory capacity of 512-MB, 1-GB, 2-GB, 4-GB, 8-GB, 16-GB, or 32-GB. Other non-volatile memory capacities are also compatible with certain embodiments described herein. In addition, memory systems 1010 having widths of 4 bytes, 8 bytes, 16 bytes, 32 bytes, or 32 bits, 64 bits, 128 bits, 256 bits, as well as other widths (in bytes or in bits), are compatible with embodiments described herein. In certain embodiments, the PCB 1020 has an industry-standard form factor. For example, the PCB 1020 can have a low profile (LP) form factor with a height of 30 millimeters and a width of 133.35 millimeters. In certain other embodiments, the PCB 1020 has a very high profile (VHP) form factor with a height of 50 millimeters or more. In certain other embodiments, the PCB 1020 has a very low profile (VLP) form factor with a height of 18.3 millimeters. Other form factors including, but not limited to, small-outline (SO-DIMM), unbuffered (UDIMM), registered (RDIMM), fully-buffered (FBDIMM), miniDIMM, mini-RDIMM, VLP mini-DIMM, micro-DIMM, and SRAM DIMM are also compatible with certain embodiments described herein. For example, in other embodiments, certain non-DIMM form factors are possible such as, for example, single in-line memory module (SIMM), multi-media card (MMC), and small computer system interface (SCSI). - In certain preferred embodiments, the
memory system 1010 is in electrical communication with the host system. In other embodiments, the memory system 1010 may communicate with a host system using some other type of communication, such as, for example, optical communication. Examples of host systems include, but are not limited to, blade servers, 1U servers, personal computers (PCs), and other applications in which space is constrained or limited. The memory system 1010 can be in communication with a disk controller of a computer system, for example. The PCB 1020 can comprise an interface 1022 that is configured to be in electrical communication with the host system (not shown). For example, the interface 1022 can comprise a plurality of edge connections which fit into a corresponding slot connector of the host system. The interface 1022 of certain embodiments provides a conduit for power voltage as well as data, address, and control signals between the memory system 1010 and the host system. For example, the interface 1022 can comprise a standard 240-pin DDR2 edge connector. - The
volatile memory subsystem 1030 comprises a plurality of volatile memory elements 1032 and the non-volatile memory subsystem 1040 comprises a plurality of non-volatile memory elements 1042. Certain embodiments described herein advantageously provide non-volatile storage via the non-volatile memory subsystem 1040 in addition to high-performance (e.g., high speed) storage via the volatile memory subsystem 1030. In certain embodiments, the first plurality of volatile memory elements 1032 comprises two or more dynamic random-access memory (DRAM) elements. Types of DRAM elements 1032 compatible with certain embodiments described herein include, but are not limited to, DDR, DDR2, DDR3, and synchronous DRAM (SDRAM). For example, in the block diagram of FIG. 12 , the first memory bank 1030 comprises eight 64M×8 DDR2 SDRAM elements 1032. The volatile memory elements 1032 may comprise other types of memory elements such as static random-access memory (SRAM). In addition, volatile memory elements 1032 having bit widths of 4, 8, 16, 32, as well as other bit widths, are compatible with certain embodiments described herein. Volatile memory elements 1032 compatible with certain embodiments described herein have packaging which includes, but is not limited to, thin small-outline package (TSOP), ball-grid-array (BGA), fine-pitch BGA (FBGA), micro-BGA (μBGA), mini-BGA (mBGA), and chip-scale packaging (CSP). - In certain embodiments, the second plurality of
non-volatile memory elements 1042 comprises one or more flash memory elements. Types of flash memory elements 1042 compatible with certain embodiments described herein include, but are not limited to, NOR flash, NAND flash, OneNAND flash, and multi-level cell (MLC) flash. For example, in the block diagram of FIG. 12 , the second memory bank 1040 comprises 512 MB of flash memory organized as four 128 Mb×8 NAND flash memory elements 1042. In addition, non-volatile memory elements 1042 having bit widths of 4, 8, 16, 32, as well as other bit widths, are compatible with certain embodiments described herein. Non-volatile memory elements 1042 compatible with certain embodiments described herein have packaging which includes, but is not limited to, thin small-outline package (TSOP), ball-grid-array (BGA), fine-pitch BGA (FBGA), micro-BGA (μBGA), mini-BGA (mBGA), and chip-scale packaging (CSP). -
FIG. 13 is a block diagram of an example memory module 1010 with ECC (error-correcting code) having a volatile memory subsystem 1030 with nine volatile memory elements 1032 and a non-volatile memory subsystem 1040 with five non-volatile memory elements 1042 in accordance with certain embodiments described herein. The additional memory element 1032 of the first memory bank 1030 and the additional memory element 1042 of the second memory bank 1040 provide the ECC capability. In certain other embodiments, the volatile memory subsystem 1030 comprises other numbers of volatile memory elements 1032 (e.g., 2, 3, 4, 5, 6, 7, more than 9). In certain embodiments, the non-volatile memory subsystem 1040 comprises other numbers of non-volatile memory elements 1042 (e.g., 2, 3, more than 5). - Referring to
FIG. 12 , in certain embodiments, the logic element 1070 comprises a field-programmable gate array (FPGA). In certain embodiments, the logic element 1070 comprises an FPGA available from Lattice Semiconductor Corporation which includes an internal flash. In certain other embodiments, the logic element 1070 comprises an FPGA available from another vendor. The internal flash can improve the speed of the memory system 1010 and save physical space. Other types of logic elements 1070 compatible with certain embodiments described herein include, but are not limited to, a programmable-logic device (PLD), an application-specific integrated circuit (ASIC), a custom-designed semiconductor device, and a complex programmable logic device (CPLD). In certain embodiments, the logic element 1070 is a custom device. In certain embodiments, the logic element 1070 comprises various discrete electrical elements, while in certain other embodiments, the logic element 1070 comprises one or more integrated circuits. FIG. 14 is a block diagram of an example memory module 1010 having a microcontroller unit 1060 and logic element 1070 integrated into a single controller 1062 in accordance with certain embodiments described herein. In certain embodiments, the controller 1062 includes one or more other components. For example, in one embodiment, an FPGA without an internal flash is used and the controller 1062 includes a separate flash memory component which stores configuration information to program the FPGA. - In certain embodiments, the at least one
circuit 1052 comprises one or more switches coupled to the volatile memory subsystem 1030, to the controller 1062, and to the host computer (e.g., via the interface 1022, as schematically illustrated by FIGS. 12-14 ). The one or more switches are responsive to signals (e.g., from the controller 1062) to selectively operatively decouple the controller 1062 from the volatile memory subsystem 1030 and to selectively operatively couple the controller 1062 to the volatile memory subsystem 1030. In addition, in certain embodiments, the at least one circuit 1052 selectively operatively couples and decouples the volatile memory subsystem 1030 and the host system. - In certain embodiments, the
volatile memory subsystem 1030 can comprise a registered DIMM subsystem comprising one or more registers 1160 and a plurality of DRAM elements 1180, as schematically illustrated by FIG. 15A . In certain such embodiments, the at least one circuit 1052 can comprise one or more switches 1172 coupled to the controller 1062 (e.g., logic element 1070) and to the volatile memory subsystem 1030 which can be actuated to couple and decouple the controller 1062 to and from the volatile memory subsystem 1030, respectively. The memory system 1010 further comprises one or more switches 1170 coupled to the one or more registers 1160 and to the plurality of DRAM elements 1180, as schematically illustrated by FIG. 15A . The one or more switches 1170 can be selectively switched, thereby selectively operatively coupling the volatile memory subsystem 1030 to the host system 1150. In certain other embodiments, as schematically illustrated by FIG. 15B , the one or more switches 1174 are also coupled to the one or more registers 1160 and to a power source 1162 for the one or more registers 1160. The one or more switches 1174 can be selectively switched to turn power on or off to the one or more registers 1160, thereby selectively operatively coupling the volatile memory subsystem 1030 to the host system 1150. As schematically illustrated by FIG. 15C , in certain embodiments the at least one circuit 1052 comprises a dynamic on-die termination (ODT) circuit 1176 of the logic element 1070. For example, the logic element 1070 can comprise a dynamic ODT circuit 1176 which selectively operatively couples and decouples the logic element 1070 to and from the volatile memory subsystem 1030, respectively. In addition, and similar to the example embodiment of FIG. 15A described above, the one or more switches 1170 can be selectively switched, thereby selectively operatively coupling the volatile memory subsystem 1030 to the host system 1150. - Certain embodiments described herein utilize the
non-volatile memory subsystem 1040 as a flash “mirror” to provide backup of the volatile memory subsystem 1030 in the event of certain system conditions. For example, the non-volatile memory subsystem 1040 may back up the volatile memory subsystem 1030 in the event of a trigger condition, such as, for example, a power failure or power reduction or a request from the host system. In one embodiment, the non-volatile memory subsystem 1040 holds intermediate data results in a noisy system environment when the host computer system is engaged in a long computation. In certain embodiments, a backup may be performed on a regular basis. For example, in one embodiment, the backup may occur every millisecond in response to a trigger condition. In certain embodiments, the trigger condition occurs when the memory system 1010 detects that the system voltage is below a certain threshold voltage. For example, in one embodiment, the threshold voltage is 10 percent below a specified operating voltage. In certain embodiments, a trigger condition occurs when the voltage goes above a certain threshold value, such as, for example, 10 percent above a specified operating voltage. In some embodiments, a trigger condition occurs when the voltage goes below a threshold or above another threshold. In various embodiments, a backup and/or restore operation may occur in reboot and/or non-reboot trigger conditions. - As schematically illustrated by
FIGS. 12 and 13 , in certain embodiments, the controller 1062 may comprise a microcontroller unit (MCU) 1060 and a logic element 1070. In certain embodiments, the MCU 1060 provides memory management for the non-volatile memory subsystem 1040 and controls data transfer between the volatile memory subsystem 1030 and the non-volatile memory subsystem 1040. The MCU 1060 of certain embodiments comprises a 16-bit microcontroller, although other types of microcontrollers are also compatible with certain embodiments described herein. As schematically illustrated by FIGS. 12 and 13 , the logic element 1070 of certain embodiments is in electrical communication with the non-volatile memory subsystem 1040 and the MCU 1060. The logic element 1070 can provide signal level translation between the volatile memory elements 1032 (e.g., 1.8V SSTL-2 for DDR2 SDRAM elements) and the non-volatile memory elements 1042 (e.g., 3V TTL for NAND flash memory elements). In certain embodiments, the logic element 1070 is also programmed to perform address translation between the volatile memory subsystem 1030 and the non-volatile memory subsystem 1040. In certain preferred embodiments, 1-NAND type flash elements are used for the non-volatile memory elements 1042 because of their superior read speed and compact structure. - The
memory system 1010 of certain embodiments is configured to be operated in at least two states. The at least two states can comprise a first state in which the controller 1062 and the non-volatile memory subsystem 1040 are operatively decoupled (e.g., isolated) from the volatile memory subsystem 1030 by the at least one circuit 1052 and a second state in which the volatile memory subsystem 1030 is operatively coupled to the controller 1062 to allow data to be communicated between the volatile memory subsystem 1030 and the non-volatile memory subsystem 1040 via the controller 1062. The memory system 1010 may transition from the first state to the second state in response to a trigger condition, such as when the memory system 1010 detects that there is a power interruption (e.g., power failure or reduction) or a system hang-up. - The
memory system 1010 may further comprise a voltage monitor 1050. The voltage monitor circuit 1050 monitors the voltage supplied by the host system via the interface 1022. Upon detecting a low voltage condition (e.g., due to a power interruption to the host system), the voltage monitor circuit 1050 may transmit a signal to the controller 1062 indicative of the detected condition. The controller 1062 of certain embodiments responds to the signal from the voltage monitor circuit 1050 by transmitting a signal to the at least one circuit 1052 to operatively couple the controller to the volatile memory system 1030, such that the memory system 1010 enters the second state. For example, the voltage monitor 1050 may send a signal to the MCU 1060 which responds by accessing the data on the volatile memory system 1030 and by executing a write cycle on the non-volatile memory subsystem 1040. During this write cycle, data is read from the volatile memory subsystem 1030 and is transferred to the non-volatile memory subsystem 1040 via the MCU 1060. In certain embodiments, the voltage monitor circuit 1050 is part of the controller 1062 (e.g., part of the MCU 1060) and the voltage monitor circuit 1050 transmits a signal to the other portions of the controller 1062 upon detecting a power threshold condition.
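- The threshold test that a voltage monitor such as circuit 1050 might apply can be sketched as follows. The 10 percent margins follow the example thresholds given above; the nominal voltage, function name, and structure are illustrative assumptions.

    # Sketch of the threshold test a voltage monitor such as 1050 might apply
    # (nominal voltage and 10 percent margins are the example figures above;
    # the function itself is an illustrative assumption).
    NOMINAL_V = 1.8    # assumed specified operating voltage

    def trigger_condition(measured_v, margin=0.10):
        """Signal a trigger when the rail strays more than margin from nominal."""
        low = NOMINAL_V * (1.0 - margin)
        high = NOMINAL_V * (1.0 + margin)
        return measured_v < low or measured_v > high

    assert not trigger_condition(1.80)   # normal operation: remain in first state
    assert trigger_condition(1.55)       # low-voltage event: enter second state

- The isolation or operational decoupling of the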
volatile memory subsystem 1030 from the non-volatile memory subsystem in the first state can preserve the integrity of the operation of the memory system 1010 during periods of operation in which signals (e.g., data) are transmitted between the host system and the volatile memory subsystem 1030. For example, in one embodiment during such periods of operation, the controller 1062 and the non-volatile memory subsystem 1040 do not add a significant capacitive load to the volatile memory system 1030 when the memory system 1010 is in the first state. In certain such embodiments, the capacitive loads of the controller 1062 and the non-volatile memory subsystem 1040 do not significantly affect the signals propagating between the volatile memory subsystem 1030 and the host system. This can be particularly advantageous in relatively high-speed memory systems where loading effects can be significant. In one preferred embodiment, the at least one circuit 1052 comprises an FSA1208 Low-Power, Eight-Port, Hi-Speed Isolation Switch from Fairchild Semiconductor. In other embodiments, the at least one circuit 1052 comprises other types of isolation devices. - Power may be supplied to the
volatile memory subsystem 1030 from a first power supply (e.g., a system power supply) when the memory system 1010 is in the first state and from a second power supply 1080 when the memory system 1010 is in the second state. In certain embodiments, the memory system 1010 is in the first state when no trigger condition (e.g., a power failure) is present and the memory system 1010 enters the second state in response to a trigger condition. In certain embodiments, the memory system 1010 has a third state in which the controller 1062 is operatively decoupled from the volatile memory subsystem 1030 and power is supplied to the volatile memory subsystem 1030 from a third power supply (not shown). For example, in one embodiment the third power supply may provide power to the volatile memory subsystem 1030 when the memory system 1010 detects that a trigger condition is likely to occur but has not yet occurred. - In certain embodiments, the
second power supply 1080 does not comprise a battery. Because a battery is not used, the second power supply 1080 of certain embodiments may be relatively easy to maintain, does not generally need to be replaced, and is relatively environmentally friendly. In certain embodiments, as schematically illustrated by FIGS. 12-14 , the second power supply 1080 comprises a step-up transformer 1082, a step-down transformer 1084, and a capacitor bank 1086 comprising one or more capacitors (e.g., double-layer capacitors). In one example embodiment, the capacitors may take about three to four minutes to charge and about two minutes to discharge. In other embodiments, the one or more capacitors may take a longer time or a shorter time to charge and/or discharge. For example, in certain embodiments, the second power supply 1080 is configured to power the volatile memory subsystem 1030 for less than thirty minutes. In certain embodiments, the second power supply 1080 may comprise a battery. For example, in certain embodiments, the second power supply 1080 comprises a battery and one or more capacitors and is configured to power the volatile memory subsystem 1030 for no more than thirty minutes. - In certain embodiments, the
capacitor bank 1086 of the second power supply 1080 is charged by the first power supply while the memory system 1010 is in the first state. As a result, the second power supply 1080 is fully charged when the memory system 1010 enters the second state. The memory system 1010 and the second power supply 1080 may be located on the same printed circuit board 1020. In other embodiments, the second power supply 1080 may not be on the same printed circuit board 1020 and may be tethered to the printed circuit board 1020, for example. - When operating in the first state, in certain embodiments, the step-up
transformer 1082 keeps the capacitor bank 1086 charged at a peak value. In certain embodiments, the step-down transformer 1084 acts as a voltage regulator to ensure that regulated voltages are supplied to the memory elements (e.g., 1.8V to the volatile DRAM elements 1032 and 3.0V to the non-volatile flash memory elements 1042) when operating in the second state (e.g., during power down). In certain embodiments, as schematically illustrated by FIGS. 12-14 , the memory module 1010 further comprises a switch 1090 (e.g., FET switch) that switches the power provided to the controller 1062, the volatile memory subsystem 1030, and the non-volatile memory subsystem 1040 between the power from the second power supply 1080 and the power from the first power supply (e.g., system power) received via the interface 1022. For example, the switch 1090 may switch from the first power supply to the second power supply 1080 when the voltage monitor 1050 detects a low voltage condition. The switch 1090 of certain embodiments advantageously ensures that the volatile memory elements 1032 and non-volatile memory elements 1042 are powered long enough for the data to be transferred from the volatile memory elements 1032 and stored in the non-volatile memory elements 1042. In certain embodiments, after the data transfer is complete, the switch 1090 then switches back to the first power supply and the controller 1062 transmits a signal to the at least one circuit 1052 to operatively decouple the controller 1062 from the volatile memory subsystem 1030, such that the memory system 1010 re-enters the first state. - When the
memory system 1010 re-enters the first state, data may be transferred back from the non-volatile memory subsystem 1040 to the volatile memory subsystem 1030 via the controller 1062. The host system can then resume accessing the volatile memory subsystem 1030 of the memory module 1010. In certain embodiments, after the memory system 1010 enters or re-enters the first state (e.g., after power is restored), the host system accesses the volatile memory subsystem 1030 rather than the non-volatile memory subsystem 1040 because the volatile memory elements 1032 have superior read/write characteristics. In certain embodiments, the transfer of data from the volatile memory bank 1030 to the non-volatile memory bank 1040, or from the non-volatile memory bank 1040 to the volatile memory bank 1030, takes less than one minute per GB. - In certain embodiments, the
memory system 1010 protects the operation of the volatile memory when communicating with the host system and provides backup and restore capability in the event of a trigger condition such as a power failure. In certain embodiments, the memory system 1010 copies the entire contents of the volatile memory subsystem 1030 into the non-volatile memory subsystem 1040 on each backup operation. Moreover, in certain embodiments, the entire contents of the non-volatile memory subsystem 1040 are copied back into the volatile memory subsystem 1030 on each restore operation. In certain embodiments, the entire contents of the non-volatile memory subsystem 1040 are accessed for each backup and/or restore operation, such that the non-volatile memory subsystem 1040 (e.g., flash memory subsystem) is used generally uniformly across its memory space and wear-leveling is not performed by the memory system 1010. In certain embodiments, avoiding wear-leveling can decrease the cost and complexity of the memory system 1010 and can improve the performance of the memory system 1010. In certain other embodiments, the entire contents of the volatile memory subsystem 1030 are not copied into the non-volatile memory subsystem 1040 on each backup operation; only a partial copy is performed. In certain embodiments, other management capabilities such as bad-block management and error management for the flash memory elements of the non-volatile memory subsystem 1040 are performed in the controller 1062. - The
memory system 1010 generally operates as a write-back cache in certain embodiments. For example, in one embodiment, the host system (e.g., a disk controller) writes data to the volatile memory subsystem 1030 which then writes the data to non-volatile storage which is not part of the memory system 1010, such as, for example, a hard disk. The disk controller may wait for an acknowledgment signal from the memory system 1010 indicating that the data has been written to the hard disk or is otherwise secure. The memory system 1010 of certain embodiments can decrease delays in the system operation by indicating that the data has been written to the hard disk before it has actually done so. In certain embodiments, the memory system 1010 will still be able to recover the data efficiently in the event of a power outage because of the backup and restore capabilities described herein. In certain other embodiments, the memory system 1010 may be operated as a write-through cache or as some other type of cache. -
- FIG. 16 schematically illustrates an example power module 1100 of the memory system 1010 in accordance with certain embodiments described herein. The power module 1100 provides power to the various components of the memory system 1010 using different elements based on a state of the memory system 1010 in relation to a trigger condition. In certain embodiments, the power module 1100 comprises one or more of the components described above with respect to FIG. 12. For example, in certain embodiments, the power module 1100 includes the second power supply 1080 and the switch 1090. - The
power module 1100 provides a plurality of voltages to the memory system 1010 comprising the non-volatile and volatile memory subsystems 1040, 1030. The plurality of voltages comprises a first voltage 1102 and a second voltage 1104. The power module 1100 comprises an input 1106 providing a third voltage 1108 to the power module 1100 and a voltage conversion element 1120 configured to provide the second voltage 1104 to the memory system 1010. The power module 1100 further comprises a first power element 1130 configured to selectively provide a fourth voltage 1110 to the conversion element 1120. In certain embodiments, the first power element 1130 comprises a pulse-width modulation power controller. For example, in one example embodiment, the first power element 1130 is configured to receive a 1.8V input system voltage as the third voltage 1108 and to output a modulated 5V output as the fourth voltage 1110. - The
power module 1100 further comprises a second power element 1140 that can be configured to selectively provide a fifth voltage 1112 to the conversion element 1120. The power module 1100 can be configured to selectively provide the first voltage 1102 to the memory system 1010 either from the conversion element 1120 or from the input 1106. - The
power module 1100 can be configured to be operated in at least three states in certain embodiments. In a first state, the first voltage 1102 is provided to the memory system 1010 from the input 1106 and the fourth voltage 1110 is provided to the conversion element 1120 from the first power element 1130. In a second state, the fourth voltage 1110 is provided to the conversion element 1120 from the first power element 1130 and the first voltage 1102 is provided to the memory system 1010 from the conversion element 1120. In the third state, the fifth voltage 1112 is provided to the conversion element 1120 from the second power element 1140 and the first voltage 1102 is provided to the memory system 1010 from the conversion element 1120. - In certain embodiments, the
power module 1100 transitions from the first state to the second state upon detecting that a trigger condition is likely to occur and transitions from the second state to the third state upon detecting that the trigger condition has occurred. For example, the power module 1100 may transition to the second state when it detects that a power failure is about to occur and transition to the third state when it detects that the power failure has occurred. In certain embodiments, providing the first voltage 1102 in the second state from the first power element 1130 rather than from the input 1106 allows a smoother transition from the first state to the third state. For example, in certain embodiments, providing the first voltage 1102 from the first power element 1130 has capacitive and other smoothing effects. In addition, switching the point of power transition to be between the conversion element 1120 and the first and second power elements 1130, 1140 (e.g., the sources of the pre-regulated fourth voltage 1110 in the second state and the pre-regulated fifth voltage 1112 in the third state) can smooth out potential voltage spikes.
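The three-state behavior can be modeled as a small state machine. The sketch below is an interpretation in C with assumed predicate and actuator names; it encodes only the transitions described above (first to second on "trigger likely," second to third on "trigger occurred").

```c
#include <stdbool.h>

enum pm_state { PM_STATE_1, PM_STATE_2, PM_STATE_3 };

extern bool trigger_likely(void);   /* hypothetical: e.g., early power-fail warning */
extern bool trigger_occurred(void); /* hypothetical: e.g., input power lost */
extern void source_first_voltage_from_conversion(void); /* instead of input 1106 */
extern void feed_conversion_from_second_element(void);  /* fifth voltage 1112 */

enum pm_state pm_step(enum pm_state s)
{
    switch (s) {
    case PM_STATE_1:                        /* first voltage sourced from input 1106 */
        if (trigger_likely()) {
            source_first_voltage_from_conversion();
            return PM_STATE_2;
        }
        return PM_STATE_1;
    case PM_STATE_2:                        /* pre-regulated by first power element 1130 */
        if (trigger_occurred()) {
            feed_conversion_from_second_element();
            return PM_STATE_3;              /* capacitor-backed operation */
        }
        return PM_STATE_2;
    default:
        return s;                           /* third state: ride out on capacitors */
    }
}
```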
- In certain embodiments, the second power element 1140 does not comprise a battery and may comprise one or more capacitors. For example, as schematically illustrated in FIG. 16, the second power element 1140 comprises a capacitor array 1142, a buck-boost converter 1144 which adjusts the voltage for charging the capacitor array, and a voltage/current limiter 1146 which limits the charge current to the capacitor array 1142 and stops charging the capacitor array 1142 when it has reached a certain charge voltage. In one example embodiment, the capacitor array 1142 comprises two 50-farad capacitors capable of holding a total charge of 4.6V. For example, in one example embodiment, the buck-boost converter 1144 receives a 1.8V system voltage (the third voltage 1108) and boosts it to 4.3V, which is output to the voltage/current limiter 1146. The voltage/current limiter 1146 limits the current going to the capacitor array 1142 to 1 A and stops charging the array 1142 when it is charged to 4.3V. Although described with respect to certain example embodiments, one of ordinary skill will recognize from the disclosure herein that the second power element 1140 may include alternative embodiments. For example, different components and/or components with different values may be used. For example, in other embodiments, a pure boost converter may be used instead of a buck-boost converter. In another embodiment, only one capacitor may be used instead of a capacitor array 1142.
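A back-of-envelope energy check makes these numbers plausible. Assuming the two 50 F capacitors are stacked in series (which would account for the 4.6V combined rating and gives an effective 25 F) and ignoring converter losses:

```latex
E_{\text{stored}} = \tfrac{1}{2} C_{\text{eff}} V^2
                  = \tfrac{1}{2}\,(25\,\mathrm{F})\,(4.3\,\mathrm{V})^2 \approx 231\,\mathrm{J},
\qquad
E_{\text{load}} \approx (1.8\,\mathrm{V})(2\,\mathrm{A})(60\,\mathrm{s}) = 216\,\mathrm{J}.
```

The stored energy is of the same order as the roughly 60-second, 1.8V/2A backup load described below, consistent with a battery-free backup window on the order of a minute; the series configuration and the loss-free assumption are mine, not the patent's.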
- The conversion element 1120 can comprise one or more buck converters and/or one or more buck-boost converters. The conversion element 1120 may comprise a plurality of sub-blocks 1122, 1124, 1126, as schematically illustrated by FIG. 16, which can provide voltages in addition to the second voltage 1104 to the memory system 1010. The sub-blocks may comprise various converter circuits, such as buck converters, boost converters, and buck-boost converter circuits, for providing various voltage values to the memory system 1010. For example, in one embodiment, sub-block 1122 comprises a buck converter, sub-block 1124 comprises a dual buck converter, and sub-block 1126 comprises a buck-boost converter, as schematically illustrated by FIG. 16. Various other components for the sub-blocks 1122, 1124, 1126 of the conversion element 1120 are also compatible with certain embodiments described herein. In certain embodiments, the conversion element 1120 receives as input either the fourth voltage 1110 from the first power element 1130 or the fifth voltage 1112 from the second power element 1140, depending on the state of the power module 1100, and reduces the input to an appropriate level for powering various components of the memory system. For example, the buck converter of sub-block 1122 can provide 1.8V at 2 A for about 60 seconds to the volatile memory elements 1032 (e.g., DRAM), the non-volatile memory elements 1042 (e.g., flash), and the controller 1062 (e.g., an FPGA) in one embodiment. The sub-block 1124 can provide the second voltage 1104 as well as another reduced voltage 1105 to the memory system 1010. In one example embodiment, the second voltage 1104 is 2.5V and is used to power the at least one circuit 1052 (e.g., isolation device), and the other reduced voltage 1105 is 1.2V and is used to power the controller 1062 (e.g., FPGA). The sub-block 1126 can provide yet another voltage 1107 to the memory system 1010. For example, the voltage 1107 may be 3.3V and may be used to power both the controller 1062 and the at least one circuit 1052. - Although described with respect to certain example embodiments, one of ordinary skill will recognize from the disclosure herein that the
conversion element 1120 may include alternative embodiments. For example, there may be more or fewer sub-blocks, which may comprise other types of converters (e.g., pure boost converters) or which may produce different voltage values. In one embodiment, the volatile memory elements 1032 and non-volatile memory elements 1042 are powered using independent voltages and are not both powered using the first voltage 1102. -
FIG. 17 is a flowchart of an example method 1200 of providing a first voltage 1102 and a second voltage 1104 to a memory system 1010 including volatile and non-volatile memory subsystems 1030, 1040. While the method 1200 is described herein by reference to the memory system 1010 schematically illustrated by FIGS. 12-15, other memory systems are also compatible with embodiments of the method 1200. During a first condition, the method 1200 comprises providing the first voltage 1102 to the memory system 1010 from an input power supply 1106 and providing the second voltage 1104 to the memory system 1010 from a first power subsystem in an operational block 1210. For example, in one embodiment, the first power subsystem comprises the first power element 1130 and the voltage conversion element 1120 described above with respect to FIG. 16. In other embodiments, other first power subsystems are used. - The
method 1200 further comprises detecting a second condition in an operational block 1220. In certain embodiments, detecting the second condition comprises detecting that a trigger condition is likely to occur. During the second condition, the method 1200 comprises providing the first voltage 1102 and the second voltage 1104 to the memory system 1010 from the first power subsystem in an operational block 1230. For example, referring to FIG. 16, a switch 1148 can be toggled to provide the first voltage 1102 from the conversion element 1120 rather than from the input power supply. - The
method 1200 further comprises charging a second power subsystem in an operational block 1240. In certain embodiments, the second power subsystem comprises the second power element 1140 or another power supply that does not comprise a battery. For example, in one embodiment, the second power subsystem comprises the second power element 1140 and the voltage conversion element 1120 described above with respect to FIG. 16. In other embodiments, some other second power subsystem is used. - The
method 1200 further comprises detecting a third condition in an operational block 1250 and, during the third condition, providing the first voltage 1102 and the second voltage 1104 to the memory system 1010 from the second power subsystem 1140 in an operational block 1260. In certain embodiments, detecting the third condition comprises detecting that the trigger condition has occurred. The trigger condition may comprise various conditions described herein. In various embodiments, for example, the trigger condition comprises a power reduction, power failure, or system hang-up. The operational blocks of the method 1200 may be performed in different orders in various embodiments. For example, in certain embodiments, the second power subsystem 1140 is charged before detecting the second condition. - In certain embodiments, the
memory system 1010 comprises a volatile memory subsystem 1030 and a non-volatile memory subsystem 1040 comprising at least 100 percent more storage capacity than does the volatile memory subsystem. The memory system 1010 also comprises a controller 1062 operatively coupled to the volatile memory subsystem 1030 and operatively coupled to the non-volatile memory subsystem 1040. The controller 1062 can be configured to allow data to be communicated between the volatile memory subsystem 1030 and the host system when the memory system 1010 is operating in a first state and to allow data to be communicated between the volatile memory subsystem 1030 and the non-volatile memory subsystem 1040 when the memory system 1010 is operating in a second state. - Although the
memory system 1010 having extra storage capacity of the non-volatile memory subsystem 1040 has been described with respect to certain embodiments, alternative configurations exist. For example, in certain embodiments, there may be more than 100 percent more storage capacity in the non-volatile memory subsystem 1040 than in the volatile memory subsystem 1030. In various embodiments, there may be at least 200, 300, or 400 percent more storage capacity in the non-volatile memory subsystem 1040 than in the volatile memory subsystem 1030. In other embodiments, the non-volatile memory subsystem 1040 includes at least some other integer multiple of the storage capacity of the volatile memory subsystem 1030. In some embodiments, the non-volatile memory subsystem 1040 includes a non-integer multiple of the storage capacity of the volatile memory subsystem 1030. In one embodiment, the non-volatile memory subsystem 1040 includes less than 100 percent more storage capacity than does the volatile memory subsystem 1030. - The extra storage capacity of the
non-volatile memory subsystem 1040 can be used to improve the backup capability of the memory system 1010. In certain embodiments in which data can only be written to portions of the non-volatile memory subsystem 1040 which do not contain data (e.g., portions which have been erased), the extra storage capacity of the non-volatile memory subsystem 1040 allows the volatile memory subsystem 1030 to be backed up in the event of a subsequent power failure or other trigger event. For example, the extra storage capacity of the non-volatile memory subsystem 1040 may allow the memory system 1010 to back up the volatile memory subsystem 1030 efficiently in the event of multiple trigger conditions (e.g., power failures). In the event of a first power failure, for example, the data in the volatile memory subsystem 1030 is copied to a first, previously erased portion of the non-volatile memory subsystem 1040 via the controller 1062. Since the non-volatile memory subsystem 1040 has more storage capacity than does the volatile memory subsystem 1030, there is a second portion of the non-volatile memory subsystem 1040 which does not have data from the volatile memory subsystem 1030 copied to it and which remains free of data (e.g., erased). Once system power is restored, the controller 1062 of the memory system 1010 restores the data to the volatile memory subsystem 1030 by copying the backed-up data from the non-volatile memory subsystem 1040 back to the volatile memory subsystem 1030. After the data is restored, the memory system 1010 erases the non-volatile memory subsystem 1040. While the first portion of the non-volatile memory subsystem 1040 is being erased, it may be temporarily inaccessible. - If a subsequent power failure occurs before the first portion of the
non-volatile memory subsystem 1040 is completely erased, the volatile memory subsystem 1030 can be backed up or stored again in the second portion of the non-volatile memory subsystem 1040 as described herein. In certain embodiments, the extra storage capacity of the non-volatile memory subsystem 1040 may allow the memory system 1010 to operate more efficiently. For example, because of the extra storage capacity of the non-volatile memory subsystem 1040, the memory system 1010 can handle a higher frequency of trigger events, without being limited by the erase time of the non-volatile memory subsystem 1040.
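In other words, the two portions of the flash can be used in a ping-pong fashion: a new backup can begin in the erased portion even while the previously used portion is still being erased. A minimal C sketch of that bookkeeping follows, with assumed region-management helpers.

```c
#include <stdbool.h>

enum region { REGION_A, REGION_B };

extern bool region_is_erased(enum region r);  /* hypothetical helpers */
extern void erase_region_async(enum region r);
extern void copy_dram_to_region(enum region r);

/* On a trigger condition, back up into whichever portion is already erased,
 * even if the other portion is still being erased from the last restore. */
void on_trigger_backup(void)
{
    enum region target = region_is_erased(REGION_A) ? REGION_A : REGION_B;
    copy_dram_to_region(target);
}

/* After a restore completes, erase the portion that held the backup; the
 * other portion remains erased and ready for the next trigger condition. */
void on_restore_complete(enum region backed_up)
{
    erase_region_async(backed_up);
}
```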
- FIG. 18 is a flowchart of an example method 1300 of controlling a memory system 1010 operatively coupled to a host system and which includes a volatile memory subsystem 1030 and a non-volatile memory subsystem 1040. In certain embodiments, the non-volatile memory subsystem 1040 comprises at least 100 percent more storage capacity than does the volatile memory subsystem 1030, as described herein. While the method 1300 is described herein by reference to the memory system 1010 schematically illustrated by FIGS. 12-14, the method 1300 can be practiced using other memory systems in accordance with certain embodiments described herein. In an operational block 1310, the method 1300 comprises communicating data between the volatile memory subsystem 1030 and the host system when the memory system 1010 is in a first mode of operation. The method 1300 further comprises storing a first copy of data from the volatile memory subsystem 1030 to the non-volatile memory subsystem 1040 at a first time when the memory system 1010 is in a second mode of operation in an operational block 1320. - In an
operational block 1330, the method 1300 comprises restoring the first copy of data from the non-volatile memory subsystem 1040 to the volatile memory subsystem 1030. The method 1300 further comprises erasing the first copy of data from the non-volatile memory subsystem 1040 in an operational block 1340. The method further comprises storing a second copy of data from the volatile memory subsystem 1030 to the non-volatile memory subsystem 1040 at a second time when the memory system 1010 is in the second mode of operation in an operational block 1350. Storing the second copy begins before the first copy is completely erased from the non-volatile memory subsystem 1040. - In some embodiments, the
memory system 1010 enters the second mode of operation in response to a trigger condition, such as a power failure. In certain embodiments, the first copy of data and the second copy of data are stored in separate portions of the non-volatile memory subsystem 1040. The method 1300 can also include restoring the second copy of data from the non-volatile memory subsystem 1040 to the volatile memory subsystem 1030 in an operational block 1360. The operational blocks of method 1300 referred to herein may be performed in different orders in various embodiments. For example, in some embodiments, the second copy of data is restored to the volatile memory subsystem 1030 at operational block 1360 before the first copy of data is completely erased in the operational block 1340. -
FIG. 19 schematically illustrates an example clock distribution topology 1400 of a memory system 1010 in accordance with certain embodiments described herein. The clock distribution topology 1400 generally illustrates the creation and routing of the clock signals provided to the various components of the memory system 1010. A clock source 1402, such as, for example, a 25 MHz oscillator, generates a clock signal. The clock source 1402 may feed a clock generator 1404 which provides a clock signal 1406 to the controller 1062, which may be an FPGA. In one embodiment, the clock generator 1404 generates a 125 MHz clock signal 1406. The controller 1062 receives the clock signal 1406 and uses it to clock the master state control logic of the controller 1062. For example, the master state control logic may control the general operation of an FPGA controller 1062. - The
clock signal 1406 can also be input into a clock divider 1410 which produces a frequency-divided version of the clock signal 1406. In an example embodiment, the clock divider 1410 is a divide-by-two clock divider and produces a 62.5 MHz clock signal in response to the 125 MHz clock signal 1406. A non-volatile memory phase-locked loop (PLL) block 1412 can be included (e.g., in the controller 1062) which distributes a series of clock signals to the non-volatile memory subsystem 1040 and to associated control logic. For example, a series of clock signals 1414 can be sent from the controller 1062 to the non-volatile memory subsystem 1040. Another clock signal 1416 can be used by the controller logic which is dedicated to controlling the non-volatile memory subsystem 1040. For example, the clock signal 1416 may clock the portion of the controller 1062 which is dedicated to generating address and/or control lines for the non-volatile memory subsystem 1040. A feedback clock signal 1418 is fed back into the non-volatile memory PLL block 1412. In one embodiment, the PLL block 1412 compares the feedback clock 1418 to the reference clock 1411 and varies the phase and frequency of its output until the reference 1411 and feedback 1418 clocks are phase- and frequency-matched. - A version of the
clock signal 1406, such as the backup clock signal 1408, may be sent from the controller to the volatile memory subsystem 1030. The clock signal 1408 may be, for example, a differential version of the clock signal 1406. As described herein, the backup clock signal 1408 may be used to clock the volatile memory subsystem 1030 when the memory system 1010 is backing up the data from the volatile memory subsystem 1030 into the non-volatile memory subsystem 1040. In certain embodiments, the backup clock signal 1408 may also be used to clock the volatile memory subsystem 1030 when the memory system 1010 is copying the backed-up data back into the volatile memory subsystem 1030 from the non-volatile memory subsystem 1040 (also referred to as restoring the volatile memory subsystem 1030). The volatile memory subsystem 1030 may normally be run at a higher frequency (e.g., DRAM running at 400 MHz) than the non-volatile memory subsystem 1040 (e.g., flash memory running at 62.5 MHz) when communicating with the host system (e.g., when no trigger condition is present). However, in certain embodiments the volatile memory subsystem 1030 may be operated at a reduced frequency (e.g., at twice the frequency of the non-volatile memory subsystem 1040) without introducing significant delay into the system during backup and/or restore operations. Running the volatile memory subsystem 1030 at the reduced frequency during a backup and/or restore operation may advantageously reduce the overall power consumption of the memory system 1010. - In one embodiment, the
backup clock 1408 and the volatile memory system clock signal 1420 are received by a multiplexer 1422, as schematically illustrated by FIG. 19. The multiplexer 1422 can output either the volatile memory system clock signal 1420 or the backup clock signal 1408, depending on the backup state of the memory system 1010. For example, when the memory system 1010 is not performing a backup or restore operation and is communicating with the host system (e.g., normal operation), the volatile memory system clock signal 1420 may be provided by the multiplexer 1422 to the volatile memory PLL block 1424. When the memory system 1010 is performing a backup (or restore) operation, the backup clock signal 1408 may be provided instead.
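A sketch of that selection logic follows; the 400 MHz normal rate and the 125 MHz backup clock are taken from the examples above, while the mode flag and the function itself are assumptions for illustration.

```c
enum mem_mode { MODE_NORMAL, MODE_BACKUP_OR_RESTORE };

/* Model of the multiplexer 1422: the volatile-memory PLL reference follows
 * the system clock in normal operation and the backup clock 1408 otherwise. */
static inline unsigned ref_clock_hz(enum mem_mode m)
{
    const unsigned sys_clk_hz    = 400000000u; /* illustrative: DRAM at 400 MHz */
    const unsigned backup_clk_hz = 125000000u; /* e.g., 2x the 62.5 MHz flash clock */
    return (m == MODE_NORMAL) ? sys_clk_hz : backup_clk_hz;
}
```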
- The volatile memory PLL block 1424 receives the volatile memory reference clock signal 1423 from the multiplexer 1422 and can generate a series of clock signals which are distributed to the volatile memory subsystem 1030 and associated control logic. For example, in one embodiment, the PLL block 1424 generates a series of clock signals 1426 which clock the volatile memory elements 1032. A clock signal 1428 may be used to clock control logic associated with the volatile memory elements, such as one or more registers (e.g., the one or more registers of a registered DIMM). Another clock signal 1430 may be sent to the controller 1062. A feedback clock signal 1432 is fed back into the volatile memory PLL block 1424. In one embodiment, the PLL block 1424 compares the feedback clock signal 1432 to the reference clock signal 1423 and varies the phase and frequency of its output until the reference clock signal 1423 and the feedback clock signal 1432 are phase- and frequency-matched. - The
clock signal 1430 may be used by the controller 1062 to generate and distribute clock signals which will be used by controller logic which is configured to control the volatile memory subsystem 1030. For example, control logic in the controller 1062 may be used to control the volatile memory subsystem 1030 during a backup or restore operation. The clock signal 1430 may be used as a reference clock signal for the PLL block 1434, which can generate one or more clocks 1438 used by logic in the controller 1062. For example, the PLL block 1434 may generate one or more clock signals 1438 used to drive logic circuitry associated with controlling the volatile memory subsystem 1030. In certain embodiments, the PLL block 1434 includes a feedback clock signal 1436 and operates in a similar manner to other PLL blocks described herein. - The
clock signal 1430 may be used as a reference clock signal for the PLL block 1440, which may generate one or more clock signals used by a sub-block 1442 to generate one or more other clock signals 1444. In one embodiment, for example, the volatile memory subsystem 1030 comprises DDR2 SDRAM elements and the sub-block 1442 generates one or more DDR2-compatible clock signals 1444. A feedback clock signal 1446 is fed back into the PLL block 1440. In certain embodiments, the PLL block 1440 operates in a similar manner to other PLL blocks described herein. - While described with respect to the example embodiment of
FIG. 19, various alternative clock distribution topologies are possible. For example, one or more of the clock signals have a different frequency in various other embodiments. In some embodiments, one or more of the clocks shown as differential signals are single-ended signals. In one embodiment, the volatile memory subsystem 1030 operates on the volatile memory clock signal 1420 and there is no backup clock signal 1408. In some embodiments, the volatile memory subsystem 1030 is operated at a reduced frequency during a backup operation and not during a restore operation. In other embodiments, the volatile memory subsystem 1030 is operated at a reduced frequency during a restore operation and not during a backup operation. -
FIG. 20 is a flowchart of an example method 1500 of controlling a memory system 1010 operatively coupled to a host system. Although described with respect to the memory system 1010 described herein, the method 1500 is compatible with other memory systems. The memory system 1010 may include a clock distribution topology 1400 similar to the one described above with respect to FIG. 19 or another clock distribution topology. The memory system 1010 can include a volatile memory subsystem 1030 and a non-volatile memory subsystem 1040. - In an
operational block 1510, the method 1500 comprises operating the volatile memory subsystem 1030 at a first frequency when the memory system 1010 is in a first mode of operation in which data is communicated between the volatile memory subsystem 1030 and the host system. In an operational block 1520, the method 1500 comprises operating the non-volatile memory subsystem 1040 at a second frequency when the memory system 1010 is in a second mode of operation in which data is communicated between the volatile memory subsystem 1030 and the non-volatile memory subsystem 1040. The method 1500 further comprises operating the volatile memory subsystem 1030 at a third frequency in an operational block 1530 when the memory system 1010 is in the second mode of operation. In certain embodiments, the memory system 1010 is not powered by a battery when it is in the second mode of operation. The memory system 1010 may switch from the first mode of operation to the second mode of operation in response to a trigger condition. The trigger condition may be any trigger condition described herein, such as, for example, a power failure condition. In certain embodiments, the second mode of operation includes both backup and restore operations as described herein. In other embodiments, the second mode of operation includes backup operations but not restore operations. In yet other embodiments, the second mode of operation includes restore operations but not backup operations. - The third frequency can be less than the first frequency. For example, the third frequency can be approximately equal to the second frequency. In certain embodiments, the reduced-frequency operation is an optional mode. In yet other embodiments, the first, second, and/or third frequencies are configurable by a user or by the memory system 1010. -
FIG. 21 schematically illustrates an example topology of a connection to transfer data slices from two DRAM segments 1630, 1640 of a volatile memory subsystem 1030 of a memory system 1010 to a controller 1062 of the memory system 1010. While the example of FIG. 21 shows a topology including two DRAM segments, the volatile memory subsystem 1030 comprises more than two segments in certain embodiments. The data lines 1632, 1642 from the first DRAM segment 1630 and the second DRAM segment 1640 of the volatile memory subsystem 1030 are coupled to switches of the memory system 1010. The chip select lines and the self-refresh lines 1636, 1646 (e.g., CKe signals) of the first and second DRAM segments 1630, 1640 are coupled to the controller 1062. In certain embodiments, the controller 1062 comprises a buffer (not shown) which is configured to store data from the volatile memory subsystem 1030. In certain embodiments, the buffer is a first-in, first-out buffer (FIFO). In certain embodiments, data slices from each DRAM segment 1630, 1640 are transferred to the controller 1062 separately. For example, where the volatile memory subsystem 1030 comprises a 72-bit data bus (e.g., each data word at each addressable location is 72 bits wide and includes, for example, 64 bits of accessible SDRAM and 8 bits of ECC), the first data slice from the first DRAM segment 1630 may comprise 40 bits of the data word, and the second data slice from the second DRAM segment 1640 may comprise the remaining 32 bits of the data word. Certain other embodiments comprise data buses and/or data slices of different sizes. - In certain embodiments, the
switches selectively couple the data lines 1632, 1642 of the first and second DRAM segments 1630, 1640 to the controller 1062. The chip select lines allow the controller 1062 to select the first and second DRAM segments 1630, 1640 of the volatile memory subsystem 1030, and the self-refresh lines 1636, 1646 allow the controller 1062 to independently place the first and second DRAM segments 1630, 1640 into and out of self-refresh mode. - In certain embodiments, when the
memory system 1010 is backing up the volatile memory subsystem 1030, data slices from only one of the two DRAM segments 1630, 1640 are written to the controller 1062 at a time. For example, when the first slice is being written to the controller 1062 during a back-up, the controller 1062 sends a signal via the CKe line 1636 to the first DRAM segment 1630 to put the first DRAM segment 1630 in active mode. In certain embodiments, the data slice from the first DRAM segment 1630 for multiple words (e.g., a block of words) is written to the controller 1062 before writing the second data slice from the second DRAM segment 1640 to the controller 1062. While the first data slice is being written to the controller 1062, the controller 1062 also sends a signal via the CKe line 1646 to put the second DRAM segment 1640 in self-refresh mode. Once the first data slice for one word or for a block of words is written to the controller 1062, the controller 1062 puts the first DRAM segment 1630 into self-refresh mode by sending a signal via the CKe line 1636 to the first DRAM segment 1630. The controller 1062 also puts the second DRAM segment 1640 into active mode by sending a signal via the CKe line 1646 to the DRAM segment 1640. The second slice for a word or for a block of words is then written to the controller 1062. In certain embodiments, when the first and second data slices are written to the buffer in the controller 1062, the controller 1062 combines the first and second data slices into complete words or blocks of words and then writes each complete word or block of words to the non-volatile memory subsystem 1040. In certain embodiments, this process is called "slicing" the volatile memory subsystem 1030.
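A condensed C sketch of this slicing sequence follows. It is an interpretation, not the controller's actual logic: the CKe helpers, slice readers, and block size are assumptions, and segment indexes 0 and 1 stand in for the DRAM segments 1630 and 1640.

```c
#include <stddef.h>
#include <stdint.h>

#define BLOCK_WORDS 256u /* illustrative block size (words per slice transfer) */

extern void cke_set(int segment, int active);                     /* CKe lines 1636/1646 */
extern void read_slice40(uint64_t word, size_t n, uint64_t *out); /* 40-bit slices, segment 1630 */
extern void read_slice32(uint64_t word, size_t n, uint32_t *out); /* 32-bit slices, segment 1640 */
extern void flash_write72(uint64_t word, uint64_t hi40, uint32_t lo32);

/* Back up one block: only one DRAM segment is active at a time while the
 * other sits in self-refresh.  The controller then recombines 40 + 32 bits
 * into full 72-bit words before writing them to flash. */
void backup_block_sliced(uint64_t word)
{
    static uint64_t hi40[BLOCK_WORDS]; /* FIFO-buffered 40-bit slices */
    static uint32_t lo32[BLOCK_WORDS]; /* FIFO-buffered 32-bit slices */

    cke_set(1, 0);                     /* segment 1640 into self-refresh */
    cke_set(0, 1);                     /* segment 1630 active */
    read_slice40(word, BLOCK_WORDS, hi40);

    cke_set(0, 0);                     /* segment 1630 back into self-refresh */
    cke_set(1, 1);                     /* segment 1640 active */
    read_slice32(word, BLOCK_WORDS, lo32);
    cke_set(1, 0);                     /* both segments back in self-refresh */

    for (size_t i = 0; i < BLOCK_WORDS; i++)
        flash_write72(word + i, hi40[i], lo32[i]); /* 40 + 32 -> 72-bit word */
}
```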
- In certain embodiments, the data may be sliced in a restore operation as well as, or instead of, during a backup operation. For example, in one embodiment, the non-volatile memory elements 1042 write each backed-up data word to the controller 1062, which writes a first slice of the data word to the volatile memory subsystem 1030 and then a second slice of the data word to the volatile memory subsystem 1030. In certain embodiments, slicing the volatile memory subsystem 1030 during a restore operation may be performed in a manner generally inverse to slicing the volatile memory subsystem 1030 during a backup operation. -
FIG. 22 is a flowchart of an example method 1600 of controlling a memory system 1010 operatively coupled to a host system and which includes a volatile memory subsystem 1030 and a non-volatile memory subsystem 1040. Although described with respect to the memory system 1010 described herein with respect to FIGS. 12-14 and 21, the method 1600 is compatible with other memory systems. The method 1600 comprises communicating data words between the volatile memory subsystem 1030 and the host system when the memory system 1010 is in a first mode of operation in an operational block 1610. For example, the memory system 1010 may be in the first mode of operation when no trigger condition has occurred and the memory system is not performing a backup and/or restore operation or is not being powered by a secondary power supply. - In an
operational block 1620, the method further comprises transferring data words from the volatile memory subsystem 1030 to the non-volatile memory subsystem 1040 when the memory system 1010 is in a second mode of operation. In certain embodiments, each data word comprises the data stored in a particular address of the memory system 1010. The memory system 1010 may enter the second mode of operation, for example, when a trigger condition (e.g., a power failure) occurs. In certain embodiments, transferring each data word comprises storing a first portion (also referred to as a slice) of the data word in a buffer in an operational block 1622, storing a second portion of the data word in the buffer in an operational block 1624, and writing the entire data word from the buffer to the non-volatile memory subsystem 1040 in an operational block 1626. - In one example embodiment, the data word may be a 72-bit data word (e.g., 64 bits of accessible SDRAM and 8 bits of ECC), the first portion (or "slice") may comprise 40 bits of the data word, and the second portion (or "slice") may comprise the remaining 32 bits of the data word. In certain embodiments, the buffer is included in the
controller 1062. For example, in one embodiment, the buffer is a first-in, first-out buffer implemented in the controller 1062, which comprises an FPGA. The method 1600 may generally be referred to as "slicing" the volatile memory during a backup operation. In the example embodiment, the process of "slicing" the volatile memory during a backup includes bringing the 32-bit slice out of self-refresh, reading a 32-bit block from the slice into the buffer, and putting the 32-bit slice back into self-refresh. The 40-bit slice is then brought out of self-refresh and a 40-bit block from the slice is read into a buffer. Each block may comprise a portion of multiple words. For example, each 32-bit block may comprise 32-bit portions of multiple 72-bit words. In other embodiments, each block comprises a portion of a single word. The 40-bit slice is then put back into self-refresh in the example embodiment. The 32-bit and 40-bit slices are then combined into a 72-bit block by the controller 1062, and ECC detection/correction is performed on each 72-bit word as it is read from the buffer and written into the non-volatile memory subsystem (e.g., flash). - In some embodiments, the entire data word may comprise more than two portions. For example, the entire data word may comprise three portions instead of two, and transferring each data word further comprises storing a third portion of each data word in the buffer. In certain other embodiments, the data word may comprise more than three portions.
- In certain embodiments, the data may be sliced in a restore operation as well as, or instead of, during a backup operation. For example, in one embodiment, the
nonvolatile memory elements 1042 write each backed-up data word to the controller 1062, which writes a first portion of the data word to the volatile memory subsystem 1030 and then a second portion of the data word to the volatile memory subsystem 1030. In certain embodiments, slicing the volatile memory subsystem 1030 during a restore operation may be performed in a manner generally inverse to slicing the volatile memory subsystem 1030 during a backup operation. - The
method 1600 can advantageously provide significant power savings and can lead to other advantages. For example, in one embodiment where the volatile memory subsystem 1030 comprises DRAM elements, only the slice of the DRAM which is currently being accessed (e.g., written to the buffer) during a backup is configured in full-operational mode. The slice or slices that are not being accessed may be put in self-refresh mode. Because DRAM in self-refresh mode uses significantly less power than DRAM in full-operational mode, the method 1600 can allow significant power savings. In certain embodiments, each slice of the DRAM includes a separate self-refresh enable (e.g., CKe) signal which allows each slice to be accessed independently. - In addition, the connection between the DRAM elements and the
controller 1062 may be as large as the largest slice instead of as large as the data bus. In the example embodiment, the connection between the controller 1062 and the DRAM may be 40 bits instead of 72 bits. As a result, pins on the controller 1062 may be used for other purposes, or a smaller controller may be used due to the relatively low number of pin-outs used to connect to the volatile memory subsystem 1030. In certain other embodiments, the full width of the data bus is connected between the volatile memory subsystem 1030 and the controller 1062, but only a portion of it is used during slicing operations. For example, in some embodiments, memory slicing is an optional mode. - While embodiments and applications have been shown and described, it would be apparent to those skilled in the art having the benefit of this disclosure that many more modifications than mentioned above are possible without departing from the inventive concepts disclosed herein. The invention, therefore, is not to be restricted except in the spirit of the appended claims.
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/582,797 US20220222191A1 (en) | 2007-06-01 | 2022-01-24 | Flash-dram hybrid memory module |
Applications Claiming Priority (11)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US94158607P | 2007-06-01 | 2007-06-01 | |
US13187308A | 2008-06-02 | 2008-06-02 | |
US12/240,916 US8301833B1 (en) | 2007-06-01 | 2008-09-29 | Non-volatile memory module |
US201161512871P | 2011-07-28 | 2011-07-28 | |
US13/559,476 US8874831B2 (en) | 2007-06-01 | 2012-07-26 | Flash-DRAM hybrid memory module |
US14/489,269 US9158684B2 (en) | 2007-06-01 | 2014-09-17 | Flash-DRAM hybrid memory module |
US14/840,865 US9928186B2 (en) | 2007-06-01 | 2015-08-31 | Flash-DRAM hybrid memory module |
US15/934,416 US20190004985A1 (en) | 2007-06-01 | 2018-03-23 | Flash-dram hybrid memory module |
US17/138,766 US11016918B2 (en) | 2007-06-01 | 2020-12-30 | Flash-DRAM hybrid memory module |
US17/328,019 US11232054B2 (en) | 2007-06-01 | 2021-05-24 | Flash-dram hybrid memory module |
US17/582,797 US20220222191A1 (en) | 2007-06-01 | 2022-01-24 | Flash-dram hybrid memory module |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/328,019 Continuation US11232054B2 (en) | 2007-06-01 | 2021-05-24 | Flash-dram hybrid memory module |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220222191A1 (en) | 2022-07-14
Family
ID=47601785
Family Applications (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/559,476 Expired - Fee Related US8874831B2 (en) | 2007-06-01 | 2012-07-26 | Flash-DRAM hybrid memory module |
US14/489,269 Active US9158684B2 (en) | 2007-06-01 | 2014-09-17 | Flash-DRAM hybrid memory module |
US14/840,865 Active US9928186B2 (en) | 2007-06-01 | 2015-08-31 | Flash-DRAM hybrid memory module |
US15/934,416 Abandoned US20190004985A1 (en) | 2007-06-01 | 2018-03-23 | Flash-dram hybrid memory module |
US17/138,766 Active US11016918B2 (en) | 2007-06-01 | 2020-12-30 | Flash-DRAM hybrid memory module |
US17/328,019 Active US11232054B2 (en) | 2007-06-01 | 2021-05-24 | Flash-dram hybrid memory module |
US17/582,797 Pending US20220222191A1 (en) | 2007-06-01 | 2022-01-24 | Flash-dram hybrid memory module |
Family Applications Before (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/559,476 Expired - Fee Related US8874831B2 (en) | 2007-06-01 | 2012-07-26 | Flash-DRAM hybrid memory module |
US14/489,269 Active US9158684B2 (en) | 2007-06-01 | 2014-09-17 | Flash-DRAM hybrid memory module |
US14/840,865 Active US9928186B2 (en) | 2007-06-01 | 2015-08-31 | Flash-DRAM hybrid memory module |
US15/934,416 Abandoned US20190004985A1 (en) | 2007-06-01 | 2018-03-23 | Flash-dram hybrid memory module |
US17/138,766 Active US11016918B2 (en) | 2007-06-01 | 2020-12-30 | Flash-DRAM hybrid memory module |
US17/328,019 Active US11232054B2 (en) | 2007-06-01 | 2021-05-24 | Flash-dram hybrid memory module |
Country Status (6)
Country | Link |
---|---|
US (7) | US8874831B2 (en) |
EP (3) | EP2737383B1 (en) |
KR (1) | KR20140063660A (en) |
CN (2) | CN107656700B (en) |
PL (1) | PL3293638T3 (en) |
WO (1) | WO2013016723A2 (en) |
Families Citing this family (174)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8904098B2 (en) | 2007-06-01 | 2014-12-02 | Netlist, Inc. | Redundant backup using non-volatile memory |
US8874831B2 (en) | 2007-06-01 | 2014-10-28 | Netlist, Inc. | Flash-DRAM hybrid memory module |
US8301833B1 (en) | 2007-06-01 | 2012-10-30 | Netlist, Inc. | Non-volatile memory module |
US9720616B2 (en) * | 2008-06-18 | 2017-08-01 | Super Talent Technology, Corp. | Data-retention controller/driver for stand-alone or hosted card reader, solid-state-drive (SSD), or super-enhanced-endurance SSD (SEED) |
US9176671B1 (en) | 2011-04-06 | 2015-11-03 | P4tents1, LLC | Fetching data between thread execution in a flash/DRAM/embedded DRAM-equipped system |
US9170744B1 (en) | 2011-04-06 | 2015-10-27 | P4tents1, LLC | Computer program product for controlling a flash/DRAM/embedded DRAM-equipped system |
US8930647B1 (en) | 2011-04-06 | 2015-01-06 | P4tents1, LLC | Multiple class memory systems |
US9164679B2 (en) | 2011-04-06 | 2015-10-20 | Patents1, Llc | System, method and computer program product for multi-thread operation involving first memory of a first memory class and second memory of a second memory class |
US9158546B1 (en) | 2011-04-06 | 2015-10-13 | P4tents1, LLC | Computer program product for fetching from a first physical memory between an execution of a plurality of threads associated with a second physical memory |
US10380022B2 (en) | 2011-07-28 | 2019-08-13 | Netlist, Inc. | Hybrid memory module and system and method of operating the same |
US10198350B2 (en) | 2011-07-28 | 2019-02-05 | Netlist, Inc. | Memory module having volatile and non-volatile memory subsystems and method of operation |
US10838646B2 (en) | 2011-07-28 | 2020-11-17 | Netlist, Inc. | Method and apparatus for presearching stored data |
US9417754B2 (en) | 2011-08-05 | 2016-08-16 | P4tents1, LLC | User interface system, method, and computer program product |
US11048410B2 (en) | 2011-08-24 | 2021-06-29 | Rambus Inc. | Distributed procedure execution and file systems on a memory interface |
WO2013028859A1 (en) * | 2011-08-24 | 2013-02-28 | Rambus Inc. | Methods and systems for mapping a peripheral function onto a legacy memory interface |
US9098209B2 (en) | 2011-08-24 | 2015-08-04 | Rambus Inc. | Communication via a memory interface |
WO2013101201A1 (en) * | 2011-12-30 | 2013-07-04 | Intel Corporation | Home agent multi-level nvm memory architecture |
US20130318269A1 (en) | 2012-05-22 | 2013-11-28 | Xockets IP, LLC | Processing structured and unstructured data using offload processors |
US9619406B2 (en) | 2012-05-22 | 2017-04-11 | Xockets, Inc. | Offloading of computation for rack level servers and corresponding methods and systems |
US20140089573A1 (en) * | 2012-09-24 | 2014-03-27 | Palsamy Sakthikumar | Method for accessing memory devices prior to bus training |
KR20140064546A (en) * | 2012-11-20 | 2014-05-28 | 삼성전자주식회사 | Semiconductor memory device and computer system including the same |
US10910025B2 (en) * | 2012-12-20 | 2021-02-02 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Flexible utilization of block storage in a computing system |
US9280497B2 (en) * | 2012-12-21 | 2016-03-08 | Dell Products Lp | Systems and methods for support of non-volatile memory on a DDR memory channel |
WO2014113055A1 (en) | 2013-01-17 | 2014-07-24 | Xockets IP, LLC | Offload processor modules for connection to system memory |
US9378161B1 (en) | 2013-01-17 | 2016-06-28 | Xockets, Inc. | Full bandwidth packet handling with server systems including offload processors |
WO2014120140A1 (en) * | 2013-01-30 | 2014-08-07 | Hewlett-Packard Development Company, L.P. | Runtime backup of data in a memory module |
CN103970219B (en) * | 2013-01-30 | 2018-03-20 | 鸿富锦精密电子(天津)有限公司 | Storage device and the mainboard for supporting the storage device |
WO2014139047A1 (en) | 2013-03-14 | 2014-09-18 | Micron Technology, Inc. | Memory systems and methods including training,data organizing,and/or shadowing |
US10372551B2 (en) | 2013-03-15 | 2019-08-06 | Netlist, Inc. | Hybrid memory system with configurable error thresholds and failure analysis capability |
BR112015019459B1 (en) | 2013-03-15 | 2021-10-19 | Intel Corporation | DEVICE FOR USE IN A MEMORY MODULE AND METHOD PERFORMED IN A MEMORY MODULE |
US9436600B2 (en) | 2013-06-11 | 2016-09-06 | Svic No. 28 New Technology Business Investment L.L.P. | Non-volatile memory storage for multi-channel memory system |
WO2014203383A1 (en) * | 2013-06-20 | 2014-12-24 | 株式会社日立製作所 | Memory module having different types of memory mounted together thereon, and information processing device having memory module mounted therein |
US9921980B2 (en) | 2013-08-12 | 2018-03-20 | Micron Technology, Inc. | Apparatuses and methods for configuring I/Os of memory for hybrid memory modules |
US9436563B2 (en) | 2013-10-01 | 2016-09-06 | Globalfoundries Inc. | Memory system for mirroring data |
US20150106547A1 (en) * | 2013-10-14 | 2015-04-16 | Micron Technology, Inc. | Distributed memory systems and methods |
US9152584B2 (en) * | 2013-10-29 | 2015-10-06 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Providing bus resiliency in a hybrid memory system |
TWI527058B (en) | 2013-11-01 | 2016-03-21 | 群聯電子股份有限公司 | Memory controlling method, memory storage device and memory controlling circuit unit |
US10248328B2 (en) | 2013-11-07 | 2019-04-02 | Netlist, Inc. | Direct data move between DRAM and storage on a memory module |
WO2015070110A2 (en) * | 2013-11-07 | 2015-05-14 | Netlist, Inc. | Hybrid memory module and system and method of operating the same |
US11182284B2 (en) | 2013-11-07 | 2021-11-23 | Netlist, Inc. | Memory module having volatile and non-volatile memory subsystems and method of operation |
CN104636267B (en) * | 2013-11-11 | 2018-01-12 | 群联电子股份有限公司 | Memory control methods, memory storage apparatus and memorizer control circuit unit |
KR102156284B1 (en) | 2013-11-27 | 2020-09-15 | 에스케이하이닉스 주식회사 | Memory and memory module including the same |
US9547447B2 (en) * | 2014-01-03 | 2017-01-17 | Advanced Micro Devices, Inc. | Dedicated interface for coupling flash memory and dynamic random access memory |
WO2015127327A1 (en) * | 2014-02-23 | 2015-08-27 | Rambus Inc. | Distributed procedure execution and file systems on a memory interface |
WO2015155103A1 (en) * | 2014-04-08 | 2015-10-15 | Fujitsu Technology Solutions Intellectual Property Gmbh | Method for improved access to a main memory of a computer system, corresponding computer system and computer program product |
US20150347151A1 (en) * | 2014-05-28 | 2015-12-03 | Diablo Technologies Inc. | System and method for booting from a non-volatile memory |
US9753793B2 (en) * | 2014-06-30 | 2017-09-05 | Intel Corporation | Techniques for handling errors in persistent memory |
US9645829B2 (en) * | 2014-06-30 | 2017-05-09 | Intel Corporation | Techniques to communicate with a controller for a non-volatile dual in-line memory module |
US9747200B1 (en) * | 2014-07-02 | 2017-08-29 | Microsemi Solutions (U.S.), Inc. | Memory system with high speed non-volatile memory backup using pre-aged flash memory devices |
KR20160046391A (en) * | 2014-10-20 | 2016-04-29 | 삼성전자주식회사 | Hybrid DIMM structure and Driving Method thereof |
US11775443B2 (en) * | 2014-10-23 | 2023-10-03 | Hewlett Packard Enterprise Development Lp | Supervisory memory management unit |
US9721660B2 (en) | 2014-10-24 | 2017-08-01 | Microsoft Technology Licensing, Llc | Configurable volatile memory without a dedicated power source for detecting a data save trigger condition |
JP2016091523A (en) * | 2014-11-11 | 2016-05-23 | レノボ・シンガポール・プライベート・リミテッド | Method of increasing capacity of backup module, nvdimm system, and information processing apparatus |
US9715453B2 (en) * | 2014-12-11 | 2017-07-25 | Intel Corporation | Computing method and apparatus with persistent memory |
US20160098203A1 (en) * | 2014-12-18 | 2016-04-07 | Mediatek Inc. | Heterogeneous Swap Space With Dynamic Thresholds |
US10126950B2 (en) * | 2014-12-22 | 2018-11-13 | Intel Corporation | Allocating and configuring persistent memory |
US10949286B2 (en) | 2015-01-12 | 2021-03-16 | Hewlett Packard Enterprise Development Lp | Handling memory errors in memory modules that include volatile and non-volatile components |
US20160232112A1 (en) * | 2015-02-06 | 2016-08-11 | Futurewei Technologies, Inc. | Unified Memory Bus and Method to Operate the Unified Memory Bus |
US20160246715A1 (en) * | 2015-02-23 | 2016-08-25 | Advanced Micro Devices, Inc. | Memory module with volatile and non-volatile storage arrays |
CN106155926B (en) * | 2015-04-09 | 2019-11-26 | 澜起科技股份有限公司 | The data interactive method of memory and memory |
WO2016171934A1 (en) * | 2015-04-20 | 2016-10-27 | Netlist, Inc. | Memory module and system and method of operation |
US10649680B2 (en) | 2015-04-30 | 2020-05-12 | Hewlett Packard Enterprise Development Lp | Dual-port non-volatile dual in-line memory modules |
WO2016175855A1 (en) | 2015-04-30 | 2016-11-03 | Hewlett Packard Enterprise Development Lp | Replicating data using dual-port non-volatile dual in-line memory modules |
WO2016175856A1 (en) * | 2015-04-30 | 2016-11-03 | Hewlett Packard Enterprise Development Lp | Migrating data using dual-port non-volatile dual in-line memory modules |
US11257527B2 (en) | 2015-05-06 | 2022-02-22 | SK Hynix Inc. | Memory module with battery and electronic system having the memory module |
KR20160131171A (en) | 2015-05-06 | 2016-11-16 | 에스케이하이닉스 주식회사 | Memory module including battery |
US10025747B2 (en) * | 2015-05-07 | 2018-07-17 | Samsung Electronics Co., Ltd. | I/O channel scrambling/ECC disassociated communication protocol |
US10152413B2 (en) | 2015-06-08 | 2018-12-11 | Samsung Electronics Co. Ltd. | Nonvolatile memory module and operation method thereof |
US9799402B2 (en) | 2015-06-08 | 2017-10-24 | Samsung Electronics Co., Ltd. | Nonvolatile memory device and program method thereof |
US10261697B2 (en) | 2015-06-08 | 2019-04-16 | Samsung Electronics Co., Ltd. | Storage device and operating method of storage device |
KR102290988B1 (en) * | 2015-06-08 | 2021-08-19 | 삼성전자주식회사 | Nonvolatile memory module and operating method thereof |
US9619329B2 (en) | 2015-06-22 | 2017-04-11 | International Business Machines Corporation | Converting volatile memory module devices to flashless non-volatile memory module devices |
US9645939B2 (en) * | 2015-06-26 | 2017-05-09 | Intel Corporation | Hardware apparatuses and methods for distributed durable and atomic transactions in non-volatile memory |
US9904490B2 (en) | 2015-06-26 | 2018-02-27 | Toshiba Memory Corporation | Solid-state mass storage device and method for persisting volatile data to non-volatile media |
KR102274038B1 (en) | 2015-08-03 | 2021-07-09 | 삼성전자주식회사 | Nonvolatile memory module having back-up function |
US9720604B2 (en) | 2015-08-06 | 2017-08-01 | Sandisk Technologies Llc | Block storage protocol to RAM bypass |
KR102430561B1 (en) | 2015-09-11 | 2022-08-09 | 삼성전자주식회사 | Nonvolatile memory module having dual port dram |
KR102427262B1 (en) | 2015-09-11 | 2022-08-01 | 삼성전자주식회사 | Storage device including random access memory devices and nonvolatile memory devices |
EP3356943B1 (en) | 2015-10-01 | 2021-11-03 | Rambus Inc. | Memory system with cached memory module operations |
US10503657B2 (en) | 2015-10-07 | 2019-12-10 | Samsung Electronics Co., Ltd. | DIMM SSD Addressing performance techniques |
US10031674B2 (en) | 2015-10-07 | 2018-07-24 | Samsung Electronics Co., Ltd. | DIMM SSD addressing performance techniques |
US20170109101A1 (en) * | 2015-10-16 | 2017-04-20 | Samsung Electronics Co., Ltd. | System and method for initiating storage device tasks based upon information from the memory channel interconnect |
US9880778B2 (en) * | 2015-11-09 | 2018-01-30 | Google Inc. | Memory devices and methods |
KR102420152B1 (en) | 2015-11-18 | 2022-07-13 | 삼성전자주식회사 | Multi-communication Device in Memory System |
US10719236B2 (en) * | 2015-11-20 | 2020-07-21 | Arm Ltd. | Memory controller with non-volatile buffer for persistent memory operations |
KR102513913B1 (en) * | 2015-12-03 | 2023-03-28 | 삼성전자주식회사 | Nonvolatile memory module and memory system |
KR102513903B1 (en) * | 2015-12-03 | 2023-03-28 | 삼성전자주식회사 | Nonvolatile memory module and memory system |
US10303372B2 (en) | 2015-12-01 | 2019-05-28 | Samsung Electronics Co., Ltd. | Nonvolatile memory device and operation method thereof |
US10025508B2 (en) | 2015-12-02 | 2018-07-17 | International Business Machines Corporation | Concurrent upgrade and backup of non-volatile memory |
CN105354156A (en) * | 2015-12-10 | 2016-02-24 | 浪潮电子信息产业股份有限公司 | Mainboard design method supporting NVDIMM (non-volatile memory Module) |
CN105575433B (en) * | 2015-12-10 | 2019-11-22 | 北京兆易创新科技股份有限公司 | Nand memory and its device for balancing the WL Voltage Establishment time |
US10019367B2 (en) | 2015-12-14 | 2018-07-10 | Samsung Electronics Co., Ltd. | Memory module, computing system having the same, and method for testing tag error thereof |
KR102491651B1 (en) | 2015-12-14 | 2023-01-26 | 삼성전자주식회사 | Nonvolatile memory module, computing system having the same, and operating method thereof |
CN106886495B (en) * | 2015-12-15 | 2019-10-18 | 北京兆易创新科技股份有限公司 | A kind of embedded system and its control method |
US10437483B2 (en) | 2015-12-17 | 2019-10-08 | Samsung Electronics Co., Ltd. | Computing system with communication mechanism and method of operation thereof |
US9971511B2 (en) | 2016-01-06 | 2018-05-15 | Samsung Electronics Co., Ltd. | Hybrid memory module and transaction-based memory interface |
US20170206165A1 (en) * | 2016-01-14 | 2017-07-20 | Samsung Electronics Co., Ltd. | Method for accessing heterogeneous memories and memory module including heterogeneous memories |
US9891864B2 (en) * | 2016-01-19 | 2018-02-13 | Micron Technology, Inc. | Non-volatile memory module architecture to support memory error correction |
US20170212835A1 (en) * | 2016-01-22 | 2017-07-27 | Samsung Electronics Co., Ltd. | Computing system with memory management mechanism and method of operation thereof |
KR102523141B1 (en) | 2016-02-15 | 2023-04-20 | 삼성전자주식회사 | Nonvolatile memory module comprising volatile memory device and nonvolatile memory device |
US10409719B2 (en) | 2016-03-17 | 2019-09-10 | Samsung Electronics Co., Ltd. | User configurable passive background operation |
KR102535738B1 (en) | 2016-03-28 | 2023-05-25 | 에스케이하이닉스 주식회사 | Non-volatile dual in line memory system, memory module and operation method of the same |
KR102547056B1 (en) | 2016-03-28 | 2023-06-22 | 에스케이하이닉스 주식회사 | Command-address snooping for non-volatile memory module |
KR102567279B1 (en) * | 2016-03-28 | 2023-08-17 | 에스케이하이닉스 주식회사 | Power down interrupt of non-volatile dual in line memory system |
CN105938458B (en) * | 2016-04-13 | 2019-02-22 | 上海交通大学 | The isomery mixing EMS memory management process of software definition |
US10152237B2 (en) | 2016-05-05 | 2018-12-11 | Micron Technology, Inc. | Non-deterministic memory protocol |
US10089228B2 (en) * | 2016-05-09 | 2018-10-02 | Dell Products L.P. | I/O blender countermeasures |
KR20170132483A (en) | 2016-05-24 | 2017-12-04 | 삼성전자주식회사 | Method of operating memory device |
US10534540B2 (en) | 2016-06-06 | 2020-01-14 | Micron Technology, Inc. | Memory protocol |
US10747694B2 (en) | 2016-06-07 | 2020-08-18 | Ncorium | Multi-level data cache and storage on a memory bus |
US10540098B2 (en) | 2016-07-19 | 2020-01-21 | Sap Se | Workload-aware page management for in-memory databases in hybrid main memory systems |
US11977484B2 (en) * | 2016-07-19 | 2024-05-07 | Sap Se | Adapting in-memory database in hybrid memory systems and operating system interface |
US10452539B2 (en) * | 2016-07-19 | 2019-10-22 | Sap Se | Simulator for enterprise-scale simulations on hybrid main memory systems |
US10474557B2 (en) | 2016-07-19 | 2019-11-12 | Sap Se | Source code profiling for line-level latency and energy consumption estimation |
US10387127B2 (en) | 2016-07-19 | 2019-08-20 | Sap Se | Detecting sequential access data and random access data for placement on hybrid main memory for in-memory databases |
US10437798B2 (en) | 2016-07-19 | 2019-10-08 | Sap Se | Full system simulator and memory-aware splay tree for in-memory databases in hybrid memory systems |
US10783146B2 (en) | 2016-07-19 | 2020-09-22 | Sap Se | Join operations in hybrid main memory systems |
US10698732B2 (en) | 2016-07-19 | 2020-06-30 | Sap Se | Page ranking in operating system virtual pages in hybrid memory systems |
US10339050B2 (en) * | 2016-09-23 | 2019-07-02 | Arm Limited | Apparatus including a memory controller for controlling direct data transfer between first and second memory modules using direct transfer commands |
US10847196B2 (en) | 2016-10-31 | 2020-11-24 | Rambus Inc. | Hybrid memory module |
KR102649048B1 (en) | 2016-11-02 | 2024-03-18 | Samsung Electronics Co., Ltd. | Memory device and memory system including the same
US10585624B2 (en) | 2016-12-01 | 2020-03-10 | Micron Technology, Inc. | Memory protocol |
US10002086B1 (en) | 2016-12-20 | 2018-06-19 | Sandisk Technologies Llc | Multi-channel memory operations based on bit error rates |
KR20180078512A (en) * | 2016-12-30 | 2018-07-10 | Samsung Electronics Co., Ltd. | Semiconductor device
US11003602B2 (en) | 2017-01-24 | 2021-05-11 | Micron Technology, Inc. | Memory protocol with command priority |
US11397687B2 (en) | 2017-01-25 | 2022-07-26 | Samsung Electronics Co., Ltd. | Flash-integrated high bandwidth memory appliance |
US10635613B2 (en) | 2017-04-11 | 2020-04-28 | Micron Technology, Inc. | Transaction identification |
CN108877856B (en) * | 2017-05-10 | 2021-02-19 | Silicon Motion, Inc. | Storage device, recording method and preloading method
KR102400102B1 (en) * | 2017-05-11 | 2022-05-23 | Samsung Electronics Co., Ltd. | Memory system for supporting internal DQ termination of data buffer
US10496584B2 (en) * | 2017-05-11 | 2019-12-03 | Samsung Electronics Co., Ltd. | Memory system for supporting internal DQ termination of data buffer |
US10585754B2 (en) | 2017-08-15 | 2020-03-10 | International Business Machines Corporation | Memory security protocol |
US11010379B2 (en) | 2017-08-15 | 2021-05-18 | Sap Se | Increasing performance of in-memory databases using re-ordered query execution plans |
KR102353859B1 (en) * | 2017-11-01 | 2022-01-19 | Samsung Electronics Co., Ltd. | Computing device and non-volatile dual in-line memory module
US10606513B2 (en) | 2017-12-06 | 2020-03-31 | Western Digital Technologies, Inc. | Volatility management for non-volatile memory device |
US10431305B2 (en) * | 2017-12-14 | 2019-10-01 | Advanced Micro Devices, Inc. | High-performance on-module caching architectures for non-volatile dual in-line memory module (NVDIMM) |
EP3759582B1 (en) * | 2018-03-01 | 2024-05-01 | Micron Technology, Inc. | Performing operation on data blocks concurrently and based on performance rate of another operation on data blocks |
US11579770B2 (en) * | 2018-03-15 | 2023-02-14 | Western Digital Technologies, Inc. | Volatility management for memory device |
CN108874684B (en) * | 2018-05-31 | 2021-05-28 | Beijing Lingxin Xunfei Technology Co., Ltd. | NVDIMM interface data read-write device with split cache
US11157319B2 (en) | 2018-06-06 | 2021-10-26 | Western Digital Technologies, Inc. | Processor with processor memory pairs for improved process switching and methods thereof |
US10636455B2 (en) | 2018-07-12 | 2020-04-28 | International Business Machines Corporation | Enhanced NVDIMM architecture |
US11169920B2 (en) * | 2018-09-17 | 2021-11-09 | Micron Technology, Inc. | Cache operations in a hybrid dual in-line memory module |
US11099779B2 (en) | 2018-09-24 | 2021-08-24 | Micron Technology, Inc. | Addressing in memory with a read identification (RID) number |
US10949117B2 (en) * | 2018-09-24 | 2021-03-16 | Micron Technology, Inc. | Direct data transfer in memory and between devices of a memory module |
US10732892B2 (en) | 2018-09-24 | 2020-08-04 | Micron Technology, Inc. | Data transfer in port switch memory |
US10901657B2 (en) | 2018-11-29 | 2021-01-26 | International Business Machines Corporation | Dynamic write credit buffer management of non-volatile dual inline memory module |
US11163475B2 (en) | 2019-06-04 | 2021-11-02 | International Business Machines Corporation | Block input/output (I/O) accesses in the presence of a storage class memory |
US11222671B2 (en) | 2019-06-20 | 2022-01-11 | Samsung Electronics Co., Ltd. | Memory device, method of operating the memory device, memory module, and method of operating the memory module |
EP3754512B1 (en) | 2019-06-20 | 2023-03-01 | Samsung Electronics Co., Ltd. | Memory device, method of operating the memory device, memory module, and method of operating the memory module |
US11526441B2 (en) | 2019-08-19 | 2022-12-13 | Truememory Technology, LLC | Hybrid memory systems with cache management |
US11055220B2 (en) | 2019-08-19 | 2021-07-06 | Truememory Technology, LLC | Hybrid memory systems with cache management
US11513725B2 (en) * | 2019-09-16 | 2022-11-29 | Netlist, Inc. | Hybrid memory module having a volatile memory subsystem and a module controller sourcing read strobes to accompany read data from the volatile memory subsystem |
US11137941B2 (en) * | 2019-12-30 | 2021-10-05 | Advanced Micro Devices, Inc. | Command replay for non-volatile dual inline memory modules |
US11531601B2 (en) | 2019-12-30 | 2022-12-20 | Advanced Micro Devices, Inc. | Error recovery for non-volatile memory modules |
US11379393B2 (en) * | 2020-02-28 | 2022-07-05 | Innogrit Technologies Co., Ltd. | Multi-frequency memory interface and methods for configuring the same
CN112000276B (en) * | 2020-06-19 | 2023-04-11 | Zhejiang Shaoxing Qingyi Information Technology Co., Ltd. | Memory module
US11355214B2 (en) * | 2020-08-10 | 2022-06-07 | Micron Technology, Inc. | Debugging memory devices |
KR20220091794A (en) | 2020-12-24 | 2022-07-01 | Samsung Electronics Co., Ltd. | Semiconductor device and electronic device including the same
FR3119483B1 (en) * | 2021-01-29 | 2023-12-29 | Commissariat Energie Atomique | Device comprising a non-volatile memory circuit |
US11687281B2 (en) * | 2021-03-31 | 2023-06-27 | Advanced Micro Devices, Inc. | DRAM command streak efficiency management |
CN117242522B (en) * | 2021-05-06 | 2024-09-20 | Advanced Micro Devices, Inc. | Hybrid library latch array
US11527270B2 (en) | 2021-05-06 | 2022-12-13 | Advanced Micro Devices, Inc. | Hybrid library latch array |
US11715514B2 (en) | 2021-05-06 | 2023-08-01 | Advanced Micro Devices, Inc. | Latch bit cells |
CN112947996B (en) | 2021-05-14 | 2021-08-27 | Nanjing SemiDrive Technology Co., Ltd. | Off-chip nonvolatile memory dynamic loading system and method based on virtual mapping
US12009025B2 (en) | 2021-06-25 | 2024-06-11 | Advanced Micro Devices, Inc. | Weak precharge before write dual-rail SRAM write optimization |
US11586266B1 (en) | 2021-07-28 | 2023-02-21 | International Business Machines Corporation | Persistent power enabled on-chip data processor |
US11494319B1 (en) * | 2021-08-17 | 2022-11-08 | Micron Technology, Inc. | Apparatuses, systems, and methods for input/output mappings |
CN113946290B (en) * | 2021-10-14 | 2023-06-06 | Xi'an UniIC Semiconductors Co., Ltd. | Memory device and memory system based on three-dimensional heterogeneous integration
CN114153402B (en) * | 2022-02-09 | 2022-05-03 | Alibaba Cloud Computing Ltd. | Memory and data reading and writing method thereof
FR3138709A1 (en) * | 2022-08-04 | 2024-02-09 | STMicroelectronics (Alps) SAS | FLASH memory device |
US20240111421A1 (en) * | 2022-09-30 | 2024-04-04 | Advanced Micro Devices, Inc. | Connection Modification based on Traffic Pattern |
CN116719486B (en) * | 2023-08-10 | 2023-11-17 | Hangzhou Zhilingtong Artificial Intelligence Co., Ltd. | Multi-mode storage device with built-in automatic data transfer function and control method
CN116955241B (en) * | 2023-09-21 | 2024-01-05 | Hangzhou Zhilingtong Artificial Intelligence Co., Ltd. | Memory chip compatible with multiple types of memory media
CN117033267B (en) * | 2023-10-07 | 2024-01-26 | Shenzhen Dapu Microelectronics Co., Ltd. | Hybrid memory master controller and hybrid memory
CN118051191A (en) * | 2024-04-16 | 2024-05-17 | University of Electronic Science and Technology of China | Nonvolatile memory circuit and device supporting parameterization and parallel access
Family Cites Families (157)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US2043099A (en) | 1933-10-26 | 1936-06-02 | Gen Electric | Electrical protective system |
US3562555A (en) | 1967-09-01 | 1971-02-09 | Rca Corp | Memory protecting circuit |
US3916390A (en) | 1974-12-31 | 1975-10-28 | Ibm | Dynamic memory with non-volatile back-up mode |
US4234920A (en) | 1978-11-24 | 1980-11-18 | Engineered Systems, Inc. | Power failure detection and restart system |
US4449205A (en) | 1982-02-19 | 1984-05-15 | International Business Machines Corp. | Dynamic RAM with non-volatile back-up storage and method of operation thereof |
US4420821A (en) | 1982-02-19 | 1983-12-13 | International Business Machines Corporation | Static RAM with non-volatile back-up storage and method of operation thereof |
US4607332A (en) | 1983-01-14 | 1986-08-19 | At&T Bell Laboratories | Dynamic alteration of firmware programs in Read-Only Memory based systems |
US4658204A (en) | 1986-02-07 | 1987-04-14 | Prime Computer, Inc. | Anticipatory power failure detection apparatus and method |
US4884242A (en) | 1988-05-26 | 1989-11-28 | Applied Automation, Inc. | Backup power system for dynamic memory |
US4882709A (en) | 1988-08-25 | 1989-11-21 | Integrated Device Technology, Inc. | Conditional write RAM |
US4965828A (en) | 1989-04-05 | 1990-10-23 | Quadri Corporation | Non-volatile semiconductor memory with SCRAM hold cycle prior to SCRAM-to-E2PROM backup transfer
GB2256735B (en) | 1991-06-12 | 1995-06-21 | Intel Corp | Non-volatile disk cache |
US6230233B1 (en) | 1991-09-13 | 2001-05-08 | Sandisk Corporation | Wear leveling techniques for flash EEPROM systems |
US5490155A (en) | 1992-10-02 | 1996-02-06 | Compaq Computer Corp. | Error correction system for n bits using error correcting code designed for fewer than n bits |
US5430742A (en) | 1992-10-14 | 1995-07-04 | Ast Research, Inc. | Memory controller with ECC and data streaming control |
KR970008188B1 (en) | 1993-04-08 | 1997-05-21 | Hitachi, Ltd. | Control method of flash memory and information processing apparatus using the same
JPH0729386A (en) | 1993-07-13 | 1995-01-31 | Hitachi Ltd | Flash memory and microcomputer
US5675725A (en) | 1993-07-19 | 1997-10-07 | Cheyenne Advanced Technology Limited | Computer backup system operable with open files |
KR0130873B1 (en) | 1994-01-11 | 1998-04-20 | Horie Yukiji | Grain tank in a combine
US5577213A (en) * | 1994-06-03 | 1996-11-19 | At&T Global Information Solutions Company | Multi-device adapter card for computer |
US5696917A (en) | 1994-06-03 | 1997-12-09 | Intel Corporation | Method and apparatus for performing burst read operations in an asynchronous nonvolatile memory |
US5519663A (en) | 1994-09-28 | 1996-05-21 | Sci Systems, Inc. | Preservation system for volatile memory with nonvolatile backup memory |
EP0710033A3 (en) | 1994-10-28 | 1999-06-09 | Matsushita Electric Industrial Co., Ltd. | MPEG video decoder having a high bandwidth memory |
JPH08278916A (en) | 1994-11-30 | 1996-10-22 | Hitachi Ltd | Multichannel memory system, transfer information synchronizing method, and signal transfer circuit |
US5563839A (en) | 1995-03-30 | 1996-10-08 | Simtek Corporation | Semiconductor memory device having a sleep mode |
US5630096A (en) | 1995-05-10 | 1997-05-13 | Microunity Systems Engineering, Inc. | Controller for a synchronous DRAM that maximizes throughput by allowing memory requests and commands to be issued out of order |
US5619644A (en) * | 1995-09-18 | 1997-04-08 | International Business Machines Corporation | Software directed microcode state save for distributed storage controller |
US5799200A (en) | 1995-09-28 | 1998-08-25 | Emc Corporation | Power failure responsive apparatus and method having a shadow dram, a flash ROM, an auxiliary battery, and a controller |
US5914906A (en) | 1995-12-20 | 1999-06-22 | International Business Machines Corporation | Field programmable memory array |
US6199142B1 (en) | 1996-07-01 | 2001-03-06 | Sun Microsystems, Inc. | Processor/memory device with integrated CPU, main memory, and full width cache and associated method |
US5813029A (en) | 1996-07-09 | 1998-09-22 | Micron Electronics, Inc. | Upgradeable cache circuit using high speed multiplexer |
US5757712A (en) | 1996-07-12 | 1998-05-26 | International Business Machines Corporation | Memory modules with voltage regulation and level translation |
US5890192A (en) | 1996-11-05 | 1999-03-30 | Sandisk Corporation | Concurrent write of multiple chunks of data into multiple subarrays of flash EEPROM |
US5870350A (en) | 1997-05-21 | 1999-02-09 | International Business Machines Corporation | High performance, high bandwidth memory bus architecture utilizing SDRAMs |
US5991885A (en) | 1997-06-11 | 1999-11-23 | Clarinet Systems, Inc. | Method and apparatus for detecting the presence of a remote device and providing power thereto |
KR100238188B1 (en) | 1997-09-12 | 2000-01-15 | Yun Jong-yong | Method and apparatus for generating memory clock of video controller
US6145068A (en) | 1997-09-16 | 2000-11-07 | Phoenix Technologies Ltd. | Data transfer to a non-volatile storage medium |
US5953215A (en) | 1997-12-01 | 1999-09-14 | Karabatsos; Chris | Apparatus and method for improving computer memory speed and capacity |
US6721860B2 (en) | 1998-01-29 | 2004-04-13 | Micron Technology, Inc. | Method for bus capacitance reduction |
US6158015A (en) | 1998-03-30 | 2000-12-05 | Micron Electronics, Inc. | Apparatus for swapping, adding or removing a processor in an operating computer system |
US6216247B1 (en) | 1998-05-29 | 2001-04-10 | Intel Corporation | 32-bit mode for a 64-bit ECC capable memory subsystem |
US6658507B1 (en) | 1998-08-31 | 2003-12-02 | Wistron Corporation | System and method for hot insertion of computer-related add-on cards |
US6269382B1 (en) * | 1998-08-31 | 2001-07-31 | Microsoft Corporation | Systems and methods for migration and recall of data from local and remote storage |
US6363450B1 (en) | 1999-03-17 | 2002-03-26 | Dell Usa, L.P. | Memory riser card for a computer system |
US6336176B1 (en) | 1999-04-08 | 2002-01-01 | Micron Technology, Inc. | Memory configuration data protection |
US6487623B1 (en) | 1999-04-30 | 2002-11-26 | Compaq Information Technologies Group, L.P. | Replacement, upgrade and/or addition of hot-pluggable components in a computer system |
US7827348B2 (en) | 2000-01-06 | 2010-11-02 | Super Talent Electronics, Inc. | High performance flash memory devices (FMD) |
US6336174B1 (en) | 1999-08-09 | 2002-01-01 | Maxtor Corporation | Hardware assisted memory backup system and method |
KR100375217B1 (en) | 1999-10-21 | 2003-03-07 | Samsung Electronics Co., Ltd. | Microcontroller incorporating an electrically rewritable non-volatile memory
US6571244B1 (en) | 1999-10-28 | 2003-05-27 | Microsoft Corporation | Run formation in large scale sorting using batched replacement selection |
JP2001166993A (en) | 1999-12-13 | 2001-06-22 | Hitachi Ltd | Memory control unit and method for controlling cache memory |
US8171204B2 (en) | 2000-01-06 | 2012-05-01 | Super Talent Electronics, Inc. | Intelligent solid-state non-volatile memory device (NVMD) system with multi-level caching of multiple channels |
US6459647B1 (en) | 2000-02-08 | 2002-10-01 | Alliance Semiconductor | Split-bank architecture for high performance SDRAMs |
US6691209B1 (en) * | 2000-05-26 | 2004-02-10 | Emc Corporation | Topological data categorization and formatting for a mass storage system |
JP3871853B2 (en) * | 2000-05-26 | 2007-01-24 | Renesas Technology Corp. | Semiconductor device and operation method thereof
JP3871184B2 (en) * | 2000-06-12 | 2007-01-24 | Sharp Corporation | Semiconductor memory device
DE10032236C2 (en) | 2000-07-03 | 2002-05-16 | Infineon Technologies Ag | Circuit arrangement for switching a receiver circuit, in particular in DRAM memories |
US6769081B1 (en) | 2000-08-30 | 2004-07-27 | Sun Microsystems, Inc. | Reconfigurable built-in self-test engine for testing a reconfigurable memory |
US6487102B1 (en) | 2000-09-18 | 2002-11-26 | Intel Corporation | Memory module having buffer for isolating stacked memory devices |
JP3646303B2 (en) | 2000-12-21 | 2005-05-11 | 日本電気株式会社 | Computer system, memory management method thereof, and recording medium recording memory management program |
US7107480B1 (en) | 2000-12-22 | 2006-09-12 | Simpletech, Inc. | System and method for preventing data corruption in solid-state memory devices after a power failure |
US6662281B2 (en) | 2001-01-31 | 2003-12-09 | Hewlett-Packard Development Company, L.P. | Redundant backup device |
JP4817510B2 (en) | 2001-02-23 | 2011-11-16 | Canon Inc. | Memory controller and memory control device
US6816982B2 (en) | 2001-03-13 | 2004-11-09 | Gonen Ravid | Method of and apparatus for computer hard disk drive protection and recovery |
US6675272B2 (en) * | 2001-04-24 | 2004-01-06 | Rambus Inc. | Method and apparatus for coordinating memory operations among diversely-located memory components |
US7228383B2 (en) | 2001-06-01 | 2007-06-05 | Visto Corporation | System and method for progressive and hierarchical caching |
JP4049297B2 (en) | 2001-06-11 | 2008-02-20 | Renesas Technology Corp. | Semiconductor memory device
TWI240864B (en) | 2001-06-13 | 2005-10-01 | Hitachi Ltd | Memory device |
JP4765222B2 (en) | 2001-08-09 | 2011-09-07 | NEC Corporation | DRAM device
US6614685B2 (en) | 2001-08-09 | 2003-09-02 | Multi Level Memory Technology | Flash memory array partitioning architectures |
JP4015835B2 (en) | 2001-10-17 | 2007-11-28 | Matsushita Electric Industrial Co., Ltd. | Semiconductor memory device
US6771553B2 (en) | 2001-10-18 | 2004-08-03 | Micron Technology, Inc. | Low power auto-refresh circuit and method for dynamic random access memories |
US6799241B2 (en) | 2002-01-03 | 2004-09-28 | Intel Corporation | Method for dynamically adjusting a memory page closing policy |
JP3756818B2 (en) * | 2002-01-09 | 2006-03-15 | MegaChips Corporation | Memory control circuit and control system
JP4082913B2 (en) * | 2002-02-07 | 2008-04-30 | Renesas Technology Corp. | Memory system
US20030158995A1 (en) | 2002-02-15 | 2003-08-21 | Ming-Hsien Lee | Method for DRAM control with adjustable page size |
US7249282B2 (en) | 2002-04-29 | 2007-07-24 | Thomson Licensing | Eeprom enable |
US6707748B2 (en) | 2002-05-07 | 2004-03-16 | Ritek Corporation | Back up power embodied non-volatile memory device |
US6810513B1 (en) | 2002-06-19 | 2004-10-26 | Altera Corporation | Method and apparatus of programmable interconnect array with configurable multiplexer |
JP4159415B2 (en) | 2002-08-23 | 2008-10-01 | Elpida Memory, Inc. | Memory module and memory system
JP4499982B2 (en) * | 2002-09-11 | 2010-07-14 | Hitachi, Ltd. | Memory system
US7111142B2 (en) | 2002-09-13 | 2006-09-19 | Seagate Technology Llc | System for quickly transferring data |
US6910635B1 (en) | 2002-10-08 | 2005-06-28 | Amkor Technology, Inc. | Die down multi-media card and method of making same |
US8412879B2 (en) * | 2002-10-28 | 2013-04-02 | Sandisk Technologies Inc. | Hybrid implementation for error correction codes within a non-volatile memory system |
US6944042B2 (en) | 2002-12-31 | 2005-09-13 | Texas Instruments Incorporated | Multiple bit memory cells and methods for reading non-volatile data |
US7089412B2 (en) | 2003-01-17 | 2006-08-08 | Wintec Industries, Inc. | Adaptive memory module |
US20040163027A1 (en) | 2003-02-18 | 2004-08-19 | Maclaren John M. | Technique for implementing chipkill in a memory system with X8 memory devices |
US20040190210A1 (en) | 2003-03-26 | 2004-09-30 | Leete Brian A. | Memory back up and content preservation |
US7234099B2 (en) | 2003-04-14 | 2007-06-19 | International Business Machines Corporation | High reliability memory module with a fault tolerant address and command bus |
JP2004355351A (en) * | 2003-05-29 | 2004-12-16 | Hitachi Ltd | Server device |
US7170315B2 (en) | 2003-07-31 | 2007-01-30 | Actel Corporation | Programmable system on a chip |
US20050044302A1 (en) | 2003-08-06 | 2005-02-24 | Pauley Robert S. | Non-standard dual in-line memory modules with more than two ranks of memory per module and multiple serial-presence-detect devices to simulate multiple modules |
US7231488B2 (en) | 2003-09-15 | 2007-06-12 | Infineon Technologies Ag | Self-refresh system and method for dynamic random access memory |
KR100574951B1 (en) | 2003-10-31 | 2006-05-02 | 삼성전자주식회사 | Memory module having improved register architecture |
US9213609B2 (en) | 2003-12-16 | 2015-12-15 | Hewlett-Packard Development Company, L.P. | Persistent memory device for backup process checkpoint states |
US7281114B2 (en) | 2003-12-26 | 2007-10-09 | Tdk Corporation | Memory controller, flash memory system, and method of controlling operation for data exchange between host system and flash memory |
KR100528482B1 (en) | 2003-12-31 | 2005-11-15 | Samsung Electronics Co., Ltd. | Flash memory system capable of inputting/outputting sector data at random
JP4428055B2 (en) | 2004-01-06 | 2010-03-10 | Sony Corporation | Data communication apparatus and memory management method for data communication apparatus
KR100606242B1 (en) | 2004-01-30 | 2006-07-31 | Samsung Electronics Co., Ltd. | Volatile Memory Device for buffering between non-Volatile Memory and host, Multi-chip packaged Semiconductor Device and Apparatus for processing data using the same
WO2005076137A1 (en) | 2004-02-05 | 2005-08-18 | Research In Motion Limited | Memory controller interface |
KR101133607B1 (en) | 2004-02-25 | 2012-04-10 | LG Electronics Inc. | Damper pin of Drum Washing Machine
US7916574B1 (en) | 2004-03-05 | 2011-03-29 | Netlist, Inc. | Circuit providing load isolation and memory domain translation for memory module |
US7532537B2 (en) | 2004-03-05 | 2009-05-12 | Netlist, Inc. | Memory module with a circuit providing load isolation and memory domain translation |
US20050204091A1 (en) * | 2004-03-11 | 2005-09-15 | Kilbuck Kevin M. | Non-volatile memory with synchronous DRAM interface |
JP2007536634A (en) * | 2004-05-04 | 2007-12-13 | Fisher-Rosemount Systems, Inc. | Service-oriented architecture for process control systems
EP1598831B1 (en) | 2004-05-20 | 2007-11-21 | STMicroelectronics S.r.l. | An improved page buffer for a programmable memory device |
US7535759B2 (en) | 2004-06-04 | 2009-05-19 | Micron Technology, Inc. | Memory system with user configurable density/performance option |
US20060069896A1 (en) | 2004-09-27 | 2006-03-30 | Sigmatel, Inc. | System and method for storing data |
US7200021B2 (en) | 2004-12-10 | 2007-04-03 | Infineon Technologies Ag | Stacked DRAM memory chip for a dual inline memory module (DIMM) |
KR100666169B1 (en) | 2004-12-17 | 2007-01-09 | Samsung Electronics Co., Ltd. | Flash memory data storing device
US7053470B1 (en) | 2005-02-19 | 2006-05-30 | Azul Systems, Inc. | Multi-chip package having repairable embedded memories on a system chip with an EEPROM chip storing repair information |
US7493441B2 (en) | 2005-03-15 | 2009-02-17 | Dot Hill Systems Corporation | Mass storage controller with apparatus and method for extending battery backup time by selectively providing battery power to volatile memory banks not storing critical data |
KR100759427B1 (en) * | 2005-03-17 | 2007-09-20 | Samsung Electronics Co., Ltd. | Hard disk drive and information processing system with reduced power consumption and data input and output method thereof
JP4724461B2 (en) | 2005-05-17 | 2011-07-13 | Oki Semiconductor Co., Ltd. | System LSI
US20060294295A1 (en) | 2005-06-24 | 2006-12-28 | Yukio Fukuzo | DRAM chip device well-communicated with flash memory chip and multi-chip package comprising such a device |
US20080126690A1 (en) | 2006-02-09 | 2008-05-29 | Rajan Suresh N | Memory module with memory stack |
US7464225B2 (en) | 2005-09-26 | 2008-12-09 | Rambus Inc. | Memory module including a plurality of integrated circuit memory devices and a plurality of buffer devices in a matrix topology |
US7409491B2 (en) | 2005-12-14 | 2008-08-05 | Sun Microsystems, Inc. | System memory board subsystem using DRAM with stacked dedicated high speed point to point links |
US7519754B2 (en) * | 2005-12-28 | 2009-04-14 | Silicon Storage Technology, Inc. | Hard disk drive cache memory and playback device |
US20070147115A1 (en) | 2005-12-28 | 2007-06-28 | Fong-Long Lin | Unified memory and controller |
KR20070076849A (en) * | 2006-01-20 | 2007-07-25 | Samsung Electronics Co., Ltd. | Apparatus and method for accomplishing copy-back operation in memory card
JP4780304B2 (en) | 2006-02-13 | 2011-09-28 | MegaChips Corporation | Semiconductor memory and data access method
US7421552B2 (en) | 2006-03-17 | 2008-09-02 | Emc Corporation | Techniques for managing data within a data storage system utilizing a flash-based memory vault |
JP4768504B2 (en) | 2006-04-28 | 2011-09-07 | Toshiba Corporation | Storage device using nonvolatile flash memory
US7653778B2 (en) | 2006-05-08 | 2010-01-26 | Siliconsystems, Inc. | Systems and methods for measuring the useful life of solid-state storage devices |
US7464240B2 (en) | 2006-05-23 | 2008-12-09 | Data Ram, Inc. | Hybrid solid state disk drive with controller |
US7716411B2 (en) | 2006-06-07 | 2010-05-11 | Microsoft Corporation | Hybrid memory device with single interface |
US8407395B2 (en) | 2006-08-22 | 2013-03-26 | Mosaid Technologies Incorporated | Scalable memory system |
JP4437489B2 (en) | 2006-10-25 | 2010-03-24 | Hitachi, Ltd. | Storage system having volatile cache memory and nonvolatile memory
KR101533120B1 (en) | 2006-12-14 | 2015-07-01 | Rambus Inc. | Multi-die memory device
US7554855B2 (en) * | 2006-12-20 | 2009-06-30 | Mosaid Technologies Incorporated | Hybrid solid-state memory system having volatile and non-volatile memory |
US20080189479A1 (en) * | 2007-02-02 | 2008-08-07 | Sigmatel, Inc. | Device, system and method for controlling memory operations |
US7752373B2 (en) | 2007-02-09 | 2010-07-06 | Sigmatel, Inc. | System and method for controlling memory operations |
WO2008131058A2 (en) * | 2007-04-17 | 2008-10-30 | Rambus Inc. | Hybrid volatile and non-volatile memory device |
KR100909965B1 (en) * | 2007-05-23 | 2009-07-29 | Samsung Electronics Co., Ltd. | A semiconductor memory system having a volatile memory and a nonvolatile memory sharing a bus and a method of controlling the operation of the nonvolatile memory
US8904098B2 (en) | 2007-06-01 | 2014-12-02 | Netlist, Inc. | Redundant backup using non-volatile memory |
US8874831B2 (en) * | 2007-06-01 | 2014-10-28 | Netlist, Inc. | Flash-DRAM hybrid memory module |
US7952179B2 (en) | 2007-06-28 | 2011-05-31 | Sandisk Corporation | Semiconductor package having through holes for molding back side of package |
US7865679B2 (en) | 2007-07-25 | 2011-01-04 | Agiga Tech Inc. | Power interrupt recovery in a hybrid memory subsystem
US8200885B2 (en) | 2007-07-25 | 2012-06-12 | Agiga Tech Inc. | Hybrid memory system with backup power source and multiple backup and restore methodology
US8001434B1 (en) | 2008-04-14 | 2011-08-16 | Netlist, Inc. | Memory board with self-testing capability |
US20090313416A1 (en) * | 2008-06-16 | 2009-12-17 | George Wayne Nation | Computer main memory incorporating volatile and non-volatile memory |
EP2141590A1 (en) * | 2008-06-26 | 2010-01-06 | Axalto S.A. | Method of managing data in a portable electronic device having a plurality of controllers |
US8478928B2 (en) | 2009-04-23 | 2013-07-02 | Samsung Electronics Co., Ltd. | Data storage device and information processing system incorporating data storage device |
KR101606880B1 (en) | 2009-06-22 | 2016-03-28 | Samsung Electronics Co., Ltd. | Data storage system and channel driving method thereof
US8266501B2 (en) | 2009-09-29 | 2012-09-11 | Micron Technology, Inc. | Stripe based memory operation |
CN102110057B (en) * | 2009-12-25 | 2013-05-08 | Montage Technology (Shanghai) Co., Ltd. | Memory module and method for exchanging data in memory module
US8898324B2 (en) * | 2010-06-24 | 2014-11-25 | International Business Machines Corporation | Data access management in a hybrid memory server |
US8418026B2 (en) | 2010-10-27 | 2013-04-09 | Sandisk Technologies Inc. | Hybrid error correction coding to address uncorrectable errors |
US8806245B2 (en) | 2010-11-04 | 2014-08-12 | Apple Inc. | Memory read timing margin adjustment for a plurality of memory arrays according to predefined delay tables |
US8713379B2 (en) | 2011-02-08 | 2014-04-29 | Diablo Technologies Inc. | System and method of interfacing co-processors and input/output devices via a main memory system |
KR101800445B1 (en) | 2011-05-09 | 2017-12-21 | Samsung Electronics Co., Ltd. | Memory controller and operating method of memory controller
US8792273B2 (en) | 2011-06-13 | 2014-07-29 | SMART Storage Systems, Inc. | Data storage system with power cycle management and method of operation thereof |
CN102411548B (en) | 2011-10-27 | 2014-09-10 | Memoright Memory Technology (Wuhan) Co., Ltd. | Flash memory controller and method for transmitting data among flash memories
US20140059170A1 (en) * | 2012-05-02 | 2014-02-27 | Iosif Gasparakis | Packet processing of data using multiple media access controllers |
US20140032820A1 (en) | 2012-07-25 | 2014-01-30 | Akinori Harasawa | Data storage apparatus, memory control method and electronic device with data storage apparatus |
US9436600B2 (en) | 2013-06-11 | 2016-09-06 | Svic No. 28 New Technology Business Investment L.L.P. | Non-volatile memory storage for multi-channel memory system |
2012
- 2012-07-26 US US13/559,476 patent/US8874831B2/en not_active Expired - Fee Related
- 2012-07-28 CN CN201710824058.1A patent/CN107656700B/en not_active Expired - Fee Related
- 2012-07-28 PL PL17191878T patent/PL3293638T3/en unknown
- 2012-07-28 WO PCT/US2012/048750 patent/WO2013016723A2/en active Application Filing
- 2012-07-28 CN CN201280047758.XA patent/CN103890688B/en active Active
- 2012-07-28 EP EP12817751.6A patent/EP2737383B1/en active Active
- 2012-07-28 KR KR1020147005608A patent/KR20140063660A/en not_active Application Discontinuation
- 2012-07-28 EP EP21200871.8A patent/EP3985518A1/en active Pending
- 2012-07-28 EP EP17191878.2A patent/EP3293638B1/en active Active
2014
- 2014-09-17 US US14/489,269 patent/US9158684B2/en active Active
2015
- 2015-08-31 US US14/840,865 patent/US9928186B2/en active Active
2018
- 2018-03-23 US US15/934,416 patent/US20190004985A1/en not_active Abandoned
2020
- 2020-12-30 US US17/138,766 patent/US11016918B2/en active Active
2021
- 2021-05-24 US US17/328,019 patent/US11232054B2/en active Active
2022
- 2022-01-24 US US17/582,797 patent/US20220222191A1/en active Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6856556B1 (en) * | 2003-04-03 | 2005-02-15 | Siliconsystems, Inc. | Storage subsystem with embedded circuit for protecting against anomalies in power signal from host |
US20060080515A1 (en) * | 2004-10-12 | 2006-04-13 | Lefthand Networks, Inc. | Non-Volatile Memory Backup for Network Storage System |
US20060174140A1 (en) * | 2005-01-31 | 2006-08-03 | Harris Shaun L | Voltage distribution system and method for a memory assembly |
US20070136523A1 (en) * | 2005-12-08 | 2007-06-14 | Bonella Randy M | Advanced dynamic disk memory module special operations |
US7724604B2 (en) * | 2006-10-25 | 2010-05-25 | Smart Modular Technologies, Inc. | Clock and power fault detection for memory modules |
US20080126624A1 (en) * | 2006-11-27 | 2008-05-29 | Edoardo Prete | Memory buffer and method for buffering data |
US20120271990A1 (en) * | 2007-06-01 | 2012-10-25 | Netlist, Inc. | Non-Volatile Memory Module |
Non-Patent Citations (2)
Title |
---|
JEDEC STANDARD, FBDIMM Specification, DDR2 SDRAM Fully Buffered DIMM (FBDIMM) Design Specification, JESD205, JEDEC SOLID STATE TECHNOLOGY ASSOCIATION, March 2007, 129 pages *
JEDEC STANDARD, FBDIMM, Advanced Memory Buffer (AMB), JESD82-20, JEDEC SOLID STATE TECHNOLOGY ASSOCIATION, March 2007, 190 pages *
Also Published As
Publication number | Publication date |
---|---|
CN103890688B (en) | 2017-10-13 |
WO2013016723A3 (en) | 2014-05-08 |
US11016918B2 (en) | 2021-05-25 |
CN107656700A (en) | 2018-02-02 |
US9928186B2 (en) | 2018-03-27 |
EP3293638B1 (en) | 2021-10-06 |
US20190004985A1 (en) | 2019-01-03 |
CN107656700B (en) | 2020-09-01 |
US20210124701A1 (en) | 2021-04-29 |
PL3293638T3 (en) | 2022-03-28 |
WO2013016723A2 (en) | 2013-01-31 |
EP2737383A2 (en) | 2014-06-04 |
EP2737383B1 (en) | 2017-09-20 |
US20210279194A1 (en) | 2021-09-09 |
EP2737383A4 (en) | 2015-07-08 |
CN103890688A (en) | 2014-06-25 |
US20150242313A1 (en) | 2015-08-27 |
US8874831B2 (en) | 2014-10-28 |
US11232054B2 (en) | 2022-01-25 |
US20160196223A1 (en) | 2016-07-07 |
US20130086309A1 (en) | 2013-04-04 |
EP3985518A1 (en) | 2022-04-20 |
KR20140063660A (en) | 2014-05-27 |
EP3293638A1 (en) | 2018-03-14 |
US9158684B2 (en) | 2015-10-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11232054B2 (en) | Flash-dram hybrid memory module | |
US8904099B2 (en) | Isolation switching for backup memory | |
US11068170B2 (en) | Multi-tier scheme for logical storage management | |
US9927999B1 (en) | Trim management in solid state drives | |
US8607023B1 (en) | System-on-chip with dynamic memory module switching | |
US9582192B2 (en) | Geometry aware block reclamation | |
US11782648B2 (en) | Storage system and method for host memory access | |
US20230214147A1 (en) | Storage System and Method for Avoiding Clustering of Reads During a Program Suspend |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER