CN108427647A - Method of reading data and hybrid memory module - Google Patents


Info

Publication number
CN108427647A
Authority
CN
China
Prior art keywords
cache
metadata
dram
data
controller
Prior art date
Legal status
Granted
Application number
CN201711136385.4A
Other languages
Chinese (zh)
Other versions
CN108427647B (en)
Inventor
张牧天
牛迪民
郑宏忠
Current Assignee
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd
Publication of CN108427647A
Application granted
Publication of CN108427647B
Legal status: Active

Links

Classifications

    • G06F 12/0864: cache addressing using pseudo-associative means, e.g. set-associative or hashing
    • G06F 12/0802: addressing of a memory level in which access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0646: memory configuration or reconfiguration
    • G06F 12/0822: cache consistency protocols using directory methods; copy directories
    • G06F 12/0246: memory management in block-erasable non-volatile memory, e.g. flash memory
    • G06F 12/0638: combination of memories, e.g. ROM and RAM, permitting replacement or supplementing of words in one module by words in another module
    • G06F 12/0875: caches with dedicated cache, e.g. instruction or stack
    • G06F 12/0895: caches characterised by the organisation or structure of parts of caches, e.g. directory or tag array
    • G06F 3/068: hybrid storage device
    • G06F 2212/1024: latency reduction
    • G06F 2212/1048: scalability
    • G06F 2212/2022: main memory using flash memory
    • G06F 2212/221: cache memory using static RAM
    • G06F 2212/28: using a specific disk cache architecture
    • G06F 2212/283: plural cache memories
    • G06F 2212/466: caching metadata, control data
    • G06F 2212/507: control mechanisms for virtual memory, cache or TLB using speculative control
    • G06F 2212/62: details of cache specific to multiprocessor cache arrangements
    • Y02D 10/00: energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

A method of reading data and a hybrid memory module are disclosed. According to one embodiment, the method includes: providing a hybrid memory module including a DRAM cache, a flash memory, and an SRAM for storing a metadata cache; obtaining a host address by decoding a data access request received from a host computer, wherein the host address includes a DRAM cache tag and a DRAM cache index; obtaining a metadata address from the DRAM cache index, wherein the metadata address includes a metadata cache tag and a metadata cache index; determining a metadata cache hit based on the presence of a matching metadata cache entry in the metadata cache of the SRAM; in the case of a metadata cache hit, obtaining the data from the DRAM cache while skipping access to the metadata of the DRAM cache; and returning the data obtained from the DRAM cache to the host computer.

Description

Method of reading data and hybrid memory module
This application claims the benefit of U.S. Provisional Patent Application No. 62/459,414, filed on February 15, 2017, and U.S. Patent Application No. 15/587,286, filed on May 4, 2017, the disclosures of which are incorporated herein by reference in their entirety.
Technical field
The present disclosure generally relates to hybrid memory modules and, more particularly, to a system and method for mitigating the overhead of accessing the metadata of a DRAM cache in a hybrid memory module by using an SRAM metadata cache and a Bloom filter.
Background
A hybrid memory module refers to a memory module that contains both volatile memory (for example, dynamic random-access memory (DRAM)) and non-volatile memory (for example, flash memory) as its main data storage. One example of a hybrid memory module is a hybrid dual in-line memory module (DIMM) that integrates DRAM and flash memory. In a typical configuration, the DRAM serves as a cache for the data stored in the flash memory. To enable fast access to the DRAM cache, the metadata of the DRAM cache can be stored in a static random-access memory (SRAM) of the hybrid memory module.
However, the storage size required for the metadata of the DRAM cache can exceed the available size of the SRAM. The capacity of the SRAM integrated in a hybrid DIMM tends to be kept relatively small because of its cost. Due to the limited storage size of the SRAM, the entire metadata of the DRAM cache may not fit in the SRAM; as a result, the remainder of the metadata that cannot be accommodated in the SRAM must be stored in the DRAM. In this case, the slow access to the metadata stored in the DRAM can degrade performance when accessing data.
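A back-of-envelope calculation illustrates why the full DRAM-cache metadata may not fit in the SRAM. All parameters below (cache size, line size, tag width, SRAM capacity) are illustrative assumptions, not values taken from the patent:

```python
# Rough estimate of DRAM-cache metadata size versus SRAM capacity.
# Every parameter here is an illustrative assumption.

def metadata_bytes(cache_bytes, line_bytes, tag_bits, state_bits=2):
    """Total metadata size: one (tag + valid/dirty bits) entry per cache line."""
    num_lines = cache_bytes // line_bytes
    bits_per_entry = tag_bits + state_bits      # tag + valid bit + dirty bit
    return num_lines * bits_per_entry // 8      # in bytes

dram_cache = 16 * 2**30          # assumed 16 GiB DRAM cache
line_size  = 4096                # assumed 4 KiB cache line (one flash page)
tag_bits   = 20                  # assumed tag width

md = metadata_bytes(dram_cache, line_size, tag_bits)
sram = 2 * 2**20                 # assumed 2 MiB SRAM

print(f"metadata: {md / 2**20:.1f} MiB, SRAM: {sram / 2**20:.1f} MiB")
# Under these assumptions the metadata (~11 MiB) exceeds the SRAM,
# so part of it must spill into the DRAM.
```

With these numbers the metadata is several times larger than the SRAM, which is the situation the remainder of the disclosure addresses.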
Several methods have been proposed to address this problem. A first method is to reduce the size of the metadata stored in the SRAM. For example, the metadata size can be reduced by reducing the number of cache lines stored in the SRAM; in this case, the size of each cache line is increased. The larger cache-line size can negatively affect the hit rate and requires reading multiple pages from the flash memory in the case of a cache miss. In another example, the cache associativity can be reduced by reducing the tag bits and replacement bits, but this method can also negatively affect the hit rate. In yet another example, the replacement policy can be replaced by one that does not require replacement bits.
However, test results show that combining these techniques for reducing the metadata size achieves only a small fraction of the required reduction. Therefore, the problem of the limited size of the SRAM for storing metadata persists as the data storage capacity of the flash memory and the size of the DRAM cache grow.
Summary
According to one embodiment, a method includes: providing a hybrid memory module including a dynamic random-access memory (DRAM) cache, a flash memory, and a static random-access memory (SRAM) for storing a metadata cache, wherein the DRAM cache includes cached copies of data stored in the flash memory and metadata corresponding to the cached copies of the data, and the metadata cache includes cached copies of a portion of the metadata of the DRAM cache; receiving, from a host computer, a data access request for data stored in the hybrid memory module; obtaining a host address by decoding the data access request, wherein the host address includes a DRAM cache tag and a DRAM cache index; obtaining a metadata address from the DRAM cache index, wherein the metadata address includes a metadata cache tag and a metadata cache index; determining a metadata cache hit based on the presence of a matching metadata cache entry in the metadata cache of the SRAM, wherein the matching metadata cache entry has a pair of the metadata cache tag and the DRAM cache tag; in the case of a metadata cache hit, obtaining the data from the DRAM cache while skipping access to the metadata of the DRAM cache; and returning the data obtained from the DRAM cache to the host computer.
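The two-level address decomposition in this method can be sketched as plain bit-field extraction: the host address splits into a DRAM cache tag and index, and the DRAM cache index in turn splits into a metadata cache tag and index. The field widths below are hypothetical; the patent does not fix them:

```python
# Hypothetical bit-field layout for the two-level lookup:
#   host address     -> (DRAM cache tag, DRAM cache index)
#   DRAM cache index -> (metadata cache tag, metadata cache index)

DRAM_INDEX_BITS = 22      # assumed: 4M-entry DRAM cache
META_INDEX_BITS = 14      # assumed: 16K-entry SRAM metadata cache

def decode_host_address(host_addr, offset_bits=12):
    """Split a host address into DRAM cache tag and index (4 KiB lines assumed)."""
    line_addr = host_addr >> offset_bits
    dram_index = line_addr & ((1 << DRAM_INDEX_BITS) - 1)
    dram_tag = line_addr >> DRAM_INDEX_BITS
    return dram_tag, dram_index

def decode_metadata_address(dram_index):
    """Derive the metadata cache tag and index from the DRAM cache index."""
    meta_index = dram_index & ((1 << META_INDEX_BITS) - 1)
    meta_tag = dram_index >> META_INDEX_BITS
    return meta_tag, meta_index

dram_tag, dram_index = decode_host_address(0x123456789)
meta_tag, meta_index = decode_metadata_address(dram_index)
print(hex(dram_tag), hex(dram_index), hex(meta_tag), hex(meta_index))
```

Because the metadata address is derived from the DRAM cache index alone, the SRAM lookup can proceed without first touching the metadata in DRAM, which is what makes the fast-hit path possible.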
According to another embodiment, a hybrid memory module includes: a flash memory; a dynamic random-access memory (DRAM) cache, wherein the DRAM cache includes cached copies of data stored in the flash memory and metadata corresponding to the cached copies of the data; a static random-access memory (SRAM) for storing a metadata cache, the metadata cache including cached copies of a portion of the metadata of the DRAM cache; a memory interface for providing an interface to a host computer; a memory access controller for accessing the data stored in the DRAM cache and the flash memory; a DRAM controller for controlling access to the DRAM cache; a flash controller for controlling access to the flash memory; and a cache controller for determining the presence of a cached copy of the data requested by the host computer.
The cache controller is configured to: obtain a host address by decoding the data access request, wherein the host address includes a DRAM cache tag and a DRAM cache index; obtain a metadata address from the DRAM cache index, wherein the metadata address includes a metadata cache tag and a metadata cache index; determine a metadata cache hit based on the presence of a matching metadata cache entry in the metadata cache of the SRAM, wherein the matching metadata cache entry has a pair of the metadata cache tag and the DRAM cache tag; and, in the case of a metadata cache hit, instruct the DRAM controller to skip access to the metadata of the DRAM cache and obtain the data from the DRAM cache. The memory access controller is configured to return the data obtained from the DRAM cache to the host computer.
The above and other preferred features, including various novel details of implementation and combination of events, will now be more particularly described with reference to the accompanying drawings and pointed out in the claims. It will be understood that the particular systems and methods described herein are shown by way of illustration only and not as limitations. As those skilled in the art will appreciate, the principles and features described herein may be employed in various and numerous embodiments without departing from the scope of the disclosure.
Description of the drawings
The accompanying drawings, which are included as part of the present specification, illustrate the presently preferred embodiments and, together with the general description given above and the detailed description of the preferred embodiments given below, serve to explain and teach the principles described herein.
Fig. 1 shows the architecture of a conventional hybrid memory module;
Fig. 2 shows the architecture of an exemplary hybrid memory module, according to one embodiment;
Fig. 3 shows a block diagram of an exemplary cache controller operation, according to one embodiment;
Fig. 4 shows an exemplary Bloom filter implemented in a hybrid memory module, according to one embodiment;
Fig. 5 is a flowchart of an example process for executing a data request by a cache controller, according to one embodiment.
The figures are not necessarily drawn to scale, and elements of similar structures or functions are generally represented by like reference numerals for illustrative purposes throughout the figures. The figures are only intended to facilitate the description of the various embodiments described herein. The figures do not describe every aspect of the teachings disclosed herein and do not limit the scope of the claims.
Detailed description
Each of feature disclosed herein and introduction may be employed separately or combine land productivity with other features and introduction With to provide for alleviating access mixing memory mould DRAM cache in the block using SRAM metadata caches and Bloom filter Metadata expense system and method.It is described in further detail with reference to attached drawing and individually and in combination utilizes these additional special The many representative examples sought peace in instructing.The specific implementation mode is meant only to the side that introduction those skilled in the art put into practice this introduction The further details in face, and it is not intended to limitation the scope of the claims.Therefore, disclosed feature in a specific embodiment below Combination may not be only to be taught as being the representativeness for specifically describing this introduction on the contrary necessary to putting into practice this introduction in the broadest sense Example.
In the following description, for purposes of explanation only, specific nomenclature is set forth to provide a thorough understanding of the present disclosure. However, it will be apparent to one skilled in the art that these specific details are not required to practice the teachings of the present disclosure.
Some portions of the detailed description herein are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are used by those skilled in the data processing arts to effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing," "computing," "calculating," "determining," "displaying," or the like refer to the action and processes of a computer system, or a similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers, or other such information storage, transmission, or display devices.
The present disclosure describes various techniques for reducing accesses to the DRAM metadata. For example, DRAM metadata accesses can be reduced by using a random replacement policy. In this case, replacement bits may not be needed, but performance can be negatively affected. In another example, DRAM metadata accesses can be reduced by storing part of the DRAM cache metadata in the SRAM. In this case, only an SRAM match can trigger a DRAM lookup. However, when metadata matches occur frequently, the partial-metadata approach can suffer degraded performance. An alternative is to use the SRAM to cache the metadata itself and to use a Bloom filter to efficiently filter out DRAM cache misses, performing a DRAM lookup only on a tag match; a Bloom filter, however, can trigger false positives. Using a Bloom filter alone in the SRAM may therefore not be effective, because DRAM cache performance is still degraded: every Bloom filter positive must be confirmed by reading the DRAM metadata.
Moreover, various features of the representative examples and the dependent claims may be combined in ways that are not specifically and explicitly enumerated in order to provide additional useful embodiments of the present teachings. It is also expressly noted that all value ranges or indications of groups of entities disclose every possible intermediate value or intermediate entity for the purpose of an original disclosure, as well as for the purpose of restricting the claimed subject matter. It is also expressly noted that the dimensions and the shapes of the components shown in the figures are designed to help understand how the present teachings are practiced, and are not intended to limit the dimensions and the shapes shown in the examples.
The present disclosure provides a system and method for mitigating the overhead of accessing the DRAM cache metadata in a hybrid memory module using an SRAM metadata cache and a Bloom filter. According to one embodiment, a Bloom filter and an SRAM cache are combined to store the metadata of the DRAM cache. The combined use of the Bloom filter and the SRAM cache can compensate for their respective disadvantages and provide improved performance for accessing data stored in the hybrid memory module. Generally, the Bloom filter is beneficial for cache misses, while the metadata cache is beneficial for cache hits. If the metadata cache and the Bloom filter are accessed concurrently, a negative (FALSE) result from the Bloom filter indicates a fast cache miss, because no DRAM access is needed. A hit in the SRAM metadata cache is regarded as a fast cache hit, because no DRAM metadata access is needed.
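The complementary roles of the two structures can be summarized as a small decision function. This is an interpretive sketch of the lookup outcomes described above, not code from the patent:

```python
def classify_lookup(metadata_cache_hit, bloom_positive):
    """Classify a concurrent SRAM metadata-cache check and Bloom filter test.

    Returns where the controller goes next:
      - "fast_hit":  SRAM metadata cache hit -> read the data from DRAM directly.
      - "fast_miss": Bloom filter negative   -> go straight to flash.
      - "check_dram_metadata": Bloom positive but SRAM miss -> the Bloom
        result may be a false positive, so the DRAM metadata must be read.
    """
    if metadata_cache_hit:
        return "fast_hit"            # no DRAM metadata access needed
    if not bloom_positive:
        return "fast_miss"           # a Bloom filter never gives false negatives
    return "check_dram_metadata"     # resolve a possible false positive in DRAM

print(classify_lookup(True, True))    # fast_hit
print(classify_lookup(False, False))  # fast_miss
print(classify_lookup(False, True))   # check_dram_metadata
```

Only the last case pays the cost of a DRAM metadata access, which is how the combination compensates for the weaknesses of each structure used alone.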
Fig. 1 shows the architecture of a conventional hybrid memory module. The hybrid memory module 100 includes a DRAM cache 101, a flash memory 151, a memory access controller 111, a DRAM controller 121, a cache controller 124, an SRAM 126 for storing the metadata of a metadata cache 127, and a flash controller 131. The metadata cache 127 includes a cached version of the metadata 102 of the DRAM cache 101. The DRAM cache 101 stores cached data of the flash memory 151 and includes the metadata 102, a read cache 103, and a write buffer 104. The metadata 102 may include tags, valid bits, dirty bits, etc. The DRAM cache may also include cached copies of data stored in the flash memory, with the metadata 102 corresponding to the cached copies of the data. Note that, unless explicitly stated otherwise, the terms metadata and tag may be used interchangeably herein. The read cache 103 stores cached data from the flash memory 151 and is used for caching data; it can reduce the number of memory accesses to the flash memory 151. Data to be written to the flash memory 151 may be buffered in the write buffer 104, which can reduce the write traffic to the flash memory 151.
A host computer (not shown) sends a memory access request via a memory interface 140 to access data stored in the hybrid memory module 100, according to a memory master/slave interface protocol (for example, NVDIMM-P) established between the host computer and the hybrid memory module 100. The memory access request is forwarded to the memory access controller 111. The memory access controller 111 acts as a translator and converts the request from the host (via the NVDIMM-P protocol) into a format that can be read by the hybrid memory module 100. After converting the request from the host, the memory access controller 111 forwards the translated information to the cache controller 124.
The cache controller 124 checks the metadata cache 127 stored in the SRAM 126 and determines a metadata cache hit or a metadata cache miss. In the case of a metadata cache hit, the cache controller 124 confirms that the requested data is stored in the DRAM cache 101 and, without accessing the metadata 102 in the DRAM cache 101, requests the DRAM controller 121 to access the requested data in the DRAM cache 101 using the metadata stored in the metadata cache 127. In the case of a metadata cache miss, the cache controller 124 requests the DRAM controller 121 to access the metadata 102 stored in the DRAM cache 101 and determines a DRAM cache hit or a DRAM cache miss. After checking for a DRAM cache hit or miss, the cache controller 124 can determine the exact destination address of the requested data. In the case of a DRAM cache hit, the cache controller 124 requests the DRAM controller 121 to access the requested data stored in the DRAM cache 101. In the case of a DRAM cache miss, the cache controller 124 requests the flash controller 131 to access the requested data stored in the flash memory 151.
When the cache controller 124 determines that the requested data is stored in the DRAM cache 101, the cache controller 124 instructs the DRAM controller 121 to access the DRAM cache 101 by referring to the metadata cache 127 stored in the SRAM 126 in the case of a metadata cache hit, or by referring to the metadata 102 in the case of a DRAM cache hit. When the cache controller 124 determines that the requested data is stored in the flash memory 151, the flash controller 131 accesses and retrieves the data stored in the flash memory 151 via a flash stream 150.
Fig. 2 shows the architecture of an exemplary hybrid memory module, according to one embodiment. The hybrid memory module 200 includes a DRAM cache 201, a flash memory 251, a memory access controller 211, a DRAM controller 221, a cache controller 224, an SRAM 226, and a flash controller 231.
The SRAM 226 stores cached copies of the metadata 202 of the DRAM cache 201 in a metadata cache 227. Depending on the available size of the metadata cache 227, the cached copies of the metadata 202 stored in the metadata cache 227 may be a subset of the metadata 202. The SRAM 226 also stores another "set" of cached metadata (the full set of the metadata 202 or a subset of the metadata 202) in the form of a Bloom filter array 229 in a Bloom filter.
The metadata cache 227 includes a cached version of the metadata 202 of the DRAM cache 201. The DRAM cache 201 stores cached data of the flash memory 251 and includes the metadata 202, a read cache 203, and a write buffer 204. The metadata 202 may include tags, valid bits, dirty bits, etc. Note that, unless explicitly stated otherwise, the terms metadata and tag may be used interchangeably herein. The read cache 203 stores cached data from the flash memory 251 and is used for caching data; it can reduce the number of memory accesses to the flash memory 251. Data to be written to the flash memory 251 may be buffered in the write buffer 204, which can reduce the write traffic to the flash memory 251.
A host computer (not shown) sends a memory access request via a memory interface 240 to access data stored in the hybrid memory module 200, according to a memory master/slave interface protocol (for example, NVDIMM-P) established between the host computer and the hybrid memory module 200. The memory access request is forwarded to the memory access controller 211. The memory access controller 211 acts as a translator and converts the request from the host (via the NVDIMM-P protocol) into a format that can be read by the hybrid memory module 200. After converting the request from the host, the memory access controller 211 forwards the translated information to the cache controller 224.
The cache controller 224 checks the metadata cache 227 stored in the SRAM 226 and determines a metadata cache hit or a metadata cache miss. In the case of a metadata cache hit, the cache controller 224 confirms that the requested data is stored in the DRAM cache 201 and, without accessing the metadata 202 in the DRAM cache 201, requests the DRAM controller 221 to access the requested data in the DRAM cache 201 using the metadata stored in the metadata cache 227.
According to one embodiment, the cache controller 224 provides various functions to operate on the metadata stored in the SRAM 226. Examples of such functions include, but are not limited to, an insertion function and a test function for managing the metadata stored in the SRAM 226. The Bloom filter can use these functions to operate on the Bloom filter array 229 stored in the SRAM 226 to determine whether there is a DRAM cache hit or a DRAM cache miss. For example, the cache controller 224 executes the Bloom filter test function on the Bloom filter array 229. If the Bloom filter test result is negative, meaning a DRAM cache miss indicating that the data is not stored in the DRAM cache 201, the cache controller 224 sends a request for the data to the flash controller 231 without accessing the metadata 202 of the DRAM cache 201. According to one embodiment, the metadata cache check of the metadata cache 227 and the Bloom filter test of the Bloom filter array 229 can be performed independently, concurrently, or in a particular order.
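The insertion and test functions behave like those of a standard Bloom filter: inserted keys always test positive, and keys that were never inserted almost always test negative. The sketch below is a generic software model, not the patent's hardware design; a real controller would use simple hardware hash functions rather than SHA-256, and the array and hash-count sizes here are arbitrary assumptions:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter model: no false negatives, rare false positives."""

    def __init__(self, num_bits=1 << 16, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)   # the "Bloom filter array"

    def _positions(self, key):
        # Double hashing: derive k bit positions from one digest.
        d = hashlib.sha256(str(key).encode()).digest()
        h1 = int.from_bytes(d[:8], "little")
        h2 = int.from_bytes(d[8:16], "little") | 1
        return [(h1 + i * h2) % self.num_bits for i in range(self.num_hashes)]

    def insert(self, key):
        """Insertion function: set the k bits for this key."""
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def test(self, key):
        """Test function: positive only if all k bits for this key are set."""
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

bf = BloomFilter()
bf.insert(0x1234)                 # e.g. a tag of a line cached in DRAM
print(bf.test(0x1234))            # True: inserted keys always test positive
print(bf.test(0x9999))            # almost certainly False; a True here
                                  # would be a false positive
```

A negative test is definitive, which is why the controller can route the request straight to the flash controller without touching the DRAM metadata.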
If it is a metadata cache miss and the Bloom filter test result is positive (that is, it indicates a DRAM cache hit, which may however be a false positive), the cache controller 224 requests the DRAM controller 221 to access the metadata 202 of the DRAM cache 201 to determine whether it is actually a DRAM cache hit or a DRAM cache miss. Based on the presence of matching metadata in the metadata 202 of the DRAM cache 201, the cache controller 224 can determine the exact destination address of the requested data. If it is a DRAM cache hit, the cache controller 224 requests the DRAM controller 221 to access the requested data stored in the DRAM cache 201. If it is a DRAM cache miss, the cache controller 224 requests the flash controller 231 to access the requested data stored in the flash memory 251.
Referring to Fig. 2, the metadata cache 227 and the Bloom filter array 229 can have independent data structures and can store the same or different metadata. The metadata cache 227 and the Bloom filter array 229 are independent of each other and have their own reserved areas in the SRAM 226 for storing their own metadata and the logic that operates on that metadata. According to one embodiment, the SRAM 226 includes only one of the metadata cache 227 and the Bloom filter array 229, or the cache controller 224 operates only one of the metadata cache 227 and the Bloom filter array 229. Regardless of the presence or operation of the Bloom filter array 229, the metadata cache 227 can operate in the same manner. Similarly, regardless of the presence or operation of the metadata cache 227, the Bloom filter array 229 can operate in the same manner. Depending on the mode of operation and the presence (or absence) of the metadata cache 227, the regions of the SRAM 226 allocated to the metadata cache 227 and the Bloom filter array 229 can change dynamically. The hybrid memory module 200 provides one or more "wrapper" functions for the metadata cache 227 and the Bloom filter array 229 to help determine a cache hit or cache miss without necessarily relying on each other.
In the case of a metadata cache hit, that is, if the metadata cache 227 in the SRAM 226 stores a cached copy of the metadata, the cache controller 224 determines that the requested data is stored in the DRAM cache 201, and the cache controller 224 instructs the DRAM controller 221 to access the DRAM cache 201 by referring to the metadata cache 227 stored in the SRAM 226. In the case of a DRAM cache hit, the cache controller 224 instructs the DRAM controller 221 to access the DRAM cache 201 by referring to the metadata 202. When the cache controller 224 determines that the requested data is stored in the flash memory 251, the flash controller 231 accesses and retrieves the data stored in the flash memory 251 via the flash stream 250.
According to one embodiment, because the DRAM metadata 202 is inclusive and clean, the metadata cache 227 can be kept valid. For example, a line in the metadata cache 227 is also stored in the metadata 202 of the DRAM cache 201. Any type of access to the metadata cache 227 is always treated as a read access. Table 1 shows a list of actions performed by the cache controller 224 of the hybrid memory module 200 according to the access type and the location of the requested data.
Table 1: Data access and actions
In the case of a read access request, the cache controller 224 can determine in the following way whether a cached copy of the requested data can be found in the DRAM cache 201. First, the cache controller 224 checks whether matching metadata exists in the metadata cache 227. If a match is found, then, due to the inclusiveness of the metadata tags and the DRAM cache tags, the cache controller 224 can infer that the read access request hits in the metadata cache 227. The cache controller 224 can then use the metadata hit in the metadata cache 227 to request the target data from the DRAM cache 201 without accessing the DRAM metadata 202. In addition to the metadata cache check, the cache controller 224 can use the Bloom filter array 229 to perform a DRAM cache hit-or-miss check. In the case of a DRAM cache miss reported by the Bloom filter test, the cache controller 224 can infer that the target data is not in the DRAM cache 201, and sends a request to the flash controller 231 to obtain the data without accessing the metadata 202 of the DRAM cache 201. If the Bloom filter test result indicates a DRAM cache hit, then, because a DRAM cache hit reported by the Bloom filter may be a false positive, the cache controller 224 cannot determine at this stage whether it is a true DRAM cache hit or miss. Therefore, in this case, the cache controller 224 also requests the DRAM controller 221 to access the DRAM metadata 202 to determine whether it is a true DRAM cache hit or miss. By comparing the DRAM cache tag included in the data access request with the DRAM metadata 202, the cache controller 224 can accurately determine whether the data request hits or misses in the DRAM cache 201. If it is a DRAM cache hit, the cache controller 224 can request the target data from the DRAM cache 201. If it is a DRAM cache miss, the cache controller 224 can request the target data from the flash memory 251, insert the data obtained from the flash memory 251 into the DRAM cache 201, and update the DRAM metadata 202. Whether it is a DRAM cache hit or miss, because the metadata cache missed, the cache controller 224 can update the metadata cache 227 using the metadata obtained from the DRAM metadata 202.
In the case of a write access request, the cache controller 224 can perform equivalent actions based on the matching results of the metadata cache 227 and the DRAM cache 201, as in the case of a read access request. The differences between read access operations and write access operations will be emphasized in the detailed description below.
Fig. 3 shows a block diagram of an exemplary cache controller operation according to one embodiment. The cache controller explained with reference to Fig. 3 can be the cache controller 224 integrated in the hybrid memory module 200 as shown in Fig. 2. In this case, repeated explanations of the hybrid memory module 200 and of the units, modules, and devices integrated in the hybrid memory module will be omitted.
In response to a data access request received from the host computer, the cache controller decodes the data access request to obtain the host address 301 of the requested data and to identify the access type (for example, read or write). The host address 301 includes a tag 302 (also referred to as a DRAM cache tag), an index 303 (also referred to as a DRAM cache index), and an offset 304.
According to one embodiment, a cache line in the DRAM cache can be larger than the size of the data requested by the host. In this case, the offset 304 is used to determine the portion of the cache line corresponding to the requested data. For example, if the cache line is 2 KB and the size of the requested data is 1 B, there are 2048 (2 KB/1 B) pieces of data in the cache line, so 2048 offset values in total are needed to uniquely identify the data piece referenced by the host address. For example, an offset 304 of 0 refers to the first data piece in the cache line, and an offset 304 of 2047 refers to the last data piece in the cache line.
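The address split described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 2 KB line size matches the example in the text, while the index width is an assumption chosen only to make the sketch concrete.

```python
# Split a host address into tag 302, index 303, and offset 304.
LINE_SIZE = 2048          # 2 KB cache line -> 2048 one-byte pieces
OFFSET_BITS = 11          # log2(2048)
INDEX_BITS = 16           # assumed DRAM cache index width (illustrative)

def decode_host_address(addr: int):
    offset = addr & (LINE_SIZE - 1)                       # piece within the line
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)              # DRAM cache tag
    return tag, index, offset

# Offsets 0..2047 select the first..last 1 B piece of a 2 KB line.
tag, index, offset = decode_host_address(0x12345678)
```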
According to one embodiment, the index 303 may include metadata information associated with the DRAM cache. The cache controller may further decode the index 303 to obtain a metadata address 311, and compare the metadata address 311 with the metadata cache 327 stored in the SRAM to determine a metadata cache hit or miss. According to one embodiment, the metadata address 311 includes a metadata cache (MDC) tag 312 and a metadata cache (MDC) index 313.
First, the cache controller uses the MDC index 313 of the metadata address 311 to select a matching metadata entry from among the multiple DRAM cache metadata entries stored in the SRAM metadata cache 327. Each matching metadata entry stored in the SRAM metadata cache 327 can have a tag pair including an MDC tag (for example, 333a to 333d) and a DRAM cache tag (for example, 332a to 332d), as well as a valid bit V. The valid bit V indicates whether the associated cache line is valid. For example, if V=0, the cache line holding the matching metadata entry indicates a cache miss. If the SRAM metadata cache 327 is organized as multiple ways (for example, way 0, way 1, way 2, and way 3) as shown in the example, the MDC index 313 can correspond to a way ID.
To determine a metadata cache hit or miss, the cache controller searches for a matching entry in the metadata cache 327 and compares the MDC tag 312 of the metadata address 311 with the MDC tag of the identified matching entry. If the metadata tag 312 matches the MDC tag, the data including the original host tag is read from the matching entry. The original host tag stored in the metadata cache 327 is compared with the host tag 302 of the host address 301. If they match, the cache controller determines a metadata cache hit and, without accessing the metadata of the DRAM cache, accesses the requested data in the DRAM cache using the matched host tag stored in the metadata cache 327.
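The two-stage tag comparison above can be sketched as a small set-associative lookup. This is a hedged illustration under stated assumptions: the entry layout (MDC tag, host tag, valid bit) follows the description, while the class names, set organization, and the default of four ways (mirroring the Fig. 3 example) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class MdcEntry:
    mdc_tag: int       # MDC tag, e.g. 333a..333d
    host_tag: int      # DRAM cache tag, e.g. 332a..332d
    valid: bool        # valid bit V

class MetadataCache:
    def __init__(self, num_sets: int = 4):
        self.sets = [[] for _ in range(num_sets)]

    def insert(self, mdc_index: int, entry: MdcEntry):
        self.sets[mdc_index].append(entry)

    def lookup(self, mdc_index: int, mdc_tag: int, host_tag: int) -> bool:
        """True only for a metadata cache hit: a valid entry whose MDC tag
        and stored host tag both match (the two comparisons in the text)."""
        for e in self.sets[mdc_index]:
            if e.valid and e.mdc_tag == mdc_tag and e.host_tag == host_tag:
                return True
        return False
```

On a hit, the controller can go straight to the DRAM cache; an invalid entry (V=0) or any tag mismatch falls through to the miss path.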
In addition to the metadata cache check, the cache controller can perform a Bloom filter test to determine, using the Bloom filter, the presence (or absence) of cached data in the DRAM cache (that is, a DRAM cache hit or miss). The Bloom filter can be implemented in various forms (for example, using comparators). The cache controller compares the tag 302 of the host address 301 with the DRAM cache tag of the matching metadata pair identified by the metadata cache check. If the Bloom filter test indicates a DRAM cache miss (that is, no DRAM cache tag in the metadata cache 327 matches the tag 302 of the host address 301), the cache controller can infer that the target data does not exist in the DRAM cache, and requests the flash controller to access the data stored in the flash memory.
In some cases, the cache controller may not find a matching metadata entry in the metadata cache 327 (that is, a metadata cache miss), while the Bloom filter indicates a DRAM cache hit. However, the DRAM cache hit indicated by the Bloom filter may be a false positive; therefore, the cache controller then accesses the metadata of the DRAM cache to determine whether the target data is actually stored in the DRAM cache (a true DRAM cache hit or miss). Based on the comparison between the tag 302 of the host address 301 and the metadata of the DRAM cache, the cache controller can accurately determine the location of the target data. If there is no matching metadata in the DRAM cache (that is, a true DRAM cache miss), the cache controller can be certain that there is no cached copy of the requested data in the DRAM. In this case, the cache controller can skip accessing the DRAM cache and directly access the flash controller to obtain the data stored in the flash memory. If there is matching metadata in the DRAM cache (that is, a true DRAM cache hit), the cache controller can access the DRAM controller to obtain the data stored in the DRAM cache.
Fig. 4 shows an exemplary Bloom filter implemented in the hybrid memory module according to one embodiment. The cache controller provides an insert function and a test function for the Bloom filter. According to one embodiment, the insert function is a hash function. The cache controller can use more than one hash function depending on the configuration of the Bloom filter. For example, the inputs to the insert function and the test function, represented as x, y, z, and w, can be metadata cache tags.
The metadata for the Bloom filter stored in the reserved area of the SRAM of the present hybrid memory module can be implemented as an array including multiple entries (also referred to as a Bloom filter array). In this example, each entry of the Bloom filter array is 16 bits, and there are three hash functions. The test function provides matching results for all three hash functions. Note that these are merely examples; Bloom filters of different lengths and different numbers of hash functions can be used without departing from the scope of the disclosure.
For a given cache tag (for example, the tag 302 in Fig. 3), the insert function inserts (or updates) bits into the entries of the Bloom filter array 401 pointed to according to the hash algorithm. In this example, for cache tag x, the insert hash functions point to the set of target bits of the Bloom filter array 401 indicated by cache tag x (bits 7, 12, and 14). Later, when a data access request is received, the test function test(x) is called to read the entries pointed to by cache tag x from the Bloom filter array 401 and to test whether the Bloom filter array 401 includes cache tag x. Using the insert hash functions, a second cache tag y can be inserted into the entries of the Bloom filter array 401 pointed to by cache tag y (bits 2, 4, and 11), and the presence of cache tag y can be tested using the function test(y).
In this example, cache tags x and y are shown as inputs to the Bloom filter array 401. Each entry of the Bloom filter array 401 can be 0 or 1. The insert function is based on one or more hash functions, and each hash function can take a cache tag as input. The output of the insert function is the bit number corresponding to the entry pointed to in the Bloom filter array 401. Note that the present disclosure is not limited to a specific number of hash functions or to the particular hash functions used. For example, when a cache tag is received as input, the insert function may update four entries of the Bloom filter array.
Note that tags z and w are deliberately not inserted into the Bloom filter 401. When a data request with cache tag z is received, the test function test(z) for tag z is called and returns negative, accurately indicating that z is not included in the Bloom filter (because it is not in the DRAM cache). However, when a data access request with cache tag w is received, the test function test(w) is called to read the entries of the Bloom filter array 401 pointed to by cache tag w and to test the presence or absence of cache tag w in the Bloom filter array 401. In this example, the test function test(w) indicates that cache tag w is present in the Bloom filter array 401. Therefore, in this example, test(w) is an instance of a false positive. Thus, the Bloom filter can be used to definitively identify that a tag is not in the cache, but cannot be used to accurately predict that a cache tag is in the cache.
According to one embodiment, the present Bloom filter can delete selected portions or the whole of the Bloom filter array. For example, a delete function can be applied at the discretion of the cache controller to clear a portion of, or the whole of, the Bloom filter array. The delete function can be useful when a cache line is replaced and is no longer present in the DRAM cache, or when it is evicted. In this case, the delete function can clear the corresponding entries in the Bloom filter array. In the case where the Bloom filter array is not large enough, the cache controller can delete portions of the Bloom filter array to make room for new entries. In an alternative embodiment, the present Bloom filter can use a large Bloom filter array that covers the entire tag range, or the cache controller can reset the Bloom filter when it becomes full such that no further entries can be added, or when it becomes inefficient.
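The insert/test behavior described for Bloom filter array 401 can be sketched as follows. The three hash functions and the 16-bit array size are arbitrary stand-ins for illustration, not the patent's actual functions; the sketch models only the whole-array reset variant of deletion, since clearing individual tags from a plain bit array can clear bits shared with other tags.

```python
ARRAY_BITS = 16

def _hashes(tag: int):
    # Three assumed (illustrative) hash functions mapping a tag to bit positions.
    return [(tag * k + 7) % ARRAY_BITS for k in (3, 5, 11)]

class BloomFilter:
    def __init__(self):
        self.bits = [0] * ARRAY_BITS

    def insert(self, tag: int):
        # Set every bit the hash functions point to (like insert(x) in Fig. 4).
        for b in _hashes(tag):
            self.bits[b] = 1

    def test(self, tag: int) -> bool:
        # A positive result may be a false positive (cf. test(w));
        # a negative result is definitive (cf. test(z)).
        return all(self.bits[b] for b in _hashes(tag))

    def reset(self):
        # Whole-array reset, as in the alternative embodiment above.
        self.bits = [0] * ARRAY_BITS
```

A tag that was never inserted can still test positive if other insertions happen to have set all of its bits, which is exactly the false-positive case the metadata check of the DRAM cache resolves.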
Fig. 5 is a flow chart of the execution of a data request by the cache controller according to one embodiment. In response to a data access request from the host computer, the cache controller of the hybrid memory module decodes the request to obtain the request type (for example, read or write) and the host address of the requested data (501). The host address may include a tag (DRAM tag), an index, and an offset. The cache controller then decodes the index of the host address to obtain a metadata address including an MDC tag and an MDC index (502). Using the MDC index of the metadata address, the cache controller identifies a matching cache line in the metadata cache stored in the SRAM of the hybrid memory module (503). The matching cache line stores a pair of an MDC tag and a DRAM cache tag. If no matching cache line exists in the metadata cache, the cache controller requests the DRAM controller to access the metadata of the DRAM cache and determines whether a cached copy of the requested data exists in the DRAM cache (that is, a true DRAM cache hit or miss).
On the other hand, if a matching cache line is identified in the SRAM metadata cache, the cache controller determines whether the MDC tag of the matching cache line matches the MDC tag of the metadata address, and further determines whether the host tag referenced by the matching MDC tag matches the original host tag (504). If the host tags match (that is, a metadata cache hit), the cache controller determines that the requested data is cached in the DRAM cache (506), requests the DRAM controller to access the DRAM cache while skipping the access to the metadata of the DRAM cache, and obtains the requested data from the DRAM cache using the matched host tag (511).
In addition, the cache controller performs a Bloom filter test using the DRAM tag of the matching cache line (or the DRAM tag of the host address) to determine a DRAM cache hit or miss (505). If the Bloom filter test result is negative, the cache controller infers that the requested data is not cached in the DRAM cache (that is, a DRAM cache miss) (509), and requests the flash controller to access the flash memory and obtain the requested data (512). In this case, the access to the metadata of the DRAM cache can be skipped, improving the latency of the data access to the flash memory.
If there is no match in the metadata cache (that is, a metadata cache miss) and the Bloom filter test result is positive (a DRAM cache hit that may be a false positive), the cache controller requests the DRAM controller to access the metadata of the DRAM cache (507) to determine whether the tag of the host address (DRAM tag) matches an entry of the metadata of the DRAM cache (508). If the DRAM cache tags match, the cache controller determines that the requested data is cached in the DRAM cache (that is, a DRAM cache hit) (510), and requests the DRAM controller to access the requested data in the DRAM cache (511). If the DRAM cache tag match fails, the cache controller determines that the requested data is not cached in the DRAM cache (that is, a DRAM cache miss) (509), and requests the flash controller to access the requested data in the flash memory (512).
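The read path of Fig. 5 can be condensed into a short decision sketch. This is a hypothetical illustration: the three boolean inputs stand in for the metadata cache check, the Bloom filter test, and the DRAM metadata comparison performed by the respective controllers.

```python
def read_path(mdc_hit: bool, bloom_positive: bool, dram_meta_match: bool) -> str:
    if mdc_hit:
        # Steps 504/506/511: serve from DRAM, skip the DRAM metadata access.
        return "DRAM (metadata access skipped)"
    if not bloom_positive:
        # Steps 505/509/512: a negative Bloom result is definitive,
        # so go straight to flash without touching the DRAM metadata.
        return "flash (DRAM metadata access skipped)"
    # Metadata cache miss with a positive Bloom result: possibly a false
    # positive, so the DRAM metadata must be consulted (steps 507/508).
    if dram_meta_match:
        return "DRAM"      # steps 510/511: true DRAM cache hit
    return "flash"         # steps 509/512: true DRAM cache miss
```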
The present cache controller can be programmed to support either, both, or neither of the metadata cache and the Bloom filter. For example, the cache controller monitors the cache hit rate to decide whether to disable/enable the metadata cache and/or the Bloom filter. If the metadata cache hit rate is higher than a first predetermined threshold, the cache controller can disable the Bloom filter, since the Bloom filter then provides little benefit in mitigating the overhead of accessing the DRAM cache metadata. In another example, if the metadata cache hit rate is lower than a second predetermined threshold, the cache controller can disable the metadata cache, leaving only the Bloom filter. If the DRAM is not used as a cache for the flash memory, the cache controller can disable both the metadata cache and the Bloom filter.
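The enable/disable policy above can be sketched as a small configuration function. The threshold values are illustrative assumptions; the text only states that there are first and second predetermined thresholds, not their values.

```python
def configure(hit_rate: float,
              hi_thresh: float = 0.9,   # assumed first threshold
              lo_thresh: float = 0.1,   # assumed second threshold
              dram_is_cache: bool = True):
    """Return (metadata_cache_enabled, bloom_filter_enabled)."""
    if not dram_is_cache:
        return (False, False)       # DRAM not used as a flash cache
    if hit_rate > hi_thresh:
        return (True, False)        # Bloom filter adds little benefit
    if hit_rate < lo_thresh:
        return (False, True)        # keep only the Bloom filter
    return (True, True)             # use both structures
```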
According to one embodiment, the cache controller can access the Bloom filter and the metadata cache concurrently to obtain fast comparison results. According to another embodiment, the cache controller can access the Bloom filter and the metadata cache serially under low-power conditions. The cache controller can check the Bloom filter first; if the Bloom filter result is a miss, the cache controller does not activate the metadata cache. If the Bloom filter result is a hit, the cache controller can activate the metadata cache to check for a DRAM cache hit or miss. If the Bloom filter result is a hit but the metadata cache returns false, the cache controller accesses the DRAM cache tags. The order of serial access can also be reversed, that is, access in the order of metadata cache, Bloom filter, and DRAM cache tags.
According to one embodiment, a method includes: providing a hybrid memory module including a dynamic random access memory (DRAM) cache, a flash memory, and a static random access memory (SRAM) for storing a metadata cache, wherein the DRAM cache includes cached copies of data stored in the flash memory and metadata corresponding to the cached copies of the data, and wherein the metadata cache includes cached copies of a portion of the metadata of the DRAM cache; receiving, from a host computer, a data access request for data stored in the hybrid memory module; obtaining a host address by decoding the data access request, wherein the host address includes a DRAM cache tag and a DRAM cache index; obtaining a metadata address from the DRAM cache index, wherein the metadata address includes a metadata cache tag and a metadata cache index; determining a metadata cache hit based on the presence of a matching metadata cache entry in the metadata cache of the SRAM, wherein the matching metadata cache entry has a pair of the metadata cache tag and the DRAM cache tag; in the case of a metadata cache hit, obtaining the data from the DRAM cache and skipping access to the metadata of the DRAM cache; and returning the data obtained from the DRAM cache to the host computer.
The step of determining a metadata cache hit may also include: comparing the metadata cache tag of the metadata address with the metadata cache tags of one or more metadata cache entries to determine the presence of the matching metadata cache entry in the metadata cache.
The SRAM can also store a Bloom filter, and the method may also include: determining a metadata cache miss based on the absence of a matching metadata cache entry in the metadata cache of the SRAM; performing a Bloom filter test using the Bloom filter; determining a DRAM cache miss or a potential DRAM cache hit based on the result of the Bloom filter test; in the case of a DRAM cache miss, obtaining the data from the flash memory; and returning the data obtained from the flash memory to the host computer.
The comparison of the metadata cache tags and the Bloom filter test can be performed simultaneously.
The method may also include: in the case of a metadata cache miss and a potential DRAM cache hit, accessing the metadata of the DRAM cache; determining whether the data is stored in the DRAM cache based on a comparison of the DRAM tag of the host address with the metadata of the DRAM cache; in the case where a matching entry for the DRAM tag of the host address exists in the metadata of the DRAM cache, obtaining the data from the DRAM cache and returning the data obtained from the DRAM cache to the host computer; and in the case where no matching entry for the DRAM tag of the host address exists in the metadata of the DRAM cache, obtaining the data from the flash memory and returning the data obtained from the flash memory to the host computer.
The Bloom filter may include a Bloom filter array having multiple entries, and the Bloom filter test can provide a positive result or a negative result by applying hash functions to the Bloom filter array.
The method may also include deleting the Bloom filter array or resetting the Bloom filter array.
The method may also include: when the metadata cache hit rate is higher than a threshold, programming the cache controller to disable the Bloom filter.
The method may also include: when the metadata cache hit rate is lower than a threshold, programming the cache controller to disable the metadata cache.
The method may also include: under low-power conditions, serially accessing the Bloom filter and the metadata cache.
According to another embodiment, a hybrid memory module includes: a flash memory; a dynamic random access memory (DRAM) cache, wherein the DRAM cache includes cached copies of data stored in the flash memory and metadata corresponding to the cached copies of the data; a static random access memory (SRAM) for storing a metadata cache, the metadata cache including cached copies of a portion of the metadata of the DRAM cache; a memory interface for providing an interface to a host computer; a memory access controller for accessing the data stored in the DRAM cache and the flash memory; a DRAM controller for controlling access to the DRAM cache; a flash controller for controlling access to the flash memory; and a cache controller for determining the presence of a cached copy of the data requested by the host computer.
The cache controller is configured to: obtain a host address by decoding the data access request, wherein the host address includes a DRAM cache tag and a DRAM cache index; obtain a metadata address from the DRAM cache index, wherein the metadata address includes a metadata cache tag and a metadata cache index; determine a metadata cache hit based on the presence of a matching metadata cache entry in the metadata cache of the SRAM, wherein the matching metadata cache entry has a pair of the metadata cache tag and the DRAM cache tag; and, in the case of a metadata cache hit, instruct the DRAM controller to skip access to the metadata of the DRAM cache and obtain the data from the DRAM cache. The memory access controller is configured to return the data obtained from the DRAM cache to the host computer.
The cache controller may also be configured to: compare the metadata cache tag of the metadata address with the metadata cache tags of one or more metadata cache entries to determine the presence of the matching metadata cache entry in the metadata cache.
The SRAM can also store a Bloom filter, and the cache controller may also be configured to: determine a metadata cache miss based on the absence of a matching metadata cache entry in the metadata cache of the SRAM; perform a Bloom filter test using the Bloom filter; determine a DRAM cache miss or a potential DRAM cache hit based on the result of the Bloom filter test; and, in the case of a DRAM cache miss, skip access to the metadata of the DRAM cache and instruct the flash controller to obtain the data from the flash memory. The memory access controller can be configured to return the data obtained from the flash memory to the host computer.
The cache controller can perform the comparison of the metadata cache tags and the Bloom filter test simultaneously.
In the case of a metadata cache miss and a potential DRAM cache hit, the DRAM controller can be configured to: access the metadata of the DRAM cache; and determine whether the data is stored in the DRAM cache based on a comparison of the DRAM tag of the host address with the metadata of the DRAM cache. In the case where a matching entry for the DRAM tag of the host address exists in the metadata of the DRAM cache, the DRAM controller is configured to obtain the data from the DRAM cache, and the memory access controller is configured to return the data obtained from the DRAM cache to the host computer. In the case where no matching entry for the DRAM tag of the host address exists in the metadata of the DRAM cache, the flash controller is configured to obtain the data from the flash memory, and the memory access controller is configured to return the data obtained from the flash memory to the host computer.
The Bloom filter may include a Bloom filter array having multiple entries, and the Bloom filter test can provide a positive result or a negative result by applying hash functions to the Bloom filter array.
The cache controller may also be configured to delete the Bloom filter array or reset the Bloom filter array.
When the metadata cache hit rate is higher than a threshold, the cache controller can be programmed to disable the Bloom filter.
When the metadata cache hit rate is lower than a threshold, the cache controller can be programmed to disable the metadata cache.
The cache controller can be configured to: under low-power conditions, serially access the Bloom filter and the metadata cache.
The above example embodiments have been described to show various embodiments implementing systems and methods for using an SRAM metadata cache and a Bloom filter to mitigate the overhead of accessing the metadata of the DRAM cache in a hybrid memory module. Various modifications of and departures from the disclosed example embodiments will occur to those of ordinary skill in the art. The subject matter intended to be within the scope of the invention is set forth in the following claims.

Claims (20)

1. A method of reading data, comprising:
receiving, from a host computer, a data access request for data stored in a hybrid memory module, wherein the hybrid memory module includes: a dynamic random access memory (DRAM) cache; a flash memory; and a static random access memory (SRAM) for storing a metadata cache, wherein the DRAM cache includes cached copies of data stored in the flash memory and metadata corresponding to the cached copies of the data, and wherein the metadata cache includes cached copies of a portion of the metadata of the DRAM cache;
obtaining a host address by decoding the data access request, wherein the host address includes a DRAM cache tag and a DRAM cache index;
obtaining a metadata address from the DRAM cache index, wherein the metadata address includes a metadata cache tag and a metadata cache index;
determining a metadata cache hit based on the presence of a matching metadata cache entry in the metadata cache of the SRAM, wherein the matching metadata cache entry has the metadata cache tag and the DRAM cache tag;
in the case of a metadata cache hit, obtaining the data from the DRAM cache and skipping access to the metadata of the DRAM cache; and
returning the data obtained from the DRAM cache to the host computer.
2. The method of claim 1, wherein the step of determining a metadata cache hit further includes:
comparing the metadata cache tag of the metadata address with the metadata cache tags of one or more metadata cache entries to determine the presence of the matching metadata cache entry in the metadata cache.
3. The method of claim 2, wherein the SRAM also stores a Bloom filter, and the method further includes:
determining a metadata cache miss based on the absence of a matching metadata cache entry in the metadata cache of the SRAM;
performing a Bloom filter test using the Bloom filter;
determining a DRAM cache miss or a potential DRAM cache hit based on a result of the Bloom filter test;
in the case of a DRAM cache miss, obtaining the data from the flash memory; and
returning the data obtained from the flash memory to the host computer.
4. The method of claim 3, wherein the comparison of the metadata cache tags and the Bloom filter test can be performed simultaneously.
5. The method of claim 3, further including:
in the case of a metadata cache miss and a potential DRAM cache hit, accessing the metadata of the DRAM cache;
determining whether the data is stored in the DRAM cache based on a comparison of the DRAM cache tag of the host address with the metadata of the DRAM cache;
in the case where a matching entry for the DRAM cache tag of the host address exists in the metadata of the DRAM cache, obtaining the data from the DRAM cache and returning the data obtained from the DRAM cache to the host computer; and
in the case where no matching entry for the DRAM cache tag of the host address exists in the metadata of the DRAM cache, obtaining the data from the flash memory and returning the data obtained from the flash memory to the host computer.
6. The method of claim 3, wherein the Bloom filter includes a Bloom filter array having multiple entries, and the Bloom filter test provides a positive result or a negative result by applying hash functions to the Bloom filter array.
7. The method of claim 6, further comprising: deleting a part or all of the Bloom filter array, or resetting the Bloom filter array.
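Claims 6 and 7 describe the Bloom filter as an array of entries queried by hash functions, with support for deleting part or all of the array. A minimal Python sketch of such a structure follows; the array size, hash count, and method names are illustrative assumptions, not details taken from the claims:

```python
import hashlib

class BloomFilter:
    """Bit-array Bloom filter queried with k hash functions (sizes assumed)."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [0] * num_bits  # the "Bloom filter array" of claim 6

    def _positions(self, key):
        # Derive k array positions from the key by hashing (claim 6's
        # "applying hash functions to the Bloom filter array").
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def insert(self, key):
        for pos in self._positions(key):
            self.bits[pos] = 1

    def test(self, key):
        # Negative result is definitive; positive result may be a false
        # positive, i.e. only a *potential* DRAM cache hit (claim 3).
        return all(self.bits[pos] for pos in self._positions(key))

    def reset(self, start=0, end=None):
        # Delete a part or all of the array (claim 7).
        end = self.num_bits if end is None else end
        for pos in range(start, end):
            self.bits[pos] = 0
```

The asymmetry this sketch shows is exactly what claims 3 through 5 exploit: a negative test safely routes the request straight to flash, while a positive test merely permits the slower DRAM metadata check.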
8. The method of claim 1, further comprising: programming the cache controller to disable the Bloom filter when a metadata cache hit rate is higher than a threshold.
9. The method of claim 1, further comprising: programming the cache controller to disable the metadata cache when the metadata cache hit rate is lower than a threshold.
10. The method of claim 1, further comprising: serially accessing the Bloom filter and the metadata cache under a low-power condition.
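Taken together, the method claims describe a three-stage lookup: the SRAM metadata cache first, the Bloom filter next, and the DRAM-resident metadata only when the filter reports a potential hit. The flow can be sketched as follows; the container and helper names (`metadata_cache`, `dram_metadata`, and so on) are assumptions made for illustration, not the patent's interfaces:

```python
def read(host_address, metadata_cache, bloom, dram_metadata, dram_cache, flash):
    """Three-stage read path: SRAM metadata cache, Bloom filter, DRAM metadata.

    metadata_cache and dram_metadata map a DRAM cache index to the tag of the
    block cached at that index; dram_cache and flash hold the data itself.
    """
    tag, index = host_address  # DRAM cache tag and index decoded from the request
    # Stage 1: SRAM metadata cache. A hit proves the block is in the DRAM
    # cache, so the access to the DRAM-resident metadata is skipped entirely.
    if metadata_cache.get(index) == tag:
        return dram_cache[index]
    # Stage 2: Bloom filter. A negative answer is a definitive DRAM cache
    # miss, so the request goes straight to flash.
    if not bloom.test((tag, index)):
        return flash[(tag, index)]
    # Stage 3: potential DRAM cache hit. Only now touch the metadata stored
    # in DRAM, and fall back to flash if the tags do not match.
    if dram_metadata.get(index) == tag:
        return dram_cache[index]
    return flash[(tag, index)]
```

Each stage exists to avoid a slower lookup below it: the SRAM hit skips the DRAM metadata access, and the Bloom filter's negative answer skips both the DRAM metadata access and the DRAM data access.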
11. A hybrid memory module, comprising:
a flash memory;
a dynamic random access memory (DRAM) cache, wherein the DRAM cache comprises cached copies of data stored in the flash memory and metadata corresponding to the cached copies of the data;
a static random access memory (SRAM) for storing a metadata cache, the metadata cache comprising partial cached copies of the metadata of the DRAM cache;
a memory interface for providing an interface to a host computer;
a memory access controller for accessing the data stored in the DRAM cache and the flash memory;
a DRAM controller for controlling access to the DRAM cache;
a flash controller for controlling access to the flash memory; and
a cache controller for determining the presence of a cached copy of data requested from the host computer,
wherein the cache controller is configured to:
obtain a host address by decoding a data access request, wherein the host address includes a DRAM cache tag and a DRAM cache index;
obtain a metadata address from the DRAM cache index, wherein the metadata address includes a metadata cache tag and a metadata cache index;
determine a metadata cache hit based on the presence of a matching metadata cache entry in the metadata cache of the SRAM, wherein the matching metadata cache entry has the metadata cache tag and the DRAM cache tag; and
in the case of a metadata cache hit, instruct the DRAM controller to skip the access to the metadata of the DRAM cache and obtain the data from the DRAM cache,
wherein the memory access controller is configured to return the data obtained from the DRAM cache to the host computer.
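Claim 11 has the cache controller decode the host address into a DRAM cache tag and index, then derive a metadata address (metadata cache tag and index) from the DRAM cache index. A bit-slicing sketch of both decodes is shown below; all bit widths are illustrative assumptions, since the claims do not fix them:

```python
def decode_host_address(host_address, index_bits=10, offset_bits=6):
    """Split a host address into (DRAM cache tag, DRAM cache index).

    offset_bits covers the byte offset within a cache block; the widths
    are assumed, not specified by the claims.
    """
    block = host_address >> offset_bits
    index = block & ((1 << index_bits) - 1)
    tag = block >> index_bits
    return tag, index

def metadata_address_from_index(dram_cache_index, meta_index_bits=4):
    """Derive (metadata cache tag, metadata cache index) from the DRAM cache
    index, as in claim 11; the split width is again an assumed parameter."""
    meta_index = dram_cache_index & ((1 << meta_index_bits) - 1)
    meta_tag = dram_cache_index >> meta_index_bits
    return meta_tag, meta_index
```

Because the metadata address is a pure function of the DRAM cache index, the SRAM metadata cache can be probed before (or, as claims 4 and 14 note, concurrently with) any DRAM access.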
12. The hybrid memory module of claim 11, wherein the cache controller is further configured to compare the metadata cache tag of the metadata address with the metadata cache tags of one or more metadata cache entries, to determine the presence of a matching metadata cache entry in the metadata cache.
13. The hybrid memory module of claim 12,
wherein the SRAM further stores a Bloom filter, and
wherein the cache controller is further configured to:
determine a metadata cache miss based on the absence of a matching metadata cache entry in the metadata cache of the SRAM;
perform a Bloom filter test using the Bloom filter;
determine a DRAM cache miss or a potential DRAM cache hit based on a result of the Bloom filter test; and
in the case of a DRAM cache miss, skip the access to the metadata of the DRAM cache and instruct the flash controller to obtain the data from the flash memory,
wherein the memory access controller is configured to return the data obtained from the flash memory to the host computer.
14. The hybrid memory module of claim 13, wherein the cache controller performs the comparison of the metadata cache tags and the Bloom filter test concurrently.
15. The hybrid memory module of claim 13,
wherein, in the case of a metadata cache miss and a potential DRAM cache hit, the DRAM controller is configured to:
access the metadata of the DRAM cache; and
determine whether the data is stored in the DRAM cache based on a comparison of the DRAM cache tag of the host address with the metadata of the DRAM cache,
wherein, in the case where a matching entry for the DRAM cache tag of the host address is present in the metadata of the DRAM cache, the DRAM controller is configured to obtain the data from the DRAM cache, and the memory access controller is configured to return the data obtained from the DRAM cache to the host computer, and
wherein, in the case where no matching entry for the DRAM cache tag of the host address is present in the metadata of the DRAM cache, the flash controller is configured to obtain the data from the flash memory, and the memory access controller is configured to return the data obtained from the flash memory to the host computer.
16. The hybrid memory module of claim 13, wherein the Bloom filter comprises a Bloom filter array having a plurality of entries, and the Bloom filter test provides a positive result or a negative result by applying hash functions to the Bloom filter array.
17. The hybrid memory module of claim 16, wherein the cache controller is further configured to delete a part or all of the Bloom filter array, or to reset the Bloom filter array.
18. The hybrid memory module of claim 11, wherein the cache controller is programmed to disable the Bloom filter when a metadata cache hit rate is higher than a threshold.
19. The hybrid memory module of claim 11, wherein the cache controller is programmed to disable the metadata cache when the metadata cache hit rate is lower than a threshold.
20. The hybrid memory module of claim 11, wherein the cache controller is configured to serially access the Bloom filter and the metadata cache under a low-power condition.
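Claims 8-9 and 18-19 describe reprogramming the cache controller from the observed metadata cache hit rate: disabling the Bloom filter when the hit rate is high, and disabling the metadata cache itself when it is low. A hedged sketch of that policy, where the threshold values and method names are assumptions for illustration:

```python
class CacheControllerPolicy:
    """Hit-rate-driven enable/disable policy; thresholds are assumed values."""

    def __init__(self, high_threshold=0.9, low_threshold=0.1):
        self.high_threshold = high_threshold
        self.low_threshold = low_threshold
        self.bloom_enabled = True
        self.metadata_cache_enabled = True

    def reprogram(self, metadata_cache_hit_rate):
        # High hit rate: the SRAM metadata cache answers most lookups, so
        # the Bloom filter adds little and is disabled (claims 8 / 18).
        self.bloom_enabled = metadata_cache_hit_rate <= self.high_threshold
        # Low hit rate: the metadata cache is not paying for its SRAM, so
        # it is disabled and lookups rely on the filter (claims 9 / 19).
        self.metadata_cache_enabled = metadata_cache_hit_rate >= self.low_threshold
        return self.bloom_enabled, self.metadata_cache_enabled
```

In the middle band both structures stay enabled, which matches the claims' framing: the two disable rules are independent threshold comparisons rather than a single mode switch.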
CN201711136385.4A 2017-02-15 2017-11-16 Method for reading data and hybrid memory module Active CN108427647B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201762459414P 2017-02-15 2017-02-15
US62/459,414 2017-02-15
US15/587,286 US10282294B2 (en) 2017-02-15 2017-05-04 Mitigating DRAM cache metadata access overhead with SRAM metadata cache and bloom filter
US15/587,286 2017-05-04

Publications (2)

Publication Number Publication Date
CN108427647A true CN108427647A (en) 2018-08-21
CN108427647B CN108427647B (en) 2023-08-08

Family

ID=63106399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711136385.4A Active CN108427647B (en) 2017-02-15 2017-11-16 Method for reading data and hybrid memory module

Country Status (5)

Country Link
US (1) US10282294B2 (en)
JP (1) JP6916751B2 (en)
KR (1) KR102231792B1 (en)
CN (1) CN108427647B (en)
TW (1) TWI744457B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109800185A (en) * 2018-12-29 2019-05-24 上海霄云信息科技有限公司 A kind of data cache method in data-storage system
CN111666233A (en) * 2019-03-05 2020-09-15 马维尔亚洲私人有限公司 Dual interface flash memory controller with locally executed cache control
CN112084216A (en) * 2020-09-16 2020-12-15 上海宏路数据技术股份有限公司 Data query system based on bloom filter
CN114095585A (en) * 2022-01-21 2022-02-25 武汉中科通达高新技术股份有限公司 Data transmission method, device, storage medium and electronic equipment
CN116303126A (en) * 2023-03-22 2023-06-23 摩尔线程智能科技(北京)有限责任公司 Caching method, data processing method and electronic equipment
CN117217977A (en) * 2023-05-26 2023-12-12 摩尔线程智能科技(北京)有限责任公司 GPU data access processing method, device and storage medium

Families Citing this family (17)

Publication number Priority date Publication date Assignee Title
US11397687B2 (en) * 2017-01-25 2022-07-26 Samsung Electronics Co., Ltd. Flash-integrated high bandwidth memory appliance
US10402337B2 (en) * 2017-08-03 2019-09-03 Micron Technology, Inc. Cache filter
KR20210045506A (en) * 2018-09-17 2021-04-26 마이크론 테크놀로지, 인크. Cache operation in hybrid dual in-line memory modules
US10761986B2 (en) * 2018-10-23 2020-09-01 Advanced Micro Devices, Inc. Redirecting data to improve page locality in a scalable data fabric
EP3881190A1 (en) * 2018-11-12 2021-09-22 Dover Microsystems, Inc. Systems and methods for metadata encoding
TWI688859B (en) 2018-12-19 2020-03-21 財團法人工業技術研究院 Memory controller and memory page management method
KR20200092710A * 2019-01-25 2020-08-04 주식회사 리얼타임테크 Hybrid index apparatus in database management system based on heterogeneous storage
US10853165B2 (en) * 2019-02-21 2020-12-01 Arm Limited Fault resilient apparatus and method
US11537521B2 (en) 2019-06-05 2022-12-27 Samsung Electronics Co., Ltd. Non-volatile dual inline memory module (NVDIMM) for supporting dram cache mode and operation method of NVDIMM
CN110688062B (en) 2019-08-26 2021-03-30 华为技术有限公司 Cache space management method and device
EP3812892B1 (en) 2019-10-21 2022-12-07 ARM Limited Apparatus and method for handling memory load requests
TWI739227B (en) * 2019-12-03 2021-09-11 智成電子股份有限公司 System-on-chip module to avoid redundant memory access
GB2594732B (en) * 2020-05-06 2022-06-01 Advanced Risc Mach Ltd Adaptive load coalescing
CN113934361A (en) * 2020-06-29 2022-01-14 伊姆西Ip控股有限责任公司 Method, apparatus and computer program product for managing a storage system
US11914517B2 (en) 2020-09-25 2024-02-27 Advanced Micro Devices, Inc. Method and apparatus for monitoring memory access traffic
US11886291B1 (en) * 2022-07-21 2024-01-30 Dell Products L.P. Providing cache line metadata over multiple cache lines
US20240054072A1 (en) * 2022-08-10 2024-02-15 Astera Labs, Inc. Metadata-caching integrated circuit device

Citations (7)

Publication number Priority date Publication date Assignee Title
US20130227220A1 (en) * 2012-02-23 2013-08-29 Agency For Science, Technology And Research Data Storage Device and Method of Managing a Cache in a Data Storage Device
US20130290607A1 (en) * 2012-04-30 2013-10-31 Jichuan Chang Storing cache metadata separately from integrated circuit containing cache controller
CN104035887A (en) * 2014-05-22 2014-09-10 中国科学院计算技术研究所 Block device caching device and method based on simplification configuration system
US20140289467A1 (en) * 2013-03-22 2014-09-25 Applied Micro Circuits Corporation Cache miss detection filter
CN104090852A (en) * 2014-07-03 2014-10-08 华为技术有限公司 Method and equipment for managing hybrid cache
US20150019823A1 (en) * 2013-07-15 2015-01-15 Advanced Micro Devices, Inc. Method and apparatus related to cache memory
CN104809179A (en) * 2015-04-16 2015-07-29 华为技术有限公司 Device and method for accessing Hash table

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
US6920477B2 (en) * 2001-04-06 2005-07-19 President And Fellows Of Harvard College Distributed, compressed Bloom filter Web cache server
US20080155229A1 (en) * 2006-12-21 2008-06-26 Kevin Scott Beyer System and method for generating a cache-aware bloom filter
US20110276744A1 (en) 2010-05-05 2011-11-10 Microsoft Corporation Flash memory cache including for use with persistent key-value store
US8478934B2 (en) 2010-07-19 2013-07-02 Lsi Corporation Managing extended RAID caches using counting bloom filters
US20130173853A1 (en) 2011-09-26 2013-07-04 Nec Laboratories America, Inc. Memory-efficient caching methods and systems
US8868843B2 (en) * 2011-11-30 2014-10-21 Advanced Micro Devices, Inc. Hardware filter for tracking block presence in large caches
US9389965B1 (en) 2012-03-12 2016-07-12 Emc Corporation System and method for improving performance of backup storage system with future access prediction
US9524235B1 (en) 2013-07-25 2016-12-20 Sandisk Technologies Llc Local hash value generation in non-volatile data storage systems
US9396112B2 (en) * 2013-08-26 2016-07-19 Advanced Micro Devices, Inc. Hierarchical write-combining cache coherence
US10268584B2 (en) * 2014-08-20 2019-04-23 Sandisk Technologies Llc Adaptive host memory buffer (HMB) caching using unassisted hinting
CA2876466C (en) 2014-12-29 2022-07-05 Ibm Canada Limited - Ibm Canada Limitee Scan optimization using bloom filter synopsis
KR102403202B1 (en) 2015-03-13 2022-05-30 삼성전자주식회사 Memory system and operating method having meta data manager


Non-Patent Citations (1)

Title
JIANG GUOSONG: "Research on an efficient and scalable fine-grained cache management hybrid storage", no. 08 *

Cited By (9)

Publication number Priority date Publication date Assignee Title
CN109800185A (en) * 2018-12-29 2019-05-24 上海霄云信息科技有限公司 A kind of data cache method in data-storage system
CN109800185B (en) * 2018-12-29 2023-10-20 上海霄云信息科技有限公司 Data caching method in data storage system
CN111666233A (en) * 2019-03-05 2020-09-15 马维尔亚洲私人有限公司 Dual interface flash memory controller with locally executed cache control
CN112084216A (en) * 2020-09-16 2020-12-15 上海宏路数据技术股份有限公司 Data query system based on bloom filter
CN114095585A (en) * 2022-01-21 2022-02-25 武汉中科通达高新技术股份有限公司 Data transmission method, device, storage medium and electronic equipment
CN114095585B (en) * 2022-01-21 2022-05-20 武汉中科通达高新技术股份有限公司 Data transmission method, device, storage medium and electronic equipment
CN116303126A (en) * 2023-03-22 2023-06-23 摩尔线程智能科技(北京)有限责任公司 Caching method, data processing method and electronic equipment
CN116303126B (en) * 2023-03-22 2023-09-01 摩尔线程智能科技(北京)有限责任公司 Caching method, data processing method and electronic equipment
CN117217977A (en) * 2023-05-26 2023-12-12 摩尔线程智能科技(北京)有限责任公司 GPU data access processing method, device and storage medium

Also Published As

Publication number Publication date
JP6916751B2 (en) 2021-08-11
CN108427647B (en) 2023-08-08
US10282294B2 (en) 2019-05-07
KR102231792B1 (en) 2021-03-25
TW201832086A (en) 2018-09-01
TWI744457B (en) 2021-11-01
KR20180094469A (en) 2018-08-23
US20180232310A1 (en) 2018-08-16
JP2018133086A (en) 2018-08-23

Similar Documents

Publication Publication Date Title
CN108427647A (en) Method of reading data and hybrid memory module
US7426626B2 (en) TLB lock indicator
US10248572B2 (en) Apparatus and method for operating a virtually indexed physically tagged cache
KR102319809B1 (en) A data processing system and method for handling multiple transactions
CN109416666A (en) Caching with compressed data and label
CN107402889A (en) Retrieve data method, data storage method and data de-duplication module
US6915396B2 (en) Fast priority determination circuit with rotating priority
US20160314069A1 (en) Non-Temporal Write Combining Using Cache Resources
US20020169935A1 (en) System of and method for memory arbitration using multiple queues
US9323675B2 (en) Filtering snoop traffic in a multiprocessor computing system
US9208082B1 (en) Hardware-supported per-process metadata tags
US20140047175A1 (en) Implementing efficient cache tag lookup in very large cache systems
CN106201331A (en) For writing method and apparatus and the storage media of data
US20120173843A1 (en) Translation look-aside buffer including hazard state
US11030115B2 (en) Dataless cache entry
US7356650B1 (en) Cache apparatus and method for accesses lacking locality
EP2790107A1 (en) Processing unit and method for controlling processing unit
US8688919B1 (en) Method and apparatus for associating requests and responses with identification information
US20140297961A1 (en) Selective cache fills in response to write misses
US20100257319A1 (en) Cache system, method of controlling cache system, and information processing apparatus
JP2016057763A (en) Cache device and processor
US11188464B2 (en) System and method for self-invalidation, self-downgrade cachecoherence protocols
US9251070B2 (en) Methods and apparatus for multi-level cache hierarchies
US20140006747A1 (en) Systems and methods for processing instructions when utilizing an extended translation look-aside buffer having a hybrid memory structure
US7519778B2 (en) System and method for cache coherence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant