WO2015169245A1 - Data caching method, cache, and computer system - Google Patents
Data caching method, cache, and computer system
- Publication number
- WO2015169245A1 (PCT/CN2015/078502)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- cache
- cache line
- memory
- line
- data
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0868—Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
- G06F12/122—Replacement control using replacement algorithms of the least frequently used [LFU] type, e.g. with individual count value
- G06F12/128—Replacement control using replacement algorithms adapted to multidimensional cache systems, e.g. set-associative, multicache, multiset or multilevel
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0629—Configuration or reconfiguration of storage systems
- G06F3/0631—Configuration or reconfiguration of storage systems by allocating resources to storage systems
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/065—Replication mechanisms
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
- G06F3/068—Hybrid storage device
- G06F3/0683—Plurality of storage devices
- G06F3/0685—Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/60—Details of cache memory
- G06F2212/604—Details relating to cache allocation
Description
- Embodiments of the present invention relate to storage technologies, and in particular, to a data caching method, a cache, and a computer system.
- In a conventional computer system, dynamic random access memory (Dynamic Random Access Memory, hereinafter referred to as DRAM) is generally used as the memory.
- Non-volatile memory (Non-Volatile Memory, hereinafter referred to as NVM) can be used instead of DRAM as the memory of a computer system, to meet applications' demand for large capacity and low power consumption.
- However, NVM has longer read and write latency than DRAM. Given the respective advantages and disadvantages of DRAM and NVM, the prior art further adopts a hybrid memory composed of DRAM and NVM, in order to provide large-capacity, low-power, high-performance memory for applications.
- Embodiments of the present invention provide a data caching method, a cache, and a computer system.
- According to a first aspect, an embodiment of the present invention provides a data caching method, where the method is performed by a cache (Cache), and includes:
- determining, according to the historical access frequency of each cache line (Cache line) and the type of memory corresponding to each Cache line, whether a Cache line satisfying a first preset condition exists in the Cache, where a Cache line satisfying the first preset condition is a Cache line whose historical access frequency is lower than a preset frequency and which corresponds to memory of the dynamic random access memory (DRAM) type, the memory including DRAM-type memory and non-volatile memory (NVM)-type memory;
- selecting, from the Cache lines that satisfy the first preset condition, a first Cache line to be replaced; and
- sending, by the Cache, the to-be-accessed data to the CPU.
- According to a second aspect, an embodiment of the present invention provides a cache (Cache), including:
- a receiving module configured to receive a data access request sent by the CPU, where the data access request includes an access address
- a hit determining module configured to determine, according to the access address, whether data to be accessed is cached in the Cache
- a replacement determining module configured to determine, in the case that the to-be-accessed data is not cached in the Cache, according to the historical access frequency of each cache line (Cache line) and the type of memory corresponding to each Cache line, whether a Cache line satisfying a first preset condition exists in the Cache, where a Cache line satisfying the first preset condition is a Cache line whose historical access frequency is lower than a preset frequency and which corresponds to memory of the dynamic random access memory (DRAM) type, the memory including DRAM-type memory and non-volatile memory (NVM)-type memory; and configured to select, in the case that a Cache line satisfying the first preset condition exists in the Cache, a first Cache line to be replaced from the Cache lines satisfying the first preset condition;
- a reading module configured to read the to-be-accessed data from the memory according to the access address
- a replacement module configured to replace the first cache line with a second cache line, where the second cache line includes the access address and the to-be-accessed data;
- a sending module configured to send the to-be-accessed data to the CPU.
- According to a third aspect, an embodiment of the present invention provides a computer system, including: a processor, a hybrid memory, and the cache (Cache) according to the foregoing second aspect, where the hybrid memory includes a DRAM and an NVM, and the processor, the hybrid memory, and the Cache are connected through a bus.
- When an access request misses, the Cache needs to determine a Cache line to be replaced. Because Cache lines corresponding to DRAM-type memory are preferentially replaced, the amount of data from the NVM cached in the Cache can be increased, so that access requests for data stored in the NVM can, as far as possible, find the corresponding data in the Cache. This reduces the number of reads that go to the NVM, thereby reducing the delay of reading data from the NVM and effectively improving access efficiency.
- FIG. 1 is a schematic structural diagram of a computer system according to an embodiment of the present invention
- FIG. 2 is a flowchart of a data caching method according to an embodiment of the present invention.
- FIG. 3 is a schematic structural diagram of a cache according to an embodiment of the present invention.
- FIG. 1 is a schematic structural diagram of a computer system according to an embodiment of the present invention.
- As shown in FIG. 1, the computer system includes a processor 11 and a hybrid memory (Hybrid Memory) 12. The processor 11 may include a CPU 111, a cache (Cache) 112, and a memory controller (Memory Controller) 113; the Hybrid Memory 12 may include a DRAM 121 and an NVM 122. The processor 11, the Hybrid Memory 12, and the Cache 112 are connected by a bus.
- The Hybrid Memory 12 and the Memory Controller 113 can be connected via a memory bus (Memory Bus) 13.
- the CPU can issue a data access request to the Cache, which contains the access address.
- The Cache caches some of the data in the Hybrid Memory to improve response speed. Therefore, the Cache first determines, according to the access address, whether the data requested by the CPU is cached in the Cache; in other words, the Cache first determines, according to the access address, whether the request hits in the Cache. When the Cache hits, that is, when it is determined that the to-be-accessed data is cached in the Cache, the Cache may directly return the requested data to the CPU.
- When the Cache misses, the data access request is sent to the Hybrid Memory through the Memory Controller, to read the data requested by the CPU from the Hybrid Memory.
- The Cache needs to continuously update its cached content according to access patterns during data access, to meet ever-changing access requirements. Specifically, when a data access hits in the Cache, no cache-line replacement is performed. When the current data access misses in the Cache, the Cache needs to determine, from the currently cached cache lines (Cache lines), a Cache line to be replaced, and replace that Cache line with a new Cache line read from the memory.
- The Cache line is the minimum unit of operation of the Cache Controller. When the Cache Controller writes data to the memory, it writes a whole line of data at Cache-line granularity; when the Cache Controller reads data from the memory, it also reads at Cache-line granularity. In this document, a Cache line may also refer to the data of one Cache line.
- "Replacing a Cache line" in the embodiments of the present invention refers to replacing the data of one Cache line in the Cache with the data of one Cache line read from the memory.
- In the prior art, the Cache searches the currently cached Cache lines for the Cache line with the lowest access frequency, and determines that Cache line as the Cache line to be replaced. The Cache does not sense whether the memory type corresponding to the Cache line to be replaced is the DRAM type or the NVM type; that is, it does not sense whether the Cache line to be replaced is derived from DRAM or from NVM.
- The NVM includes, but is not limited to: phase change memory (Phase Change Memory, hereinafter referred to as PCM), spin transfer torque magnetic random access memory (Spin Transfer Torque-Magnetic Random Access Memory, hereinafter referred to as STT-MRAM), and resistive random access memory (Resistive Random Access Memory, hereinafter referred to as RRAM).
- The characteristics of DRAM and typical NVMs are compared below:

  | | DRAM | PCM | STT-MRAM | RRAM |
  |---|---|---|---|---|
  | Read delay | ~10ns | 12ns | 35ns | ~50ns |
  | Write delay | ~10ns | 100ns | 35ns | ~0.3ns |
  | Write times | >1E16 | 1.00E+09 | >1E12 | 1.00E+12 |
  | Retention time | 64ms | >10y | >10y | >10y |
- Generally, the storage capacity of the NVM is larger than that of the DRAM and its power consumption is lower; however, the read and write latency of the NVM is greater than that of the DRAM, and the NVM has a limit on the number of writes.
- In the prior art, if the memory type corresponding to the Cache line to be replaced is the NVM type, that Cache line will be deleted from the Cache, and subsequent access requests for its data must be served from the NVM in the Hybrid Memory. Since the read and write latency of the NVM is larger than that of the DRAM, this inevitably introduces access delay and cannot meet applications' strict requirements on access latency.
- In the embodiments of the present invention, when the Cache determines the Cache line to be replaced, it considers not only the historical access frequency of each currently cached Cache line but also the memory type corresponding to each Cache line, and preferentially replaces Cache lines whose memory type is the DRAM type.
- A Cache line whose memory type is the DRAM type is a Cache line whose data is stored in the DRAM portion of the memory; a Cache line whose memory type is the NVM type is a Cache line whose data is stored in the NVM portion of the memory.
- FIG. 2 is a flowchart of a data caching method according to an embodiment of the present invention.
- The method may be performed by a Cache, and specifically by the Cache Controller in the Cache.
- the method in this embodiment may include:
- S201: The Cache receives a data access request sent by the CPU, where the data access request includes an access address.
- S202: The Cache determines, according to the access address, whether the to-be-accessed data is cached in the Cache; if so, S207 is executed; if not, S203 is executed.
- Specifically, the CPU may receive a data access request sent by an application, and then send the data access request to the Cache. After receiving the request, the Cache can compare the access address requested by the CPU with the address of each cached Cache line to determine whether the requested data is cached in the Cache, that is, whether it is a hit.
- The specific comparison depends on the mapping policy of the Cache. If the mapping policy is fully associative, the Cache searches and compares across the entire Cache. If the mapping policy is not fully associative but set-associative, the Cache can determine, according to the index bits of the access address, the set in which the access address is located in the Cache, and further determine, according to the tag bits in the access address, whether the access address is present in that set. If it is present, the Cache may determine, according to the valid bit, whether the cached data is valid; if the data is valid, the corresponding data can be found according to the data offset in the access address and returned to the CPU.
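The set-associative lookup described above can be sketched as follows. This is a minimal illustration; the line size (64 bytes) and set count (256), and the dict-based line representation, are assumptions for the example, not values taken from the patent.

```python
# Decompose a physical address into tag / set index / block offset
# for a set-associative cache lookup. Line size and set count are
# example values, not specified by the patent.
LINE_SIZE = 64      # bytes per cache line -> 6 offset bits
NUM_SETS = 256      # sets in the cache    -> 8 index bits

OFFSET_BITS = LINE_SIZE.bit_length() - 1
INDEX_BITS = NUM_SETS.bit_length() - 1

def split_address(addr):
    offset = addr & (LINE_SIZE - 1)
    index = (addr >> OFFSET_BITS) & (NUM_SETS - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

def lookup(cache_sets, addr):
    """cache_sets: list of sets; each set is a list of dicts with
    'tag', 'valid', and 'data' (the bytes of one line)."""
    tag, index, offset = split_address(addr)
    for line in cache_sets[index]:
        if line['valid'] and line['tag'] == tag:
            return line['data'][offset]   # hit: return the byte
    return None                           # miss
```

For the access address 0x08000000 used in the examples later, `split_address` yields tag 8192, set index 0, offset 0 under these assumed widths.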
- S203: The Cache determines, according to the historical access frequency of each cache line (Cache line) and the type of memory corresponding to each Cache line, whether a Cache line satisfying the first preset condition exists in the Cache; if yes, S204 is executed; otherwise, S208 is executed.
- A Cache line satisfying the first preset condition is a Cache line whose historical access frequency is lower than a preset frequency and which corresponds to memory of the dynamic random access memory (DRAM) type, where the memory includes DRAM-type memory and non-volatile memory (NVM)-type memory.
- On a miss, the Cache needs to determine one Cache line from the currently cached Cache lines as the Cache line to be replaced; the determined Cache line to be replaced is referred to as the first Cache line.
- The historical access frequency of a Cache line represents the access heat of the corresponding cached data, and the memory type corresponding to a Cache line indicates whether the Cache line is derived from DRAM or from NVM.
- The policy by which the Cache determines the first Cache line is: on the basis of historical access frequency, replace a Cache line whose memory type is the DRAM type whenever possible, that is, preferentially replace Cache lines whose memory type is the DRAM type.
- Specifically, the Cache may select, from all its cached Cache lines, several Cache lines whose historical access frequency is lower than the preset frequency, and then, according to the memory type corresponding to each of these Cache lines, determine a Cache line corresponding to DRAM-type memory as the first Cache line to be replaced. If these Cache lines contain two or more Cache lines whose memory type is DRAM, the Cache line among them with the lowest historical access frequency may be determined as the first Cache line to be replaced.
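The selection step above can be sketched as follows. This is a minimal illustration; the field names (`freq`, `mem_type`) and the list representation are assumptions for the example.

```python
# Pick the Cache line to replace: among lines whose historical access
# frequency is below the preset threshold, prefer DRAM-backed lines,
# and among those take the one with the lowest access frequency.
def pick_victim(cache_lines, preset_freq):
    """cache_lines: list of dicts with 'freq' (historical access
    frequency) and 'mem_type' ('DRAM' or 'NVM')."""
    cold = [l for l in cache_lines if l['freq'] < preset_freq]
    dram_cold = [l for l in cold if l['mem_type'] == 'DRAM']
    if dram_cold:
        # two or more DRAM candidates: take the least frequently used
        return min(dram_cold, key=lambda l: l['freq'])
    return None  # no line satisfies the first preset condition

lines = [
    {'freq': 1, 'mem_type': 'NVM'},
    {'freq': 2, 'mem_type': 'DRAM'},
    {'freq': 3, 'mem_type': 'DRAM'},
    {'freq': 9, 'mem_type': 'DRAM'},
]
victim = pick_victim(lines, preset_freq=5)
```

Note that the NVM-backed line is the coldest overall, but the DRAM-backed line with frequency 2 is chosen: the memory type takes priority over raw access frequency.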
- S204: The Cache reads the to-be-accessed data from the memory according to the access address.
- On a miss, the Cache reads the to-be-accessed data from the memory according to the access address; the to-be-accessed data may be stored in the DRAM or in the NVM.
- S205: The Cache replaces the first Cache line with a second Cache line, where the second Cache line includes the access address and the to-be-accessed data.
- For ease of description, the data of one Cache line read out from the memory is referred to as the second Cache line.
- Replacing the first Cache line with the second Cache line means that the data read from the memory is cached in the Cache, and the data of the first Cache line is deleted or written back into the memory.
- In this embodiment, a flag bit may be added to each Cache line in the Cache, used to identify whether the memory type corresponding to the Cache line is the DRAM type or the NVM type. When caching the second Cache line, the Cache may record the type of the memory corresponding to the second Cache line according to the type of the storage medium in the memory pointed to by the access address, that is, set the flag bit. The flag bit can be a single bit: if the bit is 1, the corresponding memory type is the DRAM type; if the bit is 0, the corresponding memory type is the NVM type.
- If the Modify bit of the first Cache line to be replaced is clean, the data in the first Cache line has not been modified and is consistent with the data stored in the memory; the data of the first Cache line therefore does not need to be written back into the memory and can be directly deleted. If the Modify bit of the first Cache line to be replaced is dirty, the data in the first Cache line has been modified and is inconsistent with the data stored in the memory; the data of the first Cache line then needs to be written back to the memory first, and the first Cache line is deleted afterwards.
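The replacement step, including the memory-type flag bit and the Modify-bit write-back rule, can be sketched as follows. The field names and the dict-based memory interface are assumptions for the example.

```python
# Replace the victim (first Cache line) with the newly read line
# (second Cache line): write dirty data back to memory first, then
# install the new line and record its memory-type flag bit
# (1 = DRAM type, 0 = NVM type).
def replace_line(cache, victim_idx, addr, data, mem_type_bit, memory):
    victim = cache[victim_idx]
    if victim is not None and victim['modify'] == 'dirty':
        # inconsistent with memory: write back before deleting
        memory[victim['addr']] = victim['data']
    cache[victim_idx] = {
        'addr': addr,
        'data': data,
        'modify': 'clean',        # freshly read, matches memory
        'mem_type': mem_type_bit, # flag bit: 1 DRAM, 0 NVM
    }

memory = {0x1000: b'old'}
cache = [{'addr': 0x1000, 'data': b'new', 'modify': 'dirty',
          'mem_type': 1}]
replace_line(cache, 0, 0x2000, b'fill', 1, memory)
# the dirty victim was written back before being replaced
```

A clean victim would simply be overwritten, saving the write-back; this is exactly why clean lines are cheaper to evict.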
- S207: The Cache sends the to-be-accessed data to the CPU, and the procedure ends.
- After reading the to-be-accessed data from the memory, the Cache can send it to the CPU.
- In this embodiment, when the Cache needs to determine the Cache line to be replaced, it considers not only the historical access frequency of each Cache line but also the memory type corresponding to each Cache line, so that Cache lines corresponding to the DRAM memory type can be preferentially replaced. This reduces the amount of DRAM-backed data cached in the Cache, so the Cache can cache more of the data stored in the NVM, and access requests for data stored in the NVM can, as far as possible, find the corresponding data in the Cache. This reduces reads that go to the NVM, thereby reducing the delay of reading data from the NVM and effectively improving access efficiency.
- The above embodiment can obtain the historical access frequency of each Cache line by means of a Least Recently Used (LRU) linked list. The LRU linked list records the Cache lines in order of access frequency, from low to high.
- Correspondingly, the Cache determining the first Cache line may be: from the Cache lines corresponding to DRAM-type memory among the first M Cache lines of the LRU linked list, selecting the first one as the first Cache line. Here, the "first one" refers to the frontmost Cache line, in LRU order, among the Cache lines corresponding to DRAM-type memory within the first M Cache lines of the LRU list.
- If M is set large, the probability that a Cache line of the DRAM type is replaced can be increased; however, the value of M cannot be set too large, otherwise data stored in the DRAM will not be able to enter the Cache. Those skilled in the art can set the value of M according to requirements.
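Selecting the victim from the first M entries of the LRU linked list can be sketched as follows; the list-of-dicts representation is an assumption for the example.

```python
# The LRU list orders lines from lowest to highest access heat.
# Scan only its first M entries and take the frontmost DRAM-type line.
def pick_from_lru(lru_list, m):
    """lru_list: Cache lines ordered coldest-first; each entry is a
    dict with a 'mem_type' field ('DRAM' or 'NVM')."""
    for line in lru_list[:m]:
        if line['mem_type'] == 'DRAM':
            return line   # first DRAM-backed line in the window
    return None           # no DRAM line among the first M entries

lru = [{'id': 0, 'mem_type': 'NVM'},
       {'id': 1, 'mem_type': 'NVM'},
       {'id': 2, 'mem_type': 'DRAM'},
       {'id': 3, 'mem_type': 'DRAM'}]
assert pick_from_lru(lru, m=3)['id'] == 2
assert pick_from_lru(lru, m=2) is None  # window too small: fall through
```

The second assertion shows the trade-off described above: a small M protects NVM lines but can fail to find any DRAM candidate, in which case the method falls through to the second preset condition (S208).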
- S208: The Cache determines whether a Cache line satisfying a second preset condition exists in the Cache; if so, S209 is executed; otherwise, S210 is executed.
- A Cache line satisfying the second preset condition is a Cache line whose historical access frequency is lower than the preset frequency, which corresponds to NVM-type memory, and whose data has not been modified.
- If no Cache line satisfying the first preset condition exists, the Cache may have to replace a Cache line whose memory type is NVM.
- If the Modify bit of a Cache line is clean, the data in the Cache line has not been modified and is consistent with the data stored in the memory, so the data of the Cache line does not need to be written back to the memory at replacement time. If the Modify bit of a Cache line is dirty, the data in the Cache line has been modified and is inconsistent with the data stored in the memory; at replacement time, the data of the Cache line needs to be written back to the memory.
- Moreover, the NVM has a write limit. Therefore, when a Cache line whose memory type is NVM has to be replaced, a Cache line whose Modify bit is clean can be preferentially replaced, thus reducing the number of NVM writes.
- S209: The Cache selects a Cache line satisfying the second preset condition as the first Cache line, and S205 is executed. Specifically, the Cache may proceed as follows: from the first N Cache lines of the LRU linked list, among the Cache lines that correspond to NVM-type memory and whose Modify bit is clean, the first one is selected as the first Cache line.
- The values of M and N can be tuned according to application behavior. For Cache lines of different memory types, the replacement cost has the relationship DRAM < NVM(clean) < NVM(dirty); therefore, the residence time in the Cache of Cache lines derived from the NVM can be appropriately extended, and it is generally possible to set N ≤ M.
- S210: The Cache selects the Cache line with the lowest historical access frequency as the first Cache line, and S205 is executed.
- Specifically, the Cache can determine the Cache line at the head of the LRU list as the first Cache line. In this case, the first Cache line so determined is a Cache line whose memory type is the NVM type and whose Modify bit is dirty.
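Putting S203, S208, and S210 together, the complete victim-selection policy can be sketched as follows; the data layout is an assumption for the example.

```python
# Three-tier victim selection over the coldest-first LRU list:
# 1) first DRAM-type line among the first M entries
#    (first preset condition, S203);
# 2) else first clean NVM-type line among the first N entries
#    (second preset condition, S208);
# 3) else the head of the LRU list, i.e. the line with the lowest
#    historical access frequency (S210).
def select_victim(lru_list, m, n):
    for line in lru_list[:m]:
        if line['mem_type'] == 'DRAM':
            return line
    for line in lru_list[:n]:
        if line['mem_type'] == 'NVM' and line['modify'] == 'clean':
            return line
    return lru_list[0]

lru = [{'id': 0, 'mem_type': 'NVM', 'modify': 'dirty'},
       {'id': 1, 'mem_type': 'NVM', 'modify': 'clean'},
       {'id': 2, 'mem_type': 'DRAM', 'modify': 'dirty'}]
assert select_victim(lru, m=3, n=2)['id'] == 2  # DRAM line wins
assert select_victim(lru, m=0, n=2)['id'] == 1  # else clean NVM line
assert select_victim(lru, m=0, n=0)['id'] == 0  # else LRU head
```

The three assertions exercise the three tiers in order: DRAM first, then clean NVM, then the plain least-frequently-accessed fallback.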
- In addition, in S205, the Cache may further record the type of the memory corresponding to the second Cache line according to the type of the storage medium in the memory pointed to by the access address. Specifically, the Cache can obtain the type of the memory corresponding to a Cache line in the following two implementation manners:
- Method 1: determine, according to the address range in the memory to which the access address belongs, the memory type corresponding to the second Cache line, and record it.
- This method is applicable when the physical addresses in the Hybrid Memory are laid out contiguously, for example, the first n GB is DRAM and the last n GB is NVM.
- The Cache can determine whether the access address belongs to the address range of the DRAM or to the address range of the NVM. If the access address belongs to the address range of the DRAM, the Cache records the memory type corresponding to the second Cache line as the DRAM type; if the access address belongs to the address range of the NVM, the Cache records the memory type corresponding to the second Cache line as the NVM type.
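For the layout used in the examples below (first 2GB DRAM, last 2GB NVM), Method 1 amounts to a single range check. This is a minimal sketch; the boundary constant comes from that example configuration, not from the claims.

```python
# Method 1: classify an access address by the physical address range.
# Layout taken from the worked examples: first 2GB DRAM, rest NVM.
DRAM_BOUNDARY = 2 * 1024 ** 3  # 0x80000000, first NVM address

def memory_type(addr):
    return 'DRAM' if addr < DRAM_BOUNDARY else 'NVM'

# 0x08000000 lies within the first 2GB, so its line is DRAM-backed
assert memory_type(0x08000000) == 'DRAM'
assert memory_type(0x80000000) == 'NVM'
```

The returned type is what the Cache records in the second Cache line's flag bit (DRAM type as 1, NVM type as 0).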
- Method 2: determine, according to information fed back by the Memory Controller, the memory type corresponding to the second Cache line, and record it.
- Specifically, a mapping table may be stored in the Memory Controller; the mapping table records the address range of the DRAM or the address range of the NVM, or may record both the address range of the DRAM and the address range of the NVM. These address ranges may be contiguous or non-contiguous. The Memory Controller can determine the storage location of the access address in the Hybrid Memory according to the access address and the mapping table.
- A bit can be added to the data exchanged between the Memory Controller and the Cache, and the Memory Controller can report the storage location of the access address in the Hybrid Memory to the Cache through this added bit.
- Specifically, when the Cache misses, it sends a data read request to the Memory Controller, where the data read request includes the access address. The Memory Controller reads the to-be-accessed data from the Hybrid Memory according to the request, and obtains the storage location of the access address in the Hybrid Memory according to the access address and the mapping table. The Memory Controller then sends a data read response to the Cache, where the data read response includes the to-be-accessed data and the storage location of the access address in the Hybrid Memory; correspondingly, the Cache records the memory type corresponding to the second Cache line according to the storage location contained in the data read response.
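Method 2 can be sketched as a read request/response exchange in which the Memory Controller consults its mapping table and returns one extra bit alongside the data. The function name, response structure, and mapping-table contents are assumptions for the example.

```python
# Method 2: the Memory Controller holds the mapping table and returns
# the memory type as one added bit in the data read response.
DRAM_RANGES = [(0x00000000, 0x80000000)]  # example mapping table

def controller_read(memory, addr):
    """Serve a data read request; memory is a dict addr -> bytes."""
    data = memory.get(addr, b'\x00')
    in_dram = any(lo <= addr < hi for lo, hi in DRAM_RANGES)
    # the response carries the data plus the added memory-type bit
    return {'addr': addr, 'data': data, 'dram_bit': 1 if in_dram else 0}

memory = {0x08000000: b'hello'}
resp = controller_read(memory, 0x08000000)
# the Cache sets the second Cache line's flag bit from resp['dram_bit']
```

Unlike Method 1, the Cache itself needs no knowledge of the address layout, so this also works when the DRAM and NVM ranges are non-contiguous: only the controller's table changes.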
- In the following examples, assume that the first 2GB of the Hybrid Memory is DRAM and the last 2GB is NVM, the Cache mapping policy is fully associative, the CPU requests data from the Cache with an access address of 0x08000000 (located within the first 1GB of the Hybrid Memory), and the data bus is 32 bits wide.
- Example 1: The Cache searches locally and does not find the corresponding data, that is, a miss occurs. The Cache examines the first five Cache lines of the LRU list (M = 5) and determines, according to their flag bits, that no Cache line whose memory type is DRAM is among them. The Cache then examines the Modify bits of the first three Cache lines of the LRU list (N = 3) and finds that the Modify bit of the second Cache line is 0, which means clean; that Cache line is therefore the first Cache line to be replaced. The Cache reads the Cache line containing the data at access address 0x08000000 into the Cache to replace the first Cache line, and returns the read data to the CPU.
- The Cache can determine from the access address (0x08000000) that the data is stored in the DRAM, so the newly read Cache line, that is, the second Cache line, is added to the tail of the LRU list, with its flag bit set to 1 (characterizing that the second Cache line is derived from DRAM) and its Modify bit set to 0 (characterizing that the data of the second Cache line has not been modified).
- Example 2: The Cache searches locally and does not find the corresponding data, that is, a miss occurs. The Cache examines the first five Cache lines of the LRU list and determines, according to their flag bits, a Cache line whose memory type is DRAM; that Cache line is the first Cache line to be replaced. Because the Modify bit of this DRAM-type Cache line is 1 (characterizing that the data of the first Cache line has been modified), the Cache first writes the data of the first Cache line back to memory; then the Cache reads the Cache line containing the data at access address 0x08000000 into the Cache to replace the first Cache line, and returns the read data to the CPU.
- The Cache can determine from the access address (0x08000000) that the data is stored in the DRAM, so the newly read Cache line, that is, the second Cache line, is added to the tail of the LRU list, with its flag bit set to 1 (characterizing that the second Cache line is derived from DRAM) and its Modify bit set to 0 (characterizing that the data of the second Cache line has not been modified).
- When determining the Cache line to be replaced, the Cache preferentially replaces a Cache line whose memory type is DRAM, thereby keeping Cache lines whose memory type is NVM in the Cache as long as possible and avoiding the longer latency of reading data from the NVM.
- When no replaceable DRAM-type Cache line exists and a Cache line whose memory type is NVM must be replaced,
- the Cache preferentially replaces an NVM-type Cache line whose Modify bit is clean, thereby reducing the number of writes to the NVM and extending the service life of the memory.
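The preference order just described can be sketched as a small Python model. The window sizes m=5 and n=3 come from the examples above and are not values fixed by the patent; the field names are assumptions:

```python
from dataclasses import dataclass

DRAM, NVM = 0, 1  # assumed encoding of the location bit

@dataclass
class Line:
    location: int  # 0 = backed by DRAM, 1 = backed by NVM
    modify: int    # 0 = clean, 1 = dirty

def choose_victim(lru, m=5, n=3):
    """Select the first Cache line (the victim) from an LRU list whose
    index 0 is the least recently used entry:
      1. prefer a DRAM-backed line among the first m lines;
      2. otherwise a clean NVM-backed line among the first n lines;
      3. otherwise fall back to the line at the front of the LRU list."""
    lines = list(lru)
    for line in lines[:m]:          # first preset condition
        if line.location == DRAM:
            return line
    for line in lines[:n]:          # second preset condition
        if line.location == NVM and line.modify == 0:
            return line
    return lines[0]                 # frontmost line of the LRU list
```

In this model, Example 1 (no DRAM line among the first five) selects the clean NVM line in second position, while Example 2 selects the DRAM line found among the first five even though it is dirty and must be written back before replacement.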
- FIG. 3 is a schematic structural diagram of a cache (Cache) according to an embodiment of the present invention. As shown in FIG. 3, the Cache in this embodiment may include:
- a receiving module 31, configured to receive a data access request sent by the CPU, where the data access request contains an access address;
- a hit determining module 32, configured to determine, according to the access address, whether data to be accessed is cached in the Cache;
- a replacement determining module 33, configured to: when it is determined that the data to be accessed is not cached in the Cache, determine, according to the historical access frequency of the Cache lines in the Cache and the type of the memory corresponding to each Cache line, whether a Cache line satisfying a first preset condition exists in the Cache, where a Cache line satisfying the first preset condition is a Cache line whose historical access frequency is lower than a preset frequency and which corresponds to dynamic random access memory (DRAM)-type memory, the memory including DRAM-type memory and non-volatile memory (NVM)-type memory; and, when it is determined that a Cache line satisfying the first preset condition exists in the Cache, select a first Cache line to be replaced from the Cache lines satisfying the first preset condition;
- a reading module 34, configured to read the data to be accessed from the memory according to the access address;
- a replacement module 35, configured to replace the first Cache line with a second Cache line, where the second Cache line contains the access address and the data to be accessed;
- a sending module 36, configured to send the data to be accessed to the CPU.
- The replacement determining module 33 is specifically configured to: determine whether a Cache line corresponding to DRAM-type memory exists among the first M Cache lines of the least recently used (LRU) list of the Cache, where the first M Cache lines of the LRU list are Cache lines whose historical access frequency is lower than the preset frequency; and, when such a Cache line exists, select the first Cache line corresponding to DRAM-type memory among the determined first M Cache lines as the first Cache line.
- The replacement determining module 33 is further configured to: when no Cache line satisfying the first preset condition exists in the Cache, determine whether a Cache line satisfying the second preset condition exists in the Cache, where a Cache line satisfying the second preset condition is a Cache line whose historical access frequency is lower than the preset frequency, which corresponds to NVM-type memory, and which has not been modified; and, when such a Cache line exists, select the first Cache line to be replaced from the Cache lines satisfying the second preset condition.
- The replacement determining module 33 is specifically configured to: determine whether a Cache line that corresponds to NVM-type memory and whose Modify bit is clean exists among the first N Cache lines of the LRU list of the Cache, where the first N Cache lines are Cache lines whose historical access frequency is lower than the preset frequency; and, when such a Cache line exists, select the first Cache line that corresponds to NVM-type memory and whose Modify bit is clean among the determined first N Cache lines as the first Cache line.
- The replacement determining module 33 is further configured to: when no Cache line satisfying the second preset condition exists in the Cache, determine the Cache line at the front of the LRU list as the first Cache line.
- The replacement module 35 is further configured to: after the first Cache line is replaced with the second Cache line, record the type of the memory corresponding to the second Cache line according to the type of the storage medium, in the memory, to which the access address points.
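Recording the memory type can be sketched as a physical-address range check. The partition below (DRAM mapped at the low 256 MiB) is purely an assumption for illustration; the patent only states that the type follows the storage medium the access address points to:

```python
# Hypothetical physical-address layout: DRAM occupies the low 256 MiB,
# everything above it is NVM. A real system would consult its memory map.
DRAM_BASE = 0x00000000
DRAM_SIZE = 0x10000000  # assumed 256 MiB of DRAM

def memory_type(addr):
    """Return 0 (DRAM) or 1 (NVM) for the medium backing `addr`."""
    return 0 if DRAM_BASE <= addr < DRAM_BASE + DRAM_SIZE else 1

def record_type(second_cache_line, addr):
    """Record on the second Cache line the type of memory addr points to."""
    second_cache_line["location"] = memory_type(addr)
    return second_cache_line
```

Under this assumed layout, the example address 0x08000000 falls inside the DRAM region, so the second Cache line's location bit is recorded as 0, consistent with Examples 1 and 2.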
- the Cache of this embodiment may be used to implement the technical solution of the foregoing method embodiment, and the implementation principle and the technical effect are similar, and details are not described herein again.
- a computer system embodiment is also provided.
- The structure of the computer system embodiment may follow the architecture shown in FIG. 1, that is, a Processor 11 and a Hybrid Memory 12, where the Processor 11 may include the CPU 111, the CPU cache (Cache) 112, and the memory controller 113.
- the Hybrid Memory 12 may include a DRAM 121 and an NVM 122.
- the Hybrid Memory 12 and the Memory controller 113 may be connected through a Memory Bus 13;
- The Cache 112 may adopt the structure described in the foregoing Cache embodiment, and the Cache 112 may carry out the technical solution of the foregoing method embodiment; the implementation principle and technical effect are similar and are not described here again.
- The aforementioned program may be stored in a computer-readable storage medium.
- When executed, the program performs the steps of the foregoing method embodiments; the foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computer Security & Cryptography (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
Description
| DRAM | PCM | STT-MRAM | RRAM |
---|---|---|---|---|
Read latency | <10ns | 12ns | 35ns | <50ns |
Write latency | <10ns | 100ns | 35ns | <0.3ns |
Write endurance (cycles) | >1E16 | 1E9 | >1E12 | 1E12 |
Retention time | 64ms | >10y | >10y | >10y |
Claims (13)
- A data caching method, the method being performed by a cache (Cache), comprising: receiving a data access request sent by a CPU, the data access request containing an access address; determining, according to the access address, whether data to be accessed is cached in the Cache; when it is determined that the data to be accessed is not cached in the Cache, determining, according to the historical access frequency of the Cache lines in the Cache and the type of the memory corresponding to each Cache line, whether a Cache line satisfying a first preset condition exists in the Cache, wherein a Cache line satisfying the first preset condition is a Cache line whose historical access frequency is lower than a preset frequency and which corresponds to dynamic random access memory (DRAM)-type memory, the memory comprising DRAM-type memory and non-volatile memory (NVM)-type memory; when it is determined that a Cache line satisfying the first preset condition exists in the Cache, selecting a first Cache line to be replaced from the Cache lines satisfying the first preset condition; reading the data to be accessed from the memory according to the access address; replacing the first Cache line with a second Cache line, the second Cache line containing the access address and the data to be accessed; and sending, by the Cache, the data to be accessed to the CPU.
- The method according to claim 1, wherein determining whether a Cache line satisfying the first preset condition exists in the Cache comprises: determining whether a Cache line corresponding to DRAM-type memory exists among the first M Cache lines of a least recently used (LRU) list of the Cache, wherein the first M Cache lines of the LRU list are Cache lines whose historical access frequency is lower than the preset frequency; and wherein selecting the first Cache line to be replaced from the Cache lines satisfying the first preset condition comprises: when a Cache line corresponding to DRAM-type memory exists among the first M Cache lines of the LRU list, selecting, among the determined first M Cache lines, the first Cache line corresponding to DRAM-type memory as the first Cache line.
- The method according to claim 1 or 2, further comprising: when no Cache line satisfying the first preset condition exists in the Cache, determining whether a Cache line satisfying a second preset condition exists in the Cache, wherein a Cache line satisfying the second preset condition is a Cache line whose historical access frequency is lower than the preset frequency, which corresponds to NVM-type memory, and which has not been modified; and, when a Cache line satisfying the second preset condition exists in the Cache, selecting the first Cache line to be replaced from the Cache lines satisfying the second preset condition.
- The method according to claim 3, wherein determining whether a Cache line satisfying the second preset condition exists in the Cache comprises: determining whether a Cache line that corresponds to NVM-type memory and whose Modify flag bit indicates clean exists among the first N Cache lines of the LRU list of the Cache, wherein the first N Cache lines are Cache lines whose historical access frequency is lower than the preset frequency; and wherein selecting the first Cache line to be replaced from the Cache lines satisfying the second preset condition comprises: when a Cache line that corresponds to NVM-type memory and whose Modify bit is clean exists among the first N Cache lines of the LRU list, selecting, among the determined first N Cache lines, the first Cache line that corresponds to NVM-type memory and whose Modify bit is clean as the first Cache line.
- The method according to claim 4, further comprising: when no Cache line satisfying the second preset condition exists in the Cache, determining the Cache line at the front of the LRU list as the first Cache line.
- The method according to any one of claims 1 to 5, further comprising, after replacing the first Cache line with the second Cache line: recording the type of the memory corresponding to the second Cache line according to the type of the storage medium, in the memory, to which the access address points.
- A cache (Cache), comprising: a receiving module, configured to receive a data access request sent by a CPU, the data access request containing an access address; a hit determining module, configured to determine, according to the access address, whether data to be accessed is cached in the Cache; a replacement determining module, configured to: when it is determined that the data to be accessed is not cached in the Cache, determine, according to the historical access frequency of the Cache lines in the Cache and the type of the memory corresponding to each Cache line, whether a Cache line satisfying a first preset condition exists in the Cache, wherein a Cache line satisfying the first preset condition is a Cache line whose historical access frequency is lower than a preset frequency and which corresponds to dynamic random access memory (DRAM)-type memory, the memory comprising DRAM-type memory and non-volatile memory (NVM)-type memory, and, when it is determined that a Cache line satisfying the first preset condition exists in the Cache, select a first Cache line to be replaced from the Cache lines satisfying the first preset condition; a reading module, configured to read the data to be accessed from the memory according to the access address; a replacement module, configured to replace the first Cache line with a second Cache line, the second Cache line containing the access address and the data to be accessed; and a sending module, configured to send the data to be accessed to the CPU.
- The Cache according to claim 7, wherein the replacement determining module is specifically configured to: determine whether a Cache line corresponding to DRAM-type memory exists among the first M Cache lines of a least recently used (LRU) list of the Cache, wherein the first M Cache lines of the LRU list are Cache lines whose historical access frequency is lower than the preset frequency; and, when a Cache line corresponding to DRAM-type memory exists among the first M Cache lines of the LRU list, select, among the determined first M Cache lines, the first Cache line corresponding to DRAM-type memory as the first Cache line.
- The Cache according to claim 7 or 8, wherein the replacement determining module is further configured to: when no Cache line satisfying the first preset condition exists in the Cache, determine whether a Cache line satisfying a second preset condition exists in the Cache, wherein a Cache line satisfying the second preset condition is a Cache line whose historical access frequency is lower than the preset frequency, which corresponds to NVM-type memory, and which has not been modified; and, when a Cache line satisfying the second preset condition exists in the Cache, select the first Cache line to be replaced from the Cache lines satisfying the second preset condition.
- The Cache according to claim 9, wherein the replacement determining module is specifically configured to: determine whether a Cache line that corresponds to NVM-type memory and whose Modify flag bit indicates clean exists among the first N Cache lines of the LRU list of the Cache, wherein the first N Cache lines are Cache lines whose historical access frequency is lower than the preset frequency; and, when a Cache line that corresponds to NVM-type memory and whose Modify bit is clean exists among the first N Cache lines of the LRU list, select, among the determined first N Cache lines, the first Cache line that corresponds to NVM-type memory and whose Modify bit is clean as the first Cache line.
- The Cache according to claim 10, wherein the replacement determining module is further configured to: when no Cache line satisfying the second preset condition exists in the Cache, determine the Cache line at the front of the LRU list as the first Cache line.
- The Cache according to any one of claims 7 to 11, wherein the replacement module is further configured to: after the first Cache line is replaced with the second Cache line, record the type of the memory corresponding to the second Cache line according to the type of the storage medium, in the memory, to which the access address points.
- A computer system, comprising: a processor, a memory, and the cache (Cache) according to any one of claims 7 to 12, the memory comprising a DRAM and an NVM, and the processor, the memory, and the Cache being connected by a bus.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP15789250.6A EP3121703B1 (en) | 2014-05-09 | 2015-05-07 | Data caching method, cache and computer system |
KR1020167028378A KR102036769B1 (ko) | 2014-05-09 | 2015-05-07 | 데이터 캐싱 방법, 캐시 및 컴퓨터 시스템 |
JP2016564221A JP6277572B2 (ja) | 2014-05-09 | 2015-05-07 | データキャッシング方法、キャッシュおよびコンピュータシステム |
US15/347,776 US10241919B2 (en) | 2014-05-09 | 2016-11-09 | Data caching method and computer system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410193960.4 | 2014-05-09 | ||
CN201410193960.4A CN105094686B (zh) | 2014-05-09 | 2014-05-09 | 数据缓存方法、缓存和计算机系统 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/347,776 Continuation US10241919B2 (en) | 2014-05-09 | 2016-11-09 | Data caching method and computer system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015169245A1 true WO2015169245A1 (zh) | 2015-11-12 |
Family
ID=54392171
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/078502 WO2015169245A1 (zh) | 2014-05-09 | 2015-05-07 | 数据缓存方法、缓存和计算机系统 |
Country Status (6)
Country | Link |
---|---|
US (1) | US10241919B2 (zh) |
EP (1) | EP3121703B1 (zh) |
JP (1) | JP6277572B2 (zh) |
KR (1) | KR102036769B1 (zh) |
CN (1) | CN105094686B (zh) |
WO (1) | WO2015169245A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017083949A (ja) * | 2015-10-23 | 2017-05-18 | 富士通株式会社 | キャッシュメモリおよびキャッシュメモリの制御方法 |
EP3547142A4 (en) * | 2016-12-28 | 2020-04-22 | New H3C Technologies Co., Ltd. | INFORMATION PROCESSING |
CN112289353A (zh) * | 2019-07-25 | 2021-01-29 | 上海磁宇信息科技有限公司 | 一种优化的具有ecc功能的mram系统及其操作方法 |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108139872B (zh) * | 2016-01-06 | 2020-07-07 | 华为技术有限公司 | 一种缓存管理方法、缓存控制器以及计算机系统 |
CN107229575A (zh) * | 2016-03-23 | 2017-10-03 | 上海复旦微电子集团股份有限公司 | 缓存性能的评估方法及装置 |
CN107229574A (zh) * | 2016-03-23 | 2017-10-03 | 上海复旦微电子集团股份有限公司 | 缓存及其控制方法 |
CN107229546A (zh) * | 2016-03-23 | 2017-10-03 | 上海复旦微电子集团股份有限公司 | 缓存的模拟方法及装置 |
CN108021514B (zh) * | 2016-10-28 | 2020-11-06 | 华为技术有限公司 | 一种缓存替换的方法和设备 |
CN106569961B (zh) * | 2016-10-31 | 2023-09-05 | 珠海市一微半导体有限公司 | 一种基于访存地址连续性的cache模块及其访存方法 |
US10417134B2 (en) * | 2016-11-10 | 2019-09-17 | Oracle International Corporation | Cache memory architecture and policies for accelerating graph algorithms |
CN107368437B (zh) * | 2017-07-24 | 2021-06-29 | 郑州云海信息技术有限公司 | 一种末级缓存管理方法及系统 |
CN111433749B (zh) * | 2017-10-12 | 2023-12-08 | 拉姆伯斯公司 | 具有dram高速缓存的非易失性物理存储器 |
CN108572932B (zh) * | 2017-12-29 | 2020-05-19 | 贵阳忆芯科技有限公司 | 多平面nvm命令融合方法与装置 |
US20190303037A1 (en) * | 2018-03-30 | 2019-10-03 | Ca, Inc. | Using sequential read intention to increase data buffer reuse |
JP7071640B2 (ja) * | 2018-09-20 | 2022-05-19 | 富士通株式会社 | 演算処理装置、情報処理装置及び演算処理装置の制御方法 |
CN110134514B (zh) * | 2019-04-18 | 2021-04-13 | 华中科技大学 | 基于异构内存的可扩展内存对象存储系统 |
CN110347338B (zh) * | 2019-06-18 | 2021-04-02 | 重庆大学 | 混合内存数据交换处理方法、系统及可读存储介质 |
CN112667528A (zh) * | 2019-10-16 | 2021-04-16 | 华为技术有限公司 | 一种数据预取的方法及相关设备 |
CN111221749A (zh) * | 2019-11-15 | 2020-06-02 | 新华三半导体技术有限公司 | 数据块写入方法、装置、处理器芯片及Cache |
CN112612727B (zh) * | 2020-12-08 | 2023-07-07 | 成都海光微电子技术有限公司 | 一种高速缓存行替换方法、装置及电子设备 |
WO2022178869A1 (zh) * | 2021-02-26 | 2022-09-01 | 华为技术有限公司 | 一种缓存替换方法和装置 |
CN113421599A (zh) * | 2021-06-08 | 2021-09-21 | 珠海市一微半导体有限公司 | 一种预缓存外部存储器数据的芯片及其运行方法 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101236530A (zh) * | 2008-01-30 | 2008-08-06 | 清华大学 | 高速缓存替换策略的动态选择方法 |
CN102831087A (zh) * | 2012-07-27 | 2012-12-19 | 国家超级计算深圳中心(深圳云计算中心) | 基于混合存储器的数据读写处理方法和装置 |
CN103092534A (zh) * | 2013-02-04 | 2013-05-08 | 中国科学院微电子研究所 | 一种内存结构的调度方法和装置 |
CN103548005A (zh) * | 2011-12-13 | 2014-01-29 | 华为技术有限公司 | 替换缓存对象的方法和装置 |
US20140032818A1 (en) * | 2012-07-30 | 2014-01-30 | Jichuan Chang | Providing a hybrid memory |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5307477A (en) | 1989-12-01 | 1994-04-26 | Mips Computer Systems, Inc. | Two-level cache memory system |
US6349363B2 (en) * | 1998-12-08 | 2002-02-19 | Intel Corporation | Multi-section cache with different attributes for each section |
US7457926B2 (en) | 2005-05-18 | 2008-11-25 | International Business Machines Corporation | Cache line replacement monitoring and profiling |
US7478197B2 (en) | 2006-07-18 | 2009-01-13 | International Business Machines Corporation | Adaptive mechanisms for supplying volatile data copies in multiprocessor systems |
US7568068B2 (en) * | 2006-11-13 | 2009-07-28 | Hitachi Global Storage Technologies Netherlands B. V. | Disk drive with cache having volatile and nonvolatile memory |
CN100481028C (zh) * | 2007-08-20 | 2009-04-22 | 杭州华三通信技术有限公司 | 一种利用缓存实现数据存储的方法和装置 |
US7962695B2 (en) | 2007-12-04 | 2011-06-14 | International Business Machines Corporation | Method and system for integrating SRAM and DRAM architecture in set associative cache |
JP2011022933A (ja) * | 2009-07-17 | 2011-02-03 | Toshiba Corp | メモリ管理装置を含む情報処理装置及びメモリ管理方法 |
US20100185816A1 (en) * | 2009-01-21 | 2010-07-22 | Sauber William F | Multiple Cache Line Size |
EP2455865B1 (en) | 2009-07-17 | 2020-03-04 | Toshiba Memory Corporation | Memory management device |
US8914568B2 (en) | 2009-12-23 | 2014-12-16 | Intel Corporation | Hybrid memory architectures |
US9448938B2 (en) * | 2010-06-09 | 2016-09-20 | Micron Technology, Inc. | Cache coherence protocol for persistent memories |
CN102253901B (zh) | 2011-07-13 | 2013-07-24 | 清华大学 | 一种基于相变内存的读写区分数据存储替换方法 |
CN102760101B (zh) * | 2012-05-22 | 2015-03-18 | 中国科学院计算技术研究所 | 一种基于ssd 的缓存管理方法及系统 |
CN103927203B (zh) | 2014-03-26 | 2018-06-26 | 上海新储集成电路有限公司 | 一种计算机系统及控制方法 |
CN103914403B (zh) | 2014-04-28 | 2016-11-02 | 中国科学院微电子研究所 | 一种混合内存访问情况的记录方法及其系统 |
CN103927145B (zh) | 2014-04-28 | 2017-02-15 | 中国科学院微电子研究所 | 一种基于混合内存的系统休眠、唤醒方法及装置 |
CN104035893A (zh) | 2014-06-30 | 2014-09-10 | 浪潮(北京)电子信息产业有限公司 | 一种在计算机异常掉电时的数据保存方法 |
-
2014
- 2014-05-09 CN CN201410193960.4A patent/CN105094686B/zh active Active
-
2015
- 2015-05-07 WO PCT/CN2015/078502 patent/WO2015169245A1/zh active Application Filing
- 2015-05-07 EP EP15789250.6A patent/EP3121703B1/en active Active
- 2015-05-07 KR KR1020167028378A patent/KR102036769B1/ko active IP Right Grant
- 2015-05-07 JP JP2016564221A patent/JP6277572B2/ja active Active
-
2016
- 2016-11-09 US US15/347,776 patent/US10241919B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101236530A (zh) * | 2008-01-30 | 2008-08-06 | 清华大学 | 高速缓存替换策略的动态选择方法 |
CN103548005A (zh) * | 2011-12-13 | 2014-01-29 | 华为技术有限公司 | 替换缓存对象的方法和装置 |
CN102831087A (zh) * | 2012-07-27 | 2012-12-19 | 国家超级计算深圳中心(深圳云计算中心) | 基于混合存储器的数据读写处理方法和装置 |
US20140032818A1 (en) * | 2012-07-30 | 2014-01-30 | Jichuan Chang | Providing a hybrid memory |
CN103092534A (zh) * | 2013-02-04 | 2013-05-08 | 中国科学院微电子研究所 | 一种内存结构的调度方法和装置 |
Non-Patent Citations (1)
Title |
---|
WANG, QIANG ET AL.: "Efficient Management with Hotspots Control for Hybrid Memory System", MICROELECTRONICS & COMPUTER, vol. 31, no. 1, 31 January 2014 (2014-01-31), pages 1, XP008182988 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017083949A (ja) * | 2015-10-23 | 2017-05-18 | 富士通株式会社 | キャッシュメモリおよびキャッシュメモリの制御方法 |
EP3547142A4 (en) * | 2016-12-28 | 2020-04-22 | New H3C Technologies Co., Ltd. | INFORMATION PROCESSING |
CN112289353A (zh) * | 2019-07-25 | 2021-01-29 | 上海磁宇信息科技有限公司 | 一种优化的具有ecc功能的mram系统及其操作方法 |
CN112289353B (zh) * | 2019-07-25 | 2024-03-12 | 上海磁宇信息科技有限公司 | 一种优化的具有ecc功能的mram系统及其操作方法 |
Also Published As
Publication number | Publication date |
---|---|
EP3121703A1 (en) | 2017-01-25 |
US10241919B2 (en) | 2019-03-26 |
JP6277572B2 (ja) | 2018-02-14 |
KR102036769B1 (ko) | 2019-10-25 |
CN105094686A (zh) | 2015-11-25 |
EP3121703B1 (en) | 2019-11-20 |
JP2017519275A (ja) | 2017-07-13 |
US20170060752A1 (en) | 2017-03-02 |
EP3121703A4 (en) | 2017-05-17 |
KR20160132458A (ko) | 2016-11-18 |
CN105094686B (zh) | 2018-04-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015169245A1 (zh) | 数据缓存方法、缓存和计算机系统 | |
CN102760101B (zh) | 一种基于ssd 的缓存管理方法及系统 | |
US9298384B2 (en) | Method and device for storing data in a flash memory using address mapping for supporting various block sizes | |
US9449005B2 (en) | Metadata storage system and management method for cluster file system | |
CN103777905B (zh) | 一种软件定义的固态盘融合存储方法 | |
US20110231598A1 (en) | Memory system and controller | |
CN108804350A (zh) | 一种内存访问方法及计算机系统 | |
US10740251B2 (en) | Hybrid drive translation layer | |
CN107391398B (zh) | 一种闪存缓存区的管理方法及系统 | |
US9268705B2 (en) | Data storage device and method of managing a cache in a data storage device | |
CN109952565B (zh) | 内存访问技术 | |
WO2016095761A1 (zh) | 缓存的处理方法和装置 | |
US20090094391A1 (en) | Storage device including write buffer and method for controlling the same | |
CN104991743B (zh) | 应用于固态硬盘阻变存储器缓存的损耗均衡方法 | |
US20160124639A1 (en) | Dynamic storage channel | |
WO2023000536A1 (zh) | 一种数据处理方法、系统、设备以及介质 | |
WO2013189186A1 (zh) | 非易失性存储设备的缓存管理方法及装置 | |
JP2017126334A (ja) | 記憶装置及びその動作方法並びにシステム | |
CN106909323B (zh) | 适用于dram/pram混合主存架构的页缓存方法及混合主存架构系统 | |
US11126624B2 (en) | Trie search engine | |
US20240020014A1 (en) | Method for Writing Data to Solid-State Drive | |
CN108647157A (zh) | 一种基于相变存储器的映射管理方法及固态硬盘 | |
CN104268102A (zh) | 一种存储服务器采用混合方式写缓存的方法 | |
CN110968527B (zh) | Ftl提供的缓存 | |
JP2013222434A (ja) | キャッシュ制御装置、キャッシュ制御方法、及びそのプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15789250 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20167028378 Country of ref document: KR Kind code of ref document: A |
|
REEP | Request for entry into the european phase |
Ref document number: 2015789250 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015789250 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2016564221 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |