WO2015169245A1 - Data caching method, cache, and computer system - Google Patents

Data caching method, cache, and computer system

Info

Publication number
WO2015169245A1
WO2015169245A1 (PCT/CN2015/078502, CN2015078502W)
Authority
WO
WIPO (PCT)
Prior art keywords
cache
cache line
memory
line
data
Prior art date
Application number
PCT/CN2015/078502
Other languages
English (en)
French (fr)
Inventor
魏巍
张立新
熊劲
蒋德钧
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP15789250.6A priority Critical patent/EP3121703B1/en
Priority to KR1020167028378A priority patent/KR102036769B1/ko
Priority to JP2016564221A priority patent/JP6277572B2/ja
Publication of WO2015169245A1 publication Critical patent/WO2015169245A1/zh
Priority to US15/347,776 priority patent/US10241919B2/en

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0868: Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
    • G06F12/122: Replacement control using replacement algorithms of the least frequently used [LFU] type, e.g. with individual count value
    • G06F12/128: Replacement control using replacement algorithms adapted to multidimensional cache systems, e.g. set-associative, multicache, multiset or multilevel
    • G06F3/061: Improving I/O performance
    • G06F3/0619: Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G06F3/0631: Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F3/065: Replication mechanisms
    • G06F3/068: Hybrid storage device
    • G06F3/0685: Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
    • G06F2212/604: Details relating to cache allocation

Definitions

  • Embodiments of the present invention relate to storage technologies, and in particular, to a data caching method, a cache, and a computer system.
  • Dynamic random access memory (DRAM) is generally used as the memory of a computer system. With the emergence of non-volatile memory (NVM), NVM can be used instead of DRAM as the memory of a computer system, to meet applications' demands for large capacity and low power consumption.
  • However, NVM has longer read and write latency than DRAM. To combine the advantages of both, the prior art further adopts a hybrid memory composed of DRAM and NVM, in order to provide high-capacity, low-power, high-performance memory for applications.
  • Embodiments of the present invention provide a data caching method, a cache, and a computer system.
  • In a first aspect, an embodiment of the present invention provides a data caching method, where the method is performed by a cache (Cache) and includes: receiving a data access request sent by the CPU, where the data access request includes an access address; determining, according to the access address, whether the data to be accessed is cached in the Cache; and, when it is not, determining, according to the historical access frequency of each cache line (Cache line) and the type of memory corresponding to each Cache line, whether a Cache line satisfying a first preset condition exists in the Cache;
  • a Cache line satisfying the first preset condition is a Cache line whose historical access frequency is lower than a preset frequency and that corresponds to dynamic random access memory (DRAM) type memory, the memory including DRAM type memory and non-volatile memory (NVM) type memory;
  • a first Cache line to be replaced is selected from the Cache lines that satisfy the first preset condition, the to-be-accessed data is read from the memory according to the access address, and the first Cache line is replaced with a second Cache line that includes the access address and the to-be-accessed data;
  • the Cache sends the to-be-accessed data to the CPU.
  • In a second aspect, an embodiment of the present invention provides a cache (Cache), including:
  • a receiving module, configured to receive a data access request sent by the CPU, where the data access request includes an access address;
  • a hit determining module, configured to determine, according to the access address, whether the data to be accessed is cached in the Cache;
  • a replacement determining module, configured to determine, when the data to be accessed is not cached in the Cache, according to the historical access frequency of each cache line (Cache line) and the type of memory corresponding to each Cache line, whether a Cache line satisfying the first preset condition exists in the Cache, where a Cache line satisfying the first preset condition is a Cache line whose historical access frequency is lower than a preset frequency and that corresponds to dynamic random access memory (DRAM) type memory,
  • the memory including DRAM type memory and non-volatile memory (NVM) type memory; and, when a Cache line satisfying the first preset condition exists in the Cache, to select a first Cache line to be replaced from the Cache lines satisfying the first preset condition;
  • a reading module, configured to read the to-be-accessed data from the memory according to the access address;
  • a replacement module, configured to replace the first Cache line with a second Cache line, where the second Cache line includes the access address and the to-be-accessed data;
  • a sending module, configured to send the to-be-accessed data to the CPU.
  • In a third aspect, an embodiment of the present invention provides a computer system, including a processor, a hybrid memory, and the cache (Cache) according to the foregoing second aspect, where the hybrid memory includes a DRAM and an NVM, and the processor, the hybrid memory, and the Cache are connected through a bus.
  • When an access request misses, the Cache needs to determine the Cache line to be replaced. By the above selection, the amount of data from the NVM that is buffered in the Cache can be increased, so that access requests for data stored in the NVM can find the corresponding data in the Cache as much as possible. This reduces the number of reads that must go to the NVM, thereby reducing the delay of reading data from the NVM and effectively improving access efficiency.
  • FIG. 1 is a schematic structural diagram of a computer system according to an embodiment of the present invention
  • FIG. 2 is a flowchart of a data caching method according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a cache according to an embodiment of the present invention.
  • FIG. 1 is a schematic structural diagram of a computer system according to an embodiment of the present invention.
  • As shown in FIG. 1, the computer system includes a processor 11 and a hybrid memory 12. The processor 11 may include a CPU 111, a cache 112, and a memory controller 113; the hybrid memory 12 may include a DRAM 121 and an NVM 122. The processor 11, the hybrid memory 12, and the cache 112 are connected by a bus, and the hybrid memory 12 and the memory controller 113 can be connected via a memory bus 13.
  • During data access, the CPU can issue a data access request to the Cache, where the request contains the access address.
  • The Cache caches some of the data in the hybrid memory to improve response speed. Therefore, the Cache first determines, according to the access address, whether the data requested by the CPU is cached in the Cache; in other words, the Cache first determines according to the access address whether the request can hit in the Cache. When the Cache hits, that is, when it is determined that the to-be-accessed data is cached in the Cache, the Cache may directly return the requested data to the CPU.
  • When the Cache misses, the data access request is sent to the hybrid memory through the memory controller, to read the data requested by the CPU from the hybrid memory.
  • The Cache needs to continuously update its cached content according to access patterns during data access, to meet ever-changing access requirements. Specifically, when a data access hits in the Cache, no Cache line replacement is performed. When the current data access misses in the Cache, the Cache needs to determine, from the currently cached Cache lines, a Cache line to be replaced, and replace that Cache line with a new Cache line read from the memory.
  • A Cache line is the minimum unit operated on by the Cache controller.
  • When the Cache controller writes data to the memory, it writes a whole Cache line of data; when the Cache controller reads data from the memory, it likewise reads data one Cache line at a time.
  • Herein, "a Cache line" may also refer to the data of a Cache line.
  • "Replacing a Cache line" in the embodiments of the present invention refers to replacing the data of a Cache line in the Cache with the data of a Cache line read from the memory.
  • In the prior art, the Cache searches for the Cache line with the lowest access frequency among the currently cached Cache lines and determines it to be the Cache line to be replaced.
  • In doing so, the Cache does not sense whether the memory type corresponding to the Cache line to be replaced is the DRAM type or the NVM type, that is, it does not sense whether the Cache line to be replaced is derived from DRAM or from NVM.
  • the NVM includes but is not limited to: phase change memory (hereinafter referred to as PCM), and spin transfer torque magnetic random access storage. (Spin Transfer Torque-Magnetic Random Access Memory, hereinafter referred to as STT-MRAM), and Resistive Random Access Memory (hereinafter referred to as RRAM).
  • PCM phase change memory
  • STT-MRAM spin Transfer Torque-Magnetic Random Access Memory
  • RRAM Resistive Random Access Memory
  • The characteristics of DRAM and these NVM technologies compare as follows:

        Property        DRAM     PCM        STT-MRAM   RRAM
        Read delay      ~10ns    12ns       35ns       ~50ns
        Write delay     ~10ns    100ns      35ns       ~0.3ns
        Write times     >1E16    1.00E+09   >1E12      1.00E+12
        Retention time  64ms     >10y       >10y       >10y
  • As shown, the storage capacity of the NVM is larger than that of the DRAM and its power consumption is lower, but the read and write latency of the NVM is greater than that of the DRAM, and the NVM has a limit on the number of writes.
  • If the memory type corresponding to the Cache line to be replaced, as determined by the Cache, is the NVM type, the Cache line to be replaced is deleted from the Cache, and subsequent access requests for its data must be served from the NVM of the hybrid memory. Because the read and write latency of the NVM is larger than that of the DRAM, this inevitably introduces access delay and cannot meet applications' strict demands on access latency.
  • In the embodiments of the present invention, when the Cache determines the Cache line to be replaced, it considers not only the historical access frequency of each currently cached Cache line but also the memory type corresponding to each Cache line, and preferentially replaces Cache lines whose memory type is the DRAM type.
  • A Cache line whose memory type is the DRAM type is a Cache line whose data is stored in the DRAM part of the memory; a Cache line whose memory type is the NVM type is a Cache line whose data is stored in the NVM part of the memory.
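The per-line bookkeeping this implies can be sketched as follows. This is an illustrative model, not the patent's implementation; the field names are assumptions, but the semantics follow the description: each Cache line records whether its data comes from the DRAM or the NVM part of the hybrid memory, whether it has been modified, and its historical access frequency.

```python
from dataclasses import dataclass

@dataclass
class CacheLine:
    address: int         # memory address of the cached block
    data: bytes          # cached block contents
    mem_type: str        # "DRAM" or "NVM" (the identifier bit: 1 = DRAM, 0 = NVM)
    dirty: bool = False  # Modify bit: False = clean, True = dirty
    freq: int = 0        # historical access frequency

# a clean line caching a 64-byte block that is backed by DRAM
line = CacheLine(address=0x08000000, data=b"\x00" * 64, mem_type="DRAM")
```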
  • FIG. 2 is a flowchart of a data caching method according to an embodiment of the present invention.
  • The method may be performed by a Cache, and specifically by the Cache controller in the Cache.
  • the method in this embodiment may include:
  • S201: The Cache receives a data access request sent by the CPU, where the data access request includes an access address.
  • S202: The Cache determines, according to the access address, whether the data to be accessed is cached in the Cache; if so, S207 is executed; if not, S203 is executed.
  • Specifically, the CPU may receive a data access request sent by an application and forward the data access request to the Cache.
  • The Cache can compare the access address requested by the CPU with the address of each cached Cache line to determine whether the requested data is cached in the Cache, that is, whether the access hits.
  • This depends on the mapping policy of the Cache. If the mapping policy is fully associative, the Cache searches and compares across the entire Cache. If the mapping policy is not fully associative but set-associative, the Cache can determine, based on
  • the index bits in the access address, the set in which the access address is located in the Cache, and further determine, according to the tag bits in the access address, whether the access address is included in that set. If it is included, the Cache may determine, according to the valid bit, whether the cached data is valid; if the data is valid, the corresponding data can be found according to the data offset in the access address, and the data is returned to the CPU.
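The hit check just described (index bits select a set, a tag compare plus the valid bit decide the hit, and the data offset selects the byte) can be sketched as below. The bit widths and the dictionary layout are assumptions for illustration only.

```python
BLOCK_BITS = 6   # 64-byte Cache line (assumed)
INDEX_BITS = 4   # 16 sets (assumed)

def lookup(cache_sets, access_address):
    """Return the addressed byte on a hit, or None on a miss."""
    offset = access_address & ((1 << BLOCK_BITS) - 1)                 # data offset
    index = (access_address >> BLOCK_BITS) & ((1 << INDEX_BITS) - 1)  # set index
    tag = access_address >> (BLOCK_BITS + INDEX_BITS)                 # tag bits
    for line in cache_sets[index]:            # compare only within the indexed set
        if line["valid"] and line["tag"] == tag:
            return line["data"][offset]
    return None

# a 16-set cache holding one valid line for the block at 0x08000000
sets = [[] for _ in range(1 << INDEX_BITS)]
sets[(0x08000000 >> BLOCK_BITS) & ((1 << INDEX_BITS) - 1)].append(
    {"valid": True,
     "tag": 0x08000000 >> (BLOCK_BITS + INDEX_BITS),
     "data": bytes(range(64))})
```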
  • S203: The Cache determines, according to the historical access frequency of each cache line (Cache line) and the type of memory corresponding to each Cache line, whether a Cache line satisfying the first preset condition exists in the Cache; if yes, S204 is executed; otherwise, S208 is executed.
  • A Cache line satisfying the first preset condition is a Cache line whose historical access frequency is lower than a preset frequency and that corresponds to dynamic random access memory (DRAM) type memory, where the memory includes DRAM type memory and non-volatile memory (NVM) type memory.
  • When the access misses, the Cache needs to determine one Cache line from the currently cached Cache lines as the Cache line to be replaced; the Cache line determined to be replaced is referred to as the first Cache line.
  • The historical access frequency of a Cache line represents the access heat of the corresponding cached data, and the memory type corresponding to a Cache line indicates whether the Cache line is derived from DRAM or from NVM.
  • The policy by which the Cache determines the first Cache line is: based on the historical access frequency, replace a Cache line whose memory type is the DRAM type whenever possible, that is, preferentially replace Cache lines whose memory type is the DRAM type.
  • Specifically, the Cache may select, from all the Cache lines in the Cache, several Cache lines whose historical access frequency is lower than the preset frequency, and then determine, according to the memory type corresponding to each of these Cache lines, a Cache line corresponding to DRAM type memory as the first Cache line to be replaced. If these Cache lines contain two or more Cache lines whose memory type is the DRAM type, the Cache line among them with the lowest historical access frequency may be determined as the first Cache line to be replaced.
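A minimal sketch of this first-preset-condition selection follows. The list-of-dicts representation and the return of None when no line qualifies are assumptions for illustration.

```python
def select_first_cache_line(cache_lines, preset_frequency):
    """First preset condition: historical access frequency below the preset
    frequency AND backing memory of the DRAM type. Among qualifying lines,
    the one with the lowest frequency is chosen; None means no line qualifies."""
    candidates = [ln for ln in cache_lines
                  if ln["freq"] < preset_frequency and ln["mem_type"] == "DRAM"]
    if not candidates:
        return None
    return min(candidates, key=lambda ln: ln["freq"])

lines = [
    {"addr": 0x100, "mem_type": "NVM",  "freq": 1},
    {"addr": 0x200, "mem_type": "DRAM", "freq": 3},
    {"addr": 0x300, "mem_type": "DRAM", "freq": 2},
    {"addr": 0x400, "mem_type": "DRAM", "freq": 9},
]
victim = select_first_cache_line(lines, preset_frequency=5)
```

Note that the cold NVM line at 0x100 is deliberately passed over: only DRAM-backed lines can satisfy the first preset condition.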
  • S205: The Cache reads the to-be-accessed data from the memory according to the access address.
  • The to-be-accessed data may be stored in the DRAM or in the NVM.
  • S206: The Cache replaces the first Cache line with a second Cache line, where the second Cache line includes the access address and the to-be-accessed data.
  • For ease of description, the data of one Cache line read out from the memory is referred to as the second Cache line.
  • Replacing the first Cache line with the second Cache line means that the data read from the memory is cached in the Cache, and the data of the first Cache line is deleted or written back into the memory.
  • In an implementation, an identifier bit may be added to each Cache line in the Cache, and the identifier bit is used to identify whether the memory type corresponding to the Cache line is the DRAM type or the NVM type.
  • When caching the second Cache line, the Cache may record the type of the memory corresponding to the second Cache line according to the type of the storage medium in the memory pointed to by the access address, that is, set the identifier bit.
  • The identifier bit can be one bit: if the bit is 1, the corresponding memory type is the DRAM type; if the bit is 0, the corresponding memory type is the NVM type.
  • When replacing, if the Modify bit of the first Cache line to be replaced is clean, the data in the first Cache line has not been modified and is consistent with the data stored in the memory, so the data of the first Cache line does not need to be written back into the memory and can be directly deleted. If the Modify bit of the first Cache line to be replaced is dirty, the data in the first Cache line has been modified and is inconsistent with the data stored in the memory; in that case, the data of the first Cache line needs to be written back to the memory first, and then the first Cache line is deleted.
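The Modify-bit handling at replacement time can be sketched as follows. Memory is modeled here as a plain dict and the field names are illustrative, not from the patent.

```python
def evict(first_line, memory):
    """Drop a clean line directly; write a dirty line back to memory first.
    Returns True when a write-back was needed."""
    wrote_back = False
    if first_line["dirty"]:
        # the cached copy was modified, so memory must be updated first
        memory[first_line["addr"]] = first_line["data"]
        wrote_back = True
    # in both cases the line itself is then deleted from the Cache
    return wrote_back

memory = {0x100: b"old"}
clean = {"addr": 0x200, "dirty": False, "data": b"unchanged"}
dirty = {"addr": 0x100, "dirty": True, "data": b"new"}
evict(clean, memory)   # clean line: no write-back, memory untouched
evict(dirty, memory)   # dirty line: memory[0x100] is updated to b"new"
```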
  • S207: The Cache sends the data to be accessed to the CPU, and the flow ends.
  • That is, whether on a hit or after a miss has been handled, the Cache sends the read to-be-accessed data to the CPU.
  • In this embodiment, when the Cache needs to determine the Cache line to be replaced, it considers not only the historical access frequency of each Cache line but also the memory type corresponding to each Cache line, so that Cache lines corresponding to the DRAM memory type can be preferentially replaced. This reduces the amount of Cache capacity spent on data stored in the DRAM and increases the amount of data from the NVM buffered in the Cache, so that access requests for data stored in the NVM can find the corresponding data in the Cache as much as possible. This reduces the number of reads that must go to the NVM, thereby reducing the delay of reading data from the NVM and effectively improving access efficiency.
  • The above embodiment can obtain the historical access frequency of each Cache line by means of a Least Recently Used (LRU) linked list, which records the Cache lines in order of access frequency from low to high.
  • Accordingly, the specific manner in which the Cache determines the first Cache line may be: selecting, as the first Cache line, the first Cache line among the Cache lines corresponding to DRAM type memory within the first M Cache lines of the LRU list.
  • Here, "the first one" refers to the front-most Cache line among the Cache lines corresponding to DRAM type memory in the first M Cache lines of the LRU list.
  • If M is set large, the probability that a Cache line of the DRAM type is chosen for replacement increases; however, the value of M cannot be set too large, otherwise data stored in the DRAM will hardly ever enter the Cache.
  • Those skilled in the art can set the value of M according to requirements.
  • S208: The Cache determines whether a Cache line satisfying the second preset condition exists in the Cache; if so, S209 is executed; otherwise, S210 is executed.
  • A Cache line satisfying the second preset condition is a Cache line whose historical access frequency is lower than the preset frequency, that corresponds to NVM type memory, and whose data has not been modified.
  • When no Cache line satisfies the first preset condition, the Cache may have to replace a Cache line whose memory type is the NVM type.
  • If the Modify bit of a Cache line is clean, the data in the Cache line has not been modified and is consistent with the data stored in the memory, so it is not necessary to write the data of the Cache line back to the memory when replacing it. If the Modify bit of a Cache line is dirty, the data in the Cache line has been modified and is inconsistent with the data stored in the memory, so its data needs to be written back to the memory when it is replaced.
  • Because the NVM has a write limit, when the Cache has to replace a Cache line whose memory type is the NVM type, it can preferentially replace a Cache line whose Modify bit is clean, thus reducing the number of NVM writes.
  • Based on the LRU list, the manner in which the Cache determines whether a Cache line satisfying the second preset condition exists may specifically be:
  • S209: among the first N Cache lines of the LRU list, selecting, as the first Cache line, the first Cache line that corresponds to NVM type memory and whose Modify bit is clean.
  • The values of M and N can be tuned according to application behavior. For Cache lines of different memory types, the replacement delay has the relationship DRAM < NVM (clean) < NVM (dirty); therefore, the residence time in the Cache of Cache lines from the NVM can be appropriately extended, and it is generally possible to set N < M.
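The overall victim choice that falls out of S203, S208, and S210, together with the ordering DRAM < NVM(clean) < NVM(dirty), can be sketched as a three-tier search over the LRU list (ordered lowest access frequency first). The list-of-dicts representation is an assumption for illustration.

```python
def choose_victim(lru_list, M, N):
    """Tier 1: first DRAM-backed line among the first M LRU entries.
    Tier 2: first clean NVM-backed line among the first N entries (N < M).
    Tier 3: the LRU head, a dirty NVM-backed line in the worst case."""
    for line in lru_list[:M]:
        if line["mem_type"] == "DRAM":
            return line
    for line in lru_list[:N]:
        if line["mem_type"] == "NVM" and not line["dirty"]:
            return line
    return lru_list[0]

lru = [  # ordered from lowest to highest historical access frequency
    {"addr": 0xA, "mem_type": "NVM",  "dirty": True},
    {"addr": 0xB, "mem_type": "NVM",  "dirty": False},
    {"addr": 0xC, "mem_type": "DRAM", "dirty": False},
]
```

With M=3, N=2 this picks the DRAM line at 0xC even though the two NVM lines are colder, which is exactly the bias the embodiment is after.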
  • S210: The Cache selects the Cache line with the lowest historical access frequency as the first Cache line, and executes S205.
  • That is, the Cache can determine the Cache line at the front end of the LRU list to be the first Cache line.
  • The first Cache line determined in this case is a Cache line whose memory type is the NVM type and whose Modify bit is dirty.
  • As noted above, when caching the second Cache line, the Cache may record the type of the memory corresponding to the second Cache line according to the type of the storage medium in the memory pointed to by the access address.
  • Specifically, the Cache can adopt the following two implementations to obtain the type of the memory corresponding to a Cache line:
  • Method 1: determining and recording the memory type corresponding to the second Cache line according to the address range in which the access address falls in the memory.
  • This method applies when the physical addresses in the hybrid memory are contiguous, for example, the first n GB is DRAM and the remainder is NVM.
  • The Cache can determine whether the access address belongs to the address range of the DRAM or the address range of the NVM; if the access address belongs to the address range of the DRAM, the Cache records the memory type corresponding to the second Cache line as the DRAM type; if the access address belongs to the address range of the NVM, the Cache records the memory type corresponding to the second Cache line as the NVM type.
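Method 1 amounts to a single range check. The 2 GB boundary below matches the example setup used later in the description (first 2 GB DRAM, rest NVM) and is otherwise an assumption.

```python
DRAM_BYTES = 2 << 30  # contiguous layout: first 2 GB is DRAM, the rest is NVM

def memory_type(access_address):
    """Method 1: classify the backing medium by address range. Record the
    DRAM type when the access address falls in the DRAM range, NVM otherwise."""
    return "DRAM" if access_address < DRAM_BYTES else "NVM"
```

For instance, address 0x08000000 falls inside the first 2 GB, so its Cache line would be tagged as DRAM-backed.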
  • Method 2: determining and recording the memory type corresponding to the second Cache line according to information fed back by the memory controller.
  • A memory mapping table may be stored in the memory controller, where the mapping table records the address range of the DRAM, or the address range of the NVM, or both at the same time.
  • The address ranges need not be contiguous.
  • The memory controller can determine the storage location of the access address in the hybrid memory according to the access address and the mapping table.
  • A bit can be added to the data exchanged between the memory controller and the Cache, through which the memory controller sends the storage location of the access address in the hybrid memory to the Cache.
  • Specifically, when the Cache misses, it sends a data read request containing the access address to the memory controller; the memory controller reads the to-be-accessed data from the hybrid memory according to the request, and obtains the storage location of the access address in the hybrid memory according to the access address and the mapping table. The memory controller then sends a data read response to the Cache, where the response includes the to-be-accessed data and the storage location of the access address in the hybrid memory; accordingly, the Cache can record the memory type corresponding to the second Cache line according to the storage location contained in the data read response.
  • The following examples assume that the first 2 GB of the hybrid memory is DRAM and the last 2 GB is NVM, that the Cache mapping policy is fully associative, that the CPU requests data from the Cache with access address 0x08000000 (which falls within the DRAM portion of the hybrid memory), and that the data bus is 32 bits.
  • Example 1: The Cache looks up the access address locally and does not find the corresponding data, that is, the access misses. The Cache searches the first five Cache lines of the LRU list and, according to their identifier bits, finds no Cache line whose memory type is the DRAM type. The Cache then looks at the Modify bits of the first three Cache lines of the LRU list and finds that the Modify bit of the second entry is 0, that is, clean; that Cache line is therefore the first Cache line to be replaced. The Cache can read the Cache line containing the data at access address 0x08000000 into the Cache to replace the first Cache line, and return the read data to the CPU.
  • The Cache can determine from the access address (0x08000000) that the data is stored in the DRAM, so the newly read Cache line, that is, the second Cache line, is added to the tail of the LRU list, with its identifier bit set to indicate DRAM (characterizing that the second Cache line is derived from DRAM) and its Modify bit set to 0 (characterizing that the data of the second Cache line has not been modified).
  • Example 2 After the Cache finds the local data, the corresponding data is not found. If the hop is missing, the Cache searches for the first five cache lines of the LRU list, and determines the Cache line whose memory type is DRAM according to the location bits of the first five cache lines. The Cache line is the first Cache line to be replaced. If the Modify bit of the DRAM is 1 (the data characterizing the first Cache line is modified), the Cache first writes back data to the first Cache line, and then the Cache can store the Cache line containing the data with the access address of 0x08000000. Read in the Cache to replace the first Cache line and return the read data to the CPU.
  • the Cache can determine that the data is stored in the DRAM according to the access address (0x08000000), so the newly read Cache line, that is, the second Cache line, is appended to the tail of the LRU list, its location bit is set to 0 (indicating that the second Cache line comes from DRAM), and its Modify bit is set to 0 (indicating that the data of the second Cache line has not been modified).
  • when determining the Cache line to be replaced, the Cache preferentially replaces Cache lines whose memory type is DRAM, thereby retaining Cache lines whose memory type is NVM as far as possible, so as to avoid the delay of reading data from the NVM; when there is no replaceable Cache line whose memory type is DRAM and a Cache line whose memory type is NVM must be replaced, the Cache preferentially replaces a Cache line whose memory type is NVM and whose Modify bit is clean, thereby reducing the number of writes to the NVM and extending the service life of the memory.
  • FIG. 3 is a schematic structural diagram of a cache (Cache) according to an embodiment of the present invention. As shown in FIG. 3, the Cache in this embodiment may include:
  • the receiving module 31 is configured to receive a data access request sent by the CPU, where the data access request includes an access address;
  • a hit determination module 32, configured to determine, according to the access address, whether data to be accessed is cached in the Cache;
  • the replacement determining module 33 is configured to: when it is determined that the to-be-accessed data is not cached in the Cache, determine, according to the historical access frequency of the Cache lines in the Cache and the type of memory corresponding to each Cache line, whether a Cache line satisfying the first preset condition exists in the Cache, where a Cache line satisfying the first preset condition is a Cache line whose historical access frequency is lower than a preset frequency and that corresponds to dynamic random-access memory (DRAM) type memory, and the memory includes DRAM-type memory and non-volatile memory (NVM) type memory; and, when it is determined that a Cache line satisfying the first preset condition exists in the Cache, select, from the Cache lines satisfying the first preset condition, a first Cache line to be replaced;
  • the reading module 34, configured to read the to-be-accessed data from the memory according to the access address;
  • a replacement module 35 configured to replace the first cache line with a second cache line, where the second cache line includes the access address and the to-be-accessed data;
  • the sending module 36 is configured to send the to-be-accessed data to the CPU.
  • the replacement determining module 33 is specifically configured to: determine whether a Cache line corresponding to DRAM-type memory exists among the first M Cache lines of the least recently used (LRU) list of the Cache, where the first M Cache lines of the LRU list are Cache lines whose historical access frequency is lower than the preset frequency; and, when such a Cache line exists, select, among the Cache lines corresponding to DRAM-type memory in the determined first M Cache lines, the first Cache line as the first Cache line.
  • the replacement determining module 33 is further configured to: when no Cache line satisfying the first preset condition exists in the Cache, determine whether a Cache line satisfying a second preset condition exists in the Cache, where a Cache line satisfying the second preset condition is a Cache line whose historical access frequency is lower than the preset frequency, that corresponds to NVM-type memory, and that has not been modified; and, when such a Cache line exists, select, from the Cache lines satisfying the second preset condition, the first Cache line to be replaced.
  • the replacement determining module 33 is specifically configured to: determine whether a Cache line that corresponds to NVM-type memory and whose Modify bit is clean exists among the first N Cache lines of the LRU list, where the first N Cache lines are Cache lines whose historical access frequency is lower than the preset frequency; and, when such a Cache line exists, select, among those Cache lines in the determined first N Cache lines, the first Cache line as the first Cache line.
  • the replacement determination module 33 is further configured to: when no Cache line satisfying the second preset condition exists in the Cache, determine the frontmost Cache line of the LRU list as the first Cache line.
  • the replacement module 35 is further configured to: after the first Cache line is replaced with the second Cache line, record the type of the memory corresponding to the second Cache line according to the type of the storage medium, in the memory, to which the access address points.
  • the Cache of this embodiment may be used to implement the technical solutions of the foregoing method embodiments; the implementation principles and technical effects are similar and are not described here again.
  • an embodiment of the present invention further provides a computer system. For the structure of the computer system embodiment, reference may be made to the architecture shown in FIG. 1: it includes a processor (Processor) 11 and a hybrid memory (Hybrid Memory) 12, where the Processor 11 may include a CPU 111, a CPU cache (Cache) 112, and a memory controller (Memory controller) 113, the Hybrid Memory 12 may include a DRAM 121 and an NVM 122, and the Hybrid Memory 12 and the Memory controller 113 may be connected through a memory bus (Memory Bus) 13;
  • the Cache 112 may adopt the structure described in the foregoing Cache embodiment, and the Cache 112 may perform the technical solutions of the foregoing method embodiments; the implementation principles and technical effects are similar and are not described here again.
  • a person of ordinary skill in the art may understand that all or some of the steps of the foregoing method embodiments may be implemented by a program instructing related hardware. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A data caching method, a cache, and a computer system. In the method, when an access request misses and the Cache needs to determine a Cache line to be replaced, the Cache considers not only the historical access frequency of each Cache line but also the memory type corresponding to each Cache line, so that Cache lines corresponding to DRAM-type memory can be replaced preferentially. This reduces the amount of Cache space occupied by data stored in DRAM and allows the Cache to cache more of the data stored in the NVM, so that access requests for data stored in the NVM can find the corresponding data in the Cache as far as possible, thereby reducing the cases in which data must be read from the NVM, reducing the delay of reading data from the NVM, and effectively improving access efficiency.

Description

Data caching method, cache and computer system
Technical Field
The embodiments of the present invention relate to storage technologies, and in particular, to a data caching method, a cache, and a computer system.
Background
At present, more and more applications are data-centric, for example, Internet applications and big-data applications. Such applications require powerful storage support.
In the prior art, dynamic random-access memory (Dynamic Random-Access Memory, DRAM for short) is usually used as the memory of a computer system. However, limited by the manufacturing process, DRAM has a small capacity and high energy consumption, and can hardly meet applications' requirements for large capacity and low energy consumption. In recent years, non-volatile memory (Non-Volatile Memory, NVM for short) has been widely adopted; it has the advantages of large storage capacity and low energy consumption, and using NVM instead of DRAM as the memory of a computer system can meet applications' requirements for large capacity and low energy consumption. However, compared with DRAM, NVM has longer read/write latency. Precisely because DRAM and NVM each have advantages and disadvantages, the prior art further uses a hybrid memory composed of DRAM and NVM, in the hope of providing applications with large-capacity, low-energy, high-performance memory.
Summary
The embodiments of the present invention provide a data caching method, a cache, and a computer system.
According to a first aspect, an embodiment of the present invention provides a data caching method, performed by a cache (Cache), including:
receiving a data access request sent by a CPU, where the data access request includes an access address;
determining, according to the access address, whether data to be accessed is cached in the Cache;
when it is determined that the data to be accessed is not cached in the Cache, determining, according to the historical access frequency of the cache lines (Cache lines) in the Cache and the type of memory corresponding to each Cache line, whether a Cache line satisfying a first preset condition exists in the Cache, where a Cache line satisfying the first preset condition is a Cache line whose historical access frequency is lower than a preset frequency and that corresponds to dynamic random-access memory (DRAM) type memory, and the memory includes DRAM-type memory and non-volatile memory (NVM) type memory;
when it is determined that a Cache line satisfying the first preset condition exists in the Cache, selecting, from the Cache lines satisfying the first preset condition, a first Cache line to be replaced;
reading the data to be accessed from the memory according to the access address;
replacing the first Cache line with a second Cache line, where the second Cache line includes the access address and the data to be accessed; and
sending, by the Cache, the data to be accessed to the CPU.
According to a second aspect, an embodiment of the present invention provides a cache (Cache), including:
a receiving module, configured to receive a data access request sent by a CPU, where the data access request includes an access address;
a hit determining module, configured to determine, according to the access address, whether data to be accessed is cached in the Cache;
a replacement determining module, configured to: when it is determined that the data to be accessed is not cached in the Cache, determine, according to the historical access frequency of the Cache lines in the Cache and the type of memory corresponding to each Cache line, whether a Cache line satisfying a first preset condition exists in the Cache, where a Cache line satisfying the first preset condition is a Cache line whose historical access frequency is lower than a preset frequency and that corresponds to DRAM-type memory, and the memory includes DRAM-type memory and NVM-type memory; and, when it is determined that a Cache line satisfying the first preset condition exists in the Cache, select, from the Cache lines satisfying the first preset condition, a first Cache line to be replaced;
a reading module, configured to read the data to be accessed from the memory according to the access address;
a replacement module, configured to replace the first Cache line with a second Cache line, where the second Cache line includes the access address and the data to be accessed; and
a sending module, configured to send the data to be accessed to the CPU.
According to a third aspect, an embodiment of the present invention provides a computer system, including a processor, a hybrid memory, and the cache (Cache) according to the second aspect, where the hybrid memory includes DRAM and NVM, and the processor, the hybrid memory, and the Cache are connected through a bus.
In the embodiments of the present invention, when an access request misses and the Cache needs to determine a Cache line to be replaced, the Cache considers not only the historical access frequency of each Cache line but also the memory type corresponding to each Cache line, so that Cache lines corresponding to DRAM-type memory can be replaced preferentially. This reduces the amount of Cache space occupied by data stored in DRAM and allows the Cache to cache more of the data stored in the NVM, so that access requests for data stored in the NVM can find the corresponding data in the Cache as far as possible, thereby reducing the cases in which data must be read from the NVM, reducing the delay of reading data from the NVM, and effectively improving access efficiency.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description show some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
FIG. 1 is an architecture diagram of a computer system according to an embodiment of the present invention;
FIG. 2 is a flowchart of a data caching method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a cache according to an embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
FIG. 1 is an architecture diagram of a computer system according to an embodiment of the present invention. As shown in FIG. 1, the computer system includes a processor (Processor) 11 and a hybrid memory (Hybrid Memory) 12, where the Processor 11 may include a CPU 111, a cache (Cache) 112, and a memory controller (Memory controller) 113, and the Hybrid Memory 12 may include a DRAM 121 and an NVM 122. The Processor 11, the Hybrid Memory 12, and the Cache 112 are connected through a bus, and the Hybrid Memory 12 and the Memory controller 113 may be connected through a memory bus (Memory Bus) 13.
When performing data access, the CPU may send a data access request to the Cache, where the data access request includes an access address. The Cache caches part of the data in the Hybrid Memory to improve response speed. Therefore, the Cache first determines, according to the access address, whether the data requested by the CPU is cached in the Cache; in other words, the Cache first determines, according to the access address, whether the requested data can be hit in the Cache. When the Cache hits, that is, when it is determined that the data to be accessed is cached in the Cache, the Cache can directly return the requested data to the CPU. When the Cache misses, that is, when it is determined that the data to be accessed is not cached in the Cache, the data access request is sent to the Hybrid Memory through the Memory controller, so as to read the data requested by the CPU from the Hybrid Memory.
Because the cache space of the Cache is generally small, during data access the Cache needs to continually update its cached content according to access behavior, so as to meet changing access demands. Specifically, when a data access hits in the Cache, no cache line (Cache line) replacement is performed; when a data access misses in the Cache, the Cache needs to select, from the currently cached Cache lines, a Cache line to be replaced, and replace it with a new Cache line read from the memory.
A person skilled in the art knows that the exchange of data between the Cache and the memory is performed by a cache controller (Cache controller), and the Cache line is the minimum operation unit of the Cache controller. In other words, when the Cache controller writes data to the memory, it writes one line of data at a time in units of Cache lines, and when the Cache controller reads data from the memory, it also reads in units of Cache lines. For ease of description, in the embodiments of the present invention, one Cache line may denote the data of one Cache line, and "replacing a Cache line" means replacing the data of one Cache line in the Cache with the data of one Cache line read from the memory. In the above process, the Cache finds, among the currently cached Cache lines, the Cache line with the lowest access frequency and determines it as the Cache line to be replaced. However, the Cache is not aware of whether the memory type corresponding to the Cache line to be replaced is the DRAM type or the NVM type, that is, it is not aware of whether the Cache line to be replaced comes from the DRAM or from the NVM.
Analysis of DRAM and several NVMs shows that the read/write performance of DRAM and NVM differs. In the embodiments of the present invention, NVM includes but is not limited to: phase change memory (Phase Change Memory, PCM for short), spin-transfer torque magnetic random-access memory (Spin Transfer Torque-Magnetic Random Access Memory, STT-MRAM for short), and resistive random-access memory (Resistive Random Access Memory, RRAM for short). The read/write performance of DRAM and NVM can be shown in Table 1:

Table 1

                 DRAM    PCM       STT-MRAM   RRAM
Read latency     <10ns   12ns      35ns       <50ns
Write latency    <10ns   100ns     35ns       <0.3ns
Write cycles     >1E16   1.00E+09  >1E12      1.00E+12
Retention time   64ms    >10y      >10y       >10y

As can be seen from Table 1, although the storage capacity of NVM is larger than that of DRAM and its energy consumption is lower than that of DRAM, the read/write latency of NVM is greater than that of DRAM, and NVM is subject to a limit on the number of writes.
Therefore, with the above prior art, if the memory type corresponding to the Cache line to be replaced as determined by the Cache is the NVM type, that Cache line will be deleted from the Cache, and subsequent access requests must fetch the data from the NVM of the Hybrid Memory. Since the read/write latency of NVM is relatively large compared with DRAM, this inevitably causes access-latency problems and cannot meet applications' demand for low access latency.
Therefore, in the method embodiments, when determining the Cache line to be replaced, the Cache considers not only the historical access frequency of each currently cached Cache line but also the memory type corresponding to each Cache line, and preferentially replaces Cache lines whose memory type is the DRAM type, that is, Cache lines that come from DRAM. In this way, even if a DRAM Cache line is deleted from the Cache and subsequent data accesses need to fetch the data from the DRAM, the access latency is not excessive. A Cache line whose memory type is the DRAM type means that the data in that Cache line is stored on the DRAM in the memory; a Cache line whose memory type is the NVM type means that the data in that Cache line is stored on the NVM in the memory.
The technical solutions of the present invention are described in detail below using specific embodiments.
FIG. 2 is a flowchart of a data caching method according to an embodiment of the present invention. The method may be performed by the Cache, and specifically by the Cache controller in the Cache. As shown in FIG. 1 and FIG. 2, the method of this embodiment may include:
S201. The Cache receives a data access request sent by the CPU, where the data access request includes an access address.
S202. The Cache determines, according to the access address, whether the data to be accessed is cached in the Cache; if yes, S207 is performed; if no, S203 is performed.
Specifically, the CPU may receive a data access request sent by an application and forward the request to the Cache. The Cache may compare the access address requested by the CPU with the addresses of its cached Cache lines to determine whether the requested data is cached in the Cache, that is, whether there is a hit. In a specific implementation, if the mapping policy of the Cache is fully associative, the Cache searches and compares over the entire cache. If the mapping policy is not fully associative but uses set partitioning, the Cache may determine, according to the index bits in the access address, the set in which the access address is located in the Cache, and then determine, according to the tag bits in the access address, whether that set contains the access address. If it does, the Cache may determine, according to the valid bit, whether the cached data is valid; if the data is valid, the corresponding data can be located according to the data offset in the access address and returned to the CPU.
S203. The Cache determines, according to the historical access frequency of the Cache lines in the Cache and the type of memory corresponding to each Cache line, whether a Cache line satisfying a first preset condition exists in the Cache; if yes, S204 is performed; otherwise, S208 is performed.
A Cache line satisfying the first preset condition is a Cache line whose historical access frequency is lower than a preset frequency and that corresponds to dynamic random-access memory (DRAM) type memory, where the memory includes DRAM-type memory and non-volatile memory (NVM) type memory.
S204. Select, from the Cache lines satisfying the first preset condition, a first Cache line to be replaced.
When the data to be accessed is not cached in the Cache, that is, on a miss, the Cache needs to select one of the currently cached Cache lines as the Cache line to be replaced; the Cache line so determined is the first Cache line. In determining the first Cache line, both the historical access frequency of each currently cached Cache line and the memory type corresponding to each Cache line must be considered. The historical access frequency of a Cache line characterizes the access heat of the corresponding cached data, and the memory type corresponding to a Cache line characterizes whether the Cache line comes from the DRAM or from the NVM.
In this embodiment, the Cache's strategy for determining the first Cache line may be to replace, on the basis of historical access frequency, Cache lines whose memory type is DRAM wherever possible, that is, to preferentially replace Cache lines whose memory type is the DRAM type.
For example, the Cache may first select, from all cached Cache lines, several Cache lines whose historical access frequency is lower than the preset frequency, and then, according to the memory type corresponding to each of these Cache lines, determine a Cache line corresponding to DRAM-type memory as the first Cache line to be replaced. If these Cache lines include two or more Cache lines whose memory type is DRAM, the Cache line with the lowest historical access frequency among those whose memory type is DRAM may be determined as the first Cache line to be replaced.
S205. The Cache reads the data to be accessed from the memory according to the access address.
After determining the first Cache line to be replaced, the Cache can read the data to be accessed from the memory according to the access address; the data to be accessed may be stored either on the DRAM or on the NVM.
S206. The Cache replaces the first Cache line with a second Cache line, where the second Cache line includes the access address and the data to be accessed.
After reading the data to be accessed, the Cache may replace the first Cache line with the second Cache line. It should be noted that, in the embodiments of the present invention, for ease of description, the data of one Cache line read from the memory is called the second Cache line. Replacing the first Cache line with the second Cache line means caching the data read from the memory in the Cache and deleting the data of the first Cache line or writing it back to the memory. In a specific implementation, a flag bit (Location) may be added in the Cache for each Cache line to identify whether the memory type corresponding to that Cache line is the DRAM type or the NVM type.
The Cache may record the memory type corresponding to the second Cache line, that is, set the flag bit, according to the type of the storage medium, in the memory, to which the access address points. For example, the flag may occupy one bit: the bit being 1 indicates that the corresponding memory type is the DRAM type, and the bit being 0 indicates that the corresponding memory type is the NVM type.
When the first Cache line is deleted from the Cache, if the Modify bit of the first Cache line to be replaced is clean, the data in the first Cache line has not been modified and is consistent with the data stored in the memory; in this case, the data of the first Cache line need not be written back to the memory and can be deleted directly. If the Modify bit of the first Cache line to be replaced is dirty, the data in the first Cache line has been modified and is inconsistent with the data stored in the memory; in this case, the data of the first Cache line must first be written back to the memory before the first Cache line is deleted.
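The replacement step just described (write a dirty victim back first, delete a clean one directly, then install the new line with its Location flag set and its Modify bit cleared) can be sketched as follows. The dictionary-based line model, the Location encoding, and the 2 GB DRAM layout are illustrative assumptions:

```python
DRAM, NVM = 1, 0  # assumed Location encoding: 1 = DRAM-type, 0 = NVM-type memory

def in_dram_range(addr, dram_size=2 << 30):
    """Assumed layout: the first 2 GB of the hybrid memory is DRAM."""
    return addr < dram_size

def replace_line(cache, victim, new_addr, new_data, memory):
    """Replace the first Cache line (`victim`) with the second Cache line."""
    if victim['modify']:                    # dirty: write back before deleting
        memory[victim['addr']] = victim['data']
    cache.remove(victim)                    # a clean victim is deleted directly
    cache.append({
        'addr': new_addr,
        'data': new_data,
        'modify': False,                    # newly read line is unmodified
        'location': DRAM if in_dram_range(new_addr) else NVM,
    })
```

Only the dirty path touches memory, which is what makes clean NVM lines the cheaper victims in S208 below.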
S207. The Cache sends the data to be accessed to the CPU, and the procedure ends.
After completing the above operations, the Cache can send the read data to the CPU.
In summary, in the method of this embodiment, when an access request misses and the Cache needs to determine a Cache line to be replaced, the Cache considers not only the historical access frequency of each Cache line but also the memory type corresponding to each Cache line, so that Cache lines corresponding to DRAM-type memory can be replaced preferentially. This reduces the amount of Cache space occupied by data stored in DRAM and allows the Cache to cache more of the data stored in the NVM, so that access requests for data stored in the NVM can find the corresponding data in the Cache as far as possible, thereby reducing the cases in which data must be read from the NVM, reducing the delay of reading data from the NVM, and effectively improving access efficiency.
The above embodiment may obtain the historical access frequency of each Cache line by means of a least recently used (LRU) list.
Specifically, the LRU list records Cache lines in order of access frequency from low to high. When the Cache determines whether a Cache line satisfying the first preset condition exists in the Cache, this may specifically be:
determining whether a Cache line corresponding to DRAM-type memory exists among the first M Cache lines of the LRU list of the Cache, where the first M Cache lines of the LRU list are Cache lines whose historical access frequency is lower than the preset frequency;
when it is determined that a Cache line corresponding to DRAM-type memory exists among the first M Cache lines of the LRU list, selecting, among the Cache lines corresponding to DRAM-type memory in the determined first M Cache lines, the first Cache line as the first Cache line.
Here, "the first" refers to the frontmost Cache line among the Cache lines corresponding to DRAM-type memory in the first M Cache lines of the LRU list.
It should be noted that if M is set to a larger value, the probability that a Cache line whose memory type is DRAM is replaced increases; however, M cannot be set too large, otherwise data stored in DRAM will be unable to enter the Cache. A person skilled in the art may set the value of M as required.
S208. The Cache determines whether a Cache line satisfying a second preset condition exists in the Cache; if yes, S209 is performed; otherwise, S210 is performed.
A Cache line satisfying the second preset condition is a Cache line whose historical access frequency is lower than the preset frequency, that corresponds to NVM-type memory, and that has not been modified.
Specifically, when the Cache determines that no Cache line satisfying the first preset condition exists in the Cache, the Cache may have to replace a Cache line whose memory type is NVM.
Analysis shows that if the Modify bit of a Cache line is clean, that is, the data in the Cache line has not been modified and is consistent with the data stored in the memory, the data of the Cache line need not be written back to the memory at replacement; if the Modify bit of a Cache line is dirty, that is, the data in the Cache line has been modified and is inconsistent with the data stored in the memory, the data of the Cache line must first be written back to the memory at replacement. However, NVM is subject to a limit on the number of writes; therefore, when a Cache line whose memory type is NVM must be replaced, a Cache line whose Modify bit is clean can be replaced preferentially, thereby minimizing the number of writes to the NVM.
S209. Select, from the Cache lines satisfying the second preset condition, the first Cache line to be replaced, and perform S205.
The Cache's determination of whether a Cache line satisfying the second preset condition exists in the Cache may specifically be:
determining whether a Cache line that corresponds to NVM-type memory and whose Modify flag indicates clean exists among the first N Cache lines of the LRU list of the Cache, where the first N Cache lines are Cache lines whose historical access frequency is lower than the preset frequency;
when it is determined that a Cache line that corresponds to NVM-type memory and whose Modify bit is clean exists among the first N Cache lines of the LRU list, selecting, among such Cache lines in the determined first N Cache lines, the first Cache line as the first Cache line.
In a specific implementation, the values of M and N can be tuned according to application behavior. Because the replacement latency of Cache lines of different memory types satisfies DRAM < NVM (clean) < NVM (dirty), the residence time of Cache lines from the NVM in the Cache can be appropriately extended; therefore, N ≤ M can generally be set.
S210. The Cache selects the Cache line with the lowest historical access frequency as the first Cache line, and performs S205.
Further, when it is determined that no Cache line satisfying the second preset condition exists in the Cache, the frontmost Cache line of the LRU list is determined as the first Cache line.
Specifically, if no Cache line whose memory type is NVM and whose Modify bit is clean exists among the first N Cache lines of the LRU list, the Cache can determine the frontmost Cache line of the LRU list as the first Cache line; the first Cache line so determined is a Cache line whose memory type is NVM and whose Modify bit is dirty.
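Taken together, S203–S210 define a three-tier victim search over the LRU list, which can be sketched as follows. The default values of M and N and the dictionary-based line fields are illustrative assumptions; the list is assumed ordered from lowest to highest access frequency:

```python
def choose_victim(lru_list, M=5, N=3):
    """Select the first Cache line to be replaced.

    Tier 1: the frontmost DRAM-type line among the first M lines (S203/S204).
    Tier 2: the frontmost clean NVM-type line among the first N lines (S208/S209).
    Tier 3: fall back to the head of the LRU list (S210).
    Each line is a dict with 'location' ('DRAM' or 'NVM') and 'modify' (bool).
    """
    for line in lru_list[:M]:                  # first preset condition
        if line['location'] == 'DRAM':
            return line
    for line in lru_list[:N]:                  # second preset condition
        if line['location'] == 'NVM' and not line['modify']:
            return line
    return lru_list[0]                         # lowest historical access frequency
```

Because replacement cost satisfies DRAM < NVM (clean) < NVM (dirty), choosing N ≤ M keeps NVM lines resident longer, matching the tuning note above.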
In another embodiment of the present invention, the Cache may further record the type of the memory corresponding to the second Cache line according to the type of the storage medium, in the memory, to which the access address points.
Specifically, the Cache may obtain the type of the memory corresponding to a Cache line in the following two implementation manners:
Manner 1: determine and record the memory type corresponding to the second Cache line according to the address range, in the memory, to which the access address belongs.
Specifically, in this manner, the physical addresses in the Hybrid Memory are contiguous; for example, the first n GB is DRAM and the last n GB is NVM.
Therefore, the Cache can determine whether the access address belongs to the address range of the DRAM or to the address range of the NVM. If the access address belongs to the address range of the DRAM, the Cache can record the memory type corresponding to the second Cache line as the DRAM type; if the access address belongs to the address range of the NVM, the Cache can record the memory type corresponding to the second Cache line as the NVM type.
Manner 2: determine and record the memory type corresponding to the second Cache line according to information fed back by the Memory controller.
Specifically, in this manner, the Memory controller may store a mapping table that records the address range of the DRAM or the address range of the NVM, or both; these address ranges may be contiguous or non-contiguous. The Memory controller can determine, according to the access address and the mapping table, the storage location of the access address in the Hybrid Memory. Moreover, one bit may be added to the data exchanged between the Memory controller and the Cache, through which the Memory controller can send the storage location of the access address in the Hybrid Memory to the Cache.
In a specific implementation, on a miss, the Cache needs to send a data read request containing the access address to the Memory controller. The Memory controller reads the data to be accessed from the Hybrid Memory according to the request, and can obtain the storage location of the access address in the Hybrid Memory according to the access address and the mapping table. The Memory controller then sends a data read response to the Cache, which contains the data to be accessed and the storage location of the access address in the Hybrid Memory. Correspondingly, the Cache can record the memory type corresponding to the second Cache line according to the storage location contained in the data read response sent by the Memory controller.
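The second manner (the Memory controller consulting a mapping table of possibly non-contiguous address ranges and returning the medium type alongside the data in its read response) can be sketched as follows. The table contents, field names, and the bit encoding are illustrative assumptions:

```python
# Assumed mapping table: (start, end, medium) ranges; they need not be contiguous.
MAPPING_TABLE = [
    (0x00000000, 0x80000000, 'DRAM'),    # e.g. the first 2 GB
    (0x80000000, 0x100000000, 'NVM'),    # e.g. the last 2 GB
]

def memory_controller_read(addr, hybrid_memory):
    """Controller side: return a read response carrying one extra location bit."""
    medium = next(m for start, end, m in MAPPING_TABLE if start <= addr < end)
    return {
        'data': hybrid_memory.get(addr),
        'location_bit': 1 if medium == 'DRAM' else 0,  # the added bit on the bus
    }

def on_read_response(cache_line, response):
    """Cache side: record the second Cache line's memory type from the response."""
    cache_line['location'] = response['location_bit']
    cache_line['data'] = response['data']
```

Unlike Manner 1, this keeps the layout knowledge inside the Memory controller, so the Cache works unchanged even when the DRAM and NVM ranges are interleaved.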
The technical solutions of the above embodiments are described in detail below using two specific examples.
Assume that in the Hybrid Memory, the first 2 GB is DRAM and the last 2 GB is NVM; the Cache mapping policy is fully associative; the CPU requests data from the Cache with access address 0x08000000, which is located in the first 1 GB of the Hybrid Memory; and the data bus is 32 bits.
Example 1: After searching locally, the Cache does not find the corresponding data, indicating a miss. The Cache then examines the first 5 Cache lines of the LRU list and determines, according to their location bits, that no Cache line whose memory type is DRAM is found. The Cache then checks the Modify bits of the first 3 Cache lines of the LRU list and finds that the Modify bit of the 2nd Cache line is 0, meaning clean; that Cache line is therefore the first Cache line to be replaced. The Cache can read the Cache line containing the data with the access address 0x08000000 into the Cache to replace the first Cache line, and return the read data to the CPU. In addition, the Cache can determine from the access address (0x08000000) that the data is stored in the DRAM, so the newly read Cache line, that is, the second Cache line, is appended to the tail of the LRU list, its location bit is set to 0 (indicating that the second Cache line comes from the DRAM), and its Modify bit is set to 0 (indicating that the data of the second Cache line has not been modified).
Example 2: After searching locally, the Cache does not find the corresponding data, indicating a miss. The Cache then examines the first 5 Cache lines of the LRU list and determines, according to their location bits, that a Cache line whose memory type is DRAM is found; that Cache line is the first Cache line to be replaced. If the Modify bit of that DRAM Cache line is 1 (indicating that the data of the first Cache line has been modified), the Cache first writes the data of the first Cache line back; then the Cache can read the Cache line containing the data with the access address 0x08000000 into the Cache to replace the first Cache line, and return the read data to the CPU. In addition, the Cache can determine from the access address (0x08000000) that the data is stored in the DRAM, so the newly read Cache line, that is, the second Cache line, is appended to the tail of the LRU list, its location bit is set to 0 (indicating that the second Cache line comes from the DRAM), and its Modify bit is set to 0 (indicating that the data of the second Cache line has not been modified).
As can be seen from the above process, in the embodiments of the present invention, when determining the Cache line to be replaced, the Cache preferentially replaces Cache lines whose memory type is DRAM, thereby retaining Cache lines whose memory type is NVM as far as possible and avoiding the delay of reading data from the NVM. When there is no replaceable Cache line whose memory type is DRAM and a Cache line whose memory type is NVM must be replaced, the Cache preferentially replaces a Cache line whose memory type is NVM and whose Modify bit is clean, thereby minimizing the number of writes to the NVM and extending the service life of the memory.
FIG. 3 is a schematic structural diagram of a cache (Cache) according to an embodiment of the present invention. As shown in FIG. 3, the Cache of this embodiment may include:
a receiving module 31, configured to receive a data access request sent by a CPU, where the data access request includes an access address;
a hit determining module 32, configured to determine, according to the access address, whether data to be accessed is cached in the Cache;
a replacement determining module 33, configured to: when it is determined that the data to be accessed is not cached in the Cache, determine, according to the historical access frequency of the Cache lines in the Cache and the type of memory corresponding to each Cache line, whether a Cache line satisfying a first preset condition exists in the Cache, where a Cache line satisfying the first preset condition is a Cache line whose historical access frequency is lower than a preset frequency and that corresponds to DRAM-type memory, and the memory includes DRAM-type memory and NVM-type memory; and, when it is determined that a Cache line satisfying the first preset condition exists in the Cache, select, from the Cache lines satisfying the first preset condition, a first Cache line to be replaced;
a reading module 34, configured to read the data to be accessed from the memory according to the access address;
a replacement module 35, configured to replace the first Cache line with a second Cache line, where the second Cache line includes the access address and the data to be accessed;
a sending module 36, configured to send the data to be accessed to the CPU.
Further, the replacement determining module 33 is specifically configured to:
determine whether a Cache line corresponding to DRAM-type memory exists among the first M Cache lines of the least recently used (LRU) list of the Cache, where the first M Cache lines of the LRU list are Cache lines whose historical access frequency is lower than the preset frequency; and
when it is determined that a Cache line corresponding to DRAM-type memory exists among the first M Cache lines of the LRU list, select, among the Cache lines corresponding to DRAM-type memory in the determined first M Cache lines, the first Cache line as the first Cache line.
Further, the replacement determining module 33 is further configured to:
when it is determined that no Cache line satisfying the first preset condition exists in the Cache, determine whether a Cache line satisfying a second preset condition exists in the Cache, where a Cache line satisfying the second preset condition is a Cache line whose historical access frequency is lower than the preset frequency, that corresponds to NVM-type memory, and that has not been modified; and
when it is determined that a Cache line satisfying the second preset condition exists in the Cache, select, from the Cache lines satisfying the second preset condition, the first Cache line to be replaced.
Further, the replacement determining module 33 is specifically configured to:
determine whether a Cache line that corresponds to NVM-type memory and whose Modify flag indicates clean exists among the first N Cache lines of the LRU list of the Cache, where the first N Cache lines are Cache lines whose historical access frequency is lower than the preset frequency; and
when it is determined that a Cache line that corresponds to NVM-type memory and whose Modify bit is clean exists among the first N Cache lines of the LRU list, select, among such Cache lines in the determined first N Cache lines, the first Cache line as the first Cache line.
Still further, the replacement determining module 33 is further configured to:
when it is determined that no Cache line satisfying the second preset condition exists in the Cache, determine the frontmost Cache line of the LRU list as the first Cache line.
Further, the replacement module 35 is further configured to:
after the first Cache line is replaced with the second Cache line, record the type of the memory corresponding to the second Cache line according to the type of the storage medium, in the memory, to which the access address points.
The Cache of this embodiment may be used to perform the technical solutions of the foregoing method embodiments; the implementation principles and technical effects are similar and are not described here again.
An embodiment of the present invention may further provide a computer system embodiment. For the schematic structure of the computer system embodiment, reference may be made to the architecture shown in FIG. 1, which includes a processor (Processor) 11 and a hybrid memory (Hybrid Memory) 12, where the Processor 11 may include a CPU 111, a CPU cache (Cache) 112, and a memory controller (Memory controller) 113, the Hybrid Memory 12 may include a DRAM 121 and an NVM 122, and the Hybrid Memory 12 and the Memory controller 113 may be connected through a memory bus (Memory Bus) 13;
the Cache 112 may adopt the structure described in the foregoing Cache embodiment, and the Cache 112 may perform the technical solutions of the foregoing method embodiments; the implementation principles and technical effects are similar and are not described here again.
A person of ordinary skill in the art may understand that all or some of the steps of the foregoing method embodiments may be implemented by a program instructing related hardware. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the foregoing embodiments are merely intended to describe the technical solutions of the present invention rather than to limit them. The embodiments provided in this application are merely illustrative. A person skilled in the art can clearly understand that, for convenience and brevity of description, the descriptions of the foregoing embodiments each have their own emphases; for a part not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments. The features disclosed in the embodiments of the present invention, the claims, and the accompanying drawings may exist independently or in combination.

Claims (13)

  1. A data caching method, performed by a cache (Cache), comprising:
    receiving a data access request sent by a CPU, wherein the data access request comprises an access address;
    determining, according to the access address, whether data to be accessed is cached in the Cache;
    when it is determined that the data to be accessed is not cached in the Cache, determining, according to a historical access frequency of the Cache lines in the Cache and a type of memory corresponding to each Cache line, whether a Cache line satisfying a first preset condition exists in the Cache, wherein a Cache line satisfying the first preset condition comprises a Cache line whose historical access frequency is lower than a preset frequency and which corresponds to dynamic random-access memory (DRAM) type memory, and the memory comprises DRAM-type memory and non-volatile memory (NVM) type memory;
    when it is determined that a Cache line satisfying the first preset condition exists in the Cache, selecting, from the Cache lines satisfying the first preset condition, a first Cache line to be replaced;
    reading the data to be accessed from the memory according to the access address;
    replacing the first Cache line with a second Cache line, wherein the second Cache line comprises the access address and the data to be accessed; and
    sending, by the Cache, the data to be accessed to the CPU.
  2. The method according to claim 1, wherein the determining whether a Cache line satisfying the first preset condition exists in the Cache comprises:
    determining whether a Cache line corresponding to DRAM-type memory exists among the first M Cache lines of a least recently used (LRU) list of the Cache, wherein the first M Cache lines of the LRU list are Cache lines whose historical access frequency is lower than the preset frequency; and
    the selecting, when it is determined that a Cache line satisfying the first preset condition exists in the Cache, from the Cache lines satisfying the first preset condition, a first Cache line to be replaced comprises:
    when it is determined that a Cache line corresponding to DRAM-type memory exists among the first M Cache lines of the LRU list, selecting, among the Cache lines corresponding to DRAM-type memory in the determined first M Cache lines, the first Cache line as the first Cache line.
  3. The method according to claim 1 or 2, further comprising:
    when it is determined that no Cache line satisfying the first preset condition exists in the Cache, determining whether a Cache line satisfying a second preset condition exists in the Cache, wherein a Cache line satisfying the second preset condition comprises a Cache line whose historical access frequency is lower than the preset frequency, which corresponds to NVM-type memory, and which has not been modified; and
    when it is determined that a Cache line satisfying the second preset condition exists in the Cache, selecting, from the Cache lines satisfying the second preset condition, the first Cache line to be replaced.
  4. The method according to claim 3, wherein the determining whether a Cache line satisfying the second preset condition exists in the Cache comprises:
    determining whether a Cache line that corresponds to NVM-type memory and whose Modify flag indicates clean exists among the first N Cache lines of the LRU list of the Cache, wherein the first N Cache lines are Cache lines whose historical access frequency is lower than the preset frequency; and
    the selecting, when it is determined that a Cache line satisfying the second preset condition exists in the Cache, from the Cache lines satisfying the second preset condition, the first Cache line to be replaced comprises:
    when it is determined that a Cache line that corresponds to NVM-type memory and whose Modify bit is clean exists among the first N Cache lines of the LRU list, selecting, among such Cache lines in the determined first N Cache lines, the first Cache line as the first Cache line.
  5. The method according to claim 4, further comprising:
    when it is determined that no Cache line satisfying the second preset condition exists in the Cache, determining the frontmost Cache line of the LRU list as the first Cache line.
  6. The method according to any one of claims 1 to 5, further comprising, after the replacing the first Cache line with a second Cache line:
    recording the type of the memory corresponding to the second Cache line according to the type of the storage medium, in the memory, to which the access address points.
  7. A cache (Cache), comprising:
    a receiving module, configured to receive a data access request sent by a CPU, wherein the data access request comprises an access address;
    a hit determining module, configured to determine, according to the access address, whether data to be accessed is cached in the Cache;
    a replacement determining module, configured to: when it is determined that the data to be accessed is not cached in the Cache, determine, according to a historical access frequency of the Cache lines in the Cache and a type of memory corresponding to each Cache line, whether a Cache line satisfying a first preset condition exists in the Cache, wherein a Cache line satisfying the first preset condition comprises a Cache line whose historical access frequency is lower than a preset frequency and which corresponds to DRAM-type memory, and the memory comprises DRAM-type memory and NVM-type memory; and, when it is determined that a Cache line satisfying the first preset condition exists in the Cache, select, from the Cache lines satisfying the first preset condition, a first Cache line to be replaced;
    a reading module, configured to read the data to be accessed from the memory according to the access address;
    a replacement module, configured to replace the first Cache line with a second Cache line, wherein the second Cache line comprises the access address and the data to be accessed; and
    a sending module, configured to send the data to be accessed to the CPU.
  8. The Cache according to claim 7, wherein the replacement determining module is specifically configured to:
    determine whether a Cache line corresponding to DRAM-type memory exists among the first M Cache lines of a least recently used (LRU) list of the Cache, wherein the first M Cache lines of the LRU list are Cache lines whose historical access frequency is lower than the preset frequency; and
    when it is determined that a Cache line corresponding to DRAM-type memory exists among the first M Cache lines of the LRU list, select, among the Cache lines corresponding to DRAM-type memory in the determined first M Cache lines, the first Cache line as the first Cache line.
  9. The Cache according to claim 7 or 8, wherein the replacement determining module is further configured to:
    when it is determined that no Cache line satisfying the first preset condition exists in the Cache, determine whether a Cache line satisfying a second preset condition exists in the Cache, wherein a Cache line satisfying the second preset condition comprises a Cache line whose historical access frequency is lower than the preset frequency, which corresponds to NVM-type memory, and which has not been modified; and
    when it is determined that a Cache line satisfying the second preset condition exists in the Cache, select, from the Cache lines satisfying the second preset condition, the first Cache line to be replaced.
  10. The Cache according to claim 9, wherein the replacement determining module is specifically configured to:
    determine whether a Cache line that corresponds to NVM-type memory and whose Modify flag indicates clean exists among the first N Cache lines of the LRU list of the Cache, wherein the first N Cache lines are Cache lines whose historical access frequency is lower than the preset frequency; and
    when it is determined that a Cache line that corresponds to NVM-type memory and whose Modify bit is clean exists among the first N Cache lines of the LRU list, select, among such Cache lines in the determined first N Cache lines, the first Cache line as the first Cache line.
  11. The Cache according to claim 10, wherein the replacement determining module is further configured to:
    when it is determined that no Cache line satisfying the second preset condition exists in the Cache, determine the frontmost Cache line of the LRU list as the first Cache line.
  12. The Cache according to any one of claims 7 to 11, wherein the replacement module is further configured to:
    after the first Cache line is replaced with the second Cache line, record the type of the memory corresponding to the second Cache line according to the type of the storage medium, in the memory, to which the access address points.
  13. A computer system, comprising: a processor, a memory, and the cache (Cache) according to any one of claims 7 to 12, wherein the memory comprises DRAM and NVM, and the processor, the memory, and the Cache are connected through a bus.
PCT/CN2015/078502 2014-05-09 2015-05-07 Data caching method, cache and computer system WO2015169245A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP15789250.6A EP3121703B1 (en) 2014-05-09 2015-05-07 Data caching method, cache and computer system
KR1020167028378A KR102036769B1 (ko) 2014-05-09 2015-05-07 데이터 캐싱 방법, 캐시 및 컴퓨터 시스템
JP2016564221A JP6277572B2 (ja) 2014-05-09 2015-05-07 データキャッシング方法、キャッシュおよびコンピュータシステム
US15/347,776 US10241919B2 (en) 2014-05-09 2016-11-09 Data caching method and computer system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410193960.4 2014-05-09
CN201410193960.4A CN105094686B (zh) 2014-05-09 2014-05-09 数据缓存方法、缓存和计算机系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/347,776 Continuation US10241919B2 (en) 2014-05-09 2016-11-09 Data caching method and computer system

Publications (1)

Publication Number Publication Date
WO2015169245A1 true WO2015169245A1 (zh) 2015-11-12

Family

ID=54392171

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/078502 WO2015169245A1 (zh) 2014-05-09 2015-05-07 数据缓存方法、缓存和计算机系统

Country Status (6)

Country Link
US (1) US10241919B2 (zh)
EP (1) EP3121703B1 (zh)
JP (1) JP6277572B2 (zh)
KR (1) KR102036769B1 (zh)
CN (1) CN105094686B (zh)
WO (1) WO2015169245A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017083949A (ja) * 2015-10-23 2017-05-18 富士通株式会社 キャッシュメモリおよびキャッシュメモリの制御方法
EP3547142A4 (en) * 2016-12-28 2020-04-22 New H3C Technologies Co., Ltd. INFORMATION PROCESSING
CN112289353A (zh) * 2019-07-25 2021-01-29 上海磁宇信息科技有限公司 一种优化的具有ecc功能的mram系统及其操作方法

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108139872B (zh) * 2016-01-06 2020-07-07 华为技术有限公司 一种缓存管理方法、缓存控制器以及计算机系统
CN107229575A (zh) * 2016-03-23 2017-10-03 上海复旦微电子集团股份有限公司 缓存性能的评估方法及装置
CN107229574A (zh) * 2016-03-23 2017-10-03 上海复旦微电子集团股份有限公司 缓存及其控制方法
CN107229546A (zh) * 2016-03-23 2017-10-03 上海复旦微电子集团股份有限公司 缓存的模拟方法及装置
CN108021514B (zh) * 2016-10-28 2020-11-06 华为技术有限公司 一种缓存替换的方法和设备
CN106569961B (zh) * 2016-10-31 2023-09-05 珠海市一微半导体有限公司 一种基于访存地址连续性的cache模块及其访存方法
US10417134B2 (en) * 2016-11-10 2019-09-17 Oracle International Corporation Cache memory architecture and policies for accelerating graph algorithms
CN107368437B (zh) * 2017-07-24 2021-06-29 郑州云海信息技术有限公司 一种末级缓存管理方法及系统
CN111433749B (zh) * 2017-10-12 2023-12-08 拉姆伯斯公司 具有dram高速缓存的非易失性物理存储器
CN108572932B (zh) * 2017-12-29 2020-05-19 贵阳忆芯科技有限公司 多平面nvm命令融合方法与装置
US20190303037A1 (en) * 2018-03-30 2019-10-03 Ca, Inc. Using sequential read intention to increase data buffer reuse
JP7071640B2 (ja) * 2018-09-20 2022-05-19 富士通株式会社 演算処理装置、情報処理装置及び演算処理装置の制御方法
CN110134514B (zh) * 2019-04-18 2021-04-13 华中科技大学 基于异构内存的可扩展内存对象存储系统
CN110347338B (zh) * 2019-06-18 2021-04-02 重庆大学 混合内存数据交换处理方法、系统及可读存储介质
CN112667528A (zh) * 2019-10-16 2021-04-16 华为技术有限公司 一种数据预取的方法及相关设备
CN111221749A (zh) * 2019-11-15 2020-06-02 新华三半导体技术有限公司 数据块写入方法、装置、处理器芯片及Cache
CN112612727B (zh) * 2020-12-08 2023-07-07 成都海光微电子技术有限公司 一种高速缓存行替换方法、装置及电子设备
WO2022178869A1 (zh) * 2021-02-26 2022-09-01 华为技术有限公司 一种缓存替换方法和装置
CN113421599A (zh) * 2021-06-08 2021-09-21 珠海市一微半导体有限公司 一种预缓存外部存储器数据的芯片及其运行方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236530A (zh) * 2008-01-30 2008-08-06 清华大学 高速缓存替换策略的动态选择方法
CN102831087A (zh) * 2012-07-27 2012-12-19 国家超级计算深圳中心(深圳云计算中心) 基于混合存储器的数据读写处理方法和装置
CN103092534A (zh) * 2013-02-04 2013-05-08 中国科学院微电子研究所 一种内存结构的调度方法和装置
CN103548005A (zh) * 2011-12-13 2014-01-29 华为技术有限公司 替换缓存对象的方法和装置
US20140032818A1 (en) * 2012-07-30 2014-01-30 Jichuan Chang Providing a hybrid memory

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5307477A (en) 1989-12-01 1994-04-26 Mips Computer Systems, Inc. Two-level cache memory system
US6349363B2 (en) * 1998-12-08 2002-02-19 Intel Corporation Multi-section cache with different attributes for each section
US7457926B2 (en) 2005-05-18 2008-11-25 International Business Machines Corporation Cache line replacement monitoring and profiling
US7478197B2 (en) 2006-07-18 2009-01-13 International Business Machines Corporation Adaptive mechanisms for supplying volatile data copies in multiprocessor systems
US7568068B2 (en) * 2006-11-13 2009-07-28 Hitachi Global Storage Technologies Netherlands B. V. Disk drive with cache having volatile and nonvolatile memory
CN100481028C (zh) * 2007-08-20 2009-04-22 杭州华三通信技术有限公司 一种利用缓存实现数据存储的方法和装置
US7962695B2 (en) 2007-12-04 2011-06-14 International Business Machines Corporation Method and system for integrating SRAM and DRAM architecture in set associative cache
JP2011022933A (ja) * 2009-07-17 2011-02-03 Toshiba Corp メモリ管理装置を含む情報処理装置及びメモリ管理方法
US20100185816A1 (en) * 2009-01-21 2010-07-22 Sauber William F Multiple Cache Line Size
EP2455865B1 (en) 2009-07-17 2020-03-04 Toshiba Memory Corporation Memory management device
US8914568B2 (en) 2009-12-23 2014-12-16 Intel Corporation Hybrid memory architectures
US9448938B2 (en) * 2010-06-09 2016-09-20 Micron Technology, Inc. Cache coherence protocol for persistent memories
CN102253901B (zh) 2011-07-13 2013-07-24 清华大学 一种基于相变内存的读写区分数据存储替换方法
CN102760101B (zh) * 2012-05-22 2015-03-18 中国科学院计算技术研究所 一种基于ssd 的缓存管理方法及系统
CN103927203B (zh) 2014-03-26 2018-06-26 上海新储集成电路有限公司 一种计算机系统及控制方法
CN103914403B (zh) 2014-04-28 2016-11-02 中国科学院微电子研究所 一种混合内存访问情况的记录方法及其系统
CN103927145B (zh) 2014-04-28 2017-02-15 中国科学院微电子研究所 一种基于混合内存的系统休眠、唤醒方法及装置
CN104035893A (zh) 2014-06-30 2014-09-10 浪潮(北京)电子信息产业有限公司 一种在计算机异常掉电时的数据保存方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236530A (zh) * 2008-01-30 2008-08-06 清华大学 高速缓存替换策略的动态选择方法
CN103548005A (zh) * 2011-12-13 2014-01-29 华为技术有限公司 替换缓存对象的方法和装置
CN102831087A (zh) * 2012-07-27 2012-12-19 国家超级计算深圳中心(深圳云计算中心) 基于混合存储器的数据读写处理方法和装置
US20140032818A1 (en) * 2012-07-30 2014-01-30 Jichuan Chang Providing a hybrid memory
CN103092534A (zh) * 2013-02-04 2013-05-08 中国科学院微电子研究所 一种内存结构的调度方法和装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG, QIANG ET AL.: "Efficient Management with Hotspots Control for Hybrid Memory System", MICROELECTRONICS & COMPUTER, vol. 31, no. 1, 31 January 2014 (2014-01-31), pages 1, XP008182988 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017083949A (ja) * 2015-10-23 2017-05-18 富士通株式会社 キャッシュメモリおよびキャッシュメモリの制御方法
EP3547142A4 (en) * 2016-12-28 2020-04-22 New H3C Technologies Co., Ltd. INFORMATION PROCESSING
CN112289353A (zh) * 2019-07-25 2021-01-29 上海磁宇信息科技有限公司 一种优化的具有ecc功能的mram系统及其操作方法
CN112289353B (zh) * 2019-07-25 2024-03-12 上海磁宇信息科技有限公司 一种优化的具有ecc功能的mram系统及其操作方法

Also Published As

Publication number Publication date
EP3121703A1 (en) 2017-01-25
US10241919B2 (en) 2019-03-26
JP6277572B2 (ja) 2018-02-14
KR102036769B1 (ko) 2019-10-25
CN105094686A (zh) 2015-11-25
EP3121703B1 (en) 2019-11-20
JP2017519275A (ja) 2017-07-13
US20170060752A1 (en) 2017-03-02
EP3121703A4 (en) 2017-05-17
KR20160132458A (ko) 2016-11-18
CN105094686B (zh) 2018-04-10

Similar Documents

Publication Publication Date Title
WO2015169245A1 (zh) 数据缓存方法、缓存和计算机系统
CN102760101B (zh) 一种基于ssd 的缓存管理方法及系统
US9298384B2 (en) Method and device for storing data in a flash memory using address mapping for supporting various block sizes
US9449005B2 (en) Metadata storage system and management method for cluster file system
CN103777905B (zh) 一种软件定义的固态盘融合存储方法
US20110231598A1 (en) Memory system and controller
CN108804350A (zh) 一种内存访问方法及计算机系统
US10740251B2 (en) Hybrid drive translation layer
CN107391398B (zh) 一种闪存缓存区的管理方法及系统
US9268705B2 (en) Data storage device and method of managing a cache in a data storage device
CN109952565B (zh) 内存访问技术
WO2016095761A1 (zh) 缓存的处理方法和装置
US20090094391A1 (en) Storage device including write buffer and method for controlling the same
CN104991743B (zh) 应用于固态硬盘阻变存储器缓存的损耗均衡方法
US20160124639A1 (en) Dynamic storage channel
WO2023000536A1 (zh) 一种数据处理方法、系统、设备以及介质
WO2013189186A1 (zh) 非易失性存储设备的缓存管理方法及装置
JP2017126334A (ja) 記憶装置及びその動作方法並びにシステム
CN106909323B (zh) 适用于dram/pram混合主存架构的页缓存方法及混合主存架构系统
US11126624B2 (en) Trie search engine
US20240020014A1 (en) Method for Writing Data to Solid-State Drive
CN108647157A (zh) 一种基于相变存储器的映射管理方法及固态硬盘
CN104268102A (zh) 一种存储服务器采用混合方式写缓存的方法
CN110968527B (zh) Ftl提供的缓存
JP2013222434A (ja) キャッシュ制御装置、キャッシュ制御方法、及びそのプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15789250

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20167028378

Country of ref document: KR

Kind code of ref document: A

REEP Request for entry into the european phase

Ref document number: 2015789250

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015789250

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2016564221

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE