KR101443678B1 - A buffer cache method for considering both hybrid main memory and flash memory storages - Google Patents


Info

Publication number
KR101443678B1
KR101443678B1 KR1020130064098A KR20130064098A
Authority
KR
South Korea
Prior art keywords
buffer
block
hac
dram
flash memory
Prior art date
Application number
KR1020130064098A
Other languages
Korean (ko)
Inventor
류연승
Original Assignee
명지대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 명지대학교 산학협력단 filed Critical 명지대학교 산학협력단
Priority to KR1020130064098A priority Critical patent/KR101443678B1/en
Application granted granted Critical
Publication of KR101443678B1 publication Critical patent/KR101443678B1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12 Replacement control
    • G06F12/121 Replacement control using replacement algorithms
    • G06F12/123 Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14 Handling requests for interconnection or transfer
    • G06F13/16 Handling requests for interconnection or transfer for access to memory bus
    • G06F13/1668 Details of memory controller
    • G06F13/1673 Details of memory controller using buffers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0683 Plurality of storage devices
    • G06F3/0685 Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present invention relates to a buffer cache method for a hybrid main memory and a flash memory storage device, the method comprising: (a) determining whether a HAC references a page in a block of the flash memory for the first time; (b) when the page is referenced for the first time in the block of the flash memory as a result of the determination in step (a), allocating, by the HAC, a new buffer and connecting the buffer of the page to the header of the block; (c) determining whether the HAC has a free DRAM buffer; (d) if a free DRAM buffer exists as a result of the determination in step (c), preferentially allocating, by the HAC, the new buffer in the DRAM; (e) maintaining, by the HAC, a memory type (DRAM or MRAM) of each block and allocating a buffer of the same memory type to the block; (f) storing the page in the allocated buffer; and (g) moving all the pages of the same block to the MRU location each time a page is referenced by the HAC.
According to the present invention, the buffer cache is controlled in consideration of the limited write operation performance of the DRAM / MRAM hybrid main memory, thereby minimizing the number of erase operations of the flash memory storage device.

Description


BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a buffer cache method for a hybrid main memory and a flash memory storage device and, more particularly, to a technique for minimizing the number of erase operations of a flash memory storage device in consideration of the limited write performance of a DRAM/MRAM hybrid main memory.

Most modern operating systems use a buffer cache mechanism to improve I/O performance, which is limited by slow secondary storage. Over the past few decades, buffer cache schemes have been built on DRAM-based main memory and hard-disk-based secondary storage.

However, recent studies have shown that DRAM-based main memory consumes a significant portion of total system power. This poses a serious problem for mobile computers such as battery-operated smartphones and tablet PCs. Fortunately, low-power nonvolatile memories such as phase-change RAM (PRAM) and magnetic RAM (MRAM) have been developed.

Among nonvolatile memories, MRAM is attracting attention as a next-generation memory for mobile information devices and various computers owing to its high density and low power consumption. To address the energy cost of DRAM-based main memory, recent research has introduced MRAM-based main memory techniques.

An MRAM cell is formed of a magnetic tunnel junction (MTJ) on a single transistor; data is stored as the electrical resistance of the junction, which changes with the surrounding magnetic field (the magnetoresistance effect). The read performance (latency, power consumption) of MRAM is comparable to that of DRAM, but its write performance is worse: write latency is about 1.25 to 2 times, and write power consumption about 5 to 10 times, that of DRAM.

However, MRAM is fast compared with a conventional memory such as flash memory, has low power consumption, and, unlike DRAM, is nonvolatile. Its cell size is also small, giving a high integration density.

In most mobile computers, storage devices based on NAND flash memory are generally used because flash memory consumes less power and is faster than hard disks.

However, flash memory cannot update stored data in place: new contents can be written only after the block holding the data has been erased. In addition, the number of erase operations each block can endure is limited.

It is an object of the present invention to minimize the number of erase operations of a flash memory storage device by controlling a buffer cache in consideration of limited write operation performance of a DRAM / MRAM hybrid main memory.

According to an aspect of the present invention, there is provided a buffer cache method for a hybrid main memory and a flash memory storage device, the method comprising: (a) determining whether a HAC references a page in a block of the flash memory for the first time; (b) when the page is referenced for the first time in the block of the flash memory as a result of the determination in step (a), determining whether the HAC has a free DRAM buffer; (c) if a free DRAM buffer exists as a result of the determination in step (b), preferentially allocating, by the HAC, a new buffer in the DRAM; (d) maintaining, by the HAC, a memory type (DRAM or MRAM) of each block and allocating a buffer of the same memory type to the block; (e) storing the page in the buffer allocated by the HAC; (f) determining whether the HAC has a block header; (g) connecting the buffer of the page to the header of the block when the block header exists as a result of the determination in step (f); and (h) moving all the pages of the same block to the MRU location each time a page is referenced by the HAC.

According to the present invention, the buffer cache is controlled in consideration of the limited write operation performance of the DRAM/MRAM hybrid main memory, thereby minimizing the number of erase operations of the flash memory storage device.

FIG. 1 shows an MRAM cell and the parallel MTJ of an MRAM.
FIG. 2 shows the characteristics of a general MRAM together with a system structure involving a flash memory, an FTL, and a conventional buffer cache technique.
FIG. 3 illustrates a buffer cache structure based on the buffer cache method of a hybrid main memory and a flash memory storage device according to the present invention.
FIG. 4 illustrates trace-based simulation results for a system based on the buffer cache method of a hybrid main memory and a flash memory storage device according to the present invention.
FIG. 5 is a flowchart illustrating the buffer cache method of a hybrid main memory and a flash memory storage device according to the present invention.
FIG. 6 is a flowchart showing the additional steps of the buffer cache method of a hybrid main memory and a flash memory storage device according to the present invention.

Specific features and advantages of the present invention will become more apparent from the following detailed description and the accompanying drawings. The terms and words used in this specification and the claims should not be interpreted only in their ordinary or dictionary sense; on the principle that an inventor may properly define terms to describe the invention in the best way, they should be interpreted with meanings and concepts consistent with the technical idea of the present invention. Detailed descriptions of known functions and constructions are omitted where they would unnecessarily obscure the gist of the invention.

Referring to FIG. 1, a system structure based on a buffer cache method of a hybrid main memory and a flash memory storage apparatus according to the present invention will be described below.

As shown in FIG. 1, a system S based on the buffer cache method of a hybrid main memory and a flash memory storage device according to the present invention includes a main memory 100, a flash memory 200, and an operating system 300.

The main memory 100 includes a DRAM 110 and an MRAM 120 and is connected to the buffer cache 310 of the operating system 300, while the flash memory 200 is connected to the FTL (Flash Translation Layer) 320.

The operating system 300 mediates data processing between the main memory 100 and the virtual memory and file system 330 through the buffer cache 310, and handles data transfer to and from the flash memory 200 through the FTL 320.

Hereinafter, the characteristics of the general MRAM, the flash memory, the FTL, and the conventional buffer cache techniques will be described with reference to FIG. 2.

1) MRAM

First, a cell of an MRAM (Magnetic RAM) uses a magnetic tunnel junction (MTJ) to store binary data.

The MTJ consists of two ferromagnetic layers, a reference layer and a free layer, separated by a tunnel barrier layer. The magnetic direction of the reference layer is fixed, while that of the free layer is either parallel or antiparallel to it, depending on the binary data stored in the cell.

FIG. 2(a) shows the structure of an MRAM cell. Like a DRAM cell, an MRAM cell includes an access transistor connecting the storage element and the bit line. Unlike a DRAM cell, however, the other end of the storage element is connected not to ground but to a sense line.

In the MTJ, data is stored in the magnetic direction of the free layer, which determines the electrical resistance used to read the data stored in the cell.

As shown in FIG. 2(b), when the magnetic fields of the free layer and the reference layer are parallel (that is, aligned in the same direction), the resistance of the MTJ is low and the cell stores logic 0. When they are antiparallel (aligned in opposite directions), the resistance is high and the cell stores logic 1. To read the stored data, a very small voltage is applied between the sense line and the bit line, and the amount of current flow is sensed.

Writing to an MRAM cell, by contrast, operates in current mode rather than voltage mode: a large current must be driven through the MTJ to change the magnetic orientation of the free layer.

Depending on the direction of this current, the free layer becomes parallel or antiparallel to the reference layer, and the amount of current required to write data to the MTJ is much larger than that required to read it.

2) NAND flash memory

NAND flash memory consists of blocks, each of which consists of a fixed number of pages. The basic commands of NAND flash memory are read, write, and erase: the basic unit of reading and writing is the page, while erasure is performed in units of blocks.

In addition, stored data cannot be changed unless the block containing it is erased first, the number of erase operations per block is limited, and an erase always affects the entire block.
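The page/block asymmetry described above can be sketched in a minimal model. This is an illustrative aid, not part of the claimed method; the class name, constants, and interface are assumptions.

```python
# Minimal model of the NAND flash constraints described above: reads and
# writes operate on pages, erases operate on whole blocks, and a page
# cannot be rewritten until its block has been erased (erase-before-write).

PAGES_PER_BLOCK = 4  # illustrative; real blocks contain many more pages

class NandBlock:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK  # None = erased, i.e. writable
        self.erase_count = 0                   # erases per block are limited

    def write(self, page_no, data):
        if self.pages[page_no] is not None:
            # in-place update is not allowed on flash
            raise ValueError("page not erased; erase the whole block first")
        self.pages[page_no] = data

    def read(self, page_no):
        return self.pages[page_no]

    def erase(self):                           # always the entire block
        self.pages = [None] * PAGES_PER_BLOCK
        self.erase_count += 1

blk = NandBlock()
blk.write(0, b"A")
try:
    blk.write(0, b"B")   # rejected: erase-before-write
except ValueError:
    pass
blk.erase()
blk.write(0, b"B")       # succeeds only after the erase
```

Because the rewrite of a single page forces an erase of the whole block, minimizing erase counts is the central goal the FTL and the buffer cache techniques below pursue.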

Therefore, the operating system uses a device driver called the FTL (Flash Translation Layer) to solve this erase-before-write problem. The FTL translates logical addresses to physical addresses so as to reduce the number of erase operations, and most address translation schemes use a log block mechanism.

In addition, a page-level technique called CFLRU (Clean-First Least Recently Used) has been proposed as a buffer cache technique that takes flash memory into account.

Specifically, CFLRU maintains a page list in LRU order. The list is divided into a working region, which holds relatively recently referenced pages, and a clean-first region, which holds pages that were referenced relatively long ago.

Here, a clean page means a page whose contents have not been changed, and a dirty page means a page whose contents have been changed.

To reduce the cost of write operations, CFLRU searches for a clean page in the clean-first region from the end of the LRU list and evicts it when a replacement is needed. If the clean-first region contains no clean page, the last dirty page in the LRU list is evicted instead. By delaying the flush of dirty pages in the page cache, CFLRU reduces the number of write and erase operations.
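The CFLRU victim selection just described can be sketched as follows. The function name and list layout are illustrative assumptions, not taken from the patent.

```python
def cflru_victim(lru_list, dirty, window):
    """lru_list: pages ordered MRU -> LRU; dirty: set of dirty pages;
    window: size of the clean-first region at the LRU end of the list."""
    clean_first = lru_list[-window:]
    for page in reversed(clean_first):      # scan from the LRU position
        if page not in dirty:
            return page                     # clean page found: cheap eviction
    return lru_list[-1]                     # none clean: evict LRU dirty page

pages = ["p1", "p2", "p3", "p4", "p5"]      # p1 = MRU, p5 = LRU
print(cflru_victim(pages, dirty={"p4", "p5"}, window=3))        # -> p3
print(cflru_victim(pages, dirty={"p3", "p4", "p5"}, window=3))  # -> p5
```

The first call skips the dirty pages p5 and p4 and evicts the clean page p3; the second finds no clean page in the clean-first region and falls back to the LRU page p5.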

Block-level techniques, on the other hand, manage the buffer in units of flash erase blocks and select a whole block in the buffer as the victim when replacing buffers.

BPLRU (Block Padding LRU) manages its LRU list in units of flash memory blocks. Every time a page in the buffer cache is referenced, all pages of the same block are moved to the MRU (Most Recently Used) position, and on replacement BPLRU evicts all pages of the victim block.

BPLRU also proposed a page padding scheme to enable switch merges, since the FTL stores outgoing pages in a log block. Page padding reads from flash any pages of the victim block that are not in the buffer and then writes all pages of the block to flash at once. Because every log block can then be switch-merged, the number of erase operations is reduced.
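The block-level behavior of BPLRU described above can be sketched as follows. The class, its interface, and the flash read callback are illustrative assumptions.

```python
from collections import OrderedDict

class BPLRU:
    """Block-level LRU sketch: OrderedDict keys are flash block numbers;
    the last key is the MRU block, the first key the LRU (victim) block."""

    def __init__(self, pages_per_block):
        self.ppb = pages_per_block
        self.blocks = OrderedDict()            # block_no -> {page_no: data}

    def reference(self, page_no, data=None):
        blk = page_no // self.ppb
        pages = self.blocks.pop(blk, {})       # re-inserting moves the whole
        if data is not None or page_no not in pages:
            pages[page_no] = data
        self.blocks[blk] = pages               # block to the MRU position

    def evict(self, read_from_flash):
        blk, pages = self.blocks.popitem(last=False)   # LRU block = victim
        # page padding: read the missing pages so the whole block can be
        # written to flash at once, allowing a cheap switch merge in the FTL
        for p in range(blk * self.ppb, (blk + 1) * self.ppb):
            if p not in pages:
                pages[p] = read_from_flash(p)
        return blk, pages
```

For example, with two pages per block, referencing pages 0, 2, and then 1 moves block 0 back to the MRU position, so an eviction selects block 1 and pads its missing page 3 from flash before writing the block out.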

The buffer cache structure based on the buffer cache method of the hybrid main memory and the flash memory storage apparatus according to the present invention will be described with reference to FIG.

As shown in FIG. 3, Hybrid Memory Aware Caching (HAC) manages an LRU (Least Recently Used) list in units of flash memory blocks; the list consists of block headers, and each header links the buffers of the pages belonging to its block.

In the following, details not essential to the invention are omitted; according to the present invention, it is assumed that the main memory is divided into the DRAM and the MRAM by memory address.

First, when page p of block B of the flash memory is referenced for the first time, the HAC allocates a new buffer and stores page p in it. The HAC maintains the memory type (DRAM or MRAM) of each block, determined by the memory type of the buffer first allocated to the block; for example, if that first buffer lies in the DRAM region of main memory, the memory type of the block becomes DRAM.

At this time, if the block header of the block B does not exist, the HAC allocates a new block header.

Then, the HAC links the buffer of page p into the header of block B, and whenever a page in the buffer cache is referenced, all pages of the same block are moved to the MRU (Most Recently Used) position.

Also, if the HAC allocates a new buffer, it tries to allocate a buffer of the same memory type as the block's memory type.
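The bookkeeping described in the preceding paragraphs (block headers, a per-block memory type inherited from the first buffer, allocation of same-type buffers, and block-wise MRU movement) can be sketched as follows. The class and its pool accounting are illustrative assumptions, not the claimed implementation.

```python
from collections import OrderedDict

class HacCache:
    def __init__(self, free_dram, free_mram):
        self.free = {"DRAM": free_dram, "MRAM": free_mram}
        self.blocks = OrderedDict()  # block_no -> {"type", "pages"}; last = MRU

    def _alloc(self, preferred):
        # try the block's own memory type first, then fall back to the other
        other = "MRAM" if preferred == "DRAM" else "DRAM"
        for mem in (preferred, other):
            if self.free[mem] > 0:
                self.free[mem] -= 1
                return mem
        raise MemoryError("no free buffer; a victim block must be selected")

    def reference(self, block_no, page_no, data):
        hdr = self.blocks.pop(block_no, None)
        if hdr is None:                       # first reference to this block:
            mem = self._alloc("DRAM")         # DRAM buffers are preferred, and
            hdr = {"type": mem, "pages": {}}  # the block inherits the type
        elif page_no not in hdr["pages"]:
            self._alloc(hdr["type"])          # same memory type as the block
        hdr["pages"][page_no] = data
        self.blocks[block_no] = hdr           # re-insertion = move block to MRU
        return hdr["type"]
```

For example, with one free DRAM buffer and two free MRAM buffers, the first block becomes a DRAM block; the second, allocated after DRAM is exhausted, becomes an MRAM block, and its later pages also draw MRAM buffers.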

If no free DRAM buffer exists when a DRAM buffer is to be allocated, the HAC searches the search region of the LRU list for a clean DRAM buffer and frees it.

In addition, HAC proposes a technique of reclaiming clean DRAM buffers that are in use as free DRAM buffers, even while free buffers still remain in the system.

To this end, when the number of free DRAM buffers falls below a threshold, the HAC periodically searches the search region of the LRU list for clean blocks that use DRAM buffers.

When such a clean block is found, the DRAM buffers of the block are returned to the free DRAM buffer pool.

In a system with a large main memory, many in-use buffers will not be accessed again soon, so buffers can be freed with little effect on cache performance.
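The threshold-triggered reclamation described above can be sketched as follows. The data model (a list of block descriptors ordered MRU to LRU) and all field names are illustrative assumptions.

```python
def reclaim_dram(blocks, free_dram, threshold, search_len):
    """blocks: list of dicts ordered MRU -> LRU with keys
    'type' ('DRAM'/'MRAM'), 'dirty' (bool), 'pages' (buffer count)."""
    if free_dram >= threshold:
        return blocks, free_dram             # enough free buffers already
    kept = list(blocks)
    for blk in blocks[-search_len:]:         # search region at the LRU end
        if blk["type"] == "DRAM" and not blk["dirty"]:
            free_dram += blk["pages"]        # return buffers to the free pool
            kept.remove(blk)
        if free_dram >= threshold:
            break
    return kept, free_dram

blocks = [
    {"type": "MRAM", "dirty": False, "pages": 2},
    {"type": "DRAM", "dirty": True,  "pages": 3},   # dirty: not reclaimable
    {"type": "DRAM", "dirty": False, "pages": 4},   # clean DRAM, in region
]
blocks, free_dram = reclaim_dram(blocks, free_dram=1, threshold=4, search_len=2)
print(free_dram)  # -> 5
```

Only the clean DRAM block in the search region is reclaimed; the dirty DRAM block and the MRAM block are left in place.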

In addition, in order to further reduce the number of write operations of the MRAM, when the clean page of the MRAM is referred to by the write operation, the HAC allocates the DRAM buffer, writes the requested data to the DRAM buffer, and returns the MRAM buffer.

At this time, if there is no free DRAM buffer, the HAC frees a clean DRAM buffer in the search region and uses it to store the requested write data.
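The write-migration rule of the two preceding paragraphs can be sketched as follows. The function and its arguments are illustrative assumptions.

```python
def on_write(page, free_dram, clean_dram_in_search):
    """page: dict with keys 'mem' ('DRAM'/'MRAM') and 'dirty'.
    Returns the updated number of free DRAM buffers."""
    if page["mem"] == "MRAM" and not page["dirty"]:
        if free_dram == 0 and clean_dram_in_search:
            free_dram += 1          # free a clean DRAM buffer of the region
        if free_dram > 0:
            free_dram -= 1          # requested data goes into a DRAM buffer
            page["mem"] = "DRAM"    # the MRAM buffer is returned to its pool
    page["dirty"] = True
    return free_dram
```

The effect is that writes to clean MRAM pages land in DRAM, so the slower, more power-hungry MRAM write path absorbs fewer write operations.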

If all buffers are in use, the HAC selects the victim block in the search area.

At this time, in order to reduce the number of erase operations of the flash memory, the HAC searches for a clean block and switches all the pages of that block to the free state.

At this time, if there is no clean block in the search region, the HAC selects the block at the LRU position of the LRU list as the victim block and flushes all pages of the victim block to the flash memory.
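The victim-selection policy described above, which prefers clean blocks so as to avoid flash writes and the erases they may trigger, can be sketched as follows. Names and the data layout are illustrative assumptions.

```python
def select_victim(blocks, search_len):
    """blocks: list ordered MRU -> LRU; each entry is (name, dirty)."""
    for name, dirty in reversed(blocks[-search_len:]):  # scan from LRU end
        if not dirty:
            return name, "freed"     # clean victim: no flash write or erase
    name, _ = blocks[-1]             # no clean block: take the LRU position
    return name, "flushed"           # dirty pages must go to flash memory

blocks = [("b1", True), ("b2", False), ("b3", True), ("b4", True)]
print(select_victim(blocks, search_len=3))              # -> ('b2', 'freed')
print(select_victim([("b1", True), ("b2", True)], 2))   # -> ('b2', 'flushed')
```

In the first call the clean block b2 is freed without touching flash; in the second, every block in the search region is dirty, so the LRU block is flushed.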

Hereinafter, the trace-based simulation results of the system based on the buffer cache method of the hybrid main memory and the flash memory storage device according to the present invention will be described with reference to FIG.

First, in order to collect a mobile-computer workload, many applications were run on a notebook PC for a week and the disk I/O was collected. The number of I/Os is 706,833 and the read/write ratio is 55:45.

We also measured the buffer cache hit ratio, the number of write operations in MRAM, and the number of erase operations in flash while changing the buffer cache size from 10,000 to 300,000.

FIG. 4(a) shows the cache hit ratio for each buffer replacement technique as the buffer cache size is varied, and FIG. 4(b) shows the MRAM write operation counts for each technique: the number of write operations is reduced by about 13% on average and by up to 26%.

Also, as shown in FIG. 4(c), HAC reduces the number of flash memory erase operations about as much as BPLRU does. Moreover, HAC outperforms Block Padding LRU because it considers clean blocks during the replacement procedure so as to avoid erase operations.

That is, the buffer cache method of the hybrid main memory and flash memory storage device according to the present invention supports a DRAM/MRAM hybrid main memory together with a flash memory storage device, and the trace-based simulation results confirm that it reduces the number of MRAM write operations and flash memory erase operations.

FIG. 5 is a flowchart illustrating the buffer cache method of a hybrid main memory and a flash memory storage device according to the present invention.

First, the HAC determines whether a page is referenced for the first time in a block of the flash memory (S10).

If it is determined in step S10 that the page is referenced for the first time in the block of the flash memory, the HAC determines whether a free DRAM buffer exists (S20).

If it is determined in step S20 that a free DRAM buffer exists, the HAC preferentially allocates a new buffer in the DRAM (S30).

Subsequently, the HAC maintains the memory type (DRAM or MRAM) of each block and allocates a buffer of the same memory type to the block (S40).

Then, the page is stored in the buffer allocated by the HAC (S50).

Subsequently, the HAC determines whether a block header exists (S60).

If it is determined in step S60 that the block header exists, the HAC links the buffer of the page into the header of the block (S70).

Then, every time a page is referenced, the HAC moves all pages of the same block to the MRU position of the LRU list (S80).

On the other hand, if it is determined in step S10 that the page is not being referenced for the first time in the block of the flash memory, the HAC determines whether a clean page of the MRAM is referenced by a write operation (S90).

If it is determined in step S90 that the clean page of the MRAM is referenced by the write operation, the HAC allocates a DRAM buffer, writes the requested data into the DRAM buffer, and returns the MRAM buffer (S100).

If it is determined in step S90 that the clean page of the MRAM is not referenced by the write operation, the procedure goes to step S80.

If it is determined in step S20 that no free DRAM buffer exists, the HAC secures a free buffer in the search region of the LRU list and then performs step S30 (S110).

If it is determined in step S60 that no block header exists, the HAC allocates a new block header and then proceeds to step S80 (S120).

Hereinafter, the additional steps of the buffer cache method of the hybrid main memory and the flash memory storage device according to the present invention will be described with reference to FIG. 6.

In step S100, if there is no free DRAM buffer but a clean DRAM buffer exists in the search region, the HAC allocates that buffer and stores the requested write data in it (S100-1).

In step S110, the HAC searches for a clean block in the search region and switches all the pages of the clean block to the free state (S110-1).

If there is no clean block in the search region in step S110-1, the HAC selects the block at the LRU position of the LRU list as the victim block and flushes all the pages of the victim block to the flash memory (S110-2).

If the number of free DRAM buffers is smaller than the threshold, the HAC searches the search region for a clean block that uses DRAM buffers and returns the DRAM buffers of the found clean block to the free DRAM buffer pool (S110-3).

While the present invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims, and all such modifications and changes are to be regarded as falling within the scope of the present invention.

S: System based on buffer cache method of hybrid main memory and flash memory storage
100: main memory 110: DRAM
120: MRAM 200: flash memory
300: Operating system 310: Buffer cache
320: FTL 330: Virtual memory and file system

Claims (6)

A buffer cache method for a hybrid main memory and a flash memory storage device, the method comprising:
(a) determining whether a HAC first references a page in a block of flash memory;
(b) if the page is first referenced to a block of the flash memory as a result of the determining in the step (a), determining whether a free DRAM buffer exists in the HAC;
(c) if it is determined in step (b) that the free DRAM buffer exists, the HAC preferentially allocates a new buffer in the DRAM;
(d) the HAC maintains a memory type (DRAM or MRAM) of each block and allocates a buffer of the same memory type to the block;
(e) storing the page in the buffer to which the HAC is allocated;
(f) determining whether the HAC has a block header;
(g) connecting the buffer of the page to the header of the block if the block header exists as a result of the determining in the step (f); And
(h) moving all the pages to the MRU location in the same block each time a page is referenced by the HAC.
The method according to claim 1,
As a result of the determination in step (a)
(i) if the page is not first referenced to a block of flash memory, determining whether the HAC is referenced by a write operation of the MRAM's clean page;
(j) returning the MRAM buffer after the HAC allocates the DRAM buffer and writes the requested data in the DRAM buffer when the clean page of the MRAM is referenced by the write operation as a result of the determination in the step (i); And
(k) if the clean page of the MRAM is not referenced by the write operation as a result of the determination in step (i), proceeding to step (h).
The method according to claim 1,
As a result of the determination in step (b)
(l) if the free DRAM buffer does not exist, securing, by the HAC, a free buffer in the search region of the LRU list, and proceeding to step (c).
The method according to claim 1,
As a result of the determination in step (f)
(m) if the HAC does not have a block header, allocating, by the HAC, a new block header, and proceeding to step (h).
3. The method of claim 2,
The step (j)
(j-1) if no free DRAM buffer exists and a clean DRAM buffer exists in the search region, allocating, by the HAC, that clean DRAM buffer and storing the requested write data therein.
The method of claim 3,
The step (l)
(l-1) searching, by the HAC, for a clean block in the search region and switching all the pages of the clean block to the free state;
(l-2) if there is no clean block in the search region in step (l-1), selecting, by the HAC, the block at the LRU position of the LRU list as the victim block and flushing all pages of the victim block to the flash memory; and
(l-3) periodically, or when the number of free DRAM buffers is smaller than the threshold, searching, by the HAC, the search region for a clean block using DRAM buffers and returning the DRAM buffers of the found clean block to the free DRAM buffer pool.
KR1020130064098A 2013-06-04 2013-06-04 A buffer cache method for considering both hybrid main memory and flash memory storages KR101443678B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020130064098A KR101443678B1 (en) 2013-06-04 2013-06-04 A buffer cache method for considering both hybrid main memory and flash memory storages

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020130064098A KR101443678B1 (en) 2013-06-04 2013-06-04 A buffer cache method for considering both hybrid main memory and flash memory storages

Publications (1)

Publication Number Publication Date
KR101443678B1 true KR101443678B1 (en) 2014-09-26

Family

ID=51760934

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130064098A KR101443678B1 (en) 2013-06-04 2013-06-04 A buffer cache method for considering both hybrid main memory and flash memory storages

Country Status (1)

Country Link
KR (1) KR101443678B1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101546707B1 (en) 2014-02-04 2015-08-24 한국과학기술원 Hybrid main memory-based memory access control method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008108026A (en) 2006-10-25 2008-05-08 Hitachi Ltd Storage system with volatile cache memory and nonvolatile memory
KR101104361B1 (en) 2009-12-29 2012-01-16 주식회사 프롬나이 Computing system and method using nvram and volatile ram to implement process persistence selectively
KR20130024212A (en) * 2011-08-31 2013-03-08 세종대학교산학협력단 Memory system and management method therof

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008108026A (en) 2006-10-25 2008-05-08 Hitachi Ltd Storage system with volatile cache memory and nonvolatile memory
KR101104361B1 (en) 2009-12-29 2012-01-16 주식회사 프롬나이 Computing system and method using nvram and volatile ram to implement process persistence selectively
KR20130024212A (en) * 2011-08-31 2013-03-08 세종대학교산학협력단 Memory system and management method therof

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101546707B1 (en) 2014-02-04 2015-08-24 한국과학기술원 Hybrid main memory-based memory access control method

Similar Documents

Publication Publication Date Title
US11893238B2 (en) Method of controlling nonvolatile semiconductor memory
US9563382B2 (en) Methods, systems, and computer readable media for providing flexible host memory buffer
US9652386B2 (en) Management of memory array with magnetic random access memory (MRAM)
US20070124531A1 (en) Storage device, computer system, and storage device access method
US10379782B2 (en) Host managed solid state drivecaching using dynamic write acceleration
US10503411B2 (en) Data storage device and method for operating non-volatile memory
KR101297563B1 (en) Storage management method and storage management system
US20150356020A1 (en) Methods, systems, and computer readable media for solid state drive caching across a host bus
US9208101B2 (en) Virtual NAND capacity extension in a hybrid drive
CN103425600A (en) Address mapping method for flash translation layer of solid state drive
US20090094391A1 (en) Storage device including write buffer and method for controlling the same
US20140331024A1 (en) Method of Dynamically Adjusting Mapping Manner in Non-Volatile Memory and Non-Volatile Storage Device Using the Same
KR20140003805A (en) Data storage device and operating method thereof
KR20170109133A (en) Hybrid memory device and operating method thereof
CN111796761A (en) Memory device, controller, and method for operating controller
US11422930B2 (en) Controller, memory system and data processing system
KR101180288B1 (en) Method for managing the read and write cache in the system comprising hybrid memory and ssd
KR20130063244A (en) Non-volatile memory system
JP2011186563A (en) Device and method for managing memory
KR101443678B1 (en) A buffer cache method for considering both hybrid main memory and flash memory storages
Chen et al. Beyond address mapping: A user-oriented multiregional space management design for 3-D NAND flash memory
US20140289486A1 (en) Information processing apparatus, information processing method, and recording medium
Kim et al. Utilizing subpage programming to prolong the lifetime of embedded NAND flash-based storage
Kwon et al. FARS: A page replacement algorithm for NAND flash memory based embedded systems
KR102014723B1 (en) Page merging for buffer efficiency in hybrid memory systems

Legal Events

Date Code Title Description
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20180905

Year of fee payment: 5

FPAY Annual fee payment

Payment date: 20190918

Year of fee payment: 6