CN111984188B - Management method and device of hybrid memory data and storage medium

Info

Publication number: CN111984188B
Application number: CN202010616882.XA (application filed by Chongqing University)
Authority: CN (China)
Inventors: 刘铎, 杨涓, 谭玉娟, 陈咸彰, 梁靓
Current and original assignee: Chongqing University
Other versions: CN111984188A (Chinese-language publication)
Legal status: Active (granted)

Classifications

    • G06F12/1009 Address translation using page tables, e.g. page table structures
    • G06F12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F3/0647 Migration mechanisms (horizontal data movement in storage systems)
    • G06F3/068 Hybrid storage device


Abstract

The invention discloses a method for managing hybrid memory data, comprising the following steps: allocating accessed data to the corresponding memory according to the access type of the data page, the memory comprising a first memory and a second memory; when a data page in the memory is selected, updating the access record information of that data page so as to maintain the access-time locality information and read-write information of all data pages; and migrating data pages between the first memory and the second memory according to the memory space of the first memory, the read-write information, and the historical access read-write information of the second memory. The invention can effectively improve the migration efficiency of hybrid-memory data pages and improve the performance of a computer system.

Description

Management method and device of hybrid memory data and storage medium
Technical Field
The present invention relates to the field of computer memory management technologies, and in particular, to a method and an apparatus for managing hybrid memory data, and a storage medium.
Background
Dynamic Random Access Memory (DRAM) has been widely used in computer systems over the past few decades owing to its good read/write performance. With the development of computer technology and the arrival of the big-data era, the low storage density and high refresh energy consumption of DRAM have given rise to the "memory wall" and "energy wall" problems, posing serious challenges to the capacity, performance, power consumption, and scalability of traditional memory systems. Non-Volatile Memory (NVM) has the advantages of non-volatility, byte addressability, zero leakage power consumption, and scalability, and is considered a potential substitute for conventional memory devices. However, non-volatile memory also suffers from high write latency, high write power consumption, and limited write endurance, and therefore cannot directly replace conventional memory devices. A hybrid memory architecture based on DRAM and NVM, by contrast, can exploit the advantages of both media.
Prior-art research on hybrid-memory data-page management, such as the CLOCK-DWF and uimirate algorithms, mostly maintains data hotness through write counting or historical heat statistics, and guides data allocation and migration according to that hotness so as to improve system performance. In addition, to minimize write operations in the NVM, the CLOCK-DWF algorithm migrates a page as soon as a write hit occurs in the NVM, while the M-CLOCK algorithm allows migration after only one write operation. Specifically, suppose a page in the NVM is hit by only an occasional write operation: the page is then migrated to the DRAM. If the DRAM is not full at that moment, the page is merely copied into the DRAM, but because it was hit only by an occasional write, it is soon evicted back to the NVM; if the DRAM has no free memory frame, a page identified as cold data in the DRAM must first be evicted to ensure that the DRAM has free space to hold the migrated page.
the inventor finds that the following technical problems exist in the prior art in the process of implementing the invention:
the existing statistical mode of the data heat is accompanied by a large amount of data calculation and storage expenses, the new statistical heat is used as a standard for the distribution and migration of data each time, and the calculation energy consumption and the access delay are greatly increased; moreover, the existing migration mechanism will generate a large amount of invalid page migrations, so that the migration operation cannot really bring performance improvement, that is, the performance benefit generated by some migrated pages of the NVM is less than the performance loss when the migrated pages are not migrated, and the system performance is reduced because the migration operation requires operations such as system interrupt copy.
Disclosure of Invention
Embodiments of the present invention provide a method for managing hybrid memory data, which can effectively improve migration efficiency of a hybrid memory data page and improve performance of a computer system.
An embodiment of the present invention provides a method for managing hybrid memory data, including:
distributing the access data to corresponding memories according to the access types of the data pages; wherein the memory comprises a first memory and a second memory; specifically, the first memory refers to a DRAM, and the second memory refers to an NVM;
when a data page in the memory is selected, updating the access record information of the data page to maintain the access time locality information and the read-write information of all the data pages;
and carrying out data page migration of the first storage and the second storage according to the memory space of the first storage, the read-write information and the historical access read-write information of the second storage.
As an improvement of the above solution, the allocating access data to the corresponding memory according to the data page access type includes:
when the data page access type is write operation, loading the accessed data page into the first memory; if the first storage is full, carrying out data page migration of the first storage and the second storage according to the memory space of the first storage and the read-write information, acquiring the free space of the first storage, and distributing the access data to the free space;
when the data page access type is a read operation and an access record is reserved in a record stack of the first memory, loading an accessed page into the first memory; if the first storage is full, carrying out data page migration of the first storage and the second storage according to the memory space of the first storage and the read-write information, acquiring the free space of the first storage, and distributing the access data to the free space;
and when the data page access type is read operation and no access record is reserved in the record stack of the first memory, writing the access data into the second memory.
As an improvement of the above scheme, the method further comprises the following steps: and after the access data are written into the first storage, determining the storage position and the state of the access data according to the memory state of the first storage.
As an improvement of the above scheme, the determining the storage location and the state of the access data according to the memory state of the first storage specifically includes:
when the memory space of the first memory is not full and the number of hot-data entries in the first memory has not reached the preset upper limit, all written data pages are judged to be hot data pages, and their states are set to the hot state; a write-data page set to the hot state is not evicted from the first memory for a first time, where the first time is a relatively short time; since a write-data page set to the hot state may be evicted from the first memory once it has turned into cold data, it should be understood that the first time is not a constant time value;
when the memory space of the first memory is full, if the accessed page has an access record in the stack, i.e., its record is in the NR state, migrating the record at the bottom of the stack to the tail of the queue to obtain free stack space, marking the record state of that stack-bottom data page as CR, and writing the new access data page to the top of the stack; and if the data page has no access record in the first memory, writing the data page directly to the tail of the queue and setting its state according to the access type of the data page.
As an improvement of the above solution, when a data page in the memory is selected, the updating the access log information of the data page to maintain the access time locality information and the read-write information of all data pages includes:
when the data page in the first memory is selected, judging the state of the selected data page; if the state of the selected data page is a cold state; when the selected data page in the first memory only exists in a queue, the state of the selected data page is not converted, the state of the selected data page is updated to CR or CW according to the access type, and the record of the selected data page is added to the top of the stack; when the access record of the selected data page in the first memory exists in a stack, changing the state of the selected data page into a hot state and placing the data page at the top of the stack, migrating the data page at the bottom of the stack and in the hot state to the tail of a queue, recording the state of the data page as CR, and performing stack pruning operation;
when the data page in the first memory is selected, judging the state of the selected data page; if the state of the selected data page is a hot state; migrating the selected data page to the top of the stack;
wherein the stack pruning operation comprises: and deleting the record with the state of cold in all the data pages at the bottom of the stack until the state of the data pages at the bottom of the stack is the hot state.
As an improvement of the above scheme, when a data page in the memory is selected, the updating of the access log information of the data page is performed to maintain access time locality information and read-write information of all data pages, specifically including:
when the data pages in the second memory are selected, the access history information of all the data pages in the second memory is maintained through an LRU chain, and the access history information of the data pages accessed by the write operation in the second memory is maintained through a W-LRU.
As an improvement of the above scheme, the performing data page migration of the first storage and the second storage according to the memory space of the first storage and the read-write information specifically includes:
when the memory space of the first memory is smaller than the new data to be accessed, selecting a candidate data page from the first memory, and migrating the candidate data page from the first memory to the second memory;
when the memory space of the first memory is not smaller than the new data to be accessed, judging the current state of the data page in the second memory, and migrating the data meeting the migration condition to the first memory;
wherein the migration condition includes: the number of write operations on the data page in the second memory being greater than a preset threshold.
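The migration decision described in the three clauses above can be sketched in Python. This is a minimal illustration, not the patent's implementation: the function name, the write-count map, and the return convention are assumptions introduced here.

```python
def plan_migration(dram_free_pages, pages_needed, nvm_write_counts, write_threshold):
    """Decide the direction of one migration round (illustrative sketch).

    dram_free_pages:  free page frames in the first memory (DRAM)
    pages_needed:     page frames required by the newly accessed data
    nvm_write_counts: {page_id: write-hit count} for pages in the second memory (NVM)
    write_threshold:  preset threshold on NVM write hits
    """
    if dram_free_pages < pages_needed:
        # First memory too small for the new data: evict candidate pages
        # from DRAM to NVM (victim selection itself is described elsewhere).
        return ("dram_to_nvm", pages_needed - dram_free_pages)
    # Otherwise promote NVM pages whose write-hit count exceeds the threshold.
    hot_writes = [p for p, n in nvm_write_counts.items() if n > write_threshold]
    return ("nvm_to_dram", hot_writes)
```

For example, with one free DRAM frame and three frames needed, two pages must be evicted to the NVM; with enough free frames, only NVM pages written more than the threshold are promoted.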
As an improvement of the above scheme, the first memory is a dynamic random access memory; the second memory is a nonvolatile memory; wherein the first memory and the second memory are in a parallel structure.
Correspondingly, an embodiment of the present invention provides a management apparatus for hybrid memory data, including a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, where the processor executes the computer program to implement the management method for hybrid memory data according to the first embodiment of the present invention.
Correspondingly, the third embodiment of the present invention provides a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program, and when the computer program runs, the apparatus where the computer-readable storage medium is located is controlled to execute the method for managing hybrid memory data according to the first embodiment of the present invention.
Compared with the prior art, the management method, the management device and the storage medium for the hybrid memory data provided by the embodiment of the invention have the following beneficial effects:
based on a memory data-management algorithm built on temporal locality, access interval, and state transition, the method uses temporal locality, the access interval, and the read/write tendency to accurately identify the hotness of a data page without write counting, keeping hot data in the first memory as far as possible and cold data in the second memory; state transitions allow a moderate number of write operations in the NVM, reducing migration operations in the hybrid memory and improving system performance; by migrating data pages between the first memory and the second memory according to the first memory's space and the read-write information, the migration efficiency of hybrid-memory data pages is effectively improved and computer-system performance is improved; using a dynamic random access memory as the first memory of the hybrid memory and a non-volatile memory as the second memory simultaneously obtains the strong read/write performance of the dynamic random access memory and the non-volatility, byte addressability, zero leakage power consumption, and high scalability of the non-volatile memory; and because the first memory and the second memory form a parallel structure, the address space can be addressed uniformly, making data management more flexible and simpler, and the memory capacity is the sum of the two memories, yielding a larger memory space.
Drawings
Fig. 1 is a flowchart illustrating a method for managing hybrid memory data according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a calculation method of IRR and R values according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a data distribution process according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a data writing process of the first memory according to an embodiment of the invention.
Fig. 5 is a schematic diagram of a state transition according to an embodiment of the present invention.
Fig. 6 is a flowchart illustrating a method for managing hybrid memory data according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a flow chart of a method for managing hybrid memory data according to an embodiment of the present invention is shown, where the method includes:
s101, distributing access data to corresponding memories according to the access types of the data pages; wherein the memory comprises a first memory and a second memory;
s102, when a data page in the memory is selected, updating the access record information of the data page to maintain the access time locality information and the read-write information of all the data pages;
s103, data page migration of the first storage and the second storage is carried out according to the memory space of the first storage, the read-write information and historical access read-write information of the second storage.
In particular, data management in the first memory is divided into two parts: a stack and a queue. The stack in the first memory maintains the hotter data, with hotness decreasing in order from the top of the stack to the bottom. The queue in the first memory maintains the colder data in the first memory, i.e., the candidate pages; data in the queue is identified by its latest read or write operation, and the queue adopts a first-in-first-out policy: data evicted from the bottom of the stack is stored at the tail of the queue, and when a victim page is selected, candidate pages are taken from the head of the queue. The state of hot data (i.e., data with smaller IRR) is denoted H (Hot); the state of cold data (i.e., data with larger IRR) is denoted C (Cold), and cold data in the queue is further divided into CR (Cold Read) and CW (Cold Write) according to its read/write state. When only an access record remains in the stack but the actual data is not in the first memory, the data was once hot; this state is also regarded as cold, but owing to its particularity it is denoted NR (Non-Resident). Here, IRR (Inter-Reference Recency) denotes the number of distinct other data pages accessed between two consecutive accesses to the same data page, and R (Recency) denotes the number of distinct data pages accessed between a data page's last access and the current access. The calculation of IRR and R is illustrated in fig. 2: there are 4 data-page accesses within the access interval of data page A, of which 3 are distinct pages, so IRR is 3; there are 3 accesses between data page A and the currently accessed data page, of which 2 are distinct pages, so R is 2.
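The IRR and R computation illustrated by the fig. 2 example can be sketched from an access trace as follows. This is a minimal sketch under the definitions above; the function name and trace representation are illustrative, not from the patent.

```python
def irr_and_recency(trace, page):
    """Compute (IRR, R) for `page` over an access trace (list of page ids).

    IRR: number of distinct pages accessed strictly between the two most
         recent accesses to `page` (None if accessed fewer than twice).
    R:   number of distinct pages accessed strictly after the most recent
         access to `page`.
    """
    positions = [i for i, p in enumerate(trace) if p == page]
    if not positions:
        return None, None
    last = positions[-1]
    # Distinct pages seen since the last access to `page`.
    r = len(set(trace[last + 1:]) - {page})
    if len(positions) < 2:
        return None, r
    prev = positions[-2]
    # Distinct pages seen between the two most recent accesses to `page`.
    irr = len(set(trace[prev + 1:last]) - {page})
    return irr, r
```

On a trace matching the fig. 2 description, 4 accesses (3 distinct) between the two accesses to A give IRR = 3, and 3 accesses (2 distinct) after the last access to A give R = 2.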
The upper limit on the total number of hot-data and cold-data entries in the first memory equals the capacity of the first memory's space:

M_hot + M_cold = V_DRAM

where M_hot denotes the upper limit on the number of hot-data entries in the first memory, M_cold the upper limit on the number of cold-data entries, and V_DRAM the total capacity of the first memory's space. Experiments show that when M_hot = 99% × V_DRAM, the data migration rate and the number of writes to the second memory are lowest and the cache miss rate is lowest, consistent with setting the hot-data proportion in LIRS to 99%. Data in the first memory stack may be in three states, H, C, and NR, while the queue holds only C-state data; the number of entries in the first memory stack is slightly larger than V_DRAM, the number of entries in the first memory queue is 0.01 × V_DRAM, and the total record overhead is approximately equal to V_DRAM.
Further, for step S101, the allocating access data to corresponding memories according to the data page access type includes:
when the data page access type is write operation, loading the accessed data page into the first memory; if the first storage is full, carrying out data page migration of the first storage and the second storage according to the memory space of the first storage and the read-write information, acquiring the free space of the first storage, and distributing the access data to the free space;
when the data page access type is a read operation and an access record is reserved in a record stack of the first memory, loading an accessed page into the first memory; if the first storage is full, carrying out data page migration of the first storage and the second storage according to the memory space of the first storage and the read-write information, acquiring the free space of the first storage, and distributing the access data to the free space;
and when the data page access type is read operation and no access record is reserved in the record stack of the first memory, writing the access data into the second memory.
In a specific embodiment, the first memory is a dynamic random access memory (DRAM) and the second memory is a non-volatile memory (NVM). Fig. 3 is a schematic diagram of a data-allocation process according to an embodiment of the present invention. On a data miss, the accessed data is allocated to the corresponding memory according to the access type of the data page, as follows. First, when the access type is a write operation, the accessed data page is loaded into the DRAM; when the DRAM is full, candidate data pages are migrated according to the method of step S103 to obtain free space, and the data is written once free space is obtained. Second, when the access type is a read operation and an access record of the data page is retained in the DRAM record stack (i.e., the actual data resides in external storage, not in memory, and the data state is NR), the accessed data page is loaded into the DRAM; when the DRAM is full, candidate pages are migrated according to the method of step S103 to obtain free space, and the data is written once free space is obtained. Third, when the access type is a read operation and no access record is retained in the DRAM record stack, the data page is written into the NVM.
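The three-way allocation decision above reduces to a small dispatch function. A minimal sketch, with illustrative names (the patent does not define this interface):

```python
def allocate_on_miss(is_write, has_nr_record_in_dram_stack):
    """Choose the target memory for a missed data page (illustrative sketch).

    Write misses, and read misses whose page still has an NR record in the
    DRAM record stack, go to the first memory (DRAM); other read misses go
    to the second memory (NVM).
    """
    if is_write:
        return "DRAM"          # case 1: write miss
    if has_nr_record_in_dram_stack:
        return "DRAM"          # case 2: read miss, NR record retained
    return "NVM"               # case 3: read miss, no record retained
```

Obtaining free space when the DRAM is full (step S103) happens before the actual write and is omitted here.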
Further, still include: and after the access data are written into the first storage, determining the storage position and the state of the access data according to the memory state of the first storage.
Further, the determining the storage location and the state of the access data according to the memory state of the first storage specifically includes:
when the memory space of the first memory is not full and the number of the hot data in the first memory does not reach the preset upper limit, all the written data pages are judged as hot data pages, and the states of all the written data pages are set to be hot states; wherein a page of write data set to a hot state is not evicted from the first memory for a first time; wherein the first time is a shorter time; since the page of write data that is set to the hot state is likely to be evicted from the first memory after it has been changed to cold data, it is understood that the first time is not a constant time value;
when the memory space of the first memory is full, if the selected data has an access record in the stack, namely the state of the selected data is NR, migrating the record at the bottom of the stack to the tail of the queue, acquiring the free space of the stack, marking the record state of the data page at the bottom of the stack as CR, and writing a new access data page into the top of the stack; and if the data page does not have an access record in the first memory, directly writing the data page into the queue tail, and setting the state of the data page according to the data page access type of the data page.
In a specific embodiment, the first memory is a dynamic random access memory (DRAM) and the second memory is a non-volatile memory (NVM). Fig. 4 is a schematic diagram of a data-writing process of the first memory according to an embodiment of the present invention. After it is determined that data is to be written into the DRAM, the storage location and state of the record must be determined according to the condition of the DRAM. When the DRAM is not full and M_hot has not reached its upper limit, a large amount of free space exists in the DRAM; all written data pages are then regarded as hot pages, i.e., their states are all set to H. Such pages will not be evicted from the DRAM within a short time and are kept in the DRAM as long as possible, relying on the good read/write performance of the DRAM to improve system performance.
When the DRAM is full, a suitable data page, i.e., a "victim page", is selected from the DRAM and migrated to the NVM to obtain free space; the specific selection process is as in step S103. When the DRAM is full and M_hot has reached its upper limit, if the data retains an access record in the DRAM, i.e., the page state is NR, the record at the bottom of the stack is moved to the tail of the queue to obtain free stack space, the stack-bottom record's state is marked CR, and the new data page is then written to the top of the stack. Since an H record at the bottom of the stack may be migrated to the queue in this way, the data at the bottom of the stack may no longer be in the H state, and the stack then needs to be pruned. If the data has no access record in the DRAM, its hotness is not higher than that of the data in the stack; it is written directly to the tail of the queue, and its state is set to CR or CW according to the access type.
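The DRAM placement rules just described can be sketched with a list standing in for the stack (index -1 is the top) and a deque for the FIFO queue. This is an illustrative sketch only: the data structures, parameter names, and the choice to give an NR page the H state on re-entry are assumptions, and stack pruning after the demotion is omitted.

```python
from collections import deque

def place_in_dram(stack, queue, page, state_of, access_is_write,
                  dram_full, hot_count, hot_limit):
    """Place `page` in the DRAM bookkeeping structures (illustrative sketch).

    stack:    list, index 0 = stack bottom, index -1 = stack top
    queue:    deque used first-in-first-out
    state_of: dict mapping page id -> 'H', 'CR', 'CW' or 'NR'
    """
    if not dram_full and hot_count < hot_limit:
        state_of[page] = "H"          # plenty of room: treat as a hot page
        stack.append(page)
    elif state_of.get(page) == "NR":  # an access record is retained in the stack
        bottom = stack.pop(0)         # demote the stack-bottom record
        state_of[bottom] = "CR"
        queue.append(bottom)
        state_of[page] = "H"          # assumption: re-entering NR page becomes hot
        stack.append(page)
        # (stack pruning would follow here if the new bottom is not H)
    else:                             # no record retained: join the cold queue
        state_of[page] = "CW" if access_is_write else "CR"
        queue.append(page)
```

A page with a retained NR record thus displaces the stack-bottom record to the queue tail, while an unknown page goes straight to the queue tail as CR or CW.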
In another embodiment, when data is determined to be written to the second memory, its access type must be a read operation, so the data is written only into the LRU queue of the second memory, at the tail of the queue according to access-time history. For data allocation in the second memory, nothing else needs to be considered, since the access type is fixed at allocation time.
Further, for step S102, when a data page in the memory is selected, updating the access log information of the data page to maintain the access time locality information and the read-write information of all data pages includes:
when the data page in the first memory is selected, judging the state of the selected data page; if the state of the selected data page is a cold state;
when the selected data page in the first memory only exists in a queue, the state of the selected data page is not converted, the state of the selected data page is updated to CR or CW according to the access type, and the record of the selected data page is added to the top of the stack; when the access record of the selected data page in the first memory exists in a stack, changing the state of the selected data page into a hot state and placing the data page at the top of the stack, migrating the data page at the bottom of the stack and in the hot state to the tail of a queue, recording the state of the data page as CR, and performing stack pruning operation;
when the data page in the first memory is selected, judging the state of the selected data page; if the state of the selected data page is a hot state; migrating the selected data page to the top of the stack;
wherein the stack pruning operation comprises: and deleting the record with the state of cold in all the data pages at the bottom of the stack until the state of the data pages at the bottom of the stack is the hot state.
In one embodiment, a data page in the first memory is in either the C state or the H state when it is selected. If a C-state data page is selected, its new IRR equals its original R value, i.e., its distance from the top of the stack.
When the selected data page exists only in the queue, the page has not been accessed for a long time and has been removed from the stack; its new IRR is clearly larger than that of any page in the stack, so no state transition is needed. The page's state is updated to CR or CW according to the access type, and its record is added to the top of the stack. When a state transition is needed, i.e., the access record of the data page exists in the stack, the page's state is changed to H and it is placed at the top of the stack, the H page at the bottom of the stack is migrated to the tail of the queue with its record state set to CR, and stack pruning is then performed.
To maintain correct transitions of data page states, the data page at the bottom of the stack must be in the H state. A state transition requires at least one H page closer to the stack bottom than the C-state page. If the page at the stack bottom is not in the H state, a selected C-state page may lie closer to the stack bottom than any H page in the stack; its IRR is then greater than the IRR of every H-state page, so even when it is hit it cannot be converted into an H page, and page state transitions cannot proceed normally. Therefore, whenever the H-state page at the bottom of the stack changes, because it is hit and migrated to the top of the stack or is migrated out of the stack, the stack must be pruned to ensure that the page at the stack bottom is again in the H state. Stack pruning deletes all C-state records at the bottom of the stack until the data page at the stack bottom is in the H state.
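The stack pruning and hot/cold promotion described above can be sketched as follows. This is a minimal illustrative Python sketch, not the patent's implementation; the class and method names are assumptions, and the CR/CW distinction by access type is noted only in comments.

```python
from collections import OrderedDict, deque

HOT, COLD = "H", "C"

class FirstMemoryDirectory:
    """Minimal sketch of the stack/queue bookkeeping for the first memory.

    The stack (an OrderedDict, top of stack = last item) holds recency
    records with state "H" (hot) or "C" (cold); the queue holds resident
    cold pages evicted from the stack.
    """
    def __init__(self):
        self.stack = OrderedDict()   # page -> state; first item = stack bottom
        self.queue = deque()         # resident cold pages

    def prune(self):
        # Delete cold records at the stack bottom until the bottom page is hot.
        while self.stack and next(iter(self.stack.values())) == COLD:
            self.stack.popitem(last=False)

    def on_cold_hit(self, page):
        if page not in self.stack:
            # Page exists only in the queue: no state transition; its record
            # (CR or CW according to the access type) goes to the stack top.
            self.stack[page] = COLD
        else:
            # Record exists in the stack: promote to hot and move to the top,
            # demote the hot page at the stack bottom to the queue tail
            # (its record state becomes CR), then prune the stack.
            del self.stack[page]
            self.stack[page] = HOT
            bottom, _ = self.stack.popitem(last=False)
            self.queue.append(bottom)
            self.prune()
```

For example, with a stack holding A (hot, bottom), B (cold), P (cold), a hit on B promotes B to a hot page at the top, demotes A to the queue tail, and pruning removes the now-bottom cold record P, restoring the invariant that the bottom page is hot.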
Further, when a data page in the memory is selected, updating the access record information of the data page to maintain access time locality information and read-write information of all data pages specifically includes:
when a data page in the second memory is selected, access history information of all data pages in the second memory is maintained through an LRU chain, and access history information of the data pages in the second memory that have been accessed by a write operation is maintained through a W-LRU chain.
In a specific embodiment, when a data page in the second memory is selected, its access information must be maintained. The LRU chain maintains access history information for all data pages in the second memory, and the W-LRU chain maintains access history information for the data pages in the second memory that have been accessed by a write operation. The state transition diagram is shown in FIG. 5; the three states N, 0 and 1 exist only in the W-LRU chain. Except for the case where a data page in state 1 is selected by a write operation, which triggers a migration and deletes its records from both the LRU and the W-LRU, any other selection updates the LRU according to the traditional LRU algorithm, moving the record of the selected data page to the tail of the queue. When the access type of the selected data page is a read operation and the page exists only in the LRU, the page has long been accessed only by read operations, and no state transition occurs. When the access type of the selected data page is a write operation and no write tendency is maintained for it in the W-LRU, i.e., all previous operations on the page were reads, its LRU record is copied into the W-LRU and written to the tail of the W-LRU queue with initial state N. A page in state N transitions to state 1 when it is selected again by a write operation, strengthening its write tendency; if instead it is selected by a read operation, it transitions only to state 0, so that a single read operation cannot reduce its write likelihood to the minimum, which protects its write tendency. A page in state 0 transitions to state 1 when selected by a write operation: the page was recently selected by alternating operations such as a write followed by a read and is now selected by a write again, i.e., it has been selected more than twice, and moving it to state 1 strengthens its write tendency; unless it is subsequently selected by consecutive read operations, the page will be migrated quickly. Conversely, a page in state 0 that is selected by a read operation is deleted from the W-LRU: the page has been selected by consecutive read operations, its future read likelihood is high and its write likelihood is low, and its write tendency no longer needs to be maintained.
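The W-LRU write-tendency transitions above can be summarized in a small table. The following Python sketch is illustrative only; the function name and the action labels are assumptions, and the patent does not specify behavior for a read hit on a state-1 page, which is assumed here to leave the state unchanged.

```python
MIGRATE = "migrate"   # page should be migrated to the first memory (DRAM)
DROP = "drop"         # page record is removed from the W-LRU

def wlru_transition(state, access):
    """Return (new_state, action) for a page already recorded in the W-LRU.

    States: "N" (newly recorded), "0" (weakened tendency),
            "1" (strong write tendency).
    """
    if state == "N":
        # A second write strengthens the tendency; a read only weakens it
        # to 0, so one read cannot reduce the write likelihood to the minimum.
        return ("1", None) if access == "write" else ("0", None)
    if state == "0":
        # A write (the page has been selected more than twice) restores a
        # strong tendency; a read means consecutive reads, so drop the record.
        return ("1", None) if access == "write" else (None, DROP)
    if state == "1":
        # A write hit on a strong-tendency page triggers migration;
        # behavior on a read hit is assumed to keep the state.
        return (None, MIGRATE) if access == "write" else ("1", None)
    raise ValueError(f"unknown W-LRU state: {state}")
```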
Further, performing data page migration between the first storage and the second storage according to the memory space of the first storage and the read-write information specifically includes:
when the memory space of the first memory is insufficient for the new data to be accessed, selecting a candidate data page from the first memory and migrating the candidate data page from the first memory to the second memory;
when the memory space of the first memory is sufficient for the new data to be accessed, judging the current state of the data pages in the second memory and migrating the data pages that meet the migration condition to the first memory;
wherein the migration condition includes: the number of write operations on the data page in the second memory is greater than a preset threshold.
In a specific embodiment, the first memory is a dynamic random access memory (DRAM) and the second memory is a non-volatile memory (NVM). Because the write performance of the NVM is poor, frequently written data is migrated to the DRAM, which improves system performance.
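The migration decision above can be sketched as a single step. This is an illustrative assumption-laden sketch: the function name, the per-page write-count bookkeeping, and the action labels are not from the patent.

```python
def migrate_step(dram_free_pages, needed_pages, nvm_write_counts, threshold):
    """Sketch of the migration policy.

    If the first memory (DRAM) lacks room for the new data, a victim must be
    evicted to the second memory (NVM); otherwise, NVM pages whose write
    count exceeds the preset threshold are promoted to DRAM.
    """
    if dram_free_pages < needed_pages:
        # DRAM is too full: pick a candidate page and demote it to NVM.
        return ("evict_to_nvm", None)
    # DRAM has room: promote frequently written NVM pages.
    hot_writes = [p for p, c in nvm_write_counts.items() if c > threshold]
    return ("promote_to_dram", hot_writes)
```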
Further, the first memory is a Dynamic Random Access Memory (DRAM); the second memory is a non-volatile memory (NVM); wherein the first memory and the second memory are in a parallel structure.
Specifically, in the parallel structure the two storage media are placed in the same storage layer and their address space is uniformly addressed, which makes data management more flexible and simpler; the memory capacity is the sum of the first storage and the second storage, so a larger memory space is obtained.
Referring to FIG. 6, a flowchart of a method for managing hybrid memory data according to an embodiment of the present invention is shown. The method includes: first, data allocation, i.e., allocating access data to the corresponding memory according to the data page access type, where the memory comprises a first memory and a second memory; second, data update and maintenance, i.e., when a data page in the memory is selected, updating the access record information of the data page to maintain the access time locality information and read-write information of all data pages; and third, data migration, i.e., performing data page migration between the first storage and the second storage according to the memory space of the first storage and the read-write information. Data management in the DRAM is divided into two parts, a stack and a queue. The stack in the DRAM maintains the hotter data: the page at the top of the stack is the hottest, and hotness decreases from the top of the stack to the bottom. In FIG. 6, the left side of the stack in the DRAM is the top and the right side is the bottom; where marking is needed, hotness is indicated by shading, with hotter pages marked in darker colors.
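The allocation step in the flow above (writes and re-read pages go to the first memory, first-touch reads go to the second memory) can be sketched as follows. The function name and argument shapes are illustrative assumptions, not the patent's interface.

```python
def allocate(access_type, page, first_mem_stack_records):
    """Sketch of the data-allocation rule of the flowchart.

    Writes go to the first memory (DRAM); reads go to the first memory only
    if the page still has an access record in its record stack, otherwise
    the data is written to the second memory (NVM).
    """
    if access_type == "write":
        return "first_memory"
    if access_type == "read" and page in first_mem_stack_records:
        return "first_memory"
    return "second_memory"
```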
Compared with the prior art, the management method of the hybrid memory data provided by the embodiment of the invention has the following beneficial effects:
The memory data management algorithm is based on time locality, access interval and state transition. Using time locality, access intervals and read-write tendency, the hotness of a data page is identified accurately without write counting, so hot data is kept in the first memory as much as possible and cold data in the second memory. State transitions allow a suitable amount of write operations in the NVM, reducing migration operations in the hybrid memory and improving system performance. Performing data page migration between the first storage and the second storage according to the memory space of the first storage and the read-write information effectively improves the migration efficiency of hybrid memory data pages and the performance of the computer system. Using a dynamic random access memory as the first memory of the hybrid memory and a non-volatile memory as the second memory combines the strong read-write performance of DRAM with the non-volatility, byte addressability, zero leakage power consumption and high scalability of NVM. Because the first storage and the second storage form a parallel structure, the address space can be uniformly addressed, data management is more flexible and simpler, and the memory capacity is the sum of the first storage and the second storage, providing a larger memory space.
Correspondingly, an embodiment of the present invention provides a management apparatus for hybrid memory data, including a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, where the processor implements the management method for hybrid memory data according to the embodiment of the present invention when executing the computer program. The management apparatus for hybrid memory data may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server, and may include, but is not limited to, a processor and a memory.
Correspondingly, the third embodiment of the present invention provides a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program, and when the computer program runs, the apparatus where the computer-readable storage medium is located is controlled to execute the method for managing hybrid memory data according to the first embodiment of the present invention.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the management device for hybrid memory data and uses various interfaces and lines to connect the parts of the entire device.
The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the management device for hybrid memory data by running or executing the computer programs and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to use of the device (such as audio data or a phonebook). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
If the modules/units integrated in the management device for hybrid memory data are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method in the embodiments of the present invention may also be implemented by a computer program instructing related hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that the above-described device embodiments are merely illustrative, and units illustrated as separate components may or may not be physically separate, and components illustrated as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (9)

1. A method for managing hybrid memory data is characterized by comprising the following steps:
distributing the access data to corresponding memories according to the access types of the data pages; wherein the memory comprises a first memory and a second memory;
when a data page in the memory is selected, updating the access record information of the data page to maintain the access time locality information and the read-write information of all the data pages;
performing data page migration of the first storage and the second storage according to the memory space of the first storage, the read-write information and historical access read-write information of the second storage;
the allocating the access data to the corresponding memory according to the data page access type includes:
when the data page access type is write operation, loading the accessed data page into the first memory; if the first storage is full, carrying out data page migration of the first storage and the second storage according to the memory space of the first storage and the read-write information, acquiring the free space of the first storage, and distributing the access data to the free space;
when the data page access type is a read operation and an access record is reserved in a record stack of the first memory, loading an accessed page into the first memory; if the first storage is full, carrying out data page migration of the first storage and the second storage according to the memory space of the first storage and the read-write information, acquiring the free space of the first storage, and distributing the access data to the free space;
and when the data page access type is read operation and no access record is reserved in the record stack of the first memory, writing the access data into the second memory.
2. The method for managing hybrid memory data according to claim 1, further comprising: and after the access data are written into the first storage, determining the storage position and the state of the access data according to the memory state of the first storage.
3. The method according to claim 2, wherein the determining the storage location and the state of the access data according to the memory state of the first storage specifically comprises:
when the memory space of the first memory is not full and the number of hot data pages in the first memory has not reached the preset upper limit, all written data pages are judged to be hot data pages and their states are set to the hot state; wherein a written data page set to the hot state is not immediately evicted from the first memory;
when the memory space of the first memory is full, if the selected data has an access record in the stack, namely the state is NR, migrating the record at the bottom of the stack to the tail of the queue, acquiring the free space of the stack, marking the record state of the data page at the bottom of the stack as CR, and writing a new access data page into the top of the stack; and if the data page does not have an access record in the first memory, directly writing the data page into the queue tail, and setting the state of the data page according to the data page access type of the data page.
4. The method according to claim 1, wherein when a data page in the memory is selected, the updating the access log information of the data page to maintain the access time locality information and the read-write information of all data pages comprises:
when a data page in the first memory is selected, judging the state of the selected data page; if the state of the selected data page is the cold state:
when the selected data page in the first memory only exists in a queue, the state of the selected data page is not converted, the state of the selected data page is updated to CR or CW according to the access type, and the record of the selected data page is added to the top of the stack; when the access record of the selected data page in the first memory exists in a stack, changing the state of the selected data page into a hot state and placing the data page at the top of the stack, migrating the data page at the bottom of the stack and in the hot state to the tail of a queue, recording the state of the data page as CR, and performing stack pruning operation;
when the data page in the first memory is selected, judging the state of the selected data page; if the state of the selected data page is a hot state, migrating the selected data page to the top of the stack;
wherein the stack pruning operation comprises: and deleting the record with the state of cold in all the data pages at the bottom of the stack until the state of the data pages at the bottom of the stack is the hot state.
5. The method according to claim 1, wherein when a data page in the memory is selected, the method updates the access log information of the data page to maintain access time locality information and read-write information of all data pages, and specifically comprises:
when the data pages in the second memory are selected, the access history information of all the data pages in the second memory is maintained through an LRU chain, and the access history information of the data pages accessed by the write operation in the second memory is maintained through a W-LRU.
6. The method according to claim 1, wherein the performing the data page migration of the first storage and the second storage according to the memory space of the first storage and the read-write information specifically includes:
when the memory space of the first memory is smaller than the new data to be accessed, selecting a candidate data page from the first memory, and migrating the candidate data page from the first memory to the second memory;
when the memory space of the first memory is not smaller than the new data to be accessed, reading the current state of the data page in the second memory, and migrating the data meeting the migration condition to the first memory;
wherein the migration conditions include: and the number of times of writing operation on the second memory is greater than a preset threshold value.
7. The method for managing hybrid memory data according to claim 1, wherein the first memory is a dynamic random access memory; the second memory is a nonvolatile memory; wherein the first memory and the second memory are in a parallel structure.
8. A management apparatus of hybrid memory data, comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the management method of hybrid memory data according to any one of claims 1 to 7 when executing the computer program.
9. A computer-readable storage medium, comprising a stored computer program, wherein when the computer program runs, the computer-readable storage medium controls a device to execute the method for managing hybrid memory data according to any one of claims 1 to 7.
CN202010616882.XA 2020-06-30 2020-06-30 Management method and device of hybrid memory data and storage medium Active CN111984188B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010616882.XA CN111984188B (en) 2020-06-30 2020-06-30 Management method and device of hybrid memory data and storage medium

Publications (2)

Publication Number Publication Date
CN111984188A CN111984188A (en) 2020-11-24
CN111984188B true CN111984188B (en) 2021-09-17

Family

ID=73437596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010616882.XA Active CN111984188B (en) 2020-06-30 2020-06-30 Management method and device of hybrid memory data and storage medium

Country Status (1)

Country Link
CN (1) CN111984188B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113524178A (en) * 2021-06-28 2021-10-22 南京大学 Data communication method and device for man-machine fusion system
CN113835624A (en) * 2021-08-30 2021-12-24 阿里巴巴(中国)有限公司 Data migration method and device based on heterogeneous memory
CN114816749B (en) * 2022-04-22 2023-02-10 江苏华存电子科技有限公司 Intelligent management method and system for memory
CN115344505B (en) * 2022-08-01 2023-05-09 江苏华存电子科技有限公司 Memory access method based on perception classification
CN117234432B (en) * 2023-11-14 2024-02-23 苏州元脑智能科技有限公司 Management method, management device, equipment and medium of hybrid memory system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104360963A (en) * 2014-11-26 2015-02-18 浪潮(北京)电子信息产业有限公司 Heterogeneous hybrid memory method and device oriented to memory computing
CN105786717A (en) * 2016-03-22 2016-07-20 华中科技大学 DRAM (dynamic random access memory)-NVM (non-volatile memory) hierarchical heterogeneous memory access method and system adopting software and hardware collaborative management
CN107193646A (en) * 2017-05-24 2017-09-22 中国人民解放军理工大学 A kind of high-efficiency dynamic paging method that framework is hosted based on mixing
CN107818052A (en) * 2016-09-13 2018-03-20 华为技术有限公司 Memory pool access method and device
CN109901800A (en) * 2019-03-14 2019-06-18 重庆大学 A kind of mixing memory system and its operating method
CN110347338A (en) * 2019-06-18 2019-10-18 重庆大学 Mix internal storage data exchange and processing method, system and readable storage medium storing program for executing

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130032155A (en) * 2011-09-22 2013-04-01 삼성전자주식회사 Data storage device and data management method thereof
US9535831B2 (en) * 2014-01-10 2017-01-03 Advanced Micro Devices, Inc. Page migration in a 3D stacked hybrid memory

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Performance and Energy Consumption of Hybrid Memory Page Management Strategies; Chen Junxi et al.; Modern Computer; 2017-04-15 (No. 11); pp. 10-17 *

Similar Documents

Publication Publication Date Title
CN111984188B (en) Management method and device of hybrid memory data and storage medium
US11893238B2 (en) Method of controlling nonvolatile semiconductor memory
US20180121351A1 (en) Storage system, storage management apparatus, storage device, hybrid storage apparatus, and storage management method
US7475185B2 (en) Nonvolatile memory system, nonvolatile memory device, memory controller, access device, and method for controlling nonvolatile memory device
US20140129758A1 (en) Wear leveling in flash memory devices with trim commands
US8386713B2 (en) Memory apparatus, memory control method, and program
CN111033477A (en) Logical to physical mapping
US20100011154A1 (en) Data accessing method for flash memory and storage system and controller using the same
CN111638852A (en) Method for writing data into solid state disk and solid state disk
CN105095116A (en) Cache replacing method, cache controller and processor
CN106294197B (en) Page replacement method for NAND flash memory
CN111512290B (en) File page table management technique
CN113094003B (en) Data processing method, data storage device and electronic equipment
CN110888600B (en) Buffer area management method for NAND flash memory
CN108845957B (en) Replacement and write-back self-adaptive buffer area management method
US11138104B2 (en) Selection of mass storage device streams for garbage collection based on logical saturation
US20230359380A1 (en) Memory system and method for controlling nonvolatile memory
KR20130022604A (en) Apparatus and method for data storing according to an access degree
US20190042415A1 (en) Storage model for a computer system having persistent system memory
CN109690465B (en) Storage device management method and user terminal
CN116364148A (en) Wear balancing method and system for distributed full flash memory system
CN114327270A (en) Request processing method, device, equipment and readable storage medium
US20240020014A1 (en) Method for Writing Data to Solid-State Drive
CN112148639A (en) High-efficiency small-capacity cache memory replacement method and system
Kwon et al. Fast responsive flash translation layer for smart devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant