CN110674051A - Data storage method and device - Google Patents

Data storage method and device

Info

Publication number
CN110674051A
CN110674051A (application CN201910905962.4A)
Authority
CN
China
Prior art keywords
fine-grained
page
memory
data
Prior art date
Legal status
Pending
Application number
CN201910905962.4A
Other languages
Chinese (zh)
Inventor
郝晓冉
陈岚
倪茂
Current Assignee
Institute of Microelectronics of CAS
Original Assignee
Institute of Microelectronics of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Microelectronics of CAS
Priority to CN201910905962.4A
Publication of CN110674051A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/06 Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G06F12/0646 Configuration or reconfiguration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023 Free address space management
    • G06F12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023 Free address space management
    • G06F12/0253 Garbage collection, i.e. reclamation of unreferenced memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0877 Cache access modes
    • G06F12/0882 Page mode
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10 Address translation
    • G06F12/1009 Address translation using page tables, e.g. page table structures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The embodiment of the application discloses a data storage method and a data storage device. The method includes: in response to a storage request carrying data to be stored, allocating a corresponding virtual page to the data to be stored according to its size to obtain a target virtual page; determining the fine-grained memory page mapped by the target virtual page to obtain a target fine-grained memory page, where one physical page comprises at least one fine-grained memory page; and writing the data to be stored into the target fine-grained memory page. The embodiment of the application can reduce data exchange between the memory and the hard disk, effectively prevent the increase in system input and output caused by hot data being swapped out along with cold data, and thereby preserve the processing performance of the system.

Description

Data storage method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data storage method and apparatus.
Background
In existing computer architectures, the mapping from a user process's virtual address space to the physical address space is handled by the physical memory management system in the operating system, which dynamically allocates physical memory for data at run time. When free physical memory runs low, the kernel swap daemon kswapd is woken up, and infrequently used memory pages are temporarily swapped out to the swap partition (SWAP) of the hard disk; when such a page is accessed again, it must be swapped back into memory from SWAP. If a page swapped out to SWAP is swapped back in shortly afterwards, unnecessary swapping of memory data results, degrading the processing performance of the system.
Disclosure of Invention
In view of this, the present application provides a data storage method and apparatus, which can solve the problem of system performance degradation caused by unnecessary swap-in and swap-out of memory data in the prior art.
A first aspect of an embodiment of the present application provides a data storage method, including:
responding to a storage request carrying data to be stored, and allocating a corresponding virtual page to the data to be stored to obtain a target virtual page; the virtual pages correspond one-to-one to the data to be stored;
determining a fine-grained memory page mapped by the target virtual page to obtain a target fine-grained memory page; the virtual pages correspond one-to-one to the fine-grained memory pages, and the size of the target fine-grained memory page is greater than or equal to the size of the data to be stored;
writing the data to be stored into the target fine-grained memory page;
the fine-grained memory page is obtained by the following steps:
applying for a continuous memory space from the memory; the memory space comprises at least one physical page;
and dividing each physical page included in the memory space into a plurality of fine-grained memory pages based on the size of the currently received data to be stored.
Optionally, the dividing, based on the size of the currently received data to be stored, each physical page included in the memory space into a plurality of fine-grained memory pages specifically includes:
determining a target memory size from a plurality of preset memory sizes according to the size of the currently received data to be stored; the target memory size is the smallest candidate memory size, where a candidate memory size is any of the plurality of preset memory sizes that is greater than or equal to the size of the currently received data to be stored;
when the size of the physical page is evenly divisible by the target memory size, dividing each physical page included in the continuous memory space into a plurality of fine-grained memory pages of equal size, each equal to the target memory size;
when the size of the physical page is not evenly divisible by the target memory size, dividing each physical page included in the continuous memory space into a first fine-grained memory page and at least one second fine-grained memory page; the size of the first fine-grained memory page is smaller than the target memory size, and the size of each second fine-grained memory page is equal to the target memory size.
Optionally, after the dividing, based on the size of the currently received data to be stored, each physical page included in the continuous memory space into a plurality of fine-grained memory pages, the method further includes:
obtaining a memory page identifier of each fine-grained memory page; the memory page identifier comprises the size of the fine-grained memory page and the intra-page offset of the fine-grained memory page in its corresponding physical page;
linking the divided fine-grained memory pages into corresponding idle linked lists according to the memory page identifiers; the idle linked lists correspond one-to-one to the memory page identifiers.
Optionally, the determining the fine-grained memory page mapped by the target virtual page to obtain a target fine-grained memory page specifically includes:
determining an intra-page offset of the data to be stored based on the target virtual page;
and taking out a fine-grained memory page from the corresponding idle linked list as the target fine-grained memory page according to the size of the data to be stored and the offset in the page.
Optionally, after the writing of the data to be stored into the target fine-grained memory page, the method further includes:
according to the memory page identifier, linking the target fine-grained memory page into a corresponding first first-in first-out linked list, and setting an access tag of the target fine-grained memory page to a first identifier; the first first-in first-out linked lists correspond one-to-one to the memory page identifiers;
when the first first-in first-out linked list is full, the method further includes:
reading out a fine-grained memory page from the first-in first-out linked list to obtain a third fine-grained memory page;
when the access tag of the third fine-grained memory page is the first identifier, re-linking the third fine-grained memory page into the corresponding first first-in first-out linked list, and setting the access tag of the third fine-grained memory page to a second identifier;
when the access tag of the third fine-grained memory page is a second identifier, linking the third fine-grained memory page into a corresponding second first-in first-out linked list according to the memory page identifier; the second first-in first-out linked list corresponds to the memory page identification one by one;
when the second first-in first-out linked list is full, the method further includes:
reading out a fine-grained memory page from the second first-in first-out linked list to obtain a fourth fine-grained memory page;
and writing the data in the fourth fine-grained memory page into a bottom-layer storage device.
Optionally, the method further includes:
responding to the access to the data in the memory, and setting an access tag of a fine-grained memory page corresponding to the accessed data as a first identifier;
then, the reading out of a fine-grained memory page from the second first-in first-out linked list to obtain a fourth fine-grained memory page specifically includes:
when the access tag of the fourth fine-grained memory page is the second identifier, writing the data in the fourth fine-grained memory page into a bottom-layer storage device;
and when the access tag of the fourth fine-grained memory page is the first identifier, re-linking the fourth fine-grained memory page into a corresponding second first-in first-out linked list, and setting the access tag of the fourth fine-grained memory page as a second identifier.
A second aspect of embodiments of the present application provides a data storage device, including: the device comprises a distribution module, a determination module, a writing module, an acquisition module and a division module;
the allocation module is used for responding to a storage request carrying data to be stored, and allocating a corresponding virtual page to the data to be stored according to the size of the data to be stored to obtain a target virtual page; the virtual pages correspond to data to be stored one by one;
the determining module is configured to determine a fine-grained memory page mapped by the target virtual page, so as to obtain a target fine-grained memory page; the virtual pages correspond to the fine-grained memory pages one by one, and the size of the target fine-grained memory page is larger than or equal to that of the data to be stored;
the writing module is configured to write the data to be stored into the target fine-grained memory page;
the acquisition module is used for applying for a continuous memory space from the memory; the continuous memory space comprises at least one physical page;
the dividing module is configured to divide each physical page included in the continuous memory space into a plurality of fine-grained memory pages based on a size of currently received data to be stored.
A third aspect of the embodiments of the present application provides a computer-readable storage medium, on which a program is stored, where the program, when executed by a processor, implements any one of the data storage methods provided by the first aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application provides a processor, where the processor is configured to execute a program, where the program executes any one of the data storage methods provided in the first aspect of the embodiments of the present application when running.
A fifth aspect of embodiments of the present application provides a computer, including a memory and a processor; the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute any one of the data storage methods provided in the first aspect of the embodiments of the present application according to an instruction in the program code.
Compared with the prior art, the method has the advantages that:
in the embodiment of the present application, memory data management is performed at the granularity of the size of the data to be stored: each physical page included in a continuous memory space applied for from the memory is divided into a plurality of fine-grained memory pages based on the size of the currently received data to be stored. When a storage request carrying data to be stored is received, a one-to-one corresponding virtual page is allocated to the data to be stored according to its size to obtain a target virtual page; the fine-grained memory page mapped by the target virtual page is then determined to obtain a target fine-grained memory page, whose size is not smaller than the size of the data to be stored; and the data to be stored is written into the target fine-grained memory page, completing the storage of the data. Because physical pages are divided at the granularity of the size of the data to be stored, and memory data management is performed in units of the divided fine-grained memory pages, data is also swapped between the memory and the hard disk at the granularity of the user's stored data. A swapped-out memory page therefore contains only the cold data that needs to be swapped out, which reduces data exchange between the memory and the hard disk, effectively prevents the increase in system input and output caused by hot data being swapped out along with cold data, and preserves the processing performance of the system.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a data storage method according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of another data storage method according to an embodiment of the present application;
Fig. 3 is a diagram illustrating a mapping relationship between a virtual page and a physical page in an embodiment of the present application;
Fig. 4 is a schematic flowchart of another data storage method according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a data storage device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. It is obvious that the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The following first introduces a number of technical terms involved in the embodiments of the present application:
the physical page refers to a physical memory page obtained after paging a physical memory, and each physical page may record different information, and generally, the size of each physical page is 4 KB.
The virtual page corresponds to the physical page, and refers to a virtual memory page obtained after the virtual memory of the system is paged.
In the prior art, for a database adopting a key-value index structure such as a B-Tree, a process accesses keywords (keys) far more frequently than values. When free physical memory is insufficient, the data in some physical pages must be swapped out from the memory to an underlying storage device (such as a hard disk). Because a swapped-out physical page may contain key data, which is accessed frequently, its data is likely to be swapped back into memory within a short time, causing unnecessary swapping of memory data and thereby degrading the processing performance of the system.
Therefore, in the embodiment of the present application, memory management is performed at the granularity of the memory size applied for by the user in a single request (i.e. the size of the data to be written, which may be referred to as an object). Each manageable physical page is pre-divided according to the object into a plurality of fine-grained memory pages of different levels, where the fine-grained memory pages at each level have a different size. When a user applies for memory space to write data, the data to be stored is written into a fine-grained memory page matching the user's requested space. Consequently, data is also swapped between the memory and the underlying storage device at the granularity of the user's stored data: a swapped-out memory page contains only the cold data (i.e. data with a low access frequency) that needs to be swapped out, which reduces data exchange between the memory and the hard disk, effectively prevents the increase in system input and output caused by hot data (i.e. data with a high access frequency) being swapped out along with cold data, and preserves the processing performance of the system.
Based on the above-mentioned ideas, in order to make the above-mentioned objects, features and advantages of the present application more comprehensible, specific embodiments of the present application are described in detail below with reference to the accompanying drawings.
For convenience of understanding, a method for obtaining the fine-grained memory page in the embodiment of the present application is first described below. Referring to fig. 1, the figure is a schematic flowchart of a data storage method according to an embodiment of the present application.
In the embodiment of the present application, the fine-grained memory page is obtained by using the following steps:
s101: a continuous memory space is applied from the memory.
In the embodiment of the present application, when data needs to be stored in the memory, a continuous memory space (which may be referred to as a slab) is first applied for from the memory to store the data. For example, a continuous memory space of 1 MB may be applied for at a time, which is not limited here. It can be understood that one applied-for slab includes at least one physical page.
In practical applications, step S101 may be performed at any stage of data storage, for example when a data storage request is first received, or when the currently applied-for slab has insufficient free space, which is not limited here.
S102: based on the size of currently received data to be stored, dividing each physical page included in the memory space into a plurality of fine-grained memory pages.
In this embodiment of the present application, based on a current data storage requirement, each physical page included in one slab is divided into a plurality of fine-grained memory pages to store current data. Therefore, when data is swapped in and swapped out, the data is swapped in and swapped out by taking the fine-grained memory page as a unit, thereby avoiding swapping in and swapping out of redundant data and saving system resources.
As an example, suppose a data storage request is received and the size of the data to be stored is 1 KB. If the already applied-for memory space is insufficient, a slab is first applied for from the memory, and each physical page in the applied-for slab is divided into a plurality of fine-grained memory pages of size 1 KB. The data to be stored can then be stored in one of the divided 1 KB fine-grained memory pages, and when that data is later swapped in or out, the corresponding fine-grained memory page is likewise the unit of granularity.
In some possible implementation manners of the embodiment of the present application, step S102 may specifically include:
s1021: and determining the size of a target memory from a plurality of preset memory sizes according to the size of the currently received data to be stored.
In this embodiment of the present application, the preset sizes of the multiple memories may be size levels of fine-grained memory pages set based on data storage requirements, and the fine-grained memory pages with different size levels may adapt to storage requirements of data with different sizes. In practical applications, the preset memory sizes may be set according to actual storage conditions, and may be, for example, 0.5KB, 1KB, 1.5KB, 2KB, 3.5KB, 4KB, and the like, which is not limited herein. It is understood that the predetermined memory sizes are not larger than the physical page size.
It should be noted that the target memory size is the basis of division when fine-grained memory pages are currently being partitioned. The target memory size is the smallest candidate memory size, where a candidate memory size is any of the plurality of preset memory sizes that is greater than or equal to the size of the currently received data to be stored. That is, from the plurality of preset memory sizes, the smallest memory size not smaller than the size of the currently received data to be stored is selected as the target memory size. When data is stored, the size of the divided fine-grained memory pages (i.e. the target memory size) is determined according to the size of the currently received data to be stored, so that a physical page is divided into fine-grained memory pages suited to the storage requirement of that data. For example, if the preset memory sizes are 0.5 KB, 1 KB, 1.5 KB, 2 KB, 3.5 KB and 4 KB and the size of the currently received data to be stored is 3 KB, then the target memory size is 3.5 KB; if the size of the currently received data to be stored is 2 KB, the target memory size is 2 KB, and so on.
S1022: when the size of the physical page can be divided by the size of a target memory, dividing each physical page included in the continuous memory space into a plurality of fine-grained memory pages with the same size and equal to the size of the target memory; when the size of the physical page cannot be evenly divided by the size of the target memory, dividing each physical page included in the continuous memory space into a first fine-grained memory page and at least one second fine-grained memory page.
In this embodiment of the present application, the size of the first fine-grained memory page is smaller than the target memory size, and the size of the second fine-grained memory page is equal to the target memory size. For example, when the target memory size is 2 KB, the physical page is divided into two 2 KB fine-grained memory pages; when the target memory size is 3.5 KB, the physical page is divided into one 3.5 KB fine-grained memory page and one 0.5 KB fine-grained memory page.
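The selection and division rules of steps S1021 and S1022 can be sketched as follows. This is an illustrative Python model, not the claimed implementation: the function names are hypothetical, and the 4 KB page size and preset size levels are taken from the examples above (sizes larger than the page are assumed not to occur).

```python
PAGE_SIZE_KB = 4.0
PRESET_SIZES_KB = [0.5, 1.0, 1.5, 2.0, 3.5, 4.0]  # example size levels from the text

def target_size(data_size_kb):
    """S1021: smallest preset memory size >= the incoming data size."""
    return min(s for s in PRESET_SIZES_KB if s >= data_size_kb)

def split_page(target_kb):
    """S1022: divide one physical page into fine-grained memory pages.

    If the target size divides the page evenly, all pieces are equal;
    otherwise the remainder becomes the single smaller 'first' page.
    """
    count = int(PAGE_SIZE_KB // target_kb)
    remainder = round(PAGE_SIZE_KB - count * target_kb, 3)
    pages = [target_kb] * count          # the 'second' fine-grained pages
    if remainder > 0:
        pages.insert(0, remainder)       # the smaller 'first' fine-grained page
    return pages
```

For instance, 3 KB of data selects the 3.5 KB level, and a 4 KB page then splits into one 0.5 KB page and one 3.5 KB page, matching the example in the text.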
A specific data storage process is explained below. Referring to fig. 2, the figure is a schematic flow chart of another data storage method according to an embodiment of the present application.
The data storage method provided by the embodiment of the application comprises the following steps:
s201: responding to a storage request carrying data to be stored, and distributing a corresponding virtual page for the data to be stored according to the size of the data to be stored to obtain a target virtual page.
In the embodiment of the application, when a storage request is received, a virtual page is allocated to the data to be stored to obtain a target virtual page. The virtual pages correspond one-to-one to the data to be stored, and the virtual pages correspond one-to-one to the fine-grained memory pages. It can be understood that, because one physical page is divided into at least one fine-grained memory page in the embodiment of the present application, one physical page corresponds to at least one virtual page, unlike the prior art; an example is shown in fig. 3. After the virtual page is allocated, the intra-page offset of the data to be stored can be determined, so that a suitable fine-grained memory page can be selected for writing the data to be stored.
S202: and determining the fine-grained memory page mapped by the target virtual page to obtain the target fine-grained memory page.
In the embodiment of the present application, the size of the target fine-grained memory page is greater than or equal to the size of the data to be stored. In one example, a fine-grained memory page having a size closest to that of data to be stored may be selected as the target memory page, so as to save memory space. For example, when the size of the data to be stored is 2KB, selecting a fine-grained memory page with the size of 2KB as a target fine-grained memory page; and when the size of the data to be stored is 0.8KB, selecting a fine-grained memory page with the size of 1KB as a target fine-grained memory page.
It should be noted here that, in order to facilitate management and use of fine-grained memory pages of different sizes, in some possible implementations of the embodiment of the present application, step S102 may further include:
obtaining a memory page identifier of each fine-grained memory page; and linking the fine-grained memory pages obtained by dividing into corresponding idle linked lists according to the memory page identifiers.
In this embodiment of the present application, the memory page identifier (which may be referred to as a shape) includes the size (size) of a fine-grained memory page and the intra-page offset (offset) of the fine-grained memory page within its physical page. The idle linked lists correspond one-to-one to the memory page identifiers, that is, one shape corresponds to one idle linked list. During data storage, the corresponding shape can be obtained from the known size of the data to be stored and the intra-page offset determined when the virtual page was allocated to the data, and a fine-grained memory page can then be taken out of the idle linked list corresponding to that shape to store the data.
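A minimal sketch of how divided fine-grained memory pages might be linked into per-shape idle linked lists. All names are hypothetical assumptions, and a Python deque stands in for each linked list:

```python
from collections import defaultdict, deque

def build_free_lists(physical_pages, piece_sizes_kb):
    """Link every fine-grained page of every physical page into the idle
    list keyed by its shape, i.e. the (size, intra-page offset) pair."""
    free_lists = defaultdict(deque)  # shape -> FIFO of free fine-grained pages
    for page_id in physical_pages:
        offset_kb = 0.0
        for size_kb in piece_sizes_kb:
            shape = (size_kb, offset_kb)       # the memory page identifier
            free_lists[shape].append(page_id)  # link this piece into its list
            offset_kb = round(offset_kb + size_kb, 3)
    return free_lists
```

With two physical pages each split into two 2 KB pieces, this yields two idle lists, one per shape: (2 KB, offset 0) and (2 KB, offset 2 KB).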
Then, in some possible implementation manners of the embodiment of the present application, step S202 may specifically include:
determining an intra-page offset of data to be stored based on the target virtual page; and taking out a fine-grained memory page from the corresponding idle linked list as a target fine-grained memory page according to the size of the data to be stored and the offset in the page.
It should be noted that the intra-page offset can be determined from previous data storage. For example, if the size of the data to be stored is 1 KB and two pieces of 1 KB data have already been stored in the page, then the intra-page offset assigned when the virtual page is allocated to the data to be stored is 2 KB. The corresponding shape is obtained from the known size of the data to be stored and this intra-page offset, a fine-grained memory page is taken out of the idle linked list corresponding to the shape as the target fine-grained memory page, and the data to be stored is stored in it.
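The fetch described here can be modeled as popping from the idle linked list keyed by the (size, intra-page offset) shape. A hedged Python sketch with hypothetical names follows; the slab re-application on an empty list is only noted, not implemented:

```python
from collections import deque

def allocate(free_lists, data_size_kb, intra_page_offset_kb):
    """S202 sketch: pop one free fine-grained memory page matching the
    (size, offset) shape, or return None if that idle linked list is empty."""
    fifo = free_lists.get((data_size_kb, intra_page_offset_kb))
    if not fifo:
        return None  # in the described scheme a new slab would be applied for here
    return fifo.popleft()

# example: one idle linked list for shape (1 KB, offset 2 KB) holding pages 7 and 8
free_lists = {(1.0, 2.0): deque([7, 8])}
```

Successive 1 KB requests at offset 2 KB would receive pages 7 and then 8; a third request finds the list empty.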
S203: and writing the data to be stored into a target fine-grained memory page.
In summary, by dividing physical pages at the granularity of the size of the data to be stored and managing memory data in units of the resulting fine-grained memory pages, data is swapped between the memory and the hard disk at the granularity of the user's stored data. A swapped-out memory page therefore contains only cold data, hot data is not swapped out along with it, and the processing performance of the system is preserved.
The above describes how data is written into the memory; how data is swapped out of the memory to the underlying storage device is described below.
Referring to fig. 4, this figure is a schematic flow chart of another data storage method according to an embodiment of the present application. This figure provides a more detailed data storage method than figures 1 and 2.
In some possible implementations of the embodiment of the present application, after step S203, the method may further include:
S401: link the target fine-grained memory page into the corresponding first-in first-out linked list according to the memory page identifier, and set the access tag of the target fine-grained memory page to a first identifier.
In the embodiment of the present application, the first-in first-out linked lists correspond one-to-one to the memory page identifiers, and the length of a first-in first-out linked list may be set according to actual needs, which is not limited here. The access tag is used to record how frequently the user accesses the data, so as to distinguish frequently accessed data from infrequently accessed data and prevent frequently accessed data from being swapped out of memory, which would affect system efficiency. The first identifier indicates that the corresponding data has just been accessed by the user. In practical applications, the first identifier may be set according to actual needs; for example, it may be 1.
S402: when the first-in first-out linked list is full, read out a fine-grained memory page from the first-in first-out linked list to obtain a third fine-grained memory page.

It can be understood that a fine-grained memory page (i.e., the third fine-grained memory page) is read out from the head of the first-in first-out linked list so that memory data with a lower access frequency can be swapped out, ensuring sufficient memory space.
S403: if the access tag of the third fine-grained memory page is the first identifier, re-link the third fine-grained memory page into the corresponding first-in first-out linked list and set its access tag to a second identifier.

It can be understood that the second identifier indicates that the corresponding data has not recently been accessed by the user. In practical applications, the second identifier may be set according to actual needs; for example, it may be 0.
S404: when the access tag of the third fine-grained memory page is the second identifier, link the third fine-grained memory page into the corresponding second first-in first-out linked list according to the memory page identifier.

In the embodiment of the present application, the second first-in first-out linked lists also correspond one-to-one to the memory page identifiers, and the length of a second first-in first-out linked list may be set according to actual needs; it may be equal or unequal to the length of the first-in first-out linked list and is not limited here. When the data corresponding to the third fine-grained memory page has not recently been accessed by the user (i.e., its access tag is the second identifier), the third fine-grained memory page is demoted to the second first-in first-out linked list.
S405: when the second first-in first-out linked list is full, read out a fine-grained memory page from the second first-in first-out linked list to obtain a fourth fine-grained memory page.

S406: write the data in the fourth fine-grained memory page into the underlying storage device.

When the second first-in first-out linked list is full, the data corresponding to the fine-grained memory page at its head (i.e., the fourth fine-grained memory page) is swapped out of memory to ensure available memory space.
It should be noted that, because users access data randomly, in some possible implementations of the embodiment of the present application, in order to further prevent hot data (data with a high access frequency) from being swapped out of memory, the method may further include:

in response to an access to data in the memory, setting the access tag of the fine-grained memory page corresponding to the accessed data to the first identifier.

That is, when the user accesses data in the memory again, if the access tag of the corresponding fine-grained memory page is not the first identifier, it is reset to the first identifier, marking the data as having recently been accessed by the user, i.e., as hot data.
Then, after step S405, the method may specifically include:
when the access tag of the fourth fine-grained memory page is the second identifier, the data in the fourth fine-grained memory page is written into the underlying storage device; when the access tag of the fourth fine-grained memory page is the first identifier, the fourth fine-grained memory page is re-linked into the corresponding second first-in first-out linked list and its access tag is set to the second identifier.

It can be understood that, when the second first-in first-out linked list is full, a fine-grained memory page (i.e., the fourth fine-grained memory page) is read out from its head. If the data of the fourth fine-grained memory page has recently been accessed by the user (i.e., its access tag is the first identifier), the access tag is set to the second identifier and the page is linked into the second first-in first-out linked list again; if the data has not recently been accessed by the user (i.e., its access tag is the second identifier), the data is swapped out of memory and written into the underlying storage device. In this way, only the cold data stored in the fine-grained memory page at the head of the second first-in first-out linked list is swapped out of memory, which avoids swapping out hot data and saves system resources.
In actual practice, in order to establish a mapping between virtual addresses and logical addresses in the underlying storage device, an object table (OT) may be defined. The OT is indexed by virtual address; each entry holds the size of the object, its intra-page offset, and its logical address. When a virtual page is successfully allocated, the size of the object in the virtual page and its intra-page offset are written into the OT. When the object is evicted to the underlying storage device, its logical address in that device is written into the OT.
In a specific example, the underlying storage device is a NAND-flash-based solid-state drive (SSD). The SSD reads data in units of 4KB and writes data in units of blocks (e.g., 256KB). To adapt to this read/write characteristic, a block buffer may be reserved: an object that needs to be evicted to the SSD is first stored in the block buffer together with its virtual address, and when the block buffer is full, the data in the entire block buffer is written into the SSD. The logical address of each object in the SSD is then written into the OT according to the virtual addresses of the objects recorded in each block header.
Since SSDs are written in blocks, the logical address space of the SSD requires garbage collection. When the available logical space of the SSD falls below a certain threshold, a garbage collection process is started in the background. The system creates a table (e.g., GC_table) for the whole SSD logical address space that records the total amount of invalid data in each data block; when the amount of invalid data in a data block exceeds a certain threshold, that block becomes a candidate for the garbage collection process.
In practical applications, the method may be packaged as an independent system (FG_MM), and all memory may be managed in the data storage manner provided by the embodiment of the present application, which is not limited here; alternatively, part of the memory may be managed in the prior-art manner while the remaining memory is managed in the data storage manner provided by the embodiment of the present application.
In a specific implementation, when the data storage method provided by the embodiment of the present application is combined with a prior-art method for memory management, FG_MM is designed according to the following principles so as to be convenient for users and easy to integrate with an existing system (such as Linux):
The memory allocation and release interfaces, data read interface, and data write interface provided by FG_MM are completely independent of the system's original interfaces, and the virtual and physical address spaces managed by FG_MM are likewise completely independent. A user can therefore flexibly choose which interfaces to call according to the characteristics of the application, such as the size or importance of the user data. For example, when the total amount of user data is small (DRAM can meet the demand and memory swapping generally does not occur), the user can call the system's original interfaces to allocate and manage memory. If the total amount of user data is large (far beyond DRAM capacity) and the data varies in access frequency, the user can still call the system's original interfaces for frequently accessed data, while calling the memory allocation and release interfaces provided by FG_MM for infrequently accessed data.
The FG_MM memory management system may be encapsulated as a library that provides the following six interfaces for the user to call.
(1) void sysInit(), void sysQuit(): sysInit() initializes the metadata in the FG_MM system; sysQuit() releases the memory space used by the system before it exits.

(2) void *nv_malloc(size_t userReqSize): allocates a memory space of size userReqSize for an object.

(3) void *nv_calloc(size_t nmemb, size_t userReqSize): allocates a memory space of size userReqSize for each of nmemb objects.

(4) void *nv_realloc(void *vaddr, size_t userReqSize): adjusts the size of the memory space pointed to by vaddr to userReqSize bytes.

(5) void nv_free(void *vaddr): frees the memory space pointed to by the pointer vaddr.
Based on the data storage method provided by the above embodiment, the embodiment of the application also provides a data storage device.
Referring to fig. 5, a schematic structural diagram of a data storage device according to an embodiment of the present application is shown.
The data storage device provided by the embodiment of the application comprises: the device comprises an allocation module 501, a determination module 502, a writing module 503, an acquisition module 504 and a division module 505;
the allocating module 501 is configured to, in response to a storage request carrying data to be stored, allocate a corresponding virtual page to the data to be stored according to the size of the data to be stored, so as to obtain a target virtual page; the virtual pages correspond to data to be stored one by one;
a determining module 502, configured to determine a fine-grained memory page mapped by a target virtual page, to obtain a target fine-grained memory page; the virtual pages correspond to the fine-grained memory pages one by one, and the size of the target fine-grained memory page is larger than or equal to that of the data to be stored;
a writing module 503, configured to write data to be stored into a target fine-grained memory page;
an obtaining module 504, configured to apply for a continuous memory space from a memory; the continuous memory space comprises at least one physical page;
the dividing module 505 is configured to divide each physical page included in the continuous memory space into a plurality of fine-grained memory pages based on the size of currently received data to be stored.
In some possible implementations of the embodiment of the present application, the dividing module 505 may specifically include: a determining submodule and a partitioning submodule;
the determining submodule is used for determining the size of a target memory from a plurality of preset memory sizes according to the size of currently received data to be stored; the size of the target memory is the smallest size of the memory to be selected, and the size of the memory to be selected is the size of the memory of the size of the data to be stored which is larger than or equal to the size of the currently received data to be stored in the plurality of memory sizes;
the partitioning submodule is used for partitioning each physical page included in the continuous memory space into a plurality of fine-grained memory pages with the same size and the size equal to that of the target memory when the size of the physical page can be divided by the size of the target memory; when the size of the physical page cannot be divided by the size of the target memory, dividing each physical page included in the continuous memory space into a first fine-grained memory page and at least one second fine-grained memory page; the size of the first fine-grained memory page is smaller than that of the target memory, and the size of the second fine-grained memory page is equal to that of the target memory.
In some possible implementation manners of the embodiment of the present application, the apparatus may further include: the system comprises an identification acquisition module and a first chaining-in module;
the identification acquisition module is used for acquiring the memory page identification of each fine-grained memory page; the memory page identification comprises the size of a fine-grained memory page and the page offset of the fine-grained memory page in a corresponding physical page;
the first chaining module is used for chaining the divided fine-grained memory pages into corresponding idle linked lists according to the memory page identifiers; the idle linked list and the memory page identification are in one-to-one correspondence.
In some possible implementations of the embodiment of the present application, the determining module 502 may specifically include: a determining submodule and a fetching submodule;

the determining submodule is used for determining the intra-page offset of the data to be stored based on the target virtual page;

and the fetching submodule is used for taking out a fine-grained memory page from the corresponding idle linked list as the target fine-grained memory page according to the size of the data to be stored and the intra-page offset.
In some possible implementations of the embodiment of the present application, the apparatus may further include: a second chaining-in module, a first fetching module, a second chaining module, and a second fetching module;
the second chaining-in module is used for chaining the target fine-grained memory page into the corresponding first FIFO linked list according to the memory page identifier and setting the access label of the target fine-grained memory page as the first identifier; the first-in first-out linked list corresponds to the memory page identification one by one;
the first fetching module is used for reading out a fine-grained memory page from the first-in first-out linked list to obtain a third fine-grained memory page when the first-in first-out linked list is full;
a second chaining module, configured to, when an access tag of a third fine-grained memory page is a first identifier, re-chain the third fine-grained memory page into a corresponding first-in first-out linked list, and set the access tag of the third fine-grained memory page as a second identifier; when the access tag of the third fine-grained memory page is a second identifier, linking the third fine-grained memory page into a corresponding second first-in first-out linked list according to the memory page identifier; the second first-in first-out linked list corresponds to the memory page identification one by one;
the second fetching module is used for reading out a fine-grained memory page from the second first-in first-out linked list when the second first-in first-out linked list is full, and obtaining a fourth fine-grained memory page;
the writing module 503 is further configured to write the data in the fourth fine-grained memory page into the underlying storage device.
In some possible implementation manners of the embodiment of the present application, the apparatus may further include: a setting module and a third chaining module;
the setting module is used for responding to the access of data in the memory and setting an access tag of a fine-grained memory page corresponding to the accessed data as a first identifier;
the writing module 503 is specifically configured to write the data in the fourth fine-grained memory page into the underlying storage device when the access tag of the fourth fine-grained memory page is the second identifier;
and a third chaining-in module, configured to, when the access tag of the fourth fine-grained memory page is the first identifier, re-chain the fourth fine-grained memory page into the corresponding second fifo link table, and set the access tag of the fourth fine-grained memory page as the second identifier.
In the embodiment of the present application, memory data is managed at the granularity of the data to be stored: each physical page in a continuous memory space applied for from the memory is divided into a plurality of fine-grained memory pages based on the size of the currently received data to be stored. When a storage request carrying data to be stored is received, a one-to-one corresponding virtual page is allocated to the data according to its size to obtain a target virtual page; the fine-grained memory page mapped by the target virtual page is then determined to obtain a target fine-grained memory page, whose size is not smaller than the size of the data to be stored; finally, the data is written into the target fine-grained memory page, completing the storage. Because physical pages are divided at the granularity of the data to be stored, and memory is managed in units of the divided fine-grained memory pages, data is also swapped between memory and hard disk at the granularity of the user's data. A swapped-out memory page therefore contains only the cold data to be swapped out, which reduces data movement between memory and hard disk, effectively prevents the increase in system input/output caused by hot data being swapped out along with cold data, and preserves the processing performance of the system.
Based on the data storage method and apparatus provided by the foregoing embodiments, the present application further provides a computer-readable storage medium, on which a program is stored, and when the program is executed by a processor, the computer-readable storage medium implements any one of the data storage methods provided by the foregoing embodiments.
Based on the data storage method and the data storage device provided by the above embodiments, an embodiment of the present application further provides a processor, where the processor is configured to execute a program, where the program executes any one of the data storage methods provided by the above embodiments when running.
Based on the data storage method and device provided by the above embodiment, the embodiment of the application further provides a computer, which comprises a memory and a processor; a memory for storing the program code and transmitting the program code to the processor; a processor for executing any one of the data storage methods provided by the above embodiments according to instructions in the program code.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. The system or the device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing is merely a preferred embodiment of the present application and is not intended to limit the present application in any way. Although the present application has been described with reference to the preferred embodiments, it is not intended to limit the present application. Those skilled in the art can now make numerous possible variations and modifications to the disclosed embodiments, or modify equivalent embodiments, using the methods and techniques disclosed above, without departing from the scope of the claimed embodiments. Therefore, any simple modification, equivalent change and modification made to the above embodiments according to the technical essence of the present application still fall within the protection scope of the technical solution of the present application without departing from the content of the technical solution of the present application.

Claims (10)

1. A method of data storage, the method comprising:
responding to a storage request carrying data to be stored, and distributing a corresponding virtual page for the data to be stored according to the size of the data to be stored to obtain a target virtual page; the virtual pages correspond to data to be stored one by one;
determining a fine-grained memory page mapped by the target virtual page to obtain a target fine-grained memory page; the virtual pages correspond to the fine-grained memory pages one by one, and the size of the target fine-grained memory page is larger than or equal to that of the data to be stored;
writing the data to be stored into the target fine-grained memory page;
the fine-grained memory page is obtained by the following steps:
applying for a continuous memory space from the memory; the memory space comprises at least one physical page;
and dividing each physical page included in the memory space into a plurality of fine-grained memory pages based on the size of the currently received data to be stored.
2. The method according to claim 1, wherein the dividing, based on the size of the currently received data to be stored, each physical page included in the memory space into a plurality of fine-grained memory pages specifically comprises:
determining the size of a target memory from a plurality of preset memory sizes according to the size of the currently received data to be stored; the target memory size is the smallest to-be-selected memory size, and the to-be-selected memory size is the memory size larger than or equal to the currently received to-be-stored data size in the plurality of memory sizes;
when the size of the physical page can be divided by the size of the target memory, dividing each physical page included in the continuous memory space into a plurality of fine-grained memory pages with the same size and equal to the size of the target memory;
when the size of a physical page cannot be divided by the size of the target memory, dividing each physical page included in the continuous memory space into a first fine-grained memory page and at least one second fine-grained memory page; the size of the first fine-grained memory page is smaller than the size of the target memory, and the size of the second fine-grained memory page is equal to the size of the target memory.
3. The method according to claim 1, wherein the dividing, based on the size of the currently received data to be stored, each physical page included in the continuous memory space into a plurality of fine-grained memory pages, and then further comprising:
obtaining a memory page identifier of each fine-grained memory page; the memory page identification comprises the size of a fine-grained memory page and the page offset of the fine-grained memory page in a corresponding physical page;
according to the memory page identification, linking the divided fine-grained memory pages into corresponding idle linked lists; the idle link list corresponds to the memory page identifier one to one.
4. The method according to claim 3, wherein the determining the fine-grained memory page mapped by the target virtual page to obtain a target fine-grained memory page specifically comprises:
determining an intra-page offset of the data to be stored based on the target virtual page;
and taking out a fine-grained memory page from the corresponding idle linked list as the target fine-grained memory page according to the size of the data to be stored and the offset in the page.
5. The method according to any one of claims 1 to 4, wherein after the writing the data to be stored into the target fine-grained memory page, the method further comprises:
according to the memory page identification, the target fine-grained memory page is linked into a corresponding first-in first-out linked list, and an access label of the target fine-grained memory page is set as a first identification; the first-in first-out linked list corresponds to the memory page identification one by one;
when the first-in first-out linked list is full, the method further comprises:
reading out a fine-grained memory page from the first-in first-out linked list to obtain a third fine-grained memory page;
when the access tag of the third fine-grained memory page is a first identifier, re-linking the third fine-grained memory page into a corresponding first-in first-out linked list, and setting the access tag of the third fine-grained memory page as a second identifier;
when the access tag of the third fine-grained memory page is a second identifier, linking the third fine-grained memory page into a corresponding second first-in first-out linked list according to the memory page identifier; the second first-in first-out linked list corresponds to the memory page identification one by one;
when the second first-in first-out linked list is full, the method further comprises:
reading out a fine-grained memory page from the second first-in first-out linked list to obtain a fourth fine-grained memory page;
and writing the data in the fourth fine-grained memory page into a bottom-layer storage device.
6. The method of claim 5, further comprising:
responding to the access to the data in the memory, and setting an access tag of a fine-grained memory page corresponding to the accessed data as a first identifier;
then, after the reading out the fine-grained memory page from the second first-in first-out linked list to obtain a fourth fine-grained memory page, the method specifically comprises:
when the access tag of the fourth fine-grained memory page is a second identifier, writing the data in the fourth fine-grained memory page into a bottom-layer storage device;
and when the access tag of the fourth fine-grained memory page is the first identifier, re-linking the fourth fine-grained memory page into a corresponding second first-in first-out linked list, and setting the access tag of the fourth fine-grained memory page as a second identifier.
7. A data storage device, characterized in that the device comprises: the device comprises a distribution module, a determination module, a writing module, an acquisition module and a division module;
the allocation module is used for responding to a storage request carrying data to be stored, and allocating a corresponding virtual page to the data to be stored according to the size of the data to be stored to obtain a target virtual page; the virtual pages correspond to data to be stored one by one;
the determining module is configured to determine a fine-grained memory page mapped by the target virtual page, so as to obtain a target fine-grained memory page; the virtual pages correspond to the fine-grained memory pages one by one, and the size of the target fine-grained memory page is larger than or equal to that of the data to be stored;
the writing module is configured to write the data to be stored into the target fine-grained memory page;
the acquisition module is used for applying for a continuous memory space from the memory; the continuous memory space comprises at least one physical page;
the dividing module is configured to divide each physical page included in the continuous memory space into a plurality of fine-grained memory pages based on a size of currently received data to be stored.
8. A computer-readable storage medium on which a program is stored, which program, when executed by a processor, implements the data storage method of any one of claims 1-6.
9. A processor for running a program, wherein the program when running performs the data storage method of any one of claims 1 to 6.
10. A computer comprising a memory and a processor; the memory is used for storing program codes and transmitting the program codes to the processor;
the processor, configured to execute the data storage method according to any one of claims 1 to 6 according to instructions in the program code.
CN201910905962.4A 2019-09-24 2019-09-24 Data storage method and device Pending CN110674051A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910905962.4A CN110674051A (en) 2019-09-24 2019-09-24 Data storage method and device


Publications (1)

Publication Number Publication Date
CN110674051A true CN110674051A (en) 2020-01-10

Family

ID=69077411



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110173395A1 (en) * 2010-01-12 2011-07-14 International Business Machines Corporation Temperature-aware buffered caching for solid state storage
CN106612247A (en) * 2015-10-21 2017-05-03 中兴通讯股份有限公司 A data processing method and a storage gateway
CN107544756A (en) * 2017-08-03 2018-01-05 上海交通大学 Method is locally stored in Key Value log types based on SCM
CN107817945A (en) * 2016-09-13 2018-03-20 中国科学院微电子研究所 A kind of method for reading data and system for mixing internal storage structure
CN108920276A (en) * 2018-06-27 2018-11-30 郑州云海信息技术有限公司 Linux system memory allocation method, system and equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liyu Wang et al.: "Fine-Grained Data Management for DRAM/SSD Hybrid Main", IEICE Trans. Inf. & Syst. *
Hao Xiaoran, Ni Mao, Wang Liyu, Chen Lan: "Fine-grained memory management scheme for data-intensive applications", Journal of Beijing University of Posts and Telecommunications *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111352863A (en) * 2020-03-10 2020-06-30 腾讯科技(深圳)有限公司 Memory management method, device, equipment and storage medium
CN111352863B (en) * 2020-03-10 2023-09-01 腾讯科技(深圳)有限公司 Memory management method, device, equipment and storage medium
CN114265562A (en) * 2021-12-27 2022-04-01 北京国腾创新科技有限公司 File storage method and system based on flash memory
CN115576868A (en) * 2022-11-24 2023-01-06 苏州浪潮智能科技有限公司 Multi-level mapping framework, data operation request processing method and system

Similar Documents

Publication Publication Date Title
US11467955B2 (en) Memory system and method for controlling nonvolatile memory
CN114115747B (en) Memory system and control method
JP2021128582A (en) Memory system and control method
EP3414665B1 (en) Profiling cache replacement
EP2266040B1 (en) Methods and systems for dynamic cache partitioning for distributed applications operating on multiprocessor architectures
EP2645259B1 (en) Method, device and system for caching data in multi-node system
US10310997B2 (en) System and method for dynamically allocating memory to hold pending write requests
CN110674051A (en) Data storage method and device
CN108595349B (en) Address translation method and device for mass storage device
CN109960471B (en) Data storage method, device, equipment and storage medium
CN114610232A (en) Storage system, memory management method and management node
CN108038062B (en) Memory management method and device of embedded system
US20190004703A1 (en) Method and computer system for managing blocks
US11126553B2 (en) Dynamic allocation of memory between containers
US9552295B2 (en) Performance and energy efficiency while using large pages
CN115617542A (en) Memory exchange method and device, computer equipment and storage medium
CN111639037A (en) Dynamic cache allocation method and device and DRAM-Less solid state disk
WO2012052567A1 (en) Improving storage lifetime using data swapping
CN113010452B (en) Efficient virtual memory architecture supporting QoS
CN113986773A (en) Write amplification optimization method and device based on solid state disk and computer equipment
CN111475099A (en) Data storage method, device and equipment
CN114281719A (en) System and method for extending command orchestration through address mapping
CN115079957B (en) Request processing method, device, controller, equipment and storage medium
CN117130955A (en) Method and system for managing associated memory
CN107656697B (en) Method and device for operating data on storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200110