CN111639037B - Dynamic allocation method and device for cache and DRAM-Less solid state disk - Google Patents

Dynamic allocation method and device for cache and DRAM-Less solid state disk Download PDF

Info

Publication number
CN111639037B
CN111639037B CN202010398316.6A CN202010398316A CN111639037B CN 111639037 B CN111639037 B CN 111639037B CN 202010398316 A CN202010398316 A CN 202010398316A CN 111639037 B CN111639037 B CN 111639037B
Authority
CN
China
Prior art keywords
cache
benefit
mapping table
virtual
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010398316.6A
Other languages
Chinese (zh)
Other versions
CN111639037A (en
Inventor
张吉兴
杨亚飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Dapu Microelectronics Co Ltd
Original Assignee
Shenzhen Dapu Microelectronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Dapu Microelectronics Co Ltd filed Critical Shenzhen Dapu Microelectronics Co Ltd
Priority to CN202010398316.6A priority Critical patent/CN111639037B/en
Publication of CN111639037A publication Critical patent/CN111639037A/en
Application granted granted Critical
Publication of CN111639037B publication Critical patent/CN111639037B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0871Allocation or management of cache space
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the invention relates to the field of solid state disk application, and discloses a dynamic allocation method and device of a cache and a DRAM-Less solid state disk, wherein the DRAM-Less solid state disk comprises a main controller, the main controller comprises a cache space, the cache space comprises a data cache and a mapping table cache, and the method comprises the following steps: pre-distributing the memory size of the data cache and the mapping table cache; establishing a first virtual profit table corresponding to the data cache and a second virtual profit table corresponding to the mapping table cache; when the preset updating period is finished, calculating a first average virtual benefit of a first virtual benefit table corresponding to the data cache and a second average virtual benefit of a second virtual benefit table corresponding to the mapping table cache; and dynamically distributing the memory sizes of the data cache and the mapping table cache according to the first average virtual benefit and the second average virtual benefit. By dynamically distributing the buffer space of the main controller, the invention can improve the read-write performance of the DRAM-Less solid state disk.

Description

Dynamic allocation method and device for cache and DRAM-Less solid state disk
Technical Field
The present invention relates to the field of solid state disk applications, and in particular, to a method and an apparatus for dynamically allocating a cache, and a DRAM-Less solid state disk.
Background
The solid state disk (Solid State Drives, SSD) is a hard disk made of a solid state electronic memory chip array, and the solid state disk comprises a control unit and a memory unit (FLASH memory chip or DRAM memory chip). Some of the current solid state disk systems have dynamic random access memories (Dynamic Random Access Memory, DRAM), so SSD has a large data buffer space for buffering data.
However, at present, some SSD controllers do not have DRAM outside, but only have a small internal static random access memory (Static Random Access Memory, SRAM), which greatly limits the number of caches in the SSD, so that the existing cache management method can only cache less data, and reduces the cache effect. In particular, when there is no DRAM, the mapping table in the FLASH translation layer (Flash translation layer, FTL) is stored in the FLASH memory, and is only scheduled to the SRAM in a segmented manner when in use, when the mapping table is in the SRAM, a part of data is read into the cache according to the cache policy, and if the next re-read hits the cache, the data is directly read from the cache.
Because of the lack of DRAM, the user data and FTL mapping table can only be partially cached in the SRAM of the host controller, and the SRAM of the host controller generally has only several hundred KB, but the prior art does not solve the problem that the lack of DRAM causes the read-write performance of the SSD to be reduced, so how to better utilize the host controller to improve the read-write performance of the SSD becomes a problem to be solved.
Disclosure of Invention
The embodiment of the invention aims to provide a dynamic allocation method and device for a cache and a DRAM-Less solid state disk, which solve the technical problem that the read-write performance of SSD is reduced due to the lack of a DRAM at present, and improve the read-write performance of the DRAM-Less solid state disk.
In order to solve the technical problems, the embodiment of the invention provides the following technical scheme:
in a first aspect, an embodiment of the present invention provides a method for dynamically allocating a cache, which is applied to a DRAM-Less solid state disk, where the DRAM-Less solid state disk includes a host controller, the host controller includes a cache space, and the cache space includes a data cache and a mapping table cache, and the method includes:
the memory size of the data cache and the mapping table cache is pre-allocated;
establishing a first virtual benefit table corresponding to the data cache and a second virtual benefit table corresponding to the mapping table cache;
when the preset updating period is finished, calculating a first average virtual benefit of a first virtual benefit table corresponding to the data cache and a second average virtual benefit of a second virtual benefit table corresponding to the mapping table cache;
and dynamically distributing the memory sizes of the data cache and the mapping table cache according to the first average virtual benefit and the second average virtual benefit.
In some embodiments, the first virtual benefit table corresponding to the data cache includes one-to-one logical block address data, a read benefit value, and a write benefit value, and the calculating the first average virtual benefit of the first virtual benefit table corresponding to the data cache includes:
acquiring a read benefit value and a write benefit value corresponding to each logic block address data in the first virtual benefit table;
summing the read benefit value and the write benefit value corresponding to each logic block address data, and calculating the sum value of the first virtual benefit table;
acquiring the number of the logic block address data eliminated by the data cache when the updating period is finished;
and calculating a first average virtual benefit of the first virtual benefit table corresponding to the data cache according to the sum value of the first virtual benefit table and the number of the logic block address data.
In some embodiments, the method further comprises:
when the logic block address data read by the host IO hits the logic block address data in the first virtual benefit table, the read benefit value corresponding to the logic block address data in the first virtual benefit table is increased;
when the logic block address data written by the host IO hits the logic block address data in the first virtual benefit table, the write benefit value corresponding to the logic block address data in the first virtual benefit table is increased.
In some embodiments, the second virtual benefit table corresponding to the mapping table cache includes a mapping table management unit, a read benefit value, and a write benefit value, and the calculating the second average virtual benefit of the second virtual benefit table corresponding to the mapping table cache includes:
acquiring a read benefit value and a write benefit value corresponding to each mapping table management unit in the second virtual benefit table;
summing the read benefit value and the write benefit value corresponding to each mapping table management unit, and calculating the sum value of the second virtual benefit table;
acquiring the number of mapping table management units of the second virtual profit table at the end of the updating period;
and calculating a second average virtual benefit of the second virtual benefit table corresponding to the mapping table cache according to the sum value of the second virtual benefit table and the number of mapping table management units of the second virtual benefit table.
In some embodiments, the method further comprises:
if the mapping table cache memory is full, selecting a mapping table management unit in the mapping table cache based on an LRU algorithm, and adding the mapping table management unit into the second virtual profit table;
when the logic block address data read by the host IO hits the mapping table management unit corresponding to the logic block address data in the second virtual benefit table, the reading benefit value of the mapping table management unit is increased;
When the logic block address data written by the host IO hits the mapping table management unit corresponding to the logic block address data in the second virtual benefit table, the writing benefit value of the mapping table management unit is increased.
In some embodiments, dynamically allocating the memory sizes of the data cache and the mapping table cache according to the first average virtual benefit and the second average virtual benefit includes:
if the first average virtual benefit is greater than the second average virtual benefit, dividing part of the memory of the mapping table cache into the data cache;
and if the first average virtual benefit is smaller than the second average virtual benefit, dividing part of the memory of the data cache into the mapping table cache.
In some embodiments, each of the logical block address data has the same size as a memory space occupied by the mapping table management unit, and the partitioning the partial memory of the mapping table cache into the data cache includes:
selecting a least recently used mapping table management unit in the mapping table cache based on an LRU algorithm, and dividing a memory space corresponding to the least recently used mapping table management unit into the data cache;
the dividing the partial memory of the data cache into the mapping table cache includes:
And selecting the least recently used logic block address data in the data cache based on an LRU algorithm, and dividing the corresponding memory space into the mapping table cache.
In some embodiments, after dynamically allocating the memory sizes of the data cache and the mapping table cache, the method further comprises:
and after the updating period is finished, the first virtual benefit table corresponding to the data cache and the second virtual benefit table corresponding to the mapping table cache are emptied.
In a second aspect, an embodiment of the present invention provides a dynamic allocation device for a cache, which is applied to a DRAM-Less solid state disk, where the DRAM-Less solid state disk includes a main controller, the main controller includes a cache space, the cache space includes a data cache and a mapping table cache, and the device includes:
the memory allocation unit is used for pre-allocating the memory sizes of the data cache and the mapping table cache;
the virtual profit table establishing unit is used for establishing a first virtual profit table corresponding to the data cache and a second virtual profit table corresponding to the mapping table cache;
the average virtual benefit calculating unit is used for calculating a first average virtual benefit of the first virtual benefit table corresponding to the data cache and a second average virtual benefit of the second virtual benefit table corresponding to the mapping table cache when the preset updating period is finished;
And the dynamic allocation unit is used for dynamically allocating the memory sizes of the data cache and the mapping table cache according to the first average virtual benefit and the second average virtual benefit.
In some embodiments, the first virtual benefit table corresponding to the data cache includes logical block address data, a read benefit value, and a write benefit value, and the average virtual benefit calculating unit is specifically configured to:
acquiring a read benefit value and a write benefit value corresponding to each logic block address data in the first virtual benefit table;
summing the read benefit value and the write benefit value corresponding to each logic block address data, and calculating the sum value of the first virtual benefit table;
acquiring the number of the logic block address data eliminated by the data cache when the updating period is finished;
and calculating a first average virtual benefit of the first virtual benefit table corresponding to the data cache according to the sum value of the first virtual benefit table and the number of the logic block address data.
In some embodiments, the mapping table cache includes a mapping table management unit, a read benefit value, and a write benefit value, and the average virtual benefit calculation unit is specifically configured to:
Acquiring a read benefit value and a write benefit value corresponding to each mapping table management unit in the second virtual benefit table;
summing the read benefit value and the write benefit value corresponding to each mapping table management unit, and calculating the sum value of the second virtual benefit table;
acquiring the number of mapping table management units of the second virtual profit table at the end of the updating period;
and calculating a second average virtual benefit of the second virtual benefit table corresponding to the mapping table cache according to the sum value of the second virtual benefit table and the number of mapping table management units of the second virtual benefit table.
In some embodiments, the dynamic allocation unit is specifically configured to:
if the first average virtual benefit is greater than the second average virtual benefit, dividing part of the memory of the mapping table cache into the data cache;
and if the first average virtual benefit is smaller than the second average virtual benefit, dividing part of the memory of the data cache into the mapping table cache.
In some embodiments, the size of the logical block address data is the same as the size of the memory space occupied by the mapping table management unit, and the dynamic allocation unit is specifically configured to:
Selecting a least recently used mapping table management unit in the mapping table cache based on an LRU algorithm, and dividing a memory space corresponding to the least recently used mapping table management unit into the data cache;
the dividing the partial memory of the data cache into the mapping table cache includes:
and selecting the least recently used logic block address data in the data cache based on an LRU algorithm, and dividing the corresponding memory space into the mapping table cache.
In some embodiments, the apparatus further comprises:
and the resetting unit is used for clearing the first virtual benefit table corresponding to the data cache and the second virtual benefit table corresponding to the mapping table cache after the updating period is finished.
In a third aspect, an embodiment of the present invention provides a DRAM-Less solid state disk, where the DRAM-Less solid state disk includes:
a flash memory chip comprising a plurality of wafers, each wafer comprising a plurality of groupings, each grouping comprising a plurality of physical blocks, each physical block comprising a plurality of physical pages;
a main controller, the main controller comprising:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein, the liquid crystal display device comprises a liquid crystal display device,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of dynamically allocating cache as described above.
In a fourth aspect, embodiments of the present invention further provide a non-volatile computer readable storage medium storing computer executable instructions for enabling a DRAM-Less solid state disk to perform a method for dynamically allocating a cache as described above.
The embodiment of the invention has the beneficial effects that: compared with the prior art, the dynamic allocation method of the cache provided by the embodiment of the invention is applied to the DRAM-Less solid state disk, wherein the DRAM-Less solid state disk comprises a main controller, the main controller comprises a cache space, the cache space comprises a data cache and a mapping table cache, and the method comprises the following steps: the memory size of the data cache and the mapping table cache is pre-allocated; establishing a first virtual benefit table corresponding to the data cache and a second virtual benefit table corresponding to the mapping table cache; when the preset updating period is finished, calculating a first average virtual benefit of a first virtual benefit table corresponding to the data cache and a second average virtual benefit of a second virtual benefit table corresponding to the mapping table cache; and dynamically distributing the memory sizes of the data cache and the mapping table cache according to the first average virtual benefit and the second average virtual benefit. By dynamically distributing the buffer space of the main controller, the invention can improve the read-write performance of the DRAM-Less solid state disk.
Drawings
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which the figures of the drawings are not to be taken in a limiting sense, unless otherwise indicated.
FIG. 1 is a schematic structural diagram of a DRAM-Less solid state disk according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an FTL mapping table according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a data cache and a mapping table cache according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a method for dynamically allocating a cache according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a first virtual revenue table and a second virtual revenue table provided by the present invention;
FIG. 6 is a detailed flow chart of step S30 in FIG. 4;
fig. 7 is another refinement flowchart of step S30 in fig. 4;
FIG. 8 is a schematic diagram of a mapping table management unit corresponding to a logical block address according to an embodiment of the present invention;
FIG. 9 is a schematic workflow diagram of a method for dynamically allocating a cache according to an embodiment of the present invention;
FIG. 10 is a schematic diagram illustrating an initial state of a data buffer and a mapping table buffer according to an embodiment of the present invention;
FIG. 11 is a schematic diagram illustrating another state of a data cache and a mapping table cache according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of still another state of a data cache and a mapping table cache according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of still another state of the data cache and the mapping table cache according to the embodiment of the present invention;
FIG. 14 is a schematic diagram of an updated data cache and mapping table cache according to an embodiment of the present invention;
fig. 15 is a schematic structural diagram of a dynamic allocation device for cache according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
The solid state disk (Solid State Drives, SSD) has no external dynamic random access memory (Dynamic Random Access Memory, DRAM), i.e. no external DRAM cache, and is also commonly referred to as a cache-less solution, which refers to a solid state disk solution that omits DRAM chips by firmware adaptation. However, no external DRAM cache does not mean no cache at all, and in fact it is also required to look up a mapping table, except that the mapping table has a different structure, a smaller capacity, and is stored in a small-capacity SRAM integrated in the host. Because no extra DRAM chip is needed, the cost of the solid state disk without the external DRAM cache is lower, and better cost performance can be obtained on the basis of the relation between the processed performance and the cost.
However, because the DRAM-Less solid state disk lacks DRAM, the user data and FTL mapping table can only be partially cached in the main-controlled SRAM, and the main-controlled SRAM generally has a smaller memory space, in the prior art, SSD firmware adopts the size of the data cache and mapping table cache which are allocated in advance, and this way has the disadvantage that the read-write performance of the solid state disk is reduced.
Based on the above, the invention provides a dynamic allocation method and device for a cache and a DRAM-Less solid state disk, and the read-write performance of the DRAM-Less solid state disk is improved by dynamically allocating the cache space of a main controller.
In the embodiment of the invention, the solid state disk is a solid state disk without dynamic random access memory (Dynamic Random Access Memory, DRAM), namely a DRAM-Less solid state disk, please refer to FIG. 1, FIG. 1 is a schematic structural diagram of the DRAM-Less solid state disk provided in the embodiment of the invention; the DRAM-Less solid state disk is composed of a series of flash memory arrays, a plurality of flash memory controllers (Flash Memory Controller, FMC) are arranged in the DRAM-Less solid state disk, each flash memory controller controls a Channel (Channel), the flash memory controllers independently work, each Channel is provided with a Channel bus, and a plurality of flash memory chips (chips) are mounted on each Channel.
As shown in fig. 1, the DRAM-Less solid state disk 10 includes: a main controller 11 and a flash memory chip 12, wherein the main controller 11 is connected with the flash memory chip 12;
specifically, the main controller 11 includes: one or more processors 111, and a memory 112. In fig. 1, a processor 111 is taken as an example.
The processor 111 and the memory 112 may be connected by a bus or otherwise, which is illustrated in fig. 1 as a bus connection.
The memory 112 is a non-volatile computer-readable storage medium that can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The processor 111 executes various functional applications and data processing of the dynamic allocation method of the cache of the DRAM-Less solid state disk according to the embodiment of the present invention by running nonvolatile software programs, instructions, and modules stored in the memory 112.
Memory 112 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 112 may optionally include memory located remotely from processor 111, such remote memory being connectable to processor 111 through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The modules are stored in the memory 112, and when executed by the one or more processors 111, perform the dynamic allocation method of the cache of the DRAM-Less solid state disk in the embodiment of the present invention.
Specifically, the flash memory chip 12 includes a plurality of wafers (Die), each wafer is composed of a plurality of groups (planes), each group is composed of a plurality of blocks (blocks), i.e., physical blocks (blocks) described in the present invention, where a Block is a basic unit erased by the flash memory chip 12, and each Block has a plurality of pages (pages), i.e., physical pages, where a physical Page (Page) is a basic unit read from and written to the flash memory chip 12.
Because the solid state disk of DRAM-Less lacks DRAM, but there is a very small amount of static random access memory (Static Random Access Memory, SRAM) in the main stream, offer firmware as the buffer memory of the mapping table, but its capacity is small and expensive, the invention presumes the logical block address (Logical Block Address, LBA) of the host computer is equal to the size of the physical page of the flash memory and is all 4KB, and SSD firmware adopts the mapping mechanism of 4KB granularity.
Referring to fig. 2, fig. 2 is a schematic diagram of an FTL mapping table according to an embodiment of the present invention;
as shown in fig. 2, each LBA data of the host and each flash physical page have a size of 4KB, and each mapping table management unit in the FTL mapping table has a size of 4B, and each mapping table management unit corresponds to one LBA data, where the mapping table management unit is configured to store a physical address of the LBA data stored in the flash memory.
Referring to fig. 3 again, fig. 3 is a schematic diagram of a data buffer and a mapping table buffer according to an embodiment of the present invention;
as shown in fig. 3, the main controller (main control) of the solid state disk includes an internal static random access memory (Internal Static Random Access Memory, ISRAM), in order to improve the read-write performance of the SSD, the SSD firmware generally designs a data buffer and a mapping table buffer, small trivial data written by the host is first buffered in the data buffer, and is flushed to the Flash memory after a physical page size is fully reached, so that the performance of the write command is effectively improved, and in addition, if the data requested by the host is just in the data buffer, the read performance is also greatly improved.
Because of the write-in characteristic of the physical page of the flash memory in different places, SSD firmware needs to maintain a mapping table to record the position of LBA data written in the physical page in real time so as to find the position of the data in the subsequent reading, and if the mapping table is stored in a cache, the delay of accessing the mapping table is small, and the performance can be improved.
It will be appreciated that the larger the data cache and the map cache, the better the performance. For DRAM-Less solid state drives without DRAM, only the expensive and rare high-speed memory ISRAM of the master itself can be used as a cache, however, the total amount of ISRAM is very small, as shown in fig. 3, the ISRAM stores a small portion of user data and a mapping table, and the complete user data and mapping table are stored in the flash memory.
In the prior art, the SSD firmware is pre-allocated with the size of the data cache and the size of the mapping table cache, and the size of the data cache and the mapping table cache is not changed no matter how the workload (workload) of the host changes. Both the data cache and the mapping table cache are managed at 4KB granularity and both use the least recently used algorithm (Least Recently Used, LRU) to eliminate the coldest cache data.
The sizes of the data cache and the mapping table cache cannot be adjusted, so that the read-write performance of the SSD is reduced, the main control ISRAM is assumed to be 768KB in total, the main control ISRAM is allocated to the data cache 512KB, the mapping table cache 256KB, and the performance of the SSD working scene cannot be optimized due to the fact that the data cache and the mapping table cache are fixed:
workload 1: the host accesses the data space of 8MB in the LBA sequence, and the maximum mapping table size required to be accessed by the workload is 8MB/4kb×4b=8kb (mapping table buffer), the remaining 256 KB-8kb=248 KB of the mapping table buffer is of no value, and if the data buffer can use it, the system performance will be improved.
Workload 2: the host computer does not request to issue, the data cache is idle, garbage collection is triggered in the SSD, the garbage collection can frequently update the whole mapping table, and the larger mapping table cache can improve the garbage collection efficiency. However, the data cache is not utilized by garbage collection tasks due to static configuration policies.
The above examples can show that the memory sizes of the fixed data cache and the mapping table cache cannot improve the read-write performance of the solid state disk, so the invention creatively proposes a way of dynamically allocating the data cache and the mapping table cache to improve the read-write performance of the solid state disk.
Referring to fig. 4, fig. 4 is a flow chart of a dynamic allocation method of a buffer according to an embodiment of the present invention;
the dynamic allocation method of the cache is applied to the DRAM-Less solid state disk, the DRAM-Less solid state disk comprises a main controller, the main controller comprises a cache space, the cache space comprises a data cache and a mapping table cache, and an execution main body of the dynamic allocation method of the cache is the main controller of the DRAM-Less solid state disk.
As shown in fig. 4, the dynamic allocation method of the cache includes:
Step S10: the memory size of the data cache and the mapping table cache is pre-allocated;
specifically, the main controller of the DRAM-Less solid state disk includes a buffer space, and the memory of the buffer space is limited, so that the buffer space needs to be allocated, that is, the memory sizes of the data buffer and the mapping table buffer are allocated in advance, it is understood that, in order to utilize the buffer space as much as possible, the sum of the data buffer and the mapping table buffer is the memory size of the buffer space, for example: if the memory size of the buffer space is a, the memory size of the data buffer is B, and the memory size of the mapping table buffer is C, then: a=b+c. In the embodiment of the invention, the cache space of the main controller is an internal static random access memory (Internal Static Random Access Memory, ISRAM) of the main controller.
Step S20: establishing a first virtual benefit table corresponding to the data cache and a second virtual benefit table corresponding to the mapping table cache;
referring to fig. 5 again, fig. 5 is a schematic diagram of a first virtual profit table and a second virtual profit table according to the present invention;
As shown in fig. 5, the data buffer corresponds to a first virtual benefit table, the mapping table buffer corresponds to a second virtual benefit table, the data buffer includes a plurality of LBA data (logical block address data), the mapping table buffer includes a plurality of L2P Entry (mapping table management unit), the first virtual benefit table includes LBA data and its corresponding Read hit benefit value (Read benefit value), write hit benefit value (Write benefit value), and the second virtual benefit table includes a plurality of L2P Entry and its corresponding Read hit benefit value (Read benefit value), write hit benefit value (Write benefit value).
Specifically, the first virtual benefit table corresponding to the data cache and the second virtual benefit table corresponding to the mapping table cache are stored in a chip of a main controller of the DRAM-Less solid state disk, for example: high speed registers of the host controller, such as: the data tight coupling storage (Data Tightly Coupled Memory, DTCM) can reflect the hit rate of the host IO to the data cache or the mapping table cache by establishing the first virtual benefit table corresponding to the data cache and the second virtual benefit table corresponding to the mapping table cache, and the first virtual benefit table corresponding to the data cache and the second virtual benefit table corresponding to the mapping table cache are stored in the high-speed register of the main controller, so that the read-write performance of the solid state disk can be improved on the premise of not occupying the cache space of the main controller.
Step S30: when the preset updating period is finished, calculating a first average virtual benefit of a first virtual benefit table corresponding to the data cache and a second average virtual benefit of a second virtual benefit table corresponding to the mapping table cache;
specifically, the update period is a time slice set by the user, and the update period is used for determining that the host IO hits the logical block address data in the first virtual benefit table corresponding to the data cache within a period of time and determining that the host IO hits the mapping table management unit in the second virtual benefit table corresponding to the mapping table cache within a period of time. Through presetting the update period, the memory sizes of the data cache and the mapping table cache can be determined based on the host IO in a period of time, so that the effective allocation of the cache space of the main controller is realized.
Referring to fig. 6 again, fig. 6 is a detailed flowchart of step S30 in fig. 4;
specifically, the first virtual benefit table corresponding to the data cache includes one-to-one corresponding logical block address data, a read benefit value, and a write benefit value, as shown in fig. 6, and the calculating the first average virtual benefit of the first virtual benefit table corresponding to the data cache includes:
Step S311: acquiring a read benefit value and a write benefit value corresponding to each logic block address data in the first virtual benefit table;
specifically, the data cache corresponds to a first virtual benefit table, the data cache is located in a cache space of the main controller, the first virtual benefit table is located inside a chip of the main controller, when logic block address data in the data cache is eliminated, the main controller adds the logic block address data in the first virtual benefit table, and each logic block address data in the first virtual benefit table corresponds to a Read benefit value (Read hit) and a Write benefit value (Write hit), and the Read benefit value and the Write benefit value are respectively used for indicating the Read times and the Write times of the logic block address data hit by the host IO after the logic block address data is eliminated by the data cache.
In an embodiment of the present invention, the method further includes:
and in the updating period, when the host IO does not hit the data cache, selecting one eliminated logic block address data to add into the first virtual benefit table based on the LRU algorithm.
In an embodiment of the present invention, the method further includes:
when the logic block address data read by the host IO hits the logic block address data in the first virtual benefit table, the read benefit value corresponding to the logic block address data in the first virtual benefit table is increased;
When the logic block address data written by the host IO hits the logic block address data in the first virtual benefit table, the write benefit value corresponding to the logic block address data in the first virtual benefit table is increased.
Specifically, when the logical block address data read by the host IO hits the logical block address data in the first virtual benefit table, the read benefit value corresponding to the logical block address data in the first virtual benefit table is increased by one unit, and in this embodiment of the present invention, the one unit may be freely set, for example: the write benefit value is set to 1, 2, 3, 4, etc., and the increasing manner of the write benefit value is similar to that of the read benefit value, and will not be described herein.
Step S312: summing the read benefit value and the write benefit value corresponding to each logic block address data, and calculating the sum value of the first virtual benefit table;
specifically, there may be no logical block address data in the first virtual benefit table, or there may be one or more logical block address data, and the sum of the read benefit value and the write benefit value corresponding to each logical block address data, that is, the sum of the read benefit value and the write benefit value corresponding to all the logical block address data in the first virtual benefit table, thereby obtaining the sum of the first virtual benefit table.
Step S313: acquiring the number of the logic block address data eliminated by the data cache when the updating period is finished;
specifically, at the end of the update period, the number of the logic block address data eliminated by the data cache is obtained, that is, the number of the logic block address data in the first virtual benefit table is obtained.
Step S314: and calculating a first average virtual benefit of the first virtual benefit table corresponding to the data cache according to the sum value of the first virtual benefit table and the number of the logic block address data.
Specifically, the first average virtual benefit=sum value of the first virtual benefit table/number of logical block address data.
Referring back to fig. 7, fig. 7 is another refinement flowchart of step S30 in fig. 4;
specifically, the second virtual benefit table corresponding to the mapping table cache includes one-to-one mapping table management units, a read benefit value and a write benefit value, the mapping table cache is located in a cache space of the main controller, the second virtual mapping table is located in a chip of the main controller, when the address data of the logic block in the data cache changes, the mapping table cache also changes, and the mapping table cache selects the corresponding mapping table management unit to be added into the second virtual benefit table, as shown in fig. 7, the calculating the second average virtual benefit of the second virtual benefit table corresponding to the mapping table cache includes:
Step S321: acquiring a read benefit value and a write benefit value corresponding to each mapping table management unit in the second virtual benefit table;
specifically, after the mapping table management unit in the mapping table cache is replaced, the mapping table management unit is added to the second virtual benefit table, and each mapping table management unit in the second virtual benefit table corresponds to a reading benefit value and a writing benefit value, wherein the reading benefit value and the writing benefit value are respectively used for indicating the reading times and the writing times of the corresponding logic block address data hit by the host computer IO after the mapping table management unit is added to the second virtual benefit table.
In an embodiment of the present invention, the method further includes:
if the mapping table cache memory is full, selecting a mapping table management unit in the mapping table cache based on an LRU algorithm, and adding the mapping table management unit into the second virtual profit table;
when the logic block address data read by the host IO hits the mapping table management unit corresponding to the logic block address data in the second virtual benefit table, the reading benefit value of the mapping table management unit is increased;
when the logic block address data written by the host IO hits the mapping table management unit corresponding to the logic block address data in the second virtual benefit table, the writing benefit value of the mapping table management unit is increased.
Specifically, the LRU algorithm is a least recently used algorithm (Least Recently Used, LRU), and when the logical block address data read by the host IO hits a mapping table management unit corresponding to the logical block address data in the second virtual benefit table, the read benefit value of the mapping table management unit is increased by one unit, and in the embodiment of the present invention, the one unit may be freely set, for example: the write benefit value is set to 1, 2, 3, 4, etc., and the increasing manner of the write benefit value is similar to that of the read benefit value, and will not be described herein.
Step S322: summing the read benefit value and the write benefit value corresponding to each mapping table management unit, and calculating the sum value of the second virtual benefit table;
specifically, there may be no mapping table management unit in the second virtual benefit table, or there may be one or more mapping table management units, and the sum of the read benefit value and the write benefit value corresponding to each mapping table management unit is performed, that is, the sum of the read benefit value and the write benefit value corresponding to all mapping table management units in the second virtual benefit table is performed, so as to obtain the sum of the second virtual benefit table.
Step S323: acquiring the number of mapping table management units of the second virtual profit table at the end of the updating period;
specifically, when the update period is finished, the number of mapping table management units in the second virtual benefit table is obtained, where the number of mapping table management units is the mapping table management unit selected by the LRU algorithm in the mapping table cache, and it is understood that, at a certain time, the mapping table management unit in the second virtual benefit table may be the same as the mapping table management unit in the mapping table cache.
Step S324: and calculating a second average virtual benefit of the second virtual benefit table corresponding to the mapping table cache according to the sum value of the second virtual benefit table and the number of mapping table management units of the second virtual benefit table.
Specifically, the second average virtual benefit of the second virtual benefit table=sum value of the second virtual benefit table/number of mapping table management units of the second virtual benefit table.
Step S40: and dynamically distributing the memory sizes of the data cache and the mapping table cache according to the first average virtual benefit and the second average virtual benefit.
Specifically, the dynamically allocating the memory sizes of the data cache and the mapping table cache according to the first average virtual benefit and the second average virtual benefit includes:
If the first average virtual benefit is greater than the second average virtual benefit, dividing part of the memory of the mapping table cache into the data cache;
and if the first average virtual benefit is smaller than the second average virtual benefit, dividing part of the memory of the data cache into the mapping table cache.
When the update period is over, the hit rates of the host IO on the first virtual benefit table and the second virtual benefit table need to be calculated, the hit rates of the host IO on the first virtual benefit table and the second virtual benefit table are represented by a first average virtual benefit and a second average virtual benefit, if the first average virtual benefit is greater than the second average virtual benefit, this means that adding the data cache may bring better read-write performance in the next time period, so that the read-write performance of the solid state disk is improved by dividing part of the memory cached by the mapping table into the data cache;
if the first average virtual benefit is smaller than the second average virtual benefit, it means that adding the mapping table buffer may bring better read-write performance in the next time period, so that the read-write performance of the solid state disk is improved by dividing part of the memory of the data buffer into the mapping table buffer.
Specifically, the dividing the partial memory of the mapping table cache into the data cache includes:
selecting a least recently used mapping table management unit in the mapping table cache based on an LRU algorithm, and dividing a memory space corresponding to the least recently used mapping table management unit into the data cache;
specifically, the dividing the partial memory of the data cache into the mapping table cache includes:
and selecting the least recently used logic block address data in the data cache based on an LRU algorithm, and dividing the corresponding memory space into the mapping table cache.
In an embodiment of the present invention, the method further includes:
and dynamically setting the updating period according to the frequency of accessing the DRAM-Less solid state disk by the host.
Specifically, the frequency of the host accessing the DRAM-Less solid state disk is represented by the number of times of reading and writing of the host IO in the update period, if the number of times of reading and writing of the host IO in the update period is greater than the preset number of times of reading and writing, the frequency of the host accessing the DRAM-Less solid state disk is determined to be high, and if the number of times of reading and writing of the host IO in the update period is Less than or equal to the preset number of times of reading and writing, the frequency of the host accessing the DRAM-Less solid state disk is determined to be low.
If the frequency of the host accessing the DRAM-Less solid state disk is high, the update period is shortened, and if the frequency of the host accessing the DRAM-Less solid state disk is low, the update period is prolonged, wherein the update period can be dynamically adjusted by setting a shortening coefficient and a lengthening coefficient, for example: when the frequency of the host accessing the DRAM-Less solid state disk is high, the current update period is multiplied by a shortening coefficient to determine an updated update period, and when the frequency of the host accessing the DRAM-Less solid state disk is low, the current update period is multiplied by an extending coefficient to determine an updated update period, wherein the shortening coefficient and the extending coefficient can be set according to specific requirements, for example: the shortening coefficient is set to 0.8 and the lengthening coefficient is set to 1.2.
In an embodiment of the present invention, after dynamically allocating the memory sizes of the data cache and the mapping table cache, the method further includes:
and after the updating period is finished, the first virtual benefit table corresponding to the data cache and the second virtual benefit table corresponding to the mapping table cache are emptied.
It can be understood that after the update period is finished, the data buffer and the mapping table buffer are reassigned, and at this time, the first virtual benefit table corresponding to the data buffer and the second virtual benefit table corresponding to the mapping table buffer need to be reset, so as to facilitate calculation of the first average virtual benefit and the second average virtual benefit of the next update period, and so on.
Referring to fig. 8 again, fig. 8 is a schematic diagram of a mapping table management unit corresponding relation between a logical block address according to an embodiment of the present invention;
in the embodiment of the present invention, the size of the memory space occupied by each logical block address data and each mapping table management unit is the same, as shown in fig. 8, each mapping table management unit (L2P Entry) corresponds to 1K logical block address data (LBA), and the size of the memory occupied by each mapping table management unit (L2P Entry) is 4KB, which is equivalent to that the position of the LBA data stored in the flash memory is recorded in 4 bytes of the mapping table, that is, one mapping table management unit (L2P Entry) records physical address information corresponding to 1K logically continuous LBAs.
Referring to fig. 9 again, fig. 9 is a schematic workflow diagram of a dynamic allocation method of a buffer according to an embodiment of the present invention;
as shown in fig. 9, the dynamic allocation method of the cache includes:
step S91: resetting a first virtual benefit table of the data cache and a second virtual benefit table of the mapping table cache;
specifically, after the last update period is finished, the first virtual benefit table of the data cache and the second virtual benefit table of the mapping table cache are reset.
Step S92: processing a new host read-write request LBA_x;
specifically, the main controller receives an IO request issued by a host, and processes the IO request, for example: and reading or writing LBA_x data issued by the host.
Step S93: whether LBA_x hits the first virtual benefit table of the data cache;
specifically, it is determined whether the lba_x hits the first virtual profit table of the data cache, if yes, step S94 is entered: adding 1 to the hit benefit value corresponding to lba_x in the first virtual benefit table of the data cache, for example: and if the request issued by the host is the Write LBA_x, adding 1 to the Write hit benefit value corresponding to the LBA_x in the first virtual benefit table, and if the request issued by the host is the Read LBA_x, adding 1 to the Read hit benefit value corresponding to the LBA_x in the first virtual benefit table. If not, go to step S95: it is determined whether LBA_x hits in the data cache.
Step S94: the hit gain value corresponding to LBA_x in the first virtual gain table is increased by 1;
specifically, if the request issued by the host is written lba_x, the Write hit benefit value corresponding to lba_x in the first virtual benefit table is added by 1, and if the request issued by the host is Read lba_x, the Read hit benefit value corresponding to lba_x in the first virtual benefit table is added by 1.
Step S95: whether LBA_x hits the data cache;
specifically, if the lba_x hits the data cache, step S97 is entered: whether the update period is ended; if the lba_x does not hit the data cache, the process proceeds to step S96: and adding the replaced LBA_t into the first virtual benefit table.
It can be understood that the present invention can also change the determination sequence of whether the lba_x hits the data cache and whether the lba_x hits the first virtual profit table of the data cache, that is, can determine whether the lba_x hits the data cache first and then determine whether the lba_x hits the first virtual profit table of the data cache, without affecting the substantial result.
Step S96: adding the replaced LBA_t into a first virtual profit table;
specifically, if the lba_x does not hit the data cache, according to the LRU algorithm, one logical block address data lba_t is selected from the data cache, the lba_t is replaced by the lba_x in the data cache, and the lba_t is added to the first virtual profit table.
It should be understood that, the manner of processing the host IO for the mapping table is similar to that of processing the IO for the data buffer, which is not described herein again, it should be noted that the judging order of the mapping table buffer is consistent with the judging order of the data buffer, for example: if the judging sequence of the data cache is that whether the LBA_x hits the first virtual benefit table of the data cache or not is judged first, and then whether the LBA_x hits the data cache or not is judged, the judging sequence of the mapping table cache is that whether the second virtual benefit table of the mapping table cache is hit or not is judged first, and then whether the second virtual benefit table of the mapping table cache is hit or not is judged, so that consistency is maintained.
Step S97: judging whether the update period is ended;
specifically, whether the preset update period is ended is determined, if yes, step S98 is entered: calculating a first average virtual benefit T1 and a second average virtual benefit T2 of the data cache and the mapping table cache; if not, return to step S92: processing new host read-write request;
step S98: calculating a first average virtual benefit T1 and a second average virtual benefit T2 of the data cache and the mapping table cache;
specifically, according to the read benefit value and the write benefit value of all the logic block address data in the first virtual benefit table of the data cache and the number of the logic block address data in the first virtual benefit table, calculating the first average virtual benefit T1; and calculating the second average virtual benefit T2 according to the read benefit value and the write benefit value of all mapping table management units in the second virtual benefit table cached by the mapping table and the number of the mapping table management units in the second virtual benefit table.
Step S99: t1 > T2;
specifically, whether the first average virtual benefit T1 is greater than the second average virtual benefit T2 is determined, and if so, step S991 is entered; if not, go to step S992;
Step S991: dividing part of memory space from the mapping table cache to the data cache;
specifically, a memory space of the logical block address data is divided from the mapping table buffer to the data buffer, i.e. 4KB buffer is divided from the mapping table buffer to the data buffer.
Step S992: dividing part of memory space from the data cache to the mapping table cache;
specifically, a memory space of a mapping table management unit is divided from the data cache to the mapping table cache, namely, a 4KB buffer is divided from the data cache to the mapping table cache;
the dynamic allocation of the size of the data buffer and the size of the mapping table buffer are realized by cutting off a small block buffer of the data buffer for the mapping table buffer, or conversely, by cutting off a small block buffer of the mapping table buffer for the data buffer, so that the overall performance of the system is improved. The key is how to decide to sacrifice whose buffer. The system performance is improved because the IO hits the data cache or the mapping table cache, so the decision model basically focuses on which hit rate is improved. As shown in fig. 5, in the current cache configuration, when the host IO does not hit the data cache, the LRU algorithm selects a obsolete lba_x, and after the lba_x is obsolete, if the subsequent host reads and writes the lba_x, a miss occurs, and the more the number of times the host reads and writes the lba_x, the greater the data cache hit rate decreases, which is a cost of maintaining the current data cache size unchanged.
If the system has an extra 4KB free buffer for the data buffer, the data buffer will hit the data buffer without eliminating the lba_x, which is to increase the hit rate of a 4KB buffer, as shown in fig. 5, all lba_y eliminated from the data buffer in one period and the number of times (r_y, w_y) of the subsequent host to read and write from the data buffer are recorded in the first virtual benefit table, and the first average virtual benefit T1 of the data buffer (average hit rate increase of 4KB buffer) = (r_1+w_1+r_2+w_2+.+ r_n+w_n)/n, n is the number of LBAs eliminated from the data buffer in the period at the end of the update period.
Similarly, a second virtual benefit table corresponding to the mapping table cache is maintained, the second virtual benefit table records L2P entry_i which is eliminated and corresponding read-write hit times (R_i, W_i), after the update period is finished, a second average virtual benefit T2 of the mapping table cache is calculated, then T1 and T2 are compared, if T1 is larger than T2, the benefit (hit rate improvement) brought by improving the data cache is larger in the update period, according to the time limitation of a host work load, it is reasonable to believe that the increase of the data cache in the next update period is more beneficial to the improvement of the total hit rate, and therefore firmware decides to cut a 4KB buffer from the mapping table cache to the data cache when the next update period is started; otherwise, if T1 is smaller than T2, a 4KB buffer is cut from the data cache to the mapping table cache. And when the next updating period starts, the first virtual benefit table and the second virtual benefit table which correspond to the data cache and the mapping table cache respectively are emptied.
How to dynamically allocate the data cache and the memory size of the mapping table cache is specifically described below in conjunction with examples.
Referring to fig. 10, fig. 10 is a schematic diagram illustrating an initial state of a data buffer and a mapping table buffer according to an embodiment of the present invention;
assuming that the update period is set to time_th, the data buffer size at the beginning of a certain update period is 9×4kb, and the map buffer size is 4×4kb, as shown in fig. 10, the LBA Index and the L2P Entry Index stored in the data buffer and the map buffer of the SSD ISRAM in the initial state are respectively: LBA 0,1,3,4097,4098,4099,8192,8193,8194 is cached in the data cache, L2P Entry 0,4,8,9 is cached in the mapping table cache, and the corresponding first virtual benefit table and second virtual benefit table are in a clear state.
In a later Time period time_th, the host initiates an IO stream of the read LBA 1024,1025,1026,0,1,3,0,1,3, firstly, the LBA 1024,1025,1026 replaces the oldest LBA 0,1,3 in the data cache, then the first virtual benefit table records the LBA 0,1,3, and then the LBA 1024,1025,1026 enters the mapping table cache, and all the three LBAs belong to the L2P Entry 1, so that the L2P Entry 1 replaces the L2P Entry 0 in the mapping table cache, and records the L2P Entry 0 in the second virtual benefit table, and referring to fig. 11, fig. 11 is a schematic diagram of another state of the data cache and the mapping table cache provided by the embodiment of the present invention, and at this Time, the state of the SSD ISRAM is shown in fig. 11;
It will be appreciated that the LBA data is replaced one by one after entering the data cache, and is illustrated herein for simplicity and once by 3 examples, the manner in which the LBA data is selected in the present invention is the LRU algorithm.
Thereafter, LBA 0,1,3 enters the data cache to replace the oldest LBA 4097,4098,4099 and LBA 4097,4098,4099 is added to the first virtual profit table, and since LBA 0,1,3 has been previously recorded in the first virtual profit table, the read hit profit value corresponding to LBA 0,1,3 is each added by 1; similarly, LBA 0,1,3 belongs to L2P Entry 0, so L2P Entry 4 in the mapping table cache is replaced, L2P Entry 4 is added to the second virtual benefit table, and L2P Entry 0 has been recorded in the second virtual benefit table before, so its read hit benefit value is increased by 3 (the process is 3 IO lookup mapping tables), and the state of SSD ISRAM is shown in FIG. 12;
then, the LBA 0,1,3 enters the data cache for the second time, and since the LBA 0,1,3 exists in the data cache, the read hit benefit value corresponding to the first virtual benefit table is directly updated, and similarly, the value in the second virtual benefit table corresponding to the mapping table cache is directly updated, and the state of the SSD ISRAM is shown in fig. 13;
After the update period is finished, the first average virtual benefit t1= (2+2+2+0+0+0)/6=1 of the data cache is calculated, the second average virtual benefit t2= (6+0)/2=3 of the mapping table cache is calculated, so that the benefit (hit rate improvement) caused by increasing the mapping table cache is relatively large for the current workload, the SSD firmware cuts out 4KB buffers (coldest) from the data cache to the mapping table cache, clears the content in the respective virtual benefit table, starts the next round of period, and continues continuously, as shown in fig. 14, at this time, the data cache is increased by one more space of the LBA data memory size, and the mapping table cache is reduced by one space of the L2P Entry memory size, thereby dynamically allocating the cache space of the host controller.
In an embodiment of the present invention, a dynamic allocation method of a cache is provided, which is applied to a DRAM-Less solid state disk, where the DRAM-Less solid state disk includes a main controller, the main controller includes a cache space, and the cache space includes a data cache and a mapping table cache, and the method includes: the memory size of the data cache and the mapping table cache is pre-allocated; establishing a first virtual benefit table corresponding to the data cache and a second virtual benefit table corresponding to the mapping table cache; when the preset updating period is finished, calculating a first average virtual benefit of a first virtual benefit table corresponding to the data cache and a second average virtual benefit of a second virtual benefit table corresponding to the mapping table cache; and dynamically distributing the memory sizes of the data cache and the mapping table cache according to the first average virtual benefit and the second average virtual benefit. By dynamically distributing the buffer space of the main controller, the invention can improve the read-write performance of the DRAM-Less solid state disk.
Referring to fig. 15, fig. 15 is a schematic structural diagram of a dynamic allocation device for cache according to an embodiment of the present invention.
The dynamic allocation device of the cache is applied to a DRAM-Less solid state disk, the DRAM-Less solid state disk comprises a main controller, the main controller comprises a cache space, the cache space comprises a data cache and a mapping table cache, and the device comprises:
a memory allocation unit 151, configured to allocate the memory sizes of the data cache and the mapping table cache in advance;
a virtual benefit table establishing unit 152, configured to establish a first virtual benefit table corresponding to the data cache and a second virtual benefit table corresponding to the mapping table cache;
the average virtual benefit calculating unit 153 is configured to calculate, when a preset update period is over, a first average virtual benefit of the first virtual benefit table corresponding to the data cache and a second average virtual benefit of the second virtual benefit table corresponding to the mapping table cache;
and a dynamic allocation unit 154, configured to dynamically allocate the memory sizes of the data cache and the mapping table cache according to the first average virtual benefit and the second average virtual benefit.
In the embodiment of the present invention, the first virtual benefit table corresponding to the data cache includes logical block address data, a read benefit value and a write benefit value, and the average virtual benefit calculating unit is specifically configured to:
Acquiring a read benefit value and a write benefit value corresponding to each logic block address data in the first virtual benefit table;
summing the read benefit value and the write benefit value corresponding to each logic block address data, and calculating the sum value of the first virtual benefit table;
acquiring the number of the logic block address data eliminated by the data cache when the updating period is finished;
and calculating a first average virtual benefit of the first virtual benefit table corresponding to the data cache according to the sum value of the first virtual benefit table and the number of the logic block address data.
In the embodiment of the present invention, the second virtual benefit table corresponding to the mapping table cache includes a mapping table management unit, a read benefit value and a write benefit value, and the average virtual benefit calculation unit is specifically configured to:
acquiring a read benefit value and a write benefit value corresponding to each mapping table management unit in the second virtual benefit table;
summing the read benefit value and the write benefit value corresponding to each mapping table management unit, and calculating the sum value of the second virtual benefit table;
acquiring the number of mapping table management units of the second virtual profit table at the end of the updating period;
And calculating a second average virtual benefit of the second virtual benefit table corresponding to the mapping table cache according to the sum value of the second virtual benefit table and the number of mapping table management units of the second virtual benefit table.
In an embodiment of the present invention, the dynamic allocation unit is specifically configured to:
if the first average virtual benefit is greater than the second average virtual benefit, dividing part of the memory of the mapping table cache into the data cache;
and if the first average virtual benefit is smaller than the second average virtual benefit, dividing part of the memory of the data cache into the mapping table cache.
In the embodiment of the present invention, the size of the memory space occupied by the logical block address data and the mapping table management unit is the same, and the dynamic allocation unit is specifically configured to:
selecting a least recently used mapping table management unit in the mapping table cache based on an LRU algorithm, and dividing a memory space corresponding to the least recently used mapping table management unit into the data cache;
the dividing the partial memory of the data cache into the mapping table cache includes:
and selecting the least recently used logic block address data in the data cache based on an LRU algorithm, and dividing the corresponding memory space into the mapping table cache.
In an embodiment of the present invention, the apparatus further includes:
and the resetting unit is used for clearing the first virtual benefit table corresponding to the data cache and the second virtual benefit table corresponding to the mapping table cache after the updating period is finished.
Since the apparatus embodiments and the method embodiments are based on the same concept, on the premise that the contents do not conflict with each other, the contents of the apparatus embodiments may refer to the method embodiments, which are not described herein.
In an embodiment of the present invention, a dynamic allocation device for a cache is provided, where the dynamic allocation device is applied to a DRAM-Less solid state disk, where the DRAM-Less solid state disk includes a main controller, and the main controller includes a cache space, where the cache space includes a data cache and a mapping table cache, and the device includes: the memory allocation unit is used for pre-allocating the memory sizes of the data cache and the mapping table cache; the virtual profit table establishing unit is used for establishing a first virtual profit table corresponding to the data cache and a second virtual profit table corresponding to the mapping table cache; the average virtual benefit calculating unit is used for calculating a first average virtual benefit of the first virtual benefit table corresponding to the data cache and a second average virtual benefit of the second virtual benefit table corresponding to the mapping table cache when the preset updating period is finished; and the dynamic allocation unit is used for dynamically allocating the memory sizes of the data cache and the mapping table cache according to the first average virtual benefit and the second average virtual benefit. By dynamically distributing the buffer space of the main controller, the invention can improve the read-write performance of the DRAM-Less solid state disk.
Embodiments of the present invention also provide a non-volatile computer storage medium storing computer-executable instructions that are executed by one or more processors, such as the one processor 111 in fig. 1, to cause the one or more processors to perform the method for dynamically allocating a cache in any of the method embodiments described above, such as the steps shown in fig. 4 described above; the functions of the individual units described in fig. 15 can also be implemented.
The above-described embodiments of the apparatus or device are merely illustrative, in which the unit modules illustrated as separate components may or may not be physically separate, and the components shown as unit modules may or may not be physical units, may be located in one place, or may be distributed over multiple network module units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general purpose hardware platform, or may be implemented by hardware. Based on such understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the related art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for up to a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; the technical features of the above embodiments or in the different embodiments may also be combined within the idea of the invention, the steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the corresponding technical solutions from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. The dynamic allocation method of the cache is applied to a DRAM-Less solid state disk, the DRAM-Less solid state disk comprises a main controller, and the main controller comprises a cache space, and is characterized in that the cache space comprises a data cache and a mapping table cache, and the method comprises the following steps:
the memory size of the data cache and the mapping table cache is pre-allocated;
Establishing a first virtual benefit table corresponding to the data cache and a second virtual benefit table corresponding to the mapping table cache;
when the preset updating period is finished, calculating a first average virtual benefit of a first virtual benefit table corresponding to the data cache and a second average virtual benefit of a second virtual benefit table corresponding to the mapping table cache;
and dynamically distributing the memory sizes of the data cache and the mapping table cache according to the first average virtual benefit and the second average virtual benefit.
2. The method of claim 1, wherein the first virtual benefit table corresponding to the data cache includes one-to-one logical block address data, a read benefit value, and a write benefit value, and wherein the calculating the first average virtual benefit of the first virtual benefit table corresponding to the data cache includes:
acquiring a read benefit value and a write benefit value corresponding to each logic block address data in the first virtual benefit table;
summing the read benefit value and the write benefit value corresponding to each logic block address data, and calculating the sum value of the first virtual benefit table;
acquiring the number of the logic block address data eliminated by the data cache when the updating period is finished;
And calculating a first average virtual benefit of the first virtual benefit table corresponding to the data cache according to the sum value of the first virtual benefit table and the number of the logic block address data.
3. The method according to claim 2, wherein the method further comprises:
when the logic block address data read by the host IO hits the logic block address data in the first virtual benefit table, the read benefit value corresponding to the logic block address data in the first virtual benefit table is increased;
when the logic block address data written by the host IO hits the logic block address data in the first virtual benefit table, the write benefit value corresponding to the logic block address data in the first virtual benefit table is increased.
4. The method of claim 1, wherein the second virtual benefit table corresponding to the mapping table cache includes a one-to-one mapping table management unit, a read benefit value, and a write benefit value, and wherein calculating the second average virtual benefit of the second virtual benefit table corresponding to the mapping table cache includes:
acquiring a read benefit value and a write benefit value corresponding to each mapping table management unit in the second virtual benefit table;
Summing the read benefit value and the write benefit value corresponding to each mapping table management unit, and calculating the sum value of the second virtual benefit table;
acquiring the number of mapping table management units of the second virtual profit table at the end of the updating period;
and calculating a second average virtual benefit of the second virtual benefit table corresponding to the mapping table cache according to the sum value of the second virtual benefit table and the number of mapping table management units of the second virtual benefit table.
5. The method according to claim 4, wherein the method further comprises:
if the mapping table cache memory is full, selecting a mapping table management unit in the mapping table cache based on an LRU algorithm, and adding the mapping table management unit into the second virtual profit table;
when the logic block address data read by the host IO hits the mapping table management unit corresponding to the logic block address data in the second virtual benefit table, the reading benefit value of the mapping table management unit is increased;
when the logic block address data written by the host IO hits the mapping table management unit corresponding to the logic block address data in the second virtual benefit table, the writing benefit value of the mapping table management unit is increased.
6. The method of claim 5, wherein dynamically allocating the memory sizes of the data cache and the mapping table cache based on the first average virtual benefit and the second average virtual benefit comprises:
if the first average virtual benefit is greater than the second average virtual benefit, dividing part of the memory of the mapping table cache into the data cache;
and if the first average virtual benefit is smaller than the second average virtual benefit, dividing part of the memory of the data cache into the mapping table cache.
7. The method of claim 6, wherein each of the logical block address data is the same size as a memory space occupied by the mapping table management unit, the partitioning the portion of memory of the mapping table cache into the data cache comprising:
selecting a least recently used mapping table management unit in the mapping table cache based on an LRU algorithm, and dividing a memory space corresponding to the least recently used mapping table management unit into the data cache;
the dividing the partial memory of the data cache into the mapping table cache includes:
and selecting the least recently used logic block address data in the data cache based on an LRU algorithm, and dividing the corresponding memory space into the mapping table cache.
8. The method of claim 1, wherein after dynamically allocating the memory sizes of the data cache and the map cache, the method further comprises:
and after the updating period is finished, the first virtual benefit table corresponding to the data cache and the second virtual benefit table corresponding to the mapping table cache are emptied.
9. The utility model provides a dynamic allocation device of buffer memory, is applied to DRAM-Less solid state hard disk, DRAM-Less solid state hard disk includes main control unit, main control unit includes the buffer memory space, its characterized in that, the buffer memory space includes data buffer memory and mapping table buffer memory, the device includes:
the memory allocation unit is used for pre-allocating the memory sizes of the data cache and the mapping table cache;
the virtual profit table establishing unit is used for establishing a first virtual profit table corresponding to the data cache and a second virtual profit table corresponding to the mapping table cache;
the average virtual benefit calculating unit is used for calculating a first average virtual benefit of the first virtual benefit table corresponding to the data cache and a second average virtual benefit of the second virtual benefit table corresponding to the mapping table cache when the preset updating period is finished;
And the dynamic allocation unit is used for dynamically allocating the memory sizes of the data cache and the mapping table cache according to the first average virtual benefit and the second average virtual benefit.
10. The DRAM-Less solid state disk is characterized by comprising:
a flash memory chip comprising a plurality of wafers, each wafer comprising a plurality of groupings, each grouping comprising a plurality of physical blocks, each physical block comprising a plurality of physical pages;
a main controller, the main controller comprising:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein, the liquid crystal display device comprises a liquid crystal display device,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of dynamic allocation of a cache as claimed in any one of claims 1 to 8.
CN202010398316.6A 2020-05-12 2020-05-12 Dynamic allocation method and device for cache and DRAM-Less solid state disk Active CN111639037B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010398316.6A CN111639037B (en) 2020-05-12 2020-05-12 Dynamic allocation method and device for cache and DRAM-Less solid state disk

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010398316.6A CN111639037B (en) 2020-05-12 2020-05-12 Dynamic allocation method and device for cache and DRAM-Less solid state disk

Publications (2)

Publication Number Publication Date
CN111639037A CN111639037A (en) 2020-09-08
CN111639037B true CN111639037B (en) 2023-06-09

Family

ID=72330070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010398316.6A Active CN111639037B (en) 2020-05-12 2020-05-12 Dynamic allocation method and device for cache and DRAM-Less solid state disk

Country Status (1)

Country Link
CN (1) CN111639037B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112559385A (en) * 2020-12-22 2021-03-26 深圳忆联信息系统有限公司 Method and device for improving SSD writing performance, computer equipment and storage medium
CN113778911B (en) * 2021-08-04 2023-11-21 成都佰维存储科技有限公司 L2P data caching method and device, readable storage medium and electronic equipment
CN115203075B (en) * 2022-06-27 2024-01-19 威胜能源技术股份有限公司 Distributed dynamic mapping cache design method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049397A (en) * 2012-12-20 2013-04-17 中国科学院上海微系统与信息技术研究所 Method and system for internal cache management of solid state disk based on novel memory
CN103678166A (en) * 2013-08-16 2014-03-26 记忆科技(深圳)有限公司 Method and system for using solid-state disk as cache of computer
CN104166634A (en) * 2014-08-12 2014-11-26 华中科技大学 Management method of mapping table caches in solid-state disk system
CN104657461A (en) * 2015-02-10 2015-05-27 北京航空航天大学 File system metadata search caching method based on internal memory and SSD (Solid State Disk) collaboration
WO2017054737A1 (en) * 2015-09-30 2017-04-06 华为技术有限公司 Address mapping method and device based on mass solid-state storage
CN110968529A (en) * 2019-11-28 2020-04-07 深圳忆联信息系统有限公司 Method and device for realizing non-cache solid state disk, computer equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049397A (en) * 2012-12-20 2013-04-17 中国科学院上海微系统与信息技术研究所 Method and system for internal cache management of solid state disk based on novel memory
CN103678166A (en) * 2013-08-16 2014-03-26 记忆科技(深圳)有限公司 Method and system for using solid-state disk as cache of computer
CN104166634A (en) * 2014-08-12 2014-11-26 华中科技大学 Management method of mapping table caches in solid-state disk system
CN104657461A (en) * 2015-02-10 2015-05-27 北京航空航天大学 File system metadata search caching method based on internal memory and SSD (Solid State Disk) collaboration
WO2017054737A1 (en) * 2015-09-30 2017-04-06 华为技术有限公司 Address mapping method and device based on mass solid-state storage
CN110968529A (en) * 2019-11-28 2020-04-07 深圳忆联信息系统有限公司 Method and device for realizing non-cache solid state disk, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李东阳 ; 刘鹏 ; 丁科 ; 田浪军 ; .基于固态硬盘的云存储分布式缓存策略.计算机工程.2013,(第04期),全文. *

Also Published As

Publication number Publication date
CN111639037A (en) 2020-09-08

Similar Documents

Publication Publication Date Title
CN111639037B (en) Dynamic allocation method and device for cache and DRAM-Less solid state disk
US10126964B2 (en) Hardware based map acceleration using forward and reverse cache tables
EP3414665B1 (en) Profiling cache replacement
US8443144B2 (en) Storage device reducing a memory management load and computing system using the storage device
US6968424B1 (en) Method and system for transparent compressed memory paging in a computer system
CN105930282B (en) A kind of data cache method for NAND FLASH
US20160217071A1 (en) Cache Allocation in a Computerized System
JP2017138852A (en) Information processing device, storage device and program
CN105339910B (en) Virtual NAND capacity extensions in hybrid drive
JP7013294B2 (en) Memory system
JP2012141946A (en) Semiconductor storage device
KR101297442B1 (en) Nand flash memory including demand-based flash translation layer considering spatial locality
JP2003131946A (en) Method and device for controlling cache memory
CN113342265B (en) Cache management method and device, processor and computer device
CN103543955A (en) Method and system for reading cache with solid state disk as equipment and solid state disk
US11630779B2 (en) Hybrid storage device with three-level memory mapping
CN113243007A (en) Storage class memory access
WO2021062982A1 (en) Method and apparatus for managing hmb memory, and computer device and storage medium
CN116795735B (en) Solid state disk space allocation method, device, medium and system
US20240020014A1 (en) Method for Writing Data to Solid-State Drive
CN110275678B (en) STT-MRAM-based solid state memory device random access performance improvement method
US20230120184A1 (en) Systems, methods, and devices for ordered access of data in block modified memory
WO2017031637A1 (en) Memory access method, apparatus and system
CN110968527A (en) FTL provided caching
CN114185492A (en) Solid state disk garbage recycling algorithm based on reinforcement learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant