CN111026678B - Cache design method and device based on solid state disk and computer equipment - Google Patents
Cache design method and device based on solid state disk and computer equipment
- Publication number
- CN111026678B (application CN201911340691.9A)
- Authority
- CN
- China
- Prior art keywords
- cache
- data
- lba
- mapping table
- lba data
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0871—Allocation or management of cache space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1016—Performance improvement
- G06F2212/1024—Latency reduction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1056—Simplification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/21—Employing a record carrier using a specific recording technology
- G06F2212/214—Solid state disk
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The application relates to a cache design method and device based on a solid state disk, a computer device, and a storage medium, wherein the method comprises the following steps: acquiring a cache design request based on a solid state disk; storing LBA data in a corresponding cache unit according to the cache design request; after the LBA data is stored in the cache unit, recording the physical position of the LBA data in the cache unit through a mapping table; when the host subsequently needs to read the LBA data, the cache directly queries the physical position of the corresponding LBA data recorded in the mapping table; and reading the corresponding LBA data directly from the cache according to the query result. The invention exploits the global nature of the mapping table in the SSD, combines the cache design with the mapping table, and uses the mapping table to record the physical position of LBA data in the cache, thereby reducing the search time complexity to the theoretical optimum and improving the cache search efficiency.
Description
Technical Field
The invention relates to the technical field of solid state disks, in particular to a cache design method and device based on a solid state disk, computer equipment and a storage medium.
Background
At present, both enterprise-level solid state disks (SSDs) and consumer-level SSDs have high requirements on the bandwidth (throughput) and latency of read/write commands. Generally, a cache (RAM) is designed inside an SSD: the data of a write command is first stored in the SSD cache and, after being combined into a certain amount of data, is packed and written into the flash memory (NAND), as shown in FIG. 1. This effectively exploits the efficient random-access characteristic of the cache and the per-physical-page write characteristic of the flash memory.
Specifically, when a read command is issued to the SSD, the SSD needs to determine whether the data required by the read command is in the cache; if so, the data in the cache is transmitted directly to the host, which is more efficient. A key point of cache design is whether the search algorithm that judges whether a read command hits the cache is efficient. Existing mainstream search algorithms mainly optimize the storage structure of the cache to reduce the search time, but the search time in these approaches remains in a certain proportional relation to the cache size, so optimal performance cannot be achieved.
Disclosure of Invention
Therefore, in order to solve the above technical problems, it is necessary to provide a solid state disk-based cache design method, apparatus, computer device, and storage medium that can bring the cache search efficiency to the theoretical optimum.
A cache design method based on a solid state disk comprises the following steps:
acquiring a cache design request based on a solid state disk;
according to the cache design request based on the solid state disk, storing the LBA data in a corresponding cache unit;
after the LBA data is stored in the cache unit, recording the physical position of the LBA data in the cache unit through a mapping table;
when the host subsequently needs to read the LBA data, the cache directly queries the physical position of the corresponding LBA data recorded in the mapping table;
and directly reading corresponding LBA data from the cache according to the query result.
In one embodiment, after the LBA data is stored in the cache unit, the step of recording the physical location of the LBA data in the cache unit through the mapping table further includes:
after the LBA data is stored in the cache unit, recording the physical position of the cache unit in the context of the corresponding LBA position in the mapping table;
the physical position of the cache unit comprises a physical address and a flag bit, wherein the physical address represents the offset position of the cache unit in the cache or the position of a physical page in the flash memory, and the flag bit indicates whether the data is in the cache.
In one embodiment, the method further comprises:
acquiring an LBA data read command request sent by a host;
reading the context corresponding to the LBA in the mapping table according to the LBA data read command request;
judging whether the flag bit in the read context is 1;
and if the flag bit in the read context is 1, reading the corresponding data from the cache.
In one embodiment, after the step of determining whether the flag bit in the read context is 1, the method further includes:
and if the flag bit in the read context is not 1, reading the corresponding data from the flash memory.
A cache design device based on a solid state disk, the device comprising:
the acquisition module is used for acquiring a cache design request based on a solid state disk;
the cache module is used for storing the LBA data in the corresponding cache unit according to the cache design request based on the solid state disk;
the recording module is used for recording the physical position of the LBA data in the cache unit through a mapping table after the LBA data is stored in the cache unit;
the query module is used for, when the host subsequently needs to read the LBA data, directly querying the physical position of the corresponding LBA data recorded in the mapping table;
and the reading module is used for directly reading the corresponding LBA data from the cache according to the query result.
In one embodiment, the recording module is further configured to:
after the LBA data is stored in the cache unit, recording the physical position of the cache unit in the context of the corresponding LBA position in the mapping table;
the physical position of the cache unit comprises a physical address and a flag bit, wherein the physical address represents the offset position of the cache unit in the cache or the position of a physical page in the flash memory, and the flag bit indicates whether the data is in the cache.
In one embodiment, the apparatus further comprises a read command module, the read command module is configured to:
acquiring an LBA data read command request sent by a host;
reading the context corresponding to the LBA in the mapping table according to the LBA data read command request;
judging whether the flag bit in the read context is 1;
and if the flag bit in the read context is 1, reading the corresponding data from the cache.
In one embodiment, the read command module is further configured to:
and if the flag bit in the read context is not 1, reading the corresponding data from the flash memory.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any of the above methods when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of any of the methods described above.
According to the above cache design method and device based on a solid state disk, computer device, and storage medium, a cache design request based on a solid state disk is acquired; the LBA data is stored in a corresponding cache unit according to the request; after the LBA data is stored in the cache unit, the physical position of the LBA data in the cache unit is recorded through the mapping table; when the host subsequently needs to read the LBA data, the cache directly queries the physical position of the corresponding LBA data recorded in the mapping table; and the corresponding LBA data is read directly from the cache according to the query result. The invention exploits the global nature of the mapping table in the SSD, combines the cache design with the mapping table, and uses the mapping table to record the physical position of LBA data in the cache, thereby reducing the search time complexity to the theoretical optimum and improving the cache search efficiency.
Drawings
FIG. 1 is a diagram illustrating a caching mechanism in the prior art;
FIG. 2 is a diagram illustrating reading data through a mapping table according to the prior art;
FIG. 3 is a flowchart illustrating a solid state disk-based cache design method according to an embodiment;
FIG. 4 is a schematic flow chart illustrating a solid state disk-based cache design method according to another embodiment;
FIG. 5 is a diagram illustrating the operation of mapping tables across modules in one embodiment;
FIG. 6 is a diagram illustrating a mapping table change in a data write cache, according to an embodiment;
FIG. 7 is a diagram illustrating mapping table changes in writing data to a flash memory according to one embodiment;
FIG. 8 is a schematic flow chart diagram illustrating host reading data in one embodiment;
FIG. 9 is a block diagram of an embodiment of a solid state disk-based cache design apparatus;
FIG. 10 is a block diagram of a solid state disk-based cache design apparatus according to another embodiment;
FIG. 11 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The minimum unit with which the host communicates with the solid state disk is the LBA (Logical Block Address), usually 512 Bytes or 4 KBytes; for convenience of description it is assumed to be 4 KBytes here. Mainstream solid state disks adopt a mapping mechanism with a 4KB unit, so 4KB is also used as the storage and management unit in the cache. As shown in FIG. 1, the cache has N storage units; each storage unit records an LBA logical address issued by the host and stores the data (4KB) corresponding to that LBA, and all LBAs and corresponding data issued by the host are first stored in the cache in sequence. For example, if N is 8, the host writes LBA0, LBA3, LBA5, LBA10, LBA20, LBA13, LBA2, LBA7 in sequence to the linear cache of the SSD. When the host initiates a request to read LBA7, the SSD cache must determine whether the data of LBA7 is in the cache; because the cache is linear, any cache unit may hold LBA7, so the search algorithm has to traverse the LBA logical address of each cache unit from the beginning to see whether it equals LBA7. In this example 8 searches are needed to determine that LBA7 is indeed in the cache, with the data stored in the 8th cache unit. In fact, the search time complexity of a cache designed with this linear structure is O(N), which is very inefficient.
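For illustration only, the linear cache and its traversal search described above can be sketched as follows in C (the structure, sizes, and names are assumptions made for readability, not part of the claimed design):

```c
#include <stdint.h>

#define CACHE_UNITS   8        /* N cache units, matching the example above */
#define LBA_SIZE      4096     /* 4 KB of data per LBA                      */
#define INVALID_INDEX (-1)

/* One cache unit: the LBA logical address it holds plus its 4 KB of data. */
struct cache_unit {
    uint64_t lba;
    uint8_t  data[LBA_SIZE];
};

static struct cache_unit linear_cache[CACHE_UNITS];
static int               used_units;   /* number of occupied cache units    */

/* Linear search: compare the LBA of every occupied unit from the beginning.
 * In the worst case (e.g. reading LBA7 above) all N units are examined,
 * so the search time complexity is O(N).                                    */
int linear_cache_lookup(uint64_t lba)
{
    for (int i = 0; i < used_units; i++) {
        if (linear_cache[i].lba == lba)
            return i;                   /* hit: index of the matching unit   */
    }
    return INVALID_INDEX;               /* miss: data must be read from NAND */
}
```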
In the prior art, besides the linear cache structure, there are somewhat more efficient structures. With a hash structure, the average search time complexity is O(N/K) and the worst-case time complexity is O(N), where K is the number of hash key values (not described in detail here); with a red-black tree structure, the search time complexity is O(log N). These cache structures are all proposed to reduce the search time complexity as much as possible, but in any case the complexity remains tied to the cache size N and increases as N grows.
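The hash-structured alternative mentioned above can likewise be sketched as follows (a minimal, assumed illustration; K buckets give an average chain length of N/K, but the worst case is still O(N)):

```c
#include <stddef.h>
#include <stdint.h>

#define HASH_BUCKETS  4                 /* K hash key values                 */
#define LBA_DATA_SIZE 4096

/* Cache units are chained into the bucket selected by lba % K, so a lookup
 * scans only one chain: O(N/K) on average, O(N) in the worst case.          */
struct hashed_unit {
    uint64_t            lba;
    uint8_t             data[LBA_DATA_SIZE];
    struct hashed_unit *next;           /* next unit in the same bucket      */
};

static struct hashed_unit *buckets[HASH_BUCKETS];

struct hashed_unit *hashed_cache_lookup(uint64_t lba)
{
    for (struct hashed_unit *u = buckets[lba % HASH_BUCKETS]; u != NULL; u = u->next) {
        if (u->lba == lba)
            return u;                   /* hit                               */
    }
    return NULL;                        /* miss                              */
}
```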
In view of the above, the invention provides a cache design method based on a solid state disk, with the aim of reducing the search time complexity to the theoretically optimal O(1) so as to improve the search efficiency of the cache.
In an embodiment, as shown in fig. 3, a solid state disk-based cache design method is provided, including:
step 302, obtaining a cache design request based on a solid state disk;
step 304, storing the LBA data in a corresponding cache unit according to the cache design request based on the solid state disk;
step 306, after the LBA data is stored in the cache unit, recording a physical location of the LBA data in the cache unit through the mapping table;
step 308, when the host subsequently needs to read the LBA data, the cache directly queries the physical location of the corresponding LBA data recorded in the mapping table;
step 310, according to the query result, directly reading the corresponding LBA data from the cache.
Specifically, since the flash memory medium has the characteristic that data can only be written after a physical block has been erased, the data of a host-issued logical address LBA_x is not necessarily written at the physical address offset by x but at any possible physical address y, so a global mapping table is required inside the SSD to record the correspondence between the logical address LBA_x and the physical address y, as shown in FIG. 2. Assuming the physical page size is also 4KB, after the data of the first four cache units (LBA0, LBA3, LBA5, LBA10) in the cache is written into physical pages 0-3 of physical block 0 of the flash memory, the physical positions written in the flash memory, [0,0], [0,1], [0,2], [0,3] ([physical block number, physical page number]), are recorded in the contexts corresponding to LBA0, LBA3, LBA5, LBA10 in the global mapping table. When the host subsequently reads the data of LBA5 and the SSD cache does not find it (the search consumes O(N) time, the data having already been written to the flash memory), the physical address [0,2] is found at the LBA5 position of the mapping table (time complexity O(1)), and a back-end command is then initiated to read the data from physical address [0,2] of the flash memory.
Since the mapping table stores the physical locations in order of logical address from LBA 0 to LBA Max, querying the mapping table by LBA address requires only O(1) complexity.
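Because the mapping table is indexed directly by logical address, the lookup is a single array access. A minimal sketch (the table size and the [block, page] encoding are assumptions for illustration):

```c
#include <stdint.h>

#define MAX_LBA (1u << 16)              /* small capacity, for illustration  */

/* Flash physical location: [physical block number, physical page number].   */
struct flash_addr {
    uint32_t block;
    uint32_t page;
};

/* The global mapping table holds one entry per LBA, in LBA order, so the
 * translation is one array access: O(1) regardless of drive size.           */
static struct flash_addr mapping_table[MAX_LBA];

struct flash_addr lba_to_flash(uint64_t lba)
{
    return mapping_table[lba];          /* e.g. LBA5 -> [0, 2] in FIG. 2     */
}
```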
In existing designs, the cache and the mapping table are independent and unrelated. In this embodiment, the two are connected together, and the core idea is as follows: the function of the mapping table is extended so that, after the data of LBA_x is stored in cache unit z, the location of cache unit z is recorded in the context of LBA_x in the mapping table, as shown in FIG. 5. When the host subsequently reads LBA_x, the cache directly queries the location recorded at LBA_x in the mapping table, finds cache unit z, and reads the data directly from the cache. The search time complexity equals O(1), and it remains O(1) no matter how the cache size N changes.
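A minimal sketch of this core idea, assuming a mapping-table context that can hold either a cache unit index or a flash address plus a flag (all names are illustrative, not taken from the patent):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_LBA     (1u << 16)          /* illustration only                 */
#define CACHE_UNITS 8

/* Mapping-table context of one LBA: a physical position plus a flag saying
 * whether that position is a cache unit or a flash page.                    */
struct map_ctx {
    uint32_t phys;                      /* cache unit z, or flash address    */
    bool     in_cache;                  /* true: data still sits in cache    */
};

static struct map_ctx mapping_table[MAX_LBA];
static uint8_t        cache_data[CACHE_UNITS][4096];

/* Write path: after the data of LBA_x lands in cache unit z, record z in
 * the context of LBA_x in the mapping table.                                */
void record_cache_location(uint64_t lba_x, uint32_t unit_z)
{
    mapping_table[lba_x].phys     = unit_z;
    mapping_table[lba_x].in_cache = true;
}

/* Read path: one mapping-table access tells whether the data is cached and,
 * if so, exactly which unit holds it -- O(1) no matter how large N is.      */
const uint8_t *lookup_in_cache(uint64_t lba_x)
{
    struct map_ctx ctx = mapping_table[lba_x];
    return ctx.in_cache ? cache_data[ctx.phys] : NULL;  /* NULL: go to NAND  */
}
```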
In this embodiment, a cache design request based on a solid state disk is acquired; the LBA data is stored in a corresponding cache unit according to the request; after the LBA data is stored in the cache unit, the physical position of the LBA data in the cache unit is recorded through the mapping table; when the host subsequently needs to read the LBA data, the cache directly queries the physical position of the corresponding LBA data recorded in the mapping table; and the corresponding LBA data is read directly from the cache according to the query result. The invention exploits the global nature of the mapping table in the SSD, combines the cache design with the mapping table, and uses the mapping table to record the physical position of LBA data in the cache, thereby reducing the search time complexity to the theoretical optimum and improving the cache search efficiency.
In one embodiment, after the LBA data is stored in the cache unit, the step of recording the physical location of the LBA data in the cache unit through the mapping table further includes:
after the LBA data is stored in the cache unit, recording the physical position of the cache unit in the context of the corresponding LBA position in the mapping table;
the physical position of the cache unit comprises a physical address and a flag bit, wherein the physical address represents an offset position of the cache unit in the cache or a position of a physical page in the flash memory, and the flag bit is used for representing whether data is in the cache or not.
Specifically, to distinguish the expression of the cache physical location from that of the flash physical location, in this embodiment the physical location is defined as a pair consisting of a physical address and an in_cache flag: the physical address represents the offset position of a cache unit in the cache or the position of a physical page in the flash memory, and in_cache indicates whether the data is in the cache; for example, in_cache = 1 means the data is stored in the cache.
Specifically, assuming N is 8, the host writes LBA0, LBA3, LBA5, and LBA10 in sequence into the first 4 cache units of the SSD linear cache, and the contexts corresponding to these LBAs in the mapping table are set to the cache physical addresses, as shown in FIG. 6. When the host subsequently initiates a request to read LBA10, the entry for LBA10 is found directly, in a single access, to be ([#3],1), indicating that the data is in cache unit #3 of the cache, and the data is transmitted directly from the cache, which is more efficient. When the cache is full and all of its data is flushed to physical locations [0,0], [0,1], [0,2], [0,3] in the flash memory, the contexts in the mapping table are updated to the flash physical addresses; the state of the mapping table, cache, and flash memory is then as shown in FIG. 7. If the host again initiates a request to read LBA10, the mapping table context ([0,3],0) indicates that the data is in physical page 3 of flash physical block 0.
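The state change from FIG. 6 to FIG. 7 (contexts rewritten from cache positions to flash positions when the cache is flushed) might look like the following sketch; the split [block, page] fields and the function name are assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

/* Context of one LBA, as defined above: a physical address plus a flag bit. */
struct lba_context {
    uint32_t block;     /* flash physical block (unused while in_cache = 1)  */
    uint32_t page;      /* flash physical page, or cache unit index          */
    bool     in_cache;  /* 1: data in cache unit 'page'; 0: data in NAND     */
};

/* Flush: the cached LBAs are written to consecutive pages of one flash
 * block, and each context switches from ([#unit],1) to ([block,page],0),
 * e.g. LBA10 goes from ([#3],1) to ([0,3],0) in the example above.          */
void flush_cache_to_flash(struct lba_context *table,
                          const uint64_t *cached_lbas, int count,
                          uint32_t dst_block)
{
    for (int i = 0; i < count; i++) {
        struct lba_context *ctx = &table[cached_lbas[i]];
        ctx->block    = dst_block;      /* e.g. physical block 0             */
        ctx->page     = (uint32_t)i;    /* physical pages 0..count-1         */
        ctx->in_cache = false;          /* data now lives only in flash      */
    }
}
```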
In an embodiment, as shown in fig. 4, a method for designing a cache based on a solid state disk is provided, where the method further includes:
step 402, obtaining a LBA data read command request sent by a host;
step 404, reading a context corresponding to the LBA in the mapping table according to the LBA data read command request;
step 406, judging whether the flag bit in the read context is 1;
step 408, if the flag bit in the read context is 1, reading corresponding data from the cache;
in step 410, if the flag bit in the read context is not 1, the corresponding data is read from the flash memory.
Specifically, based on the cache design method of the foregoing embodiments, this embodiment provides the flow of a read command, which, as shown in FIG. 8, specifically includes the following steps:
First, a request from the host to read LBA_x is acquired. Then, the firmware reads the context corresponding to LBA_x in the mapping table and judges whether in_cache in the read context is equal to 1. If so, the data is read from the cache; if not, the data is read from the flash memory.
It should be understood that although the steps in the flowcharts of FIGS. 3-8 are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated otherwise herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in FIGS. 3-8 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and the order of their execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided an apparatus 900 for designing a solid state disk-based cache, the apparatus including:
an obtaining module 901, configured to obtain a cache design request based on a solid state disk;
the cache module 902 is configured to store the LBA data in a corresponding cache unit according to the cache design request based on the solid state disk;
a recording module 903, configured to record, after the LBA data is stored in the cache unit, a physical location of the LBA data in the cache unit through a mapping table;
a query module 904, configured to, when the host subsequently needs to read the LBA data, directly query the physical location of the corresponding LBA data recorded in the mapping table;
the reading module 905 is configured to directly read the corresponding LBA data from the cache according to the query result.
In one embodiment, the recording module 903 is further configured to:
after the LBA data is stored in the cache unit, recording the physical position of the cache unit in the context of the corresponding LBA position in the mapping table;
the physical position of the cache unit comprises a physical address and a flag bit, wherein the physical address represents the offset position of the cache unit in the cache or the position of a physical page in the flash memory, and the flag bit indicates whether the data is in the cache.
In one embodiment, as shown in fig. 10, an apparatus 900 for designing a solid state disk-based cache is provided, the apparatus further includes a read command module 906 configured to:
acquiring an LBA data read command request sent by a host;
reading the context corresponding to the LBA in the mapping table according to the LBA data read command request;
judging whether the flag bit in the read context is 1;
and if the flag bit in the read context is 1, reading the corresponding data from the cache.
In one embodiment, the read command module 906 is further to:
and if the flag bit in the read context is not 1, reading the corresponding data from the flash memory.
For specific limitations of the solid state disk-based cache design apparatus, reference may be made to the above limitations of the solid state disk-based cache design method, which are not described herein again.
In one embodiment, a computer device is provided, the internal structure of which may be as shown in FIG. 11. The computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the nonvolatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements the solid state disk-based cache design method.
Those skilled in the art will appreciate that the structure shown in FIG. 11 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution is applied; a particular computer device may include more or fewer components than shown, or combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method embodiments when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the above respective method embodiments.
It will be understood by those skilled in the art that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention patent. It should be noted that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (8)
1. A cache design method based on a solid state disk is characterized by comprising the following steps:
acquiring a cache design request based on a solid state disk;
according to the cache design request based on the solid state disk, storing the LBA data in a corresponding cache unit;
after the LBA data is stored in the cache unit, recording the physical position of the LBA data in the cache unit through a mapping table;
when the host subsequently needs to read the LBA data, the cache directly queries the physical position of the corresponding LBA data recorded in the mapping table;
directly reading corresponding LBA data from the cache according to the query result;
after the LBA data is stored in the cache unit, recording a physical location of the LBA data in the cache unit through a mapping table further includes: after the LBA data is stored in the cache unit, recording the physical position of the cache unit in the context of the corresponding LBA position in the mapping table; the physical position of the cache unit comprises a physical address and a flag bit, the physical address represents an offset position of the cache unit in the cache or a position of a physical page in the flash memory, and the flag bit is used for representing whether data is in the cache or not.
2. The cache design method based on the solid state disk of claim 1, wherein the method further comprises:
acquiring an LBA data read command request sent by a host;
reading the context corresponding to the LBA in the mapping table according to the LBA data read command request;
judging whether the flag bit in the read context is 1;
and if the flag bit in the read context is 1, reading the corresponding data from the cache.
3. The cache design method based on the solid state disk of claim 2, wherein after the step of determining whether the flag bit in the read context is 1, the method further comprises:
and if the flag bit in the read context is not 1, reading the corresponding data from the flash memory.
4. A cache design device based on a solid state disk, characterized in that the device comprises:
the acquisition module is used for acquiring a cache design request based on a solid state disk;
the cache module is used for storing the LBA data in the corresponding cache unit according to the cache design request based on the solid state disk;
the recording module is used for recording the physical position of the LBA data in the cache unit through a mapping table after the LBA data is stored in the cache unit;
the query module is used for, when the host subsequently needs to read the LBA data, directly querying the physical position of the corresponding LBA data recorded in the mapping table;
the reading module is used for directly reading the corresponding LBA data from the cache according to the query result;
after the LBA data is stored in the cache unit, recording a physical location of the LBA data in the cache unit through a mapping table further includes: after the LBA data is stored in the cache unit, recording the physical position of the cache unit in the context of the corresponding LBA position in the mapping table; the physical position of the cache unit comprises a physical address and a flag bit, the physical address represents an offset position of the cache unit in the cache or a position of a physical page in the flash memory, and the flag bit is used for representing whether data is in the cache or not.
5. The solid state disk-based cache design device of claim 4, further comprising a read command module, wherein the read command module is configured to:
acquiring an LBA data read command request sent by a host;
reading the context corresponding to the LBA in the mapping table according to the LBA data read command request;
judging whether the flag bit in the read context is 1;
and if the flag bit in the read context is 1, reading the corresponding data from the cache.
6. The solid state disk-based cache design device of claim 5, wherein the read command module is further configured to:
and if the flag bit in the read context is not 1, reading the corresponding data from the flash memory.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 3 are implemented when the computer program is executed by the processor.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911340691.9A CN111026678B (en) | 2019-12-23 | 2019-12-23 | Cache design method and device based on solid state disk and computer equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911340691.9A CN111026678B (en) | 2019-12-23 | 2019-12-23 | Cache design method and device based on solid state disk and computer equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111026678A CN111026678A (en) | 2020-04-17 |
CN111026678B (en) | 2021-11-16 |
Family
ID=70212782
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911340691.9A Active CN111026678B (en) | 2019-12-23 | 2019-12-23 | Cache design method and device based on solid state disk and computer equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111026678B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4187363B1 (en) * | 2020-07-31 | 2024-09-25 | Huawei Technologies Co., Ltd. | Storage controller, storage control method, solid state disk and storage system |
CN115033175A (en) * | 2022-05-27 | 2022-09-09 | 阿里巴巴(中国)有限公司 | Data reading method and device, storage medium and processor |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110442533A (en) * | 2019-07-18 | 2019-11-12 | 合肥杰发科技有限公司 | A kind of method, equipment and storage medium improving access performance |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9519591B2 (en) * | 2013-06-22 | 2016-12-13 | Microsoft Technology Licensing, Llc | Latch-free, log-structured storage for multiple access methods |
CN104636285B (en) * | 2015-02-03 | 2016-03-23 | 北京麓柏科技有限公司 | A kind of flash-memory storage system and read-write thereof, delet method |
CN107066393B (en) * | 2017-01-12 | 2020-06-09 | 安徽大学 | Method for improving mapping information density in address mapping table |
CN107193758A (en) * | 2017-05-19 | 2017-09-22 | 记忆科技(深圳)有限公司 | The mapping table management method and solid state hard disc of a kind of solid state hard disc |
CN107832013B (en) * | 2017-11-03 | 2019-10-25 | 中国科学技术大学 | A method of management solid-state hard disc mapping table |
CN108268219B (en) * | 2018-02-01 | 2021-02-09 | 杭州宏杉科技股份有限公司 | Method and device for processing IO (input/output) request |
CN109684238A (en) * | 2018-12-19 | 2019-04-26 | 湖南国科微电子股份有限公司 | A kind of storage method, read method and the solid state hard disk of solid state hard disk mapping relations |
CN110109845B (en) * | 2019-04-26 | 2021-03-05 | 深圳忆联信息系统有限公司 | Cache data management method and device, computer equipment and storage medium |
- 2019-12-23: CN application CN201911340691.9A filed (published as CN111026678B, status: active)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110442533A (en) * | 2019-07-18 | 2019-11-12 | 合肥杰发科技有限公司 | A kind of method, equipment and storage medium improving access performance |
Also Published As
Publication number | Publication date |
---|---|
CN111026678A (en) | 2020-04-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11119940B2 (en) | Sequential-write-based partitions in a logical-to-physical table cache | |
US10915475B2 (en) | Methods and apparatus for variable size logical page management based on hot and cold data | |
US9229876B2 (en) | Method and system for dynamic compression of address tables in a memory | |
US10127166B2 (en) | Data storage controller with multiple pipelines | |
US8010770B2 (en) | Caching device for NAND flash translation layer | |
JP6224253B2 (en) | Speculative prefetching of data stored in flash memory | |
JP5585919B2 (en) | Power shutdown management | |
US10360155B1 (en) | Multi-tier memory management | |
US10223027B2 (en) | Optimized garbage collection for solid-state storage devices | |
EP3338193B1 (en) | Convertible leaf memory mapping | |
US11422945B2 (en) | Generating, maintaining, or utilizing a compressed logical-to-physical table based on sequential writes | |
CN111026678B (en) | Cache design method and device based on solid state disk and computer equipment | |
US11176033B2 (en) | Data storage devices and data processing methods | |
CN112835828A (en) | Direct Memory Access (DMA) commands for non-sequential source and destination memory addresses | |
US9524236B1 (en) | Systems and methods for performing memory management based on data access properties | |
CN111352865B (en) | Write caching for memory controllers | |
US20110264848A1 (en) | Data recording device | |
KR20120034976A (en) | Apparatus and method for mapping the data address in nand flash memory | |
CN107562654B (en) | IO command processing method and device | |
CN104978280B (en) | Data storage system and specific instruction execution method thereof | |
CN110442531B (en) | Method and device for improving reading performance based on solid state disk and computer equipment | |
CN111625477A (en) | Method and device for processing read request for accessing erase block | |
US10474569B2 (en) | Information processing device including nonvolatile cache memory and processor | |
US20240143512A1 (en) | Write buffer linking for easy cache reads | |
CN115220660A (en) | Write command processing optimization method and device for solid state disk and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |