CN111563052A - Cache method and device for reducing read delay, computer equipment and storage medium - Google Patents


Info

Publication number
CN111563052A
CN111563052A (application CN202010366048.XA)
Authority
CN
China
Prior art keywords
cache
command
block address
cache module
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010366048.XA
Other languages
Chinese (zh)
Other versions
CN111563052B (en)
Inventor
吴娴
刘金雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Union Memory Information System Co Ltd
Original Assignee
Shenzhen Union Memory Information System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Union Memory Information System Co Ltd filed Critical Shenzhen Union Memory Information System Co Ltd
Priority application: CN202010366048.XA
Publication of application: CN111563052A
Application granted
Publication of grant: CN111563052B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/0223: User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F 12/023: Free address space management
    • G06F 12/0238: Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246: Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory, in block-erasable memory, e.g. flash memory
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802: Addressing of a memory level in which access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866: Addressing of a memory level in which access to the desired data or data block requires associative addressing means, e.g. caches, for peripheral storage systems, e.g. disk cache
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00: Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/72: Details relating to flash memory management
    • G06F 2212/7201: Logical to physical mapping or translation of blocks or pages
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to a caching method and device for reducing read latency, computer equipment and a storage medium. The method comprises the following steps: S1, receiving a command sent by the host; S2, determining whether the command for logical block address x sent by the host is a write command or a read command; S3, determining whether the cache contains a cache module y whose cached data has already been flushed to the flash memory; S4, setting the management information of cache module y to the locked state; S5, transferring the command for logical block address x to cache module y, then setting the management information of cache module y to the unlocked state; S6, determining whether the cache space of cache module y is full; S7, flushing all data in cache module y to the flash memory while retaining the index information in the management information; S8, setting cache module y to the flushed-to-flash state. According to the invention, the cache module still retains its index information while its cached data is being flushed to the flash memory, so read latency is reduced.

Description

Cache method and device for reducing read delay, computer equipment and storage medium
Technical Field
The invention relates to the technical field of solid state disk reading and writing, in particular to a caching method and device for reducing reading delay, computer equipment and a storage medium.
Background
Generally, a cache (RAM) is designed into the SSD: data from write commands is first stored in the SSD's cache and, once a certain amount of data has accumulated, it is packed and written to the flash memory (NAND). This effectively exploits the cache's efficient random access and the flash memory's per-physical-page write characteristics.
In existing cache designs, once the cache is full its data is flushed to the flash memory and the data's index is no longer kept in the cache (i.e., the data can no longer be looked up in the cache, even though it is still present in DRAM). Taking TLC (Triple-Level Cell) flash as an example, the typical time to write data to the flash memory is 800us and the typical time to read it is 80us. If the host happens to read data that is being written to the flash memory during this window, the read must wait for the write to complete before the flash read can start, so the total read latency reaches 880us; this fails to meet latency requirements.
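The latency arithmetic above can be checked directly. The figures below are the hypothetical timings quoted in this background discussion for TLC NAND; actual values vary by flash part.

```python
# Hypothetical timings from the background discussion (TLC NAND).
NAND_WRITE_US = 800  # typical page-program (write) time
NAND_READ_US = 80    # typical page-read time

# Conventional design: a host read of data that is currently being
# programmed must queue behind the in-flight write before the flash
# read can even begin.
worst_case_read_us = NAND_WRITE_US + NAND_READ_US
print(worst_case_read_us)  # 880
```

This 880us worst case is the figure the invention targets: by keeping the cache index alive during the flush, the read is served from DRAM instead of waiting on the flash program operation.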
Disclosure of Invention
The invention aims to overcome the defects of the prior art and to provide a caching method and device for reducing read latency, computer equipment and a storage medium.
To achieve this aim, the invention adopts the following technical scheme:
A caching method for reducing read latency comprises the following steps:
S1, receiving a command for logical block address x sent by the host;
S2, determining whether the command for logical block address x sent by the host is a write command or a read command; if it is a write command, proceed to S3;
S3, determining whether the cache contains a cache module y whose cached data has already been flushed to the flash memory; if so, proceed to S4; if not, proceed to S7;
S4, setting the management information of cache module y to the locked state;
S5, transferring the command for logical block address x to cache module y, then setting the management information of cache module y to the unlocked state;
S6, determining whether the cache space of cache module y is full; if so, proceed to S7; if not, return to step S1;
S7, flushing all data in cache module y to the flash memory while retaining the index information in the management information;
S8, after the command for logical block address x has been transferred, setting cache module y to the flushed-to-flash state.
In a further technical scheme, in step S2, if the command for logical block address x sent by the host is determined to be a read command, proceed to S9:
S9, determining whether the cache contains a cache module y whose index information corresponds to logical block address x; if so, proceed to S10; if not, proceed to S12;
S10, determining whether the management information of cache module y is in the locked state; if not, proceed to S11; if so, repeat step S10;
S11, reading the data corresponding to the command for logical block address x from cache module y;
S12, reading the data corresponding to the command for logical block address x from the flash memory.
In a further technical scheme, the command size for logical block address x is 4KB.
In a further technical scheme, after step S8, in which the command for logical block address x has been transferred and cache module y has been set to the flushed-to-flash state, the method further comprises: returning to step S1 to receive the next command for a logical block address x sent by the host.
A caching device for reducing read latency, comprising: a receiving unit, a first judging unit, a second judging unit, a first setting unit, a transmission setting unit, a third judging unit, a flush-and-retain unit and a second setting unit;
the receiving unit is used for receiving a command of a logical block address x sent by a host;
the first judging unit is used for judging whether a command of a logic block address x sent by a host is a write command or a read command;
the second judging unit is used for judging whether the cache contains a cache module y whose cached data has been flushed to the flash memory;
the first setting unit is used for setting the management information of the cache module y to be in a locked state;
the transmission setting unit is used for transmitting the command of the logic block address x to the cache module y and setting the management information of the cache module y to be in an unlocked state;
the third judging unit is used for judging whether the cache space of the cache module y is full;
the flush-and-retain unit is used for flushing all data in cache module y to the flash memory while retaining the index information in the management information;
and the second setting unit is used for setting cache module y to the flushed-to-flash state after the command for logical block address x has been transferred.
In a further technical scheme, the device further comprises: a fourth judging unit, a fifth judging unit, a first reading unit and a second reading unit;
the fourth judging unit is configured to judge whether a cache module y whose index information corresponds to the logical block address x exists in the cache;
the fifth judging unit is used for judging whether the management information of the cache module y is in a locked state;
the first reading unit is used for reading data corresponding to the command of the logical block address x from the cache module y;
and the second reading unit is used for reading data corresponding to the command of the logical block address x from the flash memory.
In a further technical scheme, the command size for logical block address x is 4KB.
In a further technical scheme, the device further comprises a returning unit for returning to receive a command for a logical block address x sent by the host.
A computer device comprises a memory and a processor; the memory stores a computer program, and when executing the computer program the processor implements the caching method for reducing read latency described above.
A storage medium stores a computer program comprising program instructions which, when executed by a processor, implement the caching method for reducing read latency described above.
Compared with the prior art, the invention has the following beneficial effect: when the cached data in a cache module is flushed to the flash memory, the cache module still retains its index information, so a host read command that hits the cache during the flush can read the data directly from the cache. Latency is therefore greatly reduced, and requirements are better met.
The invention is further described below with reference to the accompanying drawings and specific embodiments.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a schematic diagram illustrating an application of a conventional caching mechanism;
FIG. 2 is a diagram illustrating an application of a conventional cache management module and management information;
FIG. 3 is a diagram illustrating an application of a conventional host write cache;
FIG. 4 is a schematic diagram illustrating an application of a conventional flash memory to flush cache data;
FIG. 5 is a schematic flowchart of a caching method for reducing read latency according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating an application of a host data write cache according to an embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating an application of flushing the cached data to the flash memory according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating an application of a host read command hit cache according to an embodiment of the present invention;
FIG. 9 is a schematic block diagram of a cache apparatus for reducing read latency according to an embodiment of the present invention;
FIG. 10 is a schematic block diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to the embodiments shown in FIG. 1 to FIG. 10: in the prior art shown in FIG. 1 to FIG. 4, the smallest unit in which the host communicates with the solid state disk is the LBA (Logical Block Address), whose size is determined by the host; assume here that it is 4KB. Mainstream solid state disks adopt a 4KB-granularity mapping mechanism, so 4KB is also used as the storage and management unit in the cache.
As shown in FIG. 1, enterprise-level solid state disks have strict latency requirements for read/write commands. A cache (RAM) is generally designed into the SSD: data from write commands is first stored in the SSD's cache and, once a certain amount of data has accumulated, it is packed and written to the flash memory (NAND), effectively exploiting the cache's efficient random access and the flash memory's per-physical-page write characteristics. As shown in FIG. 2, each 4KB cache unit in the cache stores the data of one LBA, and the management information of each cache unit includes:
LBA index value: indicating which LBA address the cache unit stores;
data address: indicating the start address of the cache unit's data in the DRAM;
when the host issues a write command, the solid state disk firmware writes the data into a cache unit and updates its management information; when the cache is full, the cached data is flushed to the flash memory. For example, assume the cache size is 16KB and the flash memory's physical page is 4KB. The host writes LBA0-3 to the solid state disk and the firmware caches the data in DRAM; the management information is shown in FIG. 3. When the cache is full, the firmware flushes the cached data to the flash memory, as shown in FIG. 4: the LBA index information in the cache is cleared, and the data is converted into several back-end requests, each describing a 4KB data segment in DRAM and its corresponding LBA logical address. The firmware back end receives the requests and writes the data to the flash memory; after the write completes, it returns the status to the cache module, which can then empty the cache units to cache new host data. This process takes approximately 800us. If the host reads the data of LBA0 just after the cache has been flushed to the back end, the cache module finds that the data is not in the cache and forwards the request to the back end; since LBA0 is being written to the flash memory, the request to read LBA0 must wait 800us for the write to complete before spending 80us reading from the flash. The latency of the read command is therefore about 880us, which is considerable for a read command.
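The conventional behavior described above can be sketched as a small model. All names here (`CacheUnit`, `lookup`, `flush_conventional`) are illustrative, not taken from the patent; the point is that the LBA index is dropped at flush time, so a read issued during the flush window misses even though the bytes are still in DRAM.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CacheUnit:
    lba: Optional[int]  # LBA index value: which LBA this 4KB unit stores (None if empty)
    data_addr: int      # start address of the unit's data in DRAM

def lookup(units, lba):
    """Return the cache unit holding `lba`, or None on a cache miss."""
    return next((u for u in units if u.lba == lba), None)

def flush_conventional(units):
    # Prior-art flush: the LBA index is cleared immediately, so subsequent
    # reads miss and must go to the (still busy) flash memory.
    for u in units:
        u.lba = None

units = [CacheUnit(lba=i, data_addr=i * 4096) for i in range(4)]  # LBA0-3 cached
assert lookup(units, 0) is not None   # hit before the flush
flush_conventional(units)
print(lookup(units, 0))               # None: a read of LBA0 now goes to flash
```

A read arriving in this state pays the full write-then-read penalty on the flash; the invention's change, shown later, is simply to leave `lba` intact during the flush.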
Referring to fig. 5 to 8, the present invention discloses a cache method for reducing read latency, which includes the following steps:
s1, receiving the command of logic block address x sent by the host;
Here x is not a fixed value; it merely serves as an identifier.
S2, determining whether the command for logical block address x sent by the host is a write command or a read command; if it is a write command, proceed to S3;
S3, determining whether the cache contains a cache module y whose cached data has already been flushed to the flash memory; if so, proceed to S4; if not, proceed to S7;
Here y is not a fixed value; it merely serves as an identifier.
S4, setting the management information of cache module y to the locked state;
S5, transferring the command for logical block address x to cache module y, then setting the management information of cache module y to the unlocked state;
S6, determining whether the cache space of cache module y is full; if so, proceed to S7; if not, return to step S1;
S7, flushing all data in cache module y to the flash memory while retaining the index information in the management information;
S8, after the command for logical block address x has been transferred, setting cache module y to the flushed-to-flash state.
After step S8, in which the command for logical block address x has been transferred and cache module y has been set to the flushed-to-flash state, the method further comprises: returning to step S1 to receive the next command for a logical block address x sent by the host, forming a loop.
In step S2, if the command for logical block address x sent by the host is determined to be a read command, proceed to S9:
S9, determining whether the cache contains a cache module y whose index information corresponds to logical block address x; if so, proceed to S10; if not, proceed to S12;
S10, determining whether the management information of cache module y is in the locked state; if not, proceed to S11; if so, repeat step S10;
S11, reading the data corresponding to the command for logical block address x from cache module y;
S12, reading the data corresponding to the command for logical block address x from the flash memory.
After step S11 and step S12, the flow returns to step S1, forming a loop.
The command size for logical block address x is 4KB.
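The read path S9-S12 above can be sketched as follows. The class and function names are ours, not the patent's; the key behavior is that the lookup succeeds because the index is retained, and S10's lock check makes the read wait only while host data is still streaming into the module.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CacheModule:
    lba: Optional[int] = None  # retained index; survives the flush
    lock: bool = False         # True while host data is still being written in
    data: bytes = b""

def host_read(modules, lba, read_flash):
    hit = next((m for m in modules if m.lba == lba), None)  # S9: index lookup
    if hit is None:
        return read_flash(lba)   # S12: miss -> read from the flash memory
    while hit.lock:              # S10: spin until the module is unlocked
        pass
    return hit.data              # S11: serve directly from the cache

mods = [CacheModule(lba=0, data=b"LBA0 payload")]
print(host_read(mods, 0, lambda lba: b"from flash"))  # hit: b'LBA0 payload'
print(host_read(mods, 7, lambda lba: b"from flash"))  # miss: b'from flash'
```

In real firmware the S10 wait would be event-driven rather than a busy loop; the spin here is only to mirror the "repeat step S10" wording of the method.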
In the technical scheme of the invention, both the host and the firmware's back-end module can index a cache module, so the following actions may occur simultaneously: 1. the host writes to cache module x; 2. the host reads cache module x; 3. the back-end control module reads cache module x; 4. the back-end control module writes to cache module x. To resolve the resulting conflicts, a field is added to the cache module's management information: Lock. As shown in FIG. 6, when Lock is in the locked state, data is being written into the cache module and other writes to it must wait; otherwise, other data may be written into it. In addition, a flag is needed to indicate whether the cache module's data has been flushed to the flash memory: Flushed.
Assume again that the host writes LBA0-3 to the solid state disk; the operation flow is as follows:
1) the firmware caches the LBA0-3 data in the cache (DRAM); since data is being written into the cache, the management information Lock of each cache module is set to True (locked) and Flushed is set to False (not flushed to flash), as shown in FIG. 6;
2) after the host data transfer finishes, Lock in the management information of each cache module is set to False (unlocked);
3) since the cache is full, the LBA0-3 data is flushed to the flash memory via back-end requests, while the cache modules still retain their index information; the management information is shown in FIG. 7. Because this step is the back-end control module reading data from the cache (and writing it to the flash memory), the Lock state remains False (unlocked);
4) after the back-end requests return their completion status to the cache module, Flushed in each cache module's management information is set to True (flushed to flash), indicating that the module's data has been flushed to the flash memory; the module can then be directly released to store the data of other LBAs.
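The four-step flow above can be sketched as follows. Names are ours, not the patent's: `Lock` guards the in-flight host transfer (steps 1-2), the flush keeps the LBA index alive (step 3), and `Flushed` marks completion (step 4).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CacheModule:
    lba: Optional[int] = None  # index retained even while/after flushing
    lock: bool = False         # True only during the host data transfer
    flushed: bool = False      # True once the data has reached the flash

def host_write(mod, lba):
    mod.lock, mod.lba, mod.flushed = True, lba, False  # 1) lock during transfer
    # ... host data lands in DRAM here ...
    mod.lock = False                                   # 2) unlock when done

def flush(mods, write_flash):
    for m in mods:
        write_flash(m.lba)  # 3) back end programs flash; m.lba stays valid
    for m in mods:
        m.flushed = True    # 4) completion: module may now be released/reused

mods = [CacheModule() for _ in range(4)]
for i, m in enumerate(mods):
    host_write(m, i)                      # host writes LBA0-3
flush(mods, write_flash=lambda lba: None)
print([m.lba for m in mods])              # [0, 1, 2, 3]: index retained
print(all(m.flushed for m in mods))       # True
```

Because `lba` is never cleared in step 3, a read arriving mid-flush still hits the cache; the conventional design cleared it at that point, which is exactly what produced the 880us worst case.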
If during step 3) the host initiates a read command for LBA0, the cache module has retained its index information, so the lookup determines a cache hit; and since the Lock state in LBA0's management information is False (unlocked), the cached data can be transmitted to the host directly, as shown in FIG. 8. Over a mainstream PCIe 4.0 (a high-speed serial computer expansion bus) interface, this takes about 1us, i.e., the latency the read command spends inside the solid state disk is 1us. Compared with the prior art, this is a substantial leap, and latency is greatly reduced.
The invention improves on the prior art: the cache management information still retains the index information while cached data is being flushed to the flash memory, so if a read command hits the cache during this period, the cached data can be transmitted to the host. The read latency in this scenario thus equals the time to read the cache, reaching the theoretical optimum.
According to the invention, when the cached data in a cache module is flushed to the flash memory, the cache module still retains its index information, and a host read command that hits the cache during the flush can read the data directly from the cache; latency is therefore greatly reduced, and requirements are better met.
Referring to FIG. 9, the invention also discloses a caching device for reducing read latency, comprising: a receiving unit 10, a first judging unit 20, a second judging unit 30, a first setting unit 40, a transmission setting unit 50, a third judging unit 60, a flush-and-retain unit 70, and a second setting unit 80;
the receiving unit 10 is configured to receive a command of a logical block address x sent by a host;
the first determining unit 20 is configured to determine that a command of a logical block address x sent by a host is a write command or a read command;
the second judging unit 30 is configured to judge whether the cache contains a cache module y whose cached data has been flushed to the flash memory;
the first setting unit 40 is configured to set the management information of the cache module y to be in a locked state;
the transmission setting unit 50 is configured to transmit a command of the logical block address x to the cache module y, and set management information of the cache module y to be in an unlocked state;
the third judging unit 60 is configured to judge whether the cache space of the cache module y is full;
the flush-and-retain unit 70 is configured to flush all data in cache module y to the flash memory while retaining the index information in the management information;
the second setting unit 80 is configured to set cache module y to the flushed-to-flash state after the command for logical block address x has been transferred.
The device further comprises: a fourth judging unit 90, a fifth judging unit 100, a first reading unit 110, and a second reading unit 120;
the fourth determining unit 90 is configured to determine whether a cache module y whose index information corresponds to the logical block address x exists in the cache;
the fifth judging unit 100 is configured to judge whether the management information of the cache module y is in a locked state;
the first reading unit 110 is configured to read data corresponding to a command of a logical block address x from the buffer module y;
the second reading unit 120 is configured to read data corresponding to the command of the logical block address x from the flash memory.
The command size for logical block address x is 4KB.
The device further comprises a returning unit 130, configured to return to receiving a command for a logical block address x sent by the host.
It should be noted that, as can be clearly understood by those skilled in the art, the specific implementation process of the cache apparatus for reducing read latency and each unit may refer to the corresponding description in the foregoing method embodiment, and for convenience and conciseness of description, no further description is provided herein.
The above caching device for reducing read latency may be implemented in the form of a computer program that can run on a computer device as shown in FIG. 10.
Referring to FIG. 10, a schematic block diagram of a computer device according to an embodiment of the present application: the computer device 500 may be a terminal or a server, where the terminal may be an electronic device with a communication function, such as a smartphone, tablet computer, notebook computer, desktop computer, personal digital assistant, or wearable device, and the server may be an independent server or a cluster of servers.
Referring to FIG. 10, the computer device 500 includes a processor 502, a memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer programs 5032 include program instructions that, when executed, cause the processor 502 to perform a caching method that reduces read latency.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 stored in the non-volatile storage medium 503; when the computer program 5032 is executed by the processor 502, the processor 502 performs the caching method for reducing read latency.
The network interface 505 is used for network communication with other devices. Those skilled in the art will appreciate that the structure shown in FIG. 10 is a block diagram of only part of the structure relevant to the present solution and does not limit the computer device 500 to which the present solution is applied; a particular computer device 500 may include more or fewer components than shown, combine certain components, or arrange the components differently.
It should be understood that, in the embodiment of the present application, the processor 502 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, discrete hardware components, and the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It will be understood by those skilled in the art that all or part of the flow of the method implementing the above embodiments may be implemented by a computer program instructing associated hardware. The computer program includes program instructions, and the computer program may be stored in a storage medium, which is a computer-readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer-readable storage medium. The storage medium stores a computer program, wherein the computer program comprises program instructions that, when executed by a processor, implement the above-described caching method for reducing read latency.
The storage medium may be a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, an optical disk, or any other computer-readable storage medium that can store program code.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above in general terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as departing from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units is only a division by logical function, and other divisions are possible in actual implementation; units or components may be combined or integrated into another system, and some features may be omitted or not performed.
The steps in the methods of the embodiments of the invention may be reordered, combined, or deleted according to actual needs; likewise, the units in the apparatuses of the embodiments may be merged, divided, or deleted according to actual needs. Furthermore, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to perform all or some of the steps of the methods of the embodiments of the present invention.
The above examples merely illustrate the technical content of the present invention for the reader's convenience; the embodiments of the present invention are not limited thereto, and any technical extension or re-creation based on the present invention falls within its protection. The scope of protection of the invention is defined by the claims.

Claims (10)

1. A caching method for reducing read latency, comprising the following steps:
S1, receiving a command for logical block address x issued by a host;
S2, judging whether the command for logical block address x issued by the host is a write command or a read command; if it is a write command, proceeding to S3;
S3, judging whether the cache contains a cache module y whose cache data has been flushed to the flash memory; if so, proceeding to S4; if not, proceeding to S7;
S4, setting the management information of the cache module y to a locked state;
S5, transferring the command for logical block address x to the cache module y, and setting the management information of the cache module y to an unlocked state;
S6, judging whether the cache space of the cache module y is full; if so, proceeding to S7; if not, returning to step S1;
S7, flushing all data in the cache module y to the flash memory while retaining the index information in the management information;
S8, once the command for logical block address x has been transferred, setting the cache module y to the state in which its cache data has been flushed to the flash memory.
2. The caching method for reducing read latency of claim 1, wherein in step S2, of judging whether the command for logical block address x issued by the host is a write command or a read command, if the command is a read command, the method proceeds to S9;
S9, judging whether the cache contains a cache module y whose index information corresponds to logical block address x; if so, proceeding to S10; if not, proceeding to S12;
S10, judging whether the management information of the cache module y is in the locked state; if not, proceeding to S11; if so, repeating step S10;
S11, reading the data corresponding to the command for logical block address x from the cache module y;
S12, reading the data corresponding to the command for logical block address x from the flash memory.
3. The caching method for reducing read latency of claim 1, wherein the command size of logical block address x is 4 KB.
4. The caching method for reducing read latency of claim 1, wherein after step S8, in which the cache module y is set to the flushed state once the command for logical block address x has been transferred, the method further comprises: returning to step S1 of receiving a command for logical block address x issued by the host.
5. A caching device for reducing read latency, comprising: a receiving unit, a first judging unit, a second judging unit, a first setting unit, a transmission setting unit, a third judging unit, a flush retention unit, and a second setting unit;
the receiving unit is configured to receive a command for logical block address x issued by a host;
the first judging unit is configured to judge whether the command for logical block address x issued by the host is a write command or a read command;
the second judging unit is configured to judge whether the cache contains a cache module y whose cache data has been flushed to the flash memory;
the first setting unit is configured to set the management information of the cache module y to a locked state;
the transmission setting unit is configured to transfer the command for logical block address x to the cache module y and to set the management information of the cache module y to an unlocked state;
the third judging unit is configured to judge whether the cache space of the cache module y is full;
the flush retention unit is configured to flush all data in the cache module y to the flash memory while retaining the index information in the management information;
and the second setting unit is configured to set the cache module y, once the command for logical block address x has been transferred, to the state in which its cache data has been flushed to the flash memory.
6. The caching device for reducing read latency of claim 5, further comprising: a fourth judging unit, a fifth judging unit, a first reading unit, and a second reading unit;
the fourth judging unit is configured to judge whether the cache contains a cache module y whose index information corresponds to logical block address x;
the fifth judging unit is configured to judge whether the management information of the cache module y is in the locked state;
the first reading unit is configured to read the data corresponding to the command for logical block address x from the cache module y;
and the second reading unit is configured to read the data corresponding to the command for logical block address x from the flash memory.
7. The caching device for reducing read latency of claim 5, wherein the command size of logical block address x is 4 KB.
8. The caching device for reducing read latency of claim 5, further comprising: a return unit, configured to return to receiving a command for logical block address x issued by the host.
9. A computer device comprising a memory storing a computer program and a processor which, when executing the computer program, implements the caching method for reducing read latency of any one of claims 1 to 4.
10. A storage medium storing a computer program, the computer program comprising program instructions which, when executed by a processor, implement the caching method for reducing read latency of any one of claims 1 to 4.
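The write path of claim 1 (S1–S8) and the read path of claim 2 (S9–S12) can be modeled in a few lines of code. The following Python sketch is illustrative only: all names (`CacheModule`, `CacheDevice`, `handle_write`, `handle_read`) and the single-threaded dictionaries standing in for cache and flash are assumptions, not part of the patent.

```python
# Minimal, single-threaded model of the claimed cache flow.
# Cache modules carry management information: a lock flag, retained
# index information, and a "flushed to flash" state.

class CacheModule:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}        # LBA -> data currently held in the cache
        self.index = set()    # index information, retained across flushes (S7)
        self.locked = False   # locked/unlocked management state (S4/S5)
        self.flushed = True   # "cache data has been flushed to flash" (S8)

class CacheDevice:
    def __init__(self, num_modules=2, capacity=4):
        self.modules = [CacheModule(capacity) for _ in range(num_modules)]
        self.flash = {}       # LBA -> data persisted in flash

    def _flush(self, m):
        # S7: flush all data to flash, but retain the index information.
        self.flash.update(m.data)
        m.data.clear()
        m.flushed = True      # S8: mark the module as flushed

    def handle_write(self, lba, data):
        # S3: look for a module whose cache data is already flushed.
        m = next((m for m in self.modules if m.flushed), None)
        if m is None:         # none available: flush one first (S7/S8)
            m = self.modules[0]
            self._flush(m)
        m.locked = True       # S4: lock while the command is transferred
        m.data[lba] = data    # S5: transfer the write into the module
        m.index.add(lba)
        m.flushed = False
        m.locked = False      # S5: unlock after the transfer
        if len(m.data) >= m.capacity:   # S6: cache space full?
            self._flush(m)              # S7/S8

    def handle_read(self, lba):
        # S9: look for a module whose index information covers this LBA.
        for m in self.modules:
            if lba in m.index and lba in m.data:
                while m.locked:         # S10: wait until unlocked
                    pass
                return m.data[lba]      # S11: serve the read from cache
        return self.flash.get(lba)      # S12: fall back to flash
```

Because the index information survives a flush (S7), a later read of the same LBA misses in `m.data` and falls through cleanly to flash, which is the behavior the claims rely on to keep reads correct after a flush.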
CN202010366048.XA 2020-04-30 2020-04-30 Caching method and device for reducing read delay, computer equipment and storage medium Active CN111563052B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010366048.XA CN111563052B (en) 2020-04-30 2020-04-30 Caching method and device for reducing read delay, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111563052A true CN111563052A (en) 2020-08-21
CN111563052B CN111563052B (en) 2023-08-08

Family

ID=72074620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010366048.XA Active CN111563052B (en) 2020-04-30 2020-04-30 Caching method and device for reducing read delay, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111563052B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111930517A (en) * 2020-09-18 2020-11-13 北京中科立维科技有限公司 High-performance self-adaptive garbage collection method and computer system
CN112306908A (en) * 2020-11-19 2021-02-02 广州安凯微电子股份有限公司 Method, system, terminal device and medium for locating abnormality of ICACHE instruction cache region of CPU
CN112513988A (en) * 2020-11-06 2021-03-16 长江存储科技有限责任公司 Pseudo-asynchronous multiplanar independent read
CN117407928A (en) * 2023-12-13 2024-01-16 合肥康芯威存储技术有限公司 Storage device, data protection method for storage device, computer apparatus, and medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6598128B1 (en) * 1999-10-01 2003-07-22 Hitachi, Ltd. Microprocessor having improved memory management unit and cache memory
US6970872B1 (en) * 2002-07-23 2005-11-29 Oracle International Corporation Techniques for reducing latency in a multi-node system when obtaining a resource that does not reside in cache
CN101794255A (en) * 2009-12-31 2010-08-04 浙江中控自动化仪表有限公司 Paperless recorder and method for storing data of same
CN102843396A (en) * 2011-06-22 2012-12-26 中兴通讯股份有限公司 Data writing and reading method and device in distributed caching system
CN104636285A (en) * 2015-02-03 2015-05-20 北京麓柏科技有限公司 Flash memory storage system and reading, writing and deleting method thereof
CN108920096A (en) * 2018-06-06 2018-11-30 深圳忆联信息系统有限公司 A kind of data storage method of SSD, device, computer equipment and storage medium
CN109101444A (en) * 2018-08-22 2018-12-28 深圳忆联信息系统有限公司 A kind of method and device reducing the random read latency of solid state hard disk
CN110196818A (en) * 2018-02-27 2019-09-03 华为技术有限公司 Data cached method, buffer memory device and storage system
CN110888603A (en) * 2019-11-27 2020-03-17 深圳前海环融联易信息科技服务有限公司 High-concurrency data writing method and device, computer equipment and storage medium



Also Published As

Publication number Publication date
CN111563052B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
USRE48736E1 (en) Memory system having high data transfer efficiency and host controller
CN111563052B (en) Caching method and device for reducing read delay, computer equipment and storage medium
CN111459844B (en) Data storage device and method for accessing logical-to-physical address mapping table
US11210020B2 (en) Methods and systems for accessing a memory
US10635356B2 (en) Data management method and storage controller using the same
US20200104067A1 (en) Method for fast boot read
CN111414132A (en) Main storage device with heterogeneous memory, computer system and data management method
CN110187832B (en) Data operation method, device and system
CN111290973A (en) Data writing method and device, computer equipment and storage medium
CN114063893A (en) Data storage device and data processing method
CN110737607B (en) Method and device for managing HMB memory, computer equipment and storage medium
TWI710905B (en) Data storage device and method for loading logical-to-physical mapping table
CN115794682A (en) Cache replacement method and device, electronic equipment and storage medium
CN114780448A (en) Method and device for quickly copying data, computer equipment and storage medium
CN111124314A (en) SSD performance improving method and device for mapping table dynamic loading, computer equipment and storage medium
CN111813703A (en) Data storage device and method for updating logical-to-physical address mapping table
CN110716887B (en) Hardware cache data loading method supporting write hint
CN108519860B (en) SSD read hit processing method and device
TWI820426B (en) Memory system and control method
CN111913662B (en) SLC writing performance improving method and device, computer equipment and storage medium
EP3916567A1 (en) Method for processing page fault by processor
CN113220608A (en) NVMe command processor and processing method thereof
CN112000591A (en) SSD (solid State disk) scanning method and device capable of appointing logical block address, computer equipment and storage medium
CN112612726B (en) Data storage method and device based on cache consistency, processing chip and server
CN115309668A (en) SDD writing performance optimization method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant