CN109783025B - Reading method and apparatus for page-granularity discrete distribution of sequential data
Abstract
The application relates to a reading method, apparatus, computer device, and storage medium for page-granularity discrete distribution of sequential data, wherein the method comprises the following steps: writing the sequential data according to a channel-priority principle and a page-granularity discrete distribution rule; dispersing the sequential-data read requests to each channel according to the address translation result; judging whether the request sequence within each channel is regular; and converting any irregular request sequence within a channel into a regular request sequence through a sequencer. The invention effectively improves the sequential read performance of a solid state disk by exploiting the Multi_Plane operating characteristic of NAND Flash.
Description
Technical Field
The invention relates to the technical field of solid state disks, and in particular to a reading method, apparatus, computer device, and storage medium for page-granularity discrete distribution of sequential data.
Background
Currently, with the development of solid-state-disk technology, in a solid state disk composed of multiple dice, sequential data distributed discretely at page granularity across the dice can effectively exploit the concurrency among the dice and the data-transfer capability of the channel bus.
In conventional designs, the write cache in firmware buffers the data to be written according to the page-granularity discrete distribution principle above. However, when the algorithm layer performs address translation, factors such as bad blocks can scatter the data irregularly, so that during a sequential read the requests reaching each Plane of a Die within a given time window are uneven, which degrades the sequential read performance of the solid state disk.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a reading method, apparatus, computer device, and storage medium for page-granularity discrete distribution of sequential data that can improve the sequential read performance of a solid state disk.
A method of reading a page-granularity discrete distribution of sequential data, the method comprising:
writing the sequential data according to a channel-priority principle and a page-granularity discrete distribution rule;
dispersing read requests for the sequential data to each channel according to the address translation result;
judging whether the request sequence within each channel is regular;
and converting any irregular request sequence within a channel into a regular request sequence through a sequencer.
In one embodiment, the step of converting an irregular request sequence within a channel into a regular request sequence through the sequencer further comprises:
the sequencer reordering the sequential-data read requests to match the physical layout according to a preset ordering mechanism, wherein the sequencer is implemented as a FIFO structure.
In one embodiment, the principles of the preset ordering mechanism include:
the preset ordering mechanism caches as few read requests as possible, to avoid the back end receiving a large burst of read requests;
the preset ordering mechanism minimizes CPU operations;
the preset ordering mechanism releases cached read requests as early as possible, to avoid increasing the interaction latency between the host side and the device side.
In one embodiment, the step of converting an irregular request sequence within a channel into a regular request sequence through the sequencer comprises:
acquiring the first page-data request corresponding to the current host command on a channel, and issuing it downstream directly without caching;
acquiring the second page-data request and caching it, with the Cache FIFO Pointer remaining at its original position;
acquiring the third page-data request, issuing it downstream, incrementing the Cache FIFO Pointer, and simultaneously issuing the cached second page-data request downstream;
acquiring the fourth page-data request, issuing it downstream, and incrementing the Cache FIFO Pointer;
and repeating the above steps until all requests on the channel have been issued downstream.
An apparatus for reading a page-granularity discrete distribution of sequential data, the apparatus comprising:
a distribution module for writing the sequential data according to the channel-priority principle and the page-granularity discrete distribution rule;
an address translation module for dispersing the sequential-data read requests to each channel according to the address translation result;
a judging module for judging whether the request sequence of the sequential-data read requests within a channel is regular;
and a sequencing module for converting irregular request sequences within the channels into regular request sequences through a sequencer.
In one embodiment, the sequencing module is further configured such that:
the sequencer reorders the sequential-data read requests to match the physical layout according to a preset ordering mechanism, wherein the sequencer is implemented as a FIFO structure.
In one embodiment, the principles of the preset ordering mechanism include:
the preset ordering mechanism caches as few read requests as possible, to avoid the back end receiving a large burst of read requests;
the preset ordering mechanism minimizes CPU operations;
the preset ordering mechanism releases cached read requests as early as possible, to avoid increasing the interaction latency between the host side and the device side.
In one embodiment, the sequencing module is further configured for:
acquiring the first page-data request corresponding to the current host command on a channel, and issuing it downstream directly without caching;
acquiring the second page-data request and caching it, with the Cache FIFO Pointer remaining at its original position;
acquiring the third page-data request, issuing it downstream, incrementing the Cache FIFO Pointer, and simultaneously issuing the cached second page-data request downstream;
acquiring the fourth page-data request, issuing it downstream, and incrementing the Cache FIFO Pointer;
and repeating the above steps until all requests on the channel have been issued downstream.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any of the above methods when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of any of the methods described above.
According to the above reading method, apparatus, computer device, and storage medium for page-granularity discrete distribution of sequential data, the sequential data is first written according to the channel-priority principle and the page-granularity discrete distribution rule; the sequential-data read requests are then dispersed to each channel according to the address translation result; whether the request sequence within each channel is regular is judged; and any irregular request sequence within a channel is converted into a regular request sequence through a sequencer. The invention thus effectively improves the sequential read performance of a solid state disk by exploiting the Multi_Plane operating characteristic of NAND Flash.
Drawings
FIG. 1 is a schematic diagram of Die structure in one embodiment;
FIG. 2 is a flowchart illustrating a method for reading a granular discrete distribution of sequential data pages according to one embodiment;
FIG. 3 is a schematic illustration of data distribution in one embodiment;
FIG. 4 is a flow diagram illustrating the steps of the principles of a predetermined ordering mechanism in one embodiment;
FIG. 5 is a flow diagram illustrating the steps of converting an irregular sequence of requests within a channel into a regular sequence of requests by a sequencer in one embodiment;
FIG. 6 is a diagram of a sorter that normalizes sequences of data requests in one embodiment;
FIG. 7 is a diagram illustrating the operation of the sequencer in one embodiment;
FIG. 8 is a block diagram of an embodiment of a read device with a discrete distribution of granularity for sequential data pages;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to FIG. 1, the firmware design optimizes the read-request sequence at the NAND Flash end, exploiting the features provided by the NAND Flash to improve read performance as much as possible. The Multi_Plane operating characteristic is one such feature.
Multi_Plane means issuing a read command simultaneously to a Block under Plane 0 and a Block under Plane 1 of a given Die, so that the request addresses of both planes are prepared within a single tR (data preparation time). Compared with Single_Plane operation, the two data reads save one tR. Multi_Plane requires that the WordLine and Page of the two planes be consistent; there is no requirement on the Block.
In addition, in terms of solid-state-disk structure, each channel has an independent data bus and an independent command bus; more than one Die may share the data bus under a channel, and the Dies are independent with no coupling, but the bus can be used by only one Die at a time. The design should therefore disperse the read-request sequence across the channels as much as possible, distributing the page-granularity data corresponding to NAND Flash read commands discretely according to the channel-priority principle.
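The tR saving described above can be illustrated with a small timing model. The numbers below (tR, bus transfer time) are hypothetical values chosen for arithmetic only, not figures from the patent:

```python
# Illustrative timing model of Multi_Plane vs. Single_Plane reads.
# tR and t_xfer are assumed values; real timings depend on the NAND part.
tR = 50       # data preparation (sense) time per read command, in microseconds
t_xfer = 10   # bus transfer time per page, in microseconds

# Single_Plane: each of the two pages pays its own tR before its transfer.
single_plane_us = 2 * tR + 2 * t_xfer   # 120 us

# Multi_Plane: both planes sense concurrently, so one tR covers two pages.
multi_plane_us = tR + 2 * t_xfer        # 70 us

# The saving for a pair of page reads is exactly one tR.
assert single_plane_us - multi_plane_us == tR
```

Whatever the actual part timings, the structural point holds: pairing two plane reads removes one full tR from every pair, which is why the distribution and sequencing described below try to keep Multi_Plane partners adjacent.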
In one embodiment, as shown in FIG. 2, there is provided a method for reading a granular discrete distribution of sequential data pages, the method comprising:
step 202, writing the sequential data according to the channel-priority principle and the page-granularity discrete distribution rule;
step 204, dispersing the sequential-data read requests to each channel according to the address translation result;
step 206, judging whether the request sequence within each channel is regular;
step 208, converting any irregular request sequence within a channel into a regular request sequence through the sequencer.
Specifically, FIG. 3 shows a channel-priority, page-granularity distribution of sequential data. The position of channel 1 in the Super Block contains bad blocks, which causes an unexpected data distribution on channel 0, while the data on channels 2-7 remains regular (i.e., in a data request sequence A-B-C-D-E-F, the pairs A and B, C and D, E and F each satisfy the Multi_Plane property and share one tR). The data of channel 0 takes on an unexpected distribution pattern because the bad blocks of channel 1 were culled during writing.
The page-granularity discrete distribution of sequential data is carried out according to the channel-priority principle. In a design where a channel contains more than one Die, the channel-priority principle improves data parallelism and thus sequential read performance. Based on the operating characteristics of NAND Flash, the Multi_Plane characteristic is applied as much as possible so that reading 2 pages of data consumes only one tR. The page-granularity discrete distribution relies on the organization of the Write Cache and the address translation of the algorithm layer. The specific implementation is shown in FIG. 3.
The Write Cache organization combines the write characteristics of NAND Flash with the sequential-read behavior of the host.
An NVMe sequential read command carries at most 128 KB of data. One column of the Write Cache represents the data composition of one sequential read command. Horizontally, the Write Cache exploits the Multi_Plane and One-Pass-Program characteristics of TLC NAND Flash, balancing safe writing against efficient writing.
The data buffered in the Write Cache is flushed as a whole into the NAND Flash medium after address translation by the algorithm layer; the physical Block at the position of Super Block A, channel 1 is a bad block, and the algorithm layer removes it during writing.
Sequential reads initiate requests in the order 0-1-2-3 … 45-46-47 …, each request corresponding to one page. Die operations on different channels and the bus transfers of the channels are completely independent and can execute concurrently.
Taking channel 2 as an example: within the request interval 0-47, 1-9-17-25-33-41 is its sequence of 6 page-data requests; the pairs 1 and 9, 17 and 25, 33 and 41 each have consistent WordLine and Page and belong to the two Planes of the same Die, so the Multi_Plane operating characteristic of NAND Flash can be used and each pair consumes only one tR. That is, when request 1 is initiated, the following request is predicted to be 9 and its physical address can be determined, so the data of requests 1 and 9 are prepared simultaneously and cached in the per-Plane data buffers inside the NAND Flash; when request 17 is initiated, the data of requests 17 and 25 can be prepared at the same time. Channel 2 exhibits the ideal data distribution.
Taking channel 0: within the request interval 0-47, its page-data request sequence is 0-7-8-15-16-23-24-31-32-39-40-47; the pairs 0 and 8, 7 and 15, 16 and 24, 23 and 31, 32 and 40, 39 and 47 each have consistent WordLine and Page and belong to the two Planes of the same Die. When request 0 is initiated, the data of requests 0 and 8 are prepared together; when request 7 is initiated, the data of requests 7 and 15 are prepared together. However, a request sequence such as 0-7-8-15 … causes the pre-cached data of 8, 15, 24, 31 …, which belongs to Plane 1, to be invalidated.
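The pairing rule on the ideal channel (partner pages differ by 8, same page offset, opposite planes) can be checked with a small address model. This is a sketch under stated assumptions: one Die per channel, two planes, pages striped channel-first; the `locate` helper and its nominal channel indexing are illustrative, not the patent's actual address-translation layer (in FIG. 3 the channel holding pages 1, 9, … is labeled channel 2):

```python
NUM_CHANNELS = 8  # assumed channel count, matching the 0-47 stripe of FIG. 3

def locate(page):
    """Map a logical page to (channel, plane, page_offset) under an idealized
    channel-priority layout: hypothetical model, single Die per channel."""
    channel = page % NUM_CHANNELS
    stripe = page // NUM_CHANNELS   # how many full channel rounds precede it
    plane = stripe % 2              # alternate planes within the Die
    offset = stripe // 2            # page offset inside each plane
    return channel, plane, offset

def multiplane_pair(a, b):
    """True when two pages can share one tR: same channel (hence same Die),
    opposite planes, and the same page offset (WordLine/Page consistent)."""
    ca, pa, oa = locate(a)
    cb, pb, ob = locate(b)
    return ca == cb and pa != pb and oa == ob

# The pairs named in the channel-2 analysis are all Multi_Plane eligible.
assert all(multiplane_pair(a, b) for a, b in [(1, 9), (17, 25), (33, 41)])
# Pages on different channels can never pair.
assert not multiplane_pair(0, 7)
```

Note that this models only the regular case; the bad-block culling described for channel 0 shifts the real physical layout away from this ideal mapping, which is exactly the irregularity the sequencer later repairs.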
Finally, any irregular distribution that the channel-priority, page-granularity distribution of sequential data may cause is resolved through the cooperation of the sequencer.
In this embodiment, the sequential data is written according to the channel-priority principle and the page-granularity discrete distribution rule; the sequential-data read requests are dispersed to each channel according to the address translation result; whether the request sequence within each channel is regular is judged; and any irregular request sequence within a channel is converted into a regular request sequence through a sequencer. The embodiment thus effectively improves the sequential read performance of the solid state disk by exploiting the Multi_Plane operating characteristic of NAND Flash.
In one embodiment, the step of converting an irregular request sequence within a channel into a regular request sequence through the sequencer further comprises:
the sequencer reordering the sequential-data read requests to match the physical layout according to a preset ordering mechanism, wherein the sequencer is implemented as a FIFO structure.
In one specific embodiment, referring to FIG. 4, a method for reading a page-granularity discrete distribution of sequential data is provided, wherein the principles of the preset ordering mechanism include:
step 402, the preset ordering mechanism caches as few read requests as possible, to avoid the back end receiving a large burst of read requests;
step 404, the preset ordering mechanism minimizes CPU operations;
step 406, the preset ordering mechanism releases cached read requests as early as possible, to avoid increasing the interaction latency between the host side and the device side.
Specifically, referring to FIG. 3, for channel 0 with its irregular data distribution, in order to effectively exploit the Multi_Plane operating characteristic of NAND Flash and improve sequential read performance, a mechanism is designed to convert channel 0's page-data request sequence 0-7-8-15-16-23-24-31-32-39-40-47 into the sequence 0-8-7-15-16-24-23-31-32-40-39-47, i.e., to reorder the sequential-data read requests to match the physical layout.
The design of the ordering mechanism follows these principles: the mechanism caches as few read requests as possible, so that the back end is not hit by a large burst of read requests; it minimizes CPU operations; and it releases cached read requests as early as possible, so as not to increase the interaction latency between the host side and the device side.
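The target of this conversion can be stated declaratively: place every plane-0 page immediately before its plane-1 partner. The sketch below expresses that as an offline sort; it is an illustration of the desired result only, since the patent's sequencer achieves the same order on-the-fly with a FIFO rather than by sorting, and the stride of 8 is an assumption tied to the 8-channel layout of FIG. 3:

```python
def pair_id(page, stride=8):
    """Index of the Multi_Plane pair a page belongs to: pages n and
    n + stride are partners on the two planes of one Die (assumed layout)."""
    return page % stride + (page // (2 * stride)) * stride

def reorder(requests):
    # Sort so partner pages become adjacent, lower page of each pair first;
    # each adjacent pair can then share a single tR.
    return sorted(requests, key=lambda p: (pair_id(p), p))

irregular = [0, 7, 8, 15, 16, 23, 24, 31, 32, 39, 40, 47]
print(reorder(irregular))
# → [0, 8, 7, 15, 16, 24, 23, 31, 32, 40, 39, 47]
```

The output matches the target sequence named in the text, confirming that the reordering is exactly "group each request with its Multi_Plane partner".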
In one specific embodiment, referring to FIG. 5, a method for reading a page-granularity discrete distribution of sequential data is provided, wherein the step of converting an irregular request sequence within a channel into a regular request sequence through the sequencer comprises:
step 502, acquiring the first page-data request corresponding to the current host command on a channel, and issuing it downstream directly without caching;
step 504, acquiring the second page-data request and caching it, with the Cache FIFO Pointer remaining at its original position;
step 506, acquiring the third page-data request, issuing it downstream, incrementing the Cache FIFO Pointer, and simultaneously issuing the cached second page-data request downstream;
step 508, acquiring the fourth page-data request, issuing it downstream, and incrementing the Cache FIFO Pointer;
step 510, repeating the above steps until all requests on the channel have been issued downstream.
Specifically, FIG. 7 is a typical illustration of the irregular channel-0 data distribution caused by a bad-block scenario. In a real system, besides bad blocks, irregular sequential data distribution can also result from an incomplete Write Cache flush to NAND Flash or from mixed sequential and random writes.
The sequencer shown in FIG. 6 is implemented as a FIFO structure, and FIG. 7 shows its operating mechanism for channel 0. A concrete example: the sequencer converts the page-data request sequence 0-7-8-15-16-23-24-31-32-39-40-47 into the sequence 0-8-7-15-16-24-23-31-32-40-39-47.
First, request 0, the first page-data request of the current host command on channel 0, is issued downstream directly without caching.
Second, request 7, the second page-data request, is cached, and the Cache FIFO Pointer remains at its original position.
Third, request 8, the third page-data request, is issued downstream, the Cache FIFO Pointer is incremented, and the cached request 7 is issued downstream at the same time.
Fourth, request 15, the fourth page-data request, is issued downstream, and the Cache FIFO Pointer is incremented.
…
This process repeats until all requests on channel 0 have been issued downstream. The sequencer's Cache FIFO Pointer is incremented, and cached entries are released, on the Plane 1 requests.
The cached page-data requests 7, 23, and 39 reside only briefly and are issued immediately after their respective partner requests 8, 24, and 40, minimizing response latency. All other requests are issued downstream immediately without caching, which keeps the number of cached requests small. Because the sequencer is implemented as a FIFO structure, the sorting computation required by alternative schemes is avoided entirely.
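The four-step cycle above can be modeled as a short stream transformation. This is a minimal sketch: it keys the issue/cache decision on each request's position within a group of four, a simplification of the real mechanism, which keys on Plane-0/Plane-1 physical addresses; the function name `sequencer` is illustrative:

```python
from collections import deque

def sequencer(requests):
    """Model of the FIFO sequencer cycle: issue, cache, issue + release,
    issue. At most one request is buffered at any moment."""
    out, fifo = [], deque()
    for i, req in enumerate(requests):
        step = i % 4
        if step == 0:                 # first of the group: issue directly
            out.append(req)
        elif step == 1:               # second: cache it; pointer unchanged
            fifo.append(req)
        elif step == 2:               # third: issue, then release the cached
            out.append(req)           # second request right behind it
            out.append(fifo.popleft())
        else:                         # fourth: issue directly
            out.append(req)
    return out

print(sequencer([0, 7, 8, 15, 16, 23, 24, 31, 32, 39, 40, 47]))
# → [0, 8, 7, 15, 16, 24, 23, 31, 32, 40, 39, 47]
```

Running this on channel 0's irregular sequence reproduces the regular sequence from the text, with requests 7, 23, and 39 each buffered only until their partners 8, 24, and 40 have been issued.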
It should be understood that although the steps in the flowcharts of FIGS. 2-5 are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in FIGS. 2-5 may include multiple sub-steps or stages that are not necessarily performed at the same moment but at different moments, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 8, there is provided a reading apparatus 800 for page-granularity discrete distribution of sequential data, the apparatus comprising:
a distribution module 801 for writing the sequential data into the Flash medium according to the channel-priority principle and the page-granularity discrete distribution rule;
an address translation module 802 for dispersing the sequential-data read requests to each channel according to the address translation result;
a judging module 803 for judging whether the request sequence within a channel is regular;
and a sequencing module 804 for converting irregular request sequences within the channels into regular request sequences through the sequencer.
In one embodiment, the sequencing module 804 is further configured such that:
the sequencer reorders the sequential-data read requests to match the physical layout according to a preset ordering mechanism, wherein the sequencer is implemented as a FIFO structure.
In one embodiment, the principles of the preset ordering mechanism include:
the preset ordering mechanism caches as few read requests as possible, to avoid the back end receiving a large burst of read requests;
the preset ordering mechanism minimizes CPU operations;
the preset ordering mechanism releases cached read requests as early as possible, to avoid increasing the interaction latency between the host side and the device side.
In one embodiment, the sequencing module 804 is further configured for:
acquiring the first page-data request corresponding to the current host command on a channel, and issuing it downstream directly without caching;
acquiring the second page-data request and caching it, with the Cache FIFO Pointer remaining at its original position;
acquiring the third page-data request, issuing it downstream, incrementing the Cache FIFO Pointer, and simultaneously issuing the cached second page-data request downstream;
acquiring the fourth page-data request, issuing it downstream, and incrementing the Cache FIFO Pointer;
and repeating the above steps until all requests on the channel have been issued downstream.
For specific limitations of the reading apparatus for page-granularity discrete distribution of sequential data, reference may be made to the limitations of the corresponding reading method above; details are not repeated here.
In one embodiment, a computer device is provided, the internal structure of which may be as shown in FIG. 9. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of reading a discrete distribution of granularity of sequential data pages.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method embodiments when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the above respective method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (8)
1. A method of reading a page-granularity discrete distribution of sequential data, the method comprising:
writing the sequential data according to a channel-priority principle and a page-granularity discrete distribution rule;
dispersing read requests for the sequential data to each channel according to the address translation result;
judging whether the request sequence within each channel is regular;
converting any irregular request sequence within a channel into a regular request sequence through a sequencer;
wherein the step of converting an irregular request sequence within a channel into a regular request sequence through the sequencer comprises: acquiring the first page-data request corresponding to the current host command on a channel, and issuing it downstream directly without caching; acquiring the second page-data request and caching it, with the Cache FIFO Pointer remaining at its original position; acquiring the third page-data request, issuing it downstream, incrementing the Cache FIFO Pointer, and simultaneously issuing the cached second page-data request downstream; acquiring the fourth page-data request, issuing it downstream, and incrementing the Cache FIFO Pointer; and repeating the above steps until all requests on the channel have been issued downstream.
2. The method of claim 1, wherein the step of converting an irregular request sequence within a channel into a regular request sequence through the sequencer further comprises:
the sequencer reordering the sequential-data read requests to match the physical layout according to a preset ordering mechanism, wherein the sequencer is implemented as a FIFO structure.
3. The method as claimed in claim 2, wherein the principles of the preset ordering mechanism comprise:
the preset ordering mechanism caches as few read requests as possible, so that the back end does not receive a large burst of read requests;
the preset ordering mechanism minimizes CPU operations;
the preset ordering mechanism releases cached read requests as early as possible, so as not to increase the interaction latency between the host side and the device side.
4. An apparatus for reading a page-granularity discrete distribution of sequential data, the apparatus comprising:
a distribution module for writing the sequential data according to the channel-priority principle and the page-granularity discrete distribution rule;
an address translation module for dispersing the sequential-data read requests to each channel according to the address translation result;
a judging module for judging whether the request sequence of the sequential-data read requests within a channel is regular;
a sequencing module for converting irregular request sequences within the channels into regular request sequences through a sequencer;
wherein the sequencing module is further configured for: acquiring the first page-data request corresponding to the current host command on a channel, and issuing it downstream directly without caching; acquiring the second page-data request and caching it, with the Cache FIFO Pointer remaining at its original position; acquiring the third page-data request, issuing it downstream, incrementing the Cache FIFO Pointer, and simultaneously issuing the cached second page-data request downstream; acquiring the fourth page-data request, issuing it downstream, and incrementing the Cache FIFO Pointer; and repeating the above steps until all requests on the channel have been issued downstream.
5. The device for reading sequential data pages distributed discretely at page granularity as recited in claim 4, wherein the sequencing module is further configured to:
reorder, through the sequencer, the sequential data read requests to meet physical-layout requirements according to a preset ordering mechanism, wherein the sequencer is implemented as a FIFO structure.
6. The device for reading sequential data pages distributed discretely at page granularity according to claim 5, wherein the principles of the preset ordering mechanism comprise:
the preset ordering mechanism caches as few read requests as possible, so as to prevent the back end from receiving a large burst of read requests;
the preset ordering mechanism reduces CPU overhead;
the preset ordering mechanism releases cached read requests as early as possible, so as to avoid increasing the interaction latency between the host and the device.
7. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 3.
8. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910022651.3A CN109783025B (en) | 2019-01-10 | 2019-01-10 | Reading method and device for granularity discrete distribution of sequential data page |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109783025A CN109783025A (en) | 2019-05-21 |
CN109783025B true CN109783025B (en) | 2022-03-29 |
Family
ID=66500387
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910022651.3A Active CN109783025B (en) | 2019-01-10 | 2019-01-10 | Reading method and device for granularity discrete distribution of sequential data page |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109783025B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1259214A (en) * | 1997-04-07 | 2000-07-05 | 英特尔公司 | Method and apparatus for reordering commands and restoring data to original command order |
CN101246460A (en) * | 2008-03-10 | 2008-08-20 | 华为技术有限公司 | Caching data writing system and method, caching data reading system and method |
CN106339326A (en) * | 2016-08-26 | 2017-01-18 | 记忆科技(深圳)有限公司 | Method for improving sequential read performance of solid state disk (SSD) |
CN107273304A (en) * | 2017-05-24 | 2017-10-20 | 记忆科技(深圳)有限公司 | A kind of method and solid state hard disc for improving solid state hard disc order reading performance |
CN107728953A (en) * | 2017-11-03 | 2018-02-23 | 记忆科技(深圳)有限公司 | A kind of method for lifting solid state hard disc mixing readwrite performance |
CN107924300A (en) * | 2015-08-13 | 2018-04-17 | 微软技术许可有限责任公司 | Use buffer and the data reordering of memory |
CN108920387A (en) * | 2018-06-06 | 2018-11-30 | 深圳忆联信息系统有限公司 | Reduce method, apparatus, computer equipment and the storage medium of read latency |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20170364280A1 (en) | Object storage device and an operating method thereof | |
US20220066693A1 (en) | System and method of writing to nonvolatile memory using write buffers | |
WO2020169065A1 (en) | Pre-reading method and apparatus based on memory limited ssd, and computer device | |
US10055150B1 (en) | Writing volatile scattered memory metadata to flash device | |
CN108139994B (en) | Memory access method and memory controller | |
US20130103893A1 (en) | System comprising storage device and related methods of operation | |
US11461028B2 (en) | Memory writing operations with consideration for thermal thresholds | |
US12033700B2 (en) | Memory system and method of controlling nonvolatile memory | |
DE112017005782T5 (en) | Queue for storage operations | |
US11327929B2 (en) | Method and system for reduced data movement compression using in-storage computing and a customized file system | |
CN114077552A (en) | Memory access tracking for host resident translation layer | |
CN109783025B (en) | Reading method and device for granularity discrete distribution of sequential data page | |
CN109542346A (en) | Dynamic data cache allocation method, device, computer equipment and storage medium | |
CN113986773A (en) | Write amplification optimization method and device based on solid state disk and computer equipment | |
CN114115745B (en) | RAID optimization method and device for multi-Pass programming NAND and computer equipment | |
US20190266123A1 (en) | Memory system and data processing system including the memory system | |
CN113821465A (en) | SRAM-based AXI (advanced extensible interface) control method and device and computer equipment | |
CN113704027B (en) | File aggregation compatible method and device, computer equipment and storage medium | |
US20220189518A1 (en) | Method and apparatus and computer program product for reading data from multiple flash dies | |
CN114168225A (en) | Method and device for delaying updating of solid state disk mapping table, computer equipment and storage medium | |
CN114327274B (en) | Mapping table loading checking method and device based on solid state disk and computer equipment | |
WO2024146550A2 (en) | Hybrid ssd, performance optimization method and apparatus thereof, device and storage medium | |
US20240168801A1 (en) | Ensuring quality of service in multi-tenant environment using sgls | |
CN114415944A (en) | Solid state disk physical block management method and device, computer equipment and storage medium | |
US11615826B1 (en) | Dual-address command management using content addressable memory |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||