CN110515859B - Parallel processing method for read-write requests of solid state disk - Google Patents

Parallel processing method for read-write requests of solid state disk

Info

Publication number
CN110515859B
CN110515859B (application CN201910614795.8A)
Authority
CN
China
Prior art keywords
channel
queue
read
request
page
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910614795.8A
Other languages
Chinese (zh)
Other versions
CN110515859A (en)
Inventor
姚英彪
孔小冲
范金龙
冯维
许晓荣
刘兆霆
徐欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201910614795.8A priority Critical patent/CN110515859B/en
Publication of CN110515859A publication Critical patent/CN110515859A/en
Application granted granted Critical
Publication of CN110515859B publication Critical patent/CN110515859B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0246 Memory management in non-volatile memory, in block erasable memory, e.g. flash memory
    • G06F 12/0253 Garbage collection, i.e. reclamation of unreferenced memory
    • G06F 12/0882 Page mode (cache access modes)
    • G06F 12/0895 Caches characterised by their organisation or structure of parts of caches, e.g. directory or tag array

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System (AREA)

Abstract

The invention discloses a method for parallel processing of solid state disk read-write requests. The design consists of an address mapping module, a channel allocation module, a channel queue module, a pending queue module, a data buffer, and a garbage collection module. During preprocessing, the channel corresponding to each read or write request is computed and the request is inserted into the read or write queue of that channel. During flash access, requests are scheduled in parallel from the read-write queues of all channels in order of arrival. For garbage collection, active and passive collection are combined to reduce the impact of garbage collection on the read-write performance of the solid state disk. By exploiting the channel-level parallel structure inside the solid state disk and scheduling at page granularity, the invention maximizes the parallelism of data reads and writes, greatly reducing average request response time and improving overall system performance.

Description

Parallel processing method for read-write requests of solid state disk
Technical Field
The invention belongs to the field of solid state disk firmware optimization, and specifically relates to a method for parallel processing of solid state disk read-write requests.
Background
With the rapid development of new-generation information technologies such as cloud computing and the mobile internet, data volumes are growing exponentially, placing higher demands on storage speed and bandwidth. Solid state disks based on NAND flash memory have become one of the mainstream storage devices thanks to their high performance, low power consumption, and silent operation.
The NAND flash memory used in solid state disks has the following characteristics: 1) It is organized hierarchically, from small to large, into pages, blocks, planes, and dies. 2) The basic operations are read, write, and erase; reads and writes are performed in units of pages, while erases are performed in units of blocks. 3) The three operations have different response times: reads are fastest, writes slower, and erases slowest. 4) A block must be erased before data can be written to it again; in-place updates are not supported. 5) The number of erase cycles is limited; beyond a certain count, the performance of the NAND flash degrades sharply and its service life ends.
Because of these unique operating characteristics of flash memory chips, a Flash Translation Layer (FTL) must be added to the firmware to manage the many flash chips. The FTL presents the solid state disk to the upper-layer file system as a traditional hard disk supporting only read and write operations, and converts file-system read-write requests into flash operation commands. Specifically, the FTL maps logical addresses to physical addresses, performs wear leveling across the flash chips, and garbage-collects invalid data.
The solid state disk has a rich internal parallel structure. A typical solid state disk consists of multiple independent channels, each connected to several independent flash chips; each chip contains multiple dies, each die multiple planes, each plane multiple blocks, and each block multiple pages. The solid state disk therefore offers channel-level, chip-level, die-level, and plane-level parallelism. If the flash translation layer exploits this internal parallelism so that read-write requests access flash in parallel, request response time can be greatly reduced.
Disclosure of Invention
To address the shortcomings of the prior art, the invention discloses a method for processing solid state disk read-write requests in parallel. Exploiting the channel-level parallel structure inside the solid state disk and scheduling at page granularity, the invention maximizes the parallelism of data reads and writes, greatly reducing average request response time and improving overall system performance.
To achieve this goal, the invention adopts the following technical scheme:
a parallel processing method for read-write requests of solid state disks comprises an address mapping module, a channel distribution module, a channel queue module, a queue module to be processed, a data buffer area and a garbage recovery module. The address mapping module is responsible for realizing the mapping from the logical address to the physical address of the request, adopts a page-level mapping mode and consists of an address mapping table and a page allocation module; the channel allocation module is responsible for calculating the channel which is actually required to be accessed by each page in the request; the channel queue module consists of queues of each channel, the queues of each channel are divided into a read request queue and a write request queue, and read-write access requests are inserted into the read-write queues of the channels by taking pages as units; the queue to be processed stores the requests of which the channel allocation is completed but the actual flash memory reading and writing are not completely completed, and the scheduling of the queue to be processed is completed in a first-in first-out mode; the data buffer area is used for temporarily storing the data read out from the flash memory; the garbage recycling module is responsible for recycling the invalid blocks of the solid state disk and completing garbage recycling operation.
The method comprises three processes: preprocessing, flash access, and garbage collection. The three processes can execute in parallel.
When a new read-write request arrives and the pending queue is not full, the preprocessing flow is triggered. It works as follows:
P1. Determine the type of the arriving request: for a read request, execute P2; for a write request, execute P4.
P2. Call the address mapping module and look up the address mapping table to obtain the physical page number (PPN) of each logical page of the read request.
P3. Call the channel allocation module to determine the flash channel of each page from its PPN, insert each page at the tail of the read queue of that channel, and execute P6.
P4. Call the page allocation module of the address mapping module to allocate a new physical page number PPN to each page of the write request using an out-of-place update strategy, update the address mapping table, and execute P5.
P5. Call the channel allocation module to determine the flash channel of each page from its PPN, insert each page at the tail of the write queue of that channel, and finally execute P6.
P6. Insert the request at the tail of the pending queue; preprocessing ends.
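The steps P1 to P6 above can be sketched in code. This is an illustrative Python sketch, not the patent's implementation: the data-structure names (`mapping_table`, `read_queues`, `pending_queue`) are assumptions, and `allocate_page` is a hypothetical round-robin toy standing in for the P41-P43 wear-aware policy described later.

```python
from collections import deque

NUM_CHANNELS = 4
PAGES_PER_CHANNEL = 128

mapping_table = {}   # logical page number -> physical page number (page-level mapping)
read_queues = [deque() for _ in range(NUM_CHANNELS)]
write_queues = [deque() for _ in range(NUM_CHANNELS)]
pending_queue = deque()   # FIFO pending queue (P6)

# Hypothetical toy allocator: round-robin across channels so consecutive
# pages land on different channels (stand-in for the real P41-P43 policy).
_next_free = [ch * PAGES_PER_CHANNEL + 1 for ch in range(NUM_CHANNELS)]
_rr = [0]

def allocate_page(lpn):
    ch = _rr[0] % NUM_CHANNELS
    _rr[0] += 1
    ppn = _next_free[ch]
    _next_free[ch] += 1
    return ppn

def channel_of(ppn):
    # Channel allocation: pages 1..128 -> channel 0, 129..256 -> channel 1, ...
    return (ppn - 1) // PAGES_PER_CHANNEL

def preprocess(request):
    kind, logical_pages = request          # e.g. ("R", [78, 79])
    if kind == "R":                        # P1 -> P2/P3: read request
        for lpn in logical_pages:
            ppn = mapping_table[lpn]       # P2: look up the address mapping table
            read_queues[channel_of(ppn)].append(ppn)   # P3: tail of channel read queue
    else:                                  # P1 -> P4/P5: write request
        for lpn in logical_pages:
            ppn = allocate_page(lpn)       # P4: out-of-place update, new PPN
            mapping_table[lpn] = ppn
            write_queues[channel_of(ppn)].append(ppn)  # P5: tail of channel write queue
    pending_queue.append(request)          # P6: tail of the pending queue
```

Note that a read request only consults the mapping table, while a write request always allocates a fresh physical page, which is what makes the later garbage collection necessary.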
When the pending queue is not empty, the flash access flow is triggered. It works as follows:
A1. Check the type of the request at the head of the pending queue: for a read request, execute A2; otherwise, execute A5.
A2. Check whether the read request at the head of the pending queue has completed: if so, execute A4; otherwise, execute A3.
A3. Read the page at the head of every channel's read queue from flash into the data buffer, remove those head entries, and execute A2.
A4. Merge the data belonging to the same request in the data buffer, respond to the upper-layer file system, remove the head request from the pending queue, and end the flash access operation.
A5. Check whether the write request at the head of the pending queue has completed: if so, execute A7; otherwise, execute A6.
A6. Write the entry at the head of every channel's write queue to the corresponding physical page of that channel, remove those head entries, and execute A5.
A7. Respond to the upper-layer file system, remove the head request from the pending queue, and end the flash access operation.
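A minimal simulation of the flash access loop A1 to A7, under stated assumptions: each pending-queue entry records the physical pages of its request, `flash` is a plain dict standing in for the flash array, and each pass over the channel queues models one round of channel-parallel operations (real hardware issues these concurrently).

```python
from collections import deque

# Simulated flash: physical page number -> stored data (an assumption for the sketch).
flash = {ppn: f"data{ppn}" for ppn in range(1, 513)}

def flash_access(pending_queue, read_queues, write_queues):
    """Drive steps A1-A7 until the pending queue empties."""
    data_buffer = {}   # pages read from flash, awaiting merging (A3/A4)
    written = set()    # pages already committed to flash (A6)
    responses = []     # what is returned to the upper-layer file system
    while pending_queue:
        kind, ppns = pending_queue[0]                        # A1: head request
        if kind == "R":
            while not all(p in data_buffer for p in ppns):   # A2: request done?
                for q in read_queues:                        # A3: one page per channel
                    if q:
                        p = q.popleft()
                        data_buffer[p] = flash[p]
            responses.append([data_buffer[p] for p in ppns]) # A4: merge and respond
        else:
            while not all(p in written for p in ppns):       # A5: request done?
                for q in write_queues:                       # A6: one page per channel
                    if q:
                        p = q.popleft()
                        flash[p] = "new"
                        written.add(p)
            responses.append("ack")                          # A7: respond upstream
        pending_queue.popleft()                              # head request complete
    return responses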
The garbage collection process is handled by the garbage collection module, which combines active and passive garbage collection. When the pending queue is empty and the number of free blocks in some channel falls below a threshold TH1, active garbage collection is triggered. When a flash access operation is in progress and the number of free blocks in some channel falls below a threshold TH2, the flash access operation is temporarily suspended and passive garbage collection is triggered. The garbage collection operation itself selects the data block with the most invalid pages in the channel as the victim block, migrates its valid pages to a free block in the same channel, and erases the victim. In addition, to minimize triggering of passive garbage collection, TH1 should be greater than TH2.
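The trigger condition and victim selection can be sketched as below. The threshold values, the `Block` type, and the function names are assumptions for illustration; the patent only requires TH1 > TH2 and victim selection by maximum invalid-page count within a channel.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    valid_pages: list = field(default_factory=list)
    invalid_pages: list = field(default_factory=list)

def gc_triggered(pending_empty, free_blocks, TH1=8, TH2=4):
    # Active GC: pending queue empty and free blocks in the channel below TH1.
    # Passive GC: flash access in progress and free blocks below TH2.
    # TH1 > TH2 keeps the disruptive passive case rare.
    if pending_empty:
        return free_blocks < TH1
    return free_blocks < TH2

def collect(channel_blocks, free_block):
    # Victim selection: the data block with the most invalid pages in this channel.
    victim = max(channel_blocks, key=lambda b: len(b.invalid_pages))
    free_block.valid_pages.extend(victim.valid_pages)   # migrate valid pages
    victim.valid_pages.clear()                          # (within the same channel)
    victim.invalid_pages.clear()                        # erase the victim block
    return victim
```

Keeping migration within the same channel means garbage collection in one channel never consumes bandwidth on another, preserving channel-level parallelism.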
Further, the page allocation method for write requests in preprocessing step P4 proceeds as follows:
P41. Compute the erase count of each channel's flash blocks and sort the channels by erase count in ascending order;
P42. Express the number of physical pages N required by the write request as N = n × C + q, where C is the number of channels, n is the quotient of N/C, and q is the remainder;
P43. Following the policy of preferentially assigning writes to the least-erased channels while maximizing channel-parallel writing, first assign n page writes to every channel to achieve maximum parallel writing, then assign the remaining q page writes to the q channels with the fewest erases to balance wear across channels.
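The P41 to P43 policy can be sketched in a few lines. The function name and the list-based return format (pages assigned per channel) are assumptions for illustration.

```python
def allocate_writes(num_pages, erase_counts):
    """P41-P43: distribute N page writes over C channels as N = n*C + q:
    n pages to every channel, plus one extra page to each of the q
    channels with the fewest erases."""
    C = len(erase_counts)
    n, q = divmod(num_pages, C)                                 # P42: N = n*C + q
    order = sorted(range(C), key=lambda ch: erase_counts[ch])   # P41: ascending wear
    alloc = [n] * C                                             # P43: n pages per channel
    for ch in order[:q]:            # remainder to the q least-worn channels
        alloc[ch] += 1
    return alloc
```

For example, with C = 4 channels and a 5-page write, every channel receives one page and the least-erased channel receives the fifth, so the write proceeds at full channel parallelism while wear stays balanced.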
Compared with the prior art, the invention has the following beneficial effects:
The parallel processing method maximizes the parallelism of read-write requests by exploiting the channel-level parallel structure of the solid state disk. During preprocessing, the channel of each read or write request is computed and the request is inserted into the read or write queue of that channel. During flash access, requests are scheduled in parallel from the read-write queues of all channels in order of arrival. For garbage collection, active and passive collection are combined to reduce the impact of garbage collection on the read-write performance of the solid state disk. In addition, channel allocation for write requests takes both channel parallelism and wear leveling into account. Compared with existing flash translation layer techniques, the disclosed method greatly reduces request execution time and improves the read-write performance of the solid state disk.
Drawings
FIG. 1 is the overall framework of the invention.
FIG. 2 is a flow chart of the preprocessing flow of the invention.
FIG. 3 is a flow chart of the flash access flow of the invention.
FIG. 4 is a concrete example of the preprocessing flow of the invention.
FIG. 5 is a concrete example of the flash access flow of the invention.
Detailed Description
To help those skilled in the art better understand the technical solution of the invention, the invention is described in detail below with reference to the accompanying drawings. The described embodiments are merely some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from these embodiments without inventive effort fall within the scope of the invention.
In addition, the invention assumes that the minimum unit in which the upper-layer file system reads and writes the solid state disk is one flash page, i.e., whole pages are read or written. In a real system the minimum unit is one sector; sector accesses can be converted to page accesses through a simple alignment operation, which is not described here.
As shown in FIG. 1, the method for parallel processing of solid state disk read-write requests involves an address mapping module, a channel allocation module, a channel queue module, a pending queue module, a data buffer, and a garbage collection module.
The address mapping module maps each request's logical addresses to physical addresses using page-level mapping; it consists of an address mapping table and a page allocation module. The address mapping table records the mapping between logical pages and physical pages, and the page allocation module allocates new physical pages to write requests. When a read request arrives, the physical page of each logical page is obtained from the address mapping table; when a write request arrives, the page allocation module allocates a new physical page to each logical page of the request and updates the address mapping table.
The channel allocation module computes the channel that each page of a request actually needs to access. From the physical page number obtained by the address mapping module, it computes the channel of each page and inserts the page into the read or write queue of that channel.
The channel queue module consists of per-channel queues, each split into a read request queue and a write request queue; the channel allocation module inserts read-write access requests into these queues in units of pages.
The pending queue holds requests whose channel allocation has completed but whose flash reads and writes have not; it is scheduled first-in first-out.
The data buffer temporarily stores data read from flash.
The garbage collection module carries out the garbage collection process, combining active and passive garbage collection.
The parallel processing method provided by the invention involves three processes when actually handling an access request: preprocessing, flash access, and garbage collection. Their trigger conditions and processing flows are as follows.
when a new read-write request arrives and the queue to be processed is not full, triggering a preprocessing flow, wherein the flow is shown in fig. 2, and the specific working process is as follows:
step 1, judging the type of the arrival request: if yes, executing step 2; otherwise, executing step 4.
And 2, calling an address mapping module, accessing the address mapping table, and acquiring a physical page number PPN corresponding to each logical page of the read request.
And 3, calling a channel allocation module, obtaining a corresponding flash memory channel according to the PPN of each page, then inserting each page into the tail of the read queue of the corresponding flash memory channel, and finally executing the step 6.
And 4, calling a page distribution module of the address mapping module, distributing a new physical page number PPN for each page of the write request by adopting a remote updating strategy, updating the address mapping table, and executing the step 5.
And 5, calling a channel allocation module, obtaining a corresponding flash memory channel according to the PPN of each page, then inserting each page into the tail of the write queue of the corresponding flash memory channel, and finally executing the step 6.
And 6, inserting the request into the tail of the queue to be processed, and finishing the preprocessing.
Further, the page allocation method for write requests in step 4 proceeds as follows:
P41. Compute the erase count of each channel's flash blocks and sort the channels by erase count in ascending order;
P42. Express the number of physical pages N required by the write request as N = n × C + q, where C is the number of channels, n is the quotient of N/C, and q is the remainder;
P43. Following the policy of preferentially assigning writes to the least-erased channels while maximizing channel-parallel writing, first assign n page writes to every channel to achieve maximum parallel writing, then assign the remaining q page writes to the q channels with the fewest erases to balance wear across channels.
When the pending queue is not empty, the flash access flow shown in FIG. 3 is triggered. It works as follows:
Step 1. Check the type of the request at the head of the pending queue: for a read request, execute step 2; otherwise, execute step 5.
Step 2. Check whether the read request at the head of the pending queue has completed: if so, execute step 4; otherwise, execute step 3.
Step 3. Read the page at the head of every channel's read queue from flash into the data buffer, remove those head entries, and execute step 2.
Step 4. Merge the data belonging to the same request in the data buffer, respond to the upper-layer file system, remove the head request from the pending queue, and end the flash access operation.
Step 5. Check whether the write request at the head of the pending queue has completed: if so, execute step 7; otherwise, execute step 6.
Step 6. Write the entry at the head of every channel's write queue to the corresponding physical page of that channel, remove those head entries, and execute step 5.
Step 7. Respond to the upper-layer file system, remove the head request from the pending queue, and end the flash access operation.
When the pending queue is empty and the number of free blocks in some channel falls below a threshold TH1, active garbage collection is triggered. When a flash access operation is in progress and the number of free blocks in some channel falls below a threshold TH2, the flash access operation is suspended and passive garbage collection is triggered. The actual garbage collection operation selects the data block with the most invalid pages in the channel as the victim block, migrates its valid pages to a free block in the same channel, and erases the victim. In addition, to minimize triggering of passive garbage collection, TH1 should be greater than TH2.
To further explain the parallel processing flow of the invention, it is illustrated with concrete requests. For ease of description, assume the number of flash channels C is 4 and the flash array of each channel contains 128 physical pages: physical pages 1 to 128 belong to channel 1, pages 129 to 256 to channel 2, pages 257 to 384 to channel 3, and pages 385 to 512 to channel 4. The pending queue and all channel read-write queues start empty, and sorting the channels by flash-block erase count in ascending order gives the order 4, 3, 2, 1.
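With this layout, the channel of any physical page follows directly from its page number. A one-line sketch, using the example's 1-based channel numbering (the function name is an assumption):

```python
def channel_of(ppn, pages_per_channel=128):
    # 1-based numbering as in the example: pages 1-128 -> channel 1,
    # 129-256 -> channel 2, 257-384 -> channel 3, 385-512 -> channel 4.
    return (ppn - 1) // pages_per_channel + 1
```

This reproduces the channel assignments used throughout Examples 1 and 2 below.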
Example 1: a preprocessing operation.
The preprocessing of the queued requests is shown in FIG. 4. Requests R1, W1, and R2 arrive in chronological order, where R denotes a read request, W a write request, and the number the arrival order. The processing is as follows:
C1. The pending queue is not full, so request preprocessing begins with read request R1(78, 2); here 78 is the starting logical page number of the access and 2 the number of pages accessed (likewise below). From the address mapping table shown in FIG. 4, the physical pages that actually need to be accessed are 66 and 301. Execute C2.
C2. Physical pages 66 and 301 lie in channels 1 and 3, respectively. The channel allocation module inserts them into the read queues of channels 1 and 3, and C3 is executed.
C3. The pending queue is not full, so preprocessing continues with write request W1(236, 5). The address mapping module distributes the first four pages of W1 across the four channels and the fifth page to channel 4. Assume the resulting mapping is 236→103, 237→247, 238→331, 239→500, 240→501. The address mapping table is updated accordingly, and C4 is executed.
C4. Physical pages 103, 247, 331, 500, and 501 lie in channels 1, 2, 3, 4, and 4, respectively. The channel allocation module inserts them into the write queues of channels 1, 2, 3, and 4, and C5 is executed.
C5. The pending queue is not full, so preprocessing continues with read request R2(126, 3). The address mapping module is consulted for the mapping of logical pages 126 to 128, and C6 is executed.
C6. Physical pages 79, 210, and 407 lie in channels 1, 2, and 4, respectively. The channel allocation module inserts them into the read queues of channels 1, 2, and 4, and C7 is executed.
C7. The pending queue is not full, but there are no further requests, so the preprocessing operation ends.
Example 2: a flash access operation.
The processing of the queued requests is shown in FIG. 5 and proceeds as follows:
C1. The pending queue is not empty and its head is read request R1, which has not yet completed, so the head page of each of the four channel read queues is read: physical pages 66, 210, 301, and 407 are read from the four channels into the data buffer. The head entries of the four read queues are removed. R1 is now complete, so it is removed from the pending queue, its physical pages 66 and 301 are merged in the buffer and returned upstream, and C2 is executed.
C2. The pending queue is not empty and its head is write request W1, which has not yet completed, so the head entry of each of the four channel write queues is written: data is written to physical pages 103, 247, 331, and 500 in the four channels. The head entries of the four write queues are removed, and C3 is executed.
C3. The pending queue is not empty and its head is write request W1, still incomplete, so the write-queue heads are written again; only channel 4's write queue is non-empty, so data is written to physical page 501 of channel 4. The head of channel 4's write queue is removed. W1 is now complete and is removed from the pending queue, and C4 is executed.
C4. The pending queue is not empty and its head is read request R2, which has not yet completed, so the read-queue heads are read; only channel 1's read queue is non-empty, so physical page 79 is read from channel 1. The head of channel 1's read queue is removed. R2 is now complete and is removed from the pending queue; physical pages 79, 210, and 407 are merged in the buffer and returned upstream, and C5 is executed.
C5. The pending queue is empty; execution ends.
The above is only a preferred embodiment of the invention and is not intended to limit it; various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (1)

1. A parallel processing method for read-write requests of a solid state disk, characterized in that
the method comprises three processes, preprocessing, flash access, and garbage collection, which can execute in parallel; the specific steps are as follows:
when a new read-write request arrives and the pending queue is not full, the preprocessing flow is triggered;
when the pending queue is not empty, the flash access flow is triggered;
the garbage collection process is handled by the garbage collection module, which combines active and passive garbage collection; when the pending queue is empty and the number of free blocks in some channel falls below a threshold TH1, active garbage collection is triggered; when a flash access operation is in progress and the number of free blocks in some channel falls below a threshold TH2, the flash access operation is suspended and passive garbage collection is triggered; the garbage collection operation selects the data block with the most invalid pages in the channel as the victim block, migrates its valid pages to a free block in the same channel, and erases the victim; in addition, to minimize triggering of passive garbage collection, TH1 should be greater than TH2;
the preprocessing flow comprises the following steps:
p1, judging the type of the arrival request: if yes, executing P2; otherwise, executing P4 for the write request;
p2, calling an address mapping module, accessing an address mapping table, and acquiring a physical page number PPN corresponding to each logical page of the read request;
p3, calling a channel distribution module, obtaining a flash memory channel where each page is located according to the PPN of each page, then inserting each page into the tail of a read queue of the corresponding flash memory channel, and executing P6;
p4, calling a page distribution module of the address mapping module, distributing a new physical page number PPN for each page of the write request by adopting a remote updating strategy, updating an address mapping table and executing P5;
p5. calling a channel allocation module, obtaining the flash memory channel of each page according to the PPN of the page, then inserting each page into the tail of the write queue of the corresponding flash memory channel, and finally executing P6;
p6, inserting the request into the tail of the queue to be processed, and finishing the preprocessing;
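Steps P1 through P6 can be sketched roughly as below, assuming a hypothetical page-interleaved channel mapping (`ppn % NUM_CHANNELS`) and a naive sequential out-of-place allocator; all names are illustrative:

```python
from collections import defaultdict, deque

NUM_CHANNELS = 4                      # assumed channel count
read_q = defaultdict(deque)           # per-channel read queues
write_q = defaultdict(deque)          # per-channel write queues
pending = deque()                     # queue to be processed (FIFO)
mapping = {}                          # page-level address mapping table
next_ppn = 0                          # naive out-of-place allocator state

def channel_of(ppn):
    # Channel allocation: assumed page-interleaved striping across channels.
    return ppn % NUM_CHANNELS

def preprocess(request):
    global next_ppn
    kind, lpns = request                       # P1: inspect request type
    if kind == "read":
        ppns = [mapping[l] for l in lpns]      # P2: look up PPN per logical page
        for p in ppns:
            read_q[channel_of(p)].append(p)    # P3: enqueue in channel read queue
    else:
        for l in lpns:                         # P4: out-of-place update,
            mapping[l] = next_ppn              #     remap logical page to new PPN
            write_q[channel_of(next_ppn)].append(next_ppn)  # P5: enqueue write
            next_ppn += 1
    pending.append(request)                    # P6: append to pending queue
```

A real allocator would follow the wear-aware policy of steps P41 through P43 rather than handing out PPNs sequentially.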
the flash memory access flow is realized by the following steps:
A1. Determine the type of the request at the head of the queue to be processed: if it is a read request, execute A2; otherwise, execute A5;
A2. Determine whether the read request at the head of the queue to be processed has finished executing: if so, execute A4; otherwise, execute A3;
A3. Read the page at the head of the read queue of every channel from flash into the data buffer, then remove the head of the read queue of every channel, and execute A2;
A4. Merge the data belonging to the same request in the data buffer, respond to the upper-layer file system, remove the head request of the queue to be processed, and end the flash memory access operation;
A5. Determine whether the write request at the head of the queue to be processed has finished executing: if so, execute A7; otherwise, execute A6;
A6. Write the page at the head of the write queue of every channel to the physical page under the corresponding channel, remove the head of the write queue of every channel, and execute A5;
A7. Respond to the upper-layer file system, remove the head request of the queue to be processed, and end the flash memory access operation;
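Steps A1 through A7 might be modeled as below. The sketch assumes the pages of the head request occupy the channel queue heads and abstracts real flash I/O into queue pops; popping one page from every channel per iteration models the channel-parallel access. All names are hypothetical:

```python
from collections import deque

def flash_access(pending, read_q, write_q, data_buffer):
    """Service the head request of the pending queue (one access flow).
    pending holds (kind, page_count) tuples; read_q/write_q map
    channel id -> deque of pages."""
    kind, npages = pending[0]                 # A1: inspect head request type
    done = 0
    if kind == "read":
        while done < npages:                  # A2: until the head read request completes
            for q in read_q.values():         # A3: one page per channel, in parallel
                if q and done < npages:
                    data_buffer.append(q.popleft())
                    done += 1
        result = list(data_buffer)            # A4: merge buffered data, respond upward
        data_buffer.clear()
    else:
        while done < npages:                  # A5: until the head write request completes
            for q in write_q.values():        # A6: one page per channel, in parallel
                if q and done < npages:
                    q.popleft()
                    done += 1
        result = "ack"                        # A7: respond to the file system
    pending.popleft()                         # retire the head request
    return result
```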
the specific flow of the page allocation method for write requests in step P4 of the preprocessing flow is as follows:
P41. Compute the erase count of the flash blocks of each channel, and sort the channels by erase count in ascending order;
P42. Express the number of physical pages N required by the write request in the form N = n × C + q, where C is the number of channels, n is the quotient of N/C, and q is the remainder of N/C;
P43. Following the strategy of preferentially allocating writes to the channels with the fewest erases while maximizing channel-parallel writing, first allocate n page writes to every channel to maximize parallel writing; then allocate the remaining q page writes to the q channels with the fewest erases, to balance the wear of the channels;
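The split N = n × C + q and the wear-balancing assignment of steps P41 through P43 can be illustrated as follows (the function and parameter names are assumptions):

```python
def allocate_page_writes(n_pages, erase_counts):
    """Distribute n_pages page-writes over channels per steps P41-P43.
    erase_counts: list of per-channel total erase counts (index = channel id).
    Returns the number of page writes assigned to each channel."""
    C = len(erase_counts)
    n, q = divmod(n_pages, C)              # P42: N = n*C + q
    per_channel = [n] * C                  # P43: n writes to every channel (max parallelism)
    # P41/P43: the remaining q writes go to the q least-erased channels (wear leveling)
    order = sorted(range(C), key=lambda ch: erase_counts[ch])
    for ch in order[:q]:
        per_channel[ch] += 1
    return per_channel
```

For example, 10 pages over 4 channels gives n = 2 and q = 2, so every channel takes two writes and the two least-worn channels each take one more.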
the method involves an address mapping module, a channel allocation module, a channel queue module, a queue-to-be-processed module, a data buffer, and a garbage collection module; the address mapping module is responsible for mapping the logical addresses of requests to physical addresses, adopts page-level mapping, and consists of an address mapping table and a page allocation module; the channel allocation module computes the channel that each page of a request actually needs to access; the channel queue module consists of the queues of each channel, each channel's queue being divided into a read request queue and a write request queue, with read-write access requests inserted into the channel's read or write queue in units of pages; the queue to be processed stores requests whose channel allocation is complete but whose actual flash reads and writes are not yet finished, and is scheduled first-in first-out; the data buffer temporarily stores the data read from flash; the garbage collection module is responsible for reclaiming the invalid blocks of the solid state disk, completing the garbage collection operation.
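The modules enumerated above might be organized around data structures like the following minimal sketch; the class and field names are illustrative, not taken from the patent:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class ChannelQueues:
    # Each flash channel keeps separate per-page read and write queues.
    read_q: deque = field(default_factory=deque)
    write_q: deque = field(default_factory=deque)

@dataclass
class SSDController:
    # Minimal data-structure sketch of the modules listed above (names assumed).
    num_channels: int
    mapping_table: dict = field(default_factory=dict)   # address mapping module (page-level)
    channels: list = field(default_factory=list)        # channel queue module
    pending: deque = field(default_factory=deque)       # FIFO queue to be processed
    data_buffer: list = field(default_factory=list)     # data read out of flash

    def __post_init__(self):
        self.channels = [ChannelQueues() for _ in range(self.num_channels)]

    def channel_of(self, ppn):
        # Channel allocation module: assumed page-interleaved mapping.
        return ppn % self.num_channels
```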
CN201910614795.8A 2019-07-09 2019-07-09 Parallel processing method for read-write requests of solid state disk Active CN110515859B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910614795.8A CN110515859B (en) 2019-07-09 2019-07-09 Parallel processing method for read-write requests of solid state disk


Publications (2)

Publication Number Publication Date
CN110515859A CN110515859A (en) 2019-11-29
CN110515859B true CN110515859B (en) 2021-07-20

Family

ID=68623266

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910614795.8A Active CN110515859B (en) 2019-07-09 2019-07-09 Parallel processing method for read-write requests of solid state disk

Country Status (1)

Country Link
CN (1) CN110515859B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111176883B (en) * 2019-12-24 2022-05-20 中山大学 Erasure code based active reconstruction method and reading method for flash memory solid-state disk data
CN111324307B (en) * 2020-02-13 2023-02-21 西安微电子技术研究所 Satellite-borne NAND FLASH type solid-state memory and method for distributing storage space
CN111580752B (en) * 2020-04-28 2023-09-26 中国人民大学 Data storage method, device, computer program and storage medium
CN111582739B (en) * 2020-05-13 2022-02-18 华中科技大学 Method for realizing high bandwidth under condition of multi-tenant solid-state disk performance isolation
CN111708713B (en) * 2020-05-20 2022-07-05 杭州电子科技大学 Intelligent garbage recycling and scheduling method for solid state disk
CN111857601B (en) * 2020-07-30 2023-09-01 暨南大学 Solid-state disk cache management method based on garbage collection and channel parallelism
CN112835534B (en) * 2021-02-26 2022-08-02 上海交通大学 Garbage recycling optimization method and device based on storage array data access
CN114185492B (en) * 2021-12-14 2023-07-07 福建师范大学 Solid state disk garbage recycling method based on reinforcement learning
CN114546294B (en) * 2022-04-22 2022-07-22 苏州浪潮智能科技有限公司 Solid state disk reading method, system and related components
CN115840542B (en) * 2023-02-24 2023-06-02 浪潮电子信息产业股份有限公司 Method and system for processing request of hard disk, storage medium and electronic equipment
CN116540948B (en) * 2023-07-04 2023-08-29 绿晶半导体科技(北京)有限公司 Solid state disk garbage disposal method and solid state disk thereof

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101498994A (en) * 2009-02-16 2009-08-05 华中科技大学 Solid state disk controller
CN102713866A (en) * 2009-12-15 2012-10-03 国际商业机器公司 Reducing access contention in flash-based memory systems
CN103365788A (en) * 2013-08-06 2013-10-23 山东大学 Self-adaption local garbage collecting method used for real-time flash memory conversion layer
CN103902475A (en) * 2014-04-23 2014-07-02 哈尔滨工业大学 Solid state disk concurrent access method and device based on queue management mechanism
CN104090847A (en) * 2014-06-25 2014-10-08 华中科技大学 Address distribution method of solid-state storage device
CN109739775A (en) * 2018-11-20 2019-05-10 北京航空航天大学 The flash translation layer (FTL) composting recovery method locked based on the multistage

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101169703A (en) * 2007-11-05 2008-04-30 湖南源科创新科技股份有限公司 Solid state hard disk memory card based on RAID technology
CN107515728B (en) * 2016-06-17 2019-12-24 清华大学 Data management method and device for developing internal concurrency characteristics of flash memory device
US10684795B2 (en) * 2016-07-25 2020-06-16 Toshiba Memory Corporation Storage device and storage control method
US11029859B2 (en) * 2017-08-23 2021-06-08 Toshiba Memory Corporation Credit based command scheduling


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Garbage Collection Strategy Based on a Hybrid Caching Mechanism; Luo Yong; China Master's Theses Full-text Database, Information Science and Technology; 2018-01-15 (No. 1); pp. I137-184 *
Research on the Effectiveness of the Buffer Management Layer for Solid State Disks; Du Chenjie; Journal of Zhejiang Wanli University; 2017-03-31; Vol. 30 (No. 2); pp. 72-77 *


Similar Documents

Publication Publication Date Title
CN110515859B (en) Parallel processing method for read-write requests of solid state disk
CN107885456B (en) Reducing conflicts for IO command access to NVM
CN103049397B (en) A kind of solid state hard disc inner buffer management method based on phase transition storage and system
Jung et al. Sprinkler: Maximizing resource utilization in many-chip solid state disks
KR101486987B1 (en) Semiconductor memory device including nonvolatile memory and commnand scheduling method for nonvolatile memory
JP6163532B2 (en) Device including memory system controller
JP5759623B2 (en) Apparatus including memory system controller and associated method
US9058208B2 (en) Method of scheduling tasks for memories and memory system thereof
US20120110239A1 (en) Causing Related Data to be Written Together to Non-Volatile, Solid State Memory
CN1258713C (en) Data distribution dynamic mapping method based on magnetic disc characteristic
US20130212319A1 (en) Memory system and method of controlling memory system
CN103885728A (en) Magnetic disk cache system based on solid-state disk
EP3511814A1 (en) Storage device storing data in order based on barrier command
CN105760311B (en) Trim command response method and system and operating system
CN103902475B (en) Solid state disk concurrent access method and device based on queue management mechanism
JP2012221038A (en) Memory system
JP6443571B1 (en) Storage control device, storage control method, and storage control program
CN111399750B (en) Flash memory data writing method and computer readable storage medium
US20190042462A1 (en) Checkpointing for dram-less ssd
US20170003911A1 (en) Information processing device
CN114371813A (en) Identification and classification of write stream priorities
CN110968269A (en) SCM and SSD-based key value storage system and read-write request processing method
CN110377233A (en) SSD reading performance optimization method, device, computer equipment and storage medium
KR20190086341A (en) Storage device storing data in order based on barrier command
CN109799959A (en) A method of it improving open channel solid-state disk and writes concurrency

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant