WO2021035761A1 - Method and apparatus for implementing hybrid read/write on a solid-state drive (一种固态硬盘混合读写的实现方法以及装置) - Google Patents

Method and apparatus for implementing hybrid read/write on a solid-state drive

Info

Publication number
WO2021035761A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2019/103900
Other languages
English (en)
French (fr)
Inventor
陈林峰
刘光远
李由
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to PCT/CN2019/103900
Priority to CN201980099732.1A
Publication of WO2021035761A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14: Handling requests for interconnection or transfer
    • G06F 13/16: Handling requests for interconnection or transfer for access to memory bus
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers

Definitions

  • This application relates to the field of storage technology, and in particular to a method and device for implementing hybrid reading and writing.
  • A solid state disk (SSD, Solid State Disk) system is a hard disk composed of a controller and memory.
  • The memory may be, for example, dynamic random access memory (Dynamic Random Access Memory, DRAM) or a flash memory (Flash) chip, such as NAND Flash.
  • SSD disk systems usually use multiple memories to form a storage array.
  • each independent NAND Flash can also be called a flash memory particle.
  • A flash memory particle is composed of many storage logic units (also called storage media concurrent units), and the capacity and read/write performance of a single storage logic unit are limited.
  • For example, the capacity of a single storage logic unit may be 32GByte, the read bandwidth up to 150MB/s, and the write bandwidth up to 30MB/s. The read/write efficiency of a single storage logic unit is therefore relatively low, and the high performance of the entire SSD disk system depends on the concurrent operation of multiple storage logic units.
  • The storage logic units in a single flash memory particle share the same input/output (Input/Output, IO) channel.
  • In the hybrid access model of the host system, the SSD disk system generally processes write operations in write-back mode, so the host usually does not see the high write latency of the storage logic unit. The environment faced by read operations is more complicated: a read must actually access the storage logic unit and return data. If the read operation of one storage logic unit conflicts with the write data transmission of other storage logic units on the same channel, the host can see a large read delay (a conflicting long-tail delay); the read delay corresponding to one read operation may reach hundreds of microseconds or even milliseconds, which is unacceptable for some hosts.
  • The embodiments of the present application provide a method and device for implementing mixed reading and writing, which can effectively reduce read delays, improve the response speed of read operations, and improve user experience in mixed read-write scenarios without substantially affecting the progress of write operations.
  • In a first aspect, an embodiment of the present application provides a method for implementing hybrid read and write of a solid state drive. The method includes: the controller determines that a write data transmission operation for transmitting the data to be written needs to be performed to a first logic unit of the memory, and the controller determines that a read operation needs to be performed on data in a second logic unit of the memory, where the first logic unit and the second logic unit share an input/output (IO) channel; the controller processes the data to be written into multiple data packets; and, through the IO channel, the controller alternately performs the transmission of each data packet and each sub-operation of the read operation. The sub-operations of the read operation include a read command issuance operation, a status query operation, and a read data transmission operation.
  • In the embodiments of this application, in a mixed read-write scenario the SSD controller can organize the data to be written into a queue of multiple data packets (also called a small-packet data queue) and transmit them through the IO channel.
  • The transmission operations of the individual data packets of the data to be written are interleaved with the various sub-operations of the read operation, so that the read/write operations of different storage logic units of the same flash memory particle are multiplexed together. This maximizes the use of the IO channel's data transmission bandwidth, schedules all logic units so that idle storage logic units can be scheduled and put to work as soon as possible, reduces the read delay of read operations and the write delay of write operations, improves the response speed of read operations, and improves user experience.
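  • As an illustration of the interleaving described above, the following Python sketch slots the three read sub-operations between the transmissions of the small write packets on the shared IO channel. All names and packet counts are invented for illustration; the patent text does not prescribe a specific algorithm.

```python
def interleave(write_packets, read_subops):
    """Build a channel schedule that alternates write-packet transmissions
    with the sub-operations of a pending read on the same IO channel."""
    schedule = []
    subops = iter(read_subops)
    pending = next(subops, None)
    for pkt in write_packets:
        schedule.append(("write_packet", pkt))
        if pending is not None:          # slot one read sub-op between packets
            schedule.append(("read_subop", pending))
            pending = next(subops, None)
    while pending is not None:           # any leftover sub-ops run at the end
        schedule.append(("read_subop", pending))
        pending = next(subops, None)
    return schedule

# Data to be written, already split into a small-packet queue, and the three
# sub-operations of the read operation named in the text.
plan = interleave(
    ["pkt0", "pkt1", "pkt2", "pkt3"],
    ["read_cmd_issue", "status_query", "read_data_transfer"],
)
```

  • In this toy schedule the two streams alternate strictly; a real controller would additionally respect the media timing constraints between sub-operations.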
  • the controller is, for example, an SSD controller
  • the memory is, for example, NAND Flash (or flash memory particles) or dynamic random access memory (DRAM).
  • the memory includes a large number of logic units (or called storage logic unit, or may be called storage medium concurrent unit).
  • the controller controls the memory through an IO channel, and the IO channel includes multiple ways (Way), and each way corresponds to a logical unit to organize concurrent media operations, that is, multiple logical units can share one IO channel.
  • Each logic unit may further include a plurality of blocks, each block includes a plurality of word-lines, and each word line includes a plurality of pages.
  • the block is the basic unit of the erase operation
  • the word line is the basic unit of the write operation
  • the page is the basic unit of the read operation.
  • the second logic unit is any logic unit different from the first logic unit in the memory, that is, the first logic unit and the second logic unit may be located in the same memory (for example, located in the same flash memory particle) and share the same IO channel.
  • In a possible embodiment, the controller processing the data to be written into multiple data packets includes: after the controller receives the data to be written from the host, and before it determines that the write data transmission operation and the read operation need to be performed, the data to be written is processed into multiple data packets.
  • the SSD controller processes the complete data to be written into multiple data packets.
  • the specific process can be described as follows:
  • The controller receives the data to be written from the host; the controller processes the data to be written into multiple data packets; the controller determines that the multiple data packets need to be transmitted to the first logic unit of the memory; the controller determines that a read operation needs to be performed on data in the second logic unit of the memory, where the first logic unit and the second logic unit share an input/output (IO) channel; through the IO channel, the controller alternately performs the transmission of each data packet and each sub-operation of the read operation. The sub-operations of the read operation include a read command issuance operation, a status query operation, and a read data transmission operation.
  • In this way, when the SSD controller receives the data to be written from the host, it can organize it into a queue of multiple data packets (a small-packet data queue). If, during transmission of the small-packet data queue, a read-operation-related command is encountered, and the logic unit targeted by the write data transmission and the logic unit targeted by the read operation share the same IO channel, the SSD controller can interleave the sub-operations of the read operation with the transmission operations of the individual write data packets, so that the read/write operations of different storage logic units of the same flash memory particle are multiplexed together. This maximizes the use of the IO channel's data transmission bandwidth, schedules all logic units so that idle storage logic units can be scheduled and put to work as soon as possible, reduces read and write delays, and improves user experience.
  • The so-called "data to be written" may refer to the complete data to be written that the host sends to the SSD controller, that is, none of the data has yet been transmitted to the flash memory particles, and the SSD controller organizes the complete data to be written into multiple data packets (i.e., a small-packet data queue).
  • The so-called "data to be written" may also refer to only part of the complete data to be written that the host sends to the SSD controller. That is, the SSD controller does not perform segmentation processing on one part of the complete data to be written, but organizes the other part (i.e., the remaining data to be written) into multiple data packets.
  • For example: after receiving the complete data to be written from the host, the SSD controller transmits it to the first logic unit through the IO channel. At a certain point during the write data transmission, the SSD controller receives a read data request issued by the host and determines that a read operation needs to be performed on the second logic unit through the same IO channel. At this point, part of the data to be written has already been transmitted to the first logic unit, so the SSD controller processes only the data to be written that has not yet been transmitted (i.e., the remaining data to be written) into multiple data packets.
  • In a possible embodiment, the controller processing the data to be written into multiple data packets includes: the controller divides the data to be written into multiple pieces of data; the controller adds an error correction code to each of the multiple pieces of data, thereby obtaining multiple data packets.
  • The error correction code is, for example, an ECC check code (Error Correction Code), a low-density parity check (LDPC) code, or a Bose-Chaudhuri-Hocquenghem (BCH) code.
  • Because the flash memory particles in the SSD disk system use electrical signals as the physical form of information storage, the reliability of the electrical signals stored on the storage medium is not stable, which may cause errors in the data written to the flash memory particles.
  • Therefore, each small packet carries both the data and the corresponding check code, to guard against errors and ensure the reliability and correctness of data reading.
  • Taking the case where the error correction code is an ECC check code as an example, the SSD controller can calculate the ECC check code from the data in the small packet using an ECC algorithm, for example the Hamming algorithm, Reed-Solomon, or another ECC algorithm; this application does not limit this.
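  • A minimal sketch of this split-and-protect step, using Python's CRC32 as a stand-in for the ECC/LDPC/BCH codes named above (real controllers compute far stronger codes, usually in hardware; the function names and chunk size here are illustrative):

```python
import zlib

def packetize(data: bytes, chunk_size: int) -> list:
    """Split the data to be written into chunks and append a 4-byte CRC32
    to each, mimicking the per-packet check code described in the text."""
    packets = []
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        check = zlib.crc32(chunk).to_bytes(4, "big")
        packets.append(chunk + check)
    return packets

def verify(packet: bytes) -> bytes:
    """Strip and verify the check code; raise if the packet is corrupted."""
    chunk, check = packet[:-4], packet[-4:]
    if zlib.crc32(chunk).to_bytes(4, "big") != check:
        raise ValueError("check code mismatch")
    return chunk

pkts = packetize(bytes(range(100)), chunk_size=32)
```

  • Note that a CRC only detects errors, while the ECC/LDPC/BCH codes used by SSD controllers can also correct them.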
  • In a possible embodiment, the controller alternately performing, through the IO channel, the transmission of each data packet and each sub-operation of the read operation includes: the controller transmits one or more first data packets through the IO channel according to the queue order of the multiple data packets, the one or more first data packets being one or more consecutive data packets among the multiple data packets; after completing the transmission of the one or more first data packets, the controller performs a first sub-operation through the IO channel, the first sub-operation being one of the sub-operations of the read operation; after completing the first sub-operation, the controller transmits through the IO channel one or more second data packets that follow the one or more first data packets in the queue order, the one or more second data packets being one or more consecutive data packets among the multiple data packets.
  • For example, the first sub-operation is the read command issuance operation; correspondingly, the transmission of the one or more first data packets on the IO channel may occur before the read command of the read operation is issued.
  • the transmission of the one or more second data packets in the IO channel may occur after the read command of the read operation is issued and before the status query command of the read operation is issued.
  • In a possible embodiment, the method further includes: the controller performs the status query operation through the IO channel; after completing the status query operation, the controller transmits through the IO channel one or more third data packets that follow the one or more second data packets in the queue order, the one or more third data packets being one or more consecutive data packets among the multiple data packets. The transmission of the one or more third data packets on the IO channel may occur after the status query command of the read operation is issued and before the read data transmission operation of the read operation.
  • That is, the controller performs the read data transmission operation through the IO channel after completing the transmission of the one or more third data packets that follow the one or more second data packets in the queue order.
  • It can be seen that operations belonging to different logic units, namely the transmission operations of the individual data packets and the read command issuance, status query, and read data transmission of the read operation, can be carried out in an interleaved manner. A transmission pipeline with maximum channel bandwidth utilization is thereby formed on the same IO channel. The implementation of the embodiments of the present application can therefore use the entire channel bandwidth efficiently, effectively reduce the read delay, and minimize the interference to the write transmission.
  • In a possible embodiment, the time difference from performing the read command issuance operation to performing the status query operation is greater than or equal to the duration of transmitting the one or more second data packets, and the time difference from performing the status query operation to performing the read data transmission operation is greater than or equal to the duration of transmitting the one or more third data packets.
  • In this way, the implementation of the embodiments of this application can ensure that sub-operations belonging to different logic units and small-packet data transmission operations can be interleaved, so as to use the entire channel bandwidth efficiently, effectively reduce the read delay, and minimize the interference to the write transmission.
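  • The timing constraint above reduces to simple arithmetic; the sketch below checks that the packets slotted into each gap between read sub-operations actually fit. All durations are invented for illustration.

```python
def gap_fits(gap_us: float, packet_us: float, n_packets: int) -> bool:
    """True if n_packets back-to-back packet transmissions fit into the
    time difference (gap) between two consecutive read sub-operations."""
    return gap_us >= n_packets * packet_us

PACKET_US = 10.0   # illustrative time to transmit one small data packet

# Gap between read command issuance and status query: roughly the read
# latency, during which the IO channel is free for write packets.
second_batch_ok = gap_fits(gap_us=80.0, packet_us=PACKET_US, n_packets=8)

# Cramming one more packet into the same gap would delay the status query.
overfull = gap_fits(gap_us=80.0, packet_us=PACKET_US, n_packets=9)
```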
  • In a possible embodiment, the controller determining that a write data transmission operation for transmitting the data to be written needs to be performed to the first logic unit of the memory includes: the controller receives from the host a data write request and the data to be written, the data write request including the logical address of the data to be written; according to this logical address, the controller determines that the data to be written needs to be written to the first logic unit of the memory.
  • The data write request and the data to be written may be sent by the host to the controller at the same time, or the data write request may be sent to the controller first and the data to be written sent afterwards.
  • In a possible embodiment, the controller determining that a read operation needs to be performed on data in the second logic unit of the memory includes: the controller receives a data read request from the host, the data read request including the logical address of the data to be read; according to this logical address, the controller determines that a read operation needs to be performed on the second logic unit of the memory.
  • In a second aspect, an embodiment of the present application provides a solid-state drive including a controller and a memory, the controller including: a write determination unit, configured to determine that a write data transmission operation for transmitting the data to be written needs to be performed to a first logic unit of the memory; a read determination unit, configured to determine that a read operation needs to be performed on data in a second logic unit of the memory, where the first logic unit and the second logic unit share an IO channel;
  • a data processing unit, configured to process the data to be written into multiple data packets; and an alternate read-write unit, configured to alternately perform, through the IO channel, the transmission of each data packet and each sub-operation of the read operation, the sub-operations of the read operation including a read command issuance operation, a status query operation, and a read data transmission operation.
  • the functional units of the controller can be used to implement the method described in the first aspect.
  • In a third aspect, an embodiment of the present application provides a solid-state drive, including a controller and a memory, the controller and the memory connected or coupled together through a bus, where the memory is used to store program instructions and the controller is used to call the program instructions stored in the memory to execute the method described in any one of the possible implementations of the first aspect.
  • An embodiment of the present application provides a system, comprising a host and a solid state drive, the host and the solid state drive being in communication connection, where the host is used to send data to the solid state drive.
  • the solid state drive is the solid state drive described in the second aspect, or the solid state drive is the solid state drive described in the third aspect.
  • An embodiment of the present application provides a computer-readable storage medium storing a computer program. The computer program includes program instructions that, when executed by a processor, cause the processor to execute the method described in any implementation of the first aspect.
  • The embodiments of the present application also provide a computer program product; when the computer program product runs on a computer, the method described in any implementation of the first aspect is executed.
  • It can be seen that, in a hybrid read-write scenario of the solid state drive, for example when the SSD controller encounters a read-operation-related command during a write data transmission, if the logic unit targeted by the write data transmission and the logic unit targeted by the read operation share the same IO channel, the SSD controller can interleave the sub-operations of the read operation with the transmission operations of the individual write data packets, so that the read/write operations of different storage logic units of the same flash memory particle are multiplexed together. This maximizes the use of the IO channel's data transmission bandwidth, schedules all logic units so that idle storage logic units can be scheduled and put to work as soon as possible, reduces read and write delays, and improves user experience.
  • FIG. 1 is an exemplary framework diagram of a NAND Flash control system to which this application applies;
  • FIG. 2 is a schematic diagram of a processing solution for a mixed read-write scenario in an existing solution;
  • FIG. 3 is a schematic diagram of a processing solution for a mixed read-write scenario in another existing solution;
  • FIG. 4 is a schematic flowchart of a method for implementing mixed reading and writing provided by an embodiment of the present application;
  • FIG. 5A is a schematic diagram of a scenario in which data to be written is organized into multiple small packets of data, provided by an embodiment of the present application;
  • FIG. 5B is a schematic diagram of another scenario in which data to be written is organized into multiple small packets of data, provided by an embodiment of the present application;
  • FIG. 6 is a schematic diagram of a processing solution for a mixed read-write scenario provided by an embodiment of the present application;
  • FIG. 7 is a schematic flowchart of another method for implementing mixed reading and writing provided by an embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of a controller in a solid-state drive provided by an embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of a solid-state drive provided by an embodiment of the present application;
  • FIG. 10 is a schematic structural diagram of a system provided by an embodiment of the present application.
  • FIG. 1 depicts an exemplary framework diagram of a NAND Flash control system to which this application is applied.
  • The NAND Flash control system includes a host system (HOST) and an SSD disk system, and there can be a communication connection between the host system and the SSD disk system.
  • The host system can issue operation commands to the SSD disk system, such as commands related to read, write, or erase operations, and can also exchange data with the SSD disk system, for example reading data from the SSD disk system or writing data to it.
  • The SSD disk system may further include an SSD controller (SSD Controller) and multiple memories, each of which may be called a flash memory particle, NAND Flash, DIE, LUN, PU, NAND memory, NAND device, etc.
  • This document mainly uses the term flash memory particle in describing the solution.
  • the SSD controller can be used to implement operations such as command issuing, data storage, reading, and erasing to flash memory particles.
  • The SSD controller may include, for example, one or more processors, such as a central processing unit (CPU, central processing unit), and the one or more processors may be integrated on the same hardware chip.
  • the SSD controller communicates with the host system through the internal HOST interface to receive operation commands and data from the host system, and feedback responses and related data to the host system.
  • The SSD controller interacts with the flash memory particles through input/output (IO) channels, for example to send commands, transmit data, and query status.
  • a flash memory particle is independently mounted on each IO channel.
  • the flash memory control module in the SSD controller can operate multiple flash memory particles in parallel through the multiple IO channels to improve the overall read and write rate of the system.
  • Figure 1 only exemplarily shows the situation of 4 flash memory particles (ie flash memory particle 1-flash memory particle 4) and the corresponding 4 IO channels (ie IO channel 1-IO channel 4).
  • The flash memory control module can be used to manage the read, write, and erase operations of each flash memory particle.
  • the number of flash memory particles may also be other numbers, such as 8 flash memory particles, 16 flash memory particles, etc., and the corresponding number of IO channels between the flash memory control module and these flash memory particles may be 8, 16, etc.
  • the flash memory control module can be implemented in the form of software and/or hardware.
  • For example, the flash memory control module may include a hardware circuit integrated in the hardware chip of the SSD controller, and it may also include a software function running on the hardware chip of the SSD controller.
  • the flash memory particles are composed of many storage logic units (or logic units for short, or storage media concurrent units).
  • Each IO channel includes multiple ways (Way), and each way corresponds to one storage logic unit to organize concurrent media operations; that is, multiple storage logic units can share one IO channel.
  • For example, IO channel 1 includes n ways, which respectively correspond to n storage logic units at the back end (i.e., logic unit W1 to logic unit Wn).
  • For example, each IO channel may include 8 ways (that is, n is 8), and each way corresponds to one storage logic unit.
  • one IO channel is equivalent to an 8-bit data bus (bus).
  • n can also be other values, which are not limited in this application.
  • Flash memory particles are a non-volatile random access storage medium, which is characterized by the fact that data does not disappear after a power failure, so it can be used as an external memory.
  • the storage logic unit inside the flash memory particle may further include a plurality of blocks, each block includes a plurality of word lines, and each word line includes a plurality of pages.
  • the block is the basic unit of the erase operation
  • the word line is the basic unit of the write operation
  • the page is the basic unit of the read operation.
  • the storage logic unit may include 2048 blocks, each block may include 256 word lines, each word line may include 3 pages, and each page may have a data storage capacity of 32KB.
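  • Multiplying out these example figures gives the capacity implied for one storage logic unit (the numbers are illustrative and need not match the 32GByte example quoted earlier in the text):

```python
# Per-unit geometry from the example: blocks x word lines x pages x page size.
BLOCKS = 2048
WORDLINES_PER_BLOCK = 256
PAGES_PER_WORDLINE = 3
PAGE_KB = 32

pages_total = BLOCKS * WORDLINES_PER_BLOCK * PAGES_PER_WORDLINE
capacity_gib = pages_total * PAGE_KB / (1024 * 1024)   # KB -> GiB
```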
  • the data stored in the page may specifically include proprietary data and error correction codes.
  • the proprietary data is actual business data, and the error correction codes are used to implement independent error correction of the proprietary data.
  • Different SSD controllers may support different error correction codes.
  • For example, the error correction code can be an ECC check code (Error Correction Code, ECC), or a low-density parity check (LDPC) code, or another check code such as a BCH code, which is not limited in this application.
  • the size of the proprietary data can be set to an integer multiple of the data size of the error correction code.
  • the data size of the error correction code can be 512B, 256B, 128B or other values.
  • The operations on the storage logic units of the flash memory particles mainly include read operations, write operations, and erase operations. These operations are indicated by commands, and these commands are issued in units of bytes (Byte). Take the commands corresponding to the read operation and the write operation as examples.
  • When a storage logic unit accesses business data (or user data) for reading or writing, there are generally order and time constraints due to media characteristics and system implementation.
  • The read operation specifically includes the issuance of a read command (1us-5us), a read latency (50us-150us), the issuance of read status query commands (for example, issued periodically), and read data transmission.
  • The read latency is used to prepare the data being read inside the storage logic unit and does not occupy the IO channel; the occupancy time of the read data transmission depends on the IO channel bandwidth and the size of the transmitted data. For example, for 4KB of data and a 400MB/s transmission bandwidth, the time overhead of read data transmission is about 10us.
  • The write operation specifically includes the issuance of a write command, write data transmission, a write latency, and the issuance of write result query commands (for example, issued periodically).
  • the write latency is used to program and write data inside the storage logic unit, and does not occupy IO channels.
  • The time overhead of the write latency is, for example, about 3ms; the occupancy time of write data transmission depends on the IO channel bandwidth and the size of the transmitted data. For example, for 96KB of data to be written and a 400MB/s transmission bandwidth, the time overhead of write data transmission is about 240us.
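  • The transfer-time figures quoted above follow directly from size divided by bandwidth. A quick check (treating 1MB as 10^6 bytes and 1KB as 1024 bytes, so the results are approximate, matching the "about" in the text):

```python
def transfer_time_us(size_kb: float, bandwidth_mb_s: float) -> float:
    """Channel occupancy in microseconds for transferring size_kb kilobytes
    at bandwidth_mb_s MB/s (1 MB/s is numerically 1 byte per microsecond)."""
    return size_kb * 1024 / bandwidth_mb_s

read_xfer_us = transfer_time_us(4, 400)    # ~10us for 4KB, as in the text
write_xfer_us = transfer_time_us(96, 400)  # ~240us for 96KB, as in the text
```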
  • the write data transfer may also be referred to as a write data transfer operation for transferring the data to be written, or as the transfer of the data to be written, or as the transfer operation of the data to be written.
  • the storage of data in the storage logical unit of the flash memory particles is based on a fixed address mapping relationship, and the address mapping relationship refers to a mapping relationship (Map) between a logical address and a physical address.
  • the data is stored in the SSD disk system according to the physical address.
  • the host system performs a read/write operation on the SSD disk system, the host system sends a read/write operation request to the SSD system, and the read/write operation request carries the logical address of the data.
  • the SSD controller is provided with a Flash Translation Layer (FTL) module to complete the conversion from the logical address of the host system to the physical address of the SSD disk system.
  • For example, the logical address carried in a read/write operation request is parsed into a physical address according to the fixed address mapping relationship, and the location of the data is then found in the flash memory particles.
  • the FTL module may be a software function running on the hardware chip of the SSD controller. In another implementation, the FTL module may also be a hardware chip integrated in the SSD controller in the form of a hardware circuit.
  • The read speed of a single-chip flash memory particle = read data volume / (read command issuing time + read latency + read data transmission time). For example, for a single-chip flash memory particle, if the read command issuance time is 2us, the read latency is 80us, and the read data transmission time is 10us, then the inherent read delay of the single-chip flash memory particle is at least 92us (2us + 80us + 10us).
  • Each storage logic unit on an I/O channel shares the data transmission bandwidth of the channel, and at any given time the I/O channel can execute the read/write commands or data transfer of only one storage logic unit.
  • Since the time for the media to perform a write operation is generally an order of magnitude longer than the time for a read operation, it is very likely that, while a certain storage logic unit is performing a write operation, a new read request from the host needs to be served; in this way, the flow of the read operation may conflict with the flow of the write operation.
  • the I/O channel is occupied by this process. If the SSD controller receives a new request from the host system and needs to read the data of storage logic unit W2, it must wait a long time (a read waiting period) for storage logic unit W1 to release the I/O channel; for example, the read waiting period in the figure can be as high as 240us.
  • the reading process for the data of storage logic unit W2 includes issuing a new read command to storage logic unit W2, going through the subsequent read latency, and performing the read data transmission, so from the host system's point of view the read delay can be as high as 332us (240us + 92us) or even longer.
  • multiple storage logic units sharing the same I/O channel may all need to perform write data transmission, and these storage logic units are queued for write data transmission.
  • if another storage logic unit needs to read data, it must wait for the queued write data transmissions to complete. The read delay seen by the host system then multiplies, possibly reaching milliseconds. This seriously affects the quality of service (QoS) of read operations.
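The conflict scenario just described reduces to simple arithmetic; the sketch below (hypothetical names, example figures from the text) shows how queued write transmissions inflate the host-visible read delay:

```python
WRITE_XFER_US = 240     # one write data transmission occupying the channel
INHERENT_READ_US = 92   # 2us command + 80us latency + 10us data transfer

def conflicted_read_delay_us(queued_write_transfers=1):
    """Without interleaving, a read must wait for every queued write
    transmission on the shared IO channel before it can even start."""
    return queued_write_transfers * WRITE_XFER_US + INHERENT_READ_US
```

One queued write yields the 332us of the example; a handful of queued writes already pushes the delay past a millisecond.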
  • FIG. 3 shows an existing solution.
  • the SSD controller receives a read data request issued by the host system, requesting to read the data in storage logic unit W2.
  • the SSD controller can incur additional suspend overhead to pause the write data transmission and prioritize the read operation flow instead; after the read operation flow is completed, the write data transmission is resumed.
  • this application provides a hybrid read-write implementation method, which can make full use of the IO channel bandwidth to achieve close multiplexing of read operations and write operations.
  • the impact on the write process is minimized, and the read delay can be effectively reduced.
  • FIG. 4 is a schematic flowchart of a method for implementing hybrid read and write provided by an embodiment of the present application, and the method is mainly described from the perspective of the controller.
  • the controller involved in this method may be, for example, an SSD controller.
  • the memory involved in this method may be, for example, a Flash chip.
  • the Flash chip may specifically be NAND Flash; the memory may also be a dynamic random access memory (Dynamic Random Access Memory, DRAM) or another type of non-volatile memory.
  • the following description is mainly based on the controller as the SSD controller and the memory as the flash memory particles.
  • the method includes but is not limited to the following steps:
  • S201: the controller determines that it needs to perform a write data transmission operation for transmitting the data to be written to the first logic unit of the memory.
  • the SSD controller receives a data write request and data to be written from the host system.
  • the data write request includes the logical address of the data to be written, so that it is determined according to the logical address that the data to be written needs to be written to the first logic unit of the flash memory particle, where the first logic unit is any storage logic unit in the flash memory particle (for example, the first logic unit may be referred to as storage logic unit W1).
  • the controller determines that the data to be written needs to be transmitted to the first logic unit through the IO channel (that is, the write data transmission operation needs to be performed).
  • S202: the controller determines that it needs to perform a read operation on the data in the second logic unit of the memory.
  • the second logic unit is any storage logic unit in the flash memory particle that is different from the first logic unit (for example, the second logic unit may be called storage logic unit W2); that is, the first logic unit and the second logic unit are located in the same flash memory particle and share the same IO channel.
  • the SSD controller receives a read data request from the host system; the read data request includes the logical address of the data to be read, from which it is determined that the data needs to be read from the second logic unit. Based on the read request, the controller determines that the second logic unit needs to be read through the IO channel, which specifically includes executing, through the IO channel, sub-operations such as read command issuance, read status query command issuance (for example, issued periodically) and read data transmission.
  • step S201 may be executed before S202, or may be executed after S202, and S201 and S202 may also be executed at the same time.
  • in one implementation, when the IO channel is performing the write data transmission process for the first logic unit, the SSD controller detects that the read operation for the second logic unit needs to be performed.
  • in another implementation, when the IO channel is performing the read operation for the second logic unit, the SSD controller detects that it needs to perform write data transmission for the first logic unit.
  • in yet another implementation, the SSD controller obtains the write data request and the read data request at the same time, such as the above requests from the host system, and thus determines that it needs to implement both the write data transmission to the first logic unit and the read operation targeting the second logic unit through the IO channel.
  • S203: the SSD controller processes the data to be written into multiple data packets.
  • the SSD controller breaks up the data to be written into multiple small packets of data (or called multiple data packets), thereby forming a small packet data queue.
  • FIG. 5A shows a scene in which a piece of data to be written is organized into multiple small packets of data. It can be seen that the business data in the data to be written is divided into multiple fine-grained data, and a multiple-bit error correction code is added to each fine-grained data to form multiple small packets of data.
  • the error correction code in the small packet data is calculated by a preset algorithm based on the data in the small packet data. Error correction codes can be used to correct data that may be wrong.
  • flash memory stores information as electrical signals, but the reliability of the electrical signals stored on the storage medium is not stable; this may cause errors in the data written to the flash memory particles.
  • the reliability of the data is ensured by carrying the data and the corresponding check code in each small packet of data.
  • the storage of packet data can be performed in word-line units. Because the error correction code can correct erroneous data, when the small packet data later needs to be read from the first logic unit, the SSD controller can check the data in the small packet data against the error correction code to determine whether the read data contains any error, so that the correct data can be obtained.
  • the error correction code can be an error correction code (Error Correction Code, ECC), a low density parity check (LDPC) code, or another check code, such as a Bose-Chaudhuri-Hocquenghem (BCH) code, which is not limited in this application.
  • the SSD controller can use an ECC algorithm to calculate the ECC check code based on the data in the packet data, for example, the Hamming algorithm, Reed-Solomon, or another ECC algorithm, which is not limited in this application.
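As a toy illustration of the error-correction principle (not the codes an SSD actually uses, which operate on kilobyte-sized payloads), a Hamming(7,4) code can correct any single flipped bit in a 7-bit codeword:

```python
def hamming74_encode(d):
    """d: four data bits [d1, d2, d3, d4] -> 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(cw):
    """Correct at most one flipped bit, then return the four data bits."""
    cw = list(cw)
    s1 = cw[0] ^ cw[2] ^ cw[4] ^ cw[6]
    s2 = cw[1] ^ cw[2] ^ cw[5] ^ cw[6]
    s3 = cw[3] ^ cw[4] ^ cw[5] ^ cw[6]
    err_pos = s1 + 2 * s2 + 4 * s3   # 0 = no error, else 1-based bit position
    if err_pos:
        cw[err_pos - 1] ^= 1
    return [cw[2], cw[4], cw[5], cw[6]]
```

This shows the read-back behavior the text relies on: a corrupted bit is located by the syndrome and repaired before the data is returned.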
  • the ECC check code has a certain range of error correction bits. For data of the same length, the longer the ECC check code, the stronger the error correction ability, that is, the greater the ability to allow data errors.
  • the data size of the error correction code can be 512B, 256B, 128B or other values.
  • the size of the data in the small packet data can be set to an integer multiple of the data size of the error correction code.
  • the size of the data in the small packet data may be N*512B, and N is a natural number greater than zero.
  • each small packet of data can specifically include 4KB of data and 512B of ECC check code.
  • the size of the business data in the data to be written is 96KB, so the data to be written is organized into 24 small packets of data, each of which includes 4KB of data and a 512B ECC check code.
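The packetization just described can be sketched as follows; the check field here is a placeholder XOR checksum standing in for a real ECC/LDPC/BCH code, and all names are illustrative:

```python
DATA_CHUNK = 4 * 1024   # 4KB of business data per small packet
CHECK_SIZE = 512        # 512B check field appended to each chunk

def make_check_code(chunk: bytes) -> bytes:
    """Placeholder check field: a repeated XOR byte, NOT a real ECC."""
    x = 0
    for b in chunk:
        x ^= b
    return bytes([x]) * CHECK_SIZE

def packetize(data: bytes):
    """Split the data to be written into small packets of data."""
    packets = []
    for off in range(0, len(data), DATA_CHUNK):
        chunk = data[off:off + DATA_CHUNK]
        packets.append(chunk + make_check_code(chunk))
    return packets
```

Applied to 96KB of business data, this yields the 24 packets of 4KB + 512B from the example.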
  • the so-called "data to be written" may refer to the complete data to be written sent by the host to the SSD controller; that is, the data to be written sent by the host to the SSD controller has not yet been transmitted to the flash memory particle, and the SSD controller organizes the complete data to be written into multiple data packets (i.e., a small packet data queue). For the specific implementation process, refer to the description of FIG. 5A.
  • after receiving the data to be written from the host, the SSD controller processes the complete data to be written into multiple data packets, regardless of whether a read operation requirement as shown in S202 occurs.
  • the SSD controller may first determine the data volume of the complete data to be written; if the data volume of the complete data to be written is greater than a preset threshold, the complete data to be written is processed into multiple data packets.
  • the preset threshold may be, for example, 16KB, 32KB, 64KB, etc., which is not specifically limited in this application.
  • the so-called "data to be written" may also refer to part of the complete data to be written that the host sends to the SSD controller; that is, the SSD controller leaves one part of the complete data to be written unsplit, and organizes the other part of the complete data to be written (that is, the remaining data to be written) into multiple data packets.
  • after receiving the complete data to be written from the host, the SSD controller transmits the data to be written to the first logic unit through the IO channel.
  • at a certain point during the write data transmission process, the SSD controller receives a read data request issued by the host and determines that it needs to perform a read operation on the second logic unit through the IO channel.
  • at that point, part of the data to be written has already been transmitted to the first logic unit, so the SSD controller processes the data to be written that has not yet been transmitted (that is, the remaining data to be written) into multiple data packets.
  • step S203 may be executed before S202 or after S202, and S203 and S202 may also be executed at the same time.
  • step S203 may be executed before S201 or after S201, and S203 and S201 may also be executed at the same time.
  • S204: the SSD controller alternately executes, through the IO channel, the transmission of each data packet and each sub-operation of the read operation.
  • that is to say, the SSD controller interleaves (Interleave), in transmission over the same IO channel, the transmission operations of the data packets with the sub-operations of the read operation: the transmission of the small packet data queue formed in S203 is interleaved with sub-operations such as read command issuance, status query, and read data transmission for the second logic unit. A transmission pipeline with maximum channel bandwidth utilization is thus formed on the same IO channel.
  • in specific implementation, the SSD controller can transmit one or more consecutive data packets through the IO channel according to the queue order of the multiple data packets; the one or more consecutive data packets are one or more sequentially consecutive data packets (small packets of data) among the multiple data packets organized from the data to be written. After the transmission of the one or more consecutive data packets is completed, a certain sub-operation can be executed through the IO channel.
  • the sub-operation is one of the sub-operations of the read operation (that is, the read command issuance operation, the status query operation, the read data transmission operation, etc.); after completing that sub-operation, the next one or more consecutive data packets following them in the queue order are transmitted through the IO channel, so as to realize the interleaving of the data packet transmission operations and the sub-operations of the read operation.
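The alternation just described can be sketched as a small scheduler loop; everything here (function names, the readiness predicate) is illustrative, not the patent's actual implementation. Packets are sent in queue order, and at each packet boundary a pending read sub-operation is slipped into the channel if it is ready:

```python
from collections import deque

def interleave(packets, read_subops, subop_ready):
    """packets: queue of write packets; read_subops: ordered read
    sub-operations; subop_ready(name, boundary) -> True once the
    sub-operation may be issued at this packet boundary."""
    pkts, subs, timeline = deque(packets), deque(read_subops), []
    boundary = 0
    while pkts or subs:
        if subs and subop_ready(subs[0], boundary):
            timeline.append(("read", subs.popleft()))   # brief channel use
        elif pkts:
            timeline.append(("write", pkts.popleft()))  # next small packet
            boundary += 1
        else:
            boundary += 1  # channel idle, waiting for read latency to elapse
    return timeline
```

A readiness predicate that releases the status query only after roughly a read-latency's worth of packet boundaries reproduces the ordering in the text: read command, then packets, then query, then read data transfer.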
  • the SSD controller initiates a write data transmission to the logic unit W1.
  • the SSD controller transmits to the logic unit W1 through the IO channel in the sequence of the small packet data queue.
  • Each small packet of data occupies a relatively short time in the channel (for example, it occupies a channel of 10 us), and the total superimposed duration of the transmission of all small packets of data is, for example, 240 us.
  • the SSD controller interleaves the read operation process for the logic unit W2 in the execution flow of the small packet data queue.
  • after completing the transmission of certain small packets of data (such as one or more consecutive first data packets), the SSD controller immediately issues a read command to the second logic unit (for example, occupying the channel for 2us), and after the read command has been transmitted, immediately transmits the next small packets of data (such as one or more consecutive second data packets). After receiving the read command, the second logic unit enters the read latency to prepare the data for reading. Since the read latency elapses inside the second logic unit, this preparation does not occupy the channel; the total time overhead of the read latency is, for example, 80us.
  • after a preset period of time (for example, 50us), the SSD controller starts issuing periodic status query commands, for example querying the status of logic unit W2 every 10us, to confirm whether logic unit W2 has completed its read data preparation. Each status query command issuance occupies the channel only briefly (for example, 2us), and as soon as a status query command has been issued, the next small packet data transmission proceeds.
  • the issuance of the periodic status query commands is therefore also interleaved with the transmission of the small packets of data; that is, the channel can continue to be used to transmit small packets of data in the gap between two status query command issuances.
  • at some point the read latency period ends. Then, after completing the transmission of certain small packets of data (such as one or more consecutive third data packets), the SSD controller initiates the read data transmission from logic unit W2; the read data transmission process (for example, occupying the channel for 10us) may specifically include the issuance of the read data transmission command and the return of the read data. After completing the read data return, the SSD controller continues to transmit the next small packet of data.
  • the time difference from the execution of the read command issuance operation to the execution of the status query operation can be greater than or equal to the duration of transmitting the one or more second data packets; the time difference from the execution of the status query operation to the execution of the read data transmission operation can be greater than or equal to the duration of transmitting the one or more third data packets.
  • the data in the read data transmission may be data in the form of one or more data packets (for example, 4KB of data + 512B of ECC check code).
  • the data in the read data transmission may also be a complete data segment.
  • the entire read operation flow can thus be completed in the middle of the write data transmission process.
  • the read delay for logic unit W2 is only 92us (that is, read command issuance 2us + read latency 80us + read data transmission 10us), while the total duration of the write data transmission is only 256us (that is, 240us of pure write transmission time plus the channel time occupied by the interleaved read sub-operations).
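The figures quoted above can be cross-checked with a few lines of arithmetic. The assumption that exactly two status query commands occupy the channel is ours, chosen to match the 256us total; the text only says the queries are periodic:

```python
PKT_US, CMD_US, QUERY_US, RDATA_US, LATENCY_US = 10, 2, 2, 10, 80
N_PACKETS, N_QUERIES = 24, 2   # 24 small packets; 2 queries assumed on-channel

def interleaved_read_delay_us():
    # The read itself still needs only: command issue + latency + data return.
    return CMD_US + LATENCY_US + RDATA_US

def interleaved_write_total_us():
    # The write transmission stretches only by the channel time the read
    # sub-operations actually occupy (no suspend/resume of the write).
    read_channel_time = CMD_US + N_QUERIES * QUERY_US + RDATA_US
    return N_PACKETS * PKT_US + read_channel_time
```

Compare this with the un-interleaved case, where the same read would see 332us or more.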
  • the scheme of this application can not only fully and effectively use the entire bandwidth of the IO channel, but also avoid interference with the read operation flow (for example, read waits), minimizing the read delay.
  • the scheme of this application can not only fully and effectively use the entire bandwidth of the IO channel, but also avoid interference with the write transmission process (such as write suspension); that is, it can guarantee the lowest read delay while minimizing the impact on the write delay.
  • FIG. 4 mainly describes the solution to the scenario of mixed read-write conflicts of two storage logic units (W1 and W2), those skilled in the art can understand that the above technical ideas can also be used. It is applied to a scenario of mixed read-write conflicts with more storage logic units. For example, there are one or more storage logic units that need to perform write data transmission, and another one or more storage logic units need to perform read operations. Based on the description of the present application, those skilled in the art will understand the specific implementation process, which will not be elaborated herein.
  • it can be seen that, in a mixed read-write scenario, the SSD controller interleaves the transmission operations of the data packets of the data to be written with the sub-operations of the read operation during transmission on the IO channel, so that the read/write operations of different storage logic units of the same flash memory particle can be multiplexed together, maximizing the use of the data transmission bandwidth of the IO channel and the scheduling of all logic units, so that idle storage logic units can be scheduled and put to work as soon as possible, reducing the read delay and write delay of read operations and improving the user experience.
  • the controller involved in this method may be, for example, an SSD controller.
  • the memory involved in this method may be, for example, a Flash chip.
  • the Flash chip may specifically be NAND Flash; the memory may also be a dynamic random access memory (Dynamic Random Access Memory, DRAM) or another type of non-volatile memory.
  • S301: after receiving the data to be written from the host, the SSD controller organizes the data to be written into multiple data packets, for example 24 small packets of data, each of which includes 4KB of data and a 512B ECC check code, so as to form a small packet data queue, and determines that the small packet data queue needs to be transmitted to a storage logic unit of the IO channel (for example, logic unit W1) (that is, a write data transmission operation for transmitting the small packet data queue needs to be performed). For the specific implementation of this step, reference may be made to the related descriptions of S201 and S203 in the foregoing embodiment of FIG. 4, which are not repeated here.
  • S302: the SSD controller transmits the small packets of data sequentially according to the small packet data queue.
  • S303: at the granular boundary of small packet data transmission (for example, every 4KB), the SSD controller detects whether any other storage logic unit on the same IO channel needs to perform a read operation or a sub-operation of a read operation, for example, whether a read command needs to be issued, a status query command needs to be issued, or read data needs to be transmitted.
  • the SSD controller can actively check whether other storage logic units have high-priority service data that needs to perform a read operation or perform a certain sub-operation in the read operation after each small packet of data is transmitted.
  • the SSD controller can also check whether other storage logic units have high-priority service data that needs to perform a read operation or perform a certain sub-operation in the read operation after several small packets of data are transmitted.
  • if another storage logic unit needs to perform a read operation, step S304 is executed next.
  • if no other storage logic unit needs to perform a read operation, the flow returns to step S302 to transmit the next small packet of data.
  • S304: the SSD controller alternately executes, through the IO channel, the transmission of each data packet and each sub-operation of the read operation.
  • the SSD controller receives a data read request (carrying a logical address) issued by the host system to the SSD disk system through the HOST interface, requesting to read data in another storage logical unit (such as logical unit W2) of the same IO channel .
  • the SSD controller parses the read data request to obtain the logical address, and queries the mapping table from the logical address to the physical address through the FTL module to obtain information such as the IO channel number, storage logical unit number, storage location, etc., for example, to determine that the read data request needs to access the logical unit For the business data in W2, the logical unit W2 and the logical unit W1 share the same IO channel.
  • for the specific implementation process of each data packet transmission operation of logic unit W1 and each sub-operation of the read operation of logic unit W2, refer to the related description of step S204 in the embodiment of FIG. 4, which is not repeated here.
  • S305: determine whether the read operation flow has been completed. If it has been completed, the SSD controller can transmit the read data corresponding to the read data request to the host and report that the read completed normally, and then continue with step S306; if it has not been completed, the flow returns to step S304 to continue alternately executing the transmission of the remaining data packets and the remaining sub-operations of the read operation.
  • S306: continue with steps S302 and S303; that is, continue to transmit the remaining small packets of data sequentially according to the small packet data queue, and check at the granular boundary of small packet data transmission whether any other storage logic unit on the same IO channel needs to perform a read operation or a sub-operation of a read operation.
  • the SSD disk system continues to perform other related processing procedures of the write operation.
  • for example, logic unit W1 enters the write latency period, or the SSD controller issues a write status query to logic unit W1.
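The loop from S302 to S306 above can be sketched as follows; `pending_read_subop` and `run` are hypothetical hooks, not functions from the patent:

```python
def transmit_with_boundary_checks(packet_queue, pending_read_subop, run):
    """Send write packets in order; at every packet boundary, drain any
    pending read sub-operations for other logic units on the channel."""
    for packet in packet_queue:
        run(("write_packet", packet))      # S302: transmit next small packet
        subop = pending_read_subop()       # S303: check at the boundary
        while subop is not None:           # S304: interleave read sub-op
            run(("read_subop", subop))
            subop = pending_read_subop()
```

The key design point is that read sub-operations are only admitted at small-packet boundaries, so the write stream is never suspended mid-transfer.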
  • the SSD controller can interleave the sub-operations of the read operation and the transmission operations of the data packets of the write data, so that the read/write operations of different storage logic units of the same flash memory particle can be multiplexed together to maximize the use of the IO channel Data transmission bandwidth and scheduling of all logical units, so that idle storage logical units can be scheduled and work as soon as possible, reduce the read delay and write delay of read operations, and improve user experience.
  • the embodiment of the present application provides a solid-state hard disk.
  • the solid-state hard disk includes a controller 50 and a memory.
  • the controller may be, for example, an SSD controller.
  • the memory may be, for example, a Flash chip, and the Flash chip may specifically be NAND Flash; the memory may also be a dynamic random access memory (Dynamic Random Access Memory, DRAM) or another type of non-volatile memory.
  • the controller further includes: a write determination unit 501, a read determination unit 502, a data processing unit 503, and an alternate reading and writing unit 504.
  • the write determination unit 501, the read determination unit 502, the data processing unit 503, and the alternate reading and writing unit 504 are applied to the flash memory control module shown in FIG.
  • the SSD controller may include, for example, one or more processors (CPUs), and the one or more processors may be integrated on the same hardware chip.
  • the flash memory control module may be implemented in software and/or hardware form.
  • the flash memory control module may include a hardware circuit integrated in the hardware chip of the SSD controller, or may include a software function running on the hardware chip of the SSD controller.
  • the FTL module can be a software function running on the hardware chip of the SSD controller, and the FTL module can also be integrated in the SSD controller in the form of a hardware circuit. Specifically:
  • the write determination unit 501 is configured to determine that a write data transmission operation for transmitting the data to be written needs to be performed to the first logic unit of the memory;
  • the read determination unit 502 is configured to determine that it is necessary to perform a read operation on the data in the second logic unit of the memory; wherein, the first logic unit and the second logic unit share an IO channel;
  • the data processing unit 503 is configured to process the data to be written into multiple data packets
  • the alternate reading and writing unit 504 is configured to alternately execute, through the IO channel, the transmission of each data packet and each sub-operation of the read operation; the sub-operations of the read operation include a read command issuance operation, a status query operation, and a read data transmission operation.
  • Each functional module of the controller can be used to implement the method shown in Fig. 4 or Fig. 7.
  • the write determination unit 501 can be used to perform step S201
  • the read determination unit 502 can be used to perform S202
  • the data processing unit 503 can be used to perform S203
  • the alternate reading and writing unit 504 can be used to perform S204.
  • the solid-state hard disk includes a controller 521 and a memory 522.
  • the controller 521 may be, for example, an SSD controller.
  • the memory 522 may be, for example, a Flash chip, and the Flash chip may specifically be a flash memory particle (NAND Flash); the memory 522 may also be a dynamic random access memory (Dynamic Random Access Memory, DRAM) or another type of non-volatile memory.
  • the memory 522 can be used to store data, and perform operations such as read/write/erase based on commands from the controller 521.
  • the controller 521 is configured to execute program instructions.
  • the program instructions may be stored in the memory 522, and the program instructions may also be stored in other dedicated memories (not shown).
  • the dedicated memories include but are not limited to random access memory (RAM), read-only memory (ROM), cache (cache), etc.
  • the controller 521 is specifically configured to call the program instructions to execute the method described in the embodiment of FIG. 4 or FIG. 7.
  • an embodiment of the present application provides a system, the system including: a host 601 and a solid-state hard disk 602; in one example, the host 601 is, for example, the host system shown in FIG. 1, and the solid-state hard disk 602 is, for example, the SSD disk system shown in FIG. 1.
  • the solid-state hard disk 602 is, for example, the solid-state hard disk involved in the embodiment of FIG. 7. Specifically:
  • the host 601 is used to send a write operation request and a read operation request to the solid state hard disk 602.
  • the solid-state hard disk 602 is configured to implement the method shown in FIG. 4 or FIG. 7 according to the write operation request and the read operation request.
  • the program can be stored in a computer-readable storage medium, and when the program is executed, it may include the content of the foregoing method embodiments.
  • the readable storage medium mentioned above can be a memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, register, hard disk, removable disk, CD-ROM, magnetic disk, optical disc, or any other form of storage medium known in the technical field.


Abstract

This application discloses a method and device for implementing hybrid reading and writing. The method includes: a controller determines that a write data transmission operation for transmitting data to be written needs to be performed for a first logic unit of a memory, and determines that a read operation needs to be performed on data in a second logic unit, where the first logic unit and the second logic unit share an IO channel; the data to be written is processed into multiple data packets; transmission of each data packet and each sub-operation of the read operation are executed alternately through the IO channel; the sub-operations include a read command issuance operation, a status query operation, and a read data transmission operation. In a mixed read-write scenario, this application can effectively reduce the read delay while having essentially no impact on the progress of write operations, improving the response speed of read operations and the user experience.

Description

Method and Device for Implementing Hybrid Reading and Writing on a Solid-State Hard Disk

Technical Field
This application relates to the field of storage technology, and in particular, to a method and device for implementing hybrid reading and writing.
Background Art
A solid-state disk (SSD, Solid State Disk) disk system is a hard disk composed of a controller and a memory, where the memory may be, for example, a dynamic random access memory (Dynamic Random Access Memory, DRAM) or a flash (Flash) chip; the Flash chip may specifically be NAND flash (NAND Flash). Compared with a traditional hard disk drive (Hard Disk Drive, HDD), an SSD offers larger capacity, higher read and write bandwidth, more read/write operations per second (Input/Output Per Second, IOPS), and better quality of service (Quality of Service, QoS).
An SSD disk system usually uses multiple memories to form a storage array. For an SSD disk system using multiple NAND Flash chips, each independent NAND Flash may also be called a flash memory particle. A flash memory particle is internally composed of many storage logic units (also called storage medium concurrency units), and the capacity and read/write performance of a single storage logic unit are not high. For example, a single storage logic unit has a capacity of 32 GByte, a maximum read bandwidth of 150 MB/s, and a maximum write bandwidth of 30 MB/s. Therefore, the read/write efficiency of a single storage logic unit is relatively low, and the high performance of the whole SSD disk system depends on concurrent operation of multiple storage logic units. The more storage logic units in the flash memory particles operate concurrently, the higher the read/write performance of the SSD disk system; the storage logic units in a single flash memory particle share the same input/output (Input/Output, IO) channel.
In a mixed host-system access model, because the SSD disk system generally handles write operations in a write-back (Write Back) manner, the host usually does not see the high write-latency delay of the storage logic units. The environment faced by read operations, however, is more complex: the storage logic unit must actually be accessed and the data returned. If a read operation on one storage logic unit conflicts in the channel with a write data transmission of another storage logic unit, the host can see a large read delay (a conflict long-tail delay); the read delay corresponding to the read operation may be on the order of hundreds of microseconds or even milliseconds, which is unacceptable for some hosts.
Summary of the Invention
The embodiments of this application provide a method and device for implementing hybrid reading and writing, which, in a mixed read-write scenario, effectively reduce the read delay while having essentially no impact on the progress of write operations, improve the response speed of read operations, and improve the user experience.
In a first aspect, an embodiment of this application provides a method for implementing hybrid reading and writing on a solid-state hard disk. The method includes: a controller determines that a write data transmission operation for transmitting data to be written needs to be performed for a first logic unit of a memory; the controller determines that a read operation needs to be performed on data in a second logic unit of the memory, where the first logic unit and the second logic unit share an input/output (IO) channel; the controller processes the data to be written into multiple data packets; and the controller alternately executes, through the IO channel, transmission of each data packet and each sub-operation of the read operation, where the sub-operations of the read operation include a read command issuance operation, a status query operation, and a read data transmission operation.
It can be seen that, by implementing the method of this application, when the SSD controller encounters a mixed read-write scenario, it can organize the data to be written into a queue of multiple data packets (also called a small packet data queue), and interleave the transmission operations of the data packets with the sub-operations of the read operation during transmission on the IO channel, so that the read/write operations of different storage logic units of the same flash memory particle can be multiplexed together. This maximizes the use of the data transmission bandwidth of the IO channel and the scheduling of all logic units, allows idle storage logic units to be scheduled and put to work as early as possible, reduces the read delay and write delay of read operations, improves the response speed of read operations, and improves the user experience.
Based on the first aspect, in a possible implementation, the controller is, for example, an SSD controller, and the memory is, for example, a NAND flash (NAND Flash, also called a flash memory particle) or a dynamic random access memory (DRAM).
The memory internally includes many logic units (also called storage logic units, or storage medium concurrency units). The controller controls the memory through an IO channel; the IO channel includes multiple ways (Way), each way corresponding to one logic unit to organize concurrent media operations. In other words, multiple logic units can share one IO channel. Each logic unit may further include multiple blocks (block), each block includes multiple word lines (word-line), and each word line includes multiple pages (page). A block is the basic unit of an erase operation, a word line is the basic unit of a write operation, and a page is the basic unit of a read operation.
The second logic unit is any logic unit in the memory that is different from the first logic unit; that is, the first logic unit and the second logic unit may be located in the same memory (for example, in the same flash memory particle) and share the same IO channel.
Based on the first aspect, in a possible implementation, the controller processing the data to be written into multiple data packets includes: after receiving the data to be written from the host, and before determining that the write data transmission operation and the read operation need to be performed, the controller processes the data to be written into multiple data packets.
That is to say, in one embodiment, after receiving the data to be written from the host, the SSD controller processes the complete data to be written into multiple data packets. The specific process can be described as follows:
The controller receives the data to be written from the host; the controller processes the data to be written into multiple data packets; the controller determines that the multiple data packets need to be transmitted to the first logic unit of the memory; the controller determines that a read operation needs to be performed on the data in the second logic unit of the memory, where the first logic unit and the second logic unit share an input/output (IO) channel; the controller alternately executes, through the IO channel, transmission of each data packet and each sub-operation of the read operation; the sub-operations of the read operation include a read command issuance operation, a status query operation, and a read data transmission operation.
It can be seen that, in this embodiment of the application, upon receiving the data to be written from the host, the SSD controller can organize the data to be written into a queue of multiple data packets (a small packet data queue). During the SSD controller's transmission of the small packet data queue, if a read-operation-related command is encountered, and the logic unit targeted by the write data transmission and the logic unit targeted by the read operation share the same IO channel, the SSD controller can interleave the sub-operations of the read operation with the transmission operations of the data packets of the write data, so that the read/write operations of different storage logic units of the same flash memory particle can be multiplexed together, maximizing the use of the data transmission bandwidth of the IO channel and the scheduling of all logic units, allowing idle storage logic units to be scheduled and put to work as early as possible, reducing the read delay and write delay and improving the user experience.
Based on the first aspect, in a possible implementation, the "data to be written" may refer to the complete data to be written sent by the host to the SSD controller; that is, the data to be written sent by the host to the SSD controller has not yet been transmitted to the flash memory particle, and the SSD controller organizes the complete data to be written into multiple data packets (a small packet data queue).
Based on the first aspect, in a possible implementation, the "data to be written" may also refer to part of the complete data to be written sent by the host to the SSD controller; that is, the SSD controller leaves one part of the complete data to be written unsplit, and organizes the other part of the complete data to be written (the remaining data to be written) into multiple data packets.
For example, after receiving the complete data to be written from the host, the SSD controller transmits the data to be written to the first logic unit through the IO channel. At a certain point during the write data transmission process, the SSD controller receives a read data request issued by the host and determines that a read operation needs to be performed on the second logic unit through the IO channel. At that point, part of the data to be written has already been transmitted to the first logic unit, so the SSD controller processes the data to be written that has not yet been transmitted (the remaining data to be written) into multiple data packets.
基于第一方面,在可能的实施方式中,所述控制器将所述待写入数据处理成多个数据包,包括:所述控制器将所述待写入数据切分成多份数据;所述控制器对所述多份数据中的每份数据分别添加纠错码,从而获得多个数据包。
其中,所述纠错码例如为数据校验码(ECC)或者低密度奇偶校验码(LDPC)或者博斯-查德胡里-霍昆格母(BCH)码。
由于SSD盘片系统中的闪存颗粒采用电信号作为信息存储的物理形式,但电信号在存储介质上存储的可靠性并不稳定,这种情况导致写入到闪存颗粒上的数据可能会出错。本申请实施例中,通过在每个小包数据中携带数据和对应的校验码来避免出错,保证数据读取的可靠性和正确性。
基于第一方面,在可能的实施方式中,所述纠错码为ECC校验码,SSD控制器可根据小包数据中的数据,以ECC算法计算ECC校验码,例如,汉明(Hamming)算法、里德-所罗门(Reed-Solomon)或其他ECC算法,本申请对此不做限定。
基于第一方面,在可能的实施方式中,所述控制器通过所述IO通道,交替执行传输各个所述数据包和执行所述读操作的各个子操作,包括:所述控制器根据所述多个数据包的队列顺序,通过所述IO通道传输一个或多个第一数据包;所述一个或多个第一数据包为所述多个数据包中的一个或多个顺序连续的数据包;所述控制器在完成所述一个或多个第一数据包的传输后,通过所述IO通道执行第一子操作;所述第一子操作为所述读操作的所述各个子操作中的一个子操作;所述控制器在完成执行所述第一子操作后,通过所述IO通道传输所述队列顺序中排在所述一个或多个第一数据包后的一个或多个第二数据包,所述一个或多个第二数据包为所述多个数据包中的一个或多个顺序连续的数据包。
可以看到,实施本申请实施例,在同一IO通道中,分属不同逻辑单元的至少一种子操作和小包数据的传输操作可以交织执行,从而能够降低读时延,减少对写传输的干扰。
基于第一方面,在可能的实施方式中,所述第一子操作为所述读命令下发操作;相应的,所述一个或多个第一数据包在所述IO通道中的传输可发生在所述读操作的读命令下发之前。所述一个或多个第二数据包在所述IO通道中的传输可发生在所述读操作的读命令下发之后、所述读操作的状态查询命令下发之前。所述控制器在完成通过所述IO通道传输所述队列顺序中排在所述一个或多个第一数据包后的一个或多个第二数据包之后,还包括:所述控制器通过所述IO通道执行所述状态查询操作;所述控制器在完成执行所述状态查询操作后,通过所述IO通道传输所述队列顺序中排在所述一个或多个第二数据包后的一个或多个第三数据包,所述一个或多个第三数据包为所述多个数据包中的一个或多个顺序连续的数据包;所述一个或多个第三数据包在所述IO通道中的传输可发生在所述读操作的状态查询命令下发之后、所述读操作的读数据传输操作之前。所述控制器在完成通过所述IO通道传输所述队列顺序中排在所述一个或多个第二数据包后的一个或多个第三数据包后,通过所述IO通道执行所述读数据传输操作。
可以看到,实施本申请实施例,在同一IO通道中,分属不同逻辑单元的各个数据包的传输操作和读操作的读命令下发、状态查询和读数据传输等子操作能够交织在一起执行。从而,在同一个IO通道上形成了一个最大通道带宽利用率的传输流水线(Pipeline),所以实施本申请实施例能够高效地利用整个通道带宽,有效降低读时延,且最大程度减少对写传输的干扰。
基于第一方面,在可能的实施方式中,在所述IO通道,从执行所述读命令下发操作到执行所述状态查询操作的时间差大于等于传输所述一个或多个第二数据包的时长;从执行所述状态查询操作到执行所述读数据传输操作的时间差大于等于传输一个或多个所述第三数据包的时长。
实施本申请实施例,能够保证分属不同逻辑单元的各个子操作和小包数据的传输操作可以交织执行,从而高效地利用整个通道带宽,有效降低读时延,且最大程度减少对写传输的干扰。
基于第一方面,在可能的实施方式中,所述控制器确定需要向存储器的第一逻辑单元执行用于传输待写入数据的写数据传输操作,包括:所述控制器接收到来自主机的写数据请求和所述待写入数据,所述写数据请求包括所述待写入数据的逻辑地址;所述控制器根据所述待写入数据的逻辑地址确定需要将所述待写入数据写入到所述存储器的第一逻辑单元。
其中,写数据请求和所述待写入数据可以是主机同时向所述控制器下发的,也可以是先下发写数据请求到所述控制器,然后再下发所述待写入数据到所述控制器。
基于第一方面,在可能的实施方式中,所述控制器确定需要对所述存储器的第二逻辑单元中的数据执行读操作,包括:所述控制器接收到来自主机的读数据请求,所述读数据请求包括待读取的数据的逻辑地址;所述控制器根据所述待读取的数据的逻辑地址确定需要对所述存储器的第二逻辑单元执行读操作。
第二方面,本申请实施例提供了一种固态硬盘,包括控制器和存储器,所述控制器包括:写确定单元,用于确定需要向存储器的第一逻辑单元执行用于传输待写入数据的写数据传输操作;以及,读确定单元,用于确定需要对所述存储器的第二逻辑单元中的数据执行读操作;其中,所述第一逻辑单元和所述第二逻辑单元共享IO通道;数据处理单元,用于将所述待写入数据处理成多个数据包;交替读写单元,用于通过所述IO通道,交替执行传输各个所述数据包和执行所述读操作的各个子操作;所述读操作的各个子操作包括读命令下发操作、状态查询操作和读数据传输操作。
所述控制器的各功能单元可用于实现第一方面所描述的方法。
第三方面,本申请实施例提供了一种固态硬盘,包括:控制器和存储器;所述控制器和所述存储器通过总线连接或耦合在一起;其中,所述存储器用于存储程序指令,所述控制器用于调用所述存储器存储的程序指令,以执行如第一方面任一项可能的实施方式所述的方法。
第四方面,本申请实施例提供了一种系统,其特征在于,包括:主机和固态硬盘;所述主机和所述固态硬盘通信连接;其中,所述主机用于向所述固态硬盘发送写操作请求和/或读操作请求,所述固态硬盘为如第二方面所述的固态硬盘,或者,所述固态硬盘为如第三方面所述的固态硬盘。
第五方面,本申请实施例提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令当被处理器执行时,使所述处理器执行如第一方面任意实施方式所描述的方法。
第六方面,本申请实施例提供了一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得计算机实现第一方面任意实施方式描述的方法。
可以看到,实施本申请,在固态硬盘混合读写场景中,比如当SSD控制器写数据传输过程中遇到读操作相关命令时,如果写数据传输针对的逻辑单元和读操作针对的逻辑单元同享同一IO通道,则SSD控制器可将读操作的各个子操作和写数据的各个数据包的传输操作交织,使得同一闪存颗粒的不同存储逻辑单元的读/写操作能够复用在一起,最大限度地使用IO通道的数据传输带宽和调度所有的逻辑单元,让空闲的存储逻辑单元可以尽早地得到调度和工作,降低读操作的读延迟和写延迟,提升用户使用体验。
附图说明
图1是一种应用本申请的与非门闪存(NAND Flash)控制系统的示例性框架图;
图2是一种现有方案中针对混合读写场景的处理方案示意图;
图3是又一种现有方案中针对混合读写场景的处理方案示意图;
图4是本申请实施例提供的一种混合读写的实现方法的流程示意图;
图5A是本申请实施例提供的一种将待写入数据组织成多个小包数据的场景示意图;
图5B是本申请实施例提供的又一种将待写入数据组织成多个小包数据的场景示意图;
图6是本申请实施例提供的一种针对混合读写场景的处理方案示意图;
图7是本申请实施例提供的又一种混合读写的实现方法的流程示意图;
图8是本申请实施例提供的一种固态硬盘中的控制器的结构示意图;
图9是本申请实施例提供的一种固态硬盘的结构示意图;
图10是本申请实施例提供的一种系统的结构示意图。
具体实施方式
下面结合本申请实施例中的附图对本申请实施例进行描述。本申请的实施方式部分使用的术语仅用于对本申请的具体实施例进行解释,而非旨在限定本申请。
为便于方案理解,首先结合相关附图来举例介绍本申请实施例的方案可能应用到的系统架构。
参见图1,图1描述了一种应用本申请的与非门闪存(NAND Flash)控制系统的示例性框架图,NAND Flash控制系统包括主机系统(HOST)和SSD盘片系统,主机系统与SSD盘片系统之间可进行通信连接,主机系统可向SSD盘片系统下发操作命令,例如读操作、写操作或擦除操作的相关命令,还可与SSD盘片系统进行数据的交互,例如读取SSD盘片系统中的数据、向SSD盘片系统写入数据等。
SSD盘片系统进一步可包括SSD控制器(SSD Controller)和多个存储器,其中每个存储器又可被称为闪存颗粒或NAND Flash或DIE或LUN或PU或NAND存储器或NAND设备(NAND device)等,本文主要使用闪存颗粒的概念进行方案的描述。SSD控制器可用于实现对闪存颗粒执行命令下发、数据存储、读取、擦除等操作。
SSD控制器例如可以包括一个或多个处理器,例如,中央处理器(CPU,central processing unit),该一个或多个处理器可以集成在同一块硬件芯片上。
SSD控制器通过内部设置的HOST接口与主机系统进行通信连接,以接收来自主机系统的操作命令及数据,以及向主机系统反馈响应及相关数据。SSD控制器通过输入输出(Input/Output,IO)通道(Channel)与闪存颗粒进行交互,如发送命令,传输数据,查询状态等。每个IO通道上独立挂载了一个闪存颗粒。
SSD控制器中的闪存控制模块可通过该多个IO通道并行操作多个闪存颗粒,以提高系统的整体读写速率。图1中仅示例性示出了4个闪存颗粒(即闪存颗粒1-闪存颗粒4)以及分别对应的4个IO通道(即IO通道1-IO通道4)的情况,闪存控制模块可用于管理各个闪存颗粒的读、写、擦除操作。在其他示例中,闪存颗粒还可以是其他的数量,例如8个闪存颗粒、16个闪存颗粒等等,闪存控制模块与这些闪存颗粒之间对应的IO通道数相应可以是8、16等。
在一种实现中,闪存控制模块可以以软件和/或硬件形态予以实现,例如闪存控制模块可以包括集成于SSD控制器的硬件芯片的硬件电路,闪存控制模块还可以包括运行于SSD控制器的硬件芯片的软件功能。
闪存颗粒内部由很多存储逻辑单元(或简称逻辑单元,或可称为存储介质并发单元)组成。每个IO通道内包括多路(Way),每路对应一个存储逻辑单元来组织介质并发操作,也即是说,多个存储逻辑单元可共享一个IO通道。以图中的IO通道1为例,IO通道1包括n路,分别对应于后端的n个存储逻辑单元(即逻辑单元W1-逻辑单元Wn)。举例来说,每个IO通道可包括8路(即n为8),每路对应1个存储逻辑单元,此时一个IO通道相当于8位(8-bit)的数据总线(bus)。当然,n还可以是其他值,本申请中不做限定。
闪存颗粒是一种非易失性随机访问存储介质,其特点是断电后数据不消失,因此可以作为外部存储器使用。闪存颗粒内部的存储逻辑单元可进一步包括多个块(block),每个块包括多个字线(word-line),每个字线包括多个页(page)。其中,块为擦除操作的基本单元,字线是写操作的基本单元,页是读操作的基本单位。
以一种闪存颗粒器件为例,存储逻辑单元例如可包含2048个块,每个块例如包含256个字线,每个字线例如包含3个页,每个页例如具有32KB的数据存储容量,页中存储的数据具体可包括专有数据和纠错码,专有数据即为实际的业务数据,纠错码用于实现对专有数据的独立纠错。不同的SSD控制器支持的纠错码可能不同。例如,在本申请具体的应用场景中,纠错码可以是数据校验码(Error Correction Code,ECC),也可以是低密度奇偶校验码(Low Density Parity Check Code,LDPC),还可以是其他的校验码,例如BCH码等,本申请对此不做限定。通常的,专有数据的大小可设置为纠错码的数据大小的整数倍。纠错码的数据大小可以为512B、256B、128B或其他数值。
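作为示意,可以用一段Python代码复算上述器件参数下单个存储逻辑单元的容量(数值仅取自正文示例,并非某一具体器件的规格):

```python
# 按正文示例参数估算一个存储逻辑单元的容量:
# 2048 个块 × 每块 256 条字线 × 每条字线 3 个页 × 每页 32KB
BLOCKS_PER_LUN = 2048
WORDLINES_PER_BLOCK = 256
PAGES_PER_WORDLINE = 3
PAGE_SIZE_KB = 32

lun_capacity_kb = (BLOCKS_PER_LUN * WORDLINES_PER_BLOCK
                   * PAGES_PER_WORDLINE * PAGE_SIZE_KB)
print(lun_capacity_kb // (1024 * 1024), "GB")  # 输出 48 GB
```

可见在该示例参数下,单个存储逻辑单元的容量约为48GB;实际器件的块数、页大小等参数依型号而异。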
对闪存颗粒的存储逻辑单元的操作主要有读操作、写操作和擦除操作。这些操作均需要命令来指示,这些命令都是以字节(Byte)为单位发布的。以读操作和写操作对应的相关命令为例,存储逻辑单元在做业务数据(或称用户数据)访问(读或者写)时,由于介质特性和系统实现,一般而言会有顺序和时间约束。
本申请实施例中,读操作具体包括了读命令下发(1us–5us)、读潜伏期(50us–150us)、读状态查询命令下发(例如周期性地下发)和读数据传输。其中,读潜伏期用于进行存储逻辑单元内部的数据读取的准备工作,不占用IO通道;读数据传输的占用时长取决于IO通道带宽以及传输数据大小,例如对于4KB数据、400MB/s传输带宽,读数据传输的时间开销约为10us。
本申请实施例中,写操作具体包括了写命令下发、写数据传输、写潜伏期和写结果查询命令下发(例如周期性地下发)。其中,写潜伏期用于进行存储逻辑单元内部的数据编程写入工作,不占用IO通道,时间开销例如约为3ms;写数据传输的占用时长取决于IO通道带宽以及传输数据大小,例如对于96KB的待写入数据、400MB/s传输带宽,写数据传输的时间开销约为240us。本文中,写数据传输也可称为用于传输待写入数据的写数据传输操作,或称为传输待写入数据,或称为待写入数据的传输操作。
通常,数据在闪存颗粒的存储逻辑单元中的存放依据固定地址映射关系,该地址映射关系是指逻辑地址对物理地址的映射关系(Map)。数据在SSD盘片系统根据物理地址进行存放。而主机系统对SSD盘片系统进行读/写操作时,主机系统发送读/写操作的请求至SSD系统,读/写操作的请求中携带数据的逻辑地址。
SSD控制器中设置有闪存转换层(Flash Translation Layer,FTL)模块,用以完成主机系统的逻辑地址到SSD盘片系统的物理地址的转换。SSD盘片系统每次将数据写入闪存颗粒中时,都会记录下该数据的逻辑地址到物理地址的映射关系,这样当SSD盘片系统收到主机系统发送的读操作的请求后,将数据的逻辑地址根据固定地址映射关系解析为物理地址,进而在闪存颗粒中查找数据的位置。
在一种实现中,FTL模块可以是运行在SSD控制器的硬件芯片上的软件功能。在另一种实现中,FTL模块也可以是以硬件电路的形态集成于SSD控制器的硬件芯片。
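为便于理解FTL的查表过程,下面给出一个极简的Python示意。其中的类名与方法名均为说明而假设,真实FTL还需处理磨损均衡、垃圾回收等,这里全部省略:

```python
# 一个极简的 FTL 逻辑地址到物理地址映射示意(仅用于说明查表过程)
class SimpleFTL:
    def __init__(self):
        # 逻辑页号 -> (IO通道号, 存储逻辑单元号, 物理页号)
        self.l2p = {}

    def record_write(self, lpn, channel, lun, ppn):
        # 每次将数据写入闪存颗粒时,记录逻辑地址到物理地址的映射关系
        self.l2p[lpn] = (channel, lun, ppn)

    def resolve_read(self, lpn):
        # 收到读数据请求后,将逻辑地址解析为物理地址
        return self.l2p[lpn]

ftl = SimpleFTL()
ftl.record_write(lpn=100, channel=1, lun=2, ppn=777)
print(ftl.resolve_read(100))  # 输出 (1, 2, 777)
```

该示意仅演示“写入时记录映射、读取时解析映射”这一基本流程,与正文中FTL模块的作用相对应。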
当需要从闪存颗粒的存储逻辑单元读取数据时,对单片闪存颗粒的读速度=读取数据量/(读命令下发时间+读潜伏期+读数据传输时间)。举例来说,对于某单片闪存颗粒,假如读命令下发时间为2us,读潜伏期为80us,读数据传输时间为10us,那么,单片闪存颗粒的固有读时延至少为92us(2us+80us+10us)。
然而,由于闪存颗粒的不同存储逻辑单元的命令和数据是共享I/O通道的,I/O通道内各存储逻辑单元共享该通道的数据传输带宽,且同一时间I/O通道只能执行一个存储逻辑单元的相关读/写命令或数据传输。
由于介质并发执行写操作的时间一般比执行读操作的时间高出一个数量级,很有可能针对某个存储逻辑单元在执行写操作的时候,又需要响应主机的一个新的请求进行读操作;这样,读操作的流程可能会与写操作的流程发生冲突。
例如参见图2,在一种现有场景中,当存储逻辑单元W1在进行写数据传输的流程时,I/O通道被该流程占据。如果SSD控制器接收主机系统新的请求需要读取存储逻辑单元W2的数据,需要花费较长的时间(读等待期)等待存储逻辑单元W1释放该I/O通道,例如在图示中读等待期可以高达240us。等待写数据传输完成后,对存储逻辑单元W2的数据的读过程包括新的读命令下发到存储逻辑单元W2、执行后续的读潜伏期以及进行读数据传输,所以在主机系统看来,读延迟可高达332us(240us+92us)甚至更长。
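上述时延可用一段简单的Python算式复核(时间参数均取自正文示例):

```python
# 正文示例中的时间参数(单位:us)
t_read_cmd = 2       # 读命令下发
t_read_latency = 80  # 读潜伏期(颗粒内部准备,不占用IO通道)
t_read_xfer = 10     # 读数据传输(约 4KB / 400MB/s)
t_write_xfer = 240   # 写数据传输(约 96KB / 400MB/s),即读等待期

# 单片闪存颗粒的固有读时延
intrinsic_read = t_read_cmd + t_read_latency + t_read_xfer
# 现有方案中主机侧观察到的读延迟 = 读等待期 + 固有读时延
host_read_delay = t_write_xfer + intrinsic_read
print(intrinsic_read, host_read_delay)  # 输出 92 332
```

即固有读时延为92us,而在写传输占据通道的现有场景下,主机侧读延迟升至332us。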
事实上,在另一些场景中,共享同一I/O通道的多个存储逻辑单元可能均需要进行写数据传输,这些存储逻辑单元已经排好队列进行写数据传输。在这种情况下,当又一存储逻辑单元需要进行数据读取时,则需要等待所述队列完成写数据传输,那么主机系统能看到的读延迟将成倍增加,甚至高达毫秒级别,这严重影响了读操作的服务质量(Quality of Service,QoS)。
针对上述读时延较大的问题,图3示出了一种现有解决方案。如图3所示,当I/O通道正在执行针对存储逻辑单元W1的写数据传输时,在某一时间点,SSD控制器收到主机系统下发的读数据请求,以请求读取存储逻辑单元W2中的数据。SSD控制器在遵循约束的前提下,可以消耗额外的挂起开销,暂停写数据传输的执行,转而优先执行读操作过程。等读操作过程完成后,再重新恢复写数据传输。
然而,这种方案虽然保证了读操作过程不受写传输的影响,但是这样的方案却会对写数据传输过程带来干扰,写数据传输的暂停不仅会带来额外的写入时延(如图示中写时延至少增加了332us),还可能因为写入的受阻而带来更多业务上的影响。在多个存储逻辑单元可能均需要进行写数据传输的场景中,由于写数据传输的暂停导致的连带影响更加显著。此外,由于读操作过程中的读命令下发、读状态查询命令下发(例如周期性地下发)和读数据传输这些子操作在时间上并不具有连续性,所以该方法也未能充分利用IO通道带宽。
为了克服上述技术缺陷,本申请提供了一种混合读写的实现方法,利用该方法将能充分利用IO通道带宽实现读操作和写操作的密切复用,既能将对读操作过程和写传输流程的影响降到最低,又能有效降低读时延。
对于下文描述的各方法实施例,为了方便起见,将其都表述为一系列的动作步骤的组合,但是本领域技术人员应该知悉,本申请技术方案的具体实现可不受所描述的一系列的动作步骤的顺序的限制。
参见图4,图4是本申请实施例提供的一种混合读写的实现方法的流程示意图,该方法主要从控制器的角度进行描述。本方法涉及的控制器例如可以是SSD控制器,本方法涉及的存储器例如可以是Flash芯片,Flash芯片具体可为闪存颗粒(NAND Flash);存储器还可以是动态随机存取存储器(Dynamic Random Access Memory,DRAM)或者其他类型的非易失性存储器。下文主要是以控制器为SSD控制器、存储器为闪存颗粒进行方案描述。该方法包括但不限于以下步骤:
S201、控制器确定需要向存储器的第一逻辑单元执行用于传输待写入数据的写数据传输操作。
例如,在图1所描述的系统架构下,SSD控制器接收到来自主机系统的写数据请求和待写入数据,该写数据请求包括待写入数据的逻辑地址,从而根据该逻辑地址确定需要将待写入数据写入到闪存颗粒的第一逻辑单元,第一逻辑单元为闪存颗粒中的任意一个存储逻辑单元(例如第一逻辑单元可称为存储逻辑单元W1)。基于该写数据请求,控制器确定需要将待写入数据通过IO通道传输到第一逻辑单元(即需要执行写数据传输操作)。
关于存储逻辑单元的相关内容已在前文做了描述,这里不再赘述。
S202、控制器确定需要对存储器的第二逻辑单元中的数据执行读操作。
其中,第二逻辑单元为闪存颗粒中区别于第一逻辑单元的任意一个存储逻辑单元(例如第二逻辑单元可称为存储逻辑单元W2),即第一逻辑单元和第二逻辑单元位于同一闪存颗粒,共享同一IO通道。
例如,SSD控制器接收到来自主机系统的读数据请求,该读数据请求包括待读取的数据的逻辑地址,从而根据该逻辑地址确定需要到第二逻辑单元读取数据。基于该读数据请求,控制器确定需要通过IO通道对第二逻辑单元执行读操作,具体包括需要通过IO通道执行读命令下发、读状态查询命令下发(例如周期性地下发)和读数据传输等子操作。
需要说明的是,步骤S201和S202之间没有必然的先后顺序,也就是说,步骤S201可能在S202之前执行,也可能在S202之后执行,S201和S202还可能同时执行。
举例来说,在一种实现中,当IO通道在执行针对第一逻辑单元的写数据传输过程中,SSD控制器检测到需要进行针对第二逻辑单元的读操作。
又举例来说,在一种实现中,当IO通道在执行针对第二逻辑单元的读操作过程中,SSD控制器检测到需要针对第一逻辑单元进行写数据传输。
又举例来说,在一种实现中,SSD控制器同时获得写数据请求和读数据请求,比如来自主机系统的上述请求,从而确定需要通过IO通道实现针对第一逻辑单元的写数据传输和针对第二逻辑单元的读操作。
S203、SSD控制器将待写入数据处理成多个数据包。
本申请具体实施例中,SSD控制器将待写入数据打散成多个小包数据(或称多个数据包),从而形成小包数据队列。
举例来说,如图5A所示,图5A示出了将一段待写入数据组织成多个小包数据的场景。可以看到,待写入数据中的业务数据被分成多份细粒度的数据,对每份细粒度的数据分别添加一个若干位的纠错码,从而形成多个小包数据。小包数据中的纠错码是根据该小包数据中的数据以预设算法计算得到的。纠错码可用于纠正可能会出错的数据。
SSD盘片系统中的闪存颗粒采用电信号作为信息存储的物理形式,而电信号在存储介质上存储的可靠性并不稳定,这导致写入到闪存颗粒上的数据可能会出错。本申请实施例中,通过在每个小包数据中携带数据和对应的校验码,来保证数据的可靠性。在闪存颗粒的第一逻辑单元中可以字线(word-line)为单位进行小包数据的存储。由于纠错码具有纠正错误数据的特点,所以当后续需要从第一逻辑单元读出该小包数据的时候,SSD控制器可以通过比对小包数据中的数据和纠错码,判断要读取的数据有没有错误,进而获得正确的数据。
在本申请具体的应用场景中,纠错码可以是数据校验码(Error Correction Code,ECC),也可以是低密度奇偶校验码(Low Density Parity Check Code,LDPC),还可以是其他的校验码,例如博斯-查德胡里-霍昆格姆(BCH)码等,本申请对此不做限定。
以ECC校验码为例,SSD控制器可根据小包数据中的数据,以ECC算法计算ECC校验码,例如,汉明(Hamming)算法、里德-所罗门(Reed-Solomon)或其他ECC算法,本申 请对此不做限定。
ECC校验码可纠错比特数有一定范围,对于同样长度的数据,如果ECC校验码越长,则纠错的能力就越强,即允许数据出现错误的能力更大。通常的,纠错码的数据大小可以为512B、256B、128B或其他数值。小包数据中的数据的大小可设置为纠错码的数据大小的整数倍。
举例来说,ECC校验码的大小为512B时,小包数据中的数据的大小可以为N*512B,N为大于0的自然数。比如每个小包数据中可具体包括4KB的数据和512B的ECC校验码,此时在图5A所示场景中,示例性地,当待写入数据中的业务数据大小为96K时,可将待写入数据组织成24份小包数据,每份小包数据包括4KB的数据和512B的ECC校验码。
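下面用一段Python代码示意“将96KB待写入数据切分为24个小包,每个小包为4KB数据加512B校验码”的组织过程。其中 make_ecc 为便于说明而虚构的占位函数,真实系统中应由ECC引擎按正文所述算法生成校验码:

```python
# 示意性的小包数据组织:把待写入数据切分成多份,并为每份附加纠错码
def make_ecc(chunk: bytes, ecc_len: int = 512) -> bytes:
    # 占位实现(假设):真实系统中由硬件 ECC 引擎按 Hamming/RS/LDPC/BCH 等算法生成
    return bytes(ecc_len)

def packetize(data: bytes, chunk_size: int = 4096) -> list:
    packets = []
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        packets.append(chunk + make_ecc(chunk))  # 数据 + 校验码 = 一个小包
    return packets

pkts = packetize(bytes(96 * 1024))
print(len(pkts), len(pkts[0]))  # 输出 24 4608,即 24 个“4KB+512B”的小包
```

由此得到的小包队列即正文所述的小包数据队列,后续按队列顺序逐包通过IO通道传输。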
本申请一些实施例中,所谓“待写入数据”可以是指主机发给SSD控制器的完整的待写入数据,也就是说,主机发给SSD控制器的待写入数据尚未被传输至闪存颗粒,SSD控制器将该完整的待写入数据组织成多个数据包(即小包数据队列),具体实现过程可参考图5A的描述。
一种示例中,SSD控制器在收到来自主机的待写入数据后,无论是否有如S202所示的读操作需求发生,都将该完整的待写入数据处理成多个数据包。
又一种示例中,SSD控制器在收到来自主机的待写入数据后,可以先判断该完整的待写入数据的数据量大小,如果该完整的待写入数据的数据量大于预设阈值,才将该完整的待写入数据处理成多个数据包。预设阈值例如可以是16KB、32KB、64KB等等,本申请不做具体限定。
本申请又一些实施例中,所谓“待写入数据”也可以是指主机发给SSD控制器的完整的待写入数据中的部分的待写入数据,也就是说,SSD控制器将完整的待写入数据中的一部分的待写入数据不做切分处理,而将完整的待写入数据中的另一部分的待写入数据(即剩余的待写入数据)组织成多个数据包。
一种示例中,如图5B所示,SSD控制器在收到来自主机的完整的待写入数据后,通过IO通道向第一逻辑单元进行待写入数据的传输。在写数据传输过程的某一时间点,SSD控制器收到主机下达的读数据请求,确定需要通过该IO通道向第二逻辑单元执行读操作。在该时间点,已经有部分的待写入数据已经被传输至第一逻辑单元,那么,SSD控制器将尚未传输的待写入数据(即剩余的待写入数据)处理成多个数据包。
需要说明的是,上述示例仅用于解释本申请而非限定。
还需要说明的是,步骤S203和S202之间也没有必然的先后顺序。也就是说,步骤S203可能在S202之前执行,也可能在S202之后执行,S203和S202还可能同时执行。
还需要说明的是,步骤S203和S201之间也没有必然的先后顺序。也就是说,步骤S203可能在S201之前执行,也可能在S201之后执行,S203和S201还可能同时执行。
S204、SSD控制器通过IO通道,交替执行各个数据包的传输操作和读操作的各个子操作。
为了高效地利用整个通道带宽,有效降低读时延且最大程度减少对写传输的干扰,本申请中,SSD控制器在同一IO通道的传输中,实现各个数据包的传输操作和读操作的各个子操作的交织(Interleave)。也就是说,SSD控制器将通过S203所形成的小包数据队列的传输和针对第二逻辑单元的读命令下发、状态查询和读数据传输等子操作交织在一起执行。从而,在同一个IO通道上形成了一个最大通道带宽利用率的传输流水线(Pipeline)。
以某一子操作与数据包交织为例,SSD控制器可根据所述多个数据包的队列顺序,通过所述IO通道传输一个或多个连续数据包;该一个或多个连续数据包为由待写入数据所组织的多个数据包(即多个小包数据)中的一个或多个顺序连续的数据包(小包数据)。在完成该一个或多个连续数据包的传输后,可通过IO通道执行某一子操作,该子操作为读操作的各个子操作(即读命令下发操作、状态查询操作和读数据传输操作等)中的任意子操作;在完成执行该子操作后,通过IO通道传输队列顺序中该一个或多个连续数据包的下一个或多个连续数据包,从而实现数据包的传输操作和读操作的子操作的交织。
为了便于理解,下面进一步以一个具体应用场景来进行读操作的各个子操作与小包数据队列交织方案的论述。参见图6,SSD控制器向逻辑单元W1发起写数据传输。SSD控制器以小包数据队列的顺序,依次通过IO通道传输到逻辑单元W1。每个小包数据在通道中所占的时间较短(例如占用通道10us),所有小包数据的传输的叠加总时长例如为240us。SSD控制器将针对逻辑单元W2的读操作过程穿插交错在小包数据队列的执行流程中。
如图6所示,在某一时间点,完成某一小包数据(如一个或多个连续的第一数据包)传输后,立即进行读命令下发(例如占用通道2us),完成读命令下发后,紧接进行下一些小包数据(如一个或多个连续的第二数据包)的传输。第二逻辑单元收到读命令后,进入读潜伏期以进行读数据的准备工作,由于读潜伏期在第二逻辑单元内部执行,这个准备工作不占用通道,读潜伏期的整个时间开销例如为80us。
在又一时间点,完成某一些小包数据(如一个或多个连续的第二数据包)传输后,立即进行状态查询命令下发(例如占用通道2us),完成状态查询命令下发后,紧接进行下一小包数据的传输。为了更好的QoS,一般经过一段预设时长(比如50us)后,启动周期性的状态查询命令的下发,比如每隔10us查询一次逻辑单元W2的状态,以确认逻辑单元W2是否完成读数据的准备工作。同理,周期性的状态查询命令的下发操作同样是和小包数据的传输交错在一起的,也就是说,在两个状态查询命令的下发时间间隙内可继续用于传输小包数据。
当逻辑单元W2完成了读数据的准备工作,即读潜伏期结束。那么在又一时间点,在完成某一些小包数据(如一个或多个连续的第三数据包)传输后,SSD控制器从逻辑单元W2发起读数据传输,读数据传输的过程(例如占用通道10us)具体可包括读数据传输命令的下发和读数据的回传过程。完成读数据回传后,SSD控制器继续进行下一些小包数据的传输。
在一种实现中,在IO通道,从执行读命令下发操作到执行状态查询操作的时间差可以大于等于传输一个或多个第二数据包的时长;从执行状态查询操作到执行读数据传输操作的时间差可以大于等于传输一个或多个第三数据包的时长。
在一种实现中,读数据传输中的数据可以是一个或多个数据包(例如4KB的数据+512B的ECC校验码)形式的数据。
在又一种实现中,读数据传输中的数据也可以是一段完整的数据段。
由于SSD控制器向逻辑单元W1发起的写数据传输过程的时长通常比执行读操作的时长高出一个数量级,所以,在一些场景中,整个读操作的过程可以在写数据传输过程中间完成。
通过图6实施方案可以看到,针对逻辑单元W2的读时延只有92us(即读命令下发2us+读潜伏期80us+读数据传输10us),而写数据传输的总时延只有256us(即写数据传输的纯粹传输时长240us+读操作各子操作占用通道的时长约16us)。
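上述92us与256us可用下面的Python片段复核。注意其中潜伏期内状态查询的下发次数(2次、每次占用通道2us)是为与图6示意相符而作的假设值:

```python
# 图6交织方案的时延复核(单位:us)
t_pkt, n_pkt = 10, 24             # 24个小包,每包占用通道10us,合计240us
t_cmd, t_latency, t_xfer = 2, 80, 10  # 读命令下发 / 读潜伏期 / 读数据传输
n_query, t_query = 2, 2           # 假设:潜伏期内共下发2次状态查询,每次2us

# 读时延:从读命令下发起算(潜伏期在颗粒内部执行,不占用通道)
read_latency = t_cmd + t_latency + t_xfer
# 写传输总时延 = 小包纯传输时长 + 读操作各子操作占用通道的时长
write_total = n_pkt * t_pkt + t_cmd + n_query * t_query + t_xfer
print(read_latency, write_total)  # 输出 92 256
```

可见读操作只把占用通道的子操作(命令下发、状态查询、数据回传,共约16us)叠加到写传输上,潜伏期被小包传输完全“填满”,这正是流水线交织带来的收益。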
对比于现有的图2方案,可以看到本申请方案既能充分有效地利用IO通道的整个带宽,又能避免对读操作过程造成干扰(例如读等待),最大程度地降低了读时延。
对比于现有的图3方案,可以看到本申请方案既能充分有效地利用IO通道的整个带宽,又能避免对写传输过程造成干扰(例如写暂停),既能保证最低的读时延,又能最大程度降低对写时延的影响。
需要说明的是,虽然图4实施例主要从两个存储逻辑单元(W1和W2)的混合读写冲突的场景进行解决方案的描述,但本领域技术人员可理解的是,上述技术思想同样可以应用到更多个存储逻辑单元的混合读写冲突的场景,例如存在一个或多个存储逻辑单元需要进行写数据传输,另外一个或多个存储逻辑单元需要进行读操作。基于本申请的描述,本领域技术人员将可理解具体实现过程,本文不再展开详述。
可以看到,实施本申请的方法,SSD控制器在遇到混合读写的场景下,通过对IO通道传输过程中的待写入数据的各个数据包的传输操作和读操作的各个子操作进行交织,使得同一闪存颗粒的不同存储逻辑单元的读/写操作能够复用在一起,最大限度地使用IO通道的数据传输带宽和调度所有的逻辑单元,让空闲的存储逻辑单元可以尽早地得到调度和工作,降低读操作的读延迟和写延迟,提升用户使用体验。
为了更好理解本申请,下面进一步描述本申请实施例提供的又一种混合读写的实现方法的具体实现过程。参见图7,图7是本申请实施例提供的又一种混合读写的实现方法的流程示意图,该方法主要从控制器的角度进行描述。本方法涉及的控制器例如可以是SSD控制器,本方法涉及的存储器例如可以是Flash芯片,Flash芯片具体可为闪存颗粒(NAND Flash);存储器还可以是动态随机存取存储器(Dynamic Random Access Memory,DRAM)或者其他类型的非易失性存储器。下文主要是以控制器为SSD控制器、存储器为闪存颗粒进行方案描述。该方法包括但不限于以下步骤:
S301、SSD控制器收到来自主机的待写入数据后,将该待写入数据组织成多个数据包,例如24份小包数据,每份小包数据例如包括4KB的数据和512B的ECC校验码,从而形成小包数据队列,并确定需要对IO通道的一个存储逻辑单元(例如逻辑单元W1)传输该小包数据队列(即执行用于传输该小包数据队列的写数据传输操作)。本步骤的具体实现可参考前文图4实施例S201和S203的相关描述,这里不再赘述。
S302、SSD控制器根据小包数据队列顺序传输小包数据。
S303、SSD控制器在小包数据传输的粒度边界(比如每隔4KB),检测同一IO通道上是否有其他存储逻辑单元需要执行读操作或者执行读操作中的某个子操作,例如是否需要下发读命令、或者需要下发状态查询命令、或者需要传输读数据,等等。示例性地,SSD控制器可以在每个小包数据传输后主动查看其他存储逻辑单元是否有高优先级的业务数据需要执行读操作或者执行读操作中的某个子操作。示例性地,SSD控制器也可以在若干个小包数据传输后再查看其他存储逻辑单元是否有高优先级的业务数据需要执行读操作或者执行读操作中的某个子操作。
若有其他存储逻辑单元需要执行读操作或者执行读操作中的某个子操作,则后续继续执行步骤S304。
若没有其他存储逻辑单元需要执行读操作,则返回继续执行步骤S302,传输下一小包数据。
S304、SSD控制器通过IO通道,交替执行各个数据包的传输操作和读操作的各个子操作。
例如,SSD控制器通过HOST接口收到主机系统向SSD盘片系统下发的读数据请求(携带逻辑地址),请求读取同一IO通道的另一个存储逻辑单元(例如逻辑单元W2)中的数据。SSD控制器解析读数据请求获得逻辑地址,通过FTL模块查询逻辑地址到物理地址的映射表,从而获知IO通道号、存储逻辑单元号、存储位置等信息,例如确定该读数据请求需要访问逻辑单元W2中的业务数据,逻辑单元W2和逻辑单元W1共享同一IO通道。
针对逻辑单元W1的各个数据包的传输操作和针对逻辑单元W2的读操作的各个子操作的具体实现过程可参考图4实施例步骤S204的相关描述,这里不再赘述。
S305、确定读操作过程是否已完成。若已完成,SSD控制器可向主机传输读数据请求对应的读数据并报告读正常完成,后续继续执行步骤S306;若未完成,则返回继续执行步骤S304,继续交替执行剩余的各个数据包的传输操作和读操作的剩余的各个子操作。
S306、确定写数据传输过程是否已完成。
若未完成写数据传输过程,则返回继续执行步骤S302和S303,即继续根据小包数据队列顺序传输剩余的小包数据,以及在小包数据传输的粒度边界检测同一IO通道上是否有其他存储逻辑单元需要执行读操作或者执行读操作中的某个子操作。
若已完成写数据传输过程,则SSD盘片系统继续执行写操作的其他相关处理流程,例如,逻辑单元W1进入写潜伏期,又例如SSD控制器向逻辑单元W1进行写结果查询等。这些相关处理流程已被本领域技术人员熟知,本文不再展开描述。
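S302至S306的控制流程可以用下面的Python片段做一个极简示意。其中的函数与数据结构均为便于说明而假设的伪调度,并不对应某一真实固件接口:

```python
# S302-S306 控制循环的极简示意:在每个小包的粒度边界检查并穿插读子操作
def mixed_write(packets, pending_read_subops):
    timeline = []                       # 记录 IO 通道上的执行顺序
    reads = list(pending_read_subops)   # 待执行的读子操作队列(假设已就绪)
    for pkt in packets:
        timeline.append(("WRITE_PKT", pkt))   # S302:按队列顺序传输一个小包
        if reads:                             # S303:粒度边界检测读子操作
            timeline.append(("READ_SUBOP", reads.pop(0)))  # S304:交替执行
    timeline.extend(("READ_SUBOP", r) for r in reads)      # 剩余读子操作收尾
    return timeline

tl = mixed_write(["p0", "p1", "p2", "p3"],
                 ["read_cmd", "status_query", "read_xfer"])
print(tl)
```

运行后可以看到小包传输与读命令下发、状态查询、读数据传输在时间线上交替出现;真实调度器还需按正文所述的时间约束(潜伏期、查询周期)决定各子操作的就绪时机,这里为简化而省略。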
可以看到,实施本申请的方法,在SSD控制器写数据传输过程中,遇到读操作相关命令时,如果写数据传输针对的逻辑单元和读操作针对的逻辑单元同享同一IO通道,则SSD控制器可将读操作的各个子操作和写数据的各个数据包的传输操作交织,使得同一闪存颗粒的不同存储逻辑单元的读/写操作能够复用在一起,最大限度地使用IO通道的数据传输带宽和调度所有的逻辑单元,让空闲的存储逻辑单元可以尽早地得到调度和工作,降低读操作的读延迟和写延迟,提升用户使用体验。
上文详细描述了本申请的相关方法,下面继续描述本申请的相关装置。
本申请实施例提供了一种固态硬盘,固态硬盘包括控制器50和存储器,控制器例如可以是SSD控制器,存储器例如可以是Flash芯片,Flash芯片具体可为闪存颗粒(NAND Flash);存储器还可以是动态随机存取存储器(Dynamic Random Access Memory,DRAM)或者其他类型的非易失性存储器。参见图8,该控制器进一步包括:写确定单元501、读确定单元502、数据处理单元503和交替读写单元504。在一种示例中,写确定单元501、读确定单元502、数据处理单元503和交替读写单元504应用于图1所示的闪存控制模块;在又一种示例中,写确定单元501和读确定单元502应用于图1所示的FTL模块,数据处理单元503和交替读写单元504应用于图1所示的闪存控制模块。其中,在一种实现中,SSD控制器例如可以包括一个或多个处理器(CPU),该一个或多个处理器可以集成在同一块硬件芯片上。闪存控制模块可以以软件和/或硬件形态予以实现,例如闪存控制模块可以包括集成于SSD控制器的硬件芯片的硬件电路,闪存控制模块还可以包括运行于SSD控制器的硬件芯片的软件功能。FTL模块可以是运行在SSD控制器的硬件芯片上的软件功能,FTL模块也可以是以硬件电路的形态集成于SSD控制器的硬件芯片。其中:
写确定单元501,用于确定需要向存储器的第一逻辑单元执行用于传输待写入数据的写数据传输操作;以及,
读确定单元502,用于确定需要对所述存储器的第二逻辑单元中的数据执行读操作;其中,所述第一逻辑单元和所述第二逻辑单元共享IO通道;
数据处理单元503,用于将所述待写入数据处理成多个数据包;
交替读写单元504,用于通过所述IO通道,交替执行各个所述数据包的传输操作和所述读操作的各个子操作;所述读操作的各个子操作包括读命令下发操作、状态查询操作和读数据传输操作。
控制器的各个功能模块可用于实现如图4或图7所示的方法。例如对于图4实施例,写确定单元501可用于执行步骤S201,读确定单元502可用于执行S202,数据处理单元503可用于执行S203,交替读写单元504可用于执行S204。为了说明书的简洁,这里不再赘述。
参见图9,本申请实施例提供了又一种固态硬盘,固态硬盘包括控制器521和存储器522,控制器521例如可以是SSD控制器,存储器522例如可以是Flash芯片,Flash芯片具体可为闪存颗粒(NAND Flash);存储器522还可以是动态随机存取存储器(Dynamic Random Access Memory,DRAM)或者其他类型的非易失性存储器。存储器522可用于存储数据,并基于控制器521的命令进行读/写/擦除等操作。控制器521被配置为执行程序指令,所述程序指令例如可存放于所述存储器522,所述程序指令也可存储于其他的专用存储器(图未示),专用存储器包括但不限于随机存取存储器(random access memory,RAM),只读存储器(read-only memory,ROM),或高速缓存(cache)等。本申请实施例中,控制器521具体被配置为调用所述程序指令以执行如图4或图7实施例所描述的方法。
参见图10,本申请实施例提供了一种系统,系统包括:主机601和固态硬盘602;其中,在一种示例中,主机601例如为图1所示的主机系统,固态硬盘602例如为图1所示的SSD盘片系统。在一种示例中,固态硬盘602例如为图7实施例所涉及的固态硬盘。其中:
主机601,用于向固态硬盘602发送写操作请求,以及读操作请求。
固态硬盘602用于根据所述写操作请求以及读操作请求,实现如图4或图7所示的方法。
本领域普通技术人员可以理解,实现上述实施例方法中的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,可以包括前述方法各个实施方式的内容。上述提到的可读存储介质可以是随机存取存储器(RAM)、内存、只读存储器(ROM)、电可编程ROM、电可擦除可编程ROM、寄存器、硬盘、可移动磁盘、CD-ROM、磁碟、光盘或技术领域内所公知的任意其它形式的存储介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (22)

  1. 一种固态硬盘混合读写的实现方法,其特征在于,所述方法包括:
    控制器确定需要向存储器的第一逻辑单元执行用于传输待写入数据的写数据传输操作;以及,
    所述控制器确定需要对所述存储器的第二逻辑单元中的数据执行读操作;其中,所述第一逻辑单元和所述第二逻辑单元共享输入输出(IO)通道;
    所述控制器将所述待写入数据处理成多个数据包;
    所述控制器通过所述IO通道,交替执行传输各个所述数据包和执行所述读操作的各个子操作;所述读操作的各个子操作包括读命令下发操作、状态查询操作和读数据传输操作。
  2. 根据权利要求1所述的方法,其特征在于,所述控制器将所述待写入数据处理成多个数据包,包括:
    所述控制器收到来自主机的所述待写入数据后,在确定需要执行所述写数据传输操作以及所述读操作前,将所述待写入数据处理成所述多个数据包。
  3. 根据权利要求1或2所述的方法,其特征在于,所述控制器将所述待写入数据处理成多个数据包,包括:
    所述控制器将所述待写入数据切分成多份数据;
    所述控制器对所述多份数据中的每份数据分别添加纠错码,从而获得多个数据包。
  4. 根据权利要求3所述的方法,其特征在于,所述纠错码为数据校验码(ECC)或者低密度奇偶校验码(LDPC)或者博斯-查德胡里-霍昆格姆(BCH)码。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,所述控制器通过所述IO通道,交替执行传输各个所述数据包和执行所述读操作的各个子操作,包括:
    所述控制器根据所述多个数据包的队列顺序,通过所述IO通道传输一个或多个第一数据包;所述第一数据包为所述多个数据包中的数据包;
    所述控制器在完成传输所述一个或多个第一数据包后,通过所述IO通道执行第一子操作;所述第一子操作为所述读操作的所述各个子操作中的一个子操作;
    所述控制器在完成执行所述第一子操作后,通过所述IO通道传输所述队列顺序中排在所述一个或多个第一数据包后的一个或多个第二数据包,所述第二数据包为所述多个数据包中的数据包。
  6. 根据权利要求5所述的方法,其特征在于,所述第一子操作为所述读命令下发操作;
    所述控制器在完成通过所述IO通道传输所述队列顺序中排在所述一个或多个第一数据包后的一个或多个第二数据包之后,还包括:
    所述控制器通过所述IO通道执行所述状态查询操作;
    所述控制器在完成执行所述状态查询操作后,通过所述IO通道传输所述队列顺序中排在所述一个或多个第二数据包后的一个或多个第三数据包,所述第三数据包为所述多个数据包中的数据包;
    所述控制器在完成通过所述IO通道传输所述一个或多个第三数据包后,通过所述IO通道执行所述读数据传输操作。
  7. 根据权利要求6所述的方法,其特征在于,在所述IO通道,从执行所述读命令下发操作到执行所述状态查询操作的时间差大于等于传输所述一个或多个第二数据包的时长;从执行所述状态查询操作到执行所述读数据传输操作的时间差大于等于传输所述一个或多个第三数据包的时长。
  8. 根据权利要求1-7任一项所述的方法,其特征在于,所述控制器确定需要向存储器的第一逻辑单元执行用于传输待写入数据的写数据传输操作,包括:
    所述控制器接收到来自主机的写数据请求和所述待写入数据,所述写数据请求包括所述待写入数据的逻辑地址;
    所述控制器根据所述待写入数据的逻辑地址确定需要将所述待写入数据传输到所述存储器的第一逻辑单元。
  9. 根据权利要求1-8任一项所述的方法,其特征在于,所述控制器确定需要对所述存储器的第二逻辑单元中的数据执行读操作,包括:
    所述控制器接收到来自主机的读数据请求,所述读数据请求包括待读取的数据的逻辑地址;
    所述控制器根据所述待读取的数据的逻辑地址确定需要对所述存储器的第二逻辑单元执行读操作。
  10. 根据权利要求1-9任一项所述的方法,其特征在于,所述控制器为SSD控制器,所述存储器为与非门闪存(NAND Flash)或者动态随机存取存储器(DRAM)。
  11. 一种固态硬盘,其特征在于,包括控制器和存储器,所述控制器包括:
    写确定单元,用于确定需要向存储器的第一逻辑单元执行用于传输待写入数据的写数据传输操作;以及,
    读确定单元,用于确定需要对所述存储器的第二逻辑单元中的数据执行读操作;其中,所述第一逻辑单元和所述第二逻辑单元共享IO通道;
    数据处理单元,用于将所述待写入数据处理成多个数据包;
    交替读写单元,用于通过所述IO通道,交替执行传输各个所述数据包和执行所述读操作的各个子操作;所述读操作的各个子操作包括读命令下发操作、状态查询操作和读数据传输操作。
  12. 根据权利要求11所述的固态硬盘,其特征在于,所述数据处理单元具体用于在收到来自主机的所述待写入数据后,在确定需要执行所述写数据传输操作以及所述读操作前,将所述待写入数据处理成所述多个数据包。
  13. 根据权利要求11或12所述的固态硬盘,其特征在于,所述数据处理单元具体用于:
    将所述待写入数据切分成多份数据;
    对所述多份数据中的每份数据分别添加纠错码,从而获得多个数据包。
  14. 根据权利要求13所述的固态硬盘,其特征在于,所述纠错码为数据校验码(ECC)或者低密度奇偶校验码(LDPC)或者博斯-查德胡里-霍昆格姆(BCH)码。
  15. 根据权利要求11-14任一项所述的固态硬盘,其特征在于,所述交替读写单元具体用于:
    根据所述多个数据包的队列顺序,通过所述IO通道传输一个或多个第一数据包;所述第一数据包为所述多个数据包中的数据包;
    在完成传输所述一个或多个第一数据包后,通过所述IO通道执行第一子操作;所述第一子操作为所述读操作的所述各个子操作中的一个子操作;
    在完成执行所述第一子操作后,通过所述IO通道传输所述队列顺序中排在所述一个或多个第一数据包后的一个或多个第二数据包,所述第二数据包为所述多个数据包中的数据包。
  16. 根据权利要求15所述的固态硬盘,其特征在于,所述第一子操作为所述读命令下发操作;所述交替读写单元还用于,在完成通过所述IO通道传输所述队列顺序中排在所述一个或多个第一数据包后的一个或多个第二数据包之后:
    通过所述IO通道执行所述状态查询操作;
    在完成执行所述状态查询操作后,通过所述IO通道传输所述队列顺序中排在所述一个或多个第二数据包后的一个或多个第三数据包,所述第三数据包为所述多个数据包中的数据包;
    在完成通过所述IO通道传输所述一个或多个第三数据包后,通过所述IO通道执行所述读数据传输操作。
  17. 根据权利要求16所述的固态硬盘,其特征在于,在所述IO通道,从执行所述读命令下发操作到执行所述状态查询操作的时间差大于等于传输所述一个或多个第二数据包的时长;从执行所述状态查询操作到执行所述读数据传输操作的时间差大于等于传输所述一个或多个第三数据包的时长。
  18. 根据权利要求11-17任一项所述的固态硬盘,其特征在于,所述写确定单元具体用于:
    接收到来自主机的写数据请求和所述待写入数据,所述写数据请求包括所述待写入数据的逻辑地址;
    根据所述待写入数据的逻辑地址确定需要将所述待写入数据传输到所述存储器的第一逻辑单元。
  19. 根据权利要求11-18任一项所述的固态硬盘,其特征在于,所述读确定单元具体用于:
    接收到来自主机的读数据请求,所述读数据请求包括待读取的数据的逻辑地址;
    根据所述待读取的数据的逻辑地址确定需要对所述存储器的第二逻辑单元执行读操作。
  20. 根据权利要求11-19任一项所述的固态硬盘,其特征在于,所述控制器为SSD控制器,所述存储器为与非门闪存(NAND Flash)或者动态随机存取存储器(DRAM)。
  21. 一种固态硬盘,其特征在于,包括:控制器和存储器;所述控制器和所述存储器通过总线连接或耦合在一起;其中,所述存储器用于存储程序指令,所述控制器用于调用所述存储器存储的程序指令,以执行如权利要求1-10任一项所述的方法。
  22. 一种系统,其特征在于,包括:主机和固态硬盘;所述主机和所述固态硬盘通信连接;其中,所述主机用于向所述固态硬盘发送写操作请求和/或读操作请求,所述固态硬盘为如权利要求11-20任一项所述的固态硬盘,或者,所述固态硬盘为如权利要求21所述的固态硬盘。
PCT/CN2019/103900 2019-08-31 2019-08-31 一种固态硬盘混合读写的实现方法以及装置 WO2021035761A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/103900 WO2021035761A1 (zh) 2019-08-31 2019-08-31 一种固态硬盘混合读写的实现方法以及装置
CN201980099732.1A CN114286989B (zh) 2019-08-31 2019-08-31 一种固态硬盘混合读写的实现方法以及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/103900 WO2021035761A1 (zh) 2019-08-31 2019-08-31 一种固态硬盘混合读写的实现方法以及装置

Publications (1)

Publication Number Publication Date
WO2021035761A1 true WO2021035761A1 (zh) 2021-03-04

Family

ID=74685346

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103900 WO2021035761A1 (zh) 2019-08-31 2019-08-31 一种固态硬盘混合读写的实现方法以及装置

Country Status (2)

Country Link
CN (1) CN114286989B (zh)
WO (1) WO2021035761A1 (zh)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114780032A (zh) * 2022-04-22 2022-07-22 山东云海国创云计算装备产业创新中心有限公司 一种数据读取方法、装置、设备及存储介质
CN115079944A (zh) * 2022-06-08 2022-09-20 阿里巴巴(中国)有限公司 提升固态硬盘性能的方法及装置和电子设备

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160210044A1 (en) * 2015-01-15 2016-07-21 Commvault Systems, Inc. Intelligent hybrid drive caching
CN106233270A (zh) * 2014-04-29 2016-12-14 华为技术有限公司 共享存储器控制器及其使用方法
CN108132895A (zh) * 2016-12-01 2018-06-08 三星电子株式会社 配置为与主机执行双向通信的存储装置及其操作方法

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101498994B (zh) * 2009-02-16 2011-04-20 华中科技大学 一种固态硬盘控制器


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115657972A (zh) * 2022-12-27 2023-01-31 北京特纳飞电子技术有限公司 固态硬盘写入控制方法、装置与固态硬盘
CN115657972B (zh) * 2022-12-27 2023-06-06 北京特纳飞电子技术有限公司 固态硬盘写入控制方法、装置与固态硬盘

Also Published As

Publication number Publication date
CN114286989B (zh) 2023-09-22
CN114286989A (zh) 2022-04-05

Similar Documents

Publication Publication Date Title
TWI421680B (zh) Parallel flash memory controller
JP6163532B2 (ja) メモリシステムコントローラを含む装置
US9460763B2 (en) Holding by a memory controller multiple central processing unit memory access requests, and performing the multiple central processing unit memory request in one transfer cycle
WO2021035761A1 (zh) 一种固态硬盘混合读写的实现方法以及装置
JP5032027B2 (ja) 半導体ディスク制御装置
JP5918359B2 (ja) メモリシステムコントローラを含む装置および関連する方法
CN111435292A (zh) 利用自适应写缓冲区释放的存储介质编程
CN108153482B (zh) Io命令处理方法与介质接口控制器
JP4966404B2 (ja) メモリ制御装置、記憶装置、及びメモリ制御方法
TWI467574B (zh) 記憶體儲存裝置、記憶體控制器與其資料傳輸方法
US20220083266A1 (en) Plane-based queue configuration for aipr-enabled drives
TWI636366B (zh) 資料冗餘的處理方法及其相關電腦系統
US20180253391A1 (en) Multiple channel memory controller using virtual channel
US20220350655A1 (en) Controller and memory system having the same
CN114817093B (zh) 一种数据传输方法、系统、装置及存储介质
WO2010105520A1 (zh) 一种读数据的方法、装置和系统
CN115083451A (zh) 多通道的数据处理方法、装置、设备及存储介质
WO2019141050A1 (zh) 一种刷新处理方法、装置、系统及内存控制器
US20230030672A1 (en) Die-based high and low priority error queues
US10394727B2 (en) Semiconductor memory device with data buffering
US9152348B2 (en) Data transmitting method, memory controller and data transmitting system
US20080301366A1 (en) Raid system and data transfer method in raid system
CN113157205A (zh) 一种nand阵列的控制方法、控制器、电子设备及存储介质
WO2022160321A1 (zh) 一种访问内存的方法和装置
US20220206894A1 (en) Method and system for facilitating write latency reduction in a queue depth of one scenario

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19942827

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19942827

Country of ref document: EP

Kind code of ref document: A1