CN109388333A - Method and apparatus for reducing read command processing delay - Google Patents
Method and apparatus for reducing read command processing delay
- Publication number
- CN109388333A CN201710671697.9A CN201710671697A
- Authority
- CN
- China
- Prior art keywords
- cpu
- read
- read command
- read request
- request
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0611—Improving I/O performance in relation to response time
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
- G06F3/0679—Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
A method and apparatus for reducing read command processing delay are provided. The provided method of processing I/O requests with low latency comprises: in response to receiving a read request, obtaining the physical address accessed by the read request; if the physical block accessed by the read request has been fully written with data, issuing a first-type read command to the NVM chip to respond to the read request; if the physical block accessed by the read request has not yet been fully written with data, issuing a second-type read command to the NVM chip; and wherein the first-type read command has a smaller processing delay than the second-type read command.
Description
Technical field
This application relates to solid-state storage devices and, more particularly, to reducing processing delay when a solid-state storage device handles read commands.
Background art
Referring to Fig. 1, a block diagram of a storage device is illustrated. The storage device 102 is coupled to a host to provide storage capability for the host. The host and the storage device 102 may be coupled in various ways, including but not limited to connections such as SATA, IDE, USB, PCIe, NVMe (NVM Express), SAS, Ethernet, Fibre Channel, or a wireless communication network. The host may be an information processing device capable of communicating with the storage device in the above ways, for example, a personal computer, tablet computer, server, portable computer, network switch, router, cellular phone, or personal digital assistant. The storage device 102 includes an interface 103, a control unit 104, one or more NVM (Non-Volatile Memory) chips 105, and, optionally, a firmware memory 110. The interface 103 may exchange data with the host by means of, for example, SATA, IDE, USB, PCIe, NVMe, SAS, Ethernet, or Fibre Channel. The control unit 104 controls data transfer among the interface 103, the NVM chips 105, and the firmware memory 110, and is also used for storage management, mapping of host logical addresses to flash physical addresses, wear leveling, bad block management, and the like. The control unit 104 may be implemented in various forms of software, hardware, firmware, or combinations thereof; it may take the form of an FPGA (Field-Programmable Gate Array), an ASIC (Application-Specific Integrated Circuit), or a combination thereof. The control unit 104 may also include a processor or a controller. At runtime, the control unit 104 loads firmware from the firmware memory 110. The firmware memory 110 may be NOR flash, ROM, or EEPROM, or may be part of the NVM chip 105.
The control unit 104 includes a flash interface controller (also called a media interface controller or flash channel controller). The flash interface controller is coupled to the NVM chip 105, issues commands to the NVM chip 105 in a manner that follows the interface protocol of the NVM chip 105 so as to operate the NVM chip 105, and receives command execution results output from the NVM chip 105. Known NVM chip interface protocols include "Toggle", "ONFI", and the like.
A memory target (Target) is one or more logic units (Logic Units) that share a chip enable (CE, Chip Enable) signal within a NAND flash package. Each logic unit has a logical unit number (LUN, Logic Unit Number). A NAND flash package may include one or more dies (Die). Typically, a logic unit corresponds to a single die. A logic unit may include multiple planes (Planes). Multiple planes within a logic unit can be accessed in parallel, and multiple logic units within a NAND flash chip can execute commands and report status independently of one another. The meanings of target, logic unit, LUN, and plane (Plane) are provided in "Open NAND Flash Interface Specification (Revision 3.0)", available from http://www.micron.com/~/media/Documents/Products/Other%20Documents/ONFI3_0Gold.ashx, which is part of the prior art.
Data is usually stored and read on the storage medium by page, while data is erased in blocks. A block (also called a physical block) contains multiple pages. A page on the storage medium (called a physical page) has a fixed size, for example 17664 bytes, though physical pages may also have other sizes.
In a solid-state storage device, an FTL (Flash Translation Layer) is used to maintain mapping information from logical addresses to physical addresses. Logical addresses constitute the storage space of the solid-state storage device as perceived by upper-layer software such as the operating system. Physical addresses are addresses for accessing the physical storage cells of the solid-state storage device. In the prior art, address mapping may also be implemented using an intermediate address form; for example, a logical address is mapped to an intermediate address, which in turn is further mapped to a physical address.
The table structure storing the mapping information from logical addresses to physical addresses is called an FTL table. The FTL table is important metadata in a solid-state storage device. Usually, the entries of the FTL table record the address mapping relations in the solid-state storage device in units of data pages.
The FTL table includes multiple FTL table entries (or items). In one embodiment, each FTL table entry records the correspondence between one logical page address and one physical page. In another example, each FTL table entry records the correspondence between multiple consecutive logical page addresses and multiple consecutive physical pages. In yet another embodiment, each FTL table entry records the correspondence between a logical block address and a physical block address. In still another embodiment, the FTL table records the mapping relation between logical block addresses and physical block addresses and/or the mapping relation between logical page addresses and physical page addresses.
Summary of the invention
The processing delay of I/O commands is an important indicator of storage device performance, and a storage device always seeks to process I/O commands with lower delay. A solid-state storage device contains multiple physical blocks. Some physical blocks are being written with data, while other physical blocks have been fully written with data. By accessing fully-written and not-yet-fully-written physical blocks with different types of read commands, the data read delay is reduced. For a physical block that is not fully written, a read command conflicts with a write command accessing the same physical block; this conflict also needs to be resolved to further reduce delay. Reducing the intermediate links of I/O command processing likewise helps to reduce delay.
According to a first aspect of the present application, there is provided a first method of processing I/O requests with low latency according to the first aspect of the application, comprising: in response to receiving a read request, obtaining the physical address accessed by the read request; if the physical block accessed by the read request has been fully written with data, issuing a first-type read command to the NVM chip to respond to the read request; if the physical block accessed by the read request has not yet been fully written with data, issuing a second-type read command to the NVM chip; and wherein the first-type read command has a smaller processing delay than the second-type read command.
According to the first method of processing I/O requests with low latency of the first aspect of the application, there is provided a second method according to the first aspect of the application, further comprising: if the physical block accessed by the read request has not yet been fully written with data, also scheduling the processing order of the read requests and write requests accessing that physical block, so as to process said read request with priority.
According to the first or second method of processing I/O requests with low latency of the first aspect of the application, there is provided a third method according to the first aspect of the application, wherein the first-type read command is a read command with a higher error rate than the common read command of the NVM chip, and the second-type read command is the common read command of the NVM chip.
According to one of the first to third methods of processing I/O requests with low latency of the first aspect of the application, there is provided a fourth method according to the first aspect of the application, wherein whether the physical block accessed by the read request has been fully written with data is identified by the physical address.
According to one of the first to fourth methods of processing I/O requests with low latency of the first aspect of the application, there is provided a fifth method according to the first aspect of the application, further comprising: in response to receiving a read command processing result provided by the NVM chip, if the read command processing result indicates success, indicating to the sender of the read request that processing of the read request is complete; and if the read command processing result indicates failure, issuing a read command to the NVM chip again through an error handling procedure.
According to one of the first to fifth methods of processing I/O requests with low latency of the first aspect of the application, there is provided a sixth method according to the first aspect of the application, wherein: a first CPU identifies whether the physical block accessed by the read request has been fully written with data, and if the physical block accessed by the read request has been fully written with data, the first CPU instructs issuing a first-type read command to the NVM chip to respond to the read request.
According to the sixth method of processing I/O requests with low latency of the first aspect of the application, there is provided a seventh method according to the first aspect of the application, wherein, if the physical block accessed by the read request has not yet been fully written with data, the first CPU forwards the read request to a second CPU, and the second CPU instructs issuing a second-type read command to the NVM chip to respond to the read request.
According to the sixth or seventh method of processing I/O requests with low latency of the first aspect of the application, there is provided an eighth method according to the first aspect of the application, further comprising: the first CPU querying the FTL table according to the logical address of the read request to obtain the physical address.
According to one of the sixth to eighth methods of processing I/O requests with low latency of the first aspect of the application, there is provided a ninth method according to the first aspect of the application, further comprising: providing the processing result of the NVM chip processing the first-type read command to the first CPU; and providing the processing result of the NVM chip processing the second-type read command to the second CPU.
According to one of the sixth to eighth methods of processing I/O requests with low latency of the first aspect of the application, there is provided a tenth method according to the first aspect of the application, further comprising: if the read command processing result provided by the NVM chip indicates success, providing the processing result to the first CPU; and if the read command processing result provided by the NVM chip indicates failure, providing the processing result to the second CPU.
According to the ninth method of processing I/O requests with low latency of the first aspect of the application, there is provided an eleventh method according to the first aspect of the application, further comprising: if the processing result of the NVM chip processing the first-type read command indicates failure, the first CPU instructing the second CPU to start the error handling procedure.
According to a second aspect of the present application, there is provided a first method of processing I/O requests with low latency according to the second aspect of the application, comprising: in response to receiving a read request, obtaining the physical address accessed by the read request; if the large block accessed by the read request has been fully written with data, issuing a first-type read command to the NVM chip to respond to the read request; if the large block accessed by the read request has not yet been fully written with data, issuing a second-type read command to the NVM chip; and wherein the first-type read command has a smaller processing delay than the second-type read command.
According to the first method of processing I/O requests with low latency of the second aspect of the application, there is provided a second method according to the second aspect of the application, wherein the large block includes physical blocks from multiple logic units.
According to a third aspect of the application, there is provided a first apparatus for processing I/O requests with low latency according to the third aspect of the application, comprising: a host interface, a distributor, multiple CPUs, and a media interface; the host interface is used to receive read requests; the distributor is coupled to the host interface and distributes read requests received by the host interface to a first CPU; the first CPU identifies whether the physical block accessed by the read request has been fully written with data; if the physical block has been fully written with data, the first CPU instructs the media interface to issue a first-type read command to the NVM chip; if the physical block has not yet been fully written with data, the first CPU forwards the read request to a second CPU; the second CPU, in response to receiving the read request, instructs the media interface to issue a second-type read command to the NVM chip; and wherein the first-type read command has a smaller processing delay than the second-type read command.
According to the first apparatus for processing I/O requests with low latency of the third aspect of the application, there is provided a second apparatus according to the third aspect of the application, wherein the second CPU also schedules the processing order of the read requests and write requests accessing the physical block, so as to process said read request with priority.
According to the first or second apparatus for processing I/O requests with low latency of the third aspect of the application, there is provided a third apparatus according to the third aspect of the application, wherein the first-type read command is a read command with a higher error rate than the common read command of the NVM chip, and the second-type read command is the common read command of the NVM chip.
According to one of the first to third apparatuses for processing I/O requests with low latency of the third aspect of the application, there is provided a fourth apparatus according to the third aspect of the application, wherein the first CPU identifies, according to the physical address, whether the physical block accessed by the read request has been fully written with data.
According to one of the first to fourth apparatuses for processing I/O requests with low latency of the third aspect of the application, there is provided a fifth apparatus according to the third aspect of the application, wherein the first CPU queries the FTL table in memory according to the logical address of the read request to obtain the physical address.
According to one of the first to fifth apparatuses for processing I/O requests with low latency of the third aspect of the application, there is provided a sixth apparatus according to the third aspect of the application, wherein, for instructions from the first CPU, the media interface controller provides the corresponding processing results from the NVM chip to the first CPU; and, for instructions from the second CPU, the media interface controller provides the corresponding processing results from the NVM chip to the second CPU.
According to the sixth apparatus for processing I/O requests with low latency of the third aspect of the application, there is provided a seventh apparatus according to the third aspect of the application, wherein the first CPU identifies the processing result provided by the NVM chip, and if the processing result indicates failure, the first CPU instructs the second CPU to start the error handling procedure.
According to the sixth or seventh apparatus for processing I/O requests with low latency of the third aspect of the application, there is provided an eighth apparatus according to the third aspect of the application, wherein the second CPU identifies the processing result provided by the NVM chip, and if the processing result indicates failure, the second CPU starts the error handling procedure.
According to one of the first to fifth apparatuses for processing I/O requests with low latency of the third aspect of the application, there is provided a ninth apparatus according to the third aspect of the application, wherein the media interface controller identifies the processing result of the read command provided by the NVM chip; if the processing result indicates success, the processing result is provided to the first CPU, and if the processing result indicates failure, the processing result is provided to the second CPU; and the second CPU starts the error handling procedure.
According to one of the first to fifth apparatuses for processing I/O requests with low latency of the third aspect of the application, there is provided a tenth apparatus according to the third aspect of the application, further comprising a third CPU coupled to the distributor.
According to the ninth apparatus for processing I/O requests with low latency of the third aspect of the application, there is provided an eleventh apparatus according to the third aspect of the application, wherein the distributor forwards the read request to one of the first CPU or the third CPU according to the logical address accessed by the read request.
According to the eleventh apparatus for processing I/O requests with low latency of the third aspect of the application, there is provided a twelfth apparatus according to the third aspect of the application, wherein the third CPU identifies whether the physical block accessed by the read request has been fully written with data, and if the physical block has been fully written with data, instructs the media interface to issue a first-type read command to the NVM chip.
According to the twelfth apparatus for processing I/O requests with low latency of the third aspect of the application, there is provided a thirteenth apparatus according to the third aspect of the application, wherein, if the physical block has not yet been fully written with data, the third CPU forwards the read request to the second CPU; and the second CPU, in response to receiving the read request, instructs the media interface to issue a second-type read command to the NVM chip.
According to the twelfth apparatus for processing I/O requests with low latency of the third aspect of the application, there is provided a fourteenth apparatus according to the third aspect of the application, wherein, if the physical block has not yet been fully written with data, the third CPU forwards the read request to a fourth CPU; and the fourth CPU, in response to receiving the read request, instructs the media interface to issue a second-type read command to the NVM chip.
According to a fourth aspect of the application, there is provided an apparatus for processing I/O requests with low latency according to the fourth aspect of the application, comprising: a physical address obtaining module, for obtaining, in response to receiving a read request, the physical address accessed by the read request; a first-type read command generation module, for issuing a first-type read command to the NVM chip to respond to the read request if the physical block accessed by the read request has been fully written with data; a second-type read command generation module, for issuing a second-type read command to the NVM chip if the physical block accessed by the read request has not yet been fully written with data; and wherein the first-type read command has a smaller processing delay than the second-type read command.
According to a fifth aspect of the application, there is provided a computer program comprising computer program code which, when loaded into a computer system and executed on the computer system, causes the computer system to execute one of the methods of processing I/O requests with low latency provided according to the first and second aspects of the application.
According to a sixth aspect of the application, there is provided a computer program comprising computer program code which, when loaded into a storage device and executed on the storage device, causes the storage device to execute one of the methods of processing I/O requests with low latency provided according to the first and second aspects of the application.
Detailed description of the invention
The application, its preferred modes of use, and its further objects and advantages will be best understood by reference to the following detailed description of illustrative embodiments when read together with the accompanying drawings, in which:
Fig. 1 is a block diagram of a prior art storage device;
Fig. 2 is a schematic diagram of the distribution of written data on physical blocks according to an embodiment of the application;
Fig. 3 is a flow chart of a method of processing read requests according to an embodiment of the application;
Fig. 4 is a flow chart of error handling of a read command according to an embodiment of the application;
Fig. 5 is a schematic diagram of a control unit processing I/O requests according to an embodiment of the application;
Fig. 6 is a schematic diagram of organizing the physical blocks of NVM chips into large blocks according to another embodiment of the application; and
Fig. 7 is a flow chart of a method of processing read requests according to another embodiment of the application.
Description of embodiments
Fig. 2 is a schematic diagram of the distribution of written data on physical blocks according to an embodiment of the application. As an example, the NVM chip of the storage device includes physical block 0, physical block 1, physical block 2, and physical block 3. Physical block 0, physical block 1, and physical block 2 have been fully written with data, while part of physical block 3 has been written with data and the rest of physical block 3 has not yet been written with data.
Read commands read the written data stored in the NVM chip. In Fig. 2, read command 220 reads data in physical block 1, which has been fully written with data, while read command 222 reads data in physical block 3, which has not yet been fully written with data.
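The block states of Fig. 2 can be modeled minimally as follows. The page count per block and all names are illustrative assumptions; real physical blocks hold many more pages:

```python
# Sketch of the block states in Fig. 2: physical blocks 0-2 fully written,
# physical block 3 only partially written. PAGES_PER_BLOCK is an
# illustrative assumption, not a value from the patent.
PAGES_PER_BLOCK = 4

pages_written = {0: 4, 1: 4, 2: 4, 3: 2}  # pages written in each physical block

def is_fully_written(block):
    return pages_written[block] == PAGES_PER_BLOCK

print(is_fully_written(1))  # True  (read command 220 targets a fully written block)
print(is_fully_written(3))  # False (read command 222 targets a partially written block)
```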
Fig. 3 is a flow chart of a method of processing read requests according to an embodiment of the application. For clarity, in the following, an instruction that the control unit sends to the NVM chip to access the NVM chip is called a "command", such as a read command or write command; an instruction to access the storage device received from the host, or an instruction to access the storage device in other forms processed inside the control unit, is called a "request", such as a read request or write request.
In response to receiving a read request from the host (310), it is identified whether the physical block accessed by the read request is a physical block that has been fully written with data (320). As an example, the read request indicates the logical address to be accessed; the physical address corresponding to that logical address is obtained by querying the FTL table, and whether the physical block where the physical address is located has been fully written with data is identified according to the physical address. Optionally, the physical blocks that have been fully written with data and/or that are not fully written are recorded, so as to identify whether the physical address accessed by the read request belongs to a physical block that has been fully written with data. Still optionally, data is written to the physical blocks in order of their physical addresses; then a read request accessing a physical address of the physical block currently being written accesses a physical block not fully written with data, while a read request accessing a physical block preceding the physical block currently being written accesses a physical block that has been fully written with data. Still optionally, the physical blocks of the solid-state storage device are divided into multiple groups; within each group, data is written to the physical blocks in order of their physical addresses, and, for each group, whether the read request accesses a fully written physical block is identified by identifying whether the accessed physical block is the one currently being written with data. As another example, the read request indicates the physical address to be accessed, and whether the physical block where the physical address is located has been fully written with data is identified directly from the physical address.
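The "write in physical-address order" identification options described above can be sketched as follows. The group layout, write pointers, and function names are illustrative assumptions, not the patent's implementation:

```python
# Sketch of the "write blocks in physical-address order" option: within each
# group, every block before the group's current write block must be fully
# written. Group boundaries and pointer values are illustrative.
group_write_pointer = {"group0": 3, "group1": 7}  # block being written per group

def block_group(block):
    # illustrative grouping: blocks 0-3 in group0, blocks 4-7 in group1
    return "group0" if block < 4 else "group1"

def is_fully_written(block):
    return block < group_write_pointer[block_group(block)]

def select_read_command(block):
    # fully written block -> low-delay first-type read command (step 330);
    # otherwise a second-type read command is used (step 350)
    return "first-type" if is_fully_written(block) else "second-type"

print(select_read_command(1))  # first-type
print(select_read_command(3))  # second-type
```

Comparing against a per-group write pointer avoids keeping an explicit record of every fully written block.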
If the physical address accessed by the read request is located in a physical block that has been fully written with data, there is no write command in execution for that physical block, so a read command is issued to the physical block directly (330). Further, in a physical block that has been fully written with data there is no interference caused by partially written data, so the error rate of the read data is relatively low. A read command of a specified type may be used to read data from the fully written physical block, and the NVM chip can process the read command of the specified type with lower delay. Most prior art NVM chips support such a read command of a specified type. As an example, the read command of the specified type reduces read delay through a specified read threshold voltage, a simplified cross-coupling interference handling process, and/or a low-complexity error correction process. The read command of the specified type may also be a read retry command with a specified sequence number.
If the physical address accessed by the read request is located in a physical block that has not yet been fully written with data, then, in consideration of the bit error rate caused by cross-coupling interference, another type of read command is used to read data from the not-yet-fully-written physical block (350). As an example, the other type of read command is the common read command for accessing the NVM chip, or another type of read command that reads correct data from the NVM chip through a specified read threshold voltage, a strengthened cross-coupling interference handling process, and/or a high-complexity error correction process. The other type of read command may also be a read retry command with a specified sequence number.
Further, if the physical address accessed by the read request is located in a physical block that has not yet been fully written with data, and the physical address is within a specified distance from the physical address currently being written (for example, within 2, 4, or 8 physical pages), cross-coupling interference needs to be further reduced, and, for example, a read retry command with a specific sequence number is used.
Further, in the logic unit (LUN) where the not-yet-fully-written physical block is located, there may be write commands in execution or waiting to be executed. The write commands and/or read commands accessing that logic unit are also scheduled (340) to meet the performance requirements of I/O command processing. For example, to avoid the processing delay of a read command becoming too long, when a write command and a read command access the same logic unit, the read command is processed with priority, or the write command being executed is suspended and the read command is processed instead.
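The scheduling just described (step 340) can be sketched as a simple two-queue scheduler that serves pending reads before queued writes. All names are illustrative assumptions; a real controller would also support suspending an in-flight write as the text mentions:

```python
# Sketch of per-logic-unit scheduling (step 340): pending read commands are
# served before queued write commands so that read delay stays low.
from collections import deque

read_queue = deque()
write_queue = deque()

def submit(kind, cmd):
    # enqueue a command for this logic unit
    (read_queue if kind == "read" else write_queue).append(cmd)

def next_command():
    # priority: pending reads first, then writes
    if read_queue:
        return read_queue.popleft()
    if write_queue:
        return write_queue.popleft()
    return None

submit("write", "W1")
submit("read", "R1")
print(next_command())  # R1 -> the read overtakes the earlier write
```

Serving reads first trades some write throughput for bounded read latency, which matches the stated goal of the embodiment.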
Fig. 4 is the flow chart according to the error handle of the read command of the embodiment of the present application.
When read command processing completes, the NVM chip provides the read command processing result to control unit 104 (see Fig. 1). The read command processing result indicates success, failure, or another read command processing state. The flash interface controller receives the read command processing result provided by the NVM chip (410). If the result indicates that the read command was processed successfully (420), control unit 104 indicates to the host that read command processing is complete (430) and sends the read-out data to the host. If the result indicates that read command processing failed (420), the data is read through an error handling process (440), for example by performing error handling and/or reissuing the read command to the NVM chip. The error handling process includes retrying the read with read-retry commands carrying different serial numbers, rereading the data with read commands carrying different parameters, correcting the read data with enhanced error handling techniques, reconstructing the data from data stored elsewhere (for example, with RAID techniques), and so on. During the error handling process, read commands are issued to the NVM chip again, and their processing results are received again (410).
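The Fig. 4 flow can be sketched as a simple escalation loop; this is an assumption-laden illustration, and `issue_read` and the entries of `recovery_steps` (read-retry levels, stronger ECC, RAID rebuild) are hypothetical callables, not APIs from the patent:

```python
def handle_read(issue_read, recovery_steps):
    """Issue a read; on failure, walk an escalating list of recovery
    attempts (read-retry with other serial numbers, enhanced ECC,
    RAID reconstruction) until one succeeds."""
    ok, data = issue_read()
    if ok:
        return data                      # step 430: report completion to host
    for attempt in recovery_steps:       # step 440: error handling process
        ok, data = attempt()             # each attempt reissues a read (410)
        if ok:
            return data
    raise IOError("unrecoverable read error")
```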
Fig. 5 is a schematic diagram of a control unit processing I/O requests according to an embodiment of the present application. Control unit 500 includes host interface 510, distributor 530, multiple CPUs for processing I/O requests (CPU 0, CPU 1, CPU 2, and CPU 3), media interface 520 for accessing NVM chip 105, and one or more memories coupled to control unit 500, such as DRAM (502, 504).
Host interface 510 exchanges commands and data with the host. In one example, the host communicates with the storage device over the NVMe/PCIe protocol; host interface 510 processes PCIe protocol packets, extracts NVMe protocol commands (I/O requests), and returns the processing results of the NVMe protocol commands to the host.
Distributor 530 is coupled to host interface 510, receives the I/O requests the host sends to the storage device, and distributes each I/O request to one of the multiple CPUs that process I/O requests. The distributor may be implemented by a CPU or by dedicated hardware. Part of the memory space (as indicated by DRAM 502 and DRAM 504) is used to store parts of the FTL table (indicated as FTL table 0 and FTL table 1).
Distributor 530 obtains a read request from host interface 510 (see also Fig. 3, step 310) and provides it to, for example, CPU 0. CPU 0 manages FTL table 0 in DRAM 502. Using the logical address carried by the read request, CPU 0 obtains the corresponding physical address from FTL table 0. According to an embodiment of the present application, CPU 0 identifies from the physical address whether the physical block accessed by the read request has been fully written with data (see also Fig. 3, step 320). If the physical block accessed by the read request has been fully written with data, CPU 0 provides the physical address directly to media interface 520 and instructs media interface 520 to access NVM chip 105 with that physical address. Optionally, CPU 0 also instructs media interface 520 to process the read request with a read command of the specified type (see also Fig. 3, step 330), so as to reduce read command processing delay.
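The CPU 0 dispatch logic just described (translate, test fully-written, pick a path) can be sketched as follows; this is an illustrative outline under assumed interfaces, with `block_full`, `fast_path`, and `slow_path` standing in for the fully-written test, the specified-type read path, and forwarding to CPU 1:

```python
def route_read(lba, ftl, block_full, fast_path, slow_path):
    """CPU 0 style dispatch (sketch): translate the logical address via the
    FTL table, then send reads targeting fully written blocks down the
    low-latency path and all others to the scheduling CPU."""
    pa = ftl[lba]                # logical-to-physical translation (step 320)
    if block_full(pa):
        return fast_path(pa)     # specified-type, low-delay read (step 330)
    return slow_path(pa)         # forward to CPU 1 for scheduling (step 340)
```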
If the physical block accessed by the read request has not yet been fully written with data, CPU 0 provides the physical address to CPU 1, which schedules the I/O commands to be processed on that physical block; CPU 1 also instructs media interface 520 to access NVM chip 105 with the physical address. For example, CPU 1 schedules the order and/or priority of the one or more read commands, write commands, erase commands, and so on, to be processed on the physical block, and indicates that order and/or priority to media interface 520. Optionally, CPU 1 also instructs media interface 520 to process the read request with the other type of read command (see also Fig. 3, step 340), so as to reduce the bit error rate of the read data.
Further, CPU 1 maintains a cache that temporarily records the data written to physical blocks that are not yet fully written. For example, the cache is large enough to hold the data of one physical block; while a physical block is not yet full, CPU 1 also temporarily records the data of that physical block in the cache, and if a read request targets that data, CPU 1 obtains the data from the cache as the response to the read request. After the physical block has been fully written, the cached data is released. As another example, because addresses within N physical pages of the physical page currently being written (N a positive integer, for example 2, 4, 6, or 8) suffer larger cross-coupling interference, CPU 1 maintains a cache large enough to hold N pages of data; if a read request targets that data, CPU 1 obtains the data from the cache as the response to the read request. The cache is managed in a first-in-first-out manner: as newly written data of the physical block is added to the cache, the oldest data in the cache (belonging to the physical page farthest from the one currently being written) is evicted. Optionally, the cache management policy is also optimized according to the number of times the cached data is read. As still another example, CPU 1 writes data to multiple physical blocks simultaneously and maintains a cache for each physical block. The cache is provided in, for example, DRAM 502.
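A minimal sketch of the FIFO cache over the last N written pages follows; the class and parameter names are illustrative, and the capacity plays the role of the cross-coupling window N described above:

```python
from collections import OrderedDict

class RecentWriteCache:
    """FIFO cache of the most recently programmed pages of a partially
    written block: reads that hit the cache never touch the NVM, avoiding
    the pages most exposed to cross-coupling interference."""

    def __init__(self, capacity):
        self.capacity = capacity          # window of N pages near the write point
        self.pages = OrderedDict()

    def on_write(self, page_addr, data):
        self.pages[page_addr] = data
        if len(self.pages) > self.capacity:
            # evict the oldest entry: the page farthest from the write point
            self.pages.popitem(last=False)

    def lookup(self, page_addr):
        # returns the cached data, or None on a miss (read goes to the NVM)
        return self.pages.get(page_addr)
```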
According to an embodiment of the present application, CPU 0 both maintains the FTL table (providing a physical address for the logical address of a read request) and identifies whether the physical block accessed by the read request has been fully written with data. Because the physical block corresponding to a physical address is easily obtained from the physical address taken from the FTL table, whether that physical block has been fully written is also easy to identify. Combining these two tasks for continuous processing by the same CPU helps increase read request processing speed and reduce processing delay.
Optionally, media interface 520 processes read requests from CPU 0 with a read command of the specified type, and processes read requests from CPU 1 with the other type of read command.
In one embodiment, in response to obtaining the execution result of a read command from NVM chip 105, media interface 520 provides the execution result to the originator of the read request, according to where the request came from. For example, for a read request received from CPU 0, media interface 520 supplies the execution result of the corresponding read command to CPU 0; for a read request received from CPU 1, media interface 520 supplies the execution result of the corresponding read command to CPU 1. If the execution result indicates success, CPU 0 directly or indirectly supplies the execution result of the read request to the sender of the read request (see Fig. 4, step 430); if the execution result indicates failure, CPU 0 forwards the read request to CPU 1, and CPU 1 launches the error handling process (see also Fig. 4, step 440). By having CPU 1 launch the error handling process, CPU 0 can concentrate on processing normal I/O requests, which reduces the processing load on CPU 0, increases the processing speed of CPU 0, and thereby also increases the processing speed of normal I/O requests.
If the execution result indicates success, CPU 1 directly or indirectly supplies the execution result of the read request to the sender of the read request (see Fig. 4, step 430), for example through CPU 0. If the execution result indicates failure, CPU 1 launches the error handling process (see also Fig. 4, step 440).
In another embodiment, in response to obtaining the execution result of a read command from NVM chip 105, media interface 520 does not distinguish where the read request came from: it sends successful execution results to CPU 0 and failed execution results to CPU 1. CPU 1 launches the error handling process, and CPU 0 supplies the execution result of the read request to the sender of the read request.
Continuing with Fig. 5, in an alternative embodiment, multiple CPUs are provided to process I/O requests in parallel. As an example, CPU 0 and CPU 3 perform similar tasks, and CPU 1 and CPU 2 perform similar tasks. The distributor assigns part of the I/O requests to CPU 3 for processing. Using the logical address indicated by an I/O request, CPU 3 accesses FTL table 1 to obtain the corresponding physical address. For a read request, depending on whether the physical block it accesses has been fully written, CPU 3 either sends the read request directly to media interface 520 or forwards it through CPU 1 or CPU 2. As an example, FTL table 1 and FTL table 0 are FTL tables for different logical address ranges, and distributor 530 distributes each I/O request to CPU 0 or CPU 3 according to the logical address range indicated by the request. As another example, FTL table 1 and FTL table 0 are each complete FTL tables, and distributor 530 distributes I/O requests to CPU 0 or CPU 3 according to the loads of CPU 0 and CPU 3, or assigns I/O requests to CPU 0 and CPU 3 in turn or at random. Optionally, CPU 2 is not provided and its function is taken over by CPU 1; when needed, CPU 3 forwards read requests to media interface 520 through CPU 1.
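The two distribution policies just mentioned (split by logical address range versus alternate between CPUs holding full FTL copies) can be sketched as small dispatch functions; all names and the boundary value are illustrative assumptions:

```python
import itertools

def make_range_dispatch(boundary):
    """Split the logical address space between two FTL-owning CPUs
    (here CPU 0 below the boundary, CPU 3 at or above it)."""
    return lambda lba: 0 if lba < boundary else 3

def make_round_robin(cpus):
    """Alternate requests among CPUs that each hold a complete FTL table."""
    it = itertools.cycle(cpus)
    return lambda lba: next(it)   # lba ignored: any CPU can translate it
```

A load-based variant would consult per-CPU queue depths instead of alternating; the interface would be the same.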
It is to be appreciated that the tasks performed by each CPU in Fig. 5 may also be implemented by dedicated hardware/circuits.
In yet another embodiment, the I/O requests received from host interface 510 indicate physical addresses. Accordingly, CPU 0 (and optionally CPU 3) need not maintain an FTL table for I/O requests; it uses the physical address directly to determine whether the accessed physical block is one that has been fully written with data, and either forwards the I/O request directly to media interface 520 or forwards it to media interface 520 through CPU 1, with CPU 1 scheduling the read/write requests that access the same physical block.
Fig. 6 is a schematic diagram of organizing the physical blocks of NVM chips into bulks according to an embodiment of the present application. In the solid-state storage device shown in Fig. 6, a bulk is constructed across every 16 logic units (LUN0, LUN1, ... LUN15): the physical blocks at the same physical address in each logic unit constitute a "bulk".
In Fig. 6, the blocks B0 at address 0 in LUN0-LUN15 constitute bulk 0, in which the physical blocks B0 of LUN0 to LUN14 store user data, while the physical block B0 of LUN15 stores parity data calculated from the user data in the block stripe. A physical block of NVM includes multiple pages; within a bulk, the physical pages with the same address constitute a page stripe, and parity data is calculated for each page stripe. For example, each physical page in the physical block B0 of LUN15 stores the parity data calculated over all the user data of the page stripe where that physical page resides.
Similarly, in Fig. 6, the physical blocks B2 at address 2 in LUN0-LUN15 constitute bulk 2. Optionally, the physical block storing parity data may be located in any LUN of the bulk. In the example of Fig. 6, using the parity data in a bulk, when the data of one physical page of the bulk is damaged, the data of the damaged page can be recovered from the other pages of the page stripe where that physical page resides.
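The page-stripe protection above behaves like single-parity RAID; as a sketch (assuming byte-wise XOR parity, which the patent does not spell out), the parity page and the recovery of one lost page look like this:

```python
def xor_parity(pages):
    """Parity page for one page stripe: byte-wise XOR of the user-data
    pages (the LUN15 page in the Fig. 6 layout)."""
    out = bytearray(len(pages[0]))
    for page in pages:
        for i, b in enumerate(page):
            out[i] ^= b
    return bytes(out)

def rebuild(surviving_pages, parity):
    """Recover the single damaged page of a stripe: XOR of the surviving
    user-data pages with the parity page yields the missing page."""
    return xor_parity(surviving_pages + [parity])
```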
One of ordinary skill in the art will recognize that bulks can be constructed with various other data organization schemes. A bulk includes multiple physical blocks, and data protection is provided within the bulk by redundancy or RAID techniques, so that when one or several physical blocks in the bulk are damaged, the data of the damaged physical blocks can be rebuilt from the other blocks of the bulk. Erasure is performed with the bulk as the unit: when an erase operation is executed, all physical blocks belonging to the bulk are erased together.
Referring to Fig. 6, bulks 0 through 2 have been fully written with data, while bulk N has not. A read command reads written data stored in an NVM chip. In Fig. 6, read command 620 reads data in bulk 1, which has been fully written, and read command 622 reads data in bulk N, which has not yet been fully written.
Fig. 7 is a flowchart of a method of processing read requests according to another embodiment of the present application.
In response to receiving a read request from the host (710), identify whether the bulk accessed by the read request is a bulk that has been fully written with data (720). As an example, the read request indicates the logical address to be accessed; the physical address corresponding to that logical address is obtained by querying the FTL table, and whether the bulk where the physical address resides has been fully written is identified from the physical address. Optionally, the bulks that have been fully written and/or those not yet fully written are recorded, so as to identify whether the physical address accessed by the read request falls in a fully written bulk. Still optionally, data is written to physical blocks in the order of the bulks' physical addresses: a read request accessing the physical address of the bulk currently being written in the same LUN (logic unit) accesses a bulk not yet fully written, while a read request accessing a bulk before the physical address currently being written accesses a bulk that has been fully written. Still optionally, the bulks of the solid-state storage device are divided into multiple groups; within each group, data is written to bulks in the order of their physical addresses, and whether the read request accesses a fully written bulk is identified by determining whether the bulk it accesses is the one currently being written in its group. As another example, the read request indicates the physical address to be accessed, and whether the bulk where the physical address resides has been fully written is identified directly from the physical address.
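Under the sequential-write option above, the fully-written test reduces to comparing bulk indices against the current write pointer; the following one-liner is an illustrative sketch with hypothetical units (physical addresses as flat page numbers, `bulk_size` pages per bulk):

```python
def bulk_is_full(pa, write_pointer, bulk_size):
    """Sequential-write rule (sketch): bulks wholly before the bulk
    currently being written are full; the current bulk and later
    bulks are not yet fully written."""
    return (pa // bulk_size) < (write_pointer // bulk_size)
```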
If the physical address accessed by the read request is located in a bulk that has been fully written with data, there are no write commands in progress on that bulk, so a read command can be issued to the bulk directly (730). Further, in a bulk that has been fully written with data, there is no interference caused by partially written data, so the error rate of the read data is relatively low. A read command of the specified type can be used to read data from a fully written bulk, and the NVM chip can process a read command of the specified type with lower delay.
If the physical address accessed by the read request is located in a bulk that has not yet been fully written with data, then, to reduce the bit error rate caused by cross-coupling interference, data is read from that bulk using another type of read command (750). As an example, this other type of read command obtains correct data from the NVM chip through a specified read threshold voltage, a strengthened cross-coupling interference handling process, and/or a high-complexity error correction process. The other type of read command may also be a read-retry command with a specified serial number.
Further, on the one or more logic units (LUNs) where the physical blocks of the not-yet-fully-written bulk reside, there may be write commands in progress or waiting to be executed. The write commands and/or read commands accessing the same logic units as the bulk are also scheduled (740) to satisfy the performance requirements of I/O command processing. For example, to keep read command processing delay from growing too long, when a write command and a read command access the same logic unit, the read command is processed first, or the write command being executed is suspended and the read command is processed instead.
In another embodiment according to the present application, referring back to Fig. 5, CPU 0 identifies whether the bulk accessed by the read request is a bulk that has been fully written with data (see also Fig. 7, step 710), and CPU 1 maintains a cache for the bulks not yet fully written. In one example, the cache is large enough to hold the data of a bulk. In another example, the cache is large enough to hold the data of the page stripe being written; after the page stripe has been fully written, the data of that page stripe in the cache is released. If CPU 0 identifies from the physical address corresponding to the read request that the accessed bulk has been fully written, it directly instructs media interface 520 to read data from the physical address corresponding to the read request (see also Fig. 7, step 730). If CPU 0 identifies that the accessed bulk has not yet been fully written, the read request (together with the physical address) is sent to CPU 1. CPU 1 identifies whether the data accessed by the read request is in the cache. If it is, CPU 1 obtains the data from the cache as the response to the read request; if it is not, CPU 1 instructs media interface 520 to read the data from the physical address corresponding to the read request (see also Fig. 7, step 750). Optionally, CPU 1 also schedules the read commands, write commands, and erase commands on the logic unit (LUN) corresponding to the physical address accessed by the read request, so as to reduce read command processing delay.
An embodiment of the present application also provides a program including program code which, when loaded into a CPU and executed there, causes the CPU to perform one of the methods provided above according to the embodiments of the present application.
An embodiment of the present application also provides a program including program code which, when loaded into a storage device and executed on the storage device, causes the processor of the storage device to perform one of the methods provided above according to the embodiments of the present application.
It should be understood that each block of the block diagrams and flowcharts, and combinations of blocks in the block diagrams and flowcharts, can be implemented by various means including computer program instructions. These computer program instructions may be loaded onto a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions executed on the computer or other programmable data processing apparatus create means for implementing the functions specified in one or more flowchart blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer-readable instructions for implementing the functions specified in one or more flowchart blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operations to be performed there, so as to produce a computer-implemented process, such that the instructions executed on the computer or other programmable data processing apparatus provide operations for implementing the functions specified in one or more flowchart blocks.
Accordingly, the blocks of the block diagrams and flowcharts support combinations of means for performing the specified functions, combinations of operations for performing the specified functions, and combinations of program instruction means for performing the specified functions. It should also be understood that each block of the block diagrams and flowcharts, and combinations of blocks, can be implemented by special-purpose hardware-based computer systems that perform the specified functions or operations, or by combinations of special-purpose hardware and computer instructions.
Although the present application has been described with reference to examples, these are intended only for the purpose of explanation rather than limitation of the application; changes, additions, and/or deletions may be made to the embodiments without departing from the scope of the application.
Benefiting from the teaching presented in the foregoing description and the associated drawings, those skilled in the relevant fields will recognize many modifications and other embodiments of the application recorded here. It should therefore be understood that the application is not limited to the specific embodiments disclosed, and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are used herein, they are used only in a general and descriptive sense and not for the purpose of limitation.
Claims (10)
1. A method of processing I/O requests with low latency, comprising:
in response to receiving a read request, obtaining the physical address accessed by the read request;
if the physical block accessed by the read request has been fully written with data, issuing a first-type read command to the NVM chip to respond to the read request;
if the physical block accessed by the read request has not yet been fully written with data, issuing a second-type read command to the NVM chip; and
wherein the first-type read command has a smaller processing delay than the second-type read command.
2. The method according to claim 1, further comprising:
if the physical block accessed by the read request has not yet been fully written with data, also scheduling the processing order of the read requests and write requests accessing that physical block, so that said read request is processed with priority.
3. The method according to claim 1 or 2, wherein whether the physical block accessed by the read request has been fully written with data is identified from the physical address.
4. The method according to any one of claims 1-3, further comprising:
in response to receiving a read command processing result provided by the NVM chip: if the result indicates successful processing, indicating to the sender of the read request that read request processing is complete; if the result indicates processing failure, issuing a read command to the NVM chip again through an error handling process.
5. A method of processing I/O requests with low latency, comprising:
in response to receiving a read request, obtaining the physical address accessed by the read request;
if the bulk accessed by the read request has been fully written with data, issuing a first-type read command to the NVM chip to respond to the read request;
if the bulk accessed by the read request has not yet been fully written with data, issuing a second-type read command to the NVM chip; and
wherein the first-type read command has a smaller processing delay than the second-type read command.
6. An apparatus for processing I/O requests with low latency, comprising:
a host interface, a distributor, multiple CPUs, and a media interface;
the host interface receives read requests; the distributor is coupled to the host interface and distributes the read requests received by the host interface to a first CPU;
the first CPU identifies whether the physical block accessed by a read request has been fully written with data; if the physical block has been fully written with data, the first CPU instructs the media interface to issue a first-type read command to the NVM chip; if the physical block has not yet been fully written with data, the first CPU forwards the read request to a second CPU;
in response to receiving the read request, the second CPU instructs the media interface to issue a second-type read command to the NVM chip; and
wherein the first-type read command has a smaller processing delay than the second-type read command.
7. The apparatus according to claim 6, wherein the second CPU also schedules the processing order of the read requests and write requests accessing the physical block, so that said read request is processed with priority.
8. The apparatus according to claim 6 or 7, wherein the first CPU obtains the physical address by querying an FTL table in memory with the logical address of the read request.
9. The apparatus according to any one of claims 6-8, wherein:
for an instruction from the first CPU, the media interface controller supplies the corresponding processing result provided by the NVM chip to the first CPU;
for an instruction from the second CPU, the media interface controller supplies the corresponding processing result provided by the NVM chip to the second CPU; and
the first CPU identifies the processing result provided by the NVM chip, and if the processing result indicates processing failure, the first CPU instructs the second CPU to launch the error handling process.
10. An apparatus for processing I/O requests with low latency, comprising:
a physical address acquisition module, for obtaining, in response to receiving a read request, the physical address accessed by the read request;
a first-type read command generation module, for issuing a first-type read command to the NVM chip to respond to the read request if the physical block accessed by the read request has been fully written with data;
a second-type read command generation module, for issuing a second-type read command to the NVM chip if the physical block accessed by the read request has not yet been fully written with data; and
wherein the first-type read command has a smaller processing delay than the second-type read command.
Priority Application: CN201710671697.9A, filed 2017-08-08, granted as CN109388333B (en) — Method and apparatus for reducing read command processing delay.
Publications: CN109388333A, published 2019-02-26; CN109388333B, granted 2023-05-05.
Family
ID=65414119
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710671697.9A Active CN109388333B (en) | 2017-08-08 | 2017-08-08 | Method and apparatus for reducing read command processing delay |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109388333B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111459402A (en) * | 2020-02-20 | 2020-07-28 | 华中科技大学 | Magnetic disk controllable buffer writing method, controller, hybrid IO scheduling method and scheduler |
WO2021004310A1 (en) * | 2019-07-10 | 2021-01-14 | 深圳大普微电子科技有限公司 | Method for enhancing quality of service of solid-state drive and solid-state drive |
CN113986137A (en) * | 2021-10-28 | 2022-01-28 | 英韧科技(上海)有限公司 | Storage device and storage system |
CN116185310A (en) * | 2023-04-27 | 2023-05-30 | 中茵微电子(南京)有限公司 | Memory data read-write scheduling method and device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101645043A (en) * | 2009-09-08 | 2010-02-10 | 成都市华为赛门铁克科技有限公司 | Methods for reading and writing data and memory device |
US20100128617A1 (en) * | 2008-11-25 | 2010-05-27 | Qualcomm Incorporated | Method and apparatus for two-way ranging |
CN105511964A (en) * | 2015-11-30 | 2016-04-20 | 华为技术有限公司 | I/O request processing method and device |
CN106326133A (en) * | 2015-06-29 | 2017-01-11 | 华为技术有限公司 | A storage system, a storage management device, a storage device, a mixed storage device and a storage management method |
CN106448737A (en) * | 2016-09-30 | 2017-02-22 | 北京忆芯科技有限公司 | Flash memory data reading method and device and solid disk drive |
CN106527967A (en) * | 2015-09-10 | 2017-03-22 | 蜂巢数据有限公司 | Reducing read command latency in storage devices |
CN106537365A (en) * | 2014-05-30 | 2017-03-22 | 桑迪士克科技有限责任公司 | Methods and systems for staggered memory operations |
CN106708441A (en) * | 2016-12-29 | 2017-05-24 | 忆正科技(武汉)有限公司 | Operating method for decreasing read delay of solid-state disk |
-
2017
- 2017-08-08 CN CN201710671697.9A patent/CN109388333B/en active Active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100128617A1 (en) * | 2008-11-25 | 2010-05-27 | Qualcomm Incorporated | Method and apparatus for two-way ranging |
CN101645043A (en) * | 2009-09-08 | 2010-02-10 | 成都市华为赛门铁克科技有限公司 | Methods for reading and writing data and memory device |
CN106537365A (en) * | 2014-05-30 | 2017-03-22 | 桑迪士克科技有限责任公司 | Methods and systems for staggered memory operations |
CN106326133A (en) * | 2015-06-29 | 2017-01-11 | 华为技术有限公司 | A storage system, a storage management device, a storage device, a mixed storage device and a storage management method |
CN106527967A (en) * | 2015-09-10 | 2017-03-22 | 蜂巢数据有限公司 | Reducing read command latency in storage devices |
CN105511964A (en) * | 2015-11-30 | 2016-04-20 | 华为技术有限公司 | I/O request processing method and device |
CN106448737A (en) * | 2016-09-30 | 2017-02-22 | 北京忆芯科技有限公司 | Flash memory data reading method and device and solid disk drive |
CN106708441A (en) * | 2016-12-29 | 2017-05-24 | 忆正科技(武汉)有限公司 | Operating method for decreasing read delay of solid-state disk |
Non-Patent Citations (2)
Title |
---|
CHUA-CHIN WANG.ETC: "A Single-ended Disturb-free 5T Loadless SRAMwith Leakage Sensor and Read Delay CompensationUsing 40 nm CMOS Process" * |
魏元豪等: "针对固态硬盘的拥塞控制I/O调度器" * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021004310A1 (en) * | 2019-07-10 | 2021-01-14 | 深圳大普微电子科技有限公司 | Method for enhancing quality of service of solid-state drive and solid-state drive |
US11886743B2 (en) | 2019-07-10 | 2024-01-30 | Shenzhen Dapu Microelectronics Co., Ltd. | Method for enhancing quality of service of solid-state drive and solid-state drive |
CN111459402A (en) * | 2020-02-20 | 2020-07-28 | 华中科技大学 | Magnetic disk controllable buffer writing method, controller, hybrid IO scheduling method and scheduler |
CN111459402B (en) * | 2020-02-20 | 2021-07-27 | 华中科技大学 | Magnetic disk controllable buffer writing method, controller, hybrid IO scheduling method and scheduler |
CN113986137A (en) * | 2021-10-28 | 2022-01-28 | 英韧科技(上海)有限公司 | Storage device and storage system |
CN116185310A (en) * | 2023-04-27 | 2023-05-30 | 中茵微电子(南京)有限公司 | Memory data read-write scheduling method and device |
Also Published As
Publication number | Publication date |
---|---|
CN109388333B (en) | 2023-05-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10126964B2 (en) | Hardware based map acceleration using forward and reverse cache tables | |
CN109085997A (en) | Memory-efficient persistent key-value storage for non-volatile memory | |
CN109388333A (en) | Method and apparatus for reducing read command processing delay | |
CN106708425A (en) | Distributed multimode storage management | |
CN109101185B (en) | Solid-state storage device and write command and read command processing method thereof | |
CN108984429A (en) | Data storage device with buffer occupancy period management | |
JP2014515534A (en) | Apparatus including memory system controller and associated method | |
TW201108231A (en) | Method for giving program commands to flash memory chips, and controller and storage system using the same | |
CN108153482A (en) | I/O command processing method and media interface controller | |
CN106469126A (en) | Method for processing I/O requests and storage controller thereof | |
CN109164976A (en) | Optimizing storage device performance using a write buffer | |
KR20150052039A (en) | Information processing device | |
US11650760B2 (en) | Memory system and method of controlling nonvolatile memory with checking a total size indicative of a sum of data length specified by a write command | |
CN108572932A (en) | Multi-plane NVM command fusion method and apparatus | |
CN114253461A (en) | Mixed channel memory device | |
CN108153582A (en) | I/O command processing method and media interface controller | |
CN107562648A (en) | Lock-free FTL access method and apparatus | |
CN108628759A (en) | Method and apparatus for out-of-order execution of NVM commands | |
CN109815157A (en) | Program command processing method and device | |
US9152348B2 (en) | Data transmitting method, memory controller and data transmitting system | |
CN108877862A (en) | Page stripe data organization and method and apparatus for writing data to page stripes | |
CN114253462A (en) | Method for providing mixed channel memory device | |
CN114968849B (en) | Method and equipment for improving utilization rate of programming cache | |
CN108984108A (en) | Method and solid-state storage device for scheduling I/O commands | |
CN115576867A (en) | Extended address space for memory devices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: Room A302, Building B-2, Dongsheng Science Park, Zhongguancun, 66 Xixiaokou Road, Haidian District, Beijing 100192
Applicant after: Beijing yihengchuangyuan Technology Co.,Ltd.
Address before: Room A302, Building B-2, Dongsheng Science Park, Zhongguancun, 66 Xixiaokou Road, Haidian District, Beijing 100192
Applicant before: BEIJING MEMBLAZE TECHNOLOGY Co.,Ltd.
GR01 | Patent grant | ||