CN102439576A - Memory controllers, memory systems, solid state drives and methods for processing a number of commands


Info

Publication number
CN102439576A
Authority
CN
China
Prior art keywords
command
a number of
channel
back end
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2010800227477A
Other languages
Chinese (zh)
Other versions
CN102439576B (en)
Inventor
迈赫迪·阿斯纳阿沙里
廖玉松
杨芮尧
西亚麦克·内马齐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Micron Technology Inc
Original Assignee
Micron Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Micron Technology Inc
Publication of CN102439576A
Application granted
Publication of CN102439576B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 3/061 Interfaces specially adapted for storage systems; improving I/O performance
    • G06F 3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F 3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G06F 3/0688 Non-volatile semiconductor memory arrays
    • G06F 13/1605 Handling requests for interconnection or transfer for access to memory bus based on arbitration
    • G06F 13/161 Handling requests for interconnection or transfer for access to memory bus based on arbitration with latency improvement
    • G06F 13/28 Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access (DMA), cycle steal

Abstract

The present disclosure includes methods and devices for a memory controller. In one or more embodiments, a memory controller includes a plurality of back end channels, and a command queue communicatively coupled to the plurality of back end channels. The command queue is configured to hold host commands received from a host. Circuitry is configured to generate a number of back end commands at least in response to a number of the host commands in the command queue, and distribute the number of back end commands to a number of the plurality of back end channels.

Description

Memory controllers, memory systems, solid state drives and methods for processing a number of commands
Technical field
The present disclosure relates generally to semiconductor memory devices, methods, and systems, and more particularly, to memory controllers, memory systems, solid state drives, and methods for processing a number of commands.
Background
Memory devices are typically provided as internal semiconductor integrated circuits in computers or other electronic devices. There are many different types of memory, including volatile and non-volatile memory. Volatile memory can require power to maintain its data and can include random access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM), among others. Non-volatile memory can provide persistent data by retaining stored information when not powered and can include NAND flash memory, NOR flash memory, read-only memory (ROM), electrically erasable programmable ROM (EEPROM), erasable programmable ROM (EPROM), and phase change random access memory (PCRAM), among others.
Memory devices can be combined together to form a solid state drive (SSD). An SSD can include non-volatile memory, e.g., NAND flash memory and NOR flash memory, and/or can include volatile memory, e.g., DRAM and SRAM, among various other types of non-volatile and volatile memory.
An SSD can be used to replace hard disk drives as the main storage device for a computer, as the SSD can have advantages over hard drives in terms of performance, size, weight, ruggedness, operating temperature range, and power consumption. For example, an SSD can have superior performance when compared to a magnetic disk drive because it lacks moving parts, which can avoid seek time, latency, and other electromechanical delays associated with disk drives. SSD manufacturers can use non-volatile flash memory to create flash SSDs that may not use an internal battery supply, thus allowing the drive to be more versatile and compact.
An SSD can include a number of memory devices, e.g., a number of memory chips (as used herein, "a number of" something can refer to one or more of such things; for example, a number of memory devices can refer to one or more memory devices). As one of ordinary skill in the art will appreciate, a memory chip can include a number of dies. Each die can include a number of memory arrays and peripheral circuitry thereon. A memory array can include a number of planes, with each plane including a number of physical blocks of memory cells. Each physical block can include a number of pages that can store a number of sectors of data.
A memory system, e.g., an SSD, can be incorporated into a computing system; the memory system can be communicatively coupled to a host through a communication interface, e.g., a Serial Advanced Technology Attachment (SATA) high speed serial bus designed primarily for transferring commands and data between a host and mass storage devices such as hard disk drives, optical drives, and SSDs.
During operation of an SSD, commands such as program commands, read commands, and erase commands, among others, can be used. For instance, a program (e.g., write) command can be used to program data onto the solid state drive, a read command can be used to read data on the solid state drive, and an erase command can be used to erase data on the solid state drive.
Summary of the invention
Description of drawings
Fig. 1 is a functional block diagram of a computing system in accordance with one or more embodiments of the present disclosure.
Fig. 2 is a functional block diagram of a computing system including at least one memory system in accordance with one or more embodiments of the present disclosure.
Fig. 3 is a functional block diagram of a memory system controller communicatively coupled to a number of memory devices in accordance with one or more embodiments of the present disclosure.
Fig. 4 illustrates a logical-to-physical address map in accordance with one or more embodiments of the present disclosure.
Fig. 5 is a functional block diagram of a command queue of a front end DMA in accordance with one or more embodiments of the present disclosure.
Figs. 6A and 6B illustrate the operation of a command queue of a front end DMA in accordance with one or more embodiments of the present disclosure.
Fig. 7 is a flow diagram for distributing commands among a number of back end channels in accordance with one or more embodiments of the present disclosure.
Fig. 8 is a functional block diagram illustrating the interface between a front end and a number of channels in accordance with one or more embodiments of the present disclosure.
Fig. 9A is a functional block diagram of a direct memory access (DMA) descriptor block implemented in accordance with one or more embodiments of the present disclosure.
Fig. 9B illustrates an entry in the DMA descriptor block (DDB) of Fig. 9A in accordance with one or more embodiments of the present disclosure.
Detailed description
The present disclosure includes memory controllers, memory systems, solid state drives, and methods for processing a number of commands. In one or more embodiments, a memory controller includes a plurality of back end channels and a command queue communicatively coupled to the plurality of back end channels. The command queue can be configured to hold host commands received from a host. Circuitry is configured to generate a number of back end commands at least in response to a number of the host commands in the command queue, and to distribute the number of back end commands to a number of the plurality of back end channels.
The present disclosure also includes methods and devices for a memory controller. In one or more embodiments, a memory controller includes a plurality of back end channels, and a front end command dispatcher and command queue communicatively coupled to the plurality of back end channels. The command dispatcher can be configured to determine the net change to memory to be effected by a number of commands, and to modify one or more of the number of commands so as to optimize distribution of the number of commands among the plurality of back end channels.
The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. For example, 104 may reference element "04" in Fig. 1, and a similar element may be referenced as 204 in Fig. 2, and so on.
Fig. 1 is a functional block diagram of a computing system in accordance with one or more embodiments of the present disclosure. The embodiment of Fig. 1 illustrates the components and architecture of one embodiment of a computing system 100. Computing system 100 includes a memory system 104, for instance a solid state drive (SSD), communicatively coupled to a host, e.g., host 102, through an interface 106, e.g., a USB, PCI, SATA/150, SATA/300, or SATA/600 interface, among others.
SATA was designed as a successor to the Advanced Technology Attachment (ATA) standard, which is commonly referred to as Parallel ATA (PATA). First generation SATA interfaces, also known as SATA/150 or unofficially as SATA 1, communicate at a rate of about 1.5 gigabits per second (Gb/s), or 150 megabytes per second (MB/s). Subsequently, a 3.0 Gb/s signaling rate was added to the physical layer, effectively doubling the maximum, e.g., maximum data throughput, from 150 MB/s to 300 MB/s. The 3.0 Gb/s specification is also known as SATA/300, or unofficially as SATA II or SATA2. SATA/300 transfer rates may temporarily satisfy the throughput requirements of magnetic hard drives; however, because solid state drives using multiple fast flash channels can support much higher data rates, even faster SATA standards, e.g., SATA/600 with a throughput of 6 Gb/s, can be implemented to keep pace with flash solid state drive read speeds.
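The 10:1 ratio between the line rate in Gb/s and the throughput in MB/s quoted above follows from SATA's 8b/10b line encoding, which places 10 bits on the wire for every data byte. The following short sketch (an illustration only, not part of the disclosure) works out that arithmetic:

```python
# Effective SATA throughput: 8b/10b encoding sends 10 line bits per data byte,
# so throughput in MB/s is line rate (Gb/s) * 1000 / 10.
def sata_throughput_mb_s(line_rate_gb_s: float) -> float:
    bits_per_byte_on_wire = 10  # 8b/10b encoding
    return line_rate_gb_s * 1000 / bits_per_byte_on_wire

for gen, rate in (("SATA/150", 1.5), ("SATA/300", 3.0), ("SATA/600", 6.0)):
    print(f"{gen}: {rate} Gb/s -> {sata_throughput_mb_s(rate):.0f} MB/s")
```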
Host 102 can include a number of separate integrated circuits, or more than one component or function can be on the same integrated circuit. According to one or more embodiments, host 102 can be physically implemented, at least in part, as a "motherboard" in computing system 100, with the SSD 104 physically implemented on a separate card, the motherboard and the SSD being communicatively coupled through a bus.
Host 102 can include a number of processors 105, e.g., parallel processors, co-processors, processor cores, etc., communicatively coupled to a memory and bus control 107. The number of processors 105 can be microprocessors or some other type of controlling circuitry, such as an application specific integrated circuit (ASIC). Other components of the computing system may also have processors. The memory and bus control 107 can have memory and other components directly communicatively coupled thereto, for example, dynamic random access memory (DRAM) 111 and a graphics user interface 113 or other user interface, e.g., a display monitor, keyboard, mouse, etc.
The memory and bus control 107 can also have a peripheral and bus control 109 communicatively coupled thereto, which in turn can connect to a number of devices, such as a flash drive 115 using a universal serial bus (USB) interface, a non-volatile memory host controller interface (NVMHCI) flash memory 117, or the SSD 104. As the reader will appreciate, the SSD 104 can be used together with, or in place of, a hard disk drive (HDD) in a number of different computing systems. The computing system 100 illustrated in Fig. 1 is one example of such a system.
Fig. 2 is a functional block diagram of a computing system including at least one memory system in accordance with one or more embodiments of the present disclosure. Computing system 200 includes a memory system 204, e.g., an SSD, communicatively coupled to a host 202. The SSD 204 can be communicatively coupled to the host 202 through an interface 206, e.g., a cable or bus such as a SATA interface. The SSD 204 can be analogous to the solid state drive 104 in Fig. 1.
Fig. 2 illustrates the components of one or more embodiments of a solid state drive 204, including a controller 210, a physical interface 208, e.g., a connector, and a number of memory devices 212-1, ..., 212-N, i.e., a number of memory devices corresponding to a number of channels of the controller 210, e.g., one or more memory devices corresponding to a particular channel. Accordingly, the memory devices 212-1, ..., 212-N are shown as "channel N memory" in the drawing. As used herein, a memory device can include a number of memory cells, e.g., a die, chip, array, or other group, that share control inputs and can be made of a number of memory types, e.g., NAND flash. Control inputs can generally include address latch enable (ALE), chip enable (CE), read enable (RE), ready/busy (R/B), write protect (WP), and input/output (I/O) connections, e.g., pins, pads, etc. In one or more embodiments, the SSD 204 can include a housing to enclose the SSD 204, although such a housing is not essential; for instance, the host 202 and the SSD 204 can both be enclosed by a computing system housing.
Interface 206 can be used to communicate information between the SSD 204 and another device, such as the host 202. According to one or more embodiments, the SSD 204 can be used as a storage device in the computing system 200. According to one or more embodiments, the SSD 204 can be configured as an external or portable memory system for the computing system 200, e.g., with plug-in connectivity.
Controller 210 can communicate with the memory devices 212-1, ..., 212-N to operate, e.g., read, program (i.e., write), erase, etc., the memory cells of the memory devices. Controller 210 can be used to manage communications with, and the data stored in, the memory devices 212-1, ..., 212-N. Controller 210 can have circuitry that may be a number of integrated circuits, and can also have circuitry that may be a number of discrete components. For one or more embodiments, the circuitry in controller 210 can include control circuitry for controlling access across a number of channels and across the number of memory devices 212-1, ..., 212-N. Memory controller 210 can selectively communicate with a corresponding memory device through a respective one of the channels.
The communication protocol between the host 202 and the SSD 204 may be different from the communication protocol required for accessing a memory device, e.g., memory devices 212-1, ..., 212-N. Memory controller 210 can include control circuitry configured to translate commands received from the host 202 into appropriate commands to accomplish the intended operation across the number of memory devices 212-1, ..., 212-N. The circuitry of memory controller 210 can provide a translation layer between the host 202 and the SSD 204. Memory controller 210 can also process host command sequences, the associated data, and other information, e.g., signals, into appropriate channel command sequences, for instance to store and retrieve data. Memory controller 210 can selectively distribute commands, and convey, e.g., receive, send, transmit, the associated data and other information, through the appropriate channel to the corresponding memory device at the appropriate time.
According to one or more embodiments of the present disclosure, each memory device 212-1, ..., 212-N can include a number of memory cells. The memory devices 212-1, ..., 212-N can be formed using various types of volatile or non-volatile memory arrays, e.g., NAND flash and DRAM arrays, among others. According to one or more embodiments of the present disclosure, the memory devices 212-1, ..., 212-N can include a number of flash memory cells arranged in a NAND architecture, a NOR architecture, an AND architecture, or some other memory array architecture, all of which can be used to implement one or more embodiments of the present disclosure.
The memory devices 212-1, ..., 212-N can include a number of memory cells that can be configured to provide physical or logical groupings, e.g., pages, blocks, planes, arrays, or other groups. A page can store data in a number of physical sectors of data. Each physical sector can correspond to a logical sector and can include overhead information, such as error correction code (ECC) information and logical block address (LBA) information, as well as user data. As one of ordinary skill in the art will appreciate, logical block addressing is a scheme often used by a host to identify a logical sector of information. As an example, a logical sector can store information representing a number of bytes of data, e.g., 256 bytes, 512 bytes, or 1,024 bytes. As used herein, a page refers to a unit of programming and/or reading, e.g., a number of cells that are programmed and/or read together or as a functional group, or the amount of data stored thereon. For example, some memory arrays can include a number of pages that make up a block of memory cells, i.e., a block includes memory cells that can be erased together as a unit, e.g., the cells in each physical block can be erased in a substantially simultaneous manner. A plane of memory cells can include a number of blocks, a die can include a number of planes of memory cells, and an array can include a number of dies. By way of example, and not by way of limitation, a 128 GB memory device can include 4314 bytes of data per page, 128 pages per block, 2048 blocks per plane, and 16 planes per device. However, embodiments are not limited to this example.
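Purely as an arithmetic aid, and not as part of the disclosure, the block and plane sizes implied by the example geometry above can be worked out as follows; the per-page figure of 4314 bytes includes overhead such as ECC alongside the user data:

```python
# Arithmetic sketch for the example geometry: 4314 bytes/page (user data plus
# overhead), 128 pages/block, 2048 blocks/plane.
BYTES_PER_PAGE = 4314
PAGES_PER_BLOCK = 128
BLOCKS_PER_PLANE = 2048

bytes_per_block = BYTES_PER_PAGE * PAGES_PER_BLOCK
bytes_per_plane = bytes_per_block * BLOCKS_PER_PLANE

print(f"block: {bytes_per_block:,} bytes (~{bytes_per_block / 2**20:.2f} MiB)")
print(f"plane: {bytes_per_plane:,} bytes (~{bytes_per_plane / 2**30:.2f} GiB)")
```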
Each memory device 212-1, ..., 212-N can include various types of volatile and non-volatile memory arrays, e.g., flash and DRAM arrays, among others. In one or more embodiments, the memory devices 212-1, ..., 212-N can be solid state memory arrays. The memory devices 212-1, ..., 212-N can include a number of memory cells that can be grouped into units. As used herein, a unit can include a number of memory cells, such as a page, a physical block, a plane, an entire array, or another group of memory cells. For example, a memory device can be one memory array and can include a number of planes, with each plane including a number of physical blocks. The memory cells in each physical block can be erased together as a unit, e.g., the cells in each physical block can be erased in a substantially simultaneous manner; for example, the cells in each physical block can be erased together in a single operation. A physical block can include a number of pages. The memory cells in each page can be programmed together as a unit, e.g., the cells in each page can be programmed in a substantially simultaneous manner. The memory cells in each page can also be read together as a unit.
A physical sector of a memory system can correspond to a logical sector and can include overhead information, such as error correction code (ECC) information and logical block address (LBA) information, as well as user data. As one of ordinary skill in the art will appreciate, logical block addressing is a scheme often used by a host to identify a logical sector of information. As an example, each physical sector can store information representing a number of bytes of data, e.g., 256 bytes, 512 bytes, or 1,024 bytes, among other numbers of bytes. However, embodiments of the present disclosure are not limited to a particular number of bytes of data stored in a physical sector or associated with a logical sector.
Fig. 3 is a functional block diagram of a memory system controller communicatively coupled to a number of memory devices in accordance with one or more embodiments of the present disclosure. As shown in Fig. 3, a memory controller 310 can be communicatively coupled to a number of, e.g., eight, memory devices, e.g., 312-1, ..., 312-N. In one or more embodiments, the memory devices can be the memory devices shown as 212-1, ..., 212-N in Fig. 2. Each memory device, e.g., 312-1, ..., 312-N, corresponds to a channel of the controller 310, e.g., 350-1, ..., 350-N. As used herein, a memory device can include a number of memory cells sharing control inputs, as previously discussed. In one or more embodiments, the memory controller 310 can be an SSD controller. In one or more embodiments, the memory controller 310 can be analogous to the controller 210 shown in Fig. 2.
Each memory device, e.g., 312-1, ..., 312-N, can be organized as previously described for the memory devices 212-1, ..., 212-N, and can be fabricated on an individual die or on stacked dies. Each die can include a number of memory cell arrays. The memory controller 310 can include a front end portion 344 and a back end portion 346. The controller 310 can process commands and data in the front end 344, e.g., to optimize the distribution of a number of commands among the plurality of back end channels, for instance by reducing the quantity of commands transferred to the back end portion 346. The controller 310 can further process commands and data within each of the back end channels to achieve additional efficiencies with respect to memory operations on a particular channel. In this manner, the controller 310 manages communications with the memory devices 312-1, ..., 312-N.
As shown in Fig. 3, the front end portion 344 can include a host interface 314 having an assignment file 315 communicatively coupled to an application layer 320, and a host buffer 322, e.g., a FIFO. For instance, the host interface 314 can be configured to communicate input and output information, e.g., data streams, with a host, e.g., 202 in Fig. 2, through a physical interface on the SSD, e.g., 208 in Fig. 2, and a SATA interface, e.g., 206 in Fig. 2. According to one or more embodiments, commands, including command parameters, e.g., the command portion of the input information, can be directed to the assignment file 315, and the associated payload, e.g., the data portion of the input information, can be directed to the host FIFO 322.
The assignment file 315 can be a deep queue and can communicate with a front end direct memory access module (DMA) 316 through a command processor and dispatcher 318 (hereinafter the "command dispatcher"). The command dispatcher 318 is configured, e.g., includes hardware, such that upon arrival of a command from the host it can check the command in the assignment file 315 against certain criteria, e.g., an integrity check, and once the command passes those criteria, it can accept the arriving command from the assignment file 315 and dispatch it to the front end DMA 316 and the appropriate back end channel. Previous approaches used firmware to perform the integrity check; however, performing the host command integrity check in hardware through the command dispatcher 318 is faster, resulting in an increase in host command processing speed.
The host FIFO 322 can be communicatively coupled to an encryption device 324 having a number of crypto engines, e.g., crypto engines implementing the AES algorithm. The encryption device 324 can be configured to process, e.g., encrypt, the payload associated with a particular command and to transfer the payload to the front end DMA 316. Additional detail regarding the operation of the encryption device 324 can be found in copending U.S. patent application Ser. No. 12/333,822, entitled "Parallel Encryption/Decryption," filed Dec. 12, 2008, having at least one common inventor and having attorney docket number 1002.0400001.
The front end portion 344 can also have a number of other processors 330, which can include a front end processor (FEP) 328, memory 336, e.g., RAM, ROM, a DMA 332, and a main buffer 334. For example, the number of processors 330 can be communicatively coupled to the front end DMA 316 through a communication bus.
The front end DMA 316 can include a DMA descriptor block (DDB) with associated registers, and registers 340 for holding a number of data words. The front end DMA 316 can also include an arbiter 342 for arbitrating among the number of channels communicatively coupled thereto. The encryption device 324 can also be communicatively coupled to the FEP 328. The FEP 328 can also be directly communicatively coupled to the host FIFO 322 and the front end DMA 316.
The front end DMA 316 can be communicatively coupled to the command dispatcher 318. The controller 310 can include a number of channels, e.g., 1, ..., N, corresponding to the number of memory devices, e.g., 312-1, ..., 312-N. The relationship between the number of channels and the number of memory devices is described herein, and shown in the drawings, as a one-to-one relationship; however, embodiments of the present disclosure are not so limited, and other configurations are contemplated, e.g., multiple memory devices corresponding to a particular channel, a particular memory device corresponding to multiple channels, or combinations thereof. The front end DMA 316 and the command dispatcher 318 effectively couple the front end 344 circuitry to the back end circuitry 346, e.g., back end channel 1 (350-1), ..., back end channel N (350-N). According to one or more embodiments of the present disclosure, the controller 310 includes eight channels, e.g., 1, ..., 8. Embodiments of the present disclosure are not limited to controllers having eight channels, and a controller can therefore be implemented with more or fewer than eight channels.
Referring now to the back end portion 346 of the controller 310, the back end portion 346 includes a number of channels, e.g., 350-1, ..., 350-N. Each back end channel can include a channel processor, e.g., 356-1, ..., 356-N, and an associated channel DMA, e.g., 354-1, ..., 354-N, each of which can be communicatively coupled to the front end DMA 316. The command dispatcher 318 can be configured to distribute commands to the respective channel processors, e.g., 356-1, ..., 356-N, through channel command queues, e.g., 355-1, ..., 355-N. In one or more embodiments, a channel command queue, e.g., 355-1, ..., 355-N, can hold a number of commands received from the command dispatcher 318.
The front end DMA 316 can be configured to distribute data associated with a particular command to the respective channel DMA, e.g., 354-1, ..., 354-N. The channel DMAs, e.g., 354-1, ..., 354-N, can be communicatively coupled to channel buffers, e.g., 358-1, ..., 358-N, which in turn can be communicatively coupled to error correction code (ECC) and memory interface modules, e.g., 360-1, ..., 360-N. The channel processors, e.g., 356-1, ..., 356-N, can also be communicatively coupled to the ECC/memory interfaces, e.g., 360-1, ..., 360-N, the channel DMAs, e.g., 354-1, ..., 354-N, and the channel buffers, e.g., 358-1, ..., 358-N.
Although the embodiment shown in Fig. 3 illustrates each back end channel 350-1, ..., 350-N as including a back end channel processor, e.g., 356-1, ..., 356-N, embodiments of the present disclosure are not so limited. For example, the back end portion 346 can include circuitry such as a shared back end processor, including, for example, hardware logic such as an application specific integrated circuit (ASIC) that can operate on a number of the back end channels, e.g., 350-1, ..., 350-N. The shared back end processor can thus be communicatively coupled to, and communicate with, the command dispatcher 318 and the front end DMA 316 in a manner similar to that described for the dedicated channel processors, e.g., 356-1, ..., 356-N. As shown in Fig. 3, a particular memory device, e.g., 312-1, ..., 312-N, corresponds to each channel, e.g., 350-1, ..., 350-N, such that access to the particular memory device occurs through the respective channel.
The host interface 314 can be a communication interface between the controller 310 and the host. In one or more embodiments, the information communicated between the host and the controller can include a number of commands, e.g., program (e.g., write) commands, read commands, and erase commands. The commands can be used to operate the associated memory devices.
The command dispatcher 318 can receive a number of commands from the host, e.g., 202 in Fig. 2, through the host interface 314 and the application layer 320. The command dispatcher 318 can hold the received commands and can distribute the commands to the respective channel command queues, e.g., 355-1, ..., 355-N, of a number of respective back end channels, e.g., 350-1, ..., 350-N, and to the front end DMA 316.
A payload can be associated with a command. For instance, for a command that writes to memory, the associated payload can be the data to be written. The payload associated with a particular command can be received at the front end DMA 316 through the host FIFO 322 and the encryption device 324. The front end DMA 316 can distribute the data associated with a particular command in the command dispatcher 318 to a channel DMA, e.g., 354-1, ..., 354-N, or directly to a respective channel buffer, e.g., 358-1, ..., 358-N. A channel DMA, e.g., 354-1, ..., 354-N, can distribute the data associated with a particular command to the respective channel buffer, e.g., 358-1, ..., 358-N. In one or more embodiments, a channel buffer, e.g., 358-1, ..., 358-N, can hold data corresponding to a number of commands, the data being received from the front end DMA 316 through the channel DMA, e.g., 354-1, ..., 354-N.
In one or more embodiments, the information communicated from the host, e.g., 202 in Fig. 2, to the command dispatcher 318 of the controller 310 can include a number of commands, e.g., program commands, read commands, and erase commands, among others. A program command can be used to write data to memory, e.g., memory devices 312-1, ..., 312-N, a read command can be used to read data from memory, and an erase command can be used to erase a portion of the memory. A command can indicate the type of operation, e.g., program, read, erase, along with a starting location, e.g., an LBA, and the amount of memory involved in the operation, e.g., a number of logical sectors.
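To make the command fields concrete, the following minimal sketch models a command as an operation type, a starting LBA, and a sector count; the field names are illustrative assumptions, not terms taken from the disclosure:

```python
from dataclasses import dataclass
from enum import Enum

class Op(Enum):
    PROGRAM = "program"  # write
    READ = "read"
    ERASE = "erase"

@dataclass
class Command:
    op: Op          # type of operation
    lba: int        # starting logical block address
    sectors: int    # number of logical sectors involved

# e.g., a host command to program 16 sectors starting at LBA 1000:
cmd = Command(Op.PROGRAM, lba=1000, sectors=16)
print(cmd)
```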
In one or more embodiments, LBAs can be associated with logical sectors of the host, e.g., each logical sector of the host can be associated with a particular LBA. For instance, LBA 1000 can be associated with a first logical sector, LBA 1001 can be associated with a second logical sector, LBA 1002 can be associated with a third logical sector, and so on. As another example, a command to program memory cells in an array corresponding to 16 logical sectors of data starting at LBA 1000 can program the memory cells associated with LBAs 1000 through 1015, e.g., the memory cells corresponding to the logical sectors of data associated with LBAs 1000 through 1015. Thus, each logical sector of data in the memory array can be referenced by a particular LBA. An LBA can be mapped by the back end 346 to a physical address associated with a particular memory block, e.g., the starting address of a particular memory block, or an LBA can be mapped to a physical address associated with a particular sector within a memory block, e.g., the starting address of a particular memory sector.
Fig. 4 illustrates a logical-to-physical address map implemented in accordance with one or more embodiments of the present disclosure. Address map 461 illustrates the correspondence between LBAs and physical block addresses (PBAs) of a memory device, e.g., 312-1, ..., 312-N. For example, LBA 1 (462-1) corresponds to PBA A (464-1), LBA 2 (462-2) corresponds to PBA B (464-2), LBA 3 (462-3) corresponds to PBA C (464-3), LBA 4 (462-4) corresponds to PBA D (464-4), ..., and LBA M (462-M) corresponds to PBA M (464-M).
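A minimal sketch of such a logical-to-physical map, assuming a simple in-memory lookup table (the disclosure does not prescribe the data structure), might look like this:

```python
# Logical-to-physical address map sketched as a plain dictionary.
# Keys are LBAs, values are physical block addresses (PBAs).
address_map = {
    1: "PBA_A",
    2: "PBA_B",
    3: "PBA_C",
    4: "PBA_D",
}

def lba_to_pba(lba: int) -> str:
    """Resolve a logical block address to its physical block address."""
    return address_map[lba]

print(lba_to_pba(3))  # PBA_C
```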
Receiving commands
According to one or more embodiments of the present disclosure, the front end DMA, e.g., 316 in Fig. 3, can include a command queue 386. The front end DMA, e.g., 316 in Fig. 3, can hold a number of commands received from the host through the application layer 320 and the command dispatcher 318. The command dispatcher 318 can process the commands and distribute them to the front end DMA 316 and to a number of appropriate back end channels, e.g., 350-1, ..., 350-N in Fig. 3. The operations performed by the command dispatcher, e.g., 318 in Fig. 3, can be implemented in hardware, software, or a combination thereof. The command dispatcher, e.g., 318 in Fig. 3, can include a command processor portion and a dispatcher portion. The command processor portion and the dispatcher portion can be discrete hardware modules, or the corresponding functions can be implemented in an integrated fashion by control circuitry.
Upon receiving a command from the host (hereinafter a "host command"), the command processor portion of the command dispatcher, e.g., 318 in Fig. 3, can check the integrity of the host command and then pass the host command forward to the dispatcher portion of the command dispatcher. According to one or more embodiments of the present disclosure, the command processor portion of the command dispatcher, e.g., 318 in Fig. 3, can be configured to check that a command falls within an acceptable LBA range, check for a valid tag, and perform other integrity tests.
The dispatcher portion can dispatch the host command to the front end DMA 316 and a number of appropriate back end channels, e.g., 350-1, ..., 350-N in Fig. 3, and can indicate the completion status of the command, e.g., whether it was accepted and handled, to the application layer 320, which can communicate the completion status to the host to indicate that the next host command can be sent. Implementing the command dispatcher functionality in hardware reduces the host command processing time, e.g., the time taken, after receiving a host command from the host, to process the command and convey, e.g., send or transmit, an indicator of command completion status. Reducing the processing time between host commands passed between the host and the memory system can increase memory system throughput.
Fig. 5 is a block diagram of a command queue of a front end DMA in accordance with one or more embodiments of the present disclosure. The command queue 586 has a capacity to hold a quantity of C commands, e.g., the command queue can have a number of command slots, each command slot holding one command. As shown in Fig. 5, the command queue 586 includes a number of command slots, e.g., command slot 1 (587-1), command slot 2 (587-2), ..., command slot C (587-C). For instance, in one or more embodiments, the front end DMA, e.g., 316 in Fig. 3, can include a number of command queues 386 having a capacity to store 32 commands; however, embodiments of the present disclosure are not limited to a particular number of command slots, a particular command queue capacity, or a particular number of commands that can be processed at one time by the command dispatcher.
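The sketch below models such a fixed-capacity command queue; the capacity of 32 and the full/empty signalling are illustrative assumptions consistent with the description above:

```python
from collections import deque

class CommandQueue:
    """Fixed-capacity queue of command slots; reports 'full' so the host can
    be told to hold further commands until a slot frees up."""

    def __init__(self, capacity: int = 32):
        self.capacity = capacity
        self.slots = deque()

    def is_full(self) -> bool:
        return len(self.slots) >= self.capacity

    def push(self, command) -> bool:
        """Queue a host command in arrival order; refuse it when full."""
        if self.is_full():
            return False
        self.slots.append(command)
        return True

    def pop(self):
        """Remove the oldest queued command for dispatch to a back end channel."""
        return self.slots.popleft()
```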
In one or more embodiments, the front end DMA, e.g., 316 in Fig. 3, can receive host commands from the host in an initial order. The number of command queues 386 can hold the number of host commands in that initial order, e.g., the order in which the host commands were received from the host. The command queue 386 can hold a finite number of commands at one time; thus, the command dispatcher can be configured to signal when the host command queue 386 has reached its capacity and temporarily cannot receive additional host commands from the host.
In one or more embodiments, the command dispatcher, e.g., 318 in Fig. 3, can process the host commands held in the assignment file 315 and dispatch them to the command queue 386 in the front end DMA 316. The command dispatcher 318 can then dispatch host commands from the command queue 386 to the back end channels in the order in which the host commands are to be executed, in the order in which they were received from the host, in the order in which they are queued in the command queue 386, e.g., a re-ordered sequence or a combination of the foregoing orders, or according to some other appropriate sequencing scheme.
In one or more embodiments, the command processor portion of the command dispatcher is configured to determine whether to modify the commands held in the command queue 386, e.g., to optimize the distribution of a number of commands among the plurality of back end channels, and to modify host commands individually or as a group. Modifying commands to streamline their distribution can include, for instance, combining commands directed to adjacent memory locations and/or deleting commands whose locations will be overwritten without an intervening read, so as to effect the same net change to memory for write operations, or the same reads from memory for read operations, while sending fewer commands, thereby saving time, processing resources, and/or communication bandwidth, among other benefits. As used herein, a command can include a host command, a modified host command, and other types of commands. The command processor portion can analyze and modify the commands in the command queue 386 so as to distribute commands to the respective channels more efficiently, make particular commands more efficient, improve the reliability or performance of the memory system, reduce wear on the memory system, or improve the quality, efficiency, or flow of commands among the respective back end channels. For instance, the command processor portion can re-order commands within a group of commands, combine, e.g., merge, multiple commands into one or more commands, or determine not to execute a particular command, e.g., when it can be determined that a subsequent command will modify the data at the particular memory location, among other command optimization techniques. In one or more embodiments, the front end processor (FEP) 328 can also perform these tasks and make these determinations.
Figs. 6A and 6B illustrate the operation of a command queue of a front end DMA in accordance with one or more embodiments of the present disclosure. According to one or more embodiments, the command dispatcher, e.g., 318 in Fig. 3, and/or the FEP, e.g., 328 in Fig. 3, can determine whether to modify commands held in the command queue of the front end DMA, e.g., 386 in Fig. 3, and the command dispatcher can be configured to modify the commands in a manner intended to improve the command throughput of the front end of the controller, e.g., 344 in Fig. 3.
In order to increase command throughput, in one or more embodiments the command dispatcher 318 or the FEP 328 processes host commands only while the back end channels are busy, e.g., while the associated channel buffers are full. When a back end channel is busy, e.g., when the associated channel buffer, e.g., a respective one of 358-1, ..., 358-N, is full, the front end portion of the controller can be prevented from distributing commands to that back end channel. Under conditions where a number of the back end channels are available and ready to accept additional commands, commands should not be held back by the command dispatcher for further optimization processing, because delaying the emptying of the command queue 686A/B delays the completion of the host commands in the command queue 686A/B, which in turn delays the transmission of additional commands from the host, without the further optimization allowing commands to be distributed to the respective channel command queues with any less delay to the other back end channels (or with any greater efficiency). Additional detail regarding the operation of the back end channels can be found in copending U.S. patent application Ser. No. 12/351,206, entitled "Modifying Commands," having at least one common inventor and having attorney docket number 1002.0430001.
In one or more embodiments, the command queue 686A can be analogous to the command queue 386 discussed with respect to Fig. 3. The command queue 686A includes a number of, e.g., C, command slots, e.g., 687-1A, 687-2A, 687-3A, 687-4A, 687-5A, 687-6A, 687-7A, 687-8A, ..., 687-CA. Each of the C command slots can be configured to temporarily store one command, e.g., a host command. For instance, command slot 687-1A can store a first command, command slot 687-2A can store a second command, and so on.
In the example discussed below, in which the front end command dispatcher processes the commands in command queue 686A, and as illustrated in Fig. 6A, the command in command slot 1, e.g., 687-1A, can be a command to program, e.g., write, data to memory cells involving 16 logical sectors of a memory device starting at LBA 1000. The command in command slot 2, e.g., 687-2A, can be a command to read data from memory cells of the memory device involving 4 logical sectors starting at LBA 2000. The command in command slot 3, e.g., 687-3A, can be a command to program data to memory cells of the memory device involving 48 logical sectors of data starting at LBA 1000. The command in command slot 4, e.g., 687-4A, can be a command to read data from memory cells of the memory device involving 10 logical sectors of data starting at LBA 2002. The command in command slot 5, e.g., 687-5A, can be a command to read memory cells of the memory device involving 16 logical sectors of data starting at LBA 2000. The command in command slot 6, e.g., 687-6A, can be a command to program memory cells of the memory device involving 16 logical sectors of data starting at LBA 1040. The command in command slot 7, e.g., 687-7A, can be a command to program memory cells of the memory device involving 2 logical sectors of data starting at LBA 3000. The command in command slot 8, e.g., 687-8A, can be a command to program memory cells of the memory device involving 2 logical sectors of data starting at LBA 3002.
The commands held in the command queue 686A at any particular time can be associated with one memory device, e.g., all corresponding to the same channel, or can be associated with a number of different memory devices, e.g., corresponding to multiple channels. The particular channel with which a command is associated can be determined from the LBA, according to the amount and division of physical memory among the channels, as mapped by the logical-to-physical address map, e.g., address map 461 in Fig. 4. For example, a physical block address can include channel identification information.
The commands held in the command queue 686A can be modified in accordance with one or more embodiments of the present disclosure. For instance, the group of commands in command slots 687-1, 687-3, and 687-6 can be combined into a single command to program memory cells involving 56 logical sectors starting at LBA 1000. Thus, the command dispatcher can be configured to determine that at least two commands are for the same operation, e.g., a write operation, but involve logically adjacent memory locations. The command dispatcher can optimize the distribution of commands to the back end channels by combining the at least two commands into a single command involving the combined, logically adjacent memory locations. Where the logically adjacent memory locations are associated with a single channel, the combined command is most efficient.
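A simplified sketch of this combining step for write commands follows, using (op, lba, sectors) tuples for brevity; it is an illustration only, since the actual dispatcher operates in hardware and must also respect channel boundaries and intervening reads of the same locations:

```python
def combine_writes(commands):
    """Merge 'program' commands whose LBA ranges touch or overlap into single
    commands covering the combined range (illustrative simplification)."""
    writes = sorted((c for c in commands if c[0] == "program"), key=lambda c: c[1])
    others = [c for c in commands if c[0] != "program"]
    merged = []
    for op, lba, sectors in writes:
        if merged and lba <= merged[-1][1] + merged[-1][2]:
            _, last_lba, last_sectors = merged[-1]
            end = max(last_lba + last_sectors, lba + sectors)
            merged[-1] = ("program", last_lba, end - last_lba)
        else:
            merged.append((op, lba, sectors))
    return merged + others

queue = [("program", 1000, 16), ("program", 1000, 48), ("program", 1040, 16)]
print(combine_writes(queue))  # [('program', 1000, 56)]
```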
Fig. 6B is a block diagram illustrating the channel command queue 686B after the commands shown in Fig. 6A have been modified in accordance with one or more embodiments of the present disclosure. As shown in Fig. 6B, command 1 held in channel command slot 687-1B is a command to program memory cells in the array corresponding to 56 logical sectors of the host starting at LBA 1000. Command 2 held in channel command slot 687-2B is a command to read memory cells in the array corresponding to 16 logical sectors of the host starting at LBA 2000, and command 3 held in channel command slot 687-3B is a command to program memory cells in the array corresponding to 4 logical sectors of the host starting at LBA 3000.
The command dispatcher can also be configured to determine that at least two commands are for the same operation, e.g., a write operation, but involve logically overlapping memory locations, e.g., the memory locations involved in one command include at least a portion of the memory locations involved in another command of the same type. The command dispatcher can optimize the distribution of commands to the back end channels by combining the at least two commands into a single command involving the combined, logically overlapping memory locations.
Other command modifications are possible. For instance, the command processor portion can determine that a particular memory location, e.g., LBA, involved in a first command in the command queue 686A will be overwritten by a second command executed thereafter; in such a case, the command processor portion can decline to distribute, e.g., delete, ignore, or not execute, the first command to its destination channel, because its result would only be temporary, e.g., lasting only until the second command is executed.
The example mentioned above with respect to Fig. 6 can be further understood as follows. Assume that commands nearer the top of the command queue 686A, e.g., command slot 687-1, are to be executed before commands nearer the bottom of the command queue 686A, e.g., command slot 687-C. The LBA of the command in command slot 1, e.g., 687-1A, and the LBA of the command in command slot 3, e.g., 687-3A, are both 1000. Command 1 and command 3 are both program operations. Because command 3 will program 48 sectors starting at LBA 1000, command 3 will completely overwrite whatever contents are programmed into the 16 sectors starting at LBA 1000 as a result of command 1. There is an intervening read operation, e.g., command 2; however, command 2 does not involve the 16 sectors starting at LBA 1000. Thus, command 1 need not be distributed, e.g., it can be deleted, ignored, or not executed, thereby saving the time of sending command 1 to a channel, improving the rate at which commands are distributed from the command queue 686A among the plurality of back end channels, and allowing the command queue 686A to accept additional host commands sooner. Other command re-ordering, combining, and deleting can streamline the distribution of the commands shown in command queue 686A to the number of back end channels.
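The following sketch illustrates that elimination check for the relevant slots of the Fig. 6A example: a write can be dropped when a later write fully covers its range and no intervening command touches any of the covered sectors. Again this is a simplification of the hardware dispatcher, using (op, lba, sectors) tuples:

```python
def overlaps(a_lba, a_n, b_lba, b_n):
    return a_lba < b_lba + b_n and b_lba < a_lba + a_n

def is_dead_write(queue, i):
    """True if write queue[i] is fully overwritten by a later write before
    any intervening command touches its sectors (simplified check)."""
    op, lba, n = queue[i]
    if op != "program":
        return False
    for later_op, later_lba, later_n in queue[i + 1:]:
        if later_op == "program" and later_lba <= lba and lba + n <= later_lba + later_n:
            return True                      # fully covered by a later write
        if overlaps(lba, n, later_lba, later_n):
            return False                     # read or partial touch: keep it
    return False

queue_6a = [("program", 1000, 16), ("read", 2000, 4), ("program", 1000, 48)]
print([is_dead_write(queue_6a, i) for i in range(len(queue_6a))])  # [True, False, False]
```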
Therefore; Said order allocator can be through being configured to confirm to treat the clean change to storer by several order realizations among the 686A of command queue; And revise said several orders that remain among the 686A of command queue based on said confirming, optimize said several order distribution in the middle of the passage of a plurality of rear ends whereby.Said order allocator can be through (for example being configured to not distribute; Delete, ignore, do not carry out) from one in several orders of the 686A of command queue, this betides said order allocator and can confirm to make like this when not changing said several orders to the determined clean change of storer according to remaining in order among the 686A of command queue at any given time.For instance; Said order allocator can through be configured to revise with the 686A of command queue in the memory range of first commands associated with the part of the memory range that comprises the order of second among the 686A of command queue, and after this delete said second order and do not change and treat the determined clean change that realizes by said several orders storer from the 686A of command queue.
As noted above, the distributor portion of the command distributor can assign commands (e.g., host commands) to a number of appropriate channels. For example, when the payload associated with a particular command involves a single channel, the distributor portion can assign the particular command to the appropriate channel. For payloads that involve a plurality of channels, the distributor portion can manage the distribution of the associated command by assigning the particular command to the plurality of channels, including the channel-specific parameters, such as the particular logical block address and sector count associated with the command, that are used to operate the respective memory devices. The payload associated with the command can then be analyzed and portions of it distributed (e.g., in a round-robin fashion) among the plurality of channels. Similarly, for a read operation, the payload associated with a read command can be collected from among the plurality of back end channels, and corresponding read commands can be assigned to the back end channels associated with the data being collected from among the plurality of channels.
For example, each back end channel may handle R consecutive logical block addresses (LBAs), while a host command (i.e., a command received from the host) may involve a relatively larger number of sectors. The command distributor can distribute back end commands among the number of back end channels in a round-robin fashion, with each back end command mimicking the host command except that each back end command involves at most R consecutive LBAs. The round-robin process continues until all sectors of the host command have been distributed among the back end channels in "chunks" of size R.
As further illustration, consider the following numerical example, in which the host command is a write of 128 sectors of data, there are 4 back end channels, and each back end channel can handle 8 consecutive LBAs. For simplicity, memory location offsets are ignored in this example. Upon receiving the host write command involving 128 sectors, a plurality of back end write commands is generated in response to the single host write command. A first back end write command can involve the first 8 LBAs, directed to back end channel 1; a second back end write command can then involve the next 8 LBAs, directed to back end channel 2; a third back end write command can involve the next 8 LBAs, directed to back end channel 3; and a fourth back end write command can involve the next 8 LBAs, directed to back end channel 4. The round-robin processing then continues with the first back end write command involving the next 8 LBAs directed to back end channel 1, and so on, until all 128 sectors have been distributed among the 4 channels.
Thus, each channel receives 32 sectors of payload corresponding to the host write command, assembled as a set of 8-LBA portions strung together. Thereafter, a corresponding write command is assigned to each respective back end channel to write its 32 sectors of data. A single host command can therefore produce N back end commands (where there are N back end channels), each of which mimics the host command but involves approximately 1/N of the payload associated with the host command. Each channel is distributed only one command, along with the appropriate portion of the payload associated with the host command. Embodiments of the present invention are not limited to the numerical quantities described herein, nor to write commands. Those skilled in the art will appreciate that other commands (e.g., read commands) can likewise be distributed in parallel among a plurality of channels from a single host command (e.g., reading data from among a number of back end channels).
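The round-robin split of the numerical example above can be sketched as follows, assuming a host write of 128 sectors, 4 back end channels, and R = 8 consecutive LBAs handled per channel per round; the function name and chunk representation are illustrative only.

```python
# Minimal sketch of the round-robin chunking described above.
def split_round_robin(lba, count, n_channels=4, r=8):
    """Return (channel, start_lba, sectors) chunks in round-robin order."""
    chunks = []
    channel = 0
    while count > 0:
        sectors = min(r, count)
        chunks.append((channel, lba, sectors))
        lba += sectors
        count -= sectors
        channel = (channel + 1) % n_channels
    return chunks

chunks = split_round_robin(lba=0, count=128)
# Each of the 4 channels ends up with 32 sectors assembled from 8-LBA chunks.
per_channel = {}
for ch, start, sectors in chunks:
    per_channel[ch] = per_channel.get(ch, 0) + sectors
print(per_channel)  # {0: 32, 1: 32, 2: 32, 3: 32}
```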
In one or more embodiments, the commands in command queue 686A can be modified by combining a number of commands into a single command, for example so that partial pages are combined into a single operation, thereby eliminating or reducing partial page program operations. In addition to improving memory system performance and reliability by reducing the wear associated with partial page programming, combining commands in the front end command queue 686A optimizes the distribution of the number of commands among the plurality of back end channels, because a plurality of program commands can be reduced to a smaller number of commands (e.g., a single command).
A partial page program operation is performed by the following operations: finding a new, free block of memory cells; reading the data from the old page into a data buffer; merging the new data into the data buffer; writing the entire merged page (including the data) to a new memory page in the new block; moving all other pages of the old block to the new block; and marking the old block to indicate that it is to be erased. Although several examples have been given to illustrate algorithms for combining commands (which optimizes the distribution of the number of commands among the plurality of back end channels), embodiments of the present invention are not limited to the examples given, and the present invention encompasses other optimization techniques, for example those involving deleting or reordering commands at the front end to reduce the quantity of commands distributed among the plurality of back end channels.
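The read-merge-write sequence just described can be sketched, for illustration only, with a simple list-based model of blocks and pages; the function name and the flash model here are assumptions and not the memory system's actual data path.

```python
# Minimal sketch of a partial page program: blocks are lists of pages,
# each page a list of sector values.
def partial_page_program(flash, old_block, page_idx, offset, new_data, free_blocks, erase_list):
    """Merge new_data into one page of old_block and relocate the whole block."""
    new_block = free_blocks.pop()               # find a new, free block
    buffer = list(flash[old_block][page_idx])   # read the old page into a data buffer
    buffer[offset:offset + len(new_data)] = new_data  # merge the new data into the buffer
    for i, page in enumerate(flash[old_block]): # copy every page to the new block,
        flash[new_block][i] = buffer if i == page_idx else list(page)  # merged page included
    erase_list.append(old_block)                # mark the old block to be erased
    return new_block

# Two 4-page blocks of 4 sectors each; block 1 is free.
flash = [[[0] * 4 for _ in range(4)] for _ in range(2)]
free_blocks, erase_list = [1], []
new_block = partial_page_program(flash, 0, page_idx=2, offset=1, new_data=[7, 8],
                                 free_blocks=free_blocks, erase_list=erase_list)
print(flash[new_block][2], erase_list)  # [0, 7, 8, 0] [0]
```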
In one or more embodiments, a memory controller includes a plurality of back end channels and a command queue (e.g., 386 in Fig. 3) communicatively coupled to the plurality of back end channels. The command queue (e.g., 386 in Fig. 3) can be configured to hold host commands received from a host. Circuitry is configured to generate a number of back end commands at least in response to a number of the host commands in the command queue (e.g., 386 in Fig. 3), and to distribute the number of back end commands to a number of the plurality of back end channels.
The number of back end commands can be fewer than or greater than the number of the host commands. For example, in one or more embodiments, the circuitry can be configured to generate, in response to a single host command, a back end command corresponding to each of the plurality of back end channels. The circuitry can further be configured to distribute the respective back end commands to their respective back end channels such that the back end commands are processed substantially in parallel. In one or more embodiments, the circuitry can be configured to distribute a plurality of host commands among different pluralities of back end channels such that the plurality of host commands are executed substantially simultaneously.
Generating the number of back end commands can include modifying at least one of the number of host commands in combination with deleting at least another of the number of host commands. A direct memory access module (DMA) can be configured to distribute the data associated with the host commands corresponding to the generated number of back end commands.
Upon completing a particular one of the plurality of back end commands, the circuitry can be configured to send the read result from executing that particular back end command to the host, without regard to the completion of execution of any other of the plurality of back end commands.
Fig. 7 is a flow diagram for distributing a command among a number of back end channels according to one or more embodiments of the present invention. Command distribution begins at 766. At 767, the starting LBA of the command being distributed can be set to the sum of the command LBA and an LBA offset. A starting channel, starting channel sector count, ending channel, ending channel sector count, remaining sector count, and channel starting LBA can be determined (e.g., calculated). At 768, the channel number for the initial distribution of the command can be set to the starting channel. Then, at 769, the starting LBA and sector count for the current channel (i.e., the particular current channel number) can be determined (e.g., calculated), and the involved channel status bit (e.g., involved_ch) can be asserted to indicate that the particular channel is involved in the particular command.
Next, at 770, the starting LBA and sector count for the current channel are loaded into the current channel's inbox (channel inboxes are discussed further below). At 771, it can be determined whether the current channel is the ending channel (see 767). If the current channel is not the ending channel, the process moves to the next channel at 773 (e.g., the current channel number is incremented) and the process continues at 769 (with the current channel's starting LBA and sector count being loaded into the current channel's inbox). If the current channel is the ending channel, then at 772 the starting channel, channel sector counts, and involved channels can be loaded into a DMA descriptor block (DDB, discussed further below), and the process returns to 766 to begin the next command distribution.
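For illustration, the per-channel distribution loop of Fig. 7 can be sketched roughly as follows; the mapping from the starting LBA to the starting channel, the dictionary standing in for the channel inboxes, and the dictionary standing in for the DMA descriptor block entry are all assumptions made for the sketch.

```python
# Minimal sketch of the Fig. 7 loop, assuming R consecutive LBAs per channel
# per round and ignoring memory location offsets.
def distribute_command(cmd_lba, sector_count, n_channels, r, lba_offset=0):
    start_lba = cmd_lba + lba_offset                   # 767: starting LBA of the command
    start_channel = (start_lba // r) % n_channels      # 767: starting channel (assumed mapping)
    inboxes = {}                                       # per-channel (start LBA, sector count)
    involved = [False] * n_channels
    channel, lba, remaining = start_channel, start_lba, sector_count
    while remaining > 0:                               # 769-773: step through the channels
        sectors = min(r, remaining)
        ch_lba, ch_cnt = inboxes.get(channel, (lba, 0))
        inboxes[channel] = (ch_lba, ch_cnt + sectors)  # 770: load the channel inbox
        involved[channel] = True                       # 769: assert involved_ch
        lba += sectors
        remaining -= sectors
        channel = (channel + 1) % n_channels
    ddb = {"start_channel": start_channel,             # 772: load the DMA descriptor block
           "inboxes": inboxes, "involved": involved}
    return ddb

print(distribute_command(cmd_lba=0, sector_count=40, n_channels=4, r=8))
```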
Fig. 8 is a functional block diagram illustrating an interface between a front end and a number of channels according to one or more embodiments of the present invention. Fig. 8 shows a number of channels (e.g., 850-1, ..., 850-N) located in the back end portion 846 of a memory controller; the channels can be analogous to the channels 350-1, ..., 350-N shown in Fig. 3, although for clarity some channel details shown in Fig. 3 are omitted from Fig. 8 so that additional structure can be shown in more detail. Fig. 8 also shows a front end DMA 816 and a front end processor (FEP) 828 located in the front end portion 844 of the memory controller. Front end DMA 816 can be analogous to front end DMA 316 in Fig. 3, and FEP 828 can be analogous to FEP 328 in Fig. 3. In Fig. 8, front end DMA 816 and FEP 828 are each shown communicatively coupled, in the manner described in more detail below, to each of the number of channels (e.g., 850-1, ..., 850-N).
Each channel includes a channel processor (e.g., 856-1, ..., 856-N), a channel inbox (e.g., 874-1, ..., 874-N), a channel input register (e.g., 876-1, ..., 876-N), and a channel output register (e.g., 878-1, ..., 878-N). Each of the channel output registers is communicatively coupled to provide information to FEP 828. Each of the channel inboxes and input registers is communicatively coupled to receive information from front end DMA 816.
Front end direct memory access module (DMA)
Fig. 9A is a functional block diagram of a direct memory access module (DMA) descriptor block implemented according to one or more embodiments of the present invention. The DDB controls the flow of data between the host and the back end channels and functions, for example, to apply intelligent decision making with respect to the commands held in the command queue (e.g., 386 in Fig. 3) so as to optimize system throughput, thereby increasing the efficiency with which commands are distributed from the command queue (e.g., 386 in Fig. 3) to the various back end channels and, in turn, increasing the rate at which the commands throughout the command queue (e.g., 386 in Fig. 3) are processed.
For a memory system (e.g., a solid state drive) having a number of memory devices accessed via respective channels, the payload associated with a write command can be programmed to a number of channels, and the payload associated with a read command can be collected from a number of channels. The DMA manages the distribution of data among the appropriate channels when the payload associated with a particular command involves a plurality of channels. For example, the DMA manages assigning the payload associated with a write command to a number of channels and collecting the payload associated with a read command from a number of channels. The DMA also facilitates multiple (including parallel) command execution by managing the payloads associated with a plurality of commands between the host and the back end channels.
When a command is issued, the DDB (e.g., 340 in Fig. 3) coordinates the distribution of the payload to and from the N channels. For example, during a write or read operation, a number of the N channels can be used. The DDB can first be updated (e.g., loaded) by the command distributor (e.g., 318 in Fig. 3) or the front end processor (FEP) (e.g., 328 in Fig. 3), where a DDB tag can be the address used for each host command. The DDB can be set up by the FEP or by the command distributor. During a "no error" condition, no further management by the FEP (e.g., an I/O processor) may be necessary.
Fig. 9A shows the contents of a DDB 988 having a number of tag address entries (e.g., DDB 1, ..., DDB 32). Each tag address entry contains parameters associated with the setup 990, status 992, and command information 994 associated with a data transfer. According to one or more embodiments of the present invention, tag addressing can be implemented according to the Serial Advanced Technology Attachment (SATA) standard. Thus, a DDB entry can determine which channels to access, how many sectors to transfer to a particular channel, and associated status and other information. The DDB can be backward compatible with legacy commands that do not support multiple command queuing by using only DDB 1 (e.g., processing only one legacy command at a time via the DDB rather than managing multiple commands in the DDB simultaneously).
Each entry in DDB 988 has a tag, and the tag can be assigned or implied. In one or more embodiments, the tag can be the same as the entry number (e.g., the physical location of the entry in the DDB); thus, the physical location of the entry in the DDB implies the tag, so that a physical tag number field need not be stored with each entry. When the controller receives a host command and adds a new entry corresponding to the host command to the DDB, the new entry is associated with a tag and the tag associated with the new entry is output. As previously discussed, the controller maintains the command queue (e.g., 386 in Fig. 3), receives the tag associated with the new entry, adds a new command queue entry corresponding to the tag, and outputs a corresponding operation request.
Fig. 9B illustrates the entries in the DMA descriptor block (DDB) of Fig. 9A, implemented according to one or more embodiments of the present invention. Fig. 9B indicates the data fields of a DDB entry by type (e.g., setup, status, information), description, size (e.g., number of bits), and position within the entry (e.g., bit position).
The next count data field 990A of each DDB entry (e.g., "next_cnt" at bit positions 93 to 96) represents the number of data sectors to be transferred to a given channel. The next count can be initialized by the command distributor or the FEP to specify the transfer count of the starting channel. The next count can be updated by hardware to specify the transfer count of the last channel. The update occurs after the current channel completes its transfer but before all of the transfers are complete. If the remaining number of total sectors to be transferred is greater than the maximum number of sectors the channel can transfer (e.g., the count is greater than the sectors per page multiplied by the number of planes), then the next count is loaded with that maximum number of sectors. Otherwise, the next count is loaded with the remaining number of total sectors to be transferred.
The count data field 990B (e.g., "cnt" at bit positions 80 to 95) can be the total transfer count for a particular command. The count can be initialized by the command distributor or the FEP with the total transfer count and can be updated by hardware to indicate the remaining number of sectors to be transferred. According to one or more embodiments, bit position 79 is not used, e.g., it is reserved for future use.
The transfer complete data field 990D (e.g., "XC" at bit position 78) indicates that the DMA transfer is complete. That is, the data phase can be complete, but the indicator of command completion status may not yet have been sent. Once the channel status ("ch_status") equals a particular value, this bit can be set by hardware to indicate that the host command is complete. The hardware then schedules transmission of the indicator of command completion status. When the indicator has been successfully sent to the host, the hardware clears the valid data field (e.g., the "V" flag) before another host command can be received, as described later.
The host error data field 992A (e.g., "HE" at bit position 77) can be used to indicate that a host error has occurred. If such an error occurs during a host transfer, this bit can be set by the I/O processor or the host interface (e.g., 314 in Fig. 3). The flash error data field 992B (e.g., "FE" at bit position 76) can be used to indicate that a memory device (e.g., NAND flash) error has occurred.
The valid data field 992C (e.g., "V" at bit position 75) can be used to indicate a valid entry. This bit can be set by the command distributor or the FEP (e.g., V=1) to indicate that hardware can access the DDB entry and that the command distributor or the FEP will not overwrite the entry. The bit can be cleared by hardware after the host command has been completed and the indicator has been successfully sent to the host, or it can be cleared by the FEP (e.g., V=0) when an error exists in processing the command, to indicate that the entry in the DDB is available to receive a new command from the host.
The next channel data field 992D (e.g., "nxt_ch" at bit positions 72 to 74) refers to the channel on which a transfer is taking place. This field can be initialized by the command distributor or the FEP to specify the starting channel for a transfer and can be updated by hardware to specify the next channel for the transfer. The update occurs when the current channel completes transferring all of the consecutive LBAs that the channel can handle. The sector count for a particular command may not yet have reached zero, because there can be remaining sectors to be transferred for the particular command, including additional rounds to the channel as part of the round-robin distribution, as described above. For a given channel, the sector count for the particular command reaches zero when there are no remaining sectors to be transferred for the particular command (e.g., for the last channel in the round-robin sequence to which payload is distributed).
The active channel data field 992E (e.g., "active_ch" at bit positions 64 to 71) can be an N-bit signal (e.g., 8 bits corresponding to 8 channels), where each bit represents the completion status of its respective channel. Before a transfer occurs, the bit corresponding to each involved channel can be set. Then, once the command is complete for a given channel, its bit can be reset.
The command information data field 994 (e.g., "CMD_info" at bit positions 0 to 63) can include four words from a frame information structure (FIS) register, including the command, a priority bit, an FUA bit, the LBA, and the sector count.
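For illustration only, a DDB entry can be modeled as a packed integer using the bit positions described above (the next count field is omitted from this sketch); the field table and the pack/unpack helpers are assumptions made for the sketch, not the actual register map.

```python
# Minimal sketch of packing one DDB entry into an integer.
DDB_FIELDS = {              # name: (low bit, width)
    "cmd_info":  (0, 64),   # FIS words: command, priority, FUA, LBA, sector count
    "active_ch": (64, 8),   # one bit per channel, set while the channel is in use
    "nxt_ch":    (72, 3),   # next channel on which a transfer takes place
    "v":         (75, 1),   # valid entry
    "fe":        (76, 1),   # flash (memory device) error
    "he":        (77, 1),   # host error
    "xc":        (78, 1),   # DMA transfer complete
    "cnt":       (80, 16),  # total / remaining transfer count for the command
}

def pack_ddb(**fields):
    word = 0
    for name, value in fields.items():
        low, width = DDB_FIELDS[name]
        assert value < (1 << width), f"{name} does not fit in {width} bits"
        word |= value << low
    return word

def unpack_ddb(word, name):
    low, width = DDB_FIELDS[name]
    return (word >> low) & ((1 << width) - 1)

entry = pack_ddb(cmd_info=0x25_0000_1000, active_ch=0b00001111, nxt_ch=0, v=1, cnt=128)
print(hex(entry), unpack_ddb(entry, "active_ch"), unpack_ddb(entry, "cnt"))
```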
Although particular data field sizes (e.g., one bit) and data field positions have been described in the above examples, embodiments of the present invention are not limited to including those fields, or to the particular size or position of each described field, and can include additional or alternative fields. When the command distributor is updating the DDB, an input signal (e.g., "xfer_TAG") becomes the address pointer for the DDB, and an update signal (e.g., "update_ddb_en") becomes the write enable.
The arbiter (e.g., 342 in Fig. 3) can be a round-robin arbiter that determines which channel can be accessed at a particular time. The arbiter searches for the next available channel. The arbiter steps through the channels, attempting to match a selected available channel number with the next channel in a particular DDB entry. If the available channel does not match a DDB entry, the arbiter continues to iterate in a round-robin fashion (if necessary) until a match between a selected available channel number and the next channel in a particular DDB entry can be found. Once a match is found, the arbiter initiates a communication protocol to begin the transfer. When the transfer is complete, a completion protocol can be signaled, the channel information in the DDB entry is updated, and the arbiter searches for the next available channel.
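The arbiter behavior described above can be sketched, for illustration, roughly as follows; the class name and the dictionary representation of DDB entries are assumptions made for the sketch.

```python
# Minimal sketch of a round-robin arbiter: step through the channels from the
# last grant and pick the first available channel that matches the "next
# channel" recorded in a pending DDB entry.
class RoundRobinArbiter:
    def __init__(self, n_channels):
        self.n = n_channels
        self.last = self.n - 1          # start so channel 0 is examined first

    def grant(self, available, ddb_entries):
        """available: set of idle channel numbers.
        ddb_entries: list of dicts with a 'nxt_ch' field for pending transfers."""
        for step in range(1, self.n + 1):
            ch = (self.last + step) % self.n
            if ch in available and any(e["nxt_ch"] == ch for e in ddb_entries):
                self.last = ch
                return ch               # a match starts the transfer protocol
        return None                     # no match yet; keep iterating later

arb = RoundRobinArbiter(4)
pending = [{"nxt_ch": 2}, {"nxt_ch": 1}]
print(arb.grant({1, 2, 3}, pending))    # 1: first available channel with a pending transfer
print(arb.grant({2, 3}, pending))       # 2
```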
Each of the N bits of the active channel field 992E (e.g., register) of a particular tag entry corresponds to a respective one of the N channels. Once a channel is deemed available for a particular host command, the bit associated with that channel can be set. When a channel completes its transfer for a given host command, the command completion status for that channel can be set, which in turn resets the corresponding bit in the active channel field of the DDB entry. Once all of the bits of the active channel field have been reset, an indicator of the "completed" status of the host command can be sent to the application layer. The application layer can then send the indicator of the "completed" status of the host command to the host. The valid bit of the entry can be cleared by hardware after the host command has been completed and the indicator of the "completed" status has been successfully sent to the host (e.g., V=0), or it can be cleared, for instance, by the FEP when an error exists in processing the command, to indicate that the entry in the DDB is available to receive a new command from the host.
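For illustration, tracking completion of a host command with the per-channel bits described above might look roughly like the following sketch; the class and method names are assumptions.

```python
# Minimal sketch: set a bit per involved channel and clear it as each channel
# finishes; the completion indicator is sent only when the mask reaches zero.
class HostCommandStatus:
    def __init__(self, involved_channels):
        self.active_ch = 0
        for ch in involved_channels:
            self.active_ch |= 1 << ch   # set the bit for each involved channel

    def channel_done(self, ch):
        self.active_ch &= ~(1 << ch)    # clear the bit for the finished channel
        return self.active_ch == 0      # True: send "completed" to the application layer

status = HostCommandStatus(involved_channels=[0, 1, 2, 3])
print(status.channel_done(2))   # False, three channels still busy
print(status.channel_done(0))   # False
print(status.channel_done(3))   # False
print(status.channel_done(1))   # True, completion indicator can be sent to the host
```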
Command completion is based on back end channel indications and requests for transfer completion. According to one or more embodiments of the present invention, during a read operation in which a plurality of associated commands executes simultaneously across a plurality of channels, as soon as the data is ready, the DMA transfers the data from any one of the channels to the host, regardless of the order in which the commands were received from the host. Executing commands (e.g., returning data read from the memory devices to the host) at least in part in the order in which the commands are completed by each back end channel, rather than in the order in which the commands were received or initiated, can substantially increase memory system data throughput.
For example, a memory system can receive a first read command from a host and initiate its execution by the memory system, and thereafter receive a second read command from the host and initiate execution of the second read command by the memory system. The second read command, however, can be completed first. According to one or more embodiments, the data produced by the second read command can be returned to the host before the data produced by the first read command can be returned to the host, rather than waiting for the first read command to complete so that its data can be returned to the host first.
As another example, a memory system can receive a first read command from a host, and thereafter receive a second read command from the host. For efficiency, however, the memory system can reorder the commands (e.g., in the manner previously described) and execute the second read command before executing the first read command, which causes the second read command to be completed before the first read command. According to one or more embodiments, the data produced by the second read command can be returned to the host when the second read command completes (which can be before the data produced by the first read command can be returned to the host), rather than waiting for the first read command to complete.
When operating multiple memory devices, the payload associated with a single command (portions of which have a certain sequential order relative to one another) can be distributed across different channels; for example, a first portion of the payload can be stored in a first memory device, a second portion of the payload can be stored in a second memory device, and so on. Thus, portions of data (e.g., data produced by a read command) can be returned from the different memory devices (and associated channels) to the front end of the controller out of sequential order; for example, the second portion can be retrieved from the second memory device before the first portion can be retrieved from the first memory device. According to one or more embodiments, when the DMA buffer supports offsets, the portions can be transferred back to the host out of sequential order (i.e., in the order in which the commands are completed by the respective back end channels rather than in the sequential order associated with the portions).
In other words, portions of the payload associated with a single command are stored (e.g., reside) among a number of memory devices of a solid state drive. The portions of the payload have a certain order relative to one another when forming the payload. A single read command can be used to collect the payload from among the number of memory devices, the read command being suitably customized with respect to particular memory locations and assigned to each of the number of channels corresponding to the number of memory devices so as to receive the appropriate portion of the payload from each of the number of memory devices. According to one or more embodiments, the portions are received by the memory system controller and sent to the host in the order in which they are received, which can differ from the certain order in which the portions of the payload are related to one another when forming the payload. In other words, the portions of the payload are not reassembled into the payload before being sent to the host; rather, the portions are sent as they are received by the controller from among the number of memory devices.
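For illustration, returning portions in completion order by means of buffer offsets might look roughly like the following sketch; the tuple representation of completions and the helper name are assumptions made for the sketch.

```python
# Minimal sketch: place each read portion into the host-side buffer at its
# offset as soon as its channel finishes, without waiting for earlier portions.
def transfer_to_host(host_buffer, completions):
    """completions: (channel, byte_offset, data) tuples in the order channels finish."""
    sent_order = []
    for channel, offset, data in completions:
        host_buffer[offset:offset + len(data)] = data   # offset allows out-of-order placement
        sent_order.append(channel)                       # no waiting on earlier portions
    return sent_order

host_buffer = bytearray(12)
# Channel 1 finishes before channel 0, so its (later) portion is sent first.
completions = [(1, 4, b"BBBB"), (0, 0, b"AAAA"), (2, 8, b"CCCC")]
print(transfer_to_host(host_buffer, completions), host_buffer)  # [1, 0, 2] bytearray(b'AAAABBBBCCCC')
```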
According to one or more embodiments of the present invention, during operations in which a plurality of commands (e.g., write commands) is executed simultaneously across a plurality of channels (e.g., to a corresponding plurality of memory devices), the DMA can send an indicator of the command completion status for a particular command to the host upon completion of that command, which allows the host to send the next pending command. In one or more embodiments, the plurality of channels are asynchronous channels, and execution of commands (e.g., host commands) may not occur in the same order in which the commands were received from the host (relative to other commands received from the host).
For example, a memory system can receive a first command from a host and initiate its execution by the memory system, and thereafter receive a second command from the host and initiate its execution by the memory system. The second command, however, can be completed first by a number of back end channels of the plurality of back end channels. According to one or more embodiments, the indicator of the completion status of the second command can be sent to the host before the indicator of the completion status of the first command is sent to the host, rather than waiting for the first command to complete so that the indicator of the completion status of the first command can be sent to the host before the indicator of the completion status of the second command.
As another example, a memory controller of a memory system receives a first command from a host, and thereafter the memory controller receives a second command from the host. The memory system, however, reorders the commands (e.g., in the manner previously described) and executes the second command before executing the first command, which causes the second command to be completed before the first command. According to one or more embodiments, the indicator of the completion status of the second command can be sent to the host before the indicator of the completion status of the first command is sent to the host, rather than waiting for the first command to complete.
Conclusion
The present invention includes memory controllers, memory systems, solid state drives, and methods for processing a number of commands. In one or more embodiments, a memory controller includes a plurality of back end channels and a command queue (e.g., 386 in Fig. 3) communicatively coupled to the plurality of back end channels. Command queue 386 can be configured to hold host commands received from a host. Circuitry is configured to generate a number of back end commands at least in response to a number of the host commands in command queue 386, and to distribute the number of back end commands to a number of the plurality of back end channels.
The present invention also includes methods and devices for memory controllers. In one or more embodiments, a memory controller includes a plurality of back end channels and a front end command distributor communicatively coupled to the plurality of back end channels. The command distributor is communicatively coupled to a command queue (e.g., 386 in Fig. 3) configured to buffer a number of commands. The command distributor can be configured to determine a net change to memory to be effected by the number of commands and to modify at least one of the number of commands based on that determination, so as to optimize distribution of the number of commands among the plurality of back end channels.
In the detailed description of the present invention, reference is made to the accompanying drawings that form a part hereof, in which is shown by way of illustration how one or more embodiments of the present invention may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of the present invention, and it is to be understood that other embodiments may be utilized and that process, electrical, or structural changes may be made without departing from the scope of the present invention.
As used herein, the designators "N", "M", and "C" (particularly with respect to reference numerals in the drawings) indicate that a number of the particular features so designated can be included with one or more embodiments of the present invention. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present invention. In addition, as will be appreciated, the proportion and the relative scale of the elements provided in the figures are intended to illustrate the embodiments of the present invention and should not be taken in a limiting sense.
It will be understood that when a first element is referred to as being "connected to" or "coupled with" another element, the element is physically attached to the other of the two elements. In contrast, when elements are referred to as being "communicatively coupled", the elements communicate with one another, including, but not limited to, through hardwired or wireless signal paths.
It will be understood that when an element is referred to as being "on", "connected to", or "coupled with" another element, it can be directly on, connected to, or coupled with the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being "directly on", "directly connected to", or "directly coupled with" another element or layer, there are no intervening elements or layers present. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, components, regions, layers, and/or sections, these elements, components, regions, wirings, layers, and sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, wiring, layer, or section from another region, layer, or section. Thus, a first element, component, region, wiring, layer, or section discussed below could be termed a second element, component, region, wiring, layer, or section without departing from the teachings of the present invention.
For ease of description, spatially relative terms such as "beneath", "below", "lower", "above", "upper", and the like may be used herein to describe the relationship of one element or feature to another element or feature as illustrated in the figures, rather than an absolute orientation in space. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "below" or "beneath" other elements or features would then be oriented "above" the other elements or features. Thus, the exemplary term "below" can encompass both an orientation of above and an orientation of below. The device may be otherwise oriented (rotated 90 degrees or at other orientations), and the spatially relative descriptors used herein interpreted accordingly.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, or components, but do not preclude the presence or addition of a number of other features, integers, steps, operations, elements, components, or groups thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention belongs. It will be further understood that terms such as those defined in commonly used dictionaries should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present invention, and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Embodiments of the present invention are described and explained herein with reference to functional block diagram illustrations that are schematic illustrations of idealized embodiments of the present invention. As such, variations from the shapes of the illustrations as a result of, for example, manufacturing techniques and tolerances are to be expected. Thus, embodiments of the present invention should not be construed as limited to the particular shapes of regions illustrated herein, but are to include deviations in shapes that result, for example, from manufacturing. For example, a region illustrated or described as flat may typically have rough or nonlinear features. Moreover, sharp angles that are illustrated may be rounded. Thus, the regions illustrated in the figures are schematic in nature, and their shapes, relative sizes, thicknesses, and so forth are not intended to illustrate the precise shape, size, or thickness of a region and are not intended to limit the scope of the present invention.
Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. The present invention is intended to cover adaptations or variations of one or more embodiments of the present invention. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of ordinary skill in the art upon reviewing the above description. The scope of one or more embodiments of the present invention includes other applications in which the above structures and methods are used. Therefore, the scope of one or more embodiments of the present invention should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.
In the foregoing detailed description, some features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present invention must use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment.

Claims (68)

1. A memory controller, comprising:
a plurality of back end channels;
a command queue communicatively coupled to the plurality of back end channels, the command queue configured to hold host commands conveyed by a host; and
circuitry configured to:
generate a number of back end commands at least in response to a number of the host commands in the command queue, and
distribute the number of back end commands to a number of the plurality of back end channels.
2. The memory controller of claim 1, wherein the number of back end commands is fewer than the number of the host commands.
3. The memory controller of claim 1, wherein the number of back end commands is greater than the number of the host commands.
4. The memory controller of claim 3, wherein the circuitry is configured to:
generate, in response to a single host command, a respective back end command corresponding to each of the plurality of back end channels, and
distribute the respective back end commands to their respective back end channels such that the back end commands are processed substantially in parallel.
5. The memory controller of claim 4, wherein the circuitry is configured to send a read result from executing a particular one of the plurality of back end commands to the host upon completion of the particular one of the plurality of back end commands, without regard to completion of execution of any other of the plurality of back end commands.
6. The memory controller of any one of claims 1 to 5, wherein the circuitry is configured to distribute a plurality of host commands among different pluralities of back end channels such that the plurality of host commands are executed substantially simultaneously.
7. The memory controller of any one of claims 1 to 5, wherein generating the number of back end commands includes modifying at least one of the number of host commands in combination with deleting at least another of the number of host commands.
8. The memory controller of any one of claims 1 to 5, wherein a direct memory access module (DMA) is configured to distribute data associated with the host commands corresponding to the generated number of back end commands.
9. The memory controller of any one of claims 1 to 5, wherein the circuitry is a command distributor having a command processor portion configured to modify host commands so as to optimize distribution of the commands among the plurality of back end channels, and a distributor portion configured to distribute the modified commands among the plurality of back end channels.
10. The memory controller of any one of claims 1 to 5, wherein the circuitry is a front end processor (FEP) communicatively coupled to the plurality of back end channels, the FEP configured to modify host commands so as to optimize distribution of the commands among the plurality of back end channels.
11. A memory controller, comprising:
a plurality of back end channels; and
a front end command distributor communicatively coupled to the plurality of back end channels and to a command queue configured to hold a number of read commands,
wherein the command distributor is configured to determine a net read from memory to be effected by the number of read commands, and to modify one or more of the number of read commands so as to optimize distribution of the number of read commands among the plurality of back end channels.
12. The memory controller of claim 11, wherein the front end command distributor comprises a command processor portion configured to modify one or more commands so as to optimize distribution of the commands among the plurality of back end channels, and a distributor portion configured to distribute commands among the plurality of back end channels.
13. The memory controller of claim 11, wherein the front end command distributor is configured to perform host command integrity checking in hardware.
14. The memory controller of any one of claims 11 to 13, comprising a front end processor (FEP) communicatively coupled to the plurality of back end channels,
wherein the FEP is configured to determine a net read from memory to be effected by the number of read commands, and to modify one or more of the number of read commands so as to optimize distribution of the number of read commands among the plurality of back end channels.
15. A memory controller, comprising:
a plurality of back end channels; and
a front end command distributor communicatively coupled to the plurality of back end channels and to a command queue configured to hold a number of write commands,
wherein the command distributor is configured to determine a net change to memory to be effected by the number of write commands, and to modify one or more of the number of write commands so as to optimize distribution of the number of write commands among the plurality of back end channels.
16. The memory controller of claim 15, wherein the command distributor is configured to modify the one or more of the number of write commands to optimize distribution only while the plurality of back end channels are busy.
17. The memory controller of claim 15, wherein the command distributor is configured not to distribute, from the command queue, a write command of the number of write commands that does not change the determined net change to memory.
18. The memory controller of claim 17, wherein the command distributor is configured to modify a memory range associated with a first write command in the command queue to include a portion of the memory range of a second write command in the command queue, and thereafter not to distribute the second write command of the number of write commands from the command queue, without changing the determined net change to memory.
19. The memory controller of any one of claims 15 to 18, wherein the number of write commands are received in the command queue in an initial order, and the command distributor is configured to change the order of the number of write commands in the command queue from the initial order to a changed order.
20. The memory controller of claim 19, wherein the command distributor is configured to distribute commands according to the changed order of the number of write commands in the command queue rather than according to the initial order.
21. The memory controller of any one of claims 15 to 18, wherein the command distributor is configured to modify at least one of the number of write commands in the command queue by combining commands to overlapping memory locations.
22. The memory controller of claim 21, wherein the command distributor is configured to modify the order of the number of write commands in the command queue to a changed order in response to the at least one of the number of write commands being modified.
23. The memory controller of claim 21, wherein the command distributor is configured not to distribute a write command determined to involve a number of memory locations that will be overwritten by a subsequently executed write command in the command queue.
24. The memory controller of claim 21, wherein the command distributor is configured not to execute a write command determined to involve the number of memory locations in the absence of an intervening read operation involving the number of memory locations to be overwritten by the subsequently executed write command.
25. The memory controller of claim 21, wherein the command distributor is configured to combine a plurality of write commands into a single write command.
26. The memory controller of claim 25, wherein the command distributor is configured to determine that at least two write commands involve a same operation to logically adjacent memory locations, and to combine the at least two write commands into a single write command involving the logically adjacent memory locations.
27. The memory controller of claim 25, wherein the command distributor is configured to determine that at least two write commands involve a same operation to logically overlapping memory locations, and to combine the at least two write commands into a single write command involving the logically overlapping memory locations.
28. The memory controller of any one of claims 15 to 18, wherein a payload associated with a particular write command is distributed among more than one of the plurality of back end channels, and the command distributor is configured to distribute the particular write command to each of the more than one of the plurality of back end channels.
29. The memory controller of claim 28, wherein the command distributor is configured to determine a logical block address and a sector count for each portion of the payload associated with the more than one of the plurality of back end channels, and to modify the particular write command assigned to a respective back end channel with the logical block address and sector count of the associated portion of the payload.
30. The memory controller of any one of claims 15 to 18, wherein the command distributor is configured to prevent modification of write commands in the command queue during the time that the command distributor is distributing commands to the plurality of back end channels.
31. A memory system, comprising:
a number of memory devices; and
a controller having a front end direct memory access module (DMA) and a number of back end channels communicatively coupled between respective ones of the number of memory devices and the front end DMA, the front end DMA configured to handle a payload associated with a single host command conveyed by a host, wherein respective portions of the payload are associated with corresponding ones of a plurality of back end commands being executed substantially simultaneously across the number of back end channels.
32. The memory system of claim 31, wherein the single host command is a write command, and the front end DMA is configured to distribute the payload associated with the single host command among more than one of the number of back end channels corresponding to the plurality of back end commands.
33. The memory system of claim 31, wherein the single host command is a read command, and the front end DMA is configured to collect the payload associated with the single host command from among more than one of the number of back end channels corresponding to the plurality of back end commands.
34. The memory system of claim 31, wherein the front end DMA is configured to determine a logical block address and a sector count for each respective portion of the payload associated with each of the plurality of back end commands, wherein each of the plurality of back end commands mimics the host command but has a modified respective logical block address and sector count corresponding to a respective one of the number of back end channels.
35. The memory system of any one of claims 31 to 34, wherein the front end DMA is configured to convey the portions of the payload associated with the plurality of back end commands to the host in an order that is different from the order in which the host command would have produced the payload had the host command been executed by a single back end channel.
36. The memory system of claim 35, wherein the front end DMA is configured to convey the portions of the payload associated with the plurality of back end commands in the order in which the portions are received from the number of back end channels.
37. The memory system of any one of claims 31 to 34, wherein the front end DMA is configured to convey indicators of command completion status to the host in an order that is different from the order in which the host commands were conveyed by the host.
38. The memory system of claim 37, wherein the front end DMA is configured to convey the indicators of command completion status to the host in the order in which the back end commands are completed by the number of back end channels.
39. The memory system of any one of claims 31 to 34, wherein each of the portions corresponds to a particular back end channel, and wherein the front end DMA is configured to convey each of the portions to the host individually and separately from the other portions.
40. The memory system of claim 39, wherein the front end DMA is configured to convey a respective portion to the host upon receiving that portion from the number of back end channels, without assembling the portions into the complete payload associated with the single host command.
41. The memory system of claim 40, wherein the single host command is a read command and each of the portions is associated with a different back end channel.
42. The memory system of claim 40, wherein the front end DMA is configured to convey each of the portions to the host when the one of the number of back end channels with which the portion is associated completes its processing.
43. The memory system of any one of claims 31 to 34, wherein the single host command is a write command received at the front end DMA as part of host commands received in a first order, the host commands are completed in a second order, and the front end DMA is configured to convey an indicator of the command completion status of the single host command to the host according to the second order.
44. A solid state drive, comprising:
a plurality of flash memory devices;
a memory controller, comprising:
a plurality of back end channels, each back end channel communicatively coupled to a number of the plurality of memory devices;
a front end direct memory access module (DMA) communicatively coupled to the plurality of back end channels; and
a front end command distributor communicatively coupled to the front end DMA and having a command queue configured to hold a number of commands,
wherein the command distributor is configured to process commands based on the number of commands remaining in the command queue at that time so as to optimize their distribution among the plurality of back end channels, and the front end DMA is configured to handle a payload associated with a corresponding command involving more than one of the plurality of back end channels.
45. The solid state drive of claim 44, comprising a Serial Advanced Technology Attachment (SATA) interface communicatively coupled to the memory controller, the SATA interface configured to communicate with a host.
46. The solid state drive of claim 44, wherein each of the plurality of back end channels is communicatively coupled to a respective one of the plurality of memory devices.
47. The solid state drive of claim 46, wherein the plurality of flash memory devices comprise eight NAND flash memory devices, and wherein the plurality of back end channels comprise eight back end channels.
48. The solid state drive of any one of claims 44 to 47, wherein the command distributor is configured to process commands on the front end of the controller by selectively reordering commands, selectively combining commands, and/or selectively deleting commands, so as to optimize their distribution to the back end of the controller.
49. The solid state drive of claim 48, wherein each of the plurality of back end channels is configured to further process the respectively received commands by selectively reordering received commands, selectively combining received commands, and/or selectively deleting commands, so as to accelerate execution of the respectively received commands.
50. A method of processing a number of commands before distributing them among a plurality of back-end channels, comprising:
receiving a number of commands in an order from a host;
processing the number of commands to improve front end throughput, the processing comprising:
reordering the commands;
combining a plurality of commands into a single command; and/or
deleting a command involving a memory location that is determined to be overwritten by a subsequently executed command without an intervening operation involving that memory location;
wherein at least one of the number of commands is assigned to more than one back-end channel.
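A hedged sketch of the three front-end optimizations named in claim 50 (and refined in claims 53 to 56), in Python. The command representation, the LBA-sorted reordering, and the "exact cover with no intervening access" test for dropping an overwritten write are simplifying assumptions; a real controller would have to respect additional ordering constraints.

def overlaps(a, b):
    return a["lba"] < b["lba"] + b["sectors"] and b["lba"] < a["lba"] + a["sectors"]

def drop_overwritten(cmds):
    # Drop a write whose range is fully covered by the next overlapping
    # command when that command is also a write (no intervening access).
    keep = []
    for i, c in enumerate(cmds):
        dead = False
        if c["op"] == "write":
            for d in cmds[i + 1:]:
                if overlaps(c, d):
                    dead = (d["op"] == "write"
                            and d["lba"] <= c["lba"]
                            and d["lba"] + d["sectors"] >= c["lba"] + c["sectors"])
                    break
        if not dead:
            keep.append(c)
    return keep

def reorder(cmds):
    # Sort by LBA so that adjacent ranges end up next to each other.
    return sorted(cmds, key=lambda c: c["lba"])

def combine_adjacent(cmds):
    # Merge consecutive same-type commands whose LBA ranges are contiguous.
    out = []
    for c in cmds:
        if out and out[-1]["op"] == c["op"] and out[-1]["lba"] + out[-1]["sectors"] == c["lba"]:
            out[-1] = {**out[-1], "sectors": out[-1]["sectors"] + c["sectors"]}
        else:
            out.append(dict(c))
    return out

cmds = [
    {"op": "write", "lba": 100, "sectors": 8},
    {"op": "write", "lba": 0,   "sectors": 8},
    {"op": "write", "lba": 8,   "sectors": 8},
    {"op": "write", "lba": 100, "sectors": 8},  # overwrites the first command
]
print(combine_adjacent(reorder(drop_overwritten(cmds))))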
51. The method of claim 50, wherein the processing occurs only when the back-end channels are busy.
52. The method of claim 50, wherein the reordering is at least in part in response to combining commands or deleting commands.
53. The method of any one of claims 50 to 52, wherein combining a plurality of commands comprises collecting commands that involve adjacent memory locations into a single command.
54. The method of any one of claims 50 to 52, wherein combining a plurality of commands comprises collecting commands that involve a number of the same memory locations into a single command.
55. The method of any one of claims 50 to 52, wherein combining commands comprises merging commands that involve contiguous memory locations on a particular memory device into a single command.
56. The method of any one of claims 50 to 52, wherein combining commands comprises merging commands that involve overlapping memory locations on a particular memory device into a single command.
57. The method of any one of claims 50 to 52, wherein the processing comprises determining, for at least one of the number of commands, a respective logical block address and sector count for each portion of the payload associated with the at least one command, and modifying the at least one command assigned to a particular back-end channel with the respective logical block address and sector count.
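To illustrate claim 57 (and claims 59 to 61), here is a sketch of deriving a per-channel logical block address and sector count when a single host command's payload is striped across several back-end channels. The round-robin striping with a fixed stripe size is an assumed layout chosen for the example, not something the claims specify.

def split_across_channels(lba, sector_count, num_channels, stripe_sectors=8):
    # Walk the host command's LBA range stripe by stripe and record, for the
    # channel owning each stripe, the (lba, sector_count) of that piece.
    parts = {}
    cur, remaining = lba, sector_count
    while remaining > 0:
        channel = (cur // stripe_sectors) % num_channels
        take = min(stripe_sectors - (cur % stripe_sectors), remaining)
        parts.setdefault(channel, []).append((cur, take))
        cur += take
        remaining -= take
    return parts

# A 30-sector command starting at LBA 5, striped over 4 back-end channels:
for channel, pieces in sorted(split_across_channels(5, 30, 4).items()):
    print("channel", channel, "->", pieces)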
58. A method of processing a number of commands, comprising:
receiving a number of commands in an order from a host; and
processing at least one command associated with a payload that involves more than one of a plurality of back-end channels.
59. The method of claim 58, comprising distributing the payload associated with the at least one command among more than one of the plurality of back-end channels, wherein the at least one command is a write command.
60. The method of claim 58, comprising assembling the payload associated with the at least one command from more than one of the plurality of back-end channels, wherein the at least one command is a read command.
61. The method of claim 58, comprising:
determining a logical block address and sector count associated with each portion of the payload, associated with the at least one command, that involves a corresponding one of the plurality of back-end channels; and
modifying the at least one command assigned to the respective back-end channel with the determined logical block address and sector count particular to that back-end channel.
62. The method of claim 58, comprising:
receiving payloads associated with the number of commands from the back-end channels in an order; and
communicating the received payloads to the host in an order different from the order in which the number of commands were communicated by the host.
63. The method of any one of claims 58 to 62, comprising communicating a portion of the payload received from one of the back-end channels to the host without waiting to receive another portion of the payload from another of the back-end channels.
64. The method of claim 63, comprising communicating the portion of the payload received from the one of the back-end channels to the host without re-assembling the payload.
65. The method of any one of claims 58 to 62, comprising, upon completing a particular command of the number of commands, communicating an indicator of the completion status of the particular command to the host irrespective of the order in which the commands were communicated by the host.
66. The method of claim 65, comprising:
receiving a first command before receiving the particular command;
assigning the first command to a first back-end channel;
assigning the particular command to a second back-end channel; and
upon completing the particular command, and regardless of whether the first command has completed, immediately communicating the indicator of the completion status of the particular command to the host.
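A small Python sketch of the out-of-order completion reporting in claims 65 and 66. Tag-based tracking of outstanding commands (similar in spirit to SATA NCQ tags) and the notify_host callback are assumptions made for the example.

class CompletionReporter:
    def __init__(self, notify_host):
        self.notify_host = notify_host
        self.outstanding = {}            # tag -> (command, back-end channel)

    def issue(self, tag, cmd, channel):
        self.outstanding[tag] = (cmd, channel)

    def channel_done(self, tag, status):
        # Report status the moment a channel finishes its command,
        # regardless of older commands still in flight on other channels.
        cmd, channel = self.outstanding.pop(tag)
        self.notify_host(tag, cmd, status)

reporter = CompletionReporter(lambda tag, cmd, s: print("host sees tag", tag, cmd, s))
reporter.issue(0, "write A", channel=0)   # received first, assigned to a busy channel
reporter.issue(1, "write B", channel=1)   # received later, assigned to an idle channel
reporter.channel_done(1, "ok")            # B's completion status reaches the host first
reporter.channel_done(0, "ok")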
67. The method of any one of claims 58 to 62, comprising processing the number of commands, before distributing them among the plurality of back-end channels, to improve front end throughput, the command processing comprising one or more of the following:
reordering the commands;
combining a plurality of commands; and/or
deleting commands.
68. The method of claim 67, comprising distributing a payload associated with at least one combined command among more than one of the plurality of back-end channels, wherein a command being combined has at least a portion of its payload involving a particular one of the plurality of back-end channels.
CN201080022747.7A 2009-04-09 2010-03-11 Memory controllers, memory systems, solid state drivers and methods for processing a number of commands Active CN102439576B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US12/421,093 US8055816B2 (en) 2009-04-09 2009-04-09 Memory controllers, memory systems, solid state drives and methods for processing a number of commands
US12/421,093 2009-04-09
PCT/US2010/000732 WO2010117404A2 (en) 2009-04-09 2010-03-11 Memory controllers, memory systems, solid state drivers and methods for processing a number of commands

Publications (2)

Publication Number Publication Date
CN102439576A true CN102439576A (en) 2012-05-02
CN102439576B CN102439576B (en) 2015-04-15

Family

ID=42935228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201080022747.7A Active CN102439576B (en) 2009-04-09 2010-03-11 Memory controllers, memory systems, solid state drivers and methods for processing a number of commands

Country Status (7)

Country Link
US (7) US8055816B2 (en)
EP (2) EP2417527B1 (en)
JP (1) JP5729774B2 (en)
KR (1) KR101371815B1 (en)
CN (1) CN102439576B (en)
TW (1) TWI418989B (en)
WO (1) WO2010117404A2 (en)

Families Citing this family (152)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101490327B1 (en) 2006-12-06 2015-02-05 퓨전-아이오, 인크. Apparatus, system and method for managing commands of solid-state storage using bank interleave
KR101006748B1 (en) * 2009-01-29 2011-01-10 (주)인디링스 Solid state disks controller of controlling simultaneously switching of pads
US8271720B1 (en) * 2009-03-02 2012-09-18 Marvell International Ltd. Adaptive physical allocation in solid-state drives
KR101662729B1 (en) * 2009-05-08 2016-10-06 삼성전자주식회사 Method for processing command of non-volatile storage device interfacing with host using serial interface protocol and Memory controller for performing the method
US20110004742A1 (en) * 2009-07-06 2011-01-06 Eonsil, Inc. Variable-Cycle, Event-Driven Multi-Execution Flash Processor
KR101608910B1 (en) * 2009-08-11 2016-04-04 마벨 월드 트레이드 리미티드 Controller for reading data from non-volatile memory
TW201111986A (en) * 2009-09-29 2011-04-01 Silicon Motion Inc Memory apparatus and data access method for memories
US9213628B2 (en) 2010-07-14 2015-12-15 Nimble Storage, Inc. Methods and systems for reducing churn in flash-based cache
JP4966404B2 (en) * 2010-10-21 2012-07-04 株式会社東芝 MEMORY CONTROL DEVICE, STORAGE DEVICE, AND MEMORY CONTROL METHOD
JP2012128644A (en) * 2010-12-15 2012-07-05 Toshiba Corp Memory system
US20120233401A1 (en) * 2011-03-08 2012-09-13 Skymedi Corporation Embedded memory system
US8856482B2 (en) * 2011-03-11 2014-10-07 Micron Technology, Inc. Systems, devices, memory controllers, and methods for memory initialization
US9021215B2 (en) * 2011-03-21 2015-04-28 Apple Inc. Storage system exporting internal storage rules
US8924627B2 (en) 2011-03-28 2014-12-30 Western Digital Technologies, Inc. Flash memory device comprising host interface for processing a multi-command descriptor block in order to exploit concurrency
WO2012143944A2 (en) * 2011-04-18 2012-10-26 Ineda Systems Pvt. Ltd Multi-host nand flash controller
US9436594B2 (en) * 2011-05-27 2016-09-06 Seagate Technology Llc Write operation with immediate local destruction of old content in non-volatile memory
US8543758B2 (en) * 2011-05-31 2013-09-24 Micron Technology, Inc. Apparatus including memory channel control circuit and related methods for relaying commands to logical units
KR101835604B1 (en) * 2011-06-03 2018-03-07 삼성전자 주식회사 Scheduler for memory
GB2494625A (en) * 2011-09-06 2013-03-20 St Microelectronics Grenoble 2 Minimizing the latency of a scrambled memory access by sending a memory access operation to the encryption engine and the memory controller in parallel
US8886872B1 (en) 2011-10-06 2014-11-11 Google Inc. Memory command dispatch in a data storage device
US8255618B1 (en) * 2011-10-06 2012-08-28 Google Inc. Performance isolation in a shared memory device
WO2013068862A1 (en) * 2011-11-11 2013-05-16 International Business Machines Corporation Memory module and memory controller for controlling a memory module
US8904091B1 (en) * 2011-12-22 2014-12-02 Western Digital Technologies, Inc. High performance media transport manager architecture for data storage systems
US9274937B2 (en) * 2011-12-22 2016-03-01 Longitude Enterprise Flash S.A.R.L. Systems, methods, and interfaces for vector input/output operations
US10157060B2 (en) 2011-12-29 2018-12-18 Intel Corporation Method, device and system for control signaling in a data path module of a data stream processing engine
US8811913B2 (en) * 2012-02-21 2014-08-19 Htc Corporation RF calibration data management in portable device
US9146856B2 (en) 2012-04-10 2015-09-29 Micron Technology, Inc. Remapping and compacting in a memory device
US9141296B2 (en) 2012-05-31 2015-09-22 Sandisk Technologies Inc. Method and host device for packing and dispatching read and write commands
CN102789439B (en) * 2012-06-16 2016-02-10 北京忆恒创源科技有限公司 The method of the interruption in control data transmission process and memory device
US9557800B2 (en) * 2012-08-31 2017-01-31 Micron Technology, Inc. Sequence power control
US10095433B1 (en) * 2012-10-24 2018-10-09 Western Digital Technologies, Inc. Out-of-order data transfer mechanisms for data storage systems
KR101988287B1 (en) 2012-11-26 2019-06-12 삼성전자주식회사 Storage device and computing system havint its and data transfering method thereof
US9256384B2 (en) * 2013-02-04 2016-02-09 Avago Technologies General Ip (Singapore) Pte. Ltd. Method and system for reducing write latency in a data storage system by using a command-push model
US9244877B2 (en) 2013-03-14 2016-01-26 Intel Corporation Link layer virtualization in SATA controller
US9170755B2 (en) 2013-05-21 2015-10-27 Sandisk Technologies Inc. Command and data selection in storage controller systems
JP6160294B2 (en) * 2013-06-24 2017-07-12 富士通株式会社 Storage system, storage apparatus, and storage system control method
TWI493455B (en) * 2013-07-02 2015-07-21 Phison Electronics Corp Method for managing command queue, memory controller and memory storage apparatus
JP5866032B2 (en) * 2013-08-19 2016-02-17 株式会社東芝 Memory system
US9304709B2 (en) 2013-09-06 2016-04-05 Western Digital Technologies, Inc. High performance system providing selective merging of dataframe segments in hardware
US10331583B2 (en) * 2013-09-26 2019-06-25 Intel Corporation Executing distributed memory operations using processing elements connected by distributed channels
US8854385B1 (en) * 2013-10-03 2014-10-07 Google Inc. Merging rendering operations for graphics processing unit (GPU) performance
US9292903B2 (en) 2013-10-03 2016-03-22 Google Inc. Overlap aware reordering of rendering operations for efficiency
US9824004B2 (en) 2013-10-04 2017-11-21 Micron Technology, Inc. Methods and apparatuses for requesting ready status information from a memory
US9880777B1 (en) * 2013-12-23 2018-01-30 EMC IP Holding Company LLC Embedded synchronous replication for block and file objects
US10108372B2 (en) 2014-01-27 2018-10-23 Micron Technology, Inc. Methods and apparatuses for executing a plurality of queued tasks in a memory
US9454310B2 (en) 2014-02-14 2016-09-27 Micron Technology, Inc. Command queuing
US20150261473A1 (en) * 2014-03-11 2015-09-17 Kabushiki Kaisha Toshiba Memory system and method of controlling memory system
JP2015215774A (en) * 2014-05-12 2015-12-03 Tdk株式会社 Memory controller, memory system and memory control method
US9507722B2 (en) 2014-06-05 2016-11-29 Sandisk Technologies Llc Methods, systems, and computer readable media for solid state drive caching across a host bus
US9563382B2 (en) 2014-06-05 2017-02-07 Sandisk Technologies Llc Methods, systems, and computer readable media for providing flexible host memory buffer
US9652415B2 (en) 2014-07-09 2017-05-16 Sandisk Technologies Llc Atomic non-volatile memory data transfer
US9904621B2 (en) 2014-07-15 2018-02-27 Sandisk Technologies Llc Methods and systems for flash buffer sizing
US9645744B2 (en) 2014-07-22 2017-05-09 Sandisk Technologies Llc Suspending and resuming non-volatile memory operations
US10268584B2 (en) 2014-08-20 2019-04-23 Sandisk Technologies Llc Adaptive host memory buffer (HMB) caching using unassisted hinting
US10228854B2 (en) 2014-08-20 2019-03-12 Sandisk Technologies Llc Storage devices and methods for optimizing use of storage devices based on storage device parsing of file system metadata in host write operations
US10007442B2 (en) * 2014-08-20 2018-06-26 Sandisk Technologies Llc Methods, systems, and computer readable media for automatically deriving hints from accesses to a storage device and from file system metadata and for optimizing utilization of the storage device based on the hints
US9760295B2 (en) 2014-09-05 2017-09-12 Toshiba Memory Corporation Atomic rights in a distributed memory system
US10101943B1 (en) * 2014-09-25 2018-10-16 EMC IP Holding Company LLC Realigning data in replication system
US20160094619A1 (en) * 2014-09-26 2016-03-31 Jawad B. Khan Technologies for accelerating compute intensive operations using solid state drives
US9910621B1 (en) 2014-09-29 2018-03-06 EMC IP Holding Company LLC Backlogging I/O metadata utilizing counters to monitor write acknowledgements and no acknowledgements
US9952978B2 (en) 2014-10-27 2018-04-24 Sandisk Technologies, Llc Method for improving mixed random performance in low queue depth workloads
US9753649B2 (en) 2014-10-27 2017-09-05 Sandisk Technologies Llc Tracking intermix of writes and un-map commands across power cycles
US9824007B2 (en) 2014-11-21 2017-11-21 Sandisk Technologies Llc Data integrity enhancement to protect against returning old versions of data
US9817752B2 (en) 2014-11-21 2017-11-14 Sandisk Technologies Llc Data integrity enhancement to protect against returning old versions of data
US9575669B2 (en) * 2014-12-09 2017-02-21 Western Digital Technologies, Inc. Programmable solid state drive controller and method for scheduling commands utilizing a data structure
US9778848B2 (en) * 2014-12-23 2017-10-03 Intel Corporation Method and apparatus for improving read performance of a solid state drive
TWI553476B (en) 2015-03-05 2016-10-11 光寶電子(廣州)有限公司 Region descriptor management method and electronic apparatus thereof
US9965323B2 (en) * 2015-03-11 2018-05-08 Western Digital Technologies, Inc. Task queues
US9652175B2 (en) 2015-04-09 2017-05-16 Sandisk Technologies Llc Locally generating and storing RAID stripe parity with single relative memory address for storing data segments and parity in multiple non-volatile memory portions
US10372529B2 (en) 2015-04-20 2019-08-06 Sandisk Technologies Llc Iterative soft information correction and decoding
US9778878B2 (en) * 2015-04-22 2017-10-03 Sandisk Technologies Llc Method and system for limiting write command execution
US10698607B2 (en) * 2015-05-19 2020-06-30 Netapp Inc. Configuration update management
KR102367982B1 (en) 2015-06-22 2022-02-25 삼성전자주식회사 Data storage device and data processing system having the same
US9870149B2 (en) 2015-07-08 2018-01-16 Sandisk Technologies Llc Scheduling operations in non-volatile memory devices using preference values
JP2017027479A (en) * 2015-07-24 2017-02-02 富士通株式会社 Data reading method and information processing system
US9715939B2 (en) 2015-08-10 2017-07-25 Sandisk Technologies Llc Low read data storage management
CN106547701B (en) * 2015-09-17 2020-01-10 慧荣科技股份有限公司 Memory device and data reading method
TWI587141B (en) * 2015-09-17 2017-06-11 慧榮科技股份有限公司 Storage device and data access method thereof
US9886196B2 (en) 2015-10-21 2018-02-06 Western Digital Technologies, Inc. Method and system for efficient common processing in memory device controllers
US10108340B2 (en) 2015-10-21 2018-10-23 Western Digital Technologies, Inc. Method and system for a common processing framework for memory device controllers
US10452596B2 (en) * 2015-10-29 2019-10-22 Micron Technology, Inc. Memory cells configured in multiple configuration modes
US9904609B2 (en) * 2015-11-04 2018-02-27 Toshiba Memory Corporation Memory controller and memory device
US10228990B2 (en) 2015-11-12 2019-03-12 Sandisk Technologies Llc Variable-term error metrics adjustment
US10126970B2 (en) 2015-12-11 2018-11-13 Sandisk Technologies Llc Paired metablocks in non-volatile storage device
US10437483B2 (en) * 2015-12-17 2019-10-08 Samsung Electronics Co., Ltd. Computing system with communication mechanism and method of operation thereof
US10275160B2 (en) 2015-12-21 2019-04-30 Intel Corporation Method and apparatus to enable individual non volatile memory express (NVME) input/output (IO) Queues on differing network addresses of an NVME controller
US9927997B2 (en) 2015-12-21 2018-03-27 Sandisk Technologies Llc Methods, systems, and computer readable media for automatically and selectively enabling burst mode operation in a storage device
US9837146B2 (en) 2016-01-08 2017-12-05 Sandisk Technologies Llc Memory system temperature management
US10732856B2 (en) 2016-03-03 2020-08-04 Sandisk Technologies Llc Erase health metric to rank memory portions
JP6502879B2 (en) * 2016-03-08 2019-04-17 東芝メモリ株式会社 Storage device
US10521118B2 (en) 2016-07-13 2019-12-31 Sandisk Technologies Llc Methods, systems, and computer readable media for write classification and aggregation using host memory buffer (HMB)
US10481830B2 (en) 2016-07-25 2019-11-19 Sandisk Technologies Llc Selectively throttling host reads for read disturbs in non-volatile memory system
US10200376B2 (en) 2016-08-24 2019-02-05 Intel Corporation Computer product, method, and system to dynamically provide discovery services for host nodes of target systems and storage resources in a network
DE112016007069T5 (en) * 2016-08-30 2019-03-28 Mitsubishi Electric Corporation PROGRAM EDITING DEVICE, PROGRAM EDITING PROCEDURE AND PROGRAM EDITING PROGRAM
US10176116B2 (en) * 2016-09-28 2019-01-08 Intel Corporation Computer product, method, and system to provide discovery services to discover target storage resources and register a configuration of virtual target storage resources mapping to the target storage resources and an access control list of host nodes allowed to access the virtual target storage resources
US10402168B2 (en) 2016-10-01 2019-09-03 Intel Corporation Low energy consumption mantissa multiplication for floating point multiply-add operations
US10558575B2 (en) 2016-12-30 2020-02-11 Intel Corporation Processors, methods, and systems with a configurable spatial accelerator
US10474375B2 (en) 2016-12-30 2019-11-12 Intel Corporation Runtime address disambiguation in acceleration hardware
US10416999B2 (en) 2016-12-30 2019-09-17 Intel Corporation Processors, methods, and systems with a configurable spatial accelerator
US10572376B2 (en) 2016-12-30 2020-02-25 Intel Corporation Memory ordering in acceleration hardware
KR102387922B1 (en) * 2017-02-07 2022-04-15 삼성전자주식회사 Methods and systems for handling asynchronous event request command in a solid state drive
US10521375B2 (en) * 2017-06-22 2019-12-31 Macronix International Co., Ltd. Controller for a memory system
US10467183B2 (en) 2017-07-01 2019-11-05 Intel Corporation Processors and methods for pipelined runtime services in a spatial array
US10469397B2 (en) 2017-07-01 2019-11-05 Intel Corporation Processors and methods with configurable network-based dataflow operator circuits
US10387319B2 (en) 2017-07-01 2019-08-20 Intel Corporation Processors, methods, and systems for a configurable spatial accelerator with memory system performance, power reduction, and atomics support features
US10515049B1 (en) 2017-07-01 2019-12-24 Intel Corporation Memory circuits and methods for distributed memory hazard detection and error recovery
US10515046B2 (en) 2017-07-01 2019-12-24 Intel Corporation Processors, methods, and systems with a configurable spatial accelerator
US10445234B2 (en) 2017-07-01 2019-10-15 Intel Corporation Processors, methods, and systems for a configurable spatial accelerator with transactional and replay features
US10445451B2 (en) 2017-07-01 2019-10-15 Intel Corporation Processors, methods, and systems for a configurable spatial accelerator with performance, correctness, and power reduction features
KR20190032809A (en) 2017-09-20 2019-03-28 에스케이하이닉스 주식회사 Memory system and operating method thereof
US10496574B2 (en) 2017-09-28 2019-12-03 Intel Corporation Processors, methods, and systems for a memory fence in a configurable spatial accelerator
US11086816B2 (en) 2017-09-28 2021-08-10 Intel Corporation Processors, methods, and systems for debugging a configurable spatial accelerator
US10445098B2 (en) 2017-09-30 2019-10-15 Intel Corporation Processors and methods for privileged configuration in a spatial array
US10380063B2 (en) 2017-09-30 2019-08-13 Intel Corporation Processors, methods, and systems with a configurable spatial accelerator having a sequencer dataflow operator
US10445250B2 (en) 2017-12-30 2019-10-15 Intel Corporation Apparatus, methods, and systems with a configurable spatial accelerator
US10565134B2 (en) 2017-12-30 2020-02-18 Intel Corporation Apparatus, methods, and systems for multicast in a configurable spatial accelerator
US10417175B2 (en) 2017-12-30 2019-09-17 Intel Corporation Apparatus, methods, and systems for memory consistency in a configurable spatial accelerator
JP7013294B2 (en) * 2018-03-19 2022-01-31 キオクシア株式会社 Memory system
KR20190110360A (en) 2018-03-20 2019-09-30 에스케이하이닉스 주식회사 Controller, system having the same, and operating method thereof
JP2019175292A (en) * 2018-03-29 2019-10-10 東芝メモリ株式会社 Electronic device, computer system, and control method
US10564980B2 (en) 2018-04-03 2020-02-18 Intel Corporation Apparatus, methods, and systems for conditional queues in a configurable spatial accelerator
US11307873B2 (en) 2018-04-03 2022-04-19 Intel Corporation Apparatus, methods, and systems for unstructured data flow in a configurable spatial accelerator with predicate propagation and merging
US11200186B2 (en) 2018-06-30 2021-12-14 Intel Corporation Apparatuses, methods, and systems for operations in a configurable spatial accelerator
US10891240B2 (en) 2018-06-30 2021-01-12 Intel Corporation Apparatus, methods, and systems for low latency communication in a configurable spatial accelerator
US10853073B2 (en) 2018-06-30 2020-12-01 Intel Corporation Apparatuses, methods, and systems for conditional operations in a configurable spatial accelerator
US10459866B1 (en) 2018-06-30 2019-10-29 Intel Corporation Apparatuses, methods, and systems for integrated control and data processing in a configurable spatial accelerator
US10884920B2 (en) 2018-08-14 2021-01-05 Western Digital Technologies, Inc. Metadata-based operations for use with solid state devices
CN110851073B (en) * 2018-08-20 2023-06-02 慧荣科技股份有限公司 Storage device and execution method of macro instruction
CN110851372B (en) 2018-08-20 2023-10-31 慧荣科技股份有限公司 Storage device and cache area addressing method
CN110858127B (en) * 2018-08-22 2023-09-12 慧荣科技股份有限公司 data storage device
US11249664B2 (en) 2018-10-09 2022-02-15 Western Digital Technologies, Inc. File system metadata decoding for optimizing flash translation layer operations
US11340810B2 (en) 2018-10-09 2022-05-24 Western Digital Technologies, Inc. Optimizing data storage device operation by grouping logical block addresses and/or physical block addresses using hints
US10678724B1 (en) 2018-12-29 2020-06-09 Intel Corporation Apparatuses, methods, and systems for in-network storage in a configurable spatial accelerator
US11455402B2 (en) 2019-01-30 2022-09-27 Seagate Technology Llc Non-volatile memory with precise write-once protection
US10817291B2 (en) 2019-03-30 2020-10-27 Intel Corporation Apparatuses, methods, and systems for swizzle operations in a configurable spatial accelerator
US11029927B2 (en) 2019-03-30 2021-06-08 Intel Corporation Methods and apparatus to detect and annotate backedges in a dataflow graph
US10915471B2 (en) 2019-03-30 2021-02-09 Intel Corporation Apparatuses, methods, and systems for memory interface circuit allocation in a configurable spatial accelerator
US10965536B2 (en) 2019-03-30 2021-03-30 Intel Corporation Methods and apparatus to insert buffers in a dataflow graph
CN110187835B (en) * 2019-05-24 2023-02-03 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for managing access requests
US11037050B2 (en) 2019-06-29 2021-06-15 Intel Corporation Apparatuses, methods, and systems for memory interface circuit arbitration in a configurable spatial accelerator
KR20210053384A (en) 2019-11-01 2021-05-12 삼성전자주식회사 Storage device and operating method of storage device
US11797188B2 (en) * 2019-12-12 2023-10-24 Sk Hynix Nand Product Solutions Corp. Solid state drive with multiplexed internal channel access during program data transfers
US11907713B2 (en) 2019-12-28 2024-02-20 Intel Corporation Apparatuses, methods, and systems for fused operations using sign modification in a processing element of a configurable spatial accelerator
US11222258B2 (en) 2020-03-27 2022-01-11 Google Llc Load balancing for memory channel controllers
US11579801B2 (en) * 2020-06-09 2023-02-14 Samsung Electronics Co., Ltd. Write ordering in SSDs
US11307806B2 (en) 2020-07-22 2022-04-19 Seagate Technology Llc Controlling SSD performance by queue depth
US11347394B2 (en) 2020-08-03 2022-05-31 Seagate Technology Llc Controlling SSD performance by the number of active memory dies
US11507298B2 (en) * 2020-08-18 2022-11-22 PetaIO Inc. Computational storage systems and methods
US20230120600A1 (en) * 2021-10-20 2023-04-20 Western Digital Technologies, Inc. Data Storage Devices, Systems, and Related Methods for Grouping Commands of Doorbell Transactions from Host Devices
US20230393784A1 (en) * 2022-06-03 2023-12-07 Micron Technology, Inc. Data path sequencing in memory systems
US20240004788A1 (en) * 2022-07-01 2024-01-04 Micron Technology, Inc. Adaptive configuration of memory devices using host profiling
US20240061615A1 (en) * 2022-08-22 2024-02-22 Micron Technology, Inc. Command scheduling for a memory system

Family Cites Families (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5924485A (en) * 1982-07-30 1984-02-08 Toshiba Corp Input/output paging mechanism
DE3241376A1 (en) * 1982-11-09 1984-05-10 Siemens AG, 1000 Berlin und 8000 München DMA CONTROL DEVICE FOR TRANSMITTING DATA BETWEEN A DATA TRANSMITTER AND A DATA RECEIVER
US4797812A (en) * 1985-06-19 1989-01-10 Kabushiki Kaisha Toshiba System for continuous DMA transfer of virtually addressed data blocks
US5544347A (en) * 1990-09-24 1996-08-06 Emc Corporation Data storage system controlled remote data mirroring with respectively maintained data indices
US5182800A (en) * 1990-11-16 1993-01-26 International Business Machines Corporation Direct memory access controller with adaptive pipelining and bus control features
US5640596A (en) * 1992-03-10 1997-06-17 Hitachi, Ltd. Input output control system for transferring control programs collectively as one transfer unit designated by plurality of input output requests to be executed
US5526484A (en) * 1992-12-10 1996-06-11 International Business Machines Corporation Method and system for pipelining the processing of channel command words
US5517670A (en) * 1992-12-30 1996-05-14 International Business Machines Corporation Adaptive data transfer channel employing extended data block capability
US5564055A (en) * 1994-08-30 1996-10-08 Lucent Technologies Inc. PCMCIA slot expander and method
US5717952A (en) * 1994-11-16 1998-02-10 Apple Computer, Inc. DMA controller with mechanism for conditional action under control of status register, prespecified parameters, and condition field of channel command
JPH09128159A (en) * 1995-11-06 1997-05-16 Matsushita Electric Ind Co Ltd Storage device
JP3287203B2 (en) * 1996-01-10 2002-06-04 株式会社日立製作所 External storage controller and data transfer method between external storage controllers
US6233660B1 (en) * 1996-02-16 2001-05-15 Emc Corporation System and method for emulating mainframe channel programs by open systems computer systems
US5889935A (en) * 1996-05-28 1999-03-30 Emc Corporation Disaster control features for remote data mirroring
US5928370A (en) * 1997-02-05 1999-07-27 Lexar Media, Inc. Method and apparatus for verifying erasure of memory blocks within a non-volatile memory structure
US6034897A (en) * 1999-04-01 2000-03-07 Lexar Media, Inc. Space management for managing high capacity nonvolatile memory
US6012104A (en) * 1997-10-24 2000-01-04 International Business Machines Corporation Method and apparatus for dynamic extension of channel programs
US6076137A (en) * 1997-12-11 2000-06-13 Lexar Media, Inc. Method and apparatus for storing location identification information within non-volatile memory devices
US6192444B1 (en) * 1998-01-05 2001-02-20 International Business Machines Corporation Method and system for providing additional addressable functional space on a disk for use with a virtual data storage subsystem
US6038646A (en) 1998-01-23 2000-03-14 Sun Microsystems, Inc. Method and apparatus for enforcing ordered execution of reads and writes across a memory interface
US8171204B2 (en) * 2000-01-06 2012-05-01 Super Talent Electronics, Inc. Intelligent solid-state non-volatile memory device (NVMD) system with multi-level caching of multiple channels
US7102671B1 (en) * 2000-02-08 2006-09-05 Lexar Media, Inc. Enhanced compact flash memory card
US20020120741A1 (en) * 2000-03-03 2002-08-29 Webb Theodore S. Systems and methods for using distributed interconnects in information management enviroments
US6684311B2 (en) * 2001-06-22 2004-01-27 Intel Corporation Method and mechanism for common scheduling in a RDRAM system
US7269709B2 (en) * 2002-05-15 2007-09-11 Broadcom Corporation Memory controller configurable to allow bandwidth/latency tradeoff
US6915378B2 (en) * 2003-04-23 2005-07-05 Hypernova Technologies, Inc. Method and system for improving the performance of a processing system
US7058735B2 (en) * 2003-06-02 2006-06-06 Emulex Design & Manufacturing Corporation Method and apparatus for local and distributed data memory access (“DMA”) control
US7010654B2 (en) * 2003-07-24 2006-03-07 International Business Machines Corporation Methods and systems for re-ordering commands to access memory
TW200506733A (en) * 2003-08-15 2005-02-16 Via Tech Inc Apparatus and method for the co-simulation of CPU and DUT modules
KR100585136B1 (en) * 2004-03-04 2006-05-30 삼성전자주식회사 Memory system and data channel initialization method thereof
US7328317B2 (en) * 2004-10-21 2008-02-05 International Business Machines Corporation Memory controller and method for optimized read/modify/write performance
US7353301B2 (en) * 2004-10-29 2008-04-01 Intel Corporation Methodology and apparatus for implementing write combining
JP4366298B2 (en) * 2004-12-02 2009-11-18 富士通株式会社 Storage device, control method thereof, and program
US8316129B2 (en) * 2005-05-25 2012-11-20 Microsoft Corporation Data communication coordination with sequence numbers
JP2007058646A (en) * 2005-08-25 2007-03-08 Hitachi Ltd Data processing system
US20070162643A1 (en) * 2005-12-19 2007-07-12 Ivo Tousek Fixed offset scatter/gather dma controller and method thereof
JP2008021380A (en) * 2006-07-14 2008-01-31 Fujitsu Ltd Seek control device, seek control method, storage device
US7822887B2 (en) * 2006-10-27 2010-10-26 Stec, Inc. Multi-channel solid-state storage system
US20080107275A1 (en) * 2006-11-08 2008-05-08 Mehdi Asnaashari Method and system for encryption of information stored in an external nonvolatile memory
US8151082B2 (en) * 2007-12-06 2012-04-03 Fusion-Io, Inc. Apparatus, system, and method for converting a storage request into an append data storage command
US7934025B2 (en) * 2007-01-24 2011-04-26 Qualcomm Incorporated Content terminated DMA
TW200844841A (en) * 2007-05-10 2008-11-16 Realtek Semiconductor Corp Method for expediting data access of universal serial bus stoarage device
CN100458751C (en) * 2007-05-10 2009-02-04 忆正存储技术(深圳)有限公司 Paralleling flash memory controller
JP4963088B2 (en) * 2007-07-13 2012-06-27 インターナショナル・ビジネス・マシーンズ・コーポレーション Data caching technology
JP2009128159A (en) 2007-11-22 2009-06-11 Ogata Institute For Medical & Chemical Research Capillary isoelectric-point electrophoretic apparatus

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS63175970A (en) * 1987-01-16 1988-07-20 Hitachi Ltd Memory control system
US20080320209A1 (en) * 2000-01-06 2008-12-25 Super Talent Electronics, Inc. High Performance and Endurance Non-volatile Memory Based Storage Systems
US20020078292A1 (en) * 2000-12-19 2002-06-20 Chilton Kendell A. Methods and apparatus for transferring a data element within a data storage system
JP2004013473A (en) * 2002-06-06 2004-01-15 Hitachi Ltd Data writing control method for magnetic disk device
CN1875339A (en) * 2003-10-29 2006-12-06 松下电器产业株式会社 Drive device and related computer program
JP2009020883A (en) * 2007-07-10 2009-01-29 Internatl Business Mach Corp <Ibm> Memory controller read queue dynamic optimization of command selection

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850354B (en) * 2014-02-18 2019-03-08 东芝存储器株式会社 Information processing system and storage system
CN104850354A (en) * 2014-02-18 2015-08-19 株式会社东芝 Information processing system and storage device system
CN106663074A (en) * 2014-09-03 2017-05-10 高通股份有限公司 Multi-channel audio communication in serial low-power inter-chip media bus (SLIMbus) system
CN108733580A (en) * 2014-09-05 2018-11-02 慧荣科技股份有限公司 Method for scheduling read commands
CN106126437A (en) * 2015-05-07 2016-11-16 爱思开海力士有限公司 Storage system
KR20160131379A (en) * 2015-05-07 2016-11-16 에스케이하이닉스 주식회사 Memory system
CN106126437B (en) * 2015-05-07 2020-10-20 爱思开海力士有限公司 Storage system
KR102398611B1 (en) * 2015-05-07 2022-05-17 에스케이하이닉스 주식회사 Memory system
CN106484317A (en) * 2015-08-27 2017-03-08 三星电子株式会社 Accumulator system, memory module and its method
CN107436730A (en) * 2016-05-25 2017-12-05 爱思开海力士有限公司 Data handling system and its operating method
CN108958848A (en) * 2017-05-17 2018-12-07 慧与发展有限责任合伙企业 Nearly memory counting system structure
CN110678853A (en) * 2017-06-15 2020-01-10 美光科技公司 Memory controller
CN111566610B (en) * 2017-10-24 2022-04-05 美光科技公司 Command selection strategy
CN111566610A (en) * 2017-10-24 2020-08-21 美光科技公司 Command selection strategy
CN108628759B (en) * 2017-12-29 2020-09-01 贵阳忆芯科技有限公司 Method and apparatus for out-of-order execution of NVM commands
CN108628759A (en) * 2017-12-29 2018-10-09 贵阳忆芯科技有限公司 The method and apparatus of Out-of-order execution NVM command
CN110245097A (en) * 2018-03-08 2019-09-17 爱思开海力士有限公司 Memory Controller and storage system with Memory Controller
CN111868678A (en) * 2018-03-21 2020-10-30 美光科技公司 Hybrid memory system
CN108874685A (en) * 2018-06-21 2018-11-23 郑州云海信息技术有限公司 The data processing method and solid state hard disk of solid state hard disk
CN108874685B (en) * 2018-06-21 2021-10-29 郑州云海信息技术有限公司 Data processing method of solid state disk and solid state disk
CN110727393A (en) * 2018-07-17 2020-01-24 爱思开海力士有限公司 Data storage device, operation method thereof and storage system
CN112930520A (en) * 2018-09-13 2021-06-08 铠侠股份有限公司 System and method for storing data using an Ethernet driver and an Ethernet open channel driver
CN112930520B (en) * 2018-09-13 2023-08-15 铠侠股份有限公司 System and method for storing data using an Ethernet driver and an Ethernet open channel driver
CN112447227A (en) * 2019-08-28 2021-03-05 美光科技公司 Command tracking
CN110737540A (en) * 2019-09-29 2020-01-31 深圳忆联信息系统有限公司 Recovery optimization method, device, equipment and storage medium for SSD read exception
CN113360091A (en) * 2020-03-04 2021-09-07 美光科技公司 Internal commands for access operations
US11726716B2 (en) 2020-03-04 2023-08-15 Micron Technology, Inc. Internal commands for access operations
CN113360091B (en) * 2020-03-04 2022-05-10 美光科技公司 Internal commands for access operations
CN115702417A (en) * 2020-06-12 2023-02-14 超威半导体公司 Dynamic multi-bank memory command coalescing
CN114265795A (en) * 2020-09-16 2022-04-01 铠侠股份有限公司 Apparatus and method for high performance memory debug record generation and management
US11847037B2 (en) 2020-09-16 2023-12-19 Kioxia Corporation Device and method for high performance memory debug record generation and management
WO2023108989A1 (en) * 2021-12-16 2023-06-22 北京小米移动软件有限公司 Data access method and apparatus, and non-transient computer-readable storage medium
CN114721984A (en) * 2022-03-30 2022-07-08 湖南长城银河科技有限公司 SATA interface data transmission method and system for low-delay application
CN114721984B (en) * 2022-03-30 2024-03-26 湖南长城银河科技有限公司 SATA interface data transmission method and system for low-delay application

Also Published As

Publication number Publication date
EP2417527A4 (en) 2012-12-19
US20150212734A1 (en) 2015-07-30
US20130268701A1 (en) 2013-10-10
JP2012523612A (en) 2012-10-04
TW201104440A (en) 2011-02-01
US8396995B2 (en) 2013-03-12
JP5729774B2 (en) 2015-06-03
US9015356B2 (en) 2015-04-21
US20190265889A1 (en) 2019-08-29
US20100262721A1 (en) 2010-10-14
US8260973B2 (en) 2012-09-04
EP2958027A1 (en) 2015-12-23
WO2010117404A2 (en) 2010-10-14
CN102439576B (en) 2015-04-15
EP2417527A2 (en) 2012-02-15
WO2010117404A3 (en) 2011-03-31
KR20120015313A (en) 2012-02-21
US8751700B2 (en) 2014-06-10
KR101371815B1 (en) 2014-03-07
US20120324180A1 (en) 2012-12-20
US20140310431A1 (en) 2014-10-16
US10331351B2 (en) 2019-06-25
US20120011335A1 (en) 2012-01-12
EP2958027B1 (en) 2019-09-04
US10949091B2 (en) 2021-03-16
TWI418989B (en) 2013-12-11
US8055816B2 (en) 2011-11-08
EP2417527B1 (en) 2015-09-23

Similar Documents

Publication Publication Date Title
CN102439576B (en) Memory controllers, memory systems, solid state drivers and methods for processing a number of commands
CN103176746B (en) Systems and methods for enhanced controller architecture in data storage systems
CN102326154B (en) Architecture for address mapping of managed non-volatile memory
CN103282887A (en) Controller and method for performing background operations
KR101560469B1 (en) Apparatus including memory system controllers and related methods
CN103403681A (en) Descriptor scheduler
CN108885584A (en) It is transmitted using the unordered reading of host memory buffers
US8918554B2 (en) Method and apparatus for effectively increasing a command queue length for accessing storage
CN108121672A (en) A kind of storage array control method and device based on Nand Flash memorizer multichannel
CN104166441A (en) Scalable storage devices
KR20120098505A (en) Efficient buffering for a system having non-volatile memory
CN101297276A (en) A mass storage device having both xip function and storage function
CN102272730A (en) Virtualized ecc nand
CN109656833B (en) Data storage device
CN102473078A (en) Controller for reading data from non-volatile memory
CN207008602U (en) A kind of storage array control device based on Nand Flash memorizer multichannel

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant