CN109725846A - Storage system and control method - Google Patents

Storage system and control method

Info

Publication number
CN109725846A
CN109725846A (Application CN201810767079.9A)
Authority
CN
China
Prior art keywords
block
host
data
physical address
write
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810767079.9A
Other languages
Chinese (zh)
Other versions
CN109725846B (en)
Inventor
Hideki Yoshida
Shinichi Kanno
Current Assignee
Kioxia Corp
Original Assignee
Toshiba Memory Corp
Priority date
Filing date
Publication date
Application filed by Toshiba Memory Corp filed Critical Toshiba Memory Corp
Priority claimed by CN202111461348.7A (published as CN114115747B)
Publication of CN109725846A
Application granted granted Critical
Publication of CN109725846B
Legal status: Active

Classifications

    • G06F 3/0679: Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G06F 12/0246: Memory management in non-volatile memory in block erasable memory, e.g. flash memory
    • G06F 12/0253: Garbage collection, i.e. reclamation of unreferenced memory
    • G06F 12/06: Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • G06F 12/10: Address translation
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/061: Improving I/O performance
    • G06F 3/0616: Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
    • G06F 3/064: Management of blocks
    • G06F 3/0644: Management of space entities, e.g. partitions, extents, pools
    • G06F 3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F 3/0688: Non-volatile semiconductor memory arrays
    • G06F 2212/1016: Performance improvement
    • G06F 2212/152: Virtualized environment, e.g. logically partitioned system
    • G06F 2212/2022: Flash memory
    • G06F 2212/7201: Logical to physical mapping or translation of blocks or pages
    • G06F 2212/7202: Allocation control and policies
    • G06F 2212/7205: Cleaning, compaction, garbage collection, erase control
    • G06F 2212/7208: Multiple device management, e.g. distributing data over multiple flash devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Memory System (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present invention provide a storage system and a control method capable of improving I/O performance. The storage system of an embodiment includes a nonvolatile memory containing a plurality of blocks, each of which contains a plurality of pages, and a controller that controls the nonvolatile memory. Upon receiving from a host a write request designating a first logical address and a first block number, the controller determines a first location in the first block (the block having the first block number) to which data from the host should be written, and writes the data from the host to that first location. The controller then notifies the host of either the in-block physical address indicating the first location, or the set of the first logical address, the first block number, and the in-block physical address.

Description

Storage system and control method
[Related Application]
This application claims priority based on Japanese Patent Application No. 2017-208105 (filing date: October 27, 2017), the basic application. This application incorporates the entire contents of the basic application by reference.
Technical field
Embodiments of the present invention relate to a storage system including a nonvolatile memory, and to a control method.
Background art
In recent years, storage systems including a nonvolatile memory have become widespread.
As one such storage system, a solid state drive (SSD) based on NAND flash technology is known.
Recently, new interfaces between a host and storage have begun to be proposed.
However, since control of a NAND flash memory is generally complex, appropriate role sharing between the host and the storage (storage system) must be considered when realizing a new interface intended to improve I/O (input/output) performance.
Summary of the invention
Embodiments of the present invention provide a storage system and a control method capable of improving I/O performance.
According to an embodiment, a storage system connectable to a host comprises: a nonvolatile memory including a plurality of blocks, each of which contains a plurality of pages; and a controller electrically connected to the nonvolatile memory and configured to control the nonvolatile memory. When the controller receives from the host a write request designating a first logical address and a first block number, the controller determines a first location in the first block (the block having the first block number) to which data from the host should be written, writes the data from the host to the first location in the first block, and notifies the host of either the in-block physical address indicating the first location, or the set of the first logical address, the first block number, and the in-block physical address. When the controller receives from the host a control command designating a copy source block number and a copy destination block number for garbage collection of the nonvolatile memory, the controller selects, from the plurality of blocks, a second block having the copy source block number and a third block having the copy destination block number, determines a copy destination location in the third block to which valid data stored in the second block should be written, copies the valid data to the copy destination location in the third block, and notifies the host of the logical address of the valid data, the copy destination block number, and the in-block physical address indicating the copy destination location.
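As an illustration only (not part of the patent disclosure), the role sharing just summarized can be sketched in a few lines of Python. All class, method, and field names here are invented, and sizes are arbitrary: the host picks the block, the device picks the location inside the block and reports it back, and the host records that address in its lookup table.

```python
# Illustrative sketch of the summarized write flow; names and sizes invented.

PAGES_PER_BLOCK = 64  # arbitrary; a real device would report its block size

class FlashStorageDevice:
    """Device side: decides the location *within* a host-chosen block."""
    def __init__(self, num_blocks):
        self.write_pointer = {b: 0 for b in range(num_blocks)}  # next free in-block offset
        self.blocks = {b: {} for b in range(num_blocks)}        # offset -> (lba, data)

    def write(self, lba, block_no, data):
        """Write request: host supplies logical address and block number.
        Returns the in-block physical address the device chose."""
        offset = self.write_pointer[block_no]
        if offset >= PAGES_PER_BLOCK:
            raise IOError("block full")
        self.blocks[block_no][offset] = (lba, data)  # logical address stored with data
        self.write_pointer[block_no] = offset + 1
        return offset

    def read(self, block_no, offset):
        """Read request: host supplies the full physical address."""
        return self.blocks[block_no][offset][1]

class Host:
    """Host side ("global FTL"): owns the logical-to-physical lookup table."""
    def __init__(self, device):
        self.device = device
        self.lut = {}  # lba -> (block number, in-block offset)

    def write(self, lba, block_no, data):
        offset = self.device.write(lba, block_no, data)
        self.lut[lba] = (block_no, offset)  # record the device-chosen address

    def read(self, lba):
        block_no, offset = self.lut[lba]
        return self.device.read(block_no, offset)
```

For example, two successive writes directed at block 3 land at in-block offsets 0 and 1, and the host's lookup table ends up holding exactly the addresses the device returned.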
Brief description of the drawings
Fig. 1 is a block diagram showing the relationship between a host and the storage system (flash storage device) of an embodiment.
Fig. 2 is a diagram for explaining role sharing between a conventional SSD and a host, and role sharing between the flash storage device of the embodiment and a host.
Fig. 3 is a block diagram showing a configuration example of a computer system in which data transfer between a plurality of hosts and a plurality of flash storage devices is executed via a network device.
Fig. 4 is a block diagram showing a configuration example of the storage system of the embodiment.
Fig. 5 is a block diagram showing the relationship between a NAND interface and a plurality of NAND flash memory dies provided in the storage system of the embodiment.
Fig. 6 is a diagram showing a configuration example of a superblock constructed from a set of a plurality of blocks.
Fig. 7 is a diagram for explaining a data write operation in which the host designates a logical address and a block number and the storage system of the embodiment determines an in-block physical address (in-block offset), and a data read operation in which the host designates a block number and an in-block physical address (in-block offset).
Fig. 8 is a diagram for explaining a write command applied to the storage system of the embodiment.
Fig. 9 is a diagram for explaining a response to the write command of Fig. 8.
Fig. 10 is a diagram for explaining a Trim command applied to the storage system of the embodiment.
Fig. 11 is a diagram for explaining the block number and offset that represent a physical address.
Fig. 12 is a diagram for explaining a write operation executed according to a write command.
Fig. 13 is a diagram for explaining a write operation that skips a defective page.
Fig. 14 is a diagram for explaining another example of the write operation that skips a defective page.
Fig. 15 is a diagram for explaining an operation of writing a pair of a logical address and data to a page in a block.
Fig. 16 is a diagram for explaining an operation of writing data to the user data area of a page in a block and writing the logical address of that data to the redundant area of the page.
Fig. 17 is a diagram for explaining the relationship between block numbers and offsets in a case where a superblock is used.
Fig. 18 is a diagram for explaining a maximum block number get command applied to the storage system of the embodiment.
Fig. 19 is a diagram for explaining a response to the maximum block number get command.
Fig. 20 is a diagram for explaining a block size get command applied to the storage system of the embodiment.
Fig. 21 is a diagram for explaining a response to the block size get command.
Fig. 22 is a diagram for explaining a block allocate command (block allocation request) applied to the storage system of the embodiment.
Fig. 23 is a diagram for explaining a response to the block allocate command.
Fig. 24 is a sequence chart showing block information acquisition processing executed by the host and the storage system of the embodiment.
Fig. 25 is a sequence chart showing the sequence of write processing executed by the host and the storage system of the embodiment.
Fig. 26 is a diagram showing a data update operation in which update data for already-written data is written.
Fig. 27 is a diagram for explaining an operation of updating a block management table managed by the storage system of the embodiment.
Fig. 28 is a diagram for explaining an operation of updating a lookup table (logical-to-physical address translation table) managed by the host.
Fig. 29 is a diagram for explaining an operation of updating the block management table according to a notification from the host indicating the block number and physical address corresponding to data to be invalidated.
Fig. 30 is a diagram for explaining a read command applied to the storage system of the embodiment.
Fig. 31 is a diagram for explaining a read operation executed by the storage system of the embodiment.
Fig. 32 is a diagram for explaining an operation of reading data portions respectively stored at different physical storage locations according to a read command from the host.
Fig. 33 is a sequence chart showing the sequence of read processing executed by the host and the storage system of the embodiment.
Fig. 34 is a diagram for explaining a garbage collection (GC) control command applied to the storage system of the embodiment.
Fig. 35 is a diagram for explaining a GC callback command applied to the storage system of the embodiment.
Fig. 36 is a sequence chart showing the procedure of a garbage collection (GC) operation executed by the host and the storage system of the embodiment.
Fig. 37 is a diagram for explaining an example of a data copy operation executed for garbage collection (GC).
Fig. 38 is a diagram for explaining the contents of the host's lookup table updated according to the result of the data copy operation of Fig. 37.
Detailed description
Hereinafter, embodiments will be described with reference to the drawings.
First, the configuration of a computer system including a storage system according to one embodiment will be described with reference to Fig. 1.
The storage system is a semiconductor storage device configured to write data to a nonvolatile memory and to read data from the nonvolatile memory. This storage system is realized as a flash storage device 3 based on NAND flash technology.
The computer system may include a host (host device) 2 and a plurality of flash storage devices 3. The host 2 may be a server configured to use a flash array composed of the plurality of flash storage devices 3 as storage. The host (server) 2 and the plurality of flash storage devices 3 are interconnected via an interface 50 (internal interconnection). As the interface 50 for this internal interconnection, PCI Express (PCIe) (registered trademark), NVM Express (NVMe) (registered trademark), Ethernet (registered trademark), NVMe over Fabrics (NVMeOF), or the like can be used, although the interface is not limited to these.
A typical example of a server functioning as the host 2 is a server in a data center.
In the case where the host 2 is realized by a server in a data center, the host (server) 2 may be connected to a plurality of end-user terminals (clients) 61 via a network 51. The host 2 can provide various services to these end-user terminals 61.
Examples of services that can be provided by the host (server) 2 include (1) Platform as a Service (PaaS), which supplies a system operating platform to each client (each end-user terminal 61), and (2) Infrastructure as a Service (IaaS), which supplies infrastructure such as virtual servers to each client (each end-user terminal 61).
A plurality of virtual machines may be executed on the physical server functioning as the host (server) 2. Each of the virtual machines running on the host (server) 2 can function as a virtual server configured to provide various services to the corresponding clients (end-user terminals 61).
The host (server) 2 includes a storage management function of managing the plurality of flash storage devices 3 constituting the flash array, and a front-end function of providing various services, including storage access, to each of the end-user terminals 61.
In a conventional SSD, the block/page hierarchical structure of the NAND flash memory is hidden by a flash translation layer (FTL) in the SSD. That is, the FTL of the conventional SSD has (1) a function of managing the mapping between each logical address and each physical address of the NAND flash memory by using a lookup table functioning as a logical-to-physical address translation table, (2) a function of hiding page-unit read/write operations and block-unit erase operations, (3) a function of executing garbage collection (GC) of the NAND flash memory, and the like. The mapping between the logical addresses and the physical addresses of the NAND flash memory is not visible from the host. The block/page structure of the NAND flash memory is also not visible from the host.
On the other hand, a kind of address translation (application-level address translation) is sometimes also executed in the host. This address translation uses an application-level address translation table to manage the mapping between each application-level logical address and each logical address of the SSD. In addition, in order to eliminate fragmentation occurring in the logical address space of the SSD, a kind of GC (application-level GC) that rearranges data in that logical address space is also executed in the host.
However, in a redundant configuration in which the host and the SSD each hold an address translation table (the SSD holds the lookup table functioning as a logical-to-physical address translation table, and the host holds the application-level address translation table), enormous memory resources are consumed just to store these tables. Furthermore, the double address translation, comprising address translation on the host side and address translation on the SSD side, is also a factor that degrades I/O performance.
Furthermore, the application-level GC on the host side is a factor that increases the amount of data written to the SSD to a multiple (for example, about twice) of the actual user data amount. This increase in the write amount, combined with the write amplification of the SSD itself, lowers the storage performance of the system as a whole and also shortens the life of the SSD.
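A back-of-the-envelope illustration (ours, not the patent's) of why these two layers compound: data rewritten by application-level GC is itself subject to the SSD's internal write amplification, so the two factors multiply.

```python
# Rough arithmetic sketch: host-side application-level GC and device-side
# write amplification compound multiplicatively. Factor values are examples.

def total_nand_writes(user_data_gib, app_gc_factor, device_waf):
    """Physical NAND writes resulting from user_data_gib of user writes,
    after application-level GC rewrites (app_gc_factor) and the SSD's
    internal write amplification (device_waf)."""
    return user_data_gib * app_gc_factor * device_waf

# 1 GiB of user data, app-level GC doubling the writes, and a device write
# amplification factor of 3 together yield about 6 GiB of NAND writes.
```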
In order to eliminate such problems, a countermeasure of moving all the functions of the conventional SSD's FTL to the host is also conceivable.
However, to implement this countermeasure, the host must directly handle the blocks and pages of the NAND flash memory. In NAND flash memory, restrictions on the page write order exist, so it is difficult for the host to handle pages directly. In addition, in NAND flash memory, a block may include a defective page (bad page). Handling bad pages is even more difficult for the host.
Therefore, in the present embodiment, the role of the FTL is shared between the host 2 and the flash storage device 3. Roughly speaking, the host 2 manages a lookup table functioning as a logical-to-physical address translation table and designates only the block number of the block to which data should be written and the logical address corresponding to that data; the location within the block to which the data should be written (the write destination location) is determined by the flash storage device 3. The in-block physical address indicating the determined write destination location is notified from the flash storage device 3 to the host 2.
In this way, the host 2 handles only blocks, and the locations within a block (for example, pages, and positions within pages) are handled by the flash storage device 3.
When the host 2 needs to write data to the flash storage device 3, the host 2 selects a block number (or requests the flash storage device 3 to allocate a free block), and sends to the flash storage device 3 a write request (write command) designating a logical address and the block number of the selected block (or the block number of the allocated block notified by the flash storage device 3). The flash storage device 3 writes the data from the host 2 to the block having the designated block number. In this case, the flash storage device 3 determines a location within that block (the write destination location) and writes the data from the host 2 to it. Then, the flash storage device 3 notifies the host 2 of the in-block physical address indicating the write destination location as a response (return value) to the write request. Hereinafter, the FTL function moved to the host 2 is referred to as the global FTL.
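The allocate-then-write path described above can be sketched as follows. This is our own simplified model: the command handler names (`handle_block_allocate`, `handle_write`) and the free-block pool are assumptions, not the patent's actual interface.

```python
# Sketch of two device-side requests: a block allocation request that
# returns a free block number, and a write request that returns the
# device-chosen in-block physical address. Names invented.

class Device:
    def __init__(self, num_blocks):
        self.free_blocks = list(range(num_blocks))  # simple free block pool
        self.next_offset = {}                       # block -> next free offset
        self.data = {}                              # (block, offset) -> (lba, payload)

    def handle_block_allocate(self):
        """Block allocation request: the device picks a free block and
        notifies the host of its block number."""
        block_no = self.free_blocks.pop(0)
        self.next_offset[block_no] = 0
        return block_no

    def handle_write(self, lba, block_no, payload):
        """Write request naming a logical address and a block number;
        the return value is the device-chosen in-block offset."""
        offset = self.next_offset[block_no]
        self.next_offset[block_no] = offset + 1
        self.data[(block_no, offset)] = (lba, payload)
        return offset
```

On each returned offset the host would record `lba -> (block_no, offset)` in its lookup table, exactly as in the case where the host selected the block number itself.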
The global FTL of the host 2 may have a function of executing a storage service, a wear control function, a function for realizing high availability, a de-duplication function of preventing a plurality of duplicate data portions having the same content from being stored in the storage, a garbage collection (GC) block selection function, a QoS control function, and the like. The QoS control function includes a function of determining the access unit for each QoS domain (or for each block). The access unit indicates the minimum data size (grain) that the host 2 can write/read. The flash storage device 3 supports one or more access units (grains); when the flash storage device 3 supports a plurality of access units, the host 2 can indicate to the flash storage device 3 the access unit to be used for each QoS domain (or for each block).
The QoS control function also includes a function for preventing, as much as possible, performance interference between QoS domains. This is essentially a function for maintaining a stable latency.
On the other hand, the flash storage device 3 can execute low-level abstraction (LLA). LLA is a function for abstracting the NAND flash memory. LLA includes a function of hiding defective pages (bad pages) and a function of complying with the page write order restriction. LLA also includes a GC execution function. The GC execution function copies the valid data in a copy source block (GC source block) designated by the host 2 to a copy destination block (GC destination block) designated by the host 2. The GC execution function of the flash storage device 3 determines the location (copy destination location) in the GC destination block to which the valid data should be written, and copies the valid data in the GC source block to that copy destination location in the GC destination block.
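The GC execution function just described can be sketched as follows (a simplified model with invented names; a block is modeled as a dict from in-block offset to a (logical address, data) pair). The host names the GC source and GC destination blocks; the device picks each copy destination location and reports it back so the host can update its lookup table.

```python
# Sketch of device-side GC copying: valid data moves from the host-named
# source block to the host-named destination block, and the device reports
# (logical address, new in-block offset) records for the host's LUT update.

def gc_copy(source_block, dest_block, valid_lbas):
    """Copy valid data from source_block into dest_block.
    Returns the notification records destined for the host."""
    records = []
    dest_offset = len(dest_block)  # device-chosen write pointer in the GC destination
    for offset in sorted(source_block):
        lba, data = source_block[offset]
        if lba not in valid_lbas:
            continue  # invalidated data is not copied
        dest_block[dest_offset] = (lba, data)
        records.append((lba, dest_offset))
        dest_offset += 1
    return records
```

On receiving the records, the host would rewrite each affected LUT entry to point at the destination block number and the reported in-block offset, without ever touching pages itself.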
Fig. 2 shows the role sharing between a conventional SSD and a host, and the role sharing between the flash storage device 3 of the present embodiment and the host 2.
The left part of Fig. 2 shows the hierarchical structure of an entire computer system including a conventional SSD and a host that executes a virtual disk service.
In the host (server), a virtual machine service 101 for providing a plurality of virtual machines to a plurality of end users is executed. In each virtual machine of the virtual machine service 101, an operating system and a user application 102 used by the corresponding end user are executed.
In addition, in the host (server), a plurality of virtual disk services 103 corresponding to the plurality of user applications 102 are executed. Each virtual disk service 103 allocates a part of the capacity of the storage resource in the conventional SSD as a storage resource (virtual disk) for the corresponding user application 102. In each virtual disk service 103, application-level address translation, which translates application-level logical addresses into SSD logical addresses using an application-level address translation table, is also executed. Furthermore, application-level GC 104 is also executed in the host.
Transmission of commands from the host (server) to the conventional SSD, and return of command completion responses from the conventional SSD to the host (server), are performed via I/O queues 200 existing in each of the host (server) and the conventional SSD.
The conventional SSD includes a write buffer (WB) 301, a lookup table (LUT) 302, a garbage collection function 303, and a NAND flash memory (NAND flash array) 304. The conventional SSD manages only one lookup table (LUT) 302, and the resources of the NAND flash memory (NAND flash array) 304 are shared by the plurality of virtual disk services 103.
In this composition, because comprising useless in the application-level GC104 and existing type SSD under virtual disk service 103 The duplicate GC of information collecting function 303 (LUT grades of GC) causes write-in amplification to become larger.In addition, in existing type SSD, it may Adjacent interference is led to the problem of, i.e., due to the increasing from a certain end user or the data writing of a certain virtual disk service 103 Add and increase the frequency of GC, thus causes the I/O performance for other end users or other virtual disk services 103 bad Change.
In addition, because existing comprising in the application-level address conversion table and existing type SSD in each virtual disk service LUT302 repetition resource, and consume more memory resource.
The right part of FIG. 2 shows the hierarchical structure of an entire computer system including the flash storage device 3 of the present embodiment and the host 2.
In the host (server) 2, a virtual machine service 401 for providing a plurality of virtual machines to a plurality of end users is executed. In each virtual machine of the virtual machine service 401, an operating system and a user application 402 used by the corresponding end user are executed.
In addition, in the host (server) 2, a plurality of I/O services 403 corresponding to the plurality of user applications 402 are executed. These I/O services 403 may include an LBA (Logical Block Address)-based block I/O service, a key-value store service, and the like. Each I/O service 403 includes a look-up table (LUT) 411 that manages the mapping between each of the logical addresses and each of the physical addresses of the flash storage device 3. Here, a logical address is an identifier that can identify data to be accessed. The logical address may be a logical block address (LBA) designating a location on a logical address space, a key (tag) of a key-value store, or a hash value of such a key.
In the LBA-based block I/O service, a LUT 411 that manages the mapping between each of the logical addresses (LBAs) and each of the physical addresses of the flash storage device 3 may be used.
In the key-value store service, a LUT 411 may be used that manages the mapping between each of the logical addresses (that is, tags such as keys) and each of the physical addresses indicating the physical storage locations in the flash storage device 3 that store the data corresponding to these logical addresses (that is, tags such as keys). In the LUT 411, the correspondence among a tag, the physical address at which the data identified by the tag is stored, and the data length of the data may be managed.
Each end user can select the addressing method to be used (LBA, a key of the key-value store, or the like).
These LUTs 411 do not translate each of the logical addresses from the user applications 402 into each of the logical addresses of the flash storage device 3, but translate each of the logical addresses from the user applications 402 directly into each of the physical addresses of the flash storage device 3. That is, each of these LUTs 411 is a table obtained by integrating (merging) the table for translating a logical address of the flash storage device 3 into a physical address and the application-level address translation table.
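As an illustrative aid (not part of the patent disclosure), the merged LUT 411 can be pictured as a single map from a logical address directly to a (block number, in-block offset) pair, with no intermediate SSD logical address. All class and variable names below are assumptions made for the sketch:

```python
# Minimal sketch of the merged LUT 411: one translation step from an
# application-level logical address to a device physical address.
class MergedLUT:
    def __init__(self):
        self.table = {}  # logical address -> (block number, in-block offset)

    def update(self, logical_addr, block_no, offset):
        # Called after the device reports where it placed the data.
        self.table[logical_addr] = (block_no, offset)

    def lookup(self, logical_addr):
        # Single translation step: no intermediate SSD logical address.
        return self.table[logical_addr]

lut = MergedLUT()
lut.update("LBAx", 7, 5)     # data for LBAx placed at block 7, offset +5
print(lut.lookup("LBAx"))    # -> (7, 5)
```

Because the host owns this table, the device never needs its own full logical-to-physical table for these addresses.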
In addition, each I/O service 403 includes a GC block selection function. The GC block selection function can manage the valid data amount of each block by using the corresponding LUT, and can thereby select a GC source block.
In the host (server) 2, an I/O service 403 may exist for each QoS domain. An I/O service 403 belonging to a certain QoS domain may manage the mapping between each of the logical addresses used by the user application 402 in the corresponding QoS domain and each of the block numbers of the block group belonging to the resource group allocated to the corresponding QoS domain.
Transmission of a command from the host (server) 2 to the flash storage device 3, and return of a command-completion response or the like from the flash storage device 3 to the host (server) 2, are each performed via an I/O queue 500 existing in each of the host (server) 2 and the flash storage device 3. These I/O queues 500 may also be classified into a plurality of queue groups corresponding to the plurality of QoS domains.
The flash storage device 3 includes a plurality of write buffers (WB) 601 corresponding to the plurality of QoS domains, a plurality of garbage collection (GC) functions 602 corresponding to the plurality of QoS domains, and a NAND flash memory (NAND flash array) 603.
In the configuration shown in the right part of FIG. 2, since the upper layer (host 2) can recognize the block boundaries, it can write user data to each block while taking the block boundaries/block sizes into account. That is, the host 2 can recognize each block of the NAND flash memory (NAND flash array) 603, and can thereby perform control such as writing data to one whole block at a time, or invalidating the whole of the data in one block by deallocating or updating it. As a result, a situation in which valid data and invalid data are mixed in one block can be made less likely to occur. Therefore, the frequency with which GC needs to be executed can be reduced. By reducing the frequency of GC, the write amplification is lowered, and the improvement of the performance of the flash storage device 3 and the maximization of the life of the flash storage device 3 can be realized. Thus, a configuration in which the upper layer (host 2) can recognize the block numbers is useful.
On the other hand, the location in the block to which data should be written is determined not by the upper layer (host 2) but by the flash storage device 3. Therefore, bad pages can be hidden, and the page write order constraint can be complied with.
FIG. 3 shows a modification of the system configuration of FIG. 1.
In FIG. 3, data transfer between a plurality of hosts 2A and a plurality of flash storage devices 3 is performed via a network device (here, a network switch 1).
That is, in the computer system of FIG. 3, the storage management function of the host (server) 2 of FIG. 1 is moved to a manager 2B, and the front-end function of the host (server) 2 is moved to the plurality of hosts (hosts for end-user services) 2A.
The manager 2B manages the plurality of flash storage devices 3, and allocates the storage resources of these flash storage devices 3 to the respective hosts (hosts for end-user services) 2A in response to requests from the respective hosts (hosts for end-user services) 2A.
Each host (host for end-user services) 2A is connected to one or more end-user terminals 61 via a network. Each host (host for end-user services) 2A manages a look-up table (LUT) which is the above-described integrated (merged) logical-to-physical address translation table. Each host (host for end-user services) 2A uses its own LUT to manage only the mapping between each of the logical addresses used by the corresponding end user and each of the physical addresses of the resources allocated to itself. Therefore, this configuration allows the system to be easily scaled out.
The global FTL of each host 2A has a function of managing the look-up table (LUT), a function for realizing high availability, a QoS control function, a GC block selection function, and the like.
The manager 2B is a device (computer) dedicated to managing the plurality of flash storage devices 3. The manager 2B has a global resource reservation function of reserving a storage resource of the capacity amount requested by each host 2A. Furthermore, the manager 2B has a wear monitoring function of monitoring the degree of wear of each flash storage device 3, a NAND resource allocation function of allocating the reserved storage resources (NAND resources) to the respective hosts 2A, a QoS control function, a global clock management function, and the like.
The low-level abstraction (LLA) of each flash storage device 3 has a function of hiding bad pages, a function of complying with the page write order constraint, a function of managing the write buffer, a GC execution function, and the like.
According to the system configuration of FIG. 3, since the management of each flash storage device 3 is performed by the manager 2B, each host 2A only needs to perform an operation of sending an I/O request to the one or more flash storage devices 3 allocated to itself, and an operation of receiving responses from the flash storage devices 3. That is, the data transfer between the plurality of hosts 2A and the plurality of flash storage devices 3 is performed only via the network switch 1, and the manager 2B is not involved in the data transfer. In addition, as described above, the contents of the LUTs managed by the respective hosts 2A are independent of each other. Therefore, the number of hosts 2A can easily be increased, and a scale-out system configuration can be realized.
FIG. 4 shows a configuration example of the flash storage device 3.
The flash storage device 3 includes a controller 4 and a nonvolatile memory (NAND flash memory) 5. The flash storage device 3 may also include a random access memory, for example, a DRAM (Dynamic Random Access Memory) 6.
The NAND flash memory 5 includes a memory cell array including a plurality of memory cells arranged in a matrix. The NAND flash memory 5 may be either a NAND flash memory of a two-dimensional structure or a NAND flash memory of a three-dimensional structure.
The memory cell array of the NAND flash memory 5 includes a plurality of blocks BLK0 to BLKm-1. Each of the blocks BLK0 to BLKm-1 is composed of a large number of pages (here, pages P0 to Pn-1). The blocks BLK0 to BLKm-1 function as erase units. A block is also sometimes called an "erase block", a "physical block", or a "physical erase block". Each of the pages P0 to Pn-1 includes a plurality of memory cells connected to the same word line. The pages P0 to Pn-1 are the units of the data write operation and the data read operation.
The controller 4 is electrically connected to the NAND flash memory 5, which is the nonvolatile memory, via a NAND interface 13 such as Toggle or Open NAND Flash Interface (ONFI). The controller 4 is a memory controller (control circuit) configured to control the NAND flash memory 5.
As shown in FIG. 5, the NAND flash memory 5 includes a plurality of NAND flash memory dies. Each NAND flash memory die is a nonvolatile memory die including a memory cell array, which includes a plurality of blocks BLK, and a peripheral circuit that controls the memory cell array. Each NAND flash memory die can operate independently. Therefore, the NAND flash memory dies function as units of parallel operation. A NAND flash memory die is also called a "NAND flash memory chip" or a "nonvolatile memory chip". FIG. 5 illustrates a case where 16 channels Ch1, Ch2, ... Ch16 are connected to the NAND interface 13, and the same number (for example, 2 dies per channel) of NAND flash memory dies are connected to each of the channels Ch1, Ch2, ... Ch16. Each channel includes a communication line (memory bus) for communicating with the corresponding NAND flash memory dies.
The controller 4 controls the NAND flash memory dies #1 to #32 via the channels Ch1, Ch2, ... Ch16. The controller 4 can drive the channels Ch1, Ch2, ... Ch16 simultaneously.
The 16 NAND flash memory dies #1 to #16 connected to the channels Ch1 to Ch16 may be organized as a first bank, and the remaining 16 NAND flash memory dies #17 to #32, likewise connected to the channels Ch1 to Ch16, may be organized as a second bank. A bank functions as a unit for operating a plurality of memory modules in parallel by bank interleaving. In the configuration example of FIG. 5, up to 32 NAND flash memory dies can be operated in parallel by the 16 channels and the bank interleaving using the 2 banks.
In the present embodiment, the controller 4 may manage a plurality of blocks (hereinafter referred to as superblocks) each composed of a plurality of blocks BLK, and may execute the erase operation in units of superblocks.
The superblock is not limited to this, but may include a total of 32 blocks BLK selected one by one from the NAND flash memory dies #1 to #32. Note that each of the NAND flash memory dies #1 to #32 may have a multi-plane configuration. For example, in a case where each of the NAND flash memory dies #1 to #32 has a multi-plane configuration including 2 planes, one superblock may include a total of 64 blocks BLK selected one by one from the 64 planes corresponding to the NAND flash memory dies #1 to #32. FIG. 6 illustrates a case where one superblock SB is composed of a total of 32 blocks BLK (the blocks BLK surrounded by the thick frame in FIG. 5) selected one by one from the NAND flash memory dies #1 to #32.
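As an illustrative aid, the one-block-per-die composition can be sketched as follows. Picking the block with the same local index from every die is an assumption made for the sketch; the disclosure only requires one block per die (or per plane):

```python
# Sketch: compose one superblock from 32 dies by selecting one block
# per die (cf. FIG. 6). Returns (die number, block number) pairs.
NUM_DIES = 32

def build_superblock(sb_index, blocks_per_die):
    assert sb_index < blocks_per_die
    # Assumption for illustration: use the same local block index on every die.
    return [(die, sb_index) for die in range(1, NUM_DIES + 1)]

sb0 = build_superblock(0, blocks_per_die=1024)
print(len(sb0))  # 32 physical blocks form one superblock
```

An erase of this superblock would then erase all 32 member blocks, one on each die, which can proceed in parallel.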
As shown in FIG. 4, the controller 4 includes a host interface 11, a CPU (Central Processing Unit) 12, the NAND interface 13, a DRAM interface 14, and the like. The CPU 12, the NAND interface 13, and the DRAM interface 14 are interconnected via a bus 10.
The host interface 11 is a host interface circuit configured to perform communication with the host 2. The host interface 11 may be, for example, a PCIe controller (NVMe controller). The host interface 11 receives various requests (commands) from the host 2. These requests (commands) include a write request (write command), a read request (read command), and other various requests (commands).
The CPU 12 is a processor configured to control the host interface 11, the NAND interface 13, and the DRAM interface 14. In response to power-on of the flash storage device 3, the CPU 12 loads a control program (firmware) from the NAND flash memory 5 or a ROM (Read Only Memory), not shown, into the DRAM 6, and then executes the firmware, thereby performing various processes. Note that the firmware may instead be loaded onto an SRAM (Static Random Access Memory), not shown, in the controller 4. The CPU 12 can perform command processing for processing the various commands from the host 2, and the like. The operation of the CPU 12 is controlled by the firmware executed by the CPU 12. Note that a part or all of the command processing may be performed by dedicated hardware in the controller 4.
The CPU 12 can function as a write operation control unit 21, a read operation control unit 22, and a GC operation control unit 23. In the write operation control unit 21, the read operation control unit 22, and the GC operation control unit 23, the application programming interface (API) for realizing the system configuration shown in the right part of FIG. 2 is implemented.
The write operation control unit 21 receives, from the host 2, a write request (write command) designating a block number and a logical address. The logical address is an identifier that can identify the data (user data) to be written; it may be, for example, an LBA, a tag such as a key of a key-value store, or a hash value of such a key. The block number is an identifier designating the block to which the data should be written. As the block number, any of various values that can uniquely identify an arbitrary one of the plurality of blocks can be used. The block designated by the block number may be either a physical block or the above-described superblock. When receiving the write command, the write operation control unit 21 first determines the location (write destination location) in the block having the designated block number (the write destination block) to which the data from the host 2 should be written. Next, the write operation control unit 21 writes the data (write data) from the host 2 to the write destination location of the write destination block. In this case, the write operation control unit 21 can write not only the data from the host 2 but both the data and the logical address of the data to the write destination block.
Then, the write operation control unit 21 notifies the host 2 of the in-block physical address indicating the above-described write destination location of the write destination block. The in-block physical address is expressed by an in-block offset indicating the write destination location in the write destination block.
In this case, the in-block offset indicates the offset from the beginning of the write destination block to the write destination location, that is, the offset of the write destination location relative to the beginning of the write destination block. The size of the offset from the beginning of the write destination block to the write destination location is expressed by a multiple of a granularity (grain) having a size different from the page size. The grain is the above-described access unit. The maximum value of the size of the grain is limited to the block size. In other words, the in-block offset expresses the offset from the beginning of the write destination block to the write destination location by a multiple of the grain having a size different from the page size.
The grain may have a size smaller than the page size. For example, in a case where the page size is 16K bytes, the size of the grain may be 4K bytes. In this case, a plurality of offset locations each having a size of 4K bytes are defined in a certain block. The in-block offset corresponding to the first offset location in the block is, for example, 0, the in-block offset corresponding to the next offset location in the block is, for example, 1, and the in-block offset corresponding to the offset location after that in the block is, for example, 2.
Alternatively, the grain may have a size larger than the page size. For example, the grain may have a size that is several times the page size. In a case where the page size is 16K bytes, the grain may have a size of 32K bytes.
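The offset numbering described above, for the smaller-than-page case, can be sketched as simple arithmetic (16K-byte page and 4K-byte grain assumed, as in the example; the function name is illustrative):

```python
# Sketch: consecutive grain-sized offsets counted from the block start,
# assuming a 16K-byte page and a 4K-byte grain (4 grains per page).
PAGE_SIZE = 16 * 1024
GRAIN = 4 * 1024
GRAINS_PER_PAGE = PAGE_SIZE // GRAIN  # = 4

def in_block_offset(page_no, grain_in_page):
    return page_no * GRAINS_PER_PAGE + grain_in_page

print(in_block_offset(0, 0))  # first 4KB region of page 0 -> 0
print(in_block_offset(0, 1))  # second 4KB region of page 0 -> 1
print(in_block_offset(1, 0))  # first 4KB region of page 1 -> 4
```

This numbering is what lets the host address data below page granularity without knowing the page size itself.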
In this way, the write operation control unit 21 itself determines the write destination location in the block having the block number from the host 2, and then writes the write data from the host 2 to that write destination location in the block. Then, the write operation control unit 21 notifies the host 2 of the in-block physical address (in-block offset) indicating the write destination location as a response (return value) to the write request. Alternatively, the write operation control unit 21 may notify the host 2 not only of the in-block physical address (in-block offset) but of the set of the logical address, the block number, and the in-block physical address (in-block offset).
Therefore, the flash storage device 3 can let the host 2 handle the block numbers while hiding the page write order constraint, the bad pages, the page size, and the like.
As a result, the host 2 can recognize the block boundaries and can manage which user data exists at which block number, without being aware of the page write order constraint, the bad pages, or the page size.
When receiving from the host 2 a read request (read command) designating a physical address (that is, a block number and an in-block offset), the read operation control unit 22 reads data, based on the block number and the in-block offset, from the physical storage location to be read in the block to be read. The block to be read is specified by the block number. The physical storage location to be read in the block is specified by the in-block offset. By using the in-block offset, the host 2 does not need to handle the page sizes, which differ from one NAND flash memory generation to another.
In order to obtain the physical storage location to be read, the read operation control unit 22 may first divide the in-block offset by the number of grains per page (in a case where the page size is 16K bytes and the grain is 4K bytes, the number of grains per page is 4), and then determine the quotient and the remainder of the division as the page number to be read and the in-page offset to be read, respectively.
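The division described above is a plain quotient/remainder decomposition, sketched here under the 16K-byte-page, 4K-byte-grain assumption:

```python
# Sketch: locate the page and in-page position of an in-block offset.
# quotient -> page number to be read, remainder -> in-page offset.
GRAINS_PER_PAGE = 4  # 16K-byte page / 4K-byte grain (assumed)

def locate(in_block_offset):
    page_no, in_page_offset = divmod(in_block_offset, GRAINS_PER_PAGE)
    return page_no, in_page_offset

print(locate(5))  # -> (1, 1): second 4KB region of page 1
print(locate(4))  # -> (1, 0): first 4KB region of page 1
```

Only the controller needs this constant, which is why the host can stay ignorant of the page size.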
When receiving from the host 2 a GC control command designating a copy source block number (GC source block number) and a copy destination block number (GC destination block number) for the garbage collection of the NAND flash memory 5, the GC operation control unit 23 selects, from the plurality of blocks of the NAND flash memory 5, the block having the designated copy source block number and the block having the designated copy destination block number as the copy source block (GC source block) and the copy destination block (GC destination block). The GC operation control unit 23 determines the copy destination locations in the GC destination block to which the valid data stored in the selected GC source block should be written, and copies the valid data to those copy destination locations in the GC destination block.
Then, the GC operation control unit 23 notifies the host 2 of the logical address of the valid data, the copy destination block number, and the in-block physical address (in-block offset) indicating the copy destination location in the GC destination block.
The management of valid data/invalid data may be performed using the block management table 32. The block management table 32 may exist, for example, for each block. In the block management table 32 corresponding to a certain block, bitmap flags indicating the validity/invalidity of each piece of data in the block are stored. Here, valid data refers to data that is linked from a logical address as the latest data and that may subsequently be read by the host 2. Invalid data refers to data that no longer has any possibility of being read by the host 2. For example, data associated with a certain logical address is valid data, and data associated with no logical address is invalid data.
As described above, the GC operation control unit 23 determines the locations (copy destination locations) in the copy destination block (GC destination block) to which the valid data stored in the copy source block (GC source block) should be written, and copies the valid data to the determined locations (copy destination locations) of the copy destination block (GC destination block). In this case, the GC operation control unit 23 may copy both the valid data and the logical address of the valid data to the copy destination block (GC destination block).
In the present embodiment, as described above, the write operation control unit 21 can write both the data (write data) from the host 2 and the logical address from the host 2 to the write destination block. Therefore, the GC operation control unit 23 can easily obtain the logical address of each piece of data in the copy source block (GC source block) from the copy source block (GC source block) itself, and can therefore easily notify the host 2 of the logical addresses of the copied valid data.
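The GC copy step described above can be pictured as follows: because each written entry carries its logical address alongside the data, the device can report, for each copied entry, the triple (logical address, destination block number, new in-block offset). Data layout and names are assumptions made for the sketch:

```python
# Sketch of the GC copy: src_block holds (logical address, data) pairs
# at consecutive offsets; valid_bitmap is the per-block bitmap flags.
def gc_copy(src_block, dst_block, dst_block_no, valid_bitmap):
    notifications = []
    for offset, (logical_addr, data) in enumerate(src_block):
        if not valid_bitmap[offset]:
            continue  # invalid data is not copied
        dst_offset = len(dst_block)  # next free location in destination
        dst_block.append((logical_addr, data))
        notifications.append((logical_addr, dst_block_no, dst_offset))
    return notifications

src = [("LBA0", b"a"), ("LBA1", b"b"), ("LBA2", b"c")]
dst = []
# LBA1 was overwritten elsewhere, so only offsets 0 and 2 remain valid.
print(gc_copy(src, dst, dst_block_no=9, valid_bitmap=[True, False, True]))
# -> [('LBA0', 9, 0), ('LBA2', 9, 1)]
```

The host then updates its LUT 411 entries for LBA0 and LBA2 from the notifications, without the device ever consulting a logical-to-physical table of its own.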
The NAND interface 13 is a memory control circuit configured to control the NAND flash memory 5 under the control of the CPU 12. The DRAM interface 14 is a DRAM control circuit configured to control the DRAM 6 under the control of the CPU 12. A part of the storage region of the DRAM 6 is used for storing a write buffer (WB) 31. Another part of the storage region of the DRAM 6 is used for storing the block management table 32. Note that the write buffer (WB) 31 and the block management table 32 may instead be stored in an SRAM, not shown, in the controller 4.
FIG. 7 shows a data write operation in which the host 2 designates a logical address and a block number and the flash storage device 3 determines the in-block physical address (in-block offset), and a data read operation in which the host 2 designates a block number and an in-block physical address (in-block offset).
The data write operation is performed according to the following procedure.
(1) When the write processing unit 412 of the host 2 needs to write data (write data) to the flash storage device 3, the write processing unit 412 may request the flash storage device 3 to allocate a free block. The controller 4 of the flash storage device 3 includes a block allocator 701 that manages the free block group of the NAND flash memory 5. When the block allocator 701 receives this request (block allocation request) from the write processing unit 412, the block allocator 701 allocates one free block of the free block group to the host 2, and notifies the host 2 of the block number (BLK#) of the allocated block.
Alternatively, in a configuration in which the write processing unit 412 manages the free block group, the write processing unit 412 may itself select the write destination block.
(2) The write processing unit 412 sends to the flash storage device 3 a write request designating the logical address (e.g., LBA) corresponding to the write data and the block number (BLK#) of the write destination block.
(3) The controller 4 of the flash storage device 3 includes a page allocator 702 that allocates a page for a data write. When the page allocator 702 receives the write request, the page allocator 702 determines the in-block physical address (in-block PBA) indicating the write destination location in the block having the block number designated by the write request (the write destination block). The in-block physical address (in-block PBA) can be expressed by the above-described in-block offset (also simply referred to as an offset). The controller 4 writes the write data from the host 2 to the write destination location in the write destination block, based on the block number designated by the write request and the in-block physical address (in-block PBA).
(4) The controller 4 notifies the host 2 of the in-block physical address (in-block PBA) indicating the write destination location as a response to the write request. Alternatively, the controller 4 may notify the host 2 of the set of the logical address (LBA) corresponding to the write data, the block number (BLK#) of the write destination block, and the in-block PBA (offset) indicating the write destination location as the response to the write request. In other words, the controller notifies the host 2 of either the in-block physical address, or the set of the logical address, the block number, and the in-block physical address. In the host 2, the LUT 411 is updated such that the physical address (block number, in-block physical address (in-block offset)) indicating the physical storage location to which the write data has been written is mapped to the logical address of the write data.
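Steps (1) to (4) above can be sketched end to end as follows. The classes and the sequential-allocation policy are assumptions made for illustration; the disclosure only requires that the device, not the host, pick the in-block location:

```python
# Sketch of the write flow: host requests a block, sends (LBA, BLK#),
# device picks the offset, host records the returned physical address.
class Device:
    def __init__(self):
        self.free_blocks = [1, 2, 3]
        self.next_offset = {}          # block number -> next free offset

    def allocate_block(self):          # (1) block allocation request
        blk = self.free_blocks.pop(0)
        self.next_offset[blk] = 0
        return blk

    def write(self, lba, blk, length): # (2)+(3) device picks the offset
        off = self.next_offset[blk]
        self.next_offset[blk] += length
        return blk, off                # (4) response: physical address

dev = Device()
lut = {}
blk = dev.allocate_block()
blk, off = dev.write("LBAx", blk, length=4)
lut["LBAx"] = (blk, off)               # host-side LUT 411 update
print(lut["LBAx"])  # -> (1, 0)
```

Note that the host never names a page or an offset on the write path; it only learns the offset from the response.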
The data read operation is performed according to the following procedure.
(1)' When the host 2 needs to read data from the flash storage device 3, the host 2 refers to the LUT 411 and obtains from the LUT 411 the physical address (block number, in-block physical address (in-block offset)) corresponding to the logical address of the data to be read.
(2)' The host 2 sends to the flash storage device 3 a read request designating the obtained block number and in-block physical address (in-block offset). When the controller 4 of the flash storage device 3 receives the read request from the host 2, the controller 4 specifies, based on the block number and the in-block physical address, the block to be read and the physical storage location to be read, and reads the data from that physical storage location in the block to be read.
FIG. 8 shows a write command applied to the flash storage device 3.
The write command is a command that requests the flash storage device 3 to write data. The write command may include a command ID, a block number BLK#, a logical address, a length, and the like.
The command ID is an ID (command code) indicating that this command is a write command; the command ID for the write command is included in the write command.
The block number BLK# is an identifier (block address) that can uniquely identify the block to which the data should be written.
The logical address is an identifier for identifying the write data to be written. As described above, the logical address may be an LBA, a key of a key-value store, or a hash value of such a key. In a case where the logical address is an LBA, the logical address (starting LBA) included in the write command indicates the logical location (first logical location) to which the write data should be written.
The length indicates the length of the write data to be written. This length (data length) may be designated by the number of grains, may be designated by the number of LBAs, or may have its size designated in bytes.
When receiving the write command from the host 2, the controller 4 determines the write destination location in the block having the block number designated by the write command. The write destination location is determined in consideration of the page write order constraint, the bad pages, and the like. The controller 4 then writes the data from the host 2 to that write destination location in the block having the block number designated by the write command.
FIG. 9 shows a response to the write command of FIG. 8.
The response includes an in-block physical address and a length. The in-block physical address indicates the location (physical storage location) in the block to which the data has been written. The in-block physical address can be designated by the in-block offset, as described above. The length indicates the length of the written data. This length (data length) may be designated by the number of grains, may be designated by the number of LBAs, or may have its size designated in bytes.
Alternatively, the response may further include not only the in-block physical address and the length but also the logical address and the block number. The logical address is the logical address included in the write command of FIG. 8. The block number is the block number included in the write command of FIG. 8.
FIG. 10 shows a Trim command applied to the flash storage device 3.
The Trim command is a command including the block number and the in-block physical address (in-block offset) indicating the physical storage location storing data that should be invalidated. That is, the Trim command can designate a physical address, not a logical address such as an LBA. The Trim command includes a command ID, a physical address, and a length.
The command ID is an ID (command code) indicating that this command is a Trim command; the command ID for the Trim command is included in the Trim command.
The physical address indicates the first physical storage location storing the data to be invalidated. In the present embodiment, the physical address is designated by the combination of the block number and the offset (in-block offset).
The length indicates the length of the data to be invalidated. This length (data length) may be designated by the number of grains or in bytes.
Controller 4 indicates having for data each included in each of multiple blocks using the management of block management table 32 Effect/invalid label (bitmap label).The physical store position comprising indicating to be stored with the data that should be invalid is being received from host 2 In the case that the block number set and the Trim of offset (block bias internal) are instructed, controller 4 updates block management table 32, will Label corresponding to the data of physical storage locations corresponding with block number included in Trim instruction and block bias internal (bitmap label) is changed to indicate invalid value.
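The Trim handling described above can be sketched as follows. This is an illustrative model, not the patented implementation: block management table 32 is modeled as one validity bitmap per block, and the class name, `GRAINS_PER_BLOCK`, and method names are assumptions introduced here for clarity.

```python
GRAINS_PER_BLOCK = 32  # hypothetical: e.g. 8 pages x 4 grains per page

class BlockManagementTable:
    def __init__(self):
        # block number -> list of validity flags (1 = valid, 0 = invalid)
        self.bitmaps = {}

    def mark_written(self, block, offset, length):
        # Step S12 of Fig. 25: set the flags for newly written grains.
        bitmap = self.bitmaps.setdefault(block, [0] * GRAINS_PER_BLOCK)
        for i in range(offset, offset + length):
            bitmap[i] = 1

    def trim(self, block, offset, length):
        # Trim(block, offset, length): clear the flags for the trimmed grains.
        bitmap = self.bitmaps.setdefault(block, [0] * GRAINS_PER_BLOCK)
        for i in range(offset, offset + length):
            bitmap[i] = 0
```

Because the Trim command names a physical location directly (block number plus in-block offset), the device needs no logical-to-physical lookup to perform the invalidation.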
Fig. 11 shows the in-block offsets that define in-block physical addresses.
A block number designates one block BLK. As shown in Fig. 11, each block BLK includes a plurality of pages (here, page 0 to page n).
In an example where the page size (the user-data storage area of each page) is 16 KB and the grain size is 4 KB, block BLK is logically divided into 4 × (n + 1) regions.
Offset +0 indicates the first 4 KB region of page 0, offset +1 indicates the second 4 KB region of page 0, offset +2 indicates the third 4 KB region of page 0, and offset +3 indicates the fourth 4 KB region of page 0.
Offset +4 indicates the first 4 KB region of page 1, offset +5 indicates the second 4 KB region of page 1, offset +6 indicates the third 4 KB region of page 1, and offset +7 indicates the fourth 4 KB region of page 1.
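Under the Fig. 11 assumptions (16 KB page, 4 KB grain, so four grains per page), the offset scheme above reduces to simple arithmetic. The function names here are illustrative, not part of the patent:

```python
GRAINS_PER_PAGE = 4  # 16 KB page / 4 KB grain

def to_offset(page, region):
    # In-block offset of the region-th 4 KB region of the given page.
    return page * GRAINS_PER_PAGE + region

def from_offset(offset):
    # Inverse mapping: returns (page number, 4 KB region index in the page).
    return divmod(offset, GRAINS_PER_PAGE)
```

This same divmod is what the read path uses later (step S31 of Fig. 33) to turn an offset back into a page number and an in-page position.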
Fig. 12 shows a write operation executed according to a write command.
Now consider the case where block BLK#1 has been allocated as the write destination block. Controller 4 writes data to block BLK#1 in page units, in the order of page 0, page 1, page 2, ..., page n.
In Fig. 12, consider the case where, in a state where 16 KB of data have already been written to page 0 of block BLK#1, a write command specifying a block number (= BLK#1), a logical address (LBAx), and a length (= 4) is received from host 2. Controller 4 determines page 1 of block BLK#1 as the write destination position and writes the 16 KB of write data received from host 2 to page 1 of block BLK#1. Controller 4 then returns the offset (in-block offset) and the length to host 2 as the response to the write command. In this example, the offset (in-block offset) is +4 and the length is 4. Alternatively, controller 4 may return the logical address, the block number, the offset (in-block offset), and the length to host 2 as the response to the write command. In this example, the logical address is LBAx, the block number is BLK#1, the offset (in-block offset) is +4, and the length is 4.
Fig. 13 shows a write operation that skips a defective page (bad page).
In Fig. 13, consider the case where, in a state where data have already been written to page 0 and page 1 of block BLK#1, a write command specifying a block number (= BLK#1), a logical address (LBAx+1), and a length (= 4) is received from host 2. If page 2 of block BLK#1 is a defective page, controller 4 determines page 3 of block BLK#1 as the write destination position and writes the 16 KB of write data received from host 2 to page 3 of block BLK#1. Controller 4 then returns the offset (in-block offset) and the length to host 2 as the response to the write command. In this example, the offset (in-block offset) is +12 and the length is 4. Alternatively, controller 4 may return the logical address, the block number, the offset (in-block offset), and the length to host 2 as the response to the write command. In this example, the logical address is LBAx+1, the block number is BLK#1, the offset (in-block offset) is +12, and the length is 4.
Fig. 14 shows another example of a write operation that skips a defective page.
In Fig. 14, consider the case where data are written across two pages straddling a defective page. Assume that data have been written to page 0 and page 1 of block BLK#2, and that 8 KB of not-yet-written write data remain in write buffer 31. In this state, when a write command specifying a block number (= BLK#2), a logical address (LBAy), and a length (= 6) is received, controller 4 prepares 16 KB of write data corresponding to the page size, using the unwritten 8 KB of write data and the first 8 KB of the 24 KB of write data newly received from host 2. Controller 4 then writes the prepared 16 KB of write data to page 2 of block BLK#2.
If the next page, page 3 of block BLK#2, is a defective page, controller 4 determines page 4 of block BLK#2 as the next write destination position and writes the remaining 16 KB of the 24 KB of write data received from host 2 to page 4 of block BLK#2.
Controller 4 then returns two offsets (in-block offsets) and two lengths to host 2 as the response to the write command. In this example, the response may include an offset (= +10), a length (= 2), an offset (= +16), and a length (= 4). Alternatively, controller 4 may return LBAy, the block number (= BLK#2), the offset (= +10), the length (= 2), the block number (= BLK#2), the offset (= +16), and the length (= 4) to host 2 as the response to the write command.
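The placement logic of Figs. 12 to 14 can be sketched as follows: a minimal model, under assumed names, of how a controller could place a multi-grain write while skipping defective pages and report the resulting (offset, length) fragments. The set of bad pages and the starting grain position are hypothetical inputs, not the patent's interfaces.

```python
GRAINS_PER_PAGE = 4  # 16 KB page / 4 KB grain

def place_write(start_offset, num_grains, bad_pages):
    """Return a list of (in-block offset, length) fragments for the write."""
    fragments = []
    offset = start_offset
    remaining = num_grains
    while remaining > 0:
        page = offset // GRAINS_PER_PAGE
        if page in bad_pages:
            # Skip the defective page entirely.
            offset = (page + 1) * GRAINS_PER_PAGE
            continue
        # Number of grains that still fit in the current page.
        room = GRAINS_PER_PAGE - offset % GRAINS_PER_PAGE
        n = min(room, remaining)
        if fragments and fragments[-1][0] + fragments[-1][1] == offset:
            # Contiguous with the previous fragment: merge.
            fragments[-1] = (fragments[-1][0], fragments[-1][1] + n)
        else:
            fragments.append((offset, n))
        offset += n
        remaining -= n
    return fragments
```

With the Fig. 14 inputs (6 grains starting at offset +10, page 3 bad) this yields the two fragments (+10, 2) and (+16, 4) reported in the response above.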
Figs. 15 and 16 show operations of writing a pair of a logical address and data to a page in a block.
In each block, each page may include a user data area for storing user data and a redundant area for storing management data. The page size is 16 KB + α.
Controller 4 writes both 4 KB of user data and the logical address (e.g., LBA) corresponding to that 4 KB of user data to write destination block BLK. In this case, as shown in Fig. 15, four data sets, each including an LBA and 4 KB of user data, may be written to the same page. The in-block offset may indicate a set boundary.
Alternatively, as shown in Fig. 16, four pieces of 4 KB user data may be written to the user data area in the page, and the four LBAs corresponding to these four pieces of 4 KB user data may be written to the redundant area in that page.
Fig. 17 shows the relationship between block numbers and offsets (in-block offsets) in an example using superblocks. In the following, an in-block offset is also referred to simply as an offset.
Here, for simplicity of illustration, consider the case where a superblock SB#1 is composed of four blocks BLK#11, BLK#21, BLK#31, and BLK#41. Controller 4 writes data in the order of page 0 of block BLK#11, page 0 of block BLK#21, page 0 of block BLK#31, page 0 of block BLK#41, page 1 of block BLK#11, page 1 of block BLK#21, page 1 of block BLK#31, page 1 of block BLK#41, and so on.
Offset +0 indicates the first 4 KB region of page 0 of block BLK#11, offset +1 indicates the second 4 KB region of page 0 of block BLK#11, offset +2 indicates the third 4 KB region of page 0 of block BLK#11, and offset +3 indicates the fourth 4 KB region of page 0 of block BLK#11.
Offset +4 indicates the first 4 KB region of page 0 of block BLK#21, offset +5 indicates the second 4 KB region of page 0 of block BLK#21, offset +6 indicates the third 4 KB region of page 0 of block BLK#21, and offset +7 indicates the fourth 4 KB region of page 0 of block BLK#21.
Similarly, offset +12 indicates the first 4 KB region of page 0 of block BLK#41, offset +13 indicates the second 4 KB region of page 0 of block BLK#41, offset +14 indicates the third 4 KB region of page 0 of block BLK#41, and offset +15 indicates the fourth 4 KB region of page 0 of block BLK#41.
Offset +16 indicates the first 4 KB region of page 1 of block BLK#11, offset +17 indicates the second 4 KB region of page 1 of block BLK#11, offset +18 indicates the third 4 KB region of page 1 of block BLK#11, and offset +19 indicates the fourth 4 KB region of page 1 of block BLK#11.
Offset +20 indicates the first 4 KB region of page 1 of block BLK#21, offset +21 indicates the second 4 KB region of page 1 of block BLK#21, offset +22 indicates the third 4 KB region of page 1 of block BLK#21, and offset +23 indicates the fourth 4 KB region of page 1 of block BLK#21.
Similarly, offset +28 indicates the first 4 KB region of page 1 of block BLK#41, offset +29 indicates the second 4 KB region of page 1 of block BLK#41, offset +30 indicates the third 4 KB region of page 1 of block BLK#41, and offset +31 indicates the fourth 4 KB region of page 1 of block BLK#41.
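Under the Fig. 17 assumptions (four blocks per superblock, four 4 KB grains per 16 KB page), a superblock offset decodes into a member-block index, a page number, and a region within the page. The function name and tuple layout below are illustrative only:

```python
GRAINS_PER_PAGE = 4
BLOCKS_PER_SUPERBLOCK = 4

def decode_superblock_offset(offset):
    # Which 4 KB region within its page.
    region = offset % GRAINS_PER_PAGE
    # Which page-sized slot in the page-striped write order.
    stripe = offset // GRAINS_PER_PAGE
    block_index = stripe % BLOCKS_PER_SUPERBLOCK  # 0 -> BLK#11, ..., 3 -> BLK#41
    page = stripe // BLOCKS_PER_SUPERBLOCK
    return block_index, page, region
```

For example, offset +28 decodes to block index 3 (BLK#41), page 1, region 0, matching the listing above.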
Fig. 18 shows a maximum-block-number get command applied to flash storage device 3.
The maximum-block-number get command is a command for acquiring the maximum block number from flash storage device 3. By sending the maximum-block-number get command to flash storage device 3, host 2 can recognize the maximum block number, which indicates the number of blocks included in flash storage device 3. The maximum-block-number get command includes the command ID for the maximum-block-number get command and includes no parameters.
Fig. 19 shows the response to the maximum-block-number get command.
Upon receiving the maximum-block-number get command from host 2, flash storage device 3 returns the response shown in Fig. 19 to host 2. The response includes a parameter indicating the maximum block number (that is, the total number of available blocks included in flash storage device 3).
Fig. 20 shows a block-size get command applied to flash storage device 3.
The block-size get command is a command for acquiring the block size from flash storage device 3. By sending the block-size get command to flash storage device 3, host 2 can recognize the block size of NAND flash memory 5 included in flash storage device 3.
In another embodiment, the block-size get command may include a parameter specifying a block number. Upon receiving from host 2 a block-size get command specifying a certain block number, flash storage device 3 returns the block size of the block having that block number to host 2. Thus, even when the block sizes of the individual blocks included in NAND flash memory 5 are not uniform, host 2 can recognize the block size of each individual block.
Fig. 21 shows the response to the block-size get command.
Upon receiving the block-size get command from host 2, flash storage device 3 returns the block size (the block size common to the blocks included in NAND flash memory 5) to host 2. In this case, if a block number is specified by the block-size get command, flash storage device 3 returns the block size of the block having that block number to host 2, as described above.
Fig. 22 shows a block allocate command applied to flash storage device 3.
The block allocate command is a command that requests flash storage device 3 to allocate a block (free block) (a block allocate request). By sending the block allocate command to flash storage device 3, host 2 can request flash storage device 3 to allocate a free block and thereby acquire a block number (the block number of the allocated free block).
In an example in which flash storage device 3 manages the group of free blocks using a free block list and host 2 does not manage the group of free blocks, host 2 requests flash storage device 3 to allocate a free block and thereby acquires a block number. On the other hand, in an example in which host 2 manages the group of free blocks, host 2 can itself select one of the free blocks, and therefore does not need to send the block allocate command to flash storage device 3.
Fig. 23 shows the response to the block allocate command.
Upon receiving the block allocate command from host 2, flash storage device 3 selects, from the free block list, the free block to be allocated to host 2, and returns a response including the block number of the selected free block to host 2.
Fig. 24 shows block-information acquisition processing executed by host 2 and flash storage device 3.
When host 2 begins to use flash storage device 3, host 2 first sends the maximum-block-number get command to flash storage device 3. The controller of flash storage device 3 returns the maximum block number to host 2. The maximum block number indicates the total number of available blocks. In the example using the above-described superblocks, the maximum block number may indicate the total number of available superblocks.
Next, host 2 sends the block-size get command to flash storage device 3 and acquires the block size. In this case, host 2 may individually send to flash storage device 3 a block-size get command specifying block number 1, a block-size get command specifying block number 2, a block-size get command specifying block number 3, and so on, and thereby acquire the block size of every individual block.
Through this block-information acquisition processing, host 2 can recognize the number of available blocks and the block size of each block.
Fig. 25 shows the sequence of write processing executed by host 2 and flash storage device 3.
Host 2 first secures the block (free block) to be used for writing, either by selecting it itself or by sending the block allocate command to flash storage device 3 to have a free block allocated. Host 2 then sends to flash storage device 3 a write command including the block number BLK# of the block it selected (or the block number BLK# of the free block allocated by flash storage device 3), a logical address (LBA), and a length (step S20).
When controller 4 of flash storage device 3 receives the write command, controller 4 determines the write destination position, within the block having block number BLK# (write destination block BLK#), to which the write data from host 2 should be written, and writes the write data to that write destination position of write destination block BLK# (step S11). In step S11, controller 4 may write both the logical address (here, the LBA) and the write data to the write destination block.
Controller 4 updates block management table 32 corresponding to write destination block BLK#, changing the bitmap flag corresponding to the written data (that is, the bitmap flag corresponding to the offset (in-block offset) at which the data has been written) from 0 to 1 (step S12).
For example, as shown in Fig. 26, consider the case where 16 KB of update data whose starting LBA is LBAx are written to the physical storage locations corresponding to offsets +4 to +7 of block BLK#1. In this case, as shown in Fig. 27, in the block management table for block BLK#1, each of the bitmap flags corresponding to offsets +4 to +7 is changed from 0 to 1.
Next, as shown in Fig. 25, controller 4 returns the response to the write command to host 2 (step S13). This response includes at least the offset (in-block offset) at which the data has been written.
When host 2 receives the response, host 2 updates LUT 411 managed by host 2 and maps a physical address to each logical address corresponding to the written write data. As shown in Fig. 28, LUT 411 includes a plurality of entries corresponding to the respective logical addresses (e.g., LBAs). The entry corresponding to a certain logical address (e.g., a certain LBA) stores the physical address PBA, that is, a block number and an offset (in-block offset), indicating the position (physical storage location) in NAND flash memory 5 at which the data corresponding to that LBA is stored. As shown in Fig. 26, when 16 KB of update data whose starting LBA is LBAx are written to the physical storage locations corresponding to offsets +4 to +7 of block BLK#1, LUT 411 is updated as shown in Fig. 28: BLK#1 and offset +4 are stored in the entry corresponding to LBAx, BLK#1 and offset +5 are stored in the entry corresponding to LBAx+1, BLK#1 and offset +6 are stored in the entry corresponding to LBAx+2, and BLK#1 and offset +7 are stored in the entry corresponding to LBAx+3.
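The host-side LUT update above can be sketched in a few lines. This is a minimal model under assumed names: LUT 411 is represented as a dictionary, and one LBA is assumed to cover one 4 KB grain, as in the Fig. 28 example.

```python
def update_lut(lut, start_lba, block, offset, length):
    """lut: dict mapping LBA -> (block number, in-block offset).

    After a write of `length` grains is acknowledged at (block, offset),
    map each consecutive LBA to the corresponding consecutive grain.
    """
    for i in range(length):
        lut[start_lba + i] = (block, offset + i)

# Fig. 26/28 example: LBAx (taken here as 100) written to BLK#1, offsets +4..+7.
lut = {}
update_lut(lut, 100, 1, 4, 4)
```

Note that the mapping direction is logical-to-physical as in a conventional SSD's LUT, but it is maintained by host 2, with the device having chosen only the in-block offset.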
Thereafter, as shown in Fig. 25, host 2 sends to flash storage device 3 a Trim command for invalidating the previous data made unnecessary by the writing of the update data. As shown in Fig. 26, when the previous data are stored at the positions corresponding to offsets +0, +1, +2, and +3 of block BLK#0, a Trim command specifying the block number (= BLK#0), the offset (= +0), and the length (= 4) is sent from host 2 to flash storage device 3, as shown in Fig. 29. Controller 4 of flash storage device 3 updates block management table 32 according to this Trim command (Fig. 25, step S14). In step S15, as shown in Fig. 29, each of the bitmap flags corresponding to offsets +0 to +3 in the block management table for block BLK#0 is changed from 1 to 0.
Fig. 30 shows a read command applied to flash storage device 3.
The read command is a command that requests flash storage device 3 to read data. The read command includes a command ID, a physical address PBA, a length, and a transfer destination pointer.
The command ID is an ID (command code) indicating that this command is a read command; the read command includes the command ID for the read command.
The physical address PBA indicates the first physical storage location from which data should be read. The physical address PBA is specified by a block number and an offset (in-block offset).
The length indicates the length of the data to be read. The data length can be specified by the number of grains.
The transfer destination pointer indicates the position in the memory of host 2 to which the read data should be transferred.
One read command can specify a plurality of sets of a physical address PBA (block number, offset) and a length.
Fig. 31 shows a read operation.
Here, consider the case where a read command specifying a block number (= BLK#2), an offset (= +5), and a length (= 3) is received from host 2. Based on the block number (= BLK#2), the offset (= +5), and the length (= 3), controller 4 of flash storage device 3 reads data d1 to d3 from BLK#2. In this case, controller 4 reads one page worth of data from page 1 of BLK#2 and extracts data d1 to d3 from the read data. Controller 4 then transfers data d1 to d3 to the host memory location specified by the transfer destination pointer.
Fig. 32 shows an operation of reading data portions stored at different physical storage locations, respectively, according to a read command from host 2.
Here, consider the case where a read command specifying a block number (= BLK#2), an offset (= +10), a length (= 2), a block number (= BLK#2), an offset (= +16), and a length (= 4) is received from host 2. Based on the block number (= BLK#2), the offset (= +10), and the length (= 2), controller 4 of flash storage device 3 reads one page worth of data from page 2 of BLK#2 and extracts data d1 and d2 from the read data. Next, based on the block number (= BLK#2), the offset (= +16), and the length (= 4), controller 4 reads one page worth of data (data d3 to d6) from page 4 of BLK#2. Controller 4 then transfers the read data of length (= 6), obtained by combining data d1 and d2 with data d3 to d6, to the host memory location specified by the transfer destination pointer in the read command.
Thus, even when a defective page exists in a block, a data portion can be read from another physical storage location without causing a read error. Moreover, even when data are written across two blocks, the data can be read by issuing a single read command.
Fig. 33 shows the sequence of read processing executed by host 2 and flash storage device 3.
Referring to LUT 411 managed by host 2, host 2 converts the logical address included in a read request from a user application into a block number and an offset. Host 2 then sends to flash storage device 3 a read command specifying the block number, the offset, and the length.
When controller 4 of flash storage device 3 receives the read command from host 2, controller 4 determines the block corresponding to the block number specified by the read command as the block to be read, and determines the page to be read based on the offset specified by the read command (step S31). In step S31, controller 4 may first divide the offset specified by the read command by the number of grains per page (here, 4). Controller 4 may then determine the quotient and remainder of that division as the page number to be read and the in-page offset position to be read, respectively.
Controller 4 reads the data defined by the block number, the offset, and the length from NAND flash memory 5 (step S32) and sends the read data to host 2.
Fig. 34 shows a GC control command applied to flash storage device 3.
The GC control command is used to notify flash storage device 3 of a GC source block number and a GC destination block number. Host 2 can manage the valid data amount/invalid data amount of each block, and select, as GC source blocks, several blocks with smaller amounts of valid data. In addition, host 2 can manage the free block list, and select several free blocks as GC destination blocks. The GC control command may include a command ID, a GC source block number, a GC destination block number, and the like.
The command ID is an ID (command code) indicating that this command is a GC control command; the GC control command includes the command ID for the GC control command.
The GC source block number is a block number indicating a GC source block. Host 2 can specify which block should be the GC source block. Host 2 may set a plurality of GC source block numbers in one GC control command.
The GC destination block number is a block number indicating a GC destination block. Host 2 can specify which block should be the GC destination block. Host 2 may set a plurality of GC destination block numbers in one GC control command.
Fig. 35 shows a GC callback command.
The GC callback command is used to notify host 2 of the logical addresses of the valid data copied by GC and the block numbers and offsets indicating the copy destination positions of those valid data.
The GC callback command may include a command ID, a logical address, a length, and a destination physical address.
The command ID is an ID (command code) indicating that this command is a GC callback command; the GC callback command includes the command ID for the GC callback command.
The logical address indicates the logical address of the valid data copied from the GC source block to the GC destination block by GC.
The length indicates the length of the copied data. The data length may be specified by the number of grains.
The destination physical address indicates the position in the GC destination block to which the valid data have been copied. The destination physical address is specified by a block number and an offset (in-block offset).
Fig. 36 shows the sequence of a garbage collection (GC) operation.
For example, when the number of remaining free blocks included in the free block list managed by host 2 decreases to or below a threshold, host 2 selects a GC source block and a GC destination block, and sends to flash storage device 3 a GC control command specifying the selected GC source block and the selected GC destination block (step S41). Alternatively, in a configuration in which write processing unit 412 manages the group of free blocks, write processing unit 412 may notify host 2 of that fact when the number of remaining free blocks decreases to or below the threshold, and host 2, having received the notification, performs the block selection and the sending of the GC control command.
Upon receiving the GC control command, controller 4 of flash storage device 3 executes a data copy operation including an operation of determining the position (copy destination position) in the GC destination block to which the valid data in the GC source block should be written, and an operation of copying the valid data in the GC source block to the copy destination position in the GC destination block (step S51). In step S51, controller 4 copies not only the valid data in the GC source block (copy source block) but both the valid data and the logical address corresponding to the valid data from the GC source block (copy source block) to the GC destination block (copy destination block). This allows the GC destination block (copy destination block) to hold pairs of data and logical addresses.
In step S51, the data copy operation is repeated until the copying of all the valid data in the GC source block is completed. When a plurality of GC source blocks are specified by the GC control command, the data copy operation is repeated until the copying of all the valid data in all the GC source blocks is completed.
Then, for each piece of copied valid data, controller 4 notifies host 2, using the GC callback command, of the logical address (LBA) of the valid data, the destination physical address indicating the copy destination position of the valid data, and the like (step S52). The destination physical address corresponding to certain valid data is represented by the block number of the copy destination block (GC destination block) to which the valid data have been copied and the in-block physical address (in-block offset) indicating the physical storage location in the copy destination block to which the valid data have been copied.
When host 2 receives the GC callback command, host 2 updates LUT 411 managed by host 2 and maps the destination physical address (block number, in-block offset) to the logical address corresponding to each piece of copied valid data (step S42).
Fig. 37 shows an example of the data copy operation executed for GC.
In Fig. 37, consider the case where the valid data (LBA = 10) stored at the position corresponding to offset +4 of the GC source block (here, block BLK#50) are copied to the position corresponding to offset +0 of the GC destination block (here, block BLK#100), and the valid data (LBA = 20) stored at the position corresponding to offset +10 of the GC source block (block BLK#50) are copied to the position corresponding to offset +1 of the GC destination block (block BLK#100). In this case, controller 4 notifies host 2 of {LBA 10, BLK#100, offset (= +0)} and {LBA 20, BLK#100, offset (= +1)} (GC callback processing).
Fig. 38 shows the contents of LUT 411 of host 2 updated as a result of the data copy operation of Fig. 37.
In LUT 411, the block number and offset corresponding to LBA 10 are updated from BLK#50, offset (= +4) to BLK#100, offset (= +0). Similarly, the block number and offset corresponding to LBA 20 are updated from BLK#50, offset (= +10) to BLK#100, offset (= +1).
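The host-side handling of GC callbacks described above amounts to overwriting LUT entries with the reported destination addresses. The sketch below is illustrative; the tuple layout of a callback entry is an assumption, not the patent's wire format.

```python
def apply_gc_callbacks(lut, callbacks):
    """lut: dict LBA -> (block, offset); callbacks: iterable of
    (lba, destination block, destination offset) tuples from the device."""
    for lba, dst_block, dst_offset in callbacks:
        lut[lba] = (dst_block, dst_offset)

# Fig. 37 example: LBA 10 and LBA 20 move from BLK#50 to BLK#100.
lut = {10: (50, 4), 20: (50, 10)}
apply_gc_callbacks(lut, [(10, 100, 0), (20, 100, 1)])
```

Because the device reports both the logical address and the new physical position, host 2 never needs to know in advance where inside the GC destination block each piece of valid data will land.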
After LUT 411 is updated, host 2 may send to flash storage device 3 a Trim command specifying BLK#50 and the offset (= +4), thereby invalidating the data stored at the position of BLK#50 corresponding to the offset (= +4). Furthermore, host 2 may send to flash storage device 3 a Trim command specifying BLK#50 and the offset (= +10), thereby invalidating the data stored at the position of BLK#50 corresponding to the offset (= +10).
Alternatively, instead of the Trim commands being sent from host 2, controller 4 may update block management table 32 and invalidate these data as part of the GC processing.
As described above, according to the present embodiment, upon receiving from host 2 a write request specifying a first logical address and a first block number, controller 4 of flash storage device 3 determines the position (write destination position) in the block having the first block number (write destination block) to which the data from host 2 should be written, writes the data from host 2 to the write destination position of the write destination block, and notifies host 2 of either the first in-block physical address indicating the first position, or the set of the first logical address, the first block number, and the first in-block physical address.
Accordingly, a configuration can be realized in which host 2 handles the block numbers while flash storage device 3 determines, in consideration of the page write order constraints, bad pages, and the like, the write destination position (in-block offset) within the block having the block number specified by host 2. Because host 2 handles the block numbers, the application-level address translation table of the upper layer (host 2) can be merged with the LUT-level address translation table of a conventional SSD. In addition, flash storage device 3 can control NAND flash memory 5 in consideration of the characteristics/constraints of NAND flash memory 5. Furthermore, since host 2 can recognize block boundaries, it can write user data to each block in consideration of the block boundaries/block sizes. Host 2 can thereby perform control such as invalidating the data in the same block together through data updates or the like, so the frequency with which GC is executed can be reduced. As a result, write amplification is reduced, and improvement of the performance of flash storage device 3 and maximization of the life of flash storage device 3 can be realized.
Accordingly, appropriate role sharing between host 2 and flash storage device 3 can be realized, and the I/O performance of the overall system including host 2 and flash storage device 3 can thereby be improved.
In addition, upon receiving from host 2 a control command specifying a copy source block number and a copy destination block number for garbage collection, controller 4 of flash storage device 3 selects, from the plurality of blocks, a second block having the copy source block number and a third block having the copy destination block number, determines the copy destination position in the third block to which the valid data stored in the second block should be written, and copies the valid data to the copy destination position in the third block. Controller 4 then notifies host 2 of the logical address of the valid data, the copy destination block number, and the second in-block physical address indicating the copy destination position in the third block. This realizes a configuration in which, during GC as well, host 2 handles only the block numbers (the copy source block number and the copy destination block number) while flash storage device 3 determines the copy destination position in the copy destination block.
In addition, the flash storage device 3 may also be used as one of a plurality of flash storage devices 3 provided in a storage array. The storage array may be connected to an information processing device such as a server computer via a cable or a network. The storage array includes a controller that controls the plurality of flash storage devices 3 within the storage array. When the flash storage device 3 is applied to a storage array, the controller of the storage array may function as the host 2 of the flash storage device 3.
In addition, in the present embodiment, a NAND flash memory is exemplified as the nonvolatile memory. However, the functions of the present embodiment can also be applied to various other nonvolatile memories such as an MRAM (Magnetoresistive Random Access Memory), a PRAM (Phase change Random Access Memory), a ReRAM (Resistive Random Access Memory), or an FeRAM (Ferroelectric Random Access Memory).
While several embodiments of the invention have been described, these embodiments are presented as examples and are not intended to limit the scope of the invention. These novel embodiments can be implemented in various other forms, and various omissions, substitutions, and changes can be made without departing from the gist of the invention. These embodiments and modifications thereof are included in the scope and gist of the invention, and are included in the invention described in the claims and its equivalents.
[Description of Reference Numerals]
2 Host
3 Flash storage device
4 Controller
5 NAND flash memory
21 Write operation control unit
22 Read operation control unit
23 GC operation control unit

Claims (16)

1. A storage system connectable to a host, comprising:
a nonvolatile memory including a plurality of blocks, each of the plurality of blocks including a plurality of pages; and
a controller electrically connected to the nonvolatile memory and configured to control the nonvolatile memory,
wherein the controller is configured to:
upon receiving from the host a write request designating a first logical address and a first block number, determine a first position in a first block having the first block number to which data from the host is to be written, write the data from the host to the first position in the first block, and notify the host of either a first in-block physical address indicating the first position, or any one of a group of the first logical address, the first block number, and the first in-block physical address; and
upon receiving from the host a control command for garbage collection of the nonvolatile memory designating a copy-source block number and a copy-destination block number, select from the plurality of blocks a second block having the copy-source block number and a third block having the copy-destination block number, determine a copy-destination position in the third block to which valid data stored in the second block is to be written, copy the valid data to the copy-destination position in the third block, and notify the host of a logical address of the valid data, the copy-destination block number, and a second in-block physical address indicating the copy-destination position.
2. The storage system according to claim 1, wherein the controller is configured to, upon receiving from the host a read request designating the first block number and the first in-block physical address, read data from the first position in the first block on the basis of the first block number and the first in-block physical address.
3. The storage system according to claim 1, wherein the first in-block physical address is expressed by a first in-block offset that indicates, in multiples of a granularity having a size different from the page size, an offset from the beginning of the first block to the first position, and
the second in-block physical address is expressed by a second in-block offset that indicates, in multiples of the granularity, an offset from the beginning of the third block to the copy-destination position.
4. The storage system according to claim 1, wherein the controller is configured to:
upon receiving the write request from the host, write the first logical address to the first block together with the data from the host; and
upon receiving the control command from the host, copy both the valid data stored in the second block and the logical address of the valid data to the third block.
5. The storage system according to claim 1, wherein the controller is configured to:
manage a group of free blocks among the plurality of blocks; and
upon receiving a block allocation request from the host, allocate one free block of the group of free blocks to the host and notify the host of a block number of the allocated block,
wherein the first block number indicates the block number of the free block allocated to the host by the controller.
6. The storage system according to claim 1, wherein the controller is configured to:
upon receiving from the host a first command requesting a maximum block number, notify the host of the maximum block number indicating the number of the plurality of blocks; and
upon receiving from the host a second command requesting a block size, notify the host of the block size of each of the plurality of blocks.
7. The storage system according to claim 6, wherein the controller is configured to, when the second command includes a block number, notify the host of the block size of the block having the block number included in the second command.
8. A storage system connectable to a host, comprising:
a nonvolatile memory including a plurality of blocks, each of the plurality of blocks including a plurality of pages; and
a controller electrically connected to the nonvolatile memory and configured to control the nonvolatile memory,
wherein the controller is configured to:
upon receiving from the host a first command requesting a maximum block number, notify the host of the maximum block number indicating the number of the plurality of blocks;
upon receiving from the host a second command requesting a block size, notify the host of the block size of each of the plurality of blocks;
upon receiving from the host a write request designating a first logical address and a first block number, determine a first position in a first block having the first block number to which data from the host is to be written, write the data from the host to the first position in the first block, and notify the host of either a first in-block physical address indicating the first position, or any one of a group of the first logical address, the first block number, and the first in-block physical address; and
upon receiving from the host a control command for garbage collection of the nonvolatile memory designating a copy-source block number and a copy-destination block number, select from the plurality of blocks a second block having the copy-source block number and a third block having the copy-destination block number, determine a copy-destination position in the third block to which valid data stored in the second block is to be written, copy the valid data to the copy-destination position in the third block, and notify the host of a logical address of the valid data, the copy-destination block number, and a second in-block physical address indicating the copy-destination position.
9. The storage system according to claim 8, wherein the controller is configured to, upon receiving from the host a read request designating the first block number and the first in-block physical address, read data from the first position in the first block on the basis of the first block number and the first in-block physical address.
10. The storage system according to claim 8, wherein the first in-block physical address is expressed by a first in-block offset that indicates, in multiples of a granularity having a size different from the page size, an offset from the beginning of the first block to the first position, and
the second in-block physical address is expressed by a second in-block offset that indicates, in multiples of the granularity, an offset from the beginning of the third block to the copy-destination position.
11. The storage system according to claim 8, wherein the controller is configured to:
upon receiving the write request from the host, write the first logical address to the first block together with the data from the host; and
upon receiving the control command from the host, copy both the valid data stored in the second block and the logical address of the valid data to the third block.
12. A control method of controlling a nonvolatile memory, the nonvolatile memory including a plurality of blocks, each of the plurality of blocks including a plurality of pages, the control method comprising:
upon receiving from a host a write request designating a first logical address and a first block number: determining a first position in a first block having the first block number to which data from the host is to be written; writing the data from the host to the first position in the first block; and notifying the host of either a first in-block physical address indicating the first position, or any one of a group of the first logical address, the first block number, and the first in-block physical address; and
upon receiving from the host a control command for garbage collection of the nonvolatile memory designating a copy-source block number and a copy-destination block number: selecting from the plurality of blocks a second block having the copy-source block number and a third block having the copy-destination block number; determining a copy-destination position in the third block to which valid data stored in the second block is to be written; copying the valid data to the copy-destination position in the third block; and notifying the host of a logical address of the valid data, the copy-destination block number, and a second in-block physical address indicating the copy-destination position.
13. The control method according to claim 12, further comprising: upon receiving from the host a read request designating the first block number and the first in-block physical address, reading data from the first position in the first block on the basis of the first block number and the first in-block physical address.
14. A storage system connectable to a host, comprising:
a nonvolatile memory including a plurality of blocks, each of the plurality of blocks including a plurality of pages; and
a controller electrically connected to the nonvolatile memory and configured to control the nonvolatile memory,
wherein the controller is configured to, upon receiving from the host a write request designating a first logical address and a first block number, determine a first position in a first block having the first block number to which data from the host is to be written, write the data from the host to the first position in the first block, and notify the host of either a first in-block physical address indicating the first position, or any one of a group of the first logical address, the first block number, and the first in-block physical address.
15. The storage system according to claim 14, wherein the controller is configured to, upon receiving from the host a control command for garbage collection of the nonvolatile memory designating a copy-source block number and a copy-destination block number, select from the plurality of blocks a second block having the copy-source block number and a third block having the copy-destination block number, determine a copy-destination position in the third block to which valid data stored in the second block is to be written, copy the valid data to the copy-destination position in the third block, and notify the host of a logical address of the valid data, the copy-destination block number, and a second in-block physical address indicating the copy-destination position.
16. The storage system according to claim 14, wherein the controller is configured to, upon receiving from the host a read request designating the first block number and the first in-block physical address, read data from the first position in the first block on the basis of the first block number and the first in-block physical address.
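The in-block offset addressing recited in claims 3 and 10 can be illustrated with a small numeric sketch. The 4 KiB granularity and 16 KiB page size below are assumed example values, not figures taken from the claims; the only claimed property being modelled is that the offset counts from the beginning of the block in multiples of a granularity whose size differs from the page size.

```python
GRAIN = 4 * 1024                       # assumed granularity (4 KiB)
PAGE_SIZE = 16 * 1024                  # assumed page size (16 KiB)
GRAINS_PER_PAGE = PAGE_SIZE // GRAIN   # 4 grains per page

def to_block_offset(page_no, byte_in_page):
    """(page, byte-in-page) -> in-block offset counted in grain units."""
    return page_no * GRAINS_PER_PAGE + byte_in_page // GRAIN

def from_block_offset(offset):
    """In-block offset in grain units -> (page, byte-in-page)."""
    return offset // GRAINS_PER_PAGE, (offset % GRAINS_PER_PAGE) * GRAIN

# The second 4 KiB grain of page 1 sits at in-block offset 5:
assert to_block_offset(1, 4096) == 5
assert from_block_offset(5) == (1, 4096)
```

Addressing in grain units rather than page units is what lets the device place data at a finer position than a page boundary while the host still sees a single scalar offset per block.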
CN201810767079.9A 2017-10-27 2018-07-13 Memory system and control method Active CN109725846B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111461348.7A CN114115747B (en) 2017-10-27 2018-07-13 Memory system and control method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017208105A JP6982468B2 (en) 2017-10-27 2017-10-27 Memory system and control method
JP2017-208105 2017-10-27

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202111461348.7A Division CN114115747B (en) 2017-10-27 2018-07-13 Memory system and control method

Publications (2)

Publication Number Publication Date
CN109725846A true CN109725846A (en) 2019-05-07
CN109725846B CN109725846B (en) 2021-12-31

Family

ID=66242977

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202111461348.7A Active CN114115747B (en) 2017-10-27 2018-07-13 Memory system and control method
CN201810767079.9A Active CN109725846B (en) 2017-10-27 2018-07-13 Memory system and control method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202111461348.7A Active CN114115747B (en) 2017-10-27 2018-07-13 Memory system and control method

Country Status (4)

Country Link
US (4) US10719437B2 (en)
JP (1) JP6982468B2 (en)
CN (2) CN114115747B (en)
TW (1) TWI674502B (en)


Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6785205B2 (en) 2017-09-21 2020-11-18 キオクシア株式会社 Memory system and control method
JP2019079464A (en) 2017-10-27 2019-05-23 東芝メモリ株式会社 Memory system and control method
JP6982468B2 (en) * 2017-10-27 2021-12-17 キオクシア株式会社 Memory system and control method
US11263124B2 (en) 2018-08-03 2022-03-01 Micron Technology, Inc. Host-resident translation layer validity check
US11226907B2 (en) 2018-12-19 2022-01-18 Micron Technology, Inc. Host-resident translation layer validity check techniques
US11226894B2 (en) * 2018-12-21 2022-01-18 Micron Technology, Inc. Host-based flash memory maintenance techniques
KR20210001546A (en) 2019-06-28 2021-01-06 에스케이하이닉스 주식회사 Apparatus and method for transmitting internal data of memory system in sleep mode
KR20200122086A (en) 2019-04-17 2020-10-27 에스케이하이닉스 주식회사 Apparatus and method for transmitting map segment in memory system
US11294825B2 (en) 2019-04-17 2022-04-05 SK Hynix Inc. Memory system for utilizing a memory included in an external device
KR20200139913A (en) * 2019-06-05 2020-12-15 에스케이하이닉스 주식회사 Memory system, memory controller and meta infomation storage device
US10860228B1 (en) * 2019-06-24 2020-12-08 Western Digital Technologies, Inc. Method to switch between traditional SSD and open-channel SSD without data loss
JP7318899B2 (en) * 2020-01-02 2023-08-01 レベル スリー コミュニケーションズ,エルエルシー Systems and methods for storing content items in secondary storage
JP2022042762A (en) 2020-09-03 2022-03-15 キオクシア株式会社 Non-volatile memory, memory system, and method for controlling non-volatile memory
US11749335B2 (en) 2020-11-03 2023-09-05 Jianzhong Bi Host and its memory module and memory controller
JP2023001494A (en) * 2021-06-21 2023-01-06 キオクシア株式会社 Memory system and control method

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1717707A1 (en) * 1997-03-31 2006-11-02 Lexar Media, Inc. Moving sectors within a block in a flash memory
CN101645043A (en) * 2009-09-08 2010-02-10 成都市华为赛门铁克科技有限公司 Methods for reading and writing data and memory device
CN102789427A (en) * 2012-07-17 2012-11-21 威盛电子股份有限公司 Data storage device and operation method thereof
US20130246721A1 (en) * 2012-02-08 2013-09-19 Kabushiki Kaisha Toshiba Controller, data storage device, and computer program product
CN104679446A (en) * 2013-08-16 2015-06-03 Lsi公司 A method for using a partitioned flash translation layer and a device
US20150261452A1 (en) * 2014-03-12 2015-09-17 Samsung Electronics Co., Ltd. Memory device and controlling method of the same
CN105009094A (en) * 2013-03-05 2015-10-28 西部数据技术公司 Methods, devices and systems for two stage power-on map rebuild with free space accounting in a solid state drive
CN105005536A (en) * 2015-07-01 2015-10-28 忆正科技(武汉)有限公司 Working methods for solid-state storage equipment and host, solid-state storage equipment and host
CN106874211A (en) * 2015-12-14 2017-06-20 株式会社东芝 The control method of accumulator system and nonvolatile memory
US20170262176A1 (en) * 2016-03-08 2017-09-14 Kabushiki Kaisha Toshiba Storage system, information processing system and method for controlling nonvolatile memory
CN107168640A (en) * 2016-03-08 2017-09-15 东芝存储器株式会社 The control method of storage system, information processing system and nonvolatile memory

Family Cites Families (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6883063B2 (en) * 1998-06-30 2005-04-19 Emc Corporation Method and apparatus for initializing logical objects in a data storage system
US6804674B2 (en) 2001-07-20 2004-10-12 International Business Machines Corporation Scalable Content management system and method of using the same
US7512957B2 (en) 2004-12-03 2009-03-31 Microsoft Corporation Interface infrastructure for creating and interacting with web services
US7315917B2 (en) * 2005-01-20 2008-01-01 Sandisk Corporation Scheduling of housekeeping operations in flash memory systems
US7934049B2 (en) 2005-09-14 2011-04-26 Sandisk Corporation Methods used in a secure yet flexible system architecture for secure devices with flash mass storage memory
US7769978B2 (en) 2005-12-21 2010-08-03 Sandisk Corporation Method and system for accessing non-volatile storage devices
US20080172519A1 (en) 2007-01-11 2008-07-17 Sandisk Il Ltd. Methods For Supporting Readydrive And Readyboost Accelerators In A Single Flash-Memory Storage Device
US8959280B2 (en) 2008-06-18 2015-02-17 Super Talent Technology, Corp. Super-endurance solid-state drive with endurance translation layer (ETL) and diversion of temp files for reduced flash wear
WO2009153982A1 (en) 2008-06-20 2009-12-23 パナソニック株式会社 Plurally partitioned nonvolatile memory device and system
US9323658B2 (en) 2009-06-02 2016-04-26 Avago Technologies General Ip (Singapore) Pte. Ltd. Multi-mapped flash RAID
US8688894B2 (en) 2009-09-03 2014-04-01 Pioneer Chip Technology Ltd. Page based management of flash storage
US8255661B2 (en) 2009-11-13 2012-08-28 Western Digital Technologies, Inc. Data storage system comprising a mapping bridge for aligning host block size with physical block size of a data storage device
US20110137966A1 (en) 2009-12-08 2011-06-09 Netapp, Inc. Methods and systems for providing a unified namespace for multiple network protocols
JP5183662B2 (en) * 2010-03-29 2013-04-17 三菱電機株式会社 Memory control device and memory control method
JP5589205B2 (en) 2011-02-23 2014-09-17 株式会社日立製作所 Computer system and data management method
US20120246385A1 (en) 2011-03-22 2012-09-27 American Megatrends, Inc. Emulating spi or 12c prom/eprom/eeprom using flash memory of microcontroller
US9069657B2 (en) * 2011-12-12 2015-06-30 Apple Inc. LBA bitmap usage
US20130191580A1 (en) 2012-01-23 2013-07-25 Menahem Lasser Controller, System, and Method for Mapping Logical Sector Addresses to Physical Addresses
JP5597666B2 (en) 2012-03-26 2014-10-01 株式会社東芝 Semiconductor memory device, information processing system, and control method
US9075710B2 (en) 2012-04-17 2015-07-07 SanDisk Technologies, Inc. Non-volatile key-value store
WO2013171792A1 (en) * 2012-05-16 2013-11-21 Hitachi, Ltd. Storage control apparatus and storage control method
CN103176752A (en) 2012-07-02 2013-06-26 晶天电子(深圳)有限公司 Super-endurance solid-state drive with Endurance Translation Layer (ETL) and diversion of temp files for reduced Flash wear
CN105122491B (en) 2013-03-12 2019-06-07 Vitro可变资本股份有限公司 Has the Organic Light Emitting Diode of light-extraction layer
CN104572478B (en) * 2013-10-14 2018-07-06 联想(北京)有限公司 Data access method and data access device
US20150186259A1 (en) 2013-12-30 2015-07-02 Sandisk Technologies Inc. Method and apparatus for storing data in non-volatile memory
US10812313B2 (en) 2014-02-24 2020-10-20 Netapp, Inc. Federated namespace of heterogeneous storage system namespaces
US9959203B2 (en) 2014-06-23 2018-05-01 Google Llc Managing storage devices
EP3260985B1 (en) * 2014-06-27 2019-02-27 Huawei Technologies Co., Ltd. Controller, flash memory apparatus, and method for writing data into flash memory apparatus
US20160041760A1 (en) * 2014-08-08 2016-02-11 International Business Machines Corporation Multi-Level Cell Flash Memory Control Mechanisms
KR20160027805A (en) * 2014-09-02 2016-03-10 삼성전자주식회사 Garbage collection method for non-volatile memory device
US9977734B2 (en) 2014-12-11 2018-05-22 Toshiba Memory Corporation Information processing device, non-transitory computer readable recording medium, and information processing system
JP6406707B2 (en) 2015-03-23 2018-10-17 東芝メモリ株式会社 Semiconductor memory device
US20160321010A1 (en) 2015-04-28 2016-11-03 Kabushiki Kaisha Toshiba Storage system having a host directly manage physical data locations of storage device
US10102138B2 (en) 2015-10-22 2018-10-16 Western Digital Technologies, Inc. Division of data storage in single-storage device architecture
US10705952B2 (en) 2015-11-04 2020-07-07 Sandisk Technologies Llc User space data storage management
US9996473B2 (en) 2015-11-13 2018-06-12 Samsung Electronics., Ltd Selective underlying exposure storage mapping
US9990304B2 (en) 2015-11-13 2018-06-05 Samsung Electronics Co., Ltd Multimode storage management system
US9946642B2 (en) 2015-11-13 2018-04-17 Samsung Electronics Co., Ltd Distributed multimode storage management
US10061523B2 (en) * 2016-01-15 2018-08-28 Samsung Electronics Co., Ltd. Versioning storage devices and methods
TWI595492B (en) 2016-03-02 2017-08-11 群聯電子股份有限公司 Data transmitting method, memory control circuit unit and memory storage device
JP6444917B2 (en) 2016-03-08 2018-12-26 東芝メモリ株式会社 Storage system, information processing system, and control method
US20180173619A1 (en) * 2016-12-21 2018-06-21 Sandisk Technologies Llc System and Method for Distributed Logical to Physical Address Mapping
US10592408B2 (en) 2017-09-13 2020-03-17 Intel Corporation Apparatus, computer program product, system, and method for managing multiple regions of a memory device
JP6785205B2 (en) 2017-09-21 2020-11-18 キオクシア株式会社 Memory system and control method
JP6982468B2 (en) * 2017-10-27 2021-12-17 キオクシア株式会社 Memory system and control method


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113010445A (en) * 2019-12-20 2021-06-22 爱思开海力士有限公司 Data storage device, method of operating the same, and storage system using the same
CN113885808A (en) * 2021-10-28 2022-01-04 合肥兆芯电子有限公司 Mapping information recording method, memory control circuit unit and memory device
CN113885808B (en) * 2021-10-28 2024-03-15 合肥兆芯电子有限公司 Mapping information recording method, memory control circuit unit and memory device

Also Published As

Publication number Publication date
CN114115747A (en) 2022-03-01
US20230333978A1 (en) 2023-10-19
US20190129838A1 (en) 2019-05-02
US10719437B2 (en) 2020-07-21
US20200310961A1 (en) 2020-10-01
CN109725846B (en) 2021-12-31
CN114115747B (en) 2023-12-22
TWI674502B (en) 2019-10-11
JP2019079463A (en) 2019-05-23
US11416387B2 (en) 2022-08-16
US11748256B2 (en) 2023-09-05
TW201917579A (en) 2019-05-01
US20220342809A1 (en) 2022-10-27
JP6982468B2 (en) 2021-12-17

Similar Documents

Publication Publication Date Title
CN109725846A (en) Storage system and control method
US11093137B2 (en) Memory system and method for controlling nonvolatile memory
US11151029B2 (en) Computing system and method for controlling storage device
TWI689817B (en) Memory system and control method
CN109725847A (en) Storage system and control method
US11797436B2 (en) Memory system and method for controlling nonvolatile memory
US11726707B2 (en) System and method of writing to nonvolatile memory using write buffers
JP2021007059A (en) Memory system
JP7204020B2 (en) Control method
JP7167295B2 (en) Memory system and control method
JP7366222B2 (en) Memory system and control method
US20230297290A1 (en) Memory system and control method
JP2023021450A (en) memory system
JP2022121655A (en) Memory system and control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Tokyo

Patentee after: TOSHIBA MEMORY Corp.

Address before: Tokyo

Patentee before: Pangea Co.,Ltd.

Address after: Tokyo

Patentee after: Kaixia Co.,Ltd.

Address before: Tokyo

Patentee before: TOSHIBA MEMORY Corp.

TR01 Transfer of patent right

Effective date of registration: 20220209

Address after: Tokyo

Patentee after: Pangea Co.,Ltd.

Address before: Tokyo

Patentee before: TOSHIBA MEMORY Corp.