CN110287130A - Storage device and its operating method - Google Patents
Storage device and its operating method
- Publication number
- CN110287130A (application CN201811300759.6A)
- Authority
- CN
- China
- Prior art keywords
- data
- memory
- memory device
- reading
- block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
- G06F12/0246—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0611—Improving I/O performance in relation to response time
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0844—Multiple simultaneous or quasi-simultaneous cache accessing
- G06F12/0846—Cache with multiple tag or data arrays being simultaneously accessible
- G06F12/0851—Cache with interleaved addressing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0871—Allocation or management of cache space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0873—Mapping of cache memory to specific storage devices or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0893—Caches characterised by their organisation or structure
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0893—Caches characterised by their organisation or structure
- G06F12/0895—Caches characterised by their organisation or structure of parts of caches, e.g. directory or tag array
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/064—Management of blocks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0647—Migration mechanisms
- G06F3/0649—Lifecycle management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0658—Controller construction arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0661—Format or protocol conversion arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0688—Non-volatile semiconductor memory arrays
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/22—Employing cache memory using specific memory technology
- G06F2212/222—Non-volatile memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/28—Using a specific disk cache architecture
- G06F2212/283—Plural cache memories
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/30—Providing cache or TLB in specific location of a processing system
- G06F2212/305—Providing cache or TLB in specific location of a processing system being part of a memory device, e.g. cache DRAM
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/31—Providing disk cache in a specific location of a storage system
- G06F2212/313—In storage device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7201—Logical to physical mapping or translation of blocks or pages
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7202—Allocation control and policies
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7208—Multiple device management, e.g. distributing data over multiple flash devices
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Techniques For Improving Reliability Of Storages (AREA)
- Read Only Memory (AREA)
Abstract
The present invention provides a storage device and an operating method of the storage device, the storage device including various memory devices. A storage device includes: a plurality of memory devices, each of which includes at least one or more read cache memory blocks and a plurality of main memory blocks; and a memory controller configured to disperse, among the data stored in the plurality of main memory blocks, data that is stored in the same memory device and has a read count exceeding a threshold value, and to store the dispersed data into the at least one or more read cache memory blocks included in each of the plurality of memory devices, wherein the read count indicates a number of read requests.
Description
Cross reference to related applications
This application claims priority to Korean patent application No. 10-2018-0031751, filed on March 19, 2018, the entire disclosure of which is incorporated herein by reference.
Technical field
Various embodiments of the present disclosure generally relate to an electronic device, and more particularly, to a storage device and an operating method of the storage device.
Background
In general, a storage device is a device that stores data under the control of a host such as a computer, a smartphone, or a smart tablet. Depending on the type of device provided to store data, examples of storage devices may be classified into devices that store data on a magnetic disk, such as a hard disk drive (HDD), and devices that store data in semiconductor memory, particularly non-volatile memory, such as a solid state drive (SSD) or a memory card.
A storage device may include a memory device in which data is stored and a memory controller configured to store data in the memory device. Memory devices may be classified into volatile memory and non-volatile memory. Representative examples of non-volatile memory include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, phase-change random access memory (PRAM), magnetic RAM (MRAM), resistive RAM (RRAM), ferroelectric RAM (FRAM), and the like.
Summary of the invention
Various embodiments of the present disclosure are directed to a storage device including a read cache memory block and an operating method of the storage device.
An embodiment of the present disclosure may provide a storage device including: a plurality of memory devices, each of which includes at least one or more read cache memory blocks and a plurality of main memory blocks; and a memory controller configured to disperse (spread), among the data stored in the plurality of main memory blocks, data that is stored in the same memory device and has a read count exceeding a threshold value, and to store the dispersed data into the at least one or more read cache memory blocks included in each of the plurality of memory devices, wherein the read count indicates a number of read requests.
An embodiment of the present disclosure may provide a method of operating a storage device that includes a plurality of memory devices and a memory controller, the plurality of memory devices being coupled to the same channel and each including at least one or more read cache memory blocks and a plurality of main memory blocks, and the memory controller being configured to control the plurality of memory devices. The method includes: detecting, among the data stored in the plurality of main memory blocks, data having a read count exceeding a threshold value, wherein the read count indicates a number of read requests; and, depending on whether the data having the read count exceeding the threshold value is stored in the same memory device, dispersing and storing the data having the read count exceeding the threshold value into the at least one or more read cache memory blocks included in each of the plurality of memory devices.
An embodiment of the present disclosure may provide a storage system including: a plurality of memory devices, each of which includes a read cache memory block and a main memory block; and a controller suitable for: detecting, as cache data, data with a read count greater than a threshold value among the data stored in the main memory block of each of the plurality of memory devices; and, when a plurality of cache data segments are detected in one of the plurality of memory devices, dispersing the plurality of cache data segments over the read cache memory blocks of the memory devices so that subsequent read operations can be performed on the dispersed cache data segments in an interleaved manner.
Brief description of the drawings
Fig. 1 is a block diagram illustrating a storage device according to an embodiment of the present disclosure.
Fig. 2 is a block diagram illustrating an embodiment of the connection relationship between the plurality of memory devices of Fig. 1 and the memory controller.
Fig. 3 is a timing diagram illustrating a program operation and a read operation using data interleaving according to an embodiment of the present disclosure.
Fig. 4 is a diagram illustrating a super block and a super page according to an embodiment of the present disclosure.
Fig. 5 is a diagram illustrating an operation of storing data into a plurality of memory devices according to an embodiment of the present disclosure.
Fig. 6 is a diagram illustrating the configuration of the memory controller of Fig. 1.
Fig. 7 is a diagram illustrating an operation of storing cache data into read cache memory blocks according to an embodiment of the present disclosure.
Fig. 8 is a flowchart illustrating an operation of the storage device according to an embodiment of the present disclosure.
Fig. 9 is a flowchart illustrating an operation of the storage device according to an embodiment of the present disclosure.
Fig. 10 is a diagram illustrating the configuration of the memory device of Fig. 1.
Fig. 11 is a diagram illustrating an embodiment of the memory cell array 110 of Fig. 10.
Fig. 12 is a circuit diagram illustrating any one memory block BLKa among the memory blocks BLK1 to BLKz of Fig. 11 according to an embodiment of the present disclosure.
Fig. 13 is a circuit diagram illustrating any one memory block BLKb among the memory blocks BLK1 to BLKz of Fig. 11 according to an embodiment of the present disclosure.
Fig. 14 is a circuit diagram illustrating any one memory block BLKc among the memory blocks BLK1 to BLKz included in the memory cell array 110 of Fig. 10 according to an embodiment of the present disclosure.
Fig. 15 is a diagram illustrating an embodiment of the memory controller of Fig. 1.
Fig. 16 is a block diagram illustrating a memory card system to which the storage device according to an embodiment of the present disclosure is applied.
Fig. 17 is a block diagram illustrating a solid state drive (SSD) system to which the storage device according to an embodiment of the present disclosure is applied.
Fig. 18 is a block diagram illustrating a user system to which the storage device according to an embodiment of the present disclosure is applied.
Detailed description
Exemplary embodiments are described more fully below with reference to the accompanying drawings; however, they may be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the exemplary embodiments to those skilled in the art.
In the drawings, dimensions may be exaggerated for clarity of illustration. It will be understood that when an element is referred to as being "between" two elements, it can be the only element between the two elements, or one or more intervening elements may also be present.
Hereinafter, embodiments will be described with reference to the accompanying drawings. Embodiments are described herein with reference to cross-sectional illustrations that are schematic illustrations of embodiments and intermediate structures. As such, variations from the illustrated shapes, for example as a result of manufacturing techniques and/or tolerances, are to be expected. Thus, embodiments should not be construed as limited to the particular shapes of regions illustrated herein, but may include deviations in shapes that result, for example, from manufacturing. In the drawings, the lengths and sizes of layers and regions may be exaggerated for clarity. Like reference numerals in the drawings denote like elements.
Terms such as "first" and "second" may be used to describe various components, but these terms should not limit the various components. These terms are only used to distinguish one component from another. For example, a first component may be referred to as a second component, and a second component may be referred to as a first component, without departing from the spirit and scope of the present disclosure. Furthermore, "and/or" may include any one of, or a combination of, the components mentioned.
In addition, a singular form may include a plural form as long as it is not specifically mentioned in a sentence. Furthermore, "comprise/comprising" or "include/including" used in the specification represents the existence or addition of one or more components, steps, operations, and elements.
Furthermore, unless otherwise defined, all terms used in this specification, including technical and scientific terms, have the same meaning as commonly understood by those skilled in the relevant art. Terms defined in commonly used dictionaries should be interpreted as having a meaning consistent with their meaning in the context of the relevant art, and should not be interpreted in an idealized or overly formal sense unless expressly so defined in the specification.
It should also be noted that in this specification, "connected/coupled" refers not only to one component directly coupling another component but also to one component indirectly coupling another component through an intermediate component. On the other hand, "directly connected/directly coupled" refers to one component directly coupling another component without an intermediate component.
Fig. 1 is a block diagram illustrating a storage device 50 according to an embodiment of the present disclosure.
Referring to Fig. 1, the storage device 50 may include a memory device 100, a memory controller 200, and a buffer memory 300.
The storage device 50 may be a device that stores data under the control of a host 400, such as a cellular phone, a smartphone, an MP3 player, a laptop computer, a desktop computer, a game console, a TV, a tablet PC, or an in-vehicle infotainment system.
The memory device 100 may store data therein. The memory device 100 may operate under the control of the memory controller 200. The memory device 100 may include a memory cell array including a plurality of memory cells configured to store data therein. The memory cell array may include a plurality of memory blocks. Each memory block may include a plurality of memory cells. Each memory block may include a plurality of pages. In an embodiment, each page may be a unit by which data is stored into the memory device 100 or by which data stored in the memory device 100 is read. Each memory block may be a unit by which data is erased. In an embodiment, the memory device 100 may be a double data rate synchronous dynamic random access memory (DDR SDRAM), a low power double data rate 4 (LPDDR4) SDRAM, a graphics double data rate (GDDR) SDRAM, a low power DDR (LPDDR), a Rambus dynamic random access memory (RDRAM), a NAND flash memory, a vertical NAND flash memory, a NOR flash memory device, a resistive random access memory (RRAM), a phase-change memory (PRAM), a magnetoresistive random access memory (MRAM), a ferroelectric random access memory (FRAM), or a spin transfer torque random access memory (STT-RAM).
In this specification, the memory device 100 is described as a NAND flash memory; however, other memory devices may be used.
In an embodiment, the memory device 100 may be implemented in a three-dimensional array structure. The present disclosure may be applied not only to a flash memory in which a charge storage layer is formed of a conductive floating gate (FG), but also to a charge trap flash (CTF) memory in which a charge storage layer is formed of an insulating layer.
The plurality of memory blocks included in the memory device 100 may be divided into main memory blocks 101 and read cache memory blocks 102. In an embodiment, the memory device 100 may include at least one or more main memory blocks 101 and at least one or more read cache memory blocks 102.
In an embodiment, each of the memory cells included in the read cache memory block 102 may be formed of a single-level cell (SLC) capable of storing one data bit. Each of the memory cells included in the main memory block 101 may be formed of a multi-level cell (MLC) capable of storing two data bits, a triple-level cell (TLC) capable of storing three data bits, or a quad-level cell (QLC) capable of storing four data bits.
Data to be stored into the read cache memory block 102 may be cache data. In an embodiment, the cache data may be data, among the data stored in the main memory blocks 101, whose read count, that is, the number of data read requests, exceeds a threshold value (TH).
In an embodiment, the memory controller 200 may control a plurality of memory devices 100. In various embodiments, the cache data may be data, among the data stored in the same main memory block 101, whose read count exceeds the threshold value (TH).
The memory device 100 may receive a command and an address from the memory controller 200 and access a region of the memory cell array selected by the address. In other words, the memory device 100 may perform an operation corresponding to the command on the region selected by the address. For example, the memory device 100 may perform a write (program) operation, a read operation, and an erase operation. During a program operation, the memory device 100 may program data into the region selected by the address. During a read operation, the memory device 100 may read data from the region selected by the address. During an erase operation, the memory device 100 may erase data from the region selected by the address.
The memory controller 200 may control the overall operation of the storage device 50.
When power is applied to the storage device 50, the memory controller 200 may run (execute) firmware. In the case where the memory device 100 is a flash memory device, the memory controller 200 may run firmware such as a flash translation layer (FTL) to control communication between the host 400 and the memory device 100.
In an embodiment, the memory controller 200 may receive data and a logical block address (LBA) from the host 400 and convert the logical block address into a physical block address (PBA) indicating the address of the memory cells, included in the memory device 100, in which the data is to be stored. The memory controller 200 may store a logical-to-physical address mapping table, which represents the mapping relationship between LBAs and PBAs, in the buffer memory 300.
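For illustration only, the following is a minimal sketch of such a logical-to-physical mapping table; the page-level granularity, the dictionary layout, and the (device, block, page) format of the PBA are assumptions made for this sketch and are not taken from the disclosure.

```python
# Minimal sketch of a logical-to-physical (L2P) mapping table (illustrative only).
class L2PTable:
    def __init__(self):
        self.table = {}              # LBA -> PBA

    def update(self, lba, pba):
        self.table[lba] = pba        # record where the data for this LBA was written

    def lookup(self, lba):
        return self.table.get(lba)   # None if the LBA has never been written


l2p = L2PTable()
l2p.update(lba=0x1000, pba=("memory_device_00", 3, 27))   # (device, block, page)
print(l2p.lookup(0x1000))
```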
The memory controller 200 may control the memory device 100 to perform a program operation, a read operation, or an erase operation in response to a request from the host 400. During a program operation, the memory controller 200 may provide a program command, a PBA, and data to the memory device 100. During a read operation, the memory controller 200 may provide a read command and a PBA to the memory device 100. During an erase operation, the memory controller 200 may provide an erase command and a PBA to the memory device 100.
In various embodiments, the memory controller 200 may receive a read request from the host 400 and count the number of read requests for the LBA corresponding to the read request.
In an embodiment, the memory controller 200 may store cache data information. The cache data information may include read counts and the PBAs corresponding to the respective LBAs, wherein a read count is the number of read requests for each of a plurality of LBAs.
The memory controller 200 may determine cache data based on the cache data information, the cache data being data, among the data stored in the main memory blocks 101, that is to be transferred to the read cache memory blocks 102. In detail, the memory controller 200 may determine, as the cache data, data, among the data stored in the main memory blocks 101, whose read count or number of read requests exceeds the threshold value (TH).
In an embodiment, the memory controller 200 may control a plurality of memory devices 100. The memory controller 200 may determine, in each of the plurality of memory devices 100, that data whose read count exceeds the threshold value (TH), among the data stored in the main memory blocks 101, is cache data. Here, when the cache data corresponding to a plurality of original data segments (pieces of data) is stored in a single memory device, the cache data corresponding to the plurality of original data segments cannot be read in an interleaved manner.
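A minimal sketch of how such cache data could be selected from the cache data information is given below; the per-LBA counter, the helper names, and the threshold value are illustrative assumptions, not the controller's actual implementation.

```python
# Hypothetical sketch: count read requests per LBA and select, as cache data,
# the (LBA, PBA) pairs whose read count exceeds the threshold TH.
THRESHOLD_TH = 2                         # illustrative value only

read_counts = {}                         # LBA -> number of read requests so far

def on_read_request(lba):
    read_counts[lba] = read_counts.get(lba, 0) + 1

def select_cache_data(l2p_table):
    """l2p_table: dict mapping LBA -> PBA (see the mapping-table sketch above)."""
    return [(lba, l2p_table[lba])
            for lba, count in read_counts.items()
            if count > THRESHOLD_TH and lba in l2p_table]

for _ in range(3):                       # three read requests for the same LBA
    on_read_request(0x1000)
print(select_cache_data({0x1000: ("memory_device_00", 3, 27)}))
```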
The memory controller 200 may transfer the cache data to the read cache memory blocks 102. In detail, the memory controller 200 may control the memory device 100 to read, from the main memory blocks 101, the cache data to be transferred to the read cache memory blocks 102, and program the read data into the read cache memory blocks 102.
In an embodiment, the memory controller 200 may disperse the cache data of a plurality of original data segments, which is stored in one memory device among the plurality of memory devices, and program the dispersed cache data into the read cache memory blocks 102 included in the plurality of memory devices. In this embodiment, because the cache data corresponding to the plurality of original data segments is dispersed over different memory devices, the cache data can be read in an interleaved manner.
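The following sketch illustrates, under assumed class names and a simple round-robin policy, how cache data segments found in a single memory device might be dispersed over the read cache memory blocks of several memory devices; it is offered for illustration only and is not a definitive implementation of the controller's dispersion logic.

```python
# Illustrative sketch of dispersing hot ("cache") data segments over devices.
class ReadCacheBlock:
    def __init__(self):
        self.pages = {}

    def program(self, lba, data):
        self.pages[lba] = data           # copy into the SLC read cache block

class MemoryDevice:
    def __init__(self, name):
        self.name = name
        self.read_cache_block = ReadCacheBlock()

def disperse_cache_data(cache_segments, devices):
    """Spread hot segments found in one device across all devices' cache blocks."""
    placement = {}
    for i, (lba, data) in enumerate(cache_segments):
        target = devices[i % len(devices)]           # round-robin over devices
        target.read_cache_block.program(lba, data)
        placement[lba] = target.name
    return placement                                 # new location of each LBA

devices = [MemoryDevice(f"memory_device_0{i}") for i in range(4)]
hot = [(0x10, b"A"), (0x11, b"B"), (0x12, b"C")]     # stand-ins for hot 4 KB segments
print(disperse_cache_data(hot, devices))             # each lands on a different device
```

Because each segment ends up in a different device, later reads of the three segments can overlap on the shared channel, which is the behavior the embodiment aims for.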
In an embodiment, the memory controller 200 may autonomously generate a program command, an address, and data without a request from the host 400, and transmit the program command, the address, and the data to the memory device 100. For example, the memory controller 200 may provide a command, an address, and data to the memory device 100 to perform a background operation, such as a program operation for wear leveling or a program operation for garbage collection.
In an embodiment, the memory controller 200 may control the exchange of data between the host 400 and the buffer memory 300. Optionally, the memory controller 200 may temporarily store system data used to control the memory device 100 in the buffer memory 300. For example, the memory controller 200 may temporarily store data input from the host 400 in the buffer memory 300, and then transmit the data temporarily stored in the buffer memory 300 to the memory device 100.
In various embodiments, the buffer memory 300 may be used as a working memory or a cache memory of the memory controller 200. The buffer memory 300 may store code or commands to be executed by the memory controller 200. Optionally, the buffer memory 300 may store data to be processed by the memory controller 200.
In an embodiment, the buffer memory 300 may be implemented using SRAM or DRAM, where the DRAM may be, for example, a double data rate synchronous dynamic random access memory (DDR SDRAM), a DDR4 SDRAM, a low power double data rate 4 (LPDDR4) SDRAM, a graphics double data rate (GDDR) SDRAM, a low-power DDR (LPDDR), or a Rambus dynamic random access memory (RDRAM).
In various embodiments, the storage device 50 may not include the buffer memory 300. In this case, a volatile memory device disposed outside the storage device 50 may perform the function of the buffer memory 300.
In an embodiment, the memory controller 200 may control at least two memory devices 100. In this case, the memory controller 200 may control the memory devices 100 in an interleaved manner to enhance operating performance.
The host 400 may communicate with the storage device 50 using at least one of various communication methods such as universal serial bus (USB), serial AT attachment (SATA), serial attached SCSI (SAS), high speed interchip (HSIC), small computer system interface (SCSI), peripheral component interconnection (PCI), PCI express (PCIe), non-volatile memory express (NVMe), universal flash storage (UFS), secure digital (SD), multimedia card (MMC), embedded MMC (eMMC), dual in-line memory module (DIMM), registered DIMM (RDIMM), and load reduced DIMM (LRDIMM).
The storage device 50 may be configured as any one of various types of storage devices according to the host interface, that is, the communication scheme used to communicate with the host 400. For example, the storage device 50 may be configured as any one of various types of storage devices such as an SSD, a multimedia card in the form of an MMC, an eMMC, an RS-MMC or a micro-MMC, a secure digital card in the form of an SD, a mini-SD or a micro-SD, a universal serial bus (USB) storage device, a universal flash storage (UFS) device, a personal computer memory card international association (PCMCIA) card type storage device, a peripheral component interconnection (PCI) card type storage device, a PCI express (PCI-E) card type storage device, a compact flash (CF) card, a smart media card, and a memory stick.
The storage device 50 may be manufactured in the form of any one of various package types. For example, the storage device 50 may be manufactured in the form of any one of various package types such as a package on package (POP) type, a system in package (SIP) type, a system on chip (SOC) type, a multi-chip package (MCP) type, a chip on board (COB) type, a wafer-level fabricated package (WFP) type, and a wafer-level stack package (WSP) type.
Fig. 2 is a block diagram illustrating an embodiment of the connection relationship between the plurality of memory devices of Fig. 1 and the memory controller 200.
Referring to Fig. 2, the memory controller 200 may be coupled to a plurality of memory devices (memory device_00 to memory device_33) through a plurality of channels CH0 to CH3. In an embodiment, it is noted that the number of channels and the number of memory devices coupled to each channel may vary in various ways. In this specification, the memory controller 200 is coupled to the memory devices through four channels, and four memory devices are coupled to each channel; however, more or fewer channels and memory devices may also be used.
Memory device_00, memory device_01, memory device_02, and memory device_03 may be commonly coupled to channel 0 (CH0). Memory device_00, memory device_01, memory device_02, and memory device_03 may communicate with the memory controller 200 through channel 0 (CH0). Since memory device_00, memory device_01, memory device_02, and memory device_03 are commonly coupled to channel 0 (CH0), only one of these memory devices can communicate with the memory controller 200 at a time. However, the respective internal operations of memory device_00, memory device_01, memory device_02, and memory device_03 may be performed simultaneously.
Memory device_10, memory device_11, memory device_12, and memory device_13 may be commonly coupled to channel 1 (CH1). Memory device_10, memory device_11, memory device_12, and memory device_13 may communicate with the memory controller 200 through channel 1 (CH1). Since memory device_10, memory device_11, memory device_12, and memory device_13 are commonly coupled to channel 1 (CH1), only one of these memory devices can communicate with the memory controller 200 at a time. However, the respective internal operations of memory device_10, memory device_11, memory device_12, and memory device_13 may be performed simultaneously.
Memory device_20, memory device_21, memory device_22, and memory device_23 may be commonly coupled to channel 2 (CH2). Memory device_20, memory device_21, memory device_22, and memory device_23 may communicate with the memory controller 200 through channel 2 (CH2). Since memory device_20, memory device_21, memory device_22, and memory device_23 are commonly coupled to channel 2 (CH2), only one of these memory devices can communicate with the memory controller 200 at a time. However, the respective internal operations of memory device_20, memory device_21, memory device_22, and memory device_23 may be performed simultaneously.
Memory device_30, memory device_31, memory device_32, and memory device_33 may be commonly coupled to channel 3 (CH3). Memory device_30, memory device_31, memory device_32, and memory device_33 may communicate with the memory controller 200 through channel 3 (CH3). Since memory device_30, memory device_31, memory device_32, and memory device_33 are commonly coupled to channel 3 (CH3), only one of these memory devices can communicate with the memory controller 200 at a time. However, the respective internal operations of memory device_30, memory device_31, memory device_32, and memory device_33 may be performed simultaneously.
In a storage device using a plurality of memory devices, performance can be enhanced by using data interleaving, which is data communication using an interleaving scheme. In a structure in which two or more ways share a single channel, data interleaving allows a read or write operation to be performed while the way is changed. For data interleaving, the memory devices may be managed on a channel and way basis. To maximize the parallelism of the memory devices coupled to each channel, the memory controller 200 may disperse and allocate consecutive logical storage regions to the channels and ways.
For example, the memory controller 200 may transmit a command, a control signal including an address, and data to memory device_00 through channel 0 (CH0). Data interleaving may be the following operation: while memory device_00 programs the transferred data into the memory cells included therein, the memory controller 200 transmits a command, a control signal including an address, and data to memory device_01.
Referring to Fig. 2, the plurality of memory devices may be configured into four ways WAY0 to WAY3. Way 0 (WAY0) may include memory device_00, memory device_10, memory device_20, and memory device_30. Way 1 (WAY1) may include memory device_01, memory device_11, memory device_21, and memory device_31. Way 2 (WAY2) may include memory device_02, memory device_12, memory device_22, and memory device_32. Way 3 (WAY3) may include memory device_03, memory device_13, memory device_23, and memory device_33.
Each of the channels CH0 to CH3 may be a signal bus shared by the memory devices coupled to the corresponding channel.
Although the case where data interleaving is applied to a 4-channel/4-way structure has been described with reference to Fig. 2, interleaving efficiency may increase as the number of channels and the number of ways increase.
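As a rough illustration of the 4-channel/4-way organization, the following sketch maps consecutive logical regions to (channel, way) pairs so that neighboring regions land on different channels; the channel-first assignment policy is an assumption for illustration, while the device names follow the labels of Fig. 2.

```python
# Illustrative sketch of a channel/way layout matching Fig. 2
# (device names follow "memory device_<channel><way>"; the mapping policy is assumed).
NUM_CHANNELS = 4
NUM_WAYS = 4

def assign(logical_region):
    channel = logical_region % NUM_CHANNELS            # spread over channels first
    way = (logical_region // NUM_CHANNELS) % NUM_WAYS  # then over ways
    return channel, way, f"memory_device_{channel}{way}"

for region in range(8):
    print(region, assign(region))
```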
Fig. 3 is a timing diagram illustrating a program operation and a read operation using data interleaving according to an embodiment of the present disclosure.
Referring to Fig. 3, (a) illustrates a program operation and (b) illustrates a read operation. In Fig. 3, the following example will be described: the program operation (a) and the read operation (b) are performed on memory device_00 to memory device_03, which are commonly coupled to channel 0 (CH0) of Fig. 2.
Referring to (a) of Fig. 3, during the period from t0 to t1, data input DIN#00 may be performed on memory device_00. During the data input DIN#00, memory device_00 may receive a program command, an address, and data through channel 0 (CH0). Memory device_00, memory device_01, memory device_02, and memory device_03 are commonly coupled to channel 0 (CH0). Therefore, while the data input DIN#00 is performed on memory device_00 during the period from t0 to t1, the other memory devices, that is, memory device_01, memory device_02, and memory device_03, cannot use channel 0 (CH0).
During the period from t1 to t2, data input DIN#01 may be performed on memory device_01. During the data input DIN#01, memory device_01 may receive a program command, an address, and data through channel 0 (CH0). Memory device_00, memory device_01, memory device_02, and memory device_03 are commonly coupled to channel 0 (CH0). Therefore, while the data input DIN#01 is performed on memory device_01 during the period from t1 to t2, the other memory devices, that is, memory device_00, memory device_02, and memory device_03, cannot use channel 0 (CH0). However, since memory device_00 already received its data (DIN#00) during the period from t0 to t1, memory device_00 may perform its program operation (tPROG#00) from t1.
During the period from t2 to t3, data input DIN#02 may be performed on memory device_02. During the data input DIN#02, memory device_02 may receive a program command, an address, and data through channel 0 (CH0). Memory device_00, memory device_01, memory device_02, and memory device_03 are commonly coupled to channel 0 (CH0). Therefore, while the data input DIN#02 is performed on memory device_02 during the period from t2 to t3, the other memory devices, that is, memory device_00, memory device_01, and memory device_03, cannot use channel 0 (CH0). However, since memory device_00 received its data (DIN#00) during the period from t0 to t1, memory device_00 may perform its program operation (tPROG#00) from t1. Furthermore, since memory device_01 received its data (DIN#01) during the period from t1 to t2, memory device_01 may perform its program operation (tPROG#01) from t2.
During the period from t3 to t4, data input DIN#03 may be performed on memory device_03. During the data input DIN#03, memory device_03 may receive a program command, an address, and data through channel 0 (CH0). Memory device_00, memory device_01, memory device_02, and memory device_03 are commonly coupled to channel 0 (CH0). Therefore, while the data input DIN#03 is performed on memory device_03 during the period from t3 to t4, the other memory devices, that is, memory device_00, memory device_01, and memory device_02, cannot use channel 0 (CH0). However, since memory device_00 received its data (DIN#00) during the period from t0 to t1, memory device_00 may perform its program operation (tPROG#00) from t1. Furthermore, since memory device_01 received its data (DIN#01) during the period from t1 to t2, memory device_01 may perform its program operation (tPROG#01) from t2. In addition, since memory device_02 received its data (DIN#02) during the period from t2 to t3, memory device_02 may perform its program operation (tPROG#02) from t3.
At t4, the program operation (tPROG#00) of memory device_00 may be completed.
Then, during the period from t4 to t8, the data inputs DIN#00, DIN#01, DIN#02, and DIN#03 to memory device_00 to memory device_03 may be performed in the same manner as during the period from t0 to t4.
Referring to (b) of Fig. 3, during the period from t'0 to t'2, each of memory device_00 to memory device_03 may internally read the data corresponding to a given address (tR#00 to tR#03). In an embodiment, memory device_00 to memory device_03 may read data on a page basis. Memory device_00 may read data (tR#00) during the period from t'0 to t'1, and may output the read data to the memory controller through channel 0 (CH0) during the period from t'1 to t'3 (DOUT#00).
During the period from t'1 to t'3, memory device_01, memory device_02, and memory device_03 cannot use channel 0 (CH0) because memory device_00 is outputting data through channel 0 (CH0) (DOUT#00).
During the period from t'3 to t'4, memory device_01 may output the read data to the memory controller through channel 0 (CH0) (DOUT#01). During the period from t'3 to t'4, memory device_00, memory device_02, and memory device_03 cannot use channel 0 (CH0) because memory device_01 is outputting data through channel 0 (CH0) (DOUT#01).
During the period from t'4 to t'5, memory device_02 may output the read data to the memory controller through channel 0 (CH0) (DOUT#02). During the period from t'4 to t'5, memory device_00, memory device_01, and memory device_03 cannot use channel 0 (CH0) because memory device_02 is outputting data through channel 0 (CH0) (DOUT#02).
During the period from t'5 to t'6, memory device_03 may output the read data to the memory controller through channel 0 (CH0) (DOUT#03). During the period from t'5 to t'6, memory device_00, memory device_01, and memory device_02 cannot use channel 0 (CH0) because memory device_03 is outputting data through channel 0 (CH0) (DOUT#03).
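To make the benefit of this scheduling concrete, the following back-of-the-envelope model compares a fully serial read with the interleaved read of Fig. 3(b), in which the internal reads (tR) of the four devices overlap and only the data outputs (DOUT) take turns on the shared channel; the latency values are assumptions for illustration, not figures taken from the disclosure.

```python
# Rough, illustrative timing model of the interleaved read of Fig. 3(b).
T_READ = 50      # internal read time tR per device (assumed, arbitrary units)
T_DOUT = 10      # time to transfer one page over the channel (assumed)
NUM_DEVICES = 4

# Without interleaving, each read is fully serial: (tR + DOUT) per device.
serial_time = NUM_DEVICES * (T_READ + T_DOUT)

# With interleaving, the tR phases overlap; only the DOUTs serialize on the bus.
interleaved_time = T_READ + NUM_DEVICES * T_DOUT

print(f"serial: {serial_time}, interleaved: {interleaved_time}")
```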
Fig. 4 is a diagram illustrating a super block and a super page according to an embodiment of the present disclosure.
Referring to Fig. 4, four memory devices, memory device_00 to memory device_03, may be commonly coupled to channel 0 (CH0).
Referring to Fig. 4, each of the memory devices (memory device_00 to memory device_03) may include a 0th memory block BLK0 to an n-th memory block BLKn. Each memory block may include a 0th page (Page 0) to a k-th page (Page k).
The memory controller may control, on a super block basis, the memory blocks included in the plurality of memory devices commonly coupled to each channel. For example, the 0th memory blocks BLK0 included in memory device_00 to memory device_03 may form a 0th super block (Super Block 0). Therefore, memory device_00 to memory device_03 coupled to channel 0 (CH0) may include a 0th super block (Super Block 0) to an n-th super block (Super Block n).
Each super block may include a plurality of super pages. A super page may also be referred to as a "stripe".
Each super page may include a plurality of pages. For example, the 0th pages (Page 0) included in the plurality of 0th memory blocks BLK0 of the 0th super block (Super Block 0) may form a 0th super page (Super Page 0).
Therefore, each super block may include a 0th super page (Super Page 0) to a k-th super page (Super Page k).
When storing data into memory device_00 to memory device_03 or reading the stored data, the memory controller may store or read the data on a super page basis.
In this case, a program operation of storing data into a single super page or a read operation of reading the data stored therein may be performed in the interleaved manner described with reference to Fig. 3.
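A minimal sketch of splitting one super-page write across the four devices of a channel is given below, so that each device receives its own page and the transfers can be interleaved as in Fig. 3(a); the 96 KB page size follows the example described with reference to Fig. 5, while the chunking routine and names are illustrative assumptions.

```python
# Illustrative sketch of striping a super page over the devices of one channel.
PAGE_SIZE = 96 * 1024
DEVICES_PER_CHANNEL = 4
SUPER_PAGE_SIZE = PAGE_SIZE * DEVICES_PER_CHANNEL

def split_super_page(data: bytes):
    """Split one super page worth of data into per-device page writes."""
    assert len(data) <= SUPER_PAGE_SIZE
    chunks = []
    for dev in range(DEVICES_PER_CHANNEL):
        chunk = data[dev * PAGE_SIZE:(dev + 1) * PAGE_SIZE]
        if chunk:
            chunks.append((f"memory_device_0{dev}", chunk))  # DIN#00 .. DIN#03
    return chunks

for device, chunk in split_super_page(bytes(300 * 1024)):
    print(device, len(chunk))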
Fig. 5 is a diagram illustrating an operation of storing data into a plurality of memory devices according to an embodiment of the present disclosure.
The data for corresponding to write request can be deposited referring to Fig. 5 according to the input sequence of the write request of host input
It stores up to memory device _ 00 to memory device _ 03 for being attached to channel 0CH0 jointly.
In embodiment, storing data in programming operation of memory device _ 00 into memory device _ 03 can be with base
It is executed in the page.
In detail, each of memory device _ 00 to memory device _ 03 may include first page Page 1 to
Kth page Page k.Each page can store the data of 96KB size.
According to Memory Controller in response to the write request that is inputted from host and the writing commands sequence generated, it is right respectively
It should request the data of req11 that can be stored sequentially memory device _ 00 to storage in the first request req1 to the 11st
Device device _ 03.
First request can be the write request for 4KB size data, and the second request can be for the big decimal of 64KB
According to write request, third request can be write request for 4KB size data, and the 4th request can be big for 4KB
The write request of small data, the 5th request can be the write request for 4KB size data, and the 6th request, which can be, to be used for
The write request of 64KB size data, the 7th request can be write request for 64KB size data, and the 8th request can be with
It is the write request for 4KB size data, the 9th request can be the write request for 32KB size data, the tenth request
It can be the write request for 4KB size data, and the 11st request can be and ask for the write-in of 4KB size data
It asks.
Each of the pages included in memory device_00 to memory device_03 may store 96 KB of data. Therefore, the memory controller 200 may store the data corresponding to the series of requests to memory device_00 to memory device_03 in units of 96 KB, rather than storing the data on a per-request basis.
In an embodiment, the memory controller 200 may perform program operations on memory device_00 to memory device_03 in a data interleaving manner. Furthermore, as described with reference to Fig. 4, the memory cells included in memory device_00 to memory device_03 may be managed based on superblocks or super pages.
Therefore, the data corresponding to the first request to the fifth request and a part of the 64 KB data corresponding to the sixth request may be stored to memory device_00. The remainder of the 64 KB data corresponding to the sixth request and a part of the 64 KB data corresponding to the seventh request may be stored to memory device_01. The remainder of the 64 KB data corresponding to the seventh request and the data corresponding to the eighth request to the eleventh request may be stored to memory device_02. A packing sketch that reproduces this layout is shown below.
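The layout above follows from a simple packing rule. The sketch below is illustrative only; it assumes four dies and 96 KB pages as in the example and is not the controller's actual firmware.

```c
/* Coalesce variable-sized write requests into 96 KB page units and switch
 * to the next die whenever a page fills. This reproduces the Fig. 5 layout:
 * req1..req5 plus part of req6 land on die 0, and so on. */
#include <stdio.h>

#define NUM_DIES 4
#define PAGE_KB  96

int main(void)
{
    /* Sizes in KB of the 1st..11th write requests from the example. */
    int req_kb[] = {4, 64, 4, 4, 4, 64, 64, 4, 32, 4, 4};
    int n = sizeof(req_kb) / sizeof(req_kb[0]);

    int die = 0, filled = 0;               /* KB already placed in current page */
    for (int i = 0; i < n; i++) {
        int left = req_kb[i];
        while (left > 0) {
            int room  = PAGE_KB - filled;
            int chunk = left < room ? left : room;
            printf("req%-2d: %2d KB -> die %d\n", i + 1, chunk, die);
            filled += chunk;
            left   -= chunk;
            if (filled == PAGE_KB) {       /* page full: move to the next die */
                filled = 0;
                die = (die + 1) % NUM_DIES;
            }
        }
    }
    return 0;
}
```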
After data has been stored according to the embodiment of Fig. 5, read requests for the respective data may be input. Based on the logical block address (LBA) included in each read request, the memory controller 200 may detect the physical block address (PBA) corresponding to that LBA and read the data stored at the corresponding PBA.
Assume that the host frequently inputs read requests for the 4 KB data that was input in response to each of the third request req3 to the fifth request req5. Also assume that the host frequently inputs read requests for the 4 KB data that was input in response to each of the eighth request req8, the tenth request req10 and the eleventh request req11.
Since the 4 KB data input in response to each of the third request req3 to the fifth request req5 is stored in the single memory device_00, a read operation on these 4 KB data (that is, the data corresponding to the third request req3 to the fifth request req5) cannot be performed in a data interleaving manner. The memory controller 200 may provide read commands respectively corresponding to the third request req3 to the fifth request req5 to memory device_00 so as to read the data corresponding to the third request req3 to the fifth request req5. In other words, the memory controller 200 may provide three read commands to memory device_00 in response to the third request req3 to the fifth request req5. Memory device_00 may perform three read operations corresponding to the three read commands.
Since the 4 KB data input in response to each of the eighth request req8, the tenth request req10 and the eleventh request req11 is stored in the single memory device_02, a read operation on these 4 KB data (that is, the data corresponding to the eighth request req8, the tenth request req10 and the eleventh request req11) cannot be performed in a data interleaving manner. The memory controller 200 may provide read commands respectively corresponding to the eighth request req8, the tenth request req10 and the eleventh request req11 to memory device_02 so as to read the data corresponding to the eighth request req8, the tenth request req10 and the eleventh request req11. In other words, the memory controller 200 may provide three read commands to memory device_02 in response to the eighth request req8, the tenth request req10 and the eleventh request req11. Memory device_02 may perform three read operations corresponding to the three read commands.
When data is stored according to the embodiment of Fig. 5, data that has a high read frequency and a small size is stored in larger units (96 KB) and may therefore be concentrated in a single memory device (for example, each of memory device_00 and memory device_02 in the above example). Such data cannot be read in a data interleaving manner, and performance may deteriorate as a result.
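As a rough illustration of the penalty, the sketch below compares the service time of three reads issued to one die with three reads interleaved over three dies. The timing constants are assumed values for illustration, not measured characteristics of any device.

```c
/* Back-of-the-envelope comparison: on a single die every read must finish
 * before the next starts, whereas on three dies the array reads overlap and
 * only the channel transfers are serialized. */
#include <stdio.h>

int main(void)
{
    double t_read = 50.0;   /* us, assumed NAND array read time  */
    double t_xfer = 10.0;   /* us, assumed channel transfer time */
    int    n      = 3;      /* three hot 4 KB fragments          */

    double serial      = n * (t_read + t_xfer);  /* same die          */
    double interleaved = t_read + n * t_xfer;    /* one read per die  */

    printf("same die    : %.0f us\n", serial);       /* 180 us */
    printf("interleaved : %.0f us\n", interleaved);  /*  80 us */
    return 0;
}
```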
Therefore, in an embodiment of the present disclosure, data that has a high read frequency and cannot be read in a data interleaving manner may be dispersed and stored into the read cache memory blocks of the multiple memory devices, so that the data having the high read frequency can be read more quickly.
In the various embodiments of the present disclosure, the number of pages included in each memory device and the size of data that can be stored in each page are not limited to those of the embodiment of Fig. 5. In other words, each memory device may include multiple memory blocks, each of which includes a first page to a k-th page. The data capacity that can be stored in each page may be set to various values, such as 4 KB, 8 KB, 16 KB, 64 KB, 128 KB or 1024 KB.
Fig. 6 is a diagram illustrating the configuration of the memory controller 200 of Fig. 1.
Referring to Fig. 6, the memory controller 200 may include an operation control unit 210 and a read cache block control unit 220.
The read cache block control unit 220 may include a cached data management unit 221 and a cached data information storage unit 222.
The operation control unit 210 may receive requests input from the host. In an embodiment, each request may be a write request Write or a read request Read. The operation control unit 210 may be a component of firmware, such as the FTL described with reference to Fig. 1.
When a write request is input, the operation control unit 210 may store the data corresponding to the write request to the main memory blocks 101 of the memory device 100.
When a read request is input, the operation control unit 210 may read the data corresponding to the read request from the main memory blocks 101. The operation control unit 210 may provide the read data to the host.
In an embodiment, after a read request has been performed, the operation control unit 210 may provide the read count of the logical block address LBA corresponding to the read request to the cached data information storage unit 222.
The cached data information storage unit 222 may store cached data information.
The cached data information may include the read count of each of multiple logical block addresses (LBA), where the read count indicates the number of read requests for that LBA, and the physical block address (PBA) corresponding to each LBA.
The cached data information storage unit 222 may provide the stored cached data information CACHE DATA INFO to the cached data management unit 221.
The cached data management unit 221 may determine cached data based on the cached data information, that is, data among the data stored in the main memory blocks 101 that is to be transferred to the read cache memory block 102.
In detail, the cached data management unit 221 may determine, among the data stored in the main memory blocks 101, data whose read count (the number of data read requests) exceeds a threshold value (TH) to be cached data, as sketched below.
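A minimal sketch of such a decision follows. The table layout, the field names and the threshold value are illustrative assumptions, not the format actually used by the cached data information storage unit.

```c
/* Per-LBA cached data information and a scan for entries whose read count
 * exceeds the threshold TH: these entries become cached-data candidates. */
#include <stdint.h>
#include <stdio.h>

#define MAX_ENTRIES 8
#define READ_TH     100              /* threshold TH from the example */

struct cache_info {
    uint32_t lba;                    /* logical block address          */
    uint32_t size_kb;                /* data size                      */
    uint32_t read_count;             /* number of read requests (RC)   */
    uint32_t die;                    /* PBA expressed as a die number  */
};

static struct cache_info table[MAX_ENTRIES] = {
    {4, 4, 150, 3}, {5, 4, 120, 3}, {6, 4, 130, 3}, {7, 4, 10, 2},
};

int main(void)
{
    for (int i = 0; i < MAX_ENTRIES; i++) {
        if (table[i].read_count > READ_TH)
            printf("LBA %u on die %u is a cached-data candidate (RC=%u)\n",
                   (unsigned)table[i].lba, (unsigned)table[i].die,
                   (unsigned)table[i].read_count);
    }
    return 0;
}
```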
In an embodiment, the memory controller 200 may control multiple memory devices 100. In each of the multiple memory devices 100, the cached data management unit 221 may determine, among the data stored in the main memory blocks 101, data whose read count exceeds the threshold value (TH) to be cached data. Here, when cached data corresponding to multiple original data segments is stored in a single memory device, the cached data corresponding to the multiple original data segments cannot be read in a data interleaving manner.
The cached data management unit 221 may provide the physical block address (PBA) of the cached data, which is information about the cached data, to the operation control unit 210. The operation control unit 210 may read the data corresponding to the PBA of the cached data from the main memory blocks 101, and perform a cache write operation of storing the read data to the read cache memory block 102.
In an embodiment, the program or read speed of the read cache memory block 102 may be higher than the program or read speed of the main memory blocks 101. In an embodiment, the read cache memory block 102 may be formed of single-level cells (SLC) each storing one data bit, while the main memory blocks 101 may be formed of multi-level cells (MLC) each storing two data bits, triple-level cells (TLC) each storing three data bits, or quad-level cells (QLC) each storing four data bits.
In various embodiments, during the cache write operation, the operation control unit 210 may disperse the data and program it to the read cache memory blocks 102 included in the multiple memory devices 100. In other words, the operation control unit 210 may disperse the cached data of a single memory device and store it to the multiple memory devices 100, so that the read cache memory blocks 102 can be read according to a data interleaving scheme.
Fig. 7 is a diagram illustrating an operation of storing cached data to the read cache memory blocks 102 according to an embodiment of the present disclosure.
Referring to Fig. 7, each of memory device_00 to memory device_03 may include main memory blocks and a read cache memory block.
In an embodiment, each of memory device_00 to memory device_03 may include main memory block 1 to main memory block n. In an embodiment, the memory cells included in each main memory block may be formed of MLCs each storing two data bits, TLCs each storing three data bits, or QLCs each storing four data bits.
In an embodiment, the program or read speed of the read cache memory block may be higher than the program or read speed of the main memory blocks. In an embodiment, the read cache memory block may be formed of SLCs each storing one data bit.
In each of the memory devices (that is, memory device_00 to memory device_03), the memory controller 200 may determine cached data, that is, data among the data stored in the main memory blocks that is to be transferred to the read cache memory block, according to the cached data information stored in the cached data information storage unit 222.
The cached data information storage unit 222 may store a logical block address (LBA), a data size, a read count (RC) and a physical block address (PBA). In an embodiment, the PBA may be a die number indicating which memory device, among the memory devices (that is, memory device_00 to memory device_03), stores the data corresponding to the related LBA.
Referring to Fig. 7, the memory controller 200 may determine cached data based on the cached data information. The cached data may be data whose LBA read count value exceeds the threshold value (TH). In Fig. 7, it is assumed that the threshold value (TH) is 100.
The data corresponding to each of the fourth LBA LBA4 to the sixth LBA LBA6 may have a size of 4 KB and a read count exceeding the threshold value (TH). The data corresponding to the fourth LBA LBA4 to the sixth LBA LBA6 may be stored in the main memory blocks of memory device_03 (die 3).
Therefore, the memory controller 200 may determine the data corresponding to the fourth LBA LBA4 to the sixth LBA LBA6 to be cached data.
The memory controller 200 may read the data corresponding to the fourth LBA LBA4 to the sixth LBA LBA6 stored in the main memory blocks of memory device_03. The memory controller 200 may store the read data corresponding to the fourth LBA LBA4 to the sixth LBA LBA6 (that is, the cached data of memory device_03) to the read cache memory blocks of the memory devices (that is, memory device_00 to memory device_03). That is, the memory controller 200 may disperse and store the read data (that is, the cached data of memory device_03) to the memory devices (that is, memory device_00 to memory device_03), so that the data corresponding to each LBA can be read in a data interleaving manner. For example, the 4 KB data corresponding to the fourth LBA LBA4 may be stored to the read cache memory block of memory device_00, the 4 KB data corresponding to the fifth LBA LBA5 may be stored to the read cache memory block of memory device_01, and the 4 KB data corresponding to the sixth LBA LBA6 may be stored to the read cache memory block of memory device_02, as in the sketch below.
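The following sketch illustrates this cache write operation. The read_main_block() and program_cache() helpers are assumed stand-ins for the actual flash commands and are not defined in the disclosure.

```c
/* Three hot 4 KB fragments that all live on die 3 are read from its main
 * blocks and programmed round-robin into the SLC read cache blocks of
 * dies 0, 1 and 2, so that later reads can be interleaved. */
#include <stdint.h>
#include <stdio.h>

#define NUM_DIES 4

static void read_main_block(int die, uint32_t lba)
{ printf("read  LBA %u from die %d main block\n", (unsigned)lba, die); }

static void program_cache(int die, uint32_t lba)
{ printf("write LBA %u to   die %d read cache block\n", (unsigned)lba, die); }

static void disperse_cached_data(int src_die, const uint32_t *lbas, int n)
{
    int dst = 0;
    for (int i = 0; i < n; i++) {
        if (dst == src_die)                /* optionally skip the source die */
            dst = (dst + 1) % NUM_DIES;
        read_main_block(src_die, lbas[i]);
        program_cache(dst, lbas[i]);
        dst = (dst + 1) % NUM_DIES;
    }
}

int main(void)
{
    uint32_t hot[] = {4, 5, 6};            /* LBA4..LBA6 stored on die 3 */
    disperse_cached_data(3, hot, 3);       /* -> dies 0, 1 and 2         */
    return 0;
}
```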
Fig. 8 is a flowchart illustrating the operation of the storage device 50 according to an embodiment of the present disclosure.
Referring to Fig. 8, in step S801, the storage device 50 may receive a read request from the host. The read request may include an LBA, which is the logical block address of the data to be read.
In step S803, the storage device 50 may perform a read operation on the corresponding LBA. For example, the storage device 50 may detect the PBA corresponding to the LBA of the read request, read the data stored at the corresponding PBA, and then provide the read data to the host.
In step S805, the storage device 50 may update the read count of the LBA for which the read request has been performed. In detail, the storage device 50 may increase the read count of the corresponding LBA and store the increased read count value.
In step S807, the storage device 50 may determine whether there are two or more LBAs whose respective read counts exceed the threshold value TH. If there are two or more such LBAs, the storage device 50 may proceed to step S809; if there are not, the storage device 50 may terminate the process.
In step S809, the storage device 50 may transfer the data corresponding to the LBAs whose respective read counts exceed the threshold value TH to the read cache memory blocks. Step S809 will be described in more detail with reference to Fig. 9. A skeleton of this flow is shown below.
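The following skeleton summarizes steps S801 to S809 with assumed data structures and placeholder hooks; a real device would perform the read through the FTL and persist the read counts.

```c
/* Serve the read, bump the LBA's read count, and when two or more LBAs have
 * exceeded the threshold, hand them to the cache-transfer step of Fig. 9. */
#include <stdint.h>
#include <stddef.h>

#define READ_TH 100
#define MAX_LBA 1024

static uint32_t read_count[MAX_LBA];

/* Placeholder hooks standing in for the real operations. */
static void read_and_return(uint32_t lba)                        { (void)lba; }        /* S803 */
static void transfer_to_read_cache(const uint32_t *l, size_t n)  { (void)l; (void)n; } /* S809 */

void handle_read_request(uint32_t lba)
{
    if (lba >= MAX_LBA)
        return;

    read_and_return(lba);                  /* S801/S803: perform the read  */
    read_count[lba]++;                     /* S805: update the read count  */

    uint32_t hot[MAX_LBA];
    size_t n = 0;                          /* S807: collect LBAs over TH   */
    for (uint32_t i = 0; i < MAX_LBA; i++)
        if (read_count[i] > READ_TH)
            hot[n++] = i;

    if (n >= 2)                            /* S809: move hot data to cache */
        transfer_to_read_cache(hot, n);
}
```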
Fig. 9 is a flowchart illustrating the operation of the storage device 50 according to an embodiment of the present disclosure.
Referring to Fig. 9, in step S903, the storage device 50 may determine whether two or more data segments respectively corresponding to the two or more LBAs whose respective read counts exceed the threshold value TH are stored in the same memory device. If the two or more data segments respectively corresponding to the two or more LBAs whose respective read counts exceed the threshold value TH are not stored in the same memory device, the process may terminate, because the read operation on the main memory blocks can be performed in a data interleaving manner. However, if the two or more data segments respectively corresponding to the two or more LBAs whose respective read counts exceed the threshold value TH are stored in the same memory device, the process proceeds to step S909, because a read operation on two or more data segments stored in the same memory device cannot be performed in a data interleaving manner.
In step S909, the storage device 50 may disperse and store the cached data, that is, the two or more data segments respectively corresponding to the two or more LBAs whose respective read counts exceed the threshold value TH, into the read cache memory blocks included in the different memory devices. In an embodiment, the memory cells included in each main memory block may be formed of MLCs each storing two data bits, TLCs each storing three data bits, or QLCs each storing four data bits. In an embodiment, the program or read speed of the read cache memory block may be higher than the program or read speed of the main memory blocks. In an embodiment, the read cache memory block may be formed of SLCs each storing one data bit. The decision of step S903 is sketched below.
Figure 10 is a diagram illustrating the configuration of the memory device 100 of Fig. 1.
Referring to Fig. 10, the memory device 100 may include a memory cell array 110, a peripheral circuit 120 and control logic 130.
The memory cell array 110 may include multiple memory blocks BLK1 to BLKz. The multiple memory blocks BLK1 to BLKz are coupled to an address decoder 121 through row lines RL. The multiple memory blocks BLK1 to BLKz are coupled to a read/write circuit 123 through bit lines BL1 to BLm. Each of the memory blocks BLK1 to BLKz may include multiple memory cells. In an embodiment, the multiple memory cells may be nonvolatile memory cells. Among the multiple memory cells, memory cells coupled to the same word line may be defined as one page. In other words, the memory cell array 110 is formed of multiple pages. In an embodiment, each of the memory blocks BLK1 to BLKz included in the memory cell array 110 may include multiple dummy cells. Here, one or more dummy cells may be coupled in series between a drain select transistor and the memory cells, and between a source select transistor and the memory cells.
Each memory cell of the memory device 100 may be formed of an SLC capable of storing a single data bit, an MLC capable of storing two data bits, a TLC capable of storing three data bits, or a QLC capable of storing four data bits.
The peripheral circuit 120 may include the address decoder 121, a voltage generator 122, the read/write circuit 123 and a data input/output circuit 124.
The peripheral circuit 120 may drive the memory cell array 110. For example, the peripheral circuit 120 may drive the memory cell array 110 to perform a program operation, a read operation or an erase operation.
The address decoder 121 is coupled to the memory cell array 110 through the row lines RL. The row lines RL may include drain select lines, word lines, source select lines and a common source line. In an embodiment, the word lines may include normal word lines and dummy word lines. In an embodiment, the row lines RL may further include a pipe select line.
The address decoder 121 may operate under the control of the control logic 130. The address decoder 121 may receive an address ADDR from the control logic 130.
The address decoder 121 may decode a block address in the received address ADDR. The address decoder 121 may select at least one of the memory blocks BLK1 to BLKz according to the decoded block address. The address decoder 121 may decode a row address in the received address ADDR. The address decoder 121 may select at least one word line WL of the selected memory block by applying a voltage supplied from the voltage generator 122 to the at least one word line WL according to the decoded row address.
During a program operation, the address decoder 121 may apply a program voltage to the selected word line and apply a pass voltage, whose level is lower than that of the program voltage, to unselected word lines. During a program verify operation, the address decoder 121 may apply a verify voltage to the selected word line and apply a verify pass voltage, which is higher than the verify voltage, to the unselected word lines.
During a read operation, the address decoder 121 may apply a read voltage to the selected word line and apply a read pass voltage, which is higher than the read voltage, to the unselected word lines.
In an embodiment, the erase operation of the memory device 100 may be performed on a memory block basis. During an erase operation, the address ADDR input to the memory device 100 includes a block address. The address decoder 121 may decode the block address and select the corresponding memory block according to the decoded block address. During the erase operation, the address decoder 121 may apply a ground voltage to the word lines coupled to the selected memory block.
In an embodiment, the address decoder 121 may decode a column address in the transmitted address ADDR. The decoded column address DCA may be transmitted to the read/write circuit 123. In an embodiment, the address decoder 121 may include components such as a row decoder, a column decoder and an address buffer.
The voltage generator 122 may generate multiple voltages using an external supply voltage provided to the memory device 100. The voltage generator 122 may operate under the control of the control logic 130.
In an embodiment, the voltage generator 122 may generate an internal supply voltage by regulating the external supply voltage. The internal supply voltage generated by the voltage generator 122 may be used as an operating voltage of the memory device 100.
In an embodiment, the voltage generator 122 may generate multiple voltages using the external supply voltage or the internal supply voltage. The voltage generator 122 may generate various voltages required by the memory device 100. For example, the voltage generator 122 may generate multiple program voltages, multiple pass voltages, multiple select read voltages and multiple unselect read voltages.
For example, the voltage generator 122 may include multiple pumping capacitors that receive the internal supply voltage, and may generate the multiple voltages by selectively activating the multiple pumping capacitors under the control of the control logic 130.
The generated voltages may be supplied to the memory cell array 110 by the address decoder 121.
The read/write circuit 123 may include first to m-th page buffers PB1 to PBm. The first to m-th page buffers PB1 to PBm are coupled to the memory cell array 110 through the first to m-th bit lines BL1 to BLm, respectively. The first to m-th page buffers PB1 to PBm may operate under the control of the control logic 130.
The first to m-th page buffers PB1 to PBm may perform data communication with the data input/output circuit 124. During a program operation, the first to m-th page buffers PB1 to PBm may receive data DATA to be stored through the data input/output circuit 124 and data lines DL.
During a program operation, when a program pulse is applied to the selected word line, the first to m-th page buffers PB1 to PBm may transfer the data DATA received through the data input/output circuit 124 to the selected memory cells through the bit lines BL1 to BLm. The memory cells in the selected page are programmed based on the transferred data DATA. Memory cells coupled to bit lines to which a program permission voltage (for example, a ground voltage) is applied may have increased threshold voltages. The threshold voltages of memory cells coupled to bit lines to which a program inhibit voltage (for example, a supply voltage) is applied may be retained. During a program verify operation, the first to m-th page buffers PB1 to PBm may read page data from the selected memory cells through the bit lines BL1 to BLm. The relationship between the latched data and the bit line bias is sketched below.
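The following sketch is only an illustration of how the data bit latched in each page buffer could determine the bit line bias during a program pulse; it does not model the actual page buffer circuit, and the voltage values are assumed.

```c
/* A '0' data bit enables programming (ground on the bit line, Vth rises);
 * a '1' inhibits it (supply voltage on the bit line, Vth is retained). */
#include <stdint.h>
#include <stdio.h>

#define VSS 0.0   /* program-permission voltage (ground), volts */
#define VCC 3.3   /* program-inhibit voltage (supply), volts    */

int main(void)
{
    uint8_t latched_bits[8] = {0, 1, 0, 0, 1, 1, 0, 1};

    for (int bl = 0; bl < 8; bl++) {
        double bias = latched_bits[bl] ? VCC : VSS;
        printf("BL%-2d data=%u -> %.1f V (%s)\n", bl, latched_bits[bl], bias,
               latched_bits[bl] ? "inhibit, Vth kept" : "program, Vth rises");
    }
    return 0;
}
```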
During a read operation, the read/write circuit 123 may read data DATA from the memory cells in the selected page through the bit lines BL, and output the read data DATA to the data input/output circuit 124.
During an erase operation, the read/write circuit 123 may float the bit lines BL. In an embodiment, the read/write circuit 123 may include a column select circuit.
The data input/output circuit 124 is coupled to the first to m-th page buffers PB1 to PBm through the data lines DL. The data input/output circuit 124 may operate under the control of the control logic 130.
The data input/output circuit 124 may include multiple input/output buffers (not shown) that receive input data. During a program operation, the data input/output circuit 124 may receive data DATA to be stored from an external controller (not shown). During a read operation, the data input/output circuit 124 may output the data received from the first to m-th page buffers PB1 to PBm included in the read/write circuit 123 to the external controller.
The control logic 130 may be coupled to the address decoder 121, the voltage generator 122, the read/write circuit 123 and the data input/output circuit 124. The control logic 130 may control the overall operation of the memory device 100. The control logic 130 may operate in response to a command CMD transmitted from an external device.
Figure 11 is a diagram illustrating an embodiment of the memory cell array 110 of Figure 10.
Referring to Fig. 11, the memory cell array 110 may include multiple memory blocks BLK1 to BLKz. Each memory block may have a three-dimensional structure. Each memory block may include multiple memory cells stacked on a substrate. The memory cells are arranged in the +X direction, the +Y direction and the +Z direction. The structure of each memory block is described in more detail with reference to Figs. 12 and 13.
Figure 12 is a circuit diagram of any one memory block BLKa among the memory blocks BLK1 to BLKz of Figure 10 according to an embodiment of the present disclosure.
Referring to Fig. 12, the memory block BLKa may include multiple cell strings CS11 to CS1m and CS21 to CS2m. In an embodiment, each of the cell strings CS11 to CS1m and CS21 to CS2m may be formed in a 'U' shape. In the memory block BLKa, m cell strings may be arranged in the row direction (that is, the +X direction). In Figure 12, two cell strings are illustrated as being arranged in the column direction (that is, the +Y direction). However, this illustration is made only for convenience of description, and it should be understood that three or more cell strings may be arranged in the column direction.
Each of multiple unit string CS11 to CS1m and CS21 to CS2m may include at least one drain selection crystalline substance
Body pipe SST, first memory unit MC1 are brilliant to the n-th memory cell MCn, tunnel transistor PT and at least one drain electrode selection
Body pipe DST.
Selection transistor SST and DST and memory cell MC1 to MCn can respectively have similar structure.Implementing
In example, each of selection transistor SST and DST and memory cell MC1 to MCn may include that channel layer, tunnel are exhausted
Edge layer, charge storage layer and barrier insulating layer.In embodiment, can be arranged in each unit string for providing channel layer
Column (pillar).In embodiment, it can be arranged in each unit string and be deposited for providing channel layer, tunnel insulation layer, charge
The column of each of reservoir and barrier insulating layer.
The drain selection transistor SST of each unit string is connected in common source polar curve CSL and memory cell MC1 to MCp
Between.
In embodiment, the drain selection transistor for the unit string being arranged in mutually colleague is attached to be extended in the row direction
Drain selection line, and the drain selection transistor for being arranged in the unit string in not going together is attached to different drain selections
Line.In Figure 12, the drain selection transistor of the unit string CS11 to CS1m in the first row is attached to the first drain selection line
SSL1.The drain selection transistor of unit string CS21 to CS2m in second row is attached to the second drain selection line SSL2.
In embodiment, the drain selection transistor of unit string CS11 to CS1m and CS21 to CS2m can be attached to jointly
Single source electrode selection line.
The memory cell MCn of first memory unit MC1 to n-th in each unit string is connected in drain selection transistor
Between SST and drain electrode selection transistor DST.
First memory unit MC1 to the n-th memory cell MCn can be divided into first memory unit MC1 and deposit to pth
Storage unit MCp and+1 memory cell MCp+1 of pth to the n-th memory cell MCn.First memory unit MC1 is to pth
Memory cell MCp is sequentially disposed on the direction opposite with +Z direction and coupled in series is in drain selection transistor
Between SST and tunnel transistor PT.+ 1 memory cell MCp+1 of pth to the n-th memory cell MCn is sequentially disposed at+Z
On direction and coupled in series is between tunnel transistor PT and drain electrode selection transistor DST.First memory unit MC1 to
P memory cell MCp and+1 memory cell MCp+1 of pth to the n-th memory cell MCn passes through tunnel transistor PT each other
Connection.The grid of the first memory unit MC1 of each unit string to pth memory cell MCn are respectively coupled to the first wordline
WL1 to the n-th wordline WLn.
The grid of the tunnel transistor PT of each unit string is attached to pipeline PL.
The drain electrode selection transistor DST of each unit string be connected in respective bit line and memory cell MCp+1 to MCn it
Between.The unit series connection arranged on line direction is connected to the drain electrode selection line extended in the row direction.Unit string CS11 in the first row
Drain electrode selection transistor to CS1m is attached to the first drain electrode selection line DSL1.The leakage of unit string CS21 to CS2m in second row
Pole selection transistor is attached to the second drain electrode selection line DSL2.
The unit string arranged on column direction could be attached to the bit line extended in a column direction.In Figure 12, in the first row
Unit string CS11 and CS21 be attached to the first bit line BL1.Unit string CS1m and CS2m in m column are attached to m bit line
BLm。
Among the cell strings arranged in the row direction, memory cells coupled to the same word line form a single page. For example, among the cell strings CS11 to CS1m in the first row, the memory cells coupled to the first word line WL1 form a single page. Among the cell strings CS21 to CS2m in the second row, the memory cells coupled to the first word line WL1 form another single page. When any one of the drain select lines DSL1 and DSL2 is selected, the corresponding cell strings arranged in a single row direction may be selected. When any one of the word lines WL1 to WLn is selected, the corresponding single page may be selected from the selected cell strings.
In embodiment, even bitlines and odd bit lines can be set to replace the first bit line BL1 to m bit line BLm.?
The unit string CS11 to CS1m or CS21 arranged on line direction into C2m, the unit string of even-numbered could be attached to respectively
Even bitlines.The unit string CS11 to CS1m or CS21 arranged in the row direction into C2m, the unit string of odd-numbered can
To be attached to respective odd bit lines.
In embodiment, at least one of first memory unit MC1 to the n-th memory cell MCn may be used as void
Quasi- memory cell.For example, at least one or more virtual memory unit can be set to reduce drain selection transistor SST
And memory cell MC1 is to the electric field between MCp.It is alternatively possible to which at least one or more virtual memory unit is arranged
Reduce drain electrode selection transistor DST and memory cell MCp+1 to the electric field between MCn.With the number of virtual memory unit
Amount increases, and the operating reliability of memory block BLKa can increase, while the size of memory block BLKa can be increased.With virtually depositing
The quantity of storage unit is reduced, and the size of memory block BLKa can reduce, but the operating reliability of memory block BLKa may drop
It is low.
In order to efficiently control at least one virtual memory unit, each of virtual memory unit be can have
Required threshold voltage.It, can be to complete in virtual memory unit before or after memory block BLKa executes erasing operation
Portion or some execution programming operations.In the case where having executed the later execution erasing operation of programming operation, by control to
It is applied to the voltage of the dummy word lines coupled with each virtual memory unit, virtual memory unit can have required threshold
Threshold voltage.
Figure 13 is any one storage shown according to the memory block BLK1 of Figure 11 of the embodiment of the present disclosure into BLKz
The circuit diagram of block BLKb.
Referring to Fig.1 3, memory block BLKb may include multiple unit string CS11 ' to CS1m ' and CS21 ' to CS2m '.Unit
String CS11 ' is to CS1m ' and CS21 ' extend in +Z direction to each of CS2m '.Unit string CS11 ' to CS1m ' and
Each of CS21 ' to CS2m ' may include at least one the drain selection transistor being stacked on substrate (not shown)
SST, first memory unit MC1 are to the n-th memory cell MCn and at least one drain electrode selection transistor DST, wherein serving as a contrast
Bottom is located at the lower section of memory block BLKb.
The drain selection transistor SST of each unit string is connected in common source polar curve CSL and memory cell MC1 to MCn
Between.The drain selection transistor for the unit string being arranged in mutually colleague is attached to identical drain selection line.It is arranged in first
The drain selection transistor of unit string CS11 ' to CS1m ' in row could be attached to the first drain selection line SSL1.It is arranged in
The drain selection transistor of unit string CS21 ' to CS2m ' in two rows could be attached to the second drain selection line SSL2.Implementing
In example, the drain selection transistor of unit string CS11 ' to CS1m ' and CS21 ' to CS2m ' can be attached to single source electrode choosing jointly
Select line.
The memory cell MCn coupled in series of first memory unit MC1 to n-th in each unit string is in drain selection crystalline substance
Between body pipe SST and drain electrode selection transistor DST.The grid of first memory unit MC1 to the n-th memory cell MCn is distinguished
The first wordline WL1 is attached to the n-th wordline WLn.
The drain electrode selection transistor DST of each unit string is connected in respective bit line and memory cell MC1 between MCn.
The drain electrode selection transistor for the unit string arranged in the row direction could be attached to the drain electrode selection line extended in the row direction.The
The drain electrode selection transistor of unit string CS11 ' to CS1m ' in a line is attached to the first drain electrode selection line DSL1.In second row
The drain electrode selection transistor of unit string CS21 ' to CS2m ' could be attached to the second drain electrode selection line DSL2.
Therefore, other than not having tunnel transistor PT in each unit string, the memory block BLKb of Figure 13 be can have
Equivalent circuit similar with the circuit of memory block BLKa of Figure 12.
In embodiment, even bitlines and odd bit lines can be set to replace the first bit line BL1 to m bit line BLm.?
The unit string of even-numbered of the unit string CS11 ' to CS1m ' or CS21 ' arranged on line direction into CS2m ' could be attached to
Respective even bitlines, the unit string CS11 ' to CS1m ' or CS21 ' arranged in the row direction obtain odd-numbered into CS2m '
Unit string could be attached to respective odd bit lines.
In embodiment, at least one of first memory unit MC1 to the n-th memory cell MCn may be used as void
Quasi- memory cell.For example, at least one or more virtual memory unit can be set to reduce drain selection transistor SST
And memory cell MC1 is to the electric field between MCn.It is alternatively possible to provide at least one or more virtual memory unit
Reduce drain electrode selection transistor DST and memory cell MC1 to the electric field between MCn.With the quantity of virtual memory unit
Increase, the operating reliability of memory block BLKb can increase, while the size of memory block BLKb can be increased.With virtual memory
The quantity of device unit is reduced, and the size of memory block BLKb can reduce, but the operating reliability of memory block BLKb may be decreased.
In order to efficiently control at least one virtual memory unit, each of virtual memory unit be can have
Required threshold voltage.It, can be in virtual memory unit before or after executing erasing operation to memory block BLKb
All or some executes programming operation.In the case where the later execution erasing operation that programming operation has executed, pass through control
To be applied to the voltage of the dummy word lines coupled with each virtual memory unit, virtual memory unit can have required
Threshold voltage.
Figure 14 is to show the memory block for including in the memory cell array 110 according to Figure 10 of the embodiment of the present disclosure
The circuit diagram of any one memory block BLKc of the BLK1 into BLKz.
Referring to Fig.1 4, memory block BLKc may include multiple string SR.Multiple string SR can be respectively coupled to multiple bit line BL1
To BLn.Each string SR may include drain selection transistor SST, memory cell MC and drain electrode selection transistor DST.
The drain selection transistor SST of each string SR can be connected between memory cell MC and common source polar curve CSL.
The drain selection transistor SST of multiple string SR can be attached to common source polar curve CSL jointly.
The drain electrode selection transistor DST of each string SR can be connected between memory cell MC and corresponding bit line BL.
The drain electrode selection transistor DST of string SR can be respectively coupled to bit line BL1 to BLn.
In each string SR, multiple deposit can be set between drain selection transistor SST and drain electrode selection transistor DST
Storage unit MC.In each string SR, memory cell MC can be coupled to one another in series.
In each string SR, the memory cell MC that is arranged in the identical circle (turn) away from common source polar curve CSL
It can be attached to single wordline jointly.The memory cell MC of multiple string SR could be attached to multiple wordline WL1 to WLm.
In memory block BLKc, erasing operation can be executed based on memory block.When executing erasing operation based on memory block,
Whole memory cells in simultaneously erased memory block BLKc can be requested in response to erasing.
Figure 15 is a diagram illustrating an embodiment of the memory controller 200 of Fig. 1.
A memory controller 1000 is coupled to a host and a memory device. In response to a request from the host, the memory controller 1000 may access the memory device. For example, the memory controller 1000 may control write, read, erase and background operations of the memory device. The memory controller 1000 may provide an interface between the memory device and the host. The memory controller 1000 may drive firmware for controlling the memory device.
Referring to Fig. 15, the memory controller 1000 may include a processor 1010, a memory buffer 1020, an error correction code (ECC) circuit 1030, a host interface 1040, a buffer control circuit 1050, a memory interface 1060 and a bus 1070.
The bus 1070 may provide channels between the components of the memory controller 1000.
The processor 1010 may control the overall operation of the memory controller 1000 and perform logical operations. The processor 1010 may communicate with the external host through the host interface 1040 and communicate with the memory device through the memory interface 1060. In addition, the processor 1010 may communicate with the memory buffer 1020 through the buffer control circuit 1050. The processor 1010 may control the operation of the storage device by using the memory buffer 1020 as a working memory, a cache memory or a buffer memory.
The processor 1010 may perform the function of a flash translation layer (FTL). Through the FTL, the processor 1010 may translate a logical block address (LBA) provided by the host into a physical block address (PBA). The FTL may receive the LBA and translate it into the PBA using a mapping table. The address mapping method used by the FTL may vary depending on the mapping unit. Representative address mapping methods include a page mapping method, a block mapping method and a hybrid mapping method.
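A minimal page-mapping sketch is shown below. The flat table and the helper names are assumptions made for illustration and do not represent the FTL of this disclosure.

```c
/* Page-mapping FTL: a flat logical-to-physical table indexed by LBA. */
#include <stdint.h>
#include <stdio.h>

#define NUM_LBAS    1024
#define PBA_INVALID 0xFFFFFFFFu

static uint32_t l2p[NUM_LBAS];            /* LBA -> PBA mapping table */

static void ftl_init(void)
{
    for (uint32_t i = 0; i < NUM_LBAS; i++)
        l2p[i] = PBA_INVALID;
}

static void ftl_map(uint32_t lba, uint32_t pba) { l2p[lba] = pba; }

static uint32_t ftl_lookup(uint32_t lba)        { return l2p[lba]; }

int main(void)
{
    ftl_init();
    ftl_map(4, 0x0300u);                  /* e.g. LBA4 -> a page on die 3 */
    printf("LBA 4 -> PBA 0x%04X\n", (unsigned)ftl_lookup(4));
    return 0;
}
```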
The processor 1010 may randomize data received from the host. For example, the processor 1010 may randomize the data received from the host using a randomizing seed. The randomized data is provided to the memory device as data to be stored and is programmed to the memory cell array.
During a read operation, the processor 1010 may derandomize data received from the memory device 100. For example, the processor 1010 may derandomize the data received from the memory device using a derandomizing seed. The derandomized data may be output to the host.
In an embodiment, the processor 1010 may run software or firmware to perform the randomizing or derandomizing operation.
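For illustration only, the sketch below expands the seed with a simple xorshift sequence; the actual scrambler used by the controller is not specified here, and XOR-based scrambling is just one possible choice. Because XOR with the same sequence is self-inverse, the same seed both randomizes and derandomizes a page.

```c
/* Seed-based randomizing/derandomizing by XOR with a pseudo-random stream. */
#include <stdint.h>
#include <stdio.h>

static uint32_t xorshift32(uint32_t *s)
{
    *s ^= *s << 13; *s ^= *s >> 17; *s ^= *s << 5;
    return *s;
}

static void scramble(uint8_t *buf, size_t len, uint32_t seed)
{
    uint32_t s = seed ? seed : 1;          /* xorshift state must be nonzero */
    for (size_t i = 0; i < len; i++)
        buf[i] ^= (uint8_t)xorshift32(&s);
}

int main(void)
{
    uint8_t page[16] = "HOST DATA 12345";
    scramble(page, sizeof(page), 0x1234);  /* randomize before programming */
    scramble(page, sizeof(page), 0x1234);  /* derandomize after reading    */
    printf("%s\n", page);                  /* prints the original data     */
    return 0;
}
```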
The memory buffer 1020 may be used as a working memory, a cache memory or a buffer memory of the processor 1010. The memory buffer 1020 may store codes and commands to be executed by the processor 1010. The memory buffer 1020 may store data to be processed by the processor 1010. The memory buffer 1020 may include static RAM (SRAM) or dynamic RAM (DRAM).
The ECC circuit 1030 may perform error correction. The ECC circuit 1030 may perform ECC encoding on data to be written to the memory device 100 through the memory interface 1060. The ECC-encoded data may be transferred to the memory device 100 through the memory interface 1060. The ECC circuit 1030 may perform ECC decoding on data received from the memory device 100 through the memory interface 1060. For example, the ECC circuit 1030 may be included in the memory interface 1060 as a component of the memory interface 1060.
The host interface 1040 may communicate with the external host under the control of the processor 1010. The host interface 1040 may perform the communication using at least one of various communication methods such as universal serial bus (USB), serial AT attachment (SATA), serial attached SCSI (SAS), high speed interchip (HSIC), small computer system interface (SCSI), peripheral component interconnection (PCI), PCI express (PCIe), nonvolatile memory express (NVMe), universal flash storage (UFS), secure digital (SD), multimedia card (MMC), embedded MMC (eMMC), dual in-line memory module (DIMM), registered DIMM (RDIMM) and load reduced DIMM (LRDIMM) communication methods.
The buffer control circuit 1050 may control the memory buffer 1020 under the control of the processor 1010.
The memory interface 1060 may communicate with the memory device 100 under the control of the processor 1010. The memory interface 1060 may communicate commands, addresses and data with the memory device 100 through a channel.
In an example, the memory controller 1000 may include neither the memory buffer 1020 nor the buffer control circuit 1050.
For example, the processor 1010 may use codes to control the operation of the memory controller 1000. The processor 1010 may load the codes from a nonvolatile memory device (for example, a read-only memory) provided in the memory controller 1000. Alternatively, the processor 1010 may load the codes from the memory device through the memory interface 1060.
For example, the bus 1070 of the memory controller 1000 may be divided into a control bus and a data bus. The data bus may transmit data in the memory controller 1000. The control bus may transmit control information, such as commands and addresses, in the memory controller 1000. The data bus and the control bus may be separated from each other and may neither interfere with nor affect each other. The data bus may be coupled to the host interface 1040, the buffer control circuit 1050, the ECC circuit 1030 and the memory interface 1060. The control bus may be coupled to the host interface 1040, the processor 1010, the buffer control circuit 1050, the memory buffer 1020 and the memory interface 1060.
Figure 16 is a block diagram illustrating a memory card system 2000 to which the storage device according to an embodiment of the present disclosure is applied.
Referring to Fig. 16, the memory card system 2000 may include a memory controller 2100, a memory device 2200 and a connector 2300.
The memory controller 2100 is coupled to the memory device 2200. The memory controller 2100 may access the memory device 2200. For example, the memory controller 2100 may control read, write, erase and background operations of the memory device 2200. The memory controller 2100 may provide an interface between the memory device 2200 and the host. The memory controller 2100 may drive firmware for controlling the memory device 2200. The memory controller 2100 may be implemented in the same manner as the memory controller 200 described with reference to Fig. 1.
In an embodiment, the memory controller 2100 may include components such as a random access memory (RAM), a processing unit, a host interface, a memory interface and an ECC circuit.
The memory controller 2100 may communicate with an external device through the connector 2300. The memory controller 2100 may communicate with the external device (for example, a host) based on a specific communication protocol. In an embodiment, the memory controller 2100 may communicate with the external device through at least one of various communication protocols such as universal serial bus (USB), multimedia card (MMC), embedded MMC (eMMC), peripheral component interconnection (PCI), PCI express (PCI-E), advanced technology attachment (ATA), serial ATA (SATA), parallel ATA (PATA), small computer system interface (SCSI), enhanced small disk interface (ESDI), integrated drive electronics (IDE), FireWire, universal flash storage (UFS), Wi-Fi, Bluetooth and nonvolatile memory express (NVMe) protocols. In an embodiment, the connector 2300 may be defined by at least one of the above-described various communication protocols.
In an embodiment, the memory device 2200 may be implemented as any one of various nonvolatile memory devices such as an electrically erasable and programmable ROM (EEPROM), a NAND flash memory, a NOR flash memory, a phase-change RAM (PRAM), a resistive RAM (ReRAM), a ferroelectric RAM (FRAM) and a spin-transfer torque magnetic RAM (STT-MRAM).
In an embodiment, the memory controller 2100 and the memory device 2200 may be integrated into a single semiconductor device to form a memory card, such as a personal computer memory card international association (PCMCIA) card, a compact flash card (CF), a smart media card (SM or SMC), a memory stick, a multimedia card (MMC, RS-MMC or MMCmicro), an SD card (SD, miniSD, microSD or SDHC) or a universal flash storage (UFS).
Figure 17 is a block diagram illustrating a solid state drive (SSD) system 3000 to which the storage device according to an embodiment of the present disclosure is applied.
Referring to Fig. 17, the SSD system 3000 may include a host 3100 and an SSD 3200. The SSD 3200 may exchange signals SIG with the host 3100 through a signal connector 3001 and may receive power PWR through a power connector 3002. The SSD 3200 may include an SSD controller 3210, multiple flash memories 3221 to 322n, an auxiliary power supply 3230 and a buffer memory 3240.
In an embodiment, the SSD controller 3210 may perform the function of the memory controller 200 described with reference to Fig. 1.
The SSD controller 3210 may control the multiple flash memories 3221 to 322n in response to signals SIG received from the host 3100. In an embodiment, the signals SIG may be signals based on an interface between the host 3100 and the SSD 3200. For example, the signals SIG may be defined by at least one of various interfaces such as universal serial bus (USB), multimedia card (MMC), embedded MMC (eMMC), peripheral component interconnection (PCI), PCI express (PCI-E), advanced technology attachment (ATA), serial ATA (SATA), parallel ATA (PATA), small computer system interface (SCSI), enhanced small disk interface (ESDI), integrated drive electronics (IDE), FireWire, universal flash storage (UFS), Wi-Fi, Bluetooth and nonvolatile memory express (NVMe) interfaces.
The auxiliary power supply 3230 may be coupled to the host 3100 through the power connector 3002. The auxiliary power supply 3230 may be supplied with power PWR from the host 3100 and may be charged by the power PWR. When the power supply from the host 3100 is not stable, the auxiliary power supply 3230 may supply power to the SSD 3200. In an embodiment, the auxiliary power supply 3230 may be located inside or outside the SSD 3200. For example, the auxiliary power supply 3230 may be disposed on a main board and may supply auxiliary power to the SSD 3200.
The buffer memory 3240 functions as a buffer memory of the SSD 3200. For example, the buffer memory 3240 may temporarily store data received from the host 3100 or data received from the multiple flash memories 3221 to 322n, or may temporarily store metadata (for example, mapping tables) of the flash memories 3221 to 322n. The buffer memory 3240 may include volatile memory such as DRAM, SDRAM, DDR SDRAM, LPDDR SDRAM and GRAM, or nonvolatile memory such as FRAM, ReRAM, STT-MRAM and PRAM.
Figure 18 is a block diagram illustrating a user system 4000 to which the storage device according to an embodiment of the present disclosure is applied.
Referring to Fig. 18, the user system 4000 may include an application processor 4100, a memory module 4200, a network module 4300, a storage module 4400 and a user interface 4500.
The application processor 4100 may run components included in the user system 4000, an operating system (OS) or user programs. In an embodiment, the application processor 4100 may include controllers for controlling the components included in the user system 4000, interfaces, a graphics engine and the like. The application processor 4100 may be provided as a system on chip (SoC).
The memory module 4200 may be used as a main memory, a working memory, a buffer memory or a cache memory of the user system 4000. The memory module 4200 may include volatile RAM such as DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, DDR3 SDRAM, LPDDR SDRAM and LPDDR3 SDRAM, or nonvolatile RAM such as PRAM, ReRAM, MRAM and FRAM. In an embodiment, the application processor 4100 and the memory module 4200 may be packaged based on a package on package (PoP) and may then be provided as a single semiconductor package.
The network module 4300 may communicate with external devices. For example, the network module 4300 may support wireless communication such as code division multiple access (CDMA), global system for mobile communications (GSM), wideband CDMA (WCDMA), CDMA-2000, time division multiple access (TDMA), long term evolution (LTE), WiMAX, WLAN, UWB, Bluetooth or Wi-Fi communication. In an embodiment, the network module 4300 may be included in the application processor 4100.
The storage module 4400 may store data therein. For example, the storage module 4400 may store data received from the application processor 4100. Alternatively, the storage module 4400 may transmit data stored in the storage module 4400 to the application processor 4100. In an embodiment, the storage module 4400 may be implemented as a nonvolatile semiconductor memory device such as a phase-change RAM (PRAM), a magnetic RAM (MRAM), a resistive RAM (RRAM), a NAND flash memory, a NOR flash memory or a NAND flash memory having a three-dimensional structure. In an embodiment, the storage module 4400 may be provided as a removable storage medium (that is, a removable drive) of the user system 4000, such as a memory card or an external drive.
In an embodiment, the storage module 4400 may include multiple nonvolatile memory devices, and each of the multiple nonvolatile memory devices may operate in the same manner as the memory device 100 described above with reference to Figs. 10 to 14. The storage module 4400 may operate in the same manner as the storage device 50 described above with reference to Fig. 1.
The user interface 4500 may include interfaces for inputting data or instructions to the application processor 4100 or for outputting data to an external device. In an embodiment, the user interface 4500 may include user input interfaces such as a keyboard, a keypad, buttons, a touch panel, a touch screen, a touch pad, a touch ball, a camera, a microphone, a gyro sensor, a vibration sensor and a piezoelectric device. The user interface 4500 may further include user output interfaces such as a liquid crystal display (LCD), an organic light emitting diode (OLED) display device, an active matrix OLED (AMOLED) display device, an LED, a speaker and a motor.
Various embodiments of the present disclosure may provide a storage device including a read cache memory block and an operating method of the storage device.
According to the present disclosure, data that has a high read frequency and cannot be read in a data interleaving manner because it is stored in the same memory device may be dispersed and stored to read cache memory blocks, so that the data having the high read frequency can be read more quickly.
While embodiments of the present disclosure have been described, those skilled in the art will understand that various modifications, additions and substitutions are possible without departing from the scope and spirit of the present disclosure.
Therefore, the scope of the present disclosure must be defined by the appended claims and equivalents thereof rather than by the foregoing description.
In the above-described embodiments, all steps may be selectively performed or skipped. In addition, the steps in each embodiment are not always performed in the given order, and may be performed in a different order. Furthermore, the embodiments disclosed in the specification and the drawings are intended to help those skilled in the art more clearly understand the present disclosure, and are not intended to limit the scope of the present disclosure. In other words, those of ordinary skill in the art to which the present disclosure pertains will readily understand that various modifications based on the technical scope of the present disclosure are possible.
Embodiments of the present disclosure have been described with reference to the accompanying drawings, and specific terms or words used in the description should be interpreted in accordance with the spirit of the present disclosure without limiting the subject matter thereof. It should be understood that many variations and modifications of the basic inventive concept described herein will still fall within the spirit and scope of the present disclosure as defined in the appended claims and their equivalents.
Claims (17)
1. A storage device, comprising:
multiple memory devices, each of the multiple memory devices comprising at least one or more read cache memory blocks and multiple main memory blocks; and
a memory controller configured to disperse and store data, among the data stored in the multiple main memory blocks, that is stored in the same memory device and has a read count exceeding a threshold value, to the at least one or more read cache memory blocks, the read count indicating a number of read requests.
2. The storage device according to claim 1, wherein the at least one or more read cache memory blocks are formed of single-level cells each storing a single data bit.
3. The storage device according to claim 1, wherein the multiple main memory blocks are formed of memory cells each comprising any one of: a multi-level cell storing two data bits, a triple-level cell storing three data bits and a quad-level cell storing four data bits.
4. The storage device according to claim 1, wherein the memory controller comprises:
an operation control unit configured to control the multiple memory devices in response to read requests input from an external host; and
a read cache block control unit configured to determine cached data based on read counts of logical block addresses of the data stored in the multiple main memory blocks, the cached data being data corresponding to at least two or more of the logical block addresses.
5. The storage device according to claim 4, wherein the read cache block control unit comprises:
a cached data information storage unit configured to store cached data information including the read counts of the logical block addresses of the data stored in the multiple main memory blocks and physical block addresses corresponding to the respective logical block addresses of the data stored in the multiple main memory blocks; and
a cached data management unit configured to determine the cached data based on the cached data information.
6. The storage device according to claim 5, wherein the cached data management unit detects, based on the cached data information, at least two or more logical block addresses, among the logical block addresses of the data stored in the multiple main memory blocks, whose respective read counts exceed the threshold value.
7. The storage device according to claim 6, wherein the cached data management unit determines, among the data corresponding to the at least two or more logical block addresses, data stored in the same memory device among the multiple memory devices to be the cached data.
8. The storage device according to claim 7, wherein the cached data management unit provides the physical block addresses of the cached data to the operation control unit.
9. The storage device according to claim 8, wherein the operation control unit disperses and stores the cached data to the at least one or more read cache memory blocks based on the physical block addresses.
10. The storage device according to claim 9, wherein the cached data is data, among the data stored in the multiple main memory blocks, that cannot be read in a data interleaving manner.
11. A method of operating a storage device, the storage device including multiple memory devices and a memory controller, the multiple memory devices each being attached to the same channel and each including at least one or more read cache memory blocks and multiple main memory blocks, and the memory controller controlling the multiple memory devices, the method comprising:
detecting, among the data stored in the multiple main memory blocks, data having a read count exceeding a threshold value, the read count indicating a number of read requests; and
dispersing and storing the data having the read count exceeding the threshold value into the at least one or more read cache memory blocks included in each of the multiple memory devices, according to whether the data having the read count exceeding the threshold value are stored in the same memory device.
12. The method according to claim 11, wherein the detecting includes:
comparing the read counts of the logical block addresses of the data stored in the multiple main memory blocks with the threshold value; and
detecting at least two or more logical block addresses whose read counts each exceed the threshold value.
13. The method according to claim 12, wherein the dispersing and storing includes:
determining, as cache data, data that correspond to the at least two or more logical block addresses and are stored in the same memory device; and
copying the cache data to the at least one or more read cache memory blocks included in each of the multiple memory devices.
14. The method according to claim 13, wherein the copying includes:
detecting the physical block addresses of the cache data; and
performing a read operation on the physical block addresses and programming the read data to the at least one or more read cache memory blocks included in each of the multiple memory devices.
15. The method according to claim 11, wherein the at least one or more read cache memory blocks are formed of single-level cells each storing a single data bit.
16. The method according to claim 11, wherein the multiple main memory blocks are each formed of any one of the following memory cells: multi-level cells each storing two data bits, triple-level cells each storing three data bits, and quad-level cells each storing four data bits.
17. A storage system, comprising:
multiple memory devices, each of the multiple memory devices including a read cache memory block and main memory blocks; and
a controller, wherein the controller:
detects, as cache data, data having a read count greater than a threshold value among the data stored in each of the main memory blocks of the multiple memory devices; and
when multiple cache data segments are detected in one of the multiple memory devices, disperses the multiple cache data segments into the read cache memory blocks of the memory devices, so that a subsequent read operation is performed on the dispersed cache data segments in an interleaved manner.
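The claims above describe a firmware-level policy: track per-logical-block-address read counts, detect "hot" data whose counts exceed a threshold and whose copies happen to reside on the same memory device, and spread that data over the read cache blocks of the devices sharing the channel. The C sketch below is only an illustration of that policy under simplifying assumptions (a flat LBA table, fixed device and page counts); every identifier, including `disperse_hot_data`, `nand_read`, and `nand_program_cache_block`, is hypothetical and does not come from the patent or any real controller firmware.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_DEVICES 4      /* memory devices attached to one channel (assumed) */
#define NUM_LBAS    1024   /* logical block addresses tracked by the sketch    */
#define READ_THRESH 100    /* read count above which data is treated as "hot"  */
#define PAGE_SIZE   4096

struct lba_entry {
    uint32_t read_count;   /* number of read requests seen for this LBA       */
    uint8_t  device;       /* memory device holding the main-block copy       */
    uint32_t pba;          /* physical block address of the main-block copy   */
    bool     cached;       /* already copied to a read cache block?           */
    uint8_t  cache_device; /* device whose read cache block holds the copy    */
    uint32_t cache_pba;    /* physical block address of the cached copy       */
};

static struct lba_entry map[NUM_LBAS];

/* Stand-ins for the controller's NAND operations (hypothetical, not a real API). */
static void nand_read(uint8_t device, uint32_t pba, void *buf)
{
    (void)device; (void)pba; (void)buf;  /* firmware would issue a page read here */
}

static uint32_t next_cache_page[NUM_DEVICES];

static uint32_t nand_program_cache_block(uint8_t device, const void *buf)
{
    (void)buf;                           /* firmware would program an SLC page here */
    return next_cache_page[device]++;
}

/*
 * Detect data whose read count exceeds the threshold and which sit on the
 * same memory device, then spread copies round-robin over the read cache
 * blocks of all devices so that later reads can be interleaved.
 */
static void disperse_hot_data(void)
{
    uint8_t page[PAGE_SIZE];
    uint8_t next_dev = 0;

    for (uint8_t dev = 0; dev < NUM_DEVICES; dev++) {
        size_t hot_here = 0;

        /* Count hot, not-yet-cached LBAs that all reside on device `dev`. */
        for (size_t lba = 0; lba < NUM_LBAS; lba++)
            if (!map[lba].cached && map[lba].device == dev &&
                map[lba].read_count > READ_THRESH)
                hot_here++;

        /* Dispersal only helps when two or more hot LBAs share one device. */
        if (hot_here < 2)
            continue;

        for (size_t lba = 0; lba < NUM_LBAS; lba++) {
            if (map[lba].cached || map[lba].device != dev ||
                map[lba].read_count <= READ_THRESH)
                continue;

            /* Copy: read the main block, program a read cache block of the
             * next device in round-robin order.                             */
            nand_read(dev, map[lba].pba, page);
            map[lba].cache_pba    = nand_program_cache_block(next_dev, page);
            map[lba].cache_device = next_dev;
            map[lba].cached       = true;
            next_dev = (next_dev + 1) % NUM_DEVICES;
        }
    }
}
```

The round-robin target choice is only one possible dispersal order; the claims require only that co-located hot data end up in read cache blocks of different memory devices.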
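Claims 10 and 17 state the motivation for the dispersal: data confined to a single device cannot be read in an interleaved manner, while copies spread over several devices on the same channel can be. Continuing the sketch above (and reusing its `map[]` table, `PAGE_SIZE` constant, and hypothetical NAND helpers), the fragment below shows how a subsequent read of a hot range would naturally alternate between devices.

```c
/* Hypothetical asynchronous read; a real controller would queue the command. */
static void nand_read_async(uint8_t device, uint32_t pba, void *buf)
{
    (void)device; (void)pba; (void)buf;
}

/*
 * Read a run of hot LBAs. Because their cached copies were spread
 * round-robin over the devices, consecutive requests target different
 * devices on the channel and their array reads can overlap.
 */
static void read_hot_range(const size_t *lbas, size_t n,
                           uint8_t bufs[][PAGE_SIZE])
{
    for (size_t i = 0; i < n; i++) {
        const struct lba_entry *e = &map[lbas[i]];

        if (e->cached)
            nand_read_async(e->cache_device, e->cache_pba, bufs[i]);
        else
            nand_read_async(e->device, e->pba, bufs[i]);
    }
}
```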
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2018-0031751 | 2018-03-19 | ||
KR1020180031751A KR102535104B1 (en) | 2018-03-19 | 2018-03-19 | Storage device and operating method thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110287130A (en) | 2019-09-27 |
CN110287130B (en) | 2024-03-08 |
Family
ID=67905591
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811300759.6A Active CN110287130B (en) | 2018-03-19 | 2018-11-02 | Memory device and method of operating the same |
Country Status (3)
Country | Link |
---|---|
US (1) | US10853236B2 (en) |
KR (1) | KR102535104B1 (en) |
CN (1) | CN110287130B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016081192A1 (en) * | 2014-11-20 | 2016-05-26 | Rambus Inc. | Memory systems and methods for improved power management |
KR20190090635A (en) * | 2018-01-25 | 2019-08-02 | 에스케이하이닉스 주식회사 | Data storage device and operating method thereof |
KR102695175B1 (en) * | 2019-10-11 | 2024-08-14 | 에스케이하이닉스 주식회사 | Memory system, memory controller, and operating method |
KR20210046481A (en) * | 2019-10-18 | 2021-04-28 | 삼성전자주식회사 | Electronic apparatus and controlling method thereof |
US11645006B2 (en) * | 2020-04-30 | 2023-05-09 | Macronix International Co., Ltd. | Read performance of memory devices |
KR20230018831A (en) * | 2021-07-30 | 2023-02-07 | 에스케이하이닉스 주식회사 | Memory system and operating method thereof |
TWI850805B (en) * | 2022-10-19 | 2024-08-01 | 慧榮科技股份有限公司 | Memory operation method and memory device |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1607879A1 (en) * | 2004-06-14 | 2005-12-21 | Dialog Semiconductor GmbH | Memory interleaving in a computer system |
US8190809B2 (en) * | 2004-11-23 | 2012-05-29 | Efficient Memory Technology | Shunted interleave for accessing plural memory banks, particularly those having partially accessed cells containing data for cache lines |
KR101498673B1 (en) | 2007-08-14 | 2015-03-09 | 삼성전자주식회사 | Solid state drive, data storing method thereof, and computing system including the same |
2018
- 2018-03-19 KR KR1020180031751A patent/KR102535104B1/en active IP Right Grant
- 2018-10-08 US US16/154,358 patent/US10853236B2/en active Active
- 2018-11-02 CN CN201811300759.6A patent/CN110287130B/en active Active
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1624802A (en) * | 1998-07-01 | 2005-06-08 | 株式会社日立制作所 | Semiconductor memory device and cache |
US20010042174A1 (en) * | 1998-07-31 | 2001-11-15 | Anurag Gupta | Method and apparatus for determining interleaving schemes in a computer system that supports multiple interleaving schemes |
US20020178083A1 (en) * | 1999-09-10 | 2002-11-28 | Krys Cianciarulo | Systems and methods for insuring data over the internet |
US20060080506A1 (en) * | 2004-10-07 | 2006-04-13 | International Business Machines Corporation | Data replication in multiprocessor NUCA systems to reduce horizontal cache thrashing |
CN101069211A (en) * | 2004-11-23 | 2007-11-07 | 高效存储技术公司 | Method and apparatus of multiple abbreviations of interleaved addressing of paged memories and intelligent memory banks therefor |
US20160071608A1 (en) * | 2006-11-29 | 2016-03-10 | Rambus Inc. | Dynamic memory rank configuration |
CN101349963A (en) * | 2007-07-19 | 2009-01-21 | 三星电子株式会社 | Solid state disk controller and data processing method thereof |
JP2009037317A (en) * | 2007-07-31 | 2009-02-19 | Panasonic Corp | Memory controller, non-volatile storage device using the same, and non-volatile memory system |
US20120054421A1 (en) * | 2010-08-25 | 2012-03-01 | Hitachi, Ltd. | Information device equipped with cache memories, apparatus and program using the same device |
CN103562883A (en) * | 2011-05-31 | 2014-02-05 | 美光科技公司 | Dynamic memory cache size adjustment in a memory device |
US20130046920A1 (en) * | 2011-08-17 | 2013-02-21 | Samsung Electronics Co., Ltd. | Nonvolatile memory system with migration manager |
CN103839584A (en) * | 2012-11-20 | 2014-06-04 | 爱思开海力士有限公司 | Semiconductor memory device, memory system including the same and operating method thereof |
US20160364337A1 (en) * | 2015-06-10 | 2016-12-15 | Micron Technology, Inc. | Memory having a static cache and a dynamic cache |
CN107025177A (en) * | 2016-02-01 | 2017-08-08 | 爱思开海力士有限公司 | Accumulator system and its operating method |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111739455A (en) * | 2020-05-21 | 2020-10-02 | 昆明物理研究所 | Device and method for converting self-adaptive arbitrary frame frequency digital video signal and VGA (video graphics array) |
CN114546249A (en) * | 2020-11-26 | 2022-05-27 | 爱思开海力士有限公司 | Data storage device and operation method thereof |
CN114546249B (en) * | 2020-11-26 | 2024-04-02 | 爱思开海力士有限公司 | Data storage device and method of operating the same |
Also Published As
Publication number | Publication date |
---|---|
KR102535104B1 (en) | 2023-05-23 |
US10853236B2 (en) | 2020-12-01 |
CN110287130B (en) | 2024-03-08 |
KR20190109985A (en) | 2019-09-27 |
US20190286555A1 (en) | 2019-09-19 |
Similar Documents
Publication | Title |
---|---|
CN110069212B (en) | Storage device and operation method of storage device |
CN110503997B (en) | Memory device and method of operating the same |
CN110321070B (en) | Memory controller and method of operating the same |
CN110275673B (en) | Memory device and method of operating the same |
CN111696608A (en) | Memory device and operation method thereof |
CN111104059B (en) | Memory controller and method of operating the same |
CN110287130A (en) | Storage device and its operating method |
CN111105829B (en) | Memory controller and method of operating the same |
CN110390970B (en) | Memory device and method of operating the same |
CN111258793B (en) | Memory controller and method of operating the same |
CN111258919B (en) | Storage device and method of operating the same |
CN110275672A (en) | Storage device and its operating method |
CN112908374A (en) | Memory controller and operating method thereof |
CN110399092A (en) | The method of storage device and operating memory device |
CN110780802A (en) | Memory controller and operating method thereof |
CN110175132A (en) | Storage device and its operating method |
CN110502449A (en) | Storage device and its operating method |
KR20210151374A (en) | Storage device and operating method thereof |
CN111445939B (en) | Memory device and method of operating the same |
CN110389722A (en) | Storage device and its operating method |
CN110619912A (en) | Storage device and operation method thereof |
KR20200066893A (en) | Memory controller and operating method thereof |
CN114078530A (en) | Memory device and operation method thereof |
CN111341372B (en) | Memory device and method of operating the same |
KR20200090556A (en) | Storage device and operating method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||