US20220350655A1 - Controller and memory system having the same - Google Patents

Info

Publication number
US20220350655A1
Authority
US
United States
Prior art keywords
command
data
output
input
controller
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/868,430
Inventor
Seung Gu JI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SK Hynix Inc
Original Assignee
SK Hynix Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SK Hynix Inc filed Critical SK Hynix Inc
Priority to US17/868,430
Publication of US20220350655A1
Legal status: Pending

Classifications

    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F3/0658 Controller construction arrangements
    • G06F13/1668 Details of memory controller
    • G06F3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F11/3041 Monitoring arrangements specially adapted to the computing system or component being monitored, where the component is an input/output interface
    • G06F12/0246 Memory management in non-volatile memory in block erasable memory, e.g. flash memory
    • G06F12/0292 User address space allocation using tables or multilevel address translation means
    • G06F3/0653 Monitoring storage devices or systems
    • G06F3/0656 Data buffering arrangements
    • G06F9/4418 Suspend and resume; Hibernate and awake
    • G06F9/485 Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F9/546 Message passing systems or structures, e.g. queues
    • G06F2212/1016 Performance improvement
    • G06F2212/1028 Power efficiency
    • G06F2212/7201 Logical to physical mapping or translation of blocks or pages
    • G06F2212/7203 Temporary buffering, e.g. using volatile buffer or dedicated buffer blocks
    • G06F2212/7208 Multiple device management, e.g. distributing data over multiple flash devices
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present disclosure generally relates to a controller and a memory system having the same, and more particularly, to a controller configured to perform a suspend operation in response to a suspend command, and a memory system having the controller.
  • a memory system may include a memory device and a controller.
  • the memory device may include a plurality of dies capable of storing data.
  • Memory cells included in the dies may be implemented as volatile memory cells in which stored data disappears when the supply of power is interrupted, or be implemented as nonvolatile memory cells in which stored data is retained even when the supply of power is interrupted.
  • the controller may control data communication between a host and the memory device. For example, the controller may control the memory device in response to a request from the host. Also, the controller may perform a background operation without any request from the host so as to improve the performance of the memory system.
  • the host may communicate with the memory device through the controller by using an interface protocol such as Peripheral Component Interconnect-Express (PCI-e or PCIe), Advanced Technology Attachment (ATA), Serial ATA (SATA), Parallel ATA (PATA), or Serial Attached SCSI (SAS). Alternatively, any of various other interface protocols, such as a Universal Serial Bus (USB), a Multi-Media Card (MMC), an Enhanced Small Disk Interface (ESDI), or Integrated Drive Electronics (IDE), may be used.
  • Embodiments provide a controller capable of preventing execution delay of a suspend command and a memory system having the controller.
  • a controller including: a command queue scheduler configured to queue normal commands, and provide a suspend command with a higher priority than the normal commands, when the suspend command is input; a data input/output component configured to output multiple data items in response to a data output signal from the command queue scheduler, and stop the output of the multiple data items in response to a data output stop signal from the command queue scheduler; and a data monitor configured to divide plural items of input data input to the data input/output component into a plurality of data groups, and monitor information of a current data group including data that is currently output from the data input/output component, wherein the data input/output component outputs preceding data and the currently output data in the current data group and stops the output of next data, in response to the data output stop signal, wherein the command queue scheduler outputs the suspend command, when the output of the current data group is stopped.
  • a memory system including: first and second dies coupled to the same channel; a processor configured to output a first command or a second command having a higher priority than that of the first command, in response to a request received from a host; and a flash interface layer configured to output data to the first die in response to the first command, wherein the flash interface layer: divides input data into a plurality of data groups; when the second command is input, outputs preceding data and currently output data in a current data group and then outputs the second command to the second die; and when execution of the second command ends, outputs, to the first die, data in a next data group of the current data group that has been completely output.
  • a memory system including: first and second dies; and a controller including: a processor suitable for receiving a request from a host, and generating a first command for a first die and a second command for a second die, the second command having a higher priority than that of the first command, in response to the request; and a flash interface layer coupled to the first and second dies through a channel, and suitable for receiving a plurality of data groups corresponding to the first command, each of the plurality of data groups including plural data items, providing some data groups among the plurality of data groups to the first die, suspending providing remaining data groups among the plurality of data groups to the first die when the second command is received, and providing the remaining data groups to the first die when the second command is executed.
  • FIG. 1 is a diagram illustrating a memory system in accordance with an embodiment of the present disclosure.
  • FIG. 2 is a diagram illustrating a die shown in FIG. 1 .
  • FIG. 3 is a diagram illustrating a command execution method in a multi-channel scheme.
  • FIG. 4 is a diagram illustrating a controller in accordance with an embodiment of the present disclosure.
  • FIG. 5 is a diagram illustrating a configuration of a flash interface layer.
  • FIG. 6 is a diagram illustrating a central processing unit in accordance with an embodiment of the present disclosure.
  • FIG. 7 is a diagram illustrating a function of a channel scheduler.
  • FIG. 8 is a diagram illustrating an address table.
  • FIG. 9 is a diagram illustrating a flash interface.
  • FIGS. 10 to 12 are diagrams illustrating a method in which a command queue scheduler queues a suspend command.
  • FIG. 13 is a diagram illustrating a data group setting method in accordance with an embodiment of the present disclosure.
  • FIG. 14 is a diagram illustrating a general suspend command processing method.
  • FIG. 15 is a diagram illustrating a suspend command processing method in accordance with an embodiment of the present disclosure.
  • FIG. 16 is a diagram illustrating another embodiment of the memory system including the controller shown in FIG. 1 .
  • FIG. 17 is a diagram illustrating another embodiment of the memory system including the controller shown in FIG. 1 .
  • FIG. 18 is a diagram illustrating another embodiment of the memory system including the controller shown in FIG. 1 .
  • FIG. 19 is a diagram illustrating another embodiment of the memory system including the controller shown in FIG. 1 .
  • FIG. 1 is a diagram illustrating a memory system in accordance with an embodiment of the present disclosure.
  • the memory system 1000 may include a memory device 1100 configured to store data and a controller 1200 configured to control the memory device 1100 .
  • the memory device 1100 may include a plurality of dies D 1 to Di (where i is a positive integer greater than 1).
  • the dies D 1 to Di may be implemented with a volatile memory device in which stored data disappears when the supply of power is interrupted or a nonvolatile memory device in which stored data is retained even when the supply of power is interrupted.
  • the memory system including the dies D 1 to Di implemented with the nonvolatile memory device is described as an example.
  • the nonvolatile memory device may be a NAND flash memory device.
  • the memory device 1100 may communicate with the controller 1200 through a plurality of channels CH 1 to CHk (where k is a positive integer greater than 1).
  • the dies D 1 to Di in the memory device 1100 may receive a command, an address, data, and the like from the controller 1200 through the channels CH 1 to CHk, and output data to the controller 1200 .
  • the controller 1200 may control the memory device 1100 in response to a request received from a host 2000 , and output data read from the memory device 1100 to the host 2000 .
  • the controller 1200 may store the received data in the memory device 1100 .
  • the controller 1200 may perform a read operation according to a physical address mapped to the logical address, and output read data to the host 2000 .
  • the controller 1200 may perform a background operation capable of managing the memory device 1100 without any request from the host 2000 .
  • the controller 1200 may perform a function including garbage collection, wear leveling, and the like.
  • the controller 1200 may perform various functions for efficiently managing the memory device 1100 .
  • when a suspend request is received from the host 2000, the controller 1200 may generate a suspend command corresponding to the suspend request. That is, the suspend command may have priority over a normal command.
  • the suspend command may be a read command
  • the normal command may be a program command or erase command.
  • the controller may immediately transmit the suspend command to a selected die through the selected channel.
  • the controller 1200 may queue commands such that the suspend command can be executed next after a command currently being executed in the selected channel.
  • the controller 1200 may transmit, to that die, the program data up to and including the data group being transmitted at the time the suspend command is received from the host, and then transmit the suspend command to the selected die.
  • the controller 1200 may re-transmit a program command, a physical address, and the program data to the other die on which the program data transmission operation is stopped.
  • the controller 1200 does not re-transmit data that has already been transmitted, but transmits only the data that has not yet been transmitted.
  • the controller 1200 may divide program data into a plurality of data groups, and determine which data group includes the data currently being transmitted. Accordingly, the memory system 1000 may prevent suspend command execution delay of a selected die, and reduce a resume operation time of another die.
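  • A minimal C sketch, assuming four equally sized data groups and illustrative byte counts (none of the names or sizes below are taken from the disclosure), of how the group that contains the currently transmitted data could be identified:

```c
#include <stdio.h>
#include <stddef.h>

#define GROUP_COUNT 4u   /* assumed number of data groups */

/* Return the 0-based index of the group that contains the byte at
 * offset bytes_sent, clamped to the last group. */
static unsigned current_group(size_t total_bytes, size_t bytes_sent)
{
    size_t group_size = (total_bytes + GROUP_COUNT - 1) / GROUP_COUNT;
    if (group_size == 0)
        return 0;
    unsigned idx = (unsigned)(bytes_sent / group_size);
    return idx < GROUP_COUNT ? idx : GROUP_COUNT - 1;
}

int main(void)
{
    size_t total = 16384;   /* assumed size of one program unit (bytes)  */
    size_t sent  = 6000;    /* bytes already transmitted on the channel  */

    printf("currently transmitting data group %u of %u\n",
           current_group(total, sent) + 1, GROUP_COUNT);
    return 0;
}
```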
  • the host 2000 may communicate with the memory system 1000 by using an interface protocol such as Peripheral Component Interconnect-Express (PCI-e or PCIe), Advanced Technology Attachment (ATA), Serial ATA (SATA), Parallel ATA (PATA), Serial Attached SCSI (SAS), or NonVolatile Memory Express (NVMe).
  • the interface protocol is not limited to the above-described examples; alternatively the host 2000 may communicate with the memory system 1000 through any of various other protocols such as a Universal Serial Bus (USB), a Multi-Media Card (MMC), an Enhanced Small Disk Interface (ESDI), and Integrated Drive Electronics (IDE).
  • FIG. 2 is a diagram illustrating a representative die Di of the dies shown in FIG. 1 .
  • the die Di may include a memory cell array 110 configured to store data, a peripheral circuit configured to perform a program, read or erase operation, and a logic circuit 170 configured to control the peripheral circuit.
  • the memory cell array 110 may include a plurality of memory blocks in which data is stored.
  • Each of the memory blocks may include a plurality of memory cells, and the memory cells may be implemented in a two-dimensional structure in which the memory cells are arranged in parallel to a substrate or a three-dimensional structure in which the memory cells are stacked vertically to a substrate.
  • the peripheral circuit may include a voltage generator 120 , a row decoder 130 , a page buffer group 140 , a column decoder 150 , and an input and output (input/output) circuit 160 .
  • the voltage generator 120 may generate and output operating voltages Vop necessary for various operations in response to an operation signal OPS.
  • the voltage generator 120 may generate and output a program voltage, a verify voltage, a read voltage, a pass voltage, an erase voltage, and the like.
  • the row decoder 130 may select one memory block among the memory blocks in the memory cell array 110 according to a row address RADD, and transmit operating voltages Vop to the selected memory block.
  • the page buffer group 140 may be coupled to the memory cell array 110 through bit lines, and include a plurality of page buffers coupled to the bit lines.
  • the plurality of page buffers may temporarily store data in a program or read operation in response to a page buffer control signal PBSIG.
  • the page buffers may include a plurality of latches for temporarily storing data. For example, data received through the channel CHk in the program operation may be temporarily stored in the page buffers and then programmed. Data read from the memory cell array 110 in the read operation may be temporarily stored in the page buffers and then output.
  • the column decoder 150 may sequentially transmit data received from the input/output circuit 160 to the page buffers included in the page buffer group 140 according to a column address CADD, or sequentially transmit data received from the page buffers to the input/output circuit 160 .
  • the input/output circuit 160 may be coupled to the controller 1200 through input/output lines of the channel CHk, and input and output a command CMD, an address ADD, and data DATA through the input/output lines.
  • the input/output circuit 160 may transmit the command CMD and the address ADD, which are received from the controller 1200 , to the logic circuit 170 , and transmit the data DATA received from the controller 1200 to the column decoder 150 .
  • the input/output circuit 160 may output data read from the memory cell array 110 to the controller 1200 through the input/output lines of the channel CHk.
  • the logic circuit 170 may output operation signals OPS, a row address RADD, page buffer control signals PBSIG, and a column address CADD, in response to a control signal CTSIG received from the controller 1200 through the channel CHk and the command CMD and the address ADD, which are received from the input/output circuit 160 .
  • FIG. 3 is a diagram illustrating a command execution method in a multi-channel scheme.
  • a plurality of dies D 1 to Di may be coupled to each of a plurality of channels CH 1 to CHk in the multi-channel scheme.
  • first to ith dies D 1 to Di may be coupled to a first channel CH 1
  • first to ith dies D 1 to Di may be coupled to a second channel CH 2
  • first to ith dies D 1 to Di may be coupled to a kth channel CHk.
  • First to ith dies D 1 to Di coupled to different channels may be physically different dies. Dies coupled to the same channel cannot be simultaneously selected, and dies coupled to different channels can be simultaneously selected.
  • a die selected in each channel is an example for describing the present disclosure, and therefore, dies simultaneously selected in different channels may vary depending on a command and an address.
  • FIG. 4 is a diagram illustrating a controller 1200 in accordance with an embodiment of the present disclosure.
  • the controller 1200 may include a host interface layer (HIL) 210 , a central processing unit (CPU) 220 , and a flash interface layer (FIL) 230 .
  • HIL host interface layer
  • CPU central processing unit
  • FIL flash interface layer
  • the HIL 210 may communicate between the host 2000 and the CPU 220 . For example, when the HIL 210 receives a request, a logical address, or data from the host 2000 , the HIL 210 may transmit the received request, logical address or data to the CPU 220 . Also, when the HIL 210 receives data from the CPU 220 , the HIL 210 may output the received data to the host 2000 .
  • the CPU 220 may communicate between the HIL 210 and the FIL 230 , and control overall operations of the controller 1200 .
  • the CPU 220 may convert a request received from the HIL 210 into a command, and transmit the command to the FIL 230 according to states of channels.
  • for example, the CPU 220 may transmit a command to the FIL 230 such that a specific channel is not overloaded, by considering commands queued in each channel, peak power, etc.
  • the CPU 220 may convert (or translate) a logical address received from the HIL 210 into a physical address, and transmit the physical address to the FIL 230 .
  • the CPU 220 may transmit data received from the HIL 210 to the FIL 230 .
  • the FIL 230 may communicate between the CPU 220 and the memory device 1100 .
  • the FIL 230 may receive a command, a physical address, or data from the CPU 220 , and transmit the command, the physical address, or the data to dies selected through channels.
  • the FIL 230 may queue commands according to a state of each of the channels, divide program data into a plurality of data groups, and store and update in real time information on a data group including data transmitted to a channel.
  • the FIL 230 may output the suspend command, when transmission of a currently loaded data group through a channel is ended.
  • when execution of the suspend command is ended, the FIL 230 may resume a stopped (or suspended) program operation.
  • the FIL 230 may re-transmit a command and a physical address with respect to the stopped program operation, and transmit untransmitted data through the channel.
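  • A sketch, under assumed names and command codes, of the transfer ordering the FIL follows here: finish the data group currently on the channel, serve the higher-priority command, then re-send the stopped command and address and only the untransmitted groups. Everything below is illustrative, not the actual firmware interface.

```c
#include <stdbool.h>
#include <stdio.h>

#define GROUPS 4

struct program_op {
    int  cmd;              /* program command                      */
    int  padd;             /* physical address                     */
    int  groups_sent;      /* index of the group now on the channel */
    bool suspended;
};

static void send_group(int ch, int g) { printf("CH%d: data group %d\n", ch, g + 1); }
static void send_cmd(int ch, int c)   { printf("CH%d: command 0x%02X\n", ch, c); }
static void send_addr(int ch, int a)  { printf("CH%d: address 0x%04X\n", ch, a); }

/* Called when a suspend command arrives while op is in progress. */
static void handle_suspend(int ch, struct program_op *op, int scmd, int saddr)
{
    send_group(ch, op->groups_sent);  /* the current group is completed, not cut short */
    op->groups_sent++;
    op->suspended = true;

    send_cmd(ch, scmd);               /* e.g., a read for another die on the channel */
    send_addr(ch, saddr);
}

/* Called when execution of the suspend command ends. */
static void resume(int ch, struct program_op *op)
{
    send_cmd(ch, op->cmd);            /* re-send command and physical address */
    send_addr(ch, op->padd);
    for (int g = op->groups_sent; g < GROUPS; g++)  /* untransmitted groups only */
        send_group(ch, g);
    op->suspended = false;
}

int main(void)
{
    /* One group fully sent; the second group is on the channel. */
    struct program_op op = { .cmd = 0x80, .padd = 0x1234, .groups_sent = 1 };

    handle_suspend(1, &op, 0x00, 0x5678);  /* suspend arrives mid-transfer */
    resume(1, &op);                        /* later, send the remaining groups */
    return 0;
}
```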
  • FIG. 5 is a diagram illustrating a configuration of the flash interface layer (FIL) 230 .
  • the FIL 230 may include a plurality of flash interfaces 1 FI to kFI.
  • the number of the flash interfaces 1 FI to kFI may be equal to that of the channels CH 1 to CHk communicating with the memory device 1100 .
  • the FIL 230 may include first to kth flash interfaces 1 FI to kFI.
  • the first flash interface 1 FI may communicate with the memory device 1100 through the first channel CH 1
  • the second flash interface 2 FI may communicate with the memory device 1100 through the second channel CH 2
  • the kth flash interface kFI may communicate with the memory device 1100 through the kth channel CHk.
  • the CPU 220 may selectively transmit a command CMD, a physical address PADD, and data DATA to the first to kth flash interfaces 1 FI to kFI according to states of the first to kth channels CH 1 to CHk.
  • the CPU 220 may transmit a program command when a program request is received, transmit a read command when a read request is received, and transmit an erase command when an erase request is received.
  • Each of the first to kth flash interfaces 1 FI to kFI may queue the command CMD received from the CPU 220 . Further, each of the first to kth flash interfaces 1 FI to kFI may transmit a program command CMDp, a physical address PADD, and data DATA to a selected die, or transmit a read command CMDr and a physical address PADD to a selected die, through a channel according to a queued order. Each of the first to kth flash interfaces 1 FI to kFI may queue various commands received from the CPU 220 , in addition to the program command CMDp and the read command CMDr, and output the commands according to a queued order.
  • the first flash interface 1 FI may output the program command CMDp, the physical address PADD, and the data DATA through the first channel CH 1
  • the kth flash interface kFI may output the read command CMDr and the physical address PADD through the kth channel CHk.
  • FIG. 6 is a diagram illustrating a central processing unit (CPU) 220 in accordance with an embodiment of the present disclosure.
  • the CPU 220 may include a command (CMD) generator 221 , a channel (CH) scheduler 222 , an address (ADD) table 223 , and a first buffer 224 .
  • the command generator 221 may convert a request RQ received from the host 2000 into a command CMD to be used in the memory system 1000 , and transmit the generated command CMD to the CH scheduler 222 .
  • the channel scheduler 222 may queue the command CMD received from the command generator 221 according to states of channels, and output the command CMD according to a queued order.
  • the address table 223 may be a table in which logical addresses LADD and physical addresses PADD are mapped to each other.
  • the address table 223 may be stored in an internal memory, which is included in the CPU 220 .
  • the address table 223 may be updated whenever mapped addresses are changed.
  • when a logical address LADD is received, the address table 223 may output a physical address PADD corresponding to the received logical address LADD.
  • the first buffer 224 may temporarily store data DATA received from the host 2000 , and transmit the data DATA to the FIL 230 according to a set data width.
  • FIG. 7 is a diagram illustrating a function of the channel (CH) scheduler 222 .
  • the channel scheduler 222 may include a second buffer 71 and a third buffer 72 .
  • the second buffer 71 is configured to store state information ST # on each of channels CH 1 , CH 2 , . . . .
  • the third buffer 72 is configured to temporarily store queued commands CMD according to the state information ST #.
  • the state information ST # in the second buffer 71 may include information on the number of commands CMD being executed or to be executed in each channel, or information on a current power consumption amount or predicted power consumption amount of each channel.
  • the state information ST # may include any or all of such information.
  • Information on predicted power consumption amount or information on an operation time may be stored in the channel scheduler 222 .
  • the channel scheduler 222 may update state information ST # of a corresponding channel in the second buffer 71 according to queuing information of commands stored in the third buffer 72 .
  • the state information ST # may be calculated as a workload for each channel and stored in the second buffer 71 .
  • the channel scheduler 222 may check a workload corresponding to state information ST # of the channels CH 1 , CH 2 , . . . , which are stored in the second buffer 71 . Further, the CH scheduler 222 may preferentially allocate a command CMD to a channel having a relatively low workload or allocate a command CMD having a relatively high workload to a channel having a relatively low workload. For example, according to workloads of the channels CH 1 , CH 2 , . . .
  • the channel scheduler 222 may allocate a first command CMD 1 to a third channel CH 3 ( 21 ), allocate a second command CMD 2 to a fourth channel CH 4 ( 22 ), allocate a third command CMD 3 to a first channel CH 1 ( 23 ), and allocate a fourth command CMD 4 to a second channel CH 2 ( 24 ).
  • the channel scheduler 222 may sequentially output the first to fourth commands CMD 1 to CMD 4 according to the order ( 21 , 22 , 23 , and 24 ) in which the first to fourth commands CMD 1 to CMD 4 are queued in the third buffer 72 , and update the state information ST # for each channel in the second buffer 71 .
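  • A minimal sketch of this workload-based allocation, assuming a single integer workload value per channel; the command costs and initial workloads are made up so that the allocation order matches the order described for FIG. 7:

```c
#include <stdio.h>

#define CHANNELS 4

static unsigned workload[CHANNELS];        /* state information ST# per channel */

/* Pick the channel with the lowest accumulated workload. */
static int pick_channel(void)
{
    int best = 0;
    for (int ch = 1; ch < CHANNELS; ch++)
        if (workload[ch] < workload[best])
            best = ch;
    return best;
}

static void allocate(const char *name, unsigned cost)
{
    int ch = pick_channel();
    workload[ch] += cost;                  /* update ST# after queuing the command */
    printf("%s -> CH%d (workload now %u)\n", name, ch + 1, workload[ch]);
}

int main(void)
{
    /* Assumed starting workloads for CH1..CH4. */
    workload[0] = 3; workload[1] = 4; workload[2] = 1; workload[3] = 2;

    allocate("CMD1", 4);   /* CH3, the least-loaded channel */
    allocate("CMD2", 4);   /* CH4 */
    allocate("CMD3", 4);   /* CH1 */
    allocate("CMD4", 4);   /* CH2 */
    return 0;
}
```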
  • FIG. 8 is a diagram illustrating the address (ADD) table 223 .
  • the address table 223 may include physical addresses PADD 1 to PADD # respectively mapped to logical addresses LADD 1 to LADD #.
  • the logical addresses LADD 1 to LADD # may be addresses used in the host 2000
  • the physical addresses PADD 1 to PADD # may be addresses used in the memory device 1100 .
  • the CPU 220 may update the address table 223 whenever the mapped addresses are changed.
  • when a logical address LADD is received, the CPU 220 may output a physical address PADD mapped to the received logical address LADD.
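  • A minimal sketch of a logical-to-physical mapping table of the kind the address table 223 represents; the table size, address widths, and helper names are assumptions:

```c
#include <stdint.h>
#include <stdio.h>

#define ENTRIES 8

static uint32_t l2p[ENTRIES];   /* index = logical address, value = physical address */

/* Update the mapping whenever a logical-to-physical pair changes. */
static void map_update(uint32_t ladd, uint32_t padd) { l2p[ladd] = padd; }

/* Translate a received logical address into its mapped physical address. */
static uint32_t translate(uint32_t ladd)             { return l2p[ladd]; }

int main(void)
{
    map_update(3, 0x00A10040);             /* LADD3 -> initial PADD            */
    printf("LADD3 -> PADD 0x%08X\n", translate(3));

    map_update(3, 0x00B20000);             /* remapped, e.g. after a background operation */
    printf("LADD3 -> PADD 0x%08X\n", translate(3));
    return 0;
}
```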
  • FIG. 9 is a diagram illustrating a flash interface.
  • the flash interface is any of the first to kth flash interfaces 1 FI to kFI shown in FIG. 5 , and the first to kth flash interfaces 1 FI to kFI may be configured identically to one another. Therefore, the kth flash interface kFI is described as an example.
  • the kth flash interface kFI may include a command (CMD) queue scheduler 91 , an address (ADD) input/output component 92 , a data input/output component 93 , and a data monitor 94 .
  • during a normal operation, each of the components in the kth flash interface kFI may operate as follows.
  • the command queue scheduler 91 may queue input commands CMD, and sequentially output the queued commands CMD. For example, when normal commands are input, the command queue scheduler 91 may queue the normal commands in an order in which the normal commands are input, and output commands CMD in an order in which the normal commands are queued. Also, after the command queue scheduler 91 outputs a command CMD, the command queue scheduler 91 may sequentially output an address output signal AOS and a data output signal DOS.
  • the address input/output component 92 may temporarily store the input physical address PADD, and output the physical address PADD in response to the address output signal AOS output from the command queue scheduler 91 .
  • the data input/output component 93 may temporarily store the input data DATA, and output the data DATA in response to the data output signal DOS output from the command queue scheduler 91 .
  • the data monitor 94 may communicate with the data input/output component 93 .
  • the data monitor 94 may divide data input to the data input/output component 93 into a plurality of data groups, and monitor in real time a data group currently output from the data input/output component 93 .
  • when a suspend command is input, each of the components in the kth flash interface kFI may operate as follows.
  • the command queue scheduler 91 may provide the suspend command with a higher priority than the existing queued normal commands CMD, and output a data output stop signal DSS. Subsequently, when a completion signal FIS output from the data input/output component 93 is received, the command queue scheduler 91 may output the suspend command.
  • the suspend command may be a read command, and therefore, the command queue scheduler 91 may output the suspend command and then output an address output signal AOS.
  • the address input/output component 92 may temporarily store a physical address PADD input together with the suspend command, and output the physical address PADD in response to an address output signal AOS output from the command queue scheduler 91 .
  • in response to the data output stop signal DSS, the data input/output component 93 may output data up to and including the last data of the current data group set by the data monitor 94 , and then output the completion signal FIS.
  • the data monitor 94 may store information on a data group including the data output from the data input/output component 93 .
  • the command queue scheduler 91 may output the suspend command.
  • when a resume command is input, each of the components included in the kth flash interface kFI may operate as follows.
  • the command queue scheduler 91 may re-output a normal command CMD stopped by the suspend command in response to the resume command. Subsequently, the command queue scheduler 91 may sequentially output an address output signal AOS and a data output signal DOS.
  • the address input/output component 92 may output a physical address PADD in response to the address output signal AOS output from the command queue scheduler 91 .
  • the data input/output component 93 may output the data that was stopped from being output as a result of the suspend command. Information on the data that was not output may be received from the data monitor 94 .
  • the data monitor 94 may transmit information on a next data group to the data input/output component 93 , based on information on a data group that has been completely output in a previous normal operation.
  • the data input/output component 93 may output data from first data of a selected data group according to data group information received from the data monitor 94 .
  • FIGS. 10 to 12 are diagrams illustrating a method in which the command (CMD) queue scheduler 91 queues a suspend command.
  • the command queue scheduler 91 may queue the first to fifth commands CMD 1 to CMD 5 in an order in which they are input, and sequentially output the first to fifth commands CMD 1 to CMD 5 in that same order.
  • the command queue scheduler 91 may store a command that has most recently been output until all operations corresponding to the commands that have been outputted are completed.
  • when a suspend command CMD 6 is input, the command queue scheduler 91 may provide a higher priority to the suspend command CMD 6 than to the stopped first command CMD 1 , and queue the commands such that the suspend command CMD 6 is higher in the queue and thus output earlier than the other normal commands CMD 1 to CMD 5 .
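  • One way to model the queuing behavior of FIGS. 10 to 12 is a queue in which normal commands are appended at the tail while a suspend command is inserted at the head; the fixed-size array and command names below are hypothetical.

```c
#include <stdio.h>
#include <string.h>

#define QUEUE_MAX 8

static const char *queue[QUEUE_MAX];
static int count;

static void push_normal(const char *cmd)   /* normal commands keep input order */
{
    if (count < QUEUE_MAX)
        queue[count++] = cmd;
}

static void push_suspend(const char *cmd)  /* suspend command jumps to the head */
{
    if (count >= QUEUE_MAX)
        return;
    memmove(&queue[1], &queue[0], count * sizeof(queue[0]));
    queue[0] = cmd;
    count++;
}

int main(void)
{
    push_normal("CMD1"); push_normal("CMD2"); push_normal("CMD3");
    push_normal("CMD4"); push_normal("CMD5");
    push_suspend("CMD6 (suspend)");

    printf("output order:");
    for (int i = 0; i < count; i++)
        printf(" %s", queue[i]);
    printf("\n");
    return 0;
}
```

  • With this ordering, the suspend command is the next command output, and the previously queued normal commands (including the stopped one) follow it unchanged, which matches the priority described above.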
  • FIG. 13 is a diagram illustrating a data group setting method in accordance with an embodiment of the present disclosure.
  • the data monitor 94 may set data groups by dividing data DATA input to the data input/output component 93 into the data groups according to capacities CP 1 to CP 4 or intervals CHP 1 to CHP 3 .
  • the data monitor 94 may divide the total capacity of the data DATA input to the data input/output component 93 into four portions. This division is merely an example; the data DATA may be divided into more or fewer than four portions.
  • the capacities CP 1 to CP 4 may be equal to or different from one another.
  • a first data group DATA 1 - 1 may have a first capacity CP 1
  • a second data group DATA 1 - 2 may have a second capacity CP 2
  • a third data group DATA 1 - 3 may have a third capacity CP 3
  • a fourth data group DATA 1 - 4 may have a fourth capacity CP 4 , which capacities may all be the same or one or more may be different.
  • the data monitor 94 may update information on a data group including currently output data, whenever the capacity of data output from the data input/output component 93 reaches a set capacity.
  • the data monitor 94 may store the intervals CHP 1 to CHP 3 as set times measured while the data DATA is output from the data input/output component 93 to a channel CH 1 . That is, the data monitor 94 may set a time corresponding to each of a plurality of intervals, and update information on the data group that includes the currently output data at every set time after the data starts to be output from the data input/output component 93 . For example, position information of data output from the data input/output component 93 may be stored at a first time CHP 1 , when a certain time elapses after the first data is output.
  • Position information of data output from the data input/output component 93 may be stored at a second time CHP 2 when a certain time elapses after the first time CHP 1 .
  • Position information of data output from the data input/output component 93 may be stored at a third time CHP 3 when a certain time elapses after the second time CHP 2 .
  • when a suspend command is input, the data input/output component 93 may output data only up to and including the data in the second data group DATA 1 - 2 , based on information from the data monitor 94 on the data group that includes the data transmitted to the current channel CH, and then stop the output of data in the third data group DATA 1 - 3 .
  • the data input/output component 93 may receive, from the data monitor 94 , information of the third data group DATA 1 - 3 corresponding to a next group of the second data group DATA 1 - 2 , of which output has been stopped.
  • the data input/output component 93 may sequentially output data from first data in the third data group DATA 1 - 3 according to information of the third data group DATA 1 - 3 .
  • a delay time until the suspend command is executed may be reduced, and a resume time of the program operation stopped by the suspend command may also be reduced.
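  • A sketch of the data-monitor bookkeeping described for FIG. 13, using the capacity-based checkpoint (the interval-based variant would update on a timer instead); the group count, group size, and field names are assumptions:

```c
#include <stdint.h>
#include <stdio.h>

#define GROUPS      4
#define GROUP_BYTES 4096u          /* CP1..CP4, assumed equal here */

struct data_monitor {
    uint32_t bytes_out;            /* bytes already sent on the channel */
    unsigned current_group;        /* group that contains bytes_out     */
};

/* Called as data leaves the data input/output component. */
static void monitor_update(struct data_monitor *m, uint32_t sent_now)
{
    m->bytes_out += sent_now;
    m->current_group = m->bytes_out / GROUP_BYTES;
    if (m->current_group >= GROUPS)
        m->current_group = GROUPS - 1;
}

/* After the current group is finished and the suspend is served, output
 * resumes from the first byte of the next group. */
static uint32_t resume_offset(const struct data_monitor *m)
{
    return (uint32_t)(m->current_group + 1) * GROUP_BYTES;
}

int main(void)
{
    struct data_monitor m = { 0, 0 };

    monitor_update(&m, 5000);      /* a suspend arrives while DATA1-2 is on the channel */
    printf("current group: DATA1-%u\n", m.current_group + 1);
    printf("resume from byte offset %u (group DATA1-%u)\n",
           resume_offset(&m), m.current_group + 2);
    return 0;
}
```

  • Because only the group index is tracked, the resume path needs no record of individual bytes; re-sending from the start of the next group is enough to cover exactly the untransmitted data.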
  • the existing operating method and the operating method in accordance with this embodiment are compared as follows.
  • FIG. 14 is a diagram illustrating a general suspend command processing method.
  • a first command CMD 1 is a program command
  • the program command CMD 1 , a physical address PADD, and first to fourth data groups DATA 1 - 1 to DATA 1 - 4 may be sequentially input to a first die D 1 coupled to a first channel CH 1 .
  • a suspend command CMD 6 for a second die D 2 may be input.
  • the suspend command CMD 6 may be a read command.
  • in the general method, the suspend command CMD 6 is transmitted only after all data DATA 1 - 1 to DATA 1 - 4 related to the program command currently being executed are transmitted.
  • therefore, a first delay time T 1 may elapse until the suspend command CMD 6 is executed in the second die D 2 .
  • FIG. 15 is a diagram illustrating a suspend command processing method in accordance with an embodiment of the present disclosure.
  • in accordance with this embodiment, when the suspend command CMD 6 is input, data is transmitted to the first die D 1 up to and including the data in the second data group DATA 1 - 2 , which is currently being transmitted through the first channel CH 1 . Then, the suspend command CMD 6 for the second die D 2 may be executed before data in the third and fourth data groups DATA 1 - 3 and DATA 1 - 4 is transmitted to the first die D 1 . That is, in accordance with this embodiment, a second delay time T 2 shorter than the first delay time T 1 may elapse until the suspend command CMD 6 is executed.
  • when execution of the suspend command CMD 6 ends, data that has not yet been output may be detected based on data capacity information CP 1 , data interval information CHP 2 , or information obtained by adding up the data capacity information CP 1 and the data interval information CHP 2 . Accordingly, the program command CMD 1 , the physical address PADD, and the third and fourth data groups DATA 1 - 3 and DATA 1 - 4 may be sequentially transmitted to the first die D 1 .
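  • Under assumed per-group transfer times, the two delay times can be compared with simple arithmetic; all numbers below are illustrative, not measurements from the disclosure.

```c
#include <stdio.h>

int main(void)
{
    const int groups_total  = 4;
    const int group_xfer_us = 50;   /* assumed time to transfer one data group */
    const int groups_sent   = 1;    /* groups already fully transferred        */

    /* General method: T1 waits for every remaining group (DATA1-2, -3 and -4). */
    int t1 = (groups_total - groups_sent) * group_xfer_us;

    /* This embodiment: T2 waits only until the current group (DATA1-2) ends. */
    int t2 = 1 * group_xfer_us;

    printf("T1 = %d us, T2 = %d us, saving = %d us\n", t1, t2, t1 - t2);
    return 0;
}
```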
  • FIG. 16 is a diagram illustrating another embodiment of the memory system including the controller shown in FIG. 1 .
  • the memory system 30000 may be implemented as a cellular phone, a smart phone, a tablet personal computer (PC), a personal digital assistant (PDA), or a wireless communication device.
  • the memory system 30000 may include a memory device 1100 and a controller 1200 capable of controlling an operation of the memory device 1100 .
  • the controller 1200 may control a data access operation of the memory device 1100 , e.g., a program operation, an erase operation, a read operation, or the like under the control of a processor 3100 .
  • Data programmed in the memory device 1100 may be output through a display 3200 under the control of the controller 1200 .
  • a radio transceiver 3300 may transmit and receive radio signals through an antenna ANT.
  • the radio transceiver 3300 may convert a radio signal received through the antenna ANT into a signal that can be processed by the processor 3100 . Therefore, the processor 3100 may process a signal output from the radio transceiver 3300 and transmit the processed signal to the controller 1200 or the display 3200 .
  • the controller 1200 may transmit the signal processed by the processor 3100 to the memory device 1100 .
  • the radio transceiver 3300 may convert a signal output from the processor 3100 into a radio signal, and output the converted radio signal to an external device through the antenna ANT.
  • An input device 3400 is a device capable of inputting a control signal for controlling an operation of the processor 3100 or data to be processed by the processor 3100 , and may be implemented as a pointing device such as a touch pad or a computer mouse, a keypad, or a keyboard.
  • the processor 3100 may control an operation of the display 3200 such that data output from the controller 1200 , data output from the radio transceiver 3300 , or data output from the input device 3400 can be output through the display 3200 .
  • the controller 1200 capable of controlling an operation of the memory device 1100 may be implemented as a part of the processor 3100 , or be implemented as a chip separate from the processor 3100 .
  • FIG. 17 is a diagram illustrating another embodiment of the memory system including the controller shown in FIG. 1 .
  • the memory system 40000 may be implemented as a personal computer (PC), a tablet PC, a net-book, an e-reader, a personal digital assistant (PDA), a portable multi-media player (PMP), an MP3 player, or an MP4 player.
  • the memory system 40000 may include a memory device 1100 configured to store data and a controller 1200 capable of controlling a data processing operation of the memory device 1100 .
  • a processor 4100 may output data stored in the memory device 1100 through a display 4300 according to data input through an input device 4200 .
  • the input device 4200 may be implemented as a pointing device such as a touch pad or a computer mouse, a keypad, or a keyboard.
  • the processor 4100 may control overall operations of the memory system 40000 , and control an operation of the controller 1200 .
  • the controller 1200 capable of controlling an operation of the memory device 1100 may be implemented as a part of the processor 4100 , or be implemented as a chip separate from the processor 4100 .
  • FIG. 18 is a diagram illustrating another embodiment of the memory system including the controller shown in FIG. 1 .
  • the memory system 50000 may be implemented as an image processing device, e.g., a digital camera, a smart phone having a digital camera attached thereto, or a tablet PC having a digital camera attached thereto.
  • the memory system 50000 may include a memory device 1100 and a controller 1200 capable of controlling a data processing operation of the memory device 1100 , e.g., a program operation, an erase operation, or a read operation.
  • An image sensor 5200 of the memory system 50000 may convert an optical image into digital signals, and the converted digital signals may be transmitted to a processor 5100 or the controller 1200 . Under the control of the processor 5100 , the converted digital signals may be output through a display 5300 , or be stored in the memory device 1100 through the controller 1200 . In addition, data stored in the memory device 1100 may be output through the display 5300 under the control of the processor 5100 or the controller 1200 .
  • the controller 1200 capable of controlling an operation of the memory device 1100 may be implemented as a part of the processor 5100 , or be implemented as a chip separate from the processor 5100 .
  • FIG. 19 is a diagram illustrating another embodiment of the memory system including the controller shown in FIG. 1 .
  • the memory system 70000 may be implemented as a memory card or a smart card.
  • the memory system 70000 may include a memory device 1100 , a controller 1200 , and a card interface 7100 .
  • the controller 1200 may control data exchange between the memory device 1100 and the card interface 7100 .
  • the card interface 7100 may be a secure digital (SD) card interface or a multi-media card (MMC) interface, but the present disclosure is not limited thereto.
  • the card interface 7100 may interface data exchange between a host 60000 and the controller 1200 according to a protocol of the host 60000 .
  • the card interface 7100 may support a universal serial bus (USB) protocol and an inter-chip (IC)-USB protocol.
  • the card interface 7100 may mean hardware capable of supporting a protocol used by the host 60000 , software embedded in the hardware, or a signal transmission scheme.
  • the host interface 6200 may perform data communication with the memory device 1100 through the card interface 7100 and the controller 1200 under the control of a microprocessor ( ⁇ P) 6100 .
  • in accordance with the present disclosure, when a suspend command is received during a program operation, the controller transmits, to the memory device, data up to and including the data group currently being transmitted, and then executes the suspend command, so that execution delay of the suspend command due to an operation currently being performed may be prevented.

Abstract

A controller includes: a command queue scheduler for queuing normal commands, and providing a priority order to a suspend command, when the suspend command is input; a data input/output component for outputting data in response to a data output signal output from the command queue scheduler, and stopping the output of the data in response to a data output stop signal; and a data monitor for dividing data input to the data input/output component into a plurality of data groups, and monitoring information of a data group including data currently output from the data input/output component. The data input/output component outputs data up to the currently output data included in the data group and then stops the output of the data, in response to the data output stop signal. The command queue scheduler outputs the suspend command, when the output of the data group is stopped.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • The present application is a continuation of U.S. patent application Ser. No. 16/727,417 filed on Dec. 26, 2019, which claims priority under 35 U.S.C. § 119(a) to Korean patent application number 10-2019-0054502, filed on May 9, 2019, which is incorporated herein by reference in its entirety.
  • BACKGROUND Field of Invention
  • The present disclosure generally relates to a controller and a memory system having the same, and more particularly, to a controller configured to perform a suspend operation in response to a suspend command, and a memory system having the controller.
  • Description of Related Art
  • A memory system may include a memory device and a controller.
  • The memory device may include a plurality of dies capable of storing data. Memory cells included in the dies may be implemented as volatile memory cells in which stored data disappears when the supply of power is interrupted, or be implemented as nonvolatile memory cells in which stored data is retained even when the supply of power is interrupted.
  • The controller may control data communication between a host and the memory device. For example, the controller may control the memory device in response to a request from the host. Also, the controller may perform a background operation without any request from the host so as to improve the performance of the memory system.
  • The host may communicate with the memory device through the controller by using an interface protocol such as Peripheral Component Interconnect-Express (PCI-e or PCIe), Advanced Technology Attachment (ATA), Serial ATA (SATA), Parallel ATA (PATA), or Serial Attached SCSI (SAS). Alternatively, any of various other interface protocols, such as a Universal Serial Bus (USB), a Multi-Media Card (MMC), an Enhanced Small Disk Interface (ESDI), or Integrated Drive Electronics (IDE), may be used.
  • SUMMARY
  • Embodiments provide a controller capable of preventing execution delay of a suspend command and a memory system having the controller.
  • In accordance with an aspect of the present disclosure, there is provided a controller including: a command queue scheduler configured to queue normal commands, and provide a suspend command with a higher priority than the normal commands, when the suspend command is input; a data input/output component configured to output multiple data items in response to a data output signal from the command queue scheduler, and stop the output of the multiple data items in response to a data output stop signal from the command queue scheduler; and a data monitor configured to divide plural items of input data input to the data input/output component into a plurality of data groups, and monitor information of a current data group including data that is currently output from the data input/output component, wherein the data input/output component outputs preceding data and the currently output data in the current data group and stops the output of next data, in response to the data output stop signal, wherein the command queue scheduler outputs the suspend command, when the output of the current data group is stopped.
  • In accordance with another aspect of the present disclosure, there is provided a memory system including: first and second dies coupled to the same channel; a processor configured to output a first command or a second command having a higher priority than that of the first command, in response to a request received from a host; and a flash interface layer configured to output data to the first die in response to the first command, wherein the flash interface layer: divides input data into a plurality of data groups; when the second command is input, outputs preceding data and currently output data in a current data group and then outputs the second command to the second die; and when execution of the second command ends, outputs, to the first die, data in a next data group of the current data group that has been completely output.
  • In accordance with another aspect of the present disclosure, there is provided a memory system including: first and second dies; and a controller including: a processor suitable for receiving a request from a host, and generating a first command for a first die and a second command for a second die, the second command having a higher priority than that of the first command, in response to the request; and a flash interface layer coupled to the first and second dies through a channel, and suitable for receiving a plurality of data groups corresponding to the first command, each of the plurality of data groups including plural data items, providing some data groups among the plurality of data groups to the first die, suspending providing remaining data groups among the plurality of data groups to the first die when the second command is received, and providing the remaining data groups to the first die when the second command is executed.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Various embodiments are described more fully below with reference to the accompanying drawings; however, the present invention may be embodied in different forms and thus is not limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure is thorough and complete and fully conveys the scope of the embodiments to those skilled in the art.
  • In the drawing figures, dimensions may be exaggerated for clarity of illustration. It will be understood that when an element is referred to as being “between” two elements, it can be the only element between the two elements, or one or more intervening elements may also be present. Like reference numerals refer to like elements throughout. Also, throughout the specification, reference to “an embodiment,” “another embodiment” or the like is not necessarily to only one embodiment, and different references to any such phrase are not necessarily to the same embodiment(s).
  • FIG. 1 is a diagram illustrating a memory system in accordance with an embodiment of the present disclosure.
  • FIG. 2 is a diagram illustrating a die shown in FIG. 1.
  • FIG. 3 is a diagram illustrating a command execution method in a multi-channel scheme.
  • FIG. 4 is a diagram illustrating a controller in accordance with an embodiment of the present disclosure.
  • FIG. 5 is a diagram illustrating a configuration of a flash interface layer.
  • FIG. 6 is a diagram illustrating a central processing unit in accordance with an embodiment of the present disclosure.
  • FIG. 7 is a diagram illustrating a function of a channel scheduler.
  • FIG. 8 is a diagram illustrating an address table.
  • FIG. 9 is a diagram illustrating a flash interface.
  • FIGS. 10 to 12 are diagrams illustrating a method in which a command queue scheduler queues a suspend command.
  • FIG. 13 is a diagram illustrating a data group setting method in accordance with an embodiment of the present disclosure.
  • FIG. 14 is a diagram illustrating a general suspend command processing method.
  • FIG. 15 is a diagram illustrating a suspend command processing method in accordance with an embodiment of the present disclosure.
  • FIG. 16 is a diagram illustrating another embodiment of the memory system including the controller shown in FIG. 1.
  • FIG. 17 is a diagram illustrating another embodiment of the memory system including the controller shown in FIG. 1.
  • FIG. 18 is a diagram illustrating another embodiment of the memory system including the controller shown in FIG. 1.
  • FIG. 19 is a diagram illustrating another embodiment of the memory system including the controller shown in FIG. 1.
  • DETAILED DESCRIPTION
  • With respect to the present disclosure, advantages, features and methods for achieving them will become more apparent in light of the description of the following embodiments taken in conjunction with the drawings. The present invention may, however, be embodied in different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided to describe the present disclosure in detail to the extent that those skilled in the art to which the disclosure pertains may easily practice the present invention.
  • Throughout the specification, when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the another element or be indirectly connected or coupled to the another element with one or more intervening elements interposed therebetween. In addition, when an element is referred to as “including” a component, this indicates that the element may further include one or more other components instead of excluding other components unless stated otherwise.
  • FIG. 1 is a diagram illustrating a memory system in accordance with an embodiment of the present disclosure.
  • Referring to FIG. 1, the memory system 1000 may include a memory device 1100 configured to store data and a controller 1200 configured to control the memory device 1100.
  • The memory device 1100 may include a plurality of dies D1 to Di (where i is a positive integer greater than 1). The dies D1 to Di may be implemented with a volatile memory device in which stored data disappears when the supply of power is interrupted or a nonvolatile memory device in which stored data is retained even when the supply of power is interrupted. In the following embodiments, the memory system including the dies D1 to Di implemented with the nonvolatile memory device is described as an example. The nonvolatile memory device may be a NAND flash memory device.
  • The memory device 1100 may communicate with the controller 1200 through a plurality of channels CH1 to CHk (where k is a positive integer greater than 1). For example, the dies D1 to Di in the memory device 1100 may receive a command, an address, data, and the like from the controller 1200 through the channels CH1 to CHk, and output data to the controller 1200.
  • The controller 1200 may control the memory device 1100 in response to a request received from a host 2000, and output data read from the memory device 1100 to the host 2000. For example, when the controller 1200 receives a program request and data from the host 2000, the controller 1200 may store the received data in the memory device 1100. When the controller 1200 receives a read request and a logical address from the host 2000, the controller 1200 may perform a read operation according to a physical address mapped to the logical address, and output read data to the host 2000.
  • The controller 1200 may perform a background operation capable of managing the memory device 1100 without any request from the host 2000. For example, the controller 1200 may perform a function including garbage collection, wear leveling, and the like. In addition, the controller 1200 may perform various functions for efficiently managing the memory device 1100.
  • Also, when the controller 1200 receives a suspend request from the host 2000, the controller 1200 may generate a suspend command corresponding to the suspend request. The suspend command may have a higher priority than a normal command. For example, the suspend command may be a read command, and the normal command may be a program command or an erase command. When a selected channel is free, the controller 1200 may immediately transmit the suspend command to a selected die through the selected channel.
  • When the selected channel is busy, the controller 1200 may queue commands such that the suspend command is executed next after the command currently being executed in the selected channel. When the controller 1200 is transmitting program data to another die through the selected channel, the controller 1200 may transmit the program data up to and including the data group that is being transmitted at the time the suspend command is received from the host, and then transmit the suspend command to the selected die.
  • When the suspend command is transmitted to the selected die, the controller 1200 may re-transmit a program command, a physical address, and the program data to the other die on which the program data transmission operation was stopped. The controller 1200 does not re-transmit the data that has already been transmitted, but may transmit only the data that has not yet been transmitted. To this end, in a program operation, the controller 1200 may divide program data into a plurality of data groups, and determine which data group includes the data currently being transmitted. Accordingly, the memory system 1000 may prevent suspend command execution delay of a selected die, and reduce a resume operation time of another die.
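  • For illustration only, the following Python sketch models this flow; it is not part of the claimed embodiments, and all class, method, and signal names (Die, ChannelController, request_suspend, and so on) are assumptions introduced for the example. A program to one die finishes only the data group in flight, the suspend (read) command is sent to the other die, and the program command and address are then re-issued together with only the untransmitted groups.

```python
class Die:
    """Illustrative stand-in for a NAND die on the channel."""
    def __init__(self, name):
        self.name = name
        self.log = []                      # everything this die receives, in order

    def receive(self, kind, payload=None):
        self.log.append((kind, payload))


class ChannelController:
    """Sketch of the suspend handling described above; names are assumptions."""
    def __init__(self, group_size):
        self.group_size = group_size       # capacity of one data group
        self.pending_suspend = None        # (target_die, physical_address) or None

    def request_suspend(self, die, address):
        # A suspend (read) request for another die on this channel arrives.
        self.pending_suspend = (die, address)

    def program(self, die, address, data):
        groups = [data[i:i + self.group_size]
                  for i in range(0, len(data), self.group_size)]
        die.receive("PROGRAM_CMD", address)
        idx = 0
        while idx < len(groups):
            die.receive("DATA", groups[idx])            # finish the group in flight
            idx += 1
            if self.pending_suspend and idx < len(groups):
                target, read_addr = self.pending_suspend
                target.receive("READ_CMD", read_addr)   # suspend command runs first
                self.pending_suspend = None
                die.receive("PROGRAM_CMD", address)     # resume: re-send CMD/ADDR only;
                                                        # already-sent groups are skipped


# The suspend is already pending when the transfer starts, so it is serviced as soon
# as the first data group has been completely transmitted.
d1, d2 = Die("D1"), Die("D2")
ctrl = ChannelController(group_size=4)
ctrl.request_suspend(d2, address=0x80)
ctrl.program(d1, address=0x10, data=bytes(range(16)))
print(d1.log, d2.log, sep="\n")
```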
  • The host 2000 may communicate with the memory system 1000 by using an interface protocol such as Peripheral Component Interconnect-Express (PCI-e or PCIe), Advanced Technology Attachment (ATA), Serial ATA (SATA), Parallel ATA (PATA), Serial Attached SCSI (SAS), or NonVolatile Memory Express (NVMe). The interface protocol is not limited to the above-described examples; alternatively the host 2000 may communicate with the memory system 1000 through any of various other protocols such as a Universal Serial Bus (USB), a Multi-Media Card (MMC), an Enhanced Small Disk Interface (ESDI), and Integrated Drive Electronics (IDE).
  • FIG. 2 is a diagram illustrating a representative die Di of the dies shown in FIG. 1.
  • Referring to FIG. 2, the die Di may include a memory cell array 110 configured to store data, a peripheral circuit configured to perform a program, read or erase operation, and a logic circuit 170 configured to control the peripheral circuit.
  • The memory cell array 110 may include a plurality of memory blocks in which data is stored. Each of the memory blocks may include a plurality of memory cells, and the memory cells may be implemented in a two-dimensional structure in which the memory cells are arranged in parallel to a substrate or a three-dimensional structure in which the memory cells are stacked vertically to a substrate.
  • The peripheral circuit may include a voltage generator 120, a row decoder 130, a page buffer group 140, a column decoder 150, and an input and output (input/output) circuit 160.
  • The voltage generator 120 may generate and output operating voltages Vop necessary for various operations in response to an operation signal OPS. For example, the voltage generator 120 may generate and output a program voltage, a verify voltage, a read voltage, a pass voltage, an erase voltage, and the like.
  • The row decoder 130 may select one memory block among the memory blocks in the memory cell array 110 according to a row address RADD, and transmit operating voltages Vop to the selected memory block.
  • The page buffer group 140 may be coupled to the memory cell array 110 through bit lines, and include a plurality of page buffers coupled to the bit lines. The plurality of page buffers may temporarily store data in a program or read operation in response to a page buffer control signal PBSIG. To this end, the page buffers may include a plurality of latches for temporarily storing data. For example, data received through the channel CHk in the program operation may be temporarily stored in the page buffers and then programmed. Data read from the memory cell array 110 in the read operation may be temporarily stored in the page buffers and then output.
  • The column decoder 150 may sequentially transmit data received from the input/output circuit 160 to the page buffers included in the page buffer group 140 according to a column address CADD, or sequentially transmit data received from the page buffers to the input/output circuit 160.
  • The input/output circuit 160 may be coupled to the controller 1200 through input/output lines of the channel CHk, and input and output a command CMD, an address ADD, and data DATA through the input/output lines. For example, the input/output circuit 160 may transmit the command CMD and the address ADD, which are received from the controller 1200, to the logic circuit 170, and transmit the data DATA received from the controller 1200 to the column decoder 150. Also, the input/output circuit 160 may output data read from the memory cell array 110 to the controller 1200 through the input/output lines of the channel CHk.
  • The logic circuit 170 may output operation signals OPS, a row address RADD, page buffer control signals PBSIG, and a column address CADD, in response to a control signal CTSIG received from the controller 1200 through the channel CHk and the command CMD and the address ADD, which are received from the input/output circuit 160.
  • FIG. 3 is a diagram illustrating a command execution method in a multi-channel scheme.
  • Referring to FIG. 3, a plurality of dies D1 to Di may be coupled to each of a plurality of channels CH1 to CHk in the multi-channel scheme. For example, first to ith dies D1 to Di may be coupled to a first channel CH1, first to ith dies D1 to Di may be coupled to a second channel CH2, and first to ith dies D1 to Di may be coupled to a kth channel CHk. First to ith dies D1 to Di coupled to different channels may be physically different dies. Dies coupled to the same channel cannot be simultaneously selected, and dies coupled to different channels can be simultaneously selected.
  • For example, when a second die D2 among first to ith dies D1 to Di coupled to the first channel CH1 communicates with the controller 1200, the first and third to ith dies D1 and D3 to Di cannot communicate with the controller 1200. However, when the second die D2 among the first to ith dies D1 to Di coupled to the first channel CH1 communicates with the controller 1200, a third die D3 coupled to the second channel CH2 and a first die D1 coupled to the kth channel CHk can simultaneously communicate with the controller 1200. A die selected in each channel is an example for describing the present disclosure, and therefore, dies simultaneously selected in different channels may vary depending on a command and an address.
  • FIG. 4 is a diagram illustrating a controller 1200 in accordance with an embodiment of the present disclosure.
  • Referring to FIG. 4, the controller 1200 may include a host interface layer (HIL) 210, a central processing unit (CPU) 220, and a flash interface layer (FIL) 230.
  • The HIL 210 may communicate between the host 2000 and the CPU 220. For example, when the HIL 210 receives a request, a logical address, or data from the host 2000, the HIL 210 may transmit the received request, logical address or data to the CPU 220. Also, when the HIL 210 receives data from the CPU 220, the HIL 210 may output the received data to the host 2000.
  • The CPU 220 may communicate between the HIL 210 and the FIL 230, and control overall operations of the controller 1200. For example, the CPU 220 may convert a request received from the HIL 210 into a command, and transmit the command to the FIL 230 according to states of channels. For example, the CPU 220 may transmit a command to the FIL 230 such that an overload is not applied in a specific channel, by considering commands queued in each channel, peak power, etc. Also, the CPU 220 may convert (or translate) a logical address received from the HIL 210 into a physical address, and transmit the physical address to the FIL 230. The CPU 220 may transmit data received from the HIL 210 to the FIL 230.
  • The FIL 230 may communicate between the CPU 220 and the memory device 1100. The FIL 230 may receive a command, a physical address, or data from the CPU 220, and transmit the command, the physical address, or the data to dies selected through channels. For example, the FIL 230 may queue commands according to a state of each of the channels, divide program data into a plurality of data groups, and store and update in real time information on the data group including data transmitted to a channel. Also, when the FIL 230 receives a suspend command, the FIL 230 may output the suspend command after transmission of the currently loaded data group through the channel is ended. When execution of the suspend command is ended, the FIL 230 may resume the stopped (or suspended) program operation. When the program operation is resumed, the FIL 230 may re-transmit a command and a physical address with respect to the stopped program operation, and transmit the untransmitted data through the channel.
  • FIG. 5 is a diagram illustrating a configuration of the flash interface layer (FIL) 230.
  • Referring to FIG. 5, the FIL 230 may include a plurality of flash interfaces 1FI to kFI. The number of the flash interfaces 1FI to kFI may be equal to that of the channels CH1 to CHk communicating with the memory device 1100. For example, when the memory system 1000 is configured in the multi-channel scheme including the first to kth channels CH1 to CHk, the FIL 230 may include first to kth flash interfaces 1FI to kFI. The first flash interface 1FI may communicate with the memory device 1100 through the first channel CH1, the second flash interface 2FI may communicate with the memory device 1100 through the second channel CH2, and the kth flash interface kFI may communicate with the memory device 1100 through the kth channel CHk.
  • The CPU 220 may selectively transmit a command CMD, a physical address PADD, and data DATA to the first to kth flash interfaces 1FI to kFI according to states of the first to kth channels CH1 to CHk. The CPU 220 may transmit a program command when a program request is received, transmit a read command when a read request is received, and transmit an erase command when an erase request is received.
  • Each of the first to kth flash interfaces 1FI to kFI may queue the command CMD received from the CPU 220. Further, each of the first to kth flash interfaces 1FI to kFI may transmit a program command CMDp, a physical address PADD, and data DATA to a selected die or transmit a read command CMDr and a physical address PADD to a selected die, through a channel according to a queued order. Each of the first to kth flash interfaces 1FI to kFI may queue various commands received from the CPU 220, in addition to the program command CMDp and the read command CMDr, and output the commands according to a queued order.
  • For example, the first flash interface 1FI may output the program command CMDp, the physical address PADD, and the data DATA through the first channel CH1, and the kth flash interface kFI may output the read command CMDr and the physical address PADD through the kth channel CHk.
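  • As a rough sketch only (the names FlashInterfaceLayer, FlashInterface, and submit are illustrative assumptions, not the actual interface), the per-channel routing described above can be pictured as one queueing object per channel, with the layer forwarding each command to the interface of the selected channel:

```python
class FlashInterface:
    """One interface per channel; queues whatever the CPU hands it for that channel."""
    def __init__(self, channel_id):
        self.channel_id = channel_id
        self.queue = []                      # commands waiting to go out on this channel

    def submit(self, cmd, addr=None, data=None):
        self.queue.append((cmd, addr, data))


class FlashInterfaceLayer:
    def __init__(self, num_channels):
        self.interfaces = [FlashInterface(ch) for ch in range(num_channels)]

    def submit(self, channel, cmd, addr=None, data=None):
        self.interfaces[channel].submit(cmd, addr, data)   # route to the selected channel


fil = FlashInterfaceLayer(num_channels=4)
fil.submit(0, "PROGRAM", addr=0x10, data=b"\x00" * 16)   # routed to the first channel's interface
fil.submit(3, "READ", addr=0x80)                         # routed to the fourth channel's interface
```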
  • FIG. 6 is a diagram illustrating a central processing unit (CPU) 220 in accordance with an embodiment of the present disclosure.
  • Referring to FIG. 6, the CPU 220 may include a command (CMD) generator 221, a channel (CH) scheduler 222, an address (ADD) table 223, and a first buffer 224.
  • The command generator 221 may convert a request RQ received from the host 2000 into a command CMD to be used in the memory system 1000, and transmit the generated command CMD to the CH scheduler 222.
  • The channel scheduler 222 may queue the command CMD received from the command generator 221 according to states of channels, and output the command CMD according to a queued order.
  • The address table 223 may be a table in which logical addresses LADD and physical addresses PADD are mapped to each other. The address table 223 may be stored in an internal memory, which is included in the CPU 220. The address table 223 may be updated whenever mapped addresses are changed. When a logical address LADD is received, the address table 223 may output a physical address PADD corresponding to the received logical address LADD.
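  • Purely as an illustration of this lookup behavior (a plain dictionary stands in for the mapping table held in the CPU's internal memory; the class and method names are assumptions), a minimal sketch could look like this:

```python
class AddressTable:
    """Illustrative logical-to-physical address mapping."""
    def __init__(self):
        self._map = {}                       # logical address -> physical address

    def update(self, ladd, padd):
        self._map[ladd] = padd               # updated whenever the mapping changes

    def translate(self, ladd):
        return self._map[ladd]               # physical address mapped to the logical one


table = AddressTable()
table.update(ladd=0x0001, padd=0x1A00)
assert table.translate(0x0001) == 0x1A00
```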
  • The first buffer 224 may temporarily store data DATA received from the host 2000, and transmit the data DATA to the FIL 230 according to a set data width.
  • FIG. 7 is a diagram illustrating a function of the channel (CH) scheduler 222.
  • Referring to FIG. 7, the channel scheduler 222 may include a second buffer 71 and a third buffer 72. The second buffer 71 is configured to store state information ST # on each of channels CH1, CH2, . . . . The third buffer 72 is configured to temporarily store queued commands CMD according to the state information ST #. For example, the state information ST # in the second buffer 71 may include information on the number of commands CMD being executed or to be executed in each channel, or information on a current power consumption amount or predicted power consumption amount of each channel. Alternatively, the state information ST # may include all the information.
  • Information on predicted power consumption amount or information on an operation time may be stored in the channel scheduler 222. The channel scheduler 222 may update state information ST # of a corresponding channel in the second buffer 71 according to queuing information of commands stored in the third buffer 72. The state information ST # may be calculated as a workload for each channel to be stored in the second buffer 71.
  • That is, the channel scheduler 222 may check a workload corresponding to state information ST # of the channels CH1, CH2, . . . , which are stored in the second buffer 71. Further, the CH scheduler 222 may preferentially allocate a command CMD to a channel having a relatively low workload or allocate a command CMD having a relatively high workload to a channel having a relatively low workload. For example, according to workloads of the channels CH1, CH2, . . . and a workload of a received command CMD, the channel scheduler 222 may allocate a first command CMD1 to a third channel CH3 (21), allocate a second command CMD2 to a fourth channel CH4 (22), allocate a third command CMD3 to a first channel CH1 (23), and allocate a fourth command CMD4 to a second channel CH2 (24).
  • The channel scheduler 222 may sequentially output the first to fourth commands CMD1 to CMD4 according to an order (21, 22, 23, and 24) in which the first to fourth commands CMD1 to CMD4 are queued in the third buffer 72, and update the state information ST # for each channel in the second buffer 71.
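  • As a minimal sketch of this workload-based allocation (the ChannelScheduler name and the cost values are assumptions; a real scheduler would also account for peak power and for commands that have completed), each channel carries a running workload figure and a newly received command is queued on the least-loaded channel:

```python
class ChannelScheduler:
    """Allocates commands to the channel with the lowest current workload."""
    def __init__(self, num_channels):
        self.workload = [0] * num_channels   # state information ST# per channel
        self.queue = []                      # (channel, command) in queued order

    def allocate(self, command, cost):
        ch = min(range(len(self.workload)), key=self.workload.__getitem__)
        self.workload[ch] += cost            # update the predicted workload for that channel
        self.queue.append((ch, command))
        return ch

    def pop(self):
        return self.queue.pop(0)             # output commands in the order they were queued


sched = ChannelScheduler(num_channels=4)
sched.allocate("CMD1", cost=5)   # lands on the least-loaded channel
sched.allocate("CMD2", cost=2)
print(sched.workload, sched.queue)
```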
  • FIG. 8 is a diagram illustrating the address (ADD) table 223.
  • Referring to FIG. 8, the address table 223 may include physical addresses PADD1 to PADD # respectively mapped to logical addresses LADD1 to LADD #. The logical addresses LADD1 to LADD # may be addresses used in the host 2000, and the physical addresses PADD1 to PADD # may be addresses used in the memory device 1100.
  • The CPU 220 may update the address table 223 whenever the mapped addresses are changed. When a logical address LADD is received, the CPU 220 may output a physical address PADD mapped to the received logical address LADD.
  • FIG. 9 is a diagram illustrating a flash interface.
  • Referring to FIG. 9, the flash interface is any of the first to kth flash interfaces 1FI to kFI shown in FIG. 5, and the first to kth flash interfaces 1FI to kFI may be configured identically to one another. Therefore, the kth flash interface kFI is described as an example.
  • The kth flash interface kFI may include a command (CMD) queue scheduler 91, an address (ADD) input/output component 92, a data input/output component 93, and a data monitor 94.
  • When normal commands CMD are input, each of the components in the kth flash interface kFI may operate as follows.
  • The command queue scheduler 91 may queue input commands CMD, and sequentially output the queued commands CMD. For example, when normal commands are input, the command queue scheduler 91 may queue the normal commands in an order in which the normal commands are input, and output commands CMD in an order in which the normal commands are queued. Also, after the command queue scheduler 91 outputs a command CMD, the command queue scheduler 91 may sequentially output an address output signal AOS and a data output signal DOS.
  • When a physical address PADD is input, the address input/output component 92 may temporarily store the input physical address PADD, and output the physical address PADD in response to the address output signal AOS output from the command queue scheduler 91.
  • When data DATA is input, the data input/output component 93 may temporarily store the input data DATA, and output the data DATA in response to the data output signal DOS output from the command queue scheduler 91.
  • The data monitor 94 may communicate with the data input/output component 93. The data monitor 94 may divide data input to the data input/output component 93 into a plurality of data groups, and monitor in real time a data group currently output from the data input/output component 93.
  • When a suspend command CMD is input, each of the components in the kth flash interface kFI may operate as follows.
  • The command queue scheduler 91 may provide the suspend command with a higher priority than the existing queued normal commands CMD, and output a data output stop signal DSS. Subsequently, when a completion signal FIS output from the data input/output component 93 is received, the command queue scheduler 91 may output the suspend command. The suspend command may be a read command, and therefore, the command queue scheduler 91 may output the suspend command and then output an address output signal AOS.
  • The address input/output component 92 may temporarily store a physical address PADD input together with the suspend command, and output the physical address PADD in response to an address output signal AOS output from the command queue scheduler 91.
  • When the data output stop signal DSS is received from the command queue scheduler 91, the data input/output component 93 may output data up to and including the last data of the current data group set by the data monitor 94, and output the completion signal FIS.
  • The data monitor 94 may store information on a data group including the data output from the data input/output component 93.
  • When the completion signal FIS is received, the command queue scheduler 91 may output the suspend command.
  • When a resume command CMD is input, each of the components included in the kth flash interface kFI may operate as follows.
  • The command queue scheduler 91 may re-output a normal command CMD stopped by the suspend command in response to the resume command. Subsequently, the command queue scheduler 91 may sequentially output an address output signal AOS and a data output signal DOS.
  • The address input/output component 92 may output a physical address PADD in response to the address output signal AOS output from the command queue scheduler 91.
  • When the data output signal DOS output from the command queue scheduler 91 is re-input, the data input/output component 93 may output the data that was stopped from being output as a result of the suspend command. Information on the data that was not output may be received from the data monitor 94.
  • The data monitor 94 may transmit information on a next data group to the data input/output component 93, based on information on a data group that has been completely output in a previous normal operation.
  • The data input/output component 93 may output data from first data of a selected data group according to data group information received from the data monitor 94.
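  • The interaction of these components can be sketched, for illustration only, as the following Python model; the DSS/FIS handshake is simplified to a synchronous flag, and all names are assumptions rather than the actual implementation. The data output stops at the boundary of the group in flight, the suspend (read) command is then output, and the resumed program restarts from the first data of the next group recorded by the data monitor.

```python
class DataMonitor:
    """Divides program data into groups and remembers the last group fully output."""
    def __init__(self, data, group_size):
        self.groups = [data[i:i + group_size]
                       for i in range(0, len(data), group_size)]
        self.last_output = -1              # index of the last completely output group


class DataIO:
    """Outputs one data group per call; honours a stop request between groups."""
    def __init__(self, monitor):
        self.monitor = monitor
        self.stop_after_current = False    # set by the data output stop signal (DSS)

    def output_next_group(self, channel):
        idx = self.monitor.last_output + 1
        channel.append(("DATA", idx, self.monitor.groups[idx]))
        self.monitor.last_output = idx
        return self.stop_after_current     # True plays the role of the FIS here


class CommandQueueScheduler:
    """Gives a pending suspend (read) command priority over the running program."""
    def __init__(self, data_io):
        self.data_io = data_io
        self.pending_suspend = None

    def queue_suspend(self, read_cmd):
        self.pending_suspend = read_cmd
        self.data_io.stop_after_current = True        # DSS

    def drive(self, channel):
        monitor = self.data_io.monitor
        while monitor.last_output + 1 < len(monitor.groups):
            fis = self.data_io.output_next_group(channel)
            if fis and self.pending_suspend:
                channel.append(("READ_CMD", self.pending_suspend))  # suspend runs now
                self.pending_suspend = None
                self.data_io.stop_after_current = False
                channel.append(("PROGRAM_CMD", "re-issued"))        # resume: CMD/ADDR only
        return channel


# The suspend is pending before the transfer starts, so only the first group goes out,
# then the read command, then the program is resumed with the remaining groups.
monitor = DataMonitor(bytes(range(16)), group_size=4)
scheduler = CommandQueueScheduler(DataIO(monitor))
scheduler.queue_suspend("READ@0x80")
print(scheduler.drive([]))
```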
  • An operation of the command queue scheduler 91, among the above-described components, is additionally described as follows.
  • FIGS. 10 to 12 are diagrams illustrating a method in which the command (CMD) queue scheduler 91 queues a suspend command.
  • Referring to FIG. 10, when first to fifth normal commands CMD1 to CMD5 are sequentially input to the command queue scheduler 91, the command queue scheduler 91 may queue the first to fifth commands CMD1 to CMD5 in an order in which they are input, and sequentially output the first to fifth commands CMD1 to CMD5 in that same order.
  • Referring to FIG. 11, when a suspend command CMD6 is input after the command queue scheduler 91 outputs the first command CMD1 but before it outputs the second command CMD2 as a next command, an operation corresponding to the first command CMD1 that is being executed may be stopped. To this end, the command queue scheduler 91 may store the command that has most recently been output until all operations corresponding to the commands that have been output are completed.
  • Referring to FIG. 12, the command queue scheduler 91 may provide a higher priority to the suspend command CMD6 with respect to the stopped first command CMD1, and queue the commands such that the suspend command CMD6 is higher in the queue and thus output earlier than the other normal commands CMD1 to CMD5.
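  • For illustration, the re-queuing of FIGS. 10 to 12 amounts to putting the stopped command back behind the suspend command; a minimal sketch (command names follow the figures, and the helper function is an assumption) is shown below:

```python
from collections import deque

# CMD1 has already been output and is executing; when the suspend command CMD6 arrives,
# CMD1 is placed back behind CMD6 so that CMD6 is output first and CMD1 is re-issued next.
queue = deque(["CMD2", "CMD3", "CMD4", "CMD5"])   # still waiting, in input order
last_output = "CMD1"                               # kept until its operation completes

def queue_suspend(queue, last_output, suspend_cmd):
    queue.appendleft(last_output)    # stopped command will be re-output later
    queue.appendleft(suspend_cmd)    # suspend command gets the highest priority
    return queue

print(queue_suspend(queue, last_output, "CMD6"))
# deque(['CMD6', 'CMD1', 'CMD2', 'CMD3', 'CMD4', 'CMD5'])
```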
  • FIG. 13 is a diagram illustrating a data group setting method in accordance with an embodiment of the present disclosure.
  • Referring to FIG. 13, the data monitor 94 may set data groups by dividing data DATA input to the data input/output component 93 into the data groups according to capacities CP1 to CP4 or intervals CHP1 to CHP3.
  • When data groups DATA1-1 to DATA1-4 are divided according to the capacities CP1 to CP4, the data monitor 94 may divide the total capacity of the data DATA input to the data input/output component 93 into four portions. This division is merely an example; the data DATA may be divided into more or fewer than four portions. The capacities CP1 to CP4 may be equal to or different from one another. For example, a first data group DATA1-1 may have a first capacity CP1, a second data group DATA1-2 may have a second capacity CP2, a third data group DATA1-3 may have a third capacity CP3, and a fourth data group DATA1-4 may have a fourth capacity CP4, and these capacities may all be the same or one or more may be different. The data monitor 94 may update information on the data group including the currently output data whenever the capacity of data output from the data input/output component 93 reaches a set capacity.
  • When the data groups DATA1-1 to DATA1-4 are divided according to the intervals CHP1 to CHP3, the data monitor 94 may store the intervals CHP1 to CHP3 as set times measured while the data DATA is output from the data input/output component 93 to a channel CH1. That is, the data monitor 94 may set a time corresponding to each of the intervals, and update information on the data group including the currently output data at every set time after the data starts to be output from the data input/output component 93. For example, position information of data output from the data input/output component 93 may be stored at a first time CHP1 when a certain time elapses after first data is output. Position information of data output from the data input/output component 93 may be stored at a second time CHP2 when a certain time elapses after the first time CHP1. Position information of data output from the data input/output component 93 may be stored at a third time CHP3 when a certain time elapses after the second time CHP2.
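  • The two grouping policies can be illustrated, under assumed numbers, by the short sketch below: splitting by capacity uses fixed byte counts per group, while splitting by interval records a checkpoint position of the output pointer after each elapsed time slice. The function names and the throughput figure are assumptions for the example only.

```python
def split_by_capacity(data, capacities):
    # Cut the input into groups of the given sizes (CP1..CP4 in the description).
    groups, pos = [], 0
    for cap in capacities:
        groups.append(data[pos:pos + cap])
        pos += cap
    return groups

def checkpoints_by_interval(total_len, bytes_per_second, interval_seconds):
    # Position of the output pointer at each interval (CHP1, CHP2, ...), assuming a
    # constant transfer rate purely for illustration.
    step = int(bytes_per_second * interval_seconds)
    return list(range(step, total_len, step))

data = bytes(range(64))
print([len(g) for g in split_by_capacity(data, capacities=[16, 16, 16, 16])])  # [16, 16, 16, 16]
print(checkpoints_by_interval(len(data), bytes_per_second=32, interval_seconds=0.5))  # [16, 32, 48]
```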
  • Assuming that data currently output from the data input/output component 93 is included in the second data group DATA1-2 when a suspend command is input, the data input/output component 93 may output data only up to and including the last data in the second data group DATA1-2, based on information, received from the data monitor 94, on the data group including the data transmitted to a current channel CH, and then stop the output of data in the third data group DATA1-3.
  • When the program operation that has been stopped is resumed after the suspend command is executed, the data input/output component 93 may receive, from the data monitor 94, information of the third data group DATA1-3, which is the next group after the second data group DATA1-2 that was completely output before the output was stopped. The data input/output component 93 may sequentially output data from the first data in the third data group DATA1-3 according to the information of the third data group DATA1-3.
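  • As a tiny numeric illustration of this gating (the group names follow FIG. 13; the index variable is an assumption), groups one and two precede the suspend command, while groups three and four are sent only after the stopped program operation is resumed:

```python
groups = ["DATA1-1", "DATA1-2", "DATA1-3", "DATA1-4"]
current = 1                                   # index of DATA1-2, in flight at suspend

before_suspend = groups[:current + 1]         # finish the current group, then stop
after_resume = groups[current + 1:]           # re-sent with the program CMD/ADDR later
print(before_suspend, after_resume)
# ['DATA1-1', 'DATA1-2'] ['DATA1-3', 'DATA1-4']
```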
  • As described above, in this embodiment, when the suspend command is input, a delay time until the suspend command is executed may be reduced, and a resume time of the program operation stopped by the suspend command may also be reduced. In relation to this, a general operating method and the operating method in accordance with this embodiment are compared as follows.
  • FIG. 14 is a diagram illustrating a general suspend command processing method.
  • Referring to FIG. 14, assuming that a first command CMD1 is a program command, the program command CMD1, a physical address PADD, and first to fourth data groups DATA1-1 to DATA1-4 may be sequentially input to a first die D1 coupled to a first channel CH1. When data in the second data group DATA1-2 is input to the first die D1, a suspend command CMD6 for a second die D2 may be input. The suspend command CMD6 may be a read command. In a general case, although the suspend command CMD6 is input, the suspend command CMD6 is transmitted only after all of the data DATA1-1 to DATA1-4 related to the program command being currently executed is transmitted. Hence, a first delay time T1 may elapse until the suspend command CMD6 is executed in the second die D2.
  • FIG. 15 is a diagram illustrating a suspend command processing method in accordance with an embodiment of the present disclosure.
  • Referring to FIG. 15, unlike FIG. 14, when the suspend command CMD6 is input, data is transmitted to the first die D1 only up to and including the second data group DATA1-2, which is currently being transmitted through the first channel CH1. Then, the suspend command CMD6 for the second die D2 may be executed before data in the third and fourth data groups DATA1-3 and DATA1-4 is transmitted to the first die D1. That is, in accordance with this embodiment, a second delay time T2 shorter than the first delay time T1 may elapse until the suspend command CMD6 is executed. When the execution of the suspend command CMD6 is completed in the second die D2, data that has not yet been output may be detected based on data capacity information CP1, data interval information CHP2, or information obtained by adding up the data capacity information CP1 and the data interval information CHP2. Accordingly, the program command CMD1, the physical address PADD, and the third and fourth data groups DATA1-3 and DATA1-4 may be sequentially transmitted to the first die D1.
  • FIG. 16 is a diagram illustrating another embodiment of the memory system including the controller shown in FIG. 1.
  • Referring to FIG. 16, the memory system 30000 may be implemented as a cellular phone, a smart phone, a tablet personal computer (PC), a personal digital assistant (PDA), or a wireless communication device. The memory system 30000 may include a memory device 1100 and a controller 1200 capable of controlling an operation of the memory device 1100. The controller 1200 may control a data access operation of the memory device 1100, e.g., a program operation, an erase operation, a read operation, or the like under the control of a processor 3100.
  • Data programmed in the memory device 1100 may be output through a display 3200 under the control of the controller 1200.
  • A radio transceiver 3300 may transmit and receive radio signals through an antenna ANT. For example, the radio transceiver 3300 may convert a radio signal received through the antenna ANT into a signal that can be processed by the processor 3100. Therefore, the processor 3100 may process a signal output from the radio transceiver 3300 and transmit the processed signal to the controller 1200 or the display 3200. The controller 1200 may transmit the signal processed by the processor 3100 to the memory device 1100. Also, the radio transceiver 3300 may convert a signal output from the processor 3100 into a radio signal, and output the converted radio signal to an external device through the antenna ANT. An input device 3400 is a device capable of inputting a control signal for controlling an operation of the processor 3100 or data to be processed by the processor 3100, and may be implemented as a pointing device such as a touch pad or a computer mouse, a keypad, or a keyboard. The processor 3100 may control an operation of the display 3200 such that data output from the controller 1200, data output from the radio transceiver 3300, or data output from the input device 3400 can be output through the display 3200.
  • In some embodiments, the controller 1200 capable of controlling an operation of the memory device 1100 may be implemented as a part of the processor 3100, or be implemented as a chip separate from the processor 3100.
  • FIG. 17 is a diagram illustrating another embodiment of the memory system including the controller shown in FIG. 1.
  • Referring to FIG. 17, the memory system 40000 may be implemented as a personal computer (PC), a tablet PC, a net-book, an e-reader, a personal digital assistant (PDA), a portable multi-media player (PMP), an MP3 player, or an MP4 player.
  • The memory system 40000 may include a memory device 1100 configured to store data and a controller 1200 capable of controlling a data processing operation of the memory device 1100.
  • A processor 4100 may output data stored in the memory device 1100 through a display 4300 according to data input through an input device 4200. For example, the input device 4200 may be implemented as a pointing device such as a touch pad or a computer mouse, a keypad, or a keyboard.
  • The processor 4100 may control overall operations of the memory system 40000, and control an operation of the controller 1200. In some embodiments, the controller 1200 capable of controlling an operation of the memory device 1100 may be implemented as a part of the processor 4100, or be implemented as a chip separate from the processor 4100.
  • FIG. 18 is a diagram illustrating another embodiment of the memory system including the controller shown in FIG. 1.
  • Referring to FIG. 18, the memory system 50000 may be implemented as an image processing device, e.g., a digital camera, a smart phone having a digital camera attached thereto, or a tablet PC having a digital camera attached thereto.
  • The memory system 50000 may include a memory device 1100 and a controller 1200 capable of controlling a data processing operation of the memory device 1100, e.g., a program operation, an erase operation, or a read operation.
  • An image sensor 5200 of the memory system 50000 may convert an optical image into digital signals, and the converted digital signals may be transmitted to a processor 5100 or the controller 1200. Under the control of the processor 5100, the converted digital signals may be output through a display 5300, or be stored in the memory device 1100 through the controller 1200. In addition, data stored in the memory device 1100 may be output through the display 5300 under the control of the processor 5100 or the controller 1200.
  • In some embodiments, the controller 1200 capable of controlling an operation of the memory device 1100 may be implemented as a part of the processor 5100, or be implemented as a chip separate from the processor 5100.
  • FIG. 19 is a diagram illustrating another embodiment of the memory system including the controller shown in FIG. 1.
  • Referring to FIG. 19, the memory system 70000 may be implemented as a memory card or a smart card. The memory system 70000 may include a memory device 1100, a controller 1200, and a card interface 7100.
  • The controller 1200 may control data exchange between the memory device 1100 and the card interface 7100. In some embodiments, the card interface 7100 may be a secure digital (SD) card interface or a multi-media card (MMC) interface, but the present disclosure is not limited thereto.
  • The card interface 7100 may interface data exchange between a host 60000 and the controller 1200 according to a protocol of the host 60000. In some embodiments, the card interface 7100 may support a universal serial bus (USB) protocol and an inter-chip (IC)-USB protocol. The card interface 7100 may refer to hardware capable of supporting a protocol used by the host 60000, software embedded in the hardware, or a signal transmission scheme.
  • When the memory system 70000 is coupled to a host interface 6200 of the host 60000 such as a PC, a tablet PC, a digital camera, a digital audio player, a cellular phone, console video game hardware, or a digital set-top box, the host interface 6200 may perform data communication with the memory device 1100 through the card interface 7100 and the controller 1200 under the control of a microprocessor (μP) 6100.
  • In accordance with embodiments of the present disclosure, when a suspend command is received during a program operation, the controller transmits, to the memory device, data up to and including the data group currently being transmitted, and then executes the suspend command, so that execution delay of the suspend command due to the operation currently being performed may be prevented.
  • Various embodiments have been disclosed herein, and although specific terms are employed, they are used and are to be interpreted in a generic and descriptive sense only and not for purpose of limitation. In some instances, as would be apparent to one of ordinary skill in the art as of the filing of the present application, features, characteristics, and/or elements described in connection with a particular embodiment may be used singly or in combination with features, characteristics, and/or elements described in connection with other embodiments unless otherwise specifically indicated. Accordingly, it will be understood by those of skill in the art that various changes in form and details may be made without departing from the spirit and scope of the present disclosure as set forth in the following claims.

Claims (11)

What is claimed is:
1. A controller comprising:
a processing unit configured to generate a first command or a second command; and
a command queue scheduler configured to queue the first command and execute the first command when the first command is input, and compare priorities of the first command and the second command when the second command is input,
wherein the command queue scheduler suspends the execution of the first command when the priority of the second command is higher than that of the first command, and executes the second command.
2. The controller of claim 1, wherein the command queue scheduler continuously executes the first command when the priority of the second command is lower than or equal to that of the first command.
3. The controller of claim 1, wherein the command queue scheduler holds the first command until the execution of the second command is finished.
4. The controller of claim 3, wherein the command queue scheduler executes the first command when the execution of the second command is finished.
5. A controller comprising:
a command queue scheduler configured to execute a first command when the first command is input; and
a data input/output component configured to sequentially output first and second data groups corresponding to the first command,
wherein, when a second command is input to the command queue scheduler while the first data group is output, the data input/output component suspends the output of the second data group, and
wherein the command queue scheduler executes the second command after the output of the first data group.
6. The controller of claim 5, wherein, when the execution of the second command is finished, the command queue scheduler resumes the first command.
7. The controller of claim 6, wherein, when the first command is resumed, the data input/output component outputs the second data group.
8. The controller of claim 5, further comprising a data monitor configured to divide input data corresponding to the first command into the first and second data groups, and monitor information of an output data group from the data input/output component.
9. The controller of claim 8, wherein the data monitor manages information on the first and second data groups by dividing the input data according to capacities or output intervals.
10. A controller comprising:
a processing unit configured to generate a normal command or a suspend command; and
a command queue scheduler configured to execute the normal command when the normal command is input,
wherein the command queue scheduler suspends the execution of the normal command when the suspend command is input, executes the suspend command, and resumes the normal command when the execution of the suspend command is finished.
11. The controller of claim 10, further comprising a data input/output component configured to sequentially output first and second data groups corresponding to the normal command.
US17/868,430 2019-05-09 2022-07-19 Controller and memory system having the same Pending US20220350655A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/868,430 US20220350655A1 (en) 2019-05-09 2022-07-19 Controller and memory system having the same

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR1020190054502A KR20200129700A (en) 2019-05-09 2019-05-09 Controller and memory system having the same
KR10-2019-0054502 2019-05-09
US16/727,417 US11455186B2 (en) 2019-05-09 2019-12-26 Controller and memory system having the same
US17/868,430 US20220350655A1 (en) 2019-05-09 2022-07-19 Controller and memory system having the same

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/727,417 Continuation US11455186B2 (en) 2019-05-09 2019-12-26 Controller and memory system having the same

Publications (1)

Publication Number Publication Date
US20220350655A1 true US20220350655A1 (en) 2022-11-03

Family

ID=73045795

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/727,417 Active 2040-12-13 US11455186B2 (en) 2019-05-09 2019-12-26 Controller and memory system having the same
US17/868,430 Pending US20220350655A1 (en) 2019-05-09 2022-07-19 Controller and memory system having the same

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/727,417 Active 2040-12-13 US11455186B2 (en) 2019-05-09 2019-12-26 Controller and memory system having the same

Country Status (3)

Country Link
US (2) US11455186B2 (en)
KR (1) KR20200129700A (en)
CN (1) CN111913654B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7443195B2 (en) * 2020-08-21 2024-03-05 キオクシア株式会社 Memory system and control method
CN112965669B (en) * 2021-04-02 2022-11-22 杭州华澜微电子股份有限公司 Data storage system and method
KR102583244B1 (en) * 2022-01-28 2023-09-26 삼성전자주식회사 Storage device and operating method of storage device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070239926A1 (en) * 2006-03-28 2007-10-11 Yevgen Gyl Method and device for reduced read latency of non-volatile memory
US20130073793A1 (en) * 2011-09-16 2013-03-21 Osamu Yamagishi Memory device
US20130215020A1 (en) * 2012-02-17 2013-08-22 Renesas Electronics Corporation Signal processing device and semiconductor device
US20170249104A1 (en) * 2016-02-25 2017-08-31 SK Hynix Inc. Memory controller and request scheduling method using the same

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3422583B1 (en) * 2004-08-30 2020-07-08 Google LLC Systems and methods for providing nonvolatile memory management in wireless phones
US7366028B2 (en) * 2006-04-24 2008-04-29 Sandisk Corporation Method of high-performance flash memory data transfer
US8407562B2 (en) * 2009-09-01 2013-03-26 Marvell World Trade Ltd. Systems and methods for compressing data in non-volatile semiconductor memory drives
US20120167100A1 (en) * 2010-12-23 2012-06-28 Yan Li Manual suspend and resume for non-volatile memory
KR102356071B1 (en) 2015-05-06 2022-01-27 에스케이하이닉스 주식회사 Storage device and operating method thereof
US10037167B2 (en) * 2015-09-11 2018-07-31 Sandisk Technologies Llc Multiple scheduling schemes for handling read requests
KR20170033643A (en) 2015-09-17 2017-03-27 에스케이하이닉스 주식회사 Semiconductor system and operating method thereof
US10503412B2 (en) * 2017-05-24 2019-12-10 Western Digital Technologies, Inc. Priority-based internal data movement
KR20190032809A (en) * 2017-09-20 2019-03-28 에스케이하이닉스 주식회사 Memory system and operating method thereof
KR20190042970A (en) * 2017-10-17 2019-04-25 에스케이하이닉스 주식회사 Memory system and operation method for the same

Also Published As

Publication number Publication date
CN111913654B (en) 2023-08-11
US20200356407A1 (en) 2020-11-12
CN111913654A (en) 2020-11-10
KR20200129700A (en) 2020-11-18
US11455186B2 (en) 2022-09-27

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED