US11243709B2 - Data storage apparatus and operating method thereof - Google Patents

Data storage apparatus and operating method thereof

Info

Publication number
US11243709B2
Authority
US
United States
Prior art keywords
zone
random access
data
backup
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US16/841,274
Other versions
US20210055864A1 (en)
Inventor
Jung Ki Noh
Yong Jin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SK Hynix Inc
Original Assignee
SK Hynix Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SK Hynix Inc filed Critical SK Hynix Inc
Assigned to SK Hynix Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: JIN, YONG; NOH, JUNG KI
Priority to US17/000,082 (granted as US11734175B2)
Publication of US20210055864A1
Application granted
Publication of US11243709B2
Priority to US18/346,203 (published as US20230350803A1)

Classifications

    • G — PHYSICS; G06 — COMPUTING; CALCULATING OR COUNTING; G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/0658 Controller construction arrangements
    • G06F3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G06F3/065 Replication mechanisms
    • G06F11/1458 Management of the backup or restore process
    • G06F12/0246 Memory management in non-volatile memory in block erasable memory, e.g. flash memory
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0804 Caches with main memory updating
    • G06F3/0611 Improving I/O performance in relation to response time
    • G06F3/0613 Improving I/O performance in relation to throughput
    • G06F3/0614 Improving the reliability of storage systems
    • G06F3/0619 Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G06F3/0631 Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F3/064 Management of blocks
    • G06F3/0644 Management of space entities, e.g. partitions, extents, pools
    • G06F3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F3/068 Hybrid storage device
    • G06F3/0688 Non-volatile semiconductor memory arrays
    • G06F12/0868 Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
    • G06F2212/1016 Performance improvement
    • G06F2212/1032 Reliability improvement, data loss prevention, degraded operation etc.
    • G06F2212/214 Solid state disk
    • G06F2212/313 Providing disk cache in storage device
    • G06F2212/7201 Logical to physical mapping or translation of blocks or pages
    • G06F2212/7203 Temporary buffering, e.g. using volatile buffer or dedicated buffer blocks
    • Y — GENERAL TAGGING; Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • Various embodiments generally relate to a semiconductor apparatus, and more particularly, to a data storage apparatus and an operating method thereof.
  • a data storage apparatus using a memory apparatus has the advantages of excellent stability and durability, a very high information access speed, and low power consumption, because it includes no mechanical driver.
  • a data storage apparatus having such advantages includes a universal serial bus (USB) memory apparatus, a memory card having various interfaces, a universal flash storage (UFS) apparatus, and a solid state drive (SSD).
  • Solid state drives may include a NAND flash memory. Memory locations in a NAND flash memory cannot be overwritten in place; a block must be erased before it is written again. Accordingly, it is necessary to match and manage the logical addresses used by a host apparatus and the physical addresses used in the NAND flash memory.
  • a mapping table between a logical address and a physical address may be managed in a 4-kilobyte (KB) unit, such that contiguous blocks of logical addresses each corresponding to 4 KB are mapped into corresponding blocks of physical addresses each corresponding to 4 KB.
  • the mapping table may be stored in a volatile random access memory (RAM), such as a dynamic random access memory (DRAM).
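  • The 4 KB mapping granularity described above can be sketched as a flat table with one entry per 4 KB logical block. This is an illustrative sketch only, not the patent's implementation: the class name, the flat-list layout, and the -1 "unmapped" marker are all assumptions.

```python
# Illustrative sketch of a 4 KB-granularity logical-to-physical (L2P)
# mapping table; names and layout are assumptions, not the patent's design.
GRANULARITY = 4 * 1024  # 4 KB mapping unit


class L2PTable:
    def __init__(self, capacity_bytes):
        # One entry per 4 KB logical block; -1 marks "unmapped".
        self.entries = [-1] * (capacity_bytes // GRANULARITY)

    def map(self, logical_addr, physical_block):
        # Record which physical block backs this 4 KB logical block.
        self.entries[logical_addr // GRANULARITY] = physical_block

    def lookup(self, logical_addr):
        # Translate a logical address to its physical block (or -1).
        return self.entries[logical_addr // GRANULARITY]


table = L2PTable(capacity_bytes=1 * 1024 * 1024)  # 1 MB logical space
table.map(0x3000, physical_block=42)  # logical 12 KB..16 KB -> block 42
```

Because each entry covers 4 KB, a table like this costs one entry per 4 KB of capacity, which is why the patent stores it in volatile RAM such as DRAM.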
  • Various embodiments are directed to the provision of a data storage apparatus having enhanced write performance by securing a data storage space of a volatile memory as a region capable of random write, and an operating method thereof.
  • a data storage apparatus includes a volatile memory, the volatile memory including a region in which a zone mapping table and system information are stored and a random access zone suitable for random writes; a non-volatile memory including a backup zone and a plurality of sequential zones suitable for sequential writes; and a controller configured to identify whether a logical address received with a command from a host apparatus belongs to the random access zone or to the sequential zone and to control an operation corresponding to the command of the identified zone, wherein the controller is configured to back up data stored in the random access zone onto the backup zone based on a criterion and to recover the data stored in the backup zone into the random access zone when a state of the controller switches to an on state after power is off.
  • an operating method of a data storage apparatus includes receiving a logical address and a command from a host; identifying whether the logical address belongs to a random access zone within a volatile memory or to a sequential zone within a non-volatile memory, the volatile memory including a region in which a zone mapping table and system information are stored and the random access zone suitable for random writes, and the non-volatile memory including a backup zone and a plurality of sequential zones suitable for sequential writes; and performing an operation corresponding to the command based on the identified random access zone or sequential zone.
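  • The identification step in the method above — deciding whether a received logical address falls in the random access zone or in one of the sequential zones — can be sketched as follows. The zone boundaries, zone size, and return convention are hypothetical values for illustration; the patent does not fix concrete sizes.

```python
# Hypothetical sketch of the zone-identification step: route a command's
# logical block address (LBA) either to the random access zone held in
# volatile memory or to one of the sequential zones in non-volatile memory.
RANDOM_ZONE_START, RANDOM_ZONE_END = 0, 1024  # LBAs served from volatile RAM
SEQ_ZONE_SIZE = 4096                          # LBAs per sequential zone


def identify_zone(lba):
    # LBAs inside the random access range go to the volatile memory zone.
    if RANDOM_ZONE_START <= lba < RANDOM_ZONE_END:
        return ("random_access", 0)
    # All remaining LBAs map onto numbered sequential zones in NVM.
    return ("sequential", (lba - RANDOM_ZONE_END) // SEQ_ZONE_SIZE)
```

A controller following the claimed method would then service random writes from the fast volatile region and append sequential writes to the identified zone, backing the volatile region up to the backup zone when its criterion is met.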
  • a data storage space for writing data may be secured in a volatile memory through a change in the structure of a mapping table, and performance of a write operation can thereby be improved because the write speed can be increased due to the characteristics of the volatile memory.
  • FIG. 1 illustrates a data storage apparatus according to an embodiment.
  • FIG. 2 illustrates a non-volatile memory according to an embodiment.
  • FIG. 3 illustrates a memory cell array according to an embodiment.
  • FIG. 4 illustrates a data processing system according to an embodiment.
  • FIG. 5 illustrates a volatile memory according to an embodiment.
  • FIG. 6 illustrates an example of a zone mapping table according to an embodiment.
  • FIG. 7 illustrates a backup process according to an embodiment.
  • FIG. 8 illustrates a recovery process according to an embodiment.
  • FIG. 9 is a flowchart of an operating process of the data storage apparatus according to an embodiment.
  • FIG. 10 is a flowchart of a data write process in FIG. 9 , according to an embodiment.
  • FIG. 11 is a flowchart of a data read process in FIG. 9 , according to an embodiment.
  • FIG. 12 is a flowchart of an operating process of the data storage apparatus according to another embodiment.
  • FIG. 13 illustrates a data processing system including a solid state drive (SSD) according to an embodiment.
  • FIG. 14 illustrates a controller in FIG. 13 , according to an embodiment.
  • FIG. 15 illustrates a data processing system including a data storage apparatus according to an embodiment.
  • FIG. 16 illustrates a data processing system including a data storage apparatus according to an embodiment.
  • FIG. 17 illustrates a network system including a data storage apparatus according to an embodiment.
  • FIG. 1 is a diagram illustrating a data storage apparatus 10 according to an embodiment.
  • the data storage apparatus 10 may store data accessed by a host apparatus (not illustrated), such as a mobile phone, an MP3 player, a laptop computer, a desktop computer, a game machine, TV or an in-vehicle infotainment system.
  • the data storage apparatus 10 may be called a memory system.
  • the data storage apparatus 10 may be fabricated as any one of various types of storage apparatuses depending on the protocol of the interface through which it is electrically coupled to the host apparatus.
  • the data storage apparatus 10 may be configured as any one of various types of storage apparatuses, such as a solid state drive (SSD), a multimedia card of an MMC, eMMC, RS-MMC or micro-MMC form, a secure digital card of an SD, mini-SD or micro-SD form, a universal serial bus (USB) storage apparatus, a universal flash storage (UFS) apparatus, a personal computer memory card international association (PCMCIA) card, a storage apparatus of a peripheral component interconnection (PCI) card form, a storage apparatus of a PCI-express (PCI-E) card form, a compact flash (CF) card, a smart media card, and a memory stick.
  • the data storage apparatus 10 may be fabricated as any one of various types of package forms, such as a package on package (POP), a system in package (SIP), a system on chip (SOC), a multi-chip package (MCP), a chip on board (COB), a wafer-level fabricated package (WFP) and a wafer-level stack package (WSP).
  • the data storage apparatus 10 may include a non-volatile memory 100 , a controller 200 and a volatile memory 300 .
  • the non-volatile memory 100 may operate as a storage medium of the data storage apparatus 10 .
  • the non-volatile memory 100 may be configured as one of various types of non-volatile memories, such as a NAND flash memory apparatus, a NOR flash memory apparatus, a ferroelectric random access memory (FRAM) using a ferroelectric capacitor, a magnetic random access memory (MRAM) using a tunneling magneto-resistive (TMR) film, a phase change random access memory (PRAM) using chalcogenide alloys, and a resistive random access memory (ReRAM) using transition metal oxide, depending on the memory cells used, but embodiments are not limited thereto.
  • FIG. 2 illustrates the non-volatile memory 100 in FIG. 1 .
  • FIG. 3 illustrates a memory cell array 110 in FIG. 2 .
  • the non-volatile memory 100 may include the memory cell array 110 , a row decoder 120 , a write/read circuit 130 , a column decoder 140 , a page buffer 150 , a voltage generator 160 , control logic 170 , and an input/output (I/O) circuit 180 .
  • the memory cell array 110 may include a plurality of memory cells (not illustrated) disposed at respective regions in which a plurality of bit lines BL and a plurality of word lines WL intersect with each other.
  • the memory cell array 110 may include a plurality of memory blocks BLK1 to BLKi.
  • Each of the plurality of memory blocks BLK1 to BLKi may include a plurality of pages PG1 to PGj.
  • a memory block corresponds to the smallest unit of the memory cell array 110 that can be independently erased, while a page corresponds to the smallest unit that can be independently programmed.
  • Each of the memory cells of the memory cell array 110 may be a single-level cell (SLC) in which 1-bit data is stored, a multi-level cell (MLC) in which 2-bit data is stored, a triple-level cell (TLC) in which 3-bit data is stored, or a quadruple-level cell (QLC) in which 4-bit data is stored.
  • the memory cell array 110 may include at least one of an SLC, an MLC, a TLC, a QLC, and combinations thereof.
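  • As a quick illustration of the cell types listed above, the raw capacity of a group of cells scales linearly with the number of bits each cell stores. The function name and figures below are illustrative only, not taken from the patent.

```python
# Bits stored per cell for each NAND cell type named in the text.
BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}


def raw_capacity_bits(num_cells, cell_type):
    # Raw (pre-ECC, pre-overprovisioning) capacity in bits.
    return num_cells * BITS_PER_CELL[cell_type]
```

The same physical array thus holds four times as much raw data as QLC than as SLC, which is the usual trade-off against endurance and read margin.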
  • the memory cell array 110 may include memory cells disposed in a two-dimensional horizontal structure or may include memory cells disposed in a three-dimensional vertical structure.
  • the row decoder 120 may be electrically coupled to the memory cell array 110 through the word lines WL.
  • the row decoder 120 may operate under the control of the control logic 170 .
  • the row decoder 120 may decode a row address X_ADDR provided by the control logic 170 , may select at least one of the word lines WL based on a result of the decoding, and may drive the selected word line WL.
  • the row decoder 120 may provide the selected word line WL with an operating voltage Vop provided by the voltage generator 160 .
  • the write/read circuit 130 may be electrically coupled to the memory cell array 110 through the bit lines BL.
  • the write/read circuit 130 may include write/read circuits (not illustrated) corresponding to the respective bit lines BL.
  • the write/read circuit 130 may operate under the control of the control logic 170 .
  • the write/read circuit 130 may include a write driver WD for writing data in memory cells and a sense amplifier (SA) for amplifying data read from memory cells.
  • the write/read circuit 130 may provide a current pulse or voltage pulse to memory cells that belong to the memory cells of the memory cell array 110 and that are selected by the row decoder 120 and the column decoder 140 , thereby performing write and read operations on the selected memory cells.
  • the column decoder 140 may operate under the control of the control logic 170 .
  • the column decoder 140 may decode a column address Y_ADDR provided by the control logic 170 .
  • the column decoder 140 may electrically couple write/read circuits of the write/read circuit 130 , corresponding to respective bit lines BL, and the page buffer 150 based on a result of the decoding.
  • the page buffer 150 may be configured to temporarily store data provided by the memory interface 240 of the controller 200 that is to be written in the memory cell array 110, or data read from the memory cell array 110 that is to be provided to the memory interface 240 of the controller 200.
  • the page buffer 150 may operate under the control of the control logic 170 .
  • the voltage generator 160 may generate various voltages for performing write, read and erase operations on the memory cell array 110 based on a voltage control signal CTRL_vol provided by the control logic 170 .
  • the voltage generator 160 may generate driving voltages Vop for driving the plurality of word lines WL and bit lines BL. Furthermore, the voltage generator 160 may generate at least one reference voltage in order to read data stored in a memory cell MC.
  • the control logic 170 may output various types of control signals for writing data DATA in the memory cell array 110 or reading data DATA from the memory cell array 110 based on a command CMD_op, address ADDR and control signal CTRL received from the controller 200 .
  • the various types of control signals output by the control logic 170 may be provided to the row decoder 120 , the write/read circuit 130 , the column decoder 140 , the page buffer 150 and the voltage generator 160 . Accordingly, the control logic 170 may generally control various types of operations performed in the non-volatile memory 100 .
  • the control logic 170 may generate an operation control signal CTRL_op based on a command CMD and a control signal CTRL, and may provide the generated operation control signal CTRL_op to the write/read circuit 130.
  • the control logic 170 may provide the row decoder 120 and the column decoder 140 with a row address X_ADDR and column address Y_ADDR included in an address ADDR, respectively.
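  • The address split described above — the control logic handing a row address X_ADDR to the row decoder and a column address Y_ADDR to the column decoder — can be illustrated with a toy decoder. The 8-bit column width is an invented parameter for illustration, not a value from the patent.

```python
# Toy illustration of splitting a flat address ADDR into a row address
# (X_ADDR, selects a word line) and a column address (Y_ADDR, selects a
# bit-line group). COL_BITS is an assumed width, not the patent's.
COL_BITS = 8


def split_address(addr):
    x_addr = addr >> COL_BITS               # upper bits: row address
    y_addr = addr & ((1 << COL_BITS) - 1)   # lower bits: column address
    return x_addr, y_addr
```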
  • the I/O circuit 180 may be configured to receive a command CMD, address ADDR and data DATA provided by the controller 200 or to provide the controller 200 with data DATA read from the memory cell array 110 .
  • the I/O circuit 180 may output the command CMD and address ADDR, received from the controller 200 , to the control logic 170 , and may output the data DATA to the page buffer 150 .
  • the I/O circuit 180 may output, to the controller 200 , data DATA received from the page buffer 150 .
  • the I/O circuit 180 may operate under the control of the control logic 170 .
  • the controller 200 may control an overall operation of the data storage apparatus 10 through the execution of firmware or software loaded on a memory 230 .
  • the controller 200 may decode and execute instructions or algorithms of a code form, such as firmware or software.
  • the controller 200 may be implemented in the form of hardware or a combination of hardware and software.
  • the controller 200 may control data to be written to or read from the non-volatile memory 100 or the volatile memory 300 in response to a write command or read command transmitted by a host apparatus 20 (refer to FIG. 4 ). This will be described in detail later.
  • the controller 200 may include a host interface 210 , a processor 220 , the memory 230 and the memory interface 240 .
  • the host interface 210 may provide an interface between a host apparatus and the data storage apparatus 10 in accordance with a protocol of the host apparatus.
  • the host interface 210 may communicate with the host apparatus through one of various protocols, such as a universal serial bus (USB), a universal flash storage (UFS), a multimedia card (MMC), a parallel advanced technology attachment (PATA), a serial advanced technology attachment (SATA), a small computer system interface (SCSI), a serial attached SCSI (SAS), a peripheral component interconnection (PCI), and a PCI express (PCI-e).
  • the processor 220 may be configured with a micro control unit (MCU) and/or a central processing unit (CPU).
  • the processor 220 may process a request transmitted by the host apparatus.
  • the processor 220 may execute an instruction or algorithm of a code form, that is, firmware loaded on the memory 230 , and may control internal function blocks, such as the host interface 210 , the memory 230 and the memory interface 240 , and the non-volatile memory 100 .
  • the processor 220 may generate control signals that will control an operation of the non-volatile memory 100 based on requests transmitted by the host apparatus, and may provide the generated control signals to the non-volatile memory 100 through the memory interface 240 .
  • the memory 230 may be configured as a random access memory, such as a dynamic random access memory (DRAM) or a static random access memory (SRAM).
  • the memory 230 may store firmware executed by the processor 220 .
  • the memory 230 may store data used by the firmware, for example, meta data. That is, the memory 230 may operate as a working memory of the processor 220 .
  • the memory 230 may be configured to include a data buffer (DB) (not illustrated) for temporarily storing one or both of write data to be transmitted from the host apparatus to the non-volatile memory 100 and read data to be transmitted from the non-volatile memory 100 to the host apparatus. That is, the memory 230 may operate as a buffer memory.
  • the memory interface 240 may control the non-volatile memory 100 under the control of the processor 220 .
  • the memory interface 240 may also be called a memory controller.
  • the memory interface 240 may communicate with the non-volatile memory 100 using channel signals CH.
  • the channel signals CH may include a command, an address, and an operation control signal for controlling the non-volatile memory 100 .
  • the memory interface 240 may use the channel signals CH to provide data to the non-volatile memory 100 or to receive data from the non-volatile memory 100 .
  • the volatile memory 300 may include a region 300 a in which a zone mapping table and system information are stored and a random access zone 300 b capable of random write. This will be described in detail later.
  • FIG. 4 is a diagram illustrating a data processing system 1100 according to an embodiment.
  • FIG. 5 is a diagram illustrating the configuration of a volatile memory according to an embodiment.
  • FIG. 6 is a diagram illustrating an example of a zone mapping table 600 according to an embodiment.
  • FIG. 7 is a diagram illustrating a backup process according to an embodiment.
  • FIG. 8 is a diagram illustrating a recovery process according to an embodiment.
  • the data processing system 1100 may include the data storage apparatus 10 (for example, an SSD) and the host apparatus 20 .
  • the data storage apparatus 10 may include the non-volatile memory 100 , the controller 200 and the volatile memory 300 .
  • the non-volatile memory 100 , that is, a NAND flash region, may be configured as a set of zones Zone 0 to Zone N having the same size.
  • each of the zones may include one or more physical blocks.
  • the non-volatile memory 100 may include a backup zone 100 a and a plurality of sequential zones 100 b capable of sequential write.
  • the backup zone 100 a is a backup space that provides a non-volatile characteristic to the random access zone 300 b , which is provided using a marginal space of the volatile memory 300 outside of its system region.
  • the backup zone 100 a may be used in an SLC way.
  • the backup zone 100 a may have a size two times or three times greater than the size of the random access zone 300 b , but embodiments are not limited thereto.
  • the sequential zone 100 b is a user region, and may be used in a TLC or QLC way. In this case, a write operation may be performed on the sequential zone 100 b in a sequential write way.
  • the volatile memory 300 may include the region 300 a in which a zone mapping table for managing the physical addresses of zones and system information are stored and the random access zone 300 b capable of random write.
  • the volatile memory 300 may be implemented as a DRAM, but embodiments are not limited thereto.
  • a zone according to an embodiment may have a relatively larger size than a page or block, and the zone mapping table may include a start physical block address (Start PBA), a total length and the final write location for each zone. Accordingly, a marginal space may occur in a region of the volatile memory 300 because the amount of stored mapping data is reduced compared to a conventional technology.
  • the zone mapping table 600 stored in the volatile memory 300 may include for each zone a logical block address group, a zone index, a start physical block address (PBA 0), a total length and the final write location.
  • the logical block address group may be defined to mean a plurality of logical block addresses grouped by a given number.
  • the zone index may be defined to mean identification information for identifying a respective zone.
  • the final write location of the zone mapping table 600 may be defined to mean the last write location of the respective zone at the present time.
  • the total length may be defined to mean the total length of the physical addresses of the respective zone.
  • a logical block address group 0 may include Logical Block Address (LBA) 0 to LBA 99.
  • Each of the other logical block address groups may include 100 logical block addresses.
  • Such a logical block address group may be matched with one zone.
  • the amount of data included in a mapping table is massive because a logical block address and a physical block address are matched in a one-to-one way.
  • the amount of data included in the mapping table can be reduced because a plurality of logical block addresses are matched with one zone and managed accordingly.
  • the size of mapping data is relatively reduced because the mapping data can be managed in a zone unit without using a mapping process of a 4 KB unit as in a conventional technology.
  • a marginal space of the volatile memory 300 can be secured.
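The space saving described above can be illustrated with rough arithmetic. The capacity, entry size, zone size, and zone-record size below are assumed for illustration only and do not come from the patent; the point is simply that one small record per zone is far smaller than one entry per 4 KB unit:

```python
# Rough, illustrative comparison of mapping-metadata sizes (all numbers
# assumed): a conventional 4 KB-unit map keeps one entry per 4 KB of
# capacity, while a zone map keeps one small record per zone.

CAPACITY = 512 * 2**30          # 512 GiB of capacity (assumed)
ENTRY_BYTES = 4                 # 4-byte physical address per map entry (assumed)
ZONE_SIZE = 256 * 2**20         # 256 MiB per zone (assumed)
ZONE_RECORD_BYTES = 16          # start PBA + total length + final write location (assumed)

page_map_bytes = (CAPACITY // (4 * 2**10)) * ENTRY_BYTES
zone_map_bytes = (CAPACITY // ZONE_SIZE) * ZONE_RECORD_BYTES

print(page_map_bytes // 2**20, "MiB for a 4 KB-unit map")   # 512 MiB
print(zone_map_bytes // 2**10, "KiB for a zone map")        # 32 KiB
```

Under these assumptions the zone map needs kilobytes instead of hundreds of megabytes, which is the marginal space the text says can be used as the random access zone.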
  • a region of the volatile memory 300 that is secured because the size of the zone mapping table 600 is reduced by the change in the structure of the zone mapping table may be used as the random access zone 300 b capable of random write.
  • Each of the non-volatile memory 100 and the volatile memory 300 may be configured with a plurality of zones (Zone 0 to Zone N+1). Accordingly, the host apparatus 20 may recognize, as a logical region, each of the plurality of zones within the volatile memory 300 and the non-volatile memory 100 . That is, the host apparatus 20 may recognize the data storage apparatus 10 as a storage apparatus including a plurality of zones. For example, the host apparatus 20 may recognize (N+1) zones (refer to FIG. 4 ).
  • the controller 200 may identify, such as by using the zone mapping table 600 , whether the logical address belongs to the random access zone 300 b or the sequential zone 100 b , and then may control an operation, corresponding to the write command or read command of the identified zone, to be performed.
  • the controller 200 may also receive the size of data when a write command or a read command is received from the host apparatus 20 .
  • the logical address may mean the start logical address of the data to be read or written. If the logical address belongs to the random access zone or the sequential zone, this may mean that a physical address corresponding to the logical address belongs to the random access zone or the sequential zone, respectively.
  • if the logical address belongs to the sequential zone, the controller 200 may control an operation, corresponding to the write command or read command, to be performed using the physical address corresponding to the logical address as a start address, wherein the physical address is an address within a sequential zone. If the logical address belongs to the random access zone, the controller 200 may control an operation, corresponding to the write command or read command, to be performed using the physical address corresponding to the logical address as a start address, wherein the physical address is an address within a random access zone.
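The routing decision above can be sketched as follows. The group size of 100 LBAs follows the example in the text; the function names and the zone index reserved here for the random access zone are hypothetical:

```python
# Hypothetical sketch of the routing step: look up which zone a start LBA
# maps to, then dispatch the command to the random access zone (volatile
# memory) or to a sequential zone (non-volatile memory).

LBAS_PER_GROUP = 100            # each logical block address group spans 100 LBAs
RANDOM_ACCESS_ZONE_INDEX = 10   # assumed: zone index reserved for the random access zone

def zone_index_for(lba: int) -> int:
    # In this sketch, logical block address group N is matched with zone index N.
    return lba // LBAS_PER_GROUP

def route(lba: int) -> str:
    idx = zone_index_for(lba)
    return "random_access" if idx == RANDOM_ACCESS_ZONE_INDEX else "sequential"

print(route(5))     # LBA 5 -> group 0 -> "sequential"
print(route(1002))  # LBA 1002 -> group 10 -> "random_access"
```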
  • the controller 200 may back up, onto the backup zone 100 a , data stored in the random access zone 300 b based on a preset criterion. Furthermore, when the state of the controller 200 switches to an on state after power is off, the controller 200 may recover the data stored in the backup zone 100 a into the random access zone 300 b.
  • the controller 200 may identify a zone index matched with the start logical address of an operation based on the zone mapping table 600 , and may identify whether the zone index is the random access zone 300 b or the sequential zone 100 b.
  • the controller 200 may identify that LBA 5 belongs to a logical block address group 0 (LBAG 0 in FIG. 6 ) based on the zone mapping table, and may identify that the logical block address group 0 is matched with a zone index 0 and thus belongs to the sequential zone 100 b .
  • a case where the logical block address group 0 includes LBA 0 to LBA 99 and is matched with the zone index 0 may be described as an example.
  • LBAs and a zone index matched with each logical block address group may be preset.
  • the controller 200 may differently apply a process of identifying a physical address when a write command or read command for the random access zone 300 b is received because the random access zone 300 b corresponds to a volatile memory and the sequential zone 100 b corresponds to a non-volatile memory.
  • the controller 200 may identify a zone index matched with the start logical address of a zone based on the zone mapping table, may write data, corresponding to the size of the data received when the write command is received, from a location next to the final write location of the identified zone index, and then may update the final write location in the zone mapping table.
  • the controller 200 may identify a logical address group 0 and zone index 0, matched with LBA 5, based on the zone mapping table. If the final write physical address of the zone index 0 is 10, the controller 200 may write data, corresponding to the size (e.g. 4) of data received when a write command is received, starting from physical address 11 of the zone corresponding to zone index 0. Furthermore, the controller 200 may update the final write location in the zone mapping table to physical address 14.
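The worked example above (LBA 5, final write location 10, size 4) can be reproduced with a minimal sketch; the table layout and function names are assumed for illustration:

```python
# Minimal sketch of a sequential-zone write: find the zone for the start LBA,
# append the data right after the zone's final write location, then update
# that location in the zone mapping table.

zone_mapping_table = {
    0: {"start_pba": 0, "total_length": 100, "final_write_location": 10},
}

LBAS_PER_GROUP = 100

def sequential_write(lba: int, size: int) -> int:
    zone = zone_mapping_table[lba // LBAS_PER_GROUP]
    start = zone["final_write_location"] + 1     # write from the next location
    zone["final_write_location"] = start + size - 1
    return start

# LBA 5 maps to zone index 0 whose final write location is 10, so a write of
# size 4 starts at physical address 11 and the final write location becomes 14.
print(sequential_write(5, 4))                           # 11
print(zone_mapping_table[0]["final_write_location"])    # 14
```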
  • the controller 200 may write data, corresponding to the size of the data received from the host apparatus 20 , based on a start physical address of the random access zone 300 b.
  • the controller 200 may identify a logical block address group (LBAG) 10 to which LBA 902 belongs and a zone index 10, which is preset in this example as being the zone index of a zone in the random access zone 300 b . If the physical addresses of the region 300 a of the volatile memory 300 in which the zone mapping table and system information are stored are included in physical blocks 0 to 599 (wherein, for example, each block includes 4 KB) and the physical addresses of the random access zone 300 b are in physical blocks 600 to 999 , the controller 200 may write data, corresponding to the size of the data received along with the write command, starting from physical block 600 .
  • the controller 200 may write data from the start physical address of the random access zone 300 b because the random access zone 300 b corresponds to a volatile random-access memory.
  • the controller 200 may identify a zone index matched with the logical address based on the zone mapping table, and may read data corresponding to the size of data received from the host apparatus 20 using a physical address, corresponding to the logical address in a corresponding zone of the identified zone index, as a start address.
  • the controller 200 may identify the final physical address by adding the offset relative to the corresponding logical block address group of the logical address, received from the host apparatus 20 , to a start physical address of the random access zone 300 b , and may read data, corresponding to the size of data received from the host apparatus 20 , from the final physical address. For example, if the logical address corresponds to an offset of 8000 from the beginning of LBAG 10, and LBAG 10 is mapped to a zone beginning at the start of the random access zone 300 b , then the start physical address would be an address offset by 8000 from the start of the random access zone 300 b.
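The address calculation for the random access zone can be sketched as below. The block numbers follow the example in the text (system region in physical blocks 0 to 599, random access zone starting at block 600); the function name and the simple in-group offset rule are assumptions for illustration:

```python
# Sketch of the random access zone address calculation: the final physical
# address is the zone's start physical address plus the LBA's offset within
# its logical block address group.

RANDOM_ZONE_START_PBA = 600   # first physical block of the random access zone (per the example)
LBAS_PER_GROUP = 100

def random_access_pba(lba: int) -> int:
    offset_in_group = lba % LBAS_PER_GROUP   # offset relative to the group start
    return RANDOM_ZONE_START_PBA + offset_in_group

print(random_access_pba(902))   # offset 2 within its group -> physical block 602
```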
  • the random access zone 300 b may include a plurality of slice regions (backup slices) 0 to n. Random access indices 0 to n may be sequentially assigned to the plurality of slice regions, respectively. In this case, the random access index may be defined to mean an index assigned to each of the plurality of slice regions within the random access zone 300 b .
  • the slice region may have a size corresponding to the size of data which may be written at a time (that is, to a page size of the non-volatile memory 100 ), but embodiments are not limited thereto.
  • each of the random access indices may be matched with a flush flag (Flush 1 or 0), indicating whether data stored in the corresponding slice of the random access zone 300 b is backed up onto the backup zone 100 a , and an update flag (Update 1 or 0) indicating whether data stored in the corresponding slice of the random access zone 300 b has been updated with new data.
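The per-slice bookkeeping described above can be represented as in the following sketch; the class, field names, and the four-slice example are hypothetical:

```python
# Hypothetical representation of the slice bookkeeping: each slice region of
# the random access zone carries a random access index plus a flush flag
# (backed up onto the backup zone or not) and an update flag (updated with
# new data since the last backup or not).

from dataclasses import dataclass

@dataclass
class Slice:
    index: int       # random access index 0..n
    flush: int = 0   # 1 = backed up onto the backup zone, 0 = not backed up
    update: int = 0  # 1 = updated with new data since the last backup

slices = [Slice(i) for i in range(4)]

def host_write(i: int) -> None:
    # New host data in a slice marks it as updated (not yet backed up).
    slices[i].update = 1

def back_up_all() -> None:
    for s in slices:
        s.flush, s.update = 1, 0   # backed up; no pending update remains

host_write(2)
back_up_all()
print([(s.flush, s.update) for s in slices])   # [(1, 0), (1, 0), (1, 0), (1, 0)]
```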
  • the backup zone 100 a may include a first region (Index 0, Index 1) onto which data written in the random access zone 300 b is backed up in a one-to-one way and a second region (Index 2) onto which the latest data updated in the random access zone 300 b is backed up when power is turned off or interrupted.
  • Each of the first region and the second region may be matched with each virtual backup index.
  • One or more backup indices may be assigned to each backup zone. For example, two backup indices, such as Index 0 and Index 1, may be assigned to the first region, and one backup index, such as Index 2, may be assigned to the second region.
  • the first region may be configured to include two or more subregions respectively corresponding to Index 0 and Index 1, where each subregion of the first region has a size equal to or greater than the size of the random access zone 300 b.
  • the controller 200 may separately manage, as system information, an indication of at which backup index of each of the first and second regions the latest data is stored; that is, which indices are the latest backup indices.
  • the information on the latest backup index may be stored in the volatile memory 300 as system information.
  • the controller 200 may back up, onto the backup zone 100 a , the data stored in the random access zone 300 b , and may change a value of a corresponding flush flag for the backed-up random access zone to 1 (i.e., one).
  • the controller 200 may set index 0 as the latest backup index for the first region.
  • the controller 200 may set index 1 as the latest backup index for the first region.
  • the controller 200 may erase the subregion corresponding to index 1 after the data was backed up to the subregion corresponding to index 0 and may erase the subregion corresponding to index 0 after the data was backed up to the subregion corresponding to index 1, in order to prepare for the next backup operation. Furthermore, after backing up the data to the first region, the controller 200 may erase the second region and reset the update flags for the slices, to prepare the second region to accept backup data when, for example, power fails. In such embodiments, the controller 200 may alternate between backing up the random access zone 300 b to the subregions of index 0 and index 1.
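The alternation between the two first-region subregions can be sketched as a ping-pong scheme; the data structures and function name below are assumptions for illustration:

```python
# Sketch of the alternating backup into the first region: subregions index 0
# and index 1 take turns holding the full copy of the random access zone.
# After backing up into one subregion, the other is erased so it is ready for
# the next backup, and the latest backup index is recorded.

first_region = {0: None, 1: None}   # subregion contents (None = erased)
latest_backup_index = None

def back_up_first_region(ram_zone_data: bytes) -> int:
    global latest_backup_index
    # Write into the subregion that is NOT the latest backup.
    target = 0 if latest_backup_index != 0 else 1
    first_region[target] = ram_zone_data
    first_region[1 - target] = None          # erase the other subregion
    latest_backup_index = target
    return target

print(back_up_first_region(b"snap1"))   # first backup goes to index 0
print(back_up_first_region(b"snap2"))   # next backup goes to index 1
print(back_up_first_region(b"snap3"))   # then back to index 0
```

This matches the text's observation that, at recovery time, the subregion holding valid data can be told apart from the one that is in an erased state.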
  • the controller 200 may sequentially write, in the backup zone 100 a , data stored in a plurality of slice regions based on random access index numbers, and may change values of flush flags, for the backed-up slice regions, to 1 (i.e., one).
  • the flush flag having a value of 1 may indicate that data has been backed up onto the backup zone 100 a .
  • the flush flag having a value of 0 may indicate that data has not been backed up onto the backup zone 100 a.
  • the controller 200 may apply, as a backup condition, a condition in which the amount of data written in response to requests from the host apparatus 20 reaches a slice size, rather than a condition based on the number of write commands, but embodiments are not limited thereto.
  • the backup condition may be changed or added to depending on an operator's needs.
  • the controller 200 may reset the values of all the flush flags to 0 (i.e., zero).
  • the controller 200 may change a value of an update flag for a corresponding slice region to 1 (i.e., one).
  • the update flag 1 may indicate that data has been updated with new data, but has not been backed up since the update.
  • the update flag 0 may indicate that there is no new data that has not been backed up onto the backup zone 100 a.
  • the controller 200 may back up, onto the second region (Index 2), data that is stored in the random access zone 300 b and that has an update flag of 1. After the backup of update data is completed, the controller 200 may reset the update flag for a corresponding slice region to 0.
  • the controller 200 may write, in the second region (Index 2), data that is stored in the random access zone 300 b and that has a flush flag of 0 (indicating that a backup operation to the first region was only partly completed and did not back up the data) or update flag of 1 (indicating that the data was modified since the last backup to the first region).
  • the controller 200 may also store a corresponding random access index. In this case, the random access index may be written in a spare region of the first page of the second region.
  • a plurality of corresponding random access indices may be stored in the spare region of the first page of the second region, or in another embodiment may be stored in spare regions of the pages used to store the respective slices.
  • the stored random access index (or indices) may be used to identify a location of the backed up data within a random access zone prior to the backup when the data is recovered.
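The selection of slices for the second-region backup can be sketched as follows; the slice records and data values are illustrative, not from the patent:

```python
# Sketch of the power-off backup into the second region (Index 2): only
# slices whose flush flag is 0 (missed by a partly completed first-region
# backup) or whose update flag is 1 (modified since that backup) are written,
# each together with its random access index so that recovery can place the
# data back at the correct location.

slices = [
    {"index": 0, "flush": 1, "update": 0, "data": "a"},
    {"index": 1, "flush": 0, "update": 0, "data": "b"},   # not yet backed up
    {"index": 2, "flush": 1, "update": 1, "data": "c"},   # updated since backup
]

def back_up_second_region(slices):
    # Each entry keeps (random access index, data), mirroring the index the
    # text says is stored in a spare region of the page.
    return [(s["index"], s["data"]) for s in slices
            if s["flush"] == 0 or s["update"] == 1]

print(back_up_second_region(slices))   # [(1, 'b'), (2, 'c')]
```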
  • the data storage apparatus 10 may perform a backup operation using power from an internal capacitor or from an external power source.
  • the controller 200 may calculate physical addresses corresponding to the latest backup index of a first region (either the subregion corresponding to Index 0 or the subregion corresponding to Index 1), and may sequentially read data from the corresponding physical addresses to the random access zone 300 b . Specifically, the controller 200 may calculate the final physical address whose data needs to be recovered by incorporating the start physical address of a random access zone into a backup index. In an embodiment, the controller 200 may determine whether the subregion of index 0 or the subregion of index 1 holds the data to be restored by determining which subregion is in an erased state, by using system information stored in the nonvolatile memory, or both.
  • the controller 200 may read the latest data, stored in the backup zone 100 a of the second region (Index 2 in FIG. 8 ), to the random access zone 300 b.
  • the controller 200 may identify a location of the random access zone 300 b to which the latest data will be written based on a corresponding random access index stored in the second region. In this manner, any data not backed up because of either a failure of a backup operation to the first region to complete or because the data was updated after the last backup operation to the first region will be restored from the second region.
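The two-step recovery described above can be sketched as follows; the list-based structures and function name are assumptions for illustration:

```python
# Sketch of the recovery sequence at power-on: first restore the whole random
# access zone from the latest first-region backup, then overlay the slices
# saved in the second region at the positions given by their stored random
# access indices.

def recover(first_region_backup, second_region_entries):
    ram_zone = list(first_region_backup)        # step 1: full restore
    for idx, data in second_region_entries:     # step 2: overlay latest slices
        ram_zone[idx] = data
    return ram_zone

# Slice 1 was updated (or missed) after the last full backup, so the second
# region's copy of it overrides the first region's copy.
restored = recover(["a0", "b0", "c0"], [(1, "b1")])
print(restored)   # ['a0', 'b1', 'c0']
```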
  • FIG. 9 is a flowchart for describing an operating process 900 of the data storage apparatus 10 according to an embodiment.
  • the data storage apparatus 10 may receive a logical address along with a write command or a read command from the host apparatus 20 (S 101 ).
  • the data storage apparatus 10 may also receive the size of data when receiving the write command or read command from the host apparatus 20 .
  • the data storage apparatus 10 may identify whether the logical address belongs to the random access zone 300 b within the volatile memory 300 or to the sequential zone 100 b within the non-volatile memory 100 (S 103 and S 105 ).
  • the logical address may mean the start logical address of an operation corresponding to the command. If the logical address belongs to the random access zone or the sequential zone, this may respectively mean that a physical address corresponding to the logical address belongs to the random access zone or the sequential zone. That is, if the logical address belongs to the sequential zone, the data storage apparatus 10 may control an operation, corresponding to the write command or read command, to be performed using the physical address corresponding to the logical address as a start address.
  • the volatile memory 300 may include the region 300 a in which a zone mapping table and system information are stored and the random access zone 300 b capable of random write.
  • the non-volatile memory 100 may include the backup zone 100 a and the plurality of sequential zones 100 b capable of sequential write.
  • the data storage apparatus 10 may perform an operation, corresponding to the write command or read command, based on the random access zone 300 b or sequential zone 100 b identified at step S 103 (S 107 and S 117 ). This will be described in detail later.
  • the data storage apparatus 10 may back up data, stored in the random access zone 300 b , onto the backup zone 100 a based on a preset criterion (S 109 and S 111 ).
  • the random access zone 300 b may include a plurality of slice regions. Each of the plurality of slice regions is matched with a random access index. Each of the random access indices may be matched with a flush flag, indicating whether data stored in the random access zone 300 b has been backed up onto the backup zone 100 a , and an update flag indicating whether data stored in the random access zone 300 b has been updated with new data.
  • the backup zone 100 a may include the first region (as shown in FIG. 7 ) onto which data written in the random access zone 300 b is backed up in a one-to-one way and the second region (as also shown in FIG. 7 ) onto which the latest data updated in the random access zone 300 b is backed up when power is turned off or otherwise interrupted.
  • Each of the first region and the second region may have a respective latest backup index identifying a subregion that stores the latest backed up data in that region.
  • the data storage apparatus 10 may back up, onto the backup zone 100 a , the data stored in the random access zone 300 b , and may change a value of a corresponding flush flag for the backed-up random access zone 300 b to 1 (i.e., one).
  • the data storage apparatus 10 may sequentially write, in the backup zone 100 a , data stored in the plurality of slice regions based on random access index numbers.
  • the data storage apparatus 10 may sequentially back up, onto the backup zone 100 a , data stored in slice regions from a random access index 0 to a random access index n.
  • the data storage apparatus 10 may change values of flush flags for the backed-up slice regions to 1 (i.e., one).
  • the data storage apparatus 10 may change a value of an update flag for a corresponding slice region to 1 (i.e., one).
  • the data storage apparatus 10 may write, in the second region (Index 2), data in the random access zone 300 b having a flush flag of 0 (Flush 0) or update flag of 1 (Update 1).
  • the data storage apparatus 10 may also store a corresponding random access index.
  • the data storage apparatus 10 may recover the data stored in the backup zone 100 a into the random access zone 300 b (S 113 and S 115 ). Note that although S 113 and S 115 are shown as following after S 111 , embodiments are not limited thereto, and a power interruption and subsequent recovery may occur at any time during the process 900 of FIG. 9 .
  • the data storage apparatus 10 may calculate physical addresses corresponding to the latest backup index of the first region (such as, for example Index 0 or Index 1), and may sequentially read data from the physical addresses to the random access zone 300 b.
  • the data storage apparatus 10 may separately manage, as system information, a latest backup index that belongs to the first region and a latest backup index that belongs to the second region, each indicating where in their region the latest data is stored.
  • the latest backup index may be stored in the volatile memory 300 as system information. After identifying the latest backup index, the data storage apparatus 10 may recover the data of the corresponding backup index into the random access zone 300 b at step S 115 .
  • the data storage apparatus 10 may read the latest data, stored in the second region of the backup zone 100 a , to the random access zone 300 b.
  • FIG. 10 is a detailed flowchart for the data write process 910 such as may be used in the process 900 of FIG. 9 .
  • the data storage apparatus 10 may receive a logical address along with a write command from the host apparatus 20 (S 201 ).
  • the controller 200 may also receive the size of data when receiving a write command or read command from the host apparatus 20 .
  • the data storage apparatus 10 may identify whether the logical address belongs to the random access zone 300 b within the volatile memory or the sequential zone 100 b within the non-volatile memory 100 (S 203 and S 205 ).
  • the data storage apparatus 10 may write data, corresponding to the size of the data received from the host apparatus 20 , based on a start physical address of the random access zone 300 b (S 207 and S 209 ).
  • the data storage apparatus 10 may identify a zone index, matched with the logical address received along with the write command from the host apparatus 20 , based on the zone mapping table.
  • the data storage apparatus 10 may identify a physical address on which a write operation will be performed by identifying the final write location of the identified zone index (S 211 ).
  • the data storage apparatus 10 may write data, corresponding to the size of the data, from a location next to the final write location of the identified zone index (S 213 ).
  • the size of the data may correspond to a page size of the sequential zone 100 b .
  • the size of the data may be less than a page size of the sequential zone 100 b , and a read-modify-write operation may be used to perform the write of the data.
  • the data storage apparatus 10 may update the final write location in the zone mapping table (S 215 ).
  • FIG. 11 is a detailed flowchart for describing a data read process 920 such as may be used in the process 900 of FIG. 9 .
  • the data storage apparatus 10 may receive a logical address along with a read command from the host apparatus 20 (S 301 ). Next, the data storage apparatus 10 may identify whether the logical address belongs to the random access zone 300 b within the volatile memory or the sequential zone 100 b within the non-volatile memory (S 303 and S 305 ).
  • the data storage apparatus 10 may identify the final physical address by adding a portion of the logical address, such as the logical address's offset from a start logical address of the corresponding logical block address group, to a start physical address of the random access zone 300 b (S 307 ).
  • the region 300 a in which the zone mapping table and system information are stored occupies a part of a memory space and the remaining marginal space is used as the random access zone 300 b .
  • the start physical address of the random access zone 300 b is not 0, but may be a physical address after the region 300 a in which the zone mapping table and system information are stored.
  • the data storage apparatus 10 may identify the final physical address from which data will be actually read, by adding all or a portion of the start logical address of a command received from the host apparatus 20 , to the start physical address of the random access zone 300 b.
  • the data storage apparatus 10 may read, from the final physical address, data corresponding to the size of the data received when the read command is received (S 309 ).
  • the data storage apparatus 10 may identify a zone index matched with the logical address based on the zone mapping table.
  • the data storage apparatus 10 may identify a start physical address corresponding to the logical address at the identified zone index (S 311 ).
  • the data storage apparatus 10 may read data, corresponding to the size of the data requested by the host apparatus 20 , from the start physical address identified at step S 311 (S 313 ).
  • FIG. 12 is a flowchart for describing an operating process 930 of the data storage apparatus according to another embodiment. A case where the data storage apparatus 10 moves data stored in the random access zone 300 b to the backup zone 100 a will be described as an example.
  • the data storage apparatus 10 may back up data stored in the random access zone 300 b onto the backup zone 100 a.
  • the data storage apparatus 10 may identify whether the amount of data written in the random access zone 300 b is a reference value or more (S 401 ).
  • the random access zone 300 b may include a plurality of slice regions. Each of the plurality of slice regions may be matched with a respective random access index. Each of the random access indices may be matched with a flush flag, indicating whether data stored in the random access zone 300 b has been backed up onto the backup zone 100 a , and an update flag indicating whether data stored in the random access zone 300 b has been updated with new data.
  • the backup zone 100 a may include the first region (shown in FIG. 7 ) onto which data written in the random access zone 300 b is backed up in a one-to-one way and the second region (also shown in FIG. 7 ) onto which the latest data updated in the random access zone 300 b is backed up when power is turned off or interrupted.
  • Each of the first region and the second region may have a respective latest backup index indicating a subregion of the respective region that includes the latest backed up data.
  • the data storage apparatus 10 may sequentially write, in the backup zone 100 a , data stored in a plurality of slice regions within the random access zone 300 b , based on random access index numbers (S 403 ).
  • the data storage apparatus 10 may change a value of a corresponding flush flag for each backed-up slice region of the random access zone 300 b to 1 (i.e., one) (S 405 ).
  • the data storage apparatus 10 may identify whether the number of write commands received from the host apparatus 20 is a reference value or more (S 407 ).
  • the data storage apparatus 10 may sequentially write, in the backup zone 100 a , data stored in a plurality of slice regions based on random access index numbers (S 409 ).
  • the data storage apparatus 10 may change a value of a corresponding flush flag for the backed-up slice region to 1 (i.e., one) (S 411 ).
  • the data storage apparatus 10 may change a value of a corresponding update flag for a corresponding slice region to 1 (i.e., one) (S 413 ).
  • the data storage apparatus 10 may write, in the second region (Index 2) of the backup zone 100 a , data of the random access zone 300 b having a flush flag of 0 (Flush 0) or an update flag of 1 (Update 1) (S 417 ).
  • the data storage apparatus 10 may also store a corresponding random access index for each slice written.
  • the data storage apparatus 10 may recover the data stored in the backup zone 100 a into the random access zone 300 b.
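The flush-flag and update-flag bookkeeping of steps S 401 to S 417 may be sketched as follows. The class names, flag encodings, and method names are illustrative assumptions, not the disclosed implementation.

```python
# Minimal sketch of the flush/update-flag bookkeeping described in
# steps S401-S417. Names and thresholds are assumptions for illustration.

class SliceRegion:
    def __init__(self, index):
        self.index = index   # random access index of this slice region
        self.data = None
        self.flush = 0       # 1 = backed up onto the backup zone (S405/S411)
        self.update = 0      # 1 = updated with new data after backup (S413)

class RandomAccessZone:
    def __init__(self, num_slices):
        self.slices = [SliceRegion(i) for i in range(num_slices)]

    def write(self, index, data):
        s = self.slices[index]
        if s.flush == 1:
            s.update = 1     # S413: data changed after it was backed up
        s.data = data

    def backup_all(self, backup_zone):
        # S403/S409: write slices sequentially by random access index,
        # then mark each backed-up slice as flushed.
        for s in self.slices:
            backup_zone.append((s.index, s.data))
            s.flush, s.update = 1, 0

    def backup_dirty(self, backup_zone):
        # S417: on power interruption, back up only slices never flushed
        # (flush == 0) or updated after flushing (update == 1), together
        # with their random access indices for later recovery.
        for s in self.slices:
            if s.flush == 0 or s.update == 1:
                backup_zone.append((s.index, s.data))
```

Storing the random access index alongside each backed-up slice, as in the tuples above, is what allows the data to be recovered into the correct slice regions when power returns.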
  • FIG. 13 is a diagram illustrating a data processing system 2000 including a solid state drive (SSD) according to an embodiment.
  • the data processing system 2000 may include a host apparatus 2100 and a solid state drive 2200 (hereinafter referred to as an “SSD”).
  • the SSD 2200 may include a controller 2210 , a buffer memory apparatus 2220 , non-volatile memories 2231 to 223 n , a power supply 2240 , a signal connector 2250 and a power connector 2260 .
  • the controller 2210 may control an overall operation of the SSD 2200 .
  • the buffer memory apparatus 2220 may temporarily store data to be stored in the non-volatile memories 2231 to 223 n . Furthermore, the buffer memory apparatus 2220 may temporarily store data read from the non-volatile memories 2231 to 223 n . The data temporarily stored in the buffer memory apparatus 2220 may be transmitted to the host apparatus 2100 or the non-volatile memories 2231 to 223 n under the control of the controller 2210 .
  • the non-volatile memories 2231 to 223 n may be used as storage media of the SSD 2200 .
  • the non-volatile memories 2231 to 223 n may be electrically coupled to the controller 2210 through a plurality of channels CH 1 to CHn.
  • One or more non-volatile memories may be electrically coupled to one channel.
  • Non-volatile memories electrically coupled to one channel may be electrically coupled to the same signal bus and data bus.
  • the power supply 2240 may provide a power supply PWR, received through the power connector 2260 , into the SSD 2200 .
  • the power supply 2240 may include an auxiliary power supply 2241 . If sudden power-off occurs, the auxiliary power supply 2241 may supply power so that the SSD 2200 is terminated normally.
  • the auxiliary power supply 2241 may include high-capacity capacitors capable of being charged with the power supply PWR.
  • the controller 2210 may exchange signals SGL with the host apparatus 2100 through the signal connector 2250 .
  • the signal SGL may include a command, an address, data, etc.
  • the signal connector 2250 may be configured with various types of connectors based on an interface used between the host apparatus 2100 and the SSD 2200 .
  • FIG. 14 is a diagram illustrating the configuration of the controller 2210 in FIG. 13 .
  • the controller 2210 may include a host interface unit 2211 , a control unit 2212 , a random access memory 2213 , an error correction code (ECC) unit 2214 and a memory interface unit 2215 .
  • the host interface unit 2211 may provide an interface between the host apparatus 2100 and the SSD 2200 based on a protocol of the host apparatus 2100 .
  • the host interface unit 2211 may communicate with the host apparatus 2100 through any one of protocols, such as secure digital (SD), universal serial bus (USB), multi-media card (MMC), embedded MMC (eMMC), personal computer memory card international association (PCMCIA), parallel advanced technology attachment (PATA), serial advanced technology attachment (SATA), small computer system interface (SCSI), serial attached SCSI (SAS), peripheral component interconnection (PCI), PCI express (PCI-E), and universal flash storage (UFS).
  • the host interface unit 2211 may perform a disk emulation function for enabling the host apparatus 2100 to recognize the SSD 2200 as a general-purpose data storage apparatus, for example, a hard disk drive (HDD).
  • the control unit 2212 may analyze and process a signal SGL received from the host apparatus 2100 .
  • the control unit 2212 may control operations of internal function blocks based on firmware or software for driving the SSD 2200 .
  • the random access memory 2213 may be used as a working memory for driving such firmware or software.
  • the ECC unit 2214 may generate parity data of data to be transmitted to the non-volatile memories 2231 to 223 n .
  • the generated parity data may be stored in the non-volatile memories 2231 to 223 n along with data.
  • the ECC unit 2214 may detect an error of data read from the non-volatile memories 2231 to 223 n based on the parity data. If the detected error is within a correctable range, the ECC unit 2214 may correct the detected error.
  • the memory interface unit 2215 may provide the non-volatile memories 2231 to 223 n with control signals, such as a command and an address, under the control of the control unit 2212 . Furthermore, the memory interface unit 2215 may exchange data with the non-volatile memories 2231 to 223 n under the control of the control unit 2212 . For example, the memory interface unit 2215 may provide the non-volatile memories 2231 to 223 n with data stored in the buffer memory apparatus 2220 or may provide the buffer memory apparatus 2220 with data read from the non-volatile memories 2231 to 223 n.
  • FIG. 15 is a diagram illustrating a data processing system 3000 including a data storage apparatus according to an embodiment.
  • the data processing system 3000 may include a host apparatus 3100 and a data storage apparatus 3200 .
  • the host apparatus 3100 may be configured in a board form, such as a printed circuit board (PCB). Although not illustrated, the host apparatus 3100 may include internal function blocks for performing functions of the host apparatus.
  • the host apparatus 3100 may include a connection terminal 3110 , such as a socket, a slot or a connector.
  • the data storage apparatus 3200 may be mounted on the connection terminal 3110 .
  • the data storage apparatus 3200 may be configured in a board form, such as a PCB.
  • the data storage apparatus 3200 may be called a memory module or a memory card.
  • the data storage apparatus 3200 may include a controller 3210 , a buffer memory apparatus 3220 , non-volatile memories 3231 and 3232 , a power management integrated circuit (PMIC) 3240 and a connection terminal 3250 .
  • the controller 3210 may control an overall operation of the data storage apparatus 3200 .
  • the controller 3210 may be configured identically with the controller 2210 of FIG. 14 .
  • the buffer memory apparatus 3220 may temporarily store data to be stored in the non-volatile memories 3231 and 3232 . Furthermore, the buffer memory apparatus 3220 may temporarily store data read from the non-volatile memories 3231 and 3232 . The data temporarily stored in the buffer memory apparatus 3220 may be transmitted to the host apparatus 3100 or the non-volatile memories 3231 and 3232 under the control of the controller 3210 .
  • the non-volatile memories 3231 and 3232 may be used as storage media of the data storage apparatus 3200 .
  • the PMIC 3240 may provide power, received through the connection terminal 3250 , into the data storage apparatus 3200 .
  • the PMIC 3240 may manage power of the data storage apparatus 3200 under the control of the controller 3210 .
  • connection terminal 3250 may be electrically coupled to the connection terminal 3110 of the host apparatus. Signals, such as a command, an address and data, and power may be transmitted between the host apparatus 3100 and the data storage apparatus 3200 through the connection terminal 3250 .
  • the connection terminal 3250 may be configured in various forms based on the interface used between the host apparatus 3100 and the data storage apparatus 3200 .
  • the connection terminal 3250 may be positioned on any one side of the data storage apparatus 3200 .
  • FIG. 16 is a diagram illustrating a data processing system 4000 including a data storage apparatus according to an embodiment.
  • the data processing system 4000 may include a host apparatus 4100 and a data storage apparatus 4200 .
  • the host apparatus 4100 may be configured in a board form, such as a PCB. Although not illustrated, the host apparatus 4100 may include internal function blocks for performing functions of the host apparatus.
  • the data storage apparatus 4200 may be configured in a surface-mount package form.
  • the data storage apparatus 4200 may be mounted on the host apparatus 4100 through solder balls 4250 .
  • the data storage apparatus 4200 may include a controller 4210 , a buffer memory apparatus 4220 and a non-volatile memory 4230 .
  • the controller 4210 may control an overall operation of the data storage apparatus 4200 .
  • the controller 4210 may be configured identically with the controller 3210 of FIG. 15 .
  • the buffer memory apparatus 4220 may temporarily store data to be stored in the non-volatile memory 4230 . Furthermore, the buffer memory apparatus 4220 may temporarily store data read from the non-volatile memory 4230 . The data temporarily stored in the buffer memory apparatus 4220 may be transmitted to the host apparatus 4100 or the non-volatile memory 4230 under the control of the controller 4210 .
  • the non-volatile memory 4230 may be used as a storage medium of the data storage apparatus 4200 .
  • FIG. 17 is a diagram illustrating a network system 5000 including a data storage apparatus according to an embodiment.
  • the network system 5000 may include a server system 5300 and a plurality of client systems 5410 , 5420 and 5430 , which are electrically coupled over a network 5500 .
  • the server system 5300 may serve data in response to a request from the plurality of client systems 5410 , 5420 and 5430 .
  • the server system 5300 may store data provided by the plurality of client systems 5410 , 5420 and 5430 .
  • the server system 5300 may provide data to the plurality of client systems 5410 , 5420 and 5430 .
  • the server system 5300 may include a host apparatus 5100 and a data storage apparatus 5200 .
  • the data storage apparatus 5200 may be configured as any one of the data storage apparatus 10 of FIG. 1 , the SSD 2200 of FIG. 13 , the data storage apparatus 3200 of FIG. 15 , and the data storage apparatus 4200 of FIG. 16 .


Abstract

A data storage apparatus includes a volatile memory, the volatile memory including a region in which a zone mapping table and system information are stored and a random access zone capable of random write. The data storage apparatus further includes a non-volatile memory including a backup zone and a plurality of sequential zones capable of sequential write, and a controller configured to identify whether a logical address belongs to the random access zone or the sequential zone when the logical address and a data size are received along with a write command or read command from a host apparatus and to control an operation corresponding to the write command or read command.

Description

CROSS-REFERENCE TO RELATED APPLICATION
The present application claims priority under 35 U.S.C. § 119(a) to Korean application number 10-2019-0103087, filed on Aug. 22, 2019, in the Korean Intellectual Property Office, which is incorporated herein by reference in its entirety.
BACKGROUND 1. Technical Field
Various embodiments generally relate to a semiconductor apparatus, and more particularly, to a data storage apparatus and an operating method thereof.
2. Related Art
A data storage apparatus using a memory apparatus has advantages in that it has excellent stability and durability, a very high information access speed, and low power consumption because it does not include a mechanical driver. Data storage apparatuses having such advantages include a universal serial bus (USB) memory apparatus, a memory card having various interfaces, a universal flash storage (UFS) apparatus, and a solid state drive.
Solid state drives may include a NAND flash memory. Memory locations in a NAND flash memory cannot be overwritten in place, but must instead be erased before being written again. Accordingly, it is necessary to match and manage a logical address used in a host apparatus and a physical address used in the NAND flash memory.
A mapping table between a logical address and a physical address may be managed in a 4-kilobyte (KB) unit, such that contiguous blocks of logical addresses each corresponding to 4 KB are mapped into corresponding blocks of physical addresses each corresponding to 4 KB. In order to improve data access performance, the mapping table may be stored in a volatile random access memory (RAM), such as a dynamic random access memory (DRAM). In this document, “random” refers to the ability to read and write the memory in any order, which includes being able to overwrite previously-written data.
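A rough sizing illustrates why a 4-KB-unit mapping table strains the RAM budget. The 4-byte entry size below is an assumed typical value, not a figure from this disclosure.

```python
# Rough sizing of a conventional page-level (4 KB) mapping table.
# The 4-byte entry size is an assumed typical value, not from the patent.
capacity_bytes = 1 * 2**40    # 1 TiB drive
page_bytes = 4 * 2**10        # 4 KiB mapping unit
entry_bytes = 4               # assumed size of one physical-address entry

entries = capacity_bytes // page_bytes
table_bytes = entries * entry_bytes
print(table_bytes // 2**20)   # prints 1024 (MiB of RAM for the table alone)
```

Under these assumptions the table consumes roughly one-thousandth of the drive capacity in RAM, which is the motivation for the coarser zone-level mapping described below.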
SUMMARY
Various embodiments are directed to the provision of a data storage apparatus having enhanced write performance by securing a data storage space of a volatile memory as a region capable of random write, and an operating method thereof.
In an embodiment, a data storage apparatus includes a volatile memory, the volatile memory including a region in which a zone mapping table and system information are stored and a random access zone suitable for random writes; a non-volatile memory including a backup zone and a plurality of sequential zones suitable for sequential writes; and a controller configured to identify whether a logical address received with a command from a host apparatus belongs to the random access zone or to the sequential zone and to control an operation corresponding to the command of the identified zone, wherein the controller is configured to back up data stored in the random access zone onto the backup zone based on a criterion and to recover the data stored in the backup zone into the random access zone when a state of the controller switches to an on state after power is off.
In an embodiment, an operating method of a data storage apparatus includes receiving a logical address and a command from a host; identifying whether the logical address belongs to a random access zone within a volatile memory or to a sequential zone within a non-volatile memory, the volatile memory including a region in which a zone mapping table and system information are stored and the random access zone suitable for random writes, and the non-volatile memory including a backup zone and a plurality of sequential zones suitable for sequential writes; and performing an operation corresponding to the command based on the identified random access zone or sequential zone.
According to the embodiments, a data storage space for writing data may be secured in a volatile memory through a change in the structure of a mapping table, and performance of a write operation can thereby be improved because a write speed can be increased due to the characteristics of the volatile memory.
Furthermore, it is possible to prevent a loss of data of the volatile memory because data stored in the data storage space secured in the volatile memory is backed up and recovered using a non-volatile memory.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a data storage apparatus according to an embodiment.
FIG. 2 illustrates a non-volatile memory according to an embodiment.
FIG. 3 illustrates a memory cell array according to an embodiment.
FIG. 4 illustrates a data processing system according to an embodiment.
FIG. 5 illustrates a volatile memory according to an embodiment.
FIG. 6 illustrates an example of a zone mapping table according to an embodiment.
FIG. 7 illustrates a backup process according to an embodiment.
FIG. 8 illustrates a recovery process according to an embodiment.
FIG. 9 is a flowchart of an operating process of the data storage apparatus according to an embodiment.
FIG. 10 is a flowchart of a data write process in FIG. 9, according to an embodiment.
FIG. 11 is a flowchart of a data read process in FIG. 9, according to an embodiment.
FIG. 12 is a flowchart of an operating process of the data storage apparatus according to another embodiment.
FIG. 13 illustrates a data processing system including a solid state drive (SSD) according to an embodiment.
FIG. 14 illustrates a controller in FIG. 13, according to an embodiment.
FIG. 15 illustrates a data processing system including a data storage apparatus according to an embodiment.
FIG. 16 illustrates a data processing system including a data storage apparatus according to an embodiment.
FIG. 17 illustrates a network system including a data storage apparatus according to an embodiment.
DETAILED DESCRIPTION
Hereinafter, a data storage apparatus and an operating process thereof will be described below with reference to the accompanying drawings through various examples of embodiments.
FIG. 1 is a diagram illustrating a data storage apparatus 10 according to an embodiment.
The data storage apparatus 10 may store data accessed by a host apparatus (not illustrated), such as a mobile phone, an MP3 player, a laptop computer, a desktop computer, a game machine, TV or an in-vehicle infotainment system. The data storage apparatus 10 may be called a memory system.
The data storage apparatus 10 may be fabricated as any one of various types of storage apparatuses depending on an interface protocol electrically coupled to the host apparatus. For example, the data storage apparatus 10 may be configured as any one of various types of storage apparatuses, such as a solid state drive (SSD), a multimedia card of an MMC, eMMC, RS-MMC or micro-MMC form, a secure digital card of an SD, mini-SD or micro-SD form, a universal serial bus (USB) storage apparatus, a universal flash storage (UFS) apparatus, a personal computer memory card international association (PCMCIA) card, a storage apparatus of a peripheral component interconnection (PCI) card form, a storage apparatus of a PCI-express (PCI-E) card form, a compact flash (CF) card, a smart media card, and a memory stick.
The data storage apparatus 10 may be fabricated as one of various types of package forms. For example, the data storage apparatus 10 may be fabricated as any one of various types of package forms, such as a package on package (POP), a system in package (SIP), a system on chip (SOC), a multi-chip package (MCP), a chip on board (COB), a wafer-level fabricated package (WFP) and a wafer-level stack package (WSP).
As illustrated in FIG. 1, the data storage apparatus 10 may include a non-volatile memory 100, a controller 200 and a volatile memory 300.
The non-volatile memory 100 may operate as a storage medium of the data storage apparatus 10. The non-volatile memory 100 may be configured as one of various types of non-volatile memories, such as a NAND flash memory apparatus, a NOR flash memory apparatus, a ferroelectric random access memory (FRAM) using a ferroelectric capacitor, a magnetic random access memory (MRAM) using a tunneling magneto-resistive (TMR) film, a phase change random access memory (PRAM) using chalcogenide alloys, and a resistive random access memory (ReRAM) using transition metal oxide, depending on the memory cells used, but embodiments are not limited thereto.
FIG. 2 illustrates the non-volatile memory 100 in FIG. 1. FIG. 3 illustrates a memory cell array 110 in FIG. 2.
Referring to FIG. 2, the non-volatile memory 100 may include the memory cell array 110, a row decoder 120, a write/read circuit 130, a column decoder 140, a page buffer 150, a voltage generator 160, control logic 170, and an input/output (I/O) circuit 180.
The memory cell array 110 may include a plurality of memory cells (not illustrated) disposed at respective regions in which a plurality of bit lines BL and a plurality of word lines WL intersect with each other. Referring to FIG. 3, the memory cell array 110 may include a plurality of memory blocks BLK1 to BLKi. Each of the plurality of memory blocks BLK1 to BLKi may include a plurality of pages PG1 to PGj. In embodiments, a memory block corresponds to the smallest unit of the memory cell array 110 that can be independently erased, and a page corresponds to the smallest unit of the memory cell array 110 that can be independently programmed.
Each of the memory cells of the memory cell array 110 may be a single-level cell (SLC) in which 1-bit data is stored, a multi-level cell (MLC) in which 2-bit data is stored, a triple-level cell (TLC) in which 3-bit data is stored, or a quadruple-level cell (QLC) in which 4-bit data is stored. The memory cell array 110 may include at least one of an SLC, an MLC, a TLC, a QLC, and combinations thereof. The memory cell array 110 may include memory cells disposed in a two-dimensional horizontal structure or may include memory cells disposed in a three-dimensional vertical structure.
The row decoder 120 may be electrically coupled to the memory cell array 110 through the word lines WL. The row decoder 120 may operate under the control of the control logic 170. The row decoder 120 may decode a row address X_ADDR provided by the control logic 170, may select at least one of the word lines WL based on a result of the decoding, and may drive the selected word line WL. The row decoder 120 may provide the selected word line WL with an operating voltage Vop provided by the voltage generator 160.
The write/read circuit 130 may be electrically coupled to the memory cell array 110 through the bit lines BL. The write/read circuit 130 may include write/read circuits (not illustrated) corresponding to the respective bit lines BL. The write/read circuit 130 may operate under the control of the control logic 170. The write/read circuit 130 may include a write driver WD for writing data in memory cells and a sense amplifier (SA) for amplifying data read from memory cells. The write/read circuit 130 may provide a current pulse or voltage pulse to memory cells that belong to the memory cells of the memory cell array 110 and that are selected by the row decoder 120 and the column decoder 140, thereby performing write and read operations on the selected memory cells.
The column decoder 140 may operate under the control of the control logic 170. The column decoder 140 may decode a column address Y_ADDR provided by the control logic 170. The column decoder 140 may electrically couple write/read circuits of the write/read circuit 130, corresponding to respective bit lines BL, and the page buffer 150 based on a result of the decoding.
The page buffer 150 may be configured to temporarily store data, such as provided by a memory interface 240 of the controller 200 and to be written in the memory cell array 110, or data read from the memory cell array 110 and to be provided to the memory interface 240 of the controller 200. The page buffer 150 may operate under the control of the control logic 170.
The voltage generator 160 may generate various voltages for performing write, read and erase operations on the memory cell array 110 based on a voltage control signal CTRL_vol provided by the control logic 170. The voltage generator 160 may generate driving voltages Vop for driving the plurality of word lines WL and bit lines BL. Furthermore, the voltage generator 160 may generate at least one reference voltage in order to read data stored in a memory cell MC.
The control logic 170 may output various types of control signals for writing data DATA in the memory cell array 110 or reading data DATA from the memory cell array 110 based on a command CMD_op, address ADDR and control signal CTRL received from the controller 200. The various types of control signals output by the control logic 170 may be provided to the row decoder 120, the write/read circuit 130, the column decoder 140, the page buffer 150 and the voltage generator 160. Accordingly, the control logic 170 may generally control various types of operations performed in the non-volatile memory 100.
Specifically, the control logic 170 may generate an operation control signal CTRL_op based on a command CMD and a control signal CTRL, and may provide the generated operation control signal CTRL_op to the write/read circuit 130. The control logic 170 may provide the row decoder 120 and the column decoder 140 with a row address X_ADDR and column address Y_ADDR included in an address ADDR, respectively.
The I/O circuit 180 may be configured to receive a command CMD, address ADDR and data DATA provided by the controller 200 or to provide the controller 200 with data DATA read from the memory cell array 110. The I/O circuit 180 may output the command CMD and address ADDR, received from the controller 200, to the control logic 170, and may output the data DATA to the page buffer 150. The I/O circuit 180 may output, to the controller 200, data DATA received from the page buffer 150. The I/O circuit 180 may operate under the control of the control logic 170.
The controller 200 may control an overall operation of the data storage apparatus 10 through the execution of firmware or software loaded on a memory 230. The controller 200 may decode and execute instructions or algorithms of a code form, such as firmware or software. The controller 200 may be implemented in the form of hardware or a combination of hardware and software.
The controller 200 may control data to be written to or read from the non-volatile memory 100 or the volatile memory 300 in response to a write command or read command transmitted by a host apparatus 20 (refer to FIG. 4). This will be described in detail later.
The controller 200 may include a host interface 210, a processor 220, the memory 230 and the memory interface 240.
The host interface 210 may provide an interface between a host apparatus and the data storage apparatus 10 in accordance with a protocol of the host apparatus. For example, the host interface 210 may communicate with the host apparatus through one of protocols, such as a universal serial bus (USB), a universal flash storage (UFS), a multimedia card (MMC), a parallel advanced technology attachment (PATA), a serial advanced technology attachment (SATA), a small computer system interface (SCSI), a serial attached SCSI (SAS), a peripheral component interconnection (PCI), and a PCI express (PCI-e).
The processor 220 may be configured with a micro control unit (MCU) and/or a central processing unit (CPU). The processor 220 may process a request transmitted by the host apparatus. In order to process the request transmitted by the host apparatus, the processor 220 may execute an instruction or algorithm of a code form, that is, firmware loaded on the memory 230, and may control internal function blocks, such as the host interface 210, the memory 230 and the memory interface 240, and the non-volatile memory 100.
The processor 220 may generate control signals that will control an operation of the non-volatile memory 100 based on requests transmitted by the host apparatus, and may provide the generated control signals to the non-volatile memory 100 through the memory interface 240.
The memory 230 may be configured as a random access memory, such as a dynamic random access memory (DRAM) or a static random access memory (SRAM). The memory 230 may store firmware executed by the processor 220. Furthermore, the memory 230 may store data used by the firmware, for example, meta data. That is, the memory 230 may operate as a working memory of the processor 220.
The memory 230 may be configured to include a data buffer (DB) (not illustrated) for temporarily storing one or both of write data to be transmitted from the host apparatus to the non-volatile memory 100 and read data to be transmitted from the non-volatile memory 100 to the host apparatus. That is, the memory 230 may operate as a buffer memory.
The memory interface 240 may control the non-volatile memory 100 under the control of the processor 220. The memory interface 240 may also be called a memory controller. The memory interface 240 may communicate with the non-volatile memory 100 using channel signals CH. The channel signals CH may include a command, an address, and an operation control signal for controlling the non-volatile memory 100. The memory interface 240 may use the channel signals CH to provide data to the non-volatile memory 100 or to receive data from the non-volatile memory 100.
As illustrated in FIG. 5, the volatile memory 300 may include a region 300 a in which a zone mapping table and system information are stored and a random access zone 300 b capable of random write. This will be described in detail later.
FIG. 4 is a diagram illustrating a data processing system 1100 according to an embodiment.
The data processing system 1100 will be described below with reference to FIG. 5 illustrating the configuration of a volatile memory according to an embodiment, FIG. 6 illustrating an example of a zone mapping table 600 according to an embodiment, FIG. 7, illustrating a backup process according to an embodiment, and FIG. 8 illustrating a recovery process according to an embodiment.
Referring to FIG. 4, the data processing system 1100 may include the data storage apparatus 10 (for example, an SSD) and the host apparatus 20.
The data storage apparatus 10 may include the non-volatile memory 100, the controller 200 and the volatile memory 300.
The non-volatile memory 100 , that is, the NAND flash region, may be configured as a set of zones Zone 0 to Zone N having the same size. In this case, each of the zones may include one or more physical blocks.
Referring to FIG. 4, the non-volatile memory 100 may include a backup zone 100 a and a plurality of sequential zones 100 b capable of sequential write.
The backup zone 100 a is a backup space that provides a non-volatile characteristic for the random access zone 300 b , which is provided using marginal space of the volatile memory 300 outside its system region. In an embodiment, the backup zone 100 a may be used in an SLC way. The backup zone 100 a may have a size two or three times greater than the size of the random access zone 300 b , but embodiments are not limited thereto.
The sequential zone 100 b is a user region, and may be used in a TLC or QLC way. In this case, a write operation may be performed on the sequential zone 100 b in a sequential write way.
Referring to FIG. 5, the volatile memory 300 may include the region 300 a in which a zone mapping table for managing the physical addresses of zones and system information are stored and the random access zone 300 b capable of random write. In this case, the volatile memory 300 may be implemented as a DRAM, but embodiments are not limited thereto.
A zone according to an embodiment may have a relatively larger size than a page or block, and the zone mapping table may include a start physical block address (Start PBA), a total length and the final write location for each zone. Accordingly, a marginal space may occur in a region of the volatile memory 300 because the amount of stored mapping data is reduced compared to a conventional technology.
Specifically, as illustrated in FIG. 6, the zone mapping table 600 stored in the volatile memory 300 may include for each zone a logical block address group, a zone index, a start physical block address (PBA 0), a total length and the final write location. The logical block address group may be defined to mean a plurality of logical block addresses grouped by a given number. The zone index may be defined to mean identification information for identifying a respective zone. The final write location of the zone mapping table 600 may be defined to mean the last write location of the respective zone at the present time. The total length may be defined to mean the total length of the physical addresses of the respective zone.
For example, a logical block address group 0 (LBAG 0) may include Logical Block Address (LBA) 0 to LBA 99. Each of the other logical address groups may include 100 logical block addresses. Such a logical block address group may be matched with one zone. In contrast, in a conventional technology, the amount of data included in a mapping table is massive because a logical block address and a physical block address are matched in a one-to-one way. In an embodiment, the amount of data included in the mapping table can be reduced because a plurality of logical block addresses are matched with one zone and managed accordingly.
For this reason, in the zone mapping table 600, the size of mapping data is relatively reduced because the mapping data can be managed in a zone unit without using a mapping process of a 4 KB unit as in a conventional technology. As a result, a marginal space of the volatile memory 300 can be secured. A region of the volatile memory 300, secured because the size of a zone mapping table 600 is reduced due to a change in the structure of the zone mapping table, may be used as the random access zone 300 b capable of random write.
Each of the non-volatile memory 100 and the volatile memory 300 may be configured with a plurality of zones (Zone 0 to Zone N+1). Accordingly, the host apparatus 20 may recognize, as a logical region, each of the plurality of zones within the volatile memory 300 and the non-volatile memory 100. That is, the host apparatus 20 may recognize the data storage apparatus 10 as a storage apparatus including a plurality of zones. For example, the host apparatus 20 may recognize (N+1) zones (refer to FIG. 4).
When a logical address is received along with a write command or read command from the host apparatus 20, the controller 200 may identify, such as by using the zone mapping table 600, whether the logical address belongs to the random access zone 300 b or the sequential zone 100 b, and then may control an operation, corresponding to the write command or read command of the identified zone, to be performed. The controller 200 may also receive the size of data when a write command or a read command is received from the host apparatus 20. The logical address may mean the start logical address of the data to be read or written. If the logical address belongs to the random access zone or the sequential zone, this may mean that a physical address corresponding to the logical address belongs to the random access zone or the sequential zone, respectively.
That is, if the logical address belongs to the sequential zone, the controller 200 may control an operation, corresponding to the write command or read command, to be performed using the physical address corresponding to the logical address as a start address, wherein the physical address is an address within a sequential zone. If the logical address belongs to the random access zone, the controller 200 may control an operation, corresponding to the write command or read command, to be performed using the physical address corresponding to the logical address as a start address, wherein the physical address is an address within a random access zone.
Furthermore, the controller 200 may back up, onto the backup zone 100 a, data stored in the random access zone 300 b based on a preset criterion. Furthermore, when the state of the controller 200 switches to an on state after power is off, the controller 200 may recover the data stored in the backup zone 100 a into the random access zone 300 b.
The controller 200 may identify a zone index matched with the start logical address of an operation based on the zone mapping table 600, and may identify whether the zone index is the random access zone 300 b or the sequential zone 100 b.
For example, if the start logical block address of an operation is LBA 5, the controller 200 may identify that LBA 5 belongs to a logical block address group 0 (LBAG 0 in FIG. 6) based on the zone mapping table, and may identify that the logical block address group 0 is matched with a zone index 0 and thus belongs to the sequential zone 100 b. In this example, the logical block address group 0 includes LBA 0 to LBA 99 and is matched with the zone index 0. As described above, the LBAs and the zone index matched with each logical block address group may be preset.
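The lookup described above can be sketched as follows. This is an illustrative sketch only: the table contents, the helper name `lookup_zone`, and the assumption that group g is matched one-to-one with zone index g are examples, not part of the claimed apparatus.

```python
# Illustrative sketch of mapping a start LBA to its zone mapping table entry,
# assuming each logical block address group holds 100 LBAs (LBAG 0 = LBA 0-99)
# and group g is matched with zone index g.
LBAS_PER_GROUP = 100

# Hypothetical zone mapping table contents: zone index -> entry fields.
zone_mapping_table = {
    0:  {"start_pba": 0,   "total_length": 100, "final_write_location": 10},
    10: {"start_pba": 600, "total_length": 400, "final_write_location": 0},
}

def lookup_zone(lba):
    """Return (group, zone_index, entry) for a start logical block address."""
    group = lba // LBAS_PER_GROUP   # e.g. LBA 5 -> LBAG 0
    zone_index = group              # assumed one-to-one matching
    return group, zone_index, zone_mapping_table.get(zone_index)

group, zone_index, entry = lookup_zone(5)
# group == 0 and zone_index == 0, i.e. the sequential zone of the example
```

A lookup of an unmapped group simply returns no entry, which a controller would treat as an addressing error.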
The controller 200 may differently apply a process of identifying a physical address when a write command or read command for the random access zone 300 b is received because the random access zone 300 b corresponds to a volatile memory and the sequential zone 100 b corresponds to a non-volatile memory.
If a logical address received along with a write command from the host apparatus 20 belongs to the sequential zone 100 b, the controller 200 may identify a zone index matched with the start logical address of a zone based on the zone mapping table, may write data, corresponding to the size of the data received when the write command is received, from a location next to the final write location of the identified zone index, and then may update the final write location in the zone mapping table.
For example, if the start logical address of the operation is LBA 5, the controller 200 may identify the logical address group 0 and zone index 0, matched with LBA 5, based on the zone mapping table. If the final write physical address of the zone index 0 is 10, the controller 200 may write data, corresponding to the size (e.g. 4) of data received when a write command is received, starting from physical address 11 of the zone corresponding to zone index 0. Furthermore, the controller 200 may update physical address 14 as the final write location in the zone mapping table.
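The sequential write path of this example can be sketched as follows; the function name and entry layout are assumptions made for illustration, while the numbers reproduce the LBA 5 example above.

```python
# Illustrative sketch of a sequential-zone write: data is written starting one
# location past the final write location, and the final write location in the
# zone mapping table entry is then updated.
def sequential_write(entry, data_size):
    """Perform a sequential write of data_size blocks and update the entry.

    Returns the physical address at which writing started."""
    start = entry["final_write_location"] + 1        # location next to final write
    entry["final_write_location"] = start + data_size - 1
    return start

# Zone index 0 of the example: final write location is physical address 10.
entry = {"start_pba": 0, "total_length": 100, "final_write_location": 10}
start = sequential_write(entry, 4)
# start == 11, and the final write location is updated to 14
```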
If a logical address received along with a write command from the host apparatus 20 belongs to the random access zone 300 b, the controller 200 may write data, corresponding to the size of the data received from the host apparatus 20, based on a start physical address of the random access zone 300 b.
For example, if the start logical address of a write operation corresponds to LBA 902, the controller 200, based on the zone mapping table, may identify a logical block address group (LBAG) 10 to which LBA 902 belongs and a zone index 10, which is preset in this example as being the zone index of a zone in the random access zone 300 b. If the physical addresses of the region 300 a of the volatile memory 300 in which the zone mapping table and system information are stored are included in physical blocks 0 to 599 (wherein, for example, each block includes 4 KB) and the physical addresses of the random access zone 300 b are in physical blocks 600 to 999, the controller 200 may write data, corresponding to the size (e.g. 4) of data, at an offset from physical block address 600, that is, the start physical address of the random access zone 300 b. For example, if the start logical address of the write operation corresponds to an address offset of 9000 from the beginning of LBAG 10 (to which LBA 902 belongs), the write operation would be performed using a start physical address that is offset by 9000 from the beginning of physical block 600. In this case, the controller 200 may write data from the start physical address of the random access zone 300 b because the random access zone 300 b corresponds to a volatile random-access memory.
If a logical address received along with a read command from the host apparatus 20 belongs to the sequential zone 100 b, the controller 200 may identify a zone index matched with the logical address based on the zone mapping table, and may read data corresponding to the size of data received from the host apparatus 20 using a physical address, corresponding to the logical address in a corresponding zone of the identified zone index, as a start address.
If a logical address received along with a read command from the host apparatus 20 belongs to the random access zone 300 b, the controller 200 may identify the final physical address by adding the offset relative to the corresponding logical block address group of the logical address, received from the host apparatus 20, to a start physical address of the random access zone 300 b, and may read data, corresponding to the size of data received from the host apparatus 20, from the final physical address. For example, if the logical address corresponds to an offset of 8000 from the beginning of LBAG 10, and LBAG 10 is mapped to a zone beginning at the start of the random access zone 300 b, then the start physical address would be an address offset by 8000 from the start of the random access zone 300 b.
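The random-access read and write paths above share the same address arithmetic: the logical offset within the group is added to the start physical address of the random access zone. A minimal sketch, assuming the 600-block system region of the example and a hypothetical helper name:

```python
# Illustrative sketch of the random-access address calculation. The region
# 300a (zone mapping table + system information) is assumed to occupy the
# first 600 physical blocks, so the random access zone begins right after it.
SYSTEM_REGION_BLOCKS = 600
RANDOM_ZONE_START = SYSTEM_REGION_BLOCKS   # start physical address of zone 300b

def random_zone_physical(offset_in_group):
    """Final physical address for a random-access read or write,
    computed as start address of the zone plus the logical offset."""
    return RANDOM_ZONE_START + offset_in_group

# Read example from the text: an offset of 8000 within the group maps to an
# address 8000 past the start of the random access zone.
addr = random_zone_physical(8000)
```

The same helper would serve the write path, which uses an offset of 9000 in the example above.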
Referring to FIG. 7, the random access zone 300 b may include a plurality of slice regions (backup slices) 0 to n. Random access indices 0 to n may be sequentially assigned to the plurality of slice regions, respectively. In this case, the random access index may be defined to mean an index assigned to each of the plurality of slice regions within the random access zone 300 b. The slice region may have a size corresponding to the size of data which may be written at a time (that is, to a page size of the non-volatile memory 100), but embodiments are not limited thereto.
Furthermore, each of the random access indices may be matched with a flush flag (Flush 1 or 0), indicating whether data stored in the corresponding slice of the random access zone 300 b is backed up onto the backup zone 100 a, and an update flag (Update 1 or 0) indicating whether data stored in the corresponding slice of the random access zone 300 b has been updated with new data.
Furthermore, the backup zone 100 a may include a first region (Index 0, Index 1) onto which data written in the random access zone 300 b is backed up in a one-to-one way and a second region (Index 2) onto which the latest data updated in the random access zone 300 b is backed up when power is turned off or interrupted. Each of the first region and the second region may be matched with each virtual backup index. One or more backup indices may be assigned to each backup zone. For example, two backup indices, such as Index 0 and Index 1, may be assigned to the first region, and one backup index, such as Index 2, may be assigned to the second region.
As illustrated in FIG. 7, the first region may be configured to include two or more subregions respectively corresponding to Index 0 and Index 1, where each subregion of the first region has a size equal to or greater than the size of the random access zone 300 b.
The controller 200 may separately manage, as system information, an indication of at which backup index of each of the first and second regions the latest data is stored; that is, which indices are the latest backup indices. The information on the latest backup index may be stored in the volatile memory 300 as system information.
For example, if the amount of data written in the random access zone 300 b is a reference value or more, the controller 200 may back up, onto the backup zone 100 a, the data stored in the random access zone 300 b, and may change a value of a corresponding flush flag for the backed-up random access zone to 1 (i.e., one). When the data was backed up to the subregion corresponding to index 0 of the backup zone 100 a, the controller 200 may set index 0 as the latest backup index for the first region. When the data was backed up to the subregion corresponding to index 1 of the backup zone 100 a, the controller 200 may set index 1 as the latest backup index for the first region. Furthermore, in an embodiment, the controller 200 may erase the subregion corresponding to index 1 after the data was backed up to the subregion corresponding to index 0 and may erase the subregion corresponding to index 0 after the data was backed up to the subregion corresponding to index 1, in order to prepare for the next backup operation. Furthermore, after backing up the data to the first region, the controller 200 may erase the second region and reset the update flags for the slices, to prepare the second region to accept backup data when, for example, power fails. In such embodiments, the controller 200 may alternate between backing up the random access zone 300 b to the subregions of index 0 and index 1.
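The alternating first-region backup described above can be sketched with a small state machine. The class and method names are hypothetical, and the sketch abstracts away the actual data copy; it only tracks which subregion (index 0 or index 1) receives each backup and how the flags change.

```python
# Illustrative sketch of alternating backups of the random access zone to the
# two first-region subregions (index 0 and index 1) of the backup zone.
class BackupState:
    def __init__(self, num_slices):
        self.flush = [0] * num_slices    # per-slice flush flags (1 = backed up)
        self.update = [0] * num_slices   # per-slice update flags (1 = updated)
        self.latest_first_index = None   # latest backup index of the first region

    def backup_to_first_region(self):
        # Alternate: back up to index 0 unless index 0 holds the latest backup.
        target = 0 if self.latest_first_index != 0 else 1
        # ... copy all slices of the random access zone to subregion `target`,
        #     then erase the other subregion for the next backup cycle ...
        self.flush = [1] * len(self.flush)   # mark every slice as backed up
        self.update = [0] * len(self.update) # no updates since this backup
        self.latest_first_index = target
        return target

state = BackupState(4)
first = state.backup_to_first_region()    # first backup goes to subregion 0
second = state.backup_to_first_region()   # next backup alternates to subregion 1
```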
For another example, if the number of write commands received from the host apparatus 20 is a reference value or more, the controller 200 may sequentially write, in the backup zone 100 a, data stored in a plurality of slice regions based on random access index numbers, and may change values of flush flags, for the backed-up slice regions, to 1 (i.e., one). In this case, the flush flag having a value of 1 may indicate that data has been backed up onto the backup zone 100 a. The flush flag having a value of 0 may indicate that data has not been backed up onto the backup zone 100 a.
The controller 200 may apply, as a backup condition, a condition in which data written in response to a request from the host apparatus 20 reaches a slice size, not the number of write commands, but embodiments are not limited thereto. The backup condition may be changed or added to depending on an operator's needs.
When the backup of data for all the slice regions within the random access zone 300 b is completed, the controller 200 may reset the values of all the flush flags to 0 (i.e., zero).
After data stored in the plurality of slice regions is written in the backup zone 100 a, if the update of data stored in the random access zone 300 b occurs from the host apparatus 20, the controller 200 may change a value of an update flag for a corresponding slice region to 1 (i.e., one). In this case, the update flag 1 may indicate that data has been updated with new data, but has not been backed up since the update. The update flag 0 may indicate that there is no new data that has not been backed up onto the backup zone 100 a.
Referring to FIG. 7, the controller 200 may back up, onto the second region (Index 2), data that is stored in the random access zone 300 b and that has an update flag of 1. After the backup of update data is completed, the controller 200 may reset the update flag for a corresponding slice region to 0.
In particular, in an embodiment, when a signal for power-off is received, the controller 200 may write, in the second region (Index 2), data that is stored in the random access zone 300 b and that has a flush flag of 0 (indicating that a backup operation to the first region was only partly completed and did not back up the data) or update flag of 1 (indicating that the data was modified since the last backup to the first region). When the data is written in the second region (Index 2), the controller 200 may also store a corresponding random access index. In this case, the random access index may be written in a spare region of the first page of the second region. When a plurality of slices are backed up to the second region, a plurality of corresponding random access indices may be stored in the spare region of the first page of the second region, or in another embodiment may be stored in spare regions of the pages used to store the respective slices. The stored random access index (or indices) may be used to identify a location of the backed up data within a random access zone prior to the backup when the data is recovered.
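The power-off selection rule above — save any slice whose flush flag is 0 or whose update flag is 1 — can be sketched as follows. The helper name is an assumption; the returned random access indices are what would be stored alongside the data in the second region.

```python
# Illustrative sketch of selecting slices to write to the second region on
# power-off: a slice is saved if it was never flushed (flush flag 0) or was
# updated after the last first-region backup (update flag 1).
def slices_to_save(flush, update):
    """Return the random access indices that must go to the second region."""
    return [i for i, (f, u) in enumerate(zip(flush, update)) if f == 0 or u == 1]

# Example: slice 1 was never flushed; slice 3 was updated after the backup.
indices = slices_to_save(flush=[1, 0, 1, 1], update=[0, 0, 0, 1])
# indices == [1, 3]
```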
In the case of an abnormal termination, the data storage apparatus 10 may perform a backup operation using power from an internal capacitor or from an external power source.
Referring to FIG. 8, when the state of the controller 200 switches to an on state after power is off, the controller 200 may calculate physical addresses corresponding to the latest backup index of a first region (either the subregion corresponding to Index 0 or the subregion corresponding to Index 1), and may sequentially read data from the corresponding physical addresses to the random access zone 300 b. Specifically, the controller 200 may calculate the final physical address whose data needs to be recovered by incorporating the start physical address of a random access zone into a backup index. In an embodiment, the controller 200 may determine whether the subregion of index 0 or the subregion of index 1 holds the data to be restored by determining which subregion is in an erased state, by using system information stored in the nonvolatile memory, or both.
Furthermore, when the loading of data onto the first region is terminated, the controller 200 may read the latest data, stored in the backup zone 100 a of the second region (Index 2 in FIG. 8), to the random access zone 300 b.
The controller 200 may identify a location of the random access zone 300 b to which the latest data will be written based on a corresponding random access index stored in the second region. In this manner, any data not backed up because of either a failure of a backup operation to the first region to complete or because the data was updated after the last backup operation to the first region will be restored from the second region.
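The two-step recovery described above can be sketched with hypothetical data structures: the latest first-region subregion is copied back in full, after which the second-region slices overwrite their original locations using the stored random access indices.

```python
# Illustrative sketch of recovery after power-on: bulk restore from the latest
# first-region subregion, then apply the latest slices from the second region.
def recover(first_region, latest_index, second_region_slices):
    """Rebuild the random access zone from the backup zone.

    first_region: {backup_index: list of slice data} (subregions 0 and 1)
    latest_index: latest backup index of the first region
    second_region_slices: {random_access_index: latest slice data}
    """
    zone = list(first_region[latest_index])          # step 1: bulk restore
    for idx, data in second_region_slices.items():   # step 2: apply latest data
        zone[idx] = data
    return zone

zone = recover({0: ["a", "b", "c"], 1: ["x", "y", "z"]},
               latest_index=1,
               second_region_slices={1: "Y*"})
# zone == ["x", "Y*", "z"]
```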
FIG. 9 is a flowchart for describing an operating process 900 of the data storage apparatus 10 according to an embodiment.
Referring to FIG. 9, the data storage apparatus 10 may receive a logical address received along with a write command or a read command from the host apparatus 20 (S101). The data storage apparatus 10 may also receive the size of data when receiving the write command or read command from the host apparatus 20.
Next, the data storage apparatus 10 may identify whether the logical address belongs to the random access zone 300 b within the volatile memory 300 or to the sequential zone 100 b within the non-volatile memory 100 (S103 and S105).
The logical address may mean the start logical address of an operation corresponding to the command. If the logical address belongs to the random access zone or the sequential zone, this may respectively mean that a physical address corresponding to the logical address belongs to the random access zone or the sequential zone. That is, if the logical address belongs to the sequential zone, the data storage apparatus 10 may control an operation, corresponding to the write command or read command, to be performed using the physical address corresponding to the logical address as a start address.
The volatile memory 300 may include the region 300 a in which a zone mapping table and system information are stored and the random access zone 300 b capable of random write. The non-volatile memory 100 may include the backup zone 100 a and the plurality of sequential zones 100 b capable of sequential write.
The data storage apparatus 10 may perform an operation, corresponding to the write command or read command, based on the random access zone 300 b or sequential zone 100 b identified at step S103 (S107 and S117). This will be described in detail later.
Next, the data storage apparatus 10 may back up data, stored in the random access zone 300 b, onto the backup zone 100 a based on a preset criterion (S109 and S111).
The random access zone 300 b may include a plurality of slice regions. Each of the plurality of slice regions is matched with a random access index. Each of the random access indices may be matched with a flush flag, indicating whether data stored in the random access zone 300 b has been backed up onto the backup zone 100 a, and an update flag indicating whether data stored in the random access zone 300 b has been updated with new data.
The backup zone 100 a may include the first region (as shown in FIG. 7) onto which data written in the random access zone 300 b is backed up in a one-to-one way and the second region (as also shown in FIG. 7) onto which the latest data updated in the random access zone 300 b is backed up when power is turned off or otherwise interrupted. Each of the first region and the second region may have a respective latest backup index identifying a subregion that stores the latest backed up data in that region.
For example, at steps S109 and S111, if the amount of data written in the random access zone 300 b is a reference value or more, the data storage apparatus 10 may back up, onto the backup zone 100 a, the data stored in the random access zone 300 b, and may change a value of a corresponding flush flag for the backed-up random access zone 300 b to 1 (i.e., one).
For another example, at steps S109 and S111, if the number of write commands received from the host apparatus 20 is a reference value or more, the data storage apparatus 10 may sequentially write, in the backup zone 100 a, data stored in the plurality of slice regions based on random access index numbers.
Referring to FIG. 7, the data storage apparatus 10 may sequentially back up, onto the backup zone 100 a, data stored in slice regions from a random access index 0 to a random access index n.
Thereafter, the data storage apparatus 10 may change values of flush flags for the backed-up slice regions to 1 (i.e., one).
Next, after the data stored in the plurality of slice regions is written in the backup zone 100 a, if the update of data stored in the random access zone 300 b occurs from the host apparatus 20, the data storage apparatus 10 may change a value of an update flag for a corresponding slice region to 1 (i.e., one).
When a signal for power-off is received, the data storage apparatus 10 may write, in the second region (Index 2), data in the random access zone 300 b having a flush flag of 0 (Flush 0) or update flag of 1 (Update 1).
When the data is written in the second region, the data storage apparatus 10 may also store a corresponding random access index.
Next, when the state of the data storage apparatus 10 switches to an on state after power is off, the data storage apparatus 10 may recover the data stored in the backup zone 100 a into the random access zone 300 b (S113 and S115). Note that although S113 and S115 are shown as following after S111, embodiments are not limited thereto, and a power interruption and subsequent recovery may occur at any time during the process 900 of FIG. 9.
Specifically, when the state of the data storage apparatus 10 switches to an on state after power is off, the data storage apparatus 10 may calculate physical addresses corresponding to the latest backup index of the first region (such as, for example Index 0 or Index 1), and may sequentially read data from the physical addresses to the random access zone 300 b.
In this case, the data storage apparatus 10 may separately manage, as system information, a latest backup index that belongs to the first region and a latest backup index that belongs to the second region, each indicating where in their region the latest data is stored. The latest backup index may be stored in the volatile memory 300 as system information. After identifying the latest backup index, the data storage apparatus 10 may recover the data of the corresponding backup index into the random access zone 300 b at step S115.
Next, when the loading of data from the first region is completed, the data storage apparatus 10 may read the latest data, stored in the second region of the backup zone 100 a, to the random access zone 300 b.
FIG. 10 is a detailed flowchart for the data write process 910 such as may be used in the process 900 of FIG. 9.
The data storage apparatus 10 may receive a logical address along with a write command from the host apparatus 20 (S201). The controller 200 may also receive the size of data when receiving a write command or read command from the host apparatus 20.
Next, the data storage apparatus 10 may identify whether the logical address belongs to the random access zone 300 b within the volatile memory or the sequential zone 100 b within the non-volatile memory 100 (S203 and S205).
If, as a result of the identification at step S205, the logical address belongs to the random access zone 300 b, the data storage apparatus 10 may write data, corresponding to the size of the data received from the host apparatus 20, based on a start physical address of the random access zone 300 b (S207 and S209).
If, as a result of the identification at step S205, the logical address belongs to the sequential zone 100 b, the data storage apparatus 10 may identify a zone index, matched with the logical address received along with the write command from the host apparatus 20, based on the zone mapping table.
Next, the data storage apparatus 10 may identify a physical address on which a write operation will be performed by identifying the final write location of the identified zone index (S211).
Next, the data storage apparatus 10 may write data, corresponding to the size of the data, from a location next to the final write location of the identified zone index (S213). In an embodiment, the size of the data may correspond to a page size of the sequential zone 100 b. In another embodiment, the size of the data may be less than a page size of the sequential zone 100 b, and a read-modify-write operation may be used to perform the write of the data.
Next, after performing the write operation, the data storage apparatus 10 may update the final write location in the zone mapping table (S215).
FIG. 11 is a detailed flowchart for describing a data read process 920 such as may be used in the process 900 of FIG. 9.
The data storage apparatus 10 may receive a logical address along with a read command from the host apparatus 20 (S301). Next, the data storage apparatus 10 may identify whether the logical address belongs to the random access zone 300 b within the volatile memory or the sequential zone 100 b within the non-volatile memory (S303 and S305).
If, as a result of the identification at step S305, the logical address belongs to the random access zone 300 b, the data storage apparatus 10 may identify the final physical address by adding a portion of the logical address, such as the logical address's offset from a start logical address of the corresponding logical block address group, to a start physical address of the random access zone 300 b (S307).
As illustrated in FIG. 5, in the volatile memory 300, the region 300 a in which the zone mapping table and system information are stored occupies a part of a memory space and the remaining marginal space is used as the random access zone 300 b. Accordingly, the start physical address of the random access zone 300 b is not 0, but may be a physical address after the region 300 a in which the zone mapping table and system information are stored. As a result, the data storage apparatus 10 may identify the final physical address from which data will be actually read, by adding all or a portion of the start logical address of a command received from the host apparatus 20, to the start physical address of the random access zone 300 b.
Next, the data storage apparatus 10 may read, from the final physical address, data corresponding to the size of the data received when the read command is received (S309).
If, as a result of the identification at step S305, the logical address belongs to the sequential zone 100 b, the data storage apparatus 10 may identify a zone index matched with the logical address based on the zone mapping table. The data storage apparatus 10 may identify a start physical address corresponding to the logical address at the identified zone index (S311).
Next, the data storage apparatus 10 may read data, corresponding to the size of the data requested by the host apparatus 20, from the start physical address identified at step S311 (S313).
FIG. 12 is a flowchart for describing an operating process 930 of the data storage apparatus according to another embodiment. A case where the data storage apparatus 10 moves data stored in the random access zone 300 b to the backup zone 100 a will be described as an example.
If a backup condition is satisfied, the data storage apparatus 10 may back up data stored in the random access zone 300 b onto the backup zone 100 a.
For example, the data storage apparatus 10 may identify whether the amount of data written in the random access zone 300 b is a reference value or more (S401).
The random access zone 300 b may include a plurality of slice regions. Each of the plurality of slice regions may be matched with a respective random access index. Each of the random access indices may be matched with a flush flag, indicating whether data stored in the random access zone 300 b has been backed up onto the backup zone 100 a, and an update flag indicating whether data stored in the random access zone 300 b has been updated with new data.
The backup zone 100 a may include the first region (shown in FIG. 7) onto which data written in the random access zone 300 b is backed up in a one-to-one way and the second region (also shown in FIG. 7) onto which the latest data updated in the random access zone 300 b is backed up when power is turned off or interrupted. Each of the first region and the second region may have a respective latest backup index indicating a subregion of the respective region that includes the latest backed up data.
If, as a result of the identification, the amount of the data written in the random access zone 300 b is a reference value or more, the data storage apparatus 10 may sequentially write, in the backup zone 100 a, data stored in a plurality of slice regions within the random access zone 300 b, based on random access index numbers (S403).
Next, the data storage apparatus 10 may change a value of a corresponding flush flag for the backed-up random access zone 300 b to 1 (i.e., one) (S405).
If, as a result of the identification at step S401, the amount of the data written in the random access zone 300 b is not the reference value or more, the data storage apparatus 10 may identify whether the number of write commands received from the host apparatus 20 is a reference value or more (S407).
If, as a result of the identification at step S407, the number of write commands is the reference value or more, the data storage apparatus 10 may sequentially write, in the backup zone 100 a, data stored in a plurality of slice regions based on random access index numbers (S409).
Thereafter, the data storage apparatus 10 may change a value of a corresponding flush flag for the backed-up slice region to 1 (i.e., one) (S411).
Next, if the update of data stored in the random access zone 300 b occurs from the host apparatus 20 after the data stored in the plurality of slice regions is written in the first region of the backup zone 100 a, the data storage apparatus 10 may change a value of a corresponding update flag for a corresponding slice region to 1 (i.e., one) (S413).
When a signal for power-off is received (S415), the data storage apparatus 10 may write, in the second region (Index 2), data of the random access zone 300b having a flush flag of 0 (Flush 0) or an update flag of 1 (Update 1) (S417). When the data is written in the second region, the data storage apparatus 10 may also store a corresponding random access index for each slice written.
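Taken together, steps S401 to S417 amount to a small state machine over the flush and update flags. The following Python sketch is purely illustrative: the class names, slice count, and byte payloads are assumptions chosen for demonstration, not part of the embodiment.

```python
# Hypothetical sketch of the flush-flag / update-flag backup flow (S401-S417).
# Structure names, sizes, and values are illustrative assumptions.

class RandomAccessZone:
    def __init__(self, num_slices):
        self.slices = [None] * num_slices  # data per slice region
        self.flush = [0] * num_slices      # 1 = backed up onto the backup zone
        self.update = [0] * num_slices     # 1 = updated after being backed up

class BackupZone:
    def __init__(self):
        self.first_region = {}   # one-to-one backup: random access index -> data
        self.second_region = []  # power-off backup: (index, data) pairs

def backup_all(zone, backup):
    """S403/S409: sequentially write slices by random access index number."""
    for idx, data in enumerate(zone.slices):
        if data is not None:
            backup.first_region[idx] = data
            zone.flush[idx] = 1  # S405/S411: mark the slice as flushed

def host_update(zone, idx, data):
    """S413: an update arriving after backup sets the slice's update flag."""
    zone.slices[idx] = data
    if zone.flush[idx] == 1:
        zone.update[idx] = 1

def power_off(zone, backup):
    """S417: write only slices with flush == 0 or update == 1 to the second
    region, storing the random access index alongside each slice."""
    for idx, data in enumerate(zone.slices):
        if data is not None and (zone.flush[idx] == 0 or zone.update[idx] == 1):
            backup.second_region.append((idx, data))

zone = RandomAccessZone(num_slices=4)
bz = BackupZone()
zone.slices[0] = b"A"; zone.slices[1] = b"B"
backup_all(zone, bz)         # both slices flushed to the first region
host_update(zone, 1, b"B2")  # slice 1 updated after backup -> update flag set
zone.slices[2] = b"C"        # written after backup, never flushed
power_off(zone, bz)          # second region receives slices 1 and 2 only
```

Only the dirty or never-flushed slices travel to the second region, which keeps the power-off write small even when the random access zone is large.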
Although not illustrated, when the state of the data storage apparatus 10 switches to an on state after power is off, the data storage apparatus 10 may recover the data stored in the backup zone 100a into the random access zone 300b.
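The recovery path can be sketched in the same spirit: the one-to-one first-region backup is loaded sequentially, and once that loading is finished, the latest power-off data in the second region is overlaid using the random access index stored with each slice. The function and structure names below are illustrative assumptions, not taken from the embodiment.

```python
# Hypothetical sketch of power-on recovery of the random access zone.

def recover(first_region, second_region, num_slices):
    zone = [None] * num_slices
    # Load the first region sequentially by backup index (one-to-one layout).
    for idx in sorted(first_region):
        zone[idx] = first_region[idx]
    # After the first region is loaded, overlay the latest data; the stored
    # random access index identifies the target slice for each entry.
    for idx, data in second_region:
        zone[idx] = data
    return zone

first = {0: b"A", 1: b"B"}        # one-to-one backup taken during operation
second = [(1, b"B2"), (2, b"C")]  # latest data saved at power-off
zone = recover(first, second, num_slices=4)  # slice 1 ends up with b"B2"
```

Because the second region is applied last, the recovered zone always reflects the newest copy of each slice.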
FIG. 13 is a diagram illustrating a data processing system 2000 including a solid state drive (SSD) according to an embodiment. Referring to FIG. 13, the data processing system 2000 may include a host apparatus 2100 and a solid state drive 2200 (hereinafter referred to as an "SSD").
The SSD 2200 may include a controller 2210, a buffer memory apparatus 2220, non-volatile memories 2231 to 223n, a power supply 2240, a signal connector 2250 and a power connector 2260.
The controller 2210 may control an overall operation of the SSD 2200.
The buffer memory apparatus 2220 may temporarily store data to be stored in the non-volatile memories 2231 to 223n. Furthermore, the buffer memory apparatus 2220 may temporarily store data read from the non-volatile memories 2231 to 223n. The data temporarily stored in the buffer memory apparatus 2220 may be transmitted to the host apparatus 2100 or the non-volatile memories 2231 to 223n under the control of the controller 2210.
The non-volatile memories 2231 to 223n may be used as storage media of the SSD 2200. The non-volatile memories 2231 to 223n may be electrically coupled to the controller 2210 through a plurality of channels CH1 to CHn. One or more non-volatile memories may be electrically coupled to one channel. Non-volatile memories electrically coupled to one channel may be electrically coupled to the same signal bus and data bus.
The power supply 2240 may provide a power supply PWR, received through the power connector 2260, into the SSD 2200. The power supply 2240 may include an auxiliary power supply 2241. If sudden power-off occurs, the auxiliary power supply 2241 may supply power so that the SSD 2200 is terminated normally. The auxiliary power supply 2241 may include high-capacity capacitors capable of being charged with the power supply PWR.
The controller 2210 may exchange signals SGL with the host apparatus 2100 through the signal connector 2250. In this case, the signal SGL may include a command, an address, data, etc. The signal connector 2250 may be configured with various types of connectors based on an interface used between the host apparatus 2100 and the SSD 2200.
FIG. 14 is a diagram illustrating the configuration of the controller 2210 in FIG. 13. Referring to FIG. 14, the controller 2210 may include a host interface unit 2211, a control unit 2212, a random access memory 2213, an error correction code (ECC) unit 2214 and a memory interface unit 2215.
The host interface unit 2211 may provide an interface between the host apparatus 2100 and the SSD 2200 based on a protocol of the host apparatus 2100. For example, the host interface unit 2211 may communicate with the host apparatus 2100 through any one of various protocols, such as secure digital (SD), universal serial bus (USB), multi-media card (MMC), embedded MMC (eMMC), personal computer memory card international association (PCMCIA), parallel advanced technology attachment (PATA), serial advanced technology attachment (SATA), small computer system interface (SCSI), serial attached SCSI (SAS), peripheral component interconnect (PCI), PCI express (PCI-E), and universal flash storage (UFS). Furthermore, the host interface unit 2211 may perform a disk emulation function for enabling the host apparatus 2100 to recognize the SSD 2200 as a general-purpose data storage apparatus, for example, a hard disk drive (HDD).
The control unit 2212 may analyze and process a signal SGL received from the host apparatus 2100. The control unit 2212 may control operations of internal function blocks based on firmware or software for driving the SSD 2200. The random access memory 2213 may be used as a working memory for driving such firmware or software.
The ECC unit 2214 may generate parity data of data to be transmitted to the non-volatile memories 2231 to 223n. The generated parity data may be stored in the non-volatile memories 2231 to 223n along with data. The ECC unit 2214 may detect an error of data read from the non-volatile memories 2231 to 223n based on the parity data. If the detected error is within a correctable range, the ECC unit 2214 may correct the detected error.
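The embodiment does not specify which code the ECC unit 2214 uses; real SSD controllers typically employ strong codes such as BCH or LDPC. As a toy illustration of the store-parity-then-check principle only, a single XOR parity byte is sketched below; all names and the code itself are assumptions.

```python
# Toy illustration of parity-based error detection (NOT the actual ECC of
# the embodiment): a single XOR parity byte stored alongside the data.

def parity(data: bytes) -> int:
    p = 0
    for b in data:
        p ^= b  # fold every byte into one parity byte
    return p

def write_with_parity(data: bytes):
    return data, parity(data)  # parity travels with the data to the memory

def check(data: bytes, stored_parity: int) -> bool:
    # False means an error was detected on read-back.
    return parity(data) == stored_parity

payload, p = write_with_parity(b"\x12\x34\x56")
assert check(payload, p)            # clean read passes
corrupted = b"\x12\x35\x56"         # single bit flipped in the second byte
assert not check(corrupted, p)      # the flip is detected
```

A single parity byte can only detect (not locate or correct) errors; correction within "a correctable range", as the description puts it, requires a redundancy-rich code.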
The memory interface unit 2215 may provide the non-volatile memories 2231 to 223n with control signals, such as a command and an address, under the control of the control unit 2212. Furthermore, the memory interface unit 2215 may exchange data with the non-volatile memories 2231 to 223n under the control of the control unit 2212. For example, the memory interface unit 2215 may provide the non-volatile memories 2231 to 223n with data stored in the buffer memory apparatus 2220 or may provide the buffer memory apparatus 2220 with data read from the non-volatile memories 2231 to 223n.
FIG. 15 is a diagram illustrating a data processing system 3000 including a data storage apparatus according to an embodiment. Referring to FIG. 15, the data processing system 3000 may include a host apparatus 3100 and a data storage apparatus 3200.
The host apparatus 3100 may be configured in a board form, such as a printed circuit board (PCB). Although not illustrated, the host apparatus 3100 may include internal function blocks for performing functions of the host apparatus.
The host apparatus 3100 may include a connection terminal 3110, such as a socket, a slot or a connector. The data storage apparatus 3200 may be mounted on the connection terminal 3110.
The data storage apparatus 3200 may be configured in a board form, such as a PCB. The data storage apparatus 3200 may be called a memory module or a memory card. The data storage apparatus 3200 may include a controller 3210, a buffer memory apparatus 3220, non-volatile memories 3231 and 3232, a power management integrated circuit (PMIC) 3240 and a connection terminal 3250.
The controller 3210 may control an overall operation of the data storage apparatus 3200. The controller 3210 may be configured identically with the controller 2210 of FIG. 13.
The buffer memory apparatus 3220 may temporarily store data to be stored in the non-volatile memories 3231 and 3232. Furthermore, the buffer memory apparatus 3220 may temporarily store data read from the non-volatile memories 3231 and 3232. The data temporarily stored in the buffer memory apparatus 3220 may be transmitted to the host apparatus 3100 or the non-volatile memories 3231 and 3232 under the control of the controller 3210.
The non-volatile memories 3231 and 3232 may be used as storage media of the data storage apparatus 3200.
The PMIC 3240 may provide power, received through the connection terminal 3250, into the data storage apparatus 3200. The PMIC 3240 may manage power of the data storage apparatus 3200 under the control of the controller 3210.
The connection terminal 3250 may be electrically coupled to the connection terminal 3110 of the host apparatus. Signals, such as a command, an address and data, and power may be transmitted between the host apparatus 3100 and the data storage apparatus 3200 through the connection terminal 3250. The connection terminal 3250 may be configured in various forms based on an interface used between the host apparatus 3100 and the data storage apparatus 3200. The connection terminal 3250 may be positioned on any one side of the data storage apparatus 3200.
FIG. 16 is a diagram illustrating a data processing system 4000 including a data storage apparatus according to an embodiment. Referring to FIG. 16, the data processing system 4000 may include a host apparatus 4100 and a data storage apparatus 4200.
The host apparatus 4100 may be configured in a board form, such as a PCB. Although not illustrated, the host apparatus 4100 may include internal function blocks for performing functions of the host apparatus.
The data storage apparatus 4200 may be configured in a flap-type package form. The data storage apparatus 4200 may be mounted on the host apparatus 4100 through solder balls 4250. The data storage apparatus 4200 may include a controller 4210, a buffer memory apparatus 4220 and a non-volatile memory 4230.
The controller 4210 may control an overall operation of the data storage apparatus 4200. The controller 4210 may be configured identically with the controller 3210 of FIG. 15.
The buffer memory apparatus 4220 may temporarily store data to be stored in the non-volatile memory 4230. Furthermore, the buffer memory apparatus 4220 may temporarily store data read from the non-volatile memory 4230. The data temporarily stored in the buffer memory apparatus 4220 may be transmitted to the host apparatus 4100 or the non-volatile memory 4230 under the control of the controller 4210.
The non-volatile memory 4230 may be used as a storage medium of the data storage apparatus 4200.
FIG. 17 is a diagram illustrating a network system 5000 including a data storage apparatus according to an embodiment. Referring to FIG. 17, the network system 5000 may include a server system 5300 and a plurality of client systems 5410, 5420 and 5430, which are electrically coupled over a network 5500.
The server system 5300 may serve data in response to a request from the plurality of client systems 5410, 5420 and 5430. For example, the server system 5300 may store data provided by the plurality of client systems 5410, 5420 and 5430. For another example, the server system 5300 may provide data to the plurality of client systems 5410, 5420 and 5430.
The server system 5300 may include a host apparatus 5100 and a data storage apparatus 5200. The data storage apparatus 5200 may be configured as any one of the data storage apparatus 10 of FIG. 1, the SSD 2200 of FIG. 13, the data storage apparatus 3200 of FIG. 15, and the data storage apparatus 4200 of FIG. 16.
Those skilled in the art to which this disclosure pertains should understand that the embodiments are illustrative in all aspects and not limitative, because this disclosure may be implemented in various other forms without departing from the technical spirit or essential characteristics of this disclosure. Accordingly, the scope of this disclosure is defined by the appended claims rather than by the detailed description, and all modifications or variations derived from the meanings and scope of the claims and equivalents thereof should be understood as being included in the scope of this disclosure.
While various embodiments have been described above, it will be understood by those skilled in the art that the embodiments described are by way of example only. Accordingly, the data storage apparatus and method described herein should not be limited based on the described embodiments.

Claims (25)

What is claimed is:
1. A data storage apparatus comprising:
a volatile memory, the volatile memory including a region in which a zone mapping table and system information are stored and a random access zone suitable for random writes;
a non-volatile memory including a backup zone and a plurality of sequential zones suitable for sequential writes; and
a controller configured to identify whether a logical address received with a command from a host apparatus belongs to the random access zone or to the sequential zone and to control an operation corresponding to the command of the identified zone, to control the operation corresponding to the command using a physical address corresponding to the logical address as a start address when the logical address belongs to the sequential zone, to back up data stored in the random access zone onto the backup zone based on a criterion and to recover the data stored in the backup zone into the random access zone when a state of the controller switches to an on state after power is off,
wherein the command is a write command or a read command, and
wherein the zone mapping table comprises one or more entries, each entry including a logical block address group, a zone index, a start physical address, a total length, and a final write location.
2. The data storage apparatus according to claim 1, wherein when the logical address received with the write command from the host apparatus belongs to the sequential zone, the controller is configured to:
identify a zone index matched with the logical address using the zone mapping table,
write data, corresponding to a size of data received from the host apparatus, from a location next to the final write location corresponding to the identified zone index, and
update the final write location corresponding to the identified zone index in the zone mapping table.
3. The data storage apparatus according to claim 1, wherein when the logical address received with the write command from the host apparatus belongs to the random access zone, the controller is configured to:
write data, corresponding to a size of data received from the host apparatus, from a location based on a start physical address of the random access zone.
4. The data storage apparatus according to claim 1, wherein when the logical address received along with the read command from the host apparatus belongs to the sequential zone, the controller is configured to:
identify a zone index matched with the logical address using the zone mapping table, and
read data, corresponding to a size of data received from the host apparatus, using a physical address corresponding to the logical address as a start address in a corresponding zone of the identified zone index.
5. The data storage apparatus according to claim 1, wherein when the logical address received with the read command from the host apparatus belongs to the random access zone, the controller is configured to:
identify a final physical address by adding, to a start physical address of the random access zone, a value based on the logical address received from the host apparatus, and
read data, corresponding to a size of data received from the host apparatus, from the final physical address.
6. The data storage apparatus according to claim 1, wherein:
the random access zone comprises a plurality of slice regions,
each of the plurality of slice regions is matched with a random access index, and
each of the random access indices is matched with a flush flag, indicating whether data stored in the random access zone has been backed up onto the backup zone, and an update flag indicating whether data stored in the random access zone has been updated with new data.
7. The data storage apparatus according to claim 6, wherein the controller is configured to:
back up, onto the backup zone, data stored in the random access zone when an amount of data written in the random access zone is a reference value or more, and
change a value of a flush flag for the backed-up random access zone to 1 (one).
8. The data storage apparatus according to claim 6, wherein when the number of write commands received from the host apparatus is a reference value or more, the controller is configured to:
sequentially write, in the backup zone, data stored in the plurality of slice regions based on random access index numbers, and
change values of flush flags for the backed-up slice regions to 1 (one).
9. The data storage apparatus according to claim 8, wherein when data backup for all the slice regions within the random access zone is completed, the controller is configured to reset values of all the flush flags to 0 (zero).
10. The data storage apparatus according to claim 6, wherein when an update of the data stored in the random access zone occurs from the host apparatus after data stored in the plurality of slice regions is written in the backup zone, the controller is configured to change a value of an update flag for a corresponding slice region to 1 (one).
11. The data storage apparatus according to claim 10, wherein:
the backup zone comprises a first region onto which data written in the random access zone is backed up in a one-to-one way and a second region onto which latest data updated in the random access zone is backed up when power is turned off, and
each of the first region and the second region is respectively matched with one or more backup indices.
12. The data storage apparatus according to claim 11, wherein the controller is configured to:
write, in the second region, data having the flush flag of 0 or the update flag of 1 in the random access zone when a signal for power-off is received, and
store a corresponding random access index when the data is written in the second region.
13. The data storage apparatus according to claim 12, wherein when a state of the controller switches to an on state after the power is off, the controller is configured to:
calculate physical addresses corresponding to backup indices of the first region,
sequentially load data onto the random access zone, and
load, onto the random access zone, the latest data stored in the backup zone of the second region when the loading for the first region is terminated.
14. The data storage apparatus according to claim 13, wherein the controller is configured to identify a location of the random access zone into which the latest data is to be written based on the random access index stored in the second region along with the latest data.
15. A method for operating a data storage apparatus, the method comprising:
receiving a logical address and a command from a host;
identifying whether the logical address belongs to a random access zone within a volatile memory or to a sequential zone within a non-volatile memory, the volatile memory including a region in which a zone mapping table and system information are stored and the random access zone suitable for random writes, and the non-volatile memory including a backup zone and a plurality of sequential zones suitable for sequential writes; and
performing an operation corresponding to the command based on the identified random access zone or sequential zone,
wherein when the logical address belongs to the sequential zone, the operation corresponding to the command is performed using a physical address corresponding to the logical address as a start address, the command being a read command or a write command, and
wherein when the logical address belongs to the sequential zone, performing the operation corresponding to the write command comprises:
identifying, using the zone mapping table, a zone index matched with the logical address received along with the write command from the host apparatus; and
writing data corresponding to a size of data received from the host apparatus starting from a location next to a final write location of the identified zone index, and
updating the final write location of the identified zone index in the zone mapping table.
16. The method according to claim 15, wherein when the logical address belongs to the random access zone, performing the operation corresponding to the write command comprises writing data corresponding to a size of data received from the host apparatus based on a start physical address of the random access zone.
17. The method according to claim 15, wherein when the logical address belongs to the sequential zone, performing the operation corresponding to the read command comprises:
identifying a zone index matched with the logical address based on the zone mapping table;
setting, as a start address, a physical address corresponding to the logical address in a corresponding zone of the identified zone index; and
reading, from the set start address, data corresponding to a size of data received from the host apparatus.
18. The method according to claim 15, wherein when the logical address belongs to the random access zone, performing the operation corresponding to the read command comprises:
identifying a final physical address by adding, to a start physical address of the random access zone, a value corresponding to the logical address received from the host apparatus, and
reading, from the final physical address, data corresponding to a size of the data received from the host apparatus.
19. The method according to claim 15, further comprising:
backing up, onto the backup zone, data stored in the random access zone based on a preset criterion; and
recovering, into the random access zone, the data stored in the backup zone when switching to an on state after power is off.
20. The method according to claim 19, wherein:
the random access zone comprises a plurality of slice regions,
each of the plurality of slice regions is matched with a random access index, and
each of the random access indices is matched with a flush flag, indicating whether data stored in the random access zone has been backed up onto the backup zone, and an update flag indicating whether data stored in the random access zone has been updated with new data.
21. The method according to claim 20, wherein the backing up, onto the backup zone, of the data comprises:
backing up, onto the backup zone, data stored in the random access zone when an amount of data written in the random access zone is a reference value or more, and
changing a value of a flush flag for the backed-up random access zone to 1 (one).
22. The method according to claim 20, wherein the backing up, onto the backup zone, of the data comprises:
sequentially writing, in the backup zone, data stored in the plurality of slice regions based on random access index numbers when the number of write commands received from the host apparatus is a reference value or more, and
changing values of flush flags for the backed-up slice regions to 1 (one).
23. The method according to claim 20, wherein the backing up, onto the backup zone, of the data further comprises changing a value of an update flag for a corresponding slice region to 1 (one) when an update of the data stored in the random access zone occurs from the host apparatus after data stored in the plurality of slice regions is written in the backup zone.
24. The method according to claim 23, wherein:
the backup zone comprises a first region onto which data written in the random access zone is backed up in a one-to-one way and a second region onto which the latest data updated in the random access zone is backed up when power is turned off, and
each of the first region and the second region is respectively matched with one or more backup indices, and
the backing up, onto the backup zone, of the data comprises:
writing, in the second region, data having the flush flag of 0 or the update flag of 1 in the random access zone when a signal for power-off is received, and
storing a corresponding random access index when the data is written in the second region.
25. The method according to claim 24, wherein the recovering, into the random access zone, of the data comprises:
calculating a physical address corresponding to the backup index of the first region and sequentially loading data into the random access zone when switching to an on state after the power is off, and
loading, onto the random access zone, the latest data stored in the backup zone of the second region when the loading for the first region is terminated.
US16/841,274 2019-08-22 2020-04-06 Data storage apparatus and operating method thereof Active US11243709B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/000,082 US11734175B2 (en) 2019-08-22 2020-08-21 Storage device and method of operating the same
US18/346,203 US20230350803A1 (en) 2019-08-22 2023-07-01 Storage device and method of operating the same

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0103087 2019-08-22
KR1020190103087A KR20210023203A (en) 2019-08-22 2019-08-22 Data storage device and operating method thereof

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/847,555 Continuation-In-Part US11288189B2 (en) 2019-08-22 2020-04-13 Memory controller and method of operating the same

Related Child Applications (3)

Application Number Title Priority Date Filing Date
US16/847,555 Continuation-In-Part US11288189B2 (en) 2019-08-22 2020-04-13 Memory controller and method of operating the same
US16/882,076 Continuation-In-Part US11487627B2 (en) 2019-08-22 2020-05-22 Storage device and method of operating the same
US17/000,082 Continuation-In-Part US11734175B2 (en) 2019-08-22 2020-08-21 Storage device and method of operating the same

Publications (2)

Publication Number Publication Date
US20210055864A1 US20210055864A1 (en) 2021-02-25
US11243709B2 true US11243709B2 (en) 2022-02-08

Family

ID=74495906

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/841,274 Active US11243709B2 (en) 2019-08-22 2020-04-06 Data storage apparatus and operating method thereof

Country Status (5)

Country Link
US (1) US11243709B2 (en)
JP (1) JP2021034026A (en)
KR (1) KR20210023203A (en)
CN (1) CN112416242A (en)
DE (1) DE102020112512A1 (en)

Citations (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070150645A1 (en) * 2005-12-28 2007-06-28 Intel Corporation Method, system and apparatus for power loss recovery to enable fast erase time
KR100825802B1 (en) 2007-02-13 2008-04-29 삼성전자주식회사 Data write method of non-volatile memory device copying data having logical pages prior to logical page of write data from data block
US20080282105A1 (en) * 2007-05-10 2008-11-13 Deenadhayalan Veera W Data integrity validation in storage systems
US20100023682A1 (en) * 2007-10-11 2010-01-28 Super Talent Electronics Inc. Flash-Memory System with Enhanced Smart-Storage Switch and Packed Meta-Data Cache for Mitigating Write Amplification by Delaying and Merging Writes until a Host Read
US20100174870A1 (en) * 2009-01-02 2010-07-08 Arindam Banerjee System and method to preserve and recover unwritten data present in data cache of a disk subsystem across power outages
US20110099323A1 (en) * 2009-10-27 2011-04-28 Western Digital Technologies, Inc. Non-volatile semiconductor memory segregating sequential, random, and system data to reduce garbage collection for page based mapping
KR20110046243A (en) 2009-10-27 2011-05-04 삼성전자주식회사 User device and its mapping data management method
US20110161563A1 (en) * 2009-12-24 2011-06-30 National Taiwan University Block management method of a non-volatile memory
US20110302446A1 (en) * 2007-05-10 2011-12-08 International Business Machines Corporation Monitoring lost data in a storage system
US20120072801A1 (en) * 2010-08-11 2012-03-22 The University Of Tokyo Data processing apparatus, control device and data storage device
US20120084484A1 (en) 2010-09-30 2012-04-05 Apple Inc. Selectively combining commands for a system having non-volatile memory
US20130173857A1 (en) * 2006-10-30 2013-07-04 Won-Moon CHEON Flash memory device with multi-level cells and method of writing data therein
US20140082265A1 (en) * 2012-09-20 2014-03-20 Silicon Motion, Inc. Data storage device and flash memory control method thereof
US20140258588A1 (en) * 2013-03-05 2014-09-11 Western Digital Technologies, Inc. Methods, devices and systems for two stage power-on map rebuild with free space accounting in a solid state drive
US20140289355A1 (en) 2013-03-21 2014-09-25 Fujitsu Limited Autonomous distributed cache allocation control system
KR101449524B1 (en) 2008-03-12 2014-10-14 삼성전자주식회사 Storage device and computing system
US20150046670A1 (en) * 2013-08-08 2015-02-12 Sangmok Kim Storage system and writing method thereof
KR101506675B1 (en) 2008-12-09 2015-03-30 삼성전자주식회사 User device comprising auxiliary power supply
US20150205539A1 (en) * 2014-01-21 2015-07-23 Sangkwon Moon Memory system including nonvolatile memory device and erase method thereof
US20150347026A1 (en) * 2014-05-28 2015-12-03 Sandisk Technologies Inc. Method and system for interleaving pieces of a mapping table for a storage device
KR101636248B1 (en) 2009-12-10 2016-07-06 삼성전자주식회사 Flash memory device, flash memory system, and method of programming the flash memory device
US9418699B1 (en) 2014-10-09 2016-08-16 Western Digital Technologies, Inc. Management of sequentially written data
US20160342509A1 (en) * 2015-05-22 2016-11-24 Sandisk Enterprise Ip Llc Hierarchical FTL Mapping Optimized for Workload
US9875038B2 (en) * 2015-06-24 2018-01-23 Samsung Electronics Co., Ltd. Storage device including nonvolatile memory device
KR20180024615A (en) 2016-08-30 2018-03-08 삼성전자주식회사 Method of managing power and performance of an electronic device including a plurality of capacitors for supplying auxiliary power
US20180129453A1 (en) 2016-11-10 2018-05-10 Samsung Electronics Co., Ltd. Solid state drive device and storage system having the same
US20180357170A1 (en) * 2017-06-12 2018-12-13 Western Digital Technologies, Inc. Method and apparatus for cache write overlap handling
US20180364938A1 (en) * 2017-06-14 2018-12-20 Burlywood, LLC Extent-based data location table management
US20190065387A1 (en) * 2017-08-28 2019-02-28 Western Digital Technologies, Inc. Storage system and method for fast lookup in a table-caching database
US20190102250A1 (en) * 2017-10-02 2019-04-04 Western Digital Technologies, Inc. Redundancy Coding Stripe Based On Internal Addresses Of Storage Devices
KR20190060328A (en) 2017-11-24 2019-06-03 삼성전자주식회사 Storage device, host device controlling storage device, and operation mehtod of storage device
US20190187934A1 (en) * 2017-12-18 2019-06-20 Formulus Black Corporation Random access memory (ram)-based computer systems, devices, and methods
US20190220416A1 (en) * 2018-01-16 2019-07-18 SK Hynix Inc. Data storage apparatus and operating method thereof
US20190227735A1 (en) * 2016-11-09 2019-07-25 Sandisk Technologies Llc Method and System for Visualizing a Correlation Between Host Commands and Storage System Performance
US10643707B2 (en) * 2017-07-25 2020-05-05 Western Digital Technologies, Inc. Group write operations for a data storage device
US10698817B2 (en) 2017-06-12 2020-06-30 Dell Products, L.P. Method for determining available stored energy capacity at a power supply and system therefor
US10747666B2 (en) 2016-10-27 2020-08-18 Toshiba Memory Corporation Memory system
US20210165579A1 (en) 2019-12-03 2021-06-03 Pure Storage, Inc. Dynamic allocation of blocks of a storage device based on power loss protection
US20210248842A1 (en) 2020-02-11 2021-08-12 Aptiv Technologies Limited Data Logging System for Collecting and Storing Input Data

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012016089A2 (en) * 2010-07-28 2012-02-02 Fusion-Io, Inc. Apparatus, system, and method for conditional and atomic storage operations
US20140229657A1 (en) * 2013-02-08 2014-08-14 Microsoft Corporation Readdressing memory for non-volatile storage devices
US9218279B2 (en) * 2013-03-15 2015-12-22 Western Digital Technologies, Inc. Atomic write command support in a solid state drive
KR20180080589A (en) * 2017-01-04 2018-07-12 에스케이하이닉스 주식회사 Data storage device and operating method thereof
TWI670600B (en) * 2017-09-18 2019-09-01 Shenzhen EpoStar Electronics Technology Co., Ltd. Data backup method, data recovery method and storage controller

Patent Citations (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070150645A1 (en) * 2005-12-28 2007-06-28 Intel Corporation Method, system and apparatus for power loss recovery to enable fast erase time
US20130173857A1 (en) * 2006-10-30 2013-07-04 Won-Moon CHEON Flash memory device with multi-level cells and method of writing data therein
KR100825802B1 (en) 2007-02-13 2008-04-29 삼성전자주식회사 Data write method of non-volatile memory device copying data having logical pages prior to logical page of write data from data block
US20110302446A1 (en) * 2007-05-10 2011-12-08 International Business Machines Corporation Monitoring lost data in a storage system
US20080282105A1 (en) * 2007-05-10 2008-11-13 Deenadhayalan Veera W Data integrity validation in storage systems
US20100023682A1 (en) * 2007-10-11 2010-01-28 Super Talent Electronics Inc. Flash-Memory System with Enhanced Smart-Storage Switch and Packed Meta-Data Cache for Mitigating Write Amplification by Delaying and Merging Writes until a Host Read
KR101449524B1 (en) 2008-03-12 2014-10-14 Samsung Electronics Co., Ltd. Storage device and computing system
KR101506675B1 (en) 2008-12-09 2015-03-30 Samsung Electronics Co., Ltd. User device comprising auxiliary power supply
US20100174870A1 (en) * 2009-01-02 2010-07-08 Arindam Banerjee System and method to preserve and recover unwritten data present in data cache of a disk subsystem across power outages
KR20110046243A (en) 2009-10-27 2011-05-04 삼성전자주식회사 User device and its mapping data management method
US20110099323A1 (en) * 2009-10-27 2011-04-28 Western Digital Technologies, Inc. Non-volatile semiconductor memory segregating sequential, random, and system data to reduce garbage collection for page based mapping
KR101636248B1 (en) 2009-12-10 2016-07-06 Samsung Electronics Co., Ltd. Flash memory device, flash memory system, and method of programming the flash memory device
US20110161563A1 (en) * 2009-12-24 2011-06-30 National Taiwan University Block management method of a non-volatile memory
US20120072801A1 (en) * 2010-08-11 2012-03-22 The University Of Tokyo Data processing apparatus, control device and data storage device
US20120084484A1 (en) 2010-09-30 2012-04-05 Apple Inc. Selectively combining commands for a system having non-volatile memory
US20140082265A1 (en) * 2012-09-20 2014-03-20 Silicon Motion, Inc. Data storage device and flash memory control method thereof
US20140258588A1 (en) * 2013-03-05 2014-09-11 Western Digital Technologies, Inc. Methods, devices and systems for two stage power-on map rebuild with free space accounting in a solid state drive
US20140289355A1 (en) 2013-03-21 2014-09-25 Fujitsu Limited Autonomous distributed cache allocation control system
US20150046670A1 (en) * 2013-08-08 2015-02-12 Sangmok Kim Storage system and writing method thereof
US20150205539A1 (en) * 2014-01-21 2015-07-23 Sangkwon Moon Memory system including nonvolatile memory device and erase method thereof
US20150347026A1 (en) * 2014-05-28 2015-12-03 Sandisk Technologies Inc. Method and system for interleaving pieces of a mapping table for a storage device
US9418699B1 (en) 2014-10-09 2016-08-16 Western Digital Technologies, Inc. Management of sequentially written data
US20160342509A1 (en) * 2015-05-22 2016-11-24 Sandisk Enterprise Ip Llc Hierarchical FTL Mapping Optimized for Workload
US9875038B2 (en) * 2015-06-24 2018-01-23 Samsung Electronics Co., Ltd. Storage device including nonvolatile memory device
KR20180024615A (en) 2016-08-30 2018-03-08 삼성전자주식회사 Method of managing power and performance of an electronic device including a plurality of capacitors for supplying auxiliary power
US10747666B2 (en) 2016-10-27 2020-08-18 Toshiba Memory Corporation Memory system
US20190227735A1 (en) * 2016-11-09 2019-07-25 Sandisk Technologies Llc Method and System for Visualizing a Correlation Between Host Commands and Storage System Performance
US20180129453A1 (en) 2016-11-10 2018-05-10 Samsung Electronics Co., Ltd. Solid state drive device and storage system having the same
US20180357170A1 (en) * 2017-06-12 2018-12-13 Western Digital Technologies, Inc. Method and apparatus for cache write overlap handling
US10698817B2 (en) 2017-06-12 2020-06-30 Dell Products, L.P. Method for determining available stored energy capacity at a power supply and system therefor
US20180364938A1 (en) * 2017-06-14 2018-12-20 Burlywood, LLC Extent-based data location table management
US10643707B2 (en) * 2017-07-25 2020-05-05 Western Digital Technologies, Inc. Group write operations for a data storage device
US20190065387A1 (en) * 2017-08-28 2019-02-28 Western Digital Technologies, Inc. Storage system and method for fast lookup in a table-caching database
US20190102250A1 (en) * 2017-10-02 2019-04-04 Western Digital Technologies, Inc. Redundancy Coding Stripe Based On Internal Addresses Of Storage Devices
KR20190060328A (en) 2017-11-24 2019-06-03 삼성전자주식회사 Storage device, host device controlling storage device, and operation mehtod of storage device
US20190187934A1 (en) * 2017-12-18 2019-06-20 Formulus Black Corporation Random access memory (ram)-based computer systems, devices, and methods
US20190220416A1 (en) * 2018-01-16 2019-07-18 SK Hynix Inc. Data storage apparatus and operating method thereof
US20210165579A1 (en) 2019-12-03 2021-06-03 Pure Storage, Inc. Dynamic allocation of blocks of a storage device based on power loss protection
US20210248842A1 (en) 2020-02-11 2021-08-12 Aptiv Technologies Limited Data Logging System for Collecting and Storing Input Data

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Final Office Action for related U.S. Appl. No. 16/877,239, dated Oct. 14, 2021.
Non-Final Office Action for related U.S. Appl. No. 16/877,239, dated Apr. 16, 2021.
Notice of Allowance for U.S. Appl. No. 16/847,555, dated Dec. 1, 2021.

Also Published As

Publication number Publication date
US20210055864A1 (en) 2021-02-25
KR20210023203A (en) 2021-03-04
JP2021034026A (en) 2021-03-01
DE102020112512A1 (en) 2021-02-25
CN112416242A (en) 2021-02-26

Similar Documents

Publication Publication Date Title
US11243709B2 (en) Data storage apparatus and operating method thereof
US11216362B2 (en) Data storage device and operating method thereof
US11150811B2 (en) Data storage apparatus performing flush write operation, operating method thereof, and data processing system including the same
CN111124273B (en) Data storage device and operation method thereof
US20200218653A1 (en) Controller, data storage device, and operating method thereof
US11520694B2 (en) Data storage device and operating method thereof
US10922000B2 (en) Controller, operating method thereof, and memory system including the same
US11966603B2 (en) Memory system for updating firmware when SPO occurs and operating method thereof
US20230273748A1 (en) Memory system, operating method thereof and computing system
US11061614B2 (en) Electronic apparatus having data retention protection and operating method thereof
CN113704138A (en) Storage device and operation method thereof
US20230075820A1 (en) Event log management method, controller and storage device
US11157401B2 (en) Data storage device and operating method thereof performing a block scan operation for checking for valid page counts
KR20220130526A (en) Memory system and operating method thereof
US10657046B2 (en) Data storage device and operating method thereof
KR20210079894A (en) Data storage device and operating method thereof
KR20210056625A (en) Data storage device and Storage systmem using the same
KR20200015185A (en) Data storage device and operating method thereof
US11429530B2 (en) Data storage device and operating method thereof
US11379362B2 (en) Memory system and operating method thereof
KR20180047808A (en) Data storage device and operating method thereof
KR20190099570A (en) Data storage device and operating method thereof

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: SK HYNIX INC., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NOH, JUNG KI;JIN, YONG;REEL/FRAME:052336/0414

Effective date: 20200326

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE