CN112148208A - Apparatus and method for transferring internal data of memory system in sleep mode - Google Patents

Apparatus and method for transferring internal data of memory system in sleep mode

Info

Publication number
CN112148208A
Authority
CN
China
Prior art keywords
host
memory device
internal data
volatile memory
memory system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911374232.2A
Other languages
Chinese (zh)
Other versions
CN112148208B (en)
Inventor
李钟涣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SK Hynix Inc
Original Assignee
SK Hynix Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SK Hynix Inc filed Critical SK Hynix Inc
Publication of CN112148208A
Application granted
Publication of CN112148208B
Legal status: Active

Classifications

    • G06F3/0614 Improving the reliability of storage systems
    • G06F3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F3/0658 Controller construction arrangements
    • G06F1/3275 Power saving in memory, e.g. RAM, cache
    • G06F12/0246 Memory management in non-volatile memory in block erasable memory, e.g. flash memory
    • G06F12/0284 Multiple user address space allocation, e.g. using different base addresses
    • G06F3/0625 Power saving in storage systems
    • G06F3/0634 Configuration or reconfiguration of storage systems by changing the state or mode of one or more devices
    • G06F3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0647 Migration mechanisms
    • G06F3/0656 Data buffering arrangements
    • G06F3/0683 Plurality of storage devices
    • G06F3/0688 Non-volatile semiconductor memory arrays
    • G06F2212/1028 Power efficiency
    • G06F2212/7201 Logical to physical mapping or translation of blocks or pages
    • G06F2212/7203 Temporary buffering, e.g. using volatile buffer or dedicated buffer blocks
    • G06F2212/7208 Multiple device management, e.g. distributing data over multiple flash devices
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The present disclosure relates to a memory system. The memory system may include: a non-volatile memory device; a volatile memory device whose power supply is suspended in a sleep mode; and a controller configured to temporarily store internal data in the volatile memory device, the internal data being generated while processing operations of the non-volatile memory device. When the memory system receives a sleep command from the host, the controller may output the internal data stored in the volatile memory device to the host in response to the sleep command, and then may transmit a confirmation of entering the sleep mode to the host and enter the sleep mode.

Description

Apparatus and method for transferring internal data of memory system in sleep mode
Cross Reference to Related Applications
This application claims priority to Korean Patent Application No. 10-2019-0077806, filed on June 28, 2019, which is incorporated herein by reference in its entirety.
Technical Field
The illustrative embodiments relate to a data processing system, and more particularly, to an apparatus and method for transferring internal data of a memory system included in a data processing system to a host or computing device and storing the internal data therein.
Background
Recently, the paradigm of computing environments has shifted toward pervasive computing, which enables computer systems to be accessed virtually anywhere and at any time. As a result, the use of portable electronic devices such as mobile phones, digital cameras, and notebook computers is rapidly increasing. Such portable electronic devices typically use or include a memory system that uses or embeds at least one memory device, i.e., a data storage device. The data storage device may be used as a primary storage device or a secondary storage device for the portable electronic device.
Unlike a hard disk, a data storage device using a nonvolatile semiconductor memory device has the advantages of excellent stability and durability, because it has no mechanical moving parts (e.g., a mechanical arm), as well as high data access speed and low power consumption. Examples of memory systems having such advantages include USB (Universal Serial Bus) memory devices, memory cards with various interfaces, Solid State Drives (SSDs), and the like.
Disclosure of Invention
Various embodiments relate to a data processing system that transfers data between components in the data processing system, including components or resources such as a memory system and a host.
Also, various embodiments relate to an apparatus that, when a memory system included in a data processing system is controlled to enter a sleep mode, transfers internal data stored in a volatile memory of the memory system, whose power supply is to be suspended, to a host, and that, when exiting from the sleep mode, receives the internal data back from the host and stores it in the volatile memory of the memory system, thereby improving the operating performance of the memory system.
The technical objects of the present disclosure are not limited to the above technical objects, and other technical objects not described herein will be clearly understood by those skilled in the art to which the present disclosure pertains based on the following description.
In an embodiment, a memory system may include: a non-volatile memory device; a volatile memory device whose power supply is suspended in a sleep mode; and a controller configured to temporarily store internal data in the volatile memory device, the internal data being generated while processing operations of the non-volatile memory device. When a sleep command is received from the host, the controller may output the internal data stored in the volatile memory device to the host in response to the sleep command, and then may transmit a confirmation of entering the sleep mode to the host and enter the sleep mode.
When a sleep command is received from the host, the controller may perform a comparison of the size of internal data stored in the volatile memory device with the size of data that can be transferred at once in response to the sleep command, and may determine whether to divide the internal data based on the result of the comparison.
When the result of the comparison indicates that the size of the internal data stored in the volatile memory device exceeds the size of data that can be transferred at one time in response to the sleep command, the controller may divide the internal data stored in the volatile memory device into N data parts, may sequentially output the divided internal data from the first part to the N-th part to the host as a response to the sleep command, and may then transfer a confirmation of entering the sleep mode to the host and enter the sleep mode, where N may be a natural number equal to or greater than 2.
When the result of the comparison indicates that the size of the internal data stored in the volatile memory device exceeds the size of data that can be transferred at one time in response to the sleep command: the controller may divide internal data stored in the volatile memory device into N data parts, where N may be a natural number equal to or greater than 2, and when N exceeds a reference value, the controller may program the internal data stored in the volatile memory device into the non-volatile memory device, and may transmit a confirmation of entering the sleep mode to the host and enter the sleep mode.
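To make the split-or-program decision described above concrete, the following is a minimal controller-side sketch in C. It assumes hypothetical helper functions (send_part_to_host, program_to_nonvolatile, ack_sleep_mode_entry, enter_sleep_mode) and example values for the single-transfer size and the reference value; none of these names or numbers come from the disclosure.

#include <stdint.h>

/* Hypothetical limits; the actual values are implementation-specific. */
#define MAX_XFER_SIZE   (4u * 1024u)   /* data transferable at once per sleep response */
#define N_REFERENCE     8u             /* reference value for the part count N */

/* Assumed platform hooks, not part of the disclosure. */
void send_part_to_host(const uint8_t *buf, uint32_t len);
void program_to_nonvolatile(const uint8_t *buf, uint32_t len);
void ack_sleep_mode_entry(void);
void enter_sleep_mode(void);

/* Handle a sleep command: either stream the internal data to the host in N
 * parts, or fall back to programming it into the non-volatile memory device
 * when N would exceed the reference value. */
void handle_sleep_command(const uint8_t *indata, uint32_t indata_size)
{
    /* N = ceil(indata_size / MAX_XFER_SIZE); N == 1 means no division is needed. */
    uint32_t n_parts = (indata_size + MAX_XFER_SIZE - 1u) / MAX_XFER_SIZE;

    if (n_parts > N_REFERENCE) {
        /* Too many transfers: keep the data inside the memory system instead. */
        program_to_nonvolatile(indata, indata_size);
    } else {
        for (uint32_t i = 0; i < n_parts; i++) {
            uint32_t offset = i * MAX_XFER_SIZE;
            uint32_t len = indata_size - offset;
            if (len > MAX_XFER_SIZE)
                len = MAX_XFER_SIZE;
            send_part_to_host(indata + offset, len);  /* parts 1..N, in order */
        }
    }

    ack_sleep_mode_entry();  /* confirm entry to the host */
    enter_sleep_mode();      /* power to the second volatile memory is suspended */
}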
The controller may send an acknowledgement request to the host to confirm whether the internal data stored in the volatile memory device is allowed to be transferred to the host in response to the sleep command.
When the host confirms that the controller is allowed to transmit the internal data to the host in response to the confirmation request, the controller may output the internal data stored in the volatile memory device to the host in response to the sleep command, and then may transmit a confirmation of entering the sleep mode to the host and may enter the sleep mode.
When the host does not indicate that the controller is allowed to transfer the internal data to the host in response to the confirmation request, the controller may program the internal data stored in the volatile memory device to the non-volatile memory device, and then may transfer a confirmation of entering the sleep mode to the host and may enter the sleep mode.
When a wake command including internal data is received from the host, the controller may exit the sleep mode, may supply power to the volatile memory device, may store the internal data included in the wake command in the volatile memory device, and may transmit an acknowledgement of the exit from the sleep mode to the host.
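A corresponding sketch of the wake-command path, under the same assumptions about helper names, might look like this:

#include <stdint.h>

/* Assumed platform hooks, not part of the disclosure. */
void power_on_second_volatile_memory(void);
void copy_to_second_volatile_memory(const uint8_t *data, uint32_t len);
void ack_sleep_mode_exit(void);

/* Handle a wake command that carries the internal data previously handed
 * to the host: restore power, restore the data, then acknowledge the exit. */
void handle_wake_command(const uint8_t *indata, uint32_t indata_size)
{
    power_on_second_volatile_memory();                   /* exit the sleep mode */
    copy_to_second_volatile_memory(indata, indata_size); /* restore internal data */
    ack_sleep_mode_exit();                               /* notify the host */
}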
In an embodiment, a data processing system may include: a host configured to generate a sleep command and a wake command and output the generated commands; and a memory system including a non-volatile memory device and a volatile memory device, the memory system being configured to temporarily store, in the volatile memory device, internal data generated while processing operations of the non-volatile memory device, and to suspend power supply to the volatile memory device in the sleep mode. When a sleep command is received from the host, the memory system may output the internal data stored in the volatile memory device to the host in response to the sleep command, may then transmit a confirmation of entering the sleep mode to the host, and may enter the sleep mode. The host may store the internal data received from the memory system in an internal memory between a first point in time when the sleep command is output to the memory system and a second point in time when a wake command is sent to the memory system.
When a sleep command is received from the host, the memory system may determine whether to divide the internal data by comparing the size of the internal data stored in the volatile memory device with the size of data that can be transferred at once in response to the sleep command.
When the size of the internal data stored in the volatile memory device exceeds the size of data that can be transferred at one time in response to a sleep command, the memory system may divide the internal data stored in the volatile memory device into N data parts, may sequentially output the divided internal data from the first part to the N-th part to the host in response to the sleep command, and may then transfer a confirmation of entering the sleep mode to the host and enter the sleep mode, where N may be a natural number equal to or greater than 2.
When the size of internal data stored in the volatile memory device exceeds the size of data that can be transferred at once in response to the sleep command: the memory system may divide internal data stored in the volatile memory device into N data parts, and when N exceeds a reference value, the memory system may program the internal data stored in the volatile memory device to the non-volatile memory device, and then may transmit a confirmation of entering the sleep mode to the host and enter the sleep mode.
In response to receiving the sleep command, the memory system may send an acknowledgement request to the host to confirm whether the internal data stored in the volatile memory device is allowed to be transferred to the host. When the host receives the acknowledgement request from the memory system between the first point in time and receiving the confirmation of entering the sleep mode, the host may check the state of the internal memory, may determine, according to the check result, whether the memory system is allowed to transmit the internal data to the host, and may transmit to the memory system an acknowledgement indicating whether the memory system is allowed to transmit the internal data to the host.
In response to an acknowledgement indicating that the memory system is allowed to transfer the internal data to the host, the memory system may output the internal data stored in the volatile memory device to the host in response to the sleep command, may then transfer an acknowledgement of entering the sleep mode to the host, and may enter the sleep mode.
In response to an acknowledgement indicating that the memory system is not allowed to transfer the internal data to the host, the memory system may program the internal data stored in the volatile memory device to the non-volatile memory device, may then transfer an acknowledgement to enter the sleep mode to the host and may enter the sleep mode.
The host may include the internal data (second data) stored in the internal memory in a wake command, and may output the wake command to the memory system together with the second data. When the memory system receives the wake command from the host, the memory system may exit the sleep mode, may supply power to the volatile memory device, may store the second data included in the wake command in the volatile memory device, and may transmit an acknowledgement of exiting the sleep mode to the host.
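For illustration, the host-side behavior between the first and second points in time could be sketched as follows; the buffer size, transport hooks, and command framing are assumptions rather than details taken from the disclosure:

#include <stdint.h>

#define HOST_BUF_SIZE (64u * 1024u)   /* assumed size of the host's internal memory region */

static uint8_t host_internal_memory[HOST_BUF_SIZE];
static uint32_t held_size;

/* Assumed transport hooks, not part of the disclosure. */
void send_sleep_command(void);
uint32_t receive_internal_data(uint8_t *dst, uint32_t max);  /* returns bytes received */
void wait_for_sleep_entry_ack(void);
void send_wake_command_with_data(const uint8_t *data, uint32_t len);
void wait_for_sleep_exit_ack(void);

/* First point in time: request sleep and hold the memory system's internal data. */
void host_enter_device_sleep(void)
{
    send_sleep_command();
    held_size = receive_internal_data(host_internal_memory, HOST_BUF_SIZE);
    wait_for_sleep_entry_ack();
}

/* Second point in time: wake the memory system and hand the data back. */
void host_exit_device_sleep(void)
{
    send_wake_command_with_data(host_internal_memory, held_size);
    wait_for_sleep_exit_ack();
}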
In an embodiment, an operating method of a memory system including a non-volatile memory device and a volatile memory device may include: receiving a sleep command from a host; outputting, to the host, internal data stored in the volatile memory device in response to the sleep command after receiving the sleep command, the internal data being data generated while processing operations of the non-volatile memory device; transmitting an acknowledgement of entering the sleep mode to the host; and entering the sleep mode after outputting the internal data, wherein entering the sleep mode may include suspending power to the volatile memory device.
Outputting the internal data may include: determining whether to divide the internal data by comparing the size of the internal data stored in the volatile memory device with the size of data that can be transferred at one time in response to the sleep command; and, in response to determining to divide the internal data: dividing the internal data stored in the volatile memory device into N data parts; outputting the divided internal data from the first part to the N-th part to the host as a response to the sleep command when N is less than a reference value; and programming the internal data stored in the volatile memory device to the non-volatile memory device when N exceeds the reference value.
The operating method may further include: requesting the host, in response to receiving the sleep command, to confirm whether the internal data stored in the volatile memory device is allowed to be transferred to the host; performing the outputting of the internal data in response to the host indicating that the memory system is allowed to transmit the internal data to the host; and programming the internal data to the non-volatile memory device when the host indicates that the memory system is not allowed to transfer the internal data to the host.
The operating method may further include: in response to receiving a wake command from the host: exiting the sleep mode, supplying power to the volatile memory device, storing internal data included in the wake command in the volatile memory device, and transmitting an acknowledgement of exiting the sleep mode to the host.
Drawings
FIG. 1 illustrates sleep mode entry and exit operations in a data processing system according to an embodiment.
FIG. 2 illustrates a data processing system including a memory system according to an embodiment.
Fig. 3 shows the configuration of a host and a memory system in the data processing system according to the present embodiment.
Fig. 4 shows a read operation of the host and the memory system in the data processing system according to the present embodiment.
Fig. 5A, 5B, 5C, 5D, and 5E illustrate sleep mode entry and exit operations in the data processing system according to the present embodiment.
Detailed Description
Various embodiments will be described in more detail below with reference to the accompanying drawings. This invention may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Throughout this disclosure, like reference numerals refer to like parts throughout the various figures and embodiments of the present invention.
FIG. 1 illustrates sleep mode entry and exit operations in a data processing system according to an embodiment.
Referring to FIG. 1, a host 102 and a memory system 110 may be interconnected with each other. The host 102 may be understood as a computing device and implemented in the form of a mobile device, computer, server, or the like. The memory system 110 interconnected with the host 102 may receive commands from the host 102 and store or output data in response to the received commands.
Memory system 110 may include a non-volatile memory device that includes non-volatile memory cells. For example, the memory system 110 may be implemented in the form of a flash memory, a Solid State Drive (SSD), or the like.
The memory system 110 may include a volatile memory device that temporarily stores internal data INDATA generated during processing of operations of the non-volatile memory device. For example, the memory system 110 may generate mapping data as internal data INDATA for performing a mapping (such as a logical address to physical address mapping) to connect a file system used by the host 102 to a storage space of the non-volatile memory device. Further, in order to manage the reliability and lifetime of the nonvolatile memory device, the memory system 110 may generate read/write/erase count data as the internal data INDATA.
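Purely as an illustration of the kinds of internal data INDATA mentioned above, the mapping entries and reliability counters might be modeled with simple C structures such as the following; the field names and widths are assumptions:

#include <stdint.h>

/* One logical-to-physical mapping entry (L2P). */
typedef struct {
    uint32_t logical_block_addr;   /* address used by the host's file system */
    uint32_t physical_page_addr;   /* location inside the non-volatile memory device */
} l2p_entry_t;

/* Per-block counters used to manage reliability and lifetime. */
typedef struct {
    uint32_t read_count;
    uint32_t write_count;
    uint32_t erase_count;
} block_counters_t;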
When a write or read operation is not scheduled to be performed on the memory system 110 for a predetermined time or more, the host 102 may request the memory system 110 to enter a sleep mode to reduce power consumption of the memory system 110.
In an embodiment, the host 102 can generate a SLEEP COMMAND and transmit the SLEEP COMMAND to the memory system 110 to request the memory system 110 to enter a SLEEP mode.
In an embodiment, when a sleep command (SLEEP COMMAND) is received from the host 102, the memory system 110 may output the internal data INDATA stored in the second volatile memory (VOLATILE MEMORY 2) of the memory system 110 to the host 102 in response to the sleep command. At this time, although not shown in detail in fig. 1, the memory system 110 may include a first volatile memory to which power is supplied in the sleep mode and a second volatile memory to which power is suspended in the sleep mode. Thus, the memory system 110 may output the portion of the internal data INDATA stored in the second volatile memory to the host 102 in response to the sleep command, without outputting the portion of the internal data INDATA stored in the first volatile memory to the host 102.
In an embodiment, after outputting the internal data INDATA stored in the second volatile memory to the host 102, the memory system 110 may acknowledge entering the sleep mode by sending an acknowledgement (ACK SLEEP MODE ENTRY) to the host 102 and then enter the sleep mode. While in the sleep mode, the memory system 110 may supply power to the first volatile memory and suspend power to the second volatile memory. At this time, the second volatile memory, whose power supply is suspended upon entering the sleep mode, may lose the portion of the internal data INDATA stored therein, while the first volatile memory, which remains powered, may retain the portion of the internal data INDATA stored therein.
In an embodiment, the host 102 may store the internal data INDATA received from the memory system 110 in an internal memory (INTERNAL MEMORY) of the host 102 while the memory system 110 is in the sleep mode.
In an embodiment, when the host 102 intends to request that the memory system 110, which has entered the sleep mode, exit the sleep mode, the host 102 may generate a wake command (WAKE COMMAND) and transmit the wake command to the memory system 110. The host 102 may include the internal data INDATA stored in the internal memory in the wake command, and transfer the wake command together with the internal data INDATA to the memory system 110.
In an embodiment, when the memory system 110 receives the wake command from the host 102, the memory system 110 may exit the sleep mode and supply power to the second volatile memory, whose power supply had been suspended during the sleep mode interval. Then, the memory system 110 may store the internal data INDATA, which has been received from the host 102 together with the wake command, in the second volatile memory, and transmit an acknowledgement of exiting the sleep mode (ACK SLEEP MODE EXIT) to the host 102.
Referring to FIG. 2, a data processing system 100 is depicted in accordance with an embodiment of the present disclosure. Data processing system 100 may include a host 102 interfaced with or operating in conjunction with a memory system 110.
For example, the host 102 may include a portable electronic device such as a mobile phone, an MP3 player, or a laptop computer, or an electronic device such as a desktop computer, a game console, a Television (TV), a projector, or the like.
The host 102 may include at least one Operating System (OS) that may generally manage and control the functions and operations performed in the host 102. The OS may provide interoperability between the host 102 interfacing with the memory system 110 and users using the memory system 110. The OS may support functions and operations corresponding to user requests. By way of example and not limitation, an OS may be classified as a general-purpose operating system or a mobile operating system depending on the mobility of the host 102. The general-purpose operating system may be a personal operating system or an enterprise operating system, depending on system requirements or user environment. Personal operating systems, including, for example, Windows and Chrome, may support services for general purposes, whereas enterprise operating systems, including Windows Server, Linux, Unix, and the like, may be specialized for securing and supporting high performance. Further, mobile operating systems may include Android, iOS, Windows Mobile, and the like. A mobile operating system may support services or functions for mobility (e.g., a power saving function). The host 102 may include multiple operating systems. The host 102 may execute multiple operating systems interlocked with the memory system 110 according to a user's request. The host 102 may transfer a plurality of commands corresponding to the user's request to the memory system 110, thereby causing operations corresponding to the commands to be performed within the memory system 110.
The memory system 110 may operate or perform particular functions or operations in response to requests from the host 102, and in particular, may store data to be accessed by the host 102. The memory system 110 may be used as a primary memory system or a secondary memory system for the host 102. Depending on the protocol of the host interface, the memory system 110 may be implemented with any of various types of storage devices that may be electrically coupled with the host 102. Non-limiting examples of suitable storage devices include Solid State Drives (SSDs), multimedia cards (MMCs), embedded MMCs (eMMCs), reduced-size MMCs (RS-MMCs), micro-MMCs, Secure Digital (SD) cards, mini-SD cards, micro-SD cards, Universal Serial Bus (USB) storage devices, Universal Flash Storage (UFS) devices, CompactFlash (CF) cards, Smart Media (SM) cards, memory sticks, and the like.
The storage devices of memory system 110 may be implemented using volatile memory devices, such as Dynamic Random Access Memory (DRAM) or Static RAM (SRAM), and/or non-volatile memory devices, such as Read Only Memory (ROM), Mask ROM (MROM), Programmable ROM (PROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), Ferroelectric RAM (FRAM), Phase-change RAM (PRAM), Magnetoresistive RAM (MRAM), Resistive RAM (RRAM or ReRAM), or flash memory.
Memory system 110 may include a controller 130 and a memory device 150. Memory device 150 may store data to be accessed by host 102. Controller 130 may control the storage of data in memory device 150.
The controller 130 and the memory device 150 may be integrated into a single semiconductor device, which may be included in any of the various types of memory systems discussed in the examples above.
By way of example and not limitation, the controller 130 and the memory device 150 may be integrated into a single semiconductor device to form an SSD. When the memory system 110 is used as an SSD, the operating speed of the host 102 connected to the memory system 110 can be increased compared to when the host 102 is connected to a hard disk. The controller 130 and the memory device 150 may also be integrated into one semiconductor device to form a memory card such as: a PC card (PCMCIA), a CompactFlash (CF) card, a smart media card (SM, SMC), a memory stick, a multimedia card (MMC, RS-MMC, micro-MMC), an SD card (SD, mini-SD, micro-SD, SDHC), a Universal Flash Storage (UFS) device, or the like.
The memory system 110 may be configured as part of, for example: a computer, an ultra-mobile PC (UMPC), a workstation, a netbook, a Personal Digital Assistant (PDA), a portable computer, a web tablet, a wireless phone, a mobile phone, a smart phone, an e-book, a Portable Multimedia Player (PMP), a portable game console, a navigation system, a black box, a digital camera, a Digital Multimedia Broadcasting (DMB) player, a three-dimensional (3D) television, a smart television, a digital audio recorder, a digital audio player, a digital picture recorder, a digital picture player, a digital video recorder, a digital video player, a storage device constituting a data center, a device capable of transmitting and receiving information in a wireless environment, one of various electronic devices constituting a home network, one of various electronic devices constituting a computer network, one of various electronic devices constituting a telematics network, a Radio Frequency Identification (RFID) device, or one of various components constituting a computing system.
Memory device 150 may be a non-volatile memory device and may retain data stored therein even when power is not supplied thereto. The memory device 150 may store data provided from the host 102 through a write operation while providing data stored therein to the host 102 through a read operation. Memory device 150 may include a plurality of memory blocks 152, 154, 156, and each of the plurality of memory blocks 152, 154, 156 may include a plurality of pages. Each of the plurality of pages may include a plurality of memory cells electrically coupled with a plurality of Word Lines (WLs). Memory device 150 also includes a plurality of memory dies, each of the plurality of memory dies including a plurality of planes, each of the plurality of planes including a plurality of memory blocks 152, 154, 156. In addition, the memory device 150 may be a nonvolatile memory device such as a flash memory, and the flash memory may be implemented in a two-dimensional or three-dimensional stacked structure.
The controller 130 may control overall operations of the memory device 150, such as a read operation, a write operation, a program operation, and an erase operation. For example, the controller 130 may control the memory device 150 in response to a request from the host 102. Controller 130 may provide data read from memory device 150 to host 102. The controller 130 may also store data provided by the host 102 into the memory device 150.
Controller 130 may include a host interface (I/F)132, a processor 134, an Error Correction Code (ECC) component 138, a Power Management Unit (PMU)140, a memory interface (I/F)142, and memories 201 and 202, all operatively coupled via an internal bus.
The host interface 132 may process commands and data provided by the host 102 and may communicate with the host 102 through at least one of various interface protocols, such as: Universal Serial Bus (USB), Multi-Media Card (MMC), Peripheral Component Interconnect Express (PCI-e or PCIe), Small Computer System Interface (SCSI), Serial Attached SCSI (SAS), Serial Advanced Technology Attachment (SATA), Parallel Advanced Technology Attachment (PATA), Enhanced Small Disk Interface (ESDI), and Integrated Drive Electronics (IDE). According to an embodiment, the host interface 132 is a component that exchanges data with the host 102, and it may be implemented through firmware, referred to as a Host Interface Layer (HIL), that is run by a processor and stored on a non-transitory computer readable medium.
The ECC component 138 may correct erroneous bits of data to be processed in (e.g., output from) the memory device 150, and the ECC component 138 may include an ECC encoder and an ECC decoder. Here, the ECC encoder may perform error correction encoding on data to be programmed into the memory device 150 to generate encoded data to which one or more parity or check bits are added, and store the encoded data in the memory device 150. When the controller 130 reads data stored in the memory device 150, the ECC decoder may detect and correct errors included in the data read from the memory device 150. In other words, after performing error correction decoding on data read from the memory device 150, the ECC component 138 may determine whether the error correction decoding has succeeded and output an indication signal (e.g., a correction success signal or a correction failure signal). The ECC component 138 may use the parity bits generated in the ECC encoding process to correct one or more erroneous bits of the read data. When the number of erroneous bits is greater than or equal to the threshold number of correctable erroneous bits, the ECC component 138 may not correct the erroneous bits but output an error correction failure signal indicating that the correcting of the erroneous bits failed.
The ECC component 138 may perform error correction operations based on coded modulation such as: Low Density Parity Check (LDPC) codes, Bose-Chaudhuri-Hocquenghem (BCH) codes, turbo codes, Reed-Solomon (RS) codes, convolutional codes, Recursive Systematic Codes (RSC), Trellis Coded Modulation (TCM), and Block Coded Modulation (BCM). The ECC component 138 may include any and all circuits, modules, systems, or devices that perform error correction operations based on at least one of the above-described codes.
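As a concrete, simplified illustration of the encode/detect/correct behavior described above, the following C sketch implements a single-error-correcting Hamming(7,4) code. Real storage controllers use much stronger codes such as LDPC or BCH; this toy example only shows how parity bits added at encoding time let the decoder locate and flip an erroneous bit, and nothing in it is taken from the disclosure.

#include <stdint.h>
#include <stdio.h>

/* Encode 4 data bits (bits 0..3 of 'data') into a 7-bit Hamming(7,4) codeword.
 * Bit layout of the codeword (bit index = position - 1):
 *   positions 1, 2, 4 = parity bits; positions 3, 5, 6, 7 = data bits d1..d4. */
static uint8_t hamming74_encode(uint8_t data)
{
    uint8_t d1 = (data >> 0) & 1, d2 = (data >> 1) & 1;
    uint8_t d3 = (data >> 2) & 1, d4 = (data >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;   /* covers positions 1,3,5,7 */
    uint8_t p2 = d1 ^ d3 ^ d4;   /* covers positions 2,3,6,7 */
    uint8_t p3 = d2 ^ d3 ^ d4;   /* covers positions 4,5,6,7 */
    return (uint8_t)(p1 << 0 | p2 << 1 | d1 << 2 | p3 << 3 |
                     d2 << 4 | d3 << 5 | d4 << 6);
}

/* Decode a 7-bit codeword; corrects any single-bit error and returns the
 * recovered 4 data bits. */
static uint8_t hamming74_decode(uint8_t cw)
{
    uint8_t s1 = ((cw >> 0) ^ (cw >> 2) ^ (cw >> 4) ^ (cw >> 6)) & 1;
    uint8_t s2 = ((cw >> 1) ^ (cw >> 2) ^ (cw >> 5) ^ (cw >> 6)) & 1;
    uint8_t s3 = ((cw >> 3) ^ (cw >> 4) ^ (cw >> 5) ^ (cw >> 6)) & 1;
    uint8_t pos = (uint8_t)(s1 | (s2 << 1) | (s3 << 2));  /* 0 = no error */
    if (pos)
        cw ^= (uint8_t)(1u << (pos - 1));                 /* flip the bad bit */
    return (uint8_t)(((cw >> 2) & 1) | (((cw >> 4) & 1) << 1) |
                     (((cw >> 5) & 1) << 2) | (((cw >> 6) & 1) << 3));
}

int main(void)
{
    uint8_t cw = hamming74_encode(0xB);   /* data = 1011b */
    cw ^= 1u << 5;                        /* inject a single-bit error */
    printf("recovered data: 0x%X\n", (unsigned)hamming74_decode(cw));  /* prints 0xB */
    return 0;
}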
PMU140 may manage the power provided at controller 130. In an embodiment, PMU140 may selectively supply power to first memory 201 and second memory 202 during a sleep mode, where the entry and exit of the sleep mode is determined by host 102. For example, during a sleep mode interval, PMU140 may supply power to first memory 201 and suspend supplying power to second memory 202.
Memory interface 142 may serve as an interface to handle commands and data transferred between controller 130 and memory device 150, allowing controller 130 to control memory device 150 in response to requests passed from host 102. When memory device 150 is a flash memory, and in particular a NAND flash memory, memory interface 142 may, under the control of processor 134, generate control signals for memory device 150 and process data provided to or received from memory device 150. Memory interface 142 may provide an interface, such as a NAND flash interface, for handling commands and data between controller 130 and memory device 150. According to an embodiment, memory interface 142 may be implemented as a component that exchanges data with memory device 150 through firmware, referred to as a Flash Interface Layer (FIL), that is executed by a processor and stored on a non-transitory computer-readable medium.
The memories 201 and 202 may support operations performed by the memory system 110 and the controller 130. The memories 201 and 202 may store temporary or transactional data generated or provided for operations in the memory system 110 and the controller 130. The controller 130 may control the memory device 150 in response to a request from the host 102. The controller 130 may transfer data read from the memory device 150 to the host 102. The controller 130 may store data received from the host 102 within the memory device 150. The memories 201 and 202 may be used to store data required for the controller 130 and the memory device 150 to perform operations such as read operations or program/write operations.
The memories 201 and 202 may be implemented as volatile memories, respectively. Memories 201 and 202 may be implemented using Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), or both. Although fig. 2 shows the memories 201 and 202 being provided within the controller 130, the embodiment is not limited thereto, and the memories 201 and 202 may be located inside or outside the controller 130. For example, the memories 201 and 202 may be implemented by an external volatile memory having a memory interface that transfers data and/or signals between the memories 201 and 202 and the controller 130.
The memories 201 and 202 may store data necessary to perform operations requested by the host 102, such as data writes and data reads, and/or data transferred between the memory device 150 and the controller 130 for background operations such as garbage collection and wear leveling as described above. According to an embodiment, to support operations in memory system 110, memories 201 and 202 may include a program memory, a data memory, a write buffer/cache, a read buffer/cache, a data buffer/cache, a map buffer/cache, and the like.
In the embodiment of fig. 2, the first memory 201 and the second memory 202 may be configured as memories physically separated from each other and included in the memory system 110. In another embodiment, unlike the configuration of fig. 2, the first memory 201 and the second memory 202 may alternatively be configured as separate areas obtained by dividing a single memory into two or more areas. However, the first memory 201 and the second memory 202 differ in that power is independently supplied to, or suspended from, each of them during the sleep mode interval determined by the host 102. In an embodiment, in the sleep mode interval, power may be supplied to the first memory 201 while power to the second memory 202 may be suspended. At this time, since the first memory 201 and the second memory 202 are both volatile memories, the data stored in the first memory 201 may be maintained throughout the sleep mode interval, while the data stored in the second memory 202 may be lost during the sleep mode interval.
The processor 134 may be implemented using a microprocessor or Central Processing Unit (CPU). The memory system 110 may include one or more processors 134. Processor 134 may control the overall operation of memory system 110. By way of example and not limitation, processor 134 may control a programming operation or a read operation of memory device 150 in response to a write request or a read request input from host 102. According to an embodiment, the processor 134 may use or run firmware stored in a non-transitory computer readable medium to control the overall operation of the memory system 110. This firmware may be referred to herein as a Flash Translation Layer (FTL). The FTL may serve as an interface between the host 102 and the memory device 150. The host 102 may communicate requests for write and read operations to the memory device 150 through the FTL.
The FTL may manage address mapping, garbage collection, wear leveling, etc. In particular, the FTL may load, generate, update, or store mapping data. Thus, the controller 130 may use the mapping data to map logical addresses received from the host 102 to physical addresses of the memory devices 150. Due to the address mapping operation, memory device 150 may perform read or write operations similar to a general purpose memory device. Also, through the address mapping operation based on the mapping data, when the controller 130 attempts to update data stored in a specific page, the controller 130 may program the updated data on another empty page and may invalidate old data of the specific page (e.g., update a physical address corresponding to a logical address of the updated data from a previous specific page to another newly programmed page) to take into account characteristics of the flash memory device. Further, the controller 130 may store the mapping data of the new data in the FTL.
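A minimal sketch of the address-mapping update described above (program new data to an empty page, invalidate the old page, update the L2P entry) could look like the following C fragment; the table size and the flash-side helpers are assumptions:

#include <stdint.h>

#define NUM_LOGICAL_PAGES  1024u       /* assumed logical address space */
#define INVALID_PPA        0xFFFFFFFFu

/* Assumed flash-side hooks, not part of the disclosure. */
uint32_t allocate_empty_page(void);                 /* returns a free physical page */
void nand_program(uint32_t ppa, const void *data);  /* programs one page */
void mark_page_invalid(uint32_t ppa);               /* old data reclaimed later by GC */

static uint32_t l2p_table[NUM_LOGICAL_PAGES];       /* logical -> physical mapping */

void ftl_init(void)
{
    for (uint32_t i = 0; i < NUM_LOGICAL_PAGES; i++)
        l2p_table[i] = INVALID_PPA;   /* no page mapped yet */
}

/* Update of a logical page: program the new data into an empty page,
 * invalidate the old physical page, and update the mapping entry. */
void ftl_write(uint32_t lpa, const void *data)
{
    uint32_t old_ppa = l2p_table[lpa];
    uint32_t new_ppa = allocate_empty_page();

    nand_program(new_ppa, data);
    if (old_ppa != INVALID_PPA)
        mark_page_invalid(old_ppa);   /* previous copy becomes stale */
    l2p_table[lpa] = new_ppa;         /* mapping now points to the new page */
}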
For example, when performing operations requested by the host 102 in the memory device 150, the controller 130 uses the processor 134 implemented in a microprocessor or a Central Processing Unit (CPU) or the like. Processor 134, in conjunction with memory device 150, may handle instructions or commands corresponding to commands received from host 102. The controller 130 may perform a foreground operation corresponding to a command received from the host 102 as a command operation, such as a program operation corresponding to a write command, a read operation corresponding to a read command, an erase/discard operation corresponding to an erase/discard command, and a parameter setting operation corresponding to a set parameter command or a set feature command together with a set command.
For another example, controller 130 may use processor 134 to perform background operations on memory device 150. By way of example and not limitation, background operations of the memory device 150 may include copying and storing data stored in a memory block among the memory blocks 152, 154, 156 in the memory device 150 to another memory block, such as a Garbage Collection (GC) operation. The background operation may include moving or swapping data stored in at least one of the memory blocks 152, 154, 156 to at least another one of the memory blocks 152, 154, 156, such as a Wear Leveling (WL) operation. During background operations, controller 130 may use processor 134 to store mapping data stored in controller 130 to at least one of memory blocks 152, 154, and 156 in memory device 150, such as a map flush (flush) operation. A bad block management operation that checks or searches for bad blocks among memory blocks 152, 154, 156 is another example of a background operation that may be performed by processor 134.
In the memory system 110, the controller 130 performs a plurality of command operations corresponding to a plurality of commands input from the host 102. For example, when a plurality of program operations corresponding to a plurality of program commands, a plurality of read operations corresponding to a plurality of read commands, and a plurality of erase operations corresponding to a plurality of erase commands are sequentially, randomly, or alternately executed, the controller 130 may determine which channel(s) or lane(s) among a plurality of channels (or lanes) connecting the controller 130 to a plurality of memory dies included in the memory device 150 is proper or appropriate for performing each operation. The controller 130 may send or transfer data or instructions to perform each operation via the determined channel or lane. After each operation is completed, the plurality of memory dies included in the memory device 150 may each transfer the result of the operation via the same channel or lane. The controller 130 may then transmit a response or acknowledgement signal to the host 102. In an embodiment, the controller 130 may check the state of each channel or each lane. In response to a command input from the host 102, the controller 130 may select at least one channel or lane based on the state of each channel or each lane so that instructions and/or operation results may be transferred with data via the selected channel or lane.
By way of example and not limitation, the controller 130 may identify the status of a plurality of channels (or lanes) associated with a plurality of memory dies included in the memory device 150. The controller 130 may determine the state of each channel or each lane as one of a busy state, a ready state, an active state, an idle state, a normal state, and/or an abnormal state. The controller's determination of which channel or lane the instructions (and/or data) are transferred through may be based on the physical block address, e.g., into which die(s) the instructions (and/or data) are transferred. The controller 130 may refer to descriptors transferred from the memory device 150. A descriptor, which is data having a predetermined format or structure, may include parameters describing a block or page of the memory device 150. For example, the descriptors may include device descriptors, configuration descriptors, unit descriptors, and the like. The controller 130 may reference or use the descriptors to determine via which channel(s) or lane(s) instructions or data are exchanged.
A management unit (not shown) may be included in the processor 134. The management unit may perform bad block management of the memory device 150. The management unit may find a bad memory block in the memory device 150 that does not satisfy conditions for further use, and perform bad block management on the bad memory block. When the memory device 150 is a flash memory such as a NAND flash memory, a program failure may occur during a write operation, for example, during a program operation, due to the characteristics of the NAND logic function. During bad block management, data of a memory block that failed programming, or of a bad memory block, may be programmed into a new memory block. Bad blocks may seriously deteriorate the utilization efficiency of a memory device 150 having a 3D stack structure and the reliability of the memory system 110. Thus, reliable bad block management may enhance or improve the performance of the memory system 110.
Meanwhile, a program operation, a read operation, and an erase operation of the controller 130 will be described below.
First, the controller 130 may store program data corresponding to a program command received from the host 102 in buffers/caches included in the memories 201 and 202 of the controller 130 and then store the data stored in the buffers/caches in the memory blocks 152, 154, and 156 included in the memory device 150. Further, the controller 130 may update the mapping data corresponding to the program operation, and then may store the updated mapping data in the memory blocks 152, 154, and 156 included in the memory device 150.
When receiving a read command from the host 102, the controller 130 may read data corresponding to the read command from the memory device 150 by checking mapping data of the data corresponding to the read command, may store the read data in buffers/caches included in the memories 201 and 202 of the controller 130, and may then provide the data stored in the buffers/caches to the host 102.
When an erase command is received from the host 102, the controller 130 may perform an erase operation that: checks a memory block corresponding to the erase command, erases the data stored in the checked memory block, updates mapping data corresponding to the erased data, and then stores the updated mapping data in the memory blocks 152, 154, and 156 included in the memory device 150.
The mapping data may include logical/physical (L2P: logical to physical) address information and physical/logical (P2L: physical to logical) address information on data stored in the memory block through a program operation.
The data corresponding to the command may include user data and metadata. The metadata may include mapping data generated in the controller 130 corresponding to the user data stored in the memory device 150. Further, the metadata may include information on command data corresponding to a command received from the host 102, information on a command operation corresponding to the command, information on a memory block of the memory device 150 on which the command operation is to be performed, and information on mapping data corresponding to the command operation. In other words, the metadata may include information and data for command operations, rather than user data corresponding to commands received from the host 102.
That is, when a write command is received from the host 102, the controller 130 performs a program operation corresponding to the write command. At this time, the controller 130 may store user data corresponding to the write command in at least one of the memory blocks 152, 154, and 156 of the memory device 150 (e.g., an empty memory block among the memory blocks, an open memory block, or a free memory block in which an erase operation is performed). In addition, the controller 130 may store logical-to-physical address mapping information (L2P mapping) and physical-to-logical address mapping information (P2L mapping) related to user data stored in the memory blocks in the form of a mapping table or a mapping list in empty, open, or free memory blocks among the memory blocks of the memory device 150.
The user data to be stored in the memory device 150 may be divided in units of segments having a preset size. The preset size may be the same as the minimum data size required for the memory system 110 to interoperate with the host 102. According to an embodiment, the size of the data segment, which is a unit of user data, may be determined according to a configuration and control method of the memory device 150. While storing the data segments of the user data in the memory blocks of the memory device 150, the controller 130 may generate or update mapping addresses corresponding to the stored data segments. Meta-segments, each of which corresponds to a metadata unit including a mapping address, e.g., logical-to-physical (L2P) segments and physical-to-logical (P2L) segments that are mapping segments of the mapping data, may be generated by the controller 130 or loaded from a memory block into the memories 201 and 202 and updated there, and the updated mapping segments may then be stored in the memory blocks of the memory device 150.
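To make the segmentation concrete, the sketch below divides user data into fixed-size data segments and generates one L2P entry and the reverse P2L entry per stored segment. SEGMENT_SIZE, the list used as an open block, and the function name are illustrative assumptions, not values defined by the patent.

SEGMENT_SIZE = 4096  # preset segment size agreed between host and memory system (assumed)

def store_user_data(data: bytes, start_lba: int, l2p: dict, p2l: dict, blocks: list):
    segments = [data[i:i + SEGMENT_SIZE] for i in range(0, len(data), SEGMENT_SIZE)]
    for offset, segment in enumerate(segments):
        physical_index = len(blocks)               # next free location in an open block
        blocks.append(segment)                     # program the data segment
        l2p[start_lba + offset] = physical_index   # update the L2P map segment
        p2l[physical_index] = start_lba + offset   # update the P2L map segment

l2p, p2l, blocks = {}, {}, []
store_user_data(bytes(10000), start_lba=100, l2p=l2p, p2l=p2l, blocks=blocks)
assert len(blocks) == 3 and l2p[102] == 2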
Referring to fig. 1 and 2, the memory system 110 may generate the internal data INDATA during processing of an operation of the memory device 150. At this time, the internal data INDATA may be metadata generated in the memory system 110 because metadata is required to perform data write/read operations between the host 102 and the memory device 150. For example, mapping data including L2P (logical to physical) address mapping information and P2L (physical to logical) address mapping information related to data stored in the memory blocks 152, 154, and 156 may be included in the internal data INDATA. In addition, read/write/erase count data required to ensure reliability of data stored in the memory blocks 152, 154, and 156 or to determine a point in time of a background operation may be included in the internal data INDATA. On the other hand, write/read data directly input from the host 102, stored in the memory blocks 152, 154, and 156, and then output to the host 102 may not be included in the internal data INDATA because the write/read data is not generated by the memory system 110.
The memory system 110 may store the internal data INDATA in at least any one of the first memory 201 and the second memory 202. For example, as shown, a first PART1 of the internal data INDATA may be stored in the first memory 201, and a second PART2 of the internal data INDATA may be stored in the second memory 202.
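The composition of the internal data INDATA and its placement across the two memories can be pictured as follows. This is a minimal sketch under assumed field names; the patent does not prescribe a particular layout for PART1 and PART2.

from dataclasses import dataclass, field

@dataclass
class InternalData:
    l2p_map: dict = field(default_factory=dict)       # L2P address mapping information
    p2l_map: dict = field(default_factory=dict)       # P2L address mapping information
    read_counts: dict = field(default_factory=dict)   # per-block read counts
    erase_counts: dict = field(default_factory=dict)  # per-block erase counts

indata = InternalData(l2p_map={0x10: 7}, erase_counts={7: 3})
memory_201 = {"PART1": {"l2p": indata.l2p_map, "p2l": indata.p2l_map}}      # first memory 201
memory_202 = {"PART2": {"read": indata.read_counts, "erase": indata.erase_counts}}  # second memory 202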
When a write or read operation is not scheduled to be performed on the memory system 110 for a predetermined time or more, the host 102 may request the memory system 110 to enter a sleep mode to reduce power consumption of the memory system 110.
When receiving a request to enter the sleep mode from the host 102, the memory system 110 may output the second part PART2 of the internal data INDATA stored in the second memory 202 to the host 102 and then suspend (i.e., stop) the power supply to the second memory 202.
After requesting the memory system 110 to enter the sleep mode, the host 102 may store a second portion PART2 of the internal data INDATA received from the memory system 110 in the internal memory 106.
The host 102 may later request that the memory system 110, which has entered the sleep mode, exit the sleep mode. At this time, the host 102 may output the second part PART2 of the internal data INDATA stored in the internal memory 106 to the memory system 110 while requesting the memory system 110 to exit the sleep mode.
The memory system 110 may resume supplying power to the second memory 202 when the request to exit the sleep mode is received from the host 102. The memory system 110 may then store, in the second memory 202, the second part PART2 of the internal data INDATA received from the host 102 along with the request to exit the sleep mode.
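The whole exchange can be sketched end to end. The classes and method names below are illustrative assumptions; the sketch only shows that PART2 survives the power-off of the second memory because the host holds it in its internal memory.

class MemorySystem:
    def __init__(self, part2):
        self.second_memory = part2        # volatile; lost when its power is cut
        self.powered = True

    def enter_sleep(self):
        exported = self.second_memory     # output PART2 to the host
        self.second_memory = None         # contents are lost once power stops
        self.powered = False
        return exported

    def exit_sleep(self, part2_from_host):
        self.powered = True               # resume power to the second memory
        self.second_memory = part2_from_host

class Host:
    def __init__(self):
        self.internal_memory = {}

    def request_sleep(self, mem):
        self.internal_memory["PART2"] = mem.enter_sleep()

    def request_wakeup(self, mem):
        mem.exit_sleep(self.internal_memory.pop("PART2"))

mem = MemorySystem(part2={"erase_counts": {7: 3}})
host = Host()
host.request_sleep(mem)
host.request_wakeup(mem)
assert mem.second_memory == {"erase_counts": {7: 3}}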
FIG. 3 illustrates a configuration of host 102 and memory system 110 in data processing system 100, according to an embodiment of the present disclosure.
FIG. 4 illustrates read operations of a host and a memory system in a data processing system according to an embodiment of the present disclosure.
Referring to fig. 3, the host 102 may include a processor 104, a memory 106, and a host controller interface 108. Memory system 110 may include a controller 130' and a memory device 150. Herein, the controller 130' and the memory device 150 described with reference to fig. 3 may generally correspond to the controller 130 and the memory device 150 described with reference to fig. 2.
Hereinafter, the technically distinguishable differences between the controller 130' and the memory device 150 shown in fig. 3 and the controller 130 and the memory device 150 shown in fig. 2 will be mainly described. In particular, the logic block 160 in the controller 130' may correspond to the Flash Translation Layer (FTL) described with reference to fig. 2. However, according to an embodiment, the logic block 160 in the controller 130' may perform additional functions that the Flash Translation Layer (FTL) described with reference to fig. 2 does not provide.
The host 102 may include: a processor 104 having higher performance than that of the memory system 110; and a memory 106 capable of storing a greater amount of data than the memory 144 of the memory system 110 that cooperates with the host 102. The processor 104 and the memory 106 in the host 102 have advantages in terms of space and upgradability. For example, the processor 104 and the memory 106 have fewer space limitations than the processor 134 and the memory 144 in the memory system 110. The processor 104 and the memory 106 may also be replaced with better-performing parts, unlike the processor 134 and the memory 144 in the memory system 110. In an embodiment, the memory system 110 may utilize resources owned by the host 102 to improve its own operating efficiency.
As the amount of data that can be stored in the memory device 150 of the memory system 110 increases, the amount of metadata corresponding to the data stored in the memory system 110 also increases. When the capacity available for loading metadata into the memory 144 of the controller 130 is limited or restricted, the increase in the amount of metadata places an operational burden on the controller 130. For example, only a portion, rather than all, of the metadata may be loaded because of the limited space or region allocated for metadata in the memory 144 of the controller 130. If the loaded metadata does not include metadata for the physical location to which the host 102 has requested access, the controller 130 must store the loaded metadata back to the memory device 150 when some of the loaded metadata has been updated, and then load the metadata for the physical location to which the host 102 has requested access. These operations must be completed before the controller 130 can perform the read operation or write operation required by the host 102, and they may degrade the performance of the memory system 110.
The storage capacity of the memory 106 included in the host 102 may be several tens or hundreds of times larger than the storage capacity of the memory 144 included in the controller 130. The memory system 110 may transfer the metadata 166 used by the controller 130 to the memory 106 in the host 102 so that at least some portion of the memory 106 in the host 102 may be accessed by the memory system 110. At least some portions of the memory 106 may be used as cache memory for address translation required to read or write data in the memory system 110. In this case, the host 102 translates the logical address to a physical address based on the metadata 166 stored in the memory 106 before transmitting the logical address to the memory system 110 with the request, command, or instruction. The host 102 may then transmit the translated physical address to the memory system 110 along with a request, command, or instruction. The memory system 110 receiving the translated physical address along with the request, command, or instruction may skip the internal process of translating the logical address to the physical address and access the memory device 150 based on the physical address received from the host 102. In this case, overhead (e.g., an operation burden) of the controller 130 to load metadata from the memory device 150 for address conversion may be eliminated, and operation efficiency of the memory system 110 may be improved.
On the other hand, even if the memory system 110 transmits the metadata 166 to the host 102, the memory system 110 may control mapping information, such as metadata generation, erasure, update, and the like, based on the metadata 166. The controller 130 in the memory system 110 may perform a predetermined operation according to an operation state of the memory device 150, and may determine a physical address, i.e., in which physical location in the memory device 150 data transferred from the host 102 is to be stored. Because the physical address of data stored in the memory device 150 may be changed, and the host 102 has not recognized the changed physical address, the memory system 110 may actively control the metadata 166.
While the memory system 110 controls the metadata for address translation, it may be determined that the memory system 110 needs to modify or update the metadata 166 previously transmitted to the host 102. The memory system 110 may send a signal including the metadata to the host 102 to request an update of the metadata 166 stored in the host 102. The host 102 may update the metadata 166 stored in the memory 106 in response to requests received from the memory system 110. This allows the metadata 166 stored in the memory 106 in the host 102 to be kept up-to-date so that, even if the host controller interface 108 uses the metadata 166 stored in the memory 106, the logical addresses are correctly converted to physical addresses and the converted physical addresses are transferred to the memory system 110 along with the logical addresses.
Meanwhile, the controller 130 in the memory system 110 may control (e.g., create, delete, update, etc.) the logical/physical information items or the physical/logical information items and store the logical information items or the physical information items to the memory device 150. Because the memory 106 in the host 102 is a type of volatile memory, the metadata 166 stored in the memory 106 may disappear when an event occurs, such as an interruption in power to the host 102 and the memory system 110. Thus, the controller 130 in the memory system 110 may not only maintain the latest state of the metadata 166 stored in the memory 106 of the host 102, but may also store logical/physical information items or physical/logical information items of the latest state included in the metadata 166 in the memory device 150.
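The interplay between the host-side copy of the metadata 166 and the copy kept in the memory device 150 can be sketched as follows; host_map_cache and nonvolatile_store are illustrative stand-ins, not structures defined by the patent.

host_map_cache = {0x10: 7, 0x11: 8}        # metadata 166 held in host memory 106
nonvolatile_store = dict(host_map_cache)   # latest state kept in memory device 150

def memory_system_updates_mapping(logical, new_physical):
    update = {logical: new_physical}
    nonvolatile_store.update(update)   # persist the latest L2P information in the memory device
    return update                      # signal sent to the host requesting a metadata update

def host_applies_update(update):
    host_map_cache.update(update)      # host refreshes its copy of metadata 166

host_applies_update(memory_system_updates_mapping(0x10, 9))
assert host_map_cache[0x10] == 9 and nonvolatile_store[0x10] == 9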
Referring to fig. 3 and 4, operations are described in which the host 102 requests to read data stored in the memory system 110 when the metadata 166 is stored in the memory 106 of the host 102.
Power is supplied to the host 102 and the memory system 110, and then the host 102 and the memory system 110 may be engaged with each other. The logical-to-physical address mapping metadata (L2P MAP) stored in memory device 150 may be transferred to host memory 106 when host 102 and memory system 110 cooperate.
When a read command (read CMD) is issued by the processor 104 in the host 102, the read command is transmitted to the host controller interface 108. After receiving the read command, the host controller interface 108 searches the metadata (L2P MAP) stored in the host memory 106 for a physical address corresponding to the logical address associated with the read command. Based on the metadata (L2P MAP) stored in the host memory 106, the host controller interface 108 may determine the physical address corresponding to the logical address. In this way, the host controller interface 108 performs the address translation of the logical address associated with the read command.
The host controller interface 108 transfers a read command (read CMD) to the controller 130 of the memory system 110 along with the logical address and the physical address. The controller 130 may access the memory device 150 based on a physical address input with the read command. Data stored at a location corresponding to a physical address in the memory device 150 may be transferred to the host memory 106 in response to a read command (read CMD).
An operation of reading data stored in the memory device 150, which includes nonvolatile memory, may take more time than an operation of reading data stored in volatile memory such as the host memory 106. In the above-described handling of the read command (read CMD), the controller 130 may skip or omit the address translation corresponding to the logical address received from the host 102 (e.g., searching for and identifying a physical address associated with the logical address). For example, because the address translation is performed in the host 102, the controller 130 does not have to load metadata from the memory device 150 or replace metadata stored in the memory 144 when the metadata needed for the address translation cannot be found in the memory 144. This allows the memory system 110 to perform the read operation requested by the host 102 more quickly.
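The read path of fig. 4 can be condensed into a few lines. This is a minimal sketch under assumed names; it only shows that the translation happens in the host and that the memory system uses the supplied physical address directly.

l2p_map_in_host = {0x10: 7}             # L2P MAP cached in host memory 106
memory_device = {7: b"stored data"}     # data at physical locations in memory device 150

def host_controller_interface_read(logical_addr):
    physical_addr = l2p_map_in_host[logical_addr]   # address translation performed in the host
    return ("READ_CMD", logical_addr, physical_addr)

def memory_system_handle(cmd, logical_addr, physical_addr):
    # address translation is skipped; the physical address from the host is used as-is
    return memory_device[physical_addr]

cmd = host_controller_interface_read(0x10)
assert memory_system_handle(*cmd) == b"stored data"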
The operations of the host 102 and the memory system 110 that have been described with reference to fig. 3 and 4 may be operations of the host 102 and the memory system 110 that are performed in a normal mode. That is, the operations of the host 102 and the memory system 110, which have been described with reference to fig. 3 and 4, may indicate operations of storing the metadata 166 generated by the memory system 110 in the internal memory 106 included in the host 102, and managing the metadata 166 during an operation in which the host 102 requests the memory system 110 to write specific data or requests the memory system 110 to read specific data stored therein. In such a normal mode, the memory system 110 may perform many operations requested by the host. Thus, the memory system 110 may be allowed to use a relatively large amount of power in the normal mode.
On the other hand, the host 102 and the memory system 110 may enter a sleep mode. That is, when it is expected that the host 102 will not need to request the memory system 110 to write or read specific data for a sufficiently long time, the host 102 may request the memory system 110 to enter the sleep mode as described with reference to figs. 1 and 2. In the sleep mode interval, the memory system 110 does not perform any separate operation except preparing to exit the sleep mode. Thus, the memory system 110 may not be allowed to use a large amount of power during the sleep mode time interval. For this reason, the memory system 110 may perform various operations to minimize power consumption in the sleep mode interval. In an embodiment, the memory system 110 may suspend power to a portion of the internal volatile memory (such as the second memory 202 in fig. 2) that would otherwise continuously consume power during the sleep mode time interval. At this time, when the power supply is suspended, the volatile memories 201 and 202 in the memory system 110 lose the data stored therein. Thus, when a request for sleep mode entry is received from the host 102, the memory system 110 may move data already stored in that portion of the volatile memory to another region and then suspend power to that portion of the volatile memory.
As described with reference to figs. 3 and 4, the memory system 110 may transfer the metadata 166 to the host 102 such that the metadata 166 is stored in the internal memory 106 of the host 102. Figs. 3 and 4 show only mapping data as the metadata 166 that is transferred from the memory system 110 to the host 102. However, this is merely an embodiment, and the metadata 166 is not necessarily limited to mapping data, because the host 102 does not examine the composition of the metadata 166 transferred by the memory system 110 when deciding whether to store it in the internal memory 106. Thus, when a request for sleep mode entry is received from the host 102, the memory system 110 may transfer data stored in a portion of the internal volatile memory to the host 102, so that the data is stored in the internal memory 106 of the host 102. The memory system 110 may then suspend power to that portion of the volatile memory, as described with reference to figs. 1 and 2.
Fig. 5A to 5E illustrate sleep mode entry and exit operations in the data processing system according to the present embodiment.
The sleep mode entry and exit operations of the data processing system 100 described with reference to figs. 5A and 5B may be a detailed version of the sleep mode entry and exit operations of the data processing system 100 described with reference to figs. 1 and 2. Therefore, the following description will focus on the technically distinguishable differences between the sleep mode entry and exit operations described with reference to figs. 5A and 5B and the operations described with reference to figs. 1 and 2.
In an embodiment, the host 102 can generate a SLEEP COMMAND and transmit the SLEEP COMMAND to the memory system 110 to request the memory system 110 to enter a SLEEP mode.
Upon receiving the SLEEP COMMAND from the host 102, the memory system 110 may check the size of the internal data INDATA stored in the second volatile memory VOLATILE MEMORY2 within the memory system 110. That is, the memory system 110 may compare the size of the internal data INDATA stored in the second volatile memory VOLATILE MEMORY2 with a reference size REF SIZE. This is because, in an embodiment, the size of data that the memory system 110 can transfer to the host 102 at a time in response to the SLEEP COMMAND may be limited to a predetermined amount. That is, in order to store data transferred from the memory system 110 in the internal memory of the host 102, a predetermined protocol between the host 102 and the memory system 110 must be followed, and the predetermined protocol may limit the size of data that can be transferred at once (e.g., in a single message). The reference size REF SIZE may correspond to the size of data that the memory system 110 can transfer to the host 102 at a time in response to the SLEEP COMMAND.
Accordingly, when the SLEEP COMMAND is received from the host 102, the memory system 110 may decide whether to divide the internal data INDATA stored in the second volatile memory VOLATILE MEMORY2 according to the result of comparing the size of the internal data INDATA stored in the second volatile memory VOLATILE MEMORY2 with the reference size REF SIZE. When the size of the internal data INDATA stored in the second volatile memory VOLATILE MEMORY2 is equal to or smaller than the reference size REF SIZE (SIZE OF INDATA <= REF SIZE), the memory system 110 may not divide the internal data INDATA. On the other hand, when the size of the internal data INDATA stored in the second volatile memory VOLATILE MEMORY2 exceeds the reference size REF SIZE (SIZE OF INDATA > REF SIZE), the memory system 110 may divide the internal data INDATA stored in the second volatile memory VOLATILE MEMORY2.
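The decision can be expressed compactly. The sketch below is illustrative only: REF_SIZE is an assumed value, not one fixed by the patent or by any particular host interface protocol.

REF_SIZE = 4096  # bytes transferable in a single message in response to the SLEEP COMMAND (assumed)

def plan_sleep_transfer(indata: bytes):
    if len(indata) <= REF_SIZE:
        return [indata]                      # SIZE OF INDATA <= REF SIZE: send as-is, no division
    return [indata[i:i + REF_SIZE]           # SIZE OF INDATA > REF SIZE: divide into REF_SIZE chunks
            for i in range(0, len(indata), REF_SIZE)]

assert len(plan_sleep_transfer(bytes(1000))) == 1
assert len(plan_sleep_transfer(bytes(10000))) == 3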
Fig. 5A shows how the memory system 110 enters and exits the sleep mode when the size of the internal data INDATA stored in the second volatile memory VOLATILE MEMORY2 is equal to or smaller than the reference size REF SIZE (SIZE OF INDATA <= REF SIZE).
First, the memory system 110 may output the internal data INDATA stored in the second volatile memory VOLATILE MEMORY2 to the host 102, then transmit an acknowledgement of entry into the sleep mode (ACK SLEEP MODE ENTRY) to the host 102, and enter the sleep mode.
While in the sleep mode, the memory system 110 may supply power to the first volatile memory therein and suspend power to the second volatile memory VOLATILE MEMORY2. Therefore, the second volatile memory VOLATILE MEMORY2, whose power supply is suspended during the sleep mode, may lose all of the internal data INDATA stored therein, while the first volatile memory, which remains powered, may retain the data stored therein.
The host 102 may store the internal data INDATA received from the memory system 110 in the internal memory of the host 102 during the time when the memory system 110 enters, is in, and exits the sleep mode.
To request that the memory system 110, which has entered the sleep mode, exit the sleep mode, the host 102 may generate a wakeup command WAKEUP COMMAND and transmit the WAKEUP COMMAND to the memory system 110. The host 102 may include the internal data INDATA stored in the internal memory in the WAKEUP COMMAND, and output the WAKEUP COMMAND with the internal data INDATA to the memory system 110.
Thus, upon receiving the WAKEUP COMMAND from the host 102, the memory system 110 may exit the sleep mode and resume supplying power to the second volatile memory VOLATILE MEMORY2, which had been suspended from power for the sleep mode time interval. Then, the memory system 110 may store the internal data INDATA received from the host 102 together with the WAKEUP COMMAND in the second volatile memory VOLATILE MEMORY2, and transmit an acknowledgement of exit from the sleep mode (ACK SLEEP MODE EXIT) to the host 102.
In contrast to fig. 5A, fig. 5B illustrates an operation in which the memory system 110 enters and exits the sleep mode when the size of the internal data INDATA stored in the second volatile memory VOLATILE MEMORY2 within the memory system 110 exceeds the reference size REF SIZE (SIZE OF INDATA > REF SIZE). The operations above line A in fig. 5B may be performed in the same manner as the operations shown above line A in fig. 5A.
First, the memory system 110 may divide the internal data INDATA stored in the second volatile memory VOLATILE MEMORY2 into N data parts (DIVIDING INDATA STORED IN VOLATILE MEMORY2 INTO 1/N), where N may be a natural number equal to or greater than 2. The memory system 110 may then compare the value of N with a reference value REF VALUE (COMPARING VALUE OF 'N' WITH REF VALUE). This is because the overall size of data that the memory system 110 can transfer to the host 102 in response to the SLEEP COMMAND may also have a predetermined limit. That is, in order to store data transferred from the memory system 110 in the internal memory of the host 102, the predetermined protocol between the host 102 and the memory system 110 must be followed, and the predetermined protocol may limit the overall size of data that can be transferred. The reference value REF VALUE may define the overall size of data that the memory system 110 can transfer to the host 102 in response to the SLEEP COMMAND. For reference, as described above, the size of data that the memory system 110 can transfer to the host 102 at a time in response to the SLEEP COMMAND has been defined as the reference size REF SIZE. Further, the reason the internal data INDATA stored in the second volatile memory VOLATILE MEMORY2 is divided into N data parts is that the size of the internal data INDATA is larger than the reference size REF SIZE. Therefore, it is assumed that when the internal data INDATA is divided into N data parts, the internal data INDATA is divided into parts of the reference size REF SIZE. Thus, the reference value REF VALUE compared with N in effect bounds the overall size of data that the memory system 110 may transfer to the host 102 in response to the SLEEP COMMAND.
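A short sketch of this second check follows; REF_SIZE and REF_VALUE are illustrative assumptions, chosen only to make the two branches (the fig. 5B path and the fig. 5C path) visible.

import math

REF_SIZE = 4096    # size of one transfer in response to the SLEEP COMMAND (assumed)
REF_VALUE = 8      # maximum number of transfers the host will accept (assumed)

def sleep_entry_decision(indata: bytes):
    n = math.ceil(len(indata) / REF_SIZE)     # divide INDATA into N parts of REF_SIZE each
    if n <= REF_VALUE:
        return "SEND_N_PARTS_TO_HOST", n      # fig. 5B path
    return "PROGRAM_TO_NONVOLATILE", n        # fig. 5C path

assert sleep_entry_decision(bytes(20000)) == ("SEND_N_PARTS_TO_HOST", 5)
assert sleep_entry_decision(bytes(100000)) == ("PROGRAM_TO_NONVOLATILE", 25)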
Fig. 5B shows the case where the value of N is equal to or less than the reference value REF VALUE (VALUE OF 'N' <= REF VALUE).
Specifically, when the value of N is equal to or less than the reference value REF VALUE (VALUE OF 'N' <= REF VALUE), the memory system 110 may sequentially output the N parts of the internal data INDATA, from the first portion 1ST PORTION OF INDATA to the Nth portion NTH PORTION OF INDATA, to the host 102 as a response to the SLEEP COMMAND, then transmit an acknowledgement of entry into the sleep mode (ACK SLEEP MODE ENTRY) to the host 102, and enter the sleep mode.
When entering the sleep mode, the memory system 110 may supply power to the first volatile memory therein and suspend power to the second volatile memory VOLATILE MEMORY2. At this time, the second volatile memory VOLATILE MEMORY2, whose power supply is suspended during the sleep mode, may lose all the internal data INDATA stored therein, while the first volatile memory, which remains powered, may retain the data stored therein.
The host 102 may store the internal data INDATA received from the memory system 110 in the internal memory of the host 102 between a first point in time when the host 102 generates the SLEEP COMMAND and outputs it to the memory system 110 and a second point in time when the host 102 transmits the wakeup command WAKEUP COMMAND to the memory system 110 so that the memory system 110 exits the sleep mode.
To request that the memory system 110, which has entered the sleep mode, exit the sleep mode, the host 102 may generate the WAKEUP COMMAND and transmit it to the memory system 110. The host 102 may include the internal data INDATA stored in the internal memory in the WAKEUP COMMAND, and output the WAKEUP COMMAND with the internal data INDATA to the memory system 110.
Thus, upon receiving the WAKEUP COMMAND from the host 102, the memory system 110 may exit the sleep mode and resume supplying power to the second volatile memory VOLATILE MEMORY2, which had been suspended from power during the sleep mode time interval. Then, the memory system 110 may store the internal data INDATA received from the host 102 together with the WAKEUP COMMAND in the second volatile memory VOLATILE MEMORY2, and transmit an acknowledgement of exit from the sleep mode (ACK SLEEP MODE EXIT) to the host 102.
In contrast to fig. 5B, fig. 5C shows the case where the value of N exceeds the reference value REF VALUE (VALUE OF 'N' > REF VALUE). The operations above line B in fig. 5C may be performed in the same manner as the operations shown above line B in fig. 5B (including the operations shown above line A in fig. 5A).
Specifically, when the value of N exceeds the reference value REF VALUE (VALUE OF 'N' > REF VALUE), the memory system 110 may program the internal data INDATA stored in the second volatile memory VOLATILE MEMORY2 into the nonvolatile memory NONVOLATILE MEMORY, then transmit an acknowledgement of entry into the sleep mode (ACK SLEEP MODE ENTRY) to the host 102, and enter the sleep mode. That is, the memory system 110 may not output any data to the host 102 in response to the SLEEP COMMAND, but may still acknowledge entry into the sleep mode to the host 102. Because the host 102 receives no data from the memory system 110 in response to the SLEEP COMMAND, the host 102 does not store any corresponding data in its internal memory.
While in the sleep mode, the memory system 110 may supply power to the first volatile memory therein and suspend power to the second volatile memory VOLATILE MEMORY2. At this time, the second volatile memory VOLATILE MEMORY2, whose power supply is suspended during the sleep mode, may lose all the internal data INDATA stored therein, while the first volatile memory, which remains powered, may retain the data stored therein.
To request that the memory system 110, which has entered the sleep mode, exit the sleep mode, the host 102 may generate the WAKEUP COMMAND and transmit it to the memory system 110. At this time, since the host 102 received no data from the memory system 110 as a response to the SLEEP COMMAND, the host 102 may not include any data in the WAKEUP COMMAND output to the memory system 110.
Upon receiving the WAKEUP COMMAND from the host 102, the memory system 110 may exit the sleep mode and resume supplying power to the second volatile memory VOLATILE MEMORY2, which had been suspended from power during the sleep mode time interval. Then, the memory system 110 may read the internal data INDATA, which was programmed into the nonvolatile memory NONVOLATILE MEMORY at the time of sleep mode entry, from the nonvolatile memory NONVOLATILE MEMORY, store the read data in the second volatile memory VOLATILE MEMORY2, and transmit an acknowledgement of exit from the sleep mode (ACK SLEEP MODE EXIT) to the host 102.
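The fig. 5C fallback can be sketched as follows: when N exceeds REF VALUE the memory system keeps INDATA itself by programming it to the nonvolatile memory before sleep and reads it back into the second volatile memory on wakeup. The dict-based "devices" below are illustrative stand-ins only.

nonvolatile_memory = {}
second_volatile_memory = {"INDATA": b"mapping and count data"}

def enter_sleep_via_nonvolatile():
    # program INDATA to the nonvolatile memory before power to the second volatile memory is cut
    nonvolatile_memory["INDATA"] = second_volatile_memory.pop("INDATA")

def exit_sleep_via_nonvolatile():
    # power is resumed, then INDATA is read back from the nonvolatile memory
    second_volatile_memory["INDATA"] = nonvolatile_memory["INDATA"]

enter_sleep_via_nonvolatile()
exit_sleep_via_nonvolatile()
assert second_volatile_memory["INDATA"] == b"mapping and count data"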
Referring to fig. 5D, in an embodiment, upon receiving the SLEEP COMMAND from the host 102, the memory system 110 may request confirmation from the host 102 that the transfer of the internal data INDATA to the host 102 is supported (CONFIRM INDATA TRANSFER) before transferring the internal data INDATA stored in the second volatile memory VOLATILE MEMORY2 to the host 102.
Between the first point in time when the host 102 generates the SLEEP COMMAND and outputs it to the memory system 110 and the point in time when the acknowledgement of entry into the sleep mode is received from the memory system 110, the host 102 may check the status of its internal memory (CHECKING INTERNAL MEMORY) when the request to confirm the transfer of the internal data INDATA is received from the memory system 110. Based on the result of checking the status of the internal memory, the host 102 may allow the memory system 110 to transfer the internal data INDATA (TRANSFER ALLOWED) or not allow the memory system 110 to transfer the internal data INDATA (TRANSFER NOT ALLOWED).
When the host 102 confirms (TRANSFER ALLOWED) that the memory system 110 is allowed to transfer the internal data INDATA to the host 102, the memory system 110 may output the internal data INDATA stored in the second volatile memory VOLATILE MEMORY2 to the host 102 by performing the operations described with reference to figs. 5A to 5C. That is, the operations described with reference to figs. 5A to 5C may be applied as the operations below line A in fig. 5D.
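The handshake of figs. 5D and 5E can be sketched as follows: before moving INDATA, the memory system asks the host to confirm the transfer; the host checks its internal memory and answers TRANSFER ALLOWED or TRANSFER NOT ALLOWED. The free-space threshold and function names are illustrative assumptions.

def host_confirm_indata_transfer(free_internal_memory: int, indata_size: int) -> str:
    # the host checks the state of its internal memory before answering
    if free_internal_memory >= indata_size:
        return "TRANSFER ALLOWED"
    return "TRANSFER NOT ALLOWED"

def memory_system_sleep_entry(indata: bytes, host_answer: str) -> str:
    if host_answer == "TRANSFER ALLOWED":
        return "OUTPUT INDATA TO HOST"        # the operations of figs. 5A to 5C follow
    return "PROGRAM INDATA TO NONVOLATILE"    # fig. 5E fallback

answer = host_confirm_indata_transfer(free_internal_memory=1 << 20, indata_size=4096)
assert memory_system_sleep_entry(bytes(4096), answer) == "OUTPUT INDATA TO HOST"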
In contrast to the case of fig. 5D, fig. 5E shows a case where the host 102 indicates to the memory system 110 that the memory system 110 is NOT ALLOWED to TRANSFER internal data to the host 102 (TRANSFER NOT ALLOWED).
When the internal data INDATA is not allowed to be transferred to the host 102, the memory system 110 may program the internal data INDATA stored in the second volatile memory VOLATILE MEMORY2 into the nonvolatile memory NONVOLATILE MEMORY, then transmit an acknowledgement of entry into the sleep mode (ACK SLEEP MODE ENTRY) to the host 102, and enter the sleep mode. That is, the memory system 110 may not output any data to the host 102 in response to the SLEEP COMMAND, but may still acknowledge entry into the sleep mode to the host 102. Further, since the host 102 receives no data from the memory system 110 in response to the SLEEP COMMAND, the host 102 does not store such data in its internal memory.
While in the sleep mode, the memory system 110 may supply power to the first volatile memory therein and suspend power to the second volatile memory VOLATILE MEMORY2. During this period, the second volatile memory VOLATILE MEMORY2, whose power supply is suspended, may lose all the internal data INDATA stored therein, while the first volatile memory, whose power supply is maintained, may retain the data stored therein.
To request that the memory system 110, which has entered the sleep mode, exit the sleep mode, the host 102 may generate the WAKEUP COMMAND and transmit it to the memory system 110. At this time, since the host 102 received no data from the memory system 110 in response to the SLEEP COMMAND, the host 102 may not include any data in the WAKEUP COMMAND output to the memory system 110.
Upon receiving the WAKEUP COMMAND from the host 102, the memory system 110 may exit the sleep mode and resume supplying power to the second volatile memory VOLATILE MEMORY2, which had been suspended from power for the sleep mode time interval. Then, the memory system 110 may read the internal data INDATA, which was programmed into the nonvolatile memory NONVOLATILE MEMORY at the time of sleep mode entry, from the nonvolatile memory NONVOLATILE MEMORY, store the read data in the second volatile memory VOLATILE MEMORY2, and transmit an acknowledgement of exit from the sleep mode (ACK SLEEP MODE EXIT) to the host 102.
According to the above-described embodiments, when the memory system is controlled to enter the sleep mode, internal data stored in a volatile memory of the memory system, whose power supply may be suspended in the sleep mode, may be transferred to the host and stored in the internal memory of the host. In addition, when the memory system is controlled to exit the sleep mode, the internal data stored in the internal memory of the host may be received from the host and stored in the volatile memory of the memory system. Compared to programming the internal data stored in the volatile memory of the memory system to a nonvolatile memory device at sleep mode entry and reading it back at sleep mode exit, this operation can significantly reduce the entry and exit delays of the sleep mode. As a result, the operating performance of the memory system can be improved.
According to an embodiment, the apparatus may have the following effects.
When the memory system is controlled to enter the sleep mode, the data processing system may transfer internal data stored in a volatile memory of the memory system, whose power supply is to be suspended in the sleep mode, to the host, and store the internal data in the internal memory of the host. Further, when the memory system is controlled to exit the sleep mode, the data processing system may receive the internal data stored in the internal memory of the host and store the internal data in the volatile memory of the memory system, which is powered again as part of exiting the sleep mode. By this operation, the entry and exit delays of the sleep mode can be significantly reduced compared to when internal data stored in the volatile memory of the memory system is programmed to and read from a nonvolatile memory at sleep mode entry and exit, respectively. Accordingly, the operating performance of the memory system can be improved.
Although various embodiments have been described for illustrative purposes, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.

Claims (20)

1. A memory system, the memory system comprising:
a non-volatile memory device;
a volatile memory device that is suspended from being powered in a sleep mode; and
a controller that temporarily stores internal data in the volatile memory device, the internal data being generated during a process of operation of the non-volatile memory device,
wherein when a sleep command is received from a host, the controller outputs the internal data stored in the volatile memory device to the host in response to the sleep command and then transmits an acknowledgement of entering the sleep mode to the host and enters the sleep mode.
2. The memory system according to claim 1, wherein the controller performs, when the sleep command is received from the host, comparison of a size of the internal data stored in the volatile memory device with a size of data that can be transferred at once in response to the sleep command, and determines whether to divide the internal data based on a result of the comparison.
3. The memory system according to claim 2, wherein when the result of the comparison indicates that the size of the internal data stored in the volatile memory device exceeds the size of data that can be transferred at one time in response to the sleep command, the controller divides the internal data stored in the volatile memory device into N data parts, and sequentially outputs the divided internal data from a first part to an Nth part to the host in response to the sleep command, and then transfers the confirmation of entry into the sleep mode to the host and enters the sleep mode, where N is a natural number equal to or greater than 2.
4. The memory system of claim 2, wherein when a result of the comparison indicates that a size of the internal data stored in the volatile memory device exceeds a size of data that can be transferred at one time in response to the sleep command:
the controller divides the internal data stored in the volatile memory device into N data parts, where N is a natural number equal to or greater than 2, and
when N exceeds a reference value, the controller programs the internal data stored in the volatile memory device into the non-volatile memory device and transmits the confirmation of entering the sleep mode to the host and enters the sleep mode.
5. The memory system according to claim 1, wherein the controller sends an acknowledgement request to the host to confirm whether the internal data stored in the volatile memory device is allowed to be transferred to the host in response to the sleep command.
6. The memory system according to claim 5, wherein when the host confirms that the controller is allowed to transmit the internal data to the host in response to the confirmation request, the controller outputs the internal data stored in the volatile memory device to the host as a response to the sleep command, and then transmits a confirmation of entering the sleep mode to the host and enters the sleep mode.
7. The memory system according to claim 6, wherein when the host does not indicate permission of the controller to transfer the internal data to the host in response to the confirmation request, the controller programs the internal data stored in the volatile memory device to the nonvolatile memory device and then transfers the confirmation of entry into the sleep mode to the host and enters the sleep mode.
8. The memory system of claim 1, wherein when a wake command including the internal data is received from the host, the controller exits the sleep mode, supplies power to the volatile memory device, stores the internal data included in the wake command into the volatile memory device, and communicates an acknowledgement of exiting the sleep mode to the host.
9. A data processing system, the data processing system comprising:
a host generating a sleep command and a wake-up command and outputting the generated commands; and
a memory system including a nonvolatile memory device and a volatile memory device, the memory system suspending power supply to the volatile memory device in a sleep mode and temporarily storing internal data in the volatile memory device, the internal data being generated during processing of an operation of the nonvolatile memory device,
wherein when the sleep command is received from the host, the memory system outputs the internal data stored in the volatile memory device to the host in response to the sleep command, then transmits an acknowledgement of entering the sleep mode to the host and enters the sleep mode, and
the host stores the internal data received from the memory system in an internal memory between a first time point of outputting the sleep command to the memory system and a second time point of transmitting a wake-up command to the memory system.
10. The data processing system of claim 9, wherein when the sleep command is received from the host, the memory system determines whether to divide the internal data by comparing a size of the internal data stored in the volatile memory device with a size of data that can be transferred at once in response to the sleep command.
11. The data processing system of claim 10, wherein when the size of the internal data stored in the volatile memory device exceeds the size of data that can be transferred at one time in response to the sleep command, the memory system divides the internal data stored in the volatile memory device into N data parts, sequentially outputs the divided internal data from a first part to an Nth part to the host in response to the sleep command, and then transfers the confirmation of entering the sleep mode to the host and enters the sleep mode, where N is a natural number equal to or greater than 2.
12. The data processing system of claim 10, wherein when the size of the internal data stored in the volatile memory device exceeds the size of data that can be transferred at one time in response to the sleep command:
the memory system divides the internal data stored in the volatile memory device into N data parts, and
when N exceeds a reference value, the memory system programs the internal data stored in the volatile memory device to the non-volatile memory device, then transmits the confirmation of entering the sleep mode to the host and enters the sleep mode.
13. The data processing system of claim 9, wherein in response to receiving the sleep command, the memory system sends an acknowledgement request to the host to confirm whether the internal data stored in the volatile memory device is allowed to be transferred to the host,
wherein when the host receives the acknowledgement request from the memory system between the first point in time and receiving the acknowledgement to enter the sleep mode, the host:
checks the state of the internal memory,
determines whether to allow the memory system to transfer the internal data to the host according to a result of the check, and
transmits an acknowledgement to the memory system indicating whether the memory system is allowed to transfer the internal data to the host.
14. The data processing system of claim 13, wherein in response to the acknowledgement indicating that the memory system is allowed to transfer the internal data to the host, the memory system outputs the internal data stored in the volatile memory device to the host in response to the sleep command, and then transfers the acknowledgement of entry into the sleep mode to the host and enters the sleep mode.
15. The data processing system of claim 14, wherein in response to the acknowledgement indicating that the memory system is not allowed to transfer the internal data to the host, the memory system programs the internal data stored in the volatile memory device to the non-volatile memory device and then transfers the acknowledgement to enter the sleep mode to the host and enters the sleep mode.
16. The data processing system of claim 15, wherein the host includes second data stored in the internal memory in the wake command and outputs the wake command to the memory system together with the second data,
wherein when the memory system receives the wake command from the host, the memory system exits the sleep mode, supplies power to the volatile memory device, stores the second data included in the wake command into the volatile memory device, and transmits an acknowledgement to the host to exit the sleep mode.
17. A method of operating a memory system, the memory system including a non-volatile memory device and a volatile memory device, the method of operation comprising:
receiving a sleep command from a host;
outputting, to the host in response to the sleep command, internal data stored in the volatile memory device after receiving the sleep command, the internal data being data generated during processing of an operation of the nonvolatile memory device;
transmitting an acknowledgement of entry into a sleep mode to the host; and
entering the sleep mode after outputting the internal data, wherein entering the sleep mode includes suspending power to the volatile memory device.
18. The operating method of claim 17, wherein outputting the internal data comprises:
determining whether to divide the internal data by comparing a size of the internal data stored in the volatile memory device with a size of data that can be transferred at once in response to the sleep command;
in response to determining to divide the internal data:
dividing the internal data stored in the volatile memory device into N data portions;
outputting the divided internal data from a first part to an Nth part to the host as a response to the sleep command when N is equal to or less than a reference value; and
programming the internal data stored in the volatile memory device into the non-volatile memory device when N exceeds the reference value.
19. The method of operation of claim 17, further comprising:
in response to receiving the sleep command, requesting the host to confirm whether the internal data stored in the volatile memory device is allowed to be transferred to the host;
outputting the internal data in response to the host indicating that the memory system is allowed to transfer the internal data to the host; and
programming the internal data to the non-volatile memory device when the host indicates that the memory system is not allowed to transfer the internal data to the host.
20. The method of operation of claim 17, further comprising:
in response to receiving a wake command from the host:
exiting the sleep mode,
supplying power to the volatile memory device,
storing the internal data included in the wake command in the volatile memory device, and
transmitting an acknowledgement of exit from the sleep mode to the host.
CN201911374232.2A 2019-06-28 2019-12-27 Apparatus and method for transferring internal data of memory system in sleep mode Active CN112148208B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020190077806A KR20210001546A (en) 2019-06-28 2019-06-28 Apparatus and method for transmitting internal data of memory system in sleep mode
KR10-2019-0077806 2019-06-28

Publications (2)

Publication Number Publication Date
CN112148208A true CN112148208A (en) 2020-12-29
CN112148208B CN112148208B (en) 2024-03-26

Family

ID=73892111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911374232.2A Active CN112148208B (en) 2019-06-28 2019-12-27 Apparatus and method for transferring internal data of memory system in sleep mode

Country Status (3)

Country Link
US (1) US11294597B2 (en)
KR (1) KR20210001546A (en)
CN (1) CN112148208B (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI738235B (en) * 2020-03-03 2021-09-01 慧榮科技股份有限公司 Method for performing resuming management, and memory device and controller thereof and electronic device
WO2022193144A1 (en) * 2021-03-16 2022-09-22 Micron Technology, Inc. Read operations for active regions of memory device
JP2023040578A (en) * 2021-09-10 2023-03-23 キオクシア株式会社 Memory system and control method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102272734A (en) * 2009-01-05 2011-12-07 马维尔国际贸易有限公司 Method and system for hibernation or suspend using a non-volatile-memory device
US20160004471A1 (en) * 2014-07-01 2016-01-07 Kabushiki Kaisha Toshiba Storage device and data processing method
CN106062814A (en) * 2014-04-09 2016-10-26 英特尔公司 Improved banked memory access efficiency by a graphics processor
US20170038973A1 (en) * 2015-08-06 2017-02-09 Kabushiki Kaisha Toshiba Storage device and data saving method
CN106775609A (en) * 2015-11-19 2017-05-31 飞思卡尔半导体公司 System and method for reducing dormancy and recovery time

Family Cites Families (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7752417B2 (en) 2006-06-05 2010-07-06 Oracle America, Inc. Dynamic selection of memory virtualization techniques
KR100823171B1 (en) 2007-02-01 2008-04-18 삼성전자주식회사 Computer system having a partitioned flash translation layer and flash translation layer partition method thereof
US9720616B2 (en) 2008-06-18 2017-08-01 Super Talent Technology, Corp. Data-retention controller/driver for stand-alone or hosted card reader, solid-state-drive (SSD), or super-enhanced-endurance SSD (SEED)
US8671258B2 (en) 2009-03-27 2014-03-11 Lsi Corporation Storage system logical block address de-allocation management
US20120110259A1 (en) 2010-10-27 2012-05-03 Enmotus Inc. Tiered data storage system with data management and method of operation thereof
US8621603B2 (en) 2011-09-09 2013-12-31 Lsi Corporation Methods and structure for managing visibility of devices in a clustered storage system
US9311228B2 (en) 2012-04-04 2016-04-12 International Business Machines Corporation Power reduction in server memory system
US8583840B1 (en) 2012-04-25 2013-11-12 Lsi Corporation Methods and structure for determining mapping information inconsistencies in I/O requests generated for fast path circuits of a storage controller
KR101420754B1 (en) 2012-11-22 2014-07-17 주식회사 이에프텍 Non-volatile memory system and method of managing a mapping table for the same
KR102049265B1 (en) 2012-11-30 2019-11-28 삼성전자주식회사 Systems having a maximum sleep mode and methods of operating the same
WO2014110095A1 (en) 2013-01-08 2014-07-17 Violin Memory Inc. Method and system for data storage
US9229854B1 (en) 2013-01-28 2016-01-05 Radian Memory Systems, LLC Multi-array operation support and related devices, systems and software
US9652376B2 (en) 2013-01-28 2017-05-16 Radian Memory Systems, Inc. Cooperative flash memory control
US20150331624A1 (en) 2014-05-19 2015-11-19 Kabushiki Kaisha Toshiba Host-controlled flash translation layer snapshot
US10540275B2 (en) 2014-12-22 2020-01-21 Sony Corporation Memory controller, information processing system, and memory extension area management method
CN104733882A (en) 2015-03-09 2015-06-24 连展科技(深圳)有限公司 Stand type socket electric connector
WO2017066601A1 (en) 2015-10-16 2017-04-20 Huang Yiren Ronnie Method and apparatus for providing hybrid mode to access ssd drive
CN105404597B (en) 2015-10-21 2018-10-12 华为技术有限公司 Method, equipment and the system of data transmission
US10229051B2 (en) 2015-12-30 2019-03-12 Samsung Electronics Co., Ltd. Storage device including nonvolatile memory device and controller, operating method of storage device, and method for accessing storage device
US10198198B2 (en) * 2016-01-11 2019-02-05 Toshiba Memory Corporation Storage device that stores setting values for operation thereof
US20170300422A1 (en) 2016-04-14 2017-10-19 Micron Technology, Inc. Memory device with direct read access
US10157004B2 (en) 2016-04-14 2018-12-18 Sandisk Technologies Llc Storage system and method for recovering data corrupted in a host memory buffer
KR102651425B1 (en) 2016-06-30 2024-03-28 에스케이하이닉스 주식회사 Memory system and operating method of memory system
KR20180016679A (en) 2016-08-04 2018-02-19 삼성전자주식회사 Storage device using host memory and operating method thereof
US10503635B2 (en) 2016-09-22 2019-12-10 Dell Products, Lp System and method for adaptive optimization for performance in solid state drives based on segment access frequency
US10459644B2 (en) 2016-10-28 2019-10-29 Western Digital Techologies, Inc. Non-volatile storage system with integrated compute engine and optimized use of local fast memory
US10489313B2 (en) 2016-10-31 2019-11-26 Alibaba Group Holding Limited Flash storage failure rate reduction and hyperscale infrastructure robustness enhancement through the MRAM-NOR flash based cache architecture
US10635584B2 (en) 2017-06-29 2020-04-28 Western Digital Technologies, Inc. System and method for host system memory translation
US10592408B2 (en) 2017-09-13 2020-03-17 Intel Corporation Apparatus, computer program product, system, and method for managing multiple regions of a memory device
JP2019056972A (en) 2017-09-19 2019-04-11 東芝メモリ株式会社 Memory system and control method of memory system
US10970226B2 (en) 2017-10-06 2021-04-06 Silicon Motion, Inc. Method for performing access management in a memory device, associated memory device and controller thereof, and associated electronic device
JP6982468B2 (en) 2017-10-27 2021-12-17 キオクシア株式会社 Memory system and control method
US10929285B2 (en) 2018-02-27 2021-02-23 Western Digital Technologies, Inc. Storage system and method for generating a reverse map during a background operation and storing it in a host memory buffer
KR102624911B1 (en) 2018-06-13 2024-01-12 삼성전자주식회사 Method for increasing endurance of flash memory by improved metadata management
KR20200022179A (en) 2018-08-22 2020-03-03 에스케이하이닉스 주식회사 Data processing system and operating method of data processing system
KR20200088635A (en) 2019-01-15 2020-07-23 에스케이하이닉스 주식회사 Memory system and operation method thereof
US10860228B1 (en) 2019-06-24 2020-12-08 Western Digital Technologies, Inc. Method to switch between traditional SSD and open-channel SSD without data loss
US11138108B2 (en) 2019-08-22 2021-10-05 Micron Technology, Inc. Logical-to-physical map synchronization in a memory device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102272734A (en) * 2009-01-05 2011-12-07 马维尔国际贸易有限公司 Method and system for hibernation or suspend using a non-volatile-memory device
CN106062814A (en) * 2014-04-09 2016-10-26 英特尔公司 Improved banked memory access efficiency by a graphics processor
US20160004471A1 (en) * 2014-07-01 2016-01-07 Kabushiki Kaisha Toshiba Storage device and data processing method
US20170038973A1 (en) * 2015-08-06 2017-02-09 Kabushiki Kaisha Toshiba Storage device and data saving method
CN106775609A (en) * 2015-11-19 2017-05-31 飞思卡尔半导体公司 System and method for reducing dormancy and recovery time

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022226840A1 (en) * 2021-04-28 2022-11-03 Micron Technology, Inc. Light hibernation mode for memory

Also Published As

Publication number Publication date
US11294597B2 (en) 2022-04-05
KR20210001546A (en) 2021-01-06
US20200409602A1 (en) 2020-12-31
CN112148208B (en) 2024-03-26

Similar Documents

Publication Publication Date Title
US11294825B2 (en) Memory system for utilizing a memory included in an external device
CN112148208B (en) Apparatus and method for transferring internal data of memory system in sleep mode
KR20210027642A (en) Apparatus and method for transmitting map information in memory system
US11354250B2 (en) Apparatus for transmitting map information in memory system
CN113900586A (en) Memory system and operating method thereof
CN111581121B (en) Method and apparatus for managing mapping data in a memory system
US11681633B2 (en) Apparatus and method for managing meta data in memory system
US11422942B2 (en) Memory system for utilizing a memory included in an external device
US11029867B2 (en) Apparatus and method for transmitting map information and read count in memory system
CN112148632A (en) Apparatus and method for improving input/output throughput of memory system
CN110781097A (en) Apparatus and method for controlling metadata to interface multiple memory systems
KR20200113989A (en) Apparatus and method for controlling write operation of memory system
CN112148533A (en) Apparatus and method for storing data in MLC region of memory system
CN110806837A (en) Data processing system and method of operation thereof
KR20200122685A (en) Apparatus and method for handling different types of data in memory system
US20200042243A1 (en) Memory system and operation method for the same
CN110781098B (en) Apparatus and method for interfacing a plurality of memory systems with each other
US20200250104A1 (en) Apparatus and method for transmitting map information in a memory system
CN116204112A (en) Memory controller and method of operating the same
CN112416818A (en) Apparatus and method for managing firmware through runtime overlay
CN111857818A (en) Memory system and method for executing command operation by the same
US11663139B2 (en) Apparatus for transmitting map information in memory system
US11366611B2 (en) Apparatus for transmitting map information in a memory system
CN113495689A (en) Apparatus and method for controlling input/output throughput of memory system
KR20200014161A (en) Apparatus and method for managing meta data for engagement of plural memory system to store data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant