CN118349163A - Current management during data burst operations in a multi-die memory device - Google Patents


Info

Publication number
CN118349163A
Authority
CN
China
Legal status
Pending
Application number
CN202410053536.3A
Other languages
Chinese (zh)
Inventor
B·约里奥
L·努比莱
W·迪·弗朗西斯可
J·宾福特
于亮
何艳康
A·摩哈马萨德
Current Assignee
Micron Technology Inc
Original Assignee
Micron Technology Inc
Application filed by Micron Technology Inc
Publication of CN118349163A


Abstract

The present disclosure relates to current management during data burst operations in a multi-die memory device. Control logic on a memory die of a multi-die memory subsystem receives a data burst command from the memory subsystem controller indicating an impending data burst event and determines an expected current utilization in the memory subsystem during the data burst event. The control logic determines whether the expected current utilization in the memory subsystem during the data burst event meets a threshold criterion and, in response to determining that the expected current utilization does not meet the threshold criterion, pauses one or more operations being performed by the control logic on the memory die until the expected current utilization in the memory subsystem during the data burst event meets the threshold criterion. In response to determining that the expected current utilization in the memory subsystem during the data burst event meets the threshold criterion, the control logic provides an indication to the memory subsystem controller that the data burst event is authorized, and may perform one or more operations corresponding to the data burst event.

Description

Current management during data burst operations in a multi-die memory device
Technical Field
Embodiments of the present disclosure relate generally to memory subsystems and, more particularly, to current management during data burst operations in memory devices of the memory subsystems.
Background
The memory subsystem may include one or more memory devices that store data. The memory devices may be, for example, non-volatile memory devices and volatile memory devices. In general, a host system may utilize a memory subsystem to store data at the memory devices and to retrieve data from the memory devices.
Disclosure of Invention
According to aspects of the present disclosure, a memory subsystem is provided. The memory subsystem includes: a memory subsystem controller; and a plurality of memory dies coupled to the memory subsystem controller, wherein each memory die of the plurality of memory dies comprises: a memory array; and control logic operably coupled with the memory array to perform operations comprising: receiving a data burst command from the memory subsystem controller indicating an impending data burst event; determining an expected current utilization in the memory subsystem during the data burst event; determining whether the expected current utilization in the memory subsystem during the data burst event meets a threshold criterion; in response to determining that the expected current utilization in the memory subsystem during the data burst event does not meet the threshold criterion, suspending one or more operations being performed by the control logic on the memory die until the expected current utilization in the memory subsystem during the data burst event meets the threshold criterion; and in response to determining that the expected current utilization in the memory subsystem during the data burst event meets the threshold criterion, providing an indication to the memory subsystem controller that the data burst event is authorized.
According to another aspect of the present disclosure, a memory device is provided. The memory device includes: a memory array; and control logic operably coupled with the memory array to perform operations comprising: receiving a data burst command from a requestor indicating an impending data burst event; determining an expected current utilization during the data burst event; determining whether the expected current utilization during the data burst event meets a threshold criterion; in response to determining that the expected current utilization during the data burst event does not meet the threshold criterion, suspending one or more operations being performed by the control logic on the memory device until the expected current utilization during the data burst event meets the threshold criterion; and in response to determining that the expected current utilization during the data burst event meets the threshold criterion, providing an indication to the requestor that the data burst event is authorized.
According to yet another aspect of the present disclosure, a method is provided. The method includes: receiving a data burst command from a requestor indicating an impending data burst event; determining an expected current utilization during the data burst event; determining whether the expected current utilization during the data burst event meets a threshold criterion; in response to determining that the expected current utilization during the data burst event does not meet the threshold criterion, suspending one or more operations being performed on a memory device until the expected current utilization during the data burst event meets the threshold criterion; and in response to determining that the expected current utilization during the data burst event meets the threshold criterion, providing an indication to the requestor that the data burst event is authorized.
Drawings
The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure.
FIG. 1A illustrates an example computing system including a memory subsystem, according to some embodiments of the disclosure.
FIG. 1B is a block diagram of a memory device in communication with a memory subsystem controller of a memory subsystem, according to some embodiments of the present disclosure.
FIG. 2 is a schematic diagram of a portion of a memory cell array such as may be used in a memory of the type described with respect to FIG. 1B, in accordance with some embodiments of the present disclosure.
Fig. 3 is a block diagram illustrating a multi-die package having multiple memory dies in a memory subsystem according to some embodiments of the present disclosure.
FIG. 4 is a flowchart of an example method of current management during a data burst operation in a memory device of a memory subsystem according to some embodiments of the present disclosure.
Fig. 5 is a diagram illustrating utilization during a data burst operation in a memory device of a memory subsystem according to some embodiments of the present disclosure.
FIG. 6 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.
Detailed Description
Aspects of the present disclosure relate to current management during data burst operations in memory devices of a memory subsystem. The memory subsystem may be a storage device, a memory module, or a mix of storage devices and memory modules. Examples of memory devices and memory modules are described below in connection with FIG. 1. In general, a host system may utilize a memory subsystem that includes one or more components, such as a memory device that stores data. The host system may provide data to be stored at the memory subsystem and may request data to be retrieved from the memory subsystem.
The memory subsystem may include a high density non-volatile memory device where it is desirable to retain data when power is not being supplied to the memory device. For example, NAND memory (e.g., 3D flash NAND memory) provides storage in a compact, high density configuration. A nonvolatile memory device is a package of one or more dies, each die including one or more planes. For some types of non-volatile memory devices (e.g., NAND memory), each plane includes a set of physical blocks. Each block contains a set of pages. Each page includes a set of memory cells ("cells"). A cell is an electronic circuit that stores information. Depending on the cell type, a cell may store one or more bits of binary information and have various logic states related to the number of bits stored. The logic states may be represented by binary values such as "0" and "1" or a combination of such values.
A memory device may be made up of bits arranged in a two-dimensional or three-dimensional grid. Memory cells are formed onto a silicon wafer in an array of columns (hereinafter also referred to as bit lines) and rows (hereinafter also referred to as word lines). A word line may refer to one or more rows of memory cells of a memory device that are used with one or more bit lines to generate the address of each of the memory cells. The intersection of a bit line and a word line constitutes the address of a memory cell. A block hereinafter refers to a unit of the memory device used to store data, and may include a group of memory cells, a group of word lines, a word line, or individual memory cells. One or more blocks may be grouped together to form separate partitions (e.g., planes) of the memory device in order to allow concurrent operations on each plane.
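The word-line/bit-line intersection addressing described above can be sketched as simple bit packing. This is a hypothetical illustration only: the field widths and layout are assumptions, not a real NAND address format.

```python
# Hypothetical sketch: pack (block, word line, bit line) into one flat
# address value. Field widths are illustrative assumptions.
def cell_address(block: int, wordline: int, bitline: int,
                 wl_bits: int = 10, bl_bits: int = 14) -> int:
    """The intersection of a word line (row) and a bit line (column)
    within a block identifies a single memory cell."""
    assert wordline < (1 << wl_bits) and bitline < (1 << bl_bits)
    return (block << (wl_bits + bl_bits)) | (wordline << bl_bits) | bitline
```

With these assumed widths, adjacent cells on the same word line occupy consecutive addresses.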
One example of a memory subsystem is a Solid State Drive (SSD) that includes one or more non-volatile memory devices (i.e., memory dies) and a memory subsystem controller for managing the non-volatile memory devices. In a memory subsystem including multiple memory dies, the associated memory access operations may be performed concurrently (i.e., at least partially overlapping in time) on separate memory dies. The various access lines, data lines, and voltage nodes may be charged or discharged very quickly during sensing (e.g., read or verify), programming, and erase operations so that the memory access operations may meet commonly required performance specifications, such as data throughput targets dictated by customer requirements or industry standards. For sequential reads or programming, multi-plane operations are typically used to increase system throughput. Thus, the memory subsystem may have a high peak current usage, which may be four to five times the average current magnitude. With such high peak current demand against the total current usage budget, operating more than a certain number of memory devices (i.e., memory dies) simultaneously can become challenging.
One type of data transfer that may occur in a memory subsystem is a data burst transfer (i.e., a "data burst event"), which refers to a set of consecutive data input or data output transfer cycles performed between a memory subsystem controller and a memory die without interruption. A data burst may be initiated by specifying a set of parameters, including a starting memory address from which to begin the data transfer and the amount of data to be transferred. After the data burst is started, it runs to completion, using as many interface bus transactions as necessary to transfer the amount of data specified by the parameter set. Due at least in part to the specified parameter set, the data burst process may incur some overhead for executing pre-transfer instructions. However, since the data burst can continue after startup without any processor involvement, processing resources may be freed up for other tasks. Data bursts are typically fast (e.g., approximately 1 to 2 microseconds) and asynchronous events (i.e., a memory device cannot predict when a data burst will occur). One example of a data burst is a read burst. Another example of a data burst is a write burst.
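The parameter set described above (a starting address plus an amount of data) can be sketched as follows. The descriptor fields and the default bus width are assumptions for illustration, not values from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class DataBurst:
    """Illustrative burst descriptor: where the transfer starts and how
    much data to move; the burst then runs to completion uninterrupted."""
    start_address: int
    num_bytes: int

def bus_transactions(burst: DataBurst, bus_width_bytes: int = 8) -> int:
    """Interface-bus transfer cycles needed for the specified amount of
    data: as many cycles as necessary, per the parameter set."""
    return -(-burst.num_bytes // bus_width_bytes)  # ceiling division
```

For example, a 4 KiB burst over an assumed 8-byte-wide bus would occupy 512 consecutive transfer cycles.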
The occurrence of a data burst event may consume a significant amount of current in the memory subsystem and may result in the total current limit in the memory subsystem being met or exceeded when the data burst occurs simultaneously with other ongoing operations that also consume system current. This occurrence may lead to undesirable consequences in the memory subsystem, such as, but not limited to, an asynchronous reset event triggered by a power supply voltage drop, which interrupts all ongoing memory access operations and may result in shutdown of one or more components. Some memory subsystems utilize Peak Power Management (PPM) techniques to manage power consumption, many of which rely on the memory subsystem controller to stagger the activity of the memory dies, seeking to avoid performing the high-power portions of memory access operations in more than one die at the same time. A PPM communication protocol may be used, which is an inter-die communication protocol that limits and/or tracks the current or power consumed by each memory die in the memory subsystem. Each memory die may include a PPM component that exchanges information with its own local media controller (e.g., NAND controller) and with the PPM components of other dies via a communication bus. However, such PPM techniques do not have the ability to handle or manage data burst events. Thus, many memory subsystems artificially reduce the available current budget in the memory subsystem so that some portion of the current budget is always reserved for high-priority data burst operations that may or may not occur. This limits the number of non-data-burst operations that can be performed simultaneously and compromises system performance.
Aspects of the present disclosure address the above and other drawbacks by implementing current management during data burst operations in memory devices of a memory subsystem. In one embodiment, dedicated commands are used to instruct memory devices in a multi-die memory subsystem to reserve a certain amount of current budget to handle a data burst without exceeding the maximum allowable current budget in the memory subsystem. These commands may be issued by a requestor, such as the memory subsystem controller or host system, when a data burst event is imminent (i.e., identified based on the read/write workload). In this way, the PPM component of a memory device can utilize the entire available current budget for non-data-burst operations when no data burst is expected, but can interrupt those operations in response to receiving a command, in order to reclaim some of the current budget for use during a data burst. The reserved current budget may be released after the data burst is completed.
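The reserve/transfer/release sequence described above can be sketched from the requestor's side as follows. The command names (`DATA_BURST_RESERVE`, `DATA_BURST_RELEASE`) and the die interface are hypothetical placeholders, not the actual command set of the disclosure:

```python
class FakeDie:
    """Toy stand-in for a memory die's command interface (illustration only)."""
    def __init__(self):
        self.log = []

    def send(self, cmd):
        self.log.append(cmd)

    def wait_for_ready(self):
        # In the scheme above, the die suspends ongoing operations until the
        # reserved current budget fits, then signals that the burst may proceed.
        self.log.append("READY")

def run_data_burst(die, transfer):
    """Hypothetical requestor-side sequence around a data burst event."""
    die.send("DATA_BURST_RESERVE")   # announce the impending burst
    die.wait_for_ready()             # wait until the budget is reserved
    transfer()                       # perform the burst itself
    die.send("DATA_BURST_RELEASE")   # free the reserved budget afterward
```

The release step is what lets the die return the reserved budget to ordinary non-data-burst operations, so nothing stays permanently set aside.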
Advantages of this approach include, but are not limited to, improved performance of the memory subsystem. The dedicated commands to reserve a current budget for an impending data burst provide automatic control of current utilization in the memory subsystem that matches the actual current consumption at each instant. The memory device need not permanently reserve a certain amount of current budget against the possibility of a high-priority data burst. This leaves more of the current budget available for non-data-burst operations and reduces the occurrence of asynchronous reset events in the memory subsystem.
FIG. 1A illustrates an example computing system 100 including a memory subsystem 110, according to some embodiments of the disclosure. Memory subsystem 110 may include media such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination of such.
The memory subsystem 110 may be a storage device, a memory module, or a hybrid of a storage device and a memory module. Examples of storage devices include Solid State Drives (SSDs), flash drives, Universal Serial Bus (USB) flash drives, embedded MultiMedia Controller (eMMC) drives, Universal Flash Storage (UFS) drives, Secure Digital (SD) cards, and Hard Disk Drives (HDDs). Examples of memory modules include Dual In-line Memory Modules (DIMMs), Small Outline DIMMs (SO-DIMMs), and various types of Non-Volatile Dual In-line Memory Modules (NVDIMMs).
The computing system 100 may be a computing device, such as a desktop computer, a laptop computer, a network server, a mobile device, a vehicle (e.g., an airplane, an unmanned aerial vehicle, a train, an automobile, or other conveyance), an internet of things (IoT) capable device, an embedded computer (e.g., an embedded computer included in a vehicle, industrial equipment, or a networked commercial device), or such a computing device that includes memory and a processing device.
The computing system 100 may include a host system 120 coupled to one or more memory subsystems 110. In some embodiments, the host system 120 is coupled to different types of memory subsystems 110. FIG. 1A illustrates one example of a host system 120 coupled to one memory subsystem 110. As used herein, "coupled to" or "coupled with" generally refers to a connection between components, which may be an indirect communication connection or a direct communication connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc.
Host system 120 may include a processor chipset and a software stack executed by the processor chipset. The processor chipset may include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system 120 uses the memory subsystem 110, for example, to write data to the memory subsystem 110 and to read data from the memory subsystem 110.
Host system 120 may be coupled to memory subsystem 110 via a physical host interface. Examples of physical host interfaces include, but are not limited to, Serial Advanced Technology Attachment (SATA) interfaces, Peripheral Component Interconnect Express (PCIe) interfaces, Universal Serial Bus (USB) interfaces, Fibre Channel, Serial Attached SCSI (SAS), Double Data Rate (DDR) memory buses, Small Computer System Interface (SCSI), Dual In-line Memory Module (DIMM) interfaces (e.g., DIMM socket interfaces supporting Double Data Rate (DDR)), and so on. A physical host interface may be used to transfer data between host system 120 and memory subsystem 110. When the memory subsystem 110 is coupled with the host system 120 through a PCIe interface, the host system 120 may further utilize an NVM Express (NVMe) interface to access memory components (e.g., the memory device 130). The physical host interface may provide an interface for passing control, address, data, and other signals between the memory subsystem 110 and the host system 120. FIG. 1A illustrates a memory subsystem 110 as an example. In general, the host system 120 may access multiple memory subsystems via the same communication connection, multiple separate communication connections, and/or a combination of communication connections.
The memory devices 130, 140 may include any combination of different types of non-volatile memory devices and/or volatile memory devices. Volatile memory devices, such as memory device 140, may be, but are not limited to, Random Access Memory (RAM), such as Dynamic Random Access Memory (DRAM) and Synchronous Dynamic Random Access Memory (SDRAM).
Some examples of non-volatile memory devices, such as memory device 130, include negative-and (NAND) flash memory and write-in-place memory, such as three-dimensional cross-point ("3D cross-point") memory. A cross-point array of non-volatile memory may perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory may perform a write-in-place operation, where a non-volatile memory cell may be programmed without the memory cell being previously erased. NAND flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND).
Each of memory devices 130 may include one or more arrays of memory cells. A type of memory cell, such as a Single Level Cell (SLC), may store one bit per cell. Other types of memory cells, such as multi-level cells (MLC), three-level cells (TLC), and four-level cells (QLC), may store multiple bits per cell. In some embodiments, each of memory devices 130 may include one or more arrays of memory cells, such as SLC, MLC, TLC, QLC or any combination of such. In some embodiments, a particular memory device may include an SLC portion and an MLC portion, a TLC portion, or a QLC portion of a memory cell. The memory cells of memory device 130 may be grouped into pages, which may refer to logical units of the memory device for storing data. For some types of memory (e.g., NAND), pages may be grouped to form blocks.
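The cell types above differ in how many bits each cell stores, and hence in how many distinct threshold-voltage levels a cell must resolve:

```python
# Bits stored per cell for the cell types named above; a cell storing
# n bits must distinguish 2**n threshold-voltage levels.
BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

def distinct_levels(cell_type: str) -> int:
    return 2 ** BITS_PER_CELL[cell_type]
```

So an SLC cell distinguishes 2 levels, while a QLC cell must distinguish 16, which is one reason higher-density cell types trade off speed and endurance.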
Although non-volatile memory components are described, such as a 3D cross-point array of non-volatile memory cells and NAND-type flash memory (e.g., 2D NAND, 3D NAND), the memory device 130 may be based on any other type of non-volatile memory, such as Read-Only Memory (ROM), Phase Change Memory (PCM), self-selecting memory, other chalcogenide-based memory, Ferroelectric Transistor Random Access Memory (FeTRAM), Ferroelectric Random Access Memory (FeRAM), Magnetoresistive Random Access Memory (MRAM), Spin-Transfer Torque MRAM (STT-MRAM), Conductive Bridging RAM (CBRAM), Resistive Random Access Memory (RRAM), Oxide-based RRAM (OxRAM), negative-or (NOR) flash memory, and Electrically Erasable Programmable Read-Only Memory (EEPROM).
The memory subsystem controller 115 (or simply controller 115) may communicate with the memory devices 130 to perform operations such as reading data, writing data, or erasing data at the memory devices 130, and other such operations. The memory subsystem controller 115 may include hardware such as one or more integrated circuits and/or discrete components, buffer memory, or a combination thereof. The hardware may include digital circuitry with dedicated (i.e., hard-coded) logic for performing the operations described herein. The memory subsystem controller 115 may be a microcontroller, dedicated logic circuitry (e.g., field Programmable Gate Array (FPGA), application Specific Integrated Circuit (ASIC), etc.), or other suitable processor.
The memory subsystem controller 115 may include a processor 117 (e.g., a processing device) configured to execute instructions stored in a local memory 119. In the illustrated example, the local memory 119 of the memory subsystem controller 115 includes embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control the operation of the memory subsystem 110, including handling communications between the memory subsystem 110 and the host system 120.
In some embodiments, local memory 119 may include memory registers that store memory pointers, extracted data, and the like. Local memory 119 may also include Read Only Memory (ROM) for storing microcode. Although the example memory subsystem 110 in fig. 1A has been illustrated as including the memory subsystem controller 115, in another embodiment of the present disclosure, the memory subsystem 110 does not include the memory subsystem controller 115, but rather may rely on external control (e.g., provided by an external host, or provided by a processor or controller separate from the memory subsystem).
In general, the memory subsystem controller 115 may receive commands or operations from the host system 120 and may convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory device 130. The memory subsystem controller 115 may be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and Error Correction Code (ECC) operations, encryption operations, caching operations, and address translation between logical addresses (e.g., Logical Block Addresses (LBAs), namespaces) and physical addresses (e.g., physical block addresses) associated with the memory device 130. The memory subsystem controller 115 may further include host interface circuitry for communicating with the host system 120 via the physical host interface. The host interface circuitry may translate commands received from the host system into command instructions for accessing the memory device 130, and translate responses associated with the memory device 130 into information for the host system 120.
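The logical-to-physical address translation duty listed above can be sketched as a minimal L2P table. This is a deliberately simplified illustration; a real controller persists the mapping and combines it with wear leveling and garbage collection:

```python
class L2PTable:
    """Minimal logical-to-physical (L2P) mapping sketch."""
    def __init__(self):
        self._map = {}  # logical block address -> physical block address

    def write(self, lba: int, physical_addr: int) -> None:
        # NAND writes go to a fresh physical location rather than in place,
        # so the mapping is updated and the old location becomes stale.
        self._map[lba] = physical_addr

    def lookup(self, lba: int) -> int:
        return self._map[lba]
```

A host rewriting the same LBA thus lands on a new physical address each time, while reads always follow the current mapping.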
Memory subsystem 110 may also include additional circuitry or components not illustrated. In some embodiments, memory subsystem 110 may include caches or buffers (e.g., DRAM) and address circuitry (e.g., row decoders and column decoders) that may receive addresses from memory subsystem controller 115 and decode the addresses to access memory device 130.
In some embodiments, memory device 130 includes a local media controller 135 that operates in conjunction with memory subsystem controller 115 to perform operations on one or more memory cells of memory device 130. An external controller (e.g., memory subsystem controller 115) may manage the memory device 130 externally (e.g., perform media management operations on the memory device 130). In some embodiments, the memory device 130 is a managed memory device, which is a raw memory device 130 combined with control logic (e.g., local media controller 135) on the die and a controller (e.g., memory subsystem controller 115) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device. For example, the memory device 130 may represent a single die having some control logic (e.g., the local media controller 135) embodied thereon. In some embodiments, one or more components of memory subsystem 110 may be omitted.
In one embodiment, memory subsystem 110 includes memory interface component 113. The memory interface component 113 is responsible for handling interactions of the memory subsystem controller 115 with memory devices of the memory subsystem 110 (e.g., the memory device 130). For example, the memory interface component 113 can send memory access commands, such as program commands, read commands, or other commands, to the memory device 130 corresponding to requests received from the host system 120. Additionally, the memory interface component 113 can receive data from the memory device 130, such as data retrieved in response to a read command, an acknowledgement that a program command was successfully executed, or an indication of multi-level health status information corresponding to one or more sections of the memory device 130. In some embodiments, memory subsystem controller 115 includes at least a portion of memory interface 113. For example, the memory subsystem controller 115 may include a processor 117 (e.g., a processing device) configured to execute instructions stored in a local memory 119 for performing the operations described herein. In some embodiments, the memory interface component 113 is part of the host system 120, an application program, or an operating system.
In one embodiment, memory device 130 includes local media controller 135, peak power management component 150, and memory array 104. As described herein, the memory array 104 may be logically or physically divided into sections (e.g., die, blocks, pages, etc.). In one embodiment, the local media controller 135 of the memory device 130 includes at least a portion of the PPM component 150. In this embodiment, PPM component 150 can be implemented using hardware or as firmware, stored on memory device 130, executed by control logic (e.g., local media controller 135) to perform operations related to power budget arbitration for multiple concurrent access operations described herein. In another embodiment, PPM component 150 is separate from local media controller 135. In one embodiment, memory device 130 represents a single memory die. In one embodiment, memory subsystem 110 includes a plurality of memory dies, where each memory die includes the same or similar components as memory device 130, including respective examples of PPM components 150.
In one embodiment, PPM component 150 receives a data burst command from a requestor, such as memory subsystem controller 115 or host system 120, indicating an impending data burst event and determines an expected current utilization in memory subsystem 110 during the data burst event. PPM component 150 further determines whether the expected current utilization in memory subsystem 110 during the data burst event meets a threshold criterion and, in response to determining that it does not, pauses one or more operations being performed on memory array 104 of memory device 130 until the expected current utilization in memory subsystem 110 during the data burst event meets the threshold criterion. In response to determining that the expected current utilization in memory subsystem 110 during the data burst event meets the threshold criterion, PPM component 150 provides an indication to the requestor that the data burst event is authorized and can perform one or more operations corresponding to the data burst event. After the data burst event is completed, PPM component 150 receives a data burst release command indicating that the data burst event has been completed and can resume the one or more suspended operations on memory device 130. Further details regarding the design and operation of the PPM component 150 are described below.
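The die-side decision flow just described can be sketched as follows. The budget and per-operation current figures, and the policy of pausing the highest-current operation first, are illustrative assumptions rather than values from the disclosure:

```python
def handle_data_burst_command(ongoing_ops, burst_current_ma, budget_ma=400):
    """Sketch of the PPM check: pause ongoing operations until the expected
    current utilization during the burst meets the threshold criterion,
    then return which operations were suspended (the burst is then
    authorized).

    ongoing_ops: list of (name, current_mA) pairs for active operations.
    """
    ops = sorted(ongoing_ops, key=lambda op: op[1], reverse=True)
    suspended = []
    # Expected utilization = burst current plus all still-running operations.
    while burst_current_ma + sum(c for _, c in ops) > budget_ma and ops:
        suspended.append(ops.pop(0))  # pause the highest-current operation
    return suspended
```

For instance, with an assumed 400 mA budget and a 200 mA burst, a die running a 150 mA program, a 120 mA erase, and a 60 mA read would only need to pause the program before the expected utilization fits and the burst can be authorized.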
FIG. 1B is a simplified block diagram of a first apparatus, in the form of a memory device 130, in communication with a second apparatus, in the form of a memory subsystem controller 115 of a memory subsystem (e.g., memory subsystem 110 of FIG. 1A), according to an embodiment. Some examples of electronic systems include personal computers, Personal Digital Assistants (PDAs), digital cameras, digital media players, digital recorders, gaming consoles, household appliances, vehicles, wireless devices, mobile telephones, and the like. The memory subsystem controller 115 (e.g., a controller external to the memory device 130) may be a memory controller or other external host device.
The memory device 130 includes an array of memory cells 104 logically arranged in rows and columns. The memory cells of a logical row are typically connected to the same access line (e.g., word line), while the memory cells of a logical column are typically selectively connected to the same data line (e.g., bit line). A single access line may be associated with more than one logical row of memory cells and a single data line may be associated with more than one logical column. The memory cells (not shown in FIG. 1B) of at least a portion of the memory cell array 104 are capable of being programmed to one of at least two target data states.
Row decoding circuitry 108 and column decoding circuitry 109 are provided to decode address signals. Address signals are received and decoded to access the memory cell array 104. Memory device 130 also includes input/output (I/O) control circuitry 160 for managing the input of commands, addresses, and data to memory device 130, and the output of data and status information from memory device 130. The address register 114 communicates with the I/O control circuitry 160 and the row decode circuitry 108 and column decode circuitry 109 to latch address signals prior to decoding. The command register 124 communicates with the I/O control circuitry 160 and the local media controller 135 to latch incoming commands.
A controller, such as local media controller 135 internal to memory device 130, controls access to memory cell array 104 in response to commands and generates status information for external memory subsystem controller 115, i.e., local media controller 135 is configured to perform access operations (e.g., read operations, program operations, and/or erase operations) on memory cell array 104. Local media controller 135 communicates with row decode circuitry 108 and column decode circuitry 109 to control row decode circuitry 108 and column decode circuitry 109 in response to addresses. In one embodiment, the local media controller 135 includes or is coupled to the PPM component 150, which PPM component 150 can implement the current management described herein during data burst operations.
Local media controller 135 also communicates with cache register 172. Cache register 172 latches data, either incoming or outgoing, as directed by local media controller 135 to temporarily store data while the memory cell array 104 is busy writing or reading, respectively, other data. During a program operation (e.g., a write operation), data may be passed from the cache register 172 to the data register 170 for transfer to the memory cell array 104; then new data may be latched in the cache register 172 from the I/O control circuitry 160. During a read operation, data may be passed from the cache register 172 to the I/O control circuitry 160 for output to the memory subsystem controller 115; then new data may be passed from the data register 170 to the cache register 172. The cache register 172 and/or the data register 170 may form (e.g., may form a portion of) a page buffer of the memory device 130. The page buffer may further include sensing devices (not shown in FIG. 1B) to sense a data state of a memory cell of the memory cell array 104, e.g., by sensing a state of a data line connected to that memory cell. Status register 122 may be in communication with I/O control circuitry 160 and local media controller 135 to latch the status information for output to the memory subsystem controller 115.
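The two-stage cache/data register pipeline described above can be sketched in code. The following is a minimal, hypothetical model (the class and method names are illustrative, not part of the patent): during a program, pages move from the cache register toward the array while new data is latched from I/O; during a read, sensed pages move from the data register toward I/O.

```python
# Hypothetical model of the cache/data register pipeline described above.
# Not an actual device implementation; it only illustrates how the two
# registers let data transfer overlap with array busy time.

class PageBufferModel:
    def __init__(self):
        self.cache_register = None   # latches data to/from I/O control circuitry
        self.data_register = None    # transfers data to/from the memory array

    def program_stage(self, incoming_page):
        """Advance one program step: the staged page goes to the array,
        the cached page is staged, and new I/O data is latched in the cache."""
        to_array = self.data_register           # page now being programmed
        self.data_register = self.cache_register
        self.cache_register = incoming_page     # new data from I/O control
        return to_array

    def read_stage(self, page_from_array):
        """Advance one read step: the cached page is output to the controller
        while the newly sensed page is staged in the data register."""
        to_io = self.cache_register             # page output toward the controller
        self.cache_register = self.data_register
        self.data_register = page_from_array    # newly sensed page
        return to_io
```

A page written at one step thus reaches the array two steps later, which is why the array can stay busy while the next page is still being transferred in.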
Memory device 130 receives control signals at local media controller 135 from memory subsystem controller 115 over control link 132. For example, the control signals may include a chip enable signal CE#, a command latch enable signal CLE, an address latch enable signal ALE, a write enable signal WE#, a read enable signal RE#, and a write protect signal WP#. Additional or alternative control signals (not shown) may be further received over control link 132, depending upon the nature of memory device 130. In one embodiment, memory device 130 receives command signals (which represent commands), address signals (which represent addresses), and data signals (which represent data) from memory subsystem controller 115 over a multiplexed input/output (I/O) bus 134 and outputs data to memory subsystem controller 115 over I/O bus 134.
For example, commands may be received at I/O control circuitry 160 over input/output (I/O) pins [7:0] of I/O bus 134 and may then be written into command register 124. Addresses may be received at I/O control circuitry 160 over input/output (I/O) pins [7:0] of I/O bus 134 and may then be written into address register 114. Data may be received at I/O control circuitry 160 over input/output (I/O) pins [7:0] for an 8-bit device or input/output (I/O) pins [15:0] for a 16-bit device, and may then be written into cache register 172. The data may subsequently be written into data register 170 for programming the memory cell array 104.
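The routing of bytes on the multiplexed bus can be illustrated with a small sketch. The following hypothetical model (an assumption for exposition; real devices latch on WE#/RE# edges, which are not modeled) shows how the CLE and ALE latch enable signals determine whether a bus byte lands in the command register, the address register, or the cache register:

```python
# Illustrative sketch of multiplexed I/O bus routing: CLE selects command
# cycles, ALE selects address cycles, and neither asserted means a data cycle.
# This is a simplification for exposition, not an actual device interface.

def route_bus_byte(byte, cle, ale, registers):
    """Route one byte from the I/O bus based on the CLE/ALE latch enables."""
    if cle and not ale:
        registers["command"].append(byte)    # written into command register 124
    elif ale and not cle:
        registers["address"].append(byte)    # written into address register 114
    elif not cle and not ale:
        registers["cache"].append(byte)      # data latched in cache register 172
    else:
        raise ValueError("CLE and ALE asserted together is not a valid cycle")
    return registers
```

For example, a page-read sequence would present a command byte with CLE high, several address bytes with ALE high, and then clock data out with both low.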
In an embodiment, cache register 172 may be omitted, and the data may be written directly into data register 170. Data may also be output over input/output (I/O) pins [7:0] for an 8-bit device or input/output (I/O) pins [15:0] for a 16-bit device. Although reference may be made to I/O pins, they may include any conductive node that provides an electrical connection to the memory device 130 by an external device (e.g., the memory subsystem controller 115), such as a commonly used conductive pad or conductive bump.
Those skilled in the art will appreciate that additional circuitry and signals may be provided and that the memory device 130 of FIG. 1B has been simplified. It should be recognized that the functionality of the various block components described with reference to FIG. 1B may not necessarily be segregated to distinct components or component portions of an integrated circuit device. For example, a single component or component portion of an integrated circuit device may be adapted to perform the functionality of more than one block component of FIG. 1B. Alternatively, one or more components or component portions of an integrated circuit device may be combined to perform the functionality of a single block component of FIG. 1B. Additionally, while specific I/O pins are described in accordance with popular conventions for receipt and output of the various signals, it is noted that other combinations or numbers of I/O pins (or other I/O node structures) may be used in the various embodiments.
FIG. 2 is a schematic diagram of a portion of a memory cell array 104 (e.g., a NAND memory array) as could be used in a memory of the type described with respect to FIG. 1B, according to an embodiment. The memory array 104 includes access lines (e.g., word lines 202 0 to 202 N) and data lines (e.g., bit lines 204 0 to 204 M). The word lines 202 may be connected to global access lines (e.g., global word lines), not shown in FIG. 2, in a many-to-one relationship. For some embodiments, the memory array 104 may be formed over a semiconductor, which may be conductively doped to have a conductivity type, such as p-type conductivity, e.g., to form a p-well, or n-type conductivity, e.g., to form an n-well.
The memory array 104 may be arranged in rows (each corresponding to a word line 202) and columns (each corresponding to a bit line 204). Each column may include a string of serially connected memory cells (e.g., non-volatile memory cells), such as one of NAND strings 206 0 to 206 M. Each NAND string 206 may be connected (e.g., selectively connected) to a common source (SRC) 216 and may include memory cells 208 0 to 208 N. The memory cells 208 may represent non-volatile memory cells for storage of data. The memory cells 208 of each NAND string 206 may be connected in series between a select gate 210 (e.g., a field-effect transistor), such as one of the select gates 210 0 to 210 M (which may be, for example, a source select transistor, commonly referred to as a select gate source), and a select gate 212 (e.g., a field-effect transistor), such as one of the select gates 212 0 to 212 M (which may be, for example, a drain select transistor, commonly referred to as a select gate drain). Select gates 210 0 to 210 M may be commonly connected to a select line 214, such as a source select line (SGS), and select gates 212 0 to 212 M may be commonly connected to a select line 215, such as a drain select line (SGD). Although depicted as traditional field-effect transistors, the select gates 210 and 212 may utilize a structure similar to (e.g., identical to) the memory cells 208. The select gates 210 and 212 may represent a number of select gates connected in series, with each select gate in series configured to receive the same or independent control signals.
The source of each select gate 210 may be connected to a common source 216. The drain of each select gate 210 may be connected to a memory cell 208 0 of the corresponding NAND string 206. For example, the drain of select gate 210 0 may be connected to memory cell 208 0 of the corresponding NAND string 206 0. Thus, each select gate 210 may be configured to selectively connect a corresponding NAND string 206 to the common source 216. The control gate of each select gate 210 may be connected to a select line 214.
The drain of each select gate 212 may be connected to the bit line 204 of the corresponding NAND string 206. For example, the drain of select gate 212 0 may be connected to bit line 204 0 of the corresponding NAND string 206 0. The source of each select gate 212 may be connected to the memory cell 208 N of the corresponding NAND string 206. For example, the source of select gate 212 0 may be connected to memory cell 208 N of the corresponding NAND string 206 0. Thus, each select gate 212 may be configured to selectively connect a corresponding NAND string 206 to a corresponding bit line 204. The control gate of each select gate 212 may be connected to a select line 215.
The memory array 104 in FIG. 2 may be a quasi-two-dimensional memory array and may have a generally planar structure, e.g., where the common source 216, NAND strings 206, and bit lines 204 extend in substantially parallel planes. Alternatively, the memory array 104 in FIG. 2 may be a three-dimensional memory array, e.g., where NAND strings 206 may extend substantially perpendicular to a plane containing the common source 216 and to a plane containing the bit lines 204, which plane may be substantially parallel to the plane containing the common source 216.
A typical construction of a memory cell 208 includes a data storage structure 234 (e.g., a floating gate, charge trap, and the like) that can determine a data state of the memory cell (e.g., through changes in threshold voltage), and a control gate 236, as shown in FIG. 2. The data storage structure 234 may include both conductive and dielectric structures, while the control gate 236 is generally formed of one or more conductive materials. In some cases, memory cells 208 may further have a defined source/drain (e.g., source) 230 and a defined source/drain (e.g., drain) 232. The memory cells 208 have their control gates 236 connected to (and in some cases formed by) a word line 202.
A column of the memory cells 208 may be a NAND string 206 or a number of NAND strings 206 selectively connected to a given bit line 204. A row of the memory cells 208 may be memory cells 208 commonly connected to a given word line 202. A row of memory cells 208 may, but need not, include all the memory cells 208 commonly connected to a given word line 202. Rows of the memory cells 208 may often be divided into one or more groups of physical pages of memory cells 208, and a physical page of memory cells 208 often includes every other memory cell 208 commonly connected to a given word line 202. For example, the memory cells 208 commonly connected to word line 202 N and selectively connected to even bit lines 204 (e.g., bit lines 204 0, 204 2, 204 4, etc.) may be one physical page of memory cells 208 (e.g., even memory cells), while the memory cells 208 commonly connected to word line 202 N and selectively connected to odd bit lines 204 (e.g., bit lines 204 1, 204 3, 204 5, etc.) may be another physical page of memory cells 208 (e.g., odd memory cells).
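The even/odd physical page grouping described above can be expressed as a simple mapping. The helper functions below are an assumption for exposition (they are not part of the patent): they assign each bit line index to the even or odd physical page of a row, mirroring the example of bit lines 204 0, 204 2, 204 4 versus 204 1, 204 3, 204 5.

```python
# Illustrative mapping of bit line indices to even/odd physical pages,
# following the grouping described in the text. Hypothetical helper names.

def physical_page_group(bit_line_index):
    """Return which physical page of a row a given bit line belongs to."""
    return "even" if bit_line_index % 2 == 0 else "odd"

def split_row_into_pages(num_bit_lines):
    """Split one row (all cells on a word line) into its two physical pages."""
    pages = {"even": [], "odd": []}
    for bl in range(num_bit_lines):
        pages[physical_page_group(bl)].append(bl)
    return pages
```

For a row spanning six bit lines, this yields the even page {0, 2, 4} and the odd page {1, 3, 5}; other groupings, as the text notes, are equally possible.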
Although bit lines 204 3 to 204 5 are not explicitly depicted in FIG. 2, it is apparent from the figure that the bit lines 204 of the memory cell array 104 may be numbered consecutively from bit line 204 0 to bit line 204 M. Other groupings of the memory cells 208 commonly connected to a given word line 202 may also define a physical page of memory cells 208. For certain memory devices, all memory cells commonly connected to a given word line might be deemed a physical page of memory cells. The portion of a physical page of memory cells (which, in some embodiments, could still be an entire row) that is read during a single read operation or programmed during a single programming operation (e.g., an upper or lower page of memory cells) might be deemed a logical page of memory cells. A block of memory cells may include those memory cells that are configured to be erased together, such as all memory cells connected to word lines 202 0 to 202 N (e.g., all NAND strings 206 sharing common word lines 202). Unless expressly distinguished, a reference to a page of memory cells herein refers to the memory cells of a logical page of memory cells. Although the example of FIG. 2 is discussed in conjunction with NAND flash, the embodiments and concepts described herein are not limited to a particular array architecture or structure, and can include other structures (e.g., SONOS, phase change, ferroelectric, etc.) and other architectures (e.g., AND arrays, NOR arrays, etc.).
FIG. 3 is a block diagram illustrating a multi-die package with multiple memory dies in a memory subsystem, in accordance with some embodiments of the present disclosure. As illustrated, multi-die package 300 includes memory dies 330 (0) through 330 (7). In other embodiments, however, multi-die package 300 can include some other number of memory dies, such as more or fewer memory dies. In one embodiment, the memory dies 330 (0) through 330 (7) share a clock signal ICLK received via a clock signal line. The memory dies 330 (0) through 330 (7) can be selectively enabled in response to a chip enable signal (e.g., via a control link) and can communicate over a separate I/O bus. In addition, a peak current magnitude indicator signal HC# is commonly shared between the memory dies 330 (0) through 330 (7). The peak current magnitude indicator signal HC# can normally be pulled to a particular state (e.g., pulled high). In one embodiment, each of the memory dies 330 (0) through 330 (7) includes an instance of the PPM component 150, which receives both the clock signal ICLK and the peak current magnitude indicator signal HC#.
In one embodiment, a token-based protocol is used where a token cycles through each of the memory dies 330 (0) through 330 (7) for determining and broadcasting expected peak current magnitude, even if some of the memory dies 330 (0) through 330 (7) are disabled in response to their respective chip enable signals. The period of time during which a given PPM component 150 holds this token (e.g., a certain number of cycles of the clock signal ICLK) may be referred to herein as the power management cycle of the associated memory die. At the end of the power management cycle, the token is passed sequentially to the next memory die. Eventually the token is received again by the same PPM component 150, signaling to the associated memory die that a new power management cycle has begun. In one embodiment, the encoded value of the lowest expected peak current magnitude is configured such that each of its digits corresponds to the normal logic level of the peak current magnitude indicator signal HC#, so that disabled dies do not transition the peak current magnitude indicator signal HC#. In other embodiments, however, the memory dies, when otherwise disabled in response to their respective chip enable signals, can be configured to drive transitions of the peak current magnitude indicator signal HC# to indicate the encoded value of the lowest expected peak current magnitude when so specified. While a given PPM component 150 holds the token, it can determine a peak current magnitude for a respective one of the memory dies 330 (0) through 330 (7), which can be attributable to one or more processing threads on that memory die, and broadcast an indication of the peak current magnitude via the peak current magnitude indicator signal HC#.
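The token rotation above can be sketched as a round-robin loop. The following is a deliberately simplified, hypothetical model (function and variable names are illustrative): the token holder broadcasts its expected peak current magnitude, then passes the token to the next die; real hardware serializes the magnitude over ICLK cycles on the shared HC# line, which is abstracted away here.

```python
# Simplified model of the token-based PPM protocol: on each power management
# cycle, the die holding the token broadcasts its expected peak current
# magnitude and then passes the token to the next die in the package.
# Illustrative only; HC# encoding and clocking are not modeled.

def run_power_management_cycles(die_peak_currents, num_cycles):
    """Rotate the token for num_cycles and return the broadcast history
    as (die_index, peak_current) pairs."""
    num_dies = len(die_peak_currents)
    broadcasts = []
    token = 0
    for _ in range(num_cycles):
        # the token holder broadcasts its expected peak current magnitude
        broadcasts.append((token, die_peak_currents[token]))
        token = (token + 1) % num_dies   # pass the token to the next die
    return broadcasts
```

After one full rotation, every die has observed every other die's broadcast, which is what lets each PPM component 150 track package-wide current utilization.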
FIG. 4 is a flowchart of an example method of current management during data burst operations in a memory device of a memory subsystem, in accordance with some embodiments of the present disclosure. The method 400 can be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 400 is performed by the PPM component 150 of FIGS. 1A, 1B, and 3. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.
At operation 405, a memory operation is performed. For example, control logic (e.g., local media controller 135) can perform one or more operations on a memory die (e.g., memory device 130). Depending on the embodiment, the operations can include read, write, or erase operations, or any combination of memory operation types. For example, the control logic can cause program or read voltage signals to be applied to access lines (e.g., bit lines and word lines) of the memory array 104 to program data to or read data from the corresponding memory cells. The operations can include host-initiated operations (i.e., performed in response to requests or commands received from host system 120 or memory subsystem controller 115) or internal media management operations. Any such operations utilize some amount of current in the memory subsystem; however, this amount of current is typically less than the maximum allowable current budget. For example, as illustrated in FIG. 5, the actual current 502 used to perform these operations (i.e., during the period before the data burst command is received at time 510) remains below the maximum allowable current budget 550.
At operation 410, information is broadcast. For example, control logic (e.g., PPM component 150) can periodically broadcast the current utilization associated with the one or more operations to a plurality of other memory devices in memory subsystem 110. In one embodiment, when memory device 130 holds the token, as described above with respect to FIG. 3, the control logic can cause an indication of the current utilization to be broadcast to the other memory dies via the peak current magnitude indicator signal HC#, for example, as part of a shared data packet. In this manner, the PPM component 150 on each memory die in memory subsystem 110 is aware of the current utilization on each of the other memory dies.
At operation 415, a command is received. For example, the control logic can receive, from a requestor, such as memory subsystem controller 115 or host system 120, a data burst command indicating an impending data burst event. A data burst event occurs when a set of consecutive data input or data output transfer cycles is performed between memory subsystem controller 115 and a memory die (e.g., memory device 130) without interruption. For example, memory subsystem controller 115 may buffer incoming requests from host system 120 and thus can preemptively determine when a data burst event will occur. In another embodiment, memory subsystem controller 115 can predict the occurrence of a future data burst based on the current memory access workload, for example, using historical trends. The data burst command can be a dedicated command having a unique header or other identifier that is recognizable by PPM component 150. The same command can be sent to and received by every other memory die in the memory subsystem. In one embodiment, the data burst command includes additional information, such as a number of data bursts that will occur within a certain period of time.
At operation 420, a determination is made. For example, the control logic can determine an expected current utilization in memory subsystem 110 during the data burst event. In one embodiment, the expected current utilization in memory subsystem 110 during the data burst event includes a combination of the current utilization associated with the one or more operations (i.e., the actual current utilization 502) and an estimated current utilization associated with the data burst event. In one embodiment, PPM component 150 can be preconfigured with a default current utilization associated with a data burst event. Thus, based on the number of data bursts indicated in the data burst command, PPM component 150 can determine the estimated current utilization.
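The determination at operation 420 reduces to simple arithmetic. The following sketch makes the assumed model explicit (the default per-burst value and the use of milliamps are illustrative assumptions, not values from the patent): the expected utilization is the actual utilization plus the number of indicated bursts times the preconfigured default per burst.

```python
# Illustrative computation for operation 420. The 25 mA default per burst
# is an assumed placeholder for the preconfigured value, not a real figure.

def expected_current_utilization(actual_ma, num_bursts, default_per_burst_ma=25):
    """Expected utilization = actual utilization from in-flight operations
    plus the estimated utilization for the indicated number of bursts."""
    return actual_ma + num_bursts * default_per_burst_ma
```

With 100 mA of actual utilization and a command indicating four bursts, the expected utilization under these assumed numbers would be 200 mA, which is the value then tested against the budget at operation 425.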
At operation 425, a determination is made. For example, the control logic can determine whether the expected current utilization in memory subsystem 110 during the data burst event satisfies a threshold criterion. In one embodiment, the expected current utilization satisfies the threshold criterion if the expected current utilization will remain below a threshold level, such as the maximum allowable current budget 550 for the plurality of memory dies in memory subsystem 110. As illustrated in FIG. 5, the expected current 504 includes the actual current 502 plus some additional amount of current utilization (i.e., the default amount associated with the data burst event). During the period after the data burst command is received at time 510, the expected current rises above the maximum allowable current budget 550, which would normally trigger an asynchronous reset event. Since the expected current 504 is not actual current utilization, however, no reset occurs. Nevertheless, the expected current 504 does not satisfy the threshold criterion because it is greater than the maximum allowable current budget 550.
At operation 430, memory operations are suspended. In response to determining that the expected current utilization during the data burst event does not satisfy the threshold criterion, the control logic can suspend the one or more operations being performed by the control logic on the memory device until the expected current utilization in the memory subsystem during the data burst event satisfies the threshold criterion. For example, in one embodiment, responsive to that determination, the PPM component 150 on each memory die can reject all requests to increase the current utilization associated with the memory operations being performed. PPM component 150 can, however, continue to communicate decreases in current utilization to the other memory dies, such as decreases associated with the completion of memory operations. In this manner, the current with which to perform the data burst event will be freed. As illustrated in FIG. 5, when the data burst command is received at time 510, the expected current 504 increases significantly. In the subsequent period, however, both the expected current 504 and the actual current 502 are shown to systematically decrease (i.e., step down) as a result of the one or more operations being suspended. The current budget that had been utilized by those operations is released and can be used to accommodate the impending data burst. The control logic continues to track the expected current 504 and repeatedly compares the expected current 504 to the maximum current budget 550.
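The step-down behavior of operations 425 and 430 can be sketched as a loop. In this hypothetical model (names and units are illustrative), increase requests are rejected while decreases from completing operations are applied, until the actual utilization plus the burst estimate fits under the budget:

```python
# Illustrative sketch of operations 425-430: apply current decreases from
# completing operations (increases are rejected, so none appear here) until
# the expected total fits under the maximum allowable current budget.

def suspend_until_budget_met(actual_ma, burst_estimate_ma, budget_ma, decreases_ma):
    """Step the actual current down via the given decreases until
    actual + burst estimate <= budget. Returns (steps used, final actual)."""
    steps = 0
    for release in decreases_ma:
        if actual_ma + burst_estimate_ma <= budget_ma:
            break                      # threshold criterion now satisfied
        actual_ma -= release           # a completing operation frees current
        steps += 1
    return steps, actual_ma
```

For example, with 120 mA of actual utilization, a 50 mA burst estimate, and a 150 mA budget, a single 20 mA release suffices: 100 + 50 = 150 mA meets the criterion, mirroring the staircase decrease shown in FIG. 5.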
At operation 435, an indication is provided. In response to determining that the expected current utilization during the data burst event satisfies the threshold criterion (i.e., is at or below the maximum current budget 550), either initially or after the one or more operations have been suspended, the control logic can provide, to the requestor, an indication that the data burst event is authorized. As illustrated in FIG. 5, once the expected current 504 reaches the maximum current budget 550, the data burst approval can occur at time 520. In one embodiment, to provide the indication that the data burst event is authorized, the control logic can set a corresponding bit in a status register to a certain value. In one embodiment, PPM component 150 periodically or continuously sends a signal to memory subsystem controller 115 indicating the status of the corresponding bit (i.e., whether the data burst is approved). In another embodiment, memory subsystem controller 115 can periodically poll the status register to determine whether the corresponding bit is set to the certain value.
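The status-register signaling of operation 435 can be illustrated with bit operations. Both the bit position and the helper names below are assumptions for exposition; the patent does not specify which bit is used:

```python
# Hypothetical status-register model for operation 435: the die signals
# approval by setting one bit, which the controller can poll. The bit
# position is an assumed placeholder.

DATA_BURST_APPROVED_BIT = 0x04   # illustrative bit position, not specified

def approve_data_burst(status_register):
    """Set the approval bit, leaving the other status bits untouched."""
    return status_register | DATA_BURST_APPROVED_BIT

def is_data_burst_approved(status_register):
    """What the controller's poll of the status register would check."""
    return bool(status_register & DATA_BURST_APPROVED_BIT)
```

Using OR to set and AND to test means the approval bit can coexist with other status bits (e.g., ready/busy) in the same register without disturbing them.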
At operation 440, an operation is performed. For example, the control logic can perform one or more operations corresponding to the data burst event. As illustrated in FIG. 5, the data burst begins at time 530, and the operations performed can correspond to a period of uninterrupted data transfer to or from the plurality of memory dies in memory subsystem 110. During the data burst event, the actual current 502 increases and can reach or approach the maximum current budget 550.
At operation 445, a command is received. For example, the control logic can receive, from the requestor, a data burst release command indicating that the data burst event has completed. Since memory subsystem controller 115 is aware of the impending workload, it can determine when the data burst event will end and, in response, can send the data burst release command to each of the memory dies in memory subsystem 110. The data burst release command can be a dedicated command having a unique header or other identifier that is recognizable by PPM component 150. As illustrated in FIG. 5, the data burst release command can be received at time 540.
At operation 450, the suspended operations are resumed. In response to receiving the data burst release command, the control logic can resume the suspended one or more operations performed on memory device 130. In one embodiment, the PPM component 150 on each die will poll the available current each time the corresponding die receives the token, until there is enough current available to resume the previously suspended memory operations. Once the current associated with the data burst event is released, PPM component 150 will see a sufficient available current budget and will send an acknowledgment to the control logic to resume the one or more operations and consume the previously requested current value. As illustrated in FIG. 5, the actual current 502 can drop below the maximum current budget 550, as the additional current utilized during the data burst is no longer being drawn.
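The resume condition of operation 450 can be sketched as polling on successive token holds. This hypothetical model (names are illustrative) returns the token hold on which the previously requested current first fits within the available budget:

```python
# Illustrative sketch of operation 450: on each token hold after the data
# burst release command, the PPM checks whether the previously requested
# current now fits in the available budget; if so, the operation resumes.

def resume_when_current_available(requested_ma, available_per_token):
    """Return the index of the token hold on which the suspended operation
    resumes, or None if the budget never becomes sufficient."""
    for hold, available_ma in enumerate(available_per_token):
        if available_ma >= requested_ma:
            return hold   # acknowledgment sent; the operation resumes here
    return None
```

For example, if a suspended program requested 40 mA and successive token holds see 10, 25, then 45 mA available (as the burst current is released), the operation resumes on the third hold.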
FIG. 6 illustrates an example machine of a computer system 600 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system 600 can correspond to a host system (e.g., host system 120 of FIG. 1A) that includes, is coupled to, or utilizes a memory subsystem (e.g., memory subsystem 110 of FIG. 1A), or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to local media controller 135 of FIG. 1A). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.
The machine may be a Personal Computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a network appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Moreover, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The example computer system 600 includes a processing device 602, a main memory 604 (e.g., Read-Only Memory (ROM), flash memory, Dynamic Random Access Memory (DRAM) such as Synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 606 (e.g., flash memory, Static Random Access Memory (SRAM), etc.), and a data storage system 618, which communicate with each other via a bus 630.
The processing device 602 represents one or more general-purpose processing devices, such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a Complex Instruction Set Computing (CISC) microprocessor, a Reduced Instruction Set Computing (RISC) microprocessor, a Very Long Instruction Word (VLIW) microprocessor, or one processor implementing other instruction sets or multiple processors implementing a combination of instruction sets. The processing device 602 may also be one or more special purpose processing devices, such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), a network processor, or the like. The processing device 602 is configured to execute instructions 626 for performing the operations and steps discussed herein. Computer system 600 may further include a network interface device 608 to communicate over a network 620.
The data storage system 618 may include a machine-readable storage medium 624 (also referred to as a computer-readable medium) on which is stored one or more sets of instructions 626 or software embodying any one or more of the methodologies or functions described herein. The instructions 626 may also reside, completely or at least partially, within the main memory 604 and/or within the processing device 602 during execution thereof by the computer system 600, the main memory 604 and the processing device 602 also constituting machine-readable storage media. The machine-readable storage medium 624, the data storage system 618, and/or the main memory 604 may correspond to the memory subsystem 110 of fig. 1A.
In one embodiment, instructions 626 include instructions that implement functionality corresponding to local media controller 135 of fig. 1A. While the machine-readable storage medium 624 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media storing one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
Some portions of the preceding detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, considered to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure may relate to the actions and processes of a computer system or similar electronic computing device that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.
The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, Read-Only Memories (ROMs), Random Access Memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the methods. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the disclosure as described herein.
The present disclosure may be provided as a computer program product or software which may include a machine-readable medium having stored thereon instructions which may be used to program a computer system (or other electronic device) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., computer) readable storage medium, such as read only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, and the like.
In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.

Claims (20)

1. A memory subsystem, comprising:
a memory subsystem controller; and
a plurality of memory dies coupled to the memory subsystem controller, wherein each memory die of the plurality of memory dies comprises:
a memory array; and
control logic, operatively coupled with the memory array, to perform operations comprising:
receiving a data burst command from the memory subsystem controller indicating an impending data burst event;
determining an expected current utilization in the memory subsystem during the data burst event;
determining whether the expected current utilization in the memory subsystem during the data burst event satisfies a threshold criterion;
in response to determining that the expected current utilization in the memory subsystem during the data burst event does not satisfy the threshold criterion, suspending one or more operations being performed by the control logic on the memory die until the expected current utilization in the memory subsystem during the data burst event satisfies the threshold criterion; and
in response to determining that the expected current utilization in the memory subsystem during the data burst event satisfies the threshold criterion, providing, to the memory subsystem controller, an indication that the data burst event is warranted.
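The control flow recited in claim 1 can be illustrated with a brief sketch. This is not the patented implementation; all names (`THRESHOLD_MA`, the callbacks, the current values) are hypothetical, and real control logic would run in die firmware/hardware rather than Python.

```python
# Illustrative sketch of the claim-1 flow: gate a data burst event on the
# expected current utilization, suspending local operations until the
# utilization satisfies the threshold criterion. All values are assumed.

THRESHOLD_MA = 800  # hypothetical maximum allowable current budget, in mA

def handle_data_burst_command(ongoing_ops_ma, burst_estimate_ma,
                              pause_ops, notify_ready):
    """Decide whether an impending data burst event is warranted.

    ongoing_ops_ma:    per-operation current draws (mA) on this die
    burst_estimate_ma: estimated current draw of the burst itself
    pause_ops:         callback that suspends one ongoing operation and
                       returns the current (mA) freed, or 0 if none remain
    notify_ready:      callback signaling the controller the burst may proceed
    """
    expected = sum(ongoing_ops_ma) + burst_estimate_ma
    # Suspend operations until the expected utilization meets the budget.
    while expected > THRESHOLD_MA:
        freed = pause_ops()
        if freed == 0:  # nothing left to suspend; budget cannot be satisfied
            return False
        expected -= freed
    notify_ready()  # indicate to the controller that the burst is warranted
    return True
```

With two 300 mA operations in flight and a 300 mA burst estimate, one operation must be suspended before the 800 mA budget is satisfied.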
2. The memory subsystem of claim 1, wherein the control logic is to perform operations further comprising:
performing the one or more operations on the memory die; and
periodically broadcasting a current utilization associated with the one or more operations to other memory dies of the plurality of memory dies.
3. The memory subsystem of claim 2, wherein determining the expected current utilization in the memory subsystem during the data burst event comprises combining the current utilization associated with the one or more operations with an estimated current utilization associated with the data burst event.
4. The memory subsystem of claim 1, wherein determining whether the expected current utilization in the memory subsystem during the data burst event satisfies the threshold criterion comprises determining whether the expected current utilization in the memory subsystem will remain below a maximum allowable current budget for the plurality of memory dies in the memory subsystem during the data burst event.
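Claims 3 and 4 together describe how the expected utilization is formed and tested: sum the utilizations the dies broadcast to one another, add the burst's estimated draw, and compare against the subsystem-wide budget. A minimal sketch, with all numeric values assumed for illustration:

```python
# Sketch of claims 3-4: combine broadcast per-die utilization with the
# burst's estimated draw (claim 3), then test the result against a maximum
# allowable current budget (claim 4). MAX_BUDGET_MA is hypothetical.

MAX_BUDGET_MA = 1200  # assumed budget for all dies in the subsystem, in mA

def expected_utilization_ma(broadcast_ma_per_die, burst_estimate_ma):
    """Combine per-die broadcast current utilization with the burst estimate."""
    return sum(broadcast_ma_per_die) + burst_estimate_ma

def meets_threshold(broadcast_ma_per_die, burst_estimate_ma):
    """True if the expected utilization stays below the allowable budget."""
    return expected_utilization_ma(broadcast_ma_per_die,
                                   burst_estimate_ma) < MAX_BUDGET_MA
```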
5. The memory subsystem of claim 1, wherein providing the indication that the data burst event is warranted comprises setting a corresponding bit in a status register to a particular value, wherein the memory subsystem controller is to periodically poll the status register to determine whether the corresponding bit is set to the particular value.
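The status-register handshake in claim 5 amounts to a set-and-poll protocol: the die sets a bit, and the controller polls for it. The bit position and register model below are assumptions for illustration, not values from the patent.

```python
# Sketch of the claim-5 handshake: the die-side control logic sets a
# "burst warranted" bit in a status register; the controller polls it.
# The bit position (0x20) and 8-bit register are hypothetical.

BURST_READY_BIT = 0x20  # assumed "data burst warranted" bit mask

class StatusRegister:
    def __init__(self):
        self.value = 0x00

    def set_burst_ready(self):
        # Die side: indicate the data burst event is warranted.
        self.value |= BURST_READY_BIT

    def poll_burst_ready(self):
        # Controller side: periodically test whether the bit is set.
        return bool(self.value & BURST_READY_BIT)
```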
6. The memory subsystem of claim 1, wherein the control logic is to perform operations further comprising:
performing one or more operations corresponding to the data burst event, wherein the one or more operations correspond to a period of uninterrupted data transfer to or from the plurality of memory dies.
7. The memory subsystem of claim 1, wherein the control logic is to perform operations further comprising:
receiving a data burst release command from the memory subsystem controller indicating that the data burst event has completed; and
in response to receiving the data burst release command, resuming the one or more suspended operations on the memory die.
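Claim 7's release path pairs naturally with the suspension in claim 1: operations paused ahead of the burst are resumed when the release command arrives. A hypothetical sketch (queue names and command handling are illustrative only):

```python
# Sketch of claim 7: operations suspended for a data burst event are
# resumed when the controller issues a data burst release command.
# The two-queue model below is an illustrative simplification.

class DieControlLogic:
    def __init__(self):
        self.running = []     # operations currently executing on the die
        self.suspended = []   # operations paused for a data burst event

    def suspend_for_burst(self):
        # Pause everything currently running ahead of the burst.
        self.suspended, self.running = self.running, []

    def on_burst_release(self):
        # Data burst event has completed: resume the paused operations.
        self.running, self.suspended = self.suspended, []
```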
8. A memory device, comprising:
a memory array; and
control logic, operatively coupled with the memory array, to perform operations comprising:
receiving a data burst command from a requestor indicating an impending data burst event;
determining an expected current utilization during the data burst event;
determining whether the expected current utilization during the data burst event satisfies a threshold criterion;
in response to determining that the expected current utilization during the data burst event does not satisfy the threshold criterion, suspending one or more operations being performed by the control logic on the memory device until the expected current utilization during the data burst event satisfies the threshold criterion; and
in response to determining that the expected current utilization during the data burst event satisfies the threshold criterion, providing, to the requestor, an indication that the data burst event is warranted.
9. The memory device of claim 8, wherein the control logic is to perform operations further comprising:
performing the one or more operations on the memory device; and
periodically broadcasting a current utilization associated with the one or more operations to a plurality of other memory devices.
10. The memory device of claim 9, wherein determining the expected current utilization during the data burst event comprises combining the current utilization associated with the one or more operations with an estimated current utilization associated with the data burst event.
11. The memory device of claim 8, wherein determining whether the expected current utilization during the data burst event satisfies the threshold criterion comprises determining whether the expected current utilization during the data burst event will remain below a maximum allowable current budget.
12. The memory device of claim 8, wherein providing the indication that the data burst event is warranted comprises setting a corresponding bit in a status register to a particular value, wherein the requestor is to periodically poll the status register to determine whether the corresponding bit is set to the particular value.
13. The memory device of claim 8, wherein the control logic is to perform operations further comprising:
performing one or more operations corresponding to the data burst event, wherein the one or more operations correspond to a period of uninterrupted data transfer to or from the memory device.
14. The memory device of claim 8, wherein the control logic is to perform operations further comprising:
receiving a data burst release command from the requestor indicating that the data burst event has completed; and
in response to receiving the data burst release command, resuming the one or more suspended operations on the memory device.
15. A method, comprising:
receiving a data burst command from a requestor indicating an impending data burst event;
determining an expected current utilization during the data burst event;
determining whether the expected current utilization during the data burst event satisfies a threshold criterion;
in response to determining that the expected current utilization during the data burst event does not satisfy the threshold criterion, suspending one or more operations being performed on a memory device until the expected current utilization during the data burst event satisfies the threshold criterion; and
in response to determining that the expected current utilization during the data burst event satisfies the threshold criterion, providing, to the requestor, an indication that the data burst event is warranted.
16. The method of claim 15, further comprising:
performing the one or more operations on the memory device; and
periodically broadcasting a current utilization associated with the one or more operations to a plurality of other memory devices.
17. The method of claim 16, wherein determining the expected current utilization during the data burst event comprises combining the current utilization associated with the one or more operations with an estimated current utilization associated with the data burst event.
18. The method of claim 15, wherein determining whether the expected current utilization during the data burst event satisfies the threshold criterion comprises determining whether the expected current utilization during the data burst event will remain below a maximum allowable current budget.
19. The method of claim 15, wherein providing the indication that the data burst event is warranted comprises setting a corresponding bit in a status register to a particular value, wherein the requestor is to periodically poll the status register to determine whether the corresponding bit is set to the particular value.
20. The method of claim 15, further comprising:
performing one or more operations corresponding to the data burst event, wherein the one or more operations correspond to a period of uninterrupted data transfer to or from the memory device;
receiving a data burst release command from the requestor indicating that the data burst event has completed; and
in response to receiving the data burst release command, resuming the one or more suspended operations on the memory device.
CN202410053536.3A 2023-01-13 2024-01-12 Current management during data burst operations in a multi-die memory device Pending CN118349163A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US63/439,027 2023-01-13
US18/407,239 2024-01-08

Publications (1)

Publication Number Publication Date
CN118349163A true CN118349163A (en) 2024-07-16

Similar Documents

Publication Publication Date Title
CN113424165B (en) Interruption of programming operations at a memory subsystem
US20240145010A1 (en) Partial block handling in a non-volatile memory device
US20230195317A1 (en) I/o expanders for supporting peak power management
US20230060312A1 (en) Continuous memory programming operations
US20240241643A1 (en) Current management during data burst operations in a multi-die memory device
CN118349163A (en) Current management during data burst operations in a multi-die memory device
US20230305616A1 (en) Peak power management with data window reservation
US20230105208A1 (en) Headroom management during parallel plane access in a multi-plane memory device
US20230350587A1 (en) Peak power management priority override
US20230195312A1 (en) Peak power management in a memory device during suspend status
US20240055058A1 (en) Scheduled interrupts for peak power management token ring communication
US20230393784A1 (en) Data path sequencing in memory systems
US20240143501A1 (en) Dual data channel peak power management
US11735272B2 (en) Noise reduction during parallel plane access in a multi-plane memory device
US20230289307A1 (en) Data burst suspend mode using pause detection
US20240152295A1 (en) Peak power management with dynamic data path operation current budget management
US20240143179A1 (en) Resuming suspended program operations in a memory device
US12027211B2 (en) Partial block handling protocol in a non-volatile memory device
US12001336B2 (en) Hybrid parallel programming of single-level cell memory
US20230289306A1 (en) Data burst suspend mode using multi-level signaling
US20240061592A1 (en) Multiple current quantization values for peak power management
US20230367723A1 (en) Data burst queue management
US11842078B2 (en) Asynchronous interrupt event handling in multi-plane memory devices
US20240231460A1 (en) Clock pulse management to reduce peak power levels
US20240168536A1 (en) Peak power management extensions to application-specific integrated circuits

Legal Events

Date Code Title Description
PB01 Publication