US20140237167A1 - Apparatus and Methods for Peak Power Management in Memory Systems - Google Patents


Info

Publication number
US20140237167A1
Authority
US
United States
Prior art keywords
command
execution
memory array
memory
respect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/262,077
Inventor
Damian P. Yurzola
Rajeev Nagabhirava
Gary J. Lin
Matthew Davidson
Paul A. Lassa
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SanDisk Technologies LLC
Original Assignee
SanDisk Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US13/167,929 priority Critical patent/US8694719B2/en
Priority to US13/296,898 priority patent/US8745369B2/en
Application filed by SanDisk Technologies LLC filed Critical SanDisk Technologies LLC
Priority to US14/262,077 priority patent/US20140237167A1/en
Publication of US20140237167A1 publication Critical patent/US20140237167A1/en
Assigned to SANDISK TECHNOLOGIES LLC reassignment SANDISK TECHNOLOGIES LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: SANDISK TECHNOLOGIES INC

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 Details not covered by groups G06F 3/00 – G06F 13/00 and G06F 21/00
    • G06F 1/16 Constructional details or arrangements
    • G06F 1/20 Cooling means
    • G06F 1/206 Cooling means comprising thermal management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 Details not covered by groups G06F 3/00 – G06F 13/00 and G06F 21/00
    • G06F 1/26 Power supply means, e.g. regulation thereof
    • G06F 1/32 Means for saving power
    • G06F 1/3203 Power management, i.e. event-based initiation of power-saving mode
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 Details not covered by groups G06F 3/00 – G06F 13/00 and G06F 21/00
    • G06F 1/26 Power supply means, e.g. regulation thereof
    • G06F 1/32 Means for saving power
    • G06F 1/3203 Power management, i.e. event-based initiation of power-saving mode
    • G06F 1/3234 Power saving characterised by the action undertaken
    • G06F 1/325 Power saving in peripheral device
    • G06F 1/3275 Power saving in memory, e.g. RAM, cache
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023 Free address space management
    • G06F 12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from or digital output to record carriers, e.g. RAID, emulated record carriers, networked record carriers
    • G06F 3/0601 Dedicated interfaces to storage systems
    • G06F 3/0628 Dedicated interfaces to storage systems making use of a particular technique
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/72 Details relating to flash memory management
    • G06F 2212/7207 Details relating to flash memory management; management of metadata or control data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/52 Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing
    • Y02D 10/10 Reducing energy consumption at the single machine level, e.g. processors, personal computers, peripherals or power supply
    • Y02D 10/14 Interconnection, or transfer of information or other signals between, memories, peripherals or central processing units

Abstract

Disclosed are apparatus and techniques for managing power in a memory system having a controller and a nonvolatile memory array. In one embodiment, prior to execution of each command with respect to the memory array, a request for execution of such command is received. In response to each request, execution of such command is allowed or withheld based on whether such command, together with execution of other commands, is estimated to exceed a predetermined power usage specification for the memory system.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This is a continuation of U.S. patent application Ser. No. 13/296,898, filed Nov. 15, 2011, which is a continuation-in-part of U.S. patent application Ser. No. 13/167,929, filed Jun. 24, 2011 (now U.S. Pat. No. 8,694,719), both of which are hereby incorporated by reference herein.
  • BACKGROUND OF THE INVENTION
  • This invention relates to methods for managing peak power levels in memory systems, in particular, memory systems that allow parallel operations with respect to multiple memory arrays (e.g., multi-plane and/or multi-die memory systems).
  • Memory systems generally include multiple components which are in communication with each other and perform different functions as part of an overall system. One example of such a memory system is a nonvolatile memory system. Nonvolatile memory systems are used in various applications. Some nonvolatile memory systems are embedded in a larger system such as a personal computer. Other nonvolatile memory systems are removably connected to a host system and may be interchanged between different host systems. Examples of such removable memory systems (removable memory units) include memory cards and USB flash drives. Electronic circuit cards, including non-volatile memory cards, have been commercially implemented according to a number of well-known standards. Memory cards are used with personal computers, cellular telephones, personal digital assistants (PDAs), digital still cameras, digital movie cameras, portable audio players and other host electronic devices for the storage of large amounts of data. Such cards usually contain a re-programmable non-volatile semiconductor memory cell array along with a controller that controls and supports operation of the memory cell array and interfaces with a host to which the card is connected. Memory card standards include PC Card, CompactFlash™ card (CF™ card), SmartMedia™ card, MultiMediaCard (MMC™), Secure Digital (SD) card, miniSD™ card, microSD™ card, Memory Stick™, Memory Stick Duo card and microSD/TransFlash™ memory module standards, by way of a few examples. There are several USB flash drive products commercially available from SanDisk Corporation under its trademark “Cruzer®.” Other examples of removable memory units include Solid State Drives (SSDs), e.g. using SATA, PCIe, ExpressCard or similar standards. SSDs use solid state memory systems in applications where Hard Disk Drives have traditionally been used, such as in laptop computers.
  • A solid state drive (SSD) is designed to provide reliable and high performance storage of user data across a flash-based memory system containing a host interface controller (such as a Serial Advanced Technology Attachment (SATA) interface) and a number of memory multi-chip packages (MCPs), where each MCP contains a flash memory controller and a stack of NAND flash dies. The Open NAND Flash Interface (ONFI) protocol provides support for parallel access to multiple NAND dies (or “logical units” (LUNs)) on a single “target” or NAND multi-chip stack on a single shared ONFI channel. In a typical SATA-based SSD application, a central host controller accesses multiple attached devices (targets/NAND device clusters) on each ONFI channel, and across several ONFI channels. Each ONFI target typically controls 2, 4, or 8 NAND dies. Storage management software running on the host controller manages a virtual memory space that is mapped to flash blocks in the physical dies in each of the attached MCPs.
  • The host controller and the storage management software utilize parallel access and efficient usage of the available flash devices to optimize SSD drive performance, endurance, and cost. The system often must achieve these optimizations within product-related or technology-related power limits, which are often set forth in the specifications for the product. For example, in some SSD assemblies, the SSD assembly must not exceed 10 W peak power consumption under any operational mode.
  • Different techniques have been used to manage power within required limits. For example, the host can employ a host-initiated power management/power-down (HIPM/HIPD) technique in which the host de-powers some number of target modules or directs them to enter a standby/power-down mode. In this way, the host reduces traffic to some number of devices. Improved power management in non-volatile memory systems would be beneficial.
  • SUMMARY OF THE INVENTION
  • The present invention is defined by the claims, and nothing in this section should be taken as a limitation on those claims.
  • In general, apparatus and techniques for managing power in a memory system having a controller and a nonvolatile memory array are provided. In one embodiment, prior to execution of each command with respect to the memory array, a request for execution of such command is received. Execution of each command is then allowed or withheld based on whether such command, together with execution of other commands, is estimated to exceed a predetermined power usage specification for the memory system.
  • In a specific implementation, the memory array is formed within multiple dies and/or multiple planes that are accessible in parallel. In a specific aspect, allowing or withholding execution of each command with respect to the memory array is further based on whether such command has a type of command that has been previously executed more than a predetermined threshold number of times. In another aspect, allowing or withholding execution of each command with respect to the memory array is further based on a configurable decision matrix describing necessary delays between execution of each different type of command or a combination of commands.
  • In another method embodiment, prior to issuing for a component of the memory system a current command having a type, a request for execution of such current command is received at the controller. The controller allows the current command to issue, increments a count for the current command type, and resets a timer associated with the current command type if the count has not reached a predefined semaphore capacity. Otherwise, the controller withholds the current command from issuing if the count for such current command has reached the predefined semaphore capacity and the timer has not expired. The controller resets the count for such current command type if the timer for such current command type has expired.
  • In a specific implementation, the component (for which the request is received) is a memory cell array and the current command type pertains to programming, reading, or erasing with respect to the memory cell array. In one aspect, the count is reset by subtracting a timer expiration rate from the count. In a further aspect, the expiration rate equals the semaphore capacity. In another embodiment, the type of the current command is determined by comparing the current command to a plurality of command type values.
  • In another embodiment, the invention pertains to a memory system having a nonvolatile memory array for storing data, a flash protocol sequencer (FPS) for accessing the memory array and, prior to such accessing, requesting permission from a power arbitration unit (PAU) to access such memory array, and the PAU itself. The PAU is configured for allowing or withholding permission to the FPS for accessing the memory array. The PAU is configured to determine whether to allow or withhold based on whether such command, together with execution of other commands, is estimated to exceed a predetermined power usage specification for the memory system. In further embodiments the PAU is configured to perform one or more of the above described method operations. In another embodiment, the memory system comprises a nonvolatile memory array for storing data and a controller that is operable to perform one or more of the above described power arbitration method operations.
  • These and other features of the present invention will be presented in more detail in the following specification of embodiments of the invention and the accompanying figures, which illustrate by way of example the principles of the invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows an example of a memory system in accordance with one embodiment of the present invention.
  • FIG. 2A illustrates the operation of a Power Arbitration Unit (PAU) controller configured to implement data transfer operations with respect to logical units of a memory array so as to minimize peak power overlap in accordance with one example embodiment.
  • FIG. 2B shows a table of predefined semaphore information that is used during power arbitration in accordance with a specific implementation.
  • FIG. 2C is a flow chart illustrating a procedure for performing power arbitration with respect to various types of commands in accordance with a specific implementation of the present invention.
  • FIG. 3 is a diagrammatic representation of an interface between a power arbitration unit (PAU) and a plurality of flash interface modules (FIM) in accordance with one embodiment of the present invention.
  • FIG. 4 is a diagrammatic representation of a PAU module in accordance with one implementation of the present invention.
  • FIG. 5 illustrates one example of a PAU slave and an FPS module and I/O interface in accordance with a specific implementation.
  • FIG. 6 is a diagrammatic representation of an example structure of a memory cell array.
  • DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS
  • Reference will now be made in detail to a specific embodiment of the invention. An example of this embodiment is illustrated in the accompanying drawings. While the invention will be described in conjunction with this specific embodiment, it will be understood that it is not intended to limit the invention to one embodiment. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be practiced without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure the present invention.
  • The following embodiments are directed to techniques for meeting power limits of a storage system, while providing better performance. In general, the memory storage system includes a power arbitration unit (PAU) that is configured to provide power arbitration for commands that utilize power. The PAU receives a request to allow issuance of each command (or set of commands). For example, the PAU receives a write or read command request prior to execution of such command with respect to the memory array of the storage system. The PAU allows or withholds permission for execution of such command with respect to the memory array based on whether such command, together with execution of other commands, has been estimated to exceed a predetermined power usage specification for the memory system. For instance, the PAU may only allow a certain number of a particular command type (or combination of command types) to be executed in parallel.
  • Before turning to the details of PAU embodiments of the present invention, exemplary memory system architectures will first be described. FIG. 1 shows an example of memory system 100 in accordance with one embodiment of the present invention. The memory system includes a host interface 102, a memory controller 104, and a memory array in the form of one or more memory array dies, e.g., 106 a-106 d. An outer casing may be formed around these components so as to provide protection to such components from physical damage. The memory system may include other components (such as light emitting diodes, LEDs) for additional functionality.
  • The memory controller 104 is in communication with a host interface 102 that provides a connection to a host 101, which is, for example, a digital camera, laptop computer, MP3 player, PDA, or other similar electronic device. In certain implementations, the host interface 102 complies with a standard (such as a memory card standard or the USB standard) so that the memory system can interface with a wide range of hosts that have a corresponding interface. Typically, such standards provide for the physical arrangement of pins in the physical interface as well as the assignment of each pin, voltage levels used, as well as the protocols used for data and commands sent through the interface. Many interfaces include a provision for a host to provide power to a memory system. For example, memory cards and USB flash drives can obtain their power from a host through such a host interface.
  • The memory controller 104 is also in communication with four memory array chips 106 a-106 d over memory buses 114 a and 114 b. In the illustrated example, the controller 104 also includes a plurality of memory interfaces, such as Flash Interface Module (FIMs) 110 a and 110 b. Each FIM is coupled with a different memory bus that is coupled to a different set of memory dies. For instance, FIM 110 a is coupled with memory dies 106 a and 106 b via memory bus 114 a, and FIM 110 b is coupled with memory dies 106 c and 106 d via memory bus 114 b. Memory controller 104 also includes host interface 108, which is connected to the host interface 102 of the memory system 100.
  • The arrangement of FIG. 1 may facilitate higher speed access to the memory array by allowing a higher degree of parallelism. Both FIM's may transfer data in parallel to different sets of memory dies, thus doubling the speed of transfer for a given bus size. In one example, each memory bus has a bus width of 16 bits, so that using two such busses in parallel provides the equivalent of a 32 bit wide bus, but without requiring memory chips that are designed for 32 bit access (i.e. cheaper memory chips with 16 bit access may be used). Additionally, a higher degree of parallelism may be achieved due to a higher level of concurrency of operations being executed within the different memory dies, planes, etc.
  • The memory controller 104 may also be configured to manage data in the memory array. When a host sends data, the memory controller 104 can be operable to determine where the data is to be stored and record the location where such data is stored. In one example, the memory controller performs logical-to-physical mapping so that data received from the host with logical addresses is mapped to physical locations in the memory array in a manner that is determined by the memory controller according to the available space in the memory array.
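The logical-to-physical mapping described above can be sketched as follows. This is a minimal illustrative model only; the class and field names are invented here, and a real flash translation layer would also handle wear leveling, garbage collection, and persistence of the map.

```python
# Hypothetical sketch of controller-side logical-to-physical mapping:
# the host addresses data by logical block address (LBA), and the
# controller picks a physical location based on available space.
class MappingTable:
    def __init__(self):
        self.l2p = {}  # logical address -> (die, block, page)
        # Toy free list: 4 dies x 2 blocks x 4 pages (values are illustrative).
        self.free = [(d, b, p) for d in range(4) for b in range(2) for p in range(4)]

    def write(self, lba, data):
        # Pick the next free physical location and record where the data went.
        location = self.free.pop(0)
        self.l2p[lba] = location
        return location

    def read(self, lba):
        # Translate the host's logical address back to its physical location.
        return self.l2p.get(lba)
```

In practice the controller consults this map on every host read, so data written with a given logical address can be retrieved regardless of where it physically landed.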
  • The memory controller 104 may also include one or more Error Correction Code (ECC) modules, e.g., 118. Host data can be transferred between the memory controller and the flash memory array via FIMs 110 a and 110 b, which temporarily store such data in buffer RAM 120. A FIM can be configured to detect data errors on the fly during this process. If no errors are detected, the data can be transferred to the host via host interface modules 108 and 102. If errors are detected, ECC circuit 118 could be utilized to correct such errors before transferring the data to the host. Such ECC functions allow errors in data that are read from the memory array 106 to be corrected in the memory controller 104 before the data is sent to the host 101. In certain embodiments, the controller 104 may include any suitable number of ECC modules for writing and reading data to and from the memory array via one or more of the FIMs. That is, each FIM may have its own ECC module, or a single ECC module (118, as shown) may interface with a plurality of FIMs.
  • The memory controller 104 can take the form of any combination of hardware and software, such as a dedicated chip or Application Specific Integrated Circuit (“ASIC”), which is separate from the nonvolatile memory chips. The memory controller 104 may also include any number and type of circuits for performing the various functions of the controller. For instance, the controller 104 may also include one or more microprocessors 116 and buffer RAM 120. A microprocessor 116 can be configured to provide overall control of the ECC circuit 118, host interface module 108, and flash interface modules 110 a and 110 b, as well as other components of memory controller 104. The buffer RAM 120 may provide temporary storage of data that is being transferred between the host 101 and memory array, as well as other data for operation of the controller 104.
  • The memory array may take the form of a nonvolatile NAND flash memory array. Alternatively, the nonvolatile memory array may take the form of one-time-programmable (OTP) memory, NOR flash memory, Magnetic Random Access Memory (MRAM), or other form of nonvolatile memory. The nonvolatile memory array may be located in a plurality of chips as shown. Each chip may include read and write circuits and other peripheral circuits.
  • Alternative memory systems may include any suitable number and type of controllers, interfaces, buses, and/or memory dies. Several memory system architectures are described in U.S. patent application Ser. No. 13/167,929, entitled “Controller, Storage Device, and Method for Power Throttling Memory Operations”, filed Jun. 24, 2011, by Paul A. Lassa et al., which application is incorporated herein by reference in its entirety for all purposes.
  • In a multi-die memory system, particularly multi-bank-multi-die systems, there is a very high chance that power consuming operations at the memory array overlap in time. As a result, windows of very high peak power can be created. Execution of other types of commands, besides commands that are executed with respect to the array, also consume power.
  • The memory system of the present invention also includes a power arbitration unit (PAU) that is configured to manage power with respect to a plurality of commands. The PAU embodiments of the present invention may be integrated into any type of memory system architecture, such as the architectures described herein, including those incorporated herein by reference. Overall, a PAU may be implemented by any suitable combination of hardware and/or software. Although the embodiments illustrated herein show the PAU as being part of the memory controller, the PAU can be a separate module from the controller or formed within any suitable logic block of the memory system.
  • The PAU may operate to cause the time periods of peak power for a predefined number of commands of a certain type to be stacked together and executed in parallel, while the execution of a subsequent command of the same type is delayed. FIG. 2A illustrates a PAU controller 204 that is configured to implement data transfer operations with respect to logical units (LUNs) LUN0-LUN7 over time so as to minimize peak power overlap. At time t0, PAU controller 204 allows data transfer operations for LUN0-LUN3 to execute, while delaying execution of data transfer operations for LUN4-LUN7 until time t1. As a result, the peak power durations between t0 and t1, as shown in the power profiles 202 a-202 d for LUN0-LUN3, will not significantly overlap with the peak power durations of power profiles 202 e-202 h for LUN4-LUN7.
  • The PAU may be configured with any suitable data for facilitating power arbitration with respect to particular types of commands (or sets of commands). FIG. 2B includes a table 220 of predefined semaphore information that is used during power arbitration in accordance with a specific implementation. As shown, each command type has a plurality of associated semaphore fields: a command semaphore capacity, a command semaphore, a semaphore expiration timer, and a semaphore expiration rate. The PAU may be operable to utilize these semaphore values to determine whether to allow or inhibit issuance of a particular command type or set of command types as described further below.
  • The semaphore capacity generally indicates how many times the associated command type can issue or execute before further issuance is to be inhibited by the PAU. By way of example, in some systems a particular command type may be allowed to issue four times, while a fifth command of the same type is inhibited so that the memory system's power usage does not exceed the peak power budget. The command semaphore value indicates how many commands of the associated type have been allowed to issue or execute, e.g., within a specified time frame. In the previous example, the command semaphore value increments each time a command of the associated type is executed. When the command semaphore for a particular command type reaches the command semaphore capacity, the next request for the same command type must be withheld.
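The per-command-type record of table 220 can be modeled roughly as follows. This is an illustrative sketch only: the field names and the example values (capacity of 4, expiration rate of 4) are assumptions for demonstration, not values taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class CommandSemaphore:
    # Fields loosely mirror the per-command-type entries of table 220.
    capacity: int                 # max issuances before further commands are withheld
    count: int = 0                # command semaphore: commands already allowed to issue
    expiration_timer: float = 0.0 # time of the last issuance for this type
    expiration_rate: int = 0      # counts restored when the expiration timer elapses

    def at_capacity(self):
        # When the count reaches capacity, the next request must be withheld.
        return self.count >= self.capacity
```

A PAU implementation would keep one such record per command type (program, read, erase, etc.) and consult it on every issuance request.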
  • The semaphore expiration timer indicates the maximum amount of time that can pass since the last issued command of the particular type. After the expiration timer has been met, the currently withheld request or next request can be granted without delay. If a request is pending and the expiration timer value has not been met, such request is granted as soon as the command semaphore becomes less than the command semaphore capacity. The semaphore expiration rate indicates the number of semaphore units or counts that are restored whenever the semaphore expiration timer value is reached. In one embodiment, the semaphore expiration rate is subtracted from the command semaphore after the expiration time has elapsed since the last command. For example, if a write command can only be issued four times after which the fifth write command is held, the fifth write command is allowed to issue after the expiration time, and the current semaphore will be reset to zero if the expiration rate is 4. Of course, it is not necessary for the expiration rate to equal the semaphore capacity value. For example, the expiration rate can equal 1, while the capacity equals 4. In this latter example, four commands could issue at once (e.g., for LUN0˜LUN3), and then subsequent commands would be staggered (e.g., for LUN4˜LUN7).
  • The data for facilitating power arbitration may take any suitable form and contain any suitable values for efficiently limiting power consumption for a particular device. For example, the power facilitation data may be fixed to values that are determined to work best for the particular type of memory system, e.g., based on the device's specification limits and/or experimentation results. Alternatively, the power facilitation data may be selectively alterable as power needs change, e.g., as the device ages. Additionally, other types of data, such as power units, may be associated with and tracked for each type of command. The power units may then be added for simultaneously executing commands until a power limit is reached, after which command execution is deferred.
  • FIG. 2C is a flow chart illustrating a generalized procedure 250 for performing power arbitration with respect to various types of commands in accordance with a specific implementation of the present invention. Initially, a request for execution of a command with respect to the memory system may be received at the PAU in operation 252. For example, a command for programming the memory array is received from a particular FPS module.
  • The type of command (or set of commands) can then be determined in operation 254. For instance, a particular field of a received command is compared to a list of command type values that correspond to different command types, such as program, read, etc. In a further embodiment, the PAU may accumulate commands until a particular combination of command types for executing together are received.
  • It may then be determined whether the command type's (or command combination type's) corresponding timer has expired since issuance of the last command of this particular type in operation 256. For example, enough time may have passed for the previous command of the same type (as well as all concurrently executing commands and any other commands executed before the last command of the same type) to have executed completely or, at least, finished with the period of time for peak power usage. If the timer has expired, the timer expiration rate value may be subtracted from the command count associated with this command type in operation 257. For example, if the expiration rate is 4 and the current count has reached the semaphore capacity of 4 and the expiration timer has expired, the current count is reset to zero. If the expiration timer has not expired, this operation for resetting the count is skipped.
  • After the count is decremented, or if it is determined that the timer has not expired yet, it may then be determined whether the command count for this type has reached the semaphore capacity in operation 258. For example, if the semaphore capacity for a current command having a “write” type is 4, it is determined whether 4 “write” commands have already issued. If the semaphore capacity has not been reached, the current command (or combination of commands) may be allowed to issue, and the current count for this command type is incremented (and the expiration timer may be reset) in operation 260. The arbitration process may then end for the particular command until another command is received.
  • If the command count has reached its capacity, execution of the current command may be withheld in operation 262. It may then again be determined whether the command type's timer has expired since issuance of the last command in operation 264. The procedure may wait for expiration of the timer, after which the timer expiration rate is subtracted from the current count, causing such count to fall below the semaphore capacity value. This count reduction then allows the withheld current command to be issued in operation 260.
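The semaphore-and-timer procedure of operations 252 through 264 can be sketched in code as follows. This is a minimal single-threaded illustration; the capacity, expiration rate, and timer period are invented values, and a hardware PAU would track these fields per command type rather than in one object.

```python
import time

class CommandArbiter:
    """Sketch of per-command-type arbitration: a semaphore count, an
    expiration timer, and an expiration rate, per the procedure above."""

    def __init__(self, capacity=4, expiration_rate=4, period_s=0.001):
        self.capacity = capacity              # semaphore capacity (op 258)
        self.expiration_rate = expiration_rate
        self.period_s = period_s              # expiration timer duration
        self.count = 0                        # commands issued this window
        self.last_issue = None                # time of last issued command

    def _timer_expired(self):
        return (self.last_issue is None or
                time.monotonic() - self.last_issue >= self.period_s)

    def request(self, block=True):
        # Ops 256/257: on timer expiry, subtract the expiration rate.
        if self._timer_expired():
            self.count = max(0, self.count - self.expiration_rate)
        # Op 258: compare the count against the semaphore capacity.
        while self.count >= self.capacity:
            if not block:
                return False                  # op 262: withhold execution
            time.sleep(self.period_s)         # op 264: wait for timer expiry
            self.count = max(0, self.count - self.expiration_rate)
        # Op 260: allow the command; bump the count, restart the timer.
        self.count += 1
        self.last_issue = time.monotonic()
        return True

arb = CommandArbiter(capacity=2, period_s=10.0)  # long timer for the demo
assert arb.request(block=False)      # first command issues
assert arb.request(block=False)      # second issues; count reaches capacity
assert not arb.request(block=False)  # third is withheld until timer expiry
```

With a short `period_s`, a blocked `request(block=True)` call would return once the timer window elapses and the count is decremented, mirroring operation 264 flowing back into operation 260.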
  • By monitoring and limiting the “power cost” of the commands on a command-by-command basis, the PAU is able to dynamically (e.g., on the fly) alter when each of the commands is performed so that performance of the commands in the aggregate does not exceed power limits over a period of time, e.g., as set forth in the specifications of the storage device. Because it is the storage device's PAU, and not a host processor, that facilitates power regulation, these embodiments can be used with multi-chip packages (MCP) (e.g., a controller and N number of LUNs) that serve multiple host processing instances that are unaware of each other. For example, a host may be running four independent instances of a flash memory management application, where one or two LUNs on a four or eight LUN MCP are allocated or dedicated to each of the instances. In this case, each independent instance of the flash memory management application knows how much and what kind of traffic it is sending, but it does not know what the other three instances are sending. Centralizing power regulation control in a PAU of the MCP overcomes this problem.
  • Any suitable hardware and/or software may be configured to implement the PAU techniques described herein. FIG. 3 is a diagrammatic representation of an example interface between a PAU 112 and a plurality of FIMs 110 a-110 d. The PAU 112 may include a plurality of master modules 302 a-302 d, and the master modules are operable to interface with a plurality of slave modules 304 a-304 d of the FIMs 110 a-110 d. A plurality of flash protocol sequencer (FPS) modules 306 a-306 d of the FIMs can also provide an interface between the slave modules and a plurality of I/O modules 308 a-308 d, which are communicatively coupled to the memory array (not shown).
  • Each FPS may be configured to implement the commands with respect to the memory array, e.g., a NAND array, or, more specifically, a set of associated memory array dies. That is, each FPS may serve multiple array dies. In one implementation, each FIM can concurrently execute multiple process threads for accessing the multiple dies or banks of the memory array via its associated FPS module. For instance, each FIM may be configured to forward a plurality of commands to its associated FPS for execution in parallel with respect to a plurality of associated memory array dies and/or banks. After permission is granted by the PAU for one or more commands, the FPS may then generate the appropriate memory array signals for the permitted commands with respect to its associated memory array, as further described below.
  • In general, each master module of the PAU may provide a pass-through feature so as to transmit command requests and acknowledgement responses between each respective FPS module and the PAU. The number of master modules may depend on the number of FPS modules, for example, with a master module being provided for each FPS module. This arrangement scales easily, allowing additional memory dies and their associated FIM modules to be instantiated as needed.
  • FIG. 4 is a diagrammatic representation of PAU module 112 in accordance with one implementation of the present invention. The PAU may include a plurality of command unit modules 402 a-402 h for storing data for facilitating power arbitration. In this illustrated embodiment, each command unit module contains a plurality of fields for each command type or set of commands. For example, command unit 402 a includes a semaphore capacity field 404 a, a semaphore field 406 a, an expiration timer field 408 a, and an expiration rate field 410 a. The PAU may also include one or more timers, e.g., 412, as well as one or more master interfaces, e.g., 402 a-402 d.
  • FIG. 5 illustrates one example of a PAU slave 204 a, an FPS module 206 a, and an I/O interface 208 a in accordance with a specific implementation. The PAU 112 may include any suitable number and type of registers for holding variables or constants for operation of power arbitration for particular command types. As shown, the PAU slave 204 a may include a plurality of command type registers 504 a-504 h. Each command type register specifies or describes a particular command type, e.g., write, read, etc.
  • The PAU may also include a command comparator 502. The command comparator 502 may receive a command from an FPS command pipe 506, for example, of FPS 206 a. The command comparator 502 may operate to compare the received command to information in the plurality of command registers 504 a-504 h so as to determine the particular type of command. The command comparator may then output a command request 210 a having the particular determined command type. The command request 210 a may be any suitable width, depending on the number of commands that are to be distinguished. In the illustrated example, the command request 210 a is eight bits wide.
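The comparator's matching of a received command against the command type registers to produce an eight-bit command request can be illustrated as follows. The register contents (the list of command type names) and the bit assignments are invented for this sketch; the patent specifies only that the request width depends on the number of distinguished commands.

```python
# Hypothetical contents of command type registers #0-#7; each register
# position corresponds to one bit of the 8-bit command request.
COMMAND_TYPE_REGISTERS = ["write", "read", "erase", "copyback",
                          "cache_program", "cache_read", "status", "reset"]

def command_request(command_type):
    """Compare a command's type against the registers (as the command
    comparator does) and return a one-hot 8-bit request word."""
    request = 0
    for bit, reg in enumerate(COMMAND_TYPE_REGISTERS):
        if command_type == reg:
            request |= 1 << bit
    return request

assert command_request("write") == 0b00000001
assert command_request("erase") == 0b00000100
assert command_request("unknown") == 0  # not in the list of arbitrated commands
```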
  • By way of example, a command register #0 may specify the command number for a particular command type that is used by the comparator to compare against the received command to determine whether the received command is in the list of arbitrated commands.
  • The command request is transmitted to the PAU master of the PAU module, which determines whether to allow or disallow the particular command request to proceed. If the PAU module determines that a particular command request is to proceed, the PAU module can then return a command acknowledgment (ACK) 212 a to the FPS 206 a via the respective slave module 204 a. The command ACK 212 a may be received by a command enabler 508, for example, of the FPS 206 a. The command enabler 508 of the FPS 206 a can generally operate to issue the particular command type, e.g., programming, reading, or erasing a plurality of cells within the memory array, via an I/O interface 208 a.
  • In embodiments in which power arbitration is applied to the memory array, the command enabler 508 issues a command to the array using any suitable combination of hardware and software. For example, a NAND type memory cell array having a plurality of storage units M arranged in a matrix may be controlled by various types of hardware or software modules, such as a column control circuit, a row control circuit, a c-source control circuit, and a c-p-well control circuit. In this embodiment, the column control circuit is connected to bit lines (BL) of the memory cell array for reading data stored in the memory cells (M), for determining a state of the memory cells (M) during a program operation, and for controlling potential levels of the bit lines (BL) to promote or inhibit the programming. The row control circuit is connected to word lines (WL) to select one of the word lines (WL), to apply read voltages, to apply program voltages combined with the bit line potential levels controlled by the column control circuit, and to apply an erase voltage coupled with a voltage of p-type regions (labeled as “c-p-well” in FIG. 6) on which the memory cells (M) are formed. The c-source control circuit controls the common source lines (labeled as “c-source” in FIG. 6) connected to the memory cells (M). The c-p-well control circuit controls the voltage of the c-p-well.
  • Other types of modules may also be implemented for various array operations, such as data I/O buffers for input and output of data to and from the array, a command interface for receiving command data for controlling the memory array from the external I/O lines from a respective FPS, one or more state machines for controlling various memory array modules (e.g., the column control circuit, the row control circuit, the c-source control circuit, the c-p-well control circuit, and the data I/O buffer) and for outputting status data of the flash memory, such as READY/BUSY or PASS/FAIL.
  • With reference to FIG. 6, an example structure of a memory cell array is briefly described. A flash EEPROM of a NAND type is described as an example. The memory cells (M) are partitioned into a number of blocks, 1,024 in a specific example. The data stored in a particular block are simultaneously erased. In this implementation, the block is the minimum unit of a number of cells that are simultaneously erasable. In each block, there are N columns, N=8,512 in this example, that are divided into left columns and right columns, as further described in U.S. Pat. No. 6,522,580, which patent is incorporated by reference herein. The bit lines are also divided into left bit lines (BLL) and right bit lines (BLR). Four memory cells connected to the word lines (WL0 to WL3) at each gate electrode are connected in series to form a NAND cell unit. One terminal of the NAND cell unit is connected to a corresponding bit line (BL) via a first select transistor (S) whose gate electrode is coupled to a first (Drain) select gate line (SGD), and another terminal is connected to the c-source via a second (Source) select transistor (S) whose gate electrode is coupled to a second select gate line (SGS). Although, for simplicity, four floating gate transistors are shown in each cell unit, other numbers of transistors, such as 8, 16, 32, or even 64 or more, may be used. In some memory systems more than 8,512 columns (bit lines) may be provided, for example 67,840 columns. FIG. 6 also includes a connection, C-p-well, for supplying the well voltage.
  • In each block, in this example, 8,512 columns are divided into even columns and odd columns. The bit lines are also divided into even bit lines (BLe) and odd bit lines (BLo). Four memory cells connected to the word lines (WL0 to WL3) at each gate electrode are connected in series to form a NAND cell unit. One terminal of the NAND cell unit is connected to a corresponding bit line (BL) via a first select transistor (S) whose gate electrode is coupled to a first select gate line (SGD), and another terminal is connected to the c-source via a second select transistor (S) whose gate electrode is coupled to a second select gate line (SGS). Although, for simplicity, four floating gate transistors are shown to be included in each cell unit, a higher number of transistors, such as 8, 16, or even 32, may be used.
  • In an alternate set of embodiments, as described in U.S. Pat. No. 6,771,536, which is herein incorporated by reference, the array can be divided into left and right portions instead of the odd-even arrangement. The left and right sides may additionally have independent well structures with the right and left sides of the array each formed over such separate well structures, allowing the voltage levels to be set independently by the c-p-well control circuit. In a further variation, this could also allow erasure of a sub-block of less than all of the partitions of a block. Further variations that are compatible with the present invention are also described in U.S. Pat. No. 6,771,536.
  • In the exemplary embodiments, the page size is 512 bytes, which is smaller than the number of cells on the same word line. This page size is based on user preference and convention. Allowing the word line size to correspond to more than one page's worth of cells saves the X-decoder (row control circuit 3) space, since different pages' worth of data can share the decoders. During a user data read or programming operation, N=4,256 cells (M) are simultaneously selected in this example. The cells (M) selected have the same word line (WL), for example WL2, and the same kind of bit line (BL). Therefore, 532 bytes of data can be read or programmed simultaneously. The 532 bytes of data that are simultaneously read or programmed logically form a “page.” Therefore, one block can store at least eight pages. When each memory cell (M) stores two bits of data, namely a multi-level cell, one block stores 16 pages. In this embodiment, the storage element of each of the memory cells, in this case the floating gate of each of the memory cells, stores two bits of user data.
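The page and block arithmetic from the example above can be checked directly (this simply reproduces the numbers stated in the text, with the even/odd division of the block's columns):

```python
# Working through the example numbers: 8,512 columns per block, split
# into even and odd bit lines; one bit is read/programmed per selected cell.
columns = 8512
cells_selected = columns // 2        # only even (or only odd) bit lines
assert cells_selected == 4256        # N = 4,256 cells selected at once

page_bytes = cells_selected // 8     # one bit per selected cell
assert page_bytes == 532             # 532 bytes form one logical "page"

word_lines = 4
pages_per_block_slc = word_lines * 2        # even + odd page per word line
assert pages_per_block_slc == 8             # at least eight pages per block

pages_per_block_mlc = pages_per_block_slc * 2  # two bits stored per cell
assert pages_per_block_mlc == 16
```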
  • Regardless of the particular memory configuration, the controller can be connected or connectable with a host system, such as a personal computer, a digital camera, or a personal digital assistant. The host can initiate commands, such as to store or read data to or from the memory array, and provide or receive such data, respectively. In general, an FPS of the controller converts such commands into command signals that can be interpreted and executed by the command circuits of the array. However, the FPS only converts such commands to signals after receiving permission from the PAU.
  • Although the power arbitration techniques are mostly described herein in relation to controlling a memory array via its various control hardware and software, the arbitration techniques may also be used to control power with respect to commands issued for other components of the memory system that are sources of high power consumption, such as ECC (error correction code) and AES (advanced encryption standard) engines. The PAU may also or alternatively be configured to interface with various command modules of these other components.
  • Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims. Therefore, the described embodiments should be taken as illustrative and not restrictive, and the invention should not be limited to the details given herein but should be defined by the following claims and their full scope of equivalents.

Claims (11)

What is claimed is:
1. A method for managing power in a memory system having a controller and nonvolatile memory array, the method comprising:
prior to execution of each command with respect to the memory array, receiving a request for execution of such command with respect to the memory array; and
in response to receipt of each request for each command, allowing or withholding execution of such command with respect to the memory array based on whether such command, together with execution of other commands, is estimated to exceed a predetermined power usage specification for the memory system.
2. The method of claim 1, wherein the memory array is formed within multiple die and/or multiple planes that are accessible in parallel.
3. The method of claim 1, wherein allowing or withholding execution of each command with respect to the memory array is further based on whether such command has a type of command that has been previously executed more than a predetermined threshold number of times.
4. The method of claim 1, wherein allowing or withholding execution of each command with respect to the memory array is further based on a configurable decision matrix describing necessary delays between execution of each different type of command or a combination of commands.
5. The method of claim 1, wherein each request for execution of a command with respect to the memory array is received by a power arbitration unit of the controller.
6. The method of claim 1, further comprising: prior to execution of each command with respect to the controller, receiving a request for execution of such command with respect to the controller; and in response to receipt of each request for execution of each command with respect to the controller, allowing or withholding execution of such command with respect to the controller based on whether such command, together with execution of other commands, is estimated to exceed a predetermined power usage specification for the memory system.
7. The method of claim 6, wherein each request that is received with respect to the controller is received with respect to an error correction coding (ECC) module or an encryption module of the controller.
8. A memory system comprising:
a nonvolatile memory array for storing data;
a flash protocol sequencer (FPS) for accessing the memory array and prior to such accessing, requesting permission from a power arbitration unit to access such memory array;
the power arbitration unit (PAU) for allowing or withholding permission to the FPS for accessing the memory array, wherein the PAU is configured to determine whether to allow or withhold based on whether such command, together with execution of other commands, is estimated to exceed a predetermined power usage specification for the memory system.
9. The memory system of claim 8, wherein the memory array is formed within multiple die and/or multiple planes that are accessible in parallel.
10. The memory system of claim 8, wherein allowing or withholding execution of each command with respect to the memory array is further based on whether such command has a type of command that has been previously executed more than a predetermined threshold number of times.
11. The memory system of claim 8, wherein allowing or withholding execution of each command with respect to the memory array is further based on a configurable decision matrix describing necessary delays between execution of each different type of command or a combination of commands.
US14/262,077 2011-06-24 2014-04-25 Apparatus and Methods for Peak Power Management in Memory Systems Abandoned US20140237167A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US13/167,929 US8694719B2 (en) 2011-06-24 2011-06-24 Controller, storage device, and method for power throttling memory operations
US13/296,898 US8745369B2 (en) 2011-06-24 2011-11-15 Method and memory system for managing power based on semaphores and timers
US14/262,077 US20140237167A1 (en) 2011-06-24 2014-04-25 Apparatus and Methods for Peak Power Management in Memory Systems


Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/296,898 Continuation US8745369B2 (en) 2011-06-24 2011-11-15 Method and memory system for managing power based on semaphores and timers

Publications (1)

Publication Number Publication Date
US20140237167A1 true US20140237167A1 (en) 2014-08-21

Family

ID=47362969

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/296,898 Active 2032-03-17 US8745369B2 (en) 2011-06-24 2011-11-15 Method and memory system for managing power based on semaphores and timers
US14/262,077 Abandoned US20140237167A1 (en) 2011-06-24 2014-04-25 Apparatus and Methods for Peak Power Management in Memory Systems


Country Status (1)

Country Link
US (2) US8745369B2 (en)


Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101961324B1 (en) * 2012-05-09 2019-03-22 삼성전자주식회사 Memory device and power managing method of the same
US9417685B2 (en) 2013-01-07 2016-08-16 Micron Technology, Inc. Power management
US9443600B2 (en) * 2013-03-28 2016-09-13 Intel Corporation Auto-suspend and auto-resume operations for a multi-die NAND memory device to reduce peak power consumption
US20150033045A1 (en) * 2013-07-23 2015-01-29 Apple Inc. Power Supply Droop Reduction Using Feed Forward Current Control
EP2884369B1 (en) 2013-12-16 2018-02-07 Stichting IMEC Nederland Memory control system for a non-volatile memory and control method
US9293176B2 (en) 2014-02-18 2016-03-22 Micron Technology, Inc. Power management
US9582211B2 (en) * 2014-04-29 2017-02-28 Sandisk Technologies Llc Throttling command execution in non-volatile memory systems based on power usage
US9575677B2 (en) * 2014-04-29 2017-02-21 Sandisk Technologies Llc Storage system power management using controlled execution of pending memory commands
US9547587B2 (en) 2014-05-23 2017-01-17 International Business Machines Corporation Dynamic power and thermal capping for flash storage
US10013345B2 (en) 2014-09-17 2018-07-03 Sandisk Technologies Llc Storage module and method for scheduling memory operations for peak-power management and balancing
US9612763B2 (en) * 2014-09-23 2017-04-04 Western Digital Technologies, Inc. Apparatus and methods to control power on PCIe direct attached nonvolatile memory storage subsystems
US9940036B2 (en) 2014-09-23 2018-04-10 Western Digital Technologies, Inc. System and method for controlling various aspects of PCIe direct attached nonvolatile memory storage subsystems
US9880605B2 (en) 2014-10-27 2018-01-30 Sandisk Technologies Llc Method and system for throttling power consumption
US9847662B2 (en) 2014-10-27 2017-12-19 Sandisk Technologies Llc Voltage slew rate throttling for reduction of anomalous charging current
US9916087B2 (en) 2014-10-27 2018-03-13 Sandisk Technologies Llc Method and system for throttling bandwidth based on temperature
US20160162215A1 (en) * 2014-12-08 2016-06-09 Sandisk Technologies Inc. Meta plane operations for a storage device
US10133483B2 (en) * 2015-04-28 2018-11-20 Sandisk Technologies Llc Memory system and method for differential thermal throttling
US9875049B2 (en) * 2015-08-24 2018-01-23 Sandisk Technologies Llc Memory system and method for reducing peak current consumption
KR20170027556A (en) * 2015-09-02 2017-03-10 에스케이하이닉스 주식회사 Memory controller and memory system having the same
US10120817B2 (en) 2015-09-30 2018-11-06 Toshiba Memory Corporation Device and method for scheduling commands in a solid state drive to reduce peak power consumption levels
US9817595B2 (en) 2016-01-28 2017-11-14 Apple Inc. Management of peak power consumed by multiple memory devices
US9947401B1 (en) 2016-12-22 2018-04-17 Sandisk Technologies Llc Peak current management in non-volatile storage

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020013915A1 (en) * 2000-07-27 2002-01-31 Matsushita Electric Industrial Co., Ltd. Data processing control system, controller, data processing control method, program, and medium
US6895482B1 (en) * 1999-09-10 2005-05-17 International Business Machines Corporation Reordering and flushing commands in a computer memory subsystem
US6996821B1 (en) * 1999-03-25 2006-02-07 International Business Machines Corporation Data processing systems and method for batching tasks of the same type in an instruction cache
US20080089146A1 (en) * 2006-10-11 2008-04-17 Masamichi Fujito Semiconductor device
US20090019264A1 (en) * 2007-07-11 2009-01-15 Correale Jr Anthony Adaptive execution cycle control method for enhanced instruction throughput
US20090172264A1 (en) * 2007-12-28 2009-07-02 Asmedia Technology Inc. System and method of integrating data accessing commands
US20110185145A1 (en) * 2010-01-27 2011-07-28 Kabushiki Kaisha Toshiba Semiconductor storage device and control method thereof
US20120221767A1 (en) * 2011-02-28 2012-08-30 Apple Inc. Efficient buffering for a system having non-volatile memory
US20120254504A1 (en) * 2011-03-28 2012-10-04 Western Digital Technologies, Inc. Flash memory device comprising host interface for processing a multi-command descriptor block in order to exploit concurrency

Family Cites Families (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5661848A (en) 1994-09-08 1997-08-26 Western Digital Corp Multi-drive controller with encoder circuitry that generates ECC check bytes using the finite field for optical data for appending to data flowing to HDA
US7127573B1 (en) 2000-05-04 2006-10-24 Advanced Micro Devices, Inc. Memory controller providing multiple power modes for accessing memory devices by reordering memory transactions
US6522580B2 (en) 2001-06-27 2003-02-18 Sandisk Corporation Operating techniques for reducing effects of coupling between storage elements of a non-volatile memory operated in multiple data states
US20030046630A1 (en) 2001-09-05 2003-03-06 Mark Hilbert Memory using error-correcting codes to correct stored data in background
US20030115476A1 (en) * 2001-10-31 2003-06-19 Mckee Bret Hardware-enforced control of access to memory within a computer using hardware-enforced semaphores and other similar, hardware-enforced serialization and sequencing mechanisms
US6771536B2 (en) 2002-02-27 2004-08-03 Sandisk Corporation Operating techniques for reducing program and read disturbs of a non-volatile memory
JP2003308176A (en) 2002-04-03 2003-10-31 Internatl Business Mach Corp <Ibm> Data storage device, reordering method for command queue, data processing method and program
US7069394B2 (en) 2002-12-05 2006-06-27 International Business Machines Corporation Dynamic data routing mechanism for a high speed memory cloner
US6986011B2 (en) 2002-12-05 2006-01-10 International Business Machines Corporation High speed memory cloner within a data processing system
US7334144B1 (en) 2003-06-05 2008-02-19 Maxtor Corporation Host-based power savings method and apparatus
US7269481B2 (en) * 2003-06-25 2007-09-11 Intel Corporation Method and apparatus for memory bandwidth thermal budgetting
US7010387B2 (en) 2003-08-28 2006-03-07 Spectra Logic Corporation Robotic data storage library comprising a virtual port
US7752471B1 (en) * 2003-09-17 2010-07-06 Cypress Semiconductor Corporation Adaptive USB mass storage devices that reduce power consumption
US7808895B2 (en) * 2003-10-30 2010-10-05 Intel Corporation Isochronous device communication management
KR100606242B1 (en) 2004-01-30 2006-07-31 삼성전자주식회사 Volatile Memory Device for buffering between non-Volatile Memory and host, Multi-chip packaged Semiconductor Device and Apparatus for processing data using the same
US20060112240A1 (en) 2004-11-24 2006-05-25 Walker Robert M Priority scheme for executing commands in memories
US7610497B2 (en) * 2005-02-01 2009-10-27 Via Technologies, Inc. Power management system with a bridge logic having analyzers for monitoring data quantity to modify operating clock and voltage of the processor and main memory
US7721011B1 (en) 2005-05-09 2010-05-18 Oracle America, Inc. Method and apparatus for reordering memory accesses to reduce power consumption in computer systems
KR100684907B1 (en) 2006-01-09 2007-02-13 삼성전자주식회사 Multi_chip package reducing peak current on power_up
US7701764B2 (en) 2006-05-17 2010-04-20 Micron Technology, Inc. Apparatus and method for reduced peak power consumption during common operation of multi-NAND flash memory devices
JP4794370B2 (en) * 2006-06-20 2011-10-19 株式会社日立製作所 Storage system and storage control method that achieve both power saving and performance
US8060718B2 (en) 2006-06-20 2011-11-15 International Business Machines Updating a memory to maintain even wear
US7587559B2 (en) * 2006-08-10 2009-09-08 International Business Machines Corporation Systems and methods for memory module power management
US20080235441A1 (en) 2007-03-20 2008-09-25 Itay Sherman Reducing power dissipation for solid state disks
US7739461B2 (en) 2007-07-10 2010-06-15 International Business Machines Corporation DRAM power management in a memory controller
US7724602B2 (en) 2007-07-10 2010-05-25 International Business Machines Corporation Memory controller with programmable regression model for power control
US8443242B2 (en) 2007-10-25 2013-05-14 Densbits Technologies Ltd. Systems and methods for multiple coding rates in flash devices
US8291181B2 (en) 2008-10-28 2012-10-16 Micron Technology, Inc. Temporary mirroring, logical segregation, and redundant programming or addressing for solid state drive operation
US20100125695A1 (en) 2008-11-15 2010-05-20 Nanostar Corporation Non-volatile memory storage system
US7921178B2 (en) 2008-12-04 2011-04-05 Voltaire Ltd. Device, system, and method of accessing storage
US20100146205A1 (en) 2008-12-08 2010-06-10 Seagate Technology Llc Storage device and method of writing data


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9368214B2 (en) 2013-10-03 2016-06-14 Apple Inc. Programmable peak-current control in non-volatile memory devices
US9671968B2 (en) 2013-10-03 2017-06-06 Apple Inc. Programmable peak-current control in non-volatile memory devices
US10366766B2 (en) 2017-12-12 2019-07-30 Western Digital Technologies, Inc. Power shaping and peak power reduction by data transfer throttling

Also Published As

Publication number Publication date
US20120331282A1 (en) 2012-12-27
US8745369B2 (en) 2014-06-03


Legal Events

Date Code Title Description
AS Assignment

Owner name: SANDISK TECHNOLOGIES LLC, TEXAS

Free format text: CHANGE OF NAME;ASSIGNOR:SANDISK TECHNOLOGIES INC;REEL/FRAME:038807/0807

Effective date: 20160516

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION