CN113168287B - Method and system for visualizing correlation between host commands and storage system performance


Info

Publication number: CN113168287B
Application number: CN201980079230.2A
Authority: CN (China)
Legal status: Active (granted)
Prior art keywords: host, storage system, memory, time, write
Other languages: Chinese (zh)
Other versions: CN113168287A
Inventors: T·谢克德, O·吉拉德, L·霍德, E·索博尔, E·齐伯尔斯坦, J·G·哈恩
Current and original assignee: SanDisk Technologies LLC
Priority claimed from: U.S. patent application Ser. No. 16/371,613 (granted as US 10,564,888 B2)
Publication of application: CN113168287A
Publication of grant: CN113168287B

Classifications

    • G06F3/0656 Data buffering arrangements
    • G06F3/0616 Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
    • G06F3/0625 Power saving in storage systems
    • G06F3/0653 Monitoring storage devices or systems
    • G06F3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G06F11/3034 Monitoring arrangements where the monitored computing system component is a storage system, e.g. DASD based or network based
    • G06F11/3058 Monitoring arrangements for monitoring environmental properties or parameters of the computing system, e.g. power, currents, temperature, humidity, position, vibrations
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)

Abstract

The present invention provides a method and system for visualizing the correlation between host commands and storage system performance. In one embodiment, the method comprises: receiving information about host operations performed by a host over a period of time; receiving information about storage system operations performed by the storage system during that time period; and simultaneously displaying both the host operations and the storage system operations during the time period. Other embodiments are possible, and each of the embodiments may be used alone or in combination.

Description

Method and system for visualizing correlation between host commands and storage system performance
Cross Reference to Related Applications
The present application is a continuation-in-part of U.S. patent application Ser. No. 15/347,565, filed November 9, 2016, which is hereby incorporated by reference.
Background
One metric used in designing a storage system is the write amplification factor. The write amplification factor is defined as the amount of data written to the memory of the storage system divided by the amount of data written by the host. A write amplification factor of one would be ideal, as this would provide the best response time and promote high endurance of the memory. However, writing host data typically entails write overhead, such as writing control data to memory for flash management and possibly relocating data from one pool of blocks to another. The write amplification factor may be measured using various methods.
Drawings
FIG. 1A is a block diagram of a non-volatile storage system of one embodiment.
FIG. 1B is a block diagram illustrating a memory module of one embodiment.
FIG. 1C is a block diagram illustrating a hierarchical storage system of one embodiment.
FIG. 2A is a block diagram illustrating components of a controller of the nonvolatile memory system shown in FIG. 1A, according to one embodiment.
FIG. 2B is a block diagram illustrating components of the non-volatile storage system shown in FIG. 1A according to one embodiment.
Fig. 3 is a diagram illustrating factors that may affect the write amplification factor of one embodiment.
FIG. 4 is a block diagram of a system using a write amplification tool of one embodiment.
FIG. 5 is a flow chart of a method of one embodiment for measuring the amount of data written by a host.
FIG. 6 is a flow chart of a method of one embodiment for measuring an amount of data written to a memory of a storage system.
FIG. 7 is a flow chart of a method for calculating one embodiment of a write amplification factor.
FIGS. 8A-8D are graphs generated and displayed by a write amplification tool of one embodiment.
FIGS. 9A-9G are graphs generated and displayed by a write amplification tool of one embodiment.
FIG. 10 is a flow diagram of a method of an embodiment for visualizing a correlation between host commands and storage system performance.
FIG. 11 is a diagram of an embodiment of simultaneously displaying information about host operation and information about storage system operation over a period of time.
FIG. 12 is a graph of an embodiment of tasks performed by a storage device over time.
FIG. 13 is a graph of an embodiment of tasks performed by a storage device and a host over time.
FIG. 14 is a graph of power consumption of one embodiment.
Detailed Description
By way of introduction, the following embodiments relate to methods and systems for write amplification analysis. In one embodiment, a method performed in a computing device is provided. The method comprises: determining an amount of data written from the computing device to a storage system over a period of time, wherein the storage system includes a memory; determining an amount of data written to the memory by the storage system during the time period; calculating a write amplification factor for the time period; and simultaneously displaying graphs of the amount of data written from the computing device during the time period, the amount of data written to the memory during the time period, and the write amplification factor during the time period.
In some embodiments, the method further comprises displaying a graphic of the capacity consumed during the period of time.
In some embodiments, the method further comprises displaying a graphic of the excess provisioned blocks of memory over the period of time.
In some embodiments, the method further comprises displaying a graphic of the control data written during the period of time.
In some embodiments, the method further comprises displaying a graphic of the relocation data during the period of time.
In some embodiments, the amount of data written from the computing device during the period of time is determined by monitoring a bus between the computing device and the storage system for write commands.
In some embodiments, the amount of data written to the memory during the period of time is determined by monitoring a bus between the memory and a controller of the memory system.
In some embodiments, the storage system is a simulation model of the storage system.
In some embodiments, the storage system is a real-time storage system.
In some embodiments, the method further comprises calculating an optimization function of a flash management algorithm in the storage system to reduce the write amplification.
In some embodiments, the memory comprises a three-dimensional memory.
In some embodiments, the storage system is embedded in the host.
In some embodiments, the storage system is removably connected to the host.
In another embodiment, a computing device is provided, the computing device comprising: means for collecting information about an amount of data written to a memory of a storage system over a period of time and information about an amount of data written to the storage system from a host over the period of time; and means for displaying a graphical representation of activity in the storage system synchronized over the period of time, the activity contributing to an amount of data written to the memory of the storage system over the period of time that is greater than the amount of data written to the storage system from the host over the period of time.
In some embodiments, the memory comprises a three-dimensional memory.
In some embodiments, the storage system is an embedded storage system.
In some embodiments, the storage system is a removable storage system.
In another embodiment, a computer-readable storage medium is provided that stores computer-readable program code that, when executed by a processor, causes the processor to: collect information associated with write amplification, wherein the information is collected over a period of time for different write activities; generate graphs based on the information; and display the graphs together on a display device.
In some embodiments, the storage system comprises a three-dimensional memory.
In some embodiments, the storage system is embedded in the host.
In some embodiments, the storage system is removably connected to the host.
The following embodiments are also directed to methods and systems for visualizing a correlation between host commands and storage device performance. In one embodiment, a method performed in a computing device is presented. The method comprises: receiving information about host operations performed by a host over a period of time; receiving information about storage system operations performed by the storage system during the time period; and simultaneously displaying both the host operations and the storage system operations during the time period.
In some embodiments, the host operation and the storage system operation are displayed in the graphic simultaneously.
In some embodiments, the graph shows when a host operation is performed without storage system operation.
In some embodiments, the graph indicates the start and stop of a synchronization operation.
In some embodiments, the method further comprises determining the size of the write buffer using information displayed graphically.
In some embodiments, the method further comprises determining the capacitor size using information displayed graphically.
In some embodiments, the method further comprises displaying a graphic of power consumption over the period of time.
In some embodiments, the storage system comprises a three-dimensional memory.
In some embodiments, the storage system is embedded in the host.
In some embodiments, the storage system is removably connected to the host.
In another embodiment, a method is provided, the method comprising: receiving information about activity of a host after the host initiates a process of flushing commands to a storage system; receiving information about activity of the storage system after the host initiates the process of flushing commands to the storage system; and simultaneously displaying the information about the activity of the host and the information about the activity of the storage system.
In some embodiments, the simultaneous display shows a period of time during which there is host activity but no storage system activity after the host initiates the flush of commands to the storage system.
In some embodiments, information about the activity of the host and information about the activity of the storage system are displayed on a graphic.
In some embodiments, the method further comprises graphically displaying a curve of power consumption.
In some embodiments, the method further comprises graphically displaying an indicator of when the host initiates and ends the process of flushing commands to the storage system.
In some embodiments, the method further comprises determining the size of the write buffer based on the concurrently displayed information.
In some embodiments, the method further comprises determining the capacitor size based on the concurrently displayed information.
In another embodiment, a computing device is provided, the computing device comprising: means for receiving information regarding host operations performed by a host over a period of time; means for receiving information regarding storage system operations of the storage system performed during the time period; and means for simultaneously displaying both host operation and storage system operation during the period of time.
In some implementations, the computing device further includes means for displaying a graphic showing power consumption over the period of time.
In some implementations, the computing device further includes means for displaying indicators of the start and stop of the flush operation.
Other embodiments are possible, and each of the embodiments may be used alone or in combination. Accordingly, various embodiments will now be described with reference to the accompanying drawings.
Turning now to the drawings, FIGS. 1A-1C illustrate storage systems suitable for implementing aspects of these embodiments. FIG. 1A is a block diagram illustrating a non-volatile storage system 100 according to one embodiment of the subject matter described herein. Referring to FIG. 1A, the non-volatile storage system 100 includes a controller 102 and non-volatile memory that may be made up of one or more non-volatile memory die 104. As used herein, the term die refers to a collection of non-volatile memory cells, and the associated circuitry for managing the physical operation of those non-volatile memory cells, formed on a single semiconductor substrate. (The terms "memory" and "medium" may be used interchangeably herein.) The controller 102 interacts with a host system and transmits command sequences for read operations, program operations, and erase operations to the non-volatile memory die 104.
The controller 102 (which may be a flash memory controller) may take the form of, for example, processing circuitry, a microprocessor or processor and a computer-readable medium (e.g., firmware) that stores computer-readable program code executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. The controller 102 may be configured with hardware and/or firmware to perform the various functions described below and shown in the flow charts. In addition, some of the components shown as being internal to the controller can also be stored external to the controller, and other components can be used. Further, the phrase "in operative communication with" may mean directly in communication with or indirectly (wired or wireless) in communication with through one or more components, which may or may not be shown or described herein.
As used herein, a flash memory controller is a device that manages data stored on flash memory and communicates with a host, such as a computer or electronic device. A flash memory controller may have various functions in addition to the specific functions described herein. For example, the flash memory controller may format the flash memory to ensure that the memory is operating properly, map out bad flash memory cells, and allocate spare cells to be substituted for future failed cells. Some of the spare cells may be used to hold firmware to operate the flash memory controller and implement other features. In operation, when a host needs to read data from or write data to the flash memory, it will communicate with the flash memory controller. If the host provides a logical address to which data is to be read/written, the flash memory controller may translate the logical address received from the host into a physical address in the flash memory. (Alternatively, the host may provide the physical address.) The flash memory controller may also perform various memory management functions, such as, but not limited to, wear leveling (distributing writes to avoid wearing out specific blocks of memory that would otherwise be repeatedly written to) and garbage collection (after a block is full, moving only the valid pages of data to a new block so that the full block can be erased and reused).
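To make the address translation and garbage collection background above more concrete, the following is a minimal, hypothetical Python sketch (not taken from the patent; all names and sizes are illustrative) of an out-of-place flash write: each host write lands on a fresh physical page, and the stale copy it leaves behind is what garbage collection must later reclaim, which is one source of write amplification.

```python
# Toy flash translation layer (FTL): illustrative only, not the patent's design.
class ToyFTL:
    def __init__(self, num_physical_pages):
        self.l2p = {}                                   # logical page -> physical page
        self.free_pages = list(range(num_physical_pages))
        self.invalid_pages = set()                      # stale copies awaiting erase

    def write(self, logical_page):
        """Map a host write to a fresh physical page (out-of-place update)."""
        new_phys = self.free_pages.pop(0)
        old_phys = self.l2p.get(logical_page)
        if old_phys is not None:
            self.invalid_pages.add(old_phys)            # old copy becomes garbage
        self.l2p[logical_page] = new_phys
        return new_phys

ftl = ToyFTL(num_physical_pages=8)
ftl.write(0)
ftl.write(0)   # rewriting the same logical page invalidates the old physical page
print(ftl.l2p, ftl.invalid_pages)                       # {0: 1} {0}
```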
The nonvolatile memory die 104 may include any suitable nonvolatile storage medium including NAND flash memory cells and/or NOR flash memory cells. The memory cells may take the form of solid state (e.g., flash) memory cells, and may be one-time programmable, several times programmable, or multiple times programmable. The memory cells may also be Single Level Cells (SLC), multi-level cells (MLC), three-level cells (TLC), or use other memory cell level technologies now known or later developed. In addition, the memory cell may be fabricated in two dimensions or in three dimensions.
The interface between the controller 102 and the nonvolatile memory die 104 may be any suitable flash interface, such as Toggle Mode 200, 400, or 800. In one embodiment, the storage system 100 may be a card-based system, such as a Secure Digital (SD) card or a micro Secure Digital (micro-SD) card. In an alternative embodiment, the storage system 100 may be part of an embedded storage system.
Although in the example shown in FIG. 1A the non-volatile memory system 100 (sometimes referred to herein as a memory module) includes a single channel between the controller 102 and the non-volatile memory die 104, the subject matter described herein is not limited to having a single memory channel. For example, in some NAND memory system architectures (such as the architectures shown in FIGS. 1B and 1C), there may be 2, 4, 8, or more NAND channels between the controller and the NAND memory device, depending on the capabilities of the controller. In any of the embodiments described herein, there may be more than a single channel between the controller and the memory die, even if a single channel is shown in the drawings.
FIG. 1B illustrates a storage module 200 that includes a plurality of non-volatile memory systems 100. As such, the storage module 200 may include a storage controller 202 that interacts with a host and with a storage system 204 that includes a plurality of non-volatile memory systems 100. The interface between the storage controller 202 and the non-volatile memory systems 100 may be a bus interface, such as a Serial Advanced Technology Attachment (SATA) or Peripheral Component Interconnect Express (PCIe) interface. In one embodiment, the storage module 200 may be a Solid State Drive (SSD), such as found in portable computing devices such as laptop computers and tablet computers.
FIG. 1C is a block diagram illustrating a hierarchical storage system. The hierarchical storage system 250 includes a plurality of storage controllers 202, each of which controls a respective storage system 204. Host systems 252 may access memories within the hierarchical storage system via a bus interface. In one embodiment, the bus interface may be a Non-Volatile Memory Express (NVMe) or Fibre Channel over Ethernet (FCoE) interface. In one embodiment, the system shown in FIG. 1C may be a rack-mountable mass storage system that is accessible by multiple host computers, such as would be found in a data center or in other locations where mass storage is needed.
FIG. 2A is a block diagram illustrating exemplary components of the controller 102 in more detail. The controller 102 includes a front-end module 108 that interacts with a host, a back-end module 110 that interacts with the one or more nonvolatile memory die 104, and various other modules that perform functions that will now be described in detail. For example, in this embodiment, the controller 102 includes a NAND bus recorder 111, which may be implemented in hardware or software/firmware and is configured to record traffic on the bus to the memory 104. The use of the NAND bus recorder 111 will be discussed in more detail below. Instead of or in addition to the NAND bus recorder 111 in the storage system 100, an external NAND bus monitor and/or an internal host bus monitor may be used. Additionally, in one embodiment, a portion of the write amplification calculation logic is in the controller 102 of the storage system 100 (through the use of a NAND bus monitor and a host bus monitor). A module may take the form of, for example, a packaged functional hardware unit designed for use with other components, a portion of program code (e.g., software or firmware) executable by a (micro)processor or processing circuitry that usually performs a particular function of related functions, or a self-contained hardware or software component that interfaces with a larger system.
Referring again to the modules of the controller 102, the buffer management/bus controller 114 manages buffers in Random Access Memory (RAM) 116 and controls internal bus arbitration of the controller 102. Read Only Memory (ROM) 118 stores system boot code. Although shown in fig. 2A as being located separately from the controller 102, in other embodiments, one or both of the RAM 116 and the ROM 118 may be located within the controller. In still other embodiments, portions of RAM and ROM may be located within controller 102 and external to the controller.
The front-end module 108 includes a host interface 120 and a physical layer interface (PHY) 122 that provide the electrical interface with the host or a next-level storage controller. The choice of the type of host interface 120 may depend on the type of memory being used. Examples of the host interface 120 include, but are not limited to, Serial Advanced Technology Attachment (SATA), SATA Express, Serially Attached Small Computer System Interface (SAS), Fibre Channel, Universal Serial Bus (USB), Peripheral Component Interconnect Express (PCIe), and Non-Volatile Memory Express (NVMe). The host interface 120 typically facilitates the transfer of data, control signals, and timing signals.
The back-end module 110 includes an Error Correction Code (ECC) engine 124 that encodes the data bytes received from the host and decodes and error-corrects the data bytes read from the non-volatile memory. A command sequencer 126 generates command sequences, such as program and erase command sequences, to be transmitted to the nonvolatile memory die 104. A Redundant Array of Independent Drives (RAID) module 128 manages the generation of RAID parity and the recovery of failed data. The RAID parity may be used as an additional level of integrity protection for the data being written into the memory die 104. In some cases, the RAID module 128 may be a part of the ECC engine 124. A memory interface 130 provides the command sequences to the nonvolatile memory die 104 and receives status information from the nonvolatile memory die 104. In one embodiment, the memory interface 130 may be a Double Data Rate (DDR) interface, such as a Toggle Mode 200, 400, or 800 interface. A flash control layer 132 controls the overall operation of the back-end module 110.
The memory system 100 also includes other discrete components 140, such as external electrical interfaces, external RAM, resistors, capacitors, or other components that may interact with the controller 102. In alternative embodiments, one or more of physical layer interface 122, RAID module 128, media management layer 138, and buffer management/bus controller 114 are optional components that are not needed in controller 102.
Fig. 2B is a block diagram illustrating components of the nonvolatile memory die 104 in more detail. The nonvolatile memory die 104 includes peripheral circuitry 141 and a nonvolatile memory array 142. The nonvolatile memory array 142 includes nonvolatile memory cells for storing data. The nonvolatile memory cells may be any suitable nonvolatile memory cells, including NAND flash memory cells and/or NOR flash memory cells in two-dimensional and/or three-dimensional configurations. The nonvolatile memory die 104 also includes a data cache 156 that caches data.
As discussed above, one metric used in designing a storage system is the write amplification (WA) factor. The write amplification factor is defined as the amount of data written to the memory of the storage system divided by the amount of data written by the host. Expressed mathematically, the write amplification factor (WAF) is defined as:

WAF = (amount of data written to the memory) / (amount of data written by the host)
A write amplification factor of one would be ideal, as this would provide the best response time and promote high endurance of the memory. However, writing host data is typically accompanied by write overhead, and FIG. 3 illustrates some of the factors that may affect the write amplification factor. These factors include writing control data to memory for flash management and possibly relocating data from one pool of blocks to another. As used herein, control data may refer to the additional writes required for the data structures used for flash management. The amount of control data written depends on the write scenario. For example, random writes over a wide address range may require updating the logical-to-physical address tables more frequently than sequential writes.
The relocation of data may take the form of folding or garbage collection. In one implementation, folding refers to moving data from single-level cell (SLC) blocks to multi-level or three-level cell (MLC or TLC) blocks. In contrast, garbage collection refers to moving data between blocks of the same memory cell type (e.g., MLC to MLC, or TLC to TLC). As shown in FIG. 3, garbage collection may depend on over-provisioning (e.g., the availability of free TLC/SLC blocks beyond the memory's exported capacity). The over-provisioning may be determined by the number of spare blocks at production time and the number of blocks unmapped by the host, and it refers to the number of spare blocks that the system uses for writing new incoming data. For example, when a host wants to write data to a block that already contains data, the storage system may write the incoming data to a spare block and then invalidate the old block in the logical-to-physical address table. If no spare blocks are available in this scenario, the old data needs to be evacuated from the target block before the incoming host data can be written, which may lead to performance problems. Thus, the more spare blocks available in memory, the easier it is to accommodate a random write scenario.
Media fragmentation is another factor in garbage collection and is measured by a metric called the valid count (VC). Fragmentation occurs when blocks are only partially valid because some of their data has been invalidated. Blocks with a relatively small amount of valid data are good candidates for garbage collection because not much data needs to be moved. In contrast, if a 1 MB block has only 16 KB of invalid data, more data will need to be moved, resulting in more write overhead.
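The over-provisioning threshold and valid count behavior described above can be illustrated with a short sketch. This is an assumed simplification for illustration only, not the flash management firmware of the storage system 100: garbage collection is triggered when the spare block pool falls below a threshold, and the block with the lowest valid count is preferred as the victim because it requires the least data movement and therefore adds the least write amplification.

```python
# Hypothetical garbage-collection victim selection; names and sizes are made up.
def pick_gc_victim(blocks, spare_blocks, gc_threshold):
    """blocks: dict of block_id -> valid bytes; returns a block_id or None."""
    if spare_blocks >= gc_threshold:
        return None                      # enough over-provisioning, no GC needed
    # The block with the least valid data is the cheapest block to reclaim.
    return min(blocks, key=blocks.get)

blocks = {"blk0": 16 * 1024, "blk1": 900 * 1024, "blk2": 512 * 1024}
print(pick_gc_victim(blocks, spare_blocks=2, gc_threshold=4))   # -> 'blk0'
```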
Turning again to the drawings, FIG. 4 is a block diagram of a system of one embodiment for calculating and analyzing write amplification. As shown in FIG. 4, in this embodiment, a computer 400 (also referred to herein as a computing device) and a display 410 are provided. These components may take any suitable form. For example, the computer 400 may be a personal computer or a server, and the display 410 may be a stand-alone monitor. Alternatively, the computer 400 may be a mobile device having the display 410 integrated in it (e.g., as a touch screen). Of course, these are merely examples, and other implementations may be used. For example, in an embodiment in which a portion of the write amplification calculation logic is in the controller 102 of the storage system (through the use of a NAND bus monitor and a host bus monitor), the computer 400 may simply read out and monitor that data periodically.
In this embodiment, the computer 400 includes a processor 420 and a memory 430. The processor 420 is configured to implement a write amplification analysis tool 440. In one embodiment, computer-readable program code of the write amplification analysis tool 440 is stored in the memory 430 and executed by the processor 420 (i.e., the write amplification analysis tool 440 can be software/firmware executed by hardware). In another embodiment, the write amplification analysis tool 440 is implemented exclusively in hardware. In any event, in one embodiment, the write amplification analysis tool 440 can be used to implement the algorithms shown in the attached flow charts and described herein. In one embodiment, the write amplification analysis tool 440 may be used to perform write amplification analysis on a simulation of a storage system or on an actual storage system 100 connected to the computer 400 via a bus 450. The computer 400 may also be used to generate and display the terabytes written (TBW), which specifies how many terabytes can be written to the memory 104 until it can no longer absorb any data.
Referring again to the drawings, FIGS. 5-7 are flow charts 500, 600, 700 of methods of one embodiment that may be used to calculate the write amplification factor. As described above, the basic data needed to generate the write amplification factor are the amount of data written by the host and the amount of data written to the memory 104. Examples of methods that may be used to collect this data are shown in FIGS. 5 and 6, respectively.
Referring first to FIG. 5, FIG. 5 is a flow chart 500 of a method of one embodiment for measuring the amount of data written by a host (e.g., the computer 400 in FIG. 4). As shown in FIG. 5, the write amplification analysis tool 440 first divides the time domain into windows of a fixed size (e.g., 100 milliseconds (ms)) (operation 510). The write amplification analysis tool 440 then monitors the bus 450 between the computer 400 and the storage system 100 to obtain input records with the following information: the command type, the command size, the command start time, and the command duration (operation 520). Beginning with time window #1, the write amplification analysis tool 440 looks for all write commands whose start times fall within that window (operation 530). The write amplification analysis tool 440 then calculates the amount of data written using the following formula: sum(number of sectors × sector size) (operation 540). The write amplification analysis tool 440 then calculates the write performance by dividing the amount of data by the window size (operation 550). The write amplification analysis tool 440 repeats these operations (530, 540, 550) for each window. After all the windows have been processed, the write amplification analysis tool 440 has measured the amount of data written by the computer 400.
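The windowed measurement of FIG. 5 can be summarized with a short Python sketch. This is only an illustration: it assumes the bus trace has already been parsed into simple records, and all field names are hypothetical. The same windowing applies to the NAND-side records of FIG. 6, described next.

```python
# Per-window host write measurement (operations 510-550), illustrative only.
WINDOW = 0.1  # 100 ms windows, as in operation 510

def host_bytes_per_window(records, total_time):
    num_windows = int(total_time / WINDOW) + 1
    written = [0] * num_windows
    for rec in records:
        if rec["type"] != "write":
            continue
        w = int(rec["start_time"] / WINDOW)       # window the command starts in
        written[w] += rec["sectors"] * rec["sector_size"]
    performance = [b / WINDOW for b in written]   # bytes per second per window
    return written, performance

trace = [
    {"type": "write", "start_time": 0.02, "sectors": 256,  "sector_size": 512},
    {"type": "read",  "start_time": 0.05, "sectors": 64,   "sector_size": 512},
    {"type": "write", "start_time": 0.13, "sectors": 1024, "sector_size": 512},
]
print(host_bytes_per_window(trace, total_time=0.2))
```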
As shown in the flow chart 600 of FIG. 6, the write amplification analysis tool 440 then measures the amount of data written to the memory 104 of the storage system 100. Although in this example measurements are made on the actual storage system 100, as mentioned above, the analysis may instead be performed on a simulation. As shown in FIG. 6, in this embodiment, the write amplification analysis tool 440 divides the time domain into windows of a fixed size (e.g., 100 ms) (operation 610). The write amplification analysis tool 440 then takes input from the NAND bus recorder 111 and uses a protocol analyzer to extract the command type and command size from the recorded activity on the NAND bus (operation 620). For example, the amount of data used for control or relocation may be obtained from the NAND bus recorder 111, which records in parallel and in synchronization with host activity, and the address behavior may be obtained from a recording utility in the host, such as Ftrace.
Beginning with time window #1, the write amplification analysis tool 440 looks for all write commands whose start times fall within that window (operation 630). The write amplification analysis tool 440 then calculates the amount of data written using the following formula: sum(number of sectors × sector size) (operation 640). The write amplification analysis tool 440 then calculates the write performance by dividing the amount of data by the window size (operation 650). The write amplification analysis tool 440 repeats these operations (630, 640, 650) for each window. After all the windows have been processed, the write amplification analysis tool 440 has measured the amount of data written to the memory 104 of the storage system 100.
Turning now to the flow chart 700 in FIG. 7, to calculate the write amplification factor, the write amplification analysis tool 440 may begin with time window #1 (operation 710), dividing the NAND performance of that window (as determined in flow chart 600 of FIG. 6) by the host performance of that window (as determined in flow chart 500 of FIG. 5) (operation 720). The write amplification analysis tool 440 repeats this process for the other windows.
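Operation 720 then reduces to an element-wise division of the two per-window series gathered above. The short sketch below reports None for a window with no host writes, which is an assumption; the patent does not specify how the tool handles such windows.

```python
# Per-window write amplification factor (WAF) = NAND bytes / host bytes.
def waf_per_window(nand_bytes, host_bytes):
    return [n / h if h else None for n, h in zip(nand_bytes, host_bytes)]

print(waf_per_window(nand_bytes=[150, 400, 0], host_bytes=[100, 100, 0]))
# -> [1.5, 4.0, None]
```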
With the information now collected, the write amplification analysis tool 440 can simultaneously display on the display 410 graphs of the amount of data written from the host, the amount of data written to the memory, and the write amplification factor, synchronized over the time period of various write scenarios, so that a storage system designer can see the various effects on the write amplification factor over time. For example, FIG. 8A shows five different write and erase activities over time (writing sequentially over the entire medium, writing 4 GB randomly, discarding/unmapping half of the medium, writing 4 GB randomly, writing half of the medium sequentially, and writing randomly over the entire medium), and FIGS. 8B-8D show the effect of these activities over time on the measured data written by the host (FIG. 8B), the measured data written to the NAND (FIG. 8C), and the calculated write amplification factor (FIG. 8D).
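One plausible way to render such synchronized graphs is with stacked plots that share a single time axis, so that all metrics stay aligned. The patent does not prescribe a plotting library; the sketch below assumes matplotlib and the per-window series computed in the previous sketches.

```python
# Hypothetical rendering of FIGS. 8B-8D: three stacked plots sharing one time
# axis, so host writes, NAND writes, and the WAF remain visually synchronized.
import matplotlib.pyplot as plt

def plot_synchronized(times, host_bytes, nand_bytes, waf):
    fig, axes = plt.subplots(3, 1, sharex=True, figsize=(8, 6))
    axes[0].plot(times, host_bytes)
    axes[0].set_ylabel("Host writes (B)")
    axes[1].plot(times, nand_bytes)
    axes[1].set_ylabel("NAND writes (B)")
    axes[2].plot(times, waf)
    axes[2].set_ylabel("WAF")
    axes[2].set_xlabel("Time (s)")
    plt.tight_layout()
    plt.show()
```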
Of course, the above-described graphs are merely examples, and different or additional types of graphs may also be used. Some of these additional graphs are shown in FIGS. 9A-9G.
FIG. 9A shows the behavior of the commands in the address space over time. FIG. 9B (similar to FIG. 8D) shows the write amplification over time, derived from all of the factors discussed above. FIG. 9C shows the over-provisioning, both in free blocks and in flash management units (FMUs) (e.g., 4 Kbytes). In the case of random writes, the amount of free FMUs remains unchanged, but the amount of available blocks may decrease over time until the garbage collection threshold is reached, thereby activating the garbage collection process. FIG. 9D shows the data written to the NAND (similar to FIG. 8C). FIG. 9E shows the NAND data minus the host data. FIG. 9F shows the excess data generated from the control activity only. Sequential writes typically have very few control updates, while purely random writes generate many more table updates, as described below. Additional garbage collection generates more control data. FIG. 9G illustrates the relocation data when garbage collection is initiated.
As shown in these figures, for sequential writes, data is written directly to the memory with sequential addresses (FIG. 9A); since the updates to the logical-to-physical address table are minimal, the amount of control data required is minimal (FIG. 9F). Therefore, as shown in FIG. 9B, the write amplification factor is almost 1, which means that the amount of data written to the memory is almost entirely the data written by the host. As shown in FIG. 9C, in this example, writing the entire medium's exported capacity causes the over-provisioned flash management units (FMUs) and over-provisioned blocks to reach a level just above the garbage collection threshold (i.e., the memory is filled to its exported capacity, and all that remains is a minimal level of spare blocks).
Next, for the random writes within a 4 GB range, the write amplification (FIG. 9B) jumps, because the random writes invalidate data in different blocks, and the logical-to-physical address table needs to be updated due to the host activity (FIG. 9F). In addition, as shown in FIG. 9C, during the random writes, the number of over-provisioned blocks falls below the garbage collection threshold. This causes the controller 102 to perform garbage collection and relocate data from old blocks to new blocks (FIG. 9G), which also causes the controller 102 to update the logical-to-physical address table based on this garbage collection activity (FIG. 9F). Writing this additional control data due to garbage collection results in another jump in the write amplification (FIG. 9B).
Next, the host sends a discard/unmap command (i.e., address independent) for half of the medium. This means that almost immediately half of the exported capacity is available again, although the data in those blocks may or may not be erased (FIG. 9A), and a jump occurs in the over-provisioning (FIG. 9C). In addition, as shown in FIG. 9F, there is a small bump in the amount of control data due to the update in the logical-to-physical address table to make the blocks available.
After the discard command, there is another random write within 4 GB. This time, since half of the medium is available for storage, the data can be written without triggering garbage collection. Thus, the logical-to-physical address table need only be updated based on the host activity (rather than also on garbage collection activity, as in the previous random write), and therefore the write amplification is lower than for the previous random write (FIG. 9F). Next, half of the medium is written sequentially, which is similar to the sequential write discussed above. Finally, the entire medium is written randomly. This is also similar to the random writes discussed previously, but since more data is now being written, the write amplification, control data, and relocation data all increase.
These embodiments have several advantages. For example, the write amplification analysis tools disclosed herein provide a powerful analysis capability that displays the dynamic behavior of write amplification and can cross-correlate write amplification with other available information. The tool allows a storage system designer to analyze realistic usage made up of a large number of read, write, discard, flush, and other operations of different sizes and addresses. Unlike existing methods that generate only a single value for the write amplification, these embodiments allow a storage system designer to view the impact of realistic write usage on the write amplification over time by simultaneously displaying graphs of the various metrics, synchronized to the same time scale.
Displaying all of this information in a synchronized manner on the same timescale shows the dynamic behavior of the write amplification over time and the correlation with various input scenarios. For example, displaying the amount of data written by a host versus the amount of data written to the memory of a storage system for various types of write activities may display to a storage system designer when a problem has occurred and why the problem has occurred. Using these graphs, a storage system designer may determine what adjustments to make to the flash management algorithm in storage system 100 in order to improve or optimize the write amplification to improve response time and avoid degrading the endurance of the memory. As used herein, "flash management algorithm" refers to hardware, software, and/or firmware in the memory system 100 that controls the operation of the memory system 100 and that, among other things, affects the relocation and control factors that contribute to write amplification. That is, after viewing the various graphics, the storage system designer may modify the flash management algorithm to reduce the write amplification (e.g., average or peak) for the various scenarios.
For example, as shown and discussed above, performing random writes across the medium produces the worst write amplification factor. To reduce the write amplification, a storage system designer may alter the structure of the logical-to-physical address table or alter the policy as to when to update the table. Generally, updating the table at a lower frequency increases the risk of losing data in the event of a sudden power loss. However, if a sudden power loss is unlikely to occur, reducing the number of table updates may be acceptable for improving performance. The write amplification analysis tool may also be configured to automatically or semi-automatically (e.g., based on designer input) calculate an optimization of the flash management algorithm to reduce the write amplification. For example, the write amplification analysis tool may be used to optimize the firmware code for folding, garbage collection, etc., of a storage system.
In another embodiment, a method and system for visualizing a correlation between host commands and storage system performance are provided. A storage system (sometimes referred to herein as a "storage device" or "device"), such as a solid state drive (SSD) embedded in a mobile device such as a phone, tablet, or wearable device, may perform many input/output operations (e.g., read, write, trim, and flush) during use of the mobile device. The operations may include features (such as timestamps at initiation and completion) and peripheral data (such as power state and aggregate queue depth). Analysis of these input/output commands and their characteristics can be used to design and implement algorithms for data storage. In operation, an application running in the storage system can log the various operations that occur, and the log can be analyzed by a computing device (e.g., the host or another device). In a data analysis environment for storage workloads, there are typically millions of individual data points representing specific features of the input/output operations sent from the mobile device (host) to the storage system, and vice versa.
When there is a performance or power problem with a host interacting with the storage system, it may be assumed that the problem is related to the storage system, and the data analysis described above may be used to help identify and solve the problem. For example, if there is an inefficiency in the data storage algorithm, analysis of the input/output operations of the storage system can be used to identify the cause of the inefficiency and suggest a solution. Examples of such analyses can be found in U.S. patent application Ser. No. 15/347,565, filed November 9, 2016, and U.S. patent application Ser. No. 15/226,661, filed August 2, 2016, both of which are hereby incorporated by reference.
However, there may be situations where performance or power issues are caused at least in part by the host. In this case, analyzing storage system operation alone will not identify that the host is contributing to the problem or the source of the problem. The following embodiments may be used to address this situation.
As shown in the flowchart 1000 in fig. 10, in one embodiment, a computing device 400 (e.g., a Personal Computer (PC) or server) receives information from a host regarding host operations (events) performed during a period of time (operation 1010) and also receives information from a storage system 100 regarding storage system operations (events) performed during the period of time (operation 1020). For example, the host and storage system 100 may provide its log to the computing device 400 over a wired or wireless connection. The log may include a list of various commands or operations that occur and various related data, such as, but not limited to, time stamps at the time of initiation and completion, power status, aggregate queue depth, and the like. In addition to or instead of using logs, host and/or storage system activity may be measured by monitoring traffic on buses used with those components. In one embodiment, the computing device 400 communicates with a server that provides computer readable program code to the computing device 400 to execute software that performs the algorithm shown in the flowchart 1000 in fig. 10.
The computing device 400 draws these events chronologically on the same graph (operation 1030), with the result that both host operations and storage system operations are displayed simultaneously during this period of time. FIG. 11 is an example of such a graph showing host tasks and storage system tasks over time. As shown in fig. 11, the host performs a plurality of file write operations inside the host. These operations may be, for example, the operating system of the host queuing the write command in the memory of the host after the user invokes the write function. In this example, there are five write operations, each represented by a different rectangle, with the first write operation being larger than the other write operations. At this point, there is no storage system operation because the host has not yet provided a write command to the storage system 100.
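A minimal sketch of operations 1010-1030 is shown below, under the assumption that both logs can be normalized into simple timestamped records; the actual log formats and field names used by the host and the storage system 100 are not specified in the patent and are purely illustrative here.

```python
# Hypothetical normalization of the host log and storage-system log into one
# chronological timeline (operation 1030). All field names are assumptions.
from dataclasses import dataclass

@dataclass
class Event:
    source: str      # "host" or "storage"
    operation: str   # e.g., "file write", "sync start", "device write"
    start: float     # seconds
    end: float

def merge_timelines(host_events, storage_events):
    """Merge both logs into a single timeline ordered by start time."""
    return sorted(host_events + storage_events, key=lambda e: e.start)

timeline = merge_timelines(
    [Event("host", "file write", 0.00, 0.05), Event("host", "sync start", 0.30, 0.30)],
    [Event("storage", "device write", 0.42, 0.55)],
)
print([e.operation for e in timeline])   # ['file write', 'sync start', 'device write']
```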
At some point (e.g., due to a host-triggered timeout or because the host application invokes it), the host decides to flush a write command from its internal memory to the storage system 100 by invoking a kernel Application Program Interface (API) to begin a synchronization (sync) operation. This is indicated by the synchronization start arrow in fig. 11. At this time, the host performs additional operations for a period of time. For example, the host may merge one or more write commands together, place the write commands in a queue, update an internal table in the host's file system, determine which pages of memory are "dirty," or even perform operations unrelated to the write commands. Thus, in this example, there is a delay between the start of the synchronization and the time at which the storage system 100 actually starts performing the write operation.
At some point, the storage system 100 is ready to perform the write operations. FIG. 11 shows that the storage system 100 performs four write operations sequentially (in this example, two of the five write operations were merged). In this example, FIG. 11 also shows that the host performs additional operations after the storage system 100 has completed the write operations. At some later time, the host invokes the kernel API to end the synchronization operation. This is indicated by the synchronization end arrow in FIG. 11. Thus, in this example, there is a delay between the time the storage system 100 finishes performing the write operations and the end of synchronization. This delay, together with the delay after the start of synchronization, is a measure of the file system overhead.
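The two delays described above (host work before the storage system starts, and host work after the storage system finishes) can be computed directly from the merged timeline. The following is an assumed bookkeeping sketch, not the patent's own implementation:

```python
# Hypothetical file-system-overhead calculation for one sync window.
def file_system_overhead(sync_start, sync_end, storage_ops):
    """storage_ops: list of (start, end) tuples for device writes in the sync window."""
    if not storage_ops:
        return sync_end - sync_start          # all of the time is host overhead
    first_start = min(start for start, _ in storage_ops)
    last_end = max(end for _, end in storage_ops)
    pre_delay = first_start - sync_start      # host work before the device starts
    post_delay = sync_end - last_end          # host work after the device finishes
    return pre_delay + post_delay

print(file_system_overhead(1.00, 1.50, [(1.10, 1.20), (1.20, 1.30)]))  # about 0.3 s
```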
As shown in FIG. 11, this embodiment helps a technician visualize the correlation between host commands and storage system performance by simultaneously displaying information about host operations and information about storage system operations within the same time period. Here, the graph in FIG. 11 shows that relatively little of the time between the synchronization start and the synchronization end is consumed by storage system operations (operation 1040). Without the benefit of the graph, a technician might assume that the storage system 100 uses the entire time between the start of synchronization and the end of synchronization to perform the write operations and would not recognize that the latency comes primarily from host overhead. Thus, if a performance problem is encountered when the host interacts with the storage system 100, the graph indicates that the problem is on the host side and not on the storage system side. Based on this knowledge, the technician can focus on improving the efficiency of the host rather than the efficiency of the storage system 100. For example, the technician can use the information on the graph to determine an optimal size of a write buffer and/or a capacitor size for flushing data in the event of an ungraceful shutdown (UGSD).
This embodiment may also be used to derive storage system power and throughput as costs of host activity (operation 1050). In this way, this embodiment can be used to visualize the correlation between host commands and device performance, as well as the correlation between each command sequence and the power consumption of the storage device and host platform. This embodiment will now be discussed in conjunction with FIGS. 12-14.
FIG. 12 shows a graph of tasks performed by the storage device 100 over time. The graph does not show host activity (i.e., host activity has been filtered out of the graph). FIG. 13 adds in the host activity. FIG. 13 is similar to FIG. 11 in that it illustrates the writes to main memory prior to a synchronization operation and the host activity during the synchronization operation. However, the host activity during the synchronization operation is less than in FIG. 11. Finally, FIG. 14 adds a plot of power consumption over time to the graph. As can be seen in FIG. 14, power consumption is a function of both host operations and storage system operations. The graph shows the amount of power consumed by the storage system due to host overhead during a write operation.
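An overlay in the spirit of FIG. 14 could be produced by drawing the merged timeline in lanes and adding the power samples on a second y-axis, so that power spikes can be attributed to host overhead or to device activity. Again, this is only a sketch under assumed data shapes, not the tool described in the patent:

```python
# Hypothetical rendering of host/storage lanes with a power curve overlaid.
import matplotlib.pyplot as plt

def plot_power_overlay(events, power_t, power_mw):
    """events: list of (source, start, end) with source in {"host", "storage"}."""
    fig, ax = plt.subplots(figsize=(8, 3))
    for source, start, end in events:
        lane = 0 if source == "host" else 1
        ax.broken_barh([(start, end - start)], (lane, 0.8))
    ax.set_yticks([0.4, 1.4])
    ax.set_yticklabels(["host", "storage"])
    ax.set_xlabel("Time (s)")
    power_ax = ax.twinx()                       # second y-axis for the power curve
    power_ax.plot(power_t, power_mw, color="black")
    power_ax.set_ylabel("Power (mW)")
    plt.tight_layout()
    plt.show()
```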
Such graphs may help identify what is contributing to draining the host's battery life. These embodiments have several advantages. For example, the curves of tasks over time generated by these embodiments may be used to evaluate file system performance, power, and throughput for each command sequence and each storage system. They also allow the file system overhead of different platforms to be compared. Furthermore, there are several alternatives that can be used with these embodiments. For example, while the host and storage system activities are shown as being displayed simultaneously on the same graph, in other embodiments the activities are displayed simultaneously on different graphs displayed alongside each other, or even in a chart or some other non-graphical form.
Finally, as noted above, any suitable type of memory may be used. Semiconductor memory devices include volatile memory devices such as dynamic random access memory ("DRAM") or static random access memory ("SRAM") devices, nonvolatile memory devices such as resistive random access memory ("ReRAM"), electrically erasable programmable read-only memory ("EEPROM"), flash memory (which may also be considered a subset of EEPROM), ferroelectric random access memory ("FRAM") and magnetoresistive random access memory ("MRAM"), as well as other semiconductor elements capable of storing information. Each type of memory device may have a different configuration. For example, the flash memory device may be configured in a NAND configuration or a NOR configuration.
The memory device may be formed of passive elements and/or active elements in any combination. By way of non-limiting example, passive semiconductor memory elements include ReRAM device elements, which in some embodiments include resistivity switching memory elements such as antifuses, phase change materials, and the like, and optionally include steering elements such as diodes and the like. By way of further non-limiting example, active semiconductor memory elements include EEPROM and flash memory device elements, which in some embodiments include elements having a charge storage region, such as a floating gate, conductive nanoparticles, or charge storage dielectric material.
The plurality of memory elements may be configured such that they are connected in series or such that each element may be accessed individually. By way of non-limiting example, flash memory devices (NAND memories) in a NAND configuration typically include memory elements connected in series. The NAND memory array may be configured such that the array is made up of multiple strings of memory, where a string is made up of multiple memory elements sharing a single bit line and accessed as a group. Alternatively, the memory elements may be configured such that each element may be individually accessed, such as a NOR memory array. NAND memory configurations and NOR memory configurations are examples, and memory elements may be otherwise configured.
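As a purely illustrative abstraction (not anything defined in this document), the difference between the two configurations can be expressed in a few lines of code: a NAND string is accessed as a group along a shared bit line, whereas NOR-style elements are addressed individually.

```python
# Illustrative abstraction only: a NAND string as a group of cells read as a
# unit, versus individually addressed NOR-style cells. This is a conceptual
# sketch, not a device model.

class NandString:
    def __init__(self, values):
        self.cells = list(values)       # cells connected in series

    def read_page(self):
        return list(self.cells)         # the string is accessed as a group


class NorArray:
    def __init__(self, values):
        self.cells = list(values)

    def read_cell(self, index):
        return self.cells[index]        # each element is individually accessed
```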
Semiconductor memory elements located within and/or over a substrate may be arranged in two or three dimensions, such as a two-dimensional memory structure or a three-dimensional memory structure.
In a two-dimensional memory structure, semiconductor memory elements are arranged in a single plane or a single memory device level. Typically, in a two-dimensional memory structure, the memory elements are arranged in a plane (e.g., in the x-z direction plane) that extends substantially parallel to the major surface of the substrate supporting the memory elements. The substrate may be a wafer on or in which layers of memory elements are formed, or it may be a carrier substrate to which the memory elements are attached after they are formed. As a non-limiting example, the substrate may include a semiconductor, such as silicon.
The memory elements may be arranged in a single memory device level in an ordered array (such as in multiple rows and/or columns). However, the memory elements may be arranged in a non-conventional configuration or a non-orthogonal configuration. The memory elements may each have two or more electrodes or contact lines, such as bit lines and word lines.
A three-dimensional memory array is arranged such that the memory elements occupy multiple planes or multiple memory device levels, thereby forming a structure in three dimensions (i.e., in the x-direction, the y-direction, and the z-direction, where the y-direction is substantially perpendicular to the major surface of the substrate, and the x-direction and the z-direction are substantially parallel to the major surface of the substrate).
As a non-limiting example, a three-dimensional memory structure may be vertically arranged as a stack of multiple two-dimensional memory device levels. As another non-limiting example, the three-dimensional memory array may be arranged in a plurality of vertical columns (e.g., columns extending substantially perpendicular to the major surface of the substrate, i.e., in the y-direction), with a plurality of memory elements in each column. The columns may be arranged in a two-dimensional configuration, for example in the x-z plane, resulting in a three-dimensional arrangement of memory elements, with the elements being located on a plurality of vertically stacked memory planes. Other configurations of three-dimensional memory elements may also constitute a three-dimensional memory array.
By way of non-limiting example, in a three-dimensional NAND memory array, memory elements can be coupled together to form NAND strings within a single horizontal (e.g., x-z) memory device level. Alternatively, the memory elements may be coupled together to form vertical NAND strings that traverse across multiple horizontal memory device levels. Other three-dimensional configurations are contemplated in which some NAND strings contain memory elements in a single memory level, while other strings contain memory elements that span multiple memory levels. Three-dimensional memory arrays may also be designed in a NOR configuration and in a ReRAM configuration.
Typically, in monolithic three dimensional memory arrays, one or more memory device levels are formed above a single substrate. Optionally, the monolithic three dimensional memory array can also have one or more memory layers at least partially within a single substrate. As a non-limiting example, the substrate may include a semiconductor, such as silicon. In monolithic three dimensional arrays, the layers that make up each memory device level of the array are typically formed on the layers of the underlying memory device level of the array. However, the layers of adjacent memory device levels of a monolithic three dimensional memory array may be shared or have intervening layers between memory device levels.
Two-dimensional arrays may be formed separately and then packaged together to form a non-monolithic memory device having multiple layers of memory. For example, a non-monolithic stacked memory may be constructed by forming memory levels on separate substrates and then stacking the memory levels on top of each other. The substrates may be thinned or removed from the memory device levels before stacking, but because the memory device levels are initially formed over separate substrates, the resulting memory array is not a monolithic three dimensional memory array. Further, multiple two-dimensional memory arrays or three-dimensional memory arrays (monolithic or non-monolithic) may be formed on separate chips and then packaged together to form a stacked-chip memory device.
Associated circuitry is typically required to operate and communicate with the memory elements. As a non-limiting example, a memory device may have circuitry for controlling and driving memory elements to implement functions such as programming and reading. The associated circuitry may be located on the same substrate as the memory element and/or on a separate substrate. For example, the controller for memory read-write operations may be located on a separate controller chip and/or on the same substrate as the memory elements.
Those skilled in the art will recognize that the present invention is not limited to the two-dimensional and three-dimensional structures described, but encompasses all relevant memory structures as described herein and as understood by those skilled in the art.
It is intended that the foregoing detailed description be understood as an illustration of selected forms that the invention may take and not as a definition of the invention. It is intended that only the following claims, including all equivalents, define the scope of the invention as claimed. Finally, it should be noted that any aspect of any of the embodiments described herein may be used alone or in combination with one another.

Claims (17)

1. A method of operating a computing device, comprising:
performing the following operations in the computing device:
receiving information about a host operation performed by a host over a period of time;
receiving information about a storage system operation performed by a storage system during the time period; and
simultaneously displaying, in a graphic, both the host operation and the storage system operation within the time period, wherein the graphic shows when a host operation is performed without a storage system operation.
2. The method of claim 1, wherein the graphic indicates a start and stop of a synchronization operation.
3. The method of claim 1, further comprising determining a size of a write buffer using information displayed on the graphic.
4. The method of claim 1, further comprising determining a capacitor size using information displayed on the graphic.
5. The method of claim 1, further comprising displaying a graph of power consumption over the period of time.
6. The method of claim 1, wherein the storage system comprises a three-dimensional memory.
7. The method of claim 1, wherein the storage system is embedded in the host.
8. The method of claim 1, wherein the storage system is removably connected to the host.
9. A method of operating a computing device, comprising:
performing the following operations in the computing device:
after a host initiates a flush command process to a storage system, receiving information about the activity of the host;
after the host initiates the flush command process to the storage system, receiving information about the activity of the storage system; and
simultaneously displaying the information about the activity of the host and the information about the activity of the storage system, wherein the simultaneous display shows a period of time in which host activity is present without storage system activity after the host has commanded a flush of the storage system.
10. The method of claim 9, wherein the information about the activity of the host and the information about the activity of the storage system are displayed graphically.
11. The method of claim 10, further comprising displaying a curve of power consumption on the graph.
12. The method of claim 10, further comprising displaying, on the graph, indicators of when the host initiates and ends the flush command process to the storage system.
13. The method of claim 9, further comprising determining a size of a write buffer based on the information displayed simultaneously.
14. The method of claim 9, further comprising determining a capacitor size based on the information displayed simultaneously.
15. A computing device, comprising:
means for receiving information regarding host operations performed by a host over a period of time;
means for receiving information regarding storage system operations of a storage system performed during the time period; and
means for simultaneously displaying, in a graphic, both the host operations and the storage system operations within the time period, wherein the graphic shows when a host operation is performed without a storage system operation.
16. The computing device of claim 15, further comprising means for displaying a graphic showing power consumption over the period of time.
17. The computing device of claim 15, further comprising means for displaying indicators of the start and stop of a flush operation.
CN201980079230.2A 2019-04-01 2019-12-19 Method and system for visualizing correlation between host commands and storage system performance Active CN113168287B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US16/371,613 US10564888B2 (en) 2016-11-09 2019-04-01 Method and system for visualizing a correlation between host commands and storage system performance
US16/371,613 2019-04-01
PCT/US2019/067445 WO2020205017A1 (en) 2019-04-01 2019-12-19 Method and system for visualizing a correlation between host commands and storage system performance

Publications (2)

Publication Number Publication Date
CN113168287A CN113168287A (en) 2021-07-23
CN113168287B true CN113168287B (en) 2024-05-10

Family

ID=72666770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980079230.2A Active CN113168287B (en) 2019-04-01 2019-12-19 Method and system for visualizing correlation between host commands and storage system performance

Country Status (3)

Country Link
CN (1) CN113168287B (en)
DE (1) DE112019005393T5 (en)
WO (1) WO2020205017A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106708431A (en) * 2016-12-01 2017-05-24 华为技术有限公司 Data storage method, data storage device, mainframe equipment and storage equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8327226B2 (en) * 2010-02-03 2012-12-04 Seagate Technology Llc Adjustable error correction code length in an electrical storage device
JP5853899B2 (en) * 2012-03-23 2016-02-09 ソニー株式会社 Storage control device, storage device, information processing system, and processing method therefor
GB2527529B (en) * 2014-06-24 2021-07-14 Advanced Risc Mach Ltd A device controller and method for performing a plurality of write transactions atomically within a non-volatile data storage device
US9584374B2 (en) * 2014-10-09 2017-02-28 Splunk Inc. Monitoring overall service-level performance using an aggregate key performance indicator derived from machine data
US20170068451A1 (en) * 2015-09-08 2017-03-09 Sandisk Technologies Inc. Storage Device and Method for Detecting and Handling Burst Operations
US20170131948A1 (en) * 2015-11-06 2017-05-11 Virtium Llc Visualization of usage impacts on solid state drive life acceleration
US10296260B2 (en) * 2016-11-09 2019-05-21 Sandisk Technologies Llc Method and system for write amplification analysis
US11269764B2 (en) * 2017-03-21 2022-03-08 Western Digital Technologies, Inc. Storage system and method for adaptive scheduling of background operations

Also Published As

Publication number Publication date
CN113168287A (en) 2021-07-23
DE112019005393T5 (en) 2021-09-02
WO2020205017A1 (en) 2020-10-08

Similar Documents

Publication Publication Date Title
CN109690468B (en) Write amplification analysis method, related computing device and computer-readable storage medium
US10564888B2 (en) Method and system for visualizing a correlation between host commands and storage system performance
US9921956B2 (en) System and method for tracking block level mapping overhead in a non-volatile memory
CN111149096B (en) Quality of service of adaptive device by host memory buffer range
US10133490B2 (en) System and method for managing extended maintenance scheduling in a non-volatile memory
US9778855B2 (en) System and method for precision interleaving of data writes in a non-volatile memory
US10120613B2 (en) System and method for rescheduling host and maintenance operations in a non-volatile memory
US10032488B1 (en) System and method of managing data in a non-volatile memory having a staging sub-drive
US10452536B2 (en) Dynamic management of garbage collection and overprovisioning for host stream storage
WO2019203915A1 (en) Storage cache management
JP7293458B1 (en) Storage system and method for quantifying storage fragmentation and predicting performance degradation
US20180041411A1 (en) Method and System for Interactive Aggregation and Visualization of Storage System Operations
US11086389B2 (en) Method and system for visualizing sleep mode inner state processing
CN113168287B (en) Method and system for visualizing correlation between host commands and storage system performance
US11520695B2 (en) Storage system and method for automatic defragmentation of memory
US11573893B2 (en) Storage system and method for validation of hints prior to garbage collection
US11599298B1 (en) Storage system and method for prediction-based pre-erase of blocks to improve sequential performance
US20230281122A1 (en) Data Storage Device and Method for Host-Determined Proactive Block Clearance
US11941282B2 (en) Data storage device and method for progressive fading for video surveillance systems
US11650758B2 (en) Data storage device and method for host-initiated cached read to recover corrupted data within timeout constraints
US11880256B2 (en) Data storage device and method for energy feedback and report generation
US11809747B2 (en) Storage system and method for optimizing write-amplification factor, endurance, and latency during a defragmentation operation
US20230315335A1 (en) Data Storage Device and Method for Executing a Low-Priority Speculative Read Command from a Host
CN118215905A (en) Data storage device and method for host buffer management
CN113176849A (en) Storage system and method for maintaining uniform thermal count distribution using intelligent stream block swapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant