CN113168287A - Method and system for visualizing correlations between host commands and storage system performance - Google Patents

Method and system for visualizing correlations between host commands and storage system performance

Info

Publication number
CN113168287A
Authority
CN
China
Prior art keywords
host
storage system
memory
time
write
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980079230.2A
Other languages
Chinese (zh)
Inventor
T·谢克德
O·吉拉德
L·霍德
E·索博尔
E·齐伯尔斯坦
J·G·哈恩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SanDisk Technologies LLC
Original Assignee
SanDisk Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/371,613 (US10564888B2)
Application filed by SanDisk Technologies LLC
Publication of CN113168287A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • G06F3/0616Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0625Power saving in storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0653Monitoring storage devices or systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659Command handling arrangements, e.g. command buffers, queues, command scheduling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • G06F3/0679Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3003Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3034Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a storage system, e.g. DASD based or network based
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3058Monitoring arrangements for monitoring environmental properties or parameters of the computing system or of the computing system component, e.g. monitoring of power, currents, temperature, humidity, position, vibrations
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The present invention provides a method and system for visualizing correlations between host commands and storage system performance. In one embodiment, a method comprises: receiving information about a host operation of the host performed over a period of time; receiving information about a storage system operation of the storage system performed during the time period; and displaying both host operations and storage system operations during the time period. Other embodiments are possible, and each of the embodiments can be used alone or in combination.

Description

Method and system for visualizing correlations between host commands and storage system performance
Cross Reference to Related Applications
This application is a continuation-in-part application of U.S. patent application No. 15/347565 filed on 9/11/2016, which is hereby incorporated by reference.
Background
One metric used in designing a storage system is the write amplification factor. The write amplification factor is defined as the amount of data written to the memory of the storage system divided by the amount of data written by the host. A write amplification factor of one would be desirable, as this would provide the best response time and promote high memory endurance. However, writing host data is often accompanied by write overhead, such as writing control data to memory for flash management and possibly relocating data from one pool of blocks to another. Various methods may be used to measure the write amplification factor.
Drawings
FIG. 1A is a block diagram of a non-volatile storage system of an embodiment.
FIG. 1B is a block diagram illustrating a memory module of one embodiment.
FIG. 1C is a block diagram illustrating a hierarchical storage system of one embodiment.
FIG. 2A is a block diagram illustrating components of a controller of the non-volatile storage system shown in FIG. 1A, according to one embodiment.
FIG. 2B is a block diagram illustrating components of the non-volatile storage system shown in FIG. 1A, according to one embodiment.
FIG. 3 is a diagram illustrating factors that may affect the write amplification factor of one embodiment.
FIG. 4 is a block diagram of a system using a write amplification tool of one embodiment.
FIG. 5 is a flow diagram of a method of one embodiment for measuring an amount of data written by a host.
FIG. 6 is a flow diagram of a method of one embodiment for measuring an amount of data written to a memory of a memory system.
FIG. 7 is a flow diagram of a method of one embodiment for calculating a write amplification factor.
FIGS. 8A-8D are graphs generated and displayed by a write amplification tool of one embodiment.
FIGS. 9A-9G are graphs generated and displayed by a write amplification tool of one embodiment.
FIG. 10 is a flow diagram of a method of an embodiment for visualizing a correlation between host commands and storage system performance.
FIG. 11 is a diagram of an embodiment showing information about host operations and information about storage system operations simultaneously over a period of time.
FIG. 12 is a diagram of an embodiment of tasks performed by a storage device over time.
FIG. 13 is a diagram of an embodiment of tasks performed by a storage device and a host over time.
FIG. 14 is a graph of an embodiment of power consumption.
Detailed Description
By way of introduction, the following embodiments are directed to methods and systems for write amplification analysis. In one embodiment, a method performed in a computing device is provided. The method comprises the following steps: determining an amount of data written from the computing device to a storage system over a period of time, wherein the storage system comprises a memory; determining an amount of data written by the storage system to the memory during the time period; calculating a write amplification factor for the time period; and simultaneously displaying graphs of: the amount of data written from the computing device during the time period, the amount of data written to the memory during the time period, and the write amplification factor during the time period.
In some embodiments, the method further comprises displaying a graph of capacity consumed over the time period.
In some embodiments, the method further comprises displaying a graph of over-provisioned blocks of the memory during the time period.
In some embodiments, the method further comprises displaying a graph of the amount of control data written during the time period.
In some embodiments, the method further comprises displaying a graph of the relocation data during the time period.
In some embodiments, the amount of data written from the computing device during the time period is determined by monitoring a bus between the computing device and the memory system for write commands.
In some embodiments, the amount of data written to the memory during the time period is determined by monitoring a bus between the memory and a controller of the memory system.
In some embodiments, the storage system is a simulation model of the storage system.
In some embodiments, the storage system is a real-time storage system.
In some embodiments, the method further comprises calculating an optimization function of a flash management algorithm in the storage system to reduce the write amplification factor.
In some implementations, the memory includes a three-dimensional memory.
In some embodiments, the storage system is embedded in the host.
In some embodiments, the storage system is removably connected to the host.
In another embodiment, a computing device is provided, the computing device comprising: means for collecting information about an amount of data written to a memory of a storage system over a period of time and information about an amount of data written from a host to the storage system over the time period; and means for displaying a graphical representation, synchronized over the time period, of activity in the storage system that contributes to the amount of data written to the memory of the storage system over the time period being greater than the amount of data written from the host to the storage system over the time period.
In some implementations, the memory includes a three-dimensional memory.
In some embodiments, the storage system is an embedded storage system.
In some embodiments, the storage system is a removable storage system.
In another embodiment, a computer readable storage medium is provided that stores computer readable program code that, when executed by a processor, causes the processor to: collecting information associated with the write amplification factor, wherein the information is collected over a period of time for different write activities; generating a graph based on the information; and displaying the graphics together on a display device.
In some embodiments, the storage system includes a three-dimensional memory.
In some embodiments, the storage system is embedded in the host.
In some embodiments, the storage system is removably connected to the host.
The following embodiments also relate to methods and systems for visualizing correlations between host commands and storage device performance. In one embodiment, a method performed in a computing device is presented. The method comprises the following steps: receiving information about a host operation of the host performed over a period of time; receiving information about a storage system operation of the storage system performed during the time period; and displaying both host operations and storage system operations during the time period.
In some embodiments, host operations and storage system operations are displayed simultaneously in a graph.
In some embodiments, the graph illustrates when a host operation is performed without a storage system operation.
In some embodiments, the graphic indicates the start and stop of the synchronization operation.
In some embodiments, the method further comprises using information displayed on the graphic to determine the size of the write buffer.
In some embodiments, the method further comprises using information displayed on the graph to determine the capacitor size.
In some embodiments, the method further comprises displaying a graph of power consumption over the period of time.
In some embodiments, the storage system includes a three-dimensional memory.
In some embodiments, the storage system is embedded in the host.
In some embodiments, the storage system is removably connected to the host.
In another embodiment, a method is provided, comprising: receiving information about activity of a host after the host initiates a process to flush commands to a storage system; receiving information about activity of the storage system after the host initiates the process to flush commands to the storage system; and displaying the information about the activity of the host and the information about the activity of the storage system simultaneously.
In some embodiments, the simultaneous display shows a period of time, after the host flushes commands to the storage system, during which there is host activity but no storage system activity.
In some embodiments, the information about the activity of the host and the information about the activity of the storage system are displayed on a graph.
In some embodiments, the method further comprises displaying a curve of power consumption on the graph.
In some embodiments, the method further comprises displaying, on the graph, an indicator of when the host initiated and terminated the process of flushing commands to the storage system.
In some embodiments, the method further comprises determining a write buffer size based on the concurrently displayed information.
In some embodiments, the method further comprises determining the capacitor size based on the simultaneously displayed information.
In another embodiment, a computing device is provided, the computing device comprising: means for receiving information about a host operation of the host performed over a period of time; means for receiving information regarding storage system operations of the storage system performed during the time period; and means for displaying both host operations and storage system operations during the time period simultaneously.
In some embodiments, the computing device further comprises means for displaying a graphic showing power consumption over the period of time.
In some embodiments, the computing device further comprises means for displaying an indicator of the start and stop of the flush operation.
Other embodiments are possible, and each of the embodiments may be used alone or in combination. Accordingly, various embodiments will now be described with reference to the accompanying drawings.
Turning now to the figures, FIGS. 1A-1C illustrate storage systems suitable for implementing aspects of the embodiments. FIG. 1A is a block diagram illustrating a non-volatile storage system 100 according to one embodiment of the subject matter described herein. Referring to FIG. 1A, the non-volatile storage system 100 includes a controller 102 and non-volatile storage that may be comprised of one or more non-volatile memory die 104. As described herein, the term die refers to a collection of non-volatile memory cells formed on a single semiconductor substrate, and the associated circuitry for managing the physical operation of those non-volatile memory cells. (The terms "memory" and "media" may be used interchangeably herein.) The controller 102 interacts with a host system and transmits command sequences for read, program, and erase operations to the non-volatile memory die 104.
The controller 102 (which may be a flash memory controller) may take the form of, for example, processing circuitry, a microprocessor or processor and a computer-readable medium that stores computer-readable program code (e.g., firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. The controller 102 may be configured with hardware and/or firmware to perform the various functions described below and shown in the flowcharts. In addition, some of the components shown as being internal to the controller may also be stored external to the controller, and other components may be used. Further, the phrase "in operative communication with" may mean in direct communication with, or in indirect (wired or wireless) communication with through one or more components, which may or may not be shown herein.
As used herein, a flash memory controller is a device that manages data stored on a flash memory and communicates with a host, such as a computer or electronic device. The flash memory controller may have various functions in addition to the specific functions described herein. For example, a flash memory controller may format the flash memory to ensure that the memory is operating correctly, map out bad flash memory cells, and allocate spare cells to replace future failed cells. Some of the spare cells may be used to house firmware to operate the flash memory controller and implement other features. In operation, when a host needs to read data from or write data to the flash memory, it will communicate with the flash memory controller. If the host provides a logical address to read/write data, the flash memory controller may translate the logical address received from the host into a physical address in the flash memory. (Alternatively, the host may provide physical addresses.) The flash memory controller may also perform various memory management functions such as, but not limited to, wear leveling (distributing writes to avoid wearing a particular memory block that would otherwise be repeatedly written) and garbage collection (moving only valid pages of data to a new block after the block is full, so that the entire block can be erased and reused).
The non-volatile memory die 104 may include any suitable non-volatile storage medium, including NAND flash memory cells and/or NOR flash memory cells. The memory cells may take the form of solid-state (e.g., flash) memory cells and may be one-time programmable, few-time programmable, or many-time programmable. The memory cells may also be single-level cells (SLC), multi-level cells (MLC), triple-level cells (TLC), or use other memory cell level technologies now known or later developed. In addition, the memory cells may be fabricated in a two-dimensional or three-dimensional fashion.
The interface between the controller 102 and the non-volatile memory die 104 may be any suitable flash interface, such as Toggle Mode 200, 400, or 800. In one embodiment, the storage system 100 may be a card-based system, such as a secure digital (SD) card or a micro secure digital (micro-SD) card. In an alternative embodiment, the storage system 100 may be part of an embedded storage system.
Although in the example shown in FIG. 1A, the non-volatile storage system 100 (sometimes referred to herein as a storage module) includes a single channel between the controller 102 and the non-volatile memory die 104, the subject matter described herein is not limited to having a single memory channel. For example, in some NAND memory system architectures (such as the architectures shown in fig. 1B and 1C), there may be 2, 4, 8, or more NAND channels between the controller and the NAND memory devices, depending on the capabilities of the controller. In any of the embodiments described herein, even though a single channel is shown in the figures, there may be more than one single channel between the controller and the memory die.
FIG. 1B shows a memory module 200 that includes a plurality of non-volatile memory systems 100. As such, the memory module 200 may include a memory controller 202 that interacts with a host and a memory system 204 that includes a plurality of non-volatile memory systems 100. The interface between the storage controller 202 and the non-volatile storage system 100 may be a bus interface, such as a Serial Advanced Technology Attachment (SATA) or peripheral component interconnect express (PCIe) interface. In one embodiment, the storage module 200 may be a Solid State Drive (SSD), such as found in portable computing devices such as laptops and tablets.
FIG. 1C is a block diagram illustrating a hierarchical storage system. The hierarchical storage system 250 includes a plurality of storage controllers 202, each of which controls a respective storage system 204. The host system 252 may access memory within the memory system via a bus interface. In one embodiment, the bus interface may be a non-volatile memory express (NVMe) or fibre channel over ethernet (FCoE) interface. In one embodiment, the system shown in FIG. 1C may be a rack-mountable mass storage system that is accessible by multiple host computers, such as may be found in a data center or in other locations where mass storage is desired.
FIG. 2A is a block diagram illustrating exemplary components of the controller 102 in more detail. The controller 102 includes a front-end module 108 that interacts with a host, a back-end module 110 that interacts with the one or more non-volatile memory dies 104, and various other modules that perform functions that will now be described in detail. For example, in this embodiment, the controller 102 includes a NAND bus recorder 111, which may be implemented in hardware or software/firmware and is configured to record traffic on the bus to the memory 104. The use of the NAND bus recorder 111 will be discussed in more detail below. Instead of or in addition to the NAND bus recorder 111 in the storage system 100, an external NAND bus monitor and/or an internal host bus monitor may be used. Additionally, in one embodiment, a portion of the write amplification calculation elements are in the controller 102 of the storage system 100 (by using a NAND bus monitor and a host bus monitor). A module may take the form of, for example, a packaged functional hardware unit designed for use with other components, a portion of program code (e.g., software or firmware) executable by a (micro)processor or processing circuitry that typically performs a particular one of the related functions, or a self-contained hardware or software component that interfaces with a larger system.
Referring again to the modules of the controller 102, the buffer management/bus controller 114 manages buffers in the Random Access Memory (RAM) 116 and controls internal bus arbitration of the controller 102. A Read Only Memory (ROM) 118 stores system boot code. Although shown in FIG. 2A as being located separately from the controller 102, in other embodiments one or both of the RAM 116 and the ROM 118 may be located within the controller. In yet other embodiments, portions of the RAM and ROM may be located within the controller 102 and external to the controller.
The front end module 108 includes a host interface 120 and a physical layer interface (PHY)122 that provides an electrical interface with a host or a next level memory controller. The type of host interface 120 may be selected depending on the type of memory used. Examples of host interface 120 include, but are not limited to, Serial Advanced Technology Attachment (SATA), SATA Express, serial attached small computer system interface (SAS), fibre channel, Universal Serial Bus (USB), peripheral component interconnect Express (PCIe), and non-volatile memory Express (NVMe). The host interface 120 generally facilitates the transfer of data, control signals, and timing signals.
The back end module 110 includes an Error Correction Code (ECC) engine 124 that encodes data bytes received from the host, and decodes and error corrects data bytes read from the non-volatile memory. The command sequencer 126 generates command sequences, such as a program command sequence and an erase command sequence, for transmission to the non-volatile memory die 104. A Redundant Array of Independent Drives (RAID) module 128 manages the generation of RAID parity and the recovery of failed data. RAID parity may be used as an additional level of integrity protection for data written into the memory device 104. In some cases, the RAID module 128 may be part of the ECC engine 124. The memory interface 130 provides command sequences to the non-volatile memory die 104 and receives status information from the non-volatile memory die 104. In one embodiment, memory interface 130 may be a Double Data Rate (DDR) interface, such as a Toggle Mode 200, 400, or 800 interface. Flash control layer 132 controls the overall operation of back end module 110.
The memory system 100 also includes other discrete components 140, such as external electrical interfaces, external RAM, resistors, capacitors, or other components that may interact with the controller 102. In alternative embodiments, one or more of the physical layer interface 122, RAID module 128, media management layer 138, and buffer management/bus controller 114 are optional components that are not required in the controller 102.
Fig. 2B is a block diagram illustrating components of the non-volatile memory die 104 in more detail. The non-volatile memory die 104 includes peripheral circuitry 141 and a non-volatile memory array 142. The non-volatile memory array 142 includes non-volatile memory cells for storing data. The non-volatile memory cells may be any suitable non-volatile memory cells, including NAND flash memory cells and/or NOR flash memory cells in a two-dimensional configuration and/or a three-dimensional configuration. The non-volatile memory die 104 also includes a data cache 156 that caches data.
As discussed above, one metric used in designing a storage system is the write amplification (WA) factor. The write amplification factor is defined as the amount of data written to the memory of the storage system divided by the amount of data written by the host. Expressed mathematically, the write amplification factor (WAF) is defined as:
WAF = (amount of data written to the memory) / (amount of data written by the host)
A write amplification factor of one would be desirable, as this would provide the best response time and promote high memory endurance. However, writing host data is typically accompanied by write overhead, and FIG. 3 illustrates some factors that may affect the write amplification factor. These factors include writing control data to memory for flash management and possibly relocating data from one block pool to another. As used herein, control data may refer to the additional writes required by the data structures used for flash management. The amount of control data written depends on the write scenario. For example, random writes over a wide address range may require updating the logical-to-physical address table more frequently than sequential writes.
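For illustration only, the following is a minimal Python sketch of the WAF calculation defined above. The function and variable names are assumptions for this example and do not come from the patent; the inputs are simply the two measured byte counts over the same time window.

```python
def write_amplification_factor(bytes_written_to_nand: int,
                               bytes_written_by_host: int) -> float:
    """Compute WAF = data written to memory / data written by host.

    Both arguments are hypothetical byte counts measured over the same
    time window; the names are illustrative only.
    """
    if bytes_written_by_host == 0:
        return float("inf")  # no host writes in this window
    return bytes_written_to_nand / bytes_written_by_host

# Example: 6 GiB reached the NAND while the host wrote 4 GiB -> WAF = 1.5
waf = write_amplification_factor(6 * 2**30, 4 * 2**30)
print(f"WAF = {waf:.2f}")
```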
The relocation of data may take the form of folding or garbage collection. In one embodiment, folding refers to moving data from a single-level cell (SLC) block to a multi-level or triple-level cell (MLC or TLC) block. In contrast, garbage collection refers to moving data between blocks of the same memory cell type (e.g., MLC to MLC, or TLC to TLC). As shown in FIG. 3, garbage collection may depend on over-provisioning (e.g., the availability of free TLC/SLC blocks in excess of the memory's exported capacity). Over-provisioning may be determined by the number of spare blocks at production and the number of blocks unmapped by the host, and it refers to the spare blocks used by the system to write new incoming data. For example, when a host wants to write data to a block that already contains data, the storage system may write the incoming data to a spare block and then invalidate the old block in the logical-to-physical address table. If no spare blocks are available in this scenario, the old data needs to be moved out of the target block before the incoming host data can be written. This may lead to performance problems. Thus, the more spare blocks available in memory, the easier it is to accommodate random write scenarios.
Media fragmentation is another factor in garbage collection and is measured by a metric called the valid count (VC). Fragmentation occurs when blocks are only partially valid because some of the data in them has been invalidated. Blocks with a relatively small amount of valid data are good candidates for garbage collection because not much data needs to be moved. In contrast, if a 1 MB block has only 16 KB of invalid data, more data will need to be moved, resulting in more write overhead.
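As a rough illustration of the valid-count idea, the sketch below picks garbage-collection candidates by preferring blocks with the least valid data once the pool of free blocks falls below a threshold. The block layout, thresholds, and function names are assumptions made for this example; they are not the patent's actual flash-management algorithm.

```python
from dataclasses import dataclass

@dataclass
class Block:
    block_id: int
    valid_bytes: int      # still-valid data in the block ("valid count")
    capacity_bytes: int   # e.g., 1 MiB per block

def pick_gc_candidates(blocks, free_blocks, gc_threshold, max_candidates=2):
    """Select garbage-collection source blocks when free blocks run low.

    Blocks with the least valid data are preferred because less data has
    to be relocated, which keeps the write overhead lower.
    """
    if free_blocks >= gc_threshold:
        return []  # enough over-provisioned blocks; no GC needed yet
    return sorted(blocks, key=lambda b: b.valid_bytes)[:max_candidates]

blocks = [
    Block(0, 16 * 1024, 1 << 20),              # mostly invalid: cheap to collect
    Block(1, (1 << 20) - 16 * 1024, 1 << 20),  # mostly valid: expensive to collect
]
print([b.block_id for b in pick_gc_candidates(blocks, free_blocks=3, gc_threshold=5)])
```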
Turning again to the drawings, FIG. 4 is a block diagram of one embodiment of a system for calculating and analyzing a write amplification factor. As shown in FIG. 4, in this embodiment, a computer 400 (also referred to herein as a computing device) and a display 410 are provided. These components may take any suitable form. For example, the computer 400 may be a personal computer or a server, and the display 410 may be a stand-alone monitor. Alternatively, the computer 400 may be a mobile device having the display 410 integrated therein (e.g., as a touch screen). Of course, these are merely examples, and other implementations may be used. For example, in an implementation in which a portion of the write amplification calculation elements are in the controller 102 of the storage system (by using a NAND bus monitor and a host bus monitor), the computer 400 may only read the data and periodically monitor it.
In this embodiment, the computer 400 includes a processor 420 and a memory 430. The processor 420 is configured to implement a write amplification analysis tool 440. In one embodiment, computer readable program code of the write amplification analysis tool 440 is stored in the memory 430 and executed by the processor 420 (i.e., the write amplification analysis tool 440 may be software/firmware executed by hardware). In another embodiment, the write amplification analysis tool 440 is implemented in hardware only. In any case, in one embodiment, the write amplification analysis tool 440 may be used to implement the algorithms shown in the accompanying flow charts and described herein. In one embodiment, the write amplification analysis tool 440 may be used to perform write amplification analysis on a simulation of a storage system or on an actual storage system 100 connected to the computer 400 through a bus 450. The computer 400 may also be used to generate and display a terabytes written (TBW) value, which specifies how many terabytes can be written to the memory 104 before it can no longer absorb any more data.
Referring again to the figures, FIGS. 5-7 are flow diagrams 500, 600, 700 of methods that may be used in one embodiment for calculating the write amplification factor. As described above, the base data for calculating the write amplification factor are the amount of data written by the host and the amount of data written to the memory 104. Examples of methods that may be used to collect this data are shown in FIGS. 5 and 6, respectively.
Referring first to FIG. 5, FIG. 5 is a flow chart 500 illustrating a method of one embodiment for measuring the amount of data written by a host (e.g., the computer 400 in FIG. 4). As shown in FIG. 5, the write amplification analysis tool 440 first divides the time domain into fixed-size windows (e.g., 100 milliseconds (ms)) (operation 510). The write amplification analysis tool 440 then monitors the bus 450 between the computer 400 and the storage system 100 to obtain input records with the following information: command type, command size, command start time, and command duration (operation 520). Beginning with time window #1, the write amplification analysis tool 440 looks for all write commands whose start times fall within this window (operation 530). The write amplification analysis tool 440 then calculates the amount of data written using the following equation: sum(number of sectors x sector size) (operation 540). The write amplification analysis tool 440 then calculates the write performance by dividing the amount of data by the window size (operation 550). The write amplification analysis tool 440 repeats these operations (530, 540, 550) for each window. After all windows have been processed, the write amplification analysis tool 440 has measured the amount of data written by the computer 400.
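The following is a minimal sketch of the windowing procedure of FIG. 5, written in Python. The record format (a list of dicts with "type", "start_ms", and "num_sectors" fields), the 512-byte sector size, and the function name are assumptions for this example rather than details taken from the patent.

```python
from collections import defaultdict

WINDOW_MS = 100      # fixed window size, as in FIG. 5
SECTOR_SIZE = 512    # bytes; an assumed sector size

def host_write_performance(records):
    """Bucket write commands by start time and compute MB/s per window.

    `records` is an assumed list of dicts parsed from a host bus trace,
    each with 'type', 'start_ms', and 'num_sectors' fields.
    """
    bytes_per_window = defaultdict(int)
    for rec in records:
        if rec["type"] != "write":
            continue
        window = int(rec["start_ms"] // WINDOW_MS)
        # sum(number of sectors x sector size), per operation 540
        bytes_per_window[window] += rec["num_sectors"] * SECTOR_SIZE
    # performance = amount of data divided by the window size (operation 550)
    return {w: b / (WINDOW_MS / 1000.0) / 1e6   # MB per second
            for w, b in bytes_per_window.items()}

trace = [{"type": "write", "start_ms": 12,  "num_sectors": 256},
         {"type": "write", "start_ms": 150, "num_sectors": 1024},
         {"type": "read",  "start_ms": 160, "num_sectors": 64}]
print(host_write_performance(trace))
```

The same windowing logic applies to the NAND-side trace of FIG. 6; only the data source (the NAND bus recorder instead of the host bus) changes.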
As shown in the flow chart 600 in FIG. 6, the write amplification analysis tool 440 then measures the amount of data written to the memory 104 of the storage system 100. Although measurements are made on the actual storage system 100 in this example, analysis may be performed on the simulation as described above. As shown in fig. 6, in this embodiment, the write amplification analysis tool 440 divides the time domain into fixed size windows (e.g., 100ms) (operation 610). The write amplification analysis tool 440 then receives input from the NAND bus recorder 111 and extracts the command type and command size using a protocol analyzer for recording activity on the NAND bus (operation 620). For example, the amount of data for control or relocation may be obtained from the NAND bus recorder 111, which occurs in parallel and is synchronized with host activity, and the address behavior may be obtained from a recording utility in the host, such as FTrace.
Beginning with time window #1, the write amplification analysis tool 440 looks for all write commands whose start times fall within this window (operation 630). The write amplification analysis tool 440 then calculates the amount of data written using the following equation: sum(number of sectors x sector size) (operation 640). The write amplification analysis tool 440 then calculates the write performance by dividing the amount of data by the window size (operation 650). The write amplification analysis tool 440 repeats these operations (630, 640, 650) for each window. After all windows have been processed, the write amplification analysis tool 440 has measured the amount of data written to the memory 104 of the storage system 100.
Turning now to the flowchart in FIG. 7, to calculate the write amplification factor, the write amplification analysis tool 440 may begin with time window #1 (operation 710) and divide the NAND performance of the window (as determined in flowchart 600 of FIG. 6) by the host performance of the window (as determined in flowchart 500 of FIG. 5) (operation 720). The write amplification analysis tool 440 repeats the process for the other windows.
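A minimal sketch of this per-window division follows, assuming the two per-window performance dictionaries produced by the previous measurements. How windows with no host writes should be reported is a design choice; here they are simply skipped, which is an assumption rather than the patent's stated behavior.

```python
def per_window_waf(host_mb_per_s, nand_mb_per_s):
    """Divide NAND write performance by host write performance per window,
    as in FIG. 7. Windows with no host writes are skipped in this sketch.
    """
    waf = {}
    for window, nand_rate in nand_mb_per_s.items():
        host_rate = host_mb_per_s.get(window, 0.0)
        if host_rate > 0:
            waf[window] = nand_rate / host_rate
    return waf

host = {0: 100.0, 1: 80.0, 2: 0.0}
nand = {0: 110.0, 1: 160.0, 2: 40.0}   # window 2: relocation/control only
print(per_window_waf(host, nand))       # {0: 1.1, 1: 2.0}
```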
With the information now collected, the write amplification analysis tool 440 can simultaneously display on the display 410 graphs of the amount of data written from the host, the amount of data written to the memory, and the write amplification values, synchronized over the time period of the various write scenarios, so the storage system designer can see the various effects on the write amplification factor over time. For example, FIG. 8A shows five different write and erase activities over time (sequential write over the entire media, random write of 4 GB, discard/unmap of 1/2 of the media, random write of 4 GB, sequential write of 1/2 of the media, and random write over the entire media), and FIGS. 8B-8D show the effect of these activities over time on the measured data written by the host (FIG. 8B), the measured data written to the NAND (FIG. 8C), and the calculated write amplification factor (FIG. 8D).
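For readers who want a concrete picture of "synchronized graphs", the sketch below stacks the three curves on a shared time axis using matplotlib. This is only one possible rendering; the function name and the per-window input lists are assumptions for this example, not part of the patent.

```python
import matplotlib.pyplot as plt

def plot_write_amplification(windows, host_mb_s, nand_mb_s, waf):
    """Stack host writes, NAND writes, and WAF on a shared time axis so
    the three metrics can be compared visually, window by window.
    The input lists are assumed to be aligned per 100 ms window.
    """
    fig, (ax_host, ax_nand, ax_waf) = plt.subplots(3, 1, sharex=True)
    ax_host.plot(windows, host_mb_s)
    ax_host.set_ylabel("Host writes (MB/s)")
    ax_nand.plot(windows, nand_mb_s)
    ax_nand.set_ylabel("NAND writes (MB/s)")
    ax_waf.plot(windows, waf)
    ax_waf.set_ylabel("WAF")
    ax_waf.set_xlabel("Time (100 ms windows)")
    fig.tight_layout()
    plt.show()
```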
Of course, the above-described graphs are merely examples, and different or additional types of graphs may also be used. Some of these additional graphs are shown in FIGS. 9A-9G.
FIG. 9A illustrates the behavior of commands in the address space over time. FIG. 9B (similar to FIG. 8D) shows the write amplification over time derived from all of the factors discussed above. FIG. 9C shows over-provisioning in terms of both free blocks and flash management units (FMUs) (e.g., 4 kilobytes). In the case of random writes, the amount of free FMUs remains the same, but the amount of available blocks may decrease over time until a garbage collection threshold is reached, thereby activating the garbage collection process. FIG. 9D shows the data written to the NAND (similar to FIG. 8C). FIG. 9E shows the NAND data minus the host data. FIG. 9F shows the excess data generated from control activities only. As described below, sequential writes typically have very few control updates, while purely random writes generate many more table updates. Additional garbage collection generates more control data. FIG. 9G shows the relocation data when garbage collection is initiated.
As shown in these figures, for sequential writes, data is written directly to memory with sequential addresses (FIG. 9A), and the amount of control data required is minimal (FIG. 9F) since updates to the logical-to-physical address table are minimal. Therefore, as shown in FIG. 9B, the write amplification factor is almost 1, which means that the data written to the memory comes almost entirely from the data written by the host. As shown in FIG. 9C, in this example, the entire exported capacity of the media is written, such that the over-provisioned flash management units (FMUs) and over-provisioned blocks reach a level just above the garbage collection threshold (i.e., the memory is filled to its exported capacity and all that remains is the minimum level of spare blocks).
Next, for random writes in the 4GB range, there is a jump in the write amplification factor (FIG. 9B) because random writes invalidate the data in different blocks and due to the host activity, the logical to physical address table needs to be updated (FIG. 9F). However, as shown in FIG. 9C, at random writes, the number of over-provisioned blocks falls below the garbage collection threshold. This will cause the controller 102 to perform garbage collection and re-allocate data from the old block to the new block (FIG. 9G), which will also cause the controller 102 to update the logical to physical address table (FIG. 9F) according to this garbage collection activity. Writing this extra control data due to garbage collection results in another jump in write amplification (fig. 9B).
Next, the host sends a discard/unmap command (i.e., address independent) for half of the media. This means that almost immediately half of the exported capacity is available again, although the data in those blocks may or may not be erased (FIG. 9A), and a jump occurs in the over-provisioning (FIG. 9C). In addition, as shown in FIG. 9F, there is a small bump in the amount of control data because of the updates to the logical-to-physical address table to make the blocks available.
After the discard command, there is another random write in the 4GB range. This time, since half of the media is available for storage, data can be written without triggering garbage collection. Thus, the logical-to-physical address table only needs to be updated based on host activity (rather than garbage collection activity as with previous random writes), so the write amplification factor is less than that of previous random writes (FIG. 9F). Next, one half of the medium is written sequentially. This is similar to the sequential writing discussed previously. Finally, the entire medium is randomly written. This is also similar to the random writing discussed above, but since more data is now being written, the write amplification factor, control data and relocation data have increased.
These embodiments have several advantages. For example, the write amplification analysis tool disclosed herein provides a powerful analysis tool that displays the dynamic behavior of write amplification and can correlate write amplification with other available information. The tool allows a storage system designer to analyze actual usage consisting of a large number of read, write, discard, flush, and other operations having different sizes and addresses. Unlike prior methods that generate only a value for the write amplification factor, these embodiments allow storage system designers to view the impact of real write usage on the write amplification factor over time by simultaneously displaying graphs of various metrics synchronized to the same time scale.
Displaying all of this information in a synchronized manner on the same time scale shows the dynamic behavior of write amplification over time and its correlation with the various input scenarios. For example, displaying the amount of data written by the host versus the amount of data written to the memory of the storage system for various types of write activity may show the storage system designer when a problem occurs and the cause of the problem. Using these graphs, a storage system designer may determine what adjustments to make to the flash management algorithms in the storage system 100 in order to improve or optimize the write amplification factor, to improve response time and avoid reducing memory endurance. As used herein, "flash management algorithm" refers to the hardware, software, and/or firmware in the storage system 100 that controls the operation of the storage system 100 and, in particular, affects the relocation and control factors that contribute to write amplification. That is, after viewing the various graphs, a storage system designer may modify the flash management algorithm to reduce the write amplification factor (e.g., average or peak) for various scenarios.
For example, as shown and discussed above, performing random writes across the entire medium produces the worst write amplification factor. To reduce write amplification, a storage system designer may change the structure of the logical-to-physical address table, or change the policy as to when to update the table. Generally, updating the table at a lower frequency increases the risk of losing data in the case of a sudden power outage. However, if sudden power outages are unlikely to occur, reducing the number of table updates may be acceptable for improving performance. The write amplification analysis tool may also be configured to automatically or semi-automatically (e.g., based on designer input) calculate an optimization function of the flash management algorithm to reduce the write amplification factor. For example, the write amplification analysis tool may be used to optimize the firmware code of a storage system for folding, garbage collection, and the like.
In another embodiment, a method and system for visualizing a correlation between host commands and storage system performance are provided. A storage system, sometimes referred to herein as a "storage device" or "device," such as a solid state drive (SSD) embedded in a mobile device (e.g., a phone, tablet, or wearable device), may perform a number of input/output operations (e.g., read, write, trim, and flush) during use of the mobile device. Operations may include attributes such as timestamps at initiation and completion and peripheral data such as power state and aggregate queue depth. Analysis of these input/output commands and their attributes can be used to design and implement algorithms for data storage. In operation, an application running in the storage system may record the various operations that occur, and this log may be analyzed by a computing device (e.g., a host or another device). In a data analysis environment for storage workloads, there are typically millions of individual data points that represent particular characteristics of input/output operations sent from the mobile device (host) to the storage system, and vice versa.
When there is a performance or power problem with a host interacting with a storage system, it may be assumed that the problem is related to the storage system, and the data analysis described above may be used to help identify and resolve the problem. For example, if there are inefficiencies in the data storage algorithms, analysis of the input/output operations of the storage system can be used to identify the cause of the inefficiency and suggest a solution. Examples of such analyses can be found in U.S. patent application No. 15/347565 filed on 9/11/2016 and U.S. patent application No. 15/226661 filed on 2/8/2016, both of which are hereby incorporated by reference.
However, there may be situations where a performance or power issue is caused, at least in part, by the host. In this case, analyzing storage system operations alone will not reveal that the host is contributing to the problem or identify the source of the problem. The following embodiments may be used to address this situation.
As shown in the flow diagram 1000 in fig. 10, in one embodiment, a computing device 400 (e.g., a Personal Computer (PC) or server) receives information from a host regarding host operations (events) performed during a period of time (operation 1010), and also receives information from the storage system 100 regarding storage system operations (events) performed during the period of time (operation 1020). For example, the host and storage system 100 may provide its logs to the computing device 400 over a wired or wireless connection. The log may include a list of various commands or operations that occurred and various related data such as, but not limited to, timestamps at initiation and completion, power state, aggregate queue depth, and the like. In addition to or instead of using the log, host and/or storage system activity may be measured by monitoring traffic on the bus used with those components. In one embodiment, the computing device 400 communicates with a server that provides computer readable program code to the computing device 400 to execute software that executes the algorithm shown in the flow chart 1000 in FIG. 10.
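As a rough sketch of how such logs might be brought together, the Python example below reads two CSV logs and merges them into one chronologically ordered event list. The CSV column names, the microsecond timestamps, and the "host"/"storage" source tags are assumptions chosen for this illustration; the actual log formats are not specified in the patent.

```python
import csv
from typing import Dict, List

def load_events(path: str, source: str) -> List[Dict]:
    """Read a CSV log with assumed columns: operation, start_us, end_us.

    Each row becomes an event tagged with its source ("host" or
    "storage") so the two logs can later be drawn on one timeline.
    """
    events = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            events.append({"source": source,
                           "op": row["operation"],
                           "start_us": int(row["start_us"]),
                           "end_us": int(row["end_us"])})
    return events

def merge_chronologically(host_events: List[Dict],
                          storage_events: List[Dict]) -> List[Dict]:
    """Interleave host and storage events by start time."""
    return sorted(host_events + storage_events, key=lambda e: e["start_us"])
```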
The computing device 400 draws these events chronologically on the same graph (operation 1030), with the result that both host operations and storage system operations are displayed simultaneously during this time period. FIG. 11 is an example of such a graph showing host tasks and storage system tasks over time. As shown in FIG. 11, the host performs multiple file write operations within the host. These operations may be, for example, the host's operating system queuing write commands in the host's memory after a user invokes a write function. In this example, there are five write operations, each represented by a different rectangle, with the first write operation being larger than the other write operations. At this point, there is no storage system operation because the host has not yet provided a write command to the storage system 100.
At some point (e.g., due to a host triggered timeout or because the host application called it), the host decides to flush the write command from its internal memory to the memory system 100 by calling a kernel Application Program Interface (API) to begin a synchronization operation. This is indicated by the synchronization start arrow in fig. 11. At this time, the host performs additional operations for a period of time. For example, the host may merge one or more write commands together, place the write commands in a queue, update an internal table in the host's file system, determine which page of memory is "dirty," or even perform an operation unrelated to the write command. Thus, in this example, there is a delay between the start of synchronization and the time that the memory system 100 actually begins to perform a write operation.
At some point, the memory system 100 is ready to perform a write operation. Fig. 11 shows the memory system 100 sequentially performing four write operations (in this example, two of five write operations are merged). In this example, FIG. 11 also shows the host performing additional operations after the storage system 100 has completed the write operation. At some later time, the host calls the kernel API to end the synchronization operation. This is indicated by the synchronization end arrow in fig. 11. Thus, in this example, there is a delay between the time the memory system 100 completes performing the write operation and the end of the synchronization. This delay and the delay after the start of synchronization are measures of the file system overhead.
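A timeline of this kind can be rendered as a simple Gantt-style plot: one lane of bars for host operations, one for storage system operations, and vertical markers at the start and end of the synchronization. The sketch below, using matplotlib, is one possible way to draw it under the assumed event format from the earlier example; it is not the patent's actual tool.

```python
import matplotlib.pyplot as plt

LANES = {"host": 10, "storage": 0}   # y-offsets for the two lanes

def plot_timeline(events, sync_start_us=None, sync_end_us=None):
    """Draw host and storage operations as horizontal bars on a shared
    time axis, with optional markers for the start and end of the
    synchronization (flush) operation, similar in spirit to FIG. 11.
    """
    fig, ax = plt.subplots()
    for ev in events:
        ax.broken_barh([(ev["start_us"], ev["end_us"] - ev["start_us"])],
                       (LANES[ev["source"]], 8))
    for x, label in ((sync_start_us, "sync start"), (sync_end_us, "sync end")):
        if x is not None:
            ax.axvline(x, linestyle="--")
            ax.text(x, 20, label, rotation=90, va="bottom")
    ax.set_yticks([4, 14])
    ax.set_yticklabels(["storage", "host"])
    ax.set_xlabel("Time (µs)")
    plt.show()
```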
As shown in FIG. 11, this embodiment helps technicians visualize the correlation between host commands and storage system performance by simultaneously displaying information about host operations and information about storage system operations over the same time period. Here, the graph in FIG. 11 shows that relatively little of the time between the start of synchronization and the end of synchronization is consumed by storage system operations (operation 1040). Without the benefit of this graph, a technician might assume that the storage system 100 was performing write operations for the entire time between the start of synchronization and the end of synchronization, and might not recognize that the delay comes primarily from host overhead. Thus, if a performance problem is encountered while the host is interacting with the storage system 100, the graph indicates that the problem is on the host side rather than the storage system side. Based on this knowledge, the technician may focus on improving the efficiency of the host rather than the efficiency of the storage system 100. For example, a technician may use the information on the graph to determine an optimal size of a write buffer and/or a capacitor size for flushing data in the event of an ungraceful shutdown (UGSD).
This embodiment may also be used to derive storage system power and throughput as the cost of host activity (operation 1050). As such, this embodiment may be used to visualize the correlation between host commands and device performance as well as the power consumption of each command sequence and storage device and host platform. This embodiment will now be discussed in conjunction with fig. 12-14.
FIG. 12 shows a graph of tasks performed by the storage device 100 over time. The graph does not show host activity (i.e., host activity has been filtered out of the graph). FIG. 13 adds host activity. FIG. 13 is similar to FIG. 11 in that it shows writes to main memory prior to a synchronization operation, and host activity during the synchronization operation. However, the host activity during the synchronization operation is less than in FIG. 11. Finally, FIG. 14 adds a curve of power consumption over time to the graph. As can be seen in FIG. 14, power consumption is a function of both host operations and storage system operations. The graph shows the amount of power consumed by the storage system during a write operation due to host overhead.
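A power overlay of this kind could be added to the timeline sketch above by stacking a power curve beneath the task lanes on a shared time axis. The example below assumes the power samples arrive as (time_us, milliwatts) pairs, e.g. from a power analyzer capturing the platform during the trace; the sample format and function name are assumptions for this illustration.

```python
import matplotlib.pyplot as plt

def plot_tasks_with_power(events, power_samples):
    """Show the task timeline and a power-consumption curve on a shared
    time axis so power can be attributed to host or storage activity.

    `power_samples` is an assumed list of (time_us, milliwatts) pairs.
    """
    fig, (ax_tasks, ax_power) = plt.subplots(2, 1, sharex=True)
    lanes = {"host": 10, "storage": 0}
    for ev in events:
        ax_tasks.broken_barh(
            [(ev["start_us"], ev["end_us"] - ev["start_us"])],
            (lanes[ev["source"]], 8))
    ax_tasks.set_yticks([4, 14])
    ax_tasks.set_yticklabels(["storage", "host"])
    times, mw = zip(*power_samples)
    ax_power.plot(times, mw)
    ax_power.set_ylabel("Power (mW)")
    ax_power.set_xlabel("Time (µs)")
    plt.show()
```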
Such a graph may help identify what is contributing to the drain on the host's battery life. These embodiments have several advantages. For example, the profile of tasks over time generated by these embodiments may be used to evaluate file system performance, power, and throughput for each command sequence and each storage system. It also allows the file system overheads of different platforms to be compared. Furthermore, there are several alternatives that can be used with these embodiments. For example, while the host and storage system activities are shown as being displayed simultaneously on the same graph, in other embodiments the activities are displayed simultaneously on different graphs shown alongside one another, or even in a chart or some other non-graphical form.
Finally, as noted above, any suitable type of memory may be used. Semiconductor memory devices include volatile memory devices such as dynamic random access memory ("DRAM") or static random access memory ("SRAM") devices, non-volatile memory devices such as resistive random access memory ("ReRAM"), electrically erasable programmable read only memory ("EEPROM"), flash memory (which may also be considered a subset of EEPROM), ferroelectric random access memory ("FRAM"), and magnetoresistive random access memory ("MRAM"), and other semiconductor elements capable of storing information. Each type of memory device may have a different configuration. For example, flash memory devices may be configured in a NAND configuration or a NOR configuration.
The memory device may be formed of passive elements and/or active elements in any combination. By way of non-limiting example, a passive semiconductor memory element includes a ReRAM device element, which in some embodiments includes a resistivity-switching memory element such as an antifuse, phase change material, or the like, and optionally a steering element such as a diode, or the like. By way of further non-limiting example, active semiconductor memory elements include EEPROM and flash memory device elements, which in some embodiments include elements having charge storage regions, such as floating gates, conductive nanoparticles, or charge storage dielectric materials.
The plurality of memory elements may be configured such that they are connected in series or such that each element is individually accessible. By way of non-limiting example, a flash memory device (NAND memory) in a NAND configuration typically contains memory elements connected in series. A NAND memory array may be configured such that the array is made up of multiple strings of memory, where a string is made up of multiple memory elements that share a single bit line and are accessed as a group. Alternatively, the memory elements may be configured such that each element is individually accessible, such as a NOR memory array. NAND memory configurations and NOR memory configurations are examples, and the memory elements may be configured in other ways.
Semiconductor memory elements located within and/or over a substrate may be arranged in two or three dimensions, such as a two-dimensional memory structure or a three-dimensional memory structure.
In a two-dimensional memory structure, semiconductor memory elements are arranged in a single plane or a single memory device level. Typically, in a two-dimensional memory structure, the memory elements are arranged in a plane (e.g., in an x-z direction plane) that extends substantially parallel to a major surface of a substrate supporting the memory elements. The substrate may be a wafer on or in which layers of memory elements are formed, or it may be a carrier substrate that is attached to the memory elements after they are formed. As a non-limiting example, the substrate may comprise a semiconductor, such as silicon.
The memory elements may be arranged in a single memory device level in an ordered array, such as in multiple rows and/or columns. However, the memory elements may be arranged in a non-regular or non-orthogonal configuration. The memory elements may each have two or more electrodes or contact lines, such as bit lines and word lines.
The three-dimensional memory array is arranged such that the memory elements occupy multiple planes or multiple memory device levels, forming a structure in three dimensions (i.e., in an x-direction, a y-direction, and a z-direction, where the y-direction is substantially perpendicular to a major surface of the substrate, and the x-direction and the z-direction are substantially parallel to the major surface of the substrate).
As a non-limiting example, the three-dimensional memory structure may be vertically arranged as a stack of multiple two-dimensional memory device levels. As another non-limiting example, a three-dimensional memory array may be arranged as a plurality of vertical columns (e.g., columns extending substantially perpendicular to a major surface of a substrate, i.e., in the y-direction), with each column having a plurality of memory elements in each column. The columns may be arranged in a two-dimensional configuration, e.g., in an x-z plane, resulting in a three-dimensional arrangement of memory elements, where the elements are located on multiple vertically stacked memory planes. Other configurations of three-dimensional memory elements may also constitute a three-dimensional memory array.
By way of non-limiting example, in a three-dimensional NAND memory array, memory elements can be coupled together to form NAND strings within a single level (e.g., x-z) of memory devices. Alternatively, the memory elements can be coupled together to form vertical NAND strings that traverse multiple horizontal memory device levels. Other three-dimensional configurations are contemplated in which some NAND strings contain memory elements in a single memory level, while other strings contain memory elements spanning multiple memory levels. Three-dimensional memory arrays may also be designed in NOR configurations as well as ReRAM configurations.
Typically, in a monolithic three dimensional memory array, one or more memory device levels are formed above a single substrate. Optionally, the monolithic three dimensional memory array may also have one or more memory layers at least partially within a single substrate. As a non-limiting example, the substrate may comprise a semiconductor, such as silicon. In a monolithic three dimensional array, the layers making up each memory device level of the array are typically formed on the layers of the underlying memory device levels of the array. However, the layers of adjacent memory device levels of the monolithic three dimensional memory array may be shared or have intervening layers between the memory device levels.
Alternatively, two-dimensional arrays may be formed separately and then packaged together to form a non-monolithic memory device having multiple memory layers. For example, a non-monolithic stacked memory may be constructed by forming memory levels on separate substrates and then stacking the memory levels on top of one another. The substrates may be thinned or removed from the memory device levels before stacking, but because the memory device levels are initially formed on separate substrates, the resulting memory array is not a monolithic three-dimensional memory array. Further, multiple two-dimensional memory arrays or three-dimensional memory arrays (monolithic or non-monolithic) may be formed on separate chips and then packaged together to form a stacked-chip memory device.
Associated circuitry is typically required to operate and communicate with the memory elements. As a non-limiting example, a memory device may have circuitry for controlling and driving memory elements to implement functions such as programming and reading. The associated circuitry may be located on the same substrate as the memory elements and/or on a separate substrate. For example, the controller for memory read-write operations may be located on a separate controller chip and/or on the same substrate as the memory elements.
Those skilled in the art will recognize that the present invention is not limited to the two-dimensional structures and three-dimensional structures described, but encompasses all related memory structures as described herein and as understood by those skilled in the art within the spirit and scope of the present invention.
It is intended that the foregoing detailed description be understood as an illustration of selected forms that the invention can take and not as a definition of the invention. It is intended that only the following claims, including all equivalents, define the scope of the invention as claimed. Finally, it should be noted that any aspect of any embodiment described herein can be used alone or in combination with one another.

Claims (20)

1. A method, comprising:
performing the following operations in a computing device:
receiving information about a host operation of a host performed over a period of time;
receiving information about storage system operations of a storage system performed during the time period; and
displaying both the host operation and the storage system operation during the time period simultaneously.
2. The method of claim 1, wherein the host operations and the storage system operations are displayed simultaneously in a graph.
3. The method of claim 2, wherein the graph shows when a host operation is performed without a storage system operation.
4. The method of claim 2, wherein the graph indicates a start and a stop of a synchronization operation.
5. The method of claim 2, further comprising using information displayed on the graph to determine a write buffer size.
6. The method of claim 2, further comprising using information displayed on the graph to determine a capacitor size.
7. The method of claim 1, further comprising displaying a graph of power consumption over the period of time.
8. The method of claim 1, wherein the storage system comprises a three-dimensional memory.
9. The method of claim 1, wherein the storage system is embedded in the host.
10. The method of claim 1, wherein the storage system is removably connected to the host.
11. A method, comprising:
performing the following operations in a computing device:
receiving information about activity of a host after the host initiates a process to flush a command to a storage system;
receiving information about activity of the storage system after the host initiates the process to flush the command to the storage system; and
concurrently displaying the information about the activity of the host and the information about the activity of the storage system.
12. The method of claim 11, wherein the concurrent display shows a period of time, after the host flushes the command to the storage system, during which there is host activity and no storage system activity.
13. The method of claim 11, wherein the information about the activity of the host and the information about the activity of the storage system are displayed on a graph.
14. The method of claim 13, further comprising displaying a curve of power consumption on the graph.
15. The method of claim 13, further comprising displaying, on the graph, an indicator of when the host initiated and ended the process of flushing the command to the storage system.
16. The method of claim 11, further comprising determining a write buffer size based on the information displayed simultaneously.
17. The method of claim 11, further comprising determining a capacitor size based on the information displayed simultaneously.
18. A computing device, comprising:
means for receiving information about a host operation of a host performed over a period of time;
means for receiving information regarding storage system operations of a storage system performed during the period of time; and
means for displaying both the host operation and the storage system operation over the period of time simultaneously.
19. The computing device of claim 18, further comprising means for displaying a graphic showing power consumption over the period of time.
20. The computing device of claim 18, further comprising means for displaying indicators of the start and stop of flush operations.
CN201980079230.2A 2019-04-01 2019-12-19 Method and system for visualizing correlations between host commands and storage system performance Pending CN113168287A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US16/371,613 US10564888B2 (en) 2016-11-09 2019-04-01 Method and system for visualizing a correlation between host commands and storage system performance
US16/371,613 2019-04-01
PCT/US2019/067445 WO2020205017A1 (en) 2019-04-01 2019-12-19 Method and system for visualizing a correlation between host commands and storage system performance

Publications (1)

Publication Number Publication Date
CN113168287A true CN113168287A (en) 2021-07-23

Family

ID=72666770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980079230.2A Pending CN113168287A (en) 2019-04-01 2019-12-19 Method and system for visualizing correlations between host commands and storage system performance

Country Status (3)

Country Link
CN (1) CN113168287A (en)
DE (1) DE112019005393T5 (en)
WO (1) WO2020205017A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110191654A1 (en) * 2010-02-03 2011-08-04 Seagate Technology Llc Adjustable error correction code length in an electrical storage device
US20170068451A1 (en) * 2015-09-08 2017-03-09 Sandisk Technologies Inc. Storage Device and Method for Detecting and Handling Burst Operations
US20170131948A1 (en) * 2015-11-06 2017-05-11 Virtium Llc Visualization of usage impacts on solid state drive life acceleration
CN106708431A (en) * 2016-12-01 2017-05-24 华为技术有限公司 Data storage method, data storage device, mainframe equipment and storage equipment
US20170160933A1 (en) * 2014-06-24 2017-06-08 Arm Limited A device controller and method for performing a plurality of write transactions atomically within a nonvolatile data storage device
US20180276116A1 (en) * 2017-03-21 2018-09-27 Western Digital Technologies, Inc. Storage System and Method for Adaptive Scheduling of Background Operations

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5853899B2 (en) * 2012-03-23 2016-02-09 ソニー株式会社 Storage control device, storage device, information processing system, and processing method therefor
US20160105329A1 (en) * 2014-10-09 2016-04-14 Splunk Inc. Defining a service-monitoring dashboard using key performance indicators derived from machine data
US10296260B2 (en) * 2016-11-09 2019-05-21 Sandisk Technologies Llc Method and system for write amplification analysis

Also Published As

Publication number Publication date
DE112019005393T5 (en) 2021-09-02
WO2020205017A1 (en) 2020-10-08

Similar Documents

Publication Publication Date Title
CN109690468B (en) Write amplification analysis method, related computing device and computer-readable storage medium
US10564888B2 (en) Method and system for visualizing a correlation between host commands and storage system performance
US9921956B2 (en) System and method for tracking block level mapping overhead in a non-volatile memory
CN111149096B (en) Quality of service of adaptive device by host memory buffer range
US10032488B1 (en) System and method of managing data in a non-volatile memory having a staging sub-drive
US20100017650A1 (en) Non-volatile memory data storage system with reliability management
US10776268B2 (en) Priority addresses for storage cache management
US9904477B2 (en) System and method for storing large files in a storage device
US10264097B2 (en) Method and system for interactive aggregation and visualization of storage system operations
JP7293458B1 (en) Storage system and method for quantifying storage fragmentation and predicting performance degradation
US11086389B2 (en) Method and system for visualizing sleep mode inner state processing
CN113168287A (en) Method and system for visualizing correlations between host commands and storage system performance
US20210181980A1 (en) Storage System and Method for Improving Utilization of a Communication Channel between a Host and the Storage System
US20230281122A1 (en) Data Storage Device and Method for Host-Determined Proactive Block Clearance
US11941282B2 (en) Data storage device and method for progressive fading for video surveillance systems
US11599298B1 (en) Storage system and method for prediction-based pre-erase of blocks to improve sequential performance
US11520695B2 (en) Storage system and method for automatic defragmentation of memory
US11880256B2 (en) Data storage device and method for energy feedback and report generation
US11487449B2 (en) Data storage device and method for enabling higher lane utilization in run time via device hints on workload patterns
US11809747B2 (en) Storage system and method for optimizing write-amplification factor, endurance, and latency during a defragmentation operation
US20230315335A1 (en) Data Storage Device and Method for Executing a Low-Priority Speculative Read Command from a Host
US11650758B2 (en) Data storage device and method for host-initiated cached read to recover corrupted data within timeout constraints
US20230062493A1 (en) Storage System and Method for Performing a Targeted Read Scrub Operation During Intensive Host Reads
US20230385068A1 (en) Data Storage Device and Method for Storage-Class-Memory-Accelerated Boot Partition Optimization
CN113176849A (en) Storage system and method for maintaining uniform thermal count distribution using intelligent stream block swapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination