WO2015096455A1 - Method and apparatus for using a solid state disk - Google Patents

Method and apparatus for using a solid state disk (固态硬盘使用方法及装置)

Info

Publication number
WO2015096455A1
WO2015096455A1 · PCT/CN2014/081976 · CN2014081976W
Authority
WO
WIPO (PCT)
Prior art keywords
data block
latency
block
warning value
solid state
Prior art date
Application number
PCT/CN2014/081976
Other languages
English (en)
French (fr)
Inventor
Zhou Jianhua (周建华)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP14873676.2A priority Critical patent/EP3079067A4/en
Publication of WO2015096455A1 publication Critical patent/WO2015096455A1/zh
Priority to US15/189,857 priority patent/US10310930B2/en

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/0616 Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
    • G06F11/079 Root cause analysis, i.e. error or fault diagnosis
    • G06F11/0727 Error or fault processing not based on redundancy, the processing taking place in a storage system, e.g. in a DASD or network based storage system
    • G06F11/0757 Error or fault detection not based on redundancy, by exceeding a time limit, i.e. time-out, e.g. watchdogs
    • G06F12/0246 Memory management in non-volatile memory in block erasable memory, e.g. flash memory
    • G06F3/0611 Improving I/O performance in relation to response time
    • G06F3/064 Management of blocks
    • G06F3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F3/0676 Magnetic disk device
    • G06F3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G06F11/3419 Recording or statistical evaluation of computer activity for performance assessment by assessing time
    • G06F2212/1036 Life time enhancement
    • G06F2212/214 Solid state disk
    • G06F2212/7204 Capacity control, e.g. partitioning, end-of-life degradation
    • G06F2212/7206 Reconfiguration of flash memory system
    • G06F2212/7211 Wear leveling

Definitions

  • The present invention relates to the field of storage technologies, and in particular to a method and an apparatus for using a solid state disk.

Background Art
  • Most solid state disks (SSDs) are implemented with a non-volatile memory medium, NAND Flash.
  • NAND Flash cells can be divided into single-level cell (SLC) and multi-level cell (MLC) types.
  • A NAND Flash chip typically consists of an internal register and a storage matrix. The storage matrix includes a plurality of blocks, each block includes a plurality of pages, and each page includes a plurality of bytes.
  • SSDs typically use MLC NAND Flash chips, and the operations on NAND Flash are mainly read, write, and erase.
  • Reads and writes on NAND Flash are performed in units of pages, while erases are performed in units of blocks; a block must be erased before its pages can be written again.
  • Repeated erasing wears the floating-gate transistors inside the NAND Flash and eventually causes operations to fail. When an erase failure or similar error occurs, the NAND Flash actively reports it to the SSD, and the SSD marks the block on which the operation failed as a bad block. As the number of NAND Flash erase cycles (also known as PE cycles) increases, and once the proportion of bad blocks reaches a certain level, for example 3%, the NAND Flash is considered to have reached the end of its useful life.
  • If some blocks are erased far more often than others, bad blocks appear early and the SSD life is reduced.
  • To avoid this, a load balancing (wear leveling) technique is introduced: a balancing table records the number of erases of each block.
  • The block with the lower erase count is preferentially selected for operation, so that the erase counts of all blocks in the SSD stay at the same level, that is, the number of erase/write cycles of each block is kept as uniform as possible.
  • In addition, some redundant blocks in the SSD are set aside as reserved blocks. When a bad block occurs, a reserved block replaces the failed block, avoiding premature failure of the entire SSD and thereby extending its service life.
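The conventional erase-count-based selection described above can be sketched as follows. This is a minimal illustration only; the table layout and the names `balancing_table` and `pick_block` are assumptions, not the patent's implementation:

```python
# Minimal sketch of conventional wear leveling: the balancing table maps each
# block id to its erase count, and the block with the lowest count is picked
# for the next erase/write. Names here are illustrative, not from the patent.

def pick_block(balancing_table: dict[int, int]) -> int:
    """Return the block id with the fewest erase cycles."""
    return min(balancing_table, key=balancing_table.get)

balancing_table = {0: 120, 1: 95, 2: 140, 3: 95}
block = pick_block(balancing_table)   # ties resolve to the first minimum
balancing_table[block] += 1           # record the new erase in the table
print(block, balancing_table[block])
```

This keeps erase counts level, but, as the embodiments below argue, the erase count alone does not capture the physical health of a block.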
  • Embodiments of the present invention provide a method and an apparatus for using a solid state disk, which reduce the generation of bad blocks through load balancing, reduce the consumption of reserved blocks, and ultimately improve the service life of the solid state disk.
  • In a first aspect, an embodiment of the present invention provides a method for using a solid state disk, including: when a data block in the solid state disk is to be operated on, determining the latency of the data block to be operated according to a load balancing table of the solid state disk, where the latency is the time taken to perform the operation on the data block, and the operation includes an erase operation or a write operation; determining whether the latency is greater than a warning value, where the warning value is less than a typical latency, and the typical latency is a preset latency at which an operation on a data block in the solid state disk fails; and if the latency is greater than the warning value, prohibiting performing the operation on the data block.
  • In a possible implementation, the method further includes: obtaining the latency of the data block during the operation; and
  • updating the latency of the data block in the load balancing table according to the obtained latency.
  • In a possible implementation, after the performing of the operation on the data block is prohibited, the method further includes:
  • recording the data block in a preset pre-bad-block table, where the pre-bad-block table is used to indicate data blocks on which operations need to be reduced.
  • In a second aspect, an embodiment of the present invention provides a method for using a solid state disk, including: determining a data block to be operated according to a load balancing table of the solid state disk;
  • performing an operation on the data block, where the operation includes an erase operation or a write operation; and recording, in the load balancing table, the latency of the data block during the operation, where the latency is the time taken to perform the operation on the data block;
  • and, if the latency of the data block is greater than a warning value, recording the data block in a preset pre-bad-block table.
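The claimed flow can be sketched in a few lines. All names and the 10 ms / 9 ms values are illustrative (the figures later in the description use the same numbers); the patent does not prescribe an implementation:

```python
# Sketch of the second-aspect method: after each operation on a block, record
# its measured latency in the load balancing table and move the block to the
# pre-bad-block table once the latency exceeds the warning value.
# All names and threshold values are illustrative assumptions.

TYPICAL_LATENCY_MS = 10.0   # preset latency at which operations fail
WARNING_VALUE_MS = 9.0      # must be below the typical latency

load_balancing_table = {}   # block id -> last measured latency (ms)
pre_bad_blocks = set()      # blocks whose erase/write operations are reduced

def record_operation(block: int, measured_latency_ms: float) -> bool:
    """Record a block's latency; return False if the block is now pre-bad."""
    load_balancing_table[block] = measured_latency_ms
    if measured_latency_ms > WARNING_VALUE_MS:
        pre_bad_blocks.add(block)   # reduce/suspend further erases and writes
        return False
    return True

print(record_operation(7, 4.2))   # True: block 7 is still healthy
print(record_operation(3, 9.5))   # False: block 3 is recorded as pre-bad
```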
  • In a third aspect, an embodiment of the present invention provides an apparatus for using a solid state disk, including: a determining module, configured to, when a data block in the solid state disk needs to be operated on, determine the latency of the data block according to a load balancing table of the solid state disk,
  • where the latency is the time taken to perform the operation on the data block, and the operation includes an erase operation or a write operation;
  • a judging module, configured to judge whether the latency of the data block determined by the determining module is greater than a warning value, where the warning value is less than a typical latency, and the typical latency is a preset latency at which an operation on a data block in the solid state disk fails; and
  • a processing module, configured to prohibit performing the operation on the data block if the judging module judges that the latency of the data block is greater than the warning value.
  • The processing module is further configured to perform the operation on the data block if the judging module judges that the latency of the data block is not greater than the warning value.
  • In a possible implementation, the apparatus further includes:
  • an obtaining module, configured to obtain the latency of the data block during the operation; and
  • an updating module, configured to update the latency of the data block in the load balancing table according to the latency obtained by the obtaining module.
  • In a possible implementation, the apparatus further includes:
  • a recording module, configured to record the data block in a preset pre-bad-block table after the processing module prohibits performing the operation on the data block, where the pre-bad-block table is used to indicate data blocks on which operations need to be reduced.
  • In a fourth aspect, an embodiment of the present invention provides an apparatus for using a solid state disk, including: a selecting module, configured to determine a data block according to a load balancing table of the solid state disk; and an operating module, configured to perform an operation on the data block determined by the selecting module, where the operation includes an erase operation or a write operation;
  • a recording module, configured to record, in the load balancing table, the latency of the data block during the operation, where the latency is the time taken to perform the operation on the data block; and a judging module, configured to judge whether the latency of the data block is greater than a warning value, where the warning value is less than a typical latency, and the typical latency is a preset latency at which an operation on a data block in the solid state disk fails.
  • The recording module is further configured to record the data block in a preset pre-bad-block table if the judging module judges that the latency of the data block is greater than the warning value.
  • With the method and apparatus for using a solid state disk provided by the embodiments of the present invention, the latency of each actually operated block is dynamically monitored by comparing the current latency of each block with the warning value, and a block is operated on only when its current latency is not greater than the warning value. The embodiments thus achieve load balancing based on the physical characteristics of the blocks in the SSD and reduce the number of bad blocks to a certain extent, thereby prolonging the life of the SSD.
  • FIG. 1 is a flowchart of Embodiment 1 of the method for using a solid state disk according to the present invention;
  • FIG. 2A is a schematic diagram of a programming operation of a solid state disk according to the present invention;
  • FIG. 2B is a schematic diagram of an erase operation of a solid state disk according to the present invention;
  • FIG. 3 is a schematic diagram of the erase/write characteristics of the solid state disk of the present invention;
  • FIG. 4A is a schematic diagram of the relationship between the erase latency tBERS and the number of erase cycles for an MLC-based solid state disk according to the present invention;
  • FIG. 4B is a schematic diagram of the relationship between the erase latency tBERS and the number of erase cycles for an SLC-based solid state disk according to the present invention;
  • FIG. 5 is a flowchart of determining the warning value in the method for using a solid state disk according to the present invention;
  • FIG. 6 is a schematic diagram of the relationship between the erase count and the erase latency in an embodiment of the present invention;
  • FIG. 7 is a flowchart of load balancing according to the warning value in the method for using a solid state disk according to the present invention;
  • FIG. 8 is a schematic diagram of the usage state of a solid state disk according to an embodiment of the present invention;
  • FIG. 9 is a flowchart of Embodiment 2 of the method for using a solid state disk according to the present invention;
  • FIG. 10 is a schematic structural diagram of Embodiment 1 of the apparatus for using a solid state disk according to the present invention;
  • FIG. 11 is a schematic structural diagram of Embodiment 2 of the apparatus for using a solid state disk according to the present invention;
  • FIG. 12 is a schematic structural diagram of Embodiment 3 of the apparatus for using a solid state disk according to the present invention.

Detailed Description
  • FIG. 1 is a flowchart of Embodiment 1 of the method for using a solid state disk according to the present invention.
  • The execution body of this embodiment is an SSD usage apparatus, which may be installed on the SSD or be the SSD itself, and which is suitable for scenarios in which the load of the blocks in the SSD needs to be balanced. Specifically, this embodiment includes the following steps:
  • The address, command and data input/output (I/O) ports of NAND Flash are multiplexed, so the process of reading and writing data is somewhat involved.
  • First, an operation command is sent to the data block (block) to be operated in the SSD, such as an erase command or a write command.
  • Then a query command can be sent to determine whether the erase or write operation succeeded, which finally completes the operation on the data block.
  • The time spent waiting between the two is called the latency.
  • In the following, the erase operation and the write operation are collectively referred to as erase/write operations. It has been verified that the latency of each block changes regularly with the number of erase cycles.
  • The latency is therefore a parameter that physically reflects the health status of a block.
  • The latency of each block is recorded in the load balancing table. For a specific data block, the SSD usage apparatus queries the load balancing table to obtain the latency of the data block.
  • The typical latency is a preset latency at which an operation on a data block in the solid state disk fails.
  • The latency of each block at which an erase failure occurred may be collected in advance from sample solid state disks to obtain the typical latency. For example, the minimum latency among all blocks that failed to erase may be taken as the typical latency, with the warning value set below it; or the average latency of all or some of the blocks that failed to erase may be used as the typical latency, again with the warning value set below it; or the typical latency may be computed and the warning value set according to other rules.
  • The SSD usage apparatus decides whether to continue the operation on the data block based on the relationship between the latency determined in step 102 and the warning value.
  • If the current latency of the block is greater than the warning value, the data block has a weak remaining lifetime, that is, the number of erase cycles the data block can still bear is relatively low, and continuing to operate on it may cause an erase failure. In this case, even if the erase count of the data block is much smaller than that of other blocks, erase and write operations on the data block are prohibited.
  • If the latency of the data block to be operated is not greater than the warning value, the data block has a strong remaining lifetime, that is, the number of erase cycles it can bear is relatively high, and erase and write operations may be performed on it.
  • Optionally, after prohibiting the operation, the SSD usage apparatus may further record the data block in the preset pre-bad-block table.
  • The pre-bad-block table is used to indicate data blocks on which erase/write operations need to be reduced.
  • In practice, the pre-bad-block table may share the same format as the bad-block table used for recording bad blocks, that is, each block has 1-2 bits indicating whether it can be used normally: for example, 00 indicates a good block, 01 indicates a bad block, and 10 indicates a pre-bad block.
  • When a block's current latency exceeds the warning value, the current latency is recorded, the block is no longer used, and the block is added to the pre-bad-block table.
  • A block added to the pre-bad-block table is temporarily not erased, or the number of erase operations on it is reduced, to avoid it quickly becoming a bad block; read operations on it are not affected.
  • A block recorded in the pre-bad-block table is not a true bad block, but a data block that would easily become bad if operations on it continued.
  • By recording blocks whose latency is greater than the warning value in the pre-bad-block table and reducing or suspending erase or write operations on them, the probability of bad blocks occurring is reduced to some extent.
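The 2-bit status encoding described above can be illustrated with a small packed table. The packing of four block states per byte is one plausible layout, not necessarily the patent's:

```python
# Illustrative 2-bit-per-block status table: 0b00 good, 0b01 bad, 0b10 pre-bad.
# Four block states are packed per byte; the layout is an assumption.

GOOD, BAD, PRE_BAD = 0b00, 0b01, 0b10

def set_status(table: bytearray, block: int, status: int) -> None:
    byte, slot = divmod(block, 4)
    shift = slot * 2
    table[byte] = (table[byte] & ~(0b11 << shift)) | (status << shift)

def get_status(table: bytearray, block: int) -> int:
    byte, slot = divmod(block, 4)
    return (table[byte] >> (slot * 2)) & 0b11

table = bytearray(256 // 4)        # status bits for 256 blocks, all GOOD
set_status(table, 42, PRE_BAD)     # latency exceeded the warning value
print(get_status(table, 42) == PRE_BAD, get_status(table, 43) == GOOD)
# True True
```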
  • Further, the SSD usage apparatus obtains the latency of the data block during the operation, and updates the latency of the data block in the load balancing table according to the obtained latency.
  • That is, the latency recorded for the data block in the load balancing table may be updated with the latency measured during the current operation.
  • Before the above embodiment is carried out, the relationship between the erase count and the latency of the solid state disk may be characterized statistically, so as to determine the warning value.
  • The latency includes the erase latency tBERS and the write latency tPROG. For details, refer to FIG. 2A and FIG. 2B.
  • FIG. 2A is a schematic diagram of a programming operation of a solid state disk of the present invention.
  • Because the address, command and data I/O channels of a NAND-flash-based SSD are multiplexed, the process of programming a page, i.e. the write operation, is as follows: first, a one-clock-cycle write command (e.g. "0x80") is sent; then a five-clock-cycle write address is sent, followed by the data. After the data transfer completes, a one-clock-cycle write confirmation command (e.g. "0x10") is sent to indicate that the data has been sent. After waiting for a while, the query state is entered to determine whether the data was written successfully; if not, the data needs to be rewritten. The period of waiting during this process is called the write latency (tPROG).
  • FIG. 2B is a schematic diagram of an erase operation of a solid state disk according to the present invention.
  • Because the address, command and data I/O channels of a NAND-flash-based SSD are multiplexed, the process of erasing a data block, i.e. the erase operation, is as follows: first, a one-clock-cycle erase command (for example, "0x60") is sent; then a three-clock-cycle erase address is sent, followed by a one-clock-cycle erase confirmation command (for example, "0xD0"). After waiting for a while, the query state is entered to determine whether the erase succeeded. The period of waiting during this process is called the erase latency (tBERS).
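The two command sequences above, and the latency measured while polling for completion, can be sketched as a toy simulation. Real controllers drive the multiplexed bus in hardware; here the "device" busy period is simulated, and only the opcodes (0x80/0x10 for program, 0x60/0xD0 for erase) come from the text:

```python
import time

# Toy model of issuing a NAND command sequence and timing the wait-for-ready
# poll. The device is simulated; busy times below are made-up examples.

def run_sequence(commands: list[int], busy_time_s: float) -> float:
    """Send a command/address/data sequence, then poll until ready.
    Returns the measured latency (tPROG or tBERS) in milliseconds."""
    for _ in commands:
        pass                        # in hardware: drive each cycle on the bus
    start = time.monotonic()
    deadline = start + busy_time_s  # simulated device busy period
    while time.monotonic() < deadline:
        pass                        # in hardware: poll the status register
    return (time.monotonic() - start) * 1000.0

# Write: 0x80, five address cycles, data, 0x10 -> waiting time = tPROG
tprog = run_sequence([0x80, 0, 0, 0, 0, 0, 0x10], busy_time_s=0.0013)
# Erase: 0x60, three address cycles, 0xD0 -> waiting time = tBERS
tbers = run_sequence([0x60, 0, 0, 0, 0xD0], busy_time_s=0.003)
print(round(tprog, 1), round(tbers, 1))   # roughly 1.3 and 3.0 (ms)
```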
  • Both the erase latency tBERS and the write latency tPROG change as the number of erase cycles increases. For details, see FIG. 3.
  • FIG. 3 is a schematic diagram of the erase/write characteristics of the solid state disk of the present invention. The abscissa is the number of erase cycles; the left ordinate is the change in erase latency, in us (microseconds); the right ordinate is the change in write latency, also in us. The dotted line shows how the erase latency changes with the number of erase cycles, and the solid line shows how the write latency changes with the number of erase cycles.
  • As the number of erase cycles increases, the threshold voltage of the cells in the solid state disk changes.
  • Eventually an erase failure or a write failure occurs. This corresponds to directly measurable data: the erase latency tBERS and the write latency tPROG change as the number of erase cycles increases.
  • FIG. 4A shows the relationship between the erase latency tBERS and the number of erase cycles for an MLC-based solid state disk according to the present invention,
  • and FIG. 4B shows the same relationship for an SLC-based solid state disk.
  • In both figures, the abscissa is the number of erase cycles and the left ordinate is the change in erase latency, in ms (milliseconds).
  • The erase latency tBERS changes with the number of erase cycles with a certain regularity:
  • tBERS increases regularly as the erase count grows, and once a block's tBERS is large enough, erasing the block will result in an erase failure.
  • The write latency tPROG decreases regularly as the erase count grows;
  • once it is small enough, writing to the block will result in a write failure.
  • Therefore, the relationship between the erase count and the latency of the solid state disk is characterized statistically, the erase latency tBERS of each actually operated block (or the write latency tPROG of each page) is dynamically monitored, a warning value is set, and several thresholds smaller than the warning value are set according to the warning value.
  • When the erase latency tBERS or the write latency tPROG of a block reaches a corresponding threshold, different processing is performed (such as reducing the frequency of use or temporarily not using the block), so that a block is sealed off before it fails, thereby reducing the consumption of reserved blocks.
  • FIG. 5 is a flowchart of determining the warning value in the method for using a solid state disk according to the present invention. As shown in FIG. 5, this embodiment includes the following steps:
  • FIG. 6 is a schematic diagram of the relationship between the erase count and the erase latency in an embodiment of the present invention, where the abscissa is the erase count and the ordinate is the erase latency (tBERS).
  • At low erase counts the erase latency is about 3 ms; when the erase count is 200-400, the erase latency is about 4 ms; when the erase count is 400-600, the erase latency is about 5 ms.
  • When the erase count exceeds 3400, the erase latency is about 10 ms.
  • At that point the erase latency is the typical latency.
  • That is, once the erase count is greater than 3400, the latency basically stays at 10 ms and no longer changes. 10 ms can therefore be chosen as the typical latency and 9 ms as the warning value. Reaching the warning value means that if the block continues to be used, it may soon become a bad block.
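The FIG. 6 statistics suggest a simple way to derive the warning value from sampled (erase count, tBERS) data. The helper below is a hypothetical sketch; only the 10 ms / 9 ms numbers come from the text, and taking "plateau latency minus a margin" is an assumed rule:

```python
# Hypothetical derivation of the warning value: take the latency plateau that
# failing blocks settle at as the "typical latency", then set the warning
# value one step below it. Sample data mirrors the FIG. 6 numbers in the text.

samples = [(100, 3.0), (300, 4.0), (500, 5.0), (3500, 10.0), (3600, 10.0)]

def derive_warning_value(samples, margin_ms: float = 1.0) -> tuple[float, float]:
    """Return (typical_latency_ms, warning_value_ms) from (count, tBERS) pairs."""
    typical = max(latency for _, latency in samples)   # plateau latency
    return typical, typical - margin_ms

typical, warning = derive_warning_value(samples)
print(typical, warning)   # 10.0 9.0
```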
  • a multi-level threshold may be set according to the warning value to perform reliability classification on each block, wherein each threshold is smaller than the warning value.
  • the block whose current latency in the load balancing table is less than or equal to the threshold may be operated according to the threshold, so that each block reaches the threshold at the same time.
  • the first level threshold may be set to 6 ms
  • the second level threshold is 7 ms
  • the third level threshold is 8 ms
  • the fourth level threshold is 9 ms.
  • Initially, any block may be erased at random. When a block's latency is about to reach the first-level threshold, operations on that block are avoided and the other blocks are operated on instead, so that every block reaches the first-level threshold. Once all blocks have reached the first-level threshold, any block may again be operated on at random provided its latency stays below the second-level threshold, and so on, guaranteeing load balancing across blocks threshold by threshold and improving the SSD's service life. If a bad block appears during this process, it is no longer operated on; for example, when all blocks have reached the third-level threshold and a random operation turns some block bad, the remaining blocks can still be operated on at random provided their latency stays below the fourth-level threshold.
  • Different operation policies may also be set per threshold level: when a block's latency is below the first-level threshold, the block may be erased at random; when its latency lies between the first-level and second-level thresholds, erase operations on it may be reduced; when its latency exceeds the second-level threshold, operations on it may be suspended.
  • Setting multi-level thresholds in this way grades the reliability of each block, distinguishing blocks of different quality and applying different handling once different thresholds are reached.
  • Multi-level thresholds can likewise be set to distinguish pages of different quality, with different handling at each threshold, enabling finer-grained management of each Block.
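As a hedged sketch of the multi-level policy just described, the helper below maps a block's current latency to an action. The function name and the returned action strings are illustrative assumptions; the 6 ms and 7 ms threshold levels and the 9 ms warning value follow the example figures in the text.

```python
def block_action(latency_ms, thresholds=(6.0, 7.0), warning=9.0):
    """Decide how a block should be treated given its current erase latency.

    Thresholds grade reliability below the warning value: erase freely below
    the first level, reduce erases between levels, suspend above the second
    level, and forbid operations entirely past the warning value.
    """
    if latency_ms > warning:
        return "forbid"     # past the warning value: stop erase/write entirely
    if latency_ms >= thresholds[1]:
        return "suspend"    # above the second-level threshold: pause operations
    if latency_ms >= thresholds[0]:
        return "reduce"     # between first and second level: fewer erases
    return "normal"         # below the first-level threshold: erase at random

# e.g. block_action(3.0) -> "normal", block_action(6.5) -> "reduce",
#      block_action(7.5) -> "suspend", block_action(9.5) -> "forbid"
```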
  • FIG. 7 is a flow chart of load balancing according to a warning value in a method for using a solid state hard disk according to the present invention. As shown in FIG. 7, the embodiment includes the following steps:
  • The current latency of each block is monitored and written back to the load-balancing table. For example, if a block's initial latency before any erase operation is 3 ms and its latency has changed to 4 ms by the 100th erase operation, then after the 100th erase the latency recorded for it in the load-balancing table is updated to 4 ms.
  • Load balancing between blocks is implemented by consulting the load-balancing table: if the current latency is less than or equal to the warning value, return to step 302; otherwise, perform step 304.
  • A block added to the pre-bad block table is temporarily not erased, or its erase operations are reduced, to keep it from quickly turning bad; read operations on it are unaffected.
  • At design time the SSD reserves some redundant blocks as reserved blocks.
  • For example, a 100 GB SSD may have a raw capacity of 128 GB, the extra 28 GB serving as reserved blocks.
  • The SSD determines whether the reserved blocks are exhausted. If they are not exhausted, return to step 304; otherwise, perform step 306.
  • In step 307 it is determined whether the currently operated block in the pre-bad block table has suffered an erase failure. If not, the process returns to step 306; otherwise, if an erase failure has occurred, step 308 is performed.
  • The method for using the solid state hard disk compares the latency of the data block to be operated on with the warning value, dynamically monitors that latency, and forbids operating on any Block whose latency exceeds the warning value, thereby achieving true load balancing in terms of physical characteristics, reducing bad block generation to some extent, making use of reserved blocks as far as possible, and extending the life of the SSD.
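The FIG. 7 flow can be illustrated with a minimal sketch. All names here are hypothetical, and the in-memory dictionaries merely stand in for the load-balancing table, pre-bad block table, and bad block table described above; a real controller would measure latency in firmware.

```python
WARNING_MS = 9.0  # warning value from the text's example

load_balance = {"block1": 3.0, "block2": 3.0, "block3": 8.5}  # current tBERS per block
pre_bad = {}   # blocks shielded before they actually fail (reads still allowed)
bad = set()    # genuinely failed blocks

def record_erase(block, measured_latency_ms, failed=False):
    """Update the tables after one erase, following steps 302-308 of FIG. 7."""
    if failed:
        bad.add(block)                      # step 308: a real bad block
        pre_bad.pop(block, None)
        load_balance.pop(block, None)
    elif measured_latency_ms > WARNING_MS:  # steps 303-304: past the warning value
        pre_bad[block] = measured_latency_ms
        load_balance.pop(block, None)       # no longer a candidate for erases
    else:
        load_balance[block] = measured_latency_ms  # step 302: update the table

record_erase("block3", 9.2)
# block3 is now in pre_bad and out of load_balance; reads remain allowed
```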
  • FIG. 8 is a schematic diagram of a state of use of a solid state drive according to an embodiment of the present invention.
  • The abscissa indicates the maximum number of erasures each block can actually sustain, and the ordinate indicates the Blocks of the SSD.
  • In the prior art, every block is erased uniformly the same number of times. Under that scheme, by the time each block's erase count reaches 3K, block3, block15, and block26 may already have gone bad.
  • The method provided in this embodiment does not balance the load between blocks according to their erase counts; instead, it balances the load according to the relationship between each block's current latency and the warning value, improving the SSD's service life.
  • The method of the embodiment of the present invention thus achieves true load balancing from the physical characteristics of each Block and reduces the number of bad blocks to some extent, thereby prolonging the life of the SSD.
  • FIG. 9 is a flowchart of Embodiment 2 of a method for using a solid state hard disk according to the present invention.
  • The solid state hard disk using device determines the data block to be operated on according to the load-balancing table before operating on the blocks. Specifically, the embodiment includes the following steps:
  • The load-balancing table stores blocks whose latency is not greater than the warning value.
  • The data block to be operated on is determined from the load-balancing table each time an erase or write operation is required.
  • Preferably, a block with a small latency in the load-balancing table may be selected as the data block to be operated on.
  • the data block is erased or written.
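The selection-then-operate steps above can be sketched as a single helper. The names are illustrative, and the table is assumed to hold only blocks whose latency does not exceed the warning value, as the text states.

```python
def pick_block(load_balance_table):
    """Pick the candidate with the smallest recorded latency.

    load_balance_table: {block_id: current_latency_ms}; every entry is
    assumed to already satisfy latency <= warning value.
    """
    return min(load_balance_table, key=load_balance_table.get)

table = {"block1": 3.0, "block2": 4.0, "block3": 3.5}
chosen = pick_block(table)  # "block1": the lowest-latency block is preferred
```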
  • So that the load-balancing table stores the current latency of each block, for each specific data block, the latency of the data block during each operation is obtained after the operation completes, and the load-balancing table is updated with the obtained latency.
  • The SSD using device then determines whether the latency of the data block is greater than the warning value. If it is greater, the data block is recorded in the pre-bad block table and removed from the load-balancing table; otherwise, if the latency of the data block is not greater than the warning value, the latency of the data block after the operation is recorded in the load-balancing table.
  • A pre-bad block table may be set up in the same format as the bad block table used to record bad blocks, that is, 1-2 bits per block indicate whether the block can be used normally or as a preferred block: 00 indicates a good block, 01 a bad block, and 10 a pre-bad block.
  • According to the set warning value, the current latency is recorded while a block is being erased. When the current latency reaches the warning value, the block is taken out of use and added to the pre-bad block table. A block added to the pre-bad block table is temporarily not erased, or its erase operations are reduced, to keep it from quickly turning bad; read operations on it are unaffected.
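The 1-2 bit per-block status just described can be sketched as follows. Packing all statuses into one integer is an assumption made for illustration; only the 00/01/10 encoding comes from the text.

```python
# Two-bit per-block status codes from the text: 00 good, 01 bad, 10 pre-bad
GOOD, BAD, PRE_BAD = 0b00, 0b01, 0b10

def set_status(table, block_index, status):
    """Store a 2-bit status for block_index inside a single integer 'table'."""
    shift = 2 * block_index
    return (table & ~(0b11 << shift)) | (status << shift)

def get_status(table, block_index):
    """Read back the 2-bit status of block_index."""
    return (table >> (2 * block_index)) & 0b11

table = 0                                # all blocks start as good (00)
table = set_status(table, 5, PRE_BAD)    # block 5 reached the warning value
# block 5 is now shielded from erases but still readable; block 4 stays good
```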
  • A multi-level set of thresholds may also be derived from the warning value to grade the reliability of each block. Specifically, by characterizing how the solid state disk's latency changes with erase count and dynamically monitoring the erase latency tBERS of the actually operated blocks or the write latency tPROG of the pages, a warning value can be set and, according to it, several thresholds smaller than the warning value.
  • The method for using the solid state hard disk provided by the embodiment of the present invention first selects a data block whose latency is below the warning value as the data block to be operated on and performs the operation, dynamically monitors the latency of the data blocks to be operated on, and forbids operating on any Block whose latency exceeds the warning value, thereby achieving true load balancing from the physical characteristics, reducing bad block generation to a certain extent, making use of reserved blocks as far as possible, and extending the life of the SSD.
  • FIG. 10 is a schematic structural diagram of Embodiment 1 of a device for using a solid state disk according to the present invention.
  • the device for using a solid state disk provided in this embodiment is an embodiment of the device corresponding to the embodiment of FIG. 1 of the present invention, and the specific implementation process is not described herein.
  • The SSD using device 100 of this embodiment specifically includes: a determining module 11, configured to determine, when a data block in the solid state hard disk needs to be operated on, the latency of the data block according to the load-balancing table of the solid state hard disk, where the latency is the duration of the operation on the data block and the operation includes an erase operation or a write operation;
  • a judging module 12, configured to judge whether the latency of the data block determined by the determining module 11 is greater than a warning value, where the warning value is less than a typical latency and the typical latency is a preset latency at which an operation on a data block in the solid state hard disk fails; and
  • a processing module 13, configured to forbid performing the operation on the data block if the judging module 12 determines that the latency of the data block is greater than the warning value.
  • The solid state hard disk using device compares the latency of the data block to be operated on with the warning value, dynamically monitors that latency, and forbids operating on any Block whose latency exceeds the warning value, thereby achieving true load balancing in terms of physical characteristics, reducing bad block generation to some extent, making use of reserved blocks as far as possible, and extending the life of the SSD.
  • The processing module 13 is further configured to perform the operation on the data block if the judging module 12 determines that the latency of the data block is not greater than the warning value.
  • FIG. 11 is a schematic structural diagram of Embodiment 2 of a device for using a solid state hard disk according to the present invention. As shown in FIG. 11, the solid state hard disk using device provided in this embodiment is further based on the device shown in FIG. 10, and further includes:
  • an obtaining module 14, configured to obtain the latency of the data block during the operation; and
  • an update module 15, configured to update the latency of the data block in the load-balancing table according to the latency obtained by the obtaining module 14.
  • the SSD device also includes:
  • a recording module 16, configured to record the data block in a preset pre-bad block table after the processing module forbids performing the operation on the data block, where the pre-bad block table is used to indicate data blocks whose operations need to be reduced.
  • the SSD device 200 of the embodiment includes: a selection module 21, configured to determine a data block according to a load balancing table of the SSD;
  • an operation module 22, configured to operate on the data block determined by the selection module, where the operation includes an erase operation or a write operation;
  • a recording module 23, configured to record, in the load-balancing table, the latency of the data block during the operation, where the latency is the duration of the operation on the data block; and
  • a judging module 24, configured to judge whether the latency of the data block is greater than a warning value, where the warning value is less than a typical latency and the typical latency is a preset latency at which an operation on a data block in the solid state hard disk fails.
  • The recording module 23 is further configured to record the data block in the preset pre-bad block table if the judging module 24 determines that the latency of the data block is greater than the warning value. It will be understood by those skilled in the art that all or part of the steps of the above method embodiments may be completed by hardware driven by program instructions.
  • the aforementioned program can be stored in a computer readable storage medium.
  • The program, when executed, performs the steps of the foregoing method embodiments; and the foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Debugging And Monitoring (AREA)
  • Read Only Memory (AREA)

Abstract

Provided are a method and an apparatus for using a solid state disk. The method includes: when a data block in the solid state disk needs to be operated on, determining the latency of the data block to be operated on according to a load-balancing table of the solid state disk; judging whether the latency of the data block is greater than a warning value, where the warning value is less than a typical latency and the typical latency is a preset latency at which an operation on a data block in the solid state disk fails; and if the latency of the data block is greater than the warning value, forbidding performing the operation on the data block. In the method, by comparing each Block's current latency with the warning value, the latency of the actually operated Blocks is monitored dynamically, and a Block is operated on only while its current latency is less than or equal to the warning value, thereby achieving true load balancing in terms of physical characteristics, reducing bad block generation to some extent, making use of reserved blocks as far as possible, and extending the life of the SSD.

Description

Method and apparatus for using a solid state disk

TECHNICAL FIELD
The embodiments of the present invention relate to the field of storage technologies, and in particular, to a method and an apparatus for using a solid state disk.

BACKGROUND
Most solid state disks (Solid State Disk, SSD) are implemented with a non-volatile random-access storage medium, NAND flash (NAND Flash). NAND Flash can be divided into single-level cells (Single Level Cell, SLC) and multi-level cells (Multi Level Cell, MLC). NAND Flash usually consists of an internal memory and a storage matrix, where the storage matrix includes a number of blocks (Block), each Block includes a number of pages (Page), and each Page further includes a number of bytes (Byte). Most NAND Flash on the market today uses MLC chips, and the main operations on NAND Flash are read, write, and erase. NAND Flash is read and written in units of pages (Page) and erased in units of blocks (Block); a page erase operation must be performed before a write operation, and the erase/write process damages the insulating layer of the floating-gate transistors inside the NAND Flash. When an erase failure or the like occurs, the NAND Flash actively reports it to the SSD, and the SSD marks the Block whose operation failed as a bad block (Bad Block). As the number of NAND Flash program/erase cycles (also called PE Cycles by those skilled in the art) increases, once the number of bad blocks reaches a certain level, for example 3%, the NAND Flash is considered to have reached the end of its service life.

To prevent frequent erasing and writing of certain hot Blocks from producing bad blocks and shortening the SSD's life, the prior art introduces load-balancing technology, using a balancing table to record the erase count of every Block. Each time data is written, Blocks with lower erase counts are preferentially selected, so that the erase counts of all Blocks across the SSD stay at the same level, that is, the erase counts are kept as even as possible. In addition, the SSD keeps some redundant Blocks as reserved blocks; when a bad block occurs, a reserved block replaces the failed Block, preventing the whole SSD from failing prematurely and thereby extending its service life.

Suppose the SSD has 32000 Blocks. When data is written, the 32000 Blocks are selected evenly so that their erase counts differ little, for example completing the write across 32000 × 3K cycles. However, different Blocks have different lifetimes; if some Blocks go bad before reaching 3K cycles during the write process, redundant Blocks must replace the bad ones, and when the redundant Blocks are used up, the life of the SSD is exhausted as well.

SUMMARY
The embodiments of the present invention provide a method and an apparatus for using a solid state disk, which reduce bad block generation through load balancing and thus reduce the use of reserved blocks, ultimately extending the service life of the solid state disk.
According to a first aspect, an embodiment of the present invention provides a method for using a solid state disk, including: when a data block in the solid state disk needs to be operated on, determining, according to a load-balancing table of the solid state disk, the latency of the data block to be operated on, where the latency is the time for which the operation lasts and the operation includes an erase operation or a write operation;

judging whether the latency of the data block is greater than a warning value, where the warning value is less than a typical latency and the typical latency is a preset latency at which an operation on a data block in the solid state disk fails; and if the latency of the data block is greater than the warning value, forbidding performing the operation on the data block.

In a first possible implementation of the first aspect, the method further includes:

if the latency of the data block is not greater than the warning value, performing the operation on the data block. With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, after the performing the operation on the data block, the method further includes:

obtaining the latency of the data block during the operation; and

updating the latency of the data block in the load-balancing table according to the obtained latency.

With reference to the first aspect or the first or second possible implementation of the first aspect, in a third possible implementation of the first aspect, after the forbidding performing the operation on the data block, the method further includes:

recording the data block in a preset pre-bad block table, where the pre-bad block table is used to indicate data blocks whose operations need to be reduced.

According to a second aspect, an embodiment of the present invention provides a method for using a solid state disk, including: determining a data block to be operated on according to a load-balancing table of the solid state disk;

operating on the data block, where the operation includes an erase operation or a write operation; and recording, in the load-balancing table, the latency of the data block during the operation, where the latency is the time for which the operation on the data block lasts;

judging whether the latency of the data block is greater than a warning value, where the warning value is less than a typical latency and the typical latency is a preset latency at which an operation on a data block in the solid state disk fails; and

if the latency of the data block is greater than the warning value, recording the data block in a preset pre-bad block table.

According to a third aspect, an embodiment of the present invention provides an apparatus for using a solid state disk, including: a determining module, configured to determine, when a data block in the solid state disk needs to be operated on, the latency of the data block according to a load-balancing table of the solid state disk, where the latency is the time for which the operation on the data block lasts and the operation includes an erase operation or a write operation;

a judging module, configured to judge whether the latency of the data block determined by the determining module is greater than a warning value, where the warning value is less than a typical latency and the typical latency is a preset latency at which an operation on a data block in the solid state disk fails; and

a processing module, configured to forbid performing the operation on the data block if the judging module determines that the latency of the data block is greater than the warning value.

In a first possible implementation of the third aspect, the processing module is further configured to perform the operation on the data block if the judging module determines that the latency of the data block is not greater than the warning value.

With reference to the first possible implementation of the third aspect, in a second possible implementation of the third aspect, the apparatus further includes:

an obtaining module, configured to obtain the latency of the data block during the operation; and

an update module, configured to update the latency of the data block in the load-balancing table according to the latency obtained by the obtaining module.

With reference to the third aspect or the first or second possible implementation of the third aspect, in a third possible implementation of the third aspect, the apparatus further includes:

a recording module, configured to record the data block in a preset pre-bad block table after the processing module forbids performing the operation on the data block, where the pre-bad block table is used to indicate data blocks whose operations need to be reduced.

According to a fourth aspect, an embodiment of the present invention provides an apparatus for using a solid state disk, including: a selection module, configured to determine a data block according to a load-balancing table of the solid state disk; an operation module, configured to operate on the data block determined by the selection module, where the operation includes an erase operation or a write operation;

a recording module, configured to record, in the load-balancing table, the latency of the data block during the operation, where the latency is the time for which the operation on the data block lasts; and a judging module, configured to judge whether the latency of the data block is greater than a warning value, where the warning value is less than a typical latency and the typical latency is a preset latency at which an operation on a data block in the solid state disk fails;

where the recording module is further configured to record the data block in a preset pre-bad block table if the judging module determines that the latency of the data block is greater than the warning value.
In the method and apparatus for using a solid state disk provided by the embodiments of the present invention, by comparing each Block's current latency with the warning value, the latency of the actually operated Blocks is monitored dynamically, and a Block is operated on only while its current latency is less than or equal to the warning value. The method of the embodiments of the present invention achieves load balancing based on the physical characteristics of the Blocks in the SSD, reduces the number of bad blocks to some extent, and thereby extends the life of the SSD.

BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Clearly, the drawings described below show some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of Embodiment 1 of a method for using a solid state disk according to the present invention;

FIG. 2A is a schematic diagram of a program operation of a solid state disk according to the present invention;

FIG. 2B is a schematic diagram of an erase operation of a solid state disk according to the present invention;

FIG. 3 is a schematic diagram of erase/write characteristics of a solid state disk according to the present invention;

FIG. 4A is a schematic curve of erase latency tBERS versus erase count for an MLC-based solid state disk according to the present invention;

FIG. 4B is a schematic curve of erase latency tBERS versus erase count for an SLC-based solid state disk according to the present invention;

FIG. 5 is a flowchart of determining a warning value in a method for using a solid state disk according to the present invention;

FIG. 6 is a schematic diagram of the relationship between erase count and erase latency in an embodiment of the present invention; FIG. 7 is a flowchart of load balancing according to a warning value in a method for extending solid state disk service life according to the present invention;

FIG. 8 is a diagram of a usage state of a solid state disk according to an embodiment of the present invention;

FIG. 9 is a flowchart of Embodiment 2 of a method for using a solid state disk according to the present invention;

FIG. 10 is a schematic structural diagram of Embodiment 1 of an apparatus for using a solid state disk according to the present invention;

FIG. 11 is a schematic structural diagram of Embodiment 2 of an apparatus for using a solid state disk according to the present invention;

FIG. 12 is a schematic structural diagram of Embodiment 3 of an apparatus for using a solid state disk according to the present invention.

DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Clearly, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
FIG. 1 is a flowchart of Embodiment 1 of a method for using a solid state disk according to the present invention. The execution body of this embodiment is an apparatus for using a solid state disk, which may be disposed on an SSD or be the SSD itself, and which is applicable to scenarios where the load on the Blocks of an SSD needs to be balanced. Specifically, this embodiment includes the following steps:

101. When a data block in the solid state disk needs to be operated on, determine the latency of the data block to be operated on according to a load-balancing table of the solid state disk, where the latency is the time for which the operation on the data block lasts and the operation includes an erase operation or a write operation.

The address, command, and data input/output (Input/Output, I/O) ports of NAND Flash are multiplexed, so reading and writing data is a fairly complex process. Generally, after each operation command, such as an erase command or a write command, is sent to the data block (Block) to be operated on in the SSD, a certain period of time must elapse before the operation completes. For an erase operation, for example, a certain time must pass while the data block is erased; for a write operation, a certain time must pass while data is written to the data block. Only then can a status-query command be sent to judge whether the erase or write operation succeeded, finally completing the operation on the data block. The time that must be waited is called the latency. Erase and write operations are referred to collectively as erase/write operations. It has been verified that the latency of each Block changes regularly with the erase count; latency is a parameter that physically and truly reflects the health of a Block. In the embodiments of the present invention, the load-balancing table records the latency of every Block, and for a specific data block, the apparatus queries the load-balancing table to obtain its latency.
102. Judge whether the latency of the data block is greater than a warning value, where the warning value is less than a typical latency and the typical latency is a preset latency at which an operation on a data block in the solid state disk fails.

For each data block (Block), once the latency reaches a certain level, continuing to erase or write the Block may fail. In this embodiment, the latencies at which the Blocks of sample solid state disks fail can be collected in advance to obtain the typical latency. For example, the minimum latency at which any Block fails may be taken as the typical latency, with the warning value set below it; or the average of the latencies at which all or some Blocks fail may be taken as the typical latency, with the warning value set below it; or the typical latency may be computed by other rules and the warning value set accordingly.

103. If the latency of the data block is greater than the warning value, forbid performing the operation on the data block.

Based on the relationship between the latency and the warning value judged in step 102, the apparatus decides whether to continue operating on the data block.

Specifically, if a Block's current latency is greater than the warning value, the data block has a poor remaining lifetime, that is, the number of erase/write cycles it can still sustain is low, and continuing to operate on it would lead to erase or write failures. In this case, even if its erase count is far lower than that of other Blocks, erase and write operations on it are forbidden.

Optionally, if the latency of the data block to be operated on is not greater than the warning value, the data block still has a strong lifetime, that is, it can sustain a relatively high number of erase/write cycles, and erase and write operations may be performed on it.

Optionally, in this embodiment, after forbidding operations on a data block whose latency is greater than the warning value, the apparatus may also record that data block in a preset pre-bad block table, which indicates data blocks whose erase/write operations need to be reduced. For example, in an embodiment of the present invention, a pre-bad block table may be set up in the same format as the bad block table used to record bad blocks, that is, 1-2 bits per block indicate whether the block can be used normally or as a preferred block: 00 indicates a good block, 01 a bad block, and 10 a pre-bad block. In addition, according to the set warning value, the current erase latency is recorded while a block is erased; when it reaches the warning value, the block is taken out of use and added to the pre-bad block table. A Block in the pre-bad block table is temporarily not erased, or its erase operations are reduced, to keep it from quickly turning bad; read operations on it are unaffected.

It should be noted that the Blocks recorded in the pre-bad block table are not truly bad data blocks, but data blocks likely to turn bad if operations on them continue. Recording Blocks whose latency exceeds the warning value in the pre-bad block table, and reducing or suspending erase or write operations on them, reduces the probability of bad blocks to some extent.
Optionally, in the above embodiment, after operating on a data block, the apparatus obtains the latency of the data block during the operation and updates the latency of the data block in the load-balancing table accordingly.

Specifically, so that the load-balancing table records each Block's current latency, for every specific data block, after each operation completes, the latency of the data block during the operation is obtained and used to update the load-balancing table. For example, if the latency of the data block was 2 ms before the operation and 3 ms during the operation, the latency recorded for it in the load-balancing table can be updated to 3 ms. Put differently, after each operation on a data block, its latency in the load-balancing table may be updated with the latency observed during that operation.
Optionally, in Embodiment 1 above, before judging whether the latency of the data block to be operated on is greater than the warning value, the relationship between the SSD's erase count and latency can be characterized statistically to determine the warning value. In general, the latency includes the erase latency tBERS and the write latency tPROG. For details, refer to FIG. 2A and FIG. 2B.

FIG. 2A is a schematic diagram of a program operation of a solid state disk according to the present invention. In a NAND-flash-based solid state disk, the I/O channels for address, command, and data are multiplexed. The process of programming a page, that is, a write operation, is as follows: first a one-cycle write command (for example, "0x80") is sent, then five cycles of write address, then the data. After the data has been sent, a one-cycle write command (for example, "0x10") is sent to indicate that the data transfer is complete; after waiting for a period of time, a status query determines whether the write succeeded. If not, the data must be rewritten. The waiting period in this process is called the write latency (tPROG).

FIG. 2B is a schematic diagram of an erase operation of a solid state disk according to the present invention. Likewise, the I/O channels for address, command, and data of a NAND-flash-based solid state disk are multiplexed. The process of erasing a data block, that is, an erase operation, is as follows: first a one-cycle erase command (for example, "0x60") is sent, then three cycles of erase address, then a one-cycle erase command (for example, "0xD0"); after waiting for a period of time, a status query determines whether the erase succeeded. If not, the erase has failed and the Block being operated on must be marked as a bad block. The waiting period in this process is called the erase latency (tBERS).

Generally, the erase latency tBERS and the write latency tPROG change as the erase count increases. For details, see FIG. 3.

FIG. 3 is a schematic diagram of erase/write characteristics of a solid state disk according to the present invention. Referring to FIG. 3, the abscissa is the erase count; the left ordinate is the change in erase latency in us (microseconds) and the right ordinate the change in write latency in us. The dashed curve shows how the erase latency changes with the erase count, and the solid curve shows how the write latency changes. As the erase count increases, the threshold voltage of the solid state disk shifts; as FIG. 3 shows, beyond a certain point erase or write failures occur. This maps onto directly measurable data: the erase latency tBERS and write latency tPROG change as the erase count grows.

It has been verified that the relationship between the SSD's erase count and its erase latency follows a certain pattern. The present invention is described in detail below taking the relationship between erase latency and erase count as an example; see FIG. 4A and FIG. 4B.

FIG. 4A is a schematic curve of erase latency tBERS versus erase count for an MLC-based solid state disk according to the present invention, and FIG. 4B is the corresponding curve for an SLC-based solid state disk.

Referring to FIG. 4A and FIG. 4B, the abscissa is the erase count and the left ordinate the change in erase latency in ms (milliseconds). The erase latency tBERS changes with the read/erase/write count, and with a certain regularity: tBERS grows regularly as the count increases, and once a block's tBERS grows large enough, erasing that block fails.

Similarly, taking the write latency as an example: the write latency tPROG decreases regularly as the read/erase/write count increases, and once a block's tPROG has dropped far enough, writing to that block fails.

In summary, in determining the warning value, the relationship between the SSD's erase count and latency is characterized statistically; the erase latency tBERS of the actually operated blocks or the write latency tPROG of the pages is monitored dynamically; a warning value is set, and according to it several thresholds smaller than the warning value. When tBERS or tPROG reaches a given threshold, different handling is applied (such as reducing the frequency of use or temporarily suspending use), temporarily shielding a block before it fails and thus reducing reserved-block consumption. In other words, blocks of different quality are identified by dynamic prediction, and use of poorer-quality blocks is minimized, extending the service life of the solid state disk overall. Below, the present invention is elaborated taking determination of the warning value from the relationship between erase count and erase latency as an example.
FIG. 5 is a flowchart of determining a warning value in a method for using a solid state disk according to the present invention. As shown in FIG. 5, this embodiment includes the following steps:

201. Select samples and prepare an erase/write test.

A certain quantity of NAND Flash of a given model and new batch is selected as samples, and an erase/write test is prepared to measure the latency at which each Block fails.

202. Erase/write the Blocks and record the erase latency.

After each erase/write cycle, the Block's erase latency is recorded.

203. Judge whether an erase failure has occurred.

204. Confirm that all samples have been tested.

205. Judge whether the number of NAND Flash chips tested has reached the planned quantity.

206. Determine how the erase latency changes as the erase count increases.

Specifically, see FIG. 6, a schematic diagram of the relationship between erase count and erase latency in an embodiment of the present invention, where the abscissa is the erase count and the ordinate is the erase latency (tBERS). As shown in FIG. 6, when the erase count is 1-200, the erase latency is about 3 ms; at 200-400, about 4 ms; at 400-600, about 5 ms; and so on; above 3400, about 10 ms.

207. Determine the typical latency.

If the change in latency with erase count falls below a preset value, that erase latency is determined to be the typical latency. For example, referring to FIG. 6, when the erase count exceeds 3400, the latency stays at essentially 10 ms and no longer changes; 10 ms can therefore be chosen as the typical latency and 9 ms set as the warning value. Reaching the warning value means that if the Block continues to be used, it may soon turn into a bad block.

Further, in Embodiment 1 above, a multi-level set of thresholds, each smaller than the warning value, may also be set according to the warning value to grade the reliability of each Block. For each threshold, Blocks whose current latency in the load-balancing table is less than or equal to that threshold may then be operated on, so that all Blocks reach the threshold together. Specifically, referring again to FIG. 3, the first-level threshold may be set to 6 ms, the second-level to 7 ms, the third-level to 8 ms, and the fourth-level to 9 ms as required. Initially, any Block may be erased at random; when a Block's latency is about to reach the first-level threshold, operations on it are avoided and other Blocks are operated on, so that every Block reaches the first-level threshold. Once all Blocks have reached the first-level threshold, they may again be operated on at random provided their latency stays below the second-level threshold, and so on, guaranteeing load balancing across Blocks threshold by threshold and improving the SSD's service life. If a bad block appears in this process, it is no longer operated on; for example, when all Blocks have reached the third-level threshold and a random operation turns some Block bad, the remaining Blocks can still be operated on at random while their latency stays below the fourth-level threshold. In addition, a different operation policy may be set at each threshold level: when a Block's latency is below the first-level threshold, it may be erased at random; between the first and second levels, erase operations on it may be reduced; above the second-level threshold, operations on it may be suspended.

It should be noted that the above sets multi-level thresholds to grade the reliability of each Block, distinguishing Blocks of different quality and applying different handling once different thresholds are reached. In practical implementations, multi-level thresholds may equally be set to distinguish Pages of different quality, with different handling at each threshold, enabling finer-grained management of the Blocks.
FIG. 7 is a flowchart of load balancing according to a warning value in a method for extending solid state disk service life according to the present invention. As shown in FIG. 7, this embodiment includes the following steps:

301. Generate a load-balancing table.

In this step, the initial latency of every Block is recorded.

302. Monitor and update each Block's entry in the load-balancing table during operations.

During erase/write operations on the Blocks, each Block's current latency is monitored and written back to the load-balancing table. For example, if a Block's initial latency before erasing is 3 ms and it has changed to 4 ms by the 100th erase, then after the 100th erase the latency in the load-balancing table is updated to 4 ms.

303. Judge whether the current latency is less than or equal to the warning value.

For a specific data block, after each erase/write operation, its latency is compared with the warning value, so that load balancing between Blocks is implemented by consulting the load-balancing table: if the current latency is less than or equal to the warning value, return to step 302; otherwise, perform step 304.

304. Add the data block to the pre-bad block table and temporarily stop erasing/writing it; when the latency of all or most Blocks exceeds the warning value, replace the Blocks in the pre-bad block table with reserved blocks.

It should be noted that a Block added to the pre-bad block table is temporarily not erased, or its erase operations are reduced, to keep it from quickly turning bad; read operations on it are unaffected.

305. Judge whether the reserved blocks are exhausted.

An SSD reserves some redundant Blocks at design time; for example, a 100 GB SSD may have a raw capacity of 128 GB, the extra 28 GB serving as reserved blocks. The SSD judges whether the reserved blocks are exhausted; if not, return to step 304; otherwise, perform step 306.

306. Continue operating on the Blocks.

If the reserved blocks are exhausted, operations continue on the Blocks in the pre-bad block table.

307. Judge whether a Block has suffered an erase/write failure.

While operations continue, it is judged whether the currently operated Block in the pre-bad block table has suffered an erase/write failure; if not, return to step 306; otherwise, perform step 308.

308. Add the Block to the bad block table as a genuinely bad Block.

In the method for using a solid state disk provided by this embodiment of the present invention, by comparing the latency of the data block to be operated on with the warning value, the latency of the blocks to be operated on is monitored dynamically and operations on Blocks whose latency exceeds the warning value are forbidden, achieving true load balancing in terms of physical characteristics, reducing bad block generation to some extent, making use of reserved blocks as far as possible, and extending the SSD's life. For example, FIG. 8 is a diagram of a usage state of a solid state disk according to an embodiment of the present invention, in which the abscissa indicates the maximum erase count each Block can actually sustain and the ordinate indicates the Blocks of the SSD. Block1 can actually sustain at most about 3.2K erase cycles, Block2 about 4.8K, Block3 about 1.5K, and so on. In the prior art, to balance load between Blocks, every Block is erased uniformly the same number of times; under that scheme, by the time every Block's erase count reaches 3K, block3, block15, and block26 may have gone bad. The method provided by this embodiment, by contrast, does not balance load between Blocks according to their erase counts, but according to the relationship between each Block's current latency and the warning value, improving the SSD's service life. For example, assume the warning value is 9 ms. After Block3 has undergone 1.2K erase cycles, further erasing might push its latency above 9 ms, so erasing of block3 is suspended; whereas erasing Block4 beyond 1.2K cycles does not push its latency above 9 ms, so erasing of block4 may continue. As described above, the method of the embodiments of the present invention achieves true load balancing from the physical characteristics of the Blocks, reduces the number of bad blocks to some extent, and thereby extends the SSD's life.
FIG. 9 is a flowchart of Embodiment 2 of a method for using a solid state disk according to the present invention. Compared with the embodiment of FIG. 1, in this embodiment the apparatus first determines the data block to be operated on according to the load-balancing table before operating on the Blocks. Specifically, this embodiment includes the following steps:

401. Determine the data block to be operated on according to the load-balancing table of the solid state disk.

In this embodiment, the load-balancing table stores Blocks whose latency is not greater than the warning value. Before each erase or write operation, the data block to be operated on is determined from the load-balancing table; preferably, a Block with a small latency in the table is chosen as the data block to be operated on.

402. Operate on the data block, where the operation includes an erase operation or a write operation.

After the data block is determined, an erase or write operation is performed on it.

403. Record, in the load-balancing table, the latency of the data block during the operation, where the latency is the time for which the operation on the data block lasts.

So that the load-balancing table stores each Block's current latency, for every specific data block, after each operation completes, the latency of the data block during the operation is obtained and used to update the load-balancing table.

404. Judge whether the latency of the data block is greater than a warning value, where the warning value is less than a typical latency and the typical latency is a preset latency at which an operation on a data block in the solid state disk fails.

405. If the latency of the data block to be operated on is greater than the warning value, record it in a preset pre-bad block table.

This ensures that every Block in the load-balancing table is operable, that is, that no Block in the table has a latency greater than the warning value. In this step, for a specific data block, after the operation completes the apparatus judges whether its latency exceeds the warning value. If so, the data block is recorded in the pre-bad block table and removed from the load-balancing table; otherwise, its post-operation latency is recorded in the load-balancing table. For example, in an embodiment of the present invention a pre-bad block table may be set up in the same format as the bad block table used to record bad blocks, that is, 1-2 bits per block indicating whether the block can be used normally or as a preferred block: 00 indicates a good block, 01 a bad block, and 10 a pre-bad block. In addition, according to the set warning value, the current erase latency is recorded during erasing; when it reaches the warning value, the block is taken out of use and added to the pre-bad block table. A Block in the pre-bad block table is temporarily not erased, or its erase operations are reduced, to keep it from quickly turning bad; read operations on it are unaffected.

It should be noted that in this embodiment multi-level thresholds may also be set according to the warning value to grade the reliability of each Block. Specifically, by characterizing how the SSD's latency changes with erase count and dynamically monitoring the erase latency tBERS of the operated blocks or the write latency tPROG of the pages, a warning value can be set and, from it, several thresholds smaller than the warning value. When tBERS or tPROG reaches a given threshold, different handling is applied (such as reducing the frequency of use or temporarily suspending use), temporarily shielding a block before it fails and thus reducing reserved-block consumption; that is, blocks of different quality are identified by dynamic prediction and use of poorer-quality blocks is minimized, extending the SSD's service life overall.

In the method for using a solid state disk provided by this embodiment of the present invention, before operations are performed, data blocks whose latency is below the warning value are selected as the blocks to be operated on; the latency of the blocks to be operated on is monitored dynamically and operations on Blocks whose latency exceeds the warning value are forbidden, achieving true load balancing in terms of physical characteristics, reducing bad block generation to some extent, making use of reserved blocks as far as possible, and extending the SSD's life.
FIG. 10 is a schematic structural diagram of Embodiment 1 of an apparatus for using a solid state disk according to the present invention. The apparatus provided in this embodiment corresponds to the method embodiment of FIG. 1 of the present invention, and its specific implementation is not repeated here. Specifically, the apparatus 100 provided in this embodiment includes: a determining module 11, configured to determine, when a data block in the solid state disk needs to be operated on, the latency of the data block according to the load-balancing table of the solid state disk, where the latency is the time for which an operation on the data block lasts and the operation includes an erase operation or a write operation;

a judging module 12, configured to judge whether the latency of the data block determined by the determining module 11 is greater than a warning value, where the warning value is less than a typical latency and the typical latency is a preset latency at which an operation on a data block in the solid state disk fails; and a processing module 13, configured to forbid performing the operation on the data block if the judging module 12 determines that the latency of the data block is greater than the warning value.

The apparatus for using a solid state disk provided by this embodiment of the present invention compares the latency of the data block to be operated on with the warning value, dynamically monitors that latency, and forbids operations on Blocks whose latency exceeds the warning value, achieving true load balancing in terms of physical characteristics, reducing bad block generation to some extent, making use of reserved blocks as far as possible, and extending the SSD's life.

Optionally, the processing module 13 is further configured to perform the operation on the data block if the judging module 12 determines that the latency of the data block is not greater than the warning value.

FIG. 11 is a schematic structural diagram of Embodiment 2 of an apparatus for using a solid state disk according to the present invention. As shown in FIG. 11, the apparatus provided in this embodiment is based on the apparatus shown in FIG. 10 and further includes:

an obtaining module 14, configured to obtain the latency of the data block during the operation; and

an update module 15, configured to update the latency of the data block in the load-balancing table according to the latency obtained by the obtaining module 14.

Referring again to FIG. 8, the apparatus for using a solid state disk further includes:

a recording module 16, configured to record the data block in a preset pre-bad block table after the processing module forbids performing the operation on the data block, where the pre-bad block table is used to indicate data blocks whose operations need to be reduced.
FIG. 12 is a schematic structural diagram of Embodiment 3 of an apparatus for using a solid state disk according to the present invention. The apparatus provided in this embodiment corresponds to the method embodiment of FIG. 9 of the present invention, and its specific implementation is not repeated here. Specifically, the apparatus 200 provided in this embodiment includes: a selection module 21, configured to determine a data block according to the load-balancing table of the solid state disk;

an operation module 22, configured to operate on the data block determined by the selection module, where the operation includes an erase operation or a write operation;

a recording module 23, configured to record, in the load-balancing table, the latency of the data block during the operation, where the latency is the time for which the operation on the data block lasts; and

a judging module 24, configured to judge whether the latency of the data block is greater than a warning value, where the warning value is less than a typical latency and the typical latency is a preset latency at which an operation on a data block in the solid state disk fails;

where the recording module 23 is further configured to record the data block in a preset pre-bad block table if the judging module 24 determines that the latency of the data block is greater than the warning value. A person of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be completed by hardware driven by program instructions. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the above method embodiments. The storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Finally, it should be noted that the above embodiments merely illustrate the technical solutions of the present invention rather than limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some or all of their technical features equivalently replaced, without making the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims

CLAIMS

1. A method for using a solid state disk, comprising:

when a data block in the solid state disk needs to be operated on, determining, according to a load-balancing table of the solid state disk, a latency of the data block to be operated on, wherein the latency is a time for which the operation lasts and the operation comprises an erase operation or a write operation;

judging whether the latency of the data block is greater than a warning value, wherein the warning value is less than a typical latency and the typical latency is a preset latency at which an operation on a data block in the solid state disk fails; and if the latency of the data block is greater than the warning value, forbidding performing the operation on the data block.

2. The method according to claim 1, further comprising:

if the latency of the data block is not greater than the warning value, performing the operation on the data block.

3. The method according to claim 2, wherein after the performing the operation on the data block, the method further comprises:

obtaining the latency of the data block during the operation; and

updating the latency of the data block in the load-balancing table according to the obtained latency.

4. The method according to any one of claims 1 to 3, wherein after the forbidding performing the operation on the data block, the method further comprises:

recording the data block in a preset pre-bad block table, wherein the pre-bad block table is used to indicate data blocks for which the operation needs to be reduced.

5. A method for using a solid state disk, comprising:

determining a data block to be operated on according to a load-balancing table of the solid state disk;

operating on the data block, wherein the operation comprises an erase operation or a write operation; and recording, in the load-balancing table, a latency of the data block during the operation, wherein the latency is a time for which the operation on the data block lasts;

judging whether the latency of the data block is greater than a warning value, wherein the warning value is less than a typical latency and the typical latency is a preset latency at which an operation on a data block in the solid state disk fails; and

if the latency of the data block is greater than the warning value, recording the data block in a preset pre-bad block table.
6. An apparatus for using a solid state disk, comprising:

a determining module, configured to determine, when a data block in the solid state disk needs to be operated on, a latency of the data block according to a load-balancing table of the solid state disk, wherein the latency is a time for which the operation on the data block lasts and the operation comprises an erase operation or a write operation;

a judging module, configured to judge whether the latency of the data block determined by the determining module is greater than a warning value, wherein the warning value is less than a typical latency and the typical latency is a preset latency at which an operation on a data block in the solid state disk fails; and

a processing module, configured to forbid performing the operation on the data block if the judging module determines that the latency of the data block is greater than the warning value.

7. The apparatus according to claim 6, wherein:

the processing module is further configured to perform the operation on the data block if the judging module determines that the latency of the data block is not greater than the warning value.

8. The apparatus according to claim 7, further comprising:

an obtaining module, configured to obtain the latency of the data block during the operation; and

an update module, configured to update the latency of the data block in the load-balancing table according to the latency obtained by the obtaining module.

9. The apparatus according to any one of claims 6 to 8, further comprising: a recording module, configured to record the data block in a preset pre-bad block table after the processing module forbids performing the operation on the data block, wherein the pre-bad block table is used to indicate data blocks for which the operation needs to be reduced.

10. An apparatus for using a solid state disk, comprising:

a selection module, configured to determine a data block according to a load-balancing table of the solid state disk;

an operation module, configured to operate on the data block determined by the selection module, wherein the operation comprises an erase operation or a write operation;

a recording module, configured to record, in the load-balancing table, a latency of the data block during the operation, wherein the latency is a time for which the operation on the data block lasts; and a judging module, configured to judge whether the latency of the data block is greater than a warning value, wherein the warning value is less than a typical latency and the typical latency is a preset latency at which an operation on a data block in the solid state disk fails;

wherein the recording module is further configured to, if the judging module determines that the latency of the data block is greater than
PCT/CN2014/081976 2013-12-23 2014-07-10 固态硬盘使用方法及装置 WO2015096455A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP14873676.2A EP3079067A4 (en) 2013-12-23 2014-07-10 METHOD AND APPARATUS FOR USING INTEGRATED CIRCUIT DISK
US15/189,857 US10310930B2 (en) 2013-12-23 2016-06-22 Solid state disk using method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310718063.6 2013-12-23
CN201310718063.6A CN103678150B (zh) 2013-12-23 2013-12-23 固态硬盘使用方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/189,857 Continuation US10310930B2 (en) 2013-12-23 2016-06-22 Solid state disk using method and apparatus

Publications (1)

Publication Number Publication Date
WO2015096455A1 true WO2015096455A1 (zh) 2015-07-02

Family

ID=50315781

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/081976 WO2015096455A1 (zh) 2013-12-23 2014-07-10 固态硬盘使用方法及装置

Country Status (4)

Country Link
US (1) US10310930B2 (zh)
EP (1) EP3079067A4 (zh)
CN (2) CN103678150B (zh)
WO (1) WO2015096455A1 (zh)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678150B (zh) * 2013-12-23 2017-06-09 华为技术有限公司 固态硬盘使用方法及装置
WO2018040115A1 (en) * 2016-09-05 2018-03-08 Telefonaktiebolaget Lm Ericsson (Publ) Determination of faulty state of storage device
US10318423B2 (en) * 2016-12-14 2019-06-11 Macronix International Co., Ltd. Methods and systems for managing physical information of memory units in a memory device
US10261705B2 (en) * 2016-12-15 2019-04-16 Alibaba Group Holding Limited Efficient data consistency verification for flash storage
JP7010667B2 (ja) 2017-11-06 2022-01-26 キオクシア株式会社 メモリシステムおよび制御方法
CN110364216B (zh) * 2018-04-09 2022-03-01 合肥沛睿微电子股份有限公司 固态硬盘及其运行方法
CN108958655B (zh) * 2018-06-26 2021-08-10 郑州云海信息技术有限公司 一种固态硬盘的数据擦写方法、装置、设备及存储介质
KR102533072B1 (ko) * 2018-08-13 2023-05-17 에스케이하이닉스 주식회사 블록의 상태에 따라 사용 여부를 결정하는 메모리 시스템 및 메모리 시스템의 동작 방법
CN109102839B (zh) * 2018-08-15 2021-06-11 浪潮电子信息产业股份有限公司 一种坏块标记方法、装置、设备及可读存储介质
US20200409601A1 (en) * 2019-06-28 2020-12-31 Western Digital Technologies, Inc. Hold of Write Commands in Zoned Namespaces
CN110517718B (zh) * 2019-08-22 2021-06-08 深圳忆联信息系统有限公司 一种有效筛选颗粒新增坏块的方法及其系统
CN110941535A (zh) * 2019-11-22 2020-03-31 山东超越数控电子股份有限公司 一种硬盘负载均衡方法
CN111026997B (zh) * 2019-12-17 2023-04-25 上饶市中科院云计算中心大数据研究院 一种热点事件热度量化方法及装置
CN115552383A (zh) * 2020-08-03 2022-12-30 华为技术有限公司 闪存数据管理方法、存储设备控制器及存储设备
CN113703681A (zh) * 2021-08-26 2021-11-26 杭州海康存储科技有限公司 一种硬盘管理方法及装置、硬盘设备、存储介质
KR20230094622A (ko) * 2021-12-21 2023-06-28 에스케이하이닉스 주식회사 슈퍼 메모리 블록의 프로그램 상태를 기초로 타깃 동작을 실행하는 메모리 시스템 및 그 방법

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1517947A (zh) * 2003-01-28 2004-08-04 株式会社瑞萨科技 非易失性存储器
US20080162803A1 (en) * 2006-12-27 2008-07-03 Kabushiki Kaisha Toshiba Magnetic disk apparatus and method of controlling the same
CN103019969A (zh) * 2011-09-27 2013-04-03 威刚科技(苏州)有限公司 闪存储存装置及其不良储存区域的判定方法
CN103678150A (zh) * 2013-12-23 2014-03-26 华为技术有限公司 固态硬盘使用方法及装置

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050204187A1 (en) * 2004-03-11 2005-09-15 Lee Charles C. System and method for managing blocks in flash memory
JP4575346B2 (ja) * 2006-11-30 2010-11-04 株式会社東芝 Memory system
JP5100663B2 (ja) * 2006-12-26 2012-12-19 株式会社アドバンテスト Test apparatus and test method
EP2286412A1 (en) * 2007-12-21 2011-02-23 Rambus Inc. Flash memory timing pre-characterization for use in normal operation
CN101477492B (zh) * 2009-01-21 2010-12-29 华中科技大学 Cyclic-rewrite flash wear-leveling method for a solid state disk
CN101740110B (zh) * 2009-12-17 2013-06-12 中兴通讯股份有限公司 Method and device for NAND flash erase leveling
US20110252289A1 (en) * 2010-04-08 2011-10-13 Seagate Technology Llc Adjusting storage device parameters based on reliability sensing
US8737141B2 (en) * 2010-07-07 2014-05-27 Stec, Inc. Apparatus and method for determining an operating condition of a memory cell based on cycle information
US8751903B2 (en) * 2010-07-26 2014-06-10 Apple Inc. Methods and systems for monitoring write operations of non-volatile memory
KR101190742B1 (ko) * 2010-12-06 2012-10-12 에스케이하이닉스 주식회사 Memory controller, storage system including the same, and method for measuring memory lifespan
US10079068B2 (en) * 2011-02-23 2018-09-18 Avago Technologies General Ip (Singapore) Pte. Ltd. Devices and method for wear estimation based memory management
US9588702B2 (en) * 2014-12-30 2017-03-07 International Business Machines Corporation Adapting erase cycle parameters to promote endurance of a memory

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3079067A4 *

Also Published As

Publication number Publication date
CN103678150A (zh) 2014-03-26
US10310930B2 (en) 2019-06-04
EP3079067A4 (en) 2016-11-02
US20160299805A1 (en) 2016-10-13
CN106909318A (zh) 2017-06-30
CN106909318B (zh) 2020-05-08
CN103678150B (zh) 2017-06-09
EP3079067A1 (en) 2016-10-12

Similar Documents

Publication Publication Date Title
WO2015096455A1 (zh) Solid state disk usage method and device
KR101348665B1 (ko) Method for estimating and reporting the expected lifespan of flash disk memory
US9535611B2 (en) Cache memory for hybrid disk drives
US10552063B2 (en) Background mitigation reads in a non-volatile memory system
US10956317B2 (en) Garbage collection in non-volatile memory that fully programs dependent layers in a target block
US11430540B2 (en) Defective memory unit screening in a memory system
US20180165021A1 (en) Adaptive health grading for a non-volatile memory
US11500547B2 (en) Mitigating data errors in a storage device
US20230017942A1 (en) Memory sub-system event log management
CN113272905A (zh) Defect detection in a memory with a time-varying bit error rate
US10656847B2 (en) Mitigating asymmetric transient errors in non-volatile memory by proactive data relocation
US10614892B1 (en) Data reading method, storage controller and storage device
US11698742B2 (en) Garbage collection in a memory component using an adjusted parameter
US11513890B1 (en) Adaptive read scrub
US11977778B2 (en) Workload-based scan optimization
US20230395156A1 (en) Memory block characteristic determination
US11995320B2 (en) Scan fragmentation in memory devices
US20240045595A1 (en) Adaptive scanning of memory devices with supervised learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14873676

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2014873676

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014873676

Country of ref document: EP