WO2018050006A1 - Method and apparatus for writing storage data in a flash-based storage medium - Google Patents
- Publication number
- WO2018050006A1 (PCT/CN2017/100570)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- host
- total number
- erasures
- accumulated
- physical storage
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0616—Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0647—Migration mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0652—Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
- G06F3/0679—Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0688—Non-volatile semiconductor memory arrays
Definitions
- the present application relates to the field of computer and network technologies, and in particular, to a method and apparatus for writing stored data in a flash-based storage medium.
- the SSD (Solid State Drive), which uses flash memory as its storage medium and reads and writes electronically, has excellent read/write performance and has become the mainstream storage hardware for personal computers and servers.
- the storage medium of an SSD is NAND flash memory chips.
- the physical characteristic of each bit is that its value can be changed from 1 to 0 by charging the transistor, but can only be set back from 0 to 1 by an erase operation.
- the erase operation is performed in units of blocks.
- the lifetime of a NAND flash chip is measured by the number of times it can be erased. Once a block reaches its erase limit, it can no longer be used to store data and becomes a bad block. As the number of bad blocks increases, the service life of the SSD drops sharply.
- in the prior art, an FTL (Flash Translation Layer) balanced-erasing algorithm, or a file system layer with a balanced-erasing (wear-leveling) function, is used to erase the blocks on a single SSD as evenly as possible, so as to extend the lifetime of that SSD.
- the FTL-layer dynamic balancing strategy
- the FTL-layer static balancing strategy
- JFFS2 (Journalling Flash File System, version 2), a log-structured flash file system.
- YAFFS (Yet Another Flash File System).
- the present application provides a method for writing stored data in a flash-based storage medium, which is applied to a central control function module that performs write control on at least two physical storage units, including:
- acquiring the total number of accumulated erasures of all blocks in each physical storage unit; and, among the physical storage units that satisfy a predetermined write condition, writing the storage data to at least one physical storage unit having the least total number of accumulated erasures.
- the present application further provides an apparatus for writing stored data in a flash-based storage medium, which is applied to a central control function module that performs write control on at least two physical storage units, including:
- an accumulated-erasure-total unit, used to obtain the total number of accumulated erasures of all blocks in each physical storage unit;
- a physical-storage-unit unit, configured to write the storage data, among the physical storage units that satisfy a predetermined write condition, to at least one physical storage unit having the least total number of accumulated erasures.
- based on the total number of accumulated erasures of all blocks in each physical storage unit, the central control function module selects, among the physical storage units that satisfy the predetermined write condition, the ones with the least total number of accumulated erasures for writing the storage data. This achieves balanced erasing between different physical storage units and avoids premature failure of a physical storage unit caused by excessive erasing of a single unit, which reduces the possibility of losing data in a non-high-availability scenario and improves system stability in a high-availability scenario.
- FIG. 1 is a schematic structural diagram of a host structure according to an example of a first application scenario of the embodiment of the present application
- FIG. 2 is a schematic structural diagram of a cluster structure of an example of a second application scenario in the embodiment of the present application
- FIG. 3 is a flowchart of the method for writing stored data in Embodiment 1 of the present application
- FIG. 5 is a hardware structural diagram of a device in which an embodiment of the present application is located
- FIG. 6 is a logical structural diagram of an apparatus for writing stored data in an embodiment of the present application.
- FIG. 7 is a logical structural diagram of a cluster in the embodiment of the present application.
- Embodiments of the present application propose a new method for writing stored data in a flash-based storage medium: based on statistics of the total number of accumulated erasures of all blocks in each physical storage unit, one or more physical storage units with the minimum cumulative total number of erasures are selected, from among those satisfying a predetermined write condition, for writing the storage data.
- this achieves balanced erasing between different physical storage units, so that the service lives of different physical storage units are closer, which reduces the possibility of losing data, or of affecting system stability, due to premature failure of a single physical storage unit, thereby solving the problems of the prior art.
- two or more physical storage units are controlled by a central control function module, that is, the central control function module determines which physical storage unit or units to write the storage data to.
- a physical storage unit is an entity that is physically independent of other physical storage units and uses a flash-based storage medium. It may be a separate flash-based physical storage component (such as a flash memory chip or a hard disk), or a host that includes at least one independent flash-based physical storage component (such as a disk-array cabinet, a personal computer, or a server). The central control function module is implemented in software, or in a combination of software and hardware, and can run either on one of the physical storage units or on another host that is independent of all the physical storage units it controls.
- FIG. 1 is a schematic diagram of an application scenario of an embodiment of the present application.
- a host has multiple hard disks (the physical storage units in this application scenario), and a storage control module (the central control function module in this application scenario) runs on the host.
- the storage control module performs read and write control of all the hard disks, including determining which hard disk each piece of data is stored on.
- FIG. 2 is another example of an application scenario of an embodiment of the present application.
- a cluster consists of several hosts (the physical storage units in this application scenario); one host is the master node, and the other hosts are worker nodes.
- the cluster control module of the cluster (the central control function module in this application scenario) runs on the master node and allocates and manages the data storage of the hosts. When the host serving as the master node fails, another host can be promoted to master node to keep the cluster running.
- the central control function module applied in the embodiment of the present application can be run on any device with computing and storage capabilities, such as a tablet computer, a PC (Personal Computer), a notebook, a server, and the like.
- Step 310 Acquire a total number of accumulated erasures of all blocks in each physical storage unit.
- a block is the smallest unit of storage-space allocation in a data store.
- a block is erased when it is allocated, so that the application (or thread, process, etc.) that obtains the block space can use the block to write data. Therefore, the number of times a block has been allocated equals the cumulative number of erasures of that block.
- the partitioning of blocks is usually performed when the storage component is initialized, and the administrator can specify the size of the blocks at initialization.
- suppose the storage control module of the host where a hard disk is located manages the hard disk: after initialization, the storage control module can denote each block on the hard disk by a block identifier that is unique within the hard disk, and maintain a cumulative erase count for each block.
- the storage control module may also maintain, in the super block of the hard disk, the total number of accumulated erasures of all blocks, increasing the total by the number of blocks allocated each time blocks are allocated; in other words, 1 is added to the total number of accumulated erasures after each block is allocated. In this way, the total number of accumulated erasures of all blocks of a physically independent storage unit can be obtained.
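As a minimal sketch of the bookkeeping described above (the class and method names are illustrative; the counter names block_wear_count and disk_wear_count are taken from the later embodiment), the per-disk accounting might look like:

```python
class DiskWearCounter:
    """Tracks per-block and whole-disk cumulative erase counts.

    A block is erased each time it is allocated, so its allocation
    count equals its cumulative erase count.
    """

    def __init__(self, block_ids):
        # Per-block cumulative erase counts (block_wear_count).
        self.block_wear_count = {block_id: 0 for block_id in block_ids}
        # Total accumulated erasures of all blocks (disk_wear_count),
        # kept in the hard disk's super block in the scheme above.
        self.disk_wear_count = 0

    def allocate_block(self, block_id):
        """Record one allocation (and thus one erasure) of a block."""
        self.block_wear_count[block_id] += 1
        self.disk_wear_count += 1


disk = DiskWearCounter(block_ids=range(4))
disk.allocate_block(0)
disk.allocate_block(0)
disk.allocate_block(3)
print(disk.disk_wear_count)  # → 3
```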
- the storage control module on each host can count the total number of accumulated erasures of the physical storage components it controls and report it to the central control function module.
- the specific reporting mode can be determined according to the implementation of the application scenario. For example, suppose all the hosts form a cluster, and the cluster control module of the cluster is the central control function module in this application scenario.
- the storage control module on each host can actively report the total number of accumulated erasures of all blocks on its host to the cluster control module at a predetermined period, or the cluster control module can poll all the hosts at a predetermined period for the total.
- either way, the cluster control module receives the total number of accumulated erasures of all blocks of each host at a predetermined period.
- Step 320: among the physical storage units that satisfy a predetermined write condition, write the storage data to at least one physical storage unit with the least total number of accumulated erasures.
- the storage data may need to be written to N physical storage units (N being a natural number), and the predetermined write condition determines which physical storage units are candidates for the write.
- the conditions to be met typically include that the remaining storage space is sufficient to hold the storage data to be written.
- a predetermined write condition may be generated according to an acceptable range of one or more of the above conditions, which is illustrated by two examples:
- Example 1: among the plurality of hard disks, N hard disks are selected as the hard disks for writing the stored data.
- the predetermined write condition may be that the remaining storage space is not less than a predetermined value, or not less than a predetermined proportion of the hard disk capacity.
- Example 2: the N hosts whose remaining storage space is sufficient to hold the storage data to be written, and whose volume of access to saved data is smallest, are selected for writing the storage data.
- alternatively, the predetermined write condition may be that the remaining storage space is sufficient to hold the storage data to be written and the volume of access to the saved data on the host does not exceed a certain threshold.
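The selection logic illustrated by these example conditions could be sketched as follows; the dictionary field names and the shape of the condition check are illustrative assumptions, not from the patent:

```python
def pick_targets(units, data_size, n, access_threshold=None):
    """Among units meeting the predetermined write condition, return the
    N units with the least total accumulated erasures.

    Each unit is a dict with 'free_space', 'wear_count', and optionally
    'access_rate' (all illustrative field names).
    """
    candidates = [
        u for u in units
        if u["free_space"] >= data_size
        and (access_threshold is None or u["access_rate"] <= access_threshold)
    ]
    # Least-worn candidates first; take the N needed for the write.
    candidates.sort(key=lambda u: u["wear_count"])
    return candidates[:n]


units = [
    {"free_space": 100, "wear_count": 50, "access_rate": 1},
    {"free_space": 10,  "wear_count": 5,  "access_rate": 1},  # too little space
    {"free_space": 100, "wear_count": 20, "access_rate": 1},
]
targets = pick_targets(units, data_size=50, n=1)
print(targets[0]["wear_count"])  # → 20
```

Note that the too-little-space unit is excluded even though it is the least worn: the write condition filters candidates first, and only then is the least-worn unit chosen.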
- the storage data to be written to the physical storage unit may be storage data that needs to be written when the file is saved, or may be storage data that needs to be written when the file is migrated.
- the storage data that needs to be written when a file is saved is usually newly added storage data, while the storage data that needs to be written when a file is migrated is rewritten in the course of maintaining already-saved storage data.
- the storage data that needs to be written when the file is saved may be one copy, or two or more copies, and each copy is usually written to a different physical storage unit.
- file migration includes two situations.
- one is replacing the physical storage unit that holds a saved file, that is, the saved file is written to a new physical storage unit and deleted from the physical storage unit where it was originally saved;
- the other is that, when a physical storage unit fails, the file needs to be copied to a working physical storage unit.
- in either situation, the predetermined write condition may be used to exclude physical storage units that already hold a copy of the file, and/or physical storage units that have failed.
- whether for file saving or file migration, more than one physical storage unit may be needed to write the stored data. Therefore, after obtaining the total number of accumulated erasures of all blocks of each physical storage unit, when a write of stored data is required, the central control function module determines the candidate physical storage units based on the predetermined write condition, and writes the stored data to the N (N being a natural number) physical storage units among them with the smallest total number of accumulated erasures. In this way, balanced erasing can be achieved between different physical storage units.
- besides performing balanced erasing between physical storage units when storage data is written, the central control function module can also actively migrate files when the erase counts of physical storage units differ greatly, to balance the usage of individual physical storage units.
- the central control function module can monitor the erasing condition of each physical storage unit, and when the difference between the total numbers of accumulated erasures of two physical storage units exceeds a predetermined deviation range, migrate stored data from the physical storage unit with the higher total number of accumulated erasures to the one with the lower total.
- the predetermined deviation range between physical storage units may be determined according to factors such as the size of the files stored in the actual application scenario and the requirements on erase balancing. In one application scenario, the predetermined deviation range may be determined based on the average of the total numbers of accumulated erasures of all physical storage units; for example, 20% of the average value is taken as the predetermined deviation range.
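The deviation-range check described above can be sketched as follows, using the example rule of 20% of the average as the predetermined deviation range (the function name and return convention are illustrative):

```python
def needs_migration(wear_counts, deviation_ratio=0.2):
    """Return (source, destination) indices if the gap between the most-
    and least-worn units exceeds deviation_ratio * average, else None.

    Implements the example rule above: 20% of the average total
    accumulated erasures is the predetermined deviation range.
    """
    avg = sum(wear_counts) / len(wear_counts)
    hi = max(range(len(wear_counts)), key=lambda i: wear_counts[i])
    lo = min(range(len(wear_counts)), key=lambda i: wear_counts[i])
    if wear_counts[hi] - wear_counts[lo] > deviation_ratio * avg:
        return hi, lo  # migrate from most-worn to least-worn
    return None


print(needs_migration([100, 100, 160]))  # gap 60 > 0.2 * 120 → (2, 0)
```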
- in this embodiment, based on statistics of the total number of accumulated erasures of all blocks in each physical storage unit, the central control function module selects, among the physical storage units that satisfy the predetermined write condition, the ones with the least total number of accumulated erasures for writing the storage data, thereby achieving balanced erasing between different physical storage units and avoiding premature failure of a physical storage unit caused by excessive erasing of a single unit.
- the embodiments of the present application may be applied at the cluster level and at the host level respectively: erase balancing between different hosts is implemented at the cluster level, and erase balancing between different hard disks or other physical storage components on a single host is implemented at the host level.
- in addition, the prior-art FTL balancing algorithm, or a file system layer with a balancing function, can be used on a single hard disk or other physical storage component to implement erase balancing at the level of the physical storage component. Balanced erasing is thereby achieved at every level of the entire cluster system, extending the service life of the storage devices in the cluster system and improving the stability of the cluster system.
- the cluster in this embodiment includes at least two hosts, whose storage is managed by the cluster control module.
- each host includes at least two hard disks that use flash memory as a storage medium, and the hard disks on each host are managed by the storage control module on that host.
- the second embodiment is a specific implementation of Embodiment 1 applied at these two levels (the cluster level and the host level). For the detailed description of each step, refer to Embodiment 1; the description is not repeated here.
- Step 410 The cluster control module acquires the total number of accumulated erasures of all blocks on each host.
- Step 420 The storage control module of each host separately obtains the total number of accumulated erasures of each hard disk on the host.
- Step 430: among the hosts that meet the host's predetermined write condition, the cluster controller selects at least one host with the least total number of accumulated erasures as the target host.
- when the cluster has storage data to write, for example storage data to be written during file migration and/or file saving, suppose the storage data needs to be written to N hosts: the cluster controller selects, among the hosts that satisfy the host's predetermined write condition, the N hosts with the least total number of accumulated erasures as the target hosts, that is, the hosts the storage data is written to.
- Step 440: on each target host, the storage controller of the host writes the storage data to at least one hard disk that satisfies the hard disk's predetermined write condition and has the least total number of accumulated erasures.
- suppose the storage data needs to be written to M hard disks on a target host (M being a natural number): the storage controller on the host selects, among the hard disks that meet the hard disk's predetermined write condition, the M hard disks with the least total number of accumulated erasures as the target hard disks, and writes the storage data to the target hard disks.
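The two-level selection of Steps 430 and 440 can be sketched as one function; the counter names server_wear_count and disk_wear_count come from the embodiment below, while the dictionary layout and the boolean 'eligible' flags (standing in for the predetermined write conditions) are illustrative assumptions:

```python
def select_two_level(hosts, n_hosts, m_disks):
    """Two-level target selection: the cluster picks the N eligible hosts
    with the smallest server_wear_count; on each target host, the M
    eligible hard disks with the smallest disk_wear_count are picked.
    """
    eligible_hosts = [h for h in hosts if h["eligible"]]
    eligible_hosts.sort(key=lambda h: h["server_wear_count"])
    plan = {}
    for host in eligible_hosts[:n_hosts]:
        disks = [d for d in host["disks"] if d["eligible"]]
        disks.sort(key=lambda d: d["disk_wear_count"])
        plan[host["name"]] = [d["name"] for d in disks[:m_disks]]
    return plan


hosts = [
    {"name": "h1", "eligible": True, "server_wear_count": 300,
     "disks": [{"name": "d1", "eligible": True, "disk_wear_count": 40},
               {"name": "d2", "eligible": True, "disk_wear_count": 10}]},
    {"name": "h2", "eligible": True, "server_wear_count": 100,
     "disks": [{"name": "d1", "eligible": False, "disk_wear_count": 5},
               {"name": "d2", "eligible": True, "disk_wear_count": 30}]},
    {"name": "h3", "eligible": False, "server_wear_count": 50, "disks": []},
]
print(select_two_level(hosts, n_hosts=1, m_disks=1))  # → {'h2': ['d2']}
```

The ineligible host h3 is skipped despite being least worn, mirroring how the predetermined write conditions filter before the least-worn selection at each level.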
- the difference between the total numbers of accumulated erasures of the hosts can be monitored by the cluster controller. When the difference between the totals of two hosts exceeds the host's predetermined deviation range, the cluster controller migrates stored data from the host with the higher total number of accumulated erasures to the host with the lower total.
- on the receiving host, the data can be written by the storage controller there to the hard disk that satisfies the hard disk's predetermined write condition and has the least total number of accumulated erasures.
- similarly, the storage controller of each host can monitor the difference between the total numbers of accumulated erasures of the hard disks on the host. When the difference between the totals of two hard disks on a host exceeds the hard disk's predetermined deviation range, the storage controller of the host can migrate stored data from the hard disk with the higher total number of erasures to the hard disk with the lower total.
- the second embodiment of the present application applies the method of the first embodiment at the cluster level (between hosts) and at the host level (between the hard disks of each host), so that balanced erasing is achieved across the physical storage components of the entire cluster system, extending the life of the storage devices in the cluster system while improving the stability of the cluster system.
- a high availability cluster system includes K (K is a natural number greater than 3) hosts, and each host includes at least three hard disks that use flash memory as a storage medium.
- Each file stored in the cluster should be saved on three different hosts.
- the cluster controller of the cluster runs on the master node (one of the hosts) to control which three hosts each file is stored on; each host runs a storage controller to control which hard disk of that host each file stored on the host is placed on.
- in the super block of each hard disk, the storage controller maintains the cumulative erase count block_wear_count of each block on the hard disk and the total number of accumulated erasures disk_wear_count of all blocks of the hard disk.
- each time a block is allocated, the cumulative erase count block_wear_count corresponding to the block identifier block_num is incremented by 1, and 1 is added to the total number of accumulated erasures disk_wear_count of all blocks of the hard disk.
- the storage controller also maintains the total number of accumulated erasures of all blocks on the host, server_wear_count, which equals the sum of disk_wear_count over all the hard disks on the host.
- each worker node in the cluster periodically sends a heartbeat signal to the master node.
- the server_wear_count of the host where the worker node resides can be reported to the cluster controller in the periodic heartbeat signal.
- the storage controller on the master node likewise reports the server_wear_count of the host where the master node resides to the cluster controller, at the same period as the heartbeat signal.
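The reporting described above piggybacks server_wear_count on the periodic heartbeat. A sketch of the heartbeat payload a node might send (the message layout and function name are illustrative assumptions; only the server_wear_count aggregation rule comes from the text):

```python
def heartbeat_payload(host_name, disk_wear_counts):
    """Build the periodic heartbeat message a node sends to the cluster
    controller, carrying its server_wear_count.

    server_wear_count equals the sum of disk_wear_count over the host's
    hard disks, as maintained by the storage controller.
    """
    return {
        "host": host_name,
        "server_wear_count": sum(disk_wear_counts),
    }


msg = heartbeat_payload("worker-1", [120, 95, 130])
print(msg["server_wear_count"])  # → 345
```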
- the cluster controller determines a number of hosts that meet the predetermined write conditions of the host among the K hosts.
- the predetermined write condition for writing a new file to a host is: the remaining storage capacity exceeds 15% of the host's total storage capacity, and the volume of access to the saved data does not exceed an established threshold.
- among the hosts that meet the condition, the cluster controller selects the three hosts with the smallest server_wear_count as the hosts to write the new file to.
- the storage controller of each host determines a number of hard disks that meet the predetermined write conditions of the hard disk in the hard disk on the host.
- the predetermined write condition of the hard disk is: the remaining storage capacity exceeds 10% of the total storage capacity of the hard disk.
- the storage controller selects the hard disk with the smallest disk_wear_count to write a new file.
- when a host in the cluster, or a hard disk on a host, fails, the cluster controller considers the files saved on that host or hard disk unusable, so the number of copies of these files in the cluster falls below three, and the files need to be migrated (re-replicated) to other hosts.
- for this migration, the host's predetermined write condition is that the remaining storage capacity exceeds 15% of the host's total storage capacity, the volume of access to the saved data does not exceed the predetermined threshold, and the host has not already saved a copy of the file to be written.
- on each selected host, the storage controller selects, among the hard disks that meet the hard disk's write condition, the hard disk with the smallest disk_wear_count to write the migrated file.
- the cluster controller checks the differences between the hosts' server_wear_count totals at a certain host monitoring period. If the difference between the maximum and minimum server_wear_count in the cluster exceeds 20% of the average of all server_wear_count values, files on the host with the largest server_wear_count are migrated to the host with the smallest server_wear_count, until the difference between the maximum and minimum server_wear_count in the cluster is within 20% of the average.
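The cluster-level rebalancing loop above can be sketched as follows. The simplifying assumption (labeled here, not from the patent) is that each migration writes a fixed number of blocks on the destination host, adding that much wear there; real migrations would pick concrete files and update counts accordingly:

```python
def rebalance_hosts(server_wear, wear_per_migration, ratio=0.2):
    """While the gap between the largest and smallest server_wear_count
    exceeds ratio * average, migrate a file from the most-worn host to
    the least-worn one. Writing the migrated file is assumed to add
    wear_per_migration erasures to the destination (a simplification).
    Returns (number of migrations, final wear counts).
    """
    wear = list(server_wear)
    migrations = 0
    while True:
        avg = sum(wear) / len(wear)
        hi = max(range(len(wear)), key=lambda i: wear[i])
        lo = min(range(len(wear)), key=lambda i: wear[i])
        if wear[hi] - wear[lo] <= ratio * avg:
            break
        wear[lo] += wear_per_migration  # writing the file wears the destination
        migrations += 1
    return migrations, wear


n, final = rebalance_hosts([100, 200], wear_per_migration=10)
print(n, final)  # → 7 [170, 200]
```

The loop terminates because each migration raises the minimum, shrinking the gap while the threshold grows with the average.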
- the selection of the migration file can be implemented by referring to the prior art, and will not be described again.
- similarly, the storage controller checks the differences between the disk_wear_count totals of the hard disks on its host at a certain hard-disk monitoring period. If the difference between the maximum and minimum disk_wear_count on the host exceeds 15% of the average of all disk_wear_count values, files on the hard disk with the largest disk_wear_count are migrated to the hard disk with the smallest disk_wear_count, until the difference between the maximum and minimum disk_wear_count on the host is within 15% of the average.
- the choice of migration files can also be implemented with reference to the prior art.
- an embodiment of the present application further provides an apparatus for writing stored data.
- the apparatus can be implemented by software, or by hardware, or by a combination of hardware and software.
- taking software implementation as an example, the apparatus is formed by the CPU (Central Processing Unit) of the device in which it resides reading the corresponding computer program instructions into memory and running them.
- in terms of hardware, the device in which the apparatus for writing stored data resides usually also includes other hardware, such as chips for transmitting and receiving wireless signals, and/or boards for implementing network communication functions.
- FIG. 6 is a schematic diagram of an apparatus for writing storage data, applied to a central control function module that performs write control on at least two physical storage units. The apparatus includes an accumulated-erasure-total unit and a physical-storage-unit unit, wherein: the accumulated-erasure-total unit is used to obtain the total number of accumulated erasures of all blocks in each physical storage unit; and the physical-storage-unit unit is used to write the storage data, among the physical storage units that satisfy a predetermined write condition, to at least one physical storage unit with the least total number of accumulated erasures.
- in one implementation, the apparatus further includes a deviation migration unit, used to migrate stored data, when the difference between the total numbers of accumulated erasures of two physical storage units exceeds a predetermined deviation range, from the physical storage unit with the higher total number of accumulated erasures to the physical storage unit with the lower total.
- the predetermined deviation range may be determined based on an average of the total number of accumulated erasures of all physical storage units.
- the stored data includes: storage data that needs to be written when the file is migrated, and/or storage data that needs to be written when the file is saved.
- in one implementation, the physical storage unit includes: a host; the central control function module includes: the cluster control module of a cluster that contains all the hosts; and the accumulated-erasure-total unit is specifically used to receive the total number of accumulated erasures reported by each host at a predetermined period.
- in another implementation, the physical storage unit includes: a hard disk;
- the central control function module includes: the storage control module of a host; and the accumulated-erasure-total unit is specifically used to read the total number of accumulated erasures maintained in the super block of each hard disk, the total being incremented by 1 after each block of the hard disk is allocated.
- FIG. 7 is a schematic diagram of a cluster provided by an embodiment of the present application, including at least two hosts, each host including at least two hard disks using flash memory as a storage medium, and a cluster control module and a storage control module on each host.
- the cluster control module is used to obtain the total number of accumulated erasures of all blocks on each host, and, when storage data needs to be written, to select, among the hosts that satisfy the host's predetermined write condition, at least one host with the least total number of accumulated erasures as the target host.
- the storage control module on each host is used to obtain the total number of accumulated erasures of each hard disk on the host, and, when the host is a target host, to write the storage data to at least one hard disk that satisfies the hard disk's predetermined write condition and has the least total number of accumulated erasures.
- the cluster control module is further configured to: when the difference between the total numbers of accumulated erasures of two hosts exceeds the predetermined host deviation range, migrate the storage data from the host with the higher total number of accumulated erasures to the host with the lower total; the storage control module on each host is further configured to: when the difference between the total numbers of accumulated erasures of two hard disks on the host exceeds the predetermined hard-disk deviation range, migrate the storage data from the hard disk with the higher total number of accumulated erasures to the hard disk with the lower total.
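The migration trigger works the same way at the host and hard-disk levels, so one helper can serve both. A hedged sketch, with the callback-based `migrate` interface being an assumption:

```python
def rebalance(totals, deviation_range, migrate):
    """If the most- and least-worn units differ by more than the
    predetermined deviation range, move data from high to low.

    totals maps unit -> accumulated erase total; migrate(src, dst) is a
    caller-supplied action (host-to-host or disk-to-disk migration).
    """
    units = sorted(totals, key=totals.get)
    low, high = units[0], units[-1]
    if totals[high] - totals[low] > deviation_range:
        migrate(high, low)
        return high, low
    return None
```

The cluster control module would call this with per-host totals, and each host's storage control module with per-disk totals.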
- the stored data includes: storage data that needs to be written when performing file migration.
- a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
- the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer readable medium, such as read only memory (ROM) or flash memory.
- Memory is an example of a computer readable medium.
- Computer readable media includes both persistent and non-persistent, removable and non-removable media.
- Information storage can be implemented by any method or technology.
- the information can be computer readable instructions, data structures, modules of programs, or other data.
- Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape cartridges, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
- as defined herein, computer readable media does not include transitory computer readable media, such as modulated data signals and carrier waves.
- embodiments of the present application can be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
Abstract
Description
Claims (18)
- A method for writing storage data into a flash-based storage medium, applied to a central control function module that performs write control over at least two physical storage units, the method comprising: obtaining the total number of accumulated erasures of all blocks in each physical storage unit; and, among the physical storage units that satisfy a predetermined write condition, writing the storage data to at least one physical storage unit with the smallest total number of accumulated erasures.
- The method according to claim 1, further comprising: when the difference between the total numbers of accumulated erasures of two physical storage units exceeds a predetermined deviation range, migrating storage data from the physical storage unit with the higher total number of accumulated erasures to the physical storage unit with the lower total.
- The method according to claim 2, wherein the predetermined deviation range is determined based on the average of the total numbers of accumulated erasures of all physical storage units.
- The method according to claim 1, wherein the storage data comprises: storage data to be written during file migration, and/or storage data to be written during file saving.
- The method according to any one of claims 1 to 4, wherein the physical storage unit comprises a host; the central control function module comprises a cluster control module of a cluster comprising all hosts; and obtaining the total number of accumulated erasures of all blocks in each physical storage unit comprises: receiving the total number of accumulated erasures reported by each host at a predetermined period.
- The method according to any one of claims 1 to 4, wherein the physical storage unit comprises a hard disk; the central control function module comprises a storage control module of a host; and obtaining the total number of accumulated erasures of all blocks in each physical storage unit comprises: reading the total number of accumulated erasures maintained in the superblock of each hard disk, the total number of accumulated erasures being incremented by one each time a block of the hard disk is allocated.
- An apparatus for writing storage data, applied to a central control function module that performs write control over at least two physical storage units, the apparatus comprising: an accumulated-erasure-total unit configured to obtain the total number of accumulated erasures of all blocks in each physical storage unit; and a physical-storage-unit unit configured to write, among the physical storage units that satisfy a predetermined write condition, the storage data to at least one physical storage unit with the smallest total number of accumulated erasures.
- The apparatus according to claim 7, further comprising: a deviation migration unit configured to, when the difference between the total numbers of accumulated erasures of two physical storage units exceeds a predetermined deviation range, migrate storage data from the physical storage unit with the higher total number of accumulated erasures to the physical storage unit with the lower total.
- The apparatus according to claim 8, wherein the predetermined deviation range is determined based on the average of the total numbers of accumulated erasures of all physical storage units.
- The apparatus according to claim 7, wherein the storage data comprises: storage data to be written during file migration, and/or storage data to be written during file saving.
- The apparatus according to any one of claims 7 to 10, wherein the physical storage unit comprises a host; the central control function module comprises a cluster control module of a cluster comprising all hosts; and the accumulated-erasure-total unit is specifically configured to: receive the total number of accumulated erasures reported by each host at a predetermined period.
- The apparatus according to any one of claims 7 to 10, wherein the physical storage unit comprises a hard disk; the central control function module comprises a storage control module of a host; and the accumulated-erasure-total unit is specifically configured to: read the total number of accumulated erasures maintained in the superblock of each hard disk, the total number of accumulated erasures being incremented by one each time a block of the hard disk is allocated.
- A method for writing storage data in a cluster, the cluster comprising at least two hosts, each host comprising at least two hard disks using flash memory as a storage medium, the method comprising: obtaining, by a cluster control module, the total number of accumulated erasures of all blocks on each host; obtaining, by the storage control module of each host, the total number of accumulated erasures of each hard disk on that host; selecting, by the cluster controller, among the hosts that satisfy a predetermined host write condition, at least one host with the smallest total number of accumulated erasures as a target host; and, on each target host, writing, by the storage controller of the host, the storage data to at least one hard disk that satisfies a predetermined hard-disk write condition and has the smallest total number of accumulated erasures.
- The method according to claim 13, further comprising: when the difference between the total numbers of accumulated erasures of two hosts exceeds a predetermined host deviation range, migrating, by the cluster controller, storage data from the host with the higher total number of accumulated erasures to the host with the lower total; and, when the difference between the total numbers of accumulated erasures of two hard disks on a host exceeds a predetermined hard-disk deviation range, migrating, by the storage controller of that host, storage data from the hard disk with the higher total number of accumulated erasures to the hard disk with the lower total.
- The method according to claim 13, wherein the storage data comprises: storage data to be written during file migration.
- A cluster, comprising at least two hosts, each host comprising at least two hard disks using flash memory as a storage medium, the cluster further comprising: a cluster control module configured to obtain the total number of accumulated erasures of all blocks on each host and, when storage data needs to be written, to select, among the hosts that satisfy a predetermined host write condition, at least one host with the smallest total number of accumulated erasures as a target host; and a storage control module on each host, configured to obtain the total number of accumulated erasures of each hard disk on its host and, when its host is a target host, to write the storage data to at least one hard disk that satisfies a predetermined hard-disk write condition and has the smallest total number of accumulated erasures.
- The cluster according to claim 16, wherein the cluster control module is further configured to: when the difference between the total numbers of accumulated erasures of two hosts exceeds a predetermined host deviation range, migrate storage data from the host with the higher total number of accumulated erasures to the host with the lower total; and the storage control module on each host is further configured to: when the difference between the total numbers of accumulated erasures of two hard disks on its host exceeds a predetermined hard-disk deviation range, migrate storage data from the hard disk with the higher total number of accumulated erasures to the hard disk with the lower total.
- The cluster according to claim 16, wherein the storage data comprises: storage data to be written during file migration.
Priority Applications (14)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
MYPI2019001345A MY188873A (en) | 2016-09-14 | 2017-09-05 | Method and device for writing stored data into storage medium based on flash memory |
RU2019110993A RU2735407C2 (ru) | 2016-09-14 | 2017-09-05 | Способ и устройство для записи сохраненных данных на носитель данных на основе флэш-памяти |
MX2019002948A MX2019002948A (es) | 2016-09-14 | 2017-09-05 | Metodo y dispositivo para escribir datos almacenados en un medio de almacenamiento basado en memoria flash. |
AU2017325886A AU2017325886B2 (en) | 2016-09-14 | 2017-09-05 | Method and device for writing stored data into storage medium based on flash memory |
CA3036415A CA3036415C (en) | 2016-09-14 | 2017-09-05 | Method and device for writing stored data into storage medium based on flash memory |
KR1020197010643A KR102275094B1 (ko) | 2016-09-14 | 2017-09-05 | 저장된 데이터를 플래시 메모리에 기초한 저장 매체에 기입하기 위한 방법 및 디바이스 |
BR112019004916A BR112019004916A2 (pt) | 2016-09-14 | 2017-09-05 | método e dispositivo para gravar dados armazenados no meio de armazenamento com base na memória flash |
JP2019514207A JP2019532413A (ja) | 2016-09-14 | 2017-09-05 | 記憶対象のデータをフラッシュメモリベースの記憶媒体に書き込む方法及びデバイス |
EP17850208.4A EP3514674B1 (en) | 2016-09-14 | 2017-09-05 | Method and device for writing stored data into storage medium based on flash memory |
US16/351,904 US11099744B2 (en) | 2016-09-14 | 2019-03-13 | Method and device for writing stored data into storage medium based on flash memory |
PH12019500555A PH12019500555A1 (en) | 2016-09-14 | 2019-03-14 | Method and device for writing stored data into storage medium based on flash memory |
ZA2019/02298A ZA201902298B (en) | 2016-09-14 | 2019-04-11 | Method and device for writing stored data into storage medium based on flash memory |
AU2019101583A AU2019101583A4 (en) | 2016-09-14 | 2019-12-13 | Method and device for writing stored data into storage medium based on flash memory |
US17/375,033 US11287984B2 (en) | 2016-09-14 | 2021-07-14 | Method and device for writing stored data into storage medium based on flash memory |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610827195.6A CN107025066A (zh) | 2016-09-14 | 2016-09-14 | 在基于闪存的存储介质中写入存储数据的方法和装置 |
CN201610827195.6 | 2016-09-14 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/351,904 Continuation US11099744B2 (en) | 2016-09-14 | 2019-03-13 | Method and device for writing stored data into storage medium based on flash memory |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018050006A1 true WO2018050006A1 (zh) | 2018-03-22 |
Family
ID=59524718
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/100570 WO2018050006A1 (zh) | 2016-09-14 | 2017-09-05 | 在基于闪存的存储介质中写入存储数据的方法和装置 |
Country Status (15)
Country | Link |
---|---|
US (2) | US11099744B2 (zh) |
EP (1) | EP3514674B1 (zh) |
JP (1) | JP2019532413A (zh) |
KR (1) | KR102275094B1 (zh) |
CN (1) | CN107025066A (zh) |
AU (2) | AU2017325886B2 (zh) |
BR (1) | BR112019004916A2 (zh) |
CA (1) | CA3036415C (zh) |
MX (1) | MX2019002948A (zh) |
MY (1) | MY188873A (zh) |
PH (1) | PH12019500555A1 (zh) |
RU (1) | RU2735407C2 (zh) |
TW (1) | TWI676992B (zh) |
WO (1) | WO2018050006A1 (zh) |
ZA (1) | ZA201902298B (zh) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107025066A (zh) * | 2016-09-14 | 2017-08-08 | 阿里巴巴集团控股有限公司 | 在基于闪存的存储介质中写入存储数据的方法和装置 |
KR102611566B1 (ko) * | 2018-07-06 | 2023-12-07 | 삼성전자주식회사 | 솔리드 스테이트 드라이브 및 그의 메모리 할당 방법 |
KR20210039872A (ko) * | 2019-10-02 | 2021-04-12 | 삼성전자주식회사 | 프리 블록의 할당을 관리하는 호스트 시스템, 이를 포함하는 데이터 처리 시스템 및 호스트 시스템의 동작방법 |
CN111143238B (zh) * | 2019-12-27 | 2022-03-15 | 无锡融卡科技有限公司 | 基于eFlash存储芯片的数据擦写方法及系统 |
KR20210101973A (ko) | 2020-02-11 | 2021-08-19 | 에스케이하이닉스 주식회사 | 메모리 시스템 및 그것의 동작 방법 |
US11561729B2 (en) * | 2020-08-19 | 2023-01-24 | Micron Technology, Inc. | Write determination counter |
CN112162934A (zh) * | 2020-09-29 | 2021-01-01 | 深圳市时创意电子有限公司 | 存储块异常磨损处理方法、装置、电子设备及存储介质 |
TWI808384B (zh) * | 2021-02-23 | 2023-07-11 | 慧榮科技股份有限公司 | 儲存裝置、快閃記憶體控制器及其控制方法 |
TWI821152B (zh) * | 2021-02-23 | 2023-11-01 | 慧榮科技股份有限公司 | 儲存裝置、快閃記憶體控制器及其控制方法 |
CN112947862B (zh) * | 2021-03-10 | 2022-09-20 | 歌尔科技有限公司 | 设备、Flash存储器及其数据存储方法 |
CN113452867A (zh) * | 2021-06-25 | 2021-09-28 | 珠海奔图电子有限公司 | 数据清除方法、主机、图像形成装置、系统及存储介质 |
CN115374065B (zh) * | 2022-10-25 | 2023-02-28 | 山东捷瑞数字科技股份有限公司 | 一种基于云平台日志记录监控的文件清理方法及系统 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101441599A (zh) * | 2008-11-28 | 2009-05-27 | 成都市华为赛门铁克科技有限公司 | 一种固态硬盘的均衡方法和固态硬盘 |
CN102135942A (zh) * | 2010-12-31 | 2011-07-27 | 北京握奇数据系统有限公司 | 一种存储设备中实现损耗均衡的方法及存储设备 |
CN102880556A (zh) * | 2012-09-12 | 2013-01-16 | 浙江大学 | 一种实现Nand Flash磨损均衡的方法及其系统 |
CN104731515A (zh) * | 2013-12-18 | 2015-06-24 | 华为技术有限公司 | 控制存储设备机群磨损均衡的方法及设备 |
CN107025066A (zh) * | 2016-09-14 | 2017-08-08 | 阿里巴巴集团控股有限公司 | 在基于闪存的存储介质中写入存储数据的方法和装置 |
Family Cites Families (66)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6990667B2 (en) | 2001-01-29 | 2006-01-24 | Adaptec, Inc. | Server-independent object positioning for load balancing drives and servers |
US7096313B1 (en) | 2002-10-28 | 2006-08-22 | Sandisk Corporation | Tracking the least frequently erased blocks in non-volatile memory systems |
JP4651913B2 (ja) | 2003-02-17 | 2011-03-16 | 株式会社日立製作所 | 記憶装置システム |
US20050055495A1 (en) * | 2003-09-05 | 2005-03-10 | Nokia Corporation | Memory wear leveling |
CA2552019A1 (en) | 2003-12-29 | 2005-07-21 | Sherwood Information Partners, Inc. | System and method for reduced vibration interaction in a multiple-hard-disk-drive enclosure |
JP4863749B2 (ja) | 2006-03-29 | 2012-01-25 | 株式会社日立製作所 | フラッシュメモリを用いた記憶装置、その消去回数平準化方法、及び消去回数平準化プログラム |
US7411757B2 (en) | 2006-07-27 | 2008-08-12 | Hitachi Global Storage Technologies Netherlands B.V. | Disk drive with nonvolatile memory having multiple modes of operation |
US9153337B2 (en) | 2006-12-11 | 2015-10-06 | Marvell World Trade Ltd. | Fatigue management system and method for hybrid nonvolatile solid state memory system |
US20080140918A1 (en) * | 2006-12-11 | 2008-06-12 | Pantas Sutardja | Hybrid non-volatile solid state memory system |
CN101364437A (zh) * | 2007-08-07 | 2009-02-11 | 芯邦科技(深圳)有限公司 | 一种可使闪存损耗均衡的方法及其应用 |
US8751755B2 (en) * | 2007-12-27 | 2014-06-10 | Sandisk Enterprise Ip Llc | Mass storage controller volatile memory containing metadata related to flash memory storage |
US8427552B2 (en) | 2008-03-03 | 2013-04-23 | Videoiq, Inc. | Extending the operational lifetime of a hard-disk drive used in video data storage applications |
US8554983B2 (en) | 2008-05-27 | 2013-10-08 | Micron Technology, Inc. | Devices and methods for operating a solid state drive |
US8959280B2 (en) | 2008-06-18 | 2015-02-17 | Super Talent Technology, Corp. | Super-endurance solid-state drive with endurance translation layer (ETL) and diversion of temp files for reduced flash wear |
US9123422B2 (en) | 2012-07-02 | 2015-09-01 | Super Talent Technology, Corp. | Endurance and retention flash controller with programmable binary-levels-per-cell bits identifying pages or blocks as having triple, multi, or single-level flash-memory cells |
US8010738B1 (en) | 2008-06-27 | 2011-08-30 | Emc Corporation | Techniques for obtaining a specified lifetime for a data storage device |
JP5242264B2 (ja) | 2008-07-07 | 2013-07-24 | 株式会社東芝 | データ制御装置、ストレージシステムおよびプログラム |
US8024442B1 (en) | 2008-07-08 | 2011-09-20 | Network Appliance, Inc. | Centralized storage management for multiple heterogeneous host-side servers |
US8244995B2 (en) * | 2008-10-30 | 2012-08-14 | Dell Products L.P. | System and method for hierarchical wear leveling in storage devices |
CN101419842B (zh) * | 2008-11-07 | 2012-04-04 | 成都市华为赛门铁克科技有限公司 | 硬盘的损耗均衡方法、装置及系统 |
US20100125696A1 (en) * | 2008-11-17 | 2010-05-20 | Prasanth Kumar | Memory Controller For Controlling The Wear In A Non-volatile Memory Device And A Method Of Operation Therefor |
US9164689B2 (en) * | 2009-03-30 | 2015-10-20 | Oracle America, Inc. | Data storage system and method of processing a data access request |
US8429373B2 (en) | 2009-07-15 | 2013-04-23 | International Business Machines Corporation | Method for implementing on demand configuration changes |
WO2011010344A1 (ja) | 2009-07-22 | 2011-01-27 | 株式会社日立製作所 | 複数のフラッシュパッケージを有するストレージシステム |
US8402242B2 (en) | 2009-07-29 | 2013-03-19 | International Business Machines Corporation | Write-erase endurance lifetime of memory storage devices |
WO2011021126A1 (en) | 2009-08-21 | 2011-02-24 | International Business Machines Corporation | Data storage system and method for operating a data storage system |
US8464106B2 (en) | 2009-08-24 | 2013-06-11 | Ocz Technology Group, Inc. | Computer system with backup function and method therefor |
US8234520B2 (en) | 2009-09-16 | 2012-07-31 | International Business Machines Corporation | Wear leveling of solid state disks based on usage information of data and parity received from a raid controller |
US8621141B2 (en) * | 2010-04-01 | 2013-12-31 | Intel Corporations | Method and system for wear leveling in a solid state drive |
US8700842B2 (en) | 2010-04-12 | 2014-04-15 | Sandisk Enterprise Ip Llc | Minimizing write operations to a flash memory-based object store |
US8737141B2 (en) | 2010-07-07 | 2014-05-27 | Stec, Inc. | Apparatus and method for determining an operating condition of a memory cell based on cycle information |
US8904226B2 (en) | 2010-08-26 | 2014-12-02 | Cleversafe, Inc. | Migrating stored copies of a file to stored encoded data slices |
US8775720B1 (en) | 2010-08-31 | 2014-07-08 | Western Digital Technologies, Inc. | Hybrid drive balancing execution times for non-volatile semiconductor memory and disk |
US8825977B1 (en) | 2010-09-28 | 2014-09-02 | Western Digital Technologies, Inc. | Hybrid drive writing copy of data to disk when non-volatile semiconductor memory nears end of life |
WO2012060824A1 (en) | 2010-11-02 | 2012-05-10 | Hewlett-Packard Development Company, L.P. | Solid-state disk (ssd) management |
FR2977047B1 (fr) | 2011-06-22 | 2013-08-16 | Starchip | Procede de gestion de l'endurance de memoires non volatiles. |
KR101938210B1 (ko) * | 2012-04-18 | 2019-01-15 | 삼성전자주식회사 | 낸드 플래시 메모리, 가변 저항 메모리 및 컨트롤러를 포함하는 메모리 시스템의 동작 방법 |
US9443591B2 (en) | 2013-01-23 | 2016-09-13 | Seagate Technology Llc | Storage device out-of-space handling |
WO2013190597A1 (en) * | 2012-06-21 | 2013-12-27 | Hitachi, Ltd. | Flash memory device and storage control method |
KR20140006299A (ko) * | 2012-07-03 | 2014-01-16 | 삼성전자주식회사 | 낸드 플래시 메모리 기반의 저장부에 데이터 기록을 제어하는 방법 및 장치 |
US8862810B2 (en) * | 2012-09-27 | 2014-10-14 | Arkologic Limited | Solid state device write operation management system |
CN102981970B (zh) * | 2012-11-23 | 2016-08-03 | 深圳市江波龙电子有限公司 | 闪存管理方法和系统 |
US20150143021A1 (en) | 2012-12-26 | 2015-05-21 | Unisys Corporation | Equalizing wear on storage devices through file system controls |
CN103116549B (zh) * | 2013-01-04 | 2016-03-16 | 张亚丽 | 基于最大可擦除次数的闪存存储方法 |
US9652376B2 (en) | 2013-01-28 | 2017-05-16 | Radian Memory Systems, Inc. | Cooperative flash memory control |
JP6005566B2 (ja) * | 2013-03-18 | 2016-10-12 | 株式会社東芝 | 情報処理システム、制御プログラムおよび情報処理装置 |
JP2015014963A (ja) * | 2013-07-05 | 2015-01-22 | 富士通株式会社 | ストレージ制御装置、制御プログラム及び制御方法 |
TWI515736B (zh) | 2013-07-25 | 2016-01-01 | 慧榮科技股份有限公司 | 資料儲存裝置以及快閃記憶體控制方法 |
US9336129B2 (en) * | 2013-10-02 | 2016-05-10 | Sandisk Technologies Inc. | System and method for bank logical data remapping |
US9442662B2 (en) * | 2013-10-18 | 2016-09-13 | Sandisk Technologies Llc | Device and method for managing die groups |
CN104572489B (zh) * | 2013-10-23 | 2019-12-24 | 深圳市腾讯计算机系统有限公司 | 磨损均衡方法及装置 |
US9619381B2 (en) | 2013-12-24 | 2017-04-11 | International Business Machines Corporation | Collaborative health management in a storage system |
JP5858081B2 (ja) * | 2014-03-27 | 2016-02-10 | Tdk株式会社 | メモリコントローラ、メモリシステム及びメモリ制御方法 |
US10725668B1 (en) * | 2014-08-29 | 2020-07-28 | SK Hynix Inc. | Data separation during garbage collection and wear leveling |
US9292210B1 (en) * | 2014-08-29 | 2016-03-22 | International Business Machines Corporation | Thermally sensitive wear leveling for a flash memory device that includes a plurality of flash memory modules |
US9368218B2 (en) * | 2014-10-03 | 2016-06-14 | HGST Netherlands B.V. | Fast secure erase in a flash system |
CN104360957A (zh) * | 2014-11-26 | 2015-02-18 | 上海爱信诺航芯电子科技有限公司 | 一种维持闪存损耗均衡的方法 |
JP6107802B2 (ja) | 2014-12-15 | 2017-04-05 | コニカミノルタ株式会社 | 不揮発性メモリ制御装置、不揮発性メモリ制御方法及びプログラム |
US10459639B2 (en) * | 2015-04-28 | 2019-10-29 | Hitachi, Ltd. | Storage unit and storage system that suppress performance degradation of the storage unit |
TWI563509B (en) * | 2015-07-07 | 2016-12-21 | Phison Electronics Corp | Wear leveling method, memory storage device and memory control circuit unit |
CN105159601B (zh) * | 2015-08-07 | 2018-12-07 | 杭州海兴电力科技股份有限公司 | 一种提高Flash擦写寿命的方法 |
TWI601059B (zh) * | 2015-11-19 | 2017-10-01 | 慧榮科技股份有限公司 | 資料儲存裝置與資料儲存方法 |
US9886324B2 (en) * | 2016-01-13 | 2018-02-06 | International Business Machines Corporation | Managing asset placement using a set of wear leveling data |
US10034407B2 (en) * | 2016-07-22 | 2018-07-24 | Intel Corporation | Storage sled for a data center |
CN107678906B (zh) * | 2016-08-01 | 2021-01-29 | 杭州海康威视数字技术股份有限公司 | 硬盘管理方法和系统 |
KR20200053965A (ko) * | 2018-11-09 | 2020-05-19 | 에스케이하이닉스 주식회사 | 메모리 시스템 및 그것의 동작방법 |
-
2016
- 2016-09-14 CN CN201610827195.6A patent/CN107025066A/zh active Pending
-
2017
- 2017-07-19 TW TW106124168A patent/TWI676992B/zh active
- 2017-09-05 MY MYPI2019001345A patent/MY188873A/en unknown
- 2017-09-05 KR KR1020197010643A patent/KR102275094B1/ko active IP Right Grant
- 2017-09-05 JP JP2019514207A patent/JP2019532413A/ja active Pending
- 2017-09-05 RU RU2019110993A patent/RU2735407C2/ru active
- 2017-09-05 AU AU2017325886A patent/AU2017325886B2/en active Active
- 2017-09-05 WO PCT/CN2017/100570 patent/WO2018050006A1/zh unknown
- 2017-09-05 CA CA3036415A patent/CA3036415C/en active Active
- 2017-09-05 EP EP17850208.4A patent/EP3514674B1/en active Active
- 2017-09-05 BR BR112019004916A patent/BR112019004916A2/pt not_active IP Right Cessation
- 2017-09-05 MX MX2019002948A patent/MX2019002948A/es unknown
-
2019
- 2019-03-13 US US16/351,904 patent/US11099744B2/en active Active
- 2019-03-14 PH PH12019500555A patent/PH12019500555A1/en unknown
- 2019-04-11 ZA ZA2019/02298A patent/ZA201902298B/en unknown
- 2019-12-13 AU AU2019101583A patent/AU2019101583A4/en active Active
-
2021
- 2021-07-14 US US17/375,033 patent/US11287984B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101441599A (zh) * | 2008-11-28 | 2009-05-27 | 成都市华为赛门铁克科技有限公司 | 一种固态硬盘的均衡方法和固态硬盘 |
CN102135942A (zh) * | 2010-12-31 | 2011-07-27 | 北京握奇数据系统有限公司 | 一种存储设备中实现损耗均衡的方法及存储设备 |
CN102880556A (zh) * | 2012-09-12 | 2013-01-16 | 浙江大学 | 一种实现Nand Flash磨损均衡的方法及其系统 |
CN104731515A (zh) * | 2013-12-18 | 2015-06-24 | 华为技术有限公司 | 控制存储设备机群磨损均衡的方法及设备 |
CN107025066A (zh) * | 2016-09-14 | 2017-08-08 | 阿里巴巴集团控股有限公司 | 在基于闪存的存储介质中写入存储数据的方法和装置 |
Also Published As
Publication number | Publication date |
---|---|
JP2019532413A (ja) | 2019-11-07 |
CN107025066A (zh) | 2017-08-08 |
EP3514674A1 (en) | 2019-07-24 |
AU2017325886A1 (en) | 2019-04-04 |
AU2019101583A4 (en) | 2020-01-23 |
EP3514674B1 (en) | 2023-07-19 |
US20210342073A1 (en) | 2021-11-04 |
EP3514674A4 (en) | 2020-05-06 |
CA3036415A1 (en) | 2018-03-22 |
RU2019110993A (ru) | 2020-10-15 |
TWI676992B (zh) | 2019-11-11 |
US11099744B2 (en) | 2021-08-24 |
CA3036415C (en) | 2021-07-06 |
MY188873A (en) | 2022-01-12 |
US11287984B2 (en) | 2022-03-29 |
TW201818401A (zh) | 2018-05-16 |
MX2019002948A (es) | 2019-07-18 |
RU2019110993A3 (zh) | 2020-10-15 |
ZA201902298B (en) | 2021-06-30 |
PH12019500555A1 (en) | 2019-11-18 |
US20190212922A1 (en) | 2019-07-11 |
BR112019004916A2 (pt) | 2019-06-04 |
AU2017325886B2 (en) | 2020-11-19 |
RU2735407C2 (ru) | 2020-10-30 |
KR20190052083A (ko) | 2019-05-15 |
KR102275094B1 (ko) | 2021-07-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018050006A1 (zh) | 在基于闪存的存储介质中写入存储数据的方法和装置 | |
US10223010B2 (en) | Dynamic storage device provisioning | |
US10126970B2 (en) | Paired metablocks in non-volatile storage device | |
US20230013322A1 (en) | Solid state drive management method and solid state drive | |
CN113811862A (zh) | 存储驱动器的动态性能等级调整 | |
CN113934360B (zh) | 多存储设备生命周期管理系统 | |
US10365836B1 (en) | Electronic system with declustered data protection by parity based on reliability and method of operation thereof | |
US11868223B2 (en) | Read-disturb-based read temperature information utilization system | |
US11836073B2 (en) | Storage device operating data counter system | |
US11983431B2 (en) | Read-disturb-based read temperature time-based attenuation system | |
US11928354B2 (en) | Read-disturb-based read temperature determination system | |
US11922035B2 (en) | Read-disturb-based read temperature adjustment system | |
US20230229336A1 (en) | Read-disturb-based read temperature time-based attenuation system | |
US20230305721A1 (en) | Method and apparatus for memory management in memory disaggregation environment | |
CN113811861A (zh) | 存储驱动器的动态性能等级调整 | |
CN113811846A (zh) | 存储驱动器的动态每天写入调整 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17850208 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 3036415 Country of ref document: CA |
|
ENP | Entry into the national phase |
Ref document number: 2019514207 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112019004916 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 2017325886 Country of ref document: AU Date of ref document: 20170905 Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20197010643 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2017850208 Country of ref document: EP Effective date: 20190415 |
|
ENP | Entry into the national phase |
Ref document number: 112019004916 Country of ref document: BR Kind code of ref document: A2 Effective date: 20190313 |