CN111414138B - Wear balancing method and device for solid state disk - Google Patents

Wear balancing method and device for solid state disk

Info

Publication number
CN111414138B
CN111414138B (application CN202010195586.7A)
Authority
CN
China
Prior art keywords
data
data block
queue
recovery
idle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010195586.7A
Other languages
Chinese (zh)
Other versions
CN111414138A (en)
Inventor
张远高
周炎钧
Current Assignee
Rongming Microelectronics Jinan Co ltd
Original Assignee
Rongming Microelectronics Jinan Co ltd
Priority date
Filing date
Publication date
Application filed by Rongming Microelectronics Jinan Co ltd filed Critical Rongming Microelectronics Jinan Co ltd
Priority to CN202010195586.7A priority Critical patent/CN111414138B/en
Publication of CN111414138A publication Critical patent/CN111414138A/en
Application granted granted Critical
Publication of CN111414138B publication Critical patent/CN111414138B/en


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 — Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0614 — Improving the reliability of storage systems
    • G06F3/0616 — Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
    • G06F3/064 — Management of blocks
    • G06F3/0647 — Migration mechanisms
    • G06F3/0679 — Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)

Abstract

The invention provides a wear leveling method and device for a solid state disk, which reduce the overhead of system memory resources and improve the processing efficiency of the wear leveling operation. The wear leveling method for the solid state disk comprises the following steps: acquiring the timestamp of the first data block in a cold data queue, wherein the data blocks in the cold data queue are ordered from the earliest to the latest timestamp; if the difference between the acquired timestamp and the current time is not greater than a preset cold-data migration threshold, acquiring the erase counts of several data blocks at the front of the cold data queue; and if the difference between the erase count of the data block at the tail of the free queue and the erase count of any one of the acquired front-section data blocks is greater than a preset wear leveling threshold, starting the wear leveling operation, wherein the free data blocks in the free queue are ordered from the fewest to the most erasures.

Description

Wear balancing method and device for solid state disk
Technical Field
The invention relates to the technical field of solid state disks, and in particular to a wear leveling method and device for a solid state disk.
Background
A solid state drive (Solid State Drive, SSD) is a computer storage device that mainly uses flash memory as permanent storage. The flash memory serving as the storage medium is divided into a number of data blocks, and each block must be erased into a free data block before it can be written. When written data is updated, the new data is written to a different physical address and the data at the old address is marked as invalid; such invalid data is also called garbage data. Garbage collection improves the utilization efficiency of the SSD.
Garbage collection, data retention and wear leveling are important functions in solid state disk design. As the amount of written data grows, the number of free data blocks shrinks; garbage collection migrates valid data from a source block to a target block, after which the source block can be erased to become a free data block, thereby releasing the space occupied by invalid data. The free data blocks are managed by a queue and can be ordered from the fewest to the most erasures. Garbage collection typically occurs when little space is left, which means the free-block queue is small, so the update efficiency of queue insertion and deletion is not a concern. Written data is subject to read disturb and may become unreliable if stored for too long. Data retention migrates data affected by read disturb or long storage time from a source block to a target block.
The read count of each data block is updated at run time, and when it exceeds a threshold the data block can be placed directly into the urgent garbage-collection queue. Each free data block is timestamped when it is first written, and this value remains unchanged until the block is erased and rewritten. When data is written to a new free block, or a data block is erased, the block's timestamp is appended to, or deleted from, the end of a queue. This queue is naturally ordered by timestamp and is called the cold data queue: the earlier the timestamp, the colder the data, and the later the timestamp, the hotter the data.
The erase count indicates the degree of wear of a data block. When the difference between the highest and the lowest erase count exceeds a threshold, wear leveling must move data from a source block with a high erase count to a target block with a low erase count, thereby prolonging the service life.
In the prior art, an independent wear queue is needed to track erase counts; the size of this queue equals the number of all data blocks in the solid state disk's flash memory, and the data blocks must be ordered by erase count. Garbage collection continuously generates insertion and deletion updates, and because the wear queue is large, maintaining such a large sorted queue requires corresponding memory and computation, which increases the overhead of system memory resources and reduces the processing efficiency of the wear leveling operation.
Disclosure of Invention
The embodiments of the invention provide a wear leveling method and device for a solid state disk, which reduce the overhead of system memory resources and improve the processing efficiency of the wear leveling operation.
In a first aspect, a wear leveling method for a solid state disk is provided, comprising:
acquiring the timestamp of the first data block in a cold data queue, wherein the data blocks in the cold data queue are ordered from the earliest to the latest timestamp;
if the difference between the acquired timestamp and the current time is not greater than a preset cold-data migration threshold, acquiring the erase counts of several data blocks at the front of the cold data queue; and
if the difference between the erase count of the data block at the tail of the free queue and the erase count of any one of the several front-section data blocks is greater than a preset wear leveling threshold, starting the wear leveling operation, wherein the free data blocks in the free queue are ordered from the fewest to the most erasures.
In one embodiment, starting the wear leveling operation specifically comprises:
migrating the valid data in a first source data block into a first target data block at the tail of the free queue, wherein the first source data block comprises at least one data block at the front of the cold data queue;
determining the reclamation level of the first source data block according to the proportion of valid data in the first source data block; and
placing the first source data block into the reclamation queue of the corresponding level according to its reclamation level.
In one embodiment, the wear leveling method for the solid state disk provided by the embodiments of the invention further comprises:
adding a timestamp to the first target data block according to the data migration time; and
placing the first target data block at the tail of the cold data queue.
In one embodiment, the wear leveling method for the solid state disk provided by the embodiments of the invention further comprises:
when a data write request is received, writing the data to be written into a second target data block at the head of the free queue;
adding a timestamp to the second target data block according to the write time; and
placing the second target data block at the tail of the cold data queue.
In one embodiment, the wear leveling method for the solid state disk provided by the embodiments of the invention further comprises:
if the second target data block becomes full while the data to be written is being written, deleting the second target data block from the free queue;
determining the reclamation level of the second target data block according to the proportion of valid data in the second target data block; and
placing the second target data block into the reclamation queue of the corresponding level according to its reclamation level.
In one embodiment, the reclamation queue group further comprises an urgent reclamation queue;
and the method further comprises:
if the data capacity of the free queue is lower than a preset reclamation start threshold, or the urgent reclamation queue is not empty, selecting the data block at the head of the non-empty reclamation queue with the highest reclamation level as a second source data block;
migrating the valid data of the second source data block into a third target data block at the tail of the free queue;
erasing the data in the second source data block; and
inserting the second source data block into the free queue according to its erase count.
In one embodiment, the wear leveling method for the solid state disk provided by the embodiments of the invention further comprises:
during data reading and writing, if a data read/write error is detected, deleting the corresponding data block from the free queue or from a non-urgent reclamation queue, and adding it to the urgent reclamation queue.
In a second aspect, a wear leveling device for a solid state disk is provided, comprising:
a first acquisition unit, configured to acquire the timestamp of the first data block in a cold data queue, wherein the data blocks in the cold data queue are ordered from the earliest to the latest timestamp;
a second acquisition unit, configured to acquire the erase counts of several data blocks at the front of the cold data queue if the difference between the timestamp acquired by the first acquisition unit and the current time is not greater than a preset cold-data migration threshold; and
a processing unit, configured to start the wear leveling operation if the difference between the erase count of the data block at the tail of the free queue and the erase count of any one of the several front-section data blocks acquired by the second acquisition unit is greater than a preset wear leveling threshold, wherein the free data blocks in the free queue are ordered from the fewest to the most erasures.
In one embodiment, the processing unit is specifically configured to migrate the valid data in a first source data block into a first target data block at the tail of the free queue, wherein the first source data block comprises at least one data block at the front of the cold data queue; determine the reclamation level of the first source data block according to the proportion of valid data in it; and place the first source data block into the reclamation queue of the corresponding level according to its reclamation level.
In one embodiment, the processing unit is further configured to add a timestamp to the first target data block according to the data migration time, and place the first target data block at the tail of the cold data queue.
In one embodiment, the processing unit is further configured to, when a data write request is received, write the data to be written into a second target data block at the head of the free queue; add a timestamp to the second target data block according to the write time; and place the second target data block at the tail of the cold data queue.
In one embodiment, the processing unit is further configured to, while the data to be written is being written, delete the second target data block from the free queue if it becomes full; determine the reclamation level of the second target data block according to the proportion of valid data in it; and place the second target data block into the reclamation queue of the corresponding level according to its reclamation level.
In one embodiment, the reclamation queue group further comprises an urgent reclamation queue;
and the processing unit is further configured to select, if the data capacity of the free queue is lower than a preset reclamation start threshold or the urgent reclamation queue is not empty, the data block at the head of the non-empty reclamation queue with the highest reclamation level as a second source data block; migrate the valid data of the second source data block into a third target data block at the tail of the free queue; erase the data in the second source data block; and insert the second source data block into the free queue according to its erase count.
In one embodiment, the processing unit is further configured to, during data reading and writing, delete the corresponding data block from the free queue or from a non-urgent reclamation queue if a data read/write error is detected, and add it to the urgent reclamation queue.
In a third aspect, a computing device is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of any of the methods described above when executing the program.
In a fourth aspect, a computer storage medium is provided, on which a computer program is stored, wherein the program implements the steps of any of the methods described above when executed by a processor.
By adopting the above technical solutions, the invention has at least the following advantages:
In the wear leveling method and device for the solid state disk, the data blocks in the free queue are ordered from the fewest to the most erasures, and the cold data queue is ordered from the earliest to the latest timestamp, so the erase counts of the first data blocks of the cold data queue are relatively small, and the erase count of the last data block of the free queue can be used to decide whether to start wear leveling. Wear leveling is thus achieved by reusing the cold data queue and the free queue, without maintaining a dedicated wear queue, which reduces the overhead of system memory resources. In addition, the free queue and the cold data queue each maintain only part of the data blocks of the solid state disk rather than all of them, which improves the processing efficiency of the wear leveling operation.
Drawings
FIG. 1 is a schematic flow chart of a wear leveling method for a solid state disk according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a wear leveling operation flow according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a flow chart of a flash memory read operation of a solid state disk according to an embodiment of the present invention;
FIG. 4 is a flow chart of determining whether to initiate data migration according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a wear leveling device for a solid state disk according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a computing device according to an embodiment of the invention.
Detailed Description
To further explain the technical means adopted by the invention to achieve the intended purpose, and their effects, the invention is described in detail below with reference to the accompanying drawings and preferred embodiments.
First, some terms involved in the embodiments of the invention are explained to facilitate understanding by those skilled in the art.
It should be noted that the terms "first", "second" and the like in the description, the claims and the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described herein.
Reference herein to "a plurality of" or "several" means two or more. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the objects before and after it are in an "or" relationship.
In the prior art, an independent wear queue is required to track erase counts; its size equals the number of all data blocks in the solid state disk's flash memory, and the blocks must be ordered by erase count. Garbage collection continuously generates insertion and deletion updates, and because the wear queue is large, maintaining such a large sorted queue requires corresponding memory and computation, which is expensive and inefficient for the infrequent wear leveling operation. Since wear leveling is not a frequent operation, and cold-data migration is similar to wear leveling and likewise infrequent, the cold data queue only needs to be checked periodically. Therefore, the embodiments of the invention provide a wear leveling method based on cold-data migration and garbage collection, which removes the dynamic maintenance of an independent wear queue and reuses the cold data queue and the free queue, thereby reducing the overhead of system memory resources and improving wear leveling efficiency.
In the embodiments of the invention, the following three queues are established:
Free queue: used to track free data blocks; the free data blocks in the free queue are ordered from the fewest to the most erasures.
Cold data queue: used to track written data blocks; the data blocks in the cold data queue are ordered from the earliest to the latest timestamp.
Reclamation queue group: the reclamation level of a block is determined by the amount of invalid data relative to the block size, and the block is added to the queue of the corresponding level; no ordering is needed within each level queue. The queue group also contains a reclamation queue of urgent level, which handles data migration caused by data read/write errors, including read disturb.
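For illustration only, the three queues above can be sketched in Python roughly as follows. The class and field names (`Block`, `erase_count`, `valid_ratio`) and the level count of 4 are assumptions for this sketch, not part of the patented method:

```python
from collections import deque

class Block:
    """A flash data block; the fields shown are illustrative."""
    def __init__(self, block_id, erase_count=0):
        self.block_id = block_id
        self.erase_count = erase_count  # wear indicator
        self.timestamp = None           # set when the block is first written
        self.valid_ratio = 1.0          # fraction of valid data in the block

# Free queue: free blocks ordered by erase count, fewest first.
free_queue = deque()        # head = least worn, tail = most worn

# Cold data queue: written blocks ordered by timestamp, earliest first.
cold_queue = deque()        # head = coldest data, tail = hottest data

# Reclamation queue group: one unordered FIFO per level, plus an
# urgent queue for blocks hit by read/write errors (e.g. read disturb).
NUM_LEVELS = 4              # assumed number of reclamation levels
reclaim_queues = [deque() for _ in range(NUM_LEVELS)]
urgent_queue = deque()
```

Note that only the free queue needs to stay sorted; the cold data queue is sorted as a side effect of appending at write time, which is what lets the method avoid a dedicated wear queue.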
Based on the three queues established above, an embodiment of the invention provides a wear leveling method for a solid state disk which, as shown in Fig. 1, may comprise the following steps:
s11, acquiring a time stamp of a head data block in the cold data queue.
In specific implementation, the cold data queue can be checked periodically according to a set period, and whether the timestamp taking the head of the queue as a source block exceeds a cold data migration threshold value is compared with the current time to judge whether to start cold data migration.
S12, if the time difference between the acquired time stamp and the current time is not larger than a preset cold data migration threshold, acquiring the erasing times of a plurality of data blocks at the front section of the cold data queue.
In this step, the time stamp of the head of the queue data block obtained in step S11 is compared with the current time, the time difference between the two is calculated, if the calculated time difference is greater than a preset cold data migration threshold, cold data migration is started, and if the calculated time difference is not greater than the preset cold data threshold, the number of erasures of a plurality of data blocks in the front section of the cold data queue is obtained.
It should be noted that, the front section of the cold data queue refers to the first N data blocks from the beginning of the cold data queue, N is an integer, and a specific value thereof may be set according to actual needs, for example, N may be set to 10.
S13, if the difference between the erasing times of any one of the plurality of data blocks in the front section and the erasing times of the data blocks at the tail of the idle queue is larger than a preset wear balance threshold value, starting wear balance operation.
It should be noted that, in the embodiment of the present invention, the cold data queue is multiplexed for determining wear leveling. Specifically, in the cold data queue, each data block is arranged from early to late according to the time stamp, the number of times of erasing the data block at the front section of the queue is relatively small, the idle queue is arranged according to the order of the number of times of erasing from small to large, and the number of times of erasing the data block at the tail of the idle queue is large, so that by comparing the difference value between the number of times of erasing the data block at the tail of the idle queue and the number of times of erasing the data block at the head of the cold data queue, whether to start the wear leveling operation can be judged, if the difference value between the number of times of erasing the data block at the tail of the idle queue is larger than the preset wear leveling threshold, the wear leveling operation is started, otherwise, the next processing period is waited for reaching the continuous judgment.
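Steps S11 to S13 can be sketched as a single decision function. The threshold values and the function name are illustrative assumptions; real values would come from the system-initialization parameters described later:

```python
COLD_MIGRATION_THRESHOLD = 7 * 24 * 3600  # seconds; assumed value
WEAR_LEVELING_THRESHOLD = 100             # erase-count gap; assumed value
FRONT_N = 10                              # number of front blocks checked (N)

def check_wear_leveling(cold_queue, free_queue, now):
    """S11-S13: decide between cold-data migration, wear leveling, or nothing.

    Blocks are assumed to expose `timestamp` and `erase_count` attributes.
    """
    if not cold_queue or not free_queue:
        return "idle"
    head = cold_queue[0]
    # S11/S12: a stale timestamp means the head block is genuinely cold.
    if now - head.timestamp > COLD_MIGRATION_THRESHOLD:
        return "cold_migration"
    # S13: compare the front blocks against the most-worn free block.
    tail_erases = free_queue[-1].erase_count
    for blk in list(cold_queue)[:FRONT_N]:
        if tail_erases - blk.erase_count > WEAR_LEVELING_THRESHOLD:
            return "wear_leveling"
    return "idle"
```

Because both comparisons read only the two queue ends and the first N entries, the check is O(N) regardless of how many blocks the flash contains.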
As shown in Fig. 2, in the embodiments of the invention, the wear leveling operation may be performed according to the following procedure:
S21: migrate the valid data in the first source data block into a first target data block at the tail of the free queue.
The first source data block comprises at least one data block at the front of the cold data queue. In a specific implementation, the wear leveling operation can be started for several data blocks at the front of the cold data queue.
S22: determine the reclamation level of the first source data block according to the proportion of valid data in the first source data block.
In this step, the reclamation level of the first source data block may be determined according to the proportion of valid data in it, or according to the proportion of invalid data: the higher the proportion of invalid data, the higher the reclamation level; equivalently, the higher the proportion of valid data, the lower the reclamation level.
It should be noted that after the data migration all data in the source data block is marked as invalid, so the reclamation level of that data block is the highest level.
S23: place the first source data block into the reclamation queue of the corresponding level according to its reclamation level.
In addition, after the data in the first source data block has been migrated to the first target data block, a timestamp is added to the first target data block according to the data migration time, and the first target data block is placed at the tail of the cold data queue.
In a specific implementation, the data migration flows generated by cold-data migration and garbage collection are similar to the data migration flow generated by the wear leveling operation, so they can be implemented with reference to it; details are not repeated here.
In the embodiments of the invention, cold-data migration, wear leveling and garbage collection all produce cold data to be migrated; a free block with a large erase count, i.e. the tail of the free queue, is selected as the target block, and the block is timestamped and placed at the tail of the cold data queue, which means the cold data becomes hot immediately after being migrated.
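Steps S21 to S23, together with the target-block handling just described, can be sketched as follows. The function signature, the use of plain Python lists for the queues, and the omission of the physical page copy are all assumptions of this sketch:

```python
def wear_level_migrate(source, cold_queue, free_queue, reclaim_queues, now):
    """S21-S23: migrate a front-of-cold-queue source block and reclassify it."""
    # S21: the most-worn free block (free-queue tail) is the target.
    target = free_queue.pop()
    # (copying the valid pages from source to target is omitted here)
    target.timestamp = now            # stamp the target with the migration time
    cold_queue.remove(source)
    cold_queue.append(target)         # migrated data becomes "hot" again
    # S22: after migration every page of the source is invalid,
    # so the source gets the highest reclamation level.
    source.valid_ratio = 0.0
    level = len(reclaim_queues) - 1
    # S23: enqueue the source at that level.
    reclaim_queues[level].append(source)
    return target, level
```

The source block is not erased here; erasure happens later in the garbage-collection flow, which also re-inserts the block into the free queue.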
In a specific implementation, an embodiment of the invention further provides a data writing method, as shown in Fig. 3, which may comprise the following steps:
S31: when a data write request is received, write the data to be written into a second target data block at the head of the free queue.
S32: add a timestamp to the second target data block according to the write time.
S33: place the second target data block at the tail of the cold data queue.
In the embodiments of the invention, host writes produce hot, recently written data; a free block with a small erase count, i.e. the head of the free queue, is selected as the target block, and the block is timestamped and placed at the tail of the cold data queue.
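A minimal sketch of the write path S31 to S33, under the same assumptions as the earlier sketches (list-based queues, attribute-bearing block objects, page programming omitted):

```python
def handle_write(free_queue, cold_queue, now):
    """S31-S33: direct a new host write to the least-worn free block."""
    target = free_queue[0]        # S31: head of the free queue, fewest erasures
    # (programming the data pages into the block is omitted here)
    target.timestamp = now        # S32: stamp the block with the write time
    if target not in cold_queue:  # S33: newly written data is hot
        cold_queue.append(target)
    return target
```

The block stays at the head of the free queue while it still has unwritten pages; only when it becomes full is it removed, as the next paragraph describes.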
When the method is implemented, in the process of writing the data to be written, if the second target data block is full, deleting the second target data block from the idle queue; determining the recovery level of the second target data block according to the proportion of the effective data in the second target data block; and placing the second target data block into a recovery queue of a corresponding grade according to the recovery grade of the second target data block.
In such an embodiment, the reclamation level of the second target data block may be determined based on the proportion of valid data in the second target data block. Or determining the recovery level of the second target data block according to the invalid data proportion, wherein the higher the invalid data proportion is, the higher the recovery level is, or the higher the valid data proportion is, the lower the recovery level is. For example, the reclamation level of the second target data block may be determined according to the following formula: (invalid data amount/total data amount) number of levels, wherein the number of levels is the number of different levels of reclamation queues contained in the reclamation queue group.
It should be noted that, the above calculation method of the recovery level is only an optional implementation manner of the embodiment of the present invention, and in specific implementation, other formulas may be used to determine the recovery level, which is not limited by the embodiment of the present invention.
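One plausible reading of the formula above is sketched below. The flooring and the clamp to the top level are assumptions the text leaves open:

```python
import math

def recovery_level(invalid_amount, total_amount, num_levels):
    """level = floor((invalid data amount / total data amount) * number of levels),
    clamped so a fully invalid block lands in the highest-level queue."""
    if total_amount == 0:
        return 0
    level = math.floor(invalid_amount / total_amount * num_levels)
    return min(level, num_levels - 1)
```

With 4 levels, a half-invalid block maps to level 2 and a fully invalid block to level 3, the level reclaimed first.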
In addition, the recovery queue group also comprises an emergency recovery queue. In the data reading and writing process, if a data reading or writing error is detected, including read disturb or a write error, the corresponding data block is deleted from the idle queue or the non-emergency recovery queue and added to the emergency recovery queue.
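The error path above can be sketched as follows (argument names are assumed): a block with a detected read disturb or write error is pulled from whichever queue currently holds it and appended to the emergency recovery queue.

```python
from collections import deque

def on_rw_error(block, free_queue, recovery_queues, emergency_queue):
    """Move a block with a read/write error into the emergency queue."""
    if block in free_queue:
        free_queue.remove(block)
    for q in recovery_queues:        # the non-emergency recovery queues
        if block in q:
            q.remove(block)
    emergency_queue.append(block)    # reclaimed ahead of normal levels
```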
When the method is implemented, if the data capacity of the idle queue is lower than a preset recovery starting threshold or the emergency recovery queue is not empty, selecting a data block at the head of the recovery queue with the highest recovery level as a second source data block; migrating the valid data of the second source data block into a third target data block at the tail of the idle queue; erasing data in the second source data block; and inserting the second source data block into the idle queue according to the erasing times of the second source data block, and keeping the data blocks between the head and the tail of the idle queue ordered according to the erasing times.
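The reclamation pass just described can be sketched as below. Function and field names are assumptions: the trigger is a short idle queue or a non-empty emergency queue; the source is the head of the highest-priority non-empty queue; valid data goes to the block at the idle-queue tail; after erasure the source re-enters the idle queue in erase-count order.

```python
import bisect
from collections import deque
from dataclasses import dataclass

@dataclass
class FlashBlock:                 # illustrative stand-in for a data block
    block_id: int
    erase_count: int = 0

def reclaim_once(free_queue, recovery_queues, emergency_queue,
                 start_threshold, migrate_valid):
    """Run one reclamation step, or return None when no trigger fires."""
    if len(free_queue) >= start_threshold and not emergency_queue:
        return None
    # Emergency queue has top priority, then the highest recovery level.
    for q in (emergency_queue, *reversed(recovery_queues)):
        if q:
            source = q.popleft()
            break
    else:
        return None
    migrate_valid(source, free_queue[-1])   # target = most-erased free block
    source.erase_count += 1                 # erase the source block
    # Re-insert so the idle queue stays sorted by erase count (head = least).
    counts = [b.erase_count for b in free_queue]
    free_queue.insert(bisect.bisect_right(counts, source.erase_count), source)
    return source
```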
According to the data migration operation flow provided by the embodiment of the invention, data blocks near the head of the cold data queue have fewer erasures than data blocks at the tail of the cold data queue.
For a better understanding of the present invention, its practice is described below in conjunction with a specific implementation. During system initialization, the parameters (including the cold data queue checking interval, the cold data migration threshold, the wear leveling threshold, the number of recovery levels and the garbage collection start threshold) are set, all idle data blocks are inserted into the idle queue in order of their erase counts, and the cold data queue and recovery queue group are empty; when the system is restarted, the previously saved idle queue, cold data queue, recovery queue group and other information are loaded. The system then enters the run loop. Fig. 3 is a schematic flow chart of a flash memory read-write operation of a solid state disk according to an embodiment of the present invention, including the following steps:
step 1: judging whether data migration needs to be started or not; if yes, step 2 is carried out, otherwise step 3 is carried out.
Fig. 4 is a schematic flow chart of determining whether to initiate data migration in the embodiment of the present invention, where:
step 1.1: judging whether the cold data checking interval is expired; if yes, step 1.2 is performed, otherwise, step 1.4 is performed.
Step 1.2: judging whether to start cold data migration; if yes, step 2 is carried out, otherwise, step 1.3 is carried out.
Step 1.3: judging whether wear equalization is started or not; if yes, step 2 is carried out, otherwise, step 1.4 is carried out.
Step 1.4: judging whether to start garbage collection; if yes, step 2 is carried out, otherwise step 3 is carried out.
Step 2: removing the source block from the recovery queue or the cold data queue, scheduling the data migration, and entering the next round of the loop.
Step 3: judging whether the scheduled data migration is finished or not; if yes, step 4 is performed, otherwise, step 7 is performed.
Step 4: judging whether the finished data migration belongs to garbage collection or not; if yes, step 5 is performed, otherwise step 6 is performed.
Step 5: erasing the source block, incrementing its erase count, inserting it between the head and the tail of the idle queue in erase-count order, and entering the next round of the loop.
Step 6: the reclamation level of the source block which is fully written is calculated and put at the tail of a corresponding reclamation level queue (the source block which is not fully written is kept in an idle queue) and the next round of circulation is started.
Step 7: judging whether to execute/continue data migration; if yes, go to step 8, otherwise step 9.
Step 8: valid data is read from the source block in preparation for writing the cold target data block.
It should be noted that, since the source block is being migrated, no processing is required if read/write disturb occurs on the source block; if an uncorrectable read error occurs, the data is lost.
Step 9: judging whether the cold target block is ready; if yes, performing step 10 to write the valid data read from the source block into the cold target block, otherwise performing step 11.
Step 10: writing the data into the target block, and performing step 12.
Step 11: selecting the data block at the tail of the idle queue as the target block, time-stamping it, placing it at the tail of the cold data queue, and performing step 10.
Step 12: judging whether the target block has a writing error or not; if yes, go to step 13, otherwise step 14.
Step 13: removing the data block with the read-write error from the idle queue/non-emergency recovery queue, adding it to the emergency recovery queue, and entering the next round of the loop.
Step 14: judging whether the target block is fully written; if yes, go to step 15, otherwise go to the next round of circulation.
Step 15: removing the fully written target data block from the idle queue, calculating its recovery level, placing it at the tail of the corresponding recovery-level queue, and entering the next round of the loop.
Step 16: judging whether to execute front-end data writing; if yes, go to step 17, otherwise step 19.
Step 17: judging whether the thermal target block is ready; if yes, go to step 10, otherwise step 18.
Step 18: selecting the data block at the head of the idle queue as the hot target block, time-stamping it, placing it at the tail of the cold data queue, and writing the front-end data into the hot target block in step 10.
Step 19: judging whether to execute front-end data reading; if yes, go to step 20, otherwise go to the next round of circulation.
Step 20: the data is read according to the logical physical address map, step 21 is performed.
Step 21: judging whether data reading errors (including reading interference) occur or not; if yes, step 13 is performed, otherwise, the next round of circulation is performed.
It should be noted that the order of step 1, step 3, step 7, step 16 and step 19 may be dynamically adjusted according to actual needs according to the priority, and for convenience of description, the embodiment of the present invention is described in the order shown in fig. 3.
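The step 1 decision chain (steps 1.1 through 1.4 above) can be sketched as follows. The predicate arguments stand in for the cold-data, wear-leveling and garbage-collection checks whose details are given elsewhere in the text; all names are illustrative:

```python
def choose_migration(now, state, check_interval, cold_due, wear_due, gc_due):
    """Return the kind of migration to schedule, or None to fall through
    to step 3 of the main loop."""
    if now - state["last_check"] >= check_interval:   # step 1.1: interval expired?
        state["last_check"] = now
        if cold_due():                                # step 1.2: cold-data migration
            return "cold_migration"
        if wear_due():                                # step 1.3: wear leveling
            return "wear_leveling"
    if gc_due():                                      # step 1.4: garbage collection
        return "garbage_collection"
    return None
```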
According to the wear balancing method for the solid state disk provided by the embodiment of the invention, the data blocks in the idle queue are arranged in ascending order of erase count, and the cold data queue is arranged from the earliest time stamp to the latest, so the erase count of the data block at the head of the cold data queue is relatively small. The erase count of the head data block of the cold data queue and that of the tail data block of the idle queue can therefore be used to decide whether to start wear leveling. In this process, wear leveling is realized by multiplexing the cold data queue and the idle queue, and no dedicated wear leveling queue needs to be maintained, thereby reducing the overhead of system memory resources.
Based on the same technical concept, the embodiment of the invention also provides a solid state disk wear balancing device, as shown in fig. 5, which comprises:
a first obtaining unit 51, configured to obtain a timestamp of a first data block in a cold data queue, where the data blocks in the cold data queue are arranged according to an order from early to late;
a second obtaining unit 52, configured to obtain the number of times of erasing the plurality of data blocks in the front section of the cold data queue if the time difference between the timestamp obtained by the first obtaining unit and the current time is not greater than a preset cold data migration threshold;
and a processing unit 53, configured to start wear leveling operation if a difference between an erasure number of any one of the plurality of data blocks in the front section acquired by the second acquiring unit and an erasure number of a data block at a tail of the free queue is greater than a preset wear leveling threshold, where the free data blocks in the free queue are arranged in order of from small to large of the erasure number.
In one embodiment, the processing unit 53 is specifically configured to migrate valid data in a first source data block to a first target data block at the tail of the idle queue, where the first source data block includes at least one data block at the front section of the cold data queue; determine the recovery level of the first source data block according to the valid data proportion in the first source data block; and place the first source data block into a recovery queue of a corresponding level according to the recovery level of the first source data block.
In one embodiment, the processing unit 53 is further configured to add a timestamp to the first target data block according to a data migration time; and placing the first target data block into the tail of the cold data queue.
In one embodiment, the processing unit 53 is further configured to, when receiving a data writing request, write data to be written into a second target data block of the free queue head; adding a time stamp of the second target data block according to the writing time; and placing the second target data block into the tail of the cold data queue.
In one embodiment, the processing unit 53 is further configured to, during writing the data to be written, delete the second target data block from the free queue if the second target data block is full; determining the recovery level of the second target data block according to the proportion of the effective data in the second target data block; and placing the second target data block into a recovery queue of a corresponding grade according to the recovery grade of the second target data block.
In one embodiment, the reclamation queue further comprises an emergency reclamation queue;
the processing unit 53 is further configured to select, as the second source data block, the data block at the head of the recovery queue with the highest recovery level if the data capacity of the idle queue is lower than a preset recovery start threshold or the emergency recovery queue is not empty; migrate the valid data of the second source data block into a third target data block at the tail of the idle queue; erase the data in the second source data block; and insert the second source data block into the idle queue according to the erase count of the second source data block.
In one embodiment, the processing unit 53 is further configured to delete, during the data reading and writing process, the corresponding data block from the free queue or the non-urgent recovery queue if a data reading and writing error is detected; and added to the emergency recovery queue.
For convenience of description, the above parts are described as being divided into modules (or units) by function. Of course, when implementing the present invention, the functions of each module (or unit) may be implemented in one or more pieces of software or hardware.
Having described the method and apparatus for wear leveling of a solid state disk according to an exemplary embodiment of the present invention, a computing apparatus according to another exemplary embodiment of the present invention is described next.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method, or program product. Accordingly, aspects of the invention may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
In some possible implementations, a computing device according to the invention may include at least one processor, and at least one memory. Wherein the memory stores program code that, when executed by the processor, causes the processor to perform the steps in the solid state disk wear leveling method according to various exemplary embodiments of the present invention described above in this specification. For example, the processor may perform step S11 of acquiring a time stamp of a head data block in the cold data queue as shown in fig. 1, and step S12 of acquiring the number of erasures of a number of data blocks in a front section of the cold data queue if a time difference between the acquired time stamp and a current time is not greater than a preset cold data migration threshold; and step S13, if the difference between the erasing times of any one of the plurality of data blocks in the front section and the erasing times of the data blocks at the tail of the idle queue is larger than a preset wear balance threshold value, starting wear balance operation.
A computing device 60 according to such an embodiment of the invention is described below with reference to fig. 6. The computing device 60 shown in fig. 6 is merely an example and should not be taken as limiting the functionality and scope of use of embodiments of the present invention.
As shown in fig. 6, the computing device 60 is in the form of a general purpose computing device. Components of computing device 60 may include, but are not limited to: at least one processor 61, at least one memory 62, and a bus 63 connecting the different system components (including the memory 62 and the processor 61).
Bus 63 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
Memory 62 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 621 and/or cache memory 622, and may further include Read Only Memory (ROM) 623.
Memory 62 may also include a program/utility 625 having a set (at least one) of program modules 624, such program modules 624 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The computing device 60 may also communicate with one or more external devices 64 (e.g., keyboard, pointing device, etc.), one or more devices that enable a user to interact with the computing device 60, and/or any devices (e.g., routers, modems, etc.) that enable the computing device 60 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 65. Moreover, computing device 60 may also communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through network adapter 66. As shown, network adapter 66 communicates with other modules for computing device 60 over bus 63. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with computing device 60, including, but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
In some possible embodiments, aspects of the solid state disk wear leveling method provided by the present invention may also be implemented in a form of a program product, which includes program code, when the program product runs on a computer device, for causing the computer device to perform the steps in the solid state disk wear leveling method according to the various exemplary embodiments of the present invention described in the present specification, for example, the computer device may perform step S11 as shown in fig. 1, obtain a timestamp of a first data block in a cold data queue, and step S12, obtain the number of erasures of a number of data blocks in a front section of the cold data queue if a time difference between the obtained timestamp and a current time is not greater than a preset cold data migration threshold; and step S13, if the difference between the erasing times of any one of the plurality of data blocks in the front section and the erasing times of the data blocks at the tail of the idle queue is larger than a preset wear balance threshold value, starting wear balance operation.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for wear leveling of solid state disks of embodiments of the present invention may employ a portable compact disk read-only memory (CD-ROM) and include program code and may run on a computing device. However, the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
While the invention has been described in connection with specific embodiments thereof, it is to be understood that the invention is not limited to the precise form and details of construction illustrated.

Claims (8)

1. A wear balancing method for a solid state disk, characterized by comprising the following steps:
acquiring a time stamp of a first data block in a cold data queue, wherein the data blocks in the cold data queue are arranged according to the sequence from the early to the late of the time stamp;
if the time difference between the acquired time stamp and the current time is not greater than a preset cold data migration threshold value, acquiring the erasing times of a plurality of data blocks at the front section of the cold data queue;
if the difference between the erasing times of any one of the plurality of data blocks in the front section and the erasing times of the tail data block of the idle queue is larger than a preset wear leveling threshold, starting wear leveling operation, wherein the idle data blocks in the idle queue are arranged according to the sequence of the erasing times from small to large, and the starting wear leveling operation specifically comprises the following steps: migrating valid data in a first source data block into a first target data block at the tail of an idle queue, wherein the first source data block comprises at least one data block at the front section of the cold data queue;
determining the recovery level of the first source data block according to the valid data proportion in the first source data block;
according to the recovery grade of the first source data block, the first source data block is put into a recovery queue of a corresponding grade;
if the data capacity of the idle queue is lower than a preset recovery starting threshold value or the emergency recovery queue is not empty, selecting a data block with the highest recovery level at the head of the recovery queue as a second source data block;
migrating the valid data in the second source data block to a third target data block at the tail of the idle queue;
erasing data in the second source data block;
inserting the second source data block into the idle queue according to the erasing times of the second source data block;
and determining the recovery level of the second target data block according to the valid data proportion in the second target data block, or determining the recovery level of the second target data block according to the invalid data proportion.
2. The method as recited in claim 1, further comprising:
adding a time stamp to the first target data block according to the data migration time;
and placing the first target data block into the tail of the cold data queue.
3. The method as recited in claim 1, further comprising:
when a data writing request is received, writing the data to be written into a second target data block at the head of the idle queue;
adding a time stamp of the second target data block according to the writing time;
and placing the second target data block into the tail of the cold data queue.
4. A method according to claim 3, further comprising:
deleting the second target data block from the idle queue if the second target data block is full in the process of writing the data to be written;
determining the recovery level of the second target data block according to the proportion of the effective data in the second target data block;
and placing the second target data block into a recovery queue of a corresponding grade according to the recovery grade of the second target data block.
5. The method as recited in claim 1, further comprising:
in the data reading and writing process, if the data reading and writing errors are detected, deleting the corresponding data blocks from the idle queue or the non-emergency recovery queue; and added to the emergency recovery queue.
6. A solid state disk wear balancing device, characterized by comprising:
the first acquisition unit is used for acquiring the time stamp of the first data block in the cold data queue, and the data blocks in the cold data queue are arranged according to the sequence from the early to the late of the time stamp;
the second acquisition unit is used for acquiring the erasing times of a plurality of data blocks at the front section of the cold data queue if the time difference between the time stamp acquired by the first acquisition unit and the current time is not larger than a preset cold data migration threshold value;
the processing unit is used for starting the wear leveling operation if the difference between the erasing times of any one of the plurality of data blocks in the front section and the erasing times of the tail data block of the idle queue is larger than a preset wear leveling threshold value, and the idle data blocks in the idle queue are arranged according to the sequence of the erasing times from small to large, wherein the starting the wear leveling operation specifically comprises: migrating valid data in a first source data block into a first target data block at the tail of an idle queue, wherein the first source data block comprises at least one data block at the front section of the cold data queue;
determining the recovery level of the first source data block according to the valid data proportion in the first source data block;
according to the recovery grade of the first source data block, the first source data block is put into a recovery queue of a corresponding grade;
if the data capacity of the idle queue is lower than a preset recovery starting threshold value or the emergency recovery queue is not empty, selecting a data block with the highest recovery level at the head of the recovery queue as a second source data block;
migrating the valid data in the second source data block to a third target data block at the tail of the idle queue;
erasing data in the second source data block;
inserting the second source data block into the idle queue according to the erasing times of the second source data block;
and determining the recovery level of the second target data block according to the valid data proportion in the second target data block, or determining the recovery level of the second target data block according to the invalid data proportion.
7. A computing device, the computing device comprising: memory, a processor and a computer program stored on the memory and executable on the processor, which when executed by the processor, performs the steps of the method according to any one of claims 1 to 5.
8. A computer storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method according to any of claims 1 to 5.
CN202010195586.7A 2020-03-19 2020-03-19 Wear balancing method and device for solid state disk Active CN111414138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010195586.7A CN111414138B (en) 2020-03-19 2020-03-19 Wear balancing method and device for solid state disk

Publications (2)

Publication Number Publication Date
CN111414138A CN111414138A (en) 2020-07-14
CN111414138B true CN111414138B (en) 2024-02-13

Family

ID=71491246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010195586.7A Active CN111414138B (en) 2020-03-19 2020-03-19 Wear balancing method and device for solid state disk

Country Status (1)

Country Link
CN (1) CN111414138B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112416254B (en) * 2020-11-27 2022-08-12 天地伟业技术有限公司 Optimization method for use efficiency of disk space
CN114327279A (en) * 2021-12-29 2022-04-12 湖南国科微电子股份有限公司 Data migration method and device
CN115586874B (en) * 2022-11-24 2023-03-10 苏州浪潮智能科技有限公司 Data block recovery method and device, electronic equipment and storage medium
CN117666972A (en) * 2024-02-01 2024-03-08 合肥康芯威存储技术有限公司 Storage device and data processing method thereof

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101354681A (en) * 2008-09-23 2009-01-28 美商威睿电通公司 Memory system, abrasion equilibrium method and apparatus of non-volatile memory
CN101499033A (en) * 2008-12-18 2009-08-05 上海全景数字技术有限公司 Flash memory data saving method
CN102081576A (en) * 2011-03-01 2011-06-01 华中科技大学 Flash memory wear balance method
CN104461915A (en) * 2014-11-17 2015-03-25 苏州阔地网络科技有限公司 Method and device for dynamically allocating internal storage in online class system
CN106484323A (en) * 2016-09-13 2017-03-08 郑州云海信息技术有限公司 A kind of loss equalizing method of solid-state storage and system
CN106681935A (en) * 2016-12-29 2017-05-17 郑州云海信息技术有限公司 Trash recycling method for solid state disk
CN106951187A (en) * 2017-03-07 2017-07-14 记忆科技(深圳)有限公司 A kind of solid-state storage static wear implementation method in a balanced way
US10008275B1 (en) * 2017-05-24 2018-06-26 Lite-On Electronics (Guangzhou) Limited Control method for solid state storage device
CN108255419A (en) * 2017-12-19 2018-07-06 深圳忆联信息系统有限公司 A kind of abrasion equilibrium method and SSD for TLC types SSD
CN108363813A (en) * 2018-03-15 2018-08-03 北京小度信息科技有限公司 Date storage method, device and system
CN108681435A (en) * 2018-05-25 2018-10-19 郑州云海信息技术有限公司 A kind of abrasion equilibrium method of solid state disk, device, equipment and storage medium
CN109753443A (en) * 2019-01-12 2019-05-14 湖南国科微电子股份有限公司 A kind of data processing method, device and electronic equipment
CN110688326A (en) * 2019-09-23 2020-01-14 西安微电子技术研究所 Flash memory wear leveling algorithm based on multiple thresholds

Also Published As

Publication number Publication date
CN111414138A (en) 2020-07-14

US20190339898A1 (en) Method, system and computer program product for managing data storage in data storage systems
US11645198B2 (en) Buffer pool maintenance improvement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211208

Address after: 250100, Room 2-302-131, Building 7, North District, Shandong Design Creative Industrial Park, No. 868 Tangye West Road, Jinan Area, China (Shandong) Pilot Free Trade Zone, Jinan, Shandong Province

Applicant after: Rongming Microelectronics (Jinan) Co., Ltd.

Address before: 266100, Blue Silicon Valley Entrepreneurship Center, No. 1 Keji Road, Jimo District, Qingdao, Shandong

Applicant before: QINGDAO RONGMING SEMICONDUCTOR Co., Ltd.

GR01 Patent grant