CN112596673B - Multiple-active multiple-control storage system with dual RAID data protection - Google Patents

Multiple-active multiple-control storage system with dual RAID data protection

Info

Publication number
CN112596673B
Authority
CN
China
Prior art keywords
raid
data
storage
controller
stripe
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011508298.9A
Other languages
Chinese (zh)
Other versions
CN112596673A (en)
Inventor
胡晓宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Daoshang Information Technology Co ltd
Original Assignee
Nanjing Daoshang Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Daoshang Information Technology Co ltd filed Critical Nanjing Daoshang Information Technology Co ltd
Priority to CN202011508298.9A priority Critical patent/CN112596673B/en
Publication of CN112596673A publication Critical patent/CN112596673A/en
Application granted granted Critical
Publication of CN112596673B publication Critical patent/CN112596673B/en


Classifications

    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0689 Disk arrays, e.g. RAID, JBOD
    • G06F3/0614 Improving the reliability of storage systems
    • G06F3/0631 Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F3/0658 Controller construction arrangements
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a multi-active, multi-controller storage system with dual RAID data protection that organically combines controller-local RAID with network RAID. The system comprises at least two storage controllers connected over a network, each with an independently operating local RAID protection function. Every user data block is stored in at least two copies placed on different controllers, so that the failure or crash of any single storage controller neither interrupts the storage service nor loses data. When a hard disk fails, data repair preferentially uses the local RAID; when the local RAID on a node cannot recover the data, protection between storage controllers is used to rebuild it. The invention further improves the metadata management scheme and combines data compression, deduplication, integrity checking, self-repair and related techniques to greatly raise storage utilization and performance, and it improves the GC (garbage collection) policy so that write amplification caused by GC is markedly reduced and storage IO performance is improved.

Description

Multiple-active multiple-control storage system with dual RAID data protection
Technical Field
The invention belongs to the technical field of disk arrays and external storage systems and relates to a novel dual RAID (Redundant Array of Inexpensive Disks) technique, namely a network RAID implementation that combines RAID inside each controller with RAID spanning at least two storage controllers; in particular, it relates to a multi-active, multi-controller storage system that further combines data compression, data deduplication, data integrity checking and self-repair on top of dual RAID data protection.
Background
With the deepening of digital transformation, massive data places new demands on storage. Although the traditional disk array offers mature technology, good performance and high availability, its shortcomings become increasingly obvious in the face of massive data: hard disk capacity keeps growing while hard disk reliability and data error-rate parameters do not improve noticeably, so the latent risk of a RAID disk group grows ever more serious. In addition, the scalability of disk arrays is severely limited by the shared hardware RAID controller architecture, which is difficult to scale from the traditional dual-controller design to many controllers.
The traditional disk array works on the RAID principle: disks are organized into a group, and data is deliberately dispersed across them to improve safety. A disk array combines many inexpensive disks of modest capacity and speed into one large disk group, and the aggregate effect of the individual disks serving data in parallel improves the performance of the whole disk system. At the same time, data is cut into segments that are stored on different hard disks. A disk array can also use parity check (Parity Check): when any hard disk in the array fails, data can still be read, and during reconstruction the data is recomputed and placed onto a new hard disk.
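For illustration only, the parity idea behind such reconstruction can be sketched as follows; this is a generic XOR example with made-up block contents, not the specific RAID scheme of the invention.

```python
# Generic XOR-parity sketch (illustrative only; not the invention's RAID layout).

def xor_parity(blocks):
    """Compute the parity block of equally sized data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def rebuild_missing(surviving_blocks, parity):
    """Recreate a single missing block from the survivors plus the parity block."""
    return xor_parity(surviving_blocks + [parity])

d0, d1 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40"
p = xor_parity([d0, d1])                  # parity written to the check disk
assert rebuild_missing([d1], p) == d0     # d0 can be recomputed after one disk failure
```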
The most commonly used RAID modes include RAID 0, RAID 1, RAID 10, RAID 2, RAID 3, RAID 4, RAID 5, RAID 6 and RAID 7, as well as combined modes such as RAID 50 and RAID 60. The purpose of RAID is to protect data when a mechanical hard disk (HDD) fails: depending on the RAID type, the data on a failed HDD can be recreated from parity or from mirrored copies.
Each RAID level trades off write performance, read performance, data protection level, reconstruction speed and usable capacity per hard disk differently. For example, mirroring or multiple mirroring (RAID 1, RAID 10, triple mirroring, etc.) is the best choice if data availability has the highest priority: the data has a complete copy on another HDD or RAID, which simplifies protection and recovery, but the cost is a serious challenge. In practice few organizations adopt this purely mirrored approach; most are more willing to accept RAID 5 or RAID 6.
When a hard disk in a RAID 5 group fails, the system reconstructs the data of the failed disk with the help of the parity disk, but as hard disk capacity reaches 10TB the rebuild often takes hours, days or even weeks, and system performance drops while it runs. If the application cannot tolerate the performance loss, the rebuild can only run at low priority, which lengthens the rebuild time significantly, and a longer rebuild time means a greater risk of data loss. For this reason many companies have moved directly to RAID 6.
RAID 6 protects data with a second parity or striped parity disk, so the risk of data loss is significantly reduced even if two hard disks fail or unrecoverable read errors occur; however, if the data of two hard disks must be reconstructed simultaneously, the impact on system performance is severe. In addition, RAID 6 still faces mechanical wear and dust damage, and although most storage systems include automatic error-correction functions, the time these operations require grows dramatically as hard disk capacity increases.
Some storage manufacturers have partially addressed the traditional RAID problems through reliability-oriented innovations, for example IBM's EVENODD and NetApp's RAID-DP, which strengthen RAID 6 performance by reducing algorithmic overhead. NEC's RAID-TM aims at reducing the data-loss risk of RAID 1 by writing data to three independent hard disks simultaneously: even if two hard disks fail or unrecoverable read errors occur, applications can still access their data without performance degradation, and performance is unaffected even during reconstruction. The drawback of RAID-TM is obvious, however: disk space utilization is only 1/3.
RAID-X is an innovation of the IBM XIV storage system that uses a very large number of stripes to reduce both the performance loss and the data-loss risk of RAID; it can be seen as a variant of RAID 10 that uses an intelligent risk algorithm to randomly distribute data blocks across the entire array, which allows the XIV to rebuild a 2TB HDD in less than 30 minutes. Like other mirroring techniques, its disadvantages are low disk space utilization and the absence of controller-local RAID.
Finally, it is worth noting that Hewlett-Packard's LeftHand Networks and Pivot3 provide similar Network RAID variants for their x86-based clustered iSCSI storage. Network RAID applies the RAID principle but uses storage nodes instead of disks as the lowest component level, so it can distribute the data blocks of a logical volume across the cluster and, depending on the Network RAID level, provide one to four data mirrors with very high scalability. It also has self-healing capability: when one node fails, it can correct the data and then copy it to another node. Although Network RAID reduces the risk of data loss, reconstructing data over the network has high latency and consumes network bandwidth heavily; how to reduce a storage system's dependence on network RAID while preserving network RAID scalability has long been a technical problem in the storage field. Furthermore, as with other mirroring techniques, low disk space utilization is a significant disadvantage.
Disclosure of Invention
Purpose of the invention: in a traditional disk array the same RAID group is shared or jointly managed by multiple controllers, so the RAID group becomes both a key failure source and a performance bottleneck of the storage system. To address this, the invention provides a novel multi-active, multi-controller storage system based on dual RAID data protection, which organically combines RAID groups managed independently inside each storage controller with cross-node mirror protection (a form of network RAID) to achieve dual (two-layer) RAID data protection, and which can scale the performance and capacity of the storage system synchronously. On this basis it further adopts novel metadata management, dynamic formation of RAID data packets with dynamic address allocation, and an adaptive garbage-collection strategy, which improve the overall IO performance of the system and save storage space.
Technical scheme: to achieve the above purpose, the invention adopts the following technical scheme.
a multi-activity multi-control storage system with dual RAID data protection comprises at least two storage controllers, wherein each storage controller has an independent RAID function, different storage controllers are connected through a network to realize cross-node mirror copy or network RAID of user data blocks, a dual data redundancy structure is formed, and RAID data protection in the storage controllers and cross-node mirror copy or RAID are combined to realize two-layer RAID data protection; the logical address space of the storage system is divided into addressing spaces with fixed sizes, and data blocks of the address space are stored at least in two parts and distributed to local RAID of different storage controllers for storage; when the hard disk is damaged, the storage controller preferably recovers and reconstructs the data by utilizing the local RAID function, and if the local RAID group cannot recover the data, a protection mechanism among the storage controllers is adopted to recover and reconstruct the data. .
As an alternative implementation, the storage system adopts two-level metadata management comprising two levels of address mapping. The first level of address mapping assigns each user logical address space to at least two storage controllers according to the number of cross-controller copies and a pre-fixed address mapping method, and establishes an address mapping relation with the logical address space of each storage controller's local RAID group. The second level of address mapping is implemented by the local RAID of each storage controller and completes the translation from logical addresses to physical addresses of the local RAID group.
Because the two-level metadata management uses a pre-fixed address mapping method, metadata management is simple and no large-capacity storage is needed to hold the metadata. Metadata management of the RAID groups can be realized with a traditional hardware RAID controller card or with soft-RAID functions such as mdraid or LVM in the Linux kernel. The drawback of this method is lower performance: every data block write causes data updates in at least two RAID groups and therefore IO amplification. In addition, although dual RAID greatly improves reliability thanks to the mirror redundancy between nodes, its storage efficiency is lower, leaving room for further improvement.
As a preferred implementation, the storage system adopts global single-level metadata management and dynamic address allocation for data writing. When a user writes data, at least two storage controllers to store the user data blocks are determined by a pseudo-random algorithm, and the user data blocks are sent to the first buffer of the corresponding storage controllers. After deduplication and compression they are sent to a second buffer, where several compressed data blocks are combined and the address mapping information, compression algorithm and check information of the data blocks are added to form a RAID stripe. The RAID stripe is written to a dynamically allocated address space, and the logical addresses and actual write addresses of all user data blocks in the RAID stripe are stored in the global metadata table of the storage controller.
As a preferred embodiment, the disks in a storage controller form a wide-stripe storage pool whose logical address space starts from the first address space of the first disk, moves to the first address space of the next disk in disk order, and proceeds disk by disk; it then cycles back to the second address space of the first disk, and so on round after round. The storage controller dynamically allocates a contiguous storage address space for each RAID stripe in front-to-back order of the storage pool's logical address space and writes the RAID stripe there.
As a preferred embodiment, the global metadata table of the storage controller includes an address mapping table, which records the actual write address corresponding to the logical address of each user data block, and a data Hash table used for the data deduplication function.
Further, a garbage collection (GC) strategy is adopted in the storage system so that the storage space occupied by invalid data blocks can be reused. As a preferred embodiment, the GC process combines the address mapping information stored in the RAID stripe with the address mapping information in the global metadata table to determine whether a data block on the RAID stripe is valid: if the two are consistent the data block is valid, and if they are inconsistent the data block is invalid.
As an alternative implementation, a FIFO garbage-collection strategy is adopted: before the free space of a storage controller's storage pool is exhausted, the GC process is started and the earliest-written RAID stripes are checked in order for valid data. If valid data is found, the valid data blocks are read out again and sent to the second buffer, where they are packed together with newly written data blocks into a new RAID stripe; the reclaimed RAID stripe then enters the free stripe pool and is reused for writing data.
In the FIFO reclamation strategy above, if a RAID stripe to be reclaimed still contains valid data, that data is packed again as if it were new data and rewritten, causing extra write operations, commonly known as write amplification (Write Amplification, WA). Although the dynamic allocation and sequential writing of RAID groups described above can substantially improve system performance, excessive WA can cancel out part of the gain.
To reduce WA, as a preferred embodiment the storage system adopts an adaptive garbage-collection policy: the data blocks relocated by GC and the newly written data blocks are separated into RAID stripes of their own, and each stripe carries a digital tag recording how many times the data on it has already been reclaimed; the storage system then determines the relative reclaim frequency of a stripe according to its digital tag.
Beneficial effects: compared with the prior art, the invention has the following specific advantages:
1. The invention provides dual RAID data protection by combining the RAID local to each storage controller with copies across controllers (a form of network RAID). When a hard disk fails or is damaged, data repair is performed preferentially by the controller-local RAID without affecting the work of other controllers; when the hard disk failures exceed the local RAID protection capability, data can be repaired through the network RAID. Compared with a traditional disk array, the invention greatly improves data durability and storage-system reliability.
2. The invention provides a pre-fixed two-level address mapping method for managing and realizing dual RAID protection. The first-level mapping maps the user logical address space to the logical address spaces of the RAID groups inside each controller; the second-level mapping is each controller RAID group's conversion from logical to physical addresses, including computing and saving the redundant data blocks of the RAID. The two-level mapping method is simple to realize, and the second level can be implemented with an existing hardware RAID controller or soft-RAID method.
3. The invention provides a novel metadata management and dynamic write-address allocation method for realizing dual RAID protection, which integrates online data compression, data deduplication, dynamic construction of RAID groups, sequential writing and other functions, and can greatly improve performance and storage resource utilization.
4. The invention provides a new GC method: by giving each RAID stripe a digital tag based on its reclaim count and classifying relocated data according to this tag, data with similar update frequency is placed in the same RAID stripe, which effectively reduces the write amplification of the GC process and improves system performance.
Drawings
FIG. 1 is an overall architecture diagram of a dual RAID implementation of an embodiment of the present invention, in which the user logical address space is mapped to two different controllers according to a modulo policy to form global fine-grained data mirroring protection, and the data is further protected by the RAID groups inside each controller.
FIG. 2 is an overall architecture diagram of a dual RAID implementation of another embodiment of the present invention, in which the user logical address space is mapped to two different controllers according to a placement policy, after which the data on each node is protected by RAID with dynamically allocated RAID stripe addresses.
FIG. 3 is a flow chart of deduplication and compression in the dynamic write-address allocation method of an embodiment of the present invention: in dual RAID, data is first deduplicated, then compressed online, and then reassembled into data packets in the cache.
FIG. 4 is a block diagram of the dynamic write-address allocation method of an embodiment of the present invention: in dual RAID, after data has been deduplicated and compressed, the metadata and redundancy check codes required for RAID protection are added in the cache to form complete RAID stripes with local data-protection capability.
FIG. 5 is a flow chart of sequential allocation in the dynamic write-address allocation method of an embodiment of the present invention: in dual RAID, free RAID stripes are allocated sequentially while GC space reclamation also proceeds sequentially.
Fig. 6 is a schematic diagram of the novel GC method based on digital-tag classification in an embodiment of the present invention: by giving each RAID stripe a digital tag based on its reclaim count and classifying relocated data according to that tag, data with similar update frequency is placed in the same RAID stripe, which effectively reduces write amplification during GC and improves system performance.
Detailed Description
The invention will be further described with reference to the drawings and the specific examples.
In existing disk array technology, a RAID group is either shared by two or more controllers (multi-active mode) or switched between controllers (active-standby mode); but when disk failures inside the RAID group exceed its protection capability, data loss is unavoidable. Specifically, for RAID 5, data is lost when two hard disks in one RAID group are damaged at the same time, or when one hard disk is damaged and an individual data block on another hard disk cannot be read correctly (so-called Media Errors). Hard disk capacity keeps increasing while hard disk reliability indicators do not improve correspondingly, so the above problems call for better data-protection strategies.
To address the shortcomings of existing RAID technology, the multi-active, multi-controller storage system based on dual RAID data protection disclosed in the embodiments of the invention consists of at least two storage controllers connected by a high-speed switched network (TCP/Ethernet or InfiniBand). Each controller (storage controller) has an independently working RAID function and forms a controller-local logical address space with RAID data protection, consisting of the logical address spaces of one or more RAID groups. When any hard disk on a controller is damaged, local (non-network) data recovery and reconstruction can be achieved through the RAID function inside that controller, and the rebuild does not affect the normal operation of the other controllers. The logical address space of the storage system, i.e. the user address space, is first divided into fixed-size addressing spaces (each corresponding to a data block whose granularity can be 4KB, 8KB, 16KB, or even 1MB or 4MB; the size is configurable). These data blocks are assigned to two different controllers by a pseudo-random algorithm and stored at logical addresses of different RAID groups, so that the user data forms a multi-copy protection mechanism across controllers through fine-grained random placement. Then the data blocks assigned to the same RAID group of a controller are repackaged, the RAID redundant data blocks are computed to form a local RAID group, and the result is written to the physical disks corresponding to that RAID group. It is worth noting that the use of a pseudo-random algorithm allows a storage system with this dual RAID architecture to be extended to hundreds or even thousands of storage controllers.
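A minimal sketch of such pseudo-random copy placement is given below; the hash function (blake2b), the 16KB granularity and the two-copy policy are assumptions chosen for the example rather than the algorithm fixed by the embodiment.

```python
# Sketch of pseudo-random placement of a block's copies across controllers.
import hashlib

BLOCK_SIZE = 16 * 1024        # assumed fixed addressing granularity (16KB)

def place_copies(lbn, num_controllers, copies=2):
    """Return the distinct controllers holding the copies of user block `lbn`."""
    digest = hashlib.blake2b(str(lbn).encode(), digest_size=8).digest()
    first = int.from_bytes(digest, "big") % num_controllers
    return [(first + k) % num_controllers for k in range(copies)]

# User block 42 lands on two different controllers out of four:
print(place_copies(42, num_controllers=4))
```

Because the placement is a pure function of the block number, it scales to many controllers without a central lookup service.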
The dual RAID technology adopted in the embodiments of the invention builds a locally exclusive RAID group for each controller, so that data recovery is performed locally without consuming the network resources of the storage system or the computing and storage resources of other controllers; only when the local RAID group cannot recover the data is recovery achieved through the data-copy technique between controllers. The combination of the controller-exclusive local RAID function and the inter-controller data-copy technique can use two-level metadata management, in which a pseudo-random algorithm and a pre-fixed address mapping table determine the correspondence of data from logical addresses to physical addresses. The first level of address mapping maps the user logical address space to the logical address spaces of the different RAID groups in each controller: each user logical address space is assigned to two or more controllers according to the number of cross-controller copies and forms an address mapping relation with a particular logical address space of a particular RAID group, which determines where user data is read and written. The second level of address mapping is implemented by each controller's local RAID and completes the translation from logical to physical addresses for each RAID group, including computing and storing the redundant data blocks of the RAID. Furthermore, the invention proposes a novel universal single-level metadata management and dynamic write-address allocation method for dual RAID protection, which integrates online data compression, data deduplication, dynamic RAID group construction, sequential writing and other functions and can greatly improve performance and storage resource utilization.
First, the two-level metadata management of dual RAID is described in detail. In each controller, one or more local RAID groups, containing several physical disks (mechanical hard disks (HDD) or solid state disks (SSD)), are managed by a controller-exclusive hardware RAID card or Linux soft-RAID software (or other similar software); this is called local RAID management (the bottom RAID layer). Local RAID metadata is managed per controller and exposes to the upper layer a locally readable and writable logical address space protected by RAID; it is responsible for mapping the logical address space of the local RAID group onto the individual HDD/SSD addresses, for generating and managing the redundant data blocks of the RAID group, and for performing the data-repair task when a hard disk is damaged. When a hard disk fails, the controller repairs the data locally through the local RAID, and the operation of the upper network-RAID layer is not affected.
Cross-controller RAID management, i.e. the upper RAID layer, is realized through network RAID: data blocks obtain a redundant structure through mirror copies or RAID groups constructed across different controllers, thereby protecting the data. Since the controllers are essentially independent computer systems connected by a high-speed network, this is called network RAID. Network RAID can also be a distributed RAID, typically employing some distributed algorithm to decide how to construct mirror copies or a RAID group across multiple controllers. Ordinary distributed storage is usually recommended to use three copies to ensure data protection and service continuity in production, but because the dual RAID of the invention already has an independent layer of local RAID protection on each controller, the network-RAID layer can achieve extremely high data durability with only two mirror copies. Of course, the upper RAID of the invention does not exclude three copies or other erasure-code protection schemes.
Specifically, FIG. 1 shows the overall structure of the dual RAID method with two-level metadata management: four controllers each have two RAID groups forming their respective local logical address spaces. When data of the local logical address space is written to a RAID group, the controller computes the redundant data blocks (P) according to the rules of the RAID group, writes them automatically, and keeps them consistent with the data of the group, thus forming the local RAID protection. In the figure, the user logical address space maps the addresses of the user data, through a fixed mapping relation, to the local logical address spaces on two different controllers; that is, each piece of user data is kept in the local logical address spaces of two different controllers, forming data protection across the controller layer. In this diagram, copy 1 of data block 1 is mapped to the first data block of controller 1 and copy 2 to the first data block of controller 2; copy 1 of data block 2 maps to the 2nd data block of controller 2 and copy 2 to the first data block of controller 3; copy 1 of data block 3 maps to the 2nd data block of controller 3 and copy 2 to the first data block of controller 4; copy 1 of data block 4 maps to the 2nd data block of controller 4 and copy 2 to the second data block of controller 1.
By analogy, copy 1 of data block 5 maps to the 3rd data block of controller 1 and copy 2 to the 3rd data block of controller 2; copy 1 of data block 6 maps to the 4th data block of controller 2 and copy 2 to the 3rd data block of controller 3; copy 1 of data block 7 maps to the 4th data block of controller 3 and copy 2 to the 3rd data block of controller 4; copy 1 of data block 8 maps to the 4th data block of controller 4 and copy 2 to the 4th data block of controller 1.
In the above example, the pre-fixed mapping from the user logical address space to the local logical address space of each controller can be constructed with simple modulo operations. One advantage of this approach is that even in a very large storage system, the controller and the specific logical address of each data block and of its copies can still be computed with simple modulo operations, without consuming a large non-volatile cache to maintain an address mapping table.
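A minimal sketch of such a modulo-based first-level mapping follows; the exact slot arithmetic is an illustrative assumption and does not reproduce the layout of FIG. 1 verbatim.

```python
# Pre-fixed first-level mapping computed purely by modulo arithmetic,
# so no persistent mapping table is required (illustrative layout).

def first_level_map(lbn, num_controllers):
    """Map a user logical block number to (controller, local slot) for each copy."""
    rnd = lbn // num_controllers                # which layout round the block falls in
    placements = []
    for copy in range(2):                       # two cross-controller copies
        ctrl = (lbn + copy) % num_controllers   # the copies land on different controllers
        slot = 2 * rnd + copy                   # collision-free local logical address
        placements.append((ctrl, slot))
    return placements

for lbn in range(4):
    print(lbn, first_level_map(lbn, num_controllers=4))
```

Because the placement is a pure function of the logical block number, reads and writes can recompute it on demand instead of consulting a stored table.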
It should be noted that although four controllers are shown in the figure above, the system is not limited to 4 controllers; the number of controllers may be 2, 3, 4 or more. Likewise, the RAID inside each controller is not limited to two (2+1) RAID 5 groups: there may be more RAID groups, and the RAID level may be of other types.
Although the dual RAID described above improves data durability and system availability and suits applications that pursue extreme reliability, it still pays a price in performance and storage efficiency. Specifically, because the system holds more data redundancy than a traditional disk array, the cross-controller copies or other levels of RAID protection not only consume more storage space but also cause more RAID-specific Read-Modify-Write operations, degrading performance.
To further improve the storage efficiency and performance of dual RAID, the invention proposes a novel universal single-level metadata management and dynamic write-address allocation method that realizes dual RAID protection and integrates online data compression, data deduplication, dynamic RAID group construction, sequential writing and other functions, improving storage-system performance while raising storage resource utilization. Each time a data block is written, two storage controllers are selected by a Hash rule and written simultaneously; on each storage controller, the data blocks are repackaged after deduplication and compression into data packets with RAID protection, written sequentially to newly allocated addresses, and the metadata table is updated to record the mapping between the logical address and the actual write address of each data block. Referring to FIG. 2, which shows the dual RAID system architecture of another embodiment of the invention, the user data block corresponding to an LBN (logical block number) first has the target controllers of its two or three copies determined by a calculation method, a modulo algorithm or another Hash algorithm; the data is then transferred over the network and stored on each of them by write operations. For a read, the data block exists on some controller; the storage address corresponding to the LBN is looked up in that controller's metadata table, and the data is then read from that address.
As shown in FIG. 3, in each controller new data is first placed in buffer 1. Each data block first passes through one or more Hash engines dedicated to data deduplication, such as the well-known cryptographic functions SHA256 or SHA512, which compute a characteristic value (fingerprint) of the data block. These cryptographic functions have an important property: when two data blocks have the same characteristic value (SHA256, SHA512), the two blocks can be considered identical, which allows blocks that can be deduplicated to be found quickly by looking up their characteristic values, without a far more expensive direct comparison between data blocks. If the characteristic value is found to already exist in the controller's metadata table, the current data block is a duplicate, and its logical address can simply be pointed at the address of the existing data block with the same characteristic value, without writing the block again.
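The deduplication lookup can be sketched as follows; the in-memory dictionaries stand in for the controller's metadata tables and are assumptions of the example.

```python
# Sketch of SHA-256 feature-value deduplication in buffer 1 (names are illustrative).
import hashlib

dedup_table = {}     # characteristic value -> (stored address, reference count)
address_map = {}     # user LBN -> stored address

def ingest(lbn, data):
    fingerprint = hashlib.sha256(data).hexdigest()    # characteristic value of the block
    if fingerprint in dedup_table:
        addr, refs = dedup_table[fingerprint]
        dedup_table[fingerprint] = (addr, refs + 1)
        address_map[lbn] = addr                       # duplicate: remap only, no new write
        return None
    return fingerprint                                # new block: pass it on to compression
```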
When the characteristic-value lookup shows that the current data block is new, the block is first sent to a data-compression engine for online compression; common compression algorithms include LZ4 and GZIP (levels 1-9). After compression the data is fed into buffer 2.
In buffer 2, several compressed data blocks are combined, the address mapping information of the data blocks is added, and metadata such as the chosen compression algorithm and the integrity check codes of the blocks is appended to form a fixed-length data packet. The packet length is determined by the controller's RAID structure: for example, with a 2+1 RAID 5 layout, i.e. each RAID group consisting of two data blocks and one parity block of S bytes each, buffer 2 generates two S-byte data packets at a time. A matching RAID redundant data block is then generated for the packet by soft-RAID computation, yielding a RAID-protected data packet ready to be written. The RAID data packet contains several compressed data blocks, the corresponding address mapping information and the RAID redundant data. These self-protecting RAID data packets are called RAID stripes. The following continues with how RAID stripes are dynamically assigned addresses.
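A sketch of forming such a 2+1 stripe in buffer 2 is shown below; Python's zlib stands in for the LZ4/GZIP engine, and the sub-block size S and the JSON metadata layout are assumptions of the example.

```python
# Sketch of packing compressed blocks plus metadata into a 2+1 (RAID 5-style) stripe.
import json, zlib

S = 64 * 1024                                    # assumed sub-block size in bytes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def build_stripe(pending_blocks):
    """pending_blocks: list of (lbn, raw_data) waiting in buffer 2."""
    payload, mapping = bytearray(), []
    for lbn, raw in pending_blocks:
        comp = zlib.compress(raw)                # online compression
        mapping.append({"lbn": lbn, "off": len(payload), "len": len(comp),
                        "algo": "zlib", "crc": zlib.crc32(comp)})
        payload += comp
    meta = json.dumps(mapping).encode()          # per-stripe address mapping metadata
    body = (meta + payload).ljust(2 * S, b"\0")  # assumes metadata + payload fit in 2*S
    d0, d1 = body[:S], body[S:2 * S]             # two data sub-blocks
    return d0, d1, xor(d0, d1)                   # plus the parity sub-block
```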
First, the disks in each controller form a wide-stripe storage pool: its logical address space starts from the first (logical) address space of the first disk, moves to the first address space of the next disk in disk order and proceeds disk by disk, then cycles back to the second address space of the first disk, and so on. This layout maximizes the use of concurrent disk writes and improves bandwidth utilization. Whenever buffer 2 produces a RAID stripe, the controller dynamically allocates a contiguous storage address space for the stripe in front-to-back order of the pool's logical address space and writes the stripe there. Because the RAID stripe is written along this linear address space, each fixed-length sub-block is automatically placed on a different disk, so the data packet obtains dynamic RAID protection.
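A sketch of how a pool-linear address resolves to a concrete disk and offset under this layout follows; the stripe-unit size is an assumed parameter.

```python
# Sketch of the wide-stripe pool layout: consecutive linear blocks walk across
# the disks first and only then advance to each disk's next address space.

def linear_to_disk(linear_block, num_disks, unit=64 * 1024):
    """Map a pool-linear block number to (disk index, byte offset on that disk)."""
    disk = linear_block % num_disks      # step to the next disk first ...
    row = linear_block // num_disks      # ... then to the next address space per disk
    return disk, row * unit

next_free = 0                            # sequential allocator for new RAID stripes
def allocate_stripe(sub_blocks, num_disks):
    """Reserve `sub_blocks` consecutive linear blocks; they land on different disks."""
    global next_free
    start = next_free
    next_free += sub_blocks
    return [linear_to_disk(start + i, num_disks) for i in range(sub_blocks)]

print(allocate_stripe(3, num_disks=8))   # a 2+1 stripe spread over three disks
```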
It should be noted that, besides the user data, each RAID stripe also stores metadata about the stripe, which is used to determine whether the data blocks on the stripe are valid. More specifically, the stripe holds the logical addresses of the data on it as well as the actual write address assigned to the stripe, and the same address mapping is stored at the same time in the global metadata table of each storage controller. The global metadata table of the storage controller is used for data addressing: when some data needs to be read, the controller first queries this table to obtain the concrete address where the user data is stored and then reads the data from that address.
When a data block is written again, it is written, together with other data blocks, into a new stripe, and its mapping in the global metadata table is updated to point to the new stripe address. When an old stripe is reclaimed, the metadata on the stripe is read first and the mapped address of each data block is compared with the mapping in the global metadata table: if they agree, the data block is valid (it has not been overwritten); if they disagree, the data block has been overwritten at a new address and the corresponding data on the old stripe is invalid.
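The validity test used during reclamation can be expressed in a few lines; the dictionary-shaped metadata entries are an assumption of the sketch.

```python
# Sketch of the GC validity test: a block on an old stripe is live only if the
# global metadata table still maps its LBN to this stripe's address.

def is_valid(stripe_entry, global_address_map):
    """stripe_entry: {'lbn': ..., 'addr': ...} kept in the stripe's metadata area."""
    return global_address_map.get(stripe_entry["lbn"]) == stripe_entry["addr"]
```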
As shown in FIG. 4, in each controller the data is held in buffer 2 after deduplication and compression. The data blocks are then recombined: several data blocks, their metadata, the RAID protection and so on form a series of fixed-size RAID stripes, which are sent to cache 3. The RAID stripe size matches the physical disks on the controller node so that each stripe can be stored across multiple physical disks and can still reconstruct its data, without loss, when one or more disks fail.
FIG. 5 illustrates the write mode of the invention, which allocates free RAID stripes sequentially. First, within each controller the logical address space of all hard disks forms a unified pool of free RAID stripes addressed in order 0, 1, 2, …; whenever cache 3 holds a RAID stripe that needs to be written (internally protecting multiple compressed data blocks), the controller dynamically and sequentially allocates free address space to that stripe. Each write therefore does not overwrite a previously assigned address but goes to a new address, commonly known as an out-of-place write strategy. Because every write is a complete RAID stripe, disk bandwidth is used to the greatest extent, the IO performance of the system is improved, and local RAID data protection is provided. Since a data block is written to a new RAID stripe each time it is updated, the previously written copy becomes an invalid data block. Thus, during continuous writing, part of the data blocks of a RAID stripe correspond to the most recently written data, called valid data, and another part correspond to data that has since been updated, called invalid data; the storage space occupied by invalid data must be reclaimed for reuse by a garbage collection (Garbage Collection) strategy. FIG. 5 illustrates a First-in-First-out garbage-collection strategy: as free RAID stripes are allocated sequentially they are gradually used up, and before they are exhausted the controller's GC process checks the earliest-written stripes in order for remaining valid data; if valid data is found, it is read again and sent to buffer 2, where it forms a new stripe together with newly written data and is written to a new RAID stripe, an operation called Relocation. When the valid data of a GC stripe has been relocated, the stripe enters the free-stripe pool and is reused for writing data.
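The FIFO reclamation loop can be sketched as follows, reusing the is_valid test above; the stripe object with a metadata list and a read() method is a hypothetical stand-in for the on-disk structures.

```python
# Sketch of the FIFO garbage-collection step with relocation of live blocks.
from collections import deque

def gc_step(written_stripes: deque, free_pool: deque, buffer2: list, global_map: dict):
    """Reclaim the earliest-written stripe; relocate any still-valid blocks via buffer 2."""
    victim = written_stripes.popleft()                  # oldest stripe in write order
    for entry in victim.metadata:                       # per-block mapping stored on the stripe
        if is_valid(entry, global_map):                 # block is still the newest copy
            buffer2.append((entry["lbn"], victim.read(entry)))  # repack with new writes
    free_pool.append(victim)                            # stripe returns to the free pool
```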
Tables 1 and 2 show the metadata management corresponding to the above operations. This metadata is normally kept in a high-speed DRAM cache, supplemented by an appropriate power-loss protection mechanism such as NVDIMMs, battery backup or Flash protection, to ensure that the critical metadata is not lost on power failure or system crash. The metadata management contains at least two tables. One is the address mapping table, which records, for the logical block number (LBN) of each data block, the address where the data is actually stored, i.e. the Logical Block Address (LBA), and the size of the data block. In general, an LBA can be expressed as, or converted to, the serial number of a RAID stripe plus an offset within that stripe. Each time a data block is written, the LBA and size corresponding to its LBN must be updated; when an LBN data block needs to be read, the table is queried first to obtain the address where the block is actually stored, and a read request is then issued. The other metadata table is the data Hash table used for deduplication, which records the characteristic values of all valid data blocks on the storage controller. When a new data block is written, its characteristic value is computed with the Hash function and looked up in the table; if the same characteristic value already exists on the controller, the block can be deduplicated, and only its LBN needs to be pointed at the address of the existing block, without writing the data again, achieving data deduplication.
Table 1 Address mapping table

LBN    (LBA, size)
0      (xx…, 8)
1      (yy…, 4)

Table 2 Data Hash table

LBA    Characteristic value    Count
xx…    Hash value              1
yy…    Hash value              5
It is worth noting that another part of the metadata is stored directly in the metadata area of each RAID stripe. This metadata records the LBN of every data block stored on the stripe together with its LBA, size, characteristic value, CRC check and other information. During GC reclamation of a RAID stripe, this metadata makes it possible to determine which data blocks are invalid and which are valid; the valid blocks are then rewritten to new addresses, so that the RAID stripe can be reclaimed.
Reclaiming RAID stripes during GC causes additional write overhead, commonly known as Write Amplification (WA). The root cause of WA is that the data blocks in the same RAID stripe are rewritten at different frequencies, i.e. the individual data blocks have different life cycles. If data blocks with the same or similar life cycles can be placed in the same RAID stripe, then by adjusting the reclaim frequency of each stripe the amount of valid data that must be migrated during GC can be greatly reduced, which lowers WA and improves system performance.
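For reference, write amplification can be quantified as WA = (host-written bytes + GC-relocated bytes) / host-written bytes, so a WA of 1.0 means GC migrates no valid data at all.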
To apply this principle, FIG. 6 shows the new adaptive GC policy proposed by the invention: the data relocated by GC and the newly written data are separated into RAID stripes of their own, and a digital tag (a Count field kept with the written data's Hash-table entry) is attached to indicate to which reclaim generation the data on the stripe belongs. For example, when packing RAID stripes there may be four stripes receiving data at the same time, corresponding to digital tags 0, 1, 2 and 3. With this treatment, after running for a period of time it is adaptively ensured that the data within each stripe has a similar update (overwrite) frequency: stripes tagged 0 are updated more often than stripes tagged 1, stripes tagged 1 more often than stripes tagged 2, and so on. During GC the reclaim frequency of differently tagged stripes is adjusted accordingly, for example stripes tagged 1 are reclaimed half as often as stripes tagged 0, stripes tagged 2 half as often as stripes tagged 1, and stripes tagged 3 half as often as stripes tagged 2. Because the data within a stripe has a similar life cycle (update frequency), when a stripe is selected for GC its data blocks have a higher probability of already being invalid, so WA is reduced.
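A sketch of the tag-based routing and the halved reclaim schedule described above follows; the four open stripes and the power-of-two schedule mirror the example in the text, while the data structures themselves are assumptions.

```python
# Sketch of adaptive GC: blocks are segregated by how many times they have been
# reclaimed, and stripes with higher tags are collected less and less often.

open_stripes = {tag: [] for tag in range(4)}      # one stripe being filled per tag

def route_block(block, reclaim_count):
    """Newly written data has reclaim_count == 0; relocated data carries its count."""
    tag = min(reclaim_count, 3)
    open_stripes[tag].append(block)

def should_collect(tag, gc_round):
    """Tag 0 every round, tag 1 every 2nd, tag 2 every 4th, tag 3 every 8th round."""
    return gc_round % (2 ** tag) == 0
```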
In summary, the invention provides dual RAID data protection by combining the RAID local to each storage controller with copies across controllers (a form of network RAID). When a hard disk fails or is damaged, data repair is performed preferentially by the controller-local RAID without affecting the work of the other controllers; when hard disk failures exceed the local RAID protection capability, data can be repaired through the network RAID. Compared with a traditional disk array, the invention greatly improves data durability and storage-system reliability. To realize dual RAID protection, the invention provides a pre-fixed two-level address mapping method: the first level maps the user logical address space to the logical address spaces of the different RAID groups inside each controller; the second level is each controller RAID group's conversion from logical to physical addresses, including computing and saving the redundant RAID data blocks. The two-level mapping is simple to realize, and the second level can be implemented with an existing hardware RAID controller or soft-RAID method. To further exploit the protection advantages of dual RAID while improving storage efficiency and performance, the invention provides a novel universal single-level metadata management and dynamic write-address allocation method that integrates online data compression, data deduplication, dynamic RAID group construction, sequential writing and other functions and can greatly improve performance and storage resource utilization. The method also effectively reduces write amplification during GC and improves system performance.
The foregoing is merely a preferred embodiment of the present invention; it should be noted that those skilled in the art can make modifications and adaptations without departing from the principles of the invention, and such modifications are also to be regarded as within the scope of the invention. Parts of the embodiments not described in detail can be implemented with the prior art.

Claims (6)

1. A multi-active multi-control storage system with dual RAID data protection, characterized by comprising at least two storage controllers, each storage controller having an independent RAID function, wherein different storage controllers are connected through a network to realize cross-node mirror copies or network RAID of user data blocks, forming a dual data-redundancy structure, and RAID data protection inside the storage controllers combined with the cross-node mirror copies or RAID realizes two layers of RAID data protection; the logical address space of the storage system is divided into fixed-size addressing spaces, and the data blocks of each address space are stored in at least two copies distributed to the local RAID of different storage controllers; when a hard disk is damaged, the storage controller preferentially recovers and reconstructs the data with its local RAID function, and if the local RAID group cannot recover the data, the protection mechanism between storage controllers is used to recover and reconstruct the data;
the storage system adopts global single-level metadata management and dynamic address allocation to realize data writing: when a user writes data, at least two storage controllers to store the user data blocks are determined by a pseudo-random algorithm, the user data blocks are sent to the first buffer of the corresponding storage controllers, and after deduplication and compression they are sent to a second buffer, where several compressed data blocks are combined and the address mapping information, compression algorithm and check information of the data blocks are added to form a RAID stripe; the RAID stripe is written to a dynamically allocated address space, and the logical addresses and actual write addresses of all user data blocks in the RAID stripe are stored in the global metadata table of the storage controller;
the storage system adopts a garbage collection (GC) strategy to reuse the storage space occupied by invalid data blocks;
and the storage system adopts an adaptive garbage-collection strategy: the data blocks relocated by GC and the newly written data blocks are separated to form independent RAID stripes, each RAID stripe carries a digital tag recording how many times the data blocks on it have been reclaimed, and the relative reclaim frequency of a stripe is determined according to the digital tag.
2. The multi-active multi-control storage system with dual RAID data protection according to claim 1 wherein said storage system employs two levels of metadata management, including two levels of address mapping, a first level of address mapping allocating at least two storage controllers for each user logical address space according to the number of cross-controller copies and a pre-fixed address mapping method, and forming an address mapping relationship with the logical address space of the local RAID group of storage controllers; the second level of address mapping is implemented by the local RAID of each storage controller to complete the translation from the logical address to the physical address of the local RAID group.
3. The multi-active multi-control storage system with dual RAID data protection according to claim 1, wherein the disks in a storage controller form a wide-stripe storage pool whose logical address space starts from the first address space of the first disk, moves to the first address space of the next disk in disk order, and proceeds disk by disk; it then cycles back to the second address space of the first disk, and so on round after round; the storage controller dynamically allocates a contiguous storage address space for each RAID stripe in front-to-back order of the storage pool's logical address space and writes the RAID stripe there.
4. The multi-active multi-control storage system with dual RAID data protection of claim 1 wherein said storage controller global metadata table comprises an address mapping table for expressing actual write addresses corresponding to logical addresses of user data blocks and a data Hash table for data deduplication functionality.
5. The multiple-activity multiple-control storage system with dual RAID data protection according to claim 1, wherein the GC process determines whether a data block on a RAID stripe is valid in combination with address mapping information in the RAID stripe and address mapping information stored in a global metadata table; if the two are consistent, the data block is valid, and if the two are inconsistent, the data block is invalid.
6. The multi-active multi-control storage system with dual RAID data protection according to claim 1, wherein a FIFO garbage-collection strategy is used in the storage system: before the free space of each storage controller's storage pool is exhausted, a GC process is started, the earliest-written RAID stripes are checked sequentially for valid data, and if valid data is found, the valid data blocks are read again and sent to the second cache area to be packed with newly written data blocks into a new RAID stripe, while the reclaimed RAID stripe enters the free-stripe pool for reuse in writing data.
CN202011508298.9A 2020-12-18 2020-12-18 Multiple-active multiple-control storage system with dual RAID data protection Active CN112596673B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011508298.9A CN112596673B (en) 2020-12-18 2020-12-18 Multiple-active multiple-control storage system with dual RAID data protection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011508298.9A CN112596673B (en) 2020-12-18 2020-12-18 Multiple-active multiple-control storage system with dual RAID data protection

Publications (2)

Publication Number Publication Date
CN112596673A CN112596673A (en) 2021-04-02
CN112596673B true CN112596673B (en) 2023-08-18

Family

ID=75199547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011508298.9A Active CN112596673B (en) 2020-12-18 2020-12-18 Multiple-active multiple-control storage system with dual RAID data protection

Country Status (1)

Country Link
CN (1) CN112596673B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113420341B (en) * 2021-06-11 2023-08-25 联芸科技(杭州)股份有限公司 Data protection method, data protection equipment and computer system
CN114063929B (en) * 2021-11-25 2023-10-20 北京计算机技术及应用研究所 Local RAID reconstruction system and method based on double-controller hard disk array
CN114415981B (en) * 2022-03-30 2022-07-15 苏州浪潮智能科技有限公司 IO processing method and system of multi-control storage system and related components
CN117270758A (en) * 2022-06-20 2023-12-22 华为技术有限公司 Data reconstruction method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007066192A (en) * 2005-09-01 2007-03-15 Hitachi Ltd Storage system, control method and computer program
JP4800031B2 (en) * 2005-12-28 2011-10-26 株式会社日立製作所 Storage system and snapshot management method
US20130179634A1 (en) * 2012-01-05 2013-07-11 Lsi Corporation Systems and methods for idle time backup of storage system volumes
US9916241B2 (en) * 2015-08-14 2018-03-13 Netapp, Inc. Storage controller caching using symmetric storage class memory devices
CN107479824B (en) * 2016-06-08 2020-03-06 宜鼎国际股份有限公司 Redundant disk array system and data storage method thereof
US10133630B2 (en) * 2016-09-06 2018-11-20 International Business Machines Corporation Disposable subset parities for use in a distributed RAID
US10884849B2 (en) * 2018-04-27 2021-01-05 International Business Machines Corporation Mirroring information on modified data from a primary storage controller to a secondary storage controller for the secondary storage controller to use to calculate parity data

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003196036A (en) * 2001-12-12 2003-07-11 Internatl Business Mach Corp <Ibm> Storage device, information processing device including the same, and recovery method of information storage system
CN1790250A (en) * 2002-09-18 2006-06-21 株式会社日立制作所 Storage system, and method for controlling the same
CN101292220A (en) * 2005-10-26 2008-10-22 国际商业机器公司 System, method and program for managing storage
CN101055511A (en) * 2007-05-16 2007-10-17 华为技术有限公司 Memory array system and its data operation method
CN103049225A (en) * 2013-01-05 2013-04-17 浪潮电子信息产业股份有限公司 Double-controller active-active storage system
CN107077438A (en) * 2014-10-29 2017-08-18 惠普发展公司有限责任合伙企业 Communicated by the part of communication media
CN112074819A (en) * 2018-05-18 2020-12-11 国际商业机器公司 Selecting one of a plurality of cache eviction algorithms for evicting a track from a cache
CN111158587A (en) * 2019-12-10 2020-05-15 南京道熵信息技术有限公司 Distributed storage system based on storage pool virtualization management and data read-write method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A comparison of RAID storage schemes: reliability and efficiency; Andrew M. Shooman; IEEE; full text *

Also Published As

Publication number Publication date
CN112596673A (en) 2021-04-02

Similar Documents

Publication Publication Date Title
CN112596673B (en) Multiple-active multiple-control storage system with dual RAID data protection
KR102229648B1 (en) Apparatus and method of wear leveling for storage class memory
US7761655B2 (en) Storage system and method of preventing deterioration of write performance in storage system
JP5894167B2 (en) Adaptive RAID for SSD environment
JP5848353B2 (en) In-device data protection in RAID arrays
JP4950897B2 (en) Dynamically expandable and contractible fault tolerant storage system and method enabling storage devices of various sizes
US7076606B2 (en) Accelerated RAID with rewind capability
JP6677740B2 (en) Storage system
US9575844B2 (en) Mass storage device and method of operating the same to back up data stored in volatile memory
JP2022512064A (en) Improving the available storage space in a system with various data redundancy schemes
US8543761B2 (en) Zero rebuild extensions for raid
US20130275802A1 (en) Storage subsystem and data management method of storage subsystem
CN108958656B (en) Dynamic stripe system design method based on RAID5 solid state disk array
US10067833B2 (en) Storage system
CN103870352A (en) Method and system for data storage and reconstruction
US11263146B2 (en) Efficient accessing methods for bypassing second layer mapping of data blocks in file systems of distributed data systems
CN111858189A (en) Handling of storage disk offline
US11262919B2 (en) Efficient segment cleaning employing remapping of data blocks in log-structured file systems of distributed data systems
US11334497B2 (en) Efficient segment cleaning employing local copying of data blocks in log-structured file systems of distributed data systems
US11079956B2 (en) Storage system and storage control method
JP6817340B2 (en) calculator
CN107608626B (en) Multi-level cache and cache method based on SSD RAID array
JP2000047832A (en) Disk array device and its data control method
JPH06266510A (en) Disk array system and data write method and fault recovery method for this system
CN113050892B (en) Method and device for protecting deduplication data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant