CN112785523B - Semi-supervised image rain removing method and device for sub-band network bridging - Google Patents


Info

Publication number
CN112785523B
CN112785523B (application CN202110088761.7A)
Authority
CN
China
Prior art keywords
image
network
learning
subband
rain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110088761.7A
Other languages
Chinese (zh)
Other versions
CN112785523A (en)
Inventor
刘家瑛 (Liu Jiaying)
杨文瀚 (Yang Wenhan)
胡煜章 (Hu Yuzhang)
郭宗明 (Guo Zongming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN202110088761.7A priority Critical patent/CN112785523B/en
Publication of CN112785523A publication Critical patent/CN112785523A/en
Application granted granted Critical
Publication of CN112785523B publication Critical patent/CN112785523B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses a semi-supervised image rain removal method and device with sub-band network bridging, which performs semi-supervised deep learning on rainy-day images and provides a recursive frequency-band representation that connects an unsupervised framework and a fully supervised framework. A series of coarse-to-fine band representations is extracted and enhanced through recursive end-to-end learning for rain-streak removal and detail correction. Under adversarial learning guided by perceptual quality, the deep band representation is used for reconstruction, generating the final restoration result.

Description

Semi-supervised image rain removing method and device for sub-band network bridging
Technical Field
The invention belongs to the field of image processing and enhancement, and particularly relates to a semi-supervised image rain removal method and device with sub-band network bridging.
Background
The era of deep-learning-based rain removal began in 2017. Yang et al. constructed a network that combines rain-streak detection and removal to handle heavy rain, overlapping rain streaks, and fog. The network detects the position of rain by predicting binary masks, removes rain streaks with a recursive framework, and progressively removes rain mist. The method achieves good results under heavy-rain conditions. However, it may erroneously remove vertical textures and cause under-exposure.
In the same year, Fu et al. tried to remove rain streaks by building a deep detail network. The network takes only high-frequency details as input and predicts the rain streaks and the clean rain-free image. This work shows that removing background information from the network input facilitates network training.
Following the work of Yang, Fu, et al., a number of convolutional-neural-network-based methods were subsequently proposed. These methods employ more advanced network structures and embed new rain-related priors, yielding better results in both quantitative and qualitative analysis. However, because these methods are limited by the fully supervised learning paradigm (i.e., they train on synthetic rain images), they may fail on real rain scenes never seen during training.
Disclosure of Invention
Aiming at the problems and defects of the related methods, the invention provides a semi-supervised image rain removal method and device with sub-band network bridging. The whole framework is shown in fig. 1. The method constructs an effective feature representation, a learned sub-band representation, that connects supervised learning and unsupervised learning, thereby realizing efficient semi-supervised deep-learning rain removal. The supervised learning part of the model fully utilizes paired data and a signal-fidelity-based loss metric to learn the rain-streak removal and detail-correction process. The semi-supervised learning part learns an image-quality enhancement process using unpaired data and adversarial learning, improving the visibility and comfort of the image.
The technical scheme adopted by the invention comprises the following steps:
a semi-supervised image rain removing method for sub-band network bridging comprises the following steps:
1) Generating a plurality of rainy-day images y based on a plurality of sample rain-free images and generated rain streaks and rain mist, and constructing a paired image dataset; collecting sample images of different qualities, acquiring their image-quality labels, and constructing an unpaired image-quality dataset;
2) Constructing an image rain removal model, and training it with the paired image dataset and the unpaired image-quality dataset to obtain a trained image rain removal model;
the image rain removing model comprises an iterative subband learning network for learning subband signals in a rainy day image y or a restored image and an iterative subband reconstruction network for recombining the subband signals to generate a restored image; training the iterative subband learning network by using the paired image data sets, and training the iterative subband reconstruction network by using the paired image data sets and the trained quality evaluation network;
an iterative subband learning network is constructed by the following strategy:
a) Constructing a plurality of U-Net-like deep networks as sub-networks;
b) Each sub-network takes the concatenation of the rainy-day image y and the previous cycle's restoration result as input: the concatenation is mapped to a feature space and then feature-transformed through several convolution layers;
c) In the middle layers, the spatial resolution of the features is first downsampled through strided convolution and then upsampled through deconvolution;
d) Using skip connections to connect the shallow and deep features of the same spatial resolution in each sub-network;
an iterative subband reconstruction network is constructed by the following strategy:
a) Constructing a plurality of U-Net-like deep networks as sub-networks;
b) Using skip connections to connect the shallow and deep features of the same spatial resolution in each sub-network;
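The shared U-Net-like sub-network structure (strided downsampling, upsampling, and skip connections between shallow and deep features of the same spatial resolution) can be sketched as follows. This is only a data-flow sketch under stated assumptions: average pooling and nearest-neighbour repetition stand in for the learned strided convolutions and deconvolutions, and no trained weights are involved.

```python
import numpy as np

def avg_pool2(x):
    """Downsample by 2 with average pooling (stand-in for strided convolution)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Upsample by 2 with nearest-neighbour repetition (stand-in for deconvolution)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_like_subnetwork(x):
    """One U-Net-like sub-network: encode to lower spatial resolutions,
    decode back, and add skip connections between shallow and deep
    features of the same spatial resolution."""
    enc1 = x                       # full resolution (shallow feature)
    enc2 = avg_pool2(enc1)         # 1/2 resolution
    enc3 = avg_pool2(enc2)         # 1/4 resolution (bottleneck)
    dec2 = upsample2(enc3) + enc2  # skip connection at 1/2 resolution
    dec1 = upsample2(dec2) + enc1  # skip connection at full resolution
    return dec1

out = unet_like_subnetwork(np.random.rand(16, 16))
```

The skip connections preserve spatial detail that would otherwise be lost in the bottleneck, which is why the patent uses them in both the learning and the reconstruction networks.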
the trained quality evaluation network is obtained by training on the unpaired image-quality dataset; the quality evaluation network comprises a VGG16 network with an n-unit fully connected layer and a softmax layer, where n is the number of image-quality label classes;
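The quality evaluation head described above (an n-unit fully connected layer followed by softmax on top of a VGG16 backbone, with n = 10 quality levels) can be sketched as below. Reading off a scalar score as the expected label is an assumption borrowed from NIMA-style assessors, not something the patent fixes; the random features and weights are stand-ins for the VGG16 backbone.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def quality_head(features, W, b):
    """Classifier head appended to the VGG16 backbone: a fully connected
    layer with n units followed by softmax, giving a distribution over
    the quality labels 1..n."""
    probs = softmax(W @ features + b)
    # Expected-label score over levels 1..n (illustrative assumption).
    score = float(np.dot(probs, np.arange(1, len(probs) + 1)))
    return probs, score

rng = np.random.default_rng(0)
feat = rng.normal(size=512)                       # stand-in for VGG16 features
W, b = 0.01 * rng.normal(size=(10, 512)), np.zeros(10)
probs, score = quality_head(feat, W, b)
```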
3) And inputting the image to be processed into a trained image rain removing model to obtain a rain removed image.
Further, the rain streaks and rain mist are generated using a rain appearance model.
Further, the parameters of the rain mist include: light transmittance and background light.
Further, the rainy-day image y = x(1-t) + tα + s, where x is the sample rain-free image, s is the rain-streak layer, t is the light transmittance, and α is the background light.
Further, the iterative subband learning network learns the subband signals in the rainy-day image y or the restored image by:
1) Mapping the rainy-day image to features, or accumulating the cross-cycle feature residuals generated from the restored image to obtain the features;
2) Generating cross-scale feature residuals with a long short-term memory (LSTM) network and the features, and accumulating them to obtain a cross-scale residual accumulation result;
3) Mapping the cross-scale residual accumulation result to enhancement results at different scales, obtaining the subband signals in the rainy-day image y or the restored image.
Further, when training the iterative subband learning network with the paired image dataset, learning is constrained using a multi-scale loss function that compares the restoration result at each scale against the correspondingly downsampled ground truth, where Φ(·) computes the structural similarity (SSIM) index of the image, s_i is a given scaling factor, F_D(·) is a downsampling process, λ_1 is the first weight parameter, and λ_2 is the second weight parameter.
Further, the iterative subband reconstruction network recombines the subband signals to generate the restored image by:
1) Mapping the subband signals to signal-recombination weights;
2) Weighting the subband signals with the recombination weights to generate a new enhancement result;
3) Recombining the new enhancement results to generate the restored image.
Further, when training the iterative subband reconstruction network with the paired image dataset and the trained quality evaluation network, learning is constrained with the loss function L_SBR = L_Percept + λ_3 L_Detail + λ_4 L_Quality, where the perceptual loss L_Percept is computed from depth features F_p(·) extracted from a pre-trained VGG network, the signal-fidelity metric L_Detail is based on the structural similarity index Φ(·), the quality loss L_Quality is computed from the trained quality evaluation network D(·) and a random number l_r, and λ_3 and λ_4 are the third and fourth weight parameters.
A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method described above when run.
An electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the method described above.
Compared with the prior art, the invention has the following advantages:
1) A recursive frequency-band representation is provided to connect the unsupervised and fully supervised frameworks, combining the advantages of supervised and unsupervised image-enhancement methods, namely better detail restoration together with improved overall visibility and visual comfort;
2) A series of coarse-to-fine band representations is extracted and enhanced through recursive end-to-end learning for rain-streak removal and detail correction.
Drawings
Fig. 1 is a diagram of a deep recursive subband network framework of the present invention.
Fig. 2 is a frame diagram of a subband learning network of the present invention.
Fig. 3 is a frame diagram of a sub-band reorganization network of the present invention.
Detailed Description
To further illustrate the technical method of the present invention, the invention is described in detail below with reference to the drawings and specific examples.
The semi-supervised image rain removal method of the invention uses a deep recursive subband network as shown in fig. 1, and comprises the following steps:
step 1: a total of 1800 pairs of rainy/rainless images were constructed for the rainy/rainless training dataset. Generating corresponding raindrops s and raindrop parameters (light transmittance t and background light alpha) according to a rainless image x and based on a raindrop appearance model (random sampling generation illumination direction parameters, visual angle parameters and raindrop vibration parameters) [ Garg and Nayar,2016], and superposing related variables to generate a rainy day image y:
y=x(1-t)+tα+s. (1)
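Equation (1) can be sketched directly. The parameter ranges below (transmittance, background light, streak statistics) are illustrative assumptions for demonstration, not values fixed by the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(64, 64))              # sample rain-free image
s = np.clip(rng.normal(0.0, 0.05, x.shape), 0, None)  # rain-streak layer (assumed sparse)
t = rng.uniform(0.6, 0.9, size=x.shape)               # light transmittance map
alpha = 0.8                                           # background light

# Equation (1): superimpose mist (transmittance + background light) and streaks
y = x * (1 - t) + t * alpha + s
```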
step 2: non-paired image quality datasets were constructed, 1000 images of different quality were collected through public channels together with corresponding image quality labels (1-10 levels, 10 representing highest quality, 1 representing lowest quality).
Step 3: an iterative subband learning network is constructed as in fig. 2. The primary goal is to fully learn each subband signal of the restored image (the output of the iterative subband learning network, the target fitting the rainless image) using the paired training data generated in step 1. As shown in fig. 1, a series of U-Net-like deep networks were constructed. Restoration of each subnetwork with y and last cycleConcatenation is used as input, mapped to feature space and then feature transformed by several convolution layers, where s i (i=1, 2, 3) is the scaling factor, t-1 is the number of last cycles. In the middle layer, the spatial resolution of the feature is first downsampled by step-wise convolution and deconvolution, and then upsampled. The shallow and deep features of the same spatial resolution are connected using a jump connection, which helps to get the local information contained in the shallow features to the output. Each subnet is respectively at s 1 =1/4、s s=1/2 and s3 Three features are produced on the scale of =1.
1) The first cycle of the recursive learning (equation group (2)) proceeds as follows: f^1_{s_i} is the feature extracted at the corresponding scale, and h^1_{s_i} is the feature enhanced by the long short-term memory (LSTM) network. F_SB(·) and F_LSTM(·) denote the sub-band learning network and the LSTM network processes respectively; m^1_{s_i} records the memory information of all cycle states, and r^1_{s_i} is the cross-scale feature residual. F_R(·) denotes the mapping process that projects the enhanced features back into the image domain, and F_U(·) is an upsampling process. The image is first reconstructed at the coarser scale s_1; thereafter, the residuals of the image signal are predicted at finer scales and then combined into the whole image. In equation group (2), the first row maps the rainy-day image to features, the second row generates the cross-scale feature residuals with the LSTM network, the third row accumulates the cross-scale feature residuals, and the last three rows map the features to the enhancement results at the different scales.
2) Thereafter, at the t-th cycle, the residual features and images are learned under the guidance of the previous estimation. Each sub-network takes the concatenation of y and the restoration result of the previous cycle as input (equation group (3)), in which the cross-cycle feature residual appears. In equation group (3), the first row generates the cross-cycle feature residuals, the second row accumulates them, the third row generates the cross-scale feature residuals with the LSTM network, the fourth row accumulates them, and the last three rows map the features to the enhancement results at the different scales. This formulation ties all sub-band features together, resulting in a joint optimization of all sub-bands.
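The recursive scheme of equation groups (2) and (3) can be sketched numerically as follows. The tiny linear functions standing in for the sub-band extractor, the LSTM, and the image-domain mapping are illustrative assumptions; only the residual-accumulation structure (cross-cycle residuals added to the coarse feature, cross-scale residuals added while moving from s_1 = 1/4 to s_3 = 1) follows the text.

```python
import numpy as np

def down(z, k):
    """Downsample by factor k with average pooling."""
    h, w = z.shape
    return z.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def up2(z):
    """Upsample by 2 with nearest-neighbour repetition."""
    return z.repeat(2, axis=0).repeat(2, axis=1)

def F_SB(z):       # stand-in for the sub-band feature extractor
    return 0.5 * z

def F_LSTM(f, m):  # stand-in LSTM: returns a cross-scale residual and new memory
    return 0.1 * f, 0.9 * m + f

def F_R(f):        # stand-in mapping from features back to the image domain
    return f

def recursive_subband_learning(y, T=3):
    """Cross-cycle residuals are accumulated into the coarse-scale feature,
    cross-scale residuals from the LSTM refine it scale by scale, and the
    features are mapped to per-scale enhancement results; the finest-scale
    result guides the next cycle."""
    f = np.zeros((y.shape[0] // 4, y.shape[1] // 4))          # s1 = 1/4 feature
    mems = [np.zeros((y.shape[0] // 4 * 2**i, y.shape[1] // 4 * 2**i))
            for i in range(3)]                                # per-scale memory
    x_hat = np.zeros_like(y)
    for t in range(T):
        # cross-cycle residual from y and the previous restoration
        # (concatenation approximated by a sum for simplicity)
        f = f + F_SB(down(y + x_hat, 4))
        g = f
        results = []
        for i in range(3):                 # scales s1=1/4, s2=1/2, s3=1
            r, mems[i] = F_LSTM(g, mems[i])
            g = g + r                      # accumulate cross-scale residual
            results.append(F_R(g))         # per-scale enhancement result
            if i < 2:
                g = up2(g)                 # move to the next (finer) scale
        x_hat = results[-1]                # finest-scale restoration
    return results

res = recursive_subband_learning(np.random.rand(32, 32))
```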
Step 4: an iterative subband reconstruction network is constructed, as in fig. 3, which also employs a U-Net like network structure, except that in the subband reconstruction network, shallow and deep features of the same spatial resolution are connected using a hopped connection. With the paired data, the frequency band restoration process from the rainy day image to the normal light image can be well learned, and at the same time, the details can be well restored and the raindrop can be suppressed. Since signal fidelity is not always well consistent with human visual perception, especially for certain global properties of the image (e.g., visibility, contrast, color illumination distribution, etc.). Therefore, the model is further constrained by a neural network-based perceptual quality assessment method, so that the restoration model learns better restoration enhancement mapping. Use of another U-Net-like network to reorganize subband signals using F RC (·) represents the process, generating the following coefficients to reorganize the subband signals:
wherein T is the total number of cycles and the inputs are the subband signals of the final restoration result. In the first row of (4), the signal-recombination module F_RC(·) maps the subband signals to the signal-recombination weights {ω_1, ω_2, ω_3}. In the second row of (4), the subband signals are weighted by these recombination weights and recombined to generate the new enhancement result.
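Equation (4) can be sketched as below. For simplicity the sub-band signals are assumed to already be upsampled to the finest scale, and a per-pixel softmax stands in for the learned recombination module F_RC; the patent does not fix the functional form of the weights.

```python
import numpy as np

def softmax(z, axis=0):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def recombine_subbands(subbands):
    """Map the sub-band signals to recombination weights {w1, w2, w3}
    (stand-in for F_RC), then form the new enhancement result as the
    per-pixel weighted sum of the sub-band signals."""
    stack = np.stack(subbands)
    weights = softmax(stack, axis=0)   # per pixel, weights sum to 1 over sub-bands
    return (weights * stack).sum(axis=0)

subbands = [np.random.rand(32, 32) for _ in range(3)]
x_hat_new = recombine_subbands(subbands)
```

Because the weights sum to one at each pixel, the recombined result is a per-pixel convex combination of the sub-band signals.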
Step 5: the quality assessment network D is trained using unpaired image quality data sets. D uses the network structure of VGG16 and forms the last layer into an FC layer with 10 units. Then the softmax was attached. The network is pre-trained on ImageNet and then refined using AVA Dataset. The AVA Dataset contained 255,000 pictures, each scored by approximately 200 skilled photographers. Each picture is associated with a game theme (a total of approximately 900 themes). The fractional range [1,10],10 is the highest score.
Step 6: iterative subband learning networks are trained using pairs of image data sets, and learning of the networks is constrained using a multi-scale loss function. The loss function can be expressed as:
wherein ,FD (. Cndot.) is a downsampling procedure, s i Is a given scaling factor. And phi (-) calculating the structural similarity index of the image. Lambda (lambda) 1 and λ2 Is a weight parameter.
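A minimal sketch of the multi-scale loss follows. Two details are simplifying assumptions: the SSIM here uses global image statistics rather than the usual local windows, and the per-scale weighting is left as a generic tuple (the patent names λ_1 and λ_2 without fixing how they attach to the scales).

```python
import numpy as np

def down(z, k):
    """Downsample by factor k with average pooling (stand-in for F_D)."""
    h, w = z.shape
    return z.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def ssim_global(a, b, c1=0.01**2, c2=0.03**2):
    """Simplified SSIM using global image statistics (stand-in for Phi)."""
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2) /
            ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2)))

def multiscale_loss(x_hats, x, lambdas=(1.0, 1.0, 1.0)):
    """Penalise 1 - SSIM between the restoration result at each scale s_i
    and the correspondingly downsampled ground truth x."""
    scales = (4, 2, 1)   # s1 = 1/4, s2 = 1/2, s3 = 1
    return sum(lam * (1.0 - ssim_global(xh, down(x, k)))
               for lam, xh, k in zip(lambdas, x_hats, scales))

x = np.random.rand(32, 32)
x_hats = [down(x, 4), down(x, 2), x]   # perfect restorations at each scale
loss_perfect = multiscale_loss(x_hats, x)
loss_bad = multiscale_loss([np.zeros((8, 8)), np.zeros((16, 16)),
                            np.zeros((32, 32))], x)
```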
Step 7: training subbands to reconstruct a network using paired image dataset and quality assessment network constraints, using perceptual loss function L Percept Signal fidelity measure L Detail And a mass loss function L Quality Restricting learning of the network. The loss function can be expressed as:
wherein ,λ3 and λ4 Is a weight parameter. l (L) r Is a random number between 7 and 12, where 10 represents the highest quality in the database. F (F) P (. Cndot.) is the depth feature extracted from a pre-trained VGG network. D (·) is a trained NIMA quality assessment network (Talebi and Milanfar, 2018).
Fig. 1 summarizes the overall flow of the present invention. It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (5)

1. A semi-supervised image rain removing method for sub-band network bridging comprises the following steps:
1) Generating a plurality of rainy-day images y based on a plurality of sample rain-free images and generated rain streaks and rain mist, and constructing a paired image dataset; collecting sample images of different qualities, acquiring their image-quality labels, and constructing an unpaired image-quality dataset;
2) Constructing an image rain removal model, and training it with the paired image dataset and the unpaired image-quality dataset to obtain a trained image rain removal model;
the image rain removing model comprises an iterative subband learning network for learning subband signals in a rainy day image y or a restored image, a long-period memory network and an iterative subband reconstruction network for recombining the subband signals to generate a restored image;
the iterative subband learning network comprises: the method comprises the steps that a plurality of first deep networks are used as sub-networks, a trunk network of each first deep network is a U-Net network, and the characteristic connection mode in each first deep network is to connect shallow layer characteristics and deep layer characteristics with the same spatial resolution by using jump connection;
the iterative subband reconstruction network comprises: the first deep networks are used as sub-networks, the main network of the first deep networks is a U-Net network, and the characteristic connection mode in the first deep networks is to connect shallow and deep characteristics with the same spatial resolution by using jump connection;
training the image rain removal model using the paired image dataset and the unpaired image-quality dataset to obtain the trained image rain removal model comprises the following steps:
learning the subband signals in the rainy-day image y and the restoration result of the previous cycle based on the iterative subband learning network to produce a cross-cycle feature residual at each scale s_i, wherein s_i is a scaling factor, i = 1, 2, 3, and t-1 is the index of the previous cycle; said learning to produce the cross-cycle feature residuals comprising the following steps:
mapping the concatenation of the rainy-day image y and the restoration result of the previous cycle to a feature space;
after feature transformation through several convolution layers, sequentially downsampling the spatial resolution of the features by strided convolution and upsampling it by deconvolution in the middle layers;
after connecting shallow and deep features of the same spatial resolution using skip connections, generating the cross-cycle feature residual at each scale;
generating, based on the per-scale cross-cycle feature residuals, the enhancement results at the different scales, comprising the following steps:
accumulating the cross-cycle feature residuals to obtain the features f^t_{s_i};
obtaining the cross-scale feature residuals r^t_{s_i} based on the long short-term memory network and the features f^t_{s_i}, wherein m^t_{s_i} is the memory information of the cycle state;
accumulating the cross-scale feature residuals to obtain the accumulated features h^t_{s_i};
mapping the features to the restoration results at the different scales;
recombining the subband signals to generate the new enhancement result, wherein the recombining comprises the following steps:
mapping the subband signals of the T-th (final) restoration result to the signal-recombination weights ω_i;
weighting the subband signals with the recombination weights ω_i to generate the new enhancement result;
training the iterative subband learning network using the paired image dataset, with learning constrained by a multi-scale loss function L_Rect, wherein F_D denotes downsampling, Φ denotes computing the structural similarity index of the image, λ_1 and λ_2 denote the first and second weight coefficients respectively, and x is the sample rain-free image;
training the subband reconstruction network using the paired image dataset under the constraint of the quality assessment network, with learning restricted by a perceptual loss function L_Percept, a signal-fidelity metric L_Detail, and a quality loss function L_Quality; wherein l_r denotes a random number between 7 and 12, F_p denotes depth features extracted from a pre-trained VGG network, and D is a trained NIMA quality evaluation network;
3) And inputting the image to be processed into a trained image rain removing model to obtain a rain removed image.
2. The method of claim 1, wherein the parameters of the rain mist include: light transmittance and background light.
3. The method of claim 1, wherein the rainy-day image y = x(1-t) + tα + s, where x is the sample rain-free image, s is the rain streak, t is the light transmittance, and α is the background light.
4. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method of any of claims 1-3 when run.
5. An electronic device comprising a memory, in which a computer program is stored, and a processor arranged to run the computer program to perform the method of any of claims 1-3.
CN202110088761.7A 2021-01-22 2021-01-22 Semi-supervised image rain removing method and device for sub-band network bridging Active CN112785523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110088761.7A CN112785523B (en) 2021-01-22 2021-01-22 Semi-supervised image rain removing method and device for sub-band network bridging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110088761.7A CN112785523B (en) 2021-01-22 2021-01-22 Semi-supervised image rain removing method and device for sub-band network bridging

Publications (2)

Publication Number Publication Date
CN112785523A CN112785523A (en) 2021-05-11
CN112785523B (en) 2023-10-17

Family

ID=75758601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110088761.7A Active CN112785523B (en) 2021-01-22 2021-01-22 Semi-supervised image rain removing method and device for sub-band network bridging

Country Status (1)

Country Link
CN (1) CN112785523B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109086807A (en) * 2018-07-16 2018-12-25 Harbin Engineering University A semi-supervised optical-flow learning method based on stacked networks with dilated convolution
CN111062892A (en) * 2019-12-26 2020-04-24 华南理工大学 Single image rain removing method based on composite residual error network and deep supervision

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109086807A (en) * 2018-07-16 2018-12-25 Harbin Engineering University A semi-supervised optical-flow learning method based on stacked networks with dilated convolution
CN111062892A (en) * 2019-12-26 2020-04-24 华南理工大学 Single image rain removing method based on composite residual error network and deep supervision

Also Published As

Publication number Publication date
CN112785523A (en) 2021-05-11

Similar Documents

Publication Publication Date Title
CN110532859B (en) Remote sensing image target detection method based on deep evolution pruning convolution net
CN108062754B (en) Segmentation and identification method and device based on dense network image
Li et al. A comprehensive benchmark analysis of single image deraining: Current challenges and future perspectives
CN108447041B (en) Multi-source image fusion method based on reinforcement learning
CN111539941B (en) Parkinson's disease leg flexibility task evaluation method and system, storage medium and terminal
CN110751111B (en) Road extraction method and system based on high-order spatial information global automatic perception
CN112488025B (en) Double-temporal remote sensing image semantic change detection method based on multi-modal feature fusion
CN113450288B (en) Single image rain removing method and system based on deep convolutional neural network and storage medium
CN116822382B (en) Sea surface temperature prediction method and network based on space-time multiple characteristic diagram convolution
CN112053359A (en) Remote sensing image change detection method and device, electronic equipment and storage medium
CN111275686A (en) Method and device for generating medical image data for artificial neural network training
CN112488935B (en) Method for generating anti-finger vein image restoration based on texture constraint and poisson fusion
CN111598793A (en) Method and system for defogging image of power transmission line and storage medium
CN112785523B (en) Semi-supervised image rain removing method and device for sub-band network bridging
CN116503320A (en) Hyperspectral image anomaly detection method, hyperspectral image anomaly detection device, hyperspectral image anomaly detection equipment and readable storage medium
CN108564585B (en) Image change detection method based on self-organizing mapping and deep neural network
Mandal et al. Neural architecture search for image dehazing
CN115358952A (en) Image enhancement method, system, equipment and storage medium based on meta-learning
CN115457081A (en) Hierarchical fusion prediction method based on graph neural network
CN112614073A (en) Image rain removing method based on visual quality evaluation feedback and electronic device
CN114331894A (en) Face image restoration method based on potential feature reconstruction and mask perception
JP6950647B2 (en) Data determination device, method, and program
Daultani et al. ILIAC: Efficient classification of degraded images using knowledge distillation with cutout data augmentation
Sun et al. Kinect depth recovery via the cooperative profit random forest algorithm
Filoche Variational Data Assimilation with Deep Prior. Application to Geophysical Motion Estimation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant