CN113378736B - Remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization - Google Patents

Remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization

Info

Publication number
CN113378736B
Authority
CN
China
Prior art keywords
transformation
network
remote sensing
representing
samples
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110678330.6A
Other languages
Chinese (zh)
Other versions
CN113378736A (en)
Inventor
张永军 (Zhang Yongjun)
张斌 (Zhang Bin)
万一 (Wan Yi)
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202110678330.6A priority Critical patent/CN113378736B/en
Publication of CN113378736A publication Critical patent/CN113378736A/en
Application granted granted Critical
Publication of CN113378736B publication Critical patent/CN113378736B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a remote sensing image deep-network semi-supervised semantic segmentation method and system based on transformation consistency regularization, belonging to the field of image data processing. In the proposed framework, under the condition of limited labeled samples, an output-consistency constraint under different random transformation perturbations is applied to a large number of unlabeled samples, so that the latent information provided by the unlabeled samples is fully exploited to improve the performance of the deep network. The network parameters are updated by optimizing a weighted sum of the supervised loss computed on labeled samples and the consistency regularization loss computed on unlabeled samples. The method can therefore improve network performance using the information contained in a large number of unlabeled samples when labeled samples are limited, and is well suited to practical scenarios with few labeled samples.

Description

Remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization
Technical Field
The invention belongs to the field of image data processing, and particularly relates to a remote sensing image deep-network semi-supervised semantic segmentation method based on transformation consistency regularization.
Background
Semantic segmentation is a high-level task in image processing whose goal is to assign a semantic label to each pixel. It is an important and challenging task in both computer vision and remote sensing. In the remote sensing field, classification of remote sensing images is one of the most fundamental research problems and the basis of other remote sensing research and applications. In the past, traditional machine learning methods typically combined human prior knowledge and intuitive experience to design and select a handful of task-relevant features, which were then used for classification and recognition of remote sensing images. In recent years, benefiting from advances in deep learning techniques and hardware computing power, deep neural networks have become the mainstream approach, and Convolutional Neural Networks (CNNs) have enjoyed great success in image processing. Given large datasets, CNN models can be trained end to end to learn more robust and powerful feature representations and achieve impressive performance on a variety of benchmarks.
However, most networks today are data-driven and trained in a supervised manner. Network performance depends heavily on large numbers of labeled samples, which means ever larger labeled datasets must be created. Yet collecting large amounts of accurately labeled data, especially accurate pixel-level annotations, is very time-consuming and laborious. Annotation also requires a certain amount of expert knowledge, and some data is difficult to obtain for security or privacy reasons. In the remote sensing field, large numbers of labeled samples are typically unavailable, even though recent developments in sensors and Earth-observation technology have led to explosive growth in remote sensing data. For example, high-precision, high-quality land-cover data is difficult to obtain and must be collected and annotated by remote sensing experts. Thus, for many practical problems and applications, the lack of a sufficiently large labeled dataset limits the wider use of deep learning techniques in remote sensing. In this situation, how to fully exploit unlabeled data to improve existing models is a major challenge.
Disclosure of Invention
Semi-supervised learning is a machine learning technique intermediate between supervised and unsupervised learning. Because unlabeled data is relatively easy to obtain compared to labeled data, it typically trains a model using a small amount of labeled data together with a large amount of unlabeled data. It has been found that combining large amounts of unlabeled data with a small amount of labeled data can significantly improve learning performance. For supervised learning, obtaining annotations is expensive and time-consuming and large amounts of labeled data are hard to acquire, whereas acquiring unlabeled data is relatively inexpensive. Semi-supervised learning therefore has broad applicability and promising prospects in practice.
Therefore, the invention provides a remote sensing image deep-network semi-supervised semantic segmentation method based on transformation consistency regularization, aimed at further improving model performance by using unlabeled samples when labeled remote sensing samples are scarce.
The technical scheme adopted by the invention is as follows: the remote sensing image deep-network semi-supervised semantic segmentation method based on transformation consistency regularization comprises the following steps:
Step 1: First, divide the remote sensing image dataset into labeled samples D_L = {(x_i^L, y_i^L)}, i = 1, …, N_L, and unlabeled samples D_U = {x_i^U}, i = 1, …, N_U, where x_i^L denotes a labeled image, y_i^L denotes the label corresponding to that image, N_L denotes the number of labeled images, x_i^U denotes an unlabeled image, and N_U denotes the number of unlabeled images.
Step 2: In the training phase, construct a student network S and a teacher network T, whose parameters are denoted θ_s and θ_t respectively.
Step 3: Randomly select m samples from the labeled set and m samples from the unlabeled set.
Step 4: Input the selected labeled samples into the student network and compute the supervised loss L_s.
Step 5: Back-propagate and update the student network parameters θ_s by a gradient descent algorithm.
Step 6: Select a random transformation τ(·) as the perturbation.
Step 7: Apply the random transformation τ(·) to the unlabeled samples to obtain the perturbed samples τ(x_U).
Step 8: Input the perturbed unlabeled samples into the student network to obtain the output feature map f(τ(x_U); θ_s).
Step 9: Input the original unlabeled samples into the teacher network to obtain the output feature map f(x_U; θ_t); then apply the same transformation τ(·) to this feature map to obtain another feature map τ(f(x_U; θ_t)).
Step 10: From the two output feature maps f(τ(x_U); θ_s) and τ(f(x_U; θ_t)), compute the consistency regularization loss L_c.
Step 11: Back-propagate and update the student network parameters θ_s by a gradient descent algorithm.
Step 12: Update the parameters of the teacher network using the parameters of the student network.
Step 13: Return to step 3 and iterate until training is finished.
Step 14: In the testing stage, slide a window over the image, input the image patch of each window into the network to obtain the prediction result of each window, and finally obtain the segmentation result of the remote sensing image.
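For concreteness, the steps above can be sketched as a compact training loop. The following is a minimal toy illustration, not the patent's actual implementation: the "networks" are single scalar parameters with an elementwise-linear model, the random transformation τ is a fixed flip, and the hyperparameters (learning rate lr, consistency weight lam, smoothing coefficient alpha_ema) are assumed values chosen only for demonstration.

```python
import numpy as np

def model(x, theta):
    # Toy stand-in for the segmentation network: elementwise linear scoring.
    return x * theta

def train(x_l, y_l, x_u, steps=300, lr=0.1, lam=1.0, alpha_ema=0.9):
    theta_s, theta_t = 0.0, 0.0      # student / teacher parameters (step 2)
    tau = lambda a: a[::-1]          # "random" transform: here a fixed flip (step 6)
    for _ in range(steps):
        # Steps 4-5: supervised loss (MSE stand-in for cross entropy) on labeled data
        grad_sup = np.mean(2.0 * (model(x_l, theta_s) - y_l) * x_l)
        theta_s -= lr * grad_sup
        # Steps 7-9: student sees the perturbed input; the teacher's output
        # receives the same transformation.
        out_s = model(tau(x_u), theta_s)
        out_t = tau(model(x_u, theta_t))
        # Steps 10-11: consistency loss (MSE) drives a second student update
        grad_con = np.mean(2.0 * (out_s - out_t) * tau(x_u))
        theta_s -= lr * lam * grad_con
        # Step 12: EMA update of the teacher, eq. (3)
        theta_t = alpha_ema * theta_t + (1.0 - alpha_ema) * theta_s
    return theta_s, theta_t
```

On toy data generated by a known parameter, the student converges to that parameter while the teacher tracks it through the EMA update of step 12.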
Preferably, the supervised loss L_s in step 4 is defined as the pixel-wise cross entropy:

L_s = -(1/(h·w)) Σ_{i=1..h} Σ_{j=1..w} Σ_{k=1..c} y_L(i,j,k) · log f(x_L; θ_s)(i,j,k)    (1)

where h and w denote the height and width of the image, c denotes the number of classes, x_L denotes a labeled sample, f(x_L; θ_s) denotes the feature map obtained by inputting the labeled sample into the student network, and y_L denotes the one-hot ground-truth label of the sample.
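As a concrete (hypothetical) instance of the pixel-wise cross entropy, assuming the network outputs an h × w × c map of raw class scores (logits) and the labels are stored as h × w integer class indices rather than one-hot maps:

```python
import numpy as np

def pixelwise_cross_entropy(logits, labels):
    """Pixel-wise cross entropy over an (h, w, c) logit map and (h, w) int labels."""
    # log-softmax over the class axis, numerically stabilised
    z = logits - logits.max(axis=-1, keepdims=True)
    log_prob = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # log-probability of the true class at every pixel, averaged over h*w pixels
    picked = np.take_along_axis(log_prob, labels[..., None], axis=-1)[..., 0]
    return float(-picked.mean())
```

A quick sanity check: uniform logits over c classes give a loss of ln(c), and a confidently correct prediction gives a loss near zero.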
Preferably, the random transformation τ(·) in step 6 may be an affine transformation, a grid-shuffle transformation, or a CutMix transformation. The affine transformation randomly translates the image within [-0.2, 0.2] of its height and width, randomly scales it within [0.5, 1.5], and randomly rotates it within [-180°, 180°]. The grid-shuffle transformation uses a 3 × 3 grid.
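Of the three transformations, grid shuffle is the simplest to sketch. A minimal version follows, assuming for simplicity that the image dimensions are divisible by the grid size (handling of remainders, and applying the same permutation to feature maps, is omitted):

```python
import numpy as np

def grid_shuffle(img, grid=3, rng=None):
    """Cut the image into a grid x grid array of tiles and randomly permute them."""
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = img.shape[:2]
    th, tw = h // grid, w // grid
    tiles = [img[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
             for i in range(grid) for j in range(grid)]
    order = rng.permutation(len(tiles))
    rows = [np.concatenate([tiles[order[i * grid + j]] for j in range(grid)], axis=1)
            for i in range(grid)]
    return np.concatenate(rows, axis=0)
```

Recording the permutation and reapplying it to the teacher's output feature map is what realizes the "same transformation τ(·)" of step 9 for this perturbation.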
Preferably, the consistency regularization loss L_c of the unsupervised part in step 10 takes the mean square error as the loss:

L_c = d(τ(f(x_U; θ_t)), f(τ(x_U); θ_s))    (2)

where x_U denotes an unlabeled sample, τ(·) denotes the random transformation, θ_s and θ_t denote the parameters of the student network and the teacher network respectively, f(·; θ_s) and f(·; θ_t) denote the student network model and the teacher network model respectively, and d(·,·) denotes the mean-square-error function.
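The structure of eq. (2), perturbing the student's input while applying the same τ(·) to the teacher's output before comparing with MSE, can be sketched as follows. The models and the flip used for τ(·) are hypothetical stand-ins for the actual networks and transformations:

```python
import numpy as np

def consistency_loss(x_u, student, teacher, tau):
    """Eq. (2): d(tau(f(x_U; theta_t)), f(tau(x_U); theta_s)) with d = MSE."""
    out_student = student(tau(x_u))   # student sees the perturbed input
    out_teacher = tau(teacher(x_u))   # same transform applied to teacher output
    return float(np.mean((out_student - out_teacher) ** 2))
```

When the two models agree and commute with τ(·) (as elementwise models do with a flip), the loss is exactly zero; any disagreement yields a positive penalty.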
Preferably, in step 12, the parameters of the teacher network may be updated by an exponential moving average method:

θ′_t = α_EMA · θ_t + (1 − α_EMA) · θ_s    (3)

where α_EMA denotes the smoothing coefficient in the exponential moving average method and θ′_t denotes the updated parameters of the teacher network.
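Eq. (3) is applied parameter by parameter. A minimal sketch, with the parameters stored in a dict keyed by name (a real implementation would iterate over the network's weight tensors):

```python
def ema_update(theta_t, theta_s, alpha_ema=0.99):
    """Exponential moving average teacher update, eq. (3)."""
    return {name: alpha_ema * theta_t[name] + (1.0 - alpha_ema) * theta_s[name]
            for name in theta_t}
```

A larger α_EMA makes the teacher change more slowly, which smooths the consistency target over many student updates.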
Compared with the prior art, the invention has the following advantages and beneficial effects: the remote sensing image deep-network semi-supervised semantic segmentation method based on transformation consistency regularization exploits the latent information in unlabeled sample data, further improving the recognition accuracy of the model when labeled samples are limited and unlabeled samples are abundant, which makes it better suited to practical scenarios where labeled remote sensing images are scarce.
Drawings
FIG. 1: the semi-supervised framework designed by the invention;
FIG. 2: the three different random transformations used in the invention;
FIG. 3: example visualizations of segmentation results of the method of the invention.
Detailed Description
To facilitate understanding and implementation by those of ordinary skill in the art, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described here are merely illustrative and explanatory and do not limit the invention.
Referring to the flow chart of FIG. 1, the invention provides a remote sensing image deep-network semi-supervised semantic segmentation method based on transformation consistency regularization, comprising the following steps:
Step 1: First, divide the remote sensing image dataset into labeled samples D_L = {(x_i^L, y_i^L)}, i = 1, …, N_L, and unlabeled samples D_U = {x_i^U}, i = 1, …, N_U, where x_i^L denotes a labeled image, y_i^L denotes the label corresponding to that image, N_L denotes the number of labeled images, x_i^U denotes an unlabeled image, and N_U denotes the number of unlabeled images.
Step 2: In the training phase, construct a student network S and a teacher network T, whose parameters are denoted θ_s and θ_t respectively.
Step 3: Randomly select m samples from the labeled set and m samples from the unlabeled set.
Step 4: Input the selected labeled samples into the student network and compute the supervised loss L_s, defined as the pixel-wise cross entropy:

L_s = -(1/(h·w)) Σ_{i=1..h} Σ_{j=1..w} Σ_{k=1..c} y_L(i,j,k) · log f(x_L; θ_s)(i,j,k)    (1)

where h and w denote the height and width of the image, c denotes the number of classes, x_L denotes a labeled sample, f(x_L; θ_s) denotes the feature map obtained by inputting the labeled sample into the student network, and y_L denotes the one-hot ground-truth label of the sample.
Step 5: Back-propagate and update the student network parameters θ_s by a gradient descent algorithm.
Step 6: Select a random transformation τ(·) as the perturbation; an affine transformation, a grid-shuffle transformation, or a CutMix transformation may be used. The affine transformation randomly translates the image within [-0.2, 0.2] of its height and width, randomly scales it within [0.5, 1.5], and randomly rotates it within [-180°, 180°]. The grid-shuffle transformation uses a 3 × 3 grid.
Step 7: Apply the random transformation τ(·) to the unlabeled samples to obtain the perturbed samples τ(x_U).
Step 8: Input the perturbed unlabeled samples into the student network to obtain the output feature map f(τ(x_U); θ_s).
Step 9: Input the original unlabeled samples into the teacher network to obtain the output feature map f(x_U; θ_t); then apply the same transformation τ(·) to this feature map to obtain another feature map τ(f(x_U; θ_t)).
Step 10: From the two output feature maps f(τ(x_U); θ_s) and τ(f(x_U; θ_t)), compute the consistency regularization loss L_c of the unsupervised part, taking the mean square error as the loss:

L_c = d(τ(f(x_U; θ_t)), f(τ(x_U); θ_s))    (2)

where x_U denotes an unlabeled sample, τ(·) denotes the random transformation, θ_s and θ_t denote the parameters of the student network and the teacher network respectively, f(·; θ_s) and f(·; θ_t) denote the student network model and the teacher network model respectively, and d(·,·) denotes the mean-square-error function.
Step 11: Back-propagate and update the student network parameters θ_s by a gradient descent algorithm.
Step 12: Update the parameters of the teacher network using the parameters of the student network; an exponential moving average method may be adopted:

θ′_t = α_EMA · θ_t + (1 − α_EMA) · θ_s    (3)

where α_EMA denotes the smoothing coefficient in the exponential moving average method and θ′_t denotes the updated parameters of the teacher network.
Step 13: Return to step 3 and iterate until training is finished.
Step 14: In the testing stage, slide a window over the image, input the image patch of each window into the network to obtain the prediction result of each window, and finally obtain the segmentation result of the remote sensing image.
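The sliding-window inference of step 14 can be sketched as follows. A non-overlapping stride is assumed for simplicity; overlapping windows with prediction averaging, and batched forward passes, are common refinements not shown here:

```python
import numpy as np

def sliding_window_segment(image, predict_patch, win=256, stride=256):
    """Slide a win x win window over the image and stitch per-window label maps."""
    h, w = image.shape[:2]
    result = np.zeros((h, w), dtype=np.int64)
    for top in range(0, h, stride):
        for left in range(0, w, stride):
            bottom, right = min(top + win, h), min(left + win, w)
            patch = image[top:bottom, left:right]
            # predict_patch stands in for a forward pass through the trained network
            result[top:bottom, left:right] = predict_patch(patch)
    return result
```

Because remote sensing images are usually far larger than the network's input size, this tiling is what turns per-window predictions into a full-scene segmentation map.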
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (5)

1. The remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization is characterized by comprising the following steps:
step 1: firstly, dividing a remote sensing image dataset into labeled samples D_L = {(x_i^L, y_i^L)}, i = 1, …, N_L, and unlabeled samples D_U = {x_i^U}, i = 1, …, N_U, wherein x_i^L denotes a labeled image, y_i^L denotes the label corresponding to that image, N_L denotes the number of labeled images, x_i^U denotes an unlabeled image, and N_U denotes the number of unlabeled images;
step 2: in the training phase, constructing a student network S and a teacher network T, the parameters of which are denoted θ_s and θ_t;
step 3: randomly selecting m samples from the labeled samples and m samples from the unlabeled samples respectively;
step 4: inputting the selected labeled samples into the student network and computing the supervised loss L_s;
step 5: back-propagating and updating the student network parameters θ_s by a gradient descent algorithm;
step 6: selecting a random transformation τ(·) as the perturbation;
step 7: applying the random transformation τ(·) to the unlabeled samples selected in step 3 to obtain the perturbed unlabeled samples τ(x_U);
step 8: inputting the perturbed unlabeled samples into the student network to obtain the output feature map f(τ(x_U); θ_s);
step 9: inputting the unlabeled samples selected in step 3 into the teacher network to obtain the output feature map f(x_U; θ_t), and then applying the same transformation τ(·) to this feature map to obtain the transformed feature map τ(f(x_U; θ_t));
step 10: computing the consistency regularization loss L_c from the two output feature maps f(τ(x_U); θ_s) and τ(f(x_U; θ_t));
step 11: back-propagating and updating the student network parameters θ_s by a gradient descent algorithm;
step 12: updating the parameters of the teacher network by using the parameters of the student network;
step 13: returning to step 3 and iterating until training is finished;
step 14: in the testing stage, sliding a window over the image, inputting the image patch of each window into the network to obtain the prediction result of each window, and finally obtaining the segmentation result of the remote sensing image.
2. The remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization according to claim 1, characterized in that: the supervised loss L_s in step 4 is defined as the pixel-wise cross entropy:

L_s = -(1/(h·w)) Σ_{i=1..h} Σ_{j=1..w} Σ_{k=1..c} y_L(i,j,k) · log f(x_L; θ_s)(i,j,k)    (1)

wherein h and w denote the height and width of the image, c denotes the number of classes, x_L denotes a labeled sample, f(x_L; θ_s) denotes the feature map obtained by inputting the labeled sample into the student network, and y_L denotes the one-hot ground-truth label of the sample.
3. The remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization according to claim 1, characterized in that: the random transformation τ(·) in step 6 uses an affine transformation, a grid-shuffle transformation, or a CutMix transformation; wherein the affine transformation randomly translates the image within [-0.2, 0.2] of its height and width, randomly scales it within [0.5, 1.5], and randomly rotates it within [-180°, 180°]; the grid-shuffle transformation uses a 3 × 3 grid.
4. The remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization according to claim 1, characterized in that: the consistency regularization loss L_c of the unsupervised part in step 10 takes the mean square error as the loss:

L_c = d(τ(f(x_U; θ_t)), f(τ(x_U); θ_s))    (2)

wherein x_U denotes an unlabeled sample, τ(·) denotes the random transformation, θ_s and θ_t denote the parameters of the student network and the teacher network respectively, f(·; θ_s) and f(·; θ_t) denote the student network model and the teacher network model respectively, and d(·,·) denotes the mean-square-error function.
5. The remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization according to claim 1, characterized in that: in step 12, the parameters of the teacher network are updated by an exponential moving average method:

θ′_t = α_EMA · θ_t + (1 − α_EMA) · θ_s    (3)

wherein α_EMA denotes the smoothing coefficient in the exponential moving average method and θ′_t denotes the updated parameters of the teacher network.
CN202110678330.6A 2021-06-18 2021-06-18 Remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization Active CN113378736B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110678330.6A CN113378736B (en) 2021-06-18 2021-06-18 Remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110678330.6A CN113378736B (en) 2021-06-18 2021-06-18 Remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization

Publications (2)

Publication Number Publication Date
CN113378736A CN113378736A (en) 2021-09-10
CN113378736B true CN113378736B (en) 2022-08-05

Family

ID=77577716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110678330.6A Active CN113378736B (en) 2021-06-18 2021-06-18 Remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization

Country Status (1)

Country Link
CN (1) CN113378736B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114189416B (en) * 2021-12-02 2023-01-10 电子科技大学 Digital modulation signal identification method based on consistency regularization
CN114332135B (en) * 2022-03-10 2022-06-10 之江实验室 Semi-supervised medical image segmentation method and device based on dual-model interactive learning
CN114792349B (en) * 2022-06-27 2022-09-06 中国人民解放军国防科技大学 Remote sensing image conversion map migration method based on semi-supervised generation countermeasure network

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112885468A (en) * 2021-01-26 2021-06-01 深圳大学 Teacher consensus aggregation learning method based on random response differential privacy technology

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022550222A (en) * 2019-09-30 2022-11-30 ムサシ オート パーツ カナダ インコーポレイテッド System and method for AI visual inspection
CN112036335B (en) * 2020-09-03 2023-12-26 南京农业大学 Inverse convolution guided semi-supervised plant leaf disease identification and segmentation method
CN112347930B (en) * 2020-11-06 2022-11-29 天津市勘察设计院集团有限公司 High-resolution image scene classification method based on self-learning semi-supervised deep neural network

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112885468A (en) * 2021-01-26 2021-06-01 深圳大学 Teacher consensus aggregation learning method based on random response differential privacy technology

Also Published As

Publication number Publication date
CN113378736A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
CN113378736B (en) Remote sensing image semi-supervised semantic segmentation method based on transformation consistency regularization
Yasrab et al. RootNav 2.0: Deep learning for automatic navigation of complex plant root architectures
Hu et al. Deep learning-based investigation of wind pressures on tall building under interference effects
CN110097075B (en) Deep learning-based marine mesoscale vortex classification identification method
CN105893968B (en) The unrelated person's handwriting recognition methods end to end of text based on deep learning
CN109033998A (en) Remote sensing image atural object mask method based on attention mechanism convolutional neural networks
CN114511728B (en) Method for establishing intelligent detection model of esophageal lesion of electronic endoscope
CN111401156B (en) Image identification method based on Gabor convolution neural network
CN104680193B (en) Online objective classification method and system based on quick similitude network integration algorithm
CN106960415A (en) A kind of method for recovering image based on pixel-recursive super-resolution model
CN108596243A (en) The eye movement for watching figure and condition random field attentively based on classification watches figure prediction technique attentively
CN108805102A (en) A kind of video caption detection and recognition methods and system based on deep learning
CN107463954A (en) A kind of template matches recognition methods for obscuring different spectrogram picture
CN113988147B (en) Multi-label classification method and device for remote sensing image scene based on graph network, and multi-label retrieval method and device
CN115760869A (en) Attention-guided non-linear disturbance consistency semi-supervised medical image segmentation method
Cao et al. Ancient mural classification method based on improved AlexNet network
CN115410059A (en) Remote sensing image part supervision change detection method and device based on contrast loss
Patel Bacterial colony classification using atrous convolution with transfer learning
CN114399763A (en) Single-sample and small-sample micro-body ancient biogenetic fossil image identification method and system
CN114037699A (en) Pathological image classification method, equipment, system and storage medium
Lodhi et al. Deep Neural Network for Recognition of Enlarged Mathematical Corpus
CN113111803B (en) Small sample character and hand-drawn sketch identification method and device
Wenzel et al. Facade interpretation using a marked point process
CN104573727A (en) Dimension reduction method of handwritten digital image
Wang et al. Self-supervised learning for high-resolution remote sensing images change detection with variational information bottleneck

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant