CN111080648B - Real-time image semantic segmentation algorithm based on residual learning


Info

Publication number
CN111080648B
CN111080648B (application CN201911215735.5A)
Authority
CN
China
Prior art keywords
network
picture
training
semantic segmentation
residual
Prior art date
Legal status
Active
Application number
CN201911215735.5A
Other languages
Chinese (zh)
Other versions
CN111080648A (en)
Inventor
韩静
楼啸天
陈霄宇
柏连发
张毅
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN201911215735.5A
Publication of CN111080648A
Application granted
Publication of CN111080648B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a real-time image semantic segmentation algorithm based on residual learning, in which a picture is fed into a convolutional neural network to obtain a segmented picture with category information. Step one: label the original picture to obtain a labeled picture, and form a training set together with the original picture. Step two: construct a novel semantic segmentation network according to a residual feature extraction method and a single-network prediction structure. Step three: load the training set and train the novel semantic segmentation network in a staged training mode to obtain a trained model. Step four: feed the picture to be segmented into the novel semantic segmentation network and load the trained model to obtain a segmentation result. The speed is increased while the accuracy is remarkably improved.

Description

Real-time image semantic segmentation algorithm based on residual learning
Technical Field
The invention relates to the technical field of image segmentation, in particular to a real-time image semantic segmentation method based on residual learning.
Background
Image semantic segmentation is a key technology in the fields of image processing and computer vision and an important link in a computer's recognition of image content. Its research results can be applied in numerous fields such as robot navigation, unmanned driving, virtual reality, and image retrieval, and it has important practical value and academic research significance.
Semantic segmentation must identify every pixel of the input, so a feature map covering all pixels must be constructed; neural networks are usually built in a pyramid shape to increase the displacement invariance of the output and to reduce the amount of computation.
The common semantic segmentation algorithm ENet adopts an encoder-decoder structure: the encoder is trained to obtain a low-resolution target, and a decoder behind the encoder then reconstructs the high-resolution target. This form accounts for the feature sizes of the target at different scales, saves a large amount of computation, and can achieve real-time segmentation. However, because of the repeated down-sampling in the pyramid network, the image resolution drops markedly and a large amount of spatial information is lost, so the final segmentation effect is often poor, and the network performs badly at segmentation edges and image details. To address this, UNet stacks low-level features onto the corresponding high-level features along the channel dimension for prediction, i.e. it combines the low-level features containing spatial information with the high-level features containing semantic information. But using the low-level features directly not only produces information redundancy; the feature fusion also sharply increases the amount of computation and slows network inference, so real-time segmentation cannot be achieved. RefineNet proposes a reusable multi-path refinement network that fuses the multi-resolution features of different layers to produce high-resolution, high-quality results; however, this fusion adds many convolutions and increases computation, reaching only about 20 fps on 512 × 512 images. Meanwhile, the training mode commonly adopted at present targets a single scale, so the direction of convolutional neural network training cannot be well supervised and network performance cannot reach its optimum.
Disclosure of Invention
The invention aims to provide a real-time image semantic segmentation method based on residual learning.
The technical solution realizing the purpose of the invention is as follows: a real-time image semantic segmentation algorithm based on residual learning feeds a picture into a convolutional neural network to obtain a segmented picture with category information, and comprises the following steps:
Step one: label the original picture to obtain a labeled picture, and form a training set together with the original picture;
Step two: construct a novel semantic segmentation network according to the residual feature extraction method and the single-network prediction structure;
Step three: load the training set and train the novel semantic segmentation network in a staged training mode to obtain a trained model;
Step four: feed the picture to be segmented into the novel semantic segmentation network and load the trained model to obtain the segmentation result.
Compared with the prior art, the invention has the following remarkable advantages: (1) the representation of segmentation at image details and edges is improved through the feature residual, so the segmentation effect is greatly improved; (2) the single-network prediction structure removes a large amount of convolution in the decoding module, raising the model's running speed so that segmentation reaches real time; (3) the staged training mode lets the network learn hierarchically, achieving a better segmentation effect.
Drawings
FIG. 1 is an algorithm flow diagram.
Fig. 2 is a convolutional neural network structure.
Fig. 3 is a feature residual module.
Fig. 4 is a single network prediction structure.
FIG. 5 is a graph showing the segmentation effect of the models ENet, ENet-decoder, and RPNet(ENet) on the CamVid data.
FIG. 6 is a graph of the segmentation effect of the model ENet on the Cityscapes data at different scales.
Detailed Description
The invention is further described with reference to the drawings and examples.
The invention relates to a real-time image semantic segmentation algorithm based on residual learning, in which a picture is fed into a convolutional neural network to obtain a segmented picture with category information; the algorithm comprises the following steps:
Step one: label the original picture to obtain a labeled picture, and form a training set together with the original picture;
Step two: construct a novel semantic segmentation network according to the residual feature extraction method and the single-network prediction structure;
Step three: load the training set and train the novel semantic segmentation network in a staged training mode to obtain a trained model;
Step four: feed the picture to be segmented into the novel semantic segmentation network and load the trained model to obtain the segmentation result.
In step one, a picture in the training set is selected and divided into regions according to the different targets it contains; each region is labeled pixel by pixel with the category of its target, and once every target in the picture has been labeled, the whole picture is labeled. After all original pictures have been labeled, the original pictures and all labeled pictures together serve as the training set of the neural network.
The method for constructing the novel semantic segmentation network comprises the following steps:
step 3.1: taking an Enet feature coding module as a basic feature extraction network, sending original pictures in a training set into the network, performing convolution to extract main features, extracting the main features after each down-sampling, wherein the main features are respectively half-size main features F 1/2 Feature F of quarter-sized body 1/4 And feature F of one-eighth size body 1/8
Step 3.2: extract residual features with the residual extraction method, obtaining the residual features f_{1/4}, f_{1/2} of the corresponding scales by differencing the features at adjacent scales;
Step 3.3: construct a single-network prediction structure by removing the decoding structure of the ENet network; the corresponding body feature is enlarged by interpolation and the residual feature is added directly to obtain the new body feature, with the expression:

F_{1/4} = θ(F_{1/8}) + f_{1/4},  F_{1/2} = θ(F_{1/4}) + f_{1/2}

where θ denotes a twofold interpolation of the current feature map;
step 3.4: and combining the obtained main body characteristic and the residual error characteristic through a single network prediction structure to obtain a novel semantic segmentation network.
The specific method for extracting residual features in step 3.2 is as follows: take adjacent body features of different sizes, enlarge the smaller body feature twofold by interpolation, and subtract it directly from the larger body feature to obtain the residual feature, with the expression:

f_{1/4} = F_{1/4} - θ(F_{1/8}),  f_{1/2} = F_{1/2} - θ(F_{1/4})

where θ denotes a twofold interpolation of the current feature map.
The training method is a staged training method: labeled data of different scales are used to train the residuals of different scales in stages. Specifically, the labeled results L in the training set are down-sampled to different sizes L_1, L_{1/2}, L_{1/4}, L_{1/8}, corresponding one-to-one to the sizes of the body features. When training the network, a cross-entropy loss function first constrains the result of F_{1/8} against L_{1/8}; once trained, F_{1/4} is trained against L_{1/4}, and so on until the final segmentation result F_1 is obtained.
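The label pyramid behind this staged scheme can be sketched as follows. Nearest-neighbour subsampling is assumed here (matching the nearest-neighbour interpolation mentioned later in the text), since it keeps class indices discrete, unlike average pooling which would mix classes:

```python
import numpy as np

def downsample_labels(labels, stride):
    # Nearest-neighbour subsampling: pick every stride-th pixel so that
    # class indices stay valid discrete labels.
    return labels[::stride, ::stride]

# Toy 8x8 label map with 12 classes, standing in for a labeled result L.
L_full = np.arange(64).reshape(8, 8) % 12
pyramid = {f"1/{s}": downsample_labels(L_full, s) for s in (1, 2, 4, 8)}
# Training then proceeds coarse-to-fine: F_1/8 vs L_1/8 first,
# then F_1/4 vs L_1/4, and so on up to F_1 vs L_1.
```

Each pyramid level supplies the target for the cross-entropy loss at the matching feature scale.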
The invention improves the performance of segmentation at image details and edges through the feature residual, greatly improving the segmentation effect. Meanwhile, the single-network prediction structure removes a large amount of convolution in the decoding module, raising the model's running speed so that segmentation reaches real time. The staged training mode lets the network learn hierarchically, achieving a better segmentation effect. The network structure is shown in fig. 2.
Specifically, to extract the feature residual, the invention provides a brand-new residual feature extraction method inspired by the Laplacian image residual pyramid and ResNet. First, the features extracted by two network blocks of adjacent scales are taken out; the lower-resolution feature is enlarged to the size of the higher-resolution feature by bilinear interpolation, and the difference between the two is taken as the residual feature. The simplest 1 × 1 convolution kernel is used to compute a target residual, which is fused by point-wise addition with the body computed at the previous stage to obtain the body result of the current stage. By analogy, the residual features of each stage can be computed to obtain the residuals of the target at different scales, and starting from the top-level low-resolution body target, the residuals predicted at the lower layers are gradually combined to restore the target result at the original scale. The residuals are shown in fig. 3.
To reduce the amount of computation and better match the feature learning module, a single-network prediction structure is designed to predict the different parts of the target within the forward propagation of a single pyramid network, as shown in fig. 4. The decoder structure of the network is removed entirely; prediction starts directly from the lowest-resolution feature, and the feature at the next resolution is obtained by adding the residual feature, until the feature at the original resolution is reached. This removes the large number of convolution kernels in the original decoder and therefore improves efficiency markedly.
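The decoder-free prediction path described above (start at the coarsest body feature, then repeatedly upsample and add the next residual) can be sketched as follows; the nearest-neighbour upsampling and the toy feature values are assumptions for illustration only:

```python
import numpy as np

def theta(x):
    # Twofold upsampling (nearest-neighbour stand-in for interpolation).
    return x.repeat(2, axis=0).repeat(2, axis=1)

def single_network_predict(body_smallest, residuals):
    # Start from the lowest-resolution body feature and repeatedly
    # upsample and add the residual of the next finer scale,
    # replacing the convolutional decoder entirely.
    F = body_smallest
    for f in residuals:  # residuals ordered coarse-to-fine
        F = theta(F) + f
    return F

F_18 = np.zeros((2, 2))                    # toy F_1/8
res = [np.ones((4, 4)), np.ones((8, 8))]   # toy f_1/4, f_1/2
out = single_network_predict(F_18, res)
```

Because the loop contains only an upsample and an addition per scale, the cost per step is negligible compared with the convolutional decoder it replaces.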
Meanwhile, to train the network to a better degree, a hierarchical training strategy is designed: a residual loss function is designed and the residual features are used to predict multi-level residuals of the target, so training proceeds from top to bottom, markedly improving the segmentation of small targets and details. For the semantic features of different resolutions and levels, the original labeled data is nearest-neighbour interpolated to obtain labeled data of the corresponding resolutions, and training is then carried out once per level, from top to bottom, until the last layer is trained.
(I) Data sets
Public data sets CamVid and Cityscapes are adopted in the experiment to verify the effectiveness of the algorithm.
The CamVid data set comprises 367 training pictures with matching labeled data and 101 validation pictures with matching labeled data; the picture resolution is 360 × 480, and the pictures, of city streets in different environments, cover 12 categories.
The Cityscapes data set comprises 2975 training pictures with labeled data, 500 validation pictures with labeled data, and 1525 test pictures without labeled data; the picture resolution is 1024 × 2048. The picture size during training was 512 × 1024.
During training, data augmentation is adopted: a displacement of 0-2 pixels and random left-right flipping of the picture.
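This augmentation can be sketched as below; `np.roll` is a simplifying assumption for the 0-2 pixel displacement (a real pipeline would pad or crop rather than wrap around), and the function name is hypothetical:

```python
import numpy as np

def augment(img, rng):
    # Random displacement of 0-2 pixels in each axis and a random
    # left-right flip, matching the augmentation described in the text.
    dy, dx = rng.integers(0, 3, size=2)       # shift of 0-2 pixels
    img = np.roll(img, shift=(dy, dx), axis=(0, 1))
    if rng.random() < 0.5:                    # flip half the time
        img = img[:, ::-1]
    return img

rng = np.random.default_rng(0)
img = np.arange(360 * 480).reshape(360, 480)  # toy CamVid-sized image
aug = augment(img, rng)
```

The same shift and flip would be applied to the label map so that pixels and labels stay aligned.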
(II) training strategy
A GTX 1080Ti graphics card was used in this experiment, and the model was trained on the PyTorch platform. Adam is adopted as the optimization algorithm of model training, and a BatchNorm layer and a ReLU activation function follow each convolution layer. Class weights are computed with a new formula, and a new learning-rate update strategy is used: the training process is divided into four stages. In the first stage, the result at 1/8 of the original resolution is trained with an initial learning rate of 0.0005 for 300 iterations. In the second stage, the 1/8 result is enlarged twofold by interpolation, the first set of residual features is added, and the result at 1/4 of the original resolution is trained with an initial learning rate of 0.0005 for 300 iterations. In the third stage, the 1/4 result is enlarged twofold by interpolation, the second-stage residual features are added, and the 1/2 result is trained with an initial learning rate of 0.0005 for 300 iterations. In the fourth stage, the result of the previous stage is interpolated directly to the original resolution, with an initial learning rate of 0.0005 for 300 iterations.
(III) Experimental results
Using the same network model and the same training mode, the experimental results of the lightweight feature algorithm on the CamVid and Cityscapes data sets are shown in tables 1 and 2. The CamVid segmentation effect is shown in fig. 5, and the segmentation training results on the Cityscapes data set are shown in fig. 6.
TABLE 1
Model MIoU FPS Parameters FLOPS
ENet 59.89 111 0.39M 1.50B
ERFNet 60.54 133 2.07M 8.43B
ESPNet 62.6 205 0.68M 0.87B
FC-DenseNet 58.9 27 1.5M 26.29B
RPNet(ENet) 64.67 102 0.35M 1.36B
RPNet(ERFNet) 64.82 149 1.89M 6.78B
TABLE 2
Model Input Size mIoU iIoU FPS FLOPs
ENet 1024*512 58.3 34.4 77 4.03B
ERFNet 1024*512 68.0 40.4 59 25.60B
ESPNet 1024*512 60.3 31.8 139 3.19B
BiseNet 1536*768 68.4 - 69 26.37B
RPNet(ENet) 1024*512 63.37 39.0 88 4.28B
RPNet(ERFNet) 1024*512 67.9 44.9 123 20.71B
As can be seen from tables 1 and 2, RPNet based on ENet and ERFNet is significantly improved in mIoU while part of the computation is removed, making the network faster. Compared with other segmentation networks, RPNet also achieves fast and good results.
In the experiment, the algorithm is deployed on a 1080Ti and on the embedded board Nvidia Jetson TX2, whose GPU uses the Pascal architecture. The detection times on the embedded board are shown in table 3; it can be seen that at the different input resolutions the speed of RPNet is greatly improved over the comparison algorithms.
TABLE 3
(Table 3 appears as an image in the original publication and is not reproduced here.)
In summary, pixel-level semantic segmentation generally requires a large amount of computation, especially at large input sizes. In a segmentation model, an additional decoder structure is usually adopted, beyond feature extraction, to recover spatial information, but many details are lost through the repeated down-sampling in the encoder. The invention provides RPNet, a real-time image semantic segmentation algorithm based on residual learning, which adds feature residuals to the original encoder-decoder design, adopts a brand-new decoder-free single-network prediction structure, and uses a novel top-to-bottom staged training mode, increasing speed while markedly improving accuracy. Specifically, the body features are used to learn the main part of the target and the residual features to learn image edges and details; at test time, the residual information supplements the details of the body. Meanwhile, the single-network prediction structure passes through the encoder only once, so the decoder structure is removed and a large amount of computation is saved. Finally, the residual module divides the network into several shallow sub-networks, which benefits training. On the CamVid and Cityscapes data sets the method exceeds the benchmark algorithms in both speed and accuracy, reaching 154 FPS on a GTX 1080Ti and 22 FPS on the embedded platform Nvidia Jetson TX1, while the accuracies on the CamVid and Cityscapes data sets are improved by 4.78 and 5.07 respectively over the original ENet.
The real-time image semantic segmentation algorithm based on the RPNet framework extracts residual features and adopts the single-network prediction structure, improving the running efficiency of the algorithm while guaranteeing accuracy, as demonstrated on the CamVid and Cityscapes data sets.

Claims (2)

1. A real-time image semantic segmentation algorithm based on residual learning, characterized by comprising the following steps:
Step one: label the original picture to obtain a labeled picture, and form a training set together with the original picture;
Step two: construct a novel semantic segmentation network according to the residual feature extraction method and the single-network prediction structure;
Step three: load the training set and train the novel semantic segmentation network in a staged training mode to obtain a trained model;
Step four: feed the picture to be segmented into the novel semantic segmentation network and load the trained model to obtain the segmentation result;
the method for constructing the novel semantic segmentation network comprises the following steps:
step 2.1: taking an Enet feature coding module as a basic feature extraction network, sending original pictures in a training set into the network, performing convolution to extract main features, extracting the main features after each down-sampling, wherein the main features are respectively half-size main features F 1/2 Feature F of a quarter-sized body 1/4 And feature F of one-eighth size body 1/8
Step 2.2: extract residual features with the residual extraction method, obtaining the residual features f_{1/4}, f_{1/2} of the corresponding scales by differencing the features at adjacent scales;
Step 2.3: construct a single-network prediction structure by removing the decoding structure of the ENet network; the corresponding body feature is enlarged by interpolation and the residual feature is added directly to obtain the new body feature, with the expression:

F_{1/4} = θ(F_{1/8}) + f_{1/4},  F_{1/2} = θ(F_{1/4}) + f_{1/2}

where θ denotes a twofold interpolation of the current feature map;
step 2.4: combining the obtained main features and residual features through a single-network prediction structure to obtain a novel semantic segmentation network;
the specific method for extracting residual error features in step 2.2 comprises the following steps: taking adjacent main features with different sizes, amplifying the smaller main features by two times through interpolation, and directly subtracting the larger main features from the smaller main features to obtain residual features, wherein the expression is as follows:
Figure FDA0003802259290000012
wherein θ represents a twofold interpolation of the current profile;
the training method is a segmented training method, and for residual errors of different scales, labeled data of different scales are adopted for segmented training, and the specific method is as follows: downsampling labeled results L in a training set into different sizes L 1 ,L 1/2 ,L 1/4 ,L 1/8 Corresponding to the size of the main body characteristic one by one, and when the network is trained, firstly using a cross entropy loss function to restrict F 1/8 And L 1/8 As a result of (3), retraining F after training 1/4 And L 1/4 Until the final segmentation result F is obtained 1
2. The residual-learning-based real-time image semantic segmentation algorithm according to claim 1, characterized in that: in step one, a picture in the training set is selected and divided into regions according to the different targets it contains; each region is labeled pixel by pixel with the category of its target until the whole picture is labeled; after all original pictures have been labeled, the original pictures and all labeled pictures serve as the training set of the neural network.
CN201911215735.5A 2019-12-02 2019-12-02 Real-time image semantic segmentation algorithm based on residual learning Active CN111080648B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911215735.5A CN111080648B (en) 2019-12-02 2019-12-02 Real-time image semantic segmentation algorithm based on residual learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911215735.5A CN111080648B (en) 2019-12-02 2019-12-02 Real-time image semantic segmentation algorithm based on residual learning

Publications (2)

Publication Number Publication Date
CN111080648A CN111080648A (en) 2020-04-28
CN111080648B true CN111080648B (en) 2022-11-22

Family

ID=70312431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911215735.5A Active CN111080648B (en) 2019-12-02 2019-12-02 Real-time image semantic segmentation algorithm based on residual learning

Country Status (1)

Country Link
CN (1) CN111080648B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112756742A (en) * 2021-01-08 2021-05-07 南京理工大学 Laser vision weld joint tracking system based on ERFNet network
CN113033352B (en) * 2021-03-11 2024-02-23 浙江工业大学 Real-time mobile traffic violation detection method based on combination of improved target semantic segmentation and target detection model
CN113393476B (en) * 2021-07-07 2022-03-11 山东大学 Lightweight multi-path mesh image segmentation method and system and electronic equipment
CN113658189B (en) * 2021-09-01 2022-03-11 北京航空航天大学 Cross-scale feature fusion real-time semantic segmentation method and system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276354A (en) * 2019-05-27 2019-09-24 东南大学 A kind of training of high-resolution Streetscape picture semantic segmentation and real time method for segmenting

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10402690B2 (en) * 2016-11-07 2019-09-03 Nec Corporation System and method for learning random-walk label propagation for weakly-supervised semantic segmentation

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276354A (en) * 2019-05-27 2019-09-24 东南大学 A kind of training of high-resolution Streetscape picture semantic segmentation and real time method for segmenting

Also Published As

Publication number Publication date
CN111080648A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
CN111080648B (en) Real-time image semantic segmentation algorithm based on residual learning
CN110443842B (en) Depth map prediction method based on visual angle fusion
Jaritz et al. Sparse and dense data with cnns: Depth completion and semantic segmentation
Yang et al. Lego: Learning edge with geometry all at once by watching videos
CN108665496B (en) End-to-end semantic instant positioning and mapping method based on deep learning
CN110232394B (en) Multi-scale image semantic segmentation method
CN109101975B (en) Image semantic segmentation method based on full convolution neural network
JP6395158B2 (en) How to semantically label acquired images of a scene
EP3608844A1 (en) Methods for training a crnn and for semantic segmentation of an inputted video using said crnn
CN101976435B (en) Combination learning super-resolution method based on dual constraint
CN112052783B (en) High-resolution image weak supervision building extraction method combining pixel semantic association and boundary attention
Thisanke et al. Semantic segmentation using Vision Transformers: A survey
CN110349087B (en) RGB-D image high-quality grid generation method based on adaptive convolution
CN104657962A (en) Image super-resolution reconstruction method based on cascading linear regression
CN109447897B (en) Real scene image synthesis method and system
CN113449612B (en) Three-dimensional target point cloud identification method based on sub-flow sparse convolution
CN110634103A (en) Image demosaicing method based on generation of countermeasure network
CN116612288B (en) Multi-scale lightweight real-time semantic segmentation method and system
CN113657387A (en) Semi-supervised three-dimensional point cloud semantic segmentation method based on neural network
CN114821058A (en) Image semantic segmentation method and device, electronic equipment and storage medium
Douillard et al. Tackling catastrophic forgetting and background shift in continual semantic segmentation
CN116797787A (en) Remote sensing image semantic segmentation method based on cross-modal fusion and graph neural network
Salem et al. Semantic image inpainting using self-learning encoder-decoder and adversarial loss
CN111340189A (en) Space pyramid graph convolution network implementation method
Wang et al. A learnable joint spatial and spectral transformation for high resolution remote sensing image retrieval

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant