CN113052858A - Panorama segmentation method based on semantic stream - Google Patents


Info

Publication number
CN113052858A
CN113052858A (application CN202110307902.XA)
Authority
CN
China
Prior art keywords
segmentation
semantic
network
feature
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110307902.XA
Other languages
Chinese (zh)
Other versions
CN113052858B (en)
Inventor
贾海涛
毛晨
齐晨阳
王云
任利
许文波
周焕来
贾宇明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202110307902.XA priority Critical patent/CN113052858B/en
Publication of CN113052858A publication Critical patent/CN113052858A/en
Application granted granted Critical
Publication of CN113052858B publication Critical patent/CN113052858B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4007Interpolation-based scaling, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Abstract

The invention discloses a panorama segmentation technique based on semantic stream, with broad applicability and generalization capability in the panorama segmentation setting. Panorama segmentation faces an inherent tension between high-level semantics and high resolution: the feature map is typically upsampled by simple bilinear interpolation so that high-level and low-level features can be fused, but this operation suffers from feature misalignment. By introducing a semantic stream method, the invention aligns the feature maps produced during panorama segmentation more accurately, thereby improving the accuracy of panorama segmentation.

Description

Panorama segmentation method based on semantic stream
Technical Field
The invention belongs to the field of computer vision, specifically to image segmentation technology that performs pixel-level segmentation of images for scene analysis.
Background
Image segmentation has long been a research hotspot in computer vision, and with the rise of deep learning, semantic segmentation and instance segmentation methods have developed rapidly. The recently proposed panorama segmentation task combines the characteristics of semantic segmentation and instance segmentation and has become a research frontier in the field of image segmentation. However, the accuracy of current panorama segmentation techniques is still insufficient for industrial application, so research on improving panorama segmentation accuracy has important practical significance.
In panorama segmentation, the network generally extracts semantic features at each layer of the image through stacked convolution and pooling operations, and then feeds these features into semantic segmentation and instance segmentation sub-networks to predict the semantic category and instance information of each pixel. Deep features generally capture richer image semantics, but at lower resolution; low-level features near the input retain higher resolution and more image detail, but their semantic predictions are less accurate. As shown in fig. 1, segmenting with high-level semantic features preserves more of the image's semantic structure but loses small objects badly, while lower-level features recover more image detail but predict semantic categories less accurately.
High resolution and high-level semantics are both intrinsic requirements of the panorama segmentation task. In panorama segmentation, a simple bilinear interpolation is usually used to upsample the feature map and restore the resolution of the high-level features, which are then fused with the low-level features to ease the tension between high resolution and high-level semantics.
To enhance panorama segmentation performance, networks generally strengthen the extraction of context semantics or the recovery of detail information through a series of methods, commonly including dilated (hole) convolution and feature pyramids.
Dilated (hole) convolution enlarges the receptive field by inserting holes into the convolution kernel, improving the context understanding ability of high-level features. As shown in fig. 2, a 3 × 3 convolution kernel is expanded to an effective 5 × 5 kernel by zero filling, so that one convolution operation gathers information from a wider area and the network's context semantic extraction capability improves. However, this method loses the continuity of information and segments small objects poorly.
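As a quick check of the receptive-field arithmetic above (a sketch of the formula, not the patent's code): a k × k kernel with dilation d spans k + (k − 1)(d − 1) input pixels per axis.

```python
def effective_kernel_size(k: int, d: int) -> int:
    """Effective spatial extent of a k x k convolution kernel with dilation d.

    Dilation inserts (d - 1) gaps (zeros) between adjacent kernel taps, so
    the kernel spans k + (k - 1) * (d - 1) input pixels along each axis.
    """
    return k + (k - 1) * (d - 1)

# The case from fig. 2: a 3x3 kernel with dilation 2 covers a 5x5 region.
assert effective_kernel_size(3, 2) == 5
# Dilation 1 reduces to an ordinary convolution.
assert effective_kernel_size(3, 1) == 3
```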
The feature pyramid fuses features of different scales to integrate high-level and low-level feature information effectively. A classical representative is the FPN, which upsamples deep features and adds them to shallow features to construct a pyramid of features at different sizes, as shown in fig. 3. The FPN structure fuses features of different scales well, allowing the network to handle small objects and fine detail. However, because features of different sizes may diverge during learning, directly adding two features with a large semantic gap inevitably weakens the expressive power of the multi-scale features.
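The FPN-style top-down step described here — upsample the deeper map, then add the shallower one — can be sketched in pure Python on scalar feature maps (real FPNs operate on multi-channel tensors; the function names are illustrative):

```python
def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of a 2-D feature map (list of lists)."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate each row
    return out

def fpn_merge(deep, shallow):
    """FPN-style top-down step: upsample the deeper map and add the shallower one."""
    up = upsample2x(deep)
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(up, shallow)]

deep = [[1.0]]             # 1x1 deep feature (coarse, semantic)
shallow = [[0.5, 0.5],     # 2x2 shallow feature (fine, detailed)
           [0.5, 0.5]]
merged = fpn_merge(deep, shallow)
assert merged == [[1.5, 1.5], [1.5, 1.5]]
```

Note the plain addition: this is exactly the point where a semantic gap or spatial misalignment between the two levels gets baked into the fused feature.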
Both methods optimize network segmentation by enhancing features, but neglect the problems introduced by feature fusion. Because network depth corresponds to receptive field, deeper features cover larger areas of the original image and carry richer semantic information; however, due to the pooling layers, the exact position in the original image corresponding to a pixel of a deep feature map cannot be determined. Bilinear interpolation is a fixed interpolation scheme, so the upsampled feature map cannot be well aligned with the higher-resolution features. Fusing misaligned feature maps inevitably weakens the feature information and thus lowers the final panorama segmentation accuracy. The invention improves panorama segmentation by addressing this problem.
Disclosure of Invention
To remedy these shortcomings of panorama segmentation technology, the invention aligns the feature maps used in panorama segmentation by introducing a semantic stream method, making feature fusion more reasonable and enhancing the network's segmentation effect. The technical scheme adopted by the invention is as follows:
Step 1: Use ResNet-50 as the backbone network for panorama segmentation feature extraction, producing feature maps C1, C2, C3, C4 and C5.
Step 2: Construct the semantic stream module, i.e. the Flow Alignment Module (FAM) shown in fig. 4.
Step 3: This step is the core of the patent: construct a feature sharing module by embedding the semantic stream module to align the feature maps from step 1 and perform multi-scale fusion.
Step 4: Feed the multi-scale features obtained in step 3 into the semantic segmentation and instance segmentation sub-networks.
Step 5: Embed semantic stream modules in the semantic segmentation and instance segmentation sub-networks to align the sub-network feature maps.
Step 6: Fuse the sub-network segmentation results to obtain the panorama segmentation result. The overall network structure is shown in fig. 5.
Compared with the prior art, the invention can effectively align the network feature maps, thereby improving the panorama segmentation effect, and is applicable to a variety of network structures.
Description of the figures and accompanying tables
FIG. 1: semantic segmentation results of feature maps from different layers.
FIG. 2: schematic diagram of dilated (hole) convolution.
FIG. 3: schematic diagram of the feature pyramid structure.
FIG. 4: schematic diagram of the FAM module.
FIG. 5: overall structure of the panorama segmentation network of the invention.
FIG. 6: structure of the Mask R-CNN network used in the invention.
FIG. 7: ablation experiment results of the invention.
FIG. 8: panorama segmentation experiments with different feature extraction network structures.
Detailed Description
The invention is further described below with reference to the accompanying drawings and tables.
First, the network extracts features from the input image using a ResNet-50 backbone, which comprises five stages, denoted res1, res2, res3, res4 and res5. Each stage outputs a feature layer whose resolution is 1/2, 1/4, 1/8, 1/16 and 1/32 of the original image, denoted C1, C2, C3, C4 and C5 respectively. C2-C5 serve as the network input features. A semantic stream (FAM) module learns a semantic flow field between each pair of adjacent feature maps; the higher-level feature map is upsampled according to this flow field and fused with the lower-level map, forming a feature sharing module with multi-scale features.
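The stride pattern of the five stages fixes the sizes of C1-C5; a minimal sketch (function name illustrative; integer division assumes input dimensions divisible by 32):

```python
def backbone_feature_sizes(h: int, w: int) -> dict:
    """Spatial sizes of C1..C5 for a ResNet-50-style backbone whose five
    stages have overall strides 2, 4, 8, 16 and 32 w.r.t. the input."""
    strides = [2, 4, 8, 16, 32]
    return {f"C{i + 1}": (h // s, w // s) for i, s in enumerate(strides)}

sizes = backbone_feature_sizes(512, 1024)
# C5 is 1/32 of the input while C2 is 1/4 -- the gap FAM has to bridge.
assert sizes["C5"] == (16, 32)
assert sizes["C2"] == (128, 256)
```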
The semantic stream module is constructed as follows:
Given the input feature maps of two adjacent levels, F_l and F_{l-1}, F_l is first upsampled to the size of F_{l-1}; the two feature maps are then concatenated, and a sub-network containing two 3 × 3 convolutions predicts the semantic flow field Δ_{l-1} ∈ R^{H_{l-1} × W_{l-1} × 2}. This process can be expressed as:
Δ_{l-1} = conv_l(cat(F_l, F_{l-1}))  (1)
For each pixel p_{l-1} on the spatial grid Ω_{l-1}, the flow field maps it to a point p_l on the upper level l. The value at p_l is then obtained by a differentiable bilinear sampling mechanism, which linearly interpolates over the four nearest neighbors of p_l.
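The flow-guided warping just described can be sketched in pure Python (scalar single-channel maps for clarity; the actual FAM operates on multi-channel tensors, typically via a differentiable grid-sampling op, and the function names here are illustrative):

```python
def bilinear_sample(fmap, x, y):
    """Bilinear lookup of fmap (H x W list of lists) at a fractional
    location (x, y), interpolating the four nearest neighbours."""
    h, w = len(fmap), len(fmap[0])
    x0 = min(max(int(x), 0), w - 1)
    y0 = min(max(int(y), 0), h - 1)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = fmap[y0][x0] * (1 - fx) + fmap[y0][x1] * fx
    bot = fmap[y1][x0] * (1 - fx) + fmap[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def flow_warp(fmap, flow):
    """Warp fmap with a per-pixel flow field flow[y][x] = (dx, dy),
    sampling each output pixel at its flow-shifted source location."""
    h, w = len(fmap), len(fmap[0])
    return [[bilinear_sample(fmap, x + flow[y][x][0], y + flow[y][x][1])
             for x in range(w)] for y in range(h)]

# Zero flow reproduces the input exactly; a learned flow shifts the sampling.
assert flow_warp([[0.0, 1.0]], [[(0.0, 0.0), (0.0, 0.0)]]) == [[0.0, 1.0]]
```

Predicting the flow field (the two 3 × 3 convolutions of equation (1)) and then warping with this sampler is what lets the upsampling adapt per pixel, instead of being fixed as in plain bilinear interpolation.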
The multi-scale features are then fed into the semantic segmentation and instance segmentation sub-networks respectively. The semantic segmentation sub-network upsamples the features with the same semantic stream alignment operation to restore the resolution of the high-level feature maps, and predicts semantic categories through a 1 × 1 convolution followed by a softmax layer. The instance segmentation sub-network uses the Mask R-CNN network shown in fig. 6 as its main network: an RPN extracts candidate regions for instance objects from the input features, and an RoIAlign layer then performs regression prediction of box, class and mask. FAM modules are embedded in the mask branch to align the instance segmentation sub-network feature maps.
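The final per-pixel prediction of the semantic branch (a 1 × 1 convolution producing class logits, then softmax) reduces to the following for a single pixel — an illustrative sketch, not the patent's code:

```python
import math

def softmax(logits):
    """Convert per-class logits for one pixel into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_class(logits):
    """Semantic category of a pixel: argmax of the softmax distribution."""
    probs = softmax(logits)
    return probs.index(max(probs))

# The softmax output sums to 1, and argmax picks the highest-logit class.
assert abs(sum(softmax([1.0, 2.0, 3.0])) - 1.0) < 1e-9
assert predict_class([0.1, 2.0, -1.0]) == 1
```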
And finally, fusing the segmentation results of the two sub-networks to obtain a final panoramic segmentation result.
The specific method comprises the following steps:
(1) The ResNet-50 backbone network extracts features from the input image, yielding five feature layers C1, C2, C3, C4 and C5; C2-C5 serve as the network input features.
(2) Feature maps C2-C5 are passed through 1 × 1 convolutions to fix the number of channels at 256, giving C2'-C5'.
(3) The C5' layer is taken as the output layer P5; a semantic flow field Δ4 is then computed from C4' and P5, P5 is upsampled 2× according to Δ4, and the result is added to C4' to yield P4.
(4) Following step (3), P3 and P2 are obtained in turn from top to bottom. P2-P5 are fed as shared features into the semantic segmentation and instance segmentation sub-networks for the corresponding regression predictions.
(5) The semantic segmentation sub-network first maps the shared features to 128 channels with a 1 × 1 convolution, then uses FAM modules to upsample the features to 1/4 of the original image size.
(6) The features obtained in step (5) are concatenated, and semantic categories are predicted with a 1 × 1 convolution and softmax.
(7) The instance segmentation sub-network feeds the input features into an RPN to generate candidate regions, then performs object detection and mask generation on each region to obtain the instance segmentation result.
(8) The semantic segmentation and instance segmentation results are fused to obtain the final panorama segmentation result.
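Steps (2)-(4) build the shared pyramid top-down; at the level of spatial shapes the procedure looks like this (names illustrative; the real steps also fix channels at 256 and warp with the flow field Δ before adding):

```python
def shared_pyramid_sizes(h: int, w: int) -> dict:
    """Spatial sizes of the shared features P2..P5 for an h x w input.

    P5 inherits C5's stride-32 size; each top-down step upsamples 2x
    (flow-guided in the invention) and adds the matching C' layer.
    """
    sizes = {"P5": (h // 32, w // 32)}
    for lvl in (4, 3, 2):                 # top-down: P4, then P3, then P2
        ph, pw = sizes[f"P{lvl + 1}"]
        sizes[f"P{lvl}"] = (ph * 2, pw * 2)
    return sizes

sizes = shared_pyramid_sizes(512, 512)
assert sizes["P5"] == (16, 16)
assert sizes["P2"] == (128, 128)   # 1/4 of the input, as used by the heads
```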
The improved method aligns the panorama segmentation network feature maps well. Because panorama segmentation must guarantee both high resolution and high-level semantics, feature maps of different scales generally need to be combined, which makes the impact of feature misalignment on the panorama segmentation algorithm particularly pronounced.
To verify the effectiveness of the scheme, an ablation experiment was carried out on the COCO dataset. ResNet-50 was used as the feature extraction backbone, and the Mask R-CNN + FPN combination (denoted RN50-MR-CNN) served as the baseline algorithm; the results are shown in Table 1.
In Table 1, FAMFPN denotes the feature sharing module designed by the invention, while FAMs and FAMi denote replacing the bilinear interpolation upsampling of the original algorithm with the FAM structure in the semantic segmentation sub-branch and the instance segmentation sub-branch, respectively. The experimental results in Table 1 can be summarized as follows:
a. The first three rows of the ablation experiment correspond to the feature sharing module, the semantic segmentation sub-branch FAM module and the instance segmentation sub-branch FAM module, respectively; the results show that each module improves the network.
b. Comparing the first three rows of Table 1, the feature sharing module yields the largest gain, improving PQ by 0.9%. The FAM modules in the semantic segmentation and instance segmentation branches also effectively improve panorama segmentation accuracy.
c. Stacking the feature sharing module with the two sub-branch FAM modules further improves the segmentation effect, raising the final panorama segmentation quality by a further 0.4%, which shows that the overall optimization scheme is feasible and effective. Because the feature sharing module already applies one semantic stream alignment to the feature maps, the deviation between feature maps inside the branches is reduced, so the additional gain from the in-branch FAM modules is smaller.
The method generalizes well as an optimization for panorama segmentation and migrates readily to other panorama segmentation models, and careful network design can improve the segmentation effect further. Table 2 shows the effect of conventional network optimization methods within this scheme.
Table 2 compares panorama segmentation performance under the ResNet-50 and ResNet-101 backbones, with the 1 × 1 convolution that maps C5 to P5 in the feature sharing module replaced by a PPM module. The PPM is the pyramid pooling module proposed in PSPNet; it captures long-range context information well and is widely used in image segmentation.
The comparison shows that replacing ResNet-50 with ResNet-101 in the original network greatly improves the segmentation effect. Replacing the simple 1 × 1 convolution with a PPM module also improves the network, showing that the FAM module generalizes well and does not interfere with common feature enhancement and context semantic enhancement methods.
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except combinations where mutually exclusive features or/and steps are present.

Claims (3)

1. A panorama segmentation method based on semantic stream, characterized by comprising the following steps:
Step 1: use ResNet-50 as the backbone network for panorama segmentation feature extraction; extract feature maps C1, C2, C3, C4 and C5;
Step 2: construct the semantic stream (FAM) module;
Step 3: as the core of the patent, construct a feature sharing module: embed the semantic stream module to align the feature maps from step 1 and perform multi-scale fusion;
Step 4: feed the multi-scale features obtained in step 3 into the semantic segmentation and instance segmentation sub-networks;
Step 5: embed semantic stream modules in the semantic segmentation and instance segmentation sub-networks to align the sub-network feature maps;
Step 6: fuse the sub-network segmentation results to obtain the panorama segmentation result.
2. The method of claim 1, wherein the upsampling method in the feature sharing module in step 3 is the semantic stream alignment method.
3. The method of claim 1, wherein the sub-network features in step 4 are aligned using the semantic stream alignment method.
CN202110307902.XA 2021-03-23 2021-03-23 Panorama segmentation method based on semantic stream Active CN113052858B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110307902.XA CN113052858B (en) 2021-03-23 2021-03-23 Panorama segmentation method based on semantic stream


Publications (2)

Publication Number Publication Date
CN113052858A true CN113052858A (en) 2021-06-29
CN113052858B CN113052858B (en) 2023-02-14

Family

ID=76514344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110307902.XA Active CN113052858B (en) 2021-03-23 2021-03-23 Panorama segmentation method based on semantic stream

Country Status (1)

Country Link
CN (1) CN113052858B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113963177A (en) * 2021-11-11 2022-01-21 电子科技大学 CNN-based building mask contour vectorization method
CN115063777A (en) * 2022-06-27 2022-09-16 厦门大学 Unmanned vehicle obstacle identification method in field environment

Citations (11)

Publication number Priority date Publication date Assignee Title
CN109801297A (en) * 2019-01-14 2019-05-24 浙江大学 A kind of image panorama segmentation prediction optimization method realized based on convolution
CN109801307A (en) * 2018-12-17 2019-05-24 中国科学院深圳先进技术研究院 A kind of panorama dividing method, device and equipment
CN110008808A (en) * 2018-12-29 2019-07-12 北京迈格威科技有限公司 Panorama dividing method, device and system and storage medium
CN110276765A (en) * 2019-06-21 2019-09-24 北京交通大学 Image panorama dividing method based on multi-task learning deep neural network
CN111242954A (en) * 2020-01-20 2020-06-05 浙江大学 Panorama segmentation method with bidirectional connection and shielding processing
CN111428726A (en) * 2020-06-10 2020-07-17 中山大学 Panorama segmentation method, system, equipment and storage medium based on graph neural network
CN111524150A (en) * 2020-07-03 2020-08-11 支付宝(杭州)信息技术有限公司 Image processing method and device
CN111598912A (en) * 2019-02-20 2020-08-28 北京奇虎科技有限公司 Image segmentation method and device
US20200357143A1 (en) * 2019-05-09 2020-11-12 Sri International Semantically-aware image-based visual localization
US20200401938A1 (en) * 2019-05-29 2020-12-24 The Board Of Trustees Of The Leland Stanford Junior University Machine learning based generation of ontology for structural and functional mapping
CN113920378A (en) * 2021-11-09 2022-01-11 西安交通大学 Attention mechanism-based radix bupleuri seed identification method


Non-Patent Citations (2)

Title
ZHAOBIN WANG et al.: "Image segmentation evaluation: a survey of methods", Artificial Intelligence Review *
毛晨: "Research on panoptic segmentation algorithms based on deep learning", China Master's Theses Full-text Database (Information Science and Technology) *


Also Published As

Publication number Publication date
CN113052858B (en) 2023-02-14

Similar Documents

Publication Publication Date Title
CN113052858B (en) Panorama segmentation method based on semantic stream
CN101477684B (en) Process for reconstructing human face image super-resolution by position image block
CN112396607B (en) Deformable convolution fusion enhanced street view image semantic segmentation method
CN111325751A (en) CT image segmentation system based on attention convolution neural network
Piao et al. Accuracy improvement of UNet based on dilated convolution
CN115457498A (en) Urban road semantic segmentation method based on double attention and dense connection
CN111340080B (en) High-resolution remote sensing image fusion method and system based on complementary convolution characteristics
CN113096136A (en) Panoramic segmentation method based on deep learning
CN113222818A (en) Method for reconstructing super-resolution image by using lightweight multi-channel aggregation network
CN116740527A (en) Remote sensing image change detection method combining U-shaped network and self-attention mechanism
CN116883801A (en) YOLOv8 target detection method based on attention mechanism and multi-scale feature fusion
CN115240066A (en) Remote sensing image mining area greening monitoring method and system based on deep learning
CN111914853B (en) Feature extraction method for stereo matching
CN116704350B (en) Water area change monitoring method and system based on high-resolution remote sensing image and electronic equipment
CN114529450B (en) Face image super-resolution method based on improved depth iteration cooperative network
CN114494284B (en) Scene analysis model and method based on explicit supervision area relation
CN116051850A (en) Neural network target detection method, device, medium and embedded electronic equipment
CN111462006B (en) Multi-target image complement method
CN114332780A (en) Traffic man-vehicle non-target detection method for small target
CN114445712A (en) Expressway pavement disease identification method based on improved YOLOv5 model
CN114359120A (en) Remote sensing image processing method, device, equipment and storage medium
CN112488115A (en) Semantic segmentation method based on two-stream architecture
CN112233127B (en) Down-sampling method for curve splicing image
Chen et al. Image Super-Resolution Based on Additional Self-Loop Supervision
CN116485811A (en) Stomach pathological section gland segmentation method based on Swin-Unet model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant