CN111027449B - Positioning and identifying method for paper archive electronic image archive chapter - Google Patents
- Publication number
- CN111027449B (application CN201911230888.7A)
- Authority
- CN
- China
- Prior art keywords
- training
- archive
- chapter
- model
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a method for locating and identifying the archiving chapter (seal) in electronic images of paper archives, comprising the following steps. Marking seal training data: select a fixed part and an exhaustively enumerable part of the chapter as the features for archive-chapter detection and construct a training set. Configuring the model: an improved SSD network model that uses ResNet-101 as the front-end feature extraction network. Training the model: train the improved SSD network model with the training set. Using the model: input the electronic document to be declassified into the trained improved SSD network model, output the bounding regions of positive samples, and judge from the positive-sample scores whether a "secret-level" archiving chapter is present. The invention improves the detection accuracy of the archived chapter.
Description
Technical Field
The invention relates to the technical field of seal detection, in particular to a method for locating and identifying the archiving chapter of an electronic image of a paper archive.
Background
After a paper archive exceeds the time limit specified by its security classification, it can, following the principle of classified creation and timely declassification, pass a declassification review process and then be disclosed and reused. The electronic images of these paper archives likewise need to be marked as "declassified" during use, to avoid misunderstanding when they are used publicly.
Current approaches to locating and identifying the archiving chapter in electronic images of paper archives mainly fall into the following modes: manual inspection and identification, computer pattern recognition, and machine learning.
In the manual mode, staff inspect each image and locate and identify the archived-chapter region by hand. This mode spends a great deal of labor cost on such repetitive operations.
The computer pattern-recognition mode detects and locates the chapter by image pattern matching. This method has poor anti-interference capability and is sensitive to noise; the quality of the electronic file and the clarity of the archive chapter strongly affect the detection result, and both detection efficiency and accuracy are low.
The machine-learning mode relies on a single deep-learning object-detection model to detect and identify the target. Traditional convolutional or fully connected networks more or less lose information as it propagates through the layers, and they also suffer from vanishing or exploding gradients, which prevents very deep networks from being trained.
With the great progress of deep-learning technology in object detection and recognition, this field has developed rapidly, for example the SSD object detection and recognition model based on a feedforward convolutional network. However, the SSD model has some disadvantages: gradient explosion during back-propagation makes the network difficult to train, and even if training can be completed at a smaller depth, accuracy saturates after a certain number of training iterations and then degrades, which strongly affects the output result.
Disclosure of Invention
The technical problem the invention aims to solve is to provide a method for locating and identifying the electronic-image archiving chapter of a paper archive that improves the detection accuracy of the archived chapter.
The technical solution adopted to solve this problem is as follows. The method for locating and identifying the electronic-image archiving chapter of a paper archive comprises the following steps:
(1) Marking seal training data: select a fixed part and an exhaustively enumerable part as the features for archive-chapter detection and construct a training set;
(2) Configuring the model: an improved SSD network model that uses ResNet-101 as the front-end feature extraction network;
(3) Training the model: train the improved SSD network model with the training set;
(4) Using the model: input the electronic document to be declassified into the trained improved SSD network model, output the bounding regions of positive samples, and judge from the positive-sample scores whether a "secret-level" archiving chapter is present.
In step (1), when the training set is constructed, the archive chapter is disassembled according to its different security classes and different layout styles.
The improved SSD network model in step (2) comprises a VGG base network, extended convolution layers and prediction module layers arranged in sequence, where the third and fifth convolution layers of the VGG base network are replaced by the third and fifth convolution layers of ResNet-101.
The extended convolution layers are 5 in total, and their size decreases layer by layer.
The prediction module layers are 5 in total; a residual connection is added in each layer, together with a skip-connection structure to the sampling layer.
Several additional prediction branches are added at the front end of the same feature-map set in the VGG base network.
The positive-to-negative sample ratio in the training set adopted in step (3) is 1:3, and the following random operations are performed on each sample: (a) use the original image and rotate it randomly; (b) randomly sample one region slice from the original image, then enlarge or reduce each sampled region slice to a fixed size and rotate it randomly.
Training in step (3) is divided into two stages. First stage: load the initialization network of the improved SSD network model, freeze its parameters, add only the extended convolution layers (no prediction module layers), set the learning rate to 1e-3 and then 1e-4, and perform iterative training at each rate. Second stage: unfreeze the network parameters frozen in the first stage, add the prediction module layers, set the learning rate to 1e-3 and then 1e-4, and perform iterative training at each rate.
Advantageous effects
Owing to the adoption of this technical solution, the invention has the following advantages and positive effects over the prior art. By changing the VGG16 base network in the SSD framework to ResNet-101 and using ResNet as the SSD front-end network, the whole network only needs to learn part of the difference between input and output, which simplifies the learning target and difficulty, reduces the number of samples required, and improves the accuracy of detection, localization and recognition. By adding prediction modules behind the base network, the model can better use and fuse deep features, increasing the context information of the features, alleviating the resolution loss caused by the lack of high-level semantic features in low-level feature prediction, and improving detection accuracy.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of an exemplary embodiment of an electronic image file;
FIG. 3 is an archive chapter style diagram in an embodiment of the present invention;
FIG. 4 is another archive chapter style diagram in an embodiment of the present invention;
FIG. 5 is an exemplary diagram of archive chapter sample disassembly in an embodiment of the present invention;
FIG. 6 is a schematic diagram of the architecture of an improved SSD network model in an embodiment of the present invention;
FIG. 7 is a schematic view of the structure of each layer of a prediction module layer in an embodiment of the present invention;
FIG. 8 is a schematic diagram of an addition predicted branch in an embodiment of the invention;
FIG. 9 is a diagram of an example of a sample rotated 180 degrees in a positive position in an embodiment of the invention;
FIG. 10 is a diagram of an example of a sample rotated by a certain angle in an embodiment of the present invention;
FIG. 11 is a sample illustration of random sample area slicing in an embodiment of the invention;
FIG. 12 is an exemplary view of the output of the archive seal security identification positioning result in the embodiment of the present invention.
Detailed Description
The invention will be further illustrated with reference to specific examples. It is to be understood that these examples are illustrative of the present invention and are not intended to limit the scope of the present invention. Further, it is understood that various changes and modifications may be made by those skilled in the art after reading the teachings of the present invention, and such equivalents are intended to fall within the scope of the claims appended hereto.
The embodiment of the invention relates to a positioning and identifying method of a paper archive electronic image archive chapter, which is shown in fig. 1 and comprises the following steps:
(1) Marking seal training data. The types of chapters to be identified are limited: the common security classes in archive electronic images are "secret" and "confidential", the chapters are relatively fixed in shape, and they exist in a planar, two-dimensional form. The style of an archive electronic image is shown in fig. 2, and typical archiving-chapter styles are shown in fig. 3 and fig. 4. Because the universal archive chapter has a fairly regular shape, this embodiment selects a fixed part and an exhaustively enumerable part as the features for archive-chapter detection to construct the training set. Referring to the patterns of fig. 3 and fig. 4, the archive chapter is disassembled in order to reduce the number of training samples, and mainly the portion carrying the security class is detected and identified. The archiving chapter of an archive electronic image generally takes a few forms: by security class, "secret" and "confidential"; by layout style, horizontal and vertical. The training set is therefore divided into the 4 forms shown in fig. 5. Each of the 4 original classes is additionally rotated; this data enhancement gives the model better generalization capability.
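The four-way disassembly of the training classes described above (two security levels times two layout styles) can be sketched as follows; the class names are illustrative placeholders, not the patent's actual labels:

```python
from itertools import product

def build_label_set():
    """Enumerate the 4 base archive-chapter classes: two security
    levels x two layout styles (names are illustrative placeholders)."""
    levels = ["secret", "confidential"]
    layouts = ["horizontal", "vertical"]
    return [f"{lv}_{la}" for lv, la in product(levels, layouts)]

labels = build_label_set()  # 4 base classes before rotation augmentation
```

Rotation-based augmentation then multiplies each of these 4 classes without introducing new labels.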
(2) Configuring the model. The improved SSD network model, which employs ResNet-101 as the front-end feature extraction network, is shown in fig. 6.
The present embodiment selects SSD as the basic detection architecture and makes the following improvements:
the first improvement is to use ResNet-101 to replace VGG selected by SSD as a base network, namely, the conv3_x layer and the conv5_x layer in the VGG network are replaced by the convolution layer in the original ResNet-101. As the ResNet-101 network is deeper, the feature extraction capability is enhanced, more semantic information can be carried, and the detection and recognition precision is improved.
The second improvement is to add extended convolution layers at the end of the ResNet-101 network (see the dark part of fig. 6). The size of these convolution layers decreases layer by layer, adding considerable context information and allowing prediction at multiple scales. The basic SSD algorithm feeds the feature maps output by the front end directly into the regression and classification outputs; in this embodiment, the front-end feature maps are instead fed into a 5-layer deconvolution model behind them, and the high-level and low-level feature maps are fused to output corrected feature maps.
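The layer-by-layer shrinkage of the extended layers can be illustrated with plain output-size arithmetic. The kernel/stride/padding schedule below is an assumption (the patent does not give exact values), chosen so that five appended convolutions each produce a smaller map:

```python
def conv_out(size, kernel=3, stride=2, pad=1):
    """Spatial size of a square feature map after one convolution."""
    return (size + 2 * pad - kernel) // stride + 1

# Five stride-2 3x3 convolutions appended after a 19x19 backbone map;
# the schedule is assumed, chosen so the map shrinks layer by layer.
sizes = [19]
for _ in range(5):
    sizes.append(conv_out(sizes[-1]))
```

Each successively smaller map sees a larger effective receptive field, which is what supplies the multi-scale context mentioned above.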
The third improvement is to add a layer of prediction modules at the end of the network, because in the original SSD model large gradient values occur and larger receptive fields and feature values cannot be passed on. To obtain higher-level context information during detection, the prediction module is placed after the deconvolution layers, and a residual connection is added in each prediction layer. To enrich the features, a skip-connection structure to the sampling layer is also added. This makes fuller use of the features output by the deep layers of the SSD network, so fitting during training is faster and training speed is effectively improved. Each prediction module layer deconvolves deep features using the structure shown in fig. 7. Let H(X) denote the target optimal mapping; fitting another mapping F(X) with stacked nonlinear layers such that F(X) = H(X) - X, the original optimal mapping can equivalently be written as F(X) + X. That is, the shortcut in the feed-forward network of fig. 7 is realized by the formula Y = F(X, {W_i}) + W_s X, where X is the input vector, Y the output vector and W_i the parameters of the weight layers. When the input and output dimensions of the module are inconsistent, the linear projection W_s is added to match the dimensions.
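The residual mapping Y = F(X, {W_i}) + W_s X can be sketched numerically; the two-layer form of F and the ReLU are assumptions made for illustration:

```python
import numpy as np

def residual_block(x, Wi, Ws=None):
    """Y = F(x, {Wi}) + Ws @ x.  F is sketched as two linear layers
    with a ReLU in between; Ws is the optional linear projection used
    only when input and output dimensions differ."""
    h = np.maximum(0.0, Wi[0] @ x)   # first weight layer + ReLU
    f = Wi[1] @ h                    # second weight layer
    shortcut = x if Ws is None else Ws @ x
    return f + shortcut

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
Wi = [rng.standard_normal((4, 4)), rng.standard_normal((4, 4))]
y = residual_block(x, Wi)            # same dims: identity shortcut
```

When F collapses to zero the block reduces to the identity, which is why the network "only needs to learn the input-output difference".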
A fourth improvement targets the archive-chapter detection task itself: more prediction branches are used in the SSD base network to improve the handling of small and medium-sized objects. For the square archive chapter, two additional prediction branches are placed at the front end of the same feature-map set in the original network structure; the structure of the added feature branches is shown in fig. 8.
(3) Training the model: train the improved SSD network model with the training set. Because there is a large number of default boxes, a large number of negative samples remain after matching; to keep the dataset cleaner, the positive-to-negative sample ratio is taken as 1:3 in this embodiment.
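A 1:3 positive-to-negative ratio over default boxes is typically enforced by hard negative mining, keeping only the highest-loss negatives. A minimal sketch, with the loss-based mining criterion assumed (the patent states only the ratio):

```python
def hard_negative_mining(losses, labels, neg_pos_ratio=3):
    """Keep every positive default box and only the hardest negatives,
    at a negatives:positives ratio of 3:1.  `losses` are per-box
    background losses (higher = harder negative); using loss as the
    mining criterion is an assumption."""
    positives = [i for i, l in enumerate(labels) if l == 1]
    negatives = sorted((i for i, l in enumerate(labels) if l == 0),
                       key=lambda i: losses[i], reverse=True)
    return positives, negatives[: neg_pos_ratio * len(positives)]

labels = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
losses = [0.0, 0.9, 0.1, 0.0, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4]
pos, neg = hard_negative_mining(losses, labels)
```

With 2 positives, exactly 6 of the 8 negatives survive, preserving the 1:3 ratio regardless of how many default boxes matched background.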
For each image used to train the archived chapter, one of the following choices is made at random: (1) use the original image and rotate it randomly (see figs. 9 and 10); (2) randomly sample one region slice (patch) from the original image (see fig. 11); after sampling, each sampled region slice is enlarged or reduced to a fixed size and rotated randomly, so that the model predicts better across multi-scale features.
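The two random augmentation choices (rotate the original, or sample a region slice and rescale it to a fixed size) can be sketched on a toy 2-D grid; nearest-neighbour index mapping stands in for real image resizing:

```python
import random

def rot90(img, k):
    """Rotate a 2-D grid (list of rows) by k * 90 degrees."""
    for _ in range(k % 4):
        img = [list(row) for row in zip(*img)][::-1]
    return img

def random_crop_resize(img, out_size):
    """Sample a random region slice and map it back to a fixed size
    by nearest-neighbour indexing (stand-in for real image resizing)."""
    h, w = len(img), len(img[0])
    ch, cw = random.randint(1, h), random.randint(1, w)
    top, left = random.randint(0, h - ch), random.randint(0, w - cw)
    crop = [row[left:left + cw] for row in img[top:top + ch]]
    return [[crop[i * ch // out_size][j * cw // out_size]
             for j in range(out_size)] for i in range(out_size)]

random.seed(0)
img = [[r * 4 + c for c in range(4)] for r in range(4)]
aug = random_crop_resize(rot90(img, 2), out_size=3)  # rotate, slice, rescale
```

Rescaling every slice to the same fixed size is what exposes the detector to the multi-scale variation described above.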
Model training is divided into two stages. (1) Load the model and initialize the network, freeze its parameters, then add only the deconvolution model (i.e., the extended convolution layers) and not the prediction model (i.e., the prediction module layers), so that only the deconvolution model is trained. Set the learning rate to 1e-3 and 1e-4 and perform iterative training at each rate. (2) Unfreeze the parameters frozen in the first stage, add the prediction model, set the learning rate to 1e-3 and 1e-4, and perform iterative training at each rate.
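The two-stage freeze/unfreeze schedule can be illustrated with a toy parameter dictionary; the additive "update" is a placeholder for a real gradient step, not actual optimization:

```python
# Stage 1 freezes the loaded backbone and trains only the extended
# (deconvolution) layers; stage 2 unfreezes everything and adds the
# prediction-module layers.
params = {"backbone": 1.0, "extended": 0.0, "prediction": None}

def train(params, trainable, lr):
    for name in trainable:
        params[name] += lr          # placeholder for a gradient step

# Stage 1: backbone frozen, prediction layers not yet added.
for lr in (1e-3, 1e-4):
    train(params, trainable=["extended"], lr=lr)
assert params["backbone"] == 1.0    # frozen weights untouched

# Stage 2: unfreeze, add the prediction module, train all groups.
params["prediction"] = 0.0
for lr in (1e-3, 1e-4):
    train(params, trainable=["backbone", "extended", "prediction"], lr=lr)
```

In a real framework the freeze corresponds to disabling gradients for the backbone's parameters during stage 1 and re-enabling them in stage 2.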
Training the model from scratch requires good gradient estimates. On the basis of the original dataset, the training data are classified and scaled to a certain size, which reduces the pressure on the hardware during later feature extraction.
(4) Using the model: input the electronic document to be declassified into the trained improved SSD network model, output the bounding regions of positive samples, and judge from the positive-sample scores whether a "secret-level" archiving chapter is present. Each region is fixed relative to the position of its corresponding feature-map unit. In each feature-map unit, the model predicts the offsets between the obtained regions and the default regions, together with the score that each region contains the object (the probability of each class is calculated).
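The offset-plus-default-box prediction described above follows the usual SSD decoding. A sketch with variance scaling omitted (an assumption, since the patent does not give the decoding formula):

```python
import math

def decode_box(default_box, offsets):
    """Decode predicted (dx, dy, dw, dh) offsets against a default box
    (cx, cy, w, h), SSD-style; variance scaling is omitted."""
    cx, cy, w, h = default_box
    dx, dy, dw, dh = offsets
    return (cx + dx * w, cy + dy * h, w * math.exp(dw), h * math.exp(dh))

def softmax(scores):
    """Turn per-class scores into per-class probabilities."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

box = decode_box((0.5, 0.5, 0.2, 0.2), (0.1, -0.1, 0.0, 0.0))
probs = softmax([2.0, 0.5, 0.1])  # e.g. "secret" chapter vs. background vs. other
```

The highest class probability over a decoded box is then compared against a threshold to decide whether a "secret-level" chapter is present.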
(5) Outputting the result: when the model is used, the output is a series of fixed-size bounding boxes together with each box's prediction score, identified as the final detection output; see fig. 12.
The output tests of this embodiment compare three scenarios: (1) the native SSD model based on the VGG16 network; (2) this scheme's model trained on positive samples only; (3) this scheme's model trained on augmented samples (positive samples, negative samples, and positive and negative samples rotated by certain angles).
Compared with the SSD model based on the VGG16 network, this scheme improves recognition of both horizontal and vertical seals to a certain extent, and after sample augmentation the scheme's model improves recognition and detection accuracy to a larger extent, as shown in table 1:
TABLE 1 comparison of native SSD model and results from this scenario
* AP: average precision of the outputs detected after training the model on augmented data; mAP: mean average precision.
Compared with the original SSD model, this scheme uses the deeper ResNet-101 as the base feature network, which raises the abstraction level of the features and improves detection accuracy for small and medium objects such as archive chapters. In addition, extra deconvolution layers are added after the feature network, improving the resolution of the feature sampling layers. Skip connections between the low-level features and the deconvolution and prediction layers better filter out interfering look-alikes in the sample under test, further improving archive-chapter detection accuracy.
In addition, this scheme trains the model with augmented samples. Because the archive chapter in an archive electronic image may be deflected by some angle, the training samples are augmented: upright samples, samples rotated 180 degrees, and samples rotated by various other angles are all used as training samples. As shown in table 1, these training schemes greatly improve the model's recognition rate.
Claims (6)
1. A method for locating and identifying the electronic-image archiving chapter of a paper archive, characterized by comprising the following steps:
(1) Marking seal training data: selecting a fixed part and an exhaustively enumerable part as the features for archive-chapter detection to construct a training set;
(2) Configuring the model: an improved SSD network model using ResNet-101 as the front-end feature extraction network; the improved SSD network model comprises a VGG base network, extended convolution layers and prediction module layers arranged in sequence, wherein the third and fifth convolution layers in the VGG base network are replaced by the third and fifth convolution layers of ResNet-101;
(3) Training the model: training the improved SSD network model with the training set; the training is divided into two stages, the first stage being: loading the initialization network of the improved SSD network model, freezing its parameters, adding only the extended convolution layers and not the prediction module layers, setting the learning rate to 1e-3 and 1e-4, and performing iterative training at each rate; and the second stage being: unfreezing the network parameters frozen in the first-stage training, adding the prediction module layers, setting the learning rate to 1e-3 and 1e-4, and performing iterative training at each rate;
(4) Using the model: inputting the electronic document to be declassified into the trained improved SSD network model, outputting the bounding regions of positive samples, and judging from the positive-sample scores whether a "secret-level" archiving chapter is present.
2. The method for locating and identifying a paper-archive electronic-image archiving chapter according to claim 1, wherein in step (1), when the training set is constructed, the archive chapter is disassembled according to its different security classes and different layout styles.
3. The method for locating and identifying a paper-archive electronic-image archiving chapter according to claim 1, wherein the extended convolution layers are 5 in total and their size decreases layer by layer.
4. The method for locating and identifying a paper-archive electronic-image archiving chapter according to claim 1, wherein the prediction module layers are 5 in total, a residual connection is added in each layer, and a skip-connection structure to the sampling layer is added.
5. The method for locating and identifying a paper-archive electronic-image archiving chapter according to claim 1, wherein several prediction branches are added at the front end of the same feature-map set in the VGG base network.
6. The method for locating and identifying a paper-archive electronic-image archiving chapter according to claim 1, wherein the positive-to-negative sample ratio in the training set adopted in step (3) is 1:3, and the following random operations are performed on each sample:
(a) using the original image and rotating it randomly; (b) randomly sampling one region slice from the original image, then enlarging or reducing each sampled region slice to a fixed size and rotating it randomly.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911230888.7A CN111027449B (en) | 2019-12-05 | 2019-12-05 | Positioning and identifying method for paper archive electronic image archive chapter |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111027449A CN111027449A (en) | 2020-04-17 |
CN111027449B true CN111027449B (en) | 2023-05-30 |
Family
ID=70207980
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911230888.7A Active CN111027449B (en) | 2019-12-05 | 2019-12-05 | Positioning and identifying method for paper archive electronic image archive chapter |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111027449B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107563372A (en) * | 2017-07-20 | 2018-01-09 | 济南中维世纪科技有限公司 | A kind of license plate locating method based on deep learning SSD frameworks |
CN109447078A (en) * | 2018-10-23 | 2019-03-08 | 四川大学 | A kind of detection recognition method of natural scene image sensitivity text |
WO2019144575A1 (en) * | 2018-01-24 | 2019-08-01 | 中山大学 | Fast pedestrian detection method and device |
Non-Patent Citations (2)
Title |
---|
Lu Haitao; Wu Lei; Zhou Jianyun; Zheng Ruirui; He Jianjun. Seal detection in Manchu documents based on Faster R-CNN and data augmentation. Journal of Dalian Minzu University. 2018, (05), full text. *
Zhao Qingbei; Yuan Chang'an. An MSSD object detection method based on deep learning. Enterprise Science and Technology & Development. 2018, (05), full text. *
Also Published As
Publication number | Publication date |
---|---|
CN111027449A (en) | 2020-04-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Hung et al. | Scene parsing with global context embedding | |
CN110197205B (en) | Image identification method of multi-feature-source residual error network | |
US20190042743A1 (en) | Malware detection and classification using artificial neural network | |
US8311368B2 (en) | Image-processing apparatus and image-processing method | |
WO2016054778A1 (en) | Generic object detection in images | |
CN104978521B (en) | A kind of method and system for realizing malicious code mark | |
CN104077765B (en) | Image segmentation device, image partition method | |
CN114897782B (en) | Gastric cancer pathological section image segmentation prediction method based on generation type countermeasure network | |
CN111353062A (en) | Image retrieval method, device and equipment | |
Siddiqui et al. | A robust framework for deep learning approaches to facial emotion recognition and evaluation | |
CN111815526A (en) | Rain image rainstrip removing method and system based on image filtering and CNN | |
EP1930852A1 (en) | Image search method and device | |
CN109582960B (en) | Zero example learning method based on structured association semantic embedding | |
CN111027449B (en) | Positioning and identifying method for paper archive electronic image archive chapter | |
CN105677713A (en) | Position-independent rapid detection and identification method of symptoms | |
CN113269752A (en) | Image detection method, device terminal equipment and storage medium | |
CN112434730A (en) | GoogleNet-based video image quality abnormity classification method | |
CN110188790B (en) | Automatic generation method and system for picture sample | |
CN102375990B (en) | Method and equipment for processing images | |
CN113688263B (en) | Method, computing device, and storage medium for searching for image | |
Fetisov et al. | Unsupervised Prostate Cancer Histopathology Image Segmentation via Meta-Learning | |
Montagner et al. | NILC: a two level learning algorithm with operator selection | |
CN111626373A (en) | Multi-scale widening residual error network, small target identification detection network and optimization method thereof | |
CN111813975A (en) | Image retrieval method and device and electronic equipment | |
CN111753290B (en) | Software type detection method and related equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||