CN113076806A - Structure-enhanced semi-supervised online map generation method - Google Patents
- Publication number
- CN113076806A (application number CN202110259938.5A)
- Authority
- CN
- China
- Prior art keywords
- map
- remote sensing
- loss
- semi-supervised
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/20—Drawing from basic elements, e.g. lines or circles
- G06T11/206—Drawing of charts or graphs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides a structure-enhanced semi-supervised online map generation method and relates to the technical field of online maps. The method comprises the following specific steps: two image resources, namely remote sensing images and maps, are acquired from existing remote sensing image and map data sets or from the API provided by Google Maps and are normalized; a mapping relation between the two image resources is obtained for part of the samples; according to whether the two image resources match, the samples are divided into paired samples and unpaired samples through a file organization structure, and a semi-supervised training data set is constructed. The invention establishes a semi-supervised training method that follows cycle consistency and makes fuller use of the available resources. By combining the structural characteristics of CycleGAN and Pix2Pix, it creates the semi-supervised learning strategy of S2OM, which learns from paired samples and unpaired samples in stages, so that the model is trained more thoroughly and the map generation error rate is lower.
Description
Technical Field
The invention relates to the technical field of online maps, in particular to a structure-enhanced semi-supervised online map generation method.
Background
The conversion from remote sensing images to an online map service is a process of transferring spatial information from the carrier of the remote sensing image to the carrier of a map; it can also be regarded as converting a remote sensing image into a map image on the premise of preserving the necessary amount of information. Image-to-image conversion refers to converting an image in an original content domain into the corresponding image in a target content domain. The latest image-to-image conversion methods make full use of generative adversarial networks (GAN) to build image feature representations and to learn feature mappings and conversion methods, and they achieve good results. A typical generative adversarial network consists of a generator and a discriminator: the generator learns to produce fake samples with the same characteristics as real samples, and the discriminator learns to distinguish real samples from fake ones. The generative adversarial network trains the generator and the discriminator alternately so that their losses decrease in step, which optimizes both networks synchronously. As training proceeds, the discriminator becomes better at judging sample authenticity, which in turn pushes the samples produced by the generator ever closer to real samples.
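To make the alternating scheme described above concrete, the following minimal PyTorch sketch shows one generic GAN training step. The architectures, dimensions and hyper-parameters are placeholders chosen for illustration and are not taken from this patent.

```python
import torch
import torch.nn as nn

# Placeholder networks; the patent does not specify architectures.
G = nn.Sequential(nn.Linear(100, 784), nn.Tanh())   # generator: noise -> fake sample
D = nn.Sequential(nn.Linear(784, 1))                 # discriminator: flattened sample -> realness score

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real: torch.Tensor) -> None:
    """One alternating update: first the discriminator, then the generator."""
    noise = torch.randn(real.size(0), 100)
    fake = G(noise)

    # Discriminator step: real samples labelled 1, generated samples labelled 0.
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make D classify generated samples as real.
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```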
The Pix2Pix algorithm applies a conditional generative adversarial network to the picture in the original content domain and adds an L1 loss between the generated target-domain picture and the real target-domain picture, yielding a single-input, single-output image conversion network. Although Pix2Pix works well, its requirement on the training data set is high: it needs a training set in which the two kinds of images are paired one by one. To overcome this limitation, CycleGAN, an unpaired image-to-image conversion network obtained by optimizing and improving the Pix2Pix approach, modifies the network structure on the theoretical basis that an image translated twice should be consistent with the original image, thereby removing the restriction that samples must be paired; however, the different network structure and loss-function calculation reduce the robustness of the algorithm to some extent.
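For reference, the two published objectives mentioned above can be written as follows (these are the standard Pix2Pix and CycleGAN formulations from the literature, not formulas taken from this patent):

```latex
% Pix2Pix: conditional adversarial loss plus an L1 term against the paired target image
\mathcal{L}_{\text{Pix2Pix}} = \mathcal{L}_{cGAN}(G, D) + \lambda\, \mathbb{E}_{x,y}\!\left[\, \lVert y - G(x) \rVert_1 \right]

% CycleGAN: cycle-consistency loss requiring that two successive translations recover the input
\mathcal{L}_{cyc}(G, F) = \mathbb{E}_{x}\!\left[\, \lVert F(G(x)) - x \rVert_1 \right] + \mathbb{E}_{y}\!\left[\, \lVert G(F(y)) - y \rVert_1 \right]
```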
Although GAN is widely applied in image fields such as image generation, sample forgery, feature learning, face recognition, image super-resolution and image dehazing, and many studies optimize GAN from various angles, few studies concern remote sensing images and maps. Foreign scholars have proposed GeoGAN on the basis of CycleGAN to try to convert remote sensing images into maps, but they proposed no quantitative evaluation index for the conversion effect and did not address the lack of remote sensing-map matched training sets. Another experiment applied Pix2Pix and CycleGAN to paired and unpaired samples for map-style migration, but it only demonstrated the possibility of converting the map drawing style with GAN and did not involve the conversion of remote sensing images. Domestic scholars combining GAN with remote sensing images mainly focus on three directions: remote sensing image extraction and segmentation based on generative adversarial networks, for example using CDGAN to detect changes in remote sensing images and detecting clouds in remote sensing images with a generative adversarial network; remote sensing image super-resolution, for example a super-resolution experiment on single remote sensing images showing that a super-resolution generative adversarial network achieves a better effect than a super-resolution convolutional neural network, and an end-to-end single-frame remote sensing image super-resolution method realized by adjusting and improving a boundary-equilibrium generative adversarial network; and remote sensing image classification with generative adversarial networks, for example using ACGAN to classify hyperspectral remote sensing images with good results.
General image conversion methods focus strongly on reducing the pixel-level difference between the generated image and the real image, which is meaningful in general img2img tasks; models such as Pix2Pix and CycleGAN that compute an L1 or L2 distance between the generated image and the real image perform well at this level. However, they ignore the structural information formed by the pixels, such as topological relations and completeness, which is very important in cartography. Because a pixel-level loss function alone can hardly reduce distances at the structural level, many generic img2img models generate maps with structural problems, such as strange object shapes and wrong road topology. Even if such maps differ little from a real map at the pixel level, they do not perform well from the human point of view. Furthermore, we usually have only a few paired map samples and a large number of unpaired samples; for example, the latest remote sensing images lack corresponding online maps, or some earlier drawn maps lack corresponding remote sensing image records.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a structure-enhanced semi-supervised online map generation method, which solves the problem of a high map generation error rate.
(II) technical scheme
In order to achieve the purpose, the invention is realized by the following technical scheme: a structure-enhanced semi-supervised online map generation method comprises the following specific steps:
Step 1: acquiring two image resources, namely remote sensing images and maps, from existing remote sensing image and map data sets or from the API (application programming interface) provided by Google Maps, performing normalization, obtaining the mapping relation of part of the samples between the two image resources, and dividing the samples into paired samples and unpaired samples through a file organization structure according to whether the two image resources match, so as to construct a semi-supervised training data set;
Step 2: learning the semi-supervised training data set by using a structure-enhanced semi-supervised online map automatic generation model based on GAN (S2OM for short);
Step 3: inputting other normalized remote sensing images and performing map generation by using the S2OM model.
Preferably, learning the semi-supervised data set by using the S2OM model comprises the following steps: an unsupervised learning phase is first performed with S2OM on the unpaired samples of the semi-supervised training data set, and a supervised learning phase is then performed with S2OM on the paired samples of the semi-supervised training data set.
Preferably, the S2OM model includes two groups of generators and discriminators: one group, called Grm and Dm respectively, generates maps and judges map authenticity; the other group, called Gmr and Dr respectively, generates remote sensing images and judges remote sensing image authenticity.
Preferably, the supervised learning phase adjusts the model according to the adversarial loss, the content loss and the self-content loss of the remote sensing image-to-map conversion.
Preferably, the content loss comprises an L1 loss, an image gradient L1 loss, and an image gradient covariance loss.
Preferably, the unsupervised learning phase adjusts the model according to the adversarial loss and self-content loss of the remote sensing image-to-map conversion, the adversarial loss and self-content loss of the map-to-remote sensing image conversion, and the cycle-content loss (Cycle-content Loss).
Preferably, the unsupervised learning phase comprises the following steps:
S1: Grm generates maps from the remote sensing images in the batch;
S2: Dm judges the authenticity of the generated maps and of the sample maps;
S3: the adversarial loss and the self-content loss are calculated from the generated maps, the sample maps and the judgment results of Dm;
S4: Gmr generates remote sensing images from the maps in the batch;
S5: Dr judges the authenticity of the generated remote sensing images and of the sample remote sensing images;
S6: the adversarial loss and the self-content loss are calculated from the generated remote sensing images, the sample remote sensing images and the judgment results of Dr;
S7: the cycle-content loss is calculated from the generated maps and remote sensing images;
S8: the model is adjusted based on the losses of steps S3 to S7;
S9: if the training scale meets the requirement, the unsupervised learning phase is completed; otherwise, return to step S1.
Preferably, the supervised learning phase comprises the following sub-steps:
C1: Grm generates maps from the remote sensing images in the batch;
C2: Dm judges the authenticity of the generated maps and of the sample maps;
C3: the adversarial loss, the content loss and the self-content loss are calculated from the generated maps, the sample maps and the judgment results of Dm;
C4: the model is adjusted based on the losses of step C3;
C5: if the training scale meets the requirement, the supervised learning phase is completed; otherwise, return to step C1.
(III) advantageous effects
The invention provides a structure-enhanced semi-supervised online map generation method. The method has the following beneficial effects:
the invention provides S2OM, which is an online map generation model based on the structure guidance of a generative confrontation network (GAN), and simultaneously applies the loss of image gradient L1 and the loss of image gradient covariance; the invention establishes a semi-supervised training method following cycle consistency, and more fully applies available resources.
The structure-guided semi-supervised map on-line generation method provided by the invention is based on the structural loss of image gradient, and has better cartography characteristics compared with the existing map generation methods at home and abroad, namely, the topological structures of map elements are more concerned, and the expressions of the relative positions and the shapes of objects such as roads, houses and the like are clearer and more definite and accord with the subjective feelings of human beings. By combining the structural characteristics of the cycleGAN and the Pix2Pix, a semi-supervised learning strategy of S2OM is created, and learning is performed on paired samples and non-paired samples in stages, so that model learning is more sufficient, and the map generation error rate is lower.
Drawings
FIG. 1 is a schematic diagram of supervised learning of the S2OM model in the structure-enhanced semi-supervised online map generation method of the present invention;
FIG. 2 is a schematic diagram of semi-supervised learning of the S2OM model in the structure-enhanced semi-supervised online map generation method of the present invention;
FIG. 3 is a schematic diagram of the loss function characteristics under the two training strategies of the structure-enhanced semi-supervised online map generation method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The first embodiment is as follows:
As shown in FIGS. 1 to 3, an embodiment of the present invention provides a structure-enhanced semi-supervised online map generation method, which comprises the following specific steps:
Step 1: two image resources, namely remote sensing images and maps, are acquired from existing remote sensing image and map data sets or from the API provided by Google Maps and are normalized; the mapping relation of part of the samples between the two image resources is obtained; according to whether the two image resources match, the samples are divided into paired samples and unpaired samples through a file organization structure, and a semi-supervised training data set is constructed.
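A minimal sketch of this data-set construction is given below. The patent only states that paired and unpaired samples are separated by the file organization structure; the directory names and the file-name-matching convention used here are assumptions made for illustration.

```python
from pathlib import Path
import random

# Hypothetical layout (names are assumptions):
#   dataset/paired/rs/xxx.png   dataset/paired/map/xxx.png   (same file name => matched pair)
#   dataset/unpaired/rs/*.png   dataset/unpaired/map/*.png   (no correspondence required)

def build_semi_supervised_index(root: str):
    """Index paired (matched) and unpaired remote sensing / map tiles."""
    base = Path(root)
    paired = [
        (p, base / "paired" / "map" / p.name)
        for p in sorted((base / "paired" / "rs").glob("*.png"))
        if (base / "paired" / "map" / p.name).exists()
    ]
    unpaired_rs = sorted((base / "unpaired" / "rs").glob("*.png"))
    unpaired_map = sorted((base / "unpaired" / "map").glob("*.png"))
    random.shuffle(unpaired_rs)    # unpaired sets need not be aligned with each other
    random.shuffle(unpaired_map)
    return paired, unpaired_rs, unpaired_map
```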
Step 2: the semi-supervised training data set is learned by using the structure-enhanced semi-supervised online map automatic generation model based on GAN (S2OM for short).
Step 3: other normalized remote sensing images are input, and map generation is performed by using the S2OM model.
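A rough sketch of this generation step follows, assuming a trained Grm generator; the tile size, file names and normalization convention are assumptions for illustration only.

```python
import torch
from PIL import Image
from torchvision import transforms

def generate_map(rs_path: str, generator: torch.nn.Module, out_path: str) -> None:
    """Run a trained remote-sensing-to-map generator on one normalized image tile."""
    to_tensor = transforms.Compose([
        transforms.Resize((256, 256)),                        # tile size is an assumption
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # scale to [-1, 1]
    ])
    rs = to_tensor(Image.open(rs_path).convert("RGB")).unsqueeze(0)
    generator.eval()
    with torch.no_grad():
        fake_map = generator(rs).squeeze(0)
    fake_map = (fake_map.clamp(-1, 1) + 1) / 2                # back to [0, 1]
    transforms.ToPILImage()(fake_map).save(out_path)
```

For example, `generate_map("rs_tile.png", Grm, "map_tile.png")` would produce one map tile from one normalized remote sensing tile.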
Learning the semi-supervised data set with the S2OM model comprises the following steps: an unsupervised learning phase is first performed with S2OM on the unpaired samples of the semi-supervised training data set, and a supervised learning phase is then performed with S2OM on the paired samples. The S2OM model contains two groups of generators and discriminators: one group, called Grm and Dm respectively, generates maps and judges map authenticity, and the other group, called Gmr and Dr respectively, generates remote sensing images and judges remote sensing image authenticity.
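The four networks can be set up as below. The patent does not specify their architectures; given that S2OM combines the structural characteristics of Pix2Pix and CycleGAN, U-Net or ResNet generators with PatchGAN discriminators would be plausible, but the tiny placeholder modules here are assumptions used only to keep the later sketches runnable.

```python
import torch.nn as nn

def simple_generator() -> nn.Module:
    """Placeholder image-to-image generator (the real S2OM architecture is not specified here)."""
    return nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
    )

def simple_discriminator() -> nn.Module:
    """Placeholder patch-style discriminator producing a realness map."""
    return nn.Sequential(
        nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(64, 1, 4, stride=2, padding=1),
    )

Grm = simple_generator()       # remote sensing image -> map
Gmr = simple_generator()       # map -> remote sensing image
Dm  = simple_discriminator()   # judges map authenticity
Dr  = simple_discriminator()   # judges remote sensing image authenticity
```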
The supervised learning phase adjusts the model according to the adversarial loss, the content loss and the self-content loss of the remote sensing image-to-map conversion. The content loss includes an L1 loss, an image gradient L1 loss and an image gradient covariance loss. The unsupervised learning phase adjusts the model according to the adversarial loss and self-content loss of the remote sensing image-to-map conversion, the adversarial loss and self-content loss of the map-to-remote sensing image conversion, and the cycle-content loss (Cycle-content Loss).
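A sketch of the three content-loss terms listed above is given below. The patent does not state their formulas, so the forward-difference gradient operator, the correlation-style covariance term and the weights are assumptions made for illustration, not the exact definitions used by S2OM.

```python
import torch

def image_gradients(img: torch.Tensor):
    """Forward-difference gradients along width and height for a (B, C, H, W) tensor."""
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]
    return dx, dy

def l1_loss(fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
    return (fake - real).abs().mean()

def gradient_l1_loss(fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
    """L1 distance between the image gradients of the generated and real images."""
    fdx, fdy = image_gradients(fake)
    rdx, rdy = image_gradients(real)
    return (fdx - rdx).abs().mean() + (fdy - rdy).abs().mean()

def gradient_covariance_loss(fake: torch.Tensor, real: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """One plausible covariance-style term: penalise decorrelation between the gradient
    fields of generated and real images (an assumption, not the patent's exact definition)."""
    loss = 0.0
    for f, r in zip(image_gradients(fake), image_gradients(real)):
        f = f.flatten(1) - f.flatten(1).mean(dim=1, keepdim=True)
        r = r.flatten(1) - r.flatten(1).mean(dim=1, keepdim=True)
        cov = (f * r).mean(dim=1)
        corr = cov / (f.std(dim=1) * r.std(dim=1) + eps)
        loss = loss + (1.0 - corr).mean()
    return loss

def content_loss(fake, real, w_l1=1.0, w_grad=1.0, w_cov=1.0):
    """Content loss = L1 + image-gradient L1 + image-gradient covariance (weights assumed)."""
    return (w_l1 * l1_loss(fake, real)
            + w_grad * gradient_l1_loss(fake, real)
            + w_cov * gradient_covariance_loss(fake, real))
```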
The unsupervised learning phase comprises the following steps (an illustrative sketch of one training iteration follows the list):
S1: Grm generates maps from the remote sensing images in the batch;
S2: Dm judges the authenticity of the generated maps and of the sample maps;
S3: the adversarial loss and the self-content loss are calculated from the generated maps, the sample maps and the judgment results of Dm;
S4: Gmr generates remote sensing images from the maps in the batch;
S5: Dr judges the authenticity of the generated remote sensing images and of the sample remote sensing images;
S6: the adversarial loss and the self-content loss are calculated from the generated remote sensing images, the sample remote sensing images and the judgment results of Dr;
S7: the cycle-content loss is calculated from the generated maps and remote sensing images;
S8: the model is adjusted based on the losses of steps S3 to S7;
S9: if the training scale meets the requirement, the unsupervised learning phase is completed; otherwise, return to step S1.
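The PyTorch sketch below maps steps S1 to S8 onto one iteration over an unpaired batch. It assumes the networks from the setup sketch and the content_loss function from the loss sketch, uses a plain binary-cross-entropy adversarial loss, reads the "self-content loss" as an identity-style term (the generator applied to a target-domain image should reproduce it), and picks the loss weights arbitrarily; all of these are assumptions, not details fixed by the patent.

```python
import torch

bce = torch.nn.BCEWithLogitsLoss()

def adv_loss(d_out: torch.Tensor, is_real: bool) -> torch.Tensor:
    """Binary-cross-entropy adversarial loss against an all-real or all-fake target."""
    target = torch.ones_like(d_out) if is_real else torch.zeros_like(d_out)
    return bce(d_out, target)

def unsupervised_step(rs, maps, Grm, Gmr, Dm, Dr, content_loss, opt_g, opt_d):
    """One iteration over an unpaired batch, following steps S1-S8."""
    fake_map = Grm(rs)      # S1: remote sensing image -> map
    fake_rs = Gmr(maps)     # S4: map -> remote sensing image

    # S2/S5: Dm and Dr judge generated images against the (unpaired) sample images.
    d_loss = (adv_loss(Dm(maps), True) + adv_loss(Dm(fake_map.detach()), False)
              + adv_loss(Dr(rs), True) + adv_loss(Dr(fake_rs.detach()), False))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # S3/S6: adversarial losses plus identity-style "self-content" terms.
    g_adv = adv_loss(Dm(fake_map), True) + adv_loss(Dr(fake_rs), True)
    g_self = content_loss(Grm(maps), maps) + content_loss(Gmr(rs), rs)

    # S7: cycle-content loss - translating twice should recover the original image.
    g_cycle = content_loss(Gmr(fake_map), rs) + content_loss(Grm(fake_rs), maps)

    # S8: adjust the generators with the combined loss (weights are assumptions).
    g_loss = g_adv + 5.0 * g_self + 10.0 * g_cycle
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Here `opt_g` would typically cover the parameters of both Grm and Gmr, and `opt_d` those of both Dm and Dr.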
The supervised learning phase comprises the following sub-steps (a sketch of one paired-batch iteration follows the list):
C1: Grm generates maps from the remote sensing images in the batch;
C2: Dm judges the authenticity of the generated maps and of the sample maps;
C3: the adversarial loss, the content loss and the self-content loss are calculated from the generated maps, the sample maps and the judgment results of Dm;
C4: the model is adjusted based on the losses of step C3;
C5: if the training scale meets the requirement, the supervised learning phase is completed; otherwise, return to step C1.
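Analogously, one supervised iteration over a paired batch (steps C1 to C4) might look like the sketch below, reusing the adv_loss helper and content_loss from the earlier sketches; the loss weights and the identity reading of the "self-content" term remain assumptions.

```python
def supervised_step(rs, maps, Grm, Dm, content_loss, adv_loss, opt_g, opt_d):
    """One iteration over a paired batch; `maps` is the map matched to `rs`."""
    fake_map = Grm(rs)   # C1: generate a map from the paired remote sensing image

    # C2: Dm judges the generated map against the matched sample map.
    d_loss = adv_loss(Dm(maps), True) + adv_loss(Dm(fake_map.detach()), False)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # C3: adversarial loss, paired content loss and identity-style self-content loss.
    g_loss = (adv_loss(Dm(fake_map), True)
              + 10.0 * content_loss(fake_map, maps)    # paired supervision (weight assumed)
              + 5.0 * content_loss(Grm(maps), maps))   # self-content term (assumed form)

    # C4: adjust the generator with the combined loss.
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```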
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (8)
1. A structure-enhanced semi-supervised online map generation method is characterized by comprising the following specific steps:
Step 1: acquiring two image resources, namely remote sensing images and maps, from existing remote sensing image and map data sets or from the API (application programming interface) provided by Google Maps, performing normalization, obtaining the mapping relation of part of the samples between the two image resources, and dividing the samples into paired samples and unpaired samples through a file organization structure according to whether the two image resources match, so as to construct a semi-supervised training data set;
Step 2: learning the semi-supervised training data set by using a structure-enhanced semi-supervised online map automatic generation model based on GAN (S2OM for short);
Step 3: inputting other normalized remote sensing images and performing map generation by using the S2OM model.
2. The method of claim 1, wherein learning the semi-supervised data set by using the S2OM model comprises: an unsupervised learning phase performed with S2OM on the unpaired samples of the semi-supervised training data set, followed by a supervised learning phase performed with S2OM on the paired samples of the semi-supervised training data set.
3. The structure-enhanced semi-supervised online map generation method of claim 1, wherein: the S2OM model comprises two groups of generators and discriminators, one group, called Grm and Dm respectively, being used for generating maps and judging map authenticity, and the other group, called Gmr and Dr respectively, being used for generating remote sensing images and judging remote sensing image authenticity.
4. The structure-enhanced semi-supervised online map generation method of claim 2, wherein: the supervised learning phase adjusts the model according to the adversarial loss, the content loss and the self-content loss of the remote sensing image-to-map conversion.
5. The structure-enhanced semi-supervised online map generation method of claim 1, wherein: the content loss comprises an L1 loss, an image gradient L1 loss and an image gradient covariance loss.
6. The structure-enhanced semi-supervised online map generation method of claim 2, wherein: the unsupervised learning phase adjusts the model according to the adversarial loss and self-content loss of the remote sensing image-to-map conversion, the adversarial loss and self-content loss of the map-to-remote sensing image conversion, and the cycle-content loss (Cycle-content Loss).
7. The structure-enhanced semi-supervised online map generation method of claim 2, wherein: the unsupervised learning phase comprises the following steps:
S1: Grm generates maps from the remote sensing images in the batch;
S2: Dm judges the authenticity of the generated maps and of the sample maps;
S3: the adversarial loss and the self-content loss are calculated from the generated maps, the sample maps and the judgment results of Dm;
S4: Gmr generates remote sensing images from the maps in the batch;
S5: Dr judges the authenticity of the generated remote sensing images and of the sample remote sensing images;
S6: the adversarial loss and the self-content loss are calculated from the generated remote sensing images, the sample remote sensing images and the judgment results of Dr;
S7: the cycle-content loss is calculated from the generated maps and remote sensing images;
S8: the model is adjusted based on the losses of steps S3 to S7;
S9: if the training scale meets the requirement, the unsupervised learning phase is completed; otherwise, return to step S1.
8. The structure-enhanced semi-supervised online map generation method of claim 2, wherein: the supervised learning phase comprises the following sub-steps:
C1: Grm generates maps from the remote sensing images in the batch;
C2: Dm judges the authenticity of the generated maps and of the sample maps;
C3: the adversarial loss, the content loss and the self-content loss are calculated from the generated maps, the sample maps and the judgment results of Dm;
C4: the model is adjusted based on the losses of step C3;
C5: if the training scale meets the requirement, the supervised learning phase is completed; otherwise, return to step C1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110259938.5A CN113076806A (en) | 2021-03-10 | 2021-03-10 | Structure-enhanced semi-supervised online map generation method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110259938.5A CN113076806A (en) | 2021-03-10 | 2021-03-10 | Structure-enhanced semi-supervised online map generation method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113076806A true CN113076806A (en) | 2021-07-06 |
Family
ID=76612240
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110259938.5A Pending CN113076806A (en) | 2021-03-10 | 2021-03-10 | Structure-enhanced semi-supervised online map generation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113076806A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111625608A (en) * | 2020-04-20 | 2020-09-04 | 中国地质大学(武汉) | Method and system for generating electronic map according to remote sensing image based on GAN model |
CN112434798A (en) * | 2020-12-18 | 2021-03-02 | 北京享云智汇科技有限公司 | Multi-scale image translation method based on semi-supervised learning |
Non-Patent Citations (1)
Title |
---|
XU CHEN et al.: "S2OMGAN: Shortcut from Remote Sensing Images to Online Maps", arXiv:2001.07712v1 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114792349A (en) * | 2022-06-27 | 2022-07-26 | 中国人民解放军国防科技大学 | Remote sensing image conversion map migration method based on semi-supervised generation confrontation network |
CN114792349B (en) * | 2022-06-27 | 2022-09-06 | 中国人民解放军国防科技大学 | Remote sensing image conversion map migration method based on semi-supervised generation countermeasure network |
CN117422787A (en) * | 2023-12-18 | 2024-01-19 | 中国人民解放军国防科技大学 | Remote sensing image map conversion method integrating discriminant and generative model |
CN117422787B (en) * | 2023-12-18 | 2024-03-08 | 中国人民解放军国防科技大学 | Remote sensing image map conversion method integrating discriminant and generative model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20210706 |