CN112101249A - SAR target type identification method based on deep convolutional memory network - Google Patents

SAR target type identification method based on deep convolutional memory network

Info

Publication number
CN112101249A
CN112101249A
Authority
CN
China
Prior art keywords: memory network, deep convolutional, SAR, network, target type
Legal status (assumed, not a legal conclusion): Pending
Application number
CN202010987041.XA
Other languages
Chinese (zh)
Inventor
黄钰林
裴季方
王陈炜
汪志勇
崔美玲
杨建宇
霍伟博
杨海光
张寅
Current Assignee (the listed assignee may be inaccurate)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Application filed by University of Electronic Science and Technology of China
Priority to CN202010987041.XA
Publication of CN112101249A
Current legal status: Pending

Classifications

    • G06V20/13 — Image or video recognition or understanding; terrestrial scenes; satellite images
    • G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N3/044 — Neural networks; recurrent networks, e.g. Hopfield networks
    • G06N3/045 — Neural networks; combinations of networks
    • G06N3/049 — Neural networks; temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 — Neural networks; learning methods

Abstract

The invention discloses an SAR target type identification method based on a deep convolutional memory network, comprising the following steps: S1, collecting original SAR images; S2, preprocessing the original SAR images; S3, setting the number of discrete azimuth angles m according to actual imaging conditions and performance indexes; S4, generating combinations of SAR image slices at adjacent azimuth angles for training according to the number of discrete azimuth angles; S5, constructing a deep convolutional memory network for the SAR image slice combinations; and S6, training the deep convolutional memory network. The method exploits combinations of sample data at different azimuth angles, together with the data information in actual images and the type-identification performance requirements, to obtain an efficient and accurate target type identification method; the number of sample combinations and the network structure can be adjusted according to actual image characteristics and performance indexes, and the method is flexible, accurate, efficient, and easy to integrate into a system.

Description

SAR target type identification method based on deep convolutional memory network
Technical Field
The invention belongs to the field of radar image processing, and particularly relates to an SAR target type identification method based on a deep convolutional memory network.
Background
Synthetic Aperture Radar (SAR) is a high-resolution microwave imaging radar capable of working in all weather and around the clock. It is widely applied in battlefield sensing and reconnaissance, geographic information acquisition, agricultural and forestry environment monitoring, geological and landform exploration, ocean resource utilization, and other fields, and has extremely high civil and military value. With the rapid development of SAR imaging theory, the capability of acquiring images is continuously increasing. Faced with massive SAR image data, the question of how to perform accurate automatic interpretation is receiving increasing attention. Research on efficient and reliable SAR automatic target recognition algorithms and systems is the key to fully exploiting SAR image data. SAR Automatic Target Recognition (ATR) builds on theories such as modern signal processing and pattern recognition to automatically detect potential targets in a scene in a short time, without manual intervention, and to identify attributes such as their type and model.
The current mainstream SAR ATR methods are mainly template-based and model-based. However, traditional methods usually require image preprocessing, suffer from high algorithmic complexity and poor stability, and find it difficult to extract optimal target features and to perform efficient and accurate type identification. With the rise and development of artificial intelligence theory, neural networks, as machine learning algorithms with strong adaptive capability, have been widely applied in many fields such as image classification and speech signal processing, opening up new ideas and directions for SAR ATR.
An SAR target identification method based on support vector machines is proposed in the literature "Zhao Q, Principe J C. Support vector machines for SAR automatic target recognition [J]. IEEE Transactions on Aerospace and Electronic Systems, 2001, 37(2): 643-654."
A convolutional neural network-based SAR target identification method is proposed in the literature "Ding J, Chen B, Liu H, et al. Convolutional neural network with data augmentation for SAR target recognition [J]. IEEE Geoscience and Remote Sensing Letters, 2016, 13(3): 364-368." The method improves the identification effect through three data augmentation schemes, but neglects the influence of azimuth angle on SAR target characteristics.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art by providing an SAR target type identification method based on a deep convolutional memory network, which uses combinations of sample data at different azimuth angles, together with the data information in actual images and the type-identification performance requirements, to achieve efficient and accurate identification.
The purpose of the invention is realized by the following technical scheme: an SAR target type identification method based on a deep convolutional memory network comprises the following steps:
S1, collecting original SAR images;
S2, preprocessing the original SAR images;
S3, setting the number of discrete azimuth angles m according to actual imaging conditions and performance indexes;
S4, generating combinations of SAR image slices at adjacent azimuth angles for training according to the number of discrete azimuth angles;
S5, constructing a deep convolutional memory network for the SAR image slice combinations;
S6, training the deep convolutional memory network: inputting the training sample data obtained in step S4 into the deep convolutional memory network constructed in step S5 for forward propagation, and calculating the cost function value; updating the parameters of the deep convolutional memory network with a gradient-descent-based backward propagation algorithm; and iterating forward and backward propagation until the cost function converges.
Further, the specific implementation method of step S2 is as follows: according to the network structure, produce image slices with the target located at the center, and normalize each sliced image. Let x(u, v) denote the SAR image before normalization and x'(u, v) the SAR image after normalization; then

x'(u, v) = (x(u, v) - min(x(u, v))) / (max(x(u, v)) - min(x(u, v)))

where max(x(u, v)) denotes the maximum pixel value of the image and min(x(u, v)) denotes the minimum pixel value.
Further, the specific implementation method of step S4 is as follows: according to the number of discrete azimuth angles m, discretize the azimuth range [0°, 360°) into {φ_j | φ_j ∈ [0°, 360°), j = 1, 2, …, m}; each SAR image in the sample set {x_i ∈ R^n, i = 1, 2, …, N} acquired from the same view angle is then assigned its corresponding discretized azimuth angle φ_j.
Further, the deep convolutional memory network comprises three feature extraction branches and a long short-term memory network; each feature extraction branch is formed by stacking 4 convolutional layers and pooling layers; the long short-term memory network consists of 3 long short-term memory layers containing 2048, 1024, and 512 long short-term memory modules, respectively; the features extracted by the feature extraction branches are input into the long short-term memory network and then into a Softmax layer to obtain the type label of the image sample.
Further, the specific implementation method of step S6 is as follows: update the parameters of the deep convolutional memory network with a gradient-descent-based backward propagation algorithm, iterating forward and backward propagation until the cost function converges; specifically:
S61, forward propagation: let a(l) denote the features of the l-th layer, l ≥ 2; then

a(l) = σ(w(l) a(l-1) + b(l))

where a(l-1) denotes the features of the (l-1)-th layer, w(l) the weights of the l-th layer, b(l) the bias term of the l-th layer, and σ(·) the activation function.

If the L-th layer is the output layer, the posterior probability that the current sample belongs to the i-th class is

P(i | z(L); w, b) = exp(z_i(L)) / Σ_k exp(z_k(L))

where z_i(L) denotes the i-th element of the input to the output layer and the sum runs over all target classes k;
S62, calculating the cost function value, taking the cross-entropy function as the cost function:

L(w, b) = -Σ_i 1{y = i} log P(i | z(L); w, b)

where L(w, b) denotes the cost function; w and b denote the sets of weights and bias terms in the network; 1{y = i} indicates whether the true label y of the current sample equals i; and P(i | z(L); w, b) denotes the posterior probability of class i given z(L), w, and b;
S63, updating the network parameters with the gradient-descent-based backward propagation algorithm:

w(l) ← w(l) - α ∂L(w, b)/∂w(l)

b(l) ← b(l) - α ∂L(w, b)/∂b(l)

where α is the learning rate.
The invention has the following beneficial effects: the method exploits combinations of sample data at different azimuth angles, together with the data information in actual images and the type-identification performance requirements, to obtain an efficient and accurate target type identification method; the number of sample combinations and the network structure can be adjusted according to actual image characteristics and performance indexes; and the method is flexible, accurate, efficient, and easy to integrate into a system.
Drawings
FIG. 1 is a flow chart of an SAR target type identification method based on a deep convolutional memory network of the present invention;
FIG. 2 is a diagram of a deep convolutional memory network according to the present invention.
Detailed Description
The SAR target type identification method based on the deep convolutional memory network provided by the invention uses combinations of sample data at different azimuth angles, together with the data information in actual images and the type-identification performance requirements, to achieve efficient and accurate target type identification. Specifically: based on sample data at different azimuth angles, a deep convolutional network projects the data into a low-dimensional space and extracts target-specific features at each azimuth angle; a long short-term memory module then analyzes the feature distribution of the target sequence across adjacent azimuth angles and automatically extracts effective features of the target at different azimuth angles, realizing fast and accurate identification of the SAR target type. The method adjusts the number of sample combinations and the network structure according to actual image characteristics and performance indexes, and is flexible, accurate, efficient, and easy to integrate into a system. The technical scheme of the invention is further explained below with reference to the accompanying drawings.
As shown in FIG. 1, the SAR target type identification method based on a deep convolutional memory network of the present invention comprises the following steps:
S1, collecting original SAR images;
S2, preprocessing the original SAR images. The specific implementation method is as follows: according to the network structure, produce image slices with the target located at the center, and normalize each sliced image. Let x(u, v) denote the SAR image before normalization and x'(u, v) the SAR image after normalization; then

x'(u, v) = (x(u, v) - min(x(u, v))) / (max(x(u, v)) - min(x(u, v)))

where max(x(u, v)) denotes the maximum pixel value of the image and min(x(u, v)) denotes the minimum pixel value.
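The normalization step can be sketched as follows (a minimal illustration; the function name and the guard against constant-valued slices are our additions, not part of the patent):

```python
import numpy as np

def normalize_slice(x: np.ndarray) -> np.ndarray:
    """Min-max normalize an SAR image slice to [0, 1]:
    x'(u, v) = (x - min(x)) / (max(x) - min(x))."""
    lo, hi = float(x.min()), float(x.max())
    if hi == lo:                       # constant slice: avoid division by zero
        return np.zeros_like(x, dtype=float)
    return (x - lo) / (hi - lo)
```

The same function is applied independently to every slice before it enters a training combination.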
S3, setting discrete azimuth number m according to actual imaging conditions and performance indexes;
S4, generating combinations of SAR image slices at adjacent azimuth angles for training according to the number of discrete azimuth angles. The specific implementation method is as follows: according to the number of discrete azimuth angles m, discretize the azimuth range [0°, 360°) into {φ_j | φ_j ∈ [0°, 360°), j = 1, 2, …, m}; each SAR image in the sample set {x_i ∈ R^n, i = 1, 2, …, N} acquired from the same view angle is then assigned its corresponding discretized azimuth angle φ_j.
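Under the assumption that "adjacent azimuth angles" means consecutive discrete bins and that three slices form one training combination (matching the three feature-extraction branches of the network), steps S3-S4 might be sketched as follows; the function names and the equal-width binning convention are ours, not the patent's:

```python
import numpy as np

def discretize_azimuth(phi_deg: np.ndarray, m: int) -> np.ndarray:
    """Map continuous azimuth angles in [0, 360) into m equal-width bins."""
    return (np.floor(phi_deg / (360.0 / m))).astype(int) % m

def adjacent_triplets(slices, bins, m):
    """Group slices whose bins are consecutive (modulo m) into training triples."""
    by_bin = {b: [] for b in range(m)}
    for s, b in zip(slices, bins):
        by_bin[b].append(s)
    combos = []
    for b in range(m):
        nxt, nxt2 = (b + 1) % m, (b + 2) % m
        for a in by_bin[b]:
            for c in by_bin[nxt]:
                for d in by_bin[nxt2]:
                    combos.append((a, c, d))
    return combos
```

Increasing m produces finer azimuth bins and therefore more, but more narrowly related, slice combinations.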
S5, constructing a deep convolutional memory network for the SAR image slice combinations;
As shown in FIG. 2, the deep convolutional memory network of the present invention comprises three feature extraction branches and a long short-term memory network; each feature extraction branch is formed by stacking 4 convolutional layers and pooling layers; the long short-term memory network consists of 3 long short-term memory (LSTM) layers containing 2048, 1024, and 512 LSTM modules, respectively; the features extracted by the feature extraction branches are input into the long short-term memory network and then into a Softmax layer to obtain the type label of the image sample.
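A minimal PyTorch sketch of such an architecture follows. Only the 4 conv+pooling stages per branch and the 2048/1024/512 LSTM stack come from the patent; the channel widths, kernel sizes, input resolution, and weight sharing across the three branches are our assumptions:

```python
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """One feature-extraction branch: 4 stacked conv + pooling stages."""
    def __init__(self, out_dim: int = 512):
        super().__init__()
        layers, ch = [], 1
        for nxt in (16, 32, 64, 128):          # assumed channel widths
            layers += [nn.Conv2d(ch, nxt, 3, padding=1), nn.ReLU(),
                       nn.MaxPool2d(2)]
            ch = nxt
        self.features = nn.Sequential(*layers)
        self.proj = nn.LazyLinear(out_dim)     # flatten -> fixed-size feature

    def forward(self, x):
        return self.proj(self.features(x).flatten(1))

class ConvMemoryNet(nn.Module):
    """Three (shared-weight) conv branches feeding a 2048/1024/512 LSTM stack."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.branch = ConvBranch()
        self.lstm1 = nn.LSTM(512, 2048, batch_first=True)
        self.lstm2 = nn.LSTM(2048, 1024, batch_first=True)
        self.lstm3 = nn.LSTM(1024, 512, batch_first=True)
        self.head = nn.Linear(512, num_classes)

    def forward(self, x):                      # x: (batch, 3, 1, H, W) triple
        feats = torch.stack([self.branch(x[:, t])
                             for t in range(x.size(1))], dim=1)
        h, _ = self.lstm1(feats)
        h, _ = self.lstm2(h)
        h, _ = self.lstm3(h)
        return self.head(h[:, -1])             # logits for the Softmax layer
```

The last LSTM output feeds a linear head whose logits go through Softmax during training, matching the description above.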
S6, training the deep convolutional memory network: input the training sample data obtained in step S4 into the deep convolutional memory network constructed in step S5 for forward propagation and calculate the cost function value; update the parameters of the deep convolutional memory network with a gradient-descent-based backward propagation algorithm; iterate forward and backward propagation until the cost function converges. The specific steps are as follows:
S61, forward propagation: let a(l) denote the features of the l-th layer, l ≥ 2; then

a(l) = σ(w(l) a(l-1) + b(l))

where a(l-1) denotes the features of the (l-1)-th layer, w(l) the weights of the l-th layer, b(l) the bias term of the l-th layer, and σ(·) the activation function.

If the L-th layer is the output layer, the posterior probability that the current sample belongs to the i-th class is

P(i | z(L); w, b) = exp(z_i(L)) / Σ_k exp(z_k(L))

where z_i(L) denotes the i-th element of the input to the output layer and the sum runs over all target classes k;
S62, calculating the cost function value, taking the cross-entropy function as the cost function:

L(w, b) = -Σ_i 1{y = i} log P(i | z(L); w, b)

where L(w, b) denotes the cost function; w and b denote the sets of weights and bias terms in the network; 1{y = i} indicates whether the true label y of the current sample equals i; and P(i | z(L); w, b) denotes the posterior probability of class i given z(L), w, and b;
S63, updating the network parameters with the gradient-descent-based backward propagation algorithm:

w(l) ← w(l) - α ∂L(w, b)/∂w(l)

b(l) ← b(l) - α ∂L(w, b)/∂b(l)

where α is the learning rate.
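The forward propagation, cross-entropy cost, and gradient-descent update of S61-S63 can be illustrated on a tiny fully connected network (a NumPy sketch under our own choice of sigmoid activation and layer sizes; it is not the patent's network):

```python
import numpy as np

def softmax(z):
    """Posterior P(i | z; w, b) = exp(z_i) / sum_k exp(z_k)."""
    e = np.exp(z - z.max())              # subtract max for numerical stability
    return e / e.sum()

def forward(x, params):
    """Forward propagation a(l) = sigma(w(l) a(l-1) + b(l))."""
    (w1, b1), (w2, b2) = params
    a1 = 1.0 / (1.0 + np.exp(-(w1 @ x + b1)))   # hidden features a(2)
    z2 = w2 @ a1 + b2                            # output-layer input z(L)
    return a1, softmax(z2)

def train_step(x, y, params, alpha=0.1):
    """One gradient-descent update of the cross-entropy cost; returns the cost."""
    (w1, b1), (w2, b2) = params
    a1, p = forward(x, params)
    cost = -np.log(p[y])                  # cross-entropy for true label y
    d2 = p.copy()
    d2[y] -= 1.0                          # dL/dz(L) for softmax + cross-entropy
    d1 = (w2.T @ d2) * a1 * (1.0 - a1)    # backprop through the sigmoid layer
    w2 -= alpha * np.outer(d2, a1)
    b2 -= alpha * d2
    w1 -= alpha * np.outer(d1, x)
    b1 -= alpha * d1
    return cost
```

Calling `train_step` repeatedly until the returned cost stops decreasing corresponds to iterating forward and backward propagation until the cost function converges.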
Finally, the invention also comprises testing the target type identification performance of the network trained in step S6. Specifically, test samples combined in the same view-angle manner are input into the trained network for forward propagation to obtain the posterior probability of each category; the posterior probabilities are then compared, and the category with the maximum posterior probability is taken as the final target recognition result.
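The test procedure, and the confusion matrix reported in Table 1 below, can be sketched as follows (function names are ours):

```python
import numpy as np

def recognize(posteriors: np.ndarray) -> np.ndarray:
    """Final recognition: the class with the maximum posterior, per sample."""
    return np.argmax(posteriors, axis=1)

def confusion_matrix(y_true, y_pred, num_classes: int) -> np.ndarray:
    """Rows are true target types, columns are predicted types (as in Table 1)."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm
```

A diagonal-dominant confusion matrix then indicates stable performance across target types.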
Table 1 presents the target type identification results for the SAR images of the embodiment of the present invention, as a confusion matrix whose rows and columns are target types. The results show that, by using combinations of sample data at different azimuth angles, the SAR target type identification method of the invention maintains stable performance across target types and achieves efficient identification of the SAR target type.
Table 1
[Confusion matrix of the recognition results; rows and columns are target types. Table image not reproduced.]
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to help the reader understand the principles of the invention, and that the invention is not limited to the specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from its spirit, and such changes and combinations remain within the scope of the invention.

Claims (5)

1. An SAR target type identification method based on a deep convolutional memory network, characterized by comprising the following steps:
S1, collecting original SAR images;
S2, preprocessing the original SAR images;
S3, setting the number of discrete azimuth angles m according to actual imaging conditions and performance indexes;
S4, generating combinations of SAR image slices at adjacent azimuth angles for training according to the number of discrete azimuth angles;
S5, constructing a deep convolutional memory network for the SAR image slice combinations;
S6, training the deep convolutional memory network: inputting the training sample data obtained in step S4 into the deep convolutional memory network constructed in step S5 for forward propagation, and calculating the cost function value; updating the parameters of the deep convolutional memory network with a gradient-descent-based backward propagation algorithm; and iterating forward and backward propagation until the cost function converges.
2. The SAR target type identification method based on a deep convolutional memory network according to claim 1, characterized in that the specific implementation method of step S2 is as follows: according to the network structure, produce image slices with the target located at the center, and normalize each sliced image. Let x(u, v) denote the SAR image before normalization and x'(u, v) the SAR image after normalization; then

x'(u, v) = (x(u, v) - min(x(u, v))) / (max(x(u, v)) - min(x(u, v)))

where max(x(u, v)) denotes the maximum pixel value of the image and min(x(u, v)) denotes the minimum pixel value.
3. The SAR target type identification method based on a deep convolutional memory network according to claim 1, characterized in that the specific implementation method of step S4 is as follows: according to the number of discrete azimuth angles m, discretize the azimuth range [0°, 360°) into {φ_j | φ_j ∈ [0°, 360°), j = 1, 2, …, m}; each SAR image in the sample set {x_i ∈ R^n, i = 1, 2, …, N} acquired from the same view angle is then assigned its corresponding discretized azimuth angle φ_j.
4. The SAR target type identification method based on a deep convolutional memory network according to claim 1, characterized in that the deep convolutional memory network comprises three feature extraction branches and a long short-term memory network; each feature extraction branch is formed by stacking 4 convolutional layers and pooling layers; the long short-term memory network consists of 3 long short-term memory layers containing 2048, 1024, and 512 long short-term memory modules, respectively; the features extracted by the feature extraction branches are input into the long short-term memory network and then into a Softmax layer to obtain the type label of the image sample.
5. The SAR target type identification method based on a deep convolutional memory network according to claim 1, characterized in that the specific implementation method of step S6 is as follows: update the parameters of the deep convolutional memory network with a gradient-descent-based backward propagation algorithm, iterating forward and backward propagation until the cost function converges; specifically:
S61, forward propagation: let a(l) denote the features of the l-th layer, l ≥ 2; then

a(l) = σ(w(l) a(l-1) + b(l))

where a(l-1) denotes the features of the (l-1)-th layer, w(l) the weights of the l-th layer, b(l) the bias term of the l-th layer, and σ(·) the activation function.

If the L-th layer is the output layer, the posterior probability that the current sample belongs to the i-th class is

P(i | z(L); w, b) = exp(z_i(L)) / Σ_k exp(z_k(L))

where z_i(L) denotes the i-th element of the input to the output layer and the sum runs over all target classes k;
S62, calculating the cost function value, taking the cross-entropy function as the cost function:

L(w, b) = -Σ_i 1{y = i} log P(i | z(L); w, b)

where L(w, b) denotes the cost function; w and b denote the sets of weights and bias terms in the network; 1{y = i} indicates whether the true label y of the current sample equals i; and P(i | z(L); w, b) denotes the posterior probability of class i given z(L), w, and b;
S63, updating the network parameters with the gradient-descent-based backward propagation algorithm:

w(l) ← w(l) - α ∂L(w, b)/∂w(l)

b(l) ← b(l) - α ∂L(w, b)/∂b(l)

where α is the learning rate.
CN202010987041.XA (priority date 2020-09-18, filing date 2020-09-18) — SAR target type identification method based on deep convolutional memory network, published as CN112101249A (pending)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010987041.XA CN112101249A (en) 2020-09-18 2020-09-18 SAR target type identification method based on deep convolutional memory network


Publications (1)

Publication Number Publication Date
CN112101249A — published 2020-12-18

Family

ID=73759448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010987041.XA Pending CN112101249A (en) 2020-09-18 2020-09-18 SAR target type identification method based on deep convolutional memory network

Country Status (1)

Country Link
CN (1) CN112101249A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108038445A (en) * 2017-12-11 2018-05-15 电子科技大学 A kind of SAR automatic target recognition methods based on various visual angles deep learning frame
CN108776779A (en) * 2018-05-25 2018-11-09 西安电子科技大学 SAR Target Recognition of Sequential Images methods based on convolution loop network
CN108985445A (en) * 2018-07-18 2018-12-11 成都识达科技有限公司 A kind of target bearing SAR discrimination method based on machine Learning Theory


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112990342A (en) * 2021-04-08 2021-06-18 重庆大学 Semi-supervised SAR target recognition method
CN112990342B (en) * 2021-04-08 2023-09-19 重庆大学 Semi-supervised SAR target recognition method

Similar Documents

Publication Publication Date Title
CN108038445B (en) SAR automatic target identification method based on multi-view deep learning framework
CN112766199B (en) Hyperspectral image classification method based on self-adaptive multi-scale feature extraction model
CN106909924B (en) Remote sensing image rapid retrieval method based on depth significance
CN108428220B (en) Automatic geometric correction method for ocean island reef area of remote sensing image of geostationary orbit satellite sequence
Zhang et al. Deep learning-based automatic recognition network of agricultural machinery images
CN109063569A (en) A kind of semantic class change detecting method based on remote sensing image
CN112990334A (en) Small sample SAR image target identification method based on improved prototype network
CN108345856B (en) SAR automatic target recognition method based on heterogeneous convolutional neural network integration
CN110728706B (en) SAR image fine registration method based on deep learning
CN115908924A (en) Multi-classifier-based small sample hyperspectral image semantic segmentation method and system
CN111192240B (en) Remote sensing image target detection method based on random access memory
CN111046756A (en) Convolutional neural network detection method for high-resolution remote sensing image target scale features
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
CN112101251A (en) SAR automatic target recognition method based on variable convolutional neural network
CN109117739A (en) One kind identifying projection properties extracting method based on neighborhood sample orientation
CN111611960A (en) Large-area ground surface coverage classification method based on multilayer perceptive neural network
CN112906564B (en) Intelligent decision support system design and implementation method for automatic target recognition of unmanned airborne SAR (synthetic aperture radar) image
CN108229426B (en) Remote sensing image change vector change detection method based on difference descriptor
CN112101249A (en) SAR target type identification method based on deep convolutional memory network
CN109871907A (en) Radar target high resolution range profile recognition methods based on SAE-HMM model
CN104331711B (en) SAR image recognition methods based on multiple dimensioned fuzzy mearue and semi-supervised learning
CN113420593A (en) Small sample SAR automatic target recognition method based on hybrid inference network
CN113989612A (en) Remote sensing image target detection method based on attention and generation countermeasure network
CN110956221A (en) Small sample polarization synthetic aperture radar image classification method based on deep recursive network
Osei et al. Long term monitoring of Ghana’s forest reserves Using Google Earth Engine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201218