CN107239797A - Polarization SAR terrain classification method based on full convolutional neural networks - Google Patents
- Publication number: CN107239797A (application CN201710369376.3A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/192—Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
- G06V30/194—References adjustable by an adaptive method, e.g. learning
Abstract
The invention discloses a polarimetric SAR terrain classification method based on a fully convolutional neural network. Pauli decomposition is performed on the polarization scattering matrix S to be classified, yielding the odd scattering coefficient, even scattering coefficient and volume scattering coefficient, which together form the three-dimensional image feature matrix F of the polarimetric SAR image; the feature matrix F is then converted into an RGB image F1; m × n pixel blocks are randomly selected from the RGB image F1 as training samples, and the whole RGB image F1 serves as the test sample; a fully convolutional neural network model is constructed; the training samples are trained through the fully convolutional neural network, yielding a trained model; finally, the test set is classified with the trained model to obtain the classification results. The method of the present invention solves the low time efficiency of the prior art, shortening the running time while maintaining comparatively high classification accuracy.
Description
【Technical field】
The invention belongs to the technical field of image processing, and in particular relates to a polarimetric SAR image classification method applicable to polarimetric SAR image terrain classification and target identification; specifically, it is a polarimetric SAR terrain classification method based on a fully convolutional neural network.
【Background technology】
Polarimetric synthetic aperture radar (SAR) is a popular research field of contemporary remote sensing technology, with many outstanding advantages, such as being unaffected by time of day and capable of imaging around the clock. Polarimetric SAR images have unique advantages and wide application prospects, and have been successfully applied to land use classification, change detection, surface parameter inversion, soil moisture retrieval, man-made target classification, building extraction, and so on.
Chen Jun et al. comprehensively compared the polarization features obtained by polarization target decomposition methods such as Freeman decomposition, Yamaguchi decomposition and Pauli decomposition, and classified them with a support vector machine; the results show that classifying fully polarimetric SAR images with Pauli decomposition and a support vector machine achieves comparatively high classification accuracy.
With the further development of fully polarimetric SAR remote sensing technology and the continuous deepening of its applications, the classification of fully polarimetric SAR images is still affected by resolution, noise, filtering and the like. For example, a traditional convolutional neural network has many network parameters and a long training time, and selecting a sufficient number of training samples from the image is difficult, which inevitably affects classification accuracy and performance. How to classify quickly, with guaranteed accuracy, when training samples are scarce is a problem requiring research.
【The content of the invention】
The object of the present invention is to propose, in order to solve the above problems of the prior art, a polarimetric SAR terrain classification method based on a fully convolutional neural network. The method addresses the long classification time of conventional methods and the numerous parameters and long classification time of traditional CNNs, reducing the processing time of polarimetric SAR image classification while guaranteeing comparatively high accuracy.
To achieve the above purpose, the present invention adopts the following technical scheme:
A polarimetric SAR terrain classification method based on a fully convolutional neural network comprises the following steps:
(1) Perform Pauli decomposition on the polarization scattering matrix S to be classified to obtain the odd scattering coefficient, even scattering coefficient and volume scattering coefficient, and take them as the three-dimensional image feature matrix F of the polarimetric SAR image;
(2) Convert the three-dimensional image feature matrix F obtained in step (1) into an RGB image F1;
(3) Randomly select m × n pixel blocks from the RGB image F1 as training samples, where m and n are positive integers; the whole RGB image F1 serves as the test sample;
(4) Construct the fully convolutional neural network model as: input layer → first convolutional layer → first pooling layer → second convolutional layer → second pooling layer → third convolutional layer → third pooling layer → fourth convolutional layer → fourth pooling layer → fifth convolutional layer → sixth convolutional layer → seventh convolutional layer → first deconvolution layer → eighth convolutional layer → second deconvolution layer → Eltwise layer → third deconvolution layer → crop layer → softmax classifier;
(5) Train the training samples through the fully convolutional neural network to obtain the trained model;
(6) Classify the test set with the trained model to obtain the classification results.
In step (1), the Pauli decomposition of the polarization scattering matrix S to be classified proceeds as follows:

(1a) Define the Pauli basis {S_1, S_2, S_3} by formula <1>:

$$S_1=\frac{1}{\sqrt{2}}\begin{bmatrix}1&0\\0&1\end{bmatrix},\qquad S_2=\frac{1}{\sqrt{2}}\begin{bmatrix}1&0\\0&-1\end{bmatrix},\qquad S_3=\frac{1}{\sqrt{2}}\begin{bmatrix}0&1\\1&0\end{bmatrix}\qquad\langle 1\rangle$$

where S_1 represents odd scattering, S_2 even scattering and S_3 volume scattering;

(1b) By the definition of the Pauli decomposition, obtain equation <2>:

$$S=\begin{bmatrix}S_{HH}&S_{HV}\\S_{HV}&S_{VV}\end{bmatrix}=aS_1+bS_2+cS_3\qquad\langle 2\rangle$$

where a is the odd scattering coefficient, b the even scattering coefficient and c the volume scattering coefficient; S_HH is the scattering component transmitted and received horizontally, S_VV the component transmitted and received vertically, and S_HV the component transmitted horizontally and received vertically;

(1c) From formula <1> and equation <2>, obtain the odd scattering coefficient a, the even scattering coefficient b and the volume scattering coefficient c:

$$a=\frac{1}{\sqrt{2}}\left(S_{HH}+S_{VV}\right),\qquad b=\frac{1}{\sqrt{2}}\left(S_{HH}-S_{VV}\right),\qquad c=\sqrt{2}\,S_{HV}$$

and take a, b and c as the three-dimensional image feature matrix F of the polarimetric SAR image.
In step (1), taking the odd scattering coefficient, even scattering coefficient and volume scattering coefficient as the three-dimensional image feature matrix F of the polarimetric SAR image proceeds as follows: first define a feature matrix F of size M1 × M2 × 3, then assign the odd scattering coefficient, even scattering coefficient and volume scattering coefficient to F, where M1 is the length of the polarimetric SAR image to be classified and M2 is its width.
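The construction of the feature matrix F in step (1) can be sketched in Python. This is an illustrative sketch, not part of the patent: the function name `pauli_features` is invented here, and taking the magnitudes of the complex coefficients as the three feature channels is an assumption (standard practice for Pauli pseudo-colour features).

```python
import numpy as np

def pauli_features(S_HH, S_HV, S_VV):
    """Pauli decomposition of a per-pixel polarimetric scattering matrix.

    S_HH, S_HV, S_VV: complex arrays of shape (M1, M2), one scattering
    component per pixel. Returns the (M1, M2, 3) feature matrix F holding
    the magnitudes of the odd (a), even (b) and volume (c) coefficients.
    """
    a = (S_HH + S_VV) / np.sqrt(2)   # odd-bounce scattering coefficient
    b = (S_HH - S_VV) / np.sqrt(2)   # even-bounce scattering coefficient
    c = np.sqrt(2) * S_HV            # volume scattering coefficient
    # Magnitude per channel is an assumption for a real-valued feature map.
    return np.stack([np.abs(a), np.abs(b), np.abs(c)], axis=-1)
```

For a purely odd-bounce pixel (S_HH = S_VV, S_HV = 0) only the first channel is non-zero, matching the role of S_1 in formula <1>.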
In step (2), the three-dimensional feature matrix obtained in step (1) is converted into an RGB pseudo-colour image F1.
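Step (2) amounts to scaling each feature channel into a displayable 8-bit range. A minimal sketch follows; per-channel min-max normalisation is an assumption, since the patent only states that F is converted to an RGB pseudo-colour image F1.

```python
import numpy as np

def to_rgb(F):
    """Scale each channel of the (M1, M2, 3) feature matrix F to 0-255.

    Per-channel min-max normalisation is an assumption of this sketch.
    """
    F1 = np.zeros(F.shape, dtype=np.uint8)
    for ch in range(3):
        band = F[..., ch].astype(np.float64)
        lo, hi = band.min(), band.max()
        if hi > lo:  # constant channels stay all-zero
            F1[..., ch] = np.rint((band - lo) / (hi - lo) * 255).astype(np.uint8)
    return F1
```
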
The detailed process of step (3) is as follows:

(3a) The polarimetric SAR terrain to be classified falls into 5 classes, and each pixel has a corresponding position in the image to be classified. First obtain the positions of the 5 kinds of pixels: the position L1 of the first-class pixels, the position L2 of the second-class pixels, the position L3 of the third-class pixels, the position L4 of the fourth-class pixels and the position L5 of the fifth-class pixels;

(3b) Randomly select 5% of the pixels from the RGB image F1 as training samples: from each of the positions L1 to L5 obtained in step (3a), randomly select n1 pixels as the centre pixels of training sample blocks, then extend m11 pixels to the left and upward and m21 pixels to the right and downward from each centre pixel. The positions of the selected pixels in the image to be classified are S1 for the training-sample pixels chosen within the first terrain class, S2 for the second class, S3 for the third class, S4 for the fourth class and S5 for the fifth class, wherein,

where m11 is the number of pixels by which a chosen centre pixel is extended to the left and upward in the image to be classified, m21 the number of pixels by which it is extended to the right and downward, M1 the length of the polarimetric SAR image to be classified, M2 its width, n1 the number of centre pixels chosen for each class, n2 the number of terrain classes to be classified, m1 the length of a training sample block, m2 its width, and p the percentage of the pixels to be classified chosen as samples; n1, m11 and m21 are all positive integers;

(3c) Take the RGB image F1 as the test sample.
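The block sampling of step (3) can be sketched as follows. This is an illustrative sketch with assumed names (`sample_training_blocks`, `labels`); skipping centres too close to the image border is a simplification, since the patent does not spell out its border handling.

```python
import numpy as np

def sample_training_blocks(F1, labels, n1, m11, m21, num_classes=5, seed=0):
    """Randomly draw n1 centre pixels per class and cut training blocks.

    F1:     (M1, M2, 3) RGB image.
    labels: (M1, M2) integer class map, 1..num_classes.
    Each block spans m11 pixels left/up and m21 pixels right/down of its
    centre, i.e. blocks of size (m11 + m21 + 1) square.
    """
    rng = np.random.default_rng(seed)
    M1, M2 = labels.shape
    blocks, block_labels = [], []
    for cls in range(1, num_classes + 1):
        rows, cols = np.nonzero(labels == cls)
        # Keep only centres whose block fits inside the image (assumption).
        inside = ((rows >= m11) & (rows + m21 < M1) &
                  (cols >= m11) & (cols + m21 < M2))
        rows, cols = rows[inside], cols[inside]
        pick = rng.choice(len(rows), size=min(n1, len(rows)), replace=False)
        for r, c in zip(rows[pick], cols[pick]):
            blocks.append(F1[r - m11:r + m21 + 1, c - m11:c + m21 + 1])
            block_labels.append(cls)
    return np.asarray(blocks), np.asarray(block_labels)
```

The whole image F1 is still used unchanged as the test sample, as in step (3c).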
In step (4), the fully convolutional neural network model is constructed with the following parameters:
For the input layer, set the number of feature maps to 3;
For the first convolutional layer, set the number of feature maps to 32, the filter size to 5 and pad to 2;
For the first pooling layer, set the down-sampling size to 2;
For the second convolutional layer, set the number of feature maps to 64, the filter size to 5 and pad to 2;
For the second pooling layer, set the down-sampling size to 2;
For the third convolutional layer, set the number of feature maps to 96, the filter size to 3 and pad to 1;
For the third pooling layer, set the down-sampling size to 2;
For the fourth convolutional layer, set the number of feature maps to 128, the filter size to 3 and pad to 1;
For the fourth pooling layer, set the down-sampling size to 2;
For the fifth convolutional layer, set the number of feature maps to 128, the filter size to 3 and pad to 1;
For the sixth convolutional layer, set the number of feature maps to 128, the filter size to 1 and pad to 0;
For the seventh convolutional layer, set the number of feature maps to 5, the filter size to 1 and pad to 0;
For the first deconvolution layer, set the number of feature maps to 5, the filter size to 4 and stride to 2;
For the eighth convolutional layer, set the number of feature maps to 5, the filter size to 1 and pad to 0;
For the second deconvolution layer, set the number of feature maps to 5, the filter size to 4 and stride to 2;
For the Eltwise layer, set the number of feature maps to 5 and the operation to SUM;
For the third deconvolution layer, set the number of feature maps to 5, the filter size to 16 and stride to 8;
For the crop layer, set the number of feature maps to 5, axis to 2 and offset to 4;
For the softmax classifier, set the number of feature maps to 5.
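The interplay of the layer parameters above can be checked with a small spatial-size calculator. This is an illustrative sketch: the rounding rules follow Caffe's usual conventions (floor for convolution, ceil for pooling), and the 512-pixel input is an arbitrary example; the crop layer at the end trims the oversized upsampled map back to the size of its reference blob (the input).

```python
import math

def conv_out(h, k, pad, stride=1):
    # Caffe convolution output size: floor((h + 2*pad - k) / stride) + 1
    return (h + 2 * pad - k) // stride + 1

def pool_out(h, k=2, stride=2):
    # Caffe pooling rounds up (ceil), so odd sizes still cover every pixel
    return math.ceil((h - k) / stride) + 1

def deconv_out(h, k, stride, pad=0):
    # Caffe deconvolution (transposed convolution) output size
    return (h - 1) * stride + k - 2 * pad

h = 512
h = conv_out(h, 5, 2); h = pool_out(h)   # conv1 -> pool1: 256
h = conv_out(h, 5, 2); h = pool_out(h)   # conv2 -> pool2: 128
h = conv_out(h, 3, 1); h = pool_out(h)   # conv3 -> pool3: 64
h = conv_out(h, 3, 1); h = pool_out(h)   # conv4 -> pool4: 32
h = conv_out(h, 3, 1)                    # conv5: 32
h = conv_out(h, 1, 0)                    # conv6: 32
h = conv_out(h, 1, 0)                    # conv7 (score): 32
h = deconv_out(h, 4, 2)                  # deconv1, 2x upsample: 66
h = conv_out(h, 1, 0)                    # conv8 (score): 66
h = deconv_out(h, 4, 2)                  # deconv2, 2x (Eltwise SUM keeps size): 134
h = deconv_out(h, 16, 8)                 # deconv3, 8x upsample: 1080
# The crop layer then cuts this oversized map back to the input size.
```

The deconvolution outputs deliberately overshoot the target size; this is exactly why the architecture ends with a crop layer.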
Compared with the existing technology in this field, the present invention has the following advantages:
By applying a fully convolutional neural network to polarimetric SAR terrain classification, the present invention realizes end-to-end, pixel-level classification. Compared with a traditional CNN, the present network replaces the fully connected layers of a traditional CNN with convolutional layers, which reduces the number of network parameters and improves the running time; it also adds deconvolution layers that up-sample the convolutional feature maps back to the size of the input image, thereby achieving end-to-end classification. Because the present invention does not restrict the size of the input picture, the whole image can be tested at once in the test phase, avoiding the edge effects brought by block-wise stitching and improving both test accuracy and running time.
【Brief description of the drawings】
Fig. 1 is the implementation flow chart of the present invention;
Fig. 2 is the hand-marked map of the image to be classified in the present invention;
Fig. 3 is the classification result map of the image to be classified obtained with the present invention.
Here, 1 marks sea areas, 2 woodland areas, 3 high-density urban areas, 4 low-density urban areas, 5 meadow areas and 6 background areas.
【Embodiment】
The implementation steps and experimental effects of the present invention are described in further detail below with reference to the drawings and embodiments.
Referring to Fig. 1, the implementation steps of the invention are as follows:
Step 1: Perform Pauli decomposition on the polarization scattering matrix S to obtain the odd scattering coefficient, even scattering coefficient and volume scattering coefficient, and take them as the three-dimensional image feature matrix F of the polarimetric SAR image, specifically including the following steps:

(1a) Define the Pauli basis {S_1, S_2, S_3} by formula <1>:

$$S_1=\frac{1}{\sqrt{2}}\begin{bmatrix}1&0\\0&1\end{bmatrix},\qquad S_2=\frac{1}{\sqrt{2}}\begin{bmatrix}1&0\\0&-1\end{bmatrix},\qquad S_3=\frac{1}{\sqrt{2}}\begin{bmatrix}0&1\\1&0\end{bmatrix}\qquad\langle 1\rangle$$

where S_1 represents odd scattering, S_2 even scattering and S_3 volume scattering;

(1b) By the definition of the Pauli decomposition, obtain equation <2>:

$$S=\begin{bmatrix}S_{HH}&S_{HV}\\S_{HV}&S_{VV}\end{bmatrix}=aS_1+bS_2+cS_3\qquad\langle 2\rangle$$

where a is the odd scattering coefficient, b the even scattering coefficient and c the volume scattering coefficient; S_HH is the scattering component transmitted and received horizontally, S_VV the component transmitted and received vertically, and S_HV the component transmitted horizontally and received vertically;

(1c) From formula <1> and equation <2>, obtain the odd scattering coefficient a, the even scattering coefficient b and the volume scattering coefficient c, and take them as the three-dimensional image feature matrix F of the polarimetric SAR image:

$$a=\frac{1}{\sqrt{2}}\left(S_{HH}+S_{VV}\right),\qquad b=\frac{1}{\sqrt{2}}\left(S_{HH}-S_{VV}\right),\qquad c=\sqrt{2}\,S_{HV}$$

(1d) Define a matrix F of size M1 × M2 × 3 and assign the odd scattering coefficient a, even scattering coefficient b and volume scattering coefficient c to it, obtaining the pixel-based feature matrix F, where M1 is the length of the polarimetric SAR image to be classified and M2 is its width;
Step 2: Convert this three-dimensional feature matrix into an RGB pseudo-colour image F1;
Step 3: Randomly select m × n pixel blocks from the image F1 as training samples and take the whole image F1 as the test sample. The detailed process is as follows:
(3a) The polarimetric SAR terrain to be classified falls into 5 classes, and each pixel has a corresponding position in the image to be classified. First obtain the positions L1, L2, L3, L4 and L5 of the 5 different kinds of pixels, where L1 denotes the position of the first-class pixels, L2 that of the second-class pixels, L3 that of the third-class pixels, L4 that of the fourth-class pixels and L5 that of the fifth-class pixels;

(3b) Randomly select 5% of the pixels as training samples: from each of L1, L2, L3, L4 and L5, randomly select n1 pixels as the centre pixels of training sample blocks, then extend m11 pixels to the left and upward and m21 pixels to the right and downward from each centre pixel, giving the positions S1, S2, S3, S4 and S5 of the selected pixels in the image to be classified, where S1 denotes the position of the training-sample pixels chosen within the first terrain class, S2 that of the second class, S3 that of the third class, S4 that of the fourth class and S5 that of the fifth class. The related calculation formulas are as follows:
where m11 denotes the number of pixels by which a chosen centre pixel is extended to the left and upward in the image to be classified, m21 the number of pixels by which it is extended to the right and downward, M1 the length of the polarimetric SAR image to be classified, M2 its width, n1 the number of centre pixels chosen for each class, n2 the number of terrain classes to be classified, m1 the length of a training sample block, m2 its width, and p the percentage of the pixels to be classified chosen as samples; n1, m11 and m21 are all positive integers;
(3c) Take the RGB image F1 as the test sample;
Step 4: Construct the feature matrix W_train of the training data set T_train and the feature matrix W_test of the test data set T_test, specifically including the following steps:
(4a) Define the feature matrix W_train of the training data set T_train, generate the training sample set and assign it to W_train:
W_train = {W_1, W_2, W_3, ... W_i}, i = 1, 2, 3, ... n, where W_i is the feature matrix of the i-th training sample block and n is the number of chosen training sample blocks;
(4b) Define the feature matrix W_test of the test data set T_test and assign the pixel values of the image F1 to it;
Step 5: Construct the model based on the fully convolutional neural network, specifically as follows:
(5a) Construct the fully convolutional neural network model: input layer → first convolutional layer → first pooling layer → second convolutional layer → second pooling layer → third convolutional layer → third pooling layer → fourth convolutional layer → fourth pooling layer → fifth convolutional layer → sixth convolutional layer → seventh convolutional layer → first deconvolution layer → eighth convolutional layer → second deconvolution layer → Eltwise layer → third deconvolution layer → crop layer → softmax classifier. The parameters of each layer are as follows:
For the input layer, set the number of feature maps to 3;
For the first convolutional layer, set the number of feature maps to 32, the filter size to 5 and pad to 2;
For the first pooling layer, set the down-sampling size to 2;
For the second convolutional layer, set the number of feature maps to 64, the filter size to 5 and pad to 2;
For the second pooling layer, set the down-sampling size to 2;
For the third convolutional layer, set the number of feature maps to 96, the filter size to 3 and pad to 1;
For the third pooling layer, set the down-sampling size to 2;
For the fourth convolutional layer, set the number of feature maps to 128, the filter size to 3 and pad to 1;
For the fourth pooling layer, set the down-sampling size to 2;
For the fifth convolutional layer, set the number of feature maps to 128, the filter size to 3 and pad to 1;
For the sixth convolutional layer, set the number of feature maps to 128, the filter size to 1 and pad to 0;
For the seventh convolutional layer, set the number of feature maps to 5, the filter size to 1 and pad to 0;
For the first deconvolution layer, set the number of feature maps to 5, the filter size to 4 and stride to 2;
For the eighth convolutional layer, set the number of feature maps to 5, the filter size to 1 and pad to 0;
For the second deconvolution layer, set the number of feature maps to 5, the filter size to 4 and stride to 2;
For the Eltwise layer, set the number of feature maps to 5 and the operation to SUM;
For the third deconvolution layer, set the number of feature maps to 5, the filter size to 16 and stride to 8;
For the crop layer, set the number of feature maps to 5, axis to 2 and offset to 4;
For the softmax classifier, set the number of feature maps to 5.
Step 6: Train the network model with the training data set to obtain the trained parameter model, specifically as follows:
Take each feature matrix W_i as the input of the network model and output a classification matrix of the same dimensions as W_i. Optimize the network parameters of the classification model by computing the error between this classification and the hand-marked correct classification and back-propagating the error, obtaining the trained classification model; the hand-marked correct categories are shown in Fig. 2;
Step 7: Load the trained model and classify the test set to obtain the classification results, specifically as follows:
Input the feature matrix W_test of the test data set T_test into the test network, initialize the test network with the trained model parameters, and obtain the test classification results.
The effect of the present invention can be further illustrated by the following simulation experiments:
Simulation conditions:
Hardware platform: Xeon(R) CPU E5606 @ 2.13 GHz × 8
Video card: Quadro K2200/PCIe/SSE2, 2.40 GHz × 16
Memory: 8 GB
Software platform: Caffe, a deep learning framework written in C++ and released as open source under a BSD license. It provides command-line, Matlab and Python interfaces and is a clear, readable and fast deep learning framework.
The present invention classifies the San Francisco Bay Area polarimetric SAR image with the fully convolutional neural network. The comparison methods are: a traditional CNN with 2 convolutional stages, and the support vector machine method (SVM).
Simulation content and results:
Simulation 1: Testing with the method of the invention under the above simulation conditions, 5% of the labelled pixels are randomly selected as training samples from the RGB pseudo-colour image obtained after Pauli decomposition of the polarimetric SAR data, and all pixels of the whole pseudo-colour image serve as test samples, yielding the classification results of Fig. 3. As can be seen from Fig. 3, the sea area is classified relatively well; parts of the high-density and low-density urban areas are misclassified, and parts of the woodland and meadow areas are also misclassified, but the main parts are recognized and the detail information is maintained. From the classification results and the label information, the accuracy is 96.5605%, the training time is 195.7004 seconds and the test time is 50.6154 seconds.
Simulation 2: The San Francisco Bay Area polarimetric SAR image is classified with the prior-art traditional CNN with 2 convolutional stages. From the classification results and the label information, the accuracy is 96.7246%, the training time is 225.6592 seconds and the test time is 321.3895 seconds.
Simulation 3: The San Francisco Bay Area polarimetric SAR image is classified with the prior-art SVM method. From the classification results and the label information, the accuracy is 91.4376%, the training time is 10988.1022 seconds and the test time is 376.9558 seconds.
The present invention and the comparison experiments all take 5% of the pixels as training samples. The accuracy and running times of the above three simulation methods on the San Francisco Bay Area polarimetric SAR image are shown in Table 1:
Table 1

| Method | Accuracy | Training time (s) | Test time (s) |
| --- | --- | --- | --- |
| FCN (present invention) | 96.5605% | 195.7004 | 50.6154 |
| CNN | 96.7246% | 225.6592 | 321.3895 |
| SVM | 91.4376% | 10988.1022 | 376.9558 |
As seen from Table 1, the classification accuracy of the invention on the test data set is only about 0.2 percentage points lower than that of the traditional CNN, while its test time is more than 6 times faster; compared with the SVM method, the accuracy of the invention is about 5 percentage points higher and the test time more than 7 times faster.
In summary, while guaranteeing comparatively high accuracy, classifying polarimetric SAR images with the present invention shortens the running time.
Claims (6)
1. A polarimetric SAR terrain classification method based on a fully convolutional neural network, characterized by comprising the following steps:
(1) Perform Pauli decomposition on the polarization scattering matrix S to be classified to obtain the odd scattering coefficient, even scattering coefficient and volume scattering coefficient, and take them as the three-dimensional image feature matrix F of the polarimetric SAR image;
(2) Convert the three-dimensional image feature matrix F obtained in step (1) into an RGB image F1;
(3) Randomly select m × n pixel blocks from the RGB image F1 as training samples, where m and n are positive integers; the whole RGB image F1 serves as the test sample;
(4) Construct the fully convolutional neural network model as: input layer → first convolutional layer → first pooling layer → second convolutional layer → second pooling layer → third convolutional layer → third pooling layer → fourth convolutional layer → fourth pooling layer → fifth convolutional layer → sixth convolutional layer → seventh convolutional layer → first deconvolution layer → eighth convolutional layer → second deconvolution layer → Eltwise layer → third deconvolution layer → crop layer → softmax classifier;
(5) Train the training samples through the fully convolutional neural network model to obtain the trained model;
(6) Classify the test set with the trained model to obtain the classification results.
2. The polarimetric SAR terrain classification method based on a fully convolutional neural network according to claim 1, characterized in that Pauli decomposition is performed on the polarization scattering matrix S to be classified in step (1) as follows:
(1a) Define the Pauli basis {S_1, S_2, S_3} by formula <1>:
$$S_1=\frac{1}{\sqrt{2}}\begin{bmatrix}1&0\\0&1\end{bmatrix},\qquad S_2=\frac{1}{\sqrt{2}}\begin{bmatrix}1&0\\0&-1\end{bmatrix},\qquad S_3=\frac{1}{\sqrt{2}}\begin{bmatrix}0&1\\1&0\end{bmatrix}\qquad\langle 1\rangle$$
where S_1 represents odd scattering, S_2 represents even scattering and S_3 represents volume scattering;
(1b) By the definition of the Pauli decomposition, obtain equation <2>:
$$S=\begin{bmatrix}S_{HH}&S_{HV}\\S_{HV}&S_{VV}\end{bmatrix}=aS_1+bS_2+cS_3\qquad\langle 2\rangle$$
where a is the odd-bounce scattering coefficient, b the even-bounce scattering coefficient and c the volume scattering coefficient; S_HH is the scattering component for horizontal transmit and horizontal receive, S_VV the component for vertical transmit and vertical receive, and S_HV the component for horizontal transmit and vertical receive;
(1c) solve formula <1> and equation <2> for the odd-bounce coefficient a, the even-bounce coefficient b and the volume scattering coefficient c, and take them as the three-dimensional image feature matrix F of the polarimetric SAR image:
$$
F:\quad
\begin{cases}
a = \dfrac{1}{\sqrt{2}}\,(S_{HH} + S_{VV}) \\[6pt]
b = \dfrac{1}{\sqrt{2}}\,(S_{HH} - S_{VV}) \\[6pt]
c = \sqrt{2}\, S_{HV}
\end{cases}
$$
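Step (1c) above can be sketched in a few lines of numpy, assuming the three scattering-matrix channels are available as complex arrays (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def pauli_coefficients(S_hh, S_hv, S_vv):
    """Pauli decomposition of equations <1>-<2>: returns the odd-bounce
    coefficient a, the even-bounce coefficient b and the volume
    coefficient c. Inputs are the (complex) scattering-matrix channels,
    given as scalars or arrays of equal shape."""
    a = (S_hh + S_vv) / np.sqrt(2)
    b = (S_hh - S_vv) / np.sqrt(2)
    c = np.sqrt(2) * S_hv
    return a, b, c
```

Substituting the coefficients back into equation <2> reproduces the original channels, since S = aS1 + bS2 + cS3 gives S_HH = (a + b)/√2, S_VV = (a − b)/√2 and S_HV = c/√2.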
3. The polarimetric SAR terrain classification method based on fully convolutional neural networks according to claim 1, characterized in that in step (1) the odd-bounce, even-bounce and volume scattering coefficients are taken as the three-dimensional image feature matrix F of the polarimetric SAR image as follows: first define a feature matrix F of size M1 × M2 × 3, then assign the odd-bounce, even-bounce and volume scattering coefficients to F, where M1 is the length of the polarimetric SAR image to be classified and M2 is its width.
4. The polarimetric SAR terrain classification method based on fully convolutional neural networks according to claim 1, characterized in that in step (2) the three-dimensional feature matrix obtained in step (1) is converted into an RGB pseudo-color image F1.
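A minimal sketch of the conversion in claim 4, assuming a per-channel min-max scaling of the coefficient magnitudes to 0..255 (the claims do not fix a particular normalization, so this scaling is an assumption):

```python
import numpy as np

def to_rgb_pseudocolor(F):
    """Convert the M1 x M2 x 3 (complex-valued) feature matrix F into an
    RGB pseudo-color image F1 by scaling the magnitude of each channel
    independently to the 0..255 range."""
    mag = np.abs(F).astype(np.float64)
    F1 = np.empty(mag.shape, dtype=np.uint8)
    for ch in range(3):
        m = mag[..., ch]
        lo, hi = m.min(), m.max()
        # small epsilon guards against a constant channel (hi == lo)
        F1[..., ch] = np.round(255 * (m - lo) / (hi - lo + 1e-12)).astype(np.uint8)
    return F1
```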
5. The polarimetric SAR terrain classification method based on fully convolutional neural networks according to claim 1, characterized in that step (3) proceeds as follows:
(3a) the polarimetric SAR terrain to be classified falls into 5 classes, and each pixel has a corresponding position in the image to be classified; first obtain the positions of the 5 kinds of pixels: position L1 of the first-class pixels, position L2 of the second-class pixels, position L3 of the third-class pixels, position L4 of the fourth-class pixels and position L5 of the fifth-class pixels;
(3b) randomly select 5% of the pixels of the RGB image F1 as training samples: from each of the positions L1, L2, L3, L4 and L5 obtained in step (3a), randomly select n1 pixels as center pixels of training sample blocks, and from each center pixel extend m11 pixels to the left and upward and m21 pixels to the right and downward; the positions in the image to be classified of the pixels so chosen as training samples are S1 for the first-class terrain, S2 for the second-class terrain, S3 for the third-class terrain, S4 for the fourth-class terrain and S5 for the fifth-class terrain, where
$$
p = \frac{m1 \times m2 \times n1 \times n2}{M1 \times M2},
$$
where m11 is the number of pixels by which a chosen center pixel is extended to the left and upward in the image to be classified, m21 is the number of pixels by which it is extended to the right and downward, M1 is the length of the polarimetric SAR image to be classified, M2 is its width, n1 is the number of center pixels chosen per class, n2 is the number of classes of the polarimetric SAR terrain to be classified, m1 is the length of a chosen training sample block, m2 is its width, and p is the percentage of the pixels to be classified taken as sample pixels; n1, m11 and m21 are positive integers;
(3c) use the RGB image F1 as the test sample.
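The sampling in step (3b) can be sketched as follows, assuming the class positions are given by a label map and that center pixels whose block would leave the image are skipped (border handling is an assumption; the claim does not specify it):

```python
import numpy as np

def sample_training_blocks(F1, labels, n1, m11, m21, rng=None):
    """For each class in the label map, randomly pick n1 center pixels and
    cut a block extending m11 pixels left/up and m21 pixels right/down
    (block side m11 + m21 + 1), as described in claim 5, step (3b)."""
    rng = np.random.default_rng(rng)
    M1, M2 = labels.shape
    blocks, block_labels = [], []
    for cls in np.unique(labels):
        rows, cols = np.nonzero(labels == cls)
        # keep only centers whose block stays inside the image
        ok = ((rows >= m11) & (rows + m21 < M1) &
              (cols >= m11) & (cols + m21 < M2))
        rows, cols = rows[ok], cols[ok]
        idx = rng.choice(len(rows), size=min(n1, len(rows)), replace=False)
        for r, c in zip(rows[idx], cols[idx]):
            blocks.append(F1[r - m11:r + m21 + 1, c - m11:c + m21 + 1])
            block_labels.append(cls)
    return np.stack(blocks), np.array(block_labels)
```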
6. The polarimetric SAR terrain classification method based on fully convolutional neural networks according to claim 1, characterized in that in step (4) the fully convolutional neural network model is constructed with the following parameters:
For the input layer, set the number of feature maps to 3;
For the first convolutional layer, set the number of feature maps to 32, the filter size to 5 and pad to 2;
For the first pooling layer, set the down-sampling size to 2;
For the second convolutional layer, set the number of feature maps to 64, the filter size to 5 and pad to 2;
For the second pooling layer, set the down-sampling size to 2;
For the third convolutional layer, set the number of feature maps to 96, the filter size to 3 and pad to 1;
For the third pooling layer, set the down-sampling size to 2;
For the fourth convolutional layer, set the number of feature maps to 128, the filter size to 3 and pad to 1;
For the fourth pooling layer, set the down-sampling size to 2;
For the fifth convolutional layer, set the number of feature maps to 128, the filter size to 3 and pad to 1;
For the sixth convolutional layer, set the number of feature maps to 128, the filter size to 1 and pad to 0;
For the seventh convolutional layer, set the number of feature maps to 5, the filter size to 1 and pad to 0;
For the first deconvolutional layer, set the number of feature maps to 5, the filter size to 4 and stride to 2;
For the eighth convolutional layer, set the number of feature maps to 5, the filter size to 1 and pad to 0;
For the second deconvolutional layer, set the number of feature maps to 5, the filter size to 4 and stride to 2;
For the Eltwise layer, set the number of feature maps to 5 and the operation to SUM;
For the third deconvolutional layer, set the number of feature maps to 5, the filter size to 16 and stride to 8;
For the crop layer, set the number of feature maps to 5, axis to 2 and offset to 4;
For the softmax classifier, set the number of feature maps to 5.
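The layer sizes in claim 6 can be checked with one-axis size arithmetic. The sketch below (for a hypothetical 512-pixel-wide input; the input size is an assumption) shows that the four pooling layers reduce the map to 1/16 resolution, and that the final k=16, stride=8 deconvolution plus the crop offset of 4 restores exactly the input size from the 1/8-scale fused score map:

```python
def conv_out(n, k, pad, stride=1):
    # spatial size after a convolution along one axis
    return (n + 2 * pad - k) // stride + 1

def pool_out(n):
    # the claimed pooling layers use down-sampling size 2
    return conv_out(n, 2, 0, stride=2)

def deconv_out(n, k, stride):
    # spatial size after a deconvolution (transposed convolution)
    return (n - 1) * stride + k

n = 512                                  # hypothetical input width
for k, pad in [(5, 2), (5, 2), (3, 1), (3, 1)]:
    n = pool_out(conv_out(n, k, pad))    # conv1..4 preserve size, each pool halves it
print(n)                                 # 32 = 512 / 16 after four poolings

fused = 512 // 8                         # Eltwise-fused score map sits at 1/8 scale
final = deconv_out(fused, 16, 8)         # 520: an 8x upsampling plus an 8-pixel border
print(final - 2 * 4)                     # 512: crop offset 4 per side restores the input size
```

This is why the crop layer's offset is 4: (n − 1) × 8 + 16 = 8n + 8, and removing 4 pixels from each side leaves exactly 8n.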
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710369376.3A CN107239797A (en) | 2017-05-23 | 2017-05-23 | Polarization SAR terrain classification method based on full convolutional neural networks |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107239797A true CN107239797A (en) | 2017-10-10 |
Family
ID=59985128
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108182426A (en) * | 2018-01-30 | 2018-06-19 | 国家统计局湖北调查总队 | Coloured image sorting technique and device |
CN108241854A (en) * | 2018-01-02 | 2018-07-03 | 天津大学 | A kind of deep video conspicuousness detection method based on movement and recall info |
CN108564098A (en) * | 2017-11-24 | 2018-09-21 | 西安电子科技大学 | Based on the polarization SAR sorting technique for scattering full convolution model |
CN108564115A (en) * | 2018-03-30 | 2018-09-21 | 西安电子科技大学 | Semi-supervised polarization SAR terrain classification method based on full convolution GAN |
CN108776772A (en) * | 2018-05-02 | 2018-11-09 | 北京佳格天地科技有限公司 | Across the time building variation detection modeling method of one kind and detection device, method and storage medium |
CN108986108A (en) * | 2018-06-26 | 2018-12-11 | 西安电子科技大学 | A kind of SAR image sample block selection method based on sketch line segment aggregation properties |
CN109543767A (en) * | 2018-11-29 | 2019-03-29 | 东南大学 | Urban land use automatic classification method under big data environment |
CN109886992A (en) * | 2017-12-06 | 2019-06-14 | 深圳博脑医疗科技有限公司 | For dividing the full convolutional network model training method in abnormal signal area in MRI image |
CN110096994A (en) * | 2019-04-28 | 2019-08-06 | 西安电子科技大学 | A kind of small sample PolSAR image classification method based on fuzzy label semanteme priori |
CN110728324A (en) * | 2019-10-12 | 2020-01-24 | 西安电子科技大学 | Depth complex value full convolution neural network-based polarimetric SAR image classification method |
CN111507047A (en) * | 2020-04-17 | 2020-08-07 | 电子科技大学 | Inverse scattering imaging method based on SP-CUnet |
CN113240047A (en) * | 2021-06-02 | 2021-08-10 | 西安电子科技大学 | SAR target recognition method based on component analysis multi-scale convolutional neural network |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105354565A (en) * | 2015-12-23 | 2016-02-24 | 北京市商汤科技开发有限公司 | Full convolution network based facial feature positioning and distinguishing method and system |
CN105574829A (en) * | 2016-01-13 | 2016-05-11 | 合肥工业大学 | Adaptive bilateral filtering algorithm for polarized SAR image |
CN105718957A (en) * | 2016-01-26 | 2016-06-29 | 西安电子科技大学 | Polarized SAR image classification method based on nonsubsampled contourlet convolutional neural network |
CN105868793A (en) * | 2016-04-18 | 2016-08-17 | 西安电子科技大学 | Polarization SAR image classification method based on multi-scale depth filter |
CN106355188A (en) * | 2015-07-13 | 2017-01-25 | 阿里巴巴集团控股有限公司 | Image detection method and device |
CN106447658A (en) * | 2016-09-26 | 2017-02-22 | 西北工业大学 | Significant target detection method based on FCN (fully convolutional network) and CNN (convolutional neural network) |
CN106600571A (en) * | 2016-11-07 | 2017-04-26 | 中国科学院自动化研究所 | Brain tumor automatic segmentation method through fusion of full convolutional neural network and conditional random field |
Non-Patent Citations (1)
Title |
---|
TANG Hao et al.: "Fully convolutional network combined with an improved conditional random field-recurrent neural network for SAR image scene classification", 《计算机应用》 (Journal of Computer Applications) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107239797A (en) | Polarization SAR terrain classification method based on full convolutional neural networks | |
CN108564006B (en) | Polarized SAR terrain classification method based on self-learning convolutional neural network | |
CN107463948A (en) | Classification of Multispectral Images method based on binary channels multiple features fusion network | |
CN105868793B (en) | Classification of Polarimetric SAR Image method based on multiple dimensioned depth filter | |
CN107563428A (en) | Classification of Polarimetric SAR Image method based on generation confrontation network | |
CN104123555B (en) | Super-pixel polarimetric SAR land feature classification method based on sparse representation | |
CN108846426A (en) | Polarization SAR classification method based on the twin network of the two-way LSTM of depth | |
CN105160678A (en) | Convolutional-neural-network-based reference-free three-dimensional image quality evaluation method | |
CN107392122A (en) | Polarization SAR silhouette target detection method based on multipolarization feature and FCN CRF UNEs | |
CN107273938A (en) | Multi-source Remote Sensing Images terrain classification method based on binary channels convolution ladder net | |
CN107657285A (en) | Hyperspectral image classification method based on Three dimensional convolution neutral net | |
CN102999762B (en) | Decompose and the Classification of Polarimetric SAR Image method of spectral clustering based on Freeman | |
CN107590515A (en) | The hyperspectral image classification method of self-encoding encoder based on entropy rate super-pixel segmentation | |
CN103186794B (en) | Based on the Classification of Polarimetric SAR Image method of the neighbour's propagation clustering improved | |
CN107992891A (en) | Based on spectrum vector analysis multi-spectral remote sensing image change detecting method | |
CN110163213A (en) | Remote sensing image segmentation method based on disparity map and multiple dimensioned depth network model | |
CN104156728A (en) | Polarized SAR image classification method based on stacked code and softmax | |
CN107239799A (en) | Polarization SAR image classification method with depth residual error net is decomposed based on Pauli | |
CN106203444A (en) | Classification of Polarimetric SAR Image method based on band ripple Yu convolutional neural networks | |
CN105046268A (en) | Polarization SAR image classification method based on Wishart deep network | |
CN107239757A (en) | A kind of polarization SAR silhouette target detection method based on depth ladder net | |
CN104751173A (en) | Polarized SAR (Synthetic Aperture Radar) image classifying method based on cooperative representation and deep learning. | |
CN107358192A (en) | A kind of polarization SAR image classification method based on depth Curvelet residual error nets | |
CN107341449A (en) | A kind of GMS Calculation of precipitation method based on cloud mass changing features | |
CN114372521A (en) | SAR image classification method based on attention mechanism and residual error relation network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20171010 |