CN107609579A - Radar target classification method based on a robust variational autoencoder - Google Patents
- Publication number
- CN107609579A CN107609579A CN201710743598.7A CN201710743598A CN107609579A CN 107609579 A CN107609579 A CN 107609579A CN 201710743598 A CN201710743598 A CN 201710743598A CN 107609579 A CN107609579 A CN 107609579A
- Authority
- CN
- China
- Prior art keywords
- robust
- sample
- training sample
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Compression Or Coding Systems Of Tv Signals (AREA)
- Radar Systems Or Details Thereof (AREA)
Abstract
The invention discloses a radar target classification method based on a robust variational autoencoder, mainly solving the prior-art problems of poor and unstable classification performance on radar high-resolution range profiles. The steps of the invention are: (1) read the data; (2) compensate the data; (3) extract the average range profiles; (4) build the robust variational autoencoder; (5) train the robust variational autoencoder; (6) train the linear support vector machine; (7) obtain the predicted class labels. The invention has the advantages of good and robust classification performance on radar high-resolution range profiles.
Description
Technical field
The invention belongs to the field of communication technology, and further relates to a radar high-resolution range profile (HRRP) classification method based on a robust variational autoencoder in the field of HRRP classification techniques. The invention can be used to classify radar high-resolution range profiles and effectively improves the performance of HRRP classification.
Background art
A radar high-resolution range profile is the vector of projections of the target scattering-point echoes, obtained with a wideband radar signal, onto the radar line of sight. It contains important structural information such as the scattering-point distribution and the target size, is easy to acquire and fast to process, and is therefore very valuable for target recognition and classification; it has become an important research direction in the field of radar automatic target recognition (RATR). For RATR, good features can not only remove redundant and noise components from the echo, but also retain as much class information as possible while changing the data dimensionality (usually reducing it), thereby improving recognition efficiency and accuracy. Many scholars have therefore studied feature extraction for radar high-resolution range profiles extensively and intensively.
The patent application "Warship and cargo-ship classification method based on high-resolution one-dimensional range profiles" (application number: 201410707516.X, publication number: CN104459663B), filed by the 724th Research Institute of China Shipbuilding Industry Corporation, discloses a warship and cargo-ship classification method based on high-resolution range profiles. Its main flow is: first preprocess the received one-dimensional range profiles; then extract the target region from each profile; extract the strong scattering points of the target region by an entropy-peak method; characterize the distribution of the strong scattering points with variation and skewness statistics; and finally classify warships and cargo ships. The shortcoming of this method is that, because the distribution of the strong scattering points of a ship changes greatly under different observation conditions, the classifier is sensitive to changes in the range profile, so its classification performance is unstable.
The patent application "Radar one-dimensional range profile target recognition method based on a matched dictionary and compressed sensing" (application number: 201410371180.4, publication number: CN 104122540B), filed by the University of Electronic Science and Technology of China, discloses a radar one-dimensional range profile target recognition method based on a matched dictionary and compressed sensing. This method constructs a matched dictionary from a radar echo model and chooses a suitable measurement matrix to compress both the one-dimensional profiles of training samples with known class information and the one-dimensional profiles of the test samples to be recognized, achieving dimensionality reduction. Sparse reconstruction is then performed on the compressed data to obtain the sparse coefficients of the training and test profiles under the matched dictionary. With the sparse coefficients of the training samples as template vectors, the test samples are recognized by the nearest-neighbor method. The shortcoming of this method is that, because it adopts a shallow linear model structure, its feature-description ability is limited and it cannot easily capture deep class information of the target, so its classification performance is limited.
Summary of the invention
The purpose of the present invention is to overcome the above deficiencies of the prior art by proposing a radar target classification method based on a robust variational autoencoder. Compared with other prior-art HRRP classification methods, the present invention has stronger feature-extraction ability, higher classification accuracy, and more robust classification performance.
The idea for achieving this purpose is: read the training sample set and test sample set from the high-resolution range profile data set acquired by the radar; apply translation-sensitivity and amplitude-sensitivity compensation to all samples of both sets; extract the average range profiles of the training sample set; build the cost function of the robust variational autoencoder; train the robust variational autoencoder with the training sample set; input the training sample set into the trained robust variational autoencoder to obtain the training feature set; train a linear support vector machine with the training feature set; input the test sample set into the trained robust variational autoencoder to obtain the test feature set; and input the test feature set into the trained linear support vector machine to obtain the predicted class labels of the test sample set.
The specific steps of the implementation of the present invention are as follows:
(1) Read the data:
From the high-resolution range profile data set acquired by the radar, sequentially read 14000 samples to form the training sample set and 5200 samples to form the test sample set;
(2) Compensate the training and test sample sets:
(2a) Apply centroid alignment to the training and test sample sets to compensate for translation sensitivity, obtaining the translation-compensated training and test sample sets;
(2b) Apply Euclidean norm normalization to the translation-compensated training and test sample sets to compensate for amplitude sensitivity, obtaining the compensated training and test sample sets;
(3) Extract the average range profiles of the training sample set:
(3a) Compute the number of samples per frame of the compensated training sample set according to the following formula:
N = Yc / (2ABL)
where N is the number of samples contained in each frame of the compensated training sample set, Y is the total number of samples in the compensated training set, c is the speed of light, A is the angular sector covered by the compensated training set, B is the bandwidth of the radar that acquired the high-resolution range profile data set, and L is the cross-range size of the target;
(3b) Form the framed training sample set from all samples according to the following notation:
{ { x_{p,n} }_{n=1}^{N} }_{p=1}^{F}
where x_{p,n} is the n-th sample in the p-th frame of the framed training set and F is the number of frames of the framed training set;
(3c) Compute the average range profile of each frame of the framed training set according to the following formula:
J_p = (1/N) Σ_{n=1}^{N} x_{p,n}
where J_p is the average range profile of the p-th frame of the framed training set and Σ denotes summation;
(4) Build the robust variational autoencoder:
(4a) Compute the feature of each sample in the framed training set;
(4b) Compute the reconstructed sample of each sample in the framed training set;
(4c) Compute the reconstructed average range profile of each sample in the framed training set;
(4d) Build the cost function of the robust variational autoencoder;
(5) Train the robust variational autoencoder;
(6) Train the linear support vector machine;
(7) Obtain the predicted class labels of the test sample set:
(7a) Input the compensated test sample set into the trained robust variational autoencoder to obtain the test feature set;
(7b) Input the test feature set into the trained linear support vector machine to obtain the predicted class labels of the test sample set.
Compared with the prior art, the present invention has the following advantages:
First, because the invention uses a variational autoencoder to extract features of radar high-resolution range profiles, it overcomes the prior-art problems of limited feature-description ability and difficulty in obtaining deep class information of the target, so the invention improves the classification performance on radar high-resolution range profiles.
Second, because the invention reconstructs the average range profile, it overcomes the prior-art sensitivity of the classifier to changes in the range profile, so the invention improves the robustness of HRRP classification performance.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention.
Detailed description of the embodiments
The present invention is further described below with reference to accompanying Fig. 1.
Step 1: read the data.
From the high-resolution range profile data set acquired by the radar, sequentially read 14000 samples to form the training sample set and 5200 samples to form the test sample set.
Step 2: compensate the training and test sample sets.
Apply centroid alignment to the training and test sample sets to compensate for translation sensitivity, obtaining the translation-compensated training and test sample sets.
The steps of the centroid alignment method are as follows:
First step: compute the centroid of each range profile in the training sample set according to the following formula:
O_{x,i} = Σ_{n=1}^{D} n·x_i(n)² / Σ_{n=1}^{D} x_i(n)²
where O_{x,i} is the centroid of the i-th range profile in the training set, D is the total number of dimensions of the i-th range profile, Σ denotes summation, n is the index of the dimension, and x_i(n) is the value of the n-th dimension of the i-th range profile.
Second step: compute the centroid of each range profile in the test sample set according to the following formula:
O_{y,z} = Σ_{n=1}^{D} n·y_z(n)² / Σ_{n=1}^{D} y_z(n)²
where O_{y,z} is the centroid of the z-th range profile in the test set and y_z(n) is the value of the n-th dimension of the z-th range profile.
Third step: compute each dimension of each translation-compensated training range profile according to the following formula:
x'_i(n) = IFFT{ FFT{x_i(n)} · e^{-j[Φ_{x_i,1} - Φ_{x_i,2}]k} }
where x'_i(n) is the value of the n-th dimension of the i-th translation-compensated training range profile, IFFT(·) is the inverse discrete Fourier transform, FFT(·) is the discrete Fourier transform, e denotes exponentiation with the natural constant as base, j is the imaginary unit, Φ_{x_i,1} is the phase corresponding to the centroid of the i-th training range profile, Φ_{x_i,2} is the phase corresponding to the center of the i-th training range profile, and k is the shift amount.
Fourth step: compute each dimension of each translation-compensated test range profile according to the following formula:
y'_z(n) = IFFT{ FFT{y_z(n)} · e^{-j[Φ_{y_z,1} - Φ_{y_z,2}]k} }
where y'_z(n) is the value of the n-th dimension of the z-th translation-compensated test range profile, Φ_{y_z,1} is the phase corresponding to the centroid of the z-th test range profile, and Φ_{y_z,2} is the phase corresponding to its center.
Apply Euclidean norm normalization to the translation-compensated training and test sample sets to compensate for amplitude sensitivity, obtaining the compensated training and test sample sets.
The steps of the Euclidean norm normalization method are as follows:
First step: compensate the amplitude sensitivity of the translation-compensated training sample set by computing each dimension of each compensated training range profile according to the following formula:
x''_i(n) = x'_i(n) / sqrt( Σ_{m=1}^{D} x'_i(m)² )
where x''_i(n) is the value of the n-th dimension of the i-th compensated training range profile and sqrt(·) is the square-root operation.
Second step: compensate the amplitude sensitivity of the translation-compensated test sample set by computing each dimension of each compensated test range profile according to the following formula:
y''_z(n) = y'_z(n) / sqrt( Σ_{m=1}^{D} y'_z(m)² )
where y''_z(n) is the value of the n-th dimension of the z-th compensated test range profile.
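The two compensation operations of step 2 can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's implementation: the integer circular shift via `np.roll` stands in for the patent's FFT phase-ramp shift (which also permits fractional shifts), and all function names are illustrative:

```python
import numpy as np

def centroid(x):
    # Power-weighted center of mass of a range profile (the formula for O_{x,i}).
    n = np.arange(1, x.size + 1)
    return np.sum(n * x**2) / np.sum(x**2)

def translation_compensate(x):
    # Circularly shift the profile so its centroid sits at the center bin.
    # Integer np.roll approximates the IFFT/FFT phase-ramp shift of the text.
    shift = int(round(x.size / 2 - centroid(x)))
    return np.roll(x, shift)

def amplitude_compensate(x):
    # Euclidean (L2) norm normalization: x''(n) = x'(n) / ||x'||.
    return x / np.linalg.norm(x)
```

After both operations a profile has unit Euclidean norm and its power centroid lies at (approximately) the center of the range window.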
Step 3: extract the average range profiles of the training sample set.
Compute the number of samples per frame of the compensated training sample set according to the following formula:
N = Yc / (2ABL)
where N is the number of samples contained in each frame of the compensated training sample set, Y is the total number of samples in the compensated training set, c is the speed of light, A is the angular sector covered by the compensated training set, B is the bandwidth of the radar that acquired the high-resolution range profile data set, and L is the cross-range size of the target.
Form the framed training sample set from all samples according to the following notation:
{ { x_{p,n} }_{n=1}^{N} }_{p=1}^{F}
where x_{p,n} is the n-th sample in the p-th frame of the framed training set and F is the number of frames of the framed training set.
Compute the average range profile of each frame of the framed training set according to the following formula:
J_p = (1/N) Σ_{n=1}^{N} x_{p,n}
where J_p is the average range profile of the p-th frame of the framed training set and Σ denotes summation.
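The framing and averaging of step 3 can be sketched as follows (a minimal NumPy version; the assumption that a frame consists of N consecutive profiles, and the truncation of incomplete frames, are illustrative choices consistent with the formulas above):

```python
import numpy as np

def frame_and_average(X, N):
    # Split the compensated training set X (Y samples x D range bins) into
    # F frames of N consecutive profiles, then compute the average range
    # profile of each frame: J_p = (1/N) * sum_{n=1}^{N} x_{p,n}.
    F = X.shape[0] // N                    # number of complete frames
    frames = X[:F * N].reshape(F, N, -1)   # shape (F, N, D)
    J = frames.mean(axis=1)                # shape (F, D), one J_p per frame
    return frames, J
```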
Step 4: build the robust variational autoencoder.
Compute the feature of each sample in the framed training set.
The steps for computing the feature of each sample in the framed training set are as follows:
First step: compute the mean of the feature of each sample in the framed training set according to the following formula:
μ_{p,n} = Relu(x_{p,n}W11 + b11)W12 + b12
where μ_{p,n} is the mean of the feature of the n-th sample in the p-th frame of the framed training set, Relu denotes the Rectified Linear Unit operation, W11 is the weight matrix mapping the input layer of the robust variational autoencoder to the 1st hidden layer, b11 is the bias vector mapping the input layer to the 1st hidden layer, W12 is the weight matrix mapping the 1st hidden layer to the mean of the feature layer, and b12 is the bias vector mapping the 1st hidden layer to the mean of the feature layer.
Second step: compute the standard deviation of the feature of each sample in the framed training set according to the following formula:
σ_{p,n} = Relu(x_{p,n}W11 + b11)W13 + b13
where σ_{p,n} is the standard deviation of the feature of the n-th sample in the p-th frame, W13 is the weight matrix mapping the 1st hidden layer to the standard deviation of the feature layer, and b13 is the bias vector mapping the 1st hidden layer to the standard deviation of the feature layer.
Third step: compute the feature of each sample in the framed training set according to the following formula:
z_{p,n} = μ_{p,n} + ∈·σ_{p,n}
where z_{p,n} is the feature of the n-th sample in the p-th frame of the framed training set and ∈ is a sample drawn from the standard normal distribution.
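The three encoder formulas above (hidden layer, mean and standard-deviation heads, then the reparameterization z = μ + ∈·σ) can be sketched directly in NumPy. Layer sizes and initialization are not specified by the text and are left to the caller:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(a):
    # Rectified Linear Unit.
    return np.maximum(a, 0.0)

def encode(x, W11, b11, W12, b12, W13, b13):
    # Encoder of the robust variational autoencoder: one ReLU hidden layer,
    # linear heads for the feature mean and standard deviation, then the
    # reparameterization z = mu + eps * sigma with eps ~ N(0, 1).
    h = relu(x @ W11 + b11)              # 1st hidden layer
    mu = h @ W12 + b12                   # mu_{p,n}
    sigma = h @ W13 + b13                # sigma_{p,n}
    eps = rng.standard_normal(mu.shape)  # one draw of the noise term
    z = mu + eps * sigma                 # z_{p,n}
    return mu, sigma, z
```

Note that when σ is forced to zero the feature collapses to its mean, which is why the mean μ is often used as the deterministic feature at test time.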
Compute the reconstructed sample of each sample in the framed training set according to the following formula:
x̂_{p,n} = Relu(z_{p,n}W21 + b21)W22 + b22 + ∈·[Relu(z_{p,n}W21 + b21)W23 + b23]
where x̂_{p,n} is the reconstructed sample of the n-th sample in the p-th frame of the framed training set, W21 is the weight matrix mapping the feature layer of the robust variational autoencoder to the 2nd hidden layer, b21 is the bias vector mapping the feature layer to the 2nd hidden layer, W22 is the weight matrix mapping the 2nd hidden layer to the mean of the reconstructed-sample output layer, b22 is the bias vector mapping the 2nd hidden layer to the mean of the reconstructed-sample output layer, W23 is the weight matrix mapping the 2nd hidden layer to the standard deviation of the reconstructed-sample output layer, and b23 is the bias vector mapping the 2nd hidden layer to the standard deviation of the reconstructed-sample output layer.
Compute the reconstructed average range profile of each sample in the framed training set according to the following formula:
m_{p,n} = Relu(z_{p,n}W31 + b31)W32 + b32 + ∈·[Relu(z_{p,n}W31 + b31)W33 + b33]
where m_{p,n} is the reconstructed average range profile of the n-th sample in the p-th frame of the framed training set, W31 is the weight matrix mapping the feature layer of the robust variational autoencoder to the 3rd hidden layer, b31 is the bias vector mapping the feature layer to the 3rd hidden layer, W32 is the weight matrix mapping the 3rd hidden layer to the mean of the reconstructed-average-range-profile output layer, b32 is the bias vector mapping the 3rd hidden layer to the mean of that output layer, W33 is the weight matrix mapping the 3rd hidden layer to the standard deviation of that output layer, and b33 is the bias vector mapping the 3rd hidden layer to the standard deviation of that output layer.
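Both decoder branches (reconstructed sample with W21..b23, reconstructed average range profile with W31..b33) share the same form: a ReLU hidden layer, a mean head, and a standard-deviation head combined stochastically. A single parameterized sketch, with illustrative names, covers both:

```python
import numpy as np

def relu(a):
    # Rectified Linear Unit.
    return np.maximum(a, 0.0)

def decode(z, W, b, Wm, bm, Ws, bs, eps):
    # One decoder branch of the robust variational autoencoder:
    #   out = mean_head(h) + eps * std_head(h),  h = relu(z W + b).
    # With (W21, b21, W22, b22, W23, b23) this gives the reconstructed
    # sample x_hat_{p,n}; with (W31, b31, W32, b32, W33, b33) it gives the
    # reconstructed average range profile m_{p,n}.
    h = relu(z @ W + b)
    return (h @ Wm + bm) + eps * (h @ Ws + bs)
```

Passing eps = 0 yields the deterministic mean of the branch, which is convenient for inspecting reconstructions.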
Build the cost function of the robust variational autoencoder according to the following formula:
where L is the cost function of the robust variational autoencoder, ||·|| is the modulus (norm) operation, tr is the trace operation, T denotes the transpose operation, Q is the number of dimensions of the feature of the n-th sample in the p-th frame, log is the logarithm with the natural constant as base, and det is the determinant operation.
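The cost-function formula itself did not survive extraction, but the symbols it names (norm, trace, transpose, log-det, feature dimension Q) are exactly those of the closed-form Gaussian KL term of a standard variational autoencoder. As an illustration only, and not necessarily the patent's exact objective, a cost of this family that combines the two reconstruction errors above with that KL term would read:

```latex
L = \sum_{p,n} \Big( \lVert \hat{x}_{p,n} - x_{p,n} \rVert^{2}
      + \lVert m_{p,n} - J_{p} \rVert^{2}
      + \tfrac{1}{2}\big[ \operatorname{tr}(\Sigma_{p,n})
      + \mu_{p,n}\,\mu_{p,n}^{T} - Q
      - \log \det(\Sigma_{p,n}) \big] \Big),
\qquad \Sigma_{p,n} = \operatorname{diag}\!\big(\sigma_{p,n}^{2}\big)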
Step 5: train the robust variational autoencoder.
The steps for training the robust variational autoencoder are as follows:
First step: assign initial values to the parameters of the robust variational autoencoder.
Second step: input the framed training sample set into the robust variational autoencoder and train it, obtaining the updated parameters of the robust variational autoencoder.
Third step: judge whether the number of iterations equals 50; if so, perform the fourth step; otherwise, add 1 to the iteration count and perform the second step.
Fourth step: obtain the trained robust variational autoencoder.
Step 6: train the linear support vector machine.
The steps for training the linear support vector machine are as follows:
First step: input the framed training sample set into the trained robust variational autoencoder to obtain the training feature set.
Second step: train the linear support vector machine with the training feature set.
Third step: obtain the trained linear support vector machine.
Step 7: obtain the predicted class labels of the test sample set.
Input the compensated test sample set into the trained robust variational autoencoder to obtain the test feature set.
Input the test feature set into the trained linear support vector machine to obtain the predicted class labels of the test sample set.
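The classifier stage of steps 6 and 7 (fit on training features, predict on test features) can be sketched as below. The patent trains an actual linear SVM; to keep this sketch dependency-free, a nearest-class-mean linear classifier stands in for it, which is an explicit simplification and not the patent's method:

```python
import numpy as np

class NearestClassMean:
    # Linear classifier standing in for the linear SVM of step 6; the
    # fit/predict interface mirrors how the VAE features would be used.
    def fit(self, feats, labels):
        self.classes = np.unique(labels)
        self.means = np.stack([feats[labels == c].mean(axis=0)
                               for c in self.classes])
        return self

    def predict(self, feats):
        # Assign each feature vector to its closest class mean.
        d = ((feats[:, None, :] - self.means[None, :, :]) ** 2).sum(axis=-1)
        return self.classes[np.argmin(d, axis=1)]
```

In the full pipeline, `feats` would be the feature set produced by the trained robust variational autoencoder for the training set (step 6) and for the compensated test set (step 7).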
The effect of the present invention can be illustrated by simulation experiments:
1. Experimental conditions:
The simulation experiments of the present invention were carried out in a hardware environment with an Intel(R) Core(TM) i5-6500 CPU at 3.2 GHz and 8 GB of memory, and a software environment based on Python 3.6.
2. Simulation contents and result analysis:
The data of the simulation experiments are aircraft data measured by the ISAR of a domestic research institute. The data contain three classes of aircraft targets: the "Yak-42" is a medium-to-large jet aircraft; the "Cessna Citation" is a small jet aircraft; the "An-26" is a small-to-medium propeller aircraft. Table 1 lists the operating parameters of the radar and the size parameters of the aircraft.
Table 1. Operating parameters of the radar and size parameters of the aircraft
The simulation experiments of the present invention classify the radar high-resolution range profiles into 3 classes; the training sample set and test sample set contain 14000 and 5200 samples, respectively.
Table 2 is a statistical table of the accuracies obtained by predicting the class labels of the test sample set with the method of the invention and with prior-art methods (linear discriminant analysis, singular value decomposition, principal component analysis, support vector machine, deep belief network, stacked denoising autoencoder, and stacked rectified autoencoder) and comparing the predicted class labels with the true class labels.
Table 2. Accuracy statistics of the simulation experiments
Simulation algorithm | Classification accuracy (%)
Method of the invention | 92.12
Linear discriminant analysis | 81.30
Singular value decomposition | 74.70
Principal component analysis | 83.81
Support vector machine | 88.28
Deep belief network | 90.64
Stacked denoising autoencoder | 91.20
Stacked rectified autoencoder | 92.03
As can be seen from Table 2, the method of the invention achieves a higher classification accuracy than the other prior-art methods, demonstrating that the present invention indeed improves the classification performance on radar high-resolution range profiles.
Claims (9)
1. A radar target classification method based on a robust variational autoencoder, comprising the following steps:
(1) Read the data:
From the high-resolution range profile data set acquired by the radar, sequentially read 14000 samples to form the training sample set and 5200 samples to form the test sample set;
(2) Compensate the training and test sample sets:
(2a) Apply centroid alignment to the training and test sample sets to compensate for translation sensitivity, obtaining the translation-compensated training and test sample sets;
(2b) Apply Euclidean norm normalization to the translation-compensated training and test sample sets to compensate for amplitude sensitivity, obtaining the compensated training and test sample sets;
(3) Extract the average range profiles of the training sample set:
(3a) Compute the number of samples per frame of the training sample set according to the following formula:
N = Yc / (2ABL)
where N is the number of samples in one frame of the training sample set, Y is the total number of samples in the compensated training set, c is the speed of light, A is the angular sector covered by the compensated training set, B is the bandwidth of the radar that acquired the high-resolution range profile data set, and L is the cross-range size of the target;
(3b) Form the framed training sample set from all samples according to the following notation:
{ { x_{p,n} }_{n=1}^{N} }_{p=1}^{F}
where x_{p,n} is the n-th sample in the p-th frame of the framed training set and F is the number of frames of the framed training set;
(3c) Compute the average range profile of each frame of the framed training set according to the following formula:
J_p = (1/N) Σ_{n=1}^{N} x_{p,n}
where J_p is the average range profile of the p-th frame of the framed training set and Σ denotes summation;
(4) Build the robust variational autoencoder:
(4a) Compute the feature of each sample in the framed training set;
(4b) Compute the reconstructed sample of each sample in the framed training set;
(4c) Compute the reconstructed average range profile of each sample in the framed training set;
(4d) Build the cost function of the robust variational autoencoder;
(5) Train the robust variational autoencoder;
(6) Train the linear support vector machine;
(7) Obtain the predicted class labels of the test sample set:
(7a) Input the compensated test sample set into the trained robust variational autoencoder to obtain the test feature set;
(7b) Input the test feature set into the trained linear support vector machine to obtain the predicted class labels of the test sample set.
2. The radar target classification method based on a robust variational autoencoder according to claim 1, characterized in that the centroid alignment method of step (2a) comprises the following steps:
First step: compute the centroid of each range profile in the training sample set according to the following formula:
O_{x,i} = Σ_{n=1}^{D} n·x_i(n)² / Σ_{n=1}^{D} x_i(n)²
where O_{x,i} is the centroid of the i-th range profile in the training set, D is the total number of dimensions of the i-th range profile, Σ denotes summation, n is the index of the dimension, and x_i(n) is the value of the n-th dimension of the i-th range profile in the training set;
Second step, according to the following formula, calculate the barycenter that test sample concentrates each Range Profile:
$$O_{y,z} = \frac{\sum_{n=1}^{D} n\, y_z(n)^2}{\sum_{n=1}^{D} y_z(n)^2}$$
where O_{y,z} denotes the centroid of the z-th range profile in the test sample set, and y_z(n) denotes the value of the n-th dimension of the z-th range profile in the test sample set;
Step 3: compute each dimension value of each range profile in the translation-sensitivity-compensated training sample set according to the following formula:
$$x'_i(n) = \mathrm{IFFT}\left\{\mathrm{FFT}\{x_i(n)\}\, e^{-j[\Phi_{x_i,1} - \Phi_{x_i,2}]k}\right\}$$
where x'_i(n) denotes the value of the n-th dimension of the i-th range profile in the translation-compensated training sample set, IFFT(·) denotes the inverse discrete Fourier transform, FFT(·) denotes the discrete Fourier transform, e denotes exponentiation with the natural constant as base, j denotes the imaginary unit, Φ_{x_i,1} denotes the phase corresponding to the centroid of the i-th range profile in the training sample set, Φ_{x_i,2} denotes the phase corresponding to the center of the i-th range profile in the training sample set, and k denotes the shift parameter;
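Step 3 is a circular shift implemented in the frequency domain via the Fourier shift theorem. A minimal numpy sketch, with the patent's two phase terms Φ abstracted into a single centroid-to-center shift (the function name and the exact phase construction are assumptions):

```python
import numpy as np

def align_to_center(x):
    """Circularly shift a range profile so its energy centroid lands at the
    array center, by applying a linear phase ramp in the frequency domain
    (Fourier shift theorem). Sketch of the step-3 compensation."""
    D = x.size
    n = np.arange(1, D + 1)
    centroid = np.sum(n * x**2) / np.sum(x**2)   # 1-based centroid
    shift = (D // 2 + 1) - centroid              # move centroid to center
    k = np.arange(D)                             # frequency bin indices
    phase = np.exp(-2j * np.pi * k * shift / D)  # delay by `shift` samples
    return np.real(np.fft.ifft(np.fft.fft(x) * phase))

x = np.zeros(16)
x[2] = 1.0                                       # single peak, 0-based index 2
aligned = align_to_center(x)
peak_idx = int(np.argmax(aligned))               # should land at D // 2
```

Working in the frequency domain allows fractional shifts and avoids explicit resampling, which is why the claim formulates the compensation through FFT/IFFT rather than array indexing.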
Step 4: compute each dimension value of each range profile in the translation-sensitivity-compensated test sample set according to the following formula:
$$y'_z(n) = \mathrm{IFFT}\left\{\mathrm{FFT}\{y_z(n)\}\, e^{-j[\Phi_{y_z,1} - \Phi_{y_z,2}]k}\right\}$$
where y'_z(n) denotes the value of the n-th dimension of the z-th range profile in the translation-compensated test sample set, Φ_{y_z,1} denotes the phase corresponding to the centroid of the z-th range profile in the test sample set, and Φ_{y_z,2} denotes the phase corresponding to the center of the z-th range profile in the test sample set.
3. The radar target classification method based on a robust variational autoencoder according to claim 1, characterized in that the Euclidean norm normalization method of step (2b) comprises the following steps:
Step 1: apply amplitude-sensitivity compensation to the translation-compensated training sample set according to the following formula, computing each dimension value of each range profile in the compensated training sample set:
$$x''_i(n) = \frac{x'_i(n)}{\sqrt{\sum_{n=1}^{D} x'_i(n)^2}}$$
where x''_i(n) denotes the value of the n-th dimension of the i-th range profile in the compensated training sample set, and √· denotes the square-root operation;
Step 2: apply amplitude-sensitivity compensation to the translation-compensated test sample set according to the following formula, computing each dimension value of each range profile in the compensated test sample set:
$$y''_z(n) = \frac{y'_z(n)}{\sqrt{\sum_{n=1}^{D} y'_z(n)^2}}$$
where y''_z(n) denotes the value of the n-th dimension of the z-th range profile in the compensated test sample set.
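The amplitude compensation of claim 3 is an l2 (Euclidean) normalization of each profile; a minimal numpy sketch with an illustrative function name:

```python
import numpy as np

def l2_normalize(profiles):
    """Divide each range profile (one row) by its Euclidean norm, as in the
    amplitude-sensitivity compensation of claim 3."""
    norms = np.sqrt(np.sum(profiles ** 2, axis=1, keepdims=True))
    return profiles / norms

X = np.array([[3.0, 4.0],
              [0.0, 2.0]])
Xn = l2_normalize(X)   # every row now has unit Euclidean norm
```

After this step every profile lies on the unit sphere, so absolute echo strength no longer influences the classifier.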
4. The radar target classification method based on a robust variational autoencoder according to claim 1, characterized in that the computation in step (4a) of the feature corresponding to each sample in the framed training sample set comprises the following steps:
Step 1: compute the mean of the feature corresponding to each sample in the framed training sample set according to the following formula:
μ_{p,n} = Relu(x_{p,n}W^{11} + b^{11})W^{12} + b^{12}
where μ_{p,n} denotes the mean of the feature corresponding to the n-th sample in the p-th frame of the framed training sample set, Relu denotes the Rectified Linear Unit operation, W^{11} denotes the weight matrix mapping the input layer of the robust variational autoencoder to the 1st hidden layer, b^{11} denotes the bias vector mapping the input layer to the 1st hidden layer, W^{12} denotes the weight matrix mapping the 1st hidden layer to the mean of the feature layer, and b^{12} denotes the bias vector mapping the 1st hidden layer to the mean of the feature layer;
Step 2: compute the standard deviation of the feature corresponding to each sample in the framed training sample set according to the following formula:
σ_{p,n} = Relu(x_{p,n}W^{11} + b^{11})W^{13} + b^{13}
where σ_{p,n} denotes the standard deviation of the feature corresponding to the n-th sample in the p-th frame of the framed training sample set, W^{13} denotes the weight matrix mapping the 1st hidden layer to the standard deviation of the feature layer, and b^{13} denotes the bias vector mapping the 1st hidden layer to the standard deviation of the feature layer;
Step 3: compute the feature corresponding to each sample in the framed training sample set according to the following formula:
z_{p,n} = μ_{p,n} + ε·σ_{p,n}
where z_{p,n} denotes the feature corresponding to the n-th sample in the p-th frame of the framed training sample set, and ε denotes one sample drawn from the standard normal distribution.
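The three steps of claim 4 are the standard VAE encoder with the reparameterization trick. A minimal numpy sketch; the weight names mirror the claim, but the layer sizes and random values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(a):
    return np.maximum(a, 0.0)

def encode(x, W11, b11, W12, b12, W13, b13):
    """Claim-4 encoder: shared hidden layer, mean branch, standard-deviation
    branch, then z = mu + eps * sigma (reparameterization trick)."""
    h = relu(x @ W11 + b11)                # 1st hidden layer
    mu = h @ W12 + b12                     # feature mean
    sigma = h @ W13 + b13                  # feature standard deviation
    eps = rng.standard_normal(mu.shape)    # draw from N(0, 1)
    z = mu + eps * sigma                   # stochastic feature
    return mu, sigma, z

D, H, Q = 8, 6, 3                          # illustrative layer sizes
x = rng.standard_normal((1, D))
W11 = rng.standard_normal((D, H)); b11 = np.zeros(H)
W12 = rng.standard_normal((H, Q)); b12 = np.zeros(Q)
W13 = rng.standard_normal((H, Q)); b13 = np.zeros(Q)
mu, sigma, z = encode(x, W11, b11, W12, b12, W13, b13)
```

Sampling through ε rather than from the distribution directly keeps the mapping differentiable in the weights, which is what makes the network trainable by gradient descent.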
5. The radar target classification method based on a robust variational autoencoder according to claim 1, characterized in that the reconstructed sample corresponding to each sample in the framed training sample set in step (4b) is computed by the following formula:
$$\hat{x}_{p,n} = \mathrm{Relu}(z_{p,n}W^{21} + b^{21})W^{22} + b^{22} + \epsilon\cdot\left[\mathrm{Relu}(z_{p,n}W^{21} + b^{21})W^{23} + b^{23}\right]$$
where x̂_{p,n} denotes the reconstructed sample corresponding to the n-th sample in the p-th frame of the framed training sample set, W^{21} denotes the weight matrix mapping the feature layer of the robust variational autoencoder to the 2nd hidden layer, b^{21} denotes the bias vector mapping the feature layer to the 2nd hidden layer, W^{22} denotes the weight matrix mapping the 2nd hidden layer to the mean of the reconstructed-sample output layer, b^{22} denotes the bias vector mapping the 2nd hidden layer to the mean of the reconstructed-sample output layer, W^{23} denotes the weight matrix mapping the 2nd hidden layer to the standard deviation of the reconstructed-sample output layer, and b^{23} denotes the bias vector mapping the 2nd hidden layer to the standard deviation of the reconstructed-sample output layer.
6. The radar target classification method based on a robust variational autoencoder according to claim 1, characterized in that the reconstructed average range profile corresponding to each sample in the framed training sample set in step (4c) is computed by the following formula:
m_{p,n} = Relu(z_{p,n}W^{31} + b^{31})W^{32} + b^{32} + ε·[Relu(z_{p,n}W^{31} + b^{31})W^{33} + b^{33}]
where m_{p,n} denotes the reconstructed average range profile corresponding to the n-th sample in the p-th frame of the framed training sample set, W^{31} denotes the weight matrix mapping the feature layer of the robust variational autoencoder to the 3rd hidden layer, b^{31} denotes the bias vector mapping the feature layer to the 3rd hidden layer, W^{32} and b^{32} denote the weight matrix and bias vector mapping the 3rd hidden layer to the mean of the reconstructed-average-range-profile output layer, and W^{33} and b^{33} denote the weight matrix and bias vector mapping the 3rd hidden layer to the standard deviation of the reconstructed-average-range-profile output layer.
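Claims 5 and 6 share one decoder structure: a hidden layer feeding a mean branch plus a noise-scaled standard-deviation branch. A minimal numpy sketch of that structure (layer sizes are illustrative; the claim's scalar ε is taken as one draw per sample, an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(a):
    return np.maximum(a, 0.0)

def decode(z, W21, b21, W22, b22, W23, b23):
    """Stochastic decoder of claim 5: mean branch plus epsilon-scaled
    standard-deviation branch. The mean-profile decoder of claim 6
    (W^{31}..b^{33}) has the identical structure with its own weights."""
    h = relu(z @ W21 + b21)                          # 2nd hidden layer
    eps = rng.standard_normal(h.shape[0])[:, None]   # one draw per sample
    return (h @ W22 + b22) + eps * (h @ W23 + b23)

Q, H2, D = 3, 6, 8
z = rng.standard_normal((2, Q))
W21 = rng.standard_normal((Q, H2)); b21 = np.zeros(H2)
W22 = rng.standard_normal((H2, D)); b22 = np.zeros(D)
W23 = rng.standard_normal((H2, D)); b23 = np.zeros(D)
x_hat = decode(z, W21, b21, W22, b22, W23, b23)
```

Reusing the same reparameterized form on the output side is what makes the reconstruction itself a sample from a learned Gaussian rather than a point estimate.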
7. The radar target classification method based on a robust variational autoencoder according to claim 1, characterized in that the cost function of the robust variational autoencoder constructed in step (4d) is given by the following formula:
$$L = \left\|x_{p,n} - \hat{x}_{p,n}\right\|^2 + \left\|J_p - m_{p,n}\right\|^2 + \frac{1}{2}\left\{\mathrm{tr}(\sigma_{p,n}) + \mu_{p,n}^{T}\mu_{p,n} - Q - \log\det(\sigma_{p,n})\right\}$$
where L denotes the cost function of the robust variational autoencoder, ||·|| denotes the norm operation, tr denotes the trace operation, T denotes transposition, Q denotes the number of dimensions of the feature corresponding to the n-th sample in the p-th frame, log denotes the logarithm with the natural constant as base, and det denotes the determinant operation.
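The claim-7 cost can be sketched directly for a single sample. In this sketch σ_{p,n} is taken as the diagonal of the latent covariance, so tr(σ) becomes a sum and log det(σ) a sum of logs (an interpretation of the claim's tr/det notation, not stated by the patent):

```python
import numpy as np

def rvae_cost(x, x_hat, J, m, mu, sigma):
    """Claim-7 cost for one sample: squared reconstruction error, squared
    average-profile reconstruction error, and a KL-style penalty pulling
    the latent Gaussian toward N(0, I)."""
    Q = mu.size
    rec = np.sum((x - x_hat) ** 2)        # ||x - x_hat||^2
    rec_mean = np.sum((J - m) ** 2)       # ||J_p - m_{p,n}||^2
    kl = 0.5 * (np.sum(sigma) + mu @ mu - Q - np.sum(np.log(sigma)))
    return rec + rec_mean + kl

mu = np.zeros(3)
sigma = np.ones(3)                        # unit variance: KL term vanishes
x = np.array([1.0, 2.0])
J = np.array([0.5, 0.5])
cost = rvae_cost(x, x, J, J, mu, sigma)   # perfect reconstructions -> 0
```

With μ = 0 and σ = 1 all three terms are zero, confirming that the penalty measures deviation of the latent distribution from the standard normal prior.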
8. The radar target classification method based on a robust variational autoencoder according to claim 1, characterized in that the training of the robust variational autoencoder in step (5) comprises the following steps:
Step 1: assign initial values to the parameters of the robust variational autoencoder;
Step 2: input the framed training sample set into the robust variational autoencoder for training, obtaining updated parameters of the robust variational autoencoder;
Step 3: if the cycle count equals 50, go to step 4; otherwise, increment the cycle count by 1 and return to step 2;
Step 4: obtain the trained robust variational autoencoder.
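The four steps above are an initialize/update loop with a fixed cycle count of 50. The skeleton can be sketched as follows; the model here is a stand-in scalar least-squares problem, not the autoencoder itself, so only the loop structure mirrors the claim:

```python
import numpy as np

rng = np.random.default_rng(2)

w = rng.standard_normal()          # step 1: assign an initial value
data_x = np.linspace(-1, 1, 32)
data_y = 2.0 * data_x              # target: w should approach 2
lr = 0.1
for epoch in range(50):            # step 3: stop when the cycle count hits 50
    grad = np.mean(2 * (w * data_x - data_y) * data_x)   # step 2: update
    w -= lr * grad
trained_w = w                      # step 4: the trained parameter
```

A fixed epoch budget (rather than a convergence test) is the simplest stopping rule and matches the claim's "cycle count equals 50" condition.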
9. The radar target classification method based on a robust variational autoencoder according to claim 1, characterized in that the training of the linear support vector machine in step (6) comprises the following steps:
Step 1: input the framed training sample set into the trained robust variational autoencoder to obtain the training feature set;
Step 2: train the linear support vector machine with the training feature set;
Step 3: obtain the trained linear support vector machine.
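Step 2 of claim 9 fits a linear SVM on the extracted features. A minimal numpy stand-in using primal subgradient descent on the hinge loss (the toy clusters stand in for encoder output; in practice a library solver would be used):

```python
import numpy as np

rng = np.random.default_rng(3)

def train_linear_svm(F, y, lam=0.01, lr=0.1, epochs=200):
    """Hinge-loss linear SVM on feature set F with labels y in {-1, +1},
    trained by subgradient descent on the regularized primal objective."""
    n, d = F.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (F @ w + b)
        mask = margins < 1                       # margin-violating samples
        grad_w = lam * w - (y[mask, None] * F[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# toy "features": two well-separated clusters standing in for encoder output
F = np.vstack([rng.normal(-2, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
w, b = train_linear_svm(F, y)
pred = np.sign(F @ w + b)                        # predicted class labels
accuracy = float(np.mean(pred == y))
```

A linear classifier suffices here because the autoencoder has already mapped the profiles into a feature space where classes are intended to be linearly separable.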
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710743598.7A CN107609579B (en) | 2017-08-25 | 2017-08-25 | Radar target classification method based on steady variational self-encoder |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107609579A true CN107609579A (en) | 2018-01-19 |
CN107609579B CN107609579B (en) | 2020-01-07 |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101598783A (en) * | 2009-07-08 | 2009-12-09 | 西安电子科技大学 | Based on distance by radar under the strong noise background of PPCA model as statistical recognition method |
CN106054155A (en) * | 2016-06-03 | 2016-10-26 | 西安电子科技大学 | Radar high resolution range profile (HRRP) target recognition method based on convolution factor analysis (CFA) model |
Non-Patent Citations (5)
Title |
---|
KINGMA D P, WELLING M: "Auto-Encoding Variational Bayes", 《INTERNATIONAL CONFERENCE ON LEARNING REPRESENTATIONS 2014》 * |
LAN DU ET AL.: "Radar HRRP Statistical Recognition: Parametric Model and Model Selection", 《IEEE TRANSACTIONS ON SIGNAL PROCESSING》 * |
FENG Bo et al.: "Radar high-resolution range profile target feature extraction algorithm based on robust deep networks", 《Journal of Electronics & Information Technology》 *
LI Fei: "Radar target recognition based on one-dimensional range profiles", 《Ship Electronic Engineering》 *
CHEN Bo et al.: "Classifier analysis based on three different absolute alignment methods", 《Modern Radar》 *
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||