CN106599831A - SAR target identification method based on sample weighting category specific and shared dictionary
- Publication number
- CN106599831A CN106599831A CN201611136982.2A CN201611136982A CN106599831A CN 106599831 A CN106599831 A CN 106599831A CN 201611136982 A CN201611136982 A CN 201611136982A CN 106599831 A CN106599831 A CN 106599831A
- Authority
- CN
- China
- Prior art keywords
- sectioning image
- clutter
- class
- training
- target
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Abstract
The invention discloses an SAR target identification method based on sample-weighted category-specific and shared dictionaries, aimed at solving the problem of poor SAR target identification performance in complicated scenes in the prior art. The method comprises the following steps: 1) extracting local features from the given training slices and test slices; 2) obtaining a global dictionary from the local features of the training slices; 3) performing standard sparse coding separately on the local features of the training slices and the test slices with the global dictionary, obtaining the coding coefficients of the local features; 4) performing feature merging and dimensionality reduction separately on the coding coefficients of the local features to obtain the global features of the training slices and the test slices; and 5) classifying the global features of the test slices with a support vector machine. The method of the invention improves identification performance and can be applied to SAR target identification in complicated scenes.
Description
Technical field
The invention belongs to the field of radar target identification technology, and relates generally to an SAR target discrimination method that can provide important information for vehicle target recognition and classification.
Background technology
Synthetic aperture radar (SAR) is based on microwave remote sensing and is unaffected by weather or time of day, offering all-weather, day-and-night working capability together with multiband, multi-polarization, variable viewing angle and penetration characteristics. With the appearance of more and more airborne and spaceborne SAR systems, a large amount of SAR data under different scenes has become available, and one important application of such data is automatic target recognition (ATR). Target discrimination under complex scenes has accordingly become one of the current research directions.
Feature extraction is an important part of the target discrimination process. Over the past few decades there has been a large amount of research on feature extraction for SAR target discrimination, which can be broadly divided into four kinds. The first kind is texture features, such as the standard deviation feature, the fractal dimension feature and the arrangement energy ratio feature proposed by Lincoln Laboratory. The second kind is related to the shape of the target, such as the mass feature, the diameter feature and the normalized rotational inertia feature proposed by ERIM (Environmental Research Institute of Michigan), as well as the normalized horizontal and vertical projection features and the maximum and minimum projection length features used in other documents. The third kind depends on the contrast between target and background, such as the peak CFAR, mean CFAR and percent-bright CFAR features proposed by ERIM, and the average signal-to-noise ratio, peak signal-to-noise ratio and brightest pixel percentage features proposed by Gao. In addition, Lincoln Laboratory proposed several features describing how the high-brightness pixels of an image spread in space under different thresholds; these features depend not only on the size of the target but also on the difference between target and background. The fourth kind is polarization features, such as the percent-pure, percent-pure-even and percent-bright-even features, but these polarization features can only be extracted from fully polarimetric SAR data.
The above traditional features mainly have shortcomings in two respects. First, these features provide only a coarse, partial description of the target: they cannot describe the detailed local shape and structure information of targets and clutter, which means discrimination cannot fully use the rich detailed information of high-resolution images. When targets and clutter show no significant difference in texture, shape and contrast, these features cannot deliver good discrimination performance. Second, existing features are suited to discriminating targets from natural clutter under simple scenes. Most current SAR target discrimination methods are verified on the MSTAR data set, which has 0.3 m resolution. The scenes of this standard data set are fairly simple: target slices possess similar characteristics, and each slice contains only one target, located at the center of the slice image. The target is a compact, high-intensity region surrounded by lower-intensity, homogeneous background clutter. Clutter slices also show similar attributes, with most high-intensity regions corresponding to tree crowns. These target slices and clutter slices differ greatly in texture, shape and contrast, so traditional target discrimination features are suitable for this data set and show reasonable discrimination performance. However, real scenes are more complex. In the miniSAR data set, for example, the position and direction of targets in the slices vary, and a slice image may contain multiple targets or partial targets. The types of clutter are also diverse, including natural clutter such as trees and much man-made clutter such as building edges. Existing texture, shape and contrast features are therefore insufficient to discriminate targets from clutter in such cases.
In summary, with the continuous improvement of SAR image resolution, traditional features have considerable limitations for target discrimination under complex scenes.
The content of the invention
In view of the deficiencies of existing SAR target discrimination methods, the present invention proposes an SAR target discrimination method based on sample-weighted category-specific and shared dictionaries, so as to improve target discrimination performance under complex scenes.
The technical scheme of the invention is realized as follows:
(1) Using the SAR-SIFT descriptor, extract local features from the given training slice images and test slice images, obtaining the local features X of the training slices and the local features Y of the test slices. Here X1 denotes the local features of the clutter-class training slices, X2 the local features of the target-class training slices, Y1 the local features of the clutter-class test slices and Y2 the local features of the target-class test slices; p1 is the number of clutter-class training slices, p2 the number of target-class training slices, k1 the number of clutter-class test slices and k2 the number of target-class test slices;
(2) Take the clutter-class training slice local features X1 from X obtained in (1) as the clutter-class training samples and the target-class training slice local features X2 as the target-class training samples, and learn the global dictionary U:
2a) Initialize the clutter-class dictionary U1, the target-class dictionary U2, the shared dictionary U0, the clutter-class training sample weights and the target-class training sample weights, and set the current iteration number iter = 0;
2b) With the clutter-class dictionary U1, the target-class dictionary U2 and the shared dictionary U0 at the current iteration, compute the sparse representation coefficients H1 of the clutter-class training slice local features and H2 of the target-class training slice local features;
2c) Using H1 and H2 obtained in 2b) and the alternating optimization method, update the clutter-class dictionary U1, the target-class dictionary U2 and the shared dictionary U0, obtaining the updated dictionaries U1′, U2′ and U0′;
2d) Let iter = iter + 1 and record the current iteration number, then judge whether to update the sample weights: if mod(iter, iterSkip) equals 0, execute step 2e) to update the training sample weights; otherwise do not update them, set U1 = U1′, U2 = U2′, U0 = U0′ and return to step 2b). Here iterSkip is the interval between training sample weight updates and mod denotes the remainder operation;
2e) Using U1′, U2′ and U0′ obtained in 2c), update the clutter-class training sample weights and the target-class training sample weights, obtaining the updated weights;
2f) Judge whether the current iteration number iter is less than the maximum iteration number iterMax: if so, set U1 = U1′, U2 = U2′, U0 = U0′, keep the updated sample weights, and return to step 2b); otherwise stop iterating and obtain the final global dictionary U = [U0′, U1′, U2′];
(3) Using the global dictionary U obtained in (2), perform standard sparse coding separately on the local features X of the training slices and the local features Y of the test slices obtained in (1), obtaining the local-feature coding coefficients V of the training slices and W of the test slices;
(4) Perform feature merging and dimensionality reduction separately on the local-feature coding coefficients V of the training slices and W of the test slices obtained in (3), obtaining the global features V‴ of the training slices and W‴ of the test slices;
(5) Train a two-class linear SVM classifier with the global features V‴ of the training slices, and classify the global features W‴ of the test slices with the trained classifier, obtaining the classification decision value decision of each test slice. Compare decision with the set threshold Thr = 0: if decision ≥ Thr, the slice is regarded as a target-class slice; otherwise it is a clutter-class slice.
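As a rough illustration of the overall data flow of steps (1) to (5), the toy sketch below wires simplified stand-ins together: local features per slice, a global dictionary, coding, pooling, and the global features an SVM would score. Every dimension and helper here is illustrative, and the ridge-regression coder replaces sparse coding only for brevity; this is not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(6)
d, n_atoms, n_patches = 16, 24, 10

# (1) local features per slice (stand-in for SAR-SIFT extraction)
train_feats = [rng.random((d, n_patches)) for _ in range(6)]
test_feats = [rng.random((d, n_patches)) for _ in range(4)]

# (2) global dictionary U = [U0, U1, U2] (stand-in for the learned one)
U = rng.standard_normal((d, n_atoms))
U /= np.linalg.norm(U, axis=0)

# (3) sparse coding replaced by a ridge surrogate, for brevity only
code = lambda F: np.linalg.solve(U.T @ U + 0.1 * np.eye(n_atoms), U.T @ F)

# (4) feature merging: max-pool codes over patches, then l2-normalize
def global_feat(F):
    v = code(F).max(axis=1)
    return v / np.linalg.norm(v)

Vg = np.stack([global_feat(F) for F in train_feats])   # training globals
Wg = np.stack([global_feat(F) for F in test_feats])    # test globals

# (5) a trained linear SVM would now score Wg against Thr = 0
print(Vg.shape, Wg.shape)   # (6, 24) (4, 24)
```

The sketch only shows how one slice's many local descriptors collapse into a single global feature vector per slice before classification.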
Compared with the prior art, the present invention has the following advantages:
1. The present invention is an SAR image vehicle target discrimination method for complex scenes. Compared with target discrimination methods based on traditional features, it takes into account the local structure information of target and clutter slices under complex scenes and the distribution of that local structure, making full use of the detailed information of high-resolution images, and thereby improves SAR target discrimination performance under complex scenes.
2. Because the present invention strengthens the learning of poorly described samples during global dictionary generation, the target-class and clutter-class global features it obtains are more discriminative than those of the existing SAR target discrimination method based on category-specific and shared dictionary learning (CSDL), further improving SAR target discrimination performance under complex scenes.
Description of the drawings
Fig. 1 is the implementation flowchart of the present invention;
Fig. 2 is the sub-flowchart of global dictionary generation in the present invention;
Fig. 3 shows part of the miniSAR slice images used in Experiment 1 of the present invention;
Fig. 4 shows part of the miniSAR slice images used in Experiment 2 of the present invention;
Fig. 5 shows part of the miniSAR slice images used in Experiment 3 of the present invention;
Fig. 6 shows part of the miniSAR slice images used in Experiment 4 of the present invention.
Specific embodiment
Embodiments and effects of the invention are described in further detail below with reference to the accompanying drawings.
The inventive method mainly concerns vehicle target discrimination under complex scenes. Existing target discrimination features have mostly been verified on the MSTAR data set, whose scenes are relatively simple. Target slices possess similar characteristics: each slice contains only one target, located at the center of the slice image. The target area is a compact, high-intensity region surrounded by lower-intensity, homogeneous clutter background. Clutter slices also show similar attributes, with most high-intensity regions corresponding to tree crowns. These target slices and clutter slices differ greatly in texture, shape and contrast. With the improvement of radar resolution, the scenes described by SAR images have become increasingly complex: target slices may contain not only a single target but also multiple or partial targets, and the target is not necessarily located at the center of the slice. Clutter slices contain not only natural clutter but also a large amount of man-made clutter of different shapes. To address these problems, the present invention combines sample weighting with category-specific and shared dictionary learning to discriminate SAR targets, improving the discrimination performance for SAR targets under complex scenes.
Referring to Fig. 1, the present invention is realized by the following steps:
Step 1: extract local features from the given training slice images and test slice images.
1a) Use the SAR-SIFT descriptor to extract local features from the given training slice images, obtaining the local features X of the training slices, where X1 is the local features of the clutter-class training slices, X2 is the local features of the target-class training slices, p1 is the number of clutter-class training slices and p2 is the number of target-class training slices;
1b) Use the SAR-SIFT descriptor to extract local features from the given test slice images, obtaining the local features Y of the test slices, where Y1 is the local features of the clutter-class test slices, Y2 is the local features of the target-class test slices, k1 is the number of clutter-class test slices and k2 is the number of target-class test slices.
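The SAR-SIFT extraction itself is not spelled out in the patent text. As a rough illustration of dense local-descriptor extraction on one slice image, the sketch below computes simple gradient-orientation histograms on a regular grid. This is only a stand-in: real SAR-SIFT uses ratio-of-averages gradients for robustness to speckle, and all names and sizes here are hypothetical.

```python
import numpy as np

def dense_local_features(img, patch=8, stride=8, nbins=8):
    """Extract one orientation-histogram descriptor per grid patch.

    A simplified stand-in for SAR-SIFT (which uses ratio-based
    gradients to handle multiplicative speckle noise).
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)  # gradient orientation
    feats = []
    for r in range(0, img.shape[0] - patch + 1, stride):
        for c in range(0, img.shape[1] - patch + 1, stride):
            m = mag[r:r + patch, c:c + patch].ravel()
            a = ang[r:r + patch, c:c + patch].ravel()
            hist, _ = np.histogram(a, bins=nbins, range=(0, 2 * np.pi),
                                   weights=m)    # magnitude-weighted bins
            n = np.linalg.norm(hist)
            feats.append(hist / n if n > 0 else hist)
    return np.array(feats)

slice_img = np.random.rand(64, 64)   # a dummy 64x64 SAR slice
X_local = dense_local_features(slice_img)
print(X_local.shape)                 # (64, 8): 8x8 grid of 8-bin descriptors
```

Each training or test slice thus yields a fixed grid of local descriptors, matching the per-slice local feature sets X1, X2, Y1, Y2 described above.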
Step 2: obtain the global dictionary U from the local features of the training slice images.
Take the local features X1 of the clutter-class training slices as the clutter-class training samples and the local features X2 of the target-class training slices as the target-class training samples, and learn the global dictionary U.
With reference to Fig. 2, this step is implemented as follows:
2a) Initialize the clutter-class dictionary U1, the target-class dictionary U2, the shared dictionary U0, the clutter-class training sample weights and the target-class training sample weights:
2a1) Randomly select 10000 local features from X1, initialize the clutter-class dictionary U1 of size d × n1 with the K-SVD algorithm and update U1 once with the Lagrange dual algorithm, where d is the dimension of the training slice local features and n1 is the number of clutter-class dictionary atoms;
2a2) Randomly select 10000 local features from X2, initialize the target-class dictionary U2 with the K-SVD algorithm and update U2 once with the Lagrange dual algorithm, where n2 is the number of target-class dictionary atoms;
2a3) Randomly select 10000 local features from X1 and X2, initialize the shared dictionary U0 with the K-SVD algorithm and update U0 once with the Lagrange dual algorithm, where n0 is the number of shared dictionary atoms;
2a4) Initialize the clutter-class training sample weights and the target-class training sample weights to 1;
2a5) Set the current iteration number iter = 0.
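The K-SVD and Lagrange dual updates used in 2a1) to 2a4) are not reproduced in the patent. The sketch below performs only the seeding part that K-SVD initialization typically starts from: sampling local features as atoms and normalizing columns to unit l2 norm, plus the all-ones weight initialization. All sizes and names are illustrative.

```python
import numpy as np

def init_dictionary(samples, n_atoms, rng):
    """Seed a dictionary by sampling local features as atoms and
    normalizing each column to unit l2 norm. (K-SVD and the
    Lagrange dual update would then refine these atoms.)"""
    idx = rng.choice(samples.shape[1], size=n_atoms, replace=False)
    U = samples[:, idx].astype(float).copy()
    U /= np.linalg.norm(U, axis=0, keepdims=True)
    return U

rng = np.random.default_rng(0)
d, n0, n1, n2 = 128, 300, 300, 300      # feature dim and atom counts
X1 = rng.random((d, 10000))             # clutter-class local features
X2 = rng.random((d, 10000))             # target-class local features

U1 = init_dictionary(X1, n1, rng)                      # clutter dictionary
U2 = init_dictionary(X2, n2, rng)                      # target dictionary
U0 = init_dictionary(np.hstack([X1, X2]), n0, rng)     # shared dictionary
w1 = np.ones(X1.shape[1])               # clutter sample weights, all 1
w2 = np.ones(X2.shape[1])               # target sample weights, all 1
print(U1.shape)                         # (128, 300)
```

The shared dictionary U0 deliberately draws from both classes, while U1 and U2 see only their own class, mirroring the category-specific / shared split.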
2b) With the clutter-class dictionary U1, the target-class dictionary U2 and the shared dictionary U0 at the current iteration, compute the sparse representation coefficients H1 of the clutter-class training slice local features and H2 of the target-class training slice local features, as follows:
2b1) Solve the sparse coding optimization problem with the feature-sign search algorithm to obtain the sparse representation coefficients of the local features of the i-th clutter-class training slice, where i = 1, ..., p1, λ is a weighting parameter, ‖·‖F denotes the F norm and ‖·‖1 denotes the l1 norm. After solving for the local features of all clutter-class training slices, the updated sparse representation coefficients H1 of the clutter-class training slice local features are obtained;
2b2) Solve the sparse coding optimization problem with the feature-sign search algorithm to obtain the sparse representation coefficients of the local features of the j-th target-class training slice, where j = 1, ..., p2. After solving for the local features of all target-class training slices, the updated sparse representation coefficients H2 of the target-class training slice local features are obtained.
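The feature-sign search algorithm referenced in 2b) is not reproduced in the patent; as an illustration of the same l1-regularized objective, the sketch below solves min_h ‖x − Uh‖² + λ‖h‖₁ with ISTA (iterative soft thresholding), a simpler solver that converges to the same lasso optimum. Dimensions and the toy signal are illustrative.

```python
import numpy as np

def sparse_code_ista(U, x, lam=0.1, n_iter=200):
    """Solve min_h ||x - U h||_2^2 + lam * ||h||_1 by ISTA.
    (The patent uses feature-sign search; ISTA reaches the same
    lasso optimum, just more slowly.)"""
    L = 2 * np.linalg.norm(U, 2) ** 2      # Lipschitz constant of the gradient
    h = np.zeros(U.shape[1])
    for _ in range(n_iter):
        grad = 2 * U.T @ (U @ h - x)       # gradient of the quadratic term
        z = h - grad / L
        h = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return h

rng = np.random.default_rng(1)
U = rng.standard_normal((32, 60))
U /= np.linalg.norm(U, axis=0)             # unit-norm atoms
h_true = np.zeros(60)
h_true[[3, 17, 42]] = [1.0, -0.5, 0.8]     # a 3-sparse ground truth
x = U @ h_true
h = sparse_code_ista(U, x, lam=0.05)
print(np.count_nonzero(np.abs(h) > 1e-3))  # only a few active atoms
```

Running this for every local feature of every training slice yields the coefficient matrices H1 and H2 of step 2b).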
2c) Using H1 and H2 obtained in 2b) and the alternating optimization method, update the clutter-class dictionary U1, the target-class dictionary U2 and the shared dictionary U0. The update steps are as follows:
2c1) Solve the following weighted optimization problem by the alternating optimization method to update the clutter-class dictionary U1, obtaining the updated clutter-class dictionary U1′:
s.t. ‖U1(:, b1)‖2 = 1, b1 = 1, ..., n1
where η11 and the related coefficients are weighting parameters, ‖·‖2 is the l2 norm, ‖·‖F is the F norm, the auxiliary selection matrices are built from the identity matrix of size n0, the zero matrix of size n1 × n0, the identity matrix of size n1 and the zero matrix of size n0 × n1, n1 is the number of atoms of the clutter-class dictionary U1, n2 the number of atoms of the target-class dictionary U2, n0 the number of atoms of the shared dictionary U0, n = n0 + n1 + n2, and W1 is the clutter-class training sample weight matrix; m1 = nL × p1 is the total number of local features of the clutter-class training slices and nL is the number of local features in one training slice.
2c2) Solve the following weighted optimization problem by the alternating optimization method to update the target-class dictionary U2, obtaining the updated target-class dictionary U2′:
s.t. ‖U2(:, b2)‖2 = 1, b2 = 1, ..., n2
where η21 and the related coefficients are weighting parameters, the auxiliary selection matrices are built from the identity matrix of size n0, the zero matrix of size n2 × n0, the identity matrix of size n2 and the zero matrix of size n0 × n2, n = n0 + n1 + n2, and W2 is the target-class training sample weight matrix; m2 = nL × p2 is the total number of local features of the target-class training slices and nL is the number of local features in one training slice.
2c3) Solve the following weighted optimization problem by the alternating optimization method to update the shared dictionary U0, obtaining the updated shared dictionary U0′:
s.t. ‖U0(:, b0)‖2 = 1, b0 = 1, ..., n0
where η01 and the related coefficients are weighting parameters, n1, n2 and n0 are the atom numbers of U1, U2 and U0 respectively, n = n0 + n1 + n2, the auxiliary selection matrices are built from the corresponding identity and zero matrices as in 2c1) and 2c2), W1 is the clutter-class training sample weight matrix with m1 = nL × p1 the total number of clutter-class training slice local features, and W2 is the target-class training sample weight matrix with m2 = nL × p2 the total number of target-class training slice local features.
After the above update steps are completed, the updated clutter-class dictionary U1′, target-class dictionary U2′ and shared dictionary U0′ are obtained.
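The closed forms of the updates in 2c1) to 2c3) appear only as formula images in the original patent and are not recoverable here. As a generic illustration of their shared pattern, minimizing a sample-weighted reconstruction error over one dictionary with unit-norm atoms while the codes stay fixed, the sketch below runs projected gradient descent; it is a stand-in, not the patent's Lagrange-dual update.

```python
import numpy as np

def update_dictionary(U, X, H, w, n_steps=50):
    """Projected gradient on the weighted objective
    sum_i w_i * ||x_i - U h_i||^2, s.t. unit-norm atoms.
    (Stand-in for the patent's alternating-optimization update.)"""
    HW = H * w                                   # weight each sample's code
    step = 1.0 / (2 * np.linalg.norm(HW @ H.T, 2))   # 1/Lipschitz step size
    for _ in range(n_steps):
        G = -2 * (X - U @ H) @ HW.T              # gradient w.r.t. U
        U = U - step * G
        U = U / np.linalg.norm(U, axis=0, keepdims=True)  # project atoms
    return U

rng = np.random.default_rng(2)
d, n, m = 16, 8, 200
U_true = rng.standard_normal((d, n))
U_true /= np.linalg.norm(U_true, axis=0)
H = rng.standard_normal((n, m)) * (rng.random((n, m)) < 0.3)  # sparse codes
X = U_true @ H + 0.01 * rng.standard_normal((d, m))
w = np.ones(m)                                   # sample weights (all 1 here)

U_init = rng.standard_normal((d, n))
U_init /= np.linalg.norm(U_init, axis=0)
before_err = np.linalg.norm(X - U_init @ H)
U_new = update_dictionary(U_init, X, H, w)
after_err = np.linalg.norm(X - U_new @ H)
print(after_err < before_err)   # reconstruction error shrinks from random init
```

The weight vector w is where the sample-weighting of the invention enters: poorly described samples receive weights above 1 and pull the atoms toward themselves.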
2d) Let iter = iter + 1 and record the current iteration number, then judge whether to update the sample weights: if mod(iter, iterSkip) equals 0, execute step 2e) to update the training sample weights; otherwise do not update them, set U1 = U1′, U2 = U2′, U0 = U0′ and return to step 2b). Here iterSkip is the interval between training sample weight updates and mod denotes the remainder operation;
2e) Using U1′, U2′ and U0′ obtained in 2c), update the clutter-class training sample weights and the target-class training sample weights, as follows:
2e1) Using U1′, U2′ and U0′ obtained in 2c), update the clutter-class training sample weights, where the weight w1i′ of the i-th clutter-class training sample is obtained by solving the corresponding formula. Here i = 1, ..., p1, α is a scale factor larger than 1 and wm is the maximum value of the allowed weight range. The sparse representation coefficients of the local features X1i of the i-th clutter-class training slice are obtained by solving the sparse coding optimization problem with the feature-sign search algorithm and consist of the parts corresponding to U0′, U1′ and U2′; the formula depends on the average energy of the clutter slice local features reconstructed with the target-class dictionary U2′;
2e2) Using U1′, U2′ and U0′ obtained in 2c), update the target-class training sample weights, where the weight w2j′ of the j-th target-class training sample is obtained by solving the corresponding formula. Here j = 1, ..., p2. The sparse representation coefficients of the local features X2j of the j-th target-class training slice are obtained by solving the sparse coding optimization problem with the feature-sign search algorithm and consist of the parts corresponding to U0′, U1′ and U2′; the formula depends on the average energy of the target slice local features reconstructed with the clutter-class dictionary U1′.
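The exact weight-update formula of 2e) survives only as a formula image in the original patent. What the text does state is that a sample's weight grows with how well the opposite class's dictionary reconstructs it (a poorly discriminated sample gets more attention), is scaled by a factor α > 1, and is capped at wm. The rule below is a hypothetical sketch consistent with those stated properties only, not the patent's closed form.

```python
import numpy as np

def update_weights(E_cross, alpha=50.0, w_max=50.0):
    """Hypothetical weight rule matching the patent's description:
    E_cross[i] is the average energy of sample i reconstructed by
    the OPPOSITE class's dictionary; larger cross-class energy means
    the sample is poorly discriminated, so its weight grows with
    E_cross, scaled by alpha and clipped at w_max.
    (The patent's exact closed form is not recoverable here.)"""
    w = 1.0 + alpha * E_cross / (E_cross.max() + 1e-12)
    return np.minimum(w, w_max)

E_cross = np.array([0.01, 0.20, 0.90, 2.50])   # toy cross-class energies
w = update_weights(E_cross)
print(w)   # monotone in E_cross, capped at 50
```

Whatever the precise form, the effect in 2c) is the same: confusable samples carry larger weights in W1 and W2 on the next dictionary update.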
2f) Judge whether the current iteration number iter is less than the maximum iteration number iterMax: if so, set U1 = U1′, U2 = U2′, U0 = U0′, keep the updated sample weights, and return to step 2b); otherwise stop iterating and obtain the final global dictionary U = [U0′, U1′, U2′].
Step 3: solve the local-feature coding coefficients of the training slices and the test slices.
This step is implemented as follows:
3a) Using the global dictionary U obtained in Step 2, perform standard sparse coding on the local features X of the training slices obtained in Step 1, obtaining the local-feature coding coefficients V of the training slices;
3b) Using the global dictionary U obtained in Step 2, perform standard sparse coding on the local features Y of the test slices obtained in Step 1, obtaining the local-feature coding coefficients W of the test slices.
Step 4: perform feature merging and dimensionality reduction separately on the local-feature coding coefficients V of the training slices obtained in Step 3 and the local-feature coding coefficients W of the test slices.
4a) Using the spatial pyramid matching model, divide each training slice image into three levels of subregions A1, A2, A3 of sizes 1 × 1, 2 × 2 and 4 × 4;
4b) Apply max pooling to the local-feature coding coefficients V of the training slice over the subregions A1, A2, A3 and concatenate the pooled results to form the overall feature V′ of the training slice; perform l2-norm normalization on V′ to obtain the normalized training slice overall feature V″, where h is the dimension of the global feature after merging;
4c) Apply principal component analysis to V″ to obtain the global features V‴ of the training slices after dimensionality reduction, where h′ is the dimension of the global features after reduction;
4d) Using the spatial pyramid matching model, divide each test slice image into three levels of subregions B1, B2, B3 of sizes 1 × 1, 2 × 2 and 4 × 4;
4e) Apply max pooling to the local-feature coding coefficients W of the test slice over the subregions B1, B2, B3 and concatenate the pooled results to form the overall feature W′ of the test slice; perform l2-norm normalization on W′ to obtain the normalized test slice overall feature W″;
4f) Apply principal component analysis to W″ to obtain the global features W‴ of the test slices after dimensionality reduction, where h′ is the dimension of the global features after reduction.
Step 5: train a two-class linear SVM classifier with the global features V‴ of the training slices, and classify the global features W‴ of the test slices with the trained classifier, obtaining the classification decision value decision of each test slice. Compare decision with the set threshold Thr = 0: if decision ≥ Thr, the slice is regarded as a target-class slice; otherwise it is a clutter-class slice.
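The decision rule of Step 5 can be sketched with a tiny subgradient-trained linear SVM. The patent uses LIBSVM; this toy trainer only illustrates the rule decision = w·x + b compared against Thr = 0, and the data are synthetic.

```python
import numpy as np

def train_linear_svm(X, y, C=10.0, lr=0.01, epochs=200):
    """Tiny subgradient solver for the hinge-loss linear SVM
    (stand-in for LIBSVM). X: (n, d), y in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                                # margin violators
        gw = w - C * (y[mask, None] * X[mask]).sum(axis=0) / n
        gb = -C * y[mask].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

rng = np.random.default_rng(4)
tgt = rng.standard_normal((50, 5)) + 2.0   # target-class global features
clt = rng.standard_normal((50, 5)) - 2.0   # clutter-class global features
X = np.vstack([tgt, clt])
y = np.r_[np.ones(50), -np.ones(50)]
w, b = train_linear_svm(X, y)
decision = X @ w + b                       # classification decision values
pred_target = decision >= 0.0              # Thr = 0: target if above threshold
acc = (pred_target == (y > 0)).mean()
print(acc)                                 # well-separated toy classes
```

Sweeping Thr away from 0 trades detection rate against false alarm rate, which is exactly how the Pd/Pf columns of the experiment tables are produced.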
The effect of the present invention can be further illustrated by the following experimental data:
Experiment 1:
1.1) Experimental scene:
The slice images used in this experiment come from the miniSAR data set published by the U.S. Sandia laboratories; the data were downloaded from the Sandia laboratories website. Examples of the slice images are shown in Fig. 3: Fig. 3(a) shows target-class training slice examples, Fig. 3(b) shows clutter-class slice examples and Fig. 3(c) shows test slice examples.
1.2) Four groups of traditional features selected for the experiment:
The first group: the combination of the optimal threshold feature, the image pixel mass mean feature, the image pixel spatial coherence feature, the corner feature and the acceleration feature;
The second group: the combination of the optimal threshold feature, the image pixel mass mean feature, the image pixel spatial coherence feature, the corner feature, the acceleration feature, the average signal-to-noise ratio feature, the peak signal-to-noise ratio feature and the brightest pixel percentage feature;
The third group: the combination of the standard deviation feature, the fractal dimension feature and the arrangement energy ratio feature;
The fourth group: the combination of the standard deviation feature, the fractal dimension feature, the arrangement energy ratio feature, the optimal threshold feature, the image pixel mass mean feature, the image pixel spatial coherence feature, the corner feature, the acceleration feature, the average signal-to-noise ratio feature, the peak signal-to-noise ratio feature and the brightest pixel percentage feature.
1.3) Experiment parameters:
Training clutter slice number p1 = 1442, training target slice number p2 = 2091, test clutter slice number k1 = 599, test target slice number k2 = 140, weighting parameter λ = 0.1, scale factor α = 50, weighting parameters η01 = η11 = η21 = η02 = η12 = η22 = 0.05, dictionary learning iteration number iterMax = 15, sample weight update interval iterSkip = 5, dictionary atom numbers n0 = n1 = n2 = 300, weight limit wm = 50; the SVM classifier in the experiment uses the LIBSVM toolkit, with SVM penalty coefficient C = 10;
1.4) Experiment content:
A contrast experiment on SAR targets under complex scenes was carried out between the inventive method and the existing SAR target discrimination method based on the first group of traditional features (Verbout);
a contrast experiment was carried out between the inventive method and the existing SAR target discrimination method based on the second group of traditional features (Verbout+Gao);
a contrast experiment was carried out between the inventive method and the existing SAR target discrimination method based on the third group of traditional features (Lincoln);
a contrast experiment was carried out between the inventive method and the existing SAR target discrimination method based on the fourth group of traditional features (Lincoln+Verbout+Gao);
and a contrast experiment was carried out between the inventive method and the existing SAR target discrimination method based on CSDL.
The discrimination results of experiment 1 are shown in Table 1.
Table 1. Discrimination results of the different methods
Method | AUC | Pc (Thr=0) | Pd (Thr=0) | Pf (Thr=0) | Pd (Thr at Pd=0.9) | Pf (Thr at Pd=0.9)
---|---|---|---|---|---|---
Verbout | 0.8739 | 87.0095% | 0.6143 | 0.0701 | 0.9000 | 0.4040
Verbout+Gao | 0.8813 | 86.1976% | 0.6071 | 0.0785 | 0.9000 | 0.3539
Lincoln | 0.9398 | 90.6631% | 0.9571 | 0.1052 | 0.9000 | 0.0801
Lincoln+Verbout+Gao | 0.9408 | 90.3924% | 0.9143 | 0.0985 | 0.9000 | 0.0851
CSDL | 0.9580 | 92.0162% | 0.7500 | 0.0401 | 0.9000 | 0.1185
Present invention | 0.9694 | 93.3694% | 0.7429 | 0.0217 | 0.9000 | 0.0801
In Table 1, AUC denotes the area under the ROC curve, Pc the overall accuracy, Pd the detection rate, Pf the false-alarm rate, and Thr the threshold of the SVM classifier.
As seen from Table 1, the AUC and overall accuracy Pc of the present invention are the highest, and at the same detection rate of 0.9 its false-alarm rate is the lowest, showing that the discrimination performance of the present invention in complex scenes is better than that of the existing methods.
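For concreteness, the columns of the table can be reproduced from the SVM decision values and ground-truth labels as sketched below (function and variable names are illustrative, not taken from the patent):

```python
import numpy as np

def detection_metrics(scores, labels, thr=0.0):
    """Compute Pd (detection rate), Pf (false-alarm rate) and Pc
    (overall accuracy) for SVM decision values at threshold thr.
    labels: 1 = target slice, 0 = clutter slice."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pred = (scores >= thr).astype(int)
    pd = np.mean(pred[labels == 1])   # fraction of targets declared targets
    pf = np.mean(pred[labels == 0])   # fraction of clutter declared targets
    pc = np.mean(pred == labels)      # overall accuracy
    return pd, pf, pc

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # fraction of (target, clutter) pairs ranked correctly; ties count half
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

Sweeping `thr` over the decision values traces the full ROC curve; the "Thr at Pd=0.9" columns pick the threshold whose Pd is closest to 0.9.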
Experiment 2:
2.1) Experiment scene:
The slice images used in this experiment come from the miniSAR data set released by the U.S. Sandia National Laboratories and were downloaded from the Sandia website. Example slice images are shown in Fig. 4: Fig. 4(a) shows target-class training slice images, Fig. 4(b) clutter-class slice images, and Fig. 4(c) test slice images.
2.2) The experiment uses the same four groups of traditional features as experiment 1.
2.3) Experiment parameters:
Number of training clutter slices p1=1531, number of training target slices p2=2080, number of test clutter slices k1=510, number of test target slices k2=79, weighting parameter λ=0.1, scale factor α=50, weighting parameters η01=η11=η21=η02=η12=η22=0.05, number of dictionary-learning iterations iterMax=15, sample-weight update interval iterSkip=5, numbers of dictionary atoms n0=n1=n2=300, weight upper limit wm=50; the SVM classifier in the experiment uses the LIBSVM toolkit with penalty coefficient C=10;
2.4) Experiment content:
The same as experiment 1.
The discrimination results of experiment 2 are shown in Table 2.
Table 2. Discrimination results of the different methods
Method | AUC | Pc (Thr=0) | Pd (Thr=0) | Pf (Thr=0) | Pd (Thr at Pd=0.9) | Pf (Thr at Pd=0.9)
---|---|---|---|---|---|---
Verbout | 0.8671 | 75.7216% | 0.8734 | 0.2608 | 0.8987 | 0.2980
Verbout+Gao | 0.8225 | 65.0255% | 0.8354 | 0.3784 | 0.8987 | 0.5333
Lincoln | 0.8359 | 67.0628% | 0.8861 | 0.3627 | 0.8987 | 0.3784
Lincoln+Verbout+Gao | 0.7131 | 64.5121% | 0.7342 | 0.3686 | 0.8987 | 0.5294
CSDL | 0.8757 | 86.2479% | 0.5190 | 0.0843 | 0.8987 | 0.2784
Present invention | 0.8923 | 86.2479% | 0.5063 | 0.0824 | 0.8987 | 0.2490
As seen from Table 2, the AUC and overall accuracy Pc of the present invention are the highest, and at the same detection rate of 0.9 its false-alarm rate is the lowest, showing that the discrimination performance of the present invention in complex scenes is better than that of the existing methods.
Experiment 3:
3.1) Experiment scene:
The slice images used in this experiment come from the miniSAR data set released by the U.S. Sandia National Laboratories and were downloaded from the Sandia website. Example slice images are shown in Fig. 5: Fig. 5(a) shows target-class training slice images, Fig. 5(b) clutter-class slice images, and Fig. 5(c) test slice images.
3.2) The experiment uses the same four groups of traditional features as experiment 1.
3.3) Experiment parameters:
Number of training clutter slices p1=1414, number of training target slices p2=1567, number of test clutter slices k1=627, number of test target slices k2=159, weighting parameter λ=0.1, scale factor α=50, weighting parameters η01=η11=η21=η02=η12=η22=0.05, number of dictionary-learning iterations iterMax=15, sample-weight update interval iterSkip=5, numbers of dictionary atoms n0=n1=n2=300, weight upper limit wm=50; the SVM classifier in the experiment uses the LIBSVM toolkit with penalty coefficient C=10;
3.4) Experiment content:
The same as experiment 1.
The discrimination results of experiment 3 are shown in Table 3.
Table 3. Discrimination results of the different methods
Method | AUC | Pc (Thr=0) | Pd (Thr=0) | Pf (Thr=0) | Pd (Thr at Pd=0.9) | Pf (Thr at Pd=0.9)
---|---|---|---|---|---|---
Verbout | 0.5688 | 42.4936% | 0.8428 | 0.6810 | 0.8994 | 0.7927
Verbout+Gao | 0.5662 | 42.4936% | 0.8428 | 0.6810 | 0.8994 | 0.7927
Lincoln | 0.5663 | 44.5293% | 0.9623 | 0.6858 | 0.8994 | 0.6284
Lincoln+Verbout+Gao | 0.5751 | 43.1298% | 0.9560 | 0.7018 | 0.8994 | 0.6268
CSDL | 0.8529 | 75.5729% | 0.7987 | 0.2552 | 0.8994 | 0.3907
Present invention | 0.8555 | 77.4809% | 0.7799 | 0.2265 | 0.8994 | 0.3652
As seen from Table 3, the AUC and overall accuracy Pc of the present invention are the highest, and at the same detection rate of 0.9 its false-alarm rate is the lowest, showing that the discrimination performance of the present invention in complex scenes is better than that of the existing methods.
Experiment 4:
4.1) Experiment scene:
The slice images used in this experiment come from the miniSAR data set released by the U.S. Sandia National Laboratories and were downloaded from the Sandia website. Example slice images are shown in Fig. 6: Fig. 6(a) shows target-class training slice images, Fig. 6(b) clutter-class slice images, and Fig. 6(c) test slice images.
4.2) The experiment uses the same four groups of traditional features as experiment 1.
4.3) Experiment parameters:
Number of clutter-class training slices p1=1736, number of target-class training slices p2=2044, number of clutter-class test slices k1=305, number of target-class test slices k2=115, weighting parameter λ=0.1, scale factor α=50, weighting parameters η01=η11=η21=η02=η12=η22=0.05, number of dictionary-learning iterations iterMax=15, sample-weight update interval iterSkip=5, numbers of dictionary atoms n0=n1=n2=300, weight upper limit wm=50; the SVM classifier in the experiment uses the LIBSVM toolkit with penalty coefficient C=10;
4.4) Experiment content:
The same as experiment 1.
The discrimination results of experiment 4 are shown in Table 4.
Table 4. Discrimination results of the different methods
Method | AUC | Pc (Thr=0) | Pd (Thr=0) | Pf (Thr=0) | Pd (Thr at Pd=0.9) | Pf (Thr at Pd=0.9)
---|---|---|---|---|---|---
Verbout | 0.7508 | 77.3810% | 0.5043 | 0.1246 | 0.8957 | 0.5443
Verbout+Gao | 0.7382 | 76.6667% | 0.4957 | 0.1311 | 0.8957 | 0.5836
Lincoln | 0.8922 | 86.6667% | 0.9913 | 0.1803 | 0.8957 | 0.1541
Lincoln+Verbout+Gao | 0.8933 | 84.5238% | 0.8957 | 0.1738 | 0.8957 | 0.1738
CSDL | 0.9456 | 88.8095% | 0.8174 | 0.0852 | 0.8957 | 0.1213
Present invention | 0.9508 | 88.8095% | 0.8087 | 0.0820 | 0.8957 | 0.1148
As seen from Table 4, the AUC and overall accuracy Pc of the present invention are the highest, and at the same detection rate of 0.9 its false-alarm rate is the lowest, showing that the discrimination performance of the present invention in complex scenes is better than that of the existing methods.
In summary, the present invention, an SAR target discrimination method based on sample-weighted class-specific and shared dictionaries, solves the problem of SAR target discrimination in complex scenes; it effectively exploits the rich detail information of high-resolution SAR images and improves the discrimination performance for SAR targets in complex scenes.
Claims (9)
1. An SAR target discrimination method based on sample-weighted class-specific and shared dictionaries, comprising:
(1) extracting local features from the given training slice images and test slice images with the SAR-SIFT descriptor, obtaining the local features X of the training slice images and the local features Y of the test slice images, where X1 denotes the local features of the clutter-class training slice images, X2 the local features of the target-class training slice images, Y1 the local features of the clutter-class test slice images, Y2 the local features of the target-class test slice images, p1 the number of clutter-class training slice images, p2 the number of target-class training slice images, k1 the number of clutter-class test slice images, and k2 the number of target-class test slice images;
(2) taking the clutter-class training slice local features X1 in the X obtained in (1) as clutter-class training samples and the target-class training slice local features X2 as target-class training samples, and learning the global dictionary U:
2a) initializing the clutter-class dictionary U1, the target-class dictionary U2, the shared dictionary U0, the clutter-class training sample weights and the target-class training sample weights, and setting the current iteration number iter=0;
2b) computing, from the clutter-class dictionary U1, target-class dictionary U2 and shared dictionary U0 at the current iteration, the sparse representation coefficients H1 of the clutter-class training slice local features X1 and the sparse representation coefficients H2 of the target-class training slice local features X2;
2c) updating, from the H1 and H2 obtained in 2b) with an alternating optimization method, the clutter-class dictionary U1, target-class dictionary U2 and shared dictionary U0, obtaining the updated clutter-class dictionary U1′, target-class dictionary U2′ and shared dictionary U0′;
2d) setting iter=iter+1, recording the current iteration number, and deciding whether to update the sample weights: if mod(iter, iterSkip) equals 0, executing step 2e) to update the training sample weights; otherwise, not updating the training sample weights, setting U1=U1′, U2=U2′, U0=U0′ and returning to step 2b), where iterSkip denotes the training-sample-weight update interval and mod denotes the remainder operation;
2e) updating the clutter-class training sample weights with the U1′, U2′ and U0′ obtained in 2c) to obtain the updated clutter-class training sample weights, and updating the target-class training sample weights with the U1′, U2′ and U0′ obtained in 2c) to obtain the updated target-class training sample weights;
2f) judging whether the current iteration number iter is less than the maximum iteration number iterMax: if less, setting U1=U1′, U2=U2′, U0=U0′ together with the updated sample weights and returning to step 2b); if equal, stopping the iteration and obtaining the final global dictionary U=[U0′, U1′, U2′];
(3) with the global dictionary U obtained in (2), performing standardized sparse coding on the local features X of the training slice images and the local features Y of the test slice images obtained in (1), obtaining the local-feature code coefficients V of the training slice images and the local-feature code coefficients W of the test slice images;
(4) performing feature merging and dimensionality reduction on the local-feature code coefficients V of the training slice images and the local-feature code coefficients W of the test slice images obtained in (3), obtaining the global features of the training slice images and the global features of the test slice images;
(5) training a two-class linear SVM classifier with the global features of the training slice images, classifying the global features of the test slice images with the trained classifier to obtain a classification decision value decision for each test slice image, and comparing decision with the set threshold Thr=0: if decision ≥ Thr, the slice is declared a target-class slice; otherwise it is a clutter-class slice.
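The decision step in (5) reduces to thresholding a linear function of the global features. A minimal sketch follows; w and b stand in for the weight vector and bias of the trained LIBSVM model, which are not given in the patent:

```python
import numpy as np

def classify_slices(w, b, test_features, thr=0.0):
    """Apply a trained two-class linear SVM (weights w, bias b) to the
    global features of the test slices (one row per slice) and threshold
    the decision value: decision >= thr -> target-class, else clutter."""
    decision = test_features @ w + b                     # one decision value per slice
    labels = np.where(decision >= thr, "target", "clutter")
    return labels, decision
```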
2. The method according to claim 1, wherein the initialization in step 2a) of the clutter-class dictionary U1, target-class dictionary U2, shared dictionary U0, clutter-class training sample weights and target-class training sample weights is carried out as follows:
2a1) randomly selecting 10000 local features from X1, initializing the clutter-class dictionary U1 with the K-SVD algorithm, and updating U1 once with the Lagrange dual algorithm, where d denotes the dimension of the training slice local features and n1 the number of clutter-class dictionary atoms;
2a2) randomly selecting 10000 local features from X2, initializing the target-class dictionary U2 with the K-SVD algorithm, and updating U2 once with the Lagrange dual algorithm, where n2 denotes the number of target-class dictionary atoms;
2a3) randomly selecting 10000 local features from X1 and X2, initializing the shared dictionary U0 with the K-SVD algorithm, and updating U0 once with the Lagrange dual algorithm, where n0 denotes the number of shared dictionary atoms;
2a4) initializing the clutter-class training sample weights and the target-class training sample weights to 1.
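As an illustration of steps 2a1)-2a3), the sketch below initializes a dictionary by sampling local features and normalizing the atoms; it is a simplified stand-in for the K-SVD initialization and Lagrange-dual update named in the claim, which are not reproduced here:

```python
import numpy as np

def init_dictionary(features, n_atoms, n_sample=10000, seed=0):
    """Initialize a d x n_atoms dictionary from randomly sampled local
    features (features: d x N, one local feature per column), with each
    atom scaled to unit l2 norm. A simplified stand-in for K-SVD init."""
    rng = np.random.default_rng(seed)
    n_sample = min(n_sample, features.shape[1])
    sample = features[:, rng.choice(features.shape[1], size=n_sample, replace=False)]
    atoms = sample[:, rng.choice(n_sample, size=n_atoms, replace=False)]
    return atoms / np.linalg.norm(atoms, axis=0, keepdims=True)  # unit-norm atoms
```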
3. The method according to claim 1, wherein the computation in step 2b) of the sparse representation coefficients H1 of the clutter-class training slice local features X1 and the sparse representation coefficients H2 of the target-class training slice local features X2 is carried out as follows:
2b1) solving the following optimization problem with the feature-sign search algorithm to obtain the sparse representation coefficients of the local features of the i-th clutter-class training slice image, where i=1,…,p1, λ denotes a weighting parameter, ||·||F denotes the F norm and ||·||1 denotes the l1 norm; after the sparse representation coefficients of the local features of all clutter-class training slice images have been solved, the updated sparse representation coefficients H1 of the clutter-class training slice local features are obtained;
2b2) solving the following optimization problem with the feature-sign search algorithm to obtain the sparse representation coefficients of the local features of the j-th target-class training slice image, where j=1,…,p2; after the sparse representation coefficients of the local features of all target-class training slice images have been solved, the updated sparse representation coefficients H2 of the target-class training slice local features are obtained.
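The per-feature problem in claim 3 is standard l1-regularized least squares. The sketch below solves it with ISTA instead of the feature-sign search algorithm named in the claim (an illustrative substitution; U, x and λ follow the claim's notation):

```python
import numpy as np

def sparse_code_ista(U, x, lam=0.1, n_iter=200):
    """Solve min_h ||x - U h||_2^2 + lam * ||h||_1 with ISTA, an
    iterative-shrinkage stand-in for the feature-sign search solver
    (U: dictionary with unit-norm atoms, x: one local feature)."""
    L = np.linalg.norm(U, 2) ** 2          # largest squared singular value of U
    h = np.zeros(U.shape[1])
    for _ in range(n_iter):
        z = h - U.T @ (U @ h - x) / L      # gradient step (grad = 2 U^T (U h - x), step 1/(2L))
        h = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0.0)  # soft-thresholding
    return h
```

With an orthonormal dictionary the solution is just element-wise soft-thresholding of x by λ/2, which is a convenient sanity check.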
4. The method according to claim 1, wherein the update of the clutter-class dictionary U1 in step 2c) is carried out as follows:
2c1) solving the following optimization problem with an alternating optimization method to update the clutter-class dictionary U1 and obtain the updated clutter-class dictionary U1′:
s.t. ||U1(:,b1)||2 = 1, b1 = 1, …, n1
where η11 and the associated coefficients are weighting parameters, ||·||2 is the l2 norm, ||·||F is the F norm, … is an identity matrix of size n0, … a zero matrix of size n1×n0, … an identity matrix of size n1, … a zero matrix of size n0×n1, n1 is the number of atoms of the clutter-class dictionary U1, n2 the number of atoms of the target-class dictionary U2, n0 the number of atoms of the shared dictionary U0, n = n0+n1+n2, W1 is the clutter-class training sample weight matrix, m1 = nL×p1 is the total number of local features of the clutter-class training slice images, and nL denotes the number of local features in one training slice image.
5. The method according to claim 1, wherein the update of the target-class dictionary U2 in step 2c) is carried out as follows:
2c2) solving the following optimization problem with an alternating optimization method to update the target-class dictionary U2 and obtain the updated target-class dictionary U2′:
s.t. ||U2(:,b2)||2 = 1, b2 = 1, …, n2
where η21 and the associated coefficients are weighting parameters, ||·||2 is the l2 norm, ||·||F is the F norm, … is an identity matrix of size n0, … a zero matrix of size n2×n0, … an identity matrix of size n2, … a zero matrix of size n0×n2, n1 is the number of atoms of the clutter-class dictionary U1, n2 the number of atoms of the target-class dictionary U2, n0 the number of atoms of the shared dictionary U0, n = n0+n1+n2, W2 is the target-class training sample weight matrix, m2 = nL×p2 is the total number of local features of the target-class training slice images, and nL denotes the number of local features in one training slice image.
6. The method according to claim 1, wherein the update of the shared dictionary U0 in step 2c) is carried out as follows:
2c3) solving the following optimization problem with an alternating optimization method to update the shared dictionary U0 and obtain the updated shared dictionary U0′:
s.t. ||U0(:,b0)||2 = 1, b0 = 1, …, n0
where η01 and the associated coefficients are weighting parameters, ||·||2 is the l2 norm, ||·||F is the F norm, n1 is the number of atoms of the clutter-class dictionary U1, n2 the number of atoms of the target-class dictionary U2, n0 the number of atoms of the shared dictionary U0, n = n0+n1+n2; … is an identity matrix of size n0, … a zero matrix of size n1×n0, … an identity matrix of size n1, … a zero matrix of size n0×n1, W1 is the clutter-class training sample weight matrix, m1 = nL×p1 is the total number of local features of the clutter-class training slice images, and nL denotes the number of local features in one training slice image; … is an identity matrix of size n0, … a zero matrix of size n2×n0, … an identity matrix of size n2, … a zero matrix of size n0×n2, W2 is the target-class training sample weight matrix, and m2 = nL×p2 is the total number of local features of the target-class training slice images.
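The dictionary updates in claims 4-6 all minimize a sample-weighted reconstruction error under unit-norm atom constraints. The sketch below illustrates just that core with a projected gradient step; the cross-dictionary coupling terms of the full objectives are omitted, and the function, step size and iteration count are illustrative choices, not the patent's solver:

```python
import numpy as np

def weighted_dictionary_step(X, H, w, U, step=1e-3, n_iter=100):
    """Projected-gradient sketch of a sample-weighted dictionary update:
    descend on sum_i w_i * ||x_i - U h_i||_2^2 (X: d x m features,
    H: n x m coefficients, w: m per-sample weights) and renormalize
    every atom to unit l2 norm, enforcing ||U(:,b)||_2 = 1."""
    for _ in range(n_iter):
        R = (X - U @ H) * w                              # per-sample weighted residual
        U = U + step * R @ H.T                           # gradient step on the weighted error
        U = U / np.linalg.norm(U, axis=0, keepdims=True) # project atoms to the unit sphere
    return U
```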
7. The method according to claim 1, wherein the update of the clutter-class training sample weights and the target-class training sample weights in step 2e) is carried out as follows:
2e1) updating the clutter-class training sample weights with the U1′, U2′ and U0′ obtained in 2c) to obtain the updated clutter-class training sample weights, where the weight w1i′ of the i-th clutter-class training sample is solved by the following formula, in which i=1,…,p1, α is a scale factor larger than 1, wm is the maximum of the allowed weight range, the sparse representation coefficients of the local features X1i of the i-th clutter-class training slice image are obtained by solving the corresponding optimization problem with the feature-sign search algorithm and are partitioned into the parts corresponding to U0′, U1′ and U2′, and … denotes the average energy of the clutter slice local features reconstructed with the target-class dictionary U2′;
2e2) updating the target-class training sample weights with the U1′, U2′ and U0′ obtained in 2c) to obtain the updated target-class training sample weights, where the weight w2j′ of the j-th target-class training sample is solved by the following formula, in which j=1,…,p2, the sparse representation coefficients of the local features X2j of the j-th target-class training slice image are obtained by solving the corresponding optimization problem with the feature-sign search algorithm and are partitioned into the parts corresponding to U0′, U1′ and U2′, and … denotes the average energy of the target slice local features reconstructed with the clutter-class dictionary U1′.
8. The method according to claim 1, wherein the feature merging and dimensionality reduction of the local-feature code coefficients V of the training slice images in step (4) are carried out as follows:
4a) dividing each training slice image with a spatial pyramid matching model into the three sets of subregions A1, A2, A3 of sizes 1×1, 2×2 and 4×4;
4b) merging and concatenating, by max pooling over the subregions A1, A2, A3, the local-feature code coefficients V of the corresponding training slice images to form the overall features V′ of the training slice images, and performing l2-norm normalization on V′ to obtain the normalized overall features V″ of the training slice images, where h denotes the dimension of the global features after merging;
4c) applying principal component analysis to V″ for dimensionality reduction, obtaining the global features of the training slice images after dimensionality reduction, where h′ is the dimension of the global features after dimensionality reduction.
9. The method according to claim 1, wherein the feature merging and dimensionality reduction of the local-feature code coefficients W of the test slice images in step (4) are carried out as follows:
4d) dividing each test slice image with a spatial pyramid matching model into the three sets of subregions B1, B2, B3 of sizes 1×1, 2×2 and 4×4;
4e) merging and concatenating, by max pooling over the subregions B1, B2, B3, the local-feature code coefficients W of the corresponding test slice images to form the overall features W′ of the test slice images, and performing l2-norm normalization on W′ to obtain the normalized overall features W″ of the test slice images;
4f) applying principal component analysis to W″ for dimensionality reduction, obtaining the global features of the test slice images after dimensionality reduction, where h′ is the dimension of the global features after dimensionality reduction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611136982.2A CN106599831B (en) | 2016-12-12 | 2016-12-12 | Based on the specific SAR target discrimination method with shared dictionary of sample weighting classification |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106599831A true CN106599831A (en) | 2017-04-26 |
CN106599831B CN106599831B (en) | 2019-01-29 |
Family
ID=58598338
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611136982.2A Active CN106599831B (en) | 2016-12-12 | 2016-12-12 | Based on the specific SAR target discrimination method with shared dictionary of sample weighting classification |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106599831B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110222781A1 (en) * | 2010-03-15 | 2011-09-15 | U.S. Government As Represented By The Secretary Of The Army | Method and system for image registration and change detection |
CN102651073A (en) * | 2012-04-07 | 2012-08-29 | 西安电子科技大学 | Sparse dynamic ensemble selection-based SAR (synthetic aperture radar) image terrain classification method |
CN103714353A (en) * | 2014-01-09 | 2014-04-09 | 西安电子科技大学 | Polarization SAR image classification method based on vision prior model |
US20140347213A1 (en) * | 2012-03-09 | 2014-11-27 | U.S. Army Research Laboratory Attn: Rdrl-Loc-I | Method and System for Estimation and Extraction of Interference Noise from Signals |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107122753A (en) * | 2017-05-08 | 2017-09-01 | 西安电子科技大学 | SAR target discrimination methods based on integrated study |
CN107122753B (en) * | 2017-05-08 | 2020-04-07 | 西安电子科技大学 | SAR target identification method based on ensemble learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106874889B (en) | Multiple features fusion SAR target discrimination method based on convolutional neural networks | |
CN108510467B (en) | SAR image target identification method based on depth deformable convolution neural network | |
CN104036239B (en) | Fast high-resolution SAR (synthetic aperture radar) image ship detection method based on feature fusion and clustering | |
CN105518709B (en) | The method, system and computer program product of face for identification | |
CN105404886B (en) | Characteristic model generation method and characteristic model generating means | |
CN109284704A (en) | Complex background SAR vehicle target detection method based on CNN | |
CN106096506B (en) | Based on the SAR target identification method for differentiating doubledictionary between subclass class | |
CN109902590A (en) | Pedestrian's recognition methods again of depth multiple view characteristic distance study | |
CN109766835A (en) | The SAR target identification method of confrontation network is generated based on multi-parameters optimization | |
CN106251332B (en) | SAR image airport target detection method based on edge feature | |
CN109583305A (en) | A kind of advanced method that the vehicle based on critical component identification and fine grit classification identifies again | |
CN108564094A (en) | A kind of Material Identification method based on convolutional neural networks and classifiers combination | |
CN109284786A (en) | The SAR image terrain classification method of confrontation network is generated based on distribution and structure matching | |
CN105138970A (en) | Spatial information-based polarization SAR image classification method | |
CN108647695A (en) | Soft image conspicuousness detection method based on covariance convolutional neural networks | |
CN104182763A (en) | Plant type identification system based on flower characteristics | |
CN105913090B (en) | SAR image objective classification method based on SDAE-SVM | |
CN110533606A (en) | Safety check X-ray contraband image data Enhancement Method based on production confrontation network | |
CN107895139A (en) | A kind of SAR image target recognition method based on multi-feature fusion | |
CN110263712A (en) | A kind of coarse-fine pedestrian detection method based on region candidate | |
CN105223561B (en) | Radar ground target discriminator design method based on spatial distribution | |
CN102945374A (en) | Method for automatically detecting civil aircraft in high-resolution remote sensing image | |
CN106022241A (en) | Face recognition method based on wavelet transformation and sparse representation | |
CN107341505A (en) | A kind of scene classification method based on saliency Yu Object Bank | |
CN106326938A (en) | SAR image target discrimination method based on weakly supervised learning |
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |