CN106599831B - SAR target discrimination method based on sample-weighted class-specific and shared dictionaries - Google Patents


Info

Publication number
CN106599831B
CN106599831B (application CN201611136982.2A)
Authority
CN
China
Prior art keywords
sectioning image
clutter
local feature
target
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611136982.2A
Other languages
Chinese (zh)
Other versions
CN106599831A (en)
Inventor
王英华
吕翠文
刘宏伟
周生华
纠博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Xian Cetc Xidian University Radar Technology Collaborative Innovation Research Institute Co Ltd
Original Assignee
Xidian University
Xian Cetc Xidian University Radar Technology Collaborative Innovation Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University and Xian Cetc Xidian University Radar Technology Collaborative Innovation Research Institute Co Ltd
Priority to CN201611136982.2A
Publication of CN106599831A
Application granted
Publication of CN106599831B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/13: Satellite images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]


Abstract

The invention discloses a SAR target discrimination method based on sample-weighted class-specific and shared dictionaries, which mainly addresses the poor performance of prior-art SAR target discrimination under complex scenes. The scheme is: 1. extract local features from the given training slices and test slices; 2. learn a global dictionary from the local features of the training slices; 3. use the global dictionary to perform standardized sparse coding on the local features of the training slices and test slices, obtaining local feature coding coefficients; 4. perform feature pooling and dimensionality reduction on the coding coefficients of the training slices and test slices, obtaining the global features of each; 5. discriminate the test-slice global features with a support vector machine. The invention improves discrimination performance and can be used to discriminate SAR targets in complex scenes.

Description

SAR target discrimination method based on sample-weighted class-specific and shared dictionaries
Technical field
The invention belongs to the field of radar target discrimination techniques and relates to a SAR target discrimination method that can provide important information for vehicle target recognition and classification.
Background technique
Synthetic aperture radar (SAR) uses microwave remote sensing and is unaffected by weather or time of day; it offers all-weather, day-and-night operation and features multiple bands, multiple polarizations, variable viewing angles, and penetration capability. With the appearance of more and more airborne and spaceborne SAR systems, large amounts of SAR data under different scenes have become available, and one important application of such data is automatic target recognition (ATR). Target discrimination under complex scenes has thus become a current research direction.
Feature extraction is an important step in the target discrimination process. Over the past few decades there has been extensive research on SAR target discrimination features, which fall broadly into four kinds. The first kind is textural features, such as the standard deviation, fractal dimension, and arrangement energy-ratio features proposed by Lincoln Laboratory. The second kind relates to the shape of the target, such as the mass, diameter, and normalized rotational inertia features proposed by ERIM (Environmental Research Institute of Michigan), and the maximum and minimum projected length and the horizontal and vertical projection features used in other literature. The third kind depends on the contrast between target and background, such as the peak CFAR, mean CFAR, and percent-bright CFAR features proposed by ERIM, and the average signal-to-noise ratio, peak signal-to-noise ratio, and brightest-pixel-percentage features proposed by Gao. In addition, Lincoln Laboratory proposed several features that describe how the high-brightness pixels of an image, under different thresholds, are spread in space; these features depend not only on the difference between target and background but also on the size of the target. The fourth kind is polarization features, such as the percent-pure, percent-pure-even, and percent-strong-even features, but these polarization features can only be extracted from fully polarimetric SAR data.
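For concreteness, two of the traditional features named above can be sketched in a few lines. The exact normalizations used by Lincoln Laboratory and Gao are not given here, so the definitions below (a dB-domain standard deviation and the energy share of the brightest pixels) are illustrative assumptions rather than the patented forms:

```python
import numpy as np

def standard_deviation_feature(slice_img):
    # Standard deviation of the dB-scaled pixel intensities
    # (a common convention; the exact Lincoln Laboratory form may differ).
    db = 10.0 * np.log10(slice_img + 1e-12)
    return float(np.std(db))

def brightest_pixel_percentage(slice_img, percent=0.02):
    # Fraction of total image power carried by the brightest 2% of pixels:
    # compact targets concentrate energy, homogeneous clutter spreads it.
    flat = np.sort(slice_img.ravel())[::-1]
    k = max(1, int(percent * flat.size))
    return float(flat[:k].sum() / (flat.sum() + 1e-12))

rng = np.random.default_rng(0)
clutter = rng.exponential(1.0, (64, 64))   # homogeneous clutter-like slice
target = clutter.copy()
target[28:36, 28:36] += 50.0               # compact high-intensity region
# The target slice concentrates more energy in its brightest pixels.
print(brightest_pixel_percentage(target) > brightest_pixel_percentage(clutter))
```

Such scalar features summarize a whole slice in one number, which is exactly the coarseness criticized below.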
The shortcomings of the above traditional features lie mainly in two aspects. First, these features provide only a coarse, partial description of the target; they cannot describe the detailed local shape and structural information of targets and clutter, which means discrimination cannot fully exploit the rich detail of high-resolution images. When targets and clutter show no obvious difference in texture, shape, and contrast, these features do not discriminate well. Second, existing features are suited to discriminating targets from natural clutter under simple scenes. Most current SAR target discrimination methods are verified on the MSTAR data set, with 0.3 m resolution. The scene of this standard data set is fairly simple: the target slices share similar characteristics, each slice contains only one target located at the center of the slice image, and the target is a compact high-intensity region surrounded by lower-intensity, homogeneous background clutter. The clutter slices also show similar attributes, with most high-intensity regions corresponding to tree crowns. These target and clutter slices differ greatly in texture, shape, and contrast, so traditional discrimination features suit this data set and show relatively good discrimination performance. Real scenes, however, are more complex. In the miniSAR data set, for example, the positions and orientations of targets differ across slices, and a single slice image may contain multiple targets or only part of a target. As for clutter slices, the clutter types are diverse, including natural clutter such as trees and much man-made clutter such as building edges. Existing texture, shape, and contrast features are therefore insufficient to discriminate targets from clutter in this case.
In conclusion, as SAR image resolution keeps improving, traditional features have significant limitations for discriminating targets under complex scenes.
Summary of the invention
It is an object of the invention to address the deficiencies of existing SAR target discrimination methods by proposing a SAR target discrimination method based on sample-weighted class-specific and shared dictionaries, so as to improve target discrimination performance under complex scenes.
The technical scheme of the present invention is realized as follows:
(1) Use the SAR-SIFT descriptor to extract local features from the given training slice images and test slice images, obtaining the local features X = {X1, X2} of the training slice images and Y = {Y1, Y2} of the test slice images, where X1 are the local features of the p1 clutter-class training slice images, X2 are the local features of the p2 target-class training slice images, Y1 are the local features of the k1 clutter-class test slice images, and Y2 are the local features of the k2 target-class test slice images;
(2) Take the clutter-class training slice local features X1 from X obtained in (1) as clutter-class training samples and the target-class training slice local features X2 as target-class training samples, and learn the global dictionary U:
2a) Initialize the clutter-class dictionary U1, the target-class dictionary U2, the shared dictionary U0, the clutter-class training sample weights w1, and the target-class training sample weights w2, and set the current iteration number iter = 0;
2b) Using the clutter-class dictionary U1, target-class dictionary U2, and shared dictionary U0 at the current iteration, compute the sparse representation coefficients H1 of the clutter-class training slice local features X1 and the sparse representation coefficients H2 of the target-class training slice local features X2;
2c) Using H1 and H2 obtained in 2b), update U1, U2, and U0 by alternating optimization, obtaining the updated dictionaries U1′, U2′, and U0′;
2d) Let iter = iter + 1 and record the current iteration number; if mod(iter, iterSkip) equals 0, execute step 2e) to update the training sample weights; otherwise skip the weight update, set U1 = U1′, U2 = U2′, U0 = U0′, and return to step 2b). Here iterSkip is the training-sample weight-update interval and mod denotes the remainder operation;
2e) Use U1′, U2′, and U0′ obtained in 2c) to update the clutter-class training sample weights w1, obtaining the updated weights w1′, and to update the target-class training sample weights w2, obtaining the updated weights w2′;
2f) If the current iteration number iter is less than the maximum number of iterations iterMax, set U1 = U1′, U2 = U2′, U0 = U0′, w1 = w1′, w2 = w2′ and return to step 2b); if iter equals iterMax, stop iterating and obtain the final global dictionary U = [U0′, U1′, U2′];
(3) Use the global dictionary U obtained in (2) to perform standardized sparse coding on the local features X of the training slice images and the local features Y of the test slice images obtained in (1), obtaining the local feature coding coefficients V of the training slice images and W of the test slice images;
(4) Perform feature pooling and dimensionality reduction on the training-slice coding coefficients V and the test-slice coding coefficients W obtained in (3), obtaining the global features V″′ of the training slice images and W″′ of the test slice images;
(5) Train a two-class linear SVM classifier with the training-slice global features V″′ and use the trained classifier to classify the test-slice global features W″′, obtaining a classification decision value decision for each test slice image; compare decision with the set threshold Thr = 0, and if decision ≥ Thr the slice is judged a target-class slice, otherwise a clutter-class slice.
Compared with the prior art, the present invention has the following advantages:
1. The present invention is a SAR image vehicle-target discrimination method for complex scenes. Compared with discrimination methods based on traditional features, it considers the local structural information of target and clutter slices under complex scenes and the distribution of that information, thereby fully exploiting the detailed information of high-resolution images and improving SAR target discrimination performance under complex scenes.
2. Because the present invention adds learning on poorly described samples while generating the global dictionary, the discrimination between the obtained target-class and clutter-class global features is larger than that of the existing SAR target discrimination method based on class-specific and shared dictionary learning (CSDL), further improving SAR target discrimination performance under complex scenes.
Brief description of the drawings
Fig. 1 is the implementation flowchart of the invention;
Fig. 2 is the global dictionary generation sub-flowchart of the invention;
Fig. 3 shows some of the miniSAR slice images used in Experiment 1 of the invention;
Fig. 4 shows some of the miniSAR slice images used in Experiment 2 of the invention;
Fig. 5 shows some of the miniSAR slice images used in Experiment 3 of the invention;
Fig. 6 shows some of the miniSAR slice images used in Experiment 4 of the invention.
Specific embodiment
The embodiments and effects of the present invention are described in further detail below with reference to the accompanying drawings.
The method of the present invention mainly concerns vehicle target discrimination under complex scenes. Existing target discrimination features are mostly verified on the MSTAR data set, whose scenes are relatively simple: the target slices share similar characteristics, each slice contains only one target located at the center of the slice image, the target region is a compact high-intensity region surrounded by lower-intensity, homogeneous clutter background, and the clutter slices likewise show similar attributes, with most high-intensity regions corresponding to tree crowns. These target and clutter slices differ greatly in texture, shape, and contrast. As radar resolution improves, the scenes described by SAR images become more complex: a target slice may contain not only a single target but also multiple targets or partial targets, and the target is not necessarily at the center of the slice; clutter slices contain not only natural clutter but also a large amount of man-made clutter of varying shapes. In view of these problems, the present invention combines sample weighting with class-specific and shared dictionary learning to discriminate SAR targets and improve discrimination performance under complex scenes.
Referring to Fig. 1, the implementation steps of the invention are as follows:
Step 1. Extract local features from the given training slice images and test slice images.
1a) Use the SAR-SIFT descriptor to extract local features from the given training slice images, obtaining the local features X = {X1, X2} of the training slice images, where X1 = {X1^i, i = 1, …, p1} are the local features of the p1 clutter-class training slice images and X2 = {X2^j, j = 1, …, p2} are the local features of the p2 target-class training slice images;
1b) Use the SAR-SIFT descriptor to extract local features from the given test slice images, obtaining the local features Y = {Y1, Y2} of the test slice images, where Y1 are the local features of the k1 clutter-class test slice images and Y2 are the local features of the k2 target-class test slice images.
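The SAR-SIFT descriptor itself (gradient-by-ratio keypoints and orientation histograms) is not reproduced here; the sketch below only illustrates the data layout that the later steps assume: each slice image yields a set of d-dimensional local features, one column per location. Crude patch histograms on a dense grid serve as a hypothetical stand-in for SAR-SIFT:

```python
import numpy as np

def dense_local_features(img, patch=8, stride=8, bins=16):
    # Hypothetical stand-in for SAR-SIFT: one intensity histogram per patch
    # on a dense grid. Returns a (d, nL) matrix, one column per local feature.
    h, w = img.shape
    feats = []
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            p = img[r:r + patch, c:c + patch]
            hist, _ = np.histogram(p, bins=bins, range=(0.0, 1.0))
            hist = hist.astype(float)
            feats.append(hist / (np.linalg.norm(hist) + 1e-12))  # unit l2 norm
    return np.array(feats).T

rng = np.random.default_rng(1)
img = rng.random((64, 64))
X_i = dense_local_features(img)   # local features of one slice image
print(X_i.shape)                  # (d, nL) = (16, 64): an 8x8 grid of patches
```

Every slice thus contributes nL local features of dimension d, matching the nL used in step 2c) below.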
Step 2. Learn the global dictionary U from the local features X of the training slice images.
Take the local features X1 of the clutter-class training slice images as clutter-class training samples and the local features X2 of the target-class training slice images as target-class training samples, and learn the global dictionary U.
Referring to Fig. 2, this step is implemented as follows:
2a) Initialize the clutter-class dictionary U1, the target-class dictionary U2, the shared dictionary U0, the clutter-class training sample weights w1, and the target-class training sample weights w2:
2a1) Randomly select 10000 local features from X1, initialize the clutter-class dictionary U1 of size d×n1 with the K-SVD algorithm, and update U1 once with the Lagrange dual algorithm, where d is the dimension of the training slice local features and n1 is the number of clutter-class dictionary atoms;
2a2) Randomly select 10000 local features from X2, initialize the target-class dictionary U2 of size d×n2 with the K-SVD algorithm, and update U2 once with the Lagrange dual algorithm, where n2 is the number of target-class dictionary atoms;
2a3) Randomly select 10000 local features from X1 and X2 together, initialize the shared dictionary U0 of size d×n0 with the K-SVD algorithm, and update U0 once with the Lagrange dual algorithm, where n0 is the number of shared dictionary atoms;
2a4) Initialize the clutter-class training sample weights w1 and the target-class training sample weights w2 to 1;
2a5) Set the current iteration number iter = 0.
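A full K-SVD initialization followed by a Lagrange-dual update is beyond a short sketch; the fragment below shows only a common simplified seeding: each dictionary is initialized with randomly selected, l2-normalized local features, which already satisfies the unit-column constraint ||U(:, b)||2 = 1 required later. The K-SVD and Lagrange-dual refinements of the patent are omitted.

```python
import numpy as np

def init_dictionary(X, n_atoms, rng):
    # Seed atoms with randomly chosen local features, then normalize each
    # column to unit l2 norm (a simplified substitute for a K-SVD init).
    idx = rng.choice(X.shape[1], size=n_atoms, replace=False)
    U = X[:, idx].astype(float)
    return U / (np.linalg.norm(U, axis=0, keepdims=True) + 1e-12)

rng = np.random.default_rng(2)
d, m = 16, 1000
X1 = rng.random((d, m))                               # clutter-class features
X2 = rng.random((d, m))                               # target-class features
U1 = init_dictionary(X1, 30, rng)                     # clutter dictionary
U2 = init_dictionary(X2, 30, rng)                     # target dictionary
U0 = init_dictionary(np.hstack([X1, X2]), 30, rng)    # shared dictionary
w1 = np.ones(m)                                       # all weights start at 1
w2 = np.ones(m)
print(U0.shape, float(np.linalg.norm(U1[:, 0])))
```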
2b) Using the clutter-class dictionary U1, target-class dictionary U2, and shared dictionary U0 at the current iteration, compute the sparse representation coefficients H1 of the clutter-class training slice local features and H2 of the target-class training slice local features, as follows:
2b1) Solve the following optimization problem by the feature-sign search algorithm to obtain the sparse representation coefficients H1^i of the local features X1^i of the i-th clutter-class training slice image:
H1^i = argmin_{H1^i} ||X1^i − [U0, U1] H1^i||_F^2 + λ ||H1^i||_1
where i = 1, …, p1, λ is a weighting parameter, ||·||_F is the F norm, and ||·||_1 is the l1 norm.
After solving for the sparse representation coefficients of the local features of all clutter-class training slice images, the updated sparse representation coefficients H1 of the clutter-class training slice images are obtained;
2b2) Solve the following optimization problem by the feature-sign search algorithm to obtain the sparse representation coefficients H2^j of the local features X2^j of the j-th target-class training slice image:
H2^j = argmin_{H2^j} ||X2^j − [U0, U2] H2^j||_F^2 + λ ||H2^j||_1
where j = 1, …, p2.
After solving for the sparse representation coefficients of the local features of all target-class training slice images, the updated sparse representation coefficients H2 of the target-class training slice images are obtained.
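The feature-sign search algorithm is not reproduced here; as a stand-in, the same lasso problem min ||x − Uh||_F^2 + λ||h||_1 can be solved for a single local feature by plain iterative soft-thresholding (ISTA):

```python
import numpy as np

def sparse_code(x, U, lam=0.05, n_iter=300):
    # ISTA for min_h ||x - U h||_2^2 + lam * ||h||_1, a simple substitute
    # for the feature-sign search algorithm used in the patent.
    L = 2.0 * np.linalg.norm(U, 2) ** 2 + 1e-12   # Lipschitz const of gradient
    h = np.zeros(U.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * U.T @ (U @ h - x)
        z = h - grad / L
        h = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return h

rng = np.random.default_rng(3)
U = rng.standard_normal((16, 40))
U /= np.linalg.norm(U, axis=0)           # unit-norm atoms
x = 1.5 * U[:, 3] + 0.8 * U[:, 17]       # feature built from two atoms
h = sparse_code(x, U)
print(float(np.linalg.norm(U @ h - x)))  # small reconstruction residual
```

The dominant recovered coefficient sits on atom 3, the atom that generated most of x.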
2c) Using H1 and H2 obtained in 2b), update the clutter-class dictionary U1, the target-class dictionary U2, and the shared dictionary U0 by alternating optimization, as follows:
2c1) Update the clutter-class dictionary by solving, via alternating optimization, an optimization problem that minimizes the weighted reconstruction error of the clutter-class training features together with penalty terms weighted by η11 and η12, obtaining the updated clutter-class dictionary U1′, subject to
s.t. ||U1(:, b1)||_2 = 1, b1 = 1, …, n1
where η11 and η12 are weighting parameters, ||·||_2 is the l2 norm and ||·||_F is the F norm; I_{n0} is the identity matrix of size n0, 0_{n1×n0} is the zero matrix of size n1×n0, I_{n1} is the identity matrix of size n1, and 0_{n0×n1} is the zero matrix of size n0×n1; n1, n2, and n0 are the numbers of atoms of U1, U2, and U0, respectively, and n = n0 + n1 + n2; W1 is the clutter-class training sample weight matrix built from the weights w1;
m1 = nL × p1 is the total number of local features of the clutter-class training slice images, and nL is the number of local features in one training slice image.
2c2) Update the target-class dictionary by solving, via alternating optimization, the analogous optimization problem with penalty terms weighted by η21 and η22, obtaining the updated target-class dictionary U2′, subject to
s.t. ||U2(:, b2)||_2 = 1, b2 = 1, …, n2
where the identity and zero matrices of sizes n0, n2×n0, n2, and n0×n2 are defined as in 2c1); W2 is the target-class training sample weight matrix built from the weights w2;
m2 = nL × p2 is the total number of local features of the target-class training slice images.
2c3) Update the shared dictionary by solving, via alternating optimization, the analogous optimization problem with penalty terms weighted by η01 and η02, obtaining the updated shared dictionary U0′, subject to
s.t. ||U0(:, b0)||_2 = 1, b0 = 1, …, n0
where the norms, the identity and zero matrices, W1, W2, m1, and m2 are all defined as in 2c1) and 2c2).
After completing the above update steps, the updated clutter-class dictionary U1′, target-class dictionary U2′, and shared dictionary U0′ are obtained.
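The full 2c) objectives add incoherence penalties (the η-weighted terms) between the class-specific and shared dictionaries; omitting those penalties for brevity, the core weighted least-squares dictionary update under the unit-column constraint can be sketched as:

```python
import numpy as np

def update_dictionary(X, H, w, ridge=1e-6):
    # One pass of min_U sum_k w_k ||x_k - U h_k||^2 with unit-norm columns:
    # weighted least squares U = X W H^T (H W H^T + rI)^(-1), then column
    # renormalization. The eta-weighted incoherence penalties are omitted.
    Hw = H * w                                 # scale sample columns by weight
    A = Hw @ H.T + ridge * np.eye(H.shape[0])
    U_new = (X @ Hw.T) @ np.linalg.inv(A)
    return U_new / (np.linalg.norm(U_new, axis=0, keepdims=True) + 1e-12)

rng = np.random.default_rng(4)
d, n, m = 16, 20, 300
U_true = rng.standard_normal((d, n))
U_true /= np.linalg.norm(U_true, axis=0)
H = np.where(rng.random((n, m)) < 0.1, rng.standard_normal((n, m)), 0.0)
X = U_true @ H                                 # features with exact codes
w = np.ones(m)                                 # uniform sample weights
U_new = update_dictionary(X, H, w)
# With exact codes the update recovers the unit-norm generating atoms.
print(np.allclose(np.abs(np.sum(U_new * U_true, axis=0)), 1.0, atol=1e-3))
```

In the full method the weights w come from W1 or W2, so poorly described samples pull the atoms harder.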
2d) Let iter = iter + 1 and record the current iteration number; if mod(iter, iterSkip) equals 0, execute step 2e) to update the training sample weights; otherwise skip the weight update, set U1 = U1′, U2 = U2′, U0 = U0′, and return to step 2b). Here iterSkip is the training-sample weight-update interval and mod denotes the remainder operation;
2e) Use U1′, U2′, and U0′ obtained in 2c) to update the clutter-class training sample weights w1 and the target-class training sample weights w2, as follows:
2e1) Use U1′, U2′, and U0′ to update the clutter-class training sample weights w1, obtaining the updated clutter-class weights w1′, where the i-th clutter-class training sample weight w1i′ is obtained from the following equations. In the formulas, i = 1, …, p1; α is a scale factor greater than 1; wm is the maximum value of the allowed weight range; β^i is the sparse representation coefficient of the local features X1^i of the i-th clutter-class training slice image, obtained with the feature-sign search algorithm by solving min_β ||X1^i − [U0′, U1′, U2′] β||_F^2 + λ||β||_1; β0^i, β1^i, and β2^i are the blocks of β^i corresponding to U0′, U1′, and U2′, respectively; and E2^i is the average energy of the clutter slice local features X1^i reconstructed with the target-class dictionary U2′;
2e2) Use U1′, U2′, and U0′ to update the target-class training sample weights w2, obtaining the updated target-class weights w2′, where the j-th target-class training sample weight w2j′ is obtained from the following equations. In the formulas, j = 1, …, p2; γ^j is the sparse representation coefficient of the local features X2^j of the j-th target-class training slice image, obtained with the feature-sign search algorithm by solving min_γ ||X2^j − [U0′, U1′, U2′] γ||_F^2 + λ||γ||_1; γ0^j, γ1^j, and γ2^j are the blocks of γ^j corresponding to U0′, U1′, and U2′, respectively; and E1^j is the average energy of the target slice local features X2^j reconstructed with the clutter-class dictionary U1′;
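The weight-update rule of 2e1) and 2e2) is illustrated only mechanically below: a sample whose sparse code places too much energy on the opposite-class dictionary is treated as poorly described, and its weight is scaled up by α and capped at wm. The specific trigger (a fixed cross-reconstruction-energy threshold) is an assumption for illustration, not the patent's exact equations:

```python
import numpy as np

def update_weights(w, cross_energy, alpha=50.0, w_max=50.0, thresh=0.5):
    # Hypothetical rule: if the average energy a sample's sparse code places
    # on the opposite-class dictionary exceeds `thresh`, the sample is deemed
    # poorly described; its weight is scaled by alpha and capped at w_max.
    w_new = w.copy()
    bad = cross_energy > thresh
    w_new[bad] = np.minimum(alpha * w_new[bad], w_max)
    return w_new

w1 = np.ones(5)                            # clutter sample weights
E2 = np.array([0.1, 0.7, 0.2, 0.9, 0.3])   # energy on the target dictionary
w1_new = update_weights(w1, E2)
print(w1_new)                              # samples 1 and 3 are up-weighted
```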
2f) If the current iteration number iter is less than the maximum number of iterations iterMax, set U1 = U1′, U2 = U2′, U0 = U0′, w1 = w1′, w2 = w2′ and return to step 2b); if iter equals iterMax, stop iterating and obtain the final global dictionary U = [U0′, U1′, U2′].
Step 3. Solve the local feature coding coefficients of the training slice images and the test slice images.
This step is implemented as follows:
3a) Use the global dictionary U obtained in step 2 to perform standardized sparse coding on the local features X of the training slice images obtained in step 1, obtaining the local feature coding coefficients V of the training slice images;
3b) Use the global dictionary U obtained in step 2 to perform standardized sparse coding on the local features Y of the test slice images obtained in step 1, obtaining the local feature coding coefficients W of the test slice images.
Step 4. Perform feature pooling and dimensionality reduction on the training-slice local feature coding coefficients V and the test-slice local feature coding coefficients W obtained in step 3.
4a) Use the spatial pyramid matching model to divide each training slice image into subregions A1, A2, A3 of sizes 1×1, 2×2, and 4×4;
4b) Use max pooling to merge the training-slice local feature coding coefficients V over the subregions A1, A2, A3 and concatenate the results, forming the overall feature V′ of the training slice image; apply l2-norm normalization to V′ to obtain the normalized overall feature V″ of the training slice image, where h denotes the dimension of the global feature after pooling;
4c) Apply principal component analysis to V″ for dimensionality reduction, obtaining the training-slice global features V″′, where h′ is the dimension of the global features after reduction;
4d) Use the spatial pyramid matching model to divide each test slice image into subregions B1, B2, B3 of sizes 1×1, 2×2, and 4×4;
4e) Use max pooling to merge the test-slice local feature coding coefficients W over the subregions B1, B2, B3 and concatenate the results, forming the overall feature W′ of the test slice image; apply l2-norm normalization to W′ to obtain the normalized overall feature W″ of the test slice image;
4f) Apply principal component analysis to W″ for dimensionality reduction, obtaining the test-slice global features W″′ of dimension h′.
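Steps 4a) to 4f) can be sketched end to end: max pooling over the 1×1, 2×2, and 4×4 spatial pyramid, concatenation, l2 normalization, and PCA (implemented here by a plain SVD; the grid of coding-coefficient locations is assumed square):

```python
import numpy as np

def spatial_pyramid_max_pool(codes):
    # codes: (n_atoms, gh, gw) coding coefficients on the spatial grid of
    # local-feature locations. Max-pool over every cell of the 1x1, 2x2,
    # and 4x4 grids, concatenate -> vector of length n_atoms * 21.
    n, gh, gw = codes.shape
    out = []
    for cells in (1, 2, 4):
        for r in range(cells):
            for c in range(cells):
                sub = codes[:, r*gh//cells:(r+1)*gh//cells,
                              c*gw//cells:(c+1)*gw//cells]
                out.append(sub.max(axis=(1, 2)))
    v = np.concatenate(out)
    return v / (np.linalg.norm(v) + 1e-12)        # l2 normalization

def pca_fit_transform(V, h_prime):
    # Plain PCA by SVD of the centered feature matrix (rows = slices).
    mean = V.mean(axis=0)
    _, _, Vt = np.linalg.svd(V - mean, full_matrices=False)
    P = Vt[:h_prime].T
    return (V - mean) @ P, mean, P

rng = np.random.default_rng(5)
n_atoms, grid = 30, 8
slices = [spatial_pyramid_max_pool(rng.random((n_atoms, grid, grid)))
          for _ in range(20)]
V2 = np.array(slices)                  # normalized pooled features, 20 slices
V3, mean, P = pca_fit_transform(V2, h_prime=5)
print(V2.shape, V3.shape)              # h = 30*21 = 630, reduced to h' = 5
```

At test time the same mean and projection P learned on the training slices would be applied to W″.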
Step 5. Train a two-class linear SVM classifier with the training-slice global features V″′; use the trained classifier to classify the test-slice global features W″′, obtaining the classification decision value decision of each test slice image; compare decision with the set threshold Thr = 0, and if decision ≥ Thr the slice is judged a target-class slice, otherwise a clutter-class slice.
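The patent trains the classifier with LIBSVM; as a self-contained stand-in, a two-class linear SVM can be trained by Pegasos-style subgradient descent on the hinge loss, after which test slices are labeled by comparing the decision value with Thr = 0:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    # Pegasos-style subgradient descent on the l2-regularized hinge loss;
    # a lightweight substitute for the LIBSVM training used in the patent.
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b, t = 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * (X[i] @ w + b)
            w *= (1.0 - eta * lam)           # shrink from the l2 regularizer
            if margin < 1:                   # hinge-loss subgradient step
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

rng = np.random.default_rng(6)
# Toy global features: target class (+1) shifted away from clutter (-1).
Xtr = np.vstack([rng.normal(+1.0, 0.4, (40, 5)), rng.normal(-1.0, 0.4, (40, 5))])
ytr = np.array([1] * 40 + [-1] * 40)
w, b = train_linear_svm(Xtr, ytr)
decision = Xtr @ w + b
labels = np.where(decision >= 0.0, "target", "clutter")   # threshold Thr = 0
print((labels[:40] == "target").mean(), (labels[40:] == "clutter").mean())
```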
The effect of the invention can be further illustrated by the following experimental data:
Experiment 1:
1.1) Experimental scene:
The slice images used in this experiment come from the miniSAR data set released by the Sandia laboratory in the USA and downloaded from the Sandia laboratory website. Some example slice images are shown in Fig. 3: Fig. 3(a) shows target-class training slice examples, Fig. 3(b) shows clutter-class slice examples, and Fig. 3(c) shows test slice examples.
1.2) Four groups of traditional features selected for the experiment:
The first group is the combination of the optimal-threshold feature, the mean image-pixel-mass feature, the image-pixel spatial-cohesion feature, the corner feature, and the acceleration feature;
The second group is the combination of the optimal-threshold feature, the mean image-pixel-mass feature, the image-pixel spatial-cohesion feature, the corner feature, the acceleration feature, the mean signal-to-noise ratio feature, the peak signal-to-noise ratio feature, and the brightest-pixel-percentage feature;
The third group is the combination of the standard deviation feature, the fractal dimension feature, and the arrangement energy-ratio feature;
The fourth group is the combination of the standard deviation feature, the fractal dimension feature, the arrangement energy-ratio feature, the optimal-threshold feature, the mean image-pixel-mass feature, the image-pixel spatial-cohesion feature, the corner feature, the acceleration feature, the mean signal-to-noise ratio feature, the peak signal-to-noise ratio feature, and the brightest-pixel-percentage feature.
1.3) Experimental parameters:
Number of training clutter slices p1 = 1442, number of training target slices p2 = 2091, number of test clutter slices k1 = 599, number of test target slices k2 = 140; weighting parameter λ = 0.1; scale factor α = 50; weighting parameters η01 = η11 = η21 = η02 = η12 = η22 = 0.05; dictionary-learning iteration count iterMax = 15; sample-weight update interval iterSkip = 5; dictionary atom counts n0 = n1 = n2 = 300; weight upper limit wm = 50; the SVM classifier uses the LIBSVM toolkit with penalty coefficient C = 10.
1.4) Experiment contents:
The existing SAR target discrimination method Verbout, based on the first group of traditional features, is compared with the method of the present invention on SAR targets under complex scenes;
The existing SAR target discrimination method Verbout+Gao, based on the second group of traditional features, is compared with the method of the present invention on SAR targets under complex scenes;
The existing SAR target discrimination method Lincoln, based on the third group of traditional features, is compared with the method of the present invention on SAR targets under complex scenes;
The existing SAR target discrimination method Lincoln+Verbout+Gao, based on the fourth group of traditional features, is compared with the method of the present invention on SAR targets under complex scenes;
The existing SAR target discrimination method CSDL is compared with the method of the present invention on SAR targets under complex scenes.
The discrimination results of Experiment 1 are shown in Table 1:

Table 1. Discrimination results of different methods

| Method | AUC | Pc (Thr=0) | Pd (Thr=0) | Pf (Thr=0) | Pd (Thr at Pd≈0.9) | Pf (Thr at Pd≈0.9) |
|---|---|---|---|---|---|---|
| Verbout | 0.8739 | 87.0095% | 0.6143 | 0.0701 | 0.9000 | 0.4040 |
| Verbout+Gao | 0.8813 | 86.1976% | 0.6071 | 0.0785 | 0.9000 | 0.3539 |
| Lincoln | 0.9398 | 90.6631% | 0.9571 | 0.1052 | 0.9000 | 0.0801 |
| Lincoln+Verbout+Gao | 0.9408 | 90.3924% | 0.9143 | 0.0985 | 0.9000 | 0.0851 |
| CSDL | 0.9580 | 92.0162% | 0.7500 | 0.0401 | 0.9000 | 0.1185 |
| Present invention | 0.9694 | 93.3694% | 0.7429 | 0.0217 | 0.9000 | 0.0801 |
In Table 1, AUC denotes the area under the ROC curve, Pc the overall accuracy, Pd the detection rate, Pf the false-alarm rate, and Thr the threshold of the SVM classifier.
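For reference, the table metrics can be computed from SVM decision values as in the sketch below. This is an illustrative NumPy implementation, not part of the patented method; the function name and the trapezoid-rule AUC are our own choices.

```python
import numpy as np

def discrimination_metrics(decision, labels, thr=0.0):
    """Compute AUC, Pc, Pd and Pf from SVM decision values.
    decision: higher value => more target-like;
    labels:   1 for target-class slices, 0 for clutter-class slices."""
    decision = np.asarray(decision, dtype=float)
    labels = np.asarray(labels, dtype=int)

    # ROC curve: sweep the threshold from high to low.
    order = np.argsort(-decision)
    y = labels[order]
    tpr = np.concatenate(([0.0], np.cumsum(y) / y.sum()))            # Pd per threshold
    fpr = np.concatenate(([0.0], np.cumsum(1 - y) / (1 - y).sum()))  # Pf per threshold
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)            # trapezoid rule

    # Fixed-threshold operating point (Thr = 0 in the tables).
    pred = decision >= thr
    pc = np.mean(pred == labels.astype(bool))   # overall accuracy Pc
    pd = np.mean(pred[labels == 1])             # detection rate Pd
    pf = np.mean(pred[labels == 0])             # false-alarm rate Pf
    return auc, pc, pd, pf
```

The "Pd/Pf at Thr s.t. Pd≈0.9" columns are obtained the same way, after first picking the threshold along the ROC sweep whose Pd is closest to 0.9.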
As can be seen from Table 1, the present invention achieves the highest AUC and the highest overall accuracy Pc, and at the same detection rate of 0.9 its false-alarm rate is the lowest, showing that the discrimination performance of the present invention under complex scenes is better than that of the existing methods.
Experiment 2:
2.1) experiment scene:
The sectioning images used in this experiment come from the miniSAR data set released by the U.S. Sandia laboratory; the data were downloaded from the Sandia laboratory website. Some example sectioning images are shown in Fig. 4: Fig. 4(a) is a target-class training sectioning image example, Fig. 4(b) is a clutter-class sectioning image example, and Fig. 4(c) is a test sectioning image example.
2.2) The experiment uses the same four groups of traditional features as Experiment 1.
2.3) Experiment parameters:
Training clutter slice number p1 = 1531, training target slice number p2 = 2080, test clutter slice number k1 = 510, test target slice number k2 = 79, weighting parameter λ = 0.1, scale factor α = 50, weighting parameters η01 = η11 = η21 = η02 = η12 = η22 = 0.05, dictionary-learning iteration number iterMax = 15, sample-weight update interval iterSkip = 5, dictionary atom numbers n0 = n1 = n2 = 300, weight limit wm = 50; the SVM classifier uses the LIBSVM toolkit, with SVM penalty coefficient C = 10;
2.4) Experiment contents: identical to Experiment 1.
The discrimination results of Experiment 2 are shown in Table 2:

Table 2. Discrimination results of different methods

| Method | AUC | Pc (Thr=0) | Pd (Thr=0) | Pf (Thr=0) | Pd (Thr at Pd≈0.9) | Pf (Thr at Pd≈0.9) |
|---|---|---|---|---|---|---|
| Verbout | 0.8671 | 75.7216% | 0.8734 | 0.2608 | 0.8987 | 0.2980 |
| Verbout+Gao | 0.8225 | 65.0255% | 0.8354 | 0.3784 | 0.8987 | 0.5333 |
| Lincoln | 0.8359 | 67.0628% | 0.8861 | 0.3627 | 0.8987 | 0.3784 |
| Lincoln+Verbout+Gao | 0.7131 | 64.5121% | 0.7342 | 0.3686 | 0.8987 | 0.5294 |
| CSDL | 0.8757 | 86.2479% | 0.5190 | 0.0843 | 0.8987 | 0.2784 |
| Present invention | 0.8923 | 86.2479% | 0.5063 | 0.0824 | 0.8987 | 0.2490 |
As seen from Table 2, the present invention achieves the highest AUC and ties CSDL for the highest overall accuracy Pc, and at the same detection rate of about 0.9 its false-alarm rate is the lowest, showing that the discrimination performance of the present invention under complex scenes is better than that of the existing methods.
Experiment 3:
3.1) experiment scene:
The sectioning images used in this experiment come from the miniSAR data set released by the U.S. Sandia laboratory; the data were downloaded from the Sandia laboratory website. Some example sectioning images are shown in Fig. 5: Fig. 5(a) is a target-class training sectioning image example, Fig. 5(b) is a clutter-class sectioning image example, and Fig. 5(c) is a test sectioning image example.
3.2) The experiment uses the same four groups of traditional features as Experiment 1.
3.3) Experiment parameters:
Training clutter slice number p1 = 1414, training target slice number p2 = 1567, test clutter slice number k1 = 627, test target slice number k2 = 159, weighting parameter λ = 0.1, scale factor α = 50, weighting parameters η01 = η11 = η21 = η02 = η12 = η22 = 0.05, dictionary-learning iteration number iterMax = 15, sample-weight update interval iterSkip = 5, dictionary atom numbers n0 = n1 = n2 = 300, weight limit wm = 50; the SVM classifier uses the LIBSVM toolkit, with SVM penalty coefficient C = 10;
3.4) Experiment contents: identical to Experiment 1.
The discrimination results of Experiment 3 are shown in Table 3:

Table 3. Discrimination results of different methods

| Method | AUC | Pc (Thr=0) | Pd (Thr=0) | Pf (Thr=0) | Pd (Thr at Pd≈0.9) | Pf (Thr at Pd≈0.9) |
|---|---|---|---|---|---|---|
| Verbout | 0.5688 | 42.4936% | 0.8428 | 0.6810 | 0.8994 | 0.7927 |
| Verbout+Gao | 0.5662 | 42.4936% | 0.8428 | 0.6810 | 0.8994 | 0.7927 |
| Lincoln | 0.5663 | 44.5293% | 0.9623 | 0.6858 | 0.8994 | 0.6284 |
| Lincoln+Verbout+Gao | 0.5751 | 43.1298% | 0.9560 | 0.7018 | 0.8994 | 0.6268 |
| CSDL | 0.8529 | 75.5729% | 0.7987 | 0.2552 | 0.8994 | 0.3907 |
| Present invention | 0.8555 | 77.4809% | 0.7799 | 0.2265 | 0.8994 | 0.3652 |
As seen from Table 3, the present invention achieves the highest AUC and the highest overall accuracy Pc, and at the same detection rate of about 0.9 its false-alarm rate is the lowest, showing that the discrimination performance of the present invention under complex scenes is better than that of the existing methods.
Experiment 4:
4.1) experiment scene:
The sectioning images used in this experiment come from the miniSAR data set released by the U.S. Sandia laboratory; the data were downloaded from the Sandia laboratory website. Some example sectioning images are shown in Fig. 6: Fig. 6(a) is a target-class training sectioning image example, Fig. 6(b) is a clutter-class sectioning image example, and Fig. 6(c) is a test sectioning image example.
4.2) The experiment uses the same four groups of traditional features as Experiment 1.
4.3) Experiment parameters:
Training clutter slice number p1 = 1736, training target slice number p2 = 2044, test clutter slice number k1 = 305, test target slice number k2 = 115, weighting parameter λ = 0.1, scale factor α = 50, weighting parameters η01 = η11 = η21 = η02 = η12 = η22 = 0.05, dictionary-learning iteration number iterMax = 15, sample-weight update interval iterSkip = 5, dictionary atom numbers n0 = n1 = n2 = 300, weight limit wm = 50; the SVM classifier uses the LIBSVM toolkit, with SVM penalty coefficient C = 10;
4.4) Experiment contents: identical to Experiment 1.
The discrimination results of Experiment 4 are shown in Table 4:

Table 4. Discrimination results of different methods

| Method | AUC | Pc (Thr=0) | Pd (Thr=0) | Pf (Thr=0) | Pd (Thr at Pd≈0.9) | Pf (Thr at Pd≈0.9) |
|---|---|---|---|---|---|---|
| Verbout | 0.7508 | 77.3810% | 0.5043 | 0.1246 | 0.8957 | 0.5443 |
| Verbout+Gao | 0.7382 | 76.6667% | 0.4957 | 0.1311 | 0.8957 | 0.5836 |
| Lincoln | 0.8922 | 86.6667% | 0.9913 | 0.1803 | 0.8957 | 0.1541 |
| Lincoln+Verbout+Gao | 0.8933 | 84.5238% | 0.8957 | 0.1738 | 0.8957 | 0.1738 |
| CSDL | 0.9456 | 88.8095% | 0.8174 | 0.0852 | 0.8957 | 0.1213 |
| Present invention | 0.9508 | 88.8095% | 0.8087 | 0.0820 | 0.8957 | 0.1148 |
As seen from Table 4, the present invention achieves the highest AUC and ties CSDL for the highest overall accuracy Pc, and at the same detection rate of about 0.9 its false-alarm rate is the lowest, showing that the discrimination performance of the present invention under complex scenes is better than that of the existing methods.
In summary, the present invention — a SAR target discrimination method based on sample-weighted class-specific and shared dictionaries — solves the problem of SAR target discrimination under complex scenes; it effectively exploits the rich detail information of high-resolution SAR images and improves SAR target discrimination performance under complex scenes.

Claims (9)

1. A SAR target discrimination method based on sample-weighted class-specific and shared dictionaries, comprising:
(1) extracting local features from the given training sectioning images and the test sectioning images with the SAR-SIFT descriptor, obtaining the local features X = [X1, X2] of the training sectioning images and the local features Y = [Y1, Y2] of the test sectioning images, wherein X1 denotes the local features of the clutter-class training sectioning images, X2 denotes the local features of the target-class training sectioning images, Y1 denotes the local features of the clutter-class test sectioning images, Y2 denotes the local features of the target-class test sectioning images, p1 denotes the number of clutter-class training sectioning images, p2 denotes the number of target-class training sectioning images, k1 denotes the number of clutter-class test sectioning images, and k2 denotes the number of target-class test sectioning images;
(2) taking the clutter-class training sectioning image local features X1 obtained in (1) as clutter-class training samples and the target-class training sectioning image local features X2 as target-class training samples, learning the Global Dictionary U:
2a) initializing the clutter-class dictionary U1, the target-class dictionary U2, the shared dictionary U0, the clutter-class training sample weights and the target-class training sample weights, and setting the current iteration number iter = 0;
2b) according to the clutter-class dictionary U1, the target-class dictionary U2 and the shared dictionary U0 at the current iteration number, computing the sparse representation coefficients H1 of the clutter-class training slice local features X1 and the sparse representation coefficients H2 of the target-class training slice local features X2;
2c) according to H1 and H2 obtained in 2b), updating the clutter-class dictionary U1, the target-class dictionary U2 and the shared dictionary U0 with the alternate optimization method, obtaining the updated clutter-class dictionary U1′, target-class dictionary U2′ and shared dictionary U0′;
2d) letting iter = iter + 1 and recording the current iteration number; judging whether to update the sample weights: if mod(iter, iterSkip) equals 0, executing step 2e) to update the training sample weights; otherwise, skipping the training-sample weight update, letting U1 = U1′, U2 = U2′, U0 = U0′ and returning to step 2b), wherein iterSkip denotes the training-sample weight update interval and mod denotes the remainder operation;
2e) using U1′, U2′ and U0′ obtained in 2c) to update the clutter-class training sample weights, obtaining the updated clutter-class training sample weights, and using U1′, U2′ and U0′ to update the target-class training sample weights, obtaining the updated target-class training sample weights;
2f) judging whether the current iteration number iter is less than the maximum iteration number iterMax: if so, letting U1 = U1′, U2 = U2′, U0 = U0′, keeping the updated sample weights, and returning to step 2b); if iter equals iterMax, stopping the iteration and obtaining the final Global Dictionary U = [U0′, U1′, U2′];
(3) using the Global Dictionary U obtained in (2) to perform standardized sparse coding on the local features X of the training sectioning images and the local features Y of the test sectioning images obtained in (1), obtaining the local-feature coding coefficients V of the training sectioning images and the local-feature coding coefficients W of the test sectioning images;
(4) performing feature merging and dimensionality reduction on the local-feature coding coefficients V of the training sectioning images and on the local-feature coding coefficients W of the test sectioning images respectively, obtaining the global features V″′ of the training sectioning images and the global features W″′ of the test sectioning images;
(5) training a two-class linear SVM classifier with the global features V″′ of the training sectioning images; classifying the global features W″′ of the test sectioning images with the trained classifier to obtain the classification decision value decision of each test sectioning image; comparing the decision value decision with the set threshold Thr = 0: if decision ≥ Thr, the slice is regarded as a target-class slice, otherwise as a clutter-class slice.
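The final discrimination step (5) can be sketched as follows. The patent uses the LIBSVM toolkit; the sub-gradient hinge-loss trainer below is only a hypothetical stand-in so the example stays self-contained, and the function names, learning rate and epoch count are our own choices.

```python
import numpy as np

def train_linear_svm(X, y, C=10.0, lr=0.01, epochs=200):
    """Minimal two-class linear SVM trained by sub-gradient descent on the
    hinge loss (a stand-in for LIBSVM's C-SVC with penalty C = 10).
    X: (n_samples, n_features) global features; y: labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                                  # samples inside the margin
        grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0) / n
        grad_b = -C * y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def discriminate(w, b, feats, thr=0.0):
    """Step (5) decision rule: decision >= Thr  =>  target-class slice."""
    decision = feats @ w + b
    return decision, decision >= thr
```

In use, `train_linear_svm` would receive the training global features V″′ and `discriminate` the test global features W″′.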
2. The method according to claim 1, wherein step 2a) initializes the clutter-class dictionary U1, the target-class dictionary U2, the shared dictionary U0, the clutter-class training sample weights and the target-class training sample weights as follows:
2a1) randomly selecting 10000 local features from X1, initializing the clutter-class dictionary U1 of size d × n1 with the K-SVD algorithm, and updating U1 once with the Lagrange dual algorithm, wherein d denotes the dimension of the training sectioning image local features and n1 denotes the number of clutter-class dictionary atoms;
2a2) randomly selecting 10000 local features from X2, initializing the target-class dictionary U2 of size d × n2 with the K-SVD algorithm, and updating U2 once with the Lagrange dual algorithm, wherein n2 denotes the number of target-class dictionary atoms;
2a3) randomly selecting 10000 local features from X1 and X2, initializing the shared dictionary U0 of size d × n0 with the K-SVD algorithm, and updating U0 once with the Lagrange dual algorithm, wherein n0 denotes the number of shared dictionary atoms;
2a4) initializing the clutter-class training sample weights and the target-class training sample weights to 1.
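As a rough illustration of the initialization in claim 2, the sketch below performs only the common first step of K-SVD — sampling training features as atoms and normalizing each atom to unit l2 norm; the full K-SVD refinement and the Lagrange dual update of the claim are omitted, and the helper name is our own.

```python
import numpy as np

def init_dictionary(features, n_atoms, rng=None):
    """Initialize a dictionary by sampling local features as atoms and
    normalizing each column to unit l2 norm (the usual K-SVD starting
    point). features: (d, m) matrix with local features as columns."""
    rng = np.random.default_rng(rng)
    d, m = features.shape
    idx = rng.choice(m, size=n_atoms, replace=False)   # sample atoms without repeats
    U = features[:, idx].astype(float)
    U /= np.linalg.norm(U, axis=0, keepdims=True) + 1e-12
    return U
```

With the patent's settings this would be called three times, once per dictionary, e.g. `init_dictionary(X1, 300)` for U1.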
3. The method according to claim 1, wherein step 2b) computes the sparse representation coefficients H1 of the clutter-class training slice local features X1 and the sparse representation coefficients H2 of the target-class training slice local features X2 as follows:
2b1) solving the following optimization problem with the feature-sign search algorithm to obtain the sparse representation coefficients H1^i of the local features X1^i of the i-th clutter-class training sectioning image:

min_{H1^i} ||X1^i − U H1^i||_F^2 + λ ||H1^i||_1

wherein i = 1, …, p1, λ denotes the weighting parameter, ||·||_F denotes the F (Frobenius) norm and ||·||_1 denotes the l1 norm;
after the sparse representation coefficients of the local features of all clutter-class training sectioning images are solved, the updated sparse representation coefficients H1 of the clutter-class training sectioning image local features are obtained;
2b2) solving the following optimization problem with the feature-sign search algorithm to obtain the sparse representation coefficients H2^j of the local features X2^j of the j-th target-class training sectioning image:

min_{H2^j} ||X2^j − U H2^j||_F^2 + λ ||H2^j||_1

wherein j = 1, …, p2;
after the sparse representation coefficients of the local features of all target-class training sectioning images are solved, the updated sparse representation coefficients H2 of the target-class training sectioning image local features are obtained.
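The per-feature problem of claim 3 is a standard lasso. The sketch below solves it with ISTA (iterative soft-thresholding) instead of feature-sign search — both converge to the same minimizer of this convex objective, ISTA is simply shorter to write — so it is an illustrative substitute, not the patented solver.

```python
import numpy as np

def sparse_code(U, x, lam=0.1, n_iter=200):
    """Solve min_h 0.5*||x - U h||_2^2 + lam*||h||_1 by ISTA.
    U: (d, n) dictionary; x: (d,) local feature; returns the sparse code h."""
    L = np.linalg.norm(U, 2) ** 2            # Lipschitz constant of the gradient
    h = np.zeros(U.shape[1])
    for _ in range(n_iter):
        g = U.T @ (U @ h - x)                # gradient of the smooth data term
        h = h - g / L                        # gradient step
        h = np.sign(h) * np.maximum(np.abs(h) - lam / L, 0.0)  # soft-threshold
    return h
```

Applied column by column over X1 and X2 with λ = 0.1, this plays the role of computing H1 and H2.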
4. The method according to claim 1, wherein step 2c) updates the clutter-class dictionary U1 as follows:
2c1) solving the following optimization problem with the alternate optimization method to update the clutter-class dictionary U1, obtaining the updated clutter-class dictionary U1′:

s.t. ||U1(:, b1)||_2 = 1, b1 = 1, …, n1

wherein η11 and the second weighting parameter of the objective are weighting parameters, ||·||_2 is the l2 norm and ||·||_F is the F norm; the selection matrices in the objective are assembled from the identity matrix of size n0, the zero matrix of size n1 × n0, the identity matrix of size n1 and the zero matrix of size n0 × n1; n1 is the number of atoms of the clutter-class dictionary U1, n2 is the number of atoms of the target-class dictionary U2, n0 is the number of atoms of the shared dictionary U0, n = n0 + n1 + n2, and W1 is the clutter-class training sample weight matrix;
m1 = nL × p1 is the total number of local features of the clutter-class training sectioning images, and nL denotes the number of local features in one training sectioning image.
5. The method according to claim 1, wherein step 2c) updates the target-class dictionary U2 as follows:
2c2) solving the following optimization problem with the alternate optimization method to update the target-class dictionary U2, obtaining the updated target-class dictionary U2′:

s.t. ||U2(:, b2)||_2 = 1, b2 = 1, …, n2

wherein η21 and the second weighting parameter of the objective are weighting parameters, ||·||_2 is the l2 norm and ||·||_F is the F norm; the selection matrices in the objective are assembled from the identity matrix of size n0, the zero matrix of size n2 × n0, the identity matrix of size n2 and the zero matrix of size n0 × n2; n1 is the number of atoms of the clutter-class dictionary U1, n2 is the number of atoms of the target-class dictionary U2, n0 is the number of atoms of the shared dictionary U0, n = n0 + n1 + n2, and W2 is the target-class training sample weight matrix;
m2 = nL × p2 is the total number of local features of the target-class training sectioning images, and nL denotes the number of local features in one training sectioning image.
6. The method according to claim 1, wherein step 2c) updates the shared dictionary U0 as follows:
2c3) solving the following optimization problem with the alternate optimization method to update the shared dictionary U0, obtaining the updated shared dictionary U0′:

s.t. ||U0(:, b0)||_2 = 1, b0 = 1, …, n0

wherein η01 and the second weighting parameter of the objective are weighting parameters, ||·||_2 is the l2 norm and ||·||_F is the F norm; n1 is the number of atoms of the clutter-class dictionary U1, n2 is the number of atoms of the target-class dictionary U2, n0 is the number of atoms of the shared dictionary U0, and n = n0 + n1 + n2; the selection matrices in the objective are assembled from identity matrices of sizes n0, n1 and n2 and zero matrices of sizes n1 × n0, n0 × n1, n2 × n0 and n0 × n2; W1 is the clutter-class training sample weight matrix and W2 is the target-class training sample weight matrix;
m1 = nL × p1 is the total number of local features of the clutter-class training sectioning images, m2 = nL × p2 is the total number of local features of the target-class training sectioning images, and nL denotes the number of local features in one training sectioning image.
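A simplified stand-in for the dictionary updates of claims 4–6 can be sketched as projected gradient descent: with the sparse codes held fixed, step along the gradient of the Frobenius reconstruction term, then re-project every atom onto the unit l2 sphere (the constraint ||U(:, b)||_2 = 1). The class-specific cross terms, η parameters and weight matrices W1, W2 of the actual claims are deliberately omitted here, and the helper name is our own.

```python
import numpy as np

def update_dictionary(U, X, H, n_iter=10):
    """Projected-gradient update of a dictionary U (d x n_atoms) against
    data X (d x m) with fixed sparse codes H (n_atoms x m):
    minimize ||X - U H||_F^2 subject to unit-norm atoms."""
    step = 1.0 / (np.linalg.norm(H, 2) ** 2 + 1e-12)   # 1/Lipschitz step size
    for _ in range(n_iter):
        U = U - step * (U @ H - X) @ H.T               # gradient of the Frobenius term
        U /= np.linalg.norm(U, axis=0, keepdims=True) + 1e-12  # unit-norm atoms
    return U
```

Alternating this update with the sparse-coding step mirrors the "alternate optimization method" the claims invoke.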
7. The method according to claim 1, wherein step 2e) updates the clutter-class training sample weights and the target-class training sample weights as follows:
2e1) using U1′, U2′ and U0′ obtained in 2c) to update the clutter-class training sample weights, obtaining the updated clutter-class training sample weights, where the weight w1i′ of the i-th clutter-class training sample is computed from the quantities below;
here i = 1, …, p1, α is a scale factor larger than 1, wm is the maximum value of the allowed weight range, and H1^i is the sparse representation coefficient of the local features X1^i of the i-th clutter-class training sectioning image, obtained by solving the optimization problem min_{H1^i} ||X1^i − U′ H1^i||_F^2 + λ ||H1^i||_1 with the feature-sign search algorithm, with components corresponding to U0′, U1′ and U2′ respectively; the energy term entering the weight is the average energy of the clutter sectioning image local features X1^i reconstructed with the target-class dictionary U2′, and nL denotes the number of local features in one training sectioning image;
2e2) using U1′, U2′ and U0′ obtained in 2c) to update the target-class training sample weights, obtaining the updated target-class training sample weights, where the weight w2j′ of the j-th target-class training sample is computed from the quantities below;
here j = 1, …, p2, and H2^j is the sparse representation coefficient of the local features X2^j of the j-th target-class training sectioning image, obtained by solving the optimization problem min_{H2^j} ||X2^j − U′ H2^j||_F^2 + λ ||H2^j||_1 with the feature-sign search algorithm, with components corresponding to U0′, U1′ and U2′ respectively; the energy term entering the weight is the average energy of the target sectioning image local features X2^j reconstructed with the clutter-class dictionary U1′.
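Because the exact weight-update equations of claim 7 are not reproduced in this text (the equation images are missing), the sketch below implements only their stated ingredients: a weight that grows with the average cross-class reconstruction energy, scaled by a factor α > 1 and capped at the maximum wm. The linear form chosen here is an assumption, not the patented formula.

```python
import numpy as np

def update_sample_weight(cross_energy, alpha=50.0, w_max=50.0):
    """Hedged sketch of a claim-7-style weight rule: samples whose local
    features are well reconstructed by the OTHER class's dictionary
    (large cross_energy, i.e. confusable samples) receive larger
    weights, clipped to the allowed maximum w_max."""
    return np.minimum(alpha * np.asarray(cross_energy, dtype=float), w_max)
```

With the patent's α = 50 and wm = 50, a sample whose cross-class reconstruction energy exceeds 1 would simply saturate at the cap.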
8. The method according to claim 1, wherein the feature merging and dimensionality reduction of the local-feature coding coefficients V of the training sectioning images in step (4) are carried out as follows:
4a) dividing the training sectioning image into three groups of sub-regions A1, A2, A3 of sizes 1 × 1, 2 × 2 and 4 × 4 with the spatial pyramid matching model;
4b) merging the local-feature coding coefficients V of the training sectioning image within the sub-regions A1, A2, A3 by max pooling and concatenating the results, forming the global feature V′ of the training sectioning image;
applying l2-norm normalization to the global feature V′ of the training sectioning image, obtaining the normalized global feature V″ of the training sectioning image, wherein h denotes the dimension of the global feature after feature merging;
4c) applying principal component analysis to V″ for dimensionality reduction, obtaining the global features V″′ of the training sectioning image after reduction, wherein h′ is the dimension of the global features after dimensionality reduction.
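The pooling of claim 8 can be sketched directly: max-pool the coding coefficients over 1 × 1, 2 × 2 and 4 × 4 grids, concatenate, and l2-normalize. The final PCA step is omitted, and the helper name and the normalized (row, col) position convention are our own assumptions.

```python
import numpy as np

def spatial_pyramid_max_pool(codes, positions, levels=(1, 2, 4)):
    """Spatial-pyramid max pooling of local-feature coding coefficients.
    codes:     (n_features, code_dim) coefficients of one sectioning image;
    positions: (n_features, 2) (row, col) coordinates normalized to [0, 1)."""
    codes = np.asarray(codes, dtype=float)
    pos = np.asarray(positions, dtype=float)
    pooled = []
    for g in levels:                                  # 1x1, 2x2, 4x4 grids
        cell = np.minimum((pos * g).astype(int), g - 1)  # cell index per feature
        for r in range(g):
            for c in range(g):
                mask = (cell[:, 0] == r) & (cell[:, 1] == c)
                if mask.any():
                    pooled.append(codes[mask].max(axis=0))   # max pooling per cell
                else:
                    pooled.append(np.zeros(codes.shape[1]))  # empty cell => zeros
    v = np.concatenate(pooled)                        # splice into one vector
    return v / (np.linalg.norm(v) + 1e-12)            # l2 normalization
```

The 1 + 4 + 16 = 21 cells give a pooled vector of dimension 21 × code_dim, matching the claim's global feature before PCA.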
9. The method according to claim 1, wherein the feature merging and dimensionality reduction of the local-feature coding coefficients W of the test sectioning images in step (4) are carried out as follows:
4d) dividing the test sectioning image into three groups of sub-regions B1, B2, B3 of sizes 1 × 1, 2 × 2 and 4 × 4 with the spatial pyramid matching model;
4e) merging the local-feature coding coefficients W of the test sectioning image within the sub-regions B1, B2, B3 by max pooling and concatenating the results, forming the global feature W′ of the test sectioning image;
applying l2-norm normalization to the global feature W′ of the test sectioning image, obtaining the normalized global feature W″ of the test sectioning image;
4f) applying principal component analysis to W″ for dimensionality reduction, obtaining the global features W″′ of the test sectioning image after reduction, wherein h′ is the dimension of the global features after dimensionality reduction.
CN201611136982.2A 2016-12-12 2016-12-12 Based on the specific SAR target discrimination method with shared dictionary of sample weighting classification Active CN106599831B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611136982.2A CN106599831B (en) 2016-12-12 2016-12-12 Based on the specific SAR target discrimination method with shared dictionary of sample weighting classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611136982.2A CN106599831B (en) 2016-12-12 2016-12-12 Based on the specific SAR target discrimination method with shared dictionary of sample weighting classification

Publications (2)

Publication Number Publication Date
CN106599831A CN106599831A (en) 2017-04-26
CN106599831B true CN106599831B (en) 2019-01-29

Family

ID=58598338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611136982.2A Active CN106599831B (en) 2016-12-12 2016-12-12 Based on the specific SAR target discrimination method with shared dictionary of sample weighting classification

Country Status (1)

Country Link
CN (1) CN106599831B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107122753B (en) * 2017-05-08 2020-04-07 西安电子科技大学 SAR target identification method based on ensemble learning

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8620093B2 (en) * 2010-03-15 2013-12-31 The United States Of America As Represented By The Secretary Of The Army Method and system for image registration and change detection
US9363024B2 (en) * 2012-03-09 2016-06-07 The United States Of America As Represented By The Secretary Of The Army Method and system for estimation and extraction of interference noise from signals
CN102651073B (en) * 2012-04-07 2013-11-20 西安电子科技大学 Sparse dynamic ensemble selection-based SAR (synthetic aperture radar) image terrain classification method
CN103714353B (en) * 2014-01-09 2016-11-23 西安电子科技大学 The Classification of Polarimetric SAR Image method of view-based access control model prior model

Also Published As

Publication number Publication date
CN106599831A (en) 2017-04-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant