CN108280460B - SAR vehicle target identification method based on improved convolutional neural network

Info

Publication number: CN108280460B
Application number: CN201711257577.0A
Authority: CN (China)
Other versions: CN108280460A (Chinese)
Inventors: 白雪茹 (Bai Xueru), 周雪宁 (Zhou Xuening), 王力 (Wang Li), 周峰 (Zhou Feng)
Original/Current Assignee: Xidian University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application filed by Xidian University, with priority to CN201711257577.0A; published as CN108280460A and, upon grant, as CN108280460B.

Classifications

    • G06F18/285: Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06T5/30: Erosion or dilatation, e.g. thinning
    • G06T7/11: Region-based segmentation
    • G06T7/155: Segmentation; edge detection involving morphological operators
    • G06T2207/20112: Image segmentation details
    • G06T2207/20132: Image cropping


Abstract

The invention discloses an SAR vehicle target identification method based on an improved convolutional neural network, which mainly addresses the low identification accuracy and the tendency toward overfitting of prior-art SAR vehicle target identification. The scheme is as follows: remove the background clutter of each image in the training sample set and crop each SAR image; construct an improved convolutional neural network based on the caffe framework, i.e. set the classifier of the network's target identification part to a mixed maximum-margin softmax; input the cropped training samples into the improved convolutional neural network for training to obtain a trained network model; remove the background clutter of the test samples and crop them; input the processed test samples into the trained improved convolutional neural network model for testing to obtain the recognition rate on the test samples. The invention improves the accuracy of SAR vehicle target identification, accelerates network convergence, and improves the generalization performance of the network.

Description

SAR vehicle target identification method based on improved convolutional neural network
Technical Field
The invention belongs to the technical field of radar, and in particular relates to a radar target identification method for identifying ground vehicle targets in SAR images.
Background
Synthetic aperture radar (SAR) offers all-time, all-weather operation, long operating range, and high resolution, and plays an important role in reconnaissance, detection, and guidance. At present, automatic target recognition (ATR) based on SAR images, in particular ground target recognition, is receiving wide attention in many fields.
The convolutional neural network (CNN) is a type of artificial neural network that has become a research hotspot in image recognition and segmentation, speech recognition, human behavior recognition, and related fields. Compared with traditional SAR ATR methods, a CNN can automatically extract image features through training, without requiring extensive domain knowledge or manual intervention. Meanwhile, the extracted features are highly robust and generalize to some extent to new target types.
The invention patent application of Xidian University, "SAR target recognition method based on CNN" (publication number CN104732243A, application number 201510165886.X), discloses a CNN-based SAR ATR method. The specific steps are: first, apply multiple random translations to the target to be recognized in the SAR image to expand the data set; then input the expanded SAR images into a CNN for training and testing to obtain the recognition accuracy. The method addresses the sensitivity of existing SAR target identification methods to the position of the target within the sample image, but it still does not solve the problem that similar background clutter hinders accurate identification of SAR vehicle targets.
The published paper "SAR image target recognition research based on convolutional neural network" (Journal of Radars, 2016, 5(3): 320-) proposes an SAR target recognition method based on an improved CNN. The specific steps are: first, introduce a class-separability measure into the cost function to improve the traditional CNN; then use the improved CNN to extract features from the SAR image; finally, classify the features with a support vector machine (SVM) to obtain the recognition accuracy. By improving the traditional CNN structure, the method strengthens the classification discrimination capability of the CNN, but it still does not overcome the drawbacks that the CNN is difficult to converge and prone to overfitting.
Disclosure of Invention
The invention aims to provide an SAR vehicle target identification method based on an improved convolutional neural network that addresses the above defects of the prior art, so as to improve the identification accuracy, accelerate network convergence, and improve the generalization performance of the network.
The technical scheme of the invention is: remove the background clutter of the SAR images; crop each image to 60 × 60 so that the target region lies at the center of the image; input the processed training samples into an improved convolutional neural network based on the caffe framework for training; then input the processed test samples into the trained improved convolutional neural network to test the recognition rate. The implementation steps are as follows:
(1) 3671 SAR images observed by a radar under a 17-degree pitch angle are obtained from the MSTAR data set and corresponding labels are used as a training sample set; acquiring 3203 SAR images observed under a 15-degree pitch angle and corresponding labels as a test sample set;
(2) sample training:
(2a) removing background clutter of each image in the training sample set to obtain a processed training sample;
(2b) cutting each SAR image in the training sample after background noise removal into 60 multiplied by 60, and enabling a target area to be located in the center of the picture;
(2c) improving the convolutional neural network based on the caffe framework, namely setting the classifier of the target identification part of the network to a mixed maximum-margin softmax;
(2e) inputting the cut training samples into an improved convolutional neural network model for training to obtain a trained network model;
(3) and (3) sample testing:
(3a) removing the background clutter of each image in the test sample set to obtain a processed test sample;
(3b) cutting the size of each SAR image in the test sample from which the background noise is removed into 60 multiplied by 60, and enabling the target area to be located in the center of the picture;
(3c) inputting the cut test sample into the trained improved convolutional neural network model for classification, and computing the identification accuracy from the true category of each test sample and the category judged by the network:

$$\mathrm{Accuracy} = \frac{1}{N}\sum_{i=1}^{N}\mathbb{1}\{t_i = \mathrm{label}_i\}$$

where N is the number of input test samples, t_i is the category identified by the network for the ith test sample, and label_i is the true category of the ith test sample;
compared with the prior art, the invention has the following advantages:
1. removing similarity background clutter effects
Aiming at the problem that the similarity background clutter can influence the accurate identification of a typical SAR target, the method firstly removes the background clutter of the SAR image, only reserves the target area image, then uses the target area image for identification, removes the influence of the similarity background clutter on the identification, and improves the accuracy of SAR ATR.
2. The network has faster convergence speed and better generalization capability
The invention provides an improved convolutional neural network structure on the basis of the traditional CNN structure, namely, the original softmax classifier is replaced by the mixed maximum boundary softmax classifier, so that the convergence rate and the generalization capability of the original CNN are improved, and the identification accuracy of the network for SAR vehicle targets is improved.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
fig. 2 is a comparison graph of the recognition accuracy for the first 50 cycles of the test sample using the conventional CNN method and the method of the present invention.
Detailed Description
Referring to fig. 1, the method of the present invention includes two stages of training and testing, and includes the following specific steps:
a training phase
Step 1, obtaining an SAR image training sample and a test sample.
From the public MSTAR dataset, select 3671 target images of the radar at a 17-degree pitch angle with their class labels as training samples, and 3203 target images at a 15-degree pitch angle with their class labels as test samples; all samples are 128 × 128 pixels.
Step 2, removing the background clutter from the SAR images of the training samples.
The background clutter of an SAR image can be removed by methods such as background subtraction, two-dimensional filtering, or wavelet transform; this example is implemented by, but not limited to, the following method:
(2a) For each input SAR image I_0, perform a 0.5-power transformation to enhance the separability of background clutter and shadow regions, obtaining the transformed image I_1;
(2b) Apply Wiener filtering to the transformed image I_1 to smooth the target and the background clutter, obtaining the processed image I_2;
(2c) Slide a square window of side 15 over I_2 with step 1, compute the mean of the pixels inside each window position, and take the maximum a over all these means; select the 5 × 5 region at the upper-left corner of I_2 and compute its pixel mean b;
set the threshold t = 0.35 × a + 0.65 × b. The pixels of I_2 with values greater than t form the target region, and all pixels in this region are set to 1; the pixels of I_2 with values smaller than t form the background region, and all pixels in this region are set to 0. This labels I_2 into target and background, yielding the binary image I_3;
(2d) Apply a morphological closing operation to the binary image I_3 using the following formula, fusing gaps at the edge of the target region and filling internal defects to obtain the fused and filled image I_4:

$$I_4 = (I_3 \oplus B) \ominus B$$

where $\oplus$ and $\ominus$ denote the dilation and erosion operators, respectively, and $B$ is the structuring element;
(2e) Label all connected components of the fused and filled image I_4; select the component with the largest area as the new target region and set all of its pixels to 1; take the remaining area as the new background region and set all of its pixels to 0, obtaining the new binary image I_5;
(2f) Point-multiply the new binary image I_5 with the original image I_0 to obtain the training sample with background clutter removed.
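The clutter-removal pipeline of steps (2a)-(2f) can be sketched with NumPy/SciPy as below. This is a minimal illustration rather than the patent's exact implementation: the Wiener filter window (`mysize=5`) and the 3 × 3 structuring element are assumptions, since the patent does not specify them.

```python
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import uniform_filter, binary_closing, label

def remove_background_clutter(img, win=15, w_a=0.35, w_b=0.65):
    """Steps (2a)-(2f): suppress background clutter, keep only the target region."""
    # (2a) 0.5-power transform to separate background clutter from shadow
    i1 = np.power(img.astype(np.float64), 0.5)
    # (2b) Wiener filtering to smooth target and clutter (window size assumed)
    i2 = wiener(i1, mysize=5)
    # (2c) a: max mean over 15x15 sliding windows; b: mean of the 5x5 top-left corner
    a = uniform_filter(i2, size=win).max()
    b = i2[:5, :5].mean()
    t = w_a * a + w_b * b
    i3 = (i2 > t).astype(np.uint8)            # binary target/background map I_3
    # (2d) morphological closing: fuse edge gaps, fill internal defects
    i4 = binary_closing(i3, structure=np.ones((3, 3)))
    # (2e) keep only the largest connected component as the target region
    lab, n = label(i4)
    if n > 0:
        sizes = np.bincount(lab.ravel())
        sizes[0] = 0                          # ignore the background label
        i5 = (lab == sizes.argmax()).astype(np.float64)
    else:
        i5 = np.zeros_like(i2)
    # (2f) point-multiply the mask with the original image
    return img * i5
```

Applied to a 128 × 128 MSTAR chip, the result keeps the original pixel values inside the detected target region and zeroes everything else.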
Step 3, cropping the training sample images with background clutter removed.
For each training sample image with background clutter removed, keep all pixels in the 60 × 60 neighborhood of the central pixel and discard the rest, obtaining a 60 × 60 training sample image.
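The cropping of step 3 keeps the central 60 × 60 region of each image; a minimal sketch, assuming even-sided inputs such as the 128 × 128 MSTAR chips:

```python
import numpy as np

def center_crop(img, size=60):
    """Keep the size x size neighborhood around the central pixel (step 3)."""
    h, w = img.shape
    top = (h - size) // 2
    left = (w - size) // 2
    return img[top:top + size, left:left + size]
```

For a 128 × 128 input, the crop starts at row/column (128 - 60) // 2 = 34 and returns a 60 × 60 image with the target region centered.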
Step 4, improving the convolutional neural network based on the caffe framework to construct a new convolutional neural network.
The convolutional neural network based on the caffe framework is divided into two parts: target feature map extraction and target identification. The improvement consists in changing the classifier of the target identification part from the original softmax to a mixed maximum-margin softmax. The implementation steps are as follows:
(4a) the target feature map extraction section:
(4a1) Construct a convolution layer with kernel size 5 × 5, stride 1, and ReLU activation; convolve the cropped training sample image obtained in step 3 and output 16 first-layer feature maps L_1 of size 56 × 56; normalize L_1 and output 16 first-layer normalized maps L_2 of size 56 × 56. Use a downsampling layer with kernel window 2 × 2 and stride 2 to downsample each map of L_2, reducing its size and the computational complexity, and output 16 first-layer reduced maps L_3 of size 28 × 28;
(4a2) Construct a convolution layer with kernel size 3 × 3, stride 1, and ReLU activation; convolve the reduced maps L_3 and output 32 second-layer feature maps L_4 of size 26 × 26; normalize L_4 and output 32 second-layer normalized maps L_5 of size 26 × 26. Use a downsampling layer with kernel window 2 × 2 and stride 2 to downsample each map of L_5, reducing its size and the computational complexity, and output 32 second-layer reduced maps L_6 of size 13 × 13;
(4a3) Construct a convolution layer with kernel size 4 × 4, stride 1, and ReLU activation; convolve L_6 and output 64 third-layer feature maps L_7 of size 10 × 10; normalize L_7 and output 64 third-layer normalized maps L_8 of size 10 × 10. Use a downsampling layer with kernel window 2 × 2 and stride 2 to downsample each map of L_8, reducing its size and the computational complexity, and output 64 third-layer reduced maps L_9 of size 5 × 5. Construct a Dropout layer with probability 0.5 to reduce the number of active network weights and hence the computational complexity;
(4a4) Construct a convolution layer with kernel size 5 × 5, stride 1, and ReLU activation; convolve L_9 and output 64 fourth-layer feature maps L_10 of size 1 × 1; normalize L_10 and output 64 fourth-layer normalized maps L_11 of size 1 × 1;
(4a5) Construct a convolution layer with kernel size 1 × 1, stride 1, and ReLU activation; convolve L_11 and output a 1 × 10 feature vector L_12.
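The feature-map sizes listed in (4a1)-(4a5) follow from the standard output-size formulas for valid convolution and for 2 × 2, stride-2 downsampling; a quick check:

```python
def conv_out(n, k, s=1):
    """Output side length of a valid convolution: floor((n - k) / s) + 1."""
    return (n - k) // s + 1

def pool_out(n, k=2, s=2):
    """Output side length of a pooling layer with kernel k and stride s."""
    return (n - k) // s + 1

sizes = []
n = 60                                 # cropped input image
n = conv_out(n, 5); sizes.append(n)    # L_1: 16 maps, 56 x 56
n = pool_out(n);    sizes.append(n)    # L_3: 28 x 28
n = conv_out(n, 3); sizes.append(n)    # L_4: 32 maps, 26 x 26
n = pool_out(n);    sizes.append(n)    # L_6: 13 x 13
n = conv_out(n, 4); sizes.append(n)    # L_7: 64 maps, 10 x 10
n = pool_out(n);    sizes.append(n)    # L_9: 5 x 5
n = conv_out(n, 5); sizes.append(n)    # L_10: 64 maps, 1 x 1
print(sizes)  # [56, 28, 26, 13, 10, 5, 1]
```

The final 1 × 1 convolution of (4a5) only changes the channel count (64 → 10), producing the 1 × 10 feature vector L_12.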
(4b) The target identification part:
Build the mixed maximum-margin softmax classifier; the network outputs the probability p_k that the input sample x belongs to the kth class:

$$p_k = \frac{e^{f_k}}{\sum_{j=1}^{Q} e^{f_j}}$$

where j = 1, 2, …, Q and Q = 10 denotes the 10 target classes contained in the MSTAR data; f_j is an intermediate variable with the expression:

$$f_j = \frac{\lambda\,\|W_j\|\,\|L_{11}\|\cos\theta_j + \|W_j\|\,\|L_{11}\|\,\psi(\theta_j)}{1+\lambda}$$

where λ is a mixing coefficient satisfying λ = max{λ_min, λ_0(1 + γ·n_iter)^{-p}}, in which λ_min = 1 is the minimum value of the mixing coefficient, λ_0 = 100 is the initial mixing coefficient, n_iter is the current iteration number of the network, p = 35 is the exponent parameter, and γ = 10^{-5} controls the decay rate of the exponential term; L_11 is the fourth-layer normalized map extracted from the input image, W_j is the weight vector corresponding to the jth element of the feature vector L_12, and θ_j is the angle between the weight vector W_j and the fourth-layer normalized map L_11; ψ(θ_j) = (-1)^k cos(mθ_j) - 2k is the transformation function, where m = 4 and k ∈ [0, m-1] is an integer.
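Under the definitions above, the λ annealing schedule and the transformation function ψ can be sketched as follows. The way the cosine and ψ terms are combined inside f_j follows the large-margin softmax formulation, which matches the mixing-coefficient definitions here but is an assumption about the exact form; `mixed_max_margin_softmax` is an illustrative name.

```python
import numpy as np

def lam_schedule(n_iter, lam_min=1.0, lam0=100.0, gamma=1e-5, p=35.0):
    """Mixing coefficient: lambda = max(lam_min, lam0 * (1 + gamma * n_iter)^-p)."""
    return max(lam_min, lam0 * (1.0 + gamma * n_iter) ** (-p))

def psi(theta, m=4):
    """Margin function (-1)^k * cos(m*theta) - 2k for theta in [k*pi/m, (k+1)*pi/m]."""
    k = np.minimum(np.floor(theta * m / np.pi), m - 1).astype(int)
    return (-1.0) ** k * np.cos(m * theta) - 2.0 * k

def mixed_max_margin_softmax(w_norms, x_norm, thetas, lam):
    """p_k = exp(f_k) / sum_j exp(f_j), with f_j mixing plain and margin logits."""
    f = (lam * w_norms * x_norm * np.cos(thetas)
         + w_norms * x_norm * psi(thetas)) / (1.0 + lam)
    f = f - f.max()                       # subtract max for numerical stability
    e = np.exp(f)
    return e / e.sum()
```

As λ decays from λ_0 toward λ_min during training, f_j shifts smoothly from the ordinary softmax logit toward the margin-enforcing ψ term, which is the annealing behavior the patent relies on for faster convergence.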
Step 5, inputting the training samples cropped in step 3 into the improved convolutional neural network model constructed in step 4 for training, to obtain the trained improved convolutional neural network model.
Second, testing stage
Step 6, removing the background clutter of each image in the test sample set to obtain a processed test sample; the method for removing background noise used therein is identical to the method in step 2.
Step 7, cropping each SAR image in the test samples with background clutter removed to 60 × 60 so that the target region lies at the center of the picture; the cropping method is the same as in step 3.
Step 8, inputting the cropped test samples into the trained improved convolutional neural network model for classification, to obtain the network identification accuracy:

$$\mathrm{Accuracy} = \frac{1}{N}\sum_{i=1}^{N}\mathbb{1}\{t_i = \mathrm{label}_i\}$$

where N is the number of input test samples, t_i is the category identified by the network for the ith test sample, and label_i is the true category of the ith test sample.
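The accuracy formula simply counts agreements between the network's predictions and the true labels; a minimal sketch:

```python
def recognition_accuracy(predicted, labels):
    """Accuracy = (1/N) * sum of 1{t_i == label_i} over the N test samples."""
    assert len(predicted) == len(labels)
    correct = sum(1 for t, l in zip(predicted, labels) if t == l)
    return correct / len(labels)
```

For example, with predictions [0, 1, 2, 1] against true labels [0, 1, 1, 1], three of four samples agree, giving an accuracy of 0.75.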
The effect of the invention can be illustrated by the following simulation experiment:
1. conditions of the experiment
The data used in the experiments are the public MSTAR data sets, including 10 classes of ground vehicle targets observed at radar pitch angles of 15° and 17°: armored vehicles (BMP-2, BRDM-2, BTR-60, BTR-70); tanks (T62, T72); rocket launcher (2S1); air defense unit (ZSU-23/4); truck (ZIL-131); bulldozer (D7).
in the experiment, 3671 target images and corresponding class labels of the radar under a 17-degree pitch angle are selected as training samples, 3203 target images and corresponding class labels of the radar under a 15-degree pitch angle are selected as test samples, and the size of all the samples is 128 x 128.
2. Experimental contents and results:
2.1) 3671 image data of the radar at a pitch angle of 17 degrees is selected as a training sample; 3203 image data of the radar under a 15-degree pitch angle are used as test samples;
2.2) removing background clutter of each image in the training sample set;
2.3) cutting the size of each SAR image in the training sample after background noise removal to 60 x 60 to ensure that the target area is positioned in the center of the picture;
2.4) inputting the cut training samples into an improved convolutional neural network for training to obtain a trained network model;
2.5) removing background clutter of each image in the test sample set;
2.6) cutting the size of each SAR image in the test sample after the background noise is removed to 60 multiplied by 60, so that the target area is positioned in the center of the picture;
2.7) input the cropped test samples into the improved convolutional neural network for testing to obtain the recognition rate; compare it on the MSTAR data with the recognition rate of the traditional CNN method, and plot the recognition accuracy curves of the two methods over the first 50 cycles; the result is shown in FIG. 2.
As can be seen from fig. 2, the improved convolutional neural network converges to the vicinity of the optimum within fewer cycles, demonstrating that the invention has a faster convergence rate and a stronger generalization capability than the conventional CNN.
The accuracy results of the final convergence of the two networks are shown in table 1.
TABLE 1. Recognition rates of the conventional CNN method and the method of the invention

Method:               Conventional CNN   Method of the invention
Recognition rate (%): 94.79              96.44
As can be seen from Table 1, the recognition rate of the method of the invention is 1.65 percentage points higher than that of the conventional CNN, showing that removing the SAR background clutter improves the accuracy of SAR target recognition.

Claims (2)

1. An SAR vehicle target identification method based on an improved convolutional neural network comprises the following steps:
(1) 3671 SAR images observed by a radar under a 17-degree pitch angle are obtained from the MSTAR data set and corresponding labels are used as a training sample set; acquiring 3203 SAR images observed under a 15-degree pitch angle and corresponding labels as a test sample set;
(2) sample training:
(2a) removing background clutter of each image in the training sample set to obtain a processed training sample;
(2b) cutting each SAR image in the training sample after background noise removal into 60 multiplied by 60, and enabling a target area to be located in the center of the picture;
(2c) improving the convolutional neural network based on the caffe framework, namely setting the classifier of the target identification part of the network to a mixed maximum-margin softmax, whose expression is:
$$p_k = \frac{e^{f_k}}{\sum_{j=1}^{Q} e^{f_j}}$$

where p_k is the probability, judged by the network, that the input sample x belongs to the kth class; j = 1, 2, …, Q and Q = 10 denotes the 10 target classes contained in the MSTAR data; f_j is an intermediate variable with the expression:

$$f_j = \frac{\lambda\,\|W_j\|\,\|L_{11}\|\cos\theta_j + \|W_j\|\,\|L_{11}\|\,\psi(\theta_j)}{1+\lambda}$$

where λ is a mixing coefficient satisfying λ = max{λ_min, λ_0(1 + γ·n_iter)^{-p}}, in which λ_min = 1 is the minimum value of the mixing coefficient, λ_0 = 100 is the initial mixing coefficient, n_iter is the current iteration number of the network, p = 35 is the exponent parameter, and γ = 10^{-5} controls the decay rate of the exponential term; L_11 is the fourth-layer normalized map extracted from the input image, W_j is the weight vector corresponding to the jth element of the feature vector L_12, and θ_j is the angle between the weight vector W_j and the fourth-layer normalized map L_11; ψ(θ_j) = (-1)^k cos(mθ_j) - 2k is the transformation function, where m = 4 and k ∈ [0, m-1] is an integer;
(2e) inputting the cut training samples into an improved convolutional neural network model for training to obtain a trained network model;
(3) and (3) sample testing:
(3a) removing the background clutter of each image in the test sample set to obtain a processed test sample;
(3b) cutting the size of each SAR image in the test sample from which the background noise is removed into 60 multiplied by 60, and enabling the target area to be located in the center of the picture;
(3c) inputting the cut test sample into the trained improved convolutional neural network model for classification, and obtaining the identification accuracy according to the true category of the test sample and the category judged by the network:

$$\mathrm{Accuracy} = \frac{1}{N}\sum_{i=1}^{N}\mathbb{1}\{t_i = \mathrm{label}_i\}$$

where N is the number of input test samples, t_i is the category identified by the network for the ith test sample, and label_i is the true category of the ith test sample.
2. The method of claim 1, wherein the background clutter of each image in the training sample set is removed in step (2a) as follows:
(2a) For each input SAR image I_0, perform a 0.5-power transformation to enhance the separability of background clutter and shadow regions, obtaining the transformed image I_1;
(2b) Apply Wiener filtering to the transformed image I_1 to smooth the target and the background clutter, obtaining the processed image I_2;
(2c) Slide a square window of side 15 over I_2 with step 1, compute the mean of the pixels inside each window position, and take the maximum a over all these means; select the 5 × 5 region at the upper-left corner of I_2 and compute its pixel mean b;
set the threshold t = 0.35 × a + 0.65 × b. The pixels of I_2 with values greater than t form the target region, and all pixels in this region are set to 1; the pixels of I_2 with values smaller than t form the background region, and all pixels in this region are set to 0. This labels I_2 into target and background, yielding the binary image I_3;
(2d) Apply a morphological closing operation to the binary image I_3 using the following formula, fusing gaps at the edge of the target region and filling internal defects to obtain the fused and filled image I_4:

$$I_4 = (I_3 \oplus B) \ominus B$$

where $\oplus$ and $\ominus$ denote the dilation and erosion operators, respectively, and $B$ is the structuring element;
(2e) Label all connected components of the fused and filled image I_4; select the component with the largest area as the new target region and set all of its pixels to 1; take the remaining area as the new background region and set all of its pixels to 0, obtaining the new processed image I_5;
(2f) Point-multiply the new processed image I_5 with the original image I_0 to obtain the image with background clutter removed.
CN201711257577.0A 2017-12-04 2017-12-04 SAR vehicle target identification method based on improved convolutional neural network Active CN108280460B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711257577.0A CN108280460B (en) 2017-12-04 2017-12-04 SAR vehicle target identification method based on improved convolutional neural network


Publications (2)

Publication Number Publication Date
CN108280460A CN108280460A (en) 2018-07-13
CN108280460B true CN108280460B (en) 2021-07-27

Family

ID=62801308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711257577.0A Active CN108280460B (en) 2017-12-04 2017-12-04 SAR vehicle target identification method based on improved convolutional neural network

Country Status (1)

Country Link
CN (1) CN108280460B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001014907A1 (en) * 1999-08-26 2001-03-01 Raytheon Company Target acquisition system and radon transform based method for target azimuth aspect estimation
US6437728B1 (en) * 1999-11-05 2002-08-20 Lockheed Martin Corporation A-scan ISAR target recognition system and method
CN101526995A (en) * 2009-01-19 2009-09-09 Xidian University Synthetic aperture radar target identification method based on diagonal subclass discriminant analysis
CN103761731A (en) * 2014-01-02 2014-04-30 Henan University of Science and Technology Small infrared aerial target detection method based on non-subsampled contourlet transform
CN104732243A (en) * 2015-04-09 2015-06-24 Xidian University SAR target identification method based on CNN
CN106570477A (en) * 2016-10-28 2017-04-19 Institute of Automation, Chinese Academy of Sciences Vehicle model recognition model construction method and vehicle model recognition method based on deep learning
CN107194336A (en) * 2017-05-11 2017-09-22 Xidian University Polarimetric SAR image classification method based on semi-supervised deep distance metric network
CN107247930A (en) * 2017-05-26 2017-10-13 Xidian University SAR image target detection method based on CNN and selective attention mechanism

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Jun Ding et al.; "Convolutional Neural Network With Data Augmentation for SAR Target Recognition"; IEEE Geoscience and Remote Sensing Letters; 2016-03-30; Vol. 13, No. 3; pp. 364-368 *
Sizhe Chen et al.; "SAR Target Recognition Based on Deep Learning"; 2014 International Conference on Data Science and Advanced Analytics (DSAA); 2015-03-12; pp. 541-547 *
Li Kaiming et al.; "A Survey of Ground Vehicle Target Recognition"; Acta Electronica Sinica; 2014-05-31; Vol. 42, No. 3; pp. 538-546 *

Also Published As

Publication number Publication date
CN108280460A (en) 2018-07-13

Similar Documents

Publication Publication Date Title
CN108280460B (en) SAR vehicle target identification method based on improved convolutional neural network
CN108776779B (en) SAR sequence image target identification method based on convolutional recurrent network
CN108510467B (en) SAR image target identification method based on depth deformable convolution neural network
CN107229918B (en) SAR image target detection method based on full convolution neural network
CN107564025B (en) Electric power equipment infrared image semantic segmentation method based on deep neural network
CN109271856B (en) Optical remote sensing image target detection method based on expansion residual convolution
CN109299688B (en) Ship detection method based on deformable fast convolution neural network
CN108491854B (en) Optical remote sensing image target detection method based on SF-RCNN
CN107274401B (en) High-resolution SAR image ship detection method based on visual attention mechanism
CN108921030B (en) SAR automatic target recognition method
CN110543837A (en) Visible-light airport airplane detection method based on potential target points
CN104715252B (en) License plate character segmentation method combining dynamic templates and pixels
CN110163275B (en) SAR image target classification method based on deep convolutional neural network
CN108305260B (en) Method, device and equipment for detecting angular points in image
Zhang et al. Study on traffic sign recognition by optimized Lenet-5 algorithm
CN110298227B (en) Vehicle detection method in unmanned aerial vehicle aerial image based on deep learning
CN110569782A (en) Target detection method based on deep learning
CN103049763A (en) Context-constraint-based target identification method
CN110008900B (en) Method for extracting candidate target from visible light remote sensing image from region to target
CN110659601B (en) Center-point-based dense vehicle detection method for remote sensing images using a deep fully convolutional network
CN105989334A (en) Monocular vision-based road detection method
CN111461039A (en) Landmark identification method based on multi-scale feature fusion
CN112784757B (en) Marine SAR ship target significance detection and identification method
Liu et al. Research on vehicle object detection algorithm based on improved YOLOv3 algorithm
CN112949655A (en) Fine-grained image recognition method combined with attention mixed cutting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant