CN112446357A - SAR automatic target recognition method based on capsule network

SAR automatic target recognition method based on capsule network

Info

Publication number
CN112446357A
CN112446357A (application CN202011478677.8A)
Authority
CN
China
Prior art keywords
features
convolution
feature
layer
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011478677.8A
Other languages
Chinese (zh)
Other versions
CN112446357B (en)
Inventor
YU Xuelian
REN Haohao
SUN Xindong
CHEN Zhiling
ZHOU Yun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202011478677.8A
Publication of CN112446357A
Application granted
Publication of CN112446357B
Expired - Fee Related
Anticipated expiration

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a SAR automatic target recognition method based on a capsule network, belonging to the field of radar target recognition. The main process is as follows: the original SAR image is first cropped, and the cropped image is passed through a simple convolution layer; multi-scale features are then extracted with convolution kernels of different dilation rates; an adaptive feature refinement module enhances the important features, and the enhanced multi-scale features are fused through a pixel-by-pixel fusion strategy; the fused features are input into a capsule-unit-based network layer, which learns more abstract features while preserving the spatial relationships among them; finally, the features output by the encoder network are input into a decoder network consisting of four transposed convolution layers, which reconstructs the SAR target and thereby improves the learning capability of the encoder. The SAR target recognition result is output at the last layer of the encoder network. Compared with existing deep convolutional neural network algorithms, the method achieves higher accuracy.

Description

SAR automatic target recognition method based on capsule network
Technical Field
The invention belongs to the field of synthetic aperture radar (SAR) automatic target recognition, and particularly relates to a SAR automatic target recognition method based on deep learning.
Background
Synthetic aperture radar has been widely used in geological exploration, environmental monitoring, military target detection and recognition, and other fields, owing to its all-weather, day-and-night, high-resolution imaging capability. Automatic target recognition (ATR) is one of the important applications of SAR image interpretation.
In recent years, with the continuous development of deep learning, various deep-learning-based ATR algorithms have been proposed in the field of SAR automatic target recognition, achieving better recognition performance than conventional methods under Standard Operating Conditions (SOC). Under Extended Operating Conditions (EOC), however, problems remain. In noisy environments, complex noise seriously degrades image feature extraction. In most real SAR scenarios it is difficult to collect a large number of training samples, and with insufficient training samples a classification model easily overfits. Partial target occlusion and camouflage are common in combat scenarios, and extracting robust discriminative features from partially occluded targets is challenging. In view of these problems, the present invention provides a SAR target recognition method based on a convolutional capsule network. The method comprises two sub-networks: an encoder network and a decoder network. The encoder network extracts robust features from the SAR image, while the decoder network encourages the encoder network to learn discriminative features.
Disclosure of Invention
Aiming at the above problems in SAR ATR, the invention provides a convolutional-capsule-network-based method to realize high-accuracy SAR automatic target recognition.
The technical scheme adopted by the invention is as follows: a deep classification model is designed, composed of an encoder network and a decoder network. Specifically, considering that a depth model learns local features such as texture and shape in its lower layers, and in order to mitigate the influence of noise on SAR target recognition, the method extracts multi-scale features through several dilated (atrous) convolutions in the lower layers of the encoder. Considering that the multi-scale features may contain redundant information, an adaptive feature refinement module is embedded on each multi-scale feature channel to adaptively enhance useful features and suppress useless ones. Considering that the top layers of a depth model can learn the structural characteristics of image features, two capsule-unit-based feature-structure-preserving layers are deployed at the top of the encoder network to learn more abstract features and preserve the spatial relationships between different features. The decoder network, composed of multiple transposed convolution layers, reconstructs the image to further improve the learning capability of the encoder network.
The technical scheme of the invention is a capsule network-based SAR automatic target recognition method, which comprises the following steps:
step 1: cutting the acquired SAR image into 64 × 64 slices to reduce the influence of redundant background on feature extraction;
step 2: performing preliminary feature learning through a convolution layer with a kernel size of 3 × 3 to obtain 32 feature maps of 60 × 60;
step 3: extracting features of different scales by applying 3 convolution kernels with different dilation rates to each feature map, the kernel size being 5 × 5 and the dilation rates being 2, 3 and 5 respectively;
step 4: passing the features of each scale through an adaptive feature refinement module, which processes its input as follows:
Q = σ(MLP(AvgPool(F)) + MLP(MaxPool(F)))
F_c = Q ⊗ F
S = σ(Conv7×7([AvgPool(F_c); MaxPool(F_c)]))
F_out = S ⊗ F_c
where F denotes the input of the adaptive feature refinement module, i.e. one of the scale features obtained in step 3; AvgPool and MaxPool denote average pooling and maximum pooling respectively; MLP denotes a perceptron with one hidden layer; σ is the sigmoid function; Q denotes the channel weight; ⊗ denotes the pixel-by-pixel fusion operation; Conv7×7 denotes a 7 × 7 convolution layer; S denotes the spatial weight; and F_out is the final output of the adaptive feature refinement module;
step 5: upsampling the discriminative features obtained in step 4 to the same size and fusing them, the fused feature F_fuse being:
F_fuse = T(F_out1) + T(F_out2) + T(F_out3)
where T denotes the upsampling operation and F_out1, F_out2, F_out3 denote the discriminative features of the 3 scales obtained in step 4;
step 6: inputting the fused multi-scale features into the capsule-unit-based network layer;
step 6.1: applying a convolution operation to the fused feature F_fuse obtained in step 5 and converting the result into vector features Cap;
step 6.2: applying the following nonlinear (squashing) transformation to the Cap features:
u_i = (||Cap_i||^2 / (1 + ||Cap_i||^2)) · (Cap_i / ||Cap_i||)
where Cap_i denotes the i-th vector feature and u_i is the squashed vector feature;
step 6.3: learning the feature spatial relationships with the following formulas:
û_j|i = W_ij · u_i
s_j = Σ_i c_ij û_j|i
where W_ij is a feature mapping matrix learned by the back-propagation algorithm and c_ij is a capsule coupling coefficient obtained by the dynamic routing algorithm;
step 6.4: outputting the features s_j obtained in step 6.3 through the SAR capsule layer;
step 7: inputting the features output in step 6 into a decoder network consisting of four transposed convolution layers with the following parameters: first layer: kernel size 7 × 7, stride 1, padding 0; second layer: kernel size 5 × 5, stride 2, padding 1; third layer: kernel size 5 × 5, stride 2, padding 1; fourth layer: kernel size 4 × 4, stride 2, padding 0.
The invention extracts multi-scale features by introducing dilated convolution, which mitigates the influence of noise on target recognition. In addition, since the multi-scale features may contain components that carry little information or are useless for discrimination, an adaptive feature refinement module is embedded in each multi-scale channel to perform adaptive feature weighting, further improving target recognition performance. Training a classification model based on a convolutional neural network normally requires a large number of training samples, yet capturing a large number of SAR images is difficult in most cases. To address this problem, a feature-spatial-relationship-preserving layer built from capsule units is designed to alleviate the model's excessive dependence on the training sample size.
Drawings
FIG. 1 is the network architecture of the convolutional capsule network;
FIG. 2 is the architecture of the feature refinement module;
FIG. 3 shows the recognition results of the proposed method with small numbers of training samples;
FIG. 4 shows the recognition results of the proposed method under different degrees of partial target occlusion.
Detailed Description
Hereinafter, an embodiment of the present disclosure is described in detail in order to better illustrate its technical points. The invention relates to a SAR target recognition method based on a convolutional capsule network; each step is implemented as follows.
Step 1: cutting the acquired SAR image into 64 × 64 slices to reduce the influence of redundant background on feature extraction;
step 2: carrying out preliminary feature learning through a convolution layer with the kernel size of 3 x 3 to obtain 32 feature maps of 60 x 60;
and step 3: features of different scales are extracted using a plurality of convolution kernels of different expansion ratios. The expansion convolution used in the invention is used for extracting features of different scales, specifically, the convolution kernel size is 5 multiplied by 5, and the expansion rates are 2, 3 and 5 respectively;
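The following is a minimal PyTorch sketch of this multi-scale dilated-convolution stage; the kernel size and dilation rates follow the text, while the channel counts and the absence of padding are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedConv(nn.Module):
    """Three parallel 5x5 convolutions with dilation rates 2, 3 and 5 (step 3)."""

    def __init__(self, in_channels: int = 32, out_channels: int = 32):
        super().__init__()
        # One branch per dilation rate; each sees the same input but has a
        # different receptive field, yielding features of different scales.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, out_channels, kernel_size=5, dilation=d)
            for d in (2, 3, 5)
        )

    def forward(self, x: torch.Tensor):
        # Without padding, each dilation rate produces a different spatial
        # size, which is why step 5 upsamples the features before fusing them.
        return [branch(x) for branch in self.branches]

if __name__ == "__main__":
    x = torch.randn(1, 32, 60, 60)  # 32 feature maps of 60 x 60 from step 2
    for f in MultiScaleDilatedConv()(x):
        print(f.shape)              # 52x52, 48x48 and 40x40 maps
```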
Step 4: the multi-scale features are input to an adaptive feature refinement module, which adaptively enhances useful features and suppresses useless information. Let F ∈ R^(C×H×W) be a multi-scale feature; the channel weight Q ∈ R^(C×1×1) and the spatial weight S ∈ R^(1×H×W) of the adaptive feature refinement module are computed as follows:
Q = σ(MLP(AvgPool(F)) + MLP(MaxPool(F)))
F_c = Q ⊗ F
S = σ(Conv7×7([AvgPool(F_c); MaxPool(F_c)]))
F_out = S ⊗ F_c
where F_out is the discriminative feature learned by the adaptive feature refinement module, σ is the sigmoid function, MLP is a perceptron with one hidden layer, ⊗ denotes the pixel-by-pixel fusion operation, AvgPool and MaxPool denote average pooling and maximum pooling respectively, and Conv7×7 denotes a 7 × 7 convolution layer. A sketch of this module follows.
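The sketch below assumes a PyTorch implementation with a hidden-layer reduction ratio of 8; the ratio is not specified in the text and is an assumption.

```python
import torch
import torch.nn as nn

class AdaptiveFeatureRefinement(nn.Module):
    """Channel attention (Q) followed by spatial attention (S), per step 4."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Shared MLP with one hidden layer, used for the channel weight Q.
        self.mlp = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # 7x7 convolution over the two pooled maps, used for the spatial weight S.
        self.conv7x7 = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = f.shape
        # Q = sigmoid(MLP(AvgPool(F)) + MLP(MaxPool(F)))
        avg = self.mlp(nn.functional.adaptive_avg_pool2d(f, 1))
        mx = self.mlp(nn.functional.adaptive_max_pool2d(f, 1))
        q = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        fc = q * f  # F_c = Q (x) F, broadcast over the spatial dimensions
        # S = sigmoid(Conv7x7([AvgPool(F_c); MaxPool(F_c)])), pooling over channels
        pooled = torch.cat(
            [fc.mean(dim=1, keepdim=True), fc.amax(dim=1, keepdim=True)], dim=1
        )
        s = torch.sigmoid(self.conv7x7(pooled))
        return s * fc  # F_out = S (x) F_c
```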
Step 5: fuse the refined multi-scale features through a pixel-by-pixel fusion strategy; the fused feature is:
F_fuse = T(F_out1) + T(F_out2) + T(F_out3)
where T denotes the upsampling operation and F_out1, F_out2, F_out3 denote the 3 multi-scale discriminative features obtained in step 4. A sketch of this fusion follows.
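A minimal sketch of the fusion step; bilinear interpolation is assumed for the upsampling operator T, which the text does not specify, and the first feature is assumed to carry the largest spatial size.

```python
import torch.nn.functional as F

def fuse_multiscale(f1, f2, f3):
    """F_fuse = T(F_out1) + T(F_out2) + T(F_out3), with T as upsampling."""
    target = f1.shape[-2:]  # upsample all features to the largest map
    def up(f):
        return F.interpolate(f, size=target, mode="bilinear", align_corners=False)
    return up(f1) + up(f2) + up(f3)
```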
Step 6: input the fused multi-scale features into the capsule-unit-based network layer.
Step 6.1: apply a convolution operation to the fused feature F_fuse obtained in step 5 and convert the result into vector features Cap.
Step 6.2: apply the following nonlinear (squashing) transformation to the Cap features:
u_i = (||Cap_i||^2 / (1 + ||Cap_i||^2)) · (Cap_i / ||Cap_i||)
where Cap_i denotes the i-th vector feature and u_i is the squashed vector feature output.
Step 6.3: learn the feature spatial relationships with the following formulas:
û_j|i = W_ij · u_i
s_j = Σ_i c_ij û_j|i
where W_ij is a feature mapping matrix learned by the back-propagation algorithm and c_ij is a capsule coupling coefficient obtained by the dynamic routing algorithm.
Step 6.4: output the features s_j obtained in step 6.3 through the SAR capsule layer.
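A sketch of steps 6.2 to 6.4, assuming the standard capsule squashing nonlinearity and a three-iteration dynamic routing pass; the capsule counts, dimensions and iteration number below are assumptions.

```python
import torch

def squash(cap: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Step 6.2: u = (||Cap||^2 / (1 + ||Cap||^2)) * Cap / ||Cap||."""
    norm_sq = (cap ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * cap / (norm_sq.sqrt() + 1e-8)

def dynamic_routing(u_hat: torch.Tensor, iterations: int = 3) -> torch.Tensor:
    """Steps 6.3-6.4: route prediction vectors u_hat = W_ij . u_i of shape
    (batch, n_in, n_out, d_out) to the output capsules s_j."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)  # routing logits
    for _ in range(iterations):
        c = torch.softmax(b, dim=2)                  # coefficients c_ij
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)     # s_j = sum_i c_ij u_hat_j|i
        v = squash(s)                                # squashed output capsules
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1) # agreement update
    return v

# Example: 1152 primary capsules routed to 10 class capsules of dimension 16.
u_hat = torch.randn(1, 1152, 10, 16)
print(dynamic_routing(u_hat).shape)  # torch.Size([1, 10, 16])
```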
Step 7: the features output by the encoder network are input to a decoder network consisting of four transposed convolution layers, in order to encourage the encoder network to learn more discriminative features from the SAR image.
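A sketch of the decoder, using the kernel/stride/padding values listed in step 7; the channel widths and the 1 × 1 input map are assumptions. Notably, with a 1 × 1 input these four layers expand the map to exactly 64 × 64, matching the input slice size.

```python
import torch
import torch.nn as nn

decoder = nn.Sequential(
    nn.ConvTranspose2d(16, 128, kernel_size=7, stride=1, padding=0),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(128, 64, kernel_size=5, stride=2, padding=1),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(64, 32, kernel_size=5, stride=2, padding=1),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=0),
)

x = torch.randn(1, 16, 1, 1)  # capsule output reshaped to a 1 x 1 map
print(decoder(x).shape)       # torch.Size([1, 1, 64, 64]): a 64 x 64 reconstruction
```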
In this embodiment, the method is validated on the publicly available MSTAR benchmark dataset, and achieves high-accuracy SAR target recognition compared with other deep networks. Four experiments were designed:
(1) Verify the recognition capability of the convolutional capsule network under standard operating conditions, using ten classes of ground military targets: BMP2, BRDM_2, BTR70, BTR60, T72, 2S1, D7, T62, ZIL131 and ZSU23_4. SAR images collected at a 17° depression angle are used as the training set, and data collected at a 15° depression angle as the test set; the experimental data are described in Table 1. To demonstrate the discrimination capability of the proposed method, three deep-learning-based ATR algorithms are compared in the experiments: DCNN, A-ConvNets and MFCNNs. The results are shown in Table 2. As can be seen from Table 2, the proposed model obtains the best recognition performance among the compared methods.
(2) Verify the recognition capability of the proposed convolutional capsule network under noise interference. With the same training data as in experiment (1), the test images were contaminated with noise to different degrees using a simulation method: different ratios of the original pixel values in each test image are replaced with random numbers in (0, 1). In this experiment the noise contamination ratio was varied from 1% to 15%; the results are shown in Table 3. As the noise contamination level increases, the proposed method maintains higher recognition accuracy than the compared methods, demonstrating strong noise robustness. A sketch of the contamination protocol follows.
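The NumPy sketch below illustrates the contamination protocol described above; the function name and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def contaminate(image: np.ndarray, ratio: float, rng=None) -> np.ndarray:
    """Replace `ratio` of the pixels with uniform random numbers in (0, 1)."""
    rng = rng or np.random.default_rng()
    noisy = image.astype(float)
    n = int(round(ratio * image.size))
    idx = rng.choice(image.size, size=n, replace=False)  # positions to corrupt
    noisy.flat[idx] = rng.uniform(0.0, 1.0, size=n)
    return noisy

# Example: corrupt 15% of the pixels of a 64 x 64 test chip.
chip = np.zeros((64, 64))
print((contaminate(chip, 0.15) != 0).mean())  # ~0.15
```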
(3) Verify the recognition capability of the proposed method when training samples are insufficient. Specifically, a small number of samples are randomly drawn from the original training set to simulate a limited-training-sample scenario, and the network is trained with these limited samples; the results are shown in FIG. 3. The proposed method remains superior to the compared methods when training samples are insufficient. In particular, with only 10% of the original training data, the proposed method still achieves a recognition accuracy above 80%, whereas the other methods fall below 80%. This shows that the proposed method retains its advantage over existing methods in limited-training-sample scenarios.
(4) Verify the recognition capability of the proposed method when part of the target is occluded. In this experiment, a random erasure method is used to simulate partially occluded target data; the recognition results of the different networks under different occlusion levels are shown in FIG. 4. The proposed method is consistently superior to the compared methods at all occlusion levels. Target occlusion and camouflage are very common in real military combat scenarios, so this result indicates the potential application value of the invention. A sketch of the random-erasure simulation follows.
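A sketch of the random-erasure occlusion used above: a rectangular region covering roughly `level` of the chip is zeroed out. The square-patch shape and placement policy are assumptions; the text only states that random erasure is used.

```python
import numpy as np

def random_erase(image: np.ndarray, level: float, rng=None) -> np.ndarray:
    """Zero out a random square patch covering about `level` of the image."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    side = max(1, int(round(np.sqrt(level * h * w))))  # square patch side
    side = min(side, h, w)
    top = rng.integers(0, h - side + 1)
    left = rng.integers(0, w - side + 1)
    occluded = image.copy()
    occluded[top:top + side, left:left + side] = 0.0
    return occluded

# Example: occlude roughly 20% of a 64 x 64 chip.
chip = np.ones((64, 64))
print(1.0 - random_erase(chip, 0.20).mean())  # ~0.20
```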
Table 1 description of the data set used in the experiment
(Table 1 is reproduced as an image in the original publication.)
Table 2 overall recognition accuracy for four network architectures under standard operating conditions
Network name | DCNN | MFCNNs | A-ConvNets | Algorithm of the invention
Recognition rate (%) | 92.30 | 95.52 | 95.27 | 99.18
Table 3 overall recognition accuracy of four network architectures under noisy interference conditions
(Table 3 is reproduced as an image in the original publication.)

Claims (1)

1. A capsule-network-based SAR automatic target recognition method, comprising the following steps:
step 1: cutting the acquired SAR image into 64 × 64 slices to reduce the influence of redundant background on feature extraction;
step 2: performing preliminary feature learning through a convolution layer with a kernel size of 3 × 3 to obtain 32 feature maps of 60 × 60;
step 3: extracting features of different scales by applying 3 convolution kernels with different dilation rates to each feature map, the kernel size being 5 × 5 and the dilation rates being 2, 3 and 5 respectively;
step 4: passing the features of each scale through an adaptive feature refinement module, which processes its input as follows:
Q = σ(MLP(AvgPool(F)) + MLP(MaxPool(F)))
F_c = Q ⊗ F
S = σ(Conv7×7([AvgPool(F_c); MaxPool(F_c)]))
F_out = S ⊗ F_c
where F denotes the input of the adaptive feature refinement module, i.e. one of the scale features obtained in step 3; AvgPool and MaxPool denote average pooling and maximum pooling respectively; MLP denotes a perceptron with one hidden layer; σ is the sigmoid function; Q denotes the channel weight; ⊗ denotes the pixel-by-pixel fusion operation; Conv7×7 denotes a 7 × 7 convolution layer; S denotes the spatial weight; and F_out is the final output of the adaptive feature refinement module;
step 5: upsampling the discriminative features obtained in step 4 to the same size and fusing them, the fused feature F_fuse being:
F_fuse = T(F_out1) + T(F_out2) + T(F_out3)
where T denotes the upsampling operation and F_out1, F_out2, F_out3 denote the discriminative features of the 3 scales obtained in step 4;
step 6: inputting the fused multi-scale features into the capsule-unit-based network layer;
step 6.1: applying a convolution operation to the fused feature F_fuse obtained in step 5 and converting the result into vector features Cap;
step 6.2: applying the following nonlinear (squashing) transformation to the Cap features:
u_i = (||Cap_i||^2 / (1 + ||Cap_i||^2)) · (Cap_i / ||Cap_i||)
where Cap_i denotes the i-th vector feature and u_i is the squashed vector feature;
step 6.3: learning the feature spatial relationships with the following formulas:
û_j|i = W_ij · u_i
s_j = Σ_i c_ij û_j|i
where W_ij is a feature mapping matrix learned by the back-propagation algorithm and c_ij is a capsule coupling coefficient obtained by the dynamic routing algorithm;
step 6.4: outputting the features s_j obtained in step 6.3 through the SAR capsule layer;
step 7: inputting the features output in step 6 into a decoder network consisting of four transposed convolution layers with the following parameters: first layer: kernel size 7 × 7, stride 1, padding 0; second layer: kernel size 5 × 5, stride 2, padding 1; third layer: kernel size 5 × 5, stride 2, padding 1; fourth layer: kernel size 4 × 4, stride 2, padding 0.
CN202011478677.8A 2020-12-15 2020-12-15 SAR automatic target recognition method based on capsule network Expired - Fee Related CN112446357B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011478677.8A CN112446357B (en) 2020-12-15 2020-12-15 SAR automatic target recognition method based on capsule network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011478677.8A CN112446357B (en) 2020-12-15 2020-12-15 SAR automatic target recognition method based on capsule network

Publications (2)

Publication Number Publication Date
CN112446357A 2021-03-05
CN112446357B CN112446357B (en) 2022-05-03

Family

ID=74739412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011478677.8A Expired - Fee Related CN112446357B (en) 2020-12-15 2020-12-15 SAR automatic target recognition method based on capsule network

Country Status (1)

Country Link
CN (1) CN112446357B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111975A (en) * 2021-05-12 2021-07-13 合肥工业大学 SAR image target classification method based on multi-kernel scale convolutional neural network
CN113283390A (en) * 2021-06-24 2021-08-20 中国人民解放军国防科技大学 SAR image small sample target identification method based on gating multi-scale matching network
CN113807206A (en) * 2021-08-30 2021-12-17 电子科技大学 SAR image target identification method based on denoising task assistance
CN114241245A (en) * 2021-12-23 2022-03-25 西南大学 Image classification system based on residual error capsule neural network
CN114511644A (en) * 2022-01-21 2022-05-17 电子科技大学 Self-adaptive digital camouflage method based on deep learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490107A (en) * 2019-08-06 2019-11-22 北京工商大学 A kind of fingerprint identification technology based on capsule neural network
CN110674748A (en) * 2019-09-24 2020-01-10 腾讯科技(深圳)有限公司 Image data processing method, image data processing device, computer equipment and readable storage medium
US20200025877A1 (en) * 2018-07-18 2020-01-23 Qualcomm Incorporated Object verification using radar images
CN110929735A (en) * 2019-10-17 2020-03-27 杭州电子科技大学 Rapid significance detection method based on multi-scale feature attention mechanism
CN111967537A (en) * 2020-04-13 2020-11-20 江西理工大学 SAR target classification method based on two-way capsule network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200025877A1 (en) * 2018-07-18 2020-01-23 Qualcomm Incorporated Object verification using radar images
CN110490107A (en) * 2019-08-06 2019-11-22 北京工商大学 A kind of fingerprint identification technology based on capsule neural network
CN110674748A (en) * 2019-09-24 2020-01-10 腾讯科技(深圳)有限公司 Image data processing method, image data processing device, computer equipment and readable storage medium
CN110929735A (en) * 2019-10-17 2020-03-27 杭州电子科技大学 Rapid significance detection method based on multi-scale feature attention mechanism
CN111967537A (en) * 2020-04-13 2020-11-20 江西理工大学 SAR target classification method based on two-way capsule network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZONGYONG CUI et al.: "Dense Attention Pyramid Networks for Multi-Scale Ship Detection in SAR Images", IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 11, Nov. 2019 *
FENG Weiye et al.: "Synthetic Aperture Radar Image Classification Method Based on Capsule Neural Network", Science Technology and Engineering *
SHEN Wei et al.: "Research on SAR Target Recognition Methods Based on Machine Learning", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111975A (en) * 2021-05-12 2021-07-13 合肥工业大学 SAR image target classification method based on multi-kernel scale convolutional neural network
CN113283390A (en) * 2021-06-24 2021-08-20 中国人民解放军国防科技大学 SAR image small sample target identification method based on gating multi-scale matching network
CN113283390B (en) * 2021-06-24 2022-03-08 中国人民解放军国防科技大学 SAR image small sample target identification method based on gating multi-scale matching network
CN113807206A (en) * 2021-08-30 2021-12-17 电子科技大学 SAR image target identification method based on denoising task assistance
CN113807206B (en) * 2021-08-30 2023-04-07 电子科技大学 SAR image target identification method based on denoising task assistance
CN114241245A (en) * 2021-12-23 2022-03-25 西南大学 Image classification system based on residual error capsule neural network
CN114241245B (en) * 2021-12-23 2024-05-31 西南大学 Image classification system based on residual capsule neural network
CN114511644A (en) * 2022-01-21 2022-05-17 电子科技大学 Self-adaptive digital camouflage method based on deep learning

Also Published As

Publication number Publication date
CN112446357B (en) 2022-05-03

Similar Documents

Publication Publication Date Title
CN112446357B (en) SAR automatic target recognition method based on capsule network
CN112750140B (en) Information mining-based disguised target image segmentation method
CN108122008B (en) SAR image recognition method based on sparse representation and multi-feature decision-level fusion
CN107944483B (en) Multispectral image classification method based on dual-channel DCGAN and feature fusion
CN109741340B (en) Ice cover radar image ice layer refined segmentation method based on FCN-ASPP network
CN110490265B (en) Image steganalysis method based on double-path convolution and feature fusion
CN110516728B (en) Polarized SAR terrain classification method based on denoising convolutional neural network
CN109359661B (en) Sentinel-1 radar image classification method based on convolutional neural network
CN114581752B (en) Camouflage target detection method based on context awareness and boundary refinement
CN113657491A (en) Neural network design method for signal modulation type recognition
CN112489168A (en) Image data set generation and production method, device, equipment and storage medium
CN115984979A (en) Unknown-countermeasure-attack-oriented face counterfeiting identification method and device
CN117058558A (en) Remote sensing image scene classification method based on evidence fusion multilayer depth convolution network
Xu et al. LMO-YOLO: A ship detection model for low-resolution optical satellite imagery
CN112149526A (en) Lane line detection method and system based on long-distance information fusion
CN114049537B (en) Countermeasure sample defense method based on convolutional neural network
CN115909172A (en) Depth-forged video detection, segmentation and identification system, terminal and storage medium
CN117475145B (en) Multi-scale remote sensing image semantic segmentation method and system integrating multiple attention mechanisms
CN112598032B (en) Multi-task defense model construction method for anti-attack of infrared image
CN112560034B (en) Malicious code sample synthesis method and device based on feedback type deep countermeasure network
CN112784777A (en) Unsupervised hyperspectral image change detection method based on antagonistic learning
CN112613354A (en) Heterogeneous remote sensing image change detection method based on sparse noise reduction self-encoder
CN112800882A (en) Mask face posture classification method based on weighted double-flow residual error network
CN113807206B (en) SAR image target identification method based on denoising task assistance
CN114049551B (en) ResNet 18-based SAR raw data target identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220503