CN113283390B - SAR image small sample target identification method based on gating multi-scale matching network

SAR image small sample target identification method based on gating multi-scale matching network

Info

Publication number: CN113283390B
Application number: CN202110716103.8A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN113283390A
Legal status: Active (granted)
Inventors: 张新禹, 刘旗, 刘永祥, 姜卫东, 黎湘, 张双辉, 霍凯
Applicant and assignee: National University of Defense Technology

Classifications

    • G06V20/13 Satellite images (scenes; terrestrial scenes)
    • G06F18/214 Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/253 Fusion techniques of extracted features
    • G06N3/045 Combinations of networks (neural network architecture)
    • G06N3/08 Learning methods (neural networks)
    • G06V10/40 Extraction of image or video features

Abstract

The invention belongs to the field of radar target recognition, and specifically relates to a SAR image target recognition method based on a gated multi-scale matching network under small-sample conditions. Specifically, a multi-scale feature extraction module extracts multi-scale features from the different convolutional layers of the matching network, and a gating unit assigns the features of the different convolutional layers different weights according to the recognition task, so that features from different convolutional layers are selected per task for SAR image target recognition, ultimately improving the generalization ability and recognition accuracy of the recognition model.

Description

SAR image small sample target identification method based on gating multi-scale matching network
Technical Field
The invention belongs to the field of radar target recognition, and specifically relates to a Synthetic Aperture Radar (SAR) image target recognition method based on a gated multi-scale matching network under small-sample conditions.
Background
A Synthetic Aperture Radar (SAR) system transmits electromagnetic waves toward a target, receives the echoes of the target's scattering points, and coherently processes those radar echoes to obtain a high-resolution image of the target. Unlike optical and infrared imaging systems, the SAR system, thanks to the penetrability of electromagnetic waves, can operate day and night and in all weather, unaffected by natural conditions such as weather and light. Moreover, SAR systems can be mounted on vehicles, aircraft, and satellites to image a target area, and are widely used in both civil and military fields.
SAR images carry scattering-point distribution information in both the range and cross-range directions, so in recent years Radar Automatic Target Recognition (RATR) based on SAR images has become a research hotspot in the radar field and drawn the attention of many researchers. A typical target recognition method can be roughly divided into three parts: data preprocessing, feature extraction, and classifier design. Traditional SAR image radar target recognition methods manually extract or design features with strong expressive power, then design a classifier suited to those features to classify and recognize SAR targets. These hand-crafted-feature methods achieve automatic SAR target recognition to a certain extent and have produced some results, but they require manual feature extraction or design, which costs substantial manpower and material resources. Although researchers have worked extensively in recent years to pursue features with stronger representational power, the semantic gap between low-level visual features and high-level semantic representations limits the performance of traditional SAR target recognition methods. Meanwhile, deep learning has developed rapidly and achieved satisfying results in many fields. In SAR image radar automatic target recognition, deep learning methods, with their good generalization performance, have been widely applied by researchers to SAR target recognition tasks and show clear advances.
Deep-learning-based SAR image target recognition automatically extracts features with strong representational power, avoiding the tedious manual feature extraction of traditional SAR image recognition methods and improving recognition performance to a certain extent. However, these deep-learning-based methods need large amounts of labeled data for training; when training data are insufficient, severe overfitting often results, ultimately reducing the model's generalization ability and recognition performance.
In fact, the small-sample problem is ubiquitous in SAR image target recognition. The small-sample problem in this field means that, with only a few SAR images available, a model must learn quickly from a small number of training samples while retaining good generalization ability. It arises mainly for two reasons. First, in the military field, few SAR images of a non-cooperative target can be acquired, yet in emergencies a model is often required to recognize the target quickly and accurately from a small amount of sample information. Second, labeling SAR image targets requires prior knowledge and experience, so only people with radar expertise and working experience can participate in the labeling work. A new SAR image recognition method is therefore needed to improve the generalization ability of SAR image recognition models under small-sample conditions and to raise the accuracy of SAR image target recognition in that setting.
The training procedure under small-sample conditions differs greatly from that of conventional deep learning. Conventional deep learning can be described as follows: a data set D_L is divided into a training set D_L-train and a test set D_L-test; the training set is used to train the model and the test set to evaluate it. Under small-sample conditions, however, the data is not the data set D_L of a single learning task but a larger data set D_M composed of many learning tasks D_L drawn from the same distribution. Training then proceeds as follows: from the data set D_M containing many identically distributed learning tasks, the data sets D_L of a randomly selected subset of tasks form the meta-training set D_M-Train, and the data sets D_L of the remaining tasks in D_M form the meta-test set D_M-Test; in general, the meta-training set D_M-Train is used to train the model and the meta-test set D_M-Test to test it. (Details of this training regime are given in O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra, "Matching networks for one shot learning," NIPS 2016.) For the matching network, the meta-training set is further divided into a number of support sets D_M-Train-s and query sets D_M-Train-q, and the meta-test set is likewise divided into a number of support sets D_M-Test-s and query sets D_M-Test-q.
Disclosure of Invention
To address the overfitting of existing SAR image target recognition methods under small-sample conditions, which degrades both the generalization ability and the recognition accuracy of the model, the invention first proposes a new network model, called a gated multi-scale matching network.
The invention builds on the matching network, which performs excellently in small-sample target recognition tasks on optical images, and improves it by introducing a multi-scale feature extraction module and a gating unit, making it better suited to SAR image target recognition. Specifically, the multi-scale feature extraction module extracts multi-scale features from the different convolutional layers of the matching network, and the gating unit assigns the features of the different convolutional layers different weights according to the recognition task, so that features from different convolutional layers are selected per task for SAR image target recognition, ultimately improving the generalization ability and recognition accuracy of the recognition model.
The invention adopts the following scheme: a SAR image small-sample target recognition method based on a gated multi-scale matching network, comprising the following steps:
S1: Construct the meta-training set D_M-Train and the meta-test set D_M-Test from D_M, where D_M is a collection containing M different types of radar target data sets.
S1.1: Construct the meta-training set D_M-Train and divide it into meta-training support sets D_M-Train-s and meta-training query sets D_M-Train-q.

Using the "N-way K-shot" method, sample from D_M to obtain the meta-training set D_M-Train with its support sets D_M-Train-s and query sets D_M-Train-q already divided. The specific process is as follows:
S1.1.1: From D_M, randomly select a data set containing N different types of radar targets, where N ≤ M.
S1.1.2: From the selected data set containing N different types of radar targets, sample K samples for each radar target; the N × K selected samples form a support set of the meta-training set.
S1.1.3: From the same data set, randomly select P samples distinct from the N × K samples above; the P selected samples form a query set of the meta-training set.
S1.1.4: Repeating S1.1.1-S1.1.3 m times yields the meta-training set D_M-Train with its support sets D_M-Train-s and query sets D_M-Train-q divided. In general, m > 100. N, K, P are integers; for example, N = 5, K = 10, P = 20.
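The N-way K-shot episode construction of S1.1.1-S1.1.4 can be sketched as follows. This is a minimal sketch: the `dataset` layout (a dict from class label to a list of samples) and the function name are illustrative assumptions, not part of the patent.

```python
import random

def sample_episode(dataset, n_way, k_shot, p_query, seed=None):
    """Sample one N-way K-shot episode: a support set and a query set.

    dataset: dict mapping a class label to a list of that class's samples.
    Returns (support, query), each a list of (sample, label) pairs; the
    query samples come from the chosen classes but are disjoint from the
    support samples, as required by S1.1.3.
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)        # N classes, N <= M
    support, pool = [], []
    for c in classes:
        shuffled = rng.sample(dataset[c], len(dataset[c]))
        support += [(x, c) for x in shuffled[:k_shot]]  # K shots per class
        pool += [(x, c) for x in shuffled[k_shot:]]     # query candidates
    query = rng.sample(pool, p_query)                   # P query samples
    return support, query
```

Repeating the call m times (m > 100 per the text) yields a meta-training set; sampling the held-out classes D_H the same way yields the meta-test set of S1.2.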
S1.2: Construct the meta-test set D_M-Test and divide it into meta-test support sets D_M-Test-s and meta-test query sets D_M-Test-q.

Denote by D_H the collection of radar target data sets contained in D_M but not in D_M-Train. Suppose D_H contains H different types of radar targets, with H ≥ (M - N). Using the "N-way K-shot" method, sample from D_H to obtain the meta-test set D_M-Test with its support sets D_M-Test-s and query sets D_M-Test-q already divided. The specific process is as follows:
S1.2.1: From D_H, randomly select a data set containing N different types of radar targets, where N ≤ H.
S1.2.2: From the selected data set containing N different types of radar targets, sample K samples for each radar target; the N × K selected samples form a support set of the meta-test set.
S1.2.3: From the same data set, randomly select P samples distinct from the N × K samples above; the P selected samples form a query set of the meta-test set.
S1.2.4: Repeating S1.2.1-S1.2.3 m times yields the meta-test set D_M-Test with its support sets D_M-Test-s and query sets D_M-Test-q divided. Here the values of m, N, K, P match those in step S1.1.
S2: Construct the gated multi-scale matching network. The network consists of five modules: a feature extraction module based on a Convolutional Neural Network (CNN), a multi-scale feature extraction module, a weight gating unit, a feature cosine-similarity matching module, and the network's output module. The construction process is as follows:
S2.1: Construct the CNN-based feature extraction module.
The CNN feature extraction module consists of several (e.g., 4) convolution modules connected in sequence, i.e., the last layer of one convolution module feeds the first layer of the next. Each convolution module consists of a 3 × 3 convolutional layer, a Rectified Linear Unit (ReLU), a batch normalization layer, and a 2 × 2 pooling layer.
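The shape bookkeeping of the four-module backbone can be traced with the helper below. The channel counts and the "same" padding of the 3 × 3 convolutions are assumptions, since the patent fixes only the kernel and pooling sizes.

```python
def conv_block_shapes(in_shape, channels=(64, 64, 64, 64)):
    """Trace the (C, H, W) feature shape through four convolution modules.

    Each module is: 3x3 conv (padding 1 assumed, so H and W are kept),
    ReLU, batch normalization, then 2x2 pooling that halves H and W.
    """
    c, h, w = in_shape
    shapes = []
    for out_c in channels:
        h, w = h // 2, w // 2          # only the 2x2 pooling changes H, W
        shapes.append((out_c, h, w))
    return shapes
```

For a 64 × 64 input chip the four modules would emit 32 × 32, 16 × 16, 8 × 8, and 4 × 4 feature maps, which is why the later RoI pooling step unifies the spatial shapes of the different scales.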
S2.2, constructing a multi-scale feature extraction module.
Each convolution module in the CNN-based feature extraction module is followed by a multi-scale feature extraction module. The multi-scale feature extraction module consists of two parts, a feature compression module and a RoI pooling module: the feature compression module consists of a 1 × 1 convolutional layer and reduces the data volume; the RoI pooling module consists of one 9 × 9 pooling layer and changes the spatial shape of the features.
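A minimal NumPy sketch of the two parts, assuming the 1 × 1 convolution is a pure channel-mixing step and the 9 × 9 pooling layer pools to a fixed 9 × 9 output grid (an interpretation; the patent gives only the layer sizes):

```python
import numpy as np

def compress_channels(feat, weight):
    """1x1 convolution as a channel-mixing matrix multiply.

    feat: (C_in, H, W) feature map; weight: (C_out, C_in).
    Returns a (C_out, H, W) feature map with fewer channels.
    """
    return np.tensordot(weight, feat, axes=([1], [0]))

def roi_pool(feat, out_size=9):
    """Max-pool each channel onto a fixed out_size x out_size grid so that
    features from different conv layers share one spatial shape.
    Assumes the input spatial size is at least out_size."""
    c, h, w = feat.shape
    hs = np.linspace(0, h, out_size + 1).astype(int)
    ws = np.linspace(0, w, out_size + 1).astype(int)
    out = np.empty((c, out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            out[:, i, j] = feat[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]].max(axis=(1, 2))
    return out
```

Because every scale ends up on the same 9 × 9 grid, the features of different convolution modules can later be weighted and concatenated.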
S2.3: Construct the weight gating unit.
Each multi-scale feature extraction module is followed by a weight gating unit. The weight gating unit consists of three parts connected in sequence: a 1 × 1 convolutional layer, 2 fully connected layers, and a Sigmoid activation function layer.
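The unit's forward pass can be sketched as below, following the order 1 × 1 conv, fully connected layer, ReLU, fully connected layer, Sigmoid; the layer widths and the scalar-gate output are assumptions, and biases are omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_weight(feat, w_conv, w_fc1, w_fc2):
    """Compute one scale's gate weight in (0, 1).

    feat: (C, H, W) feature from a multi-scale extraction module.
    w_conv: (C1, C) 1x1-conv weight; w_fc1: (D, C1*H*W) and w_fc2: (D,)
    are the two fully connected layers' weights (assumed sizes).
    """
    z = np.tensordot(w_conv, feat, axes=([1], [0]))  # 1x1 conv = channel mix
    h = np.maximum(w_fc1 @ z.ravel(), 0.0)           # first FC + ReLU
    return float(sigmoid(w_fc2 @ h))                 # second FC + Sigmoid
```

The Sigmoid keeps each gate strictly between 0 and 1, so no scale's features are ever discarded outright, only down-weighted.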
S2.4: Construct the feature cosine-similarity matching module.
During training, the feature cosine-similarity matching module computes the cosine similarity between the features of each sample in the meta-training query set D_M-Train-q and the features of each sample in the meta-training support set D_M-Train-s; during testing, it computes the cosine similarity between the features of each sample in the meta-test query set D_M-Test-q and the features of each sample in the meta-test support set D_M-Test-s. The module is built from formula (1):

c(a, b) = (Σ_{i=1}^{n} a_i b_i) / (sqrt(Σ_{i=1}^{n} a_i²) · sqrt(Σ_{i=1}^{n} b_i²))   (1)

where a, b denote n-dimensional feature vectors, a_i, b_i denote the elements of a and b respectively, and c(a, b) denotes the cosine similarity of the two feature vectors.
S2.5: Construct the network's output module.
The network's output module consists of a vector-maximum module: it finds the support-set sample with the highest cosine similarity to a query-set sample and outputs that sample's label, per the formula:

c_max = max(c_1, c_2, ..., c_T)   (2)

where T denotes the number of samples in the support set, c_1, ..., c_T denote the cosine similarities between one query-set sample and the T support-set samples, c_max denotes the maximum of c_1, ..., c_T, and max denotes the maximum function.
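Equations (1) and (2) together amount to a nearest-neighbour rule under cosine similarity; a minimal sketch:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity of two feature vectors, equation (1)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_label(query_feat, support_feats, support_labels):
    """Label of the support feature most cosine-similar to the query
    feature, i.e. the argmax behind equation (2)."""
    sims = [cosine(query_feat, s) for s in support_feats]
    return support_labels[int(np.argmax(sims))]
```

With one averaged feature per target class in the support set, the query sample simply inherits the label of the best-matching class.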
S3 trains a gated multi-scale matching network.
S3.1: Train the gated multi-scale matching network constructed in S2 using the meta-training set D_M-Train obtained in step S1.1. The specific process is as follows:
S3.1.1: Input the K samples of each radar target in the meta-training support set D_M-Train-s into the CNN-based feature extraction module; each convolution module in the CNN feature extraction module extracts features from the K samples of each radar target.
S3.1.2: The features extracted by each convolution module of the CNN feature extraction module are input into the corresponding multi-scale feature extraction module.
S3.1.3: After processing by the multi-scale feature extraction module, the features are input to the weight gating unit, which assigns the processed features different weights according to the recognition task, so that the features of particular convolution modules dominate the recognition of different radar targets as the specific task requires.
S3.1.4: Concatenate the differently weighted features in the connection order of the convolution modules to form a fused feature, then average the fused features of the K samples of each radar target to obtain the feature of that radar target, which is input to the cosine-similarity matching module.
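Step S3.1.4 (weighted concatenation, then averaging over the K shots) can be sketched as follows; flattening each scale's feature map before concatenation is an assumption about the layout:

```python
import numpy as np

def fuse_and_average(per_scale_feats, gate_weights):
    """Weight each scale's flattened feature by its gate value, concatenate
    in module order, then average the K fused vectors into one class
    feature (a sketch of step S3.1.4).

    per_scale_feats: list of K samples, each a list of per-scale arrays.
    gate_weights: one gate value per scale, shared across the K samples.
    """
    fused = []
    for sample_scales in per_scale_feats:
        parts = [w * f.ravel() for w, f in zip(gate_weights, sample_scales)]
        fused.append(np.concatenate(parts))
    return np.mean(fused, axis=0)   # class prototype fed to the matcher
```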
S3.1.5: Repeating S3.1.1-S3.1.4 N times yields the features of the N radar targets in the meta-training support set D_M-Train-s.
S3.2: Apply the operations of S3.1 to the meta-training query set D_M-Train-q to obtain the features of each of its samples.
S3.3: Input the features of each sample of the meta-training query set D_M-Train-q obtained in step S3.2, together with the features of the N radar targets of the meta-training support set D_M-Train-s obtained in S3.1, into the feature cosine-similarity matching module, yielding the cosine similarity between each query-set sample's features and the features of the N support-set radar targets.
S3.4: The network's output module takes the maximum of the cosine similarities obtained in S3.3 via formula (2) and outputs the label of the meta-training support-set D_M-Train-s radar target corresponding to that maximum as the model's predicted label for the meta-training query-set D_M-Train-q sample.
S3.5: Compute, via the cross-entropy loss function, the loss between the label y' the model predicts for a meta-training query-set D_M-Train-q sample and the sample's true label y:

L = -Σ_{(x, y)} y log y'   (3)

where L denotes the computed loss value, (x, y) denote a sample and its corresponding true label, and y' denotes the label output by the network model.
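Treating y as a one-hot vector and y' as the model's output distribution over classes, equation (3) is the standard cross-entropy; a minimal sketch:

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy loss of equation (3) for a one-hot true label
    y_true and a predicted class distribution y_pred; eps guards log(0)."""
    y_pred = np.clip(y_pred, eps, 1.0)
    return float(-np.sum(y_true * np.log(y_pred)))
```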
S3.6: Optimize the weight and bias parameters θ of the gated multi-scale matching network by a stochastic gradient descent algorithm, where θ is the parameter to be iteratively optimized and θ_t denotes the value of θ at the t-th iteration. The update formula for the weights and bias parameters of the gated multi-scale matching network under stochastic gradient descent is given by (4):

θ_{t+1} = θ_t - η ∇_θ L(θ_t)   (4)

where θ_{t+1} and θ_t denote the weights and bias parameters of the gated multi-scale network at the (t+1)-th and t-th iterations respectively, η denotes the learning rate (i.e., the update step size), and ∇_θ L(θ_t) denotes the gradient of the loss function value.
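Equation (4) is one step of plain stochastic gradient descent over the parameter list θ; a minimal sketch (no momentum or weight decay, which the patent does not mention):

```python
def sgd_step(theta, grad, lr=0.01):
    """One SGD update per equation (4): theta_{t+1} = theta_t - lr * grad,
    applied element-wise to a list of parameters and their gradients."""
    return [p - lr * g for p, g in zip(theta, grad)]
```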
S3.7 repeats S3.1 to S3.6 until the weights and bias parameters θ of the gated multi-scale matching network converge.
S4: Recognize the target to be recognized using the trained gated multi-scale matching network.
S4.1: Use the gated multi-scale matching network trained in step S3 to recognize the targets to be identified in the meta-test query set D_M-Test-q. The specific process is as follows:
S4.1.1: Input the K samples of each radar target in the meta-test support set D_M-Test-s into the CNN-based feature extraction module; each convolution module in the CNN feature extraction module extracts features from the K samples of each radar target.
S4.1.2: The features extracted by each convolution module of the CNN feature extraction module are input into the corresponding multi-scale feature extraction module.
S4.1.3: After processing by the multi-scale feature extraction module, the features are input to the weight gating unit, which assigns the features of different convolution modules different weights according to the recognition task, so that the features of particular convolution modules dominate the recognition of different radar targets as the specific task requires.
S4.1.4: Concatenate the differently weighted features in the connection order of the convolution modules to form a fused feature, then average the fused features of the K samples of each radar target to obtain the feature of that radar target, which is input to the cosine-similarity matching module.
S4.1.5: Repeating S4.1.1-S4.1.4 N times yields the features of the N radar targets in the meta-test support set D_M-Test-s.
S4.2: Apply the operations of S4.1 to the meta-test query set D_M-Test-q to obtain the features of each of its samples.
S4.3: Input the features of each sample of the meta-test query set D_M-Test-q obtained in step S4.2, together with the features of the N radar targets of the meta-test support set D_M-Test-s obtained in step S4.1, into the cosine-similarity matching module, yielding the cosine similarity between each query-set sample's features and the features of the N support-set radar targets.
S4.4: The network's output module takes the maximum of the values obtained in S4.3 via formula (2) and outputs the label of the meta-test support-set D_M-Test-s radar target corresponding to that maximum as the model's predicted label for the meta-test query-set D_M-Test-q sample. The predicted label output by the model is the model's recognition result for that sample of the meta-test query set.
S4.5: Repeat steps S4.1 to S4.4 until the model has recognized all samples to be tested in the meta-test query set D_M-Test-q.
The invention has the following beneficial effects. Building on the traditional matching network, the invention introduces a multi-scale feature extraction module and a gating unit: the multi-scale feature extraction module extracts multi-scale features from the different convolutional layers of the matching network, and the gating unit sets different weights for the features of different convolutional layers according to the recognition task, so that features from different convolutional layers are selected per task for SAR image recognition, ultimately improving the model's generalization ability and recognition accuracy. The proposed method alleviates, to a certain extent, the low recognition accuracy and poor generalization of SAR image target recognition under small-sample conditions, and has significant engineering value for SAR image target recognition in that setting.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a schematic diagram of a weight gating unit;
FIG. 3 is a schematic diagram of a gated multi-scale matching network; (a) a traditional matching network structure schematic diagram; (b) a schematic diagram of a gated multi-scale feature extraction network structure;
FIG. 4 is an optical image of a class 10 target in the public dataset MSTAR and its corresponding SAR image employed by the present invention;
FIG. 5 is a diagram of a meta-training set according to the present invention;
FIG. 6 is a diagram illustrating a meta-test set according to the present invention;
FIG. 7 is a graph of the recognition accuracy of the method of the present invention and other small-sample learning methods (Matching Net, MAML, Meta-LSTM) for K = 1;
FIG. 8 is a graph of the recognition accuracy of the method of the present invention and other small-sample learning methods (Matching Net, MAML, Meta-LSTM) for K = 2;
FIG. 9 is a graph of the recognition accuracy of the method of the present invention and other small-sample learning methods (Matching Net, MAML, Meta-LSTM) for K = 5;
FIG. 10 is a graph of the recognition accuracy of the method of the present invention and other small-sample learning methods (Matching Net, MAML, Meta-LSTM) for K = 10;
Detailed Description
The invention is further illustrated with reference to the accompanying drawings:
FIG. 1 is an overall process flow of the present invention. The invention discloses a radar small sample target identification method based on a gated multi-scale matching network, which comprises the following steps:
S1: Construct the meta-training set D_M-Train and the meta-test set D_M-Test from the data set D_M.
S2 constructing a gated multi-scale matching network. The gated multi-scale Network is composed of five modules, namely a feature extraction module based on a Convolutional Neural Network (CNN), a multi-scale feature extraction module, a weight gating unit, a feature cosine similarity matching module and an output module of the Network.
S3 trains a gated multi-scale matching network.
And S4, recognizing the target to be recognized by using the trained gated multi-scale matching network.
Fig. 2 is a schematic structural diagram of the weight gating unit. The weight gating unit sets different weights for the features extracted by the different convolution modules of the matching network according to the specific recognition task, thereby selecting the most representative target features for that task and letting those features dominate the recognition. The gating unit consists of a 1 × 1 convolutional layer, 2 fully connected layers, and a final Sigmoid activation function layer. The gate weight w can be obtained from equation (5):

w = σ(W_fc2 · δ(W_fc1 · (W_c ⊛ F_RoI)))   (5)

where σ denotes the Sigmoid activation function, δ denotes the ReLU activation function, W_fc1 and W_fc2 denote the weight parameters of the two fully connected layers, W_c denotes the weight parameter of the convolutional layer, ⊛ denotes convolution, and F_RoI denotes the features after processing by the RoI pooling module.
FIG. 3 is a schematic structural diagram of a gated multi-scale matching network according to the present invention, wherein (a) is a schematic structural diagram of a conventional matching network; (b) the structure of the gated multi-scale network is shown schematically. Wherein g isθ=fθExpressing a gated multi-scale network, respectively inputting samples of a support set and samples to be identified in an inquiry set into the gated multi-scale network, and respectively outputting each type of target in the support set by the gated multi-scale networkThe characteristics of each sample in the inquiry set and the characteristics of each type of target in the supporting set are input into a cosine similarity calculation module, the cosine similarity between the characteristics of the sample to be tested in the inquiry set and the characteristics of each type of target in the supporting set is calculated respectively, and the network output module outputs labels to the sample to be tested in the inquiry set by finding the maximum value of the cosine similarity between the characteristics of the sample to be tested in the inquiry set and the characteristics of each type of target in the supporting set. The structural diagram of the gated multi-scale network is shown in (b). Suppose that a gated multi-scale network can use gθInstead, the above process may be represented by equations (6), (7) and (8). Wherein the content of the first and second substances,
x̂ represents a sample to be tested in the challenge set and ŷ the true label of that sample; S = {(X_i, Y_i)}_{i=1}^{T} represents the support set, with (X_i, Y_i) the sample data in the support set and its corresponding label; P(ŷ | x̂, S) represents the probability distribution over recognition classes that the matching network outputs for the sample x̂, given by formula (7), wherein f_θ, g_θ represent the gated multi-scale network with f_θ = g_θ, and c represents the cosine similarity between two feature vectors, given by equation (8); a, b denote n-dimensional vectors and a_i, b_i the elements of those vectors.

ŷ = arg max_Y P(Y | x̂, S)   (6)

P(ŷ = Y_i | x̂, S) = exp(c(f_θ(x̂), g_θ(X_i))) / Σ_{j=1}^{T} exp(c(f_θ(x̂), g_θ(X_j)))   (7)

c(a, b) = Σ_{i=1}^{n} a_i b_i / ( √(Σ_{i=1}^{n} a_i²) · √(Σ_{i=1}^{n} b_i²) )   (8)
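The matching rule of equations (6)-(8) can be sketched in a few lines of Python. This is an illustrative sketch rather than the patented implementation, and it assumes the features have already been extracted as 1-D vectors:

```python
import numpy as np

def cosine_similarity(a, b):
    # Equation (8): c(a, b) = sum(a_i * b_i) / (||a|| * ||b||)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(query_feat, support_feats, support_labels):
    # Equations (6)-(7): output the label of the support-set class whose
    # feature is most cosine-similar to the query feature
    sims = [cosine_similarity(query_feat, s) for s in support_feats]
    return support_labels[int(np.argmax(sims))]
```

For example, `match(np.array([1.0, 0.0]), [np.array([0.9, 0.1]), np.array([0.0, 1.0])], ['T72', 'BMP2'])` returns `'T72'`, since the first support feature is the more cosine-similar one.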
As described above, f_θ = g_θ denotes the gated multi-scale network shown in FIG. 3(b). As shown in the figure, the gated multi-scale network is composed of two modules, namely a multi-scale feature extraction network and a weight gating unit. The multi-scale feature extraction network is composed of a feature compression module and an RoI pooling module. The feature compression module compresses the features obtained by different convolutional layers, reducing the data volume and improving identification efficiency; the RoI pooling module extracts the parts of each feature map that are helpful for identification and unifies feature maps of different shapes from different convolutional layers into feature maps of the same shape. Let the features input to the feature compression module be expressed as

F ∈ R^{C_in × H × W}

i.e., the input has C_in channels in total and each channel has spatial shape H × W. The feature compression module processes the input feature map through a 1 × 1 convolution operation and outputs features with fewer channels,

F_c ∈ R^{C_out × H × W}

where in general C_out < C_in. The 1 × 1 convolution operation is expressed as formula (9), where u_i represents the i-th compression factor of the feature compression module and * represents the convolution operation:

F_c^{(i)} = u_i * F,  i = 1, 2, …, C_out   (9)
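Since the kernels are 1 × 1, equation (9) reduces to a linear map over the channel dimension. The following is a sketch under that reading (not the patent's code); the shapes are illustrative:

```python
import numpy as np

def compress_features(F, U):
    """Equation (9) as channel mixing: a 1x1 convolution makes each output
    channel a weighted sum over the input channels.
    F: input features, shape (C_in, H, W); U: compression factors u_i,
    shape (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', U, F)
```

For C_in = 64 and C_out = 16 this shrinks the data volume fourfold while leaving the spatial shape H × W unchanged.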
The RoI pooling module processes the compressed output of the feature compression module through a pooling operation to change the spatial shape of the features. The feature after processing by the RoI pooling module is represented as

F_R ∈ R^{C_out × H_R × W_R}

where H_R and W_R represent the spatial shape of the RoI-pooled feature.
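One way to unify feature maps of different spatial sizes, analogous to the RoI pooling layer described above, is adaptive max pooling onto a fixed grid. The sketch below is an assumption-laden illustration (the output grid size is hypothetical, not the patent's 9 × 9 layer):

```python
import numpy as np

def adaptive_max_pool(F, out_h=3, out_w=3):
    """Pool each channel of F (C, H, W) onto a fixed (out_h, out_w) grid so
    that feature maps of different spatial sizes end up with the same shape."""
    C, H, W = F.shape
    ys = np.linspace(0, H, out_h + 1).astype(int)
    xs = np.linspace(0, W, out_w + 1).astype(int)
    out = np.empty((C, out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            y0, y1 = ys[i], max(ys[i + 1], ys[i] + 1)  # never an empty window
            x0, x1 = xs[j], max(xs[j + 1], xs[j] + 1)
            out[:, i, j] = F[:, y0:y1, x0:x1].max(axis=(1, 2))
    return out
```

Feature maps from different convolution modules, whatever their H × W, come out with the same (C, out_h, out_w) shape and can then be gated and concatenated.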
The weight gating unit sets different weights for the RoI-pooled features of different scales according to the specific identification task, so that the most representative target features are selected and the target identification task is completed with that layer of features as the dominant factor. The weight gating unit is composed of a 1 × 1 convolutional layer, 2 fully connected layers and a Sigmoid activation function layer; its structural schematic diagram is shown in FIG. 2, and its output is the gating vector G. The weight gating unit G sets different weights for the features extracted by different convolution modules through a Hadamard product with the RoI-pooled features, which can be expressed by equation (5):

F_g = G ⊙ F_R   (5)

The feature F_g processed by the weight gating unit has the same spatial shape as the RoI-pooled feature F_R, and in F_g the features that are helpful for identification are given larger weights while redundant features are given smaller weights. The weight gating unit thus gives different weights to features of different scales according to the specific recognition task, improving the recognition accuracy to a certain extent.
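A minimal sketch of the weight gating unit of equation (5) follows. The exact wiring between the 1 × 1 convolution, the two fully connected layers and the Sigmoid is not fully specified in the text, so the global-average step that turns the convolved map into a channel descriptor, and the weight shapes, are assumptions for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def weight_gating_unit(F_R, W_c, W_1, W_2):
    """Sketch of equation (5): F_g = G ⊙ F_R.
    F_R: RoI-pooled features, shape (C, H_R, W_R).
    W_c: 1x1-convolution weights, shape (C, C); W_1, W_2: fully connected
    weights (shapes assumed (C, C) for simplicity)."""
    conv = np.einsum('oc,chw->ohw', W_c, F_R)      # 1x1 convolution
    z = conv.mean(axis=(1, 2))                     # channel descriptor (assumption)
    g = sigmoid(W_2 @ np.maximum(0.0, W_1 @ z))    # two FC layers + Sigmoid gate G
    return g[:, None, None] * F_R                  # Hadamard product with F_R
```

Because the gate values lie in (0, 1), each channel of F_g is a damped copy of the corresponding channel of F_R, which is exactly the "larger weight for helpful features, smaller weight for redundant features" behavior described above.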
Fig. 4 shows optical images of the 10 classes of targets in the public MSTAR dataset used in the experiments of the present invention and their corresponding SAR images. The dataset used herein is the MSTAR (Moving and Stationary Target Acquisition and Recognition) dataset provided by the U.S. Defense Advanced Research Projects Agency and the Air Force Research Laboratory. The dataset contains SAR images of 10 classes of military vehicle targets acquired by a high-resolution spotlight synthetic aperture radar; the targets were measured in 1995 and 1996 by radars operating in the X band. The 10 classes of military vehicle targets are: 2S1, BMP2, BRDM2, BTR60, BTR70, D7, T62, T72, ZIL131 and ZSU234. Each SAR image has a resolution of 0.3 m in both the range and azimuth directions and a pixel size of 128 × 128; the SAR target chips are typically acquired at azimuth angles covering 0°-360°, at intervals of 1°-2°.
FIG. 5 shows the meta-training set D_M-Train constructed in the present invention. SAR images acquired at a depression angle of 17° are selected from the MSTAR dataset to form the meta-training set. According to step S1.1, the meta-training set D_M-Train is divided into a support set D_M-Train-s and a challenge set D_M-Train-q. Specifically, N (for example, N = 5) classes are first randomly selected from the 10 classes of military vehicle targets, and then K (for example, K = 1) samples are randomly selected from each of the N classes, forming the support set D_M-Train-s of the meta-training set. Then, P samples different from those in the support set are randomly selected from the N types of targets; these P samples form the challenge set D_M-Train-q of the meta-training set.
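The construction of one such episode can be sketched as follows. The dictionary layout of the dataset and the default values of N, K and P are assumptions for illustration, not part of the patent:

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, p_query=20, seed=None):
    """One 'N-way K-shot' episode: dataset maps class name -> list of samples.
    Returns a support set of n_way * k_shot (sample, label) pairs and a
    challenge (query) set of p_query pairs drawn from the same classes but
    disjoint from the support set."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)   # pick N classes at random
    support, pool = [], []
    for c in classes:
        idx = list(range(len(dataset[c])))
        rng.shuffle(idx)
        support += [(dataset[c][i], c) for i in idx[:k_shot]]   # K per class
        pool += [(dataset[c][i], c) for i in idx[k_shot:]]      # leftovers
    return support, rng.sample(pool, p_query)                   # P query samples
```

Repeating this sampling m times, with a fresh random choice of classes each time, yields the m episodes that make up the meta-training set.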
FIG. 6 shows the meta-test set D_M-Test constructed in the present invention. SAR images acquired at a depression angle of 15° are selected from the MSTAR dataset to form the meta-test set. According to step S1.2, the meta-test set D_M-Test is divided into a support set D_M-Test-s and a challenge set D_M-Test-q. The specific process is similar to that of the meta-training set described for FIG. 5.
Fig. 7 is a graph of the recognition accuracy of the method of the present invention and of other small sample learning methods (Matching Net, MAML, Meta-LSTM) under the condition K = 1, meaning that only 1 sample per class of target is provided for each training episode. The experiments adopt the "N-way K-shot" protocol with N = 5 and K = 1, i.e., 5 different types of targets are identified. Matching Net, MAML and Meta-LSTM are three small sample learning methods with good performance under small-sample conditions in the field of optical images. (Matching Net reference: O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra. Matching networks for one shot learning. In NIPS, 2016. MAML reference: C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017. Meta-LSTM reference: S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.) The present invention compares these three classical small sample learning methods with the method of the invention under the same experimental environment. As can be seen from Fig. 7, the experiment runs for 100 iterations; according to step S1 above, 5 different types of targets are selected from the 10 types in the MSTAR dataset at each iteration, with each type providing only 1 training sample, yielding the recognition accuracy curves of the method of the present invention and of the other three small sample learning methods.
The highest identification accuracy obtained by the method of the present invention during iteration is 70.5%, that of Matching Net is 59.7%, that of MAML is 56.4%, and that of Meta-LSTM is 61.1%; comparing the accuracies of the four methods shows that the method of the present invention achieves the highest identification accuracy. Moreover, observing the accuracy curves shows that, since each iteration selects a different group of 5 target types from the MSTAR dataset for identification, the accuracy curves oscillate to some extent during iteration; the oscillation amplitude of the method of the present invention is the smallest and its performance is the most stable.
Fig. 8 is a graph of the recognition accuracy of the method of the present invention and of other small sample learning methods (Matching Net, MAML, Meta-LSTM) under the condition K = 2. The experiments adopt the "N-way K-shot" protocol with N = 5 and K = 2, i.e., 5 different types of targets are identified and only 2 samples of each type are provided for each training episode. Under the condition K = 2, the highest recognition accuracy obtained by the method of the present invention is 82.9%, that of Matching Net is 60.0%, that of MAML is 73.5%, and that of Meta-LSTM is 67.1%. Comparing these figures and observing Fig. 8 shows that the method of the present invention has the highest identification accuracy, the smallest local oscillation amplitude of the accuracy curve, and the most stable identification performance.
Fig. 9 is a graph of the recognition accuracy of the method of the present invention and of other small sample learning methods (Matching Net, MAML, Meta-LSTM) under the condition K = 5. The "N-way K-shot" protocol is again adopted with N = 5 and K = 5, i.e., 5 different types of targets are identified and only 5 samples of each type are provided for each training episode. Under the condition K = 5, the highest recognition accuracy obtained by the method of the present invention is 84.2%, that of Matching Net is 70.1%, that of MAML is 77.5%, and that of Meta-LSTM is 79.2%. The method of the present invention has the highest identification accuracy, the smallest local oscillation amplitude of the accuracy curve, and the most stable identification performance.
Fig. 10 is a graph of the recognition accuracy of the method of the present invention and of other small sample learning methods (Matching Net, MAML, Meta-LSTM) under the condition K = 10. The "N-way K-shot" protocol is again adopted with N = 5 and K = 10, i.e., 5 different types of targets are identified and only 10 samples of each type are provided for each training episode. Under the condition K = 10, the highest recognition accuracy obtained by the method of the present invention is 93.2%, that of Matching Net is 77.8%, that of MAML is 87.7%, and that of Meta-LSTM is 84.4%. The method of the present invention has the highest identification accuracy, the smallest local oscillation amplitude of the accuracy curve, and the most stable identification performance.
A comprehensive analysis of Figs. 7 to 10 shows that under the conditions K = 1, 2, 5 and 10 the method of the present invention achieved the highest recognition accuracy: 70.5%, 82.9%, 84.2% and 93.2%, respectively. Since 5 different types of targets are randomly selected from the MSTAR dataset for small-sample identification at each iteration, the accuracy curves of the different methods in Figs. 7-10 oscillate to different degrees, reflecting their different stability. As can be seen from Figs. 7-10, the method of the present invention has the smallest local oscillation amplitude and thus the best stability.
In conclusion, the method provided by the invention improves the SAR target identification accuracy under the condition of small samples, improves the generalization capability of the model, reduces the oscillation amplitude of the identification accuracy in the training process, and shows certain stability. The method can solve the problems of low identification accuracy and poor model generalization capability of the existing SAR image target identification method, and has higher engineering application value.

Claims (4)

1. A SAR image small sample target identification method based on a gating multi-scale matching network is characterized by comprising the following steps:
S1, constructing a meta-training set D_M-Train and a meta-test set D_M-Test from D_M, where D_M is a set containing M different types of radar target datasets:
s1.1 construction of Meta-training set DM-TrainAnd dividing element training set support set DM-Train-sAnd meta training set challenge set DM-Train-q
sampling from D_M by the "N-way K-shot" method to obtain the meta-training set D_M-Train with its divided support set D_M-Train-s and challenge set D_M-Train-q; the specific process is as follows:
S1.1.1, randomly selecting from D_M a data set containing N different types of radar targets, wherein N ≤ M;
s1.1.2 sampling from the selected data set containing N different types of radar targets, each radar target selecting K samples, the selected N x K samples forming a support set of the element training set;
s1.1.3 randomly selecting P samples different from the N x K samples from the data set containing N different types of radar targets, wherein the P selected samples form a challenge set of the meta-training set;
S1.1.4, repeating S1.1.1-S1.1.3 m times to obtain the meta-training set D_M-Train with its divided support set D_M-Train-s and challenge set D_M-Train-q;
S1.2 construction of Meta-test set DM-TestAnd dividing element test set support set DM-Test-sAnd meta test set challenge set DM-Test-q
denoting as D_H the collection of radar target datasets contained in D_M but not contained in D_M-Train, where D_H comprises H different types of radar targets with H ≥ (M − N); sampling from D_H by the "N-way K-shot" method to obtain the meta-test set D_M-Test with its divided support set D_M-Test-s and challenge set D_M-Test-q; the specific process is as follows:
S1.2.1, randomly selecting from D_H a data set containing N different types of radar targets, wherein N ≤ H;
s1.2.2 sampling from the selected data sets containing N different types of radar targets, each radar target selecting K samples, the selected N x K samples forming a support set of the meta-test set;
s1.2.3 randomly selecting P samples different from the N x K samples from the data set containing N different types of radar targets, the P selected samples forming a challenge set of the meta-test set;
S1.2.4, repeating S1.2.1-S1.2.3 m times to obtain the meta-test set D_M-Test with its divided support set D_M-Test-s and challenge set D_M-Test-q;
S2, constructing a gating multi-scale matching network; the gated multi-scale network is composed of five modules, namely a CNN-based feature extraction module, a multi-scale feature extraction module, a weight gating unit, a feature cosine similarity matching module and a network output module, and the construction process is as follows:
s2.1 construction of CNN-based feature extraction module
The CNN feature extraction module is composed of a plurality of convolution modules connected in sequence, i.e., the last layer of each convolution module is connected to the first layer of the next convolution module, and each convolution module is composed of a 3 × 3 convolutional layer, a rectified linear unit, a batch normalization layer and a 2 × 2 pooling layer;
s2.2 construction of multi-scale feature extraction module
Each convolution module in the CNN-based feature extraction module is connected with a multi-scale feature extraction module; the multi-scale feature extraction module consists of two parts, namely a feature compression module and a RoI pooling module: the characteristic compression module is composed of 1-by-1 convolution layers and used for reducing data volume; the RoI pooling module is composed of one 9 × 9 pooling layer for changing the spatial shape of the features;
s2.3 construction of a weight gating cell
Each multi-scale feature extraction module is connected with a weight gating unit, the weight gating unit consists of three parts, namely a 1 × 1 convolution layer, 2 full-connection layers and a Sigmoid activation function layer, and the three parts are connected in sequence to form the weight gating unit;
s2.4 construction of feature cosine similarity matching module
During training, the characteristic cosine similarity matching module is used for calculating the cosine similarity between the features of the samples in the meta-training set challenge set D_M-Train-q and the features of each sample in the meta-training set support set D_M-Train-s; during testing, it is used for calculating the cosine similarity between the features of the samples in the meta-test set challenge set D_M-Test-q and the features of each sample in the meta-test set support set D_M-Test-s. The module can be constructed by using formula (1):

c(a, b) = Σ_{i=1}^{n} a_i b_i / ( √(Σ_{i=1}^{n} a_i²) · √(Σ_{i=1}^{n} b_i²) )   (1)

wherein a, b represent n-dimensional feature vectors, a_i, b_i respectively represent elements in the feature vectors a and b, and c(a, b) represents the cosine similarity of the feature vectors;
s2.5 output module for constructing network
The output module of the network is composed of a vector maximum value solving module, which finds the support set sample with the highest cosine similarity to a sample in the challenge set and outputs the label corresponding to that sample, as given by the following formula:
cmax=max(c1,c2,...,cT) (2)
where T denotes the number of samples in the support set, c_1, ..., c_T respectively represent the cosine similarities between one sample in the challenge set and the T samples in the support set, c_max denotes the maximum of c_1, ..., c_T, and max represents the maximum function;
s3 training a gated multi-scale matching network:
s3.1 utilizing the meta training set D obtained in the step S1.1M-TrainTraining the gated multi-scale matching network constructed in S2, wherein the specific process is as follows:
S3.1.1, inputting the K samples of each radar target in the meta-training set support set D_M-Train-s into the CNN-based feature extraction module, where each convolution module in the CNN feature extraction module performs feature extraction on the K samples of each radar target in D_M-Train-s;
s3.1.2 respectively inputting the features extracted by each convolution module in the CNN feature extraction module into the multi-scale feature extraction module;
S3.1.3, inputting the features extracted by each convolution module, after processing by the multi-scale feature extraction module, into the weight gating unit; the weight gating unit gives the processed features different weights according to the different recognition tasks, so that the features of different convolution modules are selected as the dominant features according to the specific recognition task to complete the recognition of different radar targets;
s3.1.4, splicing the features with different weights according to the connection sequence of each convolution module to be used as fusion features, and then averaging the fusion features of K samples of each radar target to be used as the features of each radar target input to the cosine similarity matching module;
S3.1.5, repeating steps S3.1.1-S3.1.4 N times to obtain the features of the N types of radar targets in the meta-training set support set D_M-Train-s;
S3.2, performing the operations in S3.1 on each sample of the meta-training set challenge set D_M-Train-q to obtain the features of each sample in D_M-Train-q;
S3.3, inputting the features of each sample in the meta-training set challenge set D_M-Train-q obtained after the processing of step S3.2 and the features of the N radar targets in the meta-training set support set D_M-Train-s obtained after the processing of step S3.1 into the characteristic cosine similarity matching module, to obtain the cosine similarity between the features of each sample in D_M-Train-q and the features of the N radar targets in D_M-Train-s;
S3.4, the output module of the network obtains the maximum of the characteristic cosine similarities obtained in S3.3 through formula (2), and outputs the label of the radar target in the meta-training set support set D_M-Train-s corresponding to this maximum cosine similarity as the model's predicted label for the sample in the meta-training set challenge set D_M-Train-q;
S3.5, computing through a cross entropy loss function the loss function value between the label y′ predicted by the model for the samples in the meta-training set challenge set D_M-Train-q and the true label y of those samples:

L = −(1/P) Σ_{(x,y)} y log y′   (3)

wherein P represents the number of samples of the different types of radar targets, L represents the obtained loss function value, (x, y) respectively represent the samples and the corresponding real labels, and y′ represents the label output by the network model;
S3.6, optimizing the weight and bias parameters θ of the gated multi-scale matching network through a stochastic gradient descent algorithm, where θ denotes the parameters to be optimized by iteration and θ_t denotes the value of θ at the t-th iteration; the update formula for the weight and bias parameters of the gated multi-scale matching network using the stochastic gradient descent algorithm is shown in formula (4):

θ_{t+1} = θ_t − η∇_{θ_t}L   (4)

wherein θ_{t+1} and θ_t respectively represent the weight and bias parameters of the gated multi-scale network at the (t+1)-th and t-th iterations, η represents the learning rate, and ∇_{θ_t}L represents the gradient information of the loss function value;
s3.7, repeating S3.1 to S3.6 until the weight and the bias parameter theta of the gated multi-scale matching network converge;
s4, identifying the target to be identified by using the trained gated multi-scale matching network:
S4.1, identifying the targets to be identified in the meta-test set challenge set D_M-Test-q by using the gated multi-scale matching network trained in step S3; the specific process is as follows:
S4.1.1, inputting the K samples of each radar target in the meta-test set support set D_M-Test-s into the CNN-based feature extraction module, where each convolution module in the CNN feature extraction module performs feature extraction on the K samples of each radar target in D_M-Test-s;
s4.1.2 respectively inputting the features extracted by each convolution module in the CNN feature extraction module into the multi-scale feature extraction module;
S4.1.3, inputting the features extracted by each convolution module, after processing by the multi-scale feature extraction module, into the weight gating unit; the weight gating unit gives the features of different convolution modules different weights according to the different recognition tasks, so that the features of different convolution modules are selected as the dominant features according to the specific recognition task to complete the recognition of different radar targets;
s4.1.4, splicing the features with different weights according to the connection sequence of each convolution module to be used as fusion features, and then averaging the fusion features of K samples of each radar target to be used as the features of each radar target input to the cosine similarity matching module;
S4.1.5, repeating steps S4.1.1-S4.1.4 N times to obtain the features of the N types of radar targets in the meta-test set support set D_M-Test-s;
S4.2, performing the operations in S4.1 on each sample of the meta-test set challenge set D_M-Test-q to obtain the features of each sample in D_M-Test-q;
S4.3, inputting the features of each sample in the meta-test set challenge set D_M-Test-q obtained after the processing of step S4.2 and the features of the N radar targets in the meta-test set support set D_M-Test-s obtained after the processing of step S4.1 into the cosine similarity matching module, to obtain the cosine similarity between the features of each sample in D_M-Test-q and the features of the N radar targets in D_M-Test-s;
S4.4, the output module of the network obtains the maximum of the cosine similarities obtained in S4.3 through formula (2), and outputs the label of the radar target in the meta-test set support set D_M-Test-s corresponding to this maximum cosine similarity as the model's predicted label for the sample in the meta-test set challenge set D_M-Test-q;
S4.5, repeating steps S4.1 to S4.4 until the model completes the identification of all samples to be tested in the meta-test set challenge set D_M-Test-q.
2. The method for identifying the SAR image small sample target based on the gated multi-scale matching network according to claim 1 is characterized in that: s1.1.4, m is greater than 100, N is 5, K is 10, and P is 20.
3. The method for identifying the SAR image small sample target based on the gated multi-scale matching network according to claim 1 is characterized in that: s1.2.4, m is greater than 100, N is 5, K is 10, and P is 20.
4. The method for identifying the SAR image small sample target based on the gated multi-scale matching network according to claim 1 is characterized in that: in S2.1, the CNN feature extraction module consists of 4 convolution modules.
CN202110716103.8A 2021-06-24 2021-06-24 SAR image small sample target identification method based on gating multi-scale matching network Active CN113283390B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110716103.8A CN113283390B (en) 2021-06-24 2021-06-24 SAR image small sample target identification method based on gating multi-scale matching network


Publications (2)

Publication Number Publication Date
CN113283390A CN113283390A (en) 2021-08-20
CN113283390B true CN113283390B (en) 2022-03-08

Family

ID=77285741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110716103.8A Active CN113283390B (en) 2021-06-24 2021-06-24 SAR image small sample target identification method based on gating multi-scale matching network

Country Status (1)

Country Link
CN (1) CN113283390B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110298387A (en) * 2019-06-10 2019-10-01 天津大学 Incorporate the deep neural network object detection method of Pixel-level attention mechanism
WO2020101448A1 (en) * 2018-08-28 2020-05-22 Samsung Electronics Co., Ltd. Method and apparatus for image segmentation
CN112446357A (en) * 2020-12-15 2021-03-05 电子科技大学 SAR automatic target recognition method based on capsule network
CN112766199A (en) * 2021-01-26 2021-05-07 武汉大学 Hyperspectral image classification method based on self-adaptive multi-scale feature extraction model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472483B (en) * 2019-07-02 2022-11-15 五邑大学 SAR image-oriented small sample semantic feature enhancement method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SAMNet: Stereoscopically Attentive Multi-Scale Network for Lightweight Salient Object Detection;Yun Liu等;《IEEE Transactions on Image Processing》;20210318;3804-3814 *

Also Published As

Publication number Publication date
CN113283390A (en) 2021-08-20

Similar Documents

Publication Publication Date Title
CN110135267B (en) Large-scene SAR image fine target detection method
CN110334741B (en) Radar one-dimensional range profile identification method based on cyclic neural network
CN108038445B (en) SAR automatic target identification method based on multi-view deep learning framework
CN106355151B (en) A kind of three-dimensional S AR images steganalysis method based on depth confidence network
CN109214452B (en) HRRP target identification method based on attention depth bidirectional cyclic neural network
Zhang et al. Convolutional neural network with attention mechanism for SAR automatic target recognition
CN110909667B (en) Lightweight design method for multi-angle SAR target recognition network
CN111914728B (en) Hyperspectral remote sensing image semi-supervised classification method and device and storage medium
CN110766084B (en) Small sample SAR target identification method based on CAE and HL-CNN
CN109359525B (en) Polarized SAR image classification method based on sparse low-rank discrimination spectral clustering
CN108345856B (en) SAR automatic target recognition method based on heterogeneous convolutional neural network integration
CN106096506A (en) Based on the SAR target identification method differentiating doubledictionary between subclass class
CN113095416B (en) Small sample SAR target classification method based on mixing loss and graph meaning force
Wang et al. Target detection and recognition based on convolutional neural network for SAR image
CN110490894A (en) Background separating method before the video decomposed based on improved low-rank sparse
CN114972885A (en) Multi-modal remote sensing image classification method based on model compression
CN115880497A (en) Wing-type icing ice shape prediction method based on combination of self-encoder and multi-layer perceptron
Toğaçar et al. Classification of cloud images by using super resolution, semantic segmentation approaches and binary sailfish optimization method with deep learning model
CN109871907B (en) Radar target high-resolution range profile identification method based on SAE-HMM model
CN114511785A (en) Remote sensing image cloud detection method and system based on bottleneck attention module
CN113283390B (en) SAR image small sample target identification method based on gating multi-scale matching network
CN116797928A (en) SAR target increment classification method based on stability and plasticity of balance model
CN115909086A (en) SAR target detection and identification method based on multistage enhanced network
CN115293639A (en) Battlefield situation studying and judging method based on hidden Markov model
CN114612729A (en) Image classification model training method and device based on SAR image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant