CN113095416A - Small sample SAR target classification method based on mixed loss and graph attention

Info

Publication number: CN113095416A
Application number: CN202110408623.2A
Authority: CN (China)
Prior art keywords: training, test, layer, node, vector
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN113095416B (en)
Inventors: 白雪茹 (Bai Xueru), 杨敏佳 (Yang Minjia), 孟昭晗 (Meng Zhaohan), 周峰 (Zhou Feng)
Assignee: Xidian University (original and current)

Events:
Application CN202110408623.2A filed by Xidian University
Publication of CN113095416A
Application granted; publication of CN113095416B

Classifications

    • G06F18/2415 — Pattern recognition; classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus false rejection rate
    • G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N3/045 — Neural networks; combinations of networks
    • G06N3/08 — Neural networks; learning methods


Abstract

The invention provides a small sample SAR target classification method based on mixed loss and graph attention, which comprises the following steps: acquiring a training sample set and a test sample set; constructing a network model H based on mixed loss and graph attention; performing iterative training on H; and obtaining the target classification result of the small sample SAR images. The invention forms the mixed loss value l of the training task set as the weighted sum of the classification loss value l_C of the training task set and the embedding loss value l_E of the training task set, and uses l to update the parameters of all first convolution layers and all second convolution layers in the embedding network module E and the parameters of all first fully connected layers and all second fully connected layers in the graph attention network module G, thereby strengthening the similarity among the features of the same SAR target class and the differences among the features of different SAR target classes. The method also applies data enhancement, which effectively reduces the risk of overfitting during model training and improves the classification accuracy for small sample SAR targets.

Description

Small sample SAR target classification method based on mixed loss and graph attention
Technical Field
The invention belongs to the technical field of radar image processing, relates to an SAR target classification method, and particularly relates to a small sample SAR target classification method based on mixed loss and graph attention, which can be used for SAR target classification when only a small number of SAR images of a target can be acquired.
Background
Synthetic aperture radar (SAR) offers all-day, all-weather, long-range, high-resolution imaging, and because the two-dimensional high-resolution SAR image of a target contains rich information such as the target's shape, size, and texture, SAR is widely used for target classification in military applications such as battlefield reconnaissance. SAR target classification is a computer-based algorithm that extracts features from the SAR image data of a target acquired by a sensor and assigns a target class according to the extracted features. Although many traditional SAR target classification methods based on manually selected features and hand-designed classifiers exist, these traditional methods require tailoring a specific algorithm to a specific target based on extensive experience and strong domain expertise, which is time-consuming and difficult to generalize. In recent years, SAR target classification based on deep learning has realized data-driven classification: the model autonomously learns to extract features that are effective for classification from the data and uses them to classify the target, so no manual feature selection, classifier design, or strong domain expertise is required, and the method readily extends to new target classes. It therefore achieves excellent performance and is widely studied and used in the field.
However, some targets observed by SAR are non-cooperative small-sample targets; that is, only a few SAR images of these targets can be acquired, from a single image up to a dozen or so per target. SAR target classification based on deep learning generally requires a large number of training samples to train the model to a high classification accuracy on test samples, so for small-sample SAR targets it suffers from low classification accuracy due to the shortage of training samples.
To solve this problem, the prior art improves the model structure to design special models with low requirements on the number of samples and thereby improve the classification accuracy on small-sample SAR targets. For example, the patent application with publication number CN111191718A, entitled "Small sample SAR target recognition method based on graph attention network", discloses a method that first acquires a small number of labeled SAR images and a large number of unlabeled SAR images of a target and denoises them, then iteratively trains an autoencoder with the denoised images to obtain feature vectors of all SAR images, and finally constructs an initial adjacency matrix and iteratively trains a graph attention network with the initial adjacency matrix and the feature vectors of all SAR images; the trained graph attention network classifies the targets in the unlabeled SAR images by means of an attention mechanism. The graph attention network adopted by that method requires few labeled SAR images during class prediction, and the attention mechanism improves the classification accuracy of the network. Its defect, however, is that after the autoencoder encodes and decodes each SAR image, the loss against the original SAR image is computed independently per image, so the extracted features of the same SAR target class have low similarity and the differences between the features of different SAR target classes are weak, and the model's classification accuracy on small-sample SAR targets remains low.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art, and provides a small sample SAR target classification method based on mixed loss and graph attention, which solves the technical problem of low classification accuracy in the prior art caused by low similarity between the features of the same SAR target class and weak differences between the features of different SAR target classes.
In order to achieve the purpose, the technical scheme adopted by the invention comprises the following steps:
(1) Obtaining a training sample set T_train and a test sample set T_test:
(1a) acquiring synthetic aperture radar (SAR) images covering C different target classes, wherein each target class corresponds to M SAR images of size h×h, each SAR image contains 1 target, C ≥ 10, M ≥ 200, and h = 128;
(1b) labeling the target class in each SAR image, randomly selecting C_train target classes and taking their C_train × M SAR images together with the label of each SAR image as the training sample set T_train, and taking the C_test × M SAR images of the remaining C_test target classes together with their labels as the test sample set T_test, wherein C_train + C_test = C and 3 ≤ C_test ≤ 5;
(2) Constructing a network model H based on mixed loss and graph attention:
Construct a network model H comprising a data enhancement module D, an embedding network module E, a node feature initialization module I, and a graph attention network module G cascaded in sequence, wherein the embedding network module E comprises a plurality of sequentially cascaded first convolution modules E_C and a second convolution module E_L; each first convolution module E_C comprises a first convolution layer, a first batch normalization layer, a Mish activation layer, and a max pooling layer stacked in sequence, and the second convolution module E_L comprises a second convolution layer and a second batch normalization layer stacked in sequence; the graph attention network module G comprises a plurality of sequentially cascaded graph update layers U, each graph update layer U comprising an edge feature construction module U_E, an attention weight calculation module U_W, and a node feature update module U_N stacked in sequence, wherein each edge feature construction module U_E comprises a plurality of sequentially stacked first fully connected layers and each node feature update module U_N comprises one second fully connected layer;
(3) Performing iterative training on the network model H based on mixed loss and graph attention:
(3a) letting n denote the iteration counter and N the maximum number of iterations, wherein N ≥ 1000, and initializing n = 0;
(3b) randomly selecting from the training sample set T_train a group of C_test target classes comprising C_test × M SAR images in total, and one-hot encoding the label of each SAR image to obtain the C_test-dimensional label vector of each SAR image; then randomly selecting from the C_test × M SAR images the K SAR images of each target class and their corresponding label vectors as the training support sample set S = {s_a}, and taking the remaining C_test(M−K) SAR images and their corresponding label vectors as the training query sample set Q = {q_b}; after one-hot encoding, the c-th dimension element of the label vector of each SAR image indicates the probability that the target in the SAR image belongs to the c-th of the C_test target classes, s_a represents the a-th training support sample consisting of an SAR image and its corresponding label vector, q_b represents the b-th training query sample consisting of an SAR image and its corresponding label vector, and 1 ≤ K ≤ 10;
(3c) combining the training support sample set S with each training query sample q_b into a training task T_b, obtaining the training task set {T_b}, and forward-propagating {T_b} as the input of the network model H based on mixed loss and graph attention:
(3c1) the data enhancement module D performs data enhancement on each SAR image in the training task set {T_b}: a power transformation is performed on each SAR image, noise is added to the power-transformed SAR image, a flip transformation is performed on the noise-added SAR image, and a rotation transformation is performed on the flipped SAR image, obtaining the enhanced training task set {T'_b}, wherein T'_b represents the enhanced training task corresponding to training task T_b, s'_a represents the enhanced training support sample corresponding to the training support sample s_a in T_b, and q'_b represents the enhanced training query sample corresponding to training query sample q_b;
(3c2) the embedding network module E maps each SAR image in every enhanced training task T'_b of the enhanced training task set into an embedding vector, obtaining the training embedding vector sets {F_b}, and the embedding loss value l_E of the training task set is computed from the training embedding vector sets {F_b} with the embedding loss function L_E:

    l_E = −(1 / (C_test(M−K))) Σ_b log( exp(−d(f_b^q, μ_c(b))) / Σ_{c=1}^{C_test} exp(−d(f_b^q, μ_c)) )

wherein F_b = {f_{b,a} | 1 ≤ a ≤ C_test·K+1} represents the training embedding vector set corresponding to enhanced training task T'_b, f_{b,a} with a ≠ C_test·K+1 represents the training embedding vector corresponding to enhanced training support sample s'_a, f_b^q = f_{b,C_test·K+1} represents the training embedding vector corresponding to enhanced training query sample q'_b, log(·) represents the logarithm with the natural constant e as base, exp(·) represents the exponential with the natural constant e as base, Σ represents summation, μ_c represents the class center of the c-th target class, obtained by averaging the training embedding vectors corresponding to the SAR images of the c-th target class contained in the training support sample set S of training task T_b, μ_c(b) represents the class center of the target class to which the target in the SAR image contained in training query sample q_b belongs, and d represents the metric function, d(p, q) = ||p − q||_2;
(3c3) the node feature initialization module I constructs a virtual label vector v from the C_test-dimensional vector 1 whose element values in every dimension are all 1, i.e., the uniform label v = (1/C_test)·1 for a query node of unknown class; it splices every training embedding vector f_{b,a} with a ≠ C_test·K+1 in each training embedding vector set F_b with the label vector of the corresponding SAR image, and simultaneously splices the training embedding vector f_b^q in each training embedding vector set F_b with the virtual label vector v, obtaining the training node layer-1 feature sets {X_b^(1)}, wherein X_b^(1) represents the training node layer-1 feature set corresponding to training embedding vector set F_b and x_{b,a}^(1) represents the training node layer-1 feature corresponding to training embedding vector f_{b,a};
(3c4) the graph attention network module G takes the training node layer-1 feature sets {X_b^(1)} as input and, for the training node layer-1 feature in each X_b^(1) corresponding to training query sample q_b, predicts the class of the target in the SAR image, obtaining the training prediction vector set {ŷ_b}, wherein ŷ_b represents the training prediction vector of dimension C_test corresponding to that training node layer-1 feature, and its c-th element represents the prediction probability that the target in the SAR image contained in training query sample q_b belongs to the c-th target class;
(3c5) using the classification loss function L_C to compute the classification loss value l_C of the training task set from the training prediction vector set {ŷ_b} and all the label vectors in the training query sample set Q:

    l_C = −(1 / (C_test(M−K))) Σ_b Σ_{c=1}^{C_test} y_{b,c} log(ŷ_{b,c})

wherein ŷ_{b,c} represents the value of the c-th element of training prediction vector ŷ_b, and y_{b,c} represents the value of the c-th dimension element in the label vector of the SAR image corresponding to ŷ_b;
(3d) computing the weighted sum of the classification loss value l_C of the training task set and the embedding loss value l_E of the training task set to obtain the mixed loss value l of the training task set, l = λ·l_C + (1−λ)·l_E, and then updating, through the mixed loss value l and using the stochastic gradient descent algorithm, the parameters of all first convolution layers and all second convolution layers in the embedding network module E and the parameters of all first fully connected layers and all second fully connected layers in the graph attention network module G, wherein λ is a weight and 0.7 ≤ λ < 1;
(3e) judging whether n ≥ N: if so, obtaining the trained network model H' based on mixed loss and graph attention; otherwise, letting n = n + 1 and returning to step (3b);
(4) Obtaining the target classification result of the small sample SAR images:
(4a) one-hot encoding the label of each SAR image in the test sample set T_test to obtain the C_test-dimensional label vector of each SAR image; then randomly selecting, from the C_test × M SAR images of the test sample set T_test, the K SAR images of each target class and their corresponding label vectors as the test support sample set S' = {s_e}, and taking the remaining C_test(M−K) SAR images and their corresponding label vectors as the test query sample set Q' = {q_g}, wherein s_e represents the e-th test support sample consisting of an SAR image and its corresponding label vector, and q_g represents the g-th test query sample consisting of an SAR image and its corresponding label vector;
(4b) combining the test support sample set S' with each test query sample q_g into a test task T_g, obtaining the test task set {T_g}, and forward-propagating {T_g} as the input of the trained network model H' based on mixed loss and graph attention:
(4b1) the trained embedding network module E' maps each SAR image in every test task T_g of the test task set into an embedding vector, obtaining the test embedding vector sets {F_g}, wherein F_g = {f_{g,e} | 1 ≤ e ≤ C_test·K+1} represents the test embedding vector set corresponding to test task T_g, f_{g,e} with e ≠ C_test·K+1 represents the test embedding vector corresponding to test support sample s_e, and f_g^q = f_{g,C_test·K+1} represents the test embedding vector corresponding to test query sample q_g;
(4b2) the node feature initialization module I constructs a virtual label vector v, splices every test embedding vector f_{g,e} with e ≠ C_test·K+1 in each test embedding vector set F_g with the label vector of the corresponding SAR image, and simultaneously splices the test embedding vector f_g^q in each test embedding vector set F_g with the virtual label vector v, obtaining the test node layer-1 feature sets {X_g^(1)}, wherein X_g^(1) represents the test node layer-1 feature set corresponding to test embedding vector set F_g and x_{g,e}^(1) represents the test node layer-1 feature corresponding to test embedding vector f_{g,e};
(4b3) the trained graph attention network module G' takes the test node layer-1 feature sets {X_g^(1)} as input and, for the test node layer-1 feature in each X_g^(1) corresponding to test query sample q_g, predicts the class of the target in the SAR image, obtaining the test prediction vector set {ŷ_g}; the dimension index of the maximum value of each test prediction vector ŷ_g is the prediction class of the target in the SAR image of the corresponding test query sample q_g, wherein ŷ_g represents the test prediction vector of dimension C_test whose c-th dimension element value represents the probability that the target in the SAR image contained in q_g belongs to the c-th target class.
Compared with the prior art, the invention has the following advantages:
1. The invention updates the parameters of all first convolution layers and all second convolution layers in the embedding network module E and the parameters of all first fully connected layers and all second fully connected layers in the graph attention network module G through the mixed loss value l of the training task set, formed as the weighted sum of the classification loss value l_C of the training task set and the embedding loss value l_E of the training task set, thereby strengthening the similarity among the features of the same SAR target class and the differences among the features of different SAR target classes.
2. During the training of the network model H based on mixed loss and graph attention, the data enhancement module D effectively alleviates the risk of model overfitting by enhancing all SAR images; and the data enhancement module D, the embedding network module E, and the node feature initialization module I obtain effective features of each SAR image and combine them with the corresponding label vectors, providing data support for the class prediction of the graph attention network module G. Compared with the prior art, the classification accuracy for small sample SAR targets is further improved.
Experimental results show that the method achieves higher classification accuracy in small sample SAR target classification.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention.
FIG. 2 is a flow chart of an implementation of the present invention for iterative training of a network model H based on mixed loss and graph attention.
Fig. 3 is a flow chart of an implementation of the present invention to obtain a target classification result of a small sample SAR image.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments.
Referring to fig. 1, the present invention includes the steps of:
Step 1) Obtaining a training sample set T_train and a test sample set T_test:
(1a) acquiring synthetic aperture radar (SAR) images covering C different target classes, wherein each target class corresponds to M SAR images of size h×h, each SAR image contains 1 target, C ≥ 10, M ≥ 200, and h = 128;
(1b) labeling the target class in each SAR image, randomly selecting C_train target classes and taking their C_train × M SAR images together with the label of each SAR image as the training sample set T_train, and taking the C_test × M SAR images of the remaining C_test target classes together with their labels as the test sample set T_test, wherein C_train + C_test = C and 3 ≤ C_test ≤ 5;
The invention trains the model with target classes that have sufficient SAR images, so that the trained model attains high classification accuracy in the small sample SAR target classification of other classes; therefore, the training sample set T_train and the test sample set T_test constructed in this step contain different SAR target classes;
Step 2) Constructing a network model H based on mixed loss and graph attention:
Construct a network model H comprising a data enhancement module D, an embedding network module E, a node feature initialization module I, and a graph attention network module G cascaded in sequence, wherein the embedding network module E comprises five sequentially cascaded first convolution modules E_C and a second convolution module E_L; each first convolution module E_C comprises a first convolution layer, a first batch normalization layer, a Mish activation layer, and a max pooling layer stacked in sequence, and the second convolution module E_L comprises a second convolution layer and a second batch normalization layer stacked in sequence; the graph attention network module G comprises three sequentially cascaded graph update layers U, each graph update layer U comprising an edge feature construction module U_E, an attention weight calculation module U_W, and a node feature update module U_N stacked in sequence, wherein each edge feature construction module U_E comprises five sequentially stacked first fully connected layers and each node feature update module U_N comprises one second fully connected layer;
The specific parameters of the embedding network module E and the graph attention network module G are as follows:
In the embedding network module E, the convolution kernels of the five first convolution layers contained in the first convolution modules E_C are all of size 3×3 with stride 1 and padding 1; the first three first convolution layers have 64 convolution kernels each, and the fourth and fifth first convolution layers have 128 each; the pooling kernels of the five max pooling layers are all of size 2×2 with sliding stride 2; the second convolution layer in the second convolution module E_L has convolution kernels of size 4×4 with stride 1;
In the graph attention network module G, the first four first fully connected layers of the edge feature construction module U_E contained in each graph update layer U have 96 neurons each and the fifth first fully connected layer has 1 neuron; the second fully connected layer of the node feature update module U_N contained in the first two graph update layers U has 24 neurons, and the second fully connected layer of the node feature update module U_N contained in the third graph update layer U has C_test neurons;
Step 3) Performing iterative training on the network model H based on mixed loss and graph attention; the implementation steps are shown in FIG. 2:
(3a) letting n denote the iteration counter and N the maximum number of iterations, wherein N ≥ 1000, and initializing n = 0;
(3b) randomly selecting from the training sample set T_train a group of C_test target classes comprising C_test × M SAR images in total, and one-hot encoding the label of each SAR image to obtain the C_test-dimensional label vector of each SAR image; then randomly selecting from the C_test × M SAR images the K SAR images of each target class and their corresponding label vectors as the training support sample set S = {s_a}, and taking the remaining C_test(M−K) SAR images and their corresponding label vectors as the training query sample set Q = {q_b}; after one-hot encoding, the c-th dimension element of the label vector of each SAR image indicates the probability that the target in the SAR image belongs to the c-th of the C_test target classes, s_a represents the a-th training support sample consisting of an SAR image and its corresponding label vector, q_b represents the b-th training query sample consisting of an SAR image and its corresponding label vector, and 1 ≤ K ≤ 10;
To ensure that the trained model attains high classification accuracy in the small sample SAR target classification of other classes, the invention simulates the test process during training: it randomly selects from the training sample set T_train the same number of SAR target classes as used in the test process and divides all the selected SAR images and their corresponding label vectors into two subsets, the training support sample set S and the training query sample set Q. The training support sample set S contains few SAR images, only C_test·K, which simulates the small number of SAR images that can be acquired from a small sample SAR target; the SAR images in the training query sample set Q simulate the SAR images that need to be classified. At this point, the training support sample set S provides data support for the model to predict the classes of the SAR images in the training query sample set Q; a code sketch of this episode construction is given after this paragraph;
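A minimal sketch of the episode construction in NumPy follows; the array layout (images grouped per class) and all names are our assumptions:

import numpy as np

def sample_episode(images, c_test=5, k=10, rng=None):
    """Build one training episode from a (num_classes, M, h, h) image array:
    select c_test classes, one-hot encode their labels, and split each class
    into k support images and M - k query images."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(images.shape[0], size=c_test, replace=False)
    support, support_y, query, query_y = [], [], [], []
    for new_label, cls in enumerate(classes):
        one_hot = np.eye(c_test)[new_label]       # C_test-dimensional label vector
        order = rng.permutation(images.shape[1])  # shuffle the M images of the class
        for idx in order[:k]:                     # K support samples per class
            support.append(images[cls, idx]); support_y.append(one_hot)
        for idx in order[k:]:                     # M - K query samples per class
            query.append(images[cls, idx]); query_y.append(one_hot)
    return (np.stack(support), np.stack(support_y),
            np.stack(query), np.stack(query_y))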
(3c) combining the training support sample set S with each training query sample q_b into a training task T_b; at this point, each training task T_b is a small sample SAR target problem in which the model must predict the class of the SAR image in the training query sample q_b through the training support sample set S of known classes; combining all training tasks T_b yields the training task set {T_b}, which is forward-propagated as the input of the network model H based on mixed loss and graph attention:
(3c1) the data enhancement module D performs data enhancement on each SAR image in the training task set {T_b}: a power transformation is performed on each SAR image, noise is added to the power-transformed SAR image, a flip transformation is performed on the noise-added SAR image, and a rotation transformation is performed on the flipped SAR image, obtaining the enhanced training task set {T'_b}, wherein T'_b represents the enhanced training task corresponding to training task T_b, s'_a represents the enhanced training support sample corresponding to the training support sample s_a in T_b, and q'_b represents the enhanced training query sample corresponding to training query sample q_b;
Using the data enhancement module D changes the SAR images of the training task set {T_b} in every iteration, which increases the diversity of the training task set and reduces the risk of model overfitting during training. The data enhancement module D is implemented as follows: perform a power transformation on each SAR image B, i.e. B_1 = (B/255)^γ × 255, to obtain the SAR image B_1; add noise to each SAR image B_1, i.e. B_2 = B_1 + noise, to obtain the SAR image B_2; perform a flip transformation on each SAR image B_2, i.e. B_3 = flip(B_2), to obtain the SAR image B_3; and perform a rotation transformation on each SAR image B_3, i.e. B' = rot(B_3), to obtain the enhanced SAR image B'. Here γ represents a power, randomly taken from the range [0.7, 1.3]; noise represents noise uniformly distributed over [−α, α], with α randomly taken from (0, 50]; flip(·) represents a random left-right or up-down flip, each flip mode having probability 1/2; and rot(·) represents a random clockwise rotation by 90°, 180°, or 270°, each rotation angle having probability 1/3;
(3c2) the embedding network module E maps each SAR image in every enhanced training task T'_b of the enhanced training task set into an embedding vector, obtaining the training embedding vector sets {F_b}, and the embedding loss value l_E of the training task set is computed from the training embedding vector sets {F_b} with the embedding loss function L_E:

    l_E = −(1 / (C_test(M−K))) Σ_b log( exp(−d(f_b^q, μ_c(b))) / Σ_{c=1}^{C_test} exp(−d(f_b^q, μ_c)) )

wherein F_b = {f_{b,a} | 1 ≤ a ≤ C_test·K+1} represents the training embedding vector set corresponding to enhanced training task T'_b, f_{b,a} with a ≠ C_test·K+1 represents the training embedding vector corresponding to enhanced training support sample s'_a, f_b^q = f_{b,C_test·K+1} represents the training embedding vector corresponding to enhanced training query sample q'_b, log(·) represents the logarithm with the natural constant e as base, exp(·) represents the exponential with the natural constant e as base, Σ represents summation, μ_c represents the class center of the c-th target class, obtained by averaging the training embedding vectors corresponding to the SAR images of the c-th target class contained in the training support sample set S of training task T_b, μ_c(b) represents the class center of the target class to which the target in the SAR image contained in training query sample q_b belongs, and d represents the metric function, d(p, q) = ||p − q||_2;
In this step, the process in which the embedding network module E maps an SAR image into an embedding vector is equivalent to extracting the features of each SAR image. The embedding loss calculates the class center of each SAR target class within every training task T_b and measures the distance between each class center and the embedding vector corresponding to the training query sample q_b; under the constraint of the embedding loss function, the embedding vector corresponding to the training query sample q_b is drawn closer to the class center of the same class and pushed farther from the class centers of different classes. After multiple iterations, the embedding network module E extracts features with higher similarity within the same SAR target class and stronger differences between different SAR target classes, thereby improving the subsequent class prediction accuracy of the graph attention network module G for the SAR image in each training query sample q_b;
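A minimal PyTorch sketch of this embedding loss for one batch of query samples follows; the tensor layouts and the use of the squared Euclidean distance are our assumptions, and all names are ours:

import torch
import torch.nn.functional as F

def embedding_loss(support_emb, support_y, query_emb, query_y):
    """Embedding loss l_E (sketch): class centers are the means of the support
    embeddings of each class; each query embedding is scored by a softmax over
    negative squared Euclidean distances to the centers.
    Shapes: support_emb (C*K, D), support_y (C*K, C) one-hot,
    query_emb (Q, D), query_y (Q, C) one-hot."""
    counts = support_y.sum(dim=0).clamp(min=1).unsqueeze(1)  # images per class
    centers = (support_y.t() @ support_emb) / counts         # class centers (C, D)
    dist = torch.cdist(query_emb, centers) ** 2              # distances to centers
    log_prob = F.log_softmax(-dist, dim=1)                   # softmax over -d(., mu_c)
    return -(query_y * log_prob).sum(dim=1).mean()           # mean negative log-likelihood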
(3c3) the node feature initialization module I constructs a virtual label vector v from the C_test-dimensional vector 1 whose element values in every dimension are all 1, i.e., the uniform label v = (1/C_test)·1 for a query node of unknown class; it splices every training embedding vector f_{b,a} with a ≠ C_test·K+1 in each training embedding vector set F_b with the label vector of the corresponding SAR image, and simultaneously splices the training embedding vector f_b^q in each training embedding vector set F_b with the virtual label vector v, obtaining the training node layer-1 feature sets {X_b^(1)}, wherein X_b^(1) represents the training node layer-1 feature set corresponding to training embedding vector set F_b and x_{b,a}^(1) represents the training node layer-1 feature corresponding to training embedding vector f_{b,a};
In this step, splicing the training embedding vector f_{b,a} corresponding to each enhanced training support sample s'_a with the label vector in s_a adds class information to each training embedding vector, and the resulting training node layer-1 features provide data support for the model's class prediction of the SAR image in each training query sample q_b; for each training query sample q_b whose class is to be predicted, the corresponding training embedding vector f_b^q is spliced with the virtual label vector v to ensure that all training node layer-1 features are of equal dimension;
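A minimal PyTorch sketch of the node feature initialization for one task; the uniform 1/C_test virtual label is our reading of the all-ones construction, and all names are ours:

import torch

def init_node_features(support_emb, support_y, query_emb, c_test=5):
    """Node feature initialization module I (sketch): splice each support
    embedding with its label vector, and the query embedding with the virtual
    label vector (assumed uniform, 1/C_test per dimension)."""
    virtual = torch.full((query_emb.shape[0], c_test), 1.0 / c_test)
    support_nodes = torch.cat([support_emb, support_y], dim=1)  # (C*K, D + C_test)
    query_nodes = torch.cat([query_emb, virtual], dim=1)        # (1, D + C_test)
    return torch.cat([support_nodes, query_nodes], dim=0)       # layer-1 node features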
(3c4) the graph attention network module G takes the training node layer-1 feature sets {X_b^(1)} as input and, for the training node layer-1 feature in each X_b^(1) corresponding to training query sample q_b, predicts the class of the target in the SAR image, obtaining the training prediction vector set {ŷ_b}, wherein ŷ_b represents the training prediction vector of dimension C_test corresponding to that training node layer-1 feature, and its c-th element represents the prediction probability that the target in the SAR image contained in training query sample q_b belongs to the c-th target class;
In this step, the graph attention network module G performs class prediction on the SAR image of the training query sample q_b corresponding to each training node layer-1 feature. The graph attention network module G first regards each SAR image in a training task T_b as a node and connects two directed edges of opposite directions between every pair of nodes, i.e., it forms a fully connected directed graph in which every node and every edge has features; the initial feature of each node, i.e., the input feature of the 1st graph update layer U, is the training node layer-1 feature. The features are then updated multiple times through the graph update layers U: the i-th graph update layer U updates the input training node layer-i features to the training node layer-(i+1) features. The feature of each edge in the i-th graph update layer U is calculated from the features of the corresponding nodes in the i-th graph update layer U, each edge feature corresponds to one attention weight, and the attention weights guide the corresponding nodes to aggregate features from the other nodes, whereby the node features are updated. After the node features are updated three times, the graph attention network module G converts the training node layer-4 feature corresponding to the SAR image of the training query sample q_b into its prediction label. The specific implementation steps of the whole process are as follows:
(3c41) letting i denote the graph-update iteration counter; since the graph attention network module G comprises three graph update layers U, the maximum number of iterations is 3; let i = 1;
(3c42) the edge feature construction module U_E of the i-th graph update layer U constructs edge features from the training node layer-i features contained in each training node layer-i feature set X_b^(i) of the training node layer-i feature sets, obtaining the training edge layer-i feature sets:

    e_hj^(i) = U_E^(i)(abs(x_h^(i) − x_j^(i)))

wherein x_h^(i) represents the h-th training node layer-i feature in X_b^(i), x_j^(i) represents the j-th training node layer-i feature in X_b^(i), e_hj^(i) represents the training edge layer-i feature obtained from the training node layer-i features x_h^(i) and x_j^(i), abs(·) represents taking the absolute value of each element, and U_E^(i)(·) represents inputting into the first fully connected layers, sequentially stacked in the above order, of the edge feature construction module U_E of the i-th graph update layer U;
(3c43) the attention weight calculation module U_W of the i-th graph update layer U calculates an attention weight from each training edge layer-i feature contained in every training edge layer-i feature set, obtaining the training layer-i attention weight sets:

    w_hj^(i) = exp(e_hj^(i)) / Σ_{j'∈J_h} exp(e_hj'^(i))

wherein w_hj^(i) represents the training layer-i attention weight corresponding to training edge layer-i feature e_hj^(i), and J_h represents the set of positive integers from 1 to C_test·K+1 with h removed;
In this step, the attention weight calculation module U_W calculates the attention weights among the nodes from the feature of each edge; the weights are normalized, that is, the attention weights from one node to all the other nodes sum to 1;
(3c44) the node feature update module U_N of the i-th graph update layer U updates the node features using the training node layer-i feature sets and the training layer-i attention weight sets, obtaining the training node layer-(i+1) feature sets:

    x_h^(i+1) = U_N^(i)( x_h^(i) ‖ Σ_{j∈J_h} w_hj^(i) x_j^(i) )

wherein x_h^(i+1) represents the training node layer-(i+1) feature corresponding to training node layer-i feature x_h^(i), a ‖ b represents splicing vector b after vector a, and U_N^(i)(n) represents inputting n into the second fully connected layer of the node feature update module U_N of the i-th graph update layer U;
In this step, through the attention process each node aggregates the features of all the other nodes; the aggregated feature is spliced with the node's own feature, and the spliced result is input into the second fully connected layer to obtain the updated feature, that is, the training node layer-i feature x_h^(i) is updated to the training node layer-(i+1) feature x_h^(i+1). Since the node feature aggregation is guided by the attention weights, the node corresponding to the SAR image of each training query sample q_b acquires class information from the nodes of the same class and, after multiple updates, is converted into the prediction class;
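A minimal PyTorch sketch of one graph update layer U, combining steps (3c42)-(3c44), follows. The LeakyReLU activations between the first fully connected layers are our assumption (the text does not name an activation), as are all class and variable names:

import torch
import torch.nn as nn

class GraphUpdateLayer(nn.Module):
    """One graph update layer U (sketch): edge features from the element-wise
    absolute difference of node features through five stacked fully connected
    layers (U_E, widths 96, 96, 96, 96, 1), softmax-normalized attention
    weights over the other nodes (U_W), and node update from the node's own
    feature spliced with the attention-weighted aggregate (U_N)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        widths = [in_dim, 96, 96, 96, 96, 1]
        layers = []
        for d_in, d_out in zip(widths[:-2], widths[1:-1]):
            layers += [nn.Linear(d_in, d_out), nn.LeakyReLU()]
        layers.append(nn.Linear(widths[-2], widths[-1]))  # scalar edge score
        self.u_e = nn.Sequential(*layers)
        self.u_n = nn.Linear(2 * in_dim, out_dim)  # second fully connected layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n = x.shape[0]
        diff = (x.unsqueeze(1) - x.unsqueeze(0)).abs()  # (n, n, in_dim) pairs
        scores = self.u_e(diff).squeeze(-1)             # (n, n) edge features
        mask = torch.eye(n, dtype=torch.bool, device=x.device)
        w = torch.softmax(scores.masked_fill(mask, float("-inf")), dim=1)
        aggregated = w @ x                              # attention-guided aggregation
        return self.u_n(torch.cat([x, aggregated], dim=1))  # own feature || aggregate

Three such layers with out_dim 24, 24, and C_test, followed by a softmax on the query node's output, reproduce the update chain of steps (3c41)-(3c45).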
(3c45) judging whether i = 3: if so, performing a softmax transformation on the training node layer-4 feature corresponding to each training query sample in the obtained training node layer-4 feature sets to obtain the training prediction vector set {ŷ_b}; otherwise, letting i = i + 1 and returning to step (3c42), wherein ŷ_b is the training prediction vector corresponding to the training node layer-4 feature of training query sample q_b;
(3c5) using the classification loss function L_C to compute the classification loss value l_C of the training task set from the training prediction vector set {ŷ_b} and all the label vectors in the training query sample set Q:

    l_C = −(1 / (C_test(M−K))) Σ_b Σ_{c=1}^{C_test} y_{b,c} log(ŷ_{b,c})

wherein ŷ_{b,c} represents the value of the c-th element of training prediction vector ŷ_b, and y_{b,c} represents the value of the c-th dimension element in the label vector of the SAR image corresponding to ŷ_b;
(3d) computing the weighted sum of the classification loss value l_C of the training task set and the embedding loss value l_E of the training task set to obtain the mixed loss value l of the training task set, l = λ·l_C + (1−λ)·l_E, and then updating, through the mixed loss value l and using the stochastic gradient descent algorithm, the parameters of all first convolution layers and all second convolution layers in the embedding network module E and the parameters of all first fully connected layers and all second fully connected layers in the graph attention network module G, wherein λ is a weight and 0.7 ≤ λ < 1;
In this step, the mixed loss value l obtained by weighting the classification loss value l_C and the embedding loss value l_E updates the parameters of the whole model while strengthening the similarity among the features of the same SAR target class and the differences among the features of different SAR target classes, thereby improving the classification accuracy;
(3e) judging whether n ≥ N: if so, obtaining the trained network model H' based on mixed loss and graph attention; otherwise, letting n = n + 1 and returning to step (3b);
Step 4) Obtaining the target classification result of the small sample SAR images; the implementation steps are shown in FIG. 3:
(4a) one-hot encoding the label of each SAR image in the test sample set T_test to obtain the C_test-dimensional label vector of each SAR image; then randomly selecting, from the C_test × M SAR images of the test sample set T_test, the K SAR images of each target class and their corresponding label vectors as the test support sample set S' = {s_e}, and taking the remaining C_test(M−K) SAR images and their corresponding label vectors as the test query sample set Q' = {q_g}, wherein s_e represents the e-th test support sample consisting of an SAR image and its corresponding label vector, and q_g represents the g-th test query sample consisting of an SAR image and its corresponding label vector;
(4b) combining the test support sample set S' with each test query sample q_g into a test task T_g, obtaining the test task set {T_g}, and forward-propagating {T_g} as the input of the trained network model H' based on mixed loss and graph attention:
(4b1) the trained embedding network module E' maps each SAR image in every test task T_g of the test task set into an embedding vector, obtaining the test embedding vector sets {F_g}, wherein F_g = {f_{g,e} | 1 ≤ e ≤ C_test·K+1} represents the test embedding vector set corresponding to test task T_g, f_{g,e} with e ≠ C_test·K+1 represents the test embedding vector corresponding to test support sample s_e, and f_g^q = f_{g,C_test·K+1} represents the test embedding vector corresponding to test query sample q_g;
(4b2) the node feature initialization module I constructs a virtual label vector v, splices every test embedding vector f_{g,e} with e ≠ C_test·K+1 in each test embedding vector set F_g with the label vector of the corresponding SAR image, and simultaneously splices the test embedding vector f_g^q in each test embedding vector set F_g with the virtual label vector v, obtaining the test node layer-1 feature sets {X_g^(1)}, wherein X_g^(1) represents the test node layer-1 feature set corresponding to test embedding vector set F_g and x_{g,e}^(1) represents the test node layer-1 feature corresponding to test embedding vector f_{g,e};
(4b3) The trained graph attention network module G' takes the test-node layer-1 feature group set $\{V_g^1\}$ as input and performs category prediction for the target in the SAR image of the test query sample $q_g^{test}$ corresponding to the test-node layer-1 feature $v_{g,C_{test}K+1}^1$ in each group $V_g^1$, obtaining the test prediction result vector set $\{\hat{y}_g\}_{g=1}^{C_{test}(M-K)}$. The dimension index of the maximum value in each test prediction vector $\hat{y}_g$ is the predicted category of the target in the SAR image of the corresponding test query sample $q_g^{test}$, where $\hat{y}_g$ is the $C_{test}$-dimensional test prediction result vector corresponding to $v_{g,C_{test}K+1}^1$, and the value of its c-th element is the probability that the target in the SAR image of the corresponding test query sample belongs to the c-th target class;
In this step, the test prediction result vector set $\{\hat{y}_g\}$ is obtained similarly to step (3c4); only the input data of the trained graph attention network module G' is changed.
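For illustration, the read-out of step (4b3) can be sketched in a few lines of Python: the predicted class of each test query sample is simply the index of the largest element of its prediction vector. The array `pred_vectors` and its values below are hypothetical placeholders for the network's softmax outputs.

```python
# Minimal sketch of the step (4b3) read-out: the predicted class of each test
# query sample is the dimension index of the maximum element of its C_test-
# dimensional prediction vector. Values are purely illustrative.
import numpy as np

pred_vectors = np.array([
    [0.05, 0.80, 0.05, 0.05, 0.05],   # query 1: class 1 most probable
    [0.10, 0.10, 0.60, 0.10, 0.10],   # query 2: class 2 most probable
])

predicted_classes = pred_vectors.argmax(axis=1)  # index of the maximum value
print(predicted_classes)  # -> [1 2]
```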
The technical effects of the invention are further illustrated below in combination with simulation experiments:
1. Simulation experiment conditions and contents:
The hardware platform of the simulation experiment: the GPU is an NVIDIA GeForce RTX 3090. The software platform: the operating system is Windows 10. The data set of the simulation experiment is the public MSTAR data set, in which the SAR sensor operates in high-resolution spotlight mode with a resolution of 0.3 m × 0.3 m, works in the X band with HH polarization, the pitch angle is 17°, the azimuth angle varies continuously from 0° to 360° at intervals of about 5°, and the size after image cropping is 128 × 128 pixels. The MSTAR data set contains 10 classes of ground military vehicle targets, i.e., C = 10, with model numbers 2S1, BMP-2, BRDM-2, BTR-60, BTR-70, D-7, T-62, T-72, ZIL-131 and ZSU-234, and each target class has 210 SAR images, i.e., M = 210.
In order to compare the small-sample SAR target classification accuracy with that of the existing small-sample SAR target recognition method based on the graph attention network, 1050 SAR images of 5 target categories together with the label of each SAR image are selected from the MSTAR data set as the training sample set, i.e., $C_{train} = 5$, and the 1050 SAR images of the remaining 5 target categories together with the label of each SAR image are selected as the test sample set, i.e., $C_{test} = 5$. Meanwhile, the number K of training/test support samples sampled for each target class in each training/test task is 10, and the number M − K of training/test query samples is 200. The target classes in the training and test sample sets and the number of SAR images per class are shown in Table 1:
TABLE 1
[Table 1: target classes of the training and test sample sets and the number of SAR images per class; rendered as an image in the original publication.]
The classification confusion matrix and average classification accuracy of the invention and of the existing small-sample SAR target recognition method based on the graph attention network are compared by simulation; the results are shown in Table 2:
TABLE 2
[Table 2: classification confusion matrix and average classification accuracy of the compared methods; rendered as an image in the original publication.]
As can be seen from Table 2, the average small-sample SAR target classification accuracy of the invention is 5.1% higher than that of the prior art.
The foregoing description is only an example of the invention and is not intended to limit it; it will be apparent to those skilled in the art that various changes and modifications in form and detail may be made without departing from the principles and structure of the invention, but such changes and modifications fall within the protection scope defined by the appended claims.

Claims (4)

1. A small sample SAR target classification method based on mixed loss and graph attention is characterized by comprising the following steps:
(1) Obtaining a training sample set $D^{train}$ and a test sample set $D^{test}$:
(1a) Acquiring a plurality of synthetic aperture radar (SAR) images containing C different target categories, wherein each target category corresponds to M SAR images of size h × h, each SAR image contains one target, C ≥ 10, M ≥ 200, and h = 128;
(1b) Labeling the target class in each SAR image, randomly selecting the $C_{train} \times M$ SAR images containing $C_{train}$ target categories together with the label of each SAR image as the training sample set $D^{train}$, and taking the remaining $C_{test} \times M$ SAR images containing $C_{test}$ target categories together with the label of each SAR image as the test sample set $D^{test}$, where $C_{train} + C_{test} = C$ and $3 \le C_{test} \le 5$;
(2) Constructing a network model H based on mixed loss and graph attention:
Constructing a network model H based on mixed loss and graph attention, comprising a data enhancement module D, an embedded network module E, a node feature initialization module I and a graph attention network module G cascaded in sequence, where the embedded network module E comprises a plurality of sequentially cascaded first convolution modules $E_C$ and a second convolution module $E_L$; each first convolution module $E_C$ comprises a first convolution layer, a first batch normalization layer, a Mish activation layer and a max-pooling layer stacked in sequence, and the second convolution module $E_L$ comprises a second convolution layer and a second batch normalization layer stacked in sequence; the graph attention network module G comprises a plurality of sequentially cascaded graph update layers U, each graph update layer U comprising an edge feature construction module $U_E$, an attention weight calculation module $U_W$ and a node feature update module $U_N$ stacked in sequence, where each edge feature construction module $U_E$ comprises a plurality of sequentially stacked first fully-connected layers and each node feature update module $U_N$ comprises a second fully-connected layer;
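As a rough illustration of this cascade, the following Python sketch (PyTorch assumed) wires four stand-in callables in the order D → E → I → G; the names `enhance`, `embed`, `init_nodes` and `graph_attention` and their interfaces are assumptions for illustration, not the patent's implementation. Note that D is applied only during training, consistent with steps (3c1) and (4b1).

```python
# Sketch of the module cascade of model H under the assumptions stated above.
import torch.nn as nn

class MixedLossGraphAttentionModel(nn.Module):
    """H: data enhancement D -> embedded network E -> node init I -> graph attention G."""
    def __init__(self, enhance, embed, init_nodes, graph_attention):
        super().__init__()
        self.enhance = enhance                   # module D (training only)
        self.embed = embed                       # module E
        self.init_nodes = init_nodes             # module I
        self.graph_attention = graph_attention   # module G

    def forward(self, task_images, support_labels, training=True):
        if training:                             # no enhancement at test time, cf. (4b1)
            task_images = self.enhance(task_images)
        f = self.embed(task_images)              # embedded vectors, cf. (3c2)/(4b1)
        v1 = self.init_nodes(f, support_labels)  # layer-1 node features, cf. (3c3)/(4b2)
        return self.graph_attention(v1)          # prediction vectors, cf. (3c4)/(4b3)
```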
(3) performing iterative training on a network model H based on mixed loss and graph attention:
(3a) Initializing the iteration counter n and the maximum number of iterations N, with N ≥ 1000, and setting n = 0;
(3b) Randomly selecting from the training sample set $D^{train}$ a group of $C_{test} \times M$ SAR images containing $C_{test}$ target categories, and one-hot encoding the label of each SAR image to obtain the $C_{test}$-dimensional label vector of each SAR image; then randomly selecting, from the $C_{test} \times M$ SAR images, the K SAR images contained in each target category together with their corresponding label vectors as the training support sample set $S^{train} = \{s_a^{train}\}_{a=1}^{C_{test}K}$, and taking the remaining $C_{test}(M-K)$ SAR images and their corresponding label vectors as the training query sample set $Q^{train} = \{q_b^{train}\}_{b=1}^{C_{test}(M-K)}$. After one-hot encoding, the c-th element of the label vector of each SAR image indicates the probability that the target in the SAR image belongs to the c-th of the $C_{test}$ target classes; $s_a^{train}$ denotes the a-th training support sample consisting of an SAR image and its corresponding label vector, $q_b^{train}$ denotes the b-th training query sample consisting of an SAR image and its corresponding label vector, and $1 \le K \le 10$;
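The episode construction of step (3b) can be sketched as follows; the container `images` (mapping each class to its M SAR images) and all function names are hypothetical, and NumPy is assumed.

```python
# Illustrative episode sampling for step (3b): each selected class contributes
# K support samples and M-K query samples, each paired with its one-hot label.
import numpy as np

def sample_episode(images, class_ids, K):
    """Split each selected class into K support and M-K query samples."""
    C_test = len(class_ids)
    support, query = [], []
    for local_c, c in enumerate(class_ids):
        one_hot = np.zeros(C_test)
        one_hot[local_c] = 1.0                      # C_test-dimensional label vector
        perm = np.random.permutation(len(images[c]))
        for idx in perm[:K]:
            support.append((images[c][idx], one_hot))
        for idx in perm[K:]:
            query.append((images[c][idx], one_hot))
    return support, query
```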
(3c) Combining the training support sample set $S^{train}$ with each training query sample $q_b^{train}$ into a training task $T_b = S^{train} \cup \{q_b^{train}\}$, obtaining the training task set $\mathcal{T} = \{T_b\}_{b=1}^{C_{test}(M-K)}$, and forward-propagating $\mathcal{T}$ as the input of the network model H based on mixed loss and graph attention:
(3c1) The data enhancement module D performs data enhancement on each SAR image in the training task set $\mathcal{T}$: a power transformation is applied to each SAR image, noise is added to the power-transformed image, a flip transformation is applied to the noisy image, and a rotation transformation is applied to the flipped image, obtaining the enhanced training task set $\tilde{\mathcal{T}} = \{\tilde{T}_b\}_{b=1}^{C_{test}(M-K)}$, where $\tilde{T}_b$ is the enhanced training task corresponding to training task $T_b$, $\tilde{s}_a^{train}$ is the enhanced training support sample corresponding to the training support sample $s_a^{train}$ in $T_b$, and $\tilde{q}_b^{train}$ is the enhanced training query sample corresponding to the training query sample $q_b^{train}$;
(3c2) The embedded network module E maps each SAR image in each enhanced training task $\tilde{T}_b$ of the enhanced training task set $\tilde{\mathcal{T}}$ to obtain the training embedded vector group set $\{F_b\}_{b=1}^{C_{test}(M-K)}$, and uses the embedding loss function $L_E$ to compute the embedding loss value $l_E$ of the training task set $\mathcal{T}$ from $\{F_b\}$:

$l_E = -\sum_{b=1}^{C_{test}(M-K)} \log \dfrac{\exp\!\big(-d(f_{b,C_{test}K+1},\, \mu_{c(b)})\big)}{\sum_{c=1}^{C_{test}} \exp\!\big(-d(f_{b,C_{test}K+1},\, \mu_c)\big)}$

where $F_b = \{f_{b,a}\}_{a=1}^{C_{test}K+1}$ is the training embedded vector group corresponding to $\tilde{T}_b$: for $a \neq C_{test}K+1$, $f_{b,a}$ is the training embedded vector corresponding to the enhanced training support sample $\tilde{s}_a^{train}$, and $f_{b,C_{test}K+1}$ is the training embedded vector corresponding to the enhanced training query sample $\tilde{q}_b^{train}$; log(·) denotes the logarithm with base the natural constant e, exp(·) the exponential with base e, and Σ summation; $\mu_c$ is the class center of the c-th target class, obtained by averaging the training embedded vectors corresponding to the SAR images of the c-th target class contained in the training support sample set $S^{train}$ of training task $T_b$; $\mu_{c(b)}$ is the class center of the target class to which the target in the SAR image of the training query sample of $T_b$ belongs; and d is the metric function, $d(p,q) = \|p - q\|_2$;
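Under the prototypical reading of the embedding loss reconstructed above, one task's contribution to $l_E$ could be computed as in the following sketch (PyTorch assumed; all tensor names and shapes are assumptions):

```python
# One training task's term of the embedding loss l_E: the query embedding is
# pulled toward the mean embedding (class center) of its own class and pushed
# away from the other class centers.
import torch

def embedding_loss(support_emb, support_cls, query_emb, query_cls, C_test):
    # support_emb: (C_test*K, d); support_cls: (C_test*K,) class indices
    # query_emb: (d,); query_cls: class index of the query target
    centers = torch.stack([support_emb[support_cls == c].mean(dim=0)
                           for c in range(C_test)])              # (C_test, d) class centers
    dists = torch.norm(query_emb.unsqueeze(0) - centers, dim=1)  # d(p, q) = ||p - q||_2
    log_probs = torch.log_softmax(-dists, dim=0)                 # log exp(-d)/sum exp(-d)
    return -log_probs[query_cls]                                 # this task's loss term

emb = torch.randn(50, 128)                          # C_test*K = 50 support embeddings
cls = torch.arange(5).repeat_interleave(10)         # 5 classes, K = 10 each
print(embedding_loss(emb, cls, torch.randn(128), query_cls=3, C_test=5))
```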
(3c3) The node feature initialization module I constructs a virtual label vector $y^v$ and, for each training embedded vector group $F_b$, concatenates each training embedded vector $f_{b,a}$ with $a \neq C_{test}K+1$ with the label vector of its corresponding SAR image, while concatenating the training embedded vector $f_{b,C_{test}K+1}$ of each group with the virtual label vector $y^v$, obtaining the training-node layer-1 feature group set $\{V_b^1\}_{b=1}^{C_{test}(M-K)}$, where $y^v = \frac{1}{C_{test}}\mathbf{1}$ and $\mathbf{1}$ is the $C_{test}$-dimensional vector whose element values are all 1; $V_b^1 = \{v_{b,a}^1\}_{a=1}^{C_{test}K+1}$ is the training-node layer-1 feature group corresponding to $F_b$, and $v_{b,a}^1$ is the training-node layer-1 feature corresponding to the training embedded vector $f_{b,a}$;
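A minimal sketch of this concatenation step follows, assuming PyTorch and the hypothetical tensor layout below (support embeddings paired with one-hot labels, the query embedding paired with the uniform virtual label $y^v$):

```python
# Step (3c3) sketch: node layer-1 features are [embedding || label vector] for
# support nodes and [embedding || y_v] for the query node, y_v = (1/C_test) * 1.
import torch

def init_node_features(support_emb, support_labels, query_emb, C_test):
    # support_emb: (C_test*K, d); support_labels: (C_test*K, C_test); query_emb: (d,)
    y_virtual = torch.full((C_test,), 1.0 / C_test)           # virtual label vector
    support_nodes = torch.cat([support_emb, support_labels], dim=1)
    query_node = torch.cat([query_emb, y_virtual]).unsqueeze(0)
    return torch.cat([support_nodes, query_node], dim=0)      # (C_test*K+1, d+C_test)
```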
(3c4) The graph attention network module G takes the training-node layer-1 feature group set $\{V_b^1\}$ as input and performs category prediction for the target in the SAR image of the training query sample $q_b^{train}$ corresponding to the training-node layer-1 feature $v_{b,C_{test}K+1}^1$ in each group $V_b^1$, obtaining the training prediction result vector set $\{\hat{y}_b\}_{b=1}^{C_{test}(M-K)}$, where $\hat{y}_b$ is the $C_{test}$-dimensional training prediction result vector corresponding to $v_{b,C_{test}K+1}^1$ and its c-th element is the predicted probability that the target in the SAR image of the corresponding training query sample $q_b^{train}$ belongs to the c-th target class;
(3c5) Using the classification loss function $L_C$ to compute the classification loss value $l_C$ of the training task set $\mathcal{T}$ from the training prediction result vector set $\{\hat{y}_b\}$ and all label vectors in the training query sample set $Q^{train}$:

$l_C = -\sum_{b=1}^{C_{test}(M-K)} \sum_{c=1}^{C_{test}} y_{b,c} \log \hat{y}_{b,c}$

where $\hat{y}_{b,c}$ is the value of the c-th element of the training prediction result vector $\hat{y}_b$ and $y_{b,c}$ is the value of the c-th element of the label vector of the SAR image corresponding to $\hat{y}_b$;
(3d) Computing the weighted sum of the classification loss value $l_C$ of the training task set $\mathcal{T}$ and the embedding loss value $l_E$ of the training task set $\mathcal{T}$ to obtain the mixed loss value l of the training task set, $l = \lambda l_C + (1-\lambda) l_E$, and then updating the parameters of all first convolution layers and all second convolution layers of the embedded network module E and of all first fully-connected layers and all second fully-connected layers of the graph attention network module G through the mixed loss value l using the stochastic gradient descent algorithm, where λ is the weight and $0.7 \le \lambda < 1$;
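A minimal sketch of the mixed-loss update of step (3d) follows, with stand-in tensors in place of the real losses: $l = \lambda l_C + (1-\lambda) l_E$ is formed and back-propagated with stochastic gradient descent. Everything except the loss combination itself is a placeholder.

```python
# Mixed-loss SGD step sketch: the model and both loss terms are stand-ins;
# only the combination l = lambda*l_C + (1-lambda)*l_E follows the patent.
import torch
import torch.nn as nn

model = nn.Linear(8, 5)                      # stand-in for the trainable layers of E and G
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

logits = model(torch.randn(4, 8))
l_C = nn.functional.cross_entropy(logits, torch.tensor([0, 1, 2, 3]))  # classification loss
l_E = logits.pow(2).mean()                   # stand-in for the embedding loss
lam = 0.8                                    # weight, 0.7 <= lambda < 1
loss = lam * l_C + (1 - lam) * l_E           # mixed loss value l

optimizer.zero_grad()
loss.backward()                              # gradients for the conv and FC layers
optimizer.step()
```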
(3e) Judging whether n ≥ N holds: if so, obtaining the trained network model H' based on mixed loss and graph attention; otherwise, letting n = n + 1 and executing step (3b);
(4) obtaining a target classification result of the small sample SAR image:
(4a) One-hot encoding the label of each SAR image in the test sample set $D^{test}$ to obtain the $C_{test}$-dimensional label vector of each SAR image; then randomly selecting, from the $C_{test} \times M$ SAR images of $D^{test}$, the K SAR images contained in each target category together with their corresponding label vectors as the test support sample set $S^{test} = \{s_e^{test}\}_{e=1}^{C_{test}K}$, and taking the remaining $C_{test}(M-K)$ SAR images and their corresponding label vectors as the test query sample set $Q^{test} = \{q_g^{test}\}_{g=1}^{C_{test}(M-K)}$, where $s_e^{test}$ denotes the e-th test support sample consisting of an SAR image and its corresponding label vector and $q_g^{test}$ denotes the g-th test query sample consisting of an SAR image and its corresponding label vector;
(4b) Combining the test support sample set $S^{test}$ with each test query sample $q_g^{test}$ into a test task $T_g^{test} = S^{test} \cup \{q_g^{test}\}$, obtaining the test task set $\mathcal{T}^{test} = \{T_g^{test}\}_{g=1}^{C_{test}(M-K)}$, and forward-propagating $\mathcal{T}^{test}$ as the input of the trained network model H' based on mixed loss and graph attention:
(4b1) The trained embedded network module E' maps each SAR image in each test task $T_g^{test}$ of the test task set $\mathcal{T}^{test}$ to obtain the test embedded vector group set $\{F_g^{test}\}_{g=1}^{C_{test}(M-K)}$, where $F_g^{test} = \{f_{g,e}^{test}\}_{e=1}^{C_{test}K+1}$ is the test embedded vector group corresponding to test task $T_g^{test}$: for $e \neq C_{test}K+1$, $f_{g,e}^{test}$ is the test embedded vector corresponding to the test support sample $s_e^{test}$, and $f_{g,C_{test}K+1}^{test}$ is the test embedded vector corresponding to the test query sample $q_g^{test}$;
(4b2) The node feature initialization module I constructs a virtual label vector $y^v$ and, for each test embedded vector group $F_g^{test}$, concatenates each test embedded vector $f_{g,e}^{test}$ with $e \neq C_{test}K+1$ with the label vector of its corresponding SAR image, while concatenating the test embedded vector $f_{g,C_{test}K+1}^{test}$ of each group with the virtual label vector $y^v$, obtaining the test-node layer-1 feature group set $\{V_g^1\}_{g=1}^{C_{test}(M-K)}$, where $V_g^1 = \{v_{g,e}^1\}_{e=1}^{C_{test}K+1}$ is the test-node layer-1 feature group corresponding to $F_g^{test}$ and $v_{g,e}^1$ is the test-node layer-1 feature corresponding to the test embedded vector $f_{g,e}^{test}$;
(4b3) The trained graph attention network module G' takes the test-node layer-1 feature group set $\{V_g^1\}$ as input and performs category prediction for the target in the SAR image of the test query sample $q_g^{test}$ corresponding to the test-node layer-1 feature $v_{g,C_{test}K+1}^1$ in each group $V_g^1$, obtaining the test prediction result vector set $\{\hat{y}_g\}_{g=1}^{C_{test}(M-K)}$. The dimension index of the maximum value in each test prediction vector $\hat{y}_g$ is the predicted category of the target in the SAR image of the corresponding test query sample $q_g^{test}$, where $\hat{y}_g$ is the $C_{test}$-dimensional test prediction result vector corresponding to $v_{g,C_{test}K+1}^1$, and the value of its c-th element is the probability that the target in the SAR image of the corresponding test query sample belongs to the c-th target class.
2. The small-sample SAR target classification method based on mixed loss and graph attention according to claim 1, wherein in the network model H based on mixed loss and graph attention of step (2), the number of first convolution modules $E_C$ comprised by the embedded network module E is five, the number of graph update layers U comprised by the graph attention network module G is three, and the number of first fully-connected layers in the edge feature construction module $U_E$ of each graph update layer U is five; the specific parameters of the embedded network module E and of the graph attention network module G are as follows:
The convolution kernels of the five first convolution layers in the first convolution modules $E_C$ of the embedded network module E are all of size 3 × 3 with stride 1 and padding 1; the first three first convolution layers have 64 convolution kernels each and the fourth and fifth first convolution layers have 128 each; the pooling kernels of the five max-pooling layers are all of size 2 × 2 with sliding stride 2; the convolution kernel of the second convolution layer in the second convolution module $E_L$ is of size 4 × 4 with stride 1;
In each graph update layer U of the graph attention network module G, the first four first fully-connected layers of the edge feature construction module $U_E$ have 96 neurons each and the fifth first fully-connected layer has 1 neuron; the second fully-connected layer of the node feature update module $U_N$ comprised by the first two graph update layers U has 24 neurons, and the second fully-connected layer of the node feature update module $U_N$ comprised by the third graph update layer U has $C_{test}$ neurons.
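A sketch of the embedded network E under the parameters of this claim follows (PyTorch assumed). The input channel count and the number of kernels of the second convolution layer are not specified by the claim and are assumptions here (1 and 128, respectively).

```python
# Embedded network E per claim 2: five conv(3x3, stride 1, pad 1) + BatchNorm
# + Mish + MaxPool(2x2, stride 2) blocks with 64/64/64/128/128 kernels, then a
# 4x4 conv + BatchNorm. Input channels and E_L's kernel count are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.Mish(),
        nn.MaxPool2d(kernel_size=2, stride=2),
    )

class EmbeddedNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.E_C = nn.Sequential(               # five first convolution modules
            conv_block(1, 64), conv_block(64, 64), conv_block(64, 64),
            conv_block(64, 128), conv_block(128, 128),
        )
        self.E_L = nn.Sequential(               # second convolution module
            nn.Conv2d(128, 128, kernel_size=4, stride=1),
            nn.BatchNorm2d(128),
        )

    def forward(self, x):                       # x: (batch, 1, 128, 128)
        return self.E_L(self.E_C(x)).flatten(1) # one embedded vector per image

emb = EmbeddedNetwork()(torch.randn(2, 1, 128, 128))
print(emb.shape)  # 128 -> 64 -> 32 -> 16 -> 8 -> 4, then 4x4 conv -> (2, 128)
```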
3. The small-sample SAR target classification method based on mixed loss and graph attention according to claim 1, wherein the data enhancement module D in step (3c1) performs data enhancement on each SAR image in the training task set $\mathcal{T}$ through the following specific steps: applying a power transformation to each SAR image B, i.e. $B_1 = (B/255)^{\gamma} \times 255$, to obtain the SAR image $B_1$; adding noise to each SAR image $B_1$, i.e. $B_2 = B_1 + \mathrm{noise}$, to obtain the SAR image $B_2$; applying a flip transformation to each SAR image $B_2$, i.e. $B_3 = \mathrm{flip}(B_2)$, to obtain the SAR image $B_3$; and applying a rotation transformation to each SAR image $B_3$, i.e. $B' = \mathrm{rot}(B_3)$, to obtain the enhanced SAR image B', where γ denotes the power, randomly taking a value in the range [0.7, 1.3]; noise denotes noise obeying a uniform distribution on [−α, α], with α randomly taking a value in the range (0, 50]; flip(·) denotes randomly flipping left-right or up-down, each flip mode having probability 1/2; and rot(·) denotes randomly rotating clockwise by 90°, 180° or 270°, each rotation angle having probability 1/3.
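This enhancement chain could be sketched as follows for an 8-bit SAR image held in a NumPy array; all names are illustrative, and the draw of α is simplified to the closed interval [0, 50] although the claim specifies (0, 50].

```python
# Claim 3's enhancement chain: power transform -> additive uniform noise ->
# random flip (p = 1/2 each) -> random clockwise rotation (p = 1/3 each angle).
import numpy as np

def enhance(B, rng=np.random.default_rng()):
    gamma = rng.uniform(0.7, 1.3)                    # power gamma in [0.7, 1.3]
    B1 = (B / 255.0) ** gamma * 255.0                # B1 = (B/255)^gamma * 255
    alpha = rng.uniform(0.0, 50.0)                   # alpha in (0, 50] per the claim
    B2 = B1 + rng.uniform(-alpha, alpha, B1.shape)   # uniform noise on [-alpha, alpha]
    B3 = np.fliplr(B2) if rng.random() < 0.5 else np.flipud(B2)  # left-right or up-down
    k = rng.integers(1, 4)                           # 1, 2 or 3 quarter turns
    return np.rot90(B3, k=-k)                        # negative k = clockwise rotation
```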
4. The small-sample SAR target classification method based on mixed loss and graph attention according to claim 1, wherein the graph attention network module G in step (3c4) performs category prediction for the training-node layer-1 feature $v_{b,C_{test}K+1}^1$ in each training-node layer-1 feature group $V_b^1$ of the training-node layer-1 feature group set $\{V_b^1\}$ through the following steps:
(3c41) Initializing the iteration counter i, setting the maximum number of iterations to 3, and setting i = 1;
(3c42) The edge feature construction module $U_E$ of the i-th graph update layer U constructs edge features from all training-node layer-i features contained in each training-node layer-i feature group $V_b^i$ of the training-node layer-i feature group set $\{V_b^i\}$, obtaining the training-edge layer-i feature group set $\{E_b^i\}_{b=1}^{C_{test}(M-K)}$:

$e_{h,j}^i = f_E^i\big(\mathrm{abs}(v_h^i - v_j^i)\big)$

where $E_b^i = \{e_{h,j}^i\}$ is the training-edge layer-i feature group corresponding to $V_b^i$; $v_h^i$ and $v_j^i$ are the h-th and j-th training-node layer-i features in $V_b^i$; $e_{h,j}^i$ is the training-edge layer-i feature obtained from $v_h^i$ and $v_j^i$; abs(·) takes the absolute value of each element; and $f_E^i(\cdot)$ denotes inputting its argument into the sequentially stacked first fully-connected layers of the edge feature construction module $U_E$ of the i-th graph update layer U;
(3c43) The attention weight calculation module $U_W$ of the i-th graph update layer U calculates an attention weight for every training-edge layer-i feature $e_{h,j}^i$ in each training-edge layer-i feature group $E_b^i$ of $\{E_b^i\}$, obtaining the training layer-i attention weight group set $\{W_b^i\}_{b=1}^{C_{test}(M-K)}$:

$w_{h,j}^i = \dfrac{\exp(e_{h,j}^i)}{\sum_{j' \in \mathcal{N}_h} \exp(e_{h,j'}^i)}$

where $W_b^i = \{w_{h,j}^i\}$ is the training layer-i attention weight group corresponding to $E_b^i$, $w_{h,j}^i$ is the training layer-i attention weight corresponding to $e_{h,j}^i$, and $\mathcal{N}_h$ is the set of positive integers from 1 to $C_{test}K+1$ with h removed;
(3c44) The node feature update module $U_N$ of the i-th graph update layer U updates the node features using the training-node layer-i feature group set $\{V_b^i\}$ and the training layer-i attention weight group set $\{W_b^i\}$, obtaining the training-node layer-(i+1) feature group set $\{V_b^{i+1}\}_{b=1}^{C_{test}(M-K)}$:

$v_h^{i+1} = f_N^i\Big(v_h^i \,\big\|\, \sum_{j \in \mathcal{N}_h} w_{h,j}^i \, v_j^i\Big)$

where $V_b^{i+1} = \{v_h^{i+1}\}$ is the training-node layer-(i+1) feature group corresponding to $V_b^i$, $v_h^{i+1}$ is the training-node layer-(i+1) feature corresponding to $v_h^i$, $a \| b$ denotes appending vector b after vector a, and $f_N^i(n)$ denotes inputting n into the second fully-connected layer of the node feature update module $U_N$ of the i-th graph update layer U;
(3c45) Judging whether i = 3 holds: if so, applying a softmax transformation to the training-node layer-4 feature $v_{b,C_{test}K+1}^4$ contained in each training-node layer-4 feature group $V_b^4$ of the obtained training-node layer-4 feature group set $\{V_b^4\}$ to obtain the training prediction result vector set $\{\hat{y}_b\}$; otherwise, letting i = i + 1 and executing step (3c42), where $\hat{y}_b$ is the training prediction result vector corresponding to the training-node layer-4 feature $v_{b,C_{test}K+1}^4$.
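A sketch of one graph update layer U under claims 2 and 4 follows (PyTorch assumed): a five-layer edge MLP on $\mathrm{abs}(v_h - v_j)$, softmax attention over the other nodes, and the concatenate-then-project node update. The ReLU activations between the fully-connected layers and the example dimensions are assumptions not fixed by the claims.

```python
# One graph update layer U: edge features e_hj = f_E(abs(v_h - v_j)), attention
# weights via softmax over j != h, node update v' = f_N(v || sum_j w_hj * v_j).
import torch
import torch.nn as nn

class GraphUpdateLayer(nn.Module):
    def __init__(self, node_dim, out_dim, hidden=96):
        super().__init__()
        self.U_E = nn.Sequential(                  # edge feature construction, 5 FC layers
            nn.Linear(node_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                  # scalar edge feature e_hj
        )
        self.U_N = nn.Linear(2 * node_dim, out_dim)   # node feature update

    def forward(self, v):                          # v: (n, node_dim), n = C_test*K + 1
        n = v.size(0)
        diff = (v.unsqueeze(1) - v.unsqueeze(0)).abs()   # abs(v_h - v_j), (n, n, d)
        e = self.U_E(diff).squeeze(-1)                   # (n, n) edge features
        e = e.masked_fill(torch.eye(n, dtype=torch.bool), float('-inf'))  # exclude j = h
        w = torch.softmax(e, dim=1)                      # attention weights w_hj
        agg = w @ v                                      # sum_j w_hj * v_j
        return self.U_N(torch.cat([v, agg], dim=1))      # layer-(i+1) node features

layer = GraphUpdateLayer(node_dim=133, out_dim=24)   # e.g. 128-dim embedding + 5-dim label
print(layer(torch.randn(51, 133)).shape)             # 5*10+1 nodes -> torch.Size([51, 24])
```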
CN202110408623.2A 2021-04-16 2021-04-16 Small sample SAR target classification method based on mixed loss and graph attention Active CN113095416B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110408623.2A CN113095416B (en) 2021-04-16 2021-04-16 Small sample SAR target classification method based on mixed loss and graph attention

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110408623.2A CN113095416B (en) 2021-04-16 2021-04-16 Small sample SAR target classification method based on mixed loss and graph attention

Publications (2)

Publication Number Publication Date
CN113095416A true CN113095416A (en) 2021-07-09
CN113095416B CN113095416B (en) 2023-08-18

Family

ID=76677939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110408623.2A Active CN113095416B (en) 2021-04-16 2021-04-16 Small sample SAR target classification method based on mixed loss and graph attention

Country Status (1)

Country Link
CN (1) CN113095416B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021000906A1 (en) * 2019-07-02 2021-01-07 五邑大学 Sar image-oriented small-sample semantic feature enhancement method and apparatus
CN110516561A (en) * 2019-08-05 2019-11-29 西安电子科技大学 SAR image target recognition method based on DCGAN and CNN
CN111191718A (en) * 2019-12-30 2020-05-22 西安电子科技大学 Small sample SAR target identification method based on graph attention network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
He Kai; Feng Xu; Gao Shengnan; Ma Xitao: "Fine-grained image classification algorithm based on multi-scale feature fusion and recurrent attention mechanism", Journal of Tianjin University (Science and Technology), no. 10
Liu Chen; Qu Changwen; Zhou Qiang; Li Zhi; Li Jianwei: "SAR image target classification based on convolutional neural network transfer learning", Modern Radar, no. 03

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113592008A (en) * 2021-08-05 2021-11-02 哈尔滨理工大学 System, method, equipment and storage medium for solving small sample image classification based on graph neural network mechanism of self-encoder
CN113655479A (en) * 2021-08-16 2021-11-16 西安电子科技大学 Small sample SAR target classification method based on deformable convolution and double attention
CN113655479B (en) * 2021-08-16 2023-07-07 西安电子科技大学 Small sample SAR target classification method based on deformable convolution and double attentions
CN115131580A (en) * 2022-08-31 2022-09-30 中国科学院空天信息创新研究院 Space target small sample identification method based on attention mechanism

Also Published As

Publication number Publication date
CN113095416B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN113095416A (en) Small sample SAR target classification method based on mixed loss and graph attention
CN111199214B (en) Residual network multispectral image ground object classification method
CN103955702B (en) SAR image terrain classification method based on depth RBF network
CN105931253B (en) A kind of image partition method being combined based on semi-supervised learning
CN110309868A (en) In conjunction with the hyperspectral image classification method of unsupervised learning
CN110633708A (en) Deep network significance detection method based on global model and local optimization
CN113486981A (en) RGB image classification method based on multi-scale feature attention fusion network
CN112085059B (en) Breast cancer image feature selection method based on improved sine and cosine optimization algorithm
CN107992891A (en) Based on spectrum vector analysis multi-spectral remote sensing image change detecting method
CN107292341A (en) Adaptive multi views clustering method based on paired collaboration regularization and NMF
CN109446894A (en) The multispectral image change detecting method clustered based on probabilistic segmentation and Gaussian Mixture
CN113095409A (en) Hyperspectral image classification method based on attention mechanism and weight sharing
CN113655479B (en) Small sample SAR target classification method based on deformable convolution and double attentions
CN106446806B (en) Semi-supervised face identification method based on the sparse reconstruct of fuzzy membership and system
CN114169442A (en) Remote sensing image small sample scene classification method based on double prototype network
CN115311478A (en) Federal image classification method based on image depth clustering and storage medium
CN115631396A (en) YOLOv5 target detection method based on knowledge distillation
CN111179272B (en) Rapid semantic segmentation method for road scene
CN113420593B (en) Small sample SAR automatic target recognition method based on hybrid inference network
CN114998688A (en) Large-view-field target detection method based on YOLOv4 improved algorithm
WO2022100607A1 (en) Method for determining neural network structure and apparatus thereof
CN115116539A (en) Object determination method and device, computer equipment and storage medium
CN109063750B (en) SAR target classification method based on CNN and SVM decision fusion
CN111222534A (en) Single-shot multi-frame detector optimization method based on bidirectional feature fusion and more balanced L1 loss
CN116824485A (en) Deep learning-based small target detection method for camouflage personnel in open scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant