CN113095416B - Small-sample SAR target classification method based on mixed loss and graph attention - Google Patents
Small-sample SAR target classification method based on mixed loss and graph attention
- Publication number
- CN113095416B CN113095416B CN202110408623.2A CN202110408623A CN113095416B CN 113095416 B CN113095416 B CN 113095416B CN 202110408623 A CN202110408623 A CN 202110408623A CN 113095416 B CN113095416 B CN 113095416B
- Authority
- CN
- China
- Prior art keywords
- training
- test
- layer
- node
- sar
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 37
- 238000012549 training Methods 0.000 claims abstract description 300
- 238000012360 testing method Methods 0.000 claims abstract description 188
- 239000013598 vector Substances 0.000 claims description 152
- 230000009466 transformation Effects 0.000 claims description 23
- 238000010276 construction Methods 0.000 claims description 10
- 210000002569 neuron Anatomy 0.000 claims description 8
- 238000013507 mapping Methods 0.000 claims description 7
- 238000011176 pooling Methods 0.000 claims description 7
- 238000004364 calculation method Methods 0.000 claims description 6
- 238000010606 normalization Methods 0.000 claims description 6
- 238000004422 calculation algorithm Methods 0.000 claims description 5
- 238000010586 diagram Methods 0.000 claims description 5
- 238000012935 Averaging Methods 0.000 claims description 3
- 230000004913 activation Effects 0.000 claims description 3
- 230000008569 process Effects 0.000 abstract description 10
- 238000002347 injection Methods 0.000 abstract description 2
- 239000007924 injection Substances 0.000 abstract description 2
- 230000006870 function Effects 0.000 description 6
- 238000004088 simulation Methods 0.000 description 4
- 238000013135 deep learning Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 230000007547 defect Effects 0.000 description 2
- 238000013461 design Methods 0.000 description 2
- 230000007246 mechanism Effects 0.000 description 2
- 230000010287 polarization Effects 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 230000002776 aggregation Effects 0.000 description 1
- 238000004220 aggregation Methods 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 230000002708 enhancing effect Effects 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Molecular Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
- Radar Systems Or Details Thereof (AREA)
Abstract
The invention provides a small-sample SAR target classification method based on mixed loss and graph attention, which comprises the following implementation steps: acquiring a training sample set and a test sample set; constructing a network model H based on mixed loss and graph attention; performing iterative training on H; and obtaining the target classification result of the small-sample SAR image. The invention updates the parameters of the first and second convolution layers in the embedding network module E and of the first and second fully connected layers in the graph attention network module G with the mixed loss value l of the training task set, i.e. the weighted sum of the classification loss value l_C and the embedding loss value l_E of the training task set, so that the similarity between features of the same SAR target class and the difference between features of different SAR target classes are enhanced; the risk of overfitting during model training is effectively reduced through data enhancement, and the classification accuracy for small-sample SAR targets is improved.
Description
Technical Field
The invention belongs to the technical field of radar image processing and relates to an SAR target classification method, in particular to a small-sample SAR target classification method based on mixed loss and graph attention, which can be used for SAR target classification when only a small number of SAR images of a target can be acquired.
Background
Synthetic aperture radar (SAR) offers all-day, all-weather, long-range and high-resolution imaging, and because a two-dimensional high-resolution SAR image of a target contains rich information such as the shape, size and texture of the target, SAR is widely used for target classification in military fields such as battlefield reconnaissance. SAR target classification is a computer-based algorithm that extracts features from the SAR image data of a target acquired by the sensor and assigns a target class attribute according to the extracted features. Although many traditional SAR target classification methods based on manually selected features and designed classifiers have been developed, these conventional methods require considerable experience and strong expertise to set a specific algorithm for a specific target, which is time-consuming and difficult to generalize. In recent years, deep-learning-based SAR target classification methods have realized SAR target classification in a data-driven manner: they autonomously learn and extract features effective for classification from the data and classify targets with these features, so manual feature selection, classifier design and strong professional knowledge are not required, and they are easy to extend to new target types. They therefore achieve excellent performance and have been widely studied and used in industry.
However, some targets observed by SAR are non-cooperative small-sample SAR targets, that is, only a small number of SAR images of these targets can be acquired, often just one to a dozen images per target. Deep-learning-based SAR target classification methods generally need a large number of training samples to train a model that achieves high classification accuracy on test samples, so for small-sample SAR targets they suffer from low classification accuracy due to insufficient training samples.
To solve this problem, the prior art designs special models with low requirements on the number of samples, improving the classification accuracy of small-sample SAR targets by improving the model structure. For example, the patent application with publication number CN111191718A, entitled "Small sample SAR target recognition method based on graph attention network", discloses a small-sample SAR target recognition method based on a graph attention network. It first acquires a small number of labeled SAR images and a large number of unlabeled SAR images of the target and performs noise reduction, then iteratively trains a self-encoder with the denoised images to obtain feature vectors of all SAR images, and finally constructs an initial adjacency matrix and iteratively trains the graph attention network with the initial adjacency matrix and the feature vectors of all SAR images; the trained graph attention network can classify the targets in the unlabeled SAR images by means of an attention mechanism. The graph attention network adopted by this method requires few labeled SAR images in the class prediction process, and the attention mechanism improves the classification accuracy of the network. However, the method has the defect that, after the self-encoder encodes and decodes each SAR image, the loss is computed independently between the result and the original SAR image, so the similarity between the extracted features of the same SAR target class is low and the difference between features of different SAR target classes is weak, and the classification accuracy of the model on small-sample SAR targets remains low.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a small-sample SAR target classification method based on mixed loss and graph attention, in order to solve the technical problem of low classification accuracy in the prior art caused by low similarity between features of the same SAR target class and weak difference between features of different SAR target classes.
In order to achieve the above purpose, the technical scheme adopted by the invention comprises the following steps:
(1) Acquiring a training sample set and a test sample set:
(1a) Acquiring a plurality of synthetic aperture radar (SAR) images containing C different target classes, wherein each target class corresponds to M SAR images of size h×h and each SAR image contains 1 target, with C ≥ 10, M ≥ 200 and h = 128;
(1b) Labeling the target class in each SAR image, randomly selecting C_train target classes, and taking their C_train × M SAR images together with the label of each SAR image as the training sample set, while the remaining C_test target classes, i.e. C_test × M SAR images together with the label of each SAR image, are taken as the test sample set, wherein C_train + C_test = C and 3 ≤ C_test ≤ 5;
(2) Constructing a network model H based on mixed loss and graph attention:
Constructing a network model H based on mixed loss and graph attention which comprises a data enhancement module D, an embedding network module E, a node feature initialization module I and a graph attention network module G cascaded in sequence, wherein the embedding network module E comprises a plurality of sequentially cascaded first convolution modules E_C and one second convolution module E_L; each first convolution module E_C comprises a first convolution layer, a first batch normalization layer, a Mish activation layer and a max pooling layer stacked in sequence, and the second convolution module E_L comprises a second convolution layer and a second batch normalization layer stacked in sequence; the graph attention network module G comprises a plurality of sequentially cascaded graph update layers U, each graph update layer U comprising an edge feature construction module U_E, an attention weight calculation module U_W and a node feature update module U_N cascaded in sequence; each edge feature construction module U_E comprises a plurality of sequentially stacked first fully connected layers, and each node feature update module U_N comprises one second fully connected layer;
(3) Performing iterative training on the network model H based on mixed loss and graph attention:
(3a) Initializing the iteration counter n and the maximum number of iterations N, wherein N ≥ 1000 and n = 0;
(3b) Randomly selecting C_test target classes, i.e. C_test × M SAR images in total, from the training sample set, and one-hot encoding the label of each SAR image to obtain a C_test-dimensional label vector for each SAR image; then randomly selecting the K SAR images of each target class, together with their label vectors, from the C_test × M SAR images as the training support sample set, and taking the remaining C_test(M−K) SAR images and their label vectors as the training query sample set, wherein the c-th element of the one-hot encoded label vector of each SAR image represents the probability that the target in that SAR image belongs to the c-th of the C_test target classes, the a-th training support sample and the b-th training query sample each consist of an SAR image and its corresponding label vector, and 1 ≤ K ≤ 10;
(3c) Combining the training support sample set with each training query sample into a training task, collecting all training tasks into the training task set, and forward propagating the training task set as the input of the network model H based on mixed loss and graph attention:
(3c1) The data enhancement module D performs data enhancement on each SAR image in the training task set: each SAR image undergoes a power transformation, noise is added to the power-transformed image, the noisy image is flipped, and the flipped image is rotated, yielding the enhanced training task set,
wherein each training task, each training support sample and each training query sample has a corresponding enhanced training task, enhanced training support sample and enhanced training query sample;
(3c2) The embedding network module E maps each SAR image contained in each enhanced training task of the enhanced training task set to obtain the set of training embedded vector groups, and the embedding loss function L_E is applied to the training embedded vector groups to compute the embedding loss value l_E of the training task set:

l_E = −(1/(C_test(M−K))) Σ_b log( exp(−d(e_b, p_{y_b})) / Σ_{c=1}^{C_test} exp(−d(e_b, p_c)) ),

wherein each enhanced training task has a corresponding training embedded vector group, in which the training embedded vectors with index a ≠ C_test·K+1 correspond to the enhanced training support samples and the remaining training embedded vector e_b corresponds to the enhanced training query sample of the b-th training task; log(·) denotes the logarithm with the natural constant e as base, exp(·) denotes the exponential with the natural constant e as base, and Σ denotes summation; p_c denotes the class center of the c-th target class, obtained by averaging the training embedded vectors corresponding to the SAR images of the c-th target class contained in the training support sample set; p_{y_b} denotes the class center of the target class to which the target in the SAR image of the training query sample of the b-th training task belongs; and d denotes the metric function, d(p, q) = ||p − q||_2;
(3c3) The node feature initialization module I constructs a virtual label vector, i.e. a C_test-dimensional vector whose elements are all 1, and, for each training embedded vector group, concatenates every training embedded vector with index a ≠ C_test·K+1 with the label vector of the corresponding SAR image, while concatenating the training embedded vector of the training query sample with the virtual label vector, yielding the set of training node 1-layer feature groups,
wherein each training embedded vector group has a corresponding training node 1-layer feature group and each training embedded vector has a corresponding training node 1-layer feature;
(3c4) The graph attention network module G takes the set of training node 1-layer feature groups as input and, for the training node 1-layer feature contained in each training node 1-layer feature group, performs class prediction on the target in the SAR image of the corresponding training query sample, yielding the set of training prediction result vectors, wherein each training node 1-layer feature corresponds to a C_test-dimensional training prediction result vector whose c-th element represents the predicted probability that the target in the SAR image of the corresponding training query sample belongs to the c-th target class;
(3c5) The classification loss function L_C is applied to the set of training prediction result vectors and all label vectors of the training query sample set to compute the classification loss value l_C of the training task set:

l_C = −(1/(C_test(M−K))) Σ_b Σ_{c=1}^{C_test} y_{b,c} log(ŷ_{b,c}),

wherein ŷ_{b,c} is the value of the c-th element of the b-th training prediction result vector and y_{b,c} is the value of the c-th element of the label vector of the SAR image corresponding to that training prediction result vector;
(3d) Weighting and summing the classification loss value l_C of the training task set and the embedding loss value l_E of the training task set to obtain the mixed loss value l of the training task set, l = λ·l_C + (1−λ)·l_E, and then updating the parameters of all first and second convolution layers in the embedding network module E and the parameters of all first and second fully connected layers in the graph attention network module G by the stochastic gradient descent algorithm, wherein λ is a weight with 0.7 ≤ λ < 1;
(3e) Judging whether n ≥ N; if so, the trained network model H' based on mixed loss and graph attention is obtained, otherwise let n = n + 1 and return to step (3b);
(4) Obtaining a target classification result of the small sample SAR image:
(4a) One-hot encoding the label of each SAR image in the test sample set to obtain a C_test-dimensional label vector for each SAR image; then randomly selecting the K SAR images of each target class, together with their label vectors, from the C_test × M SAR images of the test sample set as the test support sample set, and taking the remaining C_test(M−K) SAR images and their label vectors as the test query sample set, wherein the e-th test support sample and the g-th test query sample each consist of an SAR image and its corresponding label vector;
(4b) Combining the test support sample set with each test query sample into a test task, collecting all test tasks into the test task set, and forward propagating the test task set as the input of the trained network model H' based on mixed loss and graph attention:
(4b1) The trained embedding network module E' maps each SAR image contained in each test task of the test task set to obtain the set of test embedded vector groups,
wherein each test task has a corresponding test embedded vector group, in which the test embedded vectors with index e ≠ C_test·K+1 correspond to the test support samples and the remaining test embedded vector corresponds to the test query sample of the task;
(4b2) The node feature initialization module I constructs a virtual label vector and, for each test embedded vector group, concatenates every test embedded vector with index e ≠ C_test·K+1 with the label vector of the corresponding SAR image, while concatenating the test embedded vector of the test query sample with the virtual label vector, yielding the set of test node 1-layer feature groups,
wherein each test embedded vector group has a corresponding test node 1-layer feature group and each test embedded vector has a corresponding test node 1-layer feature;
(4b3) The trained graph attention network module G' takes the set of test node 1-layer feature groups as input and, for the test node 1-layer feature contained in each test node 1-layer feature group, performs class prediction on the target in the SAR image of the corresponding test query sample, yielding the set of test prediction result vectors; the index of the largest element of each test prediction result vector is the predicted class of the target in the SAR image of the corresponding test query sample, wherein each test node 1-layer feature corresponds to a C_test-dimensional test prediction result vector whose c-th element represents the probability that the target in the SAR image of the corresponding test query sample belongs to the c-th target class.
Compared with the prior art, the invention has the following advantages:
1. The invention updates the parameters of all first and second convolution layers in the embedding network module E and the parameters of all first and second fully connected layers in the graph attention network module G with the mixed loss value l of the training task set, i.e. the weighted sum of the classification loss value l_C and the embedding loss value l_E of the training task set, so that the similarity between features of the same SAR target class and the difference between features of different SAR target classes are enhanced; compared with the prior art, the classification accuracy for small-sample SAR targets is effectively improved.
2. During training of the network model H based on mixed loss and graph attention, the data enhancement module D effectively alleviates the risk of overfitting by performing data enhancement on all SAR images; the data enhancement module D, the embedding network module E and the node feature initialization module I obtain effective features of each SAR image, and combining the effective features of each SAR image with the corresponding label vector provides data support for the class prediction of the graph attention network module G; compared with the prior art, the classification accuracy for small-sample SAR targets is further improved.
Experimental results show that the method can obtain higher classification accuracy in small-sample SAR target classification.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention.
Fig. 2 is a flow chart of the iterative training of the network model H based on mixed loss and graph attention in the present invention.
Fig. 3 is a flow chart of obtaining the target classification result of the small-sample SAR image in the present invention.
Detailed Description
The invention will now be described in further detail with reference to the drawings and to specific embodiments.
Referring to fig. 1, the present invention includes the steps of:
step 1) acquiring a training sample set and a test sample set:
(1a) Acquiring a plurality of synthetic aperture radar (SAR) images containing C different target classes, wherein each target class corresponds to M SAR images of size h×h and each SAR image contains 1 target, with C ≥ 10, M ≥ 200 and h = 128;
(1b) Labeling the target class in each SAR image, randomly selecting C_train target classes, and taking their C_train × M SAR images together with the label of each SAR image as the training sample set, while the remaining C_test target classes, i.e. C_test × M SAR images together with the label of each SAR image, are taken as the test sample set, wherein C_train + C_test = C and 3 ≤ C_test ≤ 5;
The invention trains the model with target classes for which sufficient SAR images are available, and the trained model can then achieve high classification accuracy on small-sample SAR target classification of other classes; therefore, the SAR target classes in the training sample set and the test sample set constructed in this step are different;
step 2) constructing a network model H based on mixed loss and graph attention:
Constructing a network model H based on mixed loss and graph attention which comprises a data enhancement module D, an embedding network module E, a node feature initialization module I and a graph attention network module G cascaded in sequence, wherein the embedding network module E comprises five sequentially cascaded first convolution modules E_C and one second convolution module E_L; each first convolution module E_C comprises a first convolution layer, a first batch normalization layer, a Mish activation layer and a max pooling layer stacked in sequence, and the second convolution module E_L comprises a second convolution layer and a second batch normalization layer stacked in sequence; the graph attention network module G comprises three sequentially cascaded graph update layers U, each graph update layer U comprising an edge feature construction module U_E, an attention weight calculation module U_W and a node feature update module U_N cascaded in sequence; each edge feature construction module U_E comprises five sequentially stacked first fully connected layers, and each node feature update module U_N comprises one second fully connected layer;
the specific parameters of the embedding network module E and the graph attention network module G are as follows:
in the five first convolution modules E_C of the embedding network module E, the convolution kernels of the five first convolution layers are all 3×3 with stride 1 and padding 1; the first three first convolution layers have 64 convolution kernels each and the fourth and fifth first convolution layers have 128 convolution kernels each; the pooling kernels of the five max pooling layers are all 2×2 with sliding stride 2; in the second convolution module E_L, the convolution kernel of the second convolution layer is 4×4 with stride 1;
in the edge feature construction module U_E of each graph update layer U of the graph attention network module G, the first four first fully connected layers have 96 neurons each and the fifth first fully connected layer has 1 neuron; in the node feature update modules U_N of the first two graph update layers U, the second fully connected layer has 24 neurons, and in the node feature update module U_N of the third graph update layer U the second fully connected layer has C_test neurons;
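For illustration only, the following PyTorch sketch shows one way the embedding network E could be assembled from the parameters listed above; the single-channel input and the 128 output channels of the second convolution layer are assumptions not fixed by the text.

```python
import torch
import torch.nn as nn

class EmbeddingNetwork(nn.Module):
    """Sketch of the embedding network E: five first convolution modules E_C
    (3x3 conv, batch norm, Mish, 2x2 max pool) followed by the second
    convolution module E_L (4x4 conv, batch norm)."""
    def __init__(self):
        super().__init__()
        channels = [1, 64, 64, 64, 128, 128]   # single-channel SAR input is an assumption
        blocks = []
        for i in range(5):
            blocks += [
                nn.Conv2d(channels[i], channels[i + 1], kernel_size=3, stride=1, padding=1),
                nn.BatchNorm2d(channels[i + 1]),
                nn.Mish(),
                nn.MaxPool2d(kernel_size=2, stride=2),
            ]
        self.conv_modules = nn.Sequential(*blocks)
        # E_L: 4x4 conv with stride 1 plus batch norm; 128 output channels assumed
        self.final = nn.Sequential(nn.Conv2d(128, 128, kernel_size=4, stride=1),
                                   nn.BatchNorm2d(128))

    def forward(self, x):              # x: (batch, 1, 128, 128)
        f = self.conv_modules(x)       # -> (batch, 128, 4, 4) after five 2x2 pools
        f = self.final(f)              # -> (batch, 128, 1, 1)
        return f.flatten(1)            # embedded vectors of dimension 128
```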
Step 3) performing iterative training on the network model H based on mixed loss and graph attention; the implementation steps are shown in fig. 2:
(3a) Initializing the iteration counter n and the maximum number of iterations N, wherein N ≥ 1000 and n = 0;
(3b) Randomly selecting C_test target classes, i.e. C_test × M SAR images in total, from the training sample set, and one-hot encoding the label of each SAR image to obtain a C_test-dimensional label vector for each SAR image; then randomly selecting the K SAR images of each target class, together with their label vectors, from the C_test × M SAR images as the training support sample set, and taking the remaining C_test(M−K) SAR images and their label vectors as the training query sample set, wherein the c-th element of the one-hot encoded label vector of each SAR image represents the probability that the target in that SAR image belongs to the c-th of the C_test target classes, the a-th training support sample and the b-th training query sample each consist of an SAR image and its corresponding label vector, and 1 ≤ K ≤ 10;
to ensure that the trained model achieves high classification accuracy on small-sample SAR target classification of other classes, the invention simulates the test procedure during training: it randomly selects from the training sample set the same number of SAR target classes as used in testing, and divides all selected SAR images and their label vectors into two subsets, the training support sample set and the training query sample set. The training support sample set contains fewer SAR images, only C_test·K, simulating the small number of images available for a small-sample SAR target, while the SAR images in the training query sample set simulate the SAR images to be classified; the training support sample set then provides data support for the model to predict the classes of the SAR images in the training query sample set, as sketched below;
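A minimal sketch of this episodic sampling, assuming the training images are stored per class in a Python dictionary; function and variable names are illustrative only.

```python
import numpy as np

def sample_training_episode(images_by_class, c_test=5, k=10, rng=None):
    """Sketch of step (3b): pick C_test classes, one-hot encode their labels,
    split each class into K support images and M-K query images, and pair the
    support set with every query sample to form the training tasks."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(list(images_by_class.keys()), size=c_test, replace=False)
    support, query = [], []
    for c_idx, cls in enumerate(classes):
        onehot = np.zeros(c_test, dtype=np.float32)
        onehot[c_idx] = 1.0                       # c-th element: probability of class c
        order = rng.permutation(len(images_by_class[cls]))
        for j in order[:k]:
            support.append((images_by_class[cls][j], onehot))   # K support samples
        for j in order[k:]:
            query.append((images_by_class[cls][j], onehot))     # M-K query samples
    return [(support, q) for q in query]          # one training task per query sample
```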
(3c) Combining the training support sample set with each training query sample into a training task; each training task is thus a small-sample SAR target problem in which the model must predict the class of the SAR image of the training query sample from the training support sample set. All training tasks are merged into the training task set, which is forward propagated as the input of the network model H based on mixed loss and graph attention:
(3c1) The data enhancement module D performs data enhancement on each SAR image in the training task set: each SAR image undergoes a power transformation, noise is added to the power-transformed image, the noisy image is flipped, and the flipped image is rotated, yielding the enhanced training task set,
wherein each training task, each training support sample and each training query sample has a corresponding enhanced training task, enhanced training support sample and enhanced training query sample;
using the data enhancement module D, each SAR image in the training task set can be changed in every iteration, which enriches the training task set and reduces the risk of model overfitting during training. The data enhancement module D is implemented as follows: each SAR image B undergoes a power transformation, B_1 = (B/255)^γ × 255, giving the SAR image B_1; noise is added to each SAR image B_1, B_2 = B_1 + noise, giving the SAR image B_2; each SAR image B_2 is flipped, B_3 = flip(B_2), giving the SAR image B_3; and each SAR image B_3 is rotated, B' = rot(B_3), giving the enhanced SAR image B'. Here γ denotes the power, drawn randomly from the range [0.7, 1.3]; noise obeys a uniform distribution over [−α, α], where α is drawn randomly from (0, 50]; flip(·) denotes a random left-right or up-down flip, each with probability 1/2; and rot(·) denotes a random clockwise rotation by 90, 180 or 270 degrees, each with probability 1/3;
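As a concrete illustration of these four transforms, a NumPy sketch follows; it assumes the image is an h×h array with values in [0, 255] and is not the patented implementation itself.

```python
import numpy as np

def enhance_sar_image(img, rng=None):
    """Sketch of the data enhancement module D: power transform, additive
    uniform noise, random flip, random clockwise rotation, in that order."""
    rng = rng or np.random.default_rng()
    gamma = rng.uniform(0.7, 1.3)                       # power drawn from [0.7, 1.3]
    out = (img / 255.0) ** gamma * 255.0                # B1 = (B/255)^gamma * 255
    alpha = rng.uniform(0.0, 50.0)                      # alpha drawn from (0, 50]
    out = out + rng.uniform(-alpha, alpha, size=out.shape)          # noise in [-alpha, alpha]
    out = np.fliplr(out) if rng.random() < 0.5 else np.flipud(out)  # flip, p=1/2 each
    k = int(rng.integers(1, 4))                         # 1, 2 or 3 quarter turns, p=1/3 each
    return np.rot90(out, k=-k)                          # negative k rotates clockwise
```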
(3c2) The embedding network module E maps each SAR image contained in each enhanced training task of the enhanced training task set to obtain the set of training embedded vector groups, and the embedding loss function L_E is applied to the training embedded vector groups to compute the embedding loss value l_E of the training task set:

l_E = −(1/(C_test(M−K))) Σ_b log( exp(−d(e_b, p_{y_b})) / Σ_{c=1}^{C_test} exp(−d(e_b, p_c)) ),

wherein each enhanced training task has a corresponding training embedded vector group, in which the training embedded vectors with index a ≠ C_test·K+1 correspond to the enhanced training support samples and the remaining training embedded vector e_b corresponds to the enhanced training query sample of the b-th training task; log(·) denotes the logarithm with the natural constant e as base, exp(·) denotes the exponential with the natural constant e as base, and Σ denotes summation; p_c denotes the class center of the c-th target class, obtained by averaging the training embedded vectors corresponding to the SAR images of the c-th target class contained in the training support sample set; p_{y_b} denotes the class center of the target class to which the target in the SAR image of the training query sample of the b-th training task belongs; and d denotes the metric function, d(p, q) = ||p − q||_2;
In this step, the process by which the embedding network module E maps an SAR image into an embedded vector amounts to feature extraction for each SAR image. The embedding loss forms a class center for every SAR target class in each training task and measures the distance between each class center and the embedded vector corresponding to the training query sample: the embedded vector of a training query sample is pushed closer to the class center of its own class and farther from the class centers of other classes. After many iterations, the embedding network module E therefore extracts features with higher similarity within the same SAR target class and stronger differences between different SAR target classes, which improves the accuracy of the subsequent graph attention network module G in predicting the class of the SAR image of each training query sample;
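The role of the embedding loss can be sketched with a prototypical-style implementation in PyTorch; the exact normalization and the use of the plain Euclidean distance are assumptions, since only the components of the formula are described above.

```python
import torch
import torch.nn.functional as F

def embedding_loss(support_emb, support_cls, query_emb, query_cls):
    """Sketch of the embedding loss for a batch of training tasks: class centers
    are the per-class means of the support embeddings, and each query embedding
    is pulled toward the center of its own class via a softmax over negative
    distances. support_emb: (C*K, D), support_cls: (C*K,) class indices 0..C-1,
    query_emb: (Q, D), query_cls: (Q,) class indices (assumed layout)."""
    num_classes = int(support_cls.max().item()) + 1
    centers = torch.stack([support_emb[support_cls == c].mean(dim=0)
                           for c in range(num_classes)])        # (C, D) class centers
    dists = torch.cdist(query_emb, centers)                     # d(p, q) = ||p - q||_2
    log_prob = F.log_softmax(-dists, dim=1)                     # softmax over -distances
    return F.nll_loss(log_prob, query_cls)                      # mean over query samples
```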
(3c3) The node feature initialization module I constructs a virtual label vector, i.e. a C_test-dimensional vector whose elements are all 1, and, for each training embedded vector group, concatenates every training embedded vector with index a ≠ C_test·K+1 with the label vector of the corresponding SAR image, while concatenating the training embedded vector of the training query sample with the virtual label vector, yielding the set of training node 1-layer feature groups,
wherein each training embedded vector group has a corresponding training node 1-layer feature group and each training embedded vector has a corresponding training node 1-layer feature;
in this step, concatenating the training embedded vector of each enhanced training support sample with the label vector of its SAR image adds class information to the training embedded vector, and the resulting training node 1-layer features provide data support for the model to predict the class of the SAR image of each training query sample. Since the SAR image of each training query sample is the one whose class must be predicted, its training embedded vector is concatenated with the virtual label vector so that every training node 1-layer feature has the same dimension;
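A short sketch of this concatenation, with tensor shapes assumed as noted in the comments:

```python
import torch

def init_node_features(support_emb, support_onehot, query_emb, num_classes):
    """Sketch of the node feature initialization module I: support embeddings
    are concatenated with their one-hot label vectors, and the query embedding
    is concatenated with a virtual all-ones label vector so that every node
    1-layer feature has the same dimension D + C_test."""
    virtual = torch.ones(query_emb.size(0), num_classes)              # virtual label vectors
    support_nodes = torch.cat([support_emb, support_onehot], dim=1)   # (C*K, D + C_test)
    query_nodes = torch.cat([query_emb, virtual], dim=1)              # (1, D + C_test) per task
    return torch.cat([support_nodes, query_nodes], dim=0)             # node 1-layer features
```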
(3c4) The graph attention network module G takes the set of training node 1-layer feature groups as input and, for the training node 1-layer feature contained in each training node 1-layer feature group, performs class prediction on the target in the SAR image of the corresponding training query sample, yielding the set of training prediction result vectors, wherein each training node 1-layer feature corresponds to a C_test-dimensional training prediction result vector whose c-th element represents the predicted probability that the target in the SAR image of the corresponding training query sample belongs to the c-th target class;
in this step, the graph attention network module G predicts, from the set of training node 1-layer feature groups, the class of the SAR image of the training query sample corresponding to each training node 1-layer feature. The graph attention network module G first treats each SAR image in a training task as a node and connects every pair of nodes with two oppositely directed edges, forming a fully connected directed graph in which each node and each edge carries a feature. The initial feature of each node, i.e. the input feature of the 1st graph update layer U, is the training node 1-layer feature; the node features are updated several times by the graph update layers, the i-th graph update layer U turning the input training node i-layer features into the training node (i+1)-layer features. The feature of each edge in the i-th graph update layer U is computed from the corresponding node features, and the attention weight derived from each edge feature guides the corresponding node in aggregating features from the other nodes so as to update its own feature. After three node feature updates, the graph attention network module G converts the training node 4-layer feature corresponding to the SAR image of a training query sample into its predicted label. The whole process is implemented as follows:
(3c41) Initializing the iteration counter i; since the graph attention network module G comprises three graph update layers U, the maximum number of iterations is 3, and i = 1;
(3c42) The edge feature construction module U_E of the i-th graph update layer U constructs edge features from all training node i-layer features contained in each training node i-layer feature group, yielding the set of training edge i-layer feature groups,
wherein each training node i-layer feature group has a corresponding training edge i-layer feature group; the training edge i-layer feature obtained from the h-th training node i-layer feature v_h and the j-th training node i-layer feature v_j is f_E(abs(v_h − v_j)), where abs(·) takes the absolute value of each element and f_E(m) denotes the output obtained after the input m passes through the sequentially stacked first fully connected layers of the edge feature construction module U_E of the i-th graph update layer U;
(3c43) The attention weight calculation module U_W of the i-th graph update layer U calculates attention weights from all training edge i-layer features contained in each training edge i-layer feature group, yielding the set of training i-layer attention weight groups,
wherein each training edge i-layer feature group has a corresponding training i-layer attention weight group and each training edge i-layer feature has a corresponding training i-layer attention weight, the attention weights of each node being taken over all of the other nodes;
In this step, the attention weight calculation module U_W calculates the attention weight between nodes from each edge feature; the weights are normalized, i.e. the sum of the attention weights of one node towards all other nodes is 1;
(3c44) The node feature update module U_N of the i-th graph update layer U updates the node features using the set of training node i-layer feature groups and the set of training i-layer attention weight groups, yielding the set of training node (i+1)-layer feature groups,
wherein each training node i-layer feature group has a corresponding training node (i+1)-layer feature group and each training node i-layer feature has a corresponding training node (i+1)-layer feature; a||b denotes splicing vector b onto vector a, and the updated feature is the output obtained after the spliced vector passes through the second fully connected layer of the node feature update module U_N of the i-th graph update layer U;
in this step, the node feature update module U_N updates the node features through attention-weight-guided aggregation: each node aggregates features from all other nodes, the aggregated feature is spliced with the node's own feature, and the spliced result is input to the second fully connected layer to obtain the updated feature, i.e. the training node i-layer feature is updated to the training node (i+1)-layer feature. The node corresponding to the SAR image of each training query sample can thereby acquire class information from the nodes of the same class, and after several updates this class information is converted into the predicted class;
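The three sub-modules of a graph update layer can be sketched as one PyTorch module; the ReLU activations inside the edge MLP and the use of softmax for the normalization are assumptions, since the text only fixes the layer sizes and the sum-to-one constraint.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphUpdateLayer(nn.Module):
    """Sketch of one graph update layer U: the edge feature construction module
    U_E maps |v_h - v_j| through five fully connected layers (four with 96 units,
    one scalar output), the attention weight calculation module U_W normalizes
    the edge features over the other nodes, and the node feature update module
    U_N passes each node concatenated with its weighted aggregation through a
    second fully connected layer."""
    def __init__(self, in_dim, out_dim, hidden=96):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(4):                                 # first four fully connected layers
            layers += [nn.Linear(d, hidden), nn.ReLU()]    # ReLU between layers is assumed
            d = hidden
        layers.append(nn.Linear(d, 1))                     # fifth layer: scalar edge feature
        self.edge_mlp = nn.Sequential(*layers)
        self.node_fc = nn.Linear(2 * in_dim, out_dim)      # the second fully connected layer

    def forward(self, nodes):                              # nodes: (N, in_dim)
        n = nodes.size(0)
        diff = (nodes.unsqueeze(1) - nodes.unsqueeze(0)).abs()      # (N, N, in_dim)
        edge = self.edge_mlp(diff).squeeze(-1)                      # (N, N) edge features
        mask = torch.eye(n, dtype=torch.bool, device=nodes.device)
        attn = F.softmax(edge.masked_fill(mask, float("-inf")), dim=1)  # weights over others sum to 1
        agg = attn @ nodes                           # attention-guided feature aggregation
        return self.node_fc(torch.cat([nodes, agg], dim=1))         # updated node features
```

Stacking three such layers with output sizes 24, 24 and C_test, and applying a softmax to the last output, would match the parameterization given earlier.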
(3c45) Judging whether i = 3 holds; if so, applying a softmax transformation to the training node 4-layer feature contained in each training node 4-layer feature group of the obtained set, yielding the set of training prediction result vectors, wherein each training node 4-layer feature has a corresponding training prediction result vector; otherwise, let i = i + 1 and return to step (3c42);
(3c5) The classification loss function L_C is applied to the set of training prediction result vectors and all label vectors of the training query sample set to compute the classification loss value l_C of the training task set:

l_C = −(1/(C_test(M−K))) Σ_b Σ_{c=1}^{C_test} y_{b,c} log(ŷ_{b,c}),

wherein ŷ_{b,c} is the value of the c-th element of the b-th training prediction result vector and y_{b,c} is the value of the c-th element of the label vector of the SAR image corresponding to that training prediction result vector;
(3d) Weighting and summing the classification loss value l_C of the training task set and the embedding loss value l_E of the training task set to obtain the mixed loss value l of the training task set, l = λ·l_C + (1−λ)·l_E, and then updating the parameters of all first and second convolution layers in the embedding network module E and the parameters of all first and second fully connected layers in the graph attention network module G by the stochastic gradient descent algorithm, wherein λ is a weight with 0.7 ≤ λ < 1;
in this step, the mixed loss value l obtained as the weighted sum of the classification loss value l_C and the embedding loss value l_E updates the parameters of the whole model while enhancing the similarity between features of the same SAR target class and the difference between features of different SAR target classes, thereby improving the classification accuracy;
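A compact sketch of the mixed loss, with cross entropy against the one-hot label vectors standing in for the classification loss and λ = 0.8 chosen as an example from the allowed range:

```python
import torch

def mixed_loss(pred, target_onehot, l_e, lam=0.8):
    """Sketch of steps (3c5)-(3d): l_C is the cross entropy between the training
    prediction result vectors and the one-hot label vectors of the training query
    samples, and the mixed loss is l = lam * l_C + (1 - lam) * l_E, 0.7 <= lam < 1."""
    l_c = -(target_onehot * torch.log(pred.clamp_min(1e-12))).sum(dim=1).mean()
    return lam * l_c + (1.0 - lam) * l_e
```

The returned value would then be back-propagated and all convolution and fully connected layer parameters updated with a stochastic gradient descent optimizer such as torch.optim.SGD.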
(3e) Judging whether n ≥ N; if so, the trained network model H' based on mixed loss and graph attention is obtained, otherwise let n = n + 1 and return to step (3b);
Step 4) obtaining the target classification result of the small-sample SAR image; the implementation steps are shown in fig. 3:
(4a) One-hot encoding the label of each SAR image in the test sample set to obtain a C_test-dimensional label vector for each SAR image; then randomly selecting the K SAR images of each target class, together with their label vectors, from the C_test × M SAR images of the test sample set as the test support sample set, and taking the remaining C_test(M−K) SAR images and their label vectors as the test query sample set, wherein the e-th test support sample and the g-th test query sample each consist of an SAR image and its corresponding label vector;
(4b) Combining the test support sample set with each test query sample into a test task, collecting all test tasks into the test task set, and forward propagating the test task set as the input of the trained network model H' based on mixed loss and graph attention:
(4b1) The trained embedding network module E' maps each SAR image contained in each test task of the test task set to obtain the set of test embedded vector groups,
wherein each test task has a corresponding test embedded vector group, in which the test embedded vectors with index e ≠ C_test·K+1 correspond to the test support samples and the remaining test embedded vector corresponds to the test query sample of the task;
(4b2) The node feature initialization module I constructs a virtual label vector and, for each test embedded vector group, concatenates every test embedded vector with index e ≠ C_test·K+1 with the label vector of the corresponding SAR image, while concatenating the test embedded vector of the test query sample with the virtual label vector, yielding the set of test node 1-layer feature groups,
wherein each test embedded vector group has a corresponding test node 1-layer feature group and each test embedded vector has a corresponding test node 1-layer feature;
(4b3) The trained graph attention network module G' takes the set of test node 1-layer feature groups as input and, for the test node 1-layer feature contained in each test node 1-layer feature group, performs class prediction on the target in the SAR image of the corresponding test query sample, yielding the set of test prediction result vectors; the index of the largest element of each test prediction result vector is the predicted class of the target in the SAR image of the corresponding test query sample, wherein each test node 1-layer feature corresponds to a C_test-dimensional test prediction result vector whose c-th element represents the probability that the target in the SAR image of the corresponding test query sample belongs to the c-th target class;
obtaining the set of test prediction result vectors in this step is similar to step (3c4); only the input data of the trained graph attention network module G' is changed.
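The final class decision reduces to an argmax over each test prediction result vector, as in this one-line sketch:

```python
import torch

def predict_classes(test_pred_vectors):
    """Sketch of step (4b3): the predicted class of each test query sample is the
    index of the largest element of its test prediction result vector."""
    return torch.argmax(test_pred_vectors, dim=1)   # (num_query,) class indices
```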
The technical effects of the invention are further described below in conjunction with simulation experiments:
1. simulation experiment conditions and content:
the hardware platform of the simulation experiment is: the GPU is an NVIDIA GeForce RTX 3090; the software platform is: the operating system is Windows 10. The data set of the simulation experiment is the public MSTAR data set, in which the SAR sensor works in high-resolution spotlight mode in the X-band with HH polarization and a resolution of 0.3 m × 0.3 m; the pitch angle is 17 degrees, the azimuth angle varies continuously from 0 to 360 degrees at intervals of about 5 degrees, and the cropped image size is 128 × 128. The MSTAR data set used contains 10 classes of ground military vehicle targets, i.e. C = 10, of types 2S1, BMP-2, BRDM-2, BTR-60, BTR-70, D-7, T-62, T-72, ZIL-131 and ZSU-234, with 210 SAR images per target class, i.e. M = 210.
To compare the small-sample SAR target classification accuracy with that of the existing small-sample SAR target recognition method based on the graph attention network, a total of 1050 SAR images from 5 target classes of the MSTAR data set, together with the label of each SAR image, are selected as the training sample set, i.e. C_train = 5, and the 1050 SAR images of the remaining 5 target classes, together with the label of each SAR image, are selected as the test sample set, i.e. C_test = 5. Meanwhile, the number of training/test support samples sampled per target class in each training/test task is K = 10, and the number of training/test query samples is M − K = 200. The division of target classes between the training sample set and the test sample set and the number of SAR images per class are shown in Table 1:
TABLE 1
The classification confusion matrix and the average classification accuracy are compared, through simulation, with those of the existing small-sample SAR target recognition method based on the graph attention network; the results are shown in Table 2:
TABLE 2
As can be seen from Table 2, the average accuracy of the small sample SAR target classification of the present invention is improved by 5.1% over the prior art.
The foregoing description is only a specific example of the invention and is not intended to limit the invention in any way, but it will be apparent to those skilled in the art that various modifications and changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (4)
1. A small-sample SAR target classification method based on mixed loss and graph attention, characterized by comprising the following steps:
(1) Acquiring a training sample set and a test sample set:
(1a) Acquiring a plurality of synthetic aperture radar (SAR) images containing C different target classes, wherein each target class corresponds to M SAR images of size h×h and each SAR image contains 1 target, with C ≥ 10, M ≥ 200 and h = 128;
(1b) Labeling the target class in each SAR image, randomly selecting C_train target classes, and taking their C_train × M SAR images together with the label of each SAR image as the training sample set, while the remaining C_test target classes, i.e. C_test × M SAR images together with the label of each SAR image, are taken as the test sample set, wherein C_train + C_test = C and 3 ≤ C_test ≤ 5;
(2) Constructing a network model H based on mixed loss and graph attention:
Constructing a network model H based on mixed loss and graph attention which comprises a data enhancement module D, an embedding network module E, a node feature initialization module I and a graph attention network module G cascaded in sequence, wherein the embedding network module E comprises a plurality of sequentially cascaded first convolution modules E_C and one second convolution module E_L; each first convolution module E_C comprises a first convolution layer, a first batch normalization layer, a Mish activation layer and a max pooling layer stacked in sequence, and the second convolution module E_L comprises a second convolution layer and a second batch normalization layer stacked in sequence; the graph attention network module G comprises a plurality of sequentially cascaded graph update layers U, each graph update layer U comprising an edge feature construction module U_E, an attention weight calculation module U_W and a node feature update module U_N cascaded in sequence; each edge feature construction module U_E comprises a plurality of sequentially stacked first fully connected layers, and each node feature update module U_N comprises one second fully connected layer;
(3) Performing iterative training on the network model H based on mixed loss and graph attention:
(3a) Initializing the iteration counter n and the maximum number of iterations N, wherein N ≥ 1000 and n = 0;
(3b) Randomly selecting C_test target classes, i.e. C_test × M SAR images in total, from the training sample set, and one-hot encoding the label of each SAR image to obtain a C_test-dimensional label vector for each SAR image; then randomly selecting the K SAR images of each target class, together with their label vectors, from the C_test × M SAR images as the training support sample set, and taking the remaining C_test(M−K) SAR images and their label vectors as the training query sample set, wherein the c-th element of the one-hot encoded label vector of each SAR image represents the probability that the target in that SAR image belongs to the c-th of the C_test target classes, the a-th training support sample and the b-th training query sample each consist of an SAR image and its corresponding label vector, and 1 ≤ K ≤ 10;
(3c) Combining the training support sample set with each training query sample into a training task, collecting all training tasks into the training task set, and forward propagating the training task set as the input of the network model H based on mixed loss and graph attention:
(3c1) The data enhancement module D performs data enhancement on each SAR image in the training task set: each SAR image undergoes a power transformation, noise is added to the power-transformed image, the noisy image is flipped, and the flipped image is rotated, yielding the enhanced training task set,
wherein each training task, each training support sample and each training query sample has a corresponding enhanced training task, enhanced training support sample and enhanced training query sample;
(3c2) The embedding network module E maps each SAR image contained in each enhanced training task of the enhanced training task set to obtain the set of training embedded vector groups, and the embedding loss function L_E is applied to the training embedded vector groups to compute the embedding loss value l_E of the training task set:

l_E = −(1/(C_test(M−K))) Σ_b log( exp(−d(e_b, p_{y_b})) / Σ_{c=1}^{C_test} exp(−d(e_b, p_c)) ),

wherein each enhanced training task has a corresponding training embedded vector group, in which the training embedded vectors with index a ≠ C_test·K+1 correspond to the enhanced training support samples and the remaining training embedded vector e_b corresponds to the enhanced training query sample of the b-th training task; log(·) denotes the logarithm with the natural constant e as base, exp(·) denotes the exponential with the natural constant e as base, and Σ denotes summation; p_c denotes the class center of the c-th target class, obtained by averaging the training embedded vectors corresponding to the SAR images of the c-th target class contained in the training support sample set; p_{y_b} denotes the class center of the target class to which the target in the SAR image of the training query sample of the b-th training task belongs; and d denotes the metric function, d(p, q) = ||p − q||_2;
(3c3) The node feature initialization module I constructs a virtual label vector and, within each training embedding vector group, concatenates every training embedding vector whose index satisfies a ≠ C_test·K + 1 with the label vector of its corresponding SAR image, while concatenating the (C_test·K + 1)-th training embedding vector of each training embedding vector group with the virtual label vector, obtaining the set of training node layer-1 feature groups;
wherein the virtual label vector is a C_test-dimensional vector each of whose elements has value 1, each training node layer-1 feature group corresponds to a training embedding vector group, and each training node layer-1 feature corresponds to a training embedding vector;
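A sketch of the node feature initialization in step (3c3), assuming one query sample per training task and taking the virtual label vector literally as the all-ones C_test-dimensional vector described above; the tensor shapes noted in the comments are assumptions.

```python
import torch

def init_node_features(support_emb, support_labels, query_emb, c_test):
    """Layer-1 node features: each support embedding is concatenated with its one-hot
    label, the single query embedding with the virtual label vector.
    support_emb: (C_test*K, D), support_labels: (C_test*K, C_test), query_emb: (1, D)."""
    virtual_label = torch.ones(1, c_test)                              # virtual label vector (assumed all ones)
    support_feat = torch.cat([support_emb, support_labels], dim=1)     # (C_test*K, D + C_test)
    query_feat = torch.cat([query_emb, virtual_label], dim=1)          # (1, D + C_test)
    return torch.cat([support_feat, query_feat], dim=0)                # (C_test*K + 1, D + C_test)
```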
(3c4) The graph attention network module G takes the set of training node layer-1 feature groups as input and, for the training node layer-1 feature corresponding to the training query sample contained in each training node layer-1 feature group, predicts the class of the target in the SAR image of that training query sample, obtaining the set of training prediction result vectors; wherein each training prediction result vector is C_test-dimensional and its c-th element represents the predicted probability that the target in the SAR image of the corresponding training query sample belongs to the c-th target class;
(3c5) Using the classification loss function L_C, compute the classification loss value l_C of the training task set from the set of training prediction result vectors and all label vectors of the training query sample set:
wherein the value of the c-th element of each training prediction result vector and y_{b,c}, the value of the c-th element of the label vector of the SAR image corresponding to that prediction result vector, enter the classification loss;
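The classification loss expression is likewise shown only as an image in this record; the sketch below uses the ordinary cross-entropy between prediction result vectors and one-hot label vectors, which is consistent with the quantities defined in the text but is an assumed form.

```python
import torch

def classification_loss(pred, labels):
    """Cross-entropy between (Q, C_test) predicted probabilities and one-hot label vectors."""
    return -(labels * torch.log(pred + 1e-12)).sum(dim=1).mean()
```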
(3d) Form the weighted sum of the classification loss value l_C and the embedding loss value l_E of the training task set to obtain the total loss of the training task set, l = λ·l_C + (1 − λ)·l_E, and then update the parameters of all first and second convolution layers in the embedding network module E and of all first and second fully connected layers in the graph attention network module G using the stochastic gradient descent algorithm, where λ is a weight and 0.7 ≤ λ < 1;
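A minimal sketch of the weighted-sum update in step (3d); λ = 0.8 and the learning rate are illustrative choices (the claim only fixes 0.7 ≤ λ < 1), and model stands for the trainable layers of E and G.

```python
import torch

def mixed_loss_step(model, optimizer, l_C, l_E, lam=0.8):
    """One parameter update: l = lam*l_C + (1 - lam)*l_E, followed by a stochastic gradient step."""
    loss = lam * l_C + (1.0 - lam) * l_E
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# illustrative usage: optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
```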
(3e) Judge whether n ≥ N; if so, the trained network model H' based on mixed loss and graph attention is obtained; otherwise, let n = n + 1 and return to step (3b);
(4) Obtaining a target classification result of the small sample SAR image:
(4a) One-hot encode the label of each SAR image in the test sample set to obtain a C_test-dimensional label vector for each image; then, from the C_test × M SAR images of the test sample set, randomly select K images of each target class, together with their corresponding label vectors, as the test support sample set, and take the remaining C_test × (M − K) images and their corresponding label vectors as the test query sample set, wherein the e-th test support sample and the g-th test query sample each consist of an SAR image and its corresponding label vector;
(4b) Combine the test support sample set with each test query sample to form a test task, obtaining the test task set, and feed the test task set as input to the trained network model H' based on mixed loss and graph attention for forward propagation:
(4b1) The trained embedding network module E' maps each SAR image contained in each test task of the test task set, obtaining the set of test embedding vector groups;
wherein each test embedding vector group corresponds to a test task, each test embedding vector with index e ≠ C_test·K + 1 corresponds to a test support sample, and the (C_test·K + 1)-th test embedding vector corresponds to the test query sample;
(4b2) The node feature initialization module I constructs a virtual label vector and, within each test embedding vector group, concatenates every test embedding vector whose index satisfies e ≠ C_test·K + 1 with the label vector of its corresponding SAR image, while concatenating the (C_test·K + 1)-th test embedding vector of each test embedding vector group with the virtual label vector, obtaining the set of test node layer-1 feature groups;
wherein each test node layer-1 feature group corresponds to a test embedding vector group, and each test node layer-1 feature corresponds to a test embedding vector;
(4b3) The trained graph attention network module G' takes the set of test node layer-1 feature groups as input and, for the test node layer-1 feature corresponding to the test query sample contained in each test node layer-1 feature group, predicts the class of the target in the SAR image of that test query sample, obtaining the set of test prediction result vectors; the index of the maximum element of each test prediction result vector is the predicted class of the target in the SAR image of the corresponding test query sample, wherein each test prediction result vector is C_test-dimensional and its c-th element represents the probability that the target in the SAR image of the corresponding test query sample belongs to the c-th target class.
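The class decision in step (4b3) reduces to an arg-max over each C_test-dimensional test prediction result vector, as in this short sketch:

```python
import torch

def predict_classes(prediction_vectors):
    """prediction_vectors: (Q, C_test) test prediction result vectors -> predicted class indices."""
    return torch.argmax(prediction_vectors, dim=-1)
```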
2. The method of claim 1, wherein in the network model H based on mixed loss and graph attention in step (2), the embedding network module E includes a first convolution module E_C, the graph attention network module G includes three graph update layers U, the edge feature construction module U_E of each graph update layer U includes five first fully connected layers, and the specific parameters of the embedding network module E and the graph attention network module G are as follows:
in the first convolution module E_C of the embedding network module E, the convolution kernels of the five first convolution layers are all of size 3 × 3 with stride 1 and padding 1, the first three first convolution layers have 64 convolution kernels each, the fourth and fifth first convolution layers have 128 convolution kernels each, and the five max pooling layers have pooling kernels of size 2 × 2 with sliding stride 2; in the second convolution module E_L, the convolution kernel of the second convolution layer is of size 4 × 4 with stride 1;
in the edge feature construction module U_E of each graph update layer U of the graph attention network module G, the first four first fully connected layers have 96 neurons each and the fifth first fully connected layer has 1 neuron; the second fully connected layer of the node feature update module U_N of each of the first two graph update layers U has 24 neurons, and the second fully connected layer of the node feature update module U_N of the third graph update layer U has C_test neurons.
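For illustration, a PyTorch sketch of an embedding network with the claim-2 parameters follows; the placement of batch normalization and ReLU activation and the output dimension of the second convolution layer are assumptions not fixed by the claim.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """3x3 conv (stride 1, padding 1) + BatchNorm + ReLU + 2x2 max pooling with stride 2."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=2, stride=2),
    )

class EmbeddingNet(nn.Module):
    """Five first convolution layers (64, 64, 64, 128, 128 kernels) followed by a 4x4 second conv layer."""
    def __init__(self, in_ch=1, emb_dim=128):
        super().__init__()
        self.first_conv = nn.Sequential(
            conv_block(in_ch, 64), conv_block(64, 64), conv_block(64, 64),
            conv_block(64, 128), conv_block(128, 128),
        )
        self.second_conv = nn.Conv2d(128, emb_dim, kernel_size=4, stride=1)  # E_L

    def forward(self, x):
        return self.second_conv(self.first_conv(x)).flatten(1)  # one embedding vector per image
```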
3. The small-sample SAR target classification method based on mixed loss and graph attention according to claim 1, wherein the data enhancement module D in step (3c1) performs data enhancement on each SAR image of the training task set as follows: apply a power transformation to each SAR image B, i.e. B1 = (B/255)^γ × 255, to obtain SAR image B1; add noise to each SAR image B1, i.e. B2 = B1 + noise, to obtain SAR image B2; apply a flip transformation to each SAR image B2, i.e. B3 = flip(B2), to obtain SAR image B3; and apply a rotation transformation to each SAR image B3, i.e. B' = rot(B3), to obtain the enhanced SAR image B'; where γ denotes the power and is taken randomly from the range [0.7, 1.3], noise denotes noise distributed on [−α, α] with α taken randomly from the range (0, 50], flip(·) randomly flips the image left-right or up-down with probability 1/2 for each flip mode, and rot(·) randomly rotates the image clockwise by 90°, 180° or 270° with probability 1/3 for each rotation angle.
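A NumPy sketch of the claim-3 enhancement chain on a single SAR image B with pixel values in [0, 255]; the text only states that the noise lies in [−α, α], so the uniform draw used here is an assumption, and the half-open sampling of α approximates the stated range (0, 50].

```python
import numpy as np

def enhance(B, rng=None):
    """Power transform, additive noise, random flip, then random clockwise rotation."""
    rng = rng or np.random.default_rng()
    gamma = rng.uniform(0.7, 1.3)                                   # power γ in [0.7, 1.3]
    B1 = (B / 255.0) ** gamma * 255.0                               # B1 = (B/255)^γ × 255
    alpha = rng.uniform(0.0, 50.0)                                  # α, approximating (0, 50]
    B2 = B1 + rng.uniform(-alpha, alpha, size=B1.shape)             # B2 = B1 + noise
    B3 = np.fliplr(B2) if rng.random() < 0.5 else np.flipud(B2)     # flip L-R or U-D, prob 1/2 each
    k = rng.integers(1, 4)                                          # 1, 2 or 3 quarter turns, prob 1/3 each
    return np.rot90(B3, k=-k)                                       # B' = rot(B3), clockwise
```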
4. The small-sample SAR target classification method based on mixed loss and graph attention according to claim 1, wherein in step (3c4) the graph attention network module G performs class prediction on the training node layer-1 feature corresponding to the training query sample contained in each training node layer-1 feature group of the set of training node layer-1 feature groups as follows:
(3c41) Initialize the iteration counter i = 1 and set the maximum number of iterations to 3;
(3c42) The edge feature construction module U_E of the i-th graph update layer U constructs edge features from all training node layer-i features contained in each training node layer-i feature group of the set of training node layer-i feature groups, obtaining the set of training edge layer-i feature groups;
wherein each training edge layer-i feature group corresponds to a training node layer-i feature group, the training edge layer-i feature for the h-th and j-th training node layer-i features of a group is obtained from those two node layer-i features, abs(·) denotes taking the element-wise absolute value, and U_E^i(m) denotes the output obtained after the input m passes in sequence through the first fully connected layers of the edge feature construction module U_E of the i-th graph update layer U;
(3c43) The attention weight calculation module U_W of the i-th graph update layer U computes attention weights from all training edge layer-i features contained in each training edge layer-i feature group of the set of training edge layer-i feature groups, obtaining the set of training layer-i attention weight groups;
wherein each training layer-i attention weight group corresponds to a training edge layer-i feature group, each training layer-i attention weight corresponds to a training edge layer-i feature, and the index set appearing in the attention weight runs over the positive integer node indices with h removed;
(3c44) The node feature update module U_N of the i-th graph update layer U updates the node features using the set of training node layer-i feature groups and the set of training layer-i attention weight groups, obtaining the set of training node layer-(i+1) feature groups;
wherein each training node layer-(i+1) feature group corresponds to a training node layer-i feature group, each training node layer-(i+1) feature corresponds to a training node layer-i feature, a||b denotes the vector obtained by concatenating vector b onto vector a, and U_N^i(n) denotes the output obtained after the input n passes through the second fully connected layer of the node feature update module U_N of the i-th graph update layer U;
(3c45) Judge whether i = 3; if so, apply a softmax transformation to the training node layer-4 feature contained in each training node layer-4 feature group of the obtained set of training node layer-4 feature groups to obtain the set of training prediction result vectors, each training prediction result vector corresponding to a training node layer-4 feature; otherwise, let i = i + 1 and return to step (3c42).
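The attention-weight and node-update formulas of claim 4 are rendered as images in this record; the PyTorch sketch below follows the textual definitions (edge features from the element-wise absolute difference passed through five fully connected layers, attention normalized over the other nodes, and the concatenation a||b fed to the second fully connected layer), with the ReLU activations between the edge layers and the exact normalization being assumptions.

```python
import torch
import torch.nn as nn

class GraphUpdateLayer(nn.Module):
    """One graph update layer U: edge features from |v_h - v_j| through five FC layers (U_E),
    softmax attention over the other nodes (U_W), then attention-weighted aggregation
    concatenated to each node feature and passed through the second FC layer (U_N)."""
    def __init__(self, in_dim, hidden=96, out_dim=24):
        super().__init__()
        self.edge_mlp = nn.Sequential(                    # U_E: 96-96-96-96-1 neurons
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        self.node_fc = nn.Linear(2 * in_dim, out_dim)     # U_N second fully connected layer

    def forward(self, v):                                 # v: (num_nodes, in_dim) node features
        n = v.size(0)
        diff = (v.unsqueeze(1) - v.unsqueeze(0)).abs()    # |v_h - v_j|, shape (n, n, in_dim)
        e = self.edge_mlp(diff).squeeze(-1)               # edge scores, shape (n, n)
        mask = torch.eye(n, dtype=torch.bool, device=v.device)
        e = e.masked_fill(mask, float("-inf"))            # exclude j == h from the normalization
        w = torch.softmax(e, dim=1)                       # attention weights over the other nodes
        agg = w @ v                                       # attention-weighted neighbour sum
        return self.node_fc(torch.cat([v, agg], dim=1))   # v || agg -> second FC layer
```

Stacking three such layers, with the third layer's output dimension set to C_test and a softmax applied to the query node's output, mirrors the loop of steps (3c41) to (3c45).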