CN115270872A - Radar radiation source individual small sample learning and identifying method, system, device and medium

Radar radiation source individual small sample learning and identifying method, system, device and medium

Info

Publication number
CN115270872A
CN115270872A
Authority
CN
China
Prior art keywords
samples
query
support set
sample
query set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210885334.6A
Other languages
Chinese (zh)
Inventor
王伟
施皓然
周永坤
饶彬
王涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Sun Yat Sen University Shenzhen Campus
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202210885334.6A priority Critical patent/CN115270872A/en
Publication of CN115270872A publication Critical patent/CN115270872A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a radar radiation source individual small sample learning and identifying method, system, device and medium. The method comprises the following steps: acquiring a data set to be identified and dividing it into a support set and a query set; inputting the support set and the query set into a trained small sample learning model to obtain relation score vectors between the support set samples and the query set samples; and classifying the data set to be identified according to the obtained relation score vectors. The small sample learning model is trained with a triple loss relation function. The invention identifies radiation source signals with a small sample learning method, which alleviates the problems of insufficient data samples and poor generalization capability that affect conventional deep learning models. In addition, training the model with the triple loss relation function strengthens the training of difficult samples, raises their recognition rate, improves the recognition performance of the network, and ultimately improves radiation source identification capability. The invention can be widely applied in the field of electromagnetism.

Description

Radar radiation source individual small sample learning and identifying method, system, device and medium
Technical Field
The invention relates to the field of electromagnetism, and in particular to a radar radiation source individual small sample learning and identifying method, system, device and medium.
Background
In the field of radar radiation source individual identification, traditional methods rely on designing and extracting handcrafted features. However, with the rapid development of radar systems, the performance of these methods has degraded considerably, so more advanced techniques need to be investigated.
In recent years, deep learning has been successfully applied to the field of signal identification. The basic idea is as follows: first, a domain transform is applied to the signal to obtain a corresponding transform image (such as a time-frequency image), and this image is then fed to a deep learning network to complete the identification task. Although deep learning performs well on signal recognition tasks, it still has certain drawbacks. First, a deep learning model needs a large amount of labeled data to perform well, which is difficult to satisfy in radar radiation source individual identification: because of low probability of intercept radars and severe signal overlap in the electromagnetic environment, high-quality labeled radar radiation source data are hard to obtain. Second, the good performance of a deep learning model is generally limited to the data classes present in the training set, i.e. its generalization to new signal classes is poor. Identifying new signal classes requires redesigning and retraining the network, which greatly increases the cost.
In addition, in radar radiation source individual identification, some samples always have a low recognition rate; these are called hard samples (also referred to below as difficult samples). In contrast, samples with a higher recognition accuracy are called easy samples (simple samples). Existing models, such as the relation network model, do not deliberately distinguish simple samples from difficult samples during training, i.e. both are treated in the same way, which leaves the difficult samples insufficiently trained and limits the performance of the model.
Disclosure of Invention
In order to solve, to a certain extent, at least one of the technical problems in the prior art, the invention aims to provide a radar radiation source individual small sample learning and identifying method, system, device and medium.
The technical scheme adopted by the invention is as follows:
a learning and identifying method for small samples of radar radiation sources comprises the following steps:
acquiring a data set to be identified, and dividing the data set to be identified into a support set and a query set;
inputting the support set and the query set into a trained small sample learning model to obtain a relation score vector of the support set sample and the query set sample;
carrying out data classification on the data set to be identified according to the obtained relation score vector;
and the small sample learning model is trained by adopting a triple loss relation function.
Further, the small sample learning model is trained by:
acquiring a training data set, and dividing the training data set into a support set and a query set;
inputting the support set and the query set into a feature extraction network, and outputting a support set feature graph and a query set feature graph;
connecting the support set characteristic diagram and the query set characteristic diagram on the channel dimension to obtain a support set-query set sample pair;
inputting the support set-query set sample pairs into a similar network to obtain a relationship score;
and calculating a triple loss relation function according to the relation fraction, and training the small sample learning model according to the triple loss relation function.
Further, the inputting the support set and the query set into the feature extraction network and outputting the support set feature map and the query set feature map includes:

inputting a support set sample $x_i$ into the feature extraction network to obtain a support set feature map $f_\varphi(x_i)$;

inputting a query set sample $x_j$ into the feature extraction network to obtain a query set feature map $f_\varphi(x_j)$;

wherein $f(\cdot)$ denotes the feature extraction network and $\varphi$ denotes the weights of the feature extraction network.

The connecting the support set feature map and the query set feature map in the channel dimension to obtain a support set-query set sample pair includes:

concatenating the support set feature map $f_\varphi(x_i)$ and the query set feature map $f_\varphi(x_j)$ in the channel dimension to obtain the support set-query set sample pair $C(f_\varphi(x_i), f_\varphi(x_j))$.

The relation score is calculated by the following formula:

$$r_{i,j} = h_\rho\big(C(f_\varphi(x_i), f_\varphi(x_j))\big)$$

where ρ is the weight of the similar network and h(·) represents the similar network.
Further, the triplet loss relation function includes a first loss function and a second loss function;
the first loss function is constructed by adopting a minimum mean square error criterion, and the similarity degree between samples is measured through the quantitative relation between the relation fraction and the label indicative function;
the second loss function is a triplet loss function.
Further, the expression of the first loss function is as follows:

$$L_{MSE} = \sum_{i=1}^{p \times k} \sum_{j=1}^{p \times n} \big( r_{i,j} - \mathbf{1}(y_i = y_j) \big)^2$$

where $\mathbf{1}(\cdot)$ is the label indicator function, $r_{i,j}$ represents the relation score between support set sample i and query set sample j, $y_i$ and $y_j$ represent the labels of the support set sample and the query set sample, respectively, p × k represents the total number of support set samples, and p × n represents the total number of query set samples.

The expression of the second loss function is as follows:

$$L_{triplet} = \frac{1}{N} \sum_{i=1}^{N} \ln\!\big( 1 + e^{\,d_p(i) - d_n(i)} \big)$$

where N is the total number of triplets in one small sample learning task, $d_p(i)$ represents the Euclidean distance between $A_i$ and $P_i$, and $d_n(i)$ represents the Euclidean distance between $A_i$ and $N_i$; $A_i$ is the anchor sample of the triplet and corresponds to a sample in the support set; $P_i$ is the positive sample corresponding to the anchor sample and corresponds to a sample in the query set of the same category as the anchor sample; $N_i$ is the negative sample corresponding to the anchor sample and corresponds to a sample in the query set of a different category from the anchor sample.
Further, the feature extraction network comprises four convolution modules, wherein the first two convolution modules comprise a convolution layer, a batch normalization layer, an activation function and a maximum pooling layer, and the last two convolution modules comprise a convolution layer, a batch normalization layer and an activation function.
Further, the similarity network comprises a convolution module and two fully connected layers, wherein the convolution module comprises a convolution layer, a batch normalization layer, an activation function and a maximum pooling layer.
The other technical scheme adopted by the invention is as follows:
a radar radiation source individual small sample learning identification system comprising:
the data acquisition module is used for acquiring a data set to be identified and dividing the data set to be identified into a support set and a query set;
the vector calculation module is used for inputting the support set and the query set into the trained small sample learning model to obtain a relation score vector;
the data classification module is used for performing data classification on the data set to be identified according to the obtained relation score vector;
and the small sample learning model is trained by adopting a triple loss relation function.
The invention adopts another technical scheme that:
a radar radiation source individual small sample learning and identifying device comprises:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
The invention adopts another technical scheme that:
a computer readable storage medium in which a processor executable program is stored, which when executed by a processor is for performing the method as described above.
The invention has the following beneficial effects: identifying radiation source signals with a small sample learning method alleviates the problems of insufficient data samples and the poor generalization capability of conventional deep learning models. In addition, training the model with the triple loss relation function strengthens the training of difficult samples, raises their recognition rate, improves the recognition performance of the network, and ultimately improves radiation source identification capability.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It should be understood that the following drawings only show some embodiments of the technical solutions of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic representation of triplets in an embodiment of the invention;
FIG. 2 is a diagram of a small sample learning framework based on a triplet loss relation network according to an embodiment of the present invention;
FIG. 3 is a diagram of a triple loss relationship network architecture in accordance with an embodiment of the present invention;
FIG. 4 is a flowchart illustrating steps of a method for learning and identifying small samples of an individual radar radiation source according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating the steps of model training in an embodiment of the present invention;
FIG. 6 is a flow chart of the steps of experimental validation in an embodiment of the present invention;
fig. 7 is a schematic diagram of the recognition accuracy of the 5-way-5-shot task in the embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention and are not to be construed as limiting the present invention. For the step numbers in the following embodiments, they are set for convenience of illustration only, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
In the description of the present invention, it should be understood that the orientation or positional relationship referred to in the description of the orientation, such as the upper, lower, front, rear, left, right, etc., is based on the orientation or positional relationship shown in the drawings, and is only for convenience of description and simplification of description, and does not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more and "a plurality of" means two or more; terms such as "greater than", "less than" and "exceeding" are understood as excluding the stated number, while terms such as "above", "below" and "within" are understood as including it. Where "first" and "second" are used, they only serve to distinguish technical features and are not to be understood as indicating or implying relative importance, the number of the indicated technical features, or the precedence of the indicated technical features.
In the description of the present invention, unless otherwise explicitly limited, terms such as arrangement, installation, connection and the like should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of the above terms in the present invention in combination with the specific contents of the technical solutions.
1.1 conventional radiation source identification scheme
Traditional radiation source individual identification is mainly based on the design and extraction of expert features, i.e. relevant fingerprint features are designed in advance and then recognized by a machine learning classifier. Common fingerprint features include pulse envelope features, high-order spectral features, phase noise, device nonlinearity features, and the like. Because of uneven data quality (low signal-to-noise ratio, etc.) and the lack of measured data, the design and extraction of some fingerprint features are difficult and time-consuming, which causes traditional radiation source individual identification methods to fail partially or completely.
1.2 deep learning-based radiation source identification scheme
Given the great success of deep learning in fields such as computer vision and speech processing, researchers have applied it to radiation source individual recognition. The existing basic idea is as follows: first, a domain transform is applied to the individual radar radiation source signal to obtain a corresponding transform image, which is then used as the input of a deep learning network to identify the individual radar radiation source.
1.3 radiation source identification scheme based on small sample learning
For conventional deep learning methods, the training set and the test set contain the same sample classes, so the recognition capability of the model is generally limited to the data classes contained in the training set. Small sample learning is different: the sample classes of the training set and the test set are completely different, and the two sets have no intersection. Therefore, a small sample learning model obtained from the training set has better transfer capability to the samples of the test set and is well suited to the radar radiation source signal identification task.
Therefore, this embodiment adopts a small sample learning method to identify radiation source signals. Small sample learning aims to identify new data classes using only a small number of labeled samples, which largely alleviates the problems of insufficient data samples and poor generalization capability of conventional deep learning models.
In addition, in order to enhance the training of difficult samples, the invention improves the loss function of the Relation Network (RN) by introducing a triplet loss function based on a difficult-sample selection strategy. The improved network is called the Triple Loss Relation Network (TLRN). The TLRN aims to increase the recognition rate of difficult samples by strengthening the network's training on them, thereby improving the recognition performance of the network and ultimately the radiation source identification capability.
2.1 Small samples learning symbols and terms
We denote by x and y a data sample and its corresponding label, and by X and Y a data set and its corresponding label set. Consider a small sample learning task T that processes a data set $D = \{D_{support}, D_{query}\}$, where $D_{support}$ is the support set and $D_{query}$ is the query set; the support set and the query set have the same data categories. If the support set of task T contains P data categories and each category contains K labeled samples, task T is called a P-way-K-shot task. For a P-way-K-shot task, P data categories are selected from the data set X and K labeled samples are selected from each category to construct the support set $D_{support} = \{(x_i, y_i)\}_{i=1}^{P \times K}$, while a batch of samples is drawn from the remaining data of the P categories to form the query set $D_{query} = \{(x_j, y_j)\}_{j=1}^{P \times n}$, where n is the number of query samples per category. In the small sample learning scenario, the data are divided into a training set, a test set and a validation set whose data categories do not intersect with each other. In each training round, the model performs a different P-way-K-shot task; within a given interval of training rounds, the model performs multiple P-way-K-shot validation tasks; after training is completed, the model performs a different test task in each test round.
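For illustration, this episodic sampling can be sketched in a few lines of Python. The function name sample_episode, the dictionary input (class label to array of samples) and the default parameter values are assumptions for the sketch, not details taken from the patent:

```python
import numpy as np

def sample_episode(data_by_class, p=5, k=5, n=10, rng=None):
    """Build one P-way-K-shot task: a support set with k labeled samples per
    class and a query set with n samples per class, drawn from p random classes."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(list(data_by_class.keys()), size=p, replace=False)
    support_x, support_y, query_x, query_y = [], [], [], []
    for label, cls in enumerate(classes):
        samples = data_by_class[cls]                  # indexable array of class samples
        idx = rng.permutation(len(samples))
        support_x.extend(samples[i] for i in idx[:k])
        support_y.extend([label] * k)
        query_x.extend(samples[i] for i in idx[k:k + n])
        query_y.extend([label] * n)
    return (np.stack(support_x), np.array(support_y),
            np.stack(query_x), np.array(query_y))
```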
2.2 triple loss function principle based on difficult sample selection strategy
The advantage of the triplet loss function (triplet loss) lies in fine-grained discrimination: when two inputs are similar, the triplet loss function models their details well, so it performs strongly on tasks whose inputs are highly similar.

The triplet loss function is built on triplets; a schematic of a triplet is shown in Fig. 1. Its basic idea is to learn and model the data samples through a loss function so that the distance between an anchor sample and a positive sample is as small as possible while the distance between the anchor sample and a negative sample is as large as possible. Based on this definition, triplets are usually divided into three categories: simple triplets, general triplets and difficult triplets. Simple triplets contribute little to improving model performance and instead slow down and destabilize the convergence of the network. General triplets are suitable for the early training of the model and accelerate its convergence. Difficult triplets are suitable for the later stage of training; they improve the model's ability to recognize difficult samples and strengthen its ability to capture details. In the conventional triplet loss function the triplets are selected at random, which cannot guarantee their quality; simple triplets are the most likely to be selected, so the value of the triplet loss becomes unstable, which greatly limits the performance of the model and slows its convergence.
According to the above analysis, the method needs to select difficult triplets, so as to enhance the training of difficult samples and improve the performance of the model. In the context of the present invention, the anchor sample corresponds to a sample in the support set, the positive sample corresponds to a sample in the query set of the same class as the anchor sample, and the negative sample corresponds to a sample in the query set of a different class from the anchor sample, as sketched below.
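A minimal sketch of this triplet construction, assuming the support and query labels are available as plain Python lists, is shown below; the helper name build_triplets is illustrative only:

```python
def build_triplets(support_labels, query_labels):
    """Enumerate (anchor, positive, negative) index triplets: the anchor indexes a
    support set sample, the positive a query set sample of the same class, and the
    negative a query set sample of a different class."""
    triplets = []
    for a, ya in enumerate(support_labels):
        positives = [q for q, yq in enumerate(query_labels) if yq == ya]
        negatives = [q for q, yq in enumerate(query_labels) if yq != ya]
        triplets.extend((a, p, n) for p in positives for n in negatives)
    return triplets
```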
Let the anchor sample of a triplet be $A_i$, its corresponding positive sample be $P_i$ and its negative sample be $N_i$. The triplet loss function is then expressed as follows:

$$L = \frac{1}{N} \sum_{i=1}^{N} \max\big( d_p(i) - d_n(i) + m,\; 0 \big) \tag{1}$$

where m is the margin, N is the total number of triplets in one small sample learning task (in a P-way-K-shot task with P × n query set samples, N = P × (P - 1) × n), $d_p(i)$ represents the Euclidean distance between $A_i$ and $P_i$, and $d_n(i)$ represents the Euclidean distance between $A_i$ and $N_i$.

Since equation (1) constrains the gap between $d_p(i)$ and $d_n(i)$ through a margin-based penalty term, and the value of the margin depends on experience and on the distribution of the data set, it affects the convergence and performance of the model to a certain extent. The invention therefore adopts a soft-margin triplet loss function, whose expression is as follows:

$$L = \frac{1}{N} \sum_{i=1}^{N} \ln\!\big( 1 + e^{\,d_p(i) - d_n(i)} \big) \tag{2}$$

where N, $d_p(i)$ and $d_n(i)$ are defined as above. Equation (2) is only used for training the feature extraction network, i.e., the features output by the feature extraction network are used directly to compute equation (2).

The feature maps output by the feature extraction network for the support set and the query set are denoted $S_{support}$ and $S_{query}$ (equation (3)); before computing the distances, each feature map is flattened into a vector (equation (4)). The anchor samples are then taken from $S_{support}$, and both the positive and negative samples are taken from $S_{query}$ (positive and negative samples are distinguished by whether they belong to the same class as the anchor sample).
2.3 triple loss relational network architecture
Fig. 2 shows the small sample learning network framework adopted by the present invention, which is the same as the framework of the relation network. The framework consists of a feature extraction network and a similar network. The feature extraction network consists of four convolution modules: the first two convolution modules each contain a convolution layer (64 convolution kernels of size 3 × 3), a batch normalization layer, an activation function (ReLU) and a max pooling layer, and the last two convolution modules each contain a convolution layer (64 convolution kernels of size 3 × 3), a batch normalization layer and an activation function (ReLU). Fig. 3 shows the structure of the network adopted by the invention. As shown in Fig. 3, a support set sample $x_i$ is input into the feature extraction network to obtain the feature map $f_\varphi(x_i)$, where $f(\cdot)$ denotes the feature extraction network and $\varphi$ denotes its weights. A query set sample $x_j$ is input into the feature extraction network to obtain the feature map $f_\varphi(x_j)$. The feature maps $f_\varphi(x_i)$ and $f_\varphi(x_j)$ are concatenated in the channel dimension to obtain the support set-query set sample pair $C(f_\varphi(x_i), f_\varphi(x_j))$, which is input into the similar network to obtain the relation score:

$$r_{i,j} = h_\rho\big(C(f_\varphi(x_i), f_\varphi(x_j))\big) \tag{5}$$

where ρ is the weight of the similar network and h(·) represents the similar network.

The similar network consists of a convolution module and two fully connected layers; the convolution module contains a convolution layer (64 convolution kernels of size 3 × 3), a batch normalization layer, an activation function (ReLU) and a max pooling layer.
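The architecture just described could be written in PyTorch roughly as follows. The layer counts and the 64 convolution kernels of size 3 × 3 follow the text; the padding, the single input channel (grayscale time-frequency images), the hidden size of 8 in the fully connected layers and the final sigmoid are assumptions made only so the sketch runs, not values confirmed by the patent:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, pool=True):
    """One convolution module: 3 x 3 convolution (64 kernels), batch
    normalization, ReLU, and optionally 2 x 2 max pooling."""
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
              nn.BatchNorm2d(out_ch),
              nn.ReLU(inplace=True)]
    if pool:
        layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

class FeatureExtractor(nn.Module):
    """f_phi: four convolution modules, the first two with max pooling."""
    def __init__(self, in_channels=1):           # single-channel input is an assumption
        super().__init__()
        self.net = nn.Sequential(conv_block(in_channels, 64), conv_block(64, 64),
                                 conv_block(64, 64, pool=False),
                                 conv_block(64, 64, pool=False))

    def forward(self, x):
        return self.net(x)

class SimilarNetwork(nn.Module):
    """h_rho: one convolution module followed by two fully connected layers,
    mapping each support set-query set pair to a relation score."""
    def __init__(self):
        super().__init__()
        self.conv = conv_block(128, 64)           # concatenated pairs carry 2 x 64 channels
        self.fc1 = nn.LazyLinear(8)               # hidden size 8 is an assumption
        self.fc2 = nn.Linear(8, 1)

    def forward(self, pair):
        x = self.conv(pair).flatten(1)
        # sigmoid keeps the score in [0, 1]; this choice is an assumption
        return torch.sigmoid(self.fc2(torch.relu(self.fc1(x)))).squeeze(-1)
```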
2.4 triple loss relationship network model loss function
The loss function of the model consists of two parts. The first part is a loss function constructed with the minimum mean square error (MSE) criterion, as shown in formula (6); formula (6) measures the degree of similarity between samples through the quantitative relationship between the relation score and the label indicator function:

$$L_{MSE} = \sum_{i=1}^{p \times k} \sum_{j=1}^{p \times n} \big( r_{i,j} - \mathbf{1}(y_i = y_j) \big)^2 \tag{6}$$

where $\mathbf{1}(\cdot)$ is the label indicator function, $r_{i,j}$ represents the relation score between support set sample i and query set sample j, and $y_i$, $y_j$ represent the labels of the support set sample and the query set sample, respectively.

The second part is the triplet loss function, whose expression is given in formula (7):

$$L_{triplet} = \frac{1}{N} \sum_{i=1}^{N} \ln\!\big( 1 + e^{\,d_p(i) - d_n(i)} \big) \tag{7}$$

where N is the total number of triplets in one small sample learning task, $d_p(i)$ represents the Euclidean distance between $A_i$ and $P_i$, and $d_n(i)$ represents the Euclidean distance between $A_i$ and $N_i$ ($A_i$ is the anchor sample of the triplet, $P_i$ the positive sample corresponding to the anchor sample, and $N_i$ the negative sample corresponding to the anchor sample).

The overall loss function of the model, formula (8), combines formula (6) and formula (7) and is minimized with respect to the weights $\varphi$ of the feature extraction network and the weight ρ of the similar network.
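Under the assumptions of the earlier sketches (PyTorch, with feature maps and labels held as tensors), the two loss terms and their combination could be written as below. Reading formula (8) as a plain, unweighted sum is an interpretation, since the text does not state a weighting coefficient; the triplets argument can come from the build_triplets helper sketched in Section 2.2:

```python
import torch

def relation_mse_loss(scores, support_labels, query_labels):
    """Formula (6): squared error between relation scores r_{i,j} and the label
    indicator 1(y_i == y_j). scores: (num_support, num_query) tensor."""
    target = (support_labels.unsqueeze(1) == query_labels.unsqueeze(0)).float()
    return ((scores - target) ** 2).sum()

def soft_margin_triplet_loss(support_feat, query_feat, triplets):
    """Formula (7): mean of ln(1 + exp(d_p - d_n)) over the selected triplets,
    computed on flattened feature maps output by f_phi."""
    s = support_feat.flatten(1)                   # flatten each feature map into a vector
    q = query_feat.flatten(1)
    a_idx = torch.tensor([t[0] for t in triplets])
    p_idx = torch.tensor([t[1] for t in triplets])
    n_idx = torch.tensor([t[2] for t in triplets])
    d_p = torch.norm(s[a_idx] - q[p_idx], dim=1)  # anchor-positive Euclidean distance
    d_n = torch.norm(s[a_idx] - q[n_idx], dim=1)  # anchor-negative Euclidean distance
    return torch.log1p(torch.exp(d_p - d_n)).mean()

def total_loss(scores, support_labels, query_labels,
               support_feat, query_feat, triplets):
    """Formula (8), read here as the sum of the MSE term and the triplet term."""
    return (relation_mse_loss(scores, support_labels, query_labels)
            + soft_margin_triplet_loss(support_feat, query_feat, triplets))
```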
Based on the above, as shown in fig. 4, the present embodiment provides a method for learning and identifying individual small samples of a radar radiation source, including the following steps:
s1, acquiring a data set to be identified, and dividing the data set to be identified into a support set and a query set;
s2, inputting the support set and the query set into the trained small sample learning model to obtain a relation score vector of the support set sample and the query set sample; and the small sample learning model is trained by adopting a triple loss relation function.
And S3, carrying out data classification on the data set to be recognized according to the obtained relation score vector.
Referring to fig. 5, a small sample learning model is trained by:
a1, acquiring a training data set, and dividing the training data set into a support set and a query set;
a2, inputting the support set and the query set into a feature extraction network, and outputting a support set feature graph and a query set feature graph;
a3, connecting the support set feature graph and the query set feature graph on the channel dimension to obtain a support set-query set sample pair;
a4, inputting the support set-query set sample pair into a similar network to obtain a relationship score;
and A5, calculating a triple loss relation function according to the relation scores, and training the small sample learning model according to the triple loss relation function.
Specifically, offline training of the model can be performed through existing measured data and simulation data, and in an actual working scene, a small amount of collected data can be used as a sample for online training for identification.
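A compact PyTorch sketch of one training round, following steps A1-A5 above and reusing the FeatureExtractor, SimilarNetwork, build_triplets and total_loss sketches from earlier sections, is given below. The pair-building logic (expanding and concatenating feature maps) is an implementation assumption:

```python
import torch

def train_episode(f_phi, h_rho, optimizer, episode):
    """One training round following steps A1-A5."""
    sx, sy, qx, qy = episode                          # A1: support / query tensors and labels
    s_feat = f_phi(sx)                                # A2: support set feature maps
    q_feat = f_phi(qx)                                #     query set feature maps
    ns, nq = s_feat.size(0), q_feat.size(0)
    pairs = torch.cat([s_feat.unsqueeze(1).expand(-1, nq, -1, -1, -1),
                       q_feat.unsqueeze(0).expand(ns, -1, -1, -1, -1)],
                      dim=2)                          # A3: concatenate in the channel dimension
    scores = h_rho(pairs.flatten(0, 1)).view(ns, nq)  # A4: relation scores r_{i,j}
    triplets = build_triplets(sy.tolist(), qy.tolist())
    loss = total_loss(scores, sy, qy, s_feat, q_feat, triplets)   # A5: combined loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here f_phi and h_rho would be instances of the sketched networks, and optimizer could be torch.optim.Adam over the parameters of both networks with a learning rate of 0.001, matching the settings in Section 3.2.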
As an alternative embodiment, in practical application the support set consists of a small amount of known, labeled data and the query set consists of unknown data. The relation score vector has dimension P × N, where P is the number of support set data categories and N = S × Q, with S the number of query set data categories (S = P) and Q the number of samples contained in each query set data category. The data categories are obtained from the relation score vector as follows: for each of the N query samples, the maximum is taken over the first dimension (of size P), yielding a 1 × N vector; the index at which the maximum is attained is the predicted data category of that query sample.
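As a small illustration of this decision rule, the PyTorch sketch below averages the k relation scores of each support category (an assumed aggregation for the k-shot case) and takes the arg-max over the P categories for every query sample:

```python
def classify_queries(scores, p, k):
    """scores: (p * k, num_query) relation score tensor. Average the k per-class
    scores (an assumed aggregation) and return the arg-max class for each query."""
    per_class = scores.view(p, k, -1).mean(dim=1)     # shape (p, num_query)
    return per_class.argmax(dim=0)                    # predicted category index per query
```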
The above method is explained below with reference to experimental data.
3.1, data set
Referring to fig. 6, the present invention is validated on simulated radar signal data sets. Radar signal data set I uses linear frequency modulation as the signal modulation mode, and different individual radar radiation source signals are generated through different combinations of parameters and fingerprint features, giving 50 classes in total. The signal parameters include carrier frequency and bandwidth. Fingerprint features are modeled with a phase noise model, a Taylor series model and a Saleh model, and different fingerprint features are generated by varying the parameters of these models. The sampling rate of the signal is 512 MHz and the pulse width is 10 μs. Each of the 50 signal classes contains 50 samples, to which white Gaussian noise with a signal-to-noise ratio from -12 dB to -7 dB in steps of 1 dB is added. After the signal samples are generated, each signal is transformed into a time-frequency image by the Choi-Williams distribution (CWD), and the image is cropped to 84 × 84. These time-frequency images serve as the samples for model training and testing. The data set is divided into a training set, a test set and a validation set: 32 signal classes form the training set, 10 classes the test set and 8 classes the validation set. To further validate the generalization ability of the model to new signal classes, we created data set II. The modulation modes of data set II are BPSK, QPSK and P1; each modulation mode generates 16 signal classes through different combinations of parameters and fingerprint features, for a total of 48 classes. The fingerprint modeling and other settings of this data set are the same as those of data set I.
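For context, the sketch below generates one noisy LFM pulse at a chosen signal-to-noise ratio and converts it to a time-frequency image. A short-time Fourier transform spectrogram is used here as a stand-in for the Choi-Williams distribution; the sampling rate and pulse width follow the text, while the carrier, bandwidth and spectrogram parameters are illustrative, not the values used to build data set I:

```python
import numpy as np
from scipy.signal import spectrogram

def lfm_pulse_with_noise(fc=100e6, bw=40e6, fs=512e6, width=10e-6, snr_db=-10, rng=None):
    """Generate one LFM pulse (carrier fc, bandwidth bw) and add white Gaussian
    noise at the requested SNR in dB."""
    rng = rng or np.random.default_rng()
    t = np.arange(0, width, 1 / fs)
    signal = np.cos(2 * np.pi * (fc * t + 0.5 * (bw / width) * t ** 2))
    noise_power = signal.var() / (10 ** (snr_db / 10))
    noise = rng.normal(scale=np.sqrt(noise_power), size=t.size)
    return signal + noise, fs

def time_frequency_image(x, fs, size=84):
    """Spectrogram stand-in for the CWD transform, coarsely subsampled to size x size."""
    _, _, sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
    rows = np.linspace(0, sxx.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, sxx.shape[1] - 1, size).astype(int)
    return np.log1p(sxx[np.ix_(rows, cols)])
```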
3.2 Experimental settings
The networks are trained with the Adam optimizer at a learning rate of 0.001; training runs for 50000 rounds, with a validation interval of 500 rounds. Each test stage contains 500 test rounds, and the final accuracy is the average accuracy over 5 test stages. The initial parameters of the model are set by random initialization. Following the standard setting adopted in most small sample learning work, the task used in the experiments is 5-way-5-shot. The number of query set samples used for this task is 10 during training and 15 during testing and validation.
3.3, results of the experiment
The recognition accuracy of the 5-way-5-shot task is shown in fig. 7, where fig. 7 (a) shows the recognition accuracy on data set I and fig. 7 (b) that on data set II. As can be seen from fig. 7 (a), the recognition accuracy of TLRN on data set I differs only slightly from that of RN, being slightly lower at three signal-to-noise ratio points; at all SNR points the recognition accuracy of the model exceeds 95%. As can be seen from fig. 7 (b), the recognition accuracy of TLRN on data set II is significantly improved compared with RN, with an average improvement of 4.78%; the largest improvement, 5.45%, occurs at -10 dB. The recognition accuracy of the model exceeds 80% at -8 dB. The experimental results show that the triplet loss function can enhance the training of difficult samples and improve the performance of the model.
In summary, compared with the prior art, the method of the embodiment has the following advantages and beneficial effects:
(1) The invention provides a small sample learning method based on a triple loss relation network, and radar radiation source individual identification is realized based on the network. By improving the loss function of the relational network, the identification capability of the difficult samples is enhanced, and the identification accuracy is improved. Experimental results show that the network structure provided by the invention has stronger generalization capability, and the signal identification rate on new data categories is obviously improved compared with the original network structure (relational network).
(2) Compared with the traditional expert feature extraction method, the invention removes the feature design process (owing to the complexity and quality problems of the data and the lack of measured data, some expert features generalize and adapt poorly), which improves the adaptability and convenience of the algorithm. Moreover, traditional expert feature extraction is usually paired with a machine learning classifier that takes manually extracted features as input, and these features generally require dimensionality reduction, redundancy removal and other processing, which is cumbersome. The small sample learning method does not require a large amount of data, which addresses the lack of measured data for individual radiation source signals. At the same time, the small sample learning method has better generalization capability and can quickly transfer a trained model to new classes.
This embodiment still provides a radar radiation source individual small sample learning identification system, includes:
the data acquisition module is used for acquiring a data set to be identified and dividing the data set to be identified into a support set and a query set;
the vector calculation module is used for inputting the support set and the query set into the trained small sample learning model to obtain a relation score vector;
the data classification module is used for performing data classification on the data set to be identified according to the obtained relation score vector;
and the small sample learning model is trained by adopting a triple loss relation function.
The system for learning and identifying the individual small sample of the radar radiation source can execute the method for learning and identifying the individual small sample of the radar radiation source provided by the embodiment of the method, can execute any combination implementation steps of the embodiment of the method, and has corresponding functions and beneficial effects of the method.
This embodiment still provides a radar radiation source individual small sample learning recognition device, includes:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method illustrated in fig. 4.
The device for learning and identifying the individual small samples of the radar radiation source can execute the method for learning and identifying the individual small samples of the radar radiation source provided by the embodiment of the method, can execute any combination of the implementation steps of the embodiment of the method, and has corresponding functions and beneficial effects of the method.
The embodiment of the invention also discloses a computer program product or a computer program, which comprises computer instructions, and the computer instructions are stored in a computer readable storage medium. The computer instructions may be read by a processor of a computer device from a computer-readable storage medium, and executed by the processor to cause the computer device to perform the method illustrated in fig. 4.
The embodiment also provides a storage medium, which stores an instruction or a program capable of executing the method for learning and identifying the individual small sample of the radar radiation source provided by the embodiment of the invention, and when the instruction or the program is run, the steps can be implemented by any combination of the embodiments of the method, and the method has corresponding functions and beneficial effects.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flow charts of the present invention are provided by way of example in order to provide a more comprehensive understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the described functions and/or features may be integrated in a single physical device and/or software module, or one or more functions and/or features may be implemented in a separate physical device or software module. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer given the nature, function, and interrelationships of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the invention, which is defined by the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the foregoing description of the specification, reference to the description of "one embodiment/example," "another embodiment/example," or "certain embodiments/examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A learning and identifying method for small samples of radar radiation sources is characterized by comprising the following steps:
acquiring a data set to be identified, and dividing the data set to be identified into a support set and a query set;
inputting the support set and the query set into a trained small sample learning model to obtain a relation score vector of the support set sample and the query set sample;
carrying out data classification on the data set to be identified according to the obtained relation score vector;
and the small sample learning model is trained by adopting a triple loss relation function.
2. The method for learning and identifying the individual small samples of the radar radiation source according to claim 1, wherein the small sample learning model is trained by:
acquiring a training data set, and dividing the training data set into a support set and a query set;
inputting the support set and the query set into a feature extraction network, and outputting a support set feature graph and a query set feature graph;
connecting the support set characteristic diagram and the query set characteristic diagram on the channel dimension to obtain a support set-query set sample pair;
inputting the support set-query set sample pairs into a similar network to obtain a relationship score;
and calculating the triple loss relation function according to the relation scores, and training the small sample learning model according to the triple loss relation function.
3. The method for learning and identifying the individual small sample of the radar radiation source according to claim 2, wherein the step of inputting the support set and the query set into a feature extraction network and outputting a support set feature map and a query set feature map comprises the steps of:
inputting a support set sample $x_i$ into the feature extraction network to obtain a support set feature map $f_\varphi(x_i)$;

inputting a query set sample $x_j$ into the feature extraction network to obtain a query set feature map $f_\varphi(x_j)$;

wherein $f(\cdot)$ denotes the feature extraction network and $\varphi$ denotes the weights of the feature extraction network;

the obtaining a support set-query set sample pair by connecting the support set feature map and the query set feature map in the channel dimension comprises:

concatenating the support set feature map $f_\varphi(x_i)$ and the query set feature map $f_\varphi(x_j)$ in the channel dimension to obtain the support set-query set sample pair $C(f_\varphi(x_i), f_\varphi(x_j))$;

the relation score is calculated by the following formula:

$$r_{i,j} = h_\rho\big(C(f_\varphi(x_i), f_\varphi(x_j))\big)$$

wherein ρ is the weight of the similar network, and h(·) represents the similar network.
4. The method for learning and identifying the small samples of the radar radiation source individuals according to claim 2, wherein the triple loss relation function comprises a first loss function and a second loss function;
the first loss function is constructed with the minimum mean square error criterion and measures the degree of similarity between samples through the quantitative relationship between the relation score and the label indicator function;
the second loss function is a triplet loss function.
5. The method for learning and identifying the individual small samples of the radar radiation source according to claim 4, wherein the expression of the first loss function is as follows:
$$L_{MSE} = \sum_{i=1}^{p \times k} \sum_{j=1}^{p \times n} \big( r_{i,j} - \mathbf{1}(y_i = y_j) \big)^2$$

wherein $\mathbf{1}(\cdot)$ is the label indicator function, $r_{i,j}$ represents the relation score between support set sample i and query set sample j, $y_i$, $y_j$ represent the labels of the support set sample and the query set sample, respectively, p × k represents the total number of support set samples, and p × n represents the total number of query set samples;

the expression of the second loss function is as follows:

$$L_{triplet} = \frac{1}{N} \sum_{i=1}^{N} \ln\!\big( 1 + e^{\,d_p(i) - d_n(i)} \big)$$

wherein N is the total number of triplets in one small sample learning task, $d_p(i)$ represents the Euclidean distance between $A_i$ and $P_i$, and $d_n(i)$ represents the Euclidean distance between $A_i$ and $N_i$; $A_i$ is the anchor sample of the triplet and corresponds to a sample in the support set; $P_i$ is the positive sample corresponding to the anchor sample and corresponds to a sample in the query set of the same category as the anchor sample; $N_i$ is the negative sample corresponding to the anchor sample and corresponds to a sample in the query set of a different category from the anchor sample.
6. The method of claim 2, wherein the feature extraction network comprises four convolution modules, the first two convolution modules comprise a convolution layer, a batch normalization layer, an activation function, and a max pooling layer, and the last two convolution modules comprise a convolution layer, a batch normalization layer, and an activation function.
7. The method of claim 2, wherein the similarity network comprises a convolution module and two fully-connected layers, wherein the convolution module comprises a convolution layer, a batch normalization layer, an activation function, and a max-pooling layer.
8. A system for learning and identifying an individual small sample of a radar radiation source comprises:
the data acquisition module is used for acquiring a data set to be identified and dividing the data set to be identified into a support set and a query set;
the vector calculation module is used for inputting the support set and the query set into the trained small sample learning model to obtain a relation score vector;
the data classification module is used for performing data classification on the data set to be identified according to the obtained relation score vector;
and the small sample learning model is trained by adopting a triple loss relation function.
9. The utility model provides a radar radiation source individual small sample learning recognition device which characterized in that includes:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method of any one of claims 1-7.
10. A computer-readable storage medium, in which a program executable by a processor is stored, wherein the program executable by the processor is adapted to perform the method according to any one of claims 1 to 7 when executed by the processor.
CN202210885334.6A 2022-07-26 2022-07-26 Radar radiation source individual small sample learning and identifying method, system, device and medium Pending CN115270872A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210885334.6A CN115270872A (en) 2022-07-26 2022-07-26 Radar radiation source individual small sample learning and identifying method, system, device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210885334.6A CN115270872A (en) 2022-07-26 2022-07-26 Radar radiation source individual small sample learning and identifying method, system, device and medium

Publications (1)

Publication Number Publication Date
CN115270872A true CN115270872A (en) 2022-11-01

Family

ID=83768913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210885334.6A Pending CN115270872A (en) 2022-07-26 2022-07-26 Radar radiation source individual small sample learning and identifying method, system, device and medium

Country Status (1)

Country Link
CN (1) CN115270872A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091867A (en) * 2023-01-12 2023-05-09 北京邮电大学 Model training and image recognition method, device, equipment and storage medium
CN116091867B (en) * 2023-01-12 2023-09-29 北京邮电大学 Model training and image recognition method, device, equipment and storage medium
CN116127298A (en) * 2023-02-22 2023-05-16 北京邮电大学 Small sample radio frequency fingerprint identification method based on triplet loss
CN116127298B (en) * 2023-02-22 2024-03-19 北京邮电大学 Small sample radio frequency fingerprint identification method based on triplet loss
CN116401588A (en) * 2023-06-08 2023-07-07 西南交通大学 Radiation source individual analysis method and device based on deep network
CN116401588B (en) * 2023-06-08 2023-08-15 西南交通大学 Radiation source individual analysis method and device based on deep network
CN116842457A (en) * 2023-07-17 2023-10-03 中国船舶集团有限公司第七二三研究所 Long-short-term memory network-based radar radiation source individual identification method
CN118152762A (en) * 2024-05-11 2024-06-07 之江实验室 Neutral hydrogen source identification and segmentation method, device and medium based on deep learning

Similar Documents

Publication Publication Date Title
CN115270872A (en) Radar radiation source individual small sample learning and identifying method, system, device and medium
CN111369563B (en) Semantic segmentation method based on pyramid void convolutional network
CN109685135B (en) Few-sample image classification method based on improved metric learning
CN111126386B (en) Sequence domain adaptation method based on countermeasure learning in scene text recognition
CN110046671A (en) A kind of file classification method based on capsule network
CN107527337A (en) A kind of object video based on deep learning removes altering detecting method
CN114564982B (en) Automatic identification method for radar signal modulation type
CN104463101A (en) Answer recognition method and system for textual test question
CN109993236A (en) Few sample language of the Manchus matching process based on one-shot Siamese convolutional neural networks
CN113488060B (en) Voiceprint recognition method and system based on variation information bottleneck
CN110532932A (en) A kind of multi -components radar emitter signal intra-pulse modulation mode recognition methods
CN110417694A (en) A kind of modulation mode of communication signal recognition methods
CN111126361A (en) SAR target identification method based on semi-supervised learning and feature constraint
CN113780242A (en) Cross-scene underwater sound target classification method based on model transfer learning
CN111564179A (en) Species biology classification method and system based on triple neural network
CN111259917A (en) Image feature extraction method based on local neighbor component analysis
CN113628297A (en) COVID-19 deep learning diagnosis system based on attention mechanism and transfer learning
CN105335689A (en) Character recognition method and apparatus
CN114980122A (en) Small sample radio frequency fingerprint intelligent identification system and method
CN116578945A (en) Multi-source data fusion method based on aircraft, electronic equipment and storage medium
CN115165366A (en) Variable working condition fault diagnosis method and system for rotary machine
CN116738330A (en) Semi-supervision domain self-adaptive electroencephalogram signal classification method
CN115908142A (en) Contact net tiny part damage testing method based on visual recognition
CN113516097B (en) Plant leaf disease identification method based on improved EfficentNet-V2
CN108694375B (en) Imaging white spirit identification method applicable to multi-electronic nose platform

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240524

Address after: 518107 Sun Yat sen University Shenzhen Campus, Guangming District, Shenzhen, Guangdong Province

Applicant after: Sun Yat-sen University Shenzhen Campus

Country or region after: China

Applicant after: SUN YAT-SEN University

Address before: 510275 No. 135 West Xingang Road, Guangzhou, Guangdong, Haizhuqu District

Applicant before: SUN YAT-SEN University

Country or region before: China

TA01 Transfer of patent application right