CN113177558A - Radiation source individual identification method based on feature fusion of small samples

Radiation source individual identification method based on feature fusion of small samples

Info

Publication number
CN113177558A
CN113177558A (application CN202110395018.6A)
Authority
CN
China
Prior art keywords
time
radiation source
frequency
frequency analysis
matrix
Prior art date
Legal status
Granted
Application number
CN202110395018.6A
Other languages
Chinese (zh)
Other versions
CN113177558B (en)
Inventor
蒋季宏
林静然
杨健
张伟
邵怀宗
潘晔
利强
胡全
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202110395018.6A
Publication of CN113177558A
Application granted
Publication of CN113177558B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06F 17/148: Wavelet transforms
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/253: Fusion techniques of extracted features
    • G06N 20/00: Machine learning
    • G06N 3/04: Neural networks; Architecture, e.g. interconnection topology
    • G06N 3/08: Neural networks; Learning methods


Abstract

The invention discloses a radiation source individual recognition method based on small-sample feature fusion, which belongs to the technical field of data processing. Individual signal time-frequency features extracted with different time-frequency analysis methods serve as the sources for feature fusion; these features undergo fusion preprocessing based on the single-domain pre-training accuracy and a random Gaussian measurement matrix, the preprocessed features are spliced on different channels, and the result is input to a neural network to complete the radiation source individual recognition task.

Description

Radiation source individual identification method based on feature fusion of small samples
Technical Field
The invention relates to the technical field of data processing, in particular to a radiation source individual identification method based on small sample feature fusion.
Background
A radiation source individual identification system can be roughly divided into four stages: data acquisition, data preprocessing, feature extraction (or feature dimensionality reduction and selection), and classification and identification.
Data acquisition and data preprocessing methods are mature and differ little from one another;
feature extraction methods are numerous and fall roughly into three directions: feature extraction based on signal parameters, feature extraction based on signal-processing methods, and feature extraction based on the inherent nonlinearity of the individual radiation source. Feature extraction based on signal parameters mainly relies on parameter statistics in the time and frequency domains; it is chiefly used to identify radiation source models and operating states and has difficulty completing individual identification of a radiation source. Feature extraction with signal-processing methods is mainly feature extraction in the various transform domains, but features extracted from a single domain are often not rich enough and are highly task-specific: such a method may extract good features only from certain radiation source data sets, so its range of application is often limited. Feature extraction based on inherent nonlinearity usually starts from the nonlinear characteristics of devices inside the radiation source, such as filters, power amplifiers and digital-to-analog converters, and from the nonlinearity of the transmitter system; such methods are complex and also have a limited range of application.
In recent years, machine learning has gradually been introduced into the field of radiation source classification and identification because of its powerful data-analysis and self-learning capabilities. Existing machine-learning-based radiation source classification and identification methods can be roughly divided into three directions: cluster analysis, statistical learning and neural networks. Cluster analysis is a data-mining process that divides data samples into several different classes according to data similarity; radiation source identification based on statistical learning aims to build a classification model from extracted radiation source feature parameters; radiation source identification based on neural networks mainly uses their strong self-learning and adaptive capabilities to complete feature extraction and classification. Because the input of a neural network consists of features extracted from the radiation source signals, i.e. the data samples, its classification performance often depends on the quality of the samples, and machine learning itself requires a large number of training samples.
Disclosure of Invention
Aiming at the defects of the prior art, the radiation source individual identification method based on small-sample feature fusion provided by the invention solves the following two problems:
1. It addresses the difficult training, low recognition rate and unreliable identification that machine learning suffers in small-sample scenarios, where samples are scarce and features insufficient.
2. It improves the accuracy of radiation source individual identification under small-sample conditions, alleviates severe network overfitting, and improves the performance of the neural network.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that: a radiation source individual identification method based on feature fusion of small samples comprises the following steps:
s1, preprocessing the radiation source data of the small sample to obtain initial sample data;
s2, performing time-frequency processing on the initial sample data by adopting different time-frequency analysis methods to obtain four-dimensional time-frequency matrixes of different time-frequency analysis methods;
s3, making a four-dimensional time-frequency matrix of different time-frequency analysis methods into a pre-training data set under the time-frequency analysis methods, and pre-training the radiation source individual recognition model by adopting the pre-training data set under each time-frequency analysis method to obtain the accuracy of a test set when a network converges under the different time-frequency analysis methods;
s4, according to the accuracy of the test set during network convergence under different time frequency analysis methods, eliminating the time frequency analysis method with low accuracy to obtain the time frequency analysis method with high accuracy;
s5, performing feature splicing and fusion on the four-dimensional time-frequency matrix of the high-accuracy time-frequency analysis method, and constructing a sample feature matrix based on the feature splicing and fusion;
s6, constructing a sample feature matrix based on feature splicing fusion into a feature fusion data set under a corresponding time-frequency analysis method, training a radiation source individual recognition model by adopting the feature fusion data set, and adjusting the hyper-parameters of the radiation source individual recognition model to obtain a radiation source individual recognition model after training;
and S7, inputting the radiation source data into the trained radiation source individual recognition model to obtain the recognized radiation source individual.
Further, the preprocessing in step S1 includes: filtering, energy detection and normalization.
Further, step S1 includes the following substeps:
s11, filtering time domain intermediate frequency data of the radiation source data of the small sample;
s12, performing energy detection on the filtered radiation source data, and extracting signal intermediate frequency data;
s13, carrying out normalization processing on the intermediate frequency data of the signals to obtain normalization sample data with the value range of [ -1, 1 ];
the formula of the normalization process is:
x* = 2(x - x_min) / (x_max - x_min) - 1
wherein x* is the normalized sample data, x is the signal intermediate-frequency data, x_max is the maximum value of the signal intermediate-frequency data, and x_min is the minimum value of the signal intermediate-frequency data;
and S14, slicing the normalized sample data according to the same time domain length to obtain initial sample data.
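For concreteness, a minimal Python sketch of sub-steps S13 and S14 follows (normalization to [-1, 1] and slicing into equal-length segments). Filtering and energy detection (S11, S12) are assumed to have been applied already; the 1024-point slice length and all names are illustrative, not taken from the patent.

```python
# Sketch of S13/S14: min-max normalization to [-1, 1] and fixed-length slicing.
import numpy as np

def normalize_to_unit_range(x):
    """Min-max normalize a 1-D signal to the range [-1, 1]."""
    x_min, x_max = x.min(), x.max()
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

def slice_signal(x, slice_len):
    """Cut a 1-D signal into non-overlapping slices of equal time-domain length."""
    n_slices = len(x) // slice_len
    return x[: n_slices * slice_len].reshape(n_slices, slice_len)

if_data = np.random.randn(100_000)                 # stands in for extracted intermediate-frequency data
samples = slice_signal(normalize_to_unit_range(if_data), 1024)
print(samples.shape)                               # (97, 1024): the initial sample data
```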
Further, in step S2, the time-frequency analysis methods include: the short-time Fourier transform STFT, the Hilbert-Huang transform HHT, the Wigner-Ville distribution WVD and the continuous wavelet transform CWT;
in step S2, the first dimension of the four-dimensional time-frequency matrix represents the number of samples of the initial sample data, the second dimension represents the length of the time-frequency diagram, the third dimension represents the width of the time-frequency diagram, and the fourth dimension represents the number of channels;
step S2 includes the following substeps:
s21, performing time-frequency analysis on the initial sample data by adopting short-time Fourier transform (STFT) to obtain a first-class four-dimensional time-frequency matrix;
s22, performing time-frequency analysis on the initial sample data by using Hilbert-Huang transform (HHT) to obtain a second four-dimensional time-frequency matrix;
s23, performing time-frequency analysis on the initial sample data by adopting the Wigner-Ville distribution WVD to obtain a third four-dimensional time-frequency matrix;
s24, performing time-frequency analysis on the initial sample data by adopting continuous wavelet transform CWT to obtain a fourth class of four-dimensional time-frequency matrix.
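The following sketch shows how one branch of step S2 could be implemented in Python with the STFT; the HHT, WVD and CWT branches would produce maps of the same length and width in the same way. The 224 x 224 map size and the use of magnitude and phase as the two channels are assumptions taken for illustration (consistent with the worked example later in the description), not requirements of the method.

```python
# Hedged sketch of the STFT branch of step S2: per-sample time-frequency maps
# stacked into a 4-D matrix [num_samples, 224, 224, 2].
import numpy as np
from scipy.signal import stft

def stft_time_frequency_matrix(samples, fs=1.0, size=224):
    """Map [num_samples, slice_len] signals to [num_samples, size, size, 2]."""
    maps = []
    for x in samples:
        _, _, Z = stft(x, fs=fs, nperseg=2 * (size - 1))   # gives `size` frequency bins
        pad_t = max(0, size - Z.shape[1])                   # pad/crop the time axis to `size`
        Z = np.pad(Z, ((0, 0), (0, pad_t)))[:size, :size]
        maps.append(np.stack([np.abs(Z), np.angle(Z)], axis=-1))   # magnitude + phase channels
    return np.asarray(maps)
```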
Further, step S3 includes the following substeps:
s31, generating a one-dimensional label matrix corresponding to each radiation source by adopting one-hot coding;
s32, corresponding the one-dimensional label matrix and the four-dimensional time-frequency matrix, and performing random disorder processing to obtain a pre-training data set under each time-frequency analysis method;
s33, dividing the pre-training data set under each time-frequency analysis method into a training set, a verification set and a test set, and pre-training the radiation source individual recognition model by adopting the training set, the verification set and the test set to obtain the accuracy of the test set when the network converges under different time-frequency analysis methods.
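A minimal sketch of sub-steps S31 and S32 for one time-frequency analysis method is given below: one-hot label generation and random shuffling of the sample/label pairs. Shapes, names and the random seed are illustrative.

```python
# Sketch of S31/S32: one-hot labels per radiation source, then random disordering.
import numpy as np

def make_pretraining_dataset(tf_matrix, source_ids, num_sources, seed=0):
    """tf_matrix: [N, H, W, C] four-dimensional time-frequency matrix;
    source_ids: length-N integer label per sample (which radiation source it came from)."""
    labels = np.eye(num_sources)[source_ids]        # one-hot label matrix (S31)
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(tf_matrix))          # random disordering of the pairs (S32)
    return tf_matrix[order], labels[order]
```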
Further, step S5 includes the following substeps:
s51, constructing a random Gaussian measurement matrix of size M multiplied by N whose elements independently obey a Gaussian distribution with mean 0 and variance 1/M, wherein M is the second dimension of the four-dimensional time-frequency matrix and N is the third dimension of the four-dimensional time-frequency matrix;
s52, setting a penalty factor according to the accuracy of the test set during network convergence under the high-accuracy time-frequency analysis method;
s53, taking the second-dimension and third-dimension features of the four-dimensional time-frequency matrix of each high-accuracy time-frequency analysis method as a feature matrix, and fusing the feature matrix on the basis of the random Gaussian measurement matrix and the penalty factor to obtain a fusion matrix:
B_i = β_i A_i Φ
β_i = α_i / Σ_{j=1}^{L} α_j
wherein B_i is the fusion matrix corresponding to the i-th high-accuracy time-frequency analysis method, β_i is the penalty factor corresponding to the i-th high-accuracy time-frequency analysis method, A_i is the feature matrix corresponding to the i-th high-accuracy time-frequency analysis method, Φ is the random Gaussian measurement matrix, α_i is the accuracy corresponding to the i-th high-accuracy time-frequency analysis method, Σ_{j=1}^{L} α_j is the sum of these accuracies, and L is the number of high-accuracy time-frequency analysis methods;
and S54, splicing the fusion matrixes obtained by the high-accuracy time-frequency analysis method on different channels to obtain a sample characteristic matrix based on characteristic splicing fusion.
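A hedged Python sketch of sub-steps S51 to S54 follows: a random Gaussian measurement matrix, accuracy-derived penalty factors, per-channel fusion B_i = β_i A_i Φ, and splicing on the channel axis. The normalization β_i = α_i / Σ_j α_j follows the reconstruction of the penalty-factor formula above, and square time-frequency maps (M = N, as in the 224 x 224 example) are assumed; all names are illustrative.

```python
# Sketch of S51-S54: measurement matrix, penalty factors, fusion, channel splicing.
import numpy as np

def fuse_domains(feature_maps, accuracies, seed=0):
    """feature_maps: one [num_samples, M, N, C] array per retained transform domain;
    accuracies: the single-domain pre-training accuracies alpha_i."""
    M, N = feature_maps[0].shape[1], feature_maps[0].shape[2]
    assert M == N, "square time-frequency maps assumed in this sketch"
    rng = np.random.default_rng(seed)
    phi = rng.normal(0.0, np.sqrt(1.0 / M), size=(M, N))    # random Gaussian measurement matrix
    betas = np.asarray(accuracies) / np.sum(accuracies)      # penalty factors beta_i
    fused = [beta * np.einsum('nijc,jk->nikc', A, phi)       # B_i = beta_i * A_i * Phi, per channel
             for beta, A in zip(betas, feature_maps)]
    return np.concatenate(fused, axis=-1)                    # splice the B_i on different channels
```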
Further, in step S6, the radiation source individual identification model is ResNet50.
Further, the Loss function Loss used in training the ResNet50 is:
Loss = -(1/K) Σ_{i=1}^{K} y_i log(ŷ_i)
wherein K is the number of radiation source samples in the pre-training dataset or the feature fusion dataset, ŷ_i is the prediction for the i-th radiation source sample, and y_i is the true label of the i-th radiation source sample.
In conclusion, the beneficial effects of the invention are as follows:
1. the method of the invention provides a way of extracting individual radiation source signal features based on different time-frequency analysis methods (i.e. based on different transform domains). Because different time-frequency methods have different strengths and the signal features they extract are not exactly the same, extracting features of individual radiation source signals with several time-frequency analysis methods yields richer features while retaining the advantages of each method, forming high-quality radiation source fingerprint features;
2. the signal time-frequency features extracted from different transform domains undergo fusion preprocessing based on the single-domain pre-training accuracy and a random Gaussian measurement matrix, and the preprocessed features are then fused and applied to the radiation source individual identification task. The fusion preprocessing mainly consists of data processing with the single-domain pre-training accuracy and the random Gaussian measurement matrix: the random Gaussian measurement matrix effectively sparsifies the data while preserving the data reconstruction quality, which reduces the time cost of subsequent network operation; in addition, penalty factors set from the single-domain pre-training accuracy give larger weight to feature data that trains well in a single domain, reducing the negative influence on post-fusion network training of features produced by time-frequency analysis methods that are insensitive to the task data;
3. the amount of feature data after fusion is larger, which better satisfies the neural network's demand for data. Compared with using features extracted by a single time-frequency analysis method as the network input during training, using the features fused by this method as the network input markedly improves the accuracy of radiation source individual identification and reduces the number of iterations needed for the network to converge, thereby optimizing network performance.
4. For a radiation source identification task based on small samples, the classification performance of an ordinary neural network depends on a large number of high-quality training samples, so under small-sample conditions training is difficult, the recognition rate is low and a reliable identification task cannot be completed. The method effectively remedies the scarcity of features caused by the insufficient number of samples, markedly improves radiation source identification based on small samples, and, because the features are richer, effectively alleviates the severe overfitting that occurs when the network is trained in a single transform domain.
Drawings
FIG. 1 is a flow chart of a method for identifying individuals of radiation sources based on feature fusion of small samples;
FIG. 2 is a time domain waveform of an original signal of the radiation source 1;
FIG. 3 is a time domain waveform of an original signal of the radiation source 2;
fig. 4 is a time domain waveform of an original signal of the radiation source 3;
FIG. 5 is a Hilbert-Huang transform time-frequency diagram;
FIG. 6 is a Wigner-Ville distribution time-frequency diagram;
FIG. 7 is a time-frequency diagram of continuous wavelet transform;
FIG. 8 is a schematic view of feature fusion.
Detailed Description
The following embodiments are provided to help those skilled in the art understand the present invention, but it should be understood that the invention is not limited to the scope of these embodiments. For those of ordinary skill in the art, various changes may be made without departing from the spirit and scope of the invention as defined by the appended claims, and all inventions and creations that make use of the inventive concept fall under protection.
As shown in fig. 1, a method for identifying individuals of radiation sources based on feature fusion of small samples includes the following steps:
s1, preprocessing the radiation source data of the small sample to obtain initial sample data;
the preprocessing method in step S1 includes: filtering, energy detection and normalization.
Step S1 includes the following substeps:
s11, filtering time domain intermediate frequency data of the radiation source data of the small sample;
s12, performing energy detection on the filtered radiation source data, and extracting signal intermediate frequency data;
s13, carrying out normalization processing on the intermediate frequency data of the signals to obtain normalization sample data with the value range of [ -1, 1 ];
the formula of the normalization process is:
x* = 2(x - x_min) / (x_max - x_min) - 1
wherein x* is the normalized sample data, x is the signal intermediate-frequency data, x_max is the maximum value of the signal intermediate-frequency data, and x_min is the minimum value of the signal intermediate-frequency data;
and S14, slicing the normalized sample data according to the same time domain length to obtain initial sample data.
S2, performing time-frequency processing on the initial sample data by adopting different time-frequency analysis methods to obtain four-dimensional time-frequency matrixes of different time-frequency analysis methods;
the time-frequency analysis method in step S2 includes: short-time Fourier transform STFT, Hilbert-Huang transform HHT, Weigner-Weiley distribution WVD and continuous wavelet transform CWT;
in step S2, the first dimension of the four-dimensional time-frequency matrix represents the number of samples of the initial sample data, the second dimension represents the length of the time-domain graph, the third dimension represents the width of the time-domain graph, the fourth dimension represents the number of channels, the length of the time-frequency graph obtained by each transform domain needs to be the same, the width needs to be the same, and the number of channels can be different;
step S2 includes the following substeps:
s21, performing time-frequency analysis on the initial sample data by adopting short-time Fourier transform (STFT) to obtain a first-class four-dimensional time-frequency matrix;
s22, performing time-frequency analysis on the initial sample data by using Hilbert-Huang transform (HHT) to obtain a second four-dimensional time-frequency matrix;
s23, performing time-frequency analysis on the initial sample data by adopting the Wigner-Ville distribution WVD to obtain a third four-dimensional time-frequency matrix;
s24, performing time-frequency analysis on the initial sample data by adopting continuous wavelet transform CWT to obtain a fourth class of four-dimensional time-frequency matrix.
S3, making a four-dimensional time-frequency matrix of different time-frequency analysis methods into a pre-training data set under the time-frequency analysis methods, and pre-training the radiation source individual recognition model by adopting the pre-training data set under each time-frequency analysis method to obtain the accuracy of a test set when a network converges under the different time-frequency analysis methods;
step S3 includes the following substeps:
s31, generating a one-dimensional label matrix corresponding to each radiation source by adopting one-hot coding;
s32, corresponding the one-dimensional label matrix and the four-dimensional time-frequency matrix, and performing random disorder processing to obtain a pre-training data set under each time-frequency analysis method;
s33, dividing the pre-training data set under each time-frequency analysis method into a training set, a verification set and a test set according to the ratio of 6:2:2, and pre-training the radiation source individual recognition model by adopting the training set, the verification set and the test set to obtain the accuracy of the test set when the network converges under different time-frequency analysis methods.
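A minimal sketch of the 6:2:2 split in sub-step S33 is given below, assuming the per-domain dataset has already been shuffled. The commented record_test_accuracy() call is a hypothetical placeholder for pre-training the recognition model to convergence and reading off the test-set accuracy used in step S4.

```python
# Sketch of S33: split one domain's pre-training dataset 6:2:2.
import numpy as np

def split_6_2_2(features, labels):
    n = len(features)
    i_train, i_val = int(0.6 * n), int(0.8 * n)
    return ((features[:i_train], labels[:i_train]),              # training set
            (features[i_train:i_val], labels[i_train:i_val]),    # validation set
            (features[i_val:], labels[i_val:]))                  # test set

# accuracies = {name: record_test_accuracy(*split_6_2_2(X[name], y)) for name in domains}
# Low-accuracy domains are then discarded in step S4; the rest are fused in step S5.
```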
S4, according to the accuracy of the test set during network convergence under different time frequency analysis methods, eliminating the time frequency analysis method with low accuracy to obtain the time frequency analysis method with high accuracy;
s5, performing feature splicing and fusion on the four-dimensional time-frequency matrix of the high-accuracy time-frequency analysis method, and constructing a sample feature matrix based on the feature splicing and fusion;
step S5 includes the following substeps:
s51, constructing a random Gaussian measurement matrix of size M multiplied by N whose elements independently obey a Gaussian distribution with mean 0 and variance 1/M, wherein M is the second dimension of the four-dimensional time-frequency matrix and N is the third dimension of the four-dimensional time-frequency matrix;
s52, setting a penalty factor according to the accuracy of the test set during network convergence under the high-accuracy time-frequency analysis method;
s53, taking the second-dimension and third-dimension features of the four-dimensional time-frequency matrix of each high-accuracy time-frequency analysis method as a feature matrix, and fusing the feature matrix on the basis of the random Gaussian measurement matrix and the penalty factor to obtain a fusion matrix:
B_i = β_i A_i Φ
β_i = α_i / Σ_{j=1}^{L} α_j
wherein B_i is the fusion matrix corresponding to the i-th high-accuracy time-frequency analysis method, β_i is the penalty factor corresponding to the i-th high-accuracy time-frequency analysis method, A_i is the feature matrix corresponding to the i-th high-accuracy time-frequency analysis method, Φ is the random Gaussian measurement matrix, α_i is the accuracy corresponding to the i-th high-accuracy time-frequency analysis method, Σ_{j=1}^{L} α_j is the sum of these accuracies, and L is the number of high-accuracy time-frequency analysis methods;
and S54, splicing the fusion matrixes obtained by the high-accuracy time-frequency analysis method on different channels to obtain a sample characteristic matrix based on characteristic splicing fusion.
S6, constructing the sample feature matrices based on feature splicing and fusion into a feature fusion data set under the corresponding time-frequency analysis methods, training the radiation source individual recognition model with the feature fusion data set, and adjusting the hyper-parameters of the model to obtain the trained radiation source individual recognition model, i.e. the model saved after network convergence;
in step S6, the feature fusion data set is constructed from the sample feature matrices in the same way as the pre-training data sets were made from the four-dimensional time-frequency matrices of the different time-frequency analysis methods in step S3; the feature fusion data set is then divided into a training set, a validation set and a test set, which are used to train the radiation source individual recognition model.
The radiation source individual identification model in step S6 is ResNet50.
And S7, inputting the radiation source data into the trained radiation source individual recognition model to obtain the recognized radiation source individual.
The Loss function Loss used in training ResNet50 is:
Loss = -(1/K) Σ_{i=1}^{K} y_i log(ŷ_i)
wherein K is the number of radiation source samples in the pre-training dataset or the feature fusion dataset, ŷ_i is the prediction for the i-th radiation source sample, and y_i is the true label of the i-th radiation source sample; the Adam optimizer is used to find the optimal solution of the loss function Loss.
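As an illustration of this training setup, the following Keras sketch builds a ResNet50 with a 6-channel 224 x 224 input (matching the worked example below), compiles it with the Adam optimizer and categorical cross-entropy, and would be fit on the feature fusion data set; the learning rate, epochs and batch size are placeholders, not values specified by the patent.

```python
# Hedged sketch of the ResNet50 + Adam training setup on fused features.
import tensorflow as tf

model = tf.keras.applications.ResNet50(
    weights=None,                      # trained from scratch on the fused features
    input_shape=(224, 224, 6),         # spliced channels from three transform domains
    classes=3)                         # three radiation source individuals
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_features, train_labels, validation_data=(val_features, val_labels),
#           epochs=50, batch_size=16)
```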
Different time-frequency analysis methods have different characteristics. The STFT essentially performs a fast Fourier transform on a series of windowed data segments; its advantage is a clear physical meaning, and for many practical test signals it provides a time-frequency structure consistent with human visual perception, but its window width is fixed and cannot be adjusted adaptively. In addition, because the uncertainty principle constrains the time-frequency resolution of the window function (a short window gives high time resolution but poor frequency resolution, while a long window gives high frequency resolution but poor time resolution), in practice time resolution and frequency resolution are difficult to satisfy simultaneously, and the STFT cannot meet the time-frequency requirements of every type of signal; when the analyzed signal contains several components at widely different scales, the STFT has difficulty extracting high-quality features. The CWT provides a time-frequency window that varies with frequency; compared with the STFT it better resolves the contradiction between time resolution and frequency resolution and can extract the time-frequency localization characteristics of the signal, but it is highly redundant, and because the scale factor introduced by the CWT is not directly tied to frequency, frequency is not expressed explicitly in the wavelet transform. The WVD is a nonlinear time-frequency distribution that essentially distributes the signal energy over the time-frequency plane; time and frequency are not mutually constrained and the resolution is better, but when more than one signal component is present the WVD produces cross-term interference, which seriously affects the original signal and degrades the extracted features. The HHT is a newer nonlinear analysis method for sampled processes that can describe and simulate non-stationary processes: it decomposes the signal into a finite number of intrinsic mode functions (IMFs) through empirical mode decomposition (EMD) and then applies the Hilbert transform to each IMF or component to obtain a meaningful instantaneous frequency, giving an accurate expression of how the frequency varies. The HHT can therefore adaptively use the local information of the signal and reflect its non-stationary characteristics, but it lacks a strict physical and mathematical foundation.
A machine-learning-based radiation source individual identification task needs a sufficient number of samples containing as much information as possible; only then can the neural network discover the subtle differences between sources within the large set of features and complete the classification and identification task. Under small-sample conditions, however, the number of samples is insufficient, and it is difficult to train a good neural network model with features extracted from a single transform domain. The invention therefore uses feature fusion to splice the features extracted from several transform domains, enlarging the feature dimension of each sample and enriching its feature content, and finally feeds the fused features into the neural network. This remedies the insufficiency of sample features under small samples, makes full use of the advantages of each time-frequency analysis method, and improves the accuracy of radiation source individual identification.
The following takes a three-class radiation source individual identification task with small samples as an example. Time-domain waveforms of the original signals of three existing radiation source units of the same model are shown in Figs. 2 to 4.
One pulse of data produced by each individual radiation source is taken as a sample, with only 100 samples used for training and 100 samples for testing. Fusion of the features extracted by three time-frequency methods, the Hilbert-Huang transform, the Wigner-Ville distribution and the continuous wavelet transform, is taken as the example. First, time-frequency analysis is performed on each sample in the different transform domains; taking radiation source 1 as an example, the resulting time-frequency diagrams are shown in Figs. 5 to 7.
Each sample is required to be processed in each transform domain into a complex matrix of the same size (224 x 224 is used as the example), so that in one transform domain a single sample has dimensions [1, 224, 224, 2], where the first dimension is the number of samples, the second and third dimensions are the size of the time-frequency diagram, and the fourth dimension is the number of channels. Fusion preprocessing based on the single-domain pre-training accuracy and a random Gaussian measurement matrix is then applied to the time-frequency features of the second and third dimensions: for each channel of each sample, let the sample feature matrix (i.e. the feature matrix formed by the second and third dimensions of the four-dimensional time-frequency matrix) under a given time-frequency analysis method be A_i, let the accuracy recorded in single-domain pre-training be α_i, let the random Gaussian measurement matrix be Φ, and let the matrix after fusion preprocessing be B_i; then
B_i = β_i A_i Φ, with β_i = α_i / Σ_{j=1}^{L} α_j.
The matrices B_i obtained from the fusion preprocessing are then spliced on different channels (all transform domains may be used here, or only the transform domains with the best single-domain training results; the three transform domains listed above are taken as the example). After splicing and fusion the feature dimensions of one sample become [1, 224, 224, 6]; the feature fusion process is illustrated schematically in Fig. 8.
After all the sample data are fused (3 radiation sources, 100 training samples per source, 300 samples in total), the feature dimensions become [300, 224, 224, 6]; these are then input into the radiation source individual recognition model as the feature fusion data set for training.
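A short shape walk-through of this worked example is given below, with random arrays standing in for the real HHT/WVD/CWT features and illustrative single-domain accuracies; it only confirms how the [300, 224, 224, 6] feature fusion data set is assembled.

```python
# Shape walk-through: 3 sources, 100 samples each, 3 domains with 2 channels apiece.
import numpy as np

num_sources, per_source, size = 3, 100, 224
labels = np.repeat(np.arange(num_sources), per_source)                 # 300 integer labels
rng = np.random.default_rng(0)
phi = rng.normal(0.0, np.sqrt(1.0 / size), size=(size, size))          # random Gaussian measurement matrix
accuracies = np.array([0.71, 0.65, 0.69])                              # illustrative single-domain accuracies
betas = accuracies / accuracies.sum()                                  # penalty factors

fused_channels = []
for beta in betas:                                                      # one pass per transform domain
    A = rng.random((num_sources * per_source, size, size, 2))           # stands in for HHT / WVD / CWT features
    fused_channels.append(beta * np.einsum('nijc,jk->nikc', A, phi))    # B_i = beta_i * A_i * Phi
dataset = np.concatenate(fused_channels, axis=-1)
one_hot = np.eye(num_sources)[labels]
print(dataset.shape, one_hot.shape)                                     # (300, 224, 224, 6) (300, 3)
```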

Claims (8)

1. A radiation source individual identification method based on feature fusion of small samples is characterized by comprising the following steps:
s1, preprocessing the radiation source data of the small sample to obtain initial sample data;
s2, performing time-frequency processing on the initial sample data by adopting different time-frequency analysis methods to obtain four-dimensional time-frequency matrixes of different time-frequency analysis methods;
s3, making a four-dimensional time-frequency matrix of different time-frequency analysis methods into a pre-training data set under the time-frequency analysis methods, and pre-training the radiation source individual recognition model by adopting the pre-training data set under each time-frequency analysis method to obtain the accuracy of a test set when a network converges under the different time-frequency analysis methods;
s4, according to the accuracy of the test set during network convergence under different time frequency analysis methods, eliminating the time frequency analysis method with low accuracy to obtain the time frequency analysis method with high accuracy;
s5, performing feature splicing and fusion on the four-dimensional time-frequency matrix of the high-accuracy time-frequency analysis method, and constructing a sample feature matrix based on the feature splicing and fusion;
s6, constructing a sample feature matrix based on feature splicing fusion into a feature fusion data set under a corresponding time-frequency analysis method, training a radiation source individual recognition model by adopting the feature fusion data set, and adjusting the hyper-parameters of the radiation source individual recognition model to obtain a radiation source individual recognition model after training;
and S7, inputting the radiation source data into the trained radiation source individual recognition model to obtain the recognized radiation source individual.
2. The individual identification method for the radiation source based on the small sample feature fusion as claimed in claim 1, wherein the preprocessing in step S1 comprises: filtering, energy detection and normalization.
3. The individual identification method for the radiation source based on the small sample feature fusion as claimed in claim 1, wherein the step S1 comprises the following sub-steps:
s11, filtering time domain intermediate frequency data of the radiation source data of the small sample;
s12, performing energy detection on the filtered radiation source data, and extracting signal intermediate frequency data;
s13, carrying out normalization processing on the intermediate frequency data of the signals to obtain normalization sample data with the value range of [ -1, 1 ];
the formula of the normalization process is:
x* = 2(x - x_min) / (x_max - x_min) - 1
wherein x* is the normalized sample data, x is the signal intermediate-frequency data, x_max is the maximum value of the signal intermediate-frequency data, and x_min is the minimum value of the signal intermediate-frequency data;
and S14, slicing the normalized sample data according to the same time domain length to obtain initial sample data.
4. The individual identification method for radiation source based on small sample feature fusion as claimed in claim 1, wherein the time-frequency analysis methods in step S2 include: short-time Fourier transform STFT, Hilbert-Huang transform HHT, Wigner-Ville distribution WVD and continuous wavelet transform CWT;
in step S2, the first dimension of the four-dimensional time-frequency matrix represents the number of samples of the initial sample data, the second dimension represents the length of the time-frequency diagram, the third dimension represents the width of the time-frequency diagram, and the fourth dimension represents the number of channels;
step S2 includes the following substeps:
s21, performing time-frequency analysis on the initial sample data by adopting short-time Fourier transform (STFT) to obtain a first-class four-dimensional time-frequency matrix;
s22, performing time-frequency analysis on the initial sample data by using Hilbert-Huang transform (HHT) to obtain a second four-dimensional time-frequency matrix;
s23, performing time-frequency analysis on the initial sample data by adopting the Wigner-Ville distribution WVD to obtain a third four-dimensional time-frequency matrix;
s24, performing time-frequency analysis on the initial sample data by adopting continuous wavelet transform CWT to obtain a fourth class of four-dimensional time-frequency matrix.
5. The individual identification method for the radiation source based on the small sample feature fusion as claimed in claim 1, wherein the step S3 comprises the following sub-steps:
s31, generating a one-dimensional label matrix corresponding to each radiation source by adopting one-hot coding;
s32, corresponding the one-dimensional label matrix and the four-dimensional time-frequency matrix, and performing random disorder processing to obtain a pre-training data set under each time-frequency analysis method;
s33, dividing the pre-training data set under each time-frequency analysis method into a training set, a verification set and a test set, and pre-training the radiation source individual recognition model by adopting the training set, the verification set and the test set to obtain the accuracy of the test set when the network converges under different time-frequency analysis methods.
6. The individual identification method for the radiation source based on the small sample feature fusion as claimed in claim 1, wherein the step S5 comprises the following sub-steps:
s51, constructing a random Gaussian measurement matrix of size M multiplied by N whose elements independently obey a Gaussian distribution with mean 0 and variance 1/M, wherein M is the second dimension of the four-dimensional time-frequency matrix and N is the third dimension of the four-dimensional time-frequency matrix;
s52, setting a penalty factor according to the accuracy of the test set during network convergence under the high-accuracy time-frequency analysis method;
s53, taking the second-dimension and third-dimension features of the four-dimensional time-frequency matrix of each high-accuracy time-frequency analysis method as a feature matrix, and fusing the feature matrix on the basis of the random Gaussian measurement matrix and the penalty factor to obtain a fusion matrix:
B_i = β_i A_i Φ
β_i = α_i / Σ_{j=1}^{L} α_j
wherein B_i is the fusion matrix corresponding to the i-th high-accuracy time-frequency analysis method, β_i is the penalty factor corresponding to the i-th high-accuracy time-frequency analysis method, A_i is the feature matrix corresponding to the i-th high-accuracy time-frequency analysis method, Φ is the random Gaussian measurement matrix, α_i is the accuracy corresponding to the i-th high-accuracy time-frequency analysis method, Σ_{j=1}^{L} α_j is the sum of these accuracies, and L is the number of high-accuracy time-frequency analysis methods;
and S54, splicing the fusion matrixes obtained by the high-accuracy time-frequency analysis method on different channels to obtain a sample characteristic matrix based on characteristic splicing fusion.
7. The small sample-based feature fusion radiation source individual identification method according to claim 1, wherein the radiation source individual identification model in the step S6 is ResNet50.
8. The individual identification method for the radiation source based on the feature fusion of the small samples as claimed in claim 7, wherein the Loss function Loss adopted in the training of ResNet50 is:
Loss = -(1/K) Σ_{i=1}^{K} y_i log(ŷ_i)
wherein K is the number of radiation source samples in the pre-training dataset or the feature fusion dataset, ŷ_i is the prediction for the i-th radiation source sample, and y_i is the true label of the i-th radiation source sample.
CN202110395018.6A 2021-04-13 2021-04-13 Radiation source individual identification method based on small sample feature fusion Active CN113177558B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110395018.6A CN113177558B (en) 2021-04-13 2021-04-13 Radiation source individual identification method based on small sample feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110395018.6A CN113177558B (en) 2021-04-13 2021-04-13 Radiation source individual identification method based on small sample feature fusion

Publications (2)

Publication Number Publication Date
CN113177558A true CN113177558A (en) 2021-07-27
CN113177558B CN113177558B (en) 2022-06-14

Family

ID=76923315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110395018.6A Active CN113177558B (en) 2021-04-13 2021-04-13 Radiation source individual identification method based on small sample feature fusion

Country Status (1)

Country Link
CN (1) CN113177558B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6215439B1 (en) * 1990-12-07 2001-04-10 General Dynamics Government Systems Corporation Emitter identification from radio signals using filter bank algorithm
CN107133952A (en) * 2017-05-26 2017-09-05 浙江工业大学 A kind of alligatoring recognition methods for merging time-frequency characteristics
CN109492529A (en) * 2018-10-08 2019-03-19 中国矿业大学 A kind of Multi resolution feature extraction and the facial expression recognizing method of global characteristics fusion
CN110110738A (en) * 2019-03-20 2019-08-09 西安电子科技大学 A kind of Recognition Method of Radar Emitters based on multi-feature fusion
CN110197209A (en) * 2019-05-15 2019-09-03 电子科技大学 A kind of Emitter Recognition based on multi-feature fusion
CN110879989A (en) * 2019-11-22 2020-03-13 四川九洲电器集团有限责任公司 Ads-b signal target identification method based on small sample local machine learning model
CN111580064A (en) * 2020-06-28 2020-08-25 南京信息工程大学 Sea surface small target detection method based on multi-domain and multi-dimensional feature fusion
CN111767848A (en) * 2020-06-29 2020-10-13 哈尔滨工程大学 Radiation source individual identification method based on multi-domain feature fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG-MENG LIU: "Multi-feature fusion for specific emitter identification via deep ensemble learning", DIGITAL SIGNAL PROCESSING *
JIN Qiu et al.: "A survey of radar emitter classification and identification methods", Telecommunication Engineering (《电讯技术》) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113887502A (en) * 2021-10-21 2022-01-04 西安交通大学 Communication radiation source time-frequency feature extraction and individual identification method and system
CN113887502B (en) * 2021-10-21 2023-10-17 西安交通大学 Communication radiation source time-frequency characteristic extraction and individual identification method and system
CN114492604A (en) * 2022-01-11 2022-05-13 电子科技大学 Radiation source individual identification method under small sample scene
CN114940142A (en) * 2022-05-31 2022-08-26 中国人民解放军国防科技大学 Automobile anti-theft method and system based on individual verification of radiation source and vehicle
CN114940142B (en) * 2022-05-31 2023-10-13 中国人民解放军国防科技大学 Automobile anti-theft method and system based on radiation source individual verification and automobile

Also Published As

Publication number Publication date
CN113177558B (en) 2022-06-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant