CN116226721A - Unsupervised communication radiation source individual identification method based on bispectrum feature contrast learning - Google Patents
Unsupervised communication radiation source individual identification method based on bispectrum feature contrast learning
- Publication number
- CN116226721A CN116226721A CN202310242426.7A CN202310242426A CN116226721A CN 116226721 A CN116226721 A CN 116226721A CN 202310242426 A CN202310242426 A CN 202310242426A CN 116226721 A CN116226721 A CN 116226721A
- Authority
- CN
- China
- Prior art keywords
- sample
- data
- network
- feature
- samples
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention discloses an unsupervised communication radiation source individual identification method based on bispectrum feature contrast learning, which comprises the following steps: first, two parameter-sharing residual networks are used as the backbone for feature contrast learning, the rectangular integrated bispectrum features of the augmented samples are input into the contrast learning module, and more discriminative feature representations are learned, so that the feature separability between different radiation source samples is enhanced; second, contrast learning is performed at the cluster level with the newly extracted feature representations to complete the classification and recognition task. Experiments on a measured ultrashort wave communication radio station data set show that, compared with other unsupervised learning algorithms, the proposed method achieves a better recognition effect and reaches a recognition accuracy of 77.8%.
Description
Technical Field
The invention relates to the technical field of radiation source identification, in particular to an unsupervised communication radiation source individual identification method based on bispectrum feature contrast learning.
Background
Communication radiation source individual identification (Specific Emitter Identification, SEI) generally refers to associating an intercepted communication signal with a particular radio station individual through signal feature matching, thereby serving tactical purposes such as reconnaissance and identification; the accuracy of this matching bears on the reliability of intelligence reconnaissance and the operational effectiveness of the entire electronic warfare system (K. C. Ho, W. Prokopiw, and Y. T. Chan, "Modulation identification of digital signals by the wavelet transform," IEE Proceedings - Radar, Sonar and Navigation 147.4 (2002), pp. 169-176). In practical battlefield environments, one often faces the problem of classifying a large number of radiation source signals without label information, i.e. unsupervised identification, which mainly involves two key links: feature extraction and classifier design. How to extract distinguishable data features from these unlabeled radiation source signals determines the performance of the downstream classification and recognition task.
Conventional unsupervised communication radiation source individual identification usually combines artificial features with an unsupervised algorithm: the received radiation source signals are pre-processed and fine signal features are extracted by hand-crafted methods, such as time-frequency analysis (Yuan Y, Huang Z, Hao W, et al. Specific emitter identification based on Hilbert-Huang transform-based time-frequency-energy distribution features. IET Communications, 2014, 8(13): 2404-2412), modulation analysis, higher-order spectrum analysis (Zhang X D, Shi Y, Bao Z. A new feature vector using selected bispectra for signal classification with application in radar target recognition. IEEE Transactions on Signal Processing, 2001, 49(09): 1875-1885), or modeling based on transmitter device nonlinearity (A. C. Polak, S. Dolatshahi and D. L. Goeckel, "Identifying Wireless Users via Transmitter Imperfections," IEEE Journal on Selected Areas in Communications, vol. 29, no. 7, pp. 1469-1479, August 2011, doi: 10.1109/JSAC.2011.110812), and these features are then classified with an unsupervised clustering algorithm. However, such artificial features have obvious limitations: they reflect only part of the characteristics of the radiation source signal and lack strong distinguishability, so the identification effect is often not ideal. With the continuing progress of deep learning research, the strong nonlinear fitting capability of deep neural networks (DNN) has shown excellent performance in image recognition and face detection and has begun to come to the fore in communication radiation source individual identification.
In the literature (The 724th Research Institute of China Shipbuilding Industry Corporation. A radiation source signal multi-model comprehensive classification method based on deep learning: CN202110751828.0 (P). 2021-09-07), Wang Jiaming et al. construct a comprehensive network model from a deep convolutional neural network and a long short-term memory network, realizing intelligent identification of radar radiation source signals with very strong generalization capability. Li Lixin et al. (Northwestern Polytechnical University, Shanghai Institute of Satellite Engineering. Radio signal modulation identification network based on hybrid neural network and implementation method: CN202011368021.0 (P). 2021-03-26) make fuller use of the temporal and spatial state characteristics of signals by performing dimension division and cyclic threshold control on the feature information extracted by the convolution layers, improving the classification performance for modulated signals. Xie Cunxiang et al. (Xie Cunxiang, Zhang Limin, Zhong Zhaogen. Specific radiation source identification based on Hilbert-Huang transform and adversarial training. Systems Engineering and Electronics, 2021, 43(12): 3478-3487. DOI: 10.12305/j.issn.1001-506X.2021.12.08) build a model that fuses the Hilbert-Huang transform with adversarial training, feeding the critical time-frequency points of the radiation source signal and their corresponding energy values into a convolutional neural network for training, and achieve good identification results with fewer training samples. The literature (S. Wang, H. Jiang, X. Fang, Y. Ying, J. Li and B. Zhang, "Radio Frequency Fingerprint Identification Based on Deep Complex Residual Network," IEEE Access, vol. 8, pp. 204417-204424, 2020) adopts a deep complex residual network to combine feature extraction and classification, establishing an end-to-end model suitable for radiation source identification and improving identification accuracy. The literature (L. Ying, J. Li and B. Zhang, "Differential Complex-Valued Convolutional Neural Network-Based Individual Recognition of Communication Radiation Sources," IEEE Access, vol. 9, pp. 132533-132540, 2021) uses a differential complex-valued convolutional neural network to capture the nonlinear characteristics of the baseband I/Q signals of 20 communication radiation sources, reaching a recognition rate of 99.7%. The literature (Qu Lingzhi, Yang Junan, Liu Hui, Huang Keju. Individual identification method of communication radiation source with embedded attention mechanism. Systems Engineering and Electronics, 2022, 44(01): 20-27) proposes a communication radiation source identification method that embeds a double-layer attention mechanism in a residual network, improving identification accuracy with better stability. The literature (Yang Haifen, Zhang Hao, Wang Houjun, et al. A novel approach for unlabeled samples in radiation source identification. Journal of Systems Engineering and Electronics, 2022, 33(2): 354-359. doi: 10.23919/jsee.2022.000037) first trains the network on labeled samples and then uses semi-supervised learning to detect unlabeled samples and automatically label new ones, enabling dynamic identification of unknown radiation source individuals.
Existing deep learning based communication radiation source identification methods usually perform supervised learning on labeled data sets, but in actual non-cooperative communication the intercepted radiation source data often comes without prior information, which limits the performance of supervised learning algorithms. Among unsupervised learning algorithms, clustering algorithms can group feature vectors into different clusters without any labels, e.g. sparse embedded k-means clustering and multi-kernel k-means clustering with matrix-induced regularization, but because they rely on artificial features most of them produce poor results on complex data sets. To address the problem of insufficient feature representation, deep clustering uses neural networks to extract representative information from images and obtain more discriminative feature representations, facilitating the downstream clustering task (Caron M, Bojanowski P, Joulin A, et al. Deep clustering for unsupervised learning of visual features. Proceedings of the European Conference on Computer Vision (ECCV), 2018: 132-149). These algorithms tend to iteratively group features and update the deep network with the resulting assignments; as feature representation learning and clustering alternate, errors tend to accumulate, which degrades clustering performance.
In summary, deep neural networks perform well on the classification and recognition of labeled data sets, but often fail to achieve satisfactory results on unlabeled data.
Disclosure of Invention
The invention aims to solve problems such as the difficulty of extracting data features from communication radiation sources without label information and the resulting low classification accuracy, and, by introducing contrast learning theory, provides an unsupervised communication radiation source individual identification method based on bispectrum feature contrast learning.
The technical solution for realizing the purpose of the invention is as follows: an unsupervised communication radiation source individual identification method based on bispectrum feature contrast learning comprises the following steps:
step 1, construct a network model, input the data set χ, and set the number of training epochs E, the batch size N, the hyper-parameters τ_I, τ_C and the number of categories M;
step 2, data preprocessing: normalize the time-domain signal data and intercept data samples;
step 3, data augmentation: apply random cropping, random noise addition and flipping transformation to each signal sample, each augmentation method being applied independently with a set probability, to generate positive samples;
step 4, bispectrum feature extraction: obtain the bispectrum features of each sample after data augmentation, turning the one-dimensional time-domain signal into a two-dimensional feature matrix;
step 5, select a batch of data from the data set χ, randomly select two data augmentation modes T_a, T_b, and calculate the sample-pair contrast loss L_ins;
step 6, calculate the cluster contrast loss L_clu and the overall loss value L, and update the parameters of the networks f, g_I, g_C by minimizing L;
step 7, for each sample x in the data set, extract features by h = f(x) and compute the cluster assignment of each sample by c = argmax g_C(h);
step 8, output the one-hot code of each cluster to complete unsupervised communication radiation source individual identification.
Further, contrast learning is performed by exploiting the similarity between the original signal sample x and its positive examples and the difference from its negative examples, and more discriminative feature representations h_a, h_b are extracted for the downstream classification task;
according to the prior knowledge that the feature representations h_a, h_b obtained from the same data sample through the two augmentation modes belong to the same class during clustering, a cluster loss function is designed and contrast learning is performed at the cluster level to complete the classification and recognition task;
a positive sample pair is defined as the two augmented samples of the same signal sample, and all other sample pairs are defined as negative sample pairs.
Further, the network model in step 1 is specifically as follows:
the one-dimensional time sequence is turned into a two-dimensional feature matrix and input into the contrast learning module for training, and a residual network is selected as the backbone network for contrast learning;
the residual network is constructed from residual blocks, where a residual block consists of several cascaded convolution layers and a shortcut connection; the outputs of the two branches are accumulated and then activated with the ReLU function to obtain the output, and residual blocks are connected in series to form a deeper network; network parameters are optimized through experiments and the network model for feature contrast learning is determined:
when training the model, the bispectrum feature maps of 5 ultrashort wave radio stations are taken as the input of the network, 1500 data samples are taken from each radio station, and the bispectrum features are calculated after data augmentation of each sample; each sample contains 4096 data points and the feature matrix dimension is set to 128×128; 60% of the data is randomly selected as the training set, 20% as the validation set and 20% as the test set;
ResNet-18 and ResNet-34 are respectively selected as the backbone network, the network optimization method is Adam, the initial learning rate is set to 0.001, the number of training epochs E is set to 400, and the batch size N is set to 32; recognition results corresponding to the different network depths are obtained, and ResNet-34 is finally selected as the backbone network for contrast feature extraction.
Further, in step 5, the sample-pair contrast loss L_ins is calculated as follows:

for a given signal sample x_i, two data augmentation modes T_a, T_b are randomly selected with a set probability to construct a sample pair, giving two correlated samples denoted \tilde{x}_i^a and \tilde{x}_i^b; the two correlated samples augmented from the same sample are marked as a positive sample pair;

the sample pair is input into a parameter-sharing deep neural network f(·) for contrast training and feature extraction, giving new feature representations denoted h_i^a = f(\tilde{x}_i^a) and h_i^b = f(\tilde{x}_i^b);

a two-layer nonlinear fully-connected head g_I(·) is stacked to map the feature matrix to the subspace in which the contrast loss is applied, z_i^a = g_I(h_i^a), z_i^b = g_I(h_i^b), and the contrast loss is calculated on z_i^a and z_i^b;

the pairwise similarity is measured by the cosine distance, i.e.

s(z_i^{k_1}, z_j^{k_2}) = \frac{(z_i^{k_1})^{\top} z_j^{k_2}}{\lVert z_i^{k_1} \rVert \, \lVert z_j^{k_2} \rVert}

where k_1, k_2 ∈ {a, b}, i, j ∈ [1, N];

N segments of radiation source signal samples are intercepted, and the two augmentation modes are applied to each sample x_i to obtain 2N signal samples; for a particular sample \tilde{x}_i^a there are 2N-1 sample pairs, the augmented sample \tilde{x}_i^b associated with it is marked as a positive sample to give the positive sample pair (\tilde{x}_i^a, \tilde{x}_i^b), and the remaining 2N-2 pairs are all marked as negative pairs;

in order to optimize the pairwise similarity over the whole radiation source data set, the contrast loss of a particular sample \tilde{x}_i^a is defined as follows:

\ell_i^a = -\log \frac{\exp\left( s(z_i^a, z_i^b)/\tau_I \right)}{\sum_{j=1}^{N} \sum_{k \in \{a,b\}} \mathbb{1}_{[(j,k) \neq (i,a)]} \exp\left( s(z_i^a, z_j^k)/\tau_I \right)}

where τ_I is a temperature hyper-parameter; in order to identify all positive pairs in the entire data set, the sample contrast loss is calculated for every augmented sample, namely:

L_{ins} = \frac{1}{2N} \sum_{i=1}^{N} \left( \ell_i^a + \ell_i^b \right)
further, in step 6, a cluster contrast loss is calculatedL clu The method is characterized by comprising the following steps:
characterization of the obtained novel featuresInput to clustering network g C (. About.) the->Aggregation into the same class;
setting the dimension of the network output matrix Y to meet Y a ∈R N×M Where N is the number of samples per batch of training, M is the number of clusters, Y a ,Y b Respectively outputting the data under the augmentation of two times of data of each batch of samples; since each sample belongs to only one cluster, the Y row should resemble a one-hot distribution;
when projecting a data sample into a space having a dimension equal to the number of clusters, the ith element of the feature is considered to be a probability of belonging to the ith cluster, the ith column of Y is considered to be a representation of the ith cluster, and all columns should be different from each other;
with the other two fully-connected layers g C Make sure of featuresSubspace mapped to an M dimension, denoted as whereinIs a matrix Y a Is sample x i The output is carried out through a clustering network after the a augmentation mode;
memory matrix Y a Is listed as (i)I.e. the representation of the ith cluster after the first augmentation of the data sample, will likewise +.>And->Combining to form a positive cluster pair->The other 2M-2 cluster pairs are regarded as negative cluster pairs;
the cosine distance is used to measure the similarity between cluster pairs, namely:
wherein ,k1 ,k 2 ∈{a,b},i,j∈[1,M];
Clustering using the following penalty functionAnd remove->All other clusters except for the one are distinguished: />
wherein ,τC Super parameters of a clustering network;
by traversing all clusters, cluster contrast loss L is avoided to assign most samples to the same class clu The definition is as follows:
Further, in step 6, the overall loss value L is calculated as follows:
the feature contrast loss and the cluster contrast loss are optimized simultaneously as the objective of the whole unsupervised communication radiation source classification and recognition network, namely:

L = L_ins + L_clu
Compared with the prior art, the invention has the following remarkable advantages: (1) two parameter-sharing residual networks are used as the backbone for feature contrast learning, the rectangular integrated bispectrum features of the augmented samples are input into the contrast learning module, and more discriminative feature representations are learned, which enhances the feature separability between different radiation source samples; (2) the newly extracted feature representations are used for contrast learning at the cluster level to complete the classification and recognition task, and experiments are carried out on a measured ultrashort wave communication radio station data set.
Drawings
Fig. 1 is a comparative learning schematic block diagram.
Fig. 2 is a schematic diagram of an augmentation of radiation source signal sample data.
Fig. 3 is a diagram of bispectrum feature maps of signal samples from 3 radio stations of the same type.
Fig. 4 is a schematic diagram of a communication radiation source individual identification algorithm network structure based on contrast learning.
Fig. 5 is a flow chart of a communication radiation source individual identification algorithm based on contrast learning.
Fig. 6 is a schematic diagram of a signal collector frequency interface.
Fig. 7 is a schematic diagram of a signal time domain waveform of two ultrashort radio stations.
Fig. 8 is a schematic diagram of training results of an ultrashort wave radio station data set.
Fig. 9 is a schematic diagram of a test set confusion matrix.
FIG. 10 is a comparison of recognition rates for three different features.
Fig. 11 is a diagram showing the average recognition rates (%) when features of different dimensions are extracted by contrast learning for the three feature types.
Detailed Description
In order to directly and effectively extract fingerprint information from a large number of unlabeled radiation source signals with a deep neural network and facilitate the downstream classification and recognition task, the invention introduces the idea of contrast learning: a contrast learning module is built from two residual networks (ResNet) with identical structures, and the rectangular integrated bispectrum (Square Integrated Bispectrum, SIB) features of each signal sample and its augmented data are input into the network for contrast learning, so as to learn more discriminative feature representations and further improve the overall recognition performance of the network.
1 communication radiation source individual identification algorithm basic principle based on contrast learning
1.1 contrast learning
Contrast learning (contrastive learning, CL) is an unsupervised learning paradigm whose main idea is to build positive and negative sample pairs through data augmentation and then map the data to a feature representation space in which the similarity of positive pairs is maximized and the similarity of negative pairs is minimized, thereby training the network to learn more discriminative feature representations.
In communication radiation source identification, for a given signal sample x, two augmentation modes T_a and T_b are randomly chosen from a set of data augmentation methods to obtain two correlated samples x_a and x_b, which we regard as a positive sample pair, while all other sample pairs are regarded as negative sample pairs. The sample pair is then passed through two neural network encoders f(·) with identical structure to obtain the encoded representations h_a, h_b of the two samples. The output of the encoder f(·) is then fed into a set of nonlinear fully-connected layers g(·) that map the data into another space, denoted z_a, z_b; this additional step avoids the information loss caused by the contrast loss and thereby improves network performance. Finally, a contrast loss function is set so that the feature similarity of the positive sample pair is maximized and that of the negative sample pairs is minimized, and the network is trained on the input samples to extract more discriminative feature representations. The principle is shown in Fig. 1.
In the figure, sim(z_a, z_b) denotes the similarity between the two representations. In experiments we find that the features extracted by the contrast learning module are of relatively high dimension and give better radiation source identification performance; the Euclidean distance is affected by the feature dimension, whereas the cosine similarity always equals 1 when the vectors are identical, 0 when they are orthogonal and -1 when they are opposite, so the cosine distance is used to measure the similarity of a sample pair, namely:

sim(z_a, z_b) = \frac{z_a^{\top} z_b}{\lVert z_a \rVert \, \lVert z_b \rVert}
1.2 sample data augmentation
Data augmentation is a technique that constructs more samples from a small number of samples. It is widely used in the field of image recognition; for example, one picture can be "augmented" into at least four pictures by rotation, flipping, cropping and similar operations, and if every picture in a data set is augmented in this way, the total number of pictures becomes four times the original. The core idea is to obtain data samples similar to the original sample by applying operations that do not change its essential characteristics. In radiation source contrast representation learning, the purpose of data augmentation is not to enlarge the sample set but to establish "pseudo labels" by treating the samples augmented from the same signal data as the same class, thereby constructing positive and negative sample pairs for contrast training to further extract feature representations.
Unlike image data, the communication radiation source signals we acquire are one-dimensional sequences with a timing relationship between the sampling points within a sample, so the augmentation operation must respect the timing relationship between the newly generated samples and the original samples. As shown in Fig. 2, in the experiments we mainly adopt flipping, stretching and noise addition, ensuring that the semantics of the sequence are preserved before and after augmentation.
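As an illustration, a minimal NumPy sketch of these three one-dimensional augmentations (flipping, stretching via resampling, and additive noise) is given below; the application probability and noise level are assumptions rather than values taken from the experiments.

```python
import numpy as np

def _resample(x, new_len):
    """Linear resampling of a 1-D sequence to new_len points."""
    old_t = np.linspace(0.0, 1.0, len(x))
    new_t = np.linspace(0.0, 1.0, new_len)
    return np.interp(new_t, old_t, x)

def augment_signal(x, rng, p=0.5):
    """Randomly flip, stretch and add noise to a 1-D signal sample.

    Each operation is applied independently with probability p; the stretch
    range (0.9-1.1) and noise level (2% of the signal std) are assumed values.
    """
    y = np.asarray(x, dtype=float).copy()
    n = len(y)
    if rng.random() < p:                      # flipping (time reversal)
        y = y[::-1].copy()
    if rng.random() < p:                      # stretching: resample, then crop or pad back to n
        stretched = _resample(y, max(2, int(n * rng.uniform(0.9, 1.1))))
        if len(stretched) >= n:
            start = (len(stretched) - n) // 2
            y = stretched[start:start + n]
        else:
            y = np.pad(stretched, (0, n - len(stretched)))
    if rng.random() < p:                      # additive Gaussian noise
        y = y + rng.normal(0.0, 0.02 * (np.std(y) + 1e-12), size=n)
    return y

# Example: build the two augmented views of one sample
# rng = np.random.default_rng(0)
# x_a, x_b = augment_signal(x, rng), augment_signal(x, rng)
```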
1.3 bispectral characterization
When analyzing actual radiation source signals, many non-Gaussian problems are faced; higher-order statistics can extract more statistical signal characteristics and provide richer information than second-order statistics. The bispectrum is the lowest-order higher-order spectrum: it is simple to process, it contains both the amplitude and the phase information of the signal, and it has a certain suppression effect on noise. Among the four kinds of integrated bispectra, the rectangular (square) integrated bispectrum (SIB) currently performs best and avoids the repetition and omission of bispectrum values. We therefore use the rectangular integrated bispectrum features of the signal samples for contrast learning in the experiments.
The bispectrum can be expressed as:

B(\omega_1, \omega_2) = |B(\omega_1, \omega_2)| \, e^{j \varphi_B(\omega_1, \omega_2)}

where |B(ω_1, ω_2)| and φ_B(ω_1, ω_2) represent the magnitude and the phase of the bispectrum, respectively.

Let b(t) be a continuous signal; the bispectrum can be expressed as:

B(\omega_1, \omega_2) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} c_{3b}(\tau_1, \tau_2) \, e^{-j(\omega_1 \tau_1 + \omega_2 \tau_2)} \, d\tau_1 \, d\tau_2

where c_{3b}(τ_1, τ_2) is the third-order autocorrelation (cumulant) function of the signal b(t).

The SIB may be defined as:

B_{SI}(l) = \oint_{S_l} B(\omega_1, \omega_2) \, d\omega_1 \, d\omega_2

where S_l is an integration path of the SIB.
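For illustration, the sketch below computes a direct FFT-based bispectrum estimate averaged over signal segments and then sums its magnitude along concentric square contours as an approximation of the rectangular integration paths S_l; the windowing, normalization and exact path definition are assumptions, not details taken from the patent.

```python
import numpy as np

def bispectrum_estimate(x, nfft=128, overlap=0.05):
    """Direct (FFT-based) bispectrum estimate averaged over segments.

    Segment length 128 and 5% overlap follow the experimental settings given
    later in the text; the Hann window is an assumption.
    """
    step = max(1, int(nfft * (1.0 - overlap)))
    segments = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, step)]
    B = np.zeros((nfft, nfft), dtype=complex)
    k = np.arange(nfft)
    K1, K2 = np.meshgrid(k, k, indexing="ij")
    for s in segments:
        X = np.fft.fft(s * np.hanning(nfft))
        B += X[K1] * X[K2] * np.conj(X[(K1 + K2) % nfft])   # B(k1,k2) = X(k1)X(k2)X*(k1+k2)
    return B / max(len(segments), 1)

def square_integrated_bispectrum(B):
    """Sum |B| over concentric square rings around the origin (one value per ring)."""
    mag = np.abs(np.fft.fftshift(B))
    c = mag.shape[0] // 2
    sib = []
    for l in range(1, c):
        outer = mag[c - l:c + l + 1, c - l:c + l + 1].sum()
        inner = mag[c - l + 1:c + l, c - l + 1:c + l].sum()
        sib.append(outer - inner)
    return np.asarray(sib)
```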
One sample is randomly taken from each of the 5 FM ultrashort wave radio stations of the same model, the bispectrum of each sample signal is estimated according to the bispectrum estimation algorithm above, and 3 of the resulting bispectrum feature maps are selected for display, as shown in Fig. 3.
As can be seen from Fig. 3, there are certain differences in the bispectrum features between individual stations of the same model, so this feature can be selected to identify individual radiation sources. Practical experience shows the advantages of extracting signal features through the bispectrum transform: the phase and amplitude information of individual signals is retained; it is insensitive to the choice of time origin; and additive Gaussian noise is effectively suppressed.
2 communication radiation source individual identification algorithm based on contrast learning
2.1 network model
The basic idea of the experiment is to perform contrast learning using the similarity between the original signal sample x and its positive examples and the difference from its negative examples, and to extract more discriminative feature representations h_a, h_b for the downstream classification and recognition task. Then, based on the prior knowledge that the feature representations h_a, h_b obtained from the same data sample through the two augmentation modes belong to the same class during clustering, a cluster loss function is designed and contrast learning is performed at the cluster level to complete the classification and recognition task. In the experiments we define a positive sample pair as the two augmented samples of the same signal sample, and all other sample pairs as negative sample pairs. The network structure of the method is shown in Fig. 4, and the algorithm flow in Fig. 5.
In the method, the one-dimensional time sequence is turned into a two-dimensional feature matrix and input into the contrast learning module for training. Convolutional neural networks (CNN) have obvious advantages in feature extraction and are commonly used for two- and three-dimensional data, but directly stacking shallow convolutional networks into a deep network causes the degradation problem. In order to exploit the strong nonlinear feature fitting capability of deep networks, we choose a residual network as the backbone for contrast learning.
The residual network was proposed by Kaiming He et al. in 2015 and is constructed from residual blocks (Residual Building Block). A residual block consists of several cascaded convolution layers and a shortcut connection; the outputs of the two branches are accumulated and then activated with the ReLU function to obtain the output. Residual blocks connected in series form a deeper network, which alleviates the vanishing gradient problem of deep networks. There are two main typical structures, ResNet-18 and ResNet-34. In order to select a better network model for feature contrast learning, the invention optimizes the network parameters through experiments.
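A minimal PyTorch sketch of the residual block just described (cascaded convolutions plus a shortcut connection, accumulated and passed through ReLU) follows; the channel counts and the 1×1 projection used when shapes differ follow the standard ResNet design rather than patent-specific values.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: two 3x3 convolutions plus a shortcut connection."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.shortcut = nn.Identity()
        if stride != 1 or in_ch != out_ch:        # project the shortcut when shapes differ
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )

    def forward(self, x):
        # accumulate the convolutional output and the shortcut, then apply ReLU
        return torch.relu(self.conv(x) + self.shortcut(x))
```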
When training the model, taking the bispectrum feature map of 5 ultrashort wave radio stations as the input of the network, taking 1500 data samples from each radio station, and calculating bispectrum features after data augmentation processing of each sample. Each sample contains 4096 data points with the feature matrix dimension set to 128 x 128. From this, 60% of the data was randomly selected as training set, 20% as validation set, and 20% as test set.
ResNet-18 and ResNet-34 are respectively selected as a backbone network, the network optimization method is an Adam method, the initial learning rate is set to 0.001, the training times are set to 400, the batch size is 32, and recognition results corresponding to different network depths are obtained as shown in table 1:
TABLE 1 influence of different network depths on radiation source identification accuracy
The above experiments show that, in the contrast training on radiation source data, ResNet-34 extracts more discriminative feature representations and achieves better radiation source identification performance than ResNet-18, indicating that the deeper residual network suffers no "degradation" problem when trained on radiation source data. We therefore select ResNet-34 as the backbone network for contrast feature extraction; the network parameter settings are shown in Table 2.
Table 2 ResNet-34 network structure
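As a sketch of how the selected backbone and the two heads could be assembled, the following code treats the 128×128 bispectrum feature matrix as a single-channel image and attaches an instance projection head g_I and a cluster head g_C to a torchvision ResNet-34; the head widths and the 128-dimensional projection size are assumptions, not values taken from the patent.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

class ContrastiveBackbone(nn.Module):
    """ResNet-34 backbone f(.) with instance head g_I and cluster head g_C."""
    def __init__(self, feature_dim=128, num_clusters=5):
        super().__init__()
        net = resnet34(weights=None)
        # accept a single-channel 128x128 bispectrum feature matrix as input
        net.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        net.fc = nn.Identity()              # keep the 512-d pooled feature h
        self.f = net
        self.g_I = nn.Sequential(           # instance projection head
            nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, feature_dim))
        self.g_C = nn.Sequential(           # cluster assignment head
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, num_clusters), nn.Softmax(dim=1))

    def forward(self, x):
        h = self.f(x)
        return h, self.g_I(h), self.g_C(h)
```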
2.2 feature contrast learning Module
For a given signal sample x_i, we randomly select two data augmentation modes T_a, T_b with a certain probability to construct a sample pair, obtaining two correlated samples denoted \tilde{x}_i^a and \tilde{x}_i^b; the two correlated samples augmented from the same sample are marked as a positive sample pair. The sample pair is then input into a parameter-sharing deep neural network f(·) for contrast training and feature extraction, giving new feature representations denoted h_i^a = f(\tilde{x}_i^a) and h_i^b = f(\tilde{x}_i^b). In order to suppress the information loss that the contrast loss may cause, we do not apply the contrast training directly to the feature matrices h_i^a, h_i^b; instead, a two-layer nonlinear fully-connected head g_I(·) maps the feature matrix to the subspace in which the contrast loss is applied, z_i^a = g_I(h_i^a), z_i^b = g_I(h_i^b), and the contrast loss is calculated on z_i^a and z_i^b, so that the similarity of positive pairs is maximized and the similarity of negative pairs is minimized. The pairwise similarity is measured by the cosine distance, i.e.

s(z_i^{k_1}, z_j^{k_2}) = \frac{(z_i^{k_1})^{\top} z_j^{k_2}}{\lVert z_i^{k_1} \rVert \, \lVert z_j^{k_2} \rVert}

where k_1, k_2 ∈ {a, b}, i, j ∈ [1, N].

In the experiment, since the radiation source signals carry no usable label information under the unsupervised setting, positive and negative sample pairs can only be constructed from the pseudo labels generated by data augmentation. Specifically, we intercept N segments of radiation source signal samples and apply the two augmentation modes to each sample x_i, obtaining 2N signal samples. For a particular sample \tilde{x}_i^a there are 2N-1 sample pairs; the augmented sample \tilde{x}_i^b associated with it is marked as a positive sample, giving the positive sample pair (\tilde{x}_i^a, \tilde{x}_i^b), and the remaining 2N-2 pairs are all marked as negative pairs.

Then, in order to optimize the pairwise similarity over the whole radiation source data set, the contrast loss of a particular sample \tilde{x}_i^a can be defined as follows:

\ell_i^a = -\log \frac{\exp\left( s(z_i^a, z_i^b)/\tau_I \right)}{\sum_{j=1}^{N} \sum_{k \in \{a,b\}} \mathbb{1}_{[(j,k) \neq (i,a)]} \exp\left( s(z_i^a, z_j^k)/\tau_I \right)}

where τ_I is a temperature hyper-parameter. Since we want to identify all positive pairs in the whole data set, we calculate the sample contrast loss for each augmented sample, namely:

L_{ins} = \frac{1}{2N} \sum_{i=1}^{N} \left( \ell_i^a + \ell_i^b \right)
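A possible PyTorch implementation of this sample-pair contrast loss (NT-Xent style) is sketched below; z_a and z_b are the g_I outputs of the two augmented views of a batch of N samples, and the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def instance_contrast_loss(z_a, z_b, tau_i=0.5):
    """Sample-pair contrast loss L_ins over a batch of N positive pairs."""
    n = z_a.shape[0]
    z = F.normalize(torch.cat([z_a, z_b], dim=0), dim=1)      # 2N x d, unit norm
    sim = z @ z.t() / tau_i                                    # pairwise cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                 # exclude self-pairs
    # the positive for row i is its other view: (i, i+N) and (i+N, i)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```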
2.3 clustering network module
After the features are extracted through contrast training, the new feature representations h_i^a, h_i^b are input into the clustering network g_C(·); in the ideal case, h_i^a and h_i^b are aggregated into the same class. Based on this prior knowledge, we set the dimension of the network output matrix Y so that Y^a ∈ R^{N×M}, where N is the number of samples per training batch and M is the number of clusters, and Y^a, Y^b are the outputs of each batch of samples under the two data augmentations. Since each sample belongs to only one cluster, ideally a row of Y should resemble a one-hot distribution. When a data sample is projected into a space whose dimension equals the number of clusters, the i-th element of its feature can be regarded as the probability of belonging to the i-th cluster. In this sense, the i-th column of Y can be regarded as the representation of the i-th cluster, and all columns should differ from one another. Analogously to g_I(·) for the features, we use another two fully-connected layers g_C(·) to map the features h_i^a to an M-dimensional subspace, denoted y_i^a = g_C(h_i^a), where y_i^a is the i-th row of the matrix Y^a and can be regarded as the output of sample x_i through the clustering network after augmentation mode a.

Denote the i-th column of the matrix Y^a as \hat{y}_i^a, i.e. the representation of the i-th cluster under the first augmentation of the data samples; likewise, \hat{y}_i^a and \hat{y}_i^b are combined to form the positive cluster pair (\hat{y}_i^a, \hat{y}_i^b), and the other 2M-2 cluster pairs are regarded as negative cluster pairs. The cosine distance is still used to measure the similarity between cluster pairs, namely:

s(\hat{y}_i^{k_1}, \hat{y}_j^{k_2}) = \frac{(\hat{y}_i^{k_1})^{\top} \hat{y}_j^{k_2}}{\lVert \hat{y}_i^{k_1} \rVert \, \lVert \hat{y}_j^{k_2} \rVert}

where k_1, k_2 ∈ {a, b}, i, j ∈ [1, M]. The following loss function is used to distinguish cluster \hat{y}_i^a from all other clusters except \hat{y}_i^b:

\hat{\ell}_i^a = -\log \frac{\exp\left( s(\hat{y}_i^a, \hat{y}_i^b)/\tau_C \right)}{\sum_{j=1}^{M} \sum_{k \in \{a,b\}} \mathbb{1}_{[(j,k) \neq (i,a)]} \exp\left( s(\hat{y}_i^a, \hat{y}_j^k)/\tau_C \right)}

where τ_C is the hyper-parameter of the clustering network. By traversing all clusters, and in order to avoid assigning most samples to the same class, the cluster contrast loss L_clu is defined as follows:

L_{clu} = \frac{1}{2M} \sum_{i=1}^{M} \left( \hat{\ell}_i^a + \hat{\ell}_i^b \right)
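A corresponding sketch of the cluster-level loss is given below; the columns of Y^a and Y^b are treated as cluster representations, and the entropy term that discourages assigning most samples to one cluster follows the contrastive-clustering literature and is an assumption here, as is the temperature value.

```python
import torch
import torch.nn.functional as F

def cluster_contrast_loss(y_a, y_b, tau_c=1.0):
    """Cluster contrast loss over the columns of Y^a, Y^b (each of shape N x M)."""
    m = y_a.shape[1]
    c = F.normalize(torch.cat([y_a.t(), y_b.t()], dim=0), dim=1)   # 2M x N cluster representations
    sim = c @ c.t() / tau_c                                        # pairwise cosine similarities
    mask = torch.eye(2 * m, dtype=torch.bool, device=c.device)
    sim = sim.masked_fill(mask, float("-inf"))                     # exclude self-pairs
    targets = torch.cat([torch.arange(m, 2 * m), torch.arange(0, m)]).to(c.device)
    contrast = F.cross_entropy(sim, targets)
    # entropy of the empirical cluster sizes; subtracting it penalizes degenerate assignments
    p_a, p_b = y_a.mean(dim=0), y_b.mean(dim=0)
    entropy = -(p_a * (p_a + 1e-8).log()).sum() - (p_b * (p_b + 1e-8).log()).sum()
    return contrast - entropy
```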
The feature contrast loss and the cluster contrast loss are optimized simultaneously as the objective of the whole unsupervised communication radiation source classification and recognition network, namely:

L = L_ins + L_clu    (11)
2.4 Algorithm step
The specific steps of the proposed unsupervised communication radiation source individual identification algorithm based on contrast learning, derived from the above analysis, are shown in Table 3:
table 3 main steps of an individual identification algorithm of an unsupervised communication radiation source based on contrast learning
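One possible training loop tying together the augmentation, backbone and loss sketches above is shown below, corresponding to steps 5-8 of Table 3; Adam with learning rate 0.001 and 400 epochs follow the stated experimental settings, while the data loader interface and temperature values are assumptions.

```python
import torch

def train_sib_cl(model, loader, epochs=400, lr=1e-3, tau_i=0.5, tau_c=1.0, device="cpu"):
    """Train the contrastive backbone with the overall loss L = L_ins + L_clu.

    `loader` is assumed to yield pairs of augmented bispectrum feature maps
    (x_a, x_b) of shape (N, 1, 128, 128).
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    for _ in range(epochs):
        for x_a, x_b in loader:
            x_a, x_b = x_a.to(device), x_b.to(device)
            _, z_a, y_a = model(x_a)
            _, z_b, y_b = model(x_b)
            loss = instance_contrast_loss(z_a, z_b, tau_i) + cluster_contrast_loss(y_a, y_b, tau_c)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

@torch.no_grad()
def assign_clusters(model, x):
    """Steps 7-8: cluster assignment c = argmax g_C(f(x)) for each sample."""
    model.eval()
    _, _, y = model(x)
    return y.argmax(dim=1)
```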
3 experimental results and analysis
In order to evaluate the feasibility and effectiveness of the unsupervised communication radiation source individual identification algorithm based on contrast learning, extensive experiments are carried out on the signal data sets of 5 ultrashort wave communication radio stations of the same model. To analyze the experimental results more intuitively, we compare the proposed method (SIB/CL) with combinations of traditional artificial features and unsupervised learning algorithms: k-means clustering (K-means), density-based spatial clustering (DBSCAN) and density peak clustering (DPC).
We set up 4 groups of experiments in total. The first group is an unsupervised radiation source identification experiment, which verifies the feasibility of the proposed method (SIB/CL) by identifying an unlabeled radiation source data set; the second group compares different pre-features: after the radiation source signal sample data is augmented, different artificial features are extracted and input into the network for training, verifying the difference in recognition performance under different artificial pre-feature extractions; the third group compares different feature dimensions, verifying the influence of features of different dimensions extracted by the contrast learning network on radiation source identification performance; the fourth group compares different unsupervised algorithms, verifying the superiority of the proposed algorithm against several classical unsupervised communication radiation source identification algorithms.
3.1 Experimental data acquisition
In the experiment, voice signals of 5 FM ultrashort wave radio stations of the same model are collected; the voice signals are voice call data of fixed personnel, the working center frequencies of the radio stations are 35 MHz, 55 MHz and 85 MHz, and the 'low power' mode is selected. The acquisition scene is 50 m diffraction, i.e. there is a tall building between the receiver and the radio station and the distance between them is 50 meters. During the acquisition of the radio station voice signals, the receiver acquires zero intermediate frequency I/Q signals. Fig. 6 shows the signal frequency interface of the collector.
the parameter settings of the receiver are shown in table 4:
table 4 parameter settings of the receiver
The time domain signal waveform constructed from 40000 data points of 2 stations is shown in fig. 7:
3.2 unsupervised identification experiment of radiation source
In order to verify the feasibility of the SIB/CL algorithm, the experiment is performed on the signal data of 5 ultrashort wave communication radio stations of the same model; 1500 data samples are intercepted from each station's signal, training is performed for 400 epochs with a batch size of 32, and the curve of accuracy versus training epochs is shown in Fig. 8.
After training, the network model is saved, the saved neural network model is called to test the data which do not participate in training, and the obtained result is shown in table 5:
table 5 experimental results of ultrashort wave radio station
As can be seen from Fig. 9, the recognition rates of the 5 classes of radiation sources are distributed fairly evenly; although there is still room for improvement, the method of further extracting feature representations through bispectrum feature contrast learning is feasible for the unsupervised recognition of communication radiation sources.
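Since the method is unsupervised, reporting a recognition accuracy requires mapping cluster indices to station identities; the patent does not state how this mapping is done, so the following sketch assumes the usual optimal (Hungarian) matching between clusters and labels.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Best-match accuracy between predicted cluster indices and station labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                        # co-occurrence counts
    rows, cols = linear_sum_assignment(-cost)  # maximize the number of matched samples
    return cost[rows, cols].sum() / len(y_true)
```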
3.3 different characteristic comparison experiments
Since the conventional Fourier transform cannot describe how the signal frequency varies with time, it cannot comprehensively extract the fine features of the signal; time-frequency analysis methods are required to relate frequency to time. The wavelet transform, developed on the basis of the Fourier transform, is commonly used to analyze the time-domain and frequency-domain information of a signal and is widely applied in radiation source feature extraction; it can transform a one-dimensional time-domain signal onto a two-dimensional time-frequency plane. In addition, one-dimensional signal data can be converted into a two-dimensional matrix by time-domain waveform splicing so as to match the input dimension of the network.
In this group of experiments, the time-domain waveform, wavelet transform and rectangular integrated bispectrum features of the communication data samples of the 5 ultrashort wave radio stations are extracted respectively and input into the contrast learning module for 400 training epochs, with 1500 samples intercepted from each station's data. When extracting the bispectrum features, each sample contains 4096 data points, the data overlap ratio is set to 5%, each segment has a length of 128, and the number of FFT points is 128. The results are shown in Fig. 10.
As can be seen from the figure, as training proceeds, inputting the bispectrum features into the contrast learning network extracts the most discriminative feature representations and gives the best radiation source identification results. The time-frequency features obtained by the wavelet transform improve the feature representation learned by the contrast network somewhat compared with the raw time-domain waveform features, but their recognition performance is limited because, unlike the bispectrum features, they are more sensitive to the augmentation transforms of the signal.
3.4 different feature dimension contrast experiments
In order to examine how the radiation source identification performance changes with the dimension of the new features extracted by contrast learning, the three feature extraction methods are applied to classify features of 32, 64, 96, 128, 160, 192, 224 and 256 dimensions further extracted after contrast learning; each method is run 10 times to obtain the average recognition rate, and the results are shown in Table 6 and Fig. 11.
Table 6 three feature extraction methods average recognition rate (%)
As can be seen from Table 6 and Fig. 11, the recognition rate of the contrast learning algorithm varies with the feature extraction dimension. In general, when the feature dimension is 32, 64 or 96, the identification performance shows no obvious advantage, but when the feature dimension output by the contrast learning network exceeds 160, the identification performance becomes stable and the overall recognition rate improves markedly. The reason is that low-dimensional network output features are not sufficient to distinguish the different radiation source signals, while overly high feature dimensions tend to cause overfitting; an appropriate feature extraction dimension should therefore be chosen for radiation source classification and identification in the experiments.
3.5 comparison experiments of different unsupervised algorithms
This group of experiments evaluates the adopted network model on the ultrashort wave data set and compares it with 3 clustering algorithms previously used for unsupervised communication radiation source identification: k-means clustering (K-means), density-based spatial clustering (DBSCAN) and density peak clustering (DPC). In the experiments, bispectrum features of different dimensions are extracted from the signal samples for the 3 clustering algorithms, and bispectrum features of the corresponding dimensions are used as the network input of the SIB/CL algorithm for comparison, as shown in Table 7; the contrast learning network is trained for 400 epochs, and each group of experiments takes the average recognition rate over 10 runs.
table 7 comparison of SIB/CL algorithm with different unsupervised algorithms
From the comparison of the experimental results it can be seen that the method shows no significant advantage when the contrast learning network extracts features of lower dimension, possibly because the features extracted at low dimension are not sufficiently discriminative. However, as the feature dimension increases, the strong nonlinear fitting capability of the neural network takes effect and the recognition rate becomes significantly higher than that of the other unsupervised algorithms.
The strong nonlinear fitting capability of deep neural networks is a great advantage in feature extraction. The invention introduces a contrast learning algorithm, builds a feature contrast learning module from a residual neural network (ResNet-34) to re-extract features from the bispectrum features of the communication radiation source data, and applies contrast learning again at the cluster level to improve clustering performance. Experimental results show that on an unlabeled ultrashort wave communication radio station data set the algorithm reaches a recognition accuracy of 77.8%; compared with algorithms combining other artificial features with unsupervised learning, the proposed algorithm has better recognition performance, indicating that the method is feasible and can be used to solve the problem of individual identification of communication radiation sources without prior information. The algorithm applies a deeper residual neural network when constructing the contrast learning backbone; the technology is mature, the recognition effect is stable, and it shows high application value.
Claims (6)
1. An unsupervised communication radiation source individual identification method based on bispectrum feature contrast learning is characterized by comprising the following steps:
step 1, constructing a network model, inputting a data set χ, and setting the number of training epochs E, the batch size N, the hyper-parameters τ_I, τ_C and the number of categories M;
step 2, data preprocessing: normalizing the time domain signal data and intercepting a data sample;
step 3, data augmentation: applying random cropping, random noise addition and flipping transformation to each signal sample, each data augmentation method being applied independently with a set probability to generate positive samples;
step 4, extracting bispectral features: obtaining bispectrum characteristics of each sample after data augmentation, and changing one-dimensional time domain signals into two-dimensional characteristic matrixes;
step 5, selecting a batch of data from the data set χ, randomly selecting two data augmentation modes T_a, T_b, and calculating the sample-pair contrast loss L_ins;
step 6, calculating the cluster contrast loss L_clu and the overall loss value L, and updating the parameters of the networks f, g_I, g_C by minimizing L;
step 7, for each sample x in the data set, extracting features by h = f(x) and calculating the cluster assignment of each sample by c = argmax g_C(h);
step 8, outputting the one-hot code of each cluster to complete the unsupervised communication radiation source individual identification.
2. The method for identifying an unsupervised communication radiation source individual based on bispectrum feature contrast learning as claimed in claim 1, wherein contrast learning is performed by utilizing the similarity between the original signal sample x and its positive examples and the difference from its negative examples, and more discriminative feature representations h_a, h_b are extracted for the downstream classification task;
according to the prior knowledge that the feature representations h_a, h_b obtained from the same data sample through the two augmentation modes belong to the same class during clustering, a cluster loss function is designed and contrast learning is performed at the cluster level to complete the classification and recognition task;
a positive sample pair is defined as the two augmented samples of the same signal sample, and all other sample pairs are defined as negative sample pairs.
3. The method for identifying an individual unsupervised communication radiation source based on bispectrum feature contrast learning as claimed in claim 1, wherein the network model in step 1 is specifically as follows:
the one-dimensional time sequence is turned into a two-dimensional feature matrix and input into the contrast learning module for training, and a residual network is selected as the backbone network for contrast learning;
the residual network is constructed from residual blocks, where a residual block consists of several cascaded convolution layers and a shortcut connection; the outputs of the two branches are accumulated and then activated with the ReLU function to obtain the output, and residual blocks are connected in series to form a deeper network; network parameters are optimized through experiments, and the network model for feature contrast learning is determined:
when training a model, taking a bispectrum feature map of 5 ultrashort wave radio stations as the input of a network, taking 1500 data samples from each radio station, and calculating bispectrum features after data augmentation processing of each sample; each sample contains 4096 data points with feature matrix dimensions set to 128 x 128; randomly selecting 60% of data from the data as a training set, 20% as a verification set and 20% as a test set;
ResNet-18 and ResNet-34 are respectively selected as the backbone network, the network optimization method is Adam, the initial learning rate is set to 0.001, the number of training epochs E is set to 400, and the batch size N is set to 32; recognition results corresponding to the different network depths are obtained, and ResNet-34 is finally selected as the backbone network for contrast feature extraction.
4. The method for unsupervised communication radiation source individual identification based on bispectrum feature contrast learning according to claim 2, wherein in step 5 the sample contrast loss L_ins is calculated as follows:
for a given signal sample x_i, two data augmentation modes T_a, T_b are selected at random with a set probability to construct a sample pair, giving two related samples denoted x_i^a and x_i^b; the two related samples obtained by augmenting the same sample are marked as a positive sample pair;
the sample pair is input into a parameter-sharing deep neural network f(·) for contrast training and feature extraction, yielding the new feature representations denoted h_i^a and h_i^b;
a two-layer nonlinear fully connected network g_I(·) maps the feature representations to the subspace in which the contrast loss is applied, and the contrast loss is computed on z_i^a = g_I(h_i^a) and z_i^b = g_I(h_i^b);
the pairwise similarity is measured by the cosine distance, i.e.

s(z_i^k1, z_j^k2) = (z_i^k1 · z_j^k2) / (‖z_i^k1‖ ‖z_j^k2‖),

where k1, k2 ∈ {a, b} and i, j ∈ [1, N];
N segments of radiation source signal samples are intercepted, and the two augmentation modes are applied to each sample x_i, giving 2N augmented signal samples {x_1^a, …, x_N^a, x_1^b, …, x_N^b}; for a specific sample x_i^a there are 2N−1 sample pairs; the augmented sample x_i^b associated with it is marked as its positive sample, giving the positive sample pair (x_i^a, x_i^b), and the remaining 2N−2 pairs are all marked as negative sample pairs;
in order to optimize the pairwise similarity over the whole radiation source data set, the contrast loss of a specific sample x_i^a is defined as follows:

ℓ_i^a = −log { exp(s(z_i^a, z_i^b)/τ_I) / Σ_{k∈{a,b}} Σ_{j=1}^{N} 1[(j,k)≠(i,a)] exp(s(z_i^a, z_j^k)/τ_I) },

where τ_I is a temperature hyper-parameter; in order to identify all positive pairs in the entire data set, the sample contrast loss is computed for every augmented sample, namely:

L_ins = (1/(2N)) Σ_{i=1}^{N} (ℓ_i^a + ℓ_i^b).
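The sample contrast loss above can be sketched in PyTorch as follows. This is a standard NT-Xent-style implementation consistent with the formulas in this claim, not the patent's reference code; the temperature default τ_I = 0.5 is an assumption.

```python
import torch
import torch.nn.functional as F

def instance_contrast_loss(z_a, z_b, tau_i=0.5):
    """Sample-level contrast loss L_ins over one batch.

    z_a, z_b: (N, d) projections g_I(h) of the two augmented views of the same
    N samples. The temperature default tau_i = 0.5 is an assumption.
    """
    n = z_a.size(0)
    z = F.normalize(torch.cat([z_a, z_b], dim=0), dim=1)  # 2N x d, unit norm
    sim = z @ z.t() / tau_i                               # cosine similarities / temperature
    sim.fill_diagonal_(float('-inf'))                     # drop self-pairs: 2N-1 candidates per sample
    # The positive of row i is its other augmented view (i <-> i + n).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)                  # averages l_i^a and l_i^b over all 2N rows
```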
5. The method for unsupervised communication radiation source individual identification based on bispectrum feature contrast learning according to claim 4, wherein in step 6 the cluster contrast loss L_clu is calculated as follows:
the obtained new feature representations h_i^a, h_i^b are input into the clustering network g_C(·) so that representations belonging to the same class are aggregated into the same cluster;
the dimension of the network output matrix Y is set to satisfy Y^a ∈ R^{N×M}, where N is the number of samples in each training batch and M is the number of clusters; Y^a and Y^b are the outputs of each batch of samples under the two data augmentations respectively; since each sample belongs to only one cluster, each row of Y should approach a one-hot distribution;
when a data sample is projected into a space whose dimension equals the number of clusters, the i-th element of its feature can be regarded as the probability of belonging to the i-th cluster, the i-th column of Y can be regarded as the representation of the i-th cluster, and all columns should differ from one another;
another two-layer fully connected network g_C(·) maps the feature h_i^a to an M-dimensional subspace, denoted y_i^a = g_C(h_i^a), where y_i^a is the i-th row of the matrix Y^a, i.e. the output of sample x_i through the clustering network under augmentation mode a;
denote the i-th column of Y^a as ŷ_i^a, i.e. the representation of the i-th cluster under the first data augmentation; likewise, ŷ_i^a and ŷ_i^b are combined to form the positive cluster pair (ŷ_i^a, ŷ_i^b), while the other 2M−2 cluster pairs are regarded as negative cluster pairs;
the cosine distance is used to measure the similarity between cluster pairs, namely:

s(ŷ_i^k1, ŷ_j^k2) = (ŷ_i^k1 · ŷ_j^k2) / (‖ŷ_i^k1‖ ‖ŷ_j^k2‖),

where k1, k2 ∈ {a, b} and i, j ∈ [1, M];
the following loss function is used to distinguish the cluster ŷ_i^a from all clusters other than ŷ_i^b:

ℓ̂_i^a = −log { exp(s(ŷ_i^a, ŷ_i^b)/τ_C) / Σ_{k∈{a,b}} Σ_{j=1}^{M} 1[(j,k)≠(i,a)] exp(s(ŷ_i^a, ŷ_j^k)/τ_C) },

where τ_C is the temperature hyper-parameter of the clustering network;
by traversing all clusters, and in order to avoid most samples being assigned to the same cluster, the cluster contrast loss L_clu is defined as follows:

L_clu = (1/(2M)) Σ_{i=1}^{M} (ℓ̂_i^a + ℓ̂_i^b) − H(Y),

where H(Y) is the entropy of the cluster-assignment probabilities, whose maximization prevents most samples from collapsing into a single cluster.
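A corresponding PyTorch sketch of the cluster contrast loss, operating on the columns of the cluster-assignment matrices Y^a and Y^b, is given below. The entropy regularizer H(Y) and the default τ_C = 1.0 follow the common contrastive-clustering formulation and are assumptions here, not the patent's reference code.

```python
import torch
import torch.nn.functional as F

def cluster_contrast_loss(y_a, y_b, tau_c=1.0):
    """Cluster-level contrast loss L_clu over one batch.

    y_a, y_b: (N, M) soft cluster assignments (softmax outputs) of the two views;
    their columns act as cluster representations. The entropy term and the
    default tau_c = 1.0 are assumptions.
    """
    m = y_a.size(1)
    c = F.normalize(torch.cat([y_a.t(), y_b.t()], dim=0), dim=1)  # 2M x N cluster vectors
    sim = c @ c.t() / tau_c
    sim.fill_diagonal_(float('-inf'))                             # 2M-1 candidate pairs per cluster
    targets = torch.cat([torch.arange(m) + m, torch.arange(m)]).to(c.device)
    contrast = F.cross_entropy(sim, targets)

    # Entropy H(Y) of the cluster-assignment probabilities, subtracted from the
    # loss so that minimizing it keeps clusters from collapsing into one class.
    p_a = y_a.sum(dim=0) / y_a.sum()
    p_b = y_b.sum(dim=0) / y_b.sum()
    entropy = -(p_a * (p_a + 1e-8).log()).sum() - (p_b * (p_b + 1e-8).log()).sum()
    return contrast - entropy
```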
6. The method for unsupervised communication radiation source individual identification based on bispectrum feature contrast learning according to claim 5, wherein in step 6 the overall loss value L is calculated as follows:
the feature contrast loss and the cluster contrast loss functions are optimized simultaneously as the optimization objective of the whole unsupervised communication radiation source classification and recognition network, namely:
L = L_ins + L_clu.
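Tying the pieces together, a minimal joint training step combining L_ins and L_clu might look as follows, reusing the backbone and the two loss sketches above. The projection width (128) and the cluster count (M = 5, matching the five radio stations in the experiment) are illustrative assumptions.

```python
import itertools
import torch
import torch.nn as nn

# Heads for the two subspaces; widths and the cluster count M = 5 are assumptions.
g_i = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))                   # projection head g_I
g_c = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 5), nn.Softmax(dim=1))  # clustering head g_C
# Redefine the optimizer so the heads are trained together with the backbone.
optimizer = torch.optim.Adam(
    itertools.chain(backbone.parameters(), g_i.parameters(), g_c.parameters()), lr=1e-3)

def training_step(x_a, x_b):
    """One optimization step on a batch of augmented pairs (x_a, x_b)."""
    h_a, h_b = backbone(x_a), backbone(x_b)                # shared-parameter feature extractor f(.)
    loss = (instance_contrast_loss(g_i(h_a), g_i(h_b))     # L_ins in the projection subspace
            + cluster_contrast_loss(g_c(h_a), g_c(h_b)))   # L_clu in the cluster subspace
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```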
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310242426.7A (CN116226721A) | 2023-03-14 | 2023-03-14 | Unsupervised communication radiation source individual identification method based on bispectrum feature contrast learning |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116226721A | 2023-06-06 |
Family
ID=86576769
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310242426.7A (CN116226721A, pending) | Unsupervised communication radiation source individual identification method based on bispectrum feature contrast learning | 2023-03-14 | 2023-03-14 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN116226721A (en) |
Cited By (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116821737A | 2023-06-08 | 2023-09-29 | 哈尔滨工业大学 (Harbin Institute of Technology) | Crack acoustic emission signal identification method based on improved weak supervision multi-feature fusion |
| CN116821737B | 2023-06-08 | 2024-04-30 | 哈尔滨工业大学 (Harbin Institute of Technology) | Crack acoustic emission signal identification method based on improved weak supervision multi-feature fusion |
| CN116628481A | 2023-07-21 | 2023-08-22 | 江西红声技术有限公司 (Jiangxi Hongsheng Technology Co., Ltd.) | Electronic countermeasure information source identification method, system, computer and readable storage medium |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |