CN113255545B - Communication radiation source individual identification method combining artificial features and depth features - Google Patents


Info

Publication number
CN113255545B
CN113255545B (application CN202110617295.7A)
Authority
CN
China
Prior art keywords
features
feature
fusion
radiation source
artificial
Prior art date
Legal status
Expired - Fee Related
Application number
CN202110617295.7A
Other languages
Chinese (zh)
Other versions
CN113255545A (en)
Inventor
杨俊安
刘辉
黄科举
陈浩
曲凌志
王一
呼鹏江
陆俊
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN202110617295.7A
Publication of CN113255545A
Application granted
Publication of CN113255545B
Legal status: Expired - Fee Related

Classifications

    • G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2411 Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G06F 18/253 Fusion techniques of extracted features
    • G06F 2218/06 Denoising by applying a scale-space analysis, e.g. using wavelet analysis
    • G06F 2218/08 Feature extraction (aspects of pattern recognition specially adapted for signal processing)
    • G06F 2218/12 Classification; Matching

Abstract

The invention discloses a communication radiation source individual identification method combining artificial features and depth features. An original communication signal is preprocessed and then subjected to artificial feature extraction and depth feature extraction separately; the artificial features and the depth features are fused, and a preliminary classification and identification step selects the currently optimal fusion feature; this fusion feature is then reduced in dimension and reconstructed with the relevant optimized parameters to obtain the reconstructed fusion feature, finally yielding the fusion feature with the strongest characterization capability. By combining data-driven and model-driven approaches, the method offers a clear advantage for individual identification of communication radiation sources under small-sample conditions.

Description

Communication radiation source individual identification method combining artificial features and depth features
Technical Field
The invention belongs to the technical field of radiation source monitoring, and particularly relates to a communication radiation source individual identification method combining artificial features and depth features.
Background
Communication radiation source individual identification technology analyzes fine features in a received communication signal to identify the individual radiation source that transmitted it. Because of physical differences among the internal components of a communication transmitter introduced during manufacturing, and differences among unit modules introduced during device debugging, even signals transmitted by communication radiation source devices of the same model under the same working conditions differ slightly. Since the technology identifies individual radiation sources solely from the physical-layer characteristics of the equipment, without deciphering the information carried by the transmitted signal, it has received wide attention in both the military and the civil fields.
Traditional communication radiation source individual identification methods mainly extract artificially defined features for classification and identification. Because artificially defined features are influenced by human subjective factors, their ability to characterize individual radiation source information is limited, which causes problems in practical application. These problems appear mainly in three aspects. First, robustness is poor: artificially defined features are inevitably affected by the communication scenario, communication parameters, channel environment and other factors, so that different individuals cannot be effectively distinguished in certain scenarios. Second, the characterization capability of the features is incomplete: artificially defined features mainly come from classical signal processing methods, such as higher-order spectra and time-frequency analysis, and cannot accurately represent all of the information in the signal, leading to low accuracy when classifying different data. Third, generalization ability is weak: changes in the state of the radiation source, such as frequency, bandwidth, transmission rate or modulation pattern, change the features and therefore cause a significant drop in recognition accuracy.
In recent years, deep learning methods have shown powerful learning capability in many fields, mainly in tasks such as image and speech recognition and natural language processing. A deep learning method automatically learns different feature representations of the samples for different tasks and, thanks to its multilayer nonlinear structure, the internal features learned from the data have strong information representation capability and can describe the data comprehensively. Applying deep learning to communication radiation source individual identification has two advantages. First, adaptability is strong: for different communication scenarios, communication parameters and channel environments, training the deep neural network with data collected under the corresponding conditions is enough to learn features that effectively distinguish different radiation source individuals. Second, the characterization capability of the features is strong: given a sufficient amount of training data, the multilayer nonlinear structure of a deep learning method can learn features with strong characterization capability and thus achieve high identification accuracy. Introducing deep learning methods and ideas into the communication radiation source individual identification task can therefore effectively overcome the limitation of traditional methods that rely only on artificial features.
However, the strong information characterization capability of deep learning methods generally requires sufficient training data, and in practical applications it is difficult to obtain enough signal data from a given communication radiation source to train a deep neural network, especially under non-cooperative communication conditions.
Disclosure of Invention
In order to solve this technical problem, the invention provides a communication radiation source individual identification method combining artificial features and depth features, which mainly comprises the following steps:
Step 1: after the signal is decomposed, artificial feature extraction is carried out;
Step 2: after the signal is preprocessed, depth feature extraction is carried out;
Step 3: the artificial features are fused with the depth features;
Step 4: the different fusion features are classified and identified by a support vector machine to find the optimal fusion feature;
Step 5: the optimal original fusion feature is reduced in dimension by principal component analysis and then reconstructed, and the reconstructed fusion feature under the optimal hyper-parameter is obtained through optimization to give the optimal signal characterization.
Further, performing artificial feature extraction after the signal is decomposed in step 1 comprises the following sub-steps:
Step 1.1: decomposing the signal into different mode components by three different decomposition methods;
Step 1.11: empirical mode decomposition, in which the decomposed original signal $X(t)$ is expressed as
$$X(t)=\sum_{i=1}^{n} imf_i(t)+r_n(t)$$
where $imf_i(t)$ is the $i$-th IMF component and $r_n(t)$ is the monotonic function of the decomposition residue;
Step 1.12: variational mode decomposition, in which the decomposed original signal $X(t)$ is expressed as $X(t)=\sum_{k}u_k(t)$, the mode components being obtained by solving the constrained variational problem
$$\min_{\{u_k\},\{\omega_k\}}\left\{\sum_{k}\left\|\partial_t\!\left[\left(\delta(t)+\frac{j}{\pi t}\right)*u_k(t)\right]e^{-j\omega_k t}\right\|_2^2\right\}\quad\text{s.t.}\quad\sum_{k}u_k(t)=X(t)$$
where $k$ denotes the number of mode decompositions, $\{u_k\}:=\{u_1,\dots,u_k\}$ and $\{\omega_k\}:=\{\omega_1,\dots,\omega_k\}$ denote all the mode components and their center frequencies, $\delta(t)$ is the Dirac function, $*$ is the convolution operator, $\partial_t$ denotes the derivative with respect to $t$, and $\|\cdot\|_2$ is the 2-norm;
Step 1.13: intrinsic time-scale decomposition, after which the original signal $X_t$ is expressed as
$$X_t=\sum_{k=1}^{p}HL^{k-1}X_t+L^{p}X_t$$
where $L$ is the baseline component extraction operator, $H=1-L$ is the rotation component extraction operator, $k$ and $p$ are natural numbers, and the superscripts indicate the range of values taken.
Step 1.2: and extracting the artificial features of the signals through the decomposed different mode components, wherein the artificial features of the signals at least comprise spectral symmetry coefficients, information dimensions, box dimensions, envelope features, frequency spectrum features, second-order, third-order and fourth-order cumulants.
Further, the artificial feature extraction in step 1 yields at least three 434-dimensional artificial feature vectors, one for each decomposition method.
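As one concrete illustration of the cumulant part of these artificial features, the sketch below computes second-, third- and fourth-order cumulants of a real-valued mode component with NumPy. Treating the component as real, removing its mean first, and using the standard zero-mean cumulant conventions are assumptions made for this example; the patent does not spell out the exact definitions it uses.

```python
import numpy as np

def cumulant_features(component: np.ndarray) -> np.ndarray:
    """Second-, third- and fourth-order cumulants of one real mode component.

    Assumed zero-mean conventions: c2 = E[x^2], c3 = E[x^3],
    c4 = E[x^4] - 3 E[x^2]^2.
    """
    x = component - component.mean()   # remove the mean first
    m2 = np.mean(x ** 2)
    m3 = np.mean(x ** 3)
    m4 = np.mean(x ** 4)
    return np.array([m2, m3, m4 - 3.0 * m2 ** 2])

# Example: cumulants of a toy mode component (one IMF-like damped oscillation).
t = np.linspace(0.0, 1.0, 1000)
imf_like = np.sin(2 * np.pi * 7 * t) * np.exp(-2 * t)
print(cumulant_features(imf_like))
```

In the full method, features like these would be computed for every mode component of every decomposition and stacked into the 434-dimensional vectors mentioned above.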
Further, performing depth feature extraction after the signal is preprocessed in step 2 comprises the following sub-step:
Step 2.1: the original signal $x(t)$ is preprocessed by the continuous wavelet transform, which maps it to its time-frequency energy distribution, and the result is fed into a ResNet-18 network for depth feature extraction:
$$CWT_x(a,b)=\frac{1}{\sqrt{a}}\int_{-\infty}^{+\infty}x(t)\,\varphi^{*}\!\left(\frac{t-b}{a}\right)dt$$
where $\varphi(t)$ is the mother wavelet function, $\varphi^{*}(t)$ is its complex conjugate, $a$ is the scale factor and $b$ is the translation factor.
Further, the depth feature extraction in step 2 finally yields a 512-dimensional depth feature vector.
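As a concrete illustration of this preprocessing step, the short sketch below computes a continuous-wavelet scalogram, i.e. the time-frequency energy distribution, for a toy signal using the PyWavelets package with a Morlet mother wavelet. The choice of library, wavelet name, sampling rate and scale grid are assumptions made for the example rather than details stated in the patent.

```python
import numpy as np
import pywt

fs = 8000.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 0.2, 1.0 / fs)
x = np.cos(2 * np.pi * 440.0 * t)            # toy stand-in for a received signal

scales = np.arange(1, 129)                   # assumed scale grid (coarse to fine)
coeffs, freqs = pywt.cwt(x, scales, 'morl', sampling_period=1.0 / fs)

# Squared magnitude of the CWT coefficients gives the time-frequency energy
# distribution that is later fed to the ResNet-18 feature extractor.
energy = np.abs(coeffs) ** 2
print(energy.shape)                          # (len(scales), len(x))
```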
Further, the fusion of the artificial features with the depth features in step 3 comprises the following. Let $X_{n\times x}$, $Y_{n\times y}$ and $Z_{n\times z}$ be three feature sets, where $n$ is the number of samples and $x$, $y$, $z$ are the feature dimensions; with $F$ denoting the fusion feature, the fusion feature obtained through the serial feature fusion strategy is
$$F_{n\times(x+y+z)}=[X;\,Y;\,Z].$$
further, the step 4 comprises the following substeps:
step 4.1, respectively using the single feature and the fusion feature as data to train and recognize;
step 4.2, setting the experiment as a training set femdVerification set fvmdAnd test set fitdEach class comprises 100 samples, each data set comprises 500 samples, and finally, an identification result is obtained;
step 4.3, using the fusion features obtained when fusing all artificial features with depth features: (
Figure 595367DEST_PATH_IMAGE012
) When individual identification is performed, the identification performance is the best, wherein fdeepIs a set of depth features.
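To make the classification step concrete, the sketch below trains a scikit-learn support vector machine on a fused feature matrix and compares it with a single-feature baseline. The random stand-in features, the RBF kernel and the 80/20 split are assumptions for illustration; the patent does not specify the SVM kernel or its hyper-parameters.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_classes, n_per_class = 5, 100
labels = np.repeat(np.arange(n_classes), n_per_class)

# Stand-ins for the feature sets: 434-dim artificial features from one
# decomposition and 512-dim depth features (random here, real features in practice).
f_emd = rng.normal(size=(labels.size, 434)) + labels[:, None] * 0.05
f_deep = rng.normal(size=(labels.size, 512)) + labels[:, None] * 0.05
f_fused = np.hstack([f_emd, f_deep])          # serial fusion of two feature sets

for name, feats in [("single (f_emd)", f_emd), ("fused", f_fused)]:
    X_tr, X_te, y_tr, y_te = train_test_split(
        feats, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```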
Further, in step 5 the optimal original fusion feature is reconstructed after principal component analysis (PCA) dimensionality reduction, and the reconstructed fusion feature under the optimal hyper-parameter is obtained through optimization, giving the optimal signal characterization; this comprises the following sub-steps:
Step 5.1: the original fusion feature matrix $X=\{x_1,x_2,\dots,x_n\}$ is expressed after principal component analysis (PCA) as
$$y=W^{T}(x-\mu),\qquad W=(v_1,v_2,\dots,v_k);$$
Step 5.2: the reconstructed fusion feature vector is expressed as
$$\hat{x}=Wy+\mu,\qquad W=(v_1,v_2,\dots,v_k),$$
where $\mu$ is the mean vector of $X$, $k$ is the number of selected eigenvalues, and $v$ denotes the eigenvectors corresponding to those eigenvalues;
Step 5.3: the final optimal reconstructed fusion feature, which characterizes the original communication signal, is obtained by optimizing the hyper-parameter $k$, the number of selected eigenvalues.
Further, in the optimization described in step 5, experiments on the training set and the validation set are used to optimize the training hyper-parameter $k_{train}$ and the validation hyper-parameter $k_{test}$, and the optimized values are then applied to the detection of the test data to obtain the final experimental result.
Further, the best recognition accuracy is obtained when $k_{train}=5$ and $k_{test}=4$.
The method combines data-driven and model-driven approaches: exploiting the fact that artificial features are not limited by the number of samples, it fuses the artificial features with the depth features to obtain signal features with stronger characterization capability, and experiments on actually collected signals show that the method achieves higher identification accuracy than single features.
Drawings
Fig. 1 is a flow chart of the feature-fusion-based communication radiation source individual identification method.
Fig. 2 is a schematic view of the process flow of dimension reduction and reconstruction in step 5.
Detailed Description
The object of the invention is to provide a communication radiation source individual identification method with high identification accuracy under small-sample conditions.
The technical solution that achieves this object is a communication radiation source individual identification method based on feature fusion, which combines model-driven and data-driven approaches and comprises the following steps:
Step 1: after the signal is decomposed, artificial feature extraction is carried out;
Step 2: after the signal is preprocessed, depth feature extraction is carried out;
Step 3: the artificial features are fused with the depth features;
Step 4: the different fusion features are classified and identified by a support vector machine (SVM) to find the optimal fusion feature;
Step 5: the optimal original fusion feature is reduced in dimension by principal component analysis and then reconstructed, and the reconstructed fusion feature under the optimal hyper-parameter is obtained through optimization to give the optimal signal characterization.
Further, after the signal is decomposed in step 1, the artificial feature extraction is performed, specifically as follows:
step 1.1: decomposing the signal into different mode components by three different decomposition methods;
(1) Empirical Mode Decomposition (EMD). After decomposition, the original signal $X(t)$ can be expressed as
$$X(t)=\sum_{i=1}^{n} imf_i(t)+r_n(t)$$
where $imf_i(t)$ is the $i$-th intrinsic mode function (IMF) component and $r_n(t)$ is the monotonic function of the decomposition residue (a minimal numerical sketch of this decomposition is given after item (3) below).
(2) Variational Mode Decomposition (VMD). After decomposition, the original signal $X(t)$ can be expressed as $X(t)=\sum_{k}u_k(t)$, where the mode components are obtained by solving the constrained variational problem
$$\min_{\{u_k\},\{\omega_k\}}\left\{\sum_{k}\left\|\partial_t\!\left[\left(\delta(t)+\frac{j}{\pi t}\right)*u_k(t)\right]e^{-j\omega_k t}\right\|_2^2\right\}\quad\text{s.t.}\quad\sum_{k}u_k(t)=X(t)$$
with $k$ denoting the number of mode decompositions, $\{u_k\}:=\{u_1,\dots,u_k\}$ and $\{\omega_k\}:=\{\omega_1,\dots,\omega_k\}$ denoting all the mode components and their center frequencies, $\delta(t)$ the Dirac function, $*$ the convolution operator, $\partial_t$ the derivative with respect to $t$, and $\|\cdot\|_2$ the 2-norm.
(3) Intrinsic Time-scale Decomposition (ITD). After decomposition, the original signal $X_t$ can be expressed as
$$X_t=\sum_{k=1}^{p}HL^{k-1}X_t+L^{p}X_t$$
where $L$ is the baseline component extraction operator, $H=1-L$ is the rotation component extraction operator, $k$ and $p$ are natural numbers, and the superscripts indicate the range of values taken.
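The following sketch illustrates the first of these three decompositions numerically: it runs EMD on a toy two-tone signal and checks the identity from item (1), namely that the IMF components plus the residue reconstruct the original signal. The PyEMD package and its get_imfs_and_residue interface are assumptions of this example; VMD and ITD would be applied analogously with their own implementations.

```python
import numpy as np
from PyEMD import EMD  # assumes the PyEMD package (pip name: EMD-signal) is installed

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)  # toy two-tone signal

emd = EMD()
emd.emd(x)                                   # run the sifting process
imfs, residue = emd.get_imfs_and_residue()   # imf_1(t)..imf_n(t) and r_n(t)

# Check the decomposition identity X(t) = sum_i imf_i(t) + r_n(t):
reconstruction = imfs.sum(axis=0) + residue
print(imfs.shape[0], np.allclose(reconstruction, x))
```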
Step 1.2: the artificial features of the signals are extracted through the decomposed different mode components, and the artificial features mainly comprise spectral symmetry coefficients, information dimensions, box dimensions, envelope features, spectrum features, second-order, third-order and fourth-order cumulants.
Further, after the signal is preprocessed in step 2, depth feature extraction is performed, specifically as follows:
preprocessing an original signal by Continuous Wavelet Transform (CWT), putting the preprocessed signal into a ResNet-18 network for deep feature extraction, and preprocessing the preprocessed signalx(t) For its time-frequency energy distribution:
Figure 371311DEST_PATH_IMAGE010
whereinφ(t) As a function of the mother wavelet,φ * (t) As a complex function thereof.aIs a scale factor, and is a function of,bis a translation factor.
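The sketch below shows one way to obtain a 512-dimensional depth feature from such a time-frequency image with torchvision's ResNet-18: the network is truncated before its final fully connected layer, whose input is exactly 512-dimensional. Resizing the scalogram to 224x224, stacking it into three channels and using an untrained backbone are assumptions made for the example; the patent does not state these preprocessing details.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Toy scalogram standing in for the CWT time-frequency energy distribution.
scalogram = np.abs(np.random.randn(128, 1000)).astype(np.float32)

# Assumed preprocessing: resize to 224x224 and repeat to 3 channels for ResNet-18.
img = torch.from_numpy(scalogram)[None, None]            # (1, 1, H, W)
img = nn.functional.interpolate(img, size=(224, 224), mode="bilinear",
                                align_corners=False)
img = img.repeat(1, 3, 1, 1)                             # (1, 3, 224, 224)

# Drop the final fully connected layer so the output is the 512-dim feature.
backbone = resnet18(weights=None)                        # torchvision >= 0.13 API
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()

with torch.no_grad():
    feat = feature_extractor(img).flatten(1)             # (1, 512)
print(feat.shape)
```

Truncating before the fully connected layer is what makes the output 512-dimensional for ResNet-18; a pretrained or fine-tuned backbone could be substituted by passing suitable weights.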
Further, the fusing of the artificial features and the depth features in step 3 is specifically as follows:
Let $X_{n\times x}$, $Y_{n\times y}$ and $Z_{n\times z}$ be three feature sets, where $n$ is the number of samples and $x$, $y$, $z$ are the feature dimensions. With $F$ denoting the fusion feature, the fusion feature obtained through the serial feature fusion strategy is
$$F_{n\times(x+y+z)}=[X;\,Y;\,Z].$$
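Reading the serial fusion strategy as sample-wise concatenation along the feature axis (an interpretation, since the bracket notation is not defined further in the text), the following NumPy sketch builds F from three stand-in feature sets and checks its shape.

```python
import numpy as np

n = 500                                   # number of samples
rng = np.random.default_rng(1)
X = rng.normal(size=(n, 434))             # e.g. artificial features from EMD
Y = rng.normal(size=(n, 434))             # e.g. artificial features from VMD
Z = rng.normal(size=(n, 512))             # e.g. depth features

# Serial feature fusion: concatenate along the feature axis.
F = np.hstack([X, Y, Z])
print(F.shape)                            # (500, 1380) = (n, x + y + z)
```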
further, in step 5, the optimal original fusion features selected in step 4 are reconstructed after being subjected to Principal Component Analysis (PCA) dimension reduction, and the reconstructed fusion features under the optimal hyper-parameters are obtained by optimizing, so as to obtain the optimal signal characterization, which is specifically as follows:
raw fusion feature matrix
Figure 823207DEST_PATH_IMAGE013
After PCA is expressed as:
Figure 793437DEST_PATH_IMAGE014
the reconstructed fused feature vector is represented as:
Figure 587474DEST_PATH_IMAGE015
whereinμIs thatXThe mean value vector of (a) is,kin order to select the number of the characteristic values,vand the feature vectors are corresponding to the feature values. By means of the pair of superparameterskAnd optimizing to obtain the final optimal reconstruction fusion characteristics to characterize the original communication signals.
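A minimal NumPy sketch of this reduce-then-reconstruct step is given below: it forms W from the top-k eigenvectors of the sample covariance matrix, projects with y = W^T(x - mu), and reconstructs with x_hat = W y + mu, the standard PCA inverse mapping assumed to correspond to the reconstruction formula above.

```python
import numpy as np

def pca_reduce_and_reconstruct(X: np.ndarray, k: int):
    """Project each row of X onto the top-k principal components and map back.

    X has shape (n_samples, n_features); returns (Y, X_hat).
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    W = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # top-k eigenvectors as columns
    Y = Xc @ W                                     # y = W^T (x - mu), row-wise
    X_hat = Y @ W.T + mu                           # x_hat = W y + mu
    return Y, X_hat

rng = np.random.default_rng(2)
F = rng.normal(size=(500, 200))   # stand-in fused feature matrix (narrow, for speed)
Y, F_hat = pca_reduce_and_reconstruct(F, k=5)
print(Y.shape, F_hat.shape)       # (500, 5) (500, 200)
```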
The following detailed description of embodiments of the invention refers to the accompanying drawings.
With reference to Fig. 1, the feature-fusion-based communication radiation source individual identification method of the invention comprises the following steps:
Step 1: the preprocessed signal shown in Fig. 2 is subjected to the three signal decompositions to obtain the mode components, from which the artificial features are then extracted. The extracted artificial features are the spectral symmetry coefficient, information dimension, box dimension, envelope feature, spectrum feature and second-, third- and fourth-order cumulants, which capture the nonlinear, non-Gaussian and non-stationary characteristics of the communication signal. The artificial feature extraction yields three 434-dimensional artificial feature vectors, one per decomposition method;
Step 2: the preprocessed signal is transformed by the continuous wavelet transform to obtain its time-frequency energy distribution, which is used as the input of a deep residual network (ResNet-18) for depth feature extraction, finally yielding a 512-dimensional depth feature vector;
Step 3: the artificial features and the depth features are combined in different permutations using the serial feature fusion strategy to obtain several different fusion features;
Step 4: the single features and the fusion features are used separately as data for training and recognition. The experimental data consist of a training set, a validation set and a test set, with 100 samples per class and 500 samples per data set. The recognition results shown in Table 1 and Fig. 2 are finally obtained.
Table 1. Recognition results of the different features on the actually collected signals
As the experimental results in Table 1 and Fig. 2 show, the fused features achieve better recognition accuracy than the single features. The best identification performance, 81.8%, is obtained when individual identification uses the fusion feature that combines all artificial features with the depth features ($f_{deep}+f_{emd}+f_{itd}+f_{vmd}$); removing any one artificial feature set reduces the recognition accuracy by about 1%;
Step 5: the currently optimal fusion feature obtained in step 4 is processed through the flow shown in Fig. 2. Note that the PCA and reconstruction procedures differ slightly between the training data and the test data: the training data are labeled and can therefore be processed class by class, whereas the test data are unlabeled and can only be processed as a whole. In addition, to avoid destroying the fine features contained in the fusion feature, the initial normalization of the data is removed during this processing. In the reconstruction, the choice of eigenvectors affects the reconstruction result and therefore the final identification accuracy. The training hyper-parameter $k_{train}$ and the validation hyper-parameter $k_{test}$ are optimized through experiments on the training set and the validation set, and the optimized values are applied to the detection of the test data to obtain the final experimental result. The best recognition accuracy, 83.8%, is obtained when $k_{train}=5$ and $k_{test}=40$.
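To illustrate how the two hyper-parameters might be tuned, the sketch below grid-searches k_train (PCA reconstruction applied per class to the labeled training features) and k_test (PCA reconstruction applied to the validation features as a whole) and keeps the pair with the best validation accuracy. The per-class versus whole-set treatment follows the description above; the search ranges, the SVM settings and the random stand-in data are assumptions made for the example.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def pca_reconstruct(X, k):
    """Standard PCA reduce-then-reconstruct: x_hat = W W^T (x - mu) + mu."""
    mu = X.mean(axis=0)
    Xc = X - mu
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    W = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return Xc @ W @ W.T + mu

rng = np.random.default_rng(3)
n_cls, n_per = 5, 100
y_tr = np.repeat(np.arange(n_cls), n_per)
y_va = np.repeat(np.arange(n_cls), n_per)
F_tr = rng.normal(size=(y_tr.size, 200)) + y_tr[:, None] * 0.1   # stand-in fused features
F_va = rng.normal(size=(y_va.size, 200)) + y_va[:, None] * 0.1

best = (None, None, -1.0)
for k_train in range(2, 11):                                     # assumed search range
    # Training data are labeled, so reconstruct each class separately.
    R_tr = np.vstack([pca_reconstruct(F_tr[y_tr == c], k_train) for c in range(n_cls)])
    clf = SVC(kernel="rbf").fit(R_tr, y_tr)                      # y_tr is grouped by class
    for k_test in range(2, 51, 4):                               # assumed search range
        # Validation (and later test) data are unlabeled and processed as a whole.
        R_va = pca_reconstruct(F_va, k_test)
        acc = accuracy_score(y_va, clf.predict(R_va))
        if acc > best[2]:
            best = (k_train, k_test, acc)
print("best (k_train, k_test, validation accuracy):", best)
```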
Finally, it should be noted that the above embodiments are only intended to illustrate, not to limit, the technical solutions of the embodiments of the present invention; although the embodiments of the present invention have been described in detail with reference to the above preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to these technical solutions without departing from their spirit and scope.

Claims (7)

1. A communication radiation source individual identification method combining artificial features and depth features, characterized by mainly comprising the following steps:
Step 1: after the communication radiation source electromagnetic signal received by the receiver is decomposed, artificial feature extraction is carried out; the extracted artificial features comprise at least three 434-dimensional artificial feature vectors, obtained respectively;
Step 2: after the communication radiation source electromagnetic signal received by the receiver is preprocessed, depth feature extraction is carried out, finally yielding a 512-dimensional depth feature;
Step 3: the artificial features are fused with the depth features;
Step 4: the different fusion features are classified and identified by a support vector machine to find the optimal fusion feature;
Step 5: the optimal original fusion feature obtained in step 4 is reconstructed after principal component analysis (PCA) dimensionality reduction, and the reconstructed fusion feature under the optimal hyper-parameter is obtained through optimization to give the optimal signal characterization, comprising the following sub-steps:
Step 5.1: letting the original fusion feature matrix be $X=\{x_1,x_2,\dots,x_n\}$, it is expressed after principal component analysis (PCA) as
$$y=W^{T}(x-\mu),\qquad W=(v_1,v_2,\dots,v_k);$$
Step 5.2: the reconstructed fusion feature vector is expressed as
$$\hat{x}=Wy+\mu,\qquad W=(v_1,v_2,\dots,v_k),$$
where $\mu$ is the mean vector of $X$, $k$ is the number of selected eigenvalues, and $v$ denotes the eigenvectors corresponding to those eigenvalues;
Step 5.3: the final optimal reconstructed fusion feature, which characterizes the original communication signal, is obtained by optimizing the hyper-parameter $k$, the number of selected eigenvalues.
2. The communication radiation source individual identification method according to claim 1, characterized in that performing artificial feature extraction after the signal is decomposed in step 1 comprises the following sub-steps:
Step 1.1: decomposing the signal into different mode components by three different decomposition methods;
Step 1.11: empirical mode decomposition, in which the decomposed original signal $X(t)$ is expressed as
$$X(t)=\sum_{i=1}^{n} imf_i(t)+r_n(t)$$
where $imf_i(t)$ is the $i$-th IMF component and $r_n(t)$ is a monotonic function representing the decomposition residue;
Step 1.12: variational mode decomposition, in which the decomposed original signal $X(t)$ is expressed as $X(t)=\sum_{k}u_k(t)$, the mode components being obtained by solving the constrained variational problem
$$\min_{\{u_k\},\{\omega_k\}}\left\{\sum_{k}\left\|\partial_t\!\left[\left(\delta(t)+\frac{j}{\pi t}\right)*u_k(t)\right]e^{-j\omega_k t}\right\|_2^2\right\}\quad\text{s.t.}\quad\sum_{k}u_k(t)=X(t)$$
where $k$ denotes the number of mode decompositions, $\{u_k\}:=\{u_1,\dots,u_k\}$ and $\{\omega_k\}:=\{\omega_1,\dots,\omega_k\}$ denote all the mode components and their center frequencies, $\delta(t)$ is the Dirac function, $*$ is the convolution operator, $\partial_t$ denotes the derivative with respect to $t$, and $\|\cdot\|_2$ is the 2-norm;
Step 1.13: intrinsic time-scale decomposition, after which the original signal $X_t$ is expressed as
$$X_t=\sum_{k=1}^{p}HL^{k-1}X_t+L^{p}X_t$$
where $L$ is the baseline component extraction operator, $H=1-L$ is the rotation component extraction operator, $k$ and $p$ are natural numbers, and the superscripts indicate the range of values taken;
Step 1.2: extracting the artificial features of the signal from the decomposed mode components, the artificial features of the signal comprising at least the spectral symmetry coefficient, information dimension, box dimension, envelope features, spectrum features, and second-, third- and fourth-order cumulants.
3. The communication radiation source individual identification method according to claim 1, characterized in that performing depth feature extraction after the signal is preprocessed in step 2 comprises the following sub-step:
Step 2.1: after the original signal $x(t)$ is preprocessed by the continuous wavelet transform, which maps it to its time-frequency energy distribution, it is fed into a ResNet-18 network for depth feature extraction:
$$CWT_x(a,b)=\frac{1}{\sqrt{a}}\int_{-\infty}^{+\infty}x(t)\,\varphi^{*}\!\left(\frac{t-b}{a}\right)dt$$
where $\varphi(t)$ is the mother wavelet function, $\varphi^{*}(t)$ is its complex conjugate, $a$ is the scale factor and $b$ is the translation factor.
4. The radiation source individual identification method according to claim 1, characterized in that the fusion of the artificial features and the depth features in step 3 comprises:
letting $X_{n\times x}$, $Y_{n\times y}$ and $Z_{n\times z}$ be three feature sets, where $n$ is the number of samples and $x$, $y$, $z$ are the feature dimensions, and denoting the fusion feature by $F$, the fusion feature obtained through the serial feature fusion strategy being
$$F_{n\times(x+y+z)}=[X;\,Y;\,Z].$$
5. The radiation source individual identification method according to claim 1, characterized in that step 4 comprises the following sub-steps:
Step 4.1: using the single features and the fusion features separately as data for training and recognition;
Step 4.2: setting up the experiment with a training set, a validation set and a test set, each class containing 100 samples, and obtaining the recognition results, where $f_{emd}$, $f_{vmd}$ and $f_{itd}$ denote the artificial feature sets extracted from the EMD, VMD and ITD mode components, respectively;
Step 4.3: obtaining the best identification performance when individual identification uses the fusion feature obtained by fusing all artificial features with the depth features, $f_{deep}+f_{emd}+f_{itd}+f_{vmd}$, where $f_{deep}$ is the depth feature set.
6. The radiation source individual identification method according to claim 1, characterized in that, in the optimization described in step 5, experiments on the training set and the validation set are used to optimize the training hyper-parameter $k_{train}$ and the validation hyper-parameter $k_{test}$, and the optimized values are applied to the detection of the corresponding test data to obtain the final experimental result.
7. The radiation source individual identification method according to claim 6, characterized in that the best recognition accuracy is obtained when $k_{train}=5$ and $k_{test}=4$.
CN202110617295.7A 2021-06-03 2021-06-03 Communication radiation source individual identification method combining artificial features and depth features Expired - Fee Related CN113255545B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110617295.7A CN113255545B (en) 2021-06-03 2021-06-03 Communication radiation source individual identification method combining artificial features and depth features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110617295.7A CN113255545B (en) 2021-06-03 2021-06-03 Communication radiation source individual identification method combining artificial features and depth features

Publications (2)

Publication Number Publication Date
CN113255545A CN113255545A (en) 2021-08-13
CN113255545B (en) 2021-09-21

Family

ID=77186259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110617295.7A Expired - Fee Related CN113255545B (en) 2021-06-03 2021-06-03 Communication radiation source individual identification method combining artificial features and depth features

Country Status (1)

Country Link
CN (1) CN113255545B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114492604A (en) * 2022-01-11 2022-05-13 电子科技大学 Radiation source individual identification method under small sample scene

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180284758A1 (en) * 2016-05-09 2018-10-04 StrongForce IoT Portfolio 2016, LLC Methods and systems for industrial internet of things data collection for equipment analysis in an upstream oil and gas environment
KR20210018600A (en) * 2019-08-06 2021-02-18 Hyundai Motor Company System for recognizing facial expression

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108268837A (en) * 2017-12-31 2018-07-10 厦门大学 Emitter Fingerprint feature extracting method based on Wavelet Entropy and chaotic characteristic
CN109307862A (en) * 2018-07-05 2019-02-05 西安电子科技大学 A kind of target radiation source individual discrimination method
CN110197209A (en) * 2019-05-15 2019-09-03 电子科技大学 A kind of Emitter Recognition based on multi-feature fusion
CN112613423A (en) * 2020-12-26 2021-04-06 北京工业大学 Epilepsia electroencephalogram signal identification method based on machine learning
CN112613443A (en) * 2020-12-29 2021-04-06 北京理工大学重庆创新中心 Robustness communication radiation source intelligent identification method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Keju Huang et al., "Deep adversarial neural network for specific emitter identification under varying frequency," Bulletin of the Polish Academy of Sciences: Technical Sciences, vol. 69, no. 2, pp. 1-9, April 2021. *
Gui Yunchuan et al., "Feature extraction algorithm for communication emitters based on the intrinsic time-scale decomposition model," Application Research of Computers, vol. 34, no. 4, pp. 1172-1175, April 2017. *

Also Published As

Publication number Publication date
CN113255545A (en) 2021-08-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210921