CN114118131A - Attention mechanism-based multi-feature fusion wireless equipment radio frequency fingerprint extraction method


Info

Publication number: CN114118131A
Application number: CN202111148113.2A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Prior art keywords: radio frequency, feature, layer, attention, dimensional
Inventors: 刘铭, 王鑫, 韩晓艺, 张天壮, 程慈航, 彭林宁, 徐宇轩, 张军霞, 任佳鑫
Original and current assignee: Beijing Jiaotong University
Application filed by Beijing Jiaotong University, priority to CN202111148113.2A

Classifications

    • G06F 2218/08 Aspects of pattern recognition specially adapted for signal processing: feature extraction
    • G06F 18/214 Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/253 Pattern recognition: fusion techniques of extracted features
    • G06N 3/045 Neural networks: architecture, combinations of networks
    • G06N 3/08 Neural networks: learning methods
    • G06F 2218/12 Aspects of pattern recognition specially adapted for signal processing: classification; matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention relates to an attention-mechanism-based multi-feature fusion wireless equipment radio frequency fingerprint extraction method, which comprises the following steps. Step 1: extract the signal features of a wireless transmitter caused by three different hardware defects, namely carrier frequency offset, nonlinearity and frequency response distortion, to obtain radio frequency fingerprints of three dimensions. Step 2: use single-dimension feature extraction modules to extract the carrier frequency offset feature, the nonlinear feature and the frequency response distortion feature respectively, obtaining three single-dimension radio frequency fingerprint features. Step 3: process the three single-dimension radio frequency fingerprint features with a multi-feature fusion module to obtain a multi-feature fused radio frequency fingerprint. Step 4: pass the multi-feature fused radio frequency fingerprint through a fully connected layer to obtain the classification of the wireless equipment, completing the training of the neural network. Step 5: extract the features of the signal to be identified as in step 1 to obtain its radio frequency fingerprint, and identify that fingerprint with the trained neural network to complete the classification of the wireless transmitter.

Description

Attention mechanism-based multi-feature fusion wireless equipment radio frequency fingerprint extraction method
Technical Field
The invention relates to the field of communication and information security, in particular to a multi-feature fusion wireless equipment radio frequency fingerprint extraction method based on an attention mechanism.
Background
Because the actual parameters of a device's electronic components always deviate to some extent from their nominal values, transmitters assembled from components of the same manufacturer and the same production batch still show distinguishable differences. Specifically, the frequency offset and phase noise of the local oscillator, the nonlinearity of the digital-to-analog converter, the mixer and the power amplifier, and the filter characteristics act together to form a radio frequency fingerprint with individually distinguishing characteristics. Fig. 1 illustrates the radio frequency front end of a typical wireless signal transmitter. The baseband signal first undergoes digital signal processing and then enters the modulation stage; modulation is carried out by analog circuits such as the digital-to-analog converter, quadrature modulator, up-converter, power amplifier and radio frequency oscillator, and the radio frequency signal is finally transmitted through the antenna. In this process, the parameter errors of the analog signal-processing devices are the source of the radio frequency fingerprint characteristics.
Reference [1] adopts a scheme that classifies and identifies devices with composite radio frequency fingerprint features: several modulation-domain features are combined, namely frequency offset, clustering center, constellation offset and I/Q offset, and a K-means clustering algorithm is used to evaluate the device identification performance.
Most existing radio frequency fingerprint extraction techniques are based on a single hardware defect and therefore have limited representation capability, so the discriminability of the fingerprint features needs to be improved.
Existing radio frequency fingerprint extraction methods based on multiple signal features combine those features in a simplistic way: the key information within each feature is not fully exploited and the differing importance of the features is not considered, so these methods have low robustness and their identification accuracy still needs to be improved.
The invention aims to provide an attention-mechanism-based multi-feature fusion radio frequency fingerprint extraction method that fuses the signal features caused by various hardware defects, strengthens the key information within each feature, and accounts for the differing importance of the feature dimensions, thereby improving the discriminability of the radio frequency fingerprint and the accuracy of device identity identification.
Disclosure of Invention
With the rapid development of wireless network technologies typified by 5G, a large number of wireless devices need to access the network. Wireless networks communicate over an open medium and therefore face a non-negligible threat to information security. Current wireless networks rely on traditional identity authentication mechanisms based on high-layer cryptographic protocols; once a user's identity information is stolen, the network is exposed to masquerading attacks by malicious users. Radio frequency fingerprinting is currently one of the user identification means in wireless network security protection that is least susceptible to masquerading attacks: it uniquely determines the transmitter of a signal by analysing characteristics carried in the wireless signal that reflect inherent defects of the transmitter hardware. However, conventional methods that extract features for a single hardware defect suffer from insufficient representation capability and insufficient recognition accuracy. The invention therefore provides a multi-feature fusion radio frequency fingerprint extraction method based on an attention mechanism. The method fuses the signal features caused by various hardware defects, expands the feature dimensionality of the radio frequency fingerprint, strengthens the key information within the signal features with an attention mechanism, and improves the accuracy of device identity identification.
In order to achieve the above purposes, the technical scheme adopted by the invention is as follows:
a multi-feature fusion wireless device radio frequency fingerprint extraction method based on an attention mechanism specifically comprises the following steps:
step 1: extracting signal characteristics of a wireless transmitter caused by three different hardware defects of carrier frequency offset, nonlinearity and frequency response distortion to obtain three-dimensional radio frequency fingerprints, wherein the three-dimensional radio frequency fingerprints and corresponding tags of the wireless transmitter form a data set for training a neural network;
step 2: based on the three-dimensional radio frequency fingerprints obtained in the step (1), respectively extracting carrier frequency offset characteristics, nonlinear characteristics and frequency response distortion characteristics by using three single-dimensional characteristic extraction modules to obtain three single-dimensional radio frequency fingerprint extraction characteristics;
step 3: processing the extracted features of the three single-dimensional radio frequency fingerprints obtained in step 2 by using a multi-feature fusion module to obtain a multi-feature fused radio frequency fingerprint;
step 4: based on the multi-feature fused radio frequency fingerprint obtained in step 3, obtaining the classification of the wireless transmitters through a full connection layer, and completing the training of the neural network;
step 5: extracting the signal characteristics of a wireless transmitter to be identified, which are caused by the three different hardware defects of carrier frequency offset, nonlinearity and frequency response distortion, based on step 1 to obtain three-dimensional radio frequency fingerprints; and further identifying the radio frequency fingerprints by using the neural network trained in step 4 to complete the classification of the wireless transmitters.
On the basis of the scheme, the step 1 specifically comprises the following steps:
step 1.1, extracting carrier frequency offset features based on difference, in the communication process of wireless equipment, a receiver performs difference operation on received signals, and radio frequency fingerprints of carrier frequency offset dimensions are obtained after visualization processing, wherein the difference operation is specifically represented as follows:
step 1.1.1 the signal transmitted by the transmitter is expressed as:
S(t) = X(t) · e^{j2π·f_cTx·t}    (1)
where S(t) represents the transmitted signal, X(t) is the transmitter baseband signal, j is the imaginary unit, and f_cTx is the carrier frequency of the transmitter;
step 1.1.2 ignores the influence of channel, noise and other factors for clarity, where the signal received by the receiver is denoted as r (t), and r (t) is denoted as s (t);
the receiver down-converts to obtain a baseband signal represented as:
Y(t) = r(t) · e^{-j(2π·f_cRx·t + φ)}    (2)
where f_cRx is the carrier frequency of the receiver and φ is the phase error of the receiver when receiving the signal;
when f_cRx ≠ f_cTx, the baseband signal obtained by down-conversion at the receiver is represented as:
Y(t) = X(t) · e^{-j(2π·θ·t + φ)}    (3)
where θ = f_cRx - f_cTx is the difference between the carrier frequency of the receiver and the carrier frequency of the transmitter.
Step 1.1.3 the differential processing procedure is expressed as:
D(t) = Y(t + d) · Y*(t)    (4)
where D(t) is the differential result, Y* denotes the complex conjugate, and d is the differential interval;
after the differential processing, the phase rotation factor e^{-j2π·θ·d} in the differential result D(t) is a fixed value that depends only on the differential interval and does not change with time.
Step 1.1.4: the differentiated signal is drawn directly on the complex plane, the complex plane is then discretized into a series of pixel points, and the number of differential results falling into each pixel region is represented by different RGB values, giving the visualized differential constellation trace figure, i.e. the radio frequency fingerprint of the carrier frequency offset dimension;
step 1.2, extracting nonlinear features based on a bispectrum domain to obtain a nonlinear dimension radio frequency fingerprint, which specifically comprises the following steps:
step 1.2.1 signal data segmentation:
the captured signal data of length N, {r(0), r(1), …, r(N-1)}, is divided into K segments of M observation samples each, so that N = KM; the mean of each segment is subtracted from its M observation samples; r^(k)(n) denotes the nth observation sample of the kth segment of signal data, n = 0, 1, …, M-1, k = 1, 2, …, K;
step 1.2.2 calculating discrete fourier transform coefficients:
Y^(k)(λ) = (1/M) · Σ_{n=0}^{M-1} r^(k)(n) · e^{-j2πnλ/M}    (5)
where λ = 0, 1, …, M/2 is the discrete frequency variable; on this basis the discrete Fourier transform coefficients are obtained;
step 1.2.3: computing the triple correlation of the discrete Fourier transform coefficients and performing frequency-domain smoothing over the ±L1 frequency components around each frequency sampling point:
b_k(λ1, λ2) = (1/Δ0²) · Σ_{i1=-L1}^{L1} Σ_{i2=-L1}^{L1} Y^(k)(λ1+i1) · Y^(k)(λ2+i2) · Y^(k)*(λ1+λ2+i1+i2)    (6)
where λ1, λ2 are the discrete frequency variables of the bispectrum domain, 0 ≤ λ2 ≤ λ1 and λ1 + λ2 ≤ fs/2; fs is the sampling frequency; Δ0 = fs/N0 is the frequency sampling interval in the bispectrum domain; N0 is the total number of frequency samples; i1, i2 are variables used to traverse the neighbouring frequency components during the accumulation; N1 = 2L1 + 1 is the length of the bispectral smoothing over adjacent frequencies, and N0 and N1 satisfy M = N0 · N1;
step 1.2.4: with ω1, ω2 denoting the angular frequencies of the bispectrum domain, substituting ω1 = 2π·fs·λ1/N0 and ω2 = 2π·fs·λ2/N0 into formula (6) gives the bispectral density estimate of the kth segment of signal data, B̂_k(ω1, ω2) = b_k(λ1, λ2); the K segment estimates are then averaged to obtain the bispectral density estimate of the whole process:
B̂(ω1, ω2) = (1/K) · Σ_{k=1}^{K} B̂_k(ω1, ω2)    (7)
where the sampling frequency fs is set to 1.
Step 1.2.5: converting the bispectrum density estimation value obtained in the step 1.2.4 into different RGB values to draw bispectrums, and obtaining a radio frequency fingerprint with nonlinear dimensionality;
step 1.3, extracting frequency response distortion characteristics based on short-time Fourier transform to obtain a radio frequency fingerprint of frequency response distortion dimensionality;
for a time domain signal f (t) received by a receiver, firstly, a time window function g (t- τ) is multiplied to intercept the time domain signal near τ to obtain a local signal, and then, fourier transform is performed on the obtained local signal to obtain a short-time fourier transform formula, as shown in formula (8):
F(τ, ω) = ∫ f(t) · g(t - τ) · e^{-jωt} dt    (8)
where ω is the angular frequency and e^{-jωt} performs the frequency limiting; τ is the time delay, and by continuously changing the value of τ, the time window determined by g(t) moves along the time axis and gradually intercepts the time-domain signal f(t), so that the Fourier transform at different moments is obtained.
The result F(τ, ω) of the short-time Fourier transform is a two-dimensional function of time τ and angular frequency ω.
As with the bispectrum, the values of F(τ, ω) are converted into different RGB values and a short-time Fourier transform time-frequency diagram is drawn, giving the radio frequency fingerprint of the frequency response distortion dimension.
On the basis of the above scheme, the single-dimensional feature extraction module in step 2 includes: a convolution block, a channel attention module, a spatial attention module, and a fully connected layer.
On the basis of the scheme, the convolution block comprises three convolution layers: the number of input channels of the first convolution layer is 3, the numbers of output channels of the three convolution layers are 32, 64 and 64 respectively, the convolution kernels are all 3 × 3, and the padding values are all 1; each convolution layer is followed in turn by a ReLU activation function, a max pooling layer, a regularization layer and a Dropout layer; the sliding window of the max pooling layer is 2 × 2; the regularization layer normalizes the data, and the Dropout rate is 0.2.
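As an illustration only, a minimal PyTorch sketch of such a convolution block follows; it assumes the regularization layer is batch normalization and keeps the layer order described above, and the module and variable names are ours rather than the patent's.

    import torch.nn as nn

    class ConvBlock(nn.Module):
        # Three convolution layers (3 -> 32 -> 64 -> 64, 3x3 kernels, padding 1),
        # each followed by ReLU, 2x2 max pooling, a normalization layer
        # (assumed to be BatchNorm2d) and Dropout with rate 0.2.
        def __init__(self, in_channels=3, channels=(32, 64, 64), p_drop=0.2):
            super().__init__()
            layers, prev = [], in_channels
            for out in channels:
                layers += [
                    nn.Conv2d(prev, out, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.MaxPool2d(kernel_size=2),
                    nn.BatchNorm2d(out),
                    nn.Dropout(p_drop),
                ]
                prev = out
            self.net = nn.Sequential(*layers)

        def forward(self, x):        # x: B x 3 x H x W fingerprint image
            return self.net(x)       # preliminary feature F: B x 64 x H/8 x W/8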
On the basis of the above scheme, the channel attention module comprises a max pooling layer, an average pooling layer, a multilayer perceptron and a Sigmoid activation function; the specific processing procedure of the channel attention module is as follows:
the preliminary feature F output by the convolution block is fed into the max pooling layer and the average pooling layer respectively; the max-pooled and average-pooled results are each fed into the multilayer perceptron, giving two features; the two features are added and activated with the Sigmoid activation function to obtain the channel attention map M_C;
the multilayer perceptron is composed of two two-dimensional convolution layers whose kernels are 1 × 1 and whose numbers of output channels are 4 and 64 respectively; a ReLU activation function is used after the first convolution layer and a Sigmoid activation function after the second convolution layer;
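A hedged PyTorch sketch of this channel attention module follows; the adaptive pooling to 1 × 1 and the shared two-layer 1 × 1 convolution perceptron follow the description above, only the Sigmoid after the sum of the two branches is applied (matching equation (9)), and the class and argument names are illustrative.

    import torch.nn as nn

    class ChannelAttention(nn.Module):
        # Channel attention of equation (9): global average and max pooling,
        # a shared MLP of two 1x1 convolutions (64 -> 4 -> 64), then Sigmoid.
        def __init__(self, channels=64, hidden=4):
            super().__init__()
            self.avg_pool = nn.AdaptiveAvgPool2d(1)   # output_size = 1, i.e. 1 x 1 maps
            self.max_pool = nn.AdaptiveMaxPool2d(1)
            self.mlp = nn.Sequential(
                nn.Conv2d(channels, hidden, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(hidden, channels, kernel_size=1),
            )
            self.sigmoid = nn.Sigmoid()

        def forward(self, f):          # f: B x C x H x W (preliminary feature F)
            m_c = self.sigmoid(self.mlp(self.avg_pool(f)) + self.mlp(self.max_pool(f)))
            return m_c * f             # F' = M_C(F) multiplied element-wise with F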
on the basis of the above scheme, the spatial attention module comprises: an average pooling layer, a maximum pooling layer, a convolution layer and a Sigmoid function; the specific processing procedure of the space attention module comprises the following steps:
the feature F' is respectively input into a maximum pooling layer and an average pooling layer for calculating the average value and the maximum value on the channel dimension, the obtained average pooling and maximum pooling results are spliced on the channel dimension and then input into a convolution layer, the size of a convolution kernel in convolution operation is set to be 7 multiplied by 7,
setting the filling value as 3, setting the number of input and output channels as 2 and 1 respectively, and finally activating by using a Sigmoid activation function to obtain a result M of space attention mappingS
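For illustration, a corresponding PyTorch sketch of the spatial attention module is shown below; the channel-wise mean and max pooling and the 7 × 7 convolution follow the description above, and the names are ours.

    import torch
    import torch.nn as nn

    class SpatialAttention(nn.Module):
        # Spatial attention of equation (10): mean and max over the channel
        # dimension, concatenation, a 7x7 convolution (2 -> 1 channels, padding 3), Sigmoid.
        def __init__(self, kernel_size=7):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
            self.sigmoid = nn.Sigmoid()

        def forward(self, f):                          # f: B x C x H x W (feature F')
            avg = f.mean(dim=1, keepdim=True)          # average pooling over channels
            mx, _ = f.max(dim=1, keepdim=True)         # max pooling over channels
            m_s = self.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
            return m_s * f                             # F'' = M_S(F') multiplied element-wise with F'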
On the basis of the scheme, the step 2 specifically comprises the following steps:
step 2.1: preliminary feature extraction using the convolution block
For the single-dimension radio frequency fingerprint extracted in step 1 (the fingerprint is an image), preliminary feature extraction is performed with the convolution block: the image of the single-dimension fingerprint passes through the convolution block to give the preliminary feature F, which has four dimensions B × C × H × W, where B is the number of samples per batch, C the number of channels, H the height of the feature map and W its width;
step 2.2 enhancing key information in preliminary features using channel attention Module and spatial attention Module
Step 2.2.1: the preliminary feature F passes through the channel attention module to obtain the channel attention map M_C; the channel attention map M_C is multiplied element-wise with the unprocessed original feature F to obtain the feature F′, thereby highlighting the key information on the channel components;
the channel attention map is shown in equation (9):
M_C(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F)))    (9)
where σ denotes the Sigmoid function, AvgPool denotes average pooling, MaxPool denotes maximum pooling, and MLP is a multilayer perceptron with a hidden layer.
Step 2.2.2: the feature F′ passes through the spatial attention module to obtain the spatial attention map M_S, which is multiplied element-wise with the unprocessed feature F′ to obtain the feature F″, highlighting the key information in the spatial regions;
the implementation principle of the spatial attention mapping is shown in formula (10):
M_S(F) = σ(f([AvgPool(F); MaxPool(F)]))    (10)
where M_S(F) denotes the resulting spatial attention map, σ denotes the Sigmoid function, f denotes the convolution operation, AvgPool denotes average pooling, MaxPool denotes maximum pooling, and [ ; ] denotes the concatenation operation.
Thus, the overall attention mechanism is expressed as:
F′ = M_C(F) ⊙ F    (11)
F″ = M_S(F′) ⊙ F′    (12)
where F denotes the preliminary feature, M_C(F) the channel attention map, M_S(F′) the spatial attention map, ⊙ the element-wise product, and F″ is the final output; the feature F″ still has the four dimensions B × C × H × W;
step 2.3 further extraction of features based on fully connected layers
unfolding the last three dimensions C × H × W of the feature F″ into one dimension converts F″ into a two-dimensional tensor of size B × L; this tensor is fed into one fully connected layer with output dimension 256, giving the one-dimensional feature representation of the B radio frequency fingerprints of this single-dimension fingerprint source;
step 2.4: finally, three single-dimension feature extraction modules are used to obtain the single-dimension radio frequency fingerprint features of the three dimensions, based on the carrier frequency offset feature, the nonlinear feature and the frequency response distortion feature respectively.
On the basis of the above scheme, the size of the image in step 2.1 is adjusted according to the application scenario and complexity requirements; typical sizes include 32 × 32, 64 × 64, 128 × 128 and 256 × 256 pixels.
On the basis of the above scheme, the multi-feature fusion module in step 3 includes a self-attention module, which comprises a Tanh activation layer, a fully connected layer of size 256 × 1, and a Sigmoid activation layer.
On the basis of the scheme, the step 3 specifically comprises the following steps:
step 3.1: stacking the three single-dimension radio frequency fingerprint features obtained in step 2 along an added multi-feature dimension S, obtaining a feature M with three dimensions B × S × L;
step 3.2: passing the feature M through the self-attention module to obtain the self-attention map M_A;
step 3.3: performing a dimension transformation on the feature M, which swaps its last two dimensions, giving the new dimensions B × L × S;
step 3.4: performing a tensor matrix multiplication between the results of steps 3.2 and 3.3 to complete the extraction of the multi-feature fused radio frequency fingerprint.
On the basis of the above scheme, step 4 specifically includes:
dividing the data set into a training set and a validation set in a certain proportion; feeding the radio frequency fingerprints of the training set into the neural network in batches; computing a loss function from the output of the neural network and the labels of the radio frequency fingerprints; updating the parameters of the neural network with the back-propagation algorithm; and, while the parameters are being updated, verifying the recognition performance of the neural network on the validation set;
when the recognition performance of the neural network on the validation set no longer improves, the training of the neural network is complete, yielding the neural network for multi-feature fused radio frequency fingerprint extraction of wireless devices;
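For illustration, the training procedure can be sketched as follows; the cross-entropy loss, the Adam optimizer, and data loaders that yield the three fingerprint images together with the device label are all assumptions, since the text does not fix these choices.

    import torch
    import torch.nn as nn

    def train(model, train_loader, val_loader, max_epochs=100, patience=10):
        # Supervised training with back-propagation and early stopping when the
        # recognition performance on the validation set no longer improves.
        criterion = nn.CrossEntropyLoss()                          # assumed loss function
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed optimizer
        best_acc, stale = 0.0, 0
        for epoch in range(max_epochs):
            model.train()
            for x_cfo, x_nonlinear, x_freq, y in train_loader:     # fingerprints + device labels
                optimizer.zero_grad()
                loss = criterion(model(x_cfo, x_nonlinear, x_freq), y)
                loss.backward()
                optimizer.step()
            model.eval()
            correct = total = 0
            with torch.no_grad():
                for x_cfo, x_nonlinear, x_freq, y in val_loader:
                    pred = model(x_cfo, x_nonlinear, x_freq).argmax(dim=1)
                    correct += (pred == y).sum().item()
                    total += y.numel()
            acc = correct / total
            if acc > best_acc:
                best_acc, stale = acc, 0
            else:
                stale += 1
                if stale >= patience:      # validation performance no longer improves
                    break
        return model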
the invention has the beneficial effects that:
the invention comprehensively extracts the signal characteristics of the wireless transmitter caused by three different hardware defects to form the radio frequency fingerprint, including carrier frequency offset, nonlinearity, frequency response distortion and the like. The method strengthens the key characteristic area of the radio frequency fingerprint single-dimensional signal characteristic source of the wireless equipment based on the channel attention and space attention mechanism, distributes different weights for the signal characteristics of different sources based on the self-attention mechanism, expands the dimensionality of the radio frequency fingerprint, forms richer characteristic information of the radio frequency fingerprint, enables the radio frequency fingerprint sources of different dimensionalities to have complementary advantages and information synthesis, achieves the purpose of constructing efficient and stable radio frequency fingerprint with multi-characteristic fusion, and enables the identification of the wireless equipment to have better robustness.
Drawings
The invention has the following drawings:
FIG. 1 is a schematic diagram of a radio frequency front end of a wireless signal transmitter;
fig. 2 is a differential constellation trace diagram for different samples of different wireless devices;
FIG. 3 is a bispectrum diagram of different samples of different wireless devices;
FIG. 4 is a short-time Fourier transform time-frequency plot of different samples of different wireless devices;
FIG. 5 is a schematic flow chart of a multi-feature fusion wireless device radio frequency fingerprint extraction method based on an attention mechanism;
FIG. 6 shows the identification performance of the single-feature networks with and without the attention mechanism;
FIG. 7 is a graph comparing multi-feature fusion to single feature performance.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings 2 to 7.
The invention provides a multi-feature fusion radio frequency fingerprint extraction method based on an attention mechanism.
Step 1: and extracting signal characteristics caused by three hardware defects to form the radio frequency fingerprint.
Extracting signal characteristics of a wireless transmitter caused by three different hardware defects of carrier frequency offset, nonlinearity and frequency response distortion to obtain three-dimensional radio frequency fingerprints, wherein the three-dimensional radio frequency fingerprints and corresponding tags of the wireless transmitter form a data set for training a neural network;
step 1.1 Carrier frequency offset feature extraction based on Difference
The present invention uses carrier frequency offset as one of the radio frequency fingerprint characteristics of a wireless device based on the transmitter and receiver having different carrier frequencies.
During the communication of the wireless device, the receiver performs a differential operation on the received signal and visualizes the result as a Differential Constellation Trace Figure (DCTF), which is the radio frequency fingerprint of the carrier frequency offset dimension.
The differential operation is specifically expressed as follows:
step 1.1.1 the signal transmitted by the transmitter is expressed as:
S(t) = X(t) · e^{j2π·f_cTx·t}    (1)
where S(t) represents the transmitted signal, X(t) is the transmitter baseband signal, j is the imaginary unit, and f_cTx is the carrier frequency of the transmitter.
Step 1.1.2, for clarity, ignoring the influence of channel, noise, etc., the signal received by the receiver is denoted as r (t), and r (t) is denoted as s (t);
the receiver down-converts to obtain a baseband signal represented as:
Y(t) = r(t) · e^{-j(2π·f_cRx·t + φ)}    (2)
where f_cRx is the carrier frequency of the receiver and φ is the phase error of the receiver when receiving the signal;
when f_cRx ≠ f_cTx, the baseband signal obtained by down-conversion at the receiver is represented as:
Y(t) = X(t) · e^{-j(2π·θ·t + φ)}    (3)
where θ = f_cRx - f_cTx is the difference between the carrier frequency of the receiver and the carrier frequency of the transmitter.
Step 1.1.3 the differential processing procedure is expressed as:
D(t) = Y(t + d) · Y*(t)    (4)
where D(t) is the differential result, Y* denotes the complex conjugate, and d is the differential interval.
After the differential processing, the phase rotation factor e^{-j2π·θ·d} in the differential result D(t) is a fixed value that depends only on the differential interval and does not change with time. The differential operation therefore compensates the rotation of the constellation and yields a stable constellation trace figure that reflects the individual differences between devices, and this trace figure is used as one of the sources of radio frequency fingerprint feature extraction for the wireless device.
Step 1.1.4: the differentiated signal is drawn directly on the complex plane, the complex plane is then discretized into a grid of pixels, and the number of differential results falling into each pixel region is represented by different RGB values [3], giving the visualized differential constellation trace figure shown in fig. 2. The figure reflects the statistical characteristics of the differential results on the complex plane: the colours indicate the density of the constellation trace points, and the darker the colour of a region, the denser the distribution of trace points in that region [4]. The carrier frequency offset, i.e. the degree of phase rotation of the constellation trace, can therefore be represented intuitively in this visual form.
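For illustration only, a NumPy sketch of this differentiation-and-binning procedure is given below; the differential interval d, the pixel count and the plotted amplitude range are assumptions, and the mapping of the per-pixel counts to RGB colours is left to the plotting step.

    import numpy as np

    def dctf_image(y, d=1, pixels=64, extent=1.5):
        # Differential constellation trace figure: differentiate the received
        # baseband samples with interval d and count how many differential
        # results fall into each pixel of the discretized complex plane.
        y = np.asarray(y)
        diff = y[d:] * np.conj(y[:-d])            # D(n) = Y(n + d) * conj(Y(n))
        counts, _, _ = np.histogram2d(
            diff.real, diff.imag, bins=pixels,
            range=[[-extent, extent], [-extent, extent]],
        )
        return counts / counts.max()              # per-pixel density, coloured later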
Step 1.2 nonlinear feature extraction based on bispectrum domain
Because the internal structures of transmitter devices differ, the invention introduces the nonlinear feature extracted through the bispectrum transform as one of the radio frequency fingerprint characteristics of the wireless device. After the receiver acquires the received signal, the signal bispectrum is estimated from a finite set of observations, mainly with a non-parametric method, giving the radio frequency fingerprint of the nonlinearity dimension. The procedure comprises the following steps:
step 1.2.1 Signal data segmentation
The captured signal data of length N, {r(0), r(1), …, r(N-1)}, is divided into K segments, each segment having M observation samples, i.e. N = KM, and the mean of each segment is subtracted from its M observation samples. Let r^(k)(n) denote the nth observation sample of the kth segment of signal data, where n = 0, 1, …, M-1 and k = 1, 2, …, K.
Step 1.2.2 calculating discrete Fourier transform coefficients
Y^(k)(λ) = (1/M) · Σ_{n=0}^{M-1} r^(k)(n) · e^{-j2πnλ/M}    (5)
where λ = 0, 1, …, M/2 is the discrete frequency variable.
On this basis, the Discrete Fourier Transform (DFT) coefficients are obtained.
Step 1.2.3: the triple correlation of the discrete Fourier transform coefficients is computed and frequency-domain smoothing is performed over the ±L1 frequency components around each frequency sampling point:
b_k(λ1, λ2) = (1/Δ0²) · Σ_{i1=-L1}^{L1} Σ_{i2=-L1}^{L1} Y^(k)(λ1+i1) · Y^(k)(λ2+i2) · Y^(k)*(λ1+λ2+i1+i2)    (6)
where λ1, λ2 are the discrete frequency variables of the bispectrum domain, 0 ≤ λ2 ≤ λ1 and λ1 + λ2 ≤ fs/2; fs is the sampling frequency; Δ0 = fs/N0 is the frequency sampling interval in the bispectrum domain; N0 is the total number of frequency samples; i1, i2 are variables used to traverse the neighbouring frequency components during the accumulation; N1 = 2L1 + 1 is the length of the bispectral smoothing over adjacent frequencies, and the values of N0 and N1 satisfy M = N0 · N1.
Step 1.2.4: with ω1, ω2 denoting the angular frequencies of the two-dimensional frequency domain, substituting ω1 = 2π·fs·λ1/N0 and ω2 = 2π·fs·λ2/N0 into formula (6) gives the bispectral density estimate of the kth segment of signal data, B̂_k(ω1, ω2) = b_k(λ1, λ2). The K segment estimates are then averaged to obtain the bispectral density estimate of the whole process:
B̂(ω1, ω2) = (1/K) · Σ_{k=1}^{K} B̂_k(ω1, ω2)    (7)
where the sampling frequency fs is set to 1;
step 1.2.5: through the calculation process, the bispectral characteristics of the signals transmitted by each wireless device can be obtained. And further, drawing a bispectrum in a visual form based on the bispectrum density estimation value, namely converting the bispectrum density estimation value into different RGB values for presentation to obtain the radio frequency fingerprint with nonlinear dimensionality.
Fig. 3 shows the visualized bispectra of different samples from different wireless devices.
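A slow but direct NumPy sketch of steps 1.2.1-1.2.4 is given below; it follows formulas (5)-(7) under simplifying assumptions (boundary frequency bins are wrapped rather than excluded, and M is assumed to be an exact multiple of N1), and the function name and arguments are ours.

    import numpy as np

    def bispectrum_estimate(r, K, L1=1, fs=1.0):
        # Segment the record, remove each segment mean, take the DFT (formula (5)),
        # form the smoothed triple correlation (formula (6)) and average over the
        # K segments (formula (7)).
        r = np.asarray(r, dtype=complex)
        M = len(r) // K
        N1 = 2 * L1 + 1
        N0 = M // N1                       # M = N0 * N1 is assumed to hold exactly
        delta0 = fs / N0                   # frequency sampling interval of the bispectrum domain
        half = M // 2
        B = np.zeros((half, half), dtype=complex)
        for k in range(K):
            seg = r[k * M:(k + 1) * M]
            Y = np.fft.fft(seg - seg.mean()) / M
            for l1 in range(half):
                for l2 in range(l1 + 1):   # principal domain: 0 <= lambda2 <= lambda1
                    acc = 0j
                    for i1 in range(-L1, L1 + 1):
                        for i2 in range(-L1, L1 + 1):
                            acc += (Y[(l1 + i1) % M] * Y[(l2 + i2) % M]
                                    * np.conj(Y[(l1 + l2 + i1 + i2) % M]))
                    B[l1, l2] += acc / delta0 ** 2
        return B / K                       # bispectral density estimate, visualized as RGB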
Step 1.3 frequency response distortion feature extraction based on short-time Fourier transform
In order to take account of the frequency spectrum density distribution in the frequency domain and the local Time information in the Time domain, the invention takes the information embodied after Short Time Fourier Transform (STFT) is carried out on the received signal as one of the sources for extracting the radio frequency fingerprint characteristics of the wireless equipment.
Specifically, for the time-domain signal f(t) received by the receiver, a time window function g(t - τ) is multiplied in to intercept the local signal near τ, and the Fourier transform of the obtained local signal then gives the short-time Fourier transform formula:
F(τ, ω) = ∫ f(t) · g(t - τ) · e^{-jωt} dt    (8)
where ω is the angular frequency and e^{-jωt} performs the frequency limiting; τ is the time delay, and by continuously changing the value of τ, the time window determined by g(t) moves along the time axis and gradually intercepts the time-domain signal f(t), so that the Fourier transform at different moments is obtained. The result F(τ, ω) of the short-time Fourier transform is a two-dimensional function of time τ and angular frequency ω. As with the bispectrum, the values of F(τ, ω) are converted into different RGB values and an STFT time-frequency diagram is drawn, giving the radio frequency fingerprint of the frequency response distortion dimension.
The STFT may reflect both time domain and frequency domain information of the signal under study. Fig. 4 shows STFT time-frequency diagrams of different wireless devices with some degree of discrimination.
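A short illustrative sketch of this step using SciPy follows; the Hann window and segment length are assumptions not stated in the text, and only the magnitude of F(τ, ω) is kept for the time-frequency image.

    import numpy as np
    from scipy.signal import stft

    def stft_image(f_t, fs=1.0, nperseg=64):
        # Short-time Fourier transform of the received signal; |F(tau, omega)| is
        # normalised and later converted to RGB values for the time-frequency image.
        _, _, F = stft(f_t, fs=fs, window='hann', nperseg=nperseg)
        mag = np.abs(F)
        return mag / mag.max()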
Step 2: wireless device single-dimensional radio frequency fingerprint feature extraction based on channel attention and space attention mechanism
Based on the three-dimensional radio frequency fingerprints obtained in the step (1), respectively extracting carrier frequency offset characteristics, nonlinear characteristics and frequency response distortion characteristics by using three single-dimensional characteristic extraction modules to obtain three single-dimensional radio frequency fingerprint extraction characteristics;
the network structure for extracting the single-dimensional radio frequency fingerprint features of the invention is shown as a single-dimensional feature extraction module in figure 5.
The module specifically comprises three parts: preliminary feature extraction, strengthening of the key information in the preliminary features with channel attention and spatial attention, and further feature extraction based on a fully connected layer.
Step 2.1 Primary feature extraction Using volume blocks
Preliminary feature extraction is performed with the convolution block on the single-dimension radio frequency fingerprint obtained in step 1. The radio frequency fingerprint is an image whose size is adjusted according to the application scenario and complexity requirements; typical sizes are 32 × 32, 64 × 64, 128 × 128 and 256 × 256 pixels.
The convolution block comprises three convolution layers: the number of input channels of the first convolution layer is 3, the numbers of output channels of the three convolution layers are 32, 64 and 64 respectively, the convolution kernels are all 3 × 3, and the padding values are all 1. Each convolution layer is followed in turn by a ReLU activation function, a max pooling layer, a regularization layer and a Dropout layer.
Wherein the sliding window size of the maximum pooling layer is 2 × 2; the regularization layer is used to normalize the data, and the Dropout layer has a drop rate of 0.2.
A sample passed through the convolution block yields the preliminary feature F, which has four dimensions B × C × H × W, where B is the number of samples per batch, C the number of channels, H the height of the feature map and W its width;
step 2.2 enhancing key information in preliminary features using channel attention Module and spatial attention Module
Step 2.2.1: the preliminary feature F passes through the channel attention module to obtain the channel attention map M_C; the channel attention map M_C is multiplied element-wise with the unprocessed original feature F to obtain the feature F′, thereby highlighting the key information on the channel components;
the channel attention map may be represented by equation (9):
MC(F)=σ(MLP(AvgPool(F))+MLP(MaxPool(F))) (9)
where σ denotes Sigmoid function, AvgPool denotes average pooling, MaxPool denotes that maximum pooling MLP is a multilayer perceptron with hidden layers.
The specific network structure is shown as the "channel attention module" in fig. 5. The channel attention module comprises an average pooling layer, a max pooling layer, a multilayer perceptron and a Sigmoid activation function;
the preliminary feature F output by the convolution block is fed into the max pooling layer and the average pooling layer respectively; the max-pooled and average-pooled results are each fed into the multilayer perceptron to obtain two features; after the two features are added, the Sigmoid activation function is applied to obtain the channel attention map M_C.
The average pooling and max pooling are implemented adaptively, with the output size parameter "output_size" set to 1 in the code, i.e. the size of the feature map is set to 1 × 1.
The multilayer perceptron is realized by two two-dimensional convolution layers with 1 × 1 kernels and 4 and 64 output channels respectively; a ReLU activation function is used after the first convolution layer and a Sigmoid activation function after the second convolution layer.
Step 2.2.2: the feature F′ passes through the spatial attention module to obtain the spatial attention map M_S, which is multiplied element-wise with the unprocessed feature F′ to obtain the feature F″, highlighting the key information in the spatial regions.
The implementation principle of the spatial attention mapping is shown in formula (10):
M_S(F) = σ(f([AvgPool(F); MaxPool(F)]))    (10)
where M_S(F) denotes the resulting spatial attention map, σ denotes the Sigmoid function, f denotes the convolution operation, AvgPool denotes average pooling, MaxPool denotes maximum pooling, and [ ; ] denotes the concatenation operation.
The specific network structure is shown as "spatial attention Module" in FIG. 5. The spatial attention module includes: an average pooling layer, a maximum pooling layer, a convolution layer and a Sigmoid function;
the characteristic F' is respectively subjected to average pooling and maximum pooling which are finished on the channel dimension, namely the average value and the maximum value on the channel dimension are calculated, and the number of channels is 1; splicing the obtained average pooling result and the maximum pooling result on channel dimension, inputting the result into a convolution layer, setting the size of a convolution kernel in convolution operation to be 7 multiplied by 7, setting a filling value to be 3, setting the number of input channels and output channels to be 2 and 1 respectively, and finally obtaining a result M of space attention mapping by using a Sigmoid function as an activation functionS
Thus, the overall attention mechanism is expressed as:
F′ = M_C(F) ⊙ F    (11)
F″ = M_S(F′) ⊙ F′    (12)
where F denotes the preliminary feature, M_C(F) the channel attention map, M_S(F′) the spatial attention map, ⊙ the element-wise product, and F″ is the final output; the feature F″ still has the four dimensions B × C × H × W.
Step 2.3 further extraction of features based on fully connected layers
Further, the last three dimensions C × H × W of the feature F″ are flattened into one dimension, converting F″ into a two-dimensional tensor of size B × L. This tensor is fed into a fully connected layer with output dimension 256, giving the one-dimensional feature representation of the B radio frequency fingerprints of this single-dimension fingerprint source.
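As an illustration of this flattening and fully connected step, the following fragment assumes a 64 × 64 input image (so that F″ is 64 × 8 × 8 after three poolings); the tensor and layer names are ours.

    import torch
    import torch.nn as nn

    B, C, H, W = 16, 64, 8, 8                      # F'' after three 2x2 poolings of a 64x64 image
    f2 = torch.randn(B, C, H, W)                   # stands in for the attended feature F''
    fc = nn.Linear(C * H * W, 256)                 # one fully connected layer, output dimension 256
    one_dim_feature = fc(f2.flatten(start_dim=1))  # B x 256 feature of this fingerprint source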
Step 2.4: three single-dimension feature extraction modules are used to obtain the single-dimension radio frequency fingerprint features of the three dimensions, based on the carrier frequency offset feature, the nonlinear feature and the frequency response distortion feature respectively.
Step 3: multi-feature fusion based on the self-attention mechanism
And 3, completing multi-feature fusion of the radio frequency fingerprints in three dimensions based on a self-attention mechanism, wherein a specific network structure is shown as a multi-feature fusion module in fig. 5.
In the multi-feature fusion module, the single-dimension radio frequency fingerprint features of the three dimensions finally obtained in step 2 are first stacked along an added multi-feature dimension S, giving a feature M with three dimensions B × S × L. On one hand, the feature M passes through the self-attention module to obtain the self-attention map M_A; on the other hand, a dimension transformation is applied to M by swapping its last two dimensions, i.e. converting the dimensions of the feature to B × L × S.
The self-attention module comprises a Tanh activation layer, a fully connected layer of size 256 × 1 and a Sigmoid activation layer.
Finally, a tensor matrix multiplication is performed between the results of these two processing branches of the feature M, completing the extraction of the multi-feature fused radio frequency fingerprint.
It should be noted that, for performance comparison, if the self-attention module is not selected, the fusion is instead implemented by a summation over the multi-feature dimension S.
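A minimal PyTorch sketch of this fusion module is given below, following steps 3.1-3.4 with L = 256; the class and argument names are illustrative assumptions.

    import torch
    import torch.nn as nn

    class SelfAttentionFusion(nn.Module):
        # Stack the three single-dimension features into B x S x L, score them with
        # Tanh -> Linear(256, 1) -> Sigmoid (self-attention map M_A), and fuse them
        # by a tensor matrix multiplication with the transposed feature.
        def __init__(self, feature_dim=256):
            super().__init__()
            self.self_attention = nn.Sequential(
                nn.Tanh(),
                nn.Linear(feature_dim, 1),     # fully connected layer of size 256 x 1
                nn.Sigmoid(),
            )

        def forward(self, f_cfo, f_nonlinear, f_freq):             # each: B x 256
            m = torch.stack([f_cfo, f_nonlinear, f_freq], dim=1)   # B x S x L
            m_a = self.self_attention(m)                           # B x S x 1
            fused = torch.bmm(m.transpose(1, 2), m_a)              # (B x L x S) @ (B x S x 1)
            return fused.squeeze(-1)                               # fused fingerprint: B x L

If the self-attention module is not selected for comparison, the weighted combination above is simply replaced by a summation of m over the multi-feature dimension S.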
Step 4, according to the multi-feature fusion radio frequency fingerprint obtained in the step 3, classifying the wireless equipment by utilizing a full connection layer, wherein the output dimensionality of the full connection layer is the class number of the wireless equipment, and training of a neural network is completed;
based on the scheme, the difference constellation locus diagram reflects the carrier frequency offset delta f, and the bispectrum reflects the nonlinear characteristic noAnd the effects of frequency response distortion characterized by short-time Fourier transform
Figure RE-GDA0003406841250000211
When feature level fusion is performed, as shown in equation (13):
Figure RE-GDA0003406841250000212
training a neural network using a tagged data set, the neural network using a supervised objective function. Wherein the data set is represented as: { (m)(1),y(1)),…,(m(i),y(i))},m(i)Representing the combination of three-dimensional radio frequency fingerprints extracted by the ith sample acquired by the same equipment based on carrier frequency offset characteristics, nonlinear characteristics and frequency response distortion characteristics, y(i)The label representing the ith sample, i.e., its corresponding device identity.
Dividing a data set into a training set and a verification set according to a certain proportion, inputting the radio frequency fingerprints in the training set into a neural network in batches, calculating a loss function based on the output of the neural network and the labels of the radio frequency fingerprints, and updating the parameters of the neural network by adopting a back propagation algorithm. And in the process of continuously updating the parameters of the neural network, verifying the identification performance of the neural network by using a verification set. And when the identification performance of the neural network on the verification set is not improved any more, completing the training of the neural network, and obtaining the neural network for extracting the radio frequency fingerprint of the wireless equipment with multi-feature fusion.
The trained neural network can be used for extracting the radio frequency fingerprints of the wireless equipment with multi-feature fusion, and classification of the wireless equipment is completed based on the radio frequency fingerprints with multi-feature fusion.
Step 5: the signal features of the wireless transmitter to be identified caused by the three different hardware defects, namely carrier frequency offset, nonlinearity and frequency response distortion, are extracted as in step 1 to obtain its three-dimensional radio frequency fingerprint, and the radio frequency fingerprint is identified with the neural network trained in step 4 to complete the classification of the wireless transmitter.
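Identification of a signal to be recognised can then be sketched as follows, using the same assumed model interface as in the training sketch given earlier.

    import torch

    def identify(model, x_cfo, x_nonlinear, x_freq):
        # Feed the three fingerprint images of the signal to be identified through
        # the trained network and return the predicted transmitter class.
        model.eval()
        with torch.no_grad():
            logits = model(x_cfo, x_nonlinear, x_freq)
            return logits.argmax(dim=1)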
The channel attention module, the spatial attention module and the self-attention module are all lightweight, plug-and-play modules within the overall neural network: whether a given module is used can be chosen without changing the structure of the whole network. For example, to compare the performance of multi-feature fusion with and without the attention mechanism, only the attention modules need to be switched on or off.
The convolutional-neural-network (convolution block) based feature extraction used in this scheme may also be implemented with other convolution-based deep neural networks (e.g. VGG, GoogLeNet), deep residual networks (ResNet) and their recent improvements, densely connected networks (DenseNet), and other neural networks.
The input image size used in the scheme may be adjusted according to the application scenario and the complexity requirements of the system, and common sizes include, but are not limited to, 32 × 32, 64 × 64, 128 × 128, 256 × 256 pixels, and the like.
The invention extracts the signal characteristics of the wireless transmitter caused by three different hardware defects of carrier frequency offset, nonlinearity and frequency response distortion to obtain the three-dimensional radio frequency fingerprint. The method strengthens the key characteristic area of the single-dimensional radio frequency fingerprint of the wireless equipment based on the channel attention and space attention mechanism, distributes different weights to the radio frequency fingerprint characteristics with different dimensions based on the self-attention mechanism, forms richer characteristic information of the radio frequency fingerprint, enables the radio frequency fingerprints with different dimensions to have complementary advantages and information synthesis, and constructs the efficient and stable radio frequency fingerprint with multi-characteristic fusion, so that the wireless equipment identification has better robustness.
1. Single feature identification capability incorporating attention mechanism
The performance of each selected single signal feature, characterizing a different hardware defect, is verified first. Based on a single feature extracted from the wireless device, an identification result is first obtained with a plain convolutional neural network containing only the convolution block and a fully connected layer, and then with a convolutional neural network to which the channel attention module and the spatial attention module are added. Fig. 6 shows, for each single signal feature, the accuracy of identifying different wireless devices with the convolutional neural network before and after the attention mechanism (channel attention and spatial attention) is introduced.
In the figure, the abscissa represents different signal-to-noise ratio levels and the ordinate the accuracy of the recognition result. DCTF denotes the differential constellation trace figure, and the results for each feature are distinguished by line type: dotted lines show the identification performance of the plain convolutional neural network and solid lines the performance after the attention mechanism (channel attention and spatial attention) is introduced. Among the single signal features, the differential constellation trace figure gives the best identification performance. Moreover, for each single feature the identification accuracy with the attention mechanism is somewhat higher than without it, showing that the attention mechanism strengthens the key information within a single feature and thereby improves its identification performance.
2. Recognition performance of multi-feature fusion with attention mechanism
Further, to verify the robustness of the attention-based multi-feature fusion radio frequency fingerprint extraction method, the performance of the neural network is tested under different signal-to-noise ratios. Artificial noise is added to the original data to simulate sampled data at different signal-to-noise ratios. Analysis shows that the signal-to-noise ratio of the original sampled signal is 35-40 dB, so the experiment adds further noise to the original signal in 5 dB steps, finally forming an augmented data set whose signal-to-noise ratio varies from 0 dB to 40 dB. Fig. 7 compares the experimental performance of the multi-feature fusion scheme with and without the attention mechanism against the schemes that apply the attention mechanism to a single feature.
In the graph, the abscissa represents the signal-to-noise ratio and the ordinate the identification accuracy. The identification accuracy of the attention-based multi-feature fusion radio frequency fingerprint extraction method is higher than that of any scheme that introduces the attention mechanism for a single feature. Even at low signal-to-noise ratios the accuracy of the proposed method remains above 90%. In addition, its accuracy fluctuates only slightly over the whole signal-to-noise-ratio range, so the device identity can be identified stably; the accuracy gradually improves as the signal-to-noise ratio increases, reaching 99.998% and finally achieving a completely correct classification.
Therefore, the attention mechanism-based multi-feature fusion wireless equipment radio frequency fingerprint extraction method provided by the invention can be used for stably identifying equipment under the condition of low signal to noise ratio, so that the radio frequency fingerprint of the wireless equipment has higher resolution, and the accuracy of wireless equipment identity identification is improved.
[1] Peng L, Hu A, Zhang J, et al. Design of a Hybrid RF Fingerprint Extraction and Device Classification Scheme [J]. IEEE Internet of Things Journal, 2019, 6(1): 349-360.
[2] Peng L, Zhang J, Liu M, Hu A. Deep Learning Based RF Fingerprint Identification Using Differential Constellation Trace Figure [J]. IEEE Transactions on Vehicular Technology, 2020, 69(1): 1091-1095.
[3] Liu M, et al. A radio frequency fingerprint-based identity identification method for power Internet of Things devices [J]. China Electric Power, 2021, 54(03): 80-88.
[4] Peng L, et al. A radio frequency fingerprint extraction method based on constellation trace figures [J]. Journal of Cyber Security, 2016, 1(01): 50-58.
Those not described in detail in this specification are within the skill of the art.

Claims (10)

1. A multi-feature fusion wireless device radio frequency fingerprint extraction method based on an attention mechanism is characterized by comprising the following steps:
step 1: extracting signal characteristics of a wireless transmitter caused by three different hardware defects of carrier frequency offset, nonlinearity and frequency response distortion to obtain three-dimensional radio frequency fingerprints, wherein the three-dimensional radio frequency fingerprints and corresponding tags of the wireless transmitter form a data set for training a neural network;
step 2: based on the three-dimensional radio frequency fingerprints obtained in the step (1), respectively extracting carrier frequency offset characteristics, nonlinear characteristics and frequency response distortion characteristics by using three single-dimensional characteristic extraction modules to obtain three single-dimensional radio frequency fingerprint extraction characteristics;
step 3: processing the extracted features of the three single-dimensional radio frequency fingerprints obtained in step 2 by using a multi-feature fusion module to obtain the multi-feature fused radio frequency fingerprint;
step 4: based on the multi-feature fused radio frequency fingerprint obtained in step 3, obtaining the classification of the wireless transmitters through a fully connected layer and completing the training of the neural network;
step 5: extracting, as in step 1, the signal characteristics of a wireless transmitter to be identified that are caused by the three different hardware defects of carrier frequency offset, nonlinearity and frequency response distortion, to obtain its three-dimensional radio frequency fingerprint; the radio frequency fingerprint is then identified by the neural network trained in step 4, completing the classification of the wireless transmitter.
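Read purely as an outline, claim 1 can be sketched as follows in Python; every name here is a placeholder (the three fingerprint functions correspond to the sketches given after claim 2 below, and `model` to the module sketches after claims 4 to 10), so this is an illustrative reading rather than the patent's reference implementation.

```python
# Illustrative outline of claim 1; all names are placeholders, and the three
# fingerprint functions are the sketches given after claim 2 below.
def extract_fingerprints(raw_signal):
    """Step 1 / step 5: the three-dimensional radio frequency fingerprint."""
    return (dctf_fingerprint(raw_signal),        # carrier frequency offset dimension
            bispectrum_fingerprint(raw_signal),  # nonlinearity dimension
            stft_fingerprint(raw_signal))        # frequency response distortion dimension

def identify(model, raw_signal):
    """Steps 2-4 happen inside `model` (feature extraction, fusion, classification)."""
    return model(extract_fingerprints(raw_signal)).argmax()
```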
2. The attention mechanism-based radio frequency fingerprint extraction method for multi-feature fusion wireless equipment according to claim 1, wherein the step 1 specifically comprises:
step 1.1, extracting the carrier frequency offset feature based on differential processing: during communication of the wireless device, the receiver performs a differential operation on the received signal and, after visualization, obtains the radio frequency fingerprint of the carrier frequency offset dimension; the differential operation is specifically expressed as follows:
step 1.1.1 the signal transmitted by the transmitter is expressed as:
S(t) = X(t) e^{j 2\pi f_{cTx} t}    (1)
wherein S(t) represents the transmitted signal, X(t) is the transmitter baseband signal, j is the imaginary unit, and f_{cTx} is the carrier frequency of the transmitter;
step 1.1.2, ignoring the influence of channel and noise factors, the signal received by the receiver is denoted as r(t), with r(t) = S(t);
the receiver down-converts to obtain a baseband signal represented as:
Y(t) = r(t) e^{-j(2\pi f_{cRx} t + \varphi)}    (2)
wherein f_{cRx} is the carrier frequency of the receiver and \varphi is the phase error of the receiver when receiving the signal;
when f_{cRx} \neq f_{cTx}, the baseband signal obtained by down-conversion at the receiver is expressed as:
Y(t) = X(t) e^{-j(2\pi \theta t + \varphi)}    (3)
wherein \theta = f_{cRx} - f_{cTx} is the difference between the carrier frequency of the receiver and the carrier frequency of the transmitter;
step 1.1.3 the differential processing procedure is expressed as:
D(t) = Y(t) Y^{*}(t - d)    (4)
wherein D(t) is the differential result, Y^{*} denotes the complex conjugate of Y, and d is the differential interval;
step 1.1.4, plotting the differential results directly on the complex plane, discretizing the complex plane into a series of pixels, and expressing the number of differential results falling into each pixel region with different RGB values to obtain the radio frequency fingerprint of the carrier frequency offset dimension;
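As an illustrative sketch of steps 1.1.3-1.1.4 above (the differential operation followed by visualization on a discretized complex plane), the following NumPy code builds a two-dimensional histogram that can then be colour-mapped into the carrier frequency offset fingerprint image; the bin count and the normalisation are assumed parameters.

```python
import numpy as np

def dctf_fingerprint(y, d=1, bins=64):
    """Differential constellation trace figure for the carrier frequency offset dimension.

    `y` is the received complex baseband signal, `d` the differential interval.
    """
    diff = y[d:] * np.conj(y[:-d])          # D(n) = Y(n) * conj(Y(n - d)), n >= d
    lim = np.abs(diff).max()
    hist, _, _ = np.histogram2d(diff.real, diff.imag, bins=bins,
                                range=[[-lim, lim], [-lim, lim]])
    return hist / (hist.max() + 1e-12)      # per-pixel counts, normalised for RGB mapping
```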
step 1.2, extracting nonlinear features based on a bispectrum domain to obtain a nonlinear dimension radio frequency fingerprint, which specifically comprises the following steps:
step 1.2.1 signal data segmentation:
dividing the captured signal data of length N, {r(0), r(1), ..., r(N-1)}, into K segments of M observation samples each, with N = KM; subtracting the segment mean from the M observation samples of each segment; r^{(k)}(n) denotes the n-th observation sample of the k-th segment of signal data, n = 0, 1, ..., M-1, k = 1, 2, ..., K;
step 1.2.2 calculating the discrete Fourier transform coefficients:
X^{(k)}(\lambda) = \frac{1}{M} \sum_{n=0}^{M-1} r^{(k)}(n) e^{-j 2\pi n \lambda / M}    (5)
wherein \lambda represents the discrete frequency variable, \lambda = 0, 1, ..., M/2; the discrete Fourier transform coefficients are obtained on this basis;
step 1.2.3 calculating the triple correlation of the discrete Fourier transform coefficients and performing frequency-domain smoothing over \pm L_1 frequency components around each frequency sampling point:
\hat{B}^{(k)}(\lambda_1, \lambda_2) = \frac{1}{\Delta_0^2} \sum_{i_1=-L_1}^{L_1} \sum_{i_2=-L_1}^{L_1} X^{(k)}(\lambda_1 + i_1) X^{(k)}(\lambda_2 + i_2) X^{(k)*}(\lambda_1 + \lambda_2 + i_1 + i_2)    (6)
wherein \lambda_1, \lambda_2 are the discrete frequency variables of the bispectral domain, 0 \le \lambda_2 \le \lambda_1 and \lambda_1 + \lambda_2 \le f_s/2; f_s is the sampling frequency; \Delta_0 represents the frequency sampling interval in the bispectral domain, \Delta_0 = f_s / N_0; N_0 is the total number of frequency samples; i_1, i_2 are variables used to traverse the neighbouring frequency components during the accumulation; let N_1 = 2L_1 + 1 be the length of the bispectral smoothing across adjacent frequencies, where N_0 and N_1 satisfy M = N_0 \cdot N_1;
step 1.2.4 with \omega_1, \omega_2 representing the angular frequencies of the bispectral domain, substituting \omega_1 = 2\pi f_s \lambda_1 / N_0 and \omega_2 = 2\pi f_s \lambda_2 / N_0 into formula (6) gives the bispectral density estimate \hat{B}^{(k)}(\omega_1, \omega_2) of the k-th segment of signal data; the K segment estimates are then averaged to obtain the bispectral density estimate over the whole process:
\hat{B}(\omega_1, \omega_2) = \frac{1}{K} \sum_{k=1}^{K} \hat{B}^{(k)}(\omega_1, \omega_2)    (7)
wherein the sampling frequency f_s is set to 1;
step 1.2.5: converting the bispectral density estimates obtained in step 1.2.4 into different RGB values to draw the bispectrum, obtaining the radio frequency fingerprint of the nonlinearity dimension;
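The following is a simplified sketch of the direct bispectrum estimate of steps 1.2.1-1.2.4 (segmentation, per-segment DFT, smoothed triple correlation, averaging over the K segments). The \Delta_0 normalisation and the boundary handling of formula (6) are simplified here, and the segment count K and smoothing half-width L_1 are assumed values.

```python
import numpy as np

def bispectrum_fingerprint(r, K=16, L1=1):
    """Simplified direct bispectrum estimate; returns |B(l1, l2)| averaged over segments."""
    M = len(r) // K
    segs = r[: K * M].reshape(K, M)
    segs = segs - segs.mean(axis=1, keepdims=True)      # remove each segment's mean
    X = np.fft.fft(segs, axis=1) / M                    # DFT coefficients X^(k)
    half = M // 2
    B = np.zeros((half, half), dtype=complex)
    for k in range(K):
        for l1 in range(half):
            for l2 in range(l1 + 1):                    # 0 <= l2 <= l1
                acc = 0.0 + 0.0j
                for i1 in range(-L1, L1 + 1):           # +/- L1 frequency smoothing
                    for i2 in range(-L1, L1 + 1):
                        acc += (X[k, (l1 + i1) % M]
                                * X[k, (l2 + i2) % M]
                                * np.conj(X[k, (l1 + l2 + i1 + i2) % M]))
                B[l1, l2] += acc
    B /= K * (2 * L1 + 1) ** 2                          # average over segments and window
    return np.abs(B)                                    # magnitude for RGB mapping
```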
step 1.3, extracting frequency response distortion characteristics based on short-time Fourier transform to obtain a radio frequency fingerprint of frequency response distortion dimensionality;
for a time domain signal f (t) received by a receiver, firstly, a time window function g (t- τ) is multiplied to intercept the time domain signal near τ to obtain a local signal, and then, fourier transform is performed on the obtained local signal to obtain a short-time fourier transform formula, as shown in formula (8):
F(\tau, \omega) = \int_{-\infty}^{+\infty} f(t) g(t - \tau) e^{-j\omega t} dt    (8)
wherein \omega is the angular frequency and e^{-j\omega t} performs the frequency limiting; \tau is the time delay, and by continuously changing the value of \tau the time window determined by g(t) moves along the time axis, gradually intercepting the time-domain signal f(t) to obtain the Fourier transform at different moments;
the result F (τ, ω) of the short-time fourier transform is a two-dimensional function with respect to time τ and angular frequency ω;
and converting the value of F (tau, omega) into different RGB values, drawing a short-time Fourier transform time-frequency graph, and obtaining the radio frequency fingerprint of the frequency response distortion dimension.
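A minimal sketch of step 1.3, assuming SciPy's STFT is acceptable as the windowed transform of formula (8); the window length, overlap and normalisation are illustrative choices rather than values fixed by the patent.

```python
import numpy as np
from scipy import signal

def stft_fingerprint(x, fs=1.0, nperseg=256):
    """Short-time Fourier transform magnitude map for the frequency response distortion dimension."""
    _, _, Zxx = signal.stft(x, fs=fs, nperseg=nperseg)
    magnitude = np.abs(Zxx)
    return magnitude / (magnitude.max() + 1e-12)   # normalised for colour mapping
```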
3. The attention mechanism-based multi-feature fusion wireless device radio frequency fingerprint extraction method of claim 1, wherein the single-dimensional feature extraction module of step 2 comprises: a convolution block, a channel attention module, a spatial attention module and a fully connected layer.
4. The attention mechanism-based multi-feature fusion wireless device radio frequency fingerprint extraction method according to claim 3, wherein the convolution block comprises three convolutional layers; the number of input channels of the first convolutional layer is 3, and the numbers of output channels of the three convolutional layers are 32, 64 and 64, respectively; the convolution kernels of the three convolutional layers are all 3 × 3 and the padding values are all 1; each convolutional layer is followed in sequence by a ReLU activation function, a maximum pooling layer, a regularization layer and a Dropout layer; the sliding window of the maximum pooling layer is 2 × 2; the regularization layer is used to normalize the data, and the drop rate of the Dropout layer is 0.2.
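A PyTorch sketch of the convolution block of claim 4; the claim does not name the normalisation layer, so BatchNorm2d is used here as an assumption for the "regularization layer".

```python
import torch.nn as nn

def conv_block():
    """Convolution block of claim 4: channels 3 -> 32 -> 64 -> 64, 3x3 kernels, padding 1,
    each followed by ReLU, 2x2 max pooling, a normalisation layer and Dropout(0.2)."""
    layers, in_ch = [], 3
    for out_ch in (32, 64, 64):
        layers += [
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.BatchNorm2d(out_ch),   # the "regularization layer"; BatchNorm is an assumption
            nn.Dropout(0.2),
        ]
        in_ch = out_ch
    return nn.Sequential(*layers)
```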
5. The method of attention-based multi-feature fusion wireless device radio frequency fingerprint extraction as recited in claim 4, wherein the channel attention module comprises: the maximum pooling layer, the average pooling layer, the multilayer perceptron and the Sigmoid activation function;
the specific processing procedure of the channel attention module comprises the following steps:
inputting the preliminary feature F output by the convolution block into a maximum pooling layer and an average pooling layer respectively, performing maximum pooling and average pooling, inputting the two pooling results into a multilayer perceptron respectively to obtain two features, adding the two features, and activating with a Sigmoid activation function to obtain the channel attention map M_C;
The multilayer perceptron is composed of two two-dimensional convolutional layers, each with 1 × 1 convolution kernels and 4 and 64 output channels respectively; a ReLU activation function is used after the first convolutional layer and a Sigmoid activation function after the second convolutional layer.
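A PyTorch sketch of the channel attention module of claim 5, following formula (9): a shared two-layer 1 × 1-convolution perceptron is applied to the max-pooled and average-pooled descriptors, the two results are summed, and the Sigmoid is applied to the sum (the claim's Sigmoid after the second convolution is folded into this final activation).

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: M_C(F) = sigmoid(MLP(AvgPool(F)) + MLP(MaxPool(F)))."""
    def __init__(self, channels=64, hidden=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),   # 64 -> 4
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1),   # 4 -> 64
        )

    def forward(self, f):                                  # f: (B, C, H, W)
        avg = self.mlp(nn.functional.adaptive_avg_pool2d(f, 1))
        mx = self.mlp(nn.functional.adaptive_max_pool2d(f, 1))
        return torch.sigmoid(avg + mx)                     # M_C: (B, C, 1, 1)
```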
6. The attention mechanism-based multi-feature fusion wireless device radio frequency fingerprint extraction method of claim 5, wherein the spatial attention module comprises: an average pooling layer, a maximum pooling layer, a convolution layer and a Sigmoid function;
the specific processing procedure of the space attention module comprises the following steps:
inputting the feature F′ into a maximum pooling layer and an average pooling layer respectively to calculate the maximum and average values along the channel dimension, concatenating the resulting average-pooled and max-pooled results along the channel dimension, then inputting the concatenated result into a convolutional layer in which the convolution kernel size is 7 × 7, the padding value is 3, and the numbers of input and output channels are 2 and 1 respectively, and finally activating with a Sigmoid activation function to obtain the spatial attention map M_S.
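A PyTorch sketch of the spatial attention module of claim 6: channel-wise mean and maximum maps are concatenated, passed through a 7 × 7 convolution (2 input channels, 1 output channel, padding 3) and a Sigmoid, giving the spatial attention map M_S.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Spatial attention: M_S(F) = sigmoid(conv7x7([mean_c(F); max_c(F)]))."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, f):                               # f: (B, C, H, W)
        avg = f.mean(dim=1, keepdim=True)               # (B, 1, H, W)
        mx, _ = f.max(dim=1, keepdim=True)              # (B, 1, H, W)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
```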
7. The attention mechanism-based radio frequency fingerprint extraction method for multi-feature fusion wireless equipment according to claim 3, wherein the step 2 specifically comprises the following steps:
step 2.1, using the convolution block to perform preliminary feature extraction;
for the single-dimensional radio frequency fingerprint obtained in step 1, which is an image, preliminary feature extraction is performed with the convolution block; the image of the single-dimensional radio frequency fingerprint passes through the convolution block to give the preliminary feature F, which has four dimensions B × C × H × W, where B represents the number of samples in each batch, C the number of channels, H the height of the feature map, and W the width of the feature map;
step 2.2, enhancing the key information in the preliminary feature by utilizing the channel attention module and the spatial attention module;
step 2.2.1 the preliminary feature F passes through the channel attention module to obtain the channel attention map M_C; the channel attention map M_C is element-wise multiplied with the unprocessed original feature F to obtain the feature F′, thereby highlighting the key information in the channel dimension;
the channel attention map is shown in equation (9):
M_C(F) = σ(MLP(AvgPool(F)) + MLP(MaxPool(F)))    (9)
wherein σ represents the Sigmoid function, AvgPool represents average pooling, MaxPool represents maximum pooling, and MLP is a multilayer perceptron with a hidden layer;
step 2.2.2 the feature F′ passes through the spatial attention module to obtain the spatial attention map M_S, which is element-wise multiplied with the unprocessed feature F′ to obtain the feature F″, highlighting the key information in the spatial region;
the implementation principle of the spatial attention mapping is shown in formula (10):
M_S(F) = σ(f([AvgPool(F); MaxPool(F)]))    (10)
wherein M_S(F) represents the resulting spatial attention map, σ represents the Sigmoid function, f represents the convolution operation, AvgPool represents average pooling, MaxPool represents maximum pooling, and [·;·] represents the concatenation operation;
thus, the overall attention mechanism is expressed as:
F′ = M_C(F) ⊙ F    (11)
F″ = M_S(F′) ⊙ F′    (12)
wherein F represents the preliminary feature, M_C(F) represents the channel attention map, M_S(F′) represents the spatial attention map, ⊙ represents element-wise multiplication, and F″ is the final output; the feature F″ still has the four dimensions B × C × H × W;
step 2.3, further extracting features based on the fully connected layer;
flattening the last three dimensions C × H × W of the feature F″ into one dimension, converting the feature into a two-dimensional tensor of size B × L, and inputting this tensor into a fully connected layer with output dimension 256, thereby obtaining the one-dimensional feature representations of the B radio frequency fingerprints of one single-dimensional radio frequency fingerprint source;
step 2.4, finally, using the three single-dimensional feature extraction modules to obtain the extracted features of the three single-dimensional radio frequency fingerprints based on the carrier frequency offset feature, the nonlinearity feature and the frequency response distortion feature, respectively.
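Combining the sketches given after claims 4-6, one single-dimensional feature extraction module of claim 7 (steps 2.1-2.3, formulas (11)-(12)) could look as follows; the flattened length depends on the fingerprint image size and is therefore an assumed constructor argument (for example, an assumed 3 × 64 × 64 fingerprint image yields a 64 × 8 × 8 feature map and a flattened length of 4096).

```python
import torch.nn as nn

class SingleDimExtractor(nn.Module):
    """One single-dimensional feature extraction module: conv block, channel then
    spatial attention, flatten, fully connected layer to a 256-dimensional feature."""
    def __init__(self, flattened_len):
        super().__init__()
        self.conv = conv_block()                 # sketch after claim 4
        self.ca = ChannelAttention(channels=64)  # sketch after claim 5
        self.sa = SpatialAttention()             # sketch after claim 6
        self.fc = nn.Linear(flattened_len, 256)

    def forward(self, x):                 # x: (B, 3, H, W) fingerprint image
        f = self.conv(x)                  # preliminary feature F
        f = self.ca(f) * f                # F'  = M_C(F) ⊙ F      (11)
        f = self.sa(f) * f                # F'' = M_S(F') ⊙ F'    (12)
        return self.fc(f.flatten(1))      # (B, 256) one-dimensional feature
```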
8. The attention mechanism-based multi-feature fusion wireless device radio frequency fingerprint extraction method of claim 7, wherein the multi-feature fusion module of step 3 comprises a self-attention module, the self-attention module comprising: a Tanh activation function layer, a fully connected layer of size 256 × 1, and a Sigmoid activation function layer.
9. The attention mechanism-based radio frequency fingerprint extraction method for multi-feature fusion wireless equipment according to claim 8, wherein the step 3 specifically comprises the following steps:
step 3.1: performing dimension-increasing splicing on the three extracted features of the single-dimensional radio frequency fingerprints obtained in step 2, adding a multi-feature dimension S to obtain a feature M with three dimensions B × S × L;
step 3.2: passing the feature M through the self-attention module to obtain the self-attention map M_A;
step 3.3: performing a feature dimension transformation on the feature M, specifically exchanging the last two dimensions to obtain the new dimensions B × L × S;
step 3.4: performing tensor matrix multiplication on the results of step 3.2 and step 3.3 to complete the extraction of the multi-feature fused radio frequency fingerprint.
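A PyTorch sketch of the multi-feature fusion of claims 8-9: the three 256-dimensional single-feature vectors are stacked into a B × S × L tensor, weighted by the self-attention module (Tanh, a 256 × 1 fully connected layer, Sigmoid), and combined by batched tensor matrix multiplication of the transposed feature tensor with the attention map.

```python
import torch
import torch.nn as nn

class MultiFeatureFusion(nn.Module):
    """Fuse the three single-dimensional features into one radio frequency fingerprint."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.attn = nn.Sequential(nn.Tanh(), nn.Linear(feat_dim, 1), nn.Sigmoid())

    def forward(self, f_cfo, f_nl, f_frd):            # each: (B, 256)
        m = torch.stack([f_cfo, f_nl, f_frd], dim=1)  # (B, S=3, L=256)   step 3.1
        m_a = self.attn(m)                            # (B, S, 1)         step 3.2
        fused = torch.bmm(m.transpose(1, 2), m_a)     # (B, L, 1)         steps 3.3-3.4
        return fused.squeeze(-1)                      # (B, L) fused fingerprint
```

The fused (B, 256) vector would then feed the fully connected classification layer of step 4 in claim 1.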
10. The attention mechanism-based radio frequency fingerprint extraction method for multi-feature fusion wireless equipment according to claim 9, wherein the step 4 specifically comprises:
dividing a data set into a training set and a verification set according to a certain proportion, inputting the radio frequency fingerprints in the training set into a neural network in batches, calculating a loss function based on the output of the neural network and the labels of the radio frequency fingerprints, updating parameters of the neural network by adopting a back propagation algorithm, and verifying the identification performance of the neural network by using the verification set in the process of continuously updating the parameters of the neural network;
and when the identification performance of the neural network on the verification set is not improved any more, completing the training of the neural network to obtain the neural network for extracting the radio frequency fingerprint of the wireless equipment with multi-feature fusion.
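A sketch of the training procedure of claim 10, assuming a cross-entropy loss, an Adam optimiser and an early-stopping patience; these hyper-parameters, and the assumption that each DataLoader batch yields (fingerprints, labels) in whatever structure the model's forward expects, are illustrative and not fixed by the patent.

```python
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, epochs=100, patience=5, lr=1e-3):
    """Train with back-propagation; stop when validation accuracy no longer improves."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    best_acc, stale = 0.0, 0
    for _ in range(epochs):
        model.train()
        for fingerprints, labels in train_loader:        # batches of RF fingerprints
            opt.zero_grad()
            loss = criterion(model(fingerprints), labels)
            loss.backward()                              # back-propagation
            opt.step()
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for fingerprints, labels in val_loader:      # validation performance
                correct += (model(fingerprints).argmax(1) == labels).sum().item()
                total += labels.numel()
        acc = correct / total
        if acc > best_acc:
            best_acc, stale = acc, 0
        else:
            stale += 1
            if stale >= patience:                        # accuracy no longer improving
                break
    return model
```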
