CN115694692A - Spectrum sensing method based on Transformer multi-head attention mechanism network under Alpha noise - Google Patents

Spectrum sensing method based on Transformer multi-head attention mechanism network under Alpha noise

Info

Publication number
CN115694692A
Authority
CN
China
Prior art keywords
data
network
layer
training
attention mechanism
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111625177.7A
Other languages
Chinese (zh)
Inventor
曹秀俐
朱晓梅
李想
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Tech University
Original Assignee
Nanjing Tech University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Tech University
Priority to CN202111625177.7A
Publication of CN115694692A
Legal status: Pending

Landscapes

  • Complex Calculations (AREA)

Abstract

The invention provides a spectrum sensing method based on a Transformer multi-head attention mechanism network under Alpha noise. The method is realized with deep learning and mainly addresses the low computational efficiency of models such as RNNs (recurrent neural networks) and LSTMs (long short-term memory networks) in complex environments. The method specifically comprises the following steps. First, the observed signals are processed with fractional lower-order moments, the processed data are divided into a training set and a test set, and the training data are labeled. The labeled data are sent to a spectral convolution layer for coarse feature extraction. The data are then position-coded and fed into a Transformer model for training and learning. The Transformer output is sent to an upsampling layer for data scaling, and the training result of the network is obtained through an average pooling layer and a fully connected layer. Finally, the test data are fed into the Transformer that has learned the classification, and the label data produced by network learning are compared with the original label data against a threshold, thereby judging the state of the primary user. The invention effectively improves spectrum sensing performance and achieves better detection at low signal-to-noise ratio.

Description

Spectrum sensing method based on Transformer multi-head attention mechanism network under Alpha noise
Technical Field
The invention relates to the technical field of computers, in particular to the fields of deep learning and spectrum sensing.
Background
With the rapid development of communication technology, radio services have multiplied and demands on network transmission keep rising, so the shortage of spectrum resources has become a hot topic. Cognitive Radio (CR) refers to a wireless terminal that senses and analyzes its surrounding wireless environment, uses the acquired information to adjust its own transmission parameters in time, and makes reasonable use of licensed frequency-band resources to complete wireless transmission. In this process, an unlicensed secondary user must perform spectrum sensing without interfering with the licensed user and transmit data on idle frequency-band resources. However, in recent years, the sharp increase in the number of licensed-user signals, the growing variety of those signals, and non-uniform data-transmission channels have posed great challenges to spectrum sensing in cognitive radio.
In real life, the communication channel contains not only internal noise from within the device or system, such as spectral leakage, but also external noise from the outside, such as man-made impulse noise. Alpha-stable distributed noise is a mathematical model that can faithfully describe impulses in a non-Gaussian environment; it has no finite moments of order Alpha or higher, and its characteristic exponent lies between 0 and 2. Under such noise, the detection performance of conventional spectrum sensing algorithms degrades markedly. Therefore, the invention proposes a spectrum sensing method based on a Transformer multi-head attention mechanism network under Alpha noise.
Disclosure of Invention
The purpose of the invention is as follows: in a complex and changeable non-Gaussian environment, effectively improving a secondary user's ability to recognize the changing state of the primary user signal has become an important problem in spectrum sensing research. The framework model realized with a Transformer multi-head attention mechanism network can effectively improve detection performance under Alpha noise.
The technical scheme is as follows: the invention provides a spectrum sensing method based on a Transformer multi-head attention mechanism network under Alpha noise, which improves the detection performance of a secondary user with respect to a primary user in a complex noise environment and mainly comprises the following steps:
S10: in the experiment, Alpha-stable noise is used as the background noise for modeling the spectrum sensing algorithm, and digitally modulated signals with different signal-to-noise ratios are used as received samples. The received samples are divided into five groups in a 4:1 ratio and used as training data and test data, respectively;
S20: perform fractional lower-order moment processing on the training data in the training set, and create corresponding data-label sequences for the training data of size 3×26×26 to obtain a label set;
S30: send the labeled data into the spectral convolution layer for coarse local feature extraction;
S40: construct the internal Transformer network, comprising a position coding layer, a multi-head attention mechanism layer and a feed-forward network layer. The structural parameters for initializing the Transformer network mainly comprise the number of encoders, the number of encoder units, the optimization function, the learning rate, and so on;
S50: partition the labeled training data into blocks, apply position coding, and send the position-coded training data into the encoder layer of the Transformer model for training, learning and feature extraction;
S60: send the encoder output to the upsampling layer, integrate the data features through the average pooling layer, and obtain the network's classification result during training through the fully connected layer;
S70: while training the network with the labeled training set, compute the loss value according to the training loss function and plot it as a loss curve; according to the trend of the loss curve, return to S40 and continuously adjust the parameters of the Transformer to obtain a better network classification effect;
S80: send the test data in the test set into the Transformer model that has learned the classification, and compare the label data obtained by network learning with the original label data against a threshold, thereby judging the state of the primary user.
Preferably, the step S20 includes: performing fractional lower-order moment processing on the training data in the training set, as shown in expression (1):
$$Z_i'(j)=\left|Z_i(j)\right|^{p}\,\mathrm{sign}\!\left(Z_i(j)\right)\qquad(1)$$
wherein Z_i(j) represents the signal information received by the i-th secondary user at the j-th instant, Z_i'(j) is the data after fractional lower-order moment processing, p is the order of the statistic, and α is the characteristic exponent. When the processed data are labeled, digital modulation signal samples are denoted by [0,1] and Alpha noise samples by [1,0].
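As a minimal NumPy sketch of this preprocessing and labeling convention (the function names are illustrative, and the value of p is chosen arbitrarily for the example):

```python
# Sketch of the S20 preprocessing: the sign-preserving fractional lower-order
# moment transform of expression (1), followed by the [noise, signal] labeling.
import numpy as np

def flom_preprocess(z: np.ndarray, p: float = 0.5) -> np.ndarray:
    """Apply the fractional lower-order moment transform element-wise."""
    return np.sign(z) * np.abs(z) ** p

def make_labels(is_signal: np.ndarray) -> np.ndarray:
    """Signal samples -> [0, 1], Alpha noise samples -> [1, 0]."""
    labels = np.zeros((len(is_signal), 2))
    labels[is_signal, 1] = 1.0   # digital modulation signal sample
    labels[~is_signal, 0] = 1.0  # Alpha noise sample
    return labels
```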
Preferably, the step S30 includes: the spectral convolution layer performs three convolution operations, which change the shape of the signal data while coarsely extracting the features of local regions.
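As an illustration, the following PyTorch sketch maps a 3×26×26 input to the 1024×4×4 feature map reported in Example 1; the channel widths, kernel sizes and strides are assumptions, since the filing specifies only the input and output shapes.

```python
# Sketch of a three-layer "spectrum convolution" front end (S30).
import torch
import torch.nn as nn

spectral_conv = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),      # 26x26 -> 13x13
    nn.ReLU(),
    nn.Conv2d(64, 256, kernel_size=3, stride=2, padding=1),    # 13x13 -> 7x7
    nn.ReLU(),
    nn.Conv2d(256, 1024, kernel_size=3, stride=2, padding=1),  # 7x7 -> 4x4
)

x = torch.randn(8, 3, 26, 26)                 # a batch of FLOM-processed samples
features = spectral_conv(x)                   # -> (8, 1024, 4, 4)
tokens = features.flatten(2).transpose(1, 2)  # -> (8, 16, 1024), the 16x1024 blocks
```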
Preferably, said S40 comprises the following:
S4001: at the position coding layer: the blocked data are fed into the Transformer network and then position-coded. For data in different dimensions, a periodic function controls the relative position of each signal segment, as shown in expression (2):
$$PE(pos,2e)=\sin\!\left(\frac{pos}{10000^{2e/d_{model}}}\right),\qquad PE(pos,2e+1)=\cos\!\left(\frac{pos}{10000^{2e/d_{model}}}\right)\qquad(2)$$
PE is a two-dimensional matrix; pos represents the position of a signal-data segment within a pulse, e represents the position of a signal-data feature, and d_model represents the dimension of the signal-data features. Using the sin and cos trigonometric functions, the position of each segment's features can be encoded so as to fill the PE matrix, thereby completing the introduction of position coding.
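A minimal PyTorch sketch of this position coding, assuming the standard sinusoidal formulation of expression (2):

```python
# Sketch of the sinusoidal position coding: even feature indices use sin,
# odd indices use cos, exactly as in expression (2).
import torch

def positional_encoding(num_positions: int, d_model: int) -> torch.Tensor:
    pos = torch.arange(num_positions).unsqueeze(1).float()  # (pos, 1)
    e = torch.arange(0, d_model, 2).float()                 # even feature indices
    div = torch.pow(10000.0, e / d_model)
    pe = torch.zeros(num_positions, d_model)
    pe[:, 0::2] = torch.sin(pos / div)
    pe[:, 1::2] = torch.cos(pos / div)
    return pe

pe = positional_encoding(16, 512)  # one row per signal block, added to the tokens
```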
S4002: in the multi-head attention mechanism layer: the Transformer used has six encoders, each consisting of a multi-head attention mechanism and a feed-forward neural network. The expression of the multi-head attention mechanism is as follows:
$$\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$$
$$h_e=\mathrm{Attention}\!\left(QW_e^{Q},\,KW_e^{K},\,VW_e^{V}\right)$$
$$\mathrm{MultiHead}(Q,K,V)=\mathrm{Concat}(h_1,\ldots,h_E)\,W^{O}\qquad(3)$$
The multi-head attention mechanism consists of E scaled dot-product attention mechanisms. In the first expression of (3), the parameters Q (query), K (key) and V (value) jointly determine the internal features of the data: Q represents the query signal matrix, K represents the matrix of correlation information between the queried signal and other signals, and V represents the matrix of queried signals. In the second formula of (3), h_e denotes the e-th attention head, and W_e^Q, W_e^K, W_e^V are all parameter matrices. In the third formula of (3), the Concat function splices all the scaled dot-product attention heads together.
S4003: the feed-forward network layer comprises two fully-connected layers, and the activation function used is the GELU.
Preferably, in S80, if the output test result is smaller than the threshold, the signal of the primary user cannot be detected in the frequency band; if the output test result is larger than the threshold, the signal of the primary user exists in the detected frequency band, and the secondary user cannot access the band.
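A minimal sketch of this decision rule, assuming the two network outputs follow the [noise, signal] label convention above and a softmax turns them into probabilities; the threshold value is illustrative.

```python
# Sketch of the S80 decision: compare the soft output for the "signal present"
# class against a threshold to decide whether the primary user occupies the band.
import torch

def primary_user_present(logits: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    probs = torch.softmax(logits, dim=-1)
    return probs[:, 1] > threshold  # True: primary user present, band occupied
```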
Different from existing processing methods, the invention has the following beneficial effects:
1. The invention preprocesses the collected observation data with fractional lower-order moments, builds a Transformer network model, and sends the labeled training data into the network, training it by continuously adjusting the network parameters. Finally, the test result obtained through the fully connected layer is compared with a threshold, and the state of the primary user is judged accurately.
2. The fractional lower-order moment controls the impulse amplitude of the non-Gaussian noise by adjusting the characteristic exponent, thereby reducing the influence of noise in a non-Gaussian environment and improving the detection efficiency of the secondary user.
3. The spectral convolution layers allow coarse extraction of local signal features.
4. The Transformer's multi-head attention mechanism accelerates parallel operation on the data and improves the accuracy of data feature extraction.
Drawings
The following describes embodiments of the present invention in more detail with reference to the accompanying drawings.
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a diagram of Transformer network feature extraction according to the present invention;
FIG. 3 is a flow chart illustrating the operation of the self-attention mechanism of the present invention;
FIG. 4 is a flow chart of the operation of the multi-head attention mechanism of the present invention;
FIG. 5 is a graph of ROC curves for different activation functions in accordance with the present invention;
FIG. 6 is a graph of ROC curves for different signal-to-noise ratios in accordance with the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings.
The invention provides a spectrum sensing method based on a Transformer multi-head attention mechanism network under Alpha noise, which specifically comprises the following steps:
Step 10: collect digital modulation signal samples and Alpha noise samples, divide them into a training set and a test set by five-fold cross-validation, and then go to step 20;
Step 20: perform fractional lower-order moment processing on the training data in the training set, create corresponding data-label sequences for the training data of size 3×26×26 to obtain a label set, and then go to step 30;
Step 30: send the labeled data into the spectral convolution layer for coarse local feature extraction, and go to step 40;
Step 40: construct the internal Transformer network, comprising a position coding layer, a multi-head attention mechanism layer and a feed-forward network layer. Initialize the structural parameters of the Transformer network, which mainly comprise the number of encoders, the number of encoder units, the optimization function, the learning rate, and so on, and go to step 50;
Step 50: partition the labeled training data into blocks, apply position coding, send the data into the encoder layer of the Transformer model for training, learning and feature extraction, and go to step 60;
Step 60: send the encoder output to the upsampling layer, integrate the data features through the average pooling layer, obtain the network's classification result during training through the fully connected layer, and go to step 70;
Step 70: while training the network with the labeled training set, compute the loss value according to the training loss function and plot it as a loss curve; according to the trend of the loss curve, return to step 40 and continuously adjust the parameters of the Transformer to obtain a better network classification effect, then go to step 80;
Step 80: send the test data in the test set into the Transformer model that has learned the classification, and compare the label data obtained by network learning with the original label data against a threshold, thereby judging the state of the primary user.
Said step 40 comprises the following:
Step 4001: at the position coding layer: the blocked data are fed into the Transformer network and then position-coded. For data in different dimensions, a periodic function controls the relative position of each signal segment, as shown in expression (4):
$$PE(pos,2e)=\sin\!\left(\frac{pos}{10000^{2e/d_{model}}}\right),\qquad PE(pos,2e+1)=\cos\!\left(\frac{pos}{10000^{2e/d_{model}}}\right)\qquad(4)$$
PE is a two-dimensional matrix; pos represents the position of a signal-data segment within a pulse, e represents the position of a signal-data feature, and d_model represents the dimension of the signal-data features. Using the sin and cos trigonometric functions, the position of each segment's features can be encoded so as to fill the PE matrix, thereby completing the introduction of position coding.
Step 4002: in the multi-head attention mechanism layer: the Transformer used has six encoders and consists of a multi-headed attention mechanism and a feedforward neural network. The expression for the multi-head attention mechanism is as follows:
$$\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$$
$$h_e=\mathrm{Attention}\!\left(QW_e^{Q},\,KW_e^{K},\,VW_e^{V}\right)$$
$$\mathrm{MultiHead}(Q,K,V)=\mathrm{Concat}(h_1,\ldots,h_E)\,W^{O}\qquad(5)$$
The multi-head attention mechanism consists of E scaled dot-product attention mechanisms. In the first expression of (5), the parameters Q (query), K (key) and V (value) jointly determine the internal features of the data: Q represents the query signal matrix, K represents the matrix of correlation information between the queried signal and other signals, and V represents the matrix of queried signals. In the second formula of (5), h_e denotes the e-th attention head, and W_e^Q, W_e^K, W_e^V are all parameter matrices. In the third formula of (5), the Concat function splices all the scaled dot-product attention heads together.
Step 4003: the feed-forward network layer comprises two fully-connected layers, and the activation function used is the GELU.
Preferably, in the step 80, if the output test result is smaller than the threshold, the signal of the primary user cannot be detected in the frequency band; if the output test result is larger than the threshold, the signal of the primary user exists in the detected frequency band, and the secondary user cannot access the band.
Example 1
The invention is further described below with reference to the accompanying drawings. The following examples are provided to illustrate the technical solutions of the present invention clearly, but are not intended to limit the invention.
As shown in figs. 2 to 4, the extraction of signal features by the network is described in detail. Fig. 2 shows the process by which the Transformer network extracts signal features. The observation data processed by the fractional lower-order moment serve as the input data of the model, at this point of size 3×26×26. Three spectral convolution layers then change the shape of the input data to 1024×4×4, and the reshaped data are processed in blocks. The blocked data, of size 16×1024, are then sent into the Transformer network for feature extraction. Besides feeding the blocked data into the network, the blocks must also be position-coded, which helps capture the relevant information between the signal segments. The data then pass through six encoder modules, each containing one attention layer and one forward propagation layer, which increases the speed of batch processing. Finally, the encoder output is put through the upsampling layer to scale the shape of the data, and the scaled data are sent to the average pooling layer and the fully connected layer to obtain the network output; a sketch of this classification head is given below.
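The following PyTorch sketch illustrates the classification head just described (upsampling, average pooling, fully connected layer); the upsampling factor and the pooling axis are assumptions consistent with the 16×1024 token shape.

```python
# Sketch of the classification head: upsample the encoder output, average-pool
# over the token axis, and classify with a fully connected layer.
import torch
import torch.nn as nn

class ClassifierHead(nn.Module):
    def __init__(self, d_model: int = 1024, n_classes: int = 2):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=2)  # scales the token axis
        self.pool = nn.AdaptiveAvgPool1d(1)          # average pooling over tokens
        self.fc = nn.Linear(d_model, n_classes)      # fully connected layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, 16, 1024)
        x = self.upsample(x.transpose(1, 2))         # (batch, 1024, 32)
        x = self.pool(x).squeeze(-1)                 # (batch, 1024)
        return self.fc(x)                            # (batch, 2) class scores

head = ClassifierHead()
out = head(torch.randn(8, 16, 1024))  # -> (8, 2)
```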
FIG. 3 is a flow chart of the self-attention mechanism.
Fig. 4 shows the operation flow of the multi-head attention mechanism, whose main feature is the ability to operate in parallel and batch-process data. The specific formula of the multi-head attention mechanism is as follows:
$$\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$$
Q denotes the query signal matrix, K denotes the matrix of correlation information between the queried signal and other signals, V denotes the matrix of queried signals, and d_k represents the dimension of Q and K. In the present invention, the internal features of the extracted data are determined by the parameters Q (query), K (key) and V (value). The number of heads is h = 8 and d_k = d_model/h = 512/8 = 64, where d_k must be a number whose square root can be taken. Dividing the transposed inner product of Q and K by √d_k scales the data, and the scaled data have strong generalization capability.
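A minimal sketch of this scaled dot-product attention; the function is illustrative and operates on already-projected Q, K, V matrices.

```python
# Sketch of scaled dot-product attention: the Q K^T inner product is divided
# by sqrt(d_k) (= sqrt(64) = 8 for h = 8, d_model = 512) before the softmax.
import math
import torch

def scaled_dot_product_attention(Q: torch.Tensor, K: torch.Tensor,
                                 V: torch.Tensor) -> torch.Tensor:
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)  # scaling tames the softmax
    return torch.softmax(scores, dim=-1) @ V
```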
Example 2
The invention is further described below with reference to the accompanying drawings. The following examples are provided to illustrate the technical solutions of the present invention clearly, but are not intended to limit the invention.
In the selection of the Transformer model, the cross-entropy loss function (Cross Entropy Loss) is adopted as the loss function. The optimizer is Adam, and the learning rate is usually set to 0.001; following the trend of the gradient descent rate, the learning rate can be adjusted appropriately to ensure that the network converges normally. For the values of some parameters, see Table 1:
TABLE 1 parameter settings
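A minimal sketch of this training configuration, with a placeholder model standing in for the full Transformer network and a dummy batch for illustration.

```python
# Sketch of the Example 2 training setup: cross-entropy loss and Adam at
# learning rate 0.001, as stated in the text.
import torch
import torch.nn as nn

model = nn.Linear(512, 2)  # placeholder for the full Transformer classifier
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for x, y in [(torch.randn(8, 512), torch.randint(0, 2, (8,)))]:  # dummy batch
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```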
In the experiments of the invention, an ROC curve is adopted to measure the network's spectrum sensing performance; the abscissa is the false-alarm probability and the ordinate is the detection probability. When the area under the ROC curve is between 0.5 and 1.0, the detection effect is good, and the closer the ROC curve is to the upper left corner, the higher the detection accuracy. In addition, the data processed by the fractional lower-order moment and labeled are sent into the Transformer network for training, and with all other conditions identical, the detection performance of the proposed Transformer is compared against the traditional ED (energy detection) algorithm by adjusting the network parameters.
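A minimal sketch of this ROC evaluation, using scikit-learn and dummy scores for illustration.

```python
# Sketch of the ROC evaluation: false-alarm probability on the x-axis,
# detection probability on the y-axis, with the area under the curve as summary.
import numpy as np
from sklearn.metrics import roc_curve, auc

y_true = np.array([0, 0, 1, 1])            # 1: primary user present
y_score = np.array([0.1, 0.4, 0.35, 0.8])  # network soft outputs (dummy values)
pfa, pd, thresholds = roc_curve(y_true, y_score)
print(f"area under ROC: {auc(pfa, pd):.3f}")  # 0.5-1.0 indicates useful detection
```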
The invention introduces a nonlinear function as the activation function of the hidden layer to enhance the expressive power of the network, improve the modeling precision for complex functions, and reduce overfitting. In the experiments of the invention, the activation functions compared were ReLU, Sigmoid, and Tanh.
As can be seen from fig. 5, with all other conditions identical, the model using ReLU as the activation function has the highest detection probability and the best performance, followed by the Sigmoid function and finally the Tanh function. Therefore, ReLU is chosen as the activation function of the hidden layer.
Example 3
The invention is further described below with reference to the accompanying drawings. The following examples are provided to illustrate the technical solutions of the present invention clearly, but are not intended to limit the scope of protection of the invention.
Under non-Gaussian, non-fading channel conditions, we compare the ROC curves of the conventional energy detection algorithm and the improved Transformer network at different signal-to-noise ratios.
Fig. 6 shows the ROC performance of the Transformer model after fractional lower-order moment processing under the Alpha noise model, at different signal-to-noise ratios and without channel fading. As can be seen from the figure, at low signal-to-noise ratio the noise component is clearly higher than the signal component, yet the data processed by the fractional lower-order moment exhibit distinct features. At a signal-to-noise ratio of 0 dB and a false-alarm probability of 0.1, the detection probability of the proposed Transformer neural network is the highest, close to 1. Even at a signal-to-noise ratio of -20 dB, with other conditions the same, the detection effect at a false-alarm probability of 0.1 remains good.

Claims (5)

1. A spectrum sensing method based on a Transformer multi-head attention mechanism network under Alpha noise, characterized by comprising the following steps:
S10: in the experiment, Alpha-stable noise is used as the background noise for modeling the spectrum sensing algorithm, and digitally modulated signals with different signal-to-noise ratios are used as received samples; the received data samples are divided into five groups in a 4:1 ratio and used as training data and test data, respectively;
S20: perform fractional lower-order moment processing on the training data in the training set, and create corresponding data-label sequences for the training data of size 3×26×26 to obtain a label set;
S30: send the labeled data into the spectral convolution layer for coarse local feature extraction;
S40: construct the internal Transformer network, comprising a position coding layer, a multi-head attention mechanism layer and a feed-forward network layer, wherein the structural parameters for initializing the Transformer network mainly comprise the number of encoders, the number of encoder units, the optimization function, the learning rate, and so on;
S50: partition the labeled training data into blocks, apply position coding, and send the position-coded training data into the encoder layer of the Transformer model for training, learning and feature extraction;
S60: send the encoder output to the upsampling layer, integrate the data features through the average pooling layer, and obtain the network's classification result during training through the fully connected layer;
S70: while training the network with the labeled training set, compute the loss value according to the training loss function and plot it as a loss curve; according to the trend of the loss curve, return to S40 and continuously adjust the parameters of the Transformer to obtain a better network classification effect;
S80: send the test data in the test set into the Transformer model that has learned the classification, and compare the label data obtained by network learning with the original label data against a threshold, thereby judging the state of the primary user.
2. The spectrum sensing method based on a Transformer multi-head attention mechanism network under Alpha noise according to claim 1, characterized in that S20 comprises: performing fractional lower-order moment processing on the training data in the training set, as shown in expression (1):
$$Z_i'(j)=\left|Z_i(j)\right|^{p}\,\mathrm{sign}\!\left(Z_i(j)\right)\qquad(1)$$
wherein Z_i(j) represents the signal information received by the i-th secondary user at the j-th instant, Z_i'(j) is the data after fractional lower-order moment processing, p is the order of the statistic, and α is the characteristic exponent; when the processed data are labeled, digital modulation signal samples are denoted by [0,1] and Alpha noise samples by [1,0].
3. The spectrum sensing method based on a Transformer multi-head attention mechanism network under Alpha noise according to claim 1, characterized in that S30 comprises: the spectral convolution layer performs three convolution operations, which change the shape of the signal data while coarsely extracting the features of local regions.
4. The spectrum sensing method based on a Transformer multi-head attention mechanism network under Alpha noise according to claim 1, characterized in that S40 comprises the following steps:
S4001: at the position coding layer: the blocked data are fed into the Transformer network and then position-coded; for data in different dimensions, a periodic function controls the relative position of each signal segment, and the expression is as follows:
$$PE(pos,2e)=\sin\!\left(\frac{pos}{10000^{2e/d_{model}}}\right),\qquad PE(pos,2e+1)=\cos\!\left(\frac{pos}{10000^{2e/d_{model}}}\right)$$
PE is a two-dimensional matrix; pos represents the position of a signal-data segment within a pulse, e represents the position of a signal-data feature, and d_model represents the dimension of the signal-data features; using the sin and cos trigonometric functions, the position of each segment's features can be encoded so as to fill the PE matrix, thereby completing the introduction of position coding;
S4002: in the multi-head attention mechanism layer: the Transformer used has six encoders, each consisting of a multi-head attention mechanism and a feed-forward neural network, and the expression of the multi-head attention mechanism is as follows:
$$\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V$$
$$h_e=\mathrm{Attention}\!\left(QW_e^{Q},\,KW_e^{K},\,VW_e^{V}\right)$$
$$\mathrm{MultiHead}(Q,K,V)=\mathrm{Concat}(h_1,\ldots,h_E)\,W^{O}\qquad(3)$$
the multi-head attention mechanism consists of E scaled dot-product attention mechanisms; in the first expression of (3), the parameters Q (query), K (key) and V (value) jointly determine the internal features of the data: Q represents the query signal matrix, K represents the matrix of correlation information between the queried signal and other signals, and V represents the matrix of queried signals; in the second formula of (3), h_e denotes the e-th attention head, and W_e^Q, W_e^K, W_e^V are all parameter matrices; in the third formula of (3), the Concat function splices all the scaled dot-product attention heads together;
S4003: the feed-forward network layer comprises two fully connected layers, and the activation function used is GELU.
5. The spectrum sensing method based on a Transformer multi-head attention mechanism network under Alpha noise according to claim 1, characterized in that S80 comprises: if the output test result is smaller than the threshold, the signal of the primary user cannot be detected in the frequency band; if the output test result is larger than the threshold, the signal of the primary user exists in the detected frequency band, and the secondary user cannot access the band.
CN202111625177.7A 2021-12-28 2021-12-28 Spectrum sensing method based on Transformer multi-head attention mechanism network under Alpha noise Pending CN115694692A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111625177.7A CN115694692A (en) 2021-12-28 2021-12-28 Spectrum sensing method based on Transformer multi-head attention mechanism network under Alpha noise

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111625177.7A CN115694692A (en) 2021-12-28 2021-12-28 Spectrum sensing method based on Transformer multi-head attention mechanism network under Alpha noise

Publications (1)

Publication Number Publication Date
CN115694692A true CN115694692A (en) 2023-02-03

Family

ID=85059972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111625177.7A Pending CN115694692A (en) 2021-12-28 2021-12-28 Spectrum sensing method based on Transformer multi-head attention mechanism network under Alpha noise

Country Status (1)

Country Link
CN (1) CN115694692A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115834310A (en) * 2023-02-15 2023-03-21 四川轻化工大学 Communication signal modulation identification method based on LGTransformer
CN116996111A (en) * 2023-08-23 2023-11-03 中国科学院微小卫星创新研究院 Satellite spectrum prediction method and device and electronic equipment


Similar Documents

Publication Publication Date Title
CN115694692A (en) Spectrum sensing method based on Transformer multi-head attention mechanism network under Alpha noise
CN112731309B (en) Active interference identification method based on bilinear efficient neural network
CN111783558A (en) Satellite navigation interference signal type intelligent identification method and system
Zhang et al. Modulation recognition of underwater acoustic signals using deep hybrid neural networks
CN112712063A (en) Tool wear value monitoring method, electronic device and storage medium
CN110224771B (en) Spectrum sensing method and device based on BP neural network and information geometry
Ni et al. LPI radar waveform recognition based on multi-resolution deep feature fusion
CN108090462A (en) A kind of Emitter Fingerprint feature extracting method based on box counting dimension
CN114186234A (en) Malicious code detection algorithm based on lightweight network ESPNet
Yang et al. One-dimensional deep attention convolution network (ODACN) for signals classification
CN112347910A (en) Signal fingerprint identification method based on multi-mode deep learning
CN116628566A (en) Communication signal modulation classification method based on aggregated residual transformation network
Tan et al. Specific emitter identification based on software-defined radio and decision fusion
CN111669820B (en) Density peak value abnormity detection method and intelligent passive indoor positioning method
CN110830939B (en) Positioning method based on improved CPN-WLAN fingerprint positioning database
CN115687989A (en) Multi-user spectrum sensing method based on Transformer-DNN hybrid network under Laplace noise
CN115238748B (en) Modulation identification method based on Transformer and decision fusion
CN116257780A (en) Unsupervised feature extraction and self-adaptive DBSCAN clustering method based on blind signal separation
CN116405139A (en) Spectrum prediction model and method based on Informar
CN105792232B (en) Wireless channel " fingerprint " feature dynamic modelling method based on UKFNN
CN114998731A (en) Intelligent terminal navigation scene perception identification method
Gu et al. Exploiting ResNeXt with Convolutional Shortcut for Signal Modulation Classification at Low SNRs
CN112529035B (en) Intelligent identification method for identifying individual types of different radio stations
CN116484180B (en) System and method for extracting communication signal gene
CN115276856B (en) Channel selection method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication