CN117932312B - Radio positioning recognition system based on space-time attention network and contrast loss - Google Patents


Publication number
CN117932312B
Authority
CN
China
Prior art keywords
data
module
feature
neural network
radio
Prior art date
Legal status
Active
Application number
CN202410334334.6A
Other languages
Chinese (zh)
Other versions
CN117932312A (en)
Inventor
许奕东
王洪君
马艳庆
杨阳
刘云霞
宋长军
李芳
王百洋
王娜
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University
Priority to CN202410334334.6A
Publication of CN117932312A
Application granted
Publication of CN117932312B
Legal status: Active
Anticipated expiration


Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 — Road transport of goods or passengers
    • Y02T 10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 — Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a radio positioning and recognition system based on a space-time attention network and contrast loss, belonging to the field of radio positioning and recognition, and comprising an unmanned aerial vehicle (UAV), spectrum analyzers, and a signal modulation terminal. The UAV generates radio signals; the spectrum analyzers receive the radio signals generated by the UAV and store them as a radio signal sample data set; the signal modulation terminal comprises a data processing module and a neural network model. The data processing module preprocesses the sample data set, and the neural network model trains on the input signals and updates its parameters, finally outputting the distance and direction of the radio signal source and the modulation type of the radio signal. The invention accounts for the interaction of the spatial and temporal features of radio signals, combines the advantages of convolutional neural networks (CNN), long short-term memory networks (LSTM), and attention mechanisms, reduces the computational complexity of the model, and improves the accuracy of radio signal positioning and modulation-type classification.

Description

Radio positioning recognition system based on space-time attention network and contrast loss
Technical Field
The invention belongs to the field of radio positioning identification, and particularly relates to a radio positioning identification system based on a space-time attention network and contrast loss.
Background
Currently, unmanned aerial vehicles (UAVs) are applied in many fields. However, UAV flights may threaten public safety or violate laws and regulations, so the presence and activity of UAVs must be discovered in time and necessary protective measures, such as jamming, interception, or expulsion, must be adopted. Meanwhile, malicious use of UAVs may threaten important sites, events, or facilities, so a technology capable of promptly discovering and identifying potential UAV threats is needed in order to take corresponding countermeasures and safeguard public safety. Locating a UAV with a radio positioning system and identifying the modulation type of the radio signals it transmits can help ensure public safety and maintain social order.
Conventional radiolocalization systems typically rely on a priori knowledge and models for signal processing and position computation, requiring assumptions and modeling of signal features, propagation models, etc., which can lead to poor performance of the system in processing complex scenarios or unknown signal types. Moreover, conventional radiolocalization systems are typically designed and optimized for specific application scenarios, whose performance and applicability may be limited by the scenario and environment, and in the face of new application scenarios or requirements, the systems need to be redesigned and adjusted, resulting in lower flexibility of deployment and application. Meanwhile, the traditional radio positioning system has relatively weak anti-interference capability on signal noise and interference, and is easily influenced by external interference, so that the positioning result is inaccurate or fails. In contrast, a radio positioning system based on a deep learning model has certain advantages in the aspects of adaptability, generalization capability, interference resistance capability and the like.
Disclosure of Invention
The invention aims to solve one of the above technical problems by providing a radio positioning and identification system based on a space-time attention network and contrast loss, which uses a deep learning model combining the advantages of a convolutional neural network (CNN), a long short-term memory (LSTM) network, and an attention mechanism.
In order to achieve the above purpose, the invention adopts the following technical scheme:
The radio positioning recognition system based on the space-time attention network and the contrast loss comprises an unmanned aerial vehicle, a plurality of frequency spectrographs and a signal modulation terminal;
generating a radio signal in the flight process of the unmanned aerial vehicle;
The plurality of frequency spectrographs are respectively arranged at a plurality of different signal receiving points and are used for receiving radio signals generated by the unmanned aerial vehicle, the frequency spectrographs capture and display the received radio signals in the receiving process, and the received radio signals are stored to form a radio signal sample data set;
The signal modulation terminal comprises a data processing module and a multi-output neural network model;
The data processing module is used for dividing the radio signal sample data set into a training set, a verification set and a test set, preprocessing radio signals in the training set, the verification set and the test set, and inputting data in the training set, the verification set and the test set into a multi-output neural network model respectively; preprocessing includes transforming the radio signal to form an I/Q vector and an a/P vector;
The multi-output neural network model comprises a first identification module and a second identification module. The first identification module is used for extracting spatial features among different feature values in the data; the second identification module is used for extracting time-series features among different feature values in the data. A classifier is arranged after the first identification module and is used to terminate the operation early when possible.
The data processing module is further used for storing a plurality of modulation types of radio signal data, wherein the modulation types which can be classified in the first identification module are divided into a set A;
After receiving training set data input by a data processing module, the multi-output neural network model inputs data of which the modulation type belongs to a set A in the training set into a first recognition module for training, and sequentially inputs data which does not belong to the set A into the first recognition module and a second recognition module for training to obtain a trained multi-output neural network model;
After receiving the verification set data input by the data processing module, the trained multi-output neural network model performs feature fusion on data whose modulation type belongs to set A after the first recognition module has learned their spatial features, and performs feature fusion on data not belonging to set A after the second recognition module has learned their temporal features, thereby obtaining direction and modulation-type features as well as distance features. A loss function with contrast loss is used to compute the loss on the direction and modulation-type features, and a mean square error loss function is used to compute the loss on the distance features; the parameters of the multi-output neural network model are updated through back propagation until the loss function converges, yielding the parameter-updated multi-output neural network model;
After receiving the test set data input by the data processing module, the multi-output neural network model after updating the parameters automatically locates and modulates and classifies the radio signals in the test set, and finally outputs the distance and direction of the radio signal source and the modulation type of the radio signal.
In some embodiments of the present invention, the I/Q vector includes an in-phase component and a quadrature component, represented by equation (1):
$X_{I/Q} = [x_I, x_Q]$  (1);
where $X_{I/Q}$ is the I/Q vector of the input data, $x$ is the input radio signal data, $x_I$ is the in-phase component of the data, and $x_Q$ is the quadrature component of the data;
The A/P vector includes an amplitude component and a phase component, expressed by equation (2):
$X_{A/P} = [x_A, x_P]$  (2);
where $X_{A/P}$ is the A/P vector of the input data, $x$ is the input radio signal data, $x_A$ is the amplitude component of the data, and $x_P$ is the phase component of the data;
The conversion to the amplitude component and the phase component is represented by equations (3) and (4):
$x_A = \sqrt{x_I^2 + x_Q^2}$  (3);
$x_P = \arctan(x_Q / x_I)$  (4);
where $x_I$ is the in-phase component of the data, $x_Q$ is the quadrature component of the data, $x_A$ is the amplitude component of the data, and $x_P$ is the phase component of the data.
In some embodiments of the invention, the first recognition module includes an I/Q stream module for processing I/Q vectors and an a/P stream module for processing a/P vectors; the I/Q flow module and the A/P flow module of the first identification module have the same structure and are formed by sequentially combining a plurality of CNN layers and a plurality of spatial attention mechanisms.
In some embodiments of the present invention, the spatial attention mechanism employs a squeeze and stimulus attention mechanism, and the method for the first recognition module to complete the spatial attention mechanism includes:
Constructing a corresponding SE block after each CNN layer of the first identification module to perform feature recalibration;
initializing the extracted feature weight parameters through a global average pooling operation: the squeeze operation extracts features by performing global average pooling on the feature map U over the spatial dimensions H × W, obtaining a channel descriptor z whose c-th element $z_c$ is represented by equation (5):
$z_c = F_{sq}(u_c) = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} u_c(i, j)$  (5);
where $u_c$ is the feature map of the c-th channel of U, H and W are the height and width of the feature map respectively, and $F_{sq}(\cdot)$ is the squeeze (feature extraction) function;
scaling and nonlinear transformation are performed by the excitation operation to generate a feature weight vector, using an activation function on the channel descriptor to control the weight parameter values; the nonlinear transformation is represented by equation (6):
$s = F_{ex}(z, W) = \sigma(W_2 \, \delta(W_1 z))$  (6);
where $s$ is the generated feature weight vector, $\sigma$ is the sigmoid activation function, $\delta$ is the ReLU function, and $W_1 \in \mathbb{R}^{(C/r) \times C}$ and $W_2 \in \mathbb{R}^{C \times (C/r)}$ are the transformation weight matrices with reduction ratio r;
multiplying the corresponding channel features by the feature weight vector, represented by equation (7):
$\tilde{x}_c = F_{scale}(u_c, s_c) = s_c \cdot u_c$  (7);
where $\tilde{x}_c$ is the feature obtained by multiplying the corresponding channel feature by its feature weight, and $F_{scale}(u_c, s_c)$ denotes channel-wise multiplication between the scalar $s_c$ and the feature map $u_c$, for $c = 1, \dots, C$.
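A minimal numpy sketch of the squeeze-and-excitation recalibration of equations (5)–(7). The random weight matrices and the reduction ratio r = 2 are illustrative assumptions; in the patent these would be learned parameters of the first recognition module:

```python
import numpy as np

def squeeze_excite(U, W1, W2):
    """SE recalibration of a feature map U of shape (C, H, W).

    Squeeze: global average pool per channel (eq. (5)).
    Excite:  s = sigmoid(W2 @ relu(W1 @ z)) (eq. (6)).
    Scale:   multiply each channel of U by its weight (eq. (7)).
    """
    z = U.mean(axis=(1, 2))                                      # squeeze: (C,)
    s = 1.0 / (1.0 + np.exp(-(W2 @ np.maximum(W1 @ z, 0.0))))    # excite: (C,)
    return U * s[:, None, None]                                  # channel-wise rescale

# Toy example: C = 4 channels, reduction ratio r = 2
rng = np.random.default_rng(0)
C, r = 4, 2
U = rng.standard_normal((C, 8, 8))
W1 = rng.standard_normal((C // r, C))
W2 = rng.standard_normal((C, C // r))
V = squeeze_excite(U, W1, W2)
```

Because the sigmoid weights lie in (0, 1), the recalibrated map never exceeds the original in magnitude; channels deemed unimportant are attenuated rather than zeroed out.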
In some embodiments of the invention, the second recognition module includes an I/Q stream module for processing I/Q vectors and an a/P stream module for processing a/P vectors; the I/Q flow module and the A/P flow module of the second identification module have the same structure and are formed by combining a plurality of LSTM layers with different lengths and a multi-scale time attention mechanism.
In some embodiments of the present invention, the second recognition module performs data splicing and clipping on the input data to form a plurality of groups of data with different lengths;
After obtaining multiple groups of data with different lengths, the second recognition module respectively inputs the multiple groups of data with different lengths into an LSTM layer with corresponding lengths for training, and the LSTM layer extracts time correlation by using an input gate and a forgetting gate to obtain the characteristics of multiple groups of different time sequences;
splicing and combining the characteristics of a plurality of groups of different time sequences into a characteristic group with a preset length;
the feature group is input into a multi-scale time attention mechanism for feature enhancement, the multi-scale time attention mechanism adopts an extrusion and excitation attention mechanism, and different weights are allocated to each feature channel by learning the importance degree of each feature channel so as to adjust the importance of each feature channel.
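The splice-and-crop step above can be sketched as follows. The patent does not give the exact splicing rule, so cropping a long sequence and tiling a short one to reach each target length (256/512/1024, matching the LSTM input lengths of the described embodiment) is an illustrative assumption:

```python
import numpy as np

def multiscale_views(x, lengths=(256, 512, 1024)):
    """Produce views of a (2, T) I/Q or A/P sequence at several lengths.

    Each view would feed the LSTM layer of matching input length; the
    crop/tile policy here is an assumption, not the patent's exact rule.
    """
    views = []
    for L in lengths:
        if x.shape[1] >= L:
            views.append(x[:, :L])                      # crop down to length L
        else:
            reps = -(-L // x.shape[1])                  # ceil division
            views.append(np.tile(x, (1, reps))[:, :L])  # tile up, then crop
    return views

x = np.zeros((2, 1024))
views = multiscale_views(x)
```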
In some embodiments of the invention, the trained multi-output neural network model fuses the feature functions produced by the I/Q stream module (from the I/Q vector $X_{I/Q}$) and the A/P stream module (from the A/P vector $X_{A/P}$) into a final feature $F$ through an outer product, completing feature fusion as expressed by equation (8):
$F = f_{I/Q} \otimes f_{A/P}$  (8);
where $f_{I/Q}$ and $f_{A/P}$ represent the feature functions of the I/Q stream module and the A/P stream module, respectively, and $\otimes$ denotes the outer product.
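The outer-product fusion of equation (8) is a single numpy call; `fuse_outer` and the toy feature vectors below are illustrative stand-ins for the two stream outputs:

```python
import numpy as np

def fuse_outer(f_iq, f_ap):
    """Fuse I/Q-stream and A/P-stream feature vectors by outer product (eq. (8))."""
    return np.outer(f_iq, f_ap)

# Every pairwise product of the two streams' features appears in the result,
# which is what lets the fused representation capture pairwise interactions.
F = fuse_outer(np.array([1.0, 2.0]), np.array([3.0, 4.0]))
```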
In some embodiments of the present invention, a loss function with contrast loss is used to enhance differences between modulation types of different radio signals and directions of sources of the radio signals in a training process of a multi-output neural network model, so as to improve discrimination capability of a representation learned by the multi-output neural network model;
The loss function with contrast loss comprises a cross-entropy loss, an L2 regularization term, and a Manhattan-distance-based contrast loss, represented by equation (9):
$L = L_{CE} + L_{reg} + L_{MCL}$  (9);
where $L$ is the loss function with contrast loss, $L_{CE}$ is the cross-entropy loss function, $L_{reg}$ is the L2 regularization term, and $L_{MCL}$ is the Manhattan-based contrast loss function;
The cross-entropy loss function is used to maximize the posterior probability of correct classification, represented by equation (10):
$L_{CE} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{c=1}^{M} y_{ic} \log(p_{ic})$  (10);
where M is the number of modulation classes, N is the number of samples, $y_{ic}$ is a sign function equal to 1 if sample i belongs to category c and 0 otherwise, and $p_{ic}$ is the predicted probability that sample i belongs to category c;
The L2 regularization term is used to prevent the network weights from growing too large, so as to avoid overfitting; it is represented by equation (11):
$L_{reg} = \lambda \, L_{MSE}$  (11);
where $\lambda$ is a hyperparameter used to control the size of the regularization term and $L_{MSE}$ is the mean square error, expressed by equation (12):
$L_{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - f(x_i; w) \right)^2$  (12);
where $y_i$ is the original data and $f(x_i; w)$ is the data predicted using the parameters $w$;
The Manhattan-based contrast loss function is represented by equation (13):
$L_{MCL} = \sum_{i,j} \Big( \mathbb{1}[y_i = y_j] \, d(f_i, f_j) + \mathbb{1}[y_i \neq y_j] \max\big(0,\, m - d(f_i, f_j)\big) \Big)$  (13);
where $\mathbb{1}[\cdot]$ is an indicator function, $y_i$ and $y_j$ are the modulation types to which samples i and j belong, $m$ is a custom threshold used to adjust the difference between inter-class features, and $d(f_i, f_j)$ represents the Manhattan distance between a pair of feature vectors in the feature space, represented by equation (14):
$d(f_i, f_j) = \sum_{k=1}^{n} \left| f_{i,k} - f_{j,k} \right|$  (14);
where n is the dimension of the data space and $f_{i,k}$, $f_{j,k}$ are the k-th components of the respective feature vectors.
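A per-pair sketch of the Manhattan-based contrast loss of equations (13) and (14). The exact pair weighting in the patent's equation (13) is not recoverable from the text, so this uses the standard contrastive form: same-class pairs are pulled together, different-class pairs are pushed at least `margin` apart:

```python
import numpy as np

def manhattan_contrastive(f1, f2, same_class, margin=1.0):
    """Contrastive loss for one feature pair, using Manhattan distance (eq. (14))."""
    d = np.sum(np.abs(f1 - f2))        # Manhattan distance between the features
    if same_class:
        return d                       # pull same-modulation pairs together
    return max(0.0, margin - d)        # push different pairs past the margin

loss_pos = manhattan_contrastive(np.array([0.0, 0.0]), np.array([0.2, 0.1]), True)
loss_neg = manhattan_contrastive(np.array([0.0, 0.0]), np.array([0.2, 0.1]), False, margin=1.0)
```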
In some embodiments of the present invention, the parameter-updated multi-output neural network model inputs the radio signals in the test set into the first recognition module to obtain the I/Q data and the A/P data with learned spatial features;
Storing the I/Q data and the A/P data which are learned to the spatial characteristics into a data buffer pool, inputting the I/Q data and the A/P data into a classifier after a first identification module, and terminating in advance if the classifier after the first identification module can judge that the current radio signal data which are being processed are of the modulation type in the set A;
If the modulation type in the set A cannot be distinguished, the stored I/Q and A/P data which are learned to the spatial characteristics are extracted from the data buffer pool, and are input into a second identification module for learning and distinguishing.
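The two-stage flow with early termination can be sketched as follows; all of the function and variable names here are illustrative stand-ins, not identifiers from the patent:

```python
def classify_with_early_exit(x, first_module, first_classifier,
                             second_module, second_classifier, set_a):
    """Two-stage inference: exit after stage one if its label is in set A.

    Stage one (CNN + spatial attention) buffers its features; only when its
    prediction is not an easily-separable type in set A are those features
    forwarded to stage two (LSTM + temporal attention).
    """
    feats = first_module(x)              # buffered spatial features
    label = first_classifier(feats)
    if label in set_a:
        return label                     # early exit: skip the second stage
    return second_classifier(second_module(feats))

# Toy stand-ins: stage one recognizes "BPSK"; stage two refines harder cases
result_easy = classify_with_early_exit(
    1, lambda x: x, lambda f: "BPSK", lambda f: f, lambda f: "16QAM", {"BPSK"})
result_hard = classify_with_early_exit(
    2, lambda x: x, lambda f: "unknown", lambda f: f, lambda f: "16QAM", {"BPSK"})
```

The design point is that the second-stage LSTM cost is only paid for signals the cheap CNN stage cannot already separate.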
In some embodiments of the present invention, the mean square error loss function is used to determine the distance from which the radio signal data originates, represented by equation (15):
$L_{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - f(x_i) \right)^2$  (15);
where $y_i$ is the target value, $f(x_i)$ is the predicted value, and n is the number of samples.
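Equation (15) as a small helper for the distance regression head; the sample values are illustrative:

```python
import numpy as np

def mse_loss(y, y_pred):
    """Mean squared error for the distance output (eq. (15))."""
    y = np.asarray(y, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y - y_pred) ** 2)

# Two distance targets off by 2 m and 1 m give ((2^2) + (1^2)) / 2 = 2.5
loss = mse_loss([10.0, 20.0], [12.0, 19.0])
```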
The invention has the beneficial effects that:
1. The multi-output neural network model in the radio positioning system provided by the invention considers the interaction of the spatial and temporal features of radio signals, combines the advantages of convolutional neural networks (CNN), long short-term memory networks (LSTM), and attention mechanisms, effectively explores the feature interactions and spatio-temporal characteristics of the original complex time signals, and reduces the running time of the model. In addition, the features the neural network model learns from the I/Q stream data and the A/P stream data interact in pairs, which increases feature diversity and further improves the model's automatic modulation classification performance.
2. The multi-output neural network model provided by the invention uses a loss function with contrast loss, which enhances the discrimination capability of the learned representations, maximizes the differences between modulation types, and improves the accuracy of radio signal positioning and modulation-type classification.
3. The multi-output neural network model provided by the invention can solve the two tasks of positioning and identification through the CALA model architecture by sharing model parameters and feature representations, improving both efficiency and accuracy.
4. The classifier in the multi-output neural network model provided by the invention enables early termination of classification, which helps reduce the computational load of the model, improves automatic modulation classification efficiency, and further improves the overall performance of the multi-output neural network model.
5. The multi-output neural network model provided by the invention can optimize multiple output tasks simultaneously; by sharing model parameters and feature representations it solves multiple related tasks at once, improving the overall performance of the model, reducing computational complexity, improving generalization capability, and saving model training time compared with independently training multiple single-output models.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, specific embodiments of the present application will be described in detail below with reference to the accompanying drawings, from which other drawings can be obtained by those skilled in the art without inventive effort.
FIG. 1 is a system block diagram of a radiolocation system provided by the present invention;
fig. 2 is a schematic diagram of positions of the unmanned aerial vehicle and the spectrometer provided by the invention;
FIG. 3 is a system flow diagram of a radiolocation system provided by the present invention;
FIG. 4 is a schematic diagram of a multi-output neural network model according to the present invention;
fig. 5 is a schematic structural diagram of a first identification module according to the present invention;
FIG. 6 is a schematic diagram of a second identification module according to the present invention;
FIG. 7 is a schematic diagram of a spatial attention mechanism according to the present invention;
Fig. 8 is a schematic structural diagram of a multi-scale time attention mechanism according to the present invention.
Detailed Description
The present application will be described and illustrated with reference to the accompanying drawings and examples in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments, which can be made by a person of ordinary skill in the art based on the embodiments provided by the present application without making any inventive effort, are intended to fall within the scope of the present application.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present application. As used herein, unless the context clearly indicates otherwise, the singular forms also are intended to include the plural forms, and furthermore, it is to be understood that the terms "comprises" and "comprising" and any variations thereof are intended to cover non-exclusive inclusions, such as, for example, processes, methods, systems, products or devices that comprise a series of steps or units, are not necessarily limited to those steps or units that are expressly listed, but may include other steps or units that are not expressly listed or inherent to such processes, methods, products or devices.
Embodiments of the invention and features of the embodiments may be combined with each other without conflict.
The technical scheme of the invention is described in detail below with reference to specific embodiments and attached drawings.
In one illustrative embodiment of a radio location identification system based on a spatio-temporal attention network and contrast loss of the present invention, as shown in fig. 1-8, the identification system includes a drone, a plurality of spectrometers, and a signal modulation terminal.
As shown in FIGS. 1-3, the unmanned aerial vehicle generates radio signals during flight. Radio signals generated by UAVs with different purposes and behaviors have different modulation types; by identifying the modulation type of the radio signals a UAV generates in flight, its behavior and intent can be better assessed, potential safety risks can be discovered in time, and it can be ensured that UAV flight activities do not threaten the surrounding environment or public safety.
The plurality of frequency spectrographs are respectively arranged at a plurality of different signal receiving points and are used for receiving radio signals generated by the unmanned aerial vehicle, and in the receiving process, the frequency spectrographs capture and display the received radio signals and store the received radio signals to form a radio signal sample data set. As shown in fig. 3, the spectrometers are respectively used at N signal receiving points in the radio signal range of the unmanned aerial vehicle to receive radio signals of the unmanned aerial vehicle, the spectrometers at the N signal receiving points set the same parameters such as center frequency, bandwidth, scanning speed, and the like, and in the receiving process, the received database samples can be sufficiently enriched by changing the behaviors of the unmanned aerial vehicle.
The signal modulation terminal comprises a data processing module and a multi-output neural network model.
The data processing module is used for dividing the radio signal sample data set into a training set, a verification set and a test set according to the proportion of 60%, 20% and 20%. The system is also used for preprocessing radio signals in the training set, the verification set and the test set and respectively inputting data in the training set, the verification set and the test set into a multi-output neural network model; preprocessing involves transforming the radio signal into I/Q (in-phase/quadrature) and a/P (amplitude/phase) vectors; the data processing module randomly breaks up data in the training set when dividing the training set.
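The 60/20/20 split with random shuffling of the training data can be sketched as follows; the fixed seed and list-of-samples interface are illustrative choices, not from the patent:

```python
import numpy as np

def split_dataset(samples, seed=0):
    """Shuffle and split a sample list into 60% train / 20% val / 20% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))     # random shuffle, as the module does
    n_train = int(0.6 * len(samples))
    n_val = int(0.2 * len(samples))
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test

train, val, test = split_dataset(list(range(100)))
```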
In some embodiments of the present invention, the I/Q data formed by preprocessing is an I/Q vector, where the I/Q vector is composed of two data vectors of an in-phase component and a quadrature component, and local raw time features are extracted from the in-phase component and the quadrature component, and are represented by formula (1):
$X_{I/Q} = [x_I, x_Q]$  (1);
where $X_{I/Q}$ is the I/Q vector of the input data, $x$ is the input radio signal data, $x_I$ is the in-phase component of the data, and $x_Q$ is the quadrature component of the data.
The a/P data formed through preprocessing is an a/P vector composed of two real-value vectors of an amplitude component and a phase component, spatial features are learned from the amplitude and phase information, and the spatial features are expressed by the formula (2):
$X_{A/P} = [x_A, x_P]$  (2);
where $X_{A/P}$ is the A/P vector of the input data, $x$ is the input radio signal data, $x_A$ is the amplitude component of the data, and $x_P$ is the phase component of the data.
The conversion of the amplitude component and the phase component is represented by equation (3) and equation (4):
$x_A = \sqrt{x_I^2 + x_Q^2}$  (3);
$x_P = \arctan(x_Q / x_I)$  (4);
where $x_I$ is the in-phase component of the data, $x_Q$ is the quadrature component of the data, $x_A$ is the amplitude component of the data, and $x_P$ is the phase component of the data.
As shown in fig. 4, the multi-output neural network model adopted by the invention is a CALA architecture model comprising a first identification module and a second identification module connected in series. The first identification module adopts a CA architecture and is used for extracting spatial features among different feature values in the data; the second identification module adopts an LA architecture and is used for extracting time-series features among different feature values in the data. The first identification module and the second identification module are each followed by a classifier: the classifier after the first identification module handles the radio signal modulation types in the multiple outputs that belong to set A, while the classifier after the second identification module handles the modulation types that do not belong to set A. If the classifier after the first identification module determines that the modulation type belongs to set A, the whole neural network model can terminate early, reducing computational complexity.
In some embodiments of the present invention, as shown in fig. 5, the first recognition module includes an I/Q stream module for processing I/Q vectors and an a/P stream module for processing a/P vectors; the I/Q flow module and the A/P flow module of the first identification module have the same structure and are sequentially composed of three CNN layers and three spatial attention mechanisms.
Because the input data of the I/Q flow module and the A/P flow module of the first identification module are different, the final specific parameters of the two flow modules are different. In this embodiment, the dropout rate of the CNN layers is set to 0.5 to avoid overfitting, the number of kernels used by the three CNN layers is 256,256,80, the sizes are 1×3,2×3, and 1×3, respectively, the convolution layers use ReLU as the activation function, while zero padding is applied before each convolution layer to adjust the output shape. The three CNN layers are sequentially connected, and a space attention mechanism is added behind each CNN layer, so that the main features of the space of the data can be more fully learned, and the classification capability of the model is improved.
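The shape bookkeeping for the three CNN layers can be checked with the standard convolution output formula. The per-layer zero-padding amounts below are assumptions (the patent only states that zero padding is applied to adjust the output shape), chosen so the 1024-sample axis is preserved:

```python
def conv_output_shape(h, w, kernel, pad, stride=1):
    """Spatial output size of a conv layer: floor((dim + 2*pad - k)/stride) + 1."""
    kh, kw = kernel
    ph, pw = pad
    return ((h + 2 * ph - kh) // stride + 1, (w + 2 * pw - kw) // stride + 1)

# Three layers with kernels 1x3, 2x3, 1x3 on a 2x1024 I/Q map; width-padding 1
# keeps 1024 samples, while the 2x3 kernel collapses the I/Q pair dimension.
shape = (2, 1024)
for kernel, pad in [((1, 3), (0, 1)), ((2, 3), (0, 1)), ((1, 3), (0, 1))]:
    shape = conv_output_shape(*shape, kernel, pad)
```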
In some embodiments of the present invention, as shown in fig. 6, the second identifying module includes an I/Q stream module for processing I/Q vectors and an a/P stream module for processing a/P vectors; the I/Q flow module and the A/P flow module of the second identification module have the same structure and are composed of three LSTM layers with different lengths and a multi-scale time attention mechanism.
The final specific parameters of the two stream modules are different due to the different input data of the I/Q stream module and the A/P stream module of the second identification module. In this embodiment, the dropout rate of the LSTM layer is set to 0.5 to avoid overfitting, and the input feature lengths of the three LSTM layers are 256,512,1024, respectively. A multi-scale time attention mechanism is arranged behind the three LSTM layers, so that the main characteristics of data time can be learned more fully, and the classification capability of the model is improved.
The data processing module is further configured to store a plurality of modulation types of the radio signal data, wherein the modulation types that can be classified at the first identification module are divided into a set a. The modulation types in set a are of a type that is easily distinguishable from the predetermined radio signals to be classified, and can be classified only through the first recognition module combined by the CNN layer and the spatial attention mechanism. In this embodiment, the predetermined modulation types to be classified include 8PSK, BPSK, QPSK, CPFSK, GFSK, 4PAM, DSB-AM, SSB-AM, 16QAM, 64QAM, WBFM; the modulation types in set a include 8PSK, BPSK, QPSK, CPFSK, GFSK, 4PAM.
The data processing module is further configured to store a radio direction set: 360 degrees are divided into 16 sectors of 22.5 degrees each, forming a set of 16 radio directions. In the process of acquiring radio signals, since the signals are acquired at N acquisition points at different positions, a set of N sectors of 22.5 degrees is obtained, and the position of the radio signal source can be accurately determined from the intersection and coincidence of the sector boundaries.
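Quantizing an azimuth into the 16 direction sectors of 22.5 degrees each is a one-line bucketing step (`direction_bin` is an illustrative name, not from the patent):

```python
def direction_bin(angle_deg):
    """Map an azimuth in degrees to one of 16 sectors of 22.5 degrees each."""
    return int(angle_deg % 360 // 22.5)

# Sector 0 covers [0, 22.5), sector 15 covers [337.5, 360)
bins = [direction_bin(a) for a in (0, 22.4, 22.5, 359.9)]
```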
After the training set is divided in the data processing module, the data of which the data modulation type belongs to the set A and the data which does not belong to the set A in the training set are manually divided.
After receiving training set data input by a data processing module, the multi-output neural network model inputs data of which the modulation type belongs to a set A in the training set into a first recognition module for training, and sequentially inputs data which does not belong to the set A into the first recognition module and a second recognition module for training, so that the trained multi-output neural network model is obtained.
After receiving verification set data input by the data processing module, the trained multi-output neural network model performs feature fusion on the data whose modulation type belongs to set A after the first recognition module has learned their spatial features, and performs feature fusion on the data not belonging to set A after the second recognition module has learned their temporal features. In this embodiment, the I/Q data dimension is 2×1024, the A/P data dimension is 2×1024, and the fused feature dimension is 2×1024.
The loss of the direction and modulation type features is calculated using the loss function with contrast loss, the loss of the distance feature is calculated using the mean square error loss function, and the parameters of the multi-output neural network model are updated during back propagation.
The feature fusion and parameter updating are repeated until the loss function with contrast loss converges, yielding the multi-output neural network model with updated parameters.
After receiving the test set data input by the data processing module, the multi-output neural network model with updated parameters automatically localizes and modulation-classifies the radio signals in the test set; the final output is multi-headed, comprising the distance and direction of the radio signal source and the modulation type of the radio signal.
In some embodiments of the present invention, a multi-output neural network model after updating parameters first inputs radio signals in a test set into a first recognition module to obtain I/Q data and a/P data of learned spatial features;
The I/Q data and A/P data with the learned spatial features are stored in a data buffer pool and input into the classifier behind the first identification module; if that classifier can determine that the radio signal data currently being processed is of a modulation type in set A, the process terminates early.
If the data cannot be judged to be of a modulation type in set A, the stored I/Q and A/P data with the learned spatial features are retrieved from the data buffer pool and input into the second identification module for further learning and discrimination.
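The buffer-and-early-exit flow just described can be sketched as follows (a minimal illustration; the module, classifier, and buffer objects are stand-ins, not the patent's implementation):

```python
def recognize(signal, first_module, classifier, second_module, buffer_pool, set_a):
    """Two-stage recognition with early termination.

    first_module:  learns spatial features, returns (iq_feat, ap_feat)
    classifier:    returns a set-A modulation label, or None if it cannot decide
    second_module: learns temporal features from the buffered spatial features
    """
    iq_feat, ap_feat = first_module(signal)
    # Cache the spatial features so the second stage can reuse them.
    buffer_pool[id(signal)] = (iq_feat, ap_feat)
    label = classifier(iq_feat, ap_feat)
    if label is not None and label in set_a:
        return label  # early termination after the first module
    # Otherwise retrieve the cached features and run the second stage.
    iq_feat, ap_feat = buffer_pool[id(signal)]
    return second_module(iq_feat, ap_feat)
```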
In some embodiments of the present invention, as shown in fig. 7, the spatial attention mechanism in the first recognition module must account for the fact that the channels within each feature map contribute unequally, so the squeeze-and-excitation attention mechanism (SE for short) is adopted.
The first recognition module constructs a corresponding SE block after each CNN layer to perform feature recalibration: the feature weight parameters are initialized and extracted through a global average pooling operation, and the correlation among channels is modeled through a scaling operation.
Firstly, feature extraction is carried out through the squeeze operation: global average pooling is performed on the feature U over the spatial dimension H×W, giving a result $z$ whose c-th element $z_c$ is represented by formula (5):

$$z_c = F_{sq}(u_c) = \frac{1}{H\times W}\sum_{i=1}^{H}\sum_{j=1}^{W} u_c(i,j) \tag{5}$$

where $u_c$ is the feature map of the c-th channel, $U=[u_1,u_2,\dots,u_C]$, $H$ and $W$ are the height and width of the feature map respectively, and $F_{sq}$ is the feature extraction (squeeze) function.
The feature weight vector is then generated through the excitation operation, which scales and nonlinearly transforms the squeezed result, using an activation function on each channel to control the weight parameter values. The nonlinear transformation is represented by formula (6):

$$s = F_{ex}(z, W) = \sigma\big(W_2\,\delta(W_1 z)\big) \tag{6}$$

where $s$ is the generated feature weight vector, $\sigma$ is the sigmoid activation function, $\delta$ is the ReLU function, $W_1\in\mathbb{R}^{\frac{C}{r}\times C}$ and $W_2\in\mathbb{R}^{C\times\frac{C}{r}}$ are the weights of the two fully connected layers (with reduction ratio $r$), and $F_{ex}$ is the transformation function.
Finally, the corresponding channel features are multiplied by the feature weight vector to complete the spatial SE attention mechanism, represented by formula (7):

$$\tilde{x}_c = F_{scale}(u_c, s_c) = s_c\cdot u_c \tag{7}$$

where $\tilde{x}_c$ is the feature obtained by multiplying the corresponding channel feature by the feature weight vector, $F_{scale}(u_c,s_c)$ denotes channel-wise multiplication between the scalar $s_c$ and the feature map $u_c$, and $\tilde{X}=[\tilde{x}_1,\tilde{x}_2,\dots,\tilde{x}_C]$.
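Equations (5)-(7) together can be sketched in NumPy as follows (the excitation weights W1 and W2 are supplied externally here for illustration; in the patent they are learned parameters):

```python
import numpy as np

def se_attention(U, W1, W2):
    """Squeeze-and-excitation over a feature map U of shape (H, W, C)."""
    # Squeeze, eq. (5): global average pooling over the H x W spatial grid.
    z = U.mean(axis=(0, 1))                      # shape (C,)
    # Excitation, eq. (6): s = sigmoid(relu(z @ W1) @ W2).
    relu = lambda x: np.maximum(x, 0.0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    s = sigmoid(relu(z @ W1) @ W2)               # shape (C,), values in (0, 1)
    # Scale, eq. (7): channel-wise reweighting of U.
    return U * s                                  # broadcasts over H and W
```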
In some embodiments of the present invention, as shown in fig. 8, the method for extracting the time sequence characteristic between different feature values in the data by using the second recognition model specifically includes the following steps.
The second recognition module first performs data splicing and cutting on the input data to form three groups of data with lengths $l_1$, $l_2$ and $l_3$, respectively. In this embodiment, $l_1$, $l_2$ and $l_3$ are 256, 512 and 1024, respectively.
The three groups of data of different lengths are respectively input into LSTM layers of corresponding lengths for training; each LSTM layer extracts the temporal correlation using its input gate and forget gate, yielding three groups of features of different time scales.
The three groups of features of different time scales are spliced and combined into a feature group with a predetermined length M, where M ≥ max{$l_1$, $l_2$, $l_3$}. In this embodiment, M is 1024.
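One plausible reading of the splicing-and-cutting step above, sketched in NumPy (the patent does not spell out the exact scheme, so the tiling and truncation here are assumptions):

```python
import numpy as np

def multiscale_split(x, lengths=(256, 512, 1024)):
    """Cut an input sequence into groups of lengths l1, l2, l3, tiling the
    input when it is shorter than the requested length (assumed scheme)."""
    reps = int(np.ceil(max(lengths) / x.size)) + 1
    tiled = np.tile(x, reps)
    return [tiled[:L] for L in lengths]

def combine(features, M=1024):
    """Splice per-scale features into one group of predetermined length
    M >= max(lengths), truncating the concatenation to M."""
    assert M >= max(f.size for f in features)
    return np.concatenate(features)[:M]
```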
The feature group is input into a multi-scale time attention mechanism to perform feature enhancement, and different weights are allocated to each feature channel by learning the importance degree of each feature channel so as to adjust the importance of each feature channel.
It should be noted that the multi-scale temporal attention mechanism in the second recognition module adopts the same squeeze-and-excitation attention mechanism as the spatial attention mechanism in the first recognition module. The difference is that the multi-scale temporal attention mechanism must first cut and fuse the data features of different lengths output by the LSTM layers into a feature group of predetermined length before the SE attention mechanism adjusts the weights, whereas the input of the spatial attention mechanism is the output of three CNN layers with kernel counts of 256, 256 and 80 and kernel sizes of 1×3, 2×3 and 1×3 respectively, which needs no feature cutting and fusion.
In some embodiments of the present invention, the optimizer of the multi-output neural network model is set to Adam, and the initial learning rate is set to 0.001 with a dynamic learning rate scheme.
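The patent fixes only the optimizer (Adam) and the initial rate (0.001); one possible "dynamic learning rate scheme" is a step decay, sketched below as an assumption:

```python
def dynamic_lr(epoch, initial_lr=0.001, factor=0.5, step=10):
    """Halve the learning rate every `step` epochs (assumed decay rule)."""
    return initial_lr * factor ** (epoch // step)
```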
In some embodiments of the invention, the trained multi-output neural network model takes the feature functions of the I/Q vector $x_{I/Q}$ and the A/P vector $x_{A/P}$, input into the corresponding I/Q stream module and A/P stream module, and synthesizes them into the final feature $F$ through an outer product to complete feature fusion, expressed by formula (8):

$$F = F_{I/Q}(x_{I/Q}) \otimes F_{A/P}(x_{A/P}) \tag{8}$$

where $F_{I/Q}$ and $F_{A/P}$ represent the feature functions of the I/Q stream module and the A/P stream module, respectively, and $\otimes$ denotes the outer product.
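The outer-product fusion of the two stream features can be sketched as follows (a minimal NumPy illustration of the operation itself, not the full model):

```python
import numpy as np

def fuse(f_iq, f_ap):
    """Fuse the I/Q-stream and A/P-stream feature vectors by outer product
    (eq. 8), flattened here into a single feature vector."""
    return np.outer(f_iq, f_ap).ravel()
```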
In some embodiments of the present invention, a loss function with contrast loss is used to enhance the differences between the modulation types of different radio signals and the directions of the sources of the radio signals in the training process of the multi-output neural network model, so as to improve the discrimination capability of the representation learned by the multi-output neural network model.
The loss function with contrast loss includes a cross entropy loss, an L2 regularization term, and a Manhattan-based contrast loss function, represented by formula (9):

$$L = L_{CE} + L_{reg} + L_{C} \tag{9}$$

where $L$ is the loss function with contrast loss, $L_{CE}$ is the cross entropy loss function, $L_{reg}$ is the L2 regularization term, and $L_{C}$ is the Manhattan-based contrast loss function.
The cross entropy loss function is used to maximize the posterior probability of correct classification, represented by formula (10):

$$L_{CE} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M} y_{ic}\,\log(p_{ic}) \tag{10}$$

where $M$ is the number of modulation classes, $N$ is the number of samples, $y_{ic}$ is a sign function that is 1 if sample $i$ belongs to category $c$ and 0 otherwise, and $p_{ic}$ is the predicted probability that sample $i$ belongs to category $c$.
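Equation (10) amounts to the standard categorical cross entropy over one-hot labels, e.g.:

```python
import numpy as np

def cross_entropy(y_true, y_pred):
    """Eq. (10): mean over N samples of -sum_c y_ic * log(p_ic).
    y_true is one-hot of shape (N, M); y_pred holds probabilities (N, M)."""
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))
```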
The L2 regularization term is used to keep the network weights from growing too large so as to avoid overfitting, represented by formula (11):

$$L_{reg} = L_{MSE} + \lambda\sum_{j} w_j^{2} \tag{11}$$

where $\lambda$ is a hyperparameter used to control the size of the regularization term and $L_{MSE}$ is the mean square error, represented by formula (12):

$$L_{MSE} = \frac{1}{n}\sum_{i=1}^{n}\big(y_i - f_w(x_i)\big)^2 \tag{12}$$

where $y_i$ is the original data and $f_w(x_i)$ is the data predicted using the parameters $w$.
The Manhattan-based contrast loss function is represented by formula (13):

$$L_{C} = \sum_{i,j}\Big[\mathbb{1}[y_i = y_j]\,D(f_i,f_j) + \mathbb{1}[y_i \neq y_j]\,\max\big(0,\ m - D(f_i,f_j)\big)\Big] \tag{13}$$

where $\mathbb{1}[\cdot]$ is the indicator function, $y_i$ and $y_j$ are the modulation types of samples $i$ and $j$, $m$ is a custom threshold used to adjust the difference between inter-class features, and $D(f_i,f_j)$ represents the Manhattan distance between a pair of feature vectors in the feature space, represented by formula (14):

$$D(f_i,f_j) = \sum_{k=1}^{n}\big|f_{ik} - f_{jk}\big| \tag{14}$$

where $n$ is the dimension of the feature space and $f_i$, $f_j$ are the feature vectors of samples $i$ and $j$.
Minimizing the contrast loss is equivalent to reducing the distance between feature vectors of the same modulation type and increasing the distance between feature vectors of different modulation types until it exceeds the threshold $m$, which improves the classification capability of the whole multi-output neural network model.
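The Manhattan-distance contrast loss for a single pair of feature vectors can be sketched as follows (function names are illustrative):

```python
import numpy as np

def manhattan(f1, f2):
    """Eq. (14): Manhattan distance between two feature vectors."""
    return np.sum(np.abs(f1 - f2))

def contrast_loss(f1, f2, same_class, m=1.0):
    """Eq. (13), one pair: pull same-class features together, push
    different-class features at least the threshold m apart."""
    d = manhattan(f1, f2)
    return d if same_class else max(0.0, m - d)
```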
Meanwhile, the parameters of the multi-output neural network model are updated using mini-batch stochastic gradient descent and the back propagation algorithm, finally yielding the multi-output neural network model with updated parameters.
In some embodiments of the present invention, the mean square error loss function is used to determine the distance from which the radio signal data originates, represented by formula (15):

$$L_{MSE} = \frac{1}{n}\sum_{i=1}^{n}\big(y_i - f(x_i)\big)^2 \tag{15}$$

where $y_i$ is the target value, $f(x_i)$ is the predicted value, and $n$ is the number of samples.
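Formula (15) is the ordinary mean squared error, e.g.:

```python
import numpy as np

def mse(y, f_x):
    """Eq. (15): mean squared error between targets y and predictions f(x)."""
    return np.mean((np.asarray(y) - np.asarray(f_x)) ** 2)
```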
In some embodiments of the present invention, the classifier in the multi-output neural network model that judges whether data is of a modulation type in set A decides mainly on the basis of the soft label the data obtains through the loss function: if the maximum value in the soft label exceeds a predetermined threshold, the data is judged to be of the modulation type corresponding to that maximum; otherwise, the data is judged not to be of a type in set A.
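The soft-label decision rule can be sketched as follows (the threshold value 0.9 is illustrative; the patent only says "predetermined"):

```python
import numpy as np

def set_a_decision(soft_label, threshold=0.9):
    """Return the index of the winning set-A modulation type if the maximum
    soft label clears the threshold, else None (meaning: not a set-A type)."""
    soft_label = np.asarray(soft_label)
    k = int(np.argmax(soft_label))
    return k if soft_label[k] > threshold else None
```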
Finally, it should be noted that: in the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other.
The above embodiments are intended only to illustrate, not to limit, the technical solution of the present invention; although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art will appreciate that the specific embodiments may be modified, or some of their technical features replaced by equivalents, without such modifications or substitutions departing from the spirit and scope of the claimed invention.

Claims (6)

1. A spatio-temporal attention network and contrast loss based radio location identification system comprising: unmanned aerial vehicle, a plurality of spectrum appearance and signal modulation terminal;
generating a radio signal in the flight process of the unmanned aerial vehicle;
The frequency spectrographs are respectively arranged at a plurality of different signal receiving points and are used for receiving radio signals generated by the unmanned aerial vehicle, and in the receiving process, the frequency spectrographs capture and display the received radio signals and store the received radio signals to form a radio signal sample data set;
the signal modulation terminal comprises a data processing module and a multi-output neural network model;
The data processing module is used for dividing the radio signal sample data set into a training set, a verification set and a test set, preprocessing radio signals in the training set, the verification set and the test set, and respectively inputting data in the training set, the verification set and the test set into the multi-output neural network model; the preprocessing includes transforming the radio signal to form an I/Q vector and an a/P vector;
The multi-output neural network model comprises a first identification module and a second identification module; the first identification module is used for extracting spatial features among different feature values in the data; the second identification module is used for extracting time sequence characteristics among different characteristic values in the data; a classifier is arranged behind the first identification module and used for stopping operation in advance;
The first identification module comprises an I/Q flow module for processing I/Q vectors and an A/P flow module for processing A/P vectors; the I/Q flow module and the A/P flow module of the first identification module have the same structure and are formed by sequentially combining a plurality of CNN layers and a plurality of spatial attention mechanisms;
The spatial attention mechanism adopts a squeezing and exciting attention mechanism, and the method for the first identification module to complete the spatial attention mechanism comprises the following steps:
Constructing a corresponding SE block after each CNN layer of the first identification module to perform feature recalibration;
initializing and extracting the feature weight parameters through a global average pooling operation: feature extraction is performed through the squeeze operation, and global average pooling is executed on the feature U over the spatial dimension H×W to obtain a result $z$, whose c-th element $z_c$ is represented by formula (5):

$$z_c = F_{sq}(u_c) = \frac{1}{H\times W}\sum_{i=1}^{H}\sum_{j=1}^{W} u_c(i,j) \tag{5}$$

where $u_c$ is the feature map of the c-th channel, $U=[u_1,u_2,\dots,u_C]$, $H$ and $W$ are the height and width of the feature map respectively, and $F_{sq}$ is the feature extraction function;
scaling through the excitation operation and generating the feature weight vector, using an activation function on each channel to control the weight parameter values through a nonlinear transformation represented by formula (6):

$$s = F_{ex}(z, W) = \sigma\big(W_2\,\delta(W_1 z)\big) \tag{6}$$

where $s$ is the generated feature weight vector, $\sigma$ is the sigmoid activation function, $\delta$ is the ReLU function, $W_1\in\mathbb{R}^{\frac{C}{r}\times C}$ and $W_2\in\mathbb{R}^{C\times\frac{C}{r}}$ are the excitation weights, and $F_{ex}$ is the transformation function;
multiplying the corresponding channel features by the feature weight vector, represented by formula (7):

$$\tilde{x}_c = F_{scale}(u_c, s_c) = s_c\cdot u_c \tag{7}$$

where $\tilde{x}_c$ is the feature obtained by multiplying the corresponding channel feature by the feature weight vector, $F_{scale}(u_c,s_c)$ denotes channel-wise multiplication between the scalar $s_c$ and the feature map $u_c$, and $\tilde{X}=[\tilde{x}_1,\tilde{x}_2,\dots,\tilde{x}_C]$;
The second identification module comprises an I/Q flow module for processing I/Q vectors and an A/P flow module for processing A/P vectors; the I/Q flow module and the A/P flow module of the second identification module have the same structure and are formed by combining a plurality of LSTM layers with different lengths and a multi-scale time attention mechanism;
the second identification module performs data splicing and cutting on the input data to form a plurality of groups of data with different lengths;
after obtaining multiple groups of data with different lengths, the second recognition module respectively inputs the multiple groups of data with different lengths into an LSTM layer with corresponding lengths for training, and the LSTM layer extracts time correlation by using an input gate and a forgetting gate to obtain characteristics of multiple groups of different time sequences;
splicing and combining the characteristics of a plurality of groups of different time sequences into a characteristic group with a preset length;
Inputting the feature group into the multi-scale time attention mechanism for feature enhancement, wherein the multi-scale time attention mechanism adopts an extrusion and excitation attention mechanism, and distributes different weights for each feature channel by learning the importance degree of each feature channel so as to adjust the importance of each feature channel;
the data processing module is further configured to store a plurality of modulation types of radio signal data, wherein the modulation types that can be classified at the first identification module are divided into a set a;
After the multi-output neural network model receives the training set data input by the data processing module, inputting the data of which the modulation type belongs to the set A in the training set into a first recognition module for training, and inputting the data which does not belong to the set A into the first recognition module and a second recognition module in sequence for training to obtain a trained multi-output neural network model;
after receiving verification set data input by the data processing module, the trained multi-output neural network model performs feature fusion after data of the verification set data modulation type belonging to the set A are subjected to spatial feature learning by the first recognition module, and performs feature fusion after data not belonging to the set A are subjected to temporal feature learning by the second recognition module, so as to obtain features of directions and modulation types and features of distances; calculating the loss of the characteristics of the direction and the modulation type by using a loss function with comparative loss, calculating the loss of the characteristics of the distance by using a mean square error loss function, and updating the parameters of the multi-output neural network model in the back propagation process until the loss function keeps converging, so as to obtain the multi-output neural network model after updating the parameters;
After receiving the test set data input by the data processing module, the multi-output neural network model after updating the parameters automatically locates and modulates and classifies the radio signals in the test set, and finally outputs the distance and direction of the radio signal source and the modulation type of the radio signals.
2. The spatio-temporal attention network and contrast loss based radio location identification system of claim 1, wherein said I/Q vector includes an in-phase component and a quadrature component, represented by formula (1):

$$x_{I/Q} = [I(x),\ Q(x)] \tag{1}$$

where $x_{I/Q}$ is the I/Q vector of the input data, $x$ is the input radio signal data, $I(x)$ is the in-phase component of the data, and $Q(x)$ is the quadrature component of the data;

the A/P vector includes an amplitude component and a phase component, represented by formula (2):

$$x_{A/P} = [A(x),\ P(x)] \tag{2}$$

where $x_{A/P}$ is the A/P vector of the input data, $x$ is the input radio signal data, $A(x)$ is the amplitude component of the data, and $P(x)$ is the phase component of the data;

the conversion to the amplitude component and the phase component is represented by formula (3) and formula (4):

$$A(x) = \sqrt{I(x)^2 + Q(x)^2} \tag{3}$$

$$P(x) = \arctan\!\big(Q(x)/I(x)\big) \tag{4}$$

where $I(x)$ is the in-phase component of the data, $Q(x)$ is the quadrature component of the data, $A(x)$ is the amplitude component of the data, and $P(x)$ is the phase component of the data.
3. The radio location identification system based on a spatiotemporal attention network and contrast loss as claimed in claim 1, wherein,
the trained multi-output neural network model takes the feature functions of the I/Q vector $x_{I/Q}$ and the A/P vector $x_{A/P}$, input into the corresponding I/Q stream module and A/P stream module, and synthesizes them into the final feature $F$ through an outer product to complete feature fusion, expressed by formula (8):

$$F = F_{I/Q}(x_{I/Q}) \otimes F_{A/P}(x_{A/P}) \tag{8}$$

where $F_{I/Q}$ and $F_{A/P}$ represent the feature functions of the I/Q stream module and the A/P stream module, respectively.
4. The spatiotemporal attention network and contrast loss based radio location identification system of claim 1, wherein the loss function with contrast loss is used to enhance differences between modulation types and radio signal source directions of different radio signals during the multi-output neural network model training process to enhance discrimination capability of the learned characterization of the multi-output neural network model;
the loss function with contrast loss includes a cross entropy loss, an L2 regularization term, and a Manhattan-based contrast loss function, represented by formula (9):

$$L = L_{CE} + L_{reg} + L_{C} \tag{9}$$

where $L$ is the loss function with contrast loss, $L_{CE}$ is the cross entropy loss function, $L_{reg}$ is the L2 regularization term, and $L_{C}$ is the Manhattan-based contrast loss function;

the cross entropy loss function is used to maximize the posterior probability of correct classification, represented by formula (10):

$$L_{CE} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M} y_{ic}\,\log(p_{ic}) \tag{10}$$

where $M$ is the number of modulation classes, $N$ is the number of samples, $y_{ic}$ is a sign function that is 1 if sample $i$ belongs to category $c$ and 0 otherwise, and $p_{ic}$ is the predicted probability that sample $i$ belongs to category $c$;

the L2 regularization term is used to keep the network weights from growing too large so as to avoid overfitting, represented by formula (11):

$$L_{reg} = L_{MSE} + \lambda\sum_{j} w_j^{2} \tag{11}$$

where $\lambda$ is a hyperparameter used to control the size of the regularization term and $L_{MSE}$ is the mean square error, represented by formula (12):

$$L_{MSE} = \frac{1}{n}\sum_{i=1}^{n}\big(y_i - f_w(x_i)\big)^2 \tag{12}$$

where $y_i$ is the original data and $f_w(x_i)$ is the data predicted using the parameters $w$;

the Manhattan-based contrast loss function is represented by formula (13):

$$L_{C} = \sum_{i,j}\Big[\mathbb{1}[y_i = y_j]\,D(f_i,f_j) + \mathbb{1}[y_i \neq y_j]\,\max\big(0,\ m - D(f_i,f_j)\big)\Big] \tag{13}$$

where $\mathbb{1}[\cdot]$ is the indicator function, $y_i$ and $y_j$ are the modulation types of samples $i$ and $j$, $m$ is a custom threshold used to adjust the difference between inter-class features, and $D(f_i,f_j)$ represents the Manhattan distance between a pair of feature vectors in the feature space, represented by formula (14):

$$D(f_i,f_j) = \sum_{k=1}^{n}\big|f_{ik} - f_{jk}\big| \tag{14}$$

where $n$ is the dimension of the feature space and $f_i$, $f_j$ are the feature vectors of samples $i$ and $j$.
5. The radio location identification system based on a spatiotemporal attention network and contrast loss as claimed in claim 1, wherein,
The multi-output neural network model after parameter updating inputs the radio signals in the test set into the first identification module to obtain the I/Q data and the A/P data of the learned spatial characteristics;
Storing the I/Q data and the A/P data which are learned to the spatial characteristics into a data buffer pool, inputting the I/Q data and the A/P data into a classifier after a first identification module, and terminating in advance if the classifier after the first identification module can judge that the current radio signal data which are being processed are of the modulation type in the set A;
If the modulation type in the set A cannot be judged, the stored I/Q data and A/P data which are learned to the spatial characteristics are extracted from the data buffer pool, and are input into a second identification module for learning and judging.
6. The radio location identification system based on a spatiotemporal attention network and contrast loss as claimed in claim 1, wherein,
the mean square error loss function is used to determine the distance from which the radio signal data originates, represented by formula (15):

$$L_{MSE} = \frac{1}{n}\sum_{i=1}^{n}\big(y_i - f(x_i)\big)^2 \tag{15}$$

where $y_i$ is the target value, $f(x_i)$ is the predicted value, and $n$ is the number of samples.
CN202410334334.6A 2024-03-22 2024-03-22 Radio positioning recognition system based on space-time attention network and contrast loss Active CN117932312B (en)

Publications (2)

Publication Number Publication Date
CN117932312A CN117932312A (en) 2024-04-26
CN117932312B (en) 2024-06-04
