CN117544963B - Method and equipment for identifying radiation source of cross-mode communication signal based on FTGan-Yolo - Google Patents
Info
- Publication number
- CN117544963B (application CN202410010436.2A)
- Authority
- CN
- China
- Prior art keywords
- ftgan
- yolo
- unknown
- radiation source
- rgb picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W12/00—Security arrangements; Authentication; Protecting privacy or anonymity
- H04W12/60—Context-dependent security
- H04W12/69—Identity-dependent
- H04W12/79—Radio fingerprint
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2131—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on a transform domain processing, e.g. wavelet transform
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2431—Multiple classes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0475—Generative networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/094—Adversarial learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/08—Network architectures or network communication protocols for network security for authentication of entities
- H04L63/0876—Network architectures or network communication protocols for network security for authentication of entities based on the identity of the terminal or configuration, e.g. MAC address, hardware or software configuration or device fingerprint
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/40—Network security protocols
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W12/00—Security arrangements; Authentication; Protecting privacy or anonymity
- H04W12/06—Authentication
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
Abstract
The invention relates to the field of wireless communication, in particular to a method and equipment for identifying the radiation source of cross-mode communication signals based on FTGan-Yolo. The method consists of a generic RFF extractor and a novel feature-transfer generative adversarial network. With this approach, a single SEI classifier can identify transmitters across signals of different modalities: FTGan-Yolo transfers the RGB picture features of different modalities to an identifiable distribution without supervision, so the single enhanced transmitter identification system keeps identifying transmitters as the modal signals change. The method has low resource requirements and does not increase the complexity of the radio frequency fingerprint extraction process. At the same time, it addresses the multi-mode radio frequency fingerprint problem in the field of radio frequency fingerprint identification, achieves efficient and general extraction of salient RGB picture features from multi-mode signals, and provides a radio frequency fingerprint extraction and identification method adapted to practical application environments.
Description
Technical Field
The invention relates to the field of wireless communication, in particular to a radiation source identification method and equipment for cross-mode communication signals based on FTGan-Yolo.
Background
Currently, with the development of technology, various industries have begun to interconnect in a wireless manner. Wireless interconnection usually adopts a communication protocol and carries a protocol address to indicate the identity of the device. Because the communication protocol address is set in software, it is easy to impersonate a device's identity or to maliciously spoof terminal equipment, which threatens wireless interconnection security.
Thus, in recent years Specific Emitter Identification (SEI) based on Radio Frequency Fingerprints (RFF) has been proposed. Because an RFF is determined by the unique characteristics of a device's factory circuits, it is unique to each device and difficult to spoof or impersonate. However, to improve communication stability, wireless devices transmit more than one mode of signal, where a mode denotes a signal waveform or signal type. This raises practical problems: each newly arriving modal signal requires a new RFF extractor; constructing different SEI classifiers for different modal signals multiplies the workload and complexity; and in practice it is very difficult to collect and label all signals.
In the SEI field, radio frequency fingerprint identification is generally realized in two ways. One is the traditional way: association calculations among different signals are carried out with tools from the communication field, such as the Fourier transform, energy-spectrum calculation and filtering, to judge whether a radio frequency fingerprint belongs to a certain device. The other is based on deep learning, where a deep network learns the distribution of the radio frequency fingerprint and determines the identity of the device.
Existing SEI systems consist of an RFF feature extractor and an emitter classifier, both constructed for a single-modality signal. When processing multi-modal signals, however, the signal mode must first be identified on arrival. Furthermore, different modal signals require different RFF extractors and SEI classifiers, which increases complexity and reduces robustness against unknown modal signals. A more cost-effective way to identify wireless devices across different modality signals is to develop a generic RFF extractor that unifies the structure and semantics of the multiple modalities. Introducing deep learning methods to improve the adaptability of the classifier to multi-modal signals is also critical. Meanwhile, deep-learning-based radiation source identification is usually built on a single waveform characteristic, while current deep learning methods are usually optimized for graphic images, so converting the signal characteristic into a form better suited to a deep learning network is equally important.
Currently, "Kuzdeba, S.; Robinson, J.; Carmack, J. Transfer Learning with Radio Frequency Signals. In Proceedings of the 2021 IEEE 18th Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 2021; pp. 1-9." proposes to pretrain an SEI deep-learning classifier on ADS-B signals and fine-tune it to identify transmitters from WiFi signals. However, this approach still requires fine-tuning whenever a new modal signal is introduced, which again results in multiple models being built for different modal signals.
A cross-modal identification method for images is proposed in "Tang, H.; Jia, K. Discriminative Adversarial Domain Adaptation. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, April 2020; pp. 5940-5947.", but experiments on signals show that this method cannot act directly on signals, and its identification accuracy is extremely low.
In summary, current radio frequency fingerprint extraction and identification is unreliable under dynamic interference and in complex environments, its pre-processing depends heavily on specialized communication algorithms, the fingerprint extraction mode lacks cross-mode universality, and the identification network is inefficient. These problems need to be solved before SEI can be applied in practice.
Disclosure of Invention
The invention aims to solve the problems of poor robustness, no cross-mode universality and low network identification efficiency in the prior art, and provides a radiation source identification method and equipment for cross-mode communication signals based on FTGan-Yolo.
In order to achieve the above object, the present invention provides the following technical solutions:
a radiation source identification method of a cross-mode communication signal based on FTGan-Yolo comprises the following steps:
a: acquiring a universal RFF characteristic of a radiation source to be identified, and converting the universal RFF characteristic based on a frequency domain into an RGB picture characteristic based on a cross domain characteristic through a time-frequency function;
b: inputting the RGB picture features into a pre-constructed FTGan-Yolo network to generate identification features;
c: according to the identification characteristics, matching the corresponding radiation sources, and outputting the emitter ID or signal pattern corresponding to the radiation source to be identified;
the FTGan-Yolo network is used for extracting characteristic differences of a known modal radiation source and an unknown modal radiation source and outputting identification characteristics; the pre-construction of the FTGan-Yolo network comprises the following steps:
s1: establishing a mixed data set crossing signal modes;
s2: extracting general RFF features of the mixed data set, and converting the general RFF features into RGB picture features based on cross domain features through a time-frequency function;
s3: performing model training on the FTGan-Yolo network through the RGB picture characteristics, and outputting the FTGan-Yolo network after model convergence;
the FTGan-Yolo network comprises an identifiable feature extractor, a generator, a discriminator, and a loss function; the identifiable feature extractor is used for extracting identifying features in the RGB picture features; the generator is for learning a characteristic difference between a known modal radiation source and an unknown modal radiation source; the discriminator is used for judging whether the characteristic is the characteristic generated by the generator.
As a preferred embodiment of the present invention, the identifiable feature extractor includes a plurality of CBS modules, ELAN modules, and MP1 modules;
the CBS module consists of a convolution layer, a BN layer and a SiLu activation function which are sequentially connected;
the ELAN module is formed by splicing a plurality of CBS modules;
the MP1 module is formed by splicing a plurality of CBS modules and a maximum pooling layer.
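The Conv-BN-SiLU composition of the CBS module described above can be sketched in plain numpy. This is an illustrative, minimal sketch only: the kernel size (3x3), stride (1), and channel counts are assumptions for demonstration, not values fixed by the patent, and the batch-norm uses unit gamma and zero beta.

```python
import numpy as np

def silu(x):
    # SiLU activation: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def batch_norm(x, eps=1e-5):
    # Per-channel normalization over (H, W); gamma=1, beta=0 in this sketch.
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def conv2d(x, w):
    # x: (C_in, H, W), w: (C_out, C_in, k, k); valid convolution, stride 1.
    c_out, _, k, _ = w.shape
    _, h, wd = x.shape
    out = np.zeros((c_out, h - k + 1, wd - k + 1))
    for o in range(c_out):
        for i in range(h - k + 1):
            for j in range(wd - k + 1):
                out[o, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[o])
    return out

def cbs(x, w):
    # CBS module: convolution, then BN, then SiLU, as in the text.
    return silu(batch_norm(conv2d(x, w)))

x = np.random.randn(3, 8, 8)           # toy 3-channel feature map
w = np.random.randn(4, 3, 3, 3) * 0.1  # 4 output channels, 3x3 kernels
y = cbs(x, w)
```

Following the text, an ELAN module would concatenate several such CBS outputs along the channel axis, and an MP1 module would splice CBS branches with a max-pooling branch.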
As a preferred scheme of the invention, the mixed data set is divided into a training set and a verification set according to a preset proportion, and the training set and the verification set comprise a real data set, a simulation data set and a public data set;
wherein the real data set is used as a known mode of model training of the FTGan-Yolo network, and consists of pre-processed ads-b signals;
the simulation data set and the public data set are used as unknown modes for training an FTGan-Yolo network model, and the simulation data set is composed of preset modal communication response signals; the common dataset is composed of wifi signals.
As a preferred embodiment of the present invention, the loss function of the FTGan-Yolo network is as follows:

$$L_{G}^{FTGan}=L_{G}+\alpha L_{t}+\beta L_{a}+L_{s},\qquad L_{D}^{FTGan}=L_{D}$$

wherein $L_{G}^{FTGan}$ is the FTGan-Yolo generator loss function, $L_{D}^{FTGan}$ is the FTGan-Yolo discriminator loss function, $\alpha$ and $\beta$ are preset parameters, $L_{G}$ is the basic generator loss function, $L_{D}$ is the basic discriminator loss function, $L_{t}$ is the transfer loss function, $L_{a}$ is the amplitude loss function, and $L_{s}$ is the smooth transition loss.
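The weighted composition of the generator loss components named above can be sketched as a one-line function. The additive form with preset weights alpha and beta on the transfer and amplitude terms is an assumption consistent with the listed components, not a formula stated explicitly in the source.

```python
def ftgan_generator_loss(l_g, l_t, l_a, l_s, alpha=1.0, beta=1.0):
    # Composite generator loss: basic GAN loss plus weighted transfer,
    # amplitude, and smooth-transition terms (assumed additive form).
    return l_g + alpha * l_t + beta * l_a + l_s

total = ftgan_generator_loss(l_g=0.7, l_t=0.2, l_a=0.05, l_s=0.1,
                             alpha=0.5, beta=0.25)
```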
As a preferred embodiment of the present invention, the expressions of the basic generator loss function $L_{G}$ and the basic discriminator loss function $L_{D}$ are as follows:

$$L_{D}=\mathbb{E}_{x_{k}}\left[\mathrm{BCE}\left(D\left(\Delta x,\,x_{k}\right),\,I\right)\right]+\mathbb{E}_{x_{u}}\left[\mathrm{BCE}\left(D\left(\Delta g,\,g_{k}\right),\,O\right)\right]$$

$$L_{G}=\mathbb{E}_{x_{u}}\left[\mathrm{BCE}\left(D\left(\Delta g,\,g_{k}\right),\,I\right)\right]$$

wherein $\mathbb{E}_{x_{u}}$ denotes the expectation of the loss value over the unknown-modality data, and $\mathbb{E}_{x_{k}}$ over the known-modality data; BCE is the binary cross-entropy loss function; $O$ is a label matrix filled with 0 of the discriminator output size, and $I$ a label matrix filled with 1 of the same size; $D$ is the discrimination function of FTGan-Yolo, whose target label is 1 when the input is the combination of the real modal-difference data and the known-modality data, and 0 when the input is the generated modal-difference data and the generated known-modality data; $G$ is the generating function of FTGan-Yolo, and $G(x)$ is the generated feature output when the input is $x$; $x_{k}$ is an RGB picture feature of the known modality, $x_{u}$ an RGB picture feature of an unknown modality, and $\Delta x$ the difference between the RGB picture features of the known and unknown modalities; $\Delta g$ is the difference component of the generated features, and $g_{k}$ the generated RGB picture feature of the known modality.
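The binary cross-entropy wiring of the basic adversarial losses described above can be sketched in numpy: real (difference, known-modality) pairs are scored against an all-ones label matrix and generated pairs against an all-zeros matrix, while the generator is rewarded when generated pairs score as ones. The discriminator itself is stubbed as precomputed scores; only the loss plumbing is shown.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # Binary cross-entropy over a batch of discriminator scores.
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def discriminator_loss(d_real, d_fake):
    ones = np.ones_like(d_real)    # label matrix I, filled with 1
    zeros = np.zeros_like(d_fake)  # label matrix O, filled with 0
    return bce(d_real, ones) + bce(d_fake, zeros)

def generator_loss(d_fake):
    # Generator wants generated pairs to be judged as real.
    return bce(d_fake, np.ones_like(d_fake))

d_real = np.full((4, 1), 0.9)  # stub scores on real (difference, known) pairs
d_fake = np.full((4, 1), 0.1)  # stub scores on generated pairs
l_d = discriminator_loss(d_real, d_fake)
l_g = generator_loss(d_fake)
```

With a discriminator that already separates real from generated pairs well, the discriminator loss is small and the generator loss large, which is the expected adversarial pressure.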
As a preferred embodiment of the present invention, the expression of the transfer loss function $L_{t}$ is:

$$L_{t}=\mathbb{E}_{x_{u}}\left[\mathrm{MSE}\left(\Delta g,\,\Delta x\right)+\mathrm{MSE}\left(g_{k},\,x_{k}\right)\right]$$

wherein MSE is the mean square error; $\mathbb{E}_{x_{u}}$ denotes the expectation of the loss value over the unknown-modality data; $\Delta g$ is the difference component of the generated features; $\Delta x$ is the difference between the RGB picture features of the unknown and known modes; $g_{k}$ is the generated RGB picture feature of the known mode; $x_{k}$ is an RGB picture feature of the known mode, and $x_{u}$ an RGB picture feature of the unknown mode.
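A transfer-style loss in the spirit of the symbols listed above can be sketched as two mean-square-error terms: the generated modal difference should match the real difference between unknown- and known-modality features, and the generated known-modality feature should match the real one. The exact combination of terms is an assumption for illustration.

```python
import numpy as np

def mse(a, b):
    return np.mean((a - b) ** 2)

def transfer_loss(delta_g, g_k, x_u, x_k):
    # delta_g: generated modal difference; g_k: generated known-mode feature.
    delta_x = x_u - x_k  # real modal difference
    return mse(delta_g, delta_x) + mse(g_k, x_k)

x_k = np.ones((2, 4))
x_u = 2 * np.ones((2, 4))
# A perfect generator reproduces both the difference and the known feature.
perfect = transfer_loss(delta_g=x_u - x_k, g_k=x_k, x_u=x_u, x_k=x_k)
```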
As a preferred embodiment of the present invention, the expression of the amplitude loss function $L_{a}$ is:

$$L_{a}=\mathcal{R}\left(L_{b},\,O\right),\qquad L_{b}=\frac{1}{B}\sum_{i=1}^{B}\left(\Delta g_{i}-\Delta x_{i}\right)$$

wherein $\mathcal{R}$ is a binary regularization function; $O$ is a matrix filled with 0 of the same size as $L_{b}$; $L_{b}$ is the batch difference loss value; $B$ is the batch size; $\Delta g_{i}$ is the difference component of the $i$-th generated feature; $\Delta x_{i}$ is the difference between the RGB picture features of the known and unknown modes; $x_{k}$ is an RGB picture feature of the known mode, and $x_{u}$ an RGB picture feature of the unknown mode.
As a preferred embodiment of the present invention, the expression of the smooth transition loss $L_{s}$ is:

$$L_{s}=\mathbb{E}_{x_{u}}\left[\mathrm{MSE}\left(\Delta g_{i},\,\overline{\Delta g}\right)\right],\qquad \overline{\Delta g}=\frac{1}{B}\sum_{i=1}^{B}\Delta g_{i},\qquad S=\left\{G\left(x_{u}^{(1)}\right),\dots,G\left(x_{u}^{(B)}\right)\right\}$$

wherein $\mathbb{E}_{x_{u}}$ denotes the expectation of the loss value over the unknown-modality data; MSE is the mean square error; $S$ is the sample set in one period; $B$ is the batch size; $\Delta g_{i}$ is the difference of the $i$-th generated feature in the batch; $\overline{\Delta g}$ is the average of the generated differences over the sample set of the period, used to make each sample generated in the period obey the collective distribution; $G\left(x_{u}\right)$ is the generated feature output when the input is $x_{u}$.
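The idea of pulling each per-sample generated difference toward the batch average, so individual generated samples obey the collective distribution of the period, can be sketched directly:

```python
import numpy as np

def smooth_transition_loss(delta_g_batch):
    # delta_g_batch: (B, ...) generated differences for one period.
    # Penalize deviation of each sample's difference from the batch average.
    mean_delta = delta_g_batch.mean(axis=0, keepdims=True)
    return np.mean((delta_g_batch - mean_delta) ** 2)

identical = np.ones((8, 4))  # every sample already equals the average
spread = np.vstack([np.zeros((4, 4)), np.ones((4, 4))])  # two sub-populations
```

When all generated differences agree the loss vanishes; when they split into sub-populations it is positive, which is exactly the smoothing pressure described.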
As a preferred scheme of the invention, in the step C the feature matching of the radiation source is carried out through a pre-constructed unsupervised clustering function.
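The patent does not specify the unsupervised clustering function; its CPC classification (G06F18/23213, fixed-cluster methods such as K-means) suggests one plausible choice, sketched here as a minimal K-means in numpy with a deterministic farthest-point initialization. Everything here is an assumption for illustration.

```python
import numpy as np

def kmeans(feats, k, iters=20):
    # Deterministic init: first point, then farthest-point heuristic.
    centers = [feats[0]]
    for _ in range(k - 1):
        d = np.min([((feats - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(feats[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((feats[:, None] - centers[None]) ** 2).sum(-1), 1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = feats[labels == c].mean(axis=0)
    return labels

# Two well-separated toy "emitters" in identification-feature space.
feats = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
labels = kmeans(feats, k=2)
```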
As a preferred embodiment of the present invention, the step S3 includes the steps of:
s31: extracting RGB picture characteristics of the mixed data set;
s32: unsupervised training of the FTGan-Yolo network to transfer the general feature morphology of the unknown modality to the general feature morphology of the known modality;
s33: performing supervised training on the identifiable feature extractor through a universal feature system with a known mode until the model converges;
s34: and sending the universal features of all unknown modes to the FTGan-Yolo network for processing, transferring the universal feature forms of all unknown modes to the universal feature forms of known modes, and outputting the FTGan-Yolo network.
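The control flow of steps S31-S34 — unsupervised feature-transfer training of the FTGan network, followed by supervised training of the identifiable feature extractor until convergence — can be sketched structurally. All model updates below are stubs with artificial loss decay; only the training recipe's shape is illustrated.

```python
def train_ftgan(rgb_features, epochs=5):
    # S32: unsupervised feature-transfer training (stubbed loss decay).
    loss = 1.0
    for _ in range(epochs):
        loss *= 0.5  # stand-in for one generator/discriminator update
    return loss

def train_extractor(known_features, tol=0.1, max_epochs=100):
    # S33: supervised training until the (stubbed) loss converges below tol.
    loss, epochs = 1.0, 0
    while loss > tol and epochs < max_epochs:
        loss *= 0.7  # stand-in for one supervised update
        epochs += 1
    return loss, epochs

ft_loss = train_ftgan(rgb_features=None)           # S31-S32
final_loss, n_epochs = train_extractor(known_features=None)  # S33
```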
A radiation source identification device for FTGan-Yolo based cross-modal communication signals comprising at least one processor, and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the preceding claims.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a radiation source identification method of a cross-mode communication signal based on FTGan-Yolo, which consists of a general RFF extractor and a novel feature transfer generation countermeasure network (FTGan, feature Transfer Generative Adversarial Network). By this approach, a single SEI classifier can be used to identify transmitters that span different modalities of signals, FTGan-Yolo is able to transfer the RGB picture features of different modalities to identifiable distributions without supervision. The single enhanced transmitter identification system of the method is capable of identifying transmitters that span different modal signals as the modal signals are transformed. The method has low resource requirements and does not increase the complexity of the radio frequency fingerprint extraction process; meanwhile, the method aims at the problem of multi-mode radio frequency fingerprint signals in the field of radio frequency fingerprint identification, achieves efficient and general extraction of outstanding RGB picture features from the multi-mode signals, and provides a radio frequency fingerprint extraction and identification method which can adapt to practical application environments.
Drawings
FIG. 1 is a schematic flow chart of a radiation source identification method based on a cross-modal communication signal of an embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of a pre-construction flow of an FTGan-Yolo network in a method for identifying a radiation source of a cross-mode communication signal based on an FTGan-Yolo according to embodiment 1 of the present invention;
FIG. 3 is a flowchart of the construction of an FTGan-Yolo network in the method for identifying a radiation source of a cross-mode communication signal based on the FTGan-Yolo according to embodiment 2 of the present invention;
fig. 4 is a schematic structural diagram of a residual kernel in a radiation source identification method of a cross-mode communication signal based on FTGan-Yolo according to embodiment 2 of the present invention;
FIG. 5 is a schematic structural diagram of an identifiable feature extractor in a method for identifying a radiation source of a cross-modal communication signal based on FTGan-Yolo according to embodiment 2 of the present invention;
FIG. 6 is a network structure diagram of ELAN and MP1 in an identifiable feature extractor of a radiation source identification method of cross-modal communication signals based on FTGan-Yolo according to embodiment 2 of the present invention;
FIG. 7 is a diagram showing the structure of a conventional Gan network and a FTGan-Yolo network in a method for identifying a radiation source of a cross-modal communication signal according to embodiment 2 of the present invention;
FIG. 8 is a schematic diagram of a cross-modal identification flow in a method for identifying a radiation source of a cross-modal communication signal based on FTGan-Yolo according to embodiment 3 of the present invention;
FIG. 9 is an RGB diagram of an ADS-B signal after time-frequency processing in a radiation source identification method based on a cross-modal communication signal of an FTGan-Yolo according to embodiment 4 of the present invention;
fig. 10 is an RGB diagram of a WiFi signal after time-frequency processing in a radiation source identification method based on FTGan-Yolo cross-mode communication signals according to embodiment 4 of the present invention;
FIG. 11 is a comprehensive SEI precision confusion matrix based on the multi-mode signal in the radiation source identification method based on the cross-mode communication signal of the FTGan-Yolo according to embodiment 4 of the present invention;
fig. 12 is a schematic structural diagram of a radiation source identification device based on a cross-modal communication signal according to an FTGan-Yolo according to embodiment 5 of the present invention, which uses a cross-modal communication signal based on FTGan-Yolo according to embodiment 1.
Detailed Description
The present invention will be described in further detail with reference to test examples and specific embodiments. It should not be construed that the scope of the above subject matter of the present invention is limited to the following embodiments, and all techniques realized based on the present invention are within the scope of the present invention.
Example 1
As shown in fig. 1, a radiation source identification method of a cross-mode communication signal based on FTGan-Yolo includes the following steps:
a: and acquiring the universal RFF characteristic of the radiation source to be identified, and converting the universal RFF characteristic based on the frequency domain into RGB picture characteristic based on the cross domain characteristic through a time-frequency function.
B: and inputting the RGB picture characteristics into a pre-constructed FTGan-Yolo network to generate identification characteristics.
C: matching the corresponding radiation source according to the identification features, and outputting the emitter ID or signal pattern corresponding to the radiation source to be identified.
The FTGan-Yolo network is used for extracting characteristic differences of known modal radiation sources and unknown modal radiation sources and outputting identification characteristics. As shown in fig. 2, the pre-construction of the FTGan-Yolo network includes the following steps:
s1: a hybrid dataset is established across the signal patterns.
S2: extracting general RFF features of the mixed data set, and converting the general RFF features into RGB picture features based on cross domain features through a time-frequency function;
s3: and carrying out model training on the FTGan-Yolo network through the RGB picture characteristics, and outputting the FTGan-Yolo network after model convergence.
The FTGan-Yolo network comprises an identifiable feature extractor, a generator, a discriminator, and a loss function; the identifiable feature extractor is used for extracting identifying features in the RGB picture features; the generator is for learning a characteristic difference between a known modal radiation source and an unknown modal radiation source; the discriminator is used for judging whether the characteristic is generated by the generator or not; i.e. to determine whether the input features are those generated by the generator or those of real data.
Example 2
The embodiment is a specific implementation method of the FTGan-Yolo network in the method for identifying radiation sources of cross-modal communication signals based on FTGan-Yolo described in embodiment 1, as shown in fig. 3, including the following steps:
s1: a hybrid dataset is established across the signal patterns.
The mixed data set is divided into a training set and a verification set according to a preset proportion, and the training set and the verification set comprise a real data set, a simulation data set and a public data set.
Wherein the real dataset, used as the known modality for model training of the FTGan-Yolo network, is made up of pre-processed ADS-B (Automatic Dependent Surveillance-Broadcast) signals.
The simulation data set and the public data set are used as unknown modalities for training the FTGan-Yolo network model; the simulation data set consists of communication response signals of preset modes, and the public data set consists of WiFi signals.
S2: and extracting the general RFF characteristics of the mixed data set, and converting the general RFF characteristics into RGB picture characteristics based on cross domain characteristics through a time-frequency function.
The general RFF features of the mixed data set are obtained by aligning the signal lengths of the data via up-sampling or down-sampling, so that all signals are kept at the same intermediate frequency point and the same length.
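The length-alignment step above can be sketched with simple linear-interpolation resampling; `np.interp` here is an illustrative stand-in for whatever up/down-sampling filter the patent actually uses, which is not specified.

```python
import numpy as np

def align_signal(sig, target_len):
    """Resample a 1-D signal to target_len samples by linear interpolation.

    Illustrative stand-in for the up/down-sampling alignment step; the
    patent does not disclose the exact resampling method.
    """
    sig = np.asarray(sig, dtype=float)
    old = np.linspace(0.0, 1.0, num=len(sig))
    new = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(new, old, sig)

short = np.sin(np.linspace(0, 2 * np.pi, 100))   # up-sampled case
long_ = np.sin(np.linspace(0, 2 * np.pi, 5000))  # down-sampled case
aligned = [align_signal(s, 1024) for s in (short, long_)]
```

After this step, every signal in the mixed data set has the same length and can share one network input structure.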
The frequency-domain RFF features aligned across all modes are further converted, through a time-frequency function, into RGB picture features based on cross-domain features. The currently popular deep learning network structures are optimized for picture structures, so the waveform structure is converted into a picture structure for processing and optimization. In the conversion process, the cross-domain features are preprocessed by a common residual network to form an RGB image with a 3×224×224 structure. Meanwhile, the cross-domain RGB multichannel picture format is more conducive to weakening the signal characteristics of different modes and promoting the characterization of RFF device features.
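The time-frequency conversion to a 3×224×224 RGB-style input can be sketched as a minimal short-time Fourier transform followed by a resize. The channel mapping (replicating one magnitude map into 3 channels) and the nearest-neighbour resize are assumptions for illustration; the patent instead routes the features through a residual network, whose details are not disclosed.

```python
import numpy as np

def stft_mag(sig, frame=256, hop=64):
    # Hann-windowed short-time Fourier transform magnitude (minimal sketch).
    win = np.hanning(frame)
    n = 1 + (len(sig) - frame) // hop
    frames = np.stack([sig[i * hop:i * hop + frame] * win for i in range(n)])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # shape (freq, time)

def to_rgb_224(mag):
    # Nearest-neighbour resize to 224x224 and replication into 3 channels,
    # mimicking the 3x224x224 input structure described above; the real
    # colour mapping used by the patent is not disclosed.
    f_idx = np.linspace(0, mag.shape[0] - 1, 224).astype(int)
    t_idx = np.linspace(0, mag.shape[1] - 1, 224).astype(int)
    img = mag[np.ix_(f_idx, t_idx)]
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)  # [0, 1] range
    return np.stack([img, img, img])  # (3, 224, 224)

sig = np.sin(2 * np.pi * 0.05 * np.arange(8192))
rgb = to_rgb_224(stft_mag(sig))
```

The resulting array has the 3×224×224 layout that the identifiable feature extractor expects as input.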
S3: and carrying out model training on the FTGan-Yolo network through the RGB picture characteristics, and outputting the FTGan-Yolo network after model convergence.
The FTGan-Yolo network comprises an identifiable feature extractor, a generator, a discriminator, and a loss function. The identifiable feature extractor is used for extracting identification features from the RGB picture features; the generator is used for learning the feature difference between a known-mode radiation source and an unknown-mode radiation source; the discriminator is used for judging whether the input features were generated by the generator or come from real data. According to the invention, the identifiable image feature extraction network is optimized by combining yolov7 with a residual network, so that the difference between modes is weakened and the characterization capability of the RGB picture features is improved.
The present invention enables identification of transmitters from known-mode signals by means of an identifiable feature extractor (SEI classifier). The identifiable feature extractor combines a partial network based on YoLo-v7 with a residual network and extracts reliable identification features from the signal picture format; a multi-layer residual structure is generally adopted, and its residual kernel is shown in fig. 4. Compared with conventional convolutional networks, the ResNet model performs better at handling gradient explosion. Furthermore, common feature extraction networks (such as AlexNet, VGG, or GoogLeNet) have an excessive number of layers, which may be unnecessary given the lower dimensionality of signal features. Specifically, as shown in fig. 5, the identifiable feature extractor of this embodiment comprises a plurality of CBS modules, ELAN modules, and MP1 modules. Each CBS module is composed of a convolution layer, a BN layer, and a SiLU activation function connected in sequence; different CBS modules have different convolution kernel sizes, and the numbers in a CBS module denote its kernel size and stride. As shown in fig. 6 (where cat is a splicing operation), the ELAN module is formed by splicing several CBS modules. The MP1 module is formed by splicing several CBS modules with a max pooling layer. According to the invention, only the difference features among different modes are extracted by the identifiable feature extractor, which greatly reduces the training load of the FTGan-Yolo network and improves its training efficiency. Meanwhile, the invention creatively enhances the input structure of the FTGan-Yolo network, which strengthens the whole network structure during training.
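A CBS building block (Conv + BN + SiLU) mainly transforms the tensor shape through its kernel size and stride. The sketch below shows the SiLU activation and the spatial sizes produced by a chain of CBS modules; the "same"-style padding p = k // 2 and the example kernel/stride chain are assumptions, since the patent only states that kernel size and stride vary per module.

```python
import numpy as np

def silu(x):
    # SiLU (a.k.a. swish): x * sigmoid(x), the activation used in CBS modules.
    return x / (1.0 + np.exp(-x))

def cbs_out_hw(h, w, k, s, p=None):
    # Spatial size after a CBS convolution; padding p = k // 2 is assumed,
    # since the patent only gives kernel size k and stride s.
    if p is None:
        p = k // 2
    return (h + 2 * p - k) // s + 1, (w + 2 * p - k) // s + 1

# Chained CBS modules, e.g. a YOLOv7-style stem: alternating stride 1 and 2.
hw = (224, 224)
for k, s in [(3, 1), (3, 2), (3, 1), (3, 2)]:
    hw = cbs_out_hw(*hw, k, s)
```

Starting from the 224×224 RGB input, each stride-2 CBS halves the spatial resolution, which is how the extractor progressively condenses the picture features.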
The generator performs feature encoding and decoding using multi-layer convolution and deconvolution networks.
The discriminator is composed of a multi-layer fully connected network and is used for identifying whether input features are features generated by the generator or features of real data.
The loss function is used for optimizing the generator and the discriminator, so that the two networks achieve better effects in training. The specific network improvements are shown below:
1. improvement of the model:
1) Improved Gan architecture:
Gans are currently very popular in image generation, but their architecture is not well suited to generating waveforms directly. FTGan is inspired by, and upgraded from, the Gan framework. Fig. 7 depicts an architectural comparison between a generic Gan network and FTGan. Unlike conventional Gan designs, the FTGan generator in this embodiment focuses only on learning the differences between signal modes. This approach allows a simpler and more accurate network and is also better suited to computation on waveform structures. Furthermore, since the difference between signals of different modes has lower information entropy than a complete signal, this embodiment proposes that FTGan generate the feature differences between the features of known-mode and unknown-mode signals. By removing the feature difference from the unknown-mode signal features, feature matching with the known-mode signals can be realized, improving the accuracy and efficiency of waveform feature generation.
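The difference-generation idea can be illustrated with a toy numpy sketch: if the generator recovers the inter-mode difference exactly, adding it back to the unknown-mode features reproduces the known-mode features. All arrays here are synthetic stand-ins, not the patent's real features, and the "generator" is replaced by the exact difference to show the matching principle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in features; real features would come from the extractor above.
f_known = rng.normal(size=(8, 16))        # known-mode feature vectors
mode_offset = rng.normal(size=(1, 16))    # systematic inter-mode difference
f_unknown = f_known - mode_offset         # unknown mode = known mode, shifted

# If the generator learns the difference delta = f_known - f_unknown exactly,
# adding it back to the unknown-mode features recovers the known-mode ones.
delta = f_known - f_unknown
f_matched = f_unknown + delta
```

Because the difference carries far less information than the full feature map, a generator that only has to produce `delta` is a much easier learning problem than one that must synthesize `f_known` from scratch.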
2) Further improvements to FTGAN structure:
Comparison of the original FTGAN structure with the modified FTGAN-YOLO structure:
(1) the FTGAN-YOLO converts signals of different modes into RGB images for processing, and the existing popular deep learning network structure and GAN network are optimized based on the picture structure, so that the waveform structure is converted into the picture structure for processing and optimizing.
(2) The RGB multichannel picture format converted into the cross domain based on the combination of the general frequency domain RGB picture features is more beneficial to weakening the signal features of different modes and promoting the characterization of RFF equipment features.
(3) The FTGAN-YOLO identifiable feature extraction network is optimized for RGB images, and the network structure of YOLO-v7 is combined with a residual network to optimize the performance of the identifiable feature extraction network.
(4) The loss function of FTGan is optimized and improved for the characteristics of the RGB picture.
2. Detailed improvement of FTGan-Yolo:
the network structure of FTGan-Yolo is shown in the right frame composition of fig. 7:
1) There are two inputs: the known Mode 1 features, denoted F_k, and the unknown Mode 2/3/4 features, denoted F_u. It should be noted that no labels are required during the training of FTGan-Yolo;
2) The generator G takes F_k and F_u as input and outputs the generated difference ΔF_g;
3) The difference is removed from the unknown modal features to obtain the generated known-mode features F_kg, which are matched against the real known-mode features F_k;
4) The discriminator is improved by enhancing its input. In general, a conventional discriminator D determines whether an input sample is a real sample x or a generated "real" sample x_g by computing the similarity between x and x_g, where x_g is a "real" sample produced by the Gan generator; the similarity is then used to update the parameters of D. FTGan-Yolo instead feeds the pairs (F_k, ΔF) and (F_kg, ΔF_g) into the discriminator, where ΔF is the real difference between the known-mode and unknown-mode features, and uses the enhanced discriminator to drive the generator. These inputs allow the discriminator to compute both the similarity between the real sample and the generated "real" sample and the similarity between the real difference and the generated "real" difference;
5) Three loss functions are added to improve the performance of the generator and enhance the "realness" of the generated features.
3. The adversarial generation network FTGan-Yolo is established; the network architecture is shown in the right frame of FIG. 7:
The unlabeled ads-b signals (Mode 1) in the data set serve as the base domain, with feature distribution P_b; the unlabeled wifi, rm1, and rm2 signals (Modes 2/3/4) in the data set serve as the transfer domains, with feature distribution P_t. The FTGan-Yolo objective is to train the generator G so as to minimize the adversarial objective function min_G max_D V(D, G), where D is the discriminator function and G is the generator function.
Typically the D loss and G loss functions of a Gan are defined as follows:

L_d = BCE(D(x), ones_like(x)) + BCE(D(G(z)), zeros_like(x)),
L_g = BCE(D(G(z)), ones_like(x)),

wherein, during Gan training, the objective for D is maximized (an ascent) so that the function D converges, and the objective for G is minimized (a descent) so that the function G converges; zeros_like(x) is a matrix of the same size as x filled with 0; ones_like(x) is a matrix of the same size as x filled with 1; BCE denotes the binary cross entropy loss function. In FTGan-Yolo, the improvements and loss functions are detailed below. The loss function of the FTGan-Yolo network is as follows:

L_G = l_g + α·l_trans + β·l_amp + l_smooth, L_D = l_d,

wherein L_G is the loss function of the FTGan-Yolo generator, L_D is the FTGan-Yolo discriminator loss function, α and β are preset parameters (default values are set in this embodiment), l_g is the basic generator loss function, l_d is the basic discriminator loss function, l_trans is the transfer loss function, l_amp is the amplitude loss function, and l_smooth is the smooth transition loss.
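The BCE-style adversarial losses and the combination of the generator-side loss terms can be sketched as follows. The weighting L_G = l_g + α·l_trans + β·l_amp + l_smooth is one plausible reading of the composite loss; the exact combination and the default α, β values are images in the source and are therefore assumptions here.

```python
import numpy as np

def bce(p, y, eps=1e-7):
    # Binary cross entropy between discriminator outputs p and labels y.
    p = np.clip(p, eps, 1.0 - eps)
    return float(np.mean(-(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))))

def total_generator_loss(l_g, l_trans, l_amp, l_smooth, alpha=1.0, beta=1.0):
    # Assumed composite: L_G = l_g + alpha*l_trans + beta*l_amp + l_smooth.
    # The patent's exact weighting and default alpha/beta are not legible.
    return l_g + alpha * l_trans + beta * l_amp + l_smooth

# The generator wants D(fake) -> 1; the discriminator wants D(real) -> 1
# and D(fake) -> 0.
d_fake = np.array([0.3, 0.4])
l_g = bce(d_fake, np.ones_like(d_fake))
```

With these pieces, `l_d` would be computed as `bce(d_real, ones) + bce(d_fake, zeros)`, matching the ones_like/zeros_like labels described above.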
(a) Basic generator loss function l_g and basic discriminator loss function l_d.

This embodiment improves l_g and l_d by means of enhanced inputs, making FTGan-Yolo more robust; they are expressed as follows:

l_d = E_u[ BCE(D(ΔF, F_k), ones_like) ] + E_k[ BCE(D(ΔF_g, F_kg), zeros_like) ],
l_g = E_u[ BCE(D(ΔF_g, F_kg), ones_like) ],

wherein the objective for l_d follows an ascending trend and the objective for l_g follows a descending trend during training; E_u computes the expectation of the loss values over the unknown-mode data, and E_k computes the expectation of the loss values over the known-mode data; BCE is the binary cross entropy loss function; zeros_like is a feature matrix filled with 0 of the corresponding size, and ones_like is a feature matrix filled with 1 of the corresponding size; D is the discrimination function of FTGan-Yolo, and D(x) is the probability that x is real when the input is x; the label takes the value 1 or 0, where 1 indicates that the input is the combination of the real modal difference data and the known modal data, and 0 indicates that the input is the generated modal difference data and the generated modal data; G is the generating function of FTGan-Yolo, and G(x) is the generated feature output when the input is x; F_k denotes the RGB picture features of the known mode, F_u the RGB picture features of the unknown mode, and ΔF the difference between the RGB picture features of the known and unknown modes; ΔF_g denotes the generated feature difference, and F_kg the generated known-mode RGB picture features. The discriminator simultaneously computes the similarity between F_k and F_kg and the similarity between ΔF and ΔF_g; G attempts to generate "real" samples that cannot be distinguished from the real ones. By computing the loss over differences between sample combinations, the D and G of FTGan-Yolo can be trained more robustly, and the enhanced input significantly improves the ability of FTGan-Yolo to generate "real" differences, thereby enabling the transfer of unknown modal features to known modal features.
(b) Transfer loss function l_trans.

To ensure the accuracy of the difference between ΔF_g and ΔF, this embodiment adds l_trans and trains FTGan-Yolo by descending on it, expressed as:

l_trans = E_u[ MSE(ΔF_g, ΔF) + MSE(F_kg, F_k) ],

wherein MSE is the mean square error and E_u computes the expectation of the loss values over the unknown-mode data; ΔF_g denotes the generated feature difference, ΔF the difference between the RGB picture features of the unknown and known modes, F_kg the generated known-mode RGB picture features, F_k the RGB picture features of the known mode, and F_u the RGB picture features of the unknown mode. By reducing the losses between the generated differences and the true differences, and between the generated known-mode features and the true known-mode features, l_trans makes G produce more reliable "real" differences.
(c) Amplitude loss function l_amp.

In order to obtain a more suitable generated difference, the generated amplitude should be limited; this embodiment therefore proposes an amplitude loss, expressed as:

l_amp = R( L_batch, zeros_like ), L_batch = (1/n) · Σ_{i=1..n} ΔF_g,i,

wherein R is a binary regularization function; zeros_like is a feature matrix filled with 0 of the same size as ΔF_g; L_batch is the batch difference loss value; n denotes the batch size; ΔF_g denotes the generated feature difference, ΔF the difference between the RGB picture features of the known and unknown modes, F_k the RGB picture features of the known mode, and F_u the RGB picture features of the unknown mode.
(d) Smooth transition loss l_smooth.

In order to smooth the RGB picture features of the transferred signal, FTGan-Yolo optimizes them by descending on l_smooth during training, expressed as:

l_smooth = E_u[ MSE(ΔF_g,i, ΔF_g,avg) ], ΔF_g,avg = (1/n) · Σ_{i=1..n} ΔF_g,i,

wherein E_u computes the expectation of the loss values over the unknown-mode data and MSE is the mean square error; S = {s_1, ..., s_n} is the sample set in one period, n denotes the batch size, ΔF_g,i denotes the difference generated for the i-th sample of the batch, and ΔF_g,avg denotes the average of the differences generated over the sample set of this period, which makes the individual samples generated in this period obey the collective distribution; G(F_u) is the generated feature output when the input is F_u. This loss function is added to improve the distribution balance of the different modal features.
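The three auxiliary losses (transfer, amplitude, smooth transition) can be sketched in numpy as below. The L2 form used for the amplitude term is an assumption: the source calls it a "binary regularization function" without giving its formula, so an MSE-toward-zero penalty stands in for it.

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def transfer_loss(delta_gen, delta_true, f_known_gen, f_known):
    # l_trans: the generated difference and the generated known-mode features
    # should match their real counterparts (sketch of the description above).
    return mse(delta_gen, delta_true) + mse(f_known_gen, f_known)

def amplitude_loss(delta_gen):
    # l_amp: regularize the generated differences toward zero amplitude; an
    # L2 penalty is assumed, since the exact "binary regularization" form is
    # not legible in the source.
    return mse(delta_gen, np.zeros_like(delta_gen))

def smooth_loss(delta_gen_batch):
    # l_smooth: pull each per-sample generated difference toward the batch
    # mean so the differences follow a collective distribution.
    mean = delta_gen_batch.mean(axis=0, keepdims=True)
    return float(np.mean((delta_gen_batch - mean) ** 2))
```

Each function returns zero exactly when its constraint is already satisfied, which is what makes them usable as additive penalties on the generator.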
To sum up, the training flowchart of the FTGan-Yolo network is shown in fig. 8:
S31: RGB picture features of the mixed data set are extracted; that is, the general RFF features are obtained through alignment and converted, through a time-frequency function, into RGB picture features based on cross-domain features, so as to unify the input structure of the network;
S32: unsupervised training of the FTGan-Yolo network is carried out (with back-propagation updates), transferring the general feature forms of all unknown modes into the general feature form of the known mode;
S33: supervised training of the identifiable feature extractor is carried out on the known-mode general features until the model converges; the extractor can then extract identification features from the known-mode general features;
S34: the general features of all unknown modes are sent to the FTGan-Yolo network for processing, their general feature forms are transferred into the known-mode general feature form, and the FTGan-Yolo network is output. At this point the general features of all modes have been converted into "known-mode general features", from which the identification features are extracted by the identifiable feature extractor.
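The S31–S34 flow can be illustrated with stub components. Every function here (`extract_rgb_features`, the mean-shift stand-in for the generator's output, `transfer_to_known`) is a placeholder for the real networks, used only to show how features move through the pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def extract_rgb_features(signals):
    # S31 stand-in: map raw signals to fixed-size feature vectors.
    return np.stack([np.resize(s, 32) for s in signals])

def transfer_to_known(f_unknown, delta):
    # S32/S34 stand-in: remove the modal difference from unknown features.
    return f_unknown + delta

known = [rng.normal(size=100) for _ in range(4)]
unknown = [rng.normal(size=80) for _ in range(4)]

f_known = extract_rgb_features(known)
f_unknown = extract_rgb_features(unknown)
# Stand-in for the generator's learned difference: the gap between the
# mean feature vectors of the two modes.
delta = f_known.mean(axis=0) - f_unknown.mean(axis=0)
f_transferred = transfer_to_known(f_unknown, delta)  # now "known-mode" form
```

After this transfer, the (stub) classifier trained in S33 on known-mode features could be applied unchanged to the transferred unknown-mode features.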
Example 3
This embodiment differs from the previous embodiment in that step C performs feature matching of the radiation source by means of a pre-trained unsupervised clustering function, such as the K-Means, Affinity Propagation, Agglomerative Clustering, MeanShift Clustering, Bisecting K-Means, DBSCAN, OPTICS, or BIRCH functions.
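Any of the listed clustering functions could serve for the feature matching; as a concrete sketch, a minimal K-Means implementation is shown below on synthetic, well-separated feature clusters. The deterministic evenly spaced initialization is an illustrative choice, not the patent's method.

```python
import numpy as np

def kmeans(x, k, iters=50):
    # Minimal K-Means standing in for the pre-trained clustering functions
    # listed above (K-Means, DBSCAN, BIRCH, ...).
    centers = x[np.linspace(0, len(x) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = x[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels, centers

# Two well-separated clusters of synthetic radiation-source features.
rng = np.random.default_rng(2)
a = rng.normal(0.0, 0.1, size=(20, 2))
b = rng.normal(5.0, 0.1, size=(20, 2))
labels, _ = kmeans(np.vstack([a, b]), k=2)
```

In practice a library implementation (e.g. scikit-learn's `KMeans`) would replace this sketch; the point is only that matching by cluster membership needs no labels.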
Example 4
The embodiment is a specific implementation construction mode of the FTGan-Yolo network in the method for identifying a radiation source of a cross-mode communication signal based on FTGan-Yolo described in embodiment 2, which includes the following steps:
S1: A hybrid dataset is established across the signal modes.
The hybrid dataset includes three parts:
a first part: a real dataset;
the real data set consists of preprocessed ads-b signals. This part comprises real aircraft communication signals, collected from real aircraft over 12 months with a real-time spectrum analyzer; the preamble pulses of the signal are used for SEI, and each signal contains 4 pulses. The center frequency is set to 1090 MHz, the acquisition frequency to 150 MHz, the modulation is PPM, and the signal-to-noise ratio is about 10 dB. The RGB diagram of the ads-b signal after time-frequency function conversion is shown in fig. 9.
A second part: a common dataset;
the public data set consists of WiFi signals. WiFi 1-3 use 3 antennas of the same device, and each transmitted pulse of each signal is used for SEI. The IEEE 802.11a/g protocol at 2.4 GHz is adopted, the sampling rate is 20 MS/s, the modulation is BPSK, and the signal-to-noise ratio is 5 dB. The RGB diagram of the WiFi signal after time-frequency function conversion is shown in fig. 10.
Third section: simulating a data set;
the simulation data set is composed of preset-mode communication response signals, namely ads-b a/c mode data. This part comprises aircraft communication response signals produced by three waveform generators and collected by an oscilloscope; the preamble pulses are used for SEI, and each signal contains 2 pulses. The center frequency is set to 1030 MHz, the acquisition frequency to 1030 MHz, the modulation is PPM, and the signal-to-noise ratio is about 10 dB.
The ads-b s-mode signal is referred to as the ads-b signal, and the ads-b a/c mode (collision-avoidance) signals are referred to as rm1 and rm2.
Wherein the real data set serves as the known mode for model training of the FTGan-Yolo network, and the simulation data set and the public data set serve as the unknown modes. Specifically, in this embodiment the mixed data set across signal modes contains 4 signal modes in total. During FTGan-Yolo training, 3000 unlabeled pieces of data from the first part are selected as the known mode, and 9000 unlabeled pieces of data from the second and third parts are selected as the unknown modes. For classifier training, 1800 labeled pieces of data are selected from the first part, with 900 pieces each for the training set and the verification set; from the second and third parts, 300 pieces of data are selected for each mode of each emitter, giving 2700 pieces as the verification set.
S2: extracting general RFF features of the mixed data set, and converting the general RFF features into RGB picture features based on cross domain features through a time-frequency function;
S3: Model training is carried out on the FTGan-Yolo network through the RGB picture features, and the FTGan-Yolo network is output after model convergence.
After the FTGan-Yolo network is established, the radiation sources to be identified are identified; the final identification results are shown in fig. 11. The confusion matrix shows that the average accuracy over 12 emitters is about 86%. All currently searchable results perform emitter identification only on single-mode signals; when such results are extended to multi-mode identification, or when the cross-mode methods of the two references "Kuzdeba S.; Robinson J.; Carmack J. Transfer Learning with Radio Frequency Signals. In Proceedings of the 2021 IEEE 18th Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 2021; pp. 1-9." and "Tang H.; Jia K. Discriminative adversarial domain adaptation. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, 3 April 2020; pp. 5940-5947." are applied to unified emitter identification over the 4 kinds of signals in this scheme, they either cannot achieve unified emitter identification at all or reach an accuracy below 50%. As further shown in fig. 11, based on 12 devices, the recognition accuracy on the four mode signals is compared with: method 1, FTGAN based on waveform data; method 2, a cross-mode recognition method based on RGB images, using the image cross-domain recognition method of paper 1 (Tang and Jia, above); and method 3, a signal-format-based method, using paper 2 (Kuzdeba, Robinson, and Carmack, above). The misrecognized categories of the different emitters all aggregate within the other emitters of the mode to which the signal belongs, so that when the clustering device targets recognition of the signal mode, an unsupervised clustering accuracy of 99% can be reached.
Example 5
As shown in fig. 12, a radiation source identification device for FTGan-Yolo based cross-modal communication signals includes at least one processor, a memory communicatively coupled to the at least one processor, and at least one input-output interface communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method for radiation source identification of FTGan-Yolo-based cross-modal communication signals as described in the previous embodiments. The input/output interface may include a display, a keyboard, a mouse, and a USB interface for inputting and outputting data.
Those skilled in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions, and the foregoing program may be stored in a computer readable storage medium, where the program, when executed, performs steps including the above method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read Only Memory (ROM), a magnetic disk or an optical disk, or the like, which can store program codes.
The above-described integrated units of the invention, when implemented in the form of software functional units and sold or used as stand-alone products, may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in essence or a part contributing to the prior art in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.
Claims (9)
1. The radiation source identification method of the cross-mode communication signal based on the FTGan-Yolo is characterized by comprising the following steps of:
a: acquiring a frequency domain-based general RFF characteristic of a radiation source to be identified, and converting the frequency domain-based general RFF characteristic into an RGB picture characteristic based on a cross domain characteristic through a time-frequency function;
b: inputting the RGB picture characteristics into a pre-constructed FTGan-Yolo network to generate identification characteristics;
c: according to the identification characteristics, matching the corresponding radiation sources, and outputting the emitter ID or signal pattern corresponding to the radiation source to be identified;
the FTGan-Yolo network is used for extracting characteristic differences of a known modal radiation source and an unknown modal radiation source and outputting identification characteristics; the pre-construction of the FTGan-Yolo network comprises the following steps:
s1: establishing a mixed data set crossing signal modes; the mixed data set comprises a real data set, a public data set and a simulation data set;
the real data set consists of the ads-b signals after pretreatment; the public data set consists of wifi signals; the analog data set is composed of preset modal communication response signals;
s2: extracting general RFF features of the mixed data set, and converting the general RFF features into RGB picture features based on cross domain features through a time-frequency function;
s3: performing model training on the FTGan-Yolo network through the RGB picture characteristics, and outputting the FTGan-Yolo network after model convergence;
the FTGan-Yolo network comprises an identifiable feature extractor, a generator, a discriminator, and a loss function; the identifiable feature extractor is used for extracting identifying features in the RGB picture features; the generator is for learning a characteristic difference between a known modal radiation source and an unknown modal radiation source; the discriminator is used for judging whether the characteristic is generated by the generator or not;
wherein the loss function of the FTGan-Yolo network is as follows:

L_G = l_g + α·l_trans + β·l_amp + l_smooth, L_D = l_d,

wherein L_G is the loss function of the FTGan-Yolo generator, L_D is the FTGan-Yolo discriminator loss function, α and β are preset parameters, l_g is the basic generator loss function, l_d is the basic discriminator loss function, l_trans is the transfer loss function, l_amp is the amplitude loss function, and l_smooth is the smooth transition loss.
2. The method for identifying a radiation source of a cross-modal communication signal based on FTGan-Yolo of claim 1, wherein the identifiable feature extractor comprises a plurality of CBS modules, ELAN modules and MP1 modules;
the CBS module consists of a convolution layer, a BN layer and a SiLU activation function which are sequentially connected;
the ELAN module is formed by splicing a plurality of CBS modules;
the MP1 module is formed by splicing a plurality of CBS modules and a maximum pooling layer.
3. The method for identifying radiation sources of cross-modal communication signals based on FTGan-Yolo according to claim 1, wherein the mixed data set is divided into a training set and a verification set according to a preset proportion and comprises a real data set, a simulation data set, and a public data set;
wherein the real data set serves as the known mode for model training of the FTGan-Yolo network;
and the simulation data set and the public data set serve as the unknown modes for FTGan-Yolo network model training.
4. The method for radiation source identification of FTGan-Yolo based cross-modal communication signals of claim 1, wherein the basic generator loss function l_g and the basic discriminator loss function l_d are expressed as follows:

l_d = E_u[ BCE(D(ΔF, F_k), ones_like) ] + E_k[ BCE(D(ΔF_g, F_kg), zeros_like) ],
l_g = E_u[ BCE(D(ΔF_g, F_kg), ones_like) ],

wherein E_u computes the expectation of the loss values over the unknown-mode data and E_k computes the expectation of the loss values over the known-mode data; BCE is a binary cross entropy loss function; zeros_like is a feature matrix filled with 0 of the corresponding size, and ones_like is a feature matrix filled with 1 of the corresponding size; D is the discrimination function of FTGan-Yolo, and its label takes the value 1 or 0; G is the generating function of FTGan-Yolo, and G(x) is the generated feature output when the input is x; F_k denotes the RGB picture features of the known mode, F_u the RGB picture features of the unknown mode, and ΔF the difference between the RGB picture features of the known and unknown modes; ΔF_g denotes the generated feature difference, and F_kg the generated known-mode RGB picture features.
5. The method for radiation source identification of FTGan-Yolo based cross-modal communication signals as defined in claim 1, wherein the transfer loss function l_trans is expressed as:

l_trans = E_u[ MSE(ΔF_g, ΔF) + MSE(F_kg, F_k) ],

wherein MSE is the mean square error and E_u computes the expectation of the loss values over the unknown-mode data; ΔF_g denotes the generated feature difference, ΔF the difference between the RGB picture features of the unknown and known modes, F_kg the generated known-mode RGB picture features, F_k the RGB picture features of the known mode, and F_u the RGB picture features of the unknown mode.
6. The method for radiation source identification of FTGan-Yolo based cross-modal communication signals as defined in claim 5, wherein the amplitude loss function l_amp is expressed as:

l_amp = R( L_batch, zeros_like ), L_batch = (1/n) · Σ_{i=1..n} ΔF_g,i,

wherein R is a binary regularization function; zeros_like is a feature matrix filled with 0 of the same size as ΔF_g; L_batch is the batch difference loss value; n denotes the batch size; ΔF_g denotes the generated feature difference, ΔF the difference between the RGB picture features of the known and unknown modes, F_k the RGB picture features of the known mode, and F_u the RGB picture features of the unknown mode.
7. The method for identifying a radiation source of a cross-modal communication signal based on FTGan-Yolo of claim 1, wherein the smooth transition loss l_smooth is expressed as:

l_smooth = E_u[ MSE(ΔF_g,i, ΔF_g,avg) ], ΔF_g,avg = (1/n) · Σ_{i=1..n} ΔF_g,i,

wherein E_u computes the expectation of the loss values over the unknown-mode data and MSE is the mean square error; S = {s_1, ..., s_n} is the sample set in one period, n denotes the batch size, ΔF_g,i denotes the difference generated for the i-th sample of the batch, and ΔF_g,avg denotes the average of the generated differences over the sample set of this period; G(F_u) is the generated feature output when the input is F_u.
8. The method for identifying radiation sources of cross-modal communication signals based on FTGan-Yolo according to claim 1, wherein S3 comprises the steps of:
s31: extracting RGB picture characteristics of the mixed data set;
s32: unsupervised training of the FTGan-Yolo network to transfer the general feature morphology of the unknown modality to the general feature morphology of the known modality;
s33: performing supervised training on the identifiable feature extractor with the known-mode general features until the model converges;
s34: and sending the universal features of all unknown modes to the FTGan-Yolo network for processing, transferring the universal feature forms of all unknown modes to the universal feature forms of known modes, and outputting the FTGan-Yolo network.
9. A radiation source identification device for FTGan-Yolo based cross-modal communication signals comprising at least one processor and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410010436.2A CN117544963B (en) | 2024-01-04 | 2024-01-04 | Method and equipment for identifying radiation source of cross-mode communication signal based on FTGan-Yolo |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410010436.2A CN117544963B (en) | 2024-01-04 | 2024-01-04 | Method and equipment for identifying radiation source of cross-mode communication signal based on FTGan-Yolo |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117544963A CN117544963A (en) | 2024-02-09 |
CN117544963B true CN117544963B (en) | 2024-03-26 |
Family
ID=89784546
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410010436.2A Active CN117544963B (en) | 2024-01-04 | 2024-01-04 | Method and equipment for identifying radiation source of cross-mode communication signal based on FTGan-Yolo |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117544963B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109840287A (en) * | 2019-01-31 | 2019-06-04 | 中科人工智能创新技术研究院(青岛)有限公司 | Neural-network-based cross-modal information retrieval method and device |
CN112347910A (en) * | 2020-11-05 | 2021-02-09 | 中国电子科技集团公司第二十九研究所 | Signal fingerprint identification method based on multi-mode deep learning |
CN112668498A (en) * | 2020-12-30 | 2021-04-16 | 西安电子科技大学 | Method, system, terminal and application for identifying individual intelligent increment of aerial radiation source |
CN116257750A (en) * | 2022-11-09 | 2023-06-13 | 南京大学 | Radio frequency fingerprint identification method based on sample enhancement and deep learning |
CN116258719A (en) * | 2023-05-15 | 2023-06-13 | 北京科技大学 | Flotation foam image segmentation method and device based on multi-mode data fusion |
CN117131436A (en) * | 2023-08-28 | 2023-11-28 | 电子科技大学 | Radiation source individual identification method oriented to open environment |
CN117195031A (en) * | 2022-05-27 | 2023-12-08 | 华东师范大学 | Electromagnetic radiation source individual identification method based on neural network and knowledge-graph dual-channel system |
CN117332117A (en) * | 2023-09-28 | 2024-01-02 | 天津理工大学 | Video clip retrieval method and system based on cross-modal correspondence matching and data set unbiasing |
CN117349726A (en) * | 2023-09-30 | 2024-01-05 | 中科云谷科技有限公司 | Fault diagnosis method, computing device, and readable storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
ATE507537T1 (en) * | 2005-12-16 | 2011-05-15 | Technion Res And Dev Of Foundation Ltd | METHOD AND DEVICE FOR DETERMINING SIMILARITY BETWEEN SURFACES |
2024-01-04: CN application CN202410010436.2A granted as patent CN117544963B (en), status Active
Non-Patent Citations (7)
Title |
---|
An Adaptive Specific Emitter Identification System for Dynamic Noise Domain; Hongyu Yang et al.; IEEE Internet of Things Journal, Vol. 9, Issue 24, 15 December 2022; 2022-08-02; full text *
Discriminative Adversarial Domain Adaptation; Tang Hui et al.; Proceedings of the 34th AAAI Conference on Artificial Intelligence; 2020-04-03; full text *
Multimodal Hierarchical CNN Feature Fusion for Stress Detection; Radhika Kuttala et al.; IEEE Access, Vol. 11; 2023-01-16; full text *
Transfer Learning with Radio Frequency Signals; Scott Kuzdeba et al.; 2021 IEEE 18th Annual Consumer Communications & Networking Conference (CCNC); 2021-03-11; full text *
Radiation source fingerprint feature fusion method based on multi-domain discriminative kernel canonical correlation analysis; Sun Liting et al.; Information Science; 2023-01-06; full text *
Ground-air communication speech enhancement method based on multi-feature fully convolutional networks; Yang Hongyu et al.; Journal of Sichuan University; 2020-03-26; full text *
Deepfake detection method fusing cross-domain multi-scale features; Pang Shuai; China Excellent Master's Theses Electronic Journals Database; 2023-06-06; full text *
Also Published As
Publication number | Publication date |
---|---|
CN117544963A (en) | 2024-02-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107247947B (en) | Face attribute identification method and device | |
CN109063565B (en) | Low-resolution face recognition method and device | |
CN111241291B (en) | Method and device for generating countermeasure sample by utilizing countermeasure generation network | |
CN112269868B (en) | Use method of machine reading understanding model based on multi-task joint training | |
CN111209878A (en) | Cross-age face recognition method and device | |
CN110175248B (en) | Face image retrieval method and device based on deep learning and Hash coding | |
CN116049412B (en) | Text classification method, model training method, device and electronic equipment | |
CN111428557A (en) | Method and device for automatically checking handwritten signature based on neural network model | |
CN112188306B (en) | Label generation method, device, equipment and storage medium | |
CN110688888B (en) | Pedestrian attribute identification method and system based on deep learning | |
CN116226785A (en) | Target object recognition method, multi-mode recognition model training method and device | |
CN112102424A (en) | License plate image generation model construction method, generation method and device | |
CN114821196A (en) | Zero sample image identification method and identification device, medium and computer terminal thereof | |
CN110502989A (en) | Small-sample hyperspectral face recognition method and system | |
CN113849653A (en) | Text classification method and device | |
CN117544963B (en) | Method and equipment for identifying radiation source of cross-mode communication signal based on FTGan-Yolo | |
CN116363712B (en) | Palmprint palm vein recognition method based on modal informativity evaluation strategy | |
CN115132181A (en) | Speech recognition method, speech recognition apparatus, electronic device, storage medium, and program product | |
CN113379594A (en) | Face shape transformation model training, face shape transformation method and related device | |
CN114038035A (en) | Artificial intelligence recognition device based on big data | |
CN112232378A (en) | Zero-order learning method for fMRI visual classification | |
CN113238197A (en) | Radar target identification and data judgment method based on Bert and BiLSTM | |
CN115129861B (en) | Text classification method and device, storage medium and electronic equipment | |
CN116821408B (en) | Multi-task consistency countermeasure retrieval method and system | |
CN117115469B (en) | Training method, device, storage medium and equipment for image feature extraction network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||