CN113435247A - Intelligent identification method, system and terminal for communication interference - Google Patents
Intelligent identification method, system and terminal for communication interference
- Publication number
- CN113435247A (application CN202110541106.2A)
- Authority
- CN
- China
- Prior art keywords
- network
- interference
- sub
- global
- parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/02—Preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/08—Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention belongs to the technical field of communication interference cognition and discloses a method, system and terminal for intelligent identification of communication interference. The intelligent communication interference recognition system comprises: a communication interference signal processing module; a sub-network model building module; and a communication interference type identification module. The method can complete intelligent identification of the communication interference type with the support of only a small sample set, has good identification accuracy and generalization performance, and protects the privacy of the sample data.
Description
Technical Field
The invention belongs to the technical field of communication interference cognition, and particularly relates to a method, system and terminal for intelligent identification of communication interference.
Background
At present, communication interference type identification technology plays a key role in spectrum monitoring and cognitive radio. By performing feature analysis on the interference received by an electronic device, the specific interference pattern can be identified, laying the groundwork for subsequent anti-interference measures. Accurate identification of the interference type is a precondition for effectively applying the corresponding anti-interference method, so this technology is of great significance for improving the anti-interference capability of electronic equipment such as wireless communication systems.
Research on communication interference identification has mainly focused on the selection of characteristic parameters and the design of classifiers; how to extract features and how to improve classifier performance are research hotspots. In recent years deep learning has developed rapidly in many fields, and a considerable amount of work has been done on interference recognition. One approach uses several fixed characteristic parameters and therefore depends strongly on the choice of those parameters and generalizes poorly (Lelinpu. Interference identification technology research and implementation [D]. Xidian University, 2014.). Another study extracts features of interference signals by singular value decomposition and classifies them with a fully connected network; because the extracted feature is single, the types of interference that can be identified are limited (Interference identification based on singular value decomposition and a neural network [J]. Journal of Electronics & Information Technology, 2020, 42.). Lemin et al. proposed a communication interference recognition algorithm based on an SVM optimized with a genetic-immune particle swarm; optimizing the feature combination with this algorithm avoids, to some extent, the randomness and blindness of manual feature selection, but the method still depends strongly on the choice of feature parameters and generalizes poorly (Lemin, Lexindong, Huangxin. Communication interference recognition based on an improved SVM [J]. Modern Electronic Technology, 2016, 39(24): 26-29.). For direct-sequence spread-spectrum systems, an interference identification method based on the fractional-order Fourier transform has been proposed that extracts several time-domain, frequency-domain and fractional-domain features and combines them with a hierarchical decision tree; it has low algorithmic complexity and stable performance, but lacks features capable of distinguishing digitally modulated signals and cannot identify false-target interference (Broadband interference identification method based on fractional-order Fourier transform [J]. Electro-Optic and Control, 2013(10): 106-.). A broadband communication interference signal recognition method based on the spectrogram and a neural network has also been proposed, which uses short-time Fourier transform magnitudes as network input and a double-hidden-layer network for identification; the information-mining capability of such a shallow neural network is limited and its recognition performance is not sufficiently stable (Zhang Chibo, Fan Yaxuan, et al. Communication interference pattern recognition method based on the spectrogram and the neural network [J]. Journal of Terahertz Science and Electronic Information, 2019, 17(06): 959-963.).
One designed network achieves good identification accuracy and a certain transfer capability, but because it uses the conventional deep-learning training method it cannot adapt to the small-sample condition in which samples are insufficient (Research on wireless communication interference signal identification and processing technology based on deep learning [D]. 2020.). Uppal A J et al. proposed a CNN-based wireless signal identification method that extracts the spectrogram and constellation of a signal as network inputs and classifies a variety of wireless signals with high accuracy; however, this method also uses the conventional deep-learning training method and cannot adapt to the situation of insufficient samples (Uppal A J, Height M, Haftel W, et al.).
Through the above analysis, the problems and defects of the prior art are as follows: the recognizable interference types are mainly suppressive interference, research on deceptive interference is scarce, recognition performance is not sufficiently stable, and generalization is poor; most communication interference identification research based on artificial intelligence is carried out under the condition of sufficient samples, and little work addresses the small-sample case; and with a single network model, the privacy and security of interference sample data stored in multiple places cannot be guaranteed.
The difficulty in solving the above problems and defects lies in: selecting appropriate features that highlight the differences between the various interference types while avoiding, as far as possible, the randomness and blindness of manual feature selection; designing a network model and training method that can adapt to the small-sample condition; and designing a distributed network training method that can guarantee data security.
The significance of solving the above problems and defects is as follows: the number of identifiable interference types can be expanded, providing more prior information for subsequent anti-interference measures; identification can be completed under small-sample conditions, which better matches practical situations and reduces the cost of sample collection; and the direct transmission of interference sample data can be avoided, preserving the privacy of the interference data and benefiting information security protection.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method, system and terminal for intelligent identification of communication interference.
The invention is realized in such a way that an intelligent identification method of communication interference comprises the following steps:
the method comprises the steps of intelligently representing received communication interference signals, extracting time-frequency distribution, fractional Fourier transform and a constellation diagram of the communication interference signals as deep network input, so as to highlight differences among different interferences and help network convergence;
building a distributed network, and building a sub-network model based on small sample learning so as to endow the sub-network with the capability of adapting to the small sample condition;
the distributed network is trained through federated learning to obtain a globally optimal output model, and communication interference type recognition is completed, ensuring the privacy and security of the interference data stored in various places.
Further, the intelligent characterization of the received interference signal and the extraction of its time-frequency distribution, fractional Fourier transform and constellation diagram as the deep network input specifically comprise:

for the intelligent representation of the interference signal, the smoothed pseudo Wigner-Ville distribution (SPWVD) of the received interference signal J(t) is first calculated as:

$$\mathrm{SPWVD}_J(t,f)=\int_{-\infty}^{+\infty}h(\tau)\int_{-\infty}^{+\infty}g(u-t)\,J\!\left(u+\frac{\tau}{2}\right)J^{*}\!\left(u-\frac{\tau}{2}\right)\mathrm{d}u\,e^{-j2\pi f\tau}\,\mathrm{d}\tau$$

where $\mathrm{SPWVD}_J(t,f)$ is the SPWVD of the interference signal J(t), h(τ) is the time window function, g(u−t) is the frequency window function, J*(t) is the conjugate of the interference signal J(t), and t and f are the corresponding time and frequency, respectively;

the fractional Fourier transform (FRFT) of the interference signal J(t) is then calculated as:

$$X_{J}(u)=F^{p}\left[J(t)\right](u)=\int_{-\infty}^{+\infty}K_{p}(t,u)\,J(t)\,\mathrm{d}t$$

where $F^{p}$ is the fractional Fourier transform operator and the kernel function $K_{p}(t,u)$ is

$$K_{p}(t,u)=\begin{cases}A_{\alpha}\exp\left[j\pi\left(t^{2}\cot\alpha-2tu\csc\alpha+u^{2}\cot\alpha\right)\right],&\alpha\neq n\pi\\ \delta(t-u),&\alpha=2n\pi\\ \delta(t+u),&\alpha=(2n\pm1)\pi\end{cases}$$

where $A_{\alpha}=\sqrt{1-j\cot\alpha}$, p is the transform order, and α = pπ/2 is the rotation angle of the time-frequency plane;

the FRFT is an expansion of the signal on a set of orthogonal chirp bases; the FRFT of a linear swept-frequency interference signal at a certain order is a δ function, and this focusing property allows linear swept-frequency interference to be well distinguished from other interference signals;

when this feature is extracted, the value of p is adjusted continuously to obtain a fractional-order transform matrix of the interference signal;

an interference constellation diagram is then extracted, which is used for distinguishing standard interference signals from deceptive interference signals;

according to the above feature extraction procedure, the interference signal feature is expressed as:

$$\Gamma=\left[\mathrm{SPWVD}_J(t,f),\;X_{J}(u),\;S_{J}\right]$$

where $\mathrm{SPWVD}_J(t,f)$ is the SPWVD transform of the interference, $X_J(u)$ is the fractional Fourier transform of the interference, and $S_J$ is the constellation diagram of the interference signal.
Further, the building of the distributed network and the building of the sub-network model based on small-sample learning specifically include: introducing a densely connected network (DenseNet) into the sub-network structure based on small-sample learning, and updating the local sub-network parameters with the model-agnostic meta-learning (MAML) method. Let $x_i$ be the output of the i-th layer in a dense block and $H_i(\cdot)$ the nonlinear transformation function of the i-th layer, composed of batch normalization, an activation function and a convolution layer. The network layers within a dense block are densely connected, i.e. the input of the i-th layer is the output of the (i−1)-th layer stacked together with the outputs of all preceding layers, so that $x_i$ is expressed as:

$$x_i=H_i\!\left(\left[x_0,x_1,\ldots,x_{i-1}\right]\right)$$

where [·] denotes concatenation of the feature maps; if $H_i$ has a constant number k of output channels, the i-th layer has $k_0+k\times(i-1)$ input feature maps, where $k_0$ is the number of channels of the input layer and k is also called the growth rate;

the transition layer connects two dense blocks and adjusts the size of the feature maps; it consists of a 1×1 convolution layer and a pooling layer with stride 2; the number of feature maps output by the transition layer is θm, where m is the number of feature maps output by the dense block preceding the transition layer and 0 < θ ≤ 1 is the compression factor;

the network model structure comprises several dense blocks and transition layers; the network input is a 128×128×3 interference-signal feature map, which first passes through a 7×7 convolution layer with stride 2 and a 4×4 max-pooling layer with stride 2 before entering the first dense block; the 3×3 convolution layers with stride 1 inside the dense blocks keep the feature-map size unchanged, and the growth rate of the dense blocks is 8, i.e. each convolution layer uses 8 convolution kernels; the features then pass through three transition layers and two further dense blocks to reach a fully connected layer, and finally the normalized exponential function softmax produces the classification result;

the MAML method is used when the local network parameters are updated; MAML is a small-sample learning method whose goal is to obtain a good initialization of the interference recognition model, on which the training of the next task is completed; MAML divides the communication interference sample set into several N-way, K-shot training and testing tasks to train the recognition model, where N is the number of types the model must identify and K is the number of samples for each type of interference; the samples within each task are further divided into a Support Set and a Query Set;

MAML first initializes the main network parameters $\phi_0$; some samples are selected from the collected communication interference samples to form a training task m, and the main network parameters are copied to obtain the task-specific network $\hat{\phi}_m$ of task m; the task-specific network is optimized once using the Support Set of task m, the loss $l_m(\hat{\phi}_m)$ of the optimized network is then obtained on the Query Set of task m, and the gradient of this loss with respect to $\hat{\phi}_m$ is calculated; using this gradient and the main-network learning rate $\alpha_{meta}$, the main network parameters are updated by the gradient back-propagation algorithm to obtain $\phi_1$:

$$\phi_1=\phi_0-\alpha_{meta}\,\nabla l_m\!\left(\hat{\phi}_m\right)$$

where $\phi_0$ is the initial network parameter, $\phi_1$ is the main network parameter after one update, and $\nabla$ is the gradient operator.
Further, the next training task is then selected and the same update operation is performed on the main network; the specific training steps of the MAML network are as follows:

1) selecting N training tasks and several testing tasks from the communication interference samples;

2) constructing the communication interference recognition main network and initializing its parameters $\phi_0$;

3) performing iterative training on the interference recognition network;

4) optimizing the recognition network with the Support Set of the test task, and evaluating its performance with the Query Set of the test task;

the network uses the cross-entropy function when calculating the loss; this function expresses the distance between the expected probability distribution and the output probability distribution, and minimizing the cross entropy amounts to minimizing the relative entropy between the expected label and the output label; it is expressed as:

$$\mathrm{loss}=-\sum_{i=1}^{N}y_{i}\log\hat{y}_{i}$$

where N is the length of the network output vector, $y_i$ is the actual value and $\hat{y}_i$ is the predicted value.
Further, the iterative training of the interference recognition network specifically includes:

a. selecting a training task m and copying the main network parameters to obtain the task-specific network $\hat{\phi}_m$;

b. using the Support Set of task m and the task learning rate $\alpha_m$, performing one optimization of $\hat{\phi}_m$ and updating it;

c. computing the loss of the optimized $\hat{\phi}_m$ on the Query Set of task m, and calculating the gradient of this loss with respect to $\hat{\phi}_m$;

d. multiplying the gradient obtained in step c by the main-network learning rate $\alpha_{meta}$ and updating $\phi_0$ to obtain $\phi_1$;

e. repeating steps a–d over the training tasks.
Further, the training of the distributed network through federated learning to obtain a global output model and the completion of interference type recognition specifically include: the distributed network architecture for interference identification is composed of a plurality of nodes, each provided with an independent network model and an interference sample database; in the training process, one central node is selected from all the edge nodes to serve as the fusion center, responsible for parameter fusion and for coordinating the sub-networks to complete federated learning;

federated learning first requires local training of the sub-networks: each sub-network uses its local data set for one or more rounds of updates and sends the updated network parameters $w_i$ to the central node over the communication network; the central node aggregates the received sub-network parameters according to a certain aggregation rule to obtain the global parameter w, and then sends the global parameter back to each sub-network for continued training. Federated learning obtains a global loss function at the central node, calculated as:

$$\mathrm{loss}(w)=\frac{\sum_{i=1}^{N}D_{i}\,\mathrm{loss}_{i}(w)}{\sum_{i=1}^{N}D_{i}}$$

where w is the global network parameter, loss is the global loss, $\mathrm{loss}_i$ is the loss of the i-th sub-network under the global parameter and its local sample set, $D_i$ is the size of the local sample set, and N is the total number of sub-networks. The global loss function cannot be obtained directly at the central node; each sub-network loss must be transmitted to the central node over the transmission network;

the final objective of federated learning is to find a global network parameter $w^{*}$ that minimizes the global loss function loss(w):

$$w^{*}=\arg\min_{w}\,\mathrm{loss}(w)$$

a distributed gradient descent method is used to minimize the global loss function; let the local model parameter of each child node be $w_i(t)$, where t = 0, 1, 2, … denotes the training iteration; when t = 0, all sub-network parameters are initialized to the same parameter $w_i(0)$, and when t > 0, $w_i(t)$ is calculated from the parameters of the previous iteration and the local loss function; this gradient descent on the local data set and parameters is called a local update; after several local updates, the central node performs a global aggregation and updates the local parameters at the sub-networks to the weighted average over all sub-networks, obtaining the global parameter:

$$w(t)=\frac{\sum_{i=1}^{N}D_{i}\,w_{i}(t)}{D}$$

where $D_i$ is the sample-set size of the i-th sub-network, $w_i(t)$ is the parameter of the i-th sub-network at time t, and D is the sum of the sizes of all sub-network sample sets;

during each training iteration the sub-network performs a local update, which may be followed by a global aggregation step; let $\tilde{w}_i(t)$ denote the parameter of the sub-network at node i after the possible global aggregation: if no global aggregation is performed in iteration t, then $\tilde{w}_i(t)=w_i(t)$; if global aggregation is performed in iteration t, then $\tilde{w}_i(t)=w(t)$; for the i-th sub-network, the local update rule is:

$$w_{i}(t)=\tilde{w}_{i}(t-1)-\eta\,\nabla\mathrm{loss}_{i}\!\left(\tilde{w}_{i}(t-1)\right)$$

where η is the sub-network learning rate and $\mathrm{loss}_i$ is the loss of the i-th sub-network;

after obtaining the global loss function, the central node takes it as the criterion for judging whether the current global parameter is good, and updates the output parameter:

$$w_{f}=\arg\min_{w\in\{w_{f},\,w(t)\}}\mathrm{loss}(w)$$

where w(t) is the global parameter and $w_f$ is the output parameter.
Further, each child-node network performs τ local update steps followed by one global aggregation, and the final output model parameter is $w_f$; the specific steps of federated learning training are given below:

1) constructing the interference-recognition distributed network and establishing a plurality of sub-networks, the i-th of which is indexed by i; the sub-networks use the MAML network structure, and $w_f$, $w_i(0)$ and $\tilde{w}_i(0)$ are initialized to the same parameter;

2) for the i-th network, acquiring the intelligent interference representation Γ and performing one local update with the MAML method to obtain $w_i(t)$, where t denotes the number of local updates;

3) judging whether the current number of local updates is an integer multiple of τ; if so, performing steps a and b below, otherwise performing step c:

a. all sub-network parameters are sent to the central node, global aggregation is performed, and the global parameter is sent back to the child nodes;

4) repeating 2) to 3) until training is finished, obtaining the final global network parameter $w_f$.
It is a further object of the invention to provide a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
intelligently representing the received communication interference signals, and extracting time-frequency distribution, fractional Fourier transform and a constellation map of the communication interference signals as deep network input;
building a distributed network, and building a sub-network model based on small sample learning;
and training the distributed network through federated learning to obtain a globally optimal output model and complete the identification of the communication interference type.
Another object of the present invention is to provide an intelligent communication interference recognition system for implementing the intelligent communication interference recognition method, the intelligent communication interference recognition system comprising:
the communication interference signal processing module is used for intelligently representing the received communication interference signal and extracting time-frequency distribution, fractional Fourier transform and a constellation map of the communication interference signal as deep network input;
the sub-network model building module is used for building a distributed network and building a sub-network model based on small sample learning;
and the communication interference type identification module is used for training the distributed network through federated learning to obtain a globally optimal output model and complete the identification of the communication interference type.
Another object of the present invention is to provide a terminal, where the terminal is configured to implement the method for intelligently identifying communication interference, and the terminal includes: the system comprises a frequency spectrum monitoring terminal, a cognitive radio terminal and a communication interference cognitive terminal.
By combining all the above technical schemes, the invention has the following advantages and positive effects: the invention extracts several effective features and introduces a DenseNet-based network structure, which enhances the feature-mining capability of the network and improves the interference identification performance and generalization capability; it introduces a sub-network training method based on model-agnostic meta-learning, so that the network can complete the recognition task under small-sample conditions, which better matches practical situations and reduces the cost of sample collection; and it introduces a distributed network architecture trained with federated learning, which guarantees the privacy and security of the interference data.
Drawings
Fig. 1 is a flowchart of an intelligent identification method for communication interference according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of an intelligent identification system for communication interference according to an embodiment of the present invention;
in fig. 2: 1. a communication interference signal processing module; 2. a sub-network model building module; 3. and a communication interference type identification module.
Fig. 3 is a schematic diagram of a simulation experiment result provided by the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In view of the problems in the prior art, the present invention provides a method, a system and a terminal for intelligently identifying communication interference, which are described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the method for intelligently identifying communication interference provided by the present invention includes the following steps:
s101: intelligently representing the received communication interference signals, and extracting time-frequency distribution, fractional Fourier transform and a constellation map of the communication interference signals as deep network input;
s102: building a distributed network, and building a sub-network model based on small sample learning;
s103: and training the distributed network through federated learning to obtain a globally optimal output model and complete the identification of the communication interference type.
Those skilled in the art can also implement the method of intelligently identifying communication interference by using other steps, and the method of intelligently identifying communication interference provided by the present invention in fig. 1 is only one specific embodiment.
As shown in fig. 2, the system for intelligently identifying communication interference provided by the present invention includes:
the communication interference signal processing module 1 is used for intelligently representing the received communication interference signal and extracting time-frequency distribution, fractional Fourier transform and a constellation map of the communication interference signal as deep network input;
the sub-network model building module 2 is used for building a distributed network and building a sub-network model based on small sample learning;
and the communication interference type identification module 3 is used for training the distributed network through federated learning to obtain a globally optimal output model and complete the identification of the communication interference type.
The technical solution of the present invention is further described below with reference to the accompanying drawings.
The intelligent identification method for the communication interference specifically comprises the following steps:
the method comprises the steps that firstly, intelligent representation is carried out on received interference signals, and time frequency distribution, fractional Fourier transform and a constellation diagram of the interference signals are extracted to be used as deep network input;
communication interference can be classified into suppressive interference and deceptive interference according to its generation form and effect. Suppressive interference mainly includes single-tone interference, multi-tone interference, noise frequency-modulation interference, partial-band noise interference, linear swept-frequency interference and the like; deceptive interference mainly includes narrowband random binary-code modulation interference, broadband random binary-code modulation interference and the like.
Further, for the intelligent representation of the interference signal, the smoothed pseudo Wigner-Ville distribution (SPWVD) of the received interference signal J(t) is first calculated as:

$$\mathrm{SPWVD}_J(t,f)=\int_{-\infty}^{+\infty}h(\tau)\int_{-\infty}^{+\infty}g(u-t)\,J\!\left(u+\frac{\tau}{2}\right)J^{*}\!\left(u-\frac{\tau}{2}\right)\mathrm{d}u\,e^{-j2\pi f\tau}\,\mathrm{d}\tau$$

where $\mathrm{SPWVD}_J(t,f)$ is the SPWVD of the interference signal J(t), h(τ) is the time window function, g(u−t) is the frequency window function, J*(t) is the conjugate of the interference signal J(t), and t and f are the corresponding time and frequency, respectively.
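As an illustration of how this time-frequency feature can be computed, the following is a minimal NumPy sketch of a discrete SPWVD; the Hamming windows, their lengths and the FFT size are illustrative assumptions rather than values taken from this description.

```python
import numpy as np

def spwvd(x, h_len=63, g_len=15, n_fft=256):
    """Discrete smoothed pseudo Wigner-Ville distribution (minimal sketch).

    h is the lag-domain window h(tau), g the time-smoothing window g(u);
    window shapes/lengths and n_fft are illustrative assumptions.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    h = np.hamming(h_len)                 # h(tau)
    g = np.hamming(g_len)
    g /= g.sum()                          # normalised time-smoothing window g(u)
    Lh, Lg = h_len // 2, g_len // 2
    tfr = np.zeros((n_fft, N), dtype=complex)
    for n in range(N):
        tau_max = min(n, N - 1 - n, Lh, n_fft // 2 - 1)
        for tau in range(-tau_max, tau_max + 1):
            # time-smoothed instantaneous autocorrelation around sample n
            u_max = min(Lg, n - abs(tau), N - 1 - n - abs(tau))
            acc = 0.0 + 0.0j
            for u in range(-u_max, u_max + 1):
                acc += g[Lg + u] * x[n + u + tau] * np.conj(x[n + u - tau])
            tfr[tau % n_fft, n] = h[Lh + tau] * acc
    # DFT over the lag axis gives the time-frequency plane (magnitude kept)
    return np.abs(np.fft.fft(tfr, axis=0))
```

The triple loop mirrors the integral definition directly and is intended for clarity; it runs in O(N·h_len·g_len) time, so a vectorized implementation would be preferred for long records.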
The fractional Fourier transform (FRFT) of the interference signal J(t) is then calculated as:

$$X_{J}(u)=F^{p}\left[J(t)\right](u)=\int_{-\infty}^{+\infty}K_{p}(t,u)\,J(t)\,\mathrm{d}t$$

where $F^{p}$ is the fractional Fourier transform operator and the kernel function $K_{p}(t,u)$ is

$$K_{p}(t,u)=\begin{cases}A_{\alpha}\exp\left[j\pi\left(t^{2}\cot\alpha-2tu\csc\alpha+u^{2}\cot\alpha\right)\right],&\alpha\neq n\pi\\ \delta(t-u),&\alpha=2n\pi\\ \delta(t+u),&\alpha=(2n\pm1)\pi\end{cases}$$

where $A_{\alpha}=\sqrt{1-j\cot\alpha}$, p is the transform order, and α = pπ/2 is the rotation angle of the time-frequency plane.

The FRFT is an expansion of the signal on a set of orthogonal chirp bases; the FRFT of a linear swept-frequency interference signal at a certain order is a δ function, and this focusing property allows linear swept-frequency interference to be well distinguished from other interference signals.
When this feature is extracted, the value of p is adjusted continuously to obtain a fractional-order transform matrix of the interference signal.
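A direct numerical evaluation of the definition above can serve as a reference implementation. The sketch below discretizes the kernel $K_p(t,u)$ on a symmetric grid; the grid spacing, the normalization constant $A_\alpha$ and the handling of the degenerate orders follow the continuous-time formulas and are illustrative assumptions (fast O(N log N) FRFT algorithms exist but are not shown).

```python
import numpy as np

def frft(x, p, dt=1.0):
    """Fractional Fourier transform of order p by direct kernel discretisation (sketch).

    X_p(u_m) ~= sum_n K_p(t_n, u_m) x(t_n) dt on a symmetric time grid;
    O(N^2), intended as a readable reference rather than a fast implementation.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    alpha = p * np.pi / 2.0
    # degenerate rotation angles: identity (alpha = 2n*pi) or time reversal (alpha = (2n+1)*pi)
    if np.isclose(np.sin(alpha), 0.0):
        return x.copy() if np.cos(alpha) > 0 else x[::-1].copy()
    t = (np.arange(N) - N // 2) * dt          # symmetric time grid t_n
    u = t.copy()                              # output grid u_m
    cot_a = np.cos(alpha) / np.sin(alpha)
    csc_a = 1.0 / np.sin(alpha)
    A_alpha = np.sqrt(1.0 - 1j * cot_a)       # normalisation constant (one common convention)
    U, T = np.meshgrid(u, t, indexing="ij")   # U[m, n] = u_m, T[m, n] = t_n
    K = A_alpha * np.exp(1j * np.pi * (T**2 * cot_a - 2.0 * T * U * csc_a + U**2 * cot_a))
    return (K @ x) * dt

# Sweeping the order p, as described in the text, yields a fractional-order feature matrix;
# the particular order values here are arbitrary placeholders:
# X = np.stack([np.abs(frft(j_t, p)) for p in np.arange(0.1, 1.0, 0.1)])
```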
An interference constellation diagram is then extracted, which is mainly used for distinguishing standard interference signals from deceptive interference signals.
According to the above feature extraction procedure, the interference signal feature can be expressed as:

$$\Gamma=\left[\mathrm{SPWVD}_J(t,f),\;X_{J}(u),\;S_{J}\right]$$

where $\mathrm{SPWVD}_J(t,f)$ is the SPWVD transform of the interference, $X_J(u)$ is the fractional Fourier transform of the interference, and $S_J$ is the constellation diagram of the interference signal.
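To make the three-channel network input concrete, the sketch below stacks the two transform magnitudes and a constellation image into a single 128×128×3 array; the nearest-neighbour resizing, the 2-D histogram rendering of the constellation, the per-channel min-max normalization and the channel order are illustrative assumptions not fixed by the text.

```python
import numpy as np

def resize_to(img, size=128):
    """Nearest-neighbour resize of a 2-D array to (size, size), avoiding extra dependencies."""
    rows = np.linspace(0, img.shape[0] - 1, size).round().astype(int)
    cols = np.linspace(0, img.shape[1] - 1, size).round().astype(int)
    return img[np.ix_(rows, cols)]

def constellation_image(symbols, size=128, span=2.0):
    """Render complex baseband samples as a 2-D histogram (constellation diagram S_J)."""
    hist, _, _ = np.histogram2d(symbols.real, symbols.imag, bins=size,
                                range=[[-span, span], [-span, span]])
    return hist

def build_gamma(spwvd_tf, frft_mat, symbols, size=128):
    """Stack [SPWVD, |FRFT| matrix, constellation] into the network input Gamma."""
    channels = [resize_to(np.abs(spwvd_tf), size),
                resize_to(np.abs(frft_mat), size),
                constellation_image(np.asarray(symbols), size)]
    channels = [(c - c.min()) / (c.max() - c.min() + 1e-12) for c in channels]  # min-max per channel
    return np.stack(channels, axis=-1)      # shape (size, size, 3)
```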
Secondly, building a distributed network and building a sub-network model based on small-sample learning:

a densely connected network (DenseNet) is introduced into the sub-network structure based on small-sample learning, and the local sub-network parameters are updated using the model-agnostic meta-learning (MAML) method.

DenseNet breaks away from the fixed idea of widening or deepening the network: by exploiting feature reuse and bypass connections it greatly compresses the scale of the network parameters, alleviates the vanishing-gradient phenomenon, makes the network easier to converge, and has a good regularizing effect and resistance to overfitting. Its body consists mainly of dense blocks and transition layers. Suppose $x_i$ is the output of the i-th layer in a dense block and $H_i(\cdot)$ is the nonlinear transformation function of the i-th layer, consisting of batch normalization, an activation function and a convolution layer. The network layers within a dense block are densely connected, i.e. the input of the i-th layer is the output of the (i−1)-th layer stacked together with the outputs of all preceding layers, so that $x_i$ can be expressed as:

$$x_i=H_i\!\left(\left[x_0,x_1,\ldots,x_{i-1}\right]\right)$$

where [·] denotes concatenation of the feature maps. If $H_i$ has a constant number k of output channels, the i-th layer has $k_0+k\times(i-1)$ input feature maps, where $k_0$ is the number of channels of the input layer and k, which is generally small, is also called the growth rate. This dense connection pattern not only reduces the number of parameters but also allows every layer to obtain gradient information from the loss function and from the input, improving the information flow and the performance of the network.
The transition layer connects two dense blocks and adjusts the size of the feature maps; it consists of a 1×1 convolution layer and a pooling layer with stride 2. The number of feature maps output by the transition layer is θm, where m is the number of feature maps output by the dense block preceding the transition layer and 0 < θ ≤ 1 is the compression factor.
The network model structure designed by the invention comprises several dense blocks and transition layers. The network input is a 128×128×3 interference-signal feature map, which first passes through a 7×7 convolution layer with stride 2 and a 4×4 max-pooling layer with stride 2 before entering the first dense block; the 3×3 convolution layers with stride 1 inside the dense blocks keep the feature-map size unchanged, and the growth rate of the dense blocks is 8, i.e. each convolution layer uses 8 convolution kernels. The features then pass through three transition layers and two further dense blocks to reach a fully connected layer, and finally the normalized exponential function (softmax) produces the classification result.
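A PyTorch sketch of such a sub-network is given below. The 7×7/stride-2 stem, the 4×4/stride-2 max pooling, the 3×3 stride-1 convolutions inside the dense blocks, the growth rate of 8, the transition layers with compression factor θ and the final fully connected layer follow the description above; the number of layers per dense block, θ = 0.5, average pooling in the transitions, the default class count and the exact block/transition ordering are assumptions, since the text does not fix them.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """Batch norm -> ReLU -> 3x3 convolution with stride 1 (keeps the feature-map size)."""
    def __init__(self, in_ch, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_ch)
        self.conv = nn.Conv2d(in_ch, growth_rate, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(torch.relu(self.bn(x)))

class DenseBlock(nn.Module):
    """Dense connectivity: layer i sees the concatenation of all previous outputs."""
    def __init__(self, in_ch, n_layers, growth_rate):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_ch + i * growth_rate, growth_rate) for i in range(n_layers))
        self.out_ch = in_ch + n_layers * growth_rate

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class Transition(nn.Module):
    """1x1 convolution (channel compression by theta) followed by stride-2 pooling."""
    def __init__(self, in_ch, theta=0.5):
        super().__init__()
        self.out_ch = max(1, int(theta * in_ch))
        self.conv = nn.Conv2d(in_ch, self.out_ch, kernel_size=1, bias=False)
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        return self.pool(self.conv(x))

class InterferenceDenseNet(nn.Module):
    """Sub-network sketch: 3x128x128 interference feature map -> class logits."""
    def __init__(self, n_classes=7, growth_rate=8, block_layers=(4, 4, 4), theta=0.5):
        super().__init__()
        ch = 2 * growth_rate
        self.stem = nn.Sequential(
            nn.Conv2d(3, ch, kernel_size=7, stride=2, padding=3, bias=False),   # 128 -> 64
            nn.MaxPool2d(kernel_size=4, stride=2, padding=1))                   # 64 -> 32
        stages = []
        for n_layers in block_layers:
            block = DenseBlock(ch, n_layers, growth_rate)
            trans = Transition(block.out_ch, theta)
            stages += [block, trans]
            ch = trans.out_ch
        self.stages = nn.Sequential(*stages)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, n_classes))

    def forward(self, x):
        # softmax is applied by the cross-entropy loss during training / at inference time
        return self.head(self.stages(self.stem(x)))
```

At inference time, torch.softmax(logits, dim=1) converts the logits into the class probabilities described above; during training the softmax is folded into the cross-entropy loss.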
The MAML method is used when the local network parameters are updated. MAML is a small-sample learning method whose goal is to obtain a good initialization of the interference recognition model, on which the training of the next task is completed. MAML divides the communication interference sample set into several N-way, K-shot training and testing tasks to train the recognition model, where N is the number of types the model must identify and K is the number of samples for each type of interference; the samples within each task are further divided into a Support Set and a Query Set.
MAML first initializes the main network parameters $\phi_0$. Some samples are selected from the collected communication interference samples to form a training task m, and the main network parameters are copied to obtain the task-specific network $\hat{\phi}_m$ of task m. The task-specific network is optimized once using the Support Set of task m, the loss $l_m(\hat{\phi}_m)$ of the optimized network is then obtained on the Query Set of task m, and the gradient of this loss with respect to $\hat{\phi}_m$ is calculated. Using this gradient and the main-network learning rate $\alpha_{meta}$, the main network parameters are updated by the gradient back-propagation algorithm to obtain $\phi_1$:

$$\phi_1=\phi_0-\alpha_{meta}\,\nabla l_m\!\left(\hat{\phi}_m\right)$$

where $\phi_0$ is the initial network parameter, $\phi_1$ is the main network parameter after one update, and $\nabla$ is the gradient operator.
Subsequently, the next training task is selected and the same update operation is performed on the main network. The specific training steps of the MAML network are as follows:

1) selecting N training tasks and several testing tasks from the communication interference samples;

2) constructing the communication interference recognition main network and initializing its parameters $\phi_0$;
3) performing iterative training on the interference recognition network:

a. selecting a training task m and copying the main network parameters to obtain the task-specific network $\hat{\phi}_m$;

b. using the Support Set of task m and the task learning rate $\alpha_m$, performing one optimization of $\hat{\phi}_m$ and updating it;

c. computing the loss of the optimized $\hat{\phi}_m$ on the Query Set of task m, and calculating the gradient of this loss with respect to $\hat{\phi}_m$;

d. multiplying the gradient obtained in step c by the main-network learning rate $\alpha_{meta}$ and updating $\phi_0$ to obtain $\phi_1$;

e. repeating steps a–d over the training tasks;
4) optimizing the recognition network with the Support Set of the test task, and evaluating its performance with the Query Set of the test task.
The network uses the cross-entropy function when calculating the loss. This function expresses the distance between the expected probability distribution and the output probability distribution, and minimizing the cross entropy amounts to minimizing the relative entropy between the expected label and the output label; it can be expressed as:

$$\mathrm{loss}=-\sum_{i=1}^{N}y_{i}\log\hat{y}_{i}$$

where N is the length of the network output vector, $y_i$ is the actual value and $\hat{y}_i$ is the predicted value.
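The per-task update of steps a–e above can be sketched as a first-order training loop; the text applies the Query Set gradient directly to the main network, which corresponds to a first-order approximation of MAML. The single inner step, the plain SGD outer optimizer, the default learning rates (taken from the simulation settings later in this description) and the task dictionary format are assumptions for illustration.

```python
import copy
import torch
import torch.nn.functional as F

def maml_epoch(main_net, tasks, alpha_m=0.04, alpha_meta=2e-4):
    """One pass over the training tasks with the MAML-style update of steps a-e (sketch).

    Each task is assumed to be {'support': (x_s, y_s), 'query': (x_q, y_q)} tensors.
    """
    meta_opt = torch.optim.SGD(main_net.parameters(), lr=alpha_meta)
    for task in tasks:
        # a. copy the main network parameters to obtain the task-specific network
        task_net = copy.deepcopy(main_net)
        # b. one optimisation step on the Support Set with task learning rate alpha_m
        x_s, y_s = task['support']
        support_loss = F.cross_entropy(task_net(x_s), y_s)
        grads = torch.autograd.grad(support_loss, task_net.parameters())
        with torch.no_grad():
            for p, g in zip(task_net.parameters(), grads):
                p -= alpha_m * g
        # c. loss of the adapted network on the Query Set and its gradient
        x_q, y_q = task['query']
        query_loss = F.cross_entropy(task_net(x_q), y_q)
        query_grads = torch.autograd.grad(query_loss, task_net.parameters())
        # d. update the main network parameters with learning rate alpha_meta
        meta_opt.zero_grad()
        for p_main, g in zip(main_net.parameters(), query_grads):
            p_main.grad = g.clone()          # first-order approximation
        meta_opt.step()
    return main_net
```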
And thirdly, training the distributed network through federated learning to obtain a global output model and complete the identification of the interference type.
The distributed network architecture for interference identification is composed of a plurality of nodes, each of which has an independent network model and an interference sample database. In the training process, one central node is selected from all the edge nodes to serve as the fusion center; it is responsible for parameter fusion and for coordinating the sub-networks to complete federated learning.

Federated learning first requires local training of the sub-networks: each sub-network uses its local data set for one or more rounds of updates and sends the updated network parameters $w_i$ to the central node over the communication network; the central node aggregates the received sub-network parameters according to a certain aggregation rule to obtain the global parameter w, and then sends the global parameter back to each sub-network for continued training. Federated learning obtains a global loss function at the central node, calculated as:

$$\mathrm{loss}(w)=\frac{\sum_{i=1}^{N}D_{i}\,\mathrm{loss}_{i}(w)}{\sum_{i=1}^{N}D_{i}}$$

where w is the global network parameter, loss is the global loss, $\mathrm{loss}_i$ is the loss of the i-th sub-network under the global parameter and its local sample set, $D_i$ is the size of the local sample set, and N is the total number of sub-networks. The global loss function cannot be obtained directly at the central node; each sub-network loss must be transmitted to the central node over the transmission network.

The final objective of federated learning is to find a global network parameter $w^{*}$ that minimizes the global loss function loss(w):

$$w^{*}=\arg\min_{w}\,\mathrm{loss}(w)$$

A distributed gradient descent method is used to minimize the global loss function. Let the local model parameter of each child node be $w_i(t)$, where t = 0, 1, 2, … denotes the training iteration. When t = 0, all sub-network parameters are initialized to the same parameter $w_i(0)$; when t > 0, $w_i(t)$ is calculated from the parameters of the previous iteration and the local loss function. This gradient descent on the local data set and parameters is called a local update. After several local updates, the central node performs a global aggregation and updates the local parameters at the sub-networks to the weighted average over all sub-networks, obtaining the global parameter:

$$w(t)=\frac{\sum_{i=1}^{N}D_{i}\,w_{i}(t)}{D}$$

where $D_i$ is the sample-set size of the i-th sub-network, $w_i(t)$ is the parameter of the i-th sub-network at time t, and D is the sum of the sizes of all sub-network sample sets.

During each training iteration the sub-network performs a local update, which may be followed by a global aggregation step. Let $\tilde{w}_i(t)$ denote the parameter of the sub-network at node i after the possible global aggregation: if no global aggregation is performed in iteration t, then $\tilde{w}_i(t)=w_i(t)$; if global aggregation is performed in iteration t, then $\tilde{w}_i(t)=w(t)$. For the i-th sub-network, the local update rule is:

$$w_{i}(t)=\tilde{w}_{i}(t-1)-\eta\,\nabla\mathrm{loss}_{i}\!\left(\tilde{w}_{i}(t-1)\right)$$

where η is the sub-network learning rate and $\mathrm{loss}_i$ is the loss of the i-th sub-network.

After obtaining the global loss function, the central node takes it as the criterion for judging whether the current global parameter is good, and updates the output parameter:

$$w_{f}=\arg\min_{w\in\{w_{f},\,w(t)\}}\mathrm{loss}(w)$$

where w(t) is the global parameter and $w_f$ is the output parameter.
Suppose each child-node network performs τ local update steps followed by one global aggregation, and the final output model parameter is $w_f$. The specific steps of federated learning training are given below:
1) constructing the interference-recognition distributed network and establishing a plurality of sub-networks, the i-th of which is indexed by i; the sub-networks use the MAML network structure, and $w_f$, $w_i(0)$ and $\tilde{w}_i(0)$ are initialized to the same parameter.

2) for the i-th network, acquiring the intelligent interference representation Γ and performing one local update with the MAML method to obtain $w_i(t)$, where t denotes the number of local updates.

3) judging whether the current number of local updates is an integer multiple of τ; if so, performing steps a and b below, otherwise performing step c:

a. all sub-network parameters are sent to the central node, global aggregation is performed, and the global parameter is sent back to the child nodes;

4) repeating 2) to 3) until training is finished, obtaining the final global network parameter $w_f$.
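The τ-step local-update / global-aggregation cycle of steps 1)–4) can be sketched as follows. The weighted parameter average implements w(t) = Σ D_i w_i(t) / D, and keeping the best global model seen so far as w_f follows the output-parameter update described above; the function signatures, the use of state dicts and the pluggable local-update and global-loss callbacks are illustrative assumptions.

```python
import copy

def aggregate(states, sample_sizes):
    """Weighted average of sub-network parameters: w(t) = sum_i D_i * w_i(t) / D."""
    D = float(sum(sample_sizes))
    global_state = {}
    for key in states[0]:
        global_state[key] = sum((D_i / D) * s[key].float() for s, D_i in zip(states, sample_sizes))
    return global_state

def federated_training(sub_nets, local_update_fn, global_loss_fn, sample_sizes,
                       tau=4, rounds=50):
    """Federated training loop (sketch): tau local updates per sub-network, then one aggregation.

    sub_nets are torch nn.Modules; local_update_fn(net, i) performs one local (e.g. MAML-style)
    update of sub-network i; global_loss_fn(state_dict) evaluates the weighted global loss
    at the fusion centre.
    """
    w_f = copy.deepcopy(sub_nets[0].state_dict())      # output model parameters w_f
    best_loss = float('inf')
    for _ in range(rounds):
        for _ in range(tau):                           # tau local updates at every child node
            for i, net in enumerate(sub_nets):
                local_update_fn(net, i)
        # global aggregation at the central node, then broadcast back to the children
        w_t = aggregate([net.state_dict() for net in sub_nets], sample_sizes)
        for net in sub_nets:
            net.load_state_dict(w_t)
        loss_t = global_loss_fn(w_t)
        if loss_t < best_loss:                         # keep the best global parameters as w_f
            best_loss, w_f = loss_t, copy.deepcopy(w_t)
    return w_f
```

The aggregation interval tau defaults to 4, matching the global aggregation interval used in the simulation below.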
The technical effects of the present invention will be described in detail with reference to experiments.
To verify the validity of the proposed distributed network, simulation experiments were performed. The parameters of the experimental operating environment and the types of interference signal are consistent with the experimental settings of 4.6.1. The number of child nodes is set to 3. The interference-to-noise ratio (INR) of the child-node training sets is set to [-10:2:15] dB, with 100 samples generated for each signal type at each INR; the INR of the test set is set to [-10:2:15] dB, with 400 samples generated for each interference type at each INR. The global aggregation interval is set to 4, the parameters are initialized with Kaiming initialization, the learning rate of the MAML main network is 0.0002, the learning rate of the sub-tasks is 0.04, and the batch size is set to 105. The recognition performance of the distributed network is shown in FIG. 3: when the INR is 2 dB, the recognition rate of each interference type exceeds 90%, and when the INR is 4 dB, the recognition rate of each interference type approaches 100%. Therefore, the method can effectively complete the identification of various communication interference types with the support of only small samples.
It should be noted that the embodiments of the present invention can be realized by hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the apparatus and methods described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided on a carrier medium such as a disk, CD-or DVD-ROM, programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier, for example. The apparatus and its modules of the present invention may be implemented by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., or by software executed by various types of processors, or by a combination of hardware circuits and software, e.g., firmware.
The above description is only for the purpose of illustrating the present invention and the appended claims are not to be construed as limiting the scope of the invention, which is intended to cover all modifications, equivalents and improvements that are within the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. An intelligent identification method for communication interference is characterized in that the intelligent identification method for communication interference comprises the following steps:
intelligently representing the received communication interference signals, and extracting time-frequency distribution, fractional Fourier transform and a constellation map of the communication interference signals as deep network input;
building a distributed network, and building a sub-network model based on small sample learning;
and training the distributed network through federated learning to obtain a globally optimal output model and complete the identification of the communication interference type.
2. The method according to claim 1, wherein the intelligent characterization of the received interference signal and the extraction of its time-frequency distribution, fractional Fourier transform and constellation diagram as the deep network input specifically comprise:

for the intelligent representation of the interference signal, the smoothed pseudo Wigner-Ville distribution (SPWVD) of the received interference signal J(t) is first calculated as:

$$\mathrm{SPWVD}_J(t,f)=\int_{-\infty}^{+\infty}h(\tau)\int_{-\infty}^{+\infty}g(u-t)\,J\!\left(u+\frac{\tau}{2}\right)J^{*}\!\left(u-\frac{\tau}{2}\right)\mathrm{d}u\,e^{-j2\pi f\tau}\,\mathrm{d}\tau$$

where $\mathrm{SPWVD}_J(t,f)$ is the SPWVD of the interference signal J(t), h(τ) is the time window function, g(u−t) is the frequency window function, J*(t) is the conjugate of the interference signal J(t), and t and f are the corresponding time and frequency, respectively;

the fractional Fourier transform (FRFT) of the interference signal J(t) is then calculated as:

$$X_{J}(u)=F^{p}\left[J(t)\right](u)=\int_{-\infty}^{+\infty}K_{p}(t,u)\,J(t)\,\mathrm{d}t$$

where $F^{p}$ is the fractional Fourier transform operator and the kernel function $K_{p}(t,u)$ is

$$K_{p}(t,u)=\begin{cases}A_{\alpha}\exp\left[j\pi\left(t^{2}\cot\alpha-2tu\csc\alpha+u^{2}\cot\alpha\right)\right],&\alpha\neq n\pi\\ \delta(t-u),&\alpha=2n\pi\\ \delta(t+u),&\alpha=(2n\pm1)\pi\end{cases}$$

where $A_{\alpha}=\sqrt{1-j\cot\alpha}$, p is the transform order, and α = pπ/2 is the rotation angle of the time-frequency plane;

the FRFT is an expansion of the signal on a set of orthogonal chirp bases; the FRFT of a linear swept-frequency interference signal at a certain order is a δ function, and this focusing property allows linear swept-frequency interference to be well distinguished from other interference signals;

when this feature is extracted, the value of p is adjusted continuously to obtain a fractional-order transform matrix of the interference signal;

an interference constellation diagram is then extracted for distinguishing standard interference signals from deceptive interference signals;

according to the above feature extraction procedure, the interference signal feature is expressed as:

$$\Gamma=\left[\mathrm{SPWVD}_J(t,f),\;X_{J}(u),\;S_{J}\right]$$

where $\mathrm{SPWVD}_J(t,f)$ is the SPWVD transform of the interference, $X_J(u)$ is the fractional Fourier transform of the interference, and $S_J$ is the constellation diagram of the interference signal.
3. The method according to claim 1, wherein the building of the distributed network and the building of the sub-network model based on small-sample learning specifically comprise: introducing a densely connected network (DenseNet) into the sub-network structure based on small-sample learning, and updating the local sub-network parameters with the model-agnostic meta-learning (MAML) method; letting $x_i$ be the output of the i-th layer in a dense block and $H_i(\cdot)$ the nonlinear transformation function of the i-th layer, composed of batch normalization, an activation function and a convolution layer, the network layers within a dense block are densely connected, i.e. the input of the i-th layer is the output of the (i−1)-th layer stacked together with the outputs of all preceding layers, so that $x_i$ is expressed as:

$$x_i=H_i\!\left(\left[x_0,x_1,\ldots,x_{i-1}\right]\right)$$

where [·] denotes concatenation of the feature maps; if $H_i$ has a constant number k of output channels, the i-th layer has $k_0+k\times(i-1)$ input feature maps, where $k_0$ is the number of channels of the input layer and k is also called the growth rate;

the transition layer connects two dense blocks and adjusts the size of the feature maps; it consists of a 1×1 convolution layer and a pooling layer with stride 2; the number of feature maps output by the transition layer is θm, where m is the number of feature maps output by the dense block preceding the transition layer and 0 < θ ≤ 1 is the compression factor;

the network model structure comprises several dense blocks and transition layers; the network input is a 128×128×3 interference-signal feature map, which first passes through a 7×7 convolution layer with stride 2 and a 4×4 max-pooling layer with stride 2 before entering the first dense block; the 3×3 convolution layers with stride 1 inside the dense blocks keep the feature-map size unchanged, and the growth rate of the dense blocks is 8, i.e. each convolution layer uses 8 convolution kernels; the features then pass through three transition layers and two further dense blocks to reach a fully connected layer, and finally the normalized exponential function softmax produces the classification result;

the MAML method is used when the local network parameters are updated; MAML is a small-sample learning method whose goal is to obtain a good initialization of the interference recognition model, on which the training of the next task is completed; MAML divides the communication interference sample set into several N-way, K-shot training and testing tasks to train the recognition model, where N is the number of types the model must identify and K is the number of samples for each type of interference; the samples within each task are further divided into a Support Set and a Query Set;

MAML first initializes the main network parameters $\phi_0$; some samples are selected from the collected communication interference samples to form a training task m, and the main network parameters are copied to obtain the task-specific network $\hat{\phi}_m$ of task m; the task-specific network is optimized once using the Support Set of task m, the loss $l_m(\hat{\phi}_m)$ of the optimized network is then obtained on the Query Set of task m, and the gradient of this loss with respect to $\hat{\phi}_m$ is calculated; using this gradient and the main-network learning rate $\alpha_{meta}$, the main network parameters are updated by the gradient back-propagation algorithm to obtain $\phi_1$:

$$\phi_1=\phi_0-\alpha_{meta}\,\nabla l_m\!\left(\hat{\phi}_m\right)$$
4. The intelligent identification method of communication interference according to claim 3, characterized in that, subsequently, the next training task is selected to perform the same updating operation on the main network, and the specific training steps of the MAML network are as follows:
1) selecting N training tasks and a plurality of testing tasks from a communication interference sample;
2) constructing a communication interference identification main network and initializing a parameter phi0;
3) Performing iterative training on the interference recognition network;
4) optimizing the identification network by using a Support Set of the test task, and evaluating the performance of the identification network by using a query Set of the test task;
the network uses the Cross Entropy function when calculating the loss; this function expresses the distance between the expected probability distribution and the output probability distribution, and minimizing the cross entropy amounts to minimizing the relative entropy between the expected labels and the output labels, expressed as:
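The expression itself is not reproduced in this text; the standard cross entropy between the expected distribution p and the network output distribution q over the N interference classes, consistent with the description above, is:

```latex
H(p, q) = -\sum_{c=1}^{N} p(c)\,\log q(c)
```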
5. The intelligent communication interference recognition method of claim 4, wherein the iterative training of the interference recognition network specifically comprises:
b. using the Support Set of task m and the learning rate αm of task m, perform one optimization of the task-specific network parameters and update them;
c. compute the loss of the once-optimized task-specific network on the Query Set of task m, and calculate the gradient of this loss;
d. multiply the main-network learning rate αmeta by the gradient obtained in step c and use it to update φ0, obtaining φ1;
e. repeat steps a-d on the remaining training tasks.
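As a concrete illustration of steps a-e, the following is a minimal first-order sketch of the MAML loop in PyTorch. The function maml_train, the task tuple layout and the plain SGD-style parameter updates are assumptions made for the example; the patent's meta-update may also use the full second-order gradient rather than the first-order approximation shown here.

```python
import copy
import torch
import torch.nn.functional as F

def maml_train(main_net, tasks, alpha_m=0.01, alpha_meta=0.001):
    """First-order sketch of steps a-e; tasks yields one N-way, K-shot task
    as (support_x, support_y, query_x, query_y) tensors per iteration."""
    for support_x, support_y, query_x, query_y in tasks:
        # a. copy the main-network parameters phi to obtain a task-specific network
        task_net = copy.deepcopy(main_net)
        task_params = list(task_net.parameters())

        # b. one optimization step on the Support Set with task learning rate alpha_m
        support_loss = F.cross_entropy(task_net(support_x), support_y)
        grads = torch.autograd.grad(support_loss, task_params)
        with torch.no_grad():
            for p, g in zip(task_params, grads):
                p -= alpha_m * g

        # c. loss of the adapted network on the Query Set and its gradient
        query_loss = F.cross_entropy(task_net(query_x), query_y)
        grads = torch.autograd.grad(query_loss, task_params)

        # d. update the main-network parameters with learning rate alpha_meta
        #    (first-order approximation: the query gradient is applied to phi directly)
        with torch.no_grad():
            for p, g in zip(main_net.parameters(), grads):
                p -= alpha_meta * g
        # e. the loop then repeats steps a-d for the next training task
    return main_net
```

The returned main_net plays the role of the initialization parameter from which the later training and test tasks continue.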
6. The intelligent recognition method for communication interference according to claim 1, wherein training the distributed network through federated learning to obtain a global output model and completing the recognition of the interference type specifically includes: the distributed network architecture for interference identification is composed of a plurality of nodes, each node being provided with an independent network model and an interference-sample database; in the training process, one central node is selected from the edge nodes to serve as the fusion center, responsible for parameter fusion and for coordinating the sub-networks to complete federated learning;
federated learning first requires local training of the sub-networks: every sub-network performs one or more rounds of updates on its local data set and sends the updated network parameters wi to the central node over the communication network; the central node aggregates the received sub-network parameters according to an aggregation rule to obtain the global parameter w, and then sends the global parameter back to each sub-network for further training; federated learning can obtain a global loss function at the central node, whose calculation formula is as follows:
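The formula is not rendered in this text; a reconstruction consistent with the symbol definitions in the next clause and with the weighted aggregation used later is:

```latex
loss(w) = \sum_{i=1}^{N} \frac{D_i}{D}\, loss_i(w),
\qquad D = \sum_{i=1}^{N} D_i
```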
wherein w is the global network parameter, loss is the global loss, lossi is the loss of the i-th sub-network under the global parameter on its local sample set, Di is the size of the local sample set, and N is the total number of sub-networks; the global loss function cannot be obtained directly at the central node, so the loss of each sub-network has to be transmitted to the central node over the transmission network;
the final objective of federated learning is to find a global network parameter w* that minimizes the global loss function loss(w):
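Written out, a reconstruction of the omitted expression is:

```latex
w^{*} = \arg\min_{w}\; loss(w)
```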
a distributed gradient-descent method is used to realize the minimization of the global loss function; the local model parameter of each child node is denoted wi(t), where t = 0, 1, 2, … indexes the training iterations; at t = 0 all sub-network parameters are initialized to the same parameter wi(0), and for t > 0, wi(t) is calculated from the parameters of the previous iteration and the local loss function; this gradient descent on the local data set and parameters is called a local update; after several local updates, the central node performs a global aggregation and the local parameters at the sub-networks are updated to the weighted average over all sub-networks, giving the global parameter:
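The aggregation formula referred to above is not shown in this text; reconstructed from the definitions in the following clause, it reads:

```latex
w(t) = \sum_{i=1}^{N} \frac{D_i}{D}\, w_i(t)
```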
wherein Di is the sample-set size of the i-th sub-network, wi(t) is the parameter of the i-th sub-network at time t, and D is the sum of the sizes of all sub-network sample sets;
in each training iteration the sub-network first performs a local update, which may be followed by a global aggregation step; let w̃i(t) denote the parameters of the sub-network at node i after the possible global aggregation: if no global aggregation is performed in iteration t, then w̃i(t) = wi(t); if global aggregation is performed in iteration t, then w̃i(t) = w(t); for the i-th sub-network, the local update rule is as follows:
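The local update rule is omitted from this text; using the notation w̃i(t) above (introduced here for the patent's missing symbol), the standard distributed-gradient-descent form consistent with the surrounding definitions is:

```latex
w_i(t) = \tilde{w}_i(t-1) - \eta \,\nabla loss_i\big(\tilde{w}_i(t-1)\big)
```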
where η is the learning rate of the sub-networks and lossi is the loss of the i-th sub-network;
after obtaining the global loss function, the central node takes the global loss function as a standard for judging whether the current global parameter is good or bad, and updates the output parameter:
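The update itself is not reproduced here; one rule consistent with the description (keep whichever of the previous output parameter and the current global parameter gives the smaller global loss) would be the following, an assumption rather than the patent's exact formula:

```latex
w_f \leftarrow
\begin{cases}
w(t), & loss\big(w(t)\big) < loss\big(w_f\big)\\
w_f, & \text{otherwise}
\end{cases}
```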
wherein w(t) is the global parameter and wf is the output parameter.
7. The intelligent communication interference recognition method of claim 6, wherein the sub-node networks perform a global aggregation after every τ local update steps and the final output model parameter is wf; the specific steps of federated learning training are given below:
1) constructing a distributed interference-recognition network and establishing a plurality of sub-networks, each sub-network using the MAML network structure, with i denoting the i-th sub-network; initializing wf, wi(0) and the corresponding global parameter to the same value;
2) for the i-th network, acquiring the intelligent interference representation γ and performing one local update by using the MAML method, obtaining wi(t), where t denotes the number of local updates;
3) judging whether the current number of local updates is an integer multiple of τ; if so, performing the following steps a and b, and if not, performing step c:
a. all sub-network parameters are sent to the central node, global aggregation is carried out, and the global parameter is sent back to the sub-nodes;
4) repeating 2) to 3) until training is finished, obtaining the final global network parameter wf.
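Pulling claims 6 and 7 together, the following is a minimal sketch of the τ-step local-update / global-aggregation loop in PyTorch; the helper callables local_update and global_loss, the state_dict averaging and the best-loss rule for the output parameter are assumptions made for the example, not the patent's implementation.

```python
import copy

def federated_train(sub_nets, local_loaders, sample_sizes, total_steps, tau,
                    local_update, global_loss):
    """Sketch of the tau-step local-update / global-aggregation loop.

    sub_nets:      list of N sub-network modules sharing one architecture
    local_loaders: list of N local interference-sample loaders, one per node
    sample_sizes:  list of local sample-set sizes D_i
    local_update:  callable(net, loader) performing one local update (e.g. one MAML step)
    global_loss:   callable(state_dict) -> float, global loss used to score w(t)
    """
    D = float(sum(sample_sizes))
    w_f = copy.deepcopy(sub_nets[0].state_dict())      # output parameter w_f
    best = global_loss(w_f)

    for t in range(1, total_steps + 1):
        # step 2): one local update w_i(t) at every sub-node
        for net, loader in zip(sub_nets, local_loaders):
            local_update(net, loader)

        # step 3a): every tau local updates, aggregate at the central node
        if t % tau == 0:
            states = [net.state_dict() for net in sub_nets]
            w_t = {k: sum((D_i / D) * s[k].float()
                          for D_i, s in zip(sample_sizes, states))
                   for k in states[0]}
            for net in sub_nets:                       # send w(t) back to the sub-nodes
                net.load_state_dict(w_t)
            # keep the best global parameter seen so far as the output parameter
            current = global_loss(w_t)
            if current < best:
                best, w_f = current, copy.deepcopy(w_t)

    return w_f   # final global network parameter (step 4)
```

Passing the MAML step of claim 5 as local_update makes each node's local training the meta-learning update described earlier, which is how the two parts of the method fit together.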
8. A computer device, characterized in that the computer device comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to carry out the steps of:
intelligently representing the received communication interference signals, and extracting time-frequency distribution, fractional Fourier transform and a constellation map of the communication interference signals as deep network input;
building a distributed network, and building a sub-network model based on small sample learning;
and training the distributed network through federated learning to obtain a globally optimal output model and complete the identification of the communication interference type.
9. An intelligent communication interference recognition system for implementing the intelligent communication interference recognition method according to any one of claims 1 to 7, wherein the intelligent communication interference recognition system comprises:
the communication interference signal processing module is used for intelligently representing the received communication interference signal and extracting time-frequency distribution, fractional Fourier transform and a constellation map of the communication interference signal as deep network input;
the sub-network model building module is used for building a distributed network and building a sub-network model based on small sample learning;
and the communication interference type identification module is used for training the distributed network through federated learning to obtain a globally optimal output model and complete the identification of the communication interference type.
10. A terminal, wherein the terminal is configured to implement the method for intelligently identifying communication interference according to any one of claims 1 to 7, and the terminal comprises: the system comprises a frequency spectrum monitoring terminal, a cognitive radio terminal and a communication interference cognitive terminal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110541106.2A CN113435247B (en) | 2021-05-18 | 2021-05-18 | Intelligent recognition method, system and terminal for communication interference |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113435247A true CN113435247A (en) | 2021-09-24 |
CN113435247B CN113435247B (en) | 2023-06-23 |
Family
ID=77802662
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110541106.2A Active CN113435247B (en) | 2021-05-18 | 2021-05-18 | Intelligent recognition method, system and terminal for communication interference |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113435247B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200279288A1 (en) * | 2019-03-01 | 2020-09-03 | Mastercard International Incorporated | Deep learning systems and methods in artificial intelligence |
CN110991263A (en) * | 2019-11-12 | 2020-04-10 | 华中科技大学 | Non-invasive load identification method and system for resisting background load interference |
CN111967309A (en) * | 2020-07-03 | 2020-11-20 | 西安电子科技大学 | Intelligent cooperative identification method and system for electromagnetic signals |
CN112132027A (en) * | 2020-09-23 | 2020-12-25 | 青岛科技大学 | Underwater sound signal modulation mode inter-class identification method based on improved dense neural network |
CN112731309A (en) * | 2021-01-06 | 2021-04-30 | 哈尔滨工程大学 | Active interference identification method based on bilinear efficient neural network |
Non-Patent Citations (2)
Title |
---|
CHELSEA FINN ET AL.: "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks", International Conference on Machine Learning, PMLR *
LIU MINGQIAN ET AL.: "A New Method for Sensing Multiple Types of Radar Active Jamming", Journal of Xi'an Jiaotong University *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114021458A (en) * | 2021-11-05 | 2022-02-08 | 西安晟昕科技发展有限公司 | Small sample radar radiation source signal identification method based on parallel prototype network |
CN114021458B (en) * | 2021-11-05 | 2022-11-04 | 西安晟昕科技发展有限公司 | Small sample radar radiation source signal identification method based on parallel prototype network |
CN114154545A (en) * | 2021-12-07 | 2022-03-08 | 中国人民解放军32802部队 | Intelligent unmanned aerial vehicle measurement and control signal identification method under strong mutual interference condition |
CN114205016A (en) * | 2021-12-13 | 2022-03-18 | 南京邮电大学 | Interference suppression method for smart park electric power wireless heterogeneous network |
CN114205016B (en) * | 2021-12-13 | 2023-06-23 | 南京邮电大学 | Interference suppression method for intelligent park electric power wireless heterogeneous network |
CN115296759A (en) * | 2022-07-15 | 2022-11-04 | 电子科技大学 | Interference identification method based on deep learning |
CN115018087A (en) * | 2022-07-26 | 2022-09-06 | 北京融数联智科技有限公司 | Training method and system for multi-party longitudinal logistic regression algorithm model |
CN115018087B (en) * | 2022-07-26 | 2023-05-09 | 北京融数联智科技有限公司 | Training method and system for multipartite longitudinal logistic regression algorithm model |
CN115941082A (en) * | 2022-10-09 | 2023-04-07 | 中国人民解放军军事科学院战争研究院 | Distributed cooperative interference identification method for unmanned aerial vehicle communication system |
CN115941082B (en) * | 2022-10-09 | 2024-06-04 | 中国人民解放军军事科学院战争研究院 | Distributed cooperative interference identification method for unmanned aerial vehicle communication system |
CN117032055A (en) * | 2023-10-10 | 2023-11-10 | 深圳市潼芯传感科技有限公司 | Industrial equipment intelligent control system |
Also Published As
Publication number | Publication date |
---|---|
CN113435247B (en) | 2023-06-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113435247A (en) | Intelligent identification method, system and terminal for communication interference | |
Zhang et al. | An efficient deep learning model for automatic modulation recognition based on parameter estimation and transformation | |
Huang et al. | Incremental extreme learning machine with fully complex hidden nodes | |
Yang et al. | Deep learning aided method for automatic modulation recognition | |
Che et al. | Spatial-temporal hybrid feature extraction network for few-shot automatic modulation classification | |
CN109450834A (en) | Signal of communication classifying identification method based on Multiple feature association and Bayesian network | |
CN112364729A (en) | Modulation identification method based on characteristic parameters and BP neural network | |
CN113452408B (en) | Network station frequency hopping signal sorting method | |
CN111428817A (en) | Defense method for resisting attack by radio signal identification | |
CN110120926A (en) | Modulation mode of communication signal recognition methods based on evolution BP neural network | |
CN111050315B (en) | Wireless transmitter identification method based on multi-core two-way network | |
CN112910811B (en) | Blind modulation identification method and device under unknown noise level condition based on joint learning | |
CN112039820A (en) | Communication signal modulation and identification method for quantum image group mechanism evolution BP neural network | |
CN101783777A (en) | Digital modulation signal recognizing method | |
CN114726692B (en) | SERESESESENet-LSTM-based radiation source modulation mode identification method | |
Yang et al. | One-dimensional deep attention convolution network (ODACN) for signals classification | |
CN114520758A (en) | Signal modulation identification method based on instantaneous characteristics | |
Zhang et al. | A deep learning approach for modulation recognition | |
CN115392285A (en) | Deep learning signal individual recognition model defense method based on multiple modes | |
Ali et al. | Modulation format identification using supervised learning and high-dimensional features | |
Wang et al. | Automatic modulation classification based on CNN, LSTM and attention mechanism | |
Ya et al. | Modulation recognition of digital signal based on deep auto-ancoder network | |
CN117560104A (en) | Construction method of interpretable machine learning-assisted channel model in mixed traffic | |
Huang et al. | Radio frequency fingerprint identification method based on ensemble learning | |
CN115809426A (en) | Radiation source individual identification method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |