CN116047427B - Small sample radar active interference identification method - Google Patents
- Publication number: CN116047427B (application CN202310320140.6A)
- Authority: CN (China)
- Prior art keywords: interference, sample, representing, feature map, network model
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G01S7/36 — Means for anti-jamming, e.g. ECCM, i.e. electronic counter-counter measures
- G01S7/417 — Target characterisation by analysis of echo signals, involving the use of neural networks
- G06V10/44 — Local feature extraction by analysis of parts of the pattern; G06V10/454 — filters integrated into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G06V10/764 — Image recognition using machine-learning classification; G06V10/765 — using rules for classification or partitioning the feature space
- G06V10/82 — Image or video recognition using neural networks
- Y02T10/40 — Engine management systems (automatically assigned cross-sectional tag)
Abstract
The invention discloses a small-sample radar active interference identification method, which comprises the following steps: constructing a plurality of radar active interference signal models and processing them to obtain a reference data set for interference identification; dividing the reference data set into a training set, a validation set and a test set; constructing, on the basis of the metric-learning idea, a network model comprising a feature extraction module and a prototype classification module based on polynomial loss, wherein the feature extraction module comprises a deformable convolution layer and an aggregate-attention block; training the network model with the training set, and saving the optimal network model parameters with the validation set until the network converges; and evaluating the performance of the trained network model with the test set and using the trained network model to identify small-sample radar active interference. The method can accurately extract interference features even when samples are extremely scarce, improves the model's ability to characterize feature-dispersed and weak-feature interference, and improves recognition performance and robustness.
Description
Technical Field
The invention belongs to the technical field of radars, and particularly relates to a small sample radar active interference identification method.
Background
With the continuous development of electronic countermeasure technology, new radar active interference patterns with specific interference effects are continually being proposed. Complex and changeable radar active interference exposes radars to serious electromagnetic threats and prevents them from effectively detecting real targets and their parameters. Therefore, the search for fast and efficient radar active interference recognition algorithms in complex electromagnetic environments has become an important research direction in the radar countermeasure field in recent years.
At present, although typical radar interference identification algorithms based on conventional convolutional neural networks (Convolutional Neural Network, CNN) have made great progress on measured or simulated data sets, research on these algorithms is still at an early stage, which is mainly reflected in two aspects: (1) Although CNN-based recognition algorithms can predict, with high accuracy, test samples of interference types that appeared during the training phase, their recognition accuracy may drop sharply when facing test samples of interference types never seen during training. (2) The success of traditional CNNs in computer vision stems mainly from the use of large labeled datasets to learn model parameters during network training; once sufficient labeled data cannot be obtained, a CNN-based model runs the risk of overfitting, which seriously degrades network performance. In reality, however, it is difficult to obtain the large amount of labeled, high-quality radar interference data required for model training, which directly limits the performance of conventional CNN-based interference recognition algorithms in practical scenarios.
In conclusion, under conditions of extreme sample scarcity, existing radar active interference recognition algorithms have poor recognition accuracy and weak transfer capability for novel interference; meanwhile, they characterize feature-dispersed and weak-feature interference poorly and cannot meet the demands of the complex and changeable electromagnetic environments encountered in practical applications.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a small sample radar active interference identification method. The technical problems to be solved by the invention are realized by the following technical scheme:
a method for small sample radar active interference identification, comprising:
constructing a plurality of radar active interference signal models, and performing a short-time Fourier transform on them to obtain interference time-frequency images; taking the interference time-frequency images as a reference data set for interference identification, and dividing the reference data set into a training set, a validation set and a test set;
constructing an aggregate-attention deformable convolution prototype network model comprising a feature extraction module and a prototype classification module based on the metric-learning idea; the feature extraction module comprises a deformable convolution layer and an aggregate-attention block, and the prototype classification module is a prototype classification module based on polynomial loss;
Training the network model by using the training set, and storing optimal network model parameters by using the verification set until the network converges to obtain a trained network model;
and performing performance evaluation on the trained network model by using the test set, and performing recognition of small sample radar active interference by using the trained network model.
In one embodiment of the invention, the radar active interference signal models include suppression jamming, deception jamming, and composite jamming formed by combining suppression jamming with deception jamming.
In one embodiment of the present invention, the training the network model using the training set includes:
generating a support set and a query set from the training set by adopting an episodic training mechanism with random sample extraction;
extracting features with long-range context information from the support-set and query-set samples by using the deformable convolution layer, to obtain a geometric-enhancement feature map;
carrying out channel-domain weighting on the geometric-enhancement feature map by utilizing the aggregate-attention block, to obtain a channel-enhancement feature map;
and classifying the channel-enhancement feature map by using the prototype classification module to obtain the sample labels.
In one embodiment of the present invention, extracting features with long-range context information from the support-set and query-set samples by using the deformable convolution layer to obtain a geometric-enhancement feature map includes:
obtaining a feature map based on the samples of the support set and the query set;
introducing a set of learnable offsets into the feature map to augment the sampling process on the regular grid, so as to obtain an offset feature map;
and weighting the offset feature map with learnable modulation quantities to obtain the geometric-enhancement feature map.
In one embodiment of the invention, the process of obtaining the geometric-enhancement feature is formulated as:

$$y(p_0)=\sum_{p_n\in\mathcal{R}} w(p_n)\cdot x\big(p_0+p_n+\Delta p_n\big)\cdot \Delta m_n$$

where $y(p_0)$ denotes the geometric-enhancement feature output by the deformable convolution layer, $x$ denotes the feature map, $p_0$ denotes a two-dimensional sampling position on $x$, $w(p_n)$ denotes the weight of the convolution kernel, $\mathcal{R}$ denotes the regular-grid neighborhood of $p_0$, $p_n$ is the integral offset of the convolution operation, $\Delta p_n$ and $\Delta m_n$ respectively denote the learnable offset and modulation in the deformable convolution, and the range of $\Delta m_n$ is $[0,1]$.
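As an illustration of the deformable sampling described above, the following minimal NumPy sketch (not the patent's implementation; the 3×3 kernel, bilinear interpolation at fractional positions, and array layout are assumptions) computes an output map from per-position learnable offsets and modulations:

```python
import numpy as np

def bilinear_sample(x, py, px):
    """Bilinearly sample 2-D map x at a fractional position (py, px); zero outside."""
    H, W = x.shape
    y0, x0 = int(np.floor(py)), int(np.floor(px))
    val = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < H and 0 <= xx < W:
                val += (1 - abs(py - yy)) * (1 - abs(px - xx)) * x[yy, xx]
    return val

def deformable_conv2d(x, w, offsets, mods):
    """y(p0) = sum_n w(p_n) * x(p0 + p_n + dp_n) * dm_n over a k x k regular grid."""
    H, W = x.shape
    k = w.shape[0]
    grid = [(i - k // 2, j - k // 2) for i in range(k) for j in range(k)]  # regular grid R
    y = np.zeros_like(x, dtype=float)
    for p0y in range(H):
        for p0x in range(W):
            acc = 0.0
            for n, (gy, gx) in enumerate(grid):
                dy, dx = offsets[p0y, p0x, n]                 # learnable offset dp_n
                s = bilinear_sample(x, p0y + gy + dy, p0x + gx + dx)
                acc += w.flat[n] * s * mods[p0y, p0x, n]      # modulation dm_n in [0,1]
            y[p0y, p0x] = acc
    return y
```

With zero offsets and unit modulation this reduces to an ordinary (zero-padded) convolution, which is a convenient sanity check.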
In one embodiment of the present invention, performing channel-domain weighting on the geometric-enhancement feature map by using the aggregate-attention block to obtain a channel-enhancement feature map includes:
carrying out global max pooling, global average pooling and global soft pooling on the geometric-enhancement feature map respectively, and fusing the pooled features to obtain the channel-enhancement feature map.
In one embodiment of the present invention, the process of obtaining the channel-enhancement feature map is formulated as:

$$F'=M_c(F)\otimes F$$

$$M_c(F)=\sigma\Big(\mathrm{MLP}\big(F_{\max}\big)+\mathrm{MLP}\big(F_{avg}\big)+\mathrm{MLP}\big(F_{soft}\big)\Big)=\sigma\Big(W_1\big(W_0(F_{\max})\big)+W_1\big(W_0(F_{avg})\big)+W_1\big(W_0(F_{soft})\big)\Big)$$

where $F'$ denotes the channel-enhancement feature map output by the aggregate-attention block, $\otimes$ denotes the channel-wise product, and $F$ denotes the geometric-enhancement feature output by the deformable convolution layer; $M_c(F)$ denotes the output weight matrix, $\sigma$ is the Sigmoid function, $\mathrm{MLP}$ denotes the fully connected layers, and $W_0$ and $W_1$ are the weights of the fully connected layers; $F_{\max}$, $F_{avg}$ and $F_{soft}$ denote the features after global max pooling, global average pooling and global soft pooling, respectively.
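The channel-gating step can be sketched in NumPy as follows (a minimal illustration, not the patent's implementation; the ReLU between the two shared fully connected layers and the channel-reduction shapes of `W0`/`W1` are assumptions):

```python
import numpy as np

def soft_pool(f):
    """Global soft pooling: softmax-weighted average over spatial positions, per channel."""
    flat = f.reshape(f.shape[0], -1)                      # (C, H*W)
    e = np.exp(flat - flat.max(axis=1, keepdims=True))
    w = e / e.sum(axis=1, keepdims=True)
    return (w * flat).sum(axis=1)                         # (C,)

def aggregate_attention(F, W0, W1):
    """Fuse max-, average- and soft-pooled channel descriptors into a sigmoid gate."""
    C = F.shape[0]
    f_max = F.reshape(C, -1).max(axis=1)
    f_avg = F.reshape(C, -1).mean(axis=1)
    f_soft = soft_pool(F)
    mlp = lambda v: W1 @ np.maximum(W0 @ v, 0.0)          # shared 2-layer MLP (ReLU assumed)
    logits = mlp(f_max) + mlp(f_avg) + mlp(f_soft)
    Mc = 1.0 / (1.0 + np.exp(-logits))                    # sigmoid weight per channel
    return Mc[:, None, None] * F                          # channel-wise reweighting of F
```

Because the gate lies in (0, 1), each channel of the output is an attenuated copy of the corresponding input channel.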
In one embodiment of the present invention, classifying the channel-enhancement feature map by using the prototype classification module includes:
calculating, in the channel-enhancement feature space, the distance from each query-set sample to each support-set class prototype, and outputting the class label of the prototype with the minimum distance to the query sample, so as to classify the query sample and obtain its label.
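The prototype nearest-neighbor rule above can be sketched as follows (a minimal NumPy illustration under the prototypical-network convention that a class prototype is the mean support embedding; Euclidean distance is used as in the PN baseline):

```python
import numpy as np

def prototypes(support_feats, support_labels, n_classes):
    """Class prototype = mean embedding of that class's support samples."""
    return np.stack([support_feats[support_labels == k].mean(axis=0)
                     for k in range(n_classes)])

def classify(query_feats, protos):
    """Assign each query sample to the class of the nearest (Euclidean) prototype."""
    d = np.linalg.norm(query_feats[:, None, :] - protos[None, :, :], axis=2)
    return d.argmin(axis=1)
```

Usage: embed the support and query samples with the feature extractor, build prototypes from the support embeddings, then call `classify` on the query embeddings.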
In one embodiment of the present invention, the storing the optimal network model parameters using the validation set includes:
after each training episode is completed, the corresponding polynomial loss function is calculated and the parameters of the network model are updated accordingly; the parameters achieving the best validation performance are saved as the optimal network model parameters.
In one embodiment of the invention, the polynomial loss function is expressed as:

$$\mathcal{L}_{poly}=\frac{1}{N_q}\sum_{j=1}^{N_q}\Big[-\log p\big(y_j\mid x_j,S\big)+\epsilon_1\big(1-p\big(y_j\mid x_j,S\big)\big)\Big]$$

with

$$p\big(y=k\mid x_j,S\big)=\frac{\exp\big(-d(f(x_j),c_k)\big)}{\sum_{k'=1}^{C}\exp\big(-d(f(x_j),c_{k'})\big)},\qquad k\in\{1,\dots,C\}$$

where $C$ denotes the total number of categories, $k$ denotes a certain category, $Q$ denotes the query set, $\epsilon_1$ is the first polynomial coefficient, $p(y=k\mid x_j,S)$ denotes the probability distribution derived from the distances of a query-set sample to the support-set prototypes, $S$ denotes the support set, $x_j$ denotes the $j$-th query-set sample, $y_j$ denotes the true label of the $j$-th query-set sample, $N_q$ denotes the number of query-set samples, $f(\cdot)$ denotes the feature extractor, $c_k$ denotes the prototype of category $k$, and $d(\cdot,\cdot)$ denotes the Euclidean distance.
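A compact NumPy sketch of this loss (assuming the Poly-1 form of the polynomial loss applied to a softmax over negative query-to-prototype distances; with `eps1 = 0` it reduces to plain cross entropy):

```python
import numpy as np

def poly1_loss(neg_dists, labels, eps1=1.0):
    """Poly-1 loss: mean of -log(p_t) + eps1 * (1 - p_t) over the query set."""
    z = neg_dists - neg_dists.max(axis=1, keepdims=True)   # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    pt = p[np.arange(len(labels)), labels]                 # probability of the true class
    return float(np.mean(-np.log(pt) + eps1 * (1.0 - pt)))
```

The extra `eps1 * (1 - p_t)` term raises the gradient contribution of low-confidence samples, which is the motivation for preferring it over plain cross entropy here.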
The invention has the beneficial effects that:
the invention provides a small sample radar active interference identification method based on a measurement learning-based small sample learning framework, which is capable of accurately extracting interference characteristics under extremely scarce samples, improving the characterization capability of a model on characteristic dispersion and weak characteristic interference, improving identification performance and robustness, and meeting various electromagnetic environment requirements of complexity and variability in practical application.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
Fig. 1 is a schematic flow chart of a small sample radar active interference identification method provided by an embodiment of the invention;
FIG. 2 is a block diagram of an aggregate attention-based variable convolution prototype network provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of a deformable convolution layer provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of an aggregate attention block provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a prototype classification module provided by an embodiment of the invention;
FIG. 6 shows time-frequency images of the 50 classes of interference in the interference data set constructed by the present invention;
FIG. 7 is a schematic diagram of the partitioning strategy of the present invention for an interference data set;
FIG. 8 is a schematic diagram of the feature visualization results of the proposed algorithm and the conventional FSL algorithm of the present invention;
FIG. 9 is a visualization of the simulation-test t-SNE clustering results, wherein (a) is PN, (b) is RN, (c) is CMN, (d) is DN4, (e) is CAN, and (f) is A²-DCPNet.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
Example 1
As a new deep learning paradigm, few-shot learning (FSL) has proven to be one of the most effective approaches to the small-sample image recognition problem. On the one hand, FSL can address the insufficient generalization and poor adaptability to new tasks of traditional neural network models; on the other hand, it can alleviate the severe performance degradation caused by overfitting when samples are scarce. The prototype network (Prototypical Network, PN) takes the mean of the feature vectors of each class of support-set samples as the class prototype in the metric space and applies the nearest-neighbor idea to assign each query sample to the prototype with the smallest Euclidean distance. In contrast to such non-parametric metrics, the Relation Network (RN) trains a neural network to obtain a learnable nonlinear similarity function that replaces manually defined distance metrics (such as Euclidean or cosine distance) for identifying query samples. From the viewpoint of second-order statistics, the covariance metric network (Covariance Metric Network, CMN) implements class characterization and distance measurement by constructing a covariance matrix between the feature vectors of each sample; it contains two key modules, a local covariance representation and a covariance metric: the first extracts rich feature representations, and the second measures the relationship between query samples and each category by computing their distribution consistency. The deep nearest neighbor neural network (DN4) classifies query samples directly from the local features of the original image, measuring transferable local features with image-to-class local descriptors, i.e., by comparing the similarity between the local descriptors of the input image and those of each class. The cross-attention network (Cross Attention Network, CAN) obtains feature vectors of the support and query samples through a feature extractor, and then uses a cross-attention module to generate cross-attention between query and support samples so as to learn more discriminative features.
To address the sample-scarcity problem faced by radar active interference recognition, this embodiment adopts a metric-learning-based few-shot learning framework; and to address the poor ability of existing few-shot learning methods to characterize discrete and weak features in interference time-frequency images, it proposes a small-sample radar active interference recognition method based on an aggregate-attention deformable convolution prototype network.
Specifically, referring to fig. 1, fig. 1 is a flow chart of a small sample radar active interference identification method according to an embodiment of the present invention, which mainly includes three stages of constructing a data set and a network model, training a network, and testing the network. These three stages are described in sequence in detail below.
1. Modeling data sets and networks
Step 1: constructing a plurality of radar active interference signal models, and performing a short-time Fourier transform on them to obtain interference time-frequency images; the interference time-frequency images are used as the reference data set for interference identification, and the reference data set is divided into a training set, a validation set and a test set.
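The time-frequency conversion in this step can be sketched with a plain NumPy short-time Fourier transform (a minimal illustration; the Hann window, frame/hop sizes, and the linear-FM test signal are assumptions, not the patent's parameters):

```python
import numpy as np

def stft_magnitude(sig, n_fft=128, hop=32):
    """Magnitude short-time Fourier transform (Hann window) as a time-frequency image."""
    win = np.hanning(n_fft)
    frames = np.array([sig[s:s + n_fft] * win
                       for s in range(0, len(sig) - n_fft + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1)).T   # (freq_bins, time_frames)

# hypothetical linear-FM sweep standing in for one jamming snippet
t = np.arange(4096) / 4096.0
img = stft_magnitude(np.cos(2 * np.pi * (200 * t + 600 * t ** 2)))
```

Each resulting `img` is one time-frequency image; stacking them per interference class yields the reference data set.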
In this embodiment, the radar active interference signal models include suppression jamming, deception jamming, and composite jamming composed of suppression and deception jamming.
Specifically, the suppression interference mainly includes the following:
1) Noise amplitude modulation interference (Noise Amplitude Modulation Jamming, NAMJ)
Noise amplitude modulation interference refers to an interference signal whose carrier frequency remains unchanged by modulating amplitude information of a carrier signal with a noise signal. The mathematical expression of noise amplitude modulation interference is:
$$J_{NAMJ}(t)=\big[U_0+U_n(t)\big]\cos\big(2\pi f_j t+\varphi\big)$$

where $J_{NAMJ}(t)$ denotes the noise amplitude-modulation jamming; the modulation noise $U_n(t)$ is a zero-mean generalized stationary random process with variance $\sigma_n^2$, typically a Gaussian white noise signal; $\varphi$ is a random variable uniformly distributed on $[0,2\pi)$ and independent of $U_n(t)$; and $U_0$, $f_j$ are constants.
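A NAMJ waveform of this form can be generated with the following sketch (parameter values such as the sampling rate, carrier frequency and noise level are illustrative assumptions):

```python
import numpy as np

def namj(fs=10_000.0, dur=0.1, U0=1.0, fj=1_000.0, sigma=0.3, seed=None):
    """Noise AM jamming: J(t) = [U0 + Un(t)] * cos(2*pi*fj*t + phi)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    Un = sigma * rng.standard_normal(t.size)   # zero-mean Gaussian modulation noise
    phi = rng.uniform(0.0, 2 * np.pi)          # uniform phase, independent of Un(t)
    return (U0 + Un) * np.cos(2 * np.pi * fj * t + phi)
```

Setting `sigma = 0` recovers a plain carrier, a useful check that only the amplitude is being modulated.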
2) Noise FM interference (Noise Frequency Modulation Jamming, NFMJ)
Noise fm interference refers to an interference signal whose amplitude remains unchanged by modulating the frequency information of a carrier signal with a noise signal. The mathematical expression of noise fm interference is:
$$J_{NFMJ}(t)=U_j\cos\Big(2\pi f_j t+2\pi K_{FM}\int_0^{t}u(\tau)\,d\tau+\varphi\Big)$$

where $J_{NFMJ}(t)$ denotes the noise frequency-modulation jamming; the modulation noise $u(t)$ is a zero-mean generalized stationary random process with variance $\sigma_n^2$, typically a Gaussian white noise signal; $\varphi$ is uniformly distributed on $[0,2\pi)$ and independent of $u(t)$; and $U_j$, $f_j$, $K_{FM}$ are constants denoting the amplitude, center frequency and chirp rate of the noise FM jamming, respectively.
3) Noise product interference (Noise Product Jamming, NPJ)
The noise product jamming is obtained by multiplying, in the time domain, the radar signal intercepted by the jammer with a noise signal, and transmitting the result at a large jamming-to-signal ratio (Jamming to Signal Ratio, JSR) so as to submerge the real target echo. Its mathematical expression is:

$$J_{NPJ}(t)=s(t)\cdot n(t)$$

where $J_{NPJ}(t)$ denotes the noise product jamming, $s(t)$ is the intercepted radar signal, and $n(t)$ is a Gaussian white noise signal.
4) Noise convolution interference (Noise Convolution Jamming, NCJ)
The noise convolution interference is performed by convolving the intercepted radar signal and the noise signal in the time domain, so that a suppression effect can be formed on the real target echo in the time domain and the frequency domain at the same time, and the mathematical expression is as follows:
$$J_{NCJ}(t)=s(t)\otimes n(t)$$

where $J_{NCJ}(t)$ denotes the noise convolution jamming, $s(t)$ is the intercepted radar signal, $n(t)$ is a Gaussian white noise signal, and $\otimes$ denotes the convolution operator.
5) Multi-point interference (Multi-Point Frequency Jamming, MPFJ)
The multi-point frequency signal is formed by directly superposing a plurality of single-frequency signals in a time domain, and multi-point frequency interference is generated by high-power transmission of an interference machine, and the mathematical expression is as follows:
$$J_{MPFJ}(t)=\sum_{i=1}^{N}U_i\cos\big(2\pi f_i t+\varphi_i(t)\big)$$

where $J_{MPFJ}(t)$ denotes the multi-point frequency jamming, $N$ denotes the number of single-point frequency components, and $U_i$, $f_i$ and $\varphi_i(t)$ denote the amplitude, carrier frequency and phase-modulation function of the $i$-th single-frequency jamming signal, respectively.
6) Swept frequency interference (Sweep Frequency Jamming, SFJ)
Sweep interference means that the interference signal performs periodic frequency scanning in a wider interference frequency band, so as to effectively cover the target echo signal. Compared with noise interference, sweep frequency interference can realize more uniform interference frequency spectrum and more efficient utilization of interference energy in interference bandwidth, so that overload phenomenon of an adversary radar receiver is more likely to occur. Generally, the swept-frequency interference can be classified into sine-wave Modulated swept-frequency interference (SFJ 1), saw-tooth Modulated swept-frequency interference (SFJ 2), and Trapezoidal-wave swept-frequency interference (SFJ 3). Furthermore, the swept frequency interference can be mathematically modeled as:
$$J_{SFJ}(t)=U_j\cos\Big(2\pi f_j t+2\pi K\int_0^{t}v(\tau)\,d\tau\Big)$$

where $J_{SFJ}(t)$ denotes the swept-frequency jamming, and $U_j$, $f_j$, $K$ and $v(t)$ denote the amplitude, frequency, modulation factor and frequency-modulation function of the jamming signal, respectively.
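The three sweep variants can be sketched by swapping the periodic frequency-modulation profile (a minimal illustration; the parameter values and the crude trapezoid profile are assumptions):

```python
import numpy as np

def sfj(fs=100_000.0, dur=0.02, Uj=1.0, f0=5_000.0, bw=20_000.0, Tm=5e-3, shape="saw"):
    """Swept-frequency jamming: cosine with a periodically modulated instantaneous frequency."""
    t = np.arange(int(fs * dur)) / fs
    frac = (t / Tm) % 1.0
    if shape == "sine":                           # sine-modulated sweep (SFJ1)
        fm = 0.5 * (1.0 + np.sin(2 * np.pi * frac))
    elif shape == "saw":                          # sawtooth-modulated sweep (SFJ2)
        fm = frac
    else:                                         # crude trapezoid-like profile (SFJ3)
        fm = np.clip(2.0 * frac, 0.0, 1.0)
    inst_f = f0 + bw * fm                         # instantaneous frequency in [f0, f0+bw]
    phase = 2.0 * np.pi * np.cumsum(inst_f) / fs  # numeric approximation of the phase integral
    return Uj * np.cos(phase)
```

The cumulative sum stands in for the phase integral in the expression above, so the instantaneous frequency sweeps the band `[f0, f0 + bw]` once per modulation period `Tm`.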
Further, the spoofing interference mainly includes the following:
1) Dense decoy interference (Dense False Target Jamming, DFTJ)
Dense false-target jamming is generally generated by full-pulse sampling with delayed superposition. Specifically, the jammer delays and superposes the intercepted radar signal copy by copy to generate dense false-target jamming; this avoids the problem that, if the jammer simply forwarded the intercepted signals in sequence, the excessive forwarding delay would make the false targets entering the radar receiver too sparse. The time-domain expression of dense false-target jamming is:

$$J_{DFTJ}(t)=\sum_{i=1}^{M}A_i\,s(t-\tau_i)$$

where $J_{DFTJ}(t)$ denotes the dense false-target jamming, $s(t)$ denotes the radar signal intercepted by the jammer, $A_i$ and $\tau_i$ denote the amplitude and superposition delay of the $i$-th jamming component, respectively, and $M$ denotes the number of times the jammer superposes the jamming signal.
2) Intermittent sampling forwarding interference (Interrupted Sampling and Forwarding Jamming, ISFJ)
The intermittent sampling forwarding interference is that an interference machine carries out rapid slicing sampling, storage and forwarding on an intercepted radar signal based on a DRFM technology to form an interference signal similar to a target, and the interference signal covers a real target in a time domain and a frequency domain, so that the detection and tracking of the target are seriously affected. The time domain expression of intermittent sample-and-forward interference can be expressed as:
$$J_{ISFJ}(t)=p(t)\,s(t-\tau),\qquad p(t)=\sum_{n=0}^{N-1}\mathrm{rect}\!\left(\frac{t-nT_s}{\tau}\right)$$

where $J_{ISFJ}(t)$ denotes the intermittent sampling-forwarding jamming, $s(t)$ denotes the radar signal intercepted by the jammer, $\tau$ denotes the pulse width of the slice pulse, $N$ denotes the number of slice pulses, $T_s$ denotes the slice repetition period, and $\mathrm{rect}(\cdot)$ denotes the rectangular gate; each sampled slice is forwarded with a delay of one slice width.
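The sample-then-forward timing can be sketched on discrete samples as follows (a minimal illustration; the assumption here is that each slice is retransmitted immediately after its sampling window, with slice length, count and repetition period as free parameters):

```python
import numpy as np

def isfj(s, slice_len, n_slices, period):
    """Intermittent sampling-forwarding: each sampled slice is retransmitted right after it."""
    out = np.zeros_like(s, dtype=float)
    for n in range(n_slices):
        start = n * period
        sl = s[start:start + slice_len]     # sample one slice of the intercepted pulse
        fwd = start + slice_len             # forward right after the sampling window
        out[fwd:fwd + len(sl)] += sl
    return out
```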
3) Intermittent sampling repeat interference (Interrupted Sampling and Repeating Jamming, ISRJ)
Intermittent sampling repeat jamming differs markedly from intermittent sampling-forwarding jamming in that, after sampling a segment of the radar signal, the jammer repeatedly forwards the current sampled segment a preset number of times, then samples the next short segment and repeats it in turn; this process continues until the radar signal ends. The time-domain expression of intermittent sampling repeat forwarding is:
Wherein, the liquid crystal display device comprises a liquid crystal display device,representing intermittent sampling repeat forwarding interference,/->Radar signal representing the interception of jammers, +.>Representing the pulse width of the slice pulse, < >>Representing the number of slice pulses, +.>Indicating the number of times the jammer repeatedly forwards the slice pulse.
4) Sample pulse interference (Sample Pulse Forwarding Jamming, SPFJ)
Sample pulse jamming, as a novel self-defense jamming pattern, satisfies the transmit-receive isolation requirement by reducing the time the jammer spends sampling the radar signal. Its modulation principle is as follows: the jammer first uses DRFM to store the leading edge of the intercepted radar signal, i.e., the pulse leading-edge signal, and then repeatedly forwards it to obtain the sample pulse-forwarding jamming. The time-domain expression of sample pulse-forwarding jamming is:

$$J_{SPFJ}(t)=\sum_{m=1}^{M}\mathrm{rect}\!\left(\frac{t-m\tau}{\tau}\right)s(t-m\tau)$$

where $J_{SPFJ}(t)$ denotes the sample pulse jamming, $s(t)$ denotes the radar signal intercepted by the jammer, $\tau$ denotes the pulse width of the sampling pulse (the stored leading edge), and $M$ denotes the number of repeated forwardings by the jammer.
5) Multi-decoy interference (Multiple False Target Jamming, MFTJ)
Multi-false-target jamming is usually formed by the jammer modulating the intercepted radar signal and forwarding it sequentially. Its significant differences from dense false-target jamming are: i) within one jamming period, the number of false targets generated by multi-false-target jamming is clearly smaller than that of dense false-target jamming; ii) multi-false-target jamming adopts a sequential forwarding strategy, whereas dense false-target jamming adopts a delay-superposition repeated-forwarding strategy, so the false targets formed by multi-false-target jamming are noticeably sparser; iii) multi-false-target jamming is modulated with high fidelity, and the quality of the generated false targets is clearly superior to that of dense false-target jamming, making it harder for the radar to distinguish true targets from false ones. The time-domain expression of multi-false-target jamming is:

$$J_{MFTJ}(t)=\sum_{m=1}^{M}A_m\,s(t-mT)\,e^{j2\pi f_{d,m}t}$$

where $J_{MFTJ}(t)$ denotes the multi-false-target jamming, $s(t)$ denotes the radar signal intercepted by the jammer, $T$ denotes the period with which the jammer forwards the jamming signal, $M$ denotes the number of repeated forwardings, and $A_m$ and $f_{d,m}$ denote the amplitude and Doppler information of the $m$-th jamming signal, respectively.
6) Comb spectrum modulation interference (Comb Spectrum Modulation Jamming, CSJ)
In general, comb spectrum modulation interference is coherent spoofing interference generated by an jammer modulating the time domain product of a comb spectrum signal (Comb Spectrum Signal, CSS) and an intercepted radar signal, which is essentially a multi-component frequency-shifted interference. The time domain expression of comb spectrum modulation interference is:
$$J_{CSJ}(t)=s(t)\cdot c(t),\qquad c(t)=\sum_{i=1}^{N}A_i\,e^{j2\pi f_i t}$$

where $J_{CSJ}(t)$ denotes the comb-spectrum modulation jamming, $s(t)$ denotes the radar signal intercepted by the jammer, $c(t)$ denotes the comb-spectrum signal, $N$ denotes the number of comb-spectrum components, and $A_i$ and $f_i$ denote the amplitude and carrier frequency of each comb-spectrum component, respectively.
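Since comb-spectrum modulation is a time-domain product, it acts as a multi-component frequency shift of the intercepted signal; a minimal sketch (complex baseband representation is an assumption of this illustration):

```python
import numpy as np

def csj(s, fs, freqs, amps):
    """Comb-spectrum jamming: time-domain product of the intercepted signal and a comb signal."""
    t = np.arange(len(s)) / fs
    comb = sum(a * np.exp(2j * np.pi * f * t) for a, f in zip(amps, freqs))
    return s * comb                    # each comb component shifts s(t) by its frequency
```

Multiplying a single tone by a one-component comb shifts its spectral line by the comb frequency, which shows the frequency-shifting nature directly.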
In general, composite interference mainly includes the modes "suppression + suppression", "suppression + deception" and "deception + deception". Among these, "suppression + deception" has both "suppression" and "deception" characteristics, can obtain a better jamming effect in practical applications, and is widely used in electronic countermeasures. Therefore, this embodiment mainly analyzes and studies composite interference of the "suppression + deception" type. As a preferred implementation, this embodiment combines 7 kinds of suppression jamming with 5 kinds of deception jamming to generate 35 kinds of composite jamming.
The invention applies the short-time Fourier transform to the 50 interference signals above to obtain interference time-frequency images, which serve as the reference data set for interference identification.
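The time-frequency transform step can be illustrated with a hand-rolled STFT. The window type, FFT size, and hop length below are illustrative assumptions, not the patent's settings, and the single tone stands in for an interference signal:

```python
import numpy as np

def stft_image(x, nfft=128, hop=64):
    """Magnitude short-time Fourier transform of a 1-D complex signal.
    Rows of the returned image are frequency bins, columns are time frames."""
    win = np.hanning(nfft)
    frames = [x[i:i + nfft] * win for i in range(0, len(x) - nfft + 1, hop)]
    spec = np.fft.fftshift(np.fft.fft(np.asarray(frames), axis=1), axes=1)
    return np.abs(spec).T

t = np.arange(1024) / 1024.0
sig = np.exp(2j * np.pi * 100 * t)   # a single tone as a toy "interference"
img = stft_image(sig)                # 2-D time-frequency image
```

Each column is the spectrum of one windowed segment; stacking the columns yields the time-frequency image that the network consumes.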
Step 2: and constructing a network model comprising a feature extraction module and a prototype classification module based on the metric learning idea.
Specifically, referring to fig. 2, fig. 2 is a block diagram of the aggregate-attention deformable-convolution prototype network according to an embodiment of the present invention: the feature extraction module comprises a deformable convolution layer and an aggregate attention block, and the prototype classification module is a prototype classification module based on polynomial loss.
Specifically, the feature extraction network provided in this embodiment is built on a ResNet18 backbone. First, the deformable convolution layers (Deformable Convolutional Layers, DCLs) enhance the model's ability to characterize complex and unknown geometric transformations and extract spatial attention over the features to adaptively select important regions, thereby expanding the effective receptive field of the convolution. Then, aggregate attention blocks (Aggregate-Attention Blocks, A²Bs) generate an attention mask for the channel domain and use it to automatically select important channels, further extracting more discriminative features. Finally, the prototype classification module learns a metric space well suited to interference recognition based on a polynomial loss (PolyLoss), in which better inter-class separability can be obtained.
2. Training network
Step 3: and training the network model by using the training set, and storing the optimal network model parameters by using the verification set until the network converges to obtain a trained network model.
In this embodiment, an episodic training strategy is used to study the N-way K-shot recognition problem. Specifically, the small sample recognition problem is usually modeled as an N-way K-shot recognition problem: the model draws K labeled images from each of N categories and is required to correctly classify the remaining unlabeled images. Unlike conventional recognition problems, which require the training-set label domain to be identical to the validation- and test-set label domains, the small sample recognition problem requires classifying new classes after training. The images used for training and the images used for validation and testing must therefore come from orthogonal (disjoint) label domains.
More specifically, a given data set D is divided into three parts, D_train, D_val, and D_test, where (x_i, y_i) denotes the original feature vector and label information of the i-th image. Furthermore, the label sets Y_train, Y_val, and Y_test are pairwise orthogonal (disjoint), and their union is the full label set.
In the meta-training phase, this embodiment randomly selects N categories from D_train and randomly samples K + q images from each category to generate a meta-task. Within each of the N categories, the K + q images are further divided into two sets containing K images and q images, namely the support set S and the query set Q. The purpose of FSL is to exploit the prior knowledge contained in the limited labeled samples of a given support set S to classify the unseen query set Q:
where the expression denotes the probability that a query sample is identified as the c-th class of the support set S. Similarly, this embodiment defines meta-tasks on the data sets D_val and D_test for meta-validation and meta-testing. Furthermore, the objective of this embodiment is to train the proposed model to learn transferable deep meta-knowledge from these meta-learning tasks, then save the optimal model through meta-validation, and finally report generalization accuracy as the average recognition accuracy over the meta-test tasks.
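The episode (meta-task) sampling described above can be sketched as follows; the toy data set, class counts, and the helper name `sample_episode` are illustrative:

```python
import numpy as np

def sample_episode(labels, n_way=5, k_shot=1, q_query=15, rng=None):
    """Sample one N-way K-shot meta-task: index arrays for a support set
    (k_shot per class) and a query set (q_query per class)."""
    rng = np.random.default_rng(rng)
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(labels == c))[: k_shot + q_query]
        support.extend(idx[:k_shot])   # first k_shot indices -> support set
        query.extend(idx[k_shot:])     # remaining q_query indices -> query set
    return np.array(support), np.array(query), classes

labels = np.repeat(np.arange(10), 100)   # toy data set: 10 classes x 100 samples
S, Q, cls = sample_episode(labels, rng=0)
```

Because the support and query indices for each class are drawn from one disjoint permutation slice, no image appears in both sets within an episode.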
Step 3 specifically includes:
31 Generating a support set and a query set from the training set by adopting a scenario training mechanism and utilizing a random sample extraction method;
32 Extracting features with remote context information from samples of the support set and the query set by using a variable convolution layer to obtain a geometric enhancement feature map;
33 Channel domain weighting is carried out on the geometric enhancement feature map by utilizing the aggregation attention block, so that a channel enhancement feature map is obtained;
34 Classifying the channel enhancement feature map by using a prototype classification module to obtain a sample real label.
In this embodiment, the specific process of extracting joint attention features with the feature extraction module is as follows: first, for the meta-task of the t-th episode of training, the set assembled from the support set samples and the query set samples is input into the feature extraction module. Here the two parameter sets denote, respectively, the parameters of the aggregate-attention deformable convolution layers (Aggregate-Attention Deformable Convolutional Layers, A²-DCLs) and of the last fully connected layer; F denotes the feature extraction module, FC denotes the fully connected layer in the feature extraction module, and the output is the final visual feature with joint attention. Furthermore, F is mainly composed of the DCLs and the A²Bs.
The specific implementation principle of each module is described in detail below with reference to the accompanying drawings.
Referring to fig. 3, fig. 3 is a schematic diagram of a variable convolution layer according to an embodiment of the present disclosure. In this embodiment, extracting features with remote context information from support set and query set samples using a variable convolution layer, a geometry enhanced feature map is obtained, comprising:
obtaining a feature map based on samples of the support set and the query set;
Introducing a group of learnable offsets into the feature map to enhance the sampling process on the regular grid, obtaining an offset feature map;
and weighting the offset feature map with learnable modulation amounts to obtain a geometric enhancement feature map.
Unlike conventional convolution operations, deformable convolution achieves invariance to geometric transformations (e.g., translation, rotation, scale) by introducing offset and modulation mechanisms, while having the ability to adaptively select important regions in the feature map. Specifically, the DCLs decompose the convolution into three steps: first, a set of learnable offsets is introduced into the input feature map to enhance the sampling process on the regular grid; then, the offset feature map is weighted with learnable modulation amounts; finally, the sampled features are weighted and summed with a conventional convolution kernel.
It is noted that both the offset estimation and the modulation estimation occur in two dimensions. For an input image x, let the output feature map be y, and let p_0 = (x_0, y_0) denote a two-dimensional sampling position. Over the 3 × 3 neighborhood centered at p_0, the deformable convolution may be defined as:

y(p_0) = Σ_{p_n ∈ R} ω(p_n) · x(p_0 + p_n + Δp_n) · Δm_n
In the formula, R denotes the 3 × 3 neighborhood centered at p_0; p_n is the integral offset of the convolution operation; ω(p_n) denotes the weight of the 3 × 3 convolution kernel; Δp_n and Δm_n (n = 1, 2, …, N) denote the learnable offsets and modulation amounts in the deformable convolution, with Δm_n in the range [0, 1]. Δp_n is added to the integral offset p_n so that the sampling positions of the deformable convolution kernel vary in feature space.
Thus, samples around the nominal sampling center p_0 can be adaptively processed by the deformable convolution. However, since Δp_n is usually fractional, this embodiment uses bilinear interpolation to compute x(p_0 + p_n + Δp_n), thereby producing accurate offset sampling. Furthermore, Δp_n and Δm_n are obtained by applying separate convolution layers to the same input feature map, and these convolution layers have the same spatial resolution and dilation as the current convolution layer. Thus, assuming the input feature map has height H, width W, and C channels (i.e., number of filters), Δp and Δm can be expressed as:
In the formula, * and σ(·) denote the convolution operator and the Sigmoid function, respectively.
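The deformable sampling with bilinear interpolation and modulation can be sketched for a single output position. The helper names, kernel weights, and offsets below are illustrative; this covers one 3 × 3 sampling center rather than a full layer, and the offsets/modulations are given directly instead of being predicted by convolution layers:

```python
import numpy as np

def bilinear(x, py, px):
    """Bilinear interpolation of a 2-D map x at fractional position (py, px)."""
    y0, x0 = int(np.floor(py)), int(np.floor(px))
    y1, x1 = min(y0 + 1, x.shape[0] - 1), min(x0 + 1, x.shape[1] - 1)
    wy, wx = py - y0, px - x0
    return ((1 - wy) * (1 - wx) * x[y0, x0] + (1 - wy) * wx * x[y0, x1]
            + wy * (1 - wx) * x[y1, x0] + wy * wx * x[y1, x1])

def deform_conv_point(x, w, p0, offsets, mods):
    """One output of a 3x3 modulated deformable convolution at center p0:
    y(p0) = sum_n w(p_n) * x(p0 + p_n + dp_n) * dm_n."""
    grid = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]  # integral offsets p_n
    out = 0.0
    for n, (dy, dx) in enumerate(grid):
        py = np.clip(p0[0] + dy + offsets[n, 0], 0, x.shape[0] - 1)
        px = np.clip(p0[1] + dx + offsets[n, 1], 0, x.shape[1] - 1)
        out += w[n] * bilinear(x, py, px) * mods[n]
    return out

x = np.arange(25, dtype=float).reshape(5, 5)
w = np.full(9, 1.0 / 9)   # averaging kernel, for a checkable result
# zero offsets and unit modulation reduce to a plain 3x3 convolution
y_plain = deform_conv_point(x, w, (2, 2), np.zeros((9, 2)), np.ones(9))
```

With zero offsets and unit modulation the sketch reduces to an ordinary convolution at p_0, which makes the deformable formula easy to sanity-check.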
Referring to fig. 4, fig. 4 is a schematic diagram of an aggregate attention block according to an embodiment of the present invention. In this embodiment, the channel domain weighting is performed on the geometric enhancement feature map by using the aggregate attention block to obtain a channel enhancement feature map, including:
And respectively carrying out global maximum pooling, global average pooling and global soft pooling on the geometric enhancement feature map, and carrying out feature fusion to obtain the channel enhancement feature map.
In order to further improve channel attention performance without introducing additional learnable parameters or increasing model complexity, this embodiment provides aggregate attention blocks (Aggregate-Attention Blocks, A²Bs). Specifically, this embodiment fuses multiple kinds of effective information by aggregating different global pooling operations (namely global max pooling, global average pooling, and global soft pooling) in the channel domain, and further extracts a more discriminative channel-domain attention mask to adaptively select important channel information. Assume the feature map output by the DCLs has height H, width W, and C channels. The channel weights in the A²Bs can then be expressed as:
In the formula, σ(·) is the Sigmoid function, FC denotes a fully connected layer, and W_D and W_U are the weights of the fully connected layers; z_max, z_avg, and z_soft denote the features after global max pooling, global average pooling, and global soft pooling, respectively. Furthermore, in order to reduce network parameters and model complexity, the shapes of W_D and W_U are set to C/r × C and C × C/r, where r denotes the reduction ratio. GMP, GAP, and GSP denote global max pooling, global average pooling, and global soft pooling, respectively, with the following mathematical expressions:
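The aggregation of the three global poolings into a channel mask can be sketched as follows. The bottleneck weights here are random stand-ins (untrained) and the soft-pooling form is one common interpretation (a softmax-weighted spatial average), so this illustrates the structure rather than the trained A²Bs:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def aggregate_attention(U, r=2, seed=0):
    """Toy A2B: fuse global max / average / soft pooling over the spatial
    dims, pass through a C/r bottleneck, and re-weight channels by a
    Sigmoid mask. U has shape (H, W, C)."""
    H, W, C = U.shape
    flat = U.reshape(-1, C)                       # spatial positions x channels
    z_max = flat.max(axis=0)
    z_avg = flat.mean(axis=0)
    # global soft pooling: softmax-weighted average over spatial positions
    z_soft = np.array([softmax(flat[:, c]) @ flat[:, c] for c in range(C)])
    rng = np.random.default_rng(seed)
    W_D = rng.standard_normal((C // r, C)) * 0.1  # down-projection (C/r x C)
    W_U = rng.standard_normal((C, C // r)) * 0.1  # up-projection (C x C/r)
    z = z_max + z_avg + z_soft                    # aggregate pooled features
    w = 1.0 / (1.0 + np.exp(-(W_U @ (W_D @ z))))  # Sigmoid -> channel mask
    return U * w                                  # channel-wise re-weighting

V = aggregate_attention(np.ones((4, 4, 8)))
```

Note the mask is broadcast across the spatial dimensions, so only the channel dimension is re-weighted, matching the channel-domain attention described above.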
in summary, the output of the feature extraction module can be expressed as:
In this way, the model extracts more discriminative high-dimensional features from the raw image data. More specifically, the feature extraction module combines the spatial attention of the DCLs with the aggregate channel attention of the A²Bs to learn more discriminative features, i.e., joint attention that differentially guides the class prototypes.
Referring to fig. 5, fig. 5 is a schematic diagram of a prototype classification module according to an embodiment of the invention. In this embodiment, classifying the channel enhancement feature map by using a prototype classification module to obtain a sample real label includes:
and calculating the distance from the query set sample to the support set prototype sample in the channel enhancement feature map, and outputting a class label of the prototype sample with the minimum distance from the query set sample to classify the query sample so as to obtain a real label of the query sample.
In addition, after each training is completed, the verification set is used to store the optimal network model parameters, and the optimal network model parameters are stored mainly by calculating the corresponding polynomial loss function and updating the parameters of the network model accordingly.
Specifically, the features obtained by the model from the feature extraction module characterize more effective class prototypes, and the recognition task is then completed by the prototype classification module. The classifier predicts the true label of a query sample by comparing the distances between the query sample and the category prototypes in the metric space, and uses the polynomial loss to optimize the model for optimal inter-class separability. Ideally, a metric-based FSL method classifies query samples according to their distance from the center of each support set category. To simplify the computation of distance information, the prototype c_k is used to approximately replace the true category center, where c_k denotes the prototype of category k. Typically, a class prototype is obtained by computing the average of the support set sample feature maps.
That is, c_k = (1/N_k) Σ_{(x_i, y_i) ∈ S_k} F(x_i), where N_k denotes the number of support set samples of category k. The recognition task is accomplished by computing a probability distribution over the distances from the query samples to the support set prototypes:
In the formula, d(·,·) denotes the Euclidean distance operator and c_k denotes the prototype of category k. More specifically, the model classifies a query sample by outputting the class label of the prototype with the smallest distance to the query sample: ŷ = arg min_k d(F(x), c_k).
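Prototype computation and nearest-prototype classification can be sketched with toy 2-D features; the softmax over negative Euclidean distances follows the standard prototypical-network formulation, and the helper names are illustrative:

```python
import numpy as np

def prototypes(feats, labels):
    """Class prototypes: the mean support-set feature vector per class."""
    classes = np.unique(labels)
    return classes, np.stack([feats[labels == c].mean(axis=0) for c in classes])

def classify(query, protos, classes):
    """Softmax over negative Euclidean distances to the prototypes; the
    predicted label is the class of the nearest prototype."""
    d = np.linalg.norm(query[None, :] - protos, axis=1)
    p = np.exp(-d) / np.exp(-d).sum()
    return classes[np.argmin(d)], p

feats = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])  # toy support features
labels = np.array([0, 0, 1, 1])
cls, protos = prototypes(feats, labels)
pred, prob = classify(np.array([0.1, 0.1]), protos, cls)
```

The query at (0.1, 0.1) sits next to the class-0 prototype, so both the argmin label and the probability mass land on class 0.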
Cross entropy characterizes the difference between the model-learned distribution and the true distribution by measuring the distance between two probability distributions, and is therefore widely used for machine learning and deep learning. In computer vision tasks, cross entropy loss is generally expressed as:
In the formula, p_t denotes the model's predicted probability for the true class of the target in the t-th episode of training. The specific form of the cross entropy loss in the method of the present invention can be expressed as:
Furthermore, the Taylor expansion of the cross entropy loss with (1 − p_t) as the basis can be expressed as: L_CE = −log(p_t) = Σ_{j=1}^{∞} (1/j)(1 − p_t)^j.
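The expansion can be checked numerically by truncating the series at a large order; the probability value 0.7 below is an arbitrary illustration:

```python
import numpy as np

# Cross entropy as a polynomial series in (1 - p_t):
#   -log(p) = sum_{j=1..inf} (1 - p)^j / j,  for 0 < p <= 1.
p = 0.7
series = sum((1 - p) ** j / j for j in range(1, 200))  # truncated series
exact = -np.log(p)
```

Since (1 − p) < 1, the truncation error decays geometrically, so 200 terms already match the exact value to machine precision.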
However, due to its fixed functional form, the cross entropy loss cannot be flexibly adapted to the variety of computer vision tasks. Thus, the present invention introduces a more flexible and versatile loss function framework: PolyLoss. Specifically, the key idea of PolyLoss is to decompose the cross entropy loss into a weighted sum of polynomial bases, whose functional form can be expressed as: L_Poly = Σ_{j=1}^{∞} α_j (1 − p_t)^j.
In the formula, α_j is the j-th polynomial coefficient and p_t denotes the model's predicted probability for the true class of the target. Therefore, by changing α_j to adjust the importance of the corresponding polynomial basis, an optimal loss function form can be formulated precisely for different tasks. However, tuning an infinite number of polynomial coefficients α_j is prohibitively difficult, so optimizing the most general functional form of PolyLoss is not feasible. To reduce the parameter space and prevent the model from failing to converge, the invention adopts a more concise and effective loss function, Poly-1, whose mathematical expression is: L_Poly-1 = −log(p_t) + ε₁(1 − p_t).
In the formula, ε₁ is the first (leading) polynomial coefficient. The parameters of the model are updated according to this polynomial loss function until the network converges.
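The Poly-1 form can be sketched as a function of the predicted true-class probabilities; `eps1` and the sample probabilities below are illustrative, and setting ε₁ = 0 recovers plain cross entropy:

```python
import numpy as np

def poly1_loss(p_true, eps1=1.0):
    """Poly-1 loss on the predicted probabilities of the true class:
    cross entropy plus an extra weight eps1 on the leading (1 - p_t) term."""
    p_true = np.asarray(p_true, dtype=float)
    return np.mean(-np.log(p_true) + eps1 * (1.0 - p_true))

ce = poly1_loss([0.9, 0.8], eps1=0.0)   # eps1 = 0 -> ordinary cross entropy
l1 = poly1_loss([0.9, 0.8], eps1=1.0)   # eps1 = 1 -> extra (1 - p_t) penalty
```

The added term ε₁(1 − p_t) increases the penalty on low-confidence correct-class predictions, which is how ε₁ tunes the loss per task.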
3. Test network
Step 4: and performing performance evaluation on the trained network model by using the test set, and performing identification of small sample radar active interference by using the trained network model.
Aiming at the sample scarcity problem faced by radar active interference identification, the invention provides a small sample learning framework based on metric learning, addressing the sample scarcity, poor novel-interference recognition accuracy, and weak transfer capability of existing methods. Meanwhile, the invention provides a small sample radar active interference identification method based on an aggregate-attention deformable-convolution prototype network, targeting the weak ability of existing small sample learning methods to characterize discrete and weak features in interference time-frequency images; the method can accurately extract interference features even when samples are extremely scarce, and improves the model's recognition robustness to interference with discrete and weak features.
Example two
The simulation test is performed on the small sample radar active interference identification method provided by the first embodiment, and the simulation test is compared with the existing FSL method to verify the beneficial effects of the invention.
1. Data set
In order to evaluate the performance of the proposed interference recognition algorithm, 50 interference signals are first simulated on the MATLAB software platform. Considering the differences in modulation mode and interference purpose between suppression interference and spoofing interference, their JSR ranges are set to 30–50 dB and 10–30 dB, respectively. Furthermore, the parameters and index ranges of the composite interference depend on the signal components constituting it. Finally, a data set is generated according to the modulation principle of each interference signal and different JNRs (JNR = 0 dB, 10 dB, 20 dB, 30 dB), and time-frequency images of the 50 interference signals are generated through the short-time Fourier transform, as shown in fig. 6. For each JNR, each interference type randomly generates 1000 time-frequency images within the interference simulation parameters, giving the data set a total of 50 × 1000 samples per JNR, each sample of size 1 × 128. For convenience of the experimental analysis below, the sub-data sets at different JNRs are denoted JamSet_0dB, JamSet_10dB, JamSet_20dB, and JamSet_30dB, and their union is named JamSet, i.e., JamSet = {JamSet_0dB, JamSet_10dB, JamSet_20dB, JamSet_30dB}.
Further, the obtained data set is divided into a training set, a verification set and a test set according to a certain proportion, and the division strategy is shown in fig. 7.
2. Implementation details
1) Feature extraction network architecture
For fair comparison with existing FSL methods, the invention builds the feature extraction network on ResNet18. Specifically, ResNet18 consists of 8 residual blocks, where each residual block contains 3 convolutional blocks (identical in structure to the Conv64F convolutional block) and one residual link layer, and each convolutional block consists of one convolutional layer, one batch normalization layer, one ReLU layer, and one max pooling layer.
2) Scenario training strategy
The invention adopts an episodic training strategy in the N-way K-shot experiments. Specifically, for the 5-way 1-shot experiment, each category in the support set and the query set contains 1 sample and 15 samples, respectively, so the total number of samples input to the model in one meta-training step is 5 × (1 + 15) = 80. Similarly, for the 5-way 5-shot experiment, the total number of input samples is 5 × (5 + 15) = 100. For the first coefficient ε₁ of the prototype classification module's polynomial loss, the parameter values yielding the highest recognition accuracy on the validation set are selected as the inputs for 5-way 1-shot and 5-way 5-shot, respectively, according to the experimental results. In addition, the invention trains the model end-to-end with the Adam optimizer, with the initial learning rate set to 0.001 and weight decay applied. For all models, 100 epochs of episodic training are set on JamSet, each containing 2000 meta-tasks. In particular, the inputs of all models used in this section are resized to a common resolution.
3) Verification and testing
After each episode of training is completed, the hyperparameters are optimized on the validation set, the optimal model with the highest recognition accuracy is saved, and finally the average recognition accuracy of the model is reported on the test set. As in the episodic training phase, during validation and testing the data is also divided into support sets and query sets for model input. 600 meta-tasks are set for both validation and testing, and the average accuracy over these meta-tasks is given with 95% confidence intervals.
3. Experimental results
The interference time-frequency images generated by simulation are input to the proposed algorithm, A²DCNet, and to the other five comparison algorithms for testing, and the recognition accuracy is obtained. Table 1 shows the results of the proposed algorithm compared with traditional FSL algorithms on the simulated data set JamSet.
Table 1 comparison of the test results of the proposed algorithm and the conventional FSL algorithm
The above results indicate that A²DCNet obtains the highest recognition accuracy on JamSet compared with the traditional FSL algorithms, achieving 87.56% and 94.34% on 5-way 1-shot and 5-way 5-shot, respectively. Compared with the best recognition accuracy of the traditional FSL algorithms, namely 84.77% on 5-way 1-shot and 91.18% on 5-way 5-shot (obtained by CAN), A²DCNet improves by at least 2.79% and 3.16%. In particular, compared with the typical prototype-based recognition algorithm PN, the recognition accuracy of A²DCNet on 5-way 1-shot and 5-way 5-shot improves by 5.53% and 6.82%, respectively. Therefore, the proposed algorithm is clearly superior to the other traditional FSL algorithms when facing complex interference data sets, demonstrating the robustness of the interference recognition model under extreme sample scarcity.
To highlight the transfer performance of the proposed A²DCNet under different JNRs, this example sets up several comparative experiments on JamSet. Specifically, all methods first save their optimal models through episodic training on JamSet; these models are then tested directly on the remaining sub-data sets, and the test results are shown in table 2.
Table 2 comparison of the test results of the proposed algorithm and the conventional FSL algorithm
As can be seen from Table 2, when the trained models are used directly for testing on JamSet_0dB, the proposed algorithm achieves the highest recognition accuracy compared with the traditional FSL algorithms, namely 66.35% and 82.81%. Similarly, the proposed algorithm is clearly superior to the traditional FSL algorithms on JamSet_10dB and JamSet_20dB. The above results therefore indicate that the proposed algorithm has the best transfer performance compared with the traditional FSL algorithms, showing great potential for application in real interference scenarios.
In order to further verify the superiority of the algorithm provided by the invention, feature visualization and t-SNE clustering result visualization are carried out on all algorithms. Referring to fig. 8, fig. 8 is a schematic diagram showing the feature visualization results of the algorithm proposed by the present invention and the conventional FSL algorithm. From the results shown in fig. 8, the algorithm provided by the invention can more completely characterize the interference characteristics in the time-frequency image. Moreover, the characteristic visualization results of NFMJ+ISRJ and SFJ2+DFTJ show that the weak characteristic is represented more finely by the algorithm provided by the invention. Therefore, the characterization capability of the algorithm provided by the invention on discrete interference features and weak features in composite interference is obviously superior to that of the traditional FSL algorithm.
FIG. 9 is a visualization of t-SNE clustering results. As can be seen from fig. 9, for the most difficult clustering interferences, i.e., NFMJ, nfmj+dftj, nfmj+isfj, nfmj+isrj, and sfj2+dftj, the proposed algorithm of the present invention shows robustness in the case of small samples compared to the clustering results achieved by the conventional FSL algorithm.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.
Claims (7)
1. A method for identifying small sample radar active interference, comprising:
constructing a plurality of radar active interference signal models, and performing short-time Fourier transform on the radar active interference signal models to obtain an interference time-frequency image; taking the interference time-frequency image as a reference data set for interference identification, and dividing the reference data set into a training set, a verification set and a test set;
constructing an aggregate attention variable convolution prototype network model comprising a feature extraction module and a prototype classification module based on a metric learning idea; the feature extraction module comprises a variable convolution layer and an aggregation attention block, and the prototype classification module is a prototype classification module based on polynomial loss;
training the network model by using the training set, and storing optimal network model parameters by using the verification set until the network converges to obtain a trained network model; comprising the following steps:
generating a support set and a query set from the training set by adopting a scenario training mechanism and utilizing a random sample extraction method;
obtaining a feature map based on the support set and samples of the query set;
introducing a group of learnable offsets into the feature map to enhance the sampling process on the regular grid, obtaining an offset feature map;
Weighting the offset feature map with learnable modulation amounts to obtain a geometric enhancement feature map; wherein the acquisition process of the geometric enhancement feature map is expressed as follows:

y(p_0) = Σ_{p_n ∈ R} ω(p_n) · x(p_0 + p_n + Δp_n) · Δm_n

wherein y denotes the geometric enhancement feature map output by the deformable convolution layer, x denotes the feature map, p_0 = (x_0, y_0) denotes a two-dimensional sampling position of x, ω(p_n) denotes the weight of the 3 × 3 convolution kernel, R denotes the 3 × 3 neighborhood centered at p_0, p_n is the integral offset of the convolution operation, {Δp_n | n = 1, 2, …, N} and {Δm_n | n = 1, 2, …, N} denote the learnable offsets and modulation amounts in the deformable convolution, respectively, and Δm_n is in the range [0, 1];
Carrying out channel domain weighting on the geometric enhancement feature map by utilizing the aggregation attention block to obtain a channel enhancement feature map;
classifying the channel enhancement feature map by using the prototype classification module to obtain a sample real label;
and performing performance evaluation on the trained network model by using the test set, and performing recognition of small sample radar active interference by using the trained network model.
2. The method of claim 1, wherein the radar active interference signal models include suppression jamming, spoofing jamming, and composite jamming combined from the suppression jamming and the spoofing jamming.
3. The method for identifying small sample radar active interference according to claim 1, wherein the performing channel domain weighting on the geometric enhancement feature map by using the aggregate attention block to obtain a channel enhancement feature map comprises:
and respectively carrying out global maximum pooling, global average pooling and global soft pooling on the geometric enhancement feature map, and carrying out feature fusion to obtain a channel enhancement feature map.
4. The method of claim 3, wherein the process of obtaining the channel enhancement profile is formulated as:
wherein V denotes the channel enhancement feature map output by the aggregate attention block, ⊗ denotes the channel-wise product, and U denotes the geometric enhancement feature map output by the deformable convolution layer; w denotes the output channel weight vector, σ(·) is the Sigmoid function, FC denotes the fully connected layer, and W_D and W_U are the weights of the fully connected layers; GMP, GAP, and GSP denote global max pooling, global average pooling, and global soft pooling, respectively, and z_max, z_avg, and z_soft denote the features after global max pooling, global average pooling, and global soft pooling, respectively.
5. The method for identifying small sample radar active interference according to claim 1, wherein said classifying the channel enhancement feature map by using the prototype classification module to obtain a sample true tag comprises:
And calculating the distance from the query set sample to the support set prototype sample in the channel enhancement feature map, and outputting a class label of the prototype sample with the minimum distance from the query set sample so as to classify the query set sample and obtain a real label of the query set sample.
6. The method of small sample radar active disturbance identification according to claim 1, wherein said using said verification set to store optimal network model parameters includes:
after each training is completed, the corresponding polynomial loss function is calculated, and the parameters of the network model are updated accordingly to save the optimal network model parameters.
7. The method of claim 6, wherein the polynomial loss function is expressed as:
L_Poly-1 = (1/N_q) Σ_{i=1}^{N_q} [ −log p(y_i | x_i, S) + ε₁ (1 − p(y_i | x_i, S)) ]

wherein C denotes the total number of categories and c denotes a category, c = 1, 2, … C; Q denotes the query set; ε₁ denotes the leading polynomial coefficient; p(·) denotes the probability distribution over the distances from the query set samples to the support set prototypes; S denotes the support set; x_i denotes the i-th query set sample and y_i denotes the true label of the i-th query set sample; and N_q denotes the number of query set samples.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310320140.6A CN116047427B (en) | 2023-03-29 | 2023-03-29 | Small sample radar active interference identification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116047427A CN116047427A (en) | 2023-05-02 |
CN116047427B true CN116047427B (en) | 2023-06-23 |
Family
ID=86133530
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310320140.6A Active CN116047427B (en) | 2023-03-29 | 2023-03-29 | Small sample radar active interference identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116047427B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116482618B (en) * | 2023-06-21 | 2023-09-19 | 西安电子科技大学 | Radar active interference identification method based on multi-loss characteristic self-calibration network |
CN117289218B (en) * | 2023-11-24 | 2024-02-06 | 西安电子科技大学 | Active interference identification method based on attention cascade network |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114564982A (en) * | 2022-01-19 | 2022-05-31 | 中国电子科技集团公司第十研究所 | Automatic identification method for radar signal modulation type |
CN115081475A (en) * | 2022-06-08 | 2022-09-20 | 西安电子科技大学 | Interference signal identification method based on Transformer network |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7193558B1 (en) * | 2003-09-03 | 2007-03-20 | The United States Of America As Represented By The Secretary Of The Navy | Radar processor system and method |
KR102110973B1 (en) * | 2019-10-25 | 2020-05-14 | 에스티엑스엔진 주식회사 | Robust CFAR Method for Noise Jamming Detection |
CN114037001A (en) * | 2021-10-11 | 2022-02-11 | 中国人民解放军92578部队 | Mechanical pump small sample fault diagnosis method based on WGAN-GP-C and metric learning |
CN114895263A (en) * | 2022-05-26 | 2022-08-12 | 西安电子科技大学 | Radar active interference signal identification method based on deep migration learning |
CN115097396A (en) * | 2022-06-21 | 2022-09-23 | 西安电子科技大学 | Radar active interference identification method based on CNN and LSTM series model |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114564982A (en) * | 2022-01-19 | 2022-05-31 | 中国电子科技集团公司第十研究所 | Automatic identification method for radar signal modulation type |
CN115081475A (en) * | 2022-06-08 | 2022-09-20 | 西安电子科技大学 | Interference signal identification method based on Transformer network |
Also Published As
Publication number | Publication date |
---|---|
CN116047427A (en) | 2023-05-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116047427B (en) | Small sample radar active interference identification method | |
CN111541511B (en) | Communication interference signal identification method based on target detection in complex electromagnetic environment | |
CN107808138B (en) | Communication signal identification method based on FasterR-CNN | |
Li | Research on radar signal recognition based on automatic machine learning | |
CN112560803A (en) | Radar signal modulation identification method based on time-frequency analysis and machine learning | |
CN111723701B (en) | Underwater target identification method | |
CN111507047B (en) | Inverse scattering imaging method based on SP-CUnet | |
CN112859014A (en) | Radar interference suppression method, device and medium based on radar signal sorting | |
CN109726649B (en) | Remote sensing image cloud detection method and system and electronic equipment | |
CN109409442A (en) | Convolutional neural networks model selection method in transfer learning | |
CN107392863A (en) | SAR image change detection based on affine matrix fusion Spectral Clustering | |
CN110163040B (en) | Radar radiation source signal identification technology in non-Gaussian clutter | |
CN114881093B (en) | Signal classification and identification method | |
Orduyilmaz et al. | Machine learning-based radar waveform classification for cognitive EW | |
Wei et al. | Intra-pulse modulation radar signal recognition based on Squeeze-and-Excitation networks | |
Liu et al. | Radar signal recognition based on triplet convolutional neural network | |
CN113673312A (en) | Radar signal intra-pulse modulation identification method based on deep learning | |
Huang et al. | Radar waveform recognition based on multiple autocorrelation images | |
CN115565019A (en) | Single-channel high-resolution SAR image ground object classification method based on deep self-supervision generation countermeasure | |
Ding et al. | Combination of global and local filters for robust SAR target recognition under various extended operating conditions | |
Kamal et al. | Generative adversarial learning for improved data efficiency in underwater target classification | |
Xiao et al. | Active jamming recognition based on bilinear EfficientNet and attention mechanism | |
CN115951315B (en) | Radar spoofing interference identification method and system based on improved wavelet packet energy spectrum | |
CN112285667A (en) | Neural network-based anti-ground clutter processing method | |
CN112215199A (en) | SAR image ship detection method based on multi-receptive-field and dense feature aggregation network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||