CN116047427B - Small sample radar active interference identification method - Google Patents

Small sample radar active interference identification method

Info

Publication number
CN116047427B
CN116047427B (application CN202310320140.6A)
Authority
CN
China
Prior art keywords
interference
sample
representing
feature map
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310320140.6A
Other languages
Chinese (zh)
Other versions
CN116047427A (en)
Inventor
周峰
樊伟伟
杜金标
刘磊
李川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202310320140.6A priority Critical patent/CN116047427B/en
Publication of CN116047427A publication Critical patent/CN116047427A/en
Application granted granted Critical
Publication of CN116047427B publication Critical patent/CN116047427B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/36Means for anti-jamming, e.g. ECCM, i.e. electronic counter-counter measures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses a small sample radar active interference identification method, which comprises the following steps: constructing a plurality of radar active interference signal models, and processing them to obtain a reference data set for interference identification; dividing the reference data set into a training set, a verification set and a test set; constructing, based on the metric learning idea, a network model comprising a feature extraction module and a prototype classification module based on polynomial loss, wherein the feature extraction module comprises a variable convolution layer and an aggregate attention block; training the network model with the training set, and saving the optimal network model parameters with the verification set until the network converges; and evaluating the performance of the trained network model on the test set, and using the trained network model to identify small sample radar active interference. The method can accurately extract interference features even when samples are extremely scarce, improves the model's ability to characterize interference with dispersed or weak features, and improves recognition performance and robustness.

Description

Small sample radar active interference identification method
Technical Field
The invention belongs to the technical field of radars, and particularly relates to a small sample radar active interference identification method.
Background
With the continuous development of electronic countermeasure technology, new radar active interference patterns with specific interference effects are continually being proposed. Complex and changeable radar active interference exposes the radar to a serious electromagnetic interference threat, preventing effective detection of the real target and its parameters. Therefore, finding a fast and efficient radar active interference recognition algorithm for complex electromagnetic environments has become an important research direction in the radar countermeasure field in recent years.
At present, although typical radar interference identification algorithms based on conventional convolutional neural networks (Convolutional Neural Network, CNN) have made great progress on measured or simulated data sets, research on these algorithms is still at an early stage, mainly in two respects: (1) Although CNN-based recognition algorithms can predict, with high accuracy, test samples of interference types that appeared during the training phase, their recognition accuracy may drop sharply when facing test samples of interference types never seen during training. (2) The success of traditional CNNs in computer vision stems mainly from the large labeled datasets used to learn model parameters during network training; once sufficient labeled data cannot be obtained, CNN-based models risk overfitting, which severely degrades network performance. In reality, however, it is difficult to obtain the large quantity of labeled, high-quality radar interference data required for model training, which directly limits the performance of conventional CNN-based interference recognition algorithms in practical scenarios.
In conclusion, under conditions of extreme sample scarcity, existing radar active interference recognition algorithms suffer from poor recognition accuracy and weak transfer capability to novel interference; meanwhile, the prior art characterizes interference with dispersed or weak features poorly, and cannot meet the demands of the complex and changeable electromagnetic environments encountered in practical applications.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a small sample radar active interference identification method. The technical problems to be solved by the invention are realized by the following technical scheme:
a method for small sample radar active interference identification, comprising:
constructing a plurality of radar active interference signal models, and performing short-time Fourier transform on the radar active interference signal models to obtain interference time-frequency images; taking the interference time-frequency images as a reference data set for interference identification, and dividing the reference data set into a training set, a verification set and a test set;
constructing an aggregate attention variable convolution prototype network model comprising a feature extraction module and a prototype classification module based on a metric learning idea; the feature extraction module comprises a variable convolution layer and an aggregation attention block, and the prototype classification module is a prototype classification module based on polynomial loss;
Training the network model by using the training set, and storing optimal network model parameters by using the verification set until the network converges to obtain a trained network model;
and performing performance evaluation on the trained network model by using the test set, and performing recognition of small sample radar active interference by using the trained network model.
In one embodiment of the invention, the radar active interference signal model includes suppression interference, deception interference, and composite interference combining the suppression and deception interference.
In one embodiment of the present invention, the training the network model using the training set includes:
generating a support set and a query set from the training set by adopting an episodic (scenario) training mechanism and randomly extracting samples;
extracting features with long-range context information from samples of the support set and the query set by using the variable convolution layer to obtain a geometric enhancement feature map;
carrying out channel domain weighting on the geometric enhancement feature map by utilizing the aggregation attention block to obtain a channel enhancement feature map;
and classifying the channel enhancement feature map by using the prototype classification module to obtain a sample real label.
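The episodic (scenario) training mechanism in the steps above can be sketched as follows. The function name `sample_episode` and the toy dataset layout are illustrative assumptions, not the patent's implementation:

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, q_query=15, rng=None):
    """Randomly draw an N-way K-shot episode from {class_label: [samples]}.

    Returns (support, query) lists of (sample, episode_class_index) pairs,
    mirroring the episodic training mechanism described above.
    """
    rng = rng or random.Random()
    classes = rng.sample(sorted(dataset), n_way)          # pick N interference classes
    support, query = [], []
    for idx, c in enumerate(classes):
        picks = rng.sample(dataset[c], k_shot + q_query)  # no overlap within a class
        support += [(x, idx) for x in picks[:k_shot]]
        query += [(x, idx) for x in picks[k_shot:]]
    return support, query

# toy dataset: 10 interference classes, 30 samples each
data = {f"jam_{i}": list(range(i * 100, i * 100 + 30)) for i in range(10)}
sup, qry = sample_episode(data, n_way=5, k_shot=1, q_query=15,
                          rng=random.Random(0))
```

Each training iteration would then embed the support and query samples and classify the query set against the support-set prototypes.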
In one embodiment of the present invention, the extracting of features with long-range context information from the support set and query set samples by using the variable convolution layer to obtain a geometric enhancement feature map includes:
obtaining a feature map based on the samples of the support set and the query set;
introducing a group of learnable offsets into the feature map to enhance the sampling process on the regular grid, thereby obtaining an offset feature map;
and weighting the offset feature map with the learnable modulation quantity to obtain a geometric enhancement feature map.
In one embodiment of the invention, the process of obtaining the geometric enhancement feature is formulated as:

$$y(p_0)=\sum_{p_n\in\mathcal{R}} w(p_n)\cdot x\big(p_0+p_n+\Delta p_n\big)\cdot \Delta m_n$$

wherein $y(p_0)$ represents the geometric enhancement feature output by the variable convolution layer, $x$ represents the feature map, $p_0$ represents a two-dimensional sampling position on $x$, $w(p_n)$ represents the convolution kernel weight, $\mathcal{R}$ is expressed as the regular-grid neighborhood centered at $p_0$, $p_n$ is the integral offset of the convolution operation over $\mathcal{R}$, $\Delta p_n$ and $\Delta m_n$ respectively represent the learnable offset and modulation in the variable convolution, and the modulation $\Delta m_n$ is in the range $[0,1]$.
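The sampling-with-offset-and-modulation operation of the variable convolution layer can be sketched naively in NumPy; this single-channel 3×3 version with hypothetical names is for illustration only (real systems would use an optimized deformable-convolution operator):

```python
import numpy as np

def bilinear(x, py, px):
    """Bilinearly sample feature map x at fractional location (py, px); zero outside."""
    H, W = x.shape
    y0, x0 = int(np.floor(py)), int(np.floor(px))
    val = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < H and 0 <= xx < W:
                val += (1 - abs(py - yy)) * (1 - abs(px - xx)) * x[yy, xx]
    return val

def deform_conv2d_single(x, weight, offset, modulation):
    """y(p0) = sum_n w(p_n) * x(p0 + p_n + dp_n) * dm_n for one 3x3 kernel.
    offset: (H, W, 9, 2) learnable dp_n; modulation: (H, W, 9) in [0, 1]."""
    H, W = x.shape
    grid = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]  # regular grid R
    y = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            for n, (dy, dx) in enumerate(grid):
                py = i + dy + offset[i, j, n, 0]
                px = j + dx + offset[i, j, n, 1]
                y[i, j] += weight[n] * bilinear(x, py, px) * modulation[i, j, n]
    return y

x = np.arange(25, dtype=float).reshape(5, 5)
w = np.zeros(9); w[4] = 1.0                 # identity kernel: only the center tap
zero_off = np.zeros((5, 5, 9, 2))
ones_mod = np.ones((5, 5, 9))
y = deform_conv2d_single(x, w, zero_off, ones_mod)  # reduces to plain sampling
```

With zero offsets and unit modulation the operation degenerates to an ordinary convolution, which makes the sketch easy to sanity-check.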
In one embodiment of the present invention, the performing channel domain weighting on the geometric enhancement feature map by using the aggregate attention block to obtain a channel enhancement feature map includes:
And respectively carrying out global maximum pooling, global average pooling and global soft pooling on the geometric enhancement feature map, and carrying out feature fusion to obtain a channel enhancement feature map.
In one embodiment of the present invention, the process of obtaining the channel enhancement feature map is formulated as:

$$F_c = M_c \otimes F$$
$$M_c = \sigma\Big(f_{\mathrm{MLP}}\big(P_{\max}(F)\big) + f_{\mathrm{MLP}}\big(P_{avg}(F)\big) + f_{\mathrm{MLP}}\big(P_{soft}(F)\big)\Big),\qquad f_{\mathrm{MLP}}(\cdot)=W_1\big(W_0(\cdot)\big)$$

wherein $F_c$ represents the channel enhancement feature map output by the aggregate attention block, $\otimes$ represents the channel-dimension product, $F$ represents the geometric enhancement feature output by the variable convolution layer, $M_c$ represents the output weight matrix, $\sigma$ is the Sigmoid function, $f_{\mathrm{MLP}}$ represents the fully connected layers with weights $W_0$ and $W_1$; $P_{\max}$, $P_{avg}$ and $P_{soft}$ represent global maximum pooling, global average pooling and global soft pooling, respectively, and $F_{\max}=P_{\max}(F)$, $F_{avg}=P_{avg}(F)$, $F_{soft}=P_{soft}(F)$ respectively represent the features after global maximum pooling, global average pooling and global soft pooling.
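The aggregation of the three pooled descriptors into channel weights can be sketched as follows, assuming (as an illustration, not taken from the patent) a shared two-layer MLP with ReLU and a softmax-weighted global soft pooling:

```python
import numpy as np

def softpool(x):
    """Global soft pooling over spatial dims: softmax(x)-weighted mean."""
    e = np.exp(x - x.max())                  # numerically stable softmax weights
    return (x * e).sum() / e.sum()

def aggregate_attention(F, W0, W1):
    """Channel attention from fused max/avg/soft pooled descriptors.
    F: (C, H, W) geometric enhancement feature map; W0: (r, C); W1: (C, r).
    Returns the channel enhancement feature map M_c * F."""
    C = F.shape[0]
    fmax = F.reshape(C, -1).max(axis=1)
    favg = F.reshape(C, -1).mean(axis=1)
    fsoft = np.array([softpool(F[c]) for c in range(C)])
    mlp = lambda v: W1 @ np.maximum(W0 @ v, 0)           # shared two-layer MLP, ReLU
    Mc = 1.0 / (1.0 + np.exp(-(mlp(fmax) + mlp(favg) + mlp(fsoft))))  # sigmoid
    return Mc[:, None, None] * F                         # channel-domain weighting

rng = np.random.default_rng(0)
F = rng.standard_normal((4, 8, 8))
W0 = rng.standard_normal((2, 4)) * 0.1   # reduction ratio r = 2 (assumed)
W1 = rng.standard_normal((4, 2)) * 0.1
out = aggregate_attention(F, W0, W1)
```

Since the sigmoid keeps every channel weight in (0, 1), the output never exceeds the input in magnitude, which is the expected re-weighting behavior.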
In one embodiment of the present invention, the classifying the channel enhancement feature map by using the prototype classification module to obtain a sample real label includes:
and calculating the distance from the query set sample to the support set prototype sample in the channel enhancement feature map, and outputting a class label of the prototype sample with the minimum distance from the query set sample so as to classify the query sample and obtain a real label of the query sample.
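A minimal sketch of this nearest-prototype classification rule, with hypothetical names and toy 2-D embeddings:

```python
import numpy as np

def classify_by_prototype(support_feats, support_labels, query_feats):
    """Nearest-prototype classification: each class prototype is the mean of its
    support embeddings; a query gets the label of the closest prototype."""
    labels = np.unique(support_labels)
    protos = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in labels])                            # (n_way, D)
    d = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # squared Euclidean
    return labels[d.argmin(axis=1)]

sup = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0], [11.0, 10.0]])
sup_y = np.array([0, 0, 1, 1])
qry = np.array([[0.4, 0.1], [10.2, 9.8]])
pred = classify_by_prototype(sup, sup_y, qry)
```

Here the two class prototypes are (0.5, 0) and (10.5, 10), so the two queries fall to classes 0 and 1 respectively.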
In one embodiment of the present invention, the storing the optimal network model parameters using the validation set includes:
after each training is completed, the corresponding polynomial loss function is calculated, and the parameters of the network model are updated accordingly to save the optimal network model parameters.
In one embodiment of the invention, the polynomial loss function is expressed as:

$$L_{poly}=\frac{1}{N_q}\sum_{j=1}^{N_q}\Big[-\log p\big(y=y_j\mid x_j,S\big)+\epsilon_1\Big(1-p\big(y=y_j\mid x_j,S\big)\Big)\Big]$$

wherein the probability $p(\cdot\mid x_j,S)$ is the distribution, normalized over the $C$ categories $k\in\{1,2,\dots,C\}$, derived from the distances of the query-set samples to the support-set prototypes; $Q$ represents the query set; $S$ represents the support set; $\epsilon_1$ is the first polynomial coefficient; $x_j$ represents the $j$-th query-set sample; $y_j$ represents the true label of the $j$-th query-set sample; and $N_q$ represents the number of query-set samples.
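The exact polynomial expansion is not fully recoverable from the text above, so the sketch below assumes the common Poly-1 form, cross-entropy plus a first-order term ε₁(1 − p_t), computed on softmax probabilities of negative squared query-to-prototype distances:

```python
import numpy as np

def poly1_loss(neg_sq_dists, true_labels, eps1=2.0):
    """Polynomial (Poly-1) loss over query samples: cross-entropy on the softmax
    of negative squared distances to class prototypes, plus eps1 * (1 - p_t)."""
    z = neg_sq_dists - neg_sq_dists.max(axis=1, keepdims=True)  # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    pt = p[np.arange(len(true_labels)), true_labels]            # prob of true class
    return float(np.mean(-np.log(pt) + eps1 * (1.0 - pt)))

logits = np.array([[-0.1, -5.0], [-4.0, -0.2]])  # query-to-prototype "-d^2" (toy)
loss = poly1_loss(logits, np.array([0, 1]), eps1=2.0)
```

Setting ε₁ = 0 recovers the plain cross-entropy used by the standard prototype network, so the extra term strictly increases the penalty on low-confidence true classes.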
The invention has the beneficial effects that:
the invention provides a small sample radar active interference identification method based on a measurement learning-based small sample learning framework, which is capable of accurately extracting interference characteristics under extremely scarce samples, improving the characterization capability of a model on characteristic dispersion and weak characteristic interference, improving identification performance and robustness, and meeting various electromagnetic environment requirements of complexity and variability in practical application.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
Fig. 1 is a schematic flow chart of a small sample radar active interference identification method provided by an embodiment of the invention;
FIG. 2 is a block diagram of an aggregate attention-based variable convolution prototype network provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of a variable convolution layer provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of an aggregate attention block provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a prototype classification module provided by an embodiment of the invention;
FIG. 6 shows time-frequency images of the 50 classes of interference in the interference data set constructed by the present invention;
FIG. 7 is a schematic diagram of the partitioning strategy of the present invention for an interference data set;
FIG. 8 is a schematic diagram of the feature visualization results of the proposed algorithm and the conventional FSL algorithm of the present invention;
FIG. 9 is a visualization of t-SNE clustering results in the simulation test, wherein (a) is PN, (b) is RN, (c) is CMN, (d) is DN4, (e) is CAN, and (f) is A²-DCPNet.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
Example 1
Small sample learning (Few-Shot Learning, FSL), as a new deep learning paradigm, has proven to be one of the most effective approaches to the small sample image recognition problem. On the one hand, FSL can address the insufficient generalization and poor adaptability to new tasks of traditional neural network models; on the other hand, it can alleviate the severe performance degradation caused by overfitting when samples are scarce. The prototype network (Prototypical Network, PN) takes the mean of the feature vectors of each class of support-set samples as the class prototype in the metric space and applies the nearest-neighbor idea to assign a query sample to the prototype with the smallest Euclidean distance. In contrast to such non-parametric metrics, the Relation Network (RN) trains a neural network to obtain a learnable nonlinear similarity metric function that replaces manually defined distance metrics (such as Euclidean distance and cosine distance) for identifying query samples. From the viewpoint of second-order statistics, the covariance metric network (Covariance Metric Network, CMN) implements class characterization and distance measurement by constructing a covariance matrix between the feature vectors of each sample; it contains two key modules, a local covariance representation and a covariance metric. The first module extracts rich feature representations, and the second measures the relationship between query samples and each category by computing their distributional consistency. The deep nearest neighbor neural network (DN4) classifies the query sample by directly using the local features of the original image and measuring transferable local features with image-to-class local descriptors, i.e., by comparing the similarity between the input image and the local descriptors of each class. The cross-attention network (Cross Attention Network, CAN) obtains feature vectors of the support and query samples through a feature extractor, and then uses a cross-attention module to generate cross-attention between query and support samples to learn more discriminative features.
To address the sample scarcity faced by radar active interference recognition, this embodiment adopts a small sample learning framework based on metric learning; and to address the poor ability of existing small sample learning methods to characterize discrete and weak features in interference time-frequency images, it proposes a small sample radar active interference recognition method based on an aggregate attention variable convolution prototype network.
Specifically, referring to fig. 1, fig. 1 is a flow chart of a small sample radar active interference identification method according to an embodiment of the present invention, which mainly includes three stages of constructing a data set and a network model, training a network, and testing the network. These three stages are described in sequence in detail below.
1. Modeling data sets and networks
Step 1: constructing a plurality of radar active interference signal models, and performing short-time Fourier transform on the radar active interference signal models to obtain interference time-frequency images; the interference time-frequency images are used as a reference data set for interference identification, and the reference data set is divided into a training set, a verification set and a test set.
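The short-time Fourier transform step can be sketched with a plain NumPy STFT applied to a linear FM pulse; the sampling rate, window length and chirp parameters below are illustrative assumptions, not the patent's settings:

```python
import numpy as np

def stft_image(sig, win_len=128, hop=32):
    """Magnitude STFT (time-frequency image) with a Hann window, NumPy only."""
    win = np.hanning(win_len)
    n_frames = 1 + (len(sig) - win_len) // hop
    frames = np.stack([sig[i * hop: i * hop + win_len] * win
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T   # rows: freq bins, cols: frames

fs = 1e6                                    # 1 MHz sampling (illustrative)
t = np.arange(0, 1e-3, 1 / fs)              # 1 ms pulse
lfm = np.cos(2 * np.pi * (1e4 * t + 0.5 * 2e8 * t ** 2))  # chirp: 10 kHz start
tf_img = stft_image(lfm)                    # the "interference time-frequency image"
```

For a linear FM signal the ridge of the image climbs in frequency over time, which is exactly the structure the feature extraction module later operates on.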
In this embodiment, the radar active interference signal model includes suppression interference, deception interference, and composite interference composed of the suppression and deception interference.
Specifically, the suppression interference mainly includes the following:
1) Noise amplitude modulation interference (Noise Amplitude Modulation Jamming, NAMJ)
Noise amplitude modulation interference refers to an interference signal whose carrier frequency remains unchanged by modulating amplitude information of a carrier signal with a noise signal. The mathematical expression of noise amplitude modulation interference is:
$$J_{\mathrm{NAMJ}}(t)=\big[U_0+U_n(t)\big]\cos\big(2\pi f_j t+\varphi\big)$$

wherein $J_{\mathrm{NAMJ}}(t)$ represents the noise amplitude modulation interference; the modulation noise $U_n(t)$ is a generalized stationary random process with zero mean and variance $\sigma_n^2$, distributed in the interval $[-U_0,+\infty)$, typically a Gaussian white noise signal; $\varphi$ is a random variable uniformly distributed in $[0,2\pi)$ and independent of $U_n(t)$; and $U_0$, $f_j$ are constants.
2) Noise FM interference (Noise Frequency Modulation Jamming, NFMJ)
Noise fm interference refers to an interference signal whose amplitude remains unchanged by modulating the frequency information of a carrier signal with a noise signal. The mathematical expression of noise fm interference is:
$$J_{\mathrm{NFMJ}}(t)=U_j\cos\Big(2\pi f_j t+2\pi K_{FM}\int_0^t u(\tau)\,d\tau+\varphi\Big)$$

wherein $J_{\mathrm{NFMJ}}(t)$ represents the noise frequency modulation interference; the modulation noise $u(t)$ is a random process with zero mean and variance $\sigma_n^2$, typically a Gaussian white noise signal; $\varphi$ is a random variable uniformly distributed in $[0,2\pi)$ and independent of $u(t)$; and $U_j$, $f_j$, $K_{FM}$ are respectively the amplitude, center frequency and chirp rate of the noise FM interference, all constant.
3) Noise product interference (Noise Product Jamming, NPJ)
Noise product interference is generated by multiplying the radar signal intercepted by the jammer with a noise signal in the time domain and transmitting the result at a large jamming-to-signal ratio (Jamming to Signal Ratio, JSR), so as to submerge the real target echo signal. Its mathematical expression is:
$$J_{\mathrm{NPJ}}(t)=s(t)\,n(t)$$

wherein $J_{\mathrm{NPJ}}(t)$ represents the noise product interference, $s(t)$ is the intercepted radar signal, and $n(t)$ is a Gaussian white noise signal.
4) Noise convolution interference (Noise Convolution Jamming, NCJ)
The noise convolution interference is performed by convolving the intercepted radar signal and the noise signal in the time domain, so that a suppression effect can be formed on the real target echo in the time domain and the frequency domain at the same time, and the mathematical expression is as follows:
$$J_{\mathrm{NCJ}}(t)=s(t)\otimes n(t)$$

wherein $J_{\mathrm{NCJ}}(t)$ represents the noise convolution interference, $s(t)$ is the intercepted radar signal, $n(t)$ is a Gaussian white noise signal, and $\otimes$ represents the convolution operator.
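Under the two models above, noise product and noise convolution interference reduce to a time-domain product and a time-domain convolution of the intercepted signal with white noise; a toy NumPy sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1e6
t = np.arange(0, 1e-3, 1 / fs)
s = np.cos(2 * np.pi * (5e4 * t + 0.5 * 1e8 * t ** 2))  # intercepted LFM radar pulse (toy)
n = rng.standard_normal(t.size)                         # Gaussian white noise

j_npj = s * n                           # noise product jamming: time-domain product
j_ncj = np.convolve(s, n, mode="full")  # noise convolution jamming: time-domain convolution
```

The product keeps the original signal duration, while the full convolution spreads the jamming over the combined support of the two signals, which is why it suppresses the echo in both time and frequency.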
5) Multi-point interference (Multi-Point Frequency Jamming, MPFJ)
The multi-point frequency signal is formed by directly superposing a plurality of single-frequency signals in the time domain; multi-point frequency interference is produced by high-power transmission from the jammer. Its mathematical expression is:
$$J_{\mathrm{MPFJ}}(t)=\sum_{k=1}^{K}J_k(t)=\sum_{k=1}^{K}A_k\cos\big(2\pi f_k t+\phi_k(t)\big)$$

wherein $J_{\mathrm{MPFJ}}(t)$ represents the multi-point frequency interference, $K$ represents the number of single-point frequency components, $J_k(t)$ denotes the $k$-th single-frequency interference signal, and $A_k$, $f_k$ and $\phi_k(t)$ are respectively the amplitude, carrier frequency and phase modulation function of the $k$-th single-frequency interference signal.
6) Swept frequency interference (Sweep Frequency Jamming SFJ)
Swept-frequency interference means that the interference signal performs periodic frequency scanning over a wide interference band so as to effectively cover the target echo signal. Compared with noise interference, swept-frequency interference achieves a more uniform interference spectrum and more efficient use of interference energy within the interference bandwidth, making overload of the adversary's radar receiver more likely. Generally, swept-frequency interference can be classified into sine-wave modulated (SFJ1), sawtooth-wave modulated (SFJ2) and trapezoidal-wave modulated (SFJ3) swept-frequency interference. Furthermore, swept-frequency interference can be mathematically modeled as:
$$J_{\mathrm{SFJ}}(t)=A\cos\Big(2\pi f_0 t+2\pi K_f\int_0^t f_m(\tau)\,d\tau\Big)$$

wherein $J_{\mathrm{SFJ}}(t)$ represents the swept-frequency interference, and $A$, $f_0$, $K_f$ and $f_m(t)$ respectively represent the amplitude, frequency, modulation factor and frequency modulation function of the interference signal.
Further, the spoofing interference mainly includes the following:
1) Dense decoy interference (Dense False Target Jamming, DFTJ)
Dense false target interference is generally generated by a full-pulse sampling delay-superposition method. Specifically, the jammer delays and superimposes the intercepted radar signals one by one to generate dense false target interference. This avoids the problem that, when the jammer simply forwards the intercepted radar signals in sequence, the overlong forwarding delay makes the false targets formed in the radar receiver too sparse. The time domain expression of dense false target interference is:
$$J_{\mathrm{DFTJ}}(t)=\sum_{m=1}^{M}A_m\,s(t-\tau_m)$$

wherein $J_{\mathrm{DFTJ}}(t)$ represents the dense false target interference, $s(t)$ represents the radar signal intercepted by the jammer, $A_m$ and $\tau_m$ respectively represent the amplitude and superposition delay of the $m$-th interference signal, and $M$ indicates the number of times the jammer superimposes the interference signal.
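The delay-superposition mechanism can be sketched with integer-sample delays (a toy illustration, not the patent's generator):

```python
import numpy as np

def dense_false_targets(s, amps, delays):
    """Dense false-target jamming: superimpose delayed, scaled copies of the
    intercepted radar signal s, one per (amplitude, integer-sample delay) pair."""
    j = np.zeros(len(s) + max(delays))
    for a, d in zip(amps, delays):
        j[d: d + len(s)] += a * s       # delay by d samples, scale by a, superimpose
    return j

pulse = np.ones(4)                      # toy intercepted pulse
j = dense_false_targets(pulse, amps=[1.0, 0.5], delays=[0, 6])
```

Each (amplitude, delay) pair produces one false target at a different apparent range; packing many closely spaced delays yields the "dense" false-target picture.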
2) Intermittent sampling forwarding interference (Interrupted Sampling and Forwarding Jamming, ISFJ)
In intermittent sampling forwarding interference, the jammer rapidly slices, samples, stores and forwards the intercepted radar signal based on DRFM technology to form a target-like interference signal that covers the real target in both the time and frequency domains, seriously affecting target detection and tracking. The time domain expression of intermittent sampling forwarding interference can be expressed as:
$$J_{\mathrm{ISFJ}}(t)=p(t-\tau)\,s(t-\tau),\qquad p(t)=\sum_{n=1}^{N}\mathrm{rect}\Big(\frac{t-2(n-1)\tau}{\tau}\Big)$$

wherein $J_{\mathrm{ISFJ}}(t)$ represents the intermittent sampling forwarding interference, $s(t)$ represents the radar signal intercepted by the jammer, $\mathrm{rect}(\cdot)$ is the rectangular window, $\tau$ represents the pulse width of the slice pulse, and $N$ represents the number of slice pulses; the jammer alternately samples for a duration $\tau$ and forwards for a duration $\tau$, so each stored slice is retransmitted with delay $\tau$.
3) Intermittent sampling repeat interference (Interrupted Sampling and Repeating Jamming, ISRJ)
Intermittent sampling repeat interference differs markedly from intermittent sampling forwarding interference in that, after sampling a section of the radar signal, the jammer repeats the current sampled signal a preset number of times, then samples another small section and repeats it again, and so on until the radar signal ends. The time domain expression of intermittent sampling repeat forwarding is:
$$J_{\mathrm{ISRJ}}(t)=\sum_{n=1}^{N}\sum_{m=1}^{M}\mathrm{rect}\Big(\frac{t-t_n-m\tau}{\tau}\Big)\,s(t-m\tau)$$

wherein $J_{\mathrm{ISRJ}}(t)$ represents the intermittent sampling repeat forwarding interference, $s(t)$ represents the radar signal intercepted by the jammer, $\mathrm{rect}(\cdot)$ is the rectangular window, $\tau$ represents the pulse width of the slice pulse, $N$ represents the number of slice pulses, $M$ indicates the number of times the jammer repeatedly forwards each slice pulse, and $t_n$ denotes the start time of the $n$-th sampling slice.
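The alternating sample/repeat-forward timeline can be sketched on integer samples (toy values, hypothetical function name):

```python
import numpy as np

def isrj(s, slice_len, n_repeat):
    """Intermittent sampling repeat jamming (toy, integer samples): alternately
    sample a slice of s, then retransmit it n_repeat times before the next slice."""
    out = []
    pos = 0
    while pos + slice_len <= len(s):
        sl = s[pos: pos + slice_len]
        out.extend(np.tile(sl, n_repeat))   # forward the stored slice n_repeat times
        pos += slice_len * (1 + n_repeat)   # skip ahead while busy forwarding
    return np.array(out)

s = np.arange(12, dtype=float)              # intercepted signal (toy sample values)
j = isrj(s, slice_len=2, n_repeat=2)
```

With a 2-sample slice repeated twice, the jammer emits slice [0, 1] twice, misses samples 2-5 while transmitting, then emits slice [6, 7] twice, illustrating the sample-repeat-sample cycle described above.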
4) Sample pulse interference (Sample Pulse Forwarding Jamming, SPFJ)
Sample pulse interference, as a novel self-defense interference pattern, meets the transmit-receive isolation requirement by shortening the jammer's sampling time of the radar signal. Its modulation principle is as follows: the jammer first uses DRFM to store the leading edge of the intercepted radar signal, i.e. the pulse leading-edge signal, and then repeatedly forwards it to obtain the sample pulse forwarding interference. The time domain expression of sample pulse forwarding interference is:
(Equation given as an image in the source.)
wherein the quantities in the above expression denote, respectively: the sample-pulse forwarding interference; the radar signal intercepted by the jammer; the pulse width of the sampling pulse; and the number of times the jammer repeatedly forwards the stored pulse.
5) Multi-decoy interference (Multiple False Target Jamming, MFTJ)
Multi-decoy interference is usually formed by the jammer modulating the intercepted radar signal and forwarding it sequentially. It differs significantly from dense-decoy interference in three respects: i) within one interference period, multi-decoy interference generates markedly fewer decoys than dense-decoy interference; ii) multi-decoy interference adopts a sequential forwarding strategy, whereas dense-decoy interference adopts a delay-superposition repeated-forwarding strategy, so the decoys formed by multi-decoy interference are noticeably sparser; iii) multi-decoy interference is modulated with high fidelity, and the quality of the generated decoys is clearly superior to that of dense-decoy interference, making it harder for the radar to distinguish true targets from decoys. The time-domain expression of multi-decoy interference is:
(Equation given as an image in the source.)
wherein the quantities in the above expression denote, respectively: the multi-decoy interference; the radar signal intercepted by the jammer; the period at which the jammer forwards the interference signal; the number of repeated forwardings by the jammer; and the amplitude and Doppler information of each interference signal.
6) Comb spectrum modulation interference (Comb Spectrum Modulation Jamming, CSJ)
In general, comb-spectrum modulation interference is coherent spoofing interference generated by the jammer taking the time-domain product of a comb spectrum signal (Comb Spectrum Signal, CSS) and the intercepted radar signal; it is essentially multi-component frequency-shifted interference. The time-domain expression of comb-spectrum modulation interference is:
(Equation given as an image in the source.)
wherein the quantities in the above expression denote, respectively: the comb-spectrum modulation interference; the radar signal intercepted by the jammer; the comb spectrum signal; the number of comb-spectrum components; and the amplitude and carrier frequency of each comb-spectrum component.
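As a rough illustration of the time-domain product just described (the patent's own expression is given only as an equation image), a comb spectrum signal can be modelled as a sum of complex tones and multiplied with an intercepted pulse; the tone frequencies and amplitudes below are arbitrary assumptions:

```python
import numpy as np

def comb_spectrum_jam(signal, fs, carrier_freqs, amplitudes):
    """Comb-spectrum modulation jamming sketch: multiply the intercepted
    radar signal by a comb spectrum signal (a sum of complex tones) in
    the time domain, producing multi-component frequency-shifted jamming."""
    t = np.arange(len(signal)) / fs
    css = sum(a * np.exp(2j * np.pi * f * t)
              for a, f in zip(amplitudes, carrier_freqs))
    return signal * css

fs = 10e6
t = np.arange(0, 50e-6, 1 / fs)
s = np.exp(1j * np.pi * (2e6 / 50e-6) * t ** 2)   # toy LFM pulse
j = comb_spectrum_jam(s, fs, carrier_freqs=[-1e6, 0.0, 1e6],
                      amplitudes=[1.0, 1.0, 1.0])
```

Each tone at frequency f shifts a copy of the pulse spectrum by f, which is why the text calls this multi-component frequency-shifted interference.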
In general, composite interference mainly comprises the modes "suppression + suppression", "suppression + spoofing", and "spoofing + spoofing". Among these, "suppression + spoofing" combines the "suppression" and "spoofing" characteristics, so it achieves a better jamming effect in practice and is widely used in electronic countermeasures. This embodiment therefore focuses its analysis and research on composite interference of the "suppression + spoofing" type. As a preferred implementation, this embodiment combines 7 kinds of suppression interference with 5 kinds of spoofing interference to generate 35 kinds of composite interference.
The invention applies the short-time Fourier transform to the more than 50 interference signals above to obtain interference time-frequency images, which serve as the reference data set for interference identification.
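A minimal sketch of turning a simulated interference signal into a time-frequency image via the short-time Fourier transform; the window length, hop size, and LFM parameters are illustrative assumptions, not the patent's settings:

```python
import numpy as np

def stft_magnitude(x, win_len=128, hop=32):
    """Short-time Fourier transform magnitude: slide a Hann window over
    the signal and take the FFT of each frame, yielding a (freq, time)
    image such as those used here as the interference data set."""
    win = np.hanning(win_len)
    frames = [x[i:i + win_len] * win
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.fft(np.asarray(frames), axis=1)).T

fs = 10e6
t = np.arange(0, 100e-6, 1 / fs)
sig = np.exp(1j * np.pi * (2e6 / 100e-6) * t ** 2)  # toy LFM pulse
tf_image = stft_magnitude(sig)
```

In the patent's pipeline, images like `tf_image` (one per interference type and JNR) form the samples that are later split into training, validation, and test sets.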
Step 2: and constructing a network model comprising a feature extraction module and a prototype classification module based on the metric learning idea.
Specifically, referring to fig. 2, fig. 2 is a block diagram of the aggregate-attention deformable-convolution prototype network according to an embodiment of the present invention, where the feature extraction module comprises deformable convolution layers and aggregate attention blocks, and the prototype classification module is based on polynomial loss.
Specifically, the feature extraction network in this embodiment is built on the ResNet18 structure. First, the deformable convolution layers (Deformable Convolutional Layers, DCLs) enhance the model's ability to characterize complex and unknown geometric transformations and extract spatial attention over the features to adaptively select important regions, thereby enlarging the effective receptive field of the convolution. Then, the Aggregate-Attention Blocks (A²Bs) generate an attention mask for the channel domain and use it to automatically select important channels, further extracting more discriminative features. Finally, based on the Polynomial Loss, the prototype classification module learns a metric space well suited to interference recognition, in which better inter-class separability can be obtained.
2. Training network
Step 3: and training the network model by using the training set, and storing the optimal network model parameters by using the verification set until the network converges to obtain a trained network model.
In this embodiment, an episodic training strategy is mainly used to study the "N-way K-shot" recognition problem. Specifically, the small-sample recognition problem is usually modeled as an "N-way K-shot" problem: the model draws K labeled images from each of N classes and is required to correctly classify the remaining unlabeled images. Unlike conventional recognition problems, which require the training-set label domain to be identical to those of the validation and test sets, the small-sample recognition problem requires the model to classify new classes after training. The images used for training and the images used for validation and testing must therefore come from disjoint label domains.
More specifically, a given data set D is divided into three parts, D_train, D_val, and D_test, where (x_i, y_i) denotes the original feature vector and label information of the i-th image. Furthermore, the label sets C_train, C_val, and C_test of the three parts are pairwise disjoint, and together they cover the full label set.
In the meta-training phase, this embodiment randomly selects N categories from D_train and randomly samples K + Q images from each category to generate a meta-task. Within each of the N categories, the K + Q images are further divided into two sets containing K images and Q images respectively, i.e., the support set S and the query set Q. The purpose of FSL is to exploit the prior knowledge contained in the limited labeled samples of the given support set S to classify the previously unseen query set Q:
(Equation given as an image in the source.)
where the left-hand side denotes the probability that a query sample is identified as the k-th class of the support set S. Similarly, this embodiment defines meta-tasks on the data sets D_val and D_test for meta-validation and meta-testing. Furthermore, the objective of this embodiment is to train the proposed model to learn transferable deep meta-knowledge from these meta-learning tasks, then save the optimal model through meta-validation, and finally report the generalization performance as the average recognition accuracy over the meta-test tasks.
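The episode construction described above (randomly pick N classes, then split K support and Q query images per class) can be sketched as follows; the toy class count and feature shapes are assumptions for illustration:

```python
import numpy as np

def sample_episode(data, labels, n_way, k_shot, q_query, rng):
    """Build one N-way K-shot meta-task: pick n_way classes, then split
    k_shot + q_query samples of each class into support and query sets."""
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for cls in classes:
        idx = rng.permutation(np.where(labels == cls)[0])[:k_shot + q_query]
        support.append(data[idx[:k_shot]])
        query.append(data[idx[k_shot:]])
    return np.concatenate(support), np.concatenate(query), classes

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 64))        # 20 toy classes x 10 samples
labels = np.repeat(np.arange(20), 10)
S, Q, cls = sample_episode(data, labels, n_way=5, k_shot=1, q_query=5, rng=rng)
```

Drawing the episodes for meta-training, meta-validation, and meta-testing from disjoint label sets, as required above, amounts to calling this sampler on three label-disjoint partitions of the data.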
Step 3 specifically includes:
31) generating a support set and a query set from the training set by adopting an episodic training mechanism and a random sample extraction method;
32) extracting features with long-range context information from the samples of the support set and the query set by using the deformable convolution layer, obtaining a geometric enhancement feature map;
33) performing channel-domain weighting on the geometric enhancement feature map by using the aggregate attention block, obtaining a channel enhancement feature map;
34) classifying the channel enhancement feature map by using the prototype classification module to obtain the sample label.
In this embodiment, the specific process of extracting joint-attention features with the feature extraction module is as follows. First, for the meta-task of the e-th episode of training, the samples of the support set and of the query set are assembled into a single set and input into the feature extraction module, i.e.:
(Equation given as an image in the source.)
wherein the parameters of the feature extraction module comprise those of the aggregate-attention deformable convolutional layers (Aggregate-Attention Deformable Convolutional Layers, A²-DCLs) and those of the last fully connected layer, and the output is the final visual feature with joint attention. Furthermore, the A²-DCLs are mainly composed of the DCLs and the A²Bs.
The specific implementation principle of each module is described in detail below with reference to the accompanying drawings.
Referring to fig. 3, fig. 3 is a schematic diagram of a deformable convolution layer according to an embodiment of the present disclosure. In this embodiment, extracting features with long-range context information from the support-set and query-set samples using the deformable convolution layer to obtain a geometric enhancement feature map comprises:
obtaining a feature map based on the samples of the support set and the query set;
introducing a set of learnable offsets into the feature map to augment the sampling process on the regular grid, obtaining an offset feature map;
and weighting the offset feature map with a learnable modulation amount to obtain the geometric enhancement feature map.
Unlike conventional convolution operations, deformable convolution achieves invariance to geometric transformations (e.g., translation, rotation, and scale) by introducing offset and modulation mechanisms, while having the ability to adaptively select important regions in the feature map. Specifically, the DCLs decompose the convolution into three steps: first, a set of learnable offsets is introduced into the input feature map to augment the sampling process on the regular grid; then, the offset feature map is weighted with a learnable modulation amount; finally, the sampled features are weighted and summed with a conventional convolution kernel.
It is noted that both the offset estimation and the modulation estimation are performed in two dimensions. For an input image x, let the output feature map be y, and let p_0 = (x_0, y_0) denote a two-dimensional sampling position on y. Taking a 3×3 convolution kernel as an example, the deformable convolution may be defined as:

y(p_0) = Σ_{p_n ∈ R} ω(p_n) · x(p_0 + p_n + Δp_n) · Δm_n

where R denotes the 3×3 neighborhood centered at p_0, p_n ∈ R is the integral offset of the convolution operation, ω(p_n) denotes the weight of the 3×3 convolution kernel, and {Δp_n | n = 1, 2, …, N} and {Δm_n | n = 1, 2, …, N} respectively denote the learnable offsets and modulation amounts in the deformable convolution, with each Δm_n in the range [0, 1]. The learnable offset Δp_n is added to the integral offset p_n, so the sampling position of the deformable convolution kernel varies in the feature space.
Thus, irregular sampling centers p_0 + p_n + Δp_n can be handled adaptively by the deformable convolution. However, since Δp_n is usually fractional, this embodiment computes x(p_0 + p_n + Δp_n) by bilinear interpolation, thereby generating an accurate offset. Furthermore, Δp_n and Δm_n are obtained by applying separate convolution layers to the same input feature map; these convolution layers have the same spatial resolution and dilation as the current convolution layer. Thus, assuming the input feature map has height H, width W, and C channels (i.e., number of filters), the offset field Δp and the modulation field Δm can be expressed as:

Δp = Conv(X),  Δm = σ(Conv(X))

where Conv(·) and σ(·) respectively denote the convolution operator and the Sigmoid function, Δp having two channels per sampling point (2N in total) and Δm having N channels.
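A minimal single-channel, single-position sketch of the modulated deformable sampling defined above, including the bilinear interpolation used for fractional offsets; this illustrates the standard mechanism under simplifying assumptions, not the patent's implementation:

```python
import numpy as np

def bilinear(x, py, px):
    """Bilinear interpolation of 2-D map x at fractional position (py, px)."""
    h, w = x.shape
    y0, x0 = int(np.floor(py)), int(np.floor(px))
    val = 0.0
    for yy, wy in ((y0, 1 - (py - y0)), (y0 + 1, py - y0)):
        for xx, wx in ((x0, 1 - (px - x0)), (x0 + 1, px - x0)):
            if 0 <= yy < h and 0 <= xx < w:
                val += wy * wx * x[yy, xx]
    return val

def deform_conv_point(x, weight, offsets, mods, p0):
    """Modulated deformable convolution at one output position p0:
    y(p0) = sum_n w(p_n) * x(p0 + p_n + dp_n) * dm_n  (3x3 kernel)."""
    out, n = 0.0, 0
    for dy in (-1, 0, 1):                      # p_n over the 3x3 grid R
        for dx in (-1, 0, 1):
            py = p0[0] + dy + offsets[n][0]    # learnable offset dp_n
            px = p0[1] + dx + offsets[n][1]
            out += weight[dy + 1, dx + 1] * bilinear(x, py, px) * mods[n]
            n += 1
    return out

x = np.arange(25, dtype=float).reshape(5, 5)
w = np.ones((3, 3)) / 9.0
zero_off = [(0.0, 0.0)] * 9
ones_mod = [1.0] * 9
y_center = deform_conv_point(x, w, zero_off, ones_mod, (2, 2))
```

With zero offsets and unit modulation the operation reduces exactly to a regular 3×3 convolution, which is a useful sanity check on any deformable-convolution implementation.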
Referring to fig. 4, fig. 4 is a schematic diagram of an aggregate attention block according to an embodiment of the present invention. In this embodiment, the channel domain weighting is performed on the geometric enhancement feature map by using the aggregate attention block to obtain a channel enhancement feature map, including:
And respectively carrying out global maximum pooling, global average pooling and global soft pooling on the geometric enhancement feature map, and carrying out feature fusion to obtain the channel enhancement feature map.
To further improve channel-attention performance without introducing additional learnable parameters or increasing model complexity, this embodiment proposes the Aggregate-Attention Blocks (A²Bs). Specifically, multiple kinds of effective information are fused by aggregating different global pooling operations (namely global max pooling, global average pooling, and global soft pooling) in the channel domain, and a more discriminative channel-domain attention mask is then extracted to adaptively select important channel information. Assume the feature map output by the DCLs has height H, width W, and C channels. The channel weights in the A²Bs can then be expressed as:

(Equation given as an image in the source.)

where σ(·) is the Sigmoid function, FC(·) denotes a fully connected layer, and W_1 and W_2 are the weights of the fully connected layers; the three pooled descriptors are, respectively, the features after global max pooling, global average pooling, and global soft pooling. Furthermore, to reduce the network parameters and model complexity, the dimensions of W_1 and W_2 are reduced by a reduction ratio r. GMP(·), GAP(·), and GSP(·) respectively denote global max pooling, global average pooling, and global soft pooling, whose mathematical expressions are:

(Equations given as images in the source.)
In summary, the output of the feature extraction module can be expressed as:

(Equation given as an image in the source.)

where the operator denotes the channel-wise product. In this way, the model extracts more discriminative high-dimensional features from the raw image data. More specifically, the feature extraction module combines the spatial attention of the DCLs with the aggregated channel attention of the A²Bs to learn more discriminative features, i.e., joint attention that differentially guides the class prototypes.
Referring to fig. 5, fig. 5 is a schematic diagram of a prototype classification module according to an embodiment of the invention. In this embodiment, classifying the channel enhancement feature map by using the prototype classification module to obtain the sample label comprises:
computing, in the channel-enhanced feature space, the distance from each query-set sample to each support-set class prototype, and assigning the query sample the class label of the nearest prototype, thereby obtaining the predicted label of the query sample.
In addition, after each round of training, the validation set is used to save the optimal network model parameters; this is done mainly by computing the corresponding polynomial loss function and updating the parameters of the network model accordingly.
Specifically, the features obtained from the feature extraction module characterize more effective class prototypes, and the recognition task is then completed by the prototype classification module. The classifier predicts the true label of a query sample by comparing, in the metric space, the distance between the query sample and each class prototype, and uses the polynomial loss to optimize the model for optimal inter-class separability. Ideally, a metric-based FSL method classifies query samples according to their distance from the center of each support-set category. To simplify the distance computation, a prototype c_k is used to approximately replace the true center of category k. Typically, the class prototype is obtained by averaging the feature maps of the support-set samples.
Specifically, the prototype c_k of category k is computed as:

c_k = (1 / |S_k|) · Σ_{(x_i, y_i) ∈ S_k} F(x_i)

where |S_k| denotes the number of support-set samples of category k and F(·) denotes the feature extraction module. The recognition task is accomplished by computing a probability distribution over the distances from the query samples to the support-set prototypes:
p(y = k | x) = exp(−d(F(x), c_k)) / Σ_{k′} exp(−d(F(x), c_{k′}))

where d(·,·) denotes the Euclidean distance operator and c_k denotes the prototype of category k. More specifically, the model classifies a query sample x by outputting the class label of the prototype with the smallest distance from it:

ŷ = argmin_k d(F(x), c_k)
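The prototype computation and nearest-prototype classification described above can be sketched as follows; this is the standard prototypical-network rule, with the feature extractor omitted and toy 2-D features used for illustration:

```python
import numpy as np

def prototypes(support, support_labels, classes):
    """Class prototype c_k = mean of the support features of class k."""
    return np.stack([support[support_labels == k].mean(axis=0)
                     for k in classes])

def classify(query, protos):
    """Softmax over negative Euclidean distances to the prototypes;
    the predicted label is the index of the nearest prototype."""
    d = np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=-1)
    logits = -d
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return p, p.argmax(axis=1)

support = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
support_labels = np.array([0, 0, 1, 1])
protos = prototypes(support, support_labels, classes=[0, 1])
probs, pred = classify(np.array([[0.1, 0.1], [4.9, 5.1]]), protos)
```

Note that taking the argmax of the softmax probabilities is equivalent to taking the argmin of the distances, matching the two formulations above.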
Cross entropy characterizes the difference between the distribution learned by the model and the true distribution by measuring the distance between two probability distributions, and is therefore widely used in machine learning and deep learning. In computer vision tasks, the cross-entropy loss is generally expressed as:

L_CE = −log(P_t)

where P_t denotes the model's predicted probability for the true class of the target in the current episode of training. (Its specific form in the method of the present invention is given as an equation image in the source.) Furthermore, the Taylor expansion of L_CE in the basis (1 − P_t) can be expressed as:

L_CE = −log(P_t) = Σ_{j=1}^{∞} (1/j) · (1 − P_t)^j
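The expansion above can be checked numerically: truncating the (1 − P_t)-based series at a sufficiently large order reproduces −log(P_t):

```python
import math

def ce(p):
    """Cross entropy for the true class with predicted probability p."""
    return -math.log(p)

def ce_taylor(p, terms=200):
    """Partial sum of the (1 - p)-based Taylor series of -log(p)."""
    return sum((1 - p) ** j / j for j in range(1, terms + 1))

approx = ce_taylor(0.7)
exact = ce(0.7)
```

For probabilities bounded away from zero the series converges geometrically, so a few hundred terms already agree with the exact value to machine precision.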
However, owing to its fixed functional form, the cross-entropy loss cannot be flexibly adapted to the full variety of computer vision tasks. The present invention therefore introduces a more flexible and general loss-function framework: PolyLoss. The key idea of PolyLoss is to decompose the cross-entropy loss into a weighted sum of polynomial bases, whose functional form can be expressed as:

L_Poly = Σ_{j=1}^{∞} α_j · (1 − P_t)^j

where α_j is the j-th polynomial coefficient and P_t denotes the model's predicted probability for the true class of the target. By adjusting α_j to tune the importance of the corresponding polynomial basis, an optimal loss-function form can be formulated for each task. However, optimizing an infinite number of polynomial coefficients α_j is computationally prohibitive, so optimizing the most general form of PolyLoss is not feasible. To reduce the parameter-optimization space and prevent the model from failing to converge, the present invention adopts a more concise and effective loss function whose mathematical expression is:

L_Poly-1 = −log(P_t) + ε_1 · (1 − P_t)

where ε_1 is the first polynomial coefficient. The parameters of the model are updated according to this polynomial loss function until the network converges.
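Assuming the Poly-1 form shown above, the loss is a one-line modification of cross entropy; with ε_1 = 0 it reduces exactly to cross entropy:

```python
import math

def poly1_loss(p_t, eps1):
    """Poly-1 sketch: cross entropy plus an extra weight eps1 on the
    first polynomial term (1 - p_t)."""
    return -math.log(p_t) + eps1 * (1.0 - p_t)

base = poly1_loss(0.8, eps1=0.0)   # reduces to plain cross entropy
tuned = poly1_loss(0.8, eps1=2.0)  # heavier penalty on low-confidence hits
```

In the patent's training procedure, ε_1 is the hyperparameter chosen on the validation set for each N-way K-shot setting.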
3. Test network
Step 4: and performing performance evaluation on the trained network model by using the test set, and performing identification of small sample radar active interference by using the trained network model.
Aiming at the sample scarcity faced by radar active interference identification, the invention provides a small-sample learning framework based on metric learning, addressing the sample scarcity, the poor recognition accuracy on novel interference, and the weak transfer capability of existing methods. Meanwhile, the invention provides a small-sample radar active interference identification method based on an aggregate-attention deformable-convolution prototype network, which targets the weak capability of existing small-sample learning methods to characterize discrete and weak features in interference time-frequency images; it can accurately extract interference features even when samples are extremely scarce, and improves the model's recognition robustness to interference with discrete and weak features.
Example two
A simulation test is performed on the small-sample radar active interference identification method of the first embodiment and compared with existing FSL methods to verify the beneficial effects of the invention.
1. Data set
In order to evaluate the performance of the proposed interference recognition algorithm, 50 interference signals are first simulated on the MATLAB software platform. Considering the differences in modulation mode and jamming purpose between suppression interference and spoofing interference, their JSR ranges are set to 30-50 dB and 10-30 dB, respectively. Furthermore, the parameters and index ranges of the composite interference depend on its constituent signal components. Finally, a data set is generated according to the modulation principle of each interference signal under different JNRs (JNR = 0 dB, 10 dB, 20 dB, 30 dB), and time-frequency images of the 50 interference signals are produced via the short-time Fourier transform, as shown in fig. 6. For each JNR, each interference type randomly generates 1000 time-frequency images within the interference simulation parameters, and the data set contains 50 × 1000 samples in total, each of size 1 × 128. For convenience of the experimental analysis below, the sub-data sets at different JNRs are denoted JamSet_0dB, JamSet_10dB, JamSet_20dB, and JamSet_30dB, and their union is named JamSet, i.e., JamSet = {JamSet_0dB, JamSet_10dB, JamSet_20dB, JamSet_30dB}.
Further, the obtained data set is divided into a training set, a verification set and a test set according to a certain proportion, and the division strategy is shown in fig. 7.
2. Implementation details
1) Feature extraction network architecture
For fair comparison with existing FSL methods, the present invention builds the feature extraction network on ResNet18. Specifically, the ResNet18 consists of 8 residual blocks, where each residual block contains 3 convolutional blocks (identical in structure to the Conv64F convolutional block) and one residual connection layer, and each convolutional block consists of one convolutional layer, one batch-normalization layer, one ReLU layer, and one max-pooling layer.
2) Scenario training strategy
The invention adopts an episodic training strategy for fast learning in the N-way K-shot experiments. Specifically, for the 5-way 1-shot experiment, each category contains 1 support sample and 15 query samples; thus, in one meta-training episode, the total number of samples input to the model is 5 × (1 + 15) = 80. Similarly, for the 5-way 5-shot experiment, the total number of input samples is 5 × (5 + 15) = 100. For the first coefficient ε_1 of the prototype classification module, the parameter value yielding the highest recognition accuracy on the validation set is selected, separately for the 5-way 1-shot and 5-way 5-shot settings, according to the experimental results. In addition, the invention uses the Adam optimizer to train the model end-to-end, with the initial learning rate set to 0.001 (the weight-decay coefficient appears only as an image in the source). For all models, 100 episodes of meta-training are performed on JamSet, each episode containing 2000 meta-tasks. The input size of all models used in this section is likewise given only as an image in the source.
3) Verification and testing
After each episode of training is completed, the hyperparameters are tuned on the validation set and the optimal model with the highest recognition accuracy is saved; finally, the average recognition accuracy of the model on the test set is reported. As in the episodic training phase, data in the validation and testing phases are also divided into support and query sets for model input. 600 meta-tasks are set up in each of the validation and testing phases, and the average accuracy is reported with 95% confidence intervals over these meta-tasks.
3. Experimental results
The interference time-frequency images generated by simulation are input into the proposed algorithm, A²DCNet, and the five comparison algorithms for testing, and the recognition accuracies are obtained. Table 1 compares the proposed algorithm with the conventional FSL algorithms on the simulated data set JamSet.
Table 1 comparison of the test results of the proposed algorithm and the conventional FSL algorithm
(Table 1 is given as an image in the source.)
The above results indicate that A²DCNet obtains the highest recognition accuracy on JamSet among all compared FSL algorithms, reaching 87.56% and 94.34% on 5-way 1-shot and 5-way 5-shot, respectively. Compared with the best accuracies of the conventional FSL algorithms, i.e., 84.77% on 5-way 1-shot and 91.18% on 5-way 5-shot (obtained by CAN), A²DCNet improves by at least 2.79% and 3.16%. In particular, compared with the typical prototype-based algorithm PN, the recognition accuracy of A²DCNet on 5-way 1-shot and 5-way 5-shot improves by 5.53% and 6.82%, respectively. The proposed algorithm is therefore clearly superior to the other conventional FSL algorithms on this complex interference data set, demonstrating the robustness of the interference recognition model when samples are extremely scarce.
To highlight the transfer performance of the proposed A²DCNet under different JNRs, this example sets up several comparative experiments on JamSet. Specifically, all methods first save their optimal models through episodic training on JamSet, and these models are then tested directly on the remaining sub-data sets; the test results are shown in Table 2.
Table 2 comparison of the test results of the proposed algorithm and the conventional FSL algorithm
(Table 2 is given as an image in the source.)
As can be seen from Table 2, when the trained models are tested directly on JamSet_0dB, the proposed algorithm achieves the highest recognition accuracies among the compared FSL algorithms, namely 66.35% and 82.81%. Similarly, the proposed algorithm clearly outperforms the conventional FSL algorithms on JamSet_10dB and JamSet_20dB. These results indicate that the proposed algorithm has the best transfer performance among the compared methods, showing great potential for application in real jamming scenarios.
To further verify the superiority of the proposed algorithm, feature visualization and t-SNE clustering visualization are performed for all algorithms. Referring to fig. 8, fig. 8 shows the feature visualization results of the proposed algorithm and the conventional FSL algorithms. The results in fig. 8 show that the proposed algorithm characterizes the interference features in the time-frequency images more completely. Moreover, the feature visualization results for NFMJ+ISRJ and SFJ2+DFTJ show that weak features are represented more finely by the proposed algorithm. The proposed algorithm's ability to characterize discrete interference features and weak features in composite interference is therefore clearly superior to that of the conventional FSL algorithms.
FIG. 9 shows the t-SNE clustering results. As can be seen from fig. 9, for the interferences that are hardest to cluster, i.e., NFMJ, NFMJ+DFTJ, NFMJ+ISFJ, NFMJ+ISRJ, and SFJ2+DFTJ, the proposed algorithm shows greater robustness under small-sample conditions than the conventional FSL algorithms.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (7)

1. A method for identifying small sample radar active interference, comprising:
constructing a plurality of radar active interference signal models, and performing short-time Fourier transform on the radar active interference signal models to obtain an interference time-frequency image; taking the interference time-frequency image as a reference data set for interference identification, and dividing the reference data set into a training set, a verification set and a test set;
constructing an aggregate-attention deformable-convolution prototype network model comprising a feature extraction module and a prototype classification module based on the metric learning idea; the feature extraction module comprises a deformable convolution layer and an aggregate attention block, and the prototype classification module is a prototype classification module based on polynomial loss;
training the network model by using the training set, and storing optimal network model parameters by using the verification set until the network converges to obtain a trained network model; comprising the following steps:
generating a support set and a query set from the training set by adopting a scenario training mechanism and utilizing a random sample extraction method;
obtaining a feature map based on the samples of the support set and the query set;
introducing a group of learnable offsets into the feature map to enhance the sampling process on the regular grid, so as to obtain an offset feature map;
weighting the offset feature map by using the learnable modulation amounts to obtain a geometric enhancement feature map; wherein the acquisition process of the geometric enhancement feature map is expressed as:

y(p_0) = Σ_{p_n ∈ R} ω(p_n) · x(p_0 + p_n + Δp_n) · Δm_n

wherein y represents the geometric enhancement feature map output by the variable convolution layer, x represents the feature map, p_0 = (x_0, y_0) represents a two-dimensional sampling position in x, ω(p_n) represents the weight of the 3×3 convolution kernel, R represents the 3×3 neighborhood centered on p_0, i.e., the integral offsets of the convolution operation, {Δp_n | n = 1, 2, …, N} and {Δm_n | n = 1, 2, …, N} represent the learnable offsets and modulation amounts in the variable convolution, respectively, and the value of Δm_n is in the range [0, 1];
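The modulated variable (deformable) convolution above can be illustrated with a minimal single-channel NumPy sketch. The fixed offsets and modulation amounts here stand in for quantities that, in the actual network, would be predicted by a learned layer:

```python
# Minimal sketch of y(p0) = sum_n w(p_n) * x(p0 + p_n + dp_n) * dm_n
# with bilinear interpolation at fractional sampling positions.
import numpy as np

def bilinear(x, py, px):
    """Bilinearly sample x at fractional position (py, px); zero outside."""
    h, w = x.shape
    y0, x0 = int(np.floor(py)), int(np.floor(px))
    val = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < h and 0 <= xx < w:
                val += (1 - abs(py - yy)) * (1 - abs(px - xx)) * x[yy, xx]
    return val

def deform_conv2d(x, weight, offsets, mods):
    """weight: (3,3); offsets: (H,W,9,2) as (dy,dx); mods: (H,W,9) in [0,1]."""
    h, w = x.shape
    grid = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]  # regular grid R
    y = np.zeros_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for n, (gy, gx) in enumerate(grid):
                py = i + gy + offsets[i, j, n, 0]   # p0 + p_n + dp_n
                px = j + gx + offsets[i, j, n, 1]
                acc += weight[gy + 1, gx + 1] * bilinear(x, py, px) * mods[i, j, n]
            y[i, j] = acc
    return y

x = np.arange(25, dtype=float).reshape(5, 5)
w = np.full((3, 3), 1 / 9)          # averaging kernel
off = np.zeros((5, 5, 9, 2))        # zero offsets
mod = np.ones((5, 5, 9))            # modulation = 1
y = deform_conv2d(x, w, off, mod)   # reduces to an ordinary 3x3 convolution
```

With zero offsets and unit modulation the operation collapses to a standard 3×3 convolution; nonzero learned offsets deform the sampling grid toward the interference structures in the time-frequency image.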
Carrying out channel domain weighting on the geometric enhancement feature map by utilizing the aggregation attention block to obtain a channel enhancement feature map;
classifying the channel enhancement feature map by using the prototype classification module to obtain a sample real label;
and performing performance evaluation on the trained network model by using the test set, and performing recognition of small sample radar active interference by using the trained network model.
2. The method of claim 1, wherein the radar active interference signal models include suppression jamming, deception jamming, and composite jamming formed by combining the suppression jamming and the deception jamming.
3. The method for identifying small sample radar active interference according to claim 1, wherein the performing channel domain weighting on the geometric enhancement feature map by using the aggregate attention block to obtain a channel enhancement feature map comprises:
and respectively carrying out global maximum pooling, global average pooling and global soft pooling on the geometric enhancement feature map, and carrying out feature fusion to obtain a channel enhancement feature map.
4. The method of claim 3, wherein the process of obtaining the channel enhancement feature map is formulated as:

F_c = W ⊗ F_g , W = σ( FC(F_max) + FC(F_avg) + FC(F_soft) )

wherein F_c represents the channel enhancement feature map output by the aggregate attention block, ⊗ represents the channel dimension product, F_g represents the geometric enhancement feature map output by the variable convolution layer, W represents the output weight matrix, σ(·) is the Sigmoid function, FC(·) represents the fully connected layers with weights W_D and W_U, GMP(·), GAP(·) and GSP(·) represent global maximum pooling, global average pooling and global soft pooling, respectively, and F_max = GMP(F_g), F_avg = GAP(F_g) and F_soft = GSP(F_g) represent the features after global maximum pooling, global average pooling and global soft pooling, respectively.
5. The method for identifying small sample radar active interference according to claim 1, wherein said classifying the channel enhancement feature map by using the prototype classification module to obtain a sample real label comprises:
calculating the distance from each query set sample to the support set prototype samples in the channel enhancement feature map, and outputting the class label of the prototype sample with the minimum distance to the query set sample, so as to classify the query set sample and obtain the real label of the query set sample.
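The nearest-prototype classification step can be sketched as in a standard prototypical network; the feature dimensions and synthetic data below are illustrative:

```python
# Prototype classification sketch: prototypes are class means of embedded
# support samples; each query sample takes the label of the nearest
# prototype under Euclidean distance.
import numpy as np

rng = np.random.default_rng(1)
n_way, k_shot, dim = 3, 5, 16
# Synthetic embedded support samples, class c centered at 5*c in every dim
support = rng.standard_normal((n_way, k_shot, dim)) \
          + 5.0 * np.arange(n_way)[:, None, None]
prototypes = support.mean(axis=1)                        # (n_way, dim)

query = prototypes[2] + 0.1 * rng.standard_normal(dim)   # a query near class 2
d = np.linalg.norm(prototypes - query, axis=1)           # distances to prototypes
pred = int(np.argmin(d))                                 # label of nearest prototype
print(pred)  # → 2
```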
6. The method for identifying small sample radar active interference according to claim 1, wherein said using the verification set to store optimal network model parameters comprises:
calculating the corresponding polynomial loss function after each training round is completed, and updating the parameters of the network model accordingly, so as to save the optimal network model parameters.
7. The method of claim 6, wherein the polynomial loss function is expressed as:
L = (1/N_Q) Σ_{i=1}^{N_Q} [ −log p(y_i | x_i, S) + ε_1 (1 − p(y_i | x_i, S)) ]

wherein C represents the total number of categories, c represents a certain category, c = 1, 2, …, C, Q represents the query set, ε_1 represents the leading polynomial coefficient, p(y_i | x_i, S) represents the probability distribution over the distances from a query set sample to the support set prototypes, S represents the support set, x_i represents the i-th query set sample, y_i represents the real label of the i-th query set sample, and N_Q represents the number of query set samples.
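The polynomial loss of claim 7 can be sketched as a Poly-1 style loss: cross-entropy plus a leading term ε_1(1 − p_t), averaged over the query set. The mapping from prototype distances to probabilities via a softmax over negated distances, and the value of ε_1, are assumptions here:

```python
# Poly-1 style loss sketch over query-to-prototype distances.
import numpy as np

def poly1_loss(dists, labels, eps1=1.0):
    """dists: (N_q, C) query-to-prototype distances; labels: (N_q,) true classes."""
    logits = -dists                                    # closer prototype -> larger logit
    z = logits - logits.max(axis=1, keepdims=True)     # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    pt = p[np.arange(len(labels)), labels]             # probability of true class
    return float(np.mean(-np.log(pt) + eps1 * (1.0 - pt)))

dists = np.array([[0.1, 2.0, 3.0],
                  [2.5, 0.2, 1.9]])
labels = np.array([0, 1])
loss = poly1_loss(dists, labels)
```

The extra (1 − p_t) term increases the gradient contribution of poorly classified samples, which is the motivation for using it over plain cross-entropy in the small-sample setting.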
CN202310320140.6A 2023-03-29 2023-03-29 Small sample radar active interference identification method Active CN116047427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310320140.6A CN116047427B (en) 2023-03-29 2023-03-29 Small sample radar active interference identification method


Publications (2)

Publication Number Publication Date
CN116047427A CN116047427A (en) 2023-05-02
CN116047427B true CN116047427B (en) 2023-06-23

Family

ID=86133530

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310320140.6A Active CN116047427B (en) 2023-03-29 2023-03-29 Small sample radar active interference identification method

Country Status (1)

Country Link
CN (1) CN116047427B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116482618B (en) * 2023-06-21 2023-09-19 西安电子科技大学 Radar active interference identification method based on multi-loss characteristic self-calibration network
CN117289218B (en) * 2023-11-24 2024-02-06 西安电子科技大学 Active interference identification method based on attention cascade network

Citations (2)

Publication number Priority date Publication date Assignee Title
CN114564982A (en) * 2022-01-19 2022-05-31 中国电子科技集团公司第十研究所 Automatic identification method for radar signal modulation type
CN115081475A (en) * 2022-06-08 2022-09-20 西安电子科技大学 Interference signal identification method based on Transformer network

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US7193558B1 (en) * 2003-09-03 2007-03-20 The United States Of America As Represented By The Secretary Of The Navy Radar processor system and method
KR102110973B1 (en) * 2019-10-25 2020-05-14 에스티엑스엔진 주식회사 Robust CFAR Method for Noise Jamming Detection
CN114037001A (en) * 2021-10-11 2022-02-11 中国人民解放军92578部队 Mechanical pump small sample fault diagnosis method based on WGAN-GP-C and metric learning
CN114895263A (en) * 2022-05-26 2022-08-12 西安电子科技大学 Radar active interference signal identification method based on deep migration learning
CN115097396A (en) * 2022-06-21 2022-09-23 西安电子科技大学 Radar active interference identification method based on CNN and LSTM series model




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant