CN116482618A - Radar active interference identification method based on multi-loss characteristic self-calibration network - Google Patents

Radar active interference identification method based on multi-loss characteristic self-calibration network

Info

Publication number
CN116482618A
Authority
CN
China
Prior art keywords
feature
self
calibration
characteristic
representing
Prior art date
Legal status
Granted
Application number
CN202310741199.2A
Other languages
Chinese (zh)
Other versions
CN116482618B (en)
Inventor
周峰
樊伟伟
汪思瑶
田甜
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202310741199.2A priority Critical patent/CN116482618B/en
Publication of CN116482618A publication Critical patent/CN116482618A/en
Application granted granted Critical
Publication of CN116482618B publication Critical patent/CN116482618B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G01S 7/021: Auxiliary means for detecting or identifying radar signals or the like, e.g. radar jamming signals
    • G01S 7/023: Interference mitigation, e.g. reducing or avoiding non-intentional interference with other HF-transmitters, base station transmitters for mobile communication or other radar systems, e.g. using electro-magnetic interference [EMI] reduction techniques
    • G01S 7/36: Means for anti-jamming, e.g. ECCM, i.e. electronic counter-counter measures
    • G01S 7/41: Using analysis of echo signal for target characterisation; target signature; target cross-section
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06V 10/454: Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/765: Classification using rules for classification or partitioning the feature space
    • G06V 10/82: Image or video recognition or understanding using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Evolutionary Computation (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to a radar active interference identification method based on a multi-loss feature self-calibration network. The method acquires time-frequency spectrogram data to be identified that contain radar active interference signals, and inputs the data into a trained multi-loss feature self-calibration network to identify the interference type and obtain a recognition classification result. Interference type identification is performed as follows: interference features are adaptively extracted from the time-frequency spectrogram data to be identified, the distances between interference features of the same class in the feature space are adaptively reduced and the distances between interference features of different classes in the feature space are enlarged, yielding interference feature vectors of different types; the interference feature vectors of different types are then mapped to a lower dimension and classified to obtain the recognition classification result. The method captures and characterizes the fine features of complex compound interference more accurately.

Description

Radar active interference identification method based on multi-loss characteristic self-calibration network
Technical Field
The invention belongs to the technical field of radar signal processing, and particularly relates to a radar active interference identification method based on a multi-loss characteristic self-calibration network.
Background
The increasingly complex electromagnetic environment severely threatens the detection performance and survivability of radar. Because most existing radar anti-jamming techniques target specific types of interference, effective countermeasures can only be taken once the interference pattern has been accurately identified. Research on radar interference recognition algorithms therefore provides prior information for subsequent anti-jamming processing and has important application value.
Traditional radar interference recognition algorithms require manual analysis and extraction of various features, which limits their universality and generalization ability. Moreover, methods based on hand-crafted features are susceptible to variations in the jamming-to-noise ratio and other interference signal parameters.
Deep convolutional neural networks (DCNN) have been widely used in image processing because of their adaptive feature extraction and characterization capabilities, and several classical deep-learning-based radar active interference identification methods already exist. The deep fusion convolutional neural network (DFCNN) consists of three sub-networks: a one-dimensional convolutional neural network, a two-dimensional convolutional neural network and a fusion network. Four one-dimensional convolutional branches respectively extract high-dimensional features from the real part, imaginary part, phase and amplitude of the jammed raw radar echo, while the two-dimensional convolutional neural network extracts time-frequency features of the interference; the two groups of features are concatenated and fed into the fusion network for further fused feature extraction, and soft label smoothing is additionally introduced to alleviate overfitting. On this basis, Lv et al. of Xidian University proposed a radar active deception jamming recognition algorithm based on a weighted ensemble CNN with transfer learning (WECNN-TL). First, the reference data set for network training is formed by mixing time-frequency spectra of simulated and measured interference; second, to fully mine the latent information of the interference signals, the real part, imaginary part, modulus and phase are extracted from the interference time-frequency spectra and combined into 15 sub-data-sets. Following the idea of bootstrap aggregating (Bagging), the network designs 15 sub-classifiers that separately mine the structural features of each sub-data-set and make individual predictions, and a weighted voting algorithm then produces the overall prediction of the ensemble model, further improving radar interference recognition accuracy at the test stage. The jamming recognition network (JRNet) enhances the robustness of interference recognition against rotational distortion, subtle differences and the like by using asymmetric convolution blocks (ACB) without adding computational cost. Transfer learning exploits knowledge learned in a source domain to aid the learning task in a target domain: after AlexNet is pre-trained on the ImageNet data set, its input and output layers are adapted to take time-frequency diagrams of jammed radar echoes and to output interference recognition results, respectively. The recognition convolutional neural network (RCNN) measures interference parameters from the time-frequency diagram of the echo signal using OS-CFAR, then extracts the interference and sends it to a pre-trained CNN for classification.
Realizing radar active interference identification with deep learning overcomes the heavy manual dependence and poor robustness of traditional recognition methods. However, existing deep-learning-based radar active interference recognition methods still suffer from feature parameters that are sensitive to the interference pattern, a limited number of recognizable interference types, and insufficient robustness; in particular, their ability to extract and aggregate interference features degrades in recognition tasks where the recognizable differences between interference types are small and the number of interference patterns is large.
Disclosure of Invention
To solve the above problems in the prior art, the invention provides a radar active interference identification method based on a multi-loss feature self-calibration network. The technical problem addressed by the invention is solved through the following technical solution:
the embodiment of the invention provides a radar active interference identification method based on a multi-loss characteristic self-calibration network, which comprises the following steps:
acquiring to-be-identified time-frequency spectrogram data comprising radar active interference signals;
inputting the to-be-identified time-frequency spectrogram data into a trained multi-loss characteristic self-calibration network to identify the interference type, so as to obtain an identification classification result;
the trained multi-loss characteristic self-calibration network comprises a characteristic extraction module, a characteristic mapping module and a reconstruction module which are sequentially connected, wherein the trained multi-loss characteristic self-calibration network is obtained by training and updating parameters of the multi-loss characteristic self-calibration network by utilizing a mixed loss function, the mixed loss function is determined by cross entropy loss propagated by a training set sample label, cluster loss of the characteristic mapping module and mean square error loss of the reconstruction module, and interference type identification is carried out according to the following steps:
The feature extraction module is used for carrying out interference feature self-adaptive extraction on the to-be-identified time-frequency spectrogram data, adaptively reducing the distance of the same-category interference features in a feature space, and enlarging the distance of different-category interference features in the feature space to obtain different-category interference feature vectors;
and the feature mapping module performs dimension-reduction mapping on the different types of interference feature vectors and then classifies them to obtain the recognition classification result.
In one embodiment of the present invention, the training method of the multi-loss feature self-calibration network comprises:
acquiring an original time-frequency spectrogram data set comprising radar active interference signals and non-interference radar echoes, wherein the original time-frequency spectrogram data set comprises training set samples, training set sample labels, verification set samples and verification set sample labels;
inputting the training set sample and the training set sample label into the multi-loss characteristic self-calibration network for training;
determining the mixing loss function according to the cross entropy loss of the training set sample label propagation, the clustering loss of the feature mapping module and the mean square error loss of the reconstruction module;
updating parameters of the multi-loss characteristic self-calibration network by utilizing the mixed loss function;
And selecting the network after each round of training by using the verification set sample and the verification set sample label, and taking the model with the highest recognition accuracy as the trained multi-loss characteristic self-calibration network.
In one embodiment of the invention, the feature extraction module comprises a first feature self-calibration convolution block, a second feature self-calibration convolution block, a third feature self-calibration convolution block, and a fourth feature self-calibration convolution block, wherein,
the first characteristic self-calibration convolution block, the second characteristic self-calibration convolution block, the third characteristic self-calibration convolution block and the fourth characteristic self-calibration convolution block are sequentially connected and are used for sequentially carrying out characteristic self-calibration convolution on the to-be-identified time-frequency spectrogram data to obtain the different types of interference characteristic vectors.
In one embodiment of the present invention, the first, second, third and fourth feature self-calibration convolution blocks have the same structure, each comprising a first convolution block, a second convolution block, a third convolution block, a channel feature self-calibration module, a spatial feature self-calibration module, a dimension lifting module, a first adding module and a first max pooling layer, wherein,
The first convolution block, the second convolution block and the third convolution block are sequentially connected and are used for sequentially carrying out convolution processing on the input feature images of the feature self-calibration convolution blocks to obtain an output feature image of the third convolution block;
the channel characteristic self-calibration module is used for carrying out channel characteristic self-calibration on the output characteristic diagram of the third convolution block to obtain a channel characteristic self-calibration characteristic diagram;
the spatial characteristic self-calibration module is used for performing spatial characteristic self-calibration on the channel characteristic self-calibration characteristic diagram to obtain a spatial characteristic self-calibration characteristic diagram;
the dimension lifting module is used for carrying out dimension lifting operation on the input feature map of the feature self-calibration convolution block to obtain a dimension lifting feature map;
the first adding module is used for adding the dimension-rising characteristic diagram to the space characteristic self-calibration characteristic diagram to obtain an adding characteristic diagram;
the first maximum pooling layer is configured to downsample the added feature map to obtain an output feature map of a feature self-calibration convolution block:
$$X_{i+1}=\mathrm{MaxPool}\big(F(X_{i},W_{i})+U(X_{i})\big)$$
where $U(\cdot)$ denotes the dimension lifting operation, $F(\cdot,\cdot)$ the extracted features, $X_{i}$ the input feature map of the $i$-th feature self-calibration convolution block, $W_{i}$ the weight parameters of the $i$-th feature self-calibration convolution block, and $\mathrm{MaxPool}(\cdot)$ max pooling.
In one embodiment of the present invention, the channel characteristic self-calibration module includes an adaptive maximum pooling layer, an adaptive average pooling layer, a multi-layer perceptron, a second addition module, a channel weight normalization module, and a first multiplication module, wherein,
the adaptive maximum pooling layer is used for carrying out adaptive maximum pooling on the height of the feature diagram and the width of the feature diagram in the output feature diagram of the third convolution block at the same time, and inducing the maximum response on each channel dimension;
the self-adaptive average pooling layer is used for carrying out self-adaptive average pooling on the height of the feature diagram and the width of the feature diagram in the output feature diagram of the third convolution block at the same time, and inducing the average response on each channel dimension;
the multi-layer perceptron is used for sequentially carrying out feature scaling, feature restoration and channel dimension information extraction on the maximum value response on each channel dimension to obtain maximum value response output, and sequentially carrying out feature scaling, feature restoration and channel dimension information extraction on the average response on each channel dimension to obtain average response output;
the second adding module is used for adding and fusing the maximum response output and the average response output to obtain an adding characteristic diagram;
The channel weight normalization module is used for normalizing the weights of the channels in the addition feature map by using an activation function to obtain channel normalized weights;
the first multiplication module is used for multiplying the channel normalization weight with the output feature map of the third convolution block to obtain a channel feature self-calibration feature map:
$$M_{c}(F)=\sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F))+\mathrm{MLP}(\mathrm{MaxPool}(F))\big)$$
where $M_{c}$ denotes the channel feature self-calibration template, $\sigma$ the activation function, $\mathrm{MLP}$ the multi-layer perceptron, $\mathrm{AvgPool}$ adaptive average pooling, $\mathrm{MaxPool}$ adaptive max pooling, and $F\in\mathbb{R}^{C\times H\times W}$ the output feature map of the third convolution block, with $\mathbb{R}$ the vector space, $C$ the number of channels of the feature map, $H$ the height of the feature map and $W$ the width of the feature map.
In one embodiment of the present invention, the spatial feature self-calibration module includes a second max-pooling layer, an average pooling layer, a stitching module, a convolution module, a spatial weight normalization module, and a second multiplication module, wherein,
the second maximum pooling layer is used for carrying out channel dimension maximum pooling on the channel characteristic self-calibration feature map to obtain a maximum pooling feature map compressed to a space dimension;
the average pooling layer is used for carrying out channel dimension average pooling on the channel characteristic self-calibration feature map to obtain an average pooling feature map compressed to a space dimension;
The splicing module is used for fusing the maximum pooling feature images and the average pooling feature images by adopting a splicing method to obtain splicing feature images;
the convolution module is used for carrying out convolution mapping on the spliced feature images to obtain mapping feature images;
the spatial weight normalization module is used for normalizing the spatial weight in the mapping feature map by using an activation function to obtain a spatial normalized weight;
the second multiplying module is configured to multiply the spatial normalization weight with the channel characteristic self-calibration feature map to obtain a spatial characteristic self-calibration feature map:
$$M_{s}(F')=\sigma\big(\mathrm{Conv}_{k\times k}([\mathrm{AvgPool}(F');\mathrm{MaxPool}(F')])\big)$$
where $M_{s}$ denotes the spatial feature self-calibration template, $\mathrm{Conv}_{k\times k}$ the convolution with learnable weights, $\sigma$ the activation function, $\mathrm{AvgPool}$ average pooling, $\mathrm{MaxPool}$ max pooling, and $F'\in\mathbb{R}^{C\times H\times W}$ the channel feature self-calibration feature map, with $\mathbb{R}$ the vector space, $C$ the number of channels of the feature map, $H$ the height of the feature map and $W$ the width of the feature map.
In one embodiment of the present invention, the overall formulation of the channel feature self-calibration module and the spatial feature self-calibration module is:
$$F'=M_{c}(F)\otimes F,\qquad F''=M_{s}(F')\otimes F'$$
where $F''$ denotes the output feature map of the spatial feature self-calibration module, $F\in\mathbb{R}^{C\times H\times W}$ the output feature map of the third convolution block (with $\mathbb{R}$ the vector space, $C$ the number of channels, $H$ the height and $W$ the width of the feature map), $M_{c}$ the channel feature self-calibration template, $M_{s}$ the spatial feature self-calibration template, and $\otimes$ element-wise multiplication.
In one embodiment of the invention, the feature mapping module comprises a first fully-connected layer and a second fully-connected layer, wherein,
the first full connection layer is used for projecting the flattened features of the different types of interference feature vectors into a feature space to obtain embedded vectors;
and the second full connection layer is used for classifying the embedded vectors to obtain the identification classification result.
In one embodiment of the present invention, the reconstruction module includes a third fully-connected layer, a fourth nonlinear layer, a fourth fully-connected layer, a fifth nonlinear layer, a fifth fully-connected layer, and an activation function layer that are sequentially connected.
In one embodiment of the invention, the mixing loss function is:
$$L=L_{\mathrm{CE}}+\lambda_{1}L_{\mathrm{C}}+\lambda_{2}L_{\mathrm{MSE}}$$
$$L_{\mathrm{CE}}=-\frac{1}{N}\sum_{i=1}^{N}y_{i}\log\left(p_{i}\right),\qquad L_{\mathrm{C}}=\frac{1}{2N}\sum_{i=1}^{N}\left\|x_{i}-c_{y_{i}}\right\|_{2}^{2},\qquad L_{\mathrm{MSE}}=\frac{1}{N}\sum_{i=1}^{N}\left\|\hat{I}_{i}-I_{i}\right\|_{2}^{2}$$
where $\lambda_{1}$ and $\lambda_{2}$ denote adjustable weight hyperparameters, $L_{\mathrm{CE}}$ the cross-entropy loss function, $N$ the batch size, $y_{i}$ the true label of training sample $i$, $p_{i}$ the probability distribution of the predicted result, $L_{\mathrm{C}}$ the clustering loss function, $x_{i}$ the feature vector, $c_{y_{i}}$ the feature center of the class to which $x_{i}$ belongs, $L_{\mathrm{MSE}}$ the mean square error loss function, $\hat{I}_{i}$ the reconstructed pixel points, and $I_{i}$ the original image pixel points.
Compared with the prior art, the invention has the beneficial effects that:
According to the identification method, the multi-loss feature self-calibration network refines the extracted interference features through a feature self-calibration mechanism: the feature extraction module adaptively extracts interference features from the time-frequency spectrogram data to be identified, adaptively reduces the distance between interference features of the same class in the feature space and enlarges the distance between interference features of different classes in the feature space. On the one hand, long-range pixel dependencies can be extracted; on the other hand, key features can be extracted in a targeted manner, which benefits interference recognition tasks in which the feature differences are not obvious, overcomes the drawback that conventional convolution modules attend only to local information and often ignore global information, and provides stronger aggregation capability. Meanwhile, the invention adopts a hybrid loss function: combined with the constraint of the clustering loss, the high-level features finally induced by the network become more distinguishable, and the aggregated intra-class features together with the auxiliary information of the input-reconstruction task improve the generalization of the model on the recognition task, so that more types of interference can be recognized with robustness. The method therefore captures and characterizes the fine features of complex compound interference more accurately, and shows higher recognition accuracy and more robust performance in recognition tasks involving many interference patterns.
Drawings
Fig. 1 is a schematic flow chart of a radar active interference identification method based on a multi-loss feature self-calibration network according to an embodiment of the present invention;
FIG. 2 is a flow chart of a training method of a multi-loss feature self-calibration network according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a multi-loss feature self-calibration network according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the time-frequency images of the 19 classes of radar active interference and 1 class of non-interference radar echo constructed according to an embodiment of the present invention;
FIGS. 5a-5h are visualizations of t-SNE clustering results under different methods according to embodiments of the present invention;
FIG. 6 is a schematic diagram of the feature visualization results of the proposed method and other comparative methods of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
Example 1
Referring to fig. 1, fig. 1 is a flowchart of a radar active interference identification method based on a multi-loss feature self-calibration network according to an embodiment of the present invention, where the method includes the steps of:
s1, acquiring to-be-identified time-frequency spectrogram data comprising radar active interference signals.
S2, inputting the to-be-identified time-frequency spectrogram data into a trained multi-loss characteristic self-calibration network to identify the interference type, and obtaining an identification classification result.
The trained multi-loss characteristic self-calibration network comprises a characteristic extraction module, a characteristic mapping module and a reconstruction module which are sequentially connected, wherein the trained multi-loss characteristic self-calibration network is obtained by training and updating parameters of the multi-loss characteristic self-calibration network by utilizing a mixed loss function, and the mixed loss function is determined by cross entropy loss propagated by a training set sample label, cluster loss of the characteristic mapping module and mean square error loss of the reconstruction module.
The interference type identification is performed according to the following steps:
s21, carrying out interference feature self-adaptive extraction on the time-frequency spectrogram data to be identified through a feature extraction module, adaptively reducing the distance of the same-category interference features in a feature space, and enlarging the distance of different-category interference features in the feature space to obtain different-category interference feature vectors.
S22, performing dimension reduction mapping on the interference feature vectors of different types through a feature mapping module, and then classifying to obtain a recognition classification result.
It is emphasized that the reconstruction module is used for the training process of the multi-loss characteristic self-calibration network, and the reconstruction module does not participate in the recognition process when the trained multi-loss characteristic self-calibration network performs interference type recognition on the time-frequency spectrogram data to be recognized.
In this embodiment, the multi-loss feature self-calibration network refines the extracted interference features through a feature self-calibration mechanism: the feature extraction module adaptively extracts interference features from the time-frequency spectrogram data to be identified, adaptively reduces the distance between interference features of the same class in the feature space and enlarges the distance between interference features of different classes in the feature space. On the one hand, long-range pixel dependencies can be extracted; on the other hand, key features can be extracted in a targeted manner, which benefits interference recognition tasks in which the feature differences are not obvious, overcomes the drawback that conventional convolution modules attend only to local information and often ignore global information, and provides stronger aggregation capability. Meanwhile, this embodiment adopts a hybrid loss function: combined with the constraint of the clustering loss, the high-level features finally induced by the network become more distinguishable, and the aggregated intra-class features together with the auxiliary information of the input-reconstruction task improve the generalization of the model on the recognition task, so that more types of interference can be recognized with robustness. The method therefore captures and characterizes the fine features of complex compound interference more accurately, and shows higher recognition accuracy and more robust performance in recognition tasks involving many interference patterns.
In order to obtain a trained multi-loss characteristic self-calibration network, the idea of the embodiment is as follows: firstly, a radar active interference signal model is established, an interference time-frequency image is obtained through short-time Fourier transformation, the interference time-frequency image is used as a reference data set for interference identification, and the reference data set is recorded as an original time-frequency spectrogram data set. Then, a multi-loss characteristic self-calibration network is constructed, and the main constituent modules comprise: the device comprises a feature extraction module, a feature mapping module and a reconstruction module. Then, inputting data of an original time-frequency spectrogram data set into a multi-loss characteristic self-calibration network, projecting an interference echo time-frequency spectrogram into a high-dimensional abstract characteristic separable space through the network, finely characterizing identifiable characteristics of different types of interference, and simplifying learning targets and optimization difficulty of the network by utilizing a jump connection structure; meanwhile, the characteristic self-calibration convolution block refined model in the characteristic extraction module is utilized to improve the representation capability of the characteristic self-calibration convolution block refined model on different types of interference intrinsic characteristics, so that the aggregation of intra-class interference characteristic vectors and the separation of inter-class characteristic vectors are realized in a characteristic space. Finally, the prediction of the real labels of the training set is completed in the classification layer, and corresponding clustering loss, cross entropy loss and mean square error loss functions are calculated, so that the parameters of the model are updated by the enhanced multi-element mixed loss function until the network converges.
Referring to fig. 2, fig. 2 is a flowchart illustrating a training method of a multi-loss feature self-calibration network according to an embodiment of the present invention. The training method of the multi-loss characteristic self-calibration network comprises the following steps:
s201, acquiring an original time-frequency spectrogram data set comprising radar active interference signals and non-interference radar echoes, wherein the original time-frequency spectrogram data set comprises training set samples, training set sample labels, verification samples and verification set sample labels.
Specifically, according to the modulation principle of each type of interference and different Jamming-to-Noise Ratios (JNR), an original time-frequency spectrogram data set containing 19 classes of radar active interference signals and 1 class of non-interference radar echoes is established using a multi-type radar active interference generation mechanism and the short-time Fourier transform. The original time-frequency spectrogram data set is labeled according to the interference type, and the data set and its label file are then divided into a training set, a verification set and a test set in a 6:2:2 ratio, where training set samples correspond to training set sample labels, verification set samples to verification set sample labels, and test set samples to test set sample labels.
In this embodiment, signals are randomly generated at 7 different jamming-to-noise ratios (0 dB, 5 dB, 10 dB, 15 dB, 20 dB, 25 dB and 30 dB), and at each JNR the time-frequency diagrams of the 19 classes of radar active interference and 1 class of non-interference radar echo are generated as the sample set. The batch size is set to 64.
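As an illustration of this data-generation step, the sketch below shows how a jammed echo could be turned into a normalized time-frequency image with the short-time Fourier transform. It is only a toy example under stated assumptions: the linear-FM echo model, the 128-point STFT window, the noise-modulated jamming type and the min-max normalization are not taken from the patent text.

```python
import numpy as np
from scipy.signal import stft

def make_tf_image(signal: np.ndarray, fs: float, nperseg: int = 128) -> np.ndarray:
    """Return a min-max normalized magnitude time-frequency image of a complex echo."""
    _, _, zxx = stft(signal, fs=fs, nperseg=nperseg, return_onesided=False)
    tf = np.abs(zxx)
    return (tf - tf.min()) / (tf.max() - tf.min() + 1e-12)

def add_jamming(echo: np.ndarray, jnr_db: float, noise_power: float = 1.0) -> np.ndarray:
    """Add thermal noise plus a noise-modulated jamming component at the requested JNR."""
    n = echo.size
    noise = np.sqrt(noise_power / 2) * (np.random.randn(n) + 1j * np.random.randn(n))
    jam_power = noise_power * 10.0 ** (jnr_db / 10.0)   # JNR = jamming power / noise power
    jam = np.sqrt(jam_power / 2) * (np.random.randn(n) + 1j * np.random.randn(n))
    return echo + noise + jam

# Example: a toy linear-FM echo jammed at JNR = 10 dB.
fs = 1e6
t = np.arange(1024) / fs
echo = np.exp(1j * np.pi * 1e8 * t ** 2)
sample = make_tf_image(add_jamming(echo, jnr_db=10.0), fs)
print(sample.shape)
```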
S202, inputting the training set samples and the training set sample labels into a multi-loss characteristic self-calibration network for training.
First, a multi-loss feature self-calibration network is constructed.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a multi-loss feature self-calibration network according to an embodiment of the invention. The multi-loss characteristic self-calibration network comprises a characteristic extraction module, a characteristic mapping module and a reconstruction module which are connected in sequence.
In a specific embodiment, the feature extraction module includes a first feature self-calibration convolution block, a second feature self-calibration convolution block, a third feature self-calibration convolution block, and a fourth feature self-calibration convolution block. The first characteristic self-calibration convolution block, the second characteristic self-calibration convolution block, the third characteristic self-calibration convolution block and the fourth characteristic self-calibration convolution block are sequentially connected and are used for sequentially carrying out characteristic self-calibration convolution on the time-frequency spectrogram data to be identified to obtain different types of interference characteristic vectors.
In a specific embodiment, the first feature self-calibration convolution block, the second feature self-calibration convolution block, the third feature self-calibration convolution block and the fourth feature self-calibration convolution block have the same structure, each comprising a first convolution block, a second convolution block, a third convolution block, a channel feature self-calibration module, a spatial feature self-calibration module, a dimension lifting module, a first adding module and a first max pooling layer.
The first convolution block, the second convolution block and the third convolution block are sequentially connected and are used for sequentially carrying out convolution processing on the input feature images of the feature self-calibration convolution block to obtain an output feature image of the third convolution block. The channel characteristic self-calibration module is used for carrying out channel characteristic self-calibration on the output characteristic diagram of the third convolution block to obtain a channel characteristic self-calibration characteristic diagram. The spatial characteristic self-calibration module is used for performing spatial characteristic self-calibration on the channel characteristic self-calibration characteristic diagram to obtain the spatial characteristic self-calibration characteristic diagram. The dimension lifting module is used for carrying out dimension lifting operation on the input feature map of the feature self-calibration convolution block to obtain a dimension lifting feature map. The first adding module is used for adding the dimension-rising characteristic diagram to the space characteristic self-calibration characteristic diagram to obtain an adding characteristic diagram. The first maximum pooling layer is used for downsampling the added feature map to obtain an output feature map of the feature self-calibration convolution block. It is understood that the first convolution block, the second convolution block, the third convolution block, the channel characteristic self-calibration module, and the spatial characteristic self-calibration module are sequentially connected in series.
Further, the first convolution block includes a first convolution layer, a first batch of normalization layers, and a first non-linear layer connected in sequence. The second convolution block comprises a second convolution layer, a second batch of normalization layers and a second nonlinear layer which are sequentially connected; the third convolution block includes a third convolution layer and a third non-linear layer connected in sequence. Wherein, the first nonlinear layer, the second nonlinear layer and the third nonlinear layer can all adopt LeakyReLU nonlinear layers.
Specifically, in the first feature self-calibration convolution block, the first, second and third convolution layers each have 64 convolution kernels of size 3×3, with a stride of 1 and padding of 1. In the second feature self-calibration convolution block, the first, second and third convolution layers each have 128 convolution kernels of size 3×3, with a stride of 1 and padding of 1. In the third feature self-calibration convolution block, the first, second and third convolution layers each have 256 convolution kernels of size 3×3, with a stride of 1 and padding of 1. In the fourth feature self-calibration convolution block, the first, second and third convolution layers each have 512 convolution kernels of size 3×3, with a stride of 1 and padding of 1. In each feature self-calibration convolution block, the output of the third convolution block is fed into the channel feature self-calibration module and the spatial feature self-calibration module connected in series.
Further, the channel characteristic self-calibration module comprises an adaptive maximum pooling layer, an adaptive average pooling layer, a multi-layer perceptron, a second addition module, a channel weight normalization module and a first multiplication module. The adaptive maximum pooling layer is used for carrying out adaptive maximum pooling on the height of the feature map and the width of the feature map in the output feature map of the third convolution block at the same time, and inducing the maximum value response on each channel dimension. The adaptive average pooling layer is used for carrying out adaptive average pooling on the height of the feature map and the width of the feature map in the output feature map of the third convolution block at the same time, and summarizing the average response on each channel dimension. The multi-layer perceptron is used for sequentially carrying out feature scaling, feature restoration and channel dimension information extraction on the maximum value response on each channel dimension to obtain maximum value response output, and sequentially carrying out feature scaling, feature restoration and channel dimension information extraction on the average response on each channel dimension to obtain average response output. And the second adding module is used for adding and fusing the maximum response output and the average response output to obtain an added characteristic diagram. The channel weight normalization module is used for normalizing the weights of the channels in the additional feature map by using the activation function to obtain the channel normalization weights. The first multiplication module is used for multiplying the channel normalization weight with the output feature map of the third convolution block to obtain a channel feature self-calibration feature map.
Specifically, assume the feature map input to the channel feature self-calibration module is $F\in\mathbb{R}^{C\times H\times W}$, where $\mathbb{R}$ denotes the vector space, $C$ the number of channels, $H$ the height and $W$ the width of the feature map. First, the adaptive max pooling layer performs adaptive max pooling (Adaptive Max Pooling) over the second dimension (feature map height) and the third dimension (feature map width) of $F$ simultaneously, summarizing the maximum response over each channel dimension; in parallel, the adaptive average pooling layer performs adaptive average pooling (Adaptive Average Pooling) over the second dimension (feature map height) and the third dimension (feature map width) of $F$, summarizing the average response over each channel dimension. Next, the two pooled responses are sent to a multi-layer perceptron (MLP) with shared parameters, which performs feature scaling and feature restoration and further extracts useful channel-dimension information, producing two parallel outputs: the maximum response output and the average response output. Then, the second addition module sums the two parallel perceptron outputs to fuse the maximum response output and the average response output, and the channel weight normalization module normalizes the channel weights of the summed feature map to between 0 and 1 with a Sigmoid activation function to obtain the channel normalized weights. Finally, the first multiplication module multiplies the channel normalized weights with the input feature map, i.e., the output feature map of the third convolution block, to obtain the channel feature self-calibration feature map:
$$M_{c}(F)=\sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F))+\mathrm{MLP}(\mathrm{MaxPool}(F))\big),\qquad F'=M_{c}(F)\otimes F$$
where $M_{c}$ denotes the channel feature self-calibration template, $\sigma$ the Sigmoid activation function, $\mathrm{MLP}$ the multi-layer perceptron, $\mathrm{AvgPool}$ adaptive average pooling, $\mathrm{MaxPool}$ adaptive max pooling, $F\in\mathbb{R}^{C\times H\times W}$ the output feature map of the third convolution block, $F'$ the channel feature self-calibration feature map, and $\otimes$ element-wise multiplication.
In one particular embodiment, the multi-layer perceptron comprises a fourth convolution layer, a ReLU nonlinear activation layer and a fifth convolution layer. The convolution kernels of the fourth and fifth convolution layers are both of size 1×1. The number of convolution kernels in the fourth convolution layer is $C/r$, where $r$ is the scaling ratio and can be adjusted according to the number of channels of the input feature map; the number of convolution kernels in the fifth convolution layer is $C$. The scaling ratio $r$ is set to 16 for the first, second and third feature self-calibration convolution blocks and to 32 for the fourth feature self-calibration convolution block.
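For illustration, a hedged PyTorch sketch of this channel feature self-calibration module follows. It mirrors the shared-MLP attention over adaptive max/avg pooled responses described above; the module class name, the ReLU placement and anything else not stated in the text are assumptions, not the patented implementation.

```python
import torch
import torch.nn as nn

class ChannelSelfCalibration(nn.Module):
    def __init__(self, channels: int, ratio: int = 16):
        super().__init__()
        self.max_pool = nn.AdaptiveMaxPool2d(1)   # maximum response per channel
        self.avg_pool = nn.AdaptiveAvgPool2d(1)   # average response per channel
        self.mlp = nn.Sequential(                 # shared MLP: scale down by ratio, then restore
            nn.Conv2d(channels, channels // ratio, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // ratio, channels, kernel_size=1, bias=False),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        weight = self.sigmoid(self.mlp(self.max_pool(f)) + self.mlp(self.avg_pool(f)))
        return weight * f                         # channel-calibrated feature map

x = torch.randn(2, 256, 16, 16)
print(ChannelSelfCalibration(256, ratio=16)(x).shape)  # torch.Size([2, 256, 16, 16])
```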
Further, the spatial feature self-calibration module comprises a second maximum pooling layer, an average pooling layer, a splicing module, a convolution module, a spatial weight normalization module and a second multiplying module. The second maximum pooling layer is used for carrying out channel dimension maximum pooling on the channel characteristic self-calibration characteristic diagram to obtain a maximum pooling characteristic diagram compressed to a space dimension. The average pooling layer is used for carrying out channel dimension average pooling on the channel characteristic self-calibration feature map to obtain an average pooling feature map compressed to a space dimension. The splicing module is used for fusing the maximum pooling feature images and the average pooling feature images by adopting a splicing method to obtain the spliced feature images. The convolution module is used for carrying out convolution mapping on the spliced feature images to obtain a mapping feature image. The spatial weight normalization module is used for normalizing the spatial weight in the mapping feature map by using the activation function to obtain the spatial normalization weight. The second multiplying module is used for multiplying the space normalization weight and the channel characteristic self-calibration characteristic diagram to obtain the space characteristic self-calibration characteristic diagram.
Specifically, the feature map $F'\in\mathbb{R}^{C\times H\times W}$ processed by the channel feature self-calibration module is input to the spatial feature self-calibration module, where $\mathbb{R}$ denotes the vector space, $C$ the number of channels, $H$ the height and $W$ the width of the feature map. The feature map $F'$ passes in parallel through the second max pooling layer over the channel dimension and the average pooling layer over the channel dimension, giving a max pooling feature map compressed to the spatial dimensions and an average pooling feature map compressed to the spatial dimensions. The splicing module fuses the information of the two feature maps by concatenation to obtain the spliced feature map. The convolution module then performs convolution mapping on the spliced feature map to obtain the mapping feature map; the number of convolution kernels is 1, the kernel size is $k\times k$ and the padding is $\lceil (k-1)/2\rceil$ (rounded up), where the value of $k$ can be adjusted manually according to the size of the input feature map. For example, the convolution kernel size $k$ of the spatial feature self-calibration module is set to 7 in the first, second and third feature self-calibration convolution blocks and to 5 in the fourth feature self-calibration convolution block. The spatial weight normalization module normalizes the spatial weights of the mapping feature map to between 0 and 1 with a Sigmoid activation function to obtain the spatial normalized weights. Finally, the second multiplication module multiplies the spatial normalized weights with the channel feature self-calibration feature map to obtain the feature map output after spatial feature calibration, i.e., the spatial feature self-calibration feature map:
$$M_{s}(F')=\sigma\big(\mathrm{Conv}_{k\times k}([\mathrm{AvgPool}(F');\mathrm{MaxPool}(F')])\big),\qquad F''=M_{s}(F')\otimes F'$$
where $M_{s}$ denotes the spatial feature self-calibration template, $\mathrm{Conv}_{k\times k}$ the convolution with learnable weights, $\sigma$ the Sigmoid activation function, $\mathrm{AvgPool}$ average pooling over the channel dimension, $\mathrm{MaxPool}$ max pooling over the channel dimension, $F'\in\mathbb{R}^{C\times H\times W}$ the channel feature self-calibration feature map, $F''$ the spatial feature self-calibration feature map, and $\otimes$ element-wise multiplication.
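A hedged PyTorch sketch of this spatial feature self-calibration module is given below: channel-wise max and average pooling, concatenation, a single k×k convolution and a Sigmoid-normalized spatial weight. The class name and anything not stated above are assumptions.

```python
import torch
import torch.nn as nn

class SpatialSelfCalibration(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        max_map, _ = torch.max(f, dim=1, keepdim=True)   # compress channels by max
        avg_map = torch.mean(f, dim=1, keepdim=True)     # compress channels by mean
        weight = self.sigmoid(self.conv(torch.cat([max_map, avg_map], dim=1)))
        return weight * f                                # spatially calibrated feature map

x = torch.randn(2, 256, 16, 16)
print(SpatialSelfCalibration(kernel_size=7)(x).shape)   # torch.Size([2, 256, 16, 16])
```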
Further, the channel feature self-calibration module and the spatial feature self-calibration module are arranged in series after the three convolution layers; their overall formulation is:
$$F'=M_{c}(F)\otimes F,\qquad F''=M_{s}(F')\otimes F'$$
where $F''$ denotes the output feature map of the spatial feature self-calibration module, $F\in\mathbb{R}^{C\times H\times W}$ the output feature map of the third convolution block, $\mathbb{R}$ the vector space, $C$ the number of channels, $H$ the height and $W$ the width of the feature map, $M_{c}$ the channel feature self-calibration template, $M_{s}$ the spatial feature self-calibration template, and $\otimes$ element-wise multiplication.
Further, the dimension lifting module and the first adding module form a skip connection. The skip connection lets the input of the feature self-calibration convolution block travel along a cross-layer data path that bypasses the main computation of the block: after the dimension lifting operation of the dimension lifting module, the input is added by the first adding module directly onto the output feature map produced by the channel feature self-calibration and the spatial feature self-calibration. Specifically, the dimension lifting module consists of a sixth convolution layer and a third batch normalization layer, and its number of convolution kernels matches the number of channels output by the feature self-calibration modules.
Further, in each feature self-calibration convolution block, a first max pooling layer is added after the skip connection to perform downsampling, with a pooling stride of 2.
Specifically, the formula of the output feature map of the feature self-calibration convolution block can be expressed as:
$$X_{i+1}=\mathrm{MaxPool}\big(F(X_{i},W_{i})+U(X_{i})\big)$$
where $U(\cdot)$ denotes the dimension lifting operation, $F(\cdot,\cdot)$ the extracted features, $X_{i}$ the input feature map of the $i$-th feature self-calibration convolution block, $W_{i}$ the weight parameters of the $i$-th feature self-calibration convolution block, and $\mathrm{MaxPool}(\cdot)$ max pooling.
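Putting the pieces together, the following hedged sketch shows one complete feature self-calibration convolution block: the three convolution blocks, the two calibration modules from the sketches above (ChannelSelfCalibration, SpatialSelfCalibration, assumed to be defined as shown earlier), the dimension lifting skip path and the final max pooling. The 1×1 kernel of the skip convolution, the 2×2 pooling kernel and the input channel count are assumptions not fixed by the text.

```python
import torch
import torch.nn as nn

class FeatureSelfCalibrationBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, ratio: int = 16, spatial_k: int = 7):
        super().__init__()
        self.convs = nn.Sequential(                      # first/second blocks: Conv-BN-LeakyReLU, third: Conv-LeakyReLU
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.LeakyReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.LeakyReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.LeakyReLU(inplace=True),
        )
        self.channel_cal = ChannelSelfCalibration(out_ch, ratio)   # from the sketch above
        self.spatial_cal = SpatialSelfCalibration(spatial_k)       # from the sketch above
        self.lift_dim = nn.Sequential(                              # skip path: conv + batch norm
            nn.Conv2d(in_ch, out_ch, kernel_size=1), nn.BatchNorm2d(out_ch),
        )
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.spatial_cal(self.channel_cal(self.convs(x)))      # F(X_i, W_i)
        return self.pool(f + self.lift_dim(x))                     # MaxPool(F + U(X_i))

x = torch.randn(2, 1, 128, 128)                                     # single-channel input is an assumption
print(FeatureSelfCalibrationBlock(1, 64)(x).shape)                  # torch.Size([2, 64, 64, 64])
```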
In a particular embodiment, the feature mapping module includes a first fully-connected layer and a second fully-connected layer. The first full-connection layer is used for projecting flattened features of different types of interference feature vectors into a feature space to obtain embedded vectors. The second full connection layer is used for classifying the embedded vectors to obtain recognition classification results.
Specifically, the feature mapping module receives the feature map output by the feature extraction module, flattens it, embeds it into a vector through a first fully connected layer with 256 nodes, and finally obtains the recognition classification result through a second fully connected layer with 20 nodes.
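A minimal sketch of this feature mapping module follows: the flattened feature map is projected to a 256-dimensional embedding and classified over 20 classes. The flattened input dimension (here 512 channels on an assumed 8×8 map) depends on the spectrogram size and is an assumption.

```python
import torch
import torch.nn as nn

class FeatureMapping(nn.Module):
    def __init__(self, in_features: int = 512 * 8 * 8, embed_dim: int = 256, num_classes: int = 20):
        super().__init__()
        self.fc_embed = nn.Linear(in_features, embed_dim)   # first fully connected layer
        self.fc_class = nn.Linear(embed_dim, num_classes)   # second fully connected layer

    def forward(self, feat: torch.Tensor):
        emb = self.fc_embed(torch.flatten(feat, start_dim=1))
        return emb, self.fc_class(emb)                       # embedding and class logits

emb, logits = FeatureMapping()(torch.randn(2, 512, 8, 8))
print(emb.shape, logits.shape)                               # [2, 256], [2, 20]
```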
In a specific embodiment, the reconstruction module includes a third fully-connected layer, a fourth nonlinear layer, a fourth fully-connected layer, a fifth nonlinear layer, a fifth fully-connected layer, and an activation function layer that are sequentially connected.
Specifically, the reconstruction network alternates three fully connected layers with two ReLU nonlinear layers: the third fully connected layer has 1024 nodes, the fourth fully connected layer has 4096 nodes, and the fifth fully connected layer has 16384 nodes. The output is finally activated by a Sigmoid function and reassembled into an image of the same size as the original time-frequency spectrogram.
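A hedged sketch of this reconstruction module is shown below, with the 1024/4096/16384-node fully connected layers and Sigmoid output. Feeding it the 256-dimensional embedding and reshaping to a 128×128 image (128×128 = 16384) are assumptions about details the text leaves open.

```python
import torch
import torch.nn as nn

class Reconstruction(nn.Module):
    def __init__(self, embed_dim: int = 256, out_hw: int = 128):
        super().__init__()
        self.out_hw = out_hw
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, out_hw * out_hw), nn.Sigmoid(),   # pixel values in [0, 1]
        )

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return self.decoder(emb).view(-1, 1, self.out_hw, self.out_hw)

print(Reconstruction()(torch.randn(2, 256)).shape)  # torch.Size([2, 1, 128, 128])
```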
The training set samples and training set sample labels are then input into a constructed multi-loss feature self-calibration network to train the network.
S203, determining a mixing loss function according to the cross entropy loss of the training set sample label propagation, the clustering loss of the feature mapping module and the mean square error loss of the reconstruction module.
Specifically, the three loss functions of clustering loss, cross entropy loss and mean square error loss are summed to obtain a mixed loss function.
First, the clustering loss of the embedded vectors output by the feature mapping module is calculated. For the feature vectors f_i extracted by the feature mapping module in each training iteration, their clustering loss is computed as:

L_clu = (1/(2N)) Σ_{i=1}^{N} ||f_i − c_{y_i}||²

where N denotes the batch size and c_{y_i} denotes the feature center of the category to which the feature vector f_i belongs; the center position is updated as the feature vectors of the same category change. The clustering loss formula shows that the smaller the distance from same-class samples to their feature center, the smaller the loss, which constrains the intra-class features to be compact.
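The clustering loss behaves like a center loss. A sketch with hypothetical names is shown below; the momentum-style update of the feature centers is one plausible way to realize "the feature center position is updated as the same-class feature vectors change" and is an assumption of the sketch.

class ClusterLoss(nn.Module):
    def __init__(self, num_classes=20, embed_dim=256, momentum=0.9):  # momentum value assumed
        super().__init__()
        self.register_buffer("centers", torch.zeros(num_classes, embed_dim))  # feature centers c_y
        self.momentum = momentum

    def forward(self, embeddings, labels):
        with torch.no_grad():  # move each seen class center toward the batch mean of its embeddings
            for c in labels.unique():
                batch_mean = embeddings[labels == c].mean(dim=0)
                self.centers[c] = self.momentum * self.centers[c] + (1 - self.momentum) * batch_mean
        diff = embeddings - self.centers[labels]    # distance of each sample to its class center
        return 0.5 * diff.pow(2).sum(dim=1).mean()  # (1/(2N)) * sum of squared distances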
Then, calculating a cross entropy loss function corresponding to each training set sample label propagation:
L_ce = −Σ_{i=1}^{N} y_i log(p_i)

where y_i denotes the true label of training sample i and p_i denotes the probability distribution of the prediction result.
Finally, the reconstruction module uses the prediction result to reconstruct the input interference time-frequency spectrogram represented by that category, and the mean square error loss function is constructed by calculating the Euclidean distance between the output image pixels and the original image pixels:

L_mse = (1/P) Σ_{j=1}^{P} (x̂_j − x_j)²

where x̂_j denotes a reconstructed pixel, x_j denotes the corresponding original image pixel, and P denotes the number of pixels.
Combining the three loss functions, an enhanced hybrid loss function is obtained:
in the middle ofAnd->For an adjustable weight super parameter +.>Set to 0.5 @, ->Set to 5e -4
The design of the enhanced hybrid loss function is based on the idea of multi-task learning: clustering loss and mean square error loss are added on top of the common cross entropy loss. The clustering loss aggregates interference feature vectors of the same type in the high-dimensional feature space and enhances the robustness of the features extracted by the network, while the mean square error loss reduces the loss of features during network propagation. The enhanced hybrid loss function uses the information exchange and auxiliary training between the different optimization objectives to improve the classification performance of label propagation in the embedding space.
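Putting the three terms together, a sketch of the enhanced hybrid loss with the weights stated above (function and argument names are hypothetical; cluster_loss_fn is an instance of the ClusterLoss sketch):

import torch.nn.functional as F

def hybrid_loss(logits, embeddings, recon, spectrogram, labels, cluster_loss_fn,
                lambda1=0.5, lambda2=5e-4):
    # L = L_ce + lambda1 * L_clu + lambda2 * L_mse
    l_ce = F.cross_entropy(logits, labels)       # cross entropy of label propagation
    l_clu = cluster_loss_fn(embeddings, labels)  # intra-class compactness in feature space
    l_mse = F.mse_loss(recon, spectrogram)       # pixel-wise reconstruction error
    return l_ce + lambda1 * l_clu + lambda2 * l_mse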
S204, updating parameters of the multi-loss characteristic self-calibration network by using the mixed loss function.
Specifically, the parameters of the network are updated according to the enhanced hybrid loss function, and the above steps are repeated until the specified number of training rounds is completed and the network converges.
S205, selecting a network after each round of training by using a verification set sample and a verification set sample label, and taking a model with highest recognition accuracy as a trained multi-loss characteristic self-calibration network.
Specifically, a verification set sample and a verification set sample label are utilized to select a network with fixed super parameters after each round of training is finished, and a model with highest recognition accuracy is used as a trained multi-loss characteristic self-calibration network.
Further, after the trained multi-loss characteristic self-calibration network is obtained, the test set sample can be input into the network for identification test, and the identification accuracy is calculated.
In this embodiment, the input original time-frequency spectrogram data passes through the feature extraction module and the feature mapping module in turn, and the interference time-frequency spectrum is mapped from the original space to the feature space by a feature vector embedding function f_θ: R^D → R^d, where θ denotes the learning parameters, R denotes the vector space, D denotes the vector space dimension before mapping, and d denotes the mapped vector space dimension. After training, the multi-loss feature self-calibration network can adaptively attend to the more discriminative interference features, adaptively reduce the distance between feature vectors of same-category samples in the high-dimensional feature space, and enlarge the distance between different-category samples in the feature space, thereby reducing the classification difficulty after dimension-reduction mapping of the different types of interference feature vectors. The skip-connection structure solves the gradient vanishing problem caused by deepening the model and is easier to optimize, which is why the feature extraction module adopts it. Further, the feature self-calibration module added after the multi-layer convolution operation allows the network to adaptively refine features and enhances the separability of different types of interference features. Finally, because the pooling process brings information loss, an input reconstruction module is connected after the classifier to directly establish the relation between the latent feature mapping space and the input and assist in optimizing the network.
In summary, the radar active interference identification method based on the multi-loss feature self-calibration network provided by this embodiment uses a deep nonlinear neural network to mine the time-frequency characteristics of different types of interference, maps them into a high-dimensional feature space, and fits separable planes, thereby realizing high-precision intelligent interference identification. On the one hand, the feature self-calibration convolution block is designed to overcome the shortcoming that a traditional convolution module only attends to local information and often ignores global information. By introducing feature calibration modules in the spatial and channel dimensions, the network adaptively calibrates the extracted interference features, which improves its extraction accuracy and characterization of different types of interference, while the skip-connection structure eliminates the gradient vanishing problem caused by increasing network depth. On the other hand, the multi-constraint hybrid loss function is introduced to aggregate the intra-class features of interference in the high-dimensional space and enlarge the inter-class differences, improving the radar active interference identification performance of the network and providing a new method for radar interference identification. Therefore, the method can be used for radar anti-interference processing, provides important prior information for the selection of radar anti-interference strategies, and improves the anti-interference capability and information acquisition capability of SAR in complex electromagnetic environments.
Further, the effect of the radar active interference identification method based on the multi-loss characteristic self-calibration network is described through simulation experiments.
1. Data set
The data used in the experiment are time-frequency diagrams of nineteen kinds of simulated radar interference and one kind of non-interference radar echo signal, as shown in fig. 4; fig. 4 is a schematic diagram of the time-frequency images of the 19 kinds of radar active interference and 1 kind of non-interference radar echo constructed in the embodiment of the invention. The interference types fall into two main categories, single interference and composite interference. The single interference types are: noise amplitude modulation interference, noise frequency modulation interference, noise product interference, noise convolution interference, multi-point frequency interference, sine wave modulated swept interference, saw-tooth modulated swept interference, square wave modulated swept interference, dense decoy interference, intermittent sampling forwarding interference, intermittent sampling repetition forwarding interference, sample presentation pulse interference, multi-decoy interference, and comb spectrum modulation interference. The composite interference includes: frequency modulation + dense decoy interference, frequency modulation + intermittent sample forwarding interference, frequency modulation + intermittent sample repetition forwarding interference, noise convolution + dense decoy interference, and noise convolution + intermittent sample forwarding interference. Time-frequency images of the interference signals are generated by short-time Fourier transform according to the modulation principle of each interference signal and different jamming-to-noise ratios (JNR); the JNR takes seven values of 0 dB, 5 dB, 10 dB, 15 dB, 20 dB, 25 dB and 30 dB. To keep the categories in the samples balanced, 1000 samples of the same size are randomly generated for each kind of interference. All samples are divided into training, verification and test sets at a ratio of 6:2:2.
2. Implementation details
1) Selecting experimental data according to the requirements, and dividing a training set, a verification set and a test set;
2) The hardware platform for the simulation experiment of the invention is: an AMD Ryzen 9 5900HX with Radeon Graphics CPU, sixteen cores, a main frequency of 3.30 GHz, and 32 GB of memory; the video memory size is 16 GB. The invention uses the AdamW optimizer with the StepLR learning-rate update method, setting the initial learning rate to 0.0005, the decay step size to 10, and the decay rate to 0.5. The training samples are input into the network for training; all models are trained for 30 epochs, and after each round of training the model with the highest recognition accuracy on the verification set is saved according to the performance of the network with fixed hyperparameters on the verification set.
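A minimal sketch of this training configuration, building on the earlier sketches; the top-level network class, its forward signature, and the data loader name are hypothetical.

model = MultiLossFeatureSelfCalibrationNet()  # hypothetical top-level network
cluster_loss = ClusterLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(30):                        # 30 training epochs
    for spectrogram, labels in train_loader:   # hypothetical training DataLoader
        optimizer.zero_grad()
        embeddings, logits, recon = model(spectrogram)  # assumed forward signature
        loss = hybrid_loss(logits, embeddings, recon, spectrogram, labels, cluster_loss)
        loss.backward()
        optimizer.step()
    scheduler.step()
    # after each epoch, evaluate on the verification set and keep the best checkpoint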
3) The test samples are input into the optimal model for testing to obtain the recognition rate, and the recognition accuracy is compared with that of other intelligent interference recognition methods; the results are shown in Table 1 and Table 2:
table 1 comparison of the method of the present invention with other recognition network recognition accuracy in jnr=30 dB, 25dB, 20dB and 15dB datasets
Table 2 comparison of the method of the present invention with other recognition network recognition accuracy in jnr=10 dB, 5dB and 0dB data sets
The results show that the radar active interference identification method based on the multi-loss feature self-calibration network achieves higher recognition accuracy on the full-sample training results at jamming-to-noise ratios of 0 dB, 5 dB, 10 dB, 15 dB, 20 dB, 25 dB and 30 dB, and changes in JNR do not cause large fluctuations in the recognition accuracy of the method, demonstrating its robustness.
Referring to fig. 5a to 5h, fig. 5a to 5h are schematic diagrams of the t-SNE (t-Distributed Stochastic Neighbor Embedding) clustering visualization results of different methods in the embodiment of the invention, where fig. 5a shows the ResNet method, fig. 5b the JRNet method, fig. 5c the 2DCNN method, fig. 5d the RCNN method, fig. 5e the AlexNet method, fig. 5f the 1DCNN method, fig. 5g the DFCNN method, and fig. 5h the method of this embodiment. It can be seen from fig. 5a to 5h that the proposed method forms 20 clearly compact clusters in the feature space, with large distances between the clusters. This shows that, compared with other interference recognition methods, the method achieves a feature mapping that is more clustered within classes and more separated between classes in the high-dimensional feature space, demonstrating its robustness.
Referring to fig. 6, fig. 6 is a schematic diagram of the feature visualization results of the method of the invention and other comparison methods. Because of the particularities of the data processed by the WECNN, ECNN, 1DCNN and DFCNN methods, no heat map for feature visualization can be generated for them. The results in the figure show that the classification basis of the method of the invention is more concentrated on the interference region of the time-frequency diagram. The feature visualization results demonstrate that the characterization accuracy of the proposed method for interference features is clearly superior to that of the other interference recognition methods.
Therefore, the radar active interference identification method based on the multi-loss feature self-calibration network works on interference time-frequency images and, on the basis of a conventional convolutional network, incorporates the feature self-calibration module and the input reconstruction module: the former adaptively captures features and achieves fine characterization of them, thereby improving the feature extraction capability of the network; the latter establishes a link between the feature space and the input, thereby improving the classification performance of label propagation in that space. The model is optimized by an enhanced hybrid loss function composed of cross entropy loss, clustering loss and mean square error loss, so that the feature vectors extracted by the model are more concentrated within classes and more dispersed between classes in the high-dimensional feature space. Meanwhile, the test results on the simulation dataset show that the proposed method performs better than other intelligent interference identification methods at different jamming-to-noise ratios.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (10)

1. A radar active interference identification method based on a multi-loss characteristic self-calibration network is characterized by comprising the following steps:
acquiring to-be-identified time-frequency spectrogram data comprising radar active interference signals;
inputting the to-be-identified time-frequency spectrogram data into a trained multi-loss characteristic self-calibration network to identify the interference type, so as to obtain an identification classification result;
the trained multi-loss characteristic self-calibration network comprises a characteristic extraction module, a characteristic mapping module and a reconstruction module which are sequentially connected, wherein the trained multi-loss characteristic self-calibration network is obtained by training and updating parameters of the multi-loss characteristic self-calibration network by utilizing a mixed loss function, the mixed loss function is determined by cross entropy loss propagated by a training set sample label, cluster loss of the characteristic mapping module and mean square error loss of the reconstruction module, and interference type identification is carried out according to the following steps:
The feature extraction module is used for carrying out interference feature self-adaptive extraction on the to-be-identified time-frequency spectrogram data, adaptively reducing the distance of the same-category interference features in a feature space, and enlarging the distance of different-category interference features in the feature space to obtain different-category interference feature vectors;
and classifying after the feature mapping module performs dimension reduction mapping on the different types of interference feature vectors to obtain the recognition classification result.
2. The method for identifying radar active interference based on a multiple-loss feature self-calibration network according to claim 1, wherein the training method of the multiple-loss feature self-calibration network comprises:
acquiring an original time-frequency spectrogram data set comprising radar active interference signals and non-interference radar echoes, wherein the original time-frequency spectrogram data set comprises a training set sample, a training set sample tag, a verification set sample and a verification set sample tag;
inputting the training set sample and the training set sample label into the multi-loss characteristic self-calibration network for training;
determining the mixing loss function according to the cross entropy loss of the training set sample label propagation, the clustering loss of the feature mapping module and the mean square error loss of the reconstruction module;
Updating parameters of the multi-loss characteristic self-calibration network by utilizing the mixed loss function;
and selecting the network after each round of training by using the verification set sample and the verification set sample label, and taking the model with the highest recognition accuracy as the trained multi-loss characteristic self-calibration network.
3. The method for radar active interference identification based on a multiple loss feature self-calibration network of claim 1, wherein the feature extraction module comprises a first feature self-calibration convolution block, a second feature self-calibration convolution block, a third feature self-calibration convolution block, and a fourth feature self-calibration convolution block, wherein,
the first characteristic self-calibration convolution block, the second characteristic self-calibration convolution block, the third characteristic self-calibration convolution block and the fourth characteristic self-calibration convolution block are sequentially connected and are used for sequentially carrying out characteristic self-calibration convolution on the to-be-identified time-frequency spectrogram data to obtain the different types of interference characteristic vectors.
4. The method for radar active interference identification based on a multiple-loss feature self-calibration network according to claim 3, wherein the first feature self-calibration convolution block, the second feature self-calibration convolution block, the third feature self-calibration convolution block, and the fourth feature self-calibration convolution block have the same structure, each include a first convolution block, a second convolution block, a third convolution block, a channel feature self-calibration module, a spatial feature self-calibration module, an up-dimension module, a first addition module, and a first maximum pooling layer,
The first convolution block, the second convolution block and the third convolution block are sequentially connected and are used for sequentially carrying out convolution processing on the input feature images of the feature self-calibration convolution blocks to obtain an output feature image of the third convolution block;
the channel characteristic self-calibration module is used for carrying out channel characteristic self-calibration on the output characteristic diagram of the third convolution block to obtain a channel characteristic self-calibration characteristic diagram;
the spatial characteristic self-calibration module is used for performing spatial characteristic self-calibration on the channel characteristic self-calibration characteristic diagram to obtain a spatial characteristic self-calibration characteristic diagram;
the dimension lifting module is used for carrying out dimension lifting operation on the input feature map of the feature self-calibration convolution block to obtain a dimension lifting feature map;
the first adding module is used for adding the dimension-rising characteristic diagram to the space characteristic self-calibration characteristic diagram to obtain an adding characteristic diagram;
the first maximum pooling layer is configured to downsample the added feature map to obtain an output feature map of a feature self-calibration convolution block:
x_{l+1} = MaxPool( F(x_l, W_l) + U(x_l) )

wherein U(·) denotes the dimension-increasing operation, F(·) denotes the extracted features, x_l denotes the input feature map of the l-th feature self-calibration convolution block, W_l denotes the weight parameters of the l-th feature self-calibration convolution block, and MaxPool(·) denotes max pooling.
5. The method of claim 4, wherein the channel characteristic self-calibration module comprises an adaptive maximum pooling layer, an adaptive average pooling layer, a multi-layer perceptron, a second addition module, a channel weight normalization module, and a first multiplication module,
the adaptive maximum pooling layer is used for carrying out adaptive maximum pooling on the height of the feature diagram and the width of the feature diagram in the output feature diagram of the third convolution block at the same time, and inducing the maximum response on each channel dimension;
the self-adaptive average pooling layer is used for carrying out self-adaptive average pooling on the height of the feature diagram and the width of the feature diagram in the output feature diagram of the third convolution block at the same time, and inducing the average response on each channel dimension;
the multi-layer perceptron is used for sequentially carrying out feature scaling, feature restoration and channel dimension information extraction on the maximum value response on each channel dimension to obtain maximum value response output, and sequentially carrying out feature scaling, feature restoration and channel dimension information extraction on the average response on each channel dimension to obtain average response output;
the second adding module is used for adding and fusing the maximum response output and the average response output to obtain an adding characteristic diagram;
The channel weight normalization module is used for normalizing the weights of the channels in the addition feature map by using an activation function to obtain channel normalized weights;
the first multiplication module is used for multiplying the channel normalization weight with the output feature map of the third convolution block to obtain a channel feature self-calibration feature map:
F' = M_c(F) ⊗ F,  M_c(F) = σ( MLP( AvgPool(F) ) + MLP( MaxPool(F) ) )

wherein F' denotes the channel feature self-calibration feature map, M_c(·) denotes the channel feature self-calibration template, σ denotes the activation function, MLP denotes the multi-layer perceptron, AvgPool denotes adaptive average pooling, MaxPool denotes adaptive max pooling, ⊗ denotes element-wise multiplication, F denotes the output feature map of the third convolution block, F ∈ R^(C×H×W), R denotes the vector space, C denotes the number of channels of the feature map F, H denotes the height of the feature map F, and W denotes the width of the feature map F.
6. The method for radar active interference identification based on a multi-loss feature self-calibration network of claim 4, wherein the spatial feature self-calibration module comprises a second max-pooling layer, an average pooling layer, a splicing module, a convolution module, a spatial weight normalization module, and a second multiplication module, wherein,
the second maximum pooling layer is used for carrying out channel dimension maximum pooling on the channel characteristic self-calibration feature map to obtain a maximum pooling feature map compressed to a space dimension;
The average pooling layer is used for carrying out channel dimension average pooling on the channel characteristic self-calibration feature map to obtain an average pooling feature map compressed to a space dimension;
the splicing module is used for fusing the maximum pooling feature images and the average pooling feature images by adopting a splicing method to obtain splicing feature images;
the convolution module is used for carrying out convolution mapping on the spliced feature images to obtain mapping feature images;
the spatial weight normalization module is used for normalizing the spatial weight in the mapping feature map by using an activation function to obtain a spatial normalized weight;
the second multiplying module is configured to multiply the spatial normalization weight with the channel characteristic self-calibration feature map to obtain a spatial characteristic self-calibration feature map:
F'' = M_s(F') ⊗ F',  M_s(F') = σ( Conv_w( [ AvgPool(F') ; MaxPool(F') ] ) )

wherein F'' denotes the spatial feature self-calibration feature map, M_s(·) denotes the spatial feature self-calibration template, w denotes the convolution weights, σ denotes the activation function, AvgPool denotes average pooling, MaxPool denotes max pooling, [ · ; · ] denotes splicing along the channel dimension, ⊗ denotes element-wise multiplication, F' denotes the channel feature self-calibration feature map, F' ∈ R^(C×H×W), R denotes the vector space, C denotes the number of channels of the feature map F', H denotes the height of the feature map F', and W denotes the width of the feature map F'.
7. The method for identifying radar active interference based on a multi-loss feature self-calibration network according to claim 4, wherein the overall formula of the channel feature self-calibration module and the spatial feature self-calibration module is expressed as:
F'' = M_s( M_c(F) ⊗ F ) ⊗ ( M_c(F) ⊗ F )

wherein F'' denotes the output feature map of the spatial feature self-calibration module, F denotes the output feature map of the third convolution block, F ∈ R^(C×H×W), R denotes the vector space, C denotes the number of channels of the feature map F, H denotes the height of the feature map F, W denotes the width of the feature map F, M_c(·) denotes the channel feature self-calibration template, M_s(·) denotes the spatial feature self-calibration template, and ⊗ denotes element-wise multiplication.
8. The method of claim 1, wherein the feature mapping module comprises a first fully-connected layer and a second fully-connected layer, wherein,
the first full connection layer is used for projecting the flattened features of the different types of interference feature vectors into a feature space to obtain embedded vectors;
and the second full connection layer is used for classifying the embedded vectors to obtain the identification classification result.
9. The radar active interference identification method based on the multi-loss feature self-calibration network according to claim 1, wherein the reconstruction module comprises a third full-connection layer, a fourth nonlinear layer, a fourth full-connection layer, a fifth nonlinear layer, a fifth full-connection layer and an activation function layer which are sequentially connected.
10. The method for identifying radar active interference based on a multi-loss feature self-calibration network according to claim 1, wherein the mixing loss function is:
L = L_ce + λ1·L_clu + λ2·L_mse

wherein λ1 and λ2 denote adjustable weight hyperparameters; L_ce denotes the cross entropy loss function, L_ce = −Σ_{i=1}^{N} y_i log(p_i), where N denotes the batch size, y_i denotes the true label of training sample i, and p_i denotes the probability distribution of the prediction result; L_clu denotes the clustering loss function, L_clu = (1/(2N)) Σ_{i=1}^{N} ||f_i − c_{y_i}||², where c_{y_i} denotes the feature center of the category to which the feature vector f_i belongs; and L_mse denotes the mean square error loss function, L_mse = (1/P) Σ_{j=1}^{P} (x̂_j − x_j)², where x̂_j denotes a reconstructed pixel, x_j denotes the corresponding original image pixel, and P denotes the number of pixels.
CN202310741199.2A 2023-06-21 2023-06-21 Radar active interference identification method based on multi-loss characteristic self-calibration network Active CN116482618B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310741199.2A CN116482618B (en) 2023-06-21 2023-06-21 Radar active interference identification method based on multi-loss characteristic self-calibration network

Publications (2)

Publication Number Publication Date
CN116482618A true CN116482618A (en) 2023-07-25
CN116482618B CN116482618B (en) 2023-09-19

Family

ID=87212292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310741199.2A Active CN116482618B (en) 2023-06-21 2023-06-21 Radar active interference identification method based on multi-loss characteristic self-calibration network

Country Status (1)

Country Link
CN (1) CN116482618B (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE2451710C1 (en) * 1972-12-08 1992-05-14 Siemens Ag Arrangement for disturbing a monopulse tracking radar device by re-emission in cross polarization
US5239309A (en) * 1991-06-27 1993-08-24 Hughes Aircraft Company Ultra wideband radar employing synthesized short pulses
RU2193782C2 (en) * 2000-09-19 2002-11-27 Федеральное государственное унитарное предприятие "Научно-исследовательский институт измерительных приборов" Procedure evaluating characteristics of radar exposed to active jamming
CN112731309A (en) * 2021-01-06 2021-04-30 哈尔滨工程大学 Active interference identification method based on bilinear efficient neural network
US20220349986A1 (en) * 2021-04-30 2022-11-03 Nxp B.V. Radar communication with interference suppression
CN114201987A (en) * 2021-11-09 2022-03-18 北京理工大学 Active interference identification method based on self-adaptive identification network
CN114488140A (en) * 2022-01-24 2022-05-13 电子科技大学 Small sample radar one-dimensional image target identification method based on deep migration learning
CN114895263A (en) * 2022-05-26 2022-08-12 西安电子科技大学 Radar active interference signal identification method based on deep migration learning
CN115494466A (en) * 2022-09-22 2022-12-20 东南大学 Self-calibration method for distributed radar
CN116047427A (en) * 2023-03-29 2023-05-02 西安电子科技大学 Small sample radar active interference identification method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WEIWEI FAN: "Deceptive jamming template synthesis for SAR based on generative adversarial nets", SIGNAL PROCESSING *
蒋留兵;周小龙;车俐;: "基于无载波超宽带雷达的小样本人体动作识别", 电子学报, no. 03 *
马博俊: "基于贝叶斯深度学习的一维雷达有源干扰信号识别方法", 信号处理, vol. 39, no. 2 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117233706A (en) * 2023-11-16 2023-12-15 西安电子科技大学 Radar active interference identification method based on multilayer channel attention mechanism
CN117233706B (en) * 2023-11-16 2024-02-06 西安电子科技大学 Radar active interference identification method based on multilayer channel attention mechanism

Also Published As

Publication number Publication date
CN116482618B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
CN107316013B (en) Hyperspectral image classification method based on NSCT (non-subsampled Contourlet transform) and DCNN (data-to-neural network)
Liu et al. Polarimetric convolutional network for PolSAR image classification
CN110135267B (en) Large-scene SAR image fine target detection method
CN109949255B (en) Image reconstruction method and device
Wang et al. TS-I3D based hand gesture recognition method with radar sensor
CN109522857B (en) People number estimation method based on generation type confrontation network model
Dong et al. Exploring vision transformers for polarimetric SAR image classification
Li et al. Complex contourlet-CNN for polarimetric SAR image classification
CN109145979A (en) sensitive image identification method and terminal system
CN110516728B (en) Polarized SAR terrain classification method based on denoising convolutional neural network
CN105117736B (en) Classification of Polarimetric SAR Image method based on sparse depth heap stack network
CN104732244A (en) Wavelet transform, multi-strategy PSO (particle swarm optimization) and SVM (support vector machine) integrated based remote sensing image classification method
Lu et al. Blind image quality assessment based on the multiscale and dual‐domains features fusion
CN114595732B (en) Radar radiation source sorting method based on depth clustering
Zhang et al. Polarimetric HRRP recognition based on ConvLSTM with self-attention
CN116482618B (en) Radar active interference identification method based on multi-loss characteristic self-calibration network
CN113344045B (en) Method for improving SAR ship classification precision by combining HOG characteristics
CN115311502A (en) Remote sensing image small sample scene classification method based on multi-scale double-flow architecture
CN113673312A (en) Radar signal intra-pulse modulation identification method based on deep learning
CN111639697A (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
Guo et al. Radar signal recognition based on CNN with a hybrid attention mechanism and skip feature aggregation
CN113989256A (en) Detection model optimization method, detection method and detection device for remote sensing image building
CN114067217A (en) SAR image target identification method based on non-downsampling decomposition converter
CN112800882A (en) Mask face posture classification method based on weighted double-flow residual error network
CN117115675A (en) Cross-time-phase light-weight spatial spectrum feature fusion hyperspectral change detection method, system, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant