CN112232369B - Anti-foil strip interference method based on convolutional neural network - Google Patents

Anti-foil strip interference method based on convolutional neural network

Info

Publication number
CN112232369B
CN112232369B (application CN202010971162.5A)
Authority
CN
China
Prior art keywords: data, neural network, foil, target, convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010971162.5A
Other languages
Chinese (zh)
Other versions
CN112232369A (en)
Inventor
王宏波
黄靖涵
冯苗苗
上官泽鹏
金煌煌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202010971162.5A
Publication of CN112232369A
Application granted
Publication of CN112232369B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/36: Means for anti-jamming, e.g. ECCM, i.e. electronic counter-counter measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks


Abstract

The invention discloses an anti-foil strip interference method based on a convolutional neural network; the method has a simple calculation process, requires little human intervention, and adapts generally to foil strip interference in its various forms. The method comprises the following steps: (10) simulate echo data: foil strip and target Doppler echo data are generated by simulation; (20) process the simulation data: the foil strip and target Doppler echo data are extracted and sampled, then normalized and truncated/zero-padded; (30) construct a data set: typical target and foil strip data are mixed and classification labels are added to build a neural network training data set; (40) determine the convolutional neural network structure: a specific convolutional neural network structure is determined, and the network is trained and tuned with the training data set; (50) classify and identify foil strips and targets: the actually measured foil strip and target echo data are classified and identified by the trained neural network, achieving resistance to foil strip interference.

Description

Anti-foil strip interference method based on convolutional neural network
Technical Field
The invention belongs to the field of radar electronic countermeasure, and particularly relates to a method for resisting foil strip interference.
Background
Foil strips are an important interference device in passive electronic countermeasures: they are simple to manufacture, inexpensive, convenient to use, wide in interference coverage, and technologically mature, and they have seen early and widespread use. Distributed randomly in space, the strips scatter the radar signal with noise-like characteristics; the resulting returns can mask the target echo signal or form false targets as deceptive interference, thereby disrupting the adversary's radar detection and receiving system.
At present, foil strip interference mainly takes the form of repeater interference, dilution interference, and centroid interference; through the strong interference of the foil strip cloud echo, the target is protected while the radar is deceived and jammed.
In the prior art, foil strips and targets are mostly distinguished by statistical-feature analysis and time-frequency analysis of the foil strip radar echo, for example based on the Doppler-spectrum broadening of foil strips or on foil strip polarization echo characteristics.
The prior art therefore has the following problems: identification and classification based on the statistical features of target and foil strip echoes involve a complex calculation process, are limited to particular use scenarios, and place very high demands on the stability and accuracy of the prior knowledge.
Disclosure of Invention
The aim of the invention is to provide an anti-foil strip interference method based on a convolutional neural network that has a simple calculation process, requires little human intervention, and adapts generally to foil strip interference in its various forms.
The technical solution for realizing the purpose of the invention is as follows:
a foil strip interference resisting method based on a convolutional neural network is characterized by comprising the following steps:
(10) Simulate echo data: foil strip and target Doppler echo data are generated by simulation;
(20) Process the simulation data: extract and sample the foil strip and target Doppler echo data, then apply normalization and truncation/zero-padding;
(30) Construct a data set: mix typical target and foil strip data, add classification labels, and build a neural network training data set;
(40) Determine the convolutional neural network structure: determine a specific convolutional neural network structure, and train and tune the convolutional neural network with the training data set;
(50) Classify and identify foil strips and targets: classify and identify the actually measured foil strip and target echo data with the trained neural network to achieve resistance to foil strip interference.
Compared with the prior art, the invention has the following remarkable advantages:
1. The processing of the foil strip and target echo data is simple, with no need for complex data feature extraction.
2. In practical application, only the determined convolutional neural network structure is needed to classify target and foil strip echoes; identification is fast and the resistance to foil strip interference is reliable.
drawings
Fig. 1 is a main flow chart of the method for foil strip interference resistance based on the convolutional neural network.
FIG. 2 is a flow chart of the simulated echo data step in FIG. 1.
FIG. 3 is a flow chart of the simulation data processing step in FIG. 1.
Fig. 4 is a structural diagram of the constructed convolutional neural network.
FIG. 5 is a graph of error rate change of foil strip and target classification by a convolutional neural network.
Detailed Description
As shown in FIG. 1, the method for foil strip interference resistance based on the convolutional neural network comprises the following steps:
(10) Simulation of echo data: foil strips and target Doppler echo data are generated in a simulation mode;
as shown in fig. 2, the step of (10) simulating echo data comprises:
(11) Foil strip data simulation: foil strip Doppler echo data are generated through simulation according to the following formula,
Figure BDA0002684082820000021
where k_i is the amplitude coefficient of the echo signal, k_{1i} is the transmission coefficient, k_{2i} is the range-gate loss of the system, k_{3i} is the video gain, k_d is the Doppler gain, ω_{di} is the Doppler angular frequency of scattering cell i, τ_i is the delay of the echo signal relative to the transmitted signal, and τ_0 is the signal pulse width;
In the foil strip cloud Doppler simulation, the radar scattering area of the foil strip cloud is expressed as:
σ_i = 0.86 λ² cos⁴θ  (2)
where λ is the radar operating wavelength and θ is the angle between the incident electromagnetic wave and the foil strip;
(12) Target data simulation: target doppler echo data is generated by simulation according to the following formula,
Figure BDA0002684082820000022
where k_{it} is the amplitude coefficient of the echo signal, related to the radar cross section σ_i of target facet i, the transmitted signal power P_{ti}, and the missile-target distance R_i; k_{1i} is the transmission coefficient, k_{2i} is the range-gate loss of the system, k_{3i} is the video gain, k_d is the Doppler gain, ω_{di} is the Doppler angular frequency of scattering cell i, τ_i is the delay of the echo signal relative to the transmitted signal, and τ_0 is the signal pulse width.
The simulation data consist of target Doppler echo data under two engagement conditions, head-on and tail-chase. Head-on: yaw angle range 165° ≤ α ≤ 192°; pitch angle range −15° ≤ β ≤ 12°; roll angle range −15° ≤ γ ≤ 12°. Tail-chase: yaw angle range −15° ≤ α ≤ 12°; pitch angle range −15° ≤ β ≤ 12°; roll angle range −15° ≤ γ ≤ −12°.
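As a concrete illustration of step (10), the scattering-area formula above and a superposition of Doppler-shifted returns can be sketched in Python. The patent's echo expression (1) appears only as an image, so the per-cell pulse model below, along with the wavelength, Doppler spread, dipole count, and function names, are assumptions rather than the patented formula:

```python
import numpy as np

def chaff_rcs(wavelength, theta):
    """Radar scattering area of a foil strip, Eq. (2):
    sigma_i = 0.86 * lambda^2 * cos^4(theta)."""
    return 0.86 * wavelength ** 2 * np.cos(theta) ** 4

def cell_echo(t, k_i, omega_di, tau_i, tau_0):
    """Echo of one scattering cell: an amplitude-scaled, Doppler-modulated
    rectangular pulse. Illustrative stand-in for the patent's Eq. (1)."""
    gate = ((t - tau_i) >= 0) & ((t - tau_i) < tau_0)
    return k_i * gate * np.cos(omega_di * (t - tau_i))

# Superpose many randomly oriented dipoles into a foil strip cloud return.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1e-3, 2560)                 # 1 ms observation window
lam = 0.03                                       # 3 cm wavelength (assumed)
echo = np.zeros_like(t)
for _ in range(100):                             # 100 dipoles (assumed)
    theta = rng.uniform(0.0, np.pi / 2)          # angle to the incident wave
    amp = np.sqrt(chaff_rcs(lam, theta))         # echo amplitude ~ sqrt(RCS)
    omega_di = 2 * np.pi * rng.normal(0.0, 200)  # spread Doppler (rad/s)
    tau_i = rng.uniform(0.0, 2e-4)               # per-cell delay
    echo += cell_echo(t, amp, omega_di, tau_i, 5e-4)
```

Summing many such cells reproduces the noise-like, Doppler-broadened character of a foil strip cloud return described in the background section.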
(20) Process the simulation data: extract and sample the foil strip and target Doppler echo data, then apply normalization and truncation/zero-padding;
as shown in fig. 3, the (20) simulation data processing step includes:
(21) Data extraction and sampling: each group of foil strip or target Doppler echo data is taken as one row and sampled at equal intervals with a 1/20 decimation ratio, yielding foil strip and target Doppler echo data of length 128 with unchanged envelope characteristics;
(22) Normalization and truncation/zero-padding after sampling: the extracted and sampled foil strip data and target data are respectively normalized and truncated/zero-padded according to the following formulas:
(221) The foil strip data are normalized according to the following formula:
X′ = X / x_max  (4)
where X is the sampled foil strip data, recorded as:
X = [x_1, x_2, x_3, …, x_128]  (5)
and x_max is the maximum value in the extracted and sampled foil strip data, recorded as:
x_max = max{X}  (6);
(222) The extracted and sampled target data are normalized according to the following formula:
T′ = T / t_max  (7)
where T is the sampled target data, recorded as:
T = [t_1, t_2, t_3, …, t_128]  (8)
and t_max is the maximum value in the sampled target data, recorded as:
t_max = max{T}  (9);
(223) For the half-illumination case, the normalized foil strip data are truncated and zero-padded according to the following formula:
X_c = [0, …, 0, X′, 0, …, 0]  (10);
(224) For the half-illumination case, the normalized target data are truncated and zero-padded according to the following formula:
T_c = [0, …, 0, T′, 0, …, 0]  (11).
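Steps (21) and (22) can be sketched as follows; the function name, the symmetric split of the zero padding, and the example lengths are assumptions (the patent does not say how the zeros in Eqs. (10) and (11) are divided around the data):

```python
import numpy as np

def preprocess(raw_echo, decim=20, out_len=128, pad_to=None):
    """Step (20) sketch: 1/20 equal-interval sampling, peak normalization
    (Eqs. 4-9), then truncation/zero-padding (Eqs. 10-11).
    The symmetric placement of the zeros is an assumption."""
    x = np.asarray(raw_echo, dtype=float)[::decim][:out_len]  # decimate to length 128
    x = x / np.max(x)                                         # X' = X / x_max, x_max = max{X}
    if pad_to is not None and pad_to > x.size:                # X_c = [0...,0, X', 0...,0]
        extra = pad_to - x.size
        x = np.pad(x, (extra // 2, extra - extra // 2))
    return x

sample = preprocess(np.sin(np.linspace(0.0, 20.0, 2560)), pad_to=160)
```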
(30) Constructing a data set: mixing typical targets and foil strip data, adding classification labels, and constructing a neural network training data set;
the step (30) of constructing the data set specifically comprises:
The processed foil strip and target Doppler echo data sets are mixed randomly, each row of the mixture being one group of echo data; the category of each row is labeled at the tail of the row as 0 or 1. In this way, a training set of 1000 groups of echoes is constructed for training the convolutional neural network.
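The construction above can be sketched as follows; the label assignment (0 for foil strip, 1 for target) and the 500/500 split are assumptions, since the patent specifies only that labels are 0 or 1 and that 1000 groups are built in total:

```python
import numpy as np

def build_dataset(chaff_rows, target_rows, seed=0):
    """Step (30) sketch: stack foil strip and target echo rows, append the
    class label (0 or 1) at the tail of each row, and mix the rows randomly.
    The assignment 0 = foil strip, 1 = target is an assumption."""
    features = np.vstack([chaff_rows, target_rows])
    labels = np.concatenate([np.zeros(len(chaff_rows)), np.ones(len(target_rows))])
    data = np.column_stack([features, labels])   # label is the last column
    np.random.default_rng(seed).shuffle(data)    # random row-wise mixing
    return data

# 500 foil strip rows + 500 target rows of length 128 -> the 1000-group training set
train = build_dataset(np.random.randn(500, 128), np.random.randn(500, 128))
```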
(40) Determining the convolutional neural network structure: determining a specific convolutional neural network structure, and training and debugging the convolutional neural network by using a training data set;
the neural network model consists of the following structure:
Convolution layers: the convolution layers perform feature extraction on the input data. Each convolution layer contains several convolution kernels, and every element of each kernel has a corresponding weight coefficient and bias value.
Pooling layer: a pooling layer is built after each convolution layer to reduce the dimensionality of the feature vector output by the convolution layer; it is a scaling mapping of the previous layer's data. The model adopts max pooling: the max-pooling operator extracts the local maximum from the input features, which reduces the number of trainable parameters and improves the robustness of the features.
Fully connected layer: the fully connected layer is built at the end of the hidden layers of the convolutional neural network and passes signals only to other fully connected layers.
The activation function selects a ReLU function, whose expression is:
f(x)=max(0,x) (12)
the constructed convolutional neural network structure is shown in fig. 4.
The step (40) of determining the convolutional neural network structure specifically comprises the following steps:
determining a specific convolutional neural network structure, training by using a training data set, and debugging the convolutional neural network to adapt to the classification and identification of the foil strips and the target Doppler echoes;
in the specific convolutional neural network structure, the following parameters are set for each layer of the convolutional neural network:
the input data size is set to 128 × 1; the kernel size of the first convolution layer is set to 3 × 1 with stride 1;
the first pooling layer has size 3 × 1 and stride 3;
the second convolution layer has kernel size 3 × 1 and stride 1;
the second pooling layer has size 2 × 1 and stride 2;
the third convolution layer has kernel size 5 × 1 and stride 1;
the third pooling layer has size 2 × 1 and stride 2;
the gradient descent method of the convolutional neural network is mini-batch gradient descent, with the mini-batch size set to 50 and the gradient-descent learning rate set to 0.1.
The fully connected layer yields 96 neurons, which are fed to a Softmax classifier to judge the category of the input;
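The layer parameters above can be checked arithmetically. Assuming unpadded ('valid') convolutions, the time dimension shrinks from 128 to 8 across the three convolution/pooling stages; with an assumed 12 feature maps after the last convolution layer, flattening yields exactly the 96 fully connected neurons stated above. The padding mode and channel count are inferences, not given in the patent:

```python
def conv_out(n, k, stride=1):
    """Output length of an unpadded ('valid') 1-D convolution."""
    return (n - k) // stride + 1

def pool_out(n, k, stride):
    """Output length of a 1-D pooling window."""
    return (n - k) // stride + 1

n = 128                # input: 128 x 1
n = conv_out(n, 3)     # conv1, kernel 3x1, stride 1 -> 126
n = pool_out(n, 3, 3)  # pool1, 3x1, stride 3        -> 42
n = conv_out(n, 3)     # conv2, kernel 3x1, stride 1 -> 40
n = pool_out(n, 2, 2)  # pool2, 2x1, stride 2        -> 20
n = conv_out(n, 5)     # conv3, kernel 5x1, stride 1 -> 16
n = pool_out(n, 2, 2)  # pool3, 2x1, stride 2        -> 8

channels = 12          # assumed channel count: 8 * 12 = 96 FC inputs
flattened = n * channels
```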
As shown in FIG. 5, as the number of training iterations increases, the error rate on the training set converges, showing that the convolutional neural network can adapt to the classification of foil strip and target Doppler echo data. Under various target engagement conditions, the network's classification accuracy for foil strips and targets remains above 85%, which satisfies the requirements of foil strip/target identification, classification, and interference resistance.
(50) Foil strip and target classification and identification: the actually measured foil strip and target echo data are classified and identified by the trained neural network to achieve resistance to foil strip interference.
After the convolutional neural network model is determined, foil strip and target echo data can be input to the model in groups of 128 points each; after processing by the neural network, the category of the echo data is output, completing the classification and identification of foil strips and targets and thereby achieving resistance to foil strip interference.
In summary, based on the convolutional neural network, the Doppler echoes of the target and the foil strips are obtained by simulation, a data set for training the convolutional neural network is constructed, and a convolutional neural network structure suited to classifying and identifying targets and foil strips is determined. In practical application, the acquired foil strip and target echo data are input to the convolutional neural network, and resistance to foil strip interference is achieved through the classification capability of the network. The whole process requires no extensive feature extraction from the echo data; signal processing time is short and identification is fast, with the identification process taking less than 5 ms, which satisfies practical application requirements.

Claims (4)

1. A foil strip interference resisting method based on a convolutional neural network is characterized by comprising the following steps:
(10) Simulation echo data: foil strips and target Doppler echo data are generated in a simulation mode;
(20) Simulation data processing: extracting and sampling the foil strip and target Doppler echo data, and performing normalization and truncation/zero-padding processing;
(30) Constructing a data set: mixing typical targets and foil data, adding classification labels, and constructing a neural network training data set;
(40) Determining the convolutional neural network structure: determining a specific convolutional neural network structure, and training and debugging the convolutional neural network by using a training data set;
(50) Foil strips and target classification and identification: classifying and identifying the actually measured foil strips and target echo data by using the trained neural network to realize foil strip interference resistance;
the (20) simulation data processing step includes:
(21) Data extraction and sampling: each group of foil strip or target Doppler echo data is taken as one row and sampled at equal intervals with a 1/20 decimation ratio, yielding foil strip and target Doppler echo data of length 128 with unchanged envelope characteristics;
(22) Normalization and truncation/zero-padding after sampling: the extracted and sampled foil strip data and target data are respectively normalized and truncated/zero-padded according to the following formulas:
(221) The foil strip data are normalized according to the following formula:
X′ = X / x_max  (1)
where X is the sampled foil strip data, recorded as:
X = [x_1, x_2, x_3, …, x_128]  (2)
and x_max is the maximum value in the extracted and sampled foil strip data, recorded as:
x_max = max{X}  (3);
(222) The extracted and sampled target data are normalized according to the following formula:
T′ = T / t_max  (4)
where T is the sampled target data, recorded as:
T = [t_1, t_2, t_3, …, t_128]  (5)
and t_max is the maximum value in the sampled target data, recorded as:
t_max = max{T}  (6);
(223) For the half-illumination case, the normalized foil strip data are truncated and zero-padded according to the following formula:
X_c = [0, …, 0, X′, 0, …, 0]  (7);
(224) For the half-illumination case, the normalized target data are truncated and zero-padded according to the following formula:
T_c = [0, …, 0, T′, 0, …, 0]  (8).
2. the foil strip interference resistant method according to claim 1, wherein the (10) simulating echo data step comprises:
(11) Foil strip data simulation: foil strip Doppler echo data are generated through simulation according to the following formula,
Figure FDA0003747903200000021
where k_i is the amplitude coefficient of the echo signal, k_{1i} is the transmission coefficient, k_{2i} is the range-gate loss of the system, k_{3i} is the video gain, k_d is the Doppler gain, ω_{di} is the Doppler angular frequency of scattering cell i, τ_i is the delay of the echo signal relative to the transmitted signal, and τ_0 is the signal pulse width;
in the foil strip cloud Doppler simulation, the radar scattering area of the foil strip cloud is expressed as:
σ_i = 0.86 λ² cos⁴θ  (10)
where λ is the radar operating wavelength and θ is the angle between the incident electromagnetic wave and the foil strip;
(12) Target data simulation: target Doppler echo data is generated through simulation according to the following formula,
Figure FDA0003747903200000022
where k_{it} is the amplitude coefficient of the echo signal, related to the radar cross section σ_i of target facet i, the transmitted signal power P_{ti}, and the missile-target distance R_i; k_{1i} is the transmission coefficient, k_{2i} is the range-gate loss of the system, k_{3i} is the video gain, k_d is the Doppler gain, ω_{di} is the Doppler angular frequency of scattering cell i, τ_i is the delay of the echo signal relative to the transmitted signal, and τ_0 is the signal pulse width.
3. The foil strip interference resistant method according to claim 1, wherein the step of (30) constructing a data set is in particular:
mixing the processed foil strips with a target Doppler echo data set in a random mode, wherein each line of data is a group of echo data during mixing, labeling the category of each line of data at the tail of the data set in a 0 or 1 mode, and constructing 1000 groups of echo training sets applied to training a convolutional neural network by using the mode.
4. The foil strip interference resistant method according to claim 1, wherein the step of (40) determining the convolutional neural network structure is specifically:
determining a specific convolutional neural network structure, training by using a training data set, and debugging the convolutional neural network to adapt to the classification and identification of the foil strips and the target Doppler echoes;
in the specific convolutional neural network structure, the following parameters are set for each layer of the convolutional neural network:
the input data size is set to 128 × 1; the kernel size of the first convolution layer is set to 3 × 1 with stride 1;
the first pooling layer has size 3 × 1 and stride 3;
the second convolution layer has kernel size 3 × 1 and stride 1;
the second pooling layer has size 2 × 1 and stride 2;
the third convolution layer has kernel size 5 × 1 and stride 1;
the third pooling layer has size 2 × 1 and stride 2;
the gradient descent method of the convolutional neural network is mini-batch gradient descent, with the mini-batch size set to 50 and the gradient-descent learning rate set to 0.1.
CN202010971162.5A 2020-09-16 2020-09-16 Anti-foil strip interference method based on convolutional neural network Active CN112232369B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010971162.5A CN112232369B (en) 2020-09-16 2020-09-16 Anti-foil strip interference method based on convolutional neural network


Publications (2)

Publication Number Publication Date
CN112232369A CN112232369A (en) 2021-01-15
CN112232369B true CN112232369B (en) 2022-10-28

Family

ID=74106958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010971162.5A Active CN112232369B (en) 2020-09-16 2020-09-16 Anti-foil strip interference method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN112232369B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109934101A (en) * 2019-01-24 2019-06-25 西安电子科技大学 Radar clutter recognition method based on convolutional neural networks
CN110378191A (en) * 2019-04-25 2019-10-25 东南大学 Pedestrian and vehicle classification method based on millimeter wave sensor

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2845191B1 (en) * 2012-05-04 2019-03-13 Xmos Inc. Systems and methods for source signal separation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109934101A (en) * 2019-01-24 2019-06-25 西安电子科技大学 Radar clutter recognition method based on convolutional neural networks
CN110378191A (en) * 2019-04-25 2019-10-25 东南大学 Pedestrian and vehicle classification method based on millimeter wave sensor



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant