CN111652170A - Secondary radar signal processing method based on two-channel residual deep neural network - Google Patents

Secondary radar signal processing method based on two-channel residual deep neural network

Info

Publication number
CN111652170A
Authority
CN
China
Prior art keywords
training
layers
data
secondary radar
residual error
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010517007.6A
Other languages
Chinese (zh)
Inventor
沈晓峰
都雪
廖阔
王子健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202010517007.6A
Publication of CN111652170A
Current legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • G06F2218/04Denoising
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/74Systems using reradiation of radio waves, e.g. secondary radar systems; Analogous systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/023Interference mitigation, e.g. reducing or avoiding non-intentional interference with other HF-transmitters, base station transmitters for mobile communication or other radar systems, e.g. using electro-magnetic interference [EMI] reduction techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The invention belongs to the technical field of radar, and in particular relates to a secondary radar signal processing method based on a two-channel residual deep neural network. The method first acquires sample data of secondary radar response signals and preprocesses the data set. A novel two-channel residual deep neural network is then constructed based on a deep learning approach. The network consists of two feature extraction channels; within each channel residual additions are performed several times, and the two channels are also linked to each other by residual connections. The training set and validation set are input to train the two-channel residual deep network, and training stops when the parameters are optimal. Finally, the test data are fed into the network to predict the secondary radar response signal. The network model reduces information loss and fully extracts the deep features of secondary radar signals. The method has excellent denoising performance, can accurately predict the secondary radar time-series signal, and meets the noise suppression requirement.

Description

Secondary radar signal processing method based on two-channel residual deep neural network
Technical Field
The invention belongs to the technical field of radar, and in particular relates to a secondary radar signal processing method based on a two-channel residual deep neural network.
Background
The secondary radar system is a principal surveillance means for national defense and domestic civil aviation. During transmission its signals are disturbed by noise, which degrades the clarity of signal transmission and reduces the stability and reliability of radio-wave propagation. Traditional denoising methods have some effect, but they leave residual noise that degrades signal detection. Machine learning is now developing rapidly, and as a branch of machine learning, deep learning and convolutional neural networks have achieved great success in fields such as the AlphaGo man-machine Go matches and large-scale network data analysis. How to remove signal noise using deep learning and the powerful capability of CNNs is one of the open problems facing the signal processing field. It is therefore of important theoretical and practical significance to explore the machine learning and deep learning techniques that have risen in recent years and to train a neural network dedicated to removing noise from secondary radar signals.
Disclosure of Invention
The aim of the invention is to construct a two-channel residual deep neural network based on a deep learning algorithm and to provide a method that can effectively suppress noise and predict secondary radar response signals.
The technical solution adopted by the invention is a secondary radar signal processing method based on a two-channel residual deep neural network, comprising the following steps:
S1, constructing a training set and a validation set:
The demodulated secondary radar response time-series signals with added white Gaussian noise are taken as the training data, denoted

X ∈ R^(N×Z)

where N denotes the number of signal samples and Z the time step of each signal; the original secondary radar response signals without added noise are taken as the labels, denoted

Y ∈ R^(N×Z)
The data are divided into a training set and a validation set according to a set proportion;
The response-signal training samples are randomly shuffled, and the sample data and labels are dimension-expanded into 3D tensors of the form (n, t, g), where n denotes the number of samples, t the time step, and g the number of feature layers;
The data are normalized, mapping all features onto the same 0-1 scale, to obtain the training set and the validation set;
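As an illustration of step S1, the following is a minimal preprocessing sketch in Python/NumPy. The array names, the single global min-max scaling applied to both samples and labels, and the validation ratio are assumptions made for illustration, not details taken from the patent.

```python
import numpy as np

def preprocess(noisy, clean, val_ratio=0.25, seed=0):
    """noisy, clean: arrays of shape (N, Z) holding samples and their labels."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(noisy))       # randomly shuffle the samples
    noisy, clean = noisy[idx], clean[idx]

    noisy = noisy[..., np.newaxis]          # dimension expansion: (N, Z) -> (N, Z, 1)
    clean = clean[..., np.newaxis]

    lo, hi = noisy.min(), noisy.max()       # map all features onto the same 0-1 scale
    noisy = (noisy - lo) / (hi - lo)
    clean = (clean - lo) / (hi - lo)

    n_val = int(len(noisy) * val_ratio)     # proportional split into training / validation
    return (noisy[n_val:], clean[n_val:]), (noisy[:n_val], clean[:n_val])
```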
S2, constructing a two-channel residual deep neural network comprising a shallow feature extraction part, a deep feature extraction part and an up-sampling part;
The shallow feature extraction part comprises two serially connected convolution layers with kernel size 1 × 3 and data tensor size (n, 512, 64);
The deep feature extraction part comprises two parallel branches of identical structure; each branch comprises 7 one-dimensional convolution layers, 2 pooling layers and 3 residual addition layers, the data tensor sizes of the 7 one-dimensional convolution layers being, in order, (n, 512, 128), (n, 512, 128), (n, 256, 128), (n, 256, 128), (n, 128, 128), (n, 128, 128) and (n, 128, 64); after every 2 convolution units the data undergo one residual addition, in which the earlier output tensor is re-injected into the downstream data flow; after each of the first two residual additions a max pooling operation with pool_size = 2 is performed, down-sampling the training parameters by a factor of 2; after the third residual addition the data are fed into the 7th convolution layer, and the output tensors of the 7th convolution layers of the two channels are concatenated to form the output of the deep feature extraction part; residual connections are also made between the two channels at the corresponding residual addition nodes, which strengthens the extraction of signal features and reduces the loss of feature information;
The up-sampling part comprises two up-sampling layers of size 2 and three convolution layers arranged alternately, gradually restoring the tensor time step to 512; the last convolution layer has kernel size 1 × 1 and 1 feature layer and outputs the predicted time-series signal data with tensor size (n, 512, 1);
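For concreteness, a minimal sketch of such a network in Python with the Keras functional API is given below, following the layer counts and tensor sizes described above. The 1 × 1 projections on the shortcut paths, the exact form of the cross-channel residual links, and the activation/padding choices are assumptions made to keep the tensor shapes compatible; this is an illustrative sketch, not the definitive implementation.

```python
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # 1-D convolution unit with kernel size 3 ("1 x 3")
    return layers.Conv1D(filters, 3, padding="same", activation="relu")(x)

def build_model(time_steps=512):
    inp = layers.Input(shape=(time_steps, 1))

    # Shallow feature extraction: two serial Conv1D layers -> (n, 512, 64)
    x = conv_block(conv_block(inp, 64), 64)

    # Deep feature extraction: two parallel branches with three residual stages each
    a, b = x, x
    for i in range(3):
        sa = layers.Conv1D(128, 1, padding="same")(a)   # shortcut projections (assumed)
        sb = layers.Conv1D(128, 1, padding="same")(b)
        a = conv_block(conv_block(a, 128), 128)
        b = conv_block(conv_block(b, 128), 128)
        # residual addition within each branch plus a cross-channel residual link
        a, b = layers.add([a, sa, sb]), layers.add([b, sb, sa])
        if i < 2:                                        # max pooling after the first two stages
            a = layers.MaxPooling1D(pool_size=2)(a)
            b = layers.MaxPooling1D(pool_size=2)(b)
    a = conv_block(a, 64)                                # 7th convolution layer of each branch
    b = conv_block(b, 64)
    x = layers.concatenate([a, b])                       # fuse the two channels: (n, 128, 128)

    # Up-sampling part: restore the time step to 512, then 1 x 1 convolution to 1 feature layer
    for _ in range(2):
        x = conv_block(layers.UpSampling1D(size=2)(x), 64)
    out = layers.Conv1D(1, 1, padding="same")(x)         # output tensor (n, 512, 1)
    return Model(inp, out)

model = build_model()
model.summary()
```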
S3, training the constructed two-channel residual deep neural network on the training set and tuning the hyper-parameters on the validation set; the mean square error (MSE) is used as the regression loss function, given by

MSE = (1/N) Σ (yᵢ − ŷᵢ)²

where the sum runs over the N samples, yᵢ denotes the label value and ŷᵢ the predicted value;
A callback function is used to monitor the internal state of the model during training; training is stopped when the validation loss no longer improves and the parameters are optimal, and the model parameters are saved, yielding the trained two-channel residual deep neural network;
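A minimal training sketch, assuming a tf.keras workflow and the build_model() sketch above, is shown below. EarlyStopping and ModelCheckpoint stand in for the callback function that monitors validation loss; the optimizer, batch size, epoch count and patience are illustrative choices rather than values given in the patent, as are the x_train/y_train/x_val/y_val names.

```python
from tensorflow.keras import callbacks

# x_train, y_train, x_val, y_val: preprocessed (n, 512, 1) tensors from step S1 (illustrative names)
model.compile(optimizer="adam", loss="mse")   # MSE as the regression loss

cbs = [
    callbacks.EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True),
    callbacks.ModelCheckpoint("dual_channel_residual.h5", monitor="val_loss", save_best_only=True),
]
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=200, batch_size=64, callbacks=cbs)
```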
S4, inputting the acquired secondary radar response signal into the trained two-channel residual deep neural network to obtain the noise-suppressed secondary radar time-series response signal.
The benefit of the method is that the residual-connected two-channel deep neural network reduces information loss and fully extracts the deep features of secondary radar response signals; it has excellent denoising performance, can accurately predict the secondary radar time-series signal, and meets the noise suppression requirement.
Drawings
FIG. 1 is a schematic diagram of a two-channel residual deep neural network model structure;
FIG. 2 is a flow chart of secondary radar signal processing based on the two-channel residual deep neural network.
Detailed Description
The technical solution of the invention is described in further detail below with reference to the drawings and an embodiment:
Embodiment
The embodiment comprises the following steps:
step one, acquiring a sample data set: and taking the secondary radar response time sequence signal with total number of samples 60000 and time step 512 as sample data. Taking the demodulated secondary radar response time sequence signal added with Gaussian white noise SNR (signal to noise ratio) 5 as a training data set, and recording the training data set as the training data set
Figure BDA0002530491880000031
Where N60000 represents the number of signal samples and Z512 represents the signal time step. The primary secondary radar response signal without noise is taken as a training label and is recorded as
Figure BDA0002530491880000032
And dividing the sample data into a training set, a verification set and a test set according to the proportion of (0.6,0.2 and 0.2).
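As a sketch of how the noisy training inputs in step one could be generated, assuming the stated SNR is meant in dB (the patent does not say), white Gaussian noise can be added to each clean response signal as follows; clean_samples is an assumed (60000, 512) array of noise-free signals.

```python
import numpy as np

def add_awgn(signal, snr_db, rng=np.random.default_rng(0)):
    """Add white Gaussian noise so the result has the requested SNR relative to the signal."""
    sig_power = np.mean(signal ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# noisy training inputs at SNR = 5 for the 60000 clean samples of length 512
# noisy_samples = np.stack([add_awgn(s, snr_db=5) for s in clean_samples])
```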
Step two, preprocessing the data set: the secondary radar training samples are randomly shuffled, and the sample data and labels are dimension-expanded into 3D tensors of the form (n, t, g), where n denotes the number of samples, t the time step, and g the number of feature layers. The data are then normalized in batches, mapping all features onto the same 0-1 scale.
Step three, constructing the two-channel residual deep neural network: FIG. 1 is a schematic diagram of the two-channel residual deep neural network model structure, which consists of two feature extraction channels. First, the signal data are fed into two serially connected CONV1D convolution units with kernel size 1 × 3 and data tensor size (n, 512, 64) for shallow feature learning; the convolution operation is y(x) = ∫ f(τ) g(x − τ) dτ.
The training sample data are then fed into the two-channel deep network, whose two channels have identical structure. Each channel contains 7 one-dimensional convolution units, 2 pooling layers and 3 residual addition layers. After every 2 convolution units the data undergo one residual addition, in which the earlier output tensor is re-injected into the downstream data flow. After each of the first two residual additions a max pooling operation with pool_size = 2 is performed, down-sampling the training parameters by a factor of 2. Residual connections are also made between the two channels at the corresponding residual addition nodes, which strengthens the extraction of signal features and reduces the loss of feature information.
During the two-channel feature extraction, the number of feature layers of each channel changes accordingly so as to extract deep features; combined with the pooling operations, the tensor size at each stage is, in order: (n, 512, 128), (n, 256, 128), (n, 128, 128), (n, 128, 64).
Finally, the output tensors of the two channels are concatenated to fuse the learned deep features. Two up-sampling layers of size 2 are arranged alternately with convolution layers, gradually restoring the tensor time step to 512. A final CONV1D convolution layer with kernel size 1 × 1 and 1 feature layer outputs the predicted time-series signal data with tensor size (n, 512, 1).
Step four, training the two-channel residual deep neural network: the training set and validation set preprocessed in step two are fed into the two-channel residual deep neural network constructed in step three, with the mean square error (MSE) as the regression loss function. When the training routine of the model is invoked, a callback function monitors the state and performance of the model; training is stopped when the validation loss no longer improves and the parameters are optimal, and the model is saved.
Step five, predicting and outputting the secondary radar time-series signal: the test set preprocessed in step two is fed into the neural network model saved in step four, and the constructed two-channel residual deep network outputs the noise-suppressed secondary radar time-series response signal, denoted

Ŷ ∈ R^(N×W)

where N is the number of test-set samples and W = 512 is the signal time step.
Numerous experiments were carried out with different secondary radar response signal data in various noise environments, with SNR ranging from a minimum of −5 to a maximum of 20. The method shows excellent denoising performance, accurately predicts the secondary radar time-series signal, and meets the noise suppression requirement.
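A sketch of how such an SNR sweep could be scripted, reusing the hypothetical add_awgn helper and the trained model from the sketches above; test_clean is an assumed array of noise-free test signals of shape (n, 512), and the normalization of step two is omitted here for brevity.

```python
import numpy as np

for snr in range(-5, 21, 5):                             # SNR from -5 to 20
    noisy = np.stack([add_awgn(s, snr) for s in test_clean])[..., np.newaxis]
    denoised = model.predict(noisy, verbose=0)
    mse = np.mean((denoised[..., 0] - test_clean) ** 2)   # denoising error against the clean signals
    print(f"SNR {snr:>3} -> MSE {mse:.6f}")
```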

Claims (1)

1. A secondary radar signal processing method based on a two-channel residual deep network, characterized by comprising the following steps:
S1, constructing a training set and a validation set:
The demodulated secondary radar response time-series signals with added white Gaussian noise are taken as the training data, denoted

X ∈ R^(N×Z)

where N denotes the number of signal samples and Z the time step of each signal; the original secondary radar response signals without added noise are taken as the labels, denoted

Y ∈ R^(N×Z)
The data are divided into a training set and a validation set according to a set proportion;
The response-signal training samples are randomly shuffled, and the sample data and labels are dimension-expanded into 3D tensors of the form (n, t, g), where n denotes the number of samples, t the time step, and g the number of feature layers;
The data are normalized, mapping all features onto the same scale, to obtain the training set and the validation set;
S2, constructing a two-channel residual deep neural network comprising, in order, a shallow feature extraction part, a deep feature extraction part and an up-sampling part;
The shallow feature extraction part comprises two serially connected convolution layers with kernel size 1 × 3 and data tensor size (n, 512, 64);
The deep feature extraction part comprises two parallel branches of identical structure; each branch comprises 7 one-dimensional convolution layers, 2 pooling layers and 3 residual addition layers, the data tensor sizes of the 7 one-dimensional convolution layers being, in order, (n, 512, 128), (n, 512, 128), (n, 256, 128), (n, 256, 128), (n, 128, 128), (n, 128, 128) and (n, 128, 64); after every 2 convolution units the data undergo one residual addition, in which the earlier output tensor is re-injected into the downstream data flow; after each of the first two residual additions a max pooling operation with pool_size = 2 is performed, down-sampling the training parameters by a factor of 2; after the third residual addition the data are fed into the 7th convolution layer, and the output tensors of the 7th convolution layers of the two channels are concatenated to form the output of the deep feature extraction part;
The up-sampling part comprises two up-sampling layers and three convolution layers arranged alternately, gradually restoring the tensor time step to 512; the last convolution layer has kernel size 1 × 1 and 1 feature layer and outputs the predicted time-series signal data with tensor size (n, 512, 1);
S3, training the constructed two-channel residual deep neural network on the training set and tuning the hyper-parameters on the validation set; the mean square error (MSE) is used as the regression loss function, and a callback function is used to monitor the internal state of the model during training; training is stopped when the validation loss no longer improves and the parameters are optimal, and the model parameters are saved, yielding the trained two-channel residual deep neural network;
S4, inputting the acquired secondary radar response signal into the trained two-channel residual deep neural network to obtain the noise-suppressed secondary radar time-series response signal.
CN202010517007.6A 2020-06-09 2020-06-09 Secondary radar signal processing method based on two-channel residual error deep neural network Pending CN111652170A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010517007.6A CN111652170A (en) 2020-06-09 2020-06-09 Secondary radar signal processing method based on two-channel residual error deep neural network

Publications (1)

Publication Number Publication Date
CN111652170A true CN111652170A (en) 2020-09-11

Family

ID=72351085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010517007.6A Pending CN111652170A (en) 2020-06-09 2020-06-09 Secondary radar signal processing method based on two-channel residual error deep neural network

Country Status (1)

Country Link
CN (1) CN111652170A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112214929A (en) * 2020-09-27 2021-01-12 电子科技大学 Radar interference suppression method for intermittent sampling repeated forwarding type interference

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108428229A (en) * 2018-03-14 2018-08-21 大连理工大学 It is a kind of that apparent and geometric properties lung's Texture Recognitions are extracted based on deep neural network
CN108875787A (en) * 2018-05-23 2018-11-23 北京市商汤科技开发有限公司 A kind of image-recognizing method and device, computer equipment and storage medium
CN108960212A (en) * 2018-08-13 2018-12-07 电子科技大学 Based on the detection of human joint points end to end and classification method
CN109117744A (en) * 2018-07-20 2019-01-01 杭州电子科技大学 A kind of twin neural network training method for face verification
CN110210644A (en) * 2019-04-17 2019-09-06 浙江大学 The traffic flow forecasting method integrated based on deep neural network
CN110232653A (en) * 2018-12-12 2019-09-13 天津大学青岛海洋技术研究院 The quick light-duty intensive residual error network of super-resolution rebuilding
CN110276721A (en) * 2019-04-28 2019-09-24 天津大学 Image super-resolution rebuilding method based on cascade residual error convolutional neural networks
CN110348494A (en) * 2019-06-27 2019-10-18 中南大学 A kind of human motion recognition method based on binary channels residual error neural network
CN110363151A (en) * 2019-07-16 2019-10-22 中国人民解放军海军航空大学 Based on the controllable radar target detection method of binary channels convolutional neural networks false-alarm
CN110568483A (en) * 2019-07-22 2019-12-13 中国石油化工股份有限公司 Automatic evaluation method for seismic linear noise suppression effect based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200911)