CN111220958B - Radar target Doppler image classification and identification method based on one-dimensional convolutional neural network - Google Patents

Radar target Doppler image classification and identification method based on one-dimensional convolutional neural network

Info

Publication number
CN111220958B
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911254099.7A
Other languages
Chinese (zh)
Other versions
CN111220958A (en)
Inventor
金钰 (Jin Yu)
赵永亮 (Zhao Yongliang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Ningyuan Electronic And Electrical Technology Co ltd
Original Assignee
Xi'an Ningyuan Electronic And Electrical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Ningyuan Electronic And Electrical Technology Co ltd filed Critical Xi'an Ningyuan Electronic And Electrical Technology Co ltd
Priority to CN201911254099.7A priority Critical patent/CN111220958B/en
Publication of CN111220958A publication Critical patent/CN111220958A/en
Application granted granted Critical
Publication of CN111220958B publication Critical patent/CN111220958B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02: Details of systems according to group G01S13/00
    • G01S7/41: Using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417: Involving the use of neural networks
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention relates to a radar target Doppler image classification and recognition method based on a one-dimensional convolutional neural network. Original echo data are acquired by radar equipment; pulse compression and moving-target detection yield each frame of data containing targets and clutter; according to the Doppler differences of different targets, one-dimensional Doppler images are extracted in the range cell where the target is located to form a data set. A one-dimensional convolutional neural network model is designed and its parameters initialized; the network is trained through forward-propagation and back-propagation passes while the loss function is computed; training iterates until the loss function converges or the maximum number of iterations is reached, after which the trained one-dimensional convolutional neural network model is obtained.

Description

Radar target Doppler image classification and identification method based on one-dimensional convolutional neural network
Technical Field
The invention belongs to the technical field of radars, and particularly relates to a target classification and identification method based on a one-dimensional convolutional neural network.
Background
Radar was originally a radio detection device whose basic functions are target detection and ranging. When a target has a relative velocity along the radar line of sight and is illuminated by the radar, the carrier frequency of the reflected wave is modulated; this phenomenon is the Doppler shift, and the offset of the carrier frequency is the Doppler frequency of the target. If, in addition to its translation, the target has micro-motions such as swinging or rotation of other parts, these micro-motions generate secondary sidebands around the target's Doppler frequency, called micro-Doppler frequencies. In the Doppler spectrum after moving-target detection, the main peak of the target and the micro-Doppler information around it in the same range cell are clearly visible, reflecting the velocity fluctuation of each structural part of the target; radar targets can therefore be classified and recognized using the Doppler information in the radar echo signal.
The traditional radar target classification and recognition pipeline is complex: target features must first be extracted manually and then classified with an existing machine-learning algorithm, which is slow, inaccurate and inefficient. In recent years, with the rise and rapid development of artificial intelligence, deep learning has been widely studied and applied in intelligent signal processing. Convolutional neural networks have great advantages in image recognition and target detection: their feature learning is richer and their expressive power stronger; their local connectivity and weight sharing reduce model complexity and the number of weights while effectively preserving structural information; and features can be extracted automatically by the convolution kernels, yielding good target recognition and high recognition accuracy.
The invention therefore develops a radar target Doppler image classification and recognition method based on a one-dimensional convolutional neural network, and discusses how to perform effective, high-performance target recognition on a low-cost radar.
Disclosure of Invention
Technical problem to be solved
In order to avoid the defects of the prior art, the invention provides a radar target Doppler image classification and identification method based on a one-dimensional convolutional neural network, which realizes the integration of target feature extraction and target identification, reduces blindness and uncertainty caused by manual participation and improves the accuracy of radar target classification and identification.
Technical solution
A radar target Doppler image classification recognition method based on a one-dimensional convolutional neural network utilizes Doppler information in radar echo signals to realize target feature extraction and recognition integration through the one-dimensional convolutional neural network; the method is characterized by comprising the following steps:
step 1: pulse compression is carried out on radar echo data, distance-Doppler-amplitude data are obtained after moving target detection, one-dimensional Doppler data in a distance unit where a target is located are extracted, the one-dimensional Doppler data are divided into a training set and a testing set, and all data are marked by using a one-hot coding format;
the pulse-compression processing compresses the transmitted wide pulse signal into a narrow pulse signal, in essence performing matched filtering of the signal; the moving-target detection system consists of a group of adjacent, partially overlapping narrow-band Doppler filters covering the entire Doppler frequency range; when the number of filters is an integer power of 2, the moving-target detection filter bank is implemented with the fast Fourier transform algorithm; the Doppler shift is extracted by a quadrature phase detector, and the received signal is described as:
s_r(t) = a·cos(2πf_0·t + φ(t))

where a is the amplitude of the received signal, f_0 is the carrier frequency of the transmitted signal, and φ(t) is the phase shift on the received signal caused by the target motion. Mixing with the transmitted signal

s_t(t) = a·cos(2πf_0·t)

and passing through synchronous detector I and a low-pass filter, the output of the I channel is:

I(t) = (a²/2)·cos φ(t)

Mixing with a 90-degree phase-shifted transmitted signal and passing through synchronous detector II and a low-pass filter, the output of the Q channel is:

Q(t) = (a²/2)·sin φ(t)

A complex Doppler signal is then formed:

s_D(t) = I(t) + j·Q(t) = (a²/2)·e^{jφ(t)}
the data are then rearranged by modulus value, I channel and Q channel to obtain each frame of radar echo data with clutter removed; the one-dimensional Doppler data in the range cell of the target are extracted from each frame to form a data set; the targets are labelled to construct a training set and a test set, and all data are marked in one-hot coding format;
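The one-hot labelling step above can be sketched as follows (a minimal illustration; the two class names in the comment are assumptions, not taken from the patent):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Encode integer class labels as one-hot row vectors."""
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1.0
    return encoded

# e.g. two target classes, 0 = person and 1 = vehicle (illustrative labels)
y = one_hot([0, 1, 1], num_classes=2)
```

Each row then carries exactly one 1 in the column of its class, which is the format the cross-entropy loss expects.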
step 2: constructing a one-dimensional convolutional neural network model, and initializing model parameters;
the one-dimensional convolutional neural network comprises ten layers, namely four convolutional layers, four pooling layers and two full-connection layers; training the one-dimensional convolutional neural network by using the constructed training set and the test set; the specific process is as follows:
(2a) Construct the convolution layers of the one-dimensional convolutional neural network: the input data size is m×m; each convolution layer contains K convolution kernels of size F×F, with padding size P and stride S, so the output size after convolution is:

(m − F + 2P)/S + 1

The activation function is σ(h). The feature map of the convolution operation is computed as:

H(i, j) = (X*W)[i, j] + b

where X is the input data, W is the convolution kernel of the filter, b is the bias vector, H is the feature map obtained after the convolution operation, and i, j index the elements of the matrix;

The output through the nonlinear activation function ReLU(·) is:

σ(H(i, j)) = σ((X*W)[i, j] + b) = max(0, (X*W)[i, j] + b)

The output of the l-th convolution layer is:

H^l = ReLU(H^{l−1} * W^l + b^l)

where H^l is the output matrix of layer l, H^{l−1} is the output matrix of layer l−1, W^l is the convolution kernel of layer l, and b^l is the bias vector of layer l;
(2b) Construct the pooling layers of the one-dimensional convolutional neural network: each sub-matrix of the input tensor is compressed; the pooling region size is k×k and the pooling criterion is max pooling. For an m-dimensional input, the output dimension is m/k;
(2c) Construct the fully connected layers of the one-dimensional convolutional neural network: set the activation function and the number L of neurons of each fully connected layer; the activation function is usually the Sigmoid function, and the output of the l-th fully connected layer is:

H^l = σ(W^l·H^{l−1} + b^l)

where the Sigmoid function is:

σ(x) = 1/(1 + e^{−x})
step 3: training a network model: the method comprises two processes of forward propagation and backward propagation; the method comprises the following specific steps:
forward-propagation process of the convolutional neural network: in the forward pass, the parameters of each layer are convolved with the data of that layer, the bias is added, and the result is passed forward layer by layer to obtain the processing result of the whole network; the superscript denotes the layer number and * denotes convolution; the steps are as follows:
input: the sample sequence X, the number L of model layers and the type of each hidden layer; for the convolution layers, define the number K of convolution kernels, the convolution-kernel matrix dimension F×F, the padding size P and the stride S; for the pooling layers, define the pooling region k×k and the pooling criterion; for the fully connected layers, define the activation function and the number of neurons per layer;
output: the layer-L output H^L of the convolutional neural network;
I. Pad the edges of the original sequence according to the input-layer padding size P to obtain the input tensor H^1;
II. Initialize all hidden-layer parameters W, b;
III. Loop over l = 2, …, L−1:
If layer l is a convolution layer, the output is: H^l = ReLU(z^l) = ReLU(H^{l−1} * W^l + b^l);
If layer l is a pooling layer, the output is: H^l = pool(H^{l−1}), where pool shrinks the input tensor according to the pooling region size k×k and the pooling criterion;
If layer l is a fully connected layer, the output is: H^l = Sigmoid(z^l) = Sigmoid(W^l·H^{l−1} + b^l);
IV. Output layer L: H^L = Softmax(z^L) = Softmax(W^L·H^{L−1} + b^L);
The goal of convolutional neural network training is to minimize the loss function, usually the cross-entropy loss:

J = −Σ_i y_i·log(h_i)

where h_i is an element of H^L and y_i the corresponding one-hot label;
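The Softmax output and the cross-entropy loss J = −Σ_i y_i·log(h_i) can be sketched as follows (a minimal illustration, not the patent's code; the small epsilon guards against log(0)):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

def cross_entropy(h, y):
    # J = -sum_i y_i * log(h_i), with y a one-hot label vector
    return -np.sum(y * np.log(h + 1e-12))
```

A confident correct prediction drives the loss toward zero, while probability mass on the wrong class increases it.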
Back propagation process of convolutional neural network: the input X is transmitted forward and then is calculated with the real label y through the loss function i The difference between them, called the residual; the optimization method is a gradient descent method, residual errors are reversely propagated through gradient descent, trainable parameters W and b of each layer of the convolutional neural network are updated layer by layer according to a chain type derivative rule, and learning rate eta is used for controlling the strength of the residual errors in reverse propagation; the method comprises the following steps:
input: the loss function J computed during forward propagation, and the gradient-iteration parameters: iteration step η, maximum number of iterations M, iteration-stopping threshold ε;
output: W, b of each hidden layer and of the output layer of the convolutional neural network;
I. Compute the output-layer residual from the loss function:

δ^L = ∂J/∂z^L = H^L − y

(for the Softmax output with cross-entropy loss);
Loop II l=l-1, …,2, gradient was calculated according to the back propagation algorithm:
if it is currently the fully connected layer: delta l =(W l+1 ) T δ l+1 ⊙σ v (z l );
If it is currently the pooling layer: delta l =upsample(δ l+1 )⊙σ (z l );
If the current is a convolutional layer: delta l =δ l+1 *rot180((W l+1 )⊙σ (z l );
III calculating W of the first layer l ,b l
W l =W l-1 -η(H l-1l )
Figure SMS_13
Wherein ≡Hadamard product is shown for the same vector A (a 1 ,a 2 ,…a n ) T And
B(b 1 ,b 2 ,…b n ) T then A.sup.B.sup.= (a) 1 b 1 ,a 2 b 2 ,…a n b n ) T The method comprises the steps of carrying out a first treatment on the surface of the The upsample function completes logic of pooling error matrix amplification and error redistribution; rot180 indicates that the convolution kernel is rotated 180 degrees;
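The ⊙ product and the upsample step for a max-pooling layer can be sketched as follows (a 1-D illustration with stride k, not the patent's implementation; for max pooling each residual is routed back to the position that produced the max in the forward pass):

```python
import numpy as np

def hadamard(a, b):
    # elementwise (Hadamard) product: (A ⊙ B)_i = a_i * b_i
    return a * b

def upsample_max(delta_pooled, x, k):
    """Enlarge the pooled-layer residual back to the input size,
    routing each error to the argmax position of its pooling window."""
    delta = np.zeros_like(x)
    for j, d in enumerate(delta_pooled):
        window = x[j * k:(j + 1) * k]
        delta[j * k + int(np.argmax(window))] = d
    return delta
```

All other positions receive zero error, mirroring the fact that only the window maximum influenced the forward output.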
step 4: repeating the training process of the network model in the step 3 until the loss function converges or the training reaches the maximum iteration number, so as to obtain a one-dimensional convolutional neural network model which can be used for target identification;
step 5: input the test set into the trained one-dimensional convolutional neural network model and obtain the curve of recognition accuracy as the number of iterations increases.
Advantageous effects
The radar target Doppler image classification and identification method based on the one-dimensional convolutional neural network provided by the invention has the following advantages compared with the prior art:
Processing is performed at the signal level, reducing the side information lost in the secondary processing of image-based recognition. The method departs from traditional classification and recognition approaches: it can intelligently process echo data of different target types and is designed as a general method with adaptive, self-learning and unsupervised properties, suited to a variety of complex environments. By applying the convolutional neural network of deep learning to radar target recognition, target features can be learned and extracted autonomously from the training data, reducing the blindness and uncertainty introduced by manual participation and improving classification and recognition accuracy. Using a one-dimensional convolutional neural network for target recognition integrates feature extraction with classification and recognition, reducing processing time and hardware cost.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a frame of data of an example of the present invention after moving object detection including an object;
FIG. 3 is an original one-dimensional Doppler image of a target of an embodiment of the present invention;
FIG. 4 is a block diagram of a one-dimensional convolutional neural network of an example of the present invention;
FIG. 5 is a graph of recognition accuracy versus iteration number for an example of the present invention.
Detailed Description
The invention will now be further described with reference to the examples and figures:
the method comprises the steps of acquiring original echo data through radar equipment, obtaining each frame of data containing targets and clutter through pulse compression and moving target detection, and extracting one-dimensional Doppler images in a distance unit where the targets are located according to Doppler differences of different targets to form a data set; designing a one-dimensional convolutional neural network model, and initializing model parameters; training the network through forward propagation and backward propagation processes, and calculating a loss function; and (3) performing iterative training until the loss function converges or reaches the maximum number of times, and obtaining the one-dimensional convolutional neural network model after the training is finished. The specific implementation steps comprise the following steps:
(1) Pulse compression is carried out on radar echo data, distance-Doppler-amplitude data are obtained after moving target detection, one-dimensional Doppler data in a distance unit where a target is located are extracted, the one-dimensional Doppler data are divided into a training set and a testing set, and all data are marked by using a one-hot coding format;
(2) Constructing a one-dimensional convolutional neural network model, and initializing model parameters;
(3) Training a network model: the method comprises two processes of forward propagation and backward propagation;
(4) And (3) repeating the training process of the network model in the step (3) until the loss function converges or the training reaches the maximum iteration number, and obtaining the one-dimensional convolutional neural network model for target recognition.
(5) Input the test set into the trained one-dimensional convolutional neural network model and obtain the curve of recognition accuracy as the number of iterations increases.
Referring to fig. 1, the specific implementation steps of this example are as follows:
(1) The radar echo data is subjected to pulse compression, distance-Doppler two-dimensional data are obtained after moving target detection, one-dimensional Doppler data in a distance unit where a target is located are extracted, and the target is marked to construct a training set and a testing set; marking all data by using a one-hot coding format;
(1a) Pulse compression compresses the transmitted wide pulse into a narrow pulse: transmitting a wide pulse raises the average power and the radar detection capability, while the range resolution of a narrow pulse is retained; in essence it performs matched filtering of the signal. Matched-filter theory is a design principle for optimal signal-detection systems: the matched filter is the optimal linear filter under the criterion of maximum output signal-to-noise ratio.
Let the mixed waveform of signal and noise at the input of the linear filter be:

x(t) = s(t) + n(t)
Assume the noise is white with zero mean and power spectral density P_n(w) = n_0/2, let S(w) be the spectral function of the signal, and let H(w) be the matched transfer characteristic of the filter. From the criterion of maximum output signal-to-noise ratio, the transfer characteristic of the optimal linear filter is:

H(w) = K·S*(w)·e^{−jwt_0}

where K is an amplitude-normalization constant, S*(w) is the complex conjugate of the signal spectrum S(w), and t_0 is the instant at which the output signal-to-noise ratio is maximized.
The output of the matched filter is:
Figure SMS_16
(frequency domain)
Figure SMS_17
(time domain)
The pulse pressure result output has envelope of sinc function and is expressed as
Figure SMS_18
The maximum value is obtained at the point, and a pulse appears, namely the purpose of time domain pulse compression is achieved.
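The pulse-compression effect can be sketched numerically: correlating a delayed echo with a replica of the transmitted wide (LFM) pulse concentrates its energy into a narrow peak at the delay. All numbers below are illustrative, not from the patent:

```python
import numpy as np

n = 128
t = np.arange(n)
chirp = np.exp(1j * np.pi * 0.002 * t ** 2)     # transmitted wide LFM pulse
delay = 40
echo = np.zeros(256, dtype=complex)
echo[delay:delay + n] = chirp                   # noiseless delayed echo

# matched filter h(t) = s*(-t), implemented as correlation with the replica
compressed = np.abs(np.correlate(echo, chirp, mode='valid'))
peak = int(np.argmax(compressed))               # peak lands at the delay
```

The compressed output peaks at index 40 with magnitude 128 (the pulse length), i.e. the wide pulse's energy has been gathered into one narrow main lobe.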
(1b) Moving-target detection is performed on the signal. The moving-target detection system consists of a group of adjacent, partially overlapping narrow-band Doppler filters covering the entire Doppler frequency range. The N adjacent Doppler filters are formed from a transversal filter with N outputs (N pulses and N−1 delay lines) summed with different weights; the frequency response of the k-th filter is:

H_k(f) = Σ_{n=0}^{N−1} e^{−j2πn(fT_r − k/N)},  k = 0, 1, …, N−1

where T_r is the pulse repetition period.
In practice, when the number of filters is an integer power of 2, the moving-target detection filter bank can be implemented with the fast Fourier transform algorithm.
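The N-point FFT acting as the bank of Doppler filters can be sketched as follows (illustrative numbers; a target whose Doppler phase advances k/N cycles per pulse lands in FFT bin k):

```python
import numpy as np

N = 16                                   # pulses per coherent interval (a power of 2)
doppler_bin = 3                          # illustrative target Doppler bin
pulses = np.exp(2j * np.pi * doppler_bin * np.arange(N) / N)

spectrum = np.abs(np.fft.fft(pulses))    # the N-point FFT is the N-filter bank
detected = int(np.argmax(spectrum))
```

The target's energy appears in exactly one Doppler bin, which is how the filter bank separates targets of different radial velocities from clutter near zero frequency.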
(1c) Obtaining a one-dimensional Doppler image of a radar target
Doppler radar uses the Doppler effect to measure the radial velocity of a target. The Doppler shift is extracted by a quadrature phase detector; the received signal is described as:

s_r(t) = a·cos(2πf_0·t + φ(t))

where a is the amplitude of the received signal, f_0 is the carrier frequency of the transmitted signal, and φ(t) is the phase shift on the received signal caused by the target motion. Mixing with the transmitted signal

s_t(t) = a·cos(2πf_0·t)

and passing through synchronous detector I and a low-pass filter, the output of the I channel is:

I(t) = (a²/2)·cos φ(t)

Mixing with a 90-degree phase-shifted transmitted signal and passing through synchronous detector II and a low-pass filter, the output of the Q channel is:

Q(t) = (a²/2)·sin φ(t)

A complex Doppler signal is formed:

s_D(t) = I(t) + j·Q(t) = (a²/2)·e^{jφ(t)}
The data are then rearranged by modulus value, I channel and Q channel to obtain each frame of radar echo data. The 10 Doppler channels around zero frequency, where the ground clutter is concentrated, are removed; after confirming the channel and range gate of the target, the one-dimensional Doppler data in that range cell are extracted from each frame in which the target appears.
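Combining the I and Q channels into the complex signal s(t) = I(t) + jQ(t) lets the FFT resolve both the magnitude and the sign of the Doppler shift; a numerical sketch with purely illustrative parameters:

```python
import numpy as np

fs, fd, n = 1000.0, 120.0, 1024          # sample rate, Doppler, samples (illustrative)
t = np.arange(n) / fs
i_ch = np.cos(2 * np.pi * fd * t)        # I channel ~ cos(phi(t))
q_ch = np.sin(2 * np.pi * fd * t)        # Q channel ~ sin(phi(t))

s = i_ch + 1j * q_ch                     # complex Doppler signal
freqs = np.fft.fftfreq(n, d=1 / fs)
peak_freq = freqs[int(np.argmax(np.abs(np.fft.fft(s))))]
```

The spectral peak sits at a positive frequency near the true Doppler; with I alone (a real signal) the spectrum would be symmetric and the sign of the velocity would be lost.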
Referring to fig. 3: the main peak represents the main velocity of the target and the sidelobes represent its fluctuation; for a person the main lobe is gentle and the sidelobe fluctuation obvious, whereas for a vehicle the main lobe is sharp and the sidelobe amplitude small. The extracted Doppler information is taken as 120 Doppler channels extending symmetrically to the left and right of the main peak.
(1d) All data is marked using the one-hot encoding format.
(2) Constructing a one-dimensional convolutional neural network model, and initializing model parameters;
referring to fig. 4, the one-dimensional convolutional neural network described in the practice of the present invention comprises ten layers, four convolutional layers, four pooling layers, and two fully-connected layers. Training the one-dimensional convolutional neural network by using the constructed training set and the test set. The specific process is as follows:
(2a) Construct the convolution layers of the one-dimensional convolutional neural network: the input data size is m×m; each convolution layer contains K convolution kernels of size F×F, with padding size P and stride S, so the output size after convolution is:

(m − F + 2P)/S + 1

The activation function is σ(h). The feature map of the convolution operation is computed as:

H(i, j) = (X*W)[i, j] + b

where X is the input data (image or sequence), W is the filter (convolution kernel), b is the bias vector, and H is the feature map obtained after the convolution operation.

The output through the nonlinear activation function ReLU(·) is:

σ(H(i, j)) = σ((X*W)[i, j] + b) = max(0, (X*W)[i, j] + b)

The output of the l-th convolution layer is:

H^l = ReLU(H^{l−1} * W^l + b^l)

where H^l is the output matrix of layer l, H^{l−1} is the output matrix of layer l−1, W^l is the convolution kernel of layer l, and b^l is the bias vector of layer l.
(2b) Construct the pooling layers of the one-dimensional convolutional neural network: each sub-matrix of the input tensor is compressed; the pooling region size is k×k and the pooling criterion is max pooling. For an m-dimensional input, the output dimension is m/k.
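Max pooling with region size k can be sketched as follows (a minimal 1-D illustration, not the patent's code):

```python
import numpy as np

def max_pool1d(x, k):
    """Non-overlapping 1-D max pooling: m inputs shrink to m/k outputs."""
    return x[:len(x) // k * k].reshape(-1, k).max(axis=1)

pooled = max_pool1d(np.array([1.0, 3.0, 2.0, 5.0, 4.0, 0.0]), k=2)
```

Each output keeps only the strongest response in its window, which compresses the feature map while preserving the dominant activations.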
(2c) Construct the fully connected layers of the one-dimensional convolutional neural network: set the activation function and the number L of neurons of each fully connected layer; the activation function is usually the Sigmoid function, and the output of the l-th fully connected layer is:

H^l = σ(W^l·H^{l−1} + b^l)

where the Sigmoid function is:

σ(x) = 1/(1 + e^{−x})
(3) The training of the one-dimensional convolutional neural network in the embodiment of the invention consists of two processes, forward propagation and back propagation:
(3a) Forward-propagation process of the convolutional neural network: in the forward pass, the parameters of each layer are convolved with the data of that layer and the bias is added; the result is passed forward layer by layer to obtain the processing result of the whole network.
The superscript denotes the layer number and * denotes convolution. The steps are as follows:
Input: the sample sequence X, the number L of model layers and the type of each hidden layer; for the convolution layers, define the number K of convolution kernels, the convolution-kernel matrix dimension F×F, the padding size P and the stride S; for the pooling layers, define the pooling region k×k and the pooling criterion (max or average); for the fully connected layers, define the activation function and the number of neurons per layer.
Output: the layer-L output H^L of the convolutional neural network.
I. Pad the edges of the original sequence according to the input-layer padding size P to obtain the input tensor H^1;
II. Initialize all hidden-layer parameters W, b;
III. Loop over l = 2, …, L−1:
If layer l is a convolution layer, the output is: H^l = ReLU(z^l) = ReLU(H^{l−1} * W^l + b^l);
If layer l is a pooling layer, the output is: H^l = pool(H^{l−1}), where pool shrinks the input tensor according to the pooling region size k×k and the pooling criterion;
If layer l is a fully connected layer, the output is: H^l = Sigmoid(z^l) = Sigmoid(W^l·H^{l−1} + b^l).
IV. Output layer L: H^L = Softmax(z^L) = Softmax(W^L·H^{L−1} + b^L).
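Steps I–IV of the forward pass can be sketched as a dispatch over layer types (a simplified 1-D illustration under assumed layer parameters, not the patent's implementation):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def forward(x, layers):
    """Propagate x through (kind, params) layers as in steps I-IV."""
    h = x
    for kind, params in layers:
        if kind == 'conv':                    # valid 1-D convolution + ReLU
            w, b = params
            n = len(h) - len(w) + 1
            h = relu(np.array([np.dot(h[i:i + len(w)], w) for i in range(n)]) + b)
        elif kind == 'pool':                  # max pooling, region size k
            k = params
            h = h[:len(h) // k * k].reshape(-1, k).max(axis=1)
        elif kind == 'fc':                    # fully connected + Sigmoid
            w, b = params
            h = sigmoid(w @ h + b)
        else:                                 # 'out': output layer + Softmax
            w, b = params
            h = softmax(w @ h + b)
    return h
```

Chaining conv, pool, fc and an output layer in a list and calling forward once yields a probability vector over the target classes.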
The goal of convolutional neural network training is to minimize the loss function, usually the cross-entropy loss:

J = −Σ_i y_i·log(h_i)

where h_i is an element of H^L and y_i the corresponding one-hot label.
(3b) Back-propagation process of the convolutional neural network: after the input X is propagated forward, the loss function computes the difference between the network output and the true label y_i, called the residual. The optimization method is gradient descent: the residual is propagated backwards, and the trainable parameters W and b of each layer are updated layer by layer according to the chain rule of differentiation, with the learning rate η controlling the strength of the back-propagated residual. The steps are as follows:
input: the loss function J computed during forward propagation, and the gradient-iteration parameters: iteration step η, maximum number of iterations M, and iteration-stopping threshold ε.
Output: W and b of each hidden layer and of the output layer of the convolutional neural network.
I. Compute the output-layer residual from the loss function (for the softmax output with cross-entropy loss):
δ^L = ∂J/∂z^L = H^L - y
II. Loop over l = L-1, …, 2, computing the gradient according to the back-propagation algorithm:
if layer l is a fully connected layer: δ^l = (W^{l+1})^T δ^{l+1} ⊙ σ′(z^l);
if layer l is a pooling layer: δ^l = upsample(δ^{l+1}) ⊙ σ′(z^l);
if layer l is a convolutional layer: δ^l = (δ^{l+1} * rot180(W^{l+1})) ⊙ σ′(z^l).
III. Compute W^l and b^l of layer l:
W^l = W^l - η(H^{l-1} * δ^l)
b^l = b^l - η Σ_{u,v} (δ^l)_{u,v}
Here ⊙ denotes the Hadamard product: for vectors of the same dimension A = (a_1, a_2, …, a_n)^T and B = (b_1, b_2, …, b_n)^T, A ⊙ B = (a_1 b_1, a_2 b_2, …, a_n b_n)^T. The upsample function restores the pooled error matrix to its pre-pooling size and redistributes the error; rot180 means the convolution kernel is rotated by 180 degrees.
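A minimal illustration of the three backward rules above, assuming ReLU-style layers, average pooling (so the upsample redistribution is easy to show), and toy shapes of my own choosing:

```python
import numpy as np

def upsample_avg(delta, k):
    """Undo average pooling: spread each error equally over its k-sized region."""
    return np.repeat(delta / k, k)

def rot180(w):
    """Rotate (flip) a 1-D convolution kernel by 180 degrees."""
    return w[::-1]

relu_grad = lambda z: (z > 0).astype(float)       # sigma'(z) for ReLU

# Fully connected: delta^l = (W^{l+1})^T delta^{l+1} (Hadamard) sigma'(z^l)
W_next = np.array([[1.0, -1.0, 0.5]])
delta_next = np.array([0.2])
z_fc = np.array([0.3, -0.1, 0.7])
delta_fc = (W_next.T @ delta_next) * relu_grad(z_fc)   # * is the Hadamard product

# Pooling: delta^l = upsample(delta^{l+1}) (Hadamard) sigma'(z^l)
z_pool = np.array([1.0, 2.0, -1.0, 3.0])
delta_pool = upsample_avg(np.array([0.4, 0.8]), 2) * relu_grad(z_pool)

# Convolutional: delta^l = (delta^{l+1} * rot180(W^{l+1})) (Hadamard) sigma'(z^l).
# The text's * is the correlation form; correlating with rot180(w) equals
# np.convolve with w, since np.convolve flips its kernel itself.
w_next = np.array([0.5, -0.25])
d_next = np.array([0.1, 0.2, 0.3])
full = np.convolve(d_next, w_next)                # 'full' length = 3 + 2 - 1
z_conv = np.ones(len(full))
delta_conv = full * relu_grad(z_conv)

# Bias update with learning rate eta: b <- b - eta * sum(delta)
eta = 0.01
b_new = 0.5 - eta * delta_fc.sum()
```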
(4) Repeat step (3), iterating the network training, until the loss function converges to a sufficiently small value or training reaches the maximum number of iterations. Training is then complete, yielding a one-dimensional convolutional neural network model usable for target recognition, together with a curve of recognition accuracy versus the number of iterations.
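The stopping logic of step (4) — iterate until the loss falls below a threshold ε or M iterations are reached — can be sketched with a stand-in one-neuron logistic model (an assumption for brevity; the patent trains the full network):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy 1-D data: two classes separable around x = 1.5 (illustrative values).
X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = 0.0, 0.0
eta, M, eps = 0.5, 5000, 0.1                      # step, max iters, threshold

for it in range(M):
    p = sigmoid(X * w + b)                        # forward pass
    J = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    if J < eps:                                   # converged "sufficiently small"
        break
    grad = p - y                                  # residual, as in the text
    w -= eta * np.mean(grad * X)                  # gradient-descent update
    b -= eta * np.mean(grad)
```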
the effect of the invention can be further illustrated by experiments:
1. experimental conditions
The invention relies on actual measurement radar data, and is true, reliable and valuable. The hardware platform of the invention patent: intel Core i7CPU, memory 8GB, software platform is: windows 10 operating system and PyCharm editor (python 3.6).
2. Experimental results and comparative experimental description:
Under these conditions, experiments were carried out with the support vector machine classifier, the k-nearest-neighbor classifier, and the proposed one-dimensional convolutional neural network; the results are as follows:
(1) Support vector machine classifier experimental results:
(1a) The vehicle entropy values are distributed over 0.6-1.25 (mean: 0.891458) and the human entropy values over 0.9-1.3 (mean: 1.150314). With a support vector machine classifier, the classification accuracy for the person target is 91.333% and for the vehicle target 93.667%;
(1b) Combining the amplitude feature and the entropy feature after min-max normalization, the support vector machine classifier achieves a recognition accuracy of 98%.
(2) k-nearest neighbor classifier experimental results:
Combining the amplitude feature and the entropy feature after min-max normalization, the k-nearest-neighbor classifier achieves a recognition accuracy of 98.67%.
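The min-max normalization and feature combination used by the comparison classifiers can be sketched as follows (the feature values are illustrative, not the measured data):

```python
import numpy as np

def min_max(v):
    """Scale a feature to [0, 1]: (v - min) / (max - min)."""
    v = np.asarray(v, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

amplitude = np.array([12.0, 30.0, 21.0, 30.0])    # toy amplitude feature
entropy = np.array([0.60, 1.25, 0.90, 1.30])      # toy entropy feature

# One row per sample, one column per normalised feature.
features = np.column_stack([min_max(amplitude), min_max(entropy)])
```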
(3) Experimental results of the inventive examples:
Feeding the test set of one-dimensional Doppler data (112×1) of the targets (person and vehicle) into the one-dimensional convolutional neural network yields a recognition accuracy of 99.33%; Fig. 5 shows the curve of recognition accuracy versus the number of iterations.
Compared with the comparison methods, the convolutional neural network constructed by the invention improves the recognition accuracy.
The above description is only one specific example of the present invention and does not constitute any limitation of the present invention. It will be apparent to those skilled in the art that various modifications and changes in form and detail may be made without departing from the principles and construction of the invention, but these modifications and changes based on the inventive concept are still within the scope of the appended claims.

Claims (1)

1. A radar target Doppler image classification recognition method based on a one-dimensional convolutional neural network utilizes Doppler information in radar echo signals to realize target feature extraction and recognition integration through the one-dimensional convolutional neural network;
the method is characterized by comprising the following steps:
step 1: pulse-compress the radar echo data and obtain range-Doppler-amplitude data after moving target detection; extract the one-dimensional Doppler data in the range cell where the target is located, divide them into a training set and a test set, and label all data in one-hot coding format;
the pulse compression processing compresses the transmitted wide pulse signal into a narrow pulse signal and essentially realizes matched filtering of the signal; the moving target detection system consists of a group of adjacent, partially overlapping narrow-band Doppler filter banks covering the entire Doppler frequency range; when the number of filters is an integer power of 2, the moving target detection filter bank is implemented with a fast Fourier transform algorithm; the Doppler shift is extracted by a quadrature phase detector, and the received signal is described as:
s_r(t) = a·cos(2πf_0·t + φ(t))
where a is the amplitude of the received signal, f_0 is the carrier frequency of the transmitted signal, and φ(t) is the phase shift on the received signal caused by the target motion; this is mixed with the transmitted signal of the form:
s_t(t) = a·cos(2πf_0·t)
after passing through synchronous detector I and a low-pass filter, the output of the I channel is:
I(t) = (a²/2)·cos(φ(t))
mixing with the 90-degree phase-shifted transmitted signal and passing through synchronous detector II and a low-pass filter, the output of the Q channel is:
Q(t) = (a²/2)·sin(φ(t))
the complex Doppler signal is formed as:
s_D(t) = I(t) + jQ(t) = (a²/2)·e^{jφ(t)}
the data are then rearranged as the modulus, the I channel, and the Q channel of the signal, giving each frame of the radar echo with clutter removed; the one-dimensional Doppler data in the range cell containing the target are extracted from each frame to form a data set, and the targets are labeled to construct the training set and the test set, with all data labeled in one-hot coding format;
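The quadrature detection chain above — mixing the echo with the carrier and its 90-degree shifted copy, low-pass filtering, and forming I + jQ — can be sketched as follows; the carrier, Doppler shift, and FIR filter design are toy assumptions:

```python
import numpy as np

fs = 10_000.0                                     # sample rate, Hz (toy)
f0 = 1_000.0                                      # carrier frequency, Hz (toy)
fd = 50.0                                         # Doppler shift, Hz (toy)
t = np.arange(0, 0.2, 1 / fs)
rx = np.cos(2 * np.pi * (f0 + fd) * t)            # received echo, amplitude 1

# Mix with the transmitted carrier and its 90-degree shifted version.
i_mix = rx * np.cos(2 * np.pi * f0 * t)
q_mix = rx * np.sin(2 * np.pi * f0 * t)

def lowpass(x, taps=201, fc=200.0):
    """Crude windowed-sinc FIR low-pass to reject the 2*f0 mixing term."""
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(2 * fc / fs * n) * np.hamming(taps)
    h /= h.sum()
    return np.convolve(x, h, mode="same")

i_ch = lowpass(i_mix)                             # ~ 0.5*cos(2*pi*fd*t)
q_ch = lowpass(q_mix)                             # ~ -0.5*sin(2*pi*fd*t) here
s_d = i_ch + 1j * q_ch                            # complex Doppler signal I + jQ

# The Doppler shift appears as the spectral peak of s_d.
spec = np.abs(np.fft.fft(s_d))
freqs = np.fft.fftfreq(len(s_d), 1 / fs)
f_est = abs(freqs[np.argmax(spec)])
```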
step 2: constructing a one-dimensional convolutional neural network model, and initializing model parameters;
the one-dimensional convolutional neural network comprises ten layers: four convolutional layers, four pooling layers, and two fully connected layers; the constructed training set and test set are used to train the one-dimensional convolutional neural network; the specific process is as follows:
(2a) Construct the convolutional layer of the one-dimensional convolutional neural network: the input data size is m×m, the convolutional layer contains K convolution kernels of size F×F, the padding size is P = (F-1)/2, and the stride is S; the output size after convolution is
(m - F + 2P)/S + 1
The activation function adopts σ(·); the feature map of the convolution operation (written here in two-dimensional form) is computed as:
H(i,j) = (X*W)[i,j] + b = Σ_u Σ_v X(i+u, j+v)·W(u,v) + b
where X is the input data, W is the convolution kernel of the filter, b is the bias vector, H is the feature map obtained after the convolution operation, and i and j index elements of the matrix;
the output through the nonlinear activation function ReLU(·) is:
σ(H(i,j)) = σ((X*W)[i,j] + b) = max(0, (X*W)[i,j] + b)
the output of the l-th convolutional layer is:
H^l = ReLU(H^{l-1} * W^l + b^l)
where H^l is the output matrix of layer l, H^{l-1} is the output matrix of layer l-1, W^l is the convolution kernel of layer l, and b^l is the bias vector of layer l;
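Assuming the standard output-size relation O = (m - F + 2P)/S + 1 for the convolutional layer of (2a), a worked check (the 112-sample length matches the Doppler vectors used in the experiments; the kernel, padding, and stride values are illustrative):

```python
def conv_output_size(m, F, P, S):
    """Output length of a convolution: (m - F + 2*P) / S + 1."""
    assert (m - F + 2 * P) % S == 0, "sizes must divide evenly"
    return (m - F + 2 * P) // S + 1

# A 112-sample Doppler vector, kernel F=3, same-padding P=1, stride S=1:
out_same = conv_output_size(112, 3, 1, 1)         # "same" padding keeps 112
# No padding, kernel F=5, stride S=1:
out_valid = conv_output_size(112, 5, 0, 1)
```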
(2b) Construct the pooling layer of the one-dimensional convolutional neural network: compress each submatrix of the input tensor, setting the pooling region size k×k and max pooling as the pooling criterion; for an m-dimensional input, the output dimension is m/k;
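The max-pooling compression of (2b) can be sketched in one dimension as follows (toy values):

```python
import numpy as np

def max_pool(h, k):
    """Max pooling: keep the maximum of each disjoint k-sized region."""
    h = np.asarray(h, dtype=float)
    m = len(h) - len(h) % k                       # drop any ragged tail
    return h[:m].reshape(-1, k).max(axis=1)

h = np.array([1.0, 3.0, 2.0, 8.0, 5.0, 4.0, 7.0, 6.0])
pooled = max_pool(h, 2)                           # m=8 input -> m/k=4 output
```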
(2c) Construct the fully connected layers of the one-dimensional convolutional neural network: set the fully connected layer activation function and the number of neurons L in each fully connected layer; the activation function generally uses the Sigmoid function, and the output of the l-th fully connected layer is:
H^l = σ(W^l H^{l-1} + b^l)
where the Sigmoid function is:
σ(x) = 1/(1 + e^{-x})
step 3: training a network model: the method comprises two processes of forward propagation and backward propagation; the method comprises the following specific steps:
forward propagation process of the convolutional neural network: during forward propagation, each layer's parameters are convolved with that layer's data, the bias is added, and the result is passed forward step by step, finally giving the processing result of the whole network; superscripts denote the layer index and * denotes convolution; the steps are as follows:
input: the sample sequence X, the number of model layers L, and the type of each hidden layer; for the convolutional layers, define the number of convolution kernels K, the convolution kernel matrix dimension F×F, the padding size P, and the stride S; for the pooling layer, define the pooling region k×k and the pooling criterion; for the fully connected layer, define the activation function and the number of neurons in each layer;
output: the layer-L output H^L of the convolutional neural network;
I. Pad the edges of the original sequence according to the input-layer padding size P to obtain the input tensor H^1;
II, initializing all hidden layer parameters W, b;
III. Loop over l = 2, …, L-1:
if layer l is a convolutional layer, the output is: H^l = ReLU(z^l) = ReLU(H^{l-1} * W^l + b^l);
if layer l is a pooling layer, the output is: H^l = pool(H^{l-1}), where pool shrinks the input tensor according to the pooling region size k×k and the pooling criterion;
if layer l is a fully connected layer, the output is: H^l = Sigmoid(z^l) = Sigmoid(W^l H^{l-1} + b^l);
IV. Output layer L: H^L = Softmax(z^L) = Softmax(W^L H^{L-1} + b^L);
The goal of convolutional neural network training is to minimize the loss function, for which the cross-entropy loss is commonly used; with h_i the i-th element of H^L and y_i the corresponding one-hot label:
J = -Σ_i y_i log(h_i)
Back-propagation process of the convolutional neural network: after the input X is propagated forward, the loss function measures the difference between the network output and the true label y_i; this difference is called the residual; the optimization method is gradient descent: the residual is propagated backward, and the trainable parameters W and b of each layer of the convolutional neural network are updated layer by layer according to the chain rule, with the learning rate η controlling the strength of the backward-propagated residual; the method comprises:
input: the loss function J computed during forward propagation, and the gradient-iteration parameters: iteration step η, maximum number of iterations M, and iteration-stopping threshold ε;
output: W and b of each hidden layer and of the output layer of the convolutional neural network;
I. Compute the output-layer residual from the loss function (for the softmax output with cross-entropy loss):
δ^L = ∂J/∂z^L = H^L - y
II. Loop over l = L-1, …, 2, computing the gradient according to the back-propagation algorithm:
if layer l is a fully connected layer: δ^l = (W^{l+1})^T δ^{l+1} ⊙ σ′(z^l);
if layer l is a pooling layer: δ^l = upsample(δ^{l+1}) ⊙ σ′(z^l);
if layer l is a convolutional layer: δ^l = (δ^{l+1} * rot180(W^{l+1})) ⊙ σ′(z^l);
III. Compute W^l and b^l of layer l:
W^l = W^l - η(H^{l-1} * δ^l)
b^l = b^l - η Σ_{u,v} (δ^l)_{u,v}
here ⊙ denotes the Hadamard product: for vectors of the same dimension A = (a_1, a_2, …, a_n)^T and B = (b_1, b_2, …, b_n)^T, A ⊙ B = (a_1 b_1, a_2 b_2, …, a_n b_n)^T; the upsample function restores the pooled error matrix to its pre-pooling size and redistributes the error; rot180 means the convolution kernel is rotated by 180 degrees;
step 4: repeating the training process of the network model in the step 3 until the loss function converges or the training reaches the maximum iteration number, so as to obtain a one-dimensional convolutional neural network model which can be used for target identification;
step 5: input the test set into the constructed one-dimensional convolutional neural network model to obtain the recognition accuracy and its change curve as the number of iterations increases.
CN201911254099.7A 2019-12-10 2019-12-10 Radar target Doppler image classification and identification method based on one-dimensional convolutional neural network Active CN111220958B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911254099.7A CN111220958B (en) 2019-12-10 2019-12-10 Radar target Doppler image classification and identification method based on one-dimensional convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911254099.7A CN111220958B (en) 2019-12-10 2019-12-10 Radar target Doppler image classification and identification method based on one-dimensional convolutional neural network

Publications (2)

Publication Number Publication Date
CN111220958A CN111220958A (en) 2020-06-02
CN111220958B true CN111220958B (en) 2023-05-26

Family

ID=70808062

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911254099.7A Active CN111220958B (en) 2019-12-10 2019-12-10 Radar target Doppler image classification and identification method based on one-dimensional convolutional neural network

Country Status (1)

Country Link
CN (1) CN111220958B (en)

Families Citing this family (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111796272B (en) * 2020-06-08 2022-09-16 桂林电子科技大学 Real-time gesture recognition method and computer equipment for through-wall radar human body image sequence
CN112180338B (en) * 2020-06-10 2022-03-01 四川九洲电器集团有限责任公司 Holographic digital array radar target quantity estimation method and system
CN111814578B (en) * 2020-06-15 2021-03-05 南京森林警察学院 Method for extracting frequency of ultralow frequency Doppler signal
CN113108632B (en) * 2020-06-19 2022-08-26 山东大学 Three-heat-source shell-and-tube heat exchanger capable of switching heat sources according to temperature
CN111811617B (en) * 2020-07-10 2022-06-14 杭州电子科技大学 Liquid level prediction method based on short-time Fourier transform and convolutional neural network
CN111882809A (en) * 2020-07-21 2020-11-03 重庆现代建筑产业发展研究院 Method and system for guaranteeing fire safety of residential area based on Internet of things
CN112001843B (en) * 2020-07-28 2022-09-06 南京理工大学 Infrared image super-resolution reconstruction method based on deep learning
CN111985349B (en) * 2020-07-30 2024-04-05 河海大学 Classification recognition method and system for radar received signal types
CN111950198B (en) * 2020-08-10 2024-02-02 北京环境特性研究所 Ground clutter simulation method based on neural network
CN111722199B (en) * 2020-08-10 2023-06-20 上海航天电子通讯设备研究所 Radar signal detection method based on convolutional neural network
CN111985825B (en) * 2020-08-26 2023-10-27 东北大学 Crystal face quality assessment method for roller mill orientation instrument
CN112187413B (en) * 2020-08-28 2022-05-03 中国人民解放军海军航空大学航空作战勤务学院 SFBC (Small form-factor Block code) identifying method and device based on CNN-LSTM (convolutional neural network-Link State transition technology)
CN112231975A (en) * 2020-10-13 2021-01-15 中国铁路上海局集团有限公司南京供电段 Data modeling method and system based on reliability analysis of railway power supply equipment
CN112597764B (en) * 2020-12-23 2023-07-25 青岛海尔科技有限公司 Text classification method and device, storage medium and electronic device
CN112816982A (en) * 2020-12-31 2021-05-18 中国电子科技集团公司第十四研究所 Radar target detection method
CN113281776A (en) * 2021-01-08 2021-08-20 浙江大学 Laser radar target intelligent detector for complex underwater dynamic target
CN112861634B (en) * 2021-01-11 2024-04-09 南京大学 Multimode vortex beam demultiplexing method based on neural network
CN112731331B (en) * 2021-01-12 2022-03-04 西安电子科技大学 Micro-motion target noise steady identification method based on signal-to-noise ratio adaptive network
CN112882010B (en) * 2021-01-12 2022-04-05 西安电子科技大学 High-resolution range profile target identification method based on signal-to-noise ratio field knowledge network
CN112784916B (en) * 2021-01-29 2022-03-04 西安电子科技大学 Air target micro-motion parameter real-time extraction method based on multitask convolutional network
CN112836674B (en) * 2021-02-28 2024-03-26 西北工业大学 Underwater target identification method based on micro Doppler characteristics
CN113191185A (en) * 2021-03-10 2021-07-30 中国民航大学 Method for classifying targets of unmanned aerial vehicle by radar detection through Dense2Net
CN113050077B (en) * 2021-03-18 2022-07-01 电子科技大学长三角研究院(衢州) MIMO radar waveform optimization method based on iterative optimization network
CN113221631B (en) * 2021-03-22 2023-02-10 西安电子科技大学 Sequence pulse anti-interference target detection method based on convolutional neural network
CN113189556B (en) * 2021-04-13 2022-05-03 电子科技大学 MIMO radar moving target detection method under composite Gaussian clutter environment
CN113296087B (en) * 2021-05-25 2023-09-22 沈阳航空航天大学 Frequency modulation continuous wave radar human body action recognition method based on data enhancement
CN113312848B (en) * 2021-06-10 2022-10-04 太原理工大学 Intelligent design method of optical system with adaptive target information extraction algorithm as target
CN113376610B (en) * 2021-06-22 2023-06-30 西安电子科技大学 Narrow-band radar target detection method based on signal structure information
CN113359135B (en) * 2021-07-07 2023-08-22 中国人民解放军空军工程大学 Training method, application method, device and medium for imaging and recognition model
CN113568068B (en) * 2021-07-22 2022-03-29 河南大学 Strong convection weather prediction method based on MPI parallel three-dimensional neural network
CN113671478B (en) * 2021-07-27 2024-04-23 西安电子科技大学 High-speed maneuvering target identification data processing method based on multi-core CPU
CN113805151A (en) * 2021-08-17 2021-12-17 青岛本原微电子有限公司 Attention mechanism-based medium repetition frequency radar target detection method
CN113807186A (en) * 2021-08-18 2021-12-17 南京理工大学 Radar target identification method based on multi-channel multiplexing convolutional neural network
CN113708855B (en) * 2021-09-29 2023-07-25 北京信息科技大学 OTFS data driving and receiving method, system and medium based on deep learning
CN114062305B (en) * 2021-10-15 2024-01-26 中国科学院合肥物质科学研究院 Single grain variety identification method and system based on near infrared spectrum and 1D-In-Resnet network
CN113985393B (en) * 2021-10-25 2024-04-16 南京慧尔视智能科技有限公司 Target detection method, device and system
CN114759991B (en) * 2022-03-28 2023-09-22 扬州大学 Cyclostationary signal detection and modulation identification method based on visibility graph
CN116908808B (en) * 2023-09-13 2023-12-01 南京国睿防务系统有限公司 RTN-based high-resolution one-dimensional image target recognition method
CN117671374A (en) * 2023-12-09 2024-03-08 北京无线电测量研究所 Method and device for identifying image target of inverse synthetic aperture radar

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220606B (en) * 2017-05-22 2020-05-19 西安电子科技大学 Radar radiation source signal identification method based on one-dimensional convolutional neural network
CN107728143B (en) * 2017-09-18 2021-01-19 西安电子科技大学 Radar high-resolution range profile target identification method based on one-dimensional convolutional neural network
US20190204834A1 (en) * 2018-01-04 2019-07-04 Metawave Corporation Method and apparatus for object detection using convolutional neural network systems
JP2021516763A (en) * 2018-02-05 2021-07-08 メタウェーブ コーポレーション Methods and equipment for object detection using beam steering radar and convolutional neural network systems
CN109407067B (en) * 2018-10-13 2023-06-27 中国人民解放军海军航空大学 Radar moving target detection and classification integrated method based on time-frequency graph convolution neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Radar target maneuver detection algorithm based on high-resolution one-dimensional Doppler profiles; Zhu Yilong et al.; Acta Automatica Sinica; Vol. 37, No. 8; 901-914 *
High-resolution one-dimensional Doppler profiles; Zhu Yilong et al.; Radar Science and Technology; Vol. 8, No. 1; 32-36 *

Also Published As

Publication number Publication date
CN111220958A (en) 2020-06-02

Similar Documents

Publication Publication Date Title
CN111220958B (en) Radar target Doppler image classification and identification method based on one-dimensional convolutional neural network
CN108872984B (en) Human body identification method based on multi-base radar micro Doppler and convolutional neural network
CN112001270B (en) Ground radar automatic target classification and identification method based on one-dimensional convolutional neural network
CN112184849B (en) Intelligent processing method and system for complex dynamic multi-target micro-motion signals
CN111160176B (en) Fusion feature-based ground radar target classification method for one-dimensional convolutional neural network
CN108182450A (en) A kind of airborne Ground Penetrating Radar target identification method based on depth convolutional network
Al Hadhrami et al. Transfer learning with convolutional neural networks for moving target classification with micro-Doppler radar spectrograms
Al Hadhrami et al. Ground moving radar targets classification based on spectrogram images using convolutional neural networks
CN112859014A (en) Radar interference suppression method, device and medium based on radar signal sorting
CN113376600B (en) Pedestrian radar echo denoising method based on RSDNet
CN111580104B (en) Maneuvering target high-resolution ISAR imaging method based on parameterized dictionary
KR102275960B1 (en) System and method for searching radar targets based on deep learning
Alnujaim et al. Generative adversarial networks to augment micro-Doppler signatures for the classification of human activity
CN110954885A (en) Adaptive target reconstruction method for frequency agile radar based on SBL
CN113534065B (en) Radar target micro-motion feature extraction and intelligent classification method and system
CN111368930B (en) Radar human body posture identification method and system based on multi-class spectrogram fusion and hierarchical learning
CN114117912A (en) Sea clutter modeling and inhibiting method under data model dual drive
KR20220141748A (en) Method and computer readable storage medium for extracting target information from radar signal
Rahman et al. Multi-frequency rf sensor data adaptation for motion recognition with multi-modal deep learning
Park et al. Increasing accuracy of hand gesture recognition using convolutional neural network
Qu et al. Indoor human behavior recognition method based on wavelet scattering network and conditional random field model
CN117233706A (en) Radar active interference identification method based on multilayer channel attention mechanism
CN110361709B (en) Vehicle-mounted millimeter wave radar target identification method based on dynamic false alarm probability
Shreyamsha Kumar et al. Target identification using harmonic wavelet based ISAR imaging
CN113805152A (en) Two-phase coding radar signal distance super-resolution method based on target sparsity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant