CN111160176B - Fusion feature-based ground radar target classification method for one-dimensional convolutional neural network - Google Patents

Fusion feature-based ground radar target classification method for one-dimensional convolutional neural network

Info

Publication number
CN111160176B
Authority
CN
China
Prior art keywords
output
layer
radar
amplitude spectrum
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911320163.7A
Other languages
Chinese (zh)
Other versions
CN111160176A (en)
Inventor
谢仁宏
孙泽渝
芮义斌
李鹏
郭山红
王欢
王丽妍
边晨光
吕宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201911320163.7A priority Critical patent/CN111160176B/en
Publication of CN111160176A publication Critical patent/CN111160176A/en
Application granted granted Critical
Publication of CN111160176B publication Critical patent/CN111160176B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a fusion-feature-based ground radar target classification method using a one-dimensional convolutional neural network. The method comprises the following steps: preprocessing the radar echo data samples in the training and test sample sets, obtaining the amplitude spectrum and power spectrum of the echo data through FFT (fast Fourier transform), performing power transformation on the amplitude spectrum, and taking the three groups of feature vectors as three channels of feature input; determining a one-dimensional convolutional neural network architecture and sending the extracted three-channel features to a fully connected layer; classifying with a softmax classifier and computing the error between the output predicted label and the true label; correcting the network weight coefficients by gradient descent until the maximum number of iterations is reached, extracting the corresponding network weight coefficients, and determining the final classifier model; and sending the radar echo samples of the test set to be identified into the trained classifier to complete target classification. The invention has fewer parameters and completes target classification efficiently and accurately.

Description

Fusion feature-based ground radar target classification method for one-dimensional convolutional neural network
Technical Field
The invention belongs to the technical field of radar digital signal processing, and particularly relates to a one-dimensional convolution neural network ground radar target classification method based on fusion characteristics.
Background
At present, low-resolution radar target identification mostly relies on manual feature extraction from radar target data, so recognition performance depends on the quality of the features. For pulse Doppler radar, a widely used means of target identification is for the radar operator to distinguish different ground moving targets by listening to the audio signal generated by the radar. However, this method places high training demands on the operator, and the operator's sensory differences and subjectivity make the classification neither objective nor suitable for real-time operation. Alternatively, a low-resolution radar can obtain a power spectrum through the Fourier transform, from which features are then manually extracted for target classification. Neither method avoids the subjective factors of human participation, so the classification and identification of radar targets lacks objectivity. Therefore, developing automatic target recognition capability has important practical significance and application value for current radar systems.
Automatic radar target recognition applies knowledge from machine learning and pattern recognition to extract information hidden in radar echoes that reflects the attributes of the targets. The overall workflow of the target identification mechanism is shown in Fig. 1 and comprises two parts. First, the input features of the collected radar training sample set are fused, and the network parameters are trained under the selected classifier framework to obtain the classifier coefficients. After training, the second part is the identification of unknown targets: the unknown target sample set, also called the test set, is preprocessed by the same method, and the trained classifier then makes the identification decision to obtain the final classification result.
Different target classification methods differ mainly in feature extraction and classifier design. For high-resolution range profiles, data preprocessing commonly uses feature extraction methods such as power spectrum differencing and power transformation. For low-resolution radar target identification, many current classification models apply the Fourier transform (FFT), time-frequency analysis, and similar processing to the echo signals. Among these, the power spectrum obtained by the Fourier transform yields discriminative feature values while remaining easy to realize in engineering. Classifier designs fall into several main classes: (1) template matching algorithms, which require little computation but depend heavily on the template library; (2) kernel-based machine learning algorithms, built on statistical learning theory and kernel functions and represented by the Support Vector Machine (SVM), which are simple to implement. The SVM follows the structural risk minimization principle of statistical learning theory and, as a convex optimization problem, yields a stable global optimum, making it the most widely applied at present. However, the SVM only performs well on small training sets; when the training sample size is large, its storage and computation consume large amounts of memory and running time, and the choice of kernel function remains an open problem. (3) Artificial neural network algorithms: neural networks learn autonomously and need no manual feature extraction, but the computation is heavy, training risks falling into local optima, and the results can be unstable.
These methods each have drawbacks, such as requiring manual feature extraction or being unsuited to large sample libraries. With the rapid development of artificial intelligence and integrated-circuit technology, artificial intelligence is being applied ever more widely to radar target identification. However, the prior art mostly concerns high-resolution data such as SAR and ISAR imaging radar and HRRP, carrying out radar target classification with network structures designed for image classification. In low-resolution radar target classification, existing work performs target classification using two-dimensional micro-Doppler feature images of the echo. Although this achieves a good classification effect, the resource demands of image processing and the cost and volume limitations of low-resolution radar mean that real-time processing cannot be guaranteed in practical engineering.
Disclosure of Invention
The invention aims to provide a one-dimensional convolution neural network ground radar target classification method based on fusion characteristics, which can improve the judgment efficiency, accuracy and generalization capability.
The technical solution for realizing the purpose of the invention is as follows: a one-dimensional convolution neural network ground radar target classification method based on fusion features comprises the following steps:
step 1, preprocessing radar echo data samples in a training sample set and a testing sample set, obtaining an amplitude spectrum and a power spectrum of echo data through FFT, performing power transformation on the amplitude spectrum, and taking three groups of feature vectors as three channels of feature input;
step 2, determining a one-dimensional convolutional neural network architecture, and sending the extracted three-channel characteristics to a full connection layer;
step 3, classifying by using a softmax classifier, and calculating to obtain an error between an output prediction label and a real label;
step 4, correcting the network weight coefficient by using a gradient descent method until the maximum iteration times is reached, extracting the corresponding network weight coefficient, and determining a final classifier model;
and 5, sending the radar echo sample of the test set to be identified into a classifier obtained by training to finish target classification.
Further, in step 1, preprocessing is performed on the radar echo data samples in the training sample set and the test sample set, an amplitude spectrum and a power spectrum of the echo data are obtained through FFT, then power transformation is performed on the amplitude spectrum, and three groups of feature vectors are used as three channels for feature input, specifically as follows:
(1.1) assume the input single radar echo sample is x_n, n = 1, ..., N, an N-point sequence of the radar time-domain waveform;
(1.2) carrying out Fourier transform on the N-point sequence to obtain the amplitude spectrum X_An of the radar echo;
(1.3) squaring the amplitude spectrum to obtain the radar power spectrum, and applying a logarithmic transformation to obtain X_Pn;
(1.4) performing power transformation on the amplitude spectrum, with power exponent 0.2, to obtain the power-transformed radar amplitude spectrum X_Mn:
X_Mn = (X_An)^0.2, n = 1, ..., N   (1)
(1.5) combining the amplitude spectrum, power spectrum, and power transformation of the radar echo as three parallel feature vectors to obtain a new feature vector X_n:
X_n = [X_An; X_Pn; X_Mn], n = 1, ..., N   (2)
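As a concrete illustration, the step-1 preprocessing can be sketched in a few lines of NumPy. This is a minimal sketch, not the patent's implementation: the function name build_features, the sample length, and the small constant added before the logarithm are illustrative assumptions.

```python
import numpy as np

def build_features(x):
    """Sketch of the step-1 preprocessing for one N-point radar echo x."""
    X_A = np.abs(np.fft.fft(x))        # (1.2) amplitude spectrum X_An
    X_P = np.log(X_A ** 2 + 1e-12)     # (1.3) log power spectrum X_Pn (epsilon avoids log 0)
    X_M = X_A ** 0.2                   # (1.4) power transformation, exponent 0.2
    return np.stack([X_A, X_P, X_M])   # (1.5) three parallel feature channels X_n

x = np.random.randn(1024)              # hypothetical echo sample, N = 1024
X = build_features(x)                  # shape (3, 1024)
```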
Further, the step 2 of determining the one-dimensional convolutional neural network architecture and sending the extracted three-channel features to the fully connected layer is specifically as follows:
(2.1) each convolution block is set to comprise a convolutional layer, a batch normalization layer, and an activation function layer; the convolution stride is set to 1, and zero padding is used in the convolution so that the output dimension is the same as the input dimension;
the dimension of the original radar echo data input to the convolutional layer is (None, 1, N, 1), and after convolution the data size becomes (None, 1, N, L_1), where L_1 is the depth of the convolutional layer; denoting the convolutional-layer output as output_j, the convolution process is as follows:
output_j = f( Σ_{i=1}^{M} x_i * conv_ij + conv_b_j )   (3)
where * denotes the convolution operation, M is the number of channels, conv the convolution kernel, and conv_b_j the bias coefficient at the corresponding position; i indexes the ith input neuron and the subscript j the jth output neuron; f is the activation function, chosen as the LeakyReLU function, so that after activation the output is relu_output_j:
relu_output_j = max(output_j, α · output_j), 0 < α < 1   (4)
(2.2) the number of repetitions of convolution block I is set to 3, and its output is connected to a max pooling layer whose window size is 1×8 with stride 8; the output of this pooling layer serves as the input of convolution block II, which is also repeated 3 times, and its output is sent to another max pooling layer with window size 1×8 and stride 8;
the pooled output pooling_j is expressed as:
pooling_j = down(relu_output_j)   (5)
where down(·) denotes stepping over the input data with the set stride, selecting at each position the maximum value in the current window as the representative of that position;
(2.3) the dropout rate of the dropout layer is set to 0.7, i.e., during each training pass the layer suppresses each neuron with probability 0.7;
(2.4) a fully connected layer Fc is set; the output of convolution block II is flattened into a one-dimensional vector and used as the input of the Fc layer, so the output Fc_output_j of the Fc layer is
Fc_output_j = Σ_i pooling2_i · Fc_ij + Fc_b_j   (6)
where pooling2 denotes the output of the second pooling layer, j indexes the jth neuron, Fc_ij is the weight coefficient between the ith neuron of pooling2 and the jth neuron of the Fc layer, and Fc_b_j is the bias coefficient of the jth neuron.
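For illustration, a minimal Keras sketch of this architecture might look as follows. The kernel width K, the channel depths L1 and L2, the echo length N, and the class count C are not fixed by the text above, so the values below are assumptions; the text gives the per-channel input dimension as (None, 1, N, 1), and here the three fused feature channels are assumed to be carried in the last input dimension.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

N, C = 1024, 2         # assumed echo length and number of classes (human/vehicle)
L1, L2, K = 16, 32, 3  # assumed convolution depths and kernel width

def conv_block(filters):
    """One convolution block: conv (stride 1, zero padding) + batch norm + LeakyReLU."""
    return [layers.Conv2D(filters, (1, K), strides=1, padding="same"),
            layers.BatchNormalization(),
            layers.LeakyReLU()]

stack = []
for _ in range(3):                       # convolution block I, repeated 3 times
    stack += conv_block(L1)
stack.append(layers.MaxPooling2D((1, 8), strides=(1, 8)))   # 1x8 max pool, stride 8
for _ in range(3):                       # convolution block II, repeated 3 times
    stack += conv_block(L2)
stack.append(layers.MaxPooling2D((1, 8), strides=(1, 8)))   # 1x8 max pool, stride 8
stack += [layers.Dropout(0.7),           # dropout rate 0.7, as in (2.3)
          layers.Flatten(),              # stretch the pooled output into one dimension
          layers.Dense(C, activation="softmax")]  # Fc layer + softmax classifier

model = models.Sequential([tf.keras.Input(shape=(1, N, 3))] + stack)
model.summary()
```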
Further, the step 3 of classifying by using a softmax classifier and calculating to obtain an error between the output predicted tag and the real tag, specifically as follows:
a softmax classifier is adopted: the output of the preceding fully connected layer Fc is fed into the softmax classifier, whose output S_j is
S_j = exp(Fc_output_j) / Σ_{c=1}^{C} exp(Fc_output_c)   (7)
where C is the number of classes to be distinguished, i.e., the number of output neurons of the Fc layer;
the cross-entropy loss function L_cross_entropy is used:
L_cross_entropy = - Σ_{j=1}^{C} y_j · log(S_j)   (8)
where y_j is the true label of the sample;
and calculating to obtain the error between the output predicted label and the real label.
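Numerically, equations (7) and (8) amount to the following short NumPy computation; the Fc outputs and the one-hot label here are made-up placeholder values, not data from the patent.

```python
import numpy as np

fc_output = np.array([2.0, -1.0])   # hypothetical Fc-layer outputs, C = 2
y = np.array([1.0, 0.0])            # one-hot true label y_j

S = np.exp(fc_output) / np.exp(fc_output).sum()  # eq. (7): softmax outputs S_j
L_ce = -np.sum(y * np.log(S))                    # eq. (8): cross-entropy loss
print(S, L_ce)
```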
Compared with the prior art, the invention has notable advantages: (1) the target attributes sensed by a ground reconnaissance radar are identified automatically; features are extracted from the radar time-domain echo data and a classifier is designed, improving the radar's ability to discriminate target attributes; (2) the classifier is trained with the fused features of a number of echo samples to determine the hyper-parameters of the network structure and the corresponding weight coefficients of each layer; the network has few parameters and completes target classification efficiently and accurately; (3) the method is simple to implement and easy to understand, the feature fusion improves the accuracy and generalization capability of the classifier, the decision efficiency is high, and the method performs excellently in ground target attribute identification with low-resolution radar.
The present invention is described in further detail below with reference to the attached drawing figures.
Drawings
Fig. 1 is a flow chart of radar target classification identification.
Fig. 2 is a schematic structural diagram of a one-dimensional convolutional neural network classifier.
Fig. 3 is a schematic diagram of typical human and vehicle echo samples (SNR = 50 dB), where (a) is the human echo sample, (b) the amplitude spectrum of the human echo, (c) the corresponding power spectrum, (d) the power transformation of the human echo amplitude spectrum, (e) the vehicle echo sample, (f) the amplitude spectrum of the vehicle echo, (g) the corresponding power spectrum, and (h) the power transformation of the vehicle echo amplitude spectrum.
Fig. 4 is a schematic diagram of the accuracy of the validation set and the loss function value during the training process, (a) is a schematic diagram of the loss function value, and (b) is a schematic diagram of the accuracy of the validation set.
Detailed Description
The invention discloses a one-dimensional convolution neural network ground radar target classification method based on fusion characteristics, which comprises the following steps:
step 1, preprocessing radar echo data samples in a training sample set and a testing sample set, obtaining an amplitude spectrum and a power spectrum of echo data through FFT, performing power transformation on the amplitude spectrum, and taking three groups of feature vectors as three channels of feature input;
step 2, determining a one-dimensional convolutional neural network architecture as shown in fig. 2, and sending the extracted three-channel characteristics to a full-link layer;
step 3, classifying by using a softmax classifier, and calculating to obtain an error between an output prediction tag and a real tag;
step 4, correcting the network weight coefficient by using a gradient descent method until the maximum iteration times is reached, extracting the corresponding network weight coefficient, and determining a final classifier model;
and 5, sending the radar echo samples of the test set to be identified into the trained classifier, completing target classification, and evaluating the model's accuracy from the final output results.
Further, in step 1, preprocessing is performed on radar echo data samples in the training sample set and the test sample set, an amplitude spectrum and a power spectrum of echo data are obtained through FFT, power transformation is performed on the amplitude spectrum, and three groups of feature vectors are used as three channels for feature input, specifically as follows:
(1.1) assume the input single radar echo sample is x_n, n = 1, ..., N, an N-point sequence of the radar time-domain waveform;
(1.2) carrying out Fourier transform on the N-point sequence to obtain the amplitude spectrum X_An of the radar echo;
(1.3) squaring the amplitude spectrum to obtain the radar power spectrum, and applying a logarithmic transformation to obtain X_Pn;
(1.4) performing power transformation on the amplitude spectrum, with power exponent 0.2, to obtain the power-transformed radar amplitude spectrum X_Mn:
X_Mn = (X_An)^0.2, n = 1, ..., N   (1)
(1.5) combining the amplitude spectrum, power spectrum, and power transformation of the radar echo as three parallel feature vectors to obtain a new feature vector X_n:
X_n = [X_An; X_Pn; X_Mn], n = 1, ..., N   (2)
Further, in step 2 the one-dimensional convolutional neural network architecture is determined and the extracted three-channel features are sent to the fully connected layer. The model comprises, in sequence: convolution blocks (each repeated three times), max pooling layers, a dropout layer, a fully connected layer, a softmax classifier, and a cross-entropy loss function, where a single convolution block comprises, in sequence, a convolutional layer, a batch normalization layer, and a LeakyReLU activation layer. The details are as follows:
(2.1) each convolution block is set to comprise a convolutional layer, a batch normalization layer, and an activation function layer; the convolution stride is set to 1, and zero padding is used in the convolution so that the output dimension is the same as the input dimension;
the dimension of the original radar echo data input to the convolutional layer is (None, 1, N, 1), and after convolution the data size becomes (None, 1, N, L_1), where L_1 is the depth of the convolutional layer; denoting the convolutional-layer output as output_j, the convolution process is as follows:
output_j = f( Σ_{i=1}^{M} x_i * conv_ij + conv_b_j )   (3)
where * denotes the convolution operation, M is the number of channels, conv the convolution kernel, and conv_b_j the bias coefficient at the corresponding position; i indexes the ith input neuron and the subscript j the jth output neuron; f is the activation function, chosen as the LeakyReLU function, so that after activation the output is relu_output_j:
relu_output_j = max(output_j, α · output_j), 0 < α < 1   (4)
(2.2) the number of repetitions of convolution block I is set to 3, and its output is connected to a max pooling layer whose window size is 1×8 with stride 8; the output of this pooling layer serves as the input of convolution block II, which is also repeated 3 times, and its output is sent to another max pooling layer with window size 1×8 and stride 8;
the pooled output pooling_j is expressed as:
pooling_j = down(relu_output_j)   (5)
where down(·) denotes stepping over the input data with the set stride, selecting at each position the maximum value in the current window as the representative of that position;
(2.3) the dropout rate of the dropout layer is set to 0.7, i.e., during each training pass the layer suppresses each neuron with probability 0.7;
(2.4) a fully connected layer Fc is set; the output of convolution block II is flattened into a one-dimensional vector and used as the input of the Fc layer, so the output Fc_output_j of the Fc layer is
Fc_output_j = Σ_i pooling2_i · Fc_ij + Fc_b_j   (6)
where pooling2 denotes the output of the second pooling layer, j indexes the jth neuron, Fc_ij is the weight coefficient between the ith neuron of pooling2 and the jth neuron of the Fc layer, and Fc_b_j is the bias coefficient of the jth neuron.
Further, in step 3, classifying by using a softmax classifier, and calculating to obtain an error between the output predicted tag and the real tag, specifically as follows:
a softmax classifier is adopted: the output of the preceding fully connected layer Fc is fed into the softmax classifier, whose output S_j is
S_j = exp(Fc_output_j) / Σ_{c=1}^{C} exp(Fc_output_c)   (7)
where C is the number of classes to be distinguished, i.e., the number of output neurons of the Fc layer;
the cross-entropy loss function L_cross_entropy is used:
L_cross_entropy = - Σ_{j=1}^{C} y_j · log(S_j)   (8)
where y_j is the true label of the sample;
and calculating to obtain the error between the output predicted label and the real label.
The method is simple to implement and easy to understand, the accuracy and generalization capability of the classifier are further improved through the feature fusion method, the judgment efficiency is high, and the method has excellent performance in the ground radar target attribute identification of the low-resolution radar.
The invention is further described in detail below with reference to the drawings and specific embodiments.
Examples
This embodiment provides a ground reconnaissance radar target identification method based on three-channel feature fusion of the amplitude spectrum, the power spectrum, and the power-transformed amplitude spectrum, in which the hyper-parameters of an improved one-dimensional convolutional neural network structure are determined autonomously from the input features. The method improves on the original LeNet-5 network by reducing the number of network layers and the dimensionality of the convolution kernels, yielding a one-dimensional convolutional neural network classifier suited to radar data; the new structure has a small parameter scale, which guarantees real-time target identification and classification.
As a specific embodiment, the ground radar target identification method based on the feature fusion one-dimensional convolutional neural network mainly comprises five steps:
firstly, data preprocessing is carried out by utilizing radar baseband echo signals
1. Suppose the input single radar echo sample is x_n, n = 1, ..., N, an N-point sequence of the radar time-domain waveform;
2. Fourier transform is carried out on the N-point sequence to obtain the amplitude spectrum X_An, n = 1, ..., N, of the radar echo;
3. The radar power spectrum is obtained by squaring the amplitude spectrum; a logarithmic transformation then gives X_Pn, n = 1, ..., N;
4. Power transformation is applied to the amplitude spectrum,
X_Mn = (X_An)^0.2, n = 1, ..., N   (1)
giving the power transformation X_Mn of the radar amplitude spectrum.
5. The amplitude spectrum, power spectrum, and power transformation of the radar echo are combined as three parallel feature vectors to obtain a new feature vector X_n:
X_n = [X_An; X_Pn; X_Mn], n = 1, ..., N   (2)
Second, the forward propagation process of the CNN structure shown in Fig. 2 is constructed, and the network parameters are trained using the training sample library.
1. Each convolution block is set to comprise a convolutional layer, a batch normalization layer, and an activation function layer; the convolution stride is set to 1, and zero padding is used in the convolution so that the output dimension equals the input dimension. The dimension of the original radar echo data input to the convolutional layer is (None, 1, N, 1), and after convolution the data size becomes (None, 1, N, L_1); denoting the convolutional-layer output as output_j, the convolution process is as follows:
output_j = f( Σ_{i=1}^{M} x_i * conv_ij + conv_b_j )   (3)
where * denotes the convolution operation, M is the number of channels, conv the convolution kernel, and conv_b_j the bias coefficient at the corresponding position; i indexes the ith input neuron and j the jth output neuron; f is the activation function, chosen as the LeakyReLU function, so that after activation the output is relu_output_j:
relu_output_j = max(output_j, α · output_j), 0 < α < 1   (4)
2. The number of repetitions of convolution block I is set to 3, and the output is connected to a max pooling layer with window size 1×8 and stride 8. The output of this pooling layer serves as the input of convolution block II, which is repeated 3 times; its output is sent to another max pooling layer with window size 1×8 and stride 8. The pooled output is expressed as:
pooling_j = down(relu_output_j)   (5)
where down(·) denotes stepping over the input data with the set stride; at each position the maximum value in the current window is selected as the representative of that position.
3. The dropout rate of the dropout layer is set to 0.7; during each training pass the layer suppresses each neuron with probability 0.7.
4. A fully connected layer Fc is set; the output of convolution block II is flattened into a one-dimensional vector and used as the input of the Fc layer:
Fc_output_j = Σ_i pooling2_i · Fc_ij + Fc_b_j   (6)
where pooling2 denotes the output of the second pooling layer, j indexes the jth neuron, Fc_ij is the weight coefficient between the ith neuron of pooling2 and the jth neuron of the Fc layer, and Fc_b_j is the bias coefficient of the jth neuron.
Third, the classifier and loss function are set.
A softmax classifier is adopted: the output of the preceding fully connected layer Fc is fed into the softmax classifier, whose output S_j is:
S_j = exp(Fc_output_j) / Σ_{c=1}^{C} exp(Fc_output_c)   (7)
where C is the number of classes, i.e., the number of Fc-layer output neurons. Finally, the cross-entropy loss function is adopted:
L_cross_entropy = - Σ_{j=1}^{C} y_j · log(S_j)   (8)
where y_j is the true label of the sample.
and fourthly, updating the coefficients of the one-dimensional convolution neural network.
The weight coefficients are updated by back-propagation: the gradients of the loss function with respect to the output layer and each hidden layer are computed, and the corresponding coefficients are updated by gradient descent, completing one training pass. The training process is repeated until the maximum number of iterations is reached, and the network weight and bias coefficients at that point are recorded.
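In the Keras sketch introduced earlier, this fourth step (back-propagation with gradient descent, repeated up to a maximum number of iterations) reduces to a compile-and-fit call along the following lines; the optimizer settings, batch size, epoch count, and the X_train/y_train arrays are assumptions for illustration, not values from the patent.

```python
import tensorflow as tf

# Assumes `model` is the network sketched above and X_train / y_train hold
# the fused three-channel features, shape (samples, 1, N, 3), and one-hot labels.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="categorical_crossentropy",  # cross-entropy loss of eq. (8)
              metrics=["accuracy"])
model.fit(X_train, y_train,
          batch_size=64,   # assumed batch size
          epochs=100)      # plays the role of the maximum number of iterations
model.save_weights("classifier.weights.h5")     # record weight and bias coefficients
```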
Fifth, the classifier is constructed and its performance tested.
The classifier design is completed according to the set network structure and the trained coefficients, and is tested with measured low-resolution ground reconnaissance radar samples from the sample library.
The effect of the invention is further illustrated by verification on two classes of targets, humans and vehicles; the FFT amplitude spectrum, power spectrum, and power-transformed amplitude spectrum of typical human and vehicle samples are shown in Fig. 3(a) to 3(h). 5000 samples from the sample library are selected as the training set and 15000 samples as the verification set, and the obtained recognition rate is 97.8%.
The invention achieves the following remarkable effects. The method is simple to implement: the amplitude spectrum, the power spectrum, and the power-transformed amplitude spectrum obtained by FFT of the radar time-domain echo data serve as the fused feature input of the one-dimensional convolutional neural network. Target classification for low-resolution radar is carried out with a deep convolutional approach, which has fewer parameters and higher accuracy than other models and classification methods. The 5000 training-set samples in the sample library are used to obtain suitable network weight parameters; finally, the samples to be identified are sent to the constructed network for classification. In the classification experiment on 7500 human and vehicle samples, the accuracy reaches 97.8%; as shown in Figs. 4(a) and 4(b), the network does not overfit and exhibits good classification and recognition capability.
In conclusion, the method has broad prospects for battlefield low-resolution ground reconnaissance radar and is significant for discriminating the attributes of ground radar targets and estimating the battlefield situation. Addressing the characteristics of conventional low-resolution radar echo data, three groups of feature vectors (the radar echo power spectrum, the amplitude spectrum, and the power-transformed amplitude spectrum) are used as three channels of feature input. On this basis, a classical convolutional neural network model is improved so that it is suited to processing one-dimensional radar feature data. Compared with traditional neural networks, the resulting classifier model has fewer parameters and completes target classification efficiently and accurately.

Claims (2)

1. A one-dimensional convolution neural network ground radar target classification method based on fusion features is characterized by comprising the following steps:
step 1, preprocessing radar echo data samples in a training sample set and a test sample set, obtaining an amplitude spectrum and a power spectrum of echo data through FFT (fast Fourier transform), performing power transformation on the amplitude spectrum, and taking three groups of feature vectors as three channels of feature input;
step 2, determining a one-dimensional convolutional neural network architecture, and sending the extracted three-channel characteristics to a full connection layer;
step 3, classifying by using a softmax classifier, and calculating to obtain an error between an output prediction tag and a real tag;
step 4, correcting the network weight coefficient by using a gradient descent method until the maximum iteration times is reached, extracting the corresponding network weight coefficient, and determining a final classifier model;
step 5, sending the radar echo sample of the test set to be identified into a classifier obtained by training to finish target classification;
step 1, preprocessing radar echo data samples in a training sample set and a test sample set, obtaining an amplitude spectrum and a power spectrum of echo data through FFT, performing power transformation on the amplitude spectrum, and taking three groups of feature vectors as three channels of feature input, wherein the method specifically comprises the following steps:
(1.1) assume the input single radar echo sample is x_n, n = 1, ..., N, an N-point sequence of the radar time-domain waveform;
(1.2) carrying out Fourier transform on the N-point sequence to obtain the amplitude spectrum X_An of the radar echo;
(1.3) squaring the amplitude spectrum to obtain the radar power spectrum, and applying a logarithmic transformation to obtain X_Pn;
(1.4) performing power transformation on the amplitude spectrum, with power exponent 0.2, to obtain the power-transformed radar amplitude spectrum X_Mn:
X_Mn = (X_An)^0.2, n = 1, ..., N   (1)
(1.5) combining the amplitude spectrum, power spectrum, and power transformation of the radar echo as three parallel feature vectors to obtain a new feature vector X_n:
X_n = [X_An; X_Pn; X_Mn], n = 1, ..., N   (2)
step 2, determining the one-dimensional convolutional neural network architecture and sending the extracted three-channel features to the fully connected layer, is specifically as follows:
(2.1) each convolution block is set to comprise a convolutional layer, a batch normalization layer, and an activation function layer; the convolution stride is set to 1, and zero padding is used in the convolution so that the output dimension is the same as the input dimension;
the dimension of the original radar echo data input to the convolutional layer is (None, 1, N, 1), and after convolution the data size becomes (None, 1, N, L_1), where L_1 is the depth of the convolutional layer; the convolutional-layer output, denoted output_j, is as follows:
output_j = f( Σ_{i=1}^{M} x_i * conv_ij + conv_b_j )   (3)
where * denotes the convolution operation, M is the number of channels, conv the convolution kernel, and conv_b_j the bias coefficient at the corresponding position; i indexes the ith input neuron and the subscript j the jth output neuron; f is the activation function, chosen as the LeakyReLU function, so that after activation the output is relu_output_j:
relu_output_j = max(output_j, α · output_j), 0 < α < 1   (4)
(2.2) the number of repetitions of convolution block I is set to 3, and its output is connected to a max pooling layer whose window size is 1×8 with stride 8; the output of this pooling layer serves as the input of convolution block II, which is also repeated 3 times, and its output is sent to another max pooling layer with window size 1×8 and stride 8;
the pooled output pooling_j is expressed as:
pooling_j = down(relu_output_j)   (5)
where down(·) denotes stepping over the input data with the set stride, selecting at each position the maximum value in the current window as the representative of that position;
(2.3) the dropout rate of the dropout layer is set to 0.7, i.e., during each training pass the layer suppresses each neuron with probability 0.7;
(2.4) a fully connected layer Fc is set; the output of convolution block II is flattened into a one-dimensional vector and used as the input of the Fc layer, so the output Fc_output_j of the Fc layer is
Fc_output_j = Σ_i pooling2_i · Fc_ij + Fc_b_j   (6)
where pooling2 denotes the output of the second pooling layer, j indexes the jth neuron, Fc_ij is the weight coefficient between the ith neuron of pooling2 and the jth neuron of the Fc layer, and Fc_b_j is the bias coefficient of the jth neuron.
2. The method for classifying ground radar targets based on one-dimensional convolutional neural network fused features as claimed in claim 1, wherein the step 3 is to classify by using a softmax classifier, and calculate the error between the output predicted label and the real label, specifically as follows:
a softmax classifier is adopted: the output of the preceding fully connected layer Fc is fed into the softmax classifier, whose output S_j is
S_j = exp(Fc_output_j) / Σ_{c=1}^{C} exp(Fc_output_c)   (7)
where C is the number of classes to be distinguished, i.e., the number of output neurons of the Fc layer;
the cross-entropy loss function L_cross_entropy is used:
L_cross_entropy = - Σ_{j=1}^{C} y_j · log(S_j)   (8)
where y_j is the true label of the sample;
and calculating to obtain the error between the output predicted label and the real label.
CN201911320163.7A 2019-12-19 2019-12-19 Fusion feature-based ground radar target classification method for one-dimensional convolutional neural network Active CN111160176B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911320163.7A CN111160176B (en) 2019-12-19 2019-12-19 Fusion feature-based ground radar target classification method for one-dimensional convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911320163.7A CN111160176B (en) 2019-12-19 2019-12-19 Fusion feature-based ground radar target classification method for one-dimensional convolutional neural network

Publications (2)

Publication Number Publication Date
CN111160176A CN111160176A (en) 2020-05-15
CN111160176B (en) 2022-09-06

Family

ID=70557385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911320163.7A Active CN111160176B (en) 2019-12-19 2019-12-19 Fusion feature-based ground radar target classification method for one-dimensional convolutional neural network

Country Status (1)

Country Link
CN (1) CN111160176B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111832432B (en) * 2020-06-23 2022-03-18 华中科技大学 Cutter wear real-time prediction method based on wavelet packet decomposition and deep learning
CN112034464B (en) * 2020-08-31 2024-06-25 上海英恒电子有限公司 Target classification method
CN112526476A (en) * 2020-12-21 2021-03-19 上海无线电设备研究所 Radar target identification method based on resonance scattering echo signals
CN113238197B (en) * 2020-12-29 2023-07-04 杭州电子科技大学 Radar target identification and judgment method based on Bert and BiLSTM
CN113050109A (en) * 2021-04-01 2021-06-29 河海大学常州校区 Laser ranging method based on deep learning
CN113111774B (en) * 2021-04-12 2022-10-28 哈尔滨工程大学 Radar signal modulation mode identification method based on active incremental fine adjustment
CN113156382B (en) * 2021-04-13 2023-12-01 中国科学院微电子研究所 Signal identification method and device for vehicle-mounted range radar
CN113159218A (en) * 2021-05-12 2021-07-23 北京联合大学 Radar HRRP multi-target identification method and system based on improved CNN
CN113985393B (en) * 2021-10-25 2024-04-16 南京慧尔视智能科技有限公司 Target detection method, device and system
CN115470873B (en) * 2022-11-14 2023-04-18 中国人民解放军国防科技大学 Radar radiation source identification method and system
CN117647803B (en) * 2023-11-17 2024-06-07 乾元科学研究院 Target recognition system and method for low-speed unmanned aerial vehicle

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106338722A (en) * 2016-08-25 2017-01-18 电子科技大学 High-resolution radar one-dimension range profile target identification method based on multi-order samples
CN107220606A (en) * 2017-05-22 2017-09-29 西安电子科技大学 The recognition methods of radar emitter signal based on one-dimensional convolutional neural networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932480B (en) * 2018-06-08 2022-03-15 电子科技大学 Distributed optical fiber sensing signal feature learning and classifying method based on 1D-CNN

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106338722A (en) * 2016-08-25 2017-01-18 电子科技大学 High-resolution radar one-dimension range profile target identification method based on multi-order samples
CN107220606A (en) * 2017-05-22 2017-09-29 西安电子科技大学 The recognition methods of radar emitter signal based on one-dimensional convolutional neural networks

Also Published As

Publication number Publication date
CN111160176A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
CN111160176B (en) Fusion feature-based ground radar target classification method for one-dimensional convolutional neural network
CN112001270B (en) Ground radar automatic target classification and identification method based on one-dimensional convolutional neural network
CN110516596B (en) Octave convolution-based spatial spectrum attention hyperspectral image classification method
CN109993280B (en) Underwater sound source positioning method based on deep learning
CN109060001B (en) Multi-working-condition process soft measurement modeling method based on feature transfer learning
CN111220958A (en) Radar target Doppler image classification and identification method based on one-dimensional convolutional neural network
CN107563433B (en) Infrared small target detection method based on convolutional neural network
CN110033473B (en) Moving target tracking method based on template matching and depth classification network
CN107977683B (en) Joint SAR target recognition method based on convolution feature extraction and machine learning
CN105116397B (en) Radar high resolution range profile target identification method based on MMFA models
CN106443632B (en) The radar target identification method of multitask Factor Analysis Model is kept based on label
CN104537647A (en) Target detection method and device
CN111050315B (en) Wireless transmitter identification method based on multi-core two-way network
CN111368930B (en) Radar human body posture identification method and system based on multi-class spectrogram fusion and hierarchical learning
CN113780242A (en) Cross-scene underwater sound target classification method based on model transfer learning
CN108171119B (en) SAR image change detection method based on residual error network
CN112990082B (en) Detection and identification method of underwater sound pulse signal
CN111126332A (en) Frequency hopping signal classification method based on contour features
CN115438708A (en) Classification and identification method based on convolutional neural network and multi-mode fusion
CN114879185A (en) Radar target intelligent identification method based on task experience migration
CN108983187B (en) Online radar target identification method based on EWC
CN112835008B (en) High-resolution range profile target identification method based on attitude self-adaptive convolutional network
CN111368653A (en) Low-altitude small target detection method based on R-D (R-D) graph and deep neural network
CN113221758B (en) GRU-NIN model-based underwater sound target identification method
CN113344046A (en) Method for improving SAR image ship classification precision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant