CN109784311B - Target identification method based on linear frequency modulation wavelet atomic network - Google Patents
Abstract
The present invention provides a target identification method based on a linear frequency modulation (chirp) wavelet atomic network, comprising: step S1: training the chirp wavelet atomic network offline; and step S2: classifying targets using the chirp wavelet atomic network obtained in step S1 and outputting the identification result. The network is based on a three-layer feedforward neural network structure: chirp wavelet atoms serve as the feature-extraction basis functions of the input layer, the input layer realizes feature extraction through the chirp wavelet atomic transform, and the hidden layer and output layer together form a neural network classifier. The invention has the following advantages: 1. richer feature information about the target can be obtained through the chirp wavelet atomic transform, providing the classifier with more effective data support and improving target identification accuracy; 2. joint feature extraction and classification allows parameters to be adjusted in real time for different target features and recognition environments, improving the recognition performance and anti-noise performance of the recognition system.
Description
Technical Field
The invention relates to the fields of signal processing and pattern recognition, and in particular to a target recognition method based on a linear frequency modulation wavelet atomic network.
Background
With the rapid development of modern signal processing technology and the urgent needs of practical applications, automatic target recognition plays an increasingly important role in modern high-technology warfare. At present, identification based on electromagnetic scattering images is widely applied to various sea, land and air targets, and is one of the research hotspots in the field of target identification.
Automatic target recognition systems typically comprise two separate stages: feature extraction and classification decision. For feature extraction, the article "Radar high range resolution one-dimensional image target identification" (Guo Zunhua, Li Da, Zhang Bayan, Systems Engineering and Electronics, 2013, 35(1): 53-60) notes that modern signal processing techniques are usually adopted to extract effective and reliable electromagnetic scattering image features, such as frequency-domain features (Fourier transform amplitude, power spectrum, bispectrum) and time-frequency features (wavelet transform, Gabor transform). For classification, pattern recognition methods such as neural networks, support vector machines and deep learning are often used as classifiers; these methods are described, for example, in: DUIN R P W and PEKALSKA E, Studies in Computational Intelligence, 2007, 63: 221-259; Guo Zunhua, Telecommunication Engineering, 2018, 58(10): 1121-1126 (one-dimensional convolutional neural networks for radar high-resolution range image recognition); and FENG B, CHEN B, LIU H W, "Radar HRRP target recognition with deep networks", Pattern Recognition, 2017, 61: 379-393.
However, the biggest defect of the prior art is that the feature extraction and target classification stages are performed independently: the identification method can only perform feature extraction and classification separately, and the feature extraction and classification parts of the identification system are difficult to coordinate given the complexity of target features and the uncertainty of the identification process. Because the parameters are not jointly optimized, the identification performance cannot be optimal when the classifier classifies according to the feature information, and the identification rate suffers. This is the first drawback of the prior art.
In addition, the prior art has a second drawback: for targets with complex scattering characteristics, the scattering-point occlusion phenomenon and moving scattering points on the target can damage a stable scattering model, and traditional methods have difficulty extracting effective features from such targets. For target identification against a complex environmental background, the identification accuracy must be maintained as far as possible under noisy conditions; that is, feature information that is richer and reflects the essence of the target must be acquired, so as to improve the anti-noise performance of the identification system.
Because of the first and second defects of the prior art, target misrecognition can cause serious accidents and huge property losses; in military conflicts, misjudgment can cause heavy casualties.
Therefore, quickly and accurately identifying targets in complex environments is one of the great challenges facing the research field, and a major problem urgently requiring study and resolution.
For this problem, if the recognition system can simultaneously acquire feature information about the target's electromagnetic scattering centers, geometric shape, size, time shift, frequency shift and so on, and select appropriate classifier parameters according to different targets and environmental conditions, a better target recognition effect can be expected.
However, among common time-frequency feature extraction methods, the basic Fourier transform can only obtain frequency-domain features of a target; the Gabor transform can obtain time and frequency features simultaneously; and the wavelet transform contains time-domain, frequency-domain and scale information of the target but cannot describe complex scattering points well.
The applicant is aware that the chirplet transform proposed by MANN S and HAYKIN S ("The chirplet transform: physical considerations", IEEE Transactions on Signal Processing, 1995, 43(11): 2745-2761) extends the wavelet transform by adding a chirp-rate parameter; it yields transform coefficients for a series of slopes in the time-frequency plane and can thus obtain more complete feature information about the target.
However, this presents a new challenge: the richer the extracted feature information, the greater the burden placed on the classification stage, and selecting a suitable classifier becomes the key to the success or failure of recognition.
Disclosure of Invention
One purpose of the present invention is to implement feature extraction and target recognition for targets with relatively complex scattering characteristics.
To achieve this purpose, the method adopts the chirp wavelet atomic transform as the feature extraction function of the neural network, extracting target features in the four-dimensional time-frequency-scale-chirp-rate space.
The applicant considers that a joint time-frequency domain description reflects target characteristics more accurately, and that the key to time-frequency feature extraction is determining a group of basis functions optimal for signal classification. Because the chirp wavelet transform generalizes the three-dimensional time-frequency-scale space to a four-dimensional time-frequency-scale-chirp-rate space, chirp wavelet atoms contain richer feature information than wavelet or Gabor atoms; their transform parameters are related to the physical characteristics of the target, their time-frequency localization is better, and they are more advantageous for extracting complex target features. Therefore, the invention adopts the chirp wavelet atomic transform as the feature extraction function of the neural network to extract the four-dimensional time-frequency-scale-chirp-rate spatial features of the target.
Another objective of the present invention is to maintain recognition accuracy as far as possible under noisy conditions, i.e. to improve the noise immunity and recognition performance of the recognition system.
To achieve this purpose, the method adjusts the chirp wavelet atomic parameters and the classifier parameters in coordination, realizing feature extraction and target classification jointly.
The applicant considers that the parameters of the classification algorithm also influence target recognition performance to a certain extent. Therefore, the invention optimizes the feature parameters while training the classifier, realizing synchronous training; the parameters can be coordinated for different target data, yielding a better recognition effect and noise resistance than a system in which feature extraction and target classification are mutually independent. The invention thus organically combines chirp wavelet atoms with a multilayer feedforward neural network, which offers strong adaptive learning capability, distributed storage and parallel cooperative processing, and proposes a chirp wavelet atomic network method that adjusts the chirp wavelet atomic parameters and the classifier parameters cooperatively to realize joint feature extraction and classification of targets.
To achieve the above purpose, the present invention provides a target identification method based on a chirp wavelet atomic network, which adopts the chirp wavelet atomic network for target identification. The chirp wavelet atomic network is based on the three-layer feedforward neural network structure shown in FIG. 2: chirp wavelet atoms are used as the feature-extraction basis functions of the input layer, the input layer realizes feature extraction through the chirp wavelet atomic transform, and the hidden layer and the output layer form the neural network classifier.
The object identification method based on the linear frequency modulation wavelet atomic network comprises the following steps:
step S1: training a linear frequency modulation wavelet atomic network offline; and,
step S2: classifying the targets by using the linear frequency modulation wavelet atomic network obtained in the step S1 and outputting an identification result;
wherein step S1 comprises substep S11: inputting training sample vectors x_n and randomly initializing the chirp wavelet atomic parameter set β_k = {u_k, ξ_k, s_k, c_k} of the chirp wavelet atomic network and the classifier weights w_kh^(1) and w_hm^(2);
wherein n = 1,2,...,N, N being the number of samples; k = 1,2,...,K, K being the number of input layer atoms; h = 1,2,...,H, H being the number of hidden layer nodes; m = 1,2,...,M, M being the number of output layer nodes; β_k = {u_k, ξ_k, s_k, c_k} comprises the time shift parameter u_k, the frequency shift parameter ξ_k, the scale parameter s_k, and the chirp rate c_k; w_kh^(1) is the connection weight between the input layer and the hidden layer; w_hm^(2) is the connection weight between the hidden layer and the output layer.
Substep S12: using the data sequence t_l and the chirp wavelet atomic parameter set β_k, compute the chirp wavelet atoms according to equations 1 and 2, and form the chirp wavelet atomic vector g_k from them;
wherein the chirp wavelet atomic parameter set β_k is obtained from substep S11;
wherein t_l is the data sequence, l = 1,2,...,L, L being the sample length; equation 1 is the chirp wavelet atomic expression:

g_k(t_l) = (1/√s_k) · g((t_l − u_k)/s_k) · e^{ j[ ξ_k(t_l − u_k) + (c_k/2)(t_l − u_k)² ] }    (equation 1)

wherein g(t) is the basic window function of the chirp wavelet transform, for which a Gaussian window is typically used:

g(t) = π^(−1/4) · e^(−t²/2)    (equation 2)

wherein the chirp wavelet atomic vector g_k = [g_k(t_1), g_k(t_2), ..., g_k(t_L)]^T, the superscript T is the matrix transpose, e is the natural constant, and j is the imaginary unit.
Substep S13: using the chirp wavelet atomic vector g_k obtained in substep S12 and the training sample vector x_n input in substep S11, compute the feature value φ_nk of every sample at each atomic node according to equation 3:

φ_nk = | g_k^T · x_n |    (equation 3)
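Equations 1-3 (the chirplet atom, its Gaussian window, and the feature value) can be sketched in NumPy as follows; all parameter values below are arbitrary placeholders for illustration, not values prescribed by the patent:

```python
import numpy as np

def chirplet_atom(t, u, xi, s, c):
    """Chirp wavelet (chirplet) atom of equation 1 with the
    Gaussian window of equation 2: g((t-u)/s) scaled by 1/sqrt(s),
    modulated by frequency shift xi and chirp rate c."""
    tau = (t - u) / s
    window = np.pi ** -0.25 * np.exp(-tau ** 2 / 2)
    phase = xi * (t - u) + 0.5 * c * (t - u) ** 2
    return (1 / np.sqrt(s)) * window * np.exp(1j * phase)

def feature_value(g_k, x_n):
    """Equation 3: absolute value of the inner product of the
    atomic vector g_k and the sample vector x_n."""
    return np.abs(g_k @ x_n)

# Illustrative use with arbitrary parameters
t = np.linspace(-1, 1, 64)          # data sequence t_l, L = 64
g_k = chirplet_atom(t, u=0.0, xi=10.0, s=0.5, c=5.0)
x_n = np.cos(10 * t)                # a toy sample vector
phi_nk = feature_value(g_k, x_n)    # scalar feature at this atomic node
```

The atomic vector is complex-valued; taking the modulus of the inner product gives the real, non-negative feature value used by the classifier.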
Substep S14: the sample feature values φ_nk obtained in substep S13 are input to the classifier of the neural network, which computes the output o_nh of the hidden layer and the output y_nm of the output layer through the activation function according to equations 4-7:

o_nh = f(net_nh)    (equation 4)
y_nm = f(net_nm)    (equation 5)

wherein n = 1,2,...,N, N being the number of samples; h = 1,2,...,H, H being the number of hidden layer nodes; m = 1,2,...,M, M being the number of output layer nodes; f is the Sigmoid activation function f(x) = 1/(1 + e^(−x)); and the intermediate variables net_nh and net_nm are given by:

net_nh = Σ_{k=1..K} w_kh^(1) · φ_nk    (equation 6)
net_nm = Σ_{h=1..H} w_hm^(2) · o_nh    (equation 7)
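The forward pass of equations 4-7 can be sketched as below; the layer sizes and random weights are illustrative placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(phi_n, W1, W2):
    """Forward pass of equations 4-7: phi_n is the K-vector of
    feature values for one sample, W1 (K x H) holds the
    input-to-hidden weights, W2 (H x M) the hidden-to-output
    weights."""
    net_h = phi_n @ W1          # equation 6
    o_h = sigmoid(net_h)        # equation 4, hidden layer outputs
    net_m = o_h @ W2            # equation 7
    y_m = sigmoid(net_m)        # equation 5, output layer outputs
    return o_h, y_m

rng = np.random.default_rng(0)
K, H, M = 8, 5, 2               # illustrative layer sizes
phi_n = rng.random(K)           # toy feature values from substep S13
W1 = rng.normal(size=(K, H))
W2 = rng.normal(size=(H, M))
o_h, y_m = forward(phi_n, W1, W2)
```

Because the activation is Sigmoid, every output lies strictly between 0 and 1, matching the 0/1 desired outputs of substep S15.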
substep S15, comprising substeps S15A-S15B;
wherein, in substep S15A, the desired output d_nm of the sample is set to 1 or 0: d_nm = 1 if the sample belongs to the preset target, and d_nm = 0 if it does not;
Substep S15B: from the desired output d_nm and the actual output y_nm, compute the mean square error E using equation 8:

E = (1/2) · Σ_{n=1..N} Σ_{m=1..M} (d_nm − y_nm)²    (equation 8)
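A minimal sketch of the mean square error of equation 8, using toy desired and actual outputs:

```python
import numpy as np

def mean_square_error(d, y):
    """Equation 8: E = 1/2 * sum over samples n and outputs m
    of (d_nm - y_nm)^2."""
    return 0.5 * np.sum((d - y) ** 2)

d = np.array([[1.0, 0.0], [0.0, 1.0]])   # desired outputs d_nm (0/1 labels)
y = np.array([[0.9, 0.1], [0.2, 0.8]])   # actual outputs y_nm
E = mean_square_error(d, y)              # 0.5 * (0.01+0.01+0.04+0.04) = 0.05
```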
Substep S16: judge whether the expected effect has been achieved by comparing the mean square error E with a preset threshold; if E is less than the threshold, the learning process ends and the method goes to substep S18; if E is greater than or equal to the threshold, the following substeps continue;
substep S17, weight adjustment, and adjustment of chirp wavelet atomic parameters, and then iterative loop:
substep S18: preserving chirp wavelet atomic parameters beta k ={u k ,ξ k ,s k ,c k And the connection weight between the input layer and the hidden layerAnd the connection weight between hidden layer and output layer->And finishing the training.
The substep S17 comprises substeps S17A-S17C;
Substep S17A: reversely adjust the connection weight w_kh^(1) between the input layer and the hidden layer and the connection weight w_hm^(2) between the hidden layer and the output layer using the Levenberg-Marquardt algorithm;
Substep S17B: adjust the atomic node parameter set β_k = {u_k, ξ_k, s_k, c_k} according to the gradient descent algorithm using equation 9:

β_k(n+1) = β_k(n) − η · ∂E/∂β_k(n)    (equation 9)

wherein n is the iteration number, η is the learning rate, ∂ is the partial derivative operator, and E is the mean square error;
substep S17C, iterative loop: returning to the substep S12, and performing the next training period;
wherein, in the next training period, the chirp wavelet atomic parameter set β_k used in substep S12 is the one obtained from substep S17B.
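The gradient-descent update of equation 9 can be illustrated with a finite-difference gradient on a toy quadratic loss; the patent itself uses analytic partial derivatives of E with respect to the atomic parameters, so this is only a sketch of the update rule:

```python
import numpy as np

def numerical_grad(loss, beta, eps=1e-6):
    """Finite-difference stand-in for dE/d(beta) in equation 9;
    the patent derives analytic partial derivatives instead."""
    grad = np.zeros_like(beta)
    for i in range(beta.size):
        b_plus, b_minus = beta.copy(), beta.copy()
        b_plus[i] += eps
        b_minus[i] -= eps
        grad[i] = (loss(b_plus) - loss(b_minus)) / (2 * eps)
    return grad

# Toy loss standing in for the network error E as a function of
# beta_k = [u_k, xi_k, s_k, c_k]; its minimum sits at 'target'.
target = np.array([0.1, 10.0, 0.5, 5.0])
loss = lambda beta: 0.5 * np.sum((beta - target) ** 2)

beta = np.array([0.0, 8.0, 1.0, 3.0])    # initial atomic parameters
eta = 0.1                                 # learning rate
for _ in range(200):                      # iterative loop (substep S17C)
    beta = beta - eta * numerical_grad(loss, beta)   # equation 9
```

On this quadratic toy loss the iteration converges to the minimizer, mirroring how equation 9 drives the atomic parameters toward values that reduce E.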
In the target identification method based on the linear frequency modulation wavelet atomic network, step S2 classifies the targets using the chirp wavelet atomic network obtained in step S1 and outputs the identification result; step S2 comprises the following substeps:
substep S21: inputting data and preprocessing the data;
the substep S21 comprises substeps S21A-S21C; the execution sequence of the substeps S21A-S21C is arbitrary;
substep S21A:
input training sample vector x n And data preprocessing is carried out; wherein N =1,2., N is the number of samples,
substep S21B:
input the chirp wavelet atomic parameter set β_k = {u_k, ξ_k, s_k, c_k} of the chirp wavelet atomic network obtained in step S1, wherein k = 1,2,...,K, K being the number of input layer atoms; β_k = {u_k, ξ_k, s_k, c_k} comprises the time shift parameter u_k, the frequency shift parameter ξ_k, the scale parameter s_k, and the chirp rate c_k;
Substep S21C:
input the classifier weights w_kh^(1) and w_hm^(2) of the chirp wavelet atomic network obtained in step S1, wherein the classifier weights comprise the connection weight w_kh^(1) between the input layer and the hidden layer and the connection weight w_hm^(2) between the hidden layer and the output layer.
Substep S22: according to equations 1 and 2, using the data sequence t_l and the chirp wavelet atomic parameter set β_k input in substep S21, compute the chirp wavelet atoms and form the chirp wavelet atomic vector g_k from them;
wherein t_l is the data sequence, l = 1,2,...,L, L being the sample length, and g(t) is the basic window function of the chirp wavelet transform;
wherein the chirp wavelet atomic vector g_k = [g_k(t_1), g_k(t_2), ..., g_k(t_L)]^T, the superscript T is the matrix transpose, and e is the natural constant.
Substep S23: using the chirp wavelet atomic vector g_k obtained in substep S22 and the sample vector x_n input in substep S21, compute the feature value φ_nk of every sample at each atomic node according to equation 3:

φ_nk = | g_k^T · x_n |    (equation 3)
Substep S24: the sample feature values φ_nk obtained in substep S23 are input to the classifier of the neural network, which computes the output o_nh of the hidden layer and the output y_nm of the output layer through the activation function according to equations 4-7; whether the sample belongs to the p-th class target is then determined according to the output y_nm of the output layer.
Substep S25: output the classification result, completing the identification.
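The online identification flow of substeps S22-S25 can be sketched end to end; the "trained" parameters here are random placeholders standing in for the values saved in substep S18:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def classify(x_n, t, betas, W1, W2):
    """Online identification (substeps S22-S25): rebuild the atomic
    vectors (equation 1 with the Gaussian window of equation 2),
    extract features (equation 3), run the classifier
    (equations 4-7) and return the output layer values."""
    atoms = []
    for u, xi, s, c in betas:
        tau = (t - u) / s
        g = (np.pi ** -0.25 * np.exp(-tau ** 2 / 2) / np.sqrt(s)
             * np.exp(1j * (xi * (t - u) + 0.5 * c * (t - u) ** 2)))
        atoms.append(g)
    phi = np.abs(np.array(atoms) @ x_n)   # feature values phi_nk
    return sigmoid(sigmoid(phi @ W1) @ W2)

rng = np.random.default_rng(1)
t = np.linspace(-1, 1, 32)                      # data sequence t_l
betas = [(0.0, 5.0, 0.5, 2.0), (0.2, 8.0, 0.4, -2.0)]  # placeholder atoms
W1 = rng.normal(size=(2, 4))                    # placeholder trained weights
W2 = rng.normal(size=(4, 2))
y = classify(np.cos(5 * t), t, betas, W1, W2)
predicted_class = int(np.argmax(y))             # substep S25 decision
```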
The beneficial effects of this invention are:
Compared with the prior art, the invention has the following notable advantages:
1. richer feature information about the target can be obtained through the chirp wavelet atomic transform, providing the classifier with more effective data support and helping to improve the target identification accuracy of the system;
2. joint feature extraction and classification allows parameters to be adjusted in real time for different target features and recognition environments, improving the recognition performance and anti-noise performance of the recognition system.
Drawings
FIG. 1 is a diagram of an implementation of a target recognition technique that combines feature extraction and classification;
FIG. 2 is a diagram of a chirp wavelet atomic network structure;
FIG. 3A shows the recognition rate of Gabor atomic network based on different azimuth data of radar under different SNR conditions;
FIG. 3B is a graph showing the recognition rate of a chirp wavelet atomic network based on different azimuth data of a radar under different signal-to-noise ratios;
Description of reference numerals: 1-input vector (sample data set), 2-vector product, 3-input layer, 4-scalar product, 5-hidden layer, 6-scalar product, 7-output layer, 8-output (classification result), 9-Sigmoid activation function, 10-connection weight between the hidden layer and the output layer, 11-Sigmoid activation function, 12-connection weight between the input layer and the hidden layer, 13-feature value, 14-chirp wavelet atomic vector set.
Detailed Description
In order to make the technical means, creative features, objectives and effects of the invention easier to understand, the invention is further described below with reference to the accompanying drawings and specific embodiments.
Through multiple rounds of screening and simulation tests, the applicant considers that neural networks have the advantages of strong adaptive learning capability, distributed storage, and parallel cooperative processing, and are well suited to learning from complex data; a neural network can therefore serve as the classifier for complex target features.
For improving the anti-noise performance of the identification system, on the one hand, the applicant considers that the target data needs to be preprocessed to reduce noise as much as possible;
on the other hand, the recognition system should adapt to different environmental conditions. According to the wavelet neural network article of ALEXANDRIDIS A K and ZAPRANIS A D ("Wavelet neural networks: a practical guide", Neural Networks, 2013, 42) and the atomic network article of SHI Y and ZHANG X D (IEEE Transactions on Signal Processing, 2001, 49(12): 2994-3004), networks of this kind can learn their transform parameters adaptively from the data.
Combining these factors, the applicant finally adopted a method that combines the chirp wavelet atomic transform with a neural network to extract effective four-dimensional time-frequency feature information of the target and realize target classification synchronously.
Aiming at the two technical problems in the target identification field, namely how to extract effective target features and how to design a reliable classification algorithm, the invention provides a method for joint feature extraction and target classification: the chirp wavelet atomic network. It improves the recognition rate and anti-noise performance of a target recognition system and reduces the heavy losses caused by misidentification when recognition technology is deployed in real life.
The method adopts the chirp wavelet atomic transform to extract the four-dimensional time-frequency-scale-chirp-rate spatial features of the target signal and adjusts the chirp wavelet atomic parameters and the classifier parameters in coordination, thereby achieving a higher recognition rate and better anti-noise performance.
The present invention provides a target identification method based on a linear frequency modulation wavelet atomic network, which adopts the chirp wavelet atomic network for target identification. The chirp wavelet atomic network is based on the three-layer feedforward neural network shown in FIG. 2, a structural diagram of the chirp wavelet atomic network, with reference numerals: 1-input vector (sample data set), 2-vector product, 3-input layer, 4-scalar product, 5-hidden layer, 6-scalar product, 7-output layer, 8-output (classification result), 9-Sigmoid activation function, 10-connection weight between the hidden layer and the output layer, 11-Sigmoid activation function, 12-connection weight between the input layer and the hidden layer, 13-feature value, 14-chirp wavelet atomic vector set.
As shown in fig. 1, the present creation proposes a target identification method based on a chirp wavelet atomic network, including:
step S1, off-line training:
training a linear frequency modulation wavelet atomic network offline; and,
step S2, online identification:
classifying the targets using the linear frequency modulation wavelet atomic network obtained in step S1 and outputting the identification result.
For the above steps, P chirp wavelet atomic networks can be run simultaneously in parallel to obtain the characteristic parameters of P classes of targets.
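The P parallel class networks can be fused with a simple decision rule; the argmax fusion below is an illustrative assumption, since the patent specifies parallel networks but does not spell out this exact rule:

```python
import numpy as np

def one_vs_all_decision(scores):
    """Assumed fusion rule for the P parallel networks: each class-p
    network yields a score for 'belongs to class p'; the class with
    the largest score wins. This decision rule is an illustrative
    assumption, not taken from the patent text."""
    return int(np.argmax(scores))

scores = np.array([0.12, 0.85, 0.30])   # toy outputs of P = 3 class networks
p_hat = one_vs_all_decision(scores)     # identified class index
```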
wherein p = 1,2,...,P; step S1, the offline training of the chirp wavelet atomic network for the p-th class target, comprises the following substeps:
substep S11, network initialization:
as shown by the input vector (sample data set) 1 of FIG. 2, input the training sample vectors x_n and randomly initialize the chirp wavelet atomic parameter set β_k = {u_k, ξ_k, s_k, c_k} of the chirp wavelet atomic network and the classifier weights w_kh^(1) and w_hm^(2);
wherein n = 1,2,...,N, N being the number of samples; k = 1,2,...,K, K being the number of input layer atoms; h = 1,2,...,H, H being the number of hidden layer nodes; m = 1,2,...,M, M being the number of output layer nodes;
wherein the chirp wavelet atomic parameter set β_k = {u_k, ξ_k, s_k, c_k} comprises the time shift parameter u_k, the frequency shift parameter ξ_k, the scale parameter s_k, and the chirp rate c_k; the time shift parameter u_k represents the time center of the signal, the frequency shift parameter ξ_k represents the frequency center of the signal, the scale parameter s_k represents the stretching of the chirp wavelet atom, and the chirp rate c_k represents the slope of the signal on the time-frequency plane;
wherein the classifier weights w_kh^(1) and w_hm^(2) are the connection weights of the network layers: w_kh^(1) is the connection weight between the input layer and the hidden layer, and w_hm^(2) is the connection weight between the hidden layer and the output layer.
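Substep S11's random initialization can be sketched as follows; the parameter ranges and layer sizes are placeholders chosen for illustration, not values prescribed by the patent:

```python
import numpy as np

rng = np.random.default_rng(42)
K, H, M = 8, 5, 2                 # atoms, hidden nodes, output nodes (illustrative)

# Random initialization of the atomic parameter sets beta_k (substep S11);
# the ranges here are arbitrary placeholders.
u = rng.uniform(-1.0, 1.0, K)     # time shift (time center of the signal)
xi = rng.uniform(0.0, 20.0, K)    # frequency shift (frequency center)
s = rng.uniform(0.1, 2.0, K)      # scale (stretching), kept strictly positive
c = rng.uniform(-10.0, 10.0, K)   # chirp rate (slope on the time-frequency plane)

# Random initialization of the classifier weights
W1 = rng.normal(scale=0.1, size=(K, H))   # input -> hidden, w_kh^(1)
W2 = rng.normal(scale=0.1, size=(H, M))   # hidden -> output, w_hm^(2)
```

Keeping the scale parameter positive matters because equation 1 divides by s_k and takes its square root.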
Substep S12, calculating the chirp wavelet atoms:
using the data sequence t_l and the chirp wavelet atomic parameter set β_k, compute the chirp wavelet atoms according to equations 1 and 2, and form the chirp wavelet atomic vector g_k from them;
wherein the chirp wavelet atomic parameter set β_k is obtained from substep S11 before the iterative loop of substep S17C starts, and from substep S17B afterwards;
wherein equation 1 is the chirp wavelet atomic expression; t_l is the data sequence, l = 1,2,...,L, L being the sample data length:

g_k(t_l) = (1/√s_k) · g((t_l − u_k)/s_k) · e^{ j[ ξ_k(t_l − u_k) + (c_k/2)(t_l − u_k)² ] }    (equation 1)

wherein g(t) is the basic window function of the chirp wavelet transform; a Gaussian window function is generally adopted, with the expression:

g(t) = π^(−1/4) · e^(−t²/2)    (equation 2)

wherein the chirp wavelet atoms serve as the feature-extraction basis functions of the input layer 3 in FIG. 2, and the input layer 3 realizes feature extraction through the chirp wavelet atomic transform;
wherein the superscript T is the matrix transpose, e is the natural constant, and j is the imaginary unit;
Substep S13, calculating the feature values of the input layer 3 shown in FIG. 2 to realize feature extraction:
using the chirp wavelet atomic vector g_k obtained in substep S12 and the training sample vector x_n input in substep S11, compute the feature value φ_nk of every sample at each atomic node according to equation 3:

φ_nk = | g_k^T · x_n |    (equation 3)

that is, the feature value φ_nk of a sample is the absolute value of the vector inner product of the chirp wavelet atomic vector g_k = [g_k(t_1), g_k(t_2), ..., g_k(t_L)]^T and the training sample vector x_n = [x_n(t_1), x_n(t_2), ..., x_n(t_L)]^T; this performs the chirp wavelet atomic transform of the sample vector x_n with the atomic vector g_k;
wherein the superscript T is the matrix transpose and e is the natural constant;
Substep S14, calculating the output o_nh of the hidden layer 5 and the output y_nm of the output layer 7 of FIG. 2:
the sample feature values φ_nk obtained in substep S13 are input to the classifier of the neural network, which computes the output o_nh of the hidden layer and the output y_nm of the output layer through the activation functions of the hidden layer nodes 11 and the output layer nodes 9 according to equations 4-7:

o_nh = f(net_nh)    (equation 4)
y_nm = f(net_nm)    (equation 5)

net_nh = Σ_{k=1..K} w_kh^(1) · φ_nk    (equation 6)
net_nm = Σ_{h=1..H} w_hm^(2) · o_nh    (equation 7)

wherein the hidden layer 5 and the output layer 7 form the neural network classifier; in this invention, the feature value parameters of the input layer 3 and the classifier parameters of the hidden layer 5 and the output layer 7 are optimized in coordination, so that feature extraction (see substep S13) and target classification (substeps S14-S16) of the chirp wavelet atomic network are realized jointly;
wherein n = 1,2,...,N, N being the number of samples; h = 1,2,...,H, H being the number of hidden layer nodes; m = 1,2,...,M, M being the number of output layer nodes; the activation function f adopts the Sigmoid function f(x) = 1/(1 + e^(−x));
substep S15, calculating the mean square error E:
substep S15, comprising substeps S15A-S15B;
substep S15A: set the desired output d_nm of each sample to 1 or 0, where d_nm = 1 indicates the sample belongs to the preset p-th class target and d_nm = 0 indicates it does not;
substep S15B: from the desired output d_nm and the actual output y_nm, calculate the mean square error E using Equation 8:
substep S16, judging whether the expected effect is achieved:
comparing the mean square error E with a preset threshold;
if the mean square error E is smaller than the preset threshold, the learning process ends; go to substep S18;
if the mean square error E is greater than or equal to the preset threshold, continue with the following substeps;
substep S17, weight adjustment, adjustment of linear frequency modulation wavelet atomic parameters, and iteration loop:
substep S17, comprising substeps S17A-S17C;
In the present invention, the automatic adjustment of the classifier weights (substep S17A) and the automatic adjustment of the linear frequency modulation wavelet atomic time-frequency parameters (substep S17B) are performed jointly, adjusting in reverse layer by layer: output layer → hidden layer → input layer.
Substep S17A, weight adjustment:
reversely adjusting the connection weights of each layer of the network by means of the Levenberg-Marquardt algorithm, namely the connection weights between the input layer and the hidden layer and the connection weights between the hidden layer and the output layer;
Substep S17B, adjusting the chirp wavelet atomic parameters:
adjusting the atomic node parameter set β_k = {u_k, ξ_k, s_k, c_k} according to the gradient descent algorithm using Equation 9:
where n is the iteration number, η is the learning rate, ∂ is the partial-derivative operator, and E is the mean square error; the gradient descent algorithm is based on the delta learning rule;
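The Equation 9 update can be sketched as a plain delta-rule step over the four time-frequency parameters; the numeric parameter and gradient values below are illustrative only:

```python
def update_atom_params(beta_k, grad_e, eta=0.05):
    # Equation 9 as the delta rule: beta_k(n+1) = beta_k(n) - eta * dE/dbeta_k,
    # applied to each of the four time-frequency parameters u_k, xi_k, s_k, c_k
    return {p: beta_k[p] - eta * grad_e[p] for p in ("u", "xi", "s", "c")}

beta = {"u": 0.5, "xi": 40.0, "s": 0.2, "c": 25.0}   # current atom parameters
grad = {"u": 1.0, "xi": -2.0, "s": 0.1, "c": 0.0}    # illustrative dE/dbeta_k values
beta_next = update_atom_params(beta, grad, eta=0.1)  # one gradient-descent step
```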
substep S17C, iterative loop:
returning to the substep S12, performing the next training period;
where, before the iterative loop starts, the linear frequency modulation wavelet atomic parameter set β_k is obtained from substep S11; once the iterative loop has started, in each subsequent training period the parameter set β_k used in substep S12 is obtained from substep S17B.
Substep S18: save the linear frequency modulation wavelet atomic parameters β_k = {u_k, ξ_k, s_k, c_k} and the connection weights of each layer of the network, and finish the training.
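A compact end-to-end sketch of the training loop (substeps S12-S18) on toy data follows. Two simplifications are assumed and are not the patent's method: plain gradient descent stands in for the Levenberg-Marquardt step of substep S17A, and the atomic-parameter update of substep S17B is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy dimensions: N samples, K atomic features, H hidden nodes, M output nodes
N, K, H, M = 8, 6, 5, 2
phi = rng.random((N, K))                 # stand-in for the S13 feature values
d = (rng.random((N, M)) > 0.5) * 1.0     # desired outputs d_nm in {0, 1}
w1 = rng.normal(0.0, 0.5, (H, K))        # input -> hidden connection weights
w2 = rng.normal(0.0, 0.5, (M, H))        # hidden -> output connection weights

eta, threshold = 0.5, 1e-3
e_hist = []
for epoch in range(300):                    # iterative loop (substep S17C)
    o = sigmoid(phi @ w1.T)                 # S14: hidden-layer outputs, (N, H)
    y = sigmoid(o @ w2.T)                   # S14: output-layer results, (N, M)
    e = 0.5 * np.sum((d - y) ** 2)          # S15: assumed form of Equation 8
    e_hist.append(e)
    if e < threshold:                       # S16: expected effect reached
        break
    delta_y = (y - d) * y * (1 - y)         # error term at the output layer
    delta_o = (delta_y @ w2) * o * (1 - o)  # error back-propagated to hidden layer
    w2 -= eta * delta_y.T @ o               # S17A (gradient descent stands in
    w1 -= eta * delta_o.T @ phi             #       for Levenberg-Marquardt here)
# The S17B update of beta_k is omitted: it needs dE/dbeta_k through the atoms g_k
```

Over the loop the mean square error decreases toward the preset threshold, mirroring the S16 stopping criterion.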
After the offline training step S1 is completed, the online identification step S2 may be performed.
For p = 1,2,...,P, the identification of the p-th class target by the linear frequency modulation wavelet atomic network is as follows:
step S2, online identification:
and classifying the targets by using the linear frequency modulation wavelet atomic network obtained in the step S1 and outputting an identification result.
Accordingly, in step S2, the identification of the p-th class object includes the following sub-steps:
substep S21: inputting data and preprocessing the data;
a substep S21 comprising substeps S21A-S21C; the execution sequence of the substeps S21A-S21C is arbitrary;
substep S21A:
as shown by the input vector (sample data set) 1 of FIG. 2, inputting a training sample vector x_n and carrying out data preprocessing;
where n = 1,2,...,N and N is the number of samples;
substep S21B:
inputting the linear frequency modulation wavelet atomic parameter set β_k = {u_k, ξ_k, s_k, c_k} of the linear frequency modulation wavelet atomic network obtained in step S1,
where k = 1,2,...,K and K is the number of input-layer atoms; the parameter set β_k = {u_k, ξ_k, s_k, c_k} includes a time-shift parameter u_k, a frequency-shift parameter ξ_k, a scale parameter s_k, and a linear frequency c_k, where the time-shift parameter u_k represents the time center of the signal, the frequency-shift parameter ξ_k represents the frequency center of the signal, the scale parameter s_k represents the dilation of the linear frequency modulation wavelet atom, and the linear frequency c_k represents the slope of the signal in the time-frequency plane;
substep S21C:
inputting the classifier weights of the linear frequency modulation wavelet atomic network obtained in step S1,
where the classifier weights are the connection weights of each layer of the network, including the connection weights between the input layer and the hidden layer and the connection weights between the hidden layer and the output layer;
Substep S22, calculating the linear frequency modulation wavelet atoms:
using the data sequence t_l and the linear frequency modulation wavelet atomic parameter set β_k input in substep S21, calculate the linear frequency modulation wavelet atoms according to Equation 1 and Equation 2,
and form therefrom the linear frequency modulation wavelet atomic vector g_k;
where Equation 1 is the linear frequency modulation wavelet atomic expression; t_l is the data sequence, with l = 1,2,...,L, and L is the sample data length;
where g(t) is the basic window function of the linear frequency modulation wavelet transform; a Gaussian window function is generally adopted as the basic window function of the linear frequency modulation wavelet atomic transform, with the following expression:
Here, the linear frequency modulation wavelet atom serves as the feature-extraction basis function of the input layer 3 in FIG. 2; the input layer 3 performs feature extraction through the linear frequency modulation wavelet atomic transform,
where the superscript T denotes matrix transposition and e is the natural constant;
substep S23: calculating, according to Equations 1-3, the characteristic values φ_nk of all samples at each atomic node of the input layer 3 of FIG. 2;
using the linear frequency modulation wavelet atomic vector g_k obtained in substep S22 and the training sample vector x_n input in substep S21, calculate, according to Equation 3, the characteristic value φ_nk of every sample at each atomic node:
that is, the characteristic value φ_nk of a sample is the absolute value of the vector inner product of the linear frequency modulation wavelet atomic vector g_k and the training sample vector x_n = [x_n(t_1), x_n(t_2), ..., x_n(t_L)]^T; this applies the linear frequency modulation wavelet atomic transform to the sample vector x_n using the atomic vector g_k;
substep S24, calculating the output o_nh of the hidden layer 5 of FIG. 2 and the output result y_nm of the output layer 7, and judging whether the sample belongs to the p-th class target:
the characteristic value φ_nk of the sample obtained in substep S23 is input to the neural network classifier, which computes the hidden-layer output o_nh and the output-layer result y_nm according to Equations 4-7, through the activation functions of node 11 of the hidden layer 5 and node 9 of the output layer 7; according to the output result y_nm of the output layer, it is determined whether the sample belongs to the p-th class target:
wherein the activation function adopts a Sigmoid function;
At this point, the task of classifying the target according to each sample's output value is complete, and the method proceeds to the following step:
substep S25: outputting a classification result:
and outputting a classification result to finish the identification.
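The online identification of substeps S22-S25 can be sketched as a single function, assuming the trained atom vectors g_k and classifier weights saved in substep S18; the final argmax decision rule is an assumption of this sketch, since the text decides class membership per output node y_nm:

```python
import numpy as np

def identify(x_n, atoms, w1, w2):
    # Online identification sketch (substeps S22-S25):
    # atoms: list of trained atomic vectors g_k; w1, w2: trained classifier weights
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    phi = np.array([np.abs(np.vdot(g_k, x_n)) for g_k in atoms])  # S23: features
    o_n = sigmoid(w1 @ phi)          # S24: hidden-layer outputs
    y_n = sigmoid(w2 @ o_n)          # S24: output-layer results
    return int(np.argmax(y_n)), y_n  # S25: class decision and raw outputs
```

Only the feature extraction and one forward pass are needed online; no parameter adjustment takes place in step S2.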
To illustrate the beneficial effects of the present invention, the following four examples are given. Examples 1-3 are identification tests carried out with a controlled-variable method (varying azimuth angle and signal-to-noise ratio). Example 4 is a comprehensive test, whose results are shown in FIG. 3.
Example 1
The target identification method provided by the invention is used to train and test a linear frequency modulation wavelet atomic network, with electromagnetic scattering images of four types of targets at azimuth angles of 0-45° and a signal-to-noise ratio of 30 dB as sample data; compared with existing methods (a back-propagation neural network, a wavelet neural network, and a Gabor atomic network), the recognition rates shown in Table 1 are obtained. This example shows that the present invention has better recognition performance for radar narrow-angle scattering images.
TABLE 1. Recognition rate (%) for data in the 0-45° azimuth range
Example 2
The four networks described in Example 1 were trained and tested using electromagnetic scattering images of four types of targets with azimuth angles of 0-180° and a signal-to-noise ratio of 30 dB as sample data, yielding the recognition rates shown in Table 2. This example shows that, compared with other prior-art target identification methods, the present invention achieves a relatively high recognition rate for radar wide-angle scattering images.
TABLE 2. Recognition rate (%) for data in the 0-180° azimuth range
Example 3
An anti-noise performance test was performed on the four networks trained in Example 2, using electromagnetic scattering images of four types of targets with azimuth angles of 0-180° and a signal-to-noise ratio of 5 dB as test sample data, yielding the recognition rates shown in Table 3. This example shows that the present invention has good noise immunity in radar scattering image identification.
TABLE 3. Recognition rate (%) for data with a signal-to-noise ratio of 5 dB
Example 4
As a prior-art baseline, electromagnetic scattering images of four types of targets with azimuth angles of 0-45°, 0-90°, and 0-180° and signal-to-noise ratios from 30 dB to 5 dB are used as sample data for comprehensive training and testing of a Gabor atomic network, as shown in FIG. 3A, to obtain the average recognition rates of the four target types.
In example 4, electromagnetic scattering images of four types of targets with azimuth angles of 0 to 45 °, 0 to 90 °, 0 to 180 °, and signal-to-noise ratios of 30dB to 5dB are used as sample data, and as shown in fig. 3B, comprehensive training and testing are performed on a linear frequency modulation wavelet atomic network to obtain an average recognition rate result of the four types of targets.
Example 4 shows that, as image noise increases, the radar scattering image recognition performance of the present invention degrades to a smaller extent, and its recognition rate is overall higher than that of the existing Gabor atomic network identification method.
In conclusion, the beneficial effects of the present invention are as follows.
Compared with the prior art, the present invention has the following notable advantages:
1. Richer characteristic information about the target can be obtained through the linear frequency modulation wavelet atomic transform, providing the classifier with better effective data support, which helps improve the target-identification accuracy of the system;
2. Joint feature extraction and classification can adjust parameters in real time according to different target features and recognition environments, improving the recognition performance and noise immunity of the recognition system.
The above description is intended to be illustrative, and not restrictive, and it will be understood by those skilled in the art that many modifications, variations, or equivalents may be made without departing from the spirit and scope of the present disclosure as defined in the following claims.
Claims (10)
1. A target identification method based on a linear frequency modulation wavelet atomic network is characterized by comprising the following steps:
step S1: training a linear frequency modulation wavelet atomic network offline; and
step S2: classifying the targets by using the linear frequency modulation wavelet atomic network obtained in the step S1 and outputting an identification result;
wherein step S1 comprises substep S11: inputting a training sample vector x_n, and randomly initializing the linear frequency modulation wavelet atomic parameter set β_k = {u_k, ξ_k, s_k, c_k} and the classifier weights of the linear frequency modulation wavelet atomic network;
wherein n = 1,2,...,N and N is the number of samples; k = 1,2,...,K and K is the number of input-layer atoms; h = 1,2,...,H and H is the number of hidden-layer nodes; m = 1,2,...,M and M is the number of output-layer nodes; β_k = {u_k, ξ_k, s_k, c_k} includes a time-shift parameter u_k, a frequency-shift parameter ξ_k, a scale parameter s_k, and a linear frequency c_k; the classifier weights comprise the connection weights between the input layer and the hidden layer and the connection weights between the hidden layer and the output layer.
2. The chirp wavelet atomic network-based object identification method of claim 1, wherein after substep S11, step S1 further comprises:
substep S12: using the data sequence t_l and the linear frequency modulation wavelet atomic parameter set β_k, calculating the linear frequency modulation wavelet atoms according to Equation 1 and Equation 2,
and forming therefrom the linear frequency modulation wavelet atomic vector g_k;
wherein the linear frequency modulation wavelet atomic parameter set β_k can be obtained from substep S11;
wherein t_l is the data sequence, with l = 1,2,...,L, and L is the sample length; g(t) is the basic window function of the linear frequency modulation wavelet transform:
3. The chirp wavelet atomic network-based object identification method of claim 2, wherein after substep S12, step S1 further comprises:
substep S13: using the linear frequency modulation wavelet atomic vector g_k obtained in substep S12 and the training sample vector x_n input in substep S11, calculating, according to Equation 3, the characteristic value φ_nk of every sample at each atomic node:
4. A method for identifying an object based on a chirp wavelet atomic network as claimed in claim 3, wherein after substep S13, step S1 further comprises:
substep S14: inputting the characteristic value φ_nk of the sample obtained in substep S13 to the classifier of the neural network, which calculates, through the activation functions and according to Equations 4-7, the hidden-layer output o_nh and the output-layer result y_nm:
wherein n = 1,2,...,N and N is the number of samples; h = 1,2,...,H and H is the number of hidden-layer nodes; m = 1,2,...,M and M is the number of output-layer nodes; the calculation formulas of the intermediate variables are expressed as:
5. the chirp wavelet atomic network-based object identification method of claim 4, wherein after substep S14, step S1 further comprises substep S15:
substep S15, comprising substeps S15A-S15B;
wherein, in substep S15A, the desired output d_nm of the sample is set to 1 or 0, where d_nm = 1 indicates the sample belongs to the preset target and d_nm = 0 indicates it does not;
substep S15B: from the desired output d_nm and the actual output y_nm, calculating the mean square error E using Equation 8:
6. the chirp wavelet atomic network-based object identification method as claimed in claim 5, wherein after substep S15, step S1 further comprises substeps S16-S18,
and a substep S16 of judging whether the expected effect is achieved: comparing the mean square error E with a preset threshold; if the mean square error E is smaller than the preset threshold, ending the learning process and going to substep S18; if the mean square error E is greater than or equal to the preset threshold, continuing with the following substeps;
substep S17, weight adjustment, adjustment of linear frequency modulation wavelet atomic parameters, and iteration loop:
7. The method for identifying an object based on a chirp wavelet atomic network as claimed in claim 6, wherein the substep S17 comprises substeps S17A-S17C;
substep S17A: reversely adjusting, by means of the Levenberg-Marquardt algorithm, the connection weights between the input layer and the hidden layer and the connection weights between the hidden layer and the output layer;
Substep S17B: adjusting, according to the gradient descent algorithm and using Equation 9, the atomic node parameter set β_k = {u_k, ξ_k, s_k, c_k}:
wherein n is the iteration number, η is the learning rate, ∂ is the partial-derivative operator, and E is the mean square error;
substep S17C: returning to the substep S12, and performing the next training period;
wherein, in the next training period, the linear frequency modulation wavelet atomic parameter set β_k used in substep S12 is obtained from substep S17B.
8. The method for identifying objects based on a chirp wavelet atomic network as claimed in claim 7, wherein the step S2 comprises the sub-steps of:
substep S21: inputting data and preprocessing the data;
the substep S21 comprises substeps S21A-S21C; the execution sequence of the substeps S21A-S21C is arbitrary;
substep S21A:
inputting a training sample vector x_n and carrying out data preprocessing; wherein n = 1,2,...,N and N is the number of samples;
substep S21B:
inputting the linear frequency modulation wavelet atomic parameter set β_k = {u_k, ξ_k, s_k, c_k} of the linear frequency modulation wavelet atomic network obtained in step S1, wherein k = 1,2,...,K and K is the number of input-layer atoms; β_k = {u_k, ξ_k, s_k, c_k} includes a time-shift parameter u_k, a frequency-shift parameter ξ_k, a scale parameter s_k, and a linear frequency c_k;
Substep S21C:
inputting the classifier weights of the linear frequency modulation wavelet atomic network obtained in step S1, wherein the classifier weights comprise the connection weights between the input layer and the hidden layer and the connection weights between the hidden layer and the output layer.
9. The chirp wavelet atomic network-based object identification method of claim 8, wherein after the sub-step S21, the step S2 further comprises the sub-steps of:
substep S22: using the data sequence t_l and the linear frequency modulation wavelet atomic parameter set β_k input in substep S21, calculating the linear frequency modulation wavelet atoms according to Equation 1 and Equation 2,
and forming therefrom the linear frequency modulation wavelet atomic vector g_k;
wherein t_l is the data sequence, with l = 1,2,...,L, and L is the sample length; g(t) is the basic window function of the linear frequency modulation wavelet transform:
10. The chirp wavelet atomic network-based object identification method of claim 9, wherein after substep S22, step S2 further comprises the substeps of:
substep S23: using the linear frequency modulation wavelet atomic vector g_k obtained in substep S22 and the training sample vector x_n input in substep S21, calculating, according to Equation 3, the characteristic value φ_nk of every sample at each atomic node:
Substep S24: inputting the characteristic value φ_nk of the sample obtained in substep S23 to the classifier of the neural network, which calculates, through the activation functions and according to Equations 4-7, the hidden-layer output o_nh and the output-layer result y_nm; according to the output result y_nm of the output layer, determining whether the sample belongs to the p-th class target:
Substep S25: and outputting a classification result to finish the identification.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910108831.3A CN109784311B (en) | 2019-02-03 | 2019-02-03 | Target identification method based on linear frequency modulation wavelet atomic network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910108831.3A CN109784311B (en) | 2019-02-03 | 2019-02-03 | Target identification method based on linear frequency modulation wavelet atomic network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109784311A CN109784311A (en) | 2019-05-21 |
CN109784311B true CN109784311B (en) | 2023-04-18 |
Family
ID=66504235
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910108831.3A Active CN109784311B (en) | 2019-02-03 | 2019-02-03 | Target identification method based on linear frequency modulation wavelet atomic network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109784311B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110297238B (en) * | 2019-06-24 | 2023-03-31 | 山东大学 | Joint feature extraction and classification method based on self-adaptive chirp wavelet filtering |
CN113238200B (en) * | 2021-04-20 | 2024-09-17 | 上海志良电子科技有限公司 | Classification method of radar linear frequency modulation signals based on validity verification |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104408481A (en) * | 2014-12-05 | 2015-03-11 | 西安电子科技大学 | Deep wavelet neural network-based polarimetric SAR (synthetic aperture radar) image classification method |
CN104792522A (en) * | 2015-04-10 | 2015-07-22 | 北京工业大学 | Intelligent gear defect analysis method based on fractional wavelet transform and BP neutral network |
CN107194433A (en) * | 2017-06-14 | 2017-09-22 | 电子科技大学 | A kind of Radar range profile's target identification method based on depth autoencoder network |
CN109188414A (en) * | 2018-09-12 | 2019-01-11 | 北京工业大学 | A kind of gesture motion detection method based on millimetre-wave radar |
-
2019
- 2019-02-03 CN CN201910108831.3A patent/CN109784311B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104408481A (en) * | 2014-12-05 | 2015-03-11 | 西安电子科技大学 | Deep wavelet neural network-based polarimetric SAR (synthetic aperture radar) image classification method |
CN104792522A (en) * | 2015-04-10 | 2015-07-22 | 北京工业大学 | Intelligent gear defect analysis method based on fractional wavelet transform and BP neutral network |
CN107194433A (en) * | 2017-06-14 | 2017-09-22 | 电子科技大学 | A kind of Radar range profile's target identification method based on depth autoencoder network |
CN109188414A (en) * | 2018-09-12 | 2019-01-11 | 北京工业大学 | A kind of gesture motion detection method based on millimetre-wave radar |
Non-Patent Citations (2)
Title |
---|
Hervé Glotin, "Fast Chirplet Transform Injects Priors in Deep Learning of Animal Calls and Speech", ICLR 2017, full text. *
Xie Jiangjian et al., "Bird species recognition method based on Chirplet spectrogram features and deep learning", Journal of Beijing Forestry University, 2018, vol. 40, full text. *
Also Published As
Publication number | Publication date |
---|---|
CN109784311A (en) | 2019-05-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109389058B (en) | Sea clutter and noise signal classification method and system | |
CN108256436B (en) | Radar HRRP target identification method based on joint classification | |
CN102608589B (en) | Radar target identification method on basis of biomimetic pattern identification theory | |
CN111835444B (en) | Wireless channel scene identification method and system | |
CN109784311B (en) | Target identification method based on linear frequency modulation wavelet atomic network | |
CN109977724A (en) | A kind of Underwater Target Classification method | |
CN112014801A (en) | Composite interference identification method based on SPWVD and improved AlexNet | |
CN114117912A (en) | Sea clutter modeling and inhibiting method under data model dual drive | |
CN110045336B (en) | Convolutional neural network-based radar interference identification method and device | |
Ramezanpour et al. | Two-stage beamforming for rejecting interferences using deep neural networks | |
CN115372925A (en) | Array robust adaptive beam forming method based on deep learning | |
Zheng et al. | Time-frequency feature-based underwater target detection with deep neural network in shallow sea | |
Turhan‐Sayan et al. | Electromagnetic target classification using time–frequency analysis and neural networks | |
Zhu et al. | Gabor filter approach to joint feature extraction and target recognition | |
Li et al. | Aerospace target identification-Comparison between the matching score approach and the neural network approach | |
Ali et al. | An improved gain vector to enhance convergence characteristics of recursive least squares algorithm | |
CN109784318A (en) | The recognition methods of Link16 data-link signal neural network based | |
CN110297238B (en) | Joint feature extraction and classification method based on self-adaptive chirp wavelet filtering | |
CN112346056B (en) | Resolution characteristic fusion extraction method and identification method of multi-pulse radar signals | |
Lee et al. | ISAR autofocus by minimizing entropy of eigenimages | |
Ye et al. | Research on machine learning algorithm based on contour matching modal matrix | |
Liu et al. | A direction of arrival estimation method based on deep learning | |
CN114994657B (en) | Radar repetition frequency variation steady target identification method based on repetition frequency self-adaptive network | |
Li et al. | Chirplet-atoms network approach to high-resolution range profiles automatic target recognition | |
CN116679278B (en) | Target radar detection method under strong ground clutter interference |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||