CN109784311A - Target identification method based on chirplet networks of atoms - Google Patents


Info

Publication number
CN109784311A
Authority
CN
China
Prior art keywords
chirplet
atom network
layer
Prior art date
Legal status
Granted
Application number
CN201910108831.3A
Other languages
Chinese (zh)
Other versions
CN109784311B (en)
Inventor
Guo Zunhua (郭尊华)
Li Yifei (李怡霏)
Current Assignee
Shandong University
Original Assignee
Shandong University
Priority date
Filing date
Publication date
Application filed by Shandong University
Priority to CN201910108831.3A
Publication of CN109784311A
Application granted
Publication of CN109784311B
Active legal status
Anticipated expiration


Abstract

This invention provides a target identification method based on a chirplet atom network, comprising: step S1: training the chirplet atom network offline; and step S2: classifying targets and outputting recognition results with the chirplet atom network obtained in step S1. The invention is built on a three-layer feedforward neural network structure: the input layer uses chirplet atoms as feature-extraction basis functions and realizes feature extraction through the chirplet atom transform, while the hidden layer and output layer form a neural network classifier. The invention has the following advantages: 1. the chirplet atom transform obtains richer target feature information, providing the classifier with better and more useful data support and improving the accuracy of target recognition; 2. feature extraction and classification are joint, so parameters can be adjusted in real time to different target features and recognition environments, improving the recognition performance and anti-noise performance of the recognition system.

Description

Target identification method based on a chirplet atom network
Technical field
This invention relates to the fields of signal processing and pattern recognition, and more particularly to a target identification method based on a chirplet atom network.
Background art
With the rapid development of modern signal processing technology and the urgent needs of practical applications, target recognition plays an increasingly important role in modern technological warfare. At present, recognition techniques based on electromagnetic-scattering imagery are widely applied to identifying all kinds of targets at sea, on land and in the air, and constitute one of the research hotspots in the field of target recognition.
An automatic target recognition system generally comprises two independent stages: feature extraction and classification decision. For feature-extraction methods, Guo Zunhua, Li Da and Zhang Baiyan, in "Radar high-resolution range profile target recognition" (Systems Engineering and Electronics, 2013, 35(1): 53-60), note that modern signal processing techniques are generally used to extract effective and reliable electromagnetic-scattering image features, such as frequency-domain features (Fourier-transform amplitude, power spectrum, bispectrum) and time-frequency features (wavelet transform, Gabor transform). For classification methods, pattern recognition methods such as neural networks, support vector machines and deep learning are frequently used as classifiers, as described in the following papers: Duin R P W and Pekalska E, "The Science of Pattern Recognition. Achievements and Perspectives" (Studies in Computational Intelligence, 2007, 63: 221-259); Yin Heyi and Guo Zunhua, "One-dimensional convolutional neural networks for radar high-resolution range profile recognition" (Telecommunication Engineering, 2018, 58(10): 1121-1126); and Feng B, Chen B and Liu H W, "Radar HRRP Target Recognition with Deep Networks" (Pattern Recognition, 2017, 61: 379-393).
However, the biggest defect of the prior art is that the feature-extraction and target-classification stages run independently. Such a recognition method can only perform feature extraction and classification on the target separately; given the complexity of target features and the uncertainty of the recognition process, the feature-extraction and classification parts of the recognition system can hardly work in coordination. Because their parameters are not jointly optimal, the recognition performance cannot reach its best when the classifier classifies on the extracted feature information, which degrades the recognition rate. This is the first defect of the prior art.
In addition, the prior art has a second defect: for targets with complicated scattering properties, scattering-point occlusion and moving scattering points on the target disrupt the stable scattering model, and traditional methods can hardly extract effective features for such targets. For target recognition against a complex environmental background, one must also consider how to guarantee the recognition rate under noisy conditions as far as possible, that is, how to obtain richer feature information that reflects the essence of the target and improves the anti-noise performance of the recognition system.
Owing to these two defects of the prior art, a misrecognized target can cause a major accident and huge property losses, and in military warfare a misjudgment can cause heavy casualties.
Therefore, how to recognize targets in complex environments quickly and accurately is one of the great challenges facing this research field, and a major problem that urgently needs to be studied and solved.
To address this problem, if the recognition system could simultaneously obtain feature information such as the target's electromagnetic scattering centers, geometry, size, time shift and frequency shift, and could select suitable classifier parameters for different targets and environmental conditions, then a good target recognition effect could be expected.
However, among the common time-frequency transforms used for feature extraction, the basic Fourier transform only yields the frequency-domain characteristics of the target; the Gabor transform yields time and frequency features simultaneously; and the wavelet transform contains time-domain, frequency-domain and scale information of the target, yet still cannot describe complicated scattering points well.
Through a literature survey, the applicant learned of the chirplet atom transform, an extension of the wavelet transform method proposed by Mann S and Haykin S in "The Chirplet Transform: Physical Considerations" (IEEE Transactions on Signal Processing, 1995, 43(11): 2745-2761). It adds a linear chirp-rate parameter on top of the wavelet transform, yielding transform coefficients over a series of different time-frequency-plane slopes, and can therefore obtain more complete feature information about the target.
However, this introduces a new problem: the richer the extracted feature information, the greater the challenge posed to the recognition system when classifying those features, so choosing a suitable classifier becomes crucial to the success of recognition.
Summary of the invention
One object of this invention is to realize feature extraction and target recognition for targets with complicated scattering properties.
To achieve this object, this invention uses the chirplet atom transform as the feature-extraction function of a neural network, extracting the time-frequency-scale-chirp-rate space features of the target.
The applicant considered that a joint time-frequency description reflects target features more accurately, and that the key to time-frequency feature extraction is determining an optimal set of basis functions for signal classification. Since the chirplet transform generalizes the three-dimensional time-frequency-scale space to the four-dimensional time-frequency-scale-chirp-rate space, chirplet atoms can contain richer feature information than wavelet or Gabor atoms, their transform parameters correlate with the physical features of the target, and their time-frequency localization is superior, giving them an advantage in extracting complex target features. Therefore, this invention uses the chirplet atom transform as the feature-extraction function of a neural network to extract the time-frequency-scale-chirp-rate space features of the target.
Another object of this invention is to guarantee the recognition rate under noisy conditions as far as possible, that is, to improve the anti-noise performance and recognition effect of the recognition system.
To achieve this object, this invention coordinately adjusts the chirplet atom parameters and the classifier parameters, jointly realizing feature extraction and target classification.
The applicant considered that, when the features are classified, the parameters of the classification algorithm also affect target recognition performance to some extent. Therefore, while the classifier is being trained, this invention also optimizes the feature parameters, realizing synchronized training; the parameters can be coordinated for different target data, which should achieve better recognition and anti-noise performance than a recognition system in which feature extraction and target classification are independent of each other. This invention therefore organically combines chirplet atoms with a multilayer feedforward neural network, which offers strong adaptive learning ability, distributed storage and parallel cooperative processing, and proposes a chirplet atom network method that coordinately adjusts the chirplet atom parameters and the classifier parameters to realize joint feature extraction and classification of the target.
To achieve the above objects, this invention proposes a target identification method based on a chirplet atom network, using the chirplet atom network to perform target identification. The chirplet atom network is built on the three-layer feedforward neural network structure shown in Fig. 2: the input layer uses chirplet atoms as feature-extraction basis functions and realizes feature extraction through the chirplet atom transform, while the hidden layer and output layer form a neural network classifier.
The target identification method based on a chirplet atom network proposed by this invention comprises:
Step S1: training the chirplet atom network offline; and
Step S2: classifying targets and outputting recognition results with the chirplet atom network obtained in step S1;
Wherein step S1 includes sub-step S11: input the training sample vectors xn, and randomly initialize the chirplet atom parameter sets βk = {uk, ξk, sk, ck} of the chirplet atom network and the classifier weights wkh and whm;
where n = 1, 2, ..., N indexes samples, k = 1, 2, ..., K indexes input-layer atoms, h = 1, 2, ..., H indexes hidden-layer nodes and m = 1, 2, ..., M indexes output-layer nodes; βk = {uk, ξk, sk, ck} comprises the time-shift parameter uk, the frequency-shift parameter ξk, the scale parameter sk and the chirp rate ck; wkh are the connection weights between the input layer and the hidden layer, and whm the connection weights between the hidden layer and the output layer.
Sub-step S12: according to formulas 1 and 2, compute the chirplet atoms from the data sequence tl and the chirplet atom parameter set βk, and form the chirplet atom vectors gk accordingly;
wherein the chirplet atom parameter set βk is obtained from sub-step S11;
wherein tl is the data sequence, l = 1, 2, ..., L being the sample length, j is the imaginary unit, and formula 1 is the chirplet atom expression:
gk(tl) = (1/√sk) g((tl − uk)/sk) exp{j[ξk(tl − uk) + (ck/2)(tl − uk)²]}  (formula 1)
wherein g(t) is the basic window function of the chirplet transform, generally a Gaussian function:
g(t) = 2^(1/4) e^(−πt²)  (formula 2)
wherein the chirplet atom vector is gk = [gk(t1), gk(t2), ..., gk(tL)]T, superscript T denoting matrix transposition and e being the natural constant.
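For concreteness, the chirplet atom of formulas 1-2 can be sketched in numpy. The 2^(1/4)-normalized Gaussian window follows the usual Gabor/chirplet-atom convention, and all parameter values here are illustrative, not taken from the patent:

```python
import numpy as np

def chirplet_atom(t, u, xi, s, c):
    """Sampled chirplet atom gk: a Gaussian window g(t) time-shifted to u,
    dilated by scale s, modulated to frequency xi, with linear chirp rate c."""
    tau = (t - u) / s
    window = 2 ** 0.25 * np.exp(-np.pi * tau ** 2)       # basic window function g(t)
    phase = xi * (t - u) + 0.5 * c * (t - u) ** 2        # linear-FM (chirp) phase
    return window * np.exp(1j * phase) / np.sqrt(s)

L = 64
t_l = np.arange(L, dtype=float)                          # data sequence tl
g_k = chirplet_atom(t_l, u=32.0, xi=0.8, s=8.0, c=0.01)  # one atom vector gk
print(g_k.shape, np.argmax(np.abs(g_k)))                 # (64,) 32
```

The envelope peaks at the time-shift parameter u, while xi and c set the position and slope of the atom on the time-frequency plane.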
Sub-step S13: according to formula 3, using the chirplet atom vectors gk obtained in sub-step S12 and the training sample vectors xn input in sub-step S11, compute the feature value φnk of every sample at every atom node:
φnk = |gkT xn|  (formula 3)
Sub-step S14: input the sample feature values φnk obtained from sub-step S13 into the neural network classifier and, through the activation function, compute the hidden-layer outputs onh and the output-layer outputs ynm according to formulas 4-7:
netnh = Σk wkh φnk  (formula 4)
onh = f(netnh)  (formula 5)
netnm = Σh whm onh  (formula 6)
ynm = f(netnm)  (formula 7)
wherein f(·) is the Sigmoid activation function; n = 1, 2, ..., N indexes samples, h = 1, 2, ..., H the hidden-layer nodes and m = 1, 2, ..., M the output-layer nodes; netnh and netnm are the intermediate variables, and wkh and whm are the connection weights of sub-step S11.
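Formulas 3-7 amount to a single forward pass: the input layer takes the inner-product magnitudes as features and two Sigmoid layers classify them. A minimal numpy sketch of this pass, with array shapes, weight names and random values chosen here purely for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, G, w_in, w_out):
    """x: (L,) sample; G: (K, L) atom vectors; w_in: (K, H); w_out: (H, M)."""
    phi = np.abs(G @ x)          # formula 3: feature value per atom node
    o = sigmoid(phi @ w_in)      # formulas 4-5: hidden-layer outputs onh
    y = sigmoid(o @ w_out)       # formulas 6-7: output-layer outputs ynm
    return y

rng = np.random.default_rng(0)
L_, K, H, M = 64, 8, 6, 1
y = forward(rng.standard_normal(L_),
            rng.standard_normal((K, L_)),
            0.1 * rng.standard_normal((K, H)),
            0.1 * rng.standard_normal((H, M)))
print(y.shape, bool(np.all((0 < y) & (y < 1))))  # (1,) True
```

Because both layers use the Sigmoid, every output lies strictly between 0 and 1, which is what makes the 0/1 desired outputs of sub-step S15 usable as regression targets.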
Sub-step S15, comprising sub-steps S15A-S15B;
wherein sub-step S15A sets the desired output dnm of each sample to 1 or 0: dnm = 1 means the sample belongs to the preset target, and dnm = 0 means it does not;
sub-step S15B: from the desired outputs dnm and the actual outputs ynm, compute the mean square error E using formula 8:
E = (1/2) Σn Σm (dnm − ynm)²  (formula 8)
Sub-step S16: judge whether the desired result has been reached by comparing the mean square error E with a preset threshold; if E is less than the threshold, end the learning process and go to sub-step S18; if E is greater than or equal to the threshold, continue with the following sub-steps;
Sub-step S17: adjust the weights and the chirplet atom parameters, then iterate;
Sub-step S18: save the chirplet atom parameters βk = {uk, ξk, sk, ck}, the connection weights wkh between input layer and hidden layer and the connection weights whm between hidden layer and output layer, completing the training.
Wherein sub-step S17 comprises sub-steps S17A-S17C;
sub-step S17A: reversely adjust the connection weights wkh between input layer and hidden layer and the connection weights whm between hidden layer and output layer using the Levenberg-Marquardt algorithm;
sub-step S17B: adjust the atom node parameter sets βk = {uk, ξk, sk, ck} by gradient descent using formula 9:
βk(n+1) = βk(n) − η ∂E/∂βk  (formula 9)
wherein n is the iteration number, η is the learning rate, ∂ is the partial-derivative operator and E is the mean square error;
Sub-step S17C: iterate, returning to sub-step S12 for the next training cycle;
wherein the chirplet atom parameter set βk used by sub-step S12 in the next training cycle is the one obtained from sub-step S17B.
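A toy end-to-end version of the S11-S18 loop can be sketched as follows. To keep the sketch self-contained, plain gradient descent with finite-difference gradients stands in for the Levenberg-Marquardt step of S17A, and the data are synthetic chirps rather than radar echoes; every constant, shape and threshold is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
L, K, H, M, N = 32, 4, 5, 1, 20
t = np.arange(L, dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def atom(u, xi, s, c):
    s = abs(s) + 1e-6                      # keep the scale positive during descent
    tau = (t - u) / s
    return (2 ** 0.25 * np.exp(-np.pi * tau ** 2) / np.sqrt(s)
            * np.exp(1j * (xi * (t - u) + 0.5 * c * (t - u) ** 2)))

def loss(theta, X, D):
    """Mean square error E (formula 8) as a function of all parameters."""
    betas = theta[:4 * K].reshape(K, 4)
    w_in = theta[4 * K:4 * K + K * H].reshape(K, H)
    w_out = theta[4 * K + K * H:].reshape(H, M)
    G = np.array([atom(*b) for b in betas])          # S12: atom vectors gk
    phi = np.abs(X @ G.T)                            # S13: features (formula 3)
    Y = sigmoid(sigmoid(phi @ w_in) @ w_out)         # S14: formulas 4-7
    return 0.5 * np.mean((D - Y) ** 2)

# synthetic two-class data: noisy chirps (class 1) vs. pure noise (class 0)
X = np.vstack([np.cos(0.02 * t ** 2) + 0.2 * rng.standard_normal((N // 2, L)),
               rng.standard_normal((N // 2, L))])
D = np.vstack([np.ones((N // 2, M)), np.zeros((N // 2, M))])   # desired dnm

theta = np.concatenate([                             # S11: random initialization
    np.column_stack([rng.uniform(4, 28, K), rng.uniform(0.1, 1.5, K),
                     rng.uniform(2, 8, K), rng.uniform(-0.05, 0.05, K)]).ravel(),
    0.1 * rng.standard_normal(K * H), 0.1 * rng.standard_normal(H * M)])

eta, eps = 0.5, 1e-5
for _ in range(60):                                  # S12-S17 training loop
    E = loss(theta, X, D)
    if E < 1e-3:                                     # S16: threshold test
        break
    grad = np.zeros_like(theta)                      # finite-difference gradient
    for i in range(theta.size):
        step = np.zeros_like(theta); step[i] = eps
        grad[i] = (loss(theta + step, X, D) - loss(theta - step, X, D)) / (2 * eps)
    theta -= eta * grad                              # formula 9 on every parameter
print(round(float(E), 4))
```

The key point the sketch illustrates is the joint training: a single parameter vector holds both the atom parameters βk and the classifier weights, so one gradient step adjusts feature extraction and classification together.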
In the target identification method based on a chirplet atom network, step S2 classifies targets and outputs recognition results using the chirplet atom network obtained in S1; wherein step S2 comprises the following sub-steps:
Sub-step S21: input data and perform data preprocessing;
sub-step S21 comprises sub-steps S21A-S21C, which may be executed in any order;
sub-step S21A:
input the sample vectors xn to be recognized and perform data preprocessing, wherein n = 1, 2, ..., N indexes samples;
Sub-step S21B:
input the chirplet atom parameter sets βk = {uk, ξk, sk, ck} of the chirplet atom network obtained in step S1, wherein k = 1, 2, ..., K indexes input-layer atoms and βk = {uk, ξk, sk, ck} comprises the time-shift parameter uk, the frequency-shift parameter ξk, the scale parameter sk and the chirp rate ck;
sub-step S21C:
input the classifier weights of the chirplet atom network obtained in step S1, wherein the classifier weights comprise the connection weights wkh between input layer and hidden layer and the connection weights whm between hidden layer and output layer.
Sub-step S22: according to formulas 1 and 2, compute the chirplet atoms from the data sequence tl and the chirplet atom parameter sets βk input in sub-step S21, and form the chirplet atom vectors gk accordingly;
wherein tl is the data sequence, l = 1, 2, ..., L being the sample length, and g(t) is the basic window function of the chirplet transform;
wherein the chirplet atom vector is gk = [gk(t1), gk(t2), ..., gk(tL)]T, superscript T denoting matrix transposition and e being the natural constant.
Sub-step S23: according to formula 3, using the chirplet atom vectors gk obtained in sub-step S22 and the sample vectors xn input in sub-step S21, compute the feature value φnk of every sample at every atom node;
Sub-step S24: input the sample feature values φnk obtained from sub-step S23 into the neural network classifier and, through the activation function, compute the hidden-layer outputs onh and the output-layer outputs ynm according to formulas 4-7, netnh and netnm being the intermediate variables; judge from the output-layer results ynm whether each sample belongs to the p-th class of target;
Sub-step S25: output the classification results, completing the recognition.
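The whole of step S2 reduces to evaluating the saved network on a new sample. A minimal sketch with illustrative shapes and a hypothetical 0.5 decision threshold (the patent only states that the output ynm decides class membership, not the exact decision rule):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def recognize(x, G, w_in, w_out, threshold=0.5):
    """Step S2: extract features with the saved chirplet atoms (S22-S23),
    run the trained classifier (S24) and output the decision (S25)."""
    phi = np.abs(G @ x)                          # formula 3
    y = sigmoid(sigmoid(phi @ w_in) @ w_out)     # formulas 4-7
    return bool(y.max() >= threshold)            # belongs to the target class?

rng = np.random.default_rng(2)
G = rng.standard_normal((4, 32))                 # stand-ins for trained parameters
decision = recognize(rng.standard_normal(32), G,
                     rng.standard_normal((4, 5)), rng.standard_normal((5, 1)))
print(decision)
```

Note that online recognition performs no parameter updates; it reuses the βk, wkh and whm saved in sub-step S18 unchanged.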
The beneficial effects of this invention are:
Compared with the prior art, this invention has the following notable advantages:
1. The chirplet atom transform obtains richer target feature information, providing the classifier with better and more useful data support, which benefits the accuracy of target recognition;
2. Feature extraction and classification are joint, so parameters can be adjusted in real time to different target features and recognition environments, improving the recognition performance and anti-noise performance of the recognition system.
Description of the drawings
Fig. 1 is the implementation diagram of the target identification technique with joint feature extraction and classification;
Fig. 2 is the structure diagram of the chirplet atom network;
Fig. 3A shows the recognition rate of a Gabor atom network on radar data at different aspect angles under different signal-to-noise ratio conditions;
Fig. 3B shows the recognition rate of the chirplet atom network on radar data at different aspect angles under different signal-to-noise ratio conditions.
Reference numerals: 1 - input vector (sample data set); 2 - vector product; 3 - input layer; 4 - scalar product; 5 - hidden layer; 6 - scalar product; 7 - output layer; 8 - output (classification results); 9 - Sigmoid activation function; 10 - connection weights between hidden layer and output layer; 11 - Sigmoid activation function; 12 - connection weights between input layer and hidden layer; 13 - feature values; 14 - chirplet atom vector set.
Specific embodiment
To make the technical means, creative features, objects and effects of this invention easy to understand, this invention is further described below with reference to the drawings and specific embodiments.
Through extensive screening and simulation tests, the applicant considered that neural networks offer strong adaptive learning ability, distributed storage and parallel cooperative processing, and show clear advantages in learning complex data; a neural network can therefore be adopted as the classifier to classify complex target features.
As for improving the anti-noise performance of the recognition system, the applicant considered on the one hand that the target data need preprocessing to reduce noise as much as possible;
on the other hand, the recognition system should be able to adapt itself to different environmental conditions. Following methods such as the wavelet atom network of Alexandridis A K and Zapranis A D, "Wavelet Neural Networks: a Practical Guide" (Neural Networks, 2013, 42: 1-27), and the Gabor atom network of Shi Y and Zhang X D, "A Gabor Atom Network for Signal Classification with Application in Radar Target Recognition" (IEEE Transactions on Signal Processing, 2001, 49(12): 2994-3004), the applicant expected that a recognition-system structure with joint feature extraction and target classification, realizing coordinated adjustment of the feature-extraction and classification parts, would enhance immunity to complex environments.
Considering all these factors, the applicant finally settled on a method that combines the chirplet atom transform with a neural network, extracting the target's effective information in the four-dimensional time-frequency space while synchronously realizing target classification.
For the two technical problems of the target recognition field, namely how to extract effective target features and how to design a reliable classification algorithm, this invention proposes a method that realizes joint feature extraction and target classification, the chirplet atom network, in order to improve the recognition rate and anti-noise performance of the target recognition system and to reduce the heavy losses caused by misrecognition when the technique is applied in real life.
This invention uses the chirplet atom transform to extract the time-frequency-scale-chirp-rate space features of the echo signal and coordinately adjusts the chirplet atom parameters and the classifier parameters, achieving a higher recognition rate and better anti-noise performance.
This invention proposes a target identification method based on a chirplet atom network, which uses the chirplet atom network to perform target identification. The chirplet atom network is built on the three-layer feedforward neural network shown in Fig. 2, the structure diagram of the chirplet atom network, in which: 1 - input vector (sample data set); 2 - vector product; 3 - input layer; 4 - scalar product; 5 - hidden layer; 6 - scalar product; 7 - output layer; 8 - output (classification results); 9 - Sigmoid activation function; 10 - connection weights between hidden layer and output layer; 11 - Sigmoid activation function; 12 - connection weights between input layer and hidden layer; 13 - feature values; 14 - chirplet atom vector set.
As shown in Fig. 1, this invention proposes a target identification method based on a chirplet atom network, comprising:
Step S1, offline training:
train the chirplet atom network offline; and
Step S2, online recognition:
classify targets and output recognition results with the chirplet atom network obtained in S1.
With the above steps, P chirplet atom networks implemented in parallel can obtain the characteristic parameters of P classes of targets simultaneously.
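The P parallel networks can be combined into a single decision by taking the class whose network responds most strongly; this one-vs-rest rule is an assumption for illustration, since the passage only states that the P networks run in parallel:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify(x, networks):
    """networks: one (G, w_in, w_out) triple per target class p = 1..P.
    Each class-specific chirplet atom network scores the sample; the
    highest-scoring class wins (one-vs-rest decision, an assumed rule)."""
    scores = [float(sigmoid(sigmoid(np.abs(G @ x) @ w_in) @ w_out).max())
              for G, w_in, w_out in networks]
    return int(np.argmax(scores))

rng = np.random.default_rng(3)
P, L = 3, 32
nets = [(rng.standard_normal((4, L)), rng.standard_normal((4, 5)),
         rng.standard_normal((5, 1))) for _ in range(P)]  # stand-in trained nets
winner = classify(rng.standard_normal(L), nets)
print(winner)
```

Because each class has its own atom parameters βk, the P networks can extract different time-frequency features per class, which the argmax step then compares on a common 0-1 output scale.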
For p = 1, 2, ..., P, the offline training step S1 of the chirplet atom network for the p-th class of target includes the following sub-steps:
Sub-step S11, network initialization:
As shown by the input vector (sample data set) 1 of Fig. 2, input the training sample vectors xn, and randomly initialize the chirplet atom parameter sets βk = {uk, ξk, sk, ck} of the chirplet atom network and the classifier weights;
where n = 1, 2, ..., N indexes samples, k = 1, 2, ..., K indexes input-layer atoms, h = 1, 2, ..., H indexes hidden-layer nodes and m = 1, 2, ..., M indexes output-layer nodes;
the chirplet atom parameter set βk = {uk, ξk, sk, ck} comprises the time-shift parameter uk, the frequency-shift parameter ξk, the scale parameter sk and the chirp rate ck; uk represents the time center of the signal, ξk its center frequency, sk the dilation of the chirplet atom, and ck the slope of the signal on the time-frequency plane;
the classifier weights of the chirplet atom network are the connection weights of each layer of the network, including the connection weights wkh between input layer and hidden layer and the connection weights whm between hidden layer and output layer.
Sub-step S12, compute the chirplet atoms:
According to formulas 1 and 2, compute the chirplet atoms from the data sequence tl and the chirplet atom parameter sets βk, and form the chirplet atom vectors gk accordingly.
βk may come from either of two places: before the iterative loop of sub-step S17C has started, βk is the set obtained from sub-step S11; once the loop has started, βk is the set obtained from sub-step S17B.
Formula 1 is the chirplet atom expression, in which tl is the data sequence, l = 1, 2, ..., L is the sample data length and j is the imaginary unit:
gk(tl) = (1/√sk) g((tl − uk)/sk) exp{j[ξk(tl − uk) + (ck/2)(tl − uk)²]}  (formula 1)
g(t) is the basic window function of the chirplet transform; a Gaussian function is generally used as the basic window function of the chirplet atom transform, with the expression:
g(t) = 2^(1/4) e^(−πt²)  (formula 2)
The chirplet atom vector is gk = [gk(t1), gk(t2), ..., gk(tL)]T; the chirplet atoms serve as the feature-extraction basis functions of input layer 3 in Fig. 2, and input layer 3 realizes feature extraction through the chirplet atom transform.
Here superscript T denotes matrix transposition and e is the natural constant.
Sub-step S13, compute the feature values of input layer 3 of Fig. 2, realizing feature extraction:
According to formula 3, using the chirplet atom vectors gk obtained in sub-step S12 and the training sample vectors xn input in sub-step S11, compute the feature value φnk of every sample at every atom node:
φnk = |gkT xn|  (formula 3)
That is, the feature value φnk of a sample is the absolute value of the inner product of the chirplet atom vector gk = [gk(t1), gk(t2), ..., gk(tL)]T and the training sample vector xn = [xn(t1), xn(t2), ..., xn(tL)]T; this performs the chirplet atom transform of the sample vector xn with the chirplet atom vector gk.
Here superscript T denotes matrix transposition and e is the natural constant.
Sub-step S14, compute the output results onh of hidden layer 5 of Fig. 2 and ynm of output layer 7 of Fig. 2:
Input the sample feature values φnk obtained from sub-step S13 into the neural network classifier and, through the activation functions of the hidden-layer 5 nodes (11) and the output-layer 7 nodes (9), compute the hidden-layer outputs onh and the output-layer outputs ynm according to formulas 4-7:
netnh = Σk wkh φnk  (formula 4)
onh = f(netnh)  (formula 5)
netnm = Σh whm onh  (formula 6)
ynm = f(netnm)  (formula 7)
The hidden layer 5 and output layer 7 form the neural network classifier. In this invention, the feature-value parameters of input layer 3 and the classifier parameters of hidden layer 5 and output layer 7 are coordinated and optimized, so the feature extraction of the chirplet atom network (see sub-step S13) and the target classification (S14-S16) are realized jointly.
Here n = 1, 2, ..., N indexes samples, h = 1, 2, ..., H the hidden-layer nodes and m = 1, 2, ..., M the output-layer nodes; the activation function f(·) is the Sigmoid function; netnh and netnm are the intermediate variables.
Sub-step S15, compute the mean square error E:
Sub-step S15 includes sub-steps S15A-S15B.
Sub-step S15A: set the desired output dnm of each sample to 1 or 0; dnm = 1 means the sample belongs to the preset p-th class of target, and dnm = 0 means it does not.
Sub-step S15B: from the desired outputs dnm and the actual outputs ynm, compute the mean square error E using formula 8:
E = (1/2) Σn Σm (dnm − ynm)²  (formula 8)
Sub-step S16, judge whether the desired result has been reached:
Compare the mean square error E with the preset threshold;
if E is less than the threshold, end the learning process and go to sub-step S18;
if E is greater than or equal to the threshold, continue with the following sub-steps.
Sub-step S17, adjust the weights and the chirplet atom parameters, then iterate:
Sub-step S17 includes sub-steps S17A-S17C.
In this invention, the automatic adjustment of the classifier weights (sub-step S17A) and the automatic adjustment of the chirplet atom time-frequency parameters (sub-step S17B) are carried out jointly, adjusting layer by layer in the reverse direction: output layer -> hidden layer -> input layer.
Sub-step S17A, weight adjustment:
Reversely adjust the connection weights of each layer of the network using the Levenberg-Marquardt algorithm.
Sub-step S17B adjusts chirplet atomic parameter:
Atomic node parameter set β is adjusted using formula 9 according to gradient descent algorithmk={ ukk,sk,ck}:
Wherein, n is the number of iterations, and η is learning rate,It calculates and accords with for partial derivative, E is mean square error;Gradient descent algorithm base In delta learning rules;
Sub-step S17C, iterate:
Return to sub-step S12 and carry out the next training cycle.
Before the iterative loop has started, the chirplet atom parameter set βk is the one obtained from sub-step S11; after the loop has started, the set βk used by sub-step S12 in the next training cycle is the one obtained from sub-step S17B.
Sub-step S18: save the chirplet atom parameters βk = {uk, ξk, sk, ck} and the connection weights of each layer of the network, completing the training.
After offline training step S1 is completed, online recognition step S2 can be carried out.
For p = 1, 2, ..., P, the recognition step of the chirplet atom network for the p-th class of target is as follows.
Step S2, online recognition:
Classify targets and output recognition results with the chirplet atom network obtained in S1.
Correspondingly, in step S2 the p-th class of target is identified through the following sub-steps:
Sub-step S21: input data and perform data preprocessing;
Sub-step S21 includes sub-steps S21A-S21C, which may be executed in any order.
Sub-step S21A:
As shown by the input vector (sample data set) 1 of Fig. 2, input the sample vectors xn to be recognized and perform data preprocessing;
where n = 1, 2, ..., N indexes samples.
Sub-step S21B:
Input the chirplet atom parameter sets βk = {uk, ξk, sk, ck} of the chirplet atom network obtained in step S1,
where k = 1, 2, ..., K indexes input-layer atoms; βk = {uk, ξk, sk, ck} comprises the time-shift parameter uk, the frequency-shift parameter ξk, the scale parameter sk and the chirp rate ck; uk represents the time center of the signal, ξk its center frequency, sk the dilation of the chirplet atom, and ck the slope of the signal on the time-frequency plane.
Sub-step S21C:

Input the classifier weights of the chirplet atom network obtained in step S1.

These classifier weights are the connection weights of each network layer, comprising the connection weights between the input layer and the hidden layer and the connection weights between the hidden layer and the output layer.
Sub-step S22, calculate the chirplet atoms:

According to formula 1 and formula 2, use the data sequence tl and the chirplet atom parameter set βk input in sub-step S21 to calculate the chirplet atoms, and form the chirplet atom vectors gk accordingly.

Here formula 1 is the chirplet atom expression, and tl is the data sequence, where l=1,2,...,L is the sample data length. g(t) is the base window function of the chirplet transform; a Gaussian function is generally adopted as the base window function of the chirplet atom transform, with the expression given by formula 2.

The chirplet atom vector gk=[gk(t1),gk(t2),...,gk(tL)]^T serves as the feature-extraction basis function of input layer 3 in Fig. 2; input layer 3 realizes feature extraction through the chirplet atom transform. The superscript T denotes matrix transposition, and e is the natural constant.
Sub-step S23: according to formula 3, calculate the feature value φnk of every sample at each atom node of input layer 3 in Fig. 2, using the chirplet atom vectors gk obtained in sub-step S22 and the sample vectors xn input in sub-step S21.

That is, the feature value φnk of a sample is the absolute value of the inner product of the chirplet atom vector gk=[gk(t1),gk(t2),...,gk(tL)]^T and the sample vector xn=[xn(t1),xn(t2),...,xn(tL)]^T; this performs the chirplet atom transform of the sample vector xn with the chirplet atom vector gk.
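Sub-steps S22 and S23 can be sketched as follows. This is a minimal NumPy sketch assuming the common Gaussian-windowed chirplet form g(t) = s^(-1/2) · g0((t-u)/s) · exp(j(ξ(t-u) + c(t-u)²/2)) with base window g0(t) = 2^(1/4) · e^(-πt²); the exact expressions of formulas 1-3 are given in the figures and may differ in normalization, and all parameter values below are hypothetical.

```python
import numpy as np

def chirplet_atom(t, u, xi, s, c):
    """Sampled chirplet atom g_k(t) for parameters (u, xi, s, c)."""
    tau = (t - u) / s
    # Gaussian base window, normalized to (approximately) unit energy
    window = (2.0 ** 0.25) * np.exp(-np.pi * tau ** 2) / np.sqrt(s)
    # Linear-frequency-modulated complex exponential: slope c on the time-frequency plane
    return window * np.exp(1j * (xi * (t - u) + 0.5 * c * (t - u) ** 2))

def atom_features(x, atoms):
    """Feature phi_nk = |<x_n, g_k>|: rows of x are samples, rows of atoms are g_k."""
    return np.abs(x @ atoms.conj().T)

# Example: an L-point data sequence t_l and K atoms with hypothetical parameters
L, K = 256, 4
t = np.linspace(0.0, 1.0, L)
params = [(0.5, 40.0, 0.2, 30.0 * k) for k in range(K)]  # (u_k, xi_k, s_k, c_k)
atoms = np.array([chirplet_atom(t, *p) for p in params])  # K x L matrix of g_k samples
x = np.real(atoms[1]) + 0.05 * np.random.default_rng(0).standard_normal(L)
phi = atom_features(x[None, :], atoms)                    # 1 x K feature values phi_nk
```

With a fine enough sampling grid, each atom has approximately unit energy, so the feature values are directly comparable across atoms of different scale and chirp rate.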
Sub-step S24, calculate the output results onh of hidden layer 5 in Fig. 2 and the output results ynm of output layer 7, and judge whether the sample belongs to the p-th class of target:

Feed the sample feature values φnk obtained in sub-step S23 into the neural-network classifier. Through the activation functions of the hidden-layer nodes 11 and the output-layer nodes 9, calculate the hidden-layer outputs onh and the output-layer outputs ynm according to formulas 4-7, and judge from the output-layer results ynm whether the sample belongs to the p-th class of target.

The activation function uses the Sigmoid function; the intermediate variables (the weighted sums feeding the hidden-layer and output-layer nodes) are calculated by the stated formulas.
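The classifier computation of sub-step S24 can be sketched as follows — a minimal NumPy sketch assuming formulas 4-7 take the standard single-hidden-layer form with Sigmoid activations. The weight matrices below are random stand-ins; in practice they are the trained weights saved in step S1, and the decision threshold is a hypothetical choice.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify(phi, W_ih, W_ho, threshold=0.5):
    """Forward pass: features phi_nk -> hidden outputs o_nh -> outputs y_nm."""
    o = sigmoid(phi @ W_ih)            # hidden-layer node outputs o_nh
    y = sigmoid(o @ W_ho)              # output-layer node outputs y_nm
    # A sample is judged to belong to class m when y_nm crosses the threshold
    return o, y, y >= threshold

rng = np.random.default_rng(1)
phi = rng.random((3, 8))               # N=3 samples, K=8 atom-feature values
W_ih = rng.normal(size=(8, 6))         # input->hidden connection weights
W_ho = rng.normal(size=(6, 4))         # hidden->output connection weights, M=4 classes
o, y, decision = classify(phi, W_ih, W_ho)
```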
At this point, the task of classifying the target according to the output value of each sample is completed. Next:

Sub-step S25: output the classification result:

The classification result is output, completing the recognition.
In order to illustrate the beneficial effects of the present invention, the following four embodiments are given. Embodiments 1-3 carry out recognition tests with a controlled-variable approach (azimuth angle, signal-to-noise ratio); embodiment 4 is an integrated test, whose results are shown in Fig. 3.
Embodiment 1
A chirplet atom network is trained and tested by the target identification method of the present invention, using electromagnetic scattering images of four classes of targets at azimuth angles of 0°-45° and a signal-to-noise ratio of 30 dB as sample data. Compared with the existing methods (back-propagation neural network, wavelet neural network, Gabor atom network), the recognition-rate results shown in Table 1 are obtained. This embodiment shows that the present invention has good recognition performance on narrow-angle radar scattering images.
Table 1. Recognition rate (%) for data in the azimuth-angle range 0°-45°
Embodiment 2
Using electromagnetic scattering images of four classes of targets at azimuth angles of 0°-180° and a signal-to-noise ratio of 30 dB as sample data, the four networks described in embodiment 1 are trained and tested, and the recognition-rate results shown in Table 2 are obtained. This embodiment shows that, compared with the other target identification methods in the prior art, the present invention achieves a generally higher recognition rate on wide-angle radar scattering images.
Table 2. Recognition rate (%) for data in the azimuth-angle range 0°-180°
Embodiment 3
Using electromagnetic scattering images of four classes of targets at azimuth angles of 0°-180° and a signal-to-noise ratio of 5 dB as test sample data, anti-noise tests are carried out on the four networks trained in embodiment 2, and the recognition-rate results shown in Table 3 are obtained. This embodiment shows that the present invention has good anti-noise performance in the recognition of radar scattering images.
Table 3. Recognition rate (%) for data at a signal-to-noise ratio of 5 dB
Embodiment 4
For the prior art, electromagnetic scattering images of four classes of targets at azimuth angles of 0°-45°, 0°-90° and 0°-180° and signal-to-noise ratios from 30 dB to 5 dB are used as sample data; as shown in Fig. 3A, combined training and testing are carried out on the Gabor atom network, and the average recognition rates of the four target classes are obtained.

In embodiment 4, electromagnetic scattering images of four classes of targets at azimuth angles of 0°-45°, 0°-90° and 0°-180° and signal-to-noise ratios from 30 dB to 5 dB are likewise used as sample data; as shown in Fig. 3B, combined training and testing are carried out on the chirplet atom network, and the average recognition rates of the four target classes are obtained.

Embodiment 4 shows that as image noise increases, the recognition performance of the present invention on radar scattering images degrades only slightly, and the recognition rate is overall higher than that of the existing Gabor atom network method.
In conclusion the beneficial effect of this creation is:
This creation has following remarkable advantage compared with prior art:
1. the richer characteristic information of target can be obtained by chirplet Atom Transformation, provided for classifier More preferably valid data support, and the accuracy for improving target identification to system has certain advantage;
2. union feature extracts and classification, can be adjusted according to different target feature and environment-identification real-time perfoming parameter, Improve the recognition performance and noiseproof feature of identifying system.
Described above to be merely exemplary for this creation, and not restrictive, those of ordinary skill in the art understand, In the case where not departing from spirit and scope defined by claims appended below, many modifications, variation or equivalent can be made, But it falls in the protection scope of this creation.

Claims (10)

1. A target identification method based on a chirplet atom network, characterized by comprising:
Step S1: training the chirplet atom network offline; and
Step S2: using the chirplet atom network obtained in step S1 to classify the target and output the recognition result;
wherein step S1 includes sub-step S11: inputting the training sample vectors xn, and randomly initializing the chirplet atom parameter set βk={uk,ξk,sk,ck} of the chirplet atom network and the classifier weights;
wherein n=1,2,...,N is the sample index, k=1,2,...,K is the number of input-layer atoms, h=1,2,...,H is the number of hidden-layer nodes, and m=1,2,...,M is the number of output-layer nodes; βk={uk,ξk,sk,ck} includes the shift parameter uk, the frequency-shift parameter ξk, the scale parameter sk and the chirp rate ck; the classifier weights comprise the connection weights between the input layer and the hidden layer, and the connection weights between the hidden layer and the output layer.
2. The target identification method based on a chirplet atom network according to claim 1, characterized in that, after sub-step S11, step S1 further includes:
sub-step S12: according to formula 1 and formula 2, using the data sequence tl and the chirplet atom parameter set βk to calculate the chirplet atoms, and forming the chirplet atom vectors gk accordingly;
wherein the chirplet atom parameter set βk is obtained from sub-step S11;
wherein tl is the data sequence, l=1,2,...,L is the sample length, and g(t) is the base window function of the chirplet transform;
wherein the chirplet atom vector is gk=[gk(t1),gk(t2),...,gk(tL)]^T, the superscript T denotes matrix transposition, and e is the natural constant.
3. The target identification method based on a chirplet atom network according to claim 2, characterized in that, after sub-step S12, step S1 further includes:
sub-step S13: according to formula 3, using the chirplet atom vectors gk obtained in sub-step S12 and the training sample vectors xn input in sub-step S11, calculating the feature value φnk of every sample at each atom node.
4. The target identification method based on a chirplet atom network according to claim 3, characterized in that, after sub-step S13, step S1 further includes:
sub-step S14: feeding the sample feature values φnk obtained in sub-step S13 into the neural-network classifier, and calculating, through the activation functions and according to formulas 4-7, the hidden-layer outputs onh and the output-layer outputs ynm;
wherein n=1,2,...,N is the sample index, h=1,2,...,H is the number of hidden-layer nodes, and m=1,2,...,M is the number of output-layer nodes; the calculation formulas of the intermediate variables are as stated.
5. The target identification method based on a chirplet atom network according to claim 4, characterized in that, after sub-step S14, step S1 further includes sub-step S15, which includes sub-steps S15A-S15B;
wherein sub-step S15A sets the desired output dnm of a sample to 1 or 0, dnm=1 indicating that the sample belongs to the preset target and dnm=0 indicating that it does not;
and sub-step S15B calculates the mean square error E from the desired outputs dnm and the actual outputs ynm using formula 8.
6. The target identification method based on a chirplet atom network according to claim 5, characterized in that, after sub-step S15, step S1 further includes sub-steps S16-S18:
sub-step S16: judging whether the desired result has been reached by comparing the mean square error E with a preset threshold; if E is less than the preset threshold, ending the learning process and going to sub-step S18; if E is greater than or equal to the preset threshold, continuing with the following sub-steps;
sub-step S17: adjusting the weights, then adjusting the chirplet atom parameters, and then iterating;
sub-step S18: saving the chirplet atom parameters βk={uk,ξk,sk,ck}, the connection weights between the input layer and the hidden layer, and the connection weights between the hidden layer and the output layer, completing the training.
7. The target identification method based on a chirplet atom network according to claim 6, characterized in that sub-step S17 includes sub-steps S17A-S17C:
sub-step S17A: reversely adjusting the connection weights between the input layer and the hidden layer and the connection weights between the hidden layer and the output layer using the Levenberg-Marquardt algorithm;
sub-step S17B: adjusting the atom-node parameter set βk={uk,ξk,sk,ck} according to the gradient-descent algorithm using formula 9, wherein n is the iteration number, η is the learning rate, ∂ is the partial-derivative sign, and E is the mean square error;
sub-step S17C: returning to sub-step S12 and carrying out the next training cycle;
wherein, in the next training cycle, the chirplet atom parameter set βk used by sub-step S12 is the one obtained from sub-step S17B.
8. The target identification method based on a chirplet atom network according to claim 7, characterized in that step S2 includes the following sub-steps:
sub-step S21: inputting the data and performing data preprocessing; sub-step S21 includes sub-steps S21A-S21C, which may be executed in any order;
sub-step S21A: inputting the sample vectors xn and performing data preprocessing, wherein n=1,2,...,N is the sample index;
sub-step S21B: inputting the chirplet atom parameter set βk={uk,ξk,sk,ck} of the chirplet atom network obtained in step S1, wherein k=1,2,...,K is the number of input-layer atoms and βk={uk,ξk,sk,ck} includes the shift parameter uk, the frequency-shift parameter ξk, the scale parameter sk and the chirp rate ck;
sub-step S21C: inputting the classifier weights of the chirplet atom network obtained in step S1, the classifier weights comprising the connection weights between the input layer and the hidden layer and the connection weights between the hidden layer and the output layer.
9. The target identification method based on a chirplet atom network according to claim 8, characterized in that, after sub-step S21, step S2 further includes the following sub-step:
sub-step S22: according to formula 1 and formula 2, using the data sequence tl and the chirplet atom parameter set βk input in sub-step S21 to calculate the chirplet atoms, and forming the chirplet atom vectors gk accordingly;
wherein tl is the data sequence, l=1,2,...,L is the sample length, and g(t) is the base window function of the chirplet transform;
wherein the chirplet atom vector is gk=[gk(t1),gk(t2),...,gk(tL)]^T, the superscript T denotes matrix transposition, and e is the natural constant.
10. The target identification method based on a chirplet atom network according to claim 9, characterized in that, after sub-step S22, step S2 further includes the following sub-steps:
sub-step S23: according to formula 3, using the chirplet atom vectors gk obtained in sub-step S22 and the sample vectors xn input in sub-step S21, calculating the feature value φnk of every sample at each atom node;
sub-step S24: feeding the sample feature values φnk obtained in sub-step S23 into the neural-network classifier, calculating, through the activation functions and according to formulas 4-7, the hidden-layer outputs onh and the output-layer outputs ynm, and judging from the output-layer results ynm whether the sample belongs to the p-th class of target, the intermediate variables being calculated by the stated formulas;
sub-step S25: outputting the classification result, completing the recognition.
CN201910108831.3A 2019-02-03 2019-02-03 Target identification method based on linear frequency modulation wavelet atomic network Active CN109784311B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910108831.3A CN109784311B (en) 2019-02-03 2019-02-03 Target identification method based on linear frequency modulation wavelet atomic network


Publications (2)

Publication Number Publication Date
CN109784311A true CN109784311A (en) 2019-05-21
CN109784311B CN109784311B (en) 2023-04-18

Family

ID=66504235


Country Status (1)

Country Link
CN (1) CN109784311B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110297238A (en) * 2019-06-24 2019-10-01 山东大学 Union feature based on adaptive chirp wavelet filtering extracts and classification method
CN113238200A (en) * 2021-04-20 2021-08-10 上海志良电子科技有限公司 Radar chirp signal classification method based on validity verification

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408481A (en) * 2014-12-05 2015-03-11 西安电子科技大学 Deep wavelet neural network-based polarimetric SAR (synthetic aperture radar) image classification method
CN104792522A (en) * 2015-04-10 2015-07-22 北京工业大学 Intelligent gear defect analysis method based on fractional wavelet transform and BP neutral network
CN107194433A (en) * 2017-06-14 2017-09-22 电子科技大学 A kind of Radar range profile's target identification method based on depth autoencoder network
CN109188414A (en) * 2018-09-12 2019-01-11 北京工业大学 A kind of gesture motion detection method based on millimetre-wave radar


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HERVÉ GLOTIN: "FAST CHIRPLET TRANSFORM INJECTS PRIORS IN DEEP LEARNING OF ANIMAL CALLS AND SPEECH" *
XIE Jiangjian et al.: "Bird species recognition method based on Chirplet spectrogram features and deep learning" *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant