CN105550636B - Method and device for target type recognition - Google Patents
Method and device for target type recognition
- Publication number
- CN105550636B CN105550636B CN201510884182.8A CN201510884182A CN105550636B CN 105550636 B CN105550636 B CN 105550636B CN 201510884182 A CN201510884182 A CN 201510884182A CN 105550636 B CN105550636 B CN 105550636B
- Authority
- CN
- China
- Prior art keywords
- target
- acoustic signal
- feature
- confidence level
- type
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/08—Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/02—Preprocessing
Abstract
An embodiment of the invention discloses a method and device for target type recognition. The method comprises the steps of: acquiring the acoustic signal of a target through a planar microphone array, the target being any one of a low-altitude unmanned aerial vehicle, a light aircraft, a powered delta-wing, a cruise missile, or a powered paraglider; performing direction estimation on the acoustic signal, and applying spatial filtering to the acoustic signal after direction estimation; after spatial filtering, extracting the features of the acoustic signal, the feature vectors corresponding to those features, and the features of the target; determining the type confidence levels of the target according to the features of the acoustic signal, the corresponding feature vectors, and the features of the target; and recognizing the type confidence levels through fusion recognition to determine the type of the target. The embodiment of the invention improves the accuracy of target type recognition.
Description
Technical field
The present invention relates to the field of detection technology, and in particular to a method and device for target type recognition.
Background art
China has a vast territory, with important targets to be defended distributed across the whole country, and these are vulnerable to attack by high-technology aircraft such as armed helicopters, cruise missiles and unmanned aerial vehicles. There is therefore an urgent need to establish low-altitude target detection and early-warning systems to protect important areas. Targets flying at low altitude have a low flight height and a small radar cross-section, so ordinary radar can hardly detect them; the sound these targets radiate in flight, however, is difficult to suppress. Locating and identifying low-altitude aircraft by this sound is a key technology of passive acoustic detection and a subject of wide current research interest.
Because different target types radiate different acoustic signals, the voiceprint features of the acoustic signals of different targets can be extracted and then classified and analysed with pattern-recognition methods, thereby identifying the different targets. Specifically, in the prior art, target type recognition requires collecting acoustic signal data of the target, performing feature extraction and analysis on the collected target acoustic signal data, and then recognizing the type of the target with classification and recognition techniques such as template matching and neural networks.
However, template matching can only be used under specific conditions, and neural network techniques require sufficient sample data to train the network adequately and require the network weight coefficients to converge to a globally optimal solution; otherwise the neural network falls far short of the expected performance in practice. The classification and recognition techniques of the prior art therefore struggle to achieve the desired effect in actual use, which degrades the accuracy of the recognition results.
Summary of the invention
Embodiments of the present invention provide a target type recognition method that can effectively improve the accuracy of target recognition.
In a first aspect, an embodiment of the present invention provides a target type recognition method, comprising:
acquiring the acoustic signal of a target through a planar microphone array, the target being any one of a low-altitude unmanned aerial vehicle, a light aircraft, a powered delta-wing, a cruise missile, or a powered paraglider;
performing direction estimation on the acoustic signal, and applying spatial filtering to the acoustic signal after direction estimation;
after spatial filtering, extracting the features of the acoustic signal, the feature vectors corresponding to those features, and the features of the target;
determining the type confidence levels of the target according to the features of the acoustic signal, the corresponding feature vectors, and the features of the target;
recognizing the type confidence levels through fusion recognition to determine the type of the target.
Optionally, after the acoustic signal of the target is acquired through the planar microphone array, and before direction estimation and spatial filtering are performed, the method further comprises:
performing adaptive noise suppression on the acoustic signal.
Optionally, performing adaptive noise suppression on the acoustic signal comprises:
suppressing noise in the acoustic signal using an adaptive noise suppression technique based on wavelet decomposition.
Optionally, after the direction of the acoustic signal is estimated, the track of the target is determined from the direction of the acoustic signal at each moment.
Optionally, extracting the features of the acoustic signal comprises:
when a line of the radiated sound power spectrum at the fundamental frequency exceeds a threshold, analysing the frequency-domain characteristics with a helicopter line-spectrum and harmonic-set detection algorithm to obtain line-spectrum frequency features.
Extracting the feature vectors corresponding to the features of the acoustic signal comprises:
calculating autocorrelation coefficients and cepstrum coefficients, and combining the autocorrelation coefficients and the cepstrum coefficients to obtain a time-domain feature vector of several dimensions;
performing non-parametric power spectrum analysis to calculate the power spectrum, and using the calculated power spectrum to obtain a frequency-domain feature vector of several dimensions;
calculating the energy, standard deviation, spectral centroid and wavelet-packet sample entropy features of each frequency band of the acoustic signal, and combining the energy, standard deviation, spectral centroid and wavelet-packet sample entropy features to obtain a wavelet-packet feature vector of several dimensions.
Extracting the features of the target comprises:
tracking and predicting the bearing of the target and its rate of change to obtain the dynamic track features of the target.
Optionally, determining the type confidence levels of the target according to the features of the acoustic signal comprises:
pre-establishing a characteristic frequency library from the line-spectrum frequency features;
extracting the characteristic frequencies of the acoustic signal in real time;
determining a first type confidence level of the target according to how the extracted characteristic frequencies match the characteristic frequency library.
Determining the type confidence levels of the target according to the feature vectors corresponding to the features of the acoustic signal comprises:
determining the input and output layers of a time-domain feature sub-neural-network according to the dimension of the time-domain feature vector and the number of targets respectively, and determining a second type confidence level of the target with the time-domain feature sub-neural-network;
determining the input and output layers of a frequency-domain feature sub-neural-network according to the dimension of the frequency-domain feature vector and the number of targets respectively, and determining a third type confidence level of the target with the frequency-domain feature sub-neural-network;
determining the input and output layers of a wavelet-packet feature sub-neural-network according to the dimension of the wavelet-packet feature vector and the number of targets respectively, and determining a fourth type confidence level of the target with the wavelet-packet feature sub-neural-network.
Determining the type confidence level of the target according to the features of the target comprises:
associating the track with the dynamic track features through track-association recognition, and determining a fifth type confidence level of the target.
Optionally, recognizing the type confidence levels through fusion recognition to determine the type of the target comprises:
calculating a total type confidence value from the first type confidence level, the second type confidence level, the third type confidence level, the fourth type confidence level, the fifth type confidence level and their weights, and selecting the type corresponding to the maximum value as the type of the target.
Optionally, before the type confidence levels are recognized through fusion recognition to determine the type of the target, the method further comprises:
globally optimizing the neural network weight coefficients using a classification and recognition method that combines a genetic algorithm with the neural network.
Optionally, when the target is a low-altitude, low-speed, small target, the acoustic signal of the target is acquired through a cross-shaped microphone array comprising 12 microphone elements and 4 mutually perpendicular horizontal bars.
In a second aspect, an embodiment of the present invention provides a target type recognition device, comprising:
an acquisition module, configured to acquire the acoustic signal of a target through a planar microphone array, the target being any one of a low-altitude unmanned aerial vehicle, a light aircraft, a powered delta-wing, a cruise missile, or a powered paraglider;
a first processing module, configured to perform direction estimation on the acoustic signal, and to apply spatial filtering to the acoustic signal after direction estimation;
an extraction module, configured to extract, after spatial filtering, the features of the acoustic signal, the feature vectors corresponding to those features, and the features of the target;
a first determining module, configured to determine the type confidence levels of the target according to the features of the acoustic signal, the corresponding feature vectors, and the features of the target;
a recognition module, configured to recognize the type confidence levels through fusion recognition to determine the type of the target.
Optionally, the device further comprises:
a second processing module, configured to perform adaptive noise suppression on the acoustic signal after the acoustic signal of the target is acquired through the planar microphone array and before direction estimation and spatial filtering are performed.
Optionally, the second processing module is specifically configured to suppress noise in the acoustic signal using an adaptive noise suppression technique based on wavelet decomposition.
Optionally, the device further comprises:
a second determining module, configured to determine, after the direction of the acoustic signal is estimated, the track of the target from the direction of the acoustic signal at each moment.
Optionally, the extraction module is specifically configured to:
when a line of the radiated sound power spectrum at the fundamental frequency exceeds a threshold, analyse the frequency-domain characteristics with a helicopter line-spectrum and harmonic-set detection algorithm to obtain line-spectrum frequency features;
calculate autocorrelation coefficients and cepstrum coefficients, and combine the autocorrelation coefficients and the cepstrum coefficients to obtain a time-domain feature vector of several dimensions;
perform non-parametric power spectrum analysis to calculate the power spectrum, and use the calculated power spectrum to obtain a frequency-domain feature vector of several dimensions;
calculate the energy, standard deviation, spectral centroid and wavelet-packet sample entropy features of each frequency band of the acoustic signal, and combine the energy, standard deviation, spectral centroid and wavelet-packet sample entropy features to obtain a wavelet-packet feature vector of several dimensions;
track and predict the bearing of the target and its rate of change to obtain the dynamic track features of the target.
Optionally, the first determining module is specifically configured to:
pre-establish a characteristic frequency library from the line-spectrum frequency features;
extract the characteristic frequencies of the acoustic signal in real time;
determine a first type confidence level of the target according to how the extracted characteristic frequencies match the characteristic frequency library;
determine the input and output layers of a time-domain feature sub-neural-network according to the dimension of the time-domain feature vector and the number of targets respectively, and determine a second type confidence level of the target with the time-domain feature sub-neural-network;
determine the input and output layers of a frequency-domain feature sub-neural-network according to the dimension of the frequency-domain feature vector and the number of targets respectively, and determine a third type confidence level of the target with the frequency-domain feature sub-neural-network;
determine the input and output layers of a wavelet-packet feature sub-neural-network according to the dimension of the wavelet-packet feature vector and the number of targets respectively, and determine a fourth type confidence level of the target with the wavelet-packet feature sub-neural-network;
associate the track with the dynamic track features through track-association recognition, and determine a fifth type confidence level of the target.
Optionally, the recognition module is specifically configured to calculate a total type confidence value from the first type confidence level, the second type confidence level, the third type confidence level, the fourth type confidence level, the fifth type confidence level and their weights, and to select the type corresponding to the maximum value as the type of the target.
Optionally, the device further comprises:
a global optimization module, configured to globally optimize the neural network weight coefficients, before the type confidence levels are recognized through fusion recognition to determine the type of the target, using a classification and recognition method that combines a genetic algorithm with the neural network.
Optionally, the acquisition module is specifically configured to acquire, when the target is a low-altitude, low-speed, small target, the acoustic signal of the target through a cross-shaped microphone array comprising 12 microphone elements and 4 mutually perpendicular horizontal bars.
As can be seen from the above technical solutions, the embodiments of the present invention have the following advantage:
In the embodiments of the present invention, the acoustic signal of the target is acquired through a planar microphone array, the target being any one of a low-altitude unmanned aerial vehicle, a light aircraft, a powered delta-wing, a cruise missile, or a powered paraglider; direction estimation is performed on the acoustic signal, and spatial filtering is applied to the acoustic signal after direction estimation; after spatial filtering, the features of the acoustic signal, the feature vectors corresponding to those features, and the features of the target are extracted; the type confidence levels of the target are determined according to these features and feature vectors; and the type confidence levels are recognized through fusion recognition to determine the type of the target. Determining the type confidence levels of the target from the features of the acoustic signal, the corresponding feature vectors, and the features of the target, and then recognizing the type confidence levels through fusion recognition, effectively improves the accuracy of target type recognition.
Detailed description of the invention
Fig. 1 is a schematic diagram of an embodiment of the target type recognition method in the embodiment of the present invention;
Fig. 2 is a schematic diagram of the microphone array structure in the embodiment of the present invention;
Fig. 3 is a flow diagram of the wavelet-decomposition adaptive noise suppression algorithm in the embodiment of the present invention;
Fig. 4 is a flow diagram of characteristic-frequency template matching in the embodiment of the present invention;
Fig. 5 is a structural schematic diagram of the target type recognition device in the embodiment of the present invention.
Specific embodiment
The embodiments of the present invention provide a target type recognition method that improves the accuracy of target type recognition.
In order to enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The terms "first", "second" and the like (if present) in the description, the claims and the accompanying drawings are used to distinguish similar objects, not to describe a particular order or sequence. It should be understood that terms used in this way are interchangeable where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "comprise" and "have" and any variants of them are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product or device.
The embodiments of the target type recognition method in the embodiments of the present invention are introduced below.
Referring to Fig. 1, one embodiment of the target type recognition method in the embodiment of the present invention comprises:
101. Acquire the acoustic signal of a target through a planar microphone array, the target being any one of a low-altitude unmanned aerial vehicle, a light aircraft, a powered delta-wing, a cruise missile, or a powered paraglider.
Optionally, as shown in Fig. 2, when the target is a low-altitude, low-speed, small target, the acoustic signal of the target is acquired through a cross-shaped microphone array comprising 12 microphone elements and 4 mutually perpendicular horizontal bars.
102. Perform direction estimation on the acoustic signal, and apply spatial filtering to the acoustic signal after direction estimation.
It can be understood that, in the embodiment of the present invention, detection and bearing-estimation algorithms can be applied to the time-domain-filtered signal to estimate the bearing and complete target localization, and the bearing results at successive moments form the track of the target.
Target detection and bearing estimation use general-purpose algorithms, and spatial filtering uses a general beamforming algorithm; these are not described further in the present invention.
Optionally, in the present embodiment, the method may further comprise, between steps 101 and 102: performing adaptive noise suppression on the acoustic signal.
It can be understood that the target acoustic signal in every channel contains background noise, wind noise and other noise, some of which is white noise and some coloured noise. In the present embodiment, performing adaptive noise suppression on the acoustic signal, i.e. filtering each channel signal in the time domain, improves the signal-to-noise ratio.
Specifically, in the present embodiment, adaptive noise suppression based on wavelet decomposition is applied to the acoustic signal, as shown in Fig. 3. In this technique the wavelet basis is the db6 wavelet. First, a 6-level wavelet decomposition is applied to the target acoustic signal collected by each element of the microphone array, and each level is reconstructed separately. A precondition is assumed: the background noise in the channels is uncorrelated or only weakly correlated, so after wavelet decomposition the channels of a level dominated by noise are also uncorrelated or only weakly correlated. Based on this precondition, cross-correlation is computed between the decomposed data of the channels, with a cross-correlation threshold of 0.3: when the cross-correlation coefficients of the channels at a level are below 0.3, that level is considered a noise level. All noise levels are extracted and reconstructed to obtain the noise data, and the noise data is fed into a general adaptive filter for noise suppression.
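The noise-level extraction described above can be sketched as follows using the PyWavelets library: a 6-level db6 decomposition per channel, a separate reconstruction of each level, and a 0.3 cross-channel correlation threshold to flag noise-dominated levels. Using the maximum off-diagonal correlation as the per-level statistic, and the synthetic test signals, are our assumptions; the patent does not specify the exact channel pairing.

```python
import numpy as np
import pywt

def level_reconstructions(x, wavelet="db6", level=6):
    """Reconstruct the approximation and each of the 6 detail levels separately."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    recs = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        recs.append(pywt.waverec(keep, wavelet)[: len(x)])
    return recs  # recs[0] is the approximation, recs[1:] the details

def noise_levels(channels, threshold=0.3):
    """Indices of levels whose largest cross-channel correlation stays below threshold."""
    per_ch = [level_reconstructions(np.asarray(ch, dtype=float)) for ch in channels]
    noisy = []
    for lvl in range(len(per_ch[0])):
        stack = np.vstack([rec[lvl] for rec in per_ch])
        corr = np.corrcoef(stack)
        off_diag = np.abs(corr - np.eye(len(channels))).max()
        if off_diag < threshold:
            noisy.append(lvl)
    return noisy

def noise_estimate(x, levels):
    """Rebuild the noise reference for one channel from the flagged levels."""
    x = np.asarray(x, dtype=float)
    recs = level_reconstructions(x)
    return np.sum([recs[l] for l in levels], axis=0) if levels else np.zeros_like(x)
```

The returned noise reference would then be fed to an adaptive filter (e.g. LMS) as the patent describes; that stage is generic and omitted here.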
103. After spatial filtering, extract the features of the acoustic signal, the feature vectors corresponding to those features, and the features of the target.
Optionally, when the feature of the target is a frequency feature, extracting the features of the acoustic signal comprises: when a line of the radiated sound power spectrum at the fundamental frequency exceeds a threshold, analysing the frequency-domain characteristics with a helicopter line-spectrum and harmonic-set detection algorithm to obtain line-spectrum frequency features.
It can be understood that, viewed in the power spectrum, small low-altitude targets mostly exhibit clear line-spectrum or narrow-band frequency features, and broadband features like those of jet aircraft are rare. Because the engines generally used are small, the rotation speed is generally high in order to reach a given power, so the fundamental of the radiated sound power line spectrum lies at 70 Hz to 160 Hz. Above 50 Hz, line-spectrum frequency features are extracted with the helicopter line-spectrum and harmonic-set detection algorithm.
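A minimal sketch of the harmonic-set idea: score each candidate fundamental in the 70-160 Hz band by the summed power at its first few harmonics and return the best-scoring one. This generic harmonic-sum detector stands in for the patent's helicopter line-spectrum algorithm; the harmonic count and the Hann windowing are assumptions.

```python
import numpy as np

def detect_fundamental(x, fs, f_min=70.0, f_max=160.0, n_harmonics=4):
    """Pick the fundamental in [f_min, f_max] whose harmonic set carries the most power."""
    x = np.asarray(x, dtype=float)
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    df = freqs[1] - freqs[0]
    best_f, best_score = None, -np.inf
    for i in np.flatnonzero((freqs >= f_min) & (freqs <= f_max)):
        score = 0.0
        for h in range(1, n_harmonics + 1):
            j = int(round(h * freqs[i] / df))  # bin nearest the h-th harmonic
            if j < len(spec):
                score += spec[j]
        if score > best_score:
            best_f, best_score = freqs[i], score
    return best_f
```

A real system would also apply the power threshold mentioned in the text before accepting the detected line.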
Optionally, extracting the feature vectors corresponding to the features of the acoustic signal includes the following ways:
1. When the acoustic signal feature is a time-domain feature, calculate the autocorrelation coefficients and cepstrum coefficients, and combine the autocorrelation coefficients and the cepstrum coefficients to obtain a time-domain feature vector of several dimensions.
Specifically, with 20 autocorrelation coefficients and 20 cepstrum coefficients, a 40-dimensional time-domain feature vector is obtained.
The autocorrelation coefficients can be obtained by the following method.
For a small target in low-altitude, low-speed flight, the single-channel acoustic signal after beam output can be regarded as a stationary random signal, and for a stationary random signal the time-domain autocorrelation function can be approximated by:
R_xx(τ) = (1/N) Σ_{n=1}^{N−|τ|} x(n) x(n+τ)   (1)
where R_xx denotes the calculated time-domain autocorrelation coefficient, x(n) the time-domain signal after beam output, and τ the time delay. After the autocorrelation coefficients are obtained, 99 values are taken to the left of the zero-delay point and 100 values to the right, and from left to right every 10 coefficient points are averaged, giving 20 characteristic points in total; the differences between the autocorrelation characteristic points of different target types are used to recognize the target type.
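The 20-point time-domain autocorrelation feature can be sketched as follows: a biased autocorrelation estimate, a window of 99 lags left of the zero-delay point and 100 lags right of it, and group-of-10 averaging into 20 characteristic points. Removing the mean before correlating is our assumption.

```python
import numpy as np

def autocorr_features(x, n_left=99, n_right=100, group=10):
    """20 time-domain feature points from grouped autocorrelation coefficients."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    full = np.correlate(x, x, mode="full") / len(x)    # biased estimate, lags -(N-1)..N-1
    zero = len(x) - 1
    window = full[zero - n_left : zero + n_right + 1]  # 99 left + lag 0 + 100 right = 200
    return window.reshape(-1, group).mean(axis=1)      # average every 10 -> 20 points
```

The groups straddling zero delay carry the autocorrelation peak, and the decay pattern across the remaining groups differs between target types.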
The cepstrum coefficients can be obtained by the following method.
Assuming the target acoustic signal is limited in frequency and amplitude and lies entirely within the microphone's designed range, the microphone can be regarded as a causal, stable system that can be described by a transfer function of rational form satisfying the minimum-phase condition. The system can therefore be modelled with a linear prediction model, from which its cepstrum coefficients are obtained; since the cepstrum coefficients of different target signal types are generally different, the target type can be recognized from these coefficients.
The linear prediction model expresses the output value at the current moment as an estimate formed by multiplying the output values over a past period of time by the corresponding coefficients:
x̂(n) = Σ_{i=1}^{p} a_i x(n−i)   (2)
where x̂(n) denotes the predicted value at the current moment, x(n) the actual value at the current moment, a_i (i = 1, 2, …, p) the coefficients of the linear prediction model, and p the model order. The coefficients of the linear prediction model can be estimated with the minimum mean-square error method. After the linear prediction coefficients are obtained, the cepstrum coefficients are calculated as:
c_i = a_i + Σ_{k=1}^{i−1} (k/i) c_k a_{i−k}   (3)
where c_i denotes the cepstrum coefficient and the other symbols are as in formula (2).
In the actual calculation the order is 20, and the 20 estimated cepstrum coefficients serve as target features; combined with the 20 autocorrelation coefficients, 40 parameters in total are extracted to form the target time-domain feature vector F_t = (r_1, r_2, …, r_20, m_1, m_2, …, m_20).
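The order-20 linear-prediction cepstrum can be sketched as below: a Levinson-Durbin recursion for the predictor coefficients, followed by the standard LPC-to-cepstrum recursion. Treating the minimum mean-square error estimate as the autocorrelation-method Levinson-Durbin solution is our reading of the text, not a detail stated in the patent.

```python
import numpy as np

def levinson(r, order):
    """Predictor coefficients a_1..a_p for x_hat(n) = sum_i a_i x(n-i),
    solved from the autocorrelation sequence r[0..order] by Levinson-Durbin."""
    a = np.zeros(order + 1)
    a[0] = 1.0                 # internal prediction-error polynomial
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        new_a = a.copy()
        for j in range(1, i):
            new_a[j] += k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= 1.0 - k * k
    return -a[1:]              # sign flip: polynomial coeffs -> predictor coeffs

def lpc_cepstrum(a, n_ceps):
    """Cepstrum recursion c_m = a_m + sum_{k=1}^{m-1} (k/m) c_k a_{m-k}."""
    c = np.zeros(n_ceps)
    for m in range(1, n_ceps + 1):
        am = a[m - 1] if m <= len(a) else 0.0
        c[m - 1] = am + sum(
            (k / m) * c[k - 1] * (a[m - k - 1] if m - k <= len(a) else 0.0)
            for k in range(1, m)
        )
    return c

def cepstrum_features(x, order=20):
    """Order-20 cepstrum features from a beam-output signal."""
    x = np.asarray(x, dtype=float)
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order] / len(x)
    return lpc_cepstrum(levinson(r, order), order)
```

For a first-order process with autocorrelation r(k) = ρ^k, the recursion reproduces the known cepstrum of 1/(1 − ρz⁻¹), namely c_m = ρ^m / m, which is a convenient correctness check.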
2. When the acoustic signal feature is a frequency-domain feature, perform non-parametric power spectrum analysis to calculate the power spectrum, and use the calculated power spectrum to obtain a frequency-domain feature vector of several dimensions.
Specifically, the frequency-domain feature vector of several dimensions can be obtained in the following way. Non-parametric power spectrum analysis is applied to the beam-output acoustic signal, generally with the modified periodogram method, calculated as:
P_X(w) = (1/N) |Σ_{n=0}^{N−1} c(n) x(n) e^{−jwn}|²   (4)
where x(n) is the time-domain sample sequence, P_X(w) the calculated power spectrum, and c(n) the window function.
After power spectrum analysis of the beam-output signal, the 10 Hz to 650 Hz portion is taken and divided into 40 sections, and the energy of each section is averaged to obtain the 40-dimensional target frequency-domain feature vector.
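The 40-band frequency-domain vector can be sketched as follows: a Hann-windowed (modified) periodogram normalised by the window energy, restricted to 10-650 Hz and averaged over 40 equal-width bands. The sampling rate, window choice and normalisation are assumptions.

```python
import numpy as np

def modified_periodogram(x, fs):
    """Windowed (modified) periodogram with a Hann window c(n)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    c = np.hanning(n)
    spec = np.abs(np.fft.rfft(c * x)) ** 2 / np.sum(c**2)  # window-energy normalisation
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return freqs, spec

def freq_domain_vector(x, fs, f_lo=10.0, f_hi=650.0, n_bands=40):
    """Average the 10-650 Hz power spectrum over 40 equal bands."""
    freqs, spec = modified_periodogram(x, fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    fb, pb = freqs[mask], spec[mask]
    edges = np.linspace(f_lo, f_hi, n_bands + 1)
    band = np.clip(np.searchsorted(edges, fb, side="right") - 1, 0, n_bands - 1)
    vec = np.zeros(n_bands)
    for b in range(n_bands):
        sel = band == b
        if sel.any():
            vec[b] = pb[sel].mean()
    return vec
```

Each band is 16 Hz wide here (640 Hz / 40 bands), so a strong 100 Hz line lands in the sixth band.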
3. When the acoustic signal feature is a wavelet-packet feature, calculate the energy, standard deviation, spectral centroid and wavelet-packet sample entropy features of each frequency band of the acoustic signal, and combine the energy, standard deviation, spectral centroid and wavelet-packet sample entropy features to obtain a wavelet-packet feature vector of several dimensions.
Specifically, in practical applications, a 32-dimensional wavelet-packet feature vector can be calculated from the above four kinds of features, as follows.
Wavelet-packet decomposition is an effective time-frequency analysis method that divides the signal into multiple levels from low to high frequency; on each level, features such as the signal energy, spectral centroid, second moment and wavelet entropy can fully express the characteristics of such targets. The steps of wavelet-packet feature extraction are as follows:
1) Signal decomposition: apply a 3-level wavelet-packet decomposition with the wavelet function sym6 to the beam-output data, obtaining the wavelet-packet coefficients of 8 frequency bands.
2) Signal reconstruction: reconstruct the 8 frequency-band wavelet-packet coefficients obtained in step 1), giving 8 reconstructed frequency-band time-domain waveforms.
3) Calculate the feature quantities of each frequency band: let the sampling length of the signal be N, and calculate each feature quantity from the signals obtained in step 2) as follows:
A) energy feature
The energy balane of each frequency band is as follows:
Energy normalized:
B) standard deviation characteristic
C) gravity center characteristics are composed
Seek the power spectrum of each frequency band:
(4) formula of utilization calculates each band power and composes to obtain
In formula, fs is sample rate, and k is discrete frequency, AiIt (k) is the amplitude of respective frequencies,For the phase of respective frequencies
Position.
Seek each band spectrum center of gravity:
d) Wavelet-packet sample-entropy feature
The sample entropy of each band is computed as follows:
s1. Given a pattern dimension d = N/10, form the d-dimensional vectors of each band sequence:
X_i(m) = [x_i(m), x_i(m+1), …, x_i(m+d−1)], m = 1, 2, …, N−d+1 (10)
s2. Compute the distance between X_i(m) and X_i(g):
L_i(m, g) = max_b |x_i(m+b) − x_i(g+b)|, b = 1, 2, …, d−1, m ≠ g (11)
s3. Given a threshold r, count for each m the number P of distances satisfying L_i(m, g) < r and compute B_m^d(r) = P / (N−d); then take the average over all m:
B^d(r) = (1/(N−d+1)) Σ_m B_m^d(r)
s4. Let d = d+1 and repeat steps s1 to s3 to obtain B^{d+1}(r).
s5. The sample entropy is then computed as S_i = −ln(B^{d+1}(r) / B^d(r)).
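Steps s1 to s5 above can be sketched in Python as follows. The pattern dimension d and the threshold r are taken as parameters, since the text fixes only d = N/10; choosing r as a fraction of the band's spread is a common convention and an assumption here, not from the patent.

```python
import numpy as np

def sample_entropy(x, d, r):
    """Sample entropy of a 1-D band signal, following steps s1-s5.

    d is the pattern (template) dimension, r the tolerance threshold.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)

    def match_fraction(dim):
        # s1: form the overlapping templates of length `dim`.
        templates = np.array([x[m:m + dim] for m in range(N - dim + 1)])
        count, total = 0, 0
        for m in range(len(templates)):
            for g in range(len(templates)):
                if m == g:
                    continue
                # s2: Chebyshev distance between templates m and g (eq. 11).
                if np.max(np.abs(templates[m] - templates[g])) < r:
                    count += 1          # s3: count matches below r
                total += 1
        return count / total            # pooled average over all m

    bd = match_fraction(d)              # B^d(r)
    bd1 = match_fraction(d + 1)         # s4: B^{d+1}(r)
    return -np.log(bd1 / bd)            # s5: sample entropy
```

A regular (periodic) band signal yields a lower sample entropy than a noise-like one, which is what makes it useful as a target feature.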
The 4 kinds of feature quantities are combined into a 32-dimensional wavelet packet feature vector: Fw = (E_1, E_2, …, E_8, V_1, V_2, …, V_8, C_1, C_2, …, C_8, S_1, S_2, …, S_8).
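A dependency-light sketch of features a) to c) and the assembly of Fw. The 8 reconstructed band signals are taken as given (a library such as PyWavelets could produce them from the 3-level sym6 decomposition of steps 1 and 2); the sample-entropy slot S_i is filled with a simple placeholder complexity measure so the sketch stays self-contained, whereas in the method it is the value from steps s1 to s5.

```python
import numpy as np

def wavelet_packet_features(bands, fs):
    """Assemble the 32-dim vector Fw = (E1..E8, V1..V8, C1..C8, S1..S8)
    from 8 reconstructed band signals (step 3).  S_i here is a placeholder
    standing in for the wavelet-packet sample entropy."""
    E, V, C, S = [], [], [], []
    for x in bands:
        x = np.asarray(x, dtype=float)
        N = len(x)
        E.append(np.sum(x ** 2))                    # a) band energy
        V.append(float(np.std(x)))                  # b) standard deviation
        P = np.abs(np.fft.rfft(x)) ** 2 / N         # band power spectrum
        f = np.fft.rfftfreq(N, d=1.0 / fs)          # f(k) = k*fs/N
        C.append(float(np.sum(f * P) / np.sum(P)))  # c) spectral centroid
        S.append(float(np.std(np.diff(x))))         # d) placeholder for
                                                    #    the sample entropy
    E = np.array(E) / np.sum(E)                     # energy normalization
    return np.concatenate([E, np.array(V), np.array(C), np.array(S)])
```

For pure tones of increasing frequency the spectral-centroid slots increase band by band, and the normalized energies sum to 1, matching the definitions above.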
Four, when the feature is a motion feature, extracting the feature of the target includes: performing tracking prediction on the azimuth of the target and its rate of change, obtaining the dynamic track feature of the target.
After the microphone array receives the target's acoustical signal, the azimuth of the target relative to the array can be computed by a direction-finding algorithm, and the azimuth rate of the target relative to the array can be derived from the target azimuths at successive instants; the reliably observable target information therefore comprises the target azimuth and its rate of change. Modeling with the Kalman filter method, tracking prediction is performed on the target azimuth and its rate of change, so that the dynamic track feature of the target is obtained.
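A minimal sketch of the Kalman-filter modeling described above, assuming a constant-velocity model over azimuth; the state is [azimuth, azimuth rate], and the noise levels q and r are illustrative assumptions, not values from the patent.

```python
import numpy as np

def kalman_track(bearings, dt, q=1e-4, r=1.0):
    """Track target bearing (deg) and bearing rate (deg/s) from a
    sequence of measured bearings, one per dt seconds."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity transition
    H = np.array([[1.0, 0.0]])                 # only azimuth is measured
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],
                      [dt ** 2 / 2, dt]])      # process noise
    R = np.array([[r]])                        # measurement noise
    x = np.array([bearings[0], 0.0])           # initial state
    P = np.eye(2)
    track = []
    for z in bearings[1:]:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + (K @ y).ravel()                # update
        P = (np.eye(2) - K @ H) @ P
        track.append(x.copy())
    return np.array(track)                     # azimuth + rate per step
```

On a steadily turning target the filter converges to the true azimuth rate, which is the dynamic track feature fed to the later track-association step.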
104. Determine the type confidence of the target according to the feature of the acoustical signal, the feature vector corresponding to the feature of the acoustical signal, and the feature of the target.
Determining the type confidence of the target according to the feature of the acoustical signal includes: using template-matching technology, pre-establishing a characteristic-frequency library from the line-spectrum frequency feature, extracting the characteristic frequency of the acoustical signal in real time, and determining the first type confidence of the target according to how the extracted characteristic frequency matches the characteristic-frequency library.
Specifically, as shown in Fig. 4, the sound radiated by most low-flying propeller targets has a distinct line-spectrum fundamental frequency. The sound of these targets under various flight states is collected and analyzed in advance, the frequency characteristics under the various states are summarized, and a corresponding characteristic-frequency library is established as the template for identifying these targets. In practical application, the characteristic frequency of the target sound is extracted in real time; if a characteristic frequency is present, it is compared with the characteristic-frequency library. If it can be matched well, a high confidence is output for that target type; if no match is found in the frequency library, a low confidence is output for that target type.
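The template-matching idea can be sketched as follows; the library contents, the tolerance, and the use of a hit fraction as the confidence are illustrative assumptions, since the patent does not specify the matching rule.

```python
def match_confidence(measured, library, tol=2.0):
    """Match measured line-spectrum frequencies (Hz) against a
    pre-built characteristic-frequency library.

    `library` maps a target type to its characteristic frequencies;
    the fraction of measured lines falling within `tol` Hz of some
    library entry serves as that type's confidence."""
    scores = {}
    for target, freqs in library.items():
        hits = sum(
            any(abs(f - g) <= tol for g in freqs) for f in measured
        )
        scores[target] = hits / len(measured) if measured else 0.0
    return scores
```

For example, a measured set of lines close to a stored propeller template scores high for that type and low for an unrelated one.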
When the type confidence of the target is determined according to the feature vector corresponding to the feature of the acoustical signal, specifically, the sub-neural-network design technique combined with the feature vector corresponding to the feature of the acoustical signal can be used to determine the type confidence of the target.
The numbers of input and output layers of the time-domain-feature neural network are determined respectively from the dimension of the time-domain feature vector and the number of target classes. For example, when the targets comprise propeller targets, powered paragliders, cruise missiles, jet targets, other targets, and background noise, 6 target classes in all, the number of output-layer neurons is set to 6 according to the number of target classes. The number of hidden-layer neurons is obtained from the empirical formula n_k = sqrt(n_i + n_o) + a, where n_k is the number of hidden neurons, n_i the number of input neurons, n_o the number of output neurons, and a a constant between 1 and 10; in practice the hidden-neuron count is further tuned according to repeated test results. The input layer uses a constant-value transfer function equal to 1, the hidden layer a hyperbolic transfer function, and the output layer a proportional (linear) transfer function.
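The empirical sizing rule can be evaluated directly; the sweep over a = 1..10 below follows the stated range of the constant a, and the reconstruction n_k = sqrt(n_i + n_o) + a is itself an assumption, as the formula image is lost from the text.

```python
import math

def hidden_range(n_i, n_o, a_min=1, a_max=10):
    """Candidate hidden-layer sizes from the empirical rule
    n_k = sqrt(n_i + n_o) + a, with the constant a swept over its
    stated range; the final choice is then refined by trials."""
    base = math.sqrt(n_i + n_o)
    return [round(base + a) for a in range(a_min, a_max + 1)]
```

For the 40-input, 6-output case this gives candidates from 8 to 17; note the 26 hidden neurons used in the embodiments lie above this range, which is consistent with the text's remark that the count is also tuned by repeated tests.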
It can be understood that determining the type confidence of the target using the sub-neural-network design technique combined with the feature vectors of the acoustical signal comprises the following:
One, the numbers of input and output layers of the time-domain-feature sub-neural network are determined respectively from the dimension of the time-domain feature vector and the number of target classes, and the second type confidence of the target is determined by the time-domain-feature sub-neural network.
It can be understood that when the time-domain feature vector has 40 dimensions, the input layer is designed with 40 constant-value transfer functions, the middle layer with 26 hyperbolic-function neurons according to the empirical formula and actual test results, and the output layer with 6 neurons.
Two, the numbers of input and output layers of the frequency-domain-feature sub-neural network are determined respectively from the dimension of the frequency-domain feature vector and the number of target classes, and the third type confidence of the target is determined by the frequency-domain-feature sub-neural network.
It can be understood that when the frequency-domain feature vector has 40 dimensions, the input layer is designed with 40 constant-value transfer functions, the middle layer with 26 hyperbolic-function neurons according to the empirical formula and actual test results, and the output layer with 6 neurons.
Three, the numbers of input and output layers of the wavelet-packet-feature sub-neural network are determined respectively from the dimension of the wavelet packet feature vector and the number of target classes, and the fourth type confidence of the target is determined by the wavelet-packet-feature sub-neural network.
It can be understood that when the wavelet packet feature vector has 32 dimensions, the input layer is designed with 32 constant-value transfer functions, the middle layer with 26 hyperbolic-function neurons according to the empirical formula and actual test results, and the output layer with 6 neurons.
The output of each of the above sub-neural networks can serve as the confidence of the corresponding target types.
In this implementation, determining the type confidence of the target according to the feature of the target includes: associating the track with the dynamic track feature by track-association identification technology to determine the fifth type confidence of the target.
It can be understood that, according to the result of the motion-feature analysis, if the currently measured angle and angular rate match the angle and angular rate output by the prediction algorithm, the current target is the same target as at the previous instant or during the preceding period, its type is consistent with the type of the target at the previous instant, and a high confidence is output for that target type; otherwise a low confidence is output.
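The gate test behind this fifth confidence can be sketched as follows; the tolerances and the 0.9/0.1 confidence levels are illustrative assumptions, as the patent only says "high" and "low".

```python
def association_confidence(meas_az, meas_rate, pred_az, pred_rate,
                           az_tol=3.0, rate_tol=1.0):
    """If the measured azimuth and azimuth rate match the tracker's
    prediction within tolerance, the detection is treated as the same
    target as before and a high confidence is returned; otherwise a
    low one."""
    same_track = (abs(meas_az - pred_az) <= az_tol
                  and abs(meas_rate - pred_rate) <= rate_tol)
    return 0.9 if same_track else 0.1
```

A detection near the predicted state thus inherits the previous target's type with high confidence, while a far-off detection does not.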
Optionally, in this embodiment of the present invention, before step 104 the method can further include: performing global optimization on the neural-network weight coefficients using an identification and classification method that combines a genetic algorithm with the neural network.
Specifically, the genetic algorithm is combined with the neural network by taking the global cost function of error back-propagation as the fitness function of the genetic algorithm and performing global optimization on the neural-network weight coefficients; the neural network so trained generalizes better. The specific steps are as follows:
1) Arrange the neural-network weight coefficients into weight matrices, namely a hidden-layer weight matrix and an output-layer weight matrix.
2) Initialize the weight matrices according to their dimensions to obtain an initial population of scale M, and set the evolution generation g = 0.
3) Feed each chromosome in the population into the neural network, compute the output of each chromosome by the network's forward pass, and evaluate the fitness function to obtain the fitness value F of each chromosome in the population.
4) Copy the individual with the best fitness in the current population directly into the next-generation population, and at the same time directly eliminate the chromosome with the worst fitness.
5) From the remaining individuals of the population, select M individuals by roulette-wheel proportional selection to form the new generation.
6) In the new generation, perform single-point crossover with a certain probability and update the individuals in the population.
7) Mutate the individuals in the population with a certain probability and update them.
8) Compute the fitness values of the individuals in the population and judge whether the best fitness meets the precision requirement, or whether the maximum number of generations has been reached; if either condition is met, terminate, otherwise return to step 3). By setting a suitable precision condition and number of generations, the globally optimal neural-network weights are obtained.
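Steps 1) to 8) can be sketched as a generic genetic algorithm over a flat weight vector; here an arbitrary `cost` callable stands in for the back-propagation global cost, fitness is taken as 1/(1+cost), and the population size, generation count, and probabilities are illustrative, not from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_optimize(cost, dim, pop_size=30, gens=40, p_cross=0.7, p_mut=0.05):
    """GA with elitism, roulette-wheel selection, single-point
    crossover and mutation, returning the best weight vector and the
    per-generation best-cost history."""
    pop = rng.normal(size=(pop_size, dim))            # step 2: init population
    history = []
    for g in range(gens):
        costs = np.array([cost(c) for c in pop])      # step 3: forward pass
        fit = 1.0 / (1.0 + costs)                     #         and fitness
        order = np.argsort(fit)
        elite = pop[order[-1]].copy()                 # step 4: keep the best,
        history.append(costs[order[-1]])
        parents, pfit = pop[order[1:]], fit[order[1:]]  #       drop the worst
        idx = rng.choice(len(parents), size=pop_size - 1,
                         p=pfit / pfit.sum())         # step 5: roulette wheel
        pop = parents[idx].copy()
        for i in range(0, pop_size - 2, 2):           # step 6: 1-pt crossover
            if rng.random() < p_cross:
                cut = int(rng.integers(1, dim))
                pop[[i, i + 1], cut:] = pop[[i + 1, i], cut:]
        mask = rng.random(pop.shape) < p_mut          # step 7: mutation
        pop[mask] += rng.normal(scale=0.3, size=int(mask.sum()))
        pop = np.vstack([elite, pop])                 # elitism preserved
    costs = np.array([cost(c) for c in pop])          # step 8: final best
    history.append(costs.min())
    return pop[np.argmin(costs)], np.array(history)
```

Because the elite chromosome is carried into each new generation unmutated, the best cost in the history never increases, which is the monotone-improvement property step 4) provides.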
105. Identify the type confidence by fusion identification technology to determine the type of the target.
The first, second, third, fourth, and fifth type confidences of step 104 each output 6 class confidences, which can be expressed as B_i = [b_i1, b_i2, …, b_i6], i = 1, 2, …, 5, where B_i denotes the confidence vector output by the i-th method, α_i the weight applied to the confidence vector output by the i-th method, and b_ij the confidence that the i-th method assigns to target class j. The weights designed through extensive experimental verification are α = [0.5 0.8 0.6 0.6 0.8]; the total confidence of target class j is then B_Tj = Σ_{i=1..5} α_i·b_ij.
Finally, the target type corresponding to the maximum of the total confidences B_Tj, j = 1, 2, …, 6, is output as the result of the overall algorithm.
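The fusion rule of step 105 in code form; the confidence matrix in the usage below is made up for illustration, and only the weight vector comes from the text (indices are 0-based here).

```python
import numpy as np

def fuse(B, alpha):
    """Weighted fusion of the five confidence vectors: the total
    confidence of class j is B_Tj = sum_i alpha_i * b_ij, and the
    class with the largest total is the output type."""
    B = np.asarray(B, dtype=float)          # 5 x 6: methods x classes
    alpha = np.asarray(alpha, dtype=float)  # per-method weights
    total = alpha @ B                       # B_Tj for each class j
    return int(np.argmax(total)), total
```

For instance, if every method favors the third class, the fused total also selects it.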
In this embodiment of the present invention, the acoustical signal of a target is collected by a planar acoustic array, the target comprising any one of a low-flying unmanned aerial vehicle, a light aircraft, a powered delta-wing, a cruise missile, and a powered paraglider; direction estimation is performed on the acoustical signal, and spatial filtering is applied to the direction-estimated signal; after spatial filtering, the feature of the acoustical signal, the feature vector corresponding to the feature of the acoustical signal, and the feature of the target are extracted; the type confidence of the target is determined according to these, and the type confidence is identified by fusion identification technology to determine the type of the target. By using the feature of the acoustical signal, the feature vector corresponding to that feature, and the feature of the target to determine the type confidence of the target, and then identifying that confidence by fusion identification technology, the embodiment of the present invention effectively improves the accuracy of target type discrimination.
An embodiment of the target type discrimination device in the embodiment of the present invention is described below.
Referring to Fig. 5, one embodiment of the target type discrimination device 500 in the embodiment of the present invention includes:
an acquisition module 501, configured to collect the acoustical signal of a target by a planar acoustic array, the target comprising any one of a low-flying unmanned aerial vehicle, a light aircraft, a powered delta-wing, a cruise missile, and a powered paraglider;
a first processing module 502, configured to perform direction estimation on the acoustical signal and apply spatial filtering to the direction-estimated signal;
an extraction module 503, configured to extract, after the spatial filtering, the feature of the acoustical signal, the feature vector corresponding to the feature of the acoustical signal, and the feature of the target;
a first determining module 504, configured to determine the type confidence of the target according to the feature of the acoustical signal, the feature vector corresponding to the feature of the acoustical signal, and the feature of the target;
an identification module 505, configured to identify the type confidence by fusion identification technology to determine the type of the target.
In this embodiment of the present invention, the acquisition module 501 collects the acoustical signal of the target by a planar acoustic array, the target comprising any one of a low-flying unmanned aerial vehicle, a light aircraft, a powered delta-wing, a cruise missile, and a powered paraglider; the first processing module 502 performs direction estimation on the acoustical signal and applies spatial filtering to the direction-estimated signal; after the spatial filtering, the extraction module 503 extracts the feature of the acoustical signal, the feature vector corresponding to the feature of the acoustical signal, and the feature of the target; the first determining module 504 determines the type confidence of the target according to the feature of the acoustical signal, the feature vector corresponding to the feature of the acoustical signal, and the feature of the target; and the identification module 505 identifies the type confidence by fusion identification technology to determine the type of the target. By using the feature of the acoustical signal, the feature vector corresponding to the feature of the acoustical signal, and the feature of the target to determine the type confidence of the target, and then identifying that confidence by fusion identification technology, the embodiment effectively improves the accuracy of target type discrimination.
Optionally, the device 500 can further include a second processing module 506, configured to perform adaptive noise suppression on the acoustical signal after the acoustical signal of the target is collected by the planar acoustic array and before direction estimation is performed on the acoustical signal and spatial filtering is applied to the direction-estimated signal.
Optionally, the second processing module 506 is specifically configured to perform suppression on the acoustical signal using an adaptive noise suppression technique based on wavelet decomposition.
Optionally, the device further includes:
a second determining module 507, configured to determine, after the direction of the acoustical signal is estimated, the track of the target from the direction of the acoustical signal at each instant.
Optionally, the extraction module 503 is specifically configured to: analyze the frequency-domain feature by a helicopter line-spectrum and harmonic-set detection algorithm when the sound-power spectral line of the radiated-spectrum fundamental frequency is greater than a threshold, to obtain a line-spectrum frequency feature; calculate auto-correlation coefficients and cepstrum coefficients; combine the auto-correlation coefficients and the cepstrum coefficients to obtain a time-domain feature vector of several dimensions; perform non-parametric power spectrum analysis to compute the power spectrum; obtain a frequency-domain feature vector of several dimensions from the computed power spectrum; calculate the energy feature, standard-deviation feature, spectral-centroid feature, and wavelet-packet sample-entropy feature of each frequency band of the acoustical signal; combine the energy feature, standard-deviation feature, spectral-centroid feature, and wavelet-packet sample-entropy feature to obtain a wavelet packet feature quantity of several dimensions; and perform tracking prediction on the azimuth of the target and its rate of change to obtain the dynamic track feature of the target.
Optionally, the determining module 504 is specifically configured to: pre-establish a characteristic-frequency library from the line-spectrum frequency feature; extract the characteristic frequency of the acoustical signal in real time; determine the first type confidence of the target according to how the extracted characteristic frequency matches the characteristic-frequency library; determine the numbers of input and output layers of the time-domain-feature sub-neural network respectively from the dimension of the time-domain feature vector and the number of target classes; determine the second type confidence of the target by the time-domain-feature sub-neural network; determine the numbers of input and output layers of the frequency-domain-feature sub-neural network respectively from the dimension of the frequency-domain feature vector and the number of target classes; determine the third type confidence of the target by the frequency-domain-feature sub-neural network; determine the numbers of input and output layers of the wavelet-packet-feature sub-neural network respectively from the dimension of the wavelet packet feature vector and the number of target classes; determine the fourth type confidence of the target by the wavelet-packet-feature sub-neural network; and associate the track with the dynamic track feature by track-association identification technology to determine the fifth type confidence of the target.
Optionally, the device further includes a global optimization module 508, configured to perform global optimization on the neural-network weight coefficients, before the type confidence is identified by fusion identification technology to determine the type of the target, using an identification and classification method that combines a genetic algorithm with the neural network.
Optionally, the acquisition module is specifically configured to collect, when the target is a low-altitude low-speed small target, the acoustical signal of the target by a cross-shaped acoustic array, the cross-shaped acoustic array comprising 12 acoustic elements and 4 mutually perpendicular horizontal bars.
It is apparent to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific working processes of the systems, devices, and units described above; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into units is only a division by logical function, and in actual implementation there may be other divisions; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods of the various embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the various embodiments of the present invention.
Claims (9)
1. A method of target type discrimination, characterized by comprising:
collecting an acoustical signal of a target by a planar acoustic array, the target comprising any one of a low-flying unmanned aerial vehicle, a light aircraft, a powered delta-wing, a cruise missile, and a powered paraglider;
performing direction estimation on the acoustical signal, and applying spatial filtering to the direction-estimated acoustical signal;
after the spatial filtering, extracting a feature of the acoustical signal, a feature vector corresponding to the feature of the acoustical signal, and a feature of the target;
determining a type confidence of the target according to the feature of the acoustical signal, the feature vector corresponding to the feature of the acoustical signal, and the feature of the target; and
identifying the type confidence by fusion identification technology to determine the type of the target;
wherein extracting the feature of the acoustical signal comprises:
analyzing a frequency-domain feature by a helicopter line-spectrum and harmonic-set detection algorithm when a sound-power spectral line of a radiated-spectrum fundamental frequency is greater than a threshold, to obtain a line-spectrum frequency feature;
extracting the feature vector corresponding to the feature of the acoustical signal comprises:
calculating auto-correlation coefficients and cepstrum coefficients;
combining the auto-correlation coefficients and the cepstrum coefficients to obtain a time-domain feature vector of several dimensions;
performing non-parametric power spectrum analysis to compute a power spectrum;
obtaining a frequency-domain feature vector of several dimensions from the computed power spectrum;
calculating an energy feature, a standard-deviation feature, a spectral-centroid feature, and a wavelet-packet sample-entropy feature of each frequency band of the acoustical signal; and
combining the energy feature, the standard-deviation feature, the spectral-centroid feature, and the wavelet-packet sample-entropy feature to obtain a wavelet packet feature quantity of several dimensions; and
extracting the feature of the target comprises:
performing tracking prediction on an azimuth of the target and its rate of change to obtain a dynamic track feature of the target.
2. The method according to claim 1, characterized in that, after the acoustical signal of the target is collected by the planar acoustic array, and before direction estimation is performed on the acoustical signal and spatial filtering is applied to the direction-estimated acoustical signal, the method further comprises:
performing adaptive noise suppression on the acoustical signal.
3. The method according to claim 2, characterized in that performing adaptive noise suppression on the acoustical signal comprises:
performing suppression on the acoustical signal using an adaptive noise suppression technique based on wavelet decomposition.
4. The method according to claim 1, characterized by further comprising:
after the direction of the acoustical signal is estimated, determining the track of the target from the direction of the acoustical signal at each instant.
5. The method according to claim 1, characterized in that
determining the type confidence of the target according to the feature of the acoustical signal comprises:
pre-establishing a characteristic-frequency library from the line-spectrum frequency feature;
extracting the characteristic frequency of the acoustical signal in real time; and
determining a first type confidence of the target according to how the extracted characteristic frequency matches the characteristic-frequency library;
determining the type confidence of the target according to the feature vector corresponding to the feature of the acoustical signal comprises:
determining the numbers of input and output layers of a time-domain-feature sub-neural network respectively from the dimension of the time-domain feature vector and the number of target classes;
determining a second type confidence of the target by the time-domain-feature sub-neural network;
determining the numbers of input and output layers of a frequency-domain-feature sub-neural network respectively from the dimension of the frequency-domain feature vector and the number of target classes;
determining a third type confidence of the target by the frequency-domain-feature sub-neural network;
determining the numbers of input and output layers of a wavelet-packet-feature sub-neural network respectively from the dimension of the wavelet packet feature vector and the number of target classes; and
determining a fourth type confidence of the target by the wavelet-packet-feature sub-neural network; and
determining the type confidence of the target according to the feature of the target comprises:
associating the track with the dynamic track feature by track-association identification technology to determine a fifth type confidence of the target.
6. The method according to claim 5, characterized in that identifying the type confidence by fusion identification technology to determine the type of the target comprises:
calculating a total type confidence using the first type confidence, the second type confidence, the third type confidence, the fourth type confidence, the fifth type confidence, and weight values, and choosing the type corresponding to the maximum value as the type of the target.
7. The method according to claim 5, characterized in that, before the type confidence is identified by fusion identification technology to determine the type of the target, the method further comprises:
performing global optimization on the neural-network weight coefficients using an identification and classification method that combines a genetic algorithm with the neural network.
8. The method according to any one of claims 1 to 6, characterized in that, when the target is a low-altitude low-speed small target, the acoustical signal of the target is collected by a cross-shaped acoustic array, the cross-shaped acoustic array comprising 12 acoustic elements and 4 mutually perpendicular horizontal bars.
9. A target type discrimination device, characterized by comprising:
an acquisition module, configured to collect an acoustical signal of a target by a planar acoustic array, the target comprising any one of a low-flying unmanned aerial vehicle, a light aircraft, a powered delta-wing, a cruise missile, and a powered paraglider;
a first processing module, configured to perform direction estimation on the acoustical signal and apply spatial filtering to the direction-estimated acoustical signal;
an extraction module, configured to extract, after the spatial filtering, a feature of the acoustical signal, a feature vector corresponding to the feature of the acoustical signal, and a feature of the target;
a first determining module, configured to determine a type confidence of the target according to the feature of the acoustical signal, the feature vector corresponding to the feature of the acoustical signal, and the feature of the target; and
an identification module, configured to identify the type confidence by fusion identification technology to determine the type of the target;
wherein extracting the feature of the acoustical signal comprises:
analyzing a frequency-domain feature by a helicopter line-spectrum and harmonic-set detection algorithm when a sound-power spectral line of a radiated-spectrum fundamental frequency is greater than a threshold, to obtain a line-spectrum frequency feature;
extracting the feature vector corresponding to the feature of the acoustical signal comprises:
calculating auto-correlation coefficients and cepstrum coefficients;
combining the auto-correlation coefficients and the cepstrum coefficients to obtain a time-domain feature vector of several dimensions;
performing non-parametric power spectrum analysis to compute a power spectrum;
obtaining a frequency-domain feature vector of several dimensions from the computed power spectrum;
calculating an energy feature, a standard-deviation feature, a spectral-centroid feature, and a wavelet-packet sample-entropy feature of each frequency band of the acoustical signal; and
combining the energy feature, the standard-deviation feature, the spectral-centroid feature, and the wavelet-packet sample-entropy feature to obtain a wavelet packet feature quantity of several dimensions; and
extracting the feature of the target comprises:
performing tracking prediction on an azimuth of the target and its rate of change to obtain a dynamic track feature of the target.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510884182.8A CN105550636B (en) | 2015-12-04 | 2015-12-04 | A kind of method and device of target type discrimination |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105550636A CN105550636A (en) | 2016-05-04 |
CN105550636B true CN105550636B (en) | 2019-03-01 |
Family
ID=55829819
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510884182.8A Active CN105550636B (en) | 2015-12-04 | 2015-12-04 | A kind of method and device of target type discrimination |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105550636B (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106157952B (en) * | 2016-08-30 | 2019-09-17 | 北京小米移动软件有限公司 | Sound identification method and device |
CN106842179B (en) * | 2016-12-23 | 2019-11-26 | 成都赫尔墨斯科技股份有限公司 | A kind of anti-UAV system based on acoustic detection |
CN106873626B (en) * | 2017-03-31 | 2020-07-03 | 芜湖博高光电科技股份有限公司 | Passive positioning and searching system |
CN108965789B (en) * | 2017-05-17 | 2021-03-12 | 杭州海康威视数字技术股份有限公司 | Unmanned aerial vehicle monitoring method and audio-video linkage device |
CN110832408B (en) * | 2017-07-03 | 2022-03-25 | 深圳市大疆创新科技有限公司 | Neural network based image target tracking by aircraft |
CA3080399A1 (en) | 2017-10-30 | 2019-05-09 | The Research Foundation For The State University Of New York | System and method associated with user authentication based on an acoustic-based echo-signature |
CN108280395B (en) * | 2017-12-22 | 2021-12-17 | 中国电子科技集团公司第三十研究所 | Efficient identification method for flight control signals of low-small-slow unmanned aerial vehicle |
CN110826583A (en) * | 2018-08-14 | 2020-02-21 | 珠海格力电器股份有限公司 | Fault determination method and device, storage medium and electronic device |
CN109270520B (en) * | 2018-10-18 | 2020-05-19 | 四川九洲空管科技有限责任公司 | Processing method for acquiring secondary radar response target identity code based on amplitude information |
CN109375204B (en) * | 2018-10-26 | 2021-04-13 | 中电科思仪科技股份有限公司 | Target detection method, system, equipment and medium based on radar |
CN109658944B (en) * | 2018-12-14 | 2020-08-07 | 中国电子科技集团公司第三研究所 | Helicopter acoustic signal enhancement method and device |
CN109864740B (en) * | 2018-12-25 | 2022-02-01 | 北京津发科技股份有限公司 | Surface electromyogram signal acquisition sensor and equipment in motion state |
CN110084094B (en) * | 2019-03-06 | 2021-07-23 | 中国电子科技集团公司第三十八研究所 | Unmanned aerial vehicle target identification and classification method based on deep learning |
CN110020685A (en) * | 2019-04-09 | 2019-07-16 | 山东超越数控电子股份有限公司 | A kind of preprocess method based on adaptive-filtering and limited Boltzmann machine, terminal and readable storage medium storing program for executing |
CN110209993B (en) * | 2019-06-17 | 2023-05-05 | 中国电子科技集团公司信息科学研究院 | Information extraction method and system for detection target |
CN110390949B (en) * | 2019-07-22 | 2021-06-15 | 苏州大学 | Underwater sound target intelligent identification method based on big data |
CN111626093B (en) * | 2020-03-27 | 2023-12-26 | 国网江西省电力有限公司电力科学研究院 | Method for identifying related bird species of power transmission line based on sound power spectral density |
CN113420743A (en) * | 2021-08-25 | 2021-09-21 | 南京隼眼电子科技有限公司 | Radar-based target classification method, system and storage medium |
CN114113837B (en) * | 2021-11-15 | 2024-04-30 | 国网辽宁省电力有限公司朝阳供电公司 | Transformer live detection method and system based on acoustic characteristics |
CN114999529B (en) * | 2022-08-05 | 2022-11-01 | 中国民航大学 | Airplane type classification method for airport aviation noise |
CN116150594B (en) * | 2023-04-18 | 2023-07-07 | 长鹰恒容电磁科技(成都)有限公司 | Method for identifying switch element characteristics in spectrum test data |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7286703B2 (en) * | 2004-09-30 | 2007-10-23 | Fujifilm Corporation | Image correction apparatus, method and program |
CN102928435A (en) * | 2012-10-15 | 2013-02-13 | 南京航空航天大学 | Aircraft skin damage identification method and device based on image and ultrasound information fusion |
CN103245524A (en) * | 2013-05-24 | 2013-08-14 | 南京大学 | Acoustic fault diagnosis method based on neural network |
CN103679746A (en) * | 2012-09-24 | 2014-03-26 | 中国航天科工集团第二研究院二O七所 | object tracking method based on multi-information fusion |
- 2015-12-04: Application CN201510884182.8A filed (CN); patent CN105550636B, status Active
Non-Patent Citations (4)
Title |
---|
Acoustic signal feature extraction method based on wavelet packet decomposition; Fan Haining et al.; Modern Electronics Technique; April 2005 (No. 4); pp. 20-21, 28 |
Acoustic recognition of low-altitude flying targets based on support vector machines; Chen Huhu et al.; Systems Engineering and Electronics; January 2005; Vol. 27, No. 1; pp. 46-48 |
Application of acoustic detection technology in low-altitude airspace opening; Zhang Xuelei et al.; Audio Engineering; 2015-01-12; Vol. 39, No. 1; Sections 1-2 |
Feature extraction and recognition of aircraft acoustic signals; Qian Hanming et al.; Journal of Detection & Control; September 2003; Vol. 25, No. 3; Sections 1-3, abstract, Figures 2 and 5 |
Also Published As
Publication number | Publication date |
---|---|
CN105550636A (en) | 2016-05-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105550636B (en) | A kind of method and device of target type discrimination | |
Chen et al. | False-alarm-controllable radar detection for marine target based on multi features fusion via CNNs | |
Chang et al. | Learning representations of emotional speech with deep convolutional generative adversarial networks | |
Mosavi et al. | Multi-layer perceptron neural network utilizing adaptive best-mass gravitational search algorithm to classify sonar dataset | |
Shi et al. | Human activity recognition based on deep learning method | |
Regev et al. | Classification of single and multi propelled miniature drones using multilayer perceptron artificial neural network | |
CN107315996A | An underwater ship-radiated noise feature extraction method based on IMF energy entropy and PCA |
Darzikolaei et al. | Classification of radar clutters with artificial neural network | |
KR20190019713A (en) | System and method for classifying based on support vector machine for uav sound identification | |
CN104732970A (en) | Ship radiation noise recognition method based on comprehensive features | |
Himawan et al. | Deep Learning Techniques for Koala Activity Detection. | |
CN109977724A (en) | A kind of Underwater Target Classification method | |
Li et al. | Underwater target classification using deep learning | |
CN103116740B | An underwater target recognition method and device | |
CN106814351B | Aircraft target classification method based on third-order LPC technique | |
Ganchev et al. | Automatic acoustic identification of singing insects | |
CN103994820A (en) | Moving target identification method based on micro-aperture microphone array | |
CN113640768B (en) | Low-resolution radar target identification method based on wavelet transformation | |
CN112882012B (en) | Radar target noise robust identification method based on signal-to-noise ratio matching and echo enhancement | |
Mohankumar et al. | Implementation of an underwater target classifier using higher order spectral features | |
Farrokhrooz et al. | Ship noise classification using probabilistic neural network and AR model coefficients | |
KR101251373B1 (en) | Sound classification apparatus and method thereof | |
Brooks et al. | Drone recognition by micro-doppler and kinematic | |
CN109473112B (en) | Pulse voiceprint recognition method and device, electronic equipment and storage medium | |
Vaz et al. | Marine acoustic signature recognition using convolutional neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||